
Decoding video with 6800

jim1976

Platinum Member
It's been some time since the NV40 arrived, and the drivers that promised us decoding inside the GPU still aren't here.
Does anybody know anything about it?
Also I have a question in order to understand how it works.
I know that doing the decoding inside the GPU is faster than leaving it to the CPU, but how does this work exactly, since GPU clock speeds are significantly lower than CPU speeds?
Thanks
 
They are faster pretty easily. For starters, they are built solely to run 3D games, and now to encode/decode video; CPUs are built to do everything possible.

2. Take a 6800 Ultra: it's a 400 MHz core with 16 pipelines.
Take a Pentium 4 at 3.2 GHz: it's only got 1 pipeline.

16 × 400 MHz would be roughly the same as a 6.4 GHz CPU with 1 pipeline.
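The back-of-the-envelope math above can be sketched in a few lines (using the numbers from this post, and assuming, as a simplification, that each pipeline retires one operation per clock):

```python
# Rough throughput comparison, assuming one operation per pipeline
# per clock cycle (a big simplification of real hardware).
gpu_pipelines = 16
gpu_clock_mhz = 400
cpu_pipelines = 1
cpu_clock_mhz = 3200

# Millions of operations per second, under that assumption.
gpu_ops = gpu_pipelines * gpu_clock_mhz
cpu_ops = cpu_pipelines * cpu_clock_mhz

print(gpu_ops)  # 6400 -> the "6.4 GHz equivalent" in the post
print(cpu_ops)  # 3200
```

This only holds for work that can keep all 16 pipelines busy at once, which is the whole point of the comparison.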
 
Originally posted by: otispunkmeyer
They are faster pretty easily. For starters, they are built solely to run 3D games, and now to encode/decode video; CPUs are built to do everything possible.

2. Take a 6800 Ultra: it's a 400 MHz core with 16 pipelines.
Take a Pentium 4 at 3.2 GHz: it's only got 1 pipeline.

16 × 400 MHz would be roughly the same as a 6.4 GHz CPU with 1 pipeline.

I don't get it. I thought pipelines were only used for faster image rendering, so how does multiplying core speed by the number of pipelines give such a "theoretical" total speed?


 
Pipelines are like execution units; the latest and greatest from NVIDIA and ATI have 16 pipelines built in.

CPUs are fully programmable but only have one pipeline at a very, very high clock speed. This is more beneficial than having more pipelines on a CPU because the tasks they do are NOT highly parallel, as they are in graphics rendering and video encoding/decoding.

It's the same reason 16 Xeons at 500 MHz can't beat a 3.2 GHz Pentium 4 (unless the task is ridiculously simple and repetitive).
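One rough way to see why is Amdahl's law: the serial part of a program gets no benefit from extra processors. A small sketch, with a made-up assumption that half the work parallelizes:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup is limited by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Hypothetical numbers: assume only half the task parallelizes.
# "Effective MHz" = clock speed * speedup from parallelism.
xeons = 500 * amdahl_speedup(0.5, 16)   # 16 Xeons at 500 MHz
p4 = 3200 * amdahl_speedup(0.5, 1)      # one P4 at 3.2 GHz

print(round(xeons))  # 941 effective MHz
print(round(p4))     # 3200 effective MHz
```

Only when the parallel fraction approaches 1 (the "ridiculously simple and repetitive" case) do the 16 slow chips pull ahead.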
 
Thanks, it makes more sense now. So when we're talking about theoretical max fill rate, we actually mean the "speed" of the GPU?
Also does anybody have a good site to recommend that explains gpu architecture?
 
Actually, as of the 65.32 drivers the video encoder is now enabled. I believe Guru3D ran some benchmarks and it really helps A LOT!

So any driver after that should have it enabled.

-Kevin
 
Originally posted by: Gamingphreek
Actually, as of the 65.32 drivers the video encoder is now enabled. I believe Guru3D ran some benchmarks and it really helps A LOT!

So any driver after that should have it enabled.

-Kevin

Any link at Guru3d?
 
I know that doing the decoding inside the GPU is faster than leaving it to the CPU, but how does this work exactly, since GPU clock speeds are significantly lower than CPU speeds?
Generally dedicated hardware is faster and more efficient than generic hardware like a CPU. A CPU can do just about anything it likes but pays for that flexibility by being outperformed by dedicated hardware (such as a GPU) which can do certain tasks much faster but can't do other tasks at all.

As for pipelines: rendering processes are inherently parallel, which means if you double the number of pipelines you can expect double the performance in certain situations. OTOH, things aren't quite that rosy with standard code executing on CPUs, and doubling the number of processors almost never doubles the performance; in fact, sometimes performance won't go up at all.
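A toy sketch of why rendering scales with pipelines (hypothetical numbers; the key assumption is that every pixel can be processed independently):

```python
# Pixels are independent, so the frame can be split evenly across
# pipelines; total time is the time of one slice, not the sum.
PIXELS = 1_000_000
CYCLES_PER_PIXEL = 1  # assumed, for illustration

def render_cycles(n_pipelines):
    """Cycles needed to render the frame with n parallel pipelines."""
    return (PIXELS // n_pipelines) * CYCLES_PER_PIXEL

print(render_cycles(8))   # 125000 cycles
print(render_cycles(16))  # 62500 cycles: doubling pipelines halves the time
```

Ordinary CPU code rarely splits this cleanly, which is why doubling processors usually buys much less than doubling pipelines does for a GPU.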
 