Nvidia drivers give lower performance than AMD's due to high CPU overhead/inefficiency


AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I'm sorry to say it, but the title is misleading: it only applies to the Physics Test in 3DMark 11, and it's not the rule for games.

Different game engines, and even different games on the same engine, will produce different results.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
Why aren't you using the latest (from 2011) drivers yourself then, or is there something special about last June's drivers?
Why is it necessary to update drivers every two weeks? It certainly isn't. Upgrading should only happen when the new driver does something your current one doesn't - an added feature, a performance improvement, or a bug fix. The most logical explanation is that his current driver satisfies him without doing anything wrong, so there's no incentive to upgrade. That doesn't mean there's anything special about June's driver; it most likely means nothing in the newer drivers interests him. Or another likely scenario is that he just doesn't care. Or perhaps he simply hasn't updated his signature.

What a silly call out. Are you using a jump to conclusions mat?

I'm sorry to say it, but the title is misleading: it only applies to the Physics Test in 3DMark 11, and it's not the rule for games.

Different game engines, and even different games on the same engine, will produce different results.

I think thorough testing would show there isn't one single, set-in-stone, simple rule that can be applied to all games. So far the more extensive testing done by Xbitlabs and AlienBabelTech shows this: in some games the OP's claim is true, in others it isn't.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I think thorough testing would show there isn't one single, set-in-stone, simple rule that can be applied to all games. So far the more extensive testing done by Xbitlabs and AlienBabelTech shows this: in some games the OP's claim is true, in others it isn't.

Exactly. Just because two or three games produce the same result as the 3DMark 11 Physics Test, that doesn't make the title of this thread valid.

There are games that don't benefit from more CPU cores or higher frequencies (AvP, Metro 2033, etc.), and there are games where NV cards are more efficient with slower CPUs than AMD cards are.

There are even games that produce more frames with AMD CPUs than with Intel ones, but that's not the rule, nor would we say AMD CPUs are better for games (Call of Pripyat single card / DiRT 2).

http://alienbabeltech.com/main/?p=22167&page=20
 
Last edited:

nemesismk2

Diamond Member
Sep 29, 2001
4,810
5
76
www.ultimatehardware.net
3DMark 11 wouldn't even fully complete 100% of the time on my system until the patch was released. I wouldn't use 3DMark 11 as a good indicator of anything, and it doesn't even look very nice imo.
 

biostud

Lifer
Feb 27, 2003
19,729
6,808
136
Name a game where the drivers from either company make the gaming experience worse than the competitor's, and provide evidence that it's a driver problem and not the hardware.
(physX enabled titles excluded)
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
Everyone who says the thread title is inaccurate has understood exactly what is being discussed here. As anyone who opens the thread and reads the article would. :confused:
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
The thread title is misleading. Nvidia's drivers do not result in lower performance relative to the competition:

GTX580 has no single GPU competition.
GTX570 competes well with HD6970
GTX560 TI competes well with HD6950
GTX460/470 compete well with HD6850/6870.

The title should be changed to something like "NV's videocards are more dependent on CPU speed to extract maximum performance compared to AMD's cards." That's not the same as saying NV's graphics cards are slower than AMD's because their driver is less efficient.

The article itself is very weak in trying to portray this idea. Xbitlabs produced a far more comprehensive article on this topic many months ago.

Nope. The GTX 560, when overclocked, competes well with the HD 6950. At stock, it's closer to an HD 6870 than to an HD 6950.

It's strange how Nvidia GPUs are all part of the 3DMark 11 Top 20 (Performance preset), while AMD GPUs/CPUs are nowhere to be seen. Seems like AMD needs to do some work. :)

Kinda odd, considering that a single HD 6970 is slightly faster than the GTX 570 in 3DMark 11.

As an aside, NV does not allow customers who buy their cards to use them as PhysX processors alongside Radeons. Nvidia has stated this is due to quality-assurance issues, which I believe entirely. Nvidia's miserable quality assurance was confirmed with the 196.75 driver release linked above.

Which I think is false, because my 9600GT works fine alongside my HD 6970 and I haven't had a single issue while playing games, PhysX titles included. It is just a business/strategic lock to force users to buy nVidia hardware.

My opinion on the thread is the following. nVidia's approach is all about thread parallelism and thread encapsulation to maximize the use of its execution resources. But it would seem there's a hardware or driver limitation that still doesn't let the chip use its entire capacity, so the driver relies on the CPU for some tweaking/compiling work to get the job done. That's especially true of the GTX 460/GTX 560, whose superscalar design is harder to keep efficient. nVidia's reliance on more CPU cycles has been shown several times in CPU-bottleneck articles around the web. It isn't a big deal for me, but as games get more demanding they will also demand more CPU cycles, CPU-bottlenecking the nVidia solution.

AMD's approach is different: since their VLIW architecture is very hard to keep fed, they use the Command Queue Processor inside the GPU to accept and process compiler commands and maximize the utilization of its very wide execution resources, which is quite challenging, instead of relying on the CPU. The GPU's Command Queue Processor is like a small, very specialized RISC processor, so it is well suited to that job. That's what I think, anyway.

This might also explain why AMD hardware tends to age better than comparable nVidia solutions. In newer, more intensive games, for example, an HD 3870 paired with an Athlon X2 6400+ might perform better than a similar setup with an 8800GT, as the former relies less on CPU cycles.
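To make that concrete, here's a toy sketch (my own made-up numbers, nothing measured): a frame is only done when both the CPU side (game logic plus driver work) and the GPU side finish, so frame time is roughly the maximum of the two, and extra driver cycles only hurt once the CPU side becomes the longer one.

```python
# Toy frame-time model (illustrative only; every number here is invented).
# A frame is ready when both the CPU work (game logic + driver submission)
# and the GPU work are done, so frame time ~ max(cpu_ms, gpu_ms).

def fps(cpu_logic_ms, driver_ms, gpu_ms):
    frame_ms = max(cpu_logic_ms + driver_ms, gpu_ms)
    return 1000.0 / frame_ms

# Hypothetical game on a slow CPU: a driver that needs 6 ms of CPU per
# frame caps the frame rate harder than one that needs 2 ms.
print(fps(cpu_logic_ms=10, driver_ms=2, gpu_ms=12))  # GPU-bound: ~83 fps
print(fps(cpu_logic_ms=10, driver_ms=6, gpu_ms=12))  # CPU-bound: ~62 fps

# On a CPU twice as fast the driver cost shrinks too, and both cases end
# up GPU-bound at the same ~83 fps -- which is why reviews done on fast,
# overclocked CPUs hide this kind of overhead.
print(fps(cpu_logic_ms=5, driver_ms=1, gpu_ms=12))
print(fps(cpu_logic_ms=5, driver_ms=3, gpu_ms=12))
```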

Wreckage and SolMiester, if you don't have something smart to post here, just don't. :rolleyes:
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
My opinion on the thread is the following. nVidia's approach is all about thread parallelism and thread encapsulation to maximize the use of its execution resources. But it would seem there's a hardware or driver limitation that still doesn't let the chip use its entire capacity, so the driver relies on the CPU for some tweaking/compiling work to get the job done. That's especially true of the GTX 460/GTX 560, whose superscalar design is harder to keep efficient. nVidia's reliance on more CPU cycles has been shown several times in CPU-bottleneck articles around the web. It isn't a big deal for me, but as games get more demanding they will also demand more CPU cycles, CPU-bottlenecking the nVidia solution.

AMD's approach is different: since their VLIW architecture is very hard to keep fed, they use the Command Queue Processor inside the GPU to accept and process compiler commands and maximize the utilization of its very wide execution resources, which is quite challenging, instead of relying on the CPU. The GPU's Command Queue Processor is like a small, very specialized RISC processor, so it is well suited to that job. That's what I think, anyway.

This might also explain why AMD hardware tends to age better than comparable nVidia solutions. In newer, more intensive games, for example, an HD 3870 paired with an Athlon X2 6400+ might perform better than a similar setup with an 8800GT, as the former relies less on CPU cycles.

Great post. Seems plausible and was interesting to read. +1
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Why aren't you using the latest (from 2011) drivers yourself then, or is there something special about last June's drivers?

He's waiting on a 6th hotfix. 11.1.2.a.x.beta :D


You weren't asked to be involved in the conversation; the question was very specifically asked of Obsoleet.

Your unsolicited response is off-topic and inflammatory.

Moderator Idontcare
 
Last edited by a moderator:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Great post. Seems plausible and was interesting to read. +1

Thanks! :p I love GPU architectures hehe.

He's waiting on a 6th hotfix. 11.1.2.a.x.beta :D

You have been reported; your attitude isn't helping to build anything positive in this thread. If you can't say something smart in the thread, just don't say anything at all. Follow the example.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Nope. The GTX 560, when overclocked, competes well with the HD 6950. At stock, it's closer to an HD 6870 than to an HD 6950.

Not really. The GTX470 and HD6870 are close in performance. The HD6950 is only 5% faster than a stock 560 at 1920x1200 4AA/16AF, while the GTX560 is 14% faster than the HD6870 at the same resolution. My point is valid, since you can also buy a factory pre-overclocked 560 at the same price as a stock 560, and many factory pre-overclocked 560s can be found near the HD6950's $260 price. So I am not sure what the purpose of bringing up a stock-clocked 560 was, other than stirring a new debate. This comes up time and time again, but factory pre-overclocked cards do exist and can easily be purchased without a premium over the reference design. Even if you just look at the stock 560, it's still closer to the 6950 in performance than it is to the 6870 unless you start getting into 2560x1600 resolutions where 2GB of VRAM on the 6950 comes into play.
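If you want that spelled out, here is the arithmetic using just the two percentages above (a rough sketch; the inputs are approximate review averages, not exact figures):

```python
# Positioning implied by the two percentages quoted above (approximate).
gtx560 = 100.0
hd6950 = gtx560 * 1.05   # HD6950 ~5% faster than a stock GTX560
hd6870 = gtx560 / 1.14   # GTX560 ~14% faster than the HD6870

print(f"6950 leads the 560 by {hd6950 - gtx560:.1f} points")  # ~5.0
print(f"560 leads the 6870 by {gtx560 - hd6870:.1f} points")  # ~12.3
# So the stock 560 sits much closer to the 6950 than to the 6870.
```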
 
Last edited:

Termie

Diamond Member
Aug 17, 2005
7,949
48
91
www.techbuyersguru.com
Yes, it seems nvidia has lots of room for improvement in catching up to AMD's drivers. Not only in making their drivers more efficient to reduce CPU overhead; it would also be nice if they could improve SLI scaling to be as good as Crossfire scaling is with AMD's 6 series. :)

I'm not sure that's really it - I think it's more that nVidia would see big gains going from a dual core to a quad, but in a quad system where all cores aren't pegged at 100% (and in most cases they won't be), the Fermis are probably already working at full capacity.

Thanks. It would be good to know if a 6850 would be better than a 460 for a system with, say, a Core 2 Duo. Most review sites use heavily clocked i7s, so you can't see whether the drivers make a real difference.

My own experience says yes. My e8400/GTX460 system puts a huge strain on the e8400 regardless of the game, and I've never seen the GTX460 at over 90% use; it's usually at 50-70%. In the same games, my i7/HD5850 system always pegs the HD5850 at 99%. I'd try the HD5850 in the e8400 system, but it doesn't fit!
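For anyone who wants to repeat that eyeball test, here's a minimal sketch of it. The utilization numbers below are made up to mirror my two systems; real ones would come from a logger such as GPU-Z or Afterburner:

```python
# Rough bottleneck check from logged GPU-utilization samples.
# A GPU that never reaches full load while frame rates are low usually
# points to a CPU (or driver) bottleneck rather than a GPU limit.

def classify(samples, gpu_bound_threshold=95):
    avg = sum(samples) / len(samples)
    verdict = "GPU-bound" if avg >= gpu_bound_threshold else "likely CPU-bound"
    return avg, verdict

# Hypothetical utilization logs (%), shaped like the two systems above.
logs = {
    "e8400/GTX460": [55, 62, 70, 48, 66, 58],
    "i7/HD5850":    [99, 99, 98, 99, 99, 99],
}

for name, samples in logs.items():
    avg, verdict = classify(samples)
    print(f"{name}: avg {avg:.0f}% -> {verdict}")
```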
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Even if you just look at the stock 560, it's still closer to the 6950 in performance than it is to the 6870 unless you start getting into 2560x1600 resolutions where 2GB of VRAM on the 6950 comes into play.

This is the review of the overclocked Galaxy GTX 560 Ti:

http://www.hardocp.com/article/2011/02/01/galaxy_geforce_gtx_560_ti_gc_overclocking_review/3

"When it comes down to performance it seems the overclock helped quite a bit. You should see around a 12-17% performance improvement with the video card overclocked to 1015MHz GPU/2030MHz shaders and 4.34GHz memory. At this overclock, it seems to match the Radeon HD 6950 quite well. That is the catch though, it takes pushing the Galaxy GeForce GTX 560 Ti GC to its limits in order to achieve Radeon HD 6950 performance."

http://www.hardocp.com/article/2011/01/25/galaxy_geforce_gtx_560_ti_gc_video_card_review/9

"When we look back at the performance it seems like the Galaxy GTX 560 Ti GC is all over the place. In some games it's outclassed by the Radeon HD 6870 and Radeon HD 6950, and in others it matches the Radeon HD 6950 for gameplay performance. In all cases, the Radeon HD 6950 2GB was either the same, or a lot better than the Galaxy GeForce GTX 560 Ti. We simply experienced the Radeon HD 6950 being able to use higher resolutions and higher AA settings in our gaming. Our apples-to-apples testing also seems to indicate the Radeon HD 6950 being faster than the 560 Ti.

The GeForce GTX 560 Ti seems to compete well with the Radeon HD 6870; this is its main competition according to our performance testing. However, looking at price, it is closer to a Radeon 6950 1GB or 2GB card. In some games it will perform better than the Radeon HD 6870, but in others it might be slower, or the same. We found the Galaxy GTX 560 Ti GC delivering the same gameplay experience as the Radeon HD 6870 more often than not.

A highly clocked GTX 560 Ti may come close to or match a Radeon HD 6950. NVIDIA has told us that 1000MHz core speed on GTX 560 Ti parts should not be unheard of on a wide basis. However, you may be at your limit on the GTX 560 Ti to achieve that near-HD 6950 performance. On the other hand, the Radeon HD 6950 is at stock and can also be overclocked well, and if you max that out, then once again you are surpassing the GTX 560 Ti. It just makes sense, at around the $280 price, to go ahead and get the 2GB Radeon HD 6950, which will allow more headroom, plus it has 2GB for the higher resolutions with AA.

In regard to the Radeon HD 6870, it works the opposite way. Current pricing on the Radeon HD 6870 has them all the way down to $219 now! A stock GeForce GTX 560 Ti might cost you $249, but already the Radeon HD 6870 is $30 less. Our performance testing has revealed that the GeForce GTX 560 Ti is no faster than the Radeon HD 6870 in gaming. The GTX 560 Ti matches the gameplay experience of the Radeon HD 6870 but costs more. Logically, it makes sense to save $30 and get the same performance for less by going with the Radeon HD 6870. Unless of course you are playing a specific game that does better on the GTX 560 Ti, like Civ 5 for example."
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I am not getting into a debate over which review is better than the other (i.e., HardOCP vs. Computerbase). There are plenty of published reviews showing that the GTX560 Ti is much closer to the HD6950 than it is to the HD6870. Besides, 560s are generally $10-20 cheaper than most 6950s, so with the 6950 you get slightly more performance, but it costs more $$. The 560 therefore does compete rather well.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
I'm not sure that's really it - I think it's more that nVidia would see big gains going from a dual core to a quad, but in a quad system where all cores aren't pegged at 100% (and in most cases they won't be), the Fermis are probably already working at full capacity.

My own experience says yes. My e8400/GTX460 system puts a huge strain on the e8400 regardless of the game, and I've never seen the GTX460 at over 90% use; it's usually at 50-70%. In the same games, my i7/HD5850 system always pegs the HD5850 at 99%. I'd try the HD5850 in the e8400 system, but it doesn't fit!

Thanks for your contribution to the thread. It would mean that a quad core is required to extract the Fermi architecture's full performance; I wonder why that is, though.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
I'm not sure that's really it - I think it's more that nVidia would see big gains going from a dual core to a quad, but in a quad system where all cores aren't pegged at 100% (and in most cases they won't be), the Fermis are probably already working at full capacity.

My own experience says yes. My e8400/GTX460 system puts a huge strain on the e8400 regardless of the game, and I've never seen the GTX460 at over 90% use; it's usually at 50-70%. In the same games, my i7/HD5850 system always pegs the HD5850 at 99%. I'd try the HD5850 in the e8400 system, but it doesn't fit!

Thanks. This is more what I was wondering about, and it looks to be the case.

For someone still using a dual core, it looks like an AMD card is the better choice, since nvidia's driver eats up so much CPU horsepower.

I'd like to see this review broadened to show benchmarks on a quad core without an overclock as well.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
For someone still using a dual core, it looks like an AMD card is the better choice, since nvidia's driver eats up so much CPU horsepower.

If I'm not mistaken, the GTX480 with a Phenom X2 550 outperforms the AMD HD5870 with the same CPU in Call of Pripyat, and there's no difference in performance moving to a quad-core Phenom or a Core i7 in that game.

AvP, Lost Planet 2 and the Heaven benchmark with tessellation ON don't care what CPU you have; they are GPU-bound.

Same goes for Metro 2033: you get higher frame rates with a faster card no matter what CPU you use.

Unreal Engine 3 games like UT3 and Batman, and also Resident Evil 5, all exhibit the same behavior, benefiting from more cores and higher frequencies on both AMD and NV graphics cards.

I don't see a pattern of NV cards needing more CPU than AMD cards. Perhaps it happens in a few games, but it is not the general rule.

http://alienbabeltech.com/main/?p=22167
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
If I'm not mistaken, the GTX480 with a Phenom X2 550 outperforms the AMD HD5870 with the same CPU in Call of Pripyat, and there's no difference in performance moving to a quad-core Phenom or a Core i7 in that game.

AvP, Lost Planet 2 and the Heaven benchmark with tessellation ON don't care what CPU you have; they are GPU-bound.

Same goes for Metro 2033: you get higher frame rates with a faster card no matter what CPU you use.

Unreal Engine 3 games like UT3 and Batman, and also Resident Evil 5, all exhibit the same behavior, benefiting from more cores and higher frequencies on both AMD and NV graphics cards.

I don't see a pattern of NV cards needing more CPU than AMD cards. Perhaps it happens in a few games, but it is not the general rule.

http://alienbabeltech.com/main/?p=22167

"The results of our comparative test of two Intel platforms are easy to understand and explain. Yes, it is best to equip your gaming platform with both a top-performance graphics card and a premium-class CPU if you've got the money, but what if you haven't? According to our tests, the graphics card being the same, the performance of the platform with an Intel Core i7-975 EE processor can be 10 to 30% higher than that of the platform with an Intel Core i5-750.

But as our tests have also shown, this difference is far from crucial and can be easily made up for by purchasing a better graphics card. It is only in individual cases such as Call of Duty: Modern Warfare 2 or WiC: Soviet Assault that using a top-end CPU is indeed justifiable and necessary to enjoy a comfortable frame rate, but such games are rather rare."

http://www.xbitlabs.com/articles/cpu/display/cpus-and-games-2010_12.html#sect0

Per that article, they are saying that you can essentially out-run the issue by using a more powerful video card. Which is basically what you are saying.

But consider the case where the line is blurry: a user with a dual core who doesn't want to spend another $100 on a more powerful card and is choosing between two near-equal cards, say a 6850 and a 460 1GB. With nvidia's high driver CPU overhead, they'll get better performance going with the AMD 6850, and won't suffer from Fermi's need for more CPU to perform.
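To put made-up numbers on that dual-core scenario (the same toy max-of-CPU-and-GPU frame-time model sketched earlier in the thread, purely illustrative):

```python
# Two hypothetical near-equal cards on a slow dual core; the only
# difference is how much CPU time their drivers need per frame.

def fps(cpu_ms, driver_ms, gpu_ms):
    return 1000.0 / max(cpu_ms + driver_ms, gpu_ms)

gpu_ms = 15.0        # both cards render a frame in 15 ms (assumed equal)
dual_core_ms = 14.0  # game logic per frame on the slow CPU

print(fps(dual_core_ms, driver_ms=1.0, gpu_ms=gpu_ms))  # ~67 fps
print(fps(dual_core_ms, driver_ms=5.0, gpu_ms=gpu_ms))  # ~53 fps

# And a faster GPU can't out-run it here: drop gpu_ms to 10 and the
# high-overhead card is still stuck at the ~53 fps CPU wall.
print(fps(dual_core_ms, driver_ms=5.0, gpu_ms=10.0))    # still ~53 fps
```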
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Again, have a look at the results for the Phenom X2 550 in the AlienBabelTech review.

http://alienbabeltech.com/main/?p=22167

You will see that both the GTX480 and the HD5870 exhibit the same behavior with the dual-core Phenom X2 550, and you will not find a trend that justifies saying AMD cards perform better with dual-core CPUs.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
I think Tom's Hardware has a similar review regarding CPU bottlenecking.

I could only turn this one up; it uses just a GTX 460 and various CPUs.

http://www.tomshardware.com/reviews/game-performance-bottleneck,2737.html

[Benchmark chart: Dragon Age FPS]

[Benchmark chart: GTA IV: EFLC FPS]


Some examples showing hefty FPS differences between 2 and 4 cores on Fermi.