Cookie Monster
Why is it necessary to update drivers every two weeks? It certainly isn't. Upgrading drivers only makes sense when the new driver does something your current driver doesn't - whether that's an added feature, a performance improvement, or a bug fix. The most likely explanation is that his current driver does everything he needs without misbehaving, so there's no incentive for him to upgrade. That doesn't mean there's anything special about June's driver; it most likely means nothing in the newer drivers interests him. Or he simply doesn't care. Or perhaps he just hasn't updated his signature.
I'm sorry to say it, but the title is misleading: it only applies to the Physics test in 3DMark 11, and it's not the rule for games.
Different game engines, and even different games built on the same engine, will produce different results.
I think thorough testing would show there isn't one simple, set-in-stone rule that applies to all games. The more extensive testing done so far by Xbit Labs and AlienBabelTech shows this: in some games the OP's claim holds true, in others it doesn't.
lol +1. You'd better do it fast.
The thread title is misleading. Nvidia's drivers do not result in lower performance relative to the competition:
- The GTX 580 has no single-GPU competition.
- The GTX 570 competes well with the HD 6970.
- The GTX 560 Ti competes well with the HD 6950.
- The GTX 460/470 compete well with the HD 6850/6870.
The title should be changed to something like "NV's video cards are more dependent on CPU speed to extract maximum performance than AMD's cards." That's not the same as saying NV's graphics cards are slower than AMD's because their driver is less efficient.
The article itself is very weak at making this case. Xbit Labs produced a far more comprehensive article on this topic many months ago.
It's strange how Nvidia GPUs are all over the 3DMark 11 Top 20 (Performance preset) while AMD GPUs/CPUs are nowhere to be seen. Seems like AMD needs to do some work.
As an aside, NV does not allow customers who buy their cards to use them as PhysX processors alongside Radeons. Nvidia has stated this is due to quality assurance issues, which I believe entirely. Nvidia's miserable quality assurance was confirmed with the 196.75 driver release linked above.
My take on the thread is the following. nVidia's approach is all about thread parallelism and thread encapsulation to maximize the use of its execution resources. But it seems there is a hardware or driver bug that still doesn't let the chip reach its full capacity, so the driver leans on the CPU for tweaks and compilation work to get the job mostly done; that's especially true of the GTX 460/GTX 560, whose superscalar design is harder to keep efficient. nVidia's reliance on more CPU cycles has been shown several times in CPU-bottleneck articles around the web. It isn't a big deal for me, but as games get more demanding it means they will demand more CPU cycles, bottlenecking the nVidia solution on the CPU side.
AMD's approach, on the other hand, is that since their VLIW architecture is very hard to keep fed, they use the command queue processor inside the GPU to accept and process compiler commands, optimizing and maximizing the utilization of its very wide execution resources, which is quite challenging. That avoids relying on the CPU, which might not be a good idea anyway, since the GPU's command queue processor is a sort of specialized RISC processor that is very good at exactly that job. At least, that's what I think.
This might also explain why AMD hardware tends to age better than comparable nVidia parts. In newer, more demanding games, an HD 3870 paired with an Athlon X2 6400+ might perform better than a similar setup with, say, an 8800 GT, since the former relies less on CPU cycles.
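To make the CPU-bottleneck argument above concrete, here is a toy C++ sketch. Every number in it is made up, and treating frame time as max(CPU time, GPU time) is a simplification of how real frame pacing works. It just shows why extra per-draw-call driver overhead only costs FPS once the CPU side becomes the longer half of the frame:

```cpp
// Toy model of CPU-vs-GPU bottlenecking (illustrative numbers only).
// frame_time = max(cpu_submit_time, gpu_render_time); FPS = 1000 / frame_time.
#include <algorithm>
#include <cstdio>

int main() {
    const double draw_calls        = 3000.0;   // draw calls per frame (assumed)
    const double gpu_render_ms     = 16.0;     // GPU needs 16 ms per frame (assumed)
    const double cpu_game_logic_ms = 6.0;      // game code per frame (assumed)

    // Hypothetical per-draw-call driver cost on a slow vs fast CPU, in ms.
    const double overhead_per_call_ms[2] = {0.004, 0.002};
    const char*  cpu_label[2]            = {"slow CPU", "fast CPU"};

    for (int i = 0; i < 2; ++i) {
        double cpu_ms   = cpu_game_logic_ms + draw_calls * overhead_per_call_ms[i];
        double frame_ms = std::max(cpu_ms, gpu_render_ms);
        std::printf("%s: CPU %.1f ms, GPU %.1f ms -> %.0f FPS (%s-bound)\n",
                    cpu_label[i], cpu_ms, gpu_render_ms, 1000.0 / frame_ms,
                    cpu_ms > gpu_render_ms ? "CPU" : "GPU");
    }
    return 0;
}
```

With these invented numbers, the slow CPU ends up CPU-bound at roughly 56 FPS while the fast CPU is GPU-bound at about 63 FPS, which is the kind of gap the dual-core anecdotes later in the thread describe.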
Why aren't you using the latest (from 2011) drivers yourself then, or is there something special about last June's drivers?
Great post. Seems plausible and was interesting to read. +1
He's waiting on a 6th hotfix. 11.1.2.a.x.beta
Nope. The GTX 560, when overclocked, competes well with the HD 6950. At stock, it's closer to an HD 6870 than to an HD 6950.
Yes, it seems Nvidia has lots of room for improvement in catching up to AMD's drivers. Not only could their drivers be made more efficient to reduce CPU overhead, it would also be nice if they improved SLI scaling to be as good as Crossfire scaling is on AMD's 6 series.
Thanks. It would be good to know whether a 6850 would be a better fit than a 460 for a system with, say, a Core 2 Duo. Since most review sites use heavily overclocked i7s, you can't see whether the drivers make a real difference.
Even if you just look at the stock 560, it's still closer to the 6950 in performance than it is to the 6870, unless you start getting into 2560x1600 resolutions where the 2GB of VRAM on the 6950 comes into play.
I'm not sure that's really it. I think it's more that nVidia would see a big improvement going from a dual-core to a quad-core system, but in a quad-core system where all cores aren't pegged at 100% (and in most cases they won't be), the Fermis are probably already working at full capacity.
My own experience says yes. My E8400/GTX 460 puts a huge strain on the E8400 regardless of the game, and I've never seen the GTX 460 above 90% usage; it's usually at 50-70%. In the same games, my i7/HD 5850 system always pegs the HD 5850 at 99%. I'd try the 5850 in my E8400 system, but it doesn't fit!
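For anyone who wants to log this instead of eyeballing Afterburner, here is a minimal C++ sketch using NVIDIA's NVML library (assuming the NVIDIA driver and NVML headers are installed; build with -lnvidia-ml on Linux). It only reads the same utilization counter the monitoring tools show, so treat it as a convenience rather than a definitive bottleneck test:

```cpp
// Minimal GPU-utilization logger using NVML (NVIDIA cards only).
// Build (Linux): g++ gpu_util.cpp -o gpu_util -lnvidia-ml
// If the GPU sits well below ~99% while a CPU core is pegged, the frame rate
// is likely limited by the CPU/driver rather than by the card itself.
#include <nvml.h>
#include <cstdio>
#include <unistd.h>   // sleep()

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        std::fprintf(stderr, "NVML init failed (NVIDIA driver required)\n");
        return 1;
    }
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) {
        std::fprintf(stderr, "No NVIDIA device found at index 0\n");
        nvmlShutdown();
        return 1;
    }
    for (int i = 0; i < 60; ++i) {          // sample once per second for a minute
        nvmlUtilization_t util;
        if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS)
            std::printf("GPU %u%%  memory controller %u%%\n", util.gpu, util.memory);
        sleep(1);
    }
    nvmlShutdown();
    return 0;
}
```

If the GPU reading stays in the 50-70% range while a CPU core sits at 100%, that is consistent with the CPU/driver being the limiter rather than the card.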
For someone still using a dual core, it looks like an AMD card is the better choice, since Nvidia sucks up so much CPU horsepower to run.
If I'm not mistaken, the GTX 480 with a Phenom X2 550 outperforms the AMD HD 5870 with the same CPU in Call of Pripyat, and there's no difference in performance moving to a quad-core Phenom or a Core i7 in the same game.
AvP, Lost Planet 2 and the Heaven benchmark with tessellation on don't care what CPU you have; they are GPU-bound.
The same goes for Metro 2033: you get higher frame rates with a faster card no matter what CPU you use.
Unreal Engine 3 games like UT3, Batman and Resident Evil 5 all exhibit the same behavior, benefiting from more cores and higher frequencies on both AMD and NV graphics cards.
I don't see a pattern of NV cards needing more CPU than AMD cards; perhaps it happens in a few games, but it is not the general rule.
http://alienbabeltech.com/main/?p=22167
Again, have a look at the results for the Phenom X2 550 in the AlienBabelTech review:
http://alienbabeltech.com/main/?p=22167
You will see that both the GTX 480 and the HD 5870 exhibit the same behavior with the dual-core Phenom X2 550, and you will not find a trend forming that would justify saying AMD cards perform better with dual-core CPUs.
I think Tom's Hardware has a similar review on CPU bottlenecking.
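One way to check for such a trend without arguing over raw FPS numbers is to compute, per game and per card, how much frame rate the faster CPU buys you. The C++ sketch below uses placeholder FPS values (not figures from the AlienBabelTech or Tom's Hardware reviews) purely to show the calculation:

```cpp
// Quantifying CPU dependence per card: scaling = fps_fast_cpu / fps_slow_cpu.
// All FPS figures below are made-up placeholders, not review data.
#include <cstdio>

struct Result {
    const char* game;
    const char* card;
    double fps_slow_cpu;   // e.g. a dual-core Phenom X2 550 (placeholder)
    double fps_fast_cpu;   // e.g. a quad-core Phenom or Core i7 (placeholder)
};

int main() {
    const Result results[] = {
        {"Game A", "GTX 480", 45.0, 62.0},
        {"Game A", "HD 5870", 48.0, 60.0},
        {"Game B", "GTX 480", 70.0, 71.0},
        {"Game B", "HD 5870", 66.0, 67.0},
    };
    for (const Result& r : results) {
        double scaling = r.fps_fast_cpu / r.fps_slow_cpu;
        std::printf("%s / %s: %.2fx from the faster CPU (%s)\n",
                    r.game, r.card, scaling,
                    scaling > 1.1 ? "CPU-limited on the slow chip" : "already GPU-bound");
    }
    return 0;
}
```

If the GTX 480 and HD 5870 show similar ratios game by game, that backs the "no general rule" reading; consistently larger ratios on the NVIDIA card across many titles would support the thread's original claim.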