That's not the same as saying NV's graphics cards are slower than AMD's because their driver is less efficient.
The article itself makes a weak case for this idea. Xbitlabs produced a far more comprehensive article on this topic many months ago.
Thanks. It would be good to know whether a 6850 would be better than a 460 for a system with, say, a Core 2 Duo. Since most review sites test with heavily overclocked i7s, you can't see whether the drivers make a real difference.
http://alienbabeltech.com/main/?p=22167&page=13
This review is just as comprehensive. Far Cry 2 is a shining example of what can happen with this bottleneck we're investigating. With a dual core, the GTX 480's performance drops below that of the 5870; with a quad core, the 480 is faster. Crossfire also appears to follow a similar trend to Nvidia's drivers and seems to be even more influenced by processor speed. Apoppin didn't test SLI, but I think it's safe to assume SLI would follow the same trend.
World in Conflict is another good example. Here the AMD card paired with a Phenom X4 is faster than the 480 paired with the same Phenom. The tide turns when switching to a Core i7.
This doesn't happen in every game.
The article shows that NV's driver has higher overhead than AMD's in one test. They specifically look at 3DMark11's "Physics" test to highlight this difference. Ironically, NV graphics cards utterly dominate AMD in any game that actually uses PhysX effects (i.e., some attempt at realistic physics). So do you really care how good a CPU is at running physics when hardly any modern games run realistic physics effects on the CPU? In other words, if they wanted to look at CPU dependence by brand, they should have compared various CPUs in actual games.
PhysX, as it's currently implemented, isn't even relevant to this discussion and is not physics. That isn't what the article was highlighting anyway; you're fabricating an argument out of nothing. As currently implemented in games, I'd take something like Havok over PhysX.
It would have been better if they had tested different CPUs, but that doesn't make their one test invalid. The Alienbabeltech and your own Xbitlabs articles verify what the OP's article is showing: when CPU resources become limited, Fermi's performance drops like a rock, which can result in lower performance than a competing AMD card. You do see a trend in the ABT article - the GTX 480 is very susceptible to CPU speed, even if in some games the end result doesn't drop its performance below the competition's.
Precisely. The title of the thread implies that NV's cards actually perform worse [in games] than comparable AMD cards. However, not a single piece of evidence in that article supports this view. In other words, we can only conclude that AMD's driver seems to be more efficient -- but so what? How does that impact your video card purchasing decision? We can't deduce any useful information unless actual games are tested with various CPUs, clock speeds, etc. (see Xbitlabs' article).
You can deduce what can happen under certain conditions. Even the Xbitlabs and ABT articles don't show what will happen under every condition; beyond that, you simply have to make your best educated guess. Just consider the OP's article one data point among many, highlighting the theoretical worst-case scenario. And it does exactly that.
http://www.guru3d.com/news/nvidia-gr...s-card-market/
NVIDIA grabs 71% of discrete graphics card market
Hmmm... it does not take a physics professor to do the math on that one.
It may not take a physics professor to do what you are showing, but it does take someone with common sense and thoughtfulness not to post an irrelevant stat. What does the discrete graphics market have to do with that chart? Your link doesn't mean anything, because there are (were) also integrated graphics chips contributing to those crashes.
But even if I do play along, look at that chart. If we isolate AMD and Nvidia, then Nvidia has 75% of the total crashes between the two. Hmm... 75% of the crashes for "71% of the discrete market" is not a favorable ratio here... it doesn't take a math professor to...
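A quick sketch of that math (using the 75/25 crash split and 71% market share quoted above, and assuming for simplicity that AMD holds essentially all of the remaining discrete share, so roughly a 71/29 split between the two):

```python
# Rough sanity check on the crash-share vs. market-share argument above.
# The 75/25 crash split and the 71% market share are the figures quoted
# in this thread; the 29% for AMD is an assumption that the two vendors
# split the discrete market between themselves.
nvidia_market, amd_market = 0.71, 0.29
nvidia_crashes, amd_crashes = 0.75, 0.25

# A ratio of 1.0 would mean crashes exactly proportional to install base.
print(f"Nvidia: {nvidia_crashes / nvidia_market:.2f}x its market share")  # ~1.06
print(f"AMD:    {amd_crashes / amd_market:.2f}x its market share")        # ~0.86
```

In other words, even granting the market-share framing, Nvidia's crash share slightly exceeds its install base while AMD's falls below it.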
Don't use facts to shoot holes in their fallacy; it just angers them more.
Don't use red herrings to deflect from the true nature of this discussion. notty's link and your approval of it are not relevant, because this thread is, and always has been, about how Nvidia's drivers react under CPU-resource limitations. The top-20 list does nothing to show the correlation, because those users are not running their tests under processor-resource limitations (they are running the fastest processors possible).