Originally posted by: Azn
It's pointless to argue with you when I've already explained why the 8800GT beats the 8800GTS 640MB. Although the 8800GT is bandwidth-starved, it's able to beat the 8800GTS 640MB because it has more SPs and TMUs. With more bandwidth it would easily beat the 8800GTS 640MB twofold and then some, while the 8800GTS 640MB is already saturated with bandwidth, so more bandwidth wouldn't help that card much, just like the 2900XT.
You have a good point here; I don't think everyone sees it.
A nice test would be to take an 8800GTS 640 and run some benchmarks, first with the memory slightly underclocked, then with the memory slightly overclocked.
If what you're saying is correct, the benchmark results will be pretty close.
Doing the same test on an 8800GT should show a more direct impact from changing the memory speed (and therefore the bandwidth).
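To put rough numbers on what that test would actually vary, here's a minimal back-of-envelope sketch in Python. It assumes the commonly quoted reference memory clocks and bus widths (800 MHz on a 320-bit bus for the GTS 640, 900 MHz on a 256-bit bus for the GT), and the ±10% offsets are purely illustrative:

```python
# Theoretical memory bandwidth: bus_width/8 bytes per transfer,
# two transfers per clock for GDDR3 (double data rate).
def bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * 2 * (bus_width_bits / 8) / 1000  # GB/s

# Approximate reference memory clock (MHz) and bus width (bits) -- assumed,
# check your own card before testing.
cards = {"8800GTS 640": (800, 320), "8800GT": (900, 256)}

for name, (clk, bus) in cards.items():
    for label, scale in (("-10%", 0.9), ("stock", 1.0), ("+10%", 1.1)):
        print(f"{name} mem {label}: {bandwidth_gbps(clk * scale, bus):.1f} GB/s")
```

Both cards shift by the same percentage, of course; the interesting part is whether the frame rates follow the GT's bandwidth curve while staying more or less flat on the GTS 640.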
I'd like to add in general... the first 8800s probably had 'too much' bandwidth because nVidia didn't have any DX10 software to analyse. They just had to estimate bandwidth requirements, and build their cards around those.
I think people focus mostly on fillrate and AA in this thread, but memory is used for more than just that. Another huge factor in the consumption of memory bandwidth is texturing.
I guess that's the core of the issue here. DX10 games aren't all that texture-heavy; they are mostly shader-limited. nVidia probably thought there would be a lot more texture usage when they originally designed the 8800 series. In terms of pure fillrate (ROPs and all), it doesn't have the horsepower to use anywhere near its memory bandwidth. I think that's what Azn is alluding to: it has a lot of 'spare' bandwidth which can be used for texturing.
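As a very crude illustration of that balance, here's a sketch comparing peak texel-fetch demand against memory bandwidth for both cards. It uses the commonly quoted unit counts and clocks (24 texture units at 500 MHz on the G80 GTS, 56 at 600 MHz on the G92 GT) and deliberately ignores texture caches, DXT compression and filtering reuse, which cut the real demand enormously, so treat it only as a relative comparison:

```python
# Worst-case texel fetch demand vs. available memory bandwidth.
# Crude on purpose: no texture cache, no compression, 4 bytes per texel.
def texel_demand_gbps(tex_units, core_clock_mhz, bytes_per_texel=4):
    return tex_units * core_clock_mhz * bytes_per_texel / 1000  # GB/s

# Commonly quoted figures (assumed): GTS 640 = 24 tex units @ 500 MHz, 64 GB/s;
# 8800GT = 56 tex units @ 600 MHz, 57.6 GB/s.
cards = {
    "8800GTS 640": {"tex": 24, "clk": 500, "bw": 64.0},
    "8800GT":      {"tex": 56, "clk": 600, "bw": 57.6},
}

for name, c in cards.items():
    demand = texel_demand_gbps(c["tex"], c["clk"])
    print(f"{name}: ~{demand:.0f} GB/s peak texel demand "
          f"vs {c['bw']:.1f} GB/s of memory bandwidth")
```

On those (admittedly rough) numbers the GTS 640 has bandwidth headroom relative to its texture units, while the GT's texture units could easily outrun its bus, which is exactly the 'spare bandwidth' versus 'bandwidth-starved' picture being argued above.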
Later generations of DX10 hardware focused more on shader performance, maximizing fillrate/AA and such, and less on memory bandwidth, which resulted in better performance at a lower cost. That's exactly what the transition from G80 to G92 was: a leaner, meaner chip.
With AMD it seems they overshot the bandwidth requirements even more badly with the original 2900XT, and you see a similar trend going into the 3000 and 4000 series.
But hindsight is always 20/20. Neither AMD nor nVidia could have predicted, back in 2006, what requirements today's DX10 games would have.