860MHz is the default frequency of that model of 580, lol, classic "high OC"...
:thumbsup:
The shocking part is that when someone points out to you that the 7970 @ 1125MHz is showing huge 40-50% gains over the 580 @ 860MHz, you claim the test doesn't show the 580 in the best light (because some of them can hit 930-950MHz), but then you ignore the other side of the coin, which is overclocking the 7970 beyond 1125MHz. In simplest terms, the ~50% lead over an overclocked 580 will remain, because the 7970 scales further as well.
Every time someone points out those benchmarks, you ignore them, and yet what you say doesn't change the conclusion at all unless you put the 580 under LN2.
An overclocked 7970 also beats an overclocked 6970 by 75-80% in many modern games. Benchmarks are everywhere. Check TR's review of the R9 280X, where it absolutely crushes the 6970.
It takes a GTX 680 boosting to 1280-1290MHz just to match a 7970 at around 1165MHz. An overclocked 580 never stands a chance; in the BF4 beta, the 580 gets absolutely crushed by the 7970GE. If the R9 290 comes in at $499 and, once overclocked, can trade blows with an overclocked 780/Titan, that will absolutely be a win for AMD.
I never made any arguments regarding correlations between performance and memory bandwidth; that is your own assumption. I was simply stating that memory speeds have increased over time and that 320GB/s of bandwidth over a 512-bit bus is not impressive by any means.
That's the entire point of my post that you still missed. You look at memory bandwidth as only a number, a function of how wide the bus is and the total bandwidth attained. Without looking at the context of the GPU's speed, comparing the 288GB/s bandwidth of GPU A vs. the 320GB/s of GPU B is a useless exercise. It's like comparing a car with 288 hp vs. 320 hp without looking at the curb weight... a waste of time. You can have a GPU like Tahiti that has 288GB/s on paper but only benefits from 200GB/s because the GPU itself is the major bottleneck. In that case you're just comparing numbers on a spec sheet in theoretical terms, when the only thing that matters is how efficiently the GPU can actually utilize its memory bandwidth in real-world programs.
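If anyone wants the quick math behind the paper numbers, here's a minimal sketch (a hypothetical helper, just back-of-the-envelope): theoretical bandwidth is simply bus width times effective memory data rate, so 320GB/s on a 512-bit bus actually implies a slower effective memory clock than Tahiti's 288GB/s on a 384-bit bus, and neither figure says anything about how much of it the GPU can actually use.

```python
# Rough sketch: theoretical peak bandwidth from bus width and effective data rate.
# GB/s = (bus width in bits / 8) bytes per transfer * effective data rate in GT/s
def theoretical_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

# Tahiti GHz Edition: 384-bit bus at ~6 GT/s effective GDDR5 -> the 288GB/s figure above
print(theoretical_bandwidth_gbs(384, 6.0))  # 288.0

# A 512-bit bus only needs ~5 GT/s effective memory to hit 320GB/s
print(theoretical_bandwidth_gbs(512, 5.0))  # 320.0
```

Same spec-sheet exercise either way; how much of that bandwidth the GPU actually puts to work in games is what matters.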