Given how many 670s there are, it looks like 680 yields were less than stellar, so what makes you think BigK would have had good yields?
I was saying that if GK100 had been profitable (i.e., had good yields and volume), it would have been a G80 situation. Since NV couldn't launch GK100 in large volumes in the consumer space, that obviously isn't happening any time soon.
People forget that part of the reason Tahiti eats more power than the 6x0 is the extra 1GB of VRAM.
The power consumption difference between the 1GB and 2GB versions of the GTX560Ti/HD6950 was minimal, so I think there is more to HD7970's power consumption than just the extra 1GB of VRAM. For example, HD7970 needs 1.25V to get to 1250MHz, while 670/680 cards get there on 1.175-1.19V.
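Quick back-of-the-envelope to illustrate the voltage point. This is only a sketch: it assumes dynamic power scales roughly with V²·f, ignores leakage, VRAM and board power, and the equal-clock comparison is my own simplification, not a measurement.

```python
# Back-of-the-envelope: dynamic power scales roughly with V^2 * f.
# Ignores leakage, VRAM and board power; equal-clock comparison is a simplification.
def relative_dynamic_power(volts, mhz, ref_volts, ref_mhz):
    return (volts / ref_volts) ** 2 * (mhz / ref_mhz)

# HD7970 at 1.25V vs a 670/680 at ~1.19V, both taken at 1250MHz for illustration
print(relative_dynamic_power(1.25, 1250, 1.19, 1250))  # ~1.10, i.e. ~10% more core power from voltage alone
```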
So the delta is even smaller than you think, which is why a scaled-up Pitcairn on an improved 22nm process could probably beat a GTX680 and potentially beat it on perf/watt.
NV would also benefit from a 22nm shrink, though, so it's a moot point.
Right now the 1344 SP GTX670 is 25% faster than Pitcairn. The delta between the GCN and Kepler architectures is huge. Kepler only needs 1536 SPs, 32 ROPs and a 256-bit memory bus with 192 GB/sec of bandwidth to keep up with the 2048 SP, 32 ROP, 384-bit, 264 GB/sec GCN HD7970. Forget about GPGPU for a second and look at the underlying GPU specs. It would take a very large IPC increase, or 2560 SPs and 48 ROPs, for the HD8000 series to be able to go against GTX780. As an architecture, Kepler is about half to a full generation ahead. It's obvious when NV's GTX560Ti successor (GTX670) ~ HD7970, and it's also evident in the actual specs of the cards: AMD needs a lot more raw hardware to match NV's card.
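For anyone wondering where the 192 GB/sec and 264 GB/sec figures come from, it's just bus width times effective memory data rate (a quick sketch using the stock GDDR5 speeds of 6008 MT/s and 5500 MT/s):

```python
# bandwidth (GB/s) = bus width in bytes * effective memory data rate (GT/s)
def mem_bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(mem_bandwidth_gbs(256, 6.008))  # GTX680: ~192 GB/sec (256-bit @ 6008 MT/s)
print(mem_bandwidth_gbs(384, 5.5))    # HD7970: 264 GB/sec (384-bit @ 5500 MT/s)
```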
NV's 680 is just as fast as the 7970 despite far fewer shaders and far less memory bandwidth, and Kepler is also way faster in tessellation and bilinear texture filtering. It's going to be interesting to see if AMD brings 48 ROPs to the HD8000 series, since they keep leaving pixel performance on the table time and time again and it hurts them. What's the point of gobs of memory bandwidth and shaders when Pitcairn has a higher pixel fill rate than HD7970 (32000 MPixels/sec vs. 29600 MPixels/sec)?
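Those MPixels/sec numbers are just ROPs times core clock (sketch below, stock clocks assumed):

```python
# pixel fill rate (MPixels/sec) = ROPs * core clock (MHz)
def pixel_fillrate_mpix(rops, core_mhz):
    return rops * core_mhz

print(pixel_fillrate_mpix(32, 1000))  # HD7870 (Pitcairn XT): 32000 MPixels/sec
print(pixel_fillrate_mpix(32, 925))   # HD7970 (Tahiti XT):   29600 MPixels/sec
```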
I think you are overstating how great Pitcairn is. The GCN architecture is LESS efficient than VLIW4 (i.e., HD6950/6970) for games. The only reason it looks more efficient is the 28nm node: it operates at 1000MHz vs. 880MHz for HD6970 and is only 7-8% faster than HD6970. If HD6970 had been shrunk to 28nm and its tessellation performance doubled, it would have SMOKED Pitcairn for games even while still using VLIW4. Pitcairn is great mostly because of the 28nm shrink. Take a look at what happens in a game with tessellation:
HD7850 CF ~ GTX670 OC
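Rough clock-normalized math on that 7-8% number. This assumes performance scales linearly with clock, which it doesn't quite, so treat it as a ballpark rather than a measurement:

```python
# Rough clock normalization; performance doesn't scale 1:1 with clock,
# so this is a ballpark, not a measurement.
perf_ratio = 1.075        # HD7870 roughly 7-8% faster than HD6970 overall
clock_ratio = 1000 / 880  # ~13.6% higher core clock
print(perf_ratio / clock_ratio)  # ~0.95: slightly lower per-clock throughput than VLIW4 in games
```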
If you lift the hood, GCN is still behind the Kepler architecture in tessellation and texture performance. Texture performance is a non-issue right now since current games still have low-quality textures and GTX670/680/7970 are all similarly "slow" (~30 fps) at very high resolutions, but Kepler wins in most tessellated games (Batman AC, Lost Planet 2, HAWX 2, Crysis 2, Civ5). That bodes well for future products based on this architecture. AMD will probably have to go to an enhanced GCN 2.0, while NV can just carry Kepler over to the GTX700 series chips.
When a lot of us said it took AMD a 28nm node shrink plus a new architecture for HD7950 to barely beat a 40nm Fermi, this is what we meant: GCN is AMD's 1st-generation compute architecture. AMD delivered a well-balanced card, but keep in mind that their primary target market for $300-500 GPUs is gamers, not heterogeneous-computing professionals who spend $3-5K on GPUs. Fermi was criticized for the same thing, but it was able to pull through with the GTX460 1GB/768MB and sell a lot of Tesla cards. Also, the fact that GTX560/560Ti/GTX570/580 launched soon after the 470/480 helped. AMD can do the same if it launches faster-clocked 7950/7970 cards. AMD has its own GTX460 in the HD7850, so they are fine for now.