If we compare the 460 to the 580, they would even be close to the 250 Watt figure (not quite, but certainly ballpark).
GTX460 peaks at 119W. GTX680 peaks at 186W.
You are comparing GTX460 --> GTX580 and GTX680 --> GTX780 and ignoring the massive power consumption difference between the 460 and the 680...
GTX480 was 48% faster on average at 1080p vs. the 460 and peaked at 273W.
You're also ignoring that GTX480/580, like GK110, have double precision and a dynamic compute scheduler, and that both parts were 520-530mm^2 with power consumption going to 230-270W. That's two major areas you totally missed in your comparison. You cannot increase the functional units by 100% in GK110 since some of that die space is eaten up by compute-related fat.
I'm not saying it will be great on a performance/watt basis, but I thought this entire subforum had abandoned that metric once AMD's parts were so bad this generation in comparison?
There is very little difference in power consumption this generation despite the hoopla.
HD7950 = 144W
GTX670 = 152W
GTX680 = 180-190W
HD7970 = 190W
HD7970 GE after-market cards = 210-220W
HD7970 1.175V @ 1150MHz = 225-238W
A stock GTX580 still draws more than after-market 7970 GEs.
http://static.techspot.com/articles-info/555/bench/Power.png
In other words, GTX680 is nothing like what GTX460 was to the 480/580. NV is already running up against 190W of power use on the 680. With the 460 they had 110-150W of room to play with before getting to 580/480 power consumption. There is only about 60W left from the 680 to 250W. Apples vs. oranges.
Compare the 460 to the 580: it is frequently approaching 100% (exceeding it rarely, normally ~75%). That is the type of performance scaling we can reasonably expect from nV.
No, you cannot. GTX460 peaked at 120W. GTX580 peaks at 229W, a ~110W increase in power. GTX680 peaks at 186W. To get to 250W, that's just 64W more, or almost half the headroom the 580 had over the 460.
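If the arithmetic isn't sinking in, here it is as a quick back-of-envelope Python sketch (the wattages are the peak gaming figures cited in this thread, not official TDPs, and 250W is the ceiling you yourself brought up):

```python
# Back-of-envelope headroom comparison using the peak gaming wattages cited above
# (review numbers, not official TDPs).
gtx460, gtx580 = 120, 229          # last generation, watts
gtx680, ceiling = 186, 250         # this generation, watts; 250 W assumed flagship ceiling

room_used_460_to_580 = gtx580 - gtx460     # ~109 W NV actually spent going 460 -> 580
room_left_680_to_cap = ceiling - gtx680    # ~64 W left before a GTX780 hits 250 W

print(f"460 -> 580: +{room_used_460_to_580} W")
print(f"680 -> 250 W ceiling: +{room_left_680_to_cap} W")
print(f"That's only {room_left_680_to_cap / room_used_460_to_580:.0%} of the room NV had last time")
```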
Given that we have a direct historical comparison to make on the nV side, an increase of 100% over the 680 isn't close to being unreasonable.
No, just no. GTX680 = 186W.
100% faster on the same 28nm node in a GTX780 is not unreasonable? Okkkk. :thumbsup: I don't think you even remotely grasp the relationship between modern 28nm manufacturing and die size, or that GK110 carries excess transistors for the dynamic scheduler and DP. 100% faster means 100% more functional units/memory bandwidth at 1058MHz GPU clocks on the same 28nm node, when the 680 already uses 186W. Ya, good luck with that.
HD7970 GE at 1050MHz is 365mm^2 and pushes 210-220W in after-market form, 238W in reference blower / stock VRM form. You think NV has some magical 28nm node that allows it to grow the die size to 520-600mm^2, keep the 1.06GHz clocks of the 680, increase functional units 75-100%, and not go to 275-300W?
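Or flip it around: here's a rough sketch of what your claim actually demands in perf/W terms, using the 186W peak figure and that same 250W ceiling:

```python
# What "100% faster than a 186 W GTX680, inside 250 W, on the same 28nm node"
# actually demands in perf/W terms. Rough numbers from above; 250 W is an assumed cap.
gtx680_power = 186.0
power_cap = 250.0
claimed_speedup = 2.0   # "100% faster"

required_perf_per_watt_gain = claimed_speedup * gtx680_power / power_cap
print(f"Required perf/W improvement: {required_perf_per_watt_gain:.2f}x")
# ~1.49x better perf/W with no process shrink, while *adding back* the DP units and
# dynamic scheduler that GK104 stripped out. That's the part being hand-waved away.
```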
Do I think the 780 will end up 50-100% faster than the 8970? I think that possibility is in play.
When did a 520-600mm^2 GPU beat a competing 410-420mm^2 GPU by 50-100%? Ya, that never happened. When did NV's flagship GPU beat AMD/ATi's flagship part by 50-100% on average? Ya, that also never happened. But you are saying it will happen this generation, even though the 680 is already trailing the 7970 GE, which means NV needs to increase performance 5-12% just to match a 7970 GE to begin with. And yet GTX780 will be 50-100% faster than the 8970? Makes sense: the 8970 may use 230-240W of power on a 410-420mm^2 die, but GTX780 will beat that by 50-100%, still carry double precision and the dynamic scheduler, widen the bus to 384-bit, all that stuff, and not blow way past 250W of power use. Based on your projection, GTX780 = Fermi 2.0, here we go!
I don't see it as unreasonable either; we have the historical performance characteristics of each company at relative die sizes to compare.
Yes, we do, but you ignored power consumption. You only compared die sizes, which is just half the story.
GTX560 Ti = 68% - peaks at 159W
GTX580 = 96% (41% faster) - peaks at 229W
GTX680 - peaks at 186W. Your entire projection that "100% is not unreasonable" is falling apart, and fast, unless NV goes to 275-300W, or NV builds an entirely different gaming chip and doesn't start with GK110 to begin with.
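And here's how perf/W actually behaved the last time NV scaled up within a node, using the exact figures above (the 68%/96% numbers are relative performance, the wattages are peaks):

```python
# Same-node sanity check using the figures above: GTX560 Ti vs. GTX580, both 40nm Fermi.
cards = {
    "GTX560 Ti": {"perf": 68, "watts": 159},   # relative performance, peak power
    "GTX580":    {"perf": 96, "watts": 229},
}
for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['watts']:.3f} perf per watt")
# Both land around 0.42-0.43 perf/W: the 41% performance jump cost ~44% more power.
# On the same node, perf/W barely moved. Now explain a 100% jump over a 186 W GTX680
# without going well past 250 W.
```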
Is it possible they will step out of form? Absolutely, but if I have to wager on who is going to come out ahead in designing a monolithic monster GPU at this point in the tech industry I'd be a fool to bet against nVidia.
No one said anything about GTX780 not being able to beat HD8970, but these wild estimates of 50-100% faster than the 8970 being thrown around are 100% wrong. Never in the entire history of ATI vs. NV or AMD vs. NV did NV have a 50-100% faster high-end flagship. Not even the 8800GTX beat the 2900XT by 50% on average, and the 2900XT was a total disaster since AA was broken on it.
All sorts of things can easily go wrong for nV and go better than expected for AMD to change this in a profound manner, but since we are talking about the refresh, the fact that the 680 was supposed to be a 660 becomes a very real factor. AMD screwed up *badly* this generation - the fact that they did so allowed nVidia to completely hide the fact that *they did too*, because they didn't need to ship anything high end to compete.
Ummm...no. AMD launched 3-6 months earlier, raised prices, and didn't lose market share - that sounds like a much better generation for the firm than the 4800/5800/6900 series were. HD7970 GE > GTX680, and GCN has class-leading compute architecture per mm^2. In one generation, AMD managed to hold price/performance and the single-GPU performance crown at the same time. The 365mm^2 die has the dynamic scheduler + DP, and Tahiti XT is faster clock for clock than GK104 as well. Kepler has its advantages, GCN has its advantages. If AMD had screwed up badly this generation, NV wouldn't have lost the single-GPU performance crown and price/performance at the same time, nor lost 2.5% market share last quarter, etc.
Remove DP and you have Pitcairn, with superior performance/watt to the GTX670/680 parts. The excess fat Tahiti XT carries translates into the same kind of wasteful transistors GK110 has to deal with, which means increased power consumption on GK110, just like the 7970 had to suffer this generation. There is no real evidence now that NV held back GK110 because they could; it seems they held it back because they couldn't launch it. At first, many of us thought it was reasonable that NV held back GK110 on purpose, but since K20 is only just starting to ship, GK110 appears to have been completely unmanageable in large volumes for most of this year at reasonable profit margins and yields to have been sold at $500-600 in the consumer market. That's actually the more plausible story of why NV had to use GK104: GK110 would have ended up too hot and too expensive, and yields and wafer capacity constraints forced NV to go to plan B --> GK104, which turned out a lot better than they expected.
Again, I'll let this sink in: HD7970 GE peaks at 210-220W for after-market cards (238W for reference blower cards @ 1.25V) on a 365mm^2 die using the 1st generation 28nm node at 1.05GHz. You are telling me GK110 was easily manufacturable this year as a 520-600mm^2 die at 1GHz? Yup, NV has alien 28nm technology stashed just for them.
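One last crude sketch, scaling the 7970 GE's power by die area at similar clocks. Power obviously doesn't scale purely with area, and Kepler isn't GCN, but it shows the neighborhood a 520-600mm^2 chip at ~1GHz on this node would live in:

```python
# Crude gut-check: scale the 7970 GE's after-market peak power by die area at similar
# clocks. Not how silicon really scales (leakage, board overhead, different architecture),
# but it's a rough proportional estimate.
hd7970ge_watts = 220     # after-market peak cited above
hd7970ge_area = 365      # mm^2
for gk110_area in (520, 600):
    est = hd7970ge_watts * gk110_area / hd7970ge_area
    print(f"{gk110_area} mm^2 at similar clocks -> roughly {est:.0f} W")
```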