You're not being accurate or realistic. After I just showed you that netting an 80% increase in transistor density and doubling perf/mm² would massively outstrip what the last node leap accomplished, you answer in disbelief. AMD has said time and again: a 2x perf/W improvement. Look at Pitcairn; double the performance and keep the power envelope the same. That gets you R9 390 performance at Pitcairn power levels, i.e. 2x perf/W.
I'm going by what was reported in the Polaris demo shown to the press. The Polaris 10 card was put up against a GTX 950. Now, the 950 is about equivalent in performance to AMD's R9 270X (TechPowerUp's newest performance summary charts have them neck-and-neck at 1080p). The reference R9 270X, when tested by TPU, averaged 111W during gaming; there is no reference GTX 950, but the ones TPU tested average about 95W. In the demo, the Polaris 10 system was pulling 86W at the wall in a gaming scenario, compared to 140W for the GTX 950 system. That would mean the Polaris 10 card itself couldn't be pulling much more than 40W. So we're talking close to three times the power efficiency of 28nm GCN, unless you count the Fury Nano, which they might be.
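For anyone who wants to check the arithmetic, here's a rough sketch of that estimate. The card-level figures are the TPU averages quoted above; the PSU efficiency values are just guesses on my part:

```python
# Back-of-the-envelope reconstruction of the estimate above. The card-level
# GTX 950 and R9 270X figures are TPU's gaming averages quoted in the post;
# the PSU efficiency values are assumptions.
wall_950, wall_polaris = 140.0, 86.0   # W at the wall, from the demo
gtx950_card = 95.0                     # W, average retail GTX 950 (TPU)
r9_270x_card = 111.0                   # W, reference R9 270X (TPU)

wall_delta = wall_950 - wall_polaris   # 54 W difference at the wall

# Take the wall delta at face value, or discount it for PSU losses:
for psu_eff in (1.00, 0.90, 0.85):
    card_delta = wall_delta * psu_eff        # estimated card-level difference
    polaris_card = gtx950_card - card_delta  # estimated Polaris card draw
    ratio = r9_270x_card / polaris_card      # perf/W gain at ~270X performance
    print(f"PSU eff {psu_eff:.0%}: Polaris ~{polaris_card:.0f} W, "
          f"~{ratio:.1f}x perf/W vs 270X")
```

Depending on how much you discount for PSU losses, the Polaris card lands somewhere in the roughly 40-50W range, so the exact multiplier moves around a bit, but it's well past 2x either way.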
Regarding the last node, there were a lot of issues with Tahiti that make it a poor comparison. It was the first chip not only on a new node but also of a new architecture, which meant a very conservative design. Pitcairn came only a couple of months later, and AMD had already made the memory controllers far smaller and more efficient and pushed transistor density from 12.25 million to 13.21 million transistors per square millimeter.
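Those per-mm² numbers just fall out of the commonly quoted transistor counts and die sizes (about 4,313 million in 352 mm² for Tahiti, 2,800 million in 212 mm² for Pitcairn); quick sanity check:

```python
# Density check from the commonly quoted transistor counts and die sizes
# (the counts/areas are the usually cited figures, not official breakdowns).
tahiti_density = 4313 / 352      # M transistors per mm^2, ~12.25
pitcairn_density = 2800 / 212    # M transistors per mm^2, ~13.21
print(f"Tahiti:   {tahiti_density:.2f} Mtr/mm^2")
print(f"Pitcairn: {pitcairn_density:.2f} Mtr/mm^2")
print(f"Pitcairn density advantage: {pitcairn_density / tahiti_density - 1:.1%}")
```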
Also, the move to the 14nm Samsung-based process is a bit more than a full node shrink. 40nm -> 28nm reduced the gate pitch by about 30% and the metal pitch by about 25%; 28nm -> 14nm reduces the gate pitch by about 34% and the metal pitch by about 29%. Not a huge difference, but every little bit adds up. And we've seen with the A9 chips that Samsung's process is denser than TSMC's.
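As a rough way to see what those pitch reductions mean for density: ideal area scaling is roughly the product of the gate-pitch and metal-pitch shrink factors. That ignores design rules, cell libraries, and the logic/SRAM mix, so treat it as an upper bound rather than a prediction:

```python
# Ideal density scaling implied by the pitch reductions above. This is a
# simplification: real density also depends on design rules, cell libraries,
# and how much of the die is logic vs. SRAM and analog.
def ideal_density_gain(gate_shrink: float, metal_shrink: float) -> float:
    area_factor = (1 - gate_shrink) * (1 - metal_shrink)  # relative cell area
    return 1 / area_factor                                # density multiplier

print(f"40nm -> 28nm: ~{ideal_density_gain(0.30, 0.25):.2f}x")  # ~1.9x
print(f"28nm -> 14nm: ~{ideal_density_gain(0.34, 0.29):.2f}x")  # ~2.1x
```

Which is roughly where the "about 2x density" expectation comes from in the first place.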
40-50%, you say? Can I quote the bolded part in my signature so we can revisit all these crazy claims when reality sets in?
I could be wrong. I'd be reluctant to bet more than ten bucks on any of the speculations I'm making here. AMD has dropped the ball before, and they could disappoint again. But they really do seem to be going all-in on FinFET. And they have a pretty good history of pulling off node shrinks effectively and with few hitches.
Let me ask you: what effect do
you think FinFET will have on GPU clock speeds?