The recent launch of Maxwell (the ~400mm^2 GTX 980) did not produce a much faster chip than the existing ~560mm^2 Kepler 780 Ti; perhaps 5% faster at stock.
With an overclock that gap will undoubtedly widen, to perhaps 10-15% (assuming the top 780 Ti could sustain roughly 1300-1350 MHz on water 24/7; any higher and you'll be using voltages that seriously reduce the life of the chip). Perhaps 1600-1700 MHz might be realistic for "little" Maxwell on water?
But here's the issue: the leap from, say, the 580 to the 680 brought roughly a 30% gain in gaming performance. That was a ~520mm^2 die (GTX 580) giving way to a ~300mm^2 die (GTX 680). Granted, it came with a die shrink from 40nm to 28nm, but it was still a substantial gain, even more so once you factor in overclocking headroom.
The leap from the 780 Ti to the 980 does not appear to be anything close to the 580-to-680 leap.
A "big" >550mm^2 Maxwell might perform perhaps 30-40% faster than the current 980 (there's less headroom than the 680-to-780 Ti jump had, because the 680 was a ~300mm^2 chip while the 980 is already ~400mm^2), unless they make a >600mm^2 chip.
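To put numbers on why the Kepler-to-Maxwell leap feels smaller, here's a back-of-the-envelope comparison of the two generational jumps using the rough die sizes and gains quoted above. These are this post's guesses, not benchmarks:

```python
# Back-of-the-envelope comparison of the two generational leaps discussed
# above. All figures are the rough numbers from this post, not measurements.

gens = {
    # leap: (old die mm^2, new die mm^2, perf gain %)
    "580 -> 680":   (520, 300, 30),  # came with a 40nm -> 28nm shrink
    "780Ti -> 980": (560, 400, 5),   # both on 28nm
}

for leap, (old_mm2, new_mm2, gain) in gens.items():
    # Perf-per-area ratio: how much more work each mm^2 of the new die
    # does, treating the old chip as the 1.0x baseline.
    per_area = (1 + gain / 100) / (new_mm2 / old_mm2)
    print(f"{leap}: {gain}% faster on {new_mm2 / old_mm2:.0%} of the area "
          f"-> {per_area:.2f}x perf per mm^2")
```

Even on those crude numbers, Maxwell's per-area improvement over big Kepler looks clearly smaller than what the 680 managed over the 580 (the shrink did a lot of the work there).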
We have yet to see what AMD has to offer at this point, but I'd imagine that the gains would be comparable to Maxwell - perhaps somewhat better with GCN.
So this leaves the question - where do GPUs go from here?
- Moore's Law seems to have stalled at 28nm. Price per transistor is going up with each new node generation; in fact, low- and mid-end parts may stay on 28nm for good. Fab costs are rising while the marginal benefit of moving to the next node is dropping. Technologies like EUV and 450mm wafers appear problem-plagued. It's been claimed that 20nm FD-SOI may give Moore's Law a new lease on life (STMicro especially says this), but I remain skeptical (though I hope they're right).
- This means GPUs will have to rely mostly on architectural gains per generation. Some technologies are exciting, like HBM, but for how long will we continue to see performance gains based mostly on architecture? See my thoughts above on Kepler to Maxwell. Whatever comes next from AMD and Nvidia will probably bring even smaller marginal gains.
- We might see 16nm FinFET GPUs, but I'd imagine they'd only be ~15% faster at a given power level than their 28nm counterparts. Compounding the issue, their price per transistor might be higher and their OC headroom lower owing to leakage.
- How big could a die get? As the 28nm process matures we could see higher yields, but eventually that will flatten out. The largest die I have ever heard of was Intel's Tukwila at ~700mm^2, which must be near the reticle limit. Could we ever see a big 700mm^2 GPU at 28nm? Could dies get any bigger than that (we're talking >1000mm^2 here)?
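The bullets above really describe two possible paths forward at fixed architecture: grow the die toward the reticle limit on 28nm, or take the shrink. A quick sketch, plugging in this post's guesses (the ~400mm^2 980 as the baseline, Tukwila's ~700mm^2 as the ceiling, and the ~15% FinFET figure, all assumptions rather than data):

```python
# Two paths forward at a fixed architecture, using this post's guesses.

BASE_MM2 = 400     # "little" Maxwell (GTX 980), per the post
RETICLE_MM2 = 700  # biggest die the post mentions (Intel Tukwila)

# Path 1: stay on 28nm and grow the die. Optimistically assume perf
# scales roughly with die area (in practice it scales worse than that).
big_die_gain = RETICLE_MM2 / BASE_MM2 - 1

# Path 2: the ~15% iso-power gain guessed above for 16nm FinFET.
finfet_gain = 0.15

print(f"28nm reticle-limit die: up to ~{big_die_gain:.0%} more silicon")
print(f"16nm FinFET shrink:     ~{finfet_gain:.0%} at the same power")
```

On those assumptions the big-die route has far more raw headroom left than the shrink does, which is presumably why a "big Maxwell" is the interesting chip.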
Does this mean that barring a breakthrough, like in III-V materials, we are starting to see GPUs end up like CPUs?
On one hand, this stagnation is disappointing. On the other hand, it may finally make sense, for the first time, to go quad-GPU, knowing that next year's GPUs may not be much faster.
I'm thinking GPUs may end up like CPUs. Let me put it this way: imagine you own a 2600K and had good luck in the silicon lottery (e.g. 5GHz+ at 1.45V or under, stable in Intel Burn Test). Haswell, even with Devil's Canyon, might prove a sidegrade at 4 cores, unless you need the new instruction sets, in which case an "E" series might be justifiable.
Are GPUs starting to end up like that? Granted, GPUs are far more parallel in nature than CPUs, but they are still limited by architecture and die shrinks.
Historically, AMD's GPUs have generally offered better 3-way and 4-way scaling.

Considering Maxwell SLI scaling at 4K was only about 60%, I do not see that changing.
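If next year's single GPU really won't be much faster, the quad-GPU math looks like this. A crude projection, optimistically assuming each extra card keeps adding the same ~60% of one card's performance (in reality 3-way and 4-way usually scale worse, which is where AMD's historical edge matters):

```python
# Crude multi-GPU projection from the ~60% 2-way SLI figure above.
# Optimistic assumption: every extra card adds the same fraction of
# one card's performance (real 3/4-way scaling is usually worse).

PER_EXTRA_CARD = 0.60  # this post's rough Maxwell SLI scaling at 4K

for n in range(1, 5):
    total = 1 + (n - 1) * PER_EXTRA_CARD
    print(f"{n}-way: ~{total:.1f}x a single card")
```

Even under that generous assumption, a fourth card only takes you from ~2.2x to ~2.8x, so the case for quad-GPU rests entirely on single-GPU gains staying this small.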