- Nov 4, 2015
- 148
- 66
- 66
While the transition to FinFETs should help with AMD/RTG's performance per watt, is this the only thing Polaris has going for it, or will the architecture be designed with efficiency as a higher priority than in the previous generation?
To put it roughly, based on most benchmarks, AMD's performance per watt seems to be about 70-80% of Nvidia's. To put it in perspective from my own findings, my R9 390 runs at power limit -30% / vcore -30 mV at all times, which brings its power consumption roughly in line with a GTX 980 while delivering only about 75-80% of the performance. Nvidia has come a long way since Fermi: the jump from Fermi to Kepler was akin to Pentium 4 to Conroe, and Maxwell somehow repeated that feat even on the same process node.
If Nvidia can manage another leap like that through architectural optimization alone, independent of the benefits of a smaller fab process, is there any way for AMD to catch up?
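For concreteness, here is a minimal sketch of the arithmetic behind that perf-per-watt estimate. The ~180 W board power and the 77.5% relative performance figure are illustrative assumptions, not measurements from the post:

```python
# Rough sketch of the perf-per-watt comparison described above.
# The wattage and performance numbers below are illustrative assumptions.

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Performance per watt, with performance expressed relative to a baseline."""
    return relative_perf / watts

# Assume the undervolted R9 390 draws about the same power as a GTX 980
# (say ~180 W board power) but delivers only ~75-80% of its performance.
gtx_980 = perf_per_watt(1.00, 180)      # baseline: 100% perf at ~180 W
r9_390_uv = perf_per_watt(0.775, 180)   # ~77.5% perf at similar power

print(f"R9 390 (undervolted) perf/W relative to GTX 980: "
      f"{r9_390_uv / gtx_980:.0%}")     # -> ~78%, i.e. the 70-80% ballpark
```

With power held roughly equal, the perf-per-watt ratio collapses to the performance ratio, which is why the 75-80% performance figure maps directly onto the 70-80% efficiency estimate.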