What comes after 28nm GPUs?


RussianSensation

Elite Member
Sep 5, 2003
I've been reading GameGPU benchmarks for years and following CPU benchmarks all over forums and testing it myself as I moved from Q6600 @ 3.4ghz to i7 860 @ 3.9ghz to i5 2500K @ 4.5ghz, etc. Since 1st generation i7, CPU performance has barely improved.

Like I said most of the advancement has come through a reduction in power consumption. Even if you have GTX690, in 99% of games there will be less than 25% performance difference between an i7 920 @ 4.0ghz and an i7 3770K @ 4.6ghz. That's a joke. And also, testing CPUs at 1280x800 is meaningless. People who have high-end flagship cards like 680/690/Titan are not gaming at those resolutions. So if an Intel CPU cannot improve avg benchmarks or minimums, that only exacerbates the lack of upgrade value. Your assumptions are perfectly fine but they never translated into real world gaming. If you go back to pre-Nehalem era, this was not the case. Moving from Pentium 4 1.8ghz to Pentium 4 C @ 3.6ghz to Core 2 Duo @ 3.6ghz to Nehalem i7 920 @ 4.0ghz provided MASSIVE gains in gaming performance even at 1920x1080.
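One way to picture why the measured gap at 1920x1080 stays so small between very different CPUs is a frame-time bottleneck model: a frame can only finish as fast as its slowest stage. A minimal sketch in Python, using made-up per-frame costs rather than measured ones:

Code:
# Toy frame-time model: a frame cannot finish faster than its slowest stage.
# The numbers below are invented for illustration, not benchmark results.

def fps(cpu_ms, gpu_ms):
    """Effective fps when CPU and GPU work overlap and the slower one gates the frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms_1080p_maxed = 20.0  # assumed GPU-bound frame time at 1920x1080, max settings
old_cpu_ms = 12.0          # assumed per-frame CPU cost on the older CPU
new_cpu_ms = 8.0           # assumed per-frame CPU cost on a CPU that is 50% faster

print(fps(old_cpu_ms, gpu_ms_1080p_maxed))  # 50.0 fps -> the GPU is the limit
print(fps(new_cpu_ms, gpu_ms_1080p_maxed))  # 50.0 fps -> the CPU upgrade changes nothing

# Only when the GPU load drops (lower resolution/settings, or a faster card)
# does the CPU difference show up:
gpu_ms_720p = 6.0          # assumed GPU frame time at a low, CPU-test resolution
print(fps(old_cpu_ms, gpu_ms_720p))  # ~83 fps
print(fps(new_cpu_ms, gpu_ms_720p))  # 125 fps -> now the faster CPU is visible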
 

OCGuy

Lifer
Jul 12, 2000
RussianSensation said:
I've been reading GameGPU benchmarks for years and following CPU benchmarks all over forums and testing it myself as I moved from Q6600 @ 3.4ghz to i7 860 @ 3.9ghz to i5 2500K @ 4.5ghz, etc. Since 1st generation i7, CPU performance has barely improved.

Like I said most of the advancement has come through a reduction in power consumption. Even if you have GTX690, in 99% of games there will be less than 25% performance difference between an i7 920 @ 4.0ghz and an i7 3770K @ 4.6ghz. That's a joke. And also, testing CPUs at 1280x800 is meaningless. People who have high-end flagship cards like 680/690/Titan are not gaming at those resolutions. So if an Intel CPU cannot improve avg benchmarks or minimums, that only exacerbates the lack of upgrade value. Your assumptions are perfectly fine but they never translated into real world gaming. If you go back to pre-Nehalem era, this was not the case. Moving from Pentium 4 1.8ghz to Pentium 4 C @ 3.6ghz to Core 2 Duo @ 3.6ghz to Nehalem i7 920 @ 4.0ghz provided MASSIVE gains in gaming performance even at 1920x1080.


The problem isn't Intel or AMD on the CPU side. The problem is the programmers and the people who fund their development. It is not easy to properly utilize 4+ true CPU "cores", and if it isn't easy, it isn't cheap. And if it isn't cheap, it may not be in the budget, even for AAA games. So we find games with potential for graphics being dumbed-down for consoles, and we find games with good graphics being bottlenecked by poor coding for multi-threaded CPU usage.
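As a rough illustration of where the difficulty lies, here is a deliberately simplified sketch (invented subsystem names, not how any particular engine schedules its work): even when independent per-frame tasks are farmed out to a thread pool, whatever has to stay serial, such as submitting the frame, caps how much extra cores can buy.

Code:
# Deliberately simplified sketch of per-frame work split across threads.
# Subsystem names are invented; real engines use job systems with dependency
# graphs, and in CPython the GIL would serialize pure-Python work anyway;
# this only illustrates the structure, not real parallel speedup.
from concurrent.futures import ThreadPoolExecutor

def update_ai(dt): ...         # placeholder subsystem work
def update_physics(dt): ...
def update_audio(dt): ...
def update_particles(dt): ...

def submit_frame(): ...        # must happen once, in order, after the updates

def run_frame(pool, dt):
    # Tasks assumed to be independent can run in parallel...
    futures = [pool.submit(task, dt) for task in
               (update_ai, update_physics, update_audio, update_particles)]
    for fut in futures:
        fut.result()           # wait for all of them to finish
    # ...but this serial tail gets no faster with more cores (Amdahl's law).
    submit_frame()

with ThreadPoolExecutor(max_workers=4) as pool:
    run_frame(pool, dt=1 / 60)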

That is why I cherish the ARMA series...pushes everything to the max. If you play Counterstrike at 1080p and buy anything over a GTX460 or HD5850 you need to give mom back her credit card.
 

boxleitnerb

Platinum Member
Nov 1, 2011
RussianSensation said:
I've been reading GameGPU benchmarks for years and following CPU benchmarks all over forums and testing it myself as I moved from Q6600 @ 3.4ghz to i7 860 @ 3.9ghz to i5 2500K @ 4.5ghz, etc. Since 1st generation i7, CPU performance has barely improved.

Depends on what games you play and what qualifies as barely. Generalizations don't work well with CPUs and games.

RussianSensation said:
Like I said most of the advancement has come through a reduction in power consumption. Even if you have GTX690, in 99% of games there will be less than 25% performance difference between an i7 920 @ 4.0ghz and an i7 3770K @ 4.6ghz. That's a joke.

Again, depends on the scenario. In mainly GPU-limited scenarios/games/settings the difference will be small. In simulations, strategy games etc. or at different settings/scenarios it will be larger.

RussianSensation said:
And also, testing CPUs at 1280x800 is meaningless. People who have high-end flagship cards like 680/690/Titan are not gaming at those resolutions. So if an Intel CPU cannot improve avg benchmarks or minimums, that only exacerbates the lack of upgrade value.

No, it is not. Increasing resolution and enabling AA/AF doesn't increase fps. Look at the recent Crysis 3 results. Would you want to play a fast first-person shooter at 30-40fps avg?
http://www.pcgameshardware.de/Crysis-3-PC-235317/Tests/Crysis-3-Benchmark-Test-Grafikkarten-1056218/
I certainly wouldn't want to. How else would I know that with proper settings I could achieve close to 60fps avg in this game?
http://www.pcgameshardware.de/Crysis-3-PC-235317/Tests/Crysis-3-Test-CPU-Benchmark-1056578/

CPU benchmarks at 1280x720 or 1280x800 show what fps your system can ultimately achieve CPU-wise. If you belong to group A, which would rather play at lower fps in a GPU bottleneck, these results are not for you. But if you belong to group B, which needs higher fps and doesn't shy away from lowering resolution or settings to get them, those tests are invaluable. We shouldn't be so arrogant as to dictate where the fps bar should be for everyone. Turning down options to achieve comfortable framerates isn't forbidden; it's a perfectly viable option. People often forget that, because GPU benchmarks are always run with maxed-out graphics, but that is not necessarily a realistic scenario for everyone.

Of course that varies by game. If you already get 150+ fps at 1280x720 with mid-range CPUs, the results are not really relevant. But I don't think it is a good idea to benchmark game A at 720p, game B at 1080p and game C at 1600p. There has to be some kind of consistency, so we're stuck with 720p as the best choice, and we can sort out the less useful cases afterwards rather than missing valuable information in the first place.
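To make the "group B" use of those numbers concrete, here is a hedged sketch (every figure in it is a placeholder, not taken from the linked PCGH tests): the 720p CPU benchmark gives a ceiling that no graphics setting can lift you above, while the GPU results at various settings tell you whether turning options down is actually worth it.

Code:
# Sketch: combining a low-resolution CPU benchmark with GPU benchmarks.
# All numbers are hypothetical placeholders, not the PCGH results.

cpu_cap_fps = 70.0         # assumed avg fps for your CPU in a 720p CPU test
gpu_fps_by_setting = {     # assumed avg fps for your GPU at various settings
    "1080p max": 38.0,
    "1080p high": 55.0,
    "1080p medium": 78.0,
}
target_fps = 60.0

for setting, gpu_fps in gpu_fps_by_setting.items():
    # The achievable frame rate is gated by whichever limit is lower.
    achievable = min(cpu_cap_fps, gpu_fps)
    verdict = "OK" if achievable >= target_fps else "below target"
    print(f"{setting}: ~{achievable:.0f} fps ({verdict})")

# If cpu_cap_fps itself were below 60, no settings change would help,
# which is exactly what a 1080p-only test could not have told you.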

RussianSensation said:
Your assumptions are perfectly fine but they never translated into real world gaming. If you go back to pre-Nehalem era, this was not the case. Moving from Pentium 4 1.8ghz to Pentium 4 C @ 3.6ghz to Core 2 Duo @ 3.6ghz to Nehalem i7 920 @ 4.0ghz provided MASSIVE gains in gaming performance even at 1920x1080.

Yes, they do translate, just not always. If you play Crysis 3 at 30fps, then you're right and they don't. If you play BF3 multiplayer at the highest fps possible, X3 in a crowded sector, Shogun 2 in a large battle, Skyrim with ugridstoload raised for higher object density in the distance, etc., then they do. I try not to generalize, since workloads and standards can be very different.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
OCGuy said:
If you play Counterstrike at 1080p and buy anything over a GTX460 or HD5850 you need to give mom back her credit card.

What about pro gamers who routinely turn down settings to minimum and aim for insane fps? :p

But... yeah, I agree, multithreaded coding is apparently a tough nut to crack, and often infeasible given time/money constraints for many studios, even when it's possible at all.
 

Imouto

Golden Member
Jul 6, 2011
The second image may have 10x the poly count, but the design is way worse.
 

Spjut

Senior member
Apr 9, 2011
OCGuy said:
The problem isn't Intel or AMD on the CPU side. The problem is the programmers and the people who fund their development. It is not easy to properly utilize 4+ true CPU "cores", and if it isn't easy, it isn't cheap. And if it isn't cheap, it may not be in the budget, even for AAA games. So we find games with potential for graphics being dumbed-down for consoles, and we find games with good graphics being bottlenecked by poor coding for multi-threaded CPU usage.

That is why I cherish the ARMA series...pushes everything to the max. If you play Counterstrike at 1080p and buy anything over a GTX460 or HD5850 you need to give mom back her credit card.

It's apparently an API problem. Draw calls are still mainly being done from 1 thread.

I don't remember what site it was from originally, but here's a quote of an explanation:
http://forums.tripwireinteractive.com/showthread.php?t=69260
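In rough outline, the pattern described there looks something like the sketch below (a conceptual illustration with invented names, not any specific engine's or API's code): worker threads can prepare draw work in parallel, but the draw calls themselves are still issued from a single render thread.

Code:
# Conceptual sketch of the "one submission thread" pattern described above.
# Names and structure are invented for illustration; a real renderer records
# command buffers through the graphics API rather than pushing strings on a queue.
import queue
import threading

draw_queue = queue.Queue()

def build_draw_commands(worker_id, objects):
    # Parallel part: cull, sort and encode draw commands for a slice of the scene.
    for obj in objects:
        draw_queue.put((worker_id, f"draw {obj}"))

def render_thread():
    # Serial part: the API context is owned by one thread, so every draw call
    # is issued from here, no matter how many workers prepared the work.
    while True:
        item = draw_queue.get()
        if item is None:
            break
        worker_id, cmd = item
        # device.issue(cmd)   # stand-in for the actual draw call into the API

scene_slices = [["tree", "rock"], ["npc", "building"], ["terrain"], ["sky"]]
workers = [threading.Thread(target=build_draw_commands, args=(i, s))
           for i, s in enumerate(scene_slices)]
renderer = threading.Thread(target=render_thread)

renderer.start()
for w in workers:
    w.start()
for w in workers:
    w.join()
draw_queue.put(None)       # tell the render thread to stop
renderer.join()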