Discussion: Anand, the last tech reviewer who still does it properly


Intrepid3D

Junior Member
Jun 20, 2017
9
1
41
For whatever reason, a lot of tech reviewers recently make no effort to find differences in performance between the CPUs they test when it comes to gaming.

There is a trend now to publish bar charts that are a solid block of near-equal-length bars across as many as 20 different CPUs. In the most extreme case I have seen, a 3300X gets 95% of the performance of a 10900K or 5800X at 1080p. Really?

I'm not going to point fingers at individual publishers, but it seems pretty prevalent now that they have no interest in finding which CPUs are good at gaming and which are not as good, or bad. It's as if they all want to come to a predefined conclusion, the one they always make: "they are all the same."
No winners, no losers, everyone gets a prize. Well, that's nice, but for me, researching my next hardware, it tells me absolutely nothing. Am I really supposed to believe that a 3300X is just as good as a 5800X?

It's obvious to me, but only because I understand a bit about it, that they are all publishing slides where the GPU, not the CPU, is the limitation. I'm looking at a performance test of the GPU they used, not the CPUs.
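
To put some rough numbers on what I mean (made-up figures, purely to illustrate the bottleneck arithmetic): the frame rate you see on the chart is roughly the lower of what the CPU can feed and what the GPU can render, so once the GPU's cap sits below every CPU's ceiling, all the bars end up the same length.

```python
# Rough sketch of why GPU-limited charts flatten CPU differences.
# All numbers below are invented for illustration, not measurements.

# Maximum FPS each CPU could drive if the GPU were infinitely fast
cpu_ceiling_fps = {"3300X": 110, "10900K": 160, "5800X": 175}

def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The slower of the two pipelines sets the frame rate."""
    return min(cpu_fps, gpu_fps)

# With a GPU that caps out below every CPU's ceiling (e.g. 4K ultra)...
gpu_cap = 90
print({cpu: observed_fps(fps, gpu_cap) for cpu, fps in cpu_ceiling_fps.items()})
# -> every bar reads ~90 FPS: a GPU test, not a CPU test

# ...versus a light GPU load where the CPU differences reappear
gpu_cap = 300
print({cpu: observed_fps(fps, gpu_cap) for cpu, fps in cpu_ceiling_fps.items()})
# -> 110 vs 160 vs 175 FPS: now the CPU is what's being measured
```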

It's only when you come to AnandTech that you see there is actually a tangible difference between these CPUs. For example: https://www.anandtech.com/show/16535/intel-core-i7-11700k-review-blasting-off-with-rocket-lake

With that i would like to extend my thanks to AnandTech for still doing it properly. Thank you.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
It's a pretty good look at how the CPU will do five or so years down the line. There was a recent HUB video about Nvidia having some driver overhead issues in more recent DX12 titles that spurred a bit of investigation, and regardless of whether you use an AMD or Nvidia GPU, for some titles the CPU will hold you back at 1080p. An original Zen CPU can bottleneck enough that a 3080 is no better than a 1080 in terms of FPS.

Not everyone upgrades every time a new generation comes out. Trying to future-proof when you know that you'll be holding on to a system for 5+ years is certainly worthwhile. However, if you're just going to game at 4K ultra, then the CPU doesn't matter at all for the most part. You could buy a Celeron and it would do almost as well as a top-of-the-line i9.
What you'd want to do is look back at Haswell era reviews run at 360p or other ridiculously low resolutions and see if the differences you see back then actually carry forward to running new games on modern high end GPUs. If pushing settings to places no one would ever play at just to make it so all the bars in your graph aren't the same length doesn't translate to actual differences at the resolutions people play at in the future, what's the point?
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
What you'd want to do is look back at Haswell era reviews run at 360p or other ridiculously low resolutions and see if the differences you see back then actually carry forward to running new games on modern high end GPUs. If pushing settings to places no one would ever play at just to make it so all the bars in your graph aren't the same length doesn't translate to actual differences at the resolutions people play at in the future, what's the point?

They actually do. There have been some posts over in the GPU forums about driver overhead in DX12 games bottlenecking Nvidia GPUs. Even more recent CPUs like the original Zen lineup will bottleneck anything above a 1070 in some games. There are even cases where AMD GPUs are affected as well, so it's not just an Nvidia problem.

The funny thing is that around the time of the original Zen launch it was the Intel fans using these same low resolution benchmarks to point out that this could cause a problem in the future. I was a little skeptical of it at the time, but I think there's plenty of evidence now to suggest that it can become an actual issue.

One of the more recent posts in the thread linked to some results where you basically need one of the newest Zen 3 CPUs in order not to bottleneck a 3090 when running ultra RTX settings at 1080p in Cyberpunk. Even at 1440p you can still see CPU bottlenecks if using anything older than Coffee Lake or Zen 2.

[Chart: Cyberpunk 2077 CPU scaling results]

(Image taken from: https://gamegpu.com/action-/-fps-/-tps/cyberpunk-2077-2020-test-gpu-cpu)
 

ondma

Platinum Member
Mar 18, 2018
2,721
1,281
136
They actually do. There have been some posts over in the GPU forums about driver overhead in DX12 games bottlenecking Nvidia GPUs. Even more recent CPUs like the original Zen lineup will bottleneck anything above a 1070 in some games. There are even cases where AMD GPUs are affected as well, so it's not just an Nvidia problem.

The funny thing is that around the time of the original Zen launch it was the Intel fans using these same low resolution benchmarks to point out that this could cause a problem in the future. I was a little skeptical of it at the time, but I think there's plenty of evidence now to suggest that it can become an actual issue.

One of the more recent posts in the thread linked to some results where you basically need one of the newest Zen 3 CPUs in order not to bottleneck a 3090 when running ultra RTX settings at 1080p in Cyberpunk. Even at 1440p you can still see CPU bottlenecks if using anything older than Coffee Lake or Zen 2.

[Chart: Cyberpunk 2077 CPU scaling results]

(Image taken from: https://gamegpu.com/action-/-fps-/-tps/cyberpunk-2077-2020-test-gpu-cpu)
Obviously, old quad-core CPUs are slower than new CPUs with more cores, better IPC, and faster clock speeds. Who woulda thunk it? I am still not sure how this relates to ultra-low-resolution benchmarks, though. One would have to compare the low-res benchmarks from both Intel and AMD when the CPUs were current against current games and see if the relative performance is the same. I don't see that data here.
In any case, it still raises the question: is anyone going to run a Haswell quad core with a 3090?
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
So, in regard to the chart above and the talk of Zen/Zen+ inadequacies, that does seem to be the case, at least for Nvidia. If you go through a review from that era with low-res benchmarks, though, does it match the above? This is only 720p instead of really low, but it's from the Guru3D 2700X review.
[720p benchmark charts from the Guru3D Ryzen 7 2700X review]

The 2700X trails the 8700K, of course, but trades benchmarks back and forth with the 4790K. The 4790K and 5960X also trade back and forth; neither is a clearly superior gaming option. Basically, if I were a buyer back then looking at the Guru3D review, would I learn anything from that 720p testing that would give me insight into what's going to bottleneck my GPU in 2020/2021?

I'm not saying that there aren't CPU bottlenecks, just questioning whether cranking down the resolution and settings to make it look like you're seeing big differences in the numbers will actually help you identify cases like Haswell falling off a cliff. It's not even just that 4 cores isn't enough, because the quad-core Zen+ 3400G mops the floor with the 4770K.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
The relative performance of Zen 3 in the 320p gaming benchmarks AT does is exaggerated by its bigger caches (32-64 MB). Once the GPU starts asking questions of the CPU that require going to main memory, at resolutions most users play at, that advantage largely disappears and Comet Lake-S shows its strengths. In the 320p gaming tests, AT has found a way to test the strengths of Zen 3's humongous unified cache system. The 320p gaming test is largely a cache benchmark.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,407
2,440
146
Very curious Tomb Raider benchmark there; I noticed the 5960X does better than its successor, the 6950X. That doesn't make any sense, given the Broadwell-E has the same boost clocks, higher IPC, 2 more cores, and an L4 cache. I wonder what is going on there?
 

Bigos

Member
Jun 2, 2019
129
287
136
The relative performance of Zen 3 in the 320p gaming benchmarks AT does is exaggerated by its bigger caches (32-64 MB). Once the GPU starts asking questions of the CPU that require going to main memory, at resolutions most users play at, that advantage largely disappears and Comet Lake-S shows its strengths. In the 320p gaming tests, AT has found a way to test the strengths of Zen 3's humongous unified cache system. The 320p gaming test is largely a cache benchmark.

You clearly have no idea what you are talking about.

First, GPUs don't ask the CPU questions. Only when the working set doesn't fit in VRAM does the CPU need to upload textures on demand from main memory. But the size of these textures usually doesn't correlate with the resolution but with the graphics settings in the game. And they are so big that they don't fit in the CPU's L3 cache anyway.

Second, the CPU is tasked with multiple things, none of which is correlated with the resolution the GPU renders at: game logic, AI, physics, input, audio, networking, pipeline compilation, texture and buffer uploads, and draw call preparation and submission.

The one thing that *could* change the workload on the CPU is changing the graphics quality settings. Simpler shaders and smaller textures are faster to compile and upload (though that should not affect framerates anyway, only load times). Fewer effects usually means fewer draw calls. Lower-resolution geometry is probably also easier to process and upload. Without sufficient data (like low-resolution tests at ultra quality) it is difficult to quantify this effect, and it can vary between games.
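
To make that concrete, here is a toy per-frame cost model (every number in it is invented purely for illustration): the CPU-side tasks listed above cost the same no matter the render resolution, only the GPU term scales with pixel count, and whichever side takes longer per frame sets the frame rate.

```python
# Toy frame-time model with made-up numbers: CPU work is resolution-independent,
# only the GPU term scales with how many pixels are being shaded.

# Per-frame CPU costs in milliseconds (game logic, AI, physics, audio/input,
# draw-call preparation and submission) - none of these depend on resolution.
CPU_TASKS_MS = {
    "game_logic": 2.0,
    "ai_and_physics": 3.0,
    "audio_and_input": 0.5,
    "draw_call_submission": 4.0,  # depends on effects/settings, not resolution
}

def frame_rate(width: int, height: int, gpu_ms_per_mpixel: float = 6.0) -> float:
    """FPS at a given render resolution under this toy model."""
    cpu_ms = sum(CPU_TASKS_MS.values())                  # constant per frame
    gpu_ms = (width * height / 1e6) * gpu_ms_per_mpixel  # scales with pixels
    # CPU and GPU work largely overlap, so the longer of the two sets the pace.
    return 1000.0 / max(cpu_ms, gpu_ms)

for width, height in [(3840, 2160), (1920, 1080), (640, 360)]:
    print(f"{width}x{height}: {frame_rate(width, height):.0f} FPS")
# 4K and 1080p come out GPU-bound here; at 360p the constant ~9.5 ms of CPU
# work becomes the limit, which is what the low-resolution tests expose.
```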
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Only when the working set doesn't fit in VRAM does the CPU need to upload textures on demand from main memory. But the size of these textures usually doesn't correlate with the resolution but with the graphics settings in the game.
Thank you for confirming my point.
And downvoting me.
 

Bigos

Member
Jun 2, 2019
129
287
136
Could you please quote the entire paragraph you are responding to? You have omitted one pretty important aspect: the L3 cache being too small for the texture sizes. And what's more, with enough VRAM such on-demand uploads are unnecessary, and they will not happen every single frame even if they are needed (on low-VRAM GPUs).
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
You reduce the resolution until you max out the main thread; that's as fast as the CPU will ever be able to play that game.

If you get more FPS than that by reducing the resolution further, then that's the FPS you get in situations where the game mechanics are irrelevant, like in Deus Ex when you are in the ducts and get hundreds of FPS more, or when you look at a wall from very close up, and so on. That's the FPS that those ultra-low-res benchmarks show you.

I don't think there are too many people that care about a big difference in those scenarios.
Also at that point the bandwidth between CPU and GPU could be a bigger factor than actual CPU performance.
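
For what it's worth, the procedure can be sketched in a few lines (the measure_fps helper here is hypothetical, a stand-in for however you actually capture an average FPS at a given resolution):

```python
# Sketch: drop the resolution step by step until FPS stops climbing.
# That plateau is the main-thread / CPU-bound ceiling for the game.
# measure_fps(width, height) is a hypothetical stand-in for a real benchmark run.

RESOLUTIONS = [(3840, 2160), (2560, 1440), (1920, 1080), (1280, 720), (640, 360)]

def find_cpu_ceiling(measure_fps, plateau_tolerance: float = 0.03) -> float:
    previous = None
    for width, height in RESOLUTIONS:
        fps = measure_fps(width, height)
        print(f"{width}x{height}: {fps:.0f} FPS")
        # If lowering the resolution barely changed anything, the GPU is no
        # longer the limit - we're looking at the CPU/main-thread ceiling.
        if previous is not None and fps <= previous * (1 + plateau_tolerance):
            return fps
        previous = fps
    return previous  # never plateaued: still GPU-bound even at the lowest step
```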
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
The relative performance of Zen 3 in the 320p gaming benchmarks AT does is exaggerated by its bigger caches (32-64 MB). Once the GPU starts asking questions of the CPU that require going to main memory, at resolutions most users play at, that advantage largely disappears and Comet Lake-S shows its strengths. In the 320p gaming tests, AT has found a way to test the strengths of Zen 3's humongous unified cache system. The 320p gaming test is largely a cache benchmark.

While that's likely true and the low resolution results are a bit more exaggerated as a result, we do see Zen 3 performing best in other tests where there's a CPU limit, so either the cache offers an advantage even at 1080p or the IPC is responsible. Perhaps a little of both, but it is something that is definitely observable.

The low-resolution benchmarks which show the original Zen CPUs being behind, coupled with their poor performance in modern titles, do show the importance of these benchmarks. This isn't something really new either. If you go back to posts from 2017 there were already people pointing this out (here's one thread where it was brought up: https://forums.anandtech.com/thread...ks-fixed-updated.2511792/page-4#post-39000444) and the results we're seeing now suggest that reasoning was worthwhile.

The funniest part about old threads like that is that it's the Intel fans arguing why the low-resolution benchmarks are valid and the AMD ones saying that it's meaningless since no one games like that, so why even bother using that as a comparison. I'm sure the tables will turn once again when Intel regains the IPC lead and does better in these benchmarks.
 

ultimatebob

Lifer
Jul 1, 2001
25,135
2,445
126
Just an FYI, but Anand sold the company and has been gone for a long, long time (7+ years now).

I think the last review he did was for Haswell CPUs. It has been Ian Cutress since that time.

Yeah... Anand went over to the "dark side" and joined Apple. He probably now spends his days cherry-picking benchmarks for Tim Cook that make the new Apple M-class processors look a bazillion percent faster than Intel or AMD :)
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Yeah... Anand went over to the "dark side" and joined Apple. He probably now spends his days cherry-picking benchmarks for Tim Cook that make the new Apple M-class processors look a bazillion percent faster than Intel or AMD :)

Honestly, Apple has been pretty tame in the numbers they throw out when talking about their chips. If you go back and look at the old presentations you'd get a lot of "Up to 50-70% faster" remarks, which were true if you only considered that one particular benchmark and ignored that it wasn't even close to indicative of the average. Maybe that's just more to do with them being so far ahead of the competition now that they don't see much point in trying to talk up their own chips or because they want to focus more on the new features that their SoCs enable.

Anand always came across as fairly principled and as someone with a lot of integrity. I don't think that would change much just because he joined Apple. I'm not even really sure what the hell he actually does there anyhow.
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,663
136
Note that, internally, every company needs accurate and thorough benchmarks for focused development, to know where there is room, and where there is a need, for further advancement. PR benchmarks affecting actual development decisions is the absolute worst-case scenario imo.