***OFFICIAL*** Ryzen 5000 / Zen 3 Launch Thread REVIEWS BEGIN PAGE 39

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136

NBC says that the 5800 and 5900 non-X are official but OEM only.

These are just factory-forced to 65W by default; on a regular model you'll have to set them to Eco Mode in Ryzen Master. The performance difference is minimal given the substantially lower power.

Performance per mode: [chart]

Resulting power savings: [chart]

For the 5800X that's something like 75W CPU TDP instead of 110W, for about 5% lower performance.
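
For reference, the 65W figure feeds AMD's AM4 socket power rule, where the package power limit (PPT) is roughly 1.35x the TDP: 105W stock maps to ~142W and the 65W Eco preset to ~88W. A minimal sketch of that arithmetic (the 1.35 ratio is AMD's published AM4 figure; treat the outputs as nominal limits, not measured draw):

```python
# AMD's AM4 rule of thumb: package power tracking (PPT) ~= 1.35 x TDP.
# Outputs are nominal limits, not measured power draw.

def ppt_watts(tdp_watts: float, ratio: float = 1.35) -> float:
    """Package power limit implied by a given TDP."""
    return tdp_watts * ratio

for label, tdp in [("Stock (105W TDP)", 105), ("Eco Mode (65W TDP)", 65)]:
    print(f"{label}: PPT ~ {ppt_watts(tdp):.0f} W")
# Stock (105W TDP): PPT ~ 142 W
# Eco Mode (65W TDP): PPT ~ 88 W
```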
 
  • Like
Reactions: Tlh97 and moinmoin

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
These are just factory-forced to 65W by default; on a regular model you'll have to set them to Eco Mode in Ryzen Master. The performance difference is minimal given the substantially lower power.

I dunno, they did reduce the max turbo clock by 100 MHz.
 

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136
I dunno, they did reduce the max turbo clock by 100 MHz.

About 2% lower max frequency is what Computerbase measured in Eco Mode in the ST scores, so this mode also extends to single-core usage; the actual TDP and frequencies of those non-X SKUs should be the same.

Also, max frequency is what is guaranteed; in ST a 5800X can clock 150 MHz higher than the maximum guaranteed frequency, and we should expect comparable behaviour from those models.
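
The arithmetic behind the 2% is straightforward if you take the 5800X's rated 4.7 GHz max boost as the baseline:

```python
# A 100 MHz cut relative to the 5800X's rated 4.7 GHz max boost.
rated_boost_mhz = 4700  # spec-sheet value; individual samples vary
cut_mhz = 100
print(f"{cut_mhz / rated_boost_mhz:.1%}")  # -> 2.1%
```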
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
I don't think anyone said that cache does not affect gaming performance, because it clearly does. A CPU with more cache can avoid lengthy trips to system memory, which accelerates processing and feeds the GPU faster. The contention is whether cache still affects performance at higher, GPU-bound resolutions, and the answer to that is no, because the CPU does not directly involve itself in graphical processing.

A Core i7-5775C and a 9700K will have similar or equal performance at 4K, assuming the game is GPU-bound, despite the 5775C having much more cache.
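
The usual back-of-the-envelope model for this: delivered frame rate is roughly the minimum of the CPU-limited and GPU-limited rates, so once the GPU side is slower, CPU differences (cache included) stop showing up in the results. A toy illustration, with every number below invented purely to show the shape of the argument:

```python
# Toy bottleneck model: delivered FPS ~= min(CPU-limited FPS, GPU-limited FPS).
# All frame rates are hypothetical.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    # The slower side of the pipeline sets the frame rate.
    return min(cpu_fps, gpu_fps)

big_cache, small_cache = 240.0, 220.0  # hypothetical CPU-limited FPS
for res, gpu_fps in [("720p", 500.0), ("1080p", 230.0), ("4K", 80.0)]:
    a = delivered_fps(big_cache, gpu_fps)
    b = delivered_fps(small_cache, gpu_fps)
    print(f"{res}: {a:.0f} vs {b:.0f} FPS (gap {a - b:.0f})")
# 720p: full 20 FPS gap; 1080p: gap shrinks; 4K: both GPU-bound, gap is 0.
```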
Well, here we go: https://kingfaris.co.uk/cpu/battle/16

Conclusion:
As seen from the charts above, the 5950X is faster than the 10900K in some games, and slower in others.
At 1080p High when games are more graphically bound, the 10900K wins in more titles as the 5950X struggles to keep the large game data in its cache. Part of the 5950X's excellent performance at 720p Low is because of its 32MB cache, allowing it to balance against the relatively slow memory latency.
Keep in mind that as you increase resolution and/or become more GPU bound, the difference in the CPUs' game performance would become less and less.

Based on these results, I cannot definitively recommend one over the other as they both have strengths and weaknesses. The 10900K had the better game performance at 1080p, sometimes marginally, in more titles, but the 5950X would have much better multicore performance above 10 cores. This is most likely the case with a 5900X too, however I cannot test this.

Also, against popular opinion, belief, and even data from some 'trusted' sites, in a comparison featuring the best 10900K gaming profile vs the best 5950X gaming profile, the 10900K is the better gaming chip, based on the results obtained by this reviewer. I have argued in this same thread that overclocking does yield results. 5.4 GHz is the absolute cream of the crop of Comet Lake silicon, but there are scattered 10900Ks on Reddit sporting that profile. AMD did a good job of masking the still-weaker memory subsystem (because it's tied to the fabric) by introducing bigger caches, but the weakness becomes increasingly clear once you get to resolutions at which people actually game. Combined with the superior overclocking of the Intel platform, Comet Lake-S trades blows with and sometimes surpasses Zen 3. This is a far cry from the "total demolition" some were predicting in the heady days of the Zen 3 release.


 

gdansk

Golden Member
Feb 8, 2011
1,979
2,355
136
On the 720p graph he includes CS:GO, but on the 1080p graph he replaces it with Overwatch. Given what I've seen from CS:GO at 1080p, it'd probably heavily favor the 5950X and ruin your cute average.
Play the lottery and hope you get a 10900K that can OC to 5.4 GHz (per Silicon Lottery's data, fewer than 1% can), or just enable PBO and get nearly the same results. There's a reason he doesn't recommend the 10900K either; you expound too much upon those results.
 
Last edited:
  • Like
Reactions: Tlh97

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
Well, here we go: https://kingfaris.co.uk/cpu/battle/16

Conclusion:


Also, against popular opinion, belief, and even data from some 'trusted' sites, in a comparison featuring the best 10900K gaming profile vs the best 5950X gaming profile, the 10900K is the better gaming chip, based on the results obtained by this reviewer. I have argued in this same thread that overclocking does yield results. 5.4 GHz is the absolute cream of the crop of Comet Lake silicon, but there are scattered 10900Ks on Reddit sporting that profile. AMD did a good job of masking the still-weaker memory subsystem (because it's tied to the fabric) by introducing bigger caches, but the weakness becomes increasingly clear once you get to resolutions at which people actually game. Combined with the superior overclocking of the Intel platform, Comet Lake-S trades blows with and sometimes surpasses Zen 3. This is a far cry from the "total demolition" some were predicting in the heady days of the Zen 3 release.



I don't believe Mr. Kingfaris has presented anywhere close to the data he needs to support his conclusion that the L3 cache sees significantly worse cache hit rates at 1080p high than 720p low.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
I don't believe Mr. Kingfaris has presented anywhere close to the data he needs to support his conclusion that the L3 cache sees significantly worse cache hit rates at 1080p high than 720p low.
What in your educated opinion is the cause then? Don't game engines go to main memory? What instances occasion that? Is main memory not, in simple terms, an extension of the cache?
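
In the textbook model it is: each level backstops the one above, and average memory access time (AMAT) composes as hit time plus miss rate times the cost of the next level down, so a bigger L3 pays off exactly when it converts DRAM trips into cache hits. A sketch of that recursion, with all latencies and miss rates invented for illustration:

```python
# Textbook AMAT recursion: AMAT = hit_time + miss_rate * (next level's cost).
# All latencies and miss rates below are invented for illustration.

def amat(levels: list[tuple[float, float]], dram_ns: float) -> float:
    """levels: (hit_time_ns, miss_rate) per cache level, innermost first."""
    cost = dram_ns
    for hit_ns, miss_rate in reversed(levels):
        cost = hit_ns + miss_rate * cost
    return cost

# L1/L2/L3 with hypothetical hit times and miss rates, DRAM at 70 ns.
print(f"{amat([(1.0, 0.10), (4.0, 0.30), (10.0, 0.20)], 70.0):.2f} ns")  # ~2.12
# A bigger L3 (lower L3 miss rate) pulls the average down.
print(f"{amat([(1.0, 0.10), (4.0, 0.30), (10.0, 0.05)], 70.0):.2f} ns")  # ~1.81
```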
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
What in your educated opinion is the cause then? Don't game engines go to main memory? What instances occasion that? Is main memory not, in simple terms, an extension of the cache?

There are a ton of variables at play. It could be something like a platform issue where increasing PCIe bus utilization causes decreased performance on the AMD system compared to Intel. Could be a lot of things. He did zero investigation and presented zero controlled data to justify his conclusion, though. Increasing resolution should have a negligible impact on CPU cache hit rate, as should most settings for increased graphics fidelity; those by and large affect the GPU cache hit rate (settings like increased draw distance could potentially affect the CPU cache hit rate, but again, there's no data in that blog post from which to draw any kind of conclusion). It looks to me like his highest-performing Intel setup also had HT turned off, yet he doesn't mention turning off HT for the 5950X that I can see, so that could cause quite a disparity right there. But again, who knows? There's no data to lead to any real conclusion on this.
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
Some interesting commentary from Ian as well on GB5.

Unfortunately we are not going to include the Intel GB5 results in this review, although you can find them inside our benchmark database. The reason behind this is down to AVX-512 acceleration of GB5's AES test - this causes a substantial performance difference in single-threaded workloads such that this sub-test completely skews any of Intel's results to the point of literal absurdity. AES is not that important of a real-world workload, so the fact that it obscures the rest of GB5's subtests makes overall score comparisons to Intel CPUs with AVX-512 irrelevant for drawing any conclusions. This is also important for future comparisons of Intel CPUs, such as Rocket Lake, which will have AVX-512. Users should ask to see the sub-test scores, or a version of GB5 where the AES test is removed.

To clarify the point on AES: the Core i9-10900K scores 1878 in the AES test, while the 1185G7 scores 4149. While we're not necessarily against the use of accelerators, especially given that the future is going to be based on how many accelerators there are and how efficiently they work (we can argue whether AVX-512 is efficient compared to dedicated silicon), the issue stems from a combi-test like GeekBench condensing several different (around 20) tests into a single number from which conclusions are meant to be drawn. If one test gets accelerated enough to skew the end result, then rather than being a representation of a set of tests, that one single test becomes the conclusion at the expense of the others, and it's at that point that the test should be removed and put on its own. GeekBench 4 had memory tests that were removed for GeekBench 5 for similar reasons, and should there be a sixth GeekBench iteration, our recommendation is that the cryptography test be removed for similar reasons. There are hundreds of cryptography algorithms to optimize for, but when a popular test focuses on a single algorithm, that algorithm becomes an optimization target and the score becomes meaningless when the broader ecosystem overwhelmingly uses other cryptography algorithms.
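
The skew AnandTech describes is easy to reproduce numerically: in a geometric mean of ~20 subtests, a ~2.2x speedup on a single test (roughly the 1878 vs 4149 AES gap quoted above) lifts the composite by about 4% with everything else unchanged. A sketch with synthetic scores (everything invented except the 2.2x ratio):

```python
from math import prod

def geomean(scores: list[float]) -> float:
    return prod(scores) ** (1 / len(scores))

subtests = [1000.0] * 20      # 20 synthetic subtest scores, equal for clarity
accelerated = subtests.copy()
accelerated[0] *= 2.2         # one subtest sped up ~2.2x (AES via AVX-512/VAES)

print(f"{geomean(subtests):.0f} -> {geomean(accelerated):.0f}")
# 1000 -> 1040: a ~4% composite uplift from a single accelerated subtest.
```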
 

gdansk

Golden Member
Feb 8, 2011
1,979
2,355
136
I presume the new improvement is from VAES? It isn't a good benchmark of general CPU performance, but AES is a widely used standard; even the default hash maps in some programming languages use AES-NI. Secondly, Tiger Lake can achieve near-double vector throughput in some other workloads too. It isn't entirely misleading, just mainly.

Given its ubiquity, I think AT should include the numbers, but with a note that x% of the geomean improvement is from one workload. I wouldn't complain if they removed AES from GB6, but it is a somewhat useful thing to know if you plan to use encryption.
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
I presume the new improvement is from VAES? It isn't a good benchmark of general CPU performance, but AES is a widely used standard; even the default hash maps in some programming languages use AES-NI. Secondly, Tiger Lake can achieve near-double vector throughput in some other workloads too. It isn't entirely misleading, just mainly.

Given its ubiquity, I think AT should include the numbers, but with a note that x% of the geomean improvement is from one workload. I wouldn't complain if they removed AES from GB6, but it is a somewhat useful thing to know if you plan to use encryption.

Just my opinion, but I'd much rather AT just run a stand alone test rather than have to start putting asterisks and notes on benchmarks.
 

gdansk

Golden Member
Feb 8, 2011
1,979
2,355
136
Just my opinion, but I'd much rather AT just run a stand alone test rather than have to start putting asterisks and notes on benchmarks.
What's the point of AT if not benchmark commentary? They included the explanation; they should show the result too. Otherwise people will search for "1185G7 GB5" on, say, Google and end up on some other site which isn't as discerning but does include the numbers.
 
  • Like
Reactions: krumme

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
There are a ton of variables at play. It could be something like a platform issue where increasing PCIe bus utilization causes decreased performance on the AMD system compared to Intel. Could be a lot of things. He did zero investigation and presented zero controlled data to justify his conclusion, though. Increasing resolution should have a negligible impact on CPU cache hit rate, as should most settings for increased graphics fidelity; those by and large affect the GPU cache hit rate (settings like increased draw distance could potentially affect the CPU cache hit rate, but again, there's no data in that blog post from which to draw any kind of conclusion). It looks to me like his highest-performing Intel setup also had HT turned off, yet he doesn't mention turning off HT for the 5950X that I can see, so that could cause quite a disparity right there. But again, who knows? There's no data to lead to any real conclusion on this.
Yet, in all of this confusion, AMD had to go and name their unified cache "game cache." Plus, he presented every possible maximized score from both platforms, including turning off CCDs in the process. If HT off could've provided better scores, he'd certainly have presented that. Mind you, Intel also benefits from HT sometimes. Dude went to lengths to present both platforms essentially maximized.
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
What's the point of AT if not benchmark commentary? They included the explanation; they should show the result too. Otherwise people will search for "1185G7 GB5" on, say, Google and end up on some other site which isn't as discerning but does include the numbers.

Anandtech doesn't want to include numbers that they feel are bad data; I don't see anything wrong with that. There are a lot of benchmarks they could include with data of mixed value and quality while adding paragraphs of commentary to each, but I don't think that's what they want to do. In the end I tend to agree with them, but I understand wanting the data either way. Luckily for GB, if you want the data, it's very easy to find.
 
  • Like
Reactions: Tlh97 and krumme

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
Yet, in all of this confusion, AMD had to go and name their unified cache "game cache." Plus, he presented every possible maximized score from both platforms, including turning off CCDs in the process. If HT off could've provided better scores, he'd certainly have presented that. Mind you, Intel also benefits from HT sometimes. Dude went to lengths to present both platforms essentially maximized.

Game cache was a marketing term, so who cares? I believe AMD dropped the moniker for Zen 3 anyway.

How do you know he even tested with HT off for AMD? Does he mention it anywhere? Am I supposed to just assume he tried every possible combination? If we're talking max possible performance, am I to assume he was able to perfectly extract all performance from each platform and that no one else could have a higher-performing system? Am I to assume that, given a different sample for each CPU, he would get the exact same results? I don't even really care about his results overall, because Zen 3 and CML results are all over the place and there is lots of data from which to form a conclusion. I was just pointing out that his conclusion about the L3 cache in Zen 3 is completely unsubstantiated by any data in his blog post, and he gave no supporting references for it either. Because of this, I can't take his conclusion seriously at all.
 

gdansk

Golden Member
Feb 8, 2011
1,979
2,355
136
Luckily for GB, if you want the data, it's very easy to find.
That's why I don't like it. People will see it on other sources instead, without the explanation nearby. The way it is now, people will assume they didn't compare it to Tiger Lake. But the higher TGL 1T figure would convey the truth: Zen 3 at around 35W doesn't always win 1T. And the accompanying caveat would explain that it isn't as big a difference as GB5 makes it appear.
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
That's why I don't like it. People will see it on other sources instead, without the explanation nearby. The way it is now, people will assume they didn't compare it to Tiger Lake. But the higher TGL 1T figure would convey the truth: Zen 3 at around 35W doesn't always win 1T. And the accompanying caveat would explain that it isn't as big a difference as GB5 makes it appear.

Yeah, though you could argue the flip side and say that most people don't read the reviews, they just look at the graphs, even when there are notes added, and so Anandtech doesn't want a bad comparison in their graphs at all. Like I said, I see both sides of it, and it really just comes down to opinion/preference. There's no right or wrong here; it's up to the reader to decide whether they like the way Anandtech does reviews or not.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Game cache was a marketing term, so who cares?
You can believe whatever you want. His conclusion is logical. I latched onto it as soon as I saw the data, too. Unless you can come up with a better explanation? Because Zen 3 does not display this behavior in other areas except when a powerful GPU is asking questions of it, i.e., it performs better with small datasets (that fit in cache) and struggles with larger datasets (that go to main memory).
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
You can believe whatever you want. His conclusion is logical. I latched onto it as soon as I saw the data, too. Unless you can come up with a better explanation? Because Zen 3 does not display this behavior in other areas except when a powerful GPU is asking questions of it, i.e., it performs better with small datasets (that fit in cache) and struggles with larger datasets (that go to main memory).

Ok, please provide the evidence used to come to this 'logical' conclusion.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Ok, please provide the evidence used to come to this 'logical' conclusion.
Read his conclusion. Heck, read mine, posted months ago, for good measure. Zen 3's big, fast cache really helps in overall IPC. With any code that relies on or goes to main memory a lot, Intel is more than competitive, mainly because of lower latency, and games fall into this category. The higher effective bandwidth per core is also on the Intel platform; these are the only noticeable advantages the Skylake arch has over the Zen 3 arch. Even inter-core latency favors Zen 3, IIRC, as do core count and overall IPC, thanks to a more powerful front end and back end. Given all that, where do you logically think an Intel advantage would manifest itself, given datasets which may or may not fit in cache? The system that is going to main memory even at 720p is going to show more consistent results at both resolutions, whereas the one with the much bigger (and faster) cache and slower main-memory subsystem is going to show a bigger disparity between resolutions.

What is your logical explanation, if not the involvement of main memory, for the Intel platform, with its significantly lower average IPC, having the gaming advantage at higher resolutions?
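
The shape of that argument fits a toy model: pay L3 latency while the working set fits, and a DRAM-weighted blend once it spills, and the big-cache chip wins small working sets while the faster-DRAM chip can overtake on large ones. A sketch with invented sizes and latencies, meant only to show the crossover, not to characterize either CPU:

```python
# Toy model: effective latency for a working set that may spill out of L3.
# All sizes and latencies are invented; only the crossover shape matters.

def effective_latency_ns(ws_mb: float, l3_mb: float,
                         l3_ns: float, dram_ns: float) -> float:
    if ws_mb <= l3_mb:
        return l3_ns                      # everything fits in cache
    spill = 1.0 - l3_mb / ws_mb           # fraction of accesses missing L3
    return (1.0 - spill) * l3_ns + spill * dram_ns

# Hypothetical chips: big L3 + slower DRAM vs. smaller L3 + faster DRAM.
for ws in (24.0, 96.0):                   # per-frame working set, MB
    big = effective_latency_ns(ws, l3_mb=32, l3_ns=11, dram_ns=75)
    small = effective_latency_ns(ws, l3_mb=20, l3_ns=12, dram_ns=60)
    print(f"{ws:.0f} MB: big-cache {big:.0f} ns vs small-cache {small:.0f} ns")
# 24 MB: big cache wins (11 vs 20 ns); 96 MB: faster DRAM wins (54 vs 50 ns).
```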
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
Read his conclusion. Heck, read mine, posted months ago, for good measure. Zen 3's big, fast cache really helps in overall IPC. With any code that relies on or goes to main memory a lot, Intel is more than competitive, mainly because of lower latency, and games fall into this category. The higher effective bandwidth per core is also on the Intel platform; these are the only noticeable advantages the Skylake arch has over the Zen 3 arch. Even inter-core latency favors Zen 3, IIRC, as do core count and overall IPC, thanks to a more powerful front end and back end. Given all that, where do you logically think an Intel advantage would manifest itself, given datasets which may or may not fit in cache? The system that is going to main memory even at 720p is going to show more consistent results at both resolutions, whereas the one with the much bigger (and faster) cache and slower main-memory subsystem is going to show a bigger disparity between resolutions.

What is your logical explanation, if not the involvement of main memory, for the Intel platform, with its significantly lower average IPC, having the gaming advantage at higher resolutions?

A conclusion is not evidence; a conclusion based on no evidence is called guesswork, and guesswork is all you're doing. A more efficient PCIe controller could be responsible. There are lots of possibilities to explain what is going on; picking one based on no evidence and then calling it the logical conclusion is ridiculous.
 
  • Like
Reactions: Tlh97

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
A conclusion is not evidence; a conclusion based on no evidence is called guesswork, and guesswork is all you're doing. A more efficient PCIe controller could be responsible. There are lots of possibilities to explain what is going on; picking one based on no evidence and then calling it the logical conclusion is ridiculous.
A crippled PCIe controller, huh? This is just conspiracy theory at this point. :D

Skylake has the lower latency memory subsystem. Fact. A lower latency memory subsystem is advantageous in gaming. Fact.
 

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
A crippled PCIe controller, huh? This is just conspiracy theory at this point. :D

Skylake has the lower latency memory subsystem. Fact. A lower latency memory subsystem is advantageous in gaming. Fact.

So now show the evidence that the L3 cache hit rate is suffering at 1080p. I showed exactly as much evidence for my suggestion as you did for yours/his; that was the point.