
Discussion Ryzen 3000 series benchmark thread ** Open **


Concillian

Diamond Member
May 26, 2004
3,755
8
81
You are actually missing Moonbogg's point.

The point of testing at low resolutions and settings such as 720p, is to isolate how the CPU performs in games.

Yes, for actual gaming it may seem irrelevant, but hear me out: it is relevant. A single number like average FPS does not show all the data. Benchmarks are limited; what you actually feel or experience in the game simply cannot be represented by one figure.
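A quick sketch of why a single average hides what you feel in-game: two runs can share the same average FPS while one stutters badly. The frame-time numbers below are made up for illustration.

```python
# Two runs with identical average FPS but very different worst-case smoothness.
def avg_fps(frame_times_ms):
    """Average FPS over a run, from per-frame render times in ms."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low(frame_times_ms):
    """FPS implied by the slowest 1% of frames (what stutter feels like)."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst) // 100)
    return 1000 / (sum(worst[:n]) / n)

smooth = [10.0] * 1000                   # steady 100 FPS
stutter = [9.0] * 990 + [109.0] * 10     # same average, periodic spikes

print(avg_fps(smooth), avg_fps(stutter))            # both average 100 FPS
print(one_percent_low(smooth))                      # 100 FPS
print(one_percent_low(stutter))                     # under 10 FPS: visible hitching
```

Both runs report the same headline number, but only the 1% lows expose the hitching.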

What you are trying to do is eliminate the GPU bottleneck in the situations where the CPU might be one.
By the time it's a bottleneck I can feel, I think the $400 or so saved by going with a 3600 platform vs a 9900k platform will probably pay for an entire new platform that will eliminate a bottleneck.

Yes, there is merit in exploring how far from the limit we are... and when we do those tests, we notice that we are FAR from any of these CPUs being a realistic bottleneck to perceptible gaming performance. With that knowledge under our belt, we can evaluate the differences to mean diddly squat.

It was always a questionable decision to buy a 9900k specifically for only gaming workloads, but some could justify it with some productivity workloads on the side... but now? The tipping point has been reached. 9900k is king, but from now on, it'll only be ruling over that one guy with the cornerest of corner cases where it actually matters.
 
Last edited:


PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
700
106
By the time it's a bottleneck I can feel, I think the $400 or so saved by going with a 3600 platform vs a 9900k platform will probably pay for an entire platform that will eliminate a bottleneck.

Yes, there is merit in exploring how far from the limit we are... and when we do those tests, we notice that we are FAR from any of these CPUs being a realistic bottleneck to perceptible gaming performance. With that knowledge under our belt, we can evaluate the differences to mean diddly squat.

It was always a questionable decision to buy a 9900k specifically for only gaming workloads, but some could justify it with some productivity workloads on the side... but now? The tipping point has been reached. 9900k is king, but from now on, it'll be ruling that one guy with the cornerest of corner cases where it actually matters.
The King in a world of Republics.
 

Concillian

Diamond Member
May 26, 2004
3,755
8
81
Are there any memory-latency-sensitive benchmarks to showcase Zen 2's worst-case scenario, or does anyone remember any equivalent Nehalem-era comparisons (Bloomfield vs Arrandale)? I'm rather amazed at the results, since I wasn't expecting Zen 2 to improve gaming performance as much as it did, considering that the memory controller is now off-die. The massive L3 cache must be doing rather well at hiding the extra memory latency, because Zen 2 seems to perform consistently faster than Zen+ in absolutely everything.

Well, the IMC has not exactly been the high point of past AMD architectures. They significantly improved the controller at the same time they moved it off-die; I'm pretty sure there was a good deal of headroom to be had, more than enough to make up for the latency penalty of the move.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,018
626
136
Minimal difference. Even single channel hangs in there in most games, which is a win for those badly configured systems OEMs sometimes ship.

Certainly not enough to make me worry about selling my 3200 ram and buying something faster.

https://www.techpowerup.com/review/amd-zen-2-memory-performance-scaling-benchmark/

Edit: beaten to it.......
Thanks a lot, that is very useful and helpful to me. :D
 

Concillian

Diamond Member
May 26, 2004
3,755
8
81
Yep, Linus' video has some BFV streaming results.
Oh man, I just got used to MW2 not referring to MechWarrior 2. Now we have the same franchise coming full circle? BFV was the last Battlefield game I played seriously. Battlefield Vietnam, that is.
 

Chaoticlusts

Member
Jul 25, 2010
162
7
81
@Arkaign Pretty sure Gamers Nexus also tested on 1903 with all mitigations

*edit* Actually, speaking of GN, they're also the first review site I've seen that's benchmarked turn times in a TBS game (Civilization VI, in their case). I've been wanting to see that for ages. Sadly, the 9900K actually seems to have a somewhat decent edge there; I'd love to see how they stack up in Total War. Anyone know of any other sites that have more benchmarks along those lines?
 

epsilon84

Senior member
Aug 29, 2010
996
704
136
Just on the topic of 720p gaming: it's a yardstick of how well the CPU runs pure gaming code without any GPU limitation in place. It's a purely academic measurement, but it's useful if you actually put the data into context and don't go off on a fanboy rant about Intel still ruling gaming.

Yes, in the real world where 1080p is the entry-level resolution, the Zen chips are within spitting distance of the CFL chips. But let's not confuse that with AMD actually being 'as fast' clock for clock in gaming. It's not quite there yet; the deficit has basically been halved, from about 20% with Zen+ to about 10% with Zen 2. I'm somewhat surprised by this, since overall IPC seems to have definitely caught up to Skylake levels, or maybe even exceeded them slightly, yet gaming is still a bit behind. Does AMD need to get their latencies down even lower to finally beat Intel at gaming?
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
700
106
Latency does seem to be Intel's last stronghold. Whilst the doubled L3 is clearly having an impact in some games, others clearly have working sets too large to fit within it... and lots of them. If AMD can crack this nut, then I suspect they will surge past Intel in all use cases.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,018
626
136
Latency does seem to be Intel's last stronghold. Whilst the doubled L3 is clearly having an impact in some games, others clearly have working sets too large to fit within it... and lots of them. If AMD can crack this nut, then I suspect they will surge past Intel in all use cases.
Hopefully in Zen 3, rather than add 20% more cores or whatever the process will allow, they double the cache again.
 

DrMrLordX

Lifer
Apr 27, 2000
16,506
5,482
136
I don't think Zen4 on 5nm is confirmed.
Hard to say. We'll find out more next year.

Haven't read the full thread, but over here in Europe it's the same as always: it looks like a paper launch (1-2 weeks), and prices are almost $100 higher. The 3900X is listed at $579, for example, and is hence more expensive than a 9900K, and the whole lineup is $30-80 more expensive than US MSRP. AMD, WTF? Can you explain? It should be cheaper here, no tariffs. I feel like they are just using the tariffs as an excuse to increase prices everywhere else as well. (And the same goes for the Navi GPUs...)
Umm, not sure the tariffs would affect AMD CPUs specifically?

XFR+PBO should allow for good effective overclocks, but current BIOS versions cannot completely remove the 142W PPT limit (Package Power Tracking; think of it as the equivalent of Intel's PL2 power limit).
Some of those reviews do feature average power draw going past 142W for the 3900x. Power numbers have not been consistent.
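For context, that 142 W figure is AMD's stock socket power limit: on AM4, PPT defaults to 1.35x the rated TDP. A quick sanity check against the published defaults:

```python
# Stock AM4 PPT (socket power limit) is 1.35x the rated TDP,
# which is where the 142 W limit on 105 W parts comes from.
def stock_ppt_watts(tdp_watts):
    return round(tdp_watts * 1.35)

print(stock_ppt_watts(105))  # 3900X (105 W TDP) -> 142
print(stock_ppt_watts(65))   # 3700X (65 W TDP)  -> 88
```

So sustained draw above 142 W on a 3900X means the board is either lifting or misreporting the limit.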

The King in a world of Republics.
Does that mean it gets to do the royal wave during public processions?
 

HurleyBird

Platinum Member
Apr 22, 2003
2,175
595
136
Hopefully in Zen 3, rather than add 20% more cores or whatever the process will allow, they double the cache again.
I expect that they'll rework the cache hierarchy in general. It's a bit surprising that they didn't do very much there with Zen 2, and this seems to be the obvious remaining low hanging fruit for Zen 3. It wouldn't surprise me to see on-package HBM or something similar. As it stands AMD is spending an embarrassing amount of transistors on cache to compensate for a questionable hierarchy. Rome has 256 MB of L3, but will behave in most situations like it only has 16 MB. Yes, there are some advantages including latency and bandwidth, but not enough to justify that degree of wastage.

If I had to guess, I'd peg Zen 3 at <= the L3 per core as Zen 2, but effectively shared between all cores on a die rather than constrained to a 4-core CCX, plus a large L4 that is either on the IO die or (more likely) on its own chiplet. (Although perhaps only for Epyc and TR)

Lastly, given expected improvements to 7nm yields I can see AMD moving to a larger, 16-core chiplet. Maybe. Probably by Zen 4 at least.
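To make the "256 MB behaving like 16 MB" point concrete, here's a toy sketch of the split L3 (topology figures as commonly reported for Rome; the variable names are mine):

```python
# Toy model of Zen 2's split L3: each 4-core CCX has a private 16 MB slice,
# so a single thread can only allocate into its local slice, not the package total.
CCX_L3_MB = 16       # L3 slice private to each CCX
CCXS_PER_CCD = 2     # two CCXs per compute chiplet
CCDS_ROME = 8        # Rome: eight compute chiplets

total_l3_mb = CCX_L3_MB * CCXS_PER_CCD * CCDS_ROME   # what the spec sheet says
thread_visible_mb = CCX_L3_MB                        # what one thread can actually use

print(total_l3_mb)        # 256 MB on the package
print(thread_visible_mb)  # 16 MB effective for a single thread
```

Which is exactly the wastage argument: 16x the silicon for, in the single-threaded case, 1x the usable capacity.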
 
Last edited:

tamz_msc

Platinum Member
Jan 5, 2017
2,679
2,355
106
I expect that they'll rework the cache hierarchy in general. It's a bit surprising that they didn't do very much there with Zen 2, and this seems to be the obvious remaining low hanging fruit for Zen 3. It wouldn't surprise me to see on-package HBM or something similar. As it stands AMD is spending an embarrassing amount of transistors on cache to compensate for a questionable hierarchy. Rome has 256 MB of L3, but will behave in most situations like it only has 16 MB. Yes, there are some advantages including latency and bandwidth, but not enough to justify that degree of wastage.

If I had to guess, I'd peg Zen 3 at <= the L3 per core as Zen 2, but effectively shared between all cores on a die rather than constrained to a 4-core CCX, plus a large L4 that is either on the IO die or (more likely) on its own chiplet.

Lastly, given expected improvements to 7nm yields I can see AMD moving to a larger, 16-core chiplet.
Yeah, it makes no sense to continue with the CCX approach now that they've committed to the chiplet design. CCXes were always going to be a compromise in terms of cache hierarchy, since their original purpose was a modular design that could make room for an iGPU, a la the Ryzen APUs. Now that chiplets are here, AMD should instead focus on rearranging the CCD so that the L3 is shared among all cores. The current split-cache arrangement requires accessing the other half through the IF, which burns additional power and isn't efficient in terms of bandwidth utilization.
 

coercitiv

Diamond Member
Jan 24, 2014
3,958
4,293
136
Some of those reviews do feature average power draw going past 142W for the 3900x. Power numbers have not been consistent.
According to The Stilt, all power numbers from Asus boards are wonky, as the Asus BIOS makes power management believe the CPU is drawing less power than it actually is.
 

DrMrLordX

Lifer
Apr 27, 2000
16,506
5,482
136
According to The Stilt, all power numbers from Asus boards are wonky, as the Asus BIOS makes power management believe the CPU is drawing less power than it actually is.
Yeah. He did indicate that some other OEMs may be using something similar to the "thing" (as he calls it).
 

Hans de Vries

Senior member
May 2, 2008
236
465
136
www.chip-architect.com
TR 2950X is on par with 9900K; useless benchmark, really
The TR 2950X wipes the floor with the 9900K in everything seriously multithreaded

The Ryzen 7 2700X multi-threaded score increased 25% overnight because of the new Windows 1903 version(?)
http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=2700x+windows
Geekbench 4.0 is a series of very short bursts with pauses. The interaction with Windows to increase/decrease the clock fast enough seems to be crucial
(besides the usual issue of threads being moved from core to core)


Linux is still far superior to Windows:

LINUX: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+linux
WINDOWS: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+windows

Single threaded score for the Ryzen 9 3900x goes up from 5630 to 6300 (12 %)
Multi threaded score for the Ryzen 9 3900x goes up from 44900 to 55000 (23 %)
 
Last edited:

BigDaveX

Senior member
Jun 12, 2014
440
210
116
Well, AMD IMC has not exactly been the high point of past AMD architectures. They (significantly) improved the controller at the same time they moved it off-die. Pretty sure there was a good deal of headroom to be had to more than make up for the latency penalty of moving it.
It hasn't been the high point of their recent architectures, but the original Athlon 64's DDR1 memory controller is still all these years later probably the most efficient memory controller ever to appear on the x86 platform. Their subsequent IMCs have generally been okay from a bandwidth standpoint, but they've struggled latency-wise against Intel.

Fortunately, it seems like now they've figured out the same trick that Intel did with the Core 2 - if you can't compete on outright latency, then gobs of cache and insanely good prefetchers are a good substitute!
 

TheGiant

Senior member
Jun 12, 2017
727
328
106
The Ryzen 7 2700X multi-threaded score increased 25% overnight because of the new Windows 1903 version(?)
http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=2700x+windows
Geekbench 4.0 is a series of very short bursts with pauses. The interaction with Windows to increase/decrease the clock fast enough seems to be crucial
(besides the usual issue of threads being moved from core to core)


Linux is still far superior to Windows:

LINUX: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+linux
WINDOWS: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+windows

Single threaded score for the Ryzen 9 3900x goes up from 5630 to 6300 (12 %)
Multi threaded score for the Ryzen 9 3900x goes up from 44900 to 55000 (23 %)
Well, that's nice, but it doesn't tell me why I should buy a more-than-8C CPU.
None of the MT workloads I use or can remember switch between cores and raise/reduce frequency that fast; that simply isn't happening.
I'm using my 14C Xeon to calculate CFD jobs, plus other workloads like x265 encoding, and I don't recall any of them doing the kind of switching Geekbench does.
So what is it good for, except showing that the Windows scheduler is still worse than Linux's?
As much as I like the 3900X, this is not the best showcase of its performance.
 

dlerious

Senior member
Mar 4, 2004
821
170
116
I have to wonder, especially with the most recent updates suggesting to NOT update your system per ASRock:
https://www.asrock.com/mb/Intel/Z370 Taichi/index.asp#BIOS


They never used this language before to describe previous updates. Maybe I should test performance before and after...
That's odd. Greg at Science Studio tried a 3900X in an ASRock B350 motherboard. He didn't do a whole lot of testing, but apparently it worked. He's gonna try an A320 board next.

 
