Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Page 729

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,513
2,464
136
So you're buying today, right?
This is the very thing I've been railing against when chatting with others. Your opinion of the launch doesn't matter if you never intended to buy.

Not to devalue or dismiss everyone's opinions, but if there was no circumstance under which you would have bought these new products, your opinion doesn't matter to AMD's bottom line.

Only people who would have bought but don't, or wouldn't have but do, make a difference to their bottom line.
 
  • Like
Reactions: Jan Olšan
Jul 27, 2020
20,040
13,738
146
If these figures are from TYC review, I wouldn't put too much stock into them yet. I'd wait for someone else to verify that.
@B-Riz already posted one graph of Baldur's Gate 3 minimum fps improving with 9600X. At this point, at least I don't need further proof that there's some VERY good and VERY practical improvements in Zen 5, enough to warrant an upgrade for some, if not all, Zen 4 users.
 
  • Haha
Reactions: Rigg

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Some more screens

Now you are cherry picking the screens where SMT does worse, ignoring the ones where it does better.

If they had included Intel CPUs with HT in there, you would likely see a similar thing. SMT is never a universal win.

If the 9700X behaves worse with SMT on than previous cores do, that's a Zen 5 problem, not a sign that HT/SMT scheduling is suddenly wrong after it's been on the market for decades.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,486
2,023
136
That's a... not-insignificant gain in front-end-bound workloads like browsing. I expected less.
2 decode clusters doing work?

I don't know if the two decode clusters can actually serve a single thread in practice. However, the µop cache does seem to work, in that the machine can do two taken branches per clock from µop cache, which is very useful on its own when running interpreters.
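To illustrate why interpreters are so dense in taken branches, here is a toy stack-machine sketch (hypothetical bytecode, not any real VM). In a native interpreter, every trip through a dispatch loop like this ends in an indirect taken branch to the next opcode handler plus the loop back-edge, so sustaining two taken branches per clock out of the µop cache pays off directly:

```python
# Toy bytecode interpreter (illustration only). Each loop iteration
# dispatches one opcode; in a compiled interpreter this dispatch is an
# indirect taken branch, making such loops taken-branch limited.
PUSH, ADD, MUL, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == HALT:
            return stack[-1]

# (2 + 3) * 4
print(run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]))  # 20
```

The work per opcode is tiny, so the branch at the top of every iteration dominates; that is why a front-end improvement shows up so clearly in interpreter-heavy workloads like browsers.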
 
Jul 27, 2020
20,040
13,738
146
This is the very thing I've been railing against when chatting with others. Your opinion of the launch doesn't matter if you never intended to buy.
Zen 5 has launched at an unfortunate time. If we had data on how many users upgraded in the past 6 months from their aging platforms to the 7800X3D or any other Zen 4 CPU, we would understand why the Zen 5 reception is lukewarm. AMD should've launched Zen 5 in January. This is a market where waiting just hurts you more, because a lot of people with the upgrade itch don't wait for the impending launch of new CPUs. There's a reason why it's called an "itch". People just want something new and they want it NOW.
 

CouncilorIrissa

Senior member
Jul 28, 2023
541
2,120
96
I don't know if the two decode clusters can actually serve a single thread in practice. However, the µop cache does seem to work, in that the machine can do two taken branches per clock from µop cache, which is very useful on its own when running interpreters.
Gotta wait for C&C article to finally get some clarity on this.
 

StefanR5R

Elite Member
Dec 10, 2016
5,926
8,863
136
[more cores in tiny desktop PCs]
There are many MT workloads that don’t require that much memory bandwidth.
Video transcoding, source code compilation, ...
I am not familiar with video transcoding. Are there transcoders which scale to very high core counts?

Source code compilation, however: software build jobs do not scale well with core count. There are significant single- and lightly-threaded sections during a build. The compilation stage scales well if there are correspondingly many files. But most of the time, developers build incrementally, so there is no good scaling there either.
 
  • Like
Reactions: carancho

StefanR5R

Elite Member
Dec 10, 2016
5,926
8,863
136
All you armchair experts saying Zen5 stinks at gaming, explain the .1% lows away
superb .1% lows, sign of a superior CPU
I am not interested in video games myself, hence I never pay attention to that part of CPU reviews. What strikes me as odd is that most reviewers still focus a lot on average FPS. If average FPS were the most important thing to immersion in a game, then the conclusion of all of these reviews should be that all desktop CPUs perform alike: average FPS are always good enough if screen resolution and game details are chosen according to GPU performance. From what I understand, what additionally matters, and very much so, to the experience of playing a video game are things like low FPS percentiles and frame time variance. Yet hardly any reviewer puts these prominently at the center of the video game part of CPU reviews.

(Especially ridiculous is when reviewers produce huge diagrams with a dozen CPUs all giving absurdly high FPS. They should shrink those graphs into a one-line summary saying that all tested CPUs delivered more FPS than necessary.)
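For what it's worth, the low-percentile figures reviewers quote are usually derived from a frame-time capture along these lines. Conventions vary between reviewers; the "1% low" definition sketched here, averaging the slowest 1% of frames, is just one common one:

```python
def fps_stats(frame_times_ms):
    """Summarise a capture of per-frame render times (milliseconds).

    'low(1)' here = average FPS over the slowest 1% of frames, one common
    convention; some reviewers instead report the 99th-percentile frame time.
    """
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    slowest = sorted(frame_times_ms, reverse=True)  # worst frames first
    def low(pct):
        k = max(1, int(n * pct / 100))  # number of frames in the worst pct%
        return 1000.0 * k / sum(slowest[:k])
    return avg_fps, low(1), low(0.1)
```

A capture of 990 frames at 10 ms plus 10 spikes at 50 ms averages about 96 FPS, but its 1% and 0.1% lows come out at only 20 FPS, which is exactly the stutter a player feels and the average hides.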
 

DrMrLordX

Lifer
Apr 27, 2000
22,065
11,693
136
I really want AMD to pull some miracle with an updated IOD that allows the X3D chips to use DDR5-8000 in 1:1 mode.

Other than the observations already made related to GMI links, you're looking at the current limitations of AMD's implementation of IF. There are likely good reasons why 1:1 beyond DDR5-6400/6600 becomes impossible.

This is the very thing I've been railing against when chatting with others. Your opinion of the launch doesn't matter if you never intended to buy.

Does this also apply to people parroting negative reviews of Granite Ridge? Nobody said this until people started praising Zen5.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,990
440
126
I am not familiar with video transcoding. Are there transcoders which scale to very high core counts?

Source code compilation, however: software build jobs do not scale well with core count. There are significant single- and lightly-threaded sections during a build. The compilation stage scales well if there are correspondingly many files. But most of the time, developers build incrementally, so there is no good scaling there either.
Regarding video transcoding, just check out the Handbrake perf tests, which are quite common in CPU reviews.

Regarding source code compilation, do you have any reviews / perf tests showing that Zen5 would be bottlenecked by memory bandwidth beyond 16C?
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,513
2,464
136
Does this also apply to people parroting negative reviews of Granite Ridge? Nobody said this until people started praising Zen5.
I added obvious context just to avoid responses like this, and you left that out of the quote.

It's like piracy. The company didn't lose money from someone pirating the game if the person would have never otherwise paid for it.

It doesn't matter to AMD's bottom line what your opinion is of the product or release if you were never going to buy it no matter how well it was received.

Yes, I was 100% set on buying a 9950X until release rolled around and the reality set in. Now I am not buying one. It doesn't make my opinion more valid, but the fact that AMD lost a sale is what matters, not my opinion of the release.
 
  • Like
Reactions: Thibsie

DrMrLordX

Lifer
Apr 27, 2000
22,065
11,693
136
I added obvious context just to avoid responses like this, and you left that out of the quote.

I reread that and . . . okay?

Fact still remains that someone chose to wait to say this until after others stepped in and said "no wait, Zen5 is actually kinda good, here's why". Nothing you said explains why anyone would wait for that. HUB has been laying into AMD with both barrels.
 
  • Like
Reactions: Thibsie

Hitman928

Diamond Member
Apr 15, 2012
6,186
10,693
136
Starting to think that SMT causes more issues in normal use than improvements.

I don’t think this is an issue with Zen 5 and SMT. @Det0x has a 16 core Zen 5 and isn’t seeing the same behavior/issues. I think it’s more likely an issue with a particular Windows build or driver that’s causing improper scheduling between physical and logical cores.
 

StefanR5R

Elite Member
Dec 10, 2016
5,926
8,863
136
[more cores in tiny desktop PCs, for applications which apparently just don't exist]
I am not familiar with video transcoding. Are there transcoders which scale to very high core counts?
Regarding video transcoding, just check out the Handbrake perf tests, which are quite common in CPU reviews.
Well, Handbrake does not scale with core count. This is well known.
Example: AnandTech's Threadripper 3000 review

Source code compilation, however: software build jobs do not scale well with core count. There are significant single- and lightly-threaded sections during a build. The compilation stage scales well if there are correspondingly many files. But most of the time, developers build incrementally, so there is no good scaling there either.
Regarding source code compilation, do you have any reviews / perf tests showing that Zen5 would be bottle necked due to memory bandwidth beyond 16C?
I said there is no high parallelism in common software build jobs.
The pure compilation stage scales well if an entire source tree has to be rebuilt, but that's a rare task in practice.
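The build-scaling argument is essentially Amdahl's law: serial stages like configure, linking, or one large hot file cap the overall speedup no matter how many cores the compilation stage spreads across. A minimal sketch (the 90%-parallel figure is an illustrative assumption, not a measurement of any real build):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper-bound speedup when only part of a job parallelises (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A build that is 90% parallelisable compilation and 10% serial
# configure/link work can never exceed 10x, whatever the core count:
print(round(amdahl_speedup(0.9, 16), 2))  # 6.4
print(round(amdahl_speedup(0.9, 64), 2))  # 8.77
```

Going from 16 to 64 cores buys only ~37% more throughput in this model, which is why incremental builds (where the parallel fraction is far smaller still) see almost nothing from very high core counts.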
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,990
440
126
Well, Handbrake does not scale with core count. This is well known.
Example: AnandTech's Threadripper 3000 review
DDR4. And not Zen5.

Also, it depends on the encoder used. Check x265, the most commonly used nowadays, which scales more or less linearly up to 32C and quite well even up to 64C:


[Attached graphs: time-to-complete vs. cores, and fps vs. cores]



I said there is no high parallelism in common software build jobs.
The pure compilation stage scales well if an entire source tree has to be rebuilt, but that's a rare task in practice.
You said. I asked for reviews / tests showing this for Zen5 beyond 16C. Still waiting.

Also, a full rebuild is of course the most important case, as it takes the longest time.
 
Last edited:

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
I am not interested in video games myself, hence never pay attention to that part of CPU reviews. What strikes me as odd is that most reviewers still focus on average FPS a lot. If average FPS were the most important thing to immersion into a game, then the conclusion of all of these reviews should be that all desktop CPUs perform alike: Average FPS are always good enough if screen resolution and game details are chosen according to GPU performance. — From what I understand, what matters additionally, and very much, to the experience of playing a video game are things like low percentiles of FPS, and frame time variance. Yet hardly any reviewer seems to put these prominently into the center of the video game part of CPU reviews.

(Especially ridiculous is when reviewers produce huge diagrams with a dozen of CPUs all giving absurdly high FPS. They should shrink those graphs into a one-line summary that all tested CPUs were good for more FPS than necessary.)
Yesn't. Speaking as a formerly more active gheymer myself, there are really three metrics:
- 30-60 fps average
- dips below 30
- actual high framerate for some games

You want the average at 60 for 90% of games, but a ton of people will never notice it dipping into the 50s, and only a minority into the 40s. 30 FPS is basically the hard limit; below that, almost everyone will notice stuttering and a lack of smoothness. The problem ofc is that if it's only about 30 fps, the game channels all lose their biggest talking point, so they all collectively started pretending that 60 fps was the hard limit. If that became reachable by the slowest CPUs, it would become 144 fps, and so on.

Actual high framerate with minimal dips is very important in reflex-intensive games, like multiplayer FPS. The problem is that a hundred percent of game devs know that full well, and optimise their high-FPS games a lot.
Just check any review featuring something like Counter-Strike or Call of Duty or even Rocket League: suddenly the CPU that couldn't get above 150/180 FPS in ideal conditions runs at 450.

So yes, most gheymeeeing reviews are nowadays fundamentally silly for CPUs. Any ADL/Zen 3 and beyond CPU is more than enough for 99% of cases, and enough for the 1% (the 1% being Starfield).
Frametime variance is also a bad metric, because any game can randomly load assets and cause lag spikes that will basically never be smoothed out. As long as your non-multiplayer game holds 60 FPS and doesn't dip below 30-40, or your multiplayer game doesn't dip below 144/240, it's done; the rest is just GN Steve moaning.