Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Mar 11, 2004
23,444
5,852
146
So you can still get your green fix?

Yeah, a dGPU (even AMD's own) should still handily outperform Strix Halo for a variety of reasons (especially in memory-bandwidth-constrained situations). And if AMD and/or OEMs are delusional, it might even end up at a similar price (you give up some form factor, but that might further help the dGPU: if they stuff Strix Halo into thin-and-lights that constrain its performance through heat or packaging, like restricting memory capacity for packaging reasons). Frankly, I'm not sure a dGPU wouldn't actually be better in a thin-and-light at this point, because it distributes the heat sources more. I like the idea of Strix Halo, but it feels like this is what AMD's APUs should have been already, and still isn't the large APU it could have been.

Adding to the recent "OEMs don't work with AMD because AMD doesn't help them" discussion, I think it highlights that issue, which is both valid and not. By that I mean AMD certainly could (and should) be doing more, like making reference designs (I'd honestly wish for more, but I've posted that diatribe multiple times at this point). But it also shows why AMD would hesitate: why do that when OEMs consistently do things like shipping lower-channel memory configs for no reason other than being cheap? They sabotage AMD to save pennies, drastically hurting performance or usability, like when they use awful cheap displays or exceedingly small batteries. They've shown that AMD can't stop them from doing that (short of simply not making such configs possible, but then OEMs would say the same thing, "AMD isn't working with us," by not letting them save money, and so nothing ultimately changes).

I do think the comparison with consoles and Apple highlights the argument I've been making for a while: AMD should have been leveraging their semi-custom/console team to develop similar PC APUs. They should have pivoted their whole dGPU business in that direction as a means of being competitive and disrupting that market instead of just floundering at it. It would have helped limit OEMs' ability to screw things up (while also doing what they claim they want, with AMD more or less doing their work for them), and it would have utilized one of AMD's major strengths and advantages versus Intel and Nvidia. But I digress.

Eh, a 14-year-old game that was never well optimized for more than one core, maybe two, isn't that relevant, other than as evidence of Blizzard's sloppiness. It's just a matter of whether I can now run the game at a decent fps with 200+ units on screen (or in game, as it were) on a Big Game Hunters-type map with 8 players, or on one of the crazy tower-defense maps in custom games.

It certainly isn't going to sell me on a Zen 5 chip.

I get your argument, and it's not like this suddenly makes it some amazing CPU, but such long-term viability of software is a major strength of the PC, so I think it does matter. I don't keep up with gaming... uh, culture (?), but if people are still playing it (isn't it popular in competitive gaming?) then it's relevant to some. Kinda like CS, Minecraft, etc., where a lot of people are still playing. I think in some instances there are even benefits to how the game operates (modern hardware enables massive Minecraft builds, I believe?).
 

branch_suggestion

Senior member
Aug 4, 2023
826
1,795
106
Eh, a 14-year-old game that was never well optimized for more than one core, maybe two, isn't that relevant, other than as evidence of Blizzard's sloppiness. It's just a matter of whether I can now run the game at a decent fps with 200+ units on screen (or in game, as it were) on a Big Game Hunters-type map with 8 players, or on one of the crazy tower-defense maps in custom games.

It certainly isn't going to sell me on a Zen 5 chip.
It is a nice point of reference; the game seems to scale well with newer CPUs, so it is a useful outlier to look at.
Remember, the best CPU at its launch was Westmere, so there has been a lot of evolution since then.
Yeah, a dGPU (even AMD's own) should still handily outperform Strix Halo for a variety of reasons (especially in memory-bandwidth-constrained situations). And if AMD and/or OEMs are delusional, it might even end up at a similar price (you give up some form factor, but that might further help the dGPU: if they stuff Strix Halo into thin-and-lights that constrain its performance through heat or packaging, like restricting memory capacity for packaging reasons). Frankly, I'm not sure a dGPU wouldn't actually be better in a thin-and-light at this point, because it distributes the heat sources more. I like the idea of Strix Halo, but it feels like this is what AMD's APUs should have been already, and still isn't the large APU it could have been.
It should outperform the majority of dGPU laptops on the market.
Also, it is way nicer to cool one package than two: you have more room to fit extra cooling, and the overall power is lower, so there's less heat and more flexibility with the battery.
And it is over 400 mm² of silicon; that isn't small by any definition.
Adding to the recent "OEMs don't work with AMD because AMD doesn't help them" discussion, I think it highlights that issue, which is both valid and not. By that I mean AMD certainly could (and should) be doing more, like making reference designs (I'd honestly wish for more, but I've posted that diatribe multiple times at this point). But it also shows why AMD would hesitate: why do that when OEMs consistently do things like shipping lower-channel memory configs for no reason other than being cheap? They sabotage AMD to save pennies, drastically hurting performance or usability, like when they use awful cheap displays or exceedingly small batteries. They've shown that AMD can't stop them from doing that (short of simply not making such configs possible, but then OEMs would say the same thing, "AMD isn't working with us," by not letting them save money, and so nothing ultimately changes).
AMD internally would've pushed for big APUs for a long time, but there simply wasn't the TAM to justify it, on top of a lack of foundation to build from.
Apple is to thank for the big APU becoming a real market; they had the muscle to try it where AMD, for many reasons, couldn't.
I do think the comparison with consoles and Apple highlights the argument I've been making for a while: AMD should have been leveraging their semi-custom/console team to develop similar PC APUs. They should have pivoted their whole dGPU business in that direction as a means of being competitive and disrupting that market instead of just floundering at it. It would have helped limit OEMs' ability to screw things up (while also doing what they claim they want, with AMD more or less doing their work for them), and it would have utilized one of AMD's major strengths and advantages versus Intel and Nvidia. But I digress.
It still is a major strength, and RDNA has always been good for iGPUs.
The thing is, for such devices you need a lot of things in place already to make them work, like robust software support in relatively niche apps.
Being good in games and CPU workloads alone is not quite enough to get people to invest in a new swimlane.
 

Saylick

Diamond Member
Sep 10, 2012
4,035
9,454
136
This is likely outdated.

We now know the die size of the Zen 5 CCD (it is smaller), and there have also been some more recent leaks pointing to the Strix Halo SoC being bigger.
The SoC die should be TSMC N3E, as far as I understand. I also agree that the CCDs should be smaller than 90 mm², especially if they only have half the L3 of the desktop CCD.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,634
5,176
136
The SoC die should be TSMC N3E, as far as I understand. I also agree that the CCDs should be smaller than 90 mm², especially if they only have half the L3 of the desktop CCD.
Is that the case? I thought the Strix Halo CCDs would have the full implementation of Zen 5, like desktop and server, which has 32 MB of L3 and a die size of about 76 mm².
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
So I stopped paying attention; has the CCD latency thing been fixed yet? Or does it still take 3 business days for data to reach the other die? :p
 

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
So I stopped paying attention; has the CCD latency thing been fixed yet? Or does it still take 3 business days for data to reach the other die? :p

Kind of; a new AGESA version brought the round-trip time back down to one business day. Still, some could argue that dual-CCD desktop CPUs remain quite unusable for anything but running many single-threaded jobs in parallel.
 

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
You mean 99.999% of all software ever made?

Pretty much everything nowadays is multi-threaded. Dual-CCD CPUs always carry the possibility that the scheduler assigns dependent threads to different cache domains, making them perform worse than if they just used a single CCD.
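If you want to guard against that from inside your own code, thread affinity is the usual workaround. A minimal sketch (Linux/glibc; it assumes logical CPUs 0-15 map to CCD0, which varies by SKU, so check lscpu first):

```c
/* Sketch, not AMD's mechanism: keep two dependent threads inside one
 * cache domain by pinning them to CCD0. Assumes logical CPUs 0-15 are
 * CCD0 on a 16-core dual-CCD part; verify the real mapping with lscpu. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void pin_to_ccd0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 16; cpu++)   /* CCD0 = logical CPUs 0-15 (assumed) */
        CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    (void)arg;
    pin_to_ccd0();   /* both threads now share one L3 / cache domain */
    /* ... dependent work that shares cache lines goes here ... */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("both threads stayed on CCD0");
    return 0;
}
```

Build with gcc -pthread. Windows offers SetThreadAffinityMask for the same purpose.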
 

Det0x

Golden Member
Sep 11, 2014
1,465
4,999
136

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
Wouldn't it work better if windoze saw dual CCDs as 2 different CPUs?

That basically disables the second CCD for desktop use cases. Which is actually AMD's preferred profile for multithreaded software like games: disable the second CCD to prevent threads from accidentally landing on the wrong CCD.
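For what it's worth, you can approximate that per-process without touching BIOS. A rough sketch (Win32; it assumes the first 16 logical processors, mask 0xFFFF, belong to CCD0 on a 7950X/9950X-style part):

```c
/* Hypothetical sketch: confine the current process to CCD0 so the
 * scheduler cannot migrate its threads across cache domains. The mask
 * 0xFFFF = logical CPUs 0-15 = CCD0 is an assumption; confirm per SKU. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR ccd0_mask = 0xFFFF;  /* logical CPUs 0-15 (assumed CCD0) */

    if (!SetProcessAffinityMask(GetCurrentProcess(), ccd0_mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("process confined to CCD0\n");
    /* launch or run the latency-sensitive workload from here */
    return 0;
}
```

From a plain cmd prompt, start /affinity FFFF game.exe does roughly the same thing without any code.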
 
  • Like
Reactions: marees

CakeMonster

Golden Member
Nov 22, 2012
1,629
809
136
If the latency has been 'fixed', why does the 9950X chipset driver still install the core-parking/prioritization driver, while it does not on the 7950X? Are there still reasons to treat it differently?
 

yottabit

Golden Member
Jun 5, 2008
1,671
874
146
Do we know yet when more info on Strix Halo will come out? My next laptop will probably be either an M4 MBP or (preferably) Strix Halo, depending on how botched the execution and launch drivers are/aren't.
 

naukkis

Golden Member
Jun 5, 2002
1,020
853
136
If the latency has been 'fixed', why does the 9950X chipset driver still install the core-parking/prioritization driver, while it does not on the 7950X? Are there still reasons to treat it differently?

Both CCDs have their own cache domain, so latency isn't the main problem; having two cache domains is. If a multithreaded program needs to share modified cache lines between cores, bandwidth between cores and memory runs out, because the same links carry the traffic between CCDs. The best approach is to disable the other CCD entirely. Eight cores are mostly enough for desktop users, so AMD got a free pass on making their multiple cache domains work for the desktop: designs that are basically useless for most users, where the best experience is achieved by disabling them altogether.
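That modified-line sharing cost is easy to demonstrate with a ping-pong microbenchmark. A sketch (Linux; the core numbers are assumptions: with SMT-sibling-adjacent numbering, CPUs 0 and 2 are two different cores on CCD0 and CPU 16 is on CCD1, so verify with lscpu):

```c
/* Sketch of the cache-line ping-pong described above: two threads bounce
 * one atomic counter. Pin them to two cores on the same CCD, then to cores
 * on different CCDs, and compare the per-round-trip time. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

static atomic_int turn;               /* the contended cache line */

static void pin(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *player(void *arg)
{
    int me = (int)(long)arg;          /* 0 or 1: which turns this thread takes */
    pin(me ? 2 : 0);                  /* same CCD; change 2 to 16 for cross-CCD (assumed layout) */
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load(&turn) % 2 != me)
            ;                         /* spin until it is our turn */
        atomic_fetch_add(&turn, 1);   /* modified line bounces to the other core */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&a, NULL, player, (void *)0L);
    pthread_create(&b, NULL, player, (void *)1L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per round trip\n", ns / ITERS);
    return 0;
}
```

Run it same-CCD, then swap the 2 for 16 and rerun; you'd expect the cross-CCD round trip to come out several times slower, which is the whole business-days joke.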
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Kind of; a new AGESA version brought the round-trip time back down to one business day. Still, some could argue that dual-CCD desktop CPUs remain quite unusable for anything but running many single-threaded jobs in parallel.
Thanks!