Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)


Mopetar

Diamond Member
Jan 31, 2011
If AMD has "won", why do they still have less than 15% market share in laptops?

Same reason I pointed out as to why they can't really grow market share by significant amounts in a single generation: they don't have enough wafers. In this case AMD has decided they'd rather win more market share in the highly lucrative server market.

The only ways for AMD to gain market share are to increase their wafer allotment, create a smaller chiplet so they can get more dies per wafer, shift sales away from more profitable segments, or hope the market contracts while their own sales don't.
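To put the "smaller chiplet" option in rough numbers, here's a back-of-the-envelope sketch using the standard dies-per-wafer approximation (the die areas are illustrative guesses rather than official AMD figures, and yield and scribe lines are ignored):

```c
/* Dies-per-wafer estimate: dies ~= pi*r^2/A - pi*d/sqrt(2*A),
 * where d is the wafer diameter (mm) and A the die area (mm^2).
 * Build with: cc dpw.c -lm */
#include <math.h>
#include <stdio.h>

static int dies_per_wafer(double wafer_mm, double die_mm2) {
    const double pi = 3.141592653589793;
    double r = wafer_mm / 2.0;
    return (int)(pi * r * r / die_mm2 - pi * wafer_mm / sqrt(2.0 * die_mm2));
}

int main(void) {
    /* standard 300 mm wafer; ~71 mm^2 is roughly a Zen 4 CCD */
    printf("~71 mm^2 chiplet: %d dies/wafer\n", dies_per_wafer(300.0, 71.0));
    printf("~140 mm^2 die:    %d dies/wafer\n", dies_per_wafer(300.0, 140.0));
    return 0;
}
```

Halving the die area roughly doubles the candidate dies per wafer (and smaller dies yield better on top of that), which is the whole appeal of the small-chiplet route.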

That's because it was a usable $999 MacBook, not really because of the SoC in it.
There's really not a market for tablet parts otherwise.

The other big part of it (at least for me) was that at the time, a $$$$ Mac didn't seem expensive when, between scalpers and shortages, even building a mid-range PC was going to be close to a $2,000 project.

The design for the new MacBook also knocked it out of the park and fixed a lot of issues that people had with their previous notebooks. Otherwise you are correct that you can't really compare just their SoC. People buy the Mac because it's the whole package and they would still do so even if the SoC weren't quite as good as what AMD/Intel had.
 

jpiniero

Lifer
Oct 1, 2010
Same reason I pointed out as to why they can't really grow market share by significant amounts in a single generation: they don't have enough wafers. In this case AMD has decided they'd rather win more market share in the highly lucrative server market.

It's more TSMC's pricing, or maybe some apathy. Basically, until Q4 AMD's laptop products other than Cezanne were more or less irrelevant. That's why I think Phoenix was basically delayed six months - because they needed Lil Phoenix.
 

Fjodor2001

Diamond Member
Feb 6, 2010
The only ways for AMD to gain market share are to increase their wafer allotment
In that case, why doesn't AMD simply increase the number of wafers they order from TSMC? That is, if you're saying AMD could use those extra wafers for more laptop chips and that the market would absorb them.
 
Jul 27, 2020
People who are willing to try non-Intel laptops make up a small percentage of the population.

Another thing that might be preventing AMD from ordering too many wafers is the 5500U. AMD must've overproduced it tremendously. Laptops with it are selling for $349 and still aren't out of stock. Retailers must hate this CPU. I hate seeing it come up again and again in my searches for AMD laptops. I mean, what the heck, AMD?? I would've expected the 6600U to be more popular at this point in time, but apparently AMD didn't produce that much of it.
 

SteinFG

Senior member
Dec 29, 2021
Oh, some more leakers mention an 800-series chipset for AMD; seems like MLID had the right sources this time. I still have some doubts about the other contents of the leak, though.
 

tamz_msc

Diamond Member
Jan 5, 2017
Okay, so if we come back to Earth and consider that MLID may have been correct with the leaked slides - what do you think could be the reason for the huge disparity between the performance expectations here (30%+ IPC) and MLID's (10-15%+ IPC)?

My guess is probably that N4 doesn't quite give AMD the transistor budget to go for a super-fat core.
 

SteinFG

Senior member
Dec 29, 2021
Okay, so if we come back to Earth and consider that MLID may have been correct with the leaked slides - what do you think could be the reason for the huge disparity between the performance expectations here (30%+ IPC) and MLID's (10-15%+ IPC)?

My guess is probably that N4 doesn't quite give AMD the transistor budget to go for a super-fat core.
I'd actually wait until AMD gives concrete numbers.
 

uzzi38

Platinum Member
Oct 16, 2019
In the long run, AMD stands to lose more from the gains they've made in DC, because everybody and their grandmas are realizing that it doesn't make sense to lose margins to AMD and INTC by buying their CPUs for the data center when you can build your own. MSFT, Amazon, and Google are all going there.
This is entirely dependent on either:

a) Base ARM offerings being competitive with Intel/AMD designs, which really isn't a given. Recent gains haven't been very impressive; they've come either at significant increases in die area or in power consumption. (Zen, and even more so the Zen xC variants, compete extremely well against Cortex-X/V based designs with regard to power/perf/area.)

b) Those companies producing their own in-house cores to compete against AMD/Intel. Apple did a great job getting to the A14 generation, where they reached a level playing field, but have swiftly dropped off with no real improvements since. Ampere have almost vanished off the face of the earth, with their first in-house core debuting in AmpereOne. The only real hopes for this approach are Qualcomm with the Nuvia cores and potentially Nvidia, but we don't really know if the latter is still developing their own in-house cores, given Orin uses stock ARM ones instead - and those weren't even recent cores when Orin started shipping.

Also, losing margins to AMD/Intel is a great tagline, but you're missing the bit where all of those companies need to spend millions on R&D to develop their own in-house products, and then have to fight for wafer and packaging capacity with significantly less volume than AMD/Intel, since they're only serving their own needs. Reality isn't quite as rosy as "we get to save money if we do it ourselves". There's a good reason why AWS hasn't abandoned everything but Graviton - it's because they don't have a choice.
 

FlameTail

Diamond Member
Dec 15, 2021
This is entirely dependent on either:

a) Base ARM offerings being competitive with Intel/AMD designs, which really isn't a given. Recent gains haven't been very impressive; they've come either at significant increases in die area or in power consumption. (Zen, and even more so the Zen xC variants, compete extremely well against Cortex-X/V based designs with regard to power/perf/area.)
Ah, you haven't heard of ARM Blackhawk (Cortex-X5), the Ultimate ARM core to kill all custom ARM cores!
A word from ARM themselves about Blackhawk
b) Those companies producing their own in-house cores to compete against AMD/Intel. Apple did a great job getting to the A14 generation, where they reached a level playing field, but have swiftly dropped off with no real improvements since.
Right when the CPU architects left!
Ampere have almost vanished off the face of the earth, with their first in-house core debuting in AmpereOne. The only real hopes for this approach are Qualcomm with the Nuvia cores and potentially Nvidia, but we don't really know if the latter is still developing their own in-house cores, given Orin uses stock ARM ones instead - and those weren't even recent cores when Orin started shipping.
I doubt Nvidia would go to the trouble of designing custom ARM cores. Custom designs are difficult (look at the fate of Samsung's Mongoose). If Blackhawk is good enough, there's no need for custom cores.
 

Kolifloro

Member
Mar 15, 2023
Would you expect the idle power consumption delta between 'Strix Halo' and 'Strix Point' to be very significant?

I'm asking about a laptop that's on 24x7, where 80% of the time it's under light load and the remaining 20% under heavy load. I would prefer to wait for Halo... but what I don't want is a hot oven at idle...

Thank you!!!
 

S'renne

Member
Oct 30, 2022
Would you expect the idle power consumption delta between 'Strix Halo' and 'Strix Point' to be very significant?

I'm asking about a laptop that's on 24x7, where 80% of the time it's under light load and the remaining 20% under heavy load. I would prefer to wait for Halo... but what I don't want is a hot oven at idle...

Thank you!!!
I'm more worried about availability, since RDNA 3 laptops dried up, ngl.
 

tamz_msc

Diamond Member
Jan 5, 2017
Ah, you haven't heard of ARM Blackhawk (Cortex-X5), the Ultimate ARM core to kill all custom ARM cores!
A word from ARM themselves about Blackhawk

Right when the CPU architects left!

I doubt Nvidia would go to the trouble of designing custom ARM cores. Custom designs are difficult (look at the fate of Samsung's Mongoose). If Blackhawk is good enough, there's no need for custom cores.
Also, as per Arm's comments during their earnings release, Armv9 contributed 15% of revenue compared to 10% last quarter, and they say that v9 brings in royalties at twice the rate of v8.
 

BorisTheBlade82

Senior member
May 1, 2020
Would you expect the idle power consumption delta between 'Strix Halo' and 'Strix Point' to be very significant?

I'm asking about a laptop that's on 24x7, where 80% of the time it's under light load and the remaining 20% under heavy load. I would prefer to wait for Halo... but what I don't want is a hot oven at idle...

Thank you!!!
Well, that heavily depends on the efficiency of the D2D interconnect. I would say yes, there might be a significant difference, unless AMD surprises us with something like MTL's LP island.
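For what it's worth, once hardware is out this is easy to sanity-check on Linux. A minimal sketch using the powercap/RAPL sysfs interface (the intel-rapl:0 path is an assumption; on recent kernels AMD packages are exposed through the same driver, and this reads package power, not whole-laptop draw):

```c
/* Rough idle package-power sampler via Linux's RAPL powercap interface.
 * energy_uj is a (wrapping) cumulative counter in microjoules.
 * Run at idle, typically as root: cc rapl.c && sudo ./a.out */
#include <stdio.h>
#include <unistd.h>

static long long read_uj(const char *path) {
    FILE *f = fopen(path, "r");
    long long uj = -1;
    if (!f) return -1;
    if (fscanf(f, "%lld", &uj) != 1) uj = -1;
    fclose(f);
    return uj;
}

int main(void) {
    const char *p = "/sys/class/powercap/intel-rapl:0/energy_uj";
    long long a = read_uj(p);
    if (a < 0) { perror(p); return 1; }
    sleep(10);                       /* sample over 10 seconds */
    long long b = read_uj(p);
    if (b < 0) { perror(p); return 1; }
    /* microjoules over seconds -> microwatts; /1e6 -> watts */
    printf("avg package power: %.2f W\n", (double)(b - a) / 10.0 / 1e6);
    return 0;
}
```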
 

HurleyBird

Platinum Member
Apr 22, 2003
Okay, so if we come back to Earth and consider that MLID may have been correct with the leaked slides - what do you think could be the reason for the huge disparity between the performance expectations here (30%+ IPC) and MLID's (10-15%+ IPC)?

My guess is probably that N4 doesn't quite give AMD the transistor budget to go for a super-fat core.

On one hand, AMD prefers to be a bit conservative. On the other hand, there's always someone making outlandish claims about any future product for attention. I assume the truth is somewhere in between.
 

DisEnchantment

Golden Member
Mar 3, 2017
It is very odd for AMD to send patches before the HW is shipping; it's not as if these patches are needed to make Zen 5 work properly. I suppose Zen 5 needs a lot of code to be recompiled to properly extract performance from it. With Zen 4, everything was more or less the same width at the front end and back end vs Zen 3, so it was not important to upstream early.
That said, they can get it into GCC, but LLVM 18.x has already been branched, so I'm not sure they can add it there unless the release really is much later this year.

And probably, like Mike already mentioned, they might get 10-15% IPC (surprise!! it's the same value as in the slides) on older code bases:
With the four-wide decode, for example, a lot of the compilers have optimizations they do because you have a four-wide machine. But when we give them something wider, they will be updated to realize how to compile the code to make it even better. So we'll see that we only managed to get 10 to 15% IPC on these older codes when we launched, but as the compilers develop, they'll be able to extract more and more out of our future designs based on what they get out of our current design.

For benchmarks compiled from source (like SPEC) it might show a bigger IPC uplift, so AMD upstreaming early means they want to show better benchmark numbers at launch. So the likes of Phoronix will give out better reviews for Zen 5 than, say, the typical YouTube influencer.
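To make the recompile effect concrete, here's a minimal sketch (assuming a GCC or Clang build that includes the newly upstreamed znver5 target; the double-pumped vs full-width note reflects the Zen 4 vs Zen 5 SIMD changes discussed here):

```c
/* Sketch: the same loop auto-vectorized against different machine models.
 *   gcc -O3 -march=znver4 -S saxpy.c   # Zen 4 model: AVX-512 double-pumped
 *   gcc -O3 -march=znver5 -S saxpy.c   # Zen 5 model: wider core, full-width SIMD
 * Existing binaries keep their old tuning, which is why launch-day IPC on
 * old code can undershoot what freshly recompiled code shows. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];   /* vector width, unroll factor, and scheduling
                               all come from the -march machine model */
}
```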

As it is, the core seems absolutely humongous: doubled SIMD width everywhere, +50% integer units. I doubt Mike Clark and team would make a design like that without making sure the core can be well fed.