Discussion Zen 5 Discussion (EPYC Turin and Strix Point/Granite Ridge - Ryzen 8000)



Exist50

Golden Member
Aug 18, 2016
1,694
1,778
136
Lead times on design are usually around four years and v-cache isn't just something you can bolt on to an existing chip.
Four years covers basically the very beginning of a project, before the requirements have solidified, all the way through to shipping. Regardless, the beauty of AMD's implementation is that they developed something they can happily sell with or without hybrid bonding, at minimal overhead. So they could plumb in the interface, and if the tech wasn't ready, no big deal.
I'd imagine that Intel has something in the works that uses this sort of technology by now, or have started figuring out how to do something similar with their own future processes.
They've claimed that their hybrid bonding tech will be ready sometime this year, but how true that is or when we'll actually see a product with it remains to be seen.
 

deasd

Senior member
Dec 31, 2013
454
594
136

It doesn't appear to be using N4.
This slide was an obsolete rumor from May 2021. These guys just thought Zen4D referred to Zen4 Dense, but that turned out to be wrong once Zen4c (Cloud) was officially confirmed.

'Advanced node' next to Strix Point on the newest slide could mean AMD hasn't decided which node to use.
 

BorisTheBlade82

Senior member
May 1, 2020
381
543
106
This slide was an obsolete rumor from May 2021. These guys just thought Zen4D referred to Zen4 Dense, but that turned out to be wrong once Zen4c (Cloud) was officially confirmed.

'Advanced node' next to Strix Point on the newest slide could mean AMD hasn't decided which node to use.
Why do you think it is obsolete? Just because C != D? Well, marketing names may change over time.
 

Mopetar

Diamond Member
Jan 31, 2011
7,096
4,540
136
Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.
 

deasd

Senior member
Dec 31, 2013
454
594
136
Why do you think it is obsolete? Just because C != D? Well, marketing names may change over time.
You just answered your own question :)

1. We cannot use a codename like Zen4D (Dense) that never existed, especially since the rumored meaning is very different from the official Zen4c (Cloud).

2. Strix Point's process node is still uncertain, given the 'Advanced Node' wording.

It would be strange for AMD to use a new full node (3nm, 5nm, 7nm, etc.) on APUs ahead of non-APUs. If Zen 5 non-APUs are on N4, it would be a surprise for Strix Point to be on N3.

Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.
It is an old rumor. I don't know how the Zen4D rumor got started 20 months ago, but we can be sure the old slide is obsolete or wrong, because some products on the slide, like Warhol, never materialized.
 

BorisTheBlade82

Senior member
May 1, 2020
381
543
106
Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.
The other way around: it is entirely possible that they named Bergamo Zen4D internally back in 2021 and before, but switched to Zen4c later on.

You just answered your own question :)

1. We cannot use a codename like Zen4D (Dense) that never existed, especially since the rumored meaning is very different from the official Zen4c (Cloud).

2. Strix Point's process node is still uncertain, given the 'Advanced Node' wording.

It would be strange for AMD to use a new full node (3nm, 5nm, 7nm, etc.) on APUs ahead of non-APUs. If Zen 5 non-APUs are on N4, it would be a surprise for Strix Point to be on N3.



It is an old rumor. I don't know how the Zen4D rumor got started 20 months ago, but we can be sure the old slide is obsolete or wrong, because some products on the slide, like Warhol, never materialized.
I am having a hard time understanding your first two points; somehow the words do not add up to a line of reasoning for me.

I certainly agree with you on the node point, but on the other hand it would not be totally out of this world either. Maybe Granite Ridge comes too early for N3E at reasonable prices, or the wafer capacity would not be feasible at that point in time.

As to Warhol: just because it did not materialize in an end product does not mean it never existed at some stage. Maybe AMD at some point simply decided that it did not need it, or there were opportunity costs involved (i.e., not worth the effort).
 

jamescox

Senior member
Nov 11, 2009
588
1,016
136
Thanks. I did speculate that MI300 contains 6 APU chiplets, but there are other possibilities, of course. And, someone intriguingly hinted to me that my speculation is wrong. Anyway, here is my mock-up based on the slide rendering and the actual chip photo:




Now, continue the "Zen 5" speculation!
I suspect Zen 5 will use some of the stacking and connectivity tech used for RDNA3 and MI300, so it is kind of relevant. The things you have labeled as Zen 4 cores look more like Infinity Cache, or maybe L2 cache, or something like that. I have seen the small chips between the HBM3 referred to as structural silicon (SemiAccurate, I think). The chiplet you have labeled as "adaptive chiplet" looks exactly like a Zen 4 chiplet with 8 cores. The thing you have labeled "AI chiplet" may be partially FPGA. FPGAs have large arrays, so it could look like cache. It could also just be all AI hardware, which would have large, regular arrays of things in addition to possible caches.

It would be easier to tell if I knew the die size of HBM3. I didn't find it in a quick search and I don't have time to search more today. I thought HBM2 was around 100 mm².

The rendering may be completely inaccurate, but if the "AI chiplets" are actually CPU cores, then where do the 24 cores come from? There are essentially 3 GPUs (2 chiplets each), so having 3x8 cores would make sense. I don't know where the other 8 cores would be hiding unless there is something weird like 2 low-power cores in each base die.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,419
4,787
136
Has there been some feature or instruction that was present in Zen 4 QS samples but didn't get released in final silicon, based on your review of these patches?
We'll never know such things, because in traditional AMD style they release manuals and patches only after product launch, save for the vague, unintelligible patches.
AFAIK, UAI is not working on Zen 4 currently. But I believe it is only deactivated in microcode.
 

Glo.

Diamond Member
Apr 25, 2015
5,244
3,811
136

Any 128-bit memory controller will get 156 GB/s from this.

To put it into perspective, the Radeon RX 6500 XT gets 144 GB/s from 18 Gbps GDDR6 memory on a 64-bit bus.
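For reference, the peak-bandwidth arithmetic behind these figures can be sketched in a few lines of Python (my own back-of-the-envelope; the 9.6 Gb/s per-pin LPDDR5X rate is inferred from the numbers in this thread, not quoted from a spec sheet):

```python
def peak_bandwidth_gbps(rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: per-pin rate (Gb/s) times
    bus width (bits), divided by 8 bits per byte."""
    return rate_gbps_per_pin * bus_width_bits / 8

# Assumed 9.6 Gb/s LPDDR5X on a 128-bit controller:
print(peak_bandwidth_gbps(9.6, 128))  # 153.6 (GB/s), slightly below the 156 quoted

# RX 6500 XT: 18 Gbps GDDR6 on a 64-bit bus:
print(peak_bandwidth_gbps(18, 64))    # 144.0 (GB/s)
```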
 

jamescox

Senior member
Nov 11, 2009
588
1,016
136
I suspect Zen 5 will use some of the stacking and connectivity tech used for RDNA3 and MI300, so it is kind of relevant. The things you have labeled as Zen 4 cores look more like Infinity Cache, or maybe L2 cache, or something like that. I have seen the small chips between the HBM3 referred to as structural silicon (SemiAccurate, I think). The chiplet you have labeled as "adaptive chiplet" looks exactly like a Zen 4 chiplet with 8 cores. The thing you have labeled "AI chiplet" may be partially FPGA. FPGAs have large arrays, so it could look like cache. It could also just be all AI hardware, which would have large, regular arrays of things in addition to possible caches. It would be easier to tell if I knew the die size of HBM3. I didn't find it in a quick search and I don't have time to search more today. I thought HBM2 was around 100 mm². The rendering may be completely inaccurate, but if the "AI chiplets" are actually CPU cores, then where do the 24 cores come from? There are essentially 3 GPUs (2 chiplets each), so having 3x8 cores would make sense. I don't know where the other 8 cores would be hiding unless there is something weird like 2 low-power cores in each base die.
Replying to myself...

I am wondering if the layout pictured just isn't the 24-core device. Perhaps it is a 16-core with an FPGA or other accelerator. They apparently can put more than one type of chip on top of the base die. The base die looks like it might be able to fit 4 CPU chiplets, so I am wondering if the 24-core variant is really the top end. That seems like a small number of cores compared to what Nvidia will have with each Grace Hopper package (144?), although that may have a more powerful GPU.
 

TESKATLIPOKA

Golden Member
May 1, 2020
1,280
1,527
106

Any 128-bit memory controller will get 156 GB/s from this.

To put it into perspective, the Radeon RX 6500 XT gets 144 GB/s from 18 Gbps GDDR6 memory on a 64-bit bus.
It's actually 153.6 GB/s.
This is nice and all, but the price will be ridiculously high and only a few models will use it at best.
But at least the option is there.
 

soresu

Golden Member
Dec 19, 2014
1,875
1,027
136
It's actually 153.6 GB/s.
This is nice and all, but the price will be ridiculously high and only a few models will use it at best.
But at least the option is there.
The new CAMM modules may offer these as an option, since they are supposed to support LPDDR as well.
 

TESKATLIPOKA

Golden Member
May 1, 2020
1,280
1,527
106
It will be expensive now; it remains to be seen whether it will stay expensive by the time these APUs arrive.
It was only just developed; it's only at the sample-production stage for customers.
It will take time until it's available in the first products, maybe in Q4 2023?
By the time Strix Point arrives the price could be lower, true, but it will still be higher than that of lower-clocked memory.
Strix with this memory would once more not be for cheaper laptops but for premium ones.
I always thought an APU was meant as a cheaper alternative to a CPU + dGPU combo, yet it's not.
 

TESKATLIPOKA

Golden Member
May 1, 2020
1,280
1,527
106
It's 64-bit, as well.

LPDDR5X (standard) is 16-bit.

;)
It doesn't have a wider bus per package.

LPDDR5 is also 64-bit per package.
Samsung
At 6,400 megabits per second (Mb/s), the new LPDDR5 is about 16 percent faster than the 12Gb LPDDR5 (5,500Mb/s) found in most of today’s flagship mobile devices. When made into a 16GB package, the LPDDR5 can transfer about 10 5GB-sized full-HD movies, or 51.2GB of data, in one second.
51,200 MB/s ÷ 6,400 Mb/s = 8 bytes = 8x8 bits ⇒ a 64-bit package.
Rembrandt needs two such packages for 128-bit and 32GB of RAM.
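Spelling out that arithmetic as a quick sketch (using only the figures Samsung quotes above):

```python
# Per-package bus width implied by Samsung's quoted LPDDR5 numbers.
pin_rate_mbps = 6_400      # per-pin transfer rate, Mb/s
package_bw_MBps = 51_200   # quoted 51.2 GB/s package bandwidth, in MB/s

# MB/s divided by Mb/s leaves bytes transferred per pin-cycle,
# i.e. the package width in bytes.
width_bytes = package_bw_MBps / pin_rate_mbps
width_bits = int(width_bytes * 8)

print(width_bits)      # 64 -> each LPDDR5 package is 64 bits wide
print(2 * width_bits)  # 128 -> Rembrandt needs two packages for its 128-bit bus
```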
 
