Zen 5 Discussion (EPYC Turin and Strix Point/Granite Ridge - Ryzen 8000)


DisEnchantment

Golden Member
Mar 3, 2017
Well, since many folks have already got their hands on Zen 4 CPUs (or are at least about to), it's time to discuss Zen 5 (Zen 4 is already old news :D)

We already have the roadmaps and key technologies like AIE
[Attached: AMD roadmap and technology slides]
Some things we already knew
  • Dr. Lisa Su and Forrest Norrod already mentioned during the Q&A at FAD 2022 (May 9th) that Zen 5 will come in N3 and N4/N5 variants, so it will be on multiple nodes.
  • Mark Papermaster highlighted that it will be a ground-up architecture; also mentioned in the last paragraph here.
  • Mike Clark mentioned that work on Zen 5 already started in 2018. This means that by the time it launches, Zen 5 will have been in conception, planning, and development for much longer than the original Zen program.
For a CPU architecture launching in early 2024 in the form of Strix Point for the OEM notebook refresh, tape-out should be happening within the next few months.
Share your thoughts


"I just wanted to close my eyes, go to sleep, and then wake up and buy this thing. I want to be in the future, this thing is awesome and it's going be so great - I can't wait for it." - Mike Clark
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
Tbh, it wouldn't be surprising if Strix uses N4P and the server parts use N3.
[Attached: 2021-05-28 roadmap slide]

It doesn't appear to be using N4.
 

Exist50

Platinum Member
Aug 18, 2016
Lead times on design are usually around four years and v-cache isn't just something you can bolt on to an existing chip.
Four years spans basically everything from the very beginning of a project, before the requirements have solidified, all the way to shipping. Regardless, the beauty of AMD's implementation is that they've developed something they can happily sell with or without hybrid bonding, with minimal overhead. So they could plumb in the interface, but if the tech wasn't ready, no big deal.
I'd imagine that Intel has something in the works that uses this sort of technology by now, or has started figuring out how to do something similar on their own future processes.
They've claimed that their hybrid bonding tech will be ready sometime this year, but how true that is or when we'll actually see a product with it remains to be seen.
 

deasd

Senior member
Dec 31, 2013
[Attached: 2021-05-28 roadmap slide]

It doesn't appear to be using N4.

This slide was an obsolete rumor from May 2021. These guys just thought Zen4D referred to Zen4 Dense, but that turned out to be wrong once Zen4C (Cloud) was officially confirmed.

'Advanced node' next to Strix Point on the newest slide could mean AMD hasn't decided which node to use.
 
Last edited:
  • Like
Reactions: Kaluan

BorisTheBlade82

Senior member
May 1, 2020
This slide was an obsolete rumor from May 2021. These guys just thought Zen4D referred to Zen4 Dense, but that turned out to be wrong once Zen4C (Cloud) was officially confirmed.

'Advanced node' next to Strix Point on the newest slide could mean AMD hasn't decided which node to use.
How come you think it is obsolete? Just because c != D ? Well, marketing names may change over time.
 

Mopetar

Diamond Member
Jan 31, 2011
Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.
 

deasd

Senior member
Dec 31, 2013
How come you think it is obsolete? Just because c != D ? Well, marketing names may change over time.
You just answered your own question :)

1. We can't use a codename like Zen4D (Dense) that doesn't exist, especially since the rumored meaning is very different from the official Zen4C (Cloud).

2. Strix Point's process node is still uncertain, given the 'Advanced Node' wording.

It would be strange for AMD to use a new full node (3nm, 5nm, 7nm, etc.) on APUs ahead of non-APUs. If the Zen 5 non-APUs are on N4, Strix Point on N3 would be a surprise.

Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.

It is an old rumor. I don't know how the Zen4D rumor got filled in 20 months ago, but we can be sure the old slide is obsolete or wrong, because some products on it never came to exist. Like Warhol.
 
Last edited:

BorisTheBlade82

Senior member
May 1, 2020
Considering AMD is using D as part of their v-cache products (e.g., 5800X3D), it's unlikely that they'd use it for a separate product designation.
The other way around: it is entirely possible that they named Bergamo Zen4D internally back in 2021 and before, but switched to Zen4c later on.

You just answered your own question :)

1. We can't use a codename like Zen4D (Dense) that doesn't exist, especially since the rumored meaning is very different from the official Zen4C (Cloud).

2. Strix Point's process node is still uncertain, given the 'Advanced Node' wording.

It would be strange for AMD to use a new full node (3nm, 5nm, 7nm, etc.) on APUs ahead of non-APUs. If the Zen 5 non-APUs are on N4, Strix Point on N3 would be a surprise.

It is an old rumor. I don't know how the Zen4D rumor got filled in 20 months ago, but we can be sure the old slide is obsolete or wrong, because some products on it never came to exist. Like Warhol.
I am having a hard time understanding your first two points; somehow the words do not add up into reasoning for me.

I certainly agree with you on the node point. But on the other hand, it also would not be totally out of this world: maybe Granite Ridge comes too early for N3E at reasonable prices, or the wafer capacity would not be feasible at that point in time.

As to Warhol: just because it did not materialize in an end product does not mean it never existed at some stage. Maybe AMD at some point simply decided that it did not need it, or there were opportunity costs involved (AKA not worth the effort).
 
Last edited:
  • Like
Reactions: Kaluan

jamescox

Senior member
Nov 11, 2009
Thanks. I did speculate that MI300 contains 6 APU chiplets, but there are other possibilities, of course. And, someone intriguingly hinted to me that my speculation is wrong. Anyway, here is my mock-up based on the slide rendering and the actual chip photo:

[Attached: Instinct MI300 chiplet speculation mock-up]



Now, continue the "Zen 5" speculation!
I suspect Zen 5 will use some of the stacking and connectivity tech used for RDNA3 and MI300, so it is kind of relevant.

The things you have labeled as Zen 4 cores look more like Infinity Cache, or maybe L2 cache, or something like that. I have seen the small chips between the HBM3 referred to as structural silicon (SemiAccurate, I think). The chiplet you have labeled as "adaptive chiplet" looks exactly like a Zen 4 chiplet with 8 cores. The thing you have labeled "AI chiplet" may be partially FPGA; FPGAs have large arrays, so it could look like cache. It could also just be all AI hardware, which would have large, regular arrays of things in addition to possible caches.

It would be easier to tell if I knew the die size of HBM3. I didn't find it in a quick search and I don't have time to search more today, but I thought HBM2 was around 100 mm2.

The rendering may be completely inaccurate, but if the "AI chiplets" are actually CPU cores, then where do the 24 cores come from? There are essentially 3 GPUs (2 chiplets each), so having 3x8 cores would make sense. I don't know where the other 8 cores would be hiding, unless there is something weird like 2 low-power cores in each base die.
 

DisEnchantment

Golden Member
Mar 3, 2017
Has there been some feature or instruction that was present in Zen 4 QS samples but didn't get released in final silicon, based on your review of these patches?
We will never know such things, because in traditional AMD style they release manuals and patches after product launch, save for the vague, unintelligible patches.
AFAIK, UAI is not working on Zen 4 currently. But I believe it is only deactivated in microcode.
 

Glo.

Diamond Member
Apr 25, 2015

Any 128-bit memory controller will get 156 GB/s from this.

To put that into perspective, the Radeon RX 6500 XT gets 144 GB/s from 18 Gbps GDDR6 memory on a 64-bit bus.
 

jamescox

Senior member
Nov 11, 2009
I suspect Zen 5 will use some of the stacking and connectivity tech used for RDNA3 and MI300, so it is kind of relevant.

The things you have labeled as Zen 4 cores look more like Infinity Cache, or maybe L2 cache, or something like that. I have seen the small chips between the HBM3 referred to as structural silicon (SemiAccurate, I think). The chiplet you have labeled as "adaptive chiplet" looks exactly like a Zen 4 chiplet with 8 cores. The thing you have labeled "AI chiplet" may be partially FPGA; FPGAs have large arrays, so it could look like cache. It could also just be all AI hardware, which would have large, regular arrays of things in addition to possible caches.

It would be easier to tell if I knew the die size of HBM3. I didn't find it in a quick search and I don't have time to search more today, but I thought HBM2 was around 100 mm2.

The rendering may be completely inaccurate, but if the "AI chiplets" are actually CPU cores, then where do the 24 cores come from? There are essentially 3 GPUs (2 chiplets each), so having 3x8 cores would make sense. I don't know where the other 8 cores would be hiding, unless there is something weird like 2 low-power cores in each base die.

Replying to myself...

I am wondering if the layout pictured just isn't the 24 core device. Perhaps it is a 16 core with an FPGA or other accelerator. They apparently can put more than one type of chip on top of the base die. The base die looks like it might be able to fit 4 cpu chiplets, so I am wondering if the 24 core variant is really the top end. This seems like a small number of cores compared to what Nvidia will have with each Grace Hopper package (144?), although that may have a more powerful gpu.
 
  • Like
Reactions: Joe NYC and Vattila

TESKATLIPOKA

Platinum Member
May 1, 2020

Any 128 bit memory controller will get 156 GB/s from this.

To put into perspective, Radeon RX 6500 XT from 18 Gbps GDDR6 memory on 64 bit bus gets 144 GB/s.
It's actually 153.6 GB/s.
This is nice and all, but the price will be ridiculously high and at best only a few models will use it.
But at least the option is there.
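For reference, both numbers fall out of the standard peak-bandwidth arithmetic (bus width in bytes times data rate); a minimal sketch, assuming the 9.6 GT/s LPDDR5X being discussed — the helper name is made up for illustration:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak theoretical bandwidth in GB/s: bytes per transfer times transfer rate."""
    return bus_width_bits / 8 * data_rate_gtps

# LPDDR5X at 9.6 GT/s on a 128-bit controller -> 153.6 GB/s
print(peak_bandwidth_gbs(128, 9.6))

# RX 6500 XT: 18 Gbps GDDR6 on a 64-bit bus -> 144.0 GB/s
print(peak_bandwidth_gbs(64, 18.0))
```

This also shows where the corrected 153.6 GB/s figure comes from: 16 bytes per transfer times 9.6 GT/s.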
 

TESKATLIPOKA

Platinum Member
May 1, 2020
It will be expensive now; it must be seen whether it will stay expensive by the time these APUs arrive.
It was just developed; it's only in sample production for customers.
It will take time until it's available in the first products, maybe in Q4 2023?
By the time Strix Point arrives the price could be lower, true, but it will still be higher than for lower-clocked memory.
Strix with this memory would once more be not for cheaper laptops but for premium ones.
I always thought an APU was meant as a cheaper alternative to a CPU+dGPU combo, yet it's not.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
Its 64 bit, as well.

LPDDR5X(standard) is 16 bit.

;).
It doesn't have a wider bus per package.

LPDDR5 is also 64-bit per package.
Samsung
At 6,400 megabits per second (Mb/s), the new LPDDR5 is about 16 percent faster than the 12Gb LPDDR5 (5,500Mb/s) found in most of today’s flagship mobile devices. When made into a 16GB package, the LPDDR5 can transfer about 10 5GB-sized full-HD movies, or 51.2GB of data, in one second.
51,200 MB/s ÷ 6,400 MT/s = 8 bytes per transfer => a 64-bit package
Rembrandt needs 2 such packages for 128-bit and 32GB RAM.
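Spelling out that shorthand, here is a quick sketch of the same arithmetic using only the figures Samsung quotes above:

```python
# Samsung's quoted figures: 6,400 MT/s per pin, 51.2 GB/s per 16GB package.
package_bandwidth_mb_s = 51200   # 51.2 GB/s expressed in MB/s
data_rate_mt_s = 6400            # million transfers per second, per pin

bytes_per_transfer = package_bandwidth_mb_s / data_rate_mt_s  # 8.0 bytes
bus_width_bits = int(bytes_per_transfer * 8)

print(bus_width_bits)  # 64 -> a 64-bit package, so Rembrandt needs two for 128-bit
```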
 
Last edited: