Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)


Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
Well, that'd be good news for APUs.

You get higher VRAM quantity for the same price.

I wonder why big APUs like the upcoming Strix Halo haven't taken off yet. Did we have no idea until Apple showed us how to do it with their SoCs with massive iGPUs?

Instead of paying for two separate components (CPU + dGPU), laptop OEMs will have to pay for only one component (big APU). This would give cost savings as well as space savings in the laptop's internals.

Actually 4 components:
- CPU
- dGPU
- DRAM
- VRAM

vs.

- a single package with CPU + GPU + LPDDR.

The advantage of AMD and Intel doing it that way rather than following what Apple has started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
 

MadRat

Lifer
Oct 14, 1999
11,999
307
126
When I suggested on-chip and on-package memory a few years ago - to compete with Apple products - people lost their freaking minds. They said memory access couldn't discern which memory was what speed, so this was nonsensical. They also said it would be too complex. And there were supposedly impossible power considerations.

So what changed?
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
Actually 4 components:
- CPU
- dGPU
- DRAM
- VRAM

vs.

- a single package with CPU + GPU + LPDDR.

The advantage of AMD and Intel doing it that way rather than following what Apple has started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
LPDDR is really the silver bullet. The saviour. The hero.

Especially LPDDR5X and beyond.

Solid amount of bandwidth, compact package dimensions, more cost efficient than GDDR, and more power efficient than DDR.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
I really don't get the memory bus argument against big APUs.

A CPU de facto comes with a 128-bit bus.

Let's say a certain dGPU comes with a 128-bit bus as well.

Then what's the issue in combining both to get a 256-bit bus for the APU? It's not going to be more costly, is it? You still have the same buses as before, except that now it's in one component (APU) instead of two (CPU + GPU).
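
A minimal back-of-the-envelope sketch of that point (the 8533 MT/s rate is just a placeholder borrowed from the post below, not a claim about any specific part): two separate 128-bit buses and one unified 256-bit bus have the same total width, so the aggregate peak bandwidth comes out identical.

```python
# Illustrative sketch only; 8533 MT/s is a placeholder transfer rate.
def peak_bw_gbps(rate_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfer rate (MT/s) x bus width (bits) / 8 / 1000."""
    return rate_mts * bus_width_bits / 8 / 1000

RATE = 8533  # MT/s

split = peak_bw_gbps(RATE, 128) + peak_bw_gbps(RATE, 128)  # separate CPU pool + dGPU pool
unified = peak_bw_gbps(RATE, 256)                          # one shared APU pool

print(split, unified)  # 273.056 273.056 -> same total width, same aggregate bandwidth
```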
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
LPDDR5X-8533 MT/s

128 bit = 136 GB/s
256 bit = 273 GB/s
384 bit = 409 GB/s
512 bit = 546 GB/s

546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.
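
Those figures are just rate x width; a quick sketch that reproduces the table above (assuming the 8533 MT/s effective rate quoted there):

```python
# Peak bandwidth for LPDDR5X-8533 at the bus widths listed above.
# GB/s = transfer rate (MT/s) x bus width (bits) / 8 bits-per-byte / 1000
for bits in (128, 256, 384, 512):
    gbps = 8533 * bits / 8 / 1000
    print(f"{bits}-bit = {gbps:.1f} GB/s")
# 128-bit = 136.5, 256-bit = 273.1, 384-bit = 409.6, 512-bit = 546.1 GB/s
```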
 

Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
LPDDR is really the silver bullet. The saviour. The hero.

Especially LPDDR5X and beyond.

Solid amount of bandwidth, compact package dimensions, more cost efficient than GDDR, and more power efficient than DDR.
The only thing I am wondering about is how the latency differs, and if it makes a material difference.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
LPDDR5X-8533 MT/s

128 bit = 136 GB/s
256 bit = 273 GB/s
384 bit = 409 GB/s
512 bit = 546 GB/s

546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.

256-bit = 273 GB/s is the Strix Halo bandwidth, which is roughly the same as the desktop 7600 / 4600 at 288 GB/s.

I think it is a good compromise for a notebook APU (minus dGPU). Hopefully it does become mainstream.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
They've literally just lost share last quarter.
What are you on?

They refer to the time when Apple was the only vendor to offer the higher-end Intel iGPUs (GT3e) in laptops. (I actually got one of those for my kids.)

At that time, Apple was still gaining market share.
 

eek2121

Diamond Member
Aug 2, 2005
3,410
5,049
136
That I've no idea, but cranking higher speeds on PCBs is a PITA.
Indeed. We are bound to hit a ceiling at some point. The amount of electrical work that goes into PCIe Gen 5 is bordering on ridiculous. I dread seeing PCIe Gen 6 or 7. Glad I'm not doing THAT type of work.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
I think on N3 we will see chiplet-based designs replace AMD's monolithic APU designs.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
nodes don't matter.
For mainstream, non-Halo products - they do ;).

I was not talking about the Halo product type, but the mainstream Strix Point replacement on 3 nm. For those, you have to split them apart - at least the caches.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
You don't attach system-level design decisions to any specific nodes.

CPU caches will stay CPU caches.
The wider the disparity between the leading node and N6, the stronger the incentive to go to chiplets.

I am also guessing that after Strix (regular), the next mainstream notebook product will use chiplets.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
You don't attach system-level design decisions to any specific nodes.
You know full well what I mean, yet you still talk about a different thing ;).

Yes, you are correct that you don't attach system-level design decisions to specific nodes. It's just that the economics of the 3 nm process and smaller nodes will require companies to cut costs as hard as possible on mainstream offerings.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,647
5,186
136
No, that wasn’t it. Sh*t posting about an already doomed future product because *reasons*. Otherwise, please carry on.

I don't take everything from MLID as gospel, which is where the "no V-Cache on Strix Halo" claim comes from.