Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)


Joe NYC

Golden Member
Jun 26, 2021
1,927
2,265
106
Well, that'd be good news for APUs.

You get higher VRAM quantity for the same price.

I wonder why big APUs like the upcoming Strix Halo haven't taken off yet. Did we really have no idea until Apple showed us how to do it with their SoCs and massive iGPUs?

Instead of paying for two separate components (CPU + dGPU), laptop OEMs would have to pay for only one component (a big APU). This would give cost savings as well as space savings in the laptop's internals.

Actually 4 components:
- CPU
- dGPU
- DRAM
- VRAM

vs.

- a single package with CPU + GPU + LPDDR.

The advantage of AMD and Intel doing it that way rather than following what Apple has started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
 

MadRat

Lifer
Oct 14, 1999
11,910
238
106
When I suggested on-chip and on-package memory a few years ago - to compete with Apple products - people lost their freaking minds. They said memory access couldn't discern which memory was what speed, so this was nonsensical. They also said it would be too complex. And there were impossible power considerations.

So what changed?
 

FlameTail

Platinum Member
Dec 15, 2021
2,123
1,154
106
Actually 4 components:
- CPU
- dGPU
- DRAM
- VRAM

vs.

- a single package with CPU + GPU + LPDDR.

The advantage of AMD and Intel doing it that way rather than following what Apple has started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
LPDDR is really the silver bullet. The saviour. The hero.

Especially LPDDR5X and beyond.

Solid amount of bandwidth, compact package dimensions, more cost-efficient than GDDR, and more power-efficient than DDR.
 

FlameTail

Platinum Member
Dec 15, 2021
2,123
1,154
106
I really don't get the memory bus argument against big APUs.

A CPU de facto comes with a 128-bit bus.

Let's say a certain dGPU comes with a 128-bit bus as well.

Then what's the issue in combining both to get a 256-bit bus for the APU? It's not going to be more costly, is it? You still have the same buses as before, except that now it's in one component (APU) instead of two (CPU + dGPU).
 

FlameTail

Platinum Member
Dec 15, 2021
2,123
1,154
106
LPDDR5X-8533 MT/s

128-bit = 136 GB/s
256-bit = 273 GB/s
384-bit = 409 GB/s
512-bit = 546 GB/s

546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.
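For anyone wanting to check these figures, they fall straight out of peak bandwidth = data rate × bus width in bytes. A minimal sketch (the function name is mine; the 8533 MT/s rate and the bus widths are the ones quoted above):

```python
# Peak theoretical bandwidth = transfers per second * bytes per transfer.
def peak_bandwidth_gbs(data_rate_mts: int, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given data rate (MT/s) and bus width (bits)."""
    return data_rate_mts * (bus_width_bits / 8) / 1000

for width in (128, 256, 384, 512):
    print(f"{width}-bit LPDDR5X-8533: {peak_bandwidth_gbs(8533, width):.1f} GB/s")
```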
 

Joe NYC

Golden Member
Jun 26, 2021
1,927
2,265
106
LPDDR is really the silver bullet. The saviour. The hero.

Especially LPDDR5X and beyond.

Solid amount of bandwidth, compact package dimensions, more cost-efficient than GDDR, and more power-efficient than DDR.
The only thing I am wondering about is how the latency differs, and if it makes a material difference.
 

Joe NYC

Golden Member
Jun 26, 2021
1,927
2,265
106
LPDDR5X-8533 MT/s

128-bit = 136 GB/s
256-bit = 273 GB/s
384-bit = 409 GB/s
512-bit = 546 GB/s

546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.

256-bit = 273 GB/s is the Strix Halo bandwidth, which is roughly the same as desktop cards like the RX 7600 / RTX 4060 (~288 GB/s).

I think it is a good compromise for a notebook APU (with no dGPU). Hopefully it becomes mainstream. A quick sanity check of that comparison is sketched below.
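The sketch assumes the desktop RX 7600's stock configuration of 128-bit GDDR6 at 18 Gbps; the Strix Halo figure is the 256-bit LPDDR5X-8533 number from the table above:

```python
# Same peak-bandwidth arithmetic as the table: data rate (MT/s) * bus width (bytes).
strix_halo = 8533 * (256 / 8) / 1000    # 256-bit LPDDR5X-8533 -> ~273 GB/s
rx_7600    = 18000 * (128 / 8) / 1000   # 128-bit GDDR6 @ 18 Gbps -> 288 GB/s
print(f"Strix Halo ~{strix_halo:.0f} GB/s vs RX 7600 {rx_7600:.0f} GB/s")
```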
 

adroc_thurston

Platinum Member
Jul 2, 2023
2,040
2,613
96

Joe NYC

Golden Member
Jun 26, 2021
1,927
2,265
106
You don't attach system-level design decisions to any specific nodes.

CPU caches will stay CPU caches.
The wider the disparity between the leading node and N6, the stronger the incentive to go to chiplets.

I am also guessing that after Strix (regular), the following mainstream notebook product will use chiplets.
 

Glo.

Diamond Member
Apr 25, 2015
5,696
4,533
136
You don't attach system-level design decisions to any specific nodes.
You know full well what I mean, yet you still talk about a different thing ;).

Yes, you are correct, that you don't attach system-level design decision to specific nodes. Its just that economies of 3 nm process and smaller nodes will require companies to cut costs as hard as possible on mainstream offerings.