
Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Well, that'd be good news for APUs.

You get higher VRAM quantity for the same price.

I wonder why big APUs like the upcoming Strix Halo haven't taken off yet. Did we really have no idea until Apple showed us how to do it with their SoCs and massive iGPUs?

Instead of paying for two separate components (CPU+dGPU), laptop OEMs would have to pay for only one component (a big APU). This would give cost savings as well as space savings in the laptop's internals.

Actually 4 components:
- CPU
- dGPU
- DRAM
- VRAM

vs.

- a single package with CPU + GPU + LPDDR.

The advantage of AMD and Intel doing it that way rather than following what Apple started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
 
When I suggested on-chip and on-package memory a few years ago, to compete with Apple products, people lost their freaking minds. They said memory access couldn't discern which memory was what speed, so this was nonsensical. They also said it would be too complex, and that there were impossible power considerations.

So what changed?
 
LPDDR is really the silver bullet. The saviour. The hero.

Especially LPDDR5X and beyond.

A solid amount of bandwidth, compact package dimensions, more cost-efficient than GDDR, and more power-efficient than DDR.
 
I really don't get the memory bus argument against big APUs.

A CPU de facto comes with a 128-bit bus.

Let's say a certain dGPU comes with a 128-bit bus as well.

Then what's the issue in combining both to get a 256-bit bus for the APU? It's not going to be more costly, is it? You still have the same buses as before, except that now they're in one component (APU) instead of two (CPU+GPU).
 
LPDDR5X-8533 MT/s

128-bit = 136 GB/s
256-bit = 273 GB/s
384-bit = 409 GB/s
512-bit = 546 GB/s

546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.
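The arithmetic behind those figures is just transfers per second times bus width in bytes. A quick sketch (the function name is mine; the data rate and bus widths are the ones from the post):

```python
def peak_bandwidth_gbps(mt_per_s: int, bus_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes).

    mt_per_s: memory transfer rate in MT/s (e.g. 8533 for LPDDR5X-8533)
    bus_bits: total bus width in bits
    """
    # transfers/s (mt_per_s * 1e6) * bytes per transfer (bus_bits / 8)
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

# Bus widths quoted in the post, at LPDDR5X-8533:
for bits in (128, 256, 384, 512):
    print(f"{bits:>3}-bit: {peak_bandwidth_gbps(8533, bits):.1f} GB/s")
```

The post's numbers are these values with the fractions truncated (e.g. 136.5 GB/s listed as 136 GB/s).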
 
The only thing I am wondering about is how the latency differs, and if it makes a material difference.
 

256-bit = 273 GB/s is the Strix Halo bandwidth, which is roughly the same as the 288 GB/s of the desktop 7600 / 4600.

I think it is a good compromise for a notebook APU (minus the dGPU). Hopefully it becomes mainstream.
 
They've literally just lost share last quarter.
What are you on?

They refer to the time when Apple was the only vendor to offer the higher-end Intel iGPUs (GT3w) in its laptops. (I actually got one of those for my kids.)

At that time, Apple was still gaining market share.
 
I think on N3 we will see chiplet-based designs replace AMD's monolithic APU designs.
 
You don't attach system-level design decisions to any specific nodes.

CPU caches will stay CPU caches.
The wider the disparity between the leading node and N6, the stronger the incentive to go to chiplets.

I am also guessing that after Strix (regular), the following mainstream notebook product will use chiplets.
 
You don't attach system-level design decisions to any specific nodes.
You know full well what I mean, yet you still talk about a different thing 😉.

Yes, you are correct that you don't attach system-level design decisions to specific nodes. It's just that the economics of the 3 nm process and smaller nodes will require companies to cut costs as hard as possible on mainstream offerings.
 