Well, that'd be good news for APUs.
You get higher VRAM quantity for the same price.
I wonder why big APUs like the upcoming Strix Halo haven't taken off yet. Did we have no idea until Apple showed us how to do it with their SoCs with massive iGPUs?
Instead of paying for two separate components (CPU + dGPU), laptop OEMs would have to pay for only one component (a big APU). This would give cost savings as well as space savings in the laptop's internals.
Those quite literally had zero market traction outside of Apple designs.
oh and Kaby Lake-G existed.
> Actually 4 components:
> - CPU
> - dGPU
> - DRAM
> - VRAM
> vs.
> - a single package with CPU + GPU + LPDDR.

LPDDR is really the silver bullet. The saviour. The hero.
The advantage of AMD and Intel doing it that way rather than following what Apple has started? It gives OEMs a chance to customize that special combo of:
- garbage CPU
- garbage GPU
- garbage DRAM
- garbage VRAM
> LPDDR is really the silver bullet. The saviour. The hero.
> Especially LPDDR5X and beyond.
> Solid amount of bandwidth, compact package dimensions, more cost efficient than GDDR, and more power efficient than DDR.

The only thing I am wondering about is how the latency differs, and if it makes a material difference.
And Apple gained traction (market share) over those offering garbage GPU.
LPDDR5X-8533 MT/s
128 bit = 136 GB/s
256 bit = 273 GB/s
384 bit = 409 GB/s
512 bit = 546 GB/s
546 GB/s is about the bandwidth of an RTX 4090 Laptop GPU.
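Those figures fall straight out of peak-bandwidth arithmetic: transfer rate times bus width in bytes, truncated to whole GB/s. A minimal Python sketch of the same math (the function name is just for illustration):

```python
# Peak theoretical LPDDR bandwidth: (MT/s) x (bus width in bytes),
# truncated to whole GB/s to match the figures above.
def peak_gbs(mt_per_s: int, bus_bits: int) -> int:
    bytes_per_transfer = bus_bits // 8
    # MT/s x bytes/transfer = MB/s; integer-divide by 1000 for GB/s
    return mt_per_s * bytes_per_transfer // 1000

for bits in (128, 256, 384, 512):
    print(f"{bits}-bit LPDDR5X-8533: {peak_gbs(8533, bits)} GB/s")
# -> 136, 273, 409, 546 GB/s

# The LPDDR5T-9600 figure mentioned below works out the same way:
print(peak_gbs(9600, 512))  # 614 GB/s, i.e. above 600 GB/s
```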
> And Apple gained traction (market share) over those offering garbage GPU.

They've literally just lost share last quarter.
> Also LPDDR5T-9600 exists, which on a 512-bit bus exceeds 600 GB/s.

It's PoP (package-on-package).
> They've literally just lost share last quarter.
What are you on?
> It's PoP.

For now?
No, they weren't. At that time, Apple was still gaining market share.
> I believe it's not a technical limitation that makes it restricted to PoP?

That I've no idea, but cranking higher speeds on PCBs is a PITA.
> That I've no idea but cranking higher speeds on PCBs is PITA.

Indeed. We are bound to hit a ceiling at some point. The amount of electrical work that goes into PCIe Gen 5 is bordering on ridiculous. I dread seeing PCIe Gen 6 or 7. Glad I'm not doing THAT type of work.
> I think on N3 we will see that chiplet based designs will replace monolithic design of APUs from AMD.

And power efficiency takes a hit in the-
> I dread seeing PCIe Gen 6 or 7

PCIe6 gonna be like, cable city.
> I think on N3 we will see that chiplet based designs will replace monolithic design of APUs from AMD.

nodes don't matter.
> And power efficiency takes a hit in the-

there are ways to dance around it and the obvious one is obvious
> nodes don't matter.

For mainstream, non-Halo products - they do
> For mainstream, non-Halo products - they do

You don't attach system-level design decisions to any specific nodes.
> at least - the caches.

CPU caches will stay CPU caches.
> You don't attach system-level design decisions to any specific nodes.

The wider the disparity between the leading node and N6, the stronger the incentive to go to chiplets.
> You don't attach system-level design decisions to any specific nodes.

You know full well what I mean, yet you still talk about a different thing.
No, that wasn’t it. Sh*t posting about an already doomed future product because *reasons*. Otherwise, please carry on.
> there are ways to dance around it and the obvious one is obvious

Chip stacking?