Info LPDDR6 @ Q3-2025: Mother of All CPU Upgrades


Io Magnesso

Senior member
Jun 12, 2025
578
164
71

Yep, about time: right before the year-end launch of some important SoCs.



So far, we know the 8 Elite G2 and X Elite G2 are going to support LPDDR6. Both SoCs use the same Oryon v3 cores, so GB6 ST should surpass 4000. The X Elite G2 would be in the range of Apple's upcoming M5 Pro.



Still not sure whether the Dimensity 9500 with X935 cores will support LPDDR6 or not; we shall see...

No sign that the iPhone SoC will employ LPDDR6. But man, I would bet the M5 series goes full throttle with support for 96-bit, 192-bit and 384-bit LPDDR6.

NV's GB10 might as well; we shall see. If it does support LPDDR6, GB10 would be in the range of the M5 Max, not the Pro.
The previous generation (Oryon v1) averaged GB6 scores below 3000...
Expecting it to suddenly jump above 4000 is a dream story.
 

Tigerick

Senior member
Apr 1, 2022
799
763
106
Qualcomm Technologies, Inc. is a global leader in AI edge innovation and is proud to be one of the first in the industry to implement LPDDR6

"One of the first" is an interesting choice of words, because we know Qualcomm will announce both SoCs at the Qualcomm Summit on Sept 23. They can claim to be first if no other vendor announces LPDDR6 support before Sept 23. Hmm, between now and Sept 23 there are the following events:
  • NV GB10 launching this month
  • MediaTek's Dimensity 9500 may drop earlier, in September
Two vendors that could potentially launch SoCs supporting LPDDR6. We shall hear more in the coming days...
 

poke01

Diamond Member
Mar 8, 2022
4,000
5,332
106
The previous generation (Oryon v1) averaged GB6 scores below 3000...
Expecting it to suddenly jump above 4000 is a dream story.
Qualcomm's aim will be to beat the M4 Max, and for that they need to aim for >4000.
 
Jul 27, 2020
26,976
18,577
146
Given how different the standards are, that's a weird product.
Probably means that there's a lot of untapped LPDDR5 supply and it may not dry up for a couple of years at least. Also, shifting production completely over to LPDDR6 isn't something they want to do with all of their fabs, for cost reasons, so they're probably going to keep pumping out LPDDR5 for quite some time.
 

Doug S

Diamond Member
Feb 8, 2020
3,420
6,057
136

For OEMs who want to serve a wide range of products with a single SoC, maybe that's not too crazy? Either because you want to operate at different price points (LPDDR6 is going to cost a lot more than LPDDR5/X for the first couple of years), or because you plan for a single product to live long enough that the supply of LPDDR5 might dry up before you replace it.

I'd really be curious to know how its size compares to their LPDDR5-only and LPDDR6-only controllers on the same process. Maybe there are no area savings per se, but if there are significant savings in shoreline, that might be enough to justify this misfit toy.
 

Cheesecake16

Junior Member
Aug 5, 2020
21
85
91

Yep, about time: right before the year-end launch of some important SoCs.



So far, we know the 8 Elite G2 and X Elite G2 are going to support LPDDR6. Both SoCs use the same Oryon v3 cores, so GB6 ST should surpass 4000. The X Elite G2 would be in the range of Apple's upcoming M5 Pro.



Still not sure whether the Dimensity 9500 with X935 cores will support LPDDR6 or not; we shall see...

No sign that the iPhone SoC will employ LPDDR6. But man, I would bet the M5 series goes full throttle with support for 96-bit, 192-bit and 384-bit LPDDR6.

NV's GB10 might as well; we shall see. If it does support LPDDR6, GB10 would be in the range of the M5 Max, not the Pro.
1) GB10 is LPDDR5X-8533, not LPDDR6.
2) I doubt that any SoC launching this year will have LPDDR6, because none of the memory manufacturers have announced LPDDR6 high-volume production, and the JEDEC announcement doesn't actually say that the 8 Elite G2 or X Elite G2 will have LPDDR6... Perhaps they support LPDDR5X/LPDDR6 but launch with LPDDR5X to start with...
 

Doug S

Diamond Member
Feb 8, 2020
3,420
6,057
136
So wccftech posted an article about DDR6 which appears to be repeating something I've posted here and at RWT. They said DDR6 would have four 24-bit channels, a claim I've never seen anywhere but my own posts. The DDR6 spec still hasn't been released, so there's nothing official; either they have some insider info or, more likely, they got that from me.

I've posted my reasoning for why I think that before, but it's not the only way to do it. JEDEC could do it the "obvious but wasteful" way, making standard DDR6 four 16-bit channels and ECC DDR6 four 16+8-bit channels, a full 50% bit overhead for ECC. While wasteful, it has the virtue of simplicity and of doing things the same way past DDR standards have. Another possibility would be increasing the burst length: LPDDR6 bumped its burst length to 24 to make the math work with its 12-bit channels, so if DDR6 wanted to stick with 16-bit channels, a burst length of 18 ends up with the same 288 bits as LPDDR6.
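For anyone who wants to check that burst arithmetic, here is a minimal sketch. Only the LPDDR6 figures (12-bit sub-channels, burst length 24) reflect the announced standard; the DDR6 layouts are the speculation above, and the 256+32 data/metadata split is my reading of why the math "works":

```python
# Quick check of the burst arithmetic above. Only the LPDDR6 figures
# (12-bit sub-channels, burst length 24) come from the announced standard;
# the DDR6 layouts are the speculation being discussed, not a released spec.

def bits_per_burst(width_bits: int, burst_length: int) -> int:
    """Bits delivered by one burst on one (sub-)channel."""
    return width_bits * burst_length

print(bits_per_burst(12, 24))   # LPDDR6: 288 bits per burst
print(bits_per_burst(16, 18))   # speculative 16-bit DDR6 channel: also 288 bits

# 288 bits = 256 data bits (half of a 64-byte cache line) plus 32 spare bits
# that can carry metadata/ECC, which is why both layouts "make the math work".
print(288 - 256)                # 32 spare bits per burst

# The "obvious but wasteful" alternative: 8 ECC bits per 16 data bits.
print(8 / 16)                   # 0.5 -> a full 50% bit overhead for ECC
```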

Anyway, I think it would be funny if they are so confidently repeating what I said, since I'm just speculating and could easily be wrong.

https://wccftech.com/ddr6-memory-de...window-with-8800-mtps-base-17600-mtps-speeds/
 

NTMBK

Lifer
Nov 14, 2011
10,433
5,771
136
So wccftech posted an article about DDR6 which appears to be repeating something I've posted here and at RWT. They said DDR6 would have four 24-bit channels, a claim I've never seen anywhere but my own posts. The DDR6 spec still hasn't been released, so there's nothing official; either they have some insider info or, more likely, they got that from me.

I've posted my reasoning for why I think that before, but it's not the only way to do it. JEDEC could do it the "obvious but wasteful" way, making standard DDR6 four 16-bit channels and ECC DDR6 four 16+8-bit channels, a full 50% bit overhead for ECC. While wasteful, it has the virtue of simplicity and of doing things the same way past DDR standards have. Another possibility would be increasing the burst length: LPDDR6 bumped its burst length to 24 to make the math work with its 12-bit channels, so if DDR6 wanted to stick with 16-bit channels, a burst length of 18 ends up with the same 288 bits as LPDDR6.

Anyway, I think it would be funny if they are so confidently repeating what I said, since I'm just speculating and could easily be wrong.

https://wccftech.com/ddr6-memory-de...window-with-8800-mtps-base-17600-mtps-speeds/
I would not be surprised if an LLM was trained on your forum posts and is now confidently regurgitating your speculation as facts.
 

Tigerick

Senior member
Apr 1, 2022
799
763
106
|                  | AMD AI 9 HX375 (Strix Point) | Medusa Halo Mini | Nova Lake-H | AMD AI Max+ 395 (Strix Halo) | Medusa Halo | RTX 5070 Super |
| TDP              | 28 W | ? | ? | 55 W | ? | 275 W |
| Total Dies       | 1 | 2 | 5 + Base | 3 | 3 | 1 |
| CPU              | 4x Zen 5 + 8x Zen 5c | 4x Zen 6 + 8x Zen 6c | 4 + 8 | 16x Zen 5 | 24x Zen 6 | - |
| Threads          | 24 | 24 | 12 | 32 | 48 | - |
| Node (CPU)       | 4nm N4P | N3P | N2 | 4nm N4P | N3P | - |
| Max Boost Clock  | 5.1 GHz | ? | ? | 5.1 GHz | ? | - |
| GPU + IOD        | Radeon 890M | RDNA5 Model AT4 | Arc Model ? | Radeon 8060S | RDNA5 Model AT3 | GB205 |
| Node (GPU)       | iGPU | N3P / X | N3E | 4nm N4X | N3P / X | 4N |
| CU / SM / Xe     | 16 | 24 | 12 | 40 | 48 | 50 |
| IC               | NA | 10 MB | - | 32 MB | 20 MB | 48 MB |
| Memory Interface | 128-bit LPDDR5X-8000 | 192-bit LPDDR6 | 128-bit LPDDR5X-8533 ? | 256-bit LPDDR5X-8000 | 384-bit LPDDR6 | 192-bit GDDR7 |
| Max Memory       | 256 GB | 256 GB ? | ? | 128 GB | 512 GB ? | 18 GB |
| Memory BW        | 128 GB/s | 307 GB/s | 136 GB/s | 256 GB/s | 614 GB/s | 672 GB/s |
| NPU              | 55 TOPS | >= 70 TOPS | >= 70 TOPS | 50 TOPS | >= 70 TOPS | - |

Geez, MLID has updated the Medusa Halo specs with nonsense about AT3 and AT4. They are not designed as discrete GPUs, just like the Radeon 8060S is not used as a discrete GPU. Too damn stupid for me to explain; please refer to the table above and my understanding below:
  • LPDDR6 should be standard by 2027. Thus, I am focusing on LPDDR6.
  • Medusa Halo is no doubt going to be the flagship CPU+GPU SoC, with 24 Zen 6 cores and 48 RDNA5 CUs. But AMD won't be the leader anymore; I will update the table with the N1X and Qualcomm X Elite G2, which are shipping by the end of this year.
  • Medusa Halo Mini, formerly called Medusa Little Halo, is more of a Strix Point successor, with a 4+8+2 CPU and 24 RDNA5 CUs. If you are disappointed by STX's GPU performance, then wait for Medusa Halo Mini. Mini is no longer a monolithic design though; AMD has to split the SoC into 2 dies.
  • I have included the upcoming Intel Nova Lake-H SoC. If you guys want to know more about the changes in the tile design, please refer to the main page of the Nova Lake discussion. Why compare to NVL, not PTL? Well, 18A is so broken that Intel could not ship the full PTL die, that is the 4+8+4 core configuration. Intel has to use a cut-down N2 8+16 die to create the 4+8+4 die. Initially, the 18A-made SoC was supposed to include the iGPU; it seems that part was canned, so Intel has to combine a separate tGPU into the SoC. That means mobile Nova Lake is 5 dies in total, excluding the base tile. That's why I said PTL-H and NVL-H are both DoA, and that's why Pat was fired; so much for 5 nodes in 4 years.
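To sanity-check the Memory BW row in the table above: peak bandwidth is just bus width times transfer rate. A minimal sketch follows; the 12800 MT/s LPDDR6 rate is an assumption chosen to reproduce the 307 and 614 GB/s figures, not a confirmed speed for any of these parts.

```python
# Peak theoretical bandwidth = bus width (bits) / 8 * data rate (MT/s).
# The LPDDR6 data rate below is an assumption picked to reproduce the
# table's 307 / 614 GB/s figures, not a confirmed spec for any SoC.

def peak_bw_gbs(bus_width_bits: int, mt_per_s: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return bus_width_bits / 8 * mt_per_s / 1000

rows = [
    ("Strix Point, 128-bit LPDDR5X-8000",              128, 8000),
    ("Medusa Halo Mini, 192-bit LPDDR6-12800 (assumed)", 192, 12800),
    ("Nova Lake-H, 128-bit LPDDR5X-8533",              128, 8533),
    ("Strix Halo, 256-bit LPDDR5X-8000",               256, 8000),
    ("Medusa Halo, 384-bit LPDDR6-12800 (assumed)",    384, 12800),
    ("RTX 5070 Super, 192-bit GDDR7 @ 28 Gbps",        192, 28000),
]

for name, width, rate in rows:
    # Matches the rounded figures in the Memory BW row of the table.
    print(f"{name}: {peak_bw_gbs(width, rate):.1f} GB/s")
```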
 

Doug S

Diamond Member
Feb 8, 2020
3,420
6,057
136

Interesting bit for me:



So hopefully, we will be rid of the curse of 100+ ns latencies?

I'm not sure what they're smoking, but that isn't going to make any real difference. The main reason for the longer latencies of LPDDR is that it is designed for low power, so it uses lower drive currents and lower-capacitance connections between the controller and the chips. LPDDR5 already supported half-width channel operation, so if that could deliver significant latency improvements, someone would already be doing it.

Since almost all of the overall DRAM power is consumed by the controller/interface, not the actual reading and writing of bits (~99% for DDR5 controllers), any power savings have to be won on the controller side. Whatever changes you make to the controller and the protocol it uses to reduce its power needs have to be paid for in some manner, and latency is how you pay.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,658
2,508
136
I don't understand how the sub-channel architecture is supposed to reduce latency?

It is a nice power optimization for low-load situations. When the load is low, you can turn off half of your bus and do transfers with twice the burst length instead. This increases latency (by like 2ns, not that it matters).
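To put a rough number on that ~2 ns: the cost of halving the bus is just the extra beats needed to serialize the same cache line. A minimal sketch follows; the 32-bit/16-bit widths and the 8533 MT/s rate are assumed figures for illustration, not tied to any specific part.

```python
# Rough sketch of why halving the bus only adds ~2 ns: the extra cost is
# just the extra beats needed to serialize the same cache line.
# The 8533 MT/s rate is an assumed LPDDR5X-class speed for illustration.

def transfer_ns(bytes_moved: int, bus_width_bits: int, mt_per_s: float) -> float:
    """Time in ns to serialize bytes_moved over a bus at the given rate."""
    beats = bytes_moved * 8 / bus_width_bits
    return beats / (mt_per_s * 1e6) * 1e9

LINE = 64                                # bytes per cache line
full = transfer_ns(LINE, 32, 8533)       # full-width channel example
half = transfer_ns(LINE, 16, 8533)       # half the wires, twice the beats
print(f"full width: {full:.2f} ns, half width: {half:.2f} ns, "
      f"extra: {half - full:.2f} ns")    # ~1.9 ns extra
```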
 

Darkmont

Member
Jul 7, 2023
72
220
86
I don't understand how the sub-channel architecture is supposed to reduce latency?

It is a nice power optimization for low-load situations. When the load is low, you can turn off half of your bus and do transfers with twice the burst length instead. This increases latency (by like 2ns, not that it matters).
Splitting channels allows both to serve accesses independently; timings are independent between subchannels. Going from a 64-bit channel with an 8n prefetch to a 32-bit channel with a 16n prefetch means that two channels on, say, a DIMM or module can each service a 64-byte cache line, versus one.
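To put numbers on that prefetch math, here is a minimal sketch; the 64-bit/8n and 32-bit/16n figures are the generic layouts from the post above, used purely for illustration.

```python
# Same total width, but two narrower sub-channels with a longer prefetch
# can each complete an independent 64-byte cache line fill.

def bytes_per_access(width_bits: int, prefetch_n: int) -> int:
    """Bytes delivered by one access on one (sub-)channel."""
    return width_bits * prefetch_n // 8

# One 64-bit channel, 8n prefetch -> one line fill per access.
print(bytes_per_access(64, 8), "bytes, 1 independent access")

# Two 32-bit sub-channels, 16n prefetch -> two independent line fills in flight.
print(bytes_per_access(32, 16), "bytes per sub-channel, 2 independent accesses")
```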
 

Doug S

Diamond Member
Feb 8, 2020
3,420
6,057
136
Splitting channels allows both to serve accesses independently; timings are independent between subchannels. Going from a 64-bit channel with an 8n prefetch to a 32-bit channel with a 16n prefetch means that two channels on, say, a DIMM or module can each service a 64-byte cache line, versus one.

Doubling up on subchannels to be able to service twice as many independent cache line refills makes for a good fit for modern CPUs that have more and more cores all doing their own thing. Hence DDR5 getting two subchannels, and DDR6 going to four.

But it sounds like @igor_kavinski wants to see DRAM load-to-use latency go down (that's where his 100ns comes from), and adding subchannels won't help that any more than adding more full channels will.
 
Jul 27, 2020
26,976
18,577
146
But it sounds like @igor_kavinski wants to see DRAM load-to-use latency go down (that's where his 100ns comes from), and adding subchannels won't help that any more than adding more full channels will.

If it's possible to reduce the latency and improve responsiveness AND performance on LPDDR4, I don't see why things can't improve on LPDDR6.
 

Darkmont

Member
Jul 7, 2023
72
220
86

If it's possible to reduce the latency and improve responsiveness AND performance on LPDDR4, I don't see why things can't improve on LPDDR6.
The internals have already reached their respective bandwidth/latency/area/power tradeoffs for the market segments LPDDR serves.
 

mikk

Diamond Member
May 15, 2012
4,293
2,382
136
Initially, the 18A-made SoC was supposed to include the iGPU; it seems that part was canned, so Intel has to combine a separate tGPU into the SoC. That means mobile Nova Lake is 5 dies in total, excluding the base tile. That's why I said PTL-H and NVL-H are both DoA, and that's why Pat was fired; so much for 5 nodes in 4 years.

Are you sure? Raichu said two days ago that the iGPU part IS based on the 18A(P) family.
 

marees

Golden Member
Apr 28, 2024
1,455
2,049
96
They are not designed as discrete GPUs, just like the Radeon 8060S is not used as a discrete GPU. Too damn stupid for me to explain; please refer to the table above
I didn't follow this. Why can't AT3 & AT4 be standalone GPU dies?