Info LPDDR6 @ Q3-2025: Mother of All CPU Upgrades


Doug S

Diamond Member
Feb 8, 2020
You are still getting a third more B/W at the same frequencies.
E.g., standard dual-channel LPDDR5X @ 10667 MT/s gets ~170 GB/s, while standard dual-channel LPDDR6 @ 10667 MT/s gets ~227.5 GB/s.

But yeah, LPDDR6 is still quite some time away from being available, let alone mainstream. DDR6 seems even further away.

That's because an LPDDR6 controller is 24 bits wide instead of LPDDR5X's 16, meaning an LPDDR6 controller will be physically larger and require more shoreline and pins. If you devote more area/shoreline/pins to LPDDR5X you can get more bandwidth too. LPDDR5X-10666 actually offers slightly more bandwidth than LPDDR6-10666 (assuming the same total width of memory bus) due to LPDDR6's extra bits for ECC/tagging.
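To put rough numbers on that, here is a minimal back-of-envelope sketch in Python. It assumes LPDDR6 spends about 32 of every 288 transferred bits on ECC/metadata, which is the overhead ratio implied by the bandwidth figures quoted in this thread; the exact burst layout is a spec detail, so treat the outputs as approximations.

    # Back-of-envelope bandwidth comparison. The 256/288 data fraction for LPDDR6
    # is an assumption inferred from the figures quoted in this thread.
    def lpddr5x_gbs(bus_bits, mts):
        # Every bus bit carries data on LPDDR5X.
        return bus_bits / 8 * mts / 1000              # GB/s

    def lpddr6_gbs(bus_bits, mts, data_fraction=256 / 288):
        # Same raw rate, minus the assumed in-band ECC/tagging share.
        return bus_bits / 8 * mts / 1000 * data_fraction

    rate = 10667  # MT/s
    print(lpddr5x_gbs(128, rate))   # ~170.7 GB/s, the dual-channel LPDDR5X figure above
    print(lpddr6_gbs(192, rate))    # ~227.5 GB/s, the dual-channel LPDDR6 figure above
    print(lpddr5x_gbs(96, rate))    # ~128.0 GB/s at x96
    print(lpddr6_gbs(96, rate))     # ~113.8 GB/s at x96 -- LPDDR5X edges out LPDDR6 at equal width

At equal bus width the LPDDR5X number comes out slightly ahead, which is the point being made here.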
 

marees

Golden Member
Apr 28, 2024
You should open a Zen7 Speculation Thread.
Here is the zen 7 SPECULATION thread

1.4nm
33-core CCDs
264-core processor

AMD's next-gen Zen 7 chips with X3D teased, new 3D cores, and more in fresh rumors
AMD's future-gen Zen 7 CPU architecture rumored with '3D cores' that would usher in 'absurdly high' performance increases for gaming.

Read more: https://www.tweaktown.com/news/1052...-3d-cores-and-more-in-fresh-rumors/index.html




 

regen1

Member
Aug 28, 2025
That's because an LPDDR6 controller is 24 bits wide instead of LPDDR5X's 16, meaning an LPDDR6 controller will be physically larger and require more shoreline and pins. If you devote more area/shoreline/pins to LPDDR5X you can get more bandwidth too. LPDDR5X-10666 actually offers slightly more bandwidth than LPDDR6-10666 (assuming the same total width of memory bus) due to LPDDR6's extra bits for ECC/tagging.
Yeah, that part is known, but does it matter much that frequency alone doesn't move much initially, when we can get a B/W gain from the increased width and then further gains as frequency improvements follow?

For the last few years the LPDDR5X improvement cadence has mostly been bumps of ~1066 MT/s per generation.
Take a mainstream platform, for example:
MTL-H (7467 MT/s) --> Lunar Lake/ARL-H (8533 MT/s) --> PTL-H (9600 MT/s)
In a dual-channel config that has meant roughly +17 GB/s per generation.
Let's say hypothetically NVL-H gets LPDDR5X at 10667 MT/s (another bump of the same size), but then in 2028 we get even just base-spec LPDDR6 (10667 MT/s): that is still a big jump (~57 GB/s more in dual channel, or about one-third more B/W than the previous gen). That's a fairly decent gain just from moving to the new standard, and subsequent gains then follow from higher frequencies.
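A quick sketch of that cadence arithmetic (128-bit dual-channel LPDDR5X, 192-bit base-spec LPDDR6 with an assumed ~32-in-288 metadata/ECC overhead; the platform-to-speed pairings are the ones named above, not confirmed specs):

    # Per-generation bandwidth on a 128-bit LPDDR5X bus, then the hypothetical
    # jump to 192-bit base-spec LPDDR6-10667.
    gens = {"MTL-H": 7467, "LNL/ARL-H": 8533, "PTL-H": 9600, "NVL-H (hypothetical)": 10667}
    prev = None
    for name, mts in gens.items():
        gbs = 128 / 8 * mts / 1000
        delta = f" (+{gbs - prev:.1f} GB/s)" if prev else ""
        print(f"{name}: {gbs:.1f} GB/s{delta}")
        prev = gbs

    lp6 = 192 / 8 * 10667 / 1000 * 256 / 288   # assumed LPDDR6 data fraction
    print(f"LPDDR6-10667 x192: {lp6:.1f} GB/s (+{lp6 - prev:.1f} GB/s, ~{(lp6 / prev - 1) * 100:.0f}%)")

The per-generation step works out to ~17 GB/s, and the LPDDR6 jump to ~57 GB/s (about a third more), matching the figures above.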
 

Doug S

Diamond Member
Feb 8, 2020
Yeah, that part is known, but does it matter much that frequency alone doesn't move much initially, when we can get a B/W gain from the increased width and then further gains as frequency improvements follow?

For the last few years the LPDDR5X improvement cadence has mostly been bumps of ~1066 MT/s per generation.
Take a mainstream platform, for example:
MTL-H (7467 MT/s) --> Lunar Lake/ARL-H (8533 MT/s) --> PTL-H (9600 MT/s)
In a dual-channel config that has meant roughly +17 GB/s per generation.
Let's say hypothetically NVL-H gets LPDDR5X at 10667 MT/s (another bump of the same size), but then in 2028 we get even just base-spec LPDDR6 (10667 MT/s): that is still a big jump (~57 GB/s more in dual channel, or about one-third more B/W than the previous gen). That's a fairly decent gain just from moving to the new standard, and subsequent gains then follow from higher frequencies.


You seem to be relying on the information found in this graphic from the first post of this thread:

[graphic from the first post of this thread]


Do you not notice that jump from "x64" to "x96"? That is not free, and it's something any OEM could choose to do TODAY with LPDDR5X for roughly the same cost, and get MORE bandwidth than LPDDR6.

The ONLY advantages to LPDDR6-10666 are Samsung's reported 20% power savings, and the ability to use those extra bits for ECC or tagging. The disadvantage is that you get 114.1 GB/s, whereas LPDDR5X-10666 at the same x96 width would provide 128.4 GB/s. There's also the downside that LPDDR6 is going to cost a LOT more initially, and it will take probably two years before it is at price parity with LPDDR5X.

Smartphones today have 64 bit wide memory busses, with four LPDDR5X controllers. Do you really believe they are going to go to a 96 bit wide bus when they transition to LPDDR6, because "4" memory controllers is some kinda magic number? Most likely they'll go to 72, which would give them EXACTLY the same 85.6 GB/s that a 64 bit wide LPDDR5X bus provides.

Ditto with PCs, they aren't going to go from 128 bit to 192 bit simply because "8" memory controllers is a magic number.
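The smartphone comparison is just bus-width arithmetic; a hedged sketch (same assumed ~32-in-288 LPDDR6 metadata overhead, data rate taken as 10667 MT/s, so the results land a hair under the rounded 85.6 GB/s figure above):

    # 64-bit LPDDR5X vs. 72-bit and 96-bit LPDDR6 at the same 10667 MT/s rate.
    lp5x_x64 = 64 / 8 * 10667 / 1000                 # ~85.3 GB/s
    lp6_x72  = 72 / 8 * 10667 / 1000 * 256 / 288     # ~85.3 GB/s -- identical data bandwidth
    lp6_x96  = 96 / 8 * 10667 / 1000 * 256 / 288     # ~113.8 GB/s if phones went to x96
    print(lp5x_x64, lp6_x72, lp6_x96)

With an 8/9 data fraction, a 72-bit LPDDR6 bus delivers exactly the same data bandwidth as a 64-bit LPDDR5X bus at the same transfer rate, which is the equivalence being argued here.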
 

johnsonwax

Senior member
Jun 27, 2024
Yeah, phone makers will go for more expensive memory, in order to *checks notes* get no benefit whatsoever. brilliant!
I'll note that the iPhone has routinely gotten benefits from features like this ahead of the Mac. If Apple wanted to take 120 fps video off all of their pro phone cameras simultaneously to do some computational video, that might not fit in 85 GB/s. I don't think Apple is planning to do that, but smartphone cameras generally consume resources faster than anything done on a desktop PC. I've also noted the rumor of putting AI compute in RAM, which would be precisely the application for that kind of bandwidth.
 

regen1

Member
Aug 28, 2025
You seem to be relying on the information found in this graphic from the first post of this thread:

[graphic from the first post of this thread]


Do you not notice that jump from "x64" to "x96"? That is not free, and it's something any OEM could choose to do TODAY with LPDDR5X for roughly the same cost, and get MORE bandwidth than LPDDR6.

The ONLY advantages to LPDDR6-10666 are Samsung's reported 20% power savings, and the ability to use those extra bits for ECC or tagging. The disadvantage is that you get 114.1 GB/s, whereas LPDDR5X-10666 at the same x96 width would provide 128.4 GB/s. There's also the downside that LPDDR6 is going to cost a LOT more initially, and it will take probably two years before it is at price parity with LPDDR5X.

As far as standard modules and channel width are concerned, we can take a hint from LPDDR6 CAMM2. The 192-bit LPDDR6 LPCAMM2 is supposed to be the delivery vehicle for LPDDR6 other than soldered packages, and it is expected to be more widely used than the DDR5/LPDDR5X CAMM2 has been so far.

You seem to overlook the mainstream notebook platform examples; even in the hypothetical scenario we were discussing, we were talking 2028+ for LPDDR6, which is when it starts to become more common on mainstream notebook platforms (it's kinda known that NVL-H doesn't support LPDDR6, and Medusa Point SKUs don't either, at least the initial versions). EVEN IF RazorLake notebook SKUs support LPDDR6, their broad availability would be in 2028, although for smartphones it would be earlier.

Are you implying that by the 2028(+) timeframe LPDDR6 won't likely have significant B/W gains from either channel width or frequency, i.e. neither will x96/net 192-bit be standard for mainstream notebooks, nor will there be a very significant frequency uplift beyond 10667 MT/s by that time?
Are you saying that LP6 in LPCAMM2 will use x96/192-bit, but other soldered LP6 packages won't be using x96/192-bit for mainstream notebook platforms?
What do you think would be a standard bus width for future successors of mainstream platforms like Intel's -H series or AMD's "Point" series that have LPDDR6 support? 144-bit?

Choosing x72/144-bit vs x96/192-bit means memory capacity per config goes down as well.
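A tiny illustration of that capacity point (the per-channel density below is purely hypothetical; the only point is that maximum capacity scales with channel count):

    # Max capacity scales with channel count when per-channel density is fixed.
    # 4 GB per 24-bit LPDDR6 channel is an illustrative number, not a product spec.
    per_channel_gb = 4
    for channels in (6, 8):    # 144-bit vs 192-bit total, 24 bits per LPDDR6 channel
        print(f"{channels * 24}-bit bus: up to {channels * per_channel_gb} GB")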

There's also the downside that LPDDR6 is going to cost a LOT more initially, and it will take probably two years before it is at price parity with LPDDR5X.
Yeah, this is kinda expected though, sort of similar to past adoptions of new standards. Also, for some of the same reasons, flagship smartphone SoCs are likely to adopt LPDDR6 before mainstream notebooks.

Smartphones today have 64 bit wide memory busses, with four LPDDR5X controllers. Do you really believe they are going to go to a 96 bit wide bus when they transition to LPDDR6, because "4" memory controllers is some kinda magic number? Most likely they'll go to 72, which would give them EXACTLY the same 85.6 GB/s that a 64 bit wide LPDDR5X bus provides.

Ditto with PCs, they aren't going to go from 128 bit to 192 bit simply because "8" memory controllers is a magic number.
All that was being referred to was the mainstream notebook platform; I didn't refer to smartphones at all. As far as LPDDR6 in notebooks is concerned there is no need for any "magic number" here: they are going to gain from LP6 LPCAMM2 (192-bit) or any better standard down the line, other categories will likely get soldered LPDDR6 packages, and some others will use older standards (LPDDR5X or DDR5) for some time.

DDR6 (far from finalization) is likely going to use a 16-bit sub-channel (not the 24-bit of LPDDR6); we will get to know its final specs with time.
The Synopsys LPDDR6 PHY results validated on N2P were most likely for a 48-bit bus AFAIK, and using 2x48 (net 96-bit) will eventually be a thing in flagship smartphone SoCs. So it wouldn't be a surprise if 96-bit eventually became fairly common in flagship smartphone SoCs.
 

Doug S

Diamond Member
Feb 8, 2020
As far as standard modules and channel width are concerned, we can take a hint from LPDDR6 CAMM2. The 192-bit LPDDR6 LPCAMM2 is supposed to be the delivery vehicle for LPDDR6 other than soldered packages, and it is expected to be more widely used than the DDR5/LPDDR5X CAMM2 has been so far.


Well considering that LPDDR5 CAMMs are nearly nonexistent, "more widely used" is a given. It is unclear how widely CAMMs will be adopted by PC OEMs, and if 192 bits will become the standard width as a result. The fact that LPDDR6 will remain the "high end" option for several years obviously helps there, and they can figure out if they want to split the low end / value segment off at a lower width (where LPCAMM2 may not be used anyway) and the CPUs that support that would have a narrower bus.

You seem to overlook the mainstream notebook platform examples; even in the hypothetical scenario we were discussing, we were talking 2028+ for LPDDR6, which is when it starts to become more common on mainstream notebook platforms (it's kinda known that NVL-H doesn't support LPDDR6, and Medusa Point SKUs don't either, at least the initial versions). EVEN IF RazorLake notebook SKUs support LPDDR6, their broad availability would be in 2028, although for smartphones it would be earlier.

Are you implying that by the 2028(+) timeframe LPDDR6 won't likely have significant B/W gains from either channel width or frequency, i.e. neither will x96/net 192-bit be standard for mainstream notebooks, nor will there be a very significant frequency uplift beyond 10667 MT/s by that time?
Are you saying that LP6 in LPCAMM2 will use x96/192-bit, but other soldered LP6 packages won't be using x96/192-bit for mainstream notebook platforms?
What do you think would be a standard bus width for future successors of mainstream platforms like Intel's -H series or AMD's "Point" series that have LPDDR6 support? 144-bit?


OK, so you're talking 2028. I wasn't talking 2-3 years in the future; all I said was that the first generation of LPDDR6, which Samsung announced will enter mass production by the end of THIS year, is pointless because it isn't any faster than LPDDR5X. If you stick it in a form factor that requires 192 bits like LPCAMM then yeah, it is "more bandwidth", but that's accompanied by more cost. A LOT more cost, as an early adopter not only of LPDDR6 but also of LPCAMM2!

By 2028 the cost penalty will be fairly moderate, but LPDDR6 will have moved on to higher bins, so it'll be faster than LPDDR5X on its own merits, not because OEMs are being forced (if they want to support LPCAMM2) to go wider.

DDR6 (far from finalization) is likely going to use a 16-bit sub-channel (not the 24-bit of LPDDR6); we will get to know its final specs with time.
The Synopsys LPDDR6 PHY results validated on N2P were most likely for a 48-bit bus AFAIK, and using 2x48 (net 96-bit) will eventually be a thing in flagship smartphone SoCs. So it wouldn't be a surprise if 96-bit eventually became fairly common in flagship smartphone SoCs.


I've speculated here previously that I think DDR6 will use the same 24-bit channel width as LPDDR6, because supporting ECC on 16-bit subchannels is extremely wasteful (50% bit overhead). But the only "confirmation" of that I'd seen was a wccftech article, where they stated the 24-bit subchannels as objective fact, leading me to believe they got that from me (whether directly from one of my posts, or via an AI that scraped it and regurgitated it back to a wccftech "journalist" as fact, who knows).
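For anyone wondering where the 50% comes from, one hedged reading (assuming ECC gets bolted on as 8 extra check bits per subchannel, the way DDR5 RDIMMs add 8 ECC bits to each 32-bit subchannel; that framing is my assumption, not something out of a spec):

    # ECC bit overhead if 8 check bits are added per subchannel, DDR5-style.
    ecc_bits = 8
    for data_bits in (32, 16):
        print(f"x{data_bits} subchannel: {ecc_bits / data_bits:.0%} overhead")
    # x32 -> 25% (DDR5 RDIMM layout), x16 -> 50% -- the "extremely wasteful" case above.
    # A 24-bit LPDDR6-style channel instead carries ECC/tagging in-band (see earlier posts).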

But recently I've seen two other articles implying 24 bit channels / 96 bit width as a likelihood for DDR6 so I wouldn't count that out.

Anyway, I just think it is disingenuous to compare 128-bit LPDDR5X with 192-bit LPDDR6, as anything that might drive platforms to go wider with LPDDR6 would be an external factor like the defined width of LPCAMM2 modules, or Synopsys PHYs (and if those are implemented as 48-bit "pairs" of controllers, that could be driven by wanting to design combo LPDDR5X/LPDDR6 controllers without wasting area when used with LPDDR5X). LPDDR6 will eventually be faster than LPDDR5X, which isn't likely to get any faster than -12800. It just won't happen soon.