Discussion Intel current and future Lakes & Rapids thread


Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
We'll have to wait and see with final kit, but overall I'm super meh on DDR5, at least for a while lol.
 

Kedas

Senior member
Dec 6, 2018
355
339
136
Isn't the higher latency partially mitigated by having 4 channels instead of 2 with DDR5?
You can already ask for the next one while you are still waiting for the first.

And since the design has changed, are we sure we're measuring it correctly?
Maybe the memory controller has less to do and is faster now compared to DDR4.
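Back of the envelope, the overlap idea can be sketched like this. All numbers are hypothetical, just to illustrate that more independent (sub-)channels improve throughput for independent requests even when any single request is slower:

```python
import math

def time_to_serve(n_requests, latency_ns, channels):
    """Time to serve n independent requests, assuming perfect overlap
    across channels and one outstanding request per channel."""
    batches = math.ceil(n_requests / channels)
    return batches * latency_ns

# Hypothetical: DDR4 at 50ns over 2 channels vs DDR5 at 90ns over
# 4 sub-channels (each DDR5 DIMM exposes two 32-bit sub-channels).
print(time_to_serve(64, latency_ns=50, channels=2))  # 1600 ns total
print(time_to_serve(64, latency_ns=90, channels=4))  # 1440 ns total
```

The catch is that dependent accesses (pointer chasing) can't overlap like this, which is why first-access latency still matters.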
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
You guys are going nuts over a system with an unknown configuration. Review systems with DDR5 will perform just fine.
 
  • Like
Reactions: clemsyn

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Something else to keep in mind: CPU cache structures have grown significantly in the last few years. Yes, first-access latency for DDR5 certainly looks higher, but with L3 caches at 20-32MB, is it really going to be as bad in production? On low-end systems you are already paying a penalty for the cheapness, and many will run DDR4 for a long time to come. Mobile systems already run memory with atrocious timings.

I just don't think that, in general day-to-day use scenarios, it'll ever be noticeable, and on things that lean hard on memory bandwidth, it'll be an advantage. Quite literally, it should only show up as an issue in software that's specifically written to thrash memory pages deliberately to expose first-word latency.

For context, when DDR4 hit desktops, AMD had Bristol Ridge, which had a less-than-ideal cache structure. Intel had Haswell and Broadwell, which had 8MB of L3 on the 4770 and 6MB on the 57xx i7s. Current products have 4x the L3 at almost every level.

It's just not the same world.
 
  • Like
Reactions: Joe NYC

lyonwonder

Member
Dec 29, 2018
31
23
81
I think for Raptor Lake, minor refresh as it is, they won't do anything as drastic as actually removing DDR4 hardware. Instead, I expect them to either leave it supported, or officially remove support because they don't want to bother validating it on a "new" platform.

My guess is Intel won't remove DDR4 support entirely until 14th gen Meteor Lake, which will probably have a new socket too.
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Something else to keep in mind: CPU cache structures have grown significantly in the last few years. Yes, first-access latency for DDR5 certainly looks higher, but with L3 caches at 20-32MB, is it really going to be as bad in production? On low-end systems you are already paying a penalty for the cheapness, and many will run DDR4 for a long time to come. Mobile systems already run memory with atrocious timings.

I just don't think that, in general day-to-day use scenarios, it'll ever be noticeable, and on things that lean hard on memory bandwidth, it'll be an advantage. Quite literally, it should only show up as an issue in software that's specifically written to thrash memory pages deliberately to expose first-word latency.

For context, when DDR4 hit desktops, AMD had Bristol Ridge, which had a less-than-ideal cache structure. Intel had Haswell and Broadwell, which had 8MB of L3 on the 4770 and 6MB on the 57xx i7s. Current products have 4x the L3 at almost every level.

It's just not the same world.

Good points, especially the thinking that a larger L3 can provide "cover" for suboptimal main memory. That being said, I think some here are concerned that early DDR5 testing is not outperforming current DDR4 solutions. Eventually DDR5 will get there, but is Intel pushing the new standard too soon?
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
That being said, I think some here are concerned that early DDR5 testing is not outperforming current DDR4 solutions.

Let's be fair, DDR5 isn't going to outperform 14-14-14-28 DDR4-3800 in terms of latency. Bandwidth? Sure, and the bandwidth number for that Alder Lake-S system looked good. Latency, though? For some perspective:

[attached image: AIDA64 cache & memory benchmark screenshot]


(source: https://www.techpowerup.com/forums/...ache-and-memory-benchmark-here.186338/page-27)

That's DDR4-2133 15-15-15-36, and he hasn't even got CR1. Tuned secondaries? Probably not. As you would expect, it has much less memory bandwidth (a third of what you would get from that Alder Lake leak system), but the latency is just so much better. We aren't even talking about a "current" DDR4 system. It's Skylake, which was second-gen DDR4 for Intel, and it isn't even DDR4-2400, which was pretty common by the time of the Skylake launch. It's very close to launch performance for Intel's implementation of a DDR4 memory controller. For more perspective, here's a Haswell-E review:


A 5960X with DDR4-3000 15-15-15-35 (probably CR2) showed up with 64ns latency.
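For reference, the DRAM-side share of those numbers follows directly from CAS latency and data rate; the rest of a measured AIDA figure is memory controller and fabric overhead. (The DDR5-4800 CL40 line is my assumption of a typical JEDEC bin, not from the leak.)

```python
def cas_ns(cl, data_rate_mts):
    """First-word CAS latency in ns: CL cycles of the memory clock,
    which runs at half the data rate (DDR = double data rate)."""
    return cl * 2000.0 / data_rate_mts

print(round(cas_ns(15, 2133), 1))  # 14.1 ns (the DDR4-2133 CL15 above)
print(round(cas_ns(15, 3000), 1))  # 10.0 ns (the Haswell-E DDR4-3000 CL15)
print(round(cas_ns(40, 4800), 1))  # 16.7 ns (assumed JEDEC DDR5-4800 CL40)
```

So DDR5's CAS penalty alone is only a handful of nanoseconds; a 90ns+ reading points at the controller or firmware rather than the DIMMs themselves.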

Yeah, maybe AIDA64 is drunk and producing wrong latency readings. I guess we'll just have to look for more data points. These early examples seem a little weird, though.

You guys are going nuts over a system with an unknown configuration. Review systems with DDR5 will perform just fine.

Nobody's going nuts. It's just unexpectedly bad. How could they configure it to produce that much latency?
 
  • Like
Reactions: Tlh97

jj109

Senior member
Dec 17, 2013
391
59
91
Easy: Gear 4 + CR2, which effectively means one command every 8 DDR cycles.
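That arithmetic, made explicit in a simplified model: in gear N the memory controller runs at 1/N of the DRAM clock, and command rate CRn allows one command per n controller cycles.

```python
def ddr_cycles_per_command(gear, command_rate):
    """DRAM clock cycles between commands: in gear N the controller
    ticks once per N DRAM cycles, and command rate CRn permits one
    command per n controller ticks."""
    return gear * command_rate

print(ddr_cycles_per_command(gear=4, command_rate=2))  # 8
print(ddr_cycles_per_command(gear=1, command_rate=1))  # 1
```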

Like eek2121 says, flipping out over memory latency on an ES 0000 chip is pointless. Rocket Lake with 3800CL16 is now hitting 40ns in AIDA on Gear 1 after doing 50ns on launch firmware, never mind whatever it was hitting when it was still a 0000 ES.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
The latency in that image has nothing to do with Gear 4 mode. In fact, it's almost certainly either Gear 1 or 2, more likely the latter.

Early ADL-S silicon was an absolute mess, and that's A0 silicon. Just ignore it.
 
  • Like
Reactions: Tlh97

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Like eek2121 says, flipping out over memory latency on an ES 0000 chip is pointless. Rocket Lake with 3800CL16 is now hitting 40ns in AIDA on Gear 1 after doing 50ns on launch firmware, never mind whatever it was hitting when it was still a 0000 ES.

The devil is in the details, though. RKL just cannot clock Gear 1 above ~3800, so in a bandwidth x latency metric there is a hard wall on bandwidth. Luckily for users, there are few real uses for that much bandwidth on an 8-core desktop platform.
"Launch" firmware issues are also still there, like the default Gear 2 mode on certain models killing stock performance.

Something else to keep in mind: CPU cache structures have grown significantly in the last few years. Yes, first-access latency for DDR5 certainly looks higher, but with L3 caches at 20-32MB, is it really going to be as bad in production?

The answer to your question is "yes". Performance keeps scaling with latency even going from 40 to 35ns, despite, for example, the 10-core CML already having 20MB of L3. Workloads like Cinebench that fit in L2 and barely touch L3 are the exception, not the norm. Web browsing, gaming, office work, and compression all keep benefiting from better latency.

In fact, the argument about caches can be turned around: if AMD is claiming ~15% gaming performance benefits from going from 32MB to 96MB of L3, that means there are plenty of L3 cache misses with 32MB of L3, and if each of those misses is served more slowly, performance will suffer.
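A rough AMAT (average memory access time) sketch, with made-up but plausible numbers, shows why misses that are served more slowly drag on everything:

```python
def amat(l3_hit_ns, miss_rate, mem_latency_ns):
    """Average memory access time: hit time plus miss penalty,
    weighted by how often the L3 actually misses."""
    return l3_hit_ns + miss_rate * mem_latency_ns

# Same workload and a hypothetical 5% L3 miss rate; only DRAM latency changes.
fast = amat(10, 0.05, 70)
slow = amat(10, 0.05, 90)
print(round(fast, 1), round(slow, 1))   # 13.5 14.5 (ns per access)
print(round(slow / fast - 1, 3))        # 0.074 -> ~7% slower on average
```

Even a modest miss rate lets DRAM latency leak through the cache, which is the point about Cinebench-style L2-resident workloads being the exception.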
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
If there were never any cache misses, why would there even be a next level of memory? Of course there are going to be misses, and certainly they cause performance hits. There are always cases where some software blows past its cache fairly regularly. Games have a lot of data to work through, but even then, some will certainly be more affected than others.

My point is that these gigantic caches are hiding a lot of DRAM latency, better than ever before. While first-word latency will always have a performance impact, it's not going to be noticeable for 99% of computer users out there. Large rendering tasks may take a fraction of a second longer to start, but the increased bandwidth available will help them finish in much less time than on previous platforms.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
And how do you know this?
Because I've seen an image with significantly higher latency already, with JEDEC B or C (I forget which) memory. Also on A0 silicon. It can go far higher than the 90ns in that image.
 

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136
My guess is Intel won't remove DDR4 support entirely until 14th gen Meteor Lake, which will probably have a new socket too.

Last time, DDR3 support lasted from 6th gen to 9th gen; there was actually an H310 board with DDR3 that supported the i9-9900.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Because I've seen an image with significantly higher latency already with JEDEC B or C (I forget which) memory. Also A0 silicon. It can go far higher than the 90ns in that image.

Still, if DDR5 requires Gear 2 mode, that will result in a nasty penalty, just like on RKL. While CapFrameX is not the source I'd love to quote, they do test CPU-limited scenarios:


Some nasty deficits for Gear 2 vs Gear 1 there; a 25% FPS deficit in a CPU-limited scenario is just bad. DDR5 might be starting at a huge disadvantage already if Gear 2 is engaged by default while DDR4 can run Gear 1 near 4000. Gear 4 is probably the next level of stupid.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
Still, if DDR5 requires Gear 2 mode, that will result in a nasty penalty, just like on RKL. While CapFrameX is not the source I'd love to quote, they do test CPU-limited scenarios:


Some nasty deficits for Gear 2 vs Gear 1 there; a 25% FPS deficit in a CPU-limited scenario is just bad. DDR5 might be starting at a huge disadvantage already if Gear 2 is engaged by default while DDR4 can run Gear 1 near 4000. Gear 4 is probably the next level of stupid.
Gear 4 mode is a meme for ADL-S; it will basically only ever be used to chase memory frequency world records.

As a whole, I would just stick to Gear 1 with the lowest-latency memory you can get. We're not talking Rembrandt here; there's virtually no need to chase higher memory bandwidth figures.
 

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136
As a whole, I would just stick to Gear 1 with the lowest-latency memory you can get. We're not talking Rembrandt here; there's virtually no need to chase higher memory bandwidth figures.

Even with RMB there's the little issue of sync vs async IF, unless this somehow changes with DDR5; if Cezanne has shown anything, it's that going async only makes things worse.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
Even with RMB there's the little issue of sync vs async IF, unless this somehow changes with DDR5; if Cezanne has shown anything, it's that going async only makes things worse.
Eh, going async still nets you higher memory bandwidth, so the iGPU can benefit. I'm not saying for the CPU, of course.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
Last time, DDR3 support lasted from 6th gen to 9th gen; there was actually an H310 board with DDR3 that supported the i9-9900.


6th to 9th gen were all based on the initial Skylake architecture, even though DDR3 was pretty much dead starting with the Kaby Lake refresh. Raptor Lake isn't Golden Cove, apparently.
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Speaking of memory subsystems...

I have a Surface Laptop 3, Kaby Lake R, 256GB SSD, don't know the spec.

Also a 4770k desktop with an EVO 860 1TB SSD.

Both running Windows 10. In everyday use, opening applications, the 4770K is MUCH snappier. Is this due mainly to the memory subsystem?
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Speaking of memory subsystems...

I have a Surface Laptop 3, Kaby Lake R, 256GB SSD, don't know the spec.

Also a 4770k desktop with an EVO 860 1TB SSD.

Both running Windows 10. In everyday use, opening applications, the 4770K is MUCH snappier. Is this due mainly to the memory subsystem?
I'd say with 90% certainty that it's the superior SSD. 10% uncertain because I don't know any of the other factors.
 

scannall

Golden Member
Jan 1, 2012
1,944
1,638
136
Some of you are likely getting a bit too excitable at every leak. Do something else. Have the neighbors over for a cookout, remodel your bathroom, or do a Diners, Drive-Ins and Dives tour instead. Chill and wait for products to be on shelves and benchmarked before you decide whether to love or hate them. Leaks are entertaining, I'll grant you that. I enjoy them too. I just don't take them seriously.

I'm sure Alder Lake will be a nice product, and depending on price of course a good buy. There are no bad parts, just bad prices.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
Is this due mainly to the memory subsystem?
No. You're running a system built with frugal energy usage in mind. Power management and sleep states keep performance low, there's extra latency everywhere and probably some intentional throttling as well.

I find it strange you're asking people about the memory subsystem limiting performance, yet provide no specs. I feel like a fortune cookie.
 
Last edited:
  • Like
Reactions: Tlh97 and Zucker2k