Discussion Intel current and future Lakes & Rapids thread


mikk

Diamond Member
May 15, 2012
4,112
2,108
136
At those frequencies, core for core, I'm expecting GC to be more power-efficient. Package power of 78.5W running AIDA Stress FPU at 6x4 GHz is pretty good (my undervolted 8-core Tiger Lake at 3.4GHz needs 56W for Stress FPU and 49W for CB R20):


And it confirms what I said: AIDA64 FPU runs at higher power than Cinebench MT on Intel. 78W in AIDA64 FPU could translate into roughly 70W in Cinebench MT, which is really good.
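A quick back-of-the-envelope for that 70W guess, assuming the CB R20 to Stress FPU package power ratio from the Tiger Lake numbers above carries over to GC (purely an assumption):

```python
# Back-of-the-envelope only: scale the leaked 78.5W AIDA64 FPU figure by the
# CB R20 / Stress FPU ratio from the undervolted Tiger Lake system above.
# Assumes that ratio roughly carries over to Golden Cove.
tgl_fpu_w = 56     # Tiger Lake @ 3.4 GHz, AIDA Stress FPU
tgl_cb_w = 49      # Tiger Lake @ 3.4 GHz, Cinebench R20 MT
adl_fpu_w = 78.5   # leaked package power, 6 cores @ 4 GHz, AIDA Stress FPU

adl_cb_w = adl_fpu_w * (tgl_cb_w / tgl_fpu_w)
print(f"Estimated Cinebench MT package power: ~{adl_cb_w:.0f} W")  # ~69 W
```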
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
Those gaming scores are interesting.

Questions though.

DDR4 or DDR5? JEDEC spec or tighter timings/higher speeds?

W11? Before or after the L3 cache issue with Zen 3 was fixed?

TDP and Tau settings? Intel spec or motherboard spec?

I foresee a lot of variance in the results come review day. AnandTech, who use JEDEC specs and Intel TDP/Tau settings, may show considerably worse results than HUB, who will likely run above-spec RAM and let the motherboard dictate power and Tau limits. GN, who will likely run above-spec RAM but stick to Intel-spec power and Tau limits, will be an interesting middle ground.

On top of that there's W10 vs W11 testing: will reviewers stick to a single OS, or choose the best-performing one for each platform?


Also this:




E-cores disabled or enabled?
 

Timorous

Golden Member
Oct 27, 2008
1,533
2,535
136

Oh yeah. So many configs to test to find the performance/price sweet spot.

If disabling E-cores does improve performance, or maintains the same performance with lower power and temperatures, it had better be doable via Windows, because that could be a real pain otherwise.

I think for many the 12400 may be the best of the bunch because that is just 6 p-cores. I am also interested in seeing HUB do their cache performance scaling testing to see how much more cache helps the 12900k vs the lower tier models that have less L3.
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
ADL-S pricing leaks?

i9 12900K - $589
i9 12900KF - $564
i7 12700K - $409
i7 12700KF - $384
i5 12600K - $289
i5 12600KF - $264


I would have to dig through my post history (on mobile for the next few weeks, so I am not doing it now), but I believe I guessed $588; was I really only a dollar off? 🤣
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
I think for many the 12400 may be the best of the bunch because that is just 6 p-cores. I am also interested in seeing HUB do their cache performance scaling testing to see how much more cache helps the 12900k vs the lower tier models that have less L3.

The problem with the cache "help" will be finding reliable information: in "stock" testing those extra 5-10MB will have an outsized impact due to the memory being very slow, while fully tuned DDR4 3733 CL14 will lessen the impact of the L3.
Btw, any news on memory tuning still being unlocked on non-Z motherboards? If there ever was a generation that benefits from faster memory, this is it.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Yeah, Anandtech and JEDEC timings on DDR5 will destroy them. Large L3 and L2 can carry only so much when main memory is slower than mobile CPUs from fruit vendors.

I would have to think they are comparing JEDEC 3200CL22 to JEDEC 4800CL40. If they used 4800CL36 (which is still JEDEC) it would only be 3% worse. 4800CL40 is 21%.

Personally, I would stick with DDR4 if you're considering buying 4800 CL40.
 

Abwx

Lifer
Apr 2, 2011
10,853
3,298
136
That gaming performance looks quite... interesting for Intel's own benchmarks. It will probably look even worse in third-party reviews.

Well...

Intel did show a single slide of comparisons against AMD in gaming with an RTX 3090; however, they stated the tests were done without the latest L3 patch fix, and admitted that they would have preferred to show us full results.




 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Yikes, ADL only officially supports DDR5 4400 if your board has 4 slots. And if all 4 slots are used you only get 3600. How this affects realistic clock speeds with XMP isn't mentioned.
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
Pricing looks good, but total platform costs are not, assuming mobo and DDR5 prices are elevated over last gen and DDR4. With DDR5 being expensive, plus being gimped if you run 4 slots, who will want to use DDR5 for Alder Lake?
 

phillyman36

Golden Member
Jun 28, 2004
1,762
160
106
I already sold some stuff to reduce the hit on my wallet. I got what I wanted from Newegg:
12900K
Asus Z690 Hero
G.Skill RAM (late November)
OLOY RAM to hold me over until I get the G.Skill RAM
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
Another factor to mention is DRM. Intel has made statements to this, but there is an issue with Denuvo as it uses part of the CPU configuration to identify systems to stop piracy. Due to the hybrid nature, Denuvo might register starting on a different core (P vs E) as a new system, and eventually lock you out of the game either temporarily or permanently. Out of the top 200 games, around 20 are affected and Intel says it still has a couple more to fix. It’s working with Denuvo for a high-level fix from their side, and with developers to fix from their end as well. Intel says it’s a bit harder with older titles, especially when there’s no development going on, or the IP is far away from its original source. A solution to this would be to only launch those games on specific cores, but look out for more updates as time marches on.
My Thread Dictator™ joke is slowly coming to life.

When a user is on the balanced power plan, Microsoft will move any software or window that is in focus (i.e. selected) onto the P-cores. Conversely, if you click away from one window to another, the thread for that first window will move to an E-core, and the new window now gets P-core priority. This makes perfect sense for the user that has a million windows and tabs open, and doesn’t want them taking immediate performance away.
The way that this is described also means that if you use any software that’s fronted by a GUI, but spawns a background process to do the actual work, unless the background process gets focus (which it can never do in normal operation), then it will stay on the E-cores.

In my mind, this is a bad oversight. I was told that this is explicitly Microsoft’s choice on how to do things.
Sounds like the first thing to do on ADL-S is to switch to Performance Plan. I'll be curious to see idle efficiency in this case.

EDIT: hopefully Ian got an incomplete description and the switch from P to E only happens when the focused program requires more P cores than available.
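For the "only launch those games on specific cores" idea, here's a minimal sketch of what manual pinning could look like, using psutil. The core numbering and the game's process name are assumptions, not anything Intel or Microsoft has published: on an 8P+8E part the P-core threads are usually enumerated as logical CPUs 0-15 and the E-cores as 16-23, but verify on your own box before relying on it.

```python
# Minimal sketch: restrict a process to the P-cores so the focus-based
# scheduling (or Denuvo's CPU fingerprinting) never sees it hop to E-cores.
# ASSUMPTION: logical CPUs 0-15 are the P-core threads on an 8P+8E chip.
import psutil

P_CORE_CPUS = list(range(16))  # assumed P-core logical CPUs

def pin_to_p_cores(pid: int) -> None:
    """Set the CPU affinity of one process to the P-cores only."""
    psutil.Process(pid).cpu_affinity(P_CORE_CPUS)

# Example: pin every running instance of a (hypothetical) game executable.
for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() == "somegame.exe":  # hypothetical name
        pin_to_p_cores(proc.info["pid"])
```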
 

Krteq

Senior member
May 22, 2015
991
671
136
Those performance numbers - it's even worse than I expected

Igor Wallossek was right: Intel set PL1 (TDP) to 241W and used an unpatched Windows 11 build, so Ryzen was underperforming





//nvm, that unpatched W11 build has been mentioned already
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
And with that, let the clown fiesta finally begin! When are the reviews due again? I totally forgot.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
And with that, let the clown fiesta finally begin! When are the reviews due again? I totally forgot.
Nov 4th. And yes, it looks bad to me. NOBODY is saying it's efficient. The power usage is insane. The testing so far was done without a patched OS? And even Intel is not claiming it wins in anything..... YET
 

inf64

Diamond Member
Mar 11, 2011
3,685
3,957
136
It won't be long, only a week from now. The forced PL1 at 240W is raising a lot of eyebrows... If GC needs to be blasted above 5GHz in order to outperform Zen 3, even in games, then it sounds like a very inefficient design. The part where the 3.3GHz fixed-clock parts showed a 19% IPC increase indicates that GC has around 10-11% higher IPC than Zen 3, on average. The core structures are much larger than Zen 3's, so I wonder why they are getting only ~10% more IPC out of such a wide machine. Sure, it can run at a 5.2GHz clock, which Zen 3 cannot, but look at the power it takes; it's crazy stuff.

I do see ADL being a great mobile chip; those E-cores will do awesome in the lower clock/power domains. Desktop and server, on the other hand, look abysmal.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Those performance numbers - it's even worse than I expected

Igor Wallossek was right: Intel set PL1 (TDP) to 241W and used an unpatched Windows 11 build, so Ryzen was underperforming

OTOH they did use high latency DDR5 (4400 CL36). According to Ian, Intel shipped 4800CL40 to reviewers which is just as bad. That could hurt gaming performance in some capacity.
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
OTOH they did use high latency DDR5 (4400 CL36). According to Ian, Intel shipped 4800CL40 to reviewers which is just as bad. That could hurt gaming performance in some capacity.

Testing DDR5 4400 CL36 vs DDR4 3200 CL14 for Zen 3/RKL almost certainly does ADL no favors in most of those comparisons; DDR4 is probably the way to go for performance on 12th gen at this time.
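For reference, here's the first-word latency math for the kits mentioned so far (nominal timings, so a rough illustration rather than a measurement):

```python
# First-word latency in ns = CAS latency / memory clock, where the memory
# clock is half the transfer rate (DDR = two transfers per clock).
kits = {
    "DDR4-3200 CL14": (3200, 14),
    "DDR4-3200 CL22": (3200, 22),
    "DDR5-4400 CL36": (4400, 36),
    "DDR5-4800 CL36": (4800, 36),
    "DDR5-4800 CL40": (4800, 40),
}

for name, (mts, cl) in kits.items():
    latency_ns = cl / (mts / 2) * 1000
    print(f"{name}: {latency_ns:5.2f} ns")
# DDR4-3200 CL14 lands around 8.75 ns vs ~16.4 ns for DDR5-4400 CL36,
# which is why DDR4 looks like the safer bet for gaming right now.
```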
 

Abwx

Lifer
Apr 2, 2011
10,853
3,298
136
The part where the 3.3GHz fixed-clock parts showed a 19% IPC increase indicates that GC has around 10-11% higher IPC than Zen 3, on average.

In respect of CML they state 12% better IPC for RKL and 19% on top of that for ADL, which makes 33% from CML to ADL, yet the same slide says that the latter comparison is 28%...

From the numbers it seems that the eight E-cores use 20% of the total power, which means a little less than 50W, leaving around 190W for the P-cores, so the design is not that inefficient, it's just over-overclocked, literally.
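For anyone who wants to check the arithmetic, both numbers fall out of the figures already quoted in this thread:

```python
# 1) IPC compounding: +12% (CML -> RKL) and +19% (RKL -> ADL)
cml_to_adl = 1.12 * 1.19
print(f"CML -> ADL: +{(cml_to_adl - 1) * 100:.0f}%")  # ~ +33%, vs the 28% on Intel's slide

# 2) Power split at the 241 W limit, if the E-cores really take ~20% of it
package_w = 241
e_core_w = package_w * 0.20
p_core_w = package_w - e_core_w
print(f"E-cores ~{e_core_w:.0f} W, P-cores ~{p_core_w:.0f} W")  # ~48 W / ~193 W
```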
 

Karnak

Senior member
Jan 5, 2017
399
767
136
OTOH they did use high latency DDR5 (4400 CL36). According to Ian, Intel shipped 4800CL40 to reviewers which is just as bad. That could hurt gaming performance in some capacity.
Well official specs are official specs. Which means 4800 for ADL on a MB with 2 slots and 4400 on a MB with 4 slots but only 2 RAM modules. Looks like the IMC in general is not that great.
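Put together with the 4-slot numbers from earlier, the officially supported speeds mentioned in the thread boil down to a small lookup (just the thread's figures, not Intel's full spec table):

```python
# Officially supported DDR5 speeds as discussed in the thread, keyed by board
# layout and number of DIMMs populated.
supported_ddr5 = {
    ("2 slots", 2): 4800,   # 1 DIMM per channel boards
    ("4 slots", 2): 4400,   # 2 DIMM per channel boards, half populated
    ("4 slots", 4): 3600,   # 2 DIMM per channel boards, fully populated
}

for (layout, dimms), speed in supported_ddr5.items():
    print(f"{layout}, {dimms} DIMMs: DDR5-{speed}")
```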
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Well official specs are official specs. Which means 4800 for ADL on a MB with 2 slots and 4400 on a MB with 4 slots but only 2 RAM modules. Looks like the IMC in general is not that great.

4800 CL36 is part of the JEDEC spec, which they could have used, but it seems like most RAM manufacturers are opting for 4800 CL40.