Question Alder Lake - Official Thread


tomatosummit

Member
Mar 21, 2019
184
177
116
IIRC historically the i3 has been the full smaller die though.
I know it's a technicality but 8th gen i3 was a 4c/4t version of the kaby lake die.
Although between kabyR, coffee and coffeeR, 8th and 9th gens were all over the place.
11th had tiger, tigerH, rocket and coffeelake? (i3 desktop?) dies in it too.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
That's quite a jump if i3 ends up being 6+0. Most games can't put more than 6 cores to good use, so it would make a really decent budget gaming CPU. I also just found out that there is no i3 in the Rocket Lake line-up. WTH? Is their yield so good that they don't have defective chips for i3?

I think this was known all along - as a desperate back port, there was no smaller RKL die and the 11th gen i3 and lower were just rebrands (if they ever existed?).

If a die didn't have 6 functional cores to hit the minimum spec, it must have been discarded.

I think it makes more sense if the i5 is the full smaller die (and maybe some harvested larger dies down the road?) given the volume there.

Given it is three steps from the top, the i5 is the new i3; adjust your expectations accordingly :D Back in the day, the i3 was seen as the go-to for "current" budget gaming rigs but tended to age poorly. An i5 spec that has stayed largely static since the 8th gen (SMT was added, which is likely a big deal) might do the same.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
For desktop, 6 big cores is a way better approach than fewer big cores plus small cores. Intel's choice is good; that 4+8 would lose to the 5600X in games that scale beyond 4 threads. This time 6 big cores is the sweet spot: most users won't need more, but many workloads prefer 6 cores over 4.

This is particularly evident when looking at the 5600X vs 5950X gaming benches. With a 3090 and at 1080p, the gap is ~2% on average across a couple dozen or so of the most demanding titles (per HWU). In fact, the gap is basically explained by the cache difference, which affects a few titles more than others.

The 6/12 i5 will easily be 95% as good as the 12900K for gaming, all the more so when you keep in mind that most people are running much weaker GPUs than 3090s and not running everything at 1080p medium details, lol.

In fact, in these early days of the updated Windows scheduler, with a lot of people sticking with Win10 for at least a while, the i5 is arguably the superior option full stop, as no tweaking or potentially suboptimal core allocation is an issue with the 6/0/12 layout.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
Intel is selling a 4 core Rocket Lake Xeon-E.

"In terms of SKUs, Intel has ten new SKUs. Six of these have “G” suffixes to make them graphics enabled SKUs. Unfortunately, the bottom two SKUs, the Xeon E-2324G and Xeon E-2314 are four core, non-Hyper-Threading chips that harken back to the same 4 core/ 4 thread configuration we saw in our Intel Xeon E3-1220 review over a decade ago but at a ~20-25% cost decrease, and a massive number of improvements. Still, nothing feels like this is a non-competitive segment more than not incrementing thread counts in more than a decade. "

Haha, ouch. Probably uses about the same (or more) power too, still rated at 65W (vs 80W) in 2021 terms... :D

Good for probably a ~50%+ performance upgrade given the frequency uplift and the stacked generations of IPC gains, though.

Source:


Assuming we'll see ADL server parts ~Q2 2022? I assume those will be quite exciting as well given this relative stagnation. I used to get much more excited about these parts, but now it's all about VMs, and most hosts (Linode, Vultr, etc.) are using much denser EPYC or larger Intel-based hosts.
 

dullard

Elite Member
May 21, 2001
24,998
3,325
126
I also just found out that there is no i3 in the Rocket Lake line-up. WTH? Is their yield so good that they don't have defective chips for i3?
To go from 8 cores down to 4 cores requires a lot of defects. At that point, it is likely worth just scrapping the die rather than doing all the labor to create another line of products for this stopgap generation. Instead, they just did a Comet Lake refresh of the i3 and Pentium chips that got basically no press at all.
Note that Wikipedia has an error in the base frequency for the G6405T chip; it should be 3.5 GHz, not 3.3 GHz.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
Unfortunately, the bottom two SKUs, the Xeon E-2324G and Xeon E-2314 are four core, non-Hyper-Threading chips that harken back to the same 4 core/ 4 thread configuration we saw in our Intel Xeon E3-1220 review over a decade ago but at a ~20-25% cost decrease, and a massive number of improvements. Still, nothing feels like this is a non-competitive segment more than not incrementing thread counts in more than a decade. "

Haha, ouch. Probably uses about the same (or more) power too, still rated at 65W (vs 80W) in 2021 terms... :D

Good for probably a ~50%+ performance upgrade given the frequency uplift and the stacked generations of IPC gains, though.
Think more towards pro/pro-sumer NAS units that support virtualization.
 

mikk

Diamond Member
May 15, 2012
4,111
2,105
136
Given that it's WTFTech, it's probably not accurate. But not too far off. Not having the small cores is going to make the 12600 a bad deal compared to the 12600K unless the price gap is much bigger than in the past.


Not having the small cores could be better in several circumstances like gaming workloads or Windows 10. The 6+0 die doesn't need to deal with the e-cores.
 
  • Like
Reactions: Space Tyrant

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
In fact, in these early days of the updated Windows scheduler, with a lot of people sticking with Win10 for at least a while, the i5 is arguably the superior option full stop, as no tweaking or potentially suboptimal core allocation is an issue with the 6/0/12 layout.

I mean it's technically tweaking, but you can just disable the Gracemont cores if you don't want them for anything.

Huh, I was randomly looking at the cheapest board on Newegg to try to figure out how much power it will allow the CPU to have, and apparently there's a BIOS feature you can turn on which will turn off the Atom cores (temporarily?) when you enable Scroll Lock. That might be useful to people.

That's really clever! Assuming you don't need scroll lock for anything else.
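For anyone who would rather not touch the BIOS at all, a similar effect can be approximated per-process with a CPU affinity mask. A minimal sketch in Python, assuming the P-cores' logical CPUs enumerate first (which is not guaranteed on every platform; the core counts and `game.exe` name below are made-up illustrations):

```python
def p_core_affinity_mask(n_logical: int) -> int:
    """Bitmask with the low n_logical bits set, one bit per logical CPU.

    If the P-cores (and their HT siblings) enumerate first, this mask
    restricts a process to the P-cores only -- an assumption, not a given.
    """
    if n_logical < 1:
        raise ValueError("need at least one logical CPU")
    return (1 << n_logical) - 1

# 8 P-cores with HT = 16 logical CPUs -> mask 0xFFFF
mask = p_core_affinity_mask(16)
print(f"{mask:X}")  # e.g. usable as: start /affinity FFFF game.exe
```

On Windows the resulting hex mask can be passed to `start /affinity`; on Linux, `taskset` accepts the same style of mask. Whether the P-cores really occupy the low logical CPU numbers should be verified on the specific system first.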
 
  • Like
Reactions: Arkaign

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
Not having the small cores could be better in several circumstances like gaming workloads or Windows 10. The 6+0 die doesn't need to deal with the e-cores.
No, it couldn't be better; it could be the same, but not better.
The only exception is the DRM issue, where games would just not run at all, but that's not a performance difference.
 
Jul 27, 2020
15,739
9,809
106
apparently there's a BIOS feature you can turn on which will turn off the Atom cores (temporarily?) if you enable scroll lock. That might be useful to people.
It would be cool to watch the E-cores appear and disappear in Task Manager if Windows reacts to the scroll lock in real time. Also, I hope the E-cores are shown distinctly in Task Manager.
 

naukkis

Senior member
Jun 5, 2002
701
569
136
No, it couldn't be better, it could be the same but not better.


When a primary thread-pool thread gets assigned to an E-core, performance will suffer due to thread-synchronization problems. Using different-speed cores will bring lots of new problems; Intel's optimization simply targets priority threads only at the high-performance cores.
 
  • Like
Reactions: Space Tyrant

mikk

Diamond Member
May 15, 2012
4,111
2,105
136
No, it couldn't be better, it could be the same but not better.
Only exception the DRM issue where games would just not even run but that's not a performance difference.


Did you test it, or how can you know? There are rumors about performance losses when they are enabled, and about higher power consumption and temperature as well. Surely not in all games, but as with hyperthreading in the early days, it wouldn't surprise me if there are performance losses in a number of games.


 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126

When a primary thread-pool thread gets assigned to an E-core, performance will suffer due to thread-synchronization problems. Using different-speed cores will bring lots of new problems; Intel's optimization simply targets priority threads only at the high-performance cores.
But this already happens all the time even without E-cores. Just do a Google search for any game you want plus "stutter" and you will get tons of hits, because all the games are terribly coded and don't have any code to sync the threads anyway.
Also, most games are coded for the previous-gen consoles that had two clusters of 4 cores, so all those games already have internal sync issues that are made to not show up on consoles but will on anything else.

Did you test it, or how can you know? There are rumors about performance losses when they are enabled, and about higher power consumption and temperature as well. Surely not in all games, but as with hyperthreading in the early days, it wouldn't surprise me if there are performance losses in a number of games.


Sure, if you turn off the E-cores and that gives the rest of your cores more power to hit higher clocks, then it will run better.
Isn't everybody's opinion of Intel, though, that power limits are always lifted and the CPU always has like 300W available?
 

diediealldie

Member
May 9, 2020
77
68
61

When a primary thread-pool thread gets assigned to an E-core, performance will suffer due to thread-synchronization problems. Using different-speed cores will bring lots of new problems; Intel's optimization simply targets priority threads only at the high-performance cores.

Actually, that problem already happens in the real world. We have hyperthreading (which is slower than threads 0, 2, 4, ...) and various core parking for power saving, sudden background tasks kicking in, etc.
On the bright side, if the background Windows programs are well written and don't touch affinity masks on their own, then you might see gaming performance increase, because background tasks will no longer evict precious game data from the P-cores' cache (L1 and L2).
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Couple of thoughts...

The Gracemont cores, at Skylake-level IPC, are quite strong in terms of compute. If it's a matter of working out scheduling, then with Intel and their 80% market share behind it I believe it will happen. There will be some issues here and there, and yes, minor and rare as they might be, there will be people jumping up and down with banners, screaming it to the world. That's fine; there are those people on both sides of this fence. Intel is seriously committed to this, and they (finally) have the right guys steering the ship.

Early Alder Lake benchmarks are looking to be legit... but... By legit I mean they were actually Alder Lake benchmark scores. But legit and useful are two different things. We have no idea of the specific benchmark version in some cases, no idea of clock speeds, memory configuration, motherboards, or stage of chip development... Moving forward I'm not going to worry about benches without supporting details. I think I've learned something from this "round."
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
One more thing regarding Small Core/Big Core possible scheduling issues.

HT can also be viewed as big/little cores, and while there were some issues when HT was first introduced, it's rare today that you need to turn off HT because of decreased performance. The issue has mostly been worked out.

Gracemont cores will certainly be more powerful than Golden Cove HT threads, so the Thread Director should be able to get this right.

As Ian stated in the ADL architecture deep dive, if an application requires 16 threads, the Thread Director will assign 8 Golden Cove cores and 8 Gracemont cores to those 16 threads. HT will not be employed.
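That fill order (P-cores first, then E-cores, with HT siblings only as a last resort) can be sketched as a toy model. This is just an illustration of the policy as described, not Intel's actual Thread Director logic, and the 8+8 defaults simply mirror the top ADL configuration:

```python
def assign_threads(n_threads: int, p_cores: int = 8, e_cores: int = 8):
    """Return the hardware-thread type each software thread would land on,
    filling physical P-cores first, then E-cores, then P-core HT siblings."""
    slots = ["P"] * p_cores + ["E"] * e_cores + ["P-HT"] * p_cores
    if n_threads > len(slots):
        raise ValueError("more software threads than hardware threads")
    return slots[:n_threads]

print(assign_threads(16))  # 8 x "P" then 8 x "E"; no HT siblings used
```

Only past 16 threads (in this 8+8 model) would the P-cores' HT siblings start picking up work.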
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
There is still hyperthreading to disable. Intel did mention to someone to talk about threads, so the i3 is only 6 threads compared to the i5 at 12 and 16.
Disabling hyperthreading is a market segmentation thing. A core not capable of hyperthreading will be completely disabled. This has been the case for both Intel and AMD for a while now.

When a primary thread-pool thread gets assigned to an E-core, performance will suffer due to thread-synchronization problems. Using different-speed cores will bring lots of new problems; Intel's optimization simply targets priority threads only at the high-performance cores.
AMD uses CPPC to designate preferred cores. Intel has a similar mechanism in both Windows 10 and Windows 11. Note that the Gracemont cores are higher latency, so they should always be deprioritized for performance-critical tasks. The AMD bug regarding CPPC was precisely because of this, FYI.
But this already happens all the time even without E-cores. Just do a Google search for any game you want plus "stutter" and you will get tons of hits, because all the games are terribly coded and don't have any code to sync the threads anyway.
Also, most games are coded for the previous-gen consoles that had two clusters of 4 cores, so all those games already have internal sync issues that are made to not show up on consoles but will on anything else.


Sure, if you turn off the E-cores and that gives the rest of your cores more power to hit higher clocks, then it will run better.
Isn't everybody's opinion of Intel, though, that power limits are always lifted and the CPU always has like 300W available?
Comparing 7+ year-old consoles with modern hardware is not a good idea in general. Both GC and Gracemont cores are much faster than the console CPUs from 7+ years ago.

Gracemont cores have higher latency but much higher efficiency. GC cores eat power like anything, but they have much lower latency and much higher performance. There are games that will eat this architecture up, and in those games Intel will leave AMD behind. Other games will not get along with the hybrid architecture, and there you won't see a difference either way.

Intel has a great chip; every x86 system out there needs a few low-power cores to run background tasks, but my concern is that AMD may get trapped because of it. I think x86 does need the type of architecture Intel is providing, but knowing Intel, they will abuse it, and knowing AMD, they will ignore it. If an x86 laptop had a chip that could run on 1-2 watts and power 99% of the services and lazy foreground stuff, while having mid/high cores to deal with the more demanding stuff, we'd all win.

The disturbing thing for me is that Intel is basically forcing developers to optimize for their hybrid architecture at the expense of AMD chips. We consumers need hybrid chips, but both AMD and Intel should be collaborating on an approach, despite being competitors. They are not.

We need standards here, and I suspect we won't get them. Intel is aiming too high with both core types, and AMD doesn't care.

Couple of thoughts...

The Gracemont cores, at Skylake-level IPC, are quite strong in terms of compute. If it's a matter of working out scheduling, then with Intel and their 80% market share behind it I believe it will happen. There will be some issues here and there, and yes, minor and rare as they might be, there will be people jumping up and down with banners, screaming it to the world. That's fine; there are those people on both sides of this fence. Intel is seriously committed to this, and they (finally) have the right guys steering the ship.

Early Alder Lake benchmarks are looking to be legit... but... By legit I mean they were actually Alder Lake benchmark scores. But legit and useful are two different things. We have no idea of the specific benchmark version in some cases, no idea of clock speeds, memory configuration, motherboards, or stage of chip development... Moving forward I'm not going to worry about benches without supporting details. I think I've learned something from this "round."

Golden Cove will set new benchmarks for ST performance. Gracemont will lead the way for perf/watt on x86 (from what we've seen). Together they will be "meh" due to power usage. The benchmarks you see are indicative of real-world performance, FYI. Intel has gotten better with power usage, but they are still behind AMD at the top end due to their desire to push things to the bleeding edge. Intel is very clearly going to lead in absolute IPC for the short term.
 
  • Like
Reactions: Arkaign

naukkis

Senior member
Jun 5, 2002
701
569
136
The disturbing thing for me is that Intel is basically forcing developers to optimize for their hybrid architecture at the expense of AMD chips. We consumers need hybrid chips, but both AMD and Intel should be collaborating on an approach, despite being competitors. They are not.

Intel is forcing developers to use more cores, at last. AMD won't suffer from that at all, as cores are cores as long as there are enough of them. What AMD needs to do against the hybrid arch is partition its cores; for the 5950X that might be as easy as making one chiplet the high-priority cores and boosting it to max, while making the other chiplet low priority and boosting it only high enough that it doesn't lower the first chiplet's boost (to get a performance uplift from the cache separation). AMD does need the Windows scheduler to do those things, but as long as they have at least as many threads as the Intel part, there isn't any problem absorbing whatever optimizations are done for Alder Lake.
 

geegee83

Junior Member
Jul 5, 2006
23
13
66
Thread Director sets a precedent that AMD can probably just copy or modify. The Intel Bridge tech for Android apps in W11 works for AMD as well. They could have locked it out. Overall I think all of these innovations are good for the ecosystem.
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
Comparing 7+ year-old consoles with modern hardware is not a good idea in general. Both GC and Gracemont cores are much faster than the console CPUs from 7+ years ago.
I'm not comparing the consoles; all I'm saying is that games are still being coded for the old systems.
They expect to run their heavy stuff on one set of cores and dispatch lighter stuff to a second set of cores.
Gracemont cores have higher latency, but much higher efficiency.
Latency is how fast they are going to react, not how fast they are going to complete a task.
If you are talking about games, they will take longer to perform the same task than the bigger core would, but that's also the case if that thread ran on a double-booked core, one that has to run two or more threads.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Latency is how fast they are going to react, not how fast they are going to complete a task.

It affects performance in any kind of branchy code where the predictors can't maintain high hit rates. If the Gracemont cores have to snag something from L3 outside of their cluster or from main memory, that latency is going to sting you.
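The cost of those out-of-cluster or main-memory trips can be made visible with a crude pointer-chasing loop. A sketch in Python (interpreter overhead dominates here, so treat the timings as directional only; the array sizes are arbitrary choices for illustration):

```python
import random
import time

def pointer_chase(n: int, iters: int) -> float:
    """Average seconds per step of a dependent chain of random lookups.

    Each load's index depends on the previous load's result, so the
    hardware can't prefetch ahead; once n spills out of cache, the
    per-step cost rises with memory latency.
    """
    idx = list(range(n))
    random.shuffle(idx)              # unpredictable access pattern
    t0 = time.perf_counter()
    i = 0
    for _ in range(iters):
        i = idx[i]                   # dependent load: no overlap possible
    return (time.perf_counter() - t0) / iters

cached = pointer_chase(1 << 10, 200_000)    # ~1K entries: cache-resident
uncached = pointer_chase(1 << 20, 200_000)  # ~1M entries: mostly misses
```

The same chase written in C shows the L1/L3/DRAM steps far more starkly; in Python the gap is muted, but the dependent-load structure is what makes the latency unhidable either way.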