Discussion Intel current and future Lakes & Rapids thread


H433x0n

Golden Member
Mar 15, 2023
1,224
1,606
106

ASUS TUF Gaming A17 (FA707, 2023)

View attachment 85815
8C16T 7940HS manages 4.73 GHz at 80W, not sure in what test. I think at 45W it would be <4.5GHz.
Yet the idea that a 6P MTL would manage 4.5GHz while also needing to feed 8-10 E-cores looks a bit too good.

Here is Raptor for comparison.
View attachment 85817

Appreciate the link, that's a good informative source. There may be a misunderstanding: I don't think it's being claimed that it'll run 4.5GHz at 45W. Those are the boost clocks being listed. The way he displayed the information was confusing.

Core 9 185H Specs:

Boost Clocks (>45W)

1 P-Core - 5.1GHz
2 P-Core - 5.1GHz
4 P-Core - 4.8GHz
6 P-Core - 4.5GHz
1 E-Core Cluster - 3.8GHz
2 E-Core Cluster - 2.8GHz

Base Clocks (<=45W)
6 P-Core - 3.8GHz
8 E-Core - 2.8GHz

Edit: Looking at your previous screenshot, it looks like the 13900H needs 76W to achieve 3.6GHz P-core / 2.8GHz E-core. This would have MTL-H ~matching these frequencies at 45W, which seems about right.

20A is supposed to be a better node, with better PPA than N3.

Intel 7 to Intel 4 PPW gain is just 20% (but Pat mentioned the Intel 4 cell library is optimized for efficiency, unlike Intel 7, which is heavily optimized for performance).

The massive node jump actually comes next year, from Intel 4 to Intel 20A, with a PPW gain of 36%.

Not sure about ARL desktop parts, but ARL mobile parts will gain a lot from this jump.
The biggest efficiency jump out of all of these nodes (4, 3, 20A, 18A) is likely the jump from Intel 7 -> Intel 4 8VT.
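
For a rough sense of how those quoted steps would stack, here's a back-of-the-envelope sketch. It simply compounds the 20% and 36% PPW figures quoted above; treating them as straight perf-per-watt multipliers is my own assumption, not anything Intel has stated.

```cpp
#include <cstdio>

int main() {
    // PPW figures quoted above, treated as straight perf/W multipliers (an assumption).
    const double intel7_to_intel4 = 1.20;  // ~20% gain, Intel 7 -> Intel 4
    const double intel4_to_20a    = 1.36;  // ~36% gain, Intel 4 -> Intel 20A

    // Compounded: roughly 1.20 * 1.36 ~ 1.63x perf/W from Intel 7 to 20A.
    std::printf("Intel 7 -> 20A: ~%.2fx perf/W\n", intel7_to_intel4 * intel4_to_20a);
    return 0;
}
```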
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
The biggest efficiency jump out of all of these nodes (4, 3, 20A, 18A) is likely the jump from Intel 7 -> Intel 4 8VT.
In litho performance, yes, the new nodes should prove to be wildly successful based on what Intel has said, if everything goes smoothly. The move to backside power delivery will deliver another round of benefit for Intel going forward, and the move to RibbonFET/GAAFET will be another incredible step. TSMC will follow suit later, along with Samsung. This is really Intel's chance to shine.

Once they get this solved and launched without major issue, they can focus on a new design while doing this big little stuff. As I was saying to Henry Swagger, I truly do not see Intel sticking with big little for the future, but in the interim it solves the problem they were facing with the Skylake arch and then Rocket Lake. They've had to make some cuts but are working rapidly to fix them. A lot hinges on Intel's board not giving Patrick Gelsinger a boot in the bum and keeping him on so that his vision plays out. Having an actual engineer as CEO will help Intel recover, unlike the meandering one who chose infidelity and poor choices for the company when he took over years ago, or the MBA pretty boy they had as a temporary CEO.
 
  • Like
Reactions: SiliconFly

H433x0n

Golden Member
Mar 15, 2023
1,224
1,606
106
As I was saying to Henry Swagger, I truly do not see Intel sticking with big little for the future, but in the interim it solves the problem they were facing with the Skylake arch and then Rocket Lake.
I've got some news for you: I don't think there's a major client product on the horizon that isn't a heterogeneous arch.

Why aren't you a fan of the big.Little approach? Have you tried a laptop that has an Alder Lake or newer processor?
 

Mopetar

Diamond Member
Jan 31, 2011
8,489
7,731
136
4) External node has significantly better perf/watt than available in-house processes...

Even if that were the case, Intel loses out twice by not using their own fabs: first they have to pay to use the other fab, and again when their own fabs aren't being utilized.

Intel has scraped by over the years despite having to rely on their own older nodes. By many accounts TSMC has had (is still having?) some issues with their own transition to a new node.

I think Intel booking wafers on TSMC was a good business move as it helped to assure investors that it would have chips on a next generation node even if they had more problems with their own process. If they can use their own fabs, then it's also a good move to cancel their wafers (or sell them to someone else) at TSMC.

Frankly even if they don't beat TSMC, just being able to functionally execute and deliver on time is a big step forward for them over their recent history. It's even more important with all of the political posturing between the US and China and any uncertainty surrounding that.
 

FangBLade

Senior member
Apr 13, 2022
203
399
106
I've got some news for you: I don't think there's a major client product on the horizon that isn't a heterogeneous arch.

Why aren't you a fan of the big.Little approach? Have you tried a laptop that has an Alder Lake or newer processor?
What's wrong with creating 16 efficient big cores? Some competitors are already doing that, and it would mean less headache for developers because all the cores are the same. The moment they start working on 3D stacking of cores, big/little might disappear, or rather lose its purpose, as its current role is a partial solution for scaling beyond 8 cores.
 
  • Like
Reactions: A///

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Why aren't you a fan of the big.Little approach? Have you tried a laptop that has an Alder Lake or newer processor?
Yes, and I wasn't a fan. There's a delay with everything as Thread Director decides what to do. It's a bit better from Alder Lake to Raptor Lake, but it's still a nuisance. Intel says 8 cores are enough for gaming, but people expect all the cores to be used in gaming. Software needs to be written to take advantage of the extra E-cores. Adding E-cores to pump up your MT score is a measuring contest after a point.

Big little will come, of course, and then go once a better solution is found. It's not the solution for the future. It's a stopgap.
 
  • Like
Reactions: SiliconFly

SiliconFly

Golden Member
Mar 10, 2023
1,924
1,284
106
Even if that were the case, Intel loses out twice by not using their own fabs: first they have to pay to use the other fab, and again when their own fabs aren't being utilized.

Intel has scraped by over the years despite having to rely on their own older nodes. By many accounts TSMC has had (is still having?) some issues with their own transition to a new node.

I think Intel booking wafers on TSMC was a good business move as it helped to assure investors that it would have chips on a next generation node even if they had more problems with their own process. If they can use their own fabs, then it's also a good move to cancel their wafers (or sell them to someone else) at TSMC.

Frankly even if they don't beat TSMC, just being able to functionally execute and deliver on time is a big step forward for them over their recent history. It's even more important with all of the political posturing between the US and China and any uncertainty surrounding that.
Also, creating two new variants for ARL, one for 20A and one for N3, is just way too expensive and not worth it.

I don't think Intel's working on 2 different versions of the same architecture for two different nodes at the same time. Not feasible.
 
  • Like
Reactions: A///

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
What's wrong with creating 16 efficient big cores? Some competitors are already doing that, and it would mean less headache for developers because all the cores are the same. The moment they start working on 3D stacking of cores, big/little might disappear, or rather lose its purpose, as its current role is a partial solution for scaling beyond 8 cores.
AMD's alleged approach with "big little" is the same core with no difference in how they operate apart from a smaller size, whilst retaining SMT and cache. I don't have a problem with such a design. It's a good approach that balances constraint vs. needing more. I wish Intel would take this route eventually. Is there a future for big-core designs with a reduced power consumption level and better data handling? Sure... it'll cost billions to discover. And some of us old farts on here will be in retirement homes by then, or too busy living on a farm.


It goes without saying that if you step back 5 years from Alder Lake, big little was being worked on then. Same for AMD and their smarter approach with compact cores. 5 years prior to 2021 would have been 2016, around the time we began hearing about Ryzen, although my fuzzy memory says that was 2015. It is the natural progression to a problem you'll be facing, akin to staring down a gun's barrel, in the years to come.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Also, creating two new variants for ARL, one for 20A and one for N3, is just way too expensive and not worth it.

I don't think Intel's working on 2 different versions of the same architecture for two different nodes at the same time. Not feasible.
Yeah, if anything Intel would sooner run a different architecture for mobile on a leading node, either on home turf or utilising TSMC. 2-3 years ago some folks who are "new" to this scene assumed Intel going to TSMC was a massive failure, when Intel had been relying on TSMC for many years for less-important work. Being flexible and smart, weighing your options for the best success, isn't "crazy stuff".
 

H433x0n

Golden Member
Mar 15, 2023
1,224
1,606
106
Yes, and I wasn't a fan. There's a delay with everything as Thread Director decides what to do. It's a bit better from Alder Lake to Raptor Lake, but it's still a nuisance. Intel says 8 cores are enough for gaming, but people expect all the cores to be used in gaming. Software needs to be written to take advantage of the extra E-cores. Adding E-cores to pump up your MT score is a measuring contest after a point.

Big little will come, of course, and then go once a better solution is found. It's not the solution for the future. It's a stopgap.
YMMV I suppose. Thread Director decision-making happens so fast it wouldn't be perceptible to a human. The only way it'd be noticeable is if it chose an E-core when it shouldn't have. I personally do like the big.Little approach and notice it works as expected. It's a big value-add if you have a dual-monitor setup and like running things in the background while gaming. Stuff like Windows updates / Steam game downloads all gets relegated to E-cores as well and is totally imperceptible to the rest of your workflow. I reckon this is a personal preference thing though.

Recent games do use the E-cores; there have been quite a few of them this year alone (TLOU, Jedi Survivor, Ratchet & Clank). There are of course some games that don't touch them at all (Returnal, Hogwarts Legacy, Starfield).
 

dullard

Elite Member
May 21, 2001
26,019
4,633
126
What's wrong with creating 16 efficient big cores? Some competitors are already doing that, and it would mean less headache for developers because all the cores are the same. The moment they start working on 3D stacking of cores, big/little might disappear, or rather lose its purpose, as its current role is a partial solution for scaling beyond 8 cores.
Because 40+ small cores use about the same power and roughly the same amount of silicon, and blow the 16 big cores away in multi-threaded performance.

As a programmer myself who works on multithreaded programs, I can say that for my projects the difference is minimal. Add one line of code per thread to designate the ideal core. Done. You're already specifying dozens of other aspects of the thread (including priority); one more isn't any significant amount of work for me.
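
A minimal sketch of the kind of per-thread hint being described, assuming the standard Win32 APIs on a recent Windows SDK (SetThreadPriority, and SetThreadInformation with the ThreadPowerThrottling / EcoQoS hint); this is my own illustration, not dullard's actual code, and error handling is omitted:

```cpp
// Windows-only sketch: hint the scheduler that this thread is background work.
#include <windows.h>

void mark_current_thread_background()
{
    // The classic one-liner: lower the thread's priority.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

    // EcoQoS hint (newer SDKs): explicitly prefer power efficiency over speed,
    // which on hybrid parts typically lands the thread on E-cores.
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                         &state, sizeof(state));
}
```

The scheduler still has the final say; these are hints, not hard pinning.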
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
YMMV I suppose. Thread Director decision-making happens so fast it wouldn't be perceptible to a human. The only way it'd be noticeable is if it chose an E-core when it shouldn't have. I personally do like the big.Little approach and notice it works as expected. It's a big value-add if you have a dual-monitor setup and like running things in the background while gaming. Stuff like Windows updates / Steam game downloads all gets relegated to E-cores as well and is totally imperceptible to the rest of your workflow. I reckon this is a personal preference thing though.

Recent games do use the E-cores; there have been quite a few of them this year alone (TLOU, Jedi Survivor, Ratchet & Clank). There are of course some games that don't touch them at all (Returnal, Hogwarts Legacy, Starfield).
Agree to disagree. I don't want to pull the age card here, because frankly I'm not the type of old muppet to do so, but if you've used computers for decades then you can pick up on something not being quite right when working in flow; the behaviour is different. I think there is a chasm between what Intel is doing and what Microsoft is doing. I've been told this perceptible phenomenon is not present on Linux, but I've not tried it myself. Linux generally performs much better than Windows for AMD or Intel, probably because they care about performance whereas Microsoft slaps lipstick on a pig and calls it a day.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Because 40+ small cores use about the same power and roughly the same amount of silicon, and blow the 16 big cores away in multi-threaded performance.
This is a bizarre example. No such client x86-64 processor exists for the typical consumer, and your example is quite far-fetched.
 
  • Like
Reactions: Executor_

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
So we are stuck with the 1+4 core idea of Lakefield forever (or a few more for Alder Lake, Raptor Lake, etc)? Time marches on. More cores will too.
I never had Lakefield. Couldn't find a dang laptop with it. For client, Intel will need to figure out a solution like AMD's to make it work. There's only so much area you can use on the package to house core tiles, or dies if you will. It all gets more complex the more you stuff in. I don't think people would be complaining if Intel's E-cores were capable of SMT/HT and still had AVX-512. If they did? There's no doubt Zen 4 would have been toast. Probably why AMD began looking into compact core designs years ago, if they had an inkling of what Intel was beginning to get into in 2015-2016.

That is my biggest gripe with Intel: no HT and no AVX-512. Once they can figure this out, the sky is the limit. Intel's moves on reducing power use and litho advances will help. If anything, TSMC is in a tough spot with their N2 if they can't get it moving on time.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
No offense intended. But that's hilarious! :joycat:
None taken, even though you're not using that quite right. Someone makes a claim for a client CPU that's incredible and that no one has heard of; sounds like something MLID would claim. The closest claim was the rumour of one Intel client processor being an 8+32 setup, which we later realised was a sham rumour.
 

dullard

Elite Member
May 21, 2001
26,019
4,633
126
That is my biggest gripe with Intel: no HT and no AVX-512. Once they can figure this out, the sky is the limit. Intel's moves on reducing power use and litho advances will help. If anything, TSMC is in a tough spot with their N2 if they can't get it moving on time.
No hyperthreading isn't an issue to me. Hyperthreading slows down far too much important software, so I usually just leave it turned off. I'd much rather just spam more E-cores instead. Plus, you might agree with FangBLade that hyperthreading is a headache for developers since those threads are so much slower.

The loss of AVX-512 was a terrible mistake by Intel. They shot themselves in more than just the foot. But that is just a side distraction from my comments on why every company is going to Big/Little. We just can't have 40+ big cores each using 20W or more. In order to go to large numbers of cores, each core needs to use less and less power, so you might as well bite the bullet and use cores that are optimal at that small power. Then toss in 4 big cores for a snappy user experience on the single-threaded issues (Intel's other mistake was using 8 big cores in Big/Little).
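
The power argument is easy to sanity-check with rough numbers; the wattages below are illustrative assumptions (the 20W figure comes from the post, the 250W socket budget is my own placeholder), not measurements:

```cpp
#include <cstdio>

int main() {
    // Illustrative numbers only: per-core draw vs. a desktop socket budget.
    const int    core_count     = 40;
    const double big_core_watts = 20.0;   // a fully boosted big core, per the post
    const double socket_watts   = 250.0;  // an assumed high-end desktop power limit

    std::printf("%d big cores at %.0f W each: %.0f W total\n",
                core_count, big_core_watts, core_count * big_core_watts);
    std::printf("Per-core budget to fit %d cores in %.0f W: ~%.1f W\n",
                core_count, socket_watts, socket_watts / core_count);
    return 0;
}
```

At roughly 6 W per core you're firmly in little-core operating territory, which is the point being made.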
 

Dayman1225

Golden Member
Aug 14, 2017
1,160
996
146
None taken, even though you're not using that quite right. Someone makes a claim for a client CPU that's incredible and that no one has heard of; sounds like something MLID would claim. The closest claim was the rumour of one Intel client processor being an 8+32 setup, which we later realised was a sham rumour.

I can see team red offering something like that on their next socket with DDR6, more channels, and more work on the processor, simply due to the bandwidth limitations present now. If at all. IDK, Joe's 8+16 "P" core theory/love dream seems more legitimate, but still a love dream fantasy.
In fairness, 8+32 was meant to exist but it got axed.
 
  • Like
Reactions: uzzi38

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
In fairness, 8+32 was meant to exist but it got axed.
I keep seeing this, but I've never seen any concrete proof other than the initial rumour about the setup, which goes back several years now, before Alder Lake even came out. I simply don't see it. Otherwise it feeds into that weird game Intel loves to play against themselves, thinking they can add cores for linear performance. When Core came out they theorised they could add cores ad infinitum. Sounded incredible, but modern logic tells us that doesn't work. You'll soon hit limits, multiples you need to address. Though we are now on the verge of getting hundreds of cores on datacentre processors.
 

dullard

Elite Member
May 21, 2001
26,019
4,633
126
None taken, even though you're not using that quite right. Someone makes a claim for a client CPU that's incredible and that no one has heard of; sounds like something MLID would claim. The closest claim was the rumour of one Intel client processor being an 8+32 setup, which we later realised was a sham rumour.
I am not making claims about any specific CPUs that no one has heard of. I am talking about the future of computing.
 
  • Like
Reactions: SiliconFly and A///

Mopetar

Diamond Member
Jan 31, 2011
8,489
7,731
136
This is a bizarre example. No such client x86-64 processor exists for the typical consumer, and your example is quite far-fetched.

Threadripper CPUs have had 64 cores for a while. Obviously grandma doesn't need one, but professional users can use that kind of processing power.

The most exciting thing about Intel's E-cores is that they give Intel a way back into HEDT, where they had basically given up trying to compete because they couldn't make a monolithic die with even a quarter as many cores.

Putting out a die with 64 E-cores is definitely possible even on their older node, and it shouldn't be an issue for them going forward. Even though it's a niche market and E-cores may not stand up against a full Zen core, there's so much room for Intel to compete on price and power that I think they could carve out part of that market.
 

dullard

Elite Member
May 21, 2001
26,019
4,633
126
You'll soon hit limits, multiples you need to address. Though we are now on the verge of getting hundreds of cores on datacentre processors.
The first limit you hit, and the easiest to address, is that you can't realistically have hundreds of big cores each using 10 W, 20 W, or more for typical customers. You either have to severely frequency-limit the big cores, which no longer makes them act big, or you have to go to cores that operate well with lower power (i.e. little cores).