Discussion: Intel Nova Lake in H2-2026


Fjodor2001

Diamond Member
Feb 6, 2010
4,160
566
126
I am talking about gaming. Have they been able to utilize the E cores yet for gaming? If the bLLC actually makes it to market, will the E cores be able to utilize the extra cache anyway, and would there be a latency penalty? My understanding is that the E cores are pretty much still useless for gaming.

Unless Intel can somehow get the E cores working for gaming, their "gaming" chip will effectively be 8 cores/8 threads. AMD's "gaming" chip, if the 12 core CCD materializes, will be 12 cores/24 threads. As much as I would like to see Intel meet or even exceed AMD for the gaming crown, I just can't see it happening unless they pull some magic out of the hat with the P core architecture.
E cores are just like any other cores; they're simply a bit slower than the P cores.

If the game needs more than 8 cores, it'll be using E cores "automatically" too. It's not like the E cores will idle then.
(This is all assuming an 8P + xE cores CPU, but the principle applies in general for other CPUs too.)
 

ToTTenTranz

Senior member
Feb 4, 2021
495
895
136
I'll wait until Zen 7 and the new socket. That generally gets me one CPU ONLY upgrade some time in the future. I am already at that position with my AM4 mb, so as long as the ole girl keeps on coming to the show every day, I'll be sticking with her :).

Well, I OTOH have to make that investment before that, so unless Threadripper non-Pro brings some unforeseen surprises in gaming performance, I'll probably just go with a 9950X3D and an X870E right now.
 

MS_AT

Senior member
Jul 15, 2024
777
1,575
96
simply do this in Unreal/Unity (best approach imo).
While I have no experience with game development, I would expect the engine is unable to predict the load a particular game will create, so in the end all it can do is expose an API similar to the one the OS exposes, and it's back to the developers. I mean, the engine could try to do some kind of dynamic scheduling internally based on past behavior, but that brings its own set of problems.

They don't have many resources
Nobody forced them to add E-cores to their CPUs;) So they need to account for those expenses, like nVidia or AMD do.
If the game needs more than 8 cores, it'll be using E cores "automatically" too. It's not like the E cores will idle then.
True, but the problem is to do that optimally so they help instead of hinder;)
 
  • Like
Reactions: 511

511

Diamond Member
Jul 12, 2024
3,246
3,182
106
While I have no experience with game development, I would expect the engine is unable to predict the load a particular game will create, so in the end all it can do is expose an API similar to the one the OS exposes, and it's back to the developers. I mean, the engine could try to do some kind of dynamic scheduling internally based on past behavior, but that brings its own set of problems.
Yeah but they should do both expose the api and some defaults as well
True, but the problem is to do that optimally so they help instead of hinder;)
wish the Windows scheduler didn't suck as much as it does
 

MS_AT

Senior member
Jul 15, 2024
777
1,575
96
wish the Windows scheduler didn't suck as much as it does
So how is the Windows scheduler supposed to know a game process is more special than others if the game itself doesn't tell it? Without hints, it will treat it like every other process. It does not know which thread is the main one, which are cache sensitive and should be kept in place, and which can be moved freely to avoid thermal hot spots.

But of course, if you have an example where you can clearly blame the scheduler itself and not the program (in other words, we know the program is using the correct APIs, but the scheduler is doing something stupid), I would be grateful if you shared it.

Windows has a lot of issues and lots of bloatware, but I am not sure it's fair to pin everything on the scheduler, at least not without data to back it up ;)
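
For illustration, this is roughly what such a hint looks like on Windows. A minimal, untested sketch (SetThreadInformation with ThreadPowerThrottling is the documented API behind EcoQoS; the wrapper name is mine):

[CODE=cpp]
#include <windows.h>

// Minimal sketch: a game telling the scheduler that a thread is
// latency-sensitive by opting it OUT of power throttling (EcoQoS),
// so Windows prefers to keep it on fast cores. Setting StateMask
// equal to ControlMask instead would mark the thread as background
// work that is happy to live on E-cores.
void MarkThreadPerformanceCritical(HANDLE thread)
{
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = 0; // 0 = do not throttle this thread
    SetThreadInformation(thread, ThreadPowerThrottling, &state, sizeof(state));
}
[/CODE]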
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,160
566
126
True, but the problem is to do that optimally so they help instead of hinder;)
That's the job of the OS process scheduler, in orchestration with the HW and other SW. The OS scheduler should move threads to the most suitable CPU cores dynamically as needed.

Also, the problem should be similar to the fact that not all cores will be executing at max turbo frequency. Usually only e.g. 1 or 2 cores will be at max turbo, and the rest at a lower frequency. So as an analogy, you could see the 1-2 cores operating at max frequency as P cores, and the rest as E cores.

But really, how big of a problem is it in reality? big.LITTLE-style CPUs are basically used by all CPU companies nowadays. Even AMD has it with the Zen 5c cores, and soon also LP cores in Zen 6.
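
For what it's worth, Windows already models this split: every logical processor is reported with an EfficiencyClass, which any app can query. A minimal, untested sketch (real API, illustrative code):

[CODE=cpp]
#include <windows.h>
#include <cstdio>
#include <vector>

// Minimal sketch: list each logical processor's EfficiencyClass as
// Windows reports it. On hybrid CPUs, P cores get a higher class
// than E cores; on homogeneous CPUs every entry is the same.
int main()
{
    ULONG len = 0;
    GetSystemCpuSetInformation(nullptr, 0, &len, GetCurrentProcess(), 0);
    std::vector<char> buf(len);

    auto* first = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data());
    if (!GetSystemCpuSetInformation(first, len, &len, GetCurrentProcess(), 0))
        return 1;

    for (char* p = buf.data(); p < buf.data() + len;) {
        auto* e = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(p);
        if (e->Type == CpuSetInformation)
            std::printf("LP %u: EfficiencyClass %u\n",
                        e->CpuSet.LogicalProcessorIndex,
                        e->CpuSet.EfficiencyClass);
        p += e->Size;
    }
    return 0;
}
[/CODE]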
 
  • Like
Reactions: 511

MS_AT

Senior member
Jul 15, 2024
777
1,575
96
The OS scheduler should move threads to the most suitable CPU cores dynamically as needed.
I am not saying it is not the scheduler's job. I am saying the scheduler does not have a crystal ball, so you need to tell it if your process has special requirements. If you don't, don't blame it for treating your process like any other process.

Also, the problem should be similar to the fact that not all cores will be executing at max turbo frequency. Usually only e.g. 1 or 2 cores will be at max turbo, and the rest at a lower frequency. So as an analogy, you could see the 1-2 cores operating at max frequency as P cores, and the rest as E cores.
That is a completely different problem... Plus, the OS no longer controls boost behaviour, just the state of allowable boost. Since Skylake days, at least.
 

dacostafilipe

Senior member
Oct 10, 2013
804
305
136
I am not saying it is not the scheduler's job. I am saying the scheduler does not have a crystal ball, so you need to tell it if your process has special requirements. If you don't, don't blame it for treating your process like any other process.

Idk, but I would expect a modern scheduler to detect software behavior and adapt its strategy based on that. Or at least let the executable select some "modes" to run in. GameMode was a possible solution, but do people really use it?
 
  • Like
Reactions: Fjodor2001

Fjodor2001

Diamond Member
Feb 6, 2010
4,160
566
126
That is a completely different problem... Plus, the OS no longer controls boost behaviour, just the state of allowable boost. Since Skylake days, at least.
In what way is it different? The problem of matching threads to suitable cores is still the same. I.e. which thread should run on the "fastest" core.

As for the other OS scheduler issue, I noticed dacostafilipe already commented on that, and I agree with him.
 

OneEng2

Senior member
Sep 19, 2022
727
974
106
But really, how big of a problem is it in reality? big.LITTLE-style CPUs are basically used by all CPU companies nowadays. Even AMD has it with the Zen 5c cores, and soon also LP cores in Zen 6.
Well, kind of.

Zen 5c is a fully capable Zen 5 core with 100% instruction set compatibility with the full P core... it just has less cache and uses a different physical implementation to achieve a different goal (smaller and more power efficient).

You do have a point though. By placing LP cores in Zen 6, even AMD will run afoul of poorly scheduled threads. I suspect that schedulers in Windows will improve by then though.

Intel is taking the brunt of the problems by being the first to have dissimilar cores in one processor.
 

MS_AT

Senior member
Jul 15, 2024
777
1,575
96
Idk, but I would expect a modern scheduler to detect software behavior and adapt its strategy based on that
The scheduler does not know what it does not know. That is why we have APIs to tell it the information it's missing. Even observing app behaviour is difficult, as the time-critical thread might not be the busiest. It will try to account for things it does know about, like NUMA topology, CCDs, etc. But by default it will round-robin your workload between cores to avoid hotspots and give everyone a fair chance to run.

The GameMode you mention is one possible input to the scheduler. Software developers can inform the scheduler about their app's needs directly from within the app if they choose to.


In what way is it different? The problem of matching threads to suitable cores is still the same. I.e. which thread should run on the "fastest" core.
Because originally you were talking about

Usually only e.g. 1 or 2 cores will be at max turbo, and the rest at a lower frequency.
And this is controlled by the chip itself.

Then, you don't always want to run the workload on the fastest core, but rather pin it to a particular one so you won't flush all the caches on every wake-up, etc. And the critical thread doesn't have to be the hottest thread.
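
Something like this is what I mean by pinning softly. An untested sketch (SetThreadSelectedCpuSets is a real API; the wrapper name is mine):

[CODE=cpp]
#include <windows.h>

// Minimal sketch: SetThreadSelectedCpuSets is a scheduler *preference*,
// unlike SetThreadAffinityMask, which is a hard restriction. The thread
// stays warm in one core's caches in the common case but can still be
// moved under pressure. The cpuSetId is the Id field returned by
// GetSystemCpuSetInformation.
bool SoftPinThread(HANDLE thread, ULONG cpuSetId)
{
    return SetThreadSelectedCpuSets(thread, &cpuSetId, 1) != FALSE;
}
[/CODE]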
 

OneEng2

Senior member
Sep 19, 2022
727
974
106
I haven't written any modern threaded Windows code in a while. When I did, you could only really request a core. Generally speaking, setting a thread's priority to THREAD_PRIORITY_HIGHEST pretty much guaranteed that thread would keep the CPU it was on... but SetThreadIdealProcessor was just that... a preference for a core. It didn't always guarantee you got that core.

Perhaps things are getting better today. You would think they would have to, considering how tasks are becoming much more core-type specific and cores are becoming task-type specific as well.
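
From memory, the split between guarantee and preference looks roughly like this. An untested sketch of the classic Win32 calls:

[CODE=cpp]
#include <windows.h>

void ConfigureWorker(HANDLE thread)
{
    // Priority is a strong lever: the thread preempts lower-priority
    // threads, but it says nothing about WHICH core it runs on.
    SetThreadPriority(thread, THREAD_PRIORITY_HIGHEST);

    // The ideal processor is only a hint; the scheduler may still run
    // the thread elsewhere.
    SetThreadIdealProcessor(thread, 2);

    // The affinity mask is the opposite: a hard restriction. Here the
    // thread may only ever run on logical processors 0-3.
    SetThreadAffinityMask(thread, 0x0F);
}
[/CODE]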
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,160
566
126
Because originally you were talking about
Then you misinterpreted my intention with this post.
And this is controlled by the chip itself.
The point was not the control of the turbo, but the OS scheduler's control over which thread should run on the fastest core. And the scheduling issues are similar regardless of whether a core is fast because it's a P core instead of an E core, or because it has the highest boost frequency.
 

MS_AT

Senior member
Jul 15, 2024
777
1,575
96
Then you misinterpreted my intention with this post.
That is quite possible.
That's my point, it should know given enough time/cycles.
And my point is that, rather than spend scheduler cycles and waste CPU time, the application could hint its own needs to the OS so there is less guessing. That is why every OS has robust APIs for those purposes. This places the burden on the developer of the application, of course; they need to understand what they are doing, but in my opinion they understand best what their application requires, and the OS should just try to accommodate those needs as best as possible.
 
  • Like
Reactions: 511

dacostafilipe

Senior member
Oct 10, 2013
804
305
136
And my point is that, rather than spend scheduler cycles and waste CPU time, the application could hint its own needs to the OS so there is less guessing. That is why every OS has robust APIs for those purposes. This places the burden on the developer of the application, of course; they need to understand what they are doing, but in my opinion they understand best what their application requires, and the OS should just try to accommodate those needs as best as possible.
You talk about optimisation, I'm talking about default behaviour. 🤷‍♂️
 

lemans24

Junior Member
Jul 29, 2025
8
1
11
Funny because Intel must've known how bad ARL's roughly 5% ST uplift looked but did nothing to improve the situation through any means necessary.

When I first saw those ARL slides, I was like, ok maybe that's projected perf uplift. Most likely they will do something drastic to at least push the halo part to greater than 10% IPC (through frequency gain or special tweaked die). One more shred of evidence that Pat wasn't concerned about consumer CPUs at all and this is where they took the worst reputation hit. When 265K is selling for close to $200, you know that only a miracle will save them from a disastrous quarter.
Intel is doing quite well with ARL in OEM laptops/desktops, so no disastrous quarter from the client segment of Intel's business. In fact, the client segment is bringing in the most revenue for Intel now. High-end gaming DIY desktop chips are really a small part of their client revenue.
 

lemans24

Junior Member
Jul 29, 2025
8
1
11
Intel seems to have created garbage with TSMC N2
The idea that merely using TSMC's process improves things is silly.
The problem is devising what kind of cooking can be done to provide an excellent CPU while using the TSMC process.
Intel has not used N2 yet; Nova Lake will be the first, and only for select compute tiles.
 

OneEng2

Senior member
Sep 19, 2022
727
974
106
In fact, the client segment is bringing in the most revenue for Intel now.
While this is true, I believe that it is a HUGE negative, not a positive at all.

The reason Intel's client segment is bringing in more revenue ISN'T because Intel's client revenue share has increased, it is because their DC revenue has TANKED.

Furthermore, their client PROFIT is decreasing because they have no competitive part for the high end where margins are better.

Intel is faced with a huge problem from a financial standpoint. They are losing market share where the most profit is made. Even IF they gain market share at the low to mid range, with their huge overhead, they can't make ends meet without the big profit makers.

This is hardly a great position to be in for Intel.
 
  • Like
Reactions: Thunder 57

511

Diamond Member
Jul 12, 2024
3,246
3,182
106
Furthermore, their client PROFIT is decreasing because they have no competitive part for the high end where margins are better.
Their margin pressure is due to TSMC as well, plus their fab utilization dropping because wafers are going to external foundries.
 

lemans24

Junior Member
Jul 29, 2025
8
1
11
Intel will be 52 cores and AMD 24 cores (48 threads); how can it win in multithreading at all??
Easy: 48 threads is not the same as 48 cores, but Intel's 48 cores have 2 different thread profiles as well. What it comes down to is how much per-core performance contributes to overall performance. E-cores have already shown they are good at multithreading, but Zen 6 may be better clocked. I think it's a tossup right now, but the Nova Lake flagship is definitely designed more for excelling at multithreading than at single-thread performance. Just wait and see.
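
A back-of-envelope with made-up coefficients, purely to illustrate why threads and cores aren't interchangeable (assuming SMT adds ~30% per core and an E-core lands around 0.6x a P-core in throughput; the real ratios are unknown until benchmarks):

[CODE]
AMD   24C/48T:          24 x 1.30      = 31.2 P-core equivalents
Intel 16P + 32E (+LPE): 16 + 32 x 0.60 = 35.2 P-core equivalents (LPE ignored)
[/CODE]

Shift either coefficient a little and the order flips, which is why it's a tossup.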
 

lemans24

Junior Member
Jul 29, 2025
8
1
11
While this is true, I believe that it is a HUGE negative, not a positive at all.

The reason Intel's client segment is bringing in more revenue ISN'T because Intel's client revenue share has increased, it is because their DC revenue has TANKED.

Furthermore, their client PROFIT is decreasing because they have no competitive part for the high end where margins are better.

Intel is faced with a huge problem from a financial standpoint. They are losing market share where the most profit is made. Even IF they gain market share at the low to mid range, with their huge overhead, they can't make ends meet without the big profit makers.

This is hardly a great position to be in for Intel.
Look at the last few quarters for the client segment and Intel has done fairly well. But I agree for sure that the financial damage is mainly from investing in the foundry and actually losing real market share in DC. My point is that overall, Intel is somewhat healthy in the client segment and nowhere near dire straits if it misses the gaming performance crown, like a lot of other posters seem to think. Also, I think Intel made a fatal mistake bringing out an E-core-only server when there is clearly no need, since it compares really badly to Epyc compact cores. What on earth was Intel thinking? Good that they will consolidate their servers to P-core only in the future. The client segment can buy them some time IF executed properly, as OEM laptops really are Intel's lifeline right now!!
 
  • Like
Reactions: OneEng2

lemans24

Junior Member
Jul 29, 2025
8
1
11
Why does it need to clock at a certain frequency?

Since we know ST gains are 1.1x, it would be beneficial for Intel if they can achieve that at noticeably below 5.7GHz.

No one cared about clock stagnation from 2006 to 2011.

You can't predict that exactly. It's beyond silly when Intel/AMD claims exact numbers. 17% really? Why not 16%? Just say 15-20% like in the old days and be done with it. Or, undersell and call it 15%.
Yes, but the power/frequency curve is not linear, so the top of the curve costs more power than it brings frequency. Therefore, lowering the frequency to the peak-efficiency point should provide enough power headroom to insert more cores, also operating near the peak-efficiency point, which in theory should lead to an effective performance increase within the same power envelope. The problem comes when doing this slows down your critical path (Cinebench users should not worry), but they have P cores for that.

Still, it's theory, the execution remains to be seen;)
I really think doing a 1:1 comparison ratio is misleading. And that >60% gain is an estimate only. I am sure it can be near 100% in some apps and less than 10% in others. Wait until it comes out and look at benchmarks that best represent the most important multithreaded apps you use.
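
To put toy numbers on the quoted curve argument (illustrative only: assume dynamic power scales as C·V²·f and voltage scales roughly with frequency near the top of the curve, so power goes roughly as f³):

[CODE]
P(0.8f)        ~ 0.8^3 ~ 0.51 P(f)   // one core at 80% clock, about half the power
2 cores @ 0.8f ~ 1.02 P(f)           // two such cores fit the original budget
Throughput     ~ 2 x 0.8 = 1.6x      // ~60% more MT work in the same envelope
[/CODE]

Coincidentally the same ballpark as the >60% estimate, and just as sensitive to the assumptions.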