Discussion: Intel Nova Lake in H2-2026


Io Magnesso

Senior member
Jun 12, 2025
502
138
71
Since preliminary leaks are out, I wanted to make a poll on how many people are disappointed by this 😛
Well, so far I've written off Nova Lake as junk…
There is still plenty of time.
We don't even know what Nova Lake's actual physical chiplet layout looks like, so I think it's still up in the air, and the same goes for Zen 6. Let's wait for more information.
 
Jul 27, 2020
26,010
17,948
146
But if you look at the meme, it's your responsibility...
Japanese people are 360 degrees on many things.

Like I read that they protest in the workplace by working even harder with a vengeance and increasing productivity.

Who decides these things? The elders?
 
  • Haha
Reactions: 511

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
Me. Stupidly, arrogantly, angrily, viciously, grief-strickenly disappointed.

Guess I'll have to hug my Bartlett Lake and Arrow Refresh PCs unless Nova Lake has something exciting in it (like really low RAM latency or overclocking speeds approaching 11000 MT/s).
As I've said many times, Bartlett Lake is unfortunately aimed at the embedded market.
The P-core-only models won't make their way into the consumer market.
 
Jul 27, 2020
26,010
17,948
146
The P-core-only models won't make their way into the consumer market.

There is a chance that they may release higher end CPUs (like 180F and 190F) with only P-cores for gaming.
 
  • Haha
Reactions: Io Magnesso

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
Japanese people are 360 degrees on many things.

Like I read that they protest in the workplace by working even harder with a vengeance and increasing productivity.

Who decides these things? The elders?
No, this isn't about Japanese sensibilities.
The meme about Japan's most famous wild beast is... in a sense, something you read at your own risk.
My previous post was just a warning that I wouldn't be held responsible if you went and researched the meme out of curiosity.
Sorry for the misunderstanding.
Intel 810
 
  • Like
Reactions: 511

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
Whatever it is, Google isn't interested in letting me see it. I get pretty safe responses.
That's right lol
But maybe I've seen that meme without knowing it... this meme often gets mixed in with others (in Japan)... and given how far it has spread, you can see the meme trending overseas as well...
 

Darkmont

Member
Jul 7, 2023
60
176
86
What was wrong with that? That's what the mont cores are for anyway.
If you want 6 13mm^2 cores clocking 4 giggleshits max only to get obliterated by Zen 6 when Beast Lake would've launched then be my guest. The cores were too fat for server, too power hungry for mobile, the only market they would've made sense in is the teeny tiny DT and mobile workstation segment, at which point it'd still be suspect. Also, conspiring that it's IDC jewish influence that made Pat axe RYC when he literally started UC after to give the Atom guys the reins is a new level of schizoposting. The core just sucked shit dude, give it up
 
Jul 27, 2020
26,010
17,948
146
Also, conspiring that it's IDC jewish influence that made Pat axe RYC when he literally started UC after to give the Atom guys the reins is a new level of schizoposting. The core just sucked shit dude, give it up
Pat realized too late how much trusting IDC damaged his tenure.

If you want 6 13mm^2 cores clocking 4 giggleshits max only to get obliterated by Zen 6 when Beast Lake would've launched then be my guest. The cores were too fat for server, too power hungry for mobile, the only market they would've made sense in is the teeny tiny DT and mobile workstation segment, at which point it'd still be suspect.

Then why let Debbie embark on a journey like that in the first place? Were there no feasibility studies done? No questions asked before starting about how big the core would have to be to achieve their goals? They just let her run around unchecked, racking up bills? Makes no sense. There was a good reason for the core to exist, and Debbie was invested in it enough to be determined to bring it to life even after leaving Intel. What if shutting down that project turns out to be yet another one of Intel's legendary mistakes?
 

Darkmont

Member
Jul 7, 2023
60
176
86
Then why let Debbie embark on a journey like that in the first place? Were there no feasibility studies done? No questions asked before starting about how big the core would have to be to achieve their goals? They just let her run around unchecked, racking up bills? Makes no sense. There was a good reason for the core to exist, and Debbie was invested in it enough to be determined to bring it to life even after leaving Intel.
Pathfinding takes many directions, and the AADG was a rather academic bunch, not directly involved in a long line of industry cores like Austin or Haifa. From 2019 the project could've gone in a whole lot of directions before they realized it wouldn't work out. What's now known as Sunny Cove had been worked on in Oregon for 2-3 years before getting moved to Haifa and restarted; shit happens. From the beginning it was an experimental Hail Mary of a core, devised at a time when Intel had great financial prospects, and now that they don't, they can't afford to fund those kinds of experiments like they used to. The core's reason to exist was as a proof of concept that there's still room for mega-ILP, and evidently there isn't, unless you want to sell gobs of SI to a clientele who will hate it for only bringing an estimated ~40% 1T uplift, if Raichu is to be believed. Projects get shelved all the time; just look at Ocean Cove, MTL-S, PTL-S, MTL-M, Arrow Lake Halo, ARL 8+32, etc.
What if shutting down that project turns out to be yet another one of Intel's legendary mistakes?
And what if it wasn't, and the people leading the roadmaps are at least semi-competent, realized the core didn't make fiscal sense, and saw it was going to commit an Intel™ classic and slip for running behind schedule? Not to mention the rumored killing of gobs of 32-bit support.
 
Last edited:

Det0x

Golden Member
Sep 11, 2014
1,455
4,948
136
So let's make some estimates for NVL-S perf.
Gaming ~5% faster than 9800X3D, well behind Z6X3D.
1T, ~3900 GB6.4 vs ~4k+ Z6
nT, ~4k CB24 vs ~3750 Z6 24c (nearly Intel best case, AMD wins in anything FP heavy)
Not good famalam, considering this is an iso-node showdown.
GB6 MT hardly scales above 8 cores; my guess is Nova Lake will lose both ST and MT in this benchmark.
 

Doug S

Diamond Member
Feb 8, 2020
3,298
5,734
136
I don't know much about Geekbench.
Even so, in the case of Arm, the score gets inflated by SME...
After all, SPEC2017 is the best choice in this case.

How is SME bumping ARM scores any different than AVX2 / AVX512 which bump up x86 scores?

I agree SPEC is preferable, but Geekbench isn't designed to measure the same things as SPEC, because it gives out binaries while SPEC gives out source.

SPEC is partially a compiler test since everyone starts with the same source code and it is up to the compiler to produce optimal code for your architecture. That's led to some cheating in the past from Sun and Intel (and I'm sure others) gaming their own compiler to "break" certain SPEC tests. There are also some shady things like using special heap libraries etc. But cheating aside if the compiler is smart enough to figure out how to translate the source code into AVX512 or SME instructions then the results will show the benefit. I tend to only consider results made using an open source compiler like gcc or clang/llvm with all benchmarks compiled using the same flags and no special libraries used.

So when Geekbench adds direct support for something, like when they added SME last year or AVX512 support in the past, that will "unfairly" boost the results of CPUs that have that capability. But real-world applications will use hand-coded assembly for SIMD-type stuff, so in that respect this is a bit more real world than SPEC.

Each has its advantages and shortcomings. Pick your poison. I will say I mainly look at the results of the compiler subtest in both SPEC and GB6, because it is impossible to game and is "pure" integer code that doesn't benefit much from SIMD. If you care about SIMD results there are SIMD-heavy benchmarks you can look at, ditto for FP.
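
To illustrate the "SPEC is partially a compiler test" point with a toy sketch of my own (not actual SPEC code): the exact same source produces very different machine code depending on the compiler and flags, which is why consistent open-source compilers and flags matter.

```c
/* Toy example, not SPEC source: a plain loop whose codegen depends entirely
 * on the compiler and flags. Built with "gcc -O3 -march=x86-64-v2" it stays
 * at 128-bit SSE; with "-march=x86-64-v4" the same source may be
 * auto-vectorized to 512-bit AVX-512. The source itself never mentions SIMD. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    /* Simple enough for gcc/clang to auto-vectorize at -O3. */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```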
 
  • Like
Reactions: Io Magnesso

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
If you want 6 13mm^2 cores clocking 4 giggleshits max only to get obliterated by Zen 6 when Beast Lake would've launched then be my guest. The cores were too fat for server, too power hungry for mobile, the only market they would've made sense in is the teeny tiny DT and mobile workstation segment, at which point it'd still be suspect. Also, conspiring that it's IDC jewish influence that made Pat axe RYC when he literally started UC after to give the Atom guys the reins is a new level of schizoposting. The core just sucked shit dude, give it up
It's true that it sounds like a conspiracy theory.
 
  • Like
Reactions: Darkmont

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
How is SME bumping ARM scores any different than AVX2 / AVX512 which bump up x86 scores?

I agree SPEC is preferable, but Geekbench isn't designed to measure the same things as SPEC, because it gives out binaries while SPEC gives out source.

SPEC is partially a compiler test since everyone starts with the same source code and it is up to the compiler to produce optimal code for your architecture. That's led to some cheating in the past from Sun and Intel (and I'm sure others) gaming their own compiler to "break" certain SPEC tests. There are also some shady things like using special heap libraries etc. But cheating aside if the compiler is smart enough to figure out how to translate the source code into AVX512 or SME instructions then the results will show the benefit. I tend to only consider results made using an open source compiler like gcc or clang/llvm with all benchmarks compiled using the same flags and no special libraries used.

So when Geekbench adds direct support for something, like when they added SME last year or AVX512 support in the past, that will "unfairly" boost the results of CPUs that have that capability. But real-world applications will use hand-coded assembly for SIMD-type stuff, so in that respect this is a bit more real world than SPEC.

Each has its advantages and shortcomings. Pick your poison. I will say I mainly look at the results of the compiler subtest in both SPEC and GB6, because it is impossible to game and is "pure" integer code that doesn't benefit much from SIMD. If you care about SIMD results there are SIMD-heavy benchmarks you can look at, ditto for FP.
Well, I am aware that both benches have different advantages.
However, in GB, AVX-512 doesn't show as much of an advantage as AVX2…
AVX-512, which should have far more uses than SME does today, isn't doing much to raise the score...
As the name suggests, SME is a matrix extension (although it can be used for other purposes); think of it as an instruction set aimed mainly at machine learning.
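
Roughly speaking, the kind of kernel a matrix unit like SME (or AMX) is built to accelerate looks like the outer-product accumulate below. This is just a plain scalar C sketch of the pattern for illustration, not real SME intrinsics or assembly.

```c
/* Scalar sketch of the outer-product accumulate that matrix engines
 * (SME's ZA tile, Intel AMX tiles) implement in hardware.
 * Real SME code would use streaming-mode intrinsics or assembly instead. */
#define TILE 16

void tile_outer_product_acc(float acc[TILE][TILE],
                            const float a_col[TILE],
                            const float b_row[TILE])
{
    /* acc += a_col * b_row^T : one "instruction" worth of work on a matrix engine. */
    for (int i = 0; i < TILE; i++)
        for (int j = 0; j < TILE; j++)
            acc[i][j] += a_col[i] * b_row[j];
}
```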
 
  • Like
Reactions: Jan Olšan

511

Platinum Member
Jul 12, 2024
2,877
2,887
106
How is SME bumping ARM scores any different than AVX2 / AVX512 which bump up x86 scores?

I agree SPEC is preferable, but Geekbench isn't designed to measure the same things as SPEC, because it gives out binaries while SPEC gives out source.

SPEC is partially a compiler test since everyone starts with the same source code and it is up to the compiler to produce optimal code for your architecture. That's led to some cheating in the past from Sun and Intel (and I'm sure others) gaming their own compiler to "break" certain SPEC tests. There are also some shady things like using special heap libraries etc. But cheating aside if the compiler is smart enough to figure out how to translate the source code into AVX512 or SME instructions then the results will show the benefit. I tend to only consider results made using an open source compiler like gcc or clang/llvm with all benchmarks compiled using the same flags and no special libraries used.

So when Geekbench adds direct support for something, like when they added SME last year or AVX512 support in the past, that will "unfairly" boost the results of CPUs that have that capability. But real-world applications will use hand-coded assembly for SIMD-type stuff, so in that respect this is a bit more real world than SPEC.

Each has its advantages and shortcomings. Pick your poison. I will say I mainly look at the results of the compiler subtest in both SPEC and GB6, because it is impossible to game and is "pure" integer code that doesn't benefit much from SIMD. If you care about SIMD results there are SIMD-heavy benchmarks you can look at, ditto for FP.
Look at the sub-scores. Also, SME is a shared unit, not a per-core unit like AVX/AMX.
Just for reference.
 

branch_suggestion

Senior member
Aug 4, 2023
711
1,534
96
GB6 MT hardly scales above 8 cores; my guess is Nova Lake will lose both ST and MT in this benchmark.
Agreed; I made my MT prediction in CB24 since it is a good bench for Intel.
How is SME bumping ARM scores any different than AVX2 / AVX512 which bump up x86 scores?
SME is a cluster-level accelerator with very limited uses; the most notable is juicing 1T scores in Geekbench.
AVX/SVE is core-level and actually useful in real workloads.
 

poke01

Diamond Member
Mar 8, 2022
3,731
5,076
106
Even better than CB 2024 would be the Blender open benchmark; I'd say AMD will win.
 

reb0rn

Senior member
Dec 31, 2009
300
99
101
Intel will be 52 cores and AMD 24 cores (48 threads), so how can it win in multithreading at all??
 

511

Platinum Member
Jul 12, 2024
2,877
2,887
106
Even better than CB 2024 would be the Blender open benchmark; I'd say AMD will win.
The 9950X is only 6% faster than the 285K, so I doubt it. I'm roughly expecting 1.5-1.6x over the 9950X for 24C Zen 6, so it should be a nice tie.
Screenshot_20250630-215919.png
 

Doug S

Diamond Member
Feb 8, 2020
3,298
5,734
136
Look at the sub-scores. Also, SME is a shared unit, not a per-core unit like AVX/AMX.
Just for reference.

How it is implemented in a particular design is not relevant. Apple could have chosen to make it a part of the core, and another implementation like ARM's or Qualcomm's might choose to do so. For obvious reasons ARM's implementation is very likely to do so.
 

511

Platinum Member
Jul 12, 2024
2,877
2,887
106
How it is implemented in a particular design is not relevant. Apple could have chosen to make it a part of the core, and another implementation like ARM's or Qualcomm's might choose to do so. For obvious reasons ARM's implementation is very likely to do so.
OFC. It is just a glorified accelerator at this point, for a single task; AVX/AVX-512 has so many use cases, unlike this one-use-case unit.
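
For example (my own toy sketch, not from any benchmark): AVX-512's wide compares and mask registers speed up something as mundane as scanning for a byte, nothing ML about it.

```c
/* Toy non-ML AVX-512 use case: find the first occurrence of a byte in a
 * 64-byte block with one 512-bit compare and a mask register.
 * Needs AVX-512F + AVX-512BW (e.g. gcc -O2 -mavx512f -mavx512bw). */
#include <immintrin.h>
#include <stdint.h>

int find_byte_64(const uint8_t *block, uint8_t needle)
{
    __m512i data   = _mm512_loadu_si512(block);            /* load 64 bytes */
    __m512i pat    = _mm512_set1_epi8((char)needle);       /* broadcast needle */
    __mmask64 hits = _mm512_cmpeq_epi8_mask(data, pat);    /* 1 bit per matching byte */
    return hits ? __builtin_ctzll(hits) : -1;              /* index of first match, or -1 */
}
```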
 

DavidC1

Golden Member
Dec 29, 2023
1,616
2,671
96
https://www.anandtech.com/show/2493/4 The concept and design of Atoms go back to 2004 so it's difficult to wave the "less bloat" magic wand. The truth is that the designers at Haifa IDC are having serious skill issues with the area and power usage to performance they're getting. Also most tocks a team will probably look at fat to trim where they see fit and I doubt legacy bits take up any significant area vs. some fat L2 macro or a big BPU.
If you look at the changes the cores go through, even in their best days they weren't able to make as big of a change as the E-cores were able to. The "skill difference" existed even when comparing to the Haifa team in the Core 2 days.
Pathfinding takes many directions, and the AADG was a rather academic bunch, not directly involved in a long line of industry cores like Austin or Haifa. From 2019 the project could've gone in a whole lot of directions before they realized it wouldn't work out.
This is a potential long-term issue, because you need pathfinding teams to try new ideas that you can't risk on soon-to-ship designs. R&D on the surface seems like a waste, but it has value greater than the immediate monetary one.

Intel in the early-to-mid 2000s had tons of research projects going on.
 
Last edited:

Io Magnesso

Senior member
Jun 12, 2025
502
138
71
If you look at the changes the cores go through, even in their best days they weren't able to make as big of a change as the E-cores were able to. The "skill difference" existed even when comparing to the Haifa team in the Core 2 days.

This is a potential long-term issue, because you need pathfinding teams to try new ideas that you can't risk on soon-to-ship designs. R&D on the surface seems like a waste, but it has value greater than the immediate monetary one.

Intel in the early-to-mid 2000s had tons of research projects going on.
That's right, R&D is what leads to the future.