Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Apr 1, 2022
919
834
106
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the Intel 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than D2D, a first for Intel. I'm expecting a launch at Q2/Computex 2026. In case people don't remember Alder Lake-N, I have created a table below to compare the detailed specs of ADL-N and WCL. Just for fun, I'm throwing in LNL and the upcoming MediaTek D9500 SoC.

| | Intel Alder Lake-N | Intel Wildcat Lake | Intel Lunar Lake | MediaTek D9500 |
| --- | --- | --- | --- | --- |
| Launch Date | Q1-2023 | Q2-2026 ? | Q3-2024 | Q3-2025 |
| Model | Intel N300 | ? | Core Ultra 7 268V | Dimensity 9500 5G |
| Dies | 2 | 2 | 2 | 1 |
| Node | Intel 7 + ? | Intel 18A + TSMC N6 | TSMC N3B + N6 | TSMC N3P |
| CPU | 8 E-cores | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | C1 1+3+4 |
| Threads | 8 | 6 | 8 | 8 |
| Max Clock | 3.8 GHz | ? | 5 GHz | |
| L3 Cache | 6 MB | ? | 12 MB | |
| TDP | 7 W | Fanless ? | 17 W | Fanless |
| Memory | 64-bit LPDDR5-4800 | 64-bit LPDDR5-6800 ? | 128-bit LPDDR5X-8533 | 64-bit LPDDR5X-10667 |
| Size | 16 GB | ? | 32 GB | 24 GB ? |
| Bandwidth | | ~55 GB/s | 136 GB/s | 85.6 GB/s |
| GPU | UHD Graphics | ? | Arc 140V | G1 Ultra |
| EU / Xe | 32 EU | 2 Xe | 8 Xe | 12 |
| GPU Max Clock | 1.25 GHz | ? | 2 GHz | |
| NPU | NA | 18 TOPS | 48 TOPS | 100 TOPS ? |
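The bandwidth row is just bus width times transfer rate. A quick sanity check of the memory configs above (the WCL line assumes the rumored 64-bit LPDDR5-6800 config holds):

```python
# Peak theoretical memory bandwidth = (bus width in bytes) x (transfers per second).
def peak_bw_gb_s(bus_width_bits: int, mt_per_s: int) -> float:
    """Return peak bandwidth in GB/s for a given bus width and data rate in MT/s."""
    return bus_width_bits / 8 * mt_per_s / 1000

print(peak_bw_gb_s(128, 8533))   # LNL, 128-bit LPDDR5X-8533  -> 136.528 (~136 GB/s)
print(peak_bw_gb_s(64, 10667))   # D9500, 64-bit LPDDR5X-10667 -> 85.336 (~85.6 GB/s)
print(peak_bw_gb_s(64, 6800))    # WCL?, 64-bit LPDDR5-6800    -> 54.4   (~55 GB/s)
```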






PPT1.jpg
PPT2.jpg
PPT3.jpg



With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new-generation platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, called RibbonFET.



LNL-MX.png
 

Attachments

  • PantherLake.png
  • LNL.png
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg
  • Clockspeed.png

511

Diamond Member
Jul 12, 2024
5,118
4,605
106
Everything sucks about Lion Cove, the only saving factor was that it was made on the most dense node. Otherwise, the PPA would have been so embarrassing, a donkey could have done a better job.
The Lion Cove on LNL is fine though it's the ARL LNC that is the issue
You hear that, Intel HR??? Start looking for donkeys :p

Or maybe don't. You already have a few. Put them to work!
HR are already donkeys; you want them to look for more of their own kind 🤣.
 

DavidC1

Platinum Member
Dec 29, 2023
2,021
3,157
96
The Lion Cove on LNL is fine though it's the ARL LNC that is the issue
I would say 3x core size with 10% gain on a "Big" core is not fine.
Even so, I feel that Lion Cove has become much smaller than Raptor Cove.
Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.
It's like trying to get a child to think better. What is 2+2? 5? Are you sure? Ummm....6? Think harder!
A child will understand new concepts often with a single example, without the arrogance that what they are saying is 100% correct. If you show a 3-year old a cat, they will be able to identify future cats regardless of the color, size, weight.

A glorified comparator doesn't even come close to a child. Even animals have a sort of intuition, of which exactly zero exists in modern "AI".

ChatGPT has billions of parameters, trained with the help of hundreds of millions of users "training" it through Google reCAPTCHAs, and even manual labor, with people in third-world countries being paid human-rights-violating wages and hours classifying what is what. You could change the slightest detail and it'll go from correctly identifying a STOP sign to calling it a squirrel.
 

Doug S

Diamond Member
Feb 8, 2020
3,746
6,613
136
You can't trust "AI" for facts, because it often makes stuff up, but with the arrogance that it's completely right, until it's corrected.*

The problem with AI isn't that it makes things up, whether arrogantly or not. It is that as the consumer of that information you have no cues to help you make a judgment about whether or not to accept the answer.

I recently encountered a Linux system (DD-WRT router) that had fully deprecated 'ifconfig' so I was forced to use the 'ip' command for the first time, and the behavior of the options "change" / "replace" didn't match my expectations. I thought there must be a single command to change the IP address of an interface like there is with ifconfig, but I couldn't figure out how to manage it. So I did a DDG search basically asking "how do I change the IP address of an interface using the Linux ip command" and I tried several links then did the same search with Google and tried a few other links on it. All told me the same unsatisfying answer that it is a two step process, and stupidly that's by design. Just for the hell of it I tried ChatGPT and it told me the same thing, and it gave examples showing the exact command syntax except it said you should remove the old IP first then add the new one! That's stupidly wrong for obvious reasons.

There's an interesting lesson in this. With web search if I get an answer I don't like (as I did in this case) I can check other sources by clicking on other links. When I decide to click on a link I'm making a semi-conscious evaluation based on the URL and the preview text. If I click on it I will make other semi conscious evaluations based on context like what's the purpose of the site this link is taking me to, do I have reason to trust the answer I'm being provided / the person providing it whether that's someone giving their real name or some Reddit moniker. Internet search is part art, you get better at it from doing it - by necessity, as SEOs are always trying to poison the well and trolls will try to mislead people - and if it is a search about something with political overtones, trying to astroturf. Now we're having to learn to discern pages written by AI and figure out how much to discount that.

Asking a question directly to an AI is different. You have no cues, no context. You either accept its answer or don't. The only way you can determine how much credence to give the answer is based on your previous experience with AI. If it has given you a lot of correct answers in the past - or at least answers you have decided to believe are correct - human nature means you're more likely to believe its answers in the future. That's a dangerous game though. Just because it has been right in one domain doesn't mean it will do as well in another, or if you pose it a different type of problem. There's also the "you're too dumb to know when it's wrong" problem - unless you are asking questions you already know the answer to, how do you know it has been right in the past?

The answer it gave me shows the perils. It is almost right - it has the correct information in it, but because it can't reason it doesn't "understand" that removing the IP first is a real problem if that's the only way you have of connecting over the network. Typically you'd use the 'ifconfig' command and in a single step change the IP. It causes your connection to lock up but that's no problem you just reconnect on the new IP. ChatGPT can't reason, so it doesn't know that deleting the IP first will cause your connection to hang and leave you with no way of re-establishing that connection without physical access to the device in question.
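For anyone hitting the same wall: with the two-step `ip` approach, the safe ordering is to add the new address before deleting the old one, so a remote session never loses its only way in. A sketch, with a hypothetical interface name and addresses:

```shell
# Hypothetical example: move eth0 from 192.168.1.10 to 192.168.1.20.
# Add the new address FIRST -- both addresses are active simultaneously.
ip addr add 192.168.1.20/24 dev eth0

# Reconnect over 192.168.1.20, THEN drop the old address.
ip addr del 192.168.1.10/24 dev eth0

# Note: 'ip addr replace' only updates the attributes of a matching
# address (or adds it if absent); it does not remove a different old
# address, which is why it doesn't behave like ifconfig's one-step change.
ip addr replace 192.168.1.20/24 dev eth0
```

These commands need root and a real interface, so treat them as an illustration of the ordering rather than something to paste blindly.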

The best way to describe it is that using AI is like if Google's only option was the "I'm feeling lucky" button.
 

511

Diamond Member
Jul 12, 2024
5,118
4,605
106
I would say 3x core size with 10% gain on a "Big" core is not fine.
Yeah, the only thing it has to show for it is AVX-512 support, but that is fused off (stupid Intel, why can't we have it?).
Ehhhh part of it is frequency targets.
Atom trying to do 5.5-ish will also be xboxhueg (relatively).
It will still be relatively smaller than Lame Cove.
Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.
Yup, the same difference as Intel 14nm to Intel 10nm++; at least Golden Cove was not an embarrassment.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
I would say 3x core size with 10% gain on a "Big" core is not fine.

Raptor is on Intel 7 and Lion Cove is on TSMC N3B. It's like a 2.5x density difference.

A child will understand new concepts often with a single example, without the arrogance that what they are saying is 100% correct. If you show a 3-year old a cat, they will be able to identify future cats regardless of the color, size, weight.

A glorified comparator doesn't even come close to a child. Even animals have sort of intuition which exists exactly zero on modern "AI".

ChatGPT is trained on billions of parameters, with hundreds of millions of users "training" using Google ReCaptchas and even manual labor with people in 3rd world countries being paid human-rights-violating wages and hours classifying what is what. You could change the slightest detail and it'll go from identifying a STOP sign as a squirrel.
It's a little smaller than Redwood Cove...?
Well, I don't know how high the transistor density of the N3B configuration Intel uses actually is...
 

DavidC1

Platinum Member
Dec 29, 2023
2,021
3,157
96
It's a little smaller than the Redwood Cove...?
Well, I don't know how much the transistor density of the N3B used by Intel is...
It's smaller because N3B is denser than Intel 4 on Redwood Cove. Remember it takes Intel 18A to catch up in density.
 

DavidC1

Platinum Member
Dec 29, 2023
2,021
3,157
96
also removal of HT as well
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity in validation, to make everything work without corner-case bugs and errata. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.
The problem with AI isn't that it makes things up, whether arrogantly or not. It is that as the consumer of that information you have no cues to help you make a judgment about whether or not to accept the answer.
Yea, absolutely that matters. With search engines, you can narrow to what you want/need. AI just throws it at you. Come to think of it, it seems to remind me of the mobile era, where it focused on simplicity even at the cost of details. The "hamburger" icon used in sites actually hinders you if you are on a computer because it's an extra step instead of having all the options available immediately.

If I search on a laptop for a definition of a word, it gives me options that are detailed with the etymology of a word and multiple examples, the way to pronounce it, etc. Even the search engine summary is more detailed. On a phone? It gives you just one sentence!

LLMs in a way do the same thing. Now you get one answer! Simple!
 

511

Diamond Member
Jul 12, 2024
5,118
4,605
106
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity in validation, to make everything work without corner case bugs and erratas. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.
Not to mention x86 has only 16 GPRs vs. ARM's 32; also, x86 validation takes additional time, something people in the ARM camp don't have to worry about.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
It's smaller because N3B is denser than Intel 4 on Redwood Cove. Remember it takes Intel 18A to catch up in density.
Right, N3B is much denser, but that's the case with the most dense configuration...
I still don't know what kind of N3B configuration Intel is using in Arrow Lake...
 

poke01

Diamond Member
Mar 8, 2022
4,606
5,916
106
Yeah, non-X3D Ryzen SKUs are dead next gen. Intel has production and creative workflows next gen, 100%.
 

511

Diamond Member
Jul 12, 2024
5,118
4,605
106
I am shocked 52cores are within 150watts of TDP. So let’s add another 100-150 watts and that’s the actual TDP when under full load.
I am fine with 300 W for the 52-core variant; it's reasonable for this one, unlike the 14900K.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
I am shocked 52cores are within 150watts of TDP. So let’s add another 100-150 watts and that’s the actual TDP when under full load.
Considering the total number of cores, it still seems decent.
For the Intel Baseline Profile with the Ultra 9, about 200-250 W at high load?
 

poke01

Diamond Member
Mar 8, 2022
4,606
5,916
106
I’m guessing here but it looks like coyote cove is only on N2 on the Ultra 9 SKU? Rest are 18A?
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
The logic itself is extremely small to enable SMT:

Registers and buffers are essentially caches, at tiny capacities. The commonly quoted number is 3-5% at the core level, so excluding L2 in this case. I think even 3% might be too high.

What really matters is the increased complexity in validation, to make everything work without corner case bugs and erratas. And you have to do that for every new design. I'm 95% convinced that SMT is one reason the x86 vendors are falling further and further behind.

Yea, absolutely that matters. With search engines, you can narrow to what you want/need. AI just throws it at you. Come to think of it, it seems to remind me of the mobile era, where it focused on simplicity even at the cost of details. The "hamburger" icon used in sites actually hinders you if you are on a computer because it's an extra step instead of having all the options available immediately.

If I search on a laptop for a definition of a word, it gives me options that are detailed with the etymology of a word and multiple examples, the way to pronounce it, etc. Even the search engine summary is more detailed. On a phone? It gives you just one sentence!

LLMs in a way do the same thing. Now you get one answer! Simple!
Maybe it's x86's SMT implementation; maybe it's just worse than other companies' SMT implementations.
Even NVIDIA has adopted SMT in its next-generation Vera architecture.
Still, AMD's SMT seems to be working well because it's newer...
So SMT isn't that bad...? It depends on the implementation...?
(Even so, the first Zen generation, AMD's first with SMT, will be 10 years old in two years; the flow of time is cruel.)
 

511

Diamond Member
Jul 12, 2024
5,118
4,605
106
I’m guessing here but it looks like coyote cove is only on N2 on the Ultra 9 SKU? Rest are 18A?
Nope, anything with more than 4+8 is using the 8+16 tile on N2. Also, I think the 6+8 U5 is incorrect and should be 4+8. And Coyote Cove is on both 18AP and N2.
I'm getting major AdoredTV vibes.

But hey, bring on all the free cores!
Adored has been MIA
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
I’m guessing here but it looks like coyote cove is only on N2 on the Ultra 9 SKU? Rest are 18A?
There may be a possibility that an 8P/16E die will also be manufactured on Intel 18A.
It may be possible to build the multi-die parts with the 18A version of the die too.
The 8P/16E die may be produced by both IFS and TSMC.
 
  • Like
Reactions: poke01