Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Apr 1, 2022
851
802
106
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than D2D, a first for Intel. Expect a launch around Q2/Computex 2026. In case people don't remember Alder Lake-N, I have created a table below to compare the detailed specs of ADL-N and WCL. Just for fun, I am throwing in LNL and the upcoming MediaTek D9500 SoC.

                | Intel Alder Lake-N | Intel Wildcat Lake       | Intel Lunar Lake         | MediaTek D9500
----------------+--------------------+--------------------------+--------------------------+---------------------
Launch Date     | Q1-2023            | Q2-2026 ?                | Q3-2024                  | Q3-2025
Model           | Intel N300         | ?                        | Core Ultra 7 268V        | Dimensity 9500 5G
Dies            | 2                  | 2                        | 2                        | 1
Node            | Intel 7 + ?        | Intel 18A + TSMC N6      | TSMC N3B + N6            | TSMC N3P
CPU             | 8 E-cores          | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | C1 1+3+4
Threads         | 8                  | 6                        | 8                        | 8
Max Clock (CPU) | 3.8 GHz            | ?                        | 5 GHz                    |
L3 Cache        | 6 MB               | ?                        | 12 MB                    |
TDP             | 7 W                | Fanless ?                | 17 W                     | Fanless
Memory          | 64-bit LPDDR5-4800 | 64-bit LPDDR5-6800 ?     | 128-bit LPDDR5X-8533     | 64-bit LPDDR5X-10667
Memory Size     | 16 GB              | ?                        | 32 GB                    | 24 GB ?
Bandwidth       |                    | ~55 GB/s                 | 136 GB/s                 | 85.6 GB/s
GPU             | UHD Graphics       |                          | Arc 140V                 | G1 Ultra
EU / Xe cores   | 32 EU              | 2 Xe                     | 8 Xe                     | 12
Max Clock (GPU) | 1.25 GHz           |                          | 2 GHz                    |
NPU             | N/A                | 18 TOPS                  | 48 TOPS                  | 100 TOPS ?
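For anyone wanting to sanity-check the bandwidth row: peak LPDDR bandwidth is just data rate (MT/s) times bus width in bytes. A minimal back-of-the-envelope sketch in Python, using the (partly rumored) memory configs from the table above:

```python
# Peak theoretical bandwidth (GB/s) = data rate (MT/s) * bus width (bytes) / 1000.
# Memory configs are copied from the table above; the WCL entry is rumored.
def peak_bw_gbps(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000

configs = {
    "ADL-N  64-bit LPDDR5-4800":    (4800, 64),
    "WCL    64-bit LPDDR5-6800 ?":  (6800, 64),
    "LNL   128-bit LPDDR5X-8533":   (8533, 128),
    "D9500  64-bit LPDDR5X-10667":  (10667, 64),
}

for name, (rate, width) in configs.items():
    print(f"{name}: ~{peak_bw_gbps(rate, width):.1f} GB/s")
```

That works out to roughly 38 GB/s for ADL-N, 54 GB/s for the rumored WCL config, 136 GB/s for LNL, and 85 GB/s for the D9500, which lines up with the bandwidth row.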






[Attached slides: PPT1.jpg, PPT2.jpg, PPT3.jpg]

With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024, at least according to Intel's roadmap. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



[Image: LNL-MX.png]

Attachments: PantherLake.png, LNL.png, INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg, Clockspeed.png

SpudLobby

Golden Member
May 18, 2022
1,041
702
106
So what do we know about the low power E Cores on MTL other than
- N6, separate tile from the compute tile
- Two of them
- Likely Crestmont
- Visible to the OS

Specifically:

Is this going to be A) only for idle power savings when the laptop is shut and in deep sleep - since powering down the compute tile is then possible and basic networking updates can run through modestly performing Crestmont cores on (presumably) low leakage N6 libraries?

Or B) do we have any indication these can be utilized in daily, active PC use - say, when browsing below some compute or interactivity threshold, or reading a PDF - in order to preserve power efficiency (maybe as an optional mode, or autonomously via firmware) by avoiding all the Ring/Uncore power in the compute tile? But then context switches are going to take longer, and there are other processes that would have to run somewhere.

Because being exposed to the firmware and OS is, in and of itself, perfectly compatible with utilizing this for A) only, but B) is where things actually get interesting and where Intel could save much more power on some simple tasks, if they have a way to pull off the migration and/or intelligent switching.
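
To make option B concrete: nothing is confirmed about how MTL's LP E-cores will be enumerated or scheduled, but the crude user-space version of that steering already exists today as affinity masks. A Linux-only sketch, with hypothetical logical-CPU IDs standing in for the two SoC-tile cores:

```python
import os

# Purely illustrative: we don't know how the OS will actually expose or schedule
# MTL's LP E-cores. This Linux-only sketch shows what steering a light background
# task onto a chosen core subset looks like from user space today.
LP_E_CORES = {6, 7}  # hypothetical IDs; the real enumeration is unknown

visible = os.sched_getaffinity(0)   # logical CPUs the OS exposes to this process
print(f"CPUs visible to this process: {sorted(visible)}")

target = LP_E_CORES & visible
if target:
    os.sched_setaffinity(0, target)  # restrict this process to the chosen cores
    print(f"Now restricted to: {sorted(os.sched_getaffinity(0))}")
else:
    print("Hypothetical LP E-core IDs not present on this machine")
```

The interesting question is whether Thread Director and the OS scheduler would do that kind of steering automatically and transparently, rather than leaving it to explicit affinity hints like this.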
 

eek2121

Diamond Member
Aug 2, 2005
3,415
5,054
136
So what do we know about the low power E Cores on MTL other than
- N6, separate tile from the compute tile
- Two of them
- Likely Crestmont
- Visible to the OS

Specifically:

Is this going to be A) only for idle power savings when the laptop is shut and in deep sleep - since powering down the compute tile is then possible and basic networking updates can run through modestly performing Crestmont cores on (presumably) low leakage N6 libraries?

Or B) do we have any indication these can be utilized in daily, active PC use - say, when browsing below some compute or interactivity threshold, or reading a PDF - in order to preserve power efficiency (maybe as an optional mode, or autonomously via firmware) by avoiding all the Ring/Uncore power in the compute tile? But then context switches are going to take longer, and there are other processes that would have to run somewhere.

Because being exposed to the firmware and OS is, in and of itself, perfectly compatible with utilizing this for A) only, but B) is where things actually get interesting and where Intel could save much more power on some simple tasks, if they have a way to pull off the migration and/or intelligent switching.
Honestly all we know is that the cores appear to be visible to the OS.

If Intel decides to let them be used without restriction, I can see latency being an issue that could cause regression in some workloads. However, other workloads will see a bit of a boost.

If those two cores are on the SoC or another chip that allows/encourages the CPU “chiplet” to power down, that will be pretty amazing. Everything from viewing documents to Netflix n chill will benefit. Even certain games would benefit. Shoot, I will speculate that the majority of all tasks users use their computer for could be powered by those two little cores.


Last night I watched YouTube in one tab and browsed the internet in another for two hours; my machine (a desktop) consistently used over 100 W. A setup like that could potentially cut that down to under 10 W.
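
For scale, a rough energy comparison for that two-hour session, using my self-reported ~100 W figure against a hypothetical sub-10 W low-power mode:

```python
# Rough energy math for a 2-hour light-use session:
# ~100 W desktop today vs. a hypothetical <10 W low-power mode.
hours = 2
desktop_watts, low_power_watts = 100, 10

desktop_wh = desktop_watts * hours        # 200 Wh
low_power_wh = low_power_watts * hours    # 20 Wh
saving_pct = 100 * (1 - low_power_wh / desktop_wh)
print(f"{desktop_wh} Wh vs {low_power_wh} Wh (~{saving_pct:.0f}% less energy)")
```

That is roughly a 90% reduction for that kind of session, ignoring display and other platform power that wouldn't change.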
 
Jul 27, 2020
28,098
19,171
146
Two cores is still too low for being usable on its own. They can only be useful when the Windows desktop is locked and there is no user activity taking place. Otherwise, the two cores will just get bogged down if all essential threads are forced to run on them.
 

Khato

Golden Member
Jul 15, 2001
1,279
361
136
Two cores is still too low for being usable on its own. They can only be useful when the Windows desktop is locked and there is no user activity taking place. Otherwise, the two cores will just get bogged down if all essential threads are forced to run on them.
Two cores remain perfectly adequate for the majority of non-compute usage. Effectively one core for all background tasks and the other for current user task. Outside of actual compute tasks, the only times that I recall dual core computers feeling slow compared to quad core were on startup and when silly antivirus background scans were run. Dual core will be more than adequate for the majority of web browsing and video playback (assuming GPU decode, of course.)
 

eek2121

Diamond Member
Aug 2, 2005
3,415
5,054
136
Two cores remain perfectly adequate for the majority of non-compute usage. Effectively one core for all background tasks and the other for current user task. Outside of actual compute tasks, the only times that I recall dual core computers feeling slow compared to quad core were on startup and when silly antivirus background scans were run. Dual core will be more than adequate for the majority of web browsing and video playback (assuming GPU decode, of course.)
…and Microsoft Defender now offloads to the GPU.
 

dullard

Elite Member
May 21, 2001
26,022
4,636
126
Is this a Raptor Lake refresh? 1 P-core and 4 E-cores...
The U300 is Intel's cheapest bottom-of-the-barrel mobile chip launched in January 2023.
It is Raptor Lake in its worst form. Fewest cores, lowest speed, fewest GPU execution units, lowest cache, etc.
 

poke01

Diamond Member
Mar 8, 2022
4,243
5,578
106
It's kind of been the opposite though. They've cut their GPU roadmap immensely. And who knows where they'll be with AI accelerators. The rest doesn't really need cutting edge lithography.
Intel first needs to get their CPUs in order. With Arrow Lake and Panther Lake the outcome will be seen. They need income rolling in and then they can start expanding again to other areas.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,106
136
Intel first needs to get their CPUs in order. With Arrow Lake and Panther Lake the outcome will be seen. They need income rolling in and then they can start expanding again to other areas.
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they start actually investing then, then add another 3 years or so (optimistically) to get something on the market. So like 2030 before they have a serious graphics and AI offering? No, that's too late.
 

poke01

Diamond Member
Mar 8, 2022
4,243
5,578
106
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they start actually investing then, then add another 3 years or so (optimistically) to get something on the market. So like 2030 before they have a serious graphics and AI offering? No, that's too late.
True, ATL won't be good financially for Intel, but it's a needed step. It's a way to keep Intel's products on track if their own fabs fail at 20A.
Not only that, Intel needs to move quickly from Intel 7 to a better node.

dGPUs aren't what Intel should focus on; rather, they should be advancing their iGPU arch from ATL and beyond. Premium laptops will ditch Nvidia if Intel makes competent GPUs for their SoCs. Intel also needs to improve their VPUs every generation as well.

The next couple of years will be interesting.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they start actually investing then, then add another 3 years or so (optimistically) to get something on the market. So like 2030 before they have a serious graphics and AI offering? No, that's too late.
Intel's graphics/HPC-AI roadmap seems pretty opaque to me right now.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,106
136
dGPUs aren't what Intel should focus on; rather, they should be advancing their iGPU arch from ATL and beyond.
The two are tied at the hip. Plus there's AI, which should heavily leverage the GPU IP. At least for the server-scale stuff.
Intel's graphics/HPC-AI roadmap seems pretty opaque to me right now.
Because it's functionally non-existent beyond Battlemage. I still don't believe a word about Falcon Shores either.
 

ashFTW

Senior member
Sep 21, 2020
325
247
126
Intel can focus on HPC/AI/ML software for the next few years. I think that's the real differentiator versus NVIDIA. Then meet it with cutting-edge hardware on an advanced process down the road. If investment options are limited, something has got to give.
 

Geddagod

Golden Member
Dec 28, 2021
1,531
1,625
106
Intel can focus on HPC/AI/ML software for the next few years. I think that's the real differentiator versus NVIDIA. Then meet it with cutting-edge hardware on an advanced process down the road. If investment options are limited, something has got to give.
I think the node thing is overstated. Intel has shown they are willing to buy TSMC wafers if their own nodes aren't good enough. I feel like the brunt of Intel's reluctance to go 'deep' into graphics is the level of investment they would have to commit to, financially and in manpower, on the design side.
If their designs were good enough, they should be perfectly willing to reserve a bunch of wafers in the future for AI GPUs, since selling those seems to be essentially just printing money.
Maybe Intel commits once again when they manage to stop the server market share bleeding in 2024 and 2025 (hopefully).
 

poke01

Diamond Member
Mar 8, 2022
4,243
5,578
106
The two are tied at the hip. Plus there's AI, which should heavily leverage the GPU IP. At least for the server-scale stuff
I would disagree. Intel does not need dGPUs; their future SoCs require massive GPU cores, and if they don't have anything beyond Battlemage, then everything after Panther Lake will fail, because Intel will be selling the whole package just like Apple does, and people will expect improvements in every area of the SoC.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
I would disagree. Intel does not need dGPUs; their future SoCs require massive GPU cores, and if they don't have anything beyond Battlemage, then everything after Panther Lake will fail, because Intel will be selling the whole package just like Apple does, and people will expect improvements in every area of the SoC.
If Intel doesn't have high performance AI/ML accelerators on the market going forward, they will be missing out on a huge market with great margins.
 

poke01

Diamond Member
Mar 8, 2022
4,243
5,578
106
If Intel doesn't have high performance AI/ML accelerators on the market going forward, they will be missing out on a huge market with great margins.
Oh they will, especially in Panther Lake. MTL will start the era of AI and ML engines for Intel, and Arrow Lake will massively improve on them.
 

poke01

Diamond Member
Mar 8, 2022
4,243
5,578
106
What? No. Big chonking things like Nvidia's H100s and AMD's MI300s. Not CPUs.
Isn't this a thread for consumer products? That's what ARL and MTL will be.

Intel can upscale the tech if they can get these products out as expected.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,067
3,574
126
atl?

atlantis lake?
Can someone tell me how Meteor Lake is MTL instead of just ML?
MTL reminds me of Machine Translation Language.

Who's the guy that decided to put the T in the middle, like how they put the P in SPR for Sapphire Rapids?
Can someone tell him that's not how acronyms work.

Are they gonna call Raptor Lake... RPL? Oh wait, is it even called that?