Discussion Intel Meteor, Arrow, Lunar & Panther Lakes Discussion Threads


Tigerick

Senior member
Apr 1, 2022



With Hot Chips 34 starting this week, Intel will unveil technical information about the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



Comparison of Intel's upcoming U-series CPUs: Core Ultra 100U, Lunar Lake, and Panther Lake

| Model | Code Name | Date | TDP | Node | Tiles | Main Tile | CPU | LP E-Cores | LLC | GPU | Xe-cores |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Core Ultra 100U | Meteor Lake | Q4 2023 | 15 - 57 W | Intel 4 + N5 + N6 | 4 | tCPU | 2P + 8E | 2 | 12 MB | Intel Graphics | 4 |
| ? | Lunar Lake | Q4 2024 | 17 - 30 W | N3B + N6 | 2 | CPU + GPU & IMC | 4P + 4E | 0 | 12 MB | Arc | 8 |
| ? | Panther Lake | Q1 2026 ? | ? | Intel 18A + N3E | 3 | CPU + MC | 4P + 8E | 4 | ? | Arc | 12 |



Comparison of the die size of each tile of Meteor Lake, Arrow Lake, Lunar Lake, and Panther Lake

| | Meteor Lake | Arrow Lake (N3B) | Lunar Lake | Panther Lake |
| --- | --- | --- | --- | --- |
| Platform | Mobile H/U only | Desktop & Mobile H/HX | Mobile U only | Mobile H |
| Process Node | Intel 4 | TSMC N3B | TSMC N3B | Intel 18A |
| Date | Q4 2023 | Desktop: Q4 2024; H/HX: Q1 2025 | Q4 2024 | Q1 2026 ? |
| Full Die | 6P + 8E | 8P + 16E | 4P + 4E | 4P + 8E |
| LLC | 24 MB | 36 MB ? | 12 MB | ? |
| tCPU (mm²) | 66.48 | ? | ? | ? |
| tGPU (mm²) | 44.45 | ? | ? | ? |
| SoC (mm²) | 96.77 | ? | ? | ? |
| IOE (mm²) | 44.45 | ? | ? | ? |
| Total (mm²) | 252.15 | ? | ? | ? |

[Image: LNL-MX.png]

Intel Core Ultra 100 - Meteor Lake

[Image: Intel Core Ultra 100 (Meteor Lake) official slide]

As mentioned by Tom's Hardware, TSMC will manufacture the I/O, SoC, and GPU tiles. That means Intel will manufacture only the CPU and Foveros tiles. (Notably, Intel calls the I/O tile an 'I/O Expander,' hence the IOE moniker.)



[Image: Clockspeed.png]
 

Attachments: PantherLake.png, LNL.png

SpudLobby

Golden Member
May 18, 2022
So what do we know about the low-power E-cores on MTL other than:
- N6, separate tile from the compute tile
- Two of them
- Likely Crestmont
- Visible to the OS

Specifically:

Is this going to be A) only for idle power savings when the laptop is shut and in deep sleep - since powering down the compute tile is then possible, and basic networking updates can run on the modestly performing Crestmont cores built on (presumably) low-leakage N6 libraries?

Or B) do we have any indication these can be utilized in daily, active PC use - say, when browsing below some compute or interactivity threshold, or reading a PDF - to preserve power efficiency (maybe as an optional mode, or autonomously via firmware) by avoiding all the Ring/Uncore power in the compute tile? But then context switches are going to take longer, and there are other processes that would have to run somewhere.

Because being exposed to the firmware and OS is, in and of itself, perfectly compatible with using this for A) only; but B) is where things actually get interesting, and where Intel could save much more power on simple tasks, if they have a way to pull off the migration and/or intelligent switching.
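
To make B) concrete: if the LP E-cores really are OS-visible, steering work onto them is in principle just an affinity question. Here's a minimal Linux-only sketch - with the big assumption that the LP E-cores would get enumerated under the same hybrid cpu_atom sysfs node as the regular E-cores, which nobody has confirmed - of pinning a light background process to them:

```python
import os

# Minimal sketch (Linux-only). Assumption: the kernel lists hybrid Atom-class
# cores under /sys/devices/cpu_atom/cpus, as it does for current E-cores;
# whether MTL's LP E-cores would appear there too is an open question.

def parse_cpu_list(s: str) -> set[int]:
    """Parse a kernel CPU list like '12-13' or '0,2,4-7' into a set of ints."""
    cpus: set[int] = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

with open("/sys/devices/cpu_atom/cpus") as f:  # present on hybrid Intel parts
    e_cores = parse_cpu_list(f.read())

print("E-core logical CPUs:", sorted(e_cores))
os.sched_setaffinity(0, e_cores)  # pin this process to those cores only
# ...run the light background work here, leaving the big cores free to sleep...
```

The hard part Intel/Microsoft would have to solve is exactly what this sketch ignores: deciding when to migrate, and eating the wake-up latency of the compute tile when the user comes back.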
 

eek2121

Diamond Member
Aug 2, 2005
So what do we know about the low-power E-cores on MTL other than:
- N6, separate tile from the compute tile
- Two of them
- Likely Crestmont
- Visible to the OS

Specifically:

Is this going to be A) only for idle power savings when the laptop is shut and in deep sleep - since powering down the compute tile is then possible, and basic networking updates can run on the modestly performing Crestmont cores built on (presumably) low-leakage N6 libraries?

Or B) do we have any indication these can be utilized in daily, active PC use - say, when browsing below some compute or interactivity threshold, or reading a PDF - to preserve power efficiency (maybe as an optional mode, or autonomously via firmware) by avoiding all the Ring/Uncore power in the compute tile? But then context switches are going to take longer, and there are other processes that would have to run somewhere.

Because being exposed to the firmware and OS is, in and of itself, perfectly compatible with using this for A) only; but B) is where things actually get interesting, and where Intel could save much more power on simple tasks, if they have a way to pull off the migration and/or intelligent switching.
Honestly, all we know is that the cores appear to be visible to the OS.

If Intel decides to let them be used without restriction, I can see latency being an issue that could cause regressions in some workloads. However, other workloads will see a bit of a boost.

If those two cores are on the SoC or another chip that allows/encourages the CPU “chiplet” to power down, that will be pretty amazing. Everything from viewing documents to Netflix n chill will benefit. Even certain games would benefit. Shoot, I will speculate that the majority of tasks users perform on their computers could be powered by those two little cores.

Last night I watched YouTube in one tab and browsed the internet in another for two hours; my machine (a desktop) consistently used over 100 W. A setup like that could potentially cut that down to under 10 W.
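
If anyone wants to sanity-check numbers like my 100 W browsing session, here's a rough sketch reading the RAPL energy counter on Linux - assuming the intel_rapl driver is loaded and the package domain sits at intel-rapl:0, which can vary by system (and usually needs root to read):

```python
import time

# Rough CPU package-power sampler via RAPL (Linux, Intel). Assumption: the
# package domain lives at intel-rapl:0; the path can differ per system.
# energy_uj is a cumulative counter in microjoules, so average power is just
# delta-energy over delta-time. (The counter wraps eventually; a short
# sample window like this one is fine.)
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(5.0)  # sample window
e1, t1 = read_uj(), time.time()

watts = (e1 - e0) / 1e6 / (t1 - t0)
print(f"Average package power: {watts:.1f} W")
```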
 
Jul 27, 2020
Two cores are still too few to be usable on their own. They can only be useful when the Windows desktop is locked and no user activity is taking place. Otherwise, the two cores will just get bogged down if all essential threads are forced to run on them.
 

Khato

Golden Member
Jul 15, 2001
Two cores are still too few to be usable on their own. They can only be useful when the Windows desktop is locked and no user activity is taking place. Otherwise, the two cores will just get bogged down if all essential threads are forced to run on them.
Two cores remain perfectly adequate for the majority of non-compute usage: effectively one core for all background tasks and the other for the current user task. Outside of actual compute tasks, the only times I recall dual-core computers feeling slow compared to quad-core were on startup and when silly antivirus background scans ran. Dual core will be more than adequate for the majority of web browsing and video playback (assuming GPU decode, of course).
 

eek2121

Diamond Member
Aug 2, 2005
Two cores remain perfectly adequate for the majority of non-compute usage: effectively one core for all background tasks and the other for the current user task. Outside of actual compute tasks, the only times I recall dual-core computers feeling slow compared to quad-core were on startup and when silly antivirus background scans ran. Dual core will be more than adequate for the majority of web browsing and video playback (assuming GPU decode, of course).
…and Microsoft Defender now offloads to the GPU.
 

dullard

Elite Member
May 21, 2001
Is this a Raptor Lake refresh? 1 P-core and 4 E-cores..
The U300 is Intel's cheapest, bottom-of-the-barrel mobile chip, launched in January 2023.
It is Raptor Lake in its worst form: fewest cores, lowest speed, fewest GPU execution units, smallest cache, etc.
 

poke01

Diamond Member
Mar 8, 2022
It's kind of been the opposite though. They've cut their GPU roadmap immensely. And who knows where they'll be with AI accelerators. The rest doesn't really need cutting edge lithography.
Intel first needs to get their CPUs in order. With Arrow Lake and Panther Lake, the outcome will be seen. They need income rolling in, and then they can start expanding into other areas again.
 

Exist50

Platinum Member
Aug 18, 2016
Intel first needs to get their CPUs in order. With Arrow Lake and Panther Lake, the outcome will be seen. They need income rolling in, and then they can start expanding into other areas again.
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they only start actually investing then, add another 3 years or so (optimistically) to get something on the market. So, like 2030 before they have a serious graphics and AI offering? No, that's too late.
 

poke01

Diamond Member
Mar 8, 2022
3,427
4,702
106
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they only start actually investing then, add another 3 years or so (optimistically) to get something on the market. So, like 2030 before they have a serious graphics and AI offering? No, that's too late.
True, ATL won't be good for Intel financially, but it's a needed step. It's a way to hedge Intel's bets if their own fabs fail at 20A.
Not only that, Intel needs to move quickly from Intel 7 to a better node.

dGPUs aren't what Intel should focus on; rather, they should advance their iGPU arch from ATL and beyond. Premium laptops will ditch Nvidia if Intel makes competent GPUs for their SoCs. Intel also needs to improve their VPUs every generation.

The next couple of years will be interesting.
 

Ajay

Lifer
Jan 8, 2001
I don't think they have that luxury. Best case scenario, you're looking at a comeback with SRF, GNR, and PTL (ARL isn't going to be great for their financials with all the TSMC silicon). So that's 2025, maybe 2026 before anything starts showing up on the books? If they only start actually investing then, add another 3 years or so (optimistically) to get something on the market. So, like 2030 before they have a serious graphics and AI offering? No, that's too late.
Intel's graphics/HPC-AI roadmap seems pretty opaque to me right now.
 

Exist50

Platinum Member
Aug 18, 2016
dGPUs aren't what Intel should focus on; rather, they should advance their iGPU arch from ATL and beyond.
The two are tied at the hip. Plus there's AI, which should heavily leverage the GPU IP. At least for the server-scale stuff.
Intel's graphics/HPC-AI roadmap seems pretty opaque to me right now.
Because it's functionally non-existent beyond Battlemage. I still don't believe a word about Falcon Shores either.
 

ashFTW

Senior member
Sep 21, 2020
Intel can focus on HPC/AI/ML software for the next few years. I think that's the real differentiator versus NVIDIA. Then meet it with cutting-edge hardware on an advanced process down the road. If investment options are limited, something has got to give.
 

Geddagod

Golden Member
Dec 28, 2021
Intel can focus on HPC/AI/ML software for the next few years. I think that's the real differentiator versus NVIDIA. Then meet it with cutting-edge hardware on an advanced process down the road. If investment options are limited, something has got to give.
I think the node thing is overstated. Intel has shown they are willing to buy TSMC wafers if their own nodes aren't good enough. I feel like the bulk of Intel's reluctance to go 'deep' into graphics is the level of investment they would have to commit, financially and in manpower, on the design side.
If their designs were good enough, they should be perfectly willing to reserve a bunch of wafers for future AI GPUs, since selling those seems to be essentially just printing money.
Maybe Intel commits once again when they manage to stop the server market share bleeding in 2024 and 2025 (hopefully).
 

poke01

Diamond Member
Mar 8, 2022
The two are tied at the hip. Plus there's AI, which should heavily leverage the GPU IP. At least for the server-scale stuff.
I would disagree. Intel does not need dGPUs, but their future SoCs require massive GPU cores, and if they don't have anything beyond Battlemage, then everything after Panther Lake will fail. Intel will be selling the whole package just like Apple does, so people will expect improvements in every area of the SoC.
 

Ajay

Lifer
Jan 8, 2001
I would disagree. Intel does not need dGPUs, but their future SoCs require massive GPU cores, and if they don't have anything beyond Battlemage, then everything after Panther Lake will fail. Intel will be selling the whole package just like Apple does, so people will expect improvements in every area of the SoC.
If Intel doesn't have high-performance AI/ML accelerators on the market going forward, they will be missing out on a huge market with great margins.
 

poke01

Diamond Member
Mar 8, 2022
If Intel doesn't have high-performance AI/ML accelerators on the market going forward, they will be missing out on a huge market with great margins.
Oh, they will, especially in Panther Lake. MTL will start the era of AI and ML engines for Intel, and Arrow Lake will massively improve on them.
 

poke01

Diamond Member
Mar 8, 2022
What? No. Big chonking things like Nvidia's H100s and AMD's MI300s. Not CPUs.
Isn't this a thread for consumer products? That's what ARL and MTL will be.

Intel can scale the tech up if they can get these products out as expected.
 

aigomorla

CPU, Cases & Cooling Mod, PC Gaming Mod, Elite Member
Super Moderator
Sep 28, 2005
atl?

atlantis lake?
Can someone tell me how Meteor Lake is MTL instead of just ML.....
MTL reminds me of Machine Translation Language.

Who's the guy that decided to put a T in the middle, like how they put a P in SPR for Sapphire Rapids?
Can someone tell him that's not how acronyms work.

They gonna call Raptor Lake... RPL? Oh wait, is it even called that?