Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Apr 1, 2022
941
857
106
Wildcat Lake (WCL) Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing Raptor Lake-U. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the Intel 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The two tiles are connected through UCIe rather than D2D, a first from Intel. Expecting a launch in Q1 2026.

Spec          | Intel Raptor Lake U    | Intel Wildcat Lake 15W?   | Intel Lunar Lake         | Intel Panther Lake 4+0+4
Launch Date   | Q1-2024                | Q2-2026                   | Q3-2024                  | Q1-2026
Model         | Intel 150U             | Intel Core 7              | Core Ultra 7 268V        | Core Ultra 7 365
Dies          | 2                      | 2                         | 2                        | 3
Node          | Intel 7 + ?            | Intel 18A + TSMC N6       | TSMC N3B + N6            | Intel 18A + Intel 3 + TSMC N6
CPU           | 2 P-cores + 8 E-cores  | 2 P-cores + 4 LP E-cores  | 4 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores
Threads       | 12                     | 6                         | 8                        | 8
Max CPU Clock | 5.4 GHz                | ?                         | 5 GHz                    | 4.8 GHz
L3 Cache      | 12 MB                  | ?                         | 12 MB                    | 12 MB
TDP           | 15 - 55 W              | 15 W ?                    | 17 - 37 W                | 25 - 55 W
Memory        | 128-bit LPDDR5-5200    | 64-bit LPDDR5             | 128-bit LPDDR5X-8533     | 128-bit LPDDR5X-7467
Max Capacity  | 96 GB                  | ?                         | 32 GB                    | 128 GB
Bandwidth     | ?                      | ?                         | 136 GB/s                 | ?
GPU           | Intel Graphics         | Intel Graphics            | Arc 140V                 | Intel Graphics
Ray Tracing   | No                     | No                        | Yes                      | Yes
EU / Xe       | 96 EU                  | 2 Xe                      | 8 Xe                     | 4 Xe
Max GPU Clock | 1.3 GHz                | ?                         | 2 GHz                    | 2.5 GHz
NPU           | GNA 3.0                | 18 TOPS                   | 48 TOPS                  | 49 TOPS
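For reference, the bandwidth row can be sanity-checked from the memory configs above: peak bandwidth is just bus width (in bytes) times transfer rate. A minimal Python sketch, assuming the configs listed in the table (the 5200 MT/s figure for WCL is my assumption; the table only says 64-bit LPDDR5):

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate (MT/s).
# Configs are taken from the table above; figures are theoretical peaks, not measured numbers.

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfer_rate_mtps / 1000

configs = {
    "Raptor Lake-U (128-bit LPDDR5-5200)":  (128, 5200),
    "Wildcat Lake  (64-bit LPDDR5-5200?)":  (64, 5200),   # WCL speed unconfirmed; 5200 is an assumption
    "Lunar Lake    (128-bit LPDDR5X-8533)": (128, 8533),
    "Panther Lake  (128-bit LPDDR5X-7467)": (128, 7467),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.1f} GB/s")

# Lunar Lake works out to ~136.5 GB/s, matching the 136 GB/s row in the table.
```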









With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first from Intel. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024, which is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, called RibbonFET.



 

Attachments

  • PantherLake.png
  • LNL.png
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg
  • Clockspeed.png
Last edited:

DavidC1

Platinum Member
Dec 29, 2023
2,166
3,318
106
I didn't know fat iGPUs can suddenly be so much cheaper that putting them on i3s keeps them at i3 pricing
No, you save money by not being forced to buy Ultra 7 or Ultra 9 CPUs when you want the top GPU config. For the people who want graphics performance, the faster CPUs are just a waste of money. It's like being forced to buy a new platform rather than just swapping out the GPU in your desktop.
 
  • Like
Reactions: hemedans

511

Diamond Member
Jul 12, 2024
5,430
4,853
106
PTL will certainly match or beat ARL in ST perf without regression (0-5%), jmho.
 
  • Like
Reactions: DKR

LightningZ71

Platinum Member
Mar 10, 2017
2,690
3,390
136
People on here acting like 1T performance doesn't matter for games...

An i3-binned processor is going to be down almost a full GHz, if not more, from the i9 bin for the best core. At the necessarily lower resolutions that iGPU gaming will require, that 1T performance will still be an issue, or do you REALLY want bad 1% lows?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,599
740
126
No, because a larger iGPU is a cost adder that no one wants.
Using an iGPU will be cheaper than adding a corresponding separate dGPU. The customer can select whatever level of iGPU perf they want; e.g., Panther Lake will provide anything from 4 to 12 Xe3 cores. Everyone will be happy.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,599
740
126
Oh they do. But they don't want to pay for it.
Well, they have the freedom to select whatever level of iGPU performance they want, anything from 4 to 12 Xe3 cores. Whatever choice they make, it'll be cheaper than the corresponding dGPU.
 
Last edited:

Josh128

Golden Member
Oct 14, 2022
1,530
2,289
106
Valley has been an obsolete benchmark like forever now.
Not for mobile iGPUs it isn't. I just ran it 3 days ago on a 12 CU Ryzen 7 6800H in a Beelink NUC Win 11 PC, and at max power it struggled to hit 30 fps at 1080p low or medium with no AA. I was actually expecting much more from a Zen 3+ / 12 CU RDNA 2 chip, but it very much struggled and dropped into the mid-to-low 20s quite often.
 

Tup3x

Golden Member
Dec 31, 2016
1,301
1,438
136
Not for mobile iGPUs it isn't. I just ran it 3 days ago on a 12 CU Ryzen 7 6800H in a Beelink NUC Win 11 PC, and at max power it struggled to hit 30 fps at 1080p low or medium with no AA. I was actually expecting much more from a Zen 3+ / 12 CU RDNA 2 chip, but it very much struggled and dropped into the mid-to-low 20s quite often.
I would be surprised if any modern GPU has any optimisations for it. It's an obsolete piece of software that hardly tells you anything about real-world gaming performance.
 

Josh128

Golden Member
Oct 14, 2022
1,530
2,289
106
I would be surprised if any modern GPU has any optimisations for it. It's an obsolete piece of software that hardly tells you anything about real-world gaming performance.
?? My 6700 XT is older than the 6800H and scores over 12x the fps using the same settings. Optimizations have nothing to do with it; the 6700 XT and 6800H are the same GPU architecture, and the 6800H is newer.
 
  • Like
Reactions: lightmanek

Khato

Golden Member
Jul 15, 2001
1,370
477
136
OEMs like options that increase their ASPs without requiring additional investment. Discrete GPUs fall into this category because they can design a laptop to accept a standardized discrete GPU card and then sell that laptop with iGPU only or 3-4 levels of dGPU card. Slotted memory and SSDs fall into the same category - no additional costs, just pure ASP increase for the upgrades. Such is much of the reason why the OEMs only begrudgingly offered LNL - it cut out one of their favorite money makers.

Now with respect to PTL iGPU options, OEMs like it because it doesn't really intersect with their dGPU offerings. Instead it's allowing them an additional configuration option in all the non-dGPU categories. You want your ultralight 13" to have better graphics? Pay an extra $50 to get the upgraded iGPU. With PTL Intel will let OEMs offer anything from 4+0+4+4Xe to 4+8+4+12Xe configurations in a single design and charge accordingly.

Note that the above is also related to the reason for the lack of widespread Strix Halo adoption. It's effectively 3 SKUs as the 380 is a bad joke. Going from the 385 to the 395 gets you 2x CPU cores and 1.25x GPU, aka who cares? Whereas going with a dGPU allows for easily spanning from an RTX 5050 to a 5090 in one design if desired, a 4x increase. Oddly I expect that AMD would have had better reception if they'd gone with on-package memory for Strix Halo to allow for more meaningful SKU differentiation and cheaper platform costs for the OEM (would have made the 380 a viable option.)
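To put rough numbers on that configuration-range point, here's a minimal sketch using only the multipliers quoted above; the PTL 4-to-12 Xe figure is a naive linear upper bound, not a benchmark result:

```python
# Rough graphics range a single chassis design can cover under each approach.
# Multipliers are the ones quoted in the post above, not independent measurements.

sku_spread = {
    "Strix Halo (AI Max 385 -> 395)":     1.25,     # GPU CU scaling quoted above
    "dGPU lineup (RTX 5050 -> 5090)":     4.0,      # rough span quoted above
    "PTL iGPU (4 Xe3 -> 12 Xe3, linear)": 12 / 4,   # naive upper bound; real scaling is lower
}

for design, spread in sku_spread.items():
    print(f"{design}: ~{spread:.2f}x graphics range in one design")
```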
 

adroc_thurston

Diamond Member
Jul 2, 2023
8,488
11,230
106
Whereas going with a dGPU allows for easily spanning from an RTX 5050 to a 5090 in one design if desired
nope, you need 3 separate platforms to go from the 5050 to 5090.
I'm not even talking about cooling assembly differences.
Oddly I expect that AMD would have had better reception if they'd gone with on-package memory for Strix Halo to allow for more meaningful SKU differentiation and cheaper platform costs for the OEM (would have made the 380 a viable option.)
MoP is Bad.
 
  • Like
Reactions: madtronik

DavidC1

Platinum Member
Dec 29, 2023
2,166
3,318
106
Using an iGPU will be cheaper than adding a corresponding separate dGPU. The customer can select whatever level of iGPU perf they want; e.g., Panther Lake will provide anything from 4 to 12 Xe3 cores. Everyone will be happy.
It's not cheaper; you actually pay more (quite a bit more) to get the high iGPU because you are forced to pay for the highest CPU. With a dGPU they can pair it with a Core Ultra 3 if they want. And even 12 Xe3 is going to perform like the lowest-end dGPUs anyway, so why do you need an Ultra 9 to get it?

Want to spread halo iGPUs? Get a SKU out that doesn't waste money on a high-end CPU. Right now you are paying $250 CPU + $100 GPU. How about $100 CPU + $100 GPU?

AMD/Intel: pair their top graphics with 7 and 9 series CPUs only.

"Muh nobody ain't buying our halo graphics!"

Uhh, it's a $1500 system for x60 graphics.....
Not for mobile iGPUs it isn't. I just ran it 3 days ago on a 12 CU Ryzen 7 6800H in a Beelink NUC Win 11 PC, and at max power it struggled to hit 30 fps at 1080p low or medium with no AA. I was actually expecting much more from a Zen 3+ / 12 CU RDNA 2 chip, but it very much struggled and dropped into the mid-to-low 20s quite often.
It doesn't matter that the GTX 1050 Ti is not super duper faster than Lunar Lake or Strix graphics. Especially in many newer games (where you actually need the frames), the iGPUs are often faster. That's the point, not whether Heaven is relevant.
People on here acting like 1T performance doesn't matter for games...

An i3-binned processor is going to be down almost a full GHz, if not more, from the i9 bin for the best core. At the necessarily lower resolutions that iGPU gaming will require, that 1T performance will still be an issue, or do you REALLY want bad 1% lows?
Being 1 GHz lower is almost nothing. One is running at 4 GHz and the other is running at 5 GHz, whoopdee doo. The biggest thing is that a Halo part paired with a low-end CPU will save you a bunch of money, which makes sense, rather than paying $600+ for x60-class performance. Of course, the financially irresponsible can pair their mobile RTX 5060 with a 9950X3D or a 285K.
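For what it's worth, the raw frequency gap being argued over works out as follows (a quick sketch using the 4 GHz / 5 GHz figures above; actual 1% lows depend on far more than clocks):

```python
# Relative size of a ~1 GHz clock gap, assuming equal IPC and a purely CPU-bound workload.

low_bin_ghz, high_bin_ghz = 4.0, 5.0       # figures quoted in the post above

deficit = 1 - low_bin_ghz / high_bin_ghz   # how far the slower bin trails the faster one
uplift = high_bin_ghz / low_bin_ghz - 1    # how much faster the top bin is

print(f"Slower bin trails by {deficit:.0%}; top bin is {uplift:.0%} faster.")
# -> trails by 20%; top bin is 25% faster (only when fully clock-bound).
```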
 
Last edited: