Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Apr 1, 2022
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than D2D, a first from Intel. Expect a launch around Q2 2026 / Computex 2026. In case people don't remember Alder Lake-N, I have created a table below comparing the detailed specs of ADL-N and WCL. Just for fun, I am throwing in LNL and the upcoming MediaTek D9500 SoC.

|  | Intel Alder Lake-N | Intel Wildcat Lake | Intel Lunar Lake | MediaTek D9500 |
| --- | --- | --- | --- | --- |
| Launch Date | Q1-2023 | Q2-2026 ? | Q3-2024 | Q3-2025 |
| Model | Intel N300 | ? | Core Ultra 7 268V | Dimensity 9500 5G |
| Dies | 2 | 2 | 2 | 1 |
| Node | Intel 7 + ? | Intel 18A + TSMC N6 | TSMC N3B + N6 | TSMC N3P |
| CPU | 8 E-cores | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | C1 1+3+4 |
| Threads | 8 | 6 | 8 | 8 |
| Max CPU Clock | 3.8 GHz | ? | 5 GHz |  |
| L3 Cache | 6 MB | ? | 12 MB |  |
| TDP | 7 W | Fanless ? | 17 W | Fanless |
| Memory | 64-bit LPDDR5-4800 | 64-bit LPDDR5-6800 ? | 128-bit LPDDR5X-8533 | 64-bit LPDDR5X-10667 |
| Max Memory | 16 GB | ? | 32 GB | 24 GB ? |
| Bandwidth |  | ~55 GB/s | 136 GB/s | 85.6 GB/s |
| GPU | UHD Graphics |  | Arc 140V | G1 Ultra |
| EU / Xe | 32 EU | 2 Xe | 8 Xe | 12 |
| Max GPU Clock | 1.25 GHz |  | 2 GHz |  |
| NPU | NA | 18 TOPS | 48 TOPS | 100 TOPS ? |
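For anyone wondering where the bandwidth row comes from, it falls straight out of bus width × transfer rate. A quick sketch (my own back-of-envelope arithmetic, assuming the memory configs listed above):

```c
#include <stdio.h>

/* Peak DRAM bandwidth in GB/s = (bus width in bits / 8) * MT/s / 1000 */
static double peak_gbps(int bus_bits, int mtps) {
    return (bus_bits / 8.0) * mtps / 1000.0;
}

int main(void) {
    printf("WCL    64-bit LPDDR5-6800   : %5.1f GB/s\n", peak_gbps(64, 6800));   /* ~54.4  */
    printf("LNL   128-bit LPDDR5X-8533  : %5.1f GB/s\n", peak_gbps(128, 8533));  /* ~136.5 */
    printf("D9500  64-bit LPDDR5X-10667 : %5.1f GB/s\n", peak_gbps(64, 10667));  /* ~85.3  */
    return 0;
}
```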









With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first from Intel. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



 

Attachments

  • PantherLake.png
  • LNL.png
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg
  • Clockspeed.png
Last edited:

511

Diamond Member
Jul 12, 2024
For that to happen Intel's cores need massive improvements. I remember watching a review where the fanless M4 MacBook Air beat an actively cooled 256V LNL laptop in real-world CPU tests.
Bound to happen. Intel's P core sucks; general usage will be fine since it runs on the LP E-cores.
 

DavidC1

Golden Member
Dec 29, 2023
The Darkmont E-core, with what Intel seems to imply are more substantial architecture changes than Cougar Cove's. Could this mean 10%+ IPC over Skymont?
Darkmont should be 3-5% on average. The 128B/cycle L2 bandwidth is already on Skymont. Cougar Cove's gains over its predecessor will be greater than Darkmont's. The execution port count is the same as Skymont's; the increases Intel cites are over Crestmont. Memory disambiguation was introduced with Core 2 in 2006; Cougar Cove and Darkmont improve it over their predecessors. Darkmont's changes are quite minor, and I suspect Cougar Cove addresses its weaknesses more. Your confusion, I suspect, comes from Intel presenting what the predecessors already brought alongside what is actually new in 2026.

Xe3 offers >50% higher peak performance than Lunarlake, not performance per watt; per watt it looks like about 25%. I think that with improved power distribution between CPU and GPU there's a chance Pantherlake can hit 2x in quite a few games, but partly that will be because the GPU gets more power allocated to it than on Lunarlake. So 15W CPU + 10W GPU might become 10W CPU + 15W GPU on Pantherlake. And this will primarily help at demanding settings and in demanding games, not so much at higher frame rates or in older titles. There's a typo with the base power: it should be 15-35W, not 250W.

Tom Petersen says the 33% increase in the GPU core cache is a significant contributor to the extra performance as well.

Execution unit utilization is said to be improved due to the 25% increase in thread count and variable register allocation. C&C actually went over the Xe3 changes well before Intel's release:
Based on C&C data, they could improve the granularity and peak thread count even more on Xe3P.

Intel 7 didn't have very high yields in the beginning. They said even Intel 4 scaled faster.
 
Last edited:
  • Like
Reactions: Hulk

511

Diamond Member
Jul 12, 2024
Darkmont should be 3-5% on average. The 128B/cycle L2 bandwidth is already on Skymont. Cougar Cove's gains over its predecessor will be greater than Darkmont's. The execution port count is the same as Skymont's; the increases Intel cites are over Crestmont. Memory disambiguation was introduced with Core 2 in 2006; Cougar Cove and Darkmont improve it over their predecessors. Darkmont's changes are quite minor, and I suspect Cougar Cove addresses its weaknesses more. Your confusion, I suspect, comes from Intel presenting what the predecessors already brought alongside what is actually new in 2026.
Also CLMUL on Darkmont, which Crestmont doesn't have.
 

Thunder 57

Diamond Member
Aug 19, 2007
I'd like to go even smaller, but we're still using outdated standards like ATX for power, and futzing around connecting the motherboard to the front panel with jumper cables on terminal blocks like it's 1975. So I don't hold out hope for any modernization on that front, since everyone else seems to think that's a perfectly reasonable situation, I guess.

I remember having a case where I guess USB didn't have a standardized pinout, because all of the wires were separate! That was ridiculous. I never bothered hooking up the front USB.
 
  • Like
Reactions: Josh128

Josh128

Golden Member
Oct 14, 2022
I remember having a case where I guess USB didn't have a standardized pinout, because all of the wires were separate! That was ridiculous. I never bothered hooking up the front USB.
I did, and it's always a tedious task to do, down in the corner where you can barely reach, along with a subsequent hope-and-pray moment when you first try it to see if it works. ;);)
 
  • Like
Reactions: Thunder 57

Thunder 57

Diamond Member
Aug 19, 2007
I did, and it's always a tedious task to do, down in the corner where you can barely reach, along with a subsequent hope-and-pray moment when you first try it to see if it works. ;);)

I tried putting the pins in the correct order and taping them to make it work but I gave up on that eventually.
 
Last edited:
  • Like
Reactions: Josh128

OneEng2

Senior member
Sep 19, 2022
Not really. If they did, AMD would have a much better position in the laptop market over the past 5 years. People these days apparently spend quite frivolously.
Intel is very prominent in OEM sales .... a marketing segment AMD is poorly aligned to.
AMD's market position is due to the fact that they refused to do the OEMs' work while Intel does it for them, and OEMs are an incompetent bunch.
I don't think they "refuse", I think they don't offer as compelling a package as Intel.
That's AMD's problem, as laptops are the largest TAM in the PC world.
Yes, but the margins are not nearly as high as DC and HEDT where AMD is raking in the profit and selling the most chips.
If Panther Lake has corrected some of the ARL inconsistencies, increased IPC in both cores, increased efficiency, and reduced latency, Panther Lake and the subsequent desktop counterpart could provide serious competition for Zen 6.
I keep saying this as well. AMD had issues with latency with Zen 2, and the next gen Zen 3 was a big improvement. It seems unfair (or uninformed) to assume that Intel would not follow the same pattern with NVL.
The underlying problem with Panther Lake on 18A is best encapsulated in one question: how many 5.1 GHz compute tiles can they yield per wafer on 18A? If what we're being told about 18A is true, binning for 5.1 GHz has not been easy. That's going to take its toll on margins at the very least.
The general issue in the past with BSPDN (backside power delivery) was the creation of hot spots within the die. These hot spots then limit the max clock speed the chip can reach.

My biggest concern with 18A is clock speed. I suspect it will provide crazy good PPA and therefore be a big deal in server and laptop where max clock speed is less important than meeting a performance level at a given power.

Additionally, from a financial standpoint, 18A is very very expensive for Intel. Even should everything from a technical standpoint work out in spades, they could still fail financially.
 

511

Diamond Member
Jul 12, 2024
My biggest concern with 18A is clock speed. I suspect it will provide crazy good PPA and therefore be a big deal in server and laptop where max clock speed is less important than meeting a performance level at a given power.
And 18A is used in Server and Laptop
Additionally, from a financial standpoint, 18A is very very expensive for Intel. Even should everything from a technical standpoint work out in spades, they could still fail financially.
If you want expensive, look at Intel 7: it's literally as expensive as Intel 18A to produce while having worse PPA and very bad margins.
 

Josh128

Golden Member
Oct 14, 2022
Yes, but the margins are not nearly as high as DC and HEDT where AMD is raking in the profit and selling the most chips.

That's a silly excuse. DC, HEDT, and mobile/laptop don't have to be mutually exclusive. Intel is still making more revenue on client/laptops than AMD is as an entire company.


Intel client computing: $7.9B

AMD entire company: $7.7B
 
  • Like
Reactions: Henry swagger

LightningZ71

Platinum Member
Mar 10, 2017
And client typically has the lowest net margin. With as dominant a position as Intel has there, you don't "take" margin there, you have to "buy" it by selling a superior product at or below cost. Remember, precious little of client goes directly to the final customer; it's mainly via OEMs. OEMs are loath to donate money to a disruptor by building out their platform if they don't believe they can quickly make that money back in margin on volume. You don't get that unless you have a much better product that you are willing to practically donate to the OEM.
 

Doug S

Diamond Member
Feb 8, 2020
I did, and it's always a tedious task to do, down in the corner where you can barely reach, along with a subsequent hope-and-pray moment when you first try it to see if it works. ;);)

We used to use similar but larger ribbon cables for storage, in the IDE/EIDE days. They realized that was a crappy solution and moved on with SATA to a compact connector and again with NVMe to a simple edge connector like PCIe.

Imagine if they standardized on a robust edge connector for motherboards that mates to a slot on the case. Part of it is fixed function (stuff every setup needs, like the power button) and part of it is configured in EFI to match what the case expects (or, for bonus points, autonegotiated). No more futzing around to get your front panel USB and audio connected, no tiny fiddly ribbon cables that you can accidentally knock loose and that might require taking everything apart to put back into place.

And while we're at it, let's fix power. Let's have a single screw-on coaxial 48V input. The board already has a bunch of VRMs; what are we saving by requiring precise voltage regulation in the power supply as well? Why do we have power supplies providing 12V, 5V and 3.3V, and then, because the wires are so thin, an ever-growing assortment of extra 12V inputs? With 48V the power supply is simple and cheap, and it's easy to move to an external brick for SFF and AIO type setups. It's also possible to provide more than one 48V input if you want crazy amounts of board power using standard off-the-shelf 48V power supplies, and/or if you require redundancy/resiliency.
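To put rough numbers on the wire-gauge argument (my own back-of-envelope, not from the post or any spec): for the same board power, 48V cuts the current, and therefore the required copper cross-section, to roughly a quarter of what 12V needs.

```c
#include <stdio.h>

/* Current draw for a given board power at 12 V vs 48 V (I = P / V).
   Conductor sizing scales roughly with current, hence thinner wires at 48 V. */
int main(void) {
    const double power_w[] = {300.0, 600.0, 1000.0};   /* example board power levels */
    for (int i = 0; i < 3; i++) {
        double p = power_w[i];
        printf("%6.0f W: %5.1f A @ 12 V  vs  %5.1f A @ 48 V\n", p, p / 12.0, p / 48.0);
    }
    return 0;
}
```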
 

OneEng2

Senior member
Sep 19, 2022
And 18A is used in Server and Laptop

If you want expensive, look at Intel 7: it's literally as expensive as Intel 18A to produce while having worse PPA and very bad margins.
So Intel 7 also cost Intel 20bn to create? I don't have figures in front of me, but that seems very unlikely. I suspect you are talking about wafer time through the machine?

My opinion is you can't pretend you didn't spend 20bn on a process and price it like you only have to pay off operating costs. That's a great way to go bankrupt.
Irrelevant to the discussion. Intel still owns client. There's no reason for AMD not to do everything in their power to take market share there.
Until AMD has excess capacity beyond HEDT and DC, why would they sell a SINGLE chip to an OEM and give up the profit?
And client typically has the lowest net margin. With as dominant a position as Intel has there, you don't "take" margin there, you have to "buy" it by selling a superior product at or below cost.
Agree. Intel is dug in deep with OEMs, likely with binding contracts. If AMD wants this market, it will have to bleed for it. I'm not saying this won't eventually happen, just not with Zen 6.
 

Fjodor2001

Diamond Member
Feb 6, 2010
And 6) inaudible, although this is really underserved. Wide consumer exposure to fanless ARM devices makes laptops that audibly ramp up and down even more annoying than they already are.
Agreed about preferring silent laptops.

But what about when using it for low-performance tasks such as email, web browsing, PowerPoint, etc.? Especially with PTL having 4 LP E-cores that should be able to take care of that at low power consumption. Any reason to believe those use cases will be noisy? Will the fan even spin up, and if so, shouldn't it be at low RPM and thus silent?
 

Josh128

Golden Member
Oct 14, 2022
So Intel 7 also cost Intel 20bn to create? I don't have figures in front of me, but that seems very unlikely. I suspect you are talking about wafer time through the machine?

My opinion is you can't pretend you didn't spend 20bn on a process and price it like you only have to pay off operating costs. That's a great way to go bankrupt.

Until AMD has excess capacity beyond HEDT and DC, why would they sell a SINGLE chip to an OEM and give up the profit?

Agree. Intel is dug in deep with OEMs, likely with binding contracts. If AMD wants this market, it will have to bleed for it. I'm not saying this won't eventually happen, just not with Zen 6.
AMD can have all the capacity they want; they just have to pay for it.
 

DavidC1

Golden Member
Dec 29, 2023
Client is where innovation is most required because, in addition to all the other requirements, you need to factor in cost. In fact, it's the gaming segment that is most demanding: you need really good MT performance, a really good uarch, and a really good memory subsystem (cache and DRAM) with low latency and high bandwidth, all efficient enough that you don't need exotic cooling, while clocking as high as possible.

You need constraints for innovation; unlimited resources often result in waste. Look at the history of computing. The E-cores doing well compared to the P-cores is attributed to the E-core team being under constraint and under pressure to perform or be eliminated. Those that only focus on enterprise die out, because it's much easier to create something that sells for $10K than something that sells for $1K at most. So eventually the ones that really participate in the client market outperform you in server as well, since they bring learnings from client to server. Plus you need much higher volume in client, which requires higher standards for silicon reliability. That's why yield learning is all about volume.

Constraints also give you a clear target. If you are told you MUST create something within 1 mm² of area and 1 W of power, you do whatever you can to get the best result under that constraint. If you are told "do whatever you want", that's where the waste comes from.
 
Last edited:

DavidC1

Golden Member
Dec 29, 2023
So Intel 7 also cost Intel 20bn to create? I don't have figures in front of me, but that seems very unlikely. I suspect you are talking about wafer time through the machine?
The 20B wasn't all because of 18A, but because of the excess buildout they did. Had they focused on consistent growth instead of betting everything on 18A, it wouldn't have been 20B. Even if they were mostly shell buildings, that's still a waste of money; a "shell" of a building costs at least several times an engineer's salary, if not more. Also, that will be amortized over at least the next 5-7 years, which Intel can easily handle.

Connectors and ATX: it won't change, because there will be resistance. Everything else will have to be incremental additions; if it's not additions, it'll be BTX all over again. If BTX was created for 100W Pentium 4 chips, then something like it is even more necessary today, when CPUs and GPUs are trending toward 3x that. Certainly businesses won't care for it even a little bit. The whole push for new 12V connectors is all about GPUs, and thus AI and gaming. I don't care for that either, because I can live with 30 fps.
Also CLMUL on Darkmont, which Crestmont doesn't have.
It's not a significant general-purpose instruction. Also, Crestmont does support it; Darkmont doubles the capability.
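For context on what CLMUL actually computes (a minimal sketch of the x86 intrinsic, my own example rather than anything Darkmont-specific): it multiplies two 64-bit values as GF(2) polynomials, i.e. with XOR instead of carries, which is the primitive behind fast CRC and AES-GCM (GHASH).

```c
/* Build with: gcc -O2 -mpclmul clmul_demo.c */
#include <stdint.h>
#include <stdio.h>
#include <wmmintrin.h>   /* _mm_clmulepi64_si128 (PCLMULQDQ) */

int main(void) {
    __m128i a = _mm_set_epi64x(0, 0xB5);   /* polynomial operand a (low 64-bit lane) */
    __m128i b = _mm_set_epi64x(0, 0x93);   /* polynomial operand b (low 64-bit lane) */
    /* Carry-less multiply of the low lanes: partial products are XORed, not added. */
    __m128i r = _mm_clmulepi64_si128(a, b, 0x00);
    printf("clmul(0xB5, 0x93) = 0x%llx\n",
           (unsigned long long)_mm_cvtsi128_si64(r));
    return 0;
}
```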
 

ToTTenTranz

Senior member
Feb 4, 2021
Agreed about preferring silent laptops.

But what about when using it for low-performance tasks such as email, web browsing, PowerPoint, etc.? Especially with PTL having 4 LP E-cores that should be able to take care of that at low power consumption. Any reason to believe those use cases will be noisy? Will the fan even spin up, and if so, shouldn't it be at low RPM and thus silent?
The LP E-cores aren't good for web browsing, which runs a lot of single-threaded JavaScript. They're good for light loads like low-priority system tasks, without having to wake up the E- or P-cores.
 
  • Like
Reactions: Thunder 57

DavidC1

Golden Member
Dec 29, 2023
The LP E-cores aren't good for web browsing, which runs a lot of single-threaded JavaScript. They're good for light loads like low-priority system tasks, without having to wake up the E- or P-cores.
On Lunarlake, that's 4 cores at Raptor Cove-level performance running at 3.7 GHz. My desktop is a Comet Lake Celeron (Skylake-class) running at 4 GHz, 2C/4T. It's more than fine.
 
  • Like
Reactions: Fjodor2001

511

Diamond Member
Jul 12, 2024
So Intel 7 also cost Intel 20bn to create? I don't have figures in front of me, but that seems very unlikely. I suspect you are talking about wafer time through the machine?

My opinion is you can't pretend you didn't spend 20bn on a process and price it like you only have to pay off operating costs. That's a great way to go bankrupt.
The R&D cost has already been written off in previous years, and it was something like $10 billion. All 18A is going to consume now is ramp cost and the cost to produce a wafer.
 

regen1

Member
Aug 28, 2025
The Intel 10nm/7 family (with its multi-patterning and lack of EUV) is costly and a margin destroyer; it pulls down the foundry balance sheet and hence the overall balance sheet. The 18A family seems way better from its inception.
 
  • Like
Reactions: 511