Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads

Page 728 - AnandTech Forums

Tigerick

Senior member
Apr 1, 2022
908
828
106
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the Intel 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than D2D, a first for Intel. I'm expecting a launch around Q2/Computex 2026. In case people don't remember Alder Lake-N, I have created a table below comparing the detailed specs of ADL-N and WCL. Just for fun, I'm also throwing in LNL and the upcoming MediaTek D9500 SoC.

|               | Intel Alder Lake-N  | Intel Wildcat Lake        | Intel Lunar Lake          | MediaTek D9500       |
|---------------|---------------------|---------------------------|---------------------------|----------------------|
| Launch Date   | Q1-2023             | Q2-2026 ?                 | Q3-2024                   | Q3-2025              |
| Model         | Intel N300          | ?                         | Core Ultra 7 268V         | Dimensity 9500 5G    |
| Dies          | 2                   | 2                         | 2                         | 1                    |
| Node          | Intel 7 + ?         | Intel 18A + TSMC N6       | TSMC N3B + N6             | TSMC N3P             |
| CPU           | 8 E-cores           | 2 P-cores + 4 LP E-cores  | 4 P-cores + 4 LP E-cores  | C1 1+3+4             |
| Threads       | 8                   | 6                         | 8                         | 8                    |
| Max CPU Clock | 3.8 GHz             | ?                         | 5 GHz                     | ?                    |
| L3 Cache      | 6 MB                | ?                         | 12 MB                     | ?                    |
| TDP           | 7 W                 | Fanless ?                 | 17 W                      | Fanless              |
| Memory        | 64-bit LPDDR5-4800  | 64-bit LPDDR5-6800 ?      | 128-bit LPDDR5X-8533      | 64-bit LPDDR5X-10667 |
| Max Size      | 16 GB               | ?                         | 32 GB                     | 24 GB ?              |
| Bandwidth     | ?                   | ~55 GB/s                  | 136 GB/s                  | 85.6 GB/s            |
| GPU           | UHD Graphics        | ?                         | Arc 140V                  | G1 Ultra             |
| EU / Xe       | 32 EU               | 2 Xe                      | 8 Xe                      | 12                   |
| Max GPU Clock | 1.25 GHz            | ?                         | 2 GHz                     | ?                    |
| NPU           | NA                  | 18 TOPS                   | 48 TOPS                   | 100 TOPS ?           |









With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new-generation platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap is telling us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, called RibbonFET.



 

Attachments

  • PantherLake.png
  • LNL.png
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg
  • Clockspeed.png
Last edited:

OneEng2

Senior member
Sep 19, 2022
939
1,158
106
You aren't getting more than 3-5% for Darkmont. Now for Arctic Wolf I anticipate a BIG upgrade.
I think the performance uplift of Arctic Wolf will depend directly on the increase in transistor density Intel gets going from low-NA to high-NA lithography on 14A. If 14A density is only slightly better than 18A (which is essentially equal in density to N3B, plus a little), then I suspect the performance gain from Arctic Wolf will also be smaller.

I believe this because I don't see any HUGE eyesores in Lion Cove (the issues are mostly in the ring bus). When Intel moved from Netburst to the Core architecture, there were MANY huge eyesores in the design of Netburst, while Core was a more evolved PIII, so even on the same process node, huge advantages were seen.... of course, we all had to start running our furnaces again since the computer no longer did double duty as a space heater :).

I think Lion Cove is actually fairly decent. I am sure they will work on the interfaces to lower the latencies, and certainly will get a lower latency connection to the L3 and main memory than poor ole Lion Cove has to deal with today.

Do you see some serious design goofs in Lion Cove?
 

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
You aren't getting more than 3-5% for Darkmont. Now for Arctic Wolf I anticipate a BIG upgrade.
From the track record of the P vs E team, I would definitely put my bet on the E cores having higher IPC gains
 

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
Both the next mont and cove are single digit. I can't give an exact number, maybe 5%-7%.
Yup, within my estimate of 5-8%.
Redwood Cove was a major overhaul at the architectural level and it regressed, but Crestmont gained 5% IPC despite both sharing the same horrendous fabric
 

DavidC1

Golden Member
Dec 29, 2023
1,993
3,142
96
From the track record of the P vs E team, I would definitely put my bet on the E cores having higher IPC gains
This has been the history of the E cores for the past 10-plus years. The shrinks get very little and sometimes nothing. You can't expect huge changes every year.

Also not very much is changing in Darkmont either.
 

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
This has been the history of the E cores for the past 10-plus years. The shrinks get very little and sometimes nothing. You can't expect huge changes every year.

Also not very much is changing in Darkmont either.
There are definitely some enhancements and improvements; I am wondering what is going to happen with Cougar Cove.
While the refresh doesn't look like much from a top view, there is a major change in transistor architecture, a very big one
 

OneEng2

Senior member
Sep 19, 2022
939
1,158
106
There are definitely some enhancements and improvements; I am wondering what is going to happen with Cougar Cove.
While the refresh doesn't look like much from a top view, there is a major change in transistor architecture, a very big one
Yes, but the real question is: how much more transistor budget will 18A have over N3B, the densest process on the market?

If there isn't any transistor-count advantage, then it comes down to frequency and power scaling. I have not heard much encouraging information on this for BSPDN, and I have heard that hot-spot problems have plagued the design.

CWF will have no need to scale frequency, but that's not true for client.
 

Geddagod

Golden Member
Dec 28, 2021
1,583
1,649
106
Yes, but the real question is: how much more transistor budget will 18A have over N3B, the densest process on the market?

If there isn't any transistor-count advantage, then it comes down to frequency and power scaling. I have not heard much encouraging information on this for BSPDN, and I have heard that hot-spot problems have plagued the design.

CWF will have no need to scale frequency, but that's not true for client.
Rumor is that the P-cores in PTL are smaller than the ones in ARL, and I would be surprised if the Fmax of PTL isn't around the same as ARL-H.
 
  • Like
Reactions: SiliconFly

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
Rumor is that the P-cores in PTL are smaller than the ones in ARL, and I would be surprised if the Fmax of PTL isn't around the same as ARL-H.
Probably they cut some bloat and optimized the uArch more; as for Fmax, only time will tell
 

OneEng2

Senior member
Sep 19, 2022
939
1,158
106
Rumor is that the P-cores in PTL are smaller than the ones in ARL, and I would be surprised if the Fmax of PTL isn't around the same as ARL-H.
Possibly so. I believe that logic density in 18A is expected to be better than N3B while SRAM density will only be on par with N3E. Because of this, I can see where the actual core will shrink a bit, but the cache will actually get a little larger compared to N3B.

As for frequency scaling on 18A, I don't think it is Fmax that gets it in trouble. It is localized hot spots.

Seems like BSPDN puts an additional layer between the silicon and the heat sink. Additionally, transistors tend to be grouped more tightly where higher power is required, since the power vias take up some room .... all of which culminates in these little groupings of transistors getting hot.

I think TSMC made a good decision implementing GAA without BSPDN first (which gets you the density gains), then tackling BSPDN on their A16 process (which is just N2 with BSPDN, as I understand it).
 
  • Like
Reactions: Win2012R2

DavidC1

Golden Member
Dec 29, 2023
1,993
3,142
96
There are definitely some enhancements and improvements; I am wondering what is going to happen with Cougar Cove.
While the refresh doesn't look like much from a top view, there is a major change in transistor architecture, a very big one
This is often said about both FinFET and RibbonFET, but the reality is that both of those gains are examples of modern processes taking MORE work for LESS gain than before.

The gains are meagre compared to what we used to get from a "simple" shrink. Pentium 4 gained 60% in frequency at the same power on the 0.13u node, and the only major addition was copper interconnect. Preceding generations got that kind of gain even more easily.

Think about that. Every generation, modern processes take:
  • 1.5x the cost
  • 1.2-1.5x the complexity
for 1/2 to 1/3rd the gains.

And it doesn't change the fact that Darkmont's gains are absolutely minor. I'm surprised it gets 3-5% for a Tick, which is a testament to how much better the E-core team is.
 
Last edited:

DavidC1

Golden Member
Dec 29, 2023
1,993
3,142
96
Intel's GPU efforts remind me of their mobile efforts. History suggests they might drop this one too. They can't seem to take any criticism and are quick to abandon things. A hundred thousand adults run and maintain the company, but they act like 5-year-old children.

Saying the problem is overhead, or drivers, or architecture hides the real issue: serious culture rot and mismanagement within the company.

There are many ups and downs in both bull and bear markets. From a trading sense, I still rate Intel in a long-term downtrend. But this is far more serious than the stock price. It's a road to disaster, one that can end in losing everything.
 

gaav87

Senior member
Apr 27, 2024
659
1,279
96
But they added FRED :) "Flexible Return and Event Delivery"
FRED improves performance and simplifies the handling of system calls, interrupts, and exceptions by replacing the traditional IDT, GDT/LDT, and TSS mechanisms. FRED eliminates the use of Rings 1 and 2, makes Ring 0 strictly 64-bit, and introduces new instructions and MSRs for event delivery and stack management

 

DavidC1

Golden Member
Dec 29, 2023
1,993
3,142
96
Where's the IA optimization manual for new CPUs Intel?
 

Win2012R2

Golden Member
Dec 5, 2024
1,236
1,276
96
Intel's GPU efforts are reminding me of their mobile efforts.
There is a lot more client-facing software involved, and Intel is crap with software unless it's Intel Compiler stuff. It's a lot easier with CPUs (with a monopoly on top): get the hardware out, write the IA reference, and it's some other poor sod's problem to deal with, because if your software does not work with an Intel CPU then it is your problem, not Intel's. With GPUs it is now totally the opposite.

The market won't support 3 players when 1 (Nvidia) is so dominant it hoovers up all the money. Intel will be forced to license GPUs from AMD; that's my (crap) prediction, and it would be good for both companies.
 

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
There is a lot more client-facing software involved, and Intel is crap with software unless it's Intel Compiler stuff. It's a lot easier with CPUs (with a monopoly on top): get the hardware out, write the IA reference, and it's some other poor sod's problem to deal with, because if your software does not work with an Intel CPU then it is your problem, not Intel's. With GPUs it is now totally the opposite.

The market won't support 3 players when 1 (Nvidia) is so dominant it hoovers up all the money. Intel will be forced to license GPUs from AMD; that's my (crap) prediction, and it would be good for both companies.
Hold your horses lol, Intel's software team is not crap, even on the GPU side. It's their software team that allowed them to make consistent improvements with Arc despite the hardware flaws carried over from Alchemist; blaming it on the SW team is just wrong.
The idea that they would have to license an implementation that is worse outside of raster performance (RDNA vs. Xe) doesn't seem logical
 

DavidC1

Golden Member
Dec 29, 2023
1,993
3,142
96
Hold your horses lol, Intel's software team is not crap, even on the GPU side. It's their software team that allowed them to make consistent improvements with Arc despite the hardware flaws carried over from Alchemist; blaming it on the SW team is just wrong.
The idea that they would have to license an implementation that is worse outside of raster performance (RDNA vs. Xe) doesn't seem logical
See, here's the thing. GPUs and games are a long-term, iterative thing. How long? Intel's been at it for 27 years.

They've been neglecting it pretty much, because internally they wanted GPUs to be ignored the way HD Audio is. Audio doesn't need much compute, but 3D graphics has an insatiable appetite for it.

So the spotty game support due to that neglect carries over onto their dGPUs.

Here's the thing: the Chinese vendor Moore Threads shows what far worse looks like; their A770-class part isn't even comparable to an A380. It's because Intel has a massive iGPU install base built over 25 years that they have a chance.

That means entering the GPU market is the hardest thing you can do, because you literally have neither the time nor the resources, human or otherwise, to cover three decades of backlog games.

If they give up on dGPUs, they don't need to license AMD GPUs. They can go back to being half-hearted again.
 

coercitiv

Diamond Member
Jan 24, 2014
7,443
17,729
136
The growing pains Intel is having with their dGPU effort show how badly they need to go through the paces, even if the main beneficiary in terms of revenue will be the mobile iGPU. The driver overhead that is holding BMG back may not manifest as a clear performance issue in smaller iGPUs, but it does eat CPU cycles in a thermally constrained environment. Efficiency could be much better even with the same hardware.

Without the dGPU effort I fully expect Intel to remain in the half-hearted zone, as @DavidC1 eloquently puts it. As long as they have a dGPU product on the market and R&D to support it, all weak points will be put under the magnifying glass and the hardware/software teams will have to address them moving forward (sometimes it's more about getting clearance; the engineers often know what needs to be done but lack the green light to mature the tech).
 

511

Diamond Member
Jul 12, 2024
4,986
4,505
106
Is this about the Hardware Unboxed Arc Battlemage B580 controversy with older-gen CPUs? If so, Hardware Unboxed just wants to sensationalize something in a way that is clearly misleading. The Battlemage box clearly states that it only supports Ryzen 3000 / Intel 10th gen and newer (not older). But they still went ahead and tested it with an older-gen CPU and said it's under-performing due to driver overhead! That's not right imho.

They did test it with the 5700X3D, but not with 10th/11th/12th-gen Intel CPUs like the i5