Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads


Tigerick

Senior member
Apr 1, 2022
870
808
106
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing the CPU, GPU and NPU, fabbed on the 18A process. Last time I checked, the PCD tile is fabbed on TSMC's N6 process. The two tiles are connected through UCIe rather than D2D, a first for Intel. Expect a launch around Q2/Computex 2026. In case people don't remember Alder Lake-N, I have created a table below comparing the detailed specs of ADL-N and WCL. Just for fun, I am throwing in LNL and the upcoming Mediatek D9500 SoC.

| | Intel Alder Lake-N | Intel Wildcat Lake | Intel Lunar Lake | Mediatek D9500 |
| --- | --- | --- | --- | --- |
| Launch Date | Q1-2023 | Q2-2026 ? | Q3-2024 | Q3-2025 |
| Model | Intel N300 | ? | Core Ultra 7 268V | Dimensity 9500 5G |
| Dies | 2 | 2 | 2 | 1 |
| Node | Intel 7 + ? | Intel 18A + TSMC N6 | TSMC N3B + N6 | TSMC N3P |
| CPU | 8 E-cores | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | C1 1+3+4 |
| Threads | 8 | 6 | 8 | 8 |
| CPU Max Clock | 3.8 GHz | ? | 5 GHz | |
| L3 Cache | 6 MB | ? | 12 MB | |
| TDP | 7 W | Fanless ? | 17 W | Fanless |
| Memory | 64-bit LPDDR5-4800 | 64-bit LPDDR5-6800 ? | 128-bit LPDDR5X-8533 | 64-bit LPDDR5X-10667 |
| Memory Size | 16 GB | ? | 32 GB | 24 GB ? |
| Bandwidth | | ~55 GB/s | 136 GB/s | 85.6 GB/s |
| GPU | UHD Graphics | | Arc 140V | G1 Ultra |
| EU / Xe | 32 EU | 2 Xe | 8 Xe | 12 |
| GPU Max Clock | 1.25 GHz | | 2 GHz | |
| NPU | NA | 18 TOPS | 48 TOPS | 100 TOPS ? |
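For what it's worth, the bandwidth row is just bus width times data rate, so it can be sanity-checked. A minimal sketch in C, assuming the (partly rumoured) memory configs listed above:

```c
/* Peak theoretical DRAM bandwidth: bus width (bits) / 8 * data rate (MT/s) / 1000 -> GB/s.
   Configs are taken from the table above; the WCL one is still a rumour. */
#include <stdio.h>

static double peak_bw_gbs(int bus_bits, int mts)
{
    return (double)bus_bits / 8.0 * mts / 1000.0;
}

int main(void)
{
    printf("ADL-N  64-bit LPDDR5-4800   : %.1f GB/s\n", peak_bw_gbs(64, 4800));   /* ~38.4 */
    printf("WCL    64-bit LPDDR5-6800 ? : %.1f GB/s\n", peak_bw_gbs(64, 6800));   /* ~54.4 */
    printf("LNL    128-bit LPDDR5X-8533 : %.1f GB/s\n", peak_bw_gbs(128, 8533));  /* ~136.5 */
    printf("D9500  64-bit LPDDR5X-10667 : %.1f GB/s\n", peak_bw_gbs(64, 10667));  /* ~85.3 */
    return 0;
}
```

That ~54 GB/s result is why the ~55 GB/s figure sits in the WCL column above.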






With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship the MTL mobile SoC in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024, according to Intel's roadmap. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, called RibbonFET.



 

Attachments

  • PantherLake.png (283.5 KB · Views: 24,034)
  • LNL.png (881.8 KB · Views: 25,527)
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg (181.4 KB · Views: 72,435)
  • Clockspeed.png (611.8 KB · Views: 72,321)
Last edited:

Kocicak

Golden Member
Jan 17, 2019
1,177
1,232
136
I posted a comparison of P-core and E-core performance of Arrow Lake and Raptor Lake CPUs and illustrated how useless HT is:

 

eek2121

Diamond Member
Aug 2, 2005
3,425
5,070
136
Depends on the series. Intel is causing some outright confusion with their new naming. I will admit I haven't seen anything firm from credible sources, but from what I can interpret, there appears to be a cost-effective form of Skymont (not on TSMC, but on an Intel process) that will be sharing the series with Gracemont. I would have to double-check to be sure, but the only differentiator is the middle digit of the series.

I would normally say that I will be happy to be proven wrong, but if Intel is again refreshing Gracemont, quite the opposite is true. FWIW the source is someone from a MAJOR media site, so if he gets it wrong I am going to out him lol…if I can find his post (not on X, but Bluesky)

EDIT: Oh, and the claim was that the Skymont-based part was launching after the previous part…maybe as late as August of next year. I follow AMD more closely than Intel, so I would not be surprised if this does not pan out. The Intel process was not named either.
 

Hulk

Diamond Member
Oct 9, 1999
5,184
3,801
136
How did Intel get here?

Here's the question: what year over the last 30 did Intel have their largest combined architecture and process advantage over the competition? To be more specific, the year where the combination of their node tech and architecture was furthest ahead. Not 2004 for process and 2006 for architecture, but the one year or release where they were the most ahead of the competition in the x86 space?

How far ahead were they? A little? A lot? 5 years? 5 months? A gap as wide as the Grand Canyon?
 

jdubs03

Golden Member
Oct 1, 2013
1,300
904
136
How did Intel get here?

Here's the question: what year over the last 30 did Intel have their largest combined architecture and process advantage over the competition? To be more specific, the year where the combination of their node tech and architecture was furthest ahead. Not 2004 for process and 2006 for architecture, but the one year or release where they were the most ahead of the competition in the x86 space?

How far ahead were they? A little? A lot? 5 years? 5 months? A gap as wide as the Grand Canyon?
Sandy Bridge? Ivy Bridge? 2011-2012.
 
  • Like
Reactions: biostud

Hulk

Diamond Member
Oct 9, 1999
5,184
3,801
136
I was thinking Haswell as the wheels started coming off the cart with Broadwell process issues and then they stalled for a good 5 years. Can you imagine being 5 years ahead of the competition, with billions in the bank, and then "poof!" You're behind and struggling to keep up.

I don't want to jinx Intel, but they are reminding me of the IBM implosion in the '80s. They need to make very wise decisions moving forward.
 

gdansk

Diamond Member
Feb 8, 2011
4,625
7,802
136
I was thinking Haswell as the wheels started coming off the cart with Broadwell process issues
Even with the stall, Intel's 14nm was as good as competitors' 10nm, so they were still ahead there too. The problem was really Intel's 10nm, even if 14nm did have some warning signs of trouble to come.
 

Hulk

Diamond Member
Oct 9, 1999
5,184
3,801
136
32nm Sandy Bridge vs 32nm Bulldozer.
22nm Ivy Bridge vs 32nm Piledriver.

Later still, in 2015, it was:
14nm Skylake vs 32nm Piledriver and 28nm Excavator.
Whoa! Looks like we have a winner. They were way out ahead in 2015. Right about when they went to sleep for 5 years and woke up with water flooding in.
 
  • Like
Reactions: Tlh97 and jdubs03

poke01

Diamond Member
Mar 8, 2022
4,393
5,714
106
Whoa! Looks like we have a winner. They were way out ahead in 2015. Right about when they went to sleep for 5 years and woke up with water flooding in.
If it wasn't for TSMC, Intel would have had no problem.

Zen2 was a game changer. AMD needed that to be great and it was.

Edit: added a better response
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,079
3,915
136
If it wasn't for TSMC, Intel would have had no problem.
Dumb take....

By that logic, every pure-play design company would not exist. Or there would just be a different pure-play foundry instead, maybe Samsung, maybe TI, etc., and things would be roughly the same.
 

DavidC1

Golden Member
Dec 29, 2023
1,936
3,076
96
Whoa! Looks like we have a winner. They were way out ahead in 2015. Right about when they went to sleep for 5 years and woke up with water flooding in.
No, Intel started declining after Sandy Bridge. Haswell and Skylake were by no means impressive, other than the power savings with Haswell (which should have come WAY earlier, like 5-7 years). Architecturally it was crap, and the gains from process came much easier and much cheaper.

Reports from Intel employees on Reddit say that management didn't want to invest in uarch during those times and stayed fat and lazy.

Hence why Apple caught up in impressive fashion. The thing is, I think even Sandy Bridge was hampered by this mindset and could have done much better. It was only 15% per clock because the rest of the gain went into clocks. Nehalem only got a big gain in MT because of SMT and the much-needed move to an on-die memory controller (again a finance decision, related to maximizing fabs); in ST it was 5-10% at best.

So that leaves 4 years of gains between Core 2 and Sandy Bridge. As I said, even in their best days the P-core team could have done much better.
 

DavidC1

Golden Member
Dec 29, 2023
1,936
3,076
96
Why Intel needs a titanic change in order to have a future.

It took Intel until 2010 to get an integrated memory controller, and on the client parts it didn't arrive fully until Sandy Bridge in 2011, 7 years after the dinky competitor AMD. Do you know why?

Intel is a finance company with engineers on the side #1

Because they used the latest process for CPUs, but N-1 for the MCH and N-2 for the I/O Hub. Moving to an IMC would have meant they couldn't utilize older fabs. So, since the industry was nowhere near 2.5D interconnects, Intel should have opened up their fabs way back then. Do you understand this? Without this line of thinking they could have had an integrated memory controller in maybe 2001, maybe 1999!

It took Intel many years before they took the Atom seriously. Do you know why?

Intel is a finance company with engineers on the side #2

Because they were scared of losing margins and wanted an artificial gap of 10x CPU performance between Atom and Core. They said it publicly. Then Apple in 2013 beat Silvermont to a pulp, and the rest is history. Silvermont had an anemic 4EU HD Graphics controller. Originally they wanted a 2EU version.

Intel had a chance of partaking in Apple's hardware ecosystem. Even their haphazard contra-revenue Atom approach would have worked better, since getting in early would have meant the ecosystem would be x86 (thus in Intel's favor). But they lost it. Do you know why?

Intel is a finance company with engineers on the side #3

Steve Jobs wanted to include Intel in the iPhone/iPad hardware. Paul Otellini, in all his finance wisdom, said "No, I don't believe in your vision and I think your volumes are too low".

Some say it wasn't about using CPUs, but their fabs. So what? That would have been a boon too. It surely would have helped Gelsinger's rush-to-18A strategy too.

Why did Intel rush into increasing vector sizes with AVX, AVX2, and AVX512, leading to decreased clocks, thermal issues, and software fragmentation? Every two years they doubled it. Do you know why?

Intel is a finance company with engineers on the side #4

Because at that time their main threat was Nvidia, or so they thought. It would come to pass eventually, but their line of reasoning was that boosting CPU general-purpose FP performance would discourage the transition to Nvidia. It's not entirely a stupid idea, until it is. They should have stayed at AVX2, and AVX512 should have been "AVX3": AVX512's instructions without the 512-bit width.
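For what it's worth, that "instructions without the width" idea is roughly what AVX-512VL already gives you: the new features (per-lane masking, for one) on 256-bit registers. A minimal sketch in C, purely as illustration; the mask value and build flags are my own example, not anything Intel actually branded "AVX3":

```c
/* Masked add on 256-bit vectors: an AVX-512 feature used without the 512-bit width.
   Needs AVX-512F + AVX-512VL, e.g. build with: gcc -O2 -mavx512f -mavx512vl */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 a = _mm256_set1_ps(1.0f);
    __m256 b = _mm256_set1_ps(2.0f);
    __mmask8 even_lanes = 0x55;  /* keep lanes 0, 2, 4, 6 */

    /* lanes selected by the mask get a+b; the rest pass through 'a' unchanged */
    __m256 c = _mm256_mask_add_ps(a, even_lanes, a, b);

    float out[8];
    _mm256_storeu_ps(out, c);
    for (int i = 0; i < 8; i++)
        printf("%.1f ", out[i]);  /* prints: 3.0 1.0 3.0 1.0 3.0 1.0 3.0 1.0 */
    printf("\n");
    return 0;
}
```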

-Why would a company at the forefront of Moore's Law not bring out a Celeron until they were forced to? Lower cost is natural.
-Why would a leading multinational chip company not integrate as many things as possible, and why wait so long? Again, it's natural.
-Why would a chip company with a massive hand in every part of the PC ecosystem not look at every single way to build a laptop with great, great battery life? Smaller, lower power, faster: that is at the HEART of Moore's Law.
 
Last edited:

gdansk

Diamond Member
Feb 8, 2011
4,625
7,802
136
#1 seems like the standard way to avoid poor fab utilization. I don't think it was unique to Intel to use old nodes for ancillary components.
#2 seems like a mistake but I can't find their quotes saying so.
#3 seems like retro-activism. Selling chips at low margins to Apple was never a winning strategy. Apple would inevitably do their own when they can afford it and then Intel would be in Samsung LSI's shoes. How'd that work out for them? The mistake was selling XScale instead of scaling it into a real business for phones after those negotiations with Apple.
#4 seems like an engineering mistake; I don't see how it's financial.

Edit: oh, this should probably be in the Intel financials thread or something. I'm not sure, but it doesn't seem related to the Lakes now that I think about it.
 

511

Diamond Member
Jul 12, 2024
4,759
4,322
106
Why Intel needs a titanic change in order to have a future.

It took Intel until 2010 to get an integrated memory controller, and on the client parts it didn't arrive fully until Sandy Bridge in 2011, 7 years after the dinky competitor AMD. Do you know why?

Intel is a finance company with engineers on the side #1

Because they used the latest process for CPUs, but N-1 for the MCH and N-2 for the I/O Hub. Moving to an IMC would have meant they couldn't utilize older fabs. So, since the industry was nowhere near 2.5D interconnects, Intel should have opened up their fabs way back then. Do you understand this? Without this line of thinking they could have had an integrated memory controller in maybe 2001, maybe 1999!

It took Intel many years before they took the Atom seriously. Do you know why?

Intel is a finance company with engineers on the side #2

Because they were scared of losing margins and wanted an artificial gap of 10x CPU performance between Atom and Core. They said it publicly. Then Apple in 2013 beat Silvermont to a pulp, and the rest is history. Silvermont had an anemic 4EU HD Graphics controller. Originally they wanted a 2EU version.

Intel had a chance of partaking in Apple's hardware ecosystem. Even their haphazard contra-revenue Atom approach would have worked better, since getting in early would have meant the ecosystem would be x86 (thus in Intel's favor). But they lost it. Do you know why?

Intel is a finance company with engineers on the side #3

Steve Jobs wanted to include Intel in the iPhone/iPad hardware. Paul Otellini, in all his finance wisdom, said "No, I don't believe in your vision and I think your volumes are too low".

Some say it wasn't about using CPUs, but their fabs. So what? That would have been a boon too. It surely would have helped Gelsinger's rush-to-18A strategy too.

Why did Intel rush into increasing vector sizes with AVX, AVX2, and AVX512, leading to decreased clocks, thermal issues, and software fragmentation? Every two years they doubled it. Do you know why?

Intel is a finance company with engineers on the side #4

Because at that time their main threat was Nvidia, or so they thought. It would come to pass eventually, but their line of reasoning was that boosting CPU general-purpose FP performance would discourage the transition to Nvidia. It's not entirely a stupid idea, until it is. They should have stayed at AVX2, and AVX512 should have been "AVX3": AVX512's instructions without the 512-bit width.

-Why would a company at the forefront of Moore's Law not bring out a Celeron until they were forced to? Lower cost is natural.
-Why would a leading multinational chip company not integrate as many things as possible, and why wait so long? Again, it's natural.
-Why would a chip company with a massive hand in every part of the PC ecosystem not look at every single way to build a laptop with great, great battery life? Smaller, lower power, faster: that is at the HEART of Moore's Law.
Happens when Finance takes over from Industry Pioneers of the Trinity
 
  • Like
Reactions: Ranulf

mzocyteae

Member
Dec 29, 2020
26
19
81
Hence why Apple caught up in impressive fashion. The thing is, I think even Sandy Bridge was hampered by this mindset and could have done much better. It was only 15% per clock because the rest of the gain went into clocks. Nehalem only got a big gain in MT because of SMT and the much-needed move to an on-die memory controller (again a finance decision, related to maximizing fabs); in ST it was 5-10% at best.

So that leaves 4 years of gains between Core 2 and Sandy Bridge. As I said, even in their best days the P-core team could have done much better.
Nehalem could clock to much higher frequencies than Penryn, but Intel did not push hard back in those days.
 

DavidC1

Golden Member
Dec 29, 2023
1,936
3,076
96
#1 seems like the standard way to avoid poor fab utilization. I don't think it was unique to Intel to use old nodes for ancillary components.
Which was a mistake. They should have opened it up if they really had a big advantage.
#2 seems like a mistake but I can't find their quotes saying so.
I had over a GB of their presentations and listened to a fair bit of them. It was a goal at one point.
#3 seems like retro-activism. Selling chips at low margins to Apple was never a winning strategy. Apple would inevitably do their own when they can afford it and then Intel would be in Samsung LSI's shoes. How'd that work out for them? The mistake was selling XScale instead of scaling it into a real business for phones after those negotiations with Apple.
Right, it sounds bad until you realize that it would have meant x86 would have been an ISA with a foothold in the mobile market. That would have helped tremendously later on with entering the market, because Medfield needed a translation layer.

Otellini also admitted later that he did not expect volumes to be that high, and that it was a big mistake to refuse. Keep in mind they spent billions of dollars, and literally gave some of it away. Accepting Apple would have been a much, much better deal.

Remember, many smartphone apps are heavily defeatured and useless compared to their desktop counterparts. It wasn't like that.
#4 seems like an engineering mistake; I don't see how it's financial.
Up until AVX512 there wasn't much of a clockspeed loss, because the FP portion was relatively small, so a new shrink was enough. But they doubled it again, which means they should at least have waited. You can see from their Icelake-SP presentation that the 10nm shrink was enough.
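And the fragmentation cost shows up directly in software: every wider extension becomes another code path that has to be probed at runtime. A minimal sketch using the GCC/Clang CPU-feature builtins; the three "paths" are just placeholders, not any particular library's dispatch:

```c
/* Pick a vector code path at runtime based on what the CPU actually supports. */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* initialise the CPU feature data used below */
    if (__builtin_cpu_supports("avx512f"))
        puts("using the AVX-512 path");   /* widest, but historically downclock-prone */
    else if (__builtin_cpu_supports("avx2"))
        puts("using the AVX2 path");
    else
        puts("using the SSE/scalar fallback");
    return 0;
}
```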

But even now it's a mistake, because 512 bits is too much. You can see from their old presentations that they were pushing HPC quite hard, hence the AVX doubling. It's not difficult to see this was a defend-against-Nvidia move. It's also because being in the Top500 sounds good to marketing.

Instead, there should have been a GPU push a long time ago. The primary reason they had crap iGPUs was that they wanted to relegate them to the status of HD Audio: irrelevant. No one talks about sound anymore.

The thing is, I know these rants are pointless. It's just that they had all the potential to do better, and they are blowing it.
 
Last edited: