Discussion Intel Meteor, Arrow, Lunar & Panther Lakes Discussion Threads


Tigerick

Senior member
Apr 1, 2022
782
750
106
[Attached slides: PPT1.jpg, PPT2.jpg, PPT3.jpg]

As Hot Chips 34 starts this week, Intel will unveil technical details of the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024, at least according to Intel's roadmap. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, which it calls RibbonFET.



LNL-MX.png

Intel Core Ultra 100 - Meteor Lake

INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg

As mentioned by Tom's Hardware, TSMC will manufacture the I/O, SoC, and GPU tiles. That means Intel will manufacture only the compute tile and the Foveros base tile. (Notably, Intel calls the I/O tile an 'I/O Expander,' hence the IOE moniker.)



Clockspeed.png
 

Attachments

  • PantherLake.png (283.5 KB)
  • LNL.png (881.8 KB)

DrMrLordX

Lifer
Apr 27, 2000
22,738
12,722
136
It is unfortunate to see account closures like that happen, especially when they are abrupt and unexplained. He didn't seem to have a major issue with people here (or if he did, it wasn't aired out in the open for everyone to see). Given the various backgrounds of people who post here, it may be that personal obligations took away his time or ability to post about technical matters on an old-school forum that is often scraped by leak aggregators/bait sites.
 

coercitiv

Diamond Member
Jan 24, 2014
7,247
17,064
136
I'll probably never understand what Intel was thinking to drop SMT in their server silicon. I could understand them doing it from a position of strength, thinking they could iterate faster than the competition. It seems to me Intel convinced themselves to slowly abandon SMT because it was more convenient (server product delays, consumer product no longer compatible in the long run). Meanwhile AMD did the opposite, they made changes to the architecture to leverage SMT more.

Reminds me of something Keller talked about during a keynote: Intel people came to him thinking there wasn't much more that could be done to improve their cores; AMD people came to him thinking there had to be more out there. Different mindsets, different outcomes after a decade of hard work.

Alas poor SMT.
 

511

Diamond Member
Jul 12, 2024
3,240
3,176
106
I'll probably never understand what Intel was thinking to drop SMT in their server silicon. I could understand them doing it from a position of strength, thinking they could iterate faster than the competition. It seems to me Intel convinced themselves to slowly abandon SMT because it was more convenient (server product delays, consumer product no longer compatible in the long run). Meanwhile AMD did the opposite, they made changes to the architecture to leverage SMT more.
Same, there is zero sense in dropping SMT on servers. Most likely the P-core team convinced everyone SMT was not needed, which was not great. Even NVIDIA is bringing SMT 🤣
 
  • Like
Reactions: Tlh97 and OneEng2

AcrosTinus

Senior member
Jun 23, 2024
216
222
76
Same, there is zero sense in dropping SMT on servers. Most likely the P-core team convinced everyone SMT was not needed, which was not great. Even NVIDIA is bringing SMT 🤣
I think this is the direct impact of Spectre and Meltdown: they saw the issues, the mitigations were costly, and it became a cat-and-mouse game like with malware, so they decided to phase SMT out in favor of bigger cores. In the company I work for, almost all servers have SMT disabled. We also never buy the max SKU and are solidly in the midrange for the servers we deploy. I can understand their decision: thread-heavy SKUs mostly aren't bought, and the Xeons are scaling up rapidly. I guess the new CEO is just old-fashioned and will inadvertently rush Intel into the next security issue by reintroducing phased-out tech in a short time. With AMD it was baked in from the beginning with a different, from-scratch implementation; Intel might be carrying an implementation going back a decade+...
 
  • Like
Reactions: Tlh97 and 511
Jul 27, 2020
26,456
18,188
146
I have to be in favor of SMT because, considering how large Intel's P-cores are, they absolutely need it to improve their PPA. Even better would be a revamped SMT implementation, especially a dynamic sort, where applications could request SMT be turned on for certain workloads, or even on certain cores for certain threads. It could also be a user choice without having to reboot the PC. I know, sounds nuts, but in the realm of computing, if you are not doing nutty things, you aren't LIVING!
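The no-reboot part already exists on Linux: the kernel exposes a runtime SMT knob in sysfs. A minimal sketch (Linux-specific; the paths and state strings are the kernel's documented ones, and writing the control file requires root):

```python
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")  # Linux sysfs knob

def parse_smt_state(raw: str) -> bool:
    """Map the kernel's SMT control string to a simple on/off bool.

    The control file reports one of: on, off, forceoff, notsupported.
    """
    state = raw.strip()
    if state == "on":
        return True
    if state in ("off", "forceoff", "notsupported"):
        return False
    raise ValueError(f"unknown SMT state: {state!r}")

def smt_enabled() -> bool:
    """Read the current SMT state (False if the knob is absent)."""
    try:
        return parse_smt_state(SMT_CONTROL.read_text())
    except FileNotFoundError:
        return False

def set_smt(enabled: bool) -> None:
    """Toggle SMT at runtime; needs root, no reboot required."""
    SMT_CONTROL.write_text("on" if enabled else "off")
```

This is a global, all-cores switch; the per-workload, per-core SMT requests imagined above would need OS and hardware support that doesn't exist today.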
 

OneEng2

Senior member
Sep 19, 2022
725
968
106
You forget that SMT adds validation complexity. That increased difficulty not only causes potential delays but also takes focus away from other parts of the core.

If you need an extra month per generation, then over 10 generations that's nearly a year of delay. Never mind increasingly sophisticated hacking and vulnerabilities, which would worsen this further.
Not sure what happened to DavidC1, but my reply is still in order.

IMO, sacrificing a high-volume product advantage over development considerations is a bad idea. Also, I think the added month of development effort is likely paid for a thousand times over (and likely much more than that) by the increased sales a performant product enjoys ..... especially in high-end DC!
 

ondma

Diamond Member
Mar 18, 2018
3,283
1,683
136
I'll probably never understand what Intel was thinking to drop SMT in their server silicon. I could understand them doing it from a position of strength, thinking they could iterate faster than the competition. It seems to me Intel convinced themselves to slowly abandon SMT because it was more convenient (server product delays, consumer product no longer compatible in the long run). Meanwhile AMD did the opposite, they made changes to the architecture to leverage SMT more.

Reminds me of something Keller talked about during a keynote: Intel people came to him thinking there wasn't much more that could be done to improve their cores; AMD people came to him thinking there had to be more out there. Different mindsets, different outcomes after a decade of hard work.

Alas poor SMT.
Well, to be fair, Keller went to AMD during the Bulldozer era, so there was plenty of obvious room for improvement. Probably Intel did not imagine how fast Zen would evolve; otherwise, perhaps they would have been more amenable to redesigning their big cores.
 

poke01

Diamond Member
Mar 8, 2022
3,877
5,202
106
LNL laptops are damn good, man. Getting such high battery life at such capacity, better than the M3 MacBook 🤣
The Dynabook has a 1200p screen vs the MacBook's 1664p resolution. On top of that, the CPU in the Dynabook performs at the level of an i7-1360P, which isn't M3 level. Which also means perf/W isn't comparable.

If we look at the picture, you can see the Dell Pro 14 Premium 268V, which has a higher resolution and a smaller battery, yet its performance is similar to the M3 15" Air.

What Intel did with Lunar Lake is optimise web and video playback battery life in separate scenarios. What it didn't do is fix the underlying issue: its cores.

Under load it will be a different story, and I mean that in the context of these laptops. Browsing the web AND playing 4K YouTube in the background will tell a different story; this is something almost no reviewer will test.
 
Jul 27, 2020
26,456
18,188
146
Probably Intel did not imagine how fast Zen would evolve
Their imagination died at some point. What kind of imagination thinks pairing lousy Skylake level E-cores with Golden Coves is a good idea? They could've simply put Golden Coves on a diet and used those as E-cores. At least, then we could've had SMT on all cores and even AVX-512 too. But the unimaginative Intel management could not see or didn't care about their customers. Pretended they could shove whatever they wanted down their throats and now they found out the hard way how that worked out for them.

When your competitor gloats that they deliver "consistent" performance on every core, you know you messed up bad. Sadly, Pat kept dreaming of outfabbing AMD by sinking billions. His last dying words will be, "18A, you were my only love!".
 

511

Diamond Member
Jul 12, 2024
3,240
3,176
106
What Intel did with Lunar Lake is optimise web and video playback battery life in separate scenarios. What it didn't do is fix the underlying issue: its cores.
Well, yes, the P-cores weren't fixed, and I have no hope for them.

Under load it will be a different story, and I mean that in the context of these laptops. Browsing the web AND playing 4K YouTube in the background will tell a different story; this is something almost no reviewer will test.
Only Geekerwan tests these scenarios, but not perfectly. Having a few Excel sheets open with YouTube/Spotify playing music while doing a bunch of web browsing is the true test of battery life, but no one has done that yet.
 

OneEng2

Senior member
Sep 19, 2022
725
968
106
Their imagination died at some point. What kind of imagination thinks pairing lousy Skylake level E-cores with Golden Coves is a good idea? They could've simply put Golden Coves on a diet and used those as E-cores. At least, then we could've had SMT on all cores and even AVX-512 too. But the unimaginative Intel management could not see or didn't care about their customers. Pretended they could shove whatever they wanted down their throats and now they found out the hard way how that worked out for them.
I wonder if it wasn't another P4 moment.

With P4 Intel decided that marketing was more important than actual performance. They decided to make an architecture devoted to clock speed because clock speed sells (at least it did back then).

With ARL, do you think it is possible that Intel decided that "more cores sells" and again decided that performance doesn't really matter?

... and are they right? Will the average consumer decide that a 52-core processor is two times better than a 24-core processor?

Your average consumer is certainly not like us. They likely have never seen a benchmark in their entire lives.

It is an interesting thought.
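One way to ground that question: Amdahl's law puts a hard ceiling on what the extra cores can deliver. A quick sketch, with illustrative (assumed) parallel fractions:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over a single core when a fixed
    fraction of the work is perfectly parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# How much does going from 24 to 52 cores buy at various workloads?
for p in (0.50, 0.90, 0.99):
    gain = amdahl_speedup(p, 52) / amdahl_speedup(p, 24)
    print(f"parallel fraction {p:.0%}: 52 cores give {gain:.2f}x over 24")
```

Even a 99%-parallel workload gains less than 1.8x from 52 cores versus 24, so "two times better" would only hold for embarrassingly parallel jobs like rendering.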
 

gdansk

Diamond Member
Feb 8, 2011
4,330
7,252
136
With P4 Intel decided that marketing was more important than actual performance. They decided to make an architecture devoted to clock speed because clock speed sells (at least it did back then).
No, go read a Microprocessor Report from back then. Speed demons kept winning in the late '90s, so Intel/IBM doubled down on it. They did not expect the imminent demise of Dennard scaling. That it also helped sell chips was a bonus, but it wasn't the reason Willamette was designed to clock high. Now you could argue they should have stopped earlier as it was obvious before Prescott but Intel itself had a very long pipeline and needed to ship something until C2D was ready.
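The Dennard-scaling point can be made concrete with a toy power-density model (illustrative constants only, assuming dynamic power ~ C·V²·f):

```python
def power_density(k: float, voltage_scales: bool) -> float:
    """Relative power density after a linear shrink by factor k.

    Per transistor: capacitance C ~ 1/k, frequency f ~ k, and
    voltage V ~ 1/k under classic (constant-field) Dennard scaling.
    """
    c = 1.0 / k
    f = k
    v = (1.0 / k) if voltage_scales else 1.0
    power = c * v**2 * f   # per-transistor dynamic power
    area = 1.0 / k**2      # per-transistor area
    return power / area

# One node shrink (k ~ 1.4): constant density vs. ~2x the heat.
print(power_density(1.4, voltage_scales=True))
print(power_density(1.4, voltage_scales=False))
```

With voltage scaling, power density stays flat (the "free lunch"); once voltage stops scaling, every shrink roughly doubles the heat per unit area, which is exactly what killed the clock-speed race mid-decade.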
 

dullard

Elite Member
May 21, 2001
25,932
4,522
126
Now you could argue they should have stopped earlier as it was obvious before Prescott but Intel itself had a very long pipeline and needed to ship something until C2D was ready.
I would argue the opposite: the P4 should have started later. The first six months of Willamette had lackluster performance (the long pipeline with its added stages really required higher clock speeds to be usable), and the first ten months required expensive RAM. The 1.7 or 1.8 GHz P4 with the 845 chipset, which allowed PC133 SDRAM instead of RDRAM, is where Intel should have started. The P4 would not have acquired the bad reputation that lives on to this day if they had only waited.

The rest of your post is good though. The P4 was designed in the Pentium II days. They didn't realize that the P3/Athlon would clock so high and be so good at the time that the P4 was developed.
 
Jul 27, 2020
26,456
18,188
146
With ARL, do you think it is possible that Intel decided that "more cores sells" and again decided that performance doesn't really matter?
Possibly if they could've put out a 16+32 part but they failed at that. Even a 10+24 part could've given them some good marketing fodder. The issues that 285K has (slightly higher RAM latency than 265K) possibly dissuaded them from piling on more cores.
 
Jul 27, 2020
26,456
18,188
146
The P4 was designed in the Pentium II days. They didn't realize that the P3/Athlon would clock so high and be so good at the time that the P4 was developed.
I've seen past reviews of almost every P4 generation and one thing AMD couldn't beat it at was media encoding. Really hard to go against a 3.73 GHz SSE2 beast crunching video frames like a cookie monster. P4 could've survived alongside Core as a kind of a vanity project (they had the money after all) just like they insisted on having the anemic Atom development continue.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,835
6,478
136
I would argue the opposite: the P4 should have started later. The first six months of Willamette had lackluster performance (the long pipeline with its added stages really required higher clock speeds to be usable), and the first ten months required expensive RAM. The 1.7 or 1.8 GHz P4 with the 845 chipset, which allowed PC133 SDRAM instead of RDRAM, is where Intel should have started. The P4 would not have acquired the bad reputation that lives on to this day if they had only waited.

The rest of your post is good though. The P4 was designed in the Pentium II days. They didn't realize that the P3/Athlon would clock so high and be so good at the time that the P4 was developed.

The P4 needed plenty of memory bandwidth, so SDRAM was a poor choice for it.

I've seen past reviews of almost every P4 generation and one thing AMD couldn't beat it at was media encoding. Really hard to go against a 3.73 GHz SSE2 beast crunching video frames like a cookie monster. P4 could've survived alongside Core as a kind of a vanity project (they had the money after all) just like they insisted on having the anemic Atom development continue.

Media encoding was possibly the best case for the P4 because there aren't many branches. Combine that with SSE2 and it was a powerhouse in that area. Software optimized for the P4 could do very well. Anything that had branches or relied on x87 did rather poorly.
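That branch sensitivity can be sketched with a toy CPI model where each mispredicted branch flushes a pipeline roughly as deep as it is long (all numbers below are illustrative assumptions, not measurements):

```python
def effective_cpi(base_cpi: float, branch_frac: float,
                  mispredict_rate: float, pipeline_penalty: int) -> float:
    """Simple model: effective cycles-per-instruction when each
    mispredicted branch costs roughly the pipeline depth in cycles."""
    return base_cpi + branch_frac * mispredict_rate * pipeline_penalty

# Assumed workload mixes: straight-line SIMD encode vs. branchy integer code,
# on a Willamette-style ~20-stage pipe vs. a P6/Athlon-class ~10-stage pipe.
encode = dict(branch_frac=0.05, mispredict_rate=0.02)
branchy = dict(branch_frac=0.20, mispredict_rate=0.10)

for name, wl in (("media encode", encode), ("branchy integer", branchy)):
    p4 = effective_cpi(1.0, pipeline_penalty=20, **wl)
    p6 = effective_cpi(1.0, pipeline_penalty=10, **wl)
    print(f"{name}: deep-pipe CPI {p4:.2f} vs short-pipe CPI {p6:.2f}")
```

Under these assumptions the encode workload barely notices the deep pipeline (1.02 vs 1.01 CPI), so the P4's clock advantage dominates, while the branchy workload pays a 40% CPI penalty, which matches the pattern the reviews showed.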
 

OneEng2

Senior member
Sep 19, 2022
725
968
106
No, go read a Microprocessor Report from back then. Speed demons kept winning in the late '90s, so Intel/IBM doubled down on it. They did not expect the imminent demise of Dennard scaling. That it also helped sell chips was a bonus, but it wasn't the reason Willamette was designed to clock high. Now you could argue they should have stopped earlier as it was obvious before Prescott but Intel itself had a very long pipeline and needed to ship something until C2D was ready.
Until AMD started using "model numbers" that conspicuously resembled the clock speeds of corresponding P4 parts, they were getting clobbered by the marketing of a higher-clocked P4.

Ironically, C2D returned to the PIII-style architecture .... with more pizzazz .... definitely more than Athlon could muster at the time. It also jumped onto the model number bandwagon and abandoned the MHz war permanently.

I'm still not sure I'm buying that Intel didn't create the P4 architecture with an eye on Marketecture. Yes, I agree that they didn't foresee the absolute DEAD END of clock scaling; however, I also don't think it occurred to them that AMD could combat the clock-speed Marketecture with their own "Model Number Marketecture" so successfully.

It also didn't occur to them that AMD would simply IGNORE Intel's attempt to push IA64 and VLIW into the server/workstation market, and that AMD would ALSO ignore Intel's push to proprietary RAMBUS memory.

In this window of time, Intel lost the MHz marketing war, lost the IA64 war to AMD's x64 "AMD64", lost (temporarily) the SIMD war against 3DNow!, and lost the memory interface war to the open DDR standard.
The rest of your post is good though. The P4 was designed in the Pentium II days. They didn't realize that the P3/Athlon would clock so high and be so good at the time that the P4 was developed.
The P4 Willamette clocked up to 2 GHz. That is FAR more than could have been expected of the PIII, which only barely reached 1 GHz.

At the release of C2D, the P4 was up around 3.6 GHz while C2D was a full GHz below that. Even C2D couldn't touch P4 clock speeds.... until a couple of die shrinks later ;).
Possibly if they could've put out a 16+32 part but they failed at that. Even a 10+24 part could've given them some good marketing fodder. The issues that 285K has (slightly higher RAM latency than 265K) possibly dissuaded them from piling on more cores.
Seems like they are headed that way with Nova Lake (52 cores): 2×(8P+16E)+4LPE. I think the headline will be "52 Cores!" .... and I think it will get spanked by Zen 6 and Zen 6 X3D variants in most applications and benchmarks (not Cinebench, though ;) ).
I've seen past reviews of almost every P4 generation and one thing AMD couldn't beat it at was media encoding. Really hard to go against a 3.73 GHz SSE2 beast crunching video frames like a cookie monster. P4 could've survived alongside Core as a kind of a vanity project (they had the money after all) just like they insisted on having the anemic Atom development continue.
IIRC, Intel just kept ahead of AMD with SSE"x". The applications and benchmarks punished AMD horribly for being behind as well.

Ironically, the flip is happening today with AVX-512. AMD is punishing Intel with AVX-512-optimized apps, which are now pretty common.

Just as with SMT, I just don't understand Intel's thinking here.
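For anyone curious which side of that flip their own chip is on, the ISA flags are easy to check on Linux. A sketch (`avx512f`, the "foundation" subset, is the baseline every AVX-512 implementation must provide):

```python
from pathlib import Path

def cpu_flags(cpuinfo_text: str) -> set[str]:
    """Extract the ISA feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def has_avx512(flags: set[str]) -> bool:
    # avx512f is the mandatory base; the other avx512* flags
    # (vl, bw, dq, vnni, ...) are optional extensions on top.
    return "avx512f" in flags

if __name__ == "__main__":
    try:
        flags = cpu_flags(Path("/proc/cpuinfo").read_text())
        print("AVX-512:", "yes" if has_avx512(flags) else "no")
    except FileNotFoundError:
        print("no /proc/cpuinfo (not Linux)")
```

On a Zen 4/5 box this reports yes; on Alder Lake and later consumer Intel parts it reports no, since AVX-512 was fused off there.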
 

Thunder 57

Diamond Member
Aug 19, 2007
3,835
6,478
136
I'm still not sure I am buying that Intel didn't create P4 architecture with an eye on Marketecture. Yes, I agree that they didn't foresee the absolute DEAD END of clock scaling; however, I also don't think it occurred to them that AMD could combat the clock speeds Marketecture with their own "Model Number Marketecture" so successfully.

Claiming the P4 was designed by marketing people is a common though false narrative. It was a nice side effect for the marketing team.
 
  • Like
Reactions: OneEng2