Discussion Intel Meteor, Arrow, Lunar & Panther Lakes + WCL Discussion Threads

Page 708 - AnandTech Forums

Tigerick

Senior member
Apr 1, 2022
Wildcat Lake (WCL) Preliminary Specs

Intel Wildcat Lake (WCL) is an upcoming mobile SoC replacing ADL-N. WCL consists of two tiles: a compute tile and a PCD tile. The compute tile is a true single die containing CPU, GPU and NPU, fabbed on the Intel 18A process. Last I checked, the PCD tile is fabbed on TSMC's N6 process. The tiles are connected through UCIe rather than Intel's D2D interconnect, a first for Intel. Expect a launch in Q2 2026, around Computex. In case people don't remember Alder Lake-N, I have created a table below comparing the detailed specs of ADL-N and WCL. Just for fun, I am throwing in LNL and the upcoming MediaTek D9500 SoC.

| | Intel Alder Lake-N | Intel Wildcat Lake | Intel Lunar Lake | MediaTek D9500 |
|---|---|---|---|---|
| Launch Date | Q1 2023 | Q2 2026 ? | Q3 2024 | Q3 2025 |
| Model | Intel N300 | ? | Core Ultra 7 268V | Dimensity 9500 5G |
| Dies | 2 | 2 | 2 | 1 |
| Node | Intel 7 + ? | Intel 18A + TSMC N6 | TSMC N3B + N6 | TSMC N3P |
| CPU | 8 E-cores | 2 P-cores + 4 LP E-cores | 4 P-cores + 4 LP E-cores | C1 1+3+4 |
| Threads | 8 | 6 | 8 | 8 |
| Max Clock | 3.8 GHz | ? | 5 GHz | ? |
| L3 Cache | 6 MB | ? | 12 MB | ? |
| TDP | 7 W | Fanless ? | 17 W | Fanless |
| Memory | 64-bit LPDDR5-4800 | 64-bit LPDDR5-6800 ? | 128-bit LPDDR5X-8533 | 64-bit LPDDR5X-10667 |
| Max Size | 16 GB | ? | 32 GB | 24 GB ? |
| Bandwidth | 38.4 GB/s | ~55 GB/s | 136 GB/s | 85.6 GB/s |
| GPU | UHD Graphics | ? | Arc 140V | G1 Ultra |
| EU / Xe | 32 EU | 2 Xe | 8 Xe | 12 |
| GPU Max Clock | 1.25 GHz | ? | 2 GHz | ? |
| NPU | NA | 18 TOPS | 48 TOPS | 100 TOPS ? |
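As a quick sanity check on the bandwidth row: peak DRAM bandwidth is just bus width times transfer rate. A minimal sketch (figures are theoretical peaks, not sustained throughput):

```python
def peak_bandwidth_gbps(bus_bits: int, mt_per_s: int) -> float:
    """Peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return bus_bits / 8 * mt_per_s * 1e6 / 1e9

# WCL (rumoured): 64-bit LPDDR5-6800 -> ~54.4 GB/s (the "~55 GB/s" above)
print(peak_bandwidth_gbps(64, 6800))    # 54.4
# LNL: 128-bit LPDDR5X-8533 -> ~136.5 GB/s
print(peak_bandwidth_gbps(128, 8533))   # 136.528
# D9500: 64-bit LPDDR5X-10667 -> ~85.3 GB/s
print(peak_bandwidth_gbps(64, 10667))   # 85.336
```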






(attached slides: PPT1.jpg, PPT2.jpg, PPT3.jpg)



With Hot Chips 34 starting this week, Intel will unveil technical information on the upcoming Meteor Lake (MTL) and Arrow Lake (ARL), the new generation of platforms after Raptor Lake. Both MTL and ARL represent a new direction in which Intel moves to multiple chiplets combined into one SoC platform.

MTL also introduces a new compute tile based on the Intel 4 process, which uses EUV lithography, a first for Intel. Intel expects to ship MTL mobile SoCs in 2023.

ARL will come after MTL, so Intel should be shipping it in 2024; that is what Intel's roadmap tells us. The ARL compute tile will be manufactured on the Intel 20A process, Intel's first to use GAA transistors, called RibbonFET.



(attached: LNL-MX.png)
 

Attachments

  • PantherLake.png (283.5 KB)
  • LNL.png (881.8 KB)
  • INTEL-CORE-100-ULTRA-METEOR-LAKE-OFFCIAL-SLIDE-2.jpg (181.4 KB)
  • Clockspeed.png (611.8 KB)

Hulk

Diamond Member
Oct 9, 1999
L3 aside, it appears the Cyberpunk 2077 update focuses on scheduling to get to that mind-blowing +32% uplift in lows. Core scheduling appears to be the primary issue here, something 0x114 is expected to fix (hopefully).
The only thing that doesn't sit right with me about this explanation is: why wouldn't people have simply shut down the E-cores and immediately seen improved gaming performance on ARL, the same way they did on ADL when hybrid first hit the streets?

I'm pulling for ARL, but I am dubious that it's this simple, especially now that scheduling is simpler for ARL since the removal of HT.
 

ajsdkflsdjfio

Member
Nov 20, 2024
Some have imagined that Intel should drop Lion Cove completely, and replace the P-cores with a better Skymont/Darkmont.

I agree that the E-core has a MUCH smaller footprint, and that the P-cores are not space-efficient (compared to Zen 5, for instance), but that doesn't mean the E-core could be made to perform as well across the board in single-threaded applications as a P-core does.

I guess I just always assume that Intel engineers are very talented. Who would put useless P-cores in a design where they could have just beefed up the E-cores instead?

I just can't buy into the line of thinking where the P-core is totally useless and they could have easily used all-Skymont with a little more special sauce to raise the IPC another 10%.
The P-cores are obviously still needed, since Intel needs its main product line to be competitive with the latest-generation Zen products. Replacing everything with E-cores would mean intentionally giving up the performance crown by 10-20%, not just trading blows within ±5%. That would do horrible damage to their mindshare and ultimately to their profits. Unfortunately, that's the way the market works and the way a lot of consumers judge which brand to buy. It's the same reason Intel blew up its cores so large and ran its CPUs at insane power limits from Alder Lake to Raptor Lake. People like to look at the benchmarks and the balls-to-the-wall 100%-load performance numbers without considering how much a product will actually benefit them. Arrow Lake could have beaten regular Zen 5 in price, performance, and efficiency, and people would still be shouting from the rooftops about X3D.

The 9800X3D is only 8 cores and you are paying 480 dollars for it. The performance benefit most people will see from it is also far from the +25-30% reported by tech channels. How many people with enthusiast-grade hardware play at 1080p, or even 720p at low settings, to intentionally hit a CPU limit? How many people even have enthusiast-grade hardware (a 4080 at least) to take full advantage of the 9800X3D? Even then, upgrading your CPU to increase frames usually means you were already running decent frame rates at your target resolution. How much is 10% better frames really worth when you are going from 200 to 220 fps? To me, at least, it's pretty much unnoticeable. I understand spending extra money on a GPU, since that usually means you literally cannot game at your target resolution with high frames without the upgrade, but this usually isn't the case when comparing top-end CPU to top-end CPU. There are exceptions, like Stalker 2 and Monster Hunter, which are pretty CPU-limited even at lower frame rates, but going into the future I expect this to remain the exception rather than the rule. So yeah, most people playing most games hardly ever see even half of that advertised +25-30% X3D performance, and even then it doesn't really improve their experience by much. Regardless, the fact that Ryzen has this theoretical beast does a lot of harm to Intel's mindshare and sales. Arrow Lake is pretty uncompetitive anyway, but the reason most people think Arrow Lake is bad is the comparison to X3D, not to the regular Zen 5 lineup, which is where the bulk of people upgrading should be buying anyway.

Same with RDNA3. RDNA3 certainly has its downsides in lower ray-tracing performance and a weaker feature set (especially in productivity), but across its entire lineup it provides a +20% rasterization boost for its given price bracket. Still, people shit on RDNA3 for not being able to compete with the 4090, and shit on RDNA4/Battlemage for not competing at the high end even though they are set to make huge strides in performance for the mid-range and low-end segments most of these people are going to buy anyway. AMD significantly lost market share with the RDNA3 generation even though the 4090 is a halo product bought by few, and the rest of the RDNA lineup is competitive or even better if you're not worried about ray tracing or productivity, which is most people.
 

Doug S

Diamond Member
Feb 8, 2020
Maybe some lower-end Arrow Lakes will arrive next year that are all E-core? Smaller die and less impressive specs, so it would sell for less, but it could be a sleeper hit for people in the know.
 

Hulk

Diamond Member
Oct 9, 1999
So Apple has better Thunderbolt 5 controllers than Intel. This is the advantage of integrating controllers on the SoC.

Arrow Lake H/HX won't improve on this.

That is great, but sequential speeds aren't really important unless you are moving large files from one fast drive to another, which is why in day-to-day use a "fast" SATA drive can feel the same as an NVMe SSD with 10 times the sequential throughput. Don't get me wrong, faster is always better if resources are infinite, but if they are not, you generally balance overall performance against cost.

But yes, your point is well taken. Apple has the better controller, along with everything else.
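As a rough illustration of why sequential throughput and small random accesses feel so different, here is a sketch that times 4K reads over the same file in order versus shuffled. Absolute numbers depend entirely on the drive and OS cache (a fully cached file may show little gap); the point is only that the access pattern, not the link speed, dominates:

```python
import os
import random
import tempfile
import time

CHUNK = 4096   # 4K blocks, as in the benchmarks discussed above
N = 2048       # 8 MiB test file

# Create a throwaway file of random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * N))
    path = f.name

def timed_read(offsets):
    """Read one 4K chunk at each offset, unbuffered; return elapsed seconds."""
    with open(path, "rb", buffering=0) as f:
        t0 = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
        return time.perf_counter() - t0

offsets = [i * CHUNK for i in range(N)]
seq = timed_read(offsets)                      # in order
rnd = timed_read(random.sample(offsets, N))    # shuffled
print(f"sequential: {seq:.4f}s  random: {rnd:.4f}s")
os.remove(path)
```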
 

coercitiv

Diamond Member
Jan 24, 2014
So Apple has better Thunderbolt 5 controllers than Intel. This is the advantage of integrating controllers on the SoC.

Arrow Lake H/HX won't improve on this.
You honestly believe those are accurate results, limited by the TB5 controller? 1 MB/s random writes?!

I'm getting tired of these YouTubers who don't even bother to understand what they are looking at.
 

OneEng2

Senior member
Sep 19, 2022
The P-cores are obviously still needed, since Intel needs its main product line to be competitive with the latest-generation Zen products. Replacing everything with E-cores would mean intentionally giving up the performance crown by 10-20%, not just trading blows within ±5%. That would do horrible damage to their mindshare and ultimately to their profits. Unfortunately, that's the way the market works and the way a lot of consumers judge which brand to buy. It's the same reason Intel blew up its cores so large and ran its CPUs at insane power limits from Alder Lake to Raptor Lake. People like to look at the benchmarks and the balls-to-the-wall 100%-load performance numbers without considering how much a product will actually benefit them. Arrow Lake could have beaten regular Zen 5 in price, performance, and efficiency, and people would still be shouting from the rooftops about X3D.

The 9800X3D is only 8 cores and you are paying 480 dollars for it. The performance benefit most people will see from it is also far from the +25-30% reported by tech channels. How many people with enthusiast-grade hardware play at 1080p, or even 720p at low settings, to intentionally hit a CPU limit? How many people even have enthusiast-grade hardware (a 4080 at least) to take full advantage of the 9800X3D? Even then, upgrading your CPU to increase frames usually means you were already running decent frame rates at your target resolution. How much is 10% better frames really worth when you are going from 200 to 220 fps? To me, at least, it's pretty much unnoticeable. I understand spending extra money on a GPU, since that usually means you literally cannot game at your target resolution with high frames without the upgrade, but this usually isn't the case when comparing top-end CPU to top-end CPU. There are exceptions, like Stalker 2 and Monster Hunter, which are pretty CPU-limited even at lower frame rates, but going into the future I expect this to remain the exception rather than the rule. So yeah, most people playing most games hardly ever see even half of that advertised +25-30% X3D performance, and even then it doesn't really improve their experience by much. Regardless, the fact that Ryzen has this theoretical beast does a lot of harm to Intel's mindshare and sales. Arrow Lake is pretty uncompetitive anyway, but the reason most people think Arrow Lake is bad is the comparison to X3D, not to the regular Zen 5 lineup, which is where the bulk of people upgrading should be buying anyway.

Same with RDNA3. RDNA3 certainly has its downsides in lower ray-tracing performance and a weaker feature set (especially in productivity), but across its entire lineup it provides a +20% rasterization boost for its given price bracket. Still, people shit on RDNA3 for not being able to compete with the 4090, and shit on RDNA4/Battlemage for not competing at the high end even though they are set to make huge strides in performance for the mid-range and low-end segments most of these people are going to buy anyway. AMD significantly lost market share with the RDNA3 generation even though the 4090 is a halo product bought by few, and the rest of the RDNA lineup is competitive or even better if you're not worried about ray tracing or productivity, which is most people.
... and I couldn't agree more.

My biggest criticism of ARL and LNL is that they are expensive for Intel to produce. I think they are actually pretty good in terms of performance (in most tasks) and power efficiency.

I also think they don't make very good DC chips (no SMT or AVX512), so I am dubious about the strategic design direction.

X3D was indeed a master stroke in marketing. I mean, seriously (no offence intended to those here) who cares about gamers? It is such a tiny percentage of consumers. Still, reviewers far and wide rave over Zen 5 because of this.

Now, note, Intel did its level best over the past 20 years to make gaming the bar to beat in mindshare. For the longest time Intel enjoyed lower latency memory access than AMD and better gaming while AMD played the "better at MT and more energy efficient card" .... and Intel did its best to convince the market that ST and gaming performance were what mattered most.
 

poke01

Diamond Member
Mar 8, 2022
You honestly believe those are accurate results, limited by the TB5 controller? 1 MB/s random writes?!

I'm getting tired of these YouTubers who don't even bother to understand what they are looking at.
Yes, he tested using a TB5 cable and a TB5 enclosure. What else could be limiting it on the Razer but a bad implementation of TB5?

Clearly writes are impacted, both sequential and random. Even reads are not on par with the Mac.
 

poke01

Diamond Member
Mar 8, 2022
That is great, but sequential speeds aren't really important unless you are moving large files from one fast drive to another, which is why in day-to-day use a "fast" SATA drive can feel the same as an NVMe SSD with 10 times the sequential throughput. Don't get me wrong, faster is always better if resources are infinite, but if they are not, you generally balance overall performance against cost.

But yes, your point is well taken. Apple has the better controller, along with everything else.
You can clearly see the Mac is also ahead in random read/write.
 

poke01

Diamond Member
Mar 8, 2022
If people want more proof the "TB5" is watered down in the Razer: its TB5 port is only capable of DP 1.4, not DP 2.1.
 

coercitiv

Diamond Member
Jan 24, 2014
Yes, he tested using a TB5 cable and a TB5 enclosure. What else could be limiting it on the Razer but a bad implementation of TB5?
How can you look at an SSD speed test and not hear alarm bells when a fast PCIe SSD in a modern TB enclosure is running slower than a cheap USB flash disk in 4K random writes?

If people want more proof the "TB5" is watered down in the Razer: its TB5 port is only capable of DP 1.4, not DP 2.1.
So now it's Razer's fault, no longer Intel's?
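To put rough numbers on why that result should ring alarm bells: 1 MB/s of 4K random writes is only a few hundred IOPS, which points at per-command latency or a measurement artifact, nowhere near the TB5 link limit. A back-of-the-envelope sketch (the TB5 figure is the raw 80 Gbps line rate; real PCIe-tunnelled throughput is lower):

```python
def iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert a throughput in MB/s at a given block size into IOPS."""
    return mb_per_s * 1e6 / block_bytes

def link_limit_mb_s(gbps: float) -> float:
    """Raw line rate in MB/s; ignores protocol/tunnelling overhead."""
    return gbps * 1e9 / 8 / 1e6

print(iops(1.0))            # ~244 IOPS at 4K -- queue-depth-1 latency territory
print(link_limit_mb_s(80))  # 10,000 MB/s raw for TB5's symmetric 80 Gbps
```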
 

poke01

Diamond Member
Mar 8, 2022
How can you look at an SSD speed test and not hear alarm bells when a fast PCIe SSD in a modern TB enclosure is running slower than a cheap USB flash disk in 4K random writes?


So now it's Razer's fault, no longer Intel's?
You know what I meant; these are Intel's controllers, which Razer marketed as "TB5".
It's like people act as if Apple can't make better TB controllers than Intel…
 

poke01

Diamond Member
Mar 8, 2022
How can you look at an SSD speed test and not hear alarm bells when a fast PCIe SSD in a modern TB enclosure is running slower than a cheap USB flash disk in 4K random writes?
Because I used CrystalDiskMark before and it's accurate. I also know that Alex has access to TB5 products. So the only explanation left is the shoddy implementation by Razer/Intel.

People gobbled up the marketing, I guess…
 

Attachments

  • 1733961600792.png (68.6 KB)

poke01

Diamond Member
Mar 8, 2022
4,559
5,856
106
This is what proper TB5 specs should say.

MacBook Pro with M4 Pro:
(attached: 1733962105941.png)

Razer Blade 18 with a Barlow Ridge "TB5" controller:
(attached: 1733961600792.png)

Notice the difference in DP spec. Clearly the bandwidth is limited on the Razer.
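For context on how big that DP spec gap is, the effective link rates work out roughly as follows. A sketch using the standard 4-lane modes and their encoding overheads (note "DP 2.1" alone doesn't guarantee the top UHBR20 tier, which is the one relevant here):

```python
def dp_effective_gbps(lane_gbps: float, lanes: int,
                      enc_num: int, enc_den: int) -> float:
    """Effective DisplayPort payload bandwidth after line encoding."""
    return lane_gbps * lanes * enc_num / enc_den

# DP 1.4 HBR3: 4 x 8.1 Gbps with 8b/10b encoding -> ~25.9 Gbps effective
dp14_hbr3 = dp_effective_gbps(8.1, 4, 8, 10)
# DP 2.1 UHBR20: 4 x 20 Gbps with 128b/132b encoding -> ~77.6 Gbps effective
dp21_uhbr20 = dp_effective_gbps(20.0, 4, 128, 132)
print(dp14_hbr3, dp21_uhbr20)   # roughly a 3x difference
```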
 

poke01

Diamond Member
Mar 8, 2022
4,559
5,856
106
Hah, this just makes it even funnier!!!


In fact, the whole setup felt pretty laggy just navigating around the laptop and via a Web browser — not something you’d expect with a top-of-the-line Intel CPU and Nvidia GeForce RTX 4090 GPU inside. I’m pretty sure the Thunderbolt connection negatively contributed. Streaming a 4K, 60Hz video stuttered badly when run on the external display that was connected to the Thunderbolt dock — well over 30 percent of the frames were lost. Playing back the same video on just the laptop itself wasn’t perfect, but it was much, much better.
Maingear blamed this on the Thunderbolt 5 cable. “I haven’t tested a plethora of cables yet, but the Apple TB5 Pro cable has worked consistently for us,” a Maingear representative wrote in an email. “Where Thunderbolt 4 you were able to get away with a decent USB-C cable, Thunderbolt 5 seems to work best with certified Thunderbolt 5 cables.”
That may be true, but Kensington’s Thunderbolt 5 cable was labeled appropriately and this should be certified. And why should I need to buy an Apple cable to get my PC docking station to work?

The experience, quite frankly, stunk. While running the PCMark test with the SSD directly connected, I recorded a score of 1,743 or 252.3 MB/s. But while connected to the dock, the SSD’s PCMark score plunged to just 1,108 or 159.3MB/s. Was that the dock’s fault or the Thunderbolt 5 connection? One of the two, most likely.
When I directly connected the SSD to the laptop and copied my folder to the desktop, it took an average of one minute and five seconds. While streaming video, the same task took 58 seconds longer or two minutes and three seconds. That seemed quite extreme.

Weirdly, when I connected the SSD to the dock and then performed the folder copy, it finished in 41 sec

Well, that answers the mystery: Intel's TB5 controllers are BAD!!! The author needed an Apple TB5 cable to get proper monitor output from the dock, so even the TB5 cables that pair with Intel's controllers are broken.
 

poke01

Diamond Member
Mar 8, 2022
DP2.1 means nothing since it's basically 3 different specs wearing the same hat.
I know; I trust Apple to include the 80 Gbps DP 2.1 spec, considering Apple's TB5 cable is good and actually works with docks and monitors, which a Maingear rep also backed up.

Oh, and Apple's TB controllers actually deliver the correct sequential/random read and write speeds. So more than likely it's a better implementation than Intel's.
 

511

Diamond Member
Jul 12, 2024
I know; I trust Apple to include the 80 Gbps DP 2.1 spec, considering Apple's TB5 cable is good and actually works with docks and monitors, which a Maingear rep also backed up.

Oh, and Apple's TB controllers actually deliver the correct sequential/random read and write speeds. So more than likely it's a better implementation than Intel's.
Intel TB controllers
 

poke01

Diamond Member
Mar 8, 2022
Intel TB controllers
Apple hasn't used Intel TB controllers since the M2. Intel provides discrete TB5 controllers called Barlow Ridge, while Apple now uses custom TB5 controllers integrated into the M4 SoC.

Edit: the TB5 cables that Apple sells likely use Apple's custom silicon too.
 

OneEng2

Senior member
Sep 19, 2022
Yes, but it has worked many times in my personal experience
I am not sure that past experience on a monolithic CPU design with only one kind of core (which is my experience) is representative. My guess is that in the BIOS, or even in the processor's scheduler on ARL, there may be much lower-level control that could prohibit this trick from working.

FWIW, I have only used thread affinity to assign a single core to a high-priority (I/O) task on a manufacturing test system. I have never attempted to isolate a single thread to a single core before. Have you? If so, on what CPU?
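For what it's worth, the pinning itself is a one-liner on Linux; the open question is whether the firmware/scheduler on a hybrid part honors the intent. A minimal sketch using the Linux-only `os.sched_setaffinity` (core 0 is an arbitrary illustrative choice):

```python
import os

# Pin the current process (pid 0 = self) to a single core.
# Linux-only API; on hybrid parts the number maps to whichever
# logical CPU the OS enumerated there (P- or E-core).
target_core = 0
os.sched_setaffinity(0, {target_core})

# Verify the affinity mask took effect.
assert os.sched_getaffinity(0) == {target_core}
print("pinned to core", target_core)
```

On Windows the equivalent is `SetProcessAffinityMask` / Task Manager's affinity dialog, which is what most of the "disable the E-cores" experiments used.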
 

poke01

Diamond Member
Mar 8, 2022
Interesting to see how Dell implements the JHL9580 controller in ARL-H laptops. Hopefully they don't pull a Razer and use DP 1.4…
(attached: IMG_1032.jpeg)

Dell will implement dual TB5 ports, unlike Razer; this is only possible with the JHL9580.