Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
3,361
858
136

I'm still wondering how Intel is going to fit 2 more Gracemont clusters in the same die as ADL with Raptor Lake on the same process? I wonder if there was room on the die or if they are getting some space by working the node a bit? 8+16 is going to be a handful of compute in MT scenarios, which is where the current 12900K struggles against the 5950X. But then again AMD is bringing more horsepower to the table with the 7950X. Hopefully they'll release within a few months of one another so we can have some tasty shootouts!
 

Exist50

Senior member
Aug 18, 2016
610
543
136

I'm still wondering how Intel is going to fit 2 more Gracemont clusters in the same die as ADL with Raptor Lake on the same process? I wonder if there was room on the die or if they are getting some space by working the node a bit? 8+16 is going to be a handful of compute in MT scenarios, which is where the current 12900K struggles against the 5950X. But then again AMD is bringing more horsepower to the table with the 7950X. Hopefully they'll release within a few months of one another so we can have some tasty shootouts!
They'll just make it a bit longer. If you look at a die shot, there's not a ton of whitespace from adding on more cores.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,054
2,856
136
I'm still wondering how Intel is going to fit 2 more Gracemont clusters in the same die as ADL with Raptor Lake on the same process?
Same package you mean?

Well, the die size is a lot smaller than expected at around 210 mm². The two clusters will probably add a little over 20 mm², so it's perfectly doable.
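A quick back-of-the-envelope sketch of that claim. The ~210 mm² and ~10 mm² per-cluster figures are the thread's estimates, not official Intel numbers:

```python
# Back-of-the-envelope die-size check using the figures quoted in this thread
# (~210 mm^2 for Alder Lake-S, "a little over 20 mm^2" for two extra clusters).
adl_die_mm2 = 210.0      # rough Alder Lake-S 8+8 die area (thread estimate)
cluster_mm2 = 10.5       # assumed area per extra Gracemont cluster (thread estimate)
extra_clusters = 2       # Raptor Lake goes from two E-core clusters to four

rpl_estimate = adl_die_mm2 + extra_clusters * cluster_mm2
growth_pct = 100 * (rpl_estimate - adl_die_mm2) / adl_die_mm2
print(f"Estimated Raptor Lake-S die: {rpl_estimate:.0f} mm^2 (+{growth_pct:.0f}%)")
```

On these assumptions the die only grows by about 10%, which is consistent with "just make it a bit longer."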
 
  • Like
Reactions: coercitiv

Asterox

Senior member
May 15, 2012
800
1,287
136
Raptor Lake support on "older Z690 and B660 motherboards"? Probably, or at least expected, or maybe... :mask:

"The Z790 and B760 chipsets are likely to be the first to support Raptor Lake’s new feature called Digital Linear Voltage Regulator, supposedly lowering the CPU power by up to 25%. This new feature is listed on leaked slides outlining the changes arriving with Raptor Lake"

 

IntelUser2000

Elite Member
Oct 14, 2003
8,054
2,856
136
Why does it talk about DLVR when the leaked Intel slide says it's targeted for mobile? Videocardz is speculating that Raptor Lake will need 7xx series chipsets.

Z690 etc. might not support separate voltage rails for core clusters.
Things like separate voltage rails and the ability for the ring bus to decouple from the E cores are something they can do. Whether it makes sense to do so from a technical and cost point of view, I don't know. The former would likely need 7xx boards to account for different VRM requirements.

3) Rumor: Mooreslawisdead claims core frequency improvements.
We now know 5.5GHz "rumor" is true because 12900KS will reach that speed.

Also, as long as Intel has the clock speed advantage, they can use that to make up for the deficiency in uarch.

The third point is that MLiD has been spot-on in so many Intel-related leaks; he definitely has good sources. Yes, in regard to AMD/Nvidia he has been off, at least in the past, but his Intel sources are very good.

Clock frequency increases can also refer to MT workloads, not necessarily ST. The 12900KS's 5.5GHz is extremely situational.
 
Last edited:
  • Like
Reactions: Kaluan and CHADBOGA

Mopetar

Diamond Member
Jan 31, 2011
6,670
3,716
136
We now know 5.5GHz "rumor" is true because 12900KS will reach that speed.

Also, as long as Intel has the clock speed advantage, they can use that to make up for the deficiency in uarch.
To some degree, as long as you don't mind your CPU putting out 200W. It's not really a problem for enthusiasts who don't mind buying a cooler that can handle it, but we don't know what availability of the KS is going to be, given that it's very likely a top-end bin that very few chips fall into.

Even if 5.5 GHz is attainable for more CPUs on the next generation, it's probably still going to require pushing the chip well beyond any sane power levels. It really isn't even worth it either since you still get the vast majority of the performance even when lowering the power levels to something more typical.


Source: https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/3.html

You don't even lose 1% performance by capping the CPU at 125W. Applications can be hit a bit harder: TPU found that you'd get 86% of the performance on average at 125/125 compared to 241/241. Intel needs more of a uarch uplift because they've run out of room to compensate with clock speed.
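For a rough sense of what that 86%-at-half-power figure means for efficiency, here's a sketch assuming the chip actually sustains its PL1/PL2 cap in heavy multithreaded loads (only true for all-core workloads):

```python
# Rough perf-per-watt comparison from the TPU averages quoted above.
perf_241 = 1.00   # normalized application performance at 241 W / 241 W
perf_125 = 0.86   # ~86% of that performance at 125 W / 125 W (TPU average)

eff_gain = (perf_125 / 125) / (perf_241 / 241)
print(f"Perf/W advantage at 125 W: {eff_gain:.2f}x")
```

So giving up ~14% performance roughly buys a ~1.7x perf-per-watt improvement, under these assumptions.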
 
  • Like
Reactions: scannall

mikk

Diamond Member
May 15, 2012
3,449
1,197
136
You don't even lose 1% performance by capping the CPU at 125W. Applications can be hit a bit harder: TPU found that you'd get 86% of the performance on average at 125/125 compared to 241/241. Intel needs more of a uarch uplift because they've run out of room to compensate with clock speed.

It isn't using 241W in games, not even close. They can set PL1 to 241W, but it doesn't mean that power is actually used.
 
  • Like
Reactions: Zucker2k

Mopetar

Diamond Member
Jan 31, 2011
6,670
3,716
136
It isn't using 241W in games, not even close. They can set PL1 to 241W, but it doesn't mean that power is actually used.
I realize that, but that's mainly because most games aren't going to use enough threads to fully saturate the CPU, and even the games that can scale to 8 cores or beyond still tend to load one or two cores far more heavily than the others. It has a bigger impact in applications like Cinebench that load up all of the cores. That's where you wind up doubling the power for a ~50% gain in performance, and that's typically the best-case scenario.

But it just shows that Intel can't really rely on clock speed increases. It might allow them to eke out a little bit more gaming performance just because performance is typically still bottlenecked by a single thread in many games, but outside of that it doesn't help them because it makes their CPUs come across as inefficient when they can guzzle almost twice the power but still lose to a 5950X in some benchmarks. We saw AMD gain a similar reputation in the GPU market because they pushed cards like Polaris well beyond where they should have been just to try to get a few more percentage points worth of performance.
 
  • Like
Reactions: scannall

eek2121

Golden Member
Aug 2, 2005
1,779
1,953
136

I'm still wondering how Intel is going to fit 2 more Gracemont clusters in the same die as ADL with Raptor Lake on the same process? I wonder if there was room on the die or if they are getting some space by working the node a bit? 8+16 is going to be a handful of compute in MT scenarios, which is where the current 12900K struggles against the 5950X. But then again AMD is bringing more horsepower to the table with the 7950X. Hopefully they'll release within a few months of one another so we can have some tasty shootouts!
Area matters less than power consumption, I guess, and Intel apparently is making some innovations in the power consumption department. The issue is they are sticking mostly with the higher power consumption numbers and doubling up on 'E' cores...note that I'm referring to power consumption, rather than TDP.
Why does it talk about DLVR when the leaked Intel slide says it's targeted for mobile? Videocardz is speculating that Raptor Lake will need 7xx series chipsets.



Things like separate voltage rails and the ability for the ring bus to decouple from the E cores are something they can do. Whether it makes sense to do so from a technical and cost point of view, I don't know. The former would likely need 7xx boards to account for different VRM requirements.



We now know 5.5GHz "rumor" is true because 12900KS will reach that speed.

Also, as long as Intel has the clock speed advantage, they can use that to make up for the deficiency in uarch.

The third point is that MLiD has been spot-on in so many Intel-related leaks; he definitely has good sources. Yes, in regard to AMD/Nvidia he has been off, at least in the past, but his Intel sources are very good.

Clock frequency increases can also refer to MT workloads, not necessarily ST. The 12900KS's 5.5GHz is extremely situational.
Okay, show me a 12900KS in the wild. I'm not saying it won't happen, but horse before cart and all...

To some degree, as long as you don't mind your CPU putting out 200W. It's not really a problem for enthusiasts who don't mind buying a cooler that can handle it, but we don't know what availability of the KS is going to be, given that it's very likely a top-end bin that very few chips fall into.

Even if 5.5 GHz is attainable for more CPUs on the next generation, it's probably still going to require pushing the chip well beyond any sane power levels. It really isn't even worth it either since you still get the vast majority of the performance even when lowering the power levels to something more typical.


Source: https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/3.html

You don't even lose 1% performance by capping the CPU at 125W. Applications can be hit a bit harder: TPU found that you'd get 86% of the performance on average at 125/125 compared to 241/241. Intel needs more of a uarch uplift because they've run out of room to compensate with clock speed.
If you are trying to claim that the 12900K only loses 1% performance when you halve the power, I have a bridge to sell you. If the performance difference were only 1%, Intel would have set the TDP and/or power consumption to half of what they are. The chart you posted above was specific to gaming, and even then it only applies to specific titles; if they measured frames in the games I play, the loss would be greater. They are making great progress, but they need to improve that efficiency.
 
Last edited:
  • Like
Reactions: Kaluan

mikk

Diamond Member
May 15, 2012
3,449
1,197
136
But it just shows that Intel can't really rely on clock speed increases. It might allow them to eke out a little bit more gaming performance just because performance is typically still bottlenecked by a single thread in many games, but outside of that it doesn't help them because it makes their CPUs come across as inefficient when they can guzzle almost twice the power but still lose to a 5950X in some benchmarks. We saw AMD gain a similar reputation in the GPU market because they pushed cards like Polaris well beyond where they should have been just to try to get a few more percentage points worth of performance.
It depends primarily on the gaming (MT) clock speed; afaik the 12900K typically runs at 4900 MHz during gaming. Some say there are power improvements in Raptor Lake, which suggests to me that 10ESF is a little improved; +200 MHz is something I would expect. Not much, but Raptor Lake supposedly adds some other improvements, so as a whole we might see a 10-20% improvement.
 

Mopetar

Diamond Member
Jan 31, 2011
6,670
3,716
136
If you are trying to claim that the 12900K only loses 1% performance when you halve the power, I have a bridge to sell you. If the performance difference were only 1%, Intel would have set the TDP and/or power consumption to half of what they are. The chart you posted above was specific to gaming, and even then it only applies to specific titles; if they measured frames in the games I play, the loss would be greater. They are making great progress, but they need to improve that efficiency.
From the testing done by TPU across a wide range of applications, they found that running 241W for both PL1 and PL2 was around a 17% gain over running with 125W for both. The best cases were rendering benchmarks, which tended to be upwards of 50%, but there were others, closer to games, where it basically didn't matter.

It still tanks the efficiency, though. It also doesn't add a lot in the average benchmark and just makes the chip look like a hot mess, when that really isn't the truth. Any chip's efficiency eventually goes down the toilet when the clocks are cranked high enough, but Intel went out of their way to show Alder Lake in that light.
 

Hulk

Diamond Member
Oct 9, 1999
3,361
858
136
From the testing done by TPU across a wide range of applications, they found that running 241W for both PL1 and PL2 was around a 17% gain over running with 125W for both. The best cases were rendering benchmarks, which tended to be upwards of 50%, but there were others, closer to games, where it basically didn't matter.

It still tanks the efficiency, though. It also doesn't add a lot in the average benchmark and just makes the chip look like a hot mess, when that really isn't the truth. Any chip's efficiency eventually goes down the toilet when the clocks are cranked high enough, but Intel went out of their way to show Alder Lake in that light.
I'm not so sure about that, at least for the 12700K. My 12700K with mobo settings "auto" will draw about 170W max. It's high compared to Zen 3 but not incredibly so especially considering it performs at 5900X levels and sometimes better. If reviewers are pushing the 12700K higher than 170-175W then they are doing that through manually adjusting settings.
 

Mopetar

Diamond Member
Jan 31, 2011
6,670
3,716
136
I'm not so sure about that, at least for the 12700K. My 12700K with mobo settings "auto" will draw about 170W max. It's high compared to Zen 3 but not incredibly so especially considering it performs at 5900X levels and sometimes better. If reviewers are pushing the 12700K higher than 170-175W then they are doing that through manually adjusting settings.
The TPU benchmarks were done with a 12900K, so there's a bit of a difference. They also explicitly changed the PL1 and PL2 settings to various configurations. The top-end setting was 241W/241W, which basically lets the chip draw that much power under any circumstances, as long as whatever it's running can actually pull that much and it doesn't hit a thermal cutoff point where it will automatically throttle.

I'm not sure what "auto" on your board entails for a 12700K, but it's probably lower than 241W. Even if it were 241W, you'd be unlikely to actually reach it unless you're running something where it can max out all of the cores. Rendering benchmarks generally seem to be the best in this regard. Other applications might heavily stress a single core and cause the power draw to spike a bit, but they aren't going to cause all cores to boost to 5+ GHz.
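The PL1/PL2 behaviour being described can be sketched as a toy model. This is simplified: real silicon tracks an exponentially weighted moving average of package power over a time constant tau, not a plain threshold:

```python
# Toy model of Intel's PL1/PL2 turbo behaviour (simplified sketch).
def power_cap(avg_power_w: float, pl1_w: float, pl2_w: float) -> float:
    """While the running average power is below PL1 the chip may burst
    to PL2; once the average reaches PL1 it must fall back to PL1."""
    return pl2_w if avg_power_w < pl1_w else pl1_w

# With the 241/241 setting the cap never drops:
print(power_cap(200, 241, 241))   # 241
# With a stock-style 125/241 split, a sustained all-core load settles at 125 W:
print(power_cap(130, 125, 241))   # 125
```

That's why setting PL1 = PL2 = 241W only matters for workloads that can actually sustain that draw.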
 

Hulk

Diamond Member
Oct 9, 1999
3,361
858
136
The TPU benchmarks were done with a 12900K, so there's a bit of a difference. They also explicitly changed the PL1 and PL2 settings to various configurations. The top-end setting was 241W/241W, which basically lets the chip draw that much power under any circumstances, as long as whatever it's running can actually pull that much and it doesn't hit a thermal cutoff point where it will automatically throttle.

I'm not sure what "auto" on your board entails for a 12700K, but it's probably lower than 241W. Even if it were 241W, you'd be unlikely to actually reach it unless you're running something where it can max out all of the cores. Rendering benchmarks generally seem to be the best in this regard. Other applications might heavily stress a single core and cause the power draw to spike a bit, but they aren't going to cause all cores to boost to 5+ GHz.
"Auto" on my board (in sig) does 4.7 GHz all-core and boosts to 4.9 GHz ST. For me, going higher isn't worth the heat/undervolting/stability-testing effort. I've also increased the iGPU speed to 1800 MHz from the stock 1500 MHz, mainly to improve DxO PureRaw GPU and Magix Vegas Pro performance.
 
  • Like
Reactions: Mopetar

nicalandia

Golden Member
Jan 10, 2019
1,158
1,268
106

I'm still wondering how Intel is going to fit 2 more Gracemont clusters in the same die as ADL with Raptor Lake on the same process? I wonder if there was room on the die or if they are getting some space by working the node a bit? 8+16 is going to be a handful of compute in MT scenarios, which is where the current 12900K struggles against the 5950X. But then again AMD is bringing more horsepower to the table with the 7950X. Hopefully they'll release within a few months of one another so we can have some tasty shootouts!

Here I present to you: Intel® Core™ i9-13900K Processor: Die diagrams with annotations, die size, die area distribution

[attached: annotated Alder Lake-S die shots and area breakdown]


Based on the current Alder Lake-S die and currently known info on Raptor Lake:

[attached: projected Raptor Lake-S die diagram]


Intel® Core™ i9-13900K Processor: die area 227.54 mm², with a total of 68 MB of cache (L2$ + L3$)

A single Raptor Cove core: 2 MB L2$ + 3 MB L3$. Die area 7.04 mm²
Updated Gracemont quad-core cluster: 4 MB L2$ + 3 MB L3$. Die area 8.78 mm²

The die area of 4 Golden Cove cores is 42.11 mm², while the 16-core E cluster is 52.68 mm², so as you can see four E cores are not the same size as a single performance core.
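A quick sanity check of the annotated figures above (thread numbers, not official Intel specs):

```python
# Sanity check of the cache and area figures in the annotation above.
p_cores, e_clusters = 8, 4
p_l2_mb, p_l3_mb = 2, 3      # per Raptor Cove core, per the annotation
e_l2_mb, e_l3_mb = 4, 3      # per Gracemont quad-core cluster, per the annotation

total_cache_mb = p_cores * (p_l2_mb + p_l3_mb) + e_clusters * (e_l2_mb + e_l3_mb)
print(f"Total L2$ + L3$: {total_cache_mb} MB")   # matches the 68 MB annotated above

e16_area_mm2 = 52.68   # 16 E cores (four clusters), per the annotation
p4_area_mm2 = 42.11    # 4 Golden Cove cores, per the annotation
print(f"16 E cores vs 4 P cores: {e16_area_mm2 / p4_area_mm2:.2f}x the area")
```

The cache totals add up to 68 MB, and the area ratio comes out to roughly 1.25x rather than the 1:1 many people expected.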
 
Last edited:

eek2121

Golden Member
Aug 2, 2005
1,779
1,953
136
The die area of 4 Golden Cove cores is 42.11 mm², while the 16-core E cluster is 52.68 mm², so as you can see four E cores are not the same size as a single performance core.
As GC is equivalent to 4 Gracemont cores, I would hope that to be the case...
 

nicalandia

Golden Member
Jan 10, 2019
1,158
1,268
106
As GC is equivalent to 4 Gracemont cores, I would hope that to be the case...
Intel said that a single Gracemont core was exactly 1/4 the size of a single Golden Cove core, and that made people think a Gracemont cluster (4 cores) would use the same die area as a single P core. As you can see in the annotations, they don't: the 16-core cluster uses more area (52.68 vs 42.11 mm²).
 
May 1, 2020
149
168
86
Intel said that a single Gracemont core was exactly 1/4 the size of a single Golden Cove core, and that made people think a Gracemont cluster (4 cores) would use the same die area as a single P core. As you can see in the annotations, they don't: the 16-core cluster uses more area (52.68 vs 42.11 mm²).
Don't you think this is a bit nitpicky? I doubt they ever said "exactly", but it is roughly 1:4.
 

igor_kavinski

Platinum Member
Jul 27, 2020
2,460
1,266
96
If you have Alder Lake I'd wait until Nova Lake, not even Lunar Lake.
That would take pretty strong self-control for an Alder Lake owner not to upgrade to Raptor Lake at some point. The increased cache and additional GM cores alone would make it worth it, even without any architectural tweaks.
 

nicalandia

Golden Member
Jan 10, 2019
1,158
1,268
106
Don't you think this is a bit nitpicky? I doubt they ever said "exactly", but it is roughly 1:4.
They did imply it with the many diagrams they used, so many people believe it to be 1:4, but it is still quite a feat, to be honest.

Also, the size will not change, because these are the same Gracemont cores used in Alder Lake; the additional 2 MB of L2$ is just being enabled (it was disabled on Alder Lake).


[attached: Gracemont cluster block diagram]



People expecting huge gains in games from the doubled L2$ available on Raptor Lake (32 MB vs Alder Lake's 16 MB) will be disappointed, because 78% of that increase goes to the Gracemont E cores, which do little in games. But it is a huge step up in multithreaded performance.
 
Last edited:
  • Like
Reactions: igor_kavinski

Hulk

Diamond Member
Oct 9, 1999
3,361
858
136
That would take pretty strong self-control for an Alder Lake owner not to upgrade to Raptor Lake at some point. The increased cache and additional GM cores alone would make it worth it, even without any architectural tweaks.
Yes, unless a new mobo is required.
 
