Question Raptor Lake - Official Thread

Page 82

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current z690 motherboards? If yes then that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Buying the Z790 platform would be dumb this December. No upgradability. At least, Z690 owners can get a nice rush of endorphins from their investment when they upgrade to the 13th gen this year or in the next few years.

I would disagree on that. It depends on how regularly you upgrade. At the zenith of my PC hardware enthusiast career, I was upgrading prolifically, often twice or sometimes even thrice within the same generation. Now that I'm older and don't game as much, I upgrade on a much slower cycle (although to be honest this X99 based rig that I'm on now is beyond long in the tooth). For someone like me, going with a Z790 motherboard plus 13900K setup makes a lot of sense because I will not upgrade again for at least 3 years. OEM buyers will keep their machines for even longer.

And you may say, well, the X670E platform will offer a substantial upgrade path, so why not fully commit to Zen 4 rather than the current 60-40 split in favor of Raptor Lake? And to that I'd say: by the time Zen 5 or Zen 6 becomes available, the X670E chipset will not be an optimal solution for those CPUs, and they will be held back in performance and features.
 

jpiniero

Lifer
Oct 1, 2010
14,573
5,203
136
No, those are 5nm wafer prices. 10nm is about $6,000. 7nm is about $9,000, and 5nm anywhere from $14,000 to $17,000. Since Intel is its own source, it could be half of that.

Maybe in a normal situation, where prices fall over time rather than go up.

The actual wafer production cost for Intel isn't all that much... but you have to factor in that you're paying for a lot more than just the physical wafer and the labor involved. You're also, in a way, paying for the future nodes.
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
Maybe in a normal situation, where prices fall over time rather than go up.

The actual wafer production cost for Intel isn't all that much... but you have to factor in that you're paying for a lot more than just the physical wafer and the labor involved. You're also, in a way, paying for the future nodes.
True. If you think about it, the price per functional die is about 3% of the retail price, so there are other things that are much more expensive than making the chips.
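As a rough sanity check on that kind of claim, here is the back-of-envelope arithmetic. Every input below (wafer price, die size, retail price) is a hypothetical placeholder, not a sourced figure:

```python
import math

# Hypothetical inputs -- none of these are confirmed figures.
wafer_price_usd = 9_000      # assumed 7nm-class wafer price
wafer_diameter_mm = 300
die_area_mm2 = 80            # assumed mid-sized die

# Simple gross-dies-per-wafer approximation: wafer area over die area,
# minus a common edge-loss correction term.
d = wafer_diameter_mm
gross_dies = int(math.pi * (d / 2) ** 2 / die_area_mm2
                 - math.pi * d / math.sqrt(2 * die_area_mm2))

cost_per_die = wafer_price_usd / gross_dies
retail_price_usd = 300       # assumed retail price of the finished CPU
print(f"~{gross_dies} gross dies, ${cost_per_die:.2f} per die, "
      f"{cost_per_die / retail_price_usd:.1%} of retail")
```

With these made-up inputs the raw die lands in the low single-digit percent of retail, in the same ballpark as the 3% figure, before packaging, test, R&D, and margin are added on top.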
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
While it is all buried in the accounting, the true cost of making a wafer for both TSMC and Intel includes all of the same components: R&D for the node, purchase, upkeep and depreciation of the equipment, the cost of raw materials, the time spent in labor to complete the process, the overhead of maintaining the fab, the portion of the construction cost of the fab that is amortized against that individual wafer, etc. The question is, is Intel's cost basis higher or lower than TSMC's with respect to that one wafer? We will never know the full answer.
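A toy model of that cost stack just amortizes the fixed components over the wafers a fab ships during the amortization window. Every figure here is invented purely to illustrate the structure:

```python
# All numbers are invented for illustration; the point is the structure,
# not the values.
fab_construction_usd = 10e9       # assumed fab build-out cost
node_rnd_usd = 3e9                # assumed R&D spend for the node
equipment_usd = 5e9               # assumed tool purchase + depreciation
wafers_over_life = 2_000_000      # assumed wafer output while amortizing

# Fixed costs spread across every wafer produced in the window.
amortized_per_wafer = (fab_construction_usd + node_rnd_usd
                       + equipment_usd) / wafers_over_life

materials_per_wafer = 500         # assumed raw materials
labor_overhead_per_wafer = 700    # assumed labor + fab overhead

true_cost = (amortized_per_wafer + materials_per_wafer
             + labor_overhead_per_wafer)
print(f"amortized fixed cost: ${amortized_per_wafer:,.0f}/wafer, "
      f"all-in: ${true_cost:,.0f}/wafer")
```

The takeaway is that the marginal (materials + labor) slice can be small while the amortized fixed slice dominates, which is why the "true cost" question depends entirely on accounting assumptions we can't see.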
 

eek2121

Platinum Member
Aug 2, 2005
2,929
4,000
136
No way is a TSMC 7 nm wafer only 9k. It's more like 14-15 after the price hikes.

Depends on the size of the order and how long the customer has been using TSMC. The prices I heard were between $9,000-$10,000 for 7nm, $4,000-$7,000 for 6nm, and $13,000-$15,000 for 5nm. It is likely someone like AMD gets things cheaper, and someone like Intel has to pay more.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
While it is all buried in the accounting, the true cost of making a wafer for both TSMC and Intel includes all of the same components: R&D for the node, purchase, upkeep and depreciation of the equipment, the cost of raw materials, the time spent in labor to complete the process, the overhead of maintaining the fab, the portion of the construction cost of the fab that is amortized against that individual wafer, etc. The question is, is Intel's cost basis higher or lower than TSMC's with respect to that one wafer? We will never know the full answer.
Just goes to show that being an expert in one field has little relevance in others. Cutress, however, really should know better.
 
  • Like
Reactions: ftt

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Depends on the size of the order and how long the customer has been using TSMC. The prices I heard were between $9,000-$10,000 for 7nm, $4,000-$7,000 for 6nm, and $13,000-$15,000 for 5nm. It is likely someone like AMD gets things cheaper, and someone like Intel has to pay more.

If that difference in price between N7 dies and N6 dies is even remotely true, and given TSMC's own statements about the portability of N7 designs to N6, it's almost baffling that AMD hasn't done an N6 version of Zen 3 as an in-place upgrade to their line of N7-based Zen 3 products just to decrease their own cost per working CCD. I realize that there would still be an R&D overhead to such a move, but, in volume, it should be more than made up in short order unless TSMC is flat out lying about the ease of porting those designs. We already see the improvements in Rembrandt's CCX over Cezanne while using what is essentially the same design. With a higher power budget, it seems logical that desktop CCDs should show an even better MT performance improvement.
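One way to put numbers on "cost per working CCD" is the standard Poisson defect-density yield model. Everything below (die area, defect density, and the wafer prices, taken loosely from the ranges quoted above) is an assumption for illustration:

```python
import math

def cost_per_good_die(wafer_price, die_area_mm2, defect_density_per_cm2,
                      wafer_diameter_mm=300):
    """Cost per yielding die under a simple Poisson yield model."""
    d = wafer_diameter_mm
    # Gross dies: wafer area over die area minus an edge-loss term.
    gross = int(math.pi * (d / 2) ** 2 / die_area_mm2
                - math.pi * d / math.sqrt(2 * die_area_mm2))
    die_area_cm2 = die_area_mm2 / 100
    # Poisson yield: fraction of dies with zero killer defects.
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_cm2)
    return wafer_price / (gross * yield_frac)

# Assumed: ~81 mm^2 CCD, D0 = 0.1 defects/cm^2, and wafer prices in the
# middle of the ranges quoted above (none of this is confirmed).
n7 = cost_per_good_die(9_500, 81, 0.1)
n6 = cost_per_good_die(5_500, 81, 0.1)
print(f"N7 ~${n7:.2f}, N6 ~${n6:.2f} per good CCD")
```

Under these assumptions the per-CCD saving from a straight N7-to-N6 port would be a few dollars per die, which is exactly the kind of margin that only pays off at very high volume.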
 

jpiniero

Lifer
Oct 1, 2010
14,573
5,203
136
While it is all buried in the accounting, the true cost of making a wafer for both TSMC and Intel includes all of the same components: R&D for the node, purchase, upkeep and depreciation of the equipment, the cost of raw materials, the time spent in labor to complete the process, the overhead of maintaining the fab, the portion of the construction cost of the fab that is amortized against that individual wafer, etc. The question is, is Intel's cost basis higher or lower than TSMC with respect to that one wafer? We will never know the full answer.

Of course what it costs TSMC and what they charge are two different things.

If Raptor does end up being i5 K and above, that should be very telling that OEMs mostly rejected it. Probably because of the price. And the price is probably because of the die size and yields.
 
  • Like
Reactions: ftt
Jul 27, 2020
16,128
10,192
106
And to that I'd say: by the time Zen 5 or Zen 6 becomes available, the X670E chipset will not be an optimal solution for those CPUs, and they will be held back in performance and features.
Personally, I would take a drop-in upgrade even with reduced features, whenever it comes and especially many years afterwards. Maybe not an issue for Linux folks, but I prefer my Windows activation or license to not crap out over a hardware upgrade. I'm the type that hates re-installing everything and would prefer a slightly bogged-down working Windows installation over a brand new empty one that I then have to waste time populating with the applications/games I need.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
If that difference in price between N7 dies and N6 dies is even remotely true, and given TSMC's own statements about the portability of N7 designs to N6, it's almost baffling that AMD hasn't done an N6 version of Zen 3 as an in-place upgrade to their line of N7-based Zen 3 products just to decrease their own cost per working CCD. I realize that there would still be an R&D overhead to such a move, but, in volume, it should be more than made up in short order unless TSMC is flat out lying about the ease of porting those designs. We already see the improvements in Rembrandt's CCX over Cezanne while using what is essentially the same design. With a higher power budget, it seems logical that desktop CCDs should show an even better MT performance improvement.
I suggest that this has to do with the server world. Even though we might think it's a trivial change, they would need to validate such a change. Contrast that with the integrated-die products: constant, almost annual change.

The existence of the N24 die as the sole Navi 2 part on 6nm, used for the lowest-end product, suggests that it is cheaper.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Even just spinning a line for desktop and Threadripper (especially Threadripper, where the better power/thermal situation could have an outsized impact), where the price per die is arguably even more important on what is now a trailing product with lower ASPs, would seem to make at least some financial sense...

As for validation for Epyc, I agree, it would be a big task. However, they are on the hook to support Epyc processors for 3+ years minimum. That's a long time to keep an aging node on a higher cost structure employed.
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
MLID has a video out with supposed Raptor Lake pricing:


Pretty much aligned with Ryzen 7000 prices. The outlier could be the 13600K: if it launches at $329, then it's clearly a better choice than both the 7600X and 7700X no matter *if* it loses in ST or games (by a bit).
 

Rigg

Senior member
May 6, 2020
468
958
106
MLID has a video out with supposed Raptor Lake pricing:


Pretty much aligned with Ryzen 7000 prices. The outlier could be the 13600K: if it launches at $329, then it's clearly a better choice than both the 7600X and 7700X no matter *if* it loses in ST or games (by a bit).
He's really going out on a limb there. :rolleyes:

Unless you were going to leak the specific 1,000-unit tray prices they always put in their presentations, I'm not sure why you would bother putting this out there. Even if this actually came from a "source", it isn't specific enough to be of any value. This is essentially a copy/paste of 12th-gen MSRPs with a range of up to 10% more added for wiggle room.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
I expect it to be more than 10% more expensive than Alder Lake given market conditions and their need to inflate their ASP.

If Raptor Lake is as competitive as everyone expects it to be, people will pay out the nose for the top SKUs.
 
  • Like
Reactions: ZGR

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,540
14,494
136
I expect it to be more than 10% more expensive than Alder Lake given market conditions and their need to inflate their ASP.

If Raptor Lake is as competitive as everyone expects it to be, people will pay out the nose for the top SKUs.
Exactly. Intel still has a reputation, although a little tarnished in some circles. And there are many enthusiasts who don't care if it takes up to 350 watts or more.
 
  • Like
Reactions: ZGR

Det0x

Golden Member
Sep 11, 2014
1,028
2,953
136
First review of retail Intel Core i9-13900K “Raptor Lake” CPU emerges



In Cinebench R23 tests, the Core i9-13900K is 13% faster on its Performance (Raptor Cove) cores than the 12900K (Golden Cove). Interestingly, the performance of the Efficient cores has also increased, by 14%, although the same architecture (Gracemont) is used. It is worth noting that not only has the frequency been increased for Raptor Lake CPUs, but also the size of the L3 cache. ECSM confirms that with unlimited (350 W) power the i9-13900K can break 40K points in Cinebench R23, which is 47% higher than the 12900K with uncapped power.
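Working backwards from only the figures in that claim (~40K points, quoted as 47% higher than the uncapped 12900K), the implied 12900K baseline is easy to check; the baseline below is derived, not measured:

```python
# Figures from the post: ~40,000 pts uncapped, quoted as 47% higher
# than the uncapped 12900K. The baseline is *implied*, not measured.
rpl_score = 40_000
uplift = 0.47
implied_adl_score = rpl_score / (1 + uplift)
print(f"implied 12900K uncapped score: ~{implied_adl_score:,.0f} pts")
```

That puts the implied uncapped 12900K in the low-27K range, which is the sanity check to apply against published uncapped 12900K R23 runs.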

The reviewer concluded that the i9-13900K brings 10% higher framerates than the 12900K in CPU-bound games (CSGO, Ashes of the Singularity), and also improves frame times for the slowest 0.1% of frames. Below is CSGO performance with the unlimited i9-13900K and i9-12900K running with DDR5 and DDR4 memory.

ECSM is to provide more test results later: with the default PL2 limit and, later on, a Z790 motherboard. The conclusion is that the i9-13900K has 12% better single-threaded performance, with multi-threaded performance "greatly improved to compete with AMD Zen 4".

*edit*
From the linked Bilibili post:

Testing platform:
  • CPU1: Intel Core i9 13900K
  • CPU2: Intel Core i9 12900KF
  • DRAM: DDR5-6000 CL30-38-38-76, DDR4-3600 CL17-19-19-39, Trefi=262143, other parameters=Auto.
  • Motherboard: Z690 Taichi Razer Edition and Z790****
  • BIOS version: 12.01 and ****
  • GPU: AMD Radeon RX 6900 XTXH OC 2700MHz
  • Cooling: NZXT Kraken X73

2 extra stops on the ring bus = higher memory latency by the looks of things... L3 bandwidth much improved tho



Compared with the Intel Core i9-12900K, due to changes in the ring bus structure and design, the ring bus frequency no longer drops from 4700 MHz to 3600 MHz when the E-cores are under load; the change is mainly from 5000 MHz down to 4600 MHz, at which point ring bus latency is no longer a burden on core access latency. Coupled with possible changes in ring bus topology, the core-to-core latency of the Intel Core i9-13900K changes in an interesting way.

That is, there is no longer an obvious access penalty for communication between P-cores and E-cores, and the communication speed between almost all cores is maintained at a consistent level of about 30-33 ns. The exception is the small cores within the same cluster, which carry a certain access-latency penalty due to bus snooping; even so, E-core latency within the same cluster is also slightly improved.

IPC test:

Based on the performance tests, we used SPEC CPU 2017 1.1.8 and Geekbench 5.4.4 to conduct the corresponding IPC tests, testing both the default frequency and a fixed 3.6 GHz, for reference only.
  • SPEC CPU 2017:
  • OS: WSL2-Ubuntu 20.04
  • Compiler: GCC/Gfortran/G++ 10.3.0
  • Test parameters: -O3; the corresponding tests and cfg are shared on the network disk, link: https://pan.baidu.com/s/1G0yD_FC3yXOJl3tkkyzjSg (extraction code: pa37). You are welcome to use them.

P core part:

We first tested single-thread performance at the default frequency; we can see that the improvement is about 12.5%.


Further, we conducted a 3.6 GHz co-frequency (iso-clock) test; the per-clock performance of the two cores, RPC and GLC, is basically the same, while RPC has relatively lower memory access latency thanks to its larger L2 cache.
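The "co-frequency" comparison above is just score normalized by clock, which isolates IPC from frequency. A minimal sketch of that normalization; the scores below are placeholder values, not the reviewer's data:

```python
# Placeholder scores -- illustrative only, not the reviewer's numbers.
def ipc_ratio(score_new, score_old, freq_new_ghz, freq_old_ghz):
    """Per-clock performance ratio: (score/GHz) vs (score/GHz)."""
    return (score_new / freq_new_ghz) / (score_old / freq_old_ghz)

# Same 3.6 GHz clock on both parts: equal scores at equal clocks give
# a ratio of 1.0, i.e. "basically the same" per-clock performance.
print(ipc_ratio(5.0, 5.0, 3.6, 3.6))   # -> 1.0

# At default clocks, a ~12.5% score gain can come almost entirely from
# frequency, leaving the per-clock ratio near 1.0:
print(f"{ipc_ratio(6.30, 5.60, 5.8, 5.2):.3f}")
```

This is why testing at both default clocks and a fixed 3.6 GHz matters: the first shows the product-level gain, the second shows how much of it is architecture rather than clock speed.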
 

Det0x

Golden Member
Sep 11, 2014
1,028
2,953
136
We also tested the IPC of the E core:

Due to obvious optimization of the internal caches and further optimization of core access latency, the IPC of the E-core has changed significantly; the average IPC improvement is about 6%.

In addition to the GCC runs, we also tested SPECint2017 with the combination of Clang 10 + Gfortran 12. In the following table we removed the score of the 548.exchange2_r project, leaving only the C/C++ projects, for comparison with mobile phone SoCs.

It is important to note that the memory used in this review is not JEDEC-spec; performance differs slightly from runs using JEDEC memory.

We first tested single-threaded performance at the default frequency; we can see that the improvement is about 13%.

Further, we conducted a 3.6 GHz co-frequency test, which is consistent with the SPEC2017 results. It can be seen that the per-clock performance of the RPC and GLC cores is basically the same.

We also tested the IPC of the E core

Since Geekbench leans more heavily on the ALUs and stresses the caches relatively little, the results here differ slightly from SPEC2017. In GB5 the integer performance of the E-core is almost unchanged, while the FP result is close to SPEC2017's, about a 6% improvement.
 