Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,527
2,520
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% performance-per-watt (PPW) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitectural changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If so, that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar
Jul 27, 2020
20,040
13,740
146
Who is to say that the 13900K will be the best for the 4090? Does the CPU matter at 4K? For bragging rights in 480p and 720p reviews on Anandtech?
1440p matters to a lot of people, especially those with gaming monitors at 120 Hz and above.

Zen 4 sales are not good. It's possible that a lot of people looking to upgrade are waiting to see what Intel has to offer.
 
  • Like
Reactions: Henry swagger

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
1440p matters to a lot of people, especially those with gaming monitors at 120 Hz and above.
1440p, really? So one has the funds to build a $3000 gaming PC but skimps on a 1440p monitor? Do you really need a 4090 to play games at that resolution? If you have the budget for a high-end CPU like the 13900K/7950X, fast DDR5, and a 4090 GPU, you have the money for a top-of-the-line 4K gaming monitor.
 
  • Like
Reactions: Kaluan
Jul 27, 2020
20,040
13,740
146
1440p, really? So one has the funds to build a $3000 gaming PC but gets a 1440p high-Hz monitor?
Funds can be limited. Not a lot of people have the funds to buy the best of every hardware component. Their build specifications may be very disproportionate and unbalanced, paying particular attention to the CPU and GPU but skimping everywhere else.
 

deasd

Senior member
Dec 31, 2013
567
922
136
Zen 4 sales are not good. It's possible that a lot of people looking to upgrade are waiting to see what Intel has to offer.

Because so many gamers rushed out to buy the 5800X3D after the Zen 4 reviews, due to its FPS/price ratio. Nothing from Intel would change that..... only the next Zen 4 X3D series would be worth waiting for.
 

Wolverine2349

Senior member
Oct 9, 2022
428
132
86
Funds can be limited. Not a lot of people have the funds to buy the best of every hardware component. Their build specifications may be very disproportionate and unbalanced, paying particular attention to the CPU and GPU but skimping everywhere else.


Plus, some would rather do 1440p, especially if they only have a 27 to 32 inch monitor, where 4K makes little difference in visual quality.
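
For what it's worth, the raw pixel-density arithmetic is easy to check. A quick sketch (it says nothing about viewing distance, which is what actually decides whether the extra density is visible):

[code]
/* Pixel-density arithmetic for the 27-32 inch question:
 * PPI = sqrt(w^2 + h^2) / diagonal_inches. Build with -lm.
 * Whether the extra density is actually visible depends on
 * viewing distance and eyesight, which this doesn't capture. */
#include <stdio.h>
#include <math.h>

static double ppi(int w, int h, double diag) {
    return sqrt((double)w * w + (double)h * h) / diag;
}

int main(void) {
    const double sizes[] = { 27.0, 32.0 };
    for (int i = 0; i < 2; i++)
        printf("%.0f\": 1440p = %.0f PPI, 4K = %.0f PPI\n",
               sizes[i], ppi(2560, 1440, sizes[i]), ppi(3840, 2160, sizes[i]));
    return 0;
}
[/code]

At 27 inches that's roughly 109 PPI for 1440p versus 163 PPI for 4K, so the densities do differ on paper; the question is whether you can see it from a desk.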
 
  • Like
Reactions: igor_kavinski

Wolverine2349

Senior member
Oct 9, 2022
428
132
86
Some people paid around $630 for the pre-orders. There is a chance that the i9-13900K/KF SKUs may get scalped hard, because 4090 gamers will want the best gaming CPU and a lot of them lean towards Intel.


That would really be horrible. I thought the shortage of consumer-grade computer parts was over, unlike for other things.

Well, the RTX 4090 did sell out on day 1, but cards were available at my local Micro Center throughout the day and were only gone by the end of it. A far cry from the RTX 3080 and 3090 launches, when cards were gone in seconds and none could be found in stock afterward. I also think cards selling out on day 1 is normal, just not in seconds; selling through over the course of the day is.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
People on Twitter: "Hey, look how the 4090 was out of stock within minutes of release," and then asking why Ryzen 7000 can't do that. Well, for one, you can put that GPU in the whole range of motherboards released in the past 6 years and play with it without any issues. If Ryzen 7000 could just be dropped into an existing AM4 board, you bet it would have sold out too.
 
  • Like
Reactions: lightmanek

Harry_Wild

Senior member
Dec 14, 2012
841
152
106
Who is to say that the 13900K will be the best for the 4090? Does the CPU matter at 4K? For bragging rights in 480p and 720p reviews on Anandtech?
The bandwidth matters, aka PCIe. PCIe 5 is like warp speed compared to previous PCIe versions like 3 and earlier! Of course, there are no PCIe 5 graphics cards yet! Only PCIe 4!
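
For concreteness, here's the back-of-the-envelope x16 bandwidth math per generation, using the published per-lane transfer rates (8/16/32 GT/s) and 128b/130b encoding; a quick sketch, not a benchmark:

[code]
/* Rough per-generation PCIe x16 bandwidth, from the published
 * transfer rates and 128b/130b encoding (128 payload bits per
 * 130 bits on the wire). */
#include <stdio.h>

int main(void) {
    const char *gen[]  = { "PCIe 3.0", "PCIe 4.0", "PCIe 5.0" };
    const double gts[] = { 8.0, 16.0, 32.0 };   /* GT/s per lane */
    const int lanes = 16;

    for (int i = 0; i < 3; i++) {
        double gbs = gts[i] * lanes * (128.0 / 130.0) / 8.0; /* GB/s */
        printf("%s x16: ~%.1f GB/s\n", gen[i], gbs);
    }
    return 0;
}
[/code]

That works out to roughly 15.8, 31.5, and 63 GB/s, i.e. each generation doubles the last.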
 

Wolverine2349

Senior member
Oct 9, 2022
428
132
86
Absolutely. But there are limitations due to how far away the system memory is from the CPU.



You're overthinking it. Cache is SRAM, which is much faster but has far less capacity than DRAM.

Also, cache is built directly next to the CPU cores themselves, which drastically reduces access latency.

Even with hyper-fast system memory, cache will always have lower latency because of its proximity to the CPU.



Increasing the cache capacity does increase latency, as it takes longer to access the data, but it's still far lower than the latency of accessing system memory.

That's precisely why V-cache provides such a significant increase in game performance. Anything that gets data to the CPU faster is going to increase performance.


Yeah, that makes sense: cache is SRAM and closer, so much faster to access. Though how come some apps do worse on the 5800X3D than the regular 5800X? Is it only due to the faster boost clocks on the regular 5800X? If the clock speeds were always equal, would the 5800X3D always beat the regular 5800X, or at least never lose to it, given the much larger cache?

And how about the extra L3 cache in the Raptor Lake 13900K, 36MB instead of the 30MB and 25MB on the 12900K and 12700K? Will that make a big difference in reducing or eliminating potential gaming bottlenecks and let it compete with the Ryzen 7000X3D CPUs for smooth, strong 1% and 0.1% FPS lows? This of course assumes the e-waste cores are disabled so the 8 P-cores have full access to the L3 cache at all times. Or is another 6-11MB of L3 too insignificant? Because I do notice Intel scales their L3 cache size with the e-waste core count.
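
For anyone who wants to see the cache-vs-DRAM latency gap from the quoted explanation first-hand, here's a minimal pointer-chase sketch. It assumes a POSIX timer (clock_gettime), and the working-set sizes are illustrative, not tuned to any particular CPU's cache hierarchy:

[code]
/* Minimal pointer-chase latency sketch. Each load depends on the
 * previous one and the visit order is shuffled, so the prefetcher
 * can't hide latency. Working sets that fit in L1/L2/L3 should show
 * far lower ns/load than the DRAM-sized one. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_load(size_t n) {
    void **buf = malloc(n * sizeof *buf);
    size_t *order = malloc(n * sizeof *order);
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)                /* link into one big cycle */
        buf[order[i]] = &buf[order[(i + 1) % n]];

    const size_t steps = 20000000;
    void **p = buf;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < steps; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (!p) puts("unreachable");                  /* keep the loop alive */

    free(order); free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / steps;
}

int main(void) {
    const size_t kib[] = { 16, 256, 8192, 262144 }; /* ~L1, ~L2, ~L3, DRAM */
    for (int i = 0; i < 4; i++)
        printf("%7zu KiB working set: %5.1f ns/load\n",
               kib[i], ns_per_load(kib[i] * 1024 / sizeof(void *)));
    return 0;
}
[/code]

Build with something like gcc -O2; the dependent loads keep the compiler and prefetcher from hiding the latency, so the jump between the L3-sized and DRAM-sized working sets is exactly the gap V-cache attacks.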
 

Kocicak

Golden Member
Jan 17, 2019
1,090
1,141
136
As I wrote, I want to play with it a little bit; I don't know if I'll keep it. I will be able to get back the money I bought it for over the next few days, as it is not officially on sale yet.

The fact that I can just drop it into my running system is a great advantage. Anyway, I will put it in and see what happens. Shutting down.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
As I wrote, I want to play with it a little bit; I don't know if I'll keep it. I will be able to get back the money I bought it for over the next few days, as it is not officially on sale yet.

The fact that I can just drop it into my running system is a great advantage. Anyway, I will put it in and see what happens. Shutting down.

Is anyone with a 13900K going to be testing it with tight-timing DDR4 RAM?
 

ondma

Diamond Member
Mar 18, 2018
3,005
1,528
136
Who is to say that the 13900K will be the best for the 4090? Does the CPU matter at 4K? For bragging rights in 480p and 720p reviews on Anandtech?
Maybe not 4K, but I saw a preview of the 4090 (can't remember the source, some oddball site I had never heard of, so who knows), but anyway, with a 4090 several games were CPU-bound at 1440p, which had not been seen before. I view this as a potential problem for Intel, since they only have 8 big cores.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
I view this as a potential problem for Intel, since they only have 8 big cores.
That is true; the 16 E-cores will be doing background tasks (not many of them) and the 8 P-cores will be taxed by the 4090.

Perhaps the best CPU for that monster GPU will be a 24C/48T Threadripper.
 

Hulk

Diamond Member
Oct 9, 1999
4,527
2,520
136
IIRC, a few months ago I saw diagrams like the one below showing the power requirements of the RPL platform; it was 300 watts just for the CPU. Looks like this will become the new normal for the Intel platform......


The reason there is so much arguing/discussion over power draw is that it is a rather complex subject, dependent not only on the CPU but also on the usage pattern of a particular user. For example, is your CPU normally idling along, and now and then you do something like apply a PS filter or pre-render a few seconds of video, and your power draw spikes to 250 W, but only for a few seconds? For many people that is fine, as they want the most compute possible for "bursty" workloads.

On the other hand, if you are constantly engaging all cores for distributed computing, video encoding, or rendering, then you are going to be concerned with overall efficiency, i.e. how many kWh it takes to complete the job.

So now you get into the nuance of the situation. Person A idles his rig 99% of the time, but for the other 1% it's drawing 250 W, while person B runs his rig flat out 99% of the time. Of course, absolute efficiency is going to matter more to person B. Meanwhile, if the "less efficient" CPU in person A's rig is faster for those bursty workloads, then his/her time is worth more than the insignificant extra power usage, since he/she is only hitting it hard 1% of the time.

Then you have to factor in that manufacturers set up CPUs and motherboards for what they think most people will want/need. But we know better and can set power/frequency limits for a particular CPU so it fits our workflow best. Then you throw price into the equation and things become even more complicated.

Zen 3 and Alder Lake are competitive, very competitive. To analyze efficiency, I think you would have to take a specific application, run it at various frequencies/voltages, see how fast it is, and then make comparisons. That's why power arguments go on forever here. You can always find a spot on the curve that supports your side of the argument.
"Yeah but at this frequency this CPU is this fast."
"But that's not stock, this one is more efficient at stock."
"But settings are there to change."
"True but we were comparing out of the box..."

And on and on...

I think Raptor Lake and Zen 4 are going to be just as competitive as Zen 3 and ADL. The fact that they have radically different architectures means that there are going to be 1,000 ways to frame an argument.

So, in conclusion, I think that when talking about efficiency we need to be very specific in order to avoid endless debate. As in: "This CPU can do this rate of work on this application while drawing this much power."
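
To make that concrete, here's a tiny worked example with made-up numbers (the 250 W burst figure is person A's from above; CPU X and CPU Y are hypothetical):

[code]
/* Worked example of "this rate of work at this much power".
 * All numbers are invented for illustration. Energy = power x time. */
#include <stdio.h>

int main(void) {
    /* Hypothetical render job: CPU X finishes in 10 min at 250 W,
     * CPU Y takes 14 min at 150 W. Which uses less energy per job? */
    double wh_x = 250.0 * (10.0 / 60.0);  /* ~41.7 Wh */
    double wh_y = 150.0 * (14.0 / 60.0);  /*  35.0 Wh */
    printf("CPU X: %.1f Wh/job   CPU Y: %.1f Wh/job\n", wh_x, wh_y);

    /* Person A from above: 99% of the time idling at 10 W,
     * 1% of the time bursting to 250 W. */
    double avg = 0.99 * 10.0 + 0.01 * 250.0;
    printf("Person A's average draw: %.1f W\n", avg);
    return 0;
}
[/code]

The slower-but-lower-power chip wins on energy per job here, but only because of the numbers I invented; swap them and the conclusion flips, which is exactly why the specifics matter.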
 

inf64

Diamond Member
Mar 11, 2011
3,865
4,549
136
  • Like
Reactions: lightmanek

Wolverine2349

Senior member
Oct 9, 2022
428
132
86
Maybe not 4K, but I saw a preview of the 4090 (can't remember the source, some oddball site I had never heard of, so who knows), but anyway, with a 4090 several games were CPU-bound at 1440p, which had not been seen before. I view this as a potential problem for Intel, since they only have 8 big cores.


Well, games are still primarily single-threaded. Not literally, but I think that's more a catch-all figure of speech meaning you neither need nor benefit from throwing lots and lots more cores at a gaming system; more than 8 good cores buys little, and even 6 is enough. Games are multi-threaded, but with a limited number of threads, meaning you cannot just throw in a super-high-end single or even dual-core chip and expect great performance, and even a quad-core is cutting it too close.

But 6 cores is probably enough and 8 is easily more than enough, so isn't the need for stronger 6-8 core CPUs rather than for more than 8 cores? AMD does have more than 8 good cores, but they sit on separate CCDs/rings, which means a big latency penalty whenever game threads that talk to each other have to cross CCDs. That is bad for game threads, but not an issue for VMs or for running a streaming app alongside the game on the other cores.

But for gaming only, isn't the whole point stronger 8 cores rather than more cores? Especially until they have more than 8 good cores on a single CCD/ring, where there is no latency-hop penalty for game threads to talk to each other?
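
One way to put rough numbers on the "stronger 8 cores vs. more cores" intuition is Amdahl's law. The parallel fraction below (p = 0.70) is purely an assumed, illustrative value, not a measurement of any real game:

[code]
/* Amdahl's-law sketch: speedup = 1 / ((1 - p) + p / n), where p is
 * the parallelizable fraction of the frame work and n is the core
 * count. p = 0.70 is an assumption for illustration; real games
 * vary title by title. */
#include <stdio.h>

int main(void) {
    const double p = 0.70;
    const int cores[] = { 4, 6, 8, 16, 24 };
    for (int i = 0; i < 5; i++) {
        double s = 1.0 / ((1.0 - p) + p / cores[i]);
        printf("%2d cores: %.2fx speedup\n", cores[i], s);
    }
    return 0;
}
[/code]

With that assumption, going from 8 to 24 cores only buys about 18% more speedup, while the serial 30% scales purely with single-core speed, which is the "stronger cores" argument in a nutshell.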