Question Raptor Lake - Official Thread

Page 100

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, that could be a major selling point for people to move to ADL now rather than wait.
 
  • Like
Reactions: vstar

Wolverine2349

Member
Oct 9, 2022
157
64
61
Absolutely. But there are limitations due to how far away the system memory is from the CPU.



You're overthinking it. Cache is SRAM, which is much faster but has far less capacity than DRAM.

Also, cache is built directly next to the CPU cores themselves, which drastically reduces access latency.

Even with hyper-fast system memory, cache will always have lower latency because of its proximity to the CPU.



Increasing the cache capacity does increase latency, as it takes longer to access the data, but it's still far lower latency than accessing system memory.

That's precisely why V-cache provides such a significant increase in game performance. Anything that gets data to the CPU faster is going to increase performance.


Yeah, that makes sense: cache is SRAM and closer, so much faster access. Though how come some apps do worse on the 5800X3D than the regular 5800X? Is it only due to the faster boost clocks on the regular 5800X? If the clock speeds were always equal, would the 5800X3D always beat the regular 5800X, or at least never lose to it, thanks to the much larger cache?

And how about the extra L3 cache in the Raptor Lake 13900K: 36MB instead of the 30MB and 25MB on the 12900K and 12700K? Will that make a big difference in reducing or eliminating potential gaming bottlenecks and allow it to compete with Ryzen 7000X3D CPUs for smooth, strong 1% and 0.1% FPS lows? This assumes the E-cores are disabled so the 8 P-cores always have full access to the L3 cache. Or is another 6-11MB of L3 too insignificant? I do notice Intel scales their L3 cache size with the E-core count.
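To put rough numbers on the latency gap being described above, here's a minimal pointer-chasing sketch (Linux/gcc assumed; the working-set sizes and hop count are just illustrative, not tied to any specific CPU). Working sets that fit in L1/L2/L3 come back in a few nanoseconds per access, while a DRAM-sized set is typically an order of magnitude slower:

```c
// latency.c - rough per-access latency vs. working-set size (illustrative sketch)
// build: gcc -O2 latency.c -o latency
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t bytes, size_t hops) {
    size_t n = bytes / sizeof(size_t);
    size_t *buf = malloc(n * sizeof(size_t));
    if (!buf) return -1.0;

    // Sattolo's algorithm: build one big random cycle so the prefetcher can't help.
    for (size_t i = 0; i < n; i++) buf[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    volatile size_t idx = 0;                 // volatile keeps the loop from being optimized away
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t h = 0; h < hops; h++) idx = buf[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)hops;
}

int main(void) {
    size_t sizes[] = { 32u << 10, 256u << 10, 8u << 20, 512u << 20 };  // ~L1, L2, L3, DRAM
    for (int i = 0; i < 4; i++)
        printf("%6zu KiB working set: %.1f ns per access\n",
               sizes[i] >> 10, chase_ns(sizes[i], 20u * 1000 * 1000));
    return 0;
}
```

The exact figures depend on the CPU, but the shape is always the same: once the working set spills out of L3 into DRAM, per-access latency jumps by roughly an order of magnitude, which is the whole premise behind V-cache.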
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
As I wrote, I want to play with it a little bit; I don't know if I'll keep it. I will be able to get back the same money I bought it for over the next few days, since it is not officially on sale yet.

The fact that I can just drop it into my running system is a great advantage. Anyway, I will put it in and see what happens. Shutting down.
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
As I wrote, I want to play with it a little bit; I don't know if I'll keep it. I will be able to get back the same money I bought it for over the next few days, since it is not officially on sale yet.

The fact that I can just drop it into my running system is a great advantage. Anyway, I will put it in and see what happens. Shutting down.

Is anyone with a 13900K going to be testing it with tight-timing DDR4 RAM?
 

ondma

Platinum Member
Mar 18, 2018
2,720
1,280
136
Who is to say that the 13900K will be the best for the 4090? Does the CPU matter at 4K? For bragging rights in 480p and 720p reviews on AnandTech?
Maybe not 4K, but I saw a preview of the 4090 (can't remember the source, some oddball site I had never heard of, so take it with a grain of salt), and with a 4090 several games were CPU bound at 1440p, which had not been seen before. I view this as a potential problem for Intel, since they only have 8 big cores.
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
I view this as a potential problem for Intel, since they only have 8 big cores.
That is true; the 16 E-cores will be handling background tasks (not many of them) and the 8 P-cores will be taxed by the 4090.

Perhaps the best CPU for that monster GPU will be a 24C/48T Threadripper.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
IIRC, a few months ago I saw diagrams like the one below showing the power requirement of the RPL platform; it was 300 watts just for the CPU. Looks like this will become the new normal for the Intel platform...


The reason there is so much arguing/discussion over power draw is that it is a rather complex subject, dependent not only on the CPU but also on how a particular user actually uses it. For example, maybe your CPU is normally idling along, and now and then you do something like apply a PS filter or pre-render a few seconds of video, and your power draw spikes to 250W, but only for a few seconds. For many people with that usage pattern that is fine, as they want the most compute possible for "bursty" workloads.

On the other hand, if you are constantly engaging all cores for distributed computing, video encoding, or rendering, then you are going to be concerned with overall efficiency, i.e. how many kWh it takes to complete the job.

So now you get into the nuance of the situation. Person A is idling his rig 99% of the time, but for the other 1% it's drawing 250 watts, while person B is running his rig flat out 99% of the time. Of course absolute efficiency is going to matter more to person B. Meanwhile, if the "less efficient" CPU in person A's rig is faster for those bursty workloads, then his/her time is worth more than the insignificant extra power, since he/she is only hitting it hard 1% of the time.
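To make the person A / person B split concrete, here's a quick back-of-the-envelope sketch; the 10 W idle figure, 250 W load figure, and 8 hours/day are made-up assumptions, not measurements of any particular CPU:

```c
// energy.c - rough daily energy for a bursty vs. a sustained usage profile
// All figures (10 W idle, 250 W load, 8 h/day) are illustrative assumptions.
#include <stdio.h>

int main(void) {
    double hours_per_day = 8.0;
    double idle_w = 10.0, load_w = 250.0;

    // Person A: 99% idle, 1% full load.
    double a_kwh = (0.99 * idle_w + 0.01 * load_w) * hours_per_day / 1000.0;
    // Person B: 99% full load, 1% idle.
    double b_kwh = (0.99 * load_w + 0.01 * idle_w) * hours_per_day / 1000.0;

    printf("Person A: %.2f kWh/day\n", a_kwh);   // ~0.10 kWh
    printf("Person B: %.2f kWh/day\n", b_kwh);   // ~1.98 kWh
    return 0;
}
```

That's roughly a 20x gap in daily energy, which is why sustained efficiency dominates for person B while person A mostly cares about burst speed.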

Then you have to factor in that manufacturers set up CPUs and motherboards for what they think most people will want/need. But we know better and can set power/frequency limits for a particular CPU so it fits our workflow best. Then you throw price into the equation and things become even more complicated.

Zen 3 and Alder Lake are competitive, very competitive. To analyze efficiency I think you would have to take a specific application, run it at various frequencies/voltages, see how fast it is, and then make comparisons. That's why power arguments go on forever here: you can always find a spot on the curve that supports your side of the argument.
"Yeah but at this frequency this CPU is this fast."
"But that's not stock, this one is more efficient at stock."
"But settings are there to change."
"True but we were comparing out of the box..."

And on and on...

I think Raptor Lake and Zen 4 are going to be just as competitive as Zen 3 and ADL were. The fact that they have radically different architectures means there are going to be 1,000 ways to frame an argument.

So, in conclusion, I think when talking about efficiency we need to be very specific in order to avoid endless debate. As in, "This CPU can do this rate of work on this application while drawing this much power."
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
  • Like
Reactions: lightmanek

Wolverine2349

Member
Oct 9, 2022
157
64
61
Maybe not 4K, but I saw a preview of the 4090 (can't remember the source, some oddball site I had never heard of, so take it with a grain of salt), and with a 4090 several games were CPU bound at 1440p, which had not been seen before. I view this as a potential problem for Intel, since they only have 8 big cores.


Well, games are still primarily single threaded. Not literally, but I think that's a catch-all figure of speech for the fact that you don't need, nor benefit from, throwing lots more cores at a gaming system; more than 8 good cores is unnecessary and even 6 is enough. Games are multi-threaded, but with a limited number of threads, meaning you can't just throw a super high-end single or even dual-core chip at them and expect great performance, and even a quad core is cutting it too close.

But 6 cores is probably enough and 8 is easily more than enough, so isn't the real need stronger 6-8 core CPUs rather than more than 8 cores? AMD does have more than 8 good cores, but they sit on separate CCDs, which means a big latency penalty for game threads talking to each other if they have to cross CCDs. That's bad for game threads, but not an issue for VMs or for running a streaming app alongside the game on the other cores.

But for gaming only, isn't the whole point stronger 8 cores rather than more cores? At least until they have more than 8 good cores on a single CCD/ring, where there's no latency hop penalty for game threads to talk to each other?
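For what it's worth, the cross-CCD point is easy to experiment with yourself on Linux by pinning the game or benchmark to one CCD's cores before launching it. A minimal sketch (the 0-7 core range is just an example; the real CCD-to-logical-CPU mapping depends on the specific chip and SMT layout):

```c
// pin_ccd0.c - pin this process to logical CPUs 0-7 (example CCD 0 range), then exec a program
// build: gcc pin_ccd0.c -o pin_ccd0      usage: ./pin_ccd0 ./some_game_or_benchmark
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 8; cpu++)       // assumption: logical CPUs 0-7 live on one CCD
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {   // pid 0 = current process
        perror("sched_setaffinity");
        return 1;
    }

    execvp(argv[1], &argv[1]);              // the launched program inherits the affinity mask
    perror("execvp");
    return 1;
}
```

This is essentially what `taskset -c 0-7 ./some_game` does; on Windows the equivalent is the affinity setting in Task Manager.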
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I measured efficiency at lower loads and put the outcome in the disaster thread. At low load it is about as efficient as a 12600K, but at high load it is even less efficient.

BTW the IPC improvement in Cinebench seems to be slightly negative. The score improvement is just frequency driven.

I will try to find in the bios how to limit the power.

The wafer box is really nice and solid. I like it a lot... :D
 
Last edited:
  • Like
Reactions: lightmanek

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
OK, I must say I did not expect 337W. Here are the results. It ran at 85°C with an Arctic Liquid Freezer II 240.

EDIT: Now I got 40,774. P-cores run at 5500 MHz and E-cores at 4300 MHz.



View attachment 69062

Alder Lake running those frequencies with that number of cores would score about 38,900, so Raptor is about 4.6% better. Either the clocks are floating around as they normally do and Raptor's IPC equals Alder's in this benchmark, or there is a slight IPC improvement for Raptor.
Can you nail down a frequency for the ST score? At 5.7 GHz, Alder would do about 2220.
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
Really? Negative IPC?? Are you sure it's not just a beta BIOS or a crappy motherboard? A dud CPU?
It's probably a margin-of-error thing. The IPC difference is most likely within 1%, so it can be disregarded. Not a bad frequency uplift, but still much lower perf/watt than the 7950X, at least in R23.
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
That is unlimited power. The picture showed the PL1/PL2 power limits ("Limity výkonu PL1/PL2" in the Czech screenshot) set to 4095 watts.

The question is, was that the stock motherboard setting? Based on the comments from @Kocicak , it seems like this is what the motherboard is set to out of the box. Hopefully he can confirm. If so, this is similar to what Intel/MB makers did previously where the "official" guidance was PL2 for a limited time (tau) which after it expired, power would drop to PL1 but then pretty much all the motherboards out of the box set tau to unlimited meaning you were always at PL2. This was not the "official" stock setting but it seemed Intel either didn't care, or was actually promoting this behind the scenes. Do we get a repeat here where "officially" the CPU should be limited to 250 W for sustained loads but pretty much every motherboard comes out of the box with an unlimited power setting? Maybe this is just the pre-release board BIOS behavior? Will be interesting to see how things end up with release reviews.