Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% improvement in PPW (performance per watt)
Last non-tiled consumer CPU, as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, that could be a major selling point for people to move to ADL now rather than wait.
 
  • Like
Reactions: vstar

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
I've very rarely seen people truly happy with CPU upgrades of only one generation. The speed gains are usually measurable, but so small that they can't easily be noticed outside of benchmarks. Plus, the i9 line almost never makes good sense outside of bragging rights. The 14700K (or KF) seems to be the sweet spot for the Raptor Lake refresh. The frequency difference between it and the 14900K is less than 2%, far too small to actually feel. The only time you'd notice the difference with the 14900K is when you really need all 16 E-cores, which is not common for most users. That is definitely not worth the current 41% price premium over the 14700K on Amazon.

The reality of the situation is that the one application I use that I wish were faster, namely PreSonus Studio One (mixing down tracks), is not very multithreaded, so 6 P-cores running fast is about the same as 8 P-cores running fast. I have my 13600K clocked at 5.4 GHz on the P-cores anyway, and when I'm mixing down tracks it's only drawing 88 W. This application is actually faster during this process with HT off. I may turn off one core and see if it slows down much; it might only really be stressing 1 or 2 cores, in which case I might be able to "find" my best cores and crank them up a bit more.
 
Jul 27, 2020
20,794
14,419
146
The reality of the situation is that the one application I use that I wish were faster, namely PreSonus Studio One (mixing down tracks), is not very multithreaded, so 6 P-cores running fast is about the same as 8 P-cores running fast.
Are you sure that it's only CPU limited and not I/O bound? Easy way to test it is to create a pretty large RAM drive, put the files you work on there and benchmark a typical session. If the program settings have the option to specify a temp drive that it can use for temporary storage, you can point it to a folder on the RAM drive for that too. Even the fastest NVMe SSD is no match for a RAM drive.
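
If you want a quick sanity check of the storage side, here's a minimal Python sketch of the comparison, assuming the RAM drive is already mounted; the drive letters and project folder are hypothetical stand-ins. The real test is whether the mixdown time itself changes, but this at least shows the raw read gap:

```python
import time
from pathlib import Path

def read_all(folder: str) -> float:
    """Read every file under the folder once and return elapsed seconds."""
    start = time.perf_counter()
    for f in Path(folder).rglob("*"):
        if f.is_file():
            f.read_bytes()
    return time.perf_counter() - start

# Hypothetical paths: the same project copied to the NVMe drive and a RAM drive.
nvme_time = read_all(r"D:\StudioOneProjects\MySong")
ram_time = read_all(r"R:\MySong")
print(f"NVMe: {nvme_time:.2f} s, RAM drive: {ram_time:.2f} s")
# Note: the OS file cache can hide the difference on a second pass, so use a
# cold copy (or reboot) if you want a fair comparison.
```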
 

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
Are you sure that it's only CPU limited and not I/O bound? Easy way to test it is to create a pretty large RAM drive, put the files you work on there and benchmark a typical session. If the program settings have the option to specify a temp drive that it can use for temporary storage, you can point it to a folder on the RAM drive for that too. Even the fastest NVMe SSD is no match for a RAM drive.
No, it's compute limited. It takes 20 seconds to render out a 6 MB MP3 file. The files it is mixing aren't that large either; we're talking audio files, not video.

Also, the render speed is directly proportional to the CPU clock; I've checked it. Good thought though!
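
Since it scales with clock, here's a rough back-of-the-envelope of what a faster chip would actually buy me (this assumes perfect frequency scaling and identical IPC, which won't quite hold across generations):

```python
# Back-of-the-envelope: if the mixdown scales inversely with P-core clock,
# the 20 s render at 5.4 GHz only improves modestly at higher clocks.
baseline_time_s = 20.0
baseline_clock_ghz = 5.4

for clock_ghz in (5.7, 6.0, 6.2):
    est_s = baseline_time_s * baseline_clock_ghz / clock_ghz
    print(f"{clock_ghz} GHz -> ~{est_s:.1f} s")
# 6.0 GHz -> ~18.0 s: measurable, but hardly transformative.
```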
 
  • Wow
Reactions: igor_kavinski

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
TBH, I would explore different applications to see which one is the fastest for my workflow, although for 20 seconds, I might not even bother.
The thing about the 20 seconds is that when mastering a project you need to lower a vocal a bit, do it, wait 20 seconds for the mastering project to update... rinse, repeat. After 20 times the wait gets old real fast. Even moving from 20 seconds to 16 helps the workflow.

I hate waiting, especially when I'm doing something creative, as the wait breaks the creative process. If Studio One were "smart" it would break the song into 50 slices and assign a processor to each slice, much like how we see CB R23 swarming during a run.
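
Just to illustrate the idea, a toy Python sketch of the slicing approach (render_slice, the song length, and the clean slice boundaries are all hypothetical; a real mixdown has effect tails, reverbs, and automation that make naive slicing much harder than this):

```python
from concurrent.futures import ProcessPoolExecutor

N_SLICES = 50          # the 50 slices suggested above
SONG_LENGTH_S = 240.0  # hypothetical 4-minute song

def render_slice(bounds):
    """Placeholder for rendering one time range of the song to audio."""
    start_s, end_s = bounds
    # ... the actual DSP work for this slice would go here ...
    return bounds

if __name__ == "__main__":
    step = SONG_LENGTH_S / N_SLICES
    slices = [(i * step, (i + 1) * step) for i in range(N_SLICES)]
    # Farm the slices out across all cores, CB R23 style; the rendered
    # pieces would then be stitched back together in order.
    with ProcessPoolExecutor() as pool:
        rendered = list(pool.map(render_slice, slices))
```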
 
  • Like
Reactions: Mopetar
Jul 27, 2020
20,794
14,419
146
If Studio One were "smart" it would break the song into 50 slices and assign a processor to each slice, much like how we see CB R23 swarming during a run.
That thought must have occurred to some creative who also happened to be a developer, so there's a good chance that something better than Studio One is out there.
 

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
That thought must have occurred to some creative who also happened to be a developer, so there's a good chance that something better than Studio One is out there.
Like most software, they all have their advantages and disadvantages. I've been using Studio One since its inception and have used many others. I like it. No software is perfect, unfortunately.
 

Timur Born

Senior member
Feb 14, 2016
300
154
116
What the use of an AVX offset (-1) looks like on a 13900K when per-core ratios don't match the Turbo ratios.

1-core load SSE vs. AVX: [screenshots]

2-core load SSE vs. AVX: [screenshots]

4-core load SSE vs. AVX: [screenshots]

6-core load SSE vs. AVX: [screenshots]

8-core load SSE vs. AVX: [screenshots]
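
For anyone unfamiliar with the setting: the offset simply knocks the active ratio down by one bin whenever AVX code is detected, so the clock you land at depends on which ratio (per-core or Turbo) is actually governing that load. A toy sketch of the arithmetic (the ratio table is made up for illustration, not the 13900K's real values):

```python
# Toy illustration of a -1 AVX offset at different core-load levels.
# The ratio table below is hypothetical, not the 13900K's actual Turbo table.
BCLK_MHZ = 100
AVX_OFFSET = 1
active_ratio_by_load = {1: 58, 2: 58, 4: 57, 6: 56, 8: 55}

for cores, ratio in active_ratio_by_load.items():
    sse_mhz = ratio * BCLK_MHZ
    avx_mhz = (ratio - AVX_OFFSET) * BCLK_MHZ
    print(f"{cores}-core load: SSE {sse_mhz} MHz vs AVX {avx_mhz} MHz")
```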
 

poke01

Platinum Member
Mar 8, 2022
2,528
3,347
106

Igor declares 14th gen IMC a sham.
14th gen as a whole is useless. No performance uplift on the flagship. The 10nm node is saturated; time for a new node.
 
Jul 27, 2020
20,794
14,419
146
14th gen as a whole is useless. No performance uplift on the flagship. The 10nm node is saturated; time for a new node.
They COULD have made some improvements: more cache, better tuning of the IMC, or heck, a 40-thread monster powered by more E-cores. But they chose not to, probably because the decision was made at the last minute. Now they're going to try to survive the onslaught of Zen 5 with unattractive, boring desktop silicon, depending primarily on the stupidity of the common buyer, especially the corporate hardware acquisition "experts". Intel is probably the luckiest corporation in the history of the US. They should be hanging by a thin thread by now, but it seems they have too many thin threads keeping them hanging.
 
  • Like
Reactions: Tlh97

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136

It seems the i9-14900KS may be real.

[attached screenshot of the listing]

That CANNOT be a typo, especially the 6.2 GHz part.

Guess this is Intel's Zen 5 counter strategy.

You know what's funny though?

Intel's top CPU being sold in an office PC :D
Good find! Maybe it's aimed at traders who need the fastest ST performance to analyze and make trades!
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Frivolous discussion regarding Zen 5 will absolutely NOT be tolerated in this thread unless there is an absolutely valid reason for it to be brought up.
Now they're going to try to survive the onslaught of Zen 5 with unattractive, boring desktop silicon,
How exciting do you expect Zen 5 to be?!
They can't add another CCD, because that would add 33% to the cost: they would be making 33% fewer CPUs out of the wafers they get (quick arithmetic at the end of this post).
They also wouldn't be able to power them, since the 7950X already hits the maximum PPT for the platform.
If they add a bunch of Zen 5c cores, it will add a lot of complexity and issues like Thread Director had, or like their own "game mode". It will also force the main cores to use less power, since the whole CPU is still bound to the max PPT, which is already maxed out by the 7950X.
If they increase the PPT, people will be pissed because AMD promised a long life cycle for the platform.
Unless they have a complete redesign of the architecture up their... sleeves, it's going to be a 10-15% increase in multithreaded performance at best.
15% if AMD manages to get every single bit of improvement out of the new node.
And while that is more exciting than 3-4%, it's not exciting enough to make many people throw away their current systems to upgrade.
All this while hoping that TSMC doesn't increase its prices yet another time, or, even worse, that something happens that stops AMD from getting product from them again.
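
That quick arithmetic on the 33%, for anyone who wants it spelled out (the CCD supply number is arbitrary, and this ignores the IOD and packaging costs):

```python
# Going from 2 CCDs per CPU to 3 means each CPU consumes 50% more CCD
# silicon, so a fixed supply of good CCDs builds ~33% fewer CPUs.
good_ccds = 600  # arbitrary supply from some wafer allocation
cpus_with_2_ccds = good_ccds // 2   # 300
cpus_with_3_ccds = good_ccds // 3   # 200
drop = 1 - cpus_with_3_ccds / cpus_with_2_ccds
print(f"{cpus_with_2_ccds} -> {cpus_with_3_ccds} CPUs ({drop:.0%} fewer)")
```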
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
I was expecting the 14900Ks to be more efficient than 13th gen. They are turning out to be furnaces.
To measure efficiency you would have to test both under the same conditions.
If you are only testing max power draw, you are just testing how much the resilience of the node changed.
With both running at the same power (253 W), the 14900K is 4% better than the 13900K and only a little bit better than the 13900KS, which already was a refinement.
It's not much, but considering that it's the same node, it's about what was expected.
[attached results table]
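
Put differently, at a fixed power limit efficiency is just score per watt, so the whole comparison reduces to something like this (the scores are placeholders picked to roughly match the ~4% gap, not the chart's actual data):

```python
# Iso-power comparison: every chip capped at the same 253 W, so efficiency
# is simply score / watt. Scores below are placeholders, not measured data.
POWER_LIMIT_W = 253
scores = {"13900K": 38000, "13900KS": 39000, "14900K": 39500}

base = scores["13900K"]
for chip, score in scores.items():
    print(f"{chip}: {score / POWER_LIMIT_W:.0f} pts/W "
          f"({score / base - 1:+.1%} vs 13900K)")
```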
 
  • Like
Reactions: controlflow

Hulk

Diamond Member
Oct 9, 1999
4,695
2,847
136
To measure efficiency you would have to test both under the same conditions.
If you are only testing max power draw, you are just testing how much the resilience of the node changed.
With both running at the same power (253 W), the 14900K is 4% better than the 13900K and only a little bit better than the 13900KS, which already was a refinement.
It's not much, but considering that it's the same node, it's about what was expected.
[attached results table]
The problem with measuring power draw between 13th and 14th gen is that the difference looks to be so small that it is within the variation of sample binning.
For example, 14900K parts, depending on SP rating, have 6 GHz VIDs that range from just over 1.5 V to just under 1.4 V. Of course, 13900KS and some 13900K parts will also hit 6 GHz. If they are all using the same voltage under load, the efficiency will be the same. But if the higher-binned parts are requesting lower VIDs and/or are undervolted a bit, then they will measure as more efficient.

The thing is, it looks like there are no process changes from 13th to 14th gen, just more binning and perhaps better yields. It's a lottery for the 14900K: you could get a part that needs 1.5 V for 6 GHz, or you could get one that needs 1.388 V for 6 GHz. That's a big difference.
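
To put rough numbers on that spread: dynamic power scales roughly with V² x f, so at a fixed 6 GHz the voltage difference alone is worth a noticeable chunk of power. A back-of-the-envelope sketch (ignores leakage and load-line effects):

```python
# Rough CMOS dynamic power scaling: P ~ C * V^2 * f. At the same 6 GHz the
# frequency and capacitance terms cancel, leaving just the voltage ratio.
v_unlucky_bin = 1.5    # V needed for 6 GHz on a poor 14900K sample
v_lucky_bin = 1.388    # V needed for 6 GHz on a good sample
ratio = (v_unlucky_bin / v_lucky_bin) ** 2
print(f"~{ratio - 1:.0%} more core power for the 1.5 V part at the same clock")
```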

I guess my point is that since there is fundamentally no difference between generations apart from binning (I believe), it's a lottery as far as efficiency goes, and the differences will be minor.

If there is going to be a 14900KS, I'll be very interested to see how they fare as far as 6 GHz VID specs and the V/F curve points in general compared to the 14900K and 13900KS. From what I've been reading, it seems the 13900KS parts are more consistent than the 14900K, meaning they are all pretty good. With the 14900K you could get one that has a better V/F curve than a 13900KS, or you could get a furnace (~1.5 V VID @ 6 GHz).

Intel 7 is tapped out.
 

Timur Born

Senior member
Feb 14, 2016
300
154
116
To measure efficiency you would have to test both under the same conditions.
If you are only testing max power draw, you are just testing how much the resilience of the node changed.
With both running at the same power (253 W), the 14900K is 4% better than the 13900K and only a little bit better than the 13900KS, which already was a refinement.
It's not much, but considering that it's the same node, it's about what was expected.
But we don't learn anything about power or efficiency from this table, just how different single-core frequencies unsurprisingly affect the results. And there are no "running at the same power (253 W)" results in this table, as it is a single-core test.

My undervolted 13900K reaches 2360-ish points at 38 W package power when running CB23 at 6.0 GHz; that's less than 1.35 V for a (real) single-core (not dual-core) load. And my CPU is just a "good enough" bin, not a stellar one.
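
For scale, that works out to roughly 62 CB23 points per package watt (a quick calculation from the figures above; single-core package power obviously includes some uncore overhead):

```python
# Single-core efficiency from the numbers above: points per package watt.
cb23_points = 2360
package_power_w = 38
print(f"~{cb23_points / package_power_w:.0f} points per package watt at 6.0 GHz")
```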
 