Question Raptor Lake - Official Thread

Page 183

Hulk

Diamond Member
Oct 9, 1999
4,487
2,464
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, then that could be a major selling point for people to move to ADL rather than wait.
 

Hulk

Diamond Member
Oct 9, 1999
4,487
2,464
136
How does an "E" core compare to a HT "core" in the big cores?

I will try to analyze this. It's a good question.
A P core at 5.5GHz does about 2800 points in CB R23 with HT on.
2150 purely ST. So the logical thread adds about 650 points, or about 23% of the core's combined (HT-on) score.

I feel like I'm back in college writing up a lab because I know the people around here will tear into me worse than a TA for the slightest calc. error or opinion not supported by the data!

At 4.3GHz an E core scores about 1150 per core.

So...

P physical + logical = 100%
P physical = 77%
E = 41%
P logical = 23%

This is why Thread Director is supposed to assign threads in order: P physical, then E, then P logical. It generally gets it right, but I notice that sometimes a logical thread will sneak in ahead of an E.

Four E's (one cluster) take 1.27 times the area of one P, yet in CB R23 they score 64% higher despite the clock speed disadvantage.

Or put another way, per square mm the E's are almost 30% more performant than the P's in CB R23.
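The back-of-the-envelope numbers above can be checked in a few lines. These are the rough CB R23 scores quoted in this post (one user's runs, not official benchmarks), and the 1.27x cluster-area figure is taken as given:

```python
# Rough CB R23 numbers quoted above (approximate, from one user's runs)
p_with_ht = 2800   # one P core at 5.5 GHz, HT on
p_st = 2150        # same core, single thread
e_core = 1150      # one E core at 4.3 GHz

ht_thread = p_with_ht - p_st                 # contribution of the logical thread
print(round(100 * p_st / p_with_ht))         # P physical: ~77%
print(round(100 * e_core / p_with_ht))       # E core:     ~41%
print(round(100 * ht_thread / p_with_ht))    # P logical:  ~23%

# One 4-E cluster vs one P core (cluster ~1.27x the P core's area)
cluster = 4 * e_core
print(round(100 * (cluster / p_with_ht - 1)))            # ~64% higher throughput
print(round(100 * ((cluster / 1.27) / p_with_ht - 1)))   # ~29% more per unit area
```

So the 64% and ~30% figures follow directly from the quoted scores and the area ratio.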

So for well-optimized MT applications the E's are the way to go. Intel realized that most applications either use 6-8 cores if they are not MT optimized, or they use all the cores available. That's just the state of software today, "generally."

Again, this is all working within a transistor/die-area budget. On TSMC 5 AMD could probably produce a 12+16 part that would be a real "problem" for Intel.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
From 8 to 16 threads the 16P will scale by 100 for each added thread, while the 8+16 will scale by 60 due to the lower E-core IPC.

So as thread count increases, the 16P will pull further and further ahead, with the difference culminating at 16 threads and 25% better throughput.

An 8+32 would lag even further behind a 24P, as at 24 threads the difference would be 50%.
So on what grounds are you naming those very specific core counts? Your argument basically seems to be that if you purposely constrain your scenario to best suit a particular CPU, that one will do better. Not exactly a surprise...
Amdahl's Law. There's a reason why AMD hasn't pushed past 16c for high-end consumer.
They surely will release a 24c CPU eventually, but regardless, neither Intel's 8+16 nor AMD's 16+0 chips are for consumers. They only make sense for productivity or similarly intense workloads. Even gaming more or less maxes out around 8 cores.
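For what it's worth, the linear-scaling model in the quoted post, and the Amdahl's Law point, are both easy to sketch. The per-thread weights of 100 (P) and 60 (E) come from the quoted post; everything else here is an illustrative toy model, not a benchmark:

```python
def throughput(n_threads, p_cores, e_cores, p_weight=100, e_weight=60):
    """Naive additive model from the quoted post: fill P threads first, then E."""
    p_used = min(n_threads, p_cores)
    e_used = min(max(n_threads - p_cores, 0), e_cores)
    return p_used * p_weight + e_used * e_weight

# 16 threads: 16P = 1600 vs 8+16 = 1280, i.e. 16P ~25% ahead, as quoted.
print(throughput(16, 16, 0) / throughput(16, 8, 16))            # -> 1.25
# 24 threads: 24P = 2400 vs 8+32 = 1760, ~36% under these weights
# (somewhat less than the 50% the quoted post claims).
print(round(throughput(24, 24, 0) / throughput(24, 8, 32), 2))  # -> 1.36

def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n cores when fraction p of the work parallelizes."""
    return 1 / ((1 - p) + p / n)

# Even at 90% parallel code, going 16 -> 24 cores buys relatively little:
print(round(amdahl_speedup(0.9, 16), 2))  # -> 6.4
print(round(amdahl_speedup(0.9, 24), 2))  # -> 7.27
```

The Amdahl numbers are the real reason thread-count comparisons flatten out: 50% more cores yields only ~14% more speedup at a 90% parallel fraction.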
 

DrMrLordX

Lifer
Apr 27, 2000
22,044
11,662
136
They surely will release a 24c CPU eventually

Maybe? It'll really depend on how long they stick with 8c CCDs. They had the opportunity to go with a 3 CCD config for AM5 and chose not to, and based on the package size, it's unlikely they'll expand beyond 2 CCDs unless the CCDs and IO dice get significantly smaller in future generations.

but regardless, neither Intel's 8+16 nor AMD's 16+0 chips are for consumers.

Yet they're on consumer platforms. Plus streamers really can use those extra cores.

(mostly these CPUs are for future-proofing and e-peen extension)
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
Maybe? It'll really depend on how long they stick with 8c CCDs. They had the opportunity to go with a 3 CCD config for AM5 and chose not to, and based on the package size, it's unlikely they'll expand beyond 2 CCDs unless the CCDs and IO dice get significantly smaller in future generations.
It's an interesting question. On one hand, if Intel is ever able to offer the oft-rumored 8+32 die on a comparable node, that would open up a pretty significant performance gap at the top end, and there tends to be a waterfall/halo effect on pricing for the lower-tier SKUs.

On the other hand, supporting a 3rd CCD would add notable cost to the IO die and package. And of course, the majority of people would be perfectly well served by 1-2 CCD products. So would it really matter if they didn't compete in the absolute highest tier?
Yet they're on consumer platforms. Plus streamers really can use those extra cores.

(mostly these CPUs are for future-proofing and e-peen extension)
Eh, I feel like streamers are pretty well served by the GPU encoders these days. But if you are doing CPU encoding, that's pretty parallel, so more cores/threads should be welcome.

But yes, there's certainly an element of internet e-peen as well...
 

DrMrLordX

Lifer
Apr 27, 2000
22,044
11,662
136
So would it really matter if they didn't compete in the absolute highest tier?

You mean in synthetics? Because moving to 32 e-cores would only show up in synthetics, and maybe some 3d rendering benchmarks.

But if you are doing CPU encoding, that's pretty parallel, so more cores/threads should be welcome.

I haven't tried dGPU streaming on anything recent, but x264 on the CPU is still really good (and flexible), and 16c CPUs are perfect for it, especially since most games don't scale past 8c and x264 doesn't really scale that well at high core counts either.
 

Kocicak

Golden Member
Jan 17, 2019
1,078
1,133
136
You mean in synthetics? Because moving to 32 e-cores would only show up in synthetics, and maybe some 3d rendering benchmarks.
Are you sure? I recently posted this screenshot of my CPU with 8 E cores running a game, and somehow all 8 E cores got utilised (10% on average, with spikes to 40%), even though the P cores could handle the load alone.

The point is: even this light load managed to benefit from 8 E cores, so why do you think a somewhat heavier load could not benefit from 32 E cores?

[Attachment: 13600K game load screenshot]
 

A///

Diamond Member
Feb 24, 2017
4,351
3,158
136
The 3950X was said to be a "beast," to borrow that stupid word from the young folk, in software-based encoding, since it lacked an iGPU like Intel's and could be tested for that purpose. All the modern high-end processors from Intel and AMD can blow past H.264 software encoding without skipping a beat in-game. AV1's knock on the door is a blessing for those who cannot afford the extra cores but can afford a middle-of-the-road priced card, provided they sell a portion of their liver.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
You mean in synthetics? Because moving to 32 e-cores would only show up in synthetics, and maybe some 3d rendering benchmarks.
Basically, yes. All the benchmarks people use for internet dick-measuring contests. Or more importantly to Intel/AMD, the benchmarks that get a lot of hype in the media.
I haven't tried dGPU streaming on anything recent, but x.264 on the CPU is still really good (and flexible), and 16c CPUs are perfect for it, especially since most games don't scale past 8c and x.264 doesn't really scale that well at high core counts either.
I'd go so far as to say that modern GPU encoders have rendered CPU encoding obsolete outside of professional use cases. The latest Intel and Nvidia encoders provide performance (quality and bitrate) comparable to (if not better than) x264 veryslow at negligible performance overhead. SVT-AV1 still has an edge last I checked (Ada results are hard to find), but with an utterly massive CPU compute cost, to the point where it de facto requires a dedicated streaming PC. So for most practical purposes, streaming is no longer a meaningful CPU benchmark.
 

coercitiv

Diamond Member
Jan 24, 2014
6,647
14,107
136
Are you sure? I recently posted this screenshot of my CPU with 8 E cores running a game, and somehow all 8 E cores got utilised (10% on average, with spikes to 40%), even though the P cores could handle the load alone.

The point is: even this light load managed to benefit from 8 E cores, so why do you think a somewhat heavier load could not benefit from 32 E cores?
Here's another screenshot, can you tell how many threads are running?

[Attachments: two CPU utilization screenshots]
 

Hulk

Diamond Member
Oct 9, 1999
4,487
2,464
136
Here's another screenshot, can you tell how many threads are running?

[Attachment: CPU utilization screenshot]


I have found that a good way to see how many threads a particular application is actually using is to look at the "average effective clock" for each core/thread in HWiNFO. Task Manager seems to overestimate thread/core usage.

Task Manager gives a feel-good "I'm using so many cores" impression that I don't think is accurate.

Also, when a thread is moving between cores quickly, I don't think Task Manager picks it up, and two cores that are sharing a single thread end up looking like two cores each being loaded.
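A toy simulation of this effect (purely illustrative, not how Task Manager actually samples): one fully busy thread that ping-pongs between two cores every sampling tick shows up as two cores at ~50% each, even though only one thread's worth of work is being done.

```python
# Toy model: a single 100%-busy thread migrates between core 0 and
# core 1 on every sampling tick. A per-core utilization graph then
# shows both cores at ~50%, despite there being only one worker thread.
TICKS = 100
core_busy = [0, 0]
for t in range(TICKS):
    core_busy[t % 2] += 1  # the one thread runs on a different core each tick

print([100 * b / TICKS for b in core_busy])  # -> [50.0, 50.0]
```

Per-thread "effective clock" readings sidestep this because they follow the thread, not the core.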
 

coercitiv

Diamond Member
Jan 24, 2014
6,647
14,107
136
Also when core usage is moving around quickly I don't think Task Manager picks it up and shows two cores that might be sharing a thread as looking like two cores actually being loaded.
That was my point. Without knowing the specific program and thread count, my example could easily be mistaken for an 8-thread workload, with some threads receiving less work than others. And yet it's just 4, and the system wasn't running anything else, with less than 1% utilization after CB stopped.
 

Mopetar

Diamond Member
Jan 31, 2011
8,110
6,755
136
If not for the process, Intel could probably have had 12-16c Golden Cove/Raptor Cove and been an actual competitor. Hybrid happened because of the process.

Intel's cores are too large for them to add many more than 8 per die. Unless they scrapped the GPU, even 12 would be pushing it, and you'd need one hell of a dense process to get 16 cores on a die without it being monstrous.

Their efficiency cores are more area efficient than anything else. They can fit about four of those into the same space that a performance core occupies. Until they overhaul and slim down the design of the performance core, I don't see them matching AMD on count. Not that they really need to since the E-cores are plenty capable and far more efficient per area when it comes to parallel workloads.
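Taking the post's rough rule of thumb at face value (~4 E cores in the area of 1 P core; an approximation, not a measured figure), the core-area budgets of the hybrid configs work out like this:

```python
# Rough rule of thumb from the post above: ~4 E cores fit in 1 P core's area.
E_PER_P_AREA = 4

def core_area_in_p_equivalents(p_cores, e_cores):
    """Core-area budget expressed in 'P-core equivalents'."""
    return p_cores + e_cores / E_PER_P_AREA

print(core_area_in_p_equivalents(8, 16))  # 8+16 ~ the area of 12 P cores
print(core_area_in_p_equivalents(8, 32))  # 8+32 ~ the area of 16 P cores
```

Which is why an 8+16 die can undercut a hypothetical 12P die on area while beating it on parallel throughput.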
 

A///

Diamond Member
Feb 24, 2017
4,351
3,158
136
Bit late for that. Slimming down the core would require more research and work after the design data has already gone into the core, to keep up with AMD, if it even can. It really does not matter at this juncture in time because Zen 5 needs to live up to the hype Papermaster has given it. 20-25% ST gains aren't gonna cut it.
 

Kocicak

Golden Member
Jan 17, 2019
1,078
1,133
136
Whatever the case, they need to send them to fat core camp over the summer so they can slim down.

Quite the opposite: they need to send the P cores to bodybuilding camp to get even fatter and stronger. Absolute performance is the goal there.

E cores need to get stronger too, but with a great emphasis on keeping them compact. Performance per die area is the goal there.

Performance per watt is equally important in both cases.
 

Alexium

Junior Member
Aug 18, 2017
13
3
81
Do the non-K 13xxx CPUs allow IccMax control? I've just learned that Icc power-limiting works much better for Rocket Lake than the actual PL, and then a tuned Rocket Lake is the same on power efficiency as Zen 3/4 and not 1.5x worse. This totally changes how I perceive these CPUs and I might try one, but I'm interested in the non-K i5-13500 - seems like a great deal.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,133
15,282
136
Do the non-K 13xxx CPUs allow IccMax control? I've just learned that Icc power-limiting works much better for Rocket Lake than the actual PL, and then a tuned Rocket Lake is the same on power efficiency as Zen 3/4 and not 1.5x worse. This totally changes how I perceive these CPUs and I might try one, but I'm interested in the non-K i5-13500 - seems like a great deal.
It does not matter how you tune a Rocket Lake, it will not be as efficient as Zen 3/4. Even if the wattage is the same, the Zen 3/4 will have more performance.

If you meant Raptor Lake (since that is the thread you posted in), then this MAY only apply to Zen 4. Not sure how Zen 3 would do at the same wattage.
 

Alexium

Junior Member
Aug 18, 2017
13
3
81
I indeed meant Raptor Lake, thank you for pointing out my typo; hopefully now I'll remember which is which. So is IccMax limiting available on non-K parts?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,133
15,282
136
I indeed meant Raptor Lake, thank you for pointing out my typo; hopefully now I'll remember which is which. So is IccMax limiting available on non-K parts?
That I can't answer. But my reply was still to dispel the rumor that Raptor Lake is more efficient. And to confirm: both Raptor Lake and Zen 4 can be limited to a certain current/wattage, but at the same setting Zen 4 will perform better. Thus, it's more efficient.
 

Alexium

Junior Member
Aug 18, 2017
13
3
81
I was considering the 13xxx series a complete joke until I was told about Icc limiting and shown this example. From a couple of reviews where the 7950X / 7950X3D were power-limited (TechPowerUp has a good one here), this seems to be on par:

[Attachment: efficiency comparison chart]
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,133
15,282
136
I was considering the 13xxx series a complete joke until I was told about Icc limiting and shown this example. From a couple of reviews where the 7950X / 7950X3D were power-limited (TechPowerUp has a good one here), this seems to be on par:

[Attachment: efficiency comparison chart]
OK, at stock the 13900K wins, at a higher wattage. At 125 W, the 7950X wins.
[Attachment: power-scaling comparison chart]
 