Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If so, that could be a major selling point for people to move to ADL now rather than wait.
 
  • Like
Reactions: vstar

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
There is no change in latency or other behavior within the core. It's the fabric that's worse, primarily the memory controller latency.

I had a look at the memory latency scores of the 11900K, and while they are definitely worse than Comet Lake's, they still aren't too bad. I don't know if memory latency alone would explain the performance discrepancies.

Might be some inter-core latency issues as well.
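For anyone who wants to eyeball this on their own rig, below is a rough pointer-chase sketch in Python. It's purely illustrative: interpreter overhead inflates every number, so only the relative jump between the in-cache and out-of-cache sizes means anything; a proper tool (e.g., Intel MLC or AIDA64) is what you'd want for real numbers.

```python
import random
import time

def chase_ns(n_elems, steps=1_000_000):
    """Time a dependent pointer chase through a shuffled index array.
    Each load depends on the previous one, so once the array falls out
    of cache the per-step time tracks memory latency, not bandwidth."""
    perm = list(range(n_elems))
    random.shuffle(perm)
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = perm[idx]  # serialized, latency-bound access
    return (time.perf_counter() - t0) / steps * 1e9  # ns per load

# Small array stays cache-resident; the big one spills to DRAM.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>11,} elements: {chase_ns(n):6.1f} ns/load (interpreter overhead included)")
```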

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You can't use SPEC ST in one instance and SPEC MT (which is just rate-N) in another instance to conclude that SPEC is garbage. SPEC is similar to Geekbench - both of them test real-world workloads.

I don't dispute that SPEC tests real-world workloads. I dispute that it tests real-world workloads in a way that's reflective of actual consumer, enterprise, or HPC programs. This is a valid criticism, and if you ever go over to the RealWorldTech forums, you'll see that many industry professionals complain about SPEC in that manner.

As for Geekbench, I remember running Geekbench 5 a long time ago on my 6900K. It didn't even put enough load on my CPU to get it to go into turbo mode, and whatever bandwidth test it used wasn't even aware that my CPU had quad-channel memory. I uninstalled it and never ran it again.

Golden Cove is around 15% higher IPC than Zen 3 in Geekbench, when you equip both platforms with fast RAM. The reason why the difference is much lower on SPEC in the Anandtech graphs is because they test with bog-standard JEDEC-spec memory.

Both of the reviews I took those graphs from used DDR5-4400 memory, which is slower than the DDR5-4800 that Anandtech used in their review, at least in terms of frequency. The timings were probably tighter for the DDR5-4400 used in the Phoronix and Tom's Hardware reviews, but subtimings aren't going to explain why there is such a massive difference in the performance of the 12900K between SPEC and real-world encoding and compiling workloads.
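For what it's worth, the frequency-vs-timings trade-off is simple arithmetic: absolute CAS latency in nanoseconds is CL × 2000 / (data rate in MT/s). A quick sketch, with the caveat that the CL values below are placeholders, since the reviews didn't all publish their timings:

```python
def first_word_latency_ns(transfer_rate_mts, cas_latency):
    """DDR transfers twice per I/O clock, so one clock = 2000 / MT/s ns."""
    return 2000 / transfer_rate_mts * cas_latency

# Placeholder CL values; the actual review kits' timings weren't all published.
for name, mts, cl in [("DDR5-4400 CL36", 4400, 36),
                      ("DDR5-4800 CL40", 4800, 40)]:
    print(f"{name}: {first_word_latency_ns(mts, cl):.1f} ns")
# ~16.4 ns vs ~16.7 ns: rate and timings largely cancel out.
```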
 

tamz_msc

Diamond Member
Jan 5, 2017
3,763
3,585
136
I dispute that it tests real-world workloads in a way that's reflective of actual consumer, enterprise, or HPC programs.
That doesn't make any sense - go read the SPEC documentation to find out what workloads SPEC runs; they're fairly representative of the kinds of things the domains you mention would want to run. People have a problem with the compiler tuning that vendors like to do when submitting official scores, not with the benchmark itself.
As for Geekbench, I remember running Geekbench 5 a long time ago on my 6900K. It didn't even put enough load on my CPU to get it to go into turbo mode, and whatever bandwidth test it used wasn't even aware that my CPU had quad-channel memory. I uninstalled it and never ran it again.
Something was off with your system when you ran Geekbench. There are a multitude of scores in its database from CPUs operating with functional turbo/boost. People here on this forum run it without any problems whatsoever.
Both of the reviews I took those graphs from used DDR5-4400 memory, which is slower than the DDR5-4800 that Anandtech used in their review, at least in terms of frequency. The timings were probably tighter for the DDR5-4400 used in the Phoronix and Tom's Hardware reviews, but subtimings aren't going to explain why there is such a massive difference in the performance of the 12900K between SPEC and real-world encoding and compiling workloads.
"Compiling and encoding" isn't universally faster on the 12900K. The Phoronix test suite proves that. It depends on the encoder, the quality preset, how many makefiles there are, how they're linked, etc. (a quick illustration of the preset effect is sketched below).
 

dullard

Elite Member
May 21, 2001
25,042
3,395
126
I keep seeing people post that they think clocks will go higher in the future. Not gonna happen.

Thermal density is a huge issue. Clocks will likely regress slightly.
I think it comes from rumors and wishes. In other words, we don't have definitive evidence yet.

1) Rumor: AdoredTV claims a leak reporting 5.5 GHz. https://www.notebookcheck.net/Full-...minance-with-the-Core-i9-13900K.555908.0.html

2) Wish: Intel has almost never regressed on turbo clock speeds when going to a new generation. People wish that trend continues.

3) Rumor: Mooreslawisdead claims core frequency improvements.

4) Rumor: I forget who posted this one; was it also Mooreslawisdead? https://www.tweaktown.com/image.php...flagship-16-cores-24-threads-in-2022_full.jpg
 
Last edited:
  • Like
Reactions: Mopetar

Hougy

Member
Jan 13, 2021
77
60
61
I'm not a fan of this thread. I think it's better if all future Intel microarchitectures are discussed in a single thread, so that we don't have to check multiple threads to discuss the same subject.
 

DrMrLordX

Lifer
Apr 27, 2000
21,608
10,802
136
I'm not a fan of this thread. I think it's better if all future Intel microarchitectures are discussed in a single thread, so that we don't have to check multiple threads to discuss the same subject.

There was such a thread, but it was abandoned in favor of an Alder Lake thread.
 

coercitiv

Diamond Member
Jan 24, 2014
6,176
11,809
136
There was such a thread, but it was abandoned in favor of an Alder Lake thread.
The Intel current and future Lakes & Rapids thread is still active; the Alder Lake thread was created with the specific purpose of discussing a currently launched gen, allowing the bigger thread to cover future-related topics without confusing readers looking for specific ADL content.

AMD may be trying to laugh off those E's right now, but they are going to be a thorn in their side, I predict.
Out of curiosity, let's play WHAT IF and assume the situation is reversed: Intel has Golden Cove in tiles and the 12900K is a 16c/32t homogeneous arch, while AMD has a heterogeneous monolithic arch with 8 Zen3 cores and 16 Zen3C cores with Zen2-like IPC and lower fmax, aimed at throughput loads.

Who do you think has the upper hand?

Hint: 3 tiles, but they won't bother.
 

DrMrLordX

Lifer
Apr 27, 2000
21,608
10,802
136
The Intel current and future Lakes & Rapids thread is still active; the Alder Lake thread was created with the specific purpose of discussing a currently launched gen, allowing the bigger thread to cover future-related topics without confusing readers looking for specific ADL content.

Right, it's just that all the current-product banter was essentially removed from the thread. Taking Raptor Lake-specific stuff back to the thread would seem counterintuitive, since all the Alder Lake posting was taken out of that thread already.
 

mikk

Diamond Member
May 15, 2012
4,131
2,127
136
There are rumors from several people that something is locked in Golden Cove which could be unleashed in Raptor Lake.


In the middle of Q3, Intel's 13th-gen Core (LGA1700, Raptor Lake) and Z790/B760 also come to market, with no H710; Intel 600- and 700-series parts are cross-compatible.
By the way, the Golden Cove in Alder Lake is not running at full strength; some things are disabled. Raptor Lake may have it fully unlocked, and at the same time the cache architecture is optimized. Although the essence hasn't changed much, there is still a big improvement.
By the way, Raptor Lake is also compatible with DDR4.

A source tells me that the GLC arch has something locked, and that it will be unlocked in the next-gen MSDT and server parts to counter competitors. Although the source is very reliable, I still doubt the authenticity of the info, so I think it needs to be confirmed with SPR.

Both SPR & Raptor Lake have enhancements & features in the big cores Alder Lake is missing.
 
  • Love
Reactions: Carfax83

coercitiv

Diamond Member
Jan 24, 2014
6,176
11,809
136
Right, it's just that all the current-product banter was essentially removed from the thread. Taking Raptor Lake-specific stuff back to the thread would seem counterintuitive, since all the Alder Lake posting was taken out of that thread already.
Raptor Lake is a future architecture; Alder Lake was removed for being the current arch. I'm having difficulty following your train of thought: do you think the big Intel thread no longer makes sense once the Alder Lake content was branched out?
 

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Out of curiosity, let's play WHAT IF and assume the situation is reversed: Intel has Golden Cove in tiles and the 12900K is a 16c/32t homogeneous arch, while AMD has a heterogeneous monolithic arch with 8 Zen3 cores and 16 Zen3C cores with Zen2-like IPC and lower fmax, aimed at throughput loads.

Who do you think has the upper hand?

Hint: 3 tiles, but they won't bother.

Hmm. Intel with 16 Golden Coves tiled or AMD with 8+16 (Zen 3 + Zen 2 equivalent) monolithic?

I think due to the higher IPC of GC it wins 70% of the time, but in highly MT apps the Zen 3/2 hybrid wins. AMD wins on power.

Thing is, despite the vastly different designs of Alder Lake and Zen 3, they are very competitive, so turning the tables doesn't change much. I think one deciding factor in how things move forward between AMD and Intel is whether Intel 4 gets off the ground without massive delays.
 

Doug S

Platinum Member
Feb 8, 2020
2,235
3,453
136
I don't dispute that SPEC tests real-world workloads. I dispute that it tests real-world workloads in a way that's reflective of actual consumer, enterprise, or HPC programs. This is a valid criticism, and if you ever go over to the RealWorldTech forums, you'll see that many industry professionals complain about SPEC in that manner.

Yes we complain about SPEC a lot in the RWT forums - but also agree pretty much universally that "SPEC isn't that great of a benchmark, but everything else is worse".

SPEC2017 at least addressed one of the bigger issues with SPEC2006: that several benchmarks had been "broken" by compilers. AFAIK none of SPEC2017's benchmarks has been broken yet. The biggest remaining issue is that the way it is used in vendor submissions is unrealistic compared to the way most applications (let alone phone apps) are delivered now - they use the highest levels of optimization the compiler is capable of, substitute alternate malloc libraries, use feedback-directed optimization, etc.

The way Anandtech performs its SPEC runs is actually better IMHO, because they decided to use standard flags like regular developers would, and unless they're specifically testing overclocking-type stuff, they run CPUs at default settings with default DRAM.

Complain about SPEC all you want, but all other benchmarks are WORSE.
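To put a number on the flags point, here's a minimal sketch, assuming gcc on PATH. The toy kernel and the "vendor-style" flag set are only illustrative; real submissions go further with profile feedback and alternate mallocs:

```python
import os
import subprocess
import tempfile
import time

# Toy FP kernel; real SPEC-style code responds even more to tuning.
C_SRC = r"""
#include <stdio.h>
int main(void) {
    double s = 0.0;
    for (long i = 1; i < 200000000L; i++) s += 1.0 / (double)(i * i);
    printf("%f\n", s);
    return 0;
}
"""

FLAG_SETS = {
    "developer-ish": ["-O2"],
    "vendor-style":  ["-Ofast", "-march=native", "-flto", "-funroll-loops"],
}

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "kernel.c")
    with open(src, "w") as f:
        f.write(C_SRC)
    for name, flags in FLAG_SETS.items():
        exe = os.path.join(d, name)
        subprocess.run(["gcc", *flags, src, "-o", exe], check=True)
        t0 = time.perf_counter()
        subprocess.run([exe], check=True, stdout=subprocess.DEVNULL)
        print(f"{name:>13}: {time.perf_counter() - t0:.2f} s")
```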
 

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Yes we complain about SPEC a lot in the RWT forums - but also agree pretty much universally that "SPEC isn't that great of a benchmark, but everything else is worse".

SPEC2017 at least addressed one of the bigger issues with SPEC2006: that several benchmarks had been "broken" by compilers. AFAIK none of SPEC2017's benchmarks has been broken yet. The biggest remaining issue is that the way it is used in vendor submissions is unrealistic compared to the way most applications (let alone phone apps) are delivered now - they use the highest levels of optimization the compiler is capable of, substitute alternate malloc libraries, use feedback-directed optimization, etc.

The way Anandtech performs its SPEC runs is actually better IMHO, because they decided to use standard flags like regular developers would, and unless they're specifically testing overclocking-type stuff, they run CPUs at default settings with default DRAM.

Complain about SPEC all you want, but all other benchmarks are WORSE.

Benchmarks... what's good, what's bad? It's a deep rabbit hole that goes on and on. Having been in this "game" of following tech for the past 30 years, for me the simpler the benchmark, the better. Some thoughts:

1. I don't like benchmarks I can't run on my computer for free. It's hard to verify benches, and it's hard to see how my rigs compare, especially if the newer benches haven't been run on older CPUs.
2. I don't like benches that aren't portable. The less mucking up my rig, the better.
3. I don't like benches that aren't precise, meaning there shouldn't be a lot of variability in the result from run to run (a quick way to check this is sketched at the end of this post).
4. I don't like benches that don't show much difference in performance among generations of CPUs. For example, years and years ago CPUmark99 was a widely used CPU bench, but by the time of Haswell just about every CPU performed the same on it, as it could exploit neither multiple cores nor even much instruction-level parallelism.
5. I like benches where I can "see" the work or task being completed.
6. I like benches that are widely published, for obvious reasons.
7. Benches are helpful, but at the end of the day you've gotta see how various CPUs handle tasks that are critical to you.

For these reasons, lately I've been focused on Cinebench R23. It ticks all of the right boxes for me and my workload. It highlights differences between CPUs, is precise, portable, and gives me a quick indication of the ST and MT "strength" of a CPU. After that I look at specific application results that are applicable to my workflow. Yes, CB is not perfect; no bench is. But as I said, it's subjective which bench a specific person will "take to heart."

Early CB R23 leaks showed Alder Lake being very strong ST and as good as the 5950X in MT. That turned out to be a pretty good average of how things worked out over broad application testing.

The thing about leaks is that they rarely divulge clocks, which makes them just about meaningless, especially if the scores are low. If the scores are really good, then we can assume highish ~5 GHz clocks and get a good idea of performance.
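On the precision point (item 3 above), the usual sanity check is the coefficient of variation over repeated runs. A sketch with made-up scores:

```python
from statistics import mean, stdev

# Hypothetical repeated scores from one machine at fixed settings.
scores = [27512, 27390, 27618, 27455, 27544]

cv_pct = stdev(scores) / mean(scores) * 100
print(f"mean = {mean(scores):.0f}, stdev = {stdev(scores):.0f}, CV = {cv_pct:.2f}%")
# A CV well under 1% is tight enough to resolve small generational deltas.
```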
 

Doug S

Platinum Member
Feb 8, 2020
2,235
3,453
136
For these reasons, lately I've been focused on Cinebench R23. It ticks all of the right boxes for me and my workload. It highlights differences between CPUs, is precise, portable, and gives me a quick indication of the ST and MT "strength" of a CPU. After that I look at specific application results that are applicable to my workflow. Yes, CB is not perfect; no bench is. But as I said, it's subjective which bench a specific person will "take to heart."

Something that is applicable to your work is always the best benchmark, but we're talking about generally applicable benchmarks. Cinebench is a terrible general-purpose benchmark; it is hardly better than Dhrystone. If it works for you, that's great, but it is not applicable to the world at large.
 

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Something that is applicable to your work is always the best benchmark, but we're talking about generally applicable benchmarks. Cinebench is a terrible general-purpose benchmark; it is hardly better than Dhrystone. If it works for you, that's great, but it is not applicable to the world at large.

The "world at large" may disagree somewhat, seeing that just about every CPU review includes CB. It's pretty much ubiquitous.
 

Mopetar

Diamond Member
Jan 31, 2011
7,826
5,969
136
Cinebench is probably featured in most reviews because it is representative of a workload that professionals care about, generally shows how well a processor does with an embarrassingly parallel problem, and probably most importantly is available for anyone to run. Other benchmarks require a license and not everyone wants to pay that fee.

Are there other benchmarks that would make a good substitute for Cinebench that are as easy to conduct or don't require purchasing an application that you'd rarely use outside of benchmarking purposes? Maybe there are a few that come close, but I suspect that Cinebench itself comes away as a clear winner when all of the different considerations are factored in.
 
  • Like
Reactions: BorisTheBlade82

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
Cinebench is probably featured in most reviews because it is representative of a workload that professionals care about, generally shows how well a processor does with an embarrassingly parallel problem, and probably most importantly is available for anyone to run.

All points of view are valid here, of course. The fact that it is "embarrassingly parallel" is, to me, one of the reasons I like it. I hope it is a good predictor of performance in applications that progressively migrate toward being "embarrassingly parallel." We can only hope, as we have been for the past 20 years. I'm only quoting that term because I really like it! Very descriptive.
 

Hitman928

Diamond Member
Apr 15, 2012
5,232
7,773
136
Cinebench is probably featured in most reviews because it is representative of a workload that professionals care about, generally shows how well a processor does with an embarrassingly parallel problem, and probably most importantly is available for anyone to run. Other benchmarks require a license and not everyone wants to pay that fee.

Are there other benchmarks that would make a good substitute for Cinebench that are as easy to conduct or don't require purchasing an application that you'd rarely use outside of benchmarking purposes? Maybe there are a few that come close, but I suspect that Cinebench itself comes away as a clear winner when all of the different considerations are factored in.

Blender. A little more difficult to run, but not much. Blender is used all the time by professionals and has support across a wide range of compute environments.

Edit: Just to add, it's free and already has regularly benchmarked scenes ready to be tested.
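A minimal sketch of timing one of those scenes headlessly, assuming blender is on PATH; the bmw27.blend path is a placeholder for whichever demo scene you download:

```python
import subprocess
import time

SCENE = "bmw27.blend"  # placeholder path to the classic BMW demo scene

t0 = time.perf_counter()
# -b = run headless, -f 1 = render frame 1 with the scene's own settings
subprocess.run(["blender", "-b", SCENE, "-f", "1"], check=True)
print(f"Render time: {time.perf_counter() - t0:.1f} s")
```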
 
Last edited:
  • Like
Reactions: Mopetar

dullard

Elite Member
May 21, 2001
25,042
3,395
126
Are there other benchmarks that would make a good substitute for Cinebench that are as easy to conduct or don't require purchasing an application that you'd rarely use outside of benchmarking purposes? Maybe there are a few that come close, but I suspect that Cinebench itself comes away as a clear winner when all of the different considerations are factored in.
When you talk about professional engineering work, you run into this cost problem. Good benchmarks are few and far between because no reviewer wants to pay thousands of dollars just to run a benchmark.

SPEChpc is much more representative of this type of work. But then, you won't see desktop chips in that ranking.
 

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
When you talk about professional engineering work, you run into this cost problem. Good benchmarks are few and far between because no reviewer wants to pay thousands of dollars just to run a benchmark.

SPEChpc is much more representative of this type of work. But then, you won't see desktop chips in that ranking.

Remember Winstone? It was fun watching that one load up all the apps and run the scripts. It would be fun to see a modern processor rip through that.
 
  • Love
Reactions: igor_kavinski

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
832
136
Cinebench is probably featured in most reviews because it is representative of a workload that professionals care about, generally shows how well a processor does with an embarrassingly parallel problem, and probably most importantly is available for anyone to run. Other benchmarks require a license and not everyone wants to pay that fee.

Are there other benchmarks that would make a good substitute for Cinebench that are as easy to conduct or don't require purchasing an application that you'd rarely use outside of benchmarking purposes? Maybe there are a few that come close, but I suspect that Cinebench itself comes away as a clear winner when all of the different considerations are factored in.
Cinebench also includes both single-core and multi-core benchmarking.
 
  • Like
Reactions: Mopetar

Mopetar

Diamond Member
Jan 31, 2011
7,826
5,969
136
Cinebench also includes both single-core and multi-core benchmarking.

I guess it's nice that it has both, but I generally use it as an indication of how good multi-core/thread performance can be, since it's generally not too limited by other factors and doesn't have the obvious run-in with Amdahl's law, where performance is heavily gated by some slow portion that can't be parallelized in any way.

Maybe the ST benchmark adds a bit of gravy as far as testers are concerned because you do get both figures from one benchmark.
 
  • Like
Reactions: CHADBOGA

eek2121

Platinum Member
Aug 2, 2005
2,929
4,000
136
Yes we complain about SPEC a lot in the RWT forums - but also agree pretty much universally that "SPEC isn't that great of a benchmark, but everything else is worse".

SPEC2017 at least addressed one of the bigger issues with SPEC2006: that several benchmarks had been "broken" by compilers. AFAIK none of SPEC2017's benchmarks has been broken yet. The biggest remaining issue is that the way it is used in vendor submissions is unrealistic compared to the way most applications (let alone phone apps) are delivered now - they use the highest levels of optimization the compiler is capable of, substitute alternate malloc libraries, use feedback-directed optimization, etc.

The way Anandtech performs its SPEC runs is actually better IMHO, because they decided to use standard flags like regular developers would, and unless they're specifically testing overclocking-type stuff, they run CPUs at default settings with default DRAM.

Complain about SPEC all you want, but all other benchmarks are WORSE.

IMO, the only way a benchmark can be useful is if it supports all of the instructions on the CPU it is running on. Not supporting AVX2, AVX-512, or some other instruction set when the host CPU does leads to an inaccurate picture of how the CPU performs. Instead of measuring theoretical (or actual) performance, they measure a baseline. A newly introduced (or enhanced) instruction set could boost performance by several percentage points, or it could cut power consumption by several percentage points. You won't see that with most of the current benchmarks today. Sure, software needs to be changed to take advantage of new instructions, but pretending they do not exist is wrong, IMO. I bring this up because most benchmarks lack support in one area or another.
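Checking what the host actually exposes is the easy half of the problem, which makes it simple to spot a benchmark binary leaving instructions on the table. A sketch using the third-party py-cpuinfo package (an assumption on my part; on Linux, grepping /proc/cpuinfo works just as well):

```python
import cpuinfo  # third-party: pip install py-cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
for isa in ("avx2", "avx512f", "avx512_vnni", "sha_ni"):
    print(f"{isa:12s} {'yes' if isa in flags else 'no'}")
```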
 
  • Like
Reactions: Carfax83

Doug S

Platinum Member
Feb 8, 2020
2,235
3,453
136
IMO, the only way a benchmark can be useful is if it supports all of the instructions on the CPU it is running on. Not supporting AVX2, AVX-512, or some other instruction set when the host CPU does leads to an inaccurate picture of how the CPU performs. Instead of measuring theoretical (or actual) performance, they measure a baseline. A newly introduced (or enhanced) instruction set could boost performance by several percentage points, or it could cut power consumption by several percentage points. You won't see that with most of the current benchmarks today. Sure, software needs to be changed to take advantage of new instructions, but pretending they do not exist is wrong, IMO. I bring this up because most benchmarks lack support in one area or another.

So what about the GPU, or fixed-function units for encoding/decoding/encryption/decryption? Or do you believe there is some qualitative difference in functionality if it is made available via a CPU opcode versus being outside the CPU? What about stuff like Apple's AMX, which is accessed via a CPU instruction but whose engine does not appear to actually be part of the CPU?

Compilers are rarely able to extract the parallelism required to usefully exploit facilities like AVX-512; it takes either hand-coded assembly or, more often, calling a library function. What difference does it make if that library delivers its results using AVX-512 or the GPU?
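That last point is easy to demonstrate without touching a compiler flag: just call a library that does its own runtime dispatch. A sketch comparing a scalar Python dot product against NumPy, whose BLAS backend picks the widest SIMD path (AVX2, AVX-512, etc.) it finds at runtime:

```python
import time
import numpy as np

n = 2_000_000
a, b = np.random.rand(n), np.random.rand(n)
al, bl = a.tolist(), b.tolist()

t0 = time.perf_counter()
s_py = sum(x * y for x, y in zip(al, bl))  # scalar loop, no SIMD
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
s_np = float(a @ b)  # BLAS dispatches to SIMD internally
t_np = time.perf_counter() - t0

print(f"pure Python: {t_py*1e3:.1f} ms, numpy: {t_np*1e3:.2f} ms, ~{t_py/t_np:.0f}x")
```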