Question Raptor Lake - Official Thread

Page 130 - AnandTech Forums

Hulk

Diamond Member
Oct 9, 1999
4,525
2,519
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current z690 motherboards? If yes then that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
It's not from a reviewer; it was just me using Geekbench 5 to compare a 65W 16-thread CPU from Intel (the 13400) to a 65W 16-thread CPU from AMD (the 7700 non-X model). Yes, they likely sit at different prices and in different segments.

The 13600 non-K is a better match because it has 20 threads and decent clocks.


Now that the 13600 has 4 extra threads, is it fair now?
The 13600K has a base clock of 3.5 GHz for the P-cores, yet in your link it shows up as 2.7 GHz; unless we have a review that actually confirms the clocks and the performance, you can't compare them.
Same for the 13400 you showed before: it's shown at 2.5 GHz, and since that one isn't released yet we have no idea what the base clock should be.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
The 13600K has a base clock of 3.5 GHz for the P-cores, yet in your link it shows up as 2.7 GHz
It's the 13600 non-K, the fake Raptor Lake (an Alder Lake rebrand). It has 20 threads and is rated at 65W.

Speaking of fake Raptor Lake, here is yet another one:

The 13900HK

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
One of the best reviews I've seen so far. He benchmarked at 1080p low settings, which puts the burden on the CPU, with an RTX 4090 and DDR5-6800 CL34 memory. Also, unlike other reviews, the ASRock motherboard he used automatically capped the power at 253 watts:

 

coercitiv

Diamond Member
Jan 24, 2014
6,677
14,275
136
PSA: make sure you do a hard reset (short the RTC) on many Z690 mobos that have been upgraded from Alder to Raptor. I just partially fried my 13900K somehow after replacing a 12900K in an Asus TUF Z690 Plus DDR4 motherboard. I upgraded to the latest firmware on Asus' site, shorted the RTC jumper while power was cut off, and I thought I was golden. It booted up fine and worked well for a day or so after enabling XMP. Today it will no longer boot at XMP speeds and my M.2 slot 1 has gone bad. I found out the stupid motherboard was running the CPU at 1.5 volts for some reason; I thought the high temps were just expected with RL. I then shorted the RTC while the computer was on, and only then was everything reset, but the damage had already occurred. Putting back my 12900K results in booting fine at XMP, with M.2 slot 1 working again.
If you're running the latest BIOS on that board, you may want to consider downgrading to the previous one. (or at least investigate this issue on the forums). Your problem has a high chance of being software related, hence fixable sooner or later.
PSA: some Intel BIOS updates are nasty, bricking the NVMe slot attached to the CPU.

 
  • Like
Reactions: gdansk

Pneumothorax

Golden Member
Nov 4, 2002
1,181
23
81
If you're running the latest BIOS on that board, you may want to consider downgrading to the previous one. (or at least investigate this issue on the forums). Your problem has a high chance of being software related, hence fixable sooner or later.
OMG, thanks for the link. That's crazy! What version of BIOS do I use on my Asus? I'm thinking about just bringing this CPU back to BB; I'm tired of being a beta tester for this crap. Surprised this made it past quality control.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
OMG, thanks for the link. That's crazy! What version of BIOS do I use on my Asus? I'm thinking about just bringing this CPU back to BB; I'm tired of being a beta tester for this crap. Surprised this made it past quality control.
Whoever buys a CPU within the first week of release automatically becomes a beta tester. Give it at least a month or more.
 
Jul 27, 2020
20,040
13,738
146
What I'm curious about is, are they doing the stupid thing of disabling AVX-512 units still or have those units been removed from the die altogether? I mean, I don't know how reality works in a fab but the logical thing would be to have a new mask for the Raptor Lake die with the AVX-512 units out so they can reduce the die size or use the freed up space for additional cache etc.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
What I'm curious about is, are they doing the stupid thing of disabling AVX-512 units still or have those units been removed from the die altogether?
AVX-512 has been designed into the core itself. They just go to the trouble of lasering it off because it's not officially supported.

For example, here are a Golden Cove core and a Redwood Cove core next to each other. AVX-512 support has been baked into the core design because it will also be used in the server parts.

Golden Cove vs Redwood Cove
 

coercitiv

Diamond Member
Jan 24, 2014
6,677
14,275
136
What version of Bios do I use on my Asus?
I was merely passing along a bit of info I stumbled upon. Ah, I dug a bit more and found your exact issue: it appears you need to update the Intel ME firmware & driver.

Here's a thread on the issue - https://rog.asus.com/forum/showthre...103-from-with-13900k-no-nvme-drivers-detected
Here's the resource page you need for firmware/driver - https://rog.asus.com/forum/showthread.php?131027-RaptorLake-Resources

Take it slow, read the threads. Good luck!
 

Mopetar

Diamond Member
Jan 31, 2011
8,114
6,770
136
Why does equalizing for thread count instead of, say, price, make sense?

Both make sense. Comparing two CPUs that offer 16 threads for 65W makes sense as a comparison. So does comparing what level of performance you can get for $300 (or any amount), or what kind of performance you get if both cap clock rates at the same frequency, or what kind of performance you can get pushing both chips as far as they're capable of going, etc.

One type of comparison doesn't invalidate others.
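The normalizations being argued over here are trivial to write down. A toy sketch, with all scores, prices, and thread counts made up for illustration (none are real Geekbench results):

```python
# Toy sketch of the comparisons discussed above: normalizing one benchmark
# score by price paid and by hardware thread count. All numbers below are
# hypothetical placeholders, not measurements.

def per_dollar(score: float, price_usd: float) -> float:
    """Performance per dollar spent."""
    return score / price_usd

def per_thread(score: float, threads: int) -> float:
    """Average multi-thread score contributed per hardware thread."""
    return score / threads

# Two imaginary 65W, 16-thread parts at different price points:
chip_a = {"score": 12000.0, "price": 330.0, "threads": 16}
chip_b = {"score": 13000.0, "price": 400.0, "threads": 16}

for name, chip in (("A", chip_a), ("B", chip_b)):
    print(name,
          round(per_dollar(chip["score"], chip["price"]), 2),
          round(per_thread(chip["score"], chip["threads"]), 2))
```

Neither normalization is "the" right one; they just answer different questions, which is exactly the point above.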
 
  • Like
Reactions: TESKATLIPOKA

ondma

Diamond Member
Mar 18, 2018
3,005
1,528
136
Raptor Lake is mostly just overclocked Alder Lake. Don't see how the upgrade is worth it. Just take it back and be happy with what you have.
Simply not true, except for some unreleased chips on the lower end. You get more E-cores, increased cache, and probably a better memory controller with RL. Actually, the improvement from ADL to RL was somewhat better than I expected.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
Simply not true, except for some unreleased chips on the lower end. You get more E-cores, increased cache, and probably a better memory controller with RL. Actually, the improvement from ADL to RL was somewhat better than I expected.
I think it's still fair to call it "mostly" just an overclocked Alder Lake. The frequency and extra cores are doing most of the work.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
Wow, Intel can't win. They were criticized for years for not offering enough cores, but now that they finally do, it is just dismissed as "the extra cores and frequency are doing most of the work." Well, that is the point of extra cores, is it not? On a more serious note, the usefulness of upgrading probably depends on use case. The extra cores give a nice boost for productivity work, but not so much for gaming, although the added cache and clock speed do give an increase in gaming performance as well.
I'm not saying that makes Raptor Lake a bad product. It just is what it is.
 

blckgrffn

Diamond Member
May 1, 2003
9,299
3,440
136
www.teamjuchems.com
I'm not saying that makes Raptor Lake a bad product. It just is what it is.

FWIW, I would consider a 13900P or something that completely dropped the e-cores and either used that space for (ideally) more cache or just eschewed them completely at a lower price.

I sold a 5900X to install the 5800X3D, concluding that either of them offers plenty of productivity performance, but what I really wanted was better gaming performance, specifically improving those 1% low FPS numbers. The FPS highs were already high enough.
 

LightningZ71

Golden Member
Mar 10, 2017
1,798
2,156
136
Linux Performance Benchmarks for 13900K


The 13900K just doesn't play nice on Linux.


Geometric Mean


Sifting through the article, it looks like a few things bubble to the surface:

AMD's decision to implement AVX-512 in Zen4 is paying dividends where it is supported. In heavy AVX-512 benchmarks, Zen4 gains a lot of performance over Raptor Lake, and this isn't a case where Intel could have changed the outcome, no matter their approach. If they left AVX-512 enabled but restricted those threads to the P-cores, they would be down many cores to the higher-end Zen4 parts and would look bad against even the 7700X (and actually lose to it a couple of times in heavy MT benches while it uses alternate code paths). If they added AVX-512 to the E-cores to help those benches, it would balloon those cores and the math on die space wouldn't work. If they went with 12 P-cores instead, they might have come out ahead in a couple of tests where AVX-512 is relevant, under specific conditions, but they would still be down four cores vs. the 7950X and take a lot of losses.
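The "where it is supported" part comes down to a CPU flag check at runtime. A minimal sketch of how a Linux benchmark harness might detect the AVX-512 foundation flag from /proc/cpuinfo; the sample flags line below is synthetic, and the function name is my own:

```python
# Sketch: detecting AVX-512 support the way a Linux tool might, by looking
# for the avx512f (foundation) flag in /proc/cpuinfo. The sample string is
# synthetic; on a real system you would read the file instead.

def has_avx512f(cpuinfo_text: str) -> bool:
    """Return True if the AVX-512 foundation flag appears in a flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "avx512f" in line.split()
    return False

# A Zen 4-like flags line (abridged, invented for illustration):
sample = "flags\t\t: fpu vme sse2 avx avx2 avx512f avx512vl"
print(has_avx512f(sample))

# On a real Linux box:
# with open("/proc/cpuinfo") as f:
#     print(has_avx512f(f.read()))
```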

There are multiple benches where the 7900X and 7950X lose significant performance against the 7700X and 7600X. There's a penalty that's still present in some benches for having multiple CCDs. It's not drastic overall, but it is notable. There are some tests where the 7700X and 7600X beat the 13900K while the 7950X and 7900X lose to it. That could likely be addressed through affinity adjustments, though that's just speculation.

Overall power usage, while achieving higher performance across a WIDE variety of tests, favors Zen4. HOWEVER, this is not a huge difference, as the 7950X is chewing through a lot of power itself to get the wins that it gets, just not usually as much as Raptor Lake.

Linux, especially the most recent kernels, seems to do a solid job of managing the E cores. In some cases, it seems to do better managing the E cores than it does handling the multiple CCDs of the two CCD Zen4 parts.

I speculate that if AMD does release dual-CCD X3D parts, and even with the single-CCD parts, Raptor Lake will suffer heavily in many of these benchmarks. I say this because the 5800X3D is often nipping at the heels of the 13900K and even beats it in a few benchmarks. That's while the 5800X3D suffers a massive penalty in single- and all-core boost, on top of having a smaller L2, no AVX-512 support, and none of the other Zen4 core improvements. The geomean for the 5800X3D is barely above the 5800X, largely due to the clock deficit. From the information we have, it looks like Zen4 will do much better in that regard.

All of this is to point out that the often-speculated 13900KS release will likely need a significant clock speed bump to keep from being significantly outpaced, at least in the Phoronix Linux test suite.
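For anyone unfamiliar with the "Geomean" figures Phoronix publishes: they are geometric means of the individual results, which keeps one benchmark with huge absolute numbers from dominating the summary. A quick sketch with made-up scores:

```python
# Sketch: how a Phoronix-style "Geometric Mean" summary is computed.
# The geometric mean (nth root of the product of n results) is preferred
# over the arithmetic mean for benchmark summaries because it is not
# skewed by tests that happen to use large absolute numbers.
# Scores below are invented for illustration.
from statistics import geometric_mean

scores = [120.0, 95.0, 210.0, 88.0]
print(round(geometric_mean(scores), 2))

# Compare with the arithmetic mean, which the 210-point outlier pulls up:
print(round(sum(scores) / len(scores), 2))
```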
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Both make sense. Comparing two CPUs that offer 16 threads for 65W makes sense as a comparison. So does comparing what level of performance you can get for $300 (or any amount), or what kind of performance you get if both cap clock rates at the same frequency, or what kind of performance you can get pushing both chips as far as they're capable of going, etc.

One type of comparison doesn't invalidate others.

Equal pricing is the one that matters to consumers, though. Equalizing threads is more of an abstract comparison of questionable utility.
 

Aapje

Golden Member
Mar 21, 2022
1,515
2,065
106
@ondma

Yes, slightly better, with a few more E-cores and a bit more cache. Not that significant. I'm hearing conflicting things about the memory controller, so I'm not sure about that yet.

At least in games, the main benefit comes from the increased clocks, but at a very high energy cost.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
I guess that only applies to CPUs then, because on the GPU side if you compare AMD/nVidia based on price AMD wins just about everywhere. Yet people still buy nVidia.


With CPUs, there really is only compute performance.

GPUs do a lot more than just raw compute; NVidia markets lots of extra features like DLSS, RTX, etc. We still compare equal price points; even if more people keep choosing NVidia, it's still the proper comparison.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
At least in games, the main benefit comes from the increased clocks, but at a very high energy cost.
You can go all the way down to 88W with the 13900K, lower the resolution to 720p, and you still only lose like 10% maximum in games, and it still stays ahead of AMD.
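On Linux, a package power cap like that 88W figure can be applied through the intel-rapl powercap sysfs interface (writing needs root). A hedged sketch; the path is the conventional package-level PL1 location and the helper names are my own, not from any particular tool:

```python
# Sketch: capping CPU package power on Linux via the intel-rapl powercap
# sysfs interface. The path below is the usual location of the package
# long-term (PL1) limit; writing to it requires root. Helper names are
# illustrative, not from a real utility.

RAPL_PL1 = "/sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw"

def watts_to_uw(watts: float) -> int:
    """RAPL limit files are expressed in microwatts."""
    return int(watts * 1_000_000)

def cap_command(watts: float, path: str = RAPL_PL1) -> str:
    """Build the shell command that would apply the cap (not executed here)."""
    return f"echo {watts_to_uw(watts)} | sudo tee {path}"

print(cap_command(88))  # the 88W limit mentioned above
```

Motherboard BIOS power-limit settings achieve the same thing persistently; the sysfs route is handy for quick experiments like the one described.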
 

Hulk

Diamond Member
Oct 9, 1999
4,525
2,519
136
I created a video showing that you can easily cool the 13900K at 300W with a small 240 AIO cooler; even after 10 minutes the temps were mostly below 90°C.


BTW, I do not like the idea of this power draw at all. My CPU is currently limited to 160W, and I am getting ready to put my modest Hyper 212-type air cooler back on.

I have settled on 175W with the Noctua cooler in my sig. Temps stay in check and performance is still good. If I come across an app that needs single- or dual-core power, it'll still of course boost to a mighty 5.8GHz.