Question Raptor Lake - Official Thread

Page 157

Hulk

Diamond Member
Oct 9, 1999
4,119
1,900
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current z690 motherboards? If yes then that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
From the 7950X being 13% faster in their regular Handbrake test, H264 to H265, to the 13900K being 19% faster once it's H264 to AV1, that's a 30% delta which can't be explained by the raw execution capabilities.

AV1 is a different beast compared to x264 and x265. It's much more compute limited and uses SIMD much more heavily than H265. Raptor Lake is faster here because it has 50% higher SIMD throughput, and higher cache bandwidth.

And don't come with the RAM bandwidth argument as you once did, because the 7950X @ 5200 RAM is faster than at 6000 MHz RAM in this test, which says that it's more CPU perf dependent than RAM speed dependent.

The RAM bandwidth feeds the cache, which feeds the cores. Raptor Lake's cache bandwidth is much higher per core than Zen 4's. Raptor Lake's entire architecture seems centered around high bandwidth and high throughput.
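A toy illustration of that "RAM feeds cache, cache feeds cores" point: sweep a working set from L1-sized up to DRAM-sized and watch effective throughput fall off as it drops out of each cache level. This is a pure-Python sketch, so interpreter overhead dominates and the absolute GB/s numbers are nowhere near hardware limits; a real measurement needs vectorized C or a tool like AIDA64.

```python
import array
import time

def sum_bandwidth_gbs(size_bytes, reps=3):
    """Time summing a float64 buffer of the given size; crude working-set probe."""
    buf = array.array("d", b"\x00" * size_bytes)  # zero-filled doubles
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        sum(buf)  # one sequential pass over the working set
        best = min(best, time.perf_counter() - t0)
    return size_bytes / best / 1e9  # GB/s, interpreter overhead included

# Sizes chosen to roughly straddle L1 / L2 / L3 / DRAM on current CPUs
for kb in (32, 256, 4096, 16384):
    print(f"{kb:>6} KiB: {sum_bandwidth_gbs(kb * 1024):.3f} GB/s")
```

The size breakpoints (32 KiB L1, etc.) are generic assumptions, not specific to either chip.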
 
  • Like
Reactions: Henry swagger

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Well then someone should clarify because I was just told Quicksync does support AV1 encode. Seems to be something worth clarifying.

As @Exist50 said, Intel's new discrete GPU (Alchemist) supports AV1 encode, along with Nvidia RTX 4000 series and AMD RDNA3. Those are the only GPUs that currently support AV1 encode.

Raptor Lake's iGPU supports AV1 decoding only. Meteor Lake is going to have the first iGPU that supports AV1 encoding.

Intel confirms Meteor Lake has AV1 video encoding and decoding support - VideoCardz.com
 

Abwx

Lifer
Apr 2, 2011
10,822
3,284
136
AV1 is a different beast compared to x264 and x265. It's much more compute limited and uses SIMD much more heavily than H265. Raptor Lake is faster here because it has 50% higher SIMD throughput, and higher cache bandwidth.



The RAM bandwidth feeds the cache, which feeds the cores. Raptor Lake's cache bandwidth is much higher per core than Zen 4's. Raptor Lake's entire architecture seems centered around high bandwidth and high throughput.


Looking at the 12900K vs 7950X SIMD throughput, it's clear that with just 8 additional E-cores the 13900K is well below the 7950X, so your argument is completely out of touch with reality.

amd-7950x-cpu-mm.png


 
Last edited:
  • Like
Reactions: alexruiz

Rigg

Senior member
May 6, 2020
412
827
106
I get that people are upset because video card prices have gone way up, but these are luxury items. I wasn't happy about paying scalper prices for my RTX 4090, but I'm not going to whine about it endlessly. Complaining about price ad nauseum is boring as shite.


That said, when has criticism, or even harsh criticism of a corporate entity become something to refrain from?

You seem to be talking out of both sides of your mouth. On one hand you say GPU prices are too high but since they're luxury items you should deal with it and not complain because it's boring. Either pay the going rate or don't. Just don't 'whine' about the inflated prices.

So we shouldn't be overly critical of Nvidia's inflated prices but shouldn't refrain from criticism of corporate entities. You're going to have to square that circle for me.
 
  • Like
Reactions: In2Photos

Khanan

Senior member
Aug 27, 2017
203
91
111
How is the RTX 4090 a broken chip? Also, playable framerates is subjective. RTX 4090 can do native 4K in every title that I am currently playing with high framerates. I've only used DLSS 3 in one game and that is the Witcher 3 Complete Edition which is CPU limited due to poor DX12 implementation, so it's not even the GPU that's the problem.
Already explained it right above your post. No, "playable framerates" isn't subjective. 60 is widely regarded as comfortably playable (else consoles would be dead by now), and 30 is the minimum which can be considered "playable" by minimum standards; it's not something down to taste, this is an objective measurement. The XTX is easily fast enough for anything, especially if tuned. It gets nearly 20% extra performance, and the difference to the 4090 isn't much then; the 4090 is an overpriced, 17%-deactivated garbage chip, that's what it is. People literally buy -redacted-. A 4090 Ti, or ADA102 at 100%, will be much faster.

And DLSS3? Hahah, seriously? Anyone who uses that broken tech must be an Nvidia fan. Not that it's needed with a 4090, not nearly. This is second-rate tech for users of a 4060 and lower who have low fps at 4K and badly need it. But who uses a 4K monitor with a cheaper GPU? 4K is a luxury-segment resolution, more or less. So does DLSS3 make any sense at all? Hardly. DLSS2, on the other hand, is excellent.

Profanity is not allowed in the tech forums.

Daveybrat
AT Moderator
 
Last edited by a moderator:

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Looking at the 12900K vs 7950X SIMD throughput, it's clear that with just 8 additional E-cores the 13900K is well below the 7950X, so your argument is completely out of touch with reality.

amd-7950x-cpu-mm.png


Those results are very clearly not representative of most real-world workloads (as expected from a pure vector throughput test), so why bother using them as a reference?

If I had to guess, AV1 software encode uses SVT-AV1. Someone can probably test that explicitly.
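If someone does want to test that explicitly, ffmpeg's libsvtav1 wrapper is probably the easiest route. This assumes an ffmpeg build with SVT-AV1 enabled; the filenames are just placeholders.

```python
def svt_av1_cmd(src, dst, preset=8, crf=35):
    """Build an ffmpeg command for a software SVT-AV1 encode."""
    # -preset: 0-13 for libsvtav1, higher = faster / less compression
    # -crf: constant-quality target, lower = better quality, bigger file
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libsvtav1", "-preset", str(preset), "-crf", str(crf),
            "-c:a", "copy", dst]

cmd = svt_av1_cmd("input.mkv", "output_av1.mkv")
print(" ".join(cmd))
# Time the actual encode with e.g. subprocess.run(cmd, check=True)
# under `time`, on both CPUs, to compare software AV1 throughput.
```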
 

poke01

Senior member
Mar 8, 2022
589
568
96
And DLSS3? Hahah seriously? Anyone who uses that broken tech must be a Nvidia fan. Not that it is needed with a 4090, not nearly. This is 2nd grade tech for users of 4060 and lower who have low fps in 4K and need it hard. But who uses a 4K monitor with a cheaper GPU? 4K is a luxury segment resolution, more or less. So does DLSS3 make any sense at all? Hardly. DLSS2 is excellent on the other hand
I wonder if you'll say the same when FSR3 launches? It should be the same language, otherwise you are very biased towards AMD in a positive way.

Calling people who use DLSS3/FSR3 "Nvidia fan" or "AMD fan" is absurd.
The XTX is easily fast enough for anything, especially if tuned. It gets nearly 20% extra performance, and the difference to the 4090 isn't much then; the 4090 is an overpriced, 17%-deactivated garbage chip, that's what it is. People literally buy -redacted-. A 4090 Ti, or ADA102 at 100%, will be much faster.
What utter rubbish this take is. It's a very good thing you can't swear on this site otherwise oh boy.

By that logic, why do we even need next-gen cards from AMD or Nvidia? We can get the MAGIC 20% boost from unicorn land on the previous-gen cards, and voila, we have no need for next-gen cards.

Also, no, the 7900 XTX is not fast enough for anything. It's crap at 4K 240 FPS ultra gaming. It's bad at 4K 120 FPS ultra WITH RT.

Please stop the AMD kissing, to put it in a light way. There are many more people other than gamers that the 4090 will be useful to: video editors, VFX and 3D artists, and AI users. The 7900 XTX will not top these workflows.

Bottom line: no, the 7900 XTX is not as fast as the RTX 4090 in everything and never will be. There is a reason AMD never even compared the 4090 to any of their 7900 cards.

Also, why is it always assumed that only AMD can make driver improvements, but not Nvidia? I see plenty of driver improvements from Nvidia too.
 
Last edited:

Khanan

Senior member
Aug 27, 2017
203
91
111
I wonder if you'll say the same when FSR3 launches? It should be the same language, otherwise you are very biased towards AMD in a positive way.
And why should I suddenly become biased? You're clueless if you think I am. Let me make it clearer for you: you have no clue who I am, nor do you seem capable of understanding it.
What utter rubbish this take is. It's a very good thing you can't swear on this site otherwise oh boy.
The only rubbish I see here is you trying to have a take on my opinion without even remotely understanding it. That's a quick and easy ignore, like the last toxic Nvidia fan.
Bottom line: no, the 7900 XTX is not as fast as the RTX 4090
Funny that I never said that, but now you made it awfully clear to everyone that you’re a biased Nvidia fan who felt attacked by my words.

The fact that you feel hindered by the rules of this website and want to swear at me shines a dark light on yourself. And then there's your extreme bias towards Nvidia and your attempt to make it seem like I'm like you. It's always these fans who expect others to be like themselves; you're just projecting, buddy. I'm not like you, nor do I have any interest in downgrading.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Looking at the 12900K vs 7950X SIMD throughput, it's clear that with just 8 additional E-cores the 13900K is well below the 7950X, so your argument is completely out of touch with reality.


That's a purely synthetic workload that isn't representative of real workloads like @Exist50 stated.

It may also be using AVX-512, which probably inflated the results a bit. At any rate, the only explanation for Raptor Lake's performance in AV1 encoding is that it has higher AVX2 instruction throughput: 3 vector loads per cycle vs Zen 4's 2 vector loads per cycle.

And Raptor Lake also has the L1 cache bandwidth to feed and sustain that throughput, which is how I think it can compete with and even outperform the 7950X in heavy SIMD workloads.
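For reference, here's the back-of-envelope math that claim implies. The loads-per-cycle figures are the ones from the post; the clock speeds are rough assumptions of mine, not measured values.

```python
def peak_l1_load_gbs(loads_per_cycle, load_width_bytes, clock_ghz):
    """Theoretical peak L1 load bandwidth per core, in GB/s."""
    return loads_per_cycle * load_width_bytes * clock_ghz

# 256-bit (32-byte) AVX2 loads; clocks are illustrative boost assumptions
rpl = peak_l1_load_gbs(3, 32, 5.8)   # Raptor Lake P-core: 3 loads/cycle
zen4 = peak_l1_load_gbs(2, 32, 5.7)  # Zen 4 core: 2 loads/cycle
print(f"RPL {rpl:.1f} GB/s vs Zen 4 {zen4:.1f} GB/s per core")
```

On these assumed numbers the per-core gap is roughly 50%, which is where a sustained-SIMD advantage could come from, if the 3-vs-2 figure holds up.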
 
  • Like
Reactions: Henry swagger

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You seem to be talking out of both sides of your mouth. On one hand you say GPU prices are too high but since they're luxury items you should deal with it and not complain because it's boring. Either pay the going rate or don't. Just don't 'whine' about the inflated prices.

Not really. I complained about paying scalper prices for my RTX 4090, but I didn't make a song and dance about it and go on endlessly page after page.

So we shouldn't be overly critical of Nvidia's inflated prices but shouldn't refrain from criticism of corporate entities. You're going to have to square that circle for me.

A lot of that price inflation is due to current market conditions, i.e. high demand, product scarcity, lack of competition, etc. Considering what it does and how sophisticated the RTX 4090 is, I don't necessarily think the MSRP was excessively inflated.

I bought a Titan Xp in 2017 for $1200 USD directly from Nvidia. The base price for the RTX 4090 is $1600, or 33% more than the Titan Xp, while completely blowing it out of the water in terms of performance and features. I have an aftermarket version, but for simplicity's sake, let's just use the FE model for pricing.
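Just to make the generational price math explicit (prices as stated above):

```python
titan_xp_msrp = 1200   # 2017 Titan Xp, direct from Nvidia
rtx_4090_msrp = 1600   # RTX 4090 Founders Edition

increase = (rtx_4090_msrp - titan_xp_msrp) / titan_xp_msrp
print(f"Generational price increase: {increase:.0%}")  # 33%
```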

What pissed me off was that I had to pay scalper prices to get one because there was literally no other way to purchase one of these damn cards. Even so, I didn't feel entitled to a cheap high end luxury graphics card and eventually the sting of being "scalped" wore off and I'm genuinely happy with my purchase.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
And then your extreme bias towards Nvidia and your try to make it seem I’m like you. It’s always these fans who expect others to be like themselves, you’re just projecting, buddy. I’m not like you, nor do I have any interest to downgrade.

I'm pretty sure @poke01 has a 7900 XTX, but whatever. Everyone is a fanboy except you apparently..... :rolleyes:
 

Khanan

Senior member
Aug 27, 2017
203
91
111
I'm pretty sure @poke01 has a 7900 XTX, but whatever. Everyone is a fanboy except you apparently..... :rolleyes:
And I have a 2080 Ti and had all of the predecessors as well (780 Ti etc). Critical != biased. I think there are enough people here that are trying to be unbiased or fair. But whatever, like the mod said, btt and sorry for off topic.
Here’s my opinion on the KS: a waste of money, like the 12900KS. Intel has a leg up in the midrange with way more cores for the money, but elsewhere… and the platform will soon be outdated as well.
7950X = AVX-512 and efficiency king.
X3D variants = gaming kings and even more efficient due to lower clocks. I expect them to have lower performance in regular apps, however. Some special apps that like L3 cache will run faster.
 
Last edited:
  • Like
Reactions: Markfw

poke01

Senior member
Mar 8, 2022
589
568
96
X3D variants = gaming kings and even more efficient due to lower clocks.
Even the non-X parts are great. I am eyeing a non-X 7900 Zen 4. The efficiency of these chips makes them perfect for small form factor builds.

Kinda sad that Intel needs to pump up the power to achieve parity.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
When all 4 cores of an E-core cluster are loaded with P95 AVX2 load, the core frequency drops without hitting any limits (other than the Turbo Limit at 43x). This happens at different voltages, happens less with SSE load, and does *not* happen with AVX1 load. It also does not happen when fewer than 4 cores of a cluster are active.

1673369989440.png
 

Hitman928

Diamond Member
Apr 15, 2012
5,077
7,453
136
Normally PCGH is a pretty good review site, but their results don't seem to jibe with most other reviews.

PCGH now uses only officially supported memory speeds, which hurts Zen architectures significantly more than RPL compared to what pretty much any gamer will actually run. It also explains why the 5800X3D does better against Zen 4 in their results compared to most other reviews.