Question Raptor Lake - Official Thread

Hulk

Diamond Member
Oct 9, 1999
4,209
1,994
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% better performance per watt (PPW)
Last non-tiled consumer CPU, as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar

Timur Born

Senior member
Feb 14, 2016
277
139
116
Will someone take this bait? I would not.
Maybe you should think less in terms of baits, traps and competition and more in terms of sharing information, opinions and test results?! This is a discussion board after all.

When I decided on the Intel system this time around, I did so under the impression that the 13900K has to brute-force multi-core performance to keep up with the 7950X, but would offer better single/low-core performance and consume less energy in those scenarios.
 

DrMrLordX

Lifer
Apr 27, 2000
21,599
10,792
136
Maybe you should think less in terms of baits, traps and competition and more in terms of sharing information, opinions and test results?! This is a discussion board after all.

Because measuring power draw in a 1T workload, outside of a laptop or other battery-powered device, is at best misguided. It's like comparing perf/watt when doing light work in Photoshop and staying well below 100% utilization. It's just not terribly important. Nobody is going to come to your office at work and say, "Sorry sir, we have to replace your 7950X desktop system with a 13900k, because we detected you using 50-60W in Photoshop instead of ~10W with this 13900k system". Or whatever.

Apparently Raptor Lake-P is 16 mm² bigger than Alder Lake-P, which is further evidence that it's not just renamed ADL silicon.

Why in God's name would Intel produce a Raptor Cove variant with the old Alder Lake L2 configuration???
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
Because measuring power draw in a 1T workload, outside of a laptop or other battery-powered device, is at best misguided. It's like comparing perf/watt when doing light work in Photoshop and staying well below 100% utilization. It's just not terribly important. Nobody is going to come to your office at work and say, "Sorry sir, we have to replace your 7950X desktop system with a 13900k, because we detected you using 50-60W in Photoshop instead of ~10W with this 13900k system". Or whatever.
I was more interested in what needs to be done to push the 7950X to 13900K levels in single-threaded load.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
How does your questioning tell me the information I was looking for, a comparison of single/low core performance between the two contestants?
 
Last edited:

Timur Born

Senior member
Feb 14, 2016
277
139
116
Tsk tsk, I asked first.
You mean you answered my original question with another question first? Maybe you should concentrate on the topic.

I would like to know how much voltage one has to pump into a 7950X to get around 2300-2350 points in CB23 single-core, and how much power it uses then (and preferably also at low-core, non-single-threaded load).
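For anyone who wants to measure that rather than argue about it: on Linux you can sample the package energy counter while the benchmark runs. A minimal sketch, assuming a kernel that exposes RAPL through powercap (the sysfs path below is the common one, root is typically required, and the benchmark command is a placeholder):

```python
# Minimal sketch: average package power during a pinned 1T workload.
# Assumes Linux exposes RAPL via powercap at the path below (common on
# recent kernels); typically needs root. The benchmark command is a
# placeholder -- substitute your actual single-threaded test.
import subprocess, time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy, microjoules

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
subprocess.run(["taskset", "-c", "0", "./my_1t_benchmark"], check=True)
e1, t1 = read_uj(), time.time()

# The counter can wrap on long runs; compare against max_energy_range_uj if so.
joules = (e1 - e0) / 1e6
print(f"avg package power: {joules / (t1 - t0):.1f} W over {t1 - t0:.1f} s")
```

Pair the watts with the benchmark score and you have the perf/watt number being argued about here.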
 

DrMrLordX

Lifer
Apr 27, 2000
21,599
10,792
136
You mean you answered my original question with another question first? Maybe you should concentrate on the topic.

No, I told you power efficiency in 1T workloads is meaningless. You then asserted:

I was more interested in what needs to be done to push the 7950X to 13900K levels in single-threaded load.

and I'm still not sure how you would ascertain that from measuring perf/watt in 1T workloads.
 

Mopetar

Diamond Member
Jan 31, 2011
7,819
5,942
136
Because measuring power draw in a 1T workload, outside of a laptop or other battery-powered device, is at best misguided.

If all you're doing with your 13900K is trying to convince everyone else on the forums how efficient it is, you're probably not using much more than one thread and are idling a lot.

Not everyone has the same workload and you need to find the CPU that has the best efficiency for your needs.
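To put a number on "best efficiency for your needs": weight each chip's draw by how much time you actually spend in each kind of load. A toy sketch where every figure is invented purely to show the calculation:

```python
# Toy sketch: time-weighted average power for two hypothetical CPUs.
# All numbers are invented placeholders, not measurements.
workload_share = {"idle/light": 0.80, "gaming": 0.15, "rendering": 0.05}

avg_watts = {
    "cpu_a": {"idle/light": 15, "gaming": 90, "rendering": 250},
    "cpu_b": {"idle/light": 25, "gaming": 75, "rendering": 180},
}

for cpu, draws in avg_watts.items():
    weighted = sum(workload_share[w] * p for w, p in draws.items())
    print(f"{cpu}: {weighted:.1f} W time-weighted average")

# With a mix this idle-heavy, the chip that wins the all-core perf/watt
# charts can still lose on the mix you personally run.
```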
 

lopri

Elite Member
Jul 27, 2002
13,209
594
126
[Chart: power-applications.png (CPU power consumption in applications)]


Gotta say, though, that 166W is not too terrible a number, and 104W for the 13600K is even better. None of these chips consume as much power as modern mid-range graphics cards are rated for. I hope it stays that way.
 
  • Like
Reactions: IEC

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
So Raptor Lake gets beat under high/full load but magically does better everywhere else? :rolleyes:

I don't know about "everywhere else" but you can't claim that Zen 4 beats Raptor Lake at power efficiency and performance per watt at everything, which is what some of you want to believe.

This ties into what I said earlier about the "mantra" that AMD must be the best at everything and Intel sucks at everything. Reality doesn't care about feelings or corporate loyalties however.

I'm curious as to why no one has seemingly picked up on the AMD RT arguments.

People have picked up on it, but for the most part it has flown under the radar because most reviewers test the CPU with RT disabled, figuring that enabling RT makes the game more GPU-bound. But it definitely is a thing and can easily be reproduced.

That's not to say that Zen 4 isn't a good CPU because of this one thing, just that it has inherited some of the weaknesses of Zen 3 from which it was derived. Zen 3 has the same issues with these workloads. Or more likely, it's not architecture specific because older Intel CPUs display the same peculiarities.

As far as echo chambers go, you can find at least one on these forums and it's probably not what you'd guess. So yes, I know what an echo chamber is. It is sad to see.

Probably P&N. I never set foot in that forum :eek:

I'm just not wearing blue & green glasses.

And neither am I. Human behavior is typically much more nuanced than most people know. If someone asks me, I don't consider myself to be a fan of Nvidia or Intel. I base my hardware purchases on performance first and foremost. The reason I stick with Intel and Nvidia is because this particular cycle they are the strongest pair. Almost bought a Zen 3 system when Zen 3 launched, and the only reason I didn't is because I wanted a bigger jump from my Broadwell-E system.
 
  • Like
Reactions: Thunder 57

Thunder 57

Platinum Member
Aug 19, 2007
2,669
3,785
136
I don't know about "everywhere else" but you can't claim that Zen 4 beats Raptor Lake at power efficiency and performance per watt at everything, which is what some of you want to believe.

This ties into what I said earlier about the "mantra" that AMD must be the best at everything and Intel sucks at everything. Reality doesn't care about feelings or corporate loyalties however.



People have picked up on it, but for the most part it has flown under the radar because most reviewers test the CPU with RT disabled, figuring that enabling RT makes the game more GPU-bound. But it definitely is a thing and can easily be reproduced.

That's not to say that Zen 4 isn't a good CPU because of this one thing, just that it has inherited some of the weaknesses of Zen 3 from which it was derived. Zen 3 has the same issues with these workloads. Or more likely, it's not architecture specific because older Intel CPUs display the same peculiarities.



Probably P&N. I never set foot in that forum :eek:



And neither am I. Human behavior is typically much more nuanced than most people know. If someone asks me, I don't consider myself to be a fan of Nvidia or Intel. I base my hardware purchases on performance first and foremost. The reason I stick with Intel and Nvidia is because this particular cycle they are the strongest pair. Almost bought a Zen 3 system when Zen 3 launched, and the only reason I didn't is because I wanted a bigger jump from my Broadwell-E system.

Quite the reasonable post. I do think you are reading too much into what brand cheerleaders say versus what more reasonable people say, though. I get that you want people to know Zen 4 isn't the greatest thing since sliced bread, and that's fine. Unfortunately, there are those few whose minds you will never change no matter what you put in front of them. I am curious about what you mean regarding the "mantra" that AMD is good and Intel sucks; could you show me that post in case I missed it?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I didn't even post in that thread. I generally have a preference for AMD but I don't let it cloud my judgement. I literally returned a 7900X/X670E combo today that I'd been sitting on for a month, in favor of a 13700K/Z690. Personally I'd have been happy going with either for my personal gaming needs, but because of some very specific circumstances I was able to get the Intel combo for way less money. I've owned dozens of AMD and NVIDIA cards in the last 4 years, and no, they weren't for mining. I've gone from a 2700X > 9900K > 3900X > 5800X > 13700K in my personal rigs. I don't give a crap about brand loyalty or BS marketing. I care about performance and value, and not always in that order.

OK, I confused you with another person; my apologies. The RTX 4080 and RTX 4070 Ti threads became magnets for anti-Nvidia propaganda and sentiment. One person who was genuinely interested in buying an RTX 4080 was downvoted for daring to contemplate it.

I get that people are upset because video card prices have gone way up, but these are luxury items. I wasn't happy about paying scalper prices for my RTX 4090, but I'm not going to whine about it endlessly. Complaining about price ad nauseam is boring as redacted.

People question if you are trying to justify your purchase because literally all you do is talk up Raptor and Lovelace and crap all over AMD

That's not really true. I waited until the RDNA3 presentation before I decided to buy my RTX 4090 because I was hoping that it would be competitive. After it launched, I read the reviews and came to the conclusion that it was a good card for the price and performance, but it lacked the feature set of Lovelace and was not competitive for my needs, which include 4K plus maxed settings and RT.

That said, when did criticism, or even harsh criticism, of a corporate entity become something to refrain from? People are so tribal on these forums that they perceive strong criticism of their favored corporate entity/products as an attack.

My post sure seems to have struck a nerve there.

Hah, you wish. I've been on the internet long enough to have learned how to easily separate real life from forum antics :D





Profanity is not allowed in the tech forums.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Quite the reasonable post. I do think you are looking too much into what brand cheerleaders say vs more reasonable people though. I get you want people to know that Zen 4 isn't the greatest thing since sliced bread and that's fine. Unfortunately there are those few that you will never change their minds no matter what you put in front of them. I am curious as to what you mean about the "mantra" that AMD is good Intel sucks thing, could you show me that post in case I missed it?

A good example of this would be this post:

Source

This is the second time I've corrected him about this same issue. I told him previously that Intel QSV doesn't even support AV1 encoding with ADL or RPL, and he still continued on with that garbage because to him, the only reason why the 13900K could be faster than the 7950x in encoding was if Intel was "cheating" somehow or gaining an unfair advantage by using the iGPU.

Never mind the 50% higher AVX2 throughput or the higher cache bandwidth, they must be cheating.
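If anyone wants to rule out the iGPU entirely, the simplest check is to force a pure software AV1 encode and watch the CPU cores peg. A minimal sketch, assuming an ffmpeg build with the SVT-AV1 software encoder (filenames and settings are placeholders):

```python
# Sketch: time a CPU-only AV1 encode so no QSV/iGPU path is involved.
# Assumes ffmpeg was built with libsvtav1; input/output names are placeholders.
import subprocess, time

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",
    "-c:v", "libsvtav1",  # SVT-AV1: software encoder, runs on the CPU cores
    "-preset", "8",       # speed/quality tradeoff (0 = slowest)
    "-crf", "35",
    "-an",                # drop audio so only the video encode is timed
    "output.mkv",
]
t0 = time.time()
subprocess.run(cmd, check=True)
print(f"software AV1 encode took {time.time() - t0:.1f} s")
```

If the 13900K still wins that run, the iGPU had nothing to do with it.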
 
  • Like
Reactions: Exist50

Thunder 57

Platinum Member
Aug 19, 2007
2,669
3,785
136
A good example of this would be this post:

Source

This is the second time I've corrected him about this same issue. I told him previously that Intel QSV doesn't even support AV1 encoding with ADL or RPL, and he still continued on with that garbage because to him, the only reason why the 13900K could be faster than the 7950x in encoding was if Intel was "cheating" somehow or gaining an unfair advantage by using the iGPU.

Never mind the 50% higher AVX2 throughput or the higher cache bandwidth, they must be cheating.

Well, then someone should set the record straight, because I was just told Quick Sync does support AV1 encode. Seems like something worth clarifying.
 

Khanan

Senior member
Aug 27, 2017
203
91
111
but it lacked the feature set of Lovelace and was not competitive for my needs, which include 4K plus maxed settings and RT.
Yet the XTX is able to produce playable frame rates in every game at Ultra 4K with RT enabled using FSR. Yep, even CP2077, which is one of the few games that doesn't run well on Radeon. It is competitive. You can only say that the 4090 is faster, and sure it is, but you also pay a lot for that premium. And then it's a broken chip on top; what a shitty deal for over $2,000.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
10,922
3,413
136
A good example of this would be this post:

Source

This is the second time I've corrected him about this same issue. I told him previously that Intel QSV doesn't even support AV1 encoding with ADL or RPL, and he still continued on with that garbage because to him, the only reason why the 13900K could be faster than the 7950x in encoding was if Intel was "cheating" somehow or gaining an unfair advantage by using the iGPU.

Never mind the 50% higher AVX2 throughput or the higher cache bandwidth, they must be cheating.


From the 7950X being 13% faster in their regular Handbrake test (H.264 to H.265) to the 13900K being 19% faster once it's H.264 to AV1, that's a ~30% delta, which can't be explained by the raw execution capabilities.

And don't come back with the RAM bandwidth argument as you once did, because the 7950X with 5200 MHz RAM is faster than with 6000 MHz RAM in this test, which says it's more dependent on CPU performance than on RAM speed.
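To spell out the arithmetic, the two leads compound multiplicatively:

```python
# Quick arithmetic check of the swing described above.
amd_lead_h265 = 1.13   # 7950X / 13900K in the H.264 -> H.265 test
intel_lead_av1 = 1.19  # 13900K / 7950X in the H.264 -> AV1 test

# Relative shift in Intel's standing between the two tests.
swing = amd_lead_h265 * intel_lead_av1
print(f"relative swing: {(swing - 1) * 100:.0f}%")  # ~34%, i.e. roughly
                                                    # the ~30% delta cited
```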

 

Khanan

Senior member
Aug 27, 2017
203
91
111
Broken how?
I dislike the fact that a big part of the GPU is deactivated. With all the other expensive halo GPUs you got nearly the full chip, or in the case of the 780 Ti, the full chip. Not only is the 4090 more expensive than the 3090, it also has far more of the die disabled; the 3090 has only a negligible part deactivated that doesn't cost it any performance.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Yet the XTX is able to produce playable frame rates in every game at Ultra 4K with RT enabled using FSR. Yep, even CP2077, which is one of the few games that doesn't run well on Radeon. It is competitive. You can only say that the 4090 is faster, and sure it is, but you also pay a lot for that premium. And then it's a broken chip on top; what a shitty deal for over $2,000.

How is the RTX 4090 a broken chip? Also, "playable framerates" is subjective. The RTX 4090 can do native 4K with high framerates in every title I am currently playing. I've only used DLSS 3 in one game, The Witcher 3 Complete Edition, which is CPU-limited due to a poor DX12 implementation, so it's not even the GPU that's the problem.