Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,227
2,015
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From AnandTech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% performance-per-watt (PPW) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, that could be a major selling point for people to move to ADL now rather than wait.
 
  • Like
Reactions: vstar

alexruiz

Platinum Member
Sep 21, 2001
2,836
556
126
There's literally an entire section of the video discussing this. It's 3090 vs 4090 and whatever particular driver overhead issue is going on.

So, going from a slower video card to a faster video card results in a lower framerate?
Because of a magical "driver overhead"?
So people should not upgrade to the faster GPU because they "will get lower framerates due to driver overhead"?

Getting lower results with the faster GPU should tell you how competent the reviewers are.
 

deathBOB

Senior member
Dec 2, 2007
566
228
116
So, going from a slower video card to a faster video card results in a lower framerate?
Because of a magical "driver overhead"?
So people should not upgrade to the faster GPU because they "will get lower framerates due to driver overhead"?

Getting lower results with the faster GPU should tell you how competent the reviewers are.

In what world are reviewers immune from the game/software/driver bugs that MS, Nvidia, AMD, game developers, etc. cannot even get right? As I type this there is a current thread on Ryzen 7000 issues with games and Windows 11: Question - Windows 11 hobbling Ryzen 7000 performance | AnandTech Forums: Technology, Hardware, Software, and Deals

No review is ever more than a snapshot in time, and we all know performance can change for better or worse.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That "joke" of GPU was the second fastest available video card just a few days ago.
It is still the 3rd fastest video card on the market.

In rasterization perhaps, but in ray tracing it's probably behind the 3080.

The reviewers who have a clue about how to set up a Ryzen system got very close results between the platforms, even when using the RTX 4090.
Those who are getting major discrepancies are as competent as you at setting up a Ryzen system...

Whoa, we got a super nerd here! :D Perhaps you should start your own YouTube channel and show them how it's done.
 
  • Haha
Reactions: igor_kavinski

alexruiz

Platinum Member
Sep 21, 2001
2,836
556
126
In what world are reviewers immune from the game/software/driver bugs that MS, Nvidia, AMD, game developers, etc. cannot even get right? As I type this there is a current thread on Ryzen 7000 issues with games and Windows 11: Question - Windows 11 hobbling Ryzen 7000 performance | AnandTech Forums: Technology, Hardware, Software, and Deals

No review is ever more than a snapshot in time, and we all know performance can change for better or worse.

Again, are you going to tell someone NOT to upgrade to a faster part because they get a lower framerate due to "driver overhead, bugs, etc."?

If those YouTube videos are yours, go and check your Windows install.
If you don't know how to set up a Ryzen system, several members here can guide you.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
In what world are reviewers immune from the game/software/driver bugs that MS, Nvidia, AMD, game developers, etc. cannot even get right? As I type this there is a current thread on Ryzen 7000 issues with games and Windows 11: Question - Windows 11 hobbling Ryzen 7000 performance | AnandTech Forums: Technology, Hardware, Software, and Deals

No review is ever more than a snapshot in time, and we all know performance can change for better or worse.

This precisely. There have been some issues with Windows 11 22H2 that lots of hardware reviewers have been complaining about. Also, dual-CCD parts can have performance issues in certain games, and the same goes for efficiency cores, which can lower performance at times for a number of reasons.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,642
136
Windows 11 22H2 shouldn't be used for benchmarking at all in its current state. It has a glaring flaw: the CPU utilization reported in Task Manager isn't reflected in the CPU utilization shown by any kind of overlay, be it MSI Afterburner, the AMD or NVIDIA overlays, or even Windows Game Bar via Win + G. The overlays barely show any utilization at all, while Task Manager reports the correct usage.

Until issues such as this are fixed, reviewers should stay away from Windows 11 22H2.
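
As an aside, one way to sanity-check what an overlay reports is to sample utilization directly from the OS and compare. A minimal Python sketch, assuming the third-party psutil package is installed:

```python
# Minimal cross-check: sample per-core CPU utilization straight from the OS,
# to compare against what an in-game overlay (or Task Manager) reports.
# Assumes the third-party `psutil` package (pip install psutil).
import psutil

def log_cpu_utilization(samples: int = 60, interval_s: float = 1.0) -> None:
    for _ in range(samples):
        # Blocks for interval_s and returns utilization over that window.
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        total = sum(per_core) / len(per_core)
        print(f"total={total:5.1f}%  per-core={per_core}")

if __name__ == "__main__":
    log_cpu_utilization()
```

Run it alongside a game and compare its numbers against the overlay and Task Manager.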
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,773
3,152
136

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
That "joke" of GPU was the second fastest available video card just a few days ago.
It is still the 3rd fastest video card on the market.

If I could only look at one set of results, I'd want that to be paired with the 4090. But it would be terrible if everyone only tested with one card or one vendor, so I certainly don't fault anyone for testing with the 6950XT. Ideally, reviewers should test with both the fastest Radeon and the fastest Geforce, or something close to that.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
If I could only look at one set of results, I'd want that to be paired with the 4090. But it would be terrible if everyone only tested with one card or one vendor, so I certainly don't fault anyone for testing with the 6950XT. Ideally, reviewers should test with both the fastest Radeon and the fastest Geforce, or something close to that.
I don't want to butt into you guys' verbal threesome here, but the 4090 is CPU limited by both the 7950X and the new 13900K. Nvidia's efforts have paid off, but not without shortfalls. The lack of DP 2.0 I/O is a sore spot for many; even if DP 2.0 monitors don't exist in large numbers right now, they will by this time next year. It goes without saying that Nvidia is a slimy company that would include DP 2.0 on the refreshed cards whenever they come out. Even so, the card may outgun even the next generation of processors from Intel and AMD two years from now. Rumors afloat about RDNA3 performance suggest the top end will likewise be CPU limited by the 13900K or the 7950X. I can't remember the last time a top-end card was CPU limited by a top-end CPU.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
Not sure if it's already been posted, but looks like there's a bug in XTU that explains HUB's anomalous power scaling results.

I mean, how could this have happened, don't they follow this prestigious computer forum? :oops:

I have been posting these numbers since October 13th; they could easily have noticed that their numbers are wrong. They are off by up to 30 percent (a small part of this difference is caused by their power data points being 5W lower than mine).

[Attached image: HWUB power screw-up.png]
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Even so, the card may outgun even the next generation of processors from Intel and AMD two years from now. Rumors afloat about RDNA3 performance suggest the top end will likewise be CPU limited by the 13900K or the 7950X. I can't remember the last time a top-end card was CPU limited by a top-end CPU.

Part of that equation is the driver. An AMD card with around the same performance as a 4090 won't have exactly the same CPU bottlenecks. I'm sure Nvidia will be very focused on lowering driver overhead over the next couple of years, although it might be less of an issue for them after Zen 4 V-Cache is out and reviewers switch over.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
@A/// No, it has happened a few times in the past. Also, the driver overhead is relevant at high fps. At low fps the game is GPU limited and overhead doesn't play a big role.

Resolutions, graphics detail, and refresh rates (to a lesser extent) will continue to increase, and with them both 3D graphics and CPU requirements.
 
  • Like
Reactions: Tlh97

coercitiv

Diamond Member
Jan 24, 2014
6,208
11,923
136
Also, the driver overhead is relevant at high fps. At low fps the game is GPU limited and overhead doesn't play a big role.
So the driver overhead is relevant when we do CPU testing in games, but much less relevant in GPU testing, making it an unnecessary burden for the former.

There's an argument to be made here that using a less capable 6950XT at low resolutions is a good idea for CPU testing in games. Unfortunately, we have this new trend of introducing RT into CPU testing as well, understandably so considering RT increases load on the CPU. The current AMD cards have weaker RT capabilities, so this brings us back to square one, as neither choice of GPU is perfect.

I expect we'll get some interesting information from the new AMD cards, whether or not they beat Lovelace. They'll be strong enough in both raster and RT to allow for more consistent CPU testing.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
People make various arguments about what the ideal GPU test setup is. There isn't one. That's why there are dozens of reviews and you will have to read at least a few of them. One site cannot possibly cover everything.

The typical argument is that "the CPU doesn't matter because it's GPU bound." Well, if you are playing competitively, the CPU matters very much, especially in multiplayer. Real-world environments, with other players, interactions with those players, and NPCs, can't be reliably tested, and that's where the CPU plays a bigger role.

But you can also make the argument that single-player gamers, or those who play games for relaxation, will be a lot less sensitive to such things, and in that case buying a lower-end CPU with a capable GPU is the better deal.

Different things for different people. Disagreements exist, but that doesn't necessarily mean either party is wrong, because it comes down to experience.
 

coercitiv

Diamond Member
Jan 24, 2014
6,208
11,923
136
People make various arguments on what the ideal GPU testing is.
I don't care about getting the ideal test; I was merely discussing the impact of driver overhead on the commonly agreed way of testing CPUs in games. We have a consensus among reviewers that CPU testing is done in ways that minimize GPU bottlenecks. That means we need to aim for minimal driver overhead, yet still get good RT performance so we know how future games are likely to behave. This does not make the test perfect; it makes it better.
 
  • Like
Reactions: Tlh97

PJVol

Senior member
May 25, 2020
534
447
106
They used a 6950xt for the GPU, which is almost like a joke. It's no wonder their gaming benches are full of anomalies because they are GPU bottlenecked to hell.
Why shouldn't they? GPU bottlenecked at 720p and 1080p? And wasn't there GPU driver overhead with DX11?
 
  • Like
Reactions: Tlh97

inf64

Diamond Member
Mar 11, 2011
3,702
4,030
136
Why shouldn't they? GPU bottlenecked at 720p and 1080p? And wasn't there a GPU driver overhead with the DX11?
Yeah, there is no GPU bottleneck at 720p or 1080p when using a 6950XT. It was basically tied for the top spot with the 3090 Ti in raster performance. There is Nvidia driver overhead where Ryzens are concerned, for some reason. We will have to see how RDNA3 performs; I suspect the scaling with Ryzen will be better than it is with the 4090.
 
  • Like
Reactions: Tlh97

Harry_Wild

Senior member
Dec 14, 2012
834
150
106
Tell me about the Intel Raptor Lake CPU eco-mode.

How do you go about setting up this mode? BIOS? What temperature and wattage specs? What's the reduction in performance and wattage from normal mode? Thanks! :)
 

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,642
136
Tell me about the Intel Raptor Lake CPU eco-mode.

How do you go about setting up this mode? BIOS? What temperature and wattage specs? What's the reduction in performance and wattage from normal mode? Thanks! :)
There are two ways to go about setting up an "eco mode" on Intel:

  1. Play with PL1, PL2 and Tau in the BIOS, using the Intel guidelines given in the datasheet. Obviously, setting values lower than the Intel guidelines will result in more efficient operation. This works especially well if all you want is burst performance, in which case you should set PL2 to some value higher than PL1 and set a finite Tau depending on the duration of your workloads.
  2. If you don't want the trial and error of the first option, set PL2 = PL1 = any desired value. This ignores Tau, and the chip will turbo to the best extent possible for the given power budget (the sketch below shows the equivalent limits being set from software).
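
As a rough illustration only: on Linux, the same PL1/PL2/Tau knobs are exposed through the intel_rapl powercap driver. A minimal Python sketch, assuming that driver is loaded and root privileges; the sysfs paths are the standard powercap ones, but whether your board honors them exactly like the BIOS limits is not guaranteed:

```python
# Minimal sketch: set PL1/PL2/Tau via the Linux intel_rapl powercap sysfs.
# Assumptions: intel_rapl driver loaded, run as root, and intel-rapl:0 is
# the package power domain. Illustrative only, not a tuned recommendation.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")

def set_limits(pl1_watts: float, pl2_watts: float, tau_seconds: float) -> None:
    # constraint_0 is the long-term limit (PL1); constraint_1 is short-term (PL2).
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(pl1_watts * 1_000_000)))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(int(pl2_watts * 1_000_000)))
    (RAPL / "constraint_0_time_window_us").write_text(str(int(tau_seconds * 1_000_000)))

# Option 1: burst-friendly limits, e.g. PL1=125 W, PL2=180 W, Tau=28 s.
set_limits(125, 180, 28)

# Option 2: PL2 = PL1, which makes Tau irrelevant.
# set_limits(125, 125, 28)
```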
 

coercitiv

Diamond Member
Jan 24, 2014
6,208
11,923
136
Any review of the 13600K with DDR4 3200/3600?
We are bound to get one soon from HUB, and they test with both memory types, so we can assess the relative performance difference across the same game selection.

It's worth noting that, according to HUB data, the 13700K and 13900K lose around 6% on average FPS and ~7-8% on 1% lows when using DDR4 3600 CL14 dual-rank versus DDR5 6400 CL32. I think we can reasonably expect a 5-7% difference even when using a DDR4 4000 kit (or an equivalent overclock for a dual-rank setup).
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,766
784
126
So if you set the power limit to 250W, you still get like 98% of the performance, do you not? So the question I have to ask is why this isn't the default for motherboards. There doesn't seem to be much point blasting heat through your system for very small gains.