Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% performance-per-watt (PPW) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, that could be a major selling point for people to move to ADL now rather than wait.
 

Mopetar

Diamond Member
Jan 31, 2011
The KS is irrelevant, fast DDR5 is what will allow Raptor Lake to keep up.

Is there any DDR5 that's as fast as L3 cache? I ask because the 7xxxX3D parts are probably going to step all over Intel to the extent that not even the exotic RAM kits that cost almost as much as the CPU will be able to compensate.

Or they could just get a faster GPU like most reasonable people. The 6950XT just can't cut it with these newer CPUs.

You'll have to excuse AnandTech for using what was basically the second fastest GPU from the previous generation instead of a 4090 which had only released a week before they published the article that contains that graphic. I'm certain they could have acquired that new piece of technology and benchmarked not only the Raptor Lake CPUs they were testing for that article, but also all of the other, older CPUs in the chart as well.

Even assuming they had, I'm not sure how the results change. I think you're grasping at straws to try to make a point that doesn't stand up to evidence.
 

Rigg

Senior member
May 6, 2020
The tests should be done at 720p in the most CPU limited areas of the game, with RT turned on.

Then you will see large gaps between these CPUs. He has three games in the lineup that support RT, and he didn't even bother turning it on.

I used to think that HWUB were biased, but now I know they are just straight up incompetent.

You clearly have a different philosophical view (arguably an outdated view) of how games testing should be done. That's fine. Look at the data you find compelling. Just don't say HUB's testing is incompetent or biased. That's garbage and you know it. Clearly you are not their audience.

I think testing video games with stock mem speeds/JEDEC timings is dumb and biases data towards Intel. I think 720p data with a 4090 is interesting but irrelevant. I don't think computerbase.de testers are incompetent or biased because their testing philosophy differs from my own. Get off your soapbox and get over yourself.

I find the real world testing HUB does far more relevant and informative than 720p data with JEDEC timings. You aren't enlightening us by telling us that more FPS scaling will be on display with low res data. We get it. We just don't care. We're arguing in circles with you because you can't get over your own biases and preconceived notions.

In a perfect world each time a CPU launches we'd have 50+ games worth of fresh data at every resolution from 480p - 4k, RT on/off, a range of memory speeds, etc.

It's just not feasible to cover every conceivable variable and still have a large enough sample size of games to draw any conclusions from. Especially when software and drivers are moving targets. You don't seem to appreciate the sheer amount of work and time it takes to put together testing data like this. You have to pick and choose your variables. HUB has chosen to stick to resolutions, settings, and hardware configs that are most likely to cover real world use cases for the hardware they test. It's clear a large number of people agree with their methodology choices given their popularity. They are by no means perfect but your constant nit-picky criticism of them falls flat and is at times ridiculous. You make every molehill sound like Mt. Everest.
 

Carfax83

Diamond Member
Nov 1, 2010
Is there any DDR5 that's as fast as L3 cache? I ask because the 7xxxX3D parts are probably going to step all over Intel to the extent that not even the exotic RAM kits that cost almost as much as the CPU will be able to compensate.

No, there isn't, but cache only gets you so far, especially with RT BVH building. The RT BVH working set seems to be too big to fit in the L3 cache of the Zen 3D parts, so system memory performance is important there.
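
Just as a back-of-envelope sketch (the primitive count and node size below are assumptions, not figures pulled from any actual game): a binary BVH over N primitives can have up to 2N-1 nodes, so once a scene is in the millions of primitives the structure alone dwarfs even the 96 MB of L3 on the X3D parts.

    # Rough sizing sketch; both inputs are assumed, illustrative values.
    primitives = 4_000_000                          # assumed RT scene complexity
    node_bytes = 32                                 # assumed compact BVH node size
    nodes = 2 * primitives - 1                      # upper bound for a binary BVH
    print(f"{nodes * node_bytes / 2**20:.0f} MiB")  # ~244 MiB vs. 96 MB of X3D L3

If the real working set looks anything like that, the BVH work ends up streaming through DRAM no matter how big the L3 is, which is where memory speed starts to matter.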

You'll have to excuse AnandTech for using what was basically the second fastest GPU from the previous generation instead of a 4090 which had only released a week before they published the article that contains that graphic. I'm certain they could have acquired that new piece of technology and benchmarked not only the Raptor Lake CPUs they were testing for that article, but also all of the other, older CPUs in the chart as well.

They'll get no excuses from me. I understand that getting an RTX 4090 can be difficult, but other reviewers managed to do so in time for the new CPUs.....and for the ones that didn't, they made sure they updated their tests with an RTX 4090 in the days or weeks post launch.

Sink or swim man.

Even assuming they had, I'm not sure how the results change.

Depending on the particular game, the changes may be minor or major. Some games are CPU bottlenecked even at 4K with an RTX 4090, usually due to poor CPU optimization or being inherently single threaded.

I think you're grasping at straws to try to make a point that doesn't stand up to evidence.

If I'm making a mistake, it's probably in thinking that Anandtech is still a top tier review site.
 

Mopetar

Diamond Member
Jan 31, 2011
For someone who has a 4090 and I'm assuming a 13900K, you're spending an awful lot of time trying to convince everyone else here that it's the best possible thing ever instead of actually using it.

I'm sure there's some unique combination of test conditions where your chosen technology performs best, but you just look damned pathetic trying to push it as the one true way to benchmark things. It's like teen girl levels of insecurity.

If I were so blessed as to have such top end technology at the moment they'd have to get a pry bar to detach me from the chair after the gaming marathon I'd pull. Who gives a damn what anyone else here thinks or whether there will be something slightly better a few months from now? That's just the nature of technology and it's sad to see you squandering that precious moment in time when it's the best it will ever be.

Now go fire up your favorite game and crank the settings to max with the knowledge that right now today there's no one else on the planet that can get that level of performance. Replying to people on Internet forums can wait.
 

Carfax83

Diamond Member
Nov 1, 2010
You clearly have a different philosophical view (arguably an outdated view) of how games testing should be done. That's fine. Look at the data you find compelling. Just don't say HUB's testing is incompetent or biased. That's garbage and you know it. Clearly you are not their audience.

Did you see their graphs? When the results are that bunched together, how is that effective CPU testing? It's presenting a false narrative, because uninformed people might look at those graphs and think that their CPUs can easily ensure high framerates.......until they run into a CPU bound area or turn on RT and then their CPU chokes.

I think testing video games with stock mem speeds/JEDEC timings is dumb and biases data towards Intel.

We've already covered this ad nauseum. Using stock memory was the preferred method since forever because that is the "out of the box" experience. Many reviewers still adhere to that practice and I don't think they are wrong to do so. That said, I don't think it's incorrect to include overclocked memory results either because most people will use overclocked memory.

I think 720p data with a 4090 is interesting but irrelevant. I don't think computerbase.de testers are incompetent or biased because their testing philosophy differs from my own. Get off your soapbox and get over yourself.

If a testing method doesn't accomplish what it set out to do, then it's ineffective period and not even worthy of consideration. Testing CPU and memory performance in games has always been tricky, and most reviewers fail at it.

I find the real world testing HUB does far more relevant and informative than 720p data with JEDEC timings. You aren't enlightening us by telling us that more FPS scaling will be on display with low res data. We get it. We just don't care. We're arguing in circles with you because you can't get over your own biases and preconceived notions.

As usual, you're being way too emotional and dramatic about this. So what if I think HWUB are incompetent. It's just an opinion. Did I tell you that you shouldn't watch their YouTube videos or visit their website? How some people can get so offended by an anonymous internet dude's opinion is beyond me.

It's just not feasible to cover every conceivable variable and still have a large enough sample size of games to draw any conclusions from. Especially when software and drivers are moving targets. You don't seem to appreciate the sheer amount of work and time it takes to put together testing data like this. You have to pick and choose your variables. HUB has chosen to stick to resolutions, settings, and hardware configs that are most likely to cover real world use cases for the hardware they test. It's clear a large number of people agree with their methodology choices given their popularity. They are by no means perfect but your constant nit-picky criticism of them falls flat and is at times ridiculous. You make every molehill sound like Mt. Everest.

We all have supporters and detractors. I'm obviously one of HWUB's many detractors and I've stated why. Too much corner cutting for my tastes, but if it works for you then it works for you. As I said before, I'm not trying to dissuade anyone from watching their content, I'm just stating the reasons why I think they are crap.

How in the heck can they have 3 RT capable games in their lineup and not include any RT enabled benches? o_O
 

Hitman928

Diamond Member
Apr 15, 2012
I never said it shouldn't be, but because of low level APIs, lowering the resolution isn't as effective as it used to be for isolating CPU performance.

You realize you are arguing against yourself at this point, right?

Um, you need to watch the video again. It was in the same area.....the exact same area. That area is infamous for punishing CPUs when appropriate settings are used, and both the uploaders knew exactly where to look.

As for your comment about different GPUs, I can tell you probably never even watched the video. Both GPUs were CPU bottlenecked in this particular area, and the guy with the RTX 4080 was CPU bottlenecked even at 4K DLSS.

I briefly watched the videos until I realized they were from 2 different random youtube people with 0 controls over anything related to benchmarking. That's all I needed to know. With that said, I went back and watched, they are in the exact same area of the map for about 30 - 40% of the time. They have different start and end points which is really what I had checked earlier.

I also don't know why you've resorted to trying to make such a faulty comparison when the site you say is the gold standard, Computerbase, already tested Cyberpunk at 720p with RT on and didn't find anywhere near the same lead for the 13900k. You keep shifting goal posts and throwing out arguments that either have no basis or at best are built on very shaky evidence. It's rather tiring to read through tbh.

[attached benchmark chart]
 

Carfax83

Diamond Member
Nov 1, 2010
For someone who has a 4090 and I'm assuming a 13900K, you're spending an awful lot of time trying to convince everyone else here that it's the best possible thing ever instead of actually using it.

I play lots of games, but as I get older, I just don't have the attention span for long gaming sessions like I used to. I'm actually very picky now about games, movies and other forms of entertainment. I'd like to think it's because I have more refined tastes now that I'm in my 40s and have less tolerance for B.S :D

I'm sure there's some unique combination of test conditions where your chosen technology performs best, but you just look damned pathetic trying to push it as the one true way to benchmark things. It's like teen girl levels of insecurity.

Oh, here we go with the "I must be insecure" crap again. You know what a more reasonable and logical explanation is? That I love talking about and debating hardware performance, and that I have a generous case of "lastworditis."

And whether you want to believe it or not, I have backed up my arguments with benchmarks. But just like in the graphics forum, anything remotely perceived as anti AMD (even with evidence) is typically downplayed or ignored.

If I were so blessed as to have such top end technology at the moment they'd have to get a pry bar to detach me from the chair after the gaming marathon I'd pull. Who gives a damn what anyone else here thinks or whether there will be something slightly better a few months from now? That's just the nature of technology and it's sad to see you squandering that precious moment in time when it's the best it will ever be.

Gaming is fun, but hardware is equally fun. I like to test, tweak, debate and argue about hardware performance. Most people on this forum are like that. Why else is this thread hundreds of pages long? The fact that you are focusing on my perceived personal characteristics instead says a lot about you. Very cynical you must be.

Now go fire up your favorite game and crank the settings to max with the knowledge that right now today there's no one else on the planet that can get that level of performance. Replying to people on Internet forums can wait.

You know, I was at work the vast majority of the day. I was replying to most of these posts at work. That said, debating and discussing hardware to me is nearly as satisfying as gaming. The one game I am playing now more than anything is the Baldur's Gate 3 early access. Phenomenal game and world building. According to Steam I've already logged 56 hours playing it. Can easily see myself hitting the 200+ hour mark when the final release launches later this year.
 

Carfax83

Diamond Member
Nov 1, 2010
I briefly watched the videos until I realized they were from 2 different random youtube people with 0 controls over anything related to benchmarking. That's all I needed to know. With that said, I went back and watched, they are in the exact same area of the map for about 30 - 40% of the time. They have different start and end points which is really what I had checked earlier.

The start point is on the same route (through Cherry Blossom Market); it's just that the Raptor Lake uploader didn't start on the main road and didn't turn around but kept going through, while the 5800X3D uploader turned around once he hit the middle (the most CPU limited area) and went back to the main road.

They were both obviously testing their machines, as they knew the exact location and settings to have the greatest impact on performance.

I also don't know why you've resorted to trying to make such a faulty comparison when the site you say is the gold standard, Computerbase, already tested Cyberpunk at 720p with RT on and didn't find anywhere near the same lead for the 13900k. You keep shifting goal posts and throwing out arguments that either have no basis or at best are built on very shaky evidence. It's rather tiring to read through tbh.

You're assuming that Computerbase.de tested the same area. CBP 2077 is a huge game and I don't know where Computerbase.de did their test run, but it likely wasn't in this area because the performance gap is much smaller. The YouTube videos I posted showed a worst case scenario for CPU performance if RT is enabled and crowd density is maxed out.
 

Rigg

Senior member
May 6, 2020
Did you see their graphs? When the results are that bunched together, how is that effective CPU testing? It's presenting a false narrative, because uninformed people might look at those graphs and think that their CPUs can easily ensure high framerates.......until they run into a CPU bound area or turn on RT and then their CPU chokes.
They showed results at 1080p on the most powerful gaming GPU in the world. The results are bunched together because they aren't trying to spin a narrative. Their goal isn't to show differences that only manifest themselves in edge case scenarios. If they did as you suggest, and the difference was more apparent, then they'd be showing uninformed people differences that aren't going to be there when they actually use the hardware.

What evidence do you have that they aren't using CPU bound areas for their testing?

You still haven't shown any compelling evidence that enabling ray tracing hurts performance on current gen AMD vs Intel. You are spinning a narrative based on a very limited amount of cherry picked/incomplete/inconclusive data in a small handful of ray traced games that already favor Intel without RT. About the only interesting thing you've shown (and was clearly backed up by the data you provided) was that turning on RT in Spiderman stretched Intel's already substantial FPS lead out another 5% (720p, supported JEDEC memory, 3090 Ti).



We've already covered this ad nauseum. Using stock memory was the preferred method since forever because that is the "out of the box" experience. Many reviewers still adhere to that practice and I don't think they are wrong to do so. That said, I don't think it's incorrect to include overclocked memory results either because most people will use overclocked memory.
It hasn't been the preferred method since XMP became a thing. Sub 1080p testing hasn't been the norm in at least 5 years. You are living in the past in regards to common game benching practices. Right, wrong, or indifferent that is the reality.



If a testing method doesn't accomplish what it set out to do, then it's ineffective period and not even worthy of consideration. Testing CPU and memory performance in games has always been tricky, and most reviewers fail at it.
HUB's testing does exactly what it sets out to do. It shows you what the actual real world differences between gaming CPUs are by using the god tier GPU with the lowest quality settings any semi-reasonable person would use with it.


As usual, you're being way too emotional and dramatic about this. So what if I think HWUB are incompetent. It's just an opinion. Did I tell you that you shouldn't watch their YouTube videos or visit their website? How is it that some people can get so offended by an anonymous internet dude's opinion is beyond me.
I'm not emotional. I'm not offended. Don't flatter yourself. If anything you had an overly dramatic and emotional reaction to HUB's testing. Just like you do when anyone posts data from them that goes against your narrative.


We all have supporters and detractors. I'm obviously one of HWUB's many detractors and I've stated why. Too much corner cutting for my tastes, but if it works for you then it works for you. As I said before, I'm not trying to dissuade anyone from watching their content, I'm just stating the reasons why I think they are crap.

How in the heck can they have 3 RT capable games in their lineup and not include any RT enabled benches? o_O
Yeah but your reasons for thinking they are crap are silly and completely based on shaky assumptions/incomplete data. Perhaps they did their homework and found enabling RT didn't affect FPS scaling for these games. Steve has done a lot more benchmarking than you have. Maybe he's better informed than you are about how to best represent performance differences in games. I don't know. Neither do you. They routinely do follow up videos/articles that explore things with different variables. Maybe we will get a video about RT enabled CPU performance in the future. It's not like you'll take it seriously if it challenges your preconceived notions though.
 

Exist50

Platinum Member
Aug 18, 2016
Without specific examples that help to form an actual argument, this post is littered with flaws.
What in particular? It's been recited ad nauseam here. Everything from individual games performing drastically worse on Intel (and only Intel) vs other publications, to purposely choosing test conditions to favor AMD (no W11, no RT, etc), to dishonest price comparisons. It's obvious to anyone who's been following them that they warp their testing and even reporting to fit a preconceived outcome in line with their brand preferences. Why do you think that they, of all outlets, come up so often on these threads?
 

Henry swagger

Senior member
Feb 9, 2022
I'm not so sure. The HWUB testing showed the 13900KS averaging 5.48 GHz during the 10-minute CBR23 run at 280 W. My own CPU stays at 5.3 GHz and draws 235 W. If I had better cooling, I could get it down even lower.
Can you test your 13900K with E-cores only at 4.3 GHz? I wanna know the power draw and score.
 

ondma

Platinum Member
Mar 18, 2018
Is there any DDR5 that's as fast as L3 cache? I ask because the 7xxxX3D parts are probably going to step all over Intel to the extent that not even the exotic RAM kits that cost almost as much as the CPU will be able to compensate.



You'll have to excuse AnandTech for using what was basically the second fastest GPU from the previous generation instead of a 4090 which had only released a week before they published the article that contains that graphic. I'm certain they could have acquired that new piece of technology and benchmarked not only the Raptor Lake CPUs they were testing for that article, but also all of the other, older CPUs in the chart as well.

Even assuming they had, I'm not sure how the results change. I think you're grasping at straws to try to make a point that doesn't stand up to evidence.
Lots of other sites managed to acquire a 4090 for comparing RL and Zen 4. Using weak dGPUs has been the bane of AT gaming tests for a long, long time, and they don't seem to be willing to correct it. Can it really be that difficult to come up with the money for a 4090?? Lots of other sites are able to do it. Personally, I don't even look at AT gaming tests anymore. Sad, since they used to be my most trusted site.
 

Carfax83

Diamond Member
Nov 1, 2010
Can you test your 13900K with E-cores only at 4.3 GHz? I wanna know the power draw and score.

My motherboard doesn't give me the option to disable all the P-cores. For some reason, I had to leave 1 of them running, so I did and I scored 20,768 with a max power draw of 103 W. So that's 1 P-core @ 5.3 GHz and 16 E-cores @ 4.3 GHz.
 

Carfax83

Diamond Member
Nov 1, 2010
They showed results at 1080p on the most powerful gaming GPU in the world. The results are bunched together because they aren't trying to spin a narrative. Their goal isn't to show differences that only manifest themselves in edge case scenarios. If they did as you suggest, and the difference was more apparent, then they'd be showing uninformed people differences that aren't going to be there when they actually use the hardware.

The results are bunched up together because there is very little if any CPU isolation in those tests. Had they turned on RT for those games that supported it, the results likely would have been different.

And RT isn't an "edge case scenario." Like it or not, RT is here to stay and the amount of games that support it will only continue to grow.

What evidence do you have that they aren't using CPU bound areas for their testing?

Their YouTube video actually showed snippets of the areas they are testing. Out of all the games they tested, I only have Plague Tale Requiem and I can confirm that the early town area isn't very CPU intensive at all. Now the later town areas are a bit more CPU intensive (or when there are large amounts of rats on screen), but generally speaking, Plague Tale Requiem is much more GPU limited.

For HZD he's using the canned benchmark and for CBP 2077, it looks like crowd density is turned down.

You still haven't shown any compelling evidence that enabling ray tracing hurts performance on current gen AMD vs Intel. You are spinning a narrative based on a very limited amount of cherry picked/incomplete/inconclusive data in a small handful of ray traced games that already favor Intel without RT. About the only interesting thing you've shown (and was clearly backed up by the data you provided) was that turning on RT in Spiderman stretched Intel's already substantial FPS lead out another 5% (720p, supported JEDEC memory, 3090 Ti).

That's because the RTX 3090 Ti was probably tapped out. Spider-Man Miles Morales increased the lead Raptor Lake had over Zen 4 substantially despite the engine being the exact same as the original Spider-Man Remastered, because it added RT shadows in addition to the RT reflections that were in the original game. According to the PCGH.de results, Zen 4's performance diminished in Spider Man MM despite using a faster RTX 4090. Raptor Lake took a hit as well, but not nearly as much.

Spider-Man: Miles Morales - CPU Benchmarks (pcgameshardware.de)

None of the Zen 4 CPUs can break the 60 FPS barrier in the Witcher 3 Next gen, which uses RT global illumination, RT reflections, RT shadows, RT ambient occlusion and increased draw distances.

Ryzen 7000 Non-X: Benchmarks, new basics (pcgameshardware.de)

It hasn't been the preferred method since XMP became a thing. Sub 1080p testing hasn't been the norm in at least 5 years. You are living in the past in regards to common game benching practices. Right, wrong, or indifferent that is the reality.

It varies across reviewers, but most of the more established reviewers tend to use stock memory settings.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
Thanx.. impressive e-cores close to a 5900x in multi-core ✔💻
16 E-cores + 1 P-core at 20K vs 23K for 12 cores? Or with SMT, 18 threads at 20K vs 24 threads at 23K? Either way it does not sound that great when a 7900X (the current competitor) does 30K. Not that great IMO.
 

Hulk

Diamond Member
Oct 9, 1999
My motherboard doesn't give me the option to disable all the P-cores. For some reason, I had to leave 1 of them running, so I did and I scored 20,768 with a max power draw of 103 W. So that's 1 P-core @ 5.3 GHz and 16 E-cores @ 4.3 GHz.

I get 18,860 with the one P-core downclocked to 1.4 GHz, which basically takes it out of the picture. 120 W with stock motherboard settings. No undervolting.
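
For what it's worth, here's the rough efficiency math on the two runs reported above (treating the reported max power as if it were the sustained draw, so ballpark only):

    # Ballpark points-per-watt from the two runs reported above.
    # Power figures are reported peaks, not measured averages over the run.
    runs = {
        "1 P-core @ 5.3 GHz + 16 E-cores @ 4.3 GHz": (20768, 103),
        "1 P-core @ 1.4 GHz + 16 E-cores, stock board": (18860, 120),
    }
    for label, (score, watts) in runs.items():
        print(f"{label}: {score / watts:.0f} pts/W")   # ~202 vs ~157 pts/W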
 
Jul 27, 2020
My motherboard doesn't give me the option to disable all the P-cores. For some reason, I had to leave 1 of them running, so I did and I scored 20,768 with a max power draw of 103 W. So that's 1 P-core @ 5.3 GHz and 16 E-cores @ 4.3 GHz.
It's not your motherboard; it's an Intel limitation. Reviewers are testing E-cores only by forcing the apps onto them with affinity masks.
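
A minimal sketch of that affinity-mask approach (the logical CPU numbering is an assumption for a 13900K: Windows usually enumerates the 8 P-cores with HT as CPUs 0-15 and the 16 E-cores as 16-31, but verify your own layout in Task Manager or Coreinfo first):

    import subprocess

    import psutil  # third-party: pip install psutil

    # Assumed 13900K layout: logical CPUs 0-15 = P-core threads (8 cores with HT),
    # logical CPUs 16-31 = the 16 E-cores.
    E_CORES = list(range(16, 32))

    proc = subprocess.Popen(["Cinebench.exe"])      # placeholder path to the benchmark
    psutil.Process(proc.pid).cpu_affinity(E_CORES)  # pin the whole process to E-cores

The command-prompt equivalent is "start /affinity FFFF0000 Cinebench.exe", where FFFF0000 is the hex mask for logical CPUs 16-31.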
 

mikk

Diamond Member
May 15, 2012
Why in God's name would Intel produce a Raptor Cove variant with the old Alder Lake L2 configuration???


This explanation makes the most sense:

It's a new ADL stepping with DLVR on die (not enabled, says Raichu), and some other improvements. The biggest deal for Raptor mobile would be the efficiency gains from Intel 7 "ultra".
 

Hitman928

Diamond Member
Apr 15, 2012
What in particular? It's been recited ad nauseam here. Everything from individual games performing drastically worse on Intel (and only Intel) vs other publications, to purposely choosing test conditions to favor AMD (no W11, no RT, etc), to dishonest price comparisons. It's obvious to anyone who's been following them that they warp their testing and even reporting to fit a preconceived outcome in line with their brand preferences. Why do you think that they, of all outlets, come up so often on these threads?

I've seen a few posters repeat that they are biased/bad ad nauseam but haven't seen anyone provide any actual proof or concrete examples. Do you have solid examples to point to that show their bad testing methodology? They have used W11 since Alder Lake dropped specifically for Intel and very few reviewers turn on RT during CPU reviews because it introduces such a large GPU burden (and I have yet to see any real evidence that it is wise to turn it on for CPU reviews). The only one I see pushing for RT in CPU tests is @Carfax83 because he thinks (again, without real evidence) that Intel CPUs are better at it than AMD CPUs.
 

Hitman928

Diamond Member
Apr 15, 2012
The start point is on the same route (through Cherry Blossom Market); it's just that the Raptor Lake uploader didn't start on the main road and didn't turn around but kept going through, while the 5800X3D uploader turned around once he hit the middle (the most CPU limited area) and went back to the main road.

Right, so same general area, but not the exact same area; there's overlap for less than half of the run. When the one person turns back around, he changes settings to turn DLSS off, so it's the start of a new section of his testing. He just does it on the fly rather than reloading the original start point with new settings.


You're assuming that Computerbase.de tested the same area. CBP 2077 is a huge game and I don't know where Computerbase.de did their test run, but it likely wasn't in this area because the performance gap is much smaller. The YouTube videos I posted showed a worst case scenario for CPU performance if RT is enabled and crowd density is maxed out.

But one of your main arguments before for using computerbase is that they clearly test properly for CPU bottlenecks in games by picking the right settings and locations. So do you believe that they do this or not? Also, computerbase showed lower fps for the 13900k at 720p than what is in the video at 1080p + DLSS Quality. So if we're just going to go off of uncontrolled comparisons and a bunch of assumptions, it seems computerbase's situation is actually causing more of a CPU bottleneck than the videos you showed.

Edit: I also forgot to mention that in the 2 videos you posted as "proof" there is a clear increase in characters on screen for the AMD machine. I have no idea why, but it is obvious if you pay attention to it in the videos (examples below, one is really blurry because the uploader's video quality is terrible).

[screenshots from both videos]

Just after the alley:

[screenshot]

I probably missed a couple in the above screenshot because the player is moving so fast and the video quality is terrible.

[screenshot]

This is where they first come together. I didn't bother adding the count to these, but you can obviously see the difference in the screenshots.

[screenshots from both videos]
 

nurturedhate

Golden Member
Aug 27, 2011
Edit: I also forgot to mention that in the 2 videos you posted as "proof" there is a clear increase in characters on screen for the AMD machine. I have no idea why, but it is obvious if you pay attention to it in the videos.

There's an in-game option for crowd density. Really looks like that's what that is.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,007
136
On a different topic...
Is there one programmed VID for a CPU or is there a VID for each core? I thought there was one voltage plane for Alder/Raptor but in the BIOS you can apparently set voltage offsets by core?