Discussion [HWUB] Nvidia has a driver overhead problem. . .


Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
HWUB did an investigation into GPU performance with lower-end CPUs for gaming and published their results. In the comments they mention they tested more games than they showed in the video, all with the same results. As we all (should) know, AMD had the low-end CPU performance issue in DX11, but it looks like this has flipped for DX12/Vulkan. HWUB mentions they think their findings hold true for DX11 as well, but as far as I can tell they only tested DX12/Vulkan titles, so I don't think they have the data to back up that statement, and I doubt it is true.

Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs - YouTube

 

John Carmack

Member
Sep 10, 2016
155
247
116
Really? I've found their card reviews spot on. They have an obvious bias toward rendering-technology advances, so things like RT and DLSS are worth a premium to them, and it shows in their reviews. For example:


Hard to disagree with their take. Heck, with hindsight, AMD's Super Resolution is still not out, and even if it were released today, how many games would support it? Meanwhile Nvidia is pushing UE4 plugins that enable DLSS support without much work.

I watched that author's video on Turing back when it was released, and he spared no superlatives when talking about Turing's DLSS and RT, so it's funny to contrast that with his concern (trolling?) about the importance of the "here and now" versus how RDNA2 might work in the future.

IIRC before the review embargo on Ampere was lifted there was a DF Ampere early access video of sorts being passed around here where they were allowed by Nvidia to share some of the hype, just not exact performance numbers. Why DF in particular? Maybe Nvidia found them an easy partner to work with, someone who shared their vision of the future of gaming.

Coincidentally, shortly after the Ampere launch, Nvidia was found to be threatening HWUB because they were being difficult and not seeing the future of gaming.

I don't know if I'd call them NVfoundry but at times they do sound like mouthpieces and extensions of the corporate hype machines. From the AMD side, I think back to a DF video about Renoir on mobile and it too was filled with superlatives.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
To get a better idea of where the problems are, I went through several of the GameGPU reviews that have CPU testing data and compiled it into a single table. I included CPUs from both AMD and Intel to show the scaling. FPS values marked with an asterisk have hit a game-enforced FPS cap and aren't being bottlenecked by either the CPU or the GPU.

Game (settings) | 3090 + 1400 | 3090 + 1600X | 3090 + 2700X | 3090 + 5900X | 3090 + 4770K | 3090 + 7700K | 3090 + 8700K | 3090 + 10700K
Watch Dogs Legion (DX12 Ultra) | 56 | 68 | 79 | 115 | 72 | 96 | 109 | 115
The Medium (DX12 Max) | 66 | 76 | 84 | 110 | 79 | 109 | 110 | 110
Hitman 3 (DX12 Max) | 71 | 84 | 99 | 139 | 84 | 117 | 122 | 131
AC: Valhalla (DX12 Very High) | 61 | 73 | 85 | 96 | 75 | 95 | 96 | 96
Cyberpunk 2077 (DX12 Ultra) | 56 | 66 | 78 | 115 | 56 | 69 | 86 | 104
Forza Horizon 4 (DX12 Max) | 95 | 110 | 124 | 178 | 115 | 151 | 155 | 170
Total War Troy (DX11 Max) | 56 | 75 | 94 | 116 | 67 | 82 | 96 | 113
Mount & Blade II (DX11 Very High) | 121 | 137 | 155 | 200* | 148 | 196 | 200* | 200*
Nioh 2 (DX11 High) | 104 | 115 | 120* | 120* | 120* | 120* | 120* | 120*
Valheim (DX11 Max) | 58 | 62 | 66 | 162 | 92 | 128 | 127 | 138
Outriders Demo (DX11 Max) | 69 | 73 | 84 | 131 | 73 | 106 | 113 | 118
It Takes Two (DX11 Max) | 100 | 111 | 129 | 243 | 125 | 165 | 173 | 196
Mafia Definitive Edition (DX11 High) | 83 | 103 | 119 | 191 | 115 | 154 | 167 | 179

Game (settings) | 6900XT + 1400 | 6900XT + 1600X | 6900XT + 2700X | 6900XT + 5900X | 6900XT + 4770K | 6900XT + 7700K | 6900XT + 8700K | 6900XT + 10700K
Watch Dogs Legion (DX12 Ultra) | 69 | 86 | 96 | 117 | 85 | 116 | 116 | 117
The Medium (DX12 Max) | 73 | 81 | 87 | 87 | 81 | 87 | 87 | 87
Hitman 3 (DX12 Max) | 79 | 93 | 108 | 152 | 99 | 137 | 143 | 158
AC: Valhalla (DX12 Very High) | 81 | 91 | 101 | 114 | 95 | 109 | 109 | 109
Cyberpunk 2077 (DX12 Ultra) | 60 | 73 | 84 | 115 | 62 | 77 | 93 | 111
Forza Horizon 4 (DX12 Max) | 117 | 131 | 144 | 179 | 140 | 179 | 179 | 179
Total War Troy (DX11 Max) | 57 | 75 | 89 | 117 | 65 | 81 | 96 | 114
Mount & Blade II (DX11 Very High) | 122 | 138 | 162 | 200* | 140 | 187 | 195 | 200*
Nioh 2 (DX11 High) | 87 | 113 | 116 | 120* | 95 | 120* | 120* | 120*
Valheim (DX11 Max) | 57 | 63 | 66 | 131 | 99 | 126 | 126 | 131
Outriders Demo (DX11 Max) | 66 | 75 | 86 | 133 | 70 | 101 | 109 | 115
It Takes Two (DX11 Max) | 116 | 125 | 139 | 253 | 131 | 169 | 180 | 197
Mafia Definitive Edition (DX11 High) | 67 | 90 | 110 | 162 | 95 | 125 | 141 | 145

The driver overhead does show up in the DX12 titles, even ones like The Medium where the game doesn't run well on Radeon. Contrast this with the DX11 games, where both the 6900 XT and 3090 have similar performance and the older CPUs don't see the same kind of uplift from being paired with a Radeon as they do in the DX12 titles. A 2700X basically gets an extra 20 FPS in games like Watch Dogs Legion or Forza Horizon 4, even though with the top CPUs both the 3090 and 6900 XT deliver almost the same FPS.

It's also interesting that some of the DX11 titles can be CPU bottlenecked as well, and there the newer Ryzen 5000 CPUs are substantially better. Both Valheim and It Takes Two seem to scale with the IPC and clock speed gains. Even the 6-core 5600X often posts the same (or effectively the same) frame rates as the 5900X in those titles and blows anything that came before it, Intel or AMD, out of the water. Otherwise, you can look at games where both cards have similar maximum performance, like Total War Troy or the Outriders demo, and the older CPUs perform about the same regardless of which card is being used.
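To make the card-vs-card gap easier to eyeball, here's a quick throwaway script (numbers copied from the two tables above; the game selection is just mine) that prints the per-CPU delta between the 6900 XT and the 3090:

```python
# Quick sketch: per-CPU FPS delta (6900 XT minus 3090) using the GameGPU
# numbers from the two tables above. Frame-capped titles are left out.
cpus = ["1400", "1600X", "2700X", "5900X", "4770K", "7700K", "8700K", "10700K"]

rtx_3090 = {
    "Watch Dogs Legion (DX12)":        [56, 68, 79, 115, 72, 96, 109, 115],
    "Forza Horizon 4 (DX12)":          [95, 110, 124, 178, 115, 151, 155, 170],
    "Mafia Definitive Edition (DX11)": [83, 103, 119, 191, 115, 154, 167, 179],
}
rx_6900xt = {
    "Watch Dogs Legion (DX12)":        [69, 86, 96, 117, 85, 116, 116, 117],
    "Forza Horizon 4 (DX12)":          [117, 131, 144, 179, 140, 179, 179, 179],
    "Mafia Definitive Edition (DX11)": [67, 90, 110, 162, 95, 125, 141, 145],
}

for game in rtx_3090:
    print(game)
    for cpu, nv, amd in zip(cpus, rtx_3090[game], rx_6900xt[game]):
        print(f"  {cpu:>7}: {amd - nv:+d} FPS in favour of the 6900 XT")
```

On the slower CPUs the Radeon pulls ahead in the DX12 rows, while the DX11 row flips in Nvidia's favour, which is exactly the pattern described above.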
 

coercitiv

Diamond Member
Jan 24, 2014
6,199
11,895
136
IIRC before the review embargo on Ampere was lifted there was a DF Ampere early access video of sorts being passed around here where they were allowed by Nvidia to share some of the hype, just not exact performance numbers.
To give other readers full context, that video showed a huge performance difference between the RTX 3080 and RTX 2080. Hardware Unboxed was one of the news outlets which reacted to that promo video; they explained that at least some titles shown running at 4K Ultra were VRAM limited on the 2080, hence the huge delta in performance.

I used to like DF a lot, and I still hope this is just a phase for them, but these "promotional content" mistakes add up over time, and as we can see they may even result in some form of tension with news outlets that are free to discuss misleading figures in PR content. (unlike DF who were probably bound by contract and could not issue a correction even if they wanted to)
 

Timorous

Golden Member
Oct 27, 2008
1,611
2,764
136
Frankly, the R5 1400 was and is a horrible CPU. People forget that it has two CCXes active in a 2+2 configuration, so each core has access to at most 4MB of L3.

That means inter-thread communication has to cross the CCX boundary, and on Zen 1 that was an especially slow affair. Any workload that leans heavily on inter-core communication is going to suffer.
AMD has an advantage here because they are the parents of this abomination and probably pin critical driver threads to the same CCX as the game's render threads, so there is less inter-core overhead and performance continues to scale somewhat.
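Purely to illustrate the idea (this is not what AMD's driver actually does, just a sketch; the CPU-to-CCX mapping below is a hypothetical one and should be checked against /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list on a real system):

```python
# Sketch: restrict a latency-sensitive process to one CCX so its threads share
# L3 with the render thread instead of bouncing across the Infinity Fabric.
# Linux-only (os.sched_setaffinity); the CPU set below is an assumed mapping.
import os

CCX0_CPUS = {0, 1, 2, 3}   # hypothetical: first CCX of a 2+2 R5 1400 with SMT

def pin_to_ccx(cpus=CCX0_CPUS):
    os.sched_setaffinity(0, cpus)                 # 0 = the calling process
    print("restricted to CPUs:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    pin_to_ccx()
```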

Compare it to a proper quad core with 6MB of L3:
[attached benchmark chart]

This shows that if you wanted a smooth 144 Hz experience in FH4 with a 10100, 7700K, 3300X or similar CPU, you cannot get it with an NV GPU. If you have a Zen+ or pre-Skylake Intel CPU, the situation is even worse.

FH4 was built to run at 60 fps on an 8c/8t Jaguar CPU, and we still see the issue even with a much faster 4c/8t CPU. I wonder what impact this issue will have when games are built to run at 60 (or maybe 30) fps on an 8c/16t Zen 2 CPU.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
This shows that if you wanted a smooth 144 Hz experience in FH4 with a 10100, 7700K, 3300X or similar CPU, you cannot get it with an NV GPU. If you have a Zen+ or pre-Skylake Intel CPU, the situation is even worse.

True. Actually, I think even the Intel 4C CPUs that were hurt by microcode updates, i.e. the pre-Coffee Lake parts, would also have problems.

The NV DX12 driver is far from optimal, even in my limited experience. For example, in Anno 1800 DX11 vs. DX12 benchmarks, DX11 consistently wins on averages, even though the later stages of the benchmark are horrendously CPU limited.
So you either give up peak performance, or suffer lower minimums with DX11 in the mid-to-late game. And this should be a poster child for DX12 gaming. The only redeeming factor is the genre: I ended up running DX12 with an 80 FPS limit, and the 3090 holds it really well.

The real question is: can Nvidia improve their DX12 driver's CPU optimizations? Maybe they are doing too much heavy lifting in the driver (sorting, batching draw calls and so on) that was beneficial in the DX11 era but is hurting them now, when GPUs push way more FPS while CPU performance has stagnated.
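A back-of-the-envelope model of why per-draw-call driver cost hurts slow CPUs the most (every number here is invented purely for illustration, not a measurement):

```python
# Toy model: delivered FPS = min(CPU-limited FPS, GPU-limited FPS), where the
# CPU frame time is fixed game work plus draw_calls * per-call driver cost.
# All values are made up to show the shape of the effect, nothing more.
def delivered_fps(game_ms, draw_calls, per_call_us, gpu_fps):
    cpu_ms = game_ms + draw_calls * per_call_us / 1000.0
    return min(1000.0 / cpu_ms, gpu_fps)

DRAW_CALLS, GPU_FPS = 8000, 120   # hypothetical scene and GPU ceiling
for label, game_ms in [("fast CPU", 3.0), ("slow CPU", 7.0)]:
    lean = delivered_fps(game_ms, DRAW_CALLS, per_call_us=0.4, gpu_fps=GPU_FPS)
    heavy = delivered_fps(game_ms, DRAW_CALLS, per_call_us=0.8, gpu_fps=GPU_FPS)
    print(f"{label}: {lean:.0f} fps with a lean driver vs {heavy:.0f} fps with a heavy one")
```

On the fast CPU the GPU ceiling hides most of the extra per-call cost; on the slow CPU the heavier driver turns straight into lost frames, which is roughly the shape of the charts in this thread.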
 
  • Like
Reactions: Leeea

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
True. Actually, I think even the Intel 4C CPUs that were hurt by microcode updates, i.e. the pre-Coffee Lake parts, would also have problems.

The NV DX12 driver is far from optimal, even in my limited experience. For example, in Anno 1800 DX11 vs. DX12 benchmarks, DX11 consistently wins on averages, even though the later stages of the benchmark are horrendously CPU limited.
So you either give up peak performance, or suffer lower minimums with DX11 in the mid-to-late game. And this should be a poster child for DX12 gaming. The only redeeming factor is the genre: I ended up running DX12 with an 80 FPS limit, and the 3090 holds it really well.

The real question is: can Nvidia improve their DX12 driver's CPU optimizations? Maybe they are doing too much heavy lifting in the driver (sorting, batching draw calls and so on) that was beneficial in the DX11 era but is hurting them now, when GPUs push way more FPS while CPU performance has stagnated.

Wait, what? If anything, Nvidia (granted, they are doing too much on the CPU as you describe) is lucky CPU performance has climbed so much in the past couple of years compared to the five years prior to that. And arguably that's thanks to AMD turning up the pressure, which, let's be honest, might not have happened if they had made a couple (more) missteps.

If anything, they (Nvidia) should have viewed the coupling of their GPU performance with CPU scaling as pretty alarming years ago when the writing was on the wall.

Say what you want about what is coming and market share, but Nvidia knows that if they are tying the performance ceiling of their GPUs to CPU performance, they are hitching their wagon to Intel.

Given the Intel track record for execution over the past few years, why the crap would they do that?!?

Obviously, to me, this is a choice they are making for reasons only they seem to know. Maybe it would make the whole upscaling argument more relevant? It’s baffling Ampere would not have addressed this given all the silicon searching for a use they did bother to include.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Obviously, to me, this is a choice they are making for reasons only they seem to know. Maybe it would make the whole upscaling argument more relevant? It’s baffling Ampere would not have addressed this given all the silicon searching for a use they did bother to include.

Maybe they'll switch back to using hardware scheduling in future architectures, but I can see why they would have skipped it with Ampere. It's doubtful that a hardware scheduler provides any real benefits to compute workloads so there wasn't as much priority on it.

It's fairly obvious that Nvidia has put a lot of focus on getting ray tracing into games and I can see that taking priority over other things they could be working on instead. I think that once they get that ironed out they'll start addressing other changes.

I also suspect that hubris played a bit of a role in it as well. If you could go back to shortly after the launch of Vega, how badly would you be laughed at if you predicted that AMD would not only be competitive, but in some ways best Nvidia's top card? Even if you go back through the RDNA2 thread, there's just a lot of disbelief that AMD could actually manage to pull off what the rumors were suggesting.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Say what you want about what is coming and market share, but Nvidia knows that if they are tying the performance ceiling of their GPUs to CPU performance, they are hitching their wagon to Intel.

Isn't ZEN3 in the CPU performance lead lately? It might be annoying for Nvidia to use AMD cpus in reference testing, but more FPS is more FPS? Heck AMD had used Intel CPUs when showcasing their GPUs during Bulldozer era, no big deal.
With CPU core count increasing there is actually more horsepower to extract from CPUs. Nvidia was already doing this multithreaded processing during DX11 era.

Obviously, to me, this is a choice they are making for reasons only they seem to know. Maybe it would make the whole upscaling argument more relevant? It’s baffling Ampere would not have addressed this given all the silicon searching for a use they did bother to include.

Too much drama. I like both DLSS and RT, and I support NV for pushing technologies forward instead of the safe choice of more and more classic rasterization performance.
 
  • Like
Reactions: Leeea

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Isn't ZEN3 in the CPU performance lead lately? It might be annoying for Nvidia to use AMD cpus in reference testing, but more FPS is more FPS?

Nvidia doesn't sell x86 CPUs so they'll use whatever makes their cards look the best. I believe that they'd done a bit of marketing in the past with AMD CPUs, but really until Zen 3 dropped Intel was still wearing the gaming crown. Maybe once the market settles down a little more we'll see them change over to using AMD CPUs for some of their comparisons.

Too much drama. I like both DLSS and RT, and I support NV for pushing technologies forward instead of the safe choice of more and more classic rasterization performance.

DLSS is a bit of a double-edged sword. I think it has the most use in low-power situations and is probably a big deal for Nvidia's ARM ambitions. The flip side is that it might be used as a crutch or worse yet push developers towards relying on it to get acceptable performance out of games.

I suppose the ray tracing band-aid needed to come off eventually, but it's going to take another two or three generations before the technology really gets to where it needs to be. Right now the performance hit is so great that something like DLSS needs to be used for games to run at resolutions higher than 1080p and using DLSS is basically making a sacrifice in quality in order for another type of gain in quality, so that leaves a bitter taste in my mouth.
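For what it's worth, the render-resolution side of that trade is easy to put numbers on. The per-axis scale factors below are the commonly cited ones for the DLSS 2.x presets; treat them as approximate:

```python
# Rough arithmetic: internal render resolution at a 3840x2160 output for the
# commonly cited DLSS 2.x preset scale factors (approximate per-axis ratios).
presets = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       0.5,
    "Ultra Performance": 1 / 3,
}
out_w, out_h = 3840, 2160

for name, scale in presets.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{name:>17}: renders ~{w}x{h} ({scale * scale:.0%} of the output pixels)")
```

So the "Performance" preset at 4K is really shading a 1080p image before reconstruction, which is where the quality-for-quality trade comes from.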
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,486
20,572
146
Too much drama. I like both DLSS and RT, and I support NV for pushing technologies forward instead of the safe choice of more and more classic rasterization performance.
Given how long the new consoles were in development, it was coming no matter what. Nvidia was smart to create their own, and do a good job of marketing it, so as to make their name synonymous with RT in most gamers' minds.
 
  • Like
Reactions: Tlh97 and Leeea

FiendishMind

Member
Aug 9, 2013
60
14
81
Given how long the new consoles were in development, it was coming no matter what. Nvidia was smart to create their own, and do a good job of marketing it, so as to make their name synonymous with RT in most gamers' minds.
I don't know if RTRT was always the plan for the new consoles. The patent for AMD's RT hardware solution was filed in December 2017; that's just months before DXR was launched and well after Turing was designed. Honestly, the relatively late addition of RTRT hardware as a feature requirement could explain some of AMD's RT hardware design choices.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,486
20,572
146
I don't know if RTRT was always the plan for the new consoles. The patent for AMD's RT hardware solution was filed in December 2017; that's just months before DXR was launched and well after Turing was designed. Honestly, the relatively late addition of RTRT hardware as a feature requirement could explain some of AMD's RT hardware design choices.
I remember RT running on a Vega 56. And since it did not require Nvidia hardware to run, and was a DX12 feature, I figured it was being developed and tested for a good bit before release a couple years ago. You may be right though. I honestly can't say differently.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I remember RT running on a Vega 56. And since it did not require Nvidia hardware to run, and was a DX12 feature, I figured it was being developed and tested for a good bit before release a couple years ago. You may be right though. I honestly can't say differently.
That Crytek demo is always way overstated. It was never going to work in a real game and the RT quality was very low; it was very much a proof of concept for cards 5+ years out. RT appeared because Nvidia pushed it. I think it's only in the consoles at all because the console makers demanded something after seeing what the Nvidia GPUs could do, hence AMD's sticking-plaster solution. If it had really been planned well in advance, AMD would have had a much more Nvidia-like solution, with separate RT hardware for better performance and AI cores for a DLSS equivalent.
Not that I think Nvidia did it just for the gamers. Their pro cards have been doing (non-real-time) RT for years, so they'd probably already considered adding specialist hardware to speed that up. Then they had their AI cores for servers, and they worked out those were really good at touching up images, which is key for RT since you have to de-noise the image every frame. Put them together and you suddenly end up with something capable of real-time RT. Finally, AMD GPUs were rubbish, so Nvidia could afford to use a cheaper process, blow a ton of silicon on next-gen features that weren't used yet, and still end up with something better than AMD.
 
  • Like
Reactions: Leeea

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
That Crytek demo is always way overstated. It was never going to work in a real game and the RT quality was very low; it was very much a proof of concept for cards 5+ years out. RT appeared because Nvidia pushed it. I think it's only in the consoles at all because the console makers demanded something after seeing what the Nvidia GPUs could do, hence AMD's sticking-plaster solution. If it had really been planned well in advance, AMD would have had a much more Nvidia-like solution, with separate RT hardware for better performance and AI cores for a DLSS equivalent.
Not that I think Nvidia did it just for the gamers. Their pro cards have been doing (non-real-time) RT for years, so they'd probably already considered adding specialist hardware to speed that up. Then they had their AI cores for servers, and they worked out those were really good at touching up images, which is key for RT since you have to de-noise the image every frame. Put them together and you suddenly end up with something capable of real-time RT. Finally, AMD GPUs were rubbish, so Nvidia could afford to use a cheaper process, blow a ton of silicon on next-gen features that weren't used yet, and still end up with something better than AMD.

RT went to the consoles not because NVIDIA released Turing but because Microsoft, NVIDIA and AMD were designing DXR together.
Microsoft announced DXR for Windows 10 on March 19, 2018, the same day NVIDIA announced RTX.
Microsoft's DXR Tier 1.1 API was co-developed with AMD for the Xbox and Windows.
AMD RDNA2 has dedicated RT cores for ray tracing, and the design was not changed at the end to incorporate RT because NVIDIA released Turing in 2018. It was designed with RT from the start; 2018 was too late into RDNA2's development to change anything at that point.
As for AI, AMD had its own approach with Rapid Packed Math, which could do 2x FP16 per shader back in 2017 with Vega. RPM is still present in the RDNA architecture and can do more than FP16.
Also note that both AMD and NVIDIA were using ray tracing a long time ago in professional applications; NVIDIA was just the first to release a GPU with dedicated RT hardware.
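As a quick sanity check on what RPM buys, the peak-throughput arithmetic is simple (the shader count and clock below are approximate Vega 64 figures, used only for illustration):

```python
# Peak-throughput arithmetic: an FMA counts as 2 FLOPs per lane per clock, and
# Rapid Packed Math packs two FP16 operations into each 32-bit lane.
shaders   = 4096    # Vega 64 stream processors
clock_ghz = 1.5     # roughly its boost clock

fp32_tflops = shaders * 2 * clock_ghz / 1000
fp16_tflops = fp32_tflops * 2    # doubled via Rapid Packed Math
print(f"FP32: ~{fp32_tflops:.1f} TFLOPS, FP16 with RPM: ~{fp16_tflops:.1f} TFLOPS")
```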

 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136

Same story: the new patch didn't help with the CPU overhead; even the RTX 3060 is affected at 1080p with the Ryzen 1400.

[chart: CP77-Ultra-v1-2-1400.png]


And with RT .

[chart: CP77-Ultra-RTX-Ultra-v1-2-1400.png]
 

FiendishMind

Member
Aug 9, 2013
60
14
81
RT went to the consoles not because NVIDIA released Turing but because Microsoft, NVIDIA and AMD were designing DXR together.
Microsoft announced DXR for Windows 10 on March 19, 2018, the same day NVIDIA announced RTX.
Microsoft's DXR Tier 1.1 API was co-developed with AMD for the Xbox and Windows.
AMD RDNA2 has dedicated RT cores for ray tracing, and the design was not changed at the end to incorporate RT because NVIDIA released Turing in 2018. It was designed with RT from the start; 2018 was too late into RDNA2's development to change anything at that point.
RT hardware didn't go into the consoles because of the Turing launch, but it might have gone into them because of the Turing RTRT plans. Given how long it likely takes for an API like DXR to be cooked up, the other parties involved would have been in on Nvidia's RTRT plans for Turing a good deal of time before Turing or DXR actually launched. AMD's RT hardware patent filing landing just months before DXR launched makes me doubt that RT hardware was planned all along.
 
  • Like
Reactions: Tlh97 and Leeea

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Patents will be filed as late as possible in order to avoid tipping one's hand to a competitor and to maximize the life of the patent. It's not like either company could just decide to add in ray tracing on a whim and it's likely something that both have been working on for a considerable time.

Even now it's a bit ahead of its time. Few games support it, many that do aren't making the best use of it, and the performance definitely isn't there outside of cards at the extreme high-end. If Nvidia didn't have DLSS to go along with it, I question if they would have released it at all because it would necessitate running the games at lower resolutions just to hit acceptable frame rates.
 
  • Like
Reactions: Tlh97 and Leeea

FiendishMind

Member
Aug 9, 2013
60
14
81
Patents will be filed as late as possible in order to avoid tipping one's hand to a competitor and to maximize the life of the patent. It's not like either company could just decide to add in ray tracing on a whim and it's likely something that both have been working on for a considerable time.

Even now it's a bit ahead of its time. Few games support it, many that do aren't making the best use of it, and the performance definitely isn't there outside of cards at the extreme high-end. If Nvidia didn't have DLSS to go along with it, I question if they would have released it at all because it would necessitate running the games at lower resolutions just to hit acceptable frame rates.
I'd think that if AMD was sitting on the patent for a significant amount of time before the development of DXR, Inline Ray Tracing would have been a part of DXR 1.0 instead of joining over a year later as a part of DXR 1.1.
 
  • Like
Reactions: Leeea

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
Isn't ZEN3 in the CPU performance lead lately? It might be annoying for Nvidia to use AMD cpus in reference testing, but more FPS is more FPS? Heck AMD had used Intel CPUs when showcasing their GPUs during Bulldozer era, no big deal.
With CPU core count increasing there is actually more horsepower to extract from CPUs. Nvidia was already doing this multithreaded processing during DX11 era.



Too much drama. I like both DLSS and RT, and I support NV for pushing technologies forward instead of the safe choice of more and more classic rasterization performance.

My point was that no matter what they bench with, Intel sells the most x86 CPUs, so Ampere is going to be paired with Intel a lot. Maybe even in the vast majority of cases once we count prebuilt PCs (basically the only way to buy RTX cards right now)... And you have to make sure you are performant on the most widely used brand, right? It would “come out” if you could use value CPUs with the competition and still get solid benchmark results.

In other words, if you have a GPU that needs a high-performance CPU to perform, and the main supplier of CPUs (Intel) is busy counting their money and buying back stock for years and years instead of pushing the performance envelope, wouldn't you see that dependence as a risk? One that might severely limit the upside of your highest-priced and highest-margin SKUs?

As Mopetar says, and I am taking some liberties here, if Ampere is really a server/AI part masquerading as a GPU for the masses as well, it's likely that the hardware scheduler (or whatever is limiting performance with lower-power CPUs) is a non-factor for those loads, or something that makes it a non-issue for their highest-margin install base. Wasted silicon.

That's the part that is relevant to the main issue this thread was created to discuss.

Secondarily, it is my opinion that there is a ton of silicon in the vast majority of Ampere cards sold to gamers like us that is unlikely to see heavy service in their relevant lifetimes. Given that RT effects are likely going to target the biggest install base (consoles), the computer hardware in a generation or two should be able to handle them exceptionally well without scaling, IMO. We will just have to see how it plays out. And hopefully, with crypto GPU mining behind us, prices should be affordable if Nvidia and AMD want to move new cards in volume, because the earth should be littered with RDNA 1 cards and low-priced 3060s and cards like them, going cheap as they get liquidated. Still true even if it is just a GPU refresh for the miners (worst-case scenario IMO).

To me it plays out differently if MS updates the Series X in 2-3 years with 2-4x the GPU/RT horsepower. I don't know what the uptake rate will be on that (sign me up, though), but that would naturally play to a sliding scale of RT effects in more titles (to support the Xbox Series S through SX through SXXX) sooner rather than later, given the relative parity of the SX and PS5 now.

DLSS and RT drama aside, it's really not that big of a deal, because gamer adoption is relatively low; so many Ampere GPUs sold so far are being used for compute. I would love to know the ratio of gaming to mining, and if an Ampere owner has a card, how many they have. I'd bet decent money most of them are in the hands of a vast minority of the owning population. Different topic, but I typed it out so I am leaving it for later ;). Imagine the install base if it was 1 card sold to 1 gamer. What a paradise it would be! 😂
 
Last edited:
  • Like
Reactions: Tlh97 and Leeea

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Same story: the new patch didn't help with the CPU overhead; even the RTX 3060 is affected at 1080p with the Ryzen 1400.

To be fair, the Ryzen 1400 is only extracting about half the performance out of AMD's high-end GPUs as well.
And the 3090 needs a 10900K to show its full potential. With a CPU as recent as the 9700K you would be missing 25% of the performance at 1080p.
This game seems to chew CPU even without driver overhead accounted for; one has to wonder how they plan to run it at 30 FPS on the toaster-class CPUs in the previous-generation consoles.
 
  • Like
Reactions: Leeea

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
Cyberpunk is an unoptimized mess; I had parts where I was bottlenecked by the 3090 even at 1080p. At 4K with maxed graphics you can't play it without DLSS (~10-25 FPS). So yeah, that was my experience with the game. So I switched back to Doom Eternal, a.k.a. the technical marvel: the game gets repetitive, but it's a pleasure to deal with, especially on high-refresh, HDR-enabled monitors (e.g. the LG 27GN950-B). Kudos to id. I wish more games got to use their technology.

Edit: the screen below was one such spot pre-1.20 patch. The 3090 was totally demolished by the game. Lowest FPS I've seen.
 

Attachments

  • 3A7ABD5D-4030-4B9B-8FF4-39358B860AAD.jpeg (in-game screenshot)
Last edited:
  • Like
Reactions: Tlh97

amenx

Diamond Member
Dec 17, 2004
3,899
2,117
136
Cyberpunk is an unoptimized mess; I had parts where I was bottlenecked by the 3090 even at 1080p. At 4K with maxed graphics you can't play it without DLSS (~10-25 FPS). So yeah, that was my experience with the game. So I switched back to Doom Eternal, a.k.a. the technical marvel: the game gets repetitive, but it's a pleasure to deal with, especially on high-refresh, HDR-enabled monitors (e.g. the LG 27GN950-B). Kudos to id. I wish more games got to use their technology.

Edit: the screen below was one such spot pre-1.20 patch. The 3090 was totally demolished by the game. Lowest FPS I've seen.
If you max everything out without understanding the visual vs. performance impact of each individual setting, you will likely get a sub-par experience.