Question 4080 Reviews


Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
It's 60% bigger for 71% more SM and 50% more cache and bus width. Seems about right, but finding games that really benefit from a 4090 is the challenge.

Yeah, that's my point. The performance difference between the two is smaller than one would expect given the die size differences.

But it's quite possible that, over time, that gap may grow with newer, more complex games.
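For anyone who wants to sanity-check those ratios, here's the back-of-the-envelope math using the commonly cited full-die specs (treat these as approximate spec-sheet values, not exact figures):

```python
# Rough sanity check of the AD102 vs AD103 ratios quoted above.
# Figures are approximate public spec-sheet values for the full dies.
ad102 = {"die_mm2": 608.5, "sm": 144, "l2_mb": 96, "bus_bits": 384}
ad103 = {"die_mm2": 378.6, "sm": 84, "l2_mb": 64, "bus_bits": 256}

for key in ("die_mm2", "sm", "l2_mb", "bus_bits"):
    ratio = ad102[key] / ad103[key]
    print(f"{key:9}: AD102 is {ratio:.2f}x AD103 (+{(ratio - 1) * 100:.0f}%)")

# Roughly: die +61%, SMs +71%, L2 +50%, bus width +50%
```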
 
  • Like
Reactions: Tlh97 and Leeea

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
I love it when people compare sports cars to PC parts. Let's move along and discuss something of more substance, like the actual costs involved with buying a private island. Most people think you just buy it and you're done. Not so! Often roads and infrastructure have to be built, not to mention airstrips and landing pads. The below video really explains these costs in more detail. A person buying a private island spends more on gas just to get there one time than a 4090 costs, so I'd really appreciate it if you fools would stop whining about the cost of your stupid little video game toys.

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I think nVidia may have priced the 4080 in part to goad AMD into pricing higher. Now that they haven't, perhaps you will see a further cut-down AD102 (4080 Ti) in a few months, and they will cut the price of the 4080 16 GB down to $999.

As long as AMD's RT performance is subpar, Nvidia will never lower their prices, because they know gamers will pay through the nose for RT, DLSS, DLAA, FG, etcetera. RT performance, machine learning and AI are of the utmost importance to GPU performance moving forward, and AMD is behind by at least two or maybe even three generations in that respect.

Intel actually has a better shot at pressuring Nvidia to come back to reality, because their GPUs have comparatively strong RT, ML and AI capabilities for such a new product. If Intel keeps it up, Battlemage and Celestial will be true threats, especially if they regain node supremacy.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,735
2,711
146
I cannot stand the Founders cards, but I would consider an ASUS or MSI 4080 if it were cheaper than the 7900 series and came with 8-pin connectors. I don't really care about RT or AI, just straight performance.
 
  • Like
Reactions: Tlh97 and ZGR

maddie

Diamond Member
Jul 18, 2010
4,881
4,951
136
As long as AMD's RT performance is subpar, Nvidia will never lower their prices, because they know gamers will pay through the nose for RT, DLSS, DLAA, FG, etcetera. RT performance, machine learning and AI are of the utmost importance to GPU performance moving forward, and AMD is behind by at least two or maybe even three generations in that respect.

Intel actually has a better shot at pressuring Nvidia to come back to reality, because their GPUs have comparatively strong RT, ML and AI capabilities for such a new product. If Intel keeps it up, Battlemage and Celestial will be true threats, especially if they regain node supremacy.
RT started with the 2xxx series and AMD is possibly 3 generations behind? Are you OK? Seriously.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
RT started with the 2xxx series and AMD is possibly 3 generations behind? Are you OK? Seriously.

It doesn't matter how early it started; what matters is the amount of R&D and resources that Nvidia puts into RT, machine learning and AI compared to AMD every GPU cycle. When the RTX 2080 Ti came out, there was a lot of pushback from gamers that Nvidia was wasting precious die space by devoting it to RT, machine learning and AI rather than pure rasterization performance. That was until they saw the massive performance boost that could result from using that technology.

Since then, Nvidia has been going full steam ahead, devoting plenty of die space and transistors to increasing the performance of those key features. You can tell by how quickly RT performance has increased in just the two cycles since Turing launched four years ago, as well as how quickly DLSS became more and more viable, to the point where it can exceed native-resolution IQ. AMD, on the other hand, is twiddling its thumbs, not realizing the massive gap between it and Nvidia in those performance metrics.

The 7900 XTX's raw RT performance looks like it's going to be equivalent to a 3080, and we don't yet know how effective FSR 3.0 will be. DLSS and FSR are force multipliers that complement RT and make high-FPS gaming with it enabled feasible, so they're just as important as the raw RT performance itself.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,881
4,951
136
It doesn't matter how early it started; what matters is the amount of R&D and resources that Nvidia puts into RT, machine learning and AI compared to AMD every GPU cycle. When the RTX 2080 Ti came out, there was a lot of pushback from gamers that Nvidia was wasting precious die space by devoting it to RT, machine learning and AI rather than pure rasterization performance. That was until they saw the massive performance boost that could result from using that technology.

Since then, Nvidia has been going full steam ahead, devoting plenty of die space and transistors to increasing the performance of those key features. You can tell by how quickly RT performance has increased in just the two cycles since Turing launched four years ago, as well as how quickly DLSS became more and more viable, to the point where it can exceed native-resolution IQ. AMD, on the other hand, is twiddling its thumbs, not realizing the massive gap between it and Nvidia in those performance metrics.

The 7900 XTX's raw RT performance looks like it's going to be equivalent to a 3080, and we don't yet know how effective FSR 3.0 will be. DLSS and FSR are force multipliers that complement RT and make high-FPS gaming with it enabled feasible, so they're just as important as the raw RT performance itself.
You're provoking arguments by making false, hyperbolic claims. My opinion. Nvidia has three generations of RT and AMD is supposedly three behind. Does that mean AMD's RT does not exist? All your latest posts are inflammatory. My opinion.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You're provoking arguments by making false, hyperbolic claims. My opinion. Nvidia has three generations of RT and AMD is supposedly three behind. Does that mean AMD's RT does not exist? All your latest posts are inflammatory. My opinion.

Having an opposing opinion only provokes arguments if the other person is too emotionally invested in a particular product or company. At the end of the day, it's just my opinion and I don't claim it to be factual by any means. I just see what I see.

In the Computerbase.de RTX 4080 review, I looked at the Doom Eternal RT benchmarks. Doom Eternal has one of the fastest 3D engines out there, and its RT implementation is very performance-oriented (even the consoles can use RT effects in this game) and only does reflections. The RTX 4090 can achieve 200 FPS in that title at 4K, maxed settings, RT enabled, without DLSS, while the 6900 XT barely cracks the 60 FPS barrier. So at their current rate, AMD won't catch up for at least 2-3 cycles in just that one title.

Seeing stuff like that is what makes me say these "inflammatory" things.
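To put a very rough number on that "2-3 cycles" guess (purely illustrative; the per-generation uplift figures below are assumptions, not measured data):

```python
import math

# Purely illustrative: how many generational jumps it would take to close a
# ~200 vs ~60 FPS gap. The per-gen uplift values are assumptions, and this
# ignores that the target keeps moving each generation.
gap = 200 / 60  # ~3.33x, from the Doom Eternal numbers above

for per_gen_uplift in (1.5, 1.8, 2.0):
    gens = math.log(gap) / math.log(per_gen_uplift)
    print(f"{per_gen_uplift:.1f}x RT uplift per gen -> ~{gens:.1f} generations to close a {gap:.1f}x gap")
```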
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Having an opposing opinion only provokes arguments if the other person is too emotionally invested in a particular product or company. At the end of the day, it's just my opinion and I don't claim it to be factual by any means. I just see what I see.

In the Computerbase.de RTX 4080 review, I looked at the Doom Eternal RT benchmarks. Doom Eternal has one of the fastest 3D engines out there, and its RT implementation is very performance-oriented (even the consoles can use RT effects in this game) and only does reflections. The RTX 4090 can achieve 200 FPS in that title at 4K, maxed settings, RT enabled, without DLSS, while the 6900 XT barely cracks the 60 FPS barrier. So at their current rate, AMD won't catch up for at least 2-3 cycles in just that one title.

Seeing stuff like that is what makes me say these "inflammatory" things.

Why are you comparing a brand-new, top-of-the-line card to a previous-generation enthusiast-level card? The 6950 XT competed with the 3080 Ti, not the 3090, much less the 4090.

The 3080 Ti is faster in RT than the 69x0 cards, but it's typically by around 10-25 FPS at 4K.

Bunch of ray tracing benchmarks here: https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/34.html

AMD is for sure one generation back, where the 7000 series will probably be a bit faster than the 3000 series. But in no way are they "3 gens behind," as that would mean they don't have RT at all, since nVidia is just now releasing its 3rd gen.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Why are you comparing a brand-new, top-of-the-line card to a previous-generation enthusiast-level card? The 6950 XT competed with the 3080 Ti, not the 3090, much less the 4090.

I was just using it as an example of how far behind they are. I know that the 6900 XT doesn't compete with the RTX 4090, but to me that kind of performance gap is indicative of just how much AMD will have to invest if they ever hope to be on par with Nvidia. I mean, they are more than 200% behind, which is unheard of as a performance gap when comparing the highest-end models from one generation to the next.

AMD is for sure one generation back, where the 7000 series will probably be a bit faster than the 3000 series. But in no way are they "3 gens behind," as that would mean they don't have RT at all, since nVidia is just now releasing its 3rd gen.

I'm also factoring FSR vs. DLSS into that statement. It's not enough just to provide a sizeable increase in raw RT performance. The upscaling is just as important, and last I checked, FSR was well behind DLSS in IQ.

It's an unknown, to be sure, and FSR 3.0 could be much better than 2.1 in image quality, but as of right now, DLSS presents a much better image. And yes, I've looked at comparisons, and DLSS is notably sharper and clearer. I'm not sure a non-machine-learning, non-AI-based upscaling tech like FSR could ever conceivably be better than something like DLSS, which uses ML and AI.

Lastly, there was a big jump in performance between the RTX 3000 series and the RTX 4000 series because Nvidia is now on one of the best process nodes you can get. So AMD no longer has the node advantage it had against the RTX 3000 series.
 

alcoholbob

Diamond Member
May 24, 2005
6,311
357
126
Even people buying supercars can be picky and want some "value".

The people buying the 4090 Ti while having a 4090 are likely doing it because they are bored, imo. If any enthusiast bought a 4090 at this point, I'd say it's a fair investment, is all. I am prepared for future disappointment; I think in the near term (2-3 years) cards might get more efficient, but Nvidia certainly built a huge and brutishly fast chip in the 4090.

I bought a Vette this fall, and while I still hate myself for it on some level, it's opened my eyes to how relatively cheap PCs are when compared to many people's habits here. My list of Day 1 things to "fix" after buying it cost more than a 4090.

Maybe the best selling point they will concoct for the 4090 Ti will be no more 12VHPWR connector on the AIB boards…
 
  • Like
Reactions: Tlh97 and Leeea

psolord

Platinum Member
Sep 16, 2009
2,093
1,234
136
I love it when people compare sports cars to PC parts. Let's move along and discuss something of more substance, like the actual costs involved with buying a private island. Most people think you just buy it and you're done. Not so! Often roads and infrastructure have to be built, not to mention airstrips and landing pads. The below video really explains these costs in more detail. A person buying a private island spends more on gas just to get there one time than a 4090 costs, so I'd really appreciate it if you fools would stop whining about the cost of your stupid little video game toys.


Without reading, just by seeing the video image, I thought this was a Unigine Tropics run on the 4080 and thought, isn't that too light for the 4080? xD
 

Aapje

Golden Member
Mar 21, 2022
1,508
2,060
106
I was just using it as an example of how far behind they are. I know that the 6900 XT doesn't compete with the RTX 4090, but to me that kind of performance gap is indicative of just how much AMD will have to invest if they ever hope to be on par with Nvidia. I mean, they are more than 200% behind, which is unheard of as a performance gap when comparing the highest-end models from one generation to the next.

But if the 7900 XTX is about on par with the 3000 cards in RT, then it is just a fact that they are one generation behind.

Of course, you can argue that AMD cannot make the same jump as Nvidia for the next gen, but this assumes that they can't just add a ton of Ray Accelerator cores if they want to. After all, this is how Nvidia made this big jump, so it is quite the assumption that AMD can't do this.

It's an unknown, to be sure, and FSR 3.0 could be much better than 2.1 in image quality, but as of right now, DLSS presents a much better image. And yes, I've looked at comparisons, and DLSS is notably sharper and clearer.

This critique tells me that you don't know what you are talking about, since FSR 3 is probably a frame interpolation technique like DLSS 3, and thus not just an improved version of FSR 2 (just as DLSS 3 is not just an improved DLSS 2). Also, since sharpness depends in large part on the aggressiveness of the sharpening filter, sharpness alone doesn't actually tell you that much about the quality of the temporal upscaler, at least not without context. Most people seem to agree that, by default, FSR 2 actually tends to be a little sharper. And here someone shows that FSR 2 is in fact sharper in Cyberpunk.

But ultimately it depends a lot on the game's default choices for sharpening, and/or on your settings if you can select the sharpening yourself.
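To illustrate why sharpness on its own says little about the upscaler: these pipelines apply a separate sharpening pass after upscaling, and its strength is just a knob. A minimal sketch of that idea (a plain unsharp mask, nothing vendor-specific):

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Crude box blur, just enough to split off the high-frequency detail."""
    acc = np.zeros_like(img, dtype=np.float32)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def sharpen(upscaled: np.ndarray, strength: float) -> np.ndarray:
    """Unsharp mask: add scaled high-frequency detail back on top.
    The upscaler's output is identical either way; 'strength' alone decides
    how 'sharp' the screenshot ends up looking."""
    detail = upscaled - box_blur(upscaled)
    return np.clip(upscaled + strength * detail, 0.0, 1.0)

frame = np.random.rand(64, 64).astype(np.float32)  # stand-in for an upscaled frame
mild = sharpen(frame, 0.2)
aggressive = sharpen(frame, 0.8)
```

So comparing "sharpness" between FSR and DLSS screenshots mostly compares those default strengths, unless both are pinned to the same value.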

FSR 2 is indeed a bit behind, but it's so close that some people prefer FSR, because they prefer the artifacts that it generates over those generated by DLSS. I don't see it as a major selling point anymore (which is probably why Nvidia introduced frame interpolation in the first place).
 
Last edited:
  • Like
Reactions: Tlh97 and Elfear

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
But if the 7900 XTX is about on par with the 3000 cards in RT, then it is just a fact that they are one generation behind.

That's assuming FSR 3.0 is going to be as effective as DLSS. Upscaling tech amplifies the effects of RT acceleration and makes it far more viable.

Of course, you can argue that AMD cannot make the same jump as Nvidia, but this assumes that they can't just add a ton of Ray Accelerator cores if they want to. After all, this is how Nvidia made this big jump, so it is quite the assumption that AMD can't do this.

And you claim I don't know what I'm talking about. The RT accelerator cores that AMD employs aren't like the ones that Nvidia uses. Nvidia uses dedicated RT cores that are a separate hardware block, while the ones that AMD uses are integrated into the GPU compute units themselves.

This critique tells me that you don't know what you are talking about, since FSR 3 is probably a frame interpolation technique like DLSS 3, and thus not just an improved version of FSR 2 (just as DLSS 3 is not just an improved DLSS 2). Also, since sharpness depends in large part on the aggressiveness of the sharpening filter, sharpness alone doesn't actually tell you that much about the quality of the temporal upscaler, at least not without context. Most people seem to agree that, by default, FSR 2 actually tends to be a little sharper. And here someone shows that FSR 2 is in fact sharper in Cyberpunk.

Well, stuff like this tends to be subjective, generally speaking, but when I looked at the 4K comparison image that TPU uploaded, the first thing I noticed was how much better the vegetation looked in the DLSS image. The FSR 2.1 image looks like it has Vaseline smeared on the vegetation.

The DLSS image also had better anti-aliasing. You can see this in the buildings at the top right.

TPU comparison image.

Then their comparison video showed more temporal instability:


But ultimately it depends a lot on the game's default choices for sharpening, and/or on your settings if you can select the sharpening yourself.

They claim to have turned the game's sharpening setting down to 0, and you can still see a big difference, imo.

FSR 2 is indeed a bit behind, but it's so close that some people prefer FSR, because they prefer the artifacts that it generates over those generated by DLSS. I don't see it as a major selling point anymore (which is probably why Nvidia introduced frame interpolation in the first place).

As I said before, this sort of thing has a strong subjective component to it, but in the video you can clearly see more temporal instability artifacts being generated by FSR 2.1, and the screenshot shows more aliasing and reduced clarity. It does improve over the standard TAA solution, however, for the most part.

At any rate, I understand FSR is a work in progress and that DLSS had a large head start on it. My concern, however, is that since they don't use ML and AI to improve it, how is it ever going to be on par with DLSS, or even XeSS for that matter?

For being the new kid on the block, XeSS looks more impressive to me, especially since it's being run on an Intel GPU, and guess what: XeSS uses ML and dedicated AI hardware, just like Nvidia does.
 
Last edited:
  • Like
Reactions: igor_kavinski

maddie

Diamond Member
Jul 18, 2010
4,881
4,951
136
Having an opposing opinion only provokes arguments if the other person is too emotionally invested in a particular product or company. At the end of the day, it's just my opinion and I don't claim it to be factual by any means. I just see what I see.

In the Computerbase.de RTX 4080 review, I looked at the Doom Eternal RT benchmarks. Doom Eternal has one of the fastest 3D engines out there, and its RT implementation is very performance-oriented (even the consoles can use RT effects in this game) and only does reflections. The RTX 4090 can achieve 200 FPS in that title at 4K, maxed settings, RT enabled, without DLSS, while the 6900 XT barely cracks the 60 FPS barrier. So at their current rate, AMD won't catch up for at least 2-3 cycles in just that one title.

Seeing stuff like that is what makes me say these "inflammatory" things.
Your opinion is respected. I just hate lies in a discussion. Simple.
 
  • Like
Reactions: Tlh97

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Well, stuff like this tends to be subjective, generally speaking, but when I looked at the 4K comparison image that TPU uploaded, the first thing I noticed was how much better the vegetation looked in the DLSS image. The FSR 2.1 image looks like it has Vaseline smeared on the vegetation.

That was an old test with FSR 2.1. FSR 2.2 is the current version.

Here is a test of DLSS 2.4 vs FSR 2.2 (both just released).

In short, DLSS has less shimmering during movement. FSR has better anti-aliasing, as DLSS has a pixelated look. Both have ghosting, but it's more noticeable with FSR. Game performance was very close on both. Considering the head start nVidia had, AMD has made up a lot of ground. But really, anybody using DLSS or FSR doesn't care about visual quality; they only care about increased frame rates.

 

Aapje

Golden Member
Mar 21, 2022
1,508
2,060
106
That's assuming FSR 3.0 is going to be as effective as DLSS. Upscaling tech amplifies the effects of RT acceleration and makes it far more viable.

Again, FSR 3 is most likely not going to be an upscaling technology, just as DLSS 3 isn't. You keep conflating temporal upscaling with frame interpolation, even though they are completely different technologies.
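For anyone following along, here is the structural difference in sketch form (deliberately crude stand-ins, not any vendor's actual pipeline; the blend weights and the naive 2x upscale are placeholders):

```python
import numpy as np

def temporal_upscale(low_res_frame: np.ndarray, history: np.ndarray) -> np.ndarray:
    """FSR 2 / DLSS 2 style: every displayed frame is reconstructed from a freshly
    rendered low-res frame plus accumulated history (motion vectors omitted here).
    Each output frame is backed by new input, so latency behaves like normal rendering."""
    upsampled = np.kron(low_res_frame, np.ones((2, 2)))  # naive 2x upscale stand-in
    return 0.9 * history + 0.1 * upsampled               # temporal accumulation stand-in

def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """DLSS 3 style frame generation (and presumably what FSR 3 targets): a whole
    extra frame is synthesized between two already-rendered ones. The FPS counter
    goes up, but the generated frame carries no new player input, so latency does
    not improve."""
    return 0.5 * (prev_frame + next_frame)                # crude midpoint stand-in

# Tiny usage example:
low = np.random.rand(360, 640).astype(np.float32)   # low-res render
hist = np.zeros((720, 1280), dtype=np.float32)      # previous high-res output
out = temporal_upscale(low, hist)
```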

You argued that DLSS 3 will improve the image quality compared to DLSS 2, even though it has already been released and we know that this isn't the case. DLSS 3 produces way more artifacts than both DLSS 2 and FSR 2, yet you don't comment on that at all, even though you critique the far smaller differences between DLSS 2 and FSR 2. Also, input lag will be worse than simply running at half the FPS, where those frames are actually based on new input.

You continuously exaggerate the benefits of Nvidia features and downplay AMD's features...

Nvidia uses dedicated RT cores that are a separate hardware block, while the ones that AMD uses are integrated into the GPU compute units themselves.

But AMD can still add extra ray accelerators to the CUs.

Well, stuff like this tends to be subjective, generally speaking, but when I looked at the 4K comparison image that TPU uploaded, the first thing I noticed was how much better the vegetation looked in the DLSS image. The FSR 2.1 image looks like it has Vaseline smeared on the vegetation.

I see a little bit more sharpness in the grass rendered by FSR, actually. However, FSR doesn't render very thin blades of grass, which is a known issue with rendering very thin lines.

but in the video you can clearly see more temporal instability artifacts being generated by FSR 2.1, and the screenshot shows more aliasing and reduced clarity.

I don't agree on the clarity, but the other two are a bit worse on FSR 2. However, I dispute that this is a big difference. Again, you don't call out the far bigger artifacting with DLSS 3, which suggests to me that you are rather biased.

My concern, however, is that since they don't use ML and AI to improve it, how is it ever going to be on par with DLSS, or even XeSS for that matter?

ML is not necessarily better than manual coding. There is nothing that ML can do that a coder couldn't do in theory, although it may be harder to implement or even infeasible due to how much effort it takes. However, there are advantages to hand-coding as well, like tunability.

XeSS looks more impressive to me, especially since it's being run on an Intel GPU

XeSS actually runs way worse on non-Intel GPUs. On non-Intel hardware, XeSS is way worse than FSR 2.
 

Aapje

Golden Member
Mar 21, 2022
1,508
2,060
106
But really, anybody using DLSS or FSR doesn't care about visual quality; they only care about increased frame rates.

Of course they do. The entire point of the technology is to get better visual quality at a certain FPS.

However, I think that DLSS 2 and FSR 2 are now so close that it doesn't make much sense to have a strong preference.
 

blckgrffn

Diamond Member
May 1, 2003
9,294
3,436
136
www.teamjuchems.com
Of course they do. The entire point of the technology is to get better visual quality at a certain FPS.

However, I think that DLSS 2 and FSR 2 are now so close that it doesn't make much sense to have a strong preference.

Ok, I'll rephrase.

They clearly prioritize a target frame rate at a target resolution over visuals. So they want native-resolution output, they want an FPS floor, and they are willing to check a box that introduces shimmering, artifacts and whatnot to make that happen.

I'll add to that - I use FSR on my Steam Deck without question. When it is in my lap and I set the FPS target to 30, because I am of course casually gaming at that point, I also want as much battery life as possible, and FSR for sure unloads the GPU to the max extent. So in this case, FSR can optimize for power as well - but at the cost of some visual fidelity I don't care about, because it's so hard to discern on a tiny screen so far from my face.

FSR is going to get pushed hard by all the consoles, so I'd expect it's here to stay. DLSS is a pinky-up, PCMR-only, vendor-locked feature. Consoles are locked-config mini-PCs expected to work for, what, a decade? That's where these features are sorely needed.

The 4080 should be safely past needing either technology at 1440p and lower resolutions for a long time, unless you are maybe aiming for 200+ FPS but can't be bothered to turn down other options.
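For the FPS-floor and battery angle, the render-resolution math does most of the heavy lifting. Rough numbers using the commonly published per-axis scale factors (an approximation; real GPU savings won't track pixel count exactly):

```python
# Rough pixel math for common upscaler quality modes at a 1440p target.
# Per-axis scale factors are the commonly published ones, so treat this
# as an approximation rather than measured behavior.
target_w, target_h = 2560, 1440
modes = {"Quality": 0.67, "Balanced": 0.59, "Performance": 0.50, "Ultra Performance": 0.33}

for name, scale in modes.items():
    w, h = int(target_w * scale), int(target_h * scale)
    share = (w * h) / (target_w * target_h)
    print(f"{name:17}: renders {w}x{h} (~{share:.0%} of native pixels)")
```

On something like the Deck example above, that pixel cut is presumably where most of the extra battery life comes from.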
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,337
5,456
136
Ok, I'll rephrase.

They clearly prioritize a target frame rate at a target resolution over visuals. So they want native-resolution output, they want an FPS floor, and they are willing to check a box that introduces shimmering, artifacts and whatnot to make that happen.

In many examples I have seen, DLSS reduces shimmering compared to native.