4080 Reviews


Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The last enthusiast GPU that was priced right was the 1080Ti, and that's what I'm running. I drew my line in the sand a long time ago when I saw the 2080Ti price and I just won't pay what these guys want for their stuff. I'll end up buying used or buying nothing at all and just get out of the PC hobby. I'll probably upgrade my kids PCs down the line, but that's it.

When the 1080 Ti launched, it was considered a money grab by Nvidia. It was the most expensive pure gaming card in history (excluding the Titan, since that was a compute card that could also game).
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
But it was only $50 more than the previous generation. That's normal price creep.
Going from $700 to $1200 is exceptional price creep.

Well, yeah. The 2080Ti was a gigantic price jump.

But the 780 Ti launched at $699 and the 980 Ti at $649, prices that were already considered high. The 1080 Ti definitely annoyed some people, though admittedly not nearly as much as the 2080 Ti.
 
  • Like
Reactions: Kaluan
Aug 16, 2021
134
96
61
Prices ARE getting saner. I'm also from Eastern Europe and just bought a 6800 XT for 620 EUR. Remove the VAT and convert to dollars, and you get the equivalent of about $530 in the US, which is in line with the better deals currently available there.
Not sure about that; I can buy the same card in the UK for less. BTW, the RX 6800 XT here starts at 664 EUR and tops out at 2,334 EUR, with most cards at ~900 EUR. VAT is 21%, so 900 / 1.21 ≈ 744 EUR ex VAT, which is about 769 USD. MSRP was supposed to be 649 USD. That's still a very high price for any GPU; it should be no more than 400 EUR. I think AMD needs a haircut.
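As a side note, a quick sanity check on that conversion (a minimal sketch; the ~1.034 USD/EUR rate is inferred from the figures in this thread and is an assumption):

```python
# Sanity check on the VAT-removal arithmetic above (illustrative only).
VAT_RATE = 0.21
USD_PER_EUR = 1.034  # assumed rate, inferred from this thread's own EUR/USD figures

def ex_vat(gross_eur: float) -> float:
    # VAT is charged on top of the net price, so removing it means
    # dividing by (1 + rate), not multiplying by (1 - rate).
    return gross_eur / (1 + VAT_RATE)

gross = 900.0
net_eur = ex_vat(gross)          # ~743.8 EUR
net_usd = net_eur * USD_PER_EUR  # ~769 USD

print(f"{gross:.0f} EUR incl. VAT -> {net_eur:.0f} EUR ex VAT -> about {net_usd:.0f} USD")
# The common shortcut 900 * 0.79 = 711 subtracts 21% of the *gross* price,
# which overstates the VAT amount and understates the true net price.
```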
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
I was at Best Buy yesterday getting a new CF card, and they had a boatload of 4080s in stock. Seems no one wants to touch them at this price, even to flip on eBay.

I signed up for Nvidia's Verified Priority Access program... mostly just to see what I'd get, and I've already received an offer to get a 4080.
 
Aug 16, 2021
134
96
61
You were showing us a 2,000+ EUR price for the 6800 XT as an example, and now you're casually mentioning a 664 EUR starting price?! Damn, I'm sorry I even made the effort.
I said that the average price is 900 EUR and that there's no reason why any version of the 6800 XT should cost over 2k EUR. It wasn't a one-off situation either, or limited to a specific model.
 

eek2121

Platinum Member
Aug 2, 2005
2,867
3,841
136
I just got done watching GN's review. Wow, what a poor value. $1,200 for something that isn't all that much faster than my 3090 FTW 3 Ultra.

EDIT: NVIDIA is in for a rude awakening IMO. The market is flooded with GPUs, and there is no incentive to buy the 4080 or 4090 at all.
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I just got done watching GN's review. Wow, what a poor value. $1,200 for something that isn't all that much faster than my 3090 FTW 3 Ultra.

EDIT: NVIDIA is in for a rude awakening IMO. The market is flooded with GPUs, and there is no incentive to buy the 4080 or 4090 at all.

They did it on purpose so you'd go out and buy a 3080 for $900 instead.
 

Mopetar

Diamond Member
Jan 31, 2011
7,784
5,879
136
EDIT: NVIDIA is in for a rude awakening IMO. The market is flooded with GPUs, and there is no incentive to buy the 4080 or 4090 at all.

There's still a reason to get a 4090. It's the best card on the market right now, and there is a niche segment that will always upgrade to the halo product. Some 4090 buyers are only going to keep theirs until they can get a 4090 Ti, even though that will be far less of a step up than moving from the previous generation's best card.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,152
136
I signed up for Nvidia's Verified Priority Access program... mostly just to see what I'd get, and I've already received an offer to get a 4080.
To my knowledge, those are reserved for you when you activate the offer; these were up for anyone to buy. I haven't signed up yet because I'd like to time it to the 4090s coming back in stock. I don't plan on using it, instead opting for a 7900 XT or XTX; the 4090 will go into something else, be resold, or gifted.

I expect 4090s to come back into stock around March, when the export ban takes effect on March 1, 2023; Nvidia is allowed to continue shipping existing orders until September 1, 2023. There was a rumor a while back that 4090 production was suspended to focus wafers on China exports before that March 1 date, and there are still 4090 dies in the wild that have yet to be put on cards.

Nvidia is controlling the flow of these cards to offload their 30 series onto consumers, who will no doubt buy up the scraps left. Nvidia gets to sit on and control what AIBs can make and sell, all while taking orders and then having months to fulfill them.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,152
136
There's still a reason to get a 4090. It's the best card on the market right now, and there is a niche segment that will always upgrade to the halo product. Some 4090 buyers are only going to keep theirs until they can get a 4090 Ti, even though that will be far less of a step up than moving from the previous generation's best card.
The 4090 is a cut-down AD102, so I wouldn't project the minimal 3090-to-3090 Ti gain onto the 40 series. I'm not saying to expect miracles, but Nvidia knocked it out of the park this generation; their pricing, though, has left a very bad taste in my mouth.
 

Aapje

Golden Member
Mar 21, 2022
1,256
1,684
96
I'm going to time stamp the blurry trees in the YouTube video.

You gave a link to a page with a comparison picture. I specifically said that I don't see any blurriness in that picture. So don't suddenly throw a video at me and pretend that's what you were talking about when you gave me a link where you said that you saw blurriness.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Completely wrong; both AMD and NVIDIA are using dedicated RT cores for ray tracing. The implementation is different, but the work they do is the same. The AMD RT cores are part of the Compute Unit (CU) and the NVIDIA RT cores are part of the Streaming Multiprocessors (SMs).

As far as I know this is incorrect. Nvidia RT cores are fixed function and can't do anything else but ray tracing calculations. AMD have RT accelerators that can share resources with the TMUs and CU and assist with other things when not being used for RT.


Also, to point out: AMD increased the RT core count by only 20% (from 80 to 96) from the 6950 XT to the 7900 XTX, but the performance uplift is up to 80%, thanks to a much larger performance increase from each RT core. Put simply, the per-core IPC increase in RDNA3 is 50% or more.

It won't really matter because they are still going to be far behind Nvidia in RT performance.
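For what it's worth, the per-core arithmetic in the quoted claim is internally consistent. A minimal sketch using the poster's own figures (they are claims, not measured benchmarks):

```python
# Back-of-envelope check of the quoted RDNA3 RT uplift claim
# (the poster's figures, not measured benchmarks).
cores_old, cores_new = 80, 96   # RT accelerators: 6950 XT -> 7900 XTX
total_uplift = 1.80             # the "up to 80%" overall RT uplift claimed

unit_scaling = cores_new / cores_old         # 1.2x from extra units alone
per_core_gain = total_uplift / unit_scaling  # what each unit must contribute

print(f"Unit count: +{unit_scaling - 1:.0%}, implied per-core gain: +{per_core_gain - 1:.0%}")
# -> Unit count: +20%, implied per-core gain: +50%, matching the "50% or more" claim.
```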
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You gave a link to a page with a comparison picture. I specifically said that I don't see any blurriness in that picture. So don't suddenly throw a video at me and pretend that's what you were talking about when you gave me a link where you said that you saw blurriness.

And you said that I said DLSS 3 will improve image quality compared to DLSS 2. Now we're even! :D

That said, the trees are clearly blurry as hell.
 

lopri

Elite Member
Jul 27, 2002
13,207
593
126
Fishing pole and reel community getting bent out of shape? I see what you did there.

PCs are in a bad place right now. Nvidia is hell-bent on keeping their profit margins and the performance crown (what else is new), but this time we are literally getting space-heater video cards. I don't want a PC pumping out 700+ watts when I'm gaming, even if it were dead silent.

I remember when cards were pushing 250 watts and thinking, damn, that is too high.
And in those days a 550W PSU was overkill for most systems. I still do a double-take whenever I read the power consumption section in reviews and shake my head in disbelief that the quoted numbers are "card only" or "CPU only", not "whole system."
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You don't have time to do that anyway. You're too busy doing it for your own purchases.

I don't need my purchases to be coddled or reinforced by other people though. I wanted the fastest hardware and that's what I bought. If AMD had faster hardware (or maybe slightly slower but significantly cheaper), I would have bought it because that's all I care about. And to be fair, I did wait until the RDNA 3 reveal before I bought my RTX 4090 to make sure I wouldn't get shafted.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Running that video at 2160p on a 2160p monitor really shows how crap YouTube is for quality comparisons. YouTube's image quality is just not good enough for this kind of comparison.

I've spent hours comparing DLSS vs. FSR vs. TAA vs. no AA in several games, and the winner is always no anti-aliasing when it comes to image quality. If I am forced to use TAA (yuck), then I have to use AMD FidelityFX (in ReShade) plus Nvidia's game filter (sharpness + clarity) to counter the Vaseline blur that TAA adds.

Have you tried DLAA? Supposedly that one is the best for image quality.

Both DLSS and FSR destroy image quality to the point where I feel like I am playing on lowered resolution. The best option is to lower settings a bit.

I've only tried DLSS in one game so far, and that was A Plague Tale: Requiem. The implementation was very good, and I couldn't detect any image quality problems. This was with DLSS set to Quality.
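As an aside on the sharpening workflow in the quote above: what those filters do, at bottom, is boost local contrast to counter TAA's blur. A minimal unsharp-mask sketch of that general idea (a generic stand-in, not AMD's actual CAS algorithm):

```python
# Minimal unsharp mask: the generic idea behind sharpening filters used to
# counter TAA blur (a stand-in, not AMD's actual CAS implementation).
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, radius: float = 1.0, amount: float = 0.5) -> np.ndarray:
    # Sharpen by adding back the difference between the image and a blurred copy.
    blurred = gaussian_filter(image, sigma=radius)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Example: sharpen a random grayscale frame with values in [0, 1].
frame = np.random.rand(1080, 1920).astype(np.float32)
sharpened = unsharp_mask(frame, radius=1.2, amount=0.6)
```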
 

AtenRa

Lifer
Feb 2, 2009
13,984
3,347
136
As far as I know this is incorrect. Nvidia RT cores are fixed function and can't do anything else but ray tracing calculations. AMD have RT accelerators that can share resources with the TMUs and CU and assist with other things when not being used for RT.




It won't really matter because they are still going to be far behind Nvidia in RT performance.

It seems you misunderstand what sharing resources means. As I said above, both AMD's and NVIDIA's RT cores are fixed-function units within a larger group of units that make up the AMD CU and the NVIDIA SM. For example, Texture Units (TMUs) are fixed-function hardware doing only texture calculations; the same is true for the RT cores, which only do RT calculations (BVH traversal etc.). What sharing resources means is that the RT cores can access and use resources that, for example, the texture units also need and use.

If you go back to the AMD CU picture I posted above, you will see that there is a shared memory inside the CU, so the RT cores, the TMUs and other units can all access it and share resources with each other. But that doesn't mean the RT cores or TMUs are not fixed-function hardware; it means the architecture lets the units within the CU share resources so they can perform faster.
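To illustrate the distinction being argued here, a toy model (purely illustrative; the unit names and numbers are invented and don't reflect real RDNA or Ada behavior): both units below are fixed-function in the sense that each only runs its own job type, yet they draw from one shared pool, so using one can slow the other without either being repurposed.

```python
# Toy model of fixed-function units sharing a resource pool (illustrative only;
# names and numbers are invented, not real GPU behavior).

class SharedMemory:
    def __init__(self, bandwidth: int):
        self.bandwidth = bandwidth  # arbitrary units available this cycle

    def request(self, amount: int) -> int:
        granted = min(amount, self.bandwidth)
        self.bandwidth -= granted
        return granted

class FixedFunctionUnit:
    """Only ever runs its one job type; 'fixed function' in that sense."""
    def __init__(self, name: str, job: str, mem: SharedMemory):
        self.name, self.job, self.mem = name, job, mem

    def run(self, job: str, need: int) -> str:
        if job != self.job:
            return f"{self.name}: idle (can't run {job})"
        got = self.mem.request(need)
        return f"{self.name}: ran {job} with {got}/{need} bandwidth"

mem = SharedMemory(bandwidth=100)
tmu = FixedFunctionUnit("TMU", "texture", mem)
rt = FixedFunctionUnit("RT core", "bvh_traversal", mem)

# Both are fixed function, but they contend for the same shared memory:
print(tmu.run("texture", 70))        # gets its full 70 units
print(rt.run("bvh_traversal", 70))   # only 30 left -> slowed, not repurposed
print(rt.run("texture", 10))         # fixed function: can't do texture work
```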

It won't really matter because they are still going to be far behind Nvidia in RT performance.

The RTX 4080 will only be on average 20% to 30% faster than the RX 7900 XTX in 4K RT (no DLSS/FSR). And before you bring up the 4090: I'm only comparing the RX 7900 XTX (Navi31) to the RTX 4080 (AD103) because of the similar die size (Navi31 GCD 300mm² vs. AD103 379mm²).
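To make that framing concrete, a rough perf-per-area sketch using only the figures in this post (the 25% is the midpoint of the claimed 20-30% lead, not a benchmark; note the Navi31 figure is the GCD alone, which flatters AMD since the MCDs are excluded):

```python
# Rough perf-per-area framing using the post's own numbers (not benchmarks).
die_navi31_gcd = 300  # mm^2, RX 7900 XTX graphics die as cited above (MCDs excluded)
die_ad103 = 379       # mm^2, RTX 4080 as cited above

rt_lead = 1.25        # midpoint of the claimed 20-30% 4K RT lead for the 4080

area_ratio = die_ad103 / die_navi31_gcd  # ~1.26x more silicon
perf_per_area = rt_lead / area_ratio     # ~0.99x

print(f"AD103 is {area_ratio - 1:.0%} larger; claimed RT lead {rt_lead - 1:.0%}; "
      f"perf/area ratio ~{perf_per_area:.2f}x")
# On these rough numbers, the 4080's RT lead roughly tracks its extra die area.
```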
 
  • Like
Reactions: Tlh97

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It seems you misunderstand what sharing resources means. As I said above, both AMD's and NVIDIA's RT cores are fixed-function units within a larger group of units that make up the AMD CU and the NVIDIA SM. For example, Texture Units (TMUs) are fixed-function hardware doing only texture calculations; the same is true for the RT cores, which only do RT calculations (BVH traversal etc.). What sharing resources means is that the RT cores can access and use resources that, for example, the texture units also need and use. If you go back to the AMD CU picture I posted above, you will see that there is a shared memory inside the CU, so the RT cores, the TMUs and other units can all access it and share resources with each other. But that doesn't mean the RT cores or TMUs are not fixed-function hardware; it means the architecture lets the units within the CU share resources so they can perform faster.

You keep repeating the same thing, which is at odds with what I've read on the subject. Fixed function means doing only one thing. Nvidia's RT cores are fixed function because they only do ray tracing calculations. AMD's RT cores are not, because they can process other things when not actively engaged in a ray tracing workload. That's actually a very efficient way of implementing RT (especially for the consoles), but it comes at the cost of raw performance.

From what I've read, Nvidia's RT cores remain idle when not doing ray tracing, while AMD's do not.

I'm willing to admit that I'm not a graphics engineer, so my understanding of this subject is limited, and I could very well be wrong.

The RTX 4080 will only be on average 20% to 30% faster than the RX 7900 XTX in 4K RT (no DLSS/FSR). And before you bring up the 4090: I'm only comparing the RX 7900 XTX (Navi31) to the RTX 4080 (AD103) because of the similar die size (Navi31 GCD 300mm² vs. AD103 379mm²).

It's going to vary by game. Games with heavy RT loads are going to show a massive gap between RDNA3 and Ada. Hardware Times actually wrote an analysis of the publicly released RDNA3 benchmarks:

AMD RX 7900 XTX Scores Just 21 FPS in Cyberpunk 2077 with Ray Tracing, NVIDIA RTX 4090 Over 2x Faster | Hardware Times