
nVidia 3080 reviews thread


insertcarehere

Senior member
Jan 17, 2013
354
193
116
RTX2080ti is also a cut down TU102.
Cutting down 4 SMs out of 72 is pretty different from cutting down 16 SMs out of 84.

Edit: That big a gap between the 3080 and 3090 (82 vs 68 SMs) also leaves the door wide open for further Super/Ti models in between, which is just as well, because 10GB for 4K is not great.
 

AtenRa

Lifer
Feb 2, 2009
13,548
2,522
126
Which chip they are using is irrelevant. The 3090 is already cut down from the full die about as much as the 1080 Ti was.

There will almost certainly be a 3080 Ti several months in the future that has ~3090 performance. Then you can make your 2080 Ti vs 3080 Ti comparison. You will get a preview with the 3090 review.
Really??? So if the next-gen RTX 4070 also uses the xx102 chip, will you still believe the 4xxx series had the biggest uplift in performance?
 

AtenRa

Lifer
Feb 2, 2009
13,548
2,522
126
That's not logical. So when the 3080 Ti does come out, what will it compare to? And you know they can release a 3080 Ti, because the 3090 shows there's a bigger Ampere die. Who cares about the code name? Would it have made you happier if they had just etched GA104 on the lid? It's priced like a traditional 80 card and there's an even bigger card above it. So no, this is not the Ampere Ti.
I don't believe there is a 3080 Ti coming; perhaps we could see a 3090 with 12GB of RAM.
 

tviceman

Diamond Member
Mar 25, 2008
6,727
502
126
I think 80CUs @ 2+Ghz with the memory bandwidth to match should be in the 3080 ballpark unless RDNA has the same scaling issues as GCN, which it was specifically designed to fix.
It seems like with every single AMD release there is an unrealistic hype-train build-up. I think my estimate of a 75-80% performance increase over the 5700 XT is already fairly generous.
That happens when chips are entirely power limited. Also Samsung's 8nm is behind TSMC 7nm.
Yes, I feel as if Nvidia could port Ampere over to TSMC 7nm a year from now and get an immediate 10-15% clock improvement along with a 20-25% power reduction. But neither company has done a respin of an entire lineup since before 2010.
 

aleader

Senior member
Oct 28, 2013
234
54
101
I don't care about any of this "what die is it using" talk; all I care about is performance per dollar at 1440p in actual reviews. I go by Hardware Unboxed as my best source, and 1440p performance per dollar just above the 5700/XT, at $4.09 per frame, looks pretty good to me. Here's hoping the 3070 is even better. Either will be a GIANT upgrade over the 1060 3GB currently in my system.
 

CakeMonster

Senior member
Nov 22, 2012
973
73
91
From what I'm seeing in the GN review so far, the 0.1% minimum frame rates drop with their OC, which is worrying. The other numbers are good, but I'd never run an OC that does that.
 

Guru

Senior member
May 5, 2017
821
343
106
It's not normal. Pascal was an outlier, among the best releases ever, and this is close to matching Pascal, putting them both near the pinnacle of generational increases. The last time anything approached this level of gains before Pascal was the mighty 8800 GTX.

Also, it's not a 30% generational upgrade. The generational comparison is NOT 3080 vs 2080 Ti.

The generational comparison would be 3080 vs 2080, and those gains are around 70% at 4K: a very Pascal-type gain, well beyond the typical release.
Generation over generation it's the 2080 Ti vs the 3080. Just because Nvidia can sell the 2080 Ti as high-end at $1,200 doesn't mean it's so; anyone with half a brain knows that the 80 Ti tier has historically been priced at $600 to $700 and the 80 tier at $400 to $600.

So comparing generationally it's the RTX 2080 Ti vs the RTX 3080, because they use the same class of chip, the 102 die, and they sit at the same historical price point.

Pascal wasn't an outlier; we had the GTX 480 through 680 generations, which were all based on the same architecture and mostly on the same die size, so we had smaller performance gains generation over generation. Even so, we got about a 20% performance uplift at the top end.

Again, it's a 30% performance uplift generation over generation at 4K, and about a 20% uplift at 1440p.

And again, the Turing series was overpriced, with each card priced a tier beyond where it belonged. That is why what would usually have been a $200 to $250 RTX 2060 sold for $350, and that is why AMD was able to wipe the floor with Nvidia in the mid-range in terms of performance, value, performance per watt, etc.

Even today the RX 5700 XT is really a competitor to the RTX 2070 Super rather than the same-priced 2060 Super, because Nvidia priced their cards a tier above what they were! So the RTX 2060 is a mid-tier $250 card and the RTX 2080 Ti is a high-end $700-tier card!
 

DJinPrime

Member
Sep 9, 2020
83
88
51
I was merely pointing out the massive price difference between the models which signals different product tiers to begin with. But if you want to argue they belong in the same bracket as they are tied by inflation, you better be ready to argue that wages have doubled since 2017.
I guess I misunderstood you; I thought you were highlighting the much higher prices for the various 80 Ti cards as a complaint about pricing. Lol, no, that's why I said the inflation % is BS and that's how employers justify handing out 2-3% raises (if you even get a raise).
 

Saylick

Senior member
Sep 10, 2012
902
628
136
We need to look at the improvement in perf/$ to see how the value proposition changes between generations; this approach should be fair since it takes die size out of the equation.

Here's what Techspot got for $/frame @ 1440p and 4K:

[Techspot cost-per-frame charts, 1440p and 4K]
And here's what ComputerBase tabulated for the perf gains and cost differences between generations. I've taken the liberty of converting the launch prices to USD instead of Euros. I will definitely add that MSRPs do NOT reflect market or street prices; Moore's Law is Dead mentioned in his last video that Nvidia is intentionally taking a profit-margin hit on their launch MSRP just to make these types of comparisons super favorable.

Non-Ti Models | Launch Price (USD) | Perf Gain vs. Predecessor @ 2160p (4K)
GTX 780 | $649 | Baseline
GTX 980 | $549 (-15%) | +26%
GTX 1080 | $599 MSRP (+9%), $699 FE (+27%) | +64%
RTX 2080 | $699 MSRP (+17%), $799 FE (+14%) | +40%
RTX 3080 | $699 MSRP (+0%), FE (-14%) | +65%

Ti Models | Launch Price (USD) | Perf Gain vs. Predecessor @ 2160p (4K)
GTX 780 Ti | $699 | Baseline
GTX 980 Ti | $649 (-7%) | +50%
GTX 1080 Ti | $699 (+8%) | +75%
RTX 2080 Ti | $999 MSRP (+43%), $1199 FE (+72%) | +35%
RTX 3080 Ti / 3090 | $1499 MSRP (+50%), FE (+25%) | +50%(?)
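The price and performance deltas above can be folded into a single perf-per-dollar change; here's a quick sanity check of a few non-Ti rows (illustrative arithmetic only, using the MSRP figures as listed):

```python
# Perf/$ change = (1 + perf_gain) / (1 + price_change) - 1,
# using the MSRP deltas from the table above (illustrative only).
def perf_per_dollar_change(perf_gain, price_change):
    return (1 + perf_gain) / (1 + price_change) - 1

# GTX 980: -15% price, +26% perf vs GTX 780
print(round(perf_per_dollar_change(0.26, -0.15), 3))  # ~ +48% perf/$
# RTX 2080: +17% price, +40% perf vs GTX 1080
print(round(perf_per_dollar_change(0.40, 0.17), 3))   # ~ +20% perf/$
# RTX 3080: +0% price, +65% perf vs RTX 2080
print(round(perf_per_dollar_change(0.65, 0.0), 3))    # +65% perf/$
```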
 

DJinPrime

Member
Sep 9, 2020
83
88
51
I found their 4K average-FPS chart on page 30 really interesting. I'm surprised they included the lower-end cards, and if you look at the 4GB cards from both sides, they compare pretty well against similar 6GB and 8GB cards. At 4K you would think 4GB (lol, even a 1060 3GB is in the test) would fall off dramatically, but it doesn't. Yes, they're GPU limited, but you would still think the overhead of VRAM flushes would have a much bigger impact. Even moving up in power, the 2060 6GB performs exactly the same as the 1080 8GB. So how much VRAM is really being used in games? Maybe there's tons of stuttering? Someone needs to really dig into this. VRAM size is starting to feel like PSU wattage: why are we paying so much for things we MIGHT not need?
 

AtenRa

Lifer
Feb 2, 2009
13,548
2,522
126
And here's what ComputerBase tabulated for the perf gains and cost differences between generations. I've taken the liberty of converting the launch prices to USD instead of Euros. I will definitely add that MSRPs do NOT reflect market or street prices; Moore's Law is Dead mentioned in his last video that Nvidia is intentionally taking a profit-margin hit on their launch MSRP just to make these types of comparisons super favorable.

Non-Ti Models | Launch Price (USD) | Perf Gain vs. Predecessor @ 2160p (4K)
GTX 780 | $649 | Baseline
GTX 980 | $549 (-15%) | +26%
GTX 1080 | $599 MSRP (+9%), $699 FE (+27%) | +64%
RTX 2080 | $699 MSRP (+17%), $799 FE (+14%) | +40%
RTX 3080 | $699 MSRP (+0%), FE (-14%) | +65%

Ti Models | Launch Price (USD) | Perf Gain vs. Predecessor @ 2160p (4K)
GTX 780 Ti | $699 | Baseline
GTX 980 Ti | $649 (-7%) | +50%
GTX 1080 Ti | $699 (+8%) | +75%
RTX 2080 Ti | $999 MSRP (+43%), $1199 FE (+72%) | +35%
RTX 3080 Ti / 3090 | $1499 MSRP (+50%), FE (+25%) | +50%(?)
I will agree on the Ti models but not the non-Ti ones.
The 980 was not a 780 replacement, and the same holds true for the 3080 over the 2080.

It's a marketing world. If NVIDIA had named the cards RTX 3080 Ti and RTX TITAN, the 3080 and 3090 would not be seen the same way in people's eyes as they are in today's reviews.
Take any review graph, change the names, and see what happens ;)
 

Stuka87

Diamond Member
Dec 10, 2010
5,233
998
126
Which chip they are using is irrelevant. The 3090 is already cut down from the full die about as much as the 1080 Ti was.

There will almost certainly be a 3080 Ti several months in the future that has ~3090 performance. Then you can make your 2080 Ti vs 3080 Ti comparison. You will get a preview with the 3090 review.
I am almost willing to bet money that there will never be a 3080 Ti. There will most likely be a 3080 Super.

But it is not irrelevant as to what chip they are using. The fact is nVidia jacked the price WAY up on the 2080 Ti knowing people would be dumb enough to buy them. The 3080 *IS* the 2080 Ti replacement. The 3090 *IS* the Titan replacement.

The naming change will make people totally ignore the fact that the 3080 is only marginally faster than the 2080 Ti, though it is a lot faster than the 2080. Had they named the 3080 a 3080 Ti (as it should be), people would be angry at its small performance increase.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,249
1,839
136
Yes, I feel as if Nvidia could port Ampere over to TSMC 7nm a year from now and get an immediate 10-15% clock improvement along with a 20- 25% power reduction.
That's very optimistic. I estimate either a 10% gain at maximum performance or 15-20% improvement in perf/watt at optimal levels.
 

samboy

Member
Aug 17, 2002
195
38
91
An impressive card, but I'm holding off to see what the AMD team offers:

1. Impressive cooling system, but the card uses up to around 350 watts
- I have a 750-watt power supply and this card would use nearly 50% of its output!
- The case will be more challenging to cool and CPU performance will go down

2. Wait for >12GB VRAM
- Possibly for peace of mind... but I'm anticipating that this next generation will quickly move beyond 10GB (especially if AMD releases a 16GB card) and software developers will start to target that for their high-end rendering settings.

I'm resigned to waiting until early 2021; both AMD and NVidia will have played most of their "cards" by then, and there will be no need to guess at the best option.

Buying now (if you can get one) means you have a really good card a few months early, but there is the risk of buyer's remorse a few months later (even if NVidia simply ups the VRAM to 16GB for the 3080 at the same price).
 

guidryp

Senior member
Apr 3, 2006
629
544
136
But it is not irrelevant as to what chip they are using. The fact is nVidia jacked the price WAY up on the 2080 Ti knowing people would be dumb enough to buy them. The 3080 *IS* the 2080 Ti replacement. The 3090 *IS* the Titan replacement.
Yes, the 3090 *IS* the Titan replacement, and mirroring the Pascal release, you don't get a 3080 Ti of similar performance right away. You get it months later, when yields improve enough to deliver enough dies with more functional units enabled to support a more reasonable pricing structure.

But the regular 3080 *is NOT* an x80 Ti card, because it does NOT have Titan (3090) performance. The usual job of x80 Ti cards is Titan performance at a much lower price.

Go back and check: the 1080 Ti arrived with Titan performance many months after the regular 1080 and the Pascal Titan. This wasn't just Pascal either; the exact same pattern happened with Maxwell and Kepler. Turing was the anomaly. IMO Ampere returns to the old pattern: a 3080 Ti with 3090 (Titan) performance is some months away, once yields improve.

For now, people who can't wait for the 3080 Ti to deliver that extra step in performance, and who have money to burn, can get this generation's Titan (the 3090). Months from now you will get that same performance at much better pricing, though likely with only 12GB of VRAM.
 

Saylick

Senior member
Sep 10, 2012
902
628
136
I am almost willing to bet money that there will never be a 3080 Ti. There will most likely be a 3080 Super.

But it is not irrelevant as to what chip they are using. The fact is nVidia jacked the price WAY up on the 2080 Ti knowing people would be dumb enough to buy them. The 3080 *IS* the 2080 Ti replacement. The 3090 *IS* the Titan replacement.

The naming change will make people totally ignore the fact that the 3080 is only marginally faster than the 2080 Ti, though it is a lot faster than the 2080. Had they named the 3080 a 3080 Ti (as it should be), people would be angry at its small performance increase.
I personally would argue that we should see an 80-class card beat the last generation's 80 Ti card by a decent margin (~30%) at the $700 price point. If you look at things historically, that's been the case (e.g. 60-series cards beating the last-gen 80-series cards at a lower MSRP than what the previous 80-series card launched at). What's changed recently is that the RTX 2080 offered pretty much no perf/$ improvement in rasterization workloads, and it took Nvidia cutting down their big die to make the 3080 viable, because a GA104-based RTX 3080 wouldn't have hit the +30% improvement at the $700 price point.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
And did you not read my post? For every 100 FP instructions, there are only 36 Int instructions. That's a performance impact of 5-7%.
Looking just at compute:

The RTX 3080 needs to run its INTs on cores which are also full-fledged FP32 cores. Per your claim, out of 136 instructions, 100 of them are FP, or about 73%.

Therefore, in a typical workload, you would expect an Ampere design with the same theoretical teraflops as a Turing design to underperform by about 27%, because on Turing the INT workload does not impact FP throughput. I have no idea where your 5-7% comes from.

This is still a good trade-off for nVidia because this design allows for the huge uptick in theoretical FLOPs via increased FP32 shader count which means that an Ampere design with the same number of FP shaders as a Turing design would be a much lower end part.
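The ~27% figure is just the share of issue slots that INT work occupies; a minimal sketch of that arithmetic (assuming the 100:36 FP:INT mix quoted above):

```python
# On Turing, INT executes on separate pipes, so FP throughput is unaffected.
# On Ampere, INT shares the FP32 pipes, so with a 100:36 FP:INT mix only
# 100/136 of the issue slots carry FP work.
fp, integer = 100, 36
fp_fraction = fp / (fp + integer)   # ~0.735
underperformance = 1 - fp_fraction  # ~0.265, i.e. roughly 27%
print(f"{underperformance:.1%}")    # 26.5%
```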
 

IntelUser2000

Elite Member
Oct 14, 2003
7,249
1,839
136
Therefore, in a typical workload, you would expect an Ampere design with the same theoretical teraflops as a Turing design to underperform by about 27%, because on Turing the INT workload does not impact FP throughput. I have no idea where your 5-7% comes from.
This is because a balanced architecture will stress compute, texture throughput, and memory bandwidth roughly equally. Compute has gone up a lot, but not the rest.
 

Timorous

Senior member
Oct 27, 2008
508
454
136
It seems like with every single AMD release there is an unrealistic hype-train build-up. I think my estimate of a 75-80% performance increase over the 5700 XT is already fairly generous.
Big Navi is looking to have at least double the transistors of N10.

Fury was 2x the performance of the 280X with just over 2x the transistors.

5700XT is 1.72x RX590 with 1.8x the transistors.

AMD seems to get close to linear scaling with transistor count, so I really do not think +100% over the 5700 XT is that far-fetched.
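Those ratios are easy to check; a quick sketch using the multipliers quoted above (rounded figures, illustrative only):

```python
# Scaling efficiency = performance multiplier / transistor multiplier.
# A value near 1.0 means close to linear scaling with transistor count.
cases = {
    "Fury vs 280X":    (2.00, 2.0),  # ~2x perf on just over 2x transistors
    "5700XT vs RX590": (1.72, 1.8),
}
for name, (perf, transistors) in cases.items():
    print(f"{name}: {perf / transistors:.2f}")
```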
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
AMD seems to get close to linear scaling with transistor count so I really do not think 100% of 5700XT is that far fetched.
The difficulty for AMD is that they usually get a new process node or increase TDP to gain performance, but I don't think they can double the 5700 XT's 225-watt TDP and call it a day. That should impact their linear scaling: they will need to spend transistors or reduce clock speed to make sure performance per watt improves over RDNA 1.
 

Timorous

Senior member
Oct 27, 2008
508
454
136
The difficulty for AMD is that usually they get a new process node or increase TDP to get some performance, but I don't think that they can double the 5700XT TDP of 225 watts and call it a day. That should impact their linear scaling. They will need to spend transistors or reduce clock speed to make sure that the performance per watt ratio improves over RDNA 1.
AMD have said that they are targeting a 50% perf/watt increase over RDNA.

Edit to add: that would put 2x the 5700 XT at around 300W, assuming RDNA 2's perf/watt scales the way the 5500 XT to 5700 XT did.
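That 300W estimate is just the quoted targets plugged into the perf/watt definition; a minimal sketch, assuming the 5700 XT's 225W as the baseline:

```python
# Power needed = baseline power * performance multiplier / perf-per-watt multiplier.
def power_for(perf_mult, ppw_mult, base_watts=225):
    return base_watts * perf_mult / ppw_mult

# 2x the 5700 XT's performance at AMD's claimed +50% perf/watt:
print(power_for(2.0, 1.5))  # 300.0 (watts)
```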
 

insertcarehere

Senior member
Jan 17, 2013
354
193
116
Big Navi is looking to have at least double the transistors of N10.

Fury was 2x the performance of the 280X with just over 2x the transistors.

5700XT is 1.72x RX590 with 1.8x the transistors.

AMD seems to get close to linear scaling with transistor count, so I really do not think +100% over the 5700 XT is that far-fetched.
The RX 590 -> 5700 XT uplift was aided by a ~25% clock increase from the 12nm-to-7nm jump (1.6 to 2.0 GHz). I highly doubt that gain is replicable going from RDNA 1 to RDNA 2 on the same process without blowing up the power budget.
 

guidryp

Senior member
Apr 3, 2006
629
544
136
AMD have said that they are targeting 50% perf/watt increase over RDNA.
NVidia claims to have achieved a 90% perf/watt increase over Turing, and even showed a graph of how they did it.

But these kinds of self-marketing statements are usually close to meaningless, because they are made by taking the new generation's more powerful GPU and limiting its performance to that of the old model, so it runs at MUCH lower clocks, in a part of the voltage/frequency curve where power drops way off. Crank the clocks back up to get the performance you actually bought the part for, and those gains are mostly gone.

It remains to be seen what AMD did, but it seems it would have been safer to go a little more than double the size, so they could be more relaxed on clocks.
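The down-clocking trick is easy to see with a toy model; assuming dynamic power scales roughly as f·V² with voltage roughly proportional to frequency (so power ≈ k·f³) and performance proportional to clock, a 25% down-clock alone produces a headline-sized perf/watt gain:

```python
# Toy model: power ~ k * f^3 (f * V^2, with V roughly proportional to f).
# Performance is taken as proportional to clock for this sketch.
def power(freq, k=1.0):
    return k * freq ** 3

# Clock a hypothetical new GPU 25% lower to match last-gen performance:
full, limited = 1.0, 0.75
perf_per_watt_gain = (limited / power(limited)) / (full / power(full)) - 1
print(f"{perf_per_watt_gain:.0%}")  # 78% perf/watt "gain" from the down-clock alone
```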
 
