Nvidia 3090 reviews thread


BFG10K

Lifer
Aug 14, 2000
22,674
2,824
126
Written:


Video:

 

Grooveriding

Diamond Member
Dec 25, 2008
9,107
1,260
126
-Maybe yes, but then you launch a $700 3080 that is only slightly slower a month later and watch people's heads explode in anger.

NV was stuck between a rock and a hard place on this one.

True. Plus the 3080 is obviously the star of the show. The price/perf is stellar compared to last gen. You'd think they'd want it out first and getting most of the attention; it's going to sell a lot more than the 3090.

The 3080 is a real struggle to get. I was up last night when Nvidia's store got a batch, and by the time I checked out they were all gone. It's just not worth the frustration of even trying to buy one right now. Better to wait until stock isn't so thin and it doesn't take as much effort.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
The 3090 feels like it might be the first halo card that's almost embarrassing to own, like a gold-plated iPhone or other senseless purchase. It was marketed as a gaming card, it's branded as a gaming card, and yet for about 8% more performance it costs 100% more; that price-to-performance is truly abysmal. Add to that the fact that it's extremely power hungry, and I just can't fathom what Nvidia was thinking. If 3080 inventory was where it should have been, I don't think this card would even sell out.

It might actually provide reverse e-peen, where you simply look bad by owning one. Perhaps only the FX 5800 was a worse halo card launch in the 20 years I've been following this stuff.
 

CP5670

Diamond Member
Jun 24, 2004
5,508
586
126
The 5800 had poor performance as well, though, while the 3090 does at least perform better than anything else. It's a Titan with a different name: a machine learning and rendering card that can also play games.
 
  • Like
Reactions: Tlh97 and bigboxes

StinkyPinky

Diamond Member
Jul 6, 2002
6,761
777
126
The 3090 feels like it might be the first halo card that's almost embarrassing to own, like a gold-plated iPhone or other senseless purchase. It was marketed as a gaming card, it's branded as a gaming card, and yet for about 8% more performance it costs 100% more; that price-to-performance is truly abysmal. Add to that the fact that it's extremely power hungry, and I just can't fathom what Nvidia was thinking. If 3080 inventory was where it should have been, I don't think this card would even sell out.

It might actually provide reverse e-peen, where you simply look bad by owning one. Perhaps only the FX 5800 was a worse halo card launch in the 20 years I've been following this stuff.

The 20GB 3080 renders this card obsolete. I just don't get what Nvidia was thinking here.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,719
7,016
136
True. Plus the 3080 is obviously the star of the show. The price/perf is stellar compared to last gen. You'd think they'd want it out first and getting most of the attention; it's going to sell a lot more than the 3090.

The 3080 is a real struggle to get. I was up last night when Nvidia's store got a batch, and by the time I checked out they were all gone. It's just not worth the frustration of even trying to buy one right now. Better to wait until stock isn't so thin and it doesn't take as much effort.

- While I admittedly am not in the market for a $700 card, I am in the market for something. I figure I'll just put purchasing anything out of mind until February or March of next year. Lack of 3080 availability is driving prices up on everything, including the used market.
 
  • Like
Reactions: loki1944

Fritzo

Lifer
Jan 3, 2001
41,883
2,121
126
OK, I don't see myself getting an 8K monitor anytime in the next several years (Hell, I'm still on 1080p and perfectly happy). Am I correct that the 3070 would be the best buy of the bunch? It's $500 and outperforms the highest-end current card on the market?
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
OK, I don't see myself getting an 8K monitor anytime in the next several years (Hell, I'm still on 1080p and perfectly happy). Am I correct that the 3070 would be the best buy of the bunch? It's $500 and outperforms the highest-end current card on the market?

We won't know for sure until it launches, but it's doubtful the 3070 will outperform a 2080 Ti outside of maybe fully path-traced titles like Quake RTX, and even then it'll probably be really close. The 2080 Ti was just so overpriced for the performance increase you got from it (being the halo card) that the 3070 will still look good against it. Against the 2080 Super, though, it's a lot less impressive. For what will most likely be roughly the same performance (maybe a little faster than the 2080S), you spend $500 instead of $700, which is obviously better perf/$, but not really an impressive jump for a brand-new generation card on a more advanced process. Not a bad choice by any means, but not really a great increase in value either.

The 3080 is really the best card, as you get near-halo performance (within ~10% of the 3090) at less than half the cost. Compared to the 3070, it will probably be a roughly linear increase in performance and price, which usually doesn't happen, so you are paying the same perf/$ but getting much higher performance. That is, if $700 is doable in your budget.
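
To put rough numbers on it, here's a quick sketch of the perf/$ math. The performance figures are my own ballpark assumptions (2080 Super as the 1.00 baseline), not benchmark results:

```python
# Ballpark perf/$ comparison. Relative performance numbers are assumptions
# for illustration (2080 Super = 1.00), not measured benchmarks.
cards = {
    "2080 Super": (700, 1.00),   # last-gen baseline
    "3070":       (500, 1.05),   # assumed: ~2080S performance, maybe a bit more
    "3080":       (700, 1.45),   # assumed: within ~10% of the 3090
    "3090":       (1500, 1.60),  # the halo card
}

for name, (price, perf) in cards.items():
    value = perf / price * 1000  # relative performance per $1000 spent
    print(f"{name:>10}: ${price:>4} | {perf:.2f}x | {value:.2f} perf per $1k")
```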
 
  • Like
Reactions: Tlh97 and swilli89

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
But the 3080 uses crazy amounts of power, and really needs a system carefully built around it to cope. That's quite a specific game to get into.

The 3070 (and below) will hopefully be rather more manageable.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
OK, I don't see myself getting an 8K monitor anytime in the next several years (Hell, I'm still on 1080p and perfectly happy). Am I correct that the 3070 would be the best buy of the bunch? It's $500 and outperforms the highest-end current card on the market?

That remains to be seen, but I have a feeling it will not, at least not in every situation.
For 1080p I believe the 3060 will be the best value for money.
 
  • Like
Reactions: Tlh97 and Elfear

Fritzo

Lifer
Jan 3, 2001
41,883
2,121
126
That remains to be seen, but I have a feeling it will not, at least not in every situation.
For 1080p I believe the 3060 will be the best value for money.
I ask because I keep seeing comparisons like this:

 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
Yes, if it offered 30% more gaming performance then maybe $1,500 could be justified. But then again, whether the price is right or not depends on whether people want to buy it. Gamers should pretend the 3090 doesn't exist.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
I ask because I keep seeing comparisons like this:


I don't find it likely that they have real 3070 benchmark numbers; we don't even have a release date for the 3070 yet. Most likely they just took 3080 numbers and adjusted them to what they think the 3070 would do. Maybe they really did get a 3070 and were allowed to publish results before anyone else, but I find that extremely doubtful.

Edit: If you look at their 3080 to 3070 comparison numbers you see this:

[attached image: their 3080 vs 3070 scaling comparison table]

They get the exact same scaling (to 0.1% accuracy) across all resolutions in their game benchmarks. They're just making up numbers to match how they think it will perform.
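
This kind of thing is easy to sanity-check: compute the 3080-to-3070 ratio for every game and resolution and look at the spread. A minimal sketch with invented FPS numbers (not the site's actual data):

```python
# Hypothetical FPS tables, invented for illustration only.
fps_3080 = {
    "Game A": {1080: 144.0, 1440: 110.0, 2160: 62.0},
    "Game B": {1080: 120.0, 1440: 95.0, 2160: 55.0},
}
# Fabricated results often look like one fixed multiplier applied everywhere:
fps_3070 = {g: {r: v / 1.303 for r, v in res.items()}
            for g, res in fps_3080.items()}

ratios = [fps_3080[g][r] / fps_3070[g][r]
          for g in fps_3080 for r in fps_3080[g]]
spread = max(ratios) - min(ratios)
print("scaling ratios:", [round(x, 3) for x in ratios])

# Real cards scale differently per game and resolution (CPU limits, VRAM,
# memory bandwidth), so a near-zero spread suggests extrapolated numbers.
if spread < 0.005:
    print("Suspicious: identical 3080 -> 3070 scaling in every test.")
```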
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
I ask because I keep seeing comparisons like this:


Pretty sure that is just made up nonsense.
 

blckgrffn

Diamond Member
May 1, 2003
9,111
3,029
136
www.teamjuchems.com
Is it just me or has Nvidia kinda botched this launch? The 3070 only having 8GB in a late 2020/early 2021 card, a powerhouse 3080 that's limited in the future by VRAM, and now a 3090 that draws so much power it may give a boost to the power industry.

I, for one, would be way more impressed had they been able to deliver the performance increases but held the line on power usage.

(Didn't we all hope for this with a node shrink?)

Having lived with a 300+W video card for quite a while, it's OK, but not something I am eager to do again. And I bought that card *knowing* that I was saving $$$ upfront by getting solid performance, but at the cost of higher power usage than the competition was offering.
 
  • Like
Reactions: Tlh97 and Elfear

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Yup. Performance fine, power draw rather higher than they'd have liked.

They can cope with that on these huge cards; it'll matter much more further down the stack, and it's crucial to have it much more under control for mobile.

The day-one supply chain stuff etc. is just a curse of success. There's very little they could sensibly have done.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I, for one, would be way more impressed had they been able to deliver the performance increases but held the line on power usage.

(Didn't we all hope for this with a node shrink?)

Having lived with a 300+W video card for quite a while, it's OK, but not something I am eager to do again. And I bought that card *knowing* that I was saving $$$ upfront by getting solid performance, but at the cost of higher power usage than the competition was offering.
Well, this is what we get with a revised Samsung 10nm process.

Keep in mind, lads, that Samsung was ramping its 10nm process in 2017, and this "8nm" process is more or less identical (within 10% on density and power), which is in line with a three-year-old process simply maturing.

Ampere would have been mind-blowing on TSMC 7nm. The 3080 would have been a 450mm² die using less than 250 watts, with higher frequencies to boot.
 
  • Like
Reactions: Mopetar

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Well, this is what we get with a revised Samsung 10nm process.

Keep in mind, lads, that Samsung was ramping its 10nm process in 2017, and this "8nm" process is more or less identical (within 10% on density and power), which is in line with a three-year-old process simply maturing.

Ampere would have been mind-blowing on TSMC 7nm. The 3080 would have been a 450mm² die using less than 250 watts, with higher frequencies to boot.

30% higher density + 30% lower power from SS 8nm to TSMC 7nm is not possible.

Actually, Ampere GA102 has higher density on SS 8nm than RDNA Navi 10 on TSMC 7nm.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
30% higher density + 30% lower power from SS 8nm to TSMC 7nm is not possible.

Actually, Ampere GA102 has higher density on SS 8nm than RDNA Navi 10 on TSMC 7nm.
I don't think you want to dive into this. Let's not compare Ampere (NVIDIA) to RDNA 1 (AMD), shall we?

Why would you not start with Ampere "SS 8nm" versus Ampere on TSMC 7nm?

Why would you willingly ignore A100 (Nvidia versus Nvidia, similar architecture on different processes) and reach for a comparison to RDNA 1?

NVIDIA achieved a density of 66 Mtr/mm² with the TSMC 7nm process.

A full quarter+ later, with the same chip architecture, it achieved 45 Mtr/mm² using Samsung 10nm/8nm.

So it's actually greater than what I originally said. It's nearly 50% more dense!!!

So I think you need to revisit your comment that 30% is not possible.

Keep in mind that A100, with almost double the transistors, only uses 400 watts absolute maximum, while GA102 boosts up to 370 watts.

So once again, with numbers to back it up, GA102 on TSMC 7nm would be an absolute stunner.
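
Here's the arithmetic behind those figures, for anyone who wants to check it (die sizes and transistor counts are the commonly published ones; the power comparison is crude and ignores HBM vs GDDR6X and clock differences):

```python
# Density and power-per-transistor math for A100 (TSMC 7nm) vs GA102 (Samsung 8nm).
a100_transistors_b, a100_area_mm2, a100_watts = 54.2, 826, 400
ga102_transistors_b, ga102_area_mm2, ga102_watts = 28.3, 628, 370

a100_density = a100_transistors_b * 1000 / a100_area_mm2     # ~65.6 Mtr/mm^2
ga102_density = ga102_transistors_b * 1000 / ga102_area_mm2  # ~45.1 Mtr/mm^2

print(f"A100:  {a100_density:.1f} Mtr/mm^2, {a100_watts / a100_transistors_b:.1f} W per Btr")
print(f"GA102: {ga102_density:.1f} Mtr/mm^2, {ga102_watts / ga102_transistors_b:.1f} W per Btr")
print(f"A100 is {a100_density / ga102_density - 1:.1%} denser")
```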
 
  • Like
Reactions: Mopetar and Tlh97

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
I don't think you want to dive into this. Let's not compare Ampere (NVIDIA) to RDNA 1 (AMD), shall we?

Why would you not start with Ampere "SS 8nm" versus Ampere on TSMC 7nm?

Why would you willingly ignore A100 (Nvidia versus Nvidia, similar architecture on different processes) and reach for a comparison to RDNA 1?

NVIDIA achieved a density of 66 Mtr/mm² with the TSMC 7nm process.

A full quarter+ later, with the same chip architecture, it achieved 45 Mtr/mm² using Samsung 10nm/8nm.

So it's actually greater than what I originally said. It's nearly 50% more dense!!!

So I think you need to revisit your comment that 30% is not possible.

Keep in mind that A100, with almost double the transistors, only uses 400 watts absolute maximum, while GA102 boosts up to 370 watts.

So once again, with numbers to back it up, GA102 on TSMC 7nm would be an absolute stunner.

1. A100 uses the HD library of TSMC's 7nm, which has a theoretical top density of 90 Mt/mm²
2. A100's boost clock is only 1400MHz at a 400W TDP
3. A100 uses HBM memory

For the above reasons we do not compare A100 to GA102, not to mention that although both chips use the Ampere architecture, there are fundamental changes to the architecture between the two.

IF NVIDIA were to port GA102 to TSMC's 7nm, they would not use the HD libraries but the HP library, which has a maximum theoretical density of 65 Mt/mm², the same process used in Navi 10.

So to sum up, comparing A100 to GA102 and extrapolating that porting GA102 to TSMC 7nm would both increase density and lower power by 30% will lead to incorrect conclusions.
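
To make that concrete, here's the same extrapolation redone with the HP-library ceiling (hypothetical GA102 port; Navi 10 is the real-world reference on that library):

```python
# Density extrapolation with the library caveat: a GA102 port would use the
# HP library (~65 Mt/mm^2 theoretical max), not A100's HD library (~90).
ga102_density_ss8 = 45.1             # Mt/mm^2, as built on Samsung 8nm
hp_theoretical_max = 65.0            # Mt/mm^2, HP library ceiling
navi10_density = 10.3 * 1000 / 251   # Navi 10: 10.3B transistors, 251 mm^2

print(f"Navi 10 achieved on TSMC 7nm HP: {navi10_density:.1f} Mt/mm^2")  # ~41
print(f"GA102 achieved on Samsung 8nm:   {ga102_density_ss8:.1f} Mt/mm^2")
# Even hitting the theoretical ceiling is only ~44% denser, and real designs
# like Navi 10 land well below it:
print(f"HP ceiling vs GA102 as built: +{hp_theoretical_max / ga102_density_ss8 - 1:.0%}")
```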
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
1. A100 uses the HD library of TSMC's 7nm, which has a theoretical top density of 90 Mt/mm²
2. A100's boost clock is only 1400MHz at a 400W TDP
3. A100 uses HBM memory

For the above reasons we do not compare A100 to GA102, not to mention that although both chips use the Ampere architecture, there are fundamental changes to the architecture between the two.

IF NVIDIA were to port GA102 to TSMC's 7nm, they would not use the HD libraries but the HP library, which has a maximum theoretical density of 65 Mt/mm², the same process used in Navi 10.

So to sum up, comparing A100 to GA102 and extrapolating that porting GA102 to TSMC 7nm would both increase density and lower power by 30% will lead to incorrect conclusions.
I'm curious to read about that. Do you have a link to where it says A100 uses a high-density library at TSMC?
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
I don't think you want to dive into this. Let's not compare Ampere (NVIDIA) to RDNA 1 (AMD), shall we?

Why would you not start with Ampere "SS 8nm" versus Ampere on TSMC 7nm?

Why would you willingly ignore A100 (Nvidia versus Nvidia, similar architecture on different processes) and reach for a comparison to RDNA 1?

NVIDIA achieved a density of 66 Mtr/mm² with the TSMC 7nm process.

A full quarter+ later, with the same chip architecture, it achieved 45 Mtr/mm² using Samsung 10nm/8nm.

So it's actually greater than what I originally said. It's nearly 50% more dense!!!

So I think you need to revisit your comment that 30% is not possible.

Keep in mind that A100, with almost double the transistors, only uses 400 watts absolute maximum, while GA102 boosts up to 370 watts.

So once again, with numbers to back it up, GA102 on TSMC 7nm would be an absolute stunner.
Everyone always seems to forget that A100 has 40 MB of L2 cache vs the 6 MB in GA102. That's going to inflate the transistor count of A100 as well.
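
Back-of-the-envelope on that, assuming standard 6T SRAM cells and counting only the cell arrays (tags, ECC and control logic would add more):

```python
# Rough transistor cost of the L2 difference, assuming 6T SRAM cells.
def sram_transistors(megabytes: float, transistors_per_bit: int = 6) -> float:
    bits = megabytes * 1024 * 1024 * 8
    return bits * transistors_per_bit

a100_l2 = sram_transistors(40)   # ~2.0 billion transistors
ga102_l2 = sram_transistors(6)   # ~0.3 billion transistors

print(f"A100 40 MB L2: {a100_l2 / 1e9:.2f}B transistors")
print(f"GA102 6 MB L2: {ga102_l2 / 1e9:.2f}B transistors")
print(f"Difference:    {(a100_l2 - ga102_l2) / 1e9:.2f}B "
      f"(~{(a100_l2 - ga102_l2) / 54.2e9:.0%} of A100's 54.2B total)")
```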