nVidia 3080 reviews thread


BFG10K

Lifer
Aug 14, 2000
22,672
2,817
126
Written:

Video:
Last edited:

DeathReborn

Platinum Member
Oct 11, 2005
2,743
734
136
On a side note, why are AnandTech reviews always late? Are they struggling financially?

Before Anand sold, it used to feel much more alive, responsive and meaningful.

Have we even had the much-promised GTX 960 review yet? I wouldn't hold my breath for AT to review any GPUs day 1... or week 1. It's not a new iPhone, so it's not going to generate enough clicks.
 

aleader

Senior member
Oct 28, 2013
502
150
116
Now that the reviews show total system usage of 523W under load for the 3080, does this mean my 550W EVGA B3 won't be able to handle it? Now, keep in mind my CPU is a 3600 with a B450 MSI Carbon board, not the power-sucker that they use in the review. I admit I don't fully understand the way power supplies work, but I've seen a lot of reviews where the 3950X draws 140+ W, compared to my 65W 3600.

Is it as simple as saying my total system draw with a 3080 would be closer to 445W? I think I would be just fine at that wattage (81% load). I really don't want to have to get a new PSU. Worst case scenario I throw in a 3080 and it just shuts down the PC (at which point I know I need a bigger PSU)?
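A back-of-envelope version of that estimate (a sketch only: the 523 W figure and the CPU package numbers are the review values quoted above, and it assumes nothing else in the system changes):

```python
# Rough wall-draw estimate after swapping the review's 3950X for a Ryzen 5 3600.
review_system_w = 523    # total system draw reported for the 3080 review rig
review_cpu_w = 140       # approximate 3950X package draw under load
my_cpu_w = 65            # Ryzen 5 3600 rated TDP (actual load draw may differ)
psu_rating_w = 550

estimate = review_system_w - (review_cpu_w - my_cpu_w)
print(f"Estimated draw: ~{estimate} W ({estimate / psu_rating_w:.0%} of a {psu_rating_w} W PSU)")
```

That lines up with the ~445 W / 81% figure, though 65 W is the 3600's TDP rather than a measured load draw, so the real margin is likely a bit thinner.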

EDIT: scratch that. I don't believe I have 2x8 pin connectors anyways, so I guess I would need a new one!
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Have we even had the much-promised GTX 960 review yet? I wouldn't hold my breath for AT to review any GPUs day 1... or week 1. It's not a new iPhone, so it's not going to generate enough clicks.

I'm not sure if it's the same person, but in another thread someone had posted that one of the AT editors who lives in Oregon was affected by the fires. I think that's a good reason for any delay if they were the person in question.

Even if it's a little late, does it matter? You can't buy a 3080 yet, and depending on what the supply looks like, it could be a good while before everyone who wants one can actually get one.

I also think that the people who are going to read an AT review will do it regardless of when it comes out, especially if it includes any kind of deep dive into the technology behind the product. Another set of benchmarks probably won't add anything to your decision about purchasing Ampere if you're considering it.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Hmm, the Whitepaper says the 3070 has 96 ROPs, same as 3080.

An attempt to make up for other deficits?

3070 × 1.5 (most of the architecture) ≈ 3080.
2080 Ti × 1.3 (performance) ≈ 3080.

So the claim that the 3070 will be faster than the 2080 Ti looks shaky: maybe in a handful of special cases, but it may choke the rest of the time.

The 2080 Ti has 616/448 = 38% more memory BW than the 3070, and somehow the 3070 beats it in performance?

Whereas the 3080 has 23% more BW than the 2080 Ti and delivers around 30% more performance.

Basically, the 3080's performance gain over the 2080 Ti makes sense given the extra power it brings to the table.

That leaves the 3070 looking significantly under-powered to match the 2080 Ti.

Time will tell, but I'm heavily skeptical of the 3070, and I don't think ROPs alone will carry it past the 2080 Ti.
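For reference, those percentages fall straight out of the published memory specs; a quick sanity check (the Gbps data rates and bus widths are the public spec-sheet values):

```python
# Memory bandwidth in GB/s from the published specs (Gbps data rate x bus width / 8).
cards = {
    "RTX 3070":    14 * 256 / 8,   # 448 GB/s (GDDR6)
    "RTX 2080 Ti": 14 * 352 / 8,   # 616 GB/s (GDDR6)
    "RTX 3080":    19 * 320 / 8,   # 760 GB/s (GDDR6X)
}
print(f"2080 Ti over 3070: +{cards['RTX 2080 Ti'] / cards['RTX 3070'] - 1:.0%}")   # ~+38%
print(f"3080 over 2080 Ti: +{cards['RTX 3080'] / cards['RTX 2080 Ti'] - 1:.0%}")  # ~+23%
```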
 

Golgatha

Lifer
Jul 18, 2003
12,639
1,481
126
Does it feel to anyone else like


It's going to be fun to see when Zen3 gets released next month.

Seeing benchmarks today convinced me my money was better spent on Zen 3 before an RTX 3000 series upgrade, since I game at 1440p. Which is fine with me, as that gives the AIB partners time to release boards with standard PCIe power connectors and better cooling solutions for about the same price. Honestly, I'd like to see a 20GB RTX 3080 released; that would be enough VRAM and performance at 1440p for years to come.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,743
3,075
136
Looking just at compute:

The RTX 3080 needs to run its INT instructions on cores which are also full-fledged FP32 cores. Per your claim, out of 136 instructions, 100 of them are FP, or 73%.

Therefore, in a typical workload, you would expect an Ampere design with the same amount of theoretical teraflops as a Turing design to underperform by 27%, since the INT workload doesn't impact FP on Turing. I have no idea where your 5-7% comes from.

This is still a good trade-off for NVIDIA, because this design allows for the huge uptick in theoretical FLOPs via the increased FP32 shader count, which means that an Ampere design with the same number of FP shaders as a Turing design would be a much lower-end part.
Hardly.

I keep saying this every new generation when people spout stuff like this.

Repeat after me:

executing data is easy,
moving data is hard.

So don't just look at FP execution units; look at register file bandwidth, read/write ports, instruction dispatch, L2 size, instruction mix, etc.
Just because marketing doesn't like it doesn't change the fact that an INT op at the same bit width is, in many complex situations, a better choice than FP (stability, value range, etc.), so they get used.

The point is, if someone was optimising their shader code to get maximum performance out of Turing, then they are going to get very little improvement, because moving operations from INT to FP doesn't actually buy anything. To see Ampere's FP increase play out linearly you need something that is 100% FP; you will find that in compute workloads, HPC, etc., but less so in gaming.
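To put a toy number on just the issue-slot side of that argument (a sketch that deliberately ignores the register-file, dispatch, and data-movement limits above, which is exactly the caveat being made):

```python
# Toy issue-slot model: Turing SM = 64 FP32 lanes + 64 dedicated INT32 lanes;
# Ampere SM = 64 FP32-only lanes + 64 shared FP32/INT32 lanes.
# Ignores register bandwidth, dispatch, caches, and memory entirely.

def ampere_vs_turing(int_fraction: float) -> float:
    """Per-SM throughput of Ampere relative to Turing for a given INT share of the mix."""
    fp_fraction = 1.0 - int_fraction
    turing_cycles = max(fp_fraction, int_fraction)   # INT and FP streams run concurrently
    ampere_cycles = max(0.5, int_fraction)           # ideal split across the two 64-lane paths
    return turing_cycles / ampere_cycles

for mix in (0.0, 36 / 136, 0.5):
    print(f"INT share {mix:.0%}: Ampere ≈ {ampere_vs_turing(mix):.2f}x Turing per SM")
```

With the 36-INT-per-100-FP mix from the quote this lands at roughly 1.47x per SM, i.e. about 27% short of the doubled paper TFLOPs, and that's before any of the data-movement limits bite.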
 

simas

Senior member
Oct 16, 2005
412
107
116
Zen 3 isn't going to relieve the bottlenecks at low resolution with the new gen of GPUs. It is what it is.

But are they truly a bottleneck? Other than a few exceptions (professional e-sports, maybe), the FPS very quickly becomes irrelevant once it's past two digits. And the pros play with graphics turned way, way down, so GPU power is completely irrelevant to them.
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
you know this how?

Easy, because it won't bring enough of an IPC increase to do so. 1080p is hard CPU-bottlenecked at this time; the 10900K has about a 10% lead in gaming at 1080p over current Ryzen, and games are getting more complex. The 4XXX series isn't a revolution so much as an evolution, and it's going to take a revolution for CPUs to get rid of low-resolution bottlenecks. Intel isn't there yet and AMD isn't either. JMHO.

With that said I don't think people should play at peasant resolutions.
 

thigobr

Senior member
Sep 4, 2016
231
165
116
Zen 3 is possibly bringing around 4% higher single-core boost plus a ~10-15% IPC improvement, if the rumors are true... That should help alleviate the bottlenecks.
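If those rumors hold, the clock and IPC gains multiply; a rough sketch of the implied single-thread uplift (rumored numbers, nothing confirmed):

```python
# Combined single-thread uplift from rumored Zen 3 gains (clock and IPC multiply).
boost_gain = 0.04                      # ~4% higher single-core boost (rumored)
for ipc_gain in (0.10, 0.15):          # ~10-15% IPC improvement (rumored)
    total = (1 + boost_gain) * (1 + ipc_gain) - 1
    print(f"IPC +{ipc_gain:.0%} -> roughly +{total:.0%} single-thread")
```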
 

Head1985

Golden Member
Jul 8, 2014
1,863
685
136
Hmm, the Whitepaper says the 3070 has 96 ROPs, same as 3080.
This is very interesting. It's the first x104 GPU with 96 ROPs; they've increased the ROP count for the first time since Maxwell (Maxwell/GTX 980 was the first with 64 ROPs). But I didn't find anything about a new delta color compression scheme, and without one I don't see the 3070 beating the 2080 Ti while having the same memory bandwidth as the 2070/Super.
 
Last edited:

Fallen Kell

Diamond Member
Oct 9, 1999
6,009
417
126
Now that the reviews show total system usage of 523W under load for the 3080, does this mean my 550W EVGA B3 won't be able to handle it? Now, keep in mind my CPU is a 3600 with a B450 MSI Carbon board, not the power-sucker that they use in the review. I admit I don't fully understand the way power supplies work, but I've seen a lot of reviews where the 3950X draws 140+ W, compared to my 65W 3600.

Is it as simple as saying my total system draw with a 3080 would be closer to 445W? I think I would be just fine at that wattage (81% load). I really don't want to have to get a new PSU. Worst case scenario I throw in a 3080 and it just shuts down the PC (at which point I know I need a bigger PSU)?

EDIT: scratch that. I don't believe I have 2x8 pin connectors anyways, so I guess I would need a new one!
It depends on whether your power supply delivers that 550W on the rails the card actually needs. EVGA is pretty good, so it may well have the circuitry to push 400-450W on the 12V rail (maybe even higher), but you really need to look at the underlying specs of your exact PSU and how it distributes power across the various rails (i.e., if you need ~100W for your CPU/RAM/motherboard on the 3.3V and 5V rails, is the 12V rail then limited to only 400W or 350W, meaning the total 550W rating is only attainable when essentially all of the load is on the 12V rail?).

I actually forget what power supply I have in my current system, but I think it is either an 850W or a 1000W unit, as I had originally thought I was going to go SLI, but SLI was on its way out in terms of driver/game support so I never bothered.
 

aleader

Senior member
Oct 28, 2013
502
150
116
It depends on whether your power supply delivers that 550W on the rails the card actually needs. EVGA is pretty good, so it may well have the circuitry to push 400-450W on the 12V rail (maybe even higher), but you really need to look at the underlying specs of your exact PSU and how it distributes power across the various rails (i.e., if you need ~100W for your CPU/RAM/motherboard on the 3.3V and 5V rails, is the 12V rail then limited to only 400W or 350W, meaning the total 550W rating is only attainable when essentially all of the load is on the 12V rail?).

I actually forget what power supply I have in my current system, but I think it is either an 850W or a 1000W unit, as I had originally thought I was going to go SLI, but SLI was on its way out in terms of driver/game support so I never bothered.

Good info, thanks. Does this tell me anything?

OUTPUT
Rail:          +3.3V    +5V      +12V      -12V    +5Vsb
Max output:    20A      20A      45.8A     0.5A    3.0A
Max wattage:   110W (3.3V+5V combined)     549.6W  6W      15W
Total:         550W @ +40°C

This is my power supply:

https://www.evga.com/products/specs/psu.aspx?pn=e0b3cf46-8080-43aa-8630-32deb1f501bf
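For what it's worth, a back-of-envelope check of that +12V rail against a 3080 build; the rail number comes from the spec table above, but every load figure here is an illustrative assumption, not a measurement:

```python
# Rough +12V budget check; all load figures below are assumptions for illustration.
rail_12v_w = 45.8 * 12        # 549.6 W max on the +12V rail (from the spec table)
gpu_board_power_w = 320       # RTX 3080 Founders Edition rated board power
gpu_spike_factor = 1.25       # assumed allowance for transient spikes
cpu_12v_w = 90                # assumed Ryzen 5 3600 package draw under combined load
misc_12v_w = 20               # assumed fans, pump, drives

worst_case = gpu_board_power_w * gpu_spike_factor + cpu_12v_w + misc_12v_w
print(f"~{worst_case:.0f} W of {rail_12v_w:.0f} W on +12V "
      f"({worst_case / rail_12v_w:.0%} of the rail)")
```

Under those assumptions you'd be sitting around 90%+ of the rail before even getting to the connector question, which is why a 550W unit looks marginal here.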
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
These cards are just too fast for current games unless you push high resolutions like 4K.
I wouldn't complain about the CPUs much, since they are often "bottlenecking" at 100+ FPS or something, at which point there really isn't much benefit to rendering more frames anyway for most games.

The 3080 is a very impressive beast; it just needs a new generation of games, I guess, because I don't think 300 FPS or 4K is such a good use of resources.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
Food for thought: none of these reviews stated what the ambient temperature of their office or testing area was. My computer's thermals as a whole differ heavily depending on whether the room is cooled to 68-75°F, and they get even wilder above that, all the way up to 84°F.
 

xpea

Senior member
Feb 14, 2014
429
135
116
I know most people here are looking at gaming, but man, Ampere is a beast for compute and rendering:

[Charts: Blender 2.90 Classroom render time (CUDA and OptiX), Chaos Group V-Ray Flowers render time (CUDA and OptiX), and OTOY OctaneRender 2020 Plants Project RTX score for the GeForce RTX 3080]
These dual FP32 SMs are not a joke in this type of workload
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,381
2,415
146
Interesting, surprised they did not mention ETH hashrate in that article though. I would like to see further testing on that.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
We need to look at the improvement in perf/$ to see how the value proposition changes between generations, and this approach should be fair, since it takes die size out of the equation.

Here's what Techspot got for $/frame @ 1440p and 4K:
[Techspot charts: cost per frame at 1440p and at 4K]



And here's what ComputerBase tabulated for the performance gains and cost differences between generations. I've taken the liberty of converting the launch prices to USD instead of Euros. I will definitely add that MSRPs do NOT reflect market or street prices; Moore's Law is Dead mentioned in his last video that Nvidia is intentionally taking a profit-margin hit on its launch MSRP just to make these types of comparisons look super favorable.

Non-Ti Models        Launch Price (USD)                   Performance Gain vs. Predecessor @ 2160p (4K)
GTX 780              $649                                 Baseline
GTX 980              $549 (-15%)                          +26%
GTX 1080             $599 MSRP (+9%), $699 FE (+27%)      +64%
RTX 2080             $699 MSRP (+17%), $799 FE (+14%)     +40%
RTX 3080             $699 MSRP (+0%), FE (-14%)           +65%

Ti Models            Launch Price (USD)                   Performance Gain vs. Predecessor @ 2160p (4K)
GTX 780 Ti           $699                                 Baseline
GTX 980 Ti           $649 (-7%)                           +50%
GTX 1080 Ti          $699 (+8%)                           +75%
RTX 2080 Ti          $999 MSRP (+43%), $1199 FE (+72%)    +35%
RTX 3080 Ti / 3090   $1499 MSRP (+50%), FE (+25%)         +50% (?)

Looking at the numbers from ComputerBase, I became kind of curious about how Ampere fits in on a historical context. Especially with all the discussion about whether the 3080 should be considered part of the XX80 class of cards or the XX80 Ti class.

So I plotted the historical cumulative performance/$ gain for each generation using the 700 series as the baseline, and included a trendline for the 700->900->10 series (the number for the 3070 is based on an estimate of the 3080 being 30% faster than the 3070):

[Charts: cumulative perf/$ gain by generation, XX80 class and XX80 Ti class]


Based on this, it is clear that the 3080 provides a perf/$ improvement roughly comparable to that of the 900 and 10 series, regardless of whether you consider it part of the XX80 class or the XX80 Ti class, but not big enough to make up for the very subpar improvement of the 20 series.

The 3070 provides an improvement that is almost good enough to make up for the 20 series within the XX80 class. I didn't plot it within the XX70 class, since ComputerBase didn't provide those numbers, but I suspect it would look worse there.

The 3090 meanwhile is clearly below historical standards for the XX80 Ti class (same as the 2080 Ti), but I suspect it would look better if you compared it to Titan cards.
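For anyone who wants to check the curves, here's roughly the arithmetic behind them (my own reconstruction, not the original script; the per-generation numbers are the MSRP figures from the table above):

```python
# Cumulative perf/$ for the non-Ti line, GTX 780 as the baseline (MSRP figures from the table).
# Each entry: (model, launch price change vs. predecessor, perf gain vs. predecessor at 4K).
non_ti = [
    ("GTX 980",  -0.15, 0.26),
    ("GTX 1080", +0.09, 0.64),
    ("RTX 2080", +0.17, 0.40),
    ("RTX 3080", +0.00, 0.65),
]

cumulative = 1.0
for model, price_delta, perf_gain in non_ti:
    step = (1 + perf_gain) / (1 + price_delta)   # perf/$ vs. predecessor = relative perf / relative price
    cumulative *= step
    print(f"{model}: {step - 1:+.0%} perf/$ vs. predecessor, "
          f"{cumulative - 1:+.0%} cumulative vs. GTX 780")
```

The FE prices and the Ti line follow the same recipe, just with the FE or Ti rows from the table.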