'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for; I'm just interested in forum members' thoughts.
 

lobz

Platinum Member
Feb 10, 2017
RTX3080 110fps / 320W

RTX2080 60fps / 240W

So it's measuring how much a faster card consumes when locked at low performance.
If somebody's never seen a misleading marketing figure, just show them this picture.
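To spell out the trick with rough numbers: the max-performance points below are read off the slide, but the 3080's power draw at a locked 60 fps is my eyeballed guess from the curve, not a printed figure. A minimal sketch of the two framings:

```python
# Two ways to frame perf/W from the same slide. Max-performance points as
# shown: RTX 3080 at 110 fps / 320 W, RTX 2080 at 60 fps / 240 W.

fps_3080, w_3080 = 110, 320
fps_2080, w_2080 = 60, 240

# Framing 1: both cards running flat out.
ppw_3080 = fps_3080 / w_3080            # ~0.34 fps/W
ppw_2080 = fps_2080 / w_2080            # 0.25 fps/W
print(f"Flat-out gain: {ppw_3080 / ppw_2080:.2f}x")   # ~1.38x

# Framing 2 (the marketing one): lock both cards at the 2080's 60 fps.
# ASSUMPTION: ~125 W for the 3080 at 60 fps, eyeballed from the curve.
w_3080_at_60 = 125
print(f"Iso-performance gain: {(60 / w_3080_at_60) / ppw_2080:.2f}x")  # ~1.9x
```

Same slide, same cards, and the quoted gain swings from ~1.4x to ~1.9x depending on where you sit on the power curve.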
 

Saylick

Diamond Member
Sep 10, 2012
Nvidia itself is quoting the 36 TFLOP FP32 number, so that would seem to be the case. I'm actually quite surprised, but that is a lot of compute horsepower if there are no gotchas. 29.8TF FP32 for $700 is a crazy value compared to the 13.4TF @ $1200 for the 2080 Ti or 13.8TF @ $700 for the Radeon VII.
They quote SHADER FLOPS.

Not FP32 ;). It's similar to their claim for the A100 chip that FP16, enhanced by all that GEMM stuff, performs "the same" as native FP32.

In reality, native FP32 will be exactly that: native FP32.
Yeah, I agree with Glo here. They used the term "Shader-FLOPS", not "FP32 FLOPS". The INT cores actually do single-precision math (i.e. 32-bit), just not floating point specifically. Going off the SM diagram for A100, they don't list the INT cores as being capable of FP math either, so my guess is that either Nvidia tuned Ampere for graphics so that the pipelines for an SM are (2) x 16-wide FP or 16-wide INT + 16-wide FP, or they are just using Shader-FLOPS as a catch-all term for all the concurrent single-precision math the entire GPU can do, INT and FP included. My money is on the latter.
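To put rough numbers on the two readings (core count and boost clock are the announced 3080 specs; everything else is just counting FMAs), a quick back-of-the-envelope sketch:

```python
# Back-of-envelope TFLOPS for the 3080 under both readings of "CUDA core".

boost_ghz    = 1.71    # announced 3080 boost clock
shader_cores = 8704    # Nvidia's new-style core count for the 3080

def tflops(cores, ghz):
    return cores * 2 * ghz / 1000   # 1 FMA = 2 FLOPs per core per clock

# Reading 1: all 8704 lanes issue FP32 every clock ("Shader-FLOPS").
print(f"{tflops(shader_cores, boost_ghz):.1f} TF")        # ~29.8

# Reading 2: Turing-style counting, where half the lanes are the old INT32
# pipes and only the other half are dedicated FP32 units.
print(f"{tflops(shader_cores // 2, boost_ghz):.1f} TF")   # ~14.9
```

So the headline figure is self-consistent either way; the open question is only how often real shader code can keep all the lanes fed with FP work.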
 

Hitman928

Diamond Member
Apr 15, 2012
So it's measuring how much a faster card consumes when locked at low performance.
If somebody's never seen a misleading marketing figure, just show them this picture.

This is almost exactly what AMD did with their Polaris claims of 2x perf/watt. They took carefully chosen card models from both generations, locked them at 60 fps in one game, and used that to claim 2x perf/watt. AMD was rightfully criticized for that and we should do the same for Nvidia here; the real perf/watt gain will be nowhere near 90%.
 

MrTeal

Diamond Member
Dec 7, 2003
It's 2x FP32 THROUGHPUT.

It's the same marketing gibberish they used for the A100 chips.

In reality, when actual native FP32 performance was tested, it was the same as V100.
We'll have to wait for testing to see. As I said previously, that kind of increase in FP32 performance would be really surprising and seems unlikely.
However, Nvidia themselves list GA100 as having 8192 CUDA cores and the A100 as having 6912 CUDA cores. If they are now saying GA102 has 10496 CUDA cores, they are redefining them mid-generation, and that's extremely questionable. This is Nvidia though, so I wouldn't put it past them at all.
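To make the bookkeeping concrete (SM counts are the announced configurations; the GA102 per-SM lane split is the speculation from earlier in the thread, not something Nvidia has confirmed at this point), a quick sketch:

```python
# How the same word "CUDA core" counts different things on the two dies.
# GA100: 64 FP32 lanes + 64 separate INT32 lanes per SM -> 64 "cores"/SM.
# GA102 (speculated): 64 dedicated FP32 lanes + 64 FP32-or-INT32 lanes
#                     per SM -> now counted as 128 "cores"/SM.

print("GA100 full die :", 128 * 64)   # 8192
print("A100 (108 SMs) :", 108 * 64)   # 6912
print("GA102 (82 SMs) :",  82 * 128)  # 10496
```

Under the old counting, the 3090 would be an "only" ~5248-core part, which is why the 10496 figure reads as a redefinition rather than a doubling of hardware.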
 

Zstream

Diamond Member
Oct 24, 2005
3080 isn't mid tier, it's high end. It actually should be significantly closer to the 3090 than the 2080 was to the 2080 Ti. The 3070 at $500 seems a bit much though in comparison.
OK, high-end and not close to the highest end... whatever terminology is used, it's way overpriced IMO. I guess we will see the end result, but this year is not the year to launch at these prices.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
3080 isn't mid tier, it's high end. It actually should be significantly closer to the 3090 than the 2080 was to the 2080 Ti. The 3070 at $500 seems a bit much though in comparison.

Right, which makes its 10GB frame buffer that much harder to accept. But at, say, 20GB for $850, it would really make the 3090 look outlandish ¯\_(ツ)_/¯
 

Hitman928

Diamond Member
Apr 15, 2012
OK, high-end and not close to the highest end... whatever terminology is used, it's way overpriced IMO. I guess we will see the end result, but this year is not the year to launch at these prices.

$700 actually seems quite reasonable to me given the performance. The 3090 looks absurd though; I believe they are based on the same die, and I don't see it being a whole lot faster than a 3080, so its price is somewhat bewildering. The 10 GB of VRAM on the 3080 is a bit disappointing, but spending $800 more to get a decent-at-best performance bump and more RAM is pretty intense. I'm guessing Nvidia has other models in the works but is waiting for the new Radeon cards to drop to see how they want to price everything.
 

CastleBravo

Member
Dec 6, 2019
$700 actually seems quite reasonable to me given the performance. The 3090 looks absurd though; I believe they are based on the same die, and I don't see it being a whole lot faster than a 3080, so its price is somewhat bewildering. The 10 GB of VRAM on the 3080 is a bit disappointing, but spending $800 more to get a decent-at-best performance bump and more RAM is pretty intense. I'm guessing Nvidia has other models in the works but is waiting for the new Radeon cards to drop to see how they want to price everything.

I'm hoping they show a good demo of this "RTX IO" thing before the 3080 is available for purchase. This new decompression tech combined with a Gen4 NVMe drive may make 10 GB of VRAM more than enough.
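For a sense of scale, here's a rough streaming budget under assumed numbers (~7 GB/s for a fast Gen4 NVMe drive, and the ~2:1 average compression the RTX IO marketing implies; neither is a measured figure):

```python
# Rough streaming-budget arithmetic for the "10 GB might be enough" idea.

nvme_gbs    = 7.0   # ASSUMED: fast Gen4 NVMe sequential read, GB/s
compression = 2.0   # ASSUMED: average asset compression ratio
vram_gb     = 10    # 3080 frame buffer

effective = nvme_gbs * compression
print(f"Effective stream-in rate: {effective:.0f} GB/s")                    # 14
print(f"Full {vram_gb} GB VRAM repopulation: {vram_gb / effective:.2f} s")  # ~0.71
```

If games can actually restock the whole frame buffer in under a second, the argument is that resident VRAM matters less than it used to; whether engines will really stream that aggressively is the open question.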
 

moonbogg

Lifer
Jan 8, 2011
I do like how the 3080 looks. The price is appropriate, and we get a big die on an x80 card for the first time in a long time. The downside is the downgrade in VRAM compared to the 1080 Ti, though. If the 20GB models cost much more, then that sort of kills the 1080 Ti upgrade argument. I'm left wondering about the 3080 Ti: if it sits close to the 3090, I'd expect the price to be around $1200 again, possibly $1000 IMO. That's not a 1080 Ti upgrade.

So I'm still left feeling unsure about this generation. Too bad it couldn't have just been a simple no-brainer: more performance, more RAM, and a similar price would have had me. This time I'll wait and see, but there's no way I click the buy button without knowing the rasterization performance in a straightforward way, without RT/DLSS confusing the issue.
 

CP5670

Diamond Member
Jun 24, 2004
The 3080 looks impressive despite being short on memory. I might get one after the cards have been out for a while. It's not cheap, but it actually provides a reason to upgrade, unlike the 2080, and is likely capable of 4K at 120 Hz in a lot of games.
 

Stuka87

Diamond Member
Dec 10, 2010
How many 1080 Ti owners are going to see that buffer downgrade and take a pause on the 3080? I would, but I already sold my 1080 Ti.

I don't see how 1GB of RAM could make people refuse to upgrade. On lower-end cards it can make a big difference, but the difference between 10 and 11 GB is inconsequential when you are talking about a GPU that is overall significantly faster.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
I don't see how 1GB of RAM could make people refuse to upgrade. On lower-end cards it can make a big difference, but the difference between 10 and 11 GB is inconsequential when you are talking about a GPU that is overall significantly faster.

That's rational.

When you're spending $700 on a video card that is by no means required, I think we've already stepped out of rational bounds. When spending that much money on a high-end product, you expect it to be superior in every way, right? At least I would.

I feel like if it were at least 12GB, it would placate many. Bigger is better, right? Ha.
 

Glo.

Diamond Member
Apr 25, 2015
I don't see how 1GB of RAM could make people refuse to upgrade. On lower-end cards it can make a big difference, but the difference between 10 and 11 GB is inconsequential when you are talking about a GPU that is overall significantly faster.
I agree. 10 GB vs. 11 GB makes no difference.

If you need that extra 1 GB, you'd better buy the 3090 instead of the 3080.
 

Ajay

Lifer
Jan 8, 2001
Well, at least the pricing is good. I imagine that Samsung gave Nvidia a really good deal on 8nm wafers, given how they screwed up on 7nm EUV.
Looking forward to actual reviews and deep dives.
 

MrTeal

Diamond Member
Dec 7, 2003
3080 isn't mid tier, it's high end. It actually should be significantly closer to the 3090 than the 2080 was to the 2080 Ti. The 3070 at $500 seems a bit much though in comparison.
Yeah, the 3070 doesn't seem like it will be a value card at all. The 3080 has 50% more shaders and memory bandwidth than the 3070 for 40% more money. Compare that to the 1080/1070, where the 1080 had 33% more shaders for a 55% price increase.
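A quick check of those ratios (shader counts as announced; the Pascal prices here are Founders Edition MSRPs, which is where the ~55% figure comes from):

```python
# Shader-count vs price deltas for the two generational pairs.
pairs = {
    "3080 vs 3070": ((8704, 699), (5888, 499)),
    "1080 vs 1070": ((2560, 699), (1920, 449)),   # Founders Edition MSRPs
}

for label, ((s_hi, p_hi), (s_lo, p_lo)) in pairs.items():
    print(f"{label}: {s_hi / s_lo - 1:+.0%} shaders for {p_hi / p_lo - 1:+.0%} price")
# 3080 vs 3070: +48% shaders for +40% price
# 1080 vs 1070: +33% shaders for +56% price
```

So within Ampere the x80 card scales its price more slowly than its hardware, which is the reverse of the Pascal pair; that's what makes the 3070 look weak by comparison.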
 

uzzi38

Platinum Member
Oct 16, 2019
Well, at least the pricing is good. I imagine that Samsung gave Nvidia a really good deal on 8nm wafers, given how they screwed up on 7nm EUV.
Looking forward to actual reviews and deep dives.
Nvidia are no doubt getting a good deal.

AIBs? I feel so bad for them. Margins must be awful having to deal with GDDR6X in all its cursedness. Power draw per module is notably up, but worst of all is the PAM4 signalling. The extra PCB costs on top of the cost per module are not fun.
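For anyone unfamiliar with why PAM4 is the scary part, a minimal sketch of the signalling arithmetic (data rate and bus width are the 3080's announced figures):

```python
# PAM4 carries 2 bits per symbol vs NRZ's 1, so GDDR6X hits 19 Gbps/pin at
# half the symbol rate - but with four voltage levels to discriminate,
# which is what makes board routing and signal integrity so unforgiving.

gbps_per_pin = 19    # 3080's GDDR6X data rate
bus_bits     = 320   # 3080's memory bus width

print("NRZ symbol rate :", gbps_per_pin, "GBaud")                 # 1 bit/symbol
print("PAM4 symbol rate:", gbps_per_pin / 2, "GBaud")             # 2 bits/symbol
print("Bus bandwidth   :", gbps_per_pin * bus_bits / 8, "GB/s")   # 760.0
```

Squeezing four levels into the same voltage swing cuts the eye height to roughly a third of NRZ's, so every millimetre of trace and every via on the AIB's PCB matters more than it did with plain GDDR6.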
 

DisEnchantment

Golden Member
Mar 3, 2017
Well, at least the pricing is good. I imagine that Samsung gave Nvidia a really good deal on 8nm wafers, given how they screwed up on 7nm EUV.
Looking forward to actual reviews and deep dives.
It was quite disappointing that it was not 7LPP; Ampere could have stretched its legs. There could be something wrong with 7LPP for bigger dies.
It also means there is no quick re-tapeout path to 6LPP/5LPE for mid-gen Super refreshes. Or maybe 6LPP is not part of the wafer shuttle arrangement?
Samsung has to come out publicly at their Foundry Forum. Too much rampant FUD from Taiwan.
 