Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 69 - AnandTech Forums

Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent or uncalled for; I'm just interested in the forum members' thoughts.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
If the 3090 is Titan class and the 3080/3080 Ti slots in about 15% under it, then a top-tier AMD part being 20% better than a 2080 Ti leaves it about 12% behind the top-tier consumer card. Not perfect, but not bad, especially if they keep prices reasonable. I'd take a 12% performance loss if it's more in the $800 range than the $1200 range like the 2080 Ti.

The Titan has not been a consumer card for some time now, though. Very few people would spend $2,500 on a card with the same or worse gaming performance than the Ti model below it. Its price premium is for its compute capabilities.

It's possible Nvidia will bring the price back down to the $1,200 range, but it seems unlikely.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I think what he's saying, and I'm half expecting it, is that the Titan will be renamed to 3090 and it will take over the halo position for consumer cards. If the rumors about the 3090 having 24 GB of RAM are true, this would make sense.

If they do that, I'd expect the pricing to be adjusted down in terms of tiers. Maybe $700 80, $900 Ti, $1300 90?

Personally I'm expecting this gen to have a decent amount more ML hardware, so it may be that the Titan just wasn't making sense anymore.

I could be way off base, but that's what it looks like.
 
  • Like
Reactions: Bubbleawsome

MrTeal

Diamond Member
Dec 7, 2003
3,614
1,816
136
I wouldn't expect a $900 Ti, certainly not for the launch models. I would actually be pleasantly surprised if you could buy any 3080 Ti under $1000 day 1.
 

Konan

Senior member
Jul 28, 2017
360
291
106
Also keep in mind that the current Ti has an MSRP of $999; the inflated prices we are seeing now are due to supply limitations. That shouldn't be nearly as big of an issue with a much smaller die.

The prices have been shitty with Turing since launch, and even with no supply issues several months post-launch the price is still crap.
I feel the pricing will be better this time around, but not by much. Nvidia can go first, then AMD will undercut without much of a performance difference, and then things should equalize for a bit.

A 2080 Founders Edition @ $800 launch was a kick to the nuts. The standard model at $700 wasn't much better. We'd be lucky to see a 3080 "founders" for $700.
 
  • Like
Reactions: Ranulf

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The prices have been shitty with Turing since launch, and even with no supply issues several months post-launch the price is still crap.
I feel the pricing will be better this time around, but not by much. Nvidia can go first, then AMD will undercut without much of a performance difference, and then things should equalize for a bit.

A 2080 Founders Edition @ $800 launch was a kick to the nuts. The standard model at $700 wasn't much better. We'd be lucky to see a 3080 "founders" for $700.

And then they came out with the 'Super' models, for that extra one-two punch.
 
  • Like
Reactions: Konan and Krteq

Tup3x

Golden Member
Dec 31, 2016
1,086
1,084
136
I'd hope that they'd just stop using the Ti and Super suffixes. For example, the jump in performance between the 2080 and the 2080 Ti was so large that it would have made more sense to just call it the 2090. The same goes for previous generations. If they want to do a refresh, then those suffixes (or bumping the number by five, for example) would be fine. Initially, plain numbers would make more sense.
 
  • Like
Reactions: Lodix and FaaR

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
I'd hope that they'd just stop using the Ti and Super suffixes. For example, the jump in performance between the 2080 and the 2080 Ti was so large that it would have made more sense to just call it the 2090. The same goes for previous generations. If they want to do a refresh, then those suffixes (or bumping the number by five, for example) would be fine. Initially, plain numbers would make more sense.

The gap shouldn't be as large as 2080 -> 2080 Ti; it's only ~20% more shaders if the counts end up matching the rumors (4352 vs. 5248/5376).
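As a sanity check on that claim, the rumored core counts can be turned into percentage uplifts directly (the counts themselves are leaks, not confirmed specs):

```python
# Rumored CUDA core counts (leaked figures, not official specs)
RTX_2080_TI = 4352
RUMORED_3080 = 5248
RUMORED_3090 = 5376

for name, cores in (("3080", RUMORED_3080), ("3090", RUMORED_3090)):
    uplift = cores / RTX_2080_TI - 1
    print(f"{name}: {uplift:.1%} more shaders than a 2080 Ti")
# 3080: 20.6%, 3090: 23.5% -- consistent with the '~20%' figure above
```

Shader count alone isn't a performance prediction, of course; clocks and architecture changes move the final number.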
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146

Looked at this again; it does seem like this is the 3080. If you look at the MSI Lightning, that seems to be a ~320 W card. The efficiency comparison doesn't look as bad if you compare the 3080 to the Lightning Z, especially if Nvidia can get another 5-10% in clocks in the final version.
No, it's not. It's a higher-tier GPU that's crippled by poor memory. The 12 Gbps is being reported correctly.
 

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
No, it's not. It's a higher-tier GPU that's crippled by poor memory. The 12 Gbps is being reported correctly.

I'm not entirely sure how to calculate the bandwidth, but I have a feeling the 3080 will be wider than the usual 256-bit. If it were 320-bit, that would be 15 GT/s, right?
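For reference, peak GDDR bandwidth is just the bus width in bytes times the data rate, so this guesswork reduces to a one-liner; the bus widths and data rates below are the rumored figures under discussion, not confirmed specs:

```python
def gddr_bandwidth_gbps(bus_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s: (bytes per transfer) * (transfers per second)."""
    return bus_bits / 8 * rate_gtps

# A 384-bit bus at 12 GT/s and a 320-bit bus at 15 GT/s land close together,
# which is why one leaked bandwidth figure can fit either configuration.
print(gddr_bandwidth_gbps(384, 12))  # 576.0 GB/s
print(gddr_bandwidth_gbps(320, 15))  # 600.0 GB/s
```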
 

Konan

Senior member
Jul 28, 2017
360
291
106
For what it is worth, kopite7kimi is proposing the following TimeSpy Extreme (Graphics) scores:
3080 = almost 8600 (just about 36% faster than a stock RTX 2080 Ti; aftermarket models are about 20% faster, but we also don't know what a 3080 aftermarket model can do yet)
3090 = almost 10000 (about 50% faster than a stock RTX 2080 Ti)
And tweaks, QA, drivers, etc. aren't finished yet.
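Those two rumored scores can at least be checked for self-consistency: each one implies a stock 2080 Ti baseline, and the two baselines should roughly agree (the scores are leaks; treat the arithmetic as a sketch):

```python
# Implied stock RTX 2080 Ti baseline from each rumored claim
baseline_from_3080 = 8600 / 1.36   # "36% faster" claim
baseline_from_3090 = 10000 / 1.50  # "50% faster" claim

print(round(baseline_from_3080))  # 6324
print(round(baseline_from_3090))  # 6667
# Both land in the mid-6000s, so the two rumors at least agree on
# roughly the same stock 2080 Ti score.
```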
 

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
For what it is worth, kopite7kimi is proposing the following TimeSpy Extreme (Graphics) scores:
3080 = almost 8600 (just about 36% faster than a stock RTX 2080 Ti; aftermarket models are about 20% faster, but we also don't know what a 3080 aftermarket model can do yet)

20% is presumably at the "official" TDP, which will be much less than the 320 W of the FE.
 
  • Like
Reactions: Konan

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If it's using GA102, then how many parts are there? I.e., is there going to be a 3090/Titan, a 3080 Ti, and a 3080, or will there just be a 3090/3080?

If it's three tiers, we could see 384/352/320-bit, or perhaps 384/320 but with different memory speeds?

Some of the rumors say the 70 Ti will be coming back too, so in theory we could be looking at five parts above the x60. Not saying it won't happen, but if it does, that would lead to either some very granular price increments or outright outrageous prices in the high end.
 

Mopetar

Diamond Member
Jan 31, 2011
8,113
6,768
136
I would be incredibly surprised if NVidia only offered at most 12 GB in high end Ampere.

I think that would be more surprising than just about anything else that could happen with it, even given some of the other rumors that seem highly unlikely.
 

Bouowmx

Golden Member
Nov 13, 2016
1,147
551
146
2 GB GDDR6 chips appear to nerf bandwidth on the Radeon Pro RX 5700: 384 GB/s (12 GT/s) instead of 448 GB/s (14 GT/s) on the regular versions.
Though NVIDIA Turing isn't affected even on the Quadro RTX 8000 48 GB, with 2 GB and double-sided memory: still 14 GT/s vs GeForce RTX 2080 Ti.

I guess it will be 1 GB chips for Ampere (12 GB on 384-bit) to get higher bandwidth: the alleged >18 GT/s for GeForce RTX 3080 and 3080 Ti/3090.
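The trade-off being described follows from GDDR6 packaging: each chip presents a 32-bit interface, so the bus width fixes the chip count, chip density then fixes capacity, and bus width times data rate fixes bandwidth. A sketch using the figures from the post (the densities and data rates are the rumored/observed values, not confirmed Ampere specs):

```python
def gddr6_config(bus_bits: int, chip_gb: int, rate_gtps: float):
    """Return (capacity in GB, peak bandwidth in GB/s) for a GDDR6 setup.

    Each GDDR6 package presents a 32-bit interface."""
    chips = bus_bits // 32
    return chips * chip_gb, bus_bits / 8 * rate_gtps

# 1 GB chips on a 384-bit bus at the alleged 18 GT/s:
print(gddr6_config(384, 1, 18))  # (12, 864.0) -> 12 GB, 864 GB/s
# 2 GB chips on a 256-bit bus at 12 GT/s (the nerfed case above):
print(gddr6_config(256, 2, 12))  # (16, 384.0) -> 16 GB, 384 GB/s
```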
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
2 GB GDDR6 chips appear to nerf bandwidth on the Radeon Pro RX 5700: 384 GB/s (12 GT/s) instead of 448 GB/s (14 GT/s) on the regular versions.
Though NVIDIA Turing isn't affected even on the Quadro RTX 8000 48 GB, with 2 GB and double-sided memory: still 14 GT/s vs GeForce RTX 2080 Ti.

I guess it will be 1 GB chips for Ampere (12 GB on 384-bit) to get higher bandwidth: the alleged >18 GT/s for GeForce RTX 3080 and 3080 Ti/3090.

The slower memory speed could be a cost-cutting request from Apple. The card isn't intended for gaming but for video/modeling work, where bandwidth is less important than capacity.
 
  • Like
Reactions: FaaR and dr1337

amenx

Diamond Member
Dec 17, 2004
4,107
2,379
136

Veradun

Senior member
Jul 29, 2016
564
780
136
2 GB GDDR6 chips appear to nerf bandwidth on the Radeon Pro RX 5700: 384 GB/s (12 GT/s) instead of 448 GB/s (14 GT/s) on the regular versions.
Though NVIDIA Turing isn't affected even on the Quadro RTX 8000 48 GB, with 2 GB and double-sided memory: still 14 GT/s vs GeForce RTX 2080 Ti.

It isn't a tech limitation; it's just that they're cheaper and consume less power, i.e. it's a choice.
 
  • Like
Reactions: FaaR

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
What is the use case in the life cycle of the Ampere consumer cards for more than 12GB? Specifically, if something like a 3080 Ti is sold as a gaming card, is there anything within the next 24 months that would need it?
 

Mopetar

Diamond Member
Jan 31, 2011
8,113
6,768
136
Probably not the next 2 years outside of some insane resolutions or high quality texture packs, but there are people who keep these high end cards for more than just two years.

I also think that, with the most recent consoles targeting 16 GB (the Xbox has 10 GB optimized for GPU use), we'll see developers target a higher baseline for future games.

It would seem odd for NVidia to make a premium product in all other ways but skimp on additional memory. Maybe they do have some magic tech that lets them get around that to a degree, but a lot of people just look at numbers and conclude that bigger equals better.