Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of more than 4K60, at least 4K90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

Konan

Senior member
Jul 28, 2017
360
291
106
Do you actually understand what you're saying, or did you just put a bunch of words together? I'm struggling to even figure out what you mean by sparsity in terms of general-purpose FLOPS, i.e. not talking about matrix multiply. Do you even know what a sparse matrix is?

100%. I'm assuming that Tensor 3.0 carries through to the Ampere architecture, which I've read up on, giving some credence to the rumors of higher precision gains. (There is a new sparsity feature in Ampere that wasn't in Volta.)
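
For anyone unsure what that sparsity feature refers to: below is a minimal NumPy sketch of the 2:4 structured-sparsity idea (prune 2 of every 4 weights so a sparsity-aware matrix-multiply unit can skip half the work). This is only an illustration of the concept, not NVIDIA's actual hardware path or API, and `prune_2_to_4` is a made-up helper name.

```python
import numpy as np

def prune_2_to_4(weights):
    """Zero the 2 smallest-magnitude entries in every group of 4 weights."""
    w = weights.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :2]   # two smallest per group of 4
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

rng = np.random.default_rng(0)
dense = rng.standard_normal((8, 16)).astype(np.float32)
sparse = prune_2_to_4(dense)
x = rng.standard_normal((16, 4)).astype(np.float32)

# A dense matmul pays for every entry; a sparsity-aware unit only pays for
# the non-zeros, which 2:4 pruning caps at half of them.
print("dense multiply-adds :", dense.size * x.shape[1])                # 512
print("sparse multiply-adds:", np.count_nonzero(sparse) * x.shape[1])  # 256
```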
 

Saylick

Diamond Member
Sep 10, 2012
3,532
7,859
136
Long time no post for me! Just thought I'd give my two cents on incoming Ampere cards.

It looks like Nvidia's decision to go with Samsung is going to dramatically hurt their efficiency this generation. The rumored 350 W TDP is 100 W more than the 2080 Ti. Granted, that is "only" a ~40% power increase for what will likely be a ~60% rasterization and ~80+% RT performance uplift, but for a move to a new node that is the worst generational rasterization efficiency gain since Fermi, and quite possibly the highest rated TDP ever for a single GPU.
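
A quick back-of-the-envelope check of that efficiency math, assuming a 250 W 2080 Ti and taking the rumored 350 W TDP and ~1.6x raster / ~1.8x RT uplifts at face value (all of these figures are speculation, not confirmed specs):

```python
tdp_2080ti, tdp_ampere = 250.0, 350.0   # watts (rumored for Ampere)
raster_gain, rt_gain = 1.60, 1.80       # rumored performance uplifts

power_ratio = tdp_ampere / tdp_2080ti             # 1.4 -> +40% power
raster_perf_per_watt = raster_gain / power_ratio  # ~1.14 -> only ~14% better
rt_perf_per_watt = rt_gain / power_ratio          # ~1.29 -> ~29% better

print(f"Power increase:     {power_ratio - 1:.0%}")
print(f"Raster perf/W gain: {raster_perf_per_watt - 1:.0%}")
print(f"RT perf/W gain:     {rt_perf_per_watt - 1:.0%}")
```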

Do any of you think Nvidia will migrate over to TSMC 7nm+(+) in ~16 months or so with die shrinks of Ampere (getting another 20-30% performance-per-watt improvement) and push Hopper out to a 36-month cycle instead of 24 months?
I feel like AMD's cadence is going to outpace Nvidia's if Nvidia doesn't move to 5nm by this time next year. I don't know what kind of step-function jump a chiplet design will bring to GPUs, but judging by what AMD has done already with their CPUs, I fully expect AMD to have a serious chance of taking the performance crown next year, at least in traditional rasterization workloads, if Nvidia does not move to Hopper. TSMC only has so much 5nm capacity, and judging by AMD's better relationship with them compared to Nvidia's, it would not surprise me if Nvidia pays a premium for 5nm wafers or AMD's chiplet architecture ends up more tightly tuned to TSMC's N5 node.
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
As I understand it, the reason for the cooler on the opposite side is strictly due to memory. The fan will actually spin at much lower speeds vs the front. Still definitely not ideal.
 

Konan

Senior member
Jul 28, 2017
360
291
106
I feel like AMD's cadence is going to outpace Nvidia's if Nvidia doesn't move to 5nm by this time next year. I don't know what kind of step-function jump a chiplet design will bring to GPUs, but judging by what AMD has done already with their CPUs, I fully expect AMD to have a serious chance of taking the performance crown next year, at least in traditional rasterization workloads, if Nvidia does not move to Hopper. TSMC only has so much 5nm capacity, and judging by AMD's better relationship with them compared to Nvidia's, it would not surprise me if Nvidia pays a premium for 5nm wafers or AMD's chiplet architecture ends up more tightly tuned to TSMC's N5 node.

For several years AMD has had the new-node advantage. As for a quicker cadence, we'll have to see what that looks like next time, because there hasn't really been any change for a while. I'd agree their sprint to new nodes and their roadmap is half decent. The thing is just the sheer R&D that Nvidia, and soon Intel, have. With MCM I expect Nvidia to come out on top even on an older node. Don't see that changing anytime soon.
 

Tup3x

Golden Member
Dec 31, 2016
1,086
1,085
136
As I understand it, the reason for the cooler on the opposite side is strictly due to memory. The fan will actually spin at much lower speeds vs the front. Still definitely not ideal.
I'd like to know how hot those new memories actually run... and they probably draw quite a bit of power too. Makes me wonder if this really makes sense instead of going with HBM2.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Looks like the rumors of the furnace and the ridiculous cooler were correct. No way will this tri-slot monstrosity be the same price as a 2080Ti, I expect $1500-$2000.
 
  • Like
Reactions: blckgrffn

beginner99

Diamond Member
Jun 2, 2009
5,233
1,610
136
Just thinking this. How many gamers can realistically fit this?

My thought too. I have a relatively small case, and with power connectors on top, 300mm is pretty much the biggest card that will fit. (Newer versions of my case can remove the HDD cage so a much bigger card would fit.) Anyway, at that size you basically need a case that allows HDD cage removal.

If I'm measuring it right compared to the 267mm 2080 FE, this card is ~326mm long. I had a triple-slot dual 290X Devil 13 that weighed 5 lbs and came with a little stick that you wedged between the card and the bottom of the case for extra support. That card was 305mm long. There are a lot of ATX cases that something like this wouldn't fit in.

This almost has to be fake.

True, I still have a 290X, and certainly only a handful of models were short enough to fit my case with power connectors on top. And the pricing... I would have had interest in an NV card for some playing with AI, but well... if the rumors are true I'd rather wait and see what AMD has to offer. NV will have to lower prices once AMD and the consoles launch.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
My thought too. I have a relatively small case, and with power connectors on top, 300mm is pretty much the biggest card that will fit. (Newer versions of my case can remove the HDD cage so a much bigger card would fit.) Anyway, at that size you basically need a case that allows HDD cage removal.

Does it matter? How many Titan class cards did you purchase in the past?
 
  • Like
Reactions: nnunn

Mopetar

Diamond Member
Jan 31, 2011
8,114
6,770
136
Looks like the rumors of the furnace and the ridiculous cooler were correct. No way will this tri-slot monstrosity be the same price as a 2080Ti, I expect $1500-$2000.

If it's going to cost $2,000 it has to perform like it. I'm not sure consumers are going to accept another generational price hike without substantial performance gains.

Unlike with Pascal and Turing, there should be legitimate competition in the high end, which further limits a company's ability to charge prices like this. It worked in the past because consumers had no high-end alternatives.

The rumored prices seem a little odd as well because they leave a lot of $200 holes to fill. No doubt this is something eventual SUPER cards are meant to address, but I think it also makes it harder for Nvidia to upsell their own products in the mid-range.

The other thing I've been wondering about is what the low-end and mainstream product stack is going to look like. The 16xx cards were a good approach last time. Are those staying around? I don't think I've heard any rumors about 26xx cards. Even with the RT improvements in Ampere, I still don't see the point of including it in anything below the 3060.
 

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
The other thing I've been wondering about is what the low-end and mainstream product stack is going to look like. The 16xx cards were a good approach last time. Are those staying around? I don't think I've heard any rumors about 26xx cards. Even with the RT improvements in Ampere, I still don't see the point of including it in anything below the 3060.

Def going to be RT throughout the entire stack. AMD will be the same way. It would still be much better in RT compared to other low end cards without any HWRT.

Only exception would be the mx450, and that's solely for Tiger Lake laptops.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,894
3,247
126
Just thinking this. How many gamers can realistically fit this?

I'm guessing it probably tosses SLI out completely...

How many boards can you count with three-slot spacing between the full x16 PCIe slots?
 

HurleyBird

Platinum Member
Apr 22, 2003
2,760
1,455
136
Looks like the rumors of the furnace and the ridiculous cooler were correct. No way will this tri-slot monstrosity be the same price as a 2080Ti, I expect $1500-$2000.

I don't think so. Granted, a lot depends on AMD's eventual pricing and performance, but the sheer ridiculousness of this cooler smells like desperation. It's not like Ampere itself "runs hot" considering the efficiency of past architectures; it's just Nvidia pursuing extremely aggressive clockspeeds because they perceive a weaker competitive position. Since Kepler, Nvidia products have had plenty of room to increase performance at the expense of thermals. Like, if the 7970 had been 50% faster when it launched, you wouldn't have seen the efficient, power-sipping 680 we got in response. Whatever product ended up using GK104 would be hotter, louder, faster, more expensive to produce, and yet cheaper for consumers.

That said, if Nvidia launches first (which it looks like they will), then it's still reasonable to expect a high initial price. We could end up with a repeat of the GTX 280 / HD 4870 situation... which would be great. Although to be honest that cooler seems more reminiscent of Fermi or FX.
 
Last edited:

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
For several years AMD has had the new-node advantage. As for a quicker cadence, we'll have to see what that looks like next time, because there hasn't really been any change for a while. I'd agree their sprint to new nodes and their roadmap is half decent. The thing is just the sheer R&D that Nvidia, and soon Intel, have. With MCM I expect Nvidia to come out on top even on an older node. Don't see that changing anytime soon.

Several years???

Navi came out last summer; before that, nVidia had the node advantage. In terms of CPUs, Intel had the advantage until last summer.

Samsung's 8nm is technically newer than TSMC's 7nm. It's just not as refined.

It will be interesting to see if nVidia also ramps up their cadence. For the last several years there was no need for them to, because they had the market cornered.
 
  • Like
Reactions: FaaR

MrTeal

Diamond Member
Dec 7, 2003
3,614
1,816
136
I'm guessing it probably tosses SLI out completely...

How many boards can you count with three-slot spacing between the full x16 PCIe slots?
Most of them, at least in the mainstream category, actually. When I moved from X99 to Ryzen I looked for an X570 board with two-slot spacing between x16s so I could keep using my existing SLI-HB bridge and EK water bridge. Couldn't find one, so I just ended up selling one of my 1080 Tis.
 

Konan

Senior member
Jul 28, 2017
360
291
106
Several years???

Navi came out last summer; before that, nVidia had the node advantage. In terms of CPUs, Intel had the advantage until last summer.

Samsung's 8nm is technically newer than TSMC's 7nm. It's just not as refined.

It will be interesting to see if nVidia also ramps up their cadence. For the last several years there was no need for them to, because they had the market cornered.

Yes, several years. Go back a bit further still and it is the same pattern.
AMD has been first to launch products on new nodes between the two, especially with major launches. In the years where both were on the same node, AMD launched first.
Some history shows that there isn't such a need to rush to a new node. However, half-node changes these days play a big part.

 
  • Like
Reactions: ozzy702

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Here we go, once again the rumor/speculation was true. 2x8pin to 12pin adapter:


Recommended for 850W PSUs or better, all to power a single GPU.

It's now 100% certain this thing is a furnace. What we don't know is why. There are only two possible reasons: garbage manufacturing/design, or RDNA2 is far better than anyone expects.
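
For scale, the standard PCIe power limits put a hard ceiling on what that adapter can actually deliver. A quick sketch of the arithmetic, using the spec ceilings only (75 W from the slot, 150 W per 8-pin); the 850 W figure is just the rumored PSU recommendation above, not a measured draw:

```python
slot_w = 75        # PCIe x16 slot ceiling
eight_pin_w = 150  # per 8-pin PCIe connector ceiling

board_ceiling = slot_w + 2 * eight_pin_w   # 375 W through slot + 2x 8-pin
print(f"Max board power via slot + 2x 8-pin: {board_ceiling} W")

# The rumored 850 W PSU recommendation then leaves the rest for the CPU,
# drives, fans, and transient spikes above the sustained TDP.
psu_w = 850
print(f"Left over for the rest of the system: {psu_w - board_ceiling} W")
```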
 
Last edited:
  • Like
Reactions: blckgrffn and FaaR

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
What if it's both, caused by Nvidia's huge mistake: underestimating AMD's silicon physical design capabilities? ;)
I believe RDNA2 will be only 20% faster than the 2080 Ti.

What I think has happened is similar to the 1080 Ti situation: Ampere is much faster than it needs to be because nVidia panicked for no reason. I also think Ampere has serious manufacturing issues on 8nm Samsung because of that die size.

This is speculation on my part, so we'll see.
 
  • Haha
Reactions: Mopetar and Krteq