Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
293
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

maddie

Diamond Member
Jul 18, 2010
5,152
5,540
136
It's hard to judge completely without seeing the "front" side of that fan, but it does seem to be a non-optimal design for pull, especially with fins spaced so closely together. It's also possible that this is just a prototype and the end product will have a different fan.
Yeah, that's what I was saying earlier. All the pictures have to be wrong for this to be a pull fan.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Forget I asked anything.
Lol, sorry. The originator in question makes stuff up as he goes along, and he has his minions in the rumor field, if you could call it that, who repeat what he says ad nauseam. There's often no basis to the origin source's claims, and they often don't make sense if you know even an ounce of how electronics work at the board level, including but not limited to signaling.

I don't like mentioning these people by name because it's not right to give them SEO if someone googles them.
 

nnunn

Junior Member
Jan 22, 2009
6
0
66
I'm guessing NV knows that anyone willing and able to buy a 3090 would never put up with loud fans. So what if they worked out how to make this funky cooler silent?
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
The card is bigger than Turing's flagship, the 2080 Ti, right? My best guess is they chose really large fans with a blade profile that can move enough air at lower RPM. I wouldn't be surprised if they tasked a company like Noctua, Arctic or be quiet! to design something for them.
 

Hitman928

Diamond Member
Apr 15, 2012
6,684
12,337
136
The card is bigger than Turing's flagship, the 2080 Ti, right? My best guess is they chose really large fans with a blade profile that can move enough air at lower RPM. I wouldn't be surprised if they tasked a company like Noctua, Arctic or be quiet! to design something for them.

Yeah, even if the GPU pulls 350W, it seems like they are trying to oversize the heatsink to get it as quiet as possible.
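The "bigger fans, lower RPM" intuition can be sanity-checked with the standard fan affinity laws, where airflow scales with RPM times diameter cubed for geometrically similar fans. A minimal sketch (the fan diameters below are made-up examples, not the actual cooler's dimensions):

```python
def rpm_for_same_airflow(rpm1: float, d1_mm: float, d2_mm: float) -> float:
    """Fan affinity laws: airflow Q is proportional to RPM * D^3 for
    geometrically similar fans, so a larger fan hits the same airflow
    at a lower speed (and therefore lower noise)."""
    return rpm1 * (d1_mm / d2_mm) ** 3

# Hypothetical numbers: replace an 85 mm fan at 2000 RPM with a 110 mm fan.
print(round(rpm_for_same_airflow(2000, 85, 110)))  # roughly 900 RPM
```

Since fan noise rises steeply with RPM, even a modest diameter increase buys a disproportionately quieter cooler at the same airflow, which would fit the oversized-heatsink theory above.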
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Yeah, even if the GPU pulls 350W, it seems like they are trying to oversize the heatsink to get it as quiet as possible.
The 2080 Ti reportedly pulls about 277 watts at peak load. These new cards will be pulling well over 350 watts; 350 W is what was rumored about 2-3 months ago. They can go big, but that increases size, weight and cost. The latter two they don't care about, because the cost gets passed on to the consumer, but the first issue affects their bottom line: if the card is too big for most cases on the market, that's lost revenue. There's only so much weight you can add before you have to include a stability bracket or card standoff so you don't put too much pressure on a customer's PCIe slot. A CPU cooler can be as heavy as it is because its bracketing system distributes the force evenly. Don't be surprised if superclocked variants require in excess of 400 watts and need upgraded cooling.

No idea if it's because of Samsung's inefficient 8nm node, Nvidia pushing the cards to their limits in addition to the large dies, or inability to control heat output and going big. Sans the size, the last time Nvidia released something like this was 11 years ago. IIRC the series after that ran even hotter but Nvidia's cooling solution was better. The generation after that was much better all around.
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
So, the $800 card will need a water block. I think that much is pretty much guaranteed if any of the power rumors are true and you want it to run at max clocks consistently. So already you are beyond Titan pricing ($1,000). I've been buying Nvidia GPUs forever, almost every generation and usually TWO of them. If they want to sell this thing to people like me, they have to hit $700, same as 1080 Ti pricing. It has to be $700 and no more, or my boring old sig isn't changing for a while.
However, a nice TSMC 7nm alternative with a universal adaptive sync monitor to pair with it is sounding pretty juicy to me right now.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
I wonder if they're locking the 3090 to FE only like the Titan was
Is this assuming we won't see a Titan this generation? I could see them both being locked to FE only. The PCIe slot supplies what, 75 watts, plus the pin power? With two 8-pin connectors the spec max would be 375 watts, but nothing is stopping them from developing a monster Titan with 2x 12-pin cabling if they wanted a flaming PCB, because that's what would happen without an H2O cooler. That's very extreme, but they could go lower for a redundancy angle. IDK how even a 360 mm rad would cool something that hot. It would be between very warm and the edge of toasty.
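The connector math above can be sketched as a quick back-of-the-envelope budget, using the commonly cited spec limits of 75 W from the x16 slot, 75 W per 6-pin and 150 W per 8-pin; the ~300 W figure for Nvidia's 12-pin is only the rumored rating, an assumption:

```python
# Per-source power limits (watts). The 12-pin value is the rumored
# rating at the time, not a confirmed spec.
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150, "12-pin": 300}

def board_power_budget(connectors: list[str]) -> int:
    """Max in-spec board power for a card with the given aux connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_budget(["8-pin", "8-pin"]))  # 375 W, the classic dual 8-pin ceiling
```

Which is where the oft-quoted 375 W ceiling comes from, and why anything drawing meaningfully more than that pushes vendors toward new connectors or out-of-spec designs.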

Honestly, it makes me very curious how incredible or diabolically stupid Hopper will be if it goes the chiplet route, as per rumors.
 

jpiniero

Lifer
Oct 1, 2010
16,637
7,121
136
I wonder if they're locking the 3090 to FE only like the Titan was

Nope, it looks like there will even be custom AIB cards. A Titan is still possible, but the difference wouldn't be much unless you really wanted 48 GB of RAM.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Resurrecting the 103 die is a good idea.

I wonder however which SKUs will land on this die...
 

Midwayman

Diamond Member
Jan 28, 2000
5,723
325
126
I keep thinking about the 3090 and "why now".
  • Is it that they are trying to move the whole product stack up a price bracket?
  • Or is it that they know where RDNA2 is landing and they needed to do this to ensure that they still have the halo card?
It just doesn't feel right for them to release such a huge power sucking card without another motive.
 
  • Like
Reactions: spursindonesia

Gideon

Platinum Member
Nov 27, 2007
2,025
5,031
136

A 12-layer PCB is required even for custom boards (graphics cards usually have 6-8 layers), plus backdrilling. This ain't gonna be cheap.

I don't see the 3090 really costing less than $1,500 at this point (the reference price might be $1,200 or something, but it will carry the usual $200 markup that every 2080 Ti had).
 

jpiniero

Lifer
Oct 1, 2010
16,637
7,121
136
I keep thinking about the 3090 and "why now".
  • Is it that they are trying to move the whole product stack up a price bracket?
  • Or is it that they know where RDNA2 is landing and they needed to do this to ensure that they still have the halo card?
It just doesn't feel right for them to release such a huge power sucking card without another motive.

It was supposed to be on SS7; since it's on SS8, they were going to need a better cooler to account for the power draw increase.

That they set the power limit to 350 W instead of some lower stock value that could then be overclocked later may be due to AMD.
 

Gideon

Platinum Member
Nov 27, 2007
2,025
5,031
136
No, it was never supposed to be on the TSMC 7nm process.
He was talking about Samsung's 7nm EUV (not TSMC), which was supposed to be ready eons ago (but was delayed).

I also have a hunch that GDDR6X was supposed to offer better power efficiency than it ended up providing:
[Micron slide: GDDR6X power-efficiency comparison]


So there may have been two compounding reasons why NVIDIA had to up the TBP.

After all, it has happened before with bleeding-edge memory. Vega's HBM2 used more power than specified and still didn't quite reach the specified memory bandwidth (512 GB/s for two stacks).
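The efficiency claim on that slide is usually expressed in picojoules per bit, which translates to interface power once you multiply by sustained bandwidth. A minimal sketch of that conversion; the pJ/bit and bandwidth numbers below are illustrative placeholders, not Micron's published figures:

```python
def mem_power_watts(pj_per_bit: float, bandwidth_gbytes_s: float) -> float:
    """DRAM interface power = energy-per-bit * bits moved per second.
    1 pJ = 1e-12 J, and 1 GB/s = 8e9 bits/s."""
    bits_per_s = bandwidth_gbytes_s * 8e9
    return pj_per_bit * 1e-12 * bits_per_s

# Illustrative only: 7.5 pJ/bit at 760 GB/s of sustained bandwidth.
print(round(mem_power_watts(7.5, 760), 1))  # 45.6 W
```

This is why even a fraction of a pJ/bit missed versus target matters: at several hundred GB/s it shows up as whole watts of extra board power.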
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
EUV is a different beast. A100 is still 7nm DUV, so it makes even less sense to go with Samsung's 7nm process.