
Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 94

Ottonomous

Senior member
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 
Looking at the size of the cooler I'd say that RTX 3090 is actually 375W+ TDP.

Also, I'd say the RTX 3080 will be around 300W, unless they go for a 20 GB setup for this GPU; then we should expect up to 10% higher power draw.
 
I think RTX 3080 = 2080Ti + 30% (@300-320W) | RTX 3090 = 2080Ti + 55-65% (330-350W)

Those performance and power consumption estimates would imply that the RTX 3090 is 20-25% faster than the RTX 3080 while only using 30W or ~10% more power, which I find hard to believe. The RTX 3090 is supposed to have about 20% more cores than the RTX 3080, which makes sense if clocks were similar between the two. At a minimum that should mean there's roughly a 20% difference in power as well, so the RTX 3090 will come in at 350-375W.
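The scaling argument above can be sanity-checked with quick arithmetic (a sketch assuming power scales roughly linearly with core count at equal clocks; the core counts and TDPs are the rumored figures from this thread, not confirmed specs):

```python
# Rough sanity check: if power scales ~linearly with core count at similar
# clocks, a ~20% core-count gap implies ~20% more power. Figures are rumors.
def scaled_tdp(base_tdp_w: float, core_ratio: float) -> float:
    """Estimate the TDP of a larger chip from a smaller one at equal clocks."""
    return base_tdp_w * core_ratio

# Rumored RTX 3080 TDP range, scaled by ~20% more cores on the RTX 3090.
for base in (300, 320):
    print(base, "->", round(scaled_tdp(base, 1.20)))
# 300 -> 360, 320 -> 384: roughly consistent with the 350-375W ballpark
```

This is only a first-order estimate; in practice the bigger chip often ships at lower clocks, which pulls the real number back down.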
 
Unless they found a way to make dual-GPU work like a single GPU, what good is a dual-GPU card when multi-GPU support is almost non-existent these days?
Strong rumors are suggesting multi-tiled GPUs next gen from all the companies, so I would say there appear to be solutions that are not traditional SLI.
 
What if it will be 5% faster while consuming 100-125W of power more?
Definitely not worth it if that were the ratio. If performance/W is the same between RDNA2 and Ampere, then I don't think anyone would complain about 100W more power if it translates proportionally into performance. I do think people would accept 15% more performance for an additional 100W of power, especially if it's an Nvidia product (e.g. GTX 480). If it were AMD, not so much (e.g. R9 290X/Hawaii).
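The trade-off above comes down to perf/W ratios, which a quick calculation makes concrete (the numbers here are hypothetical, chosen only to illustrate the two cases being argued):

```python
def perf_per_watt(perf: float, power_w: float) -> float:
    """Performance per watt for arbitrary performance units."""
    return perf / power_w

# Hypothetical baseline card: 100 perf units at 300 W.
base = perf_per_watt(100, 300)

# Case 1: +5% performance for +100 W -> efficiency craters.
print(round(perf_per_watt(105, 400) / base, 2))  # 0.79: ~21% worse perf/W

# Case 2: proportional scaling, +33% perf for +33% power -> perf/W holds.
print(round(perf_per_watt(133, 400) / base, 2))  # 1.0: same efficiency
```

In other words, more absolute power is tolerable so long as the perf/W ratio stays flat; it's the first case that makes a card look bad.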
 
Strong rumors are suggesting multi-tiled GPUs next gen from all the companies, so I would say there appear to be solutions that are not traditional SLI.

I think Hopper is rumored to be just that, but I don't think it would be ready for Ampere.
 
Those performance and power consumption estimates would imply that the RTX 3090 is 20-25% faster than the RTX 3080 while only using 30W or ~10% more power, which I find hard to believe. The RTX 3090 is supposed to have about 20% more cores than the RTX 3080, which makes sense if clocks were similar between the two. At a minimum that should mean there's roughly a 20% difference in power as well, so the RTX 3090 will come in at 350-375W.

On the 3090 I suspect lower core clocks and boost speeds, and potentially lower memory speed too. Combined, yeah, 20-25% faster but up to 50W more on load.
For the 3080 I reckon ~19 Gbps memory and boosting to 2100 MHz on the core clock.
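For context, a per-pin speed like 19 Gbps converts to aggregate bandwidth as below (the 320-bit bus width here is my assumption for illustration; the post doesn't state one, and the 2080 Ti line uses its known 352-bit/14 Gbps configuration for comparison):

```python
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Aggregate memory bandwidth in GB/s: per-pin speed x bus width / 8 bits."""
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gbs(19.0, 320))  # 760.0 GB/s on an assumed 320-bit bus
print(bandwidth_gbs(14.0, 352))  # 616.0 GB/s, the 2080 Ti, for comparison
```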
 
That's next time.
I think Hopper is rumored to be just that, but I don't think it would be ready for Ampere.
That goes for Hopper (NV) and RDNA3 (AMD), but not "current" gen.
Unless SS 8nm is such a terrible node, do we really see such a massive cooling solution as realistic for a single die? It looks like about 2-3x the 2080's volume. An early iteration, as the Turing RT cores were?
 
Unless SS 8nm is such a terrible node, do we really see such a massive cooling solution as realistic for a single die? It looks like about 2-3x the 2080's volume. An early iteration, as the Turing RT cores were?

You would have to compare the size to OC cards.
 
Unless SS 8nm is such a terrible node, do we really see such a massive cooling solution as realistic for a single die? It looks like about 2-3x the 2080's volume. An early iteration, as the Turing RT cores were?

GDDR6x probably with lots of heat too.
 
That 3090 is huge. It's also strange for the fan to be located at the back. The custom AIB cards tend to be even larger than the Nvidia reference design as well.
 
Well, Coreteks said in a video back in June that it's dual-sided because the back-side fan cools the "traversal co-processor," and that this card is going to revolutionize the graphics industry. I recommend muting it and turning captions on, as it's pretty insufferable to listen to. For me, that is.
 
I really think we'll see something special from the 3rd generation of Tensor / Tensor 3.0. Not only that, but massive gains in effective TFLOPS using the new Ampere sparsity feature, giving huge increases to single precision (hence the larger-memory rumors we're seeing, to allow unstifled throughput).
Do you actually understand what you're saying, or did you just put a bunch of words together? I'm struggling to even figure out what you mean by sparsity in terms of general-purpose FLOPS, i.e. not talking about matrix multiply. Do you even know what a sparse matrix is?
 
Long time no post for me! Just thought I'd give my two cents on incoming Ampere cards.

It looks like Nvidia's decision to go with Samsung is going to dramatically hurt their efficiency this generation. The rumored 350 watt TDP is 100 watts more than the 2080 Ti's. Granted, that is "only" a 33% power increase for what will likely be a ~60% rasterization and ~80+% RT performance uplift, but when moving to a new node that is the worst generational rasterization efficiency increase since Fermi, and quite possibly the highest rated TDP for a single GPU ever.

Do any of you think Nvidia will migrate over to TSMC 7nm+(+) in ~16 months or so with die shrinks of Ampere (getting another 20-30% performance/watt improvement) and push Hopper out to a 36-month cycle instead of 24 months?
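The efficiency claim in the post above follows from simple arithmetic: generational perf/W improvement is the relative performance gain divided by the relative power increase. A quick check using the rumored figures (the 250 W baseline is the 2080 Ti's reference TDP; the post itself quotes +33%):

```python
def efficiency_gain(perf_uplift: float, power_increase: float) -> float:
    """Generational perf/W improvement from fractional perf and power changes."""
    return (1 + perf_uplift) / (1 + power_increase)

# Rumored +60% rasterization uplift, at the post's quoted +33% power...
print(round(efficiency_gain(0.60, 0.33), 2))       # 1.2: ~20% better perf/W
# ...or at +100 W over a 250 W baseline (a 40% increase).
print(round(efficiency_gain(0.60, 100 / 250), 2))  # 1.14: ~14% better perf/W
```

Either way, a full node shrink historically delivered far more than 14-20% perf/W, which is the basis of the "worst since Fermi" comparison.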
 
Do you actually understand what you're saying, or did you just put a bunch of words together? I'm struggling to even figure out what you mean by sparsity in terms of general-purpose FLOPS, i.e. not talking about matrix multiply. Do you even know what a sparse matrix is?

100%. I'm assuming Tensor 3.0 carries through the Ampere architecture; what I've read up on gives some credence to the rumors of higher-precision gains. (There is a new sparsity feature in Ampere that wasn't in Volta.)
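For reference on the disputed point: Ampere's sparsity feature is 2:4 structured sparsity, where at most two of every four consecutive weights are non-zero, letting the tensor cores skip the zeros and roughly double matrix-multiply throughput. It accelerates tensor-core matmuls, not general-purpose FP32. A minimal NumPy sketch of pruning weights to that pattern:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude values in each group of four,
    producing the 2:4 structured-sparse pattern Ampere accelerates."""
    w = weights.reshape(-1, 4).copy()
    # Indices of the two smallest |values| in each group of four.
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, -0.3, 0.6])
print(prune_2_4(w))  # the two smallest in each group of four become 0
```

Real deployments retrain after pruning to recover accuracy; this sketch only shows the pattern the hardware exploits.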
 
Long time no post for me! Just thought I'd give my two cents on incoming Ampere cards.

It looks like Nvidia's decision to go with Samsung is going to dramatically hurt their efficiency this generation. The rumored 350 watt TDP is 100 watts more than the 2080 Ti's. Granted, that is "only" a 33% power increase for what will likely be a ~60% rasterization and ~80+% RT performance uplift, but when moving to a new node that is the worst generational rasterization efficiency increase since Fermi, and quite possibly the highest rated TDP for a single GPU ever.

Do any of you think Nvidia will migrate over to TSMC 7nm+(+) in ~16 months or so with die shrinks of Ampere (getting another 20-30% performance/watt improvement) and push Hopper out to a 36-month cycle instead of 24 months?
I feel like AMD's cadence is going to outpace Nvidia's if Nvidia doesn't move to 5nm by this time next year. I don't know what kind of step-function jump a chiplet design will bring to GPUs, but judging by what AMD has done already with their CPUs, I fully expect AMD to have a serious chance of taking the performance crown next year, at least in traditional rasterization workloads, if Nvidia does not move to Hopper. TSMC only has so much 5nm capacity, and judging by AMD's better relationship with them in comparison to Nvidia's, it would not surprise me if Nvidia pays a premium for 5nm wafers or AMD's chiplet architecture is more tightly implemented with respect to TSMC's N5 node.
 
As I understand it, the reason for the cooler on the opposite side is strictly due to memory. The fan will actually spin at much lower speeds vs the front. Still definitely not ideal.
 
I feel like AMD's cadence is going to outpace Nvidia's if Nvidia doesn't move to 5nm by this time next year. I don't know what kind of step-function jump a chiplet design will bring to GPUs, but judging by what AMD has done already with their CPUs, I fully expect AMD to have a serious chance of taking the performance crown next year, at least in traditional rasterization workloads, if Nvidia does not move to Hopper. TSMC only has so much 5nm capacity, and judging by AMD's better relationship with them in comparison to Nvidia's, it would not surprise me if Nvidia pays a premium for 5nm wafers or AMD's chiplet architecture is more tightly implemented with respect to TSMC's N5 node.

For several years AMD has had the new-node advantage. As for a quicker cadence, we'll have to see next time what that looks like, because there hasn't really been any change for a while. I'd agree their sprint to new nodes and their roadmap are half decent. The thing is just the sheer R&D Nvidia, and soon Intel, have. On MCM I expect Nvidia to come out on top even on an older node. Don't see that changing anytime soon.
 
As I understand it, the reason for the cooler on the opposite side is strictly due to memory. The fan will actually spin at much lower speeds vs the front. Still definitely not ideal.
I'd like to know how hot those new memories actually run... and they probably sip quite a bit of power too. Makes me wonder if this really makes sense instead of going with HBM2.
 
Looks like the rumors of the furnace and the ridiculous cooler were correct. No way will this tri-slot monstrosity be the same price as a 2080Ti, I expect $1500-$2000.
 