
Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing around at cheaper prices while offering 'beefed-up RTX' options at the top?)
Will the top card be capable of more than 4K60, at least 4K90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in the forum members' thoughts.
 
"Damage control"? Wow. 350W with 35b xtors, 920gb/s and 24GB is allright. AMD has 300W with 12b xtors.
Where did you get that 35 billion transistor count?

How will that 350W compare to RDNA2 GPUs?

Your posts look like Nvidia pays you to post here, for stuff like this:
nVidia is putting out GPUs which are much faster than the consoles at the same price point. So everything above is really "luxury". That is one of the biggest problems with Microsoft and Sony. Using the same supplier limits every invention and performance difference. Microsoft pushed really hard with the first Xbox and the 360. The best and fastest hardware of their time. Now you can buy a two-year-old RTX 2070 with the same performance as a PS5...

So please, stop making damage control for Nvidia.
 
Kopite nailed everything.

Well except the 8nm part 😀

But yeah, if Nvidia needs a 627.12 mm² chip for 5248 CUDA cores @ 7nm (that's only 20% more cores than the 2080 Ti, which sat on a process no denser than TSMC 16nm), I'm pretty sure these cores are fatter than the ones in Turing (e.g. the 2x FP32 part).

Either that, or they have used massive amounts of die area for the new RT & Tensor cores.
 
Going back to performance expectations with respect to the rumored TDPs: if Nvidia managed only a meager 20% perf/W increase over Turing, then at 350W for the RTX 3090 vs. 250W for the RTX 2080 Ti, the RTX 3090 should be roughly 65-70% faster on average.

Looking at Turing's non-RTX designs, the 1660 Ti was about 15% more efficient than Pascal on the same node, so it's reasonable to expect Nvidia has made further perf/W gains, with a 20% improvement being a very conservative estimate.

Since they're likely well past the efficient part of the voltage/frequency curve, I'm going with a 30% improvement, which should put the 3090 at around 80% faster than the 2080 Ti at 4K.
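A quick back-of-the-envelope sketch of that math, using the rumored board powers and my assumed perf/W gains (none of this is confirmed):

```python
# Back-of-the-envelope perf estimate: performance scales with board power
# times perf-per-watt. 350W (3090) is the rumored TDP, 250W is the 2080 Ti;
# the perf/W gains are assumptions, not confirmed figures.

def estimated_uplift(new_power_w, old_power_w, perf_per_watt_gain):
    """Relative performance gain = (power ratio) * (perf/W ratio) - 1."""
    return (new_power_w / old_power_w) * (1.0 + perf_per_watt_gain) - 1.0

for gain in (0.20, 0.30):
    uplift = estimated_uplift(350, 250, gain)
    print(f"+{gain:.0%} perf/W -> ~{uplift:.0%} faster")

# +20% perf/W -> ~68% faster
# +30% perf/W -> ~82% faster
```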
 
Since they're likely well past the efficient part of the voltage/frequency curve, I'm going with a 30% improvement, which should put the 3090 at around 80% faster than the 2080 Ti at 4K.
If the specs are accurate, it should achieve these scores with 9% faster clocks, 20% more shaders and 50% more bandwidth. That would only be possible with some major architectural improvements.

EDIT
For comparison, the 2080 Ti is only 56% faster than the 2070 at 4K, even though it has 80% more shaders and 37% more memory bandwidth (though, yeah, slightly lower clocks at -5%).
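To put numbers on that, here's a naive shaders-times-clock scaling check against the observed/targeted 4K uplifts (spec deltas are the ones quoted above; bandwidth and architecture are deliberately ignored, which is the whole point):

```python
# Naive throughput scaling from shader count and clock deltas only, compared
# against the observed / targeted 4K uplift. Spec deltas are the rumored and
# quoted figures from this thread, not confirmed numbers.

def naive_scaling(shader_gain, clock_gain):
    return (1.0 + shader_gain) * (1.0 + clock_gain) - 1.0

# 2080 Ti vs 2070: +80% shaders, -5% clocks -> naive ~ +71%, observed ~ +56%
print(f"2080 Ti vs 2070 naive: +{naive_scaling(0.80, -0.05):.0%}")

# Rumored 3090 vs 2080 Ti: +20% shaders, +9% clocks -> naive ~ +31%,
# target ~ +80%, so the gap would have to come from architecture/bandwidth
print(f"3090 vs 2080 Ti naive: +{naive_scaling(0.20, 0.09):.0%}")
```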
 
If default TDP is actually more like 250W (not 350W), then my guess is that NV designed their reference 3090 so that enthusiasts who edit their BIOS to remove power limits won't kill their board?

Also guessing that NV's holodeck demo will require a 3090.
And that NV will suggest boosting power.
Hence the funky cooler.
It is clearly stated in the leak that 350W is based on non-overclocked models.
 
35b xtors within 627mm² is not possible on Samsung's 8nm. That would be 55M xtors/mm². It is TSMC's 7nm process.

It is possible; it would just have to be very dense for the node. If it were on either 7nm node, I would expect the power consumption to be a lot lower.
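For reference, the density implied by the quoted figures (35b transistors and 627.12 mm² are both from the leak, not confirmed):

```python
# Transistor density implied by the rumored GA102 figures in this thread
# (35 billion transistors, 627.12 mm² die); neither number is confirmed.
transistors = 35e9
die_area_mm2 = 627.12

density_mtr_per_mm2 = transistors / die_area_mm2 / 1e6
print(f"~{density_mtr_per_mm2:.1f} MTr/mm^2")  # ~55.8 MTr/mm^2
```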
 
Why? A100 runs at a max of 1430 MHz within 250W, and sustained is 90% of peak performance. The 3090 hits an average of 1700 MHz in gaming. That looks consistent when we consider that GA102 is stripped of all of those unnecessary transistors.
 
Maybe RTX 3070 uses Samsung's process?

350 W is not that bad for RTX 3090 considering how much fast VRAM it has. Assuming that the transistor count is true.

10GB for 3080 and 8GB for 3070 are big fails though.
 
Oh, the irony: all the people who insisted NVIDIA is using Samsung's 8nm, no matter how strange and nonsensical that looked and sounded, are now eating their conjectures and rumor regurgitation like it's nothing.

Again, expect a massive performance uplift from Ampere.
 
It was a direct question from press/investors. He's not lying.

I have to agree. He's not dumb, and that would be a bald-faced lie. Very interesting. I went from being "meh" about Ampere to excited. I sold my 2080 Ti and am too busy to game much these days, so it'll either be a 3070 or potentially a 3080 if I feel like splurging. Take my money.
 
It was a direct question from press/investors. He's not lying.

NVIDIA Next-Generation 7nm GPU: TSMC To Get The Bulk of Orders, Samsung’s EUV Has A Smaller Role

That is all he said. Nobody asked him anything about the 8nm process, or about products.
 
I have to agree. He's not dumb, and that would be a bald-faced lie. Very interesting. I went from being "meh" about Ampere to excited. I sold my 2080 Ti and am too busy to game much these days, so it'll either be a 3070 or potentially a 3080 if I feel like splurging. Take my money.
Yeah, and he told the press with a straight face that laptops with an RTX 2080 Max-Q are a better option than the next-gen consoles.

Yeah, that turned out to be 100% true.
 