
Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 27 - AnandTech Forums

Ottonomous

Senior member
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 
Wondering as well about pricing. I've got a new build, "KRONOS", in the works that will have dual 3080 Tis if the price doesn't cost me my soul. Hoping they stick to $1,200 or less, but I kept hearing a while ago that they had plans to reduce pricing this generation? They should work on that; if you're gonna insist on a $1,200 GPU, it should be a Titan-class GPU like before. Offer perhaps the 3080 Ti as the $799 solution? I would be fine if it's even $1,000 to account for whatever inflation people argue affects pricing over time.

Maybe I've got rose-tinted glasses on, but the price of a 2080 Ti compared to a 1080 Ti was quite a bit more. I get that the product stack has shifted, but it's given Nvidia a bad rep for apparent "gouging", since most don't understand how the stack works. I would be fine with a reasonably priced Titan based on the 3080 Ti, with a few more cores and more VRAM, costing like $1,200 perhaps?
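That generational jump at the Ti tier is easy to put in numbers. A quick back-of-envelope in Python, using the commonly cited Founders Edition launch prices (treat them as approximate):

```python
# Launch prices as commonly cited (Founders Edition MSRPs; approximate).
gtx_1080_ti = 699   # GTX 1080 Ti, March 2017
rtx_2080_ti = 1199  # RTX 2080 Ti Founders Edition, September 2018

increase = (rtx_2080_ti - gtx_1080_ti) / gtx_1080_ti
print(f"{increase:.0%}")  # 72% price jump at the "Ti" tier in one generation
```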

Hopefully Nvidia redeems themselves this generation. Unless the new prices are the norm, what can we do besides look at the competition or not buy at all? Most, like myself, will probably pick up a 3080 Ti, or in my case two, even if it's released at $1,200 like the 2080 Ti before it. I've got no issue waiting till November, because between needing a new NVLink-compatible mobo/PSU and the dual cards, I need an estimated $3,200 out the door max, HOPEFULLY, and for me that will take all year to save up. The first 3080 Ti won't be in my rig till June, assuming we are blessed with such a beautiful beast so soon. I've got my doubts about even a summer release, but I love a good surprise. Either way I'll have the funds ready for at least one. That would give me time to make sure it's not doing Space Invaders or something broken or stupid before I double down on the next.
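For what it's worth, the $3,200 out-the-door estimate checks out as rough arithmetic. A quick sketch, where the mobo/PSU figure is purely an assumption for illustration:

```python
# Rough budget check for the dual-GPU "KRONOS" build.
# All figures are assumptions for illustration, not real quotes.
gpu_price = 1200   # hoped-for 3080 Ti price, each
mobo_psu = 800     # assumed NVLink-capable motherboard + PSU
budget_cap = 3200  # stated out-the-door maximum

total = 2 * gpu_price + mobo_psu
print(total, total <= budget_cap)  # 3200 True - fits only if GPUs stay at $1,200
```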

Here is to hoping the 14th of May brings good news. :beercheers:

Going dual GPUs at all in this day and age is a complete waste. The majority of new games don't support it. And I highly doubt nVidia would ever drop the price to almost half of the previous version. If anything, I would expect an eventual 3080 Ti to cost more than the current one.
 
Going dual GPUs at all in this day and age is a complete waste. The majority of new games don't support it. And I highly doubt nVidia would ever drop the price to almost half of the previous version. If anything, I would expect an eventual 3080 Ti to cost more than the current one.

The purpose of the build is that if I decide I wanna play some BF5, I can load up the game on one GPU and let the second one fold at the same time. I'm not even gonna consider running many, if any, games in SLI/NVLink. When I go to bed I'll let the entire rig fold.

Whether the system plays nice with that idea will be up for debate. I played 2 games at the same time on my Q6600 back in 2007 for fun, and I was so blown away that I wanna try it again! Except now it's folding/gaming, on 6x as many threads and nearly 2x the clock speed.
 
If anything, I would expect an eventual 3080 Ti to cost more than the current one.

They can do both, I mean, look attractively priced and increase prices.

The RTX series offered a 30% improvement with a similar increase in prices, right? And extra features?

It's extremely easy to do if the top end offers a 60-70% gain. They can name the part that performs 30% faster the RTX 3080 Ti, and anything above that a new tier that sounds even better, such as RTX 3090 Ti and RTX 3098 Ti.

I know RussianSensation used to argue endlessly about how Nvidia subtly increased prices across their line with a segmentation play. Just by changing the name they instantly improved their revenue, because consumers were none the wiser.
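The renaming trick described above is easy to put in numbers. A toy calculation, with all performance and price figures assumed purely for illustration:

```python
# Illustrative segmentation math: keep the familiar name at the familiar
# price, and move the real top end into a new, higher-priced tier.
# All perf/price figures below are assumptions, not real specs.
old_top = {"name": "RTX 2080 Ti", "perf": 1.00, "price": 1200}
lineup = [
    {"name": "RTX 3080 Ti", "perf": 1.30, "price": 1200},  # +30%, same price
    {"name": "RTX 3090 Ti", "perf": 1.65, "price": 1800},  # new tier above it
]

old_ppd = old_top["perf"] / old_top["price"]
for card in lineup:
    rel = (card["perf"] / card["price"]) / old_ppd
    print(card["name"], f"{rel:.2f}x perf-per-dollar vs old flagship")
```

The "3080 Ti" looks like a clean 1.30x value win, while the new top tier only improves perf-per-dollar by 1.10x yet lifts the ceiling price by $600.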
 
They can do both, I mean, look attractively priced and increase prices.

The RTX series offered a 30% improvement with a similar increase in prices, right? And extra features?

It's extremely easy to do if the top end offers a 60-70% gain. They can name the part that performs 30% faster the RTX 3080 Ti, and anything above that a new tier that sounds even better, such as RTX 3090 Ti and RTX 3098 Ti.

I would be sorta fine with this idea, but of course I hope they can bring back the Titan name at a "more" reasonable price. I thought it was just fine prior to the RTX generation, plus with the new build named KRONOS, having dual Titans would be epic and fit right in with the theme and purpose of the build. I won't give up my soul for them though, and if the prices are higher than $1,200 a pop I may opt for just one, depending on price/performance, and get a lesser 3080 perhaps just to game on. Maybe swap which GPU does what depending on the particular game?

Please, please, Nvidia, bring back the Titan name at a cheaper price; the last one was WOW, just too much more for so little gain. I get that the thing has massive amounts of VRAM, but its performance is barely faster than a 2080 Ti and it costs well over $1,000 more right now.
 
NVIDIA secures 7nm and 5nm TSMC orders


Oh please give us a summer release, my body is so damn ready for Ampere. It's only been 3 years since the 1080 Ti came out, and if I had bought one new I would be going crazy sitting on something this long, BEGGING for an upgrade. Any chance, you think? I could have the funds for at least a single 3080 Ti in June without issue, but I wouldn't expect it to be out THAT early.

Due to that thing that shall not be named, I heard they may be pushing this release to October? That was before the May 14th announcement, so I'm not so sure now.
 

Who would have thought...?

😉

So let's sum up everything we know, based on the latest information. 7 nm Ampere is the GA100 chip, which is HPC only. The 8 nm products (Ampere? A new architecture?) that will land later in the year are low-end only, just like when the Maxwell architecture dropped. The 7 nm products are due for a later release, most likely H1 2021.

But if you had read my posts in this thread, you would know this already.
 
Sure, because nothing in this tweet says anything like it. And why would nVidia produce only "low end" chips on 8nm and wait over a year for the 7nm chips?
 
Sure, because nothing in this tweet says anything like it. And why would nVidia produce only "low end" chips on 8nm and wait over a year for the 7nm chips?
Read ALL of the articles. In the tweet, in the tweet thread, and on this very page of this thread. And then form the picture.
 
So let's sum up everything we know, based on the latest information. 7 nm Ampere is the GA100 chip, which is HPC only. The 8 nm products (Ampere? A new architecture?) that will land later in the year are low-end only, just like when the Maxwell architecture dropped. The 7 nm products are due for a later release, most likely H1 2021.
Actually no, not at all. Entry-level products are low-end gaming GPUs, and those are coming from Samsung. High-end gaming GPUs and HPC GPUs are 7nm and are coming this year from TSMC. This contradicts what you are saying. Why would NVIDIA launch low-end products before high-end ones? It doesn't make any sense.
 
Actually no, not at all. Entry-level products are low-end gaming GPUs, and those are coming from Samsung. High-end gaming GPUs and HPC GPUs are 7nm and are coming this year from TSMC. This contradicts what you are saying. Why would NVIDIA launch low-end products before high-end ones? It doesn't make any sense.

The reason would be that the high-end gaming GPUs originally taped out on Samsung's 7 nm process. So there's only GA100 on TSMC 7 nm, plus the low-end stuff, which was always on Samsung's usable 8 nm.
 
The reason would be that the high-end gaming GPUs originally taped out on Samsung's 7 nm process. So there's only GA100 on TSMC 7 nm, plus the low-end stuff, which was always on Samsung's usable 8 nm.
Ding, Ding, Ding 😉.

And the high-end stuff has been retaped and redesigned for TSMC's 7 nm process, which is exactly what everything points to. For the past half a year there were only rumors about what was happening at Nvidia. Now we have details, and the big picture is clear.

GA100 comes in late Q3 2020 (hard launch, physical release), and it is the first next-gen Nvidia GPU. When would they release high-end stuff like a 3080, when the 8 nm products only taped out recently? And the 8 nm products are low-end.

Q4 2020 for the low-end hardware.

All of this information gives you an idea of why Nvidia would release a GA100-based Titan A chip for the prosumer/consumer market.

And why would all of this happen?

Because Nvidia knows that with 7 nm products on Samsung's process, they wouldn't have a chance against AMD's RDNA2. Retaping their architecture on another process gives them the opportunity to give AMD a fight.

Still, it appears that all of the rumors are true, so I will put this here.

With RDNA2, AMD may have a slight rasterization advantage from top to bottom, but ray tracing is where Nvidia will shine. So if you are genuinely interested in RTRT, be prepared for next gen from Nvidia with your wallets 🙂.
 

Who would have thought...?

😉

So let's sum up everything we know, based on the latest information. 7 nm Ampere is the GA100 chip, which is HPC only. The 8 nm products (Ampere? A new architecture?) that will land later in the year are low-end only, just like when the Maxwell architecture dropped. The 7 nm products are due for a later release, most likely H1 2021.

But if you had read my posts in this thread, you would know this already.
but wccftech said that the RTX 3060 will beat the 2080 Ti in Q1 2021, and that in Q3 2020 the 3080 Ti will have 8192 CUDA cores /smh
:tearsofjoy: :laughing:
 
So let me rephrase it:
nVidia, going from 16nm to 7nm, will have a slight disadvantage in rasterization when the competition on 7nm has worse efficiency at the same transistor count?

Am I the only one who thinks that doesn't make any sense?
Yes, because you live in the Nvidia reality distortion field bubble.

RDNA1 already has the same IPC as Turing. RDNA2 brings a lot of improvements, both in architecture and in physical design. But this is not the AMD thread, so let's not turn this into an Nvidia vs AMD fight.

Let's start talking more about the ray tracing tech in next-gen Nvidia GPUs, because there is a lot to talk about.
but wccftech said that the RTX 3060 will beat the 2080 Ti in Q1 2021, and that in Q3 2020 the 3080 Ti will have 8192 CUDA cores /smh
:tearsofjoy: :laughing:
In heavily ray-traced games, and I mean HEAVILY, a 5% advantage of the RTX 3060 over the 2080 Ti still fits the bill of the RTX 3060 beating the RTX 2080 Ti 😉.

It's always about the context. It is believable, but only under CERTAIN CONDITIONS that have to be met. The RTX 3060 will not beat the RTX 2080 Ti in rasterization.

In ray tracing, though?
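That "only under certain conditions" point can be sketched as a toy frame-time model; the relative throughput numbers below are assumptions for illustration, not benchmarks:

```python
# Toy model of "beats it only in heavily ray-traced games".
# Throughput figures are made up to illustrate the argument.
raster = {"2080 Ti": 1.00, "3060 (hyp.)": 0.70}  # assumed raster throughput
rt     = {"2080 Ti": 1.00, "3060 (hyp.)": 1.05}  # assumed +5% RT throughput

def fps_score(card, rt_share):
    """Relative frame rate when rt_share of frame time is ray-tracing work."""
    return 1 / (rt_share / rt[card] + (1 - rt_share) / raster[card])

for share in (0.0, 0.5, 1.0):
    print(share, round(fps_score("3060 (hyp.)", share), 2))
# The hypothetical 3060 only passes the 2080 Ti (score 1.0) as the
# RT share of the frame approaches 100%.
```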
 
Yes, because you live in the Nvidia reality distortion field bubble.

RDNA1 already has the same IPC as Turing. RDNA2 brings a lot of improvements, both in architecture and in physical design. But this is not the AMD thread, so let's not turn this into an Nvidia vs AMD fight.

Let's start talking more about the ray tracing tech in next-gen Nvidia GPUs, because there is a lot to talk about.

"Same IPC"? Who cares. TU116 is 30% more efficient than Navi12 with the same number of transistors and a wider memory bus.

So let me rephrase again: with three years between Volta and Ampere, and two nodes between 7/8nm and 16nm, we won't see any IPC and efficiency gains?
 

Who would have thought...?

😉

So lets sum up everything we know, based on latest information. 7 nm Ampere is GA100 chip, that is HPC only. 8 nm products(ampere? New architecture?), that will land later in the year are low-end only, just like Maxwell architecture dropped. 7 nm products, are due for later release, most likely H1 2021.

But if you would read my posts in this thread you would know this already.
There can be no doubt for the deeply religious of any ilk.
 
"Same IPC"? Who cares. TU116 is 30% more efficient than Navi12 with the same number of transistors and a wider memory bus.

So let me rephrase again: with three years between Volta and Ampere, and two nodes between 7/8nm and 16nm, we won't see any IPC and efficiency gains?
Turing used the same physical design improvements (optimizations) as Volta, which was the EXACT reason the 16 nm TSMC process was called "12 nm FFN" (FinFET NVIDIA) by TSMC and Nvidia.

Now I have two questions for you.

1) What makes you believe Nvidia would just take those physical improvements to another fab vendor and reuse the same efficiency optimizations?
2) What makes you believe that Samsung's processes in general, not only the 8 nm but also the 7 nm process, allow GPU designs to clock past 2 GHz?

TSMC's 7 nm process allowed AMD to clock way past the 2 GHz barrier, which resulted in a 2.23 GHz GPU in the PS5. What makes you believe that is the upper limit of this process? This alone is why AMD claims that RDNA2 GPUs have a 50% improvement in performance per watt over RDNA1 (that, alongside the IPC improvements).
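For reference, AMD's stated 50% perf-per-watt uplift can be unpacked with a quick sketch; the 225 W baseline below is an assumption picked purely to make the arithmetic concrete:

```python
# What a "50% better performance per watt" claim implies, under an
# assumed baseline: normalized RDNA1 performance at an assumed 225 W.
rdna1_perf, rdna1_power = 1.00, 225.0
ppw_gain = 1.50  # AMD's stated RDNA2 perf/W uplift over RDNA1

rdna2_ppw = (rdna1_perf / rdna1_power) * ppw_gain
perf_at_same_power = rdna2_ppw * rdna1_power  # +50% performance at equal power
power_at_same_perf = rdna1_perf / rdna2_ppw   # or equal performance at ~150 W
print(perf_at_same_power, round(power_at_same_perf))  # 1.5 150
```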

The huge difference in process characteristics is the blatant reason Nvidia decided to retape the higher-end products on the 7 nm process.

Because no matter what they do, a 300-400 MHz disadvantage in core clocks versus AMD is an automatic loss. A loss in performance, efficiency, and mindshare.

If you know all of this, and how bad Samsung's process effectively is compared to TSMC's, also in the context of chip density (next-gen Nvidia GPUs will be HUGE), why is it so hard to understand the REASONS why all of this would be happening?

Nvidia could get to similar physical design optimizations on Samsung's process, but it would take tens of months of optimizing the design. They simply have no time left on the table to afford this. RDNA1 was a pipe cleaner for AMD, effectively a physical design experiment that allowed AMD, Microsoft, and Sony to improve the physical design to a level similar to Turing's.

P.S. There is no Navi 12 GPU. What are you talking about?
 
"Same IPC"? Who cares. TU116 is 30% more efficient than Navi12 with the same number of transistors and a wider memory bus.

So let me rephrase again: with three years between Volta and Ampere, and two nodes between 7/8nm and 16nm, we won't see any IPC and efficiency gains?

What is a Navi12? There is Navi10 and Navi14. As for efficiency, the RX 5500 and 1650 Super are basically neck and neck; there is no 30% difference. As for transistor count, it's 6.4B vs 6.6B, again very similar.

And we should fully expect Ampere to be better than Volta. But Ampere will mostly be an HPC-only card, just like Volta, and that has almost no bearing on the lower-end cards.
 
What is the 5500 using? Navi14?
Turing used the same physical design improvements (optimizations) as Volta, which was the EXACT reason the 16 nm TSMC process was called "12 nm FFN" (FinFET NVIDIA) by TSMC and Nvidia.

Now I have two questions for you.

1) What makes you believe Nvidia would just take those physical improvements to another fab vendor and reuse the same efficiency optimizations?
2) What makes you believe that Samsung's processes in general, not only the 8 nm but also the 7 nm process, allow GPU designs to clock past 2 GHz?

TSMC's 7 nm process allowed AMD to clock way past the 2 GHz barrier, which resulted in a 2.23 GHz GPU in the PS5. What makes you believe that is the upper limit of this process? This alone is why AMD claims that RDNA2 GPUs have a 50% improvement in performance per watt over RDNA1 (that, alongside the IPC improvements).

The huge difference in process characteristics is the blatant reason Nvidia decided to retape the higher-end products on the 7 nm process.

Because no matter what they do, a 300-400 MHz disadvantage in core clocks versus AMD is an automatic loss. A loss in performance, efficiency, and mindshare.

If you know all of this, and how bad Samsung's process effectively is compared to TSMC's, also in the context of chip density (next-gen Nvidia GPUs will be HUGE), why is it so hard to understand the REASONS why all of this would be happening?

Nvidia could get to similar physical design optimizations on Samsung's process, but it would take tens of months of optimizing the design. They simply have no time left on the table to afford this. RDNA1 was a pipe cleaner for AMD, effectively a physical design experiment that allowed AMD, Microsoft, and Sony to improve the physical design to a level similar to Turing's.

P.S. There is no Navi 12 GPU. What are you talking about?

So basically you have no clue and are making things up. Okay.
nVidia is using TSMC's 7nm process for the HPC chip while having no idea what Samsung's process can achieve...
 
What is the 5500 using? Navi14?


So basically you have no clue and are making things up. Okay.
nVidia is using TSMC's 7nm process for the HPC chip while having no idea what Samsung's process can achieve...
Engineers working at Intel, Nvidia, and AMD do have an idea of what Samsung's process can do.
 
And yet you claim that nVidia has gone with Samsung's process just to redo their finished products, while at the same time working closely with TSMC on their big, huge HPC chip. Do you see any sense in that?
 
And yet you claim that nVidia has gone with Samsung's process just to redo their finished products, while at the same time working closely with TSMC on their big, huge HPC chip. Do you see any sense in that?
I think you should work on your reading comprehension skills, mate.
 