Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 195

Ottonomous

Senior member
May 15, 2014
559
292
136
How much of a gain is the Samsung 7nm EUV process expected to provide?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of more than 60 fps at 4K, ideally at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in the forum members' thoughts.
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Is taking every single claim AMD has made thus far, without concrete evidence from real-life scenarios, an old way of thinking? I've read what it can do, thank you very much, I don't need clarification; pardon me if I don't believe everything I see without first validating it..
While you are thinking in future terms, you obviously seem to dismiss every single asset Nvidia has to offer this gen? Dude srsly?
Would I have said what before? This is exactly my point: some people see much higher available VRAM and think that this is the holy grail by itself.. Did you read the comments before mine and why I started commenting in the first place, or did those not bother you?
Nevermind, I think those that understand the points know perfectly well what we are both talking about here..
I'm not trying to persuade anyone to buy anything, I simply don't care, but seeing heavily biased opinions with wishful thinking is not my cup of tea.. That is all..
Nvidia claimed that Ampere is 1.9x more efficient than Turing, and yet you jumped on their bandwagon of blatant lies ;).

This is just a joke ;).

P.S. As with any Radeon release, the anti-AMD agenda has moved this time to the topic of DLSS and RT performance. AMD can straight up win in value, rasterization performance, and efficiency, and people will still complain their products are not good enough.

Because those products are not Nvidia branded ;).

Heck, now we are discussing Infinity Cache as something NEGATIVE, until it's obviously proven positive, given the latest track record of people's disbelief in AMD's capabilities.
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Nvidia claimed that Ampere is 1.9x more efficient than Turing, and yet you jumped on their bandwagon of blatant lies ;).

This is just a joke ;).

P.S. As with any Radeon release, the anti-AMD agenda has moved this time to the topic of DLSS and RT performance. AMD can straight up win in value, rasterization performance, and efficiency, and people will still complain their products are not good enough.

Because those products are not Nvidia branded ;).

Heck, now we are discussing Infinity Cache as something NEGATIVE, until it's obviously proven positive, given the latest track record of people's disbelief in AMD's capabilities.
Lol wut? Who jumped on the hype bandwagon? You know me? I play at 4K, thus I need a better GPU, plus I needed an HDMI 2.1-capable one asap. First of all, do you even know what GPU I had before? I had a 2080 Ti, and even if I'd had a 2080, Nvidia said UP TO 1.9x, which a person with common sense and a bit of experience understands was marketing BS.. Secondly, what on earth are you talking about? When I need a GPU I buy it, simple as that; I've had mine for almost a month now, I never wait for anything, so if you don't know someone, don't go spreading your weird perspective on people's choices.. If I want something better and I find it, I'll get it..
I'm sorry, but I won't even bother answering the rest of your post. It has absolutely nothing to do with what I said, so I don't even know where to begin, and I'll simply ignore it. Go find someone else to troll and preach your AMD gospel to, wrong person.
And in case you have forgotten, the title says Ampere, not AMD, so guess who is the one with the agenda, trying to defend his beloved company in the wrong thread to begin with.. Sigh, fanbois, they want to preach objectivity lmao
 
Last edited:
  • Haha
Reactions: Glo.

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Lol wut? Who jumped on the hype bandwagon? You know me? I play at 4K, thus I need a better GPU, plus I needed an HDMI 2.1-capable one asap. First of all, do you even know what GPU I had before? I had a 2080 Ti, and even if I'd had a 2080, Nvidia said UP TO 1.9x, which a person with common sense and a bit of experience understands was marketing BS.. Secondly, what on earth are you talking about? When I need a GPU I buy it, simple as that; I've had mine for almost a month now, I never wait for anything, so if you don't know someone, don't go spreading your weird perspective on people's choices.. If I want something better and I find it, I'll get it..
I'm sorry, but I won't even bother answering the rest of your post. It has absolutely nothing to do with what I said, so I don't even know where to begin, and I'll simply ignore it. Go find someone else to troll and preach your AMD gospel to, wrong person.
And in case you have forgotten, the title says Ampere, not AMD, so guess who is the one with the agenda, trying to defend his beloved company in the wrong thread to begin with.. Sigh, fanbois, they want to preach objectivity lmao
Right...

As I said, it was a joke, a jab I made to poke fun at Nvidia's marketing.

I don't care what you've bought.

You accuse me of bringing an AMD agenda into an Nvidia thread, and yet it was YOU who brought it up first, on the last page, discussing the viability of Infinity Cache and defending your purchase of an RTX 3080 (buyer's remorse already?).

I find it completely ridiculous that you don't want to believe Infinity Cache is a good choice, considering that ON-CHIP memory will always offer more flexibility over which data gets low-latency access than external memory systems do, and that includes GDDR6.

Lower latency means performance AND efficiency. Secondly, the benefit of a large on-die cache is that its bandwidth scales directly with clock speed. So if AIB models have higher clock speeds, the cache will have higher bandwidth. And lower latency. And higher efficiency.
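To make that scaling point concrete, here is a rough back-of-the-envelope sketch; every number in it (cache bandwidth, clocks, hit rate, GDDR6 bandwidth) is a placeholder assumption of mine, not an AMD spec:

```python
# Hypothetical effective-bandwidth model for a large on-die cache.
# All figures below are illustrative assumptions, not official numbers.

def effective_bandwidth(core_clock_ghz, hit_rate,
                        cache_bw_at_ref_gbs=2000.0,  # assumed cache BW at the reference clock
                        ref_clock_ghz=2.0,
                        dram_bw_gbs=512.0):          # 256-bit GDDR6 @ 16 Gbps
    """Blend on-die cache and external GDDR6 bandwidth (GB/s).

    The cache term scales with core clock; the DRAM term does not.
    """
    cache_bw = cache_bw_at_ref_gbs * (core_clock_ghz / ref_clock_ghz)
    return hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw_gbs

# Reference clock vs. a hypothetical AIB overclock, at a guessed 60% hit rate.
print(effective_bandwidth(2.0, 0.60))   # ~1405 GB/s
print(effective_bandwidth(2.3, 0.60))   # ~1585 GB/s -- only the cache portion grows
```

The per-game hit rate is the big unknown here, which is exactly why the raw 512 GB/s figure alone doesn't tell the whole story.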

Maybe AMD hasn't released the whitepaper yet, but it's pretty apparent that their idea can be summed up this way: "it doesn't matter how much data you have available; it matters how you use it".

End of a topic that I didn't even start.
 
  • Like
Reactions: lobz and PhoBoChai

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
A piece of advice, because I don't have the time to explain myself to a stranger in a random topic on the internet: go back and read my first reply to this thread, and take some time to read before you say things that only you see and believe. I didn't accuse you of anything. YOU quoted me because you felt the urge to say things that I didn't say in the first place.. Honestly, I'm out of words and I don't even know why I answer to ppl like you.. And I don't even care what you find absurd or not, I don't want to start a conversation with people like you.. Got it now? I can't paint the picture any clearer for you..
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
A piece of advice, because I don't have the time to explain myself to a stranger in a random topic on the internet: go back and read my first reply to this thread, and take some time to read before you say things that only you see and believe. I didn't accuse you of anything. YOU quoted me because you felt the urge to say things that I didn't say in the first place.. Honestly, I'm out of words and I don't even know why I'm even replying to ppl like you.. Speechless.. 😂 😂
This is what you wrote to me post before.
Go find someone else to troll and preach your AMD gospel to, wrong person.
And in case you have forgotten, the title says Ampere, not AMD, so guess who is the one with the agenda, trying to defend his beloved company in the wrong thread to begin with.. Sigh, fanbois, they want to preach objectivity lmao
To which I replied with this part:
You accuse me of bringing an AMD agenda into an Nvidia thread, and yet it was YOU who brought it up first, on the last page, discussing the viability of Infinity Cache and defending your purchase of an RTX 3080 (buyer's remorse already?).

I quoted you to make a joke, over which you lost your mind and then proceeded with a personal attack.
 
  • Like
Reactions: lobz

PhoBoChai

Member
Oct 10, 2017
119
389
106
Nvidia claimed that Ampere is 1.9x more efficient than Turing, and yet you jumped on their bandwagon of blatant lies ;).

This is just a joke ;).

P.S. As with any Radeon release, the anti-AMD agenda has moved this time to the topic of DLSS and RT performance. AMD can straight up win in value, rasterization performance, and efficiency, and people will still complain their products are not good enough.

Because those products are not Nvidia branded ;).

Heck, now we are discussing Infinity Cache as something NEGATIVE, until it's obviously proven positive, given the latest track record of people's disbelief in AMD's capabilities.

Perf/w used to matter a lot. Like a LOT.

Then 3080 and 3090 happened. Seen some of those custom models? 480W and 2% faster than stock. What a laugh!

If you're honest with yourself: perf/W always matters. Perf/$ matters. Driver stability matters. Driver features matter. All of these things are important, some more than others, and it depends on the individual. Just don't lie to yourself when your favorite brand is losing in these metrics. These are for-profit companies; they don't care about you. Don't care so much about them either: use the best product for the job and ignore the brand.
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Just don't lie to yourself when your favorite brand is losing in these metrics. These are for-profit companies; they don't care about you. Don't care so much about them either: use the best product for the job and ignore the brand.

Finally, a sensible post.. Don't even bother, I shouldn't either. Fanbois from either side will always try to push their distorted agenda. You say A, but they go to B, because that's what they do.
 
  • Like
Reactions: PhoBoChai

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Perf/w used to matter a lot. Like a LOT.

Then 3080 and 3090 happened. Seen some of those custom models? 480W and 2% faster than stock. What a laugh!

If you're honest with yourself: perf/W always matters. Perf/$ matters. Driver stability matters. Driver features matter. All of these things are important, some more than others, and it depends on the individual. Just don't lie to yourself when your favorite brand is losing in these metrics. These are for-profit companies; they don't care about you. Don't care so much about them either: use the best product for the job and ignore the brand.
You're telling that to me? A person who only cares about GPUs that use no more than 125W under gaming load and cost no more than $250? ;)
 
  • Like
Reactions: lobz

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
I'm not really sure NVidia could have used a bigger bus considering where the TDPs of these cards already are. Sure, moving to a 512-bit bus would have solved a lot of problems, but it creates its own set.

Since no one really has good enough RT tech to pull off the kind of frame rates that consumers expect at native resolutions, it wouldn't surprise me if NVidia had this great idea that everyone would just run DLSS all the time, which negates some of the need for more VRAM.

Maybe that sounds utterly stupid, but NV customers have seemed happy to be led along like that, so why should NVidia think they wouldn't go along with this as well? Brand loyalty can lead to some positive effects, but it can create a really toxic situation as well. Be loyal to a set of ideals, not to anyone who just happens to have lived up to them once upon a time.
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
I'm not really sure NVidia could have used a bigger bus considering where the TDPs of these cards already are. Sure, moving to a 512-bit bus would have solved a lot of problems, but it creates its own set.

Since no one really has good enough RT tech to pull off the kind of frame rates that consumers expect at native resolutions, it wouldn't surprise me if NVidia had this great idea that everyone would just run DLSS all the time, which negates some of the need for more VRAM.

Maybe that sounds utterly stupid, but NV customers have seemed happy to be led along like that, so why should NVidia think they wouldn't go along with this as well? Brand loyalty can lead to some positive effects, but it can create a really toxic situation as well. Be loyal to a set of ideals, not to anyone who just happens to have lived up to them once upon a time.

Well, I don't think a 512-bit bus is possible with GDDR6 or GDDR6X. IIRC it's something to do with the specs and signal integrity making 384-bit the widest possible.

As for DLSS lowering VRAM usage, yeah, but ray tracing increases it, depending on the complexity of the scene and the BVH structure; it can add 2GB+ of VRAM usage in current games. Though if you're rendering internally at 1080p, VRAM doesn't matter as much as at native 4K. It's just a shame that NV didn't push DLSS more; the good 2.0 version is only in a handful of titles.
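As a rough illustration of the internal-resolution point (the buffer layout and byte counts below are made-up assumptions, and textures/geometry, which don't shrink with resolution, are ignored):

```python
# Rough render-target footprint: native 4K vs. a 1080p internal (DLSS-style) resolution.
# The buffer list and bytes-per-pixel values are illustrative guesses only.

def render_targets_mb(width, height, targets):
    """Total size in MiB of per-pixel buffers given (name, bytes_per_pixel) pairs."""
    return sum(width * height * bpp for _, bpp in targets) / (1024 ** 2)

targets = [("hdr_color", 8), ("gbuffer", 8), ("depth", 4)]   # ~20 bytes/pixel assumed

print(render_targets_mb(3840, 2160, targets))   # ~158 MiB at native 4K
print(render_targets_mb(1920, 1080, targets))   # ~40 MiB at 1080p internal
```

Per-pixel buffers shrink roughly 4x at the lower internal resolution, but the BVH and textures stay resident either way, which is why RT can still add a couple of GB on top.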
 
  • Like
  • Haha
Reactions: scineram and Olikan

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
I'm not really sure NVidia could have used a bigger bus considering where the TDPs of these cards already are. Sure, moving to a 512-bit bus would have solved a lot of problems, but it creates its own set.

Since no one really has good enough RT tech to pull off the kind of frame rates that consumers expect at native resolutions, it wouldn't surprise me if NVidia had this great idea that everyone would just run DLSS all the time, which negates some of the need for more VRAM.

Maybe that sounds utterly stupid, but NV customers have seemed happy to be led along like that, so why should NVidia think they wouldn't go along with this as well? Brand loyalty can lead to some positive effects, but it can create a really toxic situation as well. Be loyal to a set of ideals, not to anyone who just happens to have lived up to them once upon a time.

You don't need a 512-bit bus to fit 12GB of VRAM; you need the exact same bus the 3090 has, which is 384-bit.. And the TDP, as you've seen, doesn't change that significantly because of it; the 3090 is 350W.. Nvidia just didn't do it because it wanted to keep a Ti or a refresh for later down the road.. Simple tactics from the green team..
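For reference, the arithmetic behind that (the per-pin data rate is my assumption, in line with the 3080's GDDR6X, not a confirmed spec for any hypothetical 12GB card):

```python
# Hypothetical 384-bit GDDR6X configuration with 1GB (8 Gb) chips.
bus_width_bits = 384
chip_width_bits = 32        # each GDDR6X package exposes a 32-bit interface
chip_capacity_gb = 1        # current GDDR6X density
data_rate_gbps = 19         # assumed per-pin rate, same as the RTX 3080

chips = bus_width_bits // chip_width_bits            # 12 chips
capacity_gb = chips * chip_capacity_gb               # 12 GB
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8  # 912 GB/s

print(chips, capacity_gb, bandwidth_gbs)             # 12 12 912.0
```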
About RT, that's the thing. Nvidia already has DLSS 2.x, which looks fantastic, and that's a big plus given its apparent comparative advantage over AMD in RT performance.. Combined, DLSS and RT are a great combo that translates to high framerates and great RT effects..
There's no such thing as brand loyalty, at least for people like me who buy high end every single generation. It's simple: whoever gives me the best product, I'll take it.. And since Nvidia has been on top for the last few years, Nvidia it was.. I still think Nvidia has the more balanced package, for reasons I've repeatedly explained in this topic. Others don't think so. It's OK; at least we have great GPUs from both companies after a long time, so let's stick to that, shall we?
 
  • Like
Reactions: lightmanek

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
You don't need a 512-bit bus to fit 12GB of VRAM; you need the exact same bus the 3090 has, which is 384-bit.. And the TDP, as you've seen, doesn't change that significantly because of it; the 3090 is 350W.. Nvidia just didn't do it because it wanted to keep a Ti or a refresh for later down the road.. Simple tactics from the green team..
About RT, that's the thing. Nvidia already has DLSS 2.x, which looks fantastic, and that's a big plus given its apparent comparative advantage over AMD in RT performance.. Combined, DLSS and RT are a great combo that translates to high framerates and great RT effects..
There's no such thing as brand loyalty, at least for people like me who buy high end every single generation. It's simple: whoever gives me the best product, I'll take it.. And since Nvidia has been on top for the last few years, Nvidia it was.. I still think Nvidia has the more balanced package, for reasons I've repeatedly explained in this topic. Others don't think so. It's OK; at least we have great GPUs from both companies after a long time, so let's stick to that, shall we?

GDDR6X memory chips are only made in 1GB sizes. So there is no way to get 12GB currently unless nVidia paid Micron to develop a 512MB version as well, since GDDR6X is proprietary and only used by nVidia.

As for the 3090's power usage, the 350W figure is inaccurate, which is par for the course with nVidia GPUs. Only this time, not only is the average power consumption of the 3090 FE higher than 350W, it has transient spikes of up to 490W, which is enough to trip the over-current protection on a lot of PSUs, which many reviewers ran into. The AIB cards with the 450W TDP BIOS hit transients close to 600W.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
GDDR6X memory chips are only made in 1GB sizes. So there is no way to get 12GB currently unless nVidia paid Micron to develop a 512MB version as well, since GDDR6X is proprietary and only used by nVidia.

What?

GA102 has a maximum of 12 32-bit channels, so 12 GB using 12 of those 1 GB chips is exactly what it is designed for.
 
  • Like
Reactions: Mopetar

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
What?

GA102 has a maximum of 12 32-bit channels, so 12 GB using 12 of those 1 GB chips is exactly what it is designed for.

My comment assumed nVidia kept the same 384-bit memory interface the 3090 has (as the post I quoted said "exact same bus as 3090"), which would mean 24 512MB memory chips. So yes, they could go with a different board than the 3090's, with memory only on the front side, redesigned for 32 bits per module instead of 16 bits per module.
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
My comment assumed nVidia kept the same 384-bit memory interface the 3090 has (as the post I quoted said "exact same bus as 3090"), which would mean 24 512MB memory chips. So yes, they could go with a different board than the 3090's, with memory only on the front side, redesigned for 32 bits per module instead of 16 bits per module.

Well, that was poor phrasing on my part, I apologize. I think what I meant was understood, but anyway..
 
  • Like
Reactions: Stuka87

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Independent reviews will tell you what? Whether it will suffice or not? Nobody knows with 100% certainty, mate, whether more VRAM will be needed.. Most probably it will, but not with certainty. But you asked which one will be more bandwidth-limited, and in the literal sense the 6800 XT is much more so than the 3080.. The 3080 has ~48% more memory bandwidth than the 6800 XT, that is 760 vs 512 GB/s. Don't confuse memory bandwidth with the amount of VRAM; they are both equally important but serve complementary, different purposes.
I didn't read this anywhere, it's pure logic, and I don't expect the average Joe to care about that. The thing is, all I see on the net is that this X amount of VRAM is much higher than that Y amount of VRAM.. Well, all I'm saying is that it's not that simple; measuring the amount of VRAM without taking into account the most important thing, the memory bandwidth, is like judging the quality of photographic equipment solely on megapixels..
And don't expect things to change so easily; 10GB of GDDR6X will most probably be just fine, at least for the next couple of years. 12GB of the same fast memory would have been the sweet spot; 16GB is plain waste.. AMD didn't put in 16GB of GDDR6X, it put in 16GB of cheaper and slower GDDR6. If it had used GDDR6X in those amounts, the price would have been at least that of the 6900 XT, if not more.
Also keep in mind that in the vast majority of cases, even high-end cards like the 3080 and 6800 XT will hit fillrate limitations before they ever reach bandwidth limitations.. (meaning 9 times out of 10, these cards run out of performance in a given modern game at 4K before they reach a bandwidth limitation..). A recent example is Watch Dogs: Legion, which averages 30-32 fps at 4K without DLSS and with RT on Ultra; it doesn't have any bandwidth limitations though, so I think you get the idea by now..



No it will not; that's your idea of a poor market prognosis. As I said, 12GB would have been perfect but 10GB will suffice. I own one (a 3080), I've been playing at 4K for more than 5 years now (980 Ti SLI, 1080 Ti, 2080 Ti and now a 3080), and let me say I know the quirks of this resolution pretty well by now across a vast array of games, well enough to understand that 10GB of GDDR6X will be enough. If anything, I think 16GB is there purely to impress the average Joe who thinks that 16 >>>> 10, so the 6800 XT will be quicker in memory terms, when, as I previously mentioned, the 3080 actually has much higher raw memory bandwidth at 760GB/s while the 6800 XT has only 512GB/s..
Man, what are you on about? You're contradicting your own post and you're talking about things that I already answered in my previous post :D You can't know how both cards will handle memory bandwidth, and it is absolutely not a ~48% difference, because of the huge last-level cache; and the reason I brought up independent reviews is this: no matter what I tell you about this, you have no reason to believe me (or simple facts), but you can believe the benchmarks two weeks later :)
Also, where have I ever confused bandwidth with the amount of VRAM? First cite that post of mine, and then I'll tell you what you read wrong, or which words you've learned wrong (such as 'sufficient' and 'limited'). Until then this conversation is pointless. You're talking to me as if you were a teacher in a class of third graders.
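For what it's worth, the raw numbers both of you keep quoting fall straight out of bus width times per-pin data rate; the cache-adjusted effective figure is the part nobody outside AMD knows yet:

```python
# Raw memory bandwidth from bus width and per-pin data rate (GB/s).
def raw_bw(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

bw_3080 = raw_bw(320, 19)      # 760.0 GB/s (10x 32-bit GDDR6X @ 19 Gbps)
bw_6800xt = raw_bw(256, 16)    # 512.0 GB/s (8x 32-bit GDDR6 @ 16 Gbps)

print(bw_3080 / bw_6800xt - 1)   # ~0.48 -> the 3080 has ~48% MORE raw bandwidth
print(1 - bw_6800xt / bw_3080)   # ~0.33 -> the 6800 XT has ~33% less, before any cache effect
```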
 
Last edited:
  • Haha
Reactions: jim1976

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
So I'll tell you what.. Let's wait and see what Infinity Cache will eventually do, let's wait and see how AMD performs across a vast array of titles
This is getting hilarious.
That's exactly what I suggested; then you went and wrote a page-long post about how memory bandwidth is not the same thing as memory capacity. Like... what the hell?
and let's just sweep DLSS and RT performance under the carpet.. Only VRAM matters, judging from some people's quotes.. I honestly can't be any clearer with my posts, so I'm sorry if you can't understand any further

OK, now I understand everything :) You're just making up and moving goalposts as the conversation goes. I almost forgot which thread we were in; thanks for bringing me back to reality ;)
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
My comment was assuming nVidia kept the same 384bit memory interface that the 3090 has (as the post I quoted said "exact same bus as 3090"), which would mean 24 512MB memory chips. So yes, they could go with a different board than the 3090 has and only have memory on the front side, and redesign it for 32bits per module instead of 16bits per module.

Still incorrect. These are still 32-bit data bus chips on the 3090.

GA102 has a 384-bit data bus, organized into 12 x 32-bit channels. It uses 10 of these channels on the 3080, for 10GB. It should be blindingly obvious that it could use all 12 channels, for 12GB on a 3080 Ti.

On the 3090, it still uses a 12x32-bit data bus; it just has two 32-bit, 1GB chips on each channel instead of one. The address bus determines which of the two chips on each channel will respond.
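A minimal sketch of that layout (the 3080 and 3090 configurations are public; the 12-channel single-chip row is the hypothetical "3080 Ti" case being discussed):

```python
# GA102 memory configurations: up to 12 x 32-bit channels, 1GB GDDR6X chips.
def vram_gb(channels_used, chips_per_channel, chip_gb=1):
    return channels_used * chips_per_channel * chip_gb

print(vram_gb(10, 1))   # RTX 3080: 10 channels, one chip each   -> 10 GB
print(vram_gb(12, 1))   # hypothetical 12-channel card           -> 12 GB
print(vram_gb(12, 2))   # RTX 3090: 12 channels, clamshell pairs -> 24 GB
```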
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Perf/w used to matter a lot. Like a LOT.

Then 3080 and 3090 happened. Seen some of those custom models? 480W and 2% faster than stock. What a laugh!

If you're honest with yourself: perf/W always matters. Perf/$ matters. Driver stability matters. Driver features matter. All of these things are important, some more than others, and it depends on the individual. Just don't lie to yourself when your favorite brand is losing in these metrics. These are for-profit companies; they don't care about you. Don't care so much about them either: use the best product for the job and ignore the brand.
good post, wrong recipient
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,631
136

HWB's take on the VRAM situation for the 3070, as well as the low stock situation for Ampere according to the retailers they've talked to.

tl;dw: 8 GB on the 3070 is plenty for all games today outside of a couple of examples, but they believe it's going to become a problem in games coming soon, and they give the example of the new Watch Dogs game maxed out at 1440p exceeding 8 GB. As for stock, the retailers they talked to said 3080 stock has been "abysmal" and that the 3070 has been much better, but still slow to restock compared to what they are used to.
 
Last edited:

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Still incorrect. These are still 32-bit data bus chips on the 3090.

GA102 has a 384-bit data bus, organized into 12 x 32-bit channels. It uses 10 of these channels on the 3080, for 10GB. It should be blindingly obvious that it could use all 12 channels, for 12GB on a 3080 Ti.

On the 3090, it still uses a 12x32-bit data bus; it just has two 32-bit, 1GB chips on each channel instead of one. The address bus determines which of the two chips on each channel will respond.

I was not aware they had configured it that way. But that sounds like a giant bottleneck if the GPU can only access half its RAM chips at any given time, because it has to switch back and forth between them.