Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed-up RTX' options at the top?)
Will the top card be capable of more than 60fps at 4K, if not 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
It cracks me up that people think AMD isn’t going to have a competitive offering this time around.

Historically, the GPUs in consoles have been equivalent to lower-midrange parts. The GPU in the PS5 is around 5% faster than the 5700 XT. The GPU in the new Xbox is around 23% faster. Both chips are designed for a low TDP. Let that sink in. If you are smart, you will wait until you have all the, erm, cards before making a decision. I have watched AMD/ATI give NVIDIA a bloody nose many times.

EDIT: The GPU in the PS5 has around the same performance as the RTX 2070 Super, and the GPU in the Xbox Series X trades blows with the 2080 Super. The total system draw for both machines is 280W; the TDP for the GPU is likely 150W or possibly even less. Imagine what a 250W card will look like.

Oh, one other thing: the PS5 GPU is the equivalent (performance-wise) of an x600 GPU from AMD, and the Xbox GPU is likely to be the equivalent of the x700 series. In other words, NVIDIA's 3060 will have to beat the 2070 Super to be competitive, and the 3070 will have to beat the 2080 Super to be competitive.
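A rough sketch of the arithmetic behind that ~150W GPU guess, working back from the quoted 280W wall draw (every component figure below is an assumption for illustration, not an official spec):

```python
# Rough console power budget, working back from a ~280W wall draw.
# Every figure below is an assumption for illustration, not an official spec.
PSU_EFFICIENCY = 0.90   # assumed PSU efficiency at load
WALL_DRAW_W = 280       # claimed total system draw from the wall

dc_budget_w = WALL_DRAW_W * PSU_EFFICIENCY  # power actually delivered to components

other_components_w = {
    "cpu": 45,        # 8-core Zen 2 at console clocks (guess)
    "memory": 20,     # GDDR6 modules (guess)
    "ssd_io": 10,     # SSD plus I/O controllers (guess)
    "fans_misc": 15,  # cooling and board losses (guess)
}

gpu_budget_w = dc_budget_w - sum(other_components_w.values())
print(f"Implied GPU power budget: ~{gpu_budget_w:.0f}W")  # ~162W, same ballpark as the ~150W guess
```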
 
Last edited:

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
eek2121 said: If you are smart, you will wait until you have all the, erm, cards before making a decision.

Wait for what? There isn't a limit on how many cards or consoles you can buy in a year.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Did AMD have a counter to Intel before Zen? So looking at the past as an indicator of the future is not going to work all the time.

Yes? The Athlon 64 wiped the floor with the Pentium 4. Then real competition gave us Core 2 through Sandy Bridge, after which Intel started "competing with themselves" again.

When a company pulls stunts like removing the soldered IHS and stagnating core count and IPC for half a decade while constantly demanding new motherboard purchases, it's obvious they have no competition.

Then Ryzen came along and suddenly Intel core counts went up and soldered IHS magically re-appeared.

RDNA2 is a huge leap in efficiency, as seen from the Xbox Series X specs. Nvidia is having to push GA102 to 350W for a reason. But that still might not be enough. We will see in a couple of months when RDNA2 launches.

Leaked specs are meaningless. Look at the failure that was Vega. Prior to it we got a whole lot of marketing garbage like "perf per watt".

The only difference now is that AMD are keeping their mouth shut, so maybe they do have a killer this time around. But going off history, I'm 75% certain AMD won't touch at least nVidia's top 1-2 GPUs.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
What percentage was that "large number"? I remember it being an issue but I never had a space invaders problem. I've had far more problems with my AMD cards and constant black screens.

It's a hard number to get. None of the AIBs actually release the numbers, for obvious reasons. So we have to rely on retailers, but only some of them actually report it.

GamersNexus just had an article on this subject last week, based on data from one retailer in Europe. They gave the RMA numbers for every model and make of GPU that they sell.

Here is a quote from the article:
The data encompasses most of each company’s current product stack, with data representing the GTX 1660 Ti at the bottom of the stack for Nvidia, and going all the way up to the RTX 2080 Ti. Looking at the data, the RTX 2080 Ti has the single highest RMA rate for any GPU model, with an RMA rate hovering just over 5%. The cost may factor into this, as people might be less willing to wait around for a fix on a $1000 card.

Full article: https://www.gamersnexus.net/news-pc...-release-date-amd-x86-marketshare-intel-leaks
 

beginner99

Diamond Member
Jun 2, 2009
5,233
1,610
136
AMD will be TSMC's biggest 7nm customer, with 30,000 WPM in H2 2020 (out of 140K total 7nm WPM); I don't see them having problems with capacity.

AMD will be producing Zen 3 chiplets, RDNA2 GPUs, and most importantly the console SoCs. I imagine the latter taking up a very meaningful part of that whole wafer supply.
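For context, here is the wafer share implied by the figures quoted above (just a sanity check of those numbers):

```python
# AMD's share of TSMC 7nm capacity, using only the figures quoted above.
amd_wpm = 30_000         # wafers per month reportedly allocated to AMD
total_7nm_wpm = 140_000  # reported total 7nm capacity

print(f"AMD share of 7nm output: {amd_wpm / total_7nm_wpm:.0%}")  # ~21%
```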
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,166
7,666
136
Here we go again... After all the hype, the NV-killer guerrilla marketing, Glo's raving (I do miss RussianSensation), Nvidia are going to announce and launch their cards before AMD has even gotten a finalized design down (for all we know).

Execution machines, those guys. It will be even sadder if they actually are 100% on the Samsung 8/10nm train for their consumer parts. Working with a brand-new partner for large dies and still getting the stack out the door before we even hear a peep from AMD.
 

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
Whatever the case, I expect them to be sold out instantly and simply unavailable to actually purchase for months after release. For most people, the actual release date is probably six months into 2021. You can't even buy a power supply right now; high-end GPUs aren't staying in stock for more than 30 seconds.
 

exquisitechar

Senior member
Apr 18, 2017
684
942
136
GodisanAtheist said: Here we go again... After all the hype, the NV-killer guerrilla marketing, Glo's raving (I do miss RussianSensation), Nvidia are going to announce and launch their cards before AMD has even gotten a finalized design down (for all we know). Execution machines, those guys. It will be even sadder if they actually are 100% on the Samsung 8/10nm train for their consumer parts.
They are getting their stack out the door before AMD precisely because they are using Samsung's 8nm. Glo's "raving" was based on his speculation that they would release the high-end cards on TSMC's N7 in 2021, which was incorrect. The cards are coming this year, but they will be less impressive than they would have been on N7. Hence the blown-up power consumption.

Full node drop, I don't think nVidia is going to prove as historically greedy as the other team with their full node drop. The idea of a 251mm² part costing $400 may fly for the red team, but I don't think the green team would be as forgiving. If you care more about names it may be a bit different of course; I'm used to looking at die size (which gives us an idea of actual cost to manufacture), not marketing names. Price versus performance we should see a *MASSIVE* uplift compared to last generation. I'm expecting the $500 parts will be in the range of the current over-$1K parts.

Correct, but TSMC N7 in 2019 and Samsung's 8nm in 2020 are two different things. Samsung is way cheaper.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
beginner99 said: AMD will be producing Zen 3 chiplets, RDNA2 GPUs, and most importantly the console SoCs. I imagine the latter taking up a very meaningful part of that whole wafer supply.

True, that's why they increased inventory to $1.3 billion over the last few months. Anyway, Big Navi is not a high-volume product; I don't expect they will have a lot of problems keeping up with demand.
 

Det0x

Golden Member
Sep 11, 2014
1,299
4,234
136

NVIDIA's board partners are reportedly ready to launch their new graphics cards at the same time as NVIDIA unveils its reference models.

This time the so-called Founders Edition cards from NVIDIA are definitely going to look much different from custom designs based on the same GPU. We have already heard that there are two board designs for the GA102 GPU: PG133 and PG132. The former is NVIDIA-exclusive for Founders Edition models and is the irregularly shaped PCB we have seen in a previous leak. The latter will be adopted by the vast majority of board partners for their semi-custom designs (reference PCB and custom cooling).

NVIDIA board partners have already confirmed to TweakTown that custom designs will be available at launch. The exact wording is that both Founders Edition and custom designs will launch simultaneously. What it means is that AIBs have been actively developing their cards for weeks and they should now be ready for launch. We can also confirm that AIBs have reached out to us confirming that they are ready.

The NVIDIA GeForce RTX 30 series will be based on the Ampere architecture. It is rumored that the cards might adopt Samsung's 8nm node. NVIDIA is allegedly still deciding on the final naming and pricing.

[Image: ASUS-GeForce-RTX-3080-Ti-Leak.jpg]
 

beginner99

Diamond Member
Jun 2, 2009
5,233
1,610
136
NVIDIA is allegedly still deciding on the final naming and pricing.

At least for naming, that can't really be true if the cards will be commercially available anytime soon. If they haven't decided on names, they can't have printed any product boxes yet, meaning the cards can't be shipped. Names have to be set by now for anything releasing in September.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Naming is definitely done, especially since most cards have the name molded into the plastic shrouds. Pricing, though, is always done at the last moment.

nVidia releasing first gives AMD the upper hand if they have competitive products, as they can adjust specs and pricing to counter whatever nVidia has (provided they don't do it like the 5600 XT again).

And I also don't think anybody should be expecting a 3080 Ti at launch, especially if it's an N7 part and not a Samsung part. Datacenter customers are far more important to nVidia than the relatively small number of consumers who buy top-end cards.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
beginner99 said: At least for naming, that can't really be true if the cards will be commercially available anytime soon. If they haven't decided on names, they can't have printed any product boxes yet, meaning the cards can't be shipped.

I'm sure there are a lot of differences between video cards and mobile devices, but typically I get briefed on the names and features at least a month in advance, if not more.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
BFG10K said: Yes? The Athlon 64 wiped the floor with the Pentium 4. Then real competition gave us Core 2 through Sandy Bridge, after which Intel started "competing with themselves" again.

If you have to go all the way back to the Athlon 64 to find CPU leadership for AMD, then why did you skip the Radeon 9700 Pro (R300) and its successors, the Radeon 9800 Pro/9800 XT (R350/R360)? ATI held the GPU crown for 20 months, from Aug 2002 until Nvidia took it back in Apr 2004.

AMD's GPU division competed aggressively during the Radeon HD 4000 and HD 5000 series, and even held the GPU crown for 6 months with the Radeon HD 5870. GCN was competitive with Kepler, but Nvidia had better area and power efficiency. AMD's CPU problems with Bulldozer then hurt the entire company, including its GPU division, and once Nvidia pulled away with Maxwell it was one-way traffic in Nvidia's direction.

AMD's CPU division was far less competitive than their GPU division from 2003 to 2017; Zen brought a revival. So do not make it look like AMD were more competitive against Intel in CPUs than they were against Nvidia in GPUs.

BFG10K said: Leaked specs are meaningless. Look at the failure that was Vega. Prior to it we got a whole lot of marketing garbage like "perf per watt". The only difference now is that AMD are keeping their mouth shut, so maybe they do have a killer this time around. But going off history, I'm 75% certain AMD won't touch at least nVidia's top 1-2 GPUs.

This is not a leaked spec; the Series X specs have been officially announced by Microsoft. If you saw the Series X teardown video, you would have seen the PSU rating. Do the math and you will see the Series X is delivering 12 TF at 140-150W, compared to the Radeon 5700 XT's 9 TF at 225W. That's 2x the perf/watt.
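Spelling that math out (the Series X GPU wattage is itself an estimate backed out of the PSU rating, not an official figure):

```python
# Perf/watt comparison using the figures cited above. The Series X GPU
# wattage is an estimate derived from the PSU rating, not an official number.
series_x_tflops, series_x_watts = 12.0, 145.0   # midpoint of the 140-150W estimate
rx5700xt_tflops, rx5700xt_watts = 9.0, 225.0    # 5700 XT board power spec

series_x_eff = series_x_tflops / series_x_watts   # ~0.083 TF/W
rx5700xt_eff = rx5700xt_tflops / rx5700xt_watts   # ~0.040 TF/W
print(f"Series X vs 5700 XT perf/watt: {series_x_eff / rx5700xt_eff:.2f}x")  # ~2.07x
```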

Anyway, in a couple of months AMD is likely to launch RDNA2, so you can bookmark all these posts and revisit them. IMO, RDNA2 is likely to be the launch with the bigger impact this fall, bigger even than Zen 3, primarily because of Nvidia's absolute dominance of GPUs over the past 6 years. Nvidia might have made a strategic blunder by betting on Samsung 8nm instead of TSMC 7nm, and they are likely to pay for that mistake.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
raghu78 said: Anyway, in a couple of months AMD is likely to launch RDNA2, so you can bookmark all these posts and revisit them. Nvidia might have made a strategic blunder by betting on Samsung 8nm instead of TSMC 7nm, and they are likely to pay for that mistake.
AMD launches for the last 5+ years have basically been massive hype followed by a botched release (paper launch, useless stock coolers, last-second BIOS changes), followed by 12 months of driver problems (unless it's a re-release of the same card; see the 580, etc.). Then we go back to "wait for AMD card name + 1". Several years later the cards are actually quite good, once the drivers are properly fixed and optimised, and AMD are often still selling those cards at bargain prices. Nvidia, on the other hand, tend to stick near day-one pricing for the lifetime of the card. Hence my suggestion: if you want the new gen, buy Nvidia, but if you want a bargain, keep an eye on last-gen AMD cards (there might be some great deals on the 5x00 cards).
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Dribble said: AMD launches for the last 5+ years have basically been massive hype followed by a botched release, followed by 12 months of driver problems. Then we go back to "wait for AMD card name + 1".

That's a fair statement. AMD have not executed a GPU launch well for years, and the driver problems with the RDNA generation have made it harder for them to be viewed as a reliable GPU brand. So it's on AMD to get it right at the RDNA2 launch and change the perception of their products.
 

JasonLD

Senior member
Aug 22, 2017
487
447
136
raghu78 said: Do the math and you will see the Series X is delivering 12 TF at 140-150W, compared to the Radeon 5700 XT's 9 TF at 225W. That's 2x the perf/watt.

That perf/watt figure can be very misleading, considering the Series X GPU is running right at its perf/watt sweet spot while the 5700 XT is pushed a bit beyond it in order to be competitive against its Nvidia counterparts.
Who knows whether adding just 100MHz to the Series X GPU would jack the power figure above 200W?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
JasonLD said: Who knows whether adding just 100MHz to the Series X GPU would jack the power figure above 200W?

I don't believe you will save 70-80W by lowering the RX 5700 XT by 100MHz.
 

JasonLD

Senior member
Aug 22, 2017
487
447
136
AtenRa said: I don't believe you will save 70-80W by lowering the RX 5700 XT by 100MHz.

Probably not by dropping clockspeed alone, but undervolting as well might. Perf/watt just heavily favors higher CU count at lower clockspeed over lower CU count at higher clockspeed. The perf/watt improvement wouldn't look as impressive if the PS5 GPU were compared against the 5700 XT.
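A toy model shows why that shape advantage exists: dynamic power scales roughly with V²·f, and higher clocks demand higher voltage, so a wide, slow GPU tends to win on perf/watt. All figures in the sketch below are made up for illustration:

```python
# Toy dynamic-power model: P ~ CUs * V^2 * f, perf ~ CUs * f.
# All numbers are illustrative, not measured values for any real GPU.
def perf_per_watt(cus: int, clock_ghz: float, volts: float) -> float:
    perf = cus * clock_ghz               # throughput in arbitrary units
    power = cus * volts**2 * clock_ghz   # switched capacitance scales with CU count
    return perf / power                  # note: reduces to 1/V^2, so voltage dominates

wide_slow = perf_per_watt(cus=52, clock_ghz=1.8, volts=0.90)    # Series-X-like shape
narrow_fast = perf_per_watt(cus=40, clock_ghz=2.2, volts=1.10)  # pushed-hard 5700XT-like shape

print(f"wide/slow vs narrow/fast perf-per-watt: {wide_slow / narrow_fast:.2f}x")  # ~1.49x
```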
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
JasonLD said: Perf/watt just heavily favors higher CU count at lower clockspeed over lower CU count at higher clockspeed.

That's not always the case; it's not simply black and white. You can get way higher perf/watt from small GPUs at high clocks vs bigger GPUs at lower clocks.
Just remember: measuring perf/watt without context is meaningless for gaming GPUs.

For example, in the latest TechPowerUp review, the ASUS TUF RX 5600 XT has the highest perf/watt at 4K resolution, topping even the RTX 2080 Ti.
But that is a useless metric, because the RX 5600 XT doesn't have the power for 4K gaming.
If you instead take the RX 5600 XT's performance at 1080p as the context and compare its perf/watt against cards delivering the same performance at that resolution, you get a meaningful result: the RX 5600 XT has ~10% higher perf/watt than its direct competition, the RTX 2060, at the same performance.
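To make the iso-performance idea concrete, the comparison looks something like this (the fps and wattage values below are hypothetical placeholders, not the TechPowerUp data):

```python
# Iso-performance perf/watt: compare cards at a resolution where they deliver
# similar frame rates. The fps and wattage values are hypothetical placeholders,
# not the TechPowerUp data.
cards_1080p = {
    "RX 5600 XT": {"fps": 100, "watts": 150},
    "RTX 2060":   {"fps": 100, "watts": 165},
}

for name, d in cards_1080p.items():
    print(f"{name}: {d['fps'] / d['watts']:.3f} fps/W")
# With performance matched, the remaining fps/W gap (~10% here) is the
# meaningful perf/watt difference.
```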

JasonLD said: The perf/watt improvement wouldn't look as impressive if the PS5 GPU were compared against the 5700 XT.

Why not? Do you know something we don't?
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,774
4,145
136
The 5700 is a very interesting comparison. It has the same CU count and memory bus width as the PS5 at 180W, and yet is clocked nearly 600MHz lower (game clock, if Cerny is to be believed). The only unknown is the PS5's TDP, but I really doubt it's more than the Xbox's.
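The FLOPS arithmetic behind that comparison, taking the RX 5700's official 1625MHz game clock and the PS5's 2230MHz peak clock (both are 36-CU parts):

```python
# FP32 throughput for an RDNA GPU: shaders * 2 ops/clock (FMA) * clock.
def tflops(cus: int, clock_mhz: int) -> float:
    shaders = cus * 64                     # 64 stream processors per RDNA CU
    return shaders * 2 * clock_mhz / 1e6   # MHz -> TFLOPS

print(f"RX 5700, 36 CU @ 1625 MHz game clock: {tflops(36, 1625):.2f} TF")  # ~7.49
print(f"PS5,     36 CU @ 2230 MHz peak:       {tflops(36, 2230):.2f} TF")  # ~10.28
```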
 
Last edited:

JasonLD

Senior member
Aug 22, 2017
487
447
136
AtenRa said: That's not always the case; it's not simply black and white. You can get way higher perf/watt from small GPUs at high clocks vs bigger GPUs at lower clocks. Just remember: measuring perf/watt without context is meaningless for gaming GPUs.

Oh, I meant that it is easier to tilt perf/watt in favor of bigger GPUs at lower clocks than smaller GPUs at higher clocks. Perf/watt can be all over the place with gaming GPUs, since product segmentation is ultimately based on performance, not just perf/watt.

AtenRa said: Why not? Do you know something we don't?

I expect the PS5 and Series X to be very close in terms of TDP. The PS5 seems to have gone a bit outside its comfort zone in order to close the performance gap with the Series X.