Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 79

Ottonomous

Senior member
May 15, 2014
559
292
136
How much gain is the Samsung 7nm EUV process expected to provide?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed-up RTX' options at the top?)
Will the top card be capable of >4K60, at least 4K90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
20 GB of RAM on the 3080 Ti

UPDATED SAT, AUG 8 2020 10:29 AM CDT
Read more: https://www.tweaktown.com/news/7005...070-leaked-specs-up-20gb-gddr6-ram/index.html



I think the 3090 has been debunked, as it will not come out.
Or the leaker mistook it for a possible "Titan"-class edition.
Yeah, it was debunked to the degree that Micron, the company that makes the GDDR memory chips for those GPUs, confirmed the existence of the RTX 3090 today.
 

Karnak

Senior member
Jan 5, 2017
399
767
136
Performance estimations are 1.5 times that of 5700 XT
[...]
Big Navi power consumption is estimated to be between 330-375W
Obviously fake, why do you even bother? 1.5x faster than a 5700 XT but >=1.47x the power consumption at the same time? How's that in line with AMD's +50% perf/W claim? Just lol. Probably 90% of the Ampere claims are a lie too, then.
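For what it's worth, here's a quick back-of-the-envelope check of that mismatch (a sketch only, taking the RX 5700 XT's 225 W reference board power as the baseline; the other figures are the leak's own claims):

```python
# Check the leaked Big Navi numbers against AMD's +50% perf/W target.
# 225 W is the RX 5700 XT's reference board power; the scaling factor
# and power range come straight from the leak.

RX_5700_XT_POWER_W = 225
LEAK_PERF_SCALING = 1.5            # "1.5 times that of 5700 XT"
LEAK_POWER_RANGE_W = (330, 375)    # "330-375W"

for power_w in LEAK_POWER_RANGE_W:
    implied_perf_per_watt = LEAK_PERF_SCALING / (power_w / RX_5700_XT_POWER_W)
    print(f"{power_w} W -> implied perf/W vs 5700 XT: {implied_perf_per_watt:.2f}x")

# Prints ~1.02x at 330 W and ~0.90x at 375 W, i.e. roughly flat-to-worse
# perf/W instead of the claimed ~1.5x.
```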

Adored thinks the upcoming GeForce cards will be made on Samsung 5LPE, which is an evolution of the Samsung 7LPE process that had terrible yields, and is supposedly design compatible.
Nah, 5nm won't happen. Why? Because 5nm entered mass production in Q2 2020, so like 2-3 months ago. Ampere will launch next month and that's just impossible in that short a time frame. Definitely won't happen. I know rumors are cool, but Adored needs to sort out what could happen and what is literally impossible. 5nm is the latter.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Competition –

Apparently Big Navi is 'dead before arrival'. AMD are flustered because they have grossly underestimated Ampere performance in top 3 cards in the stack. Source rumors are saying Nvidia will win by a large amount in traditional performance this time and then DLSS and RTX scale proportionally. AMD were full of confidence before but now are very anxious. They based their targets off Turing's generational improvements (being the smallest in Nvidia history) and grossly miscalculated the evolution capability that Nvidia has done with Ampere

RTX 6900 ROPs have been improved to 96 up from 64 that was leaked a while ago. However, FP32 performance will be nowhere near GA102.

N21 = PCB is in final stages of being completed. The core has been taped out but drivers and PCB not complete yet. Expected to be completed end of September. The release and distribution are estimated to be after mid-October. Performance estimations are 1.5 times that of 5700 XT with a TSE score of less than 8000. (Will trade blows with GA104) Big Navi power consumption is estimated to be between 330-375W

I know you didn't write this. But whoever did is taking the sensationalism to the next level. *NO* tech company can estimate what the competition is going to do YEARS before they come out. A company comes up with a design and moves forward with it.

Actually, that entire bit reads like somebody who just wanted to make things up for people to feed off, which we get every time a new GPU line comes out. The majority of what they said just doesn't line up with either the other rumors or the facts that have come out today.
 

Bouowmx

Golden Member
Nov 13, 2016
1,147
551
146
Adored thinks the upcoming GeForce cards will be made on Samsung 5LPE, which is an evolution of the Samsung 7LPE process that had terrible yields, and is supposedly design compatible.

What a timely video. It's times like this when this ridiculous idea actually has some probability of being true. Still feels unlikely.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
Competition –

Apparently Big Navi is 'dead before arrival'. AMD are flustered because they have grossly underestimated Ampere performance in top 3 cards in the stack. Source rumors are saying Nvidia will win by a large amount in traditional performance this time and then DLSS and RTX scale proportionally. AMD were full of confidence before but now are very anxious. They based their targets off Turing's generational improvements (being the smallest in Nvidia history) and grossly miscalculated the evolution capability that Nvidia has done with Ampere

RTX 6900 ROPs have been improved to 96 up from 64 that was leaked a while ago. However, FP32 performance will be nowhere near GA102.

N21 = PCB is in final stages of being completed. The core has been taped out but drivers and PCB not complete yet. Expected to be completed end of September. The release and distribution are estimated to be after mid-October. Performance estimations are 1.5 times that of 5700 XT with a TSE score of less than 8000. (Will trade blows with GA104) Big Navi power consumption is estimated to be between 330-375W
Completely wrong.
 

Konan

Senior member
Jul 28, 2017
360
291
106
Obviously fake, why do you even bother? 1.5x faster than a 5700 XT but >=1.47x the power consumption at the same time? How's that in line with AMD's +50% perf/W claim? Just lol. Probably 90% of the Ampere claims are a lie too, then.

I didn't write it, just re-posting it. Yes, 50% faster than the 5700 XT: I can believe that. As for power consumption on that card, with 2x performance wouldn't that put it around 300W+? (including the 50% perf/W claim)


I know you didn't write this. But whoever did is taking the sensationalism to the next level. *NO* tech company can estimate what the competition is going to do YEARS before they come out. A company comes up with a design and moves forward with it.

Actually, that entire bit reads like somebody who just wanted to make things up for people to feed off, which we get every time a new GPU line comes out. The majority of what they said just doesn't line up with either the other rumors or the facts that have come out today.

Yes, thank you Stuka, I didn't write it. Agreed, there's a lot of sensationalism in places. I personally work in tech, and we roadmap out competitors' potential capabilities and direction 1-3 years out, sometimes even more. Our anticipation turns out better than average, never 100% accurate, but the tech trends are there in hardware and software. It doesn't always influence the end product, but it can.
 
Last edited:

CP5670

Diamond Member
Jun 24, 2004
5,539
613
126
3) If the 3090 is 1.7x 2080Ti performance as suggested in one previous post they can pretty much charge however much they want. Prepare for prices north of $1500.

I think $2000 is easily possible if this is actually true, maybe even $2500 in line with the Titan RTX. Even the 2080Ti is close to $1500 these days due to virus-related demand. I can't see why they would go any lower, as there is clearly a market for them even at those prices.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
I didn't write it, just re-posting it. Yes, 50% faster than the 5700 XT: I can believe that. As for power consumption on that card, with 2x performance wouldn't that put it around 300W+? (including the 50% perf/W claim)
If Big Navi has 80 CUs as mentioned in the leaks, then it won't be just 50% faster than the 5700 XT. If the performance is 2x the 5700 XT, the TDP can be 300W including the 50% better perf/W.
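A minimal sketch of that power math, again assuming the RX 5700 XT's 225 W reference board power as the baseline:

```python
# Projected board power for a given performance target and perf/W improvement,
# assuming the RX 5700 XT (225 W) as the baseline.

def projected_power_w(base_power_w: float, perf_scaling: float, perf_per_watt_gain: float) -> float:
    """Board power needed to reach perf_scaling given a perf/W improvement."""
    return base_power_w * perf_scaling / perf_per_watt_gain

print(projected_power_w(225, 2.0, 1.5))  # 2x the 5700 XT at +50% perf/W -> 300.0 W
print(projected_power_w(225, 1.5, 1.5))  # 1.5x the 5700 XT at +50% perf/W -> 225.0 W
```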

As for the summary of Ampere leaks, it's an interesting read, but I don't think everything is correct, especially the performance.
Luckily we don't need to wait long; only a few weeks remain until the Ampere presentation.
 
  • Like
Reactions: Konan

GodisanAtheist

Diamond Member
Nov 16, 2006
7,166
7,666
136
Wow, what a news day.

Conflicted feelings on the rumors: on one hand, a huge jump in performance could mean some serious discount sales on RTX 2xxx series hardware, which I am definitely in the market for, but as has been mentioned it could also mean P/P ratio doesn't really move and things just keep getting more expensive.

The rumor dump posted by Konan above is absolutely mouthwatering, despite the amount of salt that has to go with it, but in the event that NV has managed a Maxwell -> Pascal level jump again (which isn't particularly unique, given they did something similar with their prior node drop, Fermi -> Kepler), maybe AMD should name Big Navi the "6300 XT" and then brag about how NV can barely keep up with their entry-level cards :D
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
AMD didn't underestimate Intel even after Intel spent 3 years screwing up 10nm. Remember the Rome launch? AMD mentioned back then that they designed Rome to be competitive with Ice Lake-SP and didn't think they'd match it on per-core performance until Milan.

I cannot see them underestimating Nvidia who - to their credit - do not stop innovating.

Also, lmao at the idea that AMD have decided to increase the ROP count since it was last leaked. Navi 21 first taped out last year. Over 6 months later, AMD submitted a patch to the Linux kernel showing the ROP count to be 64. And within weeks of that, we're supposed to believe it was upped to 96?

Also double lol at Big Navi's board power at 350-375W.
 

Gideon

Golden Member
Nov 27, 2007
1,774
4,145
136
All the leaks seem to be spilling over today!
...

This rumor, while parts of it might be accurate, seems to have quite a lot of BS in it.

1. He didn't even get his Time Spy Extreme (TSE) scores right.

RX 5700 XT: 3950-4150
RTX 2080 Ti: 6300-7000

According to him:
RTX 3070 Ti (GA104): 7000-7800
Big Navi (1.5x 5700 XT): ~6000 (that's considerably slower than even Coreteks suggested)

But still, Big Navi will somehow "trade blows" with GA104? These cards would have a difference in score comparable to the 2080 vs the 2080 Ti (see the quick calculation after this list).

2. AMD adding ROPs to a design "mid-flight"?

3. "AMD underestimating Nvidia" - If AMD only manages to get 50% more performance from 2x the CUs with a 505 mm2 die on a 7nm+ node (capable of clocking PS5 to 2200 Mhz game-clock) @ 330-375W TDP - then that's not underestimating. That's utterly dismal execution, especially after 50% perf/watt gains talk at the time they had silicon of RDNA2 designs.

They would essentially be totally topped out with nothing left in the tank. It wouldn't matter if Nvidia were 3x the perf of 2080 Ti, they couldn't really do anything about it. So even if everything about AMD in this rumor is true the "underestimating" part makes no sense.

Also the "AMD were full of confidence" part. You have a design unable to beat a 250W 2080 Ti (with your design @ 350W btw) You know the competitor is doing 350W designs on a better node than before and you're "full of confidence" ? If that were true, everybody higher up ion the AMD graphics team should be fired for incompentence.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
Conflicted feelings on the rumors: on one hand, a huge jump in performance could mean some serious discount sales on RTX 2xxx series hardware, which I am definitely in the market for, but as has been mentioned it could also mean P/P ratio doesn't really move and things just keep getting more expensive.

If you are gonna do that, it's going to be used... looks like the supply is basically gone. Almost to the point where it better be a hard launch.

Also 50% faster than the 2080 Ti FE isn't that amazing, when OC'd 2080 Ti's are already 15% faster.
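A tiny sketch of that comparison, using only the figures in the post (actual OC headroom of course varies from card to card):

```python
# How a rumored +50% over a stock 2080 Ti FE looks against an overclocked
# 2080 Ti that's already ~15% ahead of the FE.

rumored_gain_vs_fe = 1.50
oc_2080ti_vs_fe = 1.15

gain_vs_oc_card = rumored_gain_vs_fe / oc_2080ti_vs_fe
print(f"Uplift over an OC'd 2080 Ti: {gain_vs_oc_card - 1:.0%}")  # ~30%
```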

Big Navi (1.5x 5700 XT): ~6000 (that's considerably slower than even Coreteks suggested)

Isn't Coreteks suggesting that what AMD releases this year is ~2080 Ti? The ~3080/3090 cards would be released next year.
 

Gideon

Golden Member
Nov 27, 2007
1,774
4,145
136
Isn't Coreteks suggesting that what AMD releases this year is ~2080 Ti? The ~3080/3090 cards would be released next year.

I was referring to this article:

Unfortunately for us PC enthusiasts it seems the news aren’t great regarding the so-called “NVidia Killer”. The performance level of the high-end GPUs based on the codenamed Sienna Cichlid ASICs is around the same as NVidia’s last-gen top SKU, the 2080ti, according to these reports. A best case scenario supposedly shared by AMD is that at most gamers should expect to get about 15% over the 2080ti in a select AMD “optimized” titles. According to the same sources AMD is aiming to launch the “Big Navi” as a direct competitor to the soon to be launched RTX 3080, and not as a competitor to NVidia’s highest performing part as has been widely reported (let alone beat it). Some have suggested that “Big Navi” would be up to 50% faster in traditional rasterization than the RTX 2080ti but according to AMD’s partners that will not be the case.

This states in no uncertain terms that he's talking about N21(Sienna Cichlid), which should be the biggest chip.

Overall, if the leaks about 2xFP32 perf are true then yeah, AMD will get smoked one way or another.

If the Navi die-size info is correct, I would have expected more from it, at least say ~70% more perf than the 5700 XT (particularly at those TDPs and with rumors of possibly water-cooled cards). I've always considered 90-100% to be the upper bound of possibility, a "moonshot" with major architectural improvements. 1.5x the perf @ 350W with 2x the die size, after the PS5/Xbox specs and all the hype, qualifies as a fail for sure.
 

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?
 

CakeMonster

Golden Member
Nov 22, 2012
1,502
659
136
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?

Nope, why would they do that to themselves? Unnecessary complexity in a situation where NV almost certainly (unfortunately) has the upper hand performance-wise.
 

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
Wasn't the Coreteks youtuber claiming that there's a coprocessor on the back of the PCB doing the RT stuff?
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?
I'd say no. That's a youtuber showing how poor a grasp of technical matters he has despite supposedly being a computer hardware enthusiast.

The back of a major ASIC is typically packed with passive components, capacitors for decoupling and power delivery buffering and the like. You couldn't fit an entire additional ASIC in that space with them in place, and doing without them is probably inadvisable; they're there for a reason, after all. Besides, the PCB is riddled with vias there as well for the front-side GPU, so how would you fit hundreds (more likely 1000+) of additional solder pads for another major flip-chip substrate on the reverse side? It seems to require some magic tricks, IMO, or hitherto undiscovered technical innovations never demonstrated in consumer computer hardware before.

Plus, cooling. There's not much space on the reverse side of the PCB as it is; if you stick another ASIC on there, the vertical height of the chip package will chew up much of what little is available. You might then be able to squeeze a very thin vapor chamber on top to transport the heat away, maybe, but with no room for an actual heat sink that doesn't help you very much. Add to that the fact that two ASICs both doing heavy computing tasks, sitting back to back, would be grilling each other with their respective heat output. It would significantly increase heat density in one small spot on the PCB and thus the cooling difficulties, which will already be significant for a rumored 300+ watt part.

Also, inter-chip communication suffers from higher latency and higher power usage than intra-chip communication. It would complicate the 3D rendering pipeline a lot to send some of the work off-chip and then hold partially completed pixels in flight in buffer memory on the main GPU until results come back from the reverse-side "accelerator".

All this seems very unlikely honestly, from an engineering and logic standpoint.

I don't see nvidia launching both a 3080 and a 3090/3080 Ti at the same time if they completely destroy the competition and their older cards. IMO they would return to releasing an 80 one year and a Ti the next, similar to the Maxwell and Pascal releases.
If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed? Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.
 
  • Like
Reactions: xpea and Saylick

linkgoron

Platinum Member
Mar 9, 2005
2,409
979
136
If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed?

Like releasing the Titan X just two months (+ a week) after the 1080 release? There's always something better coming, especially when you buy the GX104 card.

Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.

If they use the same GPU, of course there's no need. However, it's not like Nvidia haven't done this before.

The 980 was released 9 months before the 980ti with a Titan released 6 months after the 980. The 1080 was released 10 months before the 1080ti with a Titan released 2 months after the 1080. If the 3080 is as good as they were, I don't see why Nvidia would change their strategy. IMO, the only reason the 2080 Ti was released at the same time as the 2080 was because the 2080 wasn't really worth it as an upgrade over the 1080ti.

Edit: As stated in my previous post, the above is based on the speculation that Nvidia are going to completely demolish both their previous gen and Navi 2 with just their 3080 card. If Nvidia believe that AMD can be competitive with big Navi, they'll have a large incentive to release better cards. For example, the 980ti was probably released a bit early to counter Fiji, and the Super cards were clearly a counter to Navi.
 
Last edited:

Bouowmx

Golden Member
Nov 13, 2016
1,147
551
146
Concerning the rear-side co-processor, remember back in the Tesla days (GeForce 8800), NVIDIA graphics cards had an extra chip, the NVIO, that acted as a southbridge of sorts.
https://www.beyond3d.com/content/reviews/1/3
NVIO marshalls and is responsible for all data that enters the GPU over an interface that isn't PCI Express, and everything that outputs the chip that isn't going back to the host. In short, it's not only responsible for SLI, but the dual-link DVI outputs (HDCP-protected), all analogue output (component HDTV, VGA, etc) and input from external video sources.
Not implying that the alleged Ampere co-processor would do the same thing, but I just wanted to bring up some history.
 
  • Like
Reactions: psolord

CakeMonster

Golden Member
Nov 22, 2012
1,502
659
136
The 980 was released 9 months before the 980ti with a Titan released 6 months after the 980. The 1080 was released 10 months before the 1080ti with a Titan released 2 months after the 1080.

All of those line up well with the +/- 12 month upgrade cycle.

The Titan was priced outside the accepted top gamer-card niche at the time. So it was 980->980Ti->1080->1080Ti->2080Ti for most people, with a 20-30% improvement a year. Pretty reliable until 2019, which saw no upgrade option (the price hike of the 2080Ti could possibly be retroactively justified for buyers, since they have had the top card for 2 whole years).

This makes it more likely that two models can arrive at once, since a lineup with 2-year-old chips is already confirmed to be discontinued. I'm pretty sure NV also knows that the gamers who did spend money on that 20-30% upgrade every 12 months for so many years expect more than that 20-30% number when 2 years have gone by. They will probably still hike the price even more, but I think they will deliver on performance.
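A rough illustration of that expectation, assuming the historical 20-30% yearly gain simply compounds over the two-year gap:

```python
# Two skipped yearly upgrades at 20-30% per generation compound to well
# above a single generation's jump.

for yearly_gain in (0.20, 0.30):
    two_year_gain = (1 + yearly_gain) ** 2 - 1
    print(f"{yearly_gain:.0%}/year over 2 years -> {two_year_gain:.0%} total")

# 20%/year -> 44% total, 30%/year -> 69% total; buyers who sat out Turing
# will likely measure a new flagship against that kind of number.
```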
 

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
However, it's not like Nvidia haven't done this before.
That they've done it before isn't in and of itself evidence they will do it again. Also, it's worth mentioning again that Turing is old now, and it wasn't exactly a huge performance increase even back when it first launched. People are ready to upgrade now. Forcing them to wait even longer would be pointless when the previous gen is already 2+ years long in the tooth.

Delaying might in fact end up hurting NV financially in this situation... Someone who is itching to upgrade and buys a 3080 for 1X money if it solo-launches might have bought a 3090 for 1.5X money if it'd been available, and then they probably won't bother spending that 1.5X money when the 3090 does launch later on.

That's .5X money NV left on the table - how will JHH afford his next mansion if they make such basic blunders?! :)
 

MrTeal

Diamond Member
Dec 7, 2003
3,614
1,816
136
I'd say no. That's a youtuber showing how poor a grasp of technical matters he has despite supposedly being a computer hardware enthusiast.

The back of a major ASIC is typically packed with passive components, capacitors for decoupling and power delivery buffering and the like. You couldn't fit an entire additional ASIC in that space with them in place, and doing without them is probably inadvisable; they're there for a reason, after all. Besides, the PCB is riddled with vias there as well for the front-side GPU, so how would you fit hundreds (more likely 1000+) of additional solder pads for another major flip-chip substrate on the reverse side? It seems to require some magic tricks, IMO, or hitherto undiscovered technical innovations never demonstrated in consumer computer hardware before.

Plus, cooling. There's not much space on the reverse side of the PCB as it is; if you stick another ASIC on there, the vertical height of the chip package will chew up much of what little is available. You might then be able to squeeze a very thin vapor chamber on top to transport the heat away, maybe, but with no room for an actual heat sink that doesn't help you very much. Add to that the fact that two ASICs both doing heavy computing tasks, sitting back to back, would be grilling each other with their respective heat output. It would significantly increase heat density in one small spot on the PCB and thus the cooling difficulties, which will already be significant for a rumored 300+ watt part.

Also, inter-chip communication suffers from higher latency and higher power usage than intra-chip communication. It would complicate the 3D rendering pipeline a lot to send some of the work off-chip and then hold partially completed pixels in flight in buffer memory on the main GPU until results come back from the reverse-side "accelerator".

All this seems very unlikely honestly, from an engineering and logic standpoint.


If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed? Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.
Having done board layouts for 2000-ball, 600 W BGA ASICs, I agree with your assessment of the challenges they'd face putting another chip on the other side. You can help mitigate routing and component issues using blind and buried vias, and moving to even more layers could help some, but it would still be a big challenge to have two large chips on opposite sides of the board. It's not impossible, of course, but I'm not sure why you would do it. If they have a coprocessor, why not just stick it on the top side, where power delivery, cooling, and z-height issues are so much easier to deal with?
 
  • Like
Reactions: Stuka87 and FaaR