Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 74 - AnandTech Forums

Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in the forum members' thoughts.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,166
7,666
136
At this point it's a god damn nickel to anyone who can get me 1080 Ti raster performance for something resembling $300.

RTX 3060 should be in that zone....

Edit:
Also glad Glo showed up and didn't disappoint. You're the man, Glo!
 

Konan

Senior member
Jul 28, 2017
360
291
106
At this point it's a god damn nickel to anyone who can get me 1080 Ti raster performance for something resembling $300.

RTX 3060 should be in that zone....

I hope so. Wasn't the 2060 $350 at launch, and the Super variant $400? I think an RTX 3060 might be $400.

Got a feeling the pricing is going to be much higher, at least on the higher end. I reckon two of the cards will be priced at $999 and $1,599 (unsure of which models), and I think they might hold something back for after AMD's launch.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I hope so. Wasn't the 2060 $350 at launch, and the Super variant $400? I think an RTX 3060 might be $400.

Got a feeling the pricing is going to be much higher, at least on the higher end. I reckon two of the cards will be priced at $999 and $1,599 (unsure of which models), and I think they might hold something back for after AMD's launch.

The 2060 (or Super) isn't close to the 1080 Ti either. The 2070 Super is a tiny bit faster than a 1080 Ti. So maybe the 3060 will get there, but I'm not betting on it being $300.
 

Gideon

Golden Member
Nov 27, 2007
1,774
4,145
136
"NVidia killer" or bust. 20% faster across the board for AMD, is the only option, all of their credibility is tied up on this point. Every bit of hype, all of the viral efforts, everything comes down to this. The credibility of all their hard-working .... fans.... is all tied up on this point.
Well, I agree that AMD has a bunch of rabid fanboys and an insane hype train. Considering how obnoxious most of them are, I doubt much of this is viral marketing. AMD's visible marketing has been absolutely zero. It was minimal during the Navi release as well, compared to the stupid hype Raja generated for Vega.
Pretty much the only times Lisa Su has mentioned the product have been a couple of candid remarks when directly asked about it.

But your "20% faster" is a typical red herring you chose randomly so you can claim you're right. You know full well that it won't be faster than the highest-end Ampere even if the latter were 8nm, simply because it's 505mm² with 80 CUs. The most I've heard is that, due to the process advantage (if true), it could finally be somewhat competitive. AFAIK nobody worth listening to has claimed that it will actually beat Nvidia by a large margin.

My hope is that they'll at least be able to equal or narrowly beat the raster performance of an RTX 3080 for a similar price.

And you know what, if they did just that, it would still be a hell of an effort, considering the hole they were in with Vega. Essentially you could compare it to going from Excavator to Zen to Zen 2 against a hypothetical Intel that actually delivered. And that with only a little over a year between RDNA1 and RDNA2 (RDNA3, while still facing rough odds, would be in a miles better spot than anything before it).

But I also do get your sarcasm considering the crazy hype Navi generated (it was even supposed to be chiplet-based!), then Big Navi, and now RDNA3 (also supposedly chiplet-based, though in reality it surely isn't). Despite this, trust me, you wouldn't be any better off with no AMD in graphics.

Rushing? NVIDIA ALWAYS launches first: Maxwell launched first, Pascal the same, and Turing the same too. And in each case they had a far SUPERIOR product that held the crown by a wide gap. AMD only launched months later. This time looks no different; in fact, their marketing seems confident they have an amazing product on their hands this time, as the marketing campaign eclipses that of Turing and RTX.
Yeah, I also noticed that Nvidia surely is hyping this one up. Considering the comments somebody got out of an Nvidia employee months back ("You'll be impressed"), I'm pretty sure they have something up their sleeve.
 
  • Like
Reactions: Elfear

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
Rushing? NVIDIA ALWAYS launches first: Maxwell launched first, Pascal the same, and Turing the same too. And in each case they had a far SUPERIOR product that held the crown by a wide gap. AMD only launched months later. This time looks no different; in fact, their marketing seems confident they have an amazing product on their hands this time, as the marketing campaign eclipses that of Turing and RTX.
Then why have they cancelled the 103 die and are releasing the RTX 3080 using the 102 die, which has never happened before?
 

exquisitechar

Senior member
Apr 18, 2017
684
942
136
Yeah, I also noticed that Nvidia surely is hyping this one up. Considering the comments somebody got out of an Nvidia employee months back ("You'll be impressed"), I'm pretty sure they have something up their sleeve.
I mean, it's still a big node shrink. It's going to be far better than Turing, and people will eat it up. "3070 is 2080 Ti-tier performance with more features and better RT performance? Amazing!" is what the reactions will be like. Ampere will sell like hotcakes. It will just be worse than it would have been on N7, and it will come with the caveat of increased power consumption, which most people will overlook.
 
  • Like
Reactions: Gideon and Konan

Tup3x

Golden Member
Dec 31, 2016
1,086
1,084
136
Then why have they cancelled the 103 die and are releasing the RTX 3080 using the 102 die, which has never happened before?
No point speculating about that at this point; it gets more interesting once we know the facts. If they want to push RT hard, that might be one reason. Also, who knows, maybe they got a sweet deal from Samsung such that die size isn't as important. (Are there even other relevant Samsung 8nm customers? Assuming it is Samsung.)
 

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
I don't think the Nvidia hype train has that much momentum this time around, to be honest. There is hype, but it's been reduced by the price increases and low performance of last gen. If people thought they could get a real high-end card for under $700 as they've always been able to in the past, I bet the hype train would be flying off the track right now. $700 puts you in the mid-range these days. That's just not exciting.
 

exquisitechar

Senior member
Apr 18, 2017
684
942
136
I don't think the Nvidia hype train has that much momentum this time around, to be honest. There is hype, but it's been reduced by the price increases and low performance of last gen. If people thought they could get a real high-end card for under $700 as they've always been able to in the past, I bet the hype train would be flying off the track right now. $700 puts you in the mid-range these days. That's just not exciting.
That no one has high expectations for the performance per dollar of the new cards is something that works in their favor. They can have high margins and still wow people with how much better Ampere is in that department than Turing. Samsung wafers are cheap and die sizes are down across the board compared to Turing. Considering that and, IMO, the threat that next-gen consoles pose to NV's low/mid range, I am not anticipating another price hike like Turing's.

Pascal owners need an upgrade and Turing really wasn't it for most people. Those people will be looking for a new card. The hype train will happen.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
That no one has high expectations for the performance per dollar of the new cards is something that works in their favor. They can have high margins and still wow people with how much better Ampere is in that department than Turing. Samsung wafers are cheap and die sizes are down across the board compared to Turing. Considering that and, IMO, the threat that next-gen consoles pose to NV's low/mid range, I am not anticipating another price hike like Turing's.

Pascal owners need an upgrade and Turing really wasn't it for most people. Those people will be looking for a new card. The hype train will happen.

We don't actually know the die sizes. I would expect them to be similar; the smaller node just lets them fit more onto a die of that size.
 
  • Like
Reactions: GodisanAtheist

Elfear

Diamond Member
May 30, 2004
7,127
738
126
Rushing? NVIDIA ALWAYS launches first: Maxwell launched first, Pascal the same, and Turing the same too. And in each case they had a far SUPERIOR product that held the crown by a wide gap. AMD only launched months later. This time looks no different; in fact, their marketing seems confident they have an amazing product on their hands this time, as the marketing campaign eclipses that of Turing and RTX.

I agree with you that Nvidia releasing Ampere in the near future is not due to them scrambling because of RDNA2. Nvidia typically has excellent business sense and execution. However, they do not ALWAYS launch first as you've stated. Fermi vs Evergreen? Kepler vs Southern Islands?
 

KompuKare

Golden Member
Jul 28, 2009
1,191
1,487
136
It is reported that NVIDIA's board partners are ready to launch their new graphics cards at the same time as NVIDIA unveils its reference models.

Now, that is a surprise. The AIBs will love it, but what made Nvidia change their minds? AIB backlash? And after four years of Founders Editions, why show the AIBs some love now?

Maybe they know something about RDNA2, or want to pre-empt the new consoles? Or has Zen given AMD a better relationship with the Taiwanese OEMs (most AIBs also make motherboards), and Nvidia fears this?

Anyway, good news for buyers.

90% efficiency of PSU and assuming a load of 90% (worst case) gives 206W.
8C/16T at 3.66 GHz draws 54W from Renoir measurements.
Sorry, but I keep seeing this mistake being made in far too many places. PSU efficiency does not work that way:
A 1000W PSU can output 1000W. If it is 80% efficient, it will draw 1250W (1000 / 0.8), leaving it to get rid of 250W of waste heat somehow (noisily).
A more realistic 90% efficient PSU would draw 1111W, leaving it to get rid of 111W of waste heat.
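
The arithmetic in the correction above can be written out as a minimal sketch; the function names are mine, and the numbers are the ones from the post:

```python
# PSU efficiency applies to the input (wall) draw, not as a multiplier
# on the rated output: wall draw = output / efficiency.

def wall_draw(output_w: float, efficiency: float) -> float:
    """Power drawn at the wall to deliver `output_w` to the components."""
    return output_w / efficiency

def waste_heat(output_w: float, efficiency: float) -> float:
    """Heat the PSU itself must dissipate (input minus output)."""
    return wall_draw(output_w, efficiency) - output_w

# 1000W delivered at 80% efficiency -> 1250W from the wall, 250W of heat
print(round(wall_draw(1000, 0.80)))   # 1250
print(round(waste_heat(1000, 0.80)))  # 250

# At a more realistic 90% -> ~1111W from the wall, ~111W of heat
print(round(wall_draw(1000, 0.90)))   # 1111
print(round(waste_heat(1000, 0.90)))  # 111
```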
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Now, that is a surprise. The AIBs will love it, but what made Nvidia change their minds? AIB backlash? And after four years of Founders Editions, why show the AIBs some love now?

Maybe they know something about RDNA2, or want to pre-empt the new consoles? Or has Zen given AMD a better relationship with the Taiwanese OEMs (most AIBs also make motherboards), and Nvidia fears this?

Anyway, good news for buyers.


Sorry, but I keep seeing this mistake being made in far too many places. PSU efficiency does not work that way:
A 1000W PSU can output 1000W. If it is 80% efficient, it will draw 1250W (1000 / 0.8), leaving it to get rid of 250W of waste heat somehow (noisily).
A more realistic 90% efficient PSU would draw 1111W, leaving it to get rid of 111W of waste heat.

He said 90% load. You never, ever want to run a PSU at 100% load. Typically 80% is the max you want, but it's not uncommon to see 90% on devices with a fixed power draw.

As for efficiency, it's not a fixed ratio between the wall and the output. I have seen a PSU's wall-to-output ratio vary depending on the load being applied, measured with a power analyzer on both sides.
 
  • Like
Reactions: Konan

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
He said 90% load. You never, ever want to run a PSU at 100% load. Typically 80% is the max you want, but it's not uncommon to see 90% on devices with a fixed power draw.

As for efficiency, it's not a fixed ratio between the wall and the output. I have seen a PSU's wall-to-output ratio vary depending on the load being applied, measured with a power analyzer on both sides.

That is normal; all PSUs have an efficiency-to-load curve, like the one below.

[Image: PSU efficiency vs. load curve (st85f-gs-v2-01.jpg)]
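
The shape AtenRa describes can be sketched with a toy curve; the sample points below are illustrative assumptions (efficiency peaking around 50% load), not data for any real unit, and `efficiency_at` is a helper name of my own:

```python
# Hypothetical efficiency-vs-load points: (load fraction, efficiency).
# Real curves from reviews look similar: lower at the extremes, peaking mid-load.
CURVE = [(0.10, 0.85), (0.20, 0.89), (0.50, 0.92), (0.80, 0.90), (1.00, 0.87)]

def efficiency_at(load: float) -> float:
    """Linearly interpolate efficiency at a given load fraction (0.0-1.0)."""
    if load <= CURVE[0][0]:
        return CURVE[0][1]
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if load <= x1:
            t = (load - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return CURVE[-1][1]

# Wall draw for a 500W DC load on a 1000W unit (50% load, near peak efficiency)
print(round(500 / efficiency_at(0.5)))  # 543
```

This is why wall-to-output measurements vary with load, as Stuka87 observed: the divisor itself moves along the curve.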
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,166
7,666
136
We don't actually know the die sizes. I would expect them to be similar; the smaller node just lets them fit more onto a die of that size.

- I think this is what we're going to see. Unfortunately, Turing dies were so large that even a straight shrink would still result in historically large dies at each tier. I wonder if we're going to see a more Maxwell-esque 600/400/200mm² breakdown this time around.

You get a shrink, you get to pack some more stuff on, and you still have room for a 750mm² monster-die refresh...
 
  • Like
Reactions: Stuka87

MrTeal

Diamond Member
Dec 7, 2003
3,614
1,816
136
He said 90% load. You never, ever want to run a PSU at 100% load. Typically 80% is the max you want, but it's not uncommon to see 90% on devices with a fixed power draw.

As for efficiency, it's not a fixed ratio between the wall and the output. I have seen a PSU's wall-to-output ratio vary depending on the load being applied, measured with a power analyzer on both sides.
He actually did both: he used 258W * 0.9 (load) * 0.9 (eff) to arrive at the 208W number. Still, it's not a huge difference either way, since the assumptions likely have far more variability in them than that 10%. If you really want to talk SoC draw, you're still going to have another loss getting down to ~1V, so there will be another factor in there anyway. You'd usually ignore that when comparing to a GPU, though, as we're usually looking at card draw at 12V there.
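
For concreteness, the estimate being reconstructed above works out like this; the 90% load factor, 90% efficiency, and 54W CPU figure are all the thread's assumptions, and the subtraction to isolate a GPU figure is my illustrative reading of where the numbers were headed:

```python
# 258W reading scaled by an assumed 90% load factor and 90% PSU efficiency.
wall_w = 258
dc_w = wall_w * 0.9 * 0.9   # 258 * 0.81 = 208.98W on the DC side
                            # (the thread rounds this to 208W)

# Subtracting the ~54W 8C/16T Renoir-based CPU figure quoted earlier
# leaves a rough ceiling for the rest of the system, mostly the GPU.
gpu_w = dc_w - 54

print(round(dc_w), round(gpu_w))  # 209 155
```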
 
  • Like
Reactions: Stuka87

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Sorry, but I keep seeing this mistake being made in far too many places. PSU efficiency does not work that way:
1000W PSU can output 1000W. If it is 80% efficient it would draw 1250W (1000 / 0.8) leaving it to get rid of 250W of waste heat somehow (noisy).
A more realistic 90% efficient PSU would draw 1111W leaving it get rid of 111W of waste heat.

Thanks for the correction.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But your "20% faster" is a typical red herring you chose randomly so you can claim you're right.

"NVidia killer"

I don't work for AMD, so I'm not sure why you think I made that claim. They did not call it an "Nvidia competitor" or an "Nvidia beater"; they called it an "Nvidia killer". But you think being competitive somehow makes it a killer? Really?

I think 20% is a very low bar for a "killer".