Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
How much is Samsung's 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Since we are not talking about the same die sizes here, what you are saying is irrelevant.

AMD is the number ONE 7nm customer at TSMC currently with 30K wpm; I'm sure they got a very nice deal.

Only TSMC gets a "very nice deal". Without competition, every IHV will pay big money to get wafers.
So Ampere is a big deal for the whole business. The last big non-TSMC GPU from nVidia was NV40 from IBM.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
OK, I have created an Ampere SM approximation to see the difference in the new CUDA core partitioning.

Inside the red square is the new CUDA core partition that can execute one 16x FP32 or one 16x INT32 operation per cycle.
With the addition of the second 16x FP32 partition, they can now execute 16x FP32 + 16x FP32 per cycle, or 16x FP32 + 16x INT32.
So now they can do 128x FP32 versus 64x FP32 per SM, and that's the doubled throughput they get in Ampere.
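To make that arithmetic concrete, here is a minimal sketch (mine, not AtenRa's diagram) that tallies per-SM issue rates under the partition scheme described above. The 4-partitions-per-SM figure matches NVIDIA's published SM layouts; the INT32 mix ratio is just an illustrative knob.

```python
# Per-SM FP32 throughput under the partition scheme described above.
# Assumption: 4 processing blocks (partitions) per SM, as in NVIDIA's
# Turing/Ampere whitepapers; int_fraction is an illustrative parameter
# for how much of the workload issues as INT32 instead of FP32.

PARTITIONS_PER_SM = 4
LANES = 16  # each datapath is 16 lanes wide

def turing_fp32_per_clock():
    # One 16-wide FP32 datapath plus one 16-wide INT32 datapath per
    # partition; only the FP32 path contributes FP32 work.
    return PARTITIONS_PER_SM * LANES  # = 64

def ampere_fp32_per_clock(int_fraction=0.0):
    # One dedicated FP32 datapath plus one shared FP32/INT32 datapath.
    # Whatever share of the shared path issues INT32 is lost to FP32.
    dedicated = PARTITIONS_PER_SM * LANES
    shared = PARTITIONS_PER_SM * LANES * (1.0 - int_fraction)
    return dedicated + shared  # = 128 with a pure-FP32 workload

print(turing_fp32_per_clock())      # 64
print(ampere_fp32_per_clock())      # 128.0 (pure FP32)
print(ampere_fp32_per_clock(0.35))  # ~105.6 with a 35% INT32 mix
```

The last line is why real-game gains land below the headline 2x: any INT32 work eats into the shared datapath.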

Ampere GA102 SM Approximation

SM-GA102-2.png


And Turing

Turing-SM.png
 

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
Only TSMC gets a "very nice deal". Without competition, every IHV will pay big money to get wafers.
So Ampere is a big deal for the whole business. The last big non-TSMC GPU from nVidia was NV40 from IBM.

Erm, what?

Pascal was done on both Samsung and TSMC. They started with TSMC before switching to Samsung.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Only TSMC gets a "very nice deal". Without competition, every IHV will pay big money to get wafers.
So Ampere is a big deal for the whole business. The last big non-TSMC GPU from nVidia was NV40 from IBM.

I don't know what this has to do with what I said about AMD's deal with TSMC on the 7nm wafer price.
Since AMD is the number one customer at 7nm, they get much better prices than they got last year.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
No, they don't get better prices. Why should they? TSMC has no competition, so AMD is paying whatever TSMC wants.

Not sure if you are joking or not here.

When you negotiate a deal with someone to buy more product than their other customers, you get better prices than you had before.
It's simple, as JHH was saying:

The more you buy, the less you pay :D
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Tend to agree with others: AMD has limited wafer allocation, and with that:
1) They have to make lots of console chips; this must be in the contract.

For the rest, do you:
a) Make lots of CPUs, where you are already ahead of your competitor performance-wise, and your competitor is still busy shooting itself in the foot, or
b) Make GPUs, where you are currently behind your competitor in performance and features, and your competitor just released a new range with one of the biggest performance uplifts ever?

It's not rocket science: they will make CPUs. They'll do just enough GPUs to stay in the market.
 

maddie

Diamond Member
Jul 18, 2010
4,723
4,628
136
Citation needed. You are using that number all over the thread without a single credible source (I doubt it exists), so I have to assume you pulled it out of a very dark place.

At best it helps NV keep their margins higher, but without the new powerful consoles and AMD being competitive, this price correction would not have happened. In fact the correction was overdue anyway because, just like Intel was for a long time, NV was its own competition. They needed a big performance/$ improvement to get their existing users to upgrade.

Datacenter increasing margins, yes, makes sense, but not that much. Plus, on some level the huge dies NV is making are an artifact of that, because many of these transistors are simply there for datacenter/workstation use and not gaming. I'm pretty sure DLSS was born from the need to make use of these otherwise useless transistors for gaming.

The Reddit Nvidia Q&A had this interesting statement from an Nvidia rep:

"Ray tracing denoising shaders are good examples that might benefit greatly from doubling FP32 throughput."

So much for needing the Tensor cores for this.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Tend to agree with others: AMD has limited wafer allocation, and with that:
1) They have to make lots of console chips; this must be in the contract.

For the rest, do you:
a) Make lots of CPUs, where you are already ahead of your competitor performance-wise, and your competitor is still busy shooting itself in the foot, or
b) Make GPUs, where you are currently behind your competitor in performance and features, and your competitor just released a new range with one of the biggest performance uplifts ever?

It's not rocket science: they will make CPUs. They'll do just enough GPUs to stay in the market.

AMD needs approximately 5K wafers per month for 1 million XBOX Series X console chips (360mm2, ~200 good dies per wafer).
In H2 2020 AMD has 30K 7nm wafers per month at TSMC, being the number one 7nm customer.
Also of note, and not many have realized this: AMD increased its inventory by ~30% last quarter (Q2 2020) to $1.3B, and they had a 6-month lead until the new console launch in December 2020.

So, they have enough wafer capacity for their CPUs, GPUs and the new console chips.

On the other hand, NVIDIA's 627mm2 Ampere GA102 chip, found in the RTX 3080 and RTX 3090, can yield fewer than 50 good dies per wafer.
That means they need 4x the wafer volume for the same number of chips. Because of this, I don't see a lot of RTX 3080/3090 cards in retail stock in the first 2-3 months after the Ampere launch.
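For anyone who wants to sanity-check these dies-per-wafer figures, here is a minimal sketch (my own, not AtenRa's math). It uses the standard edge-corrected gross-die approximation and a simple Poisson yield model; the 0.09 defects/cm2 value is purely illustrative, and the real numbers depend heavily on the actual defect density and on how many partially defective dies can be salvaged with disabled units.

```python
import math

WAFER_DIAMETER_MM = 300  # assumption: standard 300 mm wafers

def gross_dies(die_area_mm2, d=WAFER_DIAMETER_MM):
    """Edge-corrected approximation of candidate dies per wafer."""
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2, defects_per_cm2=0.09):
    """Good dies per wafer under a Poisson yield model.
    The defect density here is illustrative, not a known figure."""
    yield_rate = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return int(gross_dies(die_area_mm2) * yield_rate)

print(good_dies(360))  # console-class die (360 mm^2)
print(good_dies(627))  # GA102-class die (627 mm^2) -> roughly 50
```

With these toy assumptions the 627mm2 die lands right around the "fewer than 50 good dies" figure above, while the 360mm2 console estimate is much more sensitive to the assumed defect density and salvage rate.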
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
nVidia has made it clear that their gaming business is growing 25% quarter-over-quarter. They will have enough supply to hit this target. Demand, on the other hand...
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
No, they don't get better prices. Why should they? TSMC has no competition, so AMD is paying whatever TSMC wants.

Apparently you are unaware of this thing called 'economies of scale'. If you buy one of something, you pay X price. If you buy a thousand of something, you pay LESS per unit than if you bought one. If you buy tens of thousands of something, with a long-term order, you pay less than in either of the previous situations. TSMC WANTS people to place large, long-term orders; as such, they give better deals to their large, long-term customers.

Or TSMC is just giving the wafer allocation to nVidia and co. Maybe AMD can go back to GlobalFoundries...

Huh? Seriously, what does this even mean?
 

Head1985

Golden Member
Jul 8, 2014
1,863
685
136

The 3080 running at 4K resolution is roughly 40% - 50% faster in average FPS compared to the 2080 Ti.....

F in chat for anyone who bought a 2080 Ti in the last few months.
The 3080 is the card to get this time. The 3070 and 3090 look like crap compared to it.
With the 3090 you pay 2x more for maybe 10-15% more performance.
The 3070 will be 40-50% slower and have less VRAM, for only $200 less.
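Taking the post's rough performance deltas at face value and pairing them with the Founders Edition launch MSRPs, here's a quick perf-per-dollar sketch; the relative-performance figures are the poster's estimates, not benchmarks.

```python
# Perf-per-dollar using the rough deltas from the post above and the
# Founders Edition launch MSRPs. Relative performance numbers are the
# poster's estimates (3080 = 1.0 baseline), not measured results.

cards = {
    #  name    : (relative perf vs 3080, launch MSRP in USD)
    "RTX 3070": (0.55, 499),    # "40-50% slower" -> midpoint
    "RTX 3080": (1.00, 699),
    "RTX 3090": (1.125, 1499),  # "10-15% more perf" -> midpoint
}

for name, (perf, price) in cards.items():
    print(f"{name}: {1000 * perf / price:.2f} perf per $1000")
# RTX 3070: 1.10 | RTX 3080: 1.43 | RTX 3090: 0.75
```

On those numbers the 3080 is comfortably the value pick, which is the post's point.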
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So if Nvidia sells out within minutes, it's because of demand?

Just ask yourself one question.... Is it love or just a schoolboy crush?

Yes? Look up what the "Osborne effect" is. The 3080 is more than twice as fast as the 5700 XT, which was sold last year for >$400 with 8GB. The demand will be there.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,248
136
Yes? Look up what the "Osborne effect" is. The 3080 is more than twice as fast as the 5700 XT, which was sold last year for >$400 with 8GB. The demand will be there.

So you really have no idea in the end. You're just guessing that supply will be plentiful while demand will be greater.

Why would you bring up the 5700 XT? Pretty much anybody that has one and wants to sell it now can get most of their money back thanks to the mining craze.

On another note.... How's the resale value on the 2080 Ti? I'm guessing you didn't mention that for a reason.

About the only credit I'll give you is that you're consistently biased towards Nvidia.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
It definitely looks like the 3080 is going to be a great performance-per-dollar card. The memory might rear its ugly head a few years down the road, but considering where both the 3070 and 3090 land performance-wise and what they cost, it's hard to consider anything else if you're going to go with NVidia.

Even if it's a power hog, it's going to be winter soon and it can double as a space heater. :p
 
Feb 4, 2009
34,506
15,731
136
It definitely looks like the 3080 is going to be a great performance-per-dollar card. The memory might rear its ugly head a few years down the road, but considering where both the 3070 and 3090 land performance-wise and what they cost, it's hard to consider anything else if you're going to go with NVidia.

Even if it's a power hog, it's going to be winter soon and it can double as a space heater. :p

Way out of my price range, but I agree.
What I don't understand is why it is so difficult to get more memory on cards lately.
Seems like an 8GB card is rare, and 10GB seems like a weird number.
Why is it so difficult to have 8GB be the gaming normal and 16GB be the halo product?
I understand there may not be a use case for that memory, but the market appears to want it.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
Not sure if you are joking or not here.

When you negotiate a deal with someone to buy more product than their other customers, you get better prices than you had before.
It's simple, as JHH was saying:

The more you buy, the less you pay :D

In the past, TSMC had something like a bidding process for wafer allocation priority. Apple always won because they would pay more and buy risk wafers, which helped TSMC refine their process.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Way out of my price range but I agree.
What I don’t understand is why is it so difficult to get more memory on cards lately?
Seems like an 8GB card is rare and 10GB seems like a weird number.
Why is it so difficult to have 8GB be the gaming normal and 16GB be the halo product?
I understand there may not be a use case for that memory but the market appears to want it.

I think it just comes down to using GDDR6X and having to choose between 1 GB or 2 GB modules. So either you get 10 GB, which feels too small, or you pay a lot more for 20 GB, which seems a bit more than you'd need for the life of the card. If they could get someone to make some 1.5 GB modules and offer 15 GB, that would hit a perfect sweet spot.
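As a quick illustration of that module math (a sketch based on the post, assuming the RTX 3080's 320-bit bus, i.e. ten 32-bit channels with one module each): capacity is simply channels times module density, which is why the options are so coarse. The 1.5 GB module is hypothetical, as the post says.

```python
# VRAM options on a 320-bit GDDR6X bus: ten 32-bit channels, one
# module per channel, all modules the same density. The 1.5 GB part
# is hypothetical (as floated in the post above); only 1 GB GDDR6X
# modules shipped at Ampere's launch.

CHANNELS = 10  # 320-bit bus / 32 bits per GDDR6X module

for module_gb in (1.0, 1.5, 2.0):
    print(f"{CHANNELS} x {module_gb:g} GB = {CHANNELS * module_gb:g} GB")
# 10 x 1 GB   = 10 GB  (RTX 3080 as shipped)
# 10 x 1.5 GB = 15 GB  (the hypothetical sweet spot)
# 10 x 2 GB   = 20 GB  (the pricey option)
```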

In the past, TSMC had something like a bidding process on wafer allocation priority. Apple always won because they would pay more and buy risk wafers, which helped TSMC refine their process.

That's not much different from any other business. If you want more of their supply, just offer to pay more. It discourages inefficient use.

Apple can sell their devices at a premium and pass on any extra cost. Making smaller SoCs is also more tolerable while a process is still working out the kinks and has a higher defect rate. If you're making massive GPU dies, they have to be for the enterprise market, where they can sell for thousands or tens of thousands of dollars instead of hundreds.
 
Feb 4, 2009
34,506
15,731
136
I think it just comes down to using GDDR6X and having to choose between 1 GB or 2 GB modules. So either you get 10 GB, which feels too small, or you pay a lot more for 20 GB, which seems a bit more than you'd need for the life of the card. If they could get someone to make some 1.5 GB modules and offer 15 GB, that would hit a perfect sweet spot.



That's not much different from any other business. If you want more of their supply, just offer to pay more. It discourages inefficient use.

Apple can sell their devices at a premium and pass on any extra cost. Making smaller SoCs is also more tolerable while a process is still working out the kinks and has a higher defect rate. If you're making massive GPU dies, they have to be for the enterprise market, where they can sell for thousands or tens of thousands of dollars instead of hundreds.

Ah, so they all need to be the same size; no doing 6x2GB + 4x1GB for 16GB total.
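For what it's worth, mixed densities are possible but create an asymmetric memory layout: the extra capacity on the 2 GB modules can only be reached through a subset of the channels, so bandwidth drops once the evenly interleaved region is full (the GTX 660's 2 GB on a 192-bit bus was a well-known case). A hypothetical sketch of how the 6x2GB + 4x1GB split asked about above would divide up:

```python
# Hypothetical 6x2GB + 4x1GB mix on a 320-bit bus (the config asked
# about above). The first 1 GB on every module interleaves across all
# ten channels; the remaining capacity lives only on the 2 GB modules.

modules_gb = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]  # one module per channel

even_region = min(modules_gb) * len(modules_gb)  # full-speed region
slow_region = sum(modules_gb) - even_region      # reduced-width region
wide_channels = sum(1 for m in modules_gb if m == 2)
print(f"{even_region} GB at 320-bit, {slow_region} GB at "
      f"{32 * wide_channels}-bit")
# -> 10 GB at 320-bit, 6 GB at 192-bit
```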