'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
How much of a gain is the Samsung 7nm EUV process expected to provide?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed up RTX' options at the top?)
Will the top card be capable of more than 4K60, ideally 4K90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for; I'm just interested in the forum members' thoughts.
 

Mopetar

Diamond Member
Jan 31, 2011
That is incorrect; there are double the FP32 ALUs

It will be interesting to see how it works out in reality. In some ways it's similar to the approach AMD started taking with Fury, where they added more and more shaders while Nvidia focused on fewer but more effective shaders.

I'll give Nvidia some benefit of the doubt because they have some really good engineers who no doubt spent a lot of time grappling with the issue, but Fury always had the problem of keeping all of those shaders fed and busy.

If nothing else it might make the 3080 look even more attractive compared to the 3090 than it already is.
 

Konan

Senior member
Jul 28, 2017
If NVIDIA can rush GA103 (60 SMs, or 7680 cores after doubling) onto a 7nm process in time for Big Navi (end of 2020), will the rest of the family be 7nm too? If not, why isn't the whole family on 7nm?

Don't think it needs to be on 7nm. Why not continue on 8nm?
I don't think we'll see a refresh on 7, to be honest, at least not for quite a while.
 

Gideon

Golden Member
Nov 27, 2007
I didn't say it didn't. Neither did I claim it previously.

They increased compute cores without boosting anything else, so the gain isn't as big as expected.

That's true, but it's still a major benefit. AMD's GPUs have long been really compute-heavy and have thus aged quite a bit better than comparable Nvidia GPUs. I expect the 3xxx series to likewise age much better than the 2xxx series once more shader-heavy games appear in the future.
 

JoeRambo

Golden Member
Jun 13, 2013
Generally speaking, I don't see another chip between GA104 (48 SM) and the cut-down GA102 (68 SM); they're already too close IMO. The other thing is that I think a potential GA103 on TSMC's N7 would make GA102 look really bad, especially in terms of efficiency.

Remember the 8800 GT? It had nearly the speed of the 8800 GTX but used about 66% of the power. The same could happen with GA103 versus the 3080: say, 95% of the performance at 66% of the power, with 20GB of DRAM.
It would certainly make a sweet ~200W card (quick math below).

Of course, the 8800 GT had the advantage of the jump to 65nm, and I doubt Samsung 8nm => TSMC 7nm is that much of an efficiency difference, but NV in the past did not shy away from cannibalizing its own products if it meant a great product and a ton of sales.
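A quick back-of-the-envelope check on that scenario, as a minimal sketch: the 95%/66% figures are the hypotheticals from above, and the 320W RTX 3080 board power is the announced spec.

```python
# Hypothetical GA103-vs-RTX 3080 efficiency check (speculative figures from above).
rtx3080_perf = 1.00    # normalized RTX 3080 performance
rtx3080_power = 320.0  # W, announced RTX 3080 board power

ga103_perf = 0.95 * rtx3080_perf    # "95% of the performance"
ga103_power = 0.66 * rtx3080_power  # "66% of the power"

perf_per_watt_gain = (ga103_perf / ga103_power) / (rtx3080_perf / rtx3080_power)
print(f"GA103 power: {ga103_power:.0f} W")               # ~211 W, near the ~200 W guess
print(f"Perf/W vs RTX 3080: {perf_per_watt_gain:.2f}x")  # ~1.44x
```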
 

Glo.

Diamond Member
Apr 25, 2015
Don't think it needs to be on 7nm. Why not continue on 8nm?
I don't think we'll see a refresh on 7, to be honest, at least not for quite a while.
Because Nvidia needs something to compete with Navi 22.

And reusing 102 dies is too expensive.

If the RTX 3080 is 68 SMs with 320-bit GDDR6X, then a 103 die with 60 SMs and 256-bit GDDR6X will get you around 85% of RTX 3080 performance (rough math sketched below).

And don't forget: already with 2944 ALUs/5888 CUDA cores, the RTX 3070 is 220W TDP, and a full 48 SM part with GDDR6X would be 250W TDP.

Don't expect a 60 SM design with a 256- or 320-bit bus to be any less than 300W on the 8nm process.

And that is not going to compete with a 60 CU RDNA2 GPU that will use around 200-225W of power.
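A rough sketch of how an estimate like that ~85% can fall out of the SM and bus-width ratios. The 60/40 compute-vs-bandwidth weighting below is purely an illustrative assumption, not anything from Nvidia or from the post above.

```python
# Rough relative-performance estimate for a 60 SM / 256-bit GA103 vs the
# 68 SM / 320-bit RTX 3080. The 60/40 weighting is an assumed split between
# compute-bound and bandwidth-bound workloads, chosen only for illustration.
sm_ratio = 60 / 68     # shader throughput ratio -> ~0.88
bus_ratio = 256 / 320  # memory bandwidth ratio  -> 0.80

est_perf = 0.6 * sm_ratio + 0.4 * bus_ratio
print(f"Estimated performance vs RTX 3080: {est_perf:.0%}")  # ~85%
```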
 

sontin

Diamond Member
Sep 12, 2011
What? A 60 CU RDNA2 will only be as fast as a 3070. Nvidia doesn't need any other chips. The 3070 is faster at compute, faster at ray tracing, and faster at DL.

Maybe AMD needs something bigger. But Nvidia has covered every corner.
 

sontin

Diamond Member
Sep 12, 2011
Sure. A 3070 is 60%+ faster than a 5700 XT. Maybe you should explain how a 60 CU Navi card beats this at the same power consumption.
 

Zstream

Diamond Member
Oct 24, 2005
What? A 60 CU RDNA2 will only be as fast as a 3070. Nvidia doesn't need any other chips. The 3070 is faster at compute, faster at ray tracing, and faster at DL.

Maybe AMD needs something bigger. But Nvidia has covered every corner.
That's the most ridiculous statement I've heard. The XBX GPU is equal to the 2080 Super and sits at 140-150W. It's not difficult to reach 3080 performance at 275W. In fact, with the decision to go less dense and take more performance from the node, it could likely go to 300W while staying decently cool with an OK cooler.

Anyway, this is completely off topic for this thread. The new GPUs will be nice, but the $$$ is too much. I can't see people spending this much on new hardware when the next-gen consoles are coming out for less than a single GPU. You get a flipping mobo, RAM, GPU, OS, etc. for the price of a 3080. Good luck with that.
 

sontin

Diamond Member
Sep 12, 2011
No, the XBSX is not equal to a 2080S. It performs around a 2080. And a console SoC is always more efficient than a standalone GPU. To be faster than a 3070, AMD has to deliver 70% more compute performance within 220W.
 

Zstream

Diamond Member
Oct 24, 2005
No, the XBSX is not equal to a 2080S. It performs around a 2080. And a console SoC is always more efficient than a standalone GPU. To be faster than a 3070, AMD has to deliver 70% more compute performance within 220W.
Who told you it's equal to a 2080? It surpasses the 2080 in all respects. And no, it doesn't need to stay within 220W; that's an arbitrary rule you've invented for your own argument. It's baseless and silly.
 

Glo.

Diamond Member
Apr 25, 2015
Sure. A 3070 is 60%+ faster than a 5700 XT. Maybe you should explain how a 60 CU Navi card beats this at the same power consumption.
Maybe you should first explain where you got the idea that the 3070 is 60% faster than the RX 5700 XT.

60% faster than the RX 5700 XT would mean it is 15% faster than the RTX 2080 Ti, which would make it only 10-15% slower than the RTX 3080 (arithmetic sketched at the end of this post). Do you genuinely believe anybody here is stupid enough to buy what you are selling?

Secondly.


Even according to Nvidia's own materials, the RTX 3070 is 43% faster than the RX 5700 XT in an Unreal Engine 4 game at 1440p.

That's not really painting a good picture for Ampere, is it?

Thirdly: stop spreading BS about Ampere and RDNA2-based GPUs.
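For reference, the relative-performance chaining behind the "60% faster would mean 15% faster than a 2080 Ti" step above; the 1.39x factor for the 2080 Ti over the 5700 XT is an assumed review-aggregate figure consistent with that arithmetic, not a sourced number.

```python
# Chaining relative-performance claims (the 1.39x 2080 Ti vs 5700 XT factor
# is an assumed review-aggregate figure, not a sourced benchmark result).
r_3070_vs_5700xt = 1.60    # the disputed claim: 3070 is 60% faster
r_2080ti_vs_5700xt = 1.39  # assumption: 2080 Ti is ~39% faster than 5700 XT

r_3070_vs_2080ti = r_3070_vs_5700xt / r_2080ti_vs_5700xt
print(f"Implied 3070 vs 2080 Ti: {r_3070_vs_2080ti:.2f}x")  # ~1.15x, i.e. ~15% faster
```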
 

Gideon

Golden Member
Nov 27, 2007
Even according to Nvidia's own materials, the RTX 3070 is 43% faster than the RX 5700 XT in an Unreal Engine 4 game at 1440p.
I wonder if they used DX11 or DX12 in the Borderlands 3 test.

According to TechPowerUp, the 2080 Ti is 49% faster than the 5700 XT at 1440p in DX11, but only 30% faster in DX12.

In DX11 the 5700 XT is about equal to a vanilla 2070; in DX12 it's faster than a 2070 SUPER (the 5700 XT gains 6 FPS while all the Nvidia cards lose a few).

Regardless, it seems that in this game the 3070 doesn't quite reach 2080 Ti levels.
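A quick sketch of why the API matters for that placement, using only the figures quoted in this post (Nvidia's +43% claim and TechPowerUp's DX11/DX12 deltas):

```python
# Where Nvidia's "+43% vs 5700 XT" figure places the 3070 against the
# 2080 Ti, depending on which API the 2080 Ti delta was measured under.
r_3070_vs_5700xt = 1.43
r_2080ti_vs_5700xt = {"DX11": 1.49, "DX12": 1.30}  # TechPowerUp deltas above

for api, r in r_2080ti_vs_5700xt.items():
    print(f"{api}: 3070 at {r_3070_vs_5700xt / r:.0%} of a 2080 Ti")
# DX11: 3070 at 96% of a 2080 Ti  -> slightly slower
# DX12: 3070 at 110% of a 2080 Ti -> clearly faster
```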
 

insertcarehere

Senior member
Jan 17, 2013

Even according to Nvidia's own materials, the RTX 3070 is 43% faster than the RX 5700 XT in an Unreal Engine 4 game at 1440p.

That's not really painting a good picture for Ampere, is it?

Nvidia's graphs only compared the 3070 to the 2070 and 1070 in Borderlands 3 (they didn't do any RDNA comparisons at all). The numbers this poster is quoting come from comparing Nvidia's benchmark figures against Techspot's, even though the tests were obviously not done under identical conditions (Techspot, for one, didn't use the in-game benchmark tool, while Nvidia most likely would have). Let's wait for actual reviews to come out before claiming it goes one way or the other.
 

Gideon

Golden Member
Nov 27, 2007
Nvidia's graphs only compared the 3070 to the 2070 and 1070 in Borderlands 3 (they didn't do any RDNA comparisons at all). The numbers this poster is quoting come from comparing Nvidia's benchmark figures against Techspot's, even though the tests were obviously not done under identical conditions (Techspot, for one, didn't use the in-game benchmark tool, while Nvidia most likely would have). Let's wait for actual reviews to come out before claiming it goes one way or the other.

Well it is a speculation thread :D

But yeah, overall I agree. The results vary enough between reviewers that you can't really read too much into it. The 2080 Ti, for instance, got 90 FPS @ 1440p according to Techspot but only 78.5 FPS according to TechPowerUp. That delta is so wide that the 3070 could be either faster or slower than the 2080 Ti.
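To put that delta in perspective, a one-liner using the two figures above:

```python
# Reviewer-to-reviewer spread for the same card in the same game (from above).
techspot_fps = 90.0     # 2080 Ti @ 1440p, Techspot
techpowerup_fps = 78.5  # 2080 Ti @ 1440p, TechPowerUp

spread = techspot_fps / techpowerup_fps - 1
print(f"Spread between reviewers: {spread:.0%}")  # ~15%, wider than the gap in dispute
```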
 

Konan

Senior member
Jul 28, 2017
Because Nvidia needs something to compete with Navi 22.

And reusing 102 dies is too expensive.

If the RTX 3080 is 68 SMs with 320-bit GDDR6X, then a 103 die with 60 SMs and 256-bit GDDR6X will get you around 85% of RTX 3080 performance.

And don't forget: already with 2944 ALUs/5888 CUDA cores, the RTX 3070 is 220W TDP, and a full 48 SM part with GDDR6X would be 250W TDP.

Don't expect a 60 SM design with a 256- or 320-bit bus to be any less than 300W on the 8nm process.

And that is not going to compete with a 60 CU RDNA2 GPU that will use around 200-225W of power.

Nvidia has gone first and can now cut and segment as much as it wants. There is tons of room: between the 3070 and 3080 there is up to 30-40% of performance headroom to fill.
Please don't forget that the 3070 has DLSS and RT too! Power doesn't matter when you have a performance advantage. Remember, the 3070 will at minimum be 7% above a 2080 Ti.

PS. Enjoy your time off?
 

Konan

Senior member
Jul 28, 2017
NVIDIA_DEV.2204 = "NVIDIA GeForce RTX 3090"
NVIDIA_DEV.2206 = "NVIDIA GeForce RTX 3080"
NVIDIA_DEV.222B = "NVIDIA GeForce RTX 3080 Ti Engineering Sample"
NVIDIA_DEV.222F = "NVIDIA GeForce RTX 3080 11GB Engineering Sample"
NVIDIA_DEV.223F = "NVIDIA GA102GL"
NVIDIA_DEV.2482 = "NVIDIA GeForce RTX 3070 SUPER"
NVIDIA_DEV.2484 = "NVIDIA GeForce RTX 3070"
NVIDIA_DEV.24AF = "NVIDIA GeForce RTX 3070 Engineering Sample"
NVIDIA_DEV.24BF = "NVIDIA GeForce RTX 3070 Engineering Sample"
NVIDIA_DEV.252F = "NVIDIA GeForce RTX 3060 Engineering Sample"

Pulled from the current PCI ID Repository, showing newly added support.
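For anyone who wants to diff these dumps between driver releases, a minimal parsing sketch; the regex and function below are written for this line format and are not any official Nvidia tooling.

```python
import re

# Parse INF-style device-ID lines like the dump above into {hex_id: name}.
# The regex tolerates optional whitespace around the dot and equals sign.
LINE_RE = re.compile(r'NVIDIA_DEV\s*\.\s*([0-9A-Fa-f]{4})\s*=\s*"([^"]+)"')

def parse_device_ids(text):
    """Return a dict mapping PCI device IDs (e.g. '2204') to product names."""
    return {m.group(1).upper(): m.group(2) for m in LINE_RE.finditer(text)}

sample = 'NVIDIA_DEV.2204 = "NVIDIA GeForce RTX 3090"'
print(parse_device_ids(sample))  # {'2204': 'NVIDIA GeForce RTX 3090'}
```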
 

Glo.

Diamond Member
Apr 25, 2015
Nvidia has gone first and can now cut and segment as much as it wants. There is tons of room: between the 3070 and 3080 there is up to 30-40% of performance headroom to fill.
Please don't forget that the 3070 has DLSS and RT too! Power doesn't matter when you have a performance advantage.
Let me repeat it one more time.

The fact that Nvidia is developing a 60 SM GPU is because it's too expensive to cut 102 dies down to Navi 22 levels of performance, and the 104 die is way too weak to compete with that GPU, EVEN WITH DLSS.

So where do you get the idea of a performance advantage?