Question 'Ampere'/Next-gen gaming uarch speculation thread

Page 80

Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in forum members' thoughts.
 

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
You can help mitigate routing and component issues using blind and buried vias, and moving to even more layers could help some
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? :p Of course... You could just frankenstein a separate edge connector onto a thicker PCB I suppose, but considering motherboard makers are loath to go above even 4 layers, I'm sure they won't be happy to pile on the layers for graphics cards even though they're considerably smaller in size... :p
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? :p Of course... You could just frankenstein a separate edge connector onto a thicker PCB I suppose, but considering motherboard makers are loath to go above even 4 layers, I'm sure they won't be happy to pile on the layers for graphics cards even though they're considerably smaller in size... :p

You could easily have extra layers that end before the edge connector. But that would add cost, and for no real gain.
 
  • Like
Reactions: FaaR

MrTeal

Diamond Member
Dec 7, 2003
3,614
1,816
136
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? :p Of course... You could just frankenstein a separate edge connector onto a thicker PCB I suppose, but considering motherboard makers are loath to go above even 4 layers, I'm sure they won't be happy to pile on the layers for graphics cards even though they're considerably smaller in size... :p
LOL... It's way more than 4. Mine was 10 and that's not atypical, even in a standard 1/16" thick PCB you'd put in a PCIe slot. Edit: Just double-checked, that one was 10.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
Yes, because there can only be one leaker.
Let me remind you what you posted a few days ago:

So, this guy is tweeting fakes just to correct himself a few days prior to the launch. Great source, btw.

Compare this to what Kopite nailed, and what turned out to be true:

Between you and Kopite, it's Kopite who has far higher credibility.

You embarrass yourself with each word when you doubt what he is writing.
 

beginner99

Diamond Member
Jun 2, 2009
5,233
1,610
136
However, I heard Nvidia is very iffy about making another Titan, as sales on them are very dismal. People would rather go full Quadro line than go Titan, as Quadros have better work performance, and the marginally better Titan gaming experience is not worth it.

The Titan V was simply too expensive for anything gaming; its only use case was workstations for people who needed the compute/tensor cores for cheaper than Quadro/Tesla pricing and didn't need more memory. Of course that market is tiny. Researchers in corporations usually don't have a workstation next to their desk; they connect to a compute server and work that way. And due to NV licensing, you can only put Quadros or Teslas in servers.

On top of that, I expect big OEMs to not offer Titans in workstations at all, only Quadros, because the margins are higher for them as well, and customers don't know any better that a 50% cheaper GPU would work just as well.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
Do you think AMD is going to dominate this market soon?
Hell no. AMD are still a small as hell fish in the consumer GPU market, there won't be any domination any time soon.

This is going to be their best, most competitive lineup in years, but there still won't be any domination. They'll take the lead next year when RDNA3 inevitably launches first. Still no domination. Even getting to 50% market share is one hell of an uphill battle.
 

Karnak

Senior member
Jan 5, 2017
399
767
136
Can anyone speculate on the bus width used for the 3080 -> 3090?
Do they need to push 512-bit on top and 384-bit for the rest?
It's all based on GA102, so I highly doubt they'll cut it down to 384-bit all the way from 512-bit.

The 3080 is most likely cut down to 320-bit, while the 3090/Titan/Ti (or whatever) will feature the full 384-bit.
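For what it's worth, peak memory bandwidth scales linearly with bus width, so the 320-bit vs 384-bit question maps directly onto bandwidth. A minimal sketch, assuming a hypothetical 19 Gbps GDDR6X per-pin data rate (my assumption, not a confirmed spec):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Hypothetical 19 Gbps GDDR6X (assumed figure for illustration)
print(bandwidth_gb_s(320, 19))  # 320-bit -> 760.0 GB/s
print(bandwidth_gb_s(384, 19))  # 384-bit -> 912.0 GB/s
```

So a 320-bit 3080 would give up about 1/6 of the top card's bandwidth at the same data rate, which is in line with past x80 vs x80 Ti/Titan splits.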
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I know. But I find it strange
Hell no. AMD are still a small as hell fish in the consumer GPU market, there won't be any domination any time soon.

This is going to be their best, most competitive lineup in years, but there still won't be any domination. They'll take the lead next year when RDNA3 inevitably launches first. Still no domination. Even getting to 50% market share is one hell of an uphill battle.

How do you know this? Maybe it will be their worst, because unlike with Navi they won't have a two-full-node advantage to hide their architecture problems.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,109
136
Hell no. AMD are still a small as hell fish in the consumer GPU market, there won't be any domination any time soon.

This is going to be their best, most competitive lineup in years, but there still won't be any domination. They'll take the lead next year when RDNA3 inevitably launches first. Still no domination. Even getting to 50% market share is one hell of an uphill battle.
Yes. It seems like all the DIY sites and YouTube influencers run 2080 Tis on their test systems. Nvidia and partners are very good at saturating the media space with halo products to give the illusion that ALL their products are superior to AMD's. This is true even though AMD has very competitive GPUs in the mid to low tiers. Until AMD can gain mind share, they don't have a chance at significant market-share gains.
 
  • Like
Reactions: Elfear and amenx

Konan

Senior member
Jul 28, 2017
360
291
106

Here he debunks the author of "some" rumors from previous pages.

Do not believe them. Remember Blue Nugroho trashing the PS5, claiming it is not RDNA2 architecture?

Fanboys exist on every side of the industry and post the stupidest BS.
Some of the rumours, yes, not all.
Personally, I don't believe anything that I transferred over regarding AMD.

A couple of points / my thoughts:
1. Everything FP32-related was put on Baidu nearly 2 weeks before kopitekimi and KatCorgi said anything and agreed
2. The TSE results were also there just under a week before they mentioned them
3. The Twitter guys say the 3080 is only 20% better than a 2080 Ti, and then shortly after the TSE results show up; based just off that, their posts conflict with each other
 

Hitman928

Diamond Member
Apr 15, 2012
6,186
10,693
136
Hell no. AMD are still a small as hell fish in the consumer GPU market, there won't be any domination any time soon.

This is going to be their best, most competitive lineup in years, but there still won't be any domination. They'll take the lead next year when RDNA3 inevitably launches first. Still no domination. Even getting to 50% market share is one hell of an uphill battle.

The last market share update I saw had AMD at 31% for add on GPU sales. Obviously not dominant by any means, but I wouldn't consider that a small as hell player either, especially if you add in all PC GPU share where AMD and Nvidia are essentially tied.

 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
I know. But I find it strange


How do you know this? Maybe it will be their worst, because unlike with Navi they won't have a two-full-node advantage to hide their architecture problems.
I don't know for 100% certain. I just have one hell of a lot of faith in Zen's physical optimisation team, a great breakdown and analysis of the power consumption of the Series X, and some knowledge of AMD doing what is right in terms of managing (what was formerly known as) RTG.

Oh, and some other little bits and pieces, like the MI100 CU count and clocks (spoiler alert: the AdoredTV leak was accurate for once in everything short of how they presented the information, and they didn't mention power consumption) and Renoir power efficiency (it sustains 1.75GHz on the iGPU at ~22W full SoC power, so including the IMC and at least one CPU core active under a light load).

I'm fairly confident AMD will have a very competitive chip on sale later this year. One with a similar shader count to the top-most Ampere die, and surprisingly similar clocks too. So while there's no way I can say for a fact that AMD will take the crown, I think they're in a position where they'll be competing against a higher tier of GPU in Nvidia's product stack than they did last gen.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
The last market share update I saw had AMD at 31% for add on GPU sales. Obviously not dominant by any means, but I wouldn't consider that a small as hell player either, especially if you add in all PC GPU share where AMD and Nvidia are essentially tied.

Pretty sure they dropped again in Q1 and Q2 didn't they?
 

Hitman928

Diamond Member
Apr 15, 2012
6,186
10,693
136
Pretty sure they dropped again in Q1 and Q2 didn't they?

I don't know, I haven't seen any more recent numbers but I don't subscribe to the research companies either. Even if they did, I'm doubtful they would have dropped too much but if anyone has more recent numbers, I'd like to see them.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
I don't know, I haven't seen any more recent numbers but I don't subscribe to the research companies either. Even if they did, I'm doubtful they would have dropped too much but if anyone has more recent numbers, I'd like to see them.
Don't think Q2 numbers are released yet but Q1 there was a drop to 25%

 

Hitman928

Diamond Member
Apr 15, 2012
6,186
10,693
136
Don't think Q2 numbers are released yet but Q1 there was a drop to 25%


OK, it took me a minute to reconcile those numbers with the 31% I linked to. The 31% share was desktop GPUs, while your link is all discrete GPUs (so laptops included). In that sense AMD dropped from 27% to 25%. Even at 25% I wouldn't call them a small fish; I mean, you still have 1/4 of the market. That's again not counting APUs, some unknown share of which will be used for actual gaming, more often in laptops I would assume. Nvidia is still the dominant player in GPUs, but AMD also has a lot of weight, and the numbers could change fairly rapidly if AMD were able to put out more competitive cards consistently.
 
  • Like
Reactions: uzzi38

Konan

Senior member
Jul 28, 2017
360
291
106
Also from the Baidu forums. Again, I'm just copy/pasting, not my words. Make of it what you will.
The poster said there may be errors, but basically this is it.

Rumor / Speculation...
We'll see how close they come ;)

NVIDIA Titan A
5376 CUDA, FP32 36 TF+ || 384-bit, 48 GB GDDR6X, 192 ROPs
Performance roughly equivalent to an 8064-CUDA Turing part. PS: the 2 GB GDDR6X chips are exclusive, you know. Price is $2,499.

NVIDIA GeForce RTX 3090
5120-5376 CUDA, FP32 34 TF+ || 384-bit, 24 GB GDDR6X, 192 ROPs
Performance roughly equivalent to a 7680-CUDA Turing part. PS: price is $1,999.

NVIDIA GeForce RTX 3080
4608 CUDA, FP32 30 TF+ || 320-bit, 20 GB GDDR6X, 160 ROPs
Performance roughly equivalent to a 6912-CUDA Turing part. PS: 2080 Ti +30%. Price TBD.

NVIDIA GeForce RTX 3070 Ti
3072 CUDA, FP32 20 TF+ || 256-bit, 16 GB GDDR6X, 128 ROPs
Performance roughly equivalent to a 4608-CUDA Turing part. PS: the full GA104 is better than the 2080 Ti || equivalent to a TU102 cut down by 20% or less.

NVIDIA GeForce RTX 3070
2688 CUDA, FP32 15 TF+ || 256-bit, 16 GB GDDR6X, 128 ROPs
Performance roughly equivalent to a 4032-CUDA Turing part. PS: basically second only to, or level with, the 2080 Ti.

NVIDIA GeForce RTX 3060 Ti
2304 CUDA, FP32 13 TF+ || 192-bit, 12 GB GDDR6X, 96 ROPs
Performance roughly equivalent to a 3456-CUDA Turing part. PS: this is the cut-down GA104. Better than the 2080 Super, within 17%.

NVIDIA GeForce RTX 3060
1920 CUDA, FP32 11 TF+ || 192-bit, 12 GB GDDR6X, 96 ROPs
Performance roughly equivalent to a 2880-CUDA Turing part. PS: basically better than the 1080 Ti, about the same as the 2080 Super. Price is ~$400-450.

NVIDIA GeForce RTX 3060 cut-down version
1536 CUDA, FP32 9 TF+ || 192-bit, 12 GB GDDR6X, 96 ROPs
Performance roughly equivalent to a 2304-CUDA Turing part. PS: basically a 2070, but very cheap. Price is ~$350-375.

NVIDIA GeForce RTX 3050 Ti
1280 CUDA, 6-7 TF+ || 128-bit, 8 GB GDDR6, 64 ROPs
Performance roughly equivalent to a 1920-CUDA Turing part. PS: basically a 2060, with DLSS and RTX support. To put it bluntly, it's a 2060 for $200-250.

NVIDIA GeForce RTX 3050
1024 CUDA, 5-6 TF+ || 128-bit, 8 GB GDDR6, 64 ROPs
Performance roughly equivalent to a 1536-CUDA Turing part. PS: basically a 1660 Ti. Price around $150-175.
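As a rough sanity check on a table like this: peak FP32 throughput is conventionally cores × clock × 2, since one fused multiply-add counts as two FLOPs. A minimal sketch using the rumoured 3090 figures above (none of which are confirmed):

```python
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: cores x clock x 2 (one FMA = two FLOPs)."""
    return cuda_cores * clock_ghz * 2 / 1000

def implied_clock_ghz(cuda_cores: int, tflops: float) -> float:
    """Clock a given TF figure implies for a given core count."""
    return tflops * 1000 / (cuda_cores * 2)

# Rumoured RTX 3090: 5376 CUDA cores at "34 TF+"
print(round(implied_clock_ghz(5376, 34), 2))  # -> 3.16 (GHz)
```

An implied ~3.16 GHz is far beyond anything Turing shipped, so either the TF figures or the CUDA counts in the rumour don't line up with the usual 2-FLOPs-per-core reckoning. Make of that what you will.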



[Image attachments: Twitter leakers' notes for reference]
 
  • Like
Reactions: psolord