Question 'Ampere'/Next-gen gaming uarch speculation thread

Ottonomous

Senior member
May 15, 2014
558
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for, just interested in forum members' thoughts.
 

rolodomo

Senior member
Mar 19, 2004
269
9
81
So some random dude who hasn't posted for two years shows up just to vote down my comment.
Care to elaborate @rolodomo ?
Apologies, the first down vote was a mistake and was removed. I must have accidentally clicked on the down arrow while scrolling through the thread. The second down vote is intentional for overreacting to a meaningless mistake and pinging me (a down vote on an internet forum, the horror, lol).
 

JujuFish

Lifer
Feb 3, 2005
10,458
438
136
9,000 posts over the course of 21 years is living on a forum....

This is a tech forum, why would it matter who posted areal density figures as long as they were accurate? Explain why that would matter.

If I quoted someone out of context, which I haven't seen so much as implied, I'll correct it. If people are too lazy to read a thread and then expect a recap because they are oh so special, not my problem.

There's an ignore function, why don't you do as you're told and use it?
You couldn't even follow the single line of conversation we were having, and you're complaining about other people not reading the thread? GTFO.
 

Tup3x

Senior member
Dec 31, 2016
536
397
136
This is a tech forum, not Twitter. If you read the actual discussion, why do you need headers for who said what? Furthermore, why should it EVER matter who made the statement.
It makes discussion hard. Why don't you see that? On top of that, one cannot easily check the context because the quote hyperlink is not working. That too makes discussion harder. You are the first person I've seen who does something as ridiculous as this, and who even puts effort into making it harder for others to continue the discussion... Come on, man.

You are replying to people not some random quotes. Your reasoning doesn't make any sense.



Read the thread.

Anyone who is too lazy to read the thread they are posting in please put me on ignore. If I wanted to partake in sound bite stupidity I'd go the Twitter route.

Reading a thread and having a decent conversation isn't about focusing on the person talking, but on the merit of the idea, the perspective it brings, and the counter-perspectives it draws from others.

If you want a personal discussion, there are forums for that; this is a tech forum.
What kind of nonsense is this? Here especially one should cite the source of the news piece or the person they are quoting/referring to.

The problem is that we have no idea whether you just made up those quotes or are quoting something that doesn't even exist on these forums. It's a mess when the quoted message is not on the same page. I don't want to waste my time playing Sherlock Holmes. Quoting posts properly is so ridiculously easy that no amount of excuses can justify not using the quoting features.
 
Last edited:

FaaR

Golden Member
Dec 28, 2007
1,056
411
136
10? WoW, Minecraft and Cyberpunk are all pretty high profile too.
WoW uses it for shadowmap generation only. As the game already has a solid shadow rendering implementation it's basically just a checkbox feature for funsies, nothing more.

Also, again, when you respond to someone's post directly as you did in this case, use Proper Fricken Quoting. Everybody else can do it! So can you. I don't care about your stupid excuses why you think you shouldn't have to. None of them make the least bit of sense. Just do it. Stop obstructing!
 

Stuka87

Diamond Member
Dec 10, 2010
5,488
1,277
136
Steam HW Survey shows more than double the number of 2070 Super cards in use vs the 5700 XT.

NVIDIA GeForce RTX 2070 SUPER 2.05%
AMD Radeon RX 5700 XT 0.88%

Not sure why there was a need for you to down vote my question. It should be expected that if you post something as fact, you need to state a source.

The Steam Survey has been proven not to be indicative of sales. Even Valve has come out and said this. It is a random cross section of GPUs being used for games on Steam.
 
  • Like
Reactions: spursindonesia

FaaR

Golden Member
Dec 28, 2007
1,056
411
136
This is a tech forum, not Twitter. If you read the actual discussion, why do you need headers for who said what? Furthermore, why should it EVER matter who made the statement?
Not everyone lives on forums and holds entire discussion threads dozens of pages long in their heads. Proper quoting (which is NOT difficult to do) shows the flow of replies and provides a link back to the full reply, for greater context. It's stupid this should even have to be explained to you, since it's friggin' self-evident.

And, of course it matters who made a statement. People are individuals, not a faceless mass of thoughts and ideas. What an utterly ridiculous thing to say.

Now stop being obstructionist on purpose. Just show people some common courtesy and do what they ask you.
 

Stuka87

Diamond Member
Dec 10, 2010
5,488
1,277
136
Pretty much a landslide Ampere vs RDNA2.

Early RDNA2 samples ran really hot, think Vega OC hot. Microsoft put 1x there for the wow factor, that's it.
Ok, first off, "hot" just means inadequate cooling. It literally means nothing else. You can have a 10W chip run 'super hot'. The whole reason the RTX 3090 has a gigantic cooler is so that it won't run hot while consuming 350W+.

Claiming Microsoft broke FTC law for a 'wow' factor is rather ludicrous.

And finally, your last posts, back in 2017, attacked AMD and made claims that we now know were wrong. Why come back now?
 

Glo.

Diamond Member
Apr 25, 2015
4,835
3,457
136
Nope, wrong again, you never give up do you?
Then show proof of that ;).

I have already shown that you are the one who is wrong on this topic. Where is your proof?

And don't link the same rumor that has already been debunked.
 

tajoh111

Senior member
Mar 28, 2005
229
219
116
Yeah, it was leaked.

https://twitter.com/kopite7kimi/status/1218229502423314434
#not

I'm not targeting this at anyone, but: how stupid can you be to believe in this "leak"?

Read all three tweets. There is NO Ampere other than 7 nm TSMC, and none other than GA100. Only GA100 exists.
Certainly less stupid than the people who said Navi was going to be 200-250 dollars with RTX 2080 performance while taking into account the cost of 7nm.
As has been said many times, the rumor about GA103 and GA104 is false; people who know better have already debunked it. Kopite is best informed when it comes to Nvidia, and he said plainly that there is no such thing as an 8 nm Nvidia GPU, that the only Ampere GPU chip is GA100, and that the gaming cards, which are a different architecture, have not been taped out yet.

What is being purported, by clueless journalists or simply by guys who feed the acolytes of their favorite brands, is fake news made up from the pure speculation of two Twitter accounts.

As has been said many times by Komachi and Kopite, there is no Ampere GPU other than GA100, and Kopite even said that Nvidia's next-gen gaming cards haven't been taped out yet. So if Komachi is talking about a Q4 release, it will be the GA100 chip, because that is the only new GPU Nvidia is releasing this year.
Nvidia does not need to use the rumor mill and a guerrilla marketing plan like AMD.

If anything, it is in Nvidia's best interest to prevent leaks unlike AMD because they have an entire lineup of cards selling well and any news of a new card on the horizon is detrimental to their sales and devalues their current inventory. They are not in the same situation as AMD which uses leaks, fake leaks and a misinformation campaign to their advantage. This is clear if you look at the length of bread crumbs leading up to any AMD release.

AMD has used guerrilla marketing and the rumor mill for years because, ever since Maxwell, AMD has been late to launch cards compared to Nvidia. Haven't you noticed that with every AMD card launch there is a very aggressive rumor marketing campaign designed to hype up AMD's next release well before launch, and that these are timed right after Nvidia launches their cards? Why is this the case?

This is to stall sales of Nvidia cards and to put the blame of any hype and misinformation on the rumor sites and on forum posters.

Nvidia controls and prevents leaks because leaks are detrimental when your cards are selling well. On the other hand, if your cards are not selling well and only your competitor has new cards on the market, leaks can help stall your competitor's sales, particularly if the rumors concern segments where your existing cards don't compete, i.e. the high end for AMD (Fiji, Vega, and somewhat Navi).

The point I am trying to make is that we cannot predict Nvidia release dates from rumors, because Nvidia is very good at controlling rumors until the launch date is near. Similarly, you can't predict AMD's launches from rumors, because those rumors don't carry any real information that would give an accurate gauge of performance. They are designed to get people to wait as long as AMD needs to get their cards out.
 

uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
The source of that 3070+3080 diagram is also now claiming that GA102 was retaped to deal with Navi21.


He's fake AF.

Now we have to deal with the fact he's treated as a trusted source or some such rubbish.

No, the 3070 and 3080 don't exist yet because the dies haven't been retaped, and no, Navi 21 won't be competing with GA102. Like seriously, how much bull are we gonna let spread?
 
  • Like
Reactions: xpea and Glo.

DXDiag

Member
Nov 12, 2017
163
115
86
I don't know why some people constantly downplay anything NVIDIA does with their next gen. I would like to quote a famous Beyond3D post about this:


If we follow past trends, comparing the biggest gaming dies NVIDIA releases, a certain pattern emerges:

580 to 780Ti is a 85% uplift (new node)
780Ti to 980Ti is a 45% uplift (same node)
980Ti to 1080Ti is a 75% uplift (new node)
1080Ti to 2080Ti is a 40% uplift (same node)

So at the very least I expect a 3080TI to be around 70% faster than 2080Ti at the same TDP, that would make it close to two times as efficient as Turing, it all depends on what NVIDIA can extract from the new node.

I too expect a 3080Ti to be 70% faster than the 2080Ti on the new node, based on past trends, and the latest rumors about HPC Ampere also support this prediction.

As for the timeline, if NVIDIA is releasing HPC next gen soon, chances are they are releasing gaming soon as well, just like they did with Pascal, where they released GP100 alongside the GTX 1080.
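For what it's worth, the extrapolation in that quoted post is just compound-uplift arithmetic; a minimal sketch (the 70% uplift and the equal-TDP assumption come from the quote, not from any confirmed spec):

```python
# Relative-performance extrapolation from the quoted Beyond3D post.
# All inputs are the quote's assumptions, not confirmed specs.
perf_2080ti = 1.0                    # normalise 2080 Ti performance to 1.0
uplift = 1.70                        # assumed new-node uplift (70%)
perf_3080ti = perf_2080ti * uplift   # hypothetical 3080 Ti at the same TDP

# At equal TDP, perf/W scales directly with performance, hence the
# quote's "close to two times as efficient as Turing" claim.
efficiency_gain = perf_3080ti / perf_2080ti
print(efficiency_gain)  # 1.7
```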
 

uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
Yes. Turing is 3 years old at this point, it's just an RT evolution of Volta which was introduced 3 years ago. NVIDIA will revamp the uarch.

Take a look at the provided leaked benchmarks: the 7552-core Ampere GPU (@1100MHz) is beating the 4608-core Turing GPU (@1800MHz) by 40% despite using lower clocks, which means this is a ~40% increase in IPC alone. Imagine if the 7552-core GPU operated at something like 1600MHz or more.
Why are you comparing against the RTX Titan.

Compare vs Volta, which comes with its own optimised driver set and, you know, is the actual predecessor to the GPU in that benchmark: https://browser.geekbench.com/v5/compute/576479

Compared against a V100 with 5120 shaders clocking at 1.38GHz, it manages a mere 10% lead.
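To put rough numbers on that: raw FP32 throughput scales with shaders × clock, so using the figures quoted in this exchange (the Ampere numbers are from the leak, not confirmed specs), the sample carries meaningfully more raw throughput than the V100 it only beats by 10%:

```python
# Raw shader throughput is proportional to shader count * clock.
# 7552 @ 1.10 GHz is the leaked Ampere sample; 5120 @ 1.38 GHz is V100.
ampere_raw = 7552 * 1.10
volta_raw = 5120 * 1.38

extra_throughput = (ampere_raw / volta_raw - 1) * 100
print(f"{extra_throughput:.0f}% more raw throughput")  # ~18%, for a ~10% lead
```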
 
  • Like
Reactions: DisEnchantment

Glo.

Diamond Member
Apr 25, 2015
4,835
3,457
136
Once more, no respected or even semi-trusted source has ever spoken about NVIDIA making high-end GPUs on such a less-than-stellar process as Samsung's 8nm; all of the sources point to NVIDIA taping out HPC and high-end gaming GPUs on TSMC's 7nm.
Dude, I can't even...

First you ask me to not get down to this level of conversation implying that you both should work on your reading comprehension skills, and then you respond to me with this?

WHEN I HAVE BEEN SAYING FROM THE START THAT NVIDIA CANCELLED THE HIGH-END GAMING 8 NM LINEUP AND IS RETAPING IT ON TSMC'S 7 NM PROCESS, AND THAT THE ONLY PRODUCTS NVIDIA TAPED OUT ON 8 NM ARE LOW-END? AND THAT I HAVE BEEN TELLING YOU ALL THIS SINCE JANUARY?!

What is wrong with you guys? The story I am trying to tell you all is simple. The GA100 chip is 7 nm and it is this year. The 8 nm Samsung low-end to mid-range products come late this year, then all of the higher-end stuff on TSMC's 7 nm process starts from early next year.

And I have been trying to tell people all of this since January of this year, in this very thread. Obviously, I didn't have the details then about which part is which, or the reasons for this retape/redesign. It all fell into place with the recent articles on the topic.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,474
136
The Maxwell -> Pascal performance increase came primarily from increased shader counts and a huge uptick in clocks. The uArch differences were much smaller than in other uArch changes; the jump from 28nm -> 16nm alone was huge.

Anyway, everybody's acting like 8LPP is the end of the world, but it's really not. It's still like a 40-50% improvement in power/perf vs TSMC N16 (give or take), it's just abysmal in density is all (compared to N7 that is).
SS 8LPP is a slightly improved version of Samsung's 10LPP node and comparable to TSMC N10 in terms of transistor perf and density. TSMC N10 delivers 15% higher transistor perf at iso power vs TSMC 16FF+, or 35% lower power at iso speed. If Nvidia's Ampere-based GeForce GPUs are manufactured on SS 8LPP, it's going to put them at a significant process disadvantage against AMD RDNA2 GPUs manufactured on TSMC N7P or N7+. N7P/N7+ offers around 25-30% higher transistor perf than SS 8LPP. I think the 300-350W power numbers for GA102 could indicate it's manufactured on SS 8LPP.
 
  • Like
Reactions: Glo. and Saylick

raghu78

Diamond Member
Aug 23, 2012
4,093
1,474
136
Nvidia is on TSMC 12/16nm with Turing, and moving to Samsung 8nm with Ampere would give a 45% better node density. IF this high-end Ampere is on SS 8nm, then it is not comparable to TSMC 10nm.

Node Densities:
TSMC 12/16nm Density is 33.8 MTr/mm2
Samsung 8nm (uHD) = 61.2 MTr/mm2

TSMC 10nm =52.51 MTr/mm2
TSMC N7 = 96.49 MTr/mm2
TSMC N7P = 96.49 MTr/mm2 (no density difference)
TSMC N7+ EUV = 115.8 MTr/mm2

In recent years AMD has typically been the first to a new node. At every product launch Nvidia is behind on process node, and yet every single year Nvidia has the performance crown.
RDNA2 is probably not on the + node as said a few pages back and in its own thread.
Besides, IMO there is a clear winner in which company has the better encoders, the better drivers, and overall better software. At the end of the day, for performance, I don't think the majority of buyers would have an issue buying something that draws ~50W more if it turns out like that, and I reckon Nvidia knows that...
SS 8nm UHD is 61 MTr/sq mm, but Ampere GPUs, which need to clock at 2+ GHz, are not using UHD cells. The UHD cell has a cell height of 376 nm while the HD cell is 420 nm.


So with HD cells the transistor density is around 55 MTr/sq mm. Meanwhile, you can say Nvidia has better drivers/software, and I would say that is true to some extent. Nvidia having had the perf crown in the past does not mean they will have it with Ampere. From my analysis of the Xbox Series X SoC specs (die size, CU count, clocks, and power supply), my estimate is that Navi 21 is likely to deliver 24 TF at roughly 270W. We will wait and see where the fully enabled GA102 and Navi 21 specs/perf land. But I am quite confident in saying it's going to be the closest contest for the GPU crown in a decade.
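The density figures quoted earlier in the thread can be sanity-checked as simple ratios (numbers as posted; published MTr/mm² values vary by source and cell library):

```python
# Transistor densities (MTr/mm^2) as listed earlier in the thread.
density = {
    "TSMC 12/16nm": 33.8,
    "Samsung 8nm (uHD)": 61.2,
    "TSMC 10nm": 52.51,
    "TSMC N7 / N7P": 96.49,
    "TSMC N7+ EUV": 115.8,
}

base = density["TSMC 12/16nm"]  # Turing's node as the baseline
for node, mtr in density.items():
    print(f"{node}: {mtr / base:.2f}x vs TSMC 12/16nm")
# Samsung 8nm (uHD) works out to ~1.81x, TSMC N7 to ~2.85x.
```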
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,474
136
You mind sharing your math on that estimate? I personally have a 21-22 TF @ 300W expectation for Big Navi / N21 and am curious as to how you arrived at your estimate, but I totally agree that this will be the closest contest at the upper end in a long time.
Sent you a direct message. I agree that, based on the newer info on RDNA2 relating to the dual-pipe graphics command processor on Sienna Cichlid (which could affect my area scaling calculations), the CU count could be 80 and performance could be a bit lower. But I am confident of 21+ TF.
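The arithmetic behind estimates like this is straightforward: FP32 TFLOPS = CUs × 64 shaders per CU × 2 FLOPs per clock (FMA) × clock. A sketch using the rumoured 80 CUs from the post above; the ~2.05 GHz clock is my own assumption, chosen only to illustrate how a 21 TF figure could be reached:

```python
# FP32 throughput for an RDNA-style GPU:
# each CU has 64 shaders, each doing 2 FLOPs (FMA) per clock.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

# 80 CUs is the rumoured count; ~2.05 GHz is an assumed clock.
print(round(fp32_tflops(80, 2.05), 1))  # 21.0
```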
 
  • Like
Reactions: Saylick

Leadbox

Senior member
Oct 25, 2010
744
63
91
The numbers for the PS5 are max clocks (turbo) which you will most likely never reach in an actual game. The rumor is that Sony was surprised by MS's GPU performance (number of CUs), and hence, by not revealing a "game clock", they are trying to hide their very big GPU deficit while putting the focus on their SSD tech.
Given the size of the thing, I suspect it will hold clocks for the most part. It's that big for a reason.
 
  • Like
Reactions: Konan and Saylick

uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
How certain is it (are there reliable leaks) that Nvidia will be using SS 8LPP? Samsung has a further improved 8 nm process designed for high clocks/high power, 8LPU, which seems like a better solution for big, power-hungry, highly clocked GPUs.
8LPP has never been mentioned by name; it could also be 8LPU. I've just been shorthanding everything to 8LPP because afaik 7LPP was the original node. The only thing rumoured is some form of Samsung's 10nm-class process, be it one of the 10nm nodes itself or one of the 8nm nodes.

Or so I was first told. I'm not sure that was even the case at this point. Someone I've been talking to recently is sure that EUV was never planned for Ampere. So at this point, who the hell knows.
 

DiogoDX

Senior member
Oct 11, 2012
706
162
116
The A100 PCIe has a TDP of 250W. According to ComputerBase, who quote NVIDIA, the card does indeed have lower TDP specs. For comparison, the SXM variant has a TDP of 400W. However, despite the lower TDP of the PCIe model, NVIDIA says that peak power for both models is the same; only under a sustained load will it provide 10 to 50% lower performance than the SXM4-based variant.
Up to 50% throttling.o_O So the peak numbers are just marketing.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Interesting! Total rumour territory here; it doesn't quite add up in places with other leaks/rumours.



As fake as you can get
 
