
Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 
As far as I know none of the DC projects require very large amounts of VRAM but correct me if I'm wrong.

Between the two options, I would say if PCIe slot space isn't an issue then 2x 3080s would probably be the better buy, but if slot space is an issue, then the 3090 is probably better. Once full system power is taken into account, the 3090 will probably be at best a wash in efficiency, and even a slight efficiency win would come at much lower performance; I doubt the efficiency difference over the lifetime of the cards would make up for the big performance gap.
It will also depend on what you're using for a motherboard. With a 3090, you're likely limited to 2 cards unless you use water cooling or risers, given the triple-slot cooler. You could put 4 3080s into a HEDT system with four double-spaced x16 slots.
 
Not that I can easily see how the Steam survey results could be systematically biased when used for measuring GPU market share.

I can. People who prefer open tech over proprietary tech are more likely to own an AMD GPU. If something like open tech even interests you, you are most likely more technically aware and think harder about freely sharing your data, regardless of any disclaimers made. Hence, if you own AMD you are more likely to decline the survey than some random Intel HD 4000 user. Even if the effect is small, over the millions of people surveyed it will have an effect.
Not saying it's true, but it's easy to come up with plausible explanations.
 
I can. People who prefer open tech over proprietary tech are more likely to own an AMD GPU. If something like open tech even interests you, you are most likely more technically aware and think harder about freely sharing your data, regardless of any disclaimers made. Hence, if you own AMD you are more likely to decline the survey than some random Intel HD 4000 user. Even if the effect is small, over the millions of people surveyed it will have an effect.
Not saying it's true, but it's easy to come up with plausible explanations.
But you have no evidence of that. I can't believe AMD fans don't want their card recorded by Steam. Look at any "what GPU do you want/own" poll you see fairly regularly on news sites: AMD does just fine there, in fact I would say better than they should if we look at the actual distribution of GPU sales given by Peddie/Steam/etc.
 
Not this again. I've had enough and I'm going to say it: people complaining about the Steam survey are doing so because it doesn't match what they believe is correct, or they are deliberately dismissing it based only on their opinion.
Just like some years ago, some people attacked Cinebench results and others defended them, and now it has reversed.

I can. People who prefer open tech over proprietary tech are more likely to own an AMD GPU. If something like open tech even interests you, you are most likely more technically aware and think harder about freely sharing your data, regardless of any disclaimers made. Hence, if you own AMD you are more likely to decline the survey than some random Intel HD 4000 user. Even if the effect is small, over the millions of people surveyed it will have an effect.
Not saying it's true, but it's easy to come up with plausible explanations.

Not only does that make zero sense, it is not true, as the RX 480 is the most-used GPU on Linux in the Steam survey.
 
There are obviously a few issues with the Steam survey. For example, how does it handle Steam being installed on multiple machines? I've had the survey pop up 4-5 times on my laptop (with an Nvidia GPU), while it only popped up once on my desktop PC with a 5700 XT. So, users with Steam installed on multiple machines might bias the survey (as Nvidia totally owns the laptop market, for example), and AMD also blamed "internet cafes" a few years ago. However, IMO, even if the numbers are inaccurate (and looking at the numbers I'm pretty sure there's some inherent bias toward Nvidia), I would argue that in essence the picture is accurate: Nvidia totally owns the GPU market. It might be 70-30 instead of 80-20, but I'm pretty sure Nvidia is dominating the market.
 
So, back to video card discussion...

The updated drivers from nVidia seem to have greatly increased card reliability at the cost of performance (which I speculated about a few pages back). I have not seen any benchmarks yet, but users are reporting a 20-30 MHz drop in boost clocks, which means the initial reviews show performance that isn't entirely accurate anymore.

 
So, back to video card discussion...

The updated drivers from nVidia seem to have greatly increased card reliability at the cost of performance (which I speculated about a few pages back). I have not seen any benchmarks yet, but users are reporting a 20-30 MHz drop in boost clocks, which means the initial reviews show performance that isn't entirely accurate anymore.

https://videocardz.com/newz/nvidia-...s-report-fewer-crashes-after-updating-drivers

That's ~1%-1.5% difference in clock speed. Margin of error stuff.
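The percentage above follows directly from the reported numbers; a quick sanity check, assuming a ~1950 MHz typical boost clock (a ballpark figure, not a measured one):

```python
# How big is a 20-30 MHz drop relative to a typical RTX 3080 boost
# clock? The ~1950 MHz baseline is an assumed ballpark, not a
# measured figure from any specific review.
def drop_percent(delta_mhz, boost_mhz=1950.0):
    """Clock drop as a percentage of the boost clock."""
    return 100.0 * delta_mhz / boost_mhz

print(f"{drop_percent(20):.1f}%-{drop_percent(30):.1f}%")  # -> 1.0%-1.5%
```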
 
So, back to video card discussion...

The updated drivers from nVidia seem to have greatly increased card reliability at the cost of performance (which I speculated about a few pages back). I have not seen any benchmarks yet, but users are reporting a 20-30 MHz drop in boost clocks, which means the initial reviews show performance that isn't entirely accurate anymore.

Figured they would do this as it's the cheapest option. I am still waiting for a 3080 with a bit more RAM.

Do you have a list of the known vendors that used good caps? I think Asus is one of them. Just in case a 20GB model never comes out and I would then just have to go for a 3080.
 
I can't believe you guys are still discussing this Steam hardware survey debacle; I thought this had been explained/debunked years ago.


The reason the Steam hardware survey went nuts back in 2017 is that Steam was over-counting every single individual login at an iCafe as another instance of that computer’s system configuration. Imagine if you had 10 of your best friends over to play Steam games on your PC. Every single one of them logged in to their own accounts — which led to 10 copies of your system config being uploaded to the platform and counted as separate submissions.

That’s basically what happened with Steam. And according to AMD, while the company made some corrections to its data, Valve has never been particularly concerned with making sure its numbers track real-life market share. AMD, meanwhile, is drastically underrepresented in iCafe gaming.

[Image: AMD-CPU-Share.png]

To bolster his point, Herkelman released images like the above, showing how AMD CPU adoption rates changed dramatically when Steam added PUBG in China, then changed again afterward when the company re-updated its algorithms. In both cases, AMD’s market share was lower as a result of the update.

The error involves Steam counting every individual login at internet cafes (which are particularly popular in Asian countries) as a separate PC configuration. So, if 10 people log into Steam on a single Intel-powered PC at an internet café, and each opts in to the Steam Hardware Survey, then the survey will count it as 10 separate machines.

It’s the only data set of its type available publicly. But this could explain why AMD’s overall CPU market share numbers have been ticking up in other reports but have remained fairly static on Steam. If Chinese iCafe installations grow more quickly than other types of gaming and AMD isn’t represented in that market, it’s not going to appear to gain much market share in either CPU or GPUs. We’ve mostly used the SHS to compare generational adoption for Turing versus Pascal, which should be less-impacted. But if the figures for AMD adoption are incorrect, the figures for at least some Nvidia SKUs will be as well.
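The login-counting distortion described above can be sketched with a toy model. All the numbers below are invented purely for illustration, not taken from Valve or AMD:

```python
# Toy model of the iCafe over-counting bug: every *login* at a shared
# machine is counted as a separate system, so markets with many logins
# per machine get extra weight. All numbers are invented.
home_machines = 80_000        # one login each
icafe_machines = 20_000       # shared machines
logins_per_icafe = 10         # each login counted separately (the bug)

amd_share_home = 0.30         # assumed AMD share among home PCs
amd_share_icafe = 0.05        # assumed AMD share in iCafes

# Machine-level ("true") AMD share
true_amd = (home_machines * amd_share_home
            + icafe_machines * amd_share_icafe) / (home_machines + icafe_machines)

# Login-level (buggy) AMD share: each iCafe machine gets 10x the weight
samples = home_machines + icafe_machines * logins_per_icafe
buggy_amd = (home_machines * amd_share_home
             + icafe_machines * logins_per_icafe * amd_share_icafe) / samples

print(f"true AMD share:   {true_amd:.1%}")    # 25.0%
print(f"survey AMD share: {buggy_amd:.1%}")   # 12.1%
```

Even with AMD holding a quarter of actual machines in this toy setup, login-weighted sampling reports roughly half that share, which is the shape of the distortion Herkelman was pointing at.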
 
It’s the only data set of its type available publicly. But this could explain why AMD’s overall CPU market share numbers have been ticking up in other reports but have remained fairly static on Steam. If Chinese iCafe installations grow more quickly than other types of gaming and AMD isn’t represented in that market, it’s not going to appear to gain much market share in either CPU or GPUs. We’ve mostly used the SHS to compare generational adoption for Turing versus Pascal, which should be less-impacted. But if the figures for AMD adoption are incorrect, the figures for at least some Nvidia SKUs will be as well.

Yes, there was a problem with internet cafes, fixed 3 years ago.

AMD CPU share on Steam is not static. It increases every month.
 
Not sure if this was brought up already but Hardware Unboxed mentioned in one of their videos that the crashing issue wasn't exclusive to the cards with the POSCAPs as their Asus Tuf and FE cards experienced the same problems.

Has any other reviewer mentioned this?
 
Not sure if this was brought up already but Hardware Unboxed mentioned in one of their videos that the crashing issue wasn't exclusive to the cards with the POSCAPs as their Asus Tuf and FE cards experienced the same problems.

Has any other reviewer mentioned this?

LTT had an issue with, I think, their ASUS or FE card also. But it turned out to be the power supply they were using, which was, according to nVidia, large enough for the task but ran into issues at very specific load amounts.
 
WCCFTech tested a GeForce RTX 3080 undervolt with the NVIDIA PCAT tool (not relying on software power values).

Results in Forza Horizon 4, 4K Ultra:
Stock ~ 1955 MHz:
143 fps, 310 W

862 mV ~ 1905 MHz:
139 fps, 230 W
97% performance for 74% power

806 mV ~ 1815 MHz:
134 fps, 201 W
94% performance for 65% power

Also included for reference: 2080 Ti, but it was not undervolted.

[Image: all-3080-overall.png]
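The relative-performance figures quoted above can be reproduced straight from the reported fps and wattage numbers (Forza Horizon 4, 4K Ultra):

```python
# Relative performance, power, and perf-per-watt from the figures
# quoted above (fps and board power as reported by WCCFTech).
results = {
    "stock / 1955 MHz":  (143, 310),
    "862 mV / 1905 MHz": (139, 230),
    "806 mV / 1815 MHz": (134, 201),
}

stock_fps, stock_w = results["stock / 1955 MHz"]
for name, (fps, watts) in results.items():
    print(f"{name}: {fps / stock_fps:.0%} perf, "
          f"{watts / stock_w:.0%} power, {fps / watts:.3f} fps/W")
```

The printed ratios match the quoted 97%/74% and 94%/65%, and show perf-per-watt climbing from about 0.46 fps/W at stock to about 0.67 fps/W at the deepest undervolt.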
 
That's ~1%-1.5% difference in clock speed. Margin of error stuff.
While it is a small difference, it's not really margin of error stuff when it shifts the whole average downward.

WCCFTech tested a GeForce RTX 3080 undervolt with the NVIDIA PCAT tool (not relying on software power values).

Results in Forza Horizon 4, 4K Ultra:
Stock ~ 1955 MHz:
143 fps, 310 W

862 mV ~ 1905 MHz:
139 fps, 230 W
97% performance for 74% power

806 mV ~ 1815 MHz:
134 fps, 201 W
94% performance for 65% power
It'd be interesting to see actual card power measured externally and across a sample of cards, but that's a pretty amazing drop. I would imagine one of the better coolers, like the ASUS TUF that already runs cool at 35 dB with stock settings, would be almost inaudible over background at 200 W.
 
Nice to see a sample of course but it's very hard to believe that those undervolts can be achieved universally. Such a huge drop.

They'd surely have shipped at the lower TDP & left the overclocking for the AIB cards.

Can definitely believe that the 250w was their original design target. Might hit it on a refresh.
 
WCCFTech tested a GeForce RTX 3080 undervolt with the NVIDIA PCAT tool (not relying on software power values).

Results in Forza Horizon 4, 4K Ultra:
Stock ~ 1955 MHz:
143 fps, 310 W

862 mV ~ 1905 MHz:
139 fps, 230 W
97% performance for 74% power

806 mV ~ 1815 MHz:
134 fps, 201 W
94% performance for 65% power

Also included for reference: 2080 Ti, but it was not undervolted.


I plan on undervolting my 3080. Should have it next week if Amazon is accurate with the current delivery date.
 
Nice to see a sample of course but it's very hard to believe that those undervolts can be achieved universally. Such a huge drop.

They'd surely have shipped at the lower TDP & left the overclocking for the AIB cards.

Can definitely believe that the 250w was their original design target. Might hit it on a refresh.

My problem with the undervolting claims is that, in my experience, it doesn't pan out as advertised. At least that was my experience on AMD cards when undervolting was all the rage. I tried it on multiple 480s and Vega. I was able to undervolt all of them, but none of them undervolted as much as what I'd seen posted on the internet without running into stability issues, and one of the 480s barely undervolted at all without major stability issues, I'm talking like a <5% undervolt. My best undervolting card was probably the Vega 56, which I was able to get a decent undervolt on, but again, not nearly as low as I had seen others reportedly get. I could go that low and be stable in many games for hours, but then certain games would crash after an hour or two of gaming. If I only played those stable games, I'd never even know the undervolt wasn't fully stable.

Long story short, AMD, and I'm sure Nvidia, are setting the voltages where they are for a reason. Most likely because they need to in order to guarantee stability while yielding as many dies as possible. And even when reviewers test undervolt settings, unless they are checking for stability across many games for hours each, I just don't trust that they are actually stable, just like I don't trust reviewer CPU overclock stability, as they typically don't really check long-term stability due to time constraints.
 
Nice to see a sample of course but it's very hard to believe that those undervolts can be achieved universally. Such a huge drop.

They'd surely have shipped at the lower TDP & left the overclocking for the AIB cards.

Can definitely believe that the 250w was their original design target. Might hit it on a refresh.

It's not that hard to believe and reminds me a lot of what we saw with AMD cards when Polaris/Vega first came out. You could give them a fairly substantial undervolt and save a lot of power without losing that much performance. Some people even saw a performance increase from a slight undervolt, which sounds absurd the first time you hear it, but was borne out by multiple sources.

I think NVidia just ran into the same sort of problem where the architecture just can't go any further on the current process without just throwing a lot of power at it for highly diminished returns. Would anyone have seriously cared if a 3080 only had 97% of the current performance if it came in at 230W? Releasing such a card with a lower power limit might have at least let people hold out hope for more powerful SUPER refresh with higher clock speeds and power limits later on, but what we have makes that prospect look doubtful unless the Samsung process sees considerable improvements.
 
My problem with the undervolting claims is that, in my experience, it doesn't pan out as advertised. At least that was my experience on AMD cards when undervolting was all the rage. I tried it on multiple 480s and Vega. I was able to undervolt all of them, but none of them undervolted as much as what I'd seen posted on the internet without running into stability issues, and one of the 480s barely undervolted at all without major stability issues, I'm talking like a <5% undervolt. My best undervolting card was probably the Vega 56, which I was able to get a decent undervolt on, but again, not nearly as low as I had seen others reportedly get. I could go that low and be stable in many games for hours, but then certain games would crash after an hour or two of gaming. If I only played those stable games, I'd never even know the undervolt wasn't fully stable.

Long story short, AMD, and I'm sure Nvidia, are setting the voltages where they are for a reason. Most likely because they need to in order to guarantee stability while yielding as many dies as possible. And even when reviewers test undervolt settings, unless they are checking for stability across many games for hours each, I just don't trust that they are actually stable, just like I don't trust reviewer CPU overclock stability, as they typically don't really check long-term stability due to time constraints.

Yeah, it's very hit and miss. AMD (or in this case nVidia) set the voltages that high for a reason, which typically is to have more chips pass validation.

With my RX 480, I got really lucky. I got the card on release day of the Nitro+, and I was able to drop 100mV off it, and ran it for years with no issues. And it made a noticeable difference in how loud the fans got. But I have tried to help others, and some of them could not lower the voltage at all without crashing issues.

I really question the stability of the 3080 claims though, because those wattage drops are gigantic.
 
Every review I am reading is sandbagging the 3090.
Every other review I am reading says a 3080 20GB is in the works.

Every review I read is killing me, because it says I should WAIT for more news on the 20GB, because the 3090 is a box of fail for a gamer rather than a developer.
 
My problem with the undervolting claims is that, in my experience, it doesn't pan out as advertised. At least that was my experience on AMD cards when undervolting was all the rage. I tried it on multiple 480s and Vega. I was able to undervolt all of them, but none of them undervolted as much as what I'd seen posted on the internet without running into stability issues, and one of the 480s barely undervolted at all without major stability issues, I'm talking like a <5% undervolt. My best undervolting card was probably the Vega 56, which I was able to get a decent undervolt on, but again, not nearly as low as I had seen others reportedly get. I could go that low and be stable in many games for hours, but then certain games would crash after an hour or two of gaming. If I only played those stable games, I'd never even know the undervolt wasn't fully stable.

Long story short, AMD, and I'm sure Nvidia, are setting the voltages where they are for a reason. Most likely because they need to in order to guarantee stability while yielding as many dies as possible. And even when reviewers test undervolt settings, unless they are checking for stability across many games for hours each, I just don't trust that they are actually stable, just like I don't trust reviewer CPU overclock stability, as they typically don't really check long-term stability due to time constraints.
Had the opportunity to undervolt one RX 560, two RX 580s, and two Vega 56s. Each card was different. The small Polaris was spectacular in a sense: when I tried underclocking and undervolting, I quickly reached a point where the fans would shut down occasionally even under load (Asus Strix model, good cooler with heatpipes). One of the Vegas got mixed results, since the memory didn't meet the "internet" goal for overclocking and undervolting. The other card, which I'm still using right now, runs with a very aggressive undervolt on both core (underclocked) and memory (overclocked). The second Vega is rock stable; the other one was always hit and miss under similar undervolting conditions.

Bottom line is what you described: undervolting is not a guarantee. Reviewers who show their cards sipping power can only help us estimate variability, there will always be the chips that need to be closer to stock voltage to run stable.

The "good news" for Ampere is that this undervolting potential should always be there with a small compromise in clocks. With cards that are very efficient at stock, undervolting is hit or miss since you need to win the chip lottery. With cards that are pushed for those last 3-5% in performance, walking back down the frequency/voltage curve provides hefty power savings with minimal performance loss. But even for this we would need to see stock voltage/frequency curves, preferably after the driver fix.
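Walking back down the V/F curve saves power faster than it costs performance because dynamic switching power scales roughly with f·V². A first-order sketch using the WCCFTech operating points (the ~1.06 V stock voltage below is an assumption for illustration, not a reported figure):

```python
# First-order model: dynamic switching power scales roughly with f * V^2.
# The stock voltage (~1.06 V) is an assumed value, not a reported figure.
def rel_dynamic_power(f_mhz, v, f0_mhz=1955, v0=1.06):
    """Relative dynamic power vs. an assumed stock operating point."""
    return (f_mhz / f0_mhz) * (v / v0) ** 2

for f, v in [(1905, 0.862), (1815, 0.806)]:
    print(f"{v * 1000:.0f} mV / {f} MHz -> "
          f"{rel_dynamic_power(f, v):.0%} of stock dynamic power")
```

The model predicts larger savings (~64% and ~54% of stock) than the measured board power ratios (74% and 65%), which is expected: memory, fans, and VRM losses are fixed loads that don't scale with core voltage. The qualitative point stands, though: the last few percent of clocks cost a disproportionate amount of power.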
 
As far as I know none of the DC projects require very large amounts of VRAM but correct me if I'm wrong.

You are correct. I've never seen a DC task take up much VRAM. They're deliberately designed that way so they can be widely distributed across many different platforms.
 
I really question the stability of the 3080 claims though, because those wattage drops are gigantic.

Stability is also helped by better thermals and smaller voltage transients. NV simply pushed these cards too far: they require ~0.875 V for 1900 MHz, yet they're getting pushed to 2000 MHz boost clocks at voltages above 1 V.

But stability could come into question in DL workloads; I wonder if those cards can really do nearly 30 TFLOPS at 225 W. Looking forward to testing one, as an undervolted 1080 Ti was doing 10 TFLOPS at the same wattage. An insane efficiency gain versus Pascal, if true.
 