
Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if this is imprudent/uncalled for; just interested in the forum members' thoughts.
 
Again, no one will ever need so much power to play games, so whatever "Big" Navi delivers is enough. But as I knew all too well, there's no way for AMD to catch up with Nvidia. The ludicrous idea that AMD was on its way to pass Nvidia within two generations was pure insanity.
 
Nvidia's website is listing the 3080 FE price as $699, so I was wrong; looks like that will be the actual price. That's really good pricing for the 3080, but it also seems way out of place with the 3070 at $499 and the 3090 at $1499.
 
Nvidia's website is listing the 3080 FE price as $699, so I was wrong; looks like that will be the actual price. That's really good pricing for the 3080, but it also seems way out of place with the 3070 at $499 and the 3090 at $1499.

Gotta make room for a 3080 Ti later on.
 
Seems like Kopite nailed everything down.
PCB, Process, G6X, Cores. Everything...

Only thing that stands out is: why are they so affordable? 🙂

I don't know, I think saying an $800 GPU is "affordable" is crazy. Especially when parts of the world have upwards of 20% unemployment right now.
 
I hope so, I don't mind paying that price once. I've had my 1080 Ti for a while now, and I'd intend to keep the new card as long or longer. I just don't want to buy a whole new mobo/case.

My guess: just like the 1080 Ti (which I have), it will come down the road and bridge the 3080 and 3090.
 
I don't know, I think saying an $800 GPU is "affordable" is crazy. Especially when parts of the world have upwards of 20% unemployment right now.

Well, these are enthusiast-class GPUs. Plus, they are powerful enough that I suspect you can use them for years, a bit like the 1080 Ti.

The 3070 and below are more mainstream GPUs, and they seem reasonably priced for the performance IMO.
 
Well, these are enthusiast-class GPUs. Plus, they are powerful enough that I suspect you can use them for years, a bit like the 1080 Ti.

The 3070 and below are more mainstream GPUs, and they seem reasonably priced for the performance IMO.

Yep, if it turns out to be true that the 3070 is a 2080 Ti-class GPU for $499, then I'd call that a huge win. It will sell like hotcakes.
 
Big marketing mistake to call the 3090 that. Calling it a Titan would at least have 'justified' its price in the eyes of many, especially after he called the 3080 the 'flagship'.
 
Well, these are enthusiast-class GPUs. Plus, they are powerful enough that I suspect you can use them for years, a bit like the 1080 Ti.

The 3070 and below are more mainstream GPUs, and they seem reasonably priced for the performance IMO.

Yeah, but there is a distinct difference between "priced well for the performance" and "affordable".
 
Yeah, that advertised CUDA core count is suspect... The RTX 2080 Ti had 4352 cores, and the 3080 should be about 35% faster than the RTX 2080 Ti if we go by this plot:
[attached image: 20200901173109.jpg]


Either 2x the core count leads to only 35% more performance, which would be terrible scaling, or there's simply the same number of cores/ALUs but higher IPC, and higher core/memory clocks are the real reason for the performance gain.

EDIT: I'm thinking Nvidia is now advertising both FP and INT cores as CUDA cores, which it did not do for Turing. Also, Nvidia advertises the cards using "Shader-TFLOPS", not "FP32 TFLOPS", which is interesting.
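A quick back-of-the-envelope sanity check of that scaling argument (core counts from the advertised spec sheets, the ~35% figure from the chart above; the per-core number is just an illustration, not a measured result):

```python
# Rough check of Ampere vs Turing core-count scaling.
cores_2080ti = 4352   # RTX 2080 Ti advertised CUDA cores
cores_3080 = 8704     # RTX 3080 advertised CUDA cores (exactly 2x)
perf_gain = 1.35      # ~35% faster, per the chart discussed above

core_ratio = cores_3080 / cores_2080ti        # 2.0x the cores
per_core_scaling = perf_gain / core_ratio     # throughput per "core" vs Turing

print(f"Core ratio: {core_ratio:.2f}x")
print(f"Effective per-core throughput vs Turing: {per_core_scaling:.1%}")
```

If the advertised numbers are taken at face value, each "CUDA core" would deliver only about two thirds of Turing's per-core performance, which is why counting the FP+INT datapaths as separate cores is the more plausible reading.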
 
In a weird sort of turnabout, AMD now has the perf-per-FLOP advantage.

Jokes aside, pricing for the 3070 and 3080 is better than expected. I'm impressed by this launch, but it's clear that's for a reason 😉

How do you figure that?
 