Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of more than 4K60, ideally 4K90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 


TESKATLIPOKA

Platinum Member
May 1, 2020
Texts are all readable with DLSS 2, much better than the blur of native + TAA
Performance is up to 80% higher than native resolution while looking better or similar
The game visuals remain sharp while in motion (which is 90% of the time during gaming), much much better than the blur-fest of native TAA.

So you get everything in one package. Every outlet that looked at DLSS 2 found it both impressive and astonishing, and recommended it over native resolution, all except AMD fans of course. In fact AMD will be in serious trouble if they don't offer a DLSS 2 alternative.
How can an upscaled resolution look better than a native one? If you said similar or not much worse, then that's ok with me, but better?
 

pj-

Senior member
May 5, 2015
How can an upscaled resolution look better than a native one? If you said similar or not much worse, then that's ok with me, but better?

The post you quoted already gives one reason: TAA. Another is the nature of how DLSS works. It's trained on super-high-resolution images. It doesn't surprise me that in some cases it can have detail that would normally require supersampling.

There are several videos showing this pretty clearly
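The training setup pj- describes can be sketched as the generic super-resolution recipe: the network's target is a high-resolution image, and its input is a downsampled copy of that same image. A minimal illustration (this is the common SR training-pair construction, not NVIDIA's exact pipeline, and `make_training_pair` is a hypothetical helper name):

```python
import numpy as np

def make_training_pair(high_res, factor=2):
    """Build one (low-res input, high-res target) pair the way
    super-resolution networks are commonly trained: the target is
    the original image, the input is a downsampled copy.  The
    network learns to invert the downsampling, which is why it can
    recover detail that a plain upscale would miss."""
    low_res = high_res[::factor, ::factor]  # naive decimation for illustration
    return low_res, high_res

# Toy 4x4 "image" decimated to a 2x2 input.
hi = np.arange(16.0).reshape(4, 4)
lo, target = make_training_pair(hi)
# lo is 2x2, target is the original 4x4
```

Because the target always contains more information than the input, the network is explicitly optimized to hallucinate plausible high-frequency detail, which is why its output can exceed what the render resolution alone would provide.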
 

Dribble

Platinum Member
Aug 9, 2005
The post you quoted already gives one reason: TAA. Another is the nature of how DLSS works. It's trained on super-high-resolution images. It doesn't surprise me that in some cases it can have detail that would normally require supersampling.

There are several videos showing this pretty clearly
It's also trained to use the previous few frames so there is extra information in that, in particular it can heavily reduce temporal effects like shimmering.
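The extra-information point above boils down to temporal accumulation: blending each new frame with the (motion-compensated) history means every output pixel effectively averages samples from several frames, which damps shimmering. A toy sketch of the idea (illustrative only; real DLSS uses motion vectors and a learned blend, not a fixed weight, and `temporal_accumulate` is a hypothetical name):

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    """Exponential moving average over frames: blend the new frame
    with the accumulated history so sub-pixel detail from earlier
    frames survives and frame-to-frame flicker is damped."""
    return alpha * current + (1.0 - alpha) * history

# Toy example: a pixel flickering between 0 and 1 settles toward
# its average instead of shimmering.
history = np.zeros((2, 2))
for frame in [np.ones((2, 2)), np.zeros((2, 2)), np.ones((2, 2))]:
    history = temporal_accumulate(history, frame)
```

With a small blend weight the accumulated value moves only slightly each frame, which is exactly why temporal methods trade a little responsiveness for much more stable edges and specular highlights.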
 

Vipeax

Member
Sep 27, 2007
How can an upscaled resolution look better than a native one? If you said similar or not much worse, then that's ok with me, but better?
Have you ever looked at a lower-quality image of anything and 'fixed' it visually in your mind? Or look at remasters of old videos that are made to look better than the original recording. Feed an algorithm enough content and it can guess what a lower-quality image should have looked like. This includes improving models and textures over the original.
 

dr1337

Senior member
May 25, 2020
The post you quoted already gives one reason: TAA.
DLSS looking better than TAA isn't saying much. I've yet to see a good comparison of DLSS quality vs real native 4K without any TAA/smeary AA filters. I will say 100% that DLSS is better than TAA in every comparison I've seen, but I'm really not sure that it's actually better than native. TAA at 4K is a bit silly anyway, as you really don't need that much AA as you increase resolution.

And if DLSS isn't as good as native, it raises the question: would it not be better to have more shaders instead of tensor cores?
 

alcoholbob

Diamond Member
May 24, 2005
The numbers are marketing. TSMC 12nm was 16nm. GlobalFoundries 12nm was 14nm.

NV being on 8nm would be 1nm behind AMD.

And the density of 12/14/16nm was pretty close to 20nm. But funnily enough, this process of minor improvements and making up fictitious node numbers managed to actually leapfrog the more honest Intel.
 