Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
How much larger are the RT cores than their typical shaders? At some point, if we want to get serious about RT, we're going to need cards that don't just pay lip service to ray tracing and cover up the poor performance with fancy upscaling features like DLSS or FSR.

I'm curious what kind of performance we could get if someone built a card that was designed for RT and had only the minimum necessary hardware units for raster functionality.

The more RT effects, the more shader performance you also need. RT HW just calculates the intersections quickly. The shading is still done by shaders.
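
To make that division of labor concrete, here's a rough, back-of-the-envelope sketch in Python (the per-effect ray counts and the `workload` helper are made up for illustration, not numbers from any real game or GPU). The idea: every ray the RT cores intersect still returns a hit or miss that ordinary shader code has to shade, so stacking on RT effects grows the shader workload right along with the RT-core workload.

```python
# Illustrative cost model only: the ray counts per effect are assumptions.
RAYS_PER_PIXEL = {
    "primary": 1,       # primary visibility (or the hybrid raster pass)
    "rt_shadows": 1,    # one shadow ray per pixel
    "reflections": 1,   # one reflection ray per pixel
    "rt_gi": 2,         # a couple of GI rays per pixel
}

def workload(effects, pixels=2560 * 1440):
    """Each ray needs one intersection (RT cores) and one shade (shader cores)."""
    rays = sum(RAYS_PER_PIXEL[e] for e in effects) * pixels
    return {"intersections": rays, "shading_invocations": rays}

print(workload(["primary", "rt_shadows"]))                          # light RT load
print(workload(["primary", "rt_shadows", "reflections", "rt_gi"]))  # heavy RT load
```

The RT cores make the intersection column cheap; the shading column still lands on the same shaders that handle everything else.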
 

Tup3x

Senior member
Dec 31, 2016
964
949
136
TSMC N5 really is that expensive. I believe AD104 is maybe only barely cheaper than GA102. Okay, $899 is too much but they were probably debating between $899 and $799 with a fake MSRP of $699.
It's going to be interesting to see how AMD prices their cards. Not sure if their chiplet approach makes them cheaper to manufacture. I hope the added complexity doesn't nullify the benefits of not having to use N5 for all chiplets.

In any case, I don't see a bright future for RTX 4080 12 GB. I have a feeling that they will phase it out rather quickly. It does depend on what AMD has to offer, though. I do think AMD could easily make that card feel like a really bad deal.

But man, 192-bit memory bus and >1100 €...
 

jpiniero

Lifer
Oct 1, 2010
14,599
5,218
136
It's going to be interesting to see how AMD prices their cards. Not sure if their chiplet approach makes them cheaper to manufacture. I hope the added complexity doesn't nullify the benefits of not having to use N5 for all chiplets.

RDNA 3 is definitely cheaper. No question about it. Whether it's enough to make AMD change their pricing strategy is another story.
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
RDNA 3 is definitely cheaper. No question about it. Whether it's enough to make AMD change their pricing strategy is another story.

I think it's clever how AMD made the memory controller the chiplet. They sidestepped the issues of having multiple compute dies completely. Memory controllers actually take up a fair bit of die space, so putting them off in chiplets is a nice way to break things up without dividing up compute.

It should give them a production cost advantage on the high end.

Though how much of that gets passed on to the consumer is questionable. I would bet not much. AMD likes profits too.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
WCCFTech asked Nvidia some good questions during the Ada Q&A:

To resolve a point of discussion we were having earlier regarding "Does DLSS 3 Frame Generation add latency", the answer is "Yes, but it will be offset by Reflex" (see below). To those who predicted this correctly, here's kudos to you!
Half a frame of latency means what exactly?

0.5 frames of latency per second? That's negligible.

0.5 frames of latency per set of keyframes processed? That's horrific latency.
 

biostud

Lifer
Feb 27, 2003
18,251
4,764
136
If AMD is much cheaper to produce relative to performance, it comes down to what pricing strategy they want: high margins, or putting pressure on Nvidia by selling comparable products for less. Hopefully the latter :p
 

biostud

Lifer
Feb 27, 2003
18,251
4,764
136
Obviously we really don't have any idea of performance yet, but given the price of the 4090, could N31 with 3D cache be $1199, and $999 without?
 

Saylick

Diamond Member
Sep 10, 2012
3,162
6,387
136
Half a frame of latency means what exactly?

0.5 frames of latency per second? That's negligible.

0.5 frames of latency per set of keyframes processed? That's horrific latency.
I'm probably wrong, but here's how I interpret it:

At 60 fps, the frame time is 16.67 ms. Say that the other sources of latency are 50 ms. In total, latency is 66.67 ms.

You enable DLSS Frame Generation without Reflex. Now, your frame time is still 16.67ms, but the processing adds in half a frame or 0.5x 16.67 ms = 8.33 ms. That gets added in with the other sources of latency, 50 ms, so now it's 58.33 ms. Adding back the frame time, you're now at 75 ms of latency.

Latency in this case means the time from input to when the screen shows the action you inputted. For what it's worth, Reflex's latency reduction is typically pretty small, like ~10 ms small, but it just so happens to be on the same order of magnitude as the Frame Generation latency penalty, hence why Nvidia "bundles" Reflex with Frame Generation so that the penalty is minimized or eliminated. It's all intentional, because Frame Generation wouldn't be as well received if it had any drawbacks. No marketing slam dunk for Frame Generation means fewer sales of Lovelace.
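
If anyone wants to sanity-check that arithmetic, here's a tiny Python sketch of it (the 50 ms of "other" latency and the ~10 ms Reflex saving are the assumptions from this post, not measurements, and `total_latency` is just a made-up helper):

```python
def total_latency(fps, other_ms=50.0, frame_gen=False, reflex_saving_ms=0.0):
    """End-to-end latency estimate: frame time + other sources (+ FG penalty - Reflex)."""
    frame_ms = 1000.0 / fps
    fg_penalty_ms = 0.5 * frame_ms if frame_gen else 0.0  # "half a frame" of added latency
    return frame_ms + other_ms + fg_penalty_ms - reflex_saving_ms

print(total_latency(60))                                       # ~66.67 ms, plain 60 fps
print(total_latency(60, frame_gen=True))                       # ~75.00 ms, 60 fps + Frame Generation
print(total_latency(60, frame_gen=True, reflex_saving_ms=10))  # ~65.00 ms, FG penalty offset by Reflex
```

The generated frames raise the displayed frame rate, but the rendered frame time, and therefore the input-to-photon delay, is still pinned to the base 60 fps.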
 
Last edited:

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
Obviously we really don't have any idea of performance yet, but given the price of the 4090, could N31 with 3D cache be $1199, and $999 without?
Not so sure about that. How much was the 6900XT at launch? It was $999 and I'd expect that plus $200-$250 since wafer costs and R&D have gone up. Their halo card isn't going to be cheap. JMO.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
I'm probably wrong, but here's how I interpret it:

At 60 fps, the frame time is 16.67 ms. Say that the other sources of latency are 50 ms. In total, latency is 66.67 ms.

You enable DLSS Frame Generation without Reflex. Now, your frame time is still 16.67ms, but the processing adds in half a frame or 0.5x 16.67 ms = 8.33 ms. That gets added in with the other sources of latency, 50 ms, so now it's 58.33 ms. Adding back the frame time, you're now at 75 ms of latency.

Latency in this case means the time from input to when the screen shows the action you inputted. For what it's worth, Reflex's latency reduction is typically pretty small, like ~10 ms small, but it just so happens to be on the same order of magnitude as the Frame Generation latency penalty.

Agreed, latency is measured per input action.

So at 60 FPS base frame rate, half a frame would add 8.33 ms as you state.

It's not a lot, but as mentioned earlier in the thread, often the main reason to want higher FPS is to improve latency.

In your example with 50 ms of other lag:

60 FPS = 66.67 ms latency
Real 120 FPS = 58.33 ms latency
DLSS 3 "120 FPS" = 75 ms latency.

Instead of improving latency like real 120 FPS does, DLSS-inserted frames make it worse, and that is the point.
 

jpiniero

Lifer
Oct 1, 2010
14,599
5,218
136
In any case, I don't see a bright future for RTX 4080 12 GB. I have a feeling that they will phase it out rather quickly.

Unlikely. It is, after all, the full AD104. The best you can hope for, if memory bandwidth really is that big of an issue, is a refresh next year with GDDR7. That is, if GDDR7 is still coming.

That the full AD104 targeted ~3090 Ti performance is not that shocking.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
I'm probably wrong, but here's how I interpret it:

At 60 fps, the frame time is 16.67 ms. Say that the other sources of latency are 50 ms. In total, latency is 66.67 ms.

You enable DLSS Frame Generation without Reflex. Now, your frame time is still 16.67ms, but the processing adds in half a frame or 0.5x 16.67 ms = 8.33 ms. That gets added in with the other sources of latency, 50 ms, so now it's 58.33 ms. Adding back the frame time, you're now at 75 ms of latency.

Latency in this case means the time from input to when the screen shows the action you inputted. For what it's worth, Reflex's latency reduction is typically pretty small, like ~10 ms small, but it just so happens to be on the same order of magnitude as the Frame Generation latency penalty, hence why Nvidia "bundles" Reflex with Frame Generation so that the penalty is minimized or eliminated. It's all intentional, because Frame Generation wouldn't be as well received if it had any drawbacks. No marketing slam dunk for Frame Generation means fewer sales of Lovelace.
Thanks for the explanation and sample math, this makes much more sense now. So my next question (to Nvidia) is: Why would I not just turn on Reflex with DLSS 2.0 and ignore DLSS 3.0 completely for the lowest frame latency?

Because the reason I want to move from an old PC that gives me sub-30 FPS to a new PC that gives me 60 or 120 or 240 FPS in games is that input lag drops when the actual game FPS goes up. Interpolated frames don't solve the actual problem, which is that the game is not running quickly enough to give BOTH smooth and responsive gameplay.

Yes, I play Squad under 30 FPS. It is horrific and I have been waiting for 3 years for better GPU prices. The agony will continue for some time to come, it seems.
 

Saylick

Diamond Member
Sep 10, 2012
3,162
6,387
136
Thanks for the explanation and sample math, this makes much more sense now. So my next question (to Nvidia) is: Why would I not just turn on Reflex with DLSS 2.0 and ignore DLSS 3.0 completely for the lowest frame latency?

Because the reason I want to move from an old PC that gives me sub-30 FPS to a new PC that gives me 60 or 120 or 240 FPS in games is that input lag drops when the actual game FPS goes up. Interpolated frames don't solve the actual problem, which is that the game is not running quickly enough to give BOTH smooth and responsive gameplay.

Yes, I play Squad under 30 FPS. It is horrific and I have been waiting for 3 years for better GPU prices. The agony will continue for some time to come, it seems.
DLSS 3 incorporates 3 features: upscaling (already in DLSS 2), Reflex, and Frame Generation. I think for most people, enabling just the upscaling and Reflex is the way to go. I agree that Frame Generation isn't worth it, and if anything it gives people a false sense of security. In other words, I would strongly advise against people cranking up the graphics settings and then using Frame Generation as a crutch to make up for the performance hit.
 
  • Like
Reactions: Tlh97 and scineram

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Thanks for the explanation and sample math, this makes much more sense now. So my next question (to Nvidia) is: Why would I not just turn on Reflex with DLSS 2.0 and ignore DLSS 3.0 completely for the lowest frame latency?

You would, if you are aware of all this. Most people won't be, and they might just see NVidia marketing showing that massive FPS increase. This is why I am curious to see how reviewers treat DLSS 3 frame generation.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
6,893
5,825
136
I think it's clever how AMD made the memory controller the chiplet. They sidestepped the issues of having multiple compute dies completely. Memory controllers actually take up a fair bit of die space, so putting them off in chiplets is a nice way to break things up without dividing up compute.

It should give them a production cost advantage on the high end.

Though how much of that gets passed on to the consumer is questionable. I would bet not much. AMD likes profits too.

Shirley they gotta see a chance to both increase market share with cheaper offerings while also still being very profitable if they're not having to make these huge dies.
 
  • Like
Reactions: dlerious

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Shirley they gotta see a chance to both increase market share with cheaper offerings while also still being very profitable if they're not having to make these huge dies.

We really don't know how big that advantage is, and generally AMD already has to sell at a small discount.

The problem is that if they go low enough to really hurt NVidia sales, then NVidia drops prices enough to restore the status quo, and then AMD just gets less profit.

I'd expect differences to be much like last generation: bigger savings at the 4090 level, and much less difference at the 4070 level.

...and don't call me Shirley. ;)
 

exquisitechar

Senior member
Apr 18, 2017
657
871
136
Shirley they gotta see a chance to both increase market share with cheaper offerings while also still being very profitable if they're not having to make these huge dies.
The thing is, they only have so many N5 wafers, and using them for Zen 4 CPUs is far more profitable. Their desktop dGPU market share is not very important to them. They have a cost advantage, but I highly doubt they'll undercut Nvidia by a lot, and they will be aiming for high margins. I expect them to have a pricing strategy similar to RDNA2's, only they will be even more competitive in terms of performance, power, and area. Navi33 is using N6 and seems impressive for its size, but it also seems to be a laptop-focused part.

Intel is the only one that might give us somewhat cheap GPUs in the short to mid term, before possibly becoming established in the market and more competitive. Arc has been a disaster so far, though.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
You would, if you are aware of all this. Most people won't be, and they might just see NVidia marketing showing that massive FPS increase. This is why I am curious to see how reviewers treat DLSS 3 frame generation.
I wonder if there is a marketing opening here for AMD. "With Radeon, you get real FPS, no gimmicks, the lowest input lag, and the lowest frame latency on the planet. No fake frames, no uncanny valley, truth in graphics."

Maybe even come up with a corny label like "HyperFrame Technology".
 

scineram

Senior member
Nov 1, 2020
361
283
106
If AMD is much cheaper to produce relative to performance, it comes down to what pricing strategy they want: high margins, or putting pressure on Nvidia by selling comparable products for less. Hopefully the latter :p
If it's really cheaper, then they should be able to achieve both somewhat. That is, assuming the performance is competitive.
 
  • Like
Reactions: GodisanAtheist

dlerious

Golden Member
Mar 4, 2004
1,787
724
136
We really don't know how big that advantage is, and generally AMD already has to sell at a small discount.

The problem is that if they go low enough to really hurt NVidia sales, then NVidia drops prices enough to restore the status quo, and then AMD just gets less profit.

I'd expect differences to be much like last generation: bigger savings at the 4090 level, and much less difference at the 4070 level.

...and don't call me Shirley. ;)
They have to consider their 30 series oversupply too, wouldn't they? Both companies have to consider what effect, if any, the used market will have as well. I see AMD dropped the MSRP on 6000 series cards recently (6800XT is now $599).
 

dlerious

Golden Member
Mar 4, 2004
1,787
724
136
The thing is, they only have so many N5 wafers, and using them for Zen 4 CPUs is far more profitable. Their desktop dGPU market share is not very important to them. They have a cost advantage, but I highly doubt they'll undercut Nvidia by a lot, and they will be aiming for high margins. I expect them to have a pricing strategy similar to RDNA2's, only they will be even more competitive in terms of performance, power, and area. Navi33 is using N6 and seems impressive for its size, but it also seems to be a laptop-focused part.

Intel is the only one that might give us somewhat cheap GPUs in the short to mid term, before possibly becoming established in the market and more competitive. Arc has been a disaster so far, though.
How cheap are we talking about? AMD has the 6500XT at $175, the 6650XT at $190, and the 6700XT at $360.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
They have to consider their 30 series oversupply too, wouldn't they? Both companies have to consider what effect, if any, the used market will have as well. I see AMD dropped the MSRP on 6000 series cards recently (6800XT is now $599).

They have to consider the oversupply of AMD 6000 series cards as well, which would incline them to keep prices high until they are cleared, just like NVidia.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Every time I see more DLSS 3.0 info it sounds worse, and depending on the monitor refresh rate vs FPS, it could be even worse. Sounds like it's only there for marketing high fake framerates.

Bingo.

It's best to think of it like Motion Smoothing on modern TVs. It's not a performance enhancement (the opposite, really, since it increases latency), but it may give a smoother appearance to motion in some circumstances.

It's not totally useless, but it's being presented by NVidia in a very misleading manner that treats it like a real frame rate gain, which it isn't, and yes, they are doing it for marketing.
 

dlerious

Golden Member
Mar 4, 2004
1,787
724
136
They have to consider the oversupply of AMD 6000 series cards as well, which would incline them to keep prices high until they are cleared, just like NVidia.
I don't think AMD has as much inventory. AMD just announced price cuts across the board. A 6800XT is now $599 and 6900XT is $699.

 
  • Like
Reactions: Kaluan and Tlh97