Discussion: Ada/'Lovelace'? Next-gen Nvidia gaming architecture speculation


jpiniero

Lifer
Oct 1, 2010
14,511
5,159
136
In any case, I buy stuff that has the features (and performance) I need. If Intel can do that, for example, I wouldn't hesitate to buy their cards. But one thing is clear: I'll be skipping these cards. They are nuts trying to sell a 295 mm² GPU with a 192-bit bus for 1129 €. For comparison, GA104 was 392 mm²... I just don't see how they can possibly justify this asking price. I mean what the ....??? Also, to call it the RTX 4080 12 GB is just deceiving. The RTX 4070 and anything below it are surely going to be underwhelming and overpriced.

TSMC N5 really is that expensive. I believe AD104 is maybe only barely cheaper than GA102. Okay, $899 is too much but they were probably debating between $899 and $799 with a fake MSRP of $699.

If AMD solidly beats them in raster and appreciably undercuts them

See I don't think you are going to get both from AMD. At least not any time soon. We'll have to see how fast N33 is but I think that's a long ways off on desktop.
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
I am annoyed by the AIB designs. They are horrible, all thick as hell. The FE is good enough at 3 slots high, but these clowns have to make theirs no less than 3.5 slots, which in a dual-GPU setup would leave you with a whopping half slot of free space between the cards.
Good thing you can't even do a dual GPU set up with Lovelace then. :p
 
  • Haha
Reactions: Kaluan

Hans Gruber

Platinum Member
Dec 23, 2006
2,092
1,065
136
TSMC N5 really is that expensive. I believe AD104 is maybe only barely cheaper than GA102. Okay, $899 is too much but they were probably debating between $899 and $799 with a fake MSRP of $699.



See I don't think you are going to get both from AMD. At least not any time soon. We'll have to see how fast N33 is but I think that's a long ways off on desktop.

The Nvidia CEO is a liar. We live in a market economy. The world economy is in a global recession. The demand for computers and components is in the dumpster.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
If a company wants to maintain unrealistic profit margins, how is that the consumer's fault or problem?

No one said it was the consumer's fault. Just pointing out that chips are legitimately getting more expensive to build, and if the cost of chips is going up, you can't really expect the cost of cards to go down. Just like a lot of other things in these inflationary times.

I'm much more concerned about the rising cost of heating oil, gasoline and groceries than I am about the rising cost of top-end video cards I would never buy anyway.

But continue with the outrage over price increases in luxury goods. It's just about the only entertainment I can afford these days.
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
WCCFTech asked Nvidia some good questions during the Ada Q&A:

To resolve a point of discussion we were having earlier regarding "Does DLSS 3 Frame Generation add latency", the answer is "Yes, but it will be offset by Reflex" (see below). To those who predicted this correctly, kudos to you!

For eSports users who might want to achieve the lowest latency, will it be possible to only enable NVIDIA DLSS 2 (Super Resolution) + Reflex without the DLSS Frame Generation that improves FPS but also increases system latency?

Our guidelines for DLSS 3 encourage developers to expose both a master DLSS On/Off switch, as well as individual feature toggles for Frame Generation, Super Resolution, and Reflex, so users have control and can configure the exact settings they prefer. Note that while DLSS Frame Generation can add roughly a half frame of latency, this latency is often mitigated by NVIDIA Reflex and DLSS Super Resolution - all part of the DLSS 3 package. NVIDIA Reflex reduces latency by synchronizing CPU and GPU, and DLSS Super Resolution reduces latency by decreasing the size of the render resolution.
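
To put "roughly a half frame of latency" in numbers, here's a quick back-of-envelope sketch. The base frame rate and the Reflex savings below are illustrative assumptions, not Nvidia figures:

```python
# Back-of-envelope: what "roughly a half frame of latency" means.
# All numbers here are illustrative assumptions, not Nvidia figures.

def frame_time_ms(fps: float) -> float:
    """Frame time in milliseconds at a given frame rate."""
    return 1000.0 / fps

base_fps = 60.0                           # assumed native render rate before frame generation
added_ms = 0.5 * frame_time_ms(base_fps)  # "half a frame" of added latency ~= 8.3 ms
reflex_saving_ms = 10.0                   # assumed latency saved by Reflex (varies per game)

net_ms = added_ms - reflex_saving_ms
print(f"Frame generation adds ~{added_ms:.1f} ms, Reflex saves ~{reflex_saving_ms:.1f} ms "
      f"-> net change {net_ms:+.1f} ms")
```

Whether the net figure ends up better or worse than plain DLSS 2 + Reflex depends entirely on how much Reflex and the lower internal render resolution claw back in a given game.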
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
WCCFTech asked Nvidia some good questions during the Ada Q&A:

To resolve a point of discussion we were having earlier regarding "Does DLSS 3 Frame Generation add latency", the answer is "Yes, but it will be offset by Reflex" (see below). To those who predicted this correctly, kudos to you!

Not a surprise, there is no free lunch.
 
  • Like
Reactions: Kaluan

dlerious

Golden Member
Mar 4, 2004
1,772
719
136
If AMD solidly beats them in raster and appreciably undercuts them while using less energy, they might not have much choice.

If not, and the market rejects the new offerings at their current prices, then I could see a new "unofficial" lower MSRP, or the refresh comes sooner than expected.
I'm hoping for used $300 3090s.
 
  • Haha
Reactions: Kaluan

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
If AMD solidly beats them in raster and appreciably undercuts them while using less energy, they might not have much choice.

If not, and the market rejects the new offerings at their current prices, then I could see a new "unofficial" lower MSRP, or the refresh comes sooner than expected.

If AMD solidly beats Nvidia, they'll just charge more for a premium product. We already saw them do this with Zen 3 when they had Intel beat; no reason to think they wouldn't do the same here.

If anything causes price cuts, it'll be a deluge of used cards hitting the market. That might not affect the ultra high end, because you can't get 4090-level performance from previous-generation technology, but there's not a lot of point in a 4080 12 GB if there are 3080 cards going for half the price or less.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
No one said it was the consumer's fault. Just pointing out that chips are legitimately getting more expensive to build, and if the cost of chips is going up, you can't really expect the cost of cards to go down. Just like a lot of other things in these inflationary times.

I'm much more concerned about the rising cost of heating oil, gasoline and groceries than I am about the rising cost of top-end video cards I would never buy anyway.

But continue with the outrage over price increases in luxury goods. It's just about the only entertainment I can afford these days.

It was nVidia's decision to make ultra-large dies though. There was no reason for them to add all these other types of cores (besides to help with vendor lock-in) that make the die size larger. 600+ mm² dies are ultra expensive to make. It's the reason RDNA3 is going with chiplets. It's WAY cheaper to produce. Even the 6000 series had much smaller dies.

It really seems like nVidia thought this super high demand for GPUs was going to last forever, so they could get away with high prices, as this GPU was designed during the peak of the mining boom.

It's no wonder EVGA looked at this design and was like "NOPE!"
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
TSMC N5 really is that expensive. I believe AD104 is maybe only barely cheaper than GA102. Okay, $899 is too much but they were probably debating between $899 and $799 with a fake MSRP of $699.

See I don't think you are going to get both from AMD. At least not any time soon. We'll have to see how fast N33 is but I think that's a long ways off on desktop.
N4 costs more for sure, but it's not that much more expensive to make a single AD104 if you also look at the selling price.
From a single 4nm wafer you get ~194 AD104 (294.5 mm²).
If the wafer price is $15,000, then a single chip costs 15,000/194 ≈ $77.

For comparison, from a single 8nm wafer you can get ~145 GA104 (392.5 mm²).
If the wafer price is $5,000, then a single chip costs 5,000/145 ≈ $34.5.
So the difference in cost to produce AD104 is roughly 77 − 34.5 ≈ $43.
Of course we need to add the cost of the extra 4 GB of VRAM.
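
Here's that math as a quick script, using the common dies-per-wafer approximation. It ignores defect yield and edge exclusion, so it lands a bit above the ~194 from the calculator, and the wafer prices are the assumed figures above, not confirmed numbers:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common approximation: gross dies from wafer area minus edge losses.
    Ignores defect yield and edge exclusion, so it's slightly optimistic."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_die(wafer_price_usd: float, die_area_mm2: float) -> float:
    return wafer_price_usd / dies_per_wafer(die_area_mm2)

# Assumed wafer prices from the post above.
ad104 = cost_per_die(15_000, 294.5)  # TSMC N4/N5-class wafer
ga104 = cost_per_die(5_000, 392.5)   # Samsung 8nm wafer
print(f"AD104 ~ ${ad104:.0f}/die, GA104 ~ ${ga104:.0f}/die, delta ~ ${ad104 - ga104:.0f}")
```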

I don't see why a full AD104 needs to cost $300 more than the RTX 3070 Ti, when the extra cost shouldn't be more than $100. BTW, the 3070 Ti was already $100 more expensive than the RTX 3070, and it was just a full chip with GDDR6X.
Nvidia is just being greedy, but I wouldn't bet on AMD being that much better.
 

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
I don't see why a full AD104 needs to cost $300 more than the RTX 3070 Ti, when the extra cost shouldn't be more than $100. BTW, the 3070 Ti was already $100 more expensive than the RTX 3070, and it was just a full chip with GDDR6X.
Nvidia is just being greedy, but I wouldn't bet on AMD being that much better.
Higher R&D costs for more advanced chips? Inflation-adjusted salaries, supply chain?
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
It was nVidia's decision to make ultra-large dies though. There was no reason for them to add all these other types of cores (besides to help with vendor lock-in) that make the die size larger.

If Nvidia hadn't bloated their die size with raytracing and deep learning hardware, they could have paid TSMC less. But even without that, the prices are just obscene.


I don't think it's realistic, nor desirable, to just stop adding new technology.

The ship has sailed on Ray Tracing. It's table stakes now. Everyone has it, even Intel.

Same thing is happening for Machine Learning. This is becoming universal as well. Smartphones now have machine learning cores.

Neither of them is really vendor lock-in if everyone has it. Nvidia's proprietary stuff is more in its software than in its RT and ML HW.
 

jpiniero

Lifer
Oct 1, 2010
14,511
5,159
136
I don't think it's realistic, nor desirable, to just stop adding new technology.

I mentioned this already... but while ray tracing is awesome, there's no way a developer is going to do RT justice, because the consoles can't do decent RT and 60 fps at the same time. Raster plus light hybrid RT is the present and is going to be the future for some time.

If Nvidia hadn't bloated their die size with raytracing and deep learning hardware, they could have paid TSMC less. But even without that, the prices are just obscene.

I think the L2 is probably the bigger culprit. And the dies are smaller than their Ampere equivalents, although of course Samsung 8nm is a helluva lot cheaper.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
I mentioned this already... but while ray tracing is awesome, there's no way a developer is going to do RT justice, because the consoles can't do decent RT and 60 fps at the same time.

I don't think that holds water.

There will always be PC games that push the HW to the limits including RT HW.

Poor afterthought ports of console games might have the same limited RT, but good ports won't, nor will games designed with the PC in mind from the start, and certainly not games designed for the PC first.

I'm not seeing a sound argument against RT or ML HW here.
 
  • Like
Reactions: xpea

jpiniero

Lifer
Oct 1, 2010
14,511
5,159
136
I don't think that holds water.

There will always be PC games that push the HW to the limits including RT HW.

It's becoming rarer and rarer. You can see why nVidia is pushing RT, because you definitely don't need a 4090 to play the latest P2W GAAS game.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
It's becoming rarer and rarer. You can see why nVidia is pushing RT, because you definitely don't need a 4090 to play the latest P2W GAAS game.

AAA games aren't going away, and that is what has always gone hand in hand with top gaming PCs and top GPUs.

If your argument is that everything is going to P2W GAAS, then the gaming PC is essentially dead, everyone should just get a console.
 

In2Photos

Golden Member
Mar 21, 2007
1,603
1,637
136
So what do RT cores do when you have RT off? Nothing? Can they only perform RT tasks? Would be nice if you could use that compute power for other tasks if that's the case.
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
So what do RT cores do when you have RT off? Nothing? Can they only perform RT tasks? Would be nice if you could use that compute power for other tasks if that's the case.

They're fixed-function accelerators for one narrow set of math operations: the ray/bounding-box and ray/triangle intersection tests used to traverse a BVH. If you aren't doing those operations, they aren't useful. Same way texture samplers are only useful if you're sampling textures.
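
For a sense of what that fixed-function math is, here's a software version of a single ray/box (slab) test, the kind of check an RT core grinds through in hardware while walking the BVH. This is purely illustrative, not how the silicon is actually organized:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max) -> bool:
    """Slab test: does a ray intersect an axis-aligned bounding box?

    origin  -- ray origin (x, y, z)
    inv_dir -- reciprocal of the ray direction on each axis
    box_min, box_max -- opposite corners of the box
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# A ray along +X from the origin against a unit box sitting at x = 5..6 -> hit
print(ray_hits_aabb((0, 0, 0), (1.0, 1e30, 1e30), (5, -0.5, -0.5), (6, 0.5, 0.5)))
```

BVH traversal is essentially this test repeated down a tree of boxes, plus ray/triangle tests at the leaves, which is why the units sit idle when no rays are being cast.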
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
How much larger are the RT cores than the typical shaders? At some point, if we want to get serious about RT, we're going to need cards that don't just pay lip service to ray tracing and cover up the poor performance with fancy upscaling features like DLSS or FSR.

I'm curious what kind of performance we could get if someone built a card designed for RT that only had the minimum necessary hardware units for raster functionality.