Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation

Page 71

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
I actually think it's legitimately a new 3050 SKU and not a typo. But we will see.
Raptor had the same "mistake", and it would explain why only the RTX 3050 had its VRAM mentioned.
I personally don't see a reason for choosing this over the 4050, unless the 4050's price is a lot higher.
 

jpiniero

Lifer
Oct 1, 2010
16,816
7,258
136
In that case, it would explain why only the RTX 3050 had its VRAM mentioned.
I personally don't see a reason for choosing this over the 4050, unless the 4050's price is a lot higher.

We'll see about a lot of things, but I'm sure that either way the 4050 (8 GB) will be priced higher.
 

Saylick

Diamond Member
Sep 10, 2012
4,052
9,472
136
So MLID apparently has some renders of the Ada Titan, which isn't going to actually launch.

Like someone said earlier on these forums, it ain't a graphics card no more. It's a freakin' graphics brick.

[attached: three renders of the purported Ada Titan]
 
  • Like
Reactions: Ranulf

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
Alleged NVIDIA GeForce RTX 40 Laptop GPU clock and TGP specs emerge
[attached: leaked RTX 40 laptop clock/TGP spec table]
The 16 SM RTX 3050 goes up to 1740MHz.
The 18-20 SM RTX 4050 goes up to 2370MHz; that's a 36% higher frequency. Let's keep in mind that the average clock-speed gain could be even higher than 36%.

The RTX 3080 Ti, based on GA103 with 58 SM, boosts to only 1.59GHz.
The RTX 4090, based on AD103 with 76(?) SM, boosts to 2.04GHz; that's a 28% higher frequency.
This all looks very good; the announcement should come at CES 2023.
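A quick sanity check on those percentages, using only the boost clocks quoted above (napkin math on leaked numbers, not official specs):

```python
def clock_gain_pct(old_mhz: float, new_mhz: float) -> float:
    """Percentage boost-clock increase from one part to another."""
    return (new_mhz / old_mhz - 1) * 100

# 16 SM RTX 3050 (1740 MHz) vs 18-20 SM RTX 4050 (2370 MHz)
print(round(clock_gain_pct(1740, 2370)))  # 36

# 58 SM RTX 3080 Ti (1590 MHz) vs laptop RTX 4090 (2040 MHz)
print(round(clock_gain_pct(1590, 2040)))  # 28
```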
 

CP5670

Diamond Member
Jun 24, 2004
5,665
765
126
I set up the Corsair PSU adapter cable, which makes the case look much cleaner than the clunky Nvidia adapter.
 

Attachments

  • 20221221_103624.jpg
  • Like
Reactions: Elfear

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
Yeesh, 3050 was 35-80 W. 4050 is 85 - 140. Guess that's where the 3050 6 GB comes in.
I think this is some nonsense made up by wccftech.
And here is the original:
[attached: original spec table screenshot]
An 18-20 SM RTX 4050 at 2.37GHz won't need 140W; that's just pure BS. He even used the same frequency for the base and boost clock.
A 24 SM RTX 4060 at ?GHz also won't need that.
Just check out the other, bigger chips.

You can even check out the RTX 4080 16GB: a >3-4x bigger chip with higher clocks, and it needs only 320W for the whole card, not just the TGP (GPU + memory).
Nvidia needs something for the more portable machines, and 85W as an absolute minimum is certainly not that.
85W as the max for those clocks plus 10W dynamic boost is much more likely.

Now that I look at it, I don't think AD107 will be used for both the RTX 4050 and 4060.
 
Last edited:

Tigerick

Senior member
Apr 1, 2022
851
802
106
Damn, NV is using AD107 as the mobile RTX 4060 instead of a 4050 Ti. I am afraid the TGP for the RTX 4060 is correct; NV needs to clock it higher to remedy the lack of CUDA cores, i.e. 3072. No wonder I heard about NV's OPP rebate to OEMs of up to $50 if they hit a certain price point in the China market.

AMD this round also upped the TGP of N33 to 140W.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
Is that a Time Spy Graphics chart? Where did you find it? Is it at least a bit legit?

RTX 3060: 70-80W -> 1402-1425 MHz, that's up to 10,944 GFLOPs.
RTX 3070: 125W -> 1620 MHz, that's 16,588 GFLOPs.

The 4050 at 95W scores 6.5% better than the 3060, so it should have ~11,600 GFLOPs.
The RTX 4050 should have 2304 (18 SM) or 2560 (20 SM) CUDA shaders, which means the frequency should be ~2.52GHz or ~2.27GHz respectively. I find 2.52GHz too high for that TGP.

The 4060 at 140W scores 10,400, and that is 27% more than the 4050, which means ~14,700 GFLOPs.
An RTX 4060 with 3072 (24 SM) shaders would need ~2.39GHz, which is only 120MHz more than the minimum frequency for the 4050.
I don't think you need a 47% higher TGP for this clockspeed.

If I compared it to the RTX 3070, then I would need ~16,500 GFLOPs, and that's ~2.7GHz. This would explain the high TGP, but then the 4060 with 20% more SM and 19% higher clocks would manage only a 27% higher score than the RTX 4050? That doesn't make sense.

P.S. I know converting performance to GFLOPs is not ideal, but better than nothing.
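For what it's worth, the napkin math above can be reproduced with the usual FP32 approximation (GFLOPs ≈ 2 × shaders × clock in GHz, i.e. one FMA per shader per clock); the shader counts are the speculated ones from this thread, not confirmed specs:

```python
def gflops(shaders: int, ghz: float) -> float:
    # FP32 throughput: one FMA (2 FLOPs) per shader per clock
    return 2 * shaders * ghz

def required_ghz(target_gflops: float, shaders: int) -> float:
    # Clock needed for a given shader count to hit a GFLOPs target
    return target_gflops / (2 * shaders)

base = gflops(3840, 1.425)    # RTX 3060 laptop: 3840 shaders @ 1.425 GHz
target_4050 = base * 1.065    # 4050 scores ~6.5% higher in the chart

print(round(base))                                # 10944 GFLOPs
print(round(required_ghz(target_4050, 2304), 2))  # 18 SM -> ~2.53 GHz
print(round(required_ghz(target_4050, 2560), 2))  # 20 SM -> ~2.28 GHz
```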
 
Last edited:

Tigerick

Senior member
Apr 1, 2022
851
802
106
Is that Time Spy Graphics chart? Where did you find It? Is It at least a bit legit?

RTX 3060: 70-80W -> 1402-1425 MHz that's up to 10,944 GFLOPs.
RTX 3070: 125W -> 1620 MHz that's 16,588 GFLOPs.

4050 at 95W scores 6.5% better, so It should have ~11,600 GFLOPs.
RTX 4050 should be 2304(18SM) or 2560(20SM) Cuda shaders, which means frequency should be 2.27 or 2.52GHz. I find 2.52GHz too high for that TGP.

4060 at 140W scores 10,400 and that is 27% more than 4050, which means ~14,700 GFLOPs.
RTX 4060 with 3072(24SM) shaders would need ~ 2.39GHz, which is only 120MHz more than the minimum frequency for 4050.
I don't think you need 47% higher TGP for this clockspeed.

If I compared It to RTX 3070, then I would need 16.5 GFLOPs and that's ~2.7GHz. This would explain the high TGP, but then 4060 with 20% more SM and 19% higher clocks would manage only 27% higher score than RTX 4050? That doesn't make sense.

P.S. I know converting performance to GFLOPs is not ideal, but better than nothing.
Of course, it is not official from NV but from an OEM. The power gap between the 4050 and 4060 does seem large; maybe the RTX 4060 is not using the full AD107 die as we presume, or there is some mistake.

Notice that the RTX 4070's performance has been crippled as well; NV might not be using the full AD106 die...
 
  • Like
Reactions: TESKATLIPOKA

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
Of course it is not official from NV but from OEM. The power gap between 4050 and 4060 does seems large, maybe RTX4060 is not using full AD107 die as we presume or some mistakes.

Notice that RTX4070's performance has been crippled as well, NV might not be using full AD106 die...
If it's from an OEM, that's also legit for me, thanks.

Nvidia used full dies in the previous generation, so the 4060 should be a full AD107, and the same should be true for the 4070.
It doesn't make sense for the 4060 not to be a full AD107 and to instead use clocks to compensate, which reduces efficiency a lot; the 4080 at 155W would be 60% more efficient, and that's just ridiculous.

The other possible explanation is that the 4050 is 18 SM and the full AD107 is actually only 20 SM, like GA107, but this also doesn't make much sense when we know every Ada GPU has more SM than its previous-generation counterpart.
If the performance of the 110W 4070 version weren't 17% higher than the 140W 4060, then I would suspect that the 4070 is actually a full 24 SM AD107.

As you said, the 4070 also looks weird, and its performance is pretty low compared to the 140W 4060, but I don't believe in a cut-down version because of the 175W 4080, which is 53% faster; 60 SM vs 36 SM is a 67% difference.
AD106 supposedly has only a 128-bit bus, so this could explain the lower-than-expected performance, but then why design a chip with 36 SM when bandwidth will be a huge bottleneck?

There are still a lot of questions, but at least the performance is great, although the TGP for the 4050, 4060 and 4070 is not that great.

P.S. I think I am more interested in CES 2023 than in Christmas. :D
 
Last edited:

Tigerick

Senior member
Apr 1, 2022
851
802
106
If It's from OEM that's also legit for me, thanks.

Nvidia used full dies in previous generation, so 4060 should be full AD107 and the same should be true for 4070.
It doesn't make sense for 4060 not to be full Ada107 and instead using clocks to compensate, which reduces efficiency a lot, 4080 155W would be 60% more efficient, that's just ridiculous.

The other possible explanation is that 4050 is 18SM and full Ada is actually only 20SM as GA107, but this also doesn't make much sense when we know every Ada GPU has more SM than the previous generation.
If the performance for 4070 110W version wasn't 17% higher than 4060 140W, then I would suspect that 4070 is actually full 24SM Ada107.

As you said 4070 also looks weird and performance is pretty low compared to 4060 140W, but I don't believe in a cut down version because of 4080 175W, which is 53% faster. 60SM vs 36SM is 67% difference.
Ada106 supposedly has only 128bit bus, so this could explain lower than expected performance, but then why design a chip with 36SM, when BW will be a huge bottleneck?

There are still a lot of questions, but at least performance is great, although TGP for 4050, 4060 and 4070 is not that great.

P.S. I think I am more interested in CES 2023 than in Christmas. :D
The main reason behind the crippled performance of the RTX 4050 - 4070 is that NV still needs to clear inventory of the RTX 30 series. I heard OEMs will launch new laptops with the RTX 30 series alongside the RTX 40, so NV does not want to make the generation gap seem too big.

Another reason is BOM: with Intel and NV both launching new generations of CPUs and GPUs at 'new' prices, the combo is being seriously undercut by the AMD combo. AMD's 8-core Dragon Range + 140W N33 is going to be available at a very aggressive price. Thus the rebate kicks in; if not, NV loses ground with OEMs.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
The main reason behind cripple performance of RTX4050 - 4070 cause NV still need to clear inventory of RTX30 series. I heard OEM will launch new laptop with RTX30 series along with RTX40 so NV does not want to make the generation gap seems too big.
That's understandable if they have a lot of Ampere inventory.
Ok, so they will release Ti models later, but then what is the actual SM configuration for the 4050 and 4060?
The 4050 could be 18 SM and the 4060 20 SM, leaving the 24 SM option open for a Ti.
The problem with this is that you can't have a 4050 Ti model when the 4060 already has 20 SM, and then how many SM does the 4070 have if it performs only 23% better than the cut-down 4060?
Another reason is BOM, with Intel and NV both launching new generation of CPU and GPU at 'new' prices, the combo is seriously being undercut by AMD combo. AMD's Dragon Range 8 Cores + N33 140W are going to be available at very aggressive price. Thus the rebate kick in, if not NV is losing ground on OEM.
If it's priced aggressively, then I have to question how the RX 7600M actually performs. If it performs at 40% of the RX 7900 XTX, then the score is only ~11,880. That's not much higher than the 4060.
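To spell out that estimate (the ~29,700 Time Spy Graphics score for the desktop RX 7900 XTX is my assumption, back-derived from the 11,880 figure):

```python
xtx_score = 29700               # assumed desktop RX 7900 XTX Time Spy Graphics score
rx7600m_est = 0.40 * xtx_score  # "40% of the 7900 XTX"
print(round(rx7600m_est))       # 11880

# Compare against the leaked 140 W RTX 4060 score of 10,400
print(round((rx7600m_est / 10400 - 1) * 100))  # 14 (% higher)
```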
 
Last edited:

Tigerick

Senior member
Apr 1, 2022
851
802
106
If It's priced aggressively, then I have to question how RX 7600M actually performs. If It performs at 40% of RX 79700XTX then the score is only 11880. That's not much higher than 4060.

I can send you my China TUL models with NV and AMD GPUs; is there a way to send them?
 
  • Like
Reactions: scineram

jpiniero

Lifer
Oct 1, 2010
16,816
7,258
136
The main reason behind cripple performance of RTX4050 - 4070 cause NV still need to clear inventory of RTX30 series. I heard OEM will launch new laptop with RTX30 series along with RTX40 so NV does not want to make the generation gap seems too big.

I wouldn't be surprised, but so far I've only seen the 3050 6 GB, which I think is intended to be positioned lower/cheaper than the 4050.

The 4060 is most likely 3072 cores and the 4050 2560.

Edit: From NBC's scores, the 3060L can get as high as 9235 in Time Spy Graphics, but 7700 is more like 95 W. So perhaps the chart has a typo for the 3060L's power draw.
 
  • Like
Reactions: Kaluan

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
NVIDIA RTX 4070 Ti is 5% faster than RTX 3090 Ti in leaked OctaneBench test
[attached: OctaneBench leak chart and TPU relative-performance chart]

Based on this test, it looks like the RTX 4070 Ti will be a bit faster than the 3090 Ti. But if you check TPU gaming performance, then it could be slower than the fastest Ampere card.
The RTX 4080 16GB is just 13.5% faster than the RTX 3090 Ti.

The funniest thing is that with the old CPU (5800X) it scored 16.3% more than the RTX 3090 Ti.
 
Last edited:

GodisanAtheist

Diamond Member
Nov 16, 2006
8,318
9,689
136
NVIDIA RTX 4070 Ti is 5% faster than RTX 3090 Ti in leaked OctaneBench test
RTX4070TI-OTOY-OCTANEBENCH.png
relative-performance_3840-2160.png

Based on this test, It looks like RTX 4070Ti will be a bit faster than 3090Ti. If you check TPU performance, then It could be slower than the fastest Ampere.
RTX 4080 16GB is just 13.5% faster than RTX 3090Ti.

The funniest thing is that with the old CPU(5800X) It scored 16.3% more than RTX 3090Ti.
relative-performance_3840-2160.png

-4090 is 36% faster than 4080 in octane but 27% faster in gaming.

4080 is 28% faster than 4070ti in octane so... maybe 23% faster in gaming?

That would put the 4070ti @ roughly 75% on the top TPU chart, so slightly ahead of a 3090 non-ti.
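That extrapolation can be sketched in a few lines; the assumption (mine, not established) is that the gaming uplift scales with the Octane uplift at the same ratio observed between the 4090 and 4080:

```python
# Observed 4090-over-4080 uplifts from the posts above
octane_4090 = 0.36   # +36% in OctaneBench
gaming_4090 = 0.27   # +27% in gaming

scale = gaming_4090 / octane_4090   # gaming gain per unit of Octane gain
octane_4080_over_4070ti = 0.28      # +28% in OctaneBench

gaming_est = octane_4080_over_4070ti * scale
print(f"{gaming_est:.0%}")          # 21%
```

This simple proportional scaling lands at ~21%, in the same ballpark as the ~23% guessed above.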
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
-4090 is 36% faster than 4080 in octane but 27% faster in gaming.

4080 is 28% faster than 4070ti in octane so... maybe 23% faster in gaming?

That would put the 4070ti @ roughly 75% on the top TPU chart, so slightly ahead of a 3090 non-ti.
As you said, between the 3090 and 3090 Ti. Do they really want to ask $849-899 for this when it has less VRAM? :(

@jpiniero That would be better, but the price difference would then be bigger between this and the RTX 4080 16GB than between the RTX 4080 16GB and the RTX 4090.
 
Last edited:

Saylick

Diamond Member
Sep 10, 2012
4,052
9,472
136
As you said, between 3090 and 3090Ti. Do they really want to ask $849-899 for this, when It has less Vram? :(
They'll price it at >$800 not because it's the price we want, but because it's the highest price Nvidia thinks the market will bear. Enthusiasts will just collectively sigh before covering their eyes with one hand and clicking the Complete Order button with the other, because Nvidia has such a dominant position that it can effectively play the "take it or leave it" card.

If it was priced sanely it could be an excellent card. Sadly it's going to be overpriced for sure.
Yep. Nvidia can reap producer surplus by setting a price that's on the high end and then slowly lowering it over time to extract as much revenue as possible from the consumer base. Time and time again, consumers have shown Nvidia that we're willing to treat graphics cards as a luxury good rather than a commodity. Kudos to Jensen and his marketing department for cultivating that brand recognition over the years to make their graphics cards - I still can't believe it, of all things, freakin' graphics cards - a covetable, desirable item that gives you bragging rights. Their highest-end cards are effectively Veblen goods, i.e. the higher they are priced, the more desirable they become.
 
  • Like
Reactions: CP5670

maddie

Diamond Member
Jul 18, 2010
5,156
5,545
136
They'll price it at >$800 not because it's the price we want, but because it's the highest price Nvidia think the market will bear. Enthusiasts will just collectively sigh before covering their eyes with one hand and clicking the Complete Order button with the other, because Nvidia have such a dominant position that they can effectively play the "take it or leave it" card.


Yep. Nvidia can reap producer surplus by setting a price that's on the high end before slowly lowering it over time to extract as much revenue out of the consumer base. Time and time again, consumers have shown Nvidia that we're willing to treat graphics cards as a luxury good rather than a commodity. Kudos to Jensen and his marketing department for cultivating that brand recognition over the years to make their graphics cards - I still can't believe it, out all of all things, freakin' graphics cards - as a covetable, desirable item that gives you bragging rights. Their highest end cards are effectively Veblen goods, I.e. the higher they are priced, the more desirable they become.
The difference with graphics cards is technological obsolescence, unlike other durable items. Rinse and repeat every couple of years. Autos, boats, sports and music equipment, etc., at least have long lifetimes and then have a chance of becoming classics; these don't.
 
  • Like
Reactions: Lodix