NV 4060 / 4060TI reviews


SteveGrabowski

Diamond Member
Oct 20, 2014
8,587
7,210
136
I would love to know how much L2 or Infinity Cache you would need to compensate for half the bandwidth.
I think 32MB of extra cache would do wonders for the 4060Ti, but that would also mean a ~20-25mm2 bigger chip.

What I find interesting is that the 4060Ti is only 25% faster at Full HD than the RX 7600, but at 4K it's 43% faster.
Both of them have the same bandwidth and a similar amount of cache.
It's probably Nvidia's DX12 driver overhead making it underperform at 1080p. AMD had the same problem in DirectX 11 some 8-10 years ago.
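On the die-area side, a quick sanity check of that ~20-25mm2 estimate (a minimal sketch; the mm2-per-MB figure is my assumed ballpark for an N5-class node, not a quoted number):

```python
# Sanity check of the "32MB extra cache ~ 20-25mm2" estimate.
# ASSUMPTION: large on-die SRAM on an N5-class node lands around
# 0.6-0.8 mm^2 per MB once tags, control logic and routing are included.
AREA_PER_MB_LOW, AREA_PER_MB_HIGH = 0.6, 0.8  # mm^2 per MB (assumed)

extra_cache_mb = 32
low = extra_cache_mb * AREA_PER_MB_LOW
high = extra_cache_mb * AREA_PER_MB_HIGH
print(f"{extra_cache_mb} MB of extra L2 ~ {low:.0f}-{high:.0f} mm^2")
# -> "32 MB of extra L2 ~ 19-26 mm^2", consistent with the ~20-25mm2 guess
```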
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
People say Nvidia is trying to sell the massive overstock of 3000 series cards, but where are they? What cards are they talking about? I see 3060's still priced at $300-400, so I know Nvidia isn't interested in selling those things either. All I can conclude is that Nvidia actually doesn't want to sell any gaming cards, be it new or old. Jensen woke up one day and just decided not to sell any more cards. Super interesting.
 
Jul 27, 2020
24,175
16,864
146
Instead of waste of sand, waste of paper?
He can recoup a few dollars from recycling the cardboard from the 4060 Ti boxes to pay for his blood pressure meds and most important to him, his Viagra pills, you know, so he can reliably go from 1 inch to 2 inches.
 
Jul 27, 2020
24,175
16,864
146
Jensen woke up one day and just decided not to sell any more cards. Super interesting.
It's collusion of sorts, based on a tacit agreement between the two big GPU players. Nvidia isn't trying to move the needle coz they want their cryptomining profits in a cryptowhining world. AMD, as always, is trying to copy Nvidia's moves to keep their GPU division relevant coz their real darling is their CPU. They have given up trying to compete with Nvidia on price coz doing that before hasn't netted them more marketshare. Intel is still on training wheels and trying hard not to land nose first on the rocky terrain leading to Mount GPU's peak.

Were it not for the unsold inventory, Nvidia/AMD would be having trouble keeping even their crappy cards in stock, let alone the better ones. And then Intel ARC would have sold much better too. The high prices drove a lot of gamers away, and they have probably taken up other hobbies or are still playing the waiting game to see if the retailers/AIB partners get tired of maintaining the depreciating inventory of previous-gen cards and engage in a massive firesale.
 

Aapje

Golden Member
Mar 21, 2022
1,530
2,106
106
People say Nvidia is trying to sell the massive overstock of 3000 series cards, but where are they? What cards are they talking about? I see 3060's still priced at $300-400, so I know Nvidia isn't interested in selling those things either. All I can conclude is that Nvidia actually doesn't want to sell any gaming cards, be it new or old. Jensen woke up one day and just decided not to sell any more cards. Super interesting.
I'm now seeing 300 euro MSI Ventus 3060 12GBs, which is a great deal compared to a month ago, when they cost 350 euros. And I would argue that if people only want Nvidia, it is a better deal than the 4060: cheaper, and having 12 GB matters more than missing out on DLSS 3.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
I'm now seeing 300 euro MSI Ventus 3060 12GBs, which is a great deal compared to a month ago, when they cost 350 euros. And I would argue that if people only want Nvidia, it is a better deal than the 4060: cheaper, and having 12 GB matters more than missing out on DLSS 3.
Yes! I already had an XFX triple-fan RX 7600 on order, but when the MSI Ventus 12GB RTX 3060 hit $289 USD @ Newegg, I had to snag one of those too.
 
  • Love
Reactions: DAPUNISHER

Ranulf

Platinum Member
Jul 18, 2001
2,751
2,174
136
The 3060 12GB at $280 is the only decent deal right now from Nvidia. That said, it is basically $80-100 for Nvidia's software and that extra 4GB of RAM. The 6600 offers about the same performance but is going for $180-200 now.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
8,587
7,210
136
The 3060 12GB at $280 is the only decent deal right now from Nvidia. That said, it is basically $80-100 for Nvidia's software and that extra 4GB of RAM. The 6600 offers about the same performance but is going for $180-200 now.
The 3060 is about 10% faster in raster than the 6600 non-XT, so that, plus DLSS being a lot better than FSR when rendering at low resolutions, plus the extra 4GB of VRAM, makes the 3060 definitely a better buy if you can go to $280 but no higher. Still, $280 is a tough sell when the 35% faster 6700 XT, which also has 12GB of VRAM, can be found for $310 to $320 regularly now. There is even an MSI RX 6750 XT sold by Amazon (no third party crap) for $330 right now.


Though Zotac has a 3060 12GB for $260 right now. Maybe your best bet for an entry-level GPU these days.
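For what it's worth, a rough perf-per-dollar sketch using the prices and relative-performance figures from these posts (all approximate, and the Zotac price is promotional):

```python
# Rough perf-per-dollar using this thread's prices and relative performance.
# Baseline: RX 6600 = 1.00; 3060 ~ +10% over 6600; 6700 XT ~ +35% over 3060.
cards = {
    "RX 6600 ($190)":       (1.00, 190),
    "RTX 3060 12GB ($260)": (1.10, 260),
    "RX 6700 XT ($315)":    (1.10 * 1.35, 315),
}
for name, (perf, price) in cards.items():
    print(f"{name}: {perf / price * 1000:.1f} relative perf per $1000")
# The 6600 wins on raw value; the 6700 XT beats the 3060 despite costing more.
```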

 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,259
136
It's probably Nvidia's DX12 driver overhead making it underperform at 1080p. AMD had the same problem in DirectX 11 some 8-10 years ago.
That's not my point. Look at the performance difference at 4K.
288GB/s and 32MB of cache for both, yet the 4060Ti still managed 43% higher performance in TPU's review. It looks like bandwidth is not as big a problem for the 4060Ti as everyone thinks, or N33 has more than it needs.
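One way to frame that gap (my arithmetic, under the assumption that both cards actually saturate their 288 GB/s):

```python
# If both GPUs really have 288 GB/s to work with, a 43% FPS lead at 4K
# implies the 4060Ti moves correspondingly less data per frame.
fps_ratio = 1.43
print(f"Per-frame memory traffic ratio: {1 / fps_ratio:.2f}")  # ~0.70
# ~30% less traffic per frame (better caching/compression), or the RX 7600
# simply isn't bandwidth-limited and the gap comes from elsewhere.
```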
 

Rigg

Senior member
May 6, 2020
639
1,524
136
That's not my point. Look at the performance difference at 4K.
288GB/s and 32MB of cache for both, yet the 4060Ti still managed 43% higher performance in TPU's review. It looks like bandwidth is not as big a problem for the 4060Ti as everyone thinks, or N33 has more than it needs.
I didn't do a deep dive, but it looks like Far Cry 6 is really hurting the RX 7600's overall 4K average compared to 1080p and 1440p. The game tanks at 4K in his testing. HWUB's 4K results are actually more favorable for the 7600.
[TPU charts: average FPS per game at 3840x2160, 2560x1440 and 1920x1080]
 
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,259
136
The losses to 3060 Ti in some games beg to disagree
I didn't say that it doesn't affect performance at all, just not as much as some of you think.
I checked performance against the 3060Ti, and it loses in 6 games out of 25 in TPU's review, and even then only at 4K, with the worst loss being 3.3%:
BF5: -2.9%
Days Gone: -0.6%
Elden Ring: -3.3%
Far Cry 6: -0.6%
Guardians of the Galaxy: -0.7%
The Witcher 3: -1.1%

Now let's see what happens when we compare it against its stronger sibling.
| Card | SM | GPC | Shaders | TMUs | ROPs | Frequency (median) | TFLOPs | Texture fill rate (GTexel/s) | Pixel fill rate (GPixel/s) | BW | L2 cache | VRAM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RTX 4060Ti | 34 | 3 | 4352 | 136 | 48 | 2790MHz | 24.28 | 379.5 | 134 | 288 GB/s | 32MB | 8GB |
| RTX 4070 | 46 | 4 | 5888 | 184 | 64 | 2790MHz | 32.86 | 513.4 | 178.6 | 504 GB/s | 36MB | 12GB |
| Difference | +35.3% | +33.3% | +35.3% | +35.3% | +33.3% | 0% | +35.3% | +35.3% | +33.3% | +75% | +12.5% | +50% |
The difference in average performance is 28% at 1080p, 32% at 1440p and 36% at 4K (link).
Performance at 4K scales pretty much with the increase in TFLOPs and texture fill rate; the much higher bandwidth doesn't provide any significant extra improvement.
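For reference, the derived columns can be recomputed from the raw specs (a small sketch; 2 FP32 FLOPs per shader per clock):

```python
# Recompute the table's derived columns from the raw specs.
MHZ = 2790  # median boost clock used above, same for both cards

for name, shaders, tmus, rops in [
    ("RTX 4060Ti", 4352, 136, 48),
    ("RTX 4070",   5888, 184, 64),
]:
    tflops  = shaders * 2 * MHZ / 1e6  # 2 FP32 FLOPs per shader per clock
    texrate = tmus * MHZ / 1e3         # GTexel/s
    pixrate = rops * MHZ / 1e3         # GPixel/s
    print(f"{name}: {tflops:.2f} TFLOPs, {texrate:.1f} GTexel/s, {pixrate:.1f} GPixel/s")

print(f"Shader/TFLOPs ratio: {5888 / 4352:.3f}")  # 1.353, matching the ~36% 4K gap
```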


I hope Nvidia gets their act together and releases an RTX 4060Ti 16GB using the full 36SM chip and 20Gbps GDDR6 memory; it could perform pretty well (+~10%), although at $499 it would not be worth it. At $399-429 I would consider it.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
8,587
7,210
136
I didn't say that it doesn't affect performance at all, just not as much as some of you think.
I checked performance against the 3060Ti, and it loses in 6 games out of 25 in TPU's review, and even then only at 4K, with the worst loss being 3.3%:
BF5: -2.9%
Days Gone: -0.6%
Elden Ring: -3.3%
Far Cry 6: -0.6%
Guardians of the Galaxy: -0.7%
The Witcher 3: -1.1%
In Hogwarts it's -5% at 1080p and -11% at 1440p and in The Last of Us it's -6% at 1080p and -10% at 1440p per Hardware Unboxed. Absolutely disgraceful gpu.
 
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,259
136
In Hogwarts it's -5% at 1080p and -11% at 1440p and in The Last of Us it's -6% at 1080p and -10% at 1440p per Hardware Unboxed. Absolutely disgraceful gpu.
The GPU is not bad; the price is bad, and of course the 8GB of VRAM.

The RTX 4070 is not much stronger in Hogwarts Legacy, +26% (1% lows: +27%) compared to the 4060Ti, so it's not like the 4060Ti is weak per se; rather, Ada is doing poorly in that game.
The Last of Us Part 1 is a massacre for any 8GB card, so I won't compare it at this point; I'll wait until the 16GB version is out.
[Chart: The Last of Us Part 1, 1440p]
 

tajoh111

Senior member
Mar 28, 2005
343
385
136
The GPU is not bad; the price is bad, and of course the 8GB of VRAM.

The RTX 4070 is not much stronger in Hogwarts Legacy, +26% (1% lows: +27%) compared to the 4060Ti, so it's not like the 4060Ti is weak per se; rather, Ada is doing poorly in that game.
The Last of Us Part 1 is a massacre for any 8GB card, so I won't compare it at this point; I'll wait until the 16GB version is out.
[Chart: The Last of Us Part 1, 1440p]

I think Ada is actually one of the strongest architectures and gen-over-gen improvements, but the amount of last-gen stock is its biggest enemy, along with AMD's incompetence.

Suppose Nvidia did not hold back and the RTX 4080 had come out as the RTX 4070.

This would make the RTX 4070 116% faster than an RTX 3070, which would be an absurd improvement (computerbase.de).
If the RTX 4070 Ti had actually come out as an RTX 4060, we would be looking at a 120% improvement gen on gen.
Similarly, with a full AD102 we would be looking at around 108% vs the RTX 3090. If the RTX 4060 Ti had come out as the RTX 4050, it would be another 100%-plus improvement gen on gen.
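To put the one-tier-vs-two-tier point in numbers (a sketch; the +116% figure is from this post, while the ~30% 3080-over-3070 gap is my ballpark assumption):

```python
# The +116% (2.16x) figure is the post's 4080-vs-3070 comparison.
# ASSUMPTION (mine): the 3080 was roughly ~30% faster than the 3070 at 4K.
real_4080_vs_3070 = 2.16
assumed_3080_vs_3070 = 1.30

# Same silicon measured one tier up (against the 3080 instead of the 3070):
one_tier = real_4080_vs_3070 / assumed_3080_vs_3070
print(f"4080 silicon vs 3080-class: ~{one_tier:.2f}x (+{(one_tier - 1) * 100:.0f}%)")
# ~1.66x: even shifted only one tier, the gen-on-gen gain stays enormous.
```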

These are absurd improvements gen on gen, the biggest ever by a long shot on average, which is likely why Nvidia shifted the product line. Personally I think if Nvidia had shifted its products one tier rather than two, this would have been perfect; two tiers is a bit too much considering these monstrous improvements.

Compare this to AMD's 35% improvement gen on gen and it just emphasizes AMD's incompetence in developing GPUs (both the 7900 XTX and the 7900 XT are 35% improvements over the 6900 XT and 6800 XT respectively).

So if the RTX 4080 had come out as an RTX 4070 priced at $600, this, along with the showering of praise in reviews, would force the RX 7900 XTX to sell at $500, possibly dropping to $450 to fight the mindshare. The 7900 XT (which would surely have been branded the 7800 XT) would be selling in the $350 tier. The RX 7600 would be a $120 product. AMD's graphics division would be ground into dust, particularly their mindshare.

This forum was disappointed when the 7900 XTX only competed with the RTX 4080; imagine the disappointment if the 7900 XTX only competed with the RTX 4070, when the hype train said it would beat the RTX 4090. Nvidia's flagship would be 50% faster than AMD's and would thus make AMD look like they are a generation behind.

Neither company would be making money, certainly not enough to cover R&D, which is why I think such pricing expectations, particularly from AMD fans about Nvidia products, are unrealistic. Moreover, Nvidia's last-gen products would be worthless, meaning a multibillion-dollar inventory write-down would be in order. The RTX 4070 would make the RTX 3090 worth $400, the RTX 3060 worth $120, etc. Maybe even less, as everyone would be selling their old cards to jump on the new ones, flooding the used market. While gamers would love this, it is something only an incompetent CEO would do: inflict harm on his own company when it is primarily competing with itself, and sell at a loss when it is not necessary and when he has other products, like datacenter chips, to pivot towards.
 
  • Haha
Reactions: Thunder 57

Aapje

Golden Member
Mar 21, 2022
1,530
2,106
106
@tajoh111

Indeed, although that huge improvement only appears when looking at performance per mm2 of die space. When you look at performance for the cost to produce, the gain is much smaller, since TSMC 4N is substantially more expensive than Samsung 8N.

Therefore it would be reasonable to either cut the die size per tier or to increase the price. The problem is that they did both excessively, and apparently not to keep margins stable but to substantially increase them.
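A rough illustration of that cost point (a sketch with assumed, commonly cited ballpark wafer prices, not confirmed figures):

```python
import math

# Gross dies per 300mm wafer: area term minus an edge-loss term.
def dies_per_wafer(die_mm2, wafer_diam_mm=300):
    r = wafer_diam_mm / 2
    return int(math.pi * r * r / die_mm2
               - math.pi * wafer_diam_mm / math.sqrt(2 * die_mm2))

# ASSUMED ballpark wafer prices (commonly cited estimates, not confirmed):
for name, die_mm2, wafer_usd in [
    ("GA104 (Samsung 8N)", 392.5, 6000),
    ("AD103 (TSMC 4N)",    378.6, 17000),
]:
    n = dies_per_wafer(die_mm2)
    print(f"{name}: ~{n} gross dies, ~${wafer_usd / n:.0f} per gross die")
# Similar die sizes, but roughly ~2.7x the silicon cost per die on 4N.
```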
 


TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,259
136
I think Ada is actually one of the strongest architectures and gen-over-gen improvements, but the amount of last-gen stock is its biggest enemy, along with AMD's incompetence. […]
RTX 3070Ti(GA104 392.5mm2) -> RTX 4080(AD103 378.6mm2) is 80% faster at 4K(TPU).
RTX 3070Ti(GA104 392.5mm2) -> RTX 4070Ti(AD104 294.5mm2) is 43% faster at 4K(TPU).

RTX 3060(GA106 276mm2) -> RTX 4070Ti(AD104 294.5mm2) is 135% faster at 4K(TPU).
RTX 3060(GA106 276mm2) -> RTX 4060Ti(AD106 190mm2) is 41% faster at 4K(TPU).
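Those pairings translate into perf-per-mm2 roughly as follows (a quick sketch using only the numbers above):

```python
# Perf-per-mm2 implied by the pairings above (performance from TPU's 4K index).
pairs = [
    ("RTX 4080 vs 3070Ti",   1.80, 378.6, 392.5),
    ("RTX 4070Ti vs 3070Ti", 1.43, 294.5, 392.5),
    ("RTX 4060Ti vs 3060",   1.41, 190.0, 276.0),
]
for name, perf, new_mm2, old_mm2 in pairs:
    print(f"{name}: {perf * old_mm2 / new_mm2:.2f}x perf per mm^2")
# ~1.9-2.0x perf per mm^2 in every case, per the figures quoted above.
```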

Why you compare AD103 to GA104, or AD104 to GA106, is unknown to me.
Ada uses a vastly superior process, which can pack 2.64x more transistors into a slightly smaller space (AD103 vs GA104), but it also costs a lot more to make. I wouldn't be surprised if the price per wafer was ~3-3.5x higher than what they had at Samsung.
There is no way they would price Ada like the previous generation, so no $600 for AD103.

I have to correct some things you wrote.
The RX 7900 XTX is 49% faster than the RX 6900 XT at 4K (TPU).
RTX 3090Ti (GA102 628.4mm2) -> RTX 4090 (AD102 608.5mm2) is 45% faster at 4K (TPU).
Even if Nvidia released the full AD102, it would mean ~15% more performance in my opinion, and that is 67% over the RTX 3090Ti, or 42% over the 7900 XTX.
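Checking the compounding (the +45% is TPU's number above; the +15% for a full AD102 is the poster's own estimate):

```python
# TPU has the 4090 at +45% over the 3090Ti; full AD102 is estimated at +15% more.
full_ad102_vs_3090ti = 1.45 * 1.15
print(f"Full AD102 vs 3090Ti: ~{full_ad102_vs_3090ti:.2f}x")  # ~1.67x -> +67%

# "+42% over the 7900 XTX" then implies the XTX sits ~17% above a 3090Ti at 4K:
print(f"Implied 7900 XTX vs 3090Ti: ~{full_ad102_vs_3090ti / 1.42:.2f}x")
```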

The N31 GCD is only 300mm2; they could have increased it to 400mm2, which would give a GCD with 144 CU / 9216 SP / 576 TMUs / 288 ROPs, 50% more than what N31 has. I think this would have been enough to fight the full AD102, at least in raster.
AMD underestimated Nvidia's willingness to make such a big chip on a new process, but that doesn't mean they are incompetent at developing GPUs.
 
  • Like
Reactions: Tlh97 and Rigg

Mopetar

Diamond Member
Jan 31, 2011
8,353
7,425
136
The difference in average performance is 28% at 1080p, 32% at 1440p and 36% at 4K (link).
Performance at 4K scales pretty much with the increase in TFLOPs and texture fill rate; the much higher bandwidth doesn't provide any significant extra improvement.

Unless you seriously downclock the VRAM on the 4070 so that it has the same effective bandwidth, you can't say to what extent the bandwidth of the 4060 Ti is limiting performance. It's pretty clear that it matters in some titles, otherwise it wouldn't lose to a 3060 Ti, but if you dropped the 4070's bandwidth to that of a 4060 Ti, you'd probably see lower performance in some titles as the bottleneck shifts from the shaders to the memory subsystem.

Even then the 4070 would have more L2 cache, so it won't be a perfect comparison, but it's only an extra 4 MB, which is unlikely to tip the balance much. Regardless, a 4070 with the bandwidth of a 4060 Ti probably won't be able to scale with its shader count if it can't keep the shaders fed.
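A toy min(compute, bandwidth) model of that bottleneck shift (all the FPS caps are made-up illustrative numbers, not measurements):

```python
# Toy roofline: a frame ships only as fast as the slowest resource allows.
def fps(shader_cap, bandwidth_cap):
    return min(shader_cap, bandwidth_cap)

# Illustrative caps: shaders would allow 100 fps on a 4060Ti and ~135 on a
# 4070 (+35% shaders); 288 GB/s feeds ~110 fps, 504 GB/s feeds ~190 fps.
print(fps(100, 110))  # 4060Ti: 100 fps, shader-bound
print(fps(135, 110))  # 4070 throttled to 4060Ti bandwidth: 110 fps, now BW-bound
print(fps(135, 190))  # real 4070: 135 fps, shader-bound again
```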
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,259
136
Unless you seriously downclock the VRAM on the 4070 so that it has the same effective bandwidth, you can't say to what extent the bandwidth of the 4060 Ti is limiting performance. It's pretty clear that it matters in some titles, otherwise it wouldn't lose to a 3060 Ti, but if you dropped the 4070's bandwidth to that of a 4060 Ti, you'd probably see lower performance in some titles as the bottleneck shifts from the shaders to the memory subsystem.

Even then the 4070 would have more L2 cache, so it won't be a perfect comparison, but it's only an extra 4 MB, which is unlikely to tip the balance much. Regardless, a 4070 with the bandwidth of a 4060 Ti probably won't be able to scale with its shader count if it can't keep the shaders fed.
I think comparing it to the RTX 4070 is not a bad idea.
36% is the average at 4K, so of course in some games it's more and in some less; I would have to check 25 games, but I am too lazy.
Downclocking the 4070's memory wouldn't show how much bandwidth limits the 4060Ti; it would show how limited the 4070 is.
Overclocking the 4060Ti's memory would give us some idea of how limited it is.
 

MrTeal

Diamond Member
Dec 7, 2003
3,901
2,631
136
I think Ada is actually one of the strongest architectures and gen-over-gen improvements, but the amount of last-gen stock is its biggest enemy, along with AMD's incompetence. […]

RTX 3070Ti(GA104 392.5mm2) -> RTX 4080(AD103 378.6mm2) is 80% faster at 4K(TPU).
RTX 3070Ti(GA104 392.5mm2) -> RTX 4070Ti(AD104 294.5mm2) is 43% faster at 4K(TPU).

RTX 3060(GA106 276mm2) -> RTX 4070Ti(AD104 294.5mm2) is 135% faster at 4K(TPU).
RTX 3060(GA106 276mm2) -> RTX 4060Ti(AD106 190mm2) is 41% faster at 4K(TPU).

Alternately,
RTX 3070 Ti (GA104, 17.4B transistors) -> RTX 4060 (AD107, 18.9B transistors): unknown delta, but it's going to be way, way slower.
RTX 3090 Ti (full GA102, 28.3B transistors) -> RTX 4070 Ti (full AD104, 35.8B transistors): 10% slower at 4K.

Moving to TSMC 4N has given them a massive increase in transistors vs Samsung 8N, but whatever they used those transistors for hasn't produced corresponding gains in gaming performance. I'm sure Ada is a very strong architecture and a big gen-over-gen improvement for AI and LLM use, but at least for gaming it doesn't seem to be a huge architectural leap at this point.
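Quick perf-per-transistor arithmetic on that second pairing (numbers from the post):

```python
# 4070 Ti: 35.8B transistors, ~10% slower than the 28.3B-transistor 3090 Ti at 4K.
transistor_ratio = 35.8 / 28.3   # ~1.27x the transistor budget
perf_ratio = 0.90                # ~0.9x the 4K performance
print(f"Perf per transistor: {perf_ratio / transistor_ratio:.2f}x")  # ~0.71x
# Much of the added budget went to L2 cache and non-raster blocks rather
# than raw 4K raster throughput.
```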
 
  • Like
Reactions: Tlh97

jpiniero

Lifer
Oct 1, 2010
16,132
6,594
136
Alternately,
RTX 3070 Ti (GA104, 17.4B transistors) -> RTX 4060 (AD107, 18.9B transistors): unknown delta, but it's going to be way, way slower.
RTX 3090 Ti (full GA102, 28.3B transistors) -> RTX 4070 Ti (full AD104, 35.8B transistors): 10% slower at 4K.

The cache and the OFA (optical flow accelerator) take up a lot of space.