8GB VRAM not enough (and 10 / 12)


BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
8GB
In Horizon Forbidden West the 3060 is faster than the 2080 Super, despite the 3060 usually competing with the 2070. The 3060 also has better 1% lows than the 4060 and the 4060 Ti 8GB.
[chart: Horizon Forbidden West benchmark]
In Resident Evil Village the 3060 Ti and 3070 tank at 4K and are slower than the 3060 and 6700 XT when ray tracing:
[chart: Resident Evil Village 4K ray tracing benchmark]
In Company of Heroes the 3060 has a higher minimum framerate than the 3070 Ti:
[chart: Company of Heroes benchmark]

10GB / 12GB

Reasons why still shipping 8GB since 2014 isn't NV's fault:
  1. It's the player's fault.
  2. It's the reviewer's fault.
  3. It's the developer's fault.
  4. It's AMD's fault.
  5. It's the game's fault.
  6. It's the driver's fault.
  7. It's a system configuration issue.
  8. Wrong settings were tested.
  9. Wrong area was tested.
  10. Wrong games were tested.
  11. 4K is irrelevant.
  12. Texture quality is irrelevant as long as it matches a console's.
  13. Detail levels are irrelevant as long as they match a console's.
  14. There's no reason a game should use more than 8GB, because a random forum user said so.
  15. It's completely acceptable for the more expensive 3070/3070TI/3080 to turn down settings while the cheaper 3060/6700XT has no issue.
  16. It's an anomaly.
  17. It's a console port.
  18. It's a conspiracy against NV.
  19. 8GB cards aren't meant for 4K / 1440p / 1080p / 720p gaming.
  20. It's completely acceptable to disable ray tracing on NV while AMD has no issue.
  21. Polls, hardware market share, and game title count are evidence 8GB is enough, but are totally ignored when they don't suit the ray tracing agenda.
According to some people here, 8GB is neeeevaaaaah NV's fault and objective evidence "doesn't count" because of reasons(tm). If you have others please let me know and I'll add them to the list. Cheers!
 

Timorous

Golden Member
Oct 27, 2008
1,931
3,741
136
Different cards, wildly different settings, same 8GB framebuffer, completely different results. The GTX 1070 could never do what the 3060ti can, so once again, it's a matter of gpu processing power and not vram.

It has always been both...

The 3070 is a GPU with enough horsepower to use more than 8GB of VRAM. You can tell because the slower 16GB professional part that uses the same chip, in a single-slot-cooled package, frequently outperforms the 8GB variant or delivers better image quality.

The 4070 Ti has a lot of GPU horsepower, horsepower that cannot be fully utilised because the bus is narrow and it only has 12GB of VRAM. You can see this in how it tends to lose performance relative to the 3090 Ti and 7900 XT at 4K. Give it a 256-bit bus and 16GB of VRAM and this wouldn't be anywhere near as bad, and you'd probably see a decent bump in 1440p performance too.
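For reference, the bandwidth arithmetic behind that claim works out as below. This is a sketch using the published specs (peak bandwidth = bus width / 8 × per-pin data rate); the 256-bit 4070 Ti is hypothetical:

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps)
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

cards = {
    "4070 Ti (actual: 192-bit GDDR6X @ 21 Gbps)": (192, 21.0),
    "4070 Ti (hypothetical: 256-bit @ 21 Gbps)":  (256, 21.0),
    "3090 Ti (384-bit GDDR6X @ 21 Gbps)":         (384, 21.0),
    "7900 XT (320-bit GDDR6 @ 20 Gbps)":          (320, 20.0),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# 504 vs 672 vs 1008 vs 800 GB/s: the stock 4070 Ti has half the raw
# bandwidth of the 3090 Ti it otherwise trades blows with below 4K.
```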
 
Jul 27, 2020
24,563
17,074
146
you will cry when you see the 0.1%/1% lows in Doom Eternal.
That's most likely a game engine/driver issue. And that game just looks fugly compared to the others.

You are again kinda missing the point. The 4070 Ti has enough power to play older games at 4K but it is more likely to hit VRAM limits in them. It's artificially crippled to make the 4080 look more appealing.

Look at the AMD options:

RX 7600 8GB
RX 7700 XT 12GB
RX 7800 XT 16GB
RX 7900 XT 20GB
RX 7900 XTX 24GB

Now compare that with the GeFarce line-up:

4060 8GB
4060 Ti 8GB
4060 Ti 16GB
4070 12GB
4070 Ti 12GB
4080 16GB
4090 24GB

See how the Radeon line-up is much simpler? nGreedia's strategy seems to be: newbie gamer buys a 4060. Curses himself for getting cheated. Ditches it on the used market and, thinking he's being smart, goes for the 4060 Ti 16GB. Curses himself again when he runs into GPU performance issues. Looks at benchmarks this time and settles for the 4070 Ti. Sooner or later, he finds a game that he really likes that wants more VRAM. Curses himself again and looks at the 4080. "Oh, I'm not falling for that trick again, nGreedia!", says the frustrated gamer, feeling smug and satisfied with himself for his light bulb moment, and promptly plops down $2000 for the 4090. And just when he thought his troubles were over and he was enjoying sweet, sweet, buttery-smooth, finger-lickin' framerates, he smells something burning. He tries to RMA the 4090 but gets denied. "You didn't insert the connector properly!", he is told. He gets a sledgehammer and trashes his PC to smithereens. Buys a PS5 and vows never to touch PC gaming again.
 

Ranulf

Platinum Member
Jul 18, 2001
2,780
2,330
136
And here is a recent two-day-old video of a 12GB 4070ti vs a 24GB 3090. The 4070ti wins most runs and you will cry when you see the 0.1%/1% lows in Doom Eternal. Yeah, vram ain't gonna save you, friends.


You understand that the main takeaway from that comparison is that the value you get from a top-end card evaporates rather quickly, and even more so at the upper mid-range (which is now $800, not the $379 MSRP of 2016). I expect and hope a 3060 Ti is better and faster than a 1070, but that doesn't mean you aren't getting shortchanged with only 8GB of VRAM in the 3060 Ti, and especially in the new cards.

In the same amount of time we've been stuck at 8GB for 70-class cards, we previously went from 1-2GB to 8GB at the mid to upper mid-range product level: 2010 to 2016 versus 2016 to 2023. You can argue that the 3060 Ti is fine with 8GB, but it's already three years old now (as of three days ago). I can argue that your $800 4070 Ti with 12GB is nothing more than a 1080p Ultra card once you factor in ray tracing and frame generation. So much for any real advancement, other than an advanced assault on the customer's wallet.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,950
1,259
126
I said at the time that 10GB for the 3080 was ridiculous. Nvidia have a long history of being cheap asses on VRAM and claiming "it's enough" when history has proved them wrong many times over. I think they know this; it's planned obsolescence. Still, 10GB on a 3080 and 8GB on a 4060 is ridiculous. Those things were obsolete out of the box if you ask me, and that's not ethical imo.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
I said at the time that 10GB for the 3080 was ridiculous. Nvidia have a long history of being cheap asses on VRAM and claiming "it's enough" when history has proved them wrong many times over. I think they know this; it's planned obsolescence. Still, 10GB on a 3080 and 8GB on a 4060 is ridiculous. Those things were obsolete out of the box if you ask me, and that's not ethical imo.
In Europe, in theory, if the cards are still under the mandatory warranty period, could they not be returned as "not fit for purpose"?
 

Thunder 57

Diamond Member
Aug 19, 2007
3,607
6,027
136
I said at the time that 10GB for the 3080 was ridiculous. Nvidia have a long history of being cheap asses on VRAM and claiming "it's enough" when history has proved them wrong many times over. I think they know this; it's planned obsolescence. Still, 10GB on a 3080 and 8GB on a 4060 is ridiculous. Those things were obsolete out of the box if you ask me, and that's not ethical imo.

Kind of ironic coming from the guy who lists a 4080 in his signature.
 

Timorous

Golden Member
Oct 27, 2008
1,931
3,741
136
No, as they never promised a specific level of performance that isn't being met.

"Not fit for purpose" is more about what is reasonably expected from the product in question, and I think you could argue a VRAM deficiency on a $750 product makes it unfit.

As an example I have an expensive watch with an alligator leather strap. After wearing it for a few weeks one of the strap keeper loops started to split so I took it back to the jeweler and they sent it to get a new strap because it was not fit for purpose. (In this case a new strap has a retail price of ~£460 and it was replaced because of a keeper loop)

I guess that is kind of like coil whine: no GPU has a stated level of coil whine, but many GPUs get replaced due to excessive coil whine.

The 4060 may actually be more susceptible to this because of frame gen: an advertised feature meant to improve frame rates that, in some cases on the 4060, actually makes them worse due to the VRAM frame gen requires, especially if you try to mix it with RT, which also increases VRAM usage.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
It has always been both...
Sure it has always been both, but the balance is 75/25 towards GPU power instead of VRAM quantity. Maybe even less. That's why I say I have three 8GB cards and they are nothing alike.

I did show ARK Ascended a few posts before, where the 8GB 4060 Ti was 5X faster than the 8GB 6600, and no one made a peep about it... They were both still unplayable anyway, but we are arguing GPU power/VRAM balance here, and GPU power wins hands down.

The 3070 is a GPU with enough horsepower to use more than 8GB of VRAM. You can tell because the slower 16GB professional part that uses the same chip, in a single-slot-cooled package, frequently outperforms the 8GB variant or delivers better image quality.
We have also seen these modded 3070s with 16GB. They are indeed better, in some corner cases at best. For the vast majority of games, the supposed 16GB 3070s would fall flat on their face, due to GPU power first and foremost. Especially in newer games, where the supposed longevity would be needed. Regarding more VRAM on the same GPU, Steve at Gamers Nexus said it plainly: "avoid the 4060 Ti 16GB, forget it exists". Because it's a stupid card, only there to fill a price point. If you want true performance, you go a tier above, to the 4070.

Look at two of the heaviest games in existence on the PC right now: ARK Ascended and Alan Wake II.

[charts: Ark Survival Ascended and Alan Wake 2 PC performance benchmarks for graphics cards]

What do you think would change with 16GB 3070s? Absolutely nothing. Keep in mind that both of these games favor AMD, so I don't want to hear any complaints that I cherry-picked Nvidia games.


The 4070 Ti has a lot of GPU horsepower, horsepower that cannot be fully utilised because the bus is narrow and it only has 12GB of VRAM. You can see this in how it tends to lose performance relative to the 3090 Ti and 7900 XT at 4K. Give it a 256-bit bus and 16GB of VRAM and this wouldn't be anywhere near as bad, and you'd probably see a decent bump in 1440p performance too.
You cannot give a 4070 Ti a wider bus or more VRAM, because it is a full chip. Nvidia's engineers designed it this way. People in forums knowing better than Nvidia's engineers always blows my mind...

The 4070 Ti is not bad at all. The AD104 is not a 4K chip by a long shot. It is a very good 1440p chip and will be a very good 1080p chip later on. This is how things are. Actually, looking at ARK Ascended, it is less than a 1080p chip already, lol. This game is very bad for Nvidia GPUs, though. It clearly favors AMD, but it is what it is, irrelevant to the matter at hand. Nothing to do with VRAM or anything.


The AD104/4070 Ti is a 40 TFLOP chip/card and the 4090 is an 82.5 TFLOP card. With roughly double the theoretical compute, the 4090 is only 62% faster according to TPU's list.
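A quick sanity check of that scaling claim, using the TFLOP figures from the post and the +62% from TPU's relative performance summary (a sketch; the efficiency number is just the ratio of the two):

```python
# Compare theoretical compute scaling with observed performance scaling.
tflops_4070ti = 40.0    # figure from the post
tflops_4090 = 82.5      # figure from the post

compute_ratio = tflops_4090 / tflops_4070ti   # ~2.06x
observed_ratio = 1.62                         # 4090 ~62% faster per TPU

print(f"Theoretical compute advantage: +{(compute_ratio - 1) * 100:.0f}%")   # +106%
print(f"Observed performance advantage: +{(observed_ratio - 1) * 100:.0f}%") # +62%
print(f"Scaling efficiency: {observed_ratio / compute_ratio:.0%}")           # ~79%
```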


Also, looking at the above GameGPU benches, the 7900 XT you mentioned is 29.78% faster at 1080p in Ark and only 22.72% faster at 4K. So at 4K, percentage-wise, its lead is even smaller. That is also a considerably bigger gap than the usual mean between the two cards.

In Alan Wake II, at 1080p the 7900 XT is 13.20% faster, while at 4K it's only 7.5% faster. So no, the 320-bit bus of the 7900 XT and its 20GB of VRAM, compared to the 192-bit bus of the 4070 Ti and its mere 12GB, ain't doing much more at 4K, which the 4070 Ti isn't aimed at anyway.

Actually, I can push it even further: if you compare all the tech available to these cards at 4K with DLSS/FSR in Alan Wake 2, the 4070 Ti and 7900 XT are basically tied, and we know that DLSS gives the better image.


And let's not even start discussing RT....

[chart: Alan Wake 2 PC performance benchmarks for graphics cards, RT]

Yeah I know, people in this forum don't like any of nvidia's techs. Figures...

So let me break it down for you. When you really start pushing visual tech and combine all of Nvidia's techs (without even frame gen), you end up with a poor 192-bit 12GB AD104 being faster than a power-gobbling 7900 XTX, with better visual quality.

So yeah, as a 4070 Ti user, I am happy as a clam. It is one of the best chips Nvidia made, really.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
You are again kinda missing the point. The 4070 Ti has enough power to play older games at 4K but it is more likely to hit VRAM limits in them. It's artificially crippled to make the 4080 look more appealing. […] See how the Radeon line-up is much simpler? nGreedia's strategy seems to be: newbie gamer buys a 4060. […] He gets a sledgehammer and trashes his PC to smithereens. Buys a PS5 and vows never to touch PC gaming again.
A newbie gamer can go buy an Atari 2600 and start buying everything from that up to the 4090. This is a non-argument. GPUs are complicated products. You can have a very good 1080p high-fps card that is not fit for 1440p, but you can also have a 1440p/60 card that is not fit for high-fps 1080p. These are different products. Users need to set their expectations straight when buying.

The 4070 Ti is not a 4K card by a long shot. It is perfectly balanced the way it is. No more VRAM is needed for its performance tier. Nvidia's engineers surpassed themselves imo, all things considered: die size, transistor count, power draw, features. Yeah, they were not going to give it away for free; that's the way it goes.

He can buy a PS5 all he wants. He will end up playing Alan Wake II like this, at 847p, compared to a 3060 Ti running straight 1080p DLAA.

[screenshot: Alan Wake 2 settings, PC High vs PS5]

This is the mentality going on in this thread and that's why I am opposing it. You are only damaging PC gaming with this whole "8GB is not enough" thingy. Crank everything over 9000 and yeah, it does not run well. Good luck playing UE5 games at 720p upscaled to whatever res with FSR, mate.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
You understand that the main takeaway from that comparison is that the value you get from a top-end card evaporates rather quickly

The main takeaway is that the 4070 Ti is a mighty good card. Not a 4K card. I posted the 4K video as a proof of concept. It IS, however, a very good 4K/DLSS card, and that was my intended target. Having a blast so far... and the way things are going, it will have GPU power problems long before it has VRAM problems.

and even more so at the upper mid-range (which is now $800, not the $379 MSRP of 2016).

The tiers have changed now though. There are more of them and there is more competition from last-gen cards. Chips have gotten huge and can be cut in more ways. Different resolutions and different settings. Going from 1080p low to 4K-ultra-gtfo has many levels in between, not just three cards.

I expect and hope a 3060 Ti is better and faster than a 1070, but that doesn't mean you aren't getting shortchanged with only 8GB of VRAM in the 3060 Ti, and especially in the new cards.

You are not getting shortchanged at all. The 3060 Ti is consistently, healthily faster. The equal VRAM amount has nothing to do with anything. Especially if you also take mesh shaders into account, the 3060 Ti is even faster nowadays (Alan Wake 2).

In the same amount of time we've been stuck at 8GB for 70-class cards, we previously went from 1-2GB to 8GB at the mid to upper mid-range product level: 2010 to 2016 versus 2016 to 2023. You can argue that the 3060 Ti is fine with 8GB, but it's already three years old now (as of three days ago). I can argue that your $800 4070 Ti with 12GB is nothing more than a 1080p Ultra card once you factor in ray tracing and frame generation. So much for any real advancement, other than an advanced assault on the customer's wallet.

The 7900 XTX can also be called a 1080p card if you try to run Alan Wake II with RT though...

People seem to think that going from a 3060 Ti to a twice-as-fast 4070 Ti, you need twice as much VRAM. I swear to god, sometimes it feels like I walked into a church forum that suddenly decided to discuss GPUs. VRAM amount does not, and does not need to, scale linearly with GPU processing power. It's 75/25 at best for the vast majority of games, excluding corner cases. They are totally different things. You could argue that bandwidth needs to scale that way, but Nvidia would prove us wrong again. The 4070 Ti has only 12.5% more bandwidth than the 3060 Ti and still destroys it. It also has only 50% more VRAM and still destroys it. And I am talking about things the 3060 Ti can run healthily, not ridiculous examples like 1440p max + max RT. I will call upon Alan Wake II: 3060 Ti, 55fps at 1080p DLAA; 4070 Ti, 106fps. That's the delta I see in everything, really. So let me worry about the 4070 Ti, mkay?
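For reference, those ratios computed out (the 448 and 504 GB/s figures are the published 3060 Ti and 4070 Ti specs; the L2 cache note is a widely reported Ada design point, not something from this thread):

```python
# Resource growth vs observed performance growth, 3060 Ti -> 4070 Ti.
bandwidth_ratio = 504 / 448   # 448 GB/s -> 504 GB/s
vram_ratio = 12 / 8           # 8 GB -> 12 GB
fps_ratio = 106 / 55          # Alan Wake II, 1080p DLAA, fps from the post

print(f"Bandwidth: +{(bandwidth_ratio - 1) * 100:.1f}%")  # +12.5%
print(f"VRAM:      +{(vram_ratio - 1) * 100:.0f}%")       # +50%
print(f"FPS:       +{(fps_ratio - 1) * 100:.0f}%")        # +93%
# Performance nearly doubled on +12.5% raw bandwidth, partly because
# AD104 carries a far larger L2 cache than GA104, cutting DRAM traffic.
```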

You are all seeing this 8GB thingy in a very shallow way, and that is very sad for a tech forum that should otherwise have some deeper knowledge of how things work. 8GB is A LOT of data. You can clearly see that ARK "fits" in 8GB at 1080p and still brings a 4080 to its knees. I don't even know what we are discussing here...
 

Thunder 57

Diamond Member
Aug 19, 2007
3,607
6,027
136
Sure it has always been both, but the balance is 75/25 towards GPU power instead of VRAM quantity. Maybe even less. […] You cannot give a 4070 Ti a wider bus or more VRAM, because it is a full chip. Nvidia's engineers designed it this way. People in forums knowing better than Nvidia's engineers always blows my mind... […] So yeah, as a 4070 Ti user, I am happy as a clam. It is one of the best chips Nvidia made, really.

Blah blah blah, "I'm going to continue to be a contrarian". "Can't give a 4070 Tye a wider bus or more memory because it is a full die". Well, Nvidia shouldn't have done that and screwed everyone over. "It is one of the best chips Nvidia made, really"? HA! You must be on something. It is one of the worst chips Nvidia has made, probably going back to Fermi.

But hey, if you're happy as a clam, I have no problem with you enjoying it/validating your flawed purchase.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,607
6,027
136
You are all seeing this 8GB thingy in a very shallow way, and that is very sad for a tech forum that should otherwise have some deeper knowledge of how things work. 8GB is A LOT of data. You can clearly see that ARK "fits" in 8GB at 1080p and still brings a 4080 to its knees. I don't even know what we are discussing here...

What is "very sad" is cherry picking examples to try to prove your viewpoint. You'll figure it out when your 4070 Tye starts to choke compared to a useless 4060 Ti 16GB and have to disable your ray tracing or lower your DLSS. Actually, you probably won't.
 

coercitiv

Diamond Member
Jan 24, 2014
7,156
16,660
136
I swear to god, sometimes it feels like I walked into a church forum that suddenly decided to discuss GPUs.
Tell me about it. Ever since you stepped into this thread, it feels like a missionary is trying to convert me to some kind of happiness cult. Every time I object to something that doesn't really make sense, the missionary deflects and tells me I do not understand how the world works; I just need to bend my head a bit and I'll see things in the correct light.

All this reminds me of an old joke: a fisherman wants to have the best fishing suit money can buy, so he commissions one from an expensive tailor. The tailor crafts him a new suit with the best materials on the market, except he makes a few mistakes. The fisherman comes in to check his suit: it looks fantastic until he puts it on, and it doesn't sit quite right. The tailor looks at the fisherman and tells him he needs to know how to wear the suit. After teaching him how to stand and walk, the clothes fit perfectly.

A few days later, early in the morning, the fisherman heads to his favorite fishing spot. On the way, some folks observe him and talk to each other:
"Take a look at that man, what do you think?"
"Poor man, what an odd limp, must be a birth defect."
"Yeah, but his suit is the best I've ever seen. Amazing fit too."
 

Timorous

Golden Member
Oct 27, 2008
1,931
3,741
136
Sure it has always been both, but the balance is 75/25 towards GPU power instead of VRAM quantity. Maybe even less. […] We have also seen these modded 3070s with 16GB. They are indeed better, in some corner cases at best. […] What do you think would change with 16GB 3070s? Absolutely nothing. […] Also, looking at the above GameGPU benches, the 7900 XT you mentioned is 29.78% faster at 1080p in Ark and only 22.72% faster at 4K. […] So yeah, as a 4070 Ti user, I am happy as a clam. It is one of the best chips Nvidia made, really.

Those GameGPU charts are horrendous. Mixing DLAA, DLSS, and native all on the same chart makes it very difficult to compare like for like, and it does not even look as though they test NV GPUs at true native; their "native" is DLAA, which typically has barely any FPS cost, but not always. Also, it seems to be sorted by 4K FPS, making it even harder to parse, so I am not going to use that chart.

I will use the much easier-to-parse one from TPU, whose methodology I also rank more highly.

[chart: TPU relative performance, RT, 3840×2160]


As you can see, the 4060 Ti 16GB is far faster than the 4070 Ti at this 4K native + RT setting. From the 1440p results you would expect a 4070 Ti with a sensible memory and bus config to remain between the 3090 Ti and 3090, but crank it to 4K and it falls off a cliff.

Now, ~30fps may not be playable for some people, but I am sure that with fine tuning you could get there with a 4070 Ti and enjoy it at native, or use DLAA to improve IQ. I would not be at all surprised if optimised settings + RT + 4K DLAA gives better IQ than running Ultra + RT + DLSS to 4K.

As for a 16GB 3070:

It would be beating the 2080Ti at 1440p + RT.

[chart: TPU relative performance, RT, 2560×1440]


A near-30fps experience which, with tuning, could be a smooth 30fps experience. Again, not unplayable for a slower-paced game. And here a 4060 Ti 16GB with frame gen on and some tuning of the settings could actually offer a really smooth visual experience, unlike the 8GB model.

Or, if you wanted to try path tracing, a 16GB 3070 would probably offer a 30+ fps experience like the 4060 Ti 16GB manages. Also look how well the 4070 Ti can do when the memory system is not overwhelmed.

[chart: TPU relative performance, path tracing, 1920×1080]


And yes, these are corner cases, at the moment. The problem with corner cases like Doom Eternal for the 3070 is that as time goes on they tend to stop being corner cases and become the norm. We are seeing more and more cases where 8GB is insufficient for the amount of GPU power on tap, we are starting to see it with 12GB, and in the future 16GB and more will be the new low end.

For serial upgraders it is less of an issue. For those who like to buy solid hardware and run it until it dies, it is an issue, because the sort of settings you need to reduce when you are VRAM-limited often have far more IQ impact than the ones you need to turn down or off when GPU-limited. Also, there will come a point where no amount of tuning will make a game fit in a limited VRAM buffer, whereas you can usually keep turning GPU compute settings down to get something playable.

The other issue that none of these charts show, and where video reviews are a benefit, is how much texture swapping or how many LOD issues occur when the engine runs out of VRAM. Some engines, rather than tank the frame rate, just make the game look terrible so the bar chart at least looks okay. That is what happened in Hogwarts Legacy with the 8GB 3070: frame rates were fine, but texture swapping made it look like garbage. The 16GB pro model, however, had fine frame rates and the IQ was also great.

I expect that in 5 years' time the 4070 12GB will be useless for RT in new titles, just like the 7800 XT will be, and sometimes is now. I also expect that for pure raster (which will be a thing until at least halfway into the next console cycle, maybe the one after that) the 7800 XT will require fewer trade-offs to get playable frame rates at 1440p. The same goes for the 4070 Ti and 7900 XT at 4K; in fact, at 4K I would not be surprised if in raster the 7800 XT sometimes comes out ahead at playable settings.

All this tells me is that we need a new outlet to come and do what [H] did: set a target frame rate and then max out the IQ using all the tools available while maintaining that frame rate.
 

Aapje

Golden Member
Mar 21, 2022
1,530
2,106
106
"Not fit for purpose" is more about what is reasonably expected from the product in question, and I think you could argue a VRAM deficiency on a $750 product makes it unfit.
The issue with that argument is that there are a lot of different use cases and expectations. An 8 GB card with 4090-level performance would probably be perfect for the esports gamer with a 540 Hz 1080p monitor.

So as long as there is a use case for a product and the company doesn't really overtly lie about being able to do something that the product is not suited for, they can typically get away with it.

You may disagree, but the courts are also not going to let you return a Ferrari because it can't clear speed bumps, just because the expectation for a regular car is that it can do that.

I think you could argue a VRAM deficiency on a $750 product makes it unfit.
For certain tasks. The question for the courts is just going to be whether the company didn't deceive you into thinking that it was suitable for things that it isn't suited for.
 

Aapje

Golden Member
Mar 21, 2022
1,530
2,106
106
I will use the much easier-to-parse one from TPU, whose methodology I also rank more highly.
That first picture has some really weird data, which makes me distrust it a lot. Why is a 4070 Ti 12 GB so much slower than a 6700 XT 12 GB, when it should be the opposite?

The only explanation that I can come up with is that they are CPU-limited in those tests, which is why the AMD cards with their lower CPU overhead are overperforming.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,607
6,027
136
That first picture has some really weird data, which makes me distrust it a lot. Why is a 4070 Ti 12 GB so much slower than a 6700 XT 12 GB, when it should be the opposite?

The only explanation that I can come up with is that they are CPU-limited in those tests, which is why the AMD cards with their lower CPU overhead are overperforming.

CPU limited at 4K? But I thought AMD had the crappy drivers!
 
Jul 27, 2020
24,563
17,074
146
Excellent post, @Timorous! The same GPU (4060 Ti) goes from unplayable to playable in Alan Wake 2 with path tracing by the simple doubling of VRAM. A perfect example of how nGreedia is refusing to let the masses have the full performance of the GPU that they "PAID" for.
 

Timorous

Golden Member
Oct 27, 2008
1,931
3,741
136
That first picture has some really weird data, which makes me distrust it a lot. Why is a 4070 Ti 12 GB so much slower than a 6700 XT 12 GB, when it should be the opposite?

The only explanation that I can come up with is that they are CPU-limited in those tests, which is why the AMD cards with their lower CPU overhead are overperforming.

Could be ReBar related. NV has it off by default with a whitelist; AMD has it on by default, and I have no clue if AW2 is on the NV whitelist.

You may disagree, but the courts are also not going to let you return a Ferrari because it can't clear speed bumps, just because the expectation for a regular car is that it can do that.

They 100% would, unless it was a super-small-volume or track-only special, mainly because it is required by law.

For certain tasks. The question for the courts is just going to be whether the company didn't deceive you into thinking that it was suitable for things that it isn't suited for.

Nah. The question will be: is the product fit for purpose? Take the simple example of a GPU with an underpowered cooling solution that thermally throttles, either by design or through defect. Anything like that would be considered unfit for purpose, and in the UK you have 6 months to inform the retailer that you are rejecting it for those reasons. They get one chance to repair or replace the item, and if the new or repaired item is unfit you can get a full refund. After the 6-month window you can get a pro-rata refund to account for your use of the product.

I think it is reasonable for a more expensive product of a generation to outperform cheaper products in the same generation pretty much across the board. I do not think it can be reasonably argued that a 4070 Ti with a launch price of $800 should get outperformed by a $500 4060 Ti 16GB in any gaming scenario. Since it does in multiple instances, I think that makes it unfit for purpose. This would probably need to be argued with the ombudsman or courts depending on country, but I think there is a legitimate argument to be made. It is not a slam dunk, of course, because it is more subjective than a GPU overheating due to an improper cooling solution.
 

Aapje

Golden Member
Mar 21, 2022
1,530
2,106
106
The main takeaway is that the 4070 Ti is a mighty good card. Not a 4K card. I posted the 4K video as a proof of concept. It IS, however, a very good 4K/DLSS card, and that was my intended target.

4K DLSS Quality renders at 1440p, so you are actually using it as a 1440p card.

You are pretty much in a sweet spot, as DLSS works best at 4K on the Quality setting. However, where do you go when games get more demanding and you get pushed out of your sweet spot? Lowering DLSS settings will start introducing artifacts, so you'll probably lower the game settings. But then you'll still have the issue that with DLSS, it's garbage in = garbage out.
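For what it's worth, the "renders at 1440p" point follows directly from the published DLSS 2 scale factors (Quality ≈ 0.667, Balanced 0.58, Performance 0.5, Ultra Performance ≈ 0.333); a minimal sketch:

```python
# Internal render resolution for each DLSS 2 mode (published scale factors).
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in DLSS_SCALE:
    w, h = render_resolution(3840, 2160, mode)
    print(f"4K {mode}: renders at {w}x{h}")
# 4K Quality: renders at 2560x1440 -- hence "using it as a 1440p card".
```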

With the memory you are also currently in a sweet spot, as games almost never exceed the 12 GB, although we see games creeping higher and higher. We also see that ray tracing increases VRAM usage. We know that Micron is planning 3 GB chips with GDDR7, while GDDR6 stagnated at 2 GB chips. So there is a good chance that we'll see a jump in VRAM sizes with those 3 GB chips, which in turn will probably mean that programmers will start to use that VRAM.
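The reason chip density moves the needle: GDDR chips each sit on a 32-bit slice of the bus, so chip count is bus width / 32 and capacity is chip count × density. A sketch (clamshell mode, which doubles the count by putting two chips per channel, is ignored here):

```python
# VRAM capacity implied by bus width and per-chip density.
# Each GDDR chip occupies a 32-bit slice of the memory bus.
def vram_gb(bus_bits: int, gb_per_chip: int) -> int:
    return (bus_bits // 32) * gb_per_chip

for bus in (128, 192, 256, 320, 384):
    print(f"{bus}-bit: {vram_gb(bus, 2)} GB with 2GB GDDR6 chips, "
          f"{vram_gb(bus, 3)} GB with 3GB GDDR7 chips")
# A 192-bit card goes from 12 GB to 18 GB once 3GB chips arrive --
# the jump in VRAM sizes described above.
```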

This is really my objection to many Nvidia cards. They seem designed and priced to put you in a sweet spot right now, but once you leave that sweet spot they suddenly don't look so great anymore, compared to AMD and Nvidia cards that don't have such a limited sweet spot. So the AMD card or the 3060 12 GB may not have looked that hot when first introduced, but a while later the 3060 Ti, 3070, and 3080 owners experience a rapid decline in their ability to keep up with newer games, relative to similarly priced cards that don't have such a limited sweet spot.

DLSS 3 is also a feature that only really works well when you have a fairly high frame rate to start with, so it's another thing they expect you to pay for that just increases how fast the card ages: your 90 -> 160 Hz DLSS 3 suddenly drops to 60 Hz once you cannot stick to 90 Hz and your base frame rate is no longer in the sweet spot for DLSS 3.

People are normally better at accepting lower performance when they haven't known any better than at accepting a large decline. So that 160 Hz -> 60 Hz drop is probably going to feel really bad and push you to an early upgrade, while a person who paid the same money for better base performance on an AMD card will perhaps reach only 120 Hz at first, rather than the 160 Hz with DLSS 3, but then the decline will be less, going from 120 Hz to something like 80 Hz rather than from 160 Hz to 60 Hz.
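A toy model of that decline argument, using only the post's example numbers (the ~1.78x frame gen multiplier is inferred from the 90 -> 160 Hz case, and the one-third decline from the 120 -> 80 Hz case; both are assumptions, not measurements):

```python
# Both cards lose a third of their base frame rate as games get heavier;
# the NV card also loses frame gen once its base rate leaves the sweet spot.
FG_FACTOR = 160 / 90   # DLSS 3 multiplier inferred from the post's example
DECLINE = 2 / 3        # newer games cut base fps by a third (post's example)

nv_base, amd_base = 90, 120   # launch-day base frame rates from the post

nv_launch = nv_base * FG_FACTOR   # ~160 Hz displayed with DLSS 3 on
nv_later = nv_base * DECLINE      # 60 Hz: below FG's sweet spot, so FG off
amd_later = amd_base * DECLINE    # 80 Hz, no FG to lose

print(f"NV:  {nv_launch:.0f} -> {nv_later:.0f} Hz  (a {1 - nv_later/nv_launch:.0%} perceived drop)")
print(f"AMD: {amd_base:.0f} -> {amd_later:.0f} Hz  (a {1 - amd_later/amd_base:.0%} drop)")
```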

For me, $800 seems like a lot of money to gamble that you won't get pushed out of the sweet spots too quickly, especially since Nvidia just seems to keep increasing the number of sweet spots that exist, so the chance just gets bigger and bigger that you get pushed out of one of them.
The tiers have changed now though. There are more of them and there is more competition from last-gen cards. Chips have gotten huge and can be cut in more ways.
The chips have gotten smaller compared to last gen, so I don't know what you are on about. The x080 went from 630 to 280 mm2. The largest Navi x1 chip went from 520 to 308 mm2.

Because Nvidia has cut the bus sizes to the bone, they actually have left themselves very little room to offer different chip configs, as they can pretty much only play with the number of cores.

You are all seeing this 8GB thingy in a very shallow way, and that is very sad for a tech forum that should otherwise have some deeper knowledge of how things work. 8GB is A LOT of data.
Yes, and my old 128 MB card also had a lot of VRAM compared to the 8 MB Voodoo 2 card that I also owned. Yet neither card could run any game today.

Time moves on and VRAM demand is ever-increasing. Pretending that it is not, and that more VRAM is unimportant because it is not that limiting now, even though it has a very high chance of becoming (more) limiting over the 3-8 years you intend to use the card, is just sticking your head in the sand.
 