Question: Is 10GB of VRAM enough for 4K gaming for the next 3 years?


moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
The question is simple. Will 10GB be enough moving forward for the next 3 years? We all know what happens when VRAM gets breached: skips, stutters, and chugging.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
I see Tarkov on this page mentioned. What are the other 2 please? Just curious.

One is a game that isn't even out yet (I think the name is Godfall) and hasn't been tested so who knows if it's a big deal or not.

I think the third was a CoD title, but I may be mixing that up with discussions about 8 GB on the 3070 being sufficient.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
Godfall is running at 70-100fps at 4K Ultra/Epic on the 3080. So it's OK I guess.


It runs at 1440p/60 or 4k/30 on the PS5 I think, so it's a-ok...

There's some pretty big stutters in there, many right in the middle of combat. I'm not saying it's memory related, but it would be interesting to test frame times against a GPU with more VRAM to see if the stutters go away or not. I also wonder if there are other levels that are larger areas as that will obviously increase VRAM usage. Ray tracing tends to increase VRAM usage as well. It would be great if 10 GB is sufficient, but I tend to trust the developers who have already characterized the performance and VRAM usage in their own game. But we'll see once it gets tested more thoroughly.
 

Leeea

Diamond Member
Apr 3, 2020
3,625
5,368
136
I see Tarkov on this page mentioned. What are the other 2 please? Just curious.

MS flight Sim (ouch)

and GodFall*

*GodFall is associated with AMD's marketing department, to show off RDNA2 on its next-gen graphics cards and consoles. It uses 12 GB of VRAM. The resources are not wasted**, but rather the textures, view distance, and scene complexity are designed to use 12 GB on the high graphics setting. GodFall is expected to run smoothly on console.

**looking at you Nvidia gameworks
 
Last edited:

DJinPrime

Member
Sep 9, 2020
87
89
51
There's some pretty big stutters in there, many right in the middle of combat. I'm not saying it's memory related, but it would be interesting to test frame times against a GPU with more VRAM to see if the stutters go away or not. I also wonder if there are other levels that are larger areas as that will obviously increase VRAM usage. Ray tracing tends to increase VRAM usage as well. It would be great if 10 GB is sufficient, but I tend to trust the developers who have already characterized the performance and VRAM usage in their own game. But we'll see once it gets tested more thoroughly.
The stuttering is probably not memory related, since the memory usage shows it to be around 8 GB. If they turn on ray tracing, the FPS is going to tank way below 60. At that point, you either have to start lowering settings or use DLSS. So, less VRAM needed.
My feeling about the current gen is that memory is not an issue. I prefer higher framerates over maxing every setting; I guess I'm used to that coming from using a 1660 Ti currently. Looking at benchmarks, if you run with everything turned to max at 1440p or 4K, even today's games are in the 60-90 fps range for a 3080. I'd much rather have my games running over 100, so some settings will have to be lowered anyway. I just think that, like previous generations going from 1GB -> 2GB, 2 -> 4, 4 -> 8, by the time the memory size becomes an issue, the GPU is already too slow. That's my reasoning for going with a 3070 and not bothering to wait for the Ti version. I'm getting mine delivered today! Also, with the current stock situation, just being able to buy one at retail price felt like winning the lottery.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
MS flight Sim (ouch)

and GodFall*

*GodFall is associated with AMD's marketing department, to show off RDNA2 on its next-gen graphics cards and consoles. It uses 12 GB of VRAM. The resources are not wasted**, but rather the textures, view distance, and scene complexity are designed to use 12 GB on the high graphics setting. GodFall is expected to run smoothly on console.

**looking at you Nvidia gameworks

Although of course - to no one's surprise - NV have the odd game with options to turn ray tracing right up, so you really need DLSS to do anything. Again, mild advantages, but....

Any game setting like this associated with a marketing department comes with a very distinct health warning. The two companies have distinctly different, but overall close enough, architectures this time round, so they'll play amusing games 'breaking' each other's cards. AMD have the money to play these games, so they will!
(The games get a load of free publicity, of course.)
 

DJinPrime

Member
Sep 9, 2020
87
89
51
You are also going to encounter performance drop-offs in Doom Eternal's 4K Ultra Nightmare setting as it uses moar video memory :)
1. The drop-off is not that significant, if I remember the DF video correctly, especially for a game that's running well over 100 fps.
2. Then turn the setting down one notch; you wouldn't notice the difference.
3. It might not be worth the money to get a higher tier of video card for the extra memory, because of 1 & 2.
4. Most games are not near 8GB of actual usage; just look at Afterburner and the in-game info. Even Godfall: don't buy the marketing, the game in its current state does not require 16GB. Why would game developers make games that would trash the majority of their customers? Not everyone will be running 3080s or 6800s; in fact the majority won't for a good number of years.
5. You can wait for the AMD or NV refresh, but you'll be waiting a while due to the supply issues. There's always something better coming out; by the time these cards are widely available, people will be hyping up RDNA3 & RTX 40 cards. So, do you wait more?
6. Something that I don't think gets talked about much: 1GB holds a lot of textures, so what the heck are developers doing that they need constant streaming of multiple GBs of video assets? Nothing happens on screen that would suddenly require a massive amount of new graphics assets. And for the AMD fans, that magical Infinity Cache is only 128MB; doesn't that tell you that for the majority of cases the GPU is only working with MBs of data? And for a cache to be worth it, the same data has to stay in the cache. First-level caches are in KB, and second-level is just a few MB in size. Second-level and higher caches are shared by all the processing units. That should tell you something.
 

kondziowy

Senior member
Feb 19, 2016
212
188
116
#2 Well, in e.g. Tarkov you DO notice a difference, trust me. I run maxed textures even on a card 2x slower than a 3080, because medium looks ugly.

And what about Star Citizen? It required insane amounts of VRAM like a year ago. Has something changed?
 
  • Like
Reactions: spursindonesia

DJinPrime

Member
Sep 9, 2020
87
89
51
#2 Well, in e.g. Tarkov you DO notice a difference, trust me. I run maxed textures even on a card 2x slower than a 3080, because medium looks ugly.

And what about Star Citizen? It required insane amounts of VRAM like a year ago. Has something changed?
Never played those games, but just looking at random YouTube videos of those two games with Afterburner running does not show them to be memory limited. Just watch one of SC running 4K on a 2080 Ti, showing only 7GB allocated with fps under 40.

Going back to my #6 point: for a 4K screen (3840*2160), an uncompressed image that size at 32 bits of color per pixel is 3840*2160*32 = 265,420,800 bits = 33,177,600 bytes = 33.2MB. In 1GB, you can hold almost 31 of these images. That's uncompressed. Are game developers so unoptimized that they're just using graphics assets without thinking about memory, optimization, and reuse?
I guess I could make a game where each of my on-screen characters uses a unique texture file, where I need the entire file even if only a fraction will be displayed. Each inanimate object also uses distinct textures. Instead of 4K, why not use 16K quality assets? That will make it look super uber detailed on a 1080p screen, right? Sure, then memory will become a problem.
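
For reference, here's that arithmetic as a quick Python sketch (assuming an uncompressed 32-bit RGBA image; real engines compress and tile, so treat these as upper bounds):

Code:
# Back-of-the-envelope framebuffer math: one uncompressed 4K RGBA image.
WIDTH, HEIGHT = 3840, 2160      # 4K resolution
BITS_PER_PIXEL = 32             # 8 bits each for R, G, B, A

bits = WIDTH * HEIGHT * BITS_PER_PIXEL     # 265,420,800 bits
size_bytes = bits // 8                     # 33,177,600 bytes (~33.2 MB)

print(f"One uncompressed 4K RGBA image: {size_bytes:,} bytes")
print(f"Images per 1 GB (decimal): {1_000_000_000 // size_bytes}")   # 30
print(f"Images per 1 GiB (binary): {(1 << 30) // size_bytes}")       # 32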
 
  • Haha
Reactions: spursindonesia

kondziowy

Senior member
Feb 19, 2016
212
188
116
Never played those games, but just looking at random YouTube videos of those two games with Afterburner running does not show them to be memory limited. Just watch one of SC running 4K on a 2080 Ti, showing only 7GB allocated with fps under 40.
All I see is a constant 11.5GB VRAM allocation on an RTX 3090, and I know 8GB was not enough for this game already two years ago, so I'm not talking to you anymore, because you see what you want to see. And I don't care about development, I care about playing. Bye.
 

lightmanek

Senior member
Feb 19, 2017
387
754
136
1. The drop-off is not that significant, if I remember the DF video correctly, especially for a game that's running well over 100 fps.
2. Then turn the setting down one notch; you wouldn't notice the difference.
3. It might not be worth the money to get a higher tier of video card for the extra memory, because of 1 & 2.
4. Most games are not near 8GB of actual usage; just look at Afterburner and the in-game info. Even Godfall: don't buy the marketing, the game in its current state does not require 16GB. Why would game developers make games that would trash the majority of their customers? Not everyone will be running 3080s or 6800s; in fact the majority won't for a good number of years.
5. You can wait for the AMD or NV refresh, but you'll be waiting a while due to the supply issues. There's always something better coming out; by the time these cards are widely available, people will be hyping up RDNA3 & RTX 40 cards. So, do you wait more?
6. Something that I don't think gets talked about much: 1GB holds a lot of textures, so what the heck are developers doing that they need constant streaming of multiple GBs of video assets? Nothing happens on screen that would suddenly require a massive amount of new graphics assets. And for the AMD fans, that magical Infinity Cache is only 128MB; doesn't that tell you that for the majority of cases the GPU is only working with MBs of data? And for a cache to be worth it, the same data has to stay in the cache. First-level caches are in KB, and second-level is just a few MB in size. Second-level and higher caches are shared by all the processing units. That should tell you something.

Obviously any of us can determine the best settings we want to play with and the target frame rate we feel is adequate to enjoy a gaming session, but here we are discussing which games are already utilising more than the 8GB or 10GB of VRAM found on previous high-end and recent mid-to-high-end cards. The problem is, people spending a not-insignificant amount of money on their hardware tend to expect to max out all the graphical options and have a smooth experience, but we are at the point where a new console cycle has just started and each vendor selected 16GB of RAM + super-fast storage for their top-of-the-line machines, so developers will be targeting this level of resources with their assets and scene complexities. Currently PCs lack DirectStorage, and any game relying on it on consoles will have to either use more VRAM on PC to guarantee a similar experience or cut back on the quality of assets. In other words, for now PC gamers will have to compensate with more, not less, VRAM compared to consoles. We have all the TFLOPS and CPU power over consoles, but lack proper support for other memory-related technologies like DirectStorage or compression engines, and it will take more time before we reach parity on all fronts again.

I would love to have 128GB of RAM on a GPU and not worry about it, but realistically, it comes down to money. Both manufacturers made their decisions years ago as to how much VRAM they targeted for specific price and performance points, thinking that they were doing the right thing. The next 12 months of upcoming titles will verify that, and show how the cards compare against their competition in said titles.
8GB cards will still play well or very well, but I have a feeling that 12GB+ will be that little bit better in certain scenarios, and more often the further we go into 2021.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
Never played those games, but just looking at random youtube videos of those 2 games with Afterburners running does not show them to be memory problems. Just watch one for SC running 4k with a 2080ti and only showing 7GB allocated with fps under 40.

Going back to my #6 point: for a 4K screen (3840*2160), an uncompressed image that size at 32 bits of color per pixel is 3840*2160*32 = 265,420,800 bits = 33,177,600 bytes = 33.2MB. In 1GB, you can hold almost 31 of these images. That's uncompressed. Are game developers so unoptimized that they're just using graphics assets without thinking about memory, optimization, and reuse?
I guess I could make a game where each of my on-screen characters uses a unique texture file, where I need the entire file even if only a fraction will be displayed. Each inanimate object also uses distinct textures. Instead of 4K, why not use 16K quality assets? That will make it look super uber detailed on a 1080p screen, right? Sure, then memory will become a problem.
That's Billy Madison level ranting there. May god have mercy on all our souls.
 

Leeea

Diamond Member
Apr 3, 2020
3,625
5,368
136
Why would game developers make games that would trash the majority of their customers? Not everyone will be running 3080s or 6800s; in fact the majority won't for a good number of years.

Godfall and all these games will run just fine on medium settings for nearly everyone. They are not losing sales because someone with a midrange card has to use midrange graphics options.

But who buys a $700+ video card to run a game at medium settings?
 

CakeMonster

Golden Member
Nov 22, 2012
1,391
498
136
I wish all cards would simply have insane amounts of memory, but I also blame game studios for creating more confusion than utility by adding options that have no perceivable difference but tank performance. There's just no use, not even for enthusiasts. If a 2009 game had offered less-compressed textures as an Ultra High option, it would still look dated by now, and it wouldn't have mattered then or now.
 

cherullo

Member
May 19, 2019
41
85
91
Going back to my #6 point: for a 4K screen (3840*2160), an uncompressed image that size at 32 bits of color per pixel is 3840*2160*32 = 265,420,800 bits = 33,177,600 bytes = 33.2MB. In 1GB, you can hold almost 31 of these images. That's uncompressed. Are game developers so unoptimized that they're just using graphics assets without thinking about memory, optimization, and reuse?
I guess I could make a game where each of my on-screen characters uses a unique texture file, where I need the entire file even if only a fraction will be displayed. Each inanimate object also uses distinct textures. Instead of 4K, why not use 16K quality assets? That will make it look super uber detailed on a 1080p screen, right? Sure, then memory will become a problem.

Games do compress textures, of course, and employ a lot of different tricks to save memory. The DirectX preferred texture formats are DXT1 for RGB textures and DXT5 for RGBA textures. A 4096x4096 texture consumes, respectively, 10.7MB and 21.3MB, including the generated mipmaps. But, games use textures for many things, even on the same object: color textures, normal maps, smoothness/metallic maps, subsurface scattering maps, ambient occlusion maps, detail textures, etc. Some of those can be combined in a single texture, but it's not uncommon for games to use 3 or more textures for a single object. Every object needs those maps, since they're crucial to make the object react realistically with the scene's light sources.
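
As a rough sketch of that math (the 8-byte DXT1 and 16-byte DXT5 block sizes and the ~4/3 mipmap factor are the standard values; actual drivers may add some padding):

Code:
# Compressed texture sizes: DXT1/BC1 stores a 4x4 pixel block in 8 bytes,
# DXT5/BC3 in 16 bytes. A full mipmap chain adds roughly one third on top
# of the base level (1 + 1/4 + 1/16 + ... ~= 4/3).
def dxt_size_mib(width, height, block_bytes, with_mips=True):
    blocks = (width // 4) * (height // 4)
    size = blocks * block_bytes
    if with_mips:
        size = size * 4 // 3
    return size / (1024 * 1024)

print(f"4096x4096 DXT1 + mips: {dxt_size_mib(4096, 4096, 8):.1f} MiB")   # ~10.7
print(f"4096x4096 DXT5 + mips: {dxt_size_mib(4096, 4096, 16):.1f} MiB")  # ~21.3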

Games also generate many large textures at runtime as temporary buffers, each frame, for effects like shadows, motion blur, depth of field, ambient occlusion, etc. Those are compressed just to save bandwidth, but still consume just as much memory as uncompressed textures.
And then there is geometry and animation data, which also stays on VRAM, doubly so when RT is used. And also the data used to simulate particle effects/NPCs on the GPU. The list of uses for VRAM can be quite large!
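
To give a feel for those per-frame buffers, here's a hypothetical 4K budget; the list of render targets and their formats below is made up purely for illustration, not taken from any particular engine:

Code:
# Hypothetical per-frame render-target budget at 4K for a deferred renderer.
# Target list and pixel formats are illustrative assumptions only.
WIDTH, HEIGHT = 3840, 2160

targets_bytes_per_pixel = {
    "G-buffer (4x RGBA8)":    4 * 4,
    "HDR color (RGBA16F)":    8,
    "Depth/stencil (D32S8)":  5,
    "Motion vectors (RG16F)": 4,
    "Ambient occlusion (R8)": 1,
}

screen_total = sum(bpp * WIDTH * HEIGHT for bpp in targets_bytes_per_pixel.values())
shadow_map = 4096 * 4096 * 4   # one 4096x4096 32-bit shadow map

print(f"Screen-sized targets: {screen_total / 2**20:.0f} MiB")   # ~269 MiB
print(f"Shadow map: {shadow_map / 2**20:.0f} MiB")               # 64 MiB

That alone is a few hundred MiB before any textures, geometry, or ray-tracing acceleration structures are counted.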

Anyway, next-gen consoles will be able to swap textures much faster than current PCs, so PCs will have to hold more textures in memory to compensate or offer worse image quality.
And that's another reason why nVidia messed up their line-up: a 500USD console will offer better image quality than a 700USD nVidia graphics card plus the added cost of a whole computer, just because they skimped on memory.
The same will be true when comparing Ampere to the corresponding Navi cards. The 3090 aside, the current Ampere line-up will age like fine milk. And don't get me wrong, I got a Radeon 5700, it's also fine milk.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
That's Billy Madison level ranting there. May god have mercy on all our souls.
Instead of responding logically, you just go down the path of name calling right?

Godfall and all these games will run just fine on medium settings for nearly everyone. They are not losing sales because someone with a midrange card has to use midrange graphics options.

But who buys a $700+ video card to run a game at medium settings?
There's no evidence that the game is memory-size limited though. Just because a game runs at low FPS doesn't automatically mean it's hitting a memory size limit.

Anyway, next-gen consoles will be able to swap textures much faster than current PCs, so PCs will have to hold more textures in memory to compensate or offer worse image quality.
And that's another reason why nVidia messed up their line-up: a 500USD console will offer better image quality than a 700USD nVidia graphics card plus the added cost of a whole computer, just because they skimped on memory.
That's not reality though. Look at DF's latest video on WD ray tracing: the console settings are lower than the lowest possible settings on the PC, and it runs worse than a 2060 Super when the PC is configured the same (the 2060 Super even comes out slightly ahead). Yeah, first-generation game, optimization, blah blah; the same could be said about the PC side. Look at the latest PC enhancements, like SAM (SAM for everyone from NV???). The consoles only have 16GB of shared memory, and the Series S has only 10. PCs are also getting really fast PCIe 4.0 drives. The Xbox Series/PS5 are never going to catch 3080/6800-level performance.

Even in the case where the ultra-high texture setting exceeds the VRAM size, you can still run the rest of the settings at their highest with just the texture quality dialed down one level. You're not going to lose all your GPU processing power just because you have to dial down the textures. Memory is not free, in both cost and power. You need to constantly feed power to RAM, so if you're not using it, it's just wasted power that could have gone to the GPU, and wasted money. My 3070 is idling at 22W, but less than 4W of that is from the GPU. Why do you think we used to mock all those crappy low-end cards with large VRAM? Having 16GB or 24GB of VRAM for current games, and probably for the next 3 years, is a waste 99% of the time. When the next graphics-killer PC game comes out, it's not the VRAM size that will make your 3080 obsolete; you'll need a better architecture. All a game has to do is double the rays used and everything current will perform like crap, whether you have 10GB or 24GB. Future-proofing with GPUs has never worked. The next gen will be at least 40% faster (probably much more for RT), and out in probably less than 2 years.
 

psolord

Golden Member
Sep 16, 2009
1,916
1,194
136
Games do compress textures, of course, and employ a lot of different tricks to save memory. The DirectX preferred texture formats are DXT1 for RGB textures and DXT5 for RGBA textures. A 4096x4096 texture consumes, respectively, 10.7MB and 21.3MB, including the generated mipmaps. But, games use textures for many things, even on the same object: color textures, normal maps, smoothness/metallic maps, subsurface scattering maps, ambient occlusion maps, detail textures, etc. Some of those can be combined in a single texture, but it's not uncommon for games to use 3 or more textures for a single object. Every object needs those maps, since they're crucial to make the object react realistically with the scene's light sources.

Games also generate many large textures at runtime as temporary buffers, each frame, for effects like shadows, motion blur, depth of field, ambient occlusion, etc. Those are compressed just to save bandwidth, but still consume just as much memory as uncompressed textures.
And then there is geometry and animation data, which also stays on VRAM, doubly so when RT is used. And also the data used to simulate particle effects/NPCs on the GPU. The list of uses for VRAM can be quite large!

Anyway, next-gen consoles will be able to swap textures much faster than current PCs, so PCs will have to hold more textures in memory to compensate or offer worse image quality.
And that's another reason why nVidia messed up their line-up: a 500USD console will offer better image quality than a 700USD nVidia graphics card plus the added cost of a whole computer, just because they skimped on memory.
The same will be true when comparing Ampere to the corresponding Navi cards. The 3090 aside, the current Ampere line-up will age like fine milk. And don't get me wrong, I got a Radeon 5700, it's also fine milk.

You should watch Digital Foundry's Watchdogs Legion study, to see what kind of quality consoles give.


Long story short, there is no way to set the exact same console settings through the settings options on the PC. They actually took the configuration files from the Series X and used them on the PC.

The result? The PC still looks better, and the Series X performs worse than an RTX 2060 Super.

[attached image: watchdogslegionpccomparison.png]

Every console generation it's the same crap. You buy second-rate hardware with limited uses, and you buy it cheap.

edit: Oh wow the guy above said the exact same thing. Didn't read before posting, but glad people are informed.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
Instead of responding logically, you just go down the path of name calling right?
You're arguing that games aren't limited by memory, that even if they are, just turning down the settings won't make a difference to IQ, and that it's just lazy devs who can't properly compress assets. You then trot out a bunch of inane math that apparently springs from the mistaken belief that VRAM is some sort of glorified frame buffer.

What part of that message requires a detailed rebuttal?
 

kondziowy

Senior member
Feb 19, 2016
212
188
116
Don't mind me, just looking for my daily dose of drama.
[attached image: 5ZkKqsv.jpg]
8GB cards swapping around 1GB of textures between RAM and VRAM at 1440p. Production of RTX3080 20GB is ramping up as we speak. Or I guess we can learn to always close all other apps when playing a game :)
 

DJinPrime

Member
Sep 9, 2020
87
89
51
You're arguing that games aren't limited by memory, that even if they are, just turning down the settings won't make a difference to IQ, and that it's just lazy devs who can't properly compress assets. You then trot out a bunch of inane math that apparently springs from the mistaken belief that VRAM is some sort of glorified frame buffer.

What part of that message requires a detailed rebuttal?
Which current game actually shows more than 10GB of memory usage? The "inane math" is to show what needs to happen for games to grow VRAM usage if it's purely a texture issue, since that's what everyone is talking about. So, to use that extra memory, you either have to do something stupid like my "inane" example, or you need to do more calculations and effects. Guess what? Special effects like RT are going to tax the processing units and fps will fall. That's my point: you're worrying about the wrong thing. 10GB is not your problem when the next graphics-killer game comes out. Dialing the texture quality down one notch is the least of your problems, and it's the simplest solution.

Don't mind me, just looking for my daily dose of drama.
[attached image: 5ZkKqsv.jpg]
8GB cards swapping around 1GB of textures between RAM and VRAM at 1440p. Production of RTX3080 20GB is ramping up as we speak. Or I guess we can learn to always close all other apps when playing a game
Lol, are you going to give more context, or are you just trying to create drama? The thing you underlined said "is left for your OTHER applications to run." Why do I care about other applications that might need to use VRAM when I'm gaming? Also, RAM and VRAM swap all the time... unless you're using more than 8GB for the current frame (and the next few?), what's the problem? Look at game configs; most will have a stream buffer setting.

To me, the disappointing thing about the current-gen GPUs is not the memory size, though that seemed like the easy target. It's really that they're not as fast as hyped for RT. Take the 3070, since it matches the 2080 Ti pretty much exactly. Even in RT-heavy games it just barely beats the 2080 Ti, so where exactly is the improvement? I love the drop in price, but I expected more on the technology side.
 
  • Haha
Reactions: spursindonesia

cherullo

Member
May 19, 2019
41
85
91
You should watch Digital Foundry's Watchdogs Legion study, to see what kind of quality consoles give.


Thanks for the link, Digital Foundry is awesome.
The video shows that 2060 Super cards have better raytracing than the consoles on this particular game. This is fine, Watch Dogs Legion is a ray-tracing showcase for nVidia where every surface is reflective on purpose.
Those reflections are really beautiful on PC, but overall the game is not, just look at those ugly pipes everywhere. It doesn't hold a candle to the Unreal 5 demo, and it shows: as said in the DF video, WD:L is built on top of the same old engine as the previous games.
It's certainly not using Mesh Shaders and Sampler Feedback like the U5 Demo. Those, combined with a fast SSD on an optimized path to GPU memory, are the real game changers exclusive to the consoles for now. Just watch the Mark Cerny PS5 presentation again, when he talks about assets streaming. Once the usage of this kind of technique becomes common, current PCs will have to adapt somehow.
Let me put it another way: the launch of the new consoles is one of those very few events where we are guaranteed to see games using more VRAM going forward, and the best nVidia could do at the 700USD price point and below was LESS memory than the 3-year-old 1080 Ti.

Now, about saying that consoles are second-rate hardware... maybe Nintendo consoles, because the Series X and the PS5 are quite something. The Series X consumes as much power as a 2060 Super alone (around 160W), while including 8 Zen 2 cores (while not the latest, they're quite recent considering that game consoles have a long gestation period) plus a really fast SSD, chipset, and a GPU that AFAWK loses to a 2060S on RT but is pretty close to a 2080 on rasterisation. Not bad really for a fixed target that will be optimized to death in the coming years.

So, the question then is how are PCs going to implement this same kind of asset streaming? Having more VRAM and being more aggressive at prefetching data? Having PCIe SSDs do direct memory transfers to GPU memory using SAM? Using lower quality assets?
If you can upgrade your computer every year then this is not a problem, and I'd be glad for you. Unfortunately I know I can't upgrade so often, so I'd be really p***ed to spend 700USD on a video card just to end up using lower quality assets.
 

Leeea

Diamond Member
Apr 3, 2020
3,625
5,368
136
So, the question then is how are PCs going to implement this same kind of asset streaming?

PCs have DMA (Direct Memory Access) for PCIe devices. That means NVMe (i.e. PCIe) SSDs can already transfer directly into a PC's main memory. Any PCIe GPU can then pull that asset directly out of main memory, without having to run it through the CPU. Any PC with 16 GB of main memory* + 16 GB of VRAM can stream assets every bit as well as any next-gen console.

*Remember, a console's memory is shared, so while it skips the extra step of transferring to the GPU's memory once, it has to contend with the CPU & GPU demanding reads from it the whole time while simultaneously moving these assets around.

Having more VRAM and being more aggressive at prefetching data?

As long as the PC has at least 16 GB of VRAM and 16 GB of main RAM, it will not need to prefetch any more than a console.

Having PCIe SSDs do direct memory transfers to GPU memory using SAM?

SAM does nothing for consoles with their shared memory systems.

SAM (Smart Access Memory) is a useful feature on PC which allows the GPU to obtain assets without burdening the PC's main system memory. But in AMD's own slides from their marketing presentation, SAM + Rage Mode was only good for 4% to 5%.


Using lower quality assets?
Provided said PC has at least 16 GB of VRAM, it will never need to use lower quality assets vs a next-gen console.
 
Last edited: