Godfall is running at 70-100fps at 4K Ultra/Epic on the 3080. So it's OK I guess.
It runs at 1440p/60 or 4K/30 on the PS5 I think, so it's a-ok...
That is without ray tracing, which will come later.
I see Tarkov on this page mentioned. What are the other 2 please? Just curious.
The stuttering is probably not memory related, since the memory usage shows it to be around 8GB. If they turn on ray tracing, the FPS is going to tank way below 60. At that point, you either have to start lowering settings or use DLSS. So, less VRAM needed.

There's some pretty big stutters in there, many right in the middle of combat. I'm not saying it's memory related, but it would be interesting to test frame times against a GPU with more VRAM to see if the stutters go away or not. I also wonder if there are other levels with larger areas, as that will obviously increase VRAM usage. Ray tracing tends to increase VRAM usage as well. It would be great if 10 GB is sufficient, but I tend to trust the developers who have already characterized the performance and VRAM usage in their own game. But we'll see once it gets tested more thoroughly.
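If someone did want to chase that down, one low-effort way is to log frame times (Afterburner and CapFrameX can export them) and flag the spikes, then compare spike counts between cards. A minimal sketch of that idea; the file name and the 2.5x-median threshold are arbitrary choices, not anything the posters above used:

```python
# Minimal sketch: flag frame-time spikes ("stutters") in a logged run.
# Assumes a plain text file with one frame time in milliseconds per line;
# the file name and the 2.5x-median threshold are made up for illustration.
from statistics import median

def find_stutters(frame_times_ms, spike_factor=2.5):
    """Return (frame index, frame time) pairs exceeding spike_factor x median."""
    typical = median(frame_times_ms)
    return [(i, ft) for i, ft in enumerate(frame_times_ms)
            if ft > spike_factor * typical]

if __name__ == "__main__":
    with open("frametimes_10gb_card.txt") as f:   # hypothetical log file
        times = [float(line) for line in f if line.strip()]
    spikes = find_stutters(times)
    print(f"median frame time: {median(times):.2f} ms")
    print(f"{len(spikes)} spikes out of {len(times)} frames")
```

Run the same pass on a log from a card with more VRAM; if the spike count drops at identical settings, that points toward memory pressure rather than the GPU core.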
MS Flight Sim (ouch)
and GodFall*
*GodFall is associated with AMD's marketing department, to show off RDNA2 on its next gen graphics cards and consoles. It uses 12 GB of VRAM. The resources are not wasted**, but rather the textures, view distance, and scene complexity are designed to use 12 GB on the high graphics setting. GodFall is expected to run smoothly on console.
**looking at you Nvidia gameworks
1, the drop off is not that significant if I remember from the DF video, especially for a game that's running well over 100 fps.

You are also going to encounter performance drop-offs in Doom Eternal at the 4K Ultra Nightmare setting, as it uses more video memory.
Never played those games, but just looking at random YouTube videos of those 2 games with Afterburner running does not show them to be memory problems. Just watch one for SC running 4K with a 2080 Ti and only showing 7GB allocated with fps under 40.

#2: well, in e.g. Tarkov you DO notice a difference, trust me. Even I run maxed textures on a card 2x slower than the 3080, because medium looks ugly.
And what about Star Citizen? It required insane amounts of VRAM like a year ago? Something changed?
All I see is a constant 11.5GB VRAM allocation on an RTX 3090, and I know 8GB was not enough for this game already 2 years ago, so I'm not talking to you anymore, because you see what you want to see. And I don't care about development, I care about playing. Bye.
1, the drop off is not that significant if I remember from the DF video, especially for a game that's running well over 100 fps.
2, then turn the setting down 1, you wouldn't notice the difference.
3, it might not be worth the money to get a higher tier of video card for the extra memory, because of 1 & 2.
4, most games are not near 8GB actual usage, just look at Afterburner and in-game info. Even Godfall, don't buy the marketing, the game in its current state does not require 16GB. Why would game developers make games that would trash the majority of their customers? Not everyone will be running 3080s or 6800s GPUs, in fact the majority won't for a good number of years.
5, you can wait for the AMD or NV refresh, but you'll be waiting for a while due to supply issues. There's always something better coming out, by the time these cards are widely available, people will be hyping up rdna3 & rtx 40 cards. So, do you wait more?
6, something that I don't think gets talked about much. 1GB holds a lot of textures, what the heck are the developers doing that they need constant streaming of multiple GBs of video assets? Nothing happens on screen that would suddenly require a massive new amount of graphics assets. And for you AMD fans, that magical Infinity Cache is only 128MB, doesn't that tell you that for the majority of cases, the GPU is only working with MBs of data? And for a cache to be worth it, the same data has to stay in the cache. 1st level caches are in KB, and 2nd level is just a few MB in size. 2nd level and higher caches are shared by all the processing units. That should tell you something.
That's Billy Madison level ranting there. May god have mercy on all our souls.
Going back to my #6 point: a 4K screen (3840*2160), if you had an uncompressed image that size at 32 bits of color per pixel = 3840*2160*32 = 265,420,800 bits = 33,177,600 bytes = 33.2MB. For 1GB, you can hold about 30 of these images. That's uncompressed. Are game developers so unoptimized that they're just using graphic assets without thinking about memory, optimization, and reuse?
I guess I can make a game where each of my on-screen characters uses a unique texture file where I need the entire file even if only a fraction will be displayed. Each inanimate object also uses distinct textures. Instead of 4K, why not use 16K quality assets? That will make it look super uber detailed on a 1080p screen, right? Sure, memory will become a problem.
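For what it's worth, the arithmetic above is easy to sanity-check; here's the same calculation spelled out (numbers only, nothing game-specific):

```python
# Sanity check of the arithmetic above: one uncompressed 3840x2160 image at
# 32 bits per pixel, and how many of them fit in 1 GB (10^9 bytes).
width, height, bpp = 3840, 2160, 32

bits_per_image = width * height * bpp            # 265,420,800 bits
bytes_per_image = bits_per_image // 8            # 33,177,600 bytes (~33.2 MB)
images_per_gb = 1_000_000_000 / bytes_per_image  # ~30

print(f"{bytes_per_image:,} bytes (~{bytes_per_image / 1e6:.1f} MB) per image")
print(f"~{images_per_gb:.1f} uncompressed 4K images per GB")
```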
Instead of responding logically, you just go down the path of name calling, right?
There's no evidence that game is memory size limited though. Just because a game runs with low FPS doesn't automatically mean a memory size limit.

Godfall and all these games will run just fine on medium settings for nearly everyone. They are not losing sales because someone with a midrange card has to use midrange graphics options.
But who buys a $700+ video card to run a game at medium settings?
That's not reality though. Look at DF's latest video on Watch Dogs Legion ray tracing: the consoles' settings are lower than the lowest possible setting on the PC, and it's running worse than a 2060 Super when the PC is configured the same (slightly better, even). Yeah, first-generation game, optimization, blah blah; the same could be said about the PC side. Look at the latest PC enhancements, like SAM (SAM for everyone from NV???). The consoles only have 16GB of shared memory, and the Series S has only 10. PCs are also getting really fast PCIe 4 drives. The Xbox Series consoles and the PS5 are never going to catch 3080/6800 level performance.
Games do compress textures, of course, and employ a lot of different tricks to save memory. The DirectX preferred texture formats are DXT1 for RGB textures and DXT5 for RGBA textures. A 4096x4096 texture consumes, respectively, 10.7MB and 21.3MB, including the generated mipmaps. But, games use textures for many things, even on the same object: color textures, normal maps, smoothness/metallic maps, subsurface scattering maps, ambient occlusion maps, detail textures, etc. Some of those can be combined in a single texture, but it's not uncommon for games to use 3 or more textures for a single object. Every object needs those maps, since they're crucial to make the object react realistically with the scene's light sources.
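Those 10.7 MB / 21.3 MB figures are easy to reproduce: DXT1 stores 4 bits per pixel, DXT5 stores 8, and a full mipmap chain adds roughly a third on top of the base level. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the texture sizes mentioned above:
# a 4096x4096 texture in DXT1 (4 bits/pixel) vs DXT5 (8 bits/pixel),
# with a full mipmap chain adding about 1/3 over the base level.
def compressed_texture_mib(size_px, bits_per_pixel, with_mips=True):
    base_bytes = size_px * size_px * bits_per_pixel / 8
    total_bytes = base_bytes * 4 / 3 if with_mips else base_bytes
    return total_bytes / (1024 ** 2)

print(f"DXT1 4096x4096 + mips: {compressed_texture_mib(4096, 4):.1f} MiB")  # ~10.7
print(f"DXT5 4096x4096 + mips: {compressed_texture_mib(4096, 8):.1f} MiB")  # ~21.3
```

Multiply by three or more maps per object, and by however many unique objects are in view, and the totals climb quickly.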
Games also generate many large textures at runtime as temporary buffers, each frame, for effects like shadows, motion blur, depth of field, ambient occlusion, etc. Those are compressed just to save bandwidth, but still consume just as much memory as uncompressed textures.
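To put a rough number on the render-target side, here's a tally of a handful of full-resolution buffers at 4K. The buffer list is a made-up example of a deferred-style setup, not any particular game's configuration:

```python
# Illustrative only: VRAM consumed by a few full-resolution render targets
# at 4K. The buffer list and formats below are hypothetical.
WIDTH, HEIGHT = 3840, 2160

render_targets = {          # buffer name -> bytes per pixel
    "albedo (RGBA8)": 4,
    "normals (RGBA16F)": 8,
    "depth (D32)": 4,
    "HDR color (RGBA16F)": 8,
    "motion vectors (RG16F)": 4,
    "ambient occlusion (R8)": 1,
}

total_bytes = sum(bpp * WIDTH * HEIGHT for bpp in render_targets.values())
print(f"~{total_bytes / 1e6:.0f} MB just for these per-frame buffers")  # ~241 MB
```

And engines often keep several more intermediate targets around (bloom chains, shadow maps, history buffers for TAA), so a figure like this is a floor, not a ceiling.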
And then there is geometry and animation data, which also stays on VRAM, doubly so when RT is used. And also the data used to simulate particle effects/NPCs on the GPU. The list of uses for VRAM can be quite large!
Anyway, next-gen consoles will be able to swap textures much faster than current PCs, so PCs will have to hold more textures in memory to compensate or offer worse image quality.
And that's another reason why nVidia messed up their line-up: a 500USD console will offer better image quality than a 700USD nVidia graphics card plus the added cost of a whole computer, just because they skimped on memory.
The same will be true when comparing Ampere to the corresponding Navi cards. The 3090 aside, the current Ampere line-up will age like fine milk. And don't get me wrong, I got a Radeon 5700, it's also fine milk.
You're arguing that games aren't limited by memory, and even if they are, just turning down the settings won't make a difference to IQ, and it's just lazy devs that can't properly compress assets. You then trot out a bunch of inane math that apparently springs from the mistaken belief that VRAM is some sort of glorified frame buffer.
Which current game actually shows more memory usage than 10GB? The "inane math" is to show what needs to happen for games to grow VRAM usage if it's purely a texture issue, since that's what everyone is talking about. So, to use that extra memory, you either have to do stupid things like my "inane" example, or you need to do more calculations and effects. Guess what? Special effects like RT are going to tax the processing units and fps will fall. That's my point: you're worrying about the wrong thing. 10GB is not your problem when the new graphics-killer game comes out. Dialing the texture quality down is the least of your problems and the simplest solution.
What part of that message requires a detailed rebuttal?
Lol, are you going to give more context, or are you just trying to create drama? The thing you underlined said it "is left for your OTHER applications to run." Why do I care about other applications that might need to use VRAM when I'm gaming? Also, RAM and VRAM swap all the time... unless you're using more than 8GB for the current frame (and the next few?), what's the problem? Look at game configs; most will have a stream buffer setting.

Don't mind me, just looking for my daily dose of drama.
8GB cards swapping around 1GB of textures between RAM and VRAM at 1440p. Production of the RTX 3080 20GB is ramping up as we speak. Or I guess we can learn to always close all other apps when playing a game.
You should watch Digital Foundry's Watchdogs Legion study, to see what kind of quality consoles give.
So, the question then is how are PCs going to implement this same kind of asset streaming?
Having more VRAM and being more aggressive about prefetching data (a rough sketch of that idea follows after this list)?
Having PCIe SSDs do direct memory transfers to GPU memory using SAM?
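To make the first option concrete, here's a toy sketch of a VRAM residency set with a fixed budget, LRU eviction, and a speculative prefetch hook. Everything in it (names, sizes, the prediction step) is hypothetical; it's the shape of the idea, not how any engine or DirectStorage actually works:

```python
# Toy sketch of "hold more in VRAM and prefetch aggressively": a residency
# set with a fixed byte budget and least-recently-used eviction.
# All asset names and sizes below are hypothetical.
from collections import OrderedDict

class TextureResidency:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.resident = OrderedDict()   # asset name -> size in bytes

    def request(self, name, size):
        """Ensure an asset is resident, evicting least-recently-used entries."""
        if name in self.resident:
            self.resident.move_to_end(name)      # mark as recently used
            return
        while self.resident and sum(self.resident.values()) + size > self.budget:
            self.resident.popitem(last=False)    # evict the oldest entry
        self.resident[name] = size               # "upload" to VRAM

    def prefetch(self, predicted_assets):
        """Speculatively load assets the game expects to need soon."""
        for name, size in predicted_assets:
            self.request(name, size)

vram = TextureResidency(budget_bytes=8 * 1024**3)       # e.g. an 8 GB card
vram.request("street_albedo_4k", 21 * 1024**2)
vram.prefetch([("alley_albedo_4k", 21 * 1024**2)])      # guessed next area
print(len(vram.resident), "assets resident")
```

A bigger budget just means fewer evictions and fewer mid-frame uploads; faster storage (the second option) shrinks the penalty when a prediction misses.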
Using lower quality assets?

Provided said PC has at least 16 GB of VRAM, it will never need to use lower quality assets vs a next gen console.