I guess the question is whether, if you have a 1080 Ti and haven't upgraded in 3 years, the 3090 is worth the price and will sustain that same timeframe?
I'm tempted to avoid trying to order on launch day just to see reviews.
> Yea I'm willing to wait just for reviews but don't think I'll want to wait 6 months or more. I know I want to get a new card in my case by the time cyberpunk launches.

Agreed, waiting to see what AMD comes out with and actual benchmarks might be the best play here.
I think there will be something between a 3080 and 3090. However, I doubt it will be called the Ti. Traditionally, the performance difference between Ti and non-Ti is about 30%; a hypothetical card that sits in between the 3080 and 3090 would likely be only 10% faster. Maybe it will be called the 3080 Super.
Yea I'm willing to wait just for reviews but don't think I'll want to wait 6 months or more. I know I want to get a new card in my case by the time cyberpunk launches.
> I can assure you I'm not the only one wanting to play cyberpunk. I had just told myself I should probably upgrade before it comes out because there's no way I'll be able to play it max settings with my current card.

If Cyberpunk is the game you want to play, Nvidia has surely got your wallet with DLSS already.
I'm actually waiting to see the people who jumped on the 8 GB 3070s & 10 GB 3080s regret their purchase when AMD announces 16 GB cards and Nvidia counters with 16 GB 3070s and 20 GB 3080s. To me, the next gen card purchase for 4K gaming needs to have 16 GB or more.
Nvidia touts their superior texture compression, so if this is in regards to the 3080, then I think yes.
Literally every GPU supports texture compression.
> I'm actually waiting to see the people who jumped on the 8 GB 3070s & 10 GB 3080s regret their purchase when AMD announces 16 GB cards and Nvidia counters with 16 GB 3070s and 20 GB 3080s. To me, the next gen card purchase for 4K gaming needs to have 16 GB or more.

I'm waiting to see the 3070 8 GB beat the 2080 Ti 11 GB in all 4K titles without a hitch for its 2-year run before Hopper/RDNA3 arrive. I have no doubt VRAM requirements will increase, but currently capacities are set more for marketing purposes and memory-bus either/or limitations. The 3090 has 24 GB instead of 12 GB just to be "more than" Big Navi's 16 GB. Big Navi has 16 GB because they can't sell an 8 GB flagship vs Nvidia's cards. Consoles have 16 GB because they are in for the long haul (6-7 years), which is a good design choice for their life span. I could be wrong that 8 GB won't last the 2 years before 2022 hardware arrives, but we'll see. Also, in the meantime we will have a good chance to test out the VRAM theories with the 8/16 GB and 10/20 GB versions of the 3070 and 3080, as that can only be done with otherwise identical cards.
> Raster performance requirements seem to have been slowing down lately.

With two new relatively powerful consoles getting released, I'm not sure if that will stay true.
> I'm wondering about that 10 GB as well. My trusty 1070 gave me 5 years and it would be nice if the 3080 would do the same. I've been back and forth on waiting for AMD but I will probably try to buy a FE on launch. I game at 1440p/165 Hz anyway and need to play Cyberpunk 2077

That's right, all the new shiny things come out just in time for assembly before Cyberpunk!
There's always lowering settings .. I often use mostly low/medium settings on newer games with my 1070 and don't mind.
Yah, and this is what I'm getting at. The high end GPU market (4K, 3070+) is tiny...like super-tiny. If the majority of the cards out there (over 70%) are 8GB, that's going to be the target. They will not develop games that limit their sales to 1% of the market...unless they are complete business noobs.
Wouldn't the overhead of DirectStorage just be hundred(s) of MB? You only need a buffer big enough to decompress one whole asset at minimum, right? Going back to your project: if you had DirectStorage, would you still need to load all the assets of the level? You're doing that currently because of the performance hit during gameplay if you have to load new assets, right? But if DirectStorage takes out the CPU/RAM path, loading an asset should have minimal impact on gameplay. So, instead of loading all the assets at the start of the level, you could load 1/4 (for example) and have a thread stream in new assets as needed. The PS5 Ratchet and Clank demo shows the ability to load completely different levels seamlessly. To me, this indicates that VRAM usage will be much more efficient and smarter going forward.

This is more or less a question that depends on the actual memory management. Things got rough with Vulkan and D3D12, but AMD provides two middlewares, and these are a huge help. With VMA and D3D12MA the memory management problems are not as bad as they were two or three years ago. Normally these middlewares just work out of the box, a lot of next-gen engines use them, and they are also a very good stepping stone to creating a custom management solution. For example, I'm working on an unannounced project, and in the engine we use VMA. We modified the original code to create a very custom streaming solution, so our game won't have a texture quality option anymore. When it first loads the map, it will try to load the high-quality assets, and if there is not enough VRAM, the streaming algorithm just changes some of the loaded textures to medium or low resolution; there is also a defragmentation solution for the VRAM. With this concept it is not a problem if a GPU has only 4-6-8-10 GB of VRAM. The performance will always be good, but the picture quality will be different.
Obviously there is a target for us, and in our tests 16 GB of VRAM is needed to always load the high-quality assets. This is guaranteed; we checked it with a Radeon VII. The RTX 3080 works very well with 10 GB of VRAM, but some loaded assets will be medium or low quality, because there is not enough memory to load the high-quality versions. Still, we are very happy with the performance we see, and it is hard to notice the poorer picture quality. If you don't search intentionally for the low-quality assets, you will probably miss them.
If it's important to get the absolute best quality, then sure, go for a 16 GB card, but 10 GB won't limit the performance. We know that these cards exist, and we try to solve their limitations in clever and user-friendly ways, not just with some popup message in the options menu that warns the user to lower the texture quality.
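The fallback scheme described above can be sketched roughly like this (all sizes, names, and the three quality tiers are invented for illustration; the actual engine builds this on top of VMA, not plain Python):

```python
# Hypothetical sketch of quality-fallback streaming: try each asset at the
# highest quality first, and step down until it fits the remaining VRAM budget.

QUALITY_LEVELS = ("high", "medium", "low")  # tried in this order

def load_assets(assets, vram_budget):
    """Pick the best quality per asset that still fits the budget.

    `assets` maps asset name -> {quality: size_in_bytes}.
    Returns (chosen quality per asset, total bytes used).
    """
    used = 0
    chosen = {}
    for name, sizes in assets.items():
        for quality in QUALITY_LEVELS:
            needed = sizes[quality]
            if used + needed <= vram_budget:
                chosen[name] = quality
                used += needed
                break
        else:
            raise MemoryError(f"{name} does not fit even at low quality")
    return chosen, used

# With a large budget everything loads high quality; with a smaller one,
# later assets are silently downgraded instead of failing or asking the user.
assets = {
    "rock":  {"high": 400, "medium": 200, "low": 100},
    "tree":  {"high": 400, "medium": 200, "low": 100},
    "cliff": {"high": 400, "medium": 200, "low": 100},
}
print(load_assets(assets, vram_budget=1000))
# → ({'rock': 'high', 'tree': 'high', 'cliff': 'medium'}, 1000)
```

The real thing would also need the defragmentation pass mentioned above and re-evaluation as assets stream in and out, but the core idea is the same: degrade quality per asset instead of exposing a global texture-quality option.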
As for DirectStorage ... this is not a good thing for cards with lower VRAM capacity. This API will solve the decompression problem on the CPU side, but to do that, it will increase the memory footprint inside the VRAM. So with DirectStorage the VRAM pressure will be much, much worse.
It also has several other issues; for example, the GPU caches must be flushed after every read from the SSD. On a marketing slide this might look like a good feature, but it won't work well on today's GPUs. It will work on the Xbox Series S/X and the PS5, but those consoles are designed around this feature, and that is a huge difference.
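To make the footprint argument concrete, here is a back-of-the-envelope model (all numbers hypothetical, not from any actual DirectStorage implementation): if decompression moves to the GPU, the compressed data for every in-flight read plus its decompressed output needs staging room in VRAM, memory that the CPU path would have kept in system RAM.

```python
# Rough, hypothetical model of the extra VRAM that GPU-side decompression
# staging could require. None of these numbers come from a real spec.

MIB = 1024 * 1024

def gpu_staging_bytes(in_flight_reads, compressed_chunk, ratio):
    """Extra VRAM: a compressed staging buffer for each in-flight read,
    plus a decompressed output buffer (compressed size * ratio) per read."""
    per_read = compressed_chunk + compressed_chunk * ratio
    return in_flight_reads * per_read

# Example: 32 reads in flight, 2 MiB compressed chunks, 2:1 compression.
extra = gpu_staging_bytes(32, 2 * MIB, 2)
print(f"{extra / MIB:.0f} MiB of extra VRAM")  # → 192 MiB
```

Under these made-up numbers the overhead is indeed "hundred(s) of MB", as the question above guessed, which is modest on a 24 GB card but a real bite out of 8-10 GB.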