Are you talking about RTX IO?

I can't imagine this being a problem within 2 years (definitely a new generation by then, with lots more RAM and a smaller process). 3-4 years is probably safe too, but getting harder to predict. If you are on a 1-2 year upgrade cycle anyway, there's no reason not to get the 3080, if you need/want the performance, that is.
Can we clear up compression once and for all? I seem to remember the 980 or possibly the 1080 had a compression scheme that suddenly improved bandwidth but that's not what we're talking about here? I see tons of people making outrageous claims about this new memory acting like twice the amount. Even I understand that that's bunk, but has there been any real reduction whatsoever of the actual space assets take up while in GPU RAM, and is that even possible?
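For what it's worth, the scheme you're thinking of (Maxwell/Pascal-era delta color compression) is a lossless bandwidth optimization: the GPU stores differences between neighboring pixels so fewer bytes move over the bus, but the buffer in VRAM is still allocated at its full uncompressed size. A toy sketch of the idea (not NVIDIA's actual algorithm, just the principle):

```python
# Toy illustration of delta color compression: smooth image regions have
# tiny pixel-to-pixel differences, so transmitting deltas needs far fewer
# bits than raw 8-bit values -- but the VRAM allocation stays full-size.

def delta_encode(row):
    """Store the first value raw, then successive differences."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

row = [100, 101, 101, 102, 104, 104, 105, 107]  # a smooth gradient
deltas = delta_encode(row)
print(deltas)  # [100, 1, 0, 1, 2, 0, 1, 2]

# Raw transfer: 8 values x 8 bits = 64 bits.
# Delta transfer (hypothetical 2-bit deltas after the first value):
raw_bits = len(row) * 8
delta_bits = 8 + (len(row) - 1) * 2
print(raw_bits, delta_bits)  # 64 22 -> bandwidth saved, footprint unchanged
```

So bandwidth improved, but no, assets never took up less actual space in VRAM, and the "memory acts like twice the amount" claims are indeed bunk.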
Yeah, I might actually have misunderstood that. The storage might need to be PCIe connected, which doesn't necessarily preclude SATA SSDs, but it might not be as simple.
This is the page he linked about GPUDirect storage
GPUDirect Storage: A Direct Path Between Storage and GPU Memory | NVIDIA Technical Blog
As AI and HPC datasets continue to increase in size, the time spent loading data for a given application begins to place a strain on the total application's performance. (developer.nvidia.com)
So, how does RTX IO work for NVMe drives that are directly connected to a CPU x4 interface? The data would still need to be shuttled through the CPU, obviously, but it would still be a benefit, as it can move in compressed form and without the CPU having to process it beyond routing it to the x16 GPU interface?
Are you talking about RTX IO?
I don't believe so. My understanding is that it allows the GPU to directly stream and decompress game data off storage, rather than the CPU handling that and the GPU fetching the decompressed data from system memory. Once the data is on the GPU, it would behave the same as it traditionally does.
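A rough back-of-envelope of why that matters. All the rates below are assumptions for illustration (the NVMe, CPU-decompress, and GPU-decompress throughputs are guesses, not measured figures); the point is just that moving the decompression step to the GPU removes the usual bottleneck:

```python
# Rough model of the two load paths (all rates are illustrative
# assumptions, not benchmarks).
# Traditional: NVMe -> RAM -> CPU decompress -> GPU.
# RTX IO-style: NVMe -> GPU (compressed) -> GPU decompress.

ASSET_GB = 4.0          # compressed size of assets to load (assumed)
RATIO = 2.0             # assumed lossless compression ratio
NVME_GBPS = 7.0         # PCIe 4.0 x4 NVMe sequential read (approx.)
CPU_DECOMP_GBPS = 1.5   # assumed CPU inflate rate
GPU_DECOMP_GBPS = 20.0  # assumed GPU decompression rate

def traditional(asset_gb):
    # read compressed from disk, then the CPU decompresses (the bottleneck)
    return asset_gb / NVME_GBPS + asset_gb * RATIO / CPU_DECOMP_GBPS

def rtx_io(asset_gb):
    # compressed data goes straight to the GPU, which decompresses it
    return asset_gb / NVME_GBPS + asset_gb * RATIO / GPU_DECOMP_GBPS

print(f"traditional: {traditional(ASSET_GB):.2f} s")  # ~5.9 s
print(f"rtx io:      {rtx_io(ASSET_GB):.2f} s")       # ~1 s
```

Under these (made-up) numbers the disk read is identical either way; the win is entirely in who does the decompression.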
I'm wondering about that 10 GB as well. My trusty 1070 gave me 5 years and it would be nice if the 3080 would do the same. I've been back and forth on waiting for AMD but I will probably try to buy a FE on launch. I game at 1440p/165 Hz anyway and need to play Cyberpunk 2077
There's always lowering settings... I often use mostly low/medium settings on newer games with my 1070 and don't mind.
Yeah, I should just retract everything I said. They showed drives behind a RAID controller, but even behind a PCIe RAID controller the drives are still NVMe, which seems to be a requirement for this to work. Pretty sure SATA SSDs, by virtue of being on the SATA interface, are not PCIe-attached drives.
The alternative to NVMe drives is those "old school" enterprise drives that plugged right into a PCIe slot, I am pretty sure.
Burst bandwidth on SATA is way too slow.
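It is, and the gap is easy to quantify. A quick sketch using approximate real-world sequential-read ceilings (the numbers are ballpark interface limits, not benchmarks of any specific drive):

```python
# Back-of-envelope: time to stream 10 GB of assets at each interface's
# approximate practical sequential-read ceiling.

VRAM_GB = 10
rates_gbps = {
    "SATA III SSD": 0.55,      # ~550 MB/s interface ceiling
    "NVMe PCIe 3.0 x4": 3.5,
    "NVMe PCIe 4.0 x4": 7.0,
}

for name, rate in rates_gbps.items():
    print(f"{name}: {VRAM_GB / rate:.1f} s to move 10 GB")
# SATA III SSD: 18.2 s
# NVMe PCIe 3.0 x4: 2.9 s
# NVMe PCIe 4.0 x4: 1.4 s
```

Roughly an order of magnitude between SATA and Gen4 NVMe, before compression is even considered.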
Having Fury X flashbacks with the 3080 10GB. I always keep my cards for 2 or 3 years, so I am not regressing from the 11GB of my 1080 Ti that I use at 4K.
I will wait for the 20GB model or go with AMD if they deliver the same performance with 16GB.
How urgent are you in needing a new flagship GPU? I have no doubt in my mind that there will be a 20 GB version of the 3080, Nvidia shouldn't have wafer supply issues this time around, and Big Navi will come out before the end of the year. By that time, Cyberpunk 2077 will have released, which hopefully is a good representation of your average AAA title for the near future, and we'd also know if a 3080 with 10 GB runs fine on it.

Yeah, the 10GB is really getting me down, to be honest. I must admit, though, I feel like quite an odd creature: 2 days ago I felt my 11GB was still overkill, but suddenly 10GB is trash. That is kind of odd. However, game requirements continue to increase, and this new card is supposed to last the next 2-3 years or so. 10GB feels a little on the light side.
Should it also be said that game developers would have to be out of their minds to require more VRAM than the current higher-end offerings have? Most people will still be on 8GB cards, so a developer blowing past 10GB just to max out their game sounds a little odd on their part as well.
How urgent are you in needing a new flagship GPU? I have no doubt in my mind that there will be a 20 GB version of the 3080, Nvidia shouldn't have wafer supply issues this time around, and Big Navi will come out before the end of the year. By that time, Cyberpunk 2077 will have released, which hopefully is a good representation of your average AAA title for the near future, and we'd also know if a 3080 with 10 GB runs fine on it.
Yeah, I should just retract everything I said. They showed drives behind a RAID controller, but even behind a PCIe RAID controller the drives are still NVMe, which seems to be a requirement for this to work.
Which honestly is a real PITA. I have been looking at replacing my boot drive with an NVMe one, but I had planned on keeping the 2TB of SATA storage I already have as a Steam and games drive. Now, to use RTX IO, I either need to drop another couple hundred bucks on a 2TB main drive, or deal with moving games back and forth between active and storage drives.
I've decided I'm going to go for the 3090. I'd love to spend as much as I did for the 1080 Ti this year, but I also don't want to go backwards on VRAM. I use my VR headset quite a bit and I've seen my card get used up quite a bit.

I'm in no hurry. I can easily wait all next year for a proper Ti replacement if necessary. 10GB is just baffling, to be honest. They aren't fooling anyone when they call it their flagship GPU. It's the 2080 replacement, not the Ti replacement. Ti buyers should just wait, IMO, if you want the real deal, or go for the 3090.
Hahaha. Just buy the damn thing! The poll asked if 10 GB was good for 3 years, and most said no. If you rephrased the question to 1 or 2 years, I'm sure it would be closer to 50:50, if not more skewed towards "yes". Something tells me that next-generation GPUs, i.e. Hopper and RDNA3, are going to make all prior monolithic-based GPUs look feeble. 5nm + chiplets is such a paradigm shift in mm²/$ that you're probably going to see a higher-than-average performance increase per price tier in the next generation. You're gonna be so incentivized to upgrade in 2 years or less anyway, and by that time there will be opportunities to buy a GPU with more VRAM. And like Blckgrffn said, you're not bound to lose much resale if you decide to upgrade from your 3080 within 6 months.

Something tells me I'll still smash F5 on that 3080 on release day though, lol. Let's be honest; I want that thing haha.
Most of the ones here with hesitations aren't the 2080 Ti owners, they're the ones who bought the 1080 Ti 3.5 years ago for $699 moving to a faster $699 card with less RAM.

AMD launched the R9 Fury X in 2015 at $649 with 4GB VRAM, so 10GB is still 2.5x that. The 780 Ti was also 3GB in 2013. Those who bought the 2080 Ti with 11GB for $1,200 don't really have a right to complain about a faster card at $699 with 1GB less RAM.
Then they should wait for the 20GB model. Nvidia isn't forcing anyone to get the 10GB model.

Most of the ones here with hesitations aren't the 2080 Ti owners, they're the ones who bought the 1080 Ti 3.5 years ago for $699 moving to a faster $699 card with less RAM.
Why only 10 GB of memory for RTX 3080? How was that determined to be a sufficient number, when it is stagnant from the previous generation?
[Justin Walker] We’re constantly analyzing memory requirements of the latest games and regularly review with game developers to understand their memory needs for current and upcoming games. The goal of 3080 is to give you great performance at up to 4k resolution with all the settings maxed out at the best possible price.
In order to do this, you need a very powerful GPU with high speed memory and enough memory to meet the needs of the games. A few examples - if you look at Shadow of the Tomb Raider, Assassin’s Creed Odyssey, Metro Exodus, Wolfenstein Youngblood, Gears of War 5, Borderlands 3 and Red Dead Redemption 2 running on a 3080 at 4k with Max settings (including any applicable high res texture packs) and RTX On, when the game supports it, you get in the range of 60-100fps and use anywhere from 4GB to 6GB of memory.
Extra memory is always nice to have but it would increase the price of the graphics card, so we need to find the right balance.
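The 4-6 GB figure from that answer is more plausible than it first sounds, because render targets themselves are tiny compared to textures. Quick arithmetic (RGBA8/RGBA16F/D32 are standard format sizes; the pipeline composition is just an illustration):

```python
# Sanity check on 4K render-target sizes: even several full-resolution
# buffers account for well under 1 GB -- textures dominate VRAM use.

W, H = 3840, 2160  # 4K resolution

def buf_mb(bytes_per_pixel):
    """Size of one full-screen buffer in MiB."""
    return W * H * bytes_per_pixel / 2**20

color = buf_mb(4)  # RGBA8 back buffer
hdr = buf_mb(8)    # RGBA16F HDR target
depth = buf_mb(4)  # D32 depth buffer
print(f"RGBA8: {color:.1f} MB, RGBA16F: {hdr:.1f} MB, D32: {depth:.1f} MB")
# RGBA8: 31.6 MB, RGBA16F: 63.3 MB, D32: 31.6 MB
```

Even a deferred pipeline with a dozen such targets is a few hundred MB; the remaining gigabytes are textures, geometry, and streaming caches, which is exactly the part settings and texture packs control.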