
[Eurogamer] The state of 2GB VRAM GPUs

Because of the architecture, GCN is better suited to DX12 than Maxwell, and Tonga is way better than GM206 thanks to its greater compute capabilities.
If a DX12 game also uses async compute, even GCN Gen 1.0 (Tahiti) will be faster than Maxwell.

We keep hearing that, but we don't see it.
 
We keep hearing that, but we don't see it.

That's because it's only hypothetically better with possible future software configurations in a parallel universe. The rest of us play our games in DX11 in the year 2016, Milky Way galaxy, planet Earth.
 
But we do. While it's not revolutionary like many wanted, it's clear from all tests that AMD is getting the bigger boosts.

No we don't. Nvidia still rules in the DX12 results we actually see, despite the constant rubbish we hear.

DX12 helps AMD where the lack of a multithreaded driver hurts them in DX11. In other words, it should make AMD's performance more consistent and let them save some cost on optimizations.
 
If a DX12 game also uses async compute, even GCN Gen 1.0 (Tahiti) will be faster than Maxwell.

Way to generalize there. Not sure how you know that, but ok.

Either way, higher utilization would increase power use. Maybe it will increase efficiency, but can we know that before seeing it? I don't think so.
 
Way to generalize there. Not sure how you know that, but ok.

Either way, higher utilization would increase power use. Maybe it will increase efficiency, but can we know that before seeing it? I don't think so.

We have seen Mantle, and that is very close to how DX12 games will perform.

The performance increase under Mantle is far larger than the increase in power use, so perf/watt goes up, not down.
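
To make that perf/watt point concrete, here is a tiny sketch with made-up numbers (the figures are purely illustrative, not measured Mantle or DX12 results): as long as frame rate rises faster than board power, efficiency improves.

```python
# Illustrative perf/watt comparison with hypothetical numbers,
# not measured Mantle/DX12 results.
baseline_fps, baseline_watts = 60.0, 180.0   # assumed DX11 figures
mantle_fps, mantle_watts = 72.0, 190.0       # assumed +20% fps, ~+5.5% power

before = baseline_fps / baseline_watts       # ~0.333 fps per watt
after = mantle_fps / mantle_watts            # ~0.379 fps per watt

print(f"perf/watt before: {before:.3f}, after: {after:.3f}")
# Performance grew faster than power draw, so efficiency went up.
```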
 
Couldn't it simply be that AMD has a huge advantage here because all of its DX12 cards are GCN-based and can therefore share optimisations across the board, whereas Nvidia has three different architectures, which all require separate optimizations from the studios/Nvidia?
 
Looking at all the games posted by russian, I see a maximum of around 20% difference in frame times. I would really like to see some non-instrumented tests to find out how big a difference it takes to be perceived by the naked eye, especially while actually playing a game. Note: I'm referring to the 960 data, since that seems to be the card everybody likes to hate on.
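
For a rough sense of scale (my own back-of-the-envelope, not numbers from the linked tests): a 20% frame-time gap at a 60 fps baseline is about 3.3 ms per frame, or roughly 60 fps versus 50 fps.

```python
# Back-of-the-envelope: what a 20% frame-time gap means at an assumed 60 fps baseline.
baseline_fps = 60.0
baseline_ms = 1000.0 / baseline_fps      # ~16.7 ms per frame
slower_ms = baseline_ms * 1.20           # +20% frame time -> ~20.0 ms
slower_fps = 1000.0 / slower_ms          # ~50 fps

print(f"{baseline_ms:.1f} ms vs {slower_ms:.1f} ms "
      f"({baseline_fps:.0f} fps vs {slower_fps:.0f} fps)")
```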
 
Way to generalize there. Not sure how you know that, but ok.

Either way, higher utilization would increase power use. Maybe it will increase efficiency, but can we know that before seeing it? I don't think so.

Performance increasing faster than power draw would increase efficiency. If they're paying most of the power cost for more adaptable compute, actually using more adaptable compute would be to their benefit.
 
Wow. They basically said 2GB is enough for the GTX 960.

Seems to be for the games they're testing, but it looks like a lot of games are smarter than in the past about VRAM management and can handle smaller amounts pretty gracefully.
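
As a rough illustration of what that smarter VRAM management can look like (a minimal sketch of a generic mip-dropping heuristic, not how any particular engine actually does it): when the texture pool exceeds the budget, the streamer drops the top mip level of the least important textures until everything fits, trading a little sharpness for staying inside 2GB.

```python
# Minimal sketch of a mip-dropping texture streamer; purely illustrative.
# Real engines use much richer heuristics (distance, screen coverage,
# frame-to-frame reuse), but the core idea is the same: fit a VRAM budget
# by shrinking the least important textures first.

def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Approximate size of a full mip chain: base level plus ~1/3 for mips."""
    return width * height * bytes_per_texel * 4 // 3

def fit_to_budget(textures, budget_bytes):
    """textures: list of dicts with 'name', 'width', 'height', 'priority'."""
    sizes = {t["name"]: mip_chain_bytes(t["width"], t["height"]) for t in textures}
    total = sum(sizes.values())
    # Drop the top mip (roughly quarters the footprint) of the
    # lowest-priority textures until the pool fits in the budget.
    for tex in sorted(textures, key=lambda t: t["priority"]):
        if total <= budget_bytes:
            break
        reduced = sizes[tex["name"]] // 4
        total -= sizes[tex["name"]] - reduced
        sizes[tex["name"]] = reduced
    return total, sizes

# Example: three textures competing for a 112 MiB pool; only the
# lowest-priority one ends up reduced.
pool = [
    {"name": "terrain", "width": 4096, "height": 4096, "priority": 3},
    {"name": "npc",     "width": 2048, "height": 2048, "priority": 2},
    {"name": "debris",  "width": 2048, "height": 2048, "priority": 1},
]
total, sizes = fit_to_budget(pool, budget_bytes=112 * 1024 * 1024)
```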
 
More like it's enough now for most games. If you plan to keep your GPU longer than a year, it will probably bite you in the ass.

My personal prediction? 99% chance it will bite 960 owners in the ass within a year from now.
 
More like it's enough now for most games. If you plan to keep your GPU longer than a year, it will probably bite you in the ass.

My personal prediction? 99% chance it will bite 960 owners in the ass within a year from now.

The first thing the GTX 960 will suffer from in new games is raw performance, not the memory buffer. So even with 4GB cards, GTX 960 owners will see big performance drops in next-gen games.
 
The first thing the GTX 960 will suffer from in new games is raw performance, not the memory buffer. So even with 4GB cards, GTX 960 owners will see big performance drops in next-gen games.
Either way, the 960 is still trash, right? Not enough VRAM or not enough raw performance; either one makes it trash.
 
Don't video editing, photo editing, physics calculations, 3D modeling, graphic design, etc. use more VRAM than gaming does? Isn't that why FirePro and Quadro cards usually offer more VRAM than their desktop siblings? So it's not that these cards come with more onboard RAM than they need for games; rather, they're designed to do MORE than just play games.
 
The 2GB vs. 4GB vs. 6GB vs. 8GB question also changes depending on whether it ever makes sense to SLI/CrossFire the card.

2GB was devastating on the 680/770 because if you added another one for cheap a while after the card had come out, you'd actually have enough grunt to turn settings up quite a bit, except that the VRAM would bottleneck you. End result: hardly anyone is still holding onto SLI 680s/770s; they all went 780 or greater, while you still see some 7970 CrossFire setups out there because 3GB is just enough for now.

For example, right now, while lots of people are upgrading to 980 Tis, I snagged someone's old 290 they were upgrading from for $170 to add to a 290 I already have. I got a ton of performance for very little money that way, but if I had been VRAM-limited it would have been basically pointless to do that. Sure, I have to deal with dual-GPU problems, but I also saved roughly $300-$350 to get the same performance for the most part.

With the 960/380, it would be kind of pointless to CF/SLI them, because you can double their performance with a single GPU without it being much worse of a deal. A few years down the line, when it's time for a cheap upgrade, you could probably find a midrange 14nm chip that will double it, or a used Fury or 980 for cheap. I doubt the single-GPU upgrade will end up much more expensive once all the 28nm cards have been obsoleted and are on the used market.
 
The Techspot article nails it and reaffirms my own observations. There's not enough grunt in these low-end cards to warrant the extra cost for the extra RAM.

http://www.techspot.com/review/1114-vram-comparison-test/page5.html

Wrong.

You can't even enable Ultra textures with 2GB of VRAM in a few games released just last year, and a 7950 has enough grunt to run those. Heck, even Watch Dogs, which is old now, runs great on my overclocked 7950 with Ultra textures. Something 2GB GPUs simply cannot do.

These GPUs lack grunt for MSAA in modern games, but they have plenty for handling high res texture options.
 
Oh noes, can't use top-end ULTRA settings on a budget $165 card. D:

Textures are usually only expensive in terms of VRAM and sometimes memory bandwidth, though. In most cases, your texture setting is limited purely by how much VRAM you have, so you get a better-looking game for almost no performance hit if you simply have enough VRAM.
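
To put rough numbers on that (my own illustration, not figures from the thread): an uncompressed 4096x4096 RGBA8 texture with a full mip chain is on the order of 85 MB, and block compression (BC1/BC7) cuts that by roughly 4-8x, so an Ultra texture pack mostly costs VRAM rather than shader time.

```python
# Rough texture memory footprints; purely illustrative.
# Assumes uncompressed RGBA8 (4 bytes per texel); block compression
# (BC1/BC7) would shrink these figures by roughly 4-8x in a real game.

def texture_mb(width, height, bytes_per_texel=4, mipmaps=True):
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if mipmaps else base   # a full mip chain adds ~1/3
    return total / (1024 * 1024)

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: ~{texture_mb(size, size):.0f} MB uncompressed")
# 1024x1024: ~5 MB, 2048x2048: ~21 MB, 4096x4096: ~85 MB
```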
 
No we don't. Nvidia still rules in the DX12 results we actually see, despite the constant rubbish we hear.

DX12 helps AMD where the lack of a multithreaded driver hurts them in DX11. In other words, it should make AMD's performance more consistent and let them save some cost on optimizations.

If you have this binary win/lose mentality where all that matters is who has the higher average FPS, then yes, you are correct.
 
Wrong.

You can't even enable Ultra textures with 2GB of VRAM in a few games released just last year, and a 7950 has enough grunt to run those. Heck, even Watch Dogs, which is old now, runs great on my overclocked 7950 with Ultra textures. Something 2GB GPUs simply cannot do.

These GPUs lack grunt for MSAA in modern games, but they have plenty for handling high res texture options.

Invalid comparison. The 7950 is old GCN with zero memory compression, but it's the card's fillrate that makes it faster, not the extra gigabyte of memory.

The article is correct that for the few crappy ports like Shadow of Mordor, with 2GB cards you're better off backing off Ultra textures to High (you'll never notice the difference at 1080p anyway) to make the game actually playable. The extra VRAM mostly goes to waste, and the 960's fillrate is too low to make effective use of 4GB in most cases.
 