WCCftech: Memory allocation problem with GTX 970 [UPDATE] PCPer: NVidia response


ocre

Golden Member
Dec 26, 2008
1,594
7
81
I've been watching this thread with some fascination. I did "enough" research to my satisfaction just to pick the "right" GTX 970 card. I had toyed with the idea of 980 vs 970. Say what you will -- I'm STILL toying with the idea of picking up a second 970 for SLI. And -- I'm still "young" in my window of time to make an RMA-> replacement with the Egg for a 980 card.

And of course, my purchase preceded the announcement of the 500MB "problem" by mere days -- MERE DAYS!!

There are a few categories of users: an enthusiast group (here) possessing more tech knowledge, and gamers who post customer reviews. The latter so far aren't fretting much over the misleading promotion of 4GB that doesn't explain the segmented 3.5GB/500MB architecture of the 970 card.

But folks here (the first category) are kicking up the dust about the misrepresentation. I can't quite make up my mind. The 970 seems to perform up to and beyond my expectations (for the MONEY), and the 2x 970 configuration still looks good (for the MONEY and relative to either 980 or 2x 980 performance in gaming benchies).

If I do one thing, I'll carry the weight of my own fear of potential embarrassment when somebody says "Ahh! Ya GOT THOSE DE-FECT-IVE 970 cards! Why you got those darn 970 cards, anyway!" If I do the other thing, it's either going to cost me more or the same, or the performance boost shouldn't matter much.

Not ALL of this, but at least part of it, is about "group-think" and the behavior of consumers over details missed in advertising promotion. Looking at it from the perspective of NVidia's self-interest, that of MSI or ASUS or EVGA and Newegg and their self-interest, and how they promoted the 970 from the get-go -- I might have done the same thing. Otherwise, they'd have to fill advertisements with more technical details about crossbar resources and the whole enchilada. But if I were SMART and I were NVidia, EVGA, MSI, ASUS etc. -- I probably would've advertised the card as "3.5GB VRAM" with an asterisk.

But that's not the culture of today, which gives us Bernie Madoff, Robert Rizzo, Rita Crundwell, Carly Fiorina, Halliburton and a $2 trillion war that's just "sunk cost," and a certain category of politicians who will tell any lie, make any distortion, flip-flop on any issue -- just to achieve power.

So how does this advertising anomaly stack up against all that?

Then there are the people in marketing. The economist Frank Knight had something to say about all this: companies create demand for something that the consumer might otherwise never miss and never need. It is less about need, supply and demand -- and more about commercial propaganda and mass psychology.

So you're worried that people might think less of you?

The performance of the 970 is strong. My reference 970 is overclocked and locked at 1500MHz, actually a little over. The performance is not an issue for me at all.

I also don't think the segmented RAM is all that big of a deal. It's just the misrepresentation of the ROPs and cache that I am not happy with. I would have bought, and could have got, a 980. And that's the thing: the 970 is still an option, and you know how it performs. If you send it back, you will have to pay a lot more for a 980 to get just a little more performance, or get a 290(X). If neither option interests you, and you're happy with the card, I don't think there is anything wrong with keeping it. That's all your decision.

I am trying to get NVIDIA to let me pay the difference for a 980: I'd send them this card and pay the difference. But I don't expect many people who bought a 970 would want to pay all that extra for a 980. I am gonna keep trying to pressure them to give us options. If they don't...

But I find the performance of my card fine. It's just that I wanted 64 ROPs. That's what I want to sort out here.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
My takeaway from the PCPer video:

1. Using the full 4GB depends on the drivers and the OS. Long term, will these 970s work the same as they do today?

If some people are having issues, is it the OS they're using, the drivers, or some program some gamers run that blocks the smooth use of the 3.5 + 0.5GB (which, per NV, works OK)? A rough sketch of the kind of probe that exposes the slow segment is below.

Also, per the video, the reason it was done like this was so they could use chips with one defective L2. So it's all about yields ($$$$), and that's the only reason for the 3.5 + 0.5GB split.

I wonder if, due to volume, NV might have crippled fully working L2 chips to fill the 970 orders. Wouldn't that be a kick in the axs.

2. The upcoming OS, Win 10: will it see the memory pools, patch after patch?
In the testing (if it gets done), maybe the reviewers should include it.

3. What's going to happen with the GM200 cut-downs?
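
For anyone wanting to test their own card, here's a minimal CUDA sketch (my own illustration, not the actual benchmark tool that's been floating around): allocate VRAM in 128 MiB chunks until allocation fails, then time a kernel touching each chunk. On a 970 you'd expect the chunks landing in the last 0.5GB segment to report much lower bandwidth.

Code:
// Sketch of a per-chunk VRAM bandwidth probe. Compile with nvcc.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void touch(float *p, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) p[i] += 1.0f;              // read-modify-write to exercise DRAM
}

int main() {
    const size_t chunkBytes = 128ull << 20;        // 128 MiB per chunk
    const size_t n = chunkBytes / sizeof(float);
    std::vector<float*> chunks;
    float *p = nullptr;
    while (cudaMalloc(&p, chunkBytes) == cudaSuccess) // grab chunks until VRAM runs out
        chunks.push_back(p);

    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        cudaEventRecord(t0);
        touch<<<(unsigned)((n + 255) / 256), 256>>>(chunks[c], n);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        // factor of 2: each float is read once and written once
        printf("chunk %2zu: %6.1f GB/s\n", c,
               2.0 * chunkBytes / (ms / 1000.0) / 1e9);
        cudaEventDestroy(t0); cudaEventDestroy(t1);
    }
    for (float *q : chunks) cudaFree(q);
    return 0;
}

If the driver keeps the last segment for itself you may never even reach it this way, which is exactly the allocation-behavior question in point 1.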
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Hopefully this will put an end to any further denial. The truth is there and plain to see -- the specs were wrong, some of those that were technically correct were completely misleading, and there exists a real and measurable performance implication above 3.5GB of VRAM usage.

Actually many GTX 970 users think otherwise. Even ones on this very page don't care.

The GTX 970 has performed well for users for the last 5 months or so.
No matter what benchmarks you have to show, people will be happy, because their GTX 970 has been running well at the settings they've been using, not some cranked-up settings they had zero intention of ever using.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
http://forums.overclockers.co.uk/showpost.php?p=27542683&postcount=2526

https://www.youtube.com/watch?v=a8Q6jmg_qik&feature=youtu.be

1080p with high textures on a 970 (>3.5GB VRAM) stutters. On medium textures, it's OK. It certainly has the grunt to handle 1080p... it just lacks the VRAM. o_O

It appears that medium is the same as high in terms of quality; high just loads up more textures for less pop-in. So again we have a game wasting resources just because it can. If you're going to eat VRAM, at least make it because the game looks better.

See here http://www.geforce.com/whats-new/gu...performance-guide#dying-light-texture-quality
Unfortunately, Dying Light doesn't allow for Texture Quality to be changed mid-game. This, combined with checkpoint-style respawn locations and a constantly-changing time of day make direct comparisons in interesting locations impossible. From testing, it appears that there is no difference in quality between the two settings, and that High may merely be storing more textures in memory on suitably equipped GPUs. For example, running around an area on Medium resulted in a modicum of texture pop-in and VRAM usage of around 2GB. Repeating the test on High resulted in zero pop-in and VRAM usage that topped out at 3.3GB, though during longer gameplay sessions usage of nearly 4GB has been observed.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
It appears that medium is the same as high in terms of quality; high just loads up more textures for less pop-in. So again we have a game wasting resources just because it can. If you're going to eat VRAM, at least make it because the game looks better.

1) Less pop-in absolutely makes a game look better. Pop-in, when noticed, is incredibly harmful to immersion.

2) Even if it didn't, the scenario still correctly shows how the GTX 970 fails to deliver acceptable and smooth gameplay over 3.5GB of practical VRAM usage.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
1) Less pop-in absolutely makes a game look better. Pop-in, when noticed, is incredibly harmful to immersion.

2) Even if it didn't, the scenario still correctly shows how the GTX 970 fails to deliver acceptable and smooth gameplay over 3.5GB of practical VRAM usage.

I wasn't trying to dispute that; I'm saying that it's a waste of resources to just load everything because you can, when the actual quality of the environment doesn't appear to change.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
I wasn't trying to dispute that; I'm saying that it's a waste of resources to just load everything because you can, when the actual quality of the environment doesn't appear to change.

If it eliminates texture pop-in it's not a waste.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Here is, supposedly, the official response from NV:
http://hardforum.com/showpost.php?p=1041390025&postcount=503
Hey,

First, I want you to know that I'm not just a mod, I work for NVIDIA in Santa Clara.

I totally get why so many people are upset. We messed up some of the stats on the reviewer kit and we didn't properly explain the memory architecture. I realize a lot of you guys rely on product reviews to make purchase decisions and we let you down.

It sucks because we're really proud of this thing. The GTX970 is an amazing card and I genuinely believe it's the best card for the money that you can buy. We're working on a driver update that will tune what's allocated where in memory to further improve performance.

Having said that, I understand that this whole experience might have turned you off to the card. If you don't want the card anymore you should return it and get a refund or exchange. If you have any problems getting that done, let me know and I'll do my best to help.

--Peter
 

showb1z

Senior member
Dec 30, 2010
462
53
91
I wasn't trying to dispute that; I'm saying that it's a waste of resources to just load everything because you can, when the actual quality of the environment doesn't appear to change.

So your solution is to turn down settings that would be fine on any other 4GB card?
Another fix to this issue would've been if they had sold us an actual 4GB card, instead of saving a quick buck and lying to us about it.
It doesn't matter if the higher setting is pointless in your opinion; the fact is, this card doesn't perform like a 4GB card should. And more and more games will run into that wall. No dev is going to consider a 3.5GB card; they'll optimize for 2GB, 3GB or 4GB.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
http://www.hardwarecanucks.com/foru...68595-gtx-970s-memory-explained-tested-2.html

They haven't posted the actual FCAT charts yet.
The problem is that at that res, GPU grunt is a bigger factor than the VRAM bandwidth, which still affects it to some extent.

That's true, but the 290X is priced extremely close to a 970, and an after-market 290 ~ reference 290X. Look at these two examples:

The 290X has 24% higher averages, so you are much closer to that 60 fps average mark. You just need 10-15% overclocking on that 290X and you just might hit it! With a 970, no chance. More importantly, the 290X is delivering performance in line with a 980, a card costing nearly 2X. That's a pretty bad position for a 970 to be in for a 2014 GOTY title.
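
Putting rough numbers on that claim (the 44 fps starting point is an assumption for illustration, not read off the chart):

Code:
// Rough arithmetic behind "24% higher averages" + "10-15% OC to hit 60".
// The 44 fps base is an assumed illustration only.
#include <cstdio>
int main() {
    const double gtx970 = 44.0;            // assumed 970 average (fps)
    const double r290x  = gtx970 * 1.24;   // 24% higher -> ~54.6 fps
    printf("290X stock:   %.1f fps\n", r290x);
    printf("290X +10%% OC: %.1f fps\n", r290x * 1.10);   // ~60 fps
    return 0;
}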

This isn't what I would call unplayable on a 290X/980 but the 970 is way underperforming.
[Image: GTX-970-MEMORY-4.jpg]


http://www.hardwarecanucks.com/foru...68595-gtx-970s-memory-explained-tested-2.html

Sure, more performance would help, but clearly the 970 is struggling in modern games that require >3.5GB of VRAM. This is only going to get worse, as games like Dying Light easily use 3.3-4GB of VRAM at 1440P. Think about this: what if you have dual 290s vs. 970s? You'll have even more GPU power, but the 970s will run into these massive VRAM walls at >3.5GB.

Buying a 970/970 SLI for 1440P or 4K is no longer that simple. I am just surprised no mainstream reviewer picked this up for 4 months.

What makes the situation worse is the current pricing of the 970. Without even touching the insane value of after-market 290s, a 290X is $280, and the BEST 290X -- the cool and quiet MSI Lightning 290X -- is $310, cheaper than ANY 970.

Knowing this -- if I had 1440P or higher, used AA + highest textures, and wanted to keep the cards for 2-3 years -- then after the stuttering in recorded game videos and all the benchmarking proof, I don't think I could actually purchase 970 SLI over 290X CF at this point (or the much cheaper 290 CF).

Looking at the number of OOS R9 290 cards on Newegg, the market is paying attention.
 
Feb 19, 2009
10,457
10
76
Really, Hardware Canucks could have saved all that time and just republished NV's own results with average fps. They were pretty quick to jump on the FCAT & "smoothness" bandwagon not long ago.
 

Spanners

Senior member
Mar 16, 2014
325
1
0
Remember, VRAM isn't equal to VRAM, due to compression techniques. So even if we imagine a fixed setting at 3.5GB on a GTX 970, you still have more VRAM than a 4GB GTX 770 or a 4GB 290/290X in terms of gaming.

That's why performance is all that matters in different resolutions and settings.

I'm sure I've discussed this with you before, but I've seen no evidence that compression has a major impact on VRAM usage. If it did, I'm fairly sure it would have been touted. It's only been marketed as a bandwidth reduction.

"...compression is enough to reduce NVIDIA’s bandwidth requirements by 25% over Kepler"

The frames still have to be stored somewhere before the algorithms are run to compress them.

"... the GPU is able to significantly reduce the number of bytes that have to be fetched from memory per frame"

"This means that from the perspective of the GPU core, a Kepler-style memory system running at 9.3Gbps would provide effective bandwidth similar to the bandwidth that Maxwell’s enhanced memory system provides."

I don't think this makes the 970's 3.5GB more effective VRAM than any 4GB card without this compression. The data doesn't arrive compressed, and it certainly can't leave compressed, so this just improves intra-GPU communication.

Whitepaper the quotes came from.
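
The 9.3Gbps figure in that last quote is just the 25% savings worked backwards. A quick check (assuming the GTX 980's 7.0Gbps memory data rate, which the quote doesn't state explicitly):

Code:
// If compression cuts bandwidth needs by 25%, a 7.0 Gbps Maxwell
// memory system behaves like an uncompressed ~9.3 Gbps Kepler one.
#include <cstdio>
int main() {
    const double maxwellGbps = 7.0;                       // GTX 980 data rate
    const double keplerEquiv = maxwellGbps / (1.0 - 0.25);
    printf("Kepler-equivalent rate: %.1f Gbps\n", keplerEquiv);  // ~9.3
    return 0;
}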
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Knowing this -- if I had 1440P or higher, used AA + highest textures, and wanted to keep the cards for 2-3 years -- then after the stuttering in recorded game videos and all the benchmarking proof, I don't think I could actually purchase 970 SLI over 290X CF at this point (or the much cheaper 290 CF).

Thing is, Hawaii's obscene power consumption is a major turn-off, as is AMD's high CPU overhead in their DX11 drivers, especially if you want to go CrossFire.

Also, I honestly would never recommend a GTX 970 for 4K, even if it didn't have the VRAM issue. It's not fast enough for 4K, and neither is the 980 for that matter. For 1440p it's perfect though, and from my experience I haven't encountered any weird stuttering or dropped frames or anything.

That said, I still feel a bit duped over this affair because I remember reading multiple reviews and checking the specs for the 970 and 980, and being surprised when they both supposedly shared the same ROP count, L2 cache and memory bus.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That said, I still feel a bit duped over this affair because I remember reading multiple reviews and checking the specs for the 970 and 980, and being surprised when they both supposedly shared the same ROP count, L2 cache and memory bus.

I think there is too much blame on NV, while the reviewers are not accepting much responsibility, other than AT, which noted that they should have caught the issues. I found it respectable that AT actually admitted that. :thumbsup:

Why are the same reviewers that pushed frame times (TechReport, HardwareCanucks, PCPer) now blatantly ignoring frame times on the 960? Also, regardless of whether NV presented the 970 specs incorrectly by accident, professional reviewers should be doing more in-depth GPU testing to reveal the pros and cons of cards in the first place. It's not AMD's/NV's job to reveal the limitations of their cards in demanding scenes/games. That's the reviewer's job, to help us make a more informed decision!

How many reviewers tested 970 SLI vs. 290/X CF vs. 980 SLI and never noticed the major frame-time issues/performance drop-off at 1440p/4K on the former setup, yet regular gamers on NeoGAF and Reddit did?! Normal gamers don't get paid to review video cards like professionals do. I feel like the reviewers let 970 owners down too. :hmm:
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I still need to do some serious thinking. The best 970, the G1 Gaming, is $360, while the best 290X, I think the MSI Lightning, is $340. I've got a TX650 with 54A on the 12V... enough for this beast?

My ASRock B75M-GL R2.0 mobo pretty much has the ideal clearance for the Lightning, where at worst it's gonna block 2 useless PCI slots I won't ever use. :awe: I think even a 4-slot card would work for this motherboard. :eek: Picked this board simply on that basis.

Bugs the hell out of me that the 970 has a bit less memory. I look at the mistake of buying a faster 2GB 770 over a 280X for the games I played back in 2013; I got gimped in the end.
 

amenx

Diamond Member
Dec 17, 2004
4,521
2,857
136
Dying Light, a heavy VRAM-using game, especially at 4K:

[Image: gamegpu.ru Dying Light VRAM usage chart (dl__vram.jpg)]

Yet we have the "3.5GB" 970 ahead of the 6GB Titan. :D And it's seemingly doing well in SLI; even the 3GB 780 Ti is.

[Image: gamegpu.ru Dying Light 3840x2160 benchmark chart (dl_3840.jpg)]
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
I think there is too much blame on NV, while the reviewers are not accepting much responsibility, other than AT, which noted that they should have caught the issues. I found it respectable that AT actually admitted that. :thumbsup:

I think some noticed the problem but didn't understand it, or didn't bother investigating further.

Like this, from October:
http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980

It clearly shows the reduced ROP/bandwidth figures compared to the official specs.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
It's enough as long as you're not planning to OC.

Yeah, I think so too. The back of the box requires a minimum of 500W; Guru3D calls for 550-600W in their review, while Newegg calls for 700W. Quick sanity math below.

It's one of the things that has really stopped me from taking the 290X seriously till now.
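
For what it's worth, the 12V-rail arithmetic (the draw figures are my own ballpark assumptions, not measured values):

Code:
// Quick 12V-rail sanity check. The GPU/CPU draw numbers are assumed
// ballparks for illustration, not measurements.
#include <cstdio>
int main() {
    const double railWatts = 54.0 * 12.0;  // TX650: 54A on the 12V -> 648 W
    const double gpuPeak   = 300.0;        // assumed 290X Lightning peak, stock
    const double restOfRig = 150.0;        // assumed CPU + board + drives + fans
    printf("budget %.0f W, draw ~%.0f W, headroom ~%.0f W\n",
           railWatts, gpuPeak + restOfRig, railWatts - gpuPeak - restOfRig);
    return 0;
}

Roughly 200W of headroom at stock, which squares with "enough as long as you're not planning to OC."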
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I'm sure I've discussed this with you before, but I've seen no evidence that compression has a major impact on VRAM usage. If it did, I'm fairly sure it would have been touted. It's only been marketed as a bandwidth reduction.

"...compression is enough to reduce NVIDIA’s bandwidth requirements by 25% over Kepler"

The frames still have to be stored somewhere before the algorithms are run to compress them.

"... the GPU is able to significantly reduce the number of bytes that have to be fetched from memory per frame"

"This means that from the perspective of the GPU core, a Kepler-style memory system running at 9.3Gbps would provide effective bandwidth similar to the bandwidth that Maxwell’s enhanced memory system provides."

I don't think this makes the 970's 3.5GB more effective VRAM than any 4GB card without this compression. The data doesn't arrive compressed, and it certainly can't leave compressed, so this just improves intra-GPU communication.

Whitepaper the quotes came from.

From the same paper:


To reduce DRAM bandwidth demands, NVIDIA GPUs make use of lossless compression techniques as data is written out to memory. The bandwidth savings from this compression is realized a second time when clients such as the Texture Unit later read the data. As illustrated in the preceding figure, our compression engine has multiple layers of compression algorithms. Any block going out to memory will first be examined to see if 4x2 pixel regions within the block are constant, in which case the data will be compressed 8:1 (i.e., from 256B to 32B of data, for 32b color). If that fails, but 2x2 pixel regions are constant, we will compress the data 4:1.
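
To make that concrete, here's a toy host-side version (my own illustration of the checks described above, not NVIDIA's hardware logic) for one 256B block of 32-bit pixels, laid out as an 8x8 tile:

Code:
// Toy version of the constant-region checks: 8:1 if every 4x2 region
// is a single color (256B -> 32B), 4:1 if every 2x2 region is.
#include <cstdint>
#include <cstdio>

constexpr int W = 8, H = 8;   // 64 pixels x 32-bit color = 256B block

static bool regionsConstant(const uint32_t (&px)[H][W], int rw, int rh) {
    for (int y = 0; y < H; y += rh)
        for (int x = 0; x < W; x += rw)
            for (int dy = 0; dy < rh; ++dy)
                for (int dx = 0; dx < rw; ++dx)
                    if (px[y + dy][x + dx] != px[y][x]) return false;
    return true;
}

int main() {
    uint32_t tile[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            tile[y][x] = 0xFF336699;   // flat color: the best case

    if (regionsConstant(tile, 4, 2))
        printf("4x2 regions constant -> 8:1 (256B -> 32B)\n");
    else if (regionsConstant(tile, 2, 2))
        printf("2x2 regions constant -> 4:1 (256B -> 64B)\n");
    else
        printf("fall back to delta or no compression\n");
    return 0;
}

Note this is all about what crosses the memory bus; the block still occupies its full footprint for allocation purposes, which is why it doesn't buy "more VRAM".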

 