WCCftech: Memory allocation problem with GTX 970 [UPDATE] PCPer: NVidia response

Page 10
Status
Not open for further replies.

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Enigmoid is probably on the right path.

I think it's going to boil down to the same limitation in this color fill test.
It's already been touched on, but now there's a great effort to turn it into some kind of sensational revelation. The GTX 970 has parts disabled, and the test everyone is using isn't just testing VRAM.

[Image: 3dm-color.gif, 3DMark color fill results]
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
And it has nothing to do with the problem.

A GTX980m can only process 48 pixel/clock, less than a GTX970. However in the Cuda benchmark all memory partitions are available at full speed.

Well, looking at it from a layperson's view, they cut down the 970 so that people could not buy a 970, overclock it or manipulate the BIOS, and instantly have a 980 for $200+ less. Gaming performance seems fine, and even overclocked a 970 doesn't quite reach the 980, though it's close. Maybe CUDA performance, which is my understanding of what this test measures, is cut down because of whatever changes were made to the chip so as not to undercut the 980.

Maybe I'm wrong, but it doesn't seem far off from the Titan parts being fully unlocked for uses outside gaming. The lower-end parts were cut down just enough in one way or another. I still say this supposed "bandwidth problem" stems from CUDA itself and what they changed between the 980 and 970.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
A) The post above yours explains what the benchmark does pretty concisely. The source code is here.

B) The bandwidth drop-off is around 3300 MiB, which is roughly 3.5GB. Maybe the drivers are allocating RAM differently for the 970. It would explain why people were seeing different usage at the same settings/resolution vs the 980. Maybe it has few real-world consequences beyond some very limited scenarios.

C) I've seen a number of posts in the Nvidia forums stating that stuttering occurs at higher ram usage. Can't say if these are accurate or related though.

D) People have been running this benchmark incorrectly; it's easy to create a false result. I've yet to see a single 970 that doesn't exhibit this behavior, though. If the benchmark were that erratic, surely we'd see some results that were "good" for the 970? Nobody has said the cards are broken.

Lots of potential outcomes from the extremely mundane to some juicy drama.
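For anyone unsure what "running it correctly" means for a tool like this, the core measurement is simple to sketch. Here is a CPU-only analogue in plain Python (no GPU needed); the block sizes and the copy mechanism are my own assumptions for illustration, not the actual CUDA code:

```python
import time

def measure_block_bandwidth(n_blocks=8, block_mib=32):
    """Allocate n_blocks buffers, then time a full copy of each one.

    This mirrors the idea behind the CUDA VRAM benchmark: fill memory
    chunk by chunk and report a per-chunk transfer speed. On a real
    GTX 970, chunks allocated above roughly 3300 MiB came back far
    slower than the rest.
    """
    block = block_mib * 1024 * 1024
    buffers = [bytearray(block) for _ in range(n_blocks)]
    speeds = []
    for buf in buffers:
        t0 = time.perf_counter()
        _ = bytes(buf)                   # force a full read of the block
        dt = time.perf_counter() - t0
        speeds.append(block / dt / 1e9)  # GB/s for this block
    return speeds

for i, s in enumerate(measure_block_bandwidth()):
    print(f"block {i}: {s:.2f} GB/s")
```

The "incorrect runs" point D mentions follow directly from this design: if the desktop or another app already holds part of the VRAM, the benchmark's blocks land somewhere other than intended and the per-block numbers mean something different.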

The only outcome of this drama, made by bored people who don't know what they are looking at, will be an official statement from Nvidia explaining why the benchmark shows incorrect data, and reviews from several well-known sites showing it doesn't affect gaming or any other real-world scenario. Followed by denial and spin from the people who thought it was the real deal and were hoping for Nvidia's downfall because they expected a recall.

There are many cases where people run the benchmark correctly and still see these low bandwidth numbers and an inability to use the VRAM fully.

Here, I posted this earlier. Read
http://www.reddit.com/r/pcmasterrace/comments/2tfybe/investigating_the_970_vram_issue/

The users that mention stuttering and the like are as vague as one can be. Stuttering can happen on any machine, with any GPU setup, in certain scenarios.
I see people posting graphs from maxed-out games at high resolution with lots of MSAA to push VRAM usage up, claiming the reduced FPS is due to reduced bandwidth, without realizing those settings can force even the most hardcore computer to its knees regardless of how much bandwidth is available.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Well, looking at it from a layperson's view, they cut down the 970 so that people could not buy a 970, overclock it or manipulate the BIOS, and instantly have a 980 for $200+ less.

nVidia has never had a problem with decreasing the VRAM and bandwidth within the same family (G80, GT200, Fermi). So I don't think this was done intentionally.
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
The only outcome of this drama, made by bored people who don't know what they are looking at, will be an official statement from Nvidia explaining why the benchmark shows incorrect data, and reviews from several well-known sites showing it doesn't affect gaming or any other real-world scenario. Followed by denial and spin from the people who thought it was the real deal and were hoping for Nvidia's downfall because they expected a recall.

There are many cases where people run the benchmark correctly and still see these low bandwidth numbers and an inability to use the VRAM fully.

Here, I posted this earlier. Read
http://www.reddit.com/r/pcmasterrace/comments/2tfybe/investigating_the_970_vram_issue/

The users that mention stuttering and the like are as vague as one can be. Stuttering can happen on any machine, with any GPU setup, in certain scenarios.
I see people posting graphs from maxed-out games at high resolution with lots of MSAA to push VRAM usage up, claiming the reduced FPS is due to reduced bandwidth, without realizing those settings can force even the most hardcore computer to its knees regardless of how much bandwidth is available.
But there IS a performance hit:
skyrim
http://abload.de/img/tesv_2015_01_20_04_28mju9b.png
http://abload.de/img/tesv_2015_01_20_04_53joj4e.png
watchdogs
http://abload.de/img/watch_dogs_2015_01_11w4sxz.jpg
http://abload.de/img/watch_dogs_2015_01_11mmsqt.jpg
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
nVidia has never had a problem with decreasing the VRAM and bandwidth within the same family (G80, GT200, Fermi). So I don't think this was done intentionally.

How do you mean? Remember the 6970 and 6950 from AMD, and being able to flip a switch on some cards to turn your 6950 into a 6970? You don't think that when Nvidia was designing the 970 they purposely cut it down in a way that makes getting a 980 out of a 970 impossible?


So we resort to staring at FPS numbers all day? In the case of Watch Dogs you go from unplayable to unplayable, which is hardly telling of anything. In Skyrim, anything could be causing the performance hit: the lighting, the way the shadows are drawn, the number of characters loaded in the scene that you can't see directly in front of you. Same for Watch Dogs, for that matter.

Look, the only way to show it without unreliable dynamic scenes, and without screenshotting what clearly has different things going on each time, is to use a timedemo recording so that each run is exactly the same every single time. If you look at those screenshots side by side you can tell that things in the distance aren't the same between them, so the FPS numbers are irrelevant. I know what you and others might say: "Higher settings were loaded up to use more VRAM." Well, here's the problem: higher detail settings will lower your FPS. Oh no!
 
Last edited:

mikk

Diamond Member
May 15, 2012
4,327
2,408
136
Take your bad attitude somewhere else.

Even with the IGP I can still get up to 2 "failed" blocks depending on the test run.


The truth hurts sometimes. You simply did something wrong. Once again, the GTX 980 is not affected, as confirmed by numerous users in other forums. Your denial is amazing.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
So we resort to staring at FPS numbers all day? In the case of Watch Dogs you go from unplayable to unplayable, which is hardly telling of anything. In Skyrim, anything could be causing the performance hit: the lighting, the way the shadows are drawn, the number of characters loaded in the scene that you can't see directly in front of you. Same for Watch Dogs, for that matter.

Look, the only way to show it without unreliable dynamic scenes, and without screenshotting what clearly has different things going on each time, is to use a timedemo recording so that each run is exactly the same every single time. If you look at those screenshots side by side you can tell that things in the distance aren't the same between them, so the FPS numbers are irrelevant. I know what you and others might say: "Higher settings were loaded up to use more VRAM." Well, here's the problem: higher detail settings will lower your FPS. Oh no!

Exactly.

And even if you got 22GB/s vs 220GB/s at full VRAM usage, you would see a massive FPS drop.
I can assure you that poor Maxwell core would be bottlenecked to hell and beyond. You would notice it, and there would be a major outrage across all the tech forums, which hasn't happened. Strange, huh?


Again, read my link to reddit, @Head. There is zero performance drop.
 
Last edited:

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Exactly.

And even if you got 22GB/s vs 220GB/s at full VRAM usage, you would see a massive FPS drop. Not a 10-20% drop either.
I can assure you that poor Maxwell core would be bottlenecked to hell and beyond.

Looking at his links, they say ***.de, so my guess is he's from the German forum where the prodigy who made this benchmark resides.

The other thing is that the benchmark does things games don't. These cards are marketed to gamers who want an upper-mid-tier card, not to people who want CUDA performance.

I'm really having a good time seeing all the people come out of the woodwork calling for a recall.
 
Last edited:

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
The other thing is that the benchmark does things games don't. These cards are marketed to gamers who want an upper-mid-tier card, not to people who want CUDA performance.

I'm really having a good time seeing all the people come out of the woodwork calling for a recall.

The only concern I would have is if the drivers are adjusting IQ to keep from using that last bit of VRAM that will be slow. In general people expect their cards to obey the game settings. Granted something like that could actually become a "feature" as long as it is done openly. Not that anyone has determined if anything is actually "wrong" yet when running games.

If there is something to it in games though the sudden lack of FCAT analysis starting with the 970/980 reviews goes from "Really guys? Smoothness not important now because the 290s are smoother?" to "Was this an attempt to hide the issue from the beginning?".
 
Last edited:

ocre

Golden Member
Dec 26, 2008
1,594
7
81
You know, I have been thinking...


I want a free GTX 980 upgrade too!!!


Nvidia, you sold me a card that doesn't perform as well as a GTX 980 and I am POed.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
The only concern I would have is if the drivers are adjusting IQ to keep from using that last bit of VRAM that will be slow. In general people expect their cards to obey the game settings. Granted something like that could actually become a "feature" as long as it is done openly. Not that anyone has determined if anything is actually "wrong" yet when running games.

If there is something to it in games though the sudden lack of FCAT analysis starting with the 970/980 reviews goes from "Really guys? Smoothness not important now?" to "Was this an attempt to hide the issue from the beginning?".

Not only are there tons of examples of people using more than 3500MB in games on a GTX 970, I just posted a very detailed example above that goes even further by logging everything over a time span.

There are also Kombustor tests popping up that show the GTX 970 is fine all the way up to 4GB.

There is something funny going on with this Nai test. We know the 970 and 980 are different. The 3DMark color fill test also shows there is a difference.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Not only are there tons of examples of people using more than 3500MB in games on a GTX 970, I just posted a very detailed example above that goes even further by logging everything over a time span.

There are also Kombustor tests popping up that show the GTX 970 is fine all the way up to 4GB.

There is something funny going on with this Nai test. We know the 970 and 980 are different. The 3DMark color fill test also shows there is a difference.

Has anyone done an in-depth analysis with FCAT and a VRAM inspector?
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
A few sites posted frame times. Nothing I saw showed a significant drop indicating problems. Some of the settings used looked like big VRAM consumers, but they didn't note how much was in use.
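The kind of analysis FCAT enables can be approximated from any raw frame-time log. A minimal sketch (the trace numbers below are made up purely to illustrate what a VRAM-related stutter would look like, not measured data):

```python
def frametime_stats(frametimes_ms):
    """Average FPS plus the 99th-percentile frame time; the percentile
    is what exposes stutter that an FPS average hides."""
    times = sorted(frametimes_ms)
    avg_fps = 1000.0 / (sum(times) / len(times))
    p99 = times[min(len(times) - 1, int(0.99 * len(times)))]
    return avg_fps, p99

# Hypothetical trace: mostly 16 ms frames with two 50 ms spikes, i.e.
# the signature that crossing into a slow memory segment might leave.
trace = [16.0] * 98 + [50.0] * 2
avg, p99 = frametime_stats(trace)
print(f"avg {avg:.1f} fps, 99th percentile {p99:.1f} ms")
```

A trace like this averages near 60 FPS, which is exactly why average-FPS tables could miss the problem while a percentile or FCAT plot would show it immediately.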
 

96Firebird

Diamond Member
Nov 8, 2010
5,748
345
126
I hope people over-react to this and used GTX 970s flood the market, I've always wanted to try out SLI...
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
http://www.pcper.com/news/Graphics-Cards/NVIDIA-Responds-GTX-970-35GB-Memory-Issue

So stupid. They just avoid going over 3.5GB as much as possible. This isn't a 4GB card. There's no guarantee they can avoid it all the time, and we already know what the performance of that last 0.5GB is.

What's going to happen when DX12 comes out and NVidia's driver isn't around to baby this hamstrung memory configuration?

It's not... we already proved there's no problem running games that consume 3.8GB, since performance doesn't tank into the dirt like it would if there were a bandwidth issue. Turning settings up to use more memory and saying "look, it lost 4fps" is meaningless when turning settings up automatically drops your FPS a bit.

I don't think you even read the article. This is about running at a setting that uses less than 3.5GB, then going to 4K and loading it past 3.5GB.

Shadows of Mordor drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference. On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference. On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference. As you can see, there is very little change in the performance of the GTX 970 relative to the GTX 980 in these games when it is using the 0.5GB segment.
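The deltas quoted can be checked with simple arithmetic; here is that check in Python, using only the percentages from the figures above:

```python
# Performance drop (in %) when pushed past 3.5GB at 4K, per the
# figures quoted above: (GTX 980 drop, GTX 970 drop)
drops = {
    "Shadows of Mordor": (24, 25),
    "Battlefield 4":     (47, 50),
    "CoD: AW":           (41, 44),
}

# Extra percentage points the 970 loses relative to the 980
deltas = {game: d970 - d980 for game, (d980, d970) in drops.items()}
for game, delta in deltas.items():
    print(f"{game}: the 970 drops {delta} points more than the 980")
```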
 
Last edited:

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Where does it say they made changes to the driver? All I see is that the GPU allocates from the first 3.5GB first by default. This could all be in the firmware/BIOS.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.

We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment. The best way to test that is to look at game performance. Compare a GTX 980 to a 970 on a game that uses less than 3.5GB. Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.

http://www.pcper.com/news/Graphics-Cards/NVIDIA-Responds-GTX-970-35GB-Memory-Issue
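NVIDIA's description amounts to a priority allocator: requests fill the fast 3.5GB segment first and only spill into the slow 0.5GB segment once the fast one is exhausted. A toy model of that behavior (segment sizes taken from the statement; the class, method names, and MB granularity are my own illustrative assumptions):

```python
class SegmentedVRAM:
    """Toy model of the layout NVIDIA describes: a 3584 MB
    high-priority segment plus a 512 MB low-priority segment."""

    def __init__(self, fast_mb=3584, slow_mb=512):
        self.fast_free = fast_mb
        self.slow_free = slow_mb

    def alloc(self, mb):
        """Serve a request from the fast segment first, spilling into
        the slow segment only when the fast one is exhausted.
        Returns a list of (segment, mb) pieces."""
        if mb > self.fast_free + self.slow_free:
            raise MemoryError("out of VRAM")
        pieces = []
        from_fast = min(mb, self.fast_free)
        if from_fast:
            pieces.append(("fast", from_fast))
            self.fast_free -= from_fast
        rest = mb - from_fast
        if rest:
            pieces.append(("slow", rest))
            self.slow_free -= rest
        return pieces

vram = SegmentedVRAM()
print(vram.alloc(3000))  # fits entirely in the fast segment
print(vram.alloc(800))   # 584 MB of fast left, the remainder spills to slow
```

This also matches the monitoring-tool behavior described: a game staying under 3.5GB never touches the slow segment, so tools report 3.5GB as the ceiling on a 970.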
 