Title says it all:
What is the best way to check for stability on a GPU OC?
What do you run and for how long? Or do you just play and if there is a crash, lower your settings?
FurMark is a very intensive OpenGL benchmark that uses fur-rendering algorithms to measure the performance of the graphics card. Fur rendering is especially effective at overheating the GPU, which is why FurMark also works as a stability and stress-test tool (also called a GPU burner) for the graphics card.
> Honestly, GPU-intensive games like Crysis 2 at heavy detail settings seem to be the most reliable stress test for me. I've been able to loop benchmarks (Heaven, 3DMark11) and GPU-burning utilities for hours with no issues, then play 15 minutes of Crysis at the same settings and crash or see artifacts because the OC is unstable.

Bingo. Most benchmarks and "stress tools" like OCCT or FurMark don't fully load all of the GPU's components. Programs like FurMark are just power viruses that do little more than test the capacity of your GPU heatsink. Both AMD and NVIDIA now include throttling measures in their drivers specifically to prevent these power-virus programs from maxing out the cards. Playing a modern game is therefore going to be your best test for stability.
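One practical complement to the "just play games" approach is to log what the card is doing while you play, so a crash can be tied back to a temperature or power spike. Below is a minimal sketch, assuming an NVIDIA card with `nvidia-smi` on the PATH (AMD users would need a different polling tool); the queried fields and the 15-minute default duration are just illustrative choices:

```python
import subprocess
import time

# Fields reported by nvidia-smi: core temperature, board power, graphics clock.
QUERY = "temperature.gpu,power.draw,clocks.gr"

def parse_sample(csv_line):
    """Parse one CSV line from nvidia-smi, e.g. '72, 180.50 W, 1050 MHz'."""
    temp, power, clock = [f.strip() for f in csv_line.split(",")]
    return {
        "temp_c": int(temp),
        "power_w": float(power.split()[0]),   # strip the 'W' unit
        "clock_mhz": int(clock.split()[0]),   # strip the 'MHz' unit
    }

def log_while_gaming(duration_s=900, interval_s=1.0):
    """Poll the GPU once a second; run this in the background while you play."""
    end = time.time() + duration_s
    while time.time() < end:
        line = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
            text=True).strip()
        print(parse_sample(line))
        time.sleep(interval_s)
```

Start `log_while_gaming()` before launching the game; after a crash, the last logged samples show whether the card was hot, power-limited, or clocking oddly at the time.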
> I have heard people saying that programs like FurMark are "card killers." Never really knew what they were referring to. Do you mean they cause the card to overheat?

In essence, yes. Before OCP (over-current protection) was widely implemented in both hardware and software, power-virus programs like these could make the card draw enough current to kill components or circuitry on the card. Now it's nearly impossible to do so.
How come my 100% stock Radeon 7950 crashes and blacks out the screen after 3-5 minutes of OCCT or FurMark, yet will play a game o/c'd, or loop Heaven while o/c'd, for as long as I want with no artifacts?
Is there a safety feature shutting the card down? Someone said above that these aren't true tests, just power-hungry viruses. But why make them if they can't even run on a stock card?
> Those FurMark-type programs won't damage recently manufactured cards, then?

No, they won't, but that also eliminates their "effectiveness" (which is a ruse in itself). Case in point: don't use FurMark or similar programs.
> Only because GPU power circuits are under-engineered. Imagine if CPUs or motherboards burned out under "full CPU load" because they were designed on the assumption that nobody would run their PC at full load.

Not really true. I believe that by design, GPUs are easier to exploit via a power virus, so their loads are many times higher than they would be on a CPU. Furthermore, a video card's PCB is much smaller than a motherboard's, so they make do with the space they have.
> How come my 100% stock Radeon 7950 crashes and blacks out the screen after 3-5 minutes of OCCT or FurMark, yet will play a game o/c'd, or loop Heaven while o/c'd, for as long as I want with no artifacts?
> Is there a safety feature shutting the card down? Someone said above that these aren't true tests, just power-hungry viruses. But why make them if they can't even run on a stock card?

Some 7950 models have bare-minimum cooling on the VRMs. While it's more than adequate for stock settings, a power virus will exploit this deficiency and crash the card due to the high temps. Don't run FurMark or similar programs anymore; you will eventually damage your hardware.
> I have the ASUS 7950 DC II, which has a three-slot cooler. My PSU is a Corsair TX650, which has Seasonic internals and should be more than enough for a single-card setup.
> Does this sound like a faulty card, or a poorly designed card in general? Really considering returning it. Like I said, I can play Skyrim for hours and my temps won't breach 65C with an idle of 30C, and that's while overclocked. But it will crash FurMark/OCCT at stock. Heaven also seems fine, although I only tested that for about 30 minutes; still better than the 5 minutes it lasted in the other stress tests.

Like I said, it's probably because the VRMs aren't that heavily cooled, and their heatsinks are separate from the main GPU heatsink. As long as you don't run power-virus programs or pump crazy voltage through the GPU for an overclock, you'll be fine. Install GPU Tweak by ASUS and check your VRM temps in a game and in FurMark. You'll see the difference.
> Not really true. I believe that by design, GPUs are easier to exploit via a power virus, so their loads are many times higher than they would be on a CPU. Furthermore, a video card's PCB is much smaller than a motherboard's, so they make do with the space they have.

"Power virus" is a total misnomer, a made-up term invented to disguise poor engineering practices.
> Is this problem more or less universal with all 7950s, or just a few designs? How would increasing GPU voltage put the card in danger? Do you mean don't increase the memory voltage?

The 7950 card designs were left to the board partners, so each company has its own design that "follows" AMD's specifications. Because of this, there's a lot of variance in the designs and in how robust they are. Some 7950s have really beefy cooling all around; some are hit and miss.
> Does your 7970 have the same issue?

No, but I custom-cooled mine. I also have a reference 7970, and the stock cooling solution is pretty beefy as well.
> My highest stable o/c is 1050/1650 (tested by gaming). Oddly, the ASUS stock voltage is 0.993V, not the reference spec of 1.093V. The o/c was done at 1.093V.
> Thanks for your help.

Like I said, check your VRM temps and then proceed with the overclocking. It's always better to have more information :thumbsup:.
> Nice OC on the 7970, the highest I have seen from any 7970 owner on the forums.
> You must share your core voltage :thumbsup:. From what I've gathered, 1700MHz is typical of any 7970's memory, right?
> Hate sounding like a n00b, but I'm still gathering OC info as we speak to get that maximum OC... I'm already at 1200/1650MHz at 1.186V.

It actually goes higher, but that's the highest stable I can get through a quick OC in Afterburner at 1.3V (1.2V with droop). If I flash it to a better BIOS, I can hit 1400MHz+. My RAM actually only hits 1600MHz stable; I have to give it 1.7V (stock is 1.6V) to hit 1775MHz stably. However, my core is water-cooled.
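For anyone weighing voltage bumps like the ones discussed here: a common rule of thumb is that dynamic power scales roughly with frequency times voltage squared (P ∝ f·V²), which is why a small voltage increase heats the card disproportionately. A sketch with invented numbers (the 200 W stock figure and the clock/voltage pairs are assumptions for illustration, not measurements):

```python
# Rough rule of thumb: dynamic power ~ frequency * voltage^2.
# All numbers below are illustrative, not measured.

def scaled_power(p0_w, f0_mhz, v0, f1_mhz, v1):
    """Estimate new power draw after changing clock and core voltage."""
    return p0_w * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# Hypothetical 7970 taken from 925 MHz @ 1.175 V to 1200 MHz @ 1.3 V,
# assuming ~200 W at stock:
stock_w = 200.0
oc_w = scaled_power(stock_w, 925, 1.175, 1200, 1.3)
print(round(oc_w))  # roughly 318 W by this estimate
```

The clock bump alone is a ~30% power increase; the voltage bump multiplies that by another ~22%, which is what pushes the cooling (especially on the VRMs) much harder than the frequency alone would.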
> "Power virus" is a total misnomer, a made-up term invented to disguise poor engineering practices.

That's not how it works. GPUs and other microprocessors are designed to certain specifications, much like anything that is engineered. Power viruses and similar programs are designed not to do useful work, but simply to draw as much current as possible. No company would waste time over-designing hardware so a few geeks can get excited knowing their GPUs can pull 400W and still function. That's a waste of resources on the company's part, and as a consumer I certainly don't want to pay for it. It is therefore cheaper and easier to implement OCP (in both hardware and software) to make sure the GPU runs as designed.
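The OCP behaviour described above can be sketched as a toy feedback loop: when measured board power exceeds the design limit, the driver steps the clock down until the card is back inside its envelope. Nothing here reflects real driver or firmware code; the power model and every constant are invented purely for illustration.

```python
# Toy model of driver-side power limiting (OCP). Illustrative only --
# real cards do this in firmware/hardware with much finer control.

BOARD_POWER_LIMIT_W = 200.0   # hypothetical design limit
CLOCK_STEP_MHZ = 25           # hypothetical throttle step

def power_draw_w(clock_mhz, workload_intensity):
    """Crude stand-in for measured power: scales with clock and load.
    workload_intensity ~1.0 for a heavy game, ~1.5 for a power virus."""
    return 0.18 * clock_mhz * workload_intensity

def throttle(clock_mhz, workload_intensity):
    """Step the clock down until power is back under the board limit."""
    while power_draw_w(clock_mhz, workload_intensity) > BOARD_POWER_LIMIT_W:
        clock_mhz -= CLOCK_STEP_MHZ
    return clock_mhz

# A heavy game at 1000 MHz stays within the limit and keeps its clock,
# while a FurMark-style load at the same clock gets throttled down.
game_clock = throttle(1000, 1.0)
virus_clock = throttle(1000, 1.5)
```

This is why a power virus can look "slower" than a game on a modern card: the game never trips the limit, while the virus runs against it and loses clock speed.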
GPUs contain many processors, and just like any processor they should, in theory, be able to be 100% utilized. But like many things, they are engineered to handle only a certain fraction of that total load, rather than the whole of it.
> Edit: Imagine if bridges were designed to handle only medium or light traffic, and not a highway full of cars. Would that be acceptable to you? Like I said, GPUs are under-engineered.

You either made a poor analogy or don't understand how bridges are made. Just like I said above, anything engineered is made to specifications. You'll notice many smaller bridges have a "tonnage limit" sign with pictures of certain semi-trailer trucks on it. These bridges are constructed with the intent to handle local traffic, and larger trucks will find alternate, mainstream routes. Just the same, large highway bridges are also designed to specifications, only theirs are designed to handle a highway full of vehicles.
> Edit: Imagine if CPUs were designed this way, and the engineer simply chose to label something like Linpack (which is a mathematical matrix solver) a "CPU power virus", and said that running such programs was unsupported on those CPUs. Would that be acceptable to you, that only certain software was "supported" and other software was not?
> To me, being told that certain classes of programs are unacceptable to run on a general-purpose computing device, because they cause high power consumption within the processor, would make that device totally unacceptable to use or purchase.
> A properly engineered, general-purpose computing device should be able to run ANY program written for it successfully, as long as the software doesn't contain programming defects. Causing the underlying hardware to draw too much power is not a software defect.

Again, you're in the minority of people who actually care, and who furthermore would be willing to pay the sizeable overhead this would cost. Some AIBs do make more robust graphics cards with more power phases and beefier components, but they are generally much more expensive.