Interested in torturing your 580 yet?


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Of course, Linpack and Prime95 mainly test CPU stability, while Crysis and 3DMark06 test more than just the CPU. At the very least, if your machine passes Linpack but fails Crysis or 3DMark, you know that the CPU is not the culprit. The OCCT 3.0 power supply test is probably the closest thing we have to a single torture test that can cover the whole system.

It's a little more complicated for the CPU case because neither Linpack nor Prime95 tests the entire instruction set (over 800 instructions) for stability under load.

Prime95 stable merely means the system is stable while running the instruction mix represented by Prime95. Likewise for Linpack.

This is why F@H people find their otherwise Prime95-stable rigs will crash when running F@H...the instructions, and the mix of them, are different.

I do like OCCT testing because it runs math checks to confirm the chip is not simply stable (no crashing) but is also still doing the math correctly, the same feature that makes Prime95 and Linpack nice.

It comes down to the right tool for the job: OCCT is just a tool, but it's not the only one we should be using when OC'ing our GPUs.
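
To put the "checks the math" point in concrete terms, here is a minimal, hypothetical compute-and-verify loop, just a sketch of the idea rather than how OCCT, Prime95, or Linpack are actually written: run the same deterministic FP workload repeatedly and flag any pass whose result differs from the reference, because a wrong answer with no crash still means the chip isn't stable.

#include <stdio.h>

int main(void)
{
    /* Reference pass: compute a deterministic result once. */
    double reference = 0.0;
    for (int n = 1; n <= 100000; n++)
        reference += 1.0 / ((double)n * n);

    /* Stress passes: identical workload, so any mismatch means the
     * hardware returned a wrong result without crashing. */
    for (unsigned iter = 0; iter < 100000; iter++) {
        double sum = 0.0;
        for (int n = 1; n <= 100000; n++)
            sum += 1.0 / ((double)n * n);
        if (sum != reference) {
            printf("math error on pass %u\n", iter);
            return 1;
        }
    }
    printf("every pass returned the correct result\n");
    return 0;
}

Passing a loop like this only tells you the chip is stable for this particular instruction mix, which is exactly why no single tool is enough.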
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
This is not the same thing as Prime95 is to a CPU.

This is similar to revving your engine to the red line on the tachometer for endless amounts of time and expecting NO negative effects or impact.
Actually, it's not like revving an engine at all; that's a flawed comparison. An engine is designed to operate in a certain range over its lifetime, and that does not include always being at redline. A processor or GPU has no moving parts and is designed to operate at maximum load indefinitely (and will, as long as the cooling system keeps operating). The life of the unit will be shortened in theory, but you might go from a 50-year lifetime to a 20-year one.

The only reason Nvidia put the driver-level checks in is to make the 580 look good from a power perspective, no other reason. Nvidia doesn't like the super high power draw numbers showing up in all the reviews.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Actually, it's not like revving an engine at all; that's a flawed comparison. An engine is designed to operate in a certain range over its lifetime, and that does not include always being at redline. A processor or GPU has no moving parts and is designed to operate at maximum load indefinitely (and will, as long as the cooling system keeps operating). The life of the unit will be shortened in theory, but you might go from a 50-year lifetime to a 20-year one.

The only reason Nvidia put the driver-level checks in is to make the 580 look good from a power perspective, no other reason. Nvidia doesn't like the super high power draw numbers showing up in all the reviews.

They do it for the same reasons AMD does it.
Your reasoning is flawed, lol
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
The only reason Nvidia put the driver-level checks in is to make the 580 look good from a power perspective, no other reason. Nvidia doesn't like the super high power draw numbers showing up in all the reviews.

So why does AMD do it?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
AMD does NOT use driver detection to downclock the GPU like Nvidia is doing. For the 4000 series cards they did use driver detection to artificially limit GPU utilization. For Evergreen, AMD put in hardware protection for an overcurrent condition, the driver does not interact with it.

What Nvidia is doing is only there to make the graphs look good; it is not in place to prevent damage, that is done strictly in hardware. The GTX480 has no such driver interaction and it does not get damaged when running Furmark or other torture tests.

Pure marketing stunt by Nvidia.

Your reasoning is flawed, lol
lol@you.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
It really just sounds like you are trying to find a reason to bash nV, sorry.

Until it actually makes a difference in a game, it doesn't matter.

I don't see a difference if AMD is doing it at the hardware level vs. the driver level. In fact it all sounds rather silly.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
AMD does NOT use driver detection to downclock the GPU like Nvidia is doing. For the 4000 series cards they did use driver detection to artificially limit GPU utilization. For Evergreen, AMD put in hardware protection for an overcurrent condition, the driver does not interact with it.

What Nvidia is doing is only there to make the graphs look good; it is not in place to prevent damage, that is done strictly in hardware. The GTX480 has no such driver interaction and it does not get damaged when running Furmark or other torture tests.

Pure marketing stunt by Nvidia.

lol@you.
So you're admitting that AMD still does it, you just happen to know it's done for different reasons.
Regardless of how it's done, it's done.
In fact, in the 5970 OC article it was shown the card could not overclock to 5870 speeds without throttling, which sort of proves you wrong. :)
http://www.anandtech.com/show/3590
The Nvidia solution might be more elegant if it will not try to step down unless the driver detects a power virus.
As we previously noted in our 5970 review, when overclocked the card was throttling down in two cases. One was when running OCCT/FurMark, members of AMD’s “power virus” list by virtue of the fact that they put a card under a greater load than AMD believes to be realistically possible.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
The Nvidia solution might be more elegant if it will not try to step down unless the driver detects a power virus.
o_O

Do you understand at all how these protections work? Did you even read the link you posted? Understand this: you should not count on software to provide hardware protection. If you do, then the software can crash, malfunction, or have bugs that cause hardware failure. Actually, Nvidia's driver bug proved this; some cards were physically destroyed because the fan refused to ramp up, cooking the GPU.

So Nvidia, like AMD, uses hardware current detection that prevents the GPU from exceeding its rated power. This happens strictly at the hardware level. But Nvidia took it a step further and looks for a checksum; if it sees that a program matches it, it clocks down the GPU. It's not for protection (that already exists in hardware). And remember, the GTX480 does not do driver detection. The "power virus" will not be detected if the author makes a newer version or changes the hash value. Which, again, is why you don't count on software for low-level hardware protection.

But I'm sure you'll ignore all the facts anyway.
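
Purely as an illustration of why checksum detection is so easy to defeat, here is a tiny sketch with made-up names (this is not Nvidia's driver code): the throttle only fires if the running binary hashes to a value already on the list, so rebuilding the stress test or flipping one byte slips right past it, which is why the actual protection has to live in the hardware current limiter.

#include <stdint.h>
#include <stdio.h>

/* Toy checksum over an executable image (FNV-1a style); a real driver
 * would use something stronger, but the weakness is the same. */
static uint32_t checksum(const uint8_t *image, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++)
        h = (h ^ image[i]) * 16777619u;
    return h;
}

int main(void)
{
    /* Stand-ins for two builds of the same stress test, one byte apart. */
    const uint8_t furmark_v1[] = "furmark build 1 image bytes";
    const uint8_t furmark_v2[] = "furmark build 2 image bytes";

    /* Pretend the driver shipped with v1's hash on its "power virus" list. */
    uint32_t blacklisted = checksum(furmark_v1, sizeof furmark_v1);

    printf("v1 throttled: %d\n", checksum(furmark_v1, sizeof furmark_v1) == blacklisted);
    printf("v2 throttled: %d\n", checksum(furmark_v2, sizeof furmark_v2) == blacklisted);
    return 0;
}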
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
o_O

Do you understand at all how these protections work? Did you even read the link you posted? Understand this: you should not count on software to provide hardware protection. If you do, then the software can crash, malfunction, or have bugs that cause hardware failure. Actually, Nvidia's driver bug proved this; some cards were physically destroyed because the fan refused to ramp up, cooking the GPU.

So Nvidia, like AMD, uses hardware current detection that prevents the GPU from exceeding its rated power. This happens strictly at the hardware level. But Nvidia took it a step further and looks for a checksum; if it sees that a program matches it, it clocks down the GPU. It's not for protection (that already exists in hardware). And remember, the GTX480 does not do driver detection. The "power virus" will not be detected if the author makes a newer version or changes the hash value. Which, again, is why you don't count on software for low-level hardware protection.

But I'm sure you'll ignore all the facts anyway.
Like you're ignoring facts too, so you can conclude with an anti-Nvidia anecdote. The fact is, Nvidia states this is a work in progress, so they will deal with new software when it comes.
Of course, you probably think they are lying?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
In fact, in the 5970 OC article it was shown the card could not overclock to 5870 speeds without throttling, which sort of proves you wrong. :)
http://www.anandtech.com/show/3590
The Nvidia solution might be more elegant if it will not try to step down unless the driver detects a power virus.

Read the article you linked. The card throttled due to the VRMs overheating. It's a cooling issue, not a determined attempt to change the way an independently developed application interacts with the card/system.

All they had to do was monitor current. Then it's protection. Detecting a particular piece of software in the driver? That's manipulation.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Would a car manufacturer pay your health bills if you got into a car accident because your tire blew out when you knowingly and willingly were driving the car above the tire's speed rating? Not in Canada :). Not sure about America, but most likely the driver would lose the case.

[rant] Remember the Firestone tires on Ford trucks that failed? The problem was people were running them underinflated. That's what happens when you sell trucks/SUVs as family vehicles. Even the dealers underinflated the tires so the ride was better. Didn't matter. Firestone copped the blame.

In this day and age, when they have to put warnings on rat poison that it's not for human consumption or to keep it out of reach of children, or tell you not to put the plastic bag your suit comes back from the cleaners in into your baby's crib, you can be held liable for all kinds of stupidity.

This is what happens when you interfere with natural selection. You breed idiots. [/rant]

I now return you to your regularly scheduled programming. ;)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
How is that? Someone achieved a 1GHz core already.

AMD and nV are not limiting overclocking by doing this.

AMD and nVidia are not doing the same thing. nVidia uses current limiting on the 580. AMD uses thermal sensors.

I don't have a problem with what nVidia is doing until someone comes up with a real-world situation where current limiting kicks in and limits the performance of the card in a situation where it wouldn't be damaged (overheat) without it. Current limiting circuitry has a tendency to do that. If it's possible to make it foul, someone will figure out how to do it. Likely someone from AMD. ;) Until then, "no harm no foul".

We all know the real reason they are doing this. It's so it's not shown in reviews drawing 350W and running at 100°C in Furmark. They took enough crap last round with the 480 over that.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
AMD and nVidia are not doing the same thing. nVidia uses current limiting on the 580. AMD uses thermal sensors.

I don't have a problem with what nVidia is doing until someone comes up with a real-world situation where current limiting kicks in and limits the performance of the card in a situation where it wouldn't be damaged (overheat) without it. Current limiting circuitry has a tendency to do that. If it's possible to make it foul, someone will figure out how to do it. Likely someone from AMD. ;) Until then, "no harm no foul".

We all know the real reason they are doing this. It's so it's not shown in reviews drawing 350W and running at 100°C in Furmark. They took enough crap last round with the 480 over that.

I can assure you there are a dozen AMD employees and fanbois attempting to do exactly what you are talking about. If it can be done in a real situation, it will be exposed.

Until then, this is probably the non-issue of the year.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
AMD and nVidia are not doing the same thing. nVidia uses current limiting on the 580. AMD uses thermal sensors.
This is false. Both AMD and Nvidia use thermal and current sensing. AMD has used this method starting with Evergreen back in Oct. 2009.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
I'm not familiar with AMD using current limiting. Not that I know everything, of course. :)
They did it in part because of how the 4000 series handled overcurrent situations. Which is to say, not very well. The cards would either hard lock or instantly reset if the power threshold was exceeded.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
They did it in part because of how the 4000 series handled overcurrent situations. Which is to say, not very well. The cards would either hard lock or instantly reset if the power threshold was exceeded.

That's still not the same thing. I'd rather have a protection circuit that shuts down the card than have it do this.

[Attached image: limit2_small.jpg]



edit: I've read up a bit more and you are correct, AMD does have current limiting. It's not the same as nVidia's, though; it's not aimed at specific programs like Furmark or OCCT.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
^ Why do something that is going to make your card do either? I'll stick to gaming, which is why I buy my cards.
 

w1zzard

Junior Member
Mar 14, 2002
5
0
0
edit: I've read up a bit more and you are correct, AMD does have current limiting. It's not the same as nVidia's, though; it's not aimed at specific programs like Furmark or OCCT.

Source? AFAIK AMD does not monitor current, but monitors VRM temperature and clocks down when the temp limit is exceeded.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Source? AFAIK AMD does not monitor current, but monitors VRM temperature and clocks down when the temp limit is exceeded.

Well w1zzard, I'm sure you know better than I do. (no sarcasm intended)

This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem, by dedicating a very small portion of Cypress’s die to a monitoring chip. In this case the job of the monitor is to continually monitor the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to go back to the requested performance level. In the case of a stressful program, this can continue to go back and forth as the VRMs permit.

By implementing this at the hardware level, Cypress cards are fully protected against all possible overcurrent situations, so that it’s not possible for any program (OCCT, FurMark, or otherwise) to damage the hardware by generating too high of a load. This also means that the protections at the driver level are not needed, and we’ve confirmed with AMD that the 5870 is allowed to run to the point where it maxes out or where overcurrent protection kicks in.

Source: http://www.anandtech.com/show/2841/11

Am I misinterpreting the use of the term overcurrent? Because up until I read this I always thought AMD only had thermal protection myself.
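
Either way, just to make the behavior that quote describes concrete, here is an illustrative control-loop sketch with invented level numbers and a simulated VRM signal (not AMD firmware, just the idea): drop one PowerPlay level whenever the VRMs report a critical condition, and step back toward the requested level once they are safe again.

#include <stdbool.h>
#include <stdio.h>

#define MAX_LEVEL 3   /* illustrative PowerPlay levels: 0 (idle) .. 3 (full 3D clocks) */

/* Simulated VRM status for the demo: "critical" for a stretch of ticks. */
static bool vrm_critical(int tick) { return tick >= 3 && tick <= 6; }

int main(void)
{
    int requested = MAX_LEVEL;   /* what the driver asked for */
    int current = MAX_LEVEL;     /* what the monitor actually allows */

    for (int tick = 0; tick < 10; tick++) {
        if (vrm_critical(tick)) {
            if (current > 0)
                current--;       /* immediately throttle back one PowerPlay level */
        } else if (current < requested) {
            current++;           /* VRMs are safe again, return toward the request */
        }
        printf("tick %d: PowerPlay level %d\n", tick, current);
    }
    return 0;
}

Under a sustained load like FurMark the loop just bounces between levels, which matches the back-and-forth the article describes.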
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
I read a really good explanation, but I don't remember where it was. Anyway, here is what Dave Baumann had to say:
The Legion FurMark results tell me that the 5970 is doing exactly what it was designed to do. There are protection measures in place that kick in when thermal or power levels exceed maximum permitted levels, so the card was taking the correct actions to protect both itself and the motherboard.
source
He talks about the 5970 but the concept is the same.

Remember, the GPU downclocks itself in response to feedback from the power regulators, so the protection loop is dependent on how the power circuitry is designed. In other words, if the power regulation does not have the ability to sense an overcurrent condition, the GPU will not know about it. So it's down to the components and implementation of the power section of the card. At least that's how I understand it.
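
To restate that dependency as a sketch (invented names and structure, just my reading of it, not any vendor's actual interface): the GPU-side loop can only act on whatever flag the regulator exposes, so a power stage with no current telemetry simply never raises the signal.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical status reported by a card's voltage regulator. */
struct vrm_status {
    bool reports_current;   /* does this power stage have current telemetry at all? */
    bool overcurrent;       /* raised by the regulator when it senses an overload */
};

/* The GPU-side protection loop can only act on what the regulator tells it. */
static bool should_downclock(const struct vrm_status *s)
{
    if (!s->reports_current)
        return false;       /* no sensing in the power section means no protection signal */
    return s->overcurrent;
}

int main(void)
{
    struct vrm_status no_telemetry = { .reports_current = false, .overcurrent = false };
    struct vrm_status with_telemetry = { .reports_current = true, .overcurrent = true };

    printf("board without current sensing downclocks: %d\n", should_downclock(&no_telemetry));
    printf("board with current sensing downclocks: %d\n", should_downclock(&with_telemetry));
    return 0;
}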