SweClockers: Geforce GTX 590 burns @ 772MHz & 1.025V


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Wow that member has 10 posts on that forum, and didn't provide any pictures.

This is what we are taking as "proof" now?


Something tells me there is more than a little guerrilla marketing going on right now....

You're paranoid.

http://forums.overclockers.co.uk/showpost.php?p=18781867&postcount=140

Since people were being bizarro on him, he went ahead and proved it.

29032011018.jpg


29032011014.jpg


The GTX 590 is a defective turd.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
WTH are you responding to? My post is pointing out the power strategies both AMD and Nvidia seem to be using with these dual-GPU, near-300-watt cards. Is that going to be the standard BS response to every angle of discussion from now on?
But but but they blow up!


How many times have we heard 'better out of the box support' regarding Nvidia?

And, reviewers tested the GTX590 with drivers that didn't throttle. We know how the 6990 performs at 830MHz, and we know how it performs at 880MHz. Do we know how the GTX590 performs with the newest drivers? I'll hazard a guess that it'll be worse in some benches than what we have read so far.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
The 6990 also throttled during testing here at Anand's.
Everything people are panicking about is also part of AMD's power strategy with the 6990.
http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/3



throttled.png

This is without overclocking, at 830MHz on the default BIOS

This is VERY relevant to the current discussion.

Personally I don't see self-induced throttling as a bad thing; it's happening to protect the hardware, for a good reason.

But the fact that it happens does not mean it is desirable from an end-user standpoint.

If I bought any piece of hardware and it systematically throttled under my usage scenarios, I'd be looking to the manufacturer to build a more robust product.

When people's AM2 mobos started dying from those 140W TDP Phenom CPUs (the mobos weren't designed to work with more than 125W TDP CPUs), the issue was definitely real and problematic for the end-user.

It would appear that both the HD6990 and GTX590 could stand to use an even more robust PCB design (the GTX590 more so) such that the headroom is elevated above its current levels.

I'd be really surprised if AMD and Nvidia aren't learning from this product cycle; future products are going to be all the more robust (which also means all the more expensive; there is a trade-off in all this).
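As a rough illustration of the kind of self-protection being described, here is a minimal sketch of a reactive power-capping governor: back the clock off when measured board power exceeds a limit, creep back up when there is headroom. The limit, clock range and the simulated power model are made-up numbers; this is not AMD's PowerTune or Nvidia's driver limiter, just the general idea in code.

```python
import random

POWER_LIMIT_W = 300.0                 # assumed board power cap (illustrative)
MIN_MHZ, MAX_MHZ, STEP = 500, 830, 25 # illustrative clock range and step

def simulated_board_power(clock_mhz):
    # Crude stand-in for a sensor read: power grows with clock, plus workload noise.
    return 0.42 * clock_mhz + random.uniform(-20, 20)

def throttle_step(clock_mhz):
    """One iteration of a reactive governor: drop the clock when over the cap,
    raise it again when comfortably under."""
    power = simulated_board_power(clock_mhz)
    if power > POWER_LIMIT_W and clock_mhz > MIN_MHZ:
        clock_mhz -= STEP
    elif power < 0.9 * POWER_LIMIT_W and clock_mhz < MAX_MHZ:
        clock_mhz += STEP
    return clock_mhz, power

clock = MAX_MHZ
for _ in range(20):
    clock, power = throttle_step(clock)
    print(f"{power:6.1f} W -> clock {clock} MHz")
```

Real implementations react in hardware or firmware on microsecond timescales; the point here is only that the clock (and with it power) is pulled down automatically to keep the board inside its envelope.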
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
This is VERY relevant to the current discussion.

Personally I don't see self-induced throttling as a bad thing; it's happening to protect the hardware, for a good reason.

But the fact that it happens does not mean it is desirable from an end-user standpoint.

If I bought any piece of hardware and it systematically throttled under my usage scenarios, I'd be looking to the manufacturer to build a more robust product.

When people's AM2 mobos started dying from those 140W TDP Phenom CPUs (the mobos weren't designed to work with more than 125W TDP CPUs), the issue was definitely real and problematic for the end-user.

It would appear that both the HD6990 and GTX590 could stand to use an even more robust PCB design (the GTX590 more so) such that the headroom is elevated above its current levels.

I'd be really surprised if AMD and Nvidia aren't learning from this product cycle; future products are going to be all the more robust (which also means all the more expensive; there is a trade-off in all this).


IDC, you might be familiar with this, maybe you can shed a little light. To me, with no experience in this industry, it appears that Nvidia allowed AMD to launch the 6990 first to see where the GTX590 needed to be (clocks, performance) to equal or surpass it. After the 6990 launched, Nvidia decided on the clocks they needed, but that number was on the high end, maybe even a bit beyond where they planned or wanted to be. This may or may not be correct, hopefully you can shed some light.

Now, that's just how it appears to me, but I have no idea how plausible that is. Do you have an idea on how long it takes for Nvidia or AMD to adjust the clock speed before releasing a product? Or do you think 607MHz has been the GPU clock that they planned on for quite some time now?

I ask because I don't know if Nvidia can add 25MHz (as an example) to the GPUs at the last minute as a reaction to the 6990's performance, or if that type of change isn't as simple as it sounds. It just appears to me that the GTX590's final specs seem a bit reactionary (like the P3 1.13GHz years ago). And given the amount of problems that the GTX590 seems to have, and the driver revisions, it appears to not have been tested as thoroughly as maybe it should have been.

So anyway, based on what you know, is it as easy as a minor clock speed bump, some minor testing and off they go? Or is it more involved than we outsiders might realize?
 
Last edited:

WelshBloke

Lifer
Jan 12, 2005
33,536
11,660
136
IDC, you might be familiar with this, maybe you can shed a little light. To me, with no experience in this industry, it appears that Nvidia allowed AMD to launch the 6990 first to see where the GTX590 needed to be (clocks, performance) to equal or surpass it. After the 6990 launched, Nvidia decided on the clocks they needed, but that number was on the high end, maybe even a bit beyond where they planned or wanted to be. This may or may not be correct, hopefully you can shed some light.

Now, that's just how it appears to me, but I have no idea how plausible that is. Do you have an idea on how long it takes for Nvidia or AMD to adjust the clock speed before releasing a product? Or do you think 607MHz has been the GPU clock that they planned on for quite some time now?

I ask because I don't know if Nvidia can add 25MHz (as an example) to the GPUs at the last minute as a reaction to the 6990's performance, or if that type of change isn't as simple as it sounds. It just appears to me that the GTX590's final specs seem a bit reactionary (like the P3 1.13GHz years ago). And given the amount of problems that the GTX590 seems to have, and the driver revisions, it appears to not have been tested as thoroughly as maybe it should have been.

So anyway, based on what you know, is it as easy as a minor clock speed bump, some minor testing and off they go? Or is it more involved than we outsiders might realize?

What are the odds that the marketing dept decided the final clocks, not the engineering dept?
 

pcm81

Senior member
Mar 11, 2011
598
16
81
IDC, you might be familiar with this, maybe you can shed a little light. To me, with no experience in this industry, it appears that Nvidia allowed AMD to launch the 6990 first to see where the GTX590 needed to be (clocks, performance) to equal or surpass it. After the 6990 launched, Nvidia decided on the clocks they needed, but that number was on the high end, maybe even a bit beyond where they planned or wanted to be. This may or may not be correct, hopefully you can shed some light.

Now, that's just how it appears to me, but I have no idea how plausible that is. Do you have an idea on how long it takes for Nvidia or AMD to adjust the clock speed before releasing a product? Or do you think 607MHz has been the GPU clock that they planned on for quite some time now?

I ask because I don't know if Nvidia can add 25MHz (as an example) to the GPUs at the last minute as a reaction to the 6990's performance, or if that type of change isn't as simple as it sounds. It just appears to me that the GTX590's final specs seem a bit reactionary (like the P3 1.13GHz years ago). And given the amount of problems that the GTX590 seems to have, and the driver revisions, it appears to not have been tested as thoroughly as maybe it should have been.

So anyway, based on what you know, is it as easy as a minor clock speed bump, some minor testing and off they go? Or is it more involved than we outsiders might realize?

For the engineer who designed the 590 it takes 5 minutes to adjust the default clock in the card's BIOS, not just the drivers; anyone can adjust it in the drivers. The problem is: what to adjust it to? The gotcha is that the clock needs to be high enough to beat the 6990, yet low enough to be stable at a voltage low enough not to burn out the card. So basically Nvidia had to find a stable overclock of their initial design, not just change the MHz number... As a result of this overclock, they pushed too far and caused cards to burn up. Since cards were already shipped and there is no way to get them back, they had to override the BIOS clocks via driver throttling. And that is where we are at now...

EDIT
My guess is that the 590 was designed to run at 525MHz.
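To make that trade-off concrete, here is a toy sketch of the constraint described above: for each candidate clock, take the lowest voltage assumed to be stable there, then check whether two GPUs at that operating point still fit the board's power budget. Every number and rule in it is invented for illustration (the 365W figure is simply the GTX590's advertised board power, used as the cap); this is not how either vendor actually qualifies parts.

```python
# Toy qualification sweep: pick the lowest "stable" voltage for each clock,
# then check the implied total board power against a budget.

BOARD_BUDGET_W = 365          # GTX590's advertised board power, used here as the budget
OTHER_LOSSES_W = 45           # assumed memory, fan and VRM-loss overhead
K = 3.0e-7                    # assumed effective switched capacitance per GPU (farads)

def min_stable_voltage(clock_mhz):
    # Invented rule of thumb: higher clocks need proportionally more Vcore.
    return 0.70 + 0.0004 * clock_mhz

def per_gpu_power(clock_mhz, vcore):
    # Dynamic CMOS power: P ~ K * V^2 * f
    return K * vcore**2 * (clock_mhz * 1e6)

for clock in range(575, 700, 25):
    v = min_stable_voltage(clock)
    total = 2 * per_gpu_power(clock, v) + OTHER_LOSSES_W
    verdict = "fits" if total <= BOARD_BUDGET_W else "over budget"
    print(f"{clock} MHz @ {v:.3f} V -> ~{total:3.0f} W total ({verdict})")
```

With these made-up numbers the budget runs out just above the stock 607MHz point, which is exactly the kind of cliff the post describes: a small clock bump needs a voltage bump, and the power cost of that grows quickly.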
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
How come you seem to think the VRMs deliver or operate the same regardless of load? If the GPU clocks are lower, they will draw less power... the VRMs do not continue to feed them with max current loads. :D If that were the case, idle clocks wouldn't affect power use, and subsequently wouldn't cool down the VRMs or reduce the current drawn (you can check VRM amps).

ICs run more efficiently at cooler temps; obviously these guys put some massive heatsinks on the VRMs, and the beefy coolers allow them to run well beyond recommended specs.

In the OC review you linked: http://lab501.ro/placi-video/nvidia-geforce-gtx-590-studiu-de-overclocking/12
The reviewer specifically said the 590 should have had 6 Volterra 40A VRMs per core rather than the 5x bad, low-efficiency TDA 35A parts. That kept him from pushing the vcore higher than 1.1V, as the card would die. Quote: "We have very fragile Dr.MOS chips that are not very efficient in energy conversion, and mini-inductors that would be overwhelmed by anything over 30A per MOSFET, a very, very poor combination that I never thought I'd ever see on a board of this caliber." Essentially he had to over-cool it to run it beyond its specs, and he says it's unlikely to survive.

Edit: In contrast, the 6990: http://lab501.ro/placi-video/asus-radeon-hd-6990-studiu-de-overclocking/10
4x 80A Volterra VRMs per GPU, or 8 x 80A = well beyond anything you will need, and you won't ever have to worry about it. The GTX580 OC'd to ~1GHz with his setup, pretty high OCs; the GPUs are capable.

Edit2: 675MHz under the reference setup. http://lab501.ro/placi-video/asus-geforce-gtx-590-vs-hd-6990-clash-of-the-titans/13


The VRM is a DC-to-DC converter; it converts the 12V from the PSU to the 0.938V the GTX590 needs to operate at 607MHz. If you only downclock the GPUs to 550MHz without reducing the voltage, at full load the voltage controller of the VRM will still try to supply 0.938V through the MOSFETs to the GPUs, and so the MOSFETs will continue to work the same as before. Just because you downclock the frequency doesn't mean that the voltage controller will lower the voltage.

The card's BIOS has voltage values for each state the card is in. For example, if the card is in 2D mode, the GPU frequency drops to 51MHz and the voltage is lowered too. But when it is in 3D mode, the clocks go up to 607MHz with 0.938V. You can lower the GPU frequencies (say to 550MHz), but if you don't reduce the voltage, the MOSFETs in 3D mode will operate under the same conditions as before.

We do know the GTX590 VRM implementation is not designed for O/C, and it would be sensible not to over-volt the NV reference PCB. That's why I keep telling people to wait for custom PCBs from the AIBs.

It would be ideal to have 12 VRM phases for each GF110 chip (24 in total) on the GTX590 (like the MSI GTX580 Lightning, image below), but the PCB would have to be bigger, and the cooling solution would have to be redesigned to be bigger, heavier and more expensive in order to keep up with the higher TDP the new card and its VRMs would generate. Then the card would cost $1000 and only extreme users and overclockers would buy it (like the ASUS ARES).

The NV reference GTX590 is not for extreme users and overclockers, plain and simple.
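A rough back-of-the-envelope check of what those phase ratings imply. The 5x35A and 4x80A figures come from the lab501 articles quoted above; the per-GPU power draws and the 6990 voltage are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope VRM headroom check. Phase ratings are from the lab501
# figures quoted in the thread; per-GPU power and some voltages are ASSUMED.

def core_current(power_w, vcore_v):
    """Current the VRM must deliver to one GPU at a given power and Vcore."""
    return power_w / vcore_v

configs = [
    # (label, assumed per-GPU power in watts, Vcore in volts, phases, amps per phase)
    ("GTX590 stock, 607MHz @0.938V", 160, 0.938, 5, 35),   # ~160W per GF110 is an assumption
    ("GTX590 OC, 772MHz @1.025V",    230, 1.025, 5, 35),   # ~230W per GPU is an assumption
    ("HD6990 stock, 830MHz",         180, 1.120, 4, 80),   # ~180W / 1.12V per Cayman assumed
]

for label, power, vcore, phases, amps in configs:
    need = core_current(power, vcore)
    capacity = phases * amps
    print(f"{label:30s} needs ~{need:5.0f} A, VRM rated {capacity:4d} A "
          f"({need / capacity:5.1%} of rating)")
```

Even with generous assumptions, the 590's five 35A phases sit near their combined rating at stock and go past it once you add voltage and clock, while the 6990's 4x80A per GPU leaves a wide margin, which is essentially the reviewer's complaint.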

4614d8d2d3c3450d.jpg
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
For the engineer who designed the 590 it takes 5 minutes to adjust the default clock in the card's BIOS, not just the drivers; anyone can adjust it in the drivers. The problem is: what to adjust it to? The gotcha is that the clock needs to be high enough to beat the 6990, yet low enough to be stable at a voltage low enough not to burn out the card. So basically Nvidia had to find a stable overclock of their initial design, not just change the MHz number... As a result of this overclock, they pushed too far and caused cards to burn up. Since cards were already shipped and there is no way to get them back, they had to override the BIOS clocks via driver throttling. And that is where we are at now...

EDIT
My guess is that the 590 was designed to run at 525MHz.


I understand the ease of the actual clock adjustment. But there are the testing and quality checks at that new clock speed; that is what I am getting at. I don't know how long and how much work the validation process takes.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
I understand the ease of the actual clock adjustment. But there are the testing and quality checks at that new clock speed; that is what I am getting at. I don't know how long and how much work the validation process takes.

Clearly it takes more than the 16 days NVidia had to do it... March 8th to 24th...
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
IDC, you might be familiar with this, maybe you can shed a little light. To me, with no experience in this industry, it appears that Nvidia allowed AMD to launch the 6990 first to see where the GTX590 needed to be (clocks, performance) to equal or surpass it. After the 6990 launched, Nvidia decided on the clocks they needed, but that number was on the high end, maybe even a bit beyond where they planned or wanted to be. This may or may not be correct, hopefully you can shed some light.

Now, that's just how it appears to me, but I have no idea how plausible that is. Do you have an idea on how long it takes for Nvidia or AMD to adjust the clock speed before releasing a product? Or do you think 607MHz has been the GPU clock that they planned on for quite some time now?

I ask because I don't know if Nvidia can add 25MHz (as an example) to the GPUs at the last minute as a reaction to the 6990's performance, or if that type of change isn't as simple as it sounds. It just appears to me that the GTX590's final specs seem a bit reactionary (like the P3 1.13GHz years ago). And given the amount of problems that the GTX590 seems to have, and the driver revisions, it appears to not have been tested as thoroughly as maybe it should have been.

So anyway, based on what you know, is it as easy as a minor clock speed bump, some minor testing and off they go? Or is it more involved than we outsiders might realize?

Unfortunately this is one of those things where it falls into a rather wide spectrum of "possible" scenarios for which we have no practical way to rank-sort and downselect to a smaller subset of "plausible" scenarios owing to the lack of information.

What you propose is most certainly possible. There is nothing about the engineering constraints and boundary conditions that would outright preclude such a scenario.

I know for a fact that in this industry, competitive products and info on them do feed back into the binning decision cycle, from a cost-benefit perspective that includes everything under the sun, from production costs to field-fail warranty liabilities and so on.

But there is also nothing to go on here to suggest that Nvidia didn't decide 24 months ago that the launch clocks of the 590 would be exactly the current stock clocks.

Just going from experience in how I've seen these things play out while being an engineer on the other side of the fence, the timeline of events involved here, the specific situation of power-management issues on both sides of the fence, and adding a heavy amount of "personal opinion and WAG", I am personally inclined to believe that Nvidia targeted the 590 to fit within the pre-existing 300W threshold.

And when the HD6990 was launched (whether Nvidia was dragging their feet for that to happen before the 590 was released is irrelevant) at its respective price point, performance point, and power point, Nvidia scrambled to assess whether an engineering (data) driven business decision could be made to justify pushing up the clockspeed and power consumption of the 590 so that it was in the same performance/power regime as the HD6990, without resorting to a complete redesign of the reference PCB and without using even more expensive components that would push the BOM even higher still.

Just my opinion, but bringing everything I have to bear on the subject (experience, education, intuition, etc) this "story" just fits all the data points with the least amount of trade-offs, wild suppositions and intractable boundary conditions.

In short - IMHO Occam's Razor strongly supports the notion you put forth in your post.
 

WelshBloke

Lifer
Jan 12, 2005
33,536
11,660
136
That was sarcastic, right?

Not really. I'm guessing the engineers designed it to a certain performance level, then the 6990 was released, the marketing dept had a fit and wanted more power, and it had to be bodged into what was released.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
Unfortunately this is one of those things where it falls into a rather wide spectrum of "possible" scenarios for which we have no practical way to rank-sort and downselect to a smaller subset of "plausible" scenarios owing to the lack of information.

What you propose is most certainly possible. There is nothing about the engineering constraints and boundary conditions that would outright preclude such a scenario.

I know for a fact that in this industry, competitive products and info on them do feed back into the binning decision cycle, from a cost-benefit perspective that includes everything under the sun, from production costs to field-fail warranty liabilities and so on.

But there is also nothing to go on here to suggest that Nvidia didn't decide 24 months ago that the launch clocks of the 590 would be exactly the current stock clocks.

Just going from experience in how I've seen these things play out while being an engineer on the other side of the fence, the timeline of events involved here, the specific situation of power-management issues on both sides of the fence, and adding a heavy amount of "personal opinion and WAG", I am personally inclined to believe that Nvidia targeted the 590 to fit within the pre-existing 300W threshold.

And when the HD6990 was launched (whether Nvidia was dragging their feet for that to happen before the 590 was released is irrelevant) at its respective price point, performance point, and power point, Nvidia scrambled to assess whether an engineering (data) driven business decision could be made to justify pushing up the clockspeed and power consumption of the 590 so that it was in the same performance/power regime as the HD6990, without resorting to a complete redesign of the reference PCB and without using even more expensive components that would push the BOM even higher still.

Just my opinion, but bringing everything I have to bear on the subject (experience, education, intuition, etc) this "story" just fits all the data points with the least amount of trade-offs, wild suppositions and intractable boundary conditions.

In short - IMHO Occam's Razor strongly supports the notion you put forth in your post.


+1
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I understand the ease of the actual clock adjustment. But there are the testing and quality checks at that new clock speed; that is what I am getting at. I don't know how long and how much work the validation process takes.

It's a sliding scale between investment (time, resources, manpower, etc) and risk or liability (incomplete characterization, sample size, statistics, etc).

As such there is absolutely no one, not even the engineers at Nvidia, who can answer your question as you haven't nailed down the constraints.

If you said "what would it take to pull this off with 21 days lead-time?" or "what is the risk involved if Nvidia rushed this to production without full characterization?" then an answer could be fleshed out.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
It's a sliding scale between investment (time, resources, manpower, etc) and risk or liability (incomplete characterization, sample size, statistics, etc).

As such there is absolutely no one, not even the engineers at Nvidia, who can answer your question as you haven't nailed down the constraints.

If you said "what would it take to pull this off with 21 days lead-time?" or "what is the risk involved if Nvidia rushed this to production without full characterization?" then an answer could be fleshed out.


Thank you IDC. Basically we have no way of knowing if it was 12 months or 12 days before launch that the clocks were nailed down... but it's hard to believe there would have been these issues if it was on the further end of that spectrum. And the only ones who know for sure aren't likely to tell us.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
The VRM is a DC-to-DC converter; it converts the 12V from the PSU to the 0.938V the GTX590 needs to operate at 607MHz. If you only downclock the GPUs to 550MHz without reducing the voltage, at full load the voltage controller of the VRM will still try to supply 0.938V through the MOSFETs to the GPUs, and so the MOSFETs will continue to work the same as before. Just because you downclock the frequency doesn't mean that the voltage controller will lower the voltage.

You are thinking that voltage is what is killing the VRMs, but it's not. It's the current that's killing them. By reducing the clock speed of the GPU you reduce the amount of current it draws.
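To put a simple model behind this: for switching logic, dynamic power scales roughly as P ≈ C·V²·f, so the current the VRM must supply is roughly I = P/V ≈ C·V·f. A toy calculation, calibrated to an assumed ~160W per GPU at the stock operating point (that power figure is a guess, not a measurement):

```python
# Toy model: dynamic CMOS power P ~ k * V^2 * f, so VRM output current I = P / V ~ k * V * f.
# 'k' is calibrated so that 607MHz @ 0.938V corresponds to an ASSUMED ~160W per GPU.

base_f, base_v, base_p = 607e6, 0.938, 160.0       # assumed stock operating point
k = base_p / (base_v**2 * base_f)                  # effective switched capacitance (rough)

def current_amps(freq_hz, vcore):
    power = k * vcore**2 * freq_hz
    return power / vcore                           # = k * vcore * freq_hz

print(current_amps(607e6, 0.938))   # ~170 A at stock (by construction)
print(current_amps(550e6, 0.938))   # ~155 A: same voltage, lower clock -> less current
print(current_amps(772e6, 1.025))   # ~237 A: the overvolted OC case from the thread title
```

Under this model, downclocking alone does cut the current somewhat even at unchanged voltage, while raising both clock and voltage pushes the current up sharply.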
 

TerabyteX

Banned
Mar 14, 2011
92
1
0
This is VERY relevant to the current discussion.

Personally I don't see self-induced throttling as a bad thing; it's happening to protect the hardware, for a good reason.

But the fact that it happens does not mean it is desirable from an end-user standpoint.

If I bought any piece of hardware and it systematically throttled under my usage scenarios, I'd be looking to the manufacturer to build a more robust product.

When people's AM2 mobos started dying from those 140W TDP Phenom CPUs (the mobos weren't designed to work with more than 125W TDP CPUs), the issue was definitely real and problematic for the end-user.

It would appear that both the HD6990 and GTX590 could stand to use an even more robust PCB design (the GTX590 more so) such that the headroom is elevated above its current levels.

I'd be really surprised if AMD and Nvidia aren't learning from this product cycle; future products are going to be all the more robust (which also means all the more expensive; there is a trade-off in all this).

Yeah, plus the fact that several sites have done benchmarks with PowerControl at default and at +20%, which prevents downclocking, and the performance difference ranges from slight to none, whereas nVidia's approach will indeed affect its performance notably.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
You are thinking that voltage is what is killing the VRMs, but it's not. It's the current that's killing them. By reducing the clock speed of the GPU you reduce the amount of current it draws.

True, but in this case it's the same thing, since the resistance of the circuit is constant. Raising the voltage results in an increase in the current by the same percentage.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
True, but in this case it's the same thing, since the resistance of the circuit is constant. Raising the voltage results in an increase in the current by the same percentage.

Processors are not resistive circuits.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
True, but in this case it's the same thing, since the resistance of the circuit is constant. Raising the voltage results in an increase in the current by the same percentage.

Actually, with the same load, raising the voltage would mean lower current is needed (P = VI)
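The last three posts are implicitly using three different load models, which is why they appear to disagree. A quick sketch of how the current drawn responds to Vcore under each model; all numbers are illustrative, none are measured GTX590 values.

```python
# Three simplified load models and how the current responds to Vcore.
# Numbers are illustrative only.

def resistive(v, r=0.005):             # fixed resistance: I = V / R  (current rises with V)
    return v / r

def constant_power(v, p=160.0):        # fixed power draw: I = P / V  (current falls with V)
    return p / v

def cmos_dynamic(v, f=607e6, k=3e-7):  # switching logic: I ~ k * V * f (rises with V and f)
    return k * v * f

for v in (0.938, 1.025):
    print(f"V={v:.3f}  resistive={resistive(v):6.1f} A  "
          f"constant-power={constant_power(v):6.1f} A  "
          f"cmos-dynamic={cmos_dynamic(v):6.1f} A")
```

A GPU core behaves much more like the dynamic-CMOS case than like a fixed resistor or a fixed-power load, so raising Vcore increases both the current on the core rail and, even more steeply, the power.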
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Well, I'd just like to point out that for some reason TPU never increased the voltage of the HD6990 in their review and just overclocked the card at the default stock voltage. Come to think of it, does it even allow you to tune the voltage via software?

I think both solutions are pretty much at the limits of today's technology, but my personal opinion is that I'd rather see them target the high end with a single-GPU solution instead of a dual-GPU solution.

And for those pointing to the GTX590 "supposedly" blowing up at stock without knowing what was actually done to the card, the same can be said for the HD6990. Here's the interesting link. I don't think any reviewers have had their GTX590 blow up at stock either.

Here's techreport refuting the supposed claims at stock. I'd like to point out that what BFG10K said regarding running products over their specs is definitely true, whether it's a $20 product or a $10k product. There are always associated risks that follow, and many don't understand this.

It's quite amusing though. Hey, it's an enthusiast card! It must overvolt and overclock like mad! They forget that YMMV and, well, sometimes things go boom! But I don't blame that line of thinking (we've all been brainwashed the past 8 years or so) since it's been one of those selling points ever since BFG released their OC line of models with a measly 25MHz OC. Back then, people were drooling over that.