Official GTX 590 Review Thread (23 reviews at this time)

May 13, 2009
12,333
612
126
Wow, what a ridiculous statement. Most people are grateful for what he does, but if you want to stay uninformed, don't read reviews. Simple as that.

How is it ridiculous? I've owned two different GTX 580s and I wouldn't dare put either one at 1.2V, period. I had a reference version at 1.025V and 830MHz core, and the temps were near 90C while gaming, not Furmark. This W1zzard is obviously no genius. Only a fool, or someone who doesn't mind blowing up loaner cards, would subject a dual-GPU card to those extremes.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
This thread is pretty one-sided... when's the next big launch? I'm pretty sure Nvidia won't take this sitting down, and AMD has been hitting their stride. The back-and-forth ensures great value for us consumers, which is perfect! Can't wait for 28nm.

The thread is one-sided because Nvidia has taken out more computers in the past 3 days than the Chinese army. When the GTX 580/570 launched and kicked the crap out of AMD, Nvidia (correctly) got enormous kudos for it. The forum was awash in green-team praise because they got the jump on the competition and did something that most of us (even strong green teamers, in some cases) didn't expect. The GTX 560 was generally well received, and the GTX 460 got TONS of praise. AMD's recent launches, meanwhile, have been okay (Barts, 6990) and poor (69x0). The last time Nvidia got this sort of criticism was with Fermi I, but that was a different scenario. This is one ultra-high-end, ultra-low-volume part that appears to have been poorly designed compared to the card released by its direct competitor. The only issue I see with this launch that could impact other Nvidia cards is the driver snafu, which should have been fixed before the cards came out.

One interesting note here: again, the card that launched first is better. It happened with the 58x0 vs Fermi I, it happened with the GTX 5x0 vs the 69x0, and it happened again with this sandwich contest. I think we should call this the AnandTech Effect: whichever high-end video card of a given generation launches first will be better than the competing models from other companies.
 

Jionix

Senior member
Jan 12, 2011
238
0
0
Clearly you didn't check out those thermal images of the 6990 vs the GTX 590: ~20C lower temps on the actual card for the 6990... As I've stated many times these past couple of days, I think both of these cards are poor, but it is becoming clear now that Nvidia took some serious shortcuts with the GTX 590 that should preclude high-end buyers from even considering it at all unless they go straight to water cooling.


If you ask me, there is something fishy about the reported temperatures of the 590; more specifically, I think Nvidia might have selectively placed the temp diode in a spot that reads cooler than the rest of the card.

Obviously, the thermal images show the truth of it.

But just consider this simple notion: at stock, the 590 is drawing 25-40 more watts of power than the 6990. Where is this heat going? Plus, the fan is slower. Nvidia's cooling design is no more intricate than AMD's; in fact, they seem to use the same approach.

So, is Nvidia fudging its thermal sensor, similar to how it has fudged its TDP?
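For a rough sense of scale, here's the back-of-the-envelope math I'm doing (the thermal resistance figure is purely a guess on my part, not a measured spec):

```python
# Back-of-the-envelope: if the 590 dumps 25-40 W more than the 6990 through a
# near-identical cooler with a slower fan, that heat has to show up somewhere
# as higher temperatures. The 0.15 C/W figure below is an assumed effective
# cooler-to-ambient thermal resistance, not anything from a spec sheet.

def extra_temp_rise(extra_watts, theta_c_per_w=0.15):
    """Extra steady-state temperature rise (C) for extra dissipated power."""
    return extra_watts * theta_c_per_w

for watts in (25, 40):
    print(f"+{watts} W -> roughly +{extra_temp_rise(watts):.1f} C at the same fan speed")
```

Whatever the exact numbers, more watts through the same style of cooler at a lower fan speed should read warmer, not cooler.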
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Yeah, and after reading ViRGE's statement I'd like to see some other sites do thermal imaging.

Apoppin, are you reading this?
 

WelshBloke

Lifer
Jan 12, 2005
33,551
11,698
136
That picture doesn't make any sense. The PCB acts as a heatsink, so something hotter than 110C would have to be generating that heat. Considering that the GPUs aren't getting that hot (thermal cut-off is 97C), I can't fathom what could be running at 110C and generating enough heat to bring the whole card up to roughly that temperature.

The hot spot in that third photo is in the centre of the card. Where are the VRMs and power circuitry on a GTX 590?

 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
The hot spot in that third photo is in the centre of the card. Where are the VRMs and power circuitry on a GTX 590?

VRMs (or rather the MOSFETs) get hot, but they don't necessarily produce a lot of heat - they're pretty efficient devices after all. You are right though, looking at AT's big picture, the voltage regulation equipment is at the center.
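To put rough numbers on that (the efficiency and output wattage here are illustrative assumptions, not GTX 590 specs):

```python
# Even an efficient VRM stage burns real watts at these power levels; the
# catch is that the loss is concentrated in a few tiny MOSFET packages.
# Both the efficiency and the delivered wattage are assumed for illustration.

def vrm_loss(output_watts, efficiency):
    """Watts dissipated inside the VRM for a given conversion efficiency."""
    return output_watts * (1.0 / efficiency - 1.0)

for eff in (0.90, 0.95):
    loss = vrm_loss(350.0, eff)  # assume ~350 W delivered to the two GPUs
    print(f"{eff:.0%} efficient -> ~{loss:.0f} W lost as heat in the VRMs")
```

A couple dozen watts isn't much next to the GPUs, but spread over a handful of small packages it's enough to make that area the hottest part of the PCB.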

After reading the Google translation I think I see the problem.

Note that these pictures use a scale suited to comparing a wide range of graphics cards, but it flattens results beyond 80°C. Here's the same shot with a different scale to better show the differences between the high temperatures:
If you look at the pictures, the range they're using is up to 80C and 90C respectively. The camera isn't reporting meaningful values beyond those limits in their respective pictures. So all that means is that on the first picture the entire card reads above 80C, and that in the 2nd picture the darkest area is above 90C.

That's still warm, but I wouldn't consider that abnormal. I think if they moved the range up another 30C so that it could accurately show everything from the relatively cool PCB to the VRMs, we'd have a better idea of what's hot where. Certainly it's not the whole card that's like that.
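If it helps, here's a trivial sketch of what that kind of scale ceiling does to the readout (the temperatures are made up for illustration):

```python
import numpy as np

# Made-up spot temperatures across a card (C), purely for illustration.
spots = np.array([62.0, 71.0, 79.0, 84.0, 91.0, 103.0])

# A thermal image rendered with an 80C ceiling clips everything above it,
# so an 84C spot and a 103C hotspot come out the same colour.
scale_max = 80.0
rendered = np.clip(spots, None, scale_max)

for real, shown in zip(spots, rendered):
    note = " (clipped)" if real > scale_max else ""
    print(f"actual {real:5.1f} C -> displayed {shown:5.1f} C{note}")
```

All you can conclude from a clipped pixel is "at least 80C", not an exact value.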
 

Elfear

Diamond Member
May 30, 2004
7,169
829
126
How is it ridiculous? I've owned two different GTX 580s and I wouldn't dare put either one at 1.2V, period. I had a reference version at 1.025V and 830MHz core, and the temps were near 90C while gaming, not Furmark. This W1zzard is obviously no genius. Only a fool, or someone who doesn't mind blowing up loaner cards, would subject a dual-GPU card to those extremes.

Let me ask you this: who do you think has more experience overclocking video cards, you or W1zzard? What you would dare doesn't have much relevance here, since you don't have anywhere near the experience W1zzard does. Not trying to sound harsh, but it's like some dude off the street yelling that he wouldn't dare take a corner like Schumacher, so Schumacher must be an idiot for doing so.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Let me ask you this: who do you think has more experience overclocking video cards, you or W1zzard? What you would dare doesn't have much relevance here, since you don't have anywhere near the experience W1zzard does. Not trying to sound harsh, but it's like some dude off the street yelling that he wouldn't dare take a corner like Schumacher, so Schumacher must be an idiot for doing so.

I am more likely to listen to real card users... not people who get to play with voltages and heat on loaner cards with no consequences.

In the real world, risking the warranty on a $500 card is a tough thing to deal with ;)


Just like how in your analogy (which proves my signature correct), you wouldn't "take a corner like Schumacher."
 
Last edited:
May 13, 2009
12,333
612
126
Let me ask you this: who do you think has more experience overclocking video cards, you or W1zzard? What you would dare doesn't have much relevance here, since you don't have anywhere near the experience W1zzard does. Not trying to sound harsh, but it's like some dude off the street yelling that he wouldn't dare take a corner like Schumacher, so Schumacher must be an idiot for doing so.

But who still has a working card? ;)
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I am more likely to listen to real card users... not people who get to play with voltages and heat on loaner cards with no consequences.

In the real world, risking the warranty on a $500 card is a tough thing to deal with ;)


Just like how in your analogy (which proves my signature correct), you wouldn't "take a corner like Schumacher."


You'd rather listen to people posting on a forum than reviewers? Obviously both count in their own way. You never answered my question; it's like you ignored my earlier post altogether. I see your CPU is overclocked. Have you ever added voltage to a CPU? How about your 5850s?

Do you think an enthusiast who buys this type of card wouldn't use water cooling and overclock/overvolt? Maybe many won't, but it's ridiculous to assume that no one will. 1.2V may be excessive, but it seems some cards have failed with far less abuse. This is an issue, in my opinion.

For a flagship that is supposedly a marvel of engineering, this thing sure is fragile.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
You'd rather listen to people posting on a forum than reviewers? Obviously both count in their own way. You never answered my question; it's like you ignored my earlier post altogether. I see your CPU is overclocked. Have you ever added voltage to a CPU? How about your 5850s?

CPU/GPU is apples/oranges. CPUs are far more robust; you have to be doing something seriously stupid to turn one into a key-chain. And I do listen to "people on forums" when deciding the commonly accepted max voltages for a processor, RAM, etc.

Also, when doing a risk/benefit analysis, you would see that adding 1GHz to your $199 CPU is far more beneficial than adding 100MHz to your $300 graphics card.

I never add voltage to a GPU. Do I overclock? Absolutely.

I most certainly wouldn't add voltage to a $700 card that is already pushing power limits. I would consider it if I had a GTX 460 that I was just messing around with and didn't care what happened to it.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Let me ask you this: who do you think has more experience overclocking video cards, you or W1zzard? What you would dare doesn't have much relevance here, since you don't have anywhere near the experience W1zzard does. Not trying to sound harsh, but it's like some dude off the street yelling that he wouldn't dare take a corner like Schumacher, so Schumacher must be an idiot for doing so.

Because W1zzard has a lot of experience overclocking, he should have known that before raising the voltage, the card has to be able to cope with the extra heat. So if the maximum TDP is going to rise, you should first upgrade the cooling solution.

To my understanding, raising the voltage above 1.0-1.1V pushes the TDP beyond what the cooling solution is designed for, making the VRMs operate outside their nominal parameters until they just blow up.

This is Nvidia's fault, because they should have taken steps to ensure no one could raise the voltage beyond 1.0V, if that is indeed the cause of this problem.
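As a rough illustration of why (first-order CMOS scaling, P ~ f * V^2; every specific number below is an assumption for the example, not a measured figure):

```python
# First-order estimate of how an overvolt/overclock inflates board power.
# Dynamic power scales roughly with f * V^2; static leakage and temperature
# effects are ignored. All numbers are example assumptions: 365 W is the
# card's official TDP, 607 MHz its stock clock, 0.94 V an assumed stock
# voltage, and 772 MHz / 1.2 V an example overvolted setting.

def scaled_power(base_watts, v_old, v_new, f_old, f_new):
    """P_new ~ P_old * (f_new/f_old) * (V_new/V_old)^2."""
    return base_watts * (f_new / f_old) * (v_new / v_old) ** 2

est = scaled_power(365.0, v_old=0.94, v_new=1.2, f_old=607.0, f_new=772.0)
print(f"estimated board power: ~{est:.0f} W")  # roughly double the stock TDP
```

No stock air cooler designed around ~365 W is going to cope with that, which is exactly why the voltage should have been locked down or the cooling upgraded first.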
 
Last edited:

WelshBloke

Lifer
Jan 12, 2005
33,551
11,698
136
Because W1zzard has a lot of experience overclocking, he should have known that before raising the voltage, the card has to be able to cope with the extra heat. So if the maximum TDP is going to rise, you should first upgrade the cooling solution.

To my understanding, raising the voltage above 1.0-1.1V pushes the TDP beyond what the cooling solution is designed for, making the VRMs operate outside their nominal parameters until they just blow up.

This is Nvidia's fault, because they should have taken steps to ensure no one could raise the voltage beyond 1.0V, if that is indeed the cause of this problem.

Is it confirmed to be a temperature problem? If the power draw is high enough, could it not just cause the chips to go POP straight away? (I know that's technically still a temperature problem, but no cooling is going to stop a fuse from blowing.)
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
Seriously, Thilan? Loads of people said it couldn't be done. Yet here we are.
I think you should give credit where it is due and stop pretending that what was done wasn't amazing.

I certainly wasn't one of the people saying it couldn't be done (without lowering clocks and binning)... and I'm sure some people were saying it couldn't be done with two 6970s too... do you consider the 6990 an "engineering marvel" as well (I certainly don't)? Both the 6990 and the 590 seem like hack jobs to me... there are massive compromises they had to make to get them out.

Like I said, turn the clocks down and bin chips to get a lower stock voltage, and of course it could be done... there's NOTHING magical about that. Can you please describe to me which part of this card is a "marvel", bearing in mind that clocks were lowered drastically and the GPUs were binned?

My contention is not that it is not a good card... it is your use of "engineering marvel" to describe it, which it certainly isn't. If you give that title to the 590, you would have to give it to the 6990 as well. Did you ever call the 6990 an "engineering marvel"? I haven't been following these launches that closely, so I don't know whether you actually said that.

Seriously, you can praise the card without using marketing phrases like that. Or maybe I just have a much stricter definition of "engineering marvel".


Also, when doing a risk/benefit analysis, you would see that adding 1GHz to your $199 CPU is far more beneficial than adding 100MHz to your $300 graphics card.

Depends on the use. For gaming (which is what people buying mid-range and higher cards would probably be doing), the extra 100MHz on the GPU will probably give more benefit. If you do more intensive stuff than gaming, then yes, you can get a lot of benefit from CPU overclocking as well.
 
Last edited:

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
CPU/GPU is apples/oranges. CPUs are far more robust; you have to be doing something seriously stupid to turn one into a key-chain. And I do listen to "people on forums" when deciding the commonly accepted max voltages for a processor, RAM, etc.

Also, when doing a risk/benefit analysis, you would see that adding 1GHz to your $199 CPU is far more beneficial than adding 100MHz to your $300 graphics card.

I never add voltage to a GPU. Do I overclock? Absolutely.

I most certainly wouldn't add voltage to a $700 card that is already pushing power limits. I would consider it if I had a GTX 460 that I was just messing around with and didn't care what happened to it.

Where do you think that information comes from? It's guys like W1zzard who risk their ES cards to determine what the products "should" be able to handle.

Actually, damaging the GPU itself is almost as difficult as damaging a CPU. Provided you ramp up clocks/voltages in reasonable increments, you will get artifacts and lockups well before the GPU has a chance to destroy itself. The difference in this situation is not that people are damaging the GPU, but that the actual power delivery/regulation system is failing. This type of subsystem failure is indicative of inadequate components being used.

Think about it: the math doesn't add up. A 6970 is ~$350, and a 6990 is ~$700... How does NV take a $500 GTX 580, essentially double it, and create a $700 GTX 590? Either they are massively ripping people off on the GTX 580, or they are cutting some corners on the GTX 590. Considering that the GTX 580 doesn't have these issues and the 590 does, I'm guessing the latter. The GTX 590 should probably be a $900-1000 card, but NV knows they can't sell that given the card's performance relative to the 6990.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I am more likely to listen to real card users... not people who get to play with voltages and heat on loaner cards with no consequences.

In the real world, risking the warranty on a $500 card is a tough thing to deal with ;)


Just like how in your analogy (which proves my signature correct), you wouldn't "take a corner like Schumacher."

I think it's been conclusively shown that the card failed not because of the 1.2V, but because OCP wasn't working in the driver. A much better question regarding W1zzard would be: when was the last time he fried a card during testing, and did he use the same procedure to test the GTX 590 that he has used on previous cards, or did he somehow run a special test unique to this one card?
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
After reading the Google translation I think I see the problem.

If you look at the pictures, the range they're using is up to 80C and 90C respectively. The camera isn't reporting meaningful values beyond those limits in their respective pictures. So all that means is that on the first picture the entire card reads above 80C, and that in the 2nd picture the darkest area is above 90C.

That's still warm, but I wouldn't consider that abnormal. I think if they moved the range up another 30C so that it could accurately show everything from the relatively cool PCB to the VRMs, we'd have a better idea of what's hot where. Certainly it's not the whole card that's like that.

The different scaling they used was just to show the differences between the various parts of the card more easily. The maximum temperature they are reporting is correct, regardless of which scale they use. Disregard this post if that is what you meant in what I quoted.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Think about it: the math doesn't add up. A 6970 is ~$350, and a 6990 is ~$700... How does NV take a $500 GTX 580, essentially double it, and create a $700 GTX 590? Either they are massively ripping people off on the GTX 580, or they are cutting some corners on the GTX 590.


Because it isn't a GTX 580 core clocked the same. What they did, as I see it, is offer GTX 570 SLI performance ($350 x 2) with some added value, like more SPs and RAM, etc.
 

WelshBloke

Lifer
Jan 12, 2005
33,551
11,698
136
The different scaling they used was just to show the differences between the various parts of the card more easily. The maximum temperature they are reporting is correct, regardless of which scale they use. Disregard this post if that is what you meant in what I quoted.


I'm not convinced; 112C is insanely hot for the backplate of a video card.

Mind you, VRMs do get hot, but over 112C is way too hot.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
I'm not convinced; 112C is insanely hot for the backplate of a video card.

Mind you, VRMs do get hot, but over 112C is way too hot.

It wasn't the backplate, right? I thought it was the back of the actual PCB they were measuring. Whatever is at the front of that area doesn't necessarily have to be over 112C. If the heat dissipation in that area is bad, and you have several components dissipating power there (it isn't only the VRMs giving off energy), it can get extremely hot. Remember, it was just ONE spot where they measured 112C. If it was 112C all over the card, I would be skeptical.
 
Last edited: