4870/4890 design flaw?


dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Things we know about this problem:

*It only happens on 4800 series cards. (Does it happen on 4770s and 4670s?)
*It does not happen on all 4870s; some cards are immune to the problem entirely, for one reason or another.
*It does not happen with Folding@home, any game, or any other program that uses the GPU - just Furmark and OCCT 3's GPU test.
 

error8

Diamond Member
Nov 28, 2007
3,204
0
76
This issue must be due to VRM overheating. I've redone the test and ended up with 103°C on one VRM chip while the GPU was only at 71°C!

My guess is that in some particular situations (bad case airflow, high ambient temperature, a slow fan profile) the VRMs can end up beyond 120°C, making the card crash, or worse - but only in this OCCT test. This is already a known trait of 4870 cards: the VRMs are sensitive and need serious cooling. It's why aftermarket heatsinks fail on 4870 cards - the little heatsinks they provide can't cool the VRM area well enough.
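
For a rough sense of why the VRMs run so hot, here's a quick back-of-the-envelope estimate in Python. The core voltage, converter efficiency, and phase count are my assumptions for a reference 4870, not measured specs:

# Rough VRM self-heating estimate (all inputs are assumptions, not specs)
VCORE = 1.26        # assumed GPU core voltage, volts
EFFICIENCY = 0.85   # assumed VRM conversion efficiency
PHASES = 3          # assumed number of core VRM phases

for amps in (55, 82):  # typical game load vs. the OCCT worst case
    p_out = VCORE * amps
    p_loss = p_out * (1 - EFFICIENCY) / EFFICIENCY  # heat the VRMs must shed
    print(f"{amps} A: ~{p_out:.0f} W delivered, "
          f"~{p_loss:.0f} W dissipated in the VRMs "
          f"(~{p_loss / PHASES:.1f} W per phase)")

Even 4-6W per phase is a lot for tiny packages under a small heatsink, which is how you end up with readings over 100°C.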

I don't see it as an issue, since no game triggers it. It's more a characteristic of the card than an issue. Don't play OCCT, play games. :)
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
So, by rendering something designed to max out a card, it fails. Isn't that the point of maxing it out? To find the limits? At some point all cards are going to fail; that's the nature of consumer hardware.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Does that issue happen only with the reference HD 4870? I didn't have that issue, but my card comes with a custom black cooler that resembles the Toxic cooler and works like a dream, even though part of its exhaust goes inside the case and the other part goes out the back of the case.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Dribble
Originally posted by: Mem
IMHO, with all due respect to members and Mods here, this thread should be locked; it's a flamebait thread.

It's a useful thread discussing a real problem. All posts relating to nvidia, or denial of the problem with no proof, or trying to get rid of the thread should be removed so the real problem can be discussed.

I agree completely that it's worth discussing, but I think it should also be kept in perspective... it's not something you'll see in any gaming situation; it looks like it's limited to very specific benchmarks at certain settings. Some people with reference cards seem not to have the problem, and have posted so here. I don't see the harm in discussing it, though.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).
 

s44

Diamond Member
Oct 13, 2006
9,427
16
81
Originally posted by: Dribble
Originally posted by: Mem
IMHO, with all due respect to members and Mods here, this thread should be locked; it's a flamebait thread.

It's a useful thread discussing a real problem. All posts relating to nvidia, or denial of the problem with no proof, or trying to get rid of the thread should be removed so the real problem can be discussed.
+1
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Wreckage
It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).

I don't think that's possible, because my games don't crash, period. Meaning if games crash for someone else with a 4870, either their specific card is defective and not representative of the whole line, or they have a software problem. Game crashes are almost certain to be caused by user error or software.
 

Mem

Lifer
Apr 23, 2000
21,476
13
81
Originally posted by: dguy6789
Originally posted by: Wreckage
It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).

I don't think that's possible, because my games don't crash, period. Meaning if games crash for someone else with a 4870, either their specific card is defective and not representative of the whole line, or they have a software problem. Game crashes are almost certain to be caused by user error or software.

No gaming issues on my 4870 either; I've done about 50 hours so far in Drakensang without a single crash (still haven't finished the game).

It's hard to point the finger at the video card for the odd crash. Even if you do get a game crash, do you blame cooling, drivers, other PC hardware, overclocking, the game itself (poor coding, bugs, etc.), or some other cause?




Originally posted by: n7
Interesting.

That said, since I've gotten my HD 4890, I've not had a single issue with it.
This is very unlike the constant stream of driver issues with my GTX 280.

Therefore, I'm going to happily continue using my potentially "flawed" hardware.

+1 :thumbsup:

:)
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
Originally posted by: Wreckage
It is possible that people are getting crashes because of this and are simply attributing it to overheating. It's not uncommon for people to monitor the temps on their card but they may not monitor how many amps the card is using.

It is also possible that a future game or even an application like Folding@home could run into this flaw.

The link I posted does discuss such possibilities.
The link you posted was your way of sticking it to ATi customers, nothing more.

Crysis, one of the most demanding titles out, uses about 50-55A, so I seriously doubt any realistic load will ever come close to 82A.

The only flaw I see is that this application stresses the 4870 and 4890 a lot harder than Nvidia cards. Examples from the link:

A stock GTX 285 gets only 53 FPS average:
http://www.xtremesystems.org/f...=3800005&postcount=291

An HD 4890 underclocked to 850/850 gets over 80 FPS with a slower CPU:
http://www.xtremesystems.org/f...=3799838&postcount=280

So even an underclocked HD 4890 produces 50% more FPS than a GTX 285 - what does that say about this stress test?
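
To put the amp figures next to the FPS figures, here's a quick back-of-the-envelope script. The ~1.26V core voltage is my assumption for a stock 4870/4890; the amp and FPS numbers are the ones quoted above:

VCORE = 1.26  # assumed core voltage; actual value varies by card and BIOS

for label, amps in [("Crysis (quoted above)", 55), ("OCCT worst case", 82)]:
    print(f"{label}: {amps} A -> ~{amps * VCORE:.0f} W at the GPU core")

gtx285_fps, hd4890_fps = 53, 80  # figures from the xtremesystems links
print(f"HD 4890 vs GTX 285: {(hd4890_fps / gtx285_fps - 1):.0%} more FPS")

So the synthetic load pushes roughly half again as much power through the core as the most demanding game people have measured.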
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
If any game crashed an HD 4870 due to this, it would be pretty well known, given the thousands and thousands of people running many, many games on these cards. As it stands, the reliability of the HD 4870 is as good as any other card of this generation. The highest draw I have seen on my card was something like 56A, and I've played most of the modern games on my PC.

I guess all the Intel chips are flawed too, since Linpack very often causes the CPUs to overheat and throttle down, or even hang the system? :confused: Seriously, flawed design imo ;)

They wrote a program that shows that when you load the VRMs with 82A, the card will throttle to 200MHz due to built-in protection mechanisms (I think that was stated in some other posts in that link). Fire up Linpack and your CPU's temperature will skyrocket, which may cause it to throttle or even hang the system. Does that mean all the Intel chips suck or are flawed? No, it means Linpack can cause them to behave like that. It shows you the limit of your CPU. Same for this utility: it showed the limit of the VRMs. The load was very artificial and far, far away from anything available on the gaming market.
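
In other words, the protection presumably behaves something like this rough sketch - not ATI's actual firmware logic, just the idea, using the 82A threshold and 200MHz fallback discussed in that link:

VRM_CURRENT_LIMIT_A = 82   # threshold discussed in the linked thread
SAFE_CORE_CLOCK_MHZ = 200  # fallback clock the cards reportedly drop to

def protected_clock(requested_mhz, vrm_current_a):
    """Return the clock the card should run at for a measured VRM current."""
    if vrm_current_a >= VRM_CURRENT_LIMIT_A:
        return SAFE_CORE_CLOCK_MHZ  # throttle instead of cooking the VRMs
    return requested_mhz

print(protected_clock(750, 55))  # typical game load -> 750 (full speed)
print(protected_clock(750, 82))  # Furmark/OCCT-style load -> 200 (throttled)

Hitting the limiter in a synthetic test is the mechanism working, not the card failing - same as CPU thermal throttling under Linpack.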

So I would advise you Wreckage to refrain from statements like:

It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).

Those words are unconfirmed and frankly not true.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
lol, well I just tried this on a stock reference 4870...

Well... it started the test, then there was just a black screen with my mouse cursor.

I pressed Esc after about a minute of nothing happening,
and everything was fine; it just went back to the normal OCCT screen.

Nothing was wrong... the test just didn't load up. It didn't even go into 3D clocks.
Hmm. Flaw in the test?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: SSChevy2001

The only flaw I see is that this application stresses the 4870 and 4890 a lot harder than Nvidia cards. Examples from the link:

A stock GTX 285 gets only 53 FPS average:
http://www.xtremesystems.org/f...=3800005&postcount=291

An HD 4890 underclocked to 850/850 gets over 80 FPS with a slower CPU:
http://www.xtremesystems.org/f...=3799838&postcount=280

So even an underclocked HD 4890 produces 50% more FPS than a GTX 285 - what does that say about this stress test?

I think it's because it's one of the very few applications that can max out the execution engines; that shows how powerful the HD 4890 is when 3D applications are optimized for it.
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,273
126
Originally posted by: Wreckage
It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).

Well, who knows how it will pan out later, but right now I don't think there are any games that draw that many amps. I usually leave the RivaTuner hardware monitor open when I game, and the most I THINK I ever saw was around 65-70A, but I don't remember which game or program (other than stress-testing programs) it was (maybe F@H?? not sure).
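
If you log the hardware monitor to a file, a small script can pull the peak draw per application instead of relying on memory. This is only a sketch - the CSV layout and column names below are my assumptions, not RivaTuner's actual log format, so adapt them to whatever your log looks like:

import csv
from collections import defaultdict

# Hypothetical log layout: one sample per row with "app" and "vrm_current_a"
# columns. RivaTuner's real log format differs; adjust the field names.
peaks = defaultdict(float)
with open("hwmonitor_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        peaks[row["app"]] = max(peaks[row["app"]], float(row["vrm_current_a"]))

for app, amps in sorted(peaks.items(), key=lambda kv: -kv[1]):
    print(f"{app}: peak {amps:.1f} A")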
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: evolucion8
Originally posted by: apoppin
The way you put it makes it sound like a conspiracy :p
- I was being facetious

AFAIK, TWIMTBP is a program designed to help devs utilize Nvidia cards to their fullest
- it is quite a good program, really, that does not preclude ATi at all; it is up to ATi and the devs to communicate well, or we get hotfixes

Kind of a conspiracy. Do you remember the original Far Cry, a game which was optimized for half precision on nVidia cards? Lost Planet, a game with lots of shader-dependent texture reads that was created originally for the Xbox 360 and so was supposed to run flawlessly on ATi hardware? Or Dead Space, which runs great on ATi but much faster on nVidia; Cryostasis, which is a pita; the original Crysis, which took a lot of driver revisions to match (or outperform) nVidia in some scenarios; Doom 3, which used a look-up table that was slow on ATi hardware. ATi isn't holy either; they artificially crippled the anti-aliasing on Call of Juarez in DX10 so it would run on shader hardware, which benefits ATi and hurt nVidia.

From my perspective, when comparing graphics, games under ATi's Get in the Game program tend to have huge amounts of shaders and normal-resolution textures, which makes them look realistic, while games under the TWIMTBP program tend to look a bit blocky, with huge amounts of textures and fewer shaders, which gives a CG look. I like both looks, though.

http://www.hardocp.com/article...wxMCwsaGVudGh1c2lhc3Q=

Developer Relations


Once again, we feel compelled to talk about developer relations with AMD and NVIDIA. In this case, however, the situation is reversed from our last evaluation of Cryostasis. In the short collection of splash screens in Demigod's startup routine is a great big AMD logo, stating "The future is fusion." So, it seems that AMD was on top of this release for once.


On the first page of this article, we asked the question: Will NVIDIA video cards suffer for AMD's involvement in Demigod's development? No, they do not suffer. AMD's video cards do outperform NVIDIA's offerings consistently in Demigod, but it is nowhere near as one-sided as we saw in Cryostasis. This is a part of AMD that is not flexed enough. If Demigod is any indicator, it appears that GPU manufacturers can actually work with game developers to make their games actually run better, not just more exclusive.

That's why in many scenarios the ATi GPU goes underutilized in such games, and hence there's less overheating. ATi hardware loves long, complex shaders with huge amounts of math processing, which run slower on nVidia hardware, while nVidia hardware loves huge numbers of short shaders and lots of dependent texture reads, which don't help ATi's superscalar architecture at all. Before the HD architecture, the difference was less noticeable than it is now.

Yes, it is not a conspiracy - just the completely different approach each company takes to graphics.

Look at Call of Juarez - ATi blows away Nvidia cards on it, to the point where Nvidia cried "foul"; other games tend to prefer Nvidia
- although most games work OK with each vendor's HW

The way I look at it, this guy wrote a program designed to expose a 'weakness' in an ATi series - one that trips *protection* and shuts the card down
- he may well have been tipped off

. .. *cough* .. by some engineer working for a rival company :Q

Who knows... who cares?
:confused:

It is the ongoing great propaganda war between ATi and Nvidia... so far, it does not affect games.
 

Stoneburner

Diamond Member
May 29, 2003
3,491
0
76
Originally posted by: apoppin


Yes, it is not a conspiracy - just the completely different approach each company takes to graphics.

Look at Call of Juarez - ATi blows away Nvidia cards on it, to the point where Nvidia cried "foul"; other games tend to prefer Nvidia
- although most games work OK with each vendor's HW

The way I look at it, this guy wrote a program designed to expose a 'weakness' in an ATi series - one that trips *protection* and shuts the card down
- he may well have been tipped off

. .. *cough* .. by some engineer working for a rival company :Q

Who knows... who cares?
:confused:

It is the ongoing great propaganda war between ATi and Nvidia... so far, it does not affect games.


Sadly, this is entirely plausible. Competition is fine, but I hope these companies aren't going this far. Then again, Intel apparently had certain programs recognize AMD processors and run slower. ATI or Nvidia could have done something similar...

On the other hand, some ATer postulated (years ago) that eventually people would have to buy one ATI card and one Nvidia card, because only half the games released would run on each. That hasn't come to pass, at least. :)
 

iandh

Junior Member
Jun 30, 2009
7
0
0
IMHO the only design flaw is in Furmark. :)



I can't go too far into detail; a contact of mine said the 4870 was not designed for 100% hardware utilization in real-world scenarios, more like 80%. Furmark uses parts of the GPU that wouldn't normally be run simultaneously under a normal gaming load.

The patched ATI drivers released since this first surfaced a few months ago basically protect the card from the flaw in Furmark.

I work at an electronics manufacturer, and from what I've seen, the power section is almost always THE simplest part of a device, even a multi-phase digital power section like those used on modern GPUs. It isn't like the wild pack of Ph.D. eggheads at ATI designed this amazing GPU and then, after 20 internal design revisions, released the card for retail and THEN realized "Hurrr, ewps, we maked da VRM tew small". ;)
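
If that ~80% figure is right, the readings posters have reported in this thread line up roughly with it. A quick sanity check, treating the 82A OCCT draw as the 100% mark (the per-app figures are the ones people mentioned above):

FULL_LOAD_A = 82  # the OCCT/Furmark worst case reported in this thread

reported = {               # peak amp readings posters mentioned above
    "Crysis": 55,
    "F@H": 50,
    "Qbah's gaming peak": 56,
    "thilanliyan's peak": 70,
}
for app, amps in reported.items():
    print(f"{app}: {amps} A -> {amps / FULL_LOAD_A:.0%} of the synthetic max")
# Everything lands in the 60-85% range, consistent with the ~80% design claim.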
 

Schmide

Diamond Member
Mar 7, 2002
5,745
1,036
126
I have a Sapphire 4870 512MB. I guess I don't have the Volterra VRMs, because I can't reproduce the error and I can't monitor the VRM temps or voltage. I have to admit I was scared to let it finish, as with the stock cooling I had on yesterday it jumped to 110°C within a few minutes. So today I installed a waterblock on my card, and it topped out at 68°C during the test, which makes me happy.

Does anyone have any more info on the Sapphire 4870 512MB (non-Toxic) VRMs?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: thilan29
Originally posted by: Wreckage
It is possible that games are affected by this, but that people are attributing the crashes to something else (drivers, GPU overheat, etc.).

Well, who knows how it will pan out later, but right now I don't think there are any games that draw that many amps. I usually leave the RivaTuner hardware monitor open when I game, and the most I THINK I ever saw was around 65-70A, but I don't remember which game or program (other than stress-testing programs) it was (maybe F@H?? not sure).

I just tried F@H and watched it for a bit to see... less than 50 amps on my Sapphire Atomic edition at 820/4100. I'll test the supposed 4870 killer in a bit and post my results. But as I mentioned, I have a Sapphire Atomic card; I don't know if it's the same as the reference design or not, though I have a feeling it's not.


Originally posted by: iandh
IMHO the only design flaw is in Furmark. :)



I can't go too far into detail; a contact of mine said the 4870 was not designed for 100% hardware utilization in real-world scenarios, more like 80%. Furmark uses parts of the GPU that wouldn't normally be run simultaneously under a normal gaming load.

The patched ATI drivers released since this first surfaced a few months ago basically protect the card from the flaw in Furmark.

I work at an electronics manufacturer, and from what I've seen, the power section is almost always THE simplest part of a device, even a multi-phase digital power section like those used on modern GPUs. It isn't like the wild pack of Ph.D. eggheads at ATI designed this amazing GPU and then, after 20 internal design revisions, released the card for retail and THEN realized "Hurrr, ewps, we maked da VRM tew small". ;)

It looks like it's not really too small for any real-world apps... just this bench. I'd say the OP would be more accurate saying 'hardware limitation' rather than 'flaw', but somehow I don't see the OP editing that out...

Originally posted by: Schmide
I have a Sapphire 4870 512MB. I guess I don't have the Volterra VRMs, because I can't reproduce the error and I can't monitor the VRM temps or voltage. I have to admit I was scared to let it finish, as with the stock cooling I had on yesterday it jumped to 110°C within a few minutes. So today I installed a waterblock on my card, and it topped out at 68°C during the test, which makes me happy.

Does anyone have any more info on the Sapphire 4870 512MB (non-Toxic) VRMs?

I don't really have information, but I too don't believe that the non-reference Sapphire cards use the Volterra chip that the reference designs use. A little while ago there was an EVGA app that was modded to allow it to volt-mod other cards, not just Nvidia cards. I guess it required the Volterra power hardware. I tried it on my card and it would not change my GPU's voltage, so my guess is that we don't have the Volterra parts. <shrug>
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,273
126
Originally posted by: SlowSpyder
I just tried F@H and watched it for a bit to see... less than 50 amps on my Sapphire Atomic edition at 820/4100. I'll test the supposed 4870 killer in a bit and post my results. But as I mentioned, I have a Sapphire Atomic card; I don't know if it's the same as the reference design or not.

Maybe it was in fact Furmark... just tried a renamed Furmark and it drew 65A at 1920x1200 with 4xAA. I don't have Crysis installed anymore to check whether it was that. Oh well... I'm gonna stick to using the card for games instead of trying to find ways to maybe break something. :)
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Schmide
Found the info

http://www.hardforum.com/showthread.php?t=1399839

Interesting - according to that link, the Atomic edition does in fact use the reference PCB design. I wonder why I can't get that volt mod to work? <shrug>

So will this bench kill my 4870, then? I don't mind running it; I just don't want any permanent damage if it can be avoided. Of course, if it does die, I'll just have to get a 4890, I guess... :D Anyway, any thoughts on the safety issue?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
It shuts down = that is protection, imo.

The issue is if it does not shut down and you run at 120°C - watch your temps!!

But be careful: one of my friends tested it for our site and hosed his Win7 install
[without a drive image as backup; I know how he spent the night last night :p]

reinstalling Windows.

 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Originally posted by: iandh
I work at an electronics manufacturer
Among other things ;). Shame only Petra's carries it. I prefer Jabtech and Sidewinder myself, so I've been using Swiftech ramsinks underneath my AC S1 on the VRMs.

I found out over the weekend, when I went to clean the heatsink for the first time since I've had the card (nearly a year now), that the VRM ramsinks had fallen off at some point, onto the AC S1. Whoops...

I'm kind of curious how Zalman's new ZM-RHS70 VRM heatsink does compared to yours.
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: apoppin
They are not the same - other than the result: crashing the card.
- Furmark can actually permanently damage the card if you keep it going and it doesn't crash :p

I can believe that. After reading this thread, I installed Furmark to see what my video cards could do. Would you believe a GeForce 7950 GT will get up to 130°C at stock cooling and stock speeds without crashing? Even if it isn't crashing, letting it get that hot in the first place seems like a major design flaw.

edit:
I'll agree with everyone saying not to worry, since it's just a theoretical thing. While my card would probably melt if left running at 130°C for a while, I've never seen it crash in any game over the past three years. With ATI, probably the same thing applies. As interesting as Furmark is, it's not something to lose sleep over.