PSA - DON'T buy PNY graphics cards anymore. (Throwing DCers under the bus with miners)

VirtualLarry

No Lifer
Aug 25, 2001
56,326
10,034
126
Read it and weep - their warranty statement.

https://www.pny.com/File Library/Support/PNY Products/Warranties/GeForce Graphics/3-Year-Limited-Warranty.pdf

While I'm sure the intent of this updated warranty statement (cards used outside "intended uses" will have VOIDED WARRANTIES) was to thwart mining RMAs, I feel that it doesn't adequately distinguish DC usage either, and since DC isn't "gaming", I feel it also falls outside "intended uses".

So DON'T BUY PNY VIDEO CARDS TO RUN F@H - NO WARRANTY FROM PNY.

Such a shame, too, since they are an American company.

Edit: Relevant terms:
THIS WARRANTY SHALL NOT APPLY WHERE PRODUCT(S) ARE USED TO ANY DEGREE, OUTSIDE OF NORMAL INTENDED USE, WHICH SHALL INCLUDE BUT ARE NOT LIMITED TO “MINING” (e.g., Cryptocurrency, Data Mining, Mining Farms).

Does DC fall under "data mining"? Or is it otherwise considered "outside of normal intended use"?
 
Last edited:

lane42

Diamond Member
Sep 3, 2000
5,721
624
126
Have not bought anything from PNY in years, after they screwed me on a $30 rebate for RAM.
 

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
Another case of "don't be dumb and give them a reason to deny your warranty".

However, if I were a video card company, I wouldn't want to warranty for a calendar period either, if used for other than intended purposes. Full tilt operation doing distributed computing/etc is bound to wear a card out faster.

Their choice, then, is either to warranty every possible use, which means a shorter warranty for those who buy them for gaming, or to exclude cards used for other purposes.

If someone wants to use their card for this purpose I'm fine with that but it shouldn't cost everyone else more, which it would one way or the other.

In other words I suspect this will become a more prevalent stance as manufacturers find more and more warranty expense due to those uses. *Somebody* has to eat the cost.
 
Last edited:

petrusbroder

Elite Member
Nov 28, 2004
13,343
1,138
126
Another case of "don't be dumb and give them a reason to deny your warranty".
However, if I were a video card company, I wouldn't want to warranty for a calendar period either, if used for other than intended purposes. Full tilt operation doing distributed computing/etc is bound to wear a card out faster.
Their choice is then, warranty for every possible use which means a shorter warranty for those who buy them for gaming, or exclude those used for other purposes.
If someone wants to use their card for this purpose I'm fine with that but it shouldn't cost everyone else more, which it would one way or the other.
In other words I suspect this will become a more prevalent stance as manufacturers find more and more warranty expense due to those uses. *Somebody* has to eat the cost.
Is there any evidence that DC (or "full tilt operation") really does wear out a GPU faster? I am not so sure. If the GPU is kept at a reasonable temperature and the fans are used within their limits, there should be no detrimental effects from DC. My evidence? I have had no GPU fail since I started DC many years ago, and at times I've had 22 computers crunching (but not mining ...) 24/7, each with one or two GPUs. PSUs have failed most often, and some of those failures have had secondary effects (failed mobos, HDDs); HDDs have failed ... but no GPU. Also, a GPU's intended use is to calculate; that is what it does whether gaming, DC-ing or mining. Maybe some parts of the GPU are used more intensively, but that should not matter.
It is not what the GPU does that damages the GPU, it is temperature, possibly overclocking (but that is temperature too) and using voltage/currents above spec (and that is temperature too).

Then OTOH: each manufacturer can specify whatever warranty they want to give. In my opinion, the limitation in warranty by PNY implies that they do not trust their production process, and thus their quality, and therefore they limit their warranty.

BTW: the GPUs my kids have used for gaming only have usually failed after a few (2-3) years ... but, as I said above, none of my GPUs used for DC have.

Bias: I have never used GPUs made by PNY - mostly because they have not been available where I shop for my parts, and later because I've purchased cards from other manufacturers when they were sold at discounts or in marketing drives.
 
  • Like
Reactions: IEC and ao_ika_red

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
It is not what the GPU does that damages the GPU, it is temperature, possibly overclocking (but that is temperature too) and using voltage/currents above spec (and that is temperature too).
The last time I had a failed GPU was an 8600GT, and it's not because the chip itself died: bad power delivery from the PSU made a solid capacitor near the VRM go kaput. When I replaced it, I thought I had a plagued capacitor (remember the capacitor plague issue throughout the 2000s), and I burned part of the board, so it became useless, but I believe the chip was still good.

I have never used GPUs made by PNY - mostly because they have not been available where I shop my parts and then later, because I've purchased cards by other manufacturers when they were sold at discounts or marketing drives.
I've never heard of PNY-made GPUs either, on this side of the world. I saw some traces of them recently (Nvidia Pascal era), and I guess it was miners who brought PNY GPUs into my country.
 

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
Is there any evidence that DC (or "full tilt operation") really does wear out a GPU faster?

Yes, heat kills electronics. The old rule of thumb is a halved lifespan for each 10C temperature rise.
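That rule of thumb can be put into a quick calculation (a simplified Arrhenius-style sketch; the 10 °C halving figure is the rule of thumb above, and the rating numbers in the example are illustrative assumptions, not any vendor's spec):

```python
# Sketch of the "lifespan halves per 10 degC rise" rule of thumb.
# The 10,000 h / 105 degC rating below is a hypothetical example,
# not a real component's datasheet value.

def estimated_life_hours(rated_hours: float, rated_temp_c: float,
                         actual_temp_c: float) -> float:
    """Derate (or extend) a rated lifetime by 2x per 10 degC difference."""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A part rated 10,000 h at 105 degC, kept at 65 degC:
life = estimated_life_hours(10_000, 105, 65)
print(f"{life:,.0f} h (~{life / 8760:.1f} years of 24/7 use)")
# → 160,000 h (~18.3 years)
```

Run the card 10 °C hotter and the same sketch cuts the estimate in half, which is why keeping a 24/7 card cool matters more than what workload it runs.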

I am not so sure. If the GPU is kept at a reasonable temperature and if the fans are used within their limits there should be no detrimental effects of DC. My evidence? I have had no GPU fail since I started DC many years ago and at times I've had 22 computers crunching (but not mining ...) 24/7 with all computers having one or two GPUs.

How do you not use fans within their limits? Yes, video card fans have a tendency to wear out, some within a year or two at elevated RPM. If the fan keeps working and there's no dust clogging, the GPU should survive better than the VRM section, but one factor here is how long you run the cards.

It doesn't really make sense to keep using the same video cards for 5+ years, yet they should last as long as the rest of the system, 10+ years. I have had most last that long with exceptions being oddball cases of my own doing.

Bias: I have never used GPUs made by PNY - mostly because they have not been available where I shop my parts and then later, because I've purchased cards by other manufacturers when they were sold at discounts or marketing drives.

I've only owned two that I recall: a (now) ancient AGP Personal Cinema card with a digital (analog tuning) tuner built in, which was great for what it was and still worked fine when I retired it, but that was a ~dozen years ago... and a GTX 750 Ti I got from Dell 2-3 years ago because I had a $100 gift card I needed to use. It works fine but obviously isn't much of a gaming card, nor was it intended to be.
 

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
The last time I had failed GPU was a 8600GT and it's not because the chip itself was died, but bad power delivery from psu caused a solid capacitor near VRM gone kaput and when I replaced it, I thought I had a plagued capacitor (remember capacitor plague issue throughout 2000s) and burned part of the board and it became useless but I believe the chip was still good.

I doubt it was cap plague that caused a solid one to go bad; it seems more likely it was that solder ball issue nVidia had a few years back. I distinctly remember the 8600GT being affected, because before I gave an 8600GT to a friend, I put a honkin' big DIY hunk of heatsink on it to make sure the GPU stayed extra cool. AFAIK he's still using it (cheapskate non-gamer, lol).
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
I doubt it was cap plague that caused a solid one to go bad, seems more likely it was that solder ball issue nVidia had a few years back. I distinctly remember the 8600GT being affected because before I gave an 8600GT to a friend, I put a honkin'big DIY hunk of heatsink on it to make sure the GPU stayed extra cool. AFAIK he's still using it (cheapskate non-gamer, lol).
The original one was fine for about two years before the PSU went mad. After that I bought a new PSU and replaced the cap with a Nippon Chemi-Con electrolytic cap (it was pretty hard to source solid caps in my city back then), and it was good for 3 months or so before it blew up and left a burn trail around it.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
Regarding the stress from computing applications: I have read claims that the more constant workload of long-duration GPGPU tasks, and fewer power-on/off cycles, are positive factors to be considered too.

I don't know, what's DC?
Distributed Computing, as in the name of the subforum where this thread was opened in. :-)

Does DC fall under "data mining"?
Many DC projects arguably don't. One says in its name that it does: the dDM Project (which does not have a GPU application currently). Some projects appear at least related to data mining, e.g. SETI@Home.

On the other hand, I am not sure whether it was intentional or a mistake that PNY lump "data mining" together with "cryptocurrency mining" in the U.S. warranty terms from your link.

Or is otherwise considered "outside of normal intended use"?
Perhaps a question which lawyers would like to get paid for arguing over. Also remember that sometimes local law overrides what vendors write into their disclaimers.

BTW, PNY would certainly be glad to sell Quadros or Teslas to those interested in GPU computing. ;-)
 

mikeymikec

Lifer
May 19, 2011
17,675
9,516
136
Distributed Computing, as in the name of the subforum where this thread was opened in. :)

<scrolls up, observes, scrolls back down>

So it is! :)

---

I wonder if/how cryptocurrency miners built into websites figure in this warranty situation; at least I imagine that since such things are possible in the first place, one that targets the GPU is probably equally possible.
 
  • Like
Reactions: Thebobo

Beer4Me

Senior member
Mar 16, 2011
564
20
76
No surprise.

If the trend continues, we'll see other manufacturers follow suit (Assuming they haven't already). Again, these cards were not intended for 24x7 operation, so YMMV. That being said, make sure they have clean power, and use a program like Afterburner to throttle your cards. I limit my cards to 65% power limit and run an aggressive fan profile. If the fan/heatsink fails, I'll go aftermarket rather than RMA. Most miners know what they're getting themselves into, I say "most".
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I don't see this as any different than car manufacturers denying warranty claims if you track your car. One could argue that if PNY (or whoever) used quality parts, it shouldn't matter. After all, servers run 24x7 for many years at high load. But products are built for their average user's workload, and just like racing isn't normal usage for a car, mining isn't normal usage for a GPU.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
Though as @VirtualLarry already hinted, the question remains whether they did this because they expect (or experience) a higher failure rate from cards that are used by miners, or because they expect (or experience) an abuse of the RMA process by miners (big miners or small miners?). Another potential reason is that they actually have different SKUs for big miners (e.g. with minimal packaging, to be handed over in quantities of entire pallets directly at the factory back door) and want to steer those customers away from the normal retail SKUs.

If they have such miner SKUs (I don't know if they do), then you know from which SKUs they draw higher margins (the miner SKUs of course, otherwise they wouldn't have them), and you know which purpose these warranty terms are serving.
 

petrusbroder

Elite Member
Nov 28, 2004
13,343
1,138
126
Yes, heat kills electronics. The old rule of thumb is a halved lifespan for each 10C temperature rise.
How do you not use fans within their limits? Yes video card fans have a tendency to wear out, some within a year or two at elevated RPM. If the fan keeps working and there's no dust clogging then the GPU should survive better than the VRM section, but one factor here is how long you run the cards.
It doesn't really make sense to keep using the same video cards for 5+ years, yet they should last as long as the rest of the system, 10+ years. I have had most last that long with exceptions being oddball cases of my own doing.
I've only owned two that I recall, a (now) ancient AGP Personal Cinema card with a digital (analog tuning) tuner built in, which was great for what it was and still worked fine when I retired it, but that was a ~dozen years ago... AGP, and a GTX 750 TI I got from Dell 2-3 years ago because I had a $100 gift card I needed to use. It works fine but obviously not much of a gaming card, nor was it intended to be.
I think that is what I said - temperature (= heat) affects GPUs. If you keep the temperatures within the design limits, the job a GPU does doesn't matter. That means cleaning the fans and coolers and ventilation openings ...
If the manufacturer had a sensor that recorded the maximum temperature, and specified that the warranty is void if the temperature climbed higher than the max spec, I would understand that. But cancelling the warranty because of DC or mining? That is just ridiculous.
 
  • Like
Reactions: IEC and Ken g6

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
The conditions under which the cards were used are more important than the task that is run, in my experience.

Run it hot in a case with poor airflow at 90°C and your card will likely die an early death, gaming or not. Though the thermal cycling with a gaming load is likely to be extra stressful on the card at those temps.

Run it in an open-air frame (as all mine are) at 65°C or less, and I've never had a card die on me; many cards 4+ years old are still working (even after moving on to 1 or 2 additional owners).
 
  • Like
Reactions: petrusbroder

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
So if we assume for a minute that PNY aren't n00bs in GPU engineering, then life expectancy considerations are improbable as a reason for these warranty terms. Remaining reasons:
‒ Those who wrote these terms didn't consult with engineering.
‒ Protection against RMA abuse.
‒ Market segmentation (if they have other SKUs with differing terms).
‒ Anything we haven't thought of yet.
 

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
I don't see this as any different than car manufacturers denying warranty claims if you track your car. One could argue if the PNY (or whoever) used quality parts, it shouldn't matter.

It would be like tracking your car, from day 1, all day, every day, until it failed. Yes some (if not all) car manufacturers deny warranty claims due to track/racing/modifications/etc.

There is no valid argument that "quality parts" shouldn't matter. A quality part still has lifetime specs, and parts do wear out. For example, take a look at the spec sheet for the best capacitor you can find that's rated for SMPS use. Lifespan is rated in thousands of hours, maybe derated to 10K hours, depending on frequency and current (as they relate to GPU and memory load). Do the math: there are 8,760 hours in a year.
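The duty-cycle math above is quick to run through (the 10,000-hour rating is the hypothetical figure from the post, not a specific component's spec, and temperature derating is ignored):

```python
# Back-of-envelope: years until cumulative run time reaches a part's
# rated lifetime, for different daily usage patterns. The 10,000 h
# rating is an illustrative example, not a real datasheet value.

def years_to_exhaust(rated_hours: float, hours_per_day: float) -> float:
    """Years until total run time hits the rated-hours figure."""
    return rated_hours / (hours_per_day * 365)

print(f"24/7 crunching: {years_to_exhaust(10_000, 24):.1f} years")  # ~1.1
print(f"3 h/day gaming: {years_to_exhaust(10_000, 3):.1f} years")   # ~9.1
```

Same part, roughly an 8x difference in calendar lifetime, which is the gap a fixed-length warranty has to absorb one way or the other.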

After all servers run 24x7 for many years at high load. But products are built for their average user workload and just like racing isn't normal usage for a car, mining isn't normal usage for a GPU.

A decent server is designed to do this, doesn't have to shoehorn the components into the space available within the length, width, and depth of a PCIe slot (or two counting the heatsink). A video card is an entire *computer* on a board less than 1/3rd the size of the typical server board.

Also, a server running at "high load" doesn't necessarily mean it's running near 100%. That would be a server due to be replaced.
 
Last edited:

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
I think that is what I said - temperature (= heat) affects GPUs. If you keep the temperatures within the design limits the job a GPU does not matter.

No, that's not what you said. You said GPU; I didn't. The GPU is less often the failure point unless you have a heatsink problem. However, even if you keep the temp within the GPU's design limits, it still matters. The design limits were never intended to mean it runs forever. Why would you assume this magical thing when every other product is only guaranteed till the warranty expires?

We could say everything in a computer should last forever, but power supply, both the ATX main PSU, and the VRMs on the video card, are weak links. Run them hard and watch the magic smoke escape.

That means cleaning the fans and coolers and ventilation openings ...
If the manufacturer would have a sensor that measures max temperature and would specify that the warranty would be void if the temperature climbed higher than the max temp, I would understand that. But cancelling warranty because of DC or mining? That is just ridiculous.

You're entitled to that opinion, but it makes perfect sense to me to reduce or deny the warranty. There are many products where use outside of the intended fashion will cause denial of warranty, or at least should IMO. For example, I don't feel that Craftsman should replace a broken screwdriver if it's used as a pry bar.

Whether you deny that the extra stress would cause an early failure or not, there's no denying that it's extra stress. In my mind that should mean a reduction in the warranty period to offset it. Maybe no reduction in a lifetime warranty, since there's no load-%-per-year calculation that would apply, but I never considered a lifetime warranty on video cards to be a rational stance; it's more like marketing an insurance policy.
 

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
The conditions under which the cards were used are more important than the task that is run, in my experience.

Run it hot in a case with poor airflow at 90°C and your card will likely die an early death, gaming or not. Though the thermal cycling with a gaming load is likely to be extra stressful on the card at those temps.

Run it in an open air frame (as all mine are) at 65°C or less and I've never had a card die on me, and many cards 4+ years old are still working (even after moving onto 1 or 2 additional owners).

This is a good point. Video card manufacturers can't assume everyone is going to do this. They have to assume a lot of their cards are getting stuffed into OEM systems, not checked for dust buildup or temperature. They'd look like control freaks if they put a checklist in their warranty statement of things an owner must prove in order to qualify for warranty claims.

They try to keep things simple, like when we saw recommendations for a 400W+ PSU for video card X even when it was entirely possible to build a balanced system with card X that used less than 250W. They try to cover worst-case scenarios as much as is reasonable.

I just see it as accepting responsibility. For years I voltmodded and OC'd cards, chucked the OEM heatsink after the first non-DOA test, and accepted that I had no warranty. I monitored temperature for my own benefit and accepted (hoped) that the card's lifespan would exceed its useful performance/feature lifespan. In some cases, when cards were retired to lesser duty, I un-modded them and returned them to stock, except I usually left the better heatsink on.
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
A decent server is designed to do this, doesn't have to shoehorn the components into the space available within the length, width, and depth of a PCIe slot (or two counting the heatsink).
Actually, a server often does have to shoehorn components into such little space, or less. Far less if you go by the ratio of chassis volume to system power draw. But its cooling system is designed for this, and environmental conditions are controlled accordingly.
 
Last edited:

mindless1

Diamond Member
Aug 11, 2001
8,052
1,442
126
^ What server capable of the same wattage (excluding HDDs/SSDs, of course) as a good DC/mining/etc. video card has a motherboard the same size as a video card PCB? And while we're at it, one that someone would want to pick for high-load extended deployment, not one that fails early because of it?

Far less if you go by chassis volume : system power draw.

Chassis volume is irrelevant to the extent that no properly designed server has any component in it that gets as hot as a typical video card GPU or VRMs at full load.
 
Last edited:

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
I doubt they can tell the difference between a heavy gamer vs. DC or mining. Like others have said, if you know you're going to run that card 24/7 at full tilt, buy a few more case fans to keep the temps at reasonable levels, or better yet, run it on water, CPU and GPUs both. I'm thinking this is the best way to go for the lowest temps. Seeing how expensive gear is these days, it's almost a no-brainer to invest in proper cooling.

Also, if you can't make that investment, know that you can always run your equipment at half speed until you can.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,241
3,825
75
I think that these days, between GridCoin and FoldingCoin, it's hard to separate DC and cryptocurrency.
 
  • Like
Reactions: ericlp