Should I buy used (mining) R9-280X?

Blue_Max

Diamond Member
Jul 7, 2011
4,223
153
106
The price of local used Radeons is very tasty thanks to miners. I know one guy selling 6x ASUS 280X cards for a good price, and he makes no secret that he mined with them for a year. Well taken care of, dusted & cleaned, etc.

My first thought is no, but then there's a plus - he still has everything required to register the cards with ASUS, so they'd be under the full warranty... right?

...would ASUS see that a failing card has run 24/7 for a year? Not overclocked, just run constantly.


...heck, I plan to do plenty of Folding@Home so I can't really complain.

~$120 CDN is a super deal compared to new GTX 960's at double that price. *sigh* You Americans get all the best deals. ;)


So... thoughts? Advice?
Thanks! :wub:
 

master_shake_

Diamond Member
May 22, 2012
6,425
292
121
there is no rule that they can't be used for computation, so the warranty is still valid.

i would buy them and use them and if they fail rma em.

for 120 canadian!! id buy 4

you in ontario??

;D
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
if they work, you should at least buy 1. make sure it works first. a lot of cards used for mining have problems with artifacts when gaming, dunno why.
 

xorbe

Senior member
Sep 7, 2011
368
0
76
And the VRAM running hot 24/7 for a year... that stuff seems to degrade eventually.
 

Blue_Max

Diamond Member
Jul 7, 2011
4,223
153
106
Beginning to sound like, warranty or not, it's a bad idea. Even if they do honour the warranty, I'll be out the shipping cost and the PITA time.

Video cards and hard drives are the only items I've ever been burned on used... maybe I should just stick to new in this case.

Of course, we're talking about half the price of new or even less... hard to ignore.

Thanks for the info & opinions... :)
 

therealnickdanger

Senior member
Oct 26, 2005
987
2
0
I bought a 7950 about 18 months ago for $160 that was previously used by a miner. He let me see that it played games just fine before I bought it. Since then, I've overclocked it and it still works just fine. I also still have an overclocked GTX 470 that has been going strong for five years now. The only GPU that has ever failed on me did so recently - a bad fan on an HD6870, which I replaced for $6, and now it works perfectly again.

If you can verify that it is still operational when under load, then you should jump on it!
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Nobody can show data that mining cards fail more than regular cards because no such data exists. All you have to worry about is that the fan has extra wear on it, and the fan might fail 3 years from now instead of 5 when you've long since replaced the card anyways. Even if you still have it, it'll be like $15 with shipping to buy the replacement part.

Go for it, that's a fantastic deal.
 

fleshconsumed

Diamond Member
Feb 21, 2002
6,486
2,363
136
I was mining back when it was still profitable. If the cards still work, the biggest problem you're going to see is with the fans. The high heat dries out the lubrication, which wears them out, and if not taken care of it can destroy the fan. However, in most cases you can easily fix the problem by lubricating the fan yourself. At 120 CDN I'd definitely get one.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
And the VRAM running hot 24/7 for a year... that stuff seems to degrade eventually.

There is no scientific evidence that backs that up. If you game a lot or run Folding@Home/Seti@Home/MilkyWay@Home as many of us have done for years, then as long as your card runs within specifications in terms of voltages and VRM/GPU temperatures, there is no extra degradation. As mentioned by other posters, the fan bearings are about the only component that wears out much faster. Having said that, mining is no worse for GPUs than Folding@Home or other distributed computing projects, and many gamers on this forum run those programs when not gaming. Since the cards still have warranty, I wouldn't worry about it at all. I've never had a GPU fail from mining or distributed computing in 15 years, and I ran them balls to the wall, 24/7 at 100%.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
my 290's have been going strong for 9 months or so and they were mined with. The fan is the only thing that can be an issue but that is a cheap fix. I say go ahead and grab one.
 

MeldarthX

Golden Member
May 8, 2010
1,026
0
76
most miners undervolted their cards; for 120.......that's a steal; worst case is the fan going, and that's an easy fix :D
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,086
3,593
126
i would never buy a card used for mining unless it was watercooled and came with a waterblock.

the card has been stressed beyond what even gaming does, on a pure endurance run.
There is no way to tell how far the card has degraded.

If it was on a waterblock and watercooled, well, i would give it a thought, because it's stated that for every 10C you lower the temp on an IC, you effectively double the life of said component.

Factor in the fact that average LC setups will cut temps in half - that's about a 30-40C drop in load temps, which translates to about 3x the life the card would have had on a normal heatsink, and also means the card should last even if you were to convert it back to air.
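The "every 10C doubles the life" rule of thumb above can be put into numbers. Here's a minimal sketch (my arithmetic, not the poster's), assuming a baseline life of 1.0x at the starting temperature; note that, taken literally, the rule would actually give 8-16x for a 30-40C drop, not 3x:

```python
# Rule-of-thumb sketch: "every 10 C you lower the temp on an IC,
# you effectively double the life of said component."
def life_multiplier(temp_drop_c: float) -> float:
    """Relative life extension for a given drop in load temperature."""
    return 2.0 ** (temp_drop_c / 10.0)

# A 30-40 C drop from a water block, per the rule:
print(life_multiplier(30))  # 8.0x
print(life_multiplier(40))  # 16.0x
```

This kind of exponential rule is only a rough approximation of real thermal-acceleration behavior, but it shows why the claimed 3x figure is actually conservative by the rule's own logic.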
 

lyssword

Diamond Member
Dec 15, 2005
5,630
25
91
The only issues I had with used cards were failing fans (on my/bro's comps), and those 3 had reference/super cheap shitty fans. My last cards were Asus; not only are they quiet, but reliable by the looks of it too, going 3+ years of heavy use without any problems (7790/r270 btw). I will always pick a used card with a kickass fan design (PCS/Frozr/Vapor etc) over the same card new with a cheapo/stock fan.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
i would never buy a card used for mining unless it was watercooled and came with a waterblock.

the card has been stressed beyond what even gaming does, on a pure endurance run.
There is no way to tell how far the card has degraded.

If it was on a waterblock and watercooled, well, i would give it a thought, because it's stated that for every 10C you lower the temp on an IC, you effectively double the life of said component.

Factor in the fact that average LC setups will cut temps in half - that's about a 30-40C drop in load temps, which translates to about 3x the life the card would have had on a normal heatsink, and also means the card should last even if you were to convert it back to air.

But mining is no more GPU intensive than distributed computing. Before mining even came onto the scene, hundreds if not thousands of PC gamers at AT participated in DC projects for years and years. I've personally run every single GPU I've ever had max overclocked + overvolted in DC projects, starting all the way back at the GeForce 4 Ti 4200 and ending with my 7970s. I ran those GPUs hard and not one has failed. You say that a card's life is adversely affected by high temperatures, but I disagree. First of all, people who run DC or mining do not want to kill their card, because that would mean downtime and an RMA. As a result, knowing that they are putting a 99% workload on the card, they are far more likely to set up a custom fan curve, change the TIM, and ensure the card operates in a well-ventilated case.

In other words, because mining made $, losing a card for the weeks it takes to RMA was far more costly to the user than the last 5% of its GPU clock speed. As such, most miners ran their cards undervolted and/or at reasonable temperatures to ensure they did not fail prematurely. Also, when you discuss extending the life of a card by 2-3X, what are we talking about, from 10 years to 20-30 years? Who cares at that point. Nearly every high-end NV GPU made in the last 10 years is rated to run at 93C+, with most in the 95-105C range! AMD isn't much different. That means if you take a card like a 7970 or an R9 290 and run it at 89C, 24/7, at 99.9% load for 3 years, the actual ASIC will not fail as long as you keep your VRMs ventilated and below 120C. I am not just talking out of my a**, as I've had many, many cards over the years doing DC/mining workloads 24/7, 365 days a year.

Since videocards do not have moving parts and ASICs are rated to operate at very high temperatures (95-105C), unless you have an electricity spike or purposely overvolt your card outside of specs, or overclock it way beyond specs, the chance of it failing in the next 5 years from load, just because it operates at 85-90C is very small. Plus based on the OP, it sounds like the card he would buy would still have some warranty left on it.
 
Apr 20, 2008
10,067
990
126
I got burned on a 7850 that would artifact during gaming after a few minutes. It wasn't the core or RAM acting up; it had to have been something on the PCB. Either way, it was a card that was mined with, and it was trashed. Would you buy a GPU that ran FurMark for a year? How about a CPU that ran LinX for a year?

We see the reports on here about overclocked/stressed CPUs not being stable after years of work. Those GPUs used for mining are ridiculously worn.

Several radar systems (circuit cards specifically) that I've worked on have an expected time before maintenance of 3 months or less with 24/7 usage; it's measured in hours of runtime otherwise. We're talking resistors/capacitors and even ICs going out under enough constant load. These components have an EOL based on load life, not actual age. A card that's been mined on for a year is likely the equivalent of an 8-12 year old card in terms of usage. A transistor can only switch so many times before it starts needing more voltage to do the same work. If components were indestructible under load, I wouldn't have had a job as an electronics technician in the Navy.
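The "8-12 year old card" figure above is just duty-cycle arithmetic. A minimal sketch (my numbers for the gaming average, assuming 2-3 hours a day, which the post doesn't state explicitly):

```python
# Duty-cycle arithmetic behind the "8-12 year old card" estimate:
# a year of 24/7 mining vs. a typical gamer's 2-3 hours per day.
MINING_HOURS_PER_DAY = 24.0

def equivalent_age_years(mining_years: float, gaming_hours_per_day: float) -> float:
    """Years of 'typical gaming' wear accumulated by continuous mining."""
    return mining_years * MINING_HOURS_PER_DAY / gaming_hours_per_day

# One year of 24/7 mining, expressed in years of 3 h/day and 2 h/day gaming:
print(equivalent_age_years(1, 3))  # 8.0 years
print(equivalent_age_years(1, 2))  # 12.0 years
```

This only counts hours of operation, of course; it says nothing about temperature, voltage, or which components actually wear under load.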

In my opinion that's a no go unless it's warrantied like mine was.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
But mining is no more GPU intensive than distributed computing.
...
I've personally run every single GPU I've ever had max overclocked + overvolted in DC projects
...
I am not just talking out of my a** as I've had many many cards over the years doing DC/mining workloads 24/7 365 days a year.

I disagree with the first point. I run various DC projects (mostly Einstein@Home and MilkyWay@Home), and with the default BOINC settings, I am hard-pressed to reach 99% GPU utilization.

Contrast that with CGMiner which, from what I've read, has an --intensity switch that can put a greater load on your GPU.

So I would say that mining puts an even greater load on a (say) 7950 (my card) than DC does, unless you customize your DC config to maximize the load (if the project even allows it).
 

OVerLoRDI

Diamond Member
Jan 22, 2006
5,490
4
81
I was one of those original miners with ~30 58xx radeons.

The cards do take a beating. Most often, though, it was the fans that failed. However, some degraded and required BIOS tweaks to bump the voltage up permanently to make them stable. I knew all this because I tested each card individually and then either replaced the fan or tweaked the voltages. I'd obviously disclose the required voltage bump and discount the price accordingly.

Out of 30 video cards, I had 5 fans fail and 2 cards require voltage bumps. Not bad really considering most did full time mining for almost a year.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I disagree with the first point. I run various DC projects (mostly Einstein@Home and MilkyWay@Home), and with the default BOINC settings, I am hard-pressed to reach 99% GPU utilization.

I am not sure how that's even possible, because in MilkyWay@Home my GPUs are 99% loaded 24/7.

Either way in the OP's case, there are 3 distinct points nearly everyone in this thread missed:

1. The card he intends to buy still has warranty.
2. It's less than HALF the price of a 20% slower GTX960 ($120 CDN vs. $250 CDN+tax).
3. The Asus DCUII is built with Super Alloy components that are all rated for a far longer lifespan than those of any NV/AMD reference card ever made, and they are rated to operate at even higher temperatures.

I am not sure how so many people responding in this thread completely ignored these points. Also, as has been mentioned in this thread, the fans do get worn out, but there is no data showing a strong correlation between GPU mining cards failing (when kept at reasonable voltages and temps) vs. GPUs used for DC/gaming only. My cards were mined at 1.15GHz full time, and to this day they show no signs of artifacts or any degradation. They can actually overclock well past 1.15GHz to 1.2GHz with a bump to 1.25V. That's the point - I ran my cards at the stock Tahiti voltage of 1.175V the entire time and monitored temps closely. Under those conditions, why would my ASIC/PCB/VRMs degrade so much faster that it would affect the real-world useful life of the GPU? They wouldn't. GPUs are designed to work at 99% workloads, 24/7/365, for years and years. Have you ever seen AMD/NV state that they only recommend you use your GPU for 8-10 hours a day at a 90% workload? Heck no.

It's the opposite. AIBs go out of their way to advertise that their GPUs are designed to work 24/7/365 for years, because so many conservative PC gamers think that running a GPU at a 99% workload 24/7 somehow degrades it quicker. Yeah, it might die faster, and we are talking a 10-year lifespan vs. 100 years. Who cares.

"You can count on HIS cards to game hard for 24/7/365 for years!" ~ HIS Digital

For example, EVGA offers their cards with 5-10 year warranty options while XFX has Lifetime Warranty. Do you think those AIBs make some uber special videocards with super-alien technology?

Most of these mined GPUs that fail have fan issues, or the card itself was low quality to begin with. The Asus DCUII R9 280X in the OP is not a low-quality videocard like, say, a PNY GTX970 is.

It comes with Super Alloy components + a 12-phase power design, with components rated for 2.5X longer life to begin with.
http://www.asus.com/us/Graphics_Cards/R9280XDC2T3GD5/

It's really becoming a trend how many people on this forum fail to understand context and to actually do in-depth research before making recommendations.

Does this look like a card that will die prematurely from 99% GPU load? This R9 280X has a higher quality PCB, MOSFETs, and VRMs than the $1000 Titan X. This is not your reference GTX670/760/HD7970 design, folks.

[Attached PCB and cooler images: GTX-670-TOP-13.png, GTX-670-TOP-14.png, HD5870V2-22.jpg, cooler3.jpg, front.jpg]
This Asus Direct CUII R9 280X is built like a tank, 95% as good as the Matrix.
 
Apr 20, 2008
10,067
990
126
Running scrypt/coin mining pushes a GPU far harder than any sort of gaming. Components really are rated on load usage - operations/switching. Most people who game on a PC average an hour or two a night. A miner? Nearly 24 hours a day at full clocks and voltage. Degradation happens even with proper cooling. Just look at all those people who had to bake 8800-class cards to get them working again. Mine ran cool with the blower design.

I'm absolutely certain buying used mined GCN cards will be a crapshoot in the coming months. I'm just waiting on the 7850 baking threads.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Running scrypt/coin mining pushes a GPU far harder than any sort of gaming. Components really are rated on load usage - operations/switching. Most people who game on a PC average an hour or two a night. A miner? Nearly 24 hours a day at full clocks and voltage. Degradation happens even with proper cooling. Just look at all those people who had to bake 8800-class cards to get them working again. Mine ran cool with the blower design.

Unless we can measure a real-world impact from degradation on a GPU rated to run at 99% load 24/7, the concept of degradation is meaningless, especially for the Asus DCUII R9 280X, whose components are built to outlast any reference GPU from AMD/NV by 2.5X. Do you realize how long that is? See, if you've never run DC projects or mining for years, it sounds like you are just repeating things other people repeat that have little to do with reality.

GeForce 8 failures have nothing to do with the GPU ASIC/PCB/VRMs/MOSFETs failing, but with the use of cost-saving lead-free solder. All GeForce 8 cards will eventually fail whether gaming, running DC, or mining; it doesn't matter at all, because what's faulty is the solder inside the entire graphics card. Why do you think the NV GPU in the original PS3 fails over time? It's not because of excessive loads. As lead-free solder heats up and cools down, cracks develop in the traces over time, and you get a loss of signal as electrons cannot travel. Desktop HD4800-R9 200 cards are not susceptible to these issues as far as I am aware. Therefore, comparing the GeForce 8 failures to mining is comparing apples and oranges. A graphics card is not like a boat or a car engine; the ASIC doesn't wear out the same way, since there are no moving parts and no oil/carbon monoxide build-up/residue.

As I said, unless there is a scientific study or you can find a technical paper that shows the degradation curve of ANY GPU running at 99% load 24/7 for years, I will take the above as just an opinion with no factual basis to back it up. Going by my own experience of running DC on CPUs/GPUs for 10+ years, I have never had a GPU fail from 99% load. Again, I ran an HD4890 and an HD6950, unlocked and overclocked, in MilkyWay@Home, and even had a big times thread for this DC project with all my cards. I was doing it for years at 99% load. 0 failures. Every person to whom I sold my cards: 0 failures after me.

I'm absolutely certain buying used mined GCN cards will be a crapshoot in the coming months. I'm just waiting on the 7850 baking threads.
I think you didn't do your research into 'baking' and Bumpgate. It's obvious, considering you are making a connection that doesn't exist.

So you are saying my 4890/6950/7970s are all anomalies? Not one of my cards has died in the 5 years since I started mining 24/7. What about my CPUs? All max overclocked + overvolted and running DC projects for years, including my laptop, where the i7 3635QM hits 93-94C at 99-100% load. 2.5 years already and 0 issues, same as all my previous Intel chips. You think Intel makes some alien transistors but AMD uses budget ones in their GPU designs? That's not how it works. Computer chips are meant to be used and abused at 99% load, 24/7. They have to be designed with the assumption that some users out there will run them at such loads, or otherwise they would carry disclaimers to the contrary to protect Intel/AMD/NV from lawsuits over "overload operation out of spec." I've never bought an Intel/NV/AMD/ATI product where the box stated that the product's useful life would be just 2-3 years if I ran it 24/7 at 99% load, or that my warranty would be reduced if, instead of gaming for 1-2 hours a day, I ran my graphics card 16 hours a day.

There are going to be GPUs that fail after mining, but those same GPUs would likely have failed under other loads, since there is always a 2-5% failure rate for GPUs regardless of workload. It's crazy how you guys think that a GPU's life expectancy starts to dramatically decrease as we put a 99% workload on it vs. a 95% load. Do you keep a chart of the hours your GPU has "performed work," in fear that it has a 10,000-hour life expectancy?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
GeForce 8 failures have nothing to do with the GPU ASIC/PCB/VRMs/MOSFETs failing, but with the use of cost-saving lead-free solder.

Lead-free solder wasn't a cost-cutting move; it was part of the European Union's RoHS mandate. The problem is that Nvidia didn't adequately test their new lead-free BGA packaging, and as a result, it didn't hold up over repeated hot-cold cycling. This was how "Bumpgate" happened.