Voltage degradation is real.
Compound that with heat on a loaded GPU and it's bound to fail over time.
Again, it sounds like the people forming opinions on mining are the ones who never mined. Those of us who mined and ran DC projects would tell you otherwise. Why would my HD7970s, which mined overclocked at 1.15GHz on stock voltage, suffer from voltage degradation? I never ran the ASIC out of its voltage spec and kept it at 1.174V the whole time.
As I have stressed, a gamer is nowhere near the load time of a miner.
Well, actually, full gaming load is greater than mining load, since mining doesn't stress the memory. If you are running a game at 99% GPU load, the card is working harder because the memory comes into play with gaming. With mining, you can downclock the memory all the way down to 800MHz on, say, the 7970 and it makes no difference to the hash rate. Therefore, at 99% GPU load the total power draw of the overall videocard is actually higher in gaming and the entire PCB is more stressed.
But let's set gaming vs. mining aside. In DC projects like MilkyWay@Home, the GPU is literally loaded at 99%. Every single stream processor is used, as the program scales linearly with more stream processors. Have you seen thousands of GPUs dying from Collatz Conjecture or MilkyWay@Home? No.
MOSFETs and voltage regulators are not invulnerable... you can mitigate wear by lowering the temperatures of those components, which raises efficiency and extends their overall life.
No one disputes that. However, if the impact on real-world useful life is hardly measurable, it's irrelevant whether the GPU lasts 50 years or 'just' 10.
This is why I would only consider buying a miner's card if it was watercooled and kept under a full-cover waterblock while it was mining.
I've seen you post this over the years. It's a flawed theory. What most often kills GPUs is electromigration, overvoltage and current spikes. Temperatures rarely kill GPUs because the cards simply downclock. Miners were extremely conscious of GPU clocks since they directly impacted the hash rate. Therefore, a miner would never run a card at 95C load and take a 15-20% throttling penalty, as it would hurt his income. Chances are the mining card actually operated at reasonable temperatures in the mid-70s, which in no way shortens the actual real-world useful life of a GPU.
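To put some rough numbers behind that, here's a back-of-the-envelope sketch using Black's equation for electromigration life. The constants (current-density exponent, activation energy) and the simplification that current density scales with core voltage are illustrative assumptions on my part, not measured Tahiti values:

```python
import math

# Back-of-the-envelope electromigration comparison using Black's equation:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# The constants below (n, Ea) and the simplification that current density J
# scales with core voltage are illustrative assumptions, not Tahiti datasheet values.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K
N_EXPONENT = 2.0            # typical current-density exponent
EA_EV = 0.9                 # assumed activation energy in eV

def relative_mttf(voltage, temp_c, ref_voltage=1.174, ref_temp_c=75.0):
    """MTTF relative to a stock-voltage card running in the mid-70s C."""
    t = temp_c + 273.15
    t_ref = ref_temp_c + 273.15
    current_term = (voltage / ref_voltage) ** (-N_EXPONENT)
    thermal_term = math.exp((EA_EV / K_BOLTZMANN_EV) * (1.0 / t - 1.0 / t_ref))
    return current_term * thermal_term

# Stock 1.174V at ~75C vs. a heavily overvolted 1.35V card cooking at 90C.
print(relative_mttf(1.174, 75.0))   # 1.0 by definition
print(relative_mttf(1.35, 90.0))    # ~0.22, i.e. roughly 4-5x less electromigration headroom
```

Notice that hours of runtime don't appear anywhere in the equation. At stock voltage and mid-70s temperatures the card simply sits at its design MTTF; it's the overvolting and heat that eat into the margin.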
Let's look at the temperatures of the Asus DCUII R9 280X. Does this look like a card that overheats? No.
If it was kept on stock air and pushed hard, I would hesitate and think twice.
Again, if the card was not overvolted, it makes no real-world difference to its useful life whether it ran at 99% load 24/7 or for an hour once in a while. If your theory were true, every time you played a game or used your GPU you would be "wearing it down" significantly. That's not how ASICs work. From an engineer's point of view, sure, it's "wearing down" in the sense that the effect can be measured at the transistor level, but from a consumer's point of view the reduction in useful life is immaterial.
Even if it was under warranty, getting a card replaced is a hassle, unless it's through EVGA with an advance RMA.
But your entire premise, that a mining card built like a tank with Asus SuperAlloy components is more likely to fail because of mining, is not based on any factual evidence. You are just assuming that, in theory, it would be more worn down.
The voltage regulators on a CPU motherboard are much more advanced than on a GPU.
Depends on the motherboard. The SuperAlloy components are the same ones used in the Asus Matrix Platinum cards, which can take a massive overvoltage of 1.35-1.4V on Tahiti. Like I said, you and other people in this thread have completely ignored the actual card in question. It's built like a tank, from the PCB to the MOSFETs to the VRMs - the same ones used in the Matrix.
"Now onto the results. For benchmarking purposes, we hit the 1377MHz mark with the core voltage set to 1.4V while the memory proved to be even more malleable to overclocking than expected with a maximum speed of 6.854 Gbps. " HWC
They have a lot better cooling than a GPU, and more surface area. The PCB on the board itself is thicker than on a GPU, especially if we're looking at brands like MSI / ASUS / Gigabyte.
The VRMs are rated to at least 125C and the components are Asus SuperAlloy. You sure are making a bold claim against a card that feeds a 250W TDP ASIC through 12 power phases with digital power delivery. Again, I don't see any evidence to support your view that Asus, MSI and Gigabyte make motherboards with superior components to what's used on the Asus R9 280X DCUII.
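To put that 12-phase claim in perspective, here's a quick per-phase load estimate. The core power share, voltage and the comment about typical power-stage ratings are ballpark assumptions for illustration, not Asus specs:

```python
# Rough per-phase load estimate for a 12-phase VRM feeding a ~250W TDP card.
# All figures are ballpark assumptions for illustration, not Asus datasheet values.
board_power_w = 250.0        # total board power
core_share = 0.8             # assume ~80% of board power goes through the core VRM
core_voltage_v = 1.174       # the stock Tahiti core voltage mentioned above
phases = 12

core_current_a = board_power_w * core_share / core_voltage_v   # ~170A total
per_phase_a = core_current_a / phases                          # ~14A per phase

print(round(core_current_a), round(per_phase_a))   # ~170, ~14
```

Even if you throw out the core-share assumption and push the full 250W through the core rail, that's still under 18A per phase, and power stages in this class are commonly rated for several times that. Continuous 99% load leaves a wide margin before 125C-rated VRMs break a sweat.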
You can't compare a CPU being used for mining to a GPU.
They're two different objects.
LinX with AVX and IBT load a CPU a lot harder than mining loads a GPU. You don't see anyone freaking out about running those programs to test their CPU overclock.
Also, server boards and gamer boards are distinct.
Server boards use different types of MOSFETs and voltage regulators, which are designed to handle non-stop load.
What's interesting is that motherboard makers are copying GPU makers and using their components as "server grade". Gigabyte X99 uses Cooper Bussmann chokes.
http://www.gigabyte.com/microsite/372/images/performance.html
What do we have on a reference HD7970?
Asus claims the SuperAlloy components last 2.5X longer, minimize power noise by 30% and improve efficiency by 15% compared to a reference HD7970. Do we see an unusual number of reference HD7970 cards dying from mining? No. Then it's even less likely for a card built even better, such as the Asus DCUII, to die prematurely.
The same could be said of workstation cards vs. gamer cards.
Ever wonder why a FireGL Pro costs more than a regular Radeon... well, it's not just because it's a FireGL.
Do you have any evidence to back this up? If we look at the reference 7970/R9 290X and 480/580/680/780/780Ti designs and compare them to their respective FirePro and Quadro versions, it's more or less the same hardware. Every single one of those cards is built with worse components than the Asus DirectCUII series.
Would you buy a 2014 Prius with 350k miles that was properly maintained, for half the price of a new one?
No, you wouldn't. Unless you're an idiot.
This comparison makes no sense. Cars have thousands of components, computers and moving parts, and are subjected to harsh weather, poor roads and potholes, salt and rust, sunlight, wind and acid rain, dust and dirt, contamination in the engine and fuel system, improper or late maintenance, etc. That's why they break. Air-cooled GPUs have no moving parts besides the fan. Whether you run your GPU 100 hours a year or 5,000 hours a year, it won't die any faster in terms of real-world useful life. The ASIC doesn't wear out the way a lithium-ion battery does every time you recharge it, or the way a car engine does. Give me a break. The fact that you even made this comparison shows you don't understand how ASICs are designed and engineered. Using your logic, all old consoles should be dead by now, yet 10-, 15-, and 20-year-old consoles still work like brand new.
Every single GPU I've ever owned that ran nearly 24/7 99% load for years either via mining or DC projects has had 0 failures. We are talking all the way back to Pentium 233 MMX and ATI 128 Pro days. What's your experience? Sounds like you just repeat what you hear with no proper research. The OP's card is better built than my Sapphires but I've mined on mine for at least 2 years straight. It's a walk in the park for the cards if you run them within operating voltage spec and keep VRM temps in check. What fails are the fan ball bearings. If one starts heavily overvolting + overclocking and putting 24/7 99% load, that's a different story. The OP could always ask the original owner if he overvolted or undervolted his cards (as many miners did to improve Hash/watt).
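Keeping an eye on those temperatures isn't rocket science either. Here's a minimal Linux-only sketch that just dumps whatever temperature sensors the hwmon interface exposes; whether a given card and driver actually report a VRM sensor there varies, so treat the output as best-effort and fall back to a vendor tool like GPU-Z on Windows if you need the VRM reading:

```python
import glob
import os

# Minimal sketch: print whatever temperature sensors the Linux hwmon interface exposes.
# Makes no assumptions about specific sensor names; it only reports what it finds.
for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    try:
        with open(os.path.join(hwmon, "name")) as f:
            chip = f.read().strip()
    except OSError:
        continue
    for temp_input in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
        label_path = temp_input.replace("_input", "_label")
        label = ""
        if os.path.exists(label_path):
            with open(label_path) as f:
                label = f.read().strip()
        try:
            with open(temp_input) as f:
                temp_c = int(f.read().strip()) / 1000.0   # hwmon reports millidegrees C
        except (OSError, ValueError):
            continue
        print(f"{chip} {label or os.path.basename(temp_input)}: {temp_c:.1f}C")
```

Run it in a loop while the card is under load and you'll know right away if the cooling setup isn't keeping up.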
Like I said, if GPUs had a measurable reduction in their useful life the more you used them, we would never see companies like HIS state that their 2-3 year warranty covers 24/7/365 load operation. Instead, GPU makers would advertise a 2-year warranty or 10,000 working hours, whichever comes first.
If you don't believe me, go buy a $20-30 GPU with a fanless heatsink off Newegg and let it run a DC project at 99% load 24/7. Come back in 5 years. It'll be working as well as it did on day 1. The sad part is that this thread highlights so many of the misconceptions GPU mining had to suffer from, and as a result of the FUD, many PC gamers missed out on thousands or tens of thousands of dollars in mining profits.