Is "degradation" real or a myth?

Page 2

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Not speaking for Intel, but speaking as an engineer who designs high speed analog I/O on an Intel CPU, electromigration is not only real but a huge amount of my design effort is spent making sure it isn't a problem. With that said, you are insane if you think an overclocked CPU should always work flawlessly, especially if you increase the voltage.
As for this:

I assure you that the last 25mV had very little to do with it. It was the other 300.

Well, the forums at the time said that the general wisdom was that 1.4v was the max safe voltage for 45nm Core 2 chips. Likewise, 1.5 or 1.55v was the safe max for 65nm Core 2.

Whether either of those is true, I have no idea.

But I cringe when I see people running 1.4v+ (some 1.5v+!) on SB CPUs, given that they are 32nm, knowing how a 45nm Core 2 degraded at those voltages.
 

ClockHound

Golden Member
Nov 27, 2007
1,111
219
106
Degradation is a process by which something gets destroyed.

Besides, if it's as bad as people say it is, people's CPUs should be blowing up by now.

Degradation is what happens to you when you post about how much better your 8150 Bulldozer is than an Ivy 3770k.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
Degradation is what happens to you when you post about how much better your 8150 Bulldozer is than an Ivy 3770k.

And in that case, as with excessive voltage and unchecked temperatures, it is well deserved.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
It's most certainly real. I'm not sure why you'd think they're indestructible. I have countless Athlon X2 CPUs that started off needing more voltage or a lower OC to remain stable. Many were eventually no longer stable at stock speeds without overvolting a bit.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Well, the forums at the time said that the general wisdom was that 1.4v was the max safe voltage for 45nm Core 2 chips. Likewise, 1.5 or 1.55v was the safe max for 65nm Core 2.

Whether either of those is true, I have no idea.

But I cringe when I see people running 1.4v+ (some 1.5v+!) on SB CPUs, given that they are 32nm, knowing how a 45nm Core 2 degraded at those voltages.

I'm sure those numbers were arrived at with nothing more than empirical evidence. There's something to be said for that, though.

With that said, 1.4V on Sandy Bridge is nothing short of retarded.
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
Could be the motherboard getting old.
I had what I thought was degradation: an e2180, stock 2GHz, running at 3.4GHz and 1.5v.
It turned out the board was getting old. Voltage set to 1.48 would droop to 1.47, then 1.46 under heavy load, and it started giving me BSODs. I bumped the setting up to 1.5025, which droops to about 1.48v under load, and it was good to go again for years.
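The droop described here is just the VRM load line at work, which is why a small setpoint bump restores stability. A minimal sketch; the load-line resistance and load currents below are made-up illustrative values, not specs for any particular board:

```python
# Sketch of VRM load-line droop ("vdroop"): the voltage delivered to the
# CPU sags in proportion to load current, which on an aging board can
# look exactly like CPU degradation. All values here are illustrative.

def v_under_load(v_set, current_a, loadline_mohm):
    """Voltage at the CPU socket after load-line droop."""
    return v_set - current_a * (loadline_mohm / 1000.0)

# A 1.48 V setpoint with a ~0.25 mOhm effective load line loses another
# ~5 mV when load current climbs from 60 A to 80 A:
light = v_under_load(1.48, 60, 0.25)   # ~1.465 V
heavy = v_under_load(1.48, 80, 0.25)   # ~1.460 V
print(f"light load: {light:.3f} V, heavy load: {heavy:.3f} V")
```

The same arithmetic shows why raising the setpoint a couple of hundredths of a volt compensates for a weakening board without touching the CPU at all.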


I think a lot of people have this problem and blame the CPU.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Intel actually publishes data about maximum recommended voltages for the CPUs they release. Since they designed the process and test far more CPUs than anyone else can, I'd say following their recommendations is likely a good idea.

For SB-E they say 1.35V is the recommended maximum and 1.4V is the absolute maximum. I haven't checked what they say about IB, but my guess is it's less than those. You'll still get degradation, but it'll be slower and more manageable.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
My i5-3570K arrives next week. I won't OC until I actually need more speed, and even then I'd rather not overvolt. The extra frames per second aren't worth it when stock is already so fast. What are people trying to prove with massive overvolts and overclocks anyway, unless they absolutely need it for work or something?
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Yes, it's real, but it usually takes years to see the effects on a max-overclocked (mild vcore) system. That's when you run them 24/7 @ 100% load like I do.

Yes, that has been my experience as well. My e6750 and, later, x3350 both spent their later years inching their way closer to stock.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I assume having my 2500k at 4.4 using 1.33v will have no real impact on its useful lifespan, since nearly 90% of the time it's just at 1.6GHz and an idle voltage of around 1.00.

If you're in trouble, then I'm screwed... I'm at 4.54 and 1.36v with SETI 24/7. However, my temps are much lower than on my i7 920, and heat/power is a LOT less overall. I'm not concerned. In fact, one of these days I'm going to give this rig to my mom and give her old Q6600 to the school (since it's never going to die if left on its own).
 

maniac5999

Senior member
Dec 30, 2009
505
14
81
About 2 years ago I bought a P2 940. It was pretty mediocre to begin with: it took 1.45v to hit 3.4GHz and would push 70°C under heavy load, and that was with a huge Sunbeam CCF 120. I wouldn't accept anything less, so I just left it there. Over about a year it degraded to the point where it wasn't even stable at 2GHz with that voltage. Yes, degradation is real, and both voltage and temperature have an effect on it. If you can keep the temperature and voltage low enough, 10GHz won't degrade your processor (good luck finding a processor that will get to 10GHz without extreme voltage, though).
 

tuffluck

Member
Mar 20, 2010
115
1
81
I believe it is real. My e6750 ran at 3.4GHz for a couple of years, and then one day I could not reach that clock no matter the voltage. 2.8GHz was as high as I could ever get it again.
 

Net

Golden Member
Aug 30, 2003
1,592
3
81
I can tell you from a semiconductors class I took during my EE degree that semiconductors do degrade over time and eventually have to be replaced.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
However, most of us here swap out our procs long before our overvolting and potentially higher temps (especially with IVB!) cause issues.

In the past, I've had chips "degrade". The symptoms were instability at settings that were previously stable, requiring a reduction in OC, or an increase in voltage.

Now I don't go crazy with the voltage, but I also refresh my hardware so often that I'll never have degradation issues.

Technically, most of what people are calling degradation is due to electromigration (see http://en.wikipedia.org/wiki/Electromigration).
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I think a lot of people confuse the concept of a dead CPU with a faulty one. Remember there are a lot of software exceptions, crashes, maybe a BSOD now and then. All can be caused by degradation. It doesn't have to be totally dead, and as some say, some will still work at lower speeds.

Electromigration:
This is the most likely reason an overclocker will kill a processor. When the DC current in a line is too high, the metal grains that make up the wire are physically pushed aside by the electron wind. The longer you run the chip at higher than design voltages, the more the metal is distorted. Eventually it gives up the ghost and the circuit fails permanently.

Hot Electrons:
Again caused by overvoltage: when there is a high voltage between the source and the drain of a device, a high electric field is created and electrons accelerate, damaging the oxide and interface near the drain and shifting the transistor's threshold and mobility. In an N-transistor the gate is always positive, so the shift is always in the same direction. Eventually the threshold moves to a point where the transistor no longer switches and is effectively dead. The problem is exacerbated by the move to smaller technologies: although device voltages are coming down as sizes shrink, they aren't reducing in proportion to the device shrinkage, leading to higher field strengths than in older devices.
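Both failure modes accelerate with current density and temperature. The standard first-order model for the electromigration side is Black's equation; in the sketch below, the constants A, n, and Ea are illustrative placeholders, not values for any real process:

```python
import math

# Black's equation for electromigration mean time to failure:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# where J is current density, T is absolute temperature, and A, n, Ea
# are process-dependent constants (the values below are illustrative).
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf(j, t_kelvin, a=1.0, n=2.0, ea=0.9):
    """Relative mean time to failure for current density j at t_kelvin."""
    return a * j ** (-n) * math.exp(ea / (K_BOLTZMANN_EV * t_kelvin))

# Compare a stock chip (~57 C) with an overvolted, hotter one (~77 C),
# crudely approximating a 10% overvolt as a 10% rise in current density:
stock = mttf(j=1.0, t_kelvin=330)
overclocked = mttf(j=1.1, t_kelvin=350)
print(f"relative lifetime, stock vs OC: {stock / overclocked:.1f}x")
```

With these placeholder constants the hotter, overvolted case gives up several times its expected lifetime, which is why modest reductions in voltage and temperature buy disproportionate longevity.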
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
I'm sure those numbers were arrived at with nothing more than empirical evidence. There's something to be said for that, though.

With that said, 1.4V on Sandy Bridge is nothing short of retarded.

Of course it was empirical. That's why people run their SNBs at 1.4v for short periods. They assume their mileage will vary, and a lot of these people are so fixated on achieving 5GHz that they'll take the short-term risks to do it.

I had a Q9550 that worked at 4.2GHz for three days, and I think it wasn't even quite at 1.4 volts. Nevertheless, I woke up to a frozen Linpack interface and could never get it to POST with that config again. I wonder whether a little more voltage would have postponed that failure rather than exacerbated it. She runs at 3.75GHz with 1.21v now.

I never considered that a suicide run because I never exceeded 1.4v with that chip, yet I *still* got screwed out of ~300MHz because of my meddling (it had been stable at 4GHz for weeks). From what I've seen with SNB, there aren't too many golden samples, and 95% of your OC can be had with a <15% increase in voltage, so I wouldn't go beyond that for any reason. There's no sense agonizing over 4.8 or 5GHz if you can do 4.7 at 1.3v.
 

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
Degradation is real; it happened to me with my A64 3200 OC. If you upgrade your CPU at least once a year, there isn't much reason to worry. If you upgrade every 1.5-2 years, just do a mild-to-moderate OC and there shouldn't be much of a problem, as you will have room to increase volts later. If you intend to run an OC for 2-2.5+ years, then there is good reason to worry with a high OC or with load temps above 60-65C. And that assumes you only stress the CPU 1-2 hours a day max, not 24/7; running it 24/7 will cause it sooner.

And honestly, the most you should consider overclocking to is 4.2-4.4GHz or so, or in rare cases 4.4-4.5GHz if you have superb cooling. Going beyond that can't be perceived in real time no matter what you do. Going from 4.3 to 4.7 can't be perceived appreciably in anything at all, so it just creates a potential for problems without gaining anything.

The only reason I go 4.3 instead of 4.1/4.2 is that I get 7.8 in my CPU score (2600k); otherwise 4.2 or even 4.1 would have been perfectly fine, as they get 7.7 :p

So it is mostly an ego thing, but I get 4.3 with OCCT temps in the 50s at 1.27V, so it doesn't matter. And I won't use the OC for more than 1-2 years anyway; I'll be buying a 3770k within a week or so.

And I'm planning to do 4.3-4.4 with the 3770k and an H100 at full speed as well.

Very few things are affected above 4GHz, and past 4.3-4.4GHz it is pretty much nothing.

And you can get 4GHz, or even 4.2/4.3GHz, very close to stock volts without much increase in temps either. So that is ideal.
 
Dec 30, 2004
12,553
2
76
I think a lot of people confuse the concept of a dead CPU with a faulty one. Remember there are a lot of software exceptions, crashes, maybe a BSOD now and then. All can be caused by degradation. It doesn't have to be totally dead, and as some say, some will still work at lower speeds.

The problem is the bigger electrons cause the most damage; the smaller ones don't cause any electromigration. If we could somehow filter the big electrons out and send them to the motherboard instead, that would fix the problem of degradation.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The problem is the bigger electrons cause the most damage; the smaller ones don't cause any electromigration. If we could somehow filter the big electrons out and send them to the motherboard instead, that would fix the problem of degradation.

There is no such thing as big or small electrons, nor can you send them to the motherboard to avoid material degradation.
 

Pray To Jesus

Diamond Member
Mar 14, 2011
3,622
0
0
In modern consumer electronic devices, ICs rarely fail due to electromigration effects. This is because proper semiconductor design practices incorporate the effects of electromigration into the IC's layout. Nearly all IC design houses use automated EDA tools to check and correct electromigration problems at the transistor layout-level. When operated within the manufacturer's specified temperature and voltage range, a properly designed IC device is more likely to fail from other (environmental) causes, such as cumulative damage from gamma-ray bombardment.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
In modern consumer electronic devices, ICs rarely fail due to electromigration effects. This is because proper semiconductor design practices incorporate the effects of electromigration into the IC's layout. Nearly all IC design houses use automated EDA tools to check and correct electromigration problems at the transistor layout-level. When operated within the manufacturer's specified temperature and voltage range, a properly designed IC device is more likely to fail from other (environmental) causes, such as cumulative damage from gamma-ray bombardment.

The question is in the context of specifically not doing what I've bolded: operating within the manufacturer's specified temperature and voltage range.