What does "burn in a computer" mean?

Cheezeit

Diamond Member
Apr 21, 2005
Hi all,
I heard people talking about burning in computers after they build them. What does this mean? Is it a certain set of tests?
 

Hyperlite

Diamond Member
May 25, 2004
Programs that stress the system components heavily, such as 3DMark05 for a graphics card or Prime95 for CPUs. As a matter of fact, SiSoft Sandra 2005 has a burn-in utility that I believe covers several components, and you can loop it for an unlimited amount of time.
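Conceptually, those utilities just hold the hardware at full load for as long as you let them run. A rough sketch of the CPU side of that idea in Python (the busy-work arithmetic and the 24-hour default below are arbitrary placeholders, not what Prime95 or Sandra actually compute):

import multiprocessing as mp
import time

def busy_work(stop_at):
    # Spin on floating-point math until the deadline passes
    # (placeholder load, not a real torture test).
    x = 1.0001
    while time.time() < stop_at:
        x = (x * x) % 1000003.0

def burn_in(hours=24.0):
    stop_at = time.time() + hours * 3600
    # One worker per logical core so every core stays near 100% load.
    workers = [mp.Process(target=busy_work, args=(stop_at,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    burn_in(hours=24.0)  # raise this to loop "for an unlimited amount of time"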
 

jackschmittusa

Diamond Member
Apr 16, 2003
It is not just stability that a burn-in checks. Operating components at their maximum level of function for a long period can actually make poor components fail. This was the original idea behind it long ago. This way, you found bad parts before the unit was put into service, and you got a replacement under warranty.
 

imported_rod

Golden Member
Apr 13, 2005
Originally posted by: jackschmittusa
It is not just stability that a burn-in checks. Operating components at their maximum level of function for a long period can actually make poor components fail. This was the original idea behind it long ago. This way, you found bad parts before the unit was put into service, and you got a replacement under warranty.
That's my understanding of it too.

RoD
 

tiap

Senior member
Mar 22, 2001
Originally posted by: jackschmittusa
It is not just stability that a burn-in checks. Operating components at their maximum level of function for a long period can actually make poor components fail. This was the original idea behind it long ago. This way, you found bad parts before the unit was put into service, and you got a replacement under warranty.

I remember the ads in Computer Shopper back when it was 1 1/2 inches thick: every company advertised that they burned in all their computers for 24 hours before shipping. It was a fad for a while. I think that was before Dell and Gateway.

 

bluestrobe

Platinum Member
Aug 15, 2004
When I worked at HP/Compaq, they burned in every system for about 3-4 hours before shipping. This was done after a pre-test phase of basic checks to make sure the hardware was correct and installed properly.
 

bob4432

Lifer
Sep 6, 2003
When I first build a machine I will run Memtest for about 24-48 hours straight, then Prime95 or SETI (or both) for another 24-48 hours, and then loop one of the gaming benchmarks for 12 hours or so. I know that in the "real world" I would never stress the machine this much, but I'd rather have it pop while all the components are new and their warranties are good :)
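A staged schedule like that is easy to drive from a small script. A sketch in Python, where the command names are just placeholders for whatever stress tools are actually installed (Memtest has to run from its own boot disk, so it is left out here):

import subprocess

# Each stage: (description, command to launch, hours to let it run).
# The commands below are placeholders -- substitute your own stress tools.
SCHEDULE = [
    ("CPU stress",         ["mprime", "-t"],        24),
    ("GPU benchmark loop", ["./gpu_bench_loop.sh"], 12),
]

def run_stage(name, cmd, hours):
    print("Starting stage:", name)
    proc = subprocess.Popen(cmd)            # launch the stress tool
    try:
        proc.wait(timeout=hours * 3600)     # give it the allotted time
        print(name, "exited early with code", proc.returncode)
    except subprocess.TimeoutExpired:
        proc.terminate()                    # time is up, stop the tool
        proc.wait()
        print(name, "completed", hours, "hours")

for name, cmd, hours in SCHEDULE:
    run_stage(name, cmd, hours)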
 

Joepublic2

Golden Member
Jan 22, 2005
I buy cheap components with short warranty periods, so I make them work hard to ensure nothing breaks. I believe my motherboard only came with a one-month warranty, so I looped 3DMark05 and ran Prime95 in the background for two weeks before I was satisfied.
 

maluckey

Platinum Member
Jan 31, 2003
Burn-in to set the thermal compound and to test for bad components is a good thing. It is never wise to use the "old-style" procedure of volting to extremes. I have a Barton Mobile 2400 that now requires stupid amounts of voltage to reach speeds that are everyday settings for another of my CPUs from the same batch. I learned the hard way.

I was trying for a max OC and had the voltage well into the 2.0 V range. After failing to stabilize 2.7 GHz, the CPU now requires more voltage at ALL settings than it previously did.
 

cryptonomicon

Senior member
Oct 20, 2004
I'm surprised no one has mentioned BH-5 burn-in yet: a process where Winbond BH-5/BH-6 memory was run overnight or longer a few MHz past its best stable speed, so that it generated a few errors in Memtest86, and with heavy voltage (3.0-3.2 V if I remember correctly).

Supposedly, the next morning there would be no errors. I have no idea how this works, but the time plus voltage did something that increased the performance of the chips and allowed them to run faster.

Again, this is an anomaly and an exception to the definition of a burn-in, but it is still a burn-in, and it's true.
 
Sep 29, 2004
I thought running a 3D benchmark tool on a 24-hour loop was considered "burn-in". No crashes, and you have a stable system.

Theory: you should be running at near 100% CPU utilization for 24 hours, along with similar pain on your 3D card.
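If you want to check the "near 100% utilization" part instead of just trusting the benchmark, a quick sketch using the third-party psutil package (assumed to be installed) can log the load while the loop runs:

import time
import psutil  # third-party: pip install psutil

def log_cpu_load(hours=24.0, interval_s=60.0):
    # Print average CPU utilization once per interval while the benchmark loops.
    end = time.time() + hours * 3600
    while time.time() < end:
        load = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        print(time.strftime("%H:%M:%S"), "CPU", load, "%")
        if load < 90.0:
            print("  warning: load dropped below 90% -- the benchmark may have stopped")

if __name__ == "__main__":
    log_cpu_load(hours=24.0)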
 

apoppin

Lifer
Mar 9, 2000
alienbabeltech.com
"burn-in" is the period of time that the user's brain gets conditioned to his new computer during which he runs lots of tests. ;)

there is no "burn in" . . . it is a myth
:roll:

Here is the info copied from the link above: [WELL worth reading]
There is burn-in and then there is burn-in. In semiconductor manufacturing terminology, "burn-in" is a stage of the production flow after packaging in which the CPU is placed in an elevated-temperature environment and is stressed at atypical operating conditions. The end goal of this is to dramatically reduce the statistical probability of "infant mortality" failures of product on the street.

"Infant mortality" is a characteristic of any form of complex manufacturing: if you were to plot device failures on the y-axis and time on the x-axis, the graph should look like a "U". As the device is used, initially quite a few fail, but as time goes on this number drops off (you are in the bottom of the "U" in the graph). As the designed life of the product is reached and exceeded, the failure count rises back up again. Burn-in is designed to catch the initial failures before the product is shipped to customers and to put the product solidly in the bottom section of the "U" graph, in which few failures occur.

During this process there is a noticeable and measurable circuitry slow-down on the chip that is an unfortunate by-product of running at the burn-in operating point. You put a fast chip into the burn-in ovens and it will always come out of the ovens slower than when it went in - but the ones that were likely to fail early on are dead and not shipped to customers.
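The "U" described above is the classic bathtub curve from reliability engineering. A tiny numeric illustration of that shape (the hazard terms and constants below are made up for illustration, not measured failure data):

import numpy as np

# Bathtub hazard rate = falling infant-mortality term + flat random-failure term
# + rising wear-out term. All constants are illustrative only.
t = np.linspace(0.05, 10.0, 200)            # time in arbitrary units
infant_mortality = 0.5 * t ** -0.7          # high at first, drops off quickly
random_failures  = 0.05 * np.ones_like(t)   # flat bottom of the "U"
wear_out         = 0.001 * t ** 3           # climbs as the designed life is exceeded
hazard = infant_mortality + random_failures + wear_out

for i in (0, 50, 100, 150, 199):            # a few samples showing the U shape
    print("t = %5.2f   hazard = %.3f" % (t[i], hazard[i]))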

There are two mechanisms that cause the circuitry in CMOS - particularly modern sub-micron CMOS - to slow down when undergoing the burn-in process: PMOS bias-temperature instability (PMOS BTI) and NMOS hot-electron gate-impact ionization (known as "NMOS hot-e"). Both of these effects are complex quantum-electrical effects that result in circuitry slowing down over time. You should be able to type either of these two terms into Google to read more about what is actually happening. The end-result is, as mentioned, that the chips will start to fail at a lower frequency than they did before going into burn-in due to the transistor current drive strength being reduced.

There is another use of the term "burn-in" with regard to chips, employed by system builders: a test for reliability intended to reduce customer returns due to component failure. This usually consists of putting the system together, plugging it in and running computational software on the system for a period of 24-48 hours. At the larger OEM companies, this is often done at a higher than typical operating temperature.

Some time ago someone on the internet wrote a very factual-sounding article on the benefits of running a CPU at a higher than typical voltage for a day or two to improve its "overclockability". This author wrote some scientific-sounding verbiage about how NMOS hot-e actually improves the drive strength of PMOS devices as a supposed explanation for why this method works. Reading this particular article and, even worse, seeing people commenting that this was a wonderful article that everyone should follow was the reason why I started posting on AnandTech way back when. The author was wrong on several key points - primarily that NMOS hot-e can occur in electron-minority (hole-majority) carrier devices that are biased so as to repel electrons - and I contacted the author with a wide assortment of technical journals showing that he was wrong. He was not particularly open to the possibility that he might be mistaken and never removed the article from the website, as far as I'm aware. Suffice to say, however, that he did not understand basic semiconductor electronics and was wrong.

There is no practical physical method that could cause a CPU to speed up after being run at an elevated voltage for an extended period of time. There may be some effect that people are seeing at the system level, but I'm not aware of what it could be. Several years ago, when this issue was at its height on the Internet, I walked around and talked to quite a few senior engineers at Intel, asking if they had heard of this and what they thought might be occurring. All I got were strange looks followed by reiterations of the same facts as to why this couldn't work that I had already figured out by myself. Finally, I was motivated enough to ask for and receive the burn-in reports for frequency degradation for products that I was working on at the time. I looked at approximately 25,000 200 MHz Pentium CPUs and approximately 18,000 Pentium II (Deschutes) CPUs and found that, with practically no exceptions at all, they all got slower after coming out of burn-in by a substantial percentage.

To me there is no doubt in my mind that suggesting that users overvolt their CPU's to "burn them in" is a bad thing. I'd liken it to an electrical form of homeopathy - except that ingesting water when you are sick is not going to harm you and overvolting a CPU for prolonged periods of time definitely does harm the chip. People can do what they want with their machines that they have bought - as long as the aware that what they are doing is not helping and is probably harming their systems. I have seen people - even people who know computers well - saying that they have seen their systems run faster after "burning it in" but whatever effect they may or may not be seeing, it's not caused by the CPU running faster.