Do you compile/simulate with an overclocked CPU?

  • Yes

  • No



WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
Since you can never really know whether your overclocked CPU is truly stable (especially those without unlocked multipliers), do you still compile code or run simulations with an overclocked CPU? I ask because I worry that bits will get corrupted, producing irregular results, even though my CPU has passed Prime95 (16 hrs) and IntelBurnTest (10 passes, "Very High"). For example, while programming in C#, DateTime.Now was giving me random times for a short while. How do I know this strange behavior wasn't attributable to my overclocked CPU? As an EE student, I also don't want to run simulations that return skewed data because of my CPU.
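Regarding the DateTime.Now weirdness: one cheap way to tell a misbehaving wall clock apart from, say, an NTP adjustment is to sample it against the OS's monotonic clock, which should never jump. A rough Python sketch (the function name and tolerance are my own inventions, not from any post here):

```python
import time

def check_clock_sanity(samples=10_000, tolerance=0.5):
    """Sample the wall clock alongside the monotonic clock and flag any
    disagreement larger than `tolerance` seconds, a crude way to separate
    OS/timer quirks from something genuinely flaky in the hardware."""
    wall0, mono0 = time.time(), time.monotonic()
    anomalies = []
    for i in range(samples):
        wall, mono = time.time(), time.monotonic()
        drift = (wall - wall0) - (mono - mono0)
        if abs(drift) > tolerance:
            anomalies.append((i, drift))
    return anomalies

print(check_clock_sanity())  # an empty list means no jumps were observed
```

If this stays clean while DateTime.Now still acts up, the problem is more likely in the software layer than in the overclock.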
 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
I run iOS simulators on an OC'ed hackintosh. I've noticed some strange things, such as emitter cells being randomly moved, but I mainly test on the actual device, so I'm not too worried. When I do a final compile or a major test compile, I'll go back to stock clocks. When I'm compiling 4-5 times every few minutes to rapidly test new code, the extra speed outweighs the minor problems that may arise.
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
Modern processors have pretty good error correction. A good way to spot a CPU hitting errors and correcting them is IBT: a lower FLOPS number while overclocked is usually a sign of instability.
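IBT reports its own GFLOPS figure, but the underlying idea (time a fixed workload and watch for an unexplained throughput drop between runs) can be sketched in a few lines of Python. This is only a toy illustration of the principle, not a substitute for IBT:

```python
import time

def measure_throughput(n=2_000_000):
    """Time a fixed floating-point workload and return operations/second.
    A run that is mysteriously *slower* at a higher clock can hint that
    the CPU is internally retrying or correcting work."""
    x = 1.0000001
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # 2 floating-point ops per iteration
    elapsed = time.perf_counter() - t0
    return (2 * n) / elapsed

baseline = measure_throughput()
second_run = measure_throughput()
# A large drop between otherwise identical runs is worth investigating.
print(f"{baseline / 1e6:.1f} vs {second_run / 1e6:.1f} MFLOP/s")
```

In practice you would record the stock-clock number once and compare every overclocked run against it.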
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I used to, until I found out that a bunch of stuff I had done turned out corrupted and full of bogus results... the scary part was that the bogus results were only detectable by a trained eye. To the outside observer, the results would have looked in line with expectations.

Since then I only compile and run my simulations on stock hardware configured rigs.

This is my take on OC'ing systems that are doing things for which the computational results matter - it is an amateur move to put so much at risk when the prospective gains are so little.

Trash everything for the opportunity to gain maybe 40% increase in compute performance? No thank you.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Compilation is heavily CPU-driven. It is also run over and over on the same files with the same expected results, and because I also do TDD, the output is automatically tested. I run a more realistic 24/7 overclock which I confirm stable over a much longer period and a wider set of tests. If a bit error did creep in, the testing process would flag it up, which so far has never happened.

I think some people overclock too far and trade off stability; I just don't do that. I back off from the absolute best clock speeds, because a speed that was hard to reach even with lots more voltage is probably never going to be stable. If I ever spotted any oddness, however rare, I would go down another 100MHz to be sure. The extra speed isn't worth it, but neither is leaving 30-50% of the chip's possible performance unutilised.
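The "same files, same results, and the tests flag any bit error" approach amounts to repeating a deterministic job and comparing digests. A minimal Python sketch, with a made-up integer workload (the constants are an arbitrary 64-bit LCG) standing in for a compile or test run:

```python
import hashlib

def deterministic_workload(seed=12345, n=100_000):
    """A fixed integer computation; integer math in Python is exact,
    so every healthy run must produce a byte-identical digest."""
    h = hashlib.sha256()
    x = seed
    for _ in range(n):
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
        h.update(x.to_bytes(8, "little"))
    return h.hexdigest()

# Any disagreement across repeats means something silently corrupted a bit.
digests = {deterministic_workload() for _ in range(3)}
assert len(digests) == 1, f"runs disagree, possible silent corruption: {digests}"
print("all runs identical:", digests.pop()[:16])
```

The same idea works on real build artifacts: hash the compiler output of two identical builds and diff the hashes (assuming the toolchain produces reproducible binaries).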
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I used to, until I found out that a bunch of stuff I had done turned out corrupted and full of bogus results... the scary part was that the bogus results were only detectable by a trained eye. To the outside observer, the results would have looked in line with expectations.

Did you use ECC RAM? How do you know it was the CPU and not the RAM?
 
Dec 30, 2004
12,553
2
76
I used to, until I found out that a bunch of stuff I had done turned out corrupted and full of bogus results... the scary part was that the bogus results were only detectable by a trained eye. To the outside observer, the results would have looked in line with expectations.

Since then I only compile and run my simulations on stock hardware configured rigs.

This is my take on OC'ing systems that are doing things for which the computational results matter - it is an amateur move to put so much at risk when the prospective gains are so little.

Trash everything for the opportunity to gain maybe 40% increase in compute performance? No thank you.

Yes, but you get done 40% faster, so if you have to trash it and do it again you're only 20% behind!!!!
 

Dravic

Senior member
May 18, 2000
892
0
76
I think some people overclock too far and trade off stability; I just don't do that. I back off from the absolute best clock speeds, because a speed that was hard to reach even with lots more voltage is probably never going to be stable.


I do something similar. I find the max bootable OC, back down to stable (tested), then back down another 5-10%. I would run scripts and some programs on an overclocked PC, but nothing where I relied on the computational results.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Did you use ECC RAM? How do you know it was the CPU not the RAM?

Both (I had systems that used ECC and others which did not). I was also able to duplicate the same portfolio of symptoms on both my AMD and Intel based rigs.

I knew it was the CPU because the "problem" went away when I reduced the overclock.

The issue here, as is the case with all silent data-corruption, is detection. Just because I was able to reduce the overclock and make a known corruption symptom go away does not mean that I made all the unknown corruption issues go away. False negatives are a bitch.

You only know to look for corruption where you know to look for it. When it happens somewhere that you aren't looking then it goes undetected.

So you start bolstering your detection methods, which takes up more time and resources (the two things you were attempting to gain by overclocking in the first place). And it still doesn't ensure you are not at risk; at best you are merely reducing the known risk while remaining in the dark on the unknown risks.

But it is like this: this is a competitive world, and as much as I am altruistic in sharing my educated opinions with my fellow enthusiasts, I also have no problem letting competing engineers and academicians undermine themselves by overclocking their systems and trashing their results (whether or not they catch on when it happens under their own noses is their problem).

OC'ing your rig when doing anything that matters on it IS a classic tortoise-and-the-hare situation.

And the moral of the tortoise and the hare story is that you need idiots (hares) who foolishly run around (OC'ing) thinking they are gaining some advantage over everyone else (ego mixed with arrogance, a toxic cocktail) in order for the tortoise to eventually win out.

In other words, without the hare the turtle is just one slow-ass turtle. It takes the foolishness of the hare to make the turtle into a winner.

And for that reason I have had many successes in my life, once I stopped being the hare to other people's tortoises and became a tortoise myself. That was a key turning point in my career, and since making that transition I have watched the same thing play out in many other people's careers, as if it were some universal learning curve (and not everyone makes the climb).

Slow down and get shit right the first time, and people will value you for your consistency more than for the random one-hit home runs that come once a season.

So to all you hares out there I say: "Keep it up! Don't stop. Sure you know what you are doing, you are more clever than everyone else; other people don't do it because they aren't as clever and crafty as you. You are special. Please keep OC'ing your hardware and making your results all the more likely to be crap! The rest of us are counting on you continuing to be you ;)"

Now don't confuse my position here with being anti-OC'ing in all regards. I do OC, as a hobby, on my home computers, for stuff that doesn't really matter to my livelihood (or someone else's livelihood).

There is amateur vs. professional OC'ing and there is responsible vs. irresponsible OC'ing.

Amateur OC'ing can ONLY be responsible OC'ing if the amateur restricts the application of OC'ing to those situations in which the rig won't be used for anything critical. Don't OC your computer if you are a freelance consultant or running your own business, etc. You will race ahead like the hare only to crash and miss a critical deadline (or worse, turn in junk results and lose all credibility for the future).

It is VERY irresponsible for an amateur to OC a rig for a friend or family member, or to sell a pre-OC'ed rig, because the amateur OC'er has zero control over the types of computations and applications the OC'ed computer is tasked with.

Someone might be doing their home finances on that OC'ed rig, or it might silently corrupt all their digital photos of their honeymoon (happened to me) or baby photos of their kids. Had they been made aware of the real risks that come with OC'ing, they might not have been so willing to let their nerd friend OC their computer. Deciding for someone else that the risk is acceptable is the very definition of irresponsible.

A professional can do responsible OC'ing but many people confuse professionals with being amateurs who have a lot of experience OC'ing in a DIY manner. The difference between amateur and professional is not a matter of experience (time spent OC'ing), rather it has to do with training and awareness. Knowing what to be looking for, how to quantify risk, mitigate it, detect it, etc.

Anand Lal Shimpi said:
Glen took me on a tour of Centaur's simulation lab. To say it was a different experience would be an understatement. While some machines were racked, there were a lot of desktop motherboards running Core i5s and Core i7s running out of cases:

The systems that were in cases were water cooled Core i7s, overclocked to 5GHz. There are two folks at Centaur who build each and every one of these machines, and overclock them. You and I know that overclocking both Nehalem and Sandy Bridge results in much better performance for the same dollar amount, but this is the first time I've seen overclocking used to speed up the simulation of microprocessors.

Source

When you design and validate CPUs for a living, you know exactly what to do (and what not to do) when OC'ing your hardware. The folks at Centaur know what they are doing; they are professionals. But that is because of the uniqueness of their profession (designing/validating x86 processors for a living).

You don't get to be an aeronautical engineer by working at Boeing pushing a broom for 20yrs (loads of experience in the aerospace industry)...the guy who just graduated college with a 4yr degree knows more about designing airplanes on his first day on the job than the guy who has pushed a broom for 20yrs.

Same with the difference between professional and amateur OC'ing. I am not a professional OC'er; I merely know enough to know that I am an amateur, without question. It took me a while to gather my wits and realize this. As such, you won't see me compiling or running simulations on OC'ed hardware anymore.

But I'm counting on young whippersnappers being foolish and shooting themselves in their own feet; it's what makes the rest of us tortoises invaluable (highly compensated :p) to our employers :sneaky:
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
I used to, until I found out that a bunch of stuff I had done turned out corrupted and full of bogus results... the scary part was that the bogus results were only detectable by a trained eye. To the outside observer, the results would have looked in line with expectations.

Since then I only compile and run my simulations on stock hardware configured rigs.

This is my take on OC'ing systems that are doing things for which the computational results matter - it is an amateur move to put so much at risk when the prospective gains are so little.

Trash everything for the opportunity to gain maybe 40% increase in compute performance? No thank you.

Well, there's my answer :\. This really does ruin the "free performance" aspect of overclocking. Initially I was against OC'ing for this very reason; however, after 4 years, my low clock speed is starting to show its age. Games showed a real improvement with the 700MHz increase.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I wouldn't consider running at stock settings invulnerable to silent corruption issues (Pentium III 1.13GHz). I once read a study which concluded that underclocked CPUs have tangibly fewer silent corruption issues and OS crashes than stock CPUs, which in turn have fewer issues than OC'ed CPUs. I would definitely feel more comfortable running a water-cooled SB setup at 4.2GHz on my motherboard (Asus ROG), never heating past 50C, with a quality PSU, than running the same CPU at 3.4GHz (stock) with a stock cooler, a low-end mobo, and a just-adequate PSU.

I knew it was the CPU because the "problem" went away when I reduced the overclock.

IMHO it's still not enough to conclude with 99.9% certainty that it was purely the CPU's fault. Maybe the mobo's power delivery was inadequate for the overclocked settings, or the PSU was? Did the computer pass 48H of Linpack at 8GB sample size? My system crashed after 33H of Linpack at 4.6GHz, so I backed it down to 4.5GHz. Replicating the issue in another computer would have been more conclusive.
I guess the point still stands: overclocking the CPU makes the rig less reliable as a whole, even if the OC'ed CPU isn't directly responsible for the issues.
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
I wouldn't consider running at stock settings invulnerable to silent corruption issues (Pentium III 1.13GHz). I once read a study which concluded that underclocked CPUs have tangibly fewer silent corruption issues and OS crashes than stock CPUs, which in turn have fewer issues than OC'ed CPUs. I would definitely feel more comfortable running a water-cooled SB setup at 4.2GHz on my motherboard (Asus ROG), never heating past 50C, with a quality PSU, than running the same CPU at 3.4GHz (stock) with a stock cooler, a low-end mobo, and a just-adequate PSU.



IMHO it's still not enough to conclude with 99.9% certainty that it was purely the CPU's fault. Maybe the mobo's power delivery was inadequate for the overclocked settings, or the PSU was? Did the computer pass 48H of Linpack at 8GB sample size? My system crashed after 33H of Linpack at 4.6GHz, so I backed it down to 4.5GHz. Replicating the issue in another computer would have been more conclusive.

He did:

I was also able to duplicate the same portfolio of symptoms on both my AMD and Intel based rigs.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,732
155
106
Every CPU I've ever owned has been overclocked, and I've compiled code on all of them.
For me, compiling code is the best stability test for both CPU and memory.
Ultimately it's the final/best test of system stability for me, as I'll have 30+ day uptimes with countless hours spent compiling code and running it.

This is likely why my overclocks are so much lower than most everyone else's :)
ie: mild/honest overclocks

I respect anyone that feels differently about this, especially if your livelihood is at stake.
Also time/uncertainty/etc.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
You only overclock if your data doesn't mean anything of value to you. Running Prime95 etc. is no guarantee of stability at all.
 

iCyborg

Golden Member
Aug 8, 2008
1,344
61
91
Lepton87 said:
I wouldn't consider running at stock settings invulnerable to silent corruption issues (Pentium III 1.13GHz). I once read a study which concluded that underclocked CPUs have tangibly fewer silent corruption issues and OS crashes than stock CPUs, which in turn have fewer issues than OC'ed CPUs. I would definitely feel more comfortable running a water-cooled SB setup at 4.2GHz on my motherboard (Asus ROG), never heating past 50C, with a quality PSU, than running the same CPU at 3.4GHz (stock) with a stock cooler, a low-end mobo, and a just-adequate PSU.

IMHO it's still not enough to conclude with 99.9% certainty that it was purely the CPU's fault. Maybe the mobo's power delivery was inadequate for the overclocked settings, or the PSU was? Did the computer pass 48H of Linpack at 8GB sample size? My system crashed after 33H of Linpack at 4.6GHz, so I backed it down to 4.5GHz. Replicating the issue in another computer would have been more conclusive.
He did:
I still agree with the rest of his post: running at stock doesn't mean you're 100% free of these corruptions, just less likely, perhaps a lot less likely. But if you're doing something critical, you'll want to introduce some redundancy to serve as quality control, in which case OC'ing will matter much less. Unless the side effects are very pronounced, but I'm not under that impression.

Anyway, I OC, but my 24/7 settings are typically not close to the limit, and usually with better thermals than stock clocks with a factory cooler.
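The redundancy-as-quality-control idea can be sketched as a simple majority vote over repeated runs. A Python sketch (the function names are illustrative, not from any real tool, and `simulate` is a made-up stand-in):

```python
from collections import Counter

def run_with_redundancy(fn, args, replicas=3):
    """Run the same computation several times (ideally on several
    machines) and accept the majority result; any disagreement
    exposes a silent error that a single run would never reveal."""
    results = [fn(*args) for _ in range(replicas)]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < replicas:
        print(f"warning: only {votes}/{replicas} runs agreed")
    return winner

def simulate(x):
    # Stand-in for a real simulation step.
    return sum(i * x for i in range(1000))

print(run_with_redundancy(simulate, (7,)))  # prints 3496500
```

On a single machine this only catches transient errors; spreading the replicas across machines also catches a systematically flaky overclock.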
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I don't worry too much about silent data corruption, since everything I do is run through several machines to confirm the results :D


OP, you sound like a student. Chances are it won't matter too much, but you can always do a run on a lab machine if you want to be sure your results are good.
 

OVerLoRDI

Diamond Member
Jan 22, 2006
5,490
4
81
I'd do 3D renders/art with overclocked hardware. But compilation and data work? Not so much.
 

samboy

Senior member
Aug 17, 2002
223
94
101
I do all compiling/testing at 4.5GHz on a Sandy Bridge 2600K.
I've never seen any issues due to the OC, and if there were some form of data corruption, the testing would fail and pick it up.

No problems for the last 18 months (other than an issue with a power supply that needed to be replaced).
 

cytg111

Lifer
Mar 17, 2008
25,652
15,155
136
Funny... when you say data corruption, my first instinct is the data storage and its associated bus. Any chance that got OC'ed too?
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
At home, yes; at work, no. It really is the way IDK pointed out. Financial firms (traders specifically), for instance, ran overclocked systems for years - I don't know if they still do. But there were some firms where the traders were dressed in suits, except they had shorts on because the systems under their desks ran so hot.

If a couple of professionals in IT hand-built my system (man, would that be awesome!), then I'd be very happy for fast compiles on large code bases. But for debugging with an emulator pod, I'd rather have a bog-standard, reliable machine.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
or it silently corrupts all their digital photos of their honeymoon (happened to me) or baby photos of their kids

I'm curious how this happened. CPU error? Memory? HDD?

I am setting up a file server/HTPC combo (eyes on Windows 8 and its Storage Spaces solution, since I'd have trouble using FreeNAS as an HTPC) and felt forced into going AMD to get cheap ECC + USB 3.0 + SATA-III, rather than expensive Intel Xeon combinations. How important do you figure ECC is for a 24/7 file server?