Hi forum. I've been a regular reader of AnandTech for a couple of years now, but this is my first time writing in the forums.
So far all my computers have been laptops - but now that I've graduated from university and use my computer almost exclusively at home, I'm considering building a high-end desktop system. I'm currently at the stage of choosing the individual components. The CPU will probably be a 2600 or 2600K with an Akasa Venom cooler.
Basically, my questions are these:
1. Is overclocking recommended for distributed computing (see below)?
2. How much of a performance increase can I expect over a stock 2600K (ideally as a rough percentage) if the only overclocking I do is to click "fast" or "extreme" in the ASUS automatic overclocking software that comes with the new motherboard - i.e. no manual overclocking whatsoever?
Now, the reason I felt the need to ask on these forums is how distributed computing works. You've probably heard of SETI@home or Folding@Home, so you know the general idea. My particular preference (the one I'll be running on the new rig) is World Community Grid (WCG), which runs on the BOINC client. In brief, it uses your computer's idle CPU cycles to perform calculations on proteins and such that could help scientists develop new drugs to treat diseases. The project sends your computer a "work unit", the computer spends a couple of hours working on it in the background, and once finished, it uploads the results to the project and downloads a new work unit. You get points for every work unit your computer completes successfully.
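Just to check my own understanding of that cycle, here's a toy Python sketch of the fetch -> compute -> upload loop. All the function names are made up by me - the real BOINC client is obviously far more involved - this only shows the shape of the loop:

```python
# Toy sketch of the BOINC work-unit cycle described above.
# Every name here is hypothetical; this is illustration only.

def fetch_work_unit():
    """Pretend to download a work unit from the project server."""
    return {"id": 1, "data": list(range(10))}

def compute(unit):
    """Stand-in for hours of protein-crunching in the background."""
    return sum(x * x for x in unit["data"])

def upload_result(unit, result):
    """Pretend to send the result back and earn points."""
    print(f"Unit {unit['id']} done, result {result}")

unit = fetch_work_unit()      # download a unit
result = compute(unit)        # crunch it (hours, in reality)
upload_result(unit, result)   # upload, then the cycle repeats
```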
What makes distributed computing so special in terms of overclocking is this:
a. Projects like WCG don't send a particular work unit just to you - they send identical copies to several different people and get several results back, which should all match. The WCG server then compares the results. If they're identical, everyone gets points for the work; if there are discrepancies, some or all of the people who worked on that unit get no points. So if an overclock causes the CPU to output incorrect or corrupt data, the extra speed is useless, because you get zero points for every corrupt work unit. Does overclocking actually affect the correctness of CPU calculations in this way? The BOINC help documentation says it might - in fact they discourage overclocking for exactly this reason - but I was hoping someone on this forum could offer an opinion.
b. With the BOINC client, the CPU processes one separate work unit per logical thread, all at the same time. Each work unit runs flat out, but as a low-priority Windows process, so it only soaks up spare CPU cycles. The end result is that every core runs at a constant 100% utilisation. In the case of an i7-2600K, that means all 8 threads at 100% for as long as the project is running in the background (i.e. up to 24 hours a day). This means every bit of performance gained by overclocking is actually going to be used, but it also pushes the CPU and cooling system to their absolute limit and demands perfect stability, in contrast to more "normal" activities like gaming and video encoding - so again, I was hoping some experienced forum readers could offer their opinion on this.
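Point (a) above, as I understand it, works something like this toy sketch (my own simplification - WCG's real validator is surely far more sophisticated):

```python
# Toy quorum check: several hosts return results for the same unit;
# hosts agreeing with the majority score points, the rest score zero.
# Hypothetical simplification, not WCG's actual validator.
from collections import Counter

def validate(results):
    """results maps host name -> computed result for one work unit."""
    majority, count = Counter(results.values()).most_common(1)[0]
    if count < 2:  # no two hosts agree -> nobody scores
        return {host: 0 for host in results}
    return {host: (100 if r == majority else 0)
            for host, r in results.items()}

# Two hosts agree; a third (say, an unstable overclock) returned garbage.
scores = validate({"hostA": 285, "hostB": 285, "hostC": 9999})
print(scores)  # hostA and hostB score 100, hostC gets 0
```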
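And point (b) - one CPU-bound worker per logical thread - looks roughly like this (again my own toy code; BOINC additionally drops the workers to low priority, which I've left out here):

```python
# Toy sketch: saturate every logical thread with one CPU-bound worker,
# the way BOINC runs one work unit per thread. Illustration only.
import os
from concurrent.futures import ProcessPoolExecutor

def crunch(unit_id):
    """Stand-in for a CPU-bound work unit."""
    total = 0
    for i in range(10**6):
        total += i
    return unit_id, total

if __name__ == "__main__":
    n = os.cpu_count()  # 8 logical threads on an i7-2600K
    # One process per logical thread keeps every core pegged at 100%.
    with ProcessPoolExecutor(max_workers=n) as pool:
        for unit_id, result in pool.map(crunch, range(n)):
            print("finished unit", unit_id)
```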
For reference, my current laptop has a Core 2 Duo T7300 (2GHz). Running the project with just the laptop's built-in stock cooling gives a CPU temperature of 92 degrees Celsius, and I used to run it like this for 8-24 hours a day for about a year (before I got fed up with the noise of the little laptop fan constantly at full speed ^_^).