The sad thing is that nowhere in this thread is there any economic analysis of what running DC actually costs anyone, especially when it runs as a background app alongside light everyday use. From what little information is available to me, most DC projects were founded on the premise that the typical user would run the client as a background task, chewing up otherwise-unused CPU cycles. That whole concept of DC as a cruncher of spare CPU time was born in an era when technologies like SpeedStep and CnQ were either nonexistent or far less effective than they are today. To put it another way, the swing in power consumption between a CPU at idle and at load has grown much larger than it was in the old days, because modern CPUs are much better at idling efficiently.
On the flip side, the total power consumption of a non-overclocked CPU has come down quite a bit except in some extreme cases, and even in those extreme cases, the work per watt you get versus other, lower-power CPUs of the same generation is actually quite remarkable when all cores are effectively utilized. A computer properly configured for DC apps, in this day and age, would probably draw less power per core at load (while producing far more work per core) than an older CPU consumes while idle.
In other words, running a DC app on your Q6600 would probably cost you less than letting four older machines from 5-10 years ago sit idle.
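To put rough numbers on that claim, here's a quick sketch of the arithmetic. Every wattage and the electricity rate below are made-up illustrative values, not measurements from any real system:

```python
# All figures hypothetical, purely to illustrate the cost arithmetic.
HOURS_PER_MONTH = 24 * 30
RATE_PER_KWH = 0.12  # assumed electricity rate, USD per kWh

def monthly_cost(watts, rate=RATE_PER_KWH, hours=HOURS_PER_MONTH):
    """Electricity cost of a constant power draw over one month."""
    return watts / 1000 * hours * rate

# One modern quad crunching at full load vs. four old boxes sitting idle.
modern_load = monthly_cost(95)    # hypothetical Q6600-class draw at load
old_idle = 4 * monthly_cost(60)   # four older machines idling at ~60 W each

print(f"Modern CPU at load:   ${modern_load:.2f}/month")
print(f"Four old boxes idle:  ${old_idle:.2f}/month")
```

With these particular assumed numbers, the single loaded modern CPU comes out at roughly a third of what the four idling relics would cost; swap in your own wattages and local rate to see where your setup falls.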
Furthermore, I don't think it's really fair to say that the work being done in DC efforts is necessarily worth less than the work that could be done with direct donations. Please keep in mind that money does not solve all woes. Consider all the computers running f@h; do you have any idea how much more it would cost Stanford to operate and administer enough machines to replace all the work being done by f@h participants? Could they even muster enough manpower to take on such a task? At least with f@h, people can build, tweak, test, and operate f@h client machines with spare time that they presumably could not or would not monetize through work. In other words, that spare time being spent has zero real-world value in most (if not all) cases, but the equivalent amount of time Stanford would need to run scads of clusters or mainframes to do the same work without f@h folders would probably cost them a lot of money in the form of salaried workers (there are only so many easily-exploitable grads and undergrads available for this kind of work).
Finally, it should be noted that folding with overclocked machines should not be held up as the de facto standard when it comes to assessing the costs of running DC apps. I can only guess at the true motivation anyone might have for running a DC app full-time on a heavily OCed machine, but it should be obvious that OCed machines have worse power/performance ratios than ones running at stock (or at stock speeds with significant undervolts). About the only reason I could see to OC a f@h machine would be to reduce turnaround times on processing work units, though with modern CPUs and the SMP client, that should not be a big issue. Anyone looking to crank out the most work units per dollar spent on hardware and/or power would probably do a lot better with a lot of stripped-down clients running at stock speeds, undervolted, than with watercooled, OCed quads. The fact that anyone can tune such a system to run 24/7 without burning out all their hardware is impressive, but it is not necessarily economically efficient. Running a few OCed systems versus dozens of stock systems in a folding farm may be more feasible given limited space . . .
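As a back-of-the-envelope sketch of that trade-off: the wattages and work-unit rates below are invented for illustration (not benchmarks of any real chip), but they show how the work-per-watt arithmetic tends to punish overclocks:

```python
# Hypothetical configurations -- wattages and WU/day rates are made up
# to illustrate the work-per-watt comparison, not measured figures.
configs = {
    "stock, undervolted": {"watts": 70,  "wu_per_day": 4.0},
    "stock":              {"watts": 95,  "wu_per_day": 4.5},
    "heavily OCed":       {"watts": 160, "wu_per_day": 6.0},
}

RATE_PER_KWH = 0.12  # assumed electricity rate, USD per kWh

for name, c in configs.items():
    kwh_per_day = c["watts"] * 24 / 1000
    wu_per_kwh = c["wu_per_day"] / kwh_per_day
    cost_per_wu = kwh_per_day * RATE_PER_KWH / c["wu_per_day"]
    print(f"{name:>20}: {wu_per_kwh:.2f} WU/kWh, ${cost_per_wu:.3f}/WU")
```

Under these assumptions the OCed box finishes more work per day, but the undervolted stock box delivers noticeably more work per kWh, which is the number that matters when the power bill is the constraint.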
Anyway, I guess the point is, until anyone can actually prove that money spent directly on charities or on research is somehow more beneficial than time spent on a "charitable" DC project, the question of whether or not it is money well-spent is moot. It does stand to reason that the original underlying premise behind DC, namely that unspent CPU cycles consume power anyway and therefore can be used at almost no cost, is no longer applicable in most cases, and furthermore that many DC fanatics seem to go about DC in a way that is not exactly efficient or cost-effective except when it comes to conserving physical space. But hey, if that's what they want to do, then more power to 'em.
I don't run DC clients because I like to leave my machines off most of the time. I really can't afford to jack up my power bill any higher than it is. If I got rich or could figure out how to get someone to sponsor a DC effort as a promotional stunt, then I would run DC apps, though I would have a farm of stripped-down DC computers with efficient, minimalist PSUs, onboard graphics, no fans (if possible), probably no hard drives (if possible), and undervolted or LV/ULV/mobile CPUs rather than OCed behemoths. I'd probably have them all in racks in a colocation facility or something of that nature, where the hosting fees would presumably be lower than the electricity bills I'd rack up trying to host all that hardware at home. Uptime would likely be much improved as well, and hardware failures would be much less common than on a monster OCed rig.
But hey, since I'm a (relatively) poor boy and since I haven't figured out how to get anyone to sponsor a colossal-but-efficient promotional DC effort, I'll just have to let my CPUs remain silent. For now.