
using a legacy system for distributed computing


Turbonium

Platinum Member
When does it become notably inefficient and nonsensical to use a legacy system for distributed computing purposes? In other words, at what point can one argue that the power consumption and even physical/spatial overhead of the system becomes too much to justify the actual computing work that is being done, assuming such a point even exists?

I would imagine that determining this sort of thing would be a function of both current processor technology, as well as the legacy technology that is in question (i.e. it would be relative to what is currently available).

Practically speaking, the question becomes: is it always a good idea to use any available system for distributed computing, regardless of how slow it is? Put another way, if you have a 486 or Pentium still lying around, would it be better to dispose of it (i.e. recycle it), or use it to crunch away?

I suppose one could argue that by recycling older rigs, that same silicon can be put towards newer CPUs that would do the same work orders of magnitude faster. However, you'd still need to determine a point at which it would be a greater net benefit to do this, taking into account the efficiency of the recycling process (both in terms of energy and materials), as well as considering when (and if) that recycled bit of electronics will even end up in another distributed computing scenario at all. For example, assuming the average recycled CPU ends up in another distributed computing scenario in X years, Y% of the time, you'd still need to consider that during that entire time, it could have been crunching numbers, albeit at a much slower pace.
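The trade-off described above can be sketched as a simple expected-value comparison. All the numbers below (rates, the X-year delay, the Y% probability) are made up for illustration, not measured values:

```python
# Hedged sketch of the keep-vs-recycle trade-off, with hypothetical numbers.
# Compares (a) the work done by keeping the old CPU crunching over some
# horizon vs (b) the expected work if it is recycled now, where the
# recovered materials end up in a new crunching CPU only after a delay
# of `recycle_delay_years`, with probability `recycle_prob`.

def expected_work(old_rate, new_rate, horizon_years,
                  recycle_delay_years, recycle_prob):
    """Return (work if kept crunching, expected work if recycled).

    Rates are in arbitrary work-units per year; every input here is
    an illustrative assumption.
    """
    keep = old_rate * horizon_years
    # If recycled, the new CPU contributes only after the delay,
    # and only with probability `recycle_prob`.
    productive_years = max(horizon_years - recycle_delay_years, 0)
    recycle = recycle_prob * new_rate * productive_years
    return keep, recycle

# Example: old CPU does 1 unit/yr, a new one does 100 units/yr,
# 10-year horizon, 2-year recycling delay, 1% chance the recycled
# material ends up in another DC rig.
keep, recycle = expected_work(1.0, 100.0, 10, 2, 0.01)
print(keep, recycle)  # 10.0 vs 8.0 -> keeping the old CPU wins here
```

With a tiny re-use probability, the recycling branch loses even to a very slow CPU, which foreshadows the point made later in the thread that the re-use rate is effectively zero.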

I'm probably overthinking this, but I figured this would be the right place to ask.

Also: if you have a legacy rig and are using it for distributed computing, post the specs, and even pics if you have any.
 
Yes, such a point exists, and I'd put it at roughly a five-year-old system (of average performance for its era), unless the goals of the distributed project outweigh every other factor. Today even five years might be a stretch, with GPU processing taking the lead in performance and more power-miserly CPUs available.

However, you might find that really old systems don't work out well for such purposes for another reason: they're already at the end of their expected lifespan, and putting a heavy load on them may cause the motherboard or power supply to fail rather soon.

I don't think it's reasonable to assume that recycling old computers goes directly toward manufacturing new ones, except in the abstract sense that certain metals, once recycled, make their way into the general metal supply for all sorts of manufacturing (likewise recyclable plastics). The silicon doesn't, AFAIK, since silicon is one of the most abundant elements in Earth's crust.

The main issue to me is not materials or recycling but performance per watt. 486-based systems would waste so much power that you could buy and run one new system with far more performance than all the 486 systems combined, and that's before counting the space, or the environmental conditioning needed to keep the farm cool enough.
 
One should just assume that any recycled CPUs/components never make it into another distributed computing scenario. I don't even know why I brought it up, to be honest. Perhaps I was just over-analyzing and trying to consider all possible angles when determining the aforementioned point. Really though, recycled parts probably make it into a distributed computing project at such an infinitesimally small rate that it should just be considered to be zero or non-existent.

I'm more interested in a practical way to determine if it's worth it to use a given system for your average distributed computing project. You mentioned that the energy costs of running a 486 would be better put towards a newer system for the same purposes. Is this literally true? Is there any way to come up with a basic sort of function, with basic variables, to use in determining whether or not a given system should be used or scrapped? You mentioned performance per watt; can you elaborate, in terms of an actual equation?
 
Yes, it's true: ten 486 systems, for example, won't have anywhere near the performance of a modern system, yet the total power consumption of the 486 boxes will be higher, as will the average failure rate, by quite a lot (ignoring random failures, since we're contrasting a sample size of one on the modern side).

No, I can't come up with a definitive equation, especially since it will depend on what CPU architecture the distributed software is optimized for and what features any particular CPU has, so each system would need to be benchmarked running that same software. But I can throw out an idea of what it should look like:

P = performance: operations per second for the particular software used, OR the inverse of the time to complete one job (1/T); a benchmark result obtained with the same software the distributed computing project will use.

W = watts consumed in that time period; it'll need to be measured at the wall outlet and summed across all the systems on that line.

N = # of 486 systems

M = # of modern systems (1 here, or however many are being contrasted, e.g. M = 4 for four modern systems).

( N * P ) / W = performance per watt for all the 486s combined
( M * P ) / W = performance per watt for the modern system(s)

... using each platform's own P and W in its line; whichever result is higher is the more energy-efficient option.
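The comparison above can be written as a short function. The per-system wattages below are rough guesses, and the job times are borrowed from the Pentium MMX / Pentium 4 benchmark mentioned further down (standing in for a 486, which would be even slower):

```python
# Sketch of the performance-per-watt comparison described above.
# P is a per-system benchmark result (jobs/sec, i.e. 1/T for one job),
# W is per-system wall wattage; all the numbers here are illustrative.

def perf_per_watt(p_per_system, w_per_system, n_systems):
    """Total performance divided by total watts for a farm of n
    identical systems. Note that n cancels out: a farm is exactly
    as efficient as one of its members."""
    return (n_systems * p_per_system) / (n_systems * w_per_system)

# Ten legacy boxes at ~60 W each vs one modern box at ~200 W,
# using 7688 s and 293 s per job from the benchmark cited below.
legacy = perf_per_watt(p_per_system=1 / 7688, w_per_system=60, n_systems=10)
modern = perf_per_watt(p_per_system=1 / 293, w_per_system=200, n_systems=1)
print(modern > legacy)  # True -> the modern box wins on efficiency
```

Since n cancels, adding more legacy boxes never closes the efficiency gap; it only scales up both the work and the waste proportionally.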

To give you some idea, ten 486 systems will probably draw in excess of 600W at the wall, while one modern system could be well under 200W (depending on how it's built). I mentioned GPU processing, but even ignoring that, take a look at the following linked Tom's Hardware benchmarks: http://www.tomshardware.com/reviews/benchmark-marathon,590.html

They don't even go back as far as the 80486, but even the faster Pentium 166 MMX took 7688 seconds to do what a Pentium 4 at 3GHz (itself old by modern standards) did in 293 seconds. The Pentium 4 3GHz is pretty slow today, yet it was still 26 times faster than the Pentium MMX! Building with something more modern, you'd probably need closer to 100 times as many 486 systems, and adding a video card into the mix pushes that number even higher.

The power bill savings alone would pay for the new system fairly quickly (I'm too lazy to do the exact math, and there are too many variables to suggest a number), but at the 11 cents/kWh US average electricity price, your power bill could be about $385 higher per year running ten 486 systems, let alone the cost of running 26 to 100 of them.
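The $385/year figure above follows directly from the wattages and the electricity rate already given; here is the arithmetic spelled out:

```python
# Reproduces the ~$385/year figure: ten 486 boxes at ~600 W total vs
# one modern system at ~200 W, at the 11 cents/kWh US average rate.
# The wattages are the rough estimates from the post, not measurements.

HOURS_PER_YEAR = 24 * 365   # 8760, ignoring leap years
RATE_USD_PER_KWH = 0.11

def annual_cost_usd(watts):
    """Cost of running a constant load of `watts` for a year."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_USD_PER_KWH

extra = annual_cost_usd(600) - annual_cost_usd(200)
print(round(extra, 2))  # -> 385.44 USD/year more for the 486 farm
```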

http://www.tomshardware.com/reviews/benchmark-marathon,590-27.html
 
It's impossible to answer the question until we can come up with a "value" for a single DC work unit. Financially, it doesn't EVER make sense to use a machine for distributed computing...you get no money for it, and you spend money on the electric bill.

So you have to measure it in some other way. How much satisfaction do you derive from building/maintaining/running computers? How much happiness does it give you to know that you are helping solve problems? How much enjoyment do you get from being well-ranked in the DC world?

That varies from person to person. For example, someone who likes having lots of different hardware, particularly historical hardware, might get a kick out of having a full generational progression of computers working in a DC cluster. It would actually be amusing to compare the output across a 486 -> Pentium -> PII -> PIII -> P4 -> Core+ progression of machines.

But someone else, who gets more out of being well ranked, wouldn't want to waste their time or electricity on an obsolete system. They would basically balance the cost of electricity, WU output, and the cost of the system to reach the optimum level for building their cluster(s).

Recycling is basically a no-go. The exception is that you can sell certain older processors on eBay to saps who will try to extract the gold.
 