Turbonium
Platinum Member
When does it become notably inefficient and nonsensical to use a legacy system for distributed computing purposes? In other words, at what point can one argue that the power consumption and even physical/spatial overhead of the system becomes too much to justify the actual computing work that is being done, assuming such a point even exists?
I would imagine that determining this sort of thing would be a function of both current processor technology and the legacy technology in question (i.e. it would be relative to what is currently available).
Practically speaking, the question becomes: is it always a good idea to use any available system for distributed computing, regardless of how slow it is? Put another way, if you have a 486 or Pentium still lying around, would it be better to dispose of it (i.e. recycle it), or use it to crunch away?
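To make the power angle concrete, here's the kind of comparison I have in mind, in rough Python form. All the wattage and throughput numbers are made-up placeholders just to show the shape of the calculation, not measurements:

```python
# Rough energy-per-work-unit comparison between a legacy box and a modern one.
# Every number below is an illustrative placeholder, not a measured figure.

def joules_per_unit(watts, units_per_day):
    """Energy (in joules) consumed per completed work unit."""
    seconds_per_day = 24 * 60 * 60
    return watts * seconds_per_day / units_per_day

legacy = joules_per_unit(watts=60, units_per_day=0.1)   # e.g. an old Pentium box
modern = joules_per_unit(watts=120, units_per_day=50)   # e.g. a current desktop

print(f"legacy: {legacy / 1e6:.1f} MJ per work unit")
print(f"modern: {modern / 1e6:.3f} MJ per work unit")
print(f"legacy uses {legacy / modern:.0f}x the energy per unit of work")
```

With numbers anywhere in that ballpark, the old box burns hundreds of times more energy for the same science, which is really what the "is it ever not worth it" question comes down to.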
I suppose one could argue that by recycling older rigs, that same silicon can be put towards newer CPUs that would do the same work orders of magnitude faster. However, you'd still need to find the point at which recycling is the greater net benefit, taking into account the efficiency of the recycling process (both in energy and in materials), as well as whether (and when) that recycled bit of electronics ends up in another distributed computing scenario at all. For example, assuming the average recycled CPU ends up in another distributed computing rig in X years, Y% of the time, you'd still have to account for the fact that during that entire time it could have been crunching numbers, albeit at a much slower pace.
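A back-of-the-envelope way to frame that trade-off might look like the sketch below: compare the work the old box would grind out over some horizon against the expected work from the recycled-and-maybe-reused scenario. X, Y, and every throughput figure are unknowns/placeholders you'd have to fill in yourself:

```python
# Back-of-the-envelope framing of the "keep crunching vs. recycle" question.
# X, Y, and all throughput numbers are placeholders, not real data.

def work_if_kept(units_per_year_old, years):
    """Work the legacy box does if it just keeps crunching for `years`."""
    return units_per_year_old * years

def expected_work_if_recycled(units_per_year_new, years_until_reuse,
                              reuse_probability, horizon_years):
    """Expected work from the recycle-then-reuse scenario over the same horizon."""
    productive_years = max(horizon_years - years_until_reuse, 0)
    return reuse_probability * units_per_year_new * productive_years

X = 3       # years until the recycled material is back in service (placeholder)
Y = 0.25    # chance it lands in another crunching rig at all (placeholder)
horizon = 5

kept = work_if_kept(units_per_year_old=40, years=horizon)
recycled = expected_work_if_recycled(units_per_year_new=20000,
                                     years_until_reuse=X,
                                     reuse_probability=Y,
                                     horizon_years=horizon)

print(f"kept crunching:      {kept:.0f} units over {horizon} years")
print(f"recycled (expected): {recycled:.0f} units over {horizon} years")
```

Obviously the answer swings entirely on what you plug in for X, Y, and the relative throughputs (plus the energy cost of the recycling itself, which this ignores), but it at least puts the hand-waving into one formula.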
I'm probably overthinking this, but I figured this would be the right place to ask.
Also: if you have a legacy rig and are using it for distributed computing, post the specs, and even pics if you've got any.