http://www.cnn.com/2000/TECH/computing/07/07/seti.implication.idg/index.html
The basic idea is simple, says Dave McNett: "It's all based on not wasting the resource -- running distributed software on your machine and letting it use whatever resources you aren't using."
McNett is president of Distributed.net, a Birmingham, Ala.-based nonprofit research foundation founded in 1997 to compete in an encryption-breaking contest. The group has grown to 20 developers and has rallied a 190,000-machine network (93% are PCs) to break code and solve mathematical puzzles for fun and prizes.
These kinds of networks can accomplish a great deal, McNett says, because 90% of most computers' processing power goes unused. "During the day, most PCs spend most of their time flying tiny toasters around," he says. Even when computers are in use, the majority of tasks aren't CPU-intensive. Working in a spreadsheet, for example, is CPU-intensive only when the columns are computed. "CPUs are used only in short bursts," McNett says. "And that's not even mentioning 6 p.m. to 9 a.m. and weekends and holidays."
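In practical terms, a volunteer client like the ones McNett describes simply asks the operating system to schedule its number-crunching behind everything else, so the work soaks up only the cycles nobody is using. The short Python sketch below illustrates that idea; it is not Distributed.net's actual client, and fetch_work_unit and crunch are hypothetical stand-ins for whatever block of work a coordinating server would hand out.

# Minimal sketch of the spare-cycle idea: run work units at the lowest OS
# scheduling priority so they consume only CPU the user isn't using.
# This is an illustration, not Distributed.net's client; fetch_work_unit()
# and crunch() are hypothetical placeholders.
import os
import time

def fetch_work_unit():
    # Placeholder: a real client would download a block of work from a server.
    return list(range(1_000_000))

def crunch(work_unit):
    # Placeholder CPU-bound task standing in for key-cracking or signal analysis.
    return sum(x * x for x in work_unit)

def main():
    # Ask the OS to schedule this process behind everything else (Unix niceness),
    # so foreground applications stay responsive and the client soaks up idle cycles.
    try:
        os.nice(19)
    except (AttributeError, PermissionError):
        pass  # on Windows, priority would be lowered through a different API

    while True:
        result = crunch(fetch_work_unit())
        print("work unit done:", result)
        time.sleep(1)  # stand-in for reporting the result back to the server

if __name__ == "__main__":
    main()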
Application Limits
Massively parallel computing "does make sense for use in the oil industry, and we have used the technique (internally) for some of our computationally intensive problems," says John M. Old, director of information management for worldwide exploration and production at Texaco Inc. in Houston.
But distributed computing isn't for every job. "The SETI project lends itself to breaking the data into small, independent chunks, which makes the parallel computing fairly simple," Old explains. Unfortunately, not all data can be segmented that way, and many projects require complex communication among processors.
McNett acknowledges that there are plenty of things an IBM RS/6000 can do that a distributed network can't. "We can't do anything that's more data-intensive than CPU-intensive," he explains. For example, weather prediction is difficult because the data is very interrelated. Distributed computing is better at jobs such as animation rendering, in which each of the 30 frames per second that go into a movie like Toy Story is a separate task that can be distributed among thousands of computers.
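To see why jobs like rendering fit distributed computing so well, consider a rough Python sketch of the pattern Old and McNett describe: every frame is an independent work unit, so the only coordination needed is handing frames out and collecting the results. Here render_frame is a hypothetical stand-in for real rendering work, and local worker processes stand in for the volunteer PCs.

# Illustrative sketch of the "independent chunks" pattern: each frame is a
# self-contained task, so a pool of workers can process frames with no
# communication between them. render_frame() is a hypothetical placeholder.
from multiprocessing import Pool

def render_frame(frame_number):
    # Placeholder for a CPU-heavy, fully independent rendering job.
    checksum = sum((frame_number * i) % 255 for i in range(100_000))
    return "frame %05d rendered (checksum %d)" % (frame_number, checksum)

if __name__ == "__main__":
    frames = range(30 * 60)  # one minute of film at 30 frames per second
    with Pool() as pool:
        # Because no frame depends on another, frames can be handed out in any
        # order to any worker -- the essence of distributed rendering.
        for line in pool.imap_unordered(render_frame, frames):
            print(line)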
With those kinds of jobs in mind, the folks at Distributed.net are considering a commercial spin-off. At present, Distributed.net's machines are the equivalent of 42 of IBM's 144-node RS/6000s, the fastest computers on the market, at a net cost of about $120 million (based on the floating-point speed of the RS/6000 and of the Pentium II/266 PC, the average computer on the distributed network). "We're proud of that," McNett says, "but the potential number of machines dwarfs what we have now."
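Those equivalence figures are simple arithmetic on the network's aggregate floating-point throughput. The back-of-the-envelope Python sketch below works backward from the numbers quoted above to show the speed ratio they imply; it uses only figures from the article and adds no benchmark data of its own.

# Back-of-the-envelope check on the article's figures: if 190,000 mostly
# Pentium II/266-class PCs amount to 42 machines of 144 RS/6000 nodes each,
# what floating-point speed ratio does that imply?
NETWORK_MACHINES = 190_000
EQUIVALENT_SUPERCOMPUTERS = 42
NODES_PER_SUPERCOMPUTER = 144

pcs_per_supercomputer = NETWORK_MACHINES / EQUIVALENT_SUPERCOMPUTERS  # about 4,524 PCs
pcs_per_node = pcs_per_supercomputer / NODES_PER_SUPERCOMPUTER        # about 31 PCs

print("One 144-node RS/6000 is roughly equal to %.0f average PCs" % pcs_per_supercomputer)
print("One RS/6000 node is roughly equal to %.0f average PCs" % pcs_per_node)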
Meanwhile, even though ProcessTree hasn't yet set a pricing plan, CEO Steve Porter offers a ballpark figure of about $1,000 for the equivalent of a year's worth of CPU power from a Pentium II/400.
The company may pay in the range of $10 to $20 per month per computer - and even more for large-volume volunteers such as businesses. Payment will likely be in credits with an online retailer or service. For example, a participant might get discounts on his Internet service in exchange for running the software. "They're not going to be able to retire on this," Albea says, "but it's a resource just doing nothing, and instead they can be getting credits."
Since its site debuted in January - with virtually no advertising - ProcessTree has lined up more than 35,000 users representing more than 70,000 machines. "We are the largest body of available commercial computing power in the world right now," Porter says. "You can't get anything that can go faster than we can, and we get faster every day."
