- Sep 7, 2001
- 628
- 0
- 0
I know that each large problem has its own characteristics that may make it more or less suitable for distributed processing vs. processing on a supercomputer. I also know that supercomputers offer features for solving large tasks that distributed computing can't. So this is just speculation for fun; performance ratings are far from the final measure of a machine's usefulness.
Recently NEC revealed "The Earth Simulator," which it built in Japan. It is a vector supercomputer that hit 35 teraflops on the Linpack benchmark. That is 5x higher than the previous world supercomputer champ, IBM's ASCI White, which rang in at 7 teraflops or so.
I've heard that SETI has been able to pull in 40 teraflops worth of compute power (if you add up all of the processing being done by people running the clients.)
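That "add up all of the processing being done by people running the clients" figure is just client count times average per-client rate. A minimal back-of-envelope sketch, using made-up placeholder numbers (not real SETI@home stats):

```python
# Rough estimate of a distributed project's aggregate throughput.
# Both inputs are hypothetical assumptions, not measured figures.
active_clients = 500_000          # assumed number of machines crunching
avg_gflops_per_client = 0.08      # assumed sustained GFLOPS per machine

# GFLOPS -> TFLOPS: divide by 1000
aggregate_teraflops = active_clients * avg_gflops_per_client / 1000
print(f"~{aggregate_teraflops:.0f} TFLOPS aggregate")  # ~40 TFLOPS
```

With those assumed numbers the total lands around 40 teraflops, in the same ballpark as the SETI figure, which shows how sensitive the total is to the per-client estimate.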
Does anyone have any idea how much CPU power other distributed projects such as RC5-64 are pulling in? I'd be really curious to know how many teraflops they are doing and who is doing the most.
