How to calculate Cluster/Distributed Computational Potential (MFLOPS/GFLOPS)?

thorin

Diamond Member
Oct 9, 1999
7,573
0
0
I couldn't get a real answer to this question in General HW yesterday, so here we go again; hopefully you guys will be more helpful.

"I need to know how to caclulate the computational potential of a group of x86 machines in a cluster or distributed setup. ie: What's the computational power of 30 P3 500s (etc etc)."

Thorin
 

rjain

Golden Member
May 1, 2003
1,475
0
0
It depends not only on the raw CPU power but also on the speed of the network, the arrangement of the machines, and the type of code being run. It's like trying to estimate the FPS you'll get in UT2003 using just CPU MHz.
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
The easiest and most reliable way of measuring is to just benchmark it. Unfortunately, that may not always be possible, so you need to work with some rules of thumb. First, what broad category of program is it? FPU intensive? Integer intensive? I/O intensive? Then work out where your bottlenecks are going to occur and, from that, extrapolate to performance. A good profiler would help with this.
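
Something like this back-of-the-envelope check shows the idea in Python (every number below is a made-up assumption, not a measurement):

# Rough bottleneck estimate for a distributed job. All of the numbers
# here are placeholder assumptions -- swap in your own.

nodes           = 30
peak_flops      = 500e6       # assume ~1 FLOP per cycle on a 500 MHz P3
efficiency      = 0.25        # assume real code hits ~25% of peak
work_flops      = 1e12        # total floating-point work in the job
bytes_exchanged = 2e9         # total data shipped between nodes
net_bandwidth   = 100e6 / 8   # 100 Mbit Ethernet, in bytes/s

compute_time = work_flops / (nodes * peak_flops * efficiency)
network_time = bytes_exchanged / net_bandwidth

print("compute-bound" if compute_time > network_time else "network-bound")
print(f"compute: {compute_time:.1f} s, network: {network_time:.1f} s")

Whichever time dominates is the bottleneck you should worry about.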
 

dakels

Platinum Member
Nov 20, 2002
2,809
2
0
Agree with rjain of course, but just to state the obvious, there should be a theoretical potential from the CPUs alone. Say each 1000 MHz P3 is capable of 1 GFLOPS; then you have a theoretical potential of 30 GFLOPS for the system. Then of course all your other potential limiting factors come in, like network and bus speeds and the OS or software's multithreading efficiency, and blah blah blah.

I would think at the least you can estimate the theoretical processing power, though. You just need to know each CPU's theoretical potential, which you can usually find in a white paper or detailed spec sheet. I always see GFLOPS figures on Apple's stuff because they always liked to brag about how much better their G4s are than equivalent Pentiums; too bad everything else in the box bottlenecks, so the end result is often a slower machine than a comparable Pentium.
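
For what it's worth, here's a quick Python sketch of that estimate; the flops-per-cycle default is just an assumption you'd replace with whatever the spec sheet says for your CPU:

# Back-of-the-envelope aggregate peak for a homogeneous cluster.
# flops_per_cycle is an assumption -- look it up per CPU model.

def peak_gflops(num_cpus, clock_mhz, flops_per_cycle=1):
    """Theoretical peak, ignoring network, bus, and software overhead."""
    return num_cpus * clock_mhz * 1e6 * flops_per_cycle / 1e9

print(peak_gflops(30, 500))    # 30 x P3 500  -> 15.0 GFLOPS peak
print(peak_gflops(30, 1000))   # 30 x P3 1000 -> 30.0 GFLOPS peak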
 

thorin

Diamond Member
Oct 9, 1999
7,573
0
0
Originally posted by: dakels
Agree with rjain of course, but just to state the obvious, there should be a theoretical potential from the CPUs alone. Say each 1000 MHz P3 is capable of 1 GFLOPS; then you have a theoretical potential of 30 GFLOPS for the system. Then of course all your other potential limiting factors come in, like network and bus speeds and the OS or software's multithreading efficiency, and blah blah blah.

I would think at the least you can estimate the theoretical processing power, though. You just need to know each CPU's theoretical potential, which you can usually find in a white paper or detailed spec sheet. I always see GFLOPS figures on Apple's stuff because they always liked to brag about how much better their G4s are than equivalent Pentiums; too bad everything else in the box bottlenecks, so the end result is often a slower machine than a comparable Pentium.
Ya that's the kinda thing I want to do. I was just hoping there was a listing/table online somewhere that I could gather the information from. I suppose I could use results from SPEC scores online to find an average for different types of systems/procs.

Thorin
 

dakels

Platinum Member
Nov 20, 2002
2,809
2
0
Not that I know a lot about this topic, but I would think SPEC is the best unbiased resource. There probably aren't many consolidated listings, because the information isn't worth much if you don't consider the environment the processor is in and what tasks it is given to achieve a benchmark. I guess the best thing you could do is look at the resulting benchmarks from people/companies in similar situations and try to approximate your potential from those. Even a lot of theoretical processor benchmarks are pretty arguable, since many can be manipulated to give better results for the test architecture. Processor companies do this all the time for marketing reasons; they're not lying, but they are sure as hell giving you limited and biased information from "test" benchmarks.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
It's just 30 * 500 million = 15 GFLOPS (assuming one FLOP per clock). It's understood that it won't actually be able to attain this level of speed in actual use.

A higher figure can be arrived at if you consider SSE, but the P3's SSE is single precision only and isn't really so great for many scientific applications, so it's often discounted.

If you want a practical measure, you can't get one by calculation alone. You have to look at actual benchmarks like LINPACK.
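
For example, something like this (the measured number is a placeholder for whatever your actual benchmark run reports):

# Compare a measured LINPACK-style result against theoretical peak to see
# how efficient the cluster really is. The measured figure is hypothetical.

peak_gflops     = 30 * 500e6 * 1 / 1e9   # 30 x P3 500, 1 FLOP/cycle = 15 GFLOPS
measured_gflops = 6.0                    # placeholder benchmark result

print(f"peak: {peak_gflops:.1f} GFLOPS")
print(f"measured: {measured_gflops:.1f} GFLOPS "
      f"({100 * measured_gflops / peak_gflops:.0f}% of peak)")

On real hardware the measured number always comes in well below the theoretical peak.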