
parallel processing

armegeden

Junior Member
ey, i'm working on a cluster project @ school and me and this other guy are somewhat in charge of getting 50 mobos networked together running off of 1 HD and such. Anyone know any sites that discuss this sort of thing? Can someone give me a list of everything we'd need? Just from reading other posts, all i've gathered is:
-50 mobos (w/ processor and ram and everything already on)
-a case/rack of some sort
-dunno how many powersupplies.. i've read where someone said a 2:1 ratio, so we'd need like 25, but i would assume that varies some...
-1 HD
-1 Floppy
-1 Video Card
-1 keyboard
-1 monitor
but apparently depending on the OS we decide to use (as far as i know, none has been decided on), we may need 1 VC and 1 keyboard per motherboard?
how would we connect all of the motherboards together? the only thing that comes to mind (keep in mind my scope of knowledge is fairly limited, so i'm not aware of all the options and hardware and software out there) is having 1 mb serve as a "master", so to speak, and have the drives and stuff all connected to it just like a regular computer, and then hook all the other boards in a network hub and have the hub go to the master board. I'm sure there's another easier, better way, but I figured i'd just throw that out there...

thanks,
arm
 
There may be other ways, but the two easiest would be to have the hard drive connected to the 'main' motherboard, and then either have the rest of the boards use network-bootable NICs that let them load the OS from the system with the hard drive, or have all of the others boot from floppies (DOS or some small form of Linux are the only real options here) and use the remote hard drive as a 'server/storage' device where they keep any files they need. Booting from floppies would be cheaper and easier, since bootable NICs can be costly, and booting from DOS requires almost no configuration: just share the 'server' drive across the network. However, if you used bootable NICs, you could run just about any OS you want, as long as the 'server' is set up to support it.
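For the netboot option, the setup on the 'main' board boils down to two config files. None of the addresses or paths below come from this thread - they're purely illustrative, assuming a Linux master running an ISC DHCP server, a TFTP server for the boot loader, and an NFS export for the shared root filesystem:

```
# /etc/dhcpd.conf on the master (all addresses illustrative)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.10 192.168.1.60;   # one lease per diskless board
  next-server 192.168.1.1;           # TFTP server = the master itself
  filename "pxelinux.0";             # network boot loader sent to each NIC
}

# /etc/exports on the master - share the root filesystem read-only
/nfsroot  192.168.1.0/255.255.255.0(ro,no_root_squash)
```

Each diskless board's NIC broadcasts for an address at power-on, gets pointed at the master for its boot loader, and then mounts the exported directory as its root - no local drives needed.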

Beowulf clustering is also cool, but takes a bit more configuration and technical know-how. BadThad's link on clusters is a good start if you decide to go that way.
 
yea, we're also working on a 32 node beowulf, but apparently the guy supervising all this wants to try doing one in a different way as well. We were just gonna get a rack... or, well, seeing as how we have 50 boards, I would say get a few racks 🙂 and then just throw it all together and see what happens. None of us has done one tho, nor have we seen one or really know much about it; we just know that it can be done, and somehow or another I got stuck goin out and trying to find out how 😛 any idea as to the # of power supplies we may need? the supervisor guy said probably just 1 or 2, but i've also heard 25+, and I'm more inclined to go w/ the 25+ figure 🙂
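The 2:1 ratio can be sanity-checked with some back-of-the-envelope math. The wattage figures below are guesses for boards of this era, not measured numbers, so plug in your own:

```python
def psu_count(boards, watts_per_board=75, psu_watts=300, derate=0.7):
    """Rough estimate of how many power supplies a stack of boards needs."""
    # keep sustained draw under ~70% of a supply's rating for headroom
    usable = psu_watts * derate                      # 210 W usable per supply
    boards_per_psu = int(usable // watts_per_board)  # 2 boards per supply here
    return -(-boards // boards_per_psu)              # ceiling division

print(psu_count(50))  # 25 supplies -> matches the ~2:1 ratio mentioned above
```

With these assumed numbers you land right on 25 supplies for 50 boards, which is why "1 or 2" only works if you have access to much beefier (server-grade) supplies.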

thanks!!
 
beowulf is only good if you want a really scalable system. management and reliability are terrible. no node is even cluster-aware! it's only good for cheap parallel processing, and if a single node goes down the answer is wrong. the cluster doesn't manage itself, and it can't rebuild a node or move the work elsewhere.
 
just to clarify things for me, seeing as how i'm somewhat new to this all, what exactly is a beowulf cluster? like, i know it's a bunch (a "cluster" 😉) of computers working together, but obviously there are different types of clusters and stuff. like how we have the 32 node cluster in which each node could function as a single computer if we hooked up a keyboard and monitor (ie: each node has a HD, floppy, vc, etc). whereas this new cluster we are working on is just 50 motherboards working off of 1 hard drive... so are they both beowulfs? or is there a pretty little name for the one that's just mobos?

thanks!
 