Originally posted by: jliechty
Originally posted by: bob970
Where exactly is the software for the Beowulf cluster, and how do you set it up? Perhaps with a little holiday money I'll try a Beowulf Athlon XP 1800 cluster with four machines or so, just out of curiosity about setting one up.

I'm not sure what they're planning to use, but if I had the resources to try it out, I'd set up a Beowulf cluster with Mosix.
Originally posted by: LANMAN
Don't have to worry about over-heating!! It's almost zero outside with -10 below wind chill.. :| Burrrrrr....
--LANMAN

Your outdoor temperature sucks. It makes me shiver just to think about it.
Originally posted by: Mardeth
Well, it's -25C here.
Originally posted by: Mardeth
I think we are going to do SETI. Don't know how they are going to use Beowulf, as "one" computer or as many.

I recommended Mosix earlier because it is easy to set up 24 copies of the client (or however many are needed, one per CPU) on the master node and then let them automatically migrate out to the client nodes with the Mosix system. Using individual systems with bootp and the like would also be relatively easy to manage, though perhaps harder to learn at first. A true "Beowulf," as I understand it in the traditional sense, is a set of individual machines running custom software that uses a message-passing interface to act as a single cluster. That would not be possible here, since we don't have the source to most DC projects, and even if we did, they would be a PITA to modify for such a cluster.
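Just to give a rough idea of what "message passing" code looks like, here is a minimal MPI hello-world sketch (purely illustrative; it assumes an MPI implementation such as MPICH or LAM is installed on the nodes, and it is not part of any DC client). A program has to be written against an API like this to benefit from a classic Beowulf, which is why unmodified DC clients can't take advantage of one:

    /* hello_mpi.c - minimal message-passing example
     * compile:  mpicc hello_mpi.c -o hello_mpi
     * run:      mpirun -np 24 ./hello_mpi   (one process per CPU, for example) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut the runtime down cleanly */
        return 0;
    }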
Originally posted by: cautery
Lanman... et al.
How's the system coming?
Kinda curious as to why you are collecting vid cards and hard disks?
Don't really need them, do you?
Man, I wish I had known about this project....
I JUST threw away the custom switch harness we built for WarpCore a couple of years ago...
That would have allowed you to power on up to 24 boards from a single panel...
I'll go pull out my spare parts box and see what I have available....
I think I might have a few C300a's and Slot-1 boards to carry them.... Maybe a spare net card or two... I'll look at your list to see what else I can find...
Some other stuff you might want to know....
Not clear what project you plan to run this on, but it's likely that you don't really need the system set up as a true Beowulf... You COULD, but the setup would probably be more trouble than any benefit you'd get....
Most projects will run just as (or more) efficiently if you run each system independently....
ONE way to do that is like we did... Set one PC up as a master node... Then have all of the others use bootp or one of the other ways to boot diskless to a home directory on the master.... You can use either a floppy with a minimal diskless-boot kernel, or, if you're lucky enough to have NICs with a programmable boot PROM, you can boot remotely from the NIC...
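In case it helps, here is a rough sketch of what the bootp/DHCP entry for one diskless node might look like on the master. This assumes ISC dhcpd plus a TFTP server for the kernel and an NFS-exported root; the addresses, MAC, and paths are made up for illustration, so treat it only as a starting point:

    # /etc/dhcpd.conf on the master -- minimal sketch, details will vary
    allow bootp;
    subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers 192.168.1.1;              # the master node
        next-server 192.168.1.1;                 # TFTP server holding the diskless kernel
        filename "vmlinuz-diskless";             # kernel image built with NFS-root support
    }
    host node01 {
        hardware ethernet 00:40:05:12:34:56;     # hypothetical NIC MAC address
        fixed-address 192.168.1.101;
        option root-path "/export/node01";       # NFS root exported from the master
    }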
You can share the local work unit location (if a local buffer is an option)... Use the master either to transfer the completed work directly, or set it up as a router so each node can talk to the project servers....
The security setup is easier if you route the master through another machine connected to the internet (your personal server/desktop)... The intermediate machine can act as a firewall to the outside world... Alternatively, you can connect the master node directly to a DSL switch/router...
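If the master (or the intermediate box) ends up forwarding traffic for the nodes, the NAT part is only a couple of commands on a 2.4-series kernel. A minimal sketch, assuming eth0 faces the internet and the nodes live on 192.168.1.0/24 (a real setup would add proper filtering on top of this):

    # enable forwarding and masquerade the private node subnet (illustrative only)
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE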
The net traffic is minimal, so you don't have to have a switch for the nodes... a hub is just fine....
As I said, I haven't read the entire 25+ pages, so you might well already know all of this.... OR things in the Beowulf world may have changed enough to make my approach obsolete...
Just trying to offer my observations...
The reason I mention the diskless, video-less, keyboard-less, and mouse-less options is that a rack of any density running flat out is going to generate an ENORMOUS amount of heat...
WarpCore... with only 10 of its 12 dual-C300a nodes running at 464 and the master single-proc node at 504, would bring my office/computer room to 110F in less than 20 minutes WITH the AC running full blast.... It would hit 120+ if you weren't careful, which would cause the nodes with less stable overclocks to crash...
Gimme a holler if I can be of any help as a reference...
Originally posted by: cautery
OK y'all... I just went through my primary spares boxes... So here's a list of the stuff I'm willing to donate to the cause:
Drives:
Mitsubishi Electric LS-120 drive (120MB disc and 3.5" floppy compatible)
Teac 3.5" FDD
SCSI-2 CD-ROM Reader (yeah, but it'll work with the stuff below)
Yamaha CRW6416S Internal SCSI CD-RW (latest firmware)
Peripheral Cards:
Generic 56.6 Modem w/driver CD
3-port FireWire Card
Creative Labs Model CT7160 PC-DVD Overlay card
Creative Labs DxR3 card (MPEG Card?)
Kingston KNE111TX 10/100Mbit Network Interface Card
2 each Netgear FA310TX Rev-D1 10/100Mbit Network Interface Cards
Peripherals and Connectivity:
Cybex SwitchView 4 Port KVM switch WITH 4 Video/Keyboard/Mouse Cables: Run 4 PCs from one monitor/KB/Mouse
Logitech Model M-S35 3-button PS/2 mouse
8 each CAT-5 or better network cables of varying length from short patch to 10+ feet
Custom 4-pin power supply harness extension... 7 devices in series using 14GA wire
100 ft+ DB9F to DB9M serial extension cable
4 each 10" 20-pin ATX power supply harness extensions
4 each dual drive ATA-100 cables
1 each single drive ATA-100 cable
1 each flat dual drive ATA-33 cable
2 each single drive ATA-33 cables
3 each dual drive Floppy Disk Drive cables
3 each custom made rounded ATA-33 single drive cables
1 each custom made rounded SCSI (50-pin) dual drive cable
2 each flat dual drive SCSI (50-pin) dual device cable
1 each flat cable with 2 internal 50-pin device connections and 1 external HD-50 connector on slot bracket
1 each 68-pin SCSI cable with 2 internal 68-pin connectors and 1 external 68-pin connector on slot bracket
1 each LVD SCSI cable with terminator; bonded twist-n-flat cable with 4 internal 68-pin SCSI device connectors
Cooling:
5 each Nidec Beta V TA300DC 80mm fans modified for full-time high speed (remove solder jumper to return to heat sensing speed control)
2 each Innovative BP602512H 60mm fans with tach sense (from GlobalWin coolers)
2 each GlobalWin Socket-370 CPU Coolers with 60mm tach sense fans (can't remember the model number)
1 each NEW Taisol Socket-370 CPU Cooler with tach sensing low-profile fan
1 each Alpha PAL ??? Socket-370 cooler with GlobalWin 60mm tach-sensing fan (not an Innovative... it's a Sunon or YS-Tech I believe)
Power:
1 each PC Power & Cooling Turbo-Cool 300ATX power supply with power cord
Memory:
2 each 128MB PC-100-222 SDRAM (great little modules!!)
3 each 32MB PC-100 SDRAM (rock steady)
4 each 256MB PC-133 SDRAM (run fine, but can be finicky on some stodgy boards....)
Motherboards:
1 each ASUS P2B-S (single processor, dual-channel 80MB/s SCSI controller onboard)
Socket-370 to Slot-1 Converter boards:
1 each ASUS S370 rev. 1.01 (the king of rock-solid converter boards IMHO; voltage is settable from 1.8-2.6V via an onboard jumper, and the jumper table is screened on the back)
2 each MSI MS-6905 Master boards (successfully modded, via a jumper on the reverse of the board if I remember correctly, to run dual Celerons); fully configurable via jumpers for dual/single operation and base bus speed for overclocking... supports Coppermine, original box.
Processors:
Celeron C300a #L9065124-0460 (should be rock steady at 464 and probably 504) lapped flat to 1500 grit
Celeron C300a #L9065124-0462 (should be steady at 504) lapped flat to 1500 grit
