
OT: Beowulf pics

Originally posted by: RaySun2Be
Details man, we need Details! 😛

Details, for the curious: 😉

head node:
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD
0.6 TB SCSI RAID Fileserver (5 x 180 GB Seagate SCSI HD)
Adaptec 3210S SCSI RAID Controller
Supermicro Super X5DPL-8GM-B motherboard (Adaptec Raid card refused to play nicely with Tyan mobo)

compute nodes (x 23):
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD, Cold swappable
Tyan Tiger i7501 motherboard

- Runs with modified version of RedHat 7.3
- Compute networking with managed SMC GbE 24-port switch
- Management networking with AOpen 24-port switch
- Remote power management of individual nodes through APC Masterswitch components
- 2200 VA Tripplite UPS for Head node + switches
 
Originally posted by: MereMortal
Originally posted by: RaySun2Be
Details man, we need Details! 😛

Details, for the curious: 😉

head node:
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD
0.6 TB SCSI RAID Fileserver (5 x 180 GB Seagate SCSI HD)
Adaptec 3210S SCSI RAID Controller
Supermicro Super X5DPL-8GM-B motherboard (Adaptec Raid card refused to play nicely with Tyan mobo)

compute nodes (x 23):
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD, Cold swappable
Tyan Tiger i7501 motherboard

- Runs with modified version of RedHat 7.3
- Compute networking with managed SMC GbE 24-port switch
- Management networking with AOpen 24-port switch
- Remote power management of individual nodes through APC Masterswitch components
- 2200 VA Tripplite UPS for Head node + switches

sick....
just sick 😀
 
Using a Beowulf for DC, at least the way we're going to have it set up, would be difficult. The whole point of a parallel cluster is tightly coupled computation, which is pretty much the opposite of how DC apps are written. It is actually a major PITA to try to run DC through a batch queue system.

So as you can imagine, we have a number of people who are chomping at the bit to run their codes. I will be the administrator for the time being, but dammit Jim, I'm an astrophysicist, not a cluster administrator. I may try to do something DC related eventually, but for now, I have too much to learn about PBS administration and using MPI with multiple compilers.
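For anyone curious what "PBS administration" means in practice, a batch job on a cluster like this looks roughly like the sketch below. The queue directives are standard PBS, but the job name, node counts, binary, and input file are all made up for illustration:

```
#!/bin/sh
# Minimal PBS job script sketch (hypothetical job/file names).
#PBS -N solar_dynamo          # job name as it appears in the queue
#PBS -l nodes=8:ppn=2         # 8 dual-Xeon nodes, both CPUs on each
#PBS -l walltime=12:00:00     # scheduler kills the job after 12 hours
#PBS -j oe                    # merge stdout and stderr into one log

cd $PBS_O_WORKDIR             # run from the directory qsub was called in
# mpirun reads the node list PBS hands us and starts one MPI rank per CPU
mpirun -np 16 -machinefile $PBS_NODEFILE ./dynamo_sim input.nml
```

You submit it with `qsub job.pbs` and the scheduler picks the nodes, which is exactly why DC clients (which want to run forever on every node) fit so badly into this model.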

The types of things that we work on in our group are mainly simulations of the Sun. Some of us do sims of the corona and the initiation processes for coronal mass ejections (CMEs). I do subsurface simulations of the convection zone and the generation of magnetic fields by dynamo action. Eventually we will let other folks at SSL have some time, and they do work on solar energetic particles and space weather/magnetosphere calculations (the magnetosphere is the extension of the Sun's magnetic field out into the Solar System disk, and its interactions with the Earth).
 
Originally posted by: MereMortal
Originally posted by: RaySun2Be
Details man, we need Details! 😛

Details, for the curious: 😉

head node:
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD
0.6 TB SCSI RAID Fileserver (5 x 180 GB Seagate SCSI HD)
Adaptec 3210S SCSI RAID Controller
Supermicro Super X5DPL-8GM-B motherboard (Adaptec Raid card refused to play nicely with Tyan mobo)

compute nodes (x 23):
2 x 2.4 GHz Xeon, 533 FSB
2 GB RAM
120 GB HD, Cold swappable
Tyan Tiger i7501 motherboard

- Runs with modified version of RedHat 7.3
- Compute networking with managed SMC GbE 24-port switch
- Management networking with AOpen 24-port switch
- Remote power management of individual nodes through APC Masterswitch components
- 2200 VA Tripplite UPS for Head node + switches

Why such an old distro? From what I've heard, you should get a significant improvement with >= 2.4.20 kernel due to better scheduler support for hyperthreading. You could use this kernel with RedHat 7.3 of course, but RH 9 also has improvements to the threading architecture, and compilers as well.

Our new hardware is stacked up outside my office right now ... very similar to yours, but 2.6 GHz CPUs and 4 GB RAM on the server, and significantly less disk space. It came with RH 8, which I've been benchmarking. I'm going to upgrade one of the nodes to 9.0 next week, so we'll see how much difference it makes.

Are you getting the Intel compilers to run on it?
 
...but dammit Jim, I'm an astrophysicist not a cluster administrator.
LOL 😀

The types of things that we work on in our group are mainly simulations of the Sun. Some of us do sims of the corona and the initiation processes for coronal mass ejections (CMEs). I do subsurface simulations of the convection zone and the generation of magnetic fields by dynamo action. Eventually we will let other folks at SSL have some time, and they do work on solar energetic particles and space weather/magnetosphere calculations (the magnetosphere is the extension of the Sun's magnetic field out into the Solar System disk, and its interactions with the Earth).
Jeebus man! :Q
I was going to ask where you worked but I see from the pics you're at Berkeley. Nice gig! 🙂
 
Originally posted by: ergeorge
Why such an old distro? From what I've heard, you should get a significant improvement with >= 2.4.20 kernel due to better scheduler support for hyperthreading. You could use this kernel with RedHat 7.3 of course, but RH 9 also has improvements to the threading architecture, and compilers as well.

Our new hardware is stacked up outside my office right now ... very similar to yours, but 2.6 GHz CPUs and 4 GB RAM on the server, and significantly less disk space. It came with RH 8, which I've been benchmarking. I'm going to upgrade one of the nodes to 9.0 next week, so we'll see how much difference it makes.

Are you getting the Intel compilers to run on it?


Well, we are using a 7.3 version because some of the software on our system is incompatible with the glibc versions in RH8 and RH9. I know that the Intel compilers now support RH8, but from our experience RH8 is a dog. RH9 is better, but it came out so soon after RH8 that other software hasn't had a chance to catch up. And we're talking Fortran software here. I'm looking forward to the 2.6.0 kernels for their enhanced memory management. The parallel debugger (which was not chump change) also wants RH7.3.

Everything seems to be compatible at this point, so I figure we better let some people run some jobs before I go break it! 😉

We got the Intel compilers (just ifc + MKL) and they seem to work fine. We also have an older version of Lahey-Fujitsu that I'm debating whether or not to install. I'll probably be trying to run the first 'real' parallel job this weekend.
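For reference, building an MPI Fortran code with ifc and linking MKL goes something like this. This is a sketch, not our actual build: the source file names are made up, the MKL install path and library names varied between versions, and it assumes the mpif90 wrapper was configured to call ifc:

```
# Compile a Fortran 90 source with ifc via the MPICH wrapper
# (hypothetical file names; -tpp7 tunes for Pentium 4/Xeon)
mpif90 -O3 -tpp7 -c dynamo.f90

# Link against MKL's LAPACK + BLAS for IA-32 (library names and the
# /opt/intel/mkl path are assumptions; check your MKL release notes)
mpif90 -o dynamo dynamo.o -L/opt/intel/mkl/lib/32 \
    -lmkl_lapack -lmkl_ia32 -lguide -lpthread
```

The -lguide/-lpthread bit is the part that usually trips people up, since MKL's threading runtime has to be linked explicitly.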
 
Originally posted by: MereMortal
Originally posted by: ergeorge
Why such an old distro? From what I've heard, you should get a significant improvement with >= 2.4.20 kernel due to better scheduler support for hyperthreading. You could use this kernel with RedHat 7.3 of course, but RH 9 also has improvements to the threading architecture, and compilers as well.

Our new hardware is stacked up outside my office right now ... very similar to yours, but 2.6 GHz CPUs and 4 GB RAM on the server, and significantly less disk space. It came with RH 8, which I've been benchmarking. I'm going to upgrade one of the nodes to 9.0 next week, so we'll see how much difference it makes.

Are you getting the Intel compilers to run on it?


Well, we are using a 7.3 version because some of the software on our system is incompatible with the glibc versions in RH8 and RH9. I know that the Intel compilers now support RH8, but from our experience RH8 is a dog. RH9 is better, but it came out so soon after RH8 that other software hasn't had a chance to catch up. And we're talking Fortran software here. I'm looking forward to the 2.6.0 kernels for their enhanced memory management. The parallel debugger (which was not chump change) also wants RH7.3.

Yea, figured it was something like that. 2.6 should be nice across the board ... better memory management, better scheduler, processor affinity, etc.

Actually, Fortran is one of the prime motivations for us getting the Intel compilers ... what there is of GNU's Fortran sucks, and some of our stuff needs F95. And the C++ compiler just rocks for performance.

Everything seems to be compatible at this point, so I figure we better let some people run some jobs before I go break it! 😉

We got the Intel compilers (just ifc + MKL) and they seem to work fine. We also have an older version of Lahey-Fujitsu that I'm debating whether or not to install. I'll probably be trying to run the first 'real' parallel job this weekend.

All our stuff is still in boxes ... can't wait to get em in and get rid of the old Alphas!

 
Originally posted by: ergeorge

All our stuff is still in boxes ... can't wait to get em in and get rid of the old Alphas!

Make sure those Alphas find a good home. If you need help finding homes, let me know 😉
 
Holy CRACK BOX BATMAN! Looks like some of the stuff I used to support at Imation. And I bet the lights in the dark are perty too. 😀

Wolfie

 
:Q
😀🙂:wine: 🌹 😀🙂
🙂
 
Some of us do sims of the corona and the initiation processes for coronal mass ejections (CMEs). I do subsurface simulations of the convection zone and the generation of magnetic fields by dynamo action. Eventually we will let other folks at SSL have some time, and they do work on solar energetic particles and space weather/magnetosphere calculations (the magnetosphere is the extension of the Sun's magnetic field out into the Solar System disk, and its interactions with the Earth).

there's lots of words in there that I don't know what they mean 😕
 