
New Workhorse Machine for Scientific Computation

Lyuokdea

Member
Hi all,

I am building a workhorse machine that will be used primarily for scientific computation. Several of the codes I'm working with take up to a week to run and can use several GBs of RAM, so a top-quality processing machine is a necessity. I'm not planning on overclocking, because system stability is critical. This system will be running a Linux variant (I haven't decided which), and will consistently need uptimes of several months.

I've come up with a machine based on the new Xeon Nehalem architecture, which falls just within my $2500-$3000 budget. A public link to the newegg wish list is here:

http://secure.newegg.com/WishL...WishListNumber=9576285

For those who don't want to click, the highlights are as such:

Processor: (2x) Intel Xeon E5520 Nehalem 2.26GHz LGA 1366 Quad-Core Server Processor Model BX80602E5520 - Retail
Motherboard: ASUS Z8NR-D12(ASMB4-IKVM) Dual LGA 1366 Extended ATX Server Motherboard
RAM: OCZ Platinum 12GB (6 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model
HDs: (4x) Western Digital Caviar Black WD1001FALS 1TB 7200 RPM SATA 3.0Gb/s Hard Drive
GPU: BFG Tech BFGE951024GTE GeForce 9500 GT 1GB 128-bit GDDR2 PCI Express 2.0 x16 HDCP Ready SLI Supported Video Card
Heatsink: (2x) Dynatron G666 60mm Double Ball Bearing CPU Cooler - Retail
Case: COOLER MASTER HAF 932 RC-932-KKN1-GP Black Steel ATX Full Tower Computer Case - Retail
Power Supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V SLI Ready CrossFire Ready 80 PLUS

There will obviously also be a DVD burner, monitor, SATA II cables, keyboard, mouse, etc.

So my questions are these:

1.) I'm somewhat worried about the heatsinks. Will two of these fit on the same motherboard without a problem? Has anybody mounted a pair of these heatsinks on the same board?

2.) I'd like to cut costs a little if possible, but I don't want to compromise system quality. Are there any cheaper components that can be subbed in without hurting the overall build? I could cut out the graphics card for now, since the motherboard has onboard video. I assume the onboard video on a motherboard of this quality will still drive a 1920x1200 monitor without problems, even though it's a server board?

3.) Has anybody installed an EATX board into the Coolermaster case? I know sometimes cases advertise EATX when really it's a tough fit.

Any other advice or suggestions would be greatly appreciated. I've built several computers before, but I've never built a system of this complexity.

Thanks,

~Lyuokdea

 
The problem with OEM, I think, is that anything like this would cost $5000 or more, and I don't have that kind of money...

Another question for people:

One of my friends is suggesting the following HD arrangement instead:

1.) 60 GB SSD (operating system)
2.) 74 GB 10k RPM Western Digital Raptor (frequent program writes)
3.) Dual 1TB in Raid 1 Array (Data protection)

Is that a better setup than four 1TB drives in a RAID 10 array?
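For comparison, the capacity/redundancy tradeoff between the two layouts works out with some quick arithmetic (drive sizes taken from the lists above; a rough sketch that ignores filesystem overhead and real-world throughput):

```python
# Rough usable-capacity comparison of the two proposed layouts (sizes in GB).

def raid10_capacity(drives):
    # RAID 10 mirrors pairs, then stripes across them: usable = half the total
    return sum(drives) // 2

def raid1_capacity(drives):
    # RAID 1 mirrors everything: usable = the size of one drive
    return min(drives)

option_a = raid10_capacity([1000, 1000, 1000, 1000])  # 4x 1TB in RAID 10
option_b = 60 + 74 + raid1_capacity([1000, 1000])     # SSD + Raptor + 2x 1TB RAID 1

print(option_a)  # 2000 GB usable, everything protected against one failure per mirror pair
print(option_b)  # 1134 GB usable, but only the RAID 1 portion is protected
```

The structural difference is what's protected: RAID 10 mirrors all of the storage, while the mixed setup leaves the SSD and Raptor unmirrored.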

~Lyuokdea
 
You could drop down to the previous-gen CPUs and pick up something like this...
http://outlet.us.dell.com/ARBO...&c=us&l=en&s=dfb&cs=28

Then just upgrade the memory and hard drives to your liking. Probably not what you were thinking, but you should consider what you'd do for support. I mean, if this is a machine that you need up and running, especially if you have a financial interest in it, then what happens when/if the system goes down and you're not sure what the problem is? Bad motherboard? Could take a couple of weeks to get an RMA. With Dell, some guy comes and brings a new board the next day. But that's just my thinking, you may not be worried about support. 😀
 
Originally posted by: Lyuokdea
I could cut out the graphics card for now

What about getting an Nvidia CUDA, ATI Stream, or OpenCL system? If you're doing linear programming, no CPU can keep up with the parallel power of a GPU. I think you can get motherboards that run up to 4 GPU cards. I'm not totally versed on what is best and the various costs of each, but you get into the TFLOP range for single precision, and 250 GFLOPS for double precision, per high-end GPU. By comparison, an Intel i7 performs in the 70 GFLOPS range.
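Taking those ballpark figures at face value, the theoretical speedup is easy to work out (these are peak numbers; real-world gains depend entirely on how parallel the code is):

```python
# Back-of-the-envelope speedup from the GFLOPS figures quoted above.
gpu_single = 1000   # ~1 TFLOP single precision, high-end GPU
gpu_double = 250    # ~250 GFLOPS double precision
cpu = 70            # ~70 GFLOPS, Core i7

print(round(gpu_single / cpu, 1))  # ~14.3x peak speedup in single precision
print(round(gpu_double / cpu, 1))  # ~3.6x peak speedup in double precision
```

Note the double-precision gap is much smaller, which matters for most scientific codes.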
 
CUDA requires a specific programming language and a specific type of code (massively parallel) in order to provide a performance boost. Not all the codes are mine, and I don't have the ability (or authority) to rewrite them for CUDA.

I have thought about learning CUDA programming at some point, and thus have included the 9500 graphics card, which is compatible with CUDA.

~Lyuokdea
 
The 9500 is basically worthless for CUDA (you'd want at least a GTX 260 to see a real benefit), but if you are not going to use CUDA for a while, then saving money on the GPU makes sense (especially with DirectX 11 probably out within the year).
 
So, after talking to some posters on here (and getting authorization to spend $2830, which was slightly more than I was planning), I've changed the HD setup to this:

(2x) OCZ Vertex 30 GB (RAID 0, for system partition)
(1x) 300 GB VelociRaptor 10,000 RPM (/home, /root, /swap) <-- I've heard swap doesn't work well on SSDs, is this true?
(2x) 1 TB Western Digital Caviar Black (RAID 1, /data, to protect all long-term storage)

Runs will output to both the VelociRaptor and the Caviar array, probably balancing the load when there are many runs; everything can be backed up long-term on the RAID 1. Since I want to use the computer while runs are executing, the RAID 0 SSD array should keep all applications blistering fast even while codes are writing in the background.
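The load balancing described above could be as simple as alternating output targets between the two drives per run. A minimal sketch (the mount points and run IDs are hypothetical examples, not from the build):

```python
import itertools

# Alternate run output between the fast single drive and the RAID 1 array,
# so concurrent runs don't all hammer the same spindles. Paths are examples.
targets = itertools.cycle(["/home/runs", "/data/runs"])

def output_dir_for(run_id):
    """Pick the next output location in round-robin order."""
    return f"{next(targets)}/{run_id}"

print(output_dir_for("run01"))  # /home/runs/run01
print(output_dir_for("run02"))  # /data/runs/run02
```

Anything that lands on the unmirrored drive would still need to be copied to the RAID 1 afterward for protection.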

Does this seem reasonable? Also, is there a cheaper way to get the same sort of performance? This is a little on the expensive side.

Thanks,

~Lyuokdea
 
If you're doing calculations that take a week, I doubt disk speed is much of a factor at all. Although, without knowing the nature of the calculations, it's useless to guess what would be best.

Since you're not overclocking, why do you need the extra coolers? Retail chips come with coolers. The motherboard comes with SATA cables as well; three, to be exact.

You're really paying a premium for server-class chips and such; you could get 3 (maybe 4) regular quad-core systems and cluster them for what you're paying. Maybe invest in a high-speed switch and get some other computers involved.
 
According to Newegg, the new E5520 chips do not come with coolers. I don't know if that's correct, but that's what I'm reading off the website.

~Lyuokdea
 
Well, you can combo the motherboard with that processor for $30 in savings... http://secure.newegg.com/Shopp...aspx?submit=ChangeItem

Also, you said "several" gigs of RAM; would 6 not be enough? If you can make do with 6, then you save about $280.

If you are going to pinch pennies, check here: http://www.newegg.com/Product/...rives-_-L0C-_-22136317

The only other place you could save money is by stepping down to a 22" LCD and going with a larger mid-tower case, but not much more than that.

 
One new question is now popping up: the Supermicro boards are now available. Which manufacturer is thought to have higher quality? I know they're both pretty top-of-the-line for servers, correct?

A couple of differences I'm noticing: the Supermicro board is listed as an EATX board, while the Asus board is SSI EEB. The Supermicro board uses the Intel 5520 chipset while the Asus board uses the Intel 5500. The Asus board has integrated video, while the Supermicro one has integrated sound. The Supermicro board also has one more PCI Express x16 slot.

Thanks for your help,

~Lyuokdea
 
1) The OCZ Vertex is very promising but very risky. They basically use their customers to quality-control their product. Since you need long-term stability, you would be wise to avoid it. Go for one or two Intel X25-Ms if you want SSD.

2) You mentioned long-term backup to the RAID 1 1TB drives. False. Anything plugged in is not long-term backup, and that goes double for something connected to the same system you're backing up.
 
One last question I still haven't found a clear answer to: are SSI EEB and EATX interchangeable? Is the incompatibility between the two only with the power supply, or with the case as well?

Thanks,

~Lyuokdea
 
Hi All,

Thanks for your help so far. I have an updated system build that I want to make some final checks on before I order it at the end of this week.

The updated build is here: http://secure.newegg.com/WishL...WishListNumber=9576285

Again for those not interested in clicking, the components are as listed:

Processor - Xeon Nehalem e5520 (2.26 Ghz) (x2)
Heatsink - Dynatron G666 60mm Double Ball Bearing CPU Cooler (x2)
Motherboard - SUPERMICRO MBD-X8DA3-O Dual LGA 1366 Intel 5520 EATX Server
RAM - 6 GB (3x2GB) Crucial DDR3 1333 ECC Unbuffered Triple Channel kit (x2 for 12GB total)
Graphics Card - GeForce Nvidia 9600 GT Superclocked GDDR3 512 MB
Hard Drives:
2x OCZ Vertex 30 GB SSD (RAID 0, all system files)
1x Western Digital Caviar SE16 640 GB (/home, /root, /swap, main user drive)
2x Western Digital Caviar Green WD10EADS 1TB SATA 3.0Gb/s Hard Drive (RAID 1, /data, high-volume writes)
Case - SILVERSTONE KUBLAI Series KL03-B Black
Power Supply - CORSAIR CMPSU-850TX 850W ATX12V 2.2 / EPS12V 2.91 SLI Ready CrossFire Ready Active PF
Cords - (2x) OKGEAR 18" SATA II Cable Model GC18ARM12 - Retail

So the remaining questions are:

1.) Are the motherboard, heatsinks, case, and power supply all compatible? The concerns I have are the correct holes in the case for the heatsink mounting brackets, as well as the power supply requirements of the motherboard. Specifically, Supermicro lists the following: (http://www.supermicro.com/manu...erboard/X58/X8DA3.pdf)

"The X8DA3/X8DAi can accommodate 24-pin ATX power supplies. Although most power supplies generally meet the specifications required by the CPU, some are inadequate. In addition, the two onboard 12V 8-pin power connections are also required to ensure adequate power supply to the system."

I don't understand which power supplies have two 12V 8-pin power connections. Does the one I've listed meet this specification? Do all power supplies have this, such that it isn't prominently listed among those that do? If not, is there an adapter that can easily give you the two 12V 8-pin connectors? Is there another power supply I should be using?

Also, I've noticed a beastly heatsink from Zalman: http://www.newegg.com/Product/Prod [...] 6835118046 I'm not planning on overclocking at all, so I think this would be a bit overboard, but is it worth it, given that the price isn't that much greater?

2.) Has anybody set up a dual-monitor display off a GeForce 9600 GT? Is that a reasonable setup without any screen hangups? I doubt I'll be doing anything graphically intensive, but I'm not sure whether 1GB of graphics memory would help more than the very fast processors in my setup.

3.) Does the RAID setup I have look reasonable for the given uses? I want a system that is very snappy for the user even while there are almost constant reads/writes going to the 1TB Caviar array. I think this separation will make everything work.

~Lyuokdea

 
In response to #3:

IMO, putting the system files on the RAID 0 SSDs is a complete waste. In the first post you say that this machine will be used for really big scientific-computation-type problems. I'd expect in these cases, a single user logs in, starts a process, and then lets it run for several days. Once your scientific computing code starts running, will it really need super-fast access to system files?

You say your program will be frequently hitting the /data partition. Is this going to be a bottleneck for the program? HDs are going to be one heck of a lot slower than the rest of the system.

If this is the case, wouldn't it make more sense to put a cheap SATA drive in for the system files & a faster disk for the data? If your datasets are small enough, you could use the RAID SSDs here, but if your datasets are huge you might need some kind of RAID SAS disk system.
 
In response to #2:

The 9600GT 512MB will work fine for a Linux desktop. It sounds like your main use for this computer is a big scientific application that doesn't interact with the user much, so I imagine it runs from the command line with no graphics. If you're not doing any intense 3D graphics, the choice of video card is mostly irrelevant. I ran dual 1600x1200 CRTs off a 6600GT/256MB in Gnome/KDE with no problem. The only thing to check: if you are using two LCDs, be sure to get a card with two DVI connectors.

I did a bit of Linux-based scientific computing in grad school, and I tended to just boot the computer to the console/command line and turn off the GUI to save resources. (For example, you wouldn't want to waste CPU time drawing a fancy 3D screensaver while you're processing data.) I would log in from my laptop or another computer to check the status of my program occasionally.

 
Thanks for the detailed responses, Knavish. I agree the RAID 0 SSDs are pretty superfluous; I'll probably just cut back to one, and run the core Linux components off of there. Then that money can go toward a speedy SCSI or VelociRaptor-type drive for simulation writes that need a lot of speed.

In addition to scientific simulations, I am planning to use this computer as an everyday machine. That doesn't necessarily mean anything intensive, but general web surfing, e-mail, instant-messaging-type stuff. Thus I want a fast system drive and a separate user drive that simulations are not writing to, in order to keep things from lagging.

I don't know if I need to go much faster than Caviar Blacks for the SATA drives. For week-long simulations, even if they slow down while outputting 20 GB of data or so, that time should be insignificant next to processor/memory time. So I'm not overly concerned. However, it might make more sense to go with one SSD for the system, and then a faster SCSI drive for any simulations where I think I do need that extra speed.

Thanks so much for your help,

~Lyuokdea
 
Are you positive that your software will be able to fully utilize all 16 threads of this system (8 physical cores, 8 more from Hyper-Threading)? A single-CPU system might save you some money. However, this does appear to be one of the first times I can think of where it is significantly cheaper to build your own system rather than buy a Dell or Apple pre-built.
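One quick way to sanity-check whether a workload can actually keep all the cores busy is to time a CPU-bound task split across a process pool. A minimal sketch in Python (the real codes here may be Fortran/C, so treat this purely as an illustration of the idea):

```python
import multiprocessing as mp
import os

def busy_work(n):
    # Stand-in for a CPU-bound chunk of a simulation
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # With 2x quad-core Nehalem + Hyper-Threading, the OS should report 16
    print("logical CPUs visible to the OS:", os.cpu_count())
    chunks = [200_000] * 8
    with mp.Pool() as pool:
        results = pool.map(busy_work, chunks)  # one chunk per worker process
    print(len(results))  # 8 results, computed in parallel
```

If wall-clock time barely improves over a serial loop, the code (or its I/O) is the bottleneck, not the core count.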
 
Originally posted by: Lyuokdea

I don't know if I need to go much faster than Caviar Blacks for the SATA drives. For week-long simulations, even if they slow down while outputting 20 GB of data or so, that time should be insignificant next to processor/memory time. So I'm not overly concerned. However, it might make more sense to go with one SSD for the system, and then a faster SCSI drive for any simulations where I think I do need that extra speed.

~Lyuokdea

That makes sense. If you were hitting the HD all that much you'd hardly be able to keep all 8 cores busy!
----
I used to use separate HD partitions for different Linux root directories, but now I'm lazy and I just make one big partition. That way I don't need to worry about filling up one or leaving a few hundred GB unused on another. On the other hand, I can see how it would be convenient to keep the user (/home) and temporary files on a separate partition from the system files for ease in upgrading the OS.
 
What power connections do SAS drives require? I assume they use a different power connector than SATA drives? It looks like the prices for 73.5 GB 15,000 RPM drives are pretty reasonable, and one might make a nice drive for anything that requires fast writes. Things can then be moved over to the 1TB Caviars if they need to be stored.

~Lyuokdea
 
Sorry to keep asking random questions:

What do people think about this Chieftec case?

On newegg as: http://www.newegg.com/Product/...x?Item=N82E16811160009

I know it doesn't come with any fans. However, I can get it for less than $100 with shipping included, and then spend the extra money on quiet 120mm and 90mm fans.

The major concern holding me up is whether the holes are in the right place for the dual-processor heatsinks. Is this likely to be a major problem? Is it possible to drill the correct holes if they don't exist?

Thanks,

~Lyuokdea
 


One new question... I finally saw the price of the non-SAS Supermicro motherboard. I had figured it would be $20-30 cheaper, and thus not a big deal, but it's actually $100 less. So maybe I'll do that, which will require swapping in a VelociRaptor instead of the SAS 15k RPM drive.

My question is... I know the SATA interface allows 3.0 Gbit/s. Is that a total shared among all SATA devices on the system, or is it per SATA device? That is, if I have 6 different SATA drives in the system, are they together bound to a maximum of ~300 MB/s transfer? I know the processors likely wouldn't support a ton more than that anyway. However, if having SAS and SATA simultaneously would allow the 15k drive to run without interference from SATA traffic, that might be worthwhile...
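For what it's worth, SATA is a point-to-point link, so the 3.0 Gbit/s limit applies per port, not across all drives together; the shared bottleneck is the controller's uplink to the chipset, not the SATA ports. The ~300 MB/s figure comes from SATA's 8b/10b line encoding:

```python
# SATA II effective bandwidth per port: 3.0 Gbit/s line rate, with
# 8b/10b encoding putting 10 bits on the wire for every 8 bits of data.
line_rate_mbit = 3000       # 3.0 Gbit/s in Mbit/s
data_bits = 8
wire_bits = 10

effective_mb_per_s = line_rate_mbit * data_bits / wire_bits / 8
print(effective_mb_per_s)  # 300.0 MB/s per port
```

Since no single 2009-era mechanical drive sustains anywhere near 300 MB/s, the per-port limit shouldn't be a factor here either way.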

Thanks,

~Lyuokdea
 