Server Build

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
I'm building a development server for myself. I will run VMs, a handful at a time, for testing apps I write, as well as sometimes testing large SQL Server 2005 databases (300-600GB): database development and running reports/analysis (OLAP, data warehousing).

I don't care about reliability for the RAID 0 array; I'm the only one using this system, and the data I'm working with comes from another server. I can always start from scratch if something goes wrong, so I'm okay with RAID 0 for the fast storage. I'll go with something redundant for the slower storage.

I will run a 64-bit OS -- either Windows Server 2003 or Vista, maybe dual boot. If you see any issues with any of the components/drivers for either of these OSes, please point them out.


UPDATE:
Went with the 8-port RAID card and 4 Raptors -- over budget, so I'm selling my existing system to balance this out. So I will need a case.

Found the Mountain Mods brushed aluminum Opti1203 case (dimensions 18x18x18). It has ten 5.25" bays; one will go to the DVD drive, and I'm going to suspend the hard drives in the other nine bays with elastic to reduce vibration/noise.

At last, everything is ordered. Just sit back and wait for it all to come in.


Parts:

$290 - Q6600, ordered guaranteed G0 from ClubIT
$470 - 8GB G.SKILL DDR2 800
$185 - Abit IP35 Pro
$515 - 3ware 9650SE 8-port PCIe SATA II RAID controller
$660 - 4x 150GB Raptors for the RAID 0 array (after mail-in rebate)
$240 - 4x 320GB Maxtor (rebadged Seagate 7200.10) for storage (already purchased, the only part I've bought so far; couldn't pass these up for $60 each)
$170 - Corsair CMPSU-620HX PSU
$34 - Samsung 18X DVD burner
Stock CPU cooler for now
$324 - Mountain Mods U2-UFO Opti1203 case
$24 - 6x Yate Loon D12SL-12 120mm fans

Will use existing keyboard/mouse/monitor. Have some random 7200rpm 74GB SATA drives lying around to use for the boot/OS drive.
 

Tristor

Senior member
Jul 25, 2007
314
0
71
If you are using a motherboard with 4 RAM slots, you always want to do 2 DIMMs, leaving 2 banks unpopulated; that way you can get a 1T command rate, and it will also work better for overclocking. You need a motherboard with an extra PCI-E x8 slot (not x4; that card won't work in an x4 slot), and you'll need to do your research because some desktop boards get pissy if you put anything other than a gfx adapter or a PhysX card in the extra slot. Get a Hitachi Deskstar 7K1000 750GB, the fastest 750GB drives on the market, stick it in an eSATA enclosure, and use that. Don't use USB for external storage unless you are stuck with it. Most mobos don't have onboard video, so you are looking at grabbing a cheaper 128MB PCI-E card and using that for video; the ATI X300SE (Asus EAX300) works pretty well and last I checked was like $35.

 

Fullmetal Chocobo

Moderator, Distributed Computing
Moderator
May 13, 2003
13,704
7
81
Just a suggestion for mobo--Asus P5N32-E SLI (since you are going with that processor, and need multiple PCIe slots). I have one running an Areca ARC-1220 RAID card & 8800 GTS 640mb together with no problems. See sig for full specs & components.
 

Horsepower

Senior member
Oct 9, 1999
963
1
0
I considered doing this to test server software, and ended up buying a refurbished Dell Poweredge Xeon.
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
Originally posted by: Horsepower
I considered doing this to test server software, and ended up buying a refurbished Dell Poweredge Xeon.

Are you a developer? I could probably go that way too, but the building of it is interesting for me.

How do you like the system, and what kind of software are you testing?
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
Originally posted by: Fullmetal Chocobo
Just a suggestion for mobo--Asus P5N32-E SLI (since you are going with that processor, and need multiple PCIe slots). I have one running an Areca ARC-1220 RAID card & 8800 GTS 640mb together with no problems. See sig for full specs & components.

I like that this board has two x16 PCIe slots (I don't plan on using SLI, but who knows when I'll want to add other PCIe components).

Are there any issues with the nForce chipset? Vista drivers? 64bit drivers?

I wanted to get an Intel P35 chipset for Penryn upgrade path.
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
How is this board: the Gigabyte GA-P35-DQ? Saw it on "Anand's Picks"; it's a P35 chipset and has two x16 PCIe slots and three x1 slots. Seems good, I'll check around on what people are saying about it.

I've never seen such a big heatsink for a chipset cooler: Text. I'm not sure if that's a good thing or a bad thing. Is it really necessary? Does it get in the way of any CPU coolers or other components?
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
The RAID controller you selected requires an 8x PCIe slot, not a 4x slot... which is going to limit your board selection a lot if you intend to use a 16x PCIe graphics card. Most of the boards with two PCIe "16x" slots will only run the second slot at 4x speed if you have a graphics card installed in the primary slot.

Unless you plan to run this array with more than 4 drives down the road, you will be better off with this 3ware 4x PCIe / 4-drive controller that has the same 256 meg ECC cache and RAID 0/1/1e/5/10 support: http://www.newegg.com/Product/...x?Item=N82E16816116042

As for mainboard, I like this Abit board a lot:

http://www.newegg.com/Product/...x?Item=N82E16813127030

You get 1 x 16x PCIe slot for graphics, 1 x 4x PCIe slot for your RAID card, 2 eSATA ports (that support RAID 0/1 no less) that you can use with an enclosure for external backups that will blow USB 2.0 transfer rates out of the water, DUAL Gig-E NICs, firewire, 6 extra ICH9r SATA II ports w/ Intel Matrix RAID to play with for extra storage, etc...

Some folks like the Nvidia chipset solutions (680i for this application); however, I've been unhappy with their driver support in Windows Server 2003 64-bit and Vista 64-bit, and I feel their RAID and ethernet implementations are both inferior to the Intel-based solutions. But that's just my personal experience - sounds like Fullmetal Chocobo has a working setup with the Asus P5N32-E SLI board, which might also be worth a look.

As for your backup drive, the Hitachi 750 gig 7K1000 unit mentioned above is a good option; you could either put it in an eSATA enclosure or install a removable drive bay in one of your 5 1/4" slots and just use a standard SATA port. USB 2.0 and FireWire 400 will be way too slow to complete backups in any reasonable amount of time.

I would stick to 2 gig memory modules; a pair of these G.Skill 2 x 2 gig DDR2/800 kits should more than fit the bill if you really want 8 gigs: http://www.newegg.com/Product/...x?Item=N82E16820231122 ... though this will limit your overclocking headroom due to the extra strain on the northbridge. The "4 slots, 2 DIMMs" theory applies if you are looking at 4 gigs (4 x 1 gig vs 2 x 2 gigs), but because of the rarity of 4 gig DDR2 modules I don't think this is a viable option for your build.

The only 4 gig DDR2 modules I am finding are running $400 EACH and are typically 667/533 mhz only, plus ECC which may or may not work in this board... and say goodbye to any overclocking headroom. Not worth it IMHO.

As for a KVM, this DVI/USB + line-level mic/speaker switch looks like a good way to minimize desktop clutter: http://www.newegg.com/Product/...x?Item=N82E16817183316

There are a lot of good full tower cases out there; for your application I'd consider something with 4-5 x 5.25" bays available so you can accommodate an optical drive plus this multi-drive hot-swap enclosure, which includes a "drive failed" light to make swapping a bad drive in your array easy: http://www.newegg.com/Product/...x?Item=N82E16817994028

You could fit all 4 drives for your RAID array into that enclosure plus your backup drive, then mount the Raptor inside the case in a standard 3.5" bay.
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
Originally posted by: yuppiejr
The RAID controller you selected requires an 8x PCIe slot, not a 4x slot... which is going to limit your board selection a lot if you intend to use a 16x PCIe graphics card. Most of the boards with two PCIe "16x" slots will only run the second slot at 4x speed if you have a graphics card installed in the primary slot.

Unless you plan to run this array with more than 4 drives down the road, you will be better off with this 3ware 4x PCIe / 4-drive controller that has the same 256 meg ECC cache and RAID 0/1/1e/5/10 support: http://www.newegg.com/Product/...x?Item=N82E16816116042

Thanks again for the response. Will take a look at the Abit board. The Areca can be used in an x4 PCIe slot; x4 gives a max bandwidth of 1GB/s, so I don't think it's going to be a problem to run it in an x4 slot.
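
For the bandwidth math, here's a rough sketch (the per-drive sequential figure is an assumption, not a measured number):

```python
# Back-of-the-envelope check: PCIe 1.x x4 bandwidth vs. what a 4-drive
# Raptor array could realistically push. Per-drive throughput is assumed.

PCIE1_LANE_MBPS = 250        # ~250 MB/s usable per PCIe 1.x lane (after 8b/10b encoding)
LANES = 4

DRIVES = 4
SEQ_MBPS_PER_DRIVE = 85      # assumed peak sequential throughput per 150GB Raptor

slot_bw = PCIE1_LANE_MBPS * LANES         # 1000 MB/s for the x4 slot
array_bw = DRIVES * SEQ_MBPS_PER_DRIVE    # ~340 MB/s best case for the array

print(f"x4 slot bandwidth:        {slot_bw} MB/s")
print(f"4-drive array, best case: {array_bw} MB/s")
print(f"Headroom:                 {slot_bw / array_bw:.1f}x")
```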

See this link for example: Text

That page also compares RAID 0 and 5 on the Areca 1210 I plan to get. They are almost the same speed; I thought RAID 5 would be slower for some reason, could have sworn I've seen benchmarks indicating this. I'm gonna reconsider RAID 5.
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
Originally posted by: KingGheedora
Originally posted by: Looney
Why would you go RAID0 and not RAID5?

Raid 0 has faster reads and writes.

This may depend a lot on the controller you are dealing with, but generally speaking this is correct. And since the OP has specified he does NOT require redundancy, but rather maximum read/write performance (random access primarily), RAID 0 is, in theory, the better way to go.

http://www.barefeats.com/hard53.html

and

http://storageadvisors.adaptec...10-vs-raid-5-question/

and

http://sql-server-performance....ty/forums/t/15475.aspx



OP, you should set aside some time and start reading here; lots of good information on hardware configurations for SQL servers and some nitty-gritty details about how to configure the cluster size on your array & filesystem, where to put tempdb, etc...

http://sql-server-performance....ommunity/forums/8.aspx
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
Originally posted by: yuppiejr
Originally posted by: KingGheedora
Originally posted by: Looney
Why would you go RAID0 and not RAID5?

Raid 0 has faster reads and writes.

This may depend a lot on the controller you are dealing with, but generally speaking this is correct. And since the OP has specified he does NOT require redundancy, but rather maximum read/write performance (random access primarily), RAID 0 is, in theory, the better way to go.

Hey Yuppie, did you take a look at the link I pasted? They show benchmarks of the controller I plan to get in RAID 0 vs RAID 5, and there's very little difference between the two. Do you think there's something wrong with those benchmarks, or does a quality controller decrease the difference between the two?
 

Fullmetal Chocobo

Moderator, Distributed Computing
Moderator
May 13, 2003
13,704
7
81
RAID 5 performance is not bad at all when scaled to multiple (4+) hard drives on a good controller. I have 4 Western Digital 400gb RE2 hard drives in RAID 5, and performance is pretty close to my RAID 0 array with dual 150gb Raptors.

RAID 0 benchmarks

RAID 5 benchmarks.
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
Originally posted by: KingGheedora
Originally posted by: yuppiejr
Originally posted by: KingGheedora
Originally posted by: Looney
Why would you go RAID0 and not RAID5?

Raid 0 has faster reads and writes.

This may depend a lot on the controller you are dealing with, but generally speaking this is correct. And since the OP has specified he does NOT require redundancy, but rather maximum read/write performance (random access primarily), RAID 0 is, in theory, the better way to go.

Hey Yuppie, did you take a look at the link I pasted? They show benchmarks of the controller I plan to get in RAID 0 vs RAID 5, and there's very little difference between the two. Do you think there's something wrong with those benchmarks, or does a quality controller decrease the difference between the two?

Honestly, I think you've changed your mind so many times about what you actually want this box to do, and you've already gone ahead and purchased 4 x 7200 RPM SATA drives, so it's rather irrelevant what I think at this point. The reason the Areca controller benchmarks are the same between RAID 5 and RAID 0 implementations is that the array is limited by the 7.2k drives' rotational latency, not the RAID processing overhead. :)

A RAID 5 implementation, no matter how good, is going to use at least 4x more I/O cycles on writes and twice as many on a read. In high end arrays, you deal with this by equipping them with a hardware controller that can handle the RAID processing overhead without delaying read/write operations to the connected drives. By pairing a fast 8-channel RAID controller like the Areca with 3-4 relatively "slow" (high rotational latency) 7200 RPM hard drives, you just aren't going to see much difference between RAID 0/5 when you're looking at random read/write performance at varying queue depths. The fact that the benchmark numbers are nearly identical even on a 'worst case' benchmark indicates that the controller is not the limiting factor in the benchmark array's performance. In contrast, look at the results of the ICH7r controller in RAID 5 vs RAID 0 - obviously controller limited.

If you were to pair the same Areca controller with an array of 5-8 x 15k RPM drives and a deep-queued, random read/write intensive benchmark that would more appropriately simulate a database server's needs, you would begin to see the difference.

Since your original goal was to build a super fast RAID array that will run queries on a 150 gig database, this is why I was advising you to look at 10k/15k SAS drives in lieu of cheaper 7200 RPM SATA units. Essentially the Areca controller is going to spend time twiddling its thumbs because of the pokey drives you chose, which is why there won't be a significant performance impact running RAID 5 vs RAID 0 in your situation. Ultimately, you could not afford the performance you wanted, though I do think a pair of striped 150 gig Raptors paired with a good 3ware/Areca controller would have been a better choice for your application than 4 x 320 gig Seagate 7200.10 drives (again, rotational latency is a key consideration in database arrays due to the impact it has on random read/write performance).
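
To put rough numbers on the rotational latency point, here's a simple sketch; the average seek times are assumptions (typical-ish for each class of drive), not vendor specs:

```python
# Rough per-drive random-I/O ceilings implied by spindle speed and seek time.
# Seek figures are assumptions, not vendor specs.

def random_iops(avg_seek_ms: float, rpm: int) -> float:
    rotational_latency_ms = (60_000 / rpm) / 2   # wait half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for label, seek_ms, rpm in [("7200 RPM SATA ", 8.5, 7_200),
                            ("10k RPM Raptor", 4.6, 10_000),
                            ("15k RPM SAS   ", 3.5, 15_000)]:
    print(f"{label}: ~{random_iops(seek_ms, rpm):.0f} random IOPS per drive")
```

The spindles set the ceiling no matter how fast the controller is, which is the point about the 7200 RPM drives above.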
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
Originally posted by: Fullmetal Chocobo
RAID 5 performance is not bad at all when scaled to multiple (4+) hard drives on a good controller. I have 4 Western Digital 400gb RE2 hard drives in RAID 5, and performance is pretty close to my RAID 0 array with dual 150gb Raptors.

RAID 0 benchmarks

RAID 5 benchmarks.

Wait, did you look at the access times in your own benchmarks?

The transfer rates are almost identical - which is not unexpected since the Raptors use less dense platters than most 7200 RPM drives. What the Raptor gains in rotational speed it loses in platter density when you're looking at raw transfer rate, which is why 7200 RPM drives with high-density platters in RAID 5 arrays with the appropriate controller make great choices for media servers or other bulk storage applications.

The more important factor for the OP's database server is the access times - you've got a 12.7 ms access time on your 7200 RPM equipped RAID5 versus 8 ms on the 10k Raptor equipped RAID0 array - meaning that in random read/write cycles it's almost 40% SLOWER with the 7.2k RAID5 array. In this case, and based on the OP's benchmark on the Areca controller, it looks like the rotational latency advantage of the Raptors explains the difference which is what I was getting at in my previous suggestions to the OP to go with higher RPM drives.
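
Converting those measured access times straight into single-request random I/O rates (nothing fancier than 1000 ms divided by the access time):

```python
# Turn the benchmarked access times into rough random-I/O rates.
raptor_raid0_ms = 8.0    # measured access time, 2x Raptor RAID 0
re2_raid5_ms    = 12.7   # measured access time, 4x RE2 RAID 5

iops_raid0 = 1000 / raptor_raid0_ms   # ~125 random I/Os per second
iops_raid5 = 1000 / re2_raid5_ms      # ~79 random I/Os per second

slowdown = 1 - iops_raid5 / iops_raid0
print(f"RAID 0 (Raptors): ~{iops_raid0:.0f} IOPS")
print(f"RAID 5 (RE2s):    ~{iops_raid5:.0f} IOPS")
print(f"The RAID 5 array handles ~{slowdown:.0%} fewer random I/Os")
```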
 

Fullmetal Chocobo

Moderator, Distributed Computing
Moderator
May 13, 2003
13,704
7
81
Originally posted by: yuppiejr
Originally posted by: Fullmetal Chocobo
RAID 5 performance is not bad at all when scaled to multiple (4+) hard drives on a good controller. I have 4 Western Digital 400gb RE2 hard drives in RAID 5, and performance is pretty close to my RAID 0 array with dual 150gb Raptors.

RAID 0 benchmarks

RAID 5 benchmarks.

Wait, did you look at the access times in your own benchmarks?

The transfer rates are almost identical - which is not unexpected since the Raptors use less dense platters than most 7200 RPM drives. What the Raptor gains in rotational speed it loses in platter density when you're looking at raw transfer rate, which is why 7200 RPM drives with high-density platters in RAID 5 arrays with the appropriate controller make great choices for media servers or other bulk storage applications.

The more important factor for the OP's database server is the access times - you've got a 12.7 ms access time on your 7200 RPM equipped RAID5 versus 8 ms on the 10k Raptor equipped RAID0 array - meaning that in random read/write cycles it's almost 40% SLOWER with the 7.2k RAID5 array. In this case, and based on the OP's benchmark on the Areca controller, it looks like the rotational latency advantage of the Raptors explains the difference which is what I was getting at in my previous suggestions to the OP to go with higher RPM drives.

I agree. The difference most likely is due to the difference in disks in each array, not so much the arrays themselves. I was merely stating that RAID 5 performance is not horrible compared to RAID 0.
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
I appreciate that you are providing insight based on actual use of the product; my point is that the benchmark numbers you provided are irrelevant since you are comparing a RAID 0 array using 10k RPM disks to a RAID 5 array using 7.2k RPM disks. I found the following article at Tom's that compares performance numbers using the SAME hard drives on the Areca, 3Ware and other controllers across RAID 0, 0+1 and 5, including database usage benchmarks; it should be of great value in this discussion.

http://www.tomshardware.com/20...smb-servers/index.html

I was actually rather surprised by the outcome. If you compare the RAID 0 performance in database applications here:

http://images.tomshardware.com...b-servers/image011.gif

.. to the RAID 5 performance here:

http://images.tomshardware.com...b-servers/image013.gif

... you're looking at a serious difference in performance, even with only 4 drives. For example, the Areca unit was running 180 or so I/Os per second in RAID 5 and 480 in RAID 0, while the 3Ware hit 270 IO/s in RAID 5 and 530 IO/s in RAID 0 in the database benchmarks.
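
Putting the quoted figures side by side (approximate numbers read off the charts):

```python
# RAID 0 vs RAID 5 database-benchmark IOPS as quoted above (approximate).
results = {"Areca": (480, 180), "3Ware": (530, 270)}   # (RAID 0 IOPS, RAID 5 IOPS)

for card, (raid0, raid5) in results.items():
    print(f"{card}: RAID 5 delivers {raid5 / raid0:.0%} of the RAID 0 rate "
          f"({raid0 - raid5} IOPS given up)")
```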

In summary, go with the 3Ware 9650SE controller and 2 x WD Raptors and re-task the Maxtors you ordered from Fry's, KingGheedora. You'll get the whole setup for around $670 after 2 x $30 rebates on the Raptors ($120+ under your original budget) and get much better performance for your database testing applications. You could still use the Seagate/Maxtor drives: put 2 in a RAID 1 array on the other 2 available 3Ware controller ports to provide a fast online backup. You could then buy 2 x removable SATA drive bays (they mount in a 5.25" slot), install one in your case, and use the drives to do offline backups if needed. Or you could return all 4 and just buy one big-ass drive for storage and backups.
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
yuppie, the 3ware in the tomshardware link seems like a better choice than the Areca 1210, but it's not the same one you're linking to at Newegg. Do you know what the performance difference is between the two models?

EDIT: Never mind, you had a typo in your post ("2650" instead of "9650"). I like that card but I'm still hesitant because of the 4-drive max.
 

cmbehan

Senior member
Apr 18, 2001
276
0
0
I know the build is part of the thrill you're going for, but I think you're going to be much better off looking at server hardware than workstation hardware here...

It'll give you a much more accurate real-world test of your applications.

Like Horsepower mentioned, you should probably look for refurb Dell or HP ProLiant boxes....

If you really want to build your own, look at Supermicro boards/chassis, SAS cards/drives, xeon chips, redundant power, etc...

Oh, and definitely go with Windows Server 2003 or SBS 03 (or the *nix of your choice).


Yes, my suggestions are probably doubling your budget, but it's going to give you a much more realistic test bed for your work.

 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
I'm not doing load testing so I don't need anything more realistic than what I've got planned...
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
Yes, typo on my part - forgive the confusion.

Honestly, you need FASTER (higher RPM) drives, not more of them... and putting more than 4-5 drives on a single controller channel is generally a bad idea if you want to maintain performance (bus/controller bandwidth availability). This is why most standalone servers that have more than 5 SCSI drives use split backplanes and controllers that support more than one channel on a single card.

At this point you can only afford 2 suitably fast Raptor drives, so why are you worried about having 8 ports available on your array? The P35 chipset mainboards only support 4 PCIe lanes on the second PCIe "16x" slot when the video card is in use, so even the Areca controller is really only practical for 4 drives unless you're willing to take a performance hit that directly opposes your core design requirement... What it comes down to for the Areca card vs. the 3Ware is expandability vs performance. Since your application requires the latter, I don't even know why you are worried about "only" having 4 drives to work with.

Remember, your mainboard is going to have 6 x RAID 0/1/5/10 capable ports on the ICH9r southbridge that you can use for a slower storage array if you desire.
 

KingGheedora

Diamond Member
Jun 24, 2006
3,248
1
81
Because I don't want to limit myself in case I want more drives in the future. What if I want another array in the future? What if I want to run the maxtors and the raptors in two separate raid arrays off the add-in controller? I think that's a reasonable thing to do, because I want some redundancy on the maxtors if I end up using them as the backup/storage drive, and onboard raid is too slow for anything other than raid0.

Do you think the bandwidth of whatever combination of drives (10k RPM or less) would really surpass the 1GB/s bandwidth of the x4 PCIe slot?
 

yuppiejr

Golden Member
Jul 31, 2002
1,317
0
0
Originally posted by: KingGheedora
Because I don't want to limit myself in case I want more drives in the future. What if I want another array in the future? What if I want to run the maxtors and the raptors in two separate raid arrays off the add-in controller? I think that's a reasonable thing to do, because I want some redundancy on the maxtors if I end up using them as the backup/storage drive, and onboard raid is too slow for anything other than raid0.

..