Thoughts on my personal server setup

zephxiii

Member
Sep 29, 2009
183
0
76
I've been primarily running a Windows Server 2003 / Exchange 2003 setup on an old Socket 754 Athlon 64 2GHz system, and it has been rock solid with great performance. However, I haven't been able to get RAID to work on it for whatever reason. I don't need RAID for the 2x 2TB storage drives, as I use SyncBack and prefer that, but I do need something for the OS drives. I have also been wanting to get away from the OS being tied to a physical machine, via ESXi, and I've been wanting to move up to 2008 R2.

I looked at rack servers and setups (which I like, and the servers aren't that expensive really, but that just isn't practical for me). The tower server options are kinda slim pickings for the ideal box I was looking for (cheap with SAS or SATA RAID)...so I ended up with a Dell Precision 690: a dual-socket LGA 771 system with one Xeon 5130 CPU (dual-core 2GHz), 4GB of ECC RAM (64GB max), integrated SAS 5/iR RAID with a total of 7 SATA ports, a 15k RPM 36GB SAS HD, and a pretty nice case. The ECC RAM is kind of a major point for me (I know how RAM likes to go bad and screw things up at times), and I figure the RAID should work well.

The only downside I found with it is that it is a beast, with its overkill 1000W PSU drawing 180W from the wall even with a low-power video card :( The only reason I dislike that is it's going to put a nice dent in the UPS runtime lol. It also moves a notable amount of air, though probably less once I get it into the basement. It has 4 HD bays (with nice trays), and I could probably mount 3 additional drives in the optical/floppy bays.

I've actually had this machine for 7 months now, but it was just a test bed running 2008 R2 24/7 with a Windows 7 VM...it has been running flawlessly, though I need to move it somewhere with less dust (a carpetless room).

The new setup consists of two WD5000AKS 500GB drives for ESXi and VM space (though 500GB may be too large and a waste). I will probably throw the two 2TB and the 1.5TB drives into it. I have a spare 2TB sitting around...but I feel I need to keep that as a backup in case one of the active 2TBs craps out...and this has already happened! I just got the RMA'd drive back from the one that failed; luckily I had that spare and was able to throw it in and re-mirror.

What about the WD5000AKS drives in RAID, though? They don't have TLER enabled by default. Is TLER *really* that important? It seems like if a drive is taking that long on error recovery and dropping out of the RAID, I should really be thinking about replacing it. Supposedly I can enable TLER; it's just that I'd have to dick around with floppy drives and MS-DOS boot disks, which is not my favorite activity (getting it all to work).
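Apparently there's also a way to poke at this from a Linux live CD with smartmontools instead of the DOS tool. Just a sketch: whether these particular drives honor SCT Error Recovery Control at all is an assumption, and `/dev/sda` is a placeholder for whatever the drive shows up as:

```shell
# Check (and optionally set) the SCT Error Recovery Control timeout,
# which is the generic name for TLER. Needs smartmontools and root;
# the device path is a placeholder.
DEV=${1:-/dev/sda}
if command -v smartctl >/dev/null 2>&1; then
    smartctl -l scterc "$DEV"            # show current read/write timeouts
    # smartctl -l scterc,70,70 "$DEV"    # would set both to 7.0 seconds
else
    echo "smartctl not installed"
fi
```

Note the setting doesn't always survive a power cycle, so it may need to be reapplied at boot.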

Also, by default Dell's RAID cards have the disk cache disabled. This made write performance abysmal: in testing I got a 30-40MB/s read transfer to the original server, but only a 6MB/s write rate (after a short 30-40MB/s burst) with a 700MB file.

I had to install Windows XP on a separate drive (in ATA mode) to install the Dell RAID management software and set "Disk cache policy" to Enabled. After that I saw a slight increase in read rate, and a constant write rate of 30-40MB/s with that 700MB file.
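For reference, the numbers above came from copying a 700MB file over the network; something like this dd run measures raw sequential writes locally in the same ballpark (a sketch for a Linux box; the path is arbitrary and it's scaled down to 100MB here):

```shell
# Rough local sequential-write test: write 100MB of zeros and force it
# to disk, then print dd's throughput summary line.
TARGET=/tmp/ddtest.bin
dd if=/dev/zero of="$TARGET" bs=1M count=100 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET"    # clean up the scratch file
```

The `conv=fdatasync` matters; without it dd reports the speed of filling the OS cache, not the disk, which is exactly the kind of burst-then-crawl behavior described above.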

Should I be OK leaving the disk cache enabled if I'm on a UPS? I suppose I can disable it once I'm done setting everything up...it's just that disk performance really sucks with it disabled.

Also, how am I supposed to monitor drive health on the RAID? I can't monitor it from a VM, and I haven't seen anything in ESXi for it either. Is the machine going to let me know somehow if a drive dies?

I'm just going to have 2008 R2 and a Win7 VM on it...I may move the current 2003 instance onto it as well, though.

I'm thinking this machine should hold me over quite well for a while. I can throw a boatload of ECC RAM into it, a quad-core CPU, and even a 2nd CPU (not sure how that will work out with ESXi though; I think it needs another license). It just sucks that it pulls 180W at the wall...that's the only downside I can think of.

On a side note, as I'm typing this I'm hearing my 1.5TB getting flaky again :( Random clicks it shouldn't be making, popping in and out of the OS...:( I might have to use that 2TB and get another one. At least the media on it is expendable.
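If I get around to it, a quick SMART check should tell me how far gone it is. Roughly (a sketch: the device path is a placeholder, and it needs smartmontools plus root):

```shell
# SMART sanity check on a suspect drive: overall health verdict, plus
# the attributes that usually show up first on a dying disk.
DEV=${1:-/dev/sdb}
if command -v smartctl >/dev/null 2>&1; then
    smartctl -H "$DEV"                                       # PASSED/FAILED verdict
    smartctl -A "$DEV" | grep -Ei 'realloc|pending|uncorrect' || true
else
    echo "smartctl not installed"
fi
```

Nonzero reallocated or pending sector counts on a clicking drive would settle the "replace it or not" question pretty quickly.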
 

mfenn

Elite Member
Super Moderator
Jan 17, 2010
22,400
5
71
www.mfenn.com
You should be able to configure the policies on the SAS 5/iR by pressing Ctrl-C when prompted during the boot process. Anyway, yeah, the SAS 5/iR is a pretty crap RAID controller all things considered. You're seeing bad performance because it doesn't have any onboard cache at all. Not much you can do about that except live with it or get a new RAID controller.

As for monitoring the health of the RAID from VMware: go to support.dell.com, enter your service tag, choose VMware ESXi 5 (or whatever version) as your OS, and download the OpenManage package for VMware.
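Once you've got the bundle downloaded, installing it from the ESXi shell looks something like this (a sketch; the bundle filename is a placeholder, use whatever the download page actually gives you for your ESXi version):

```shell
# Install an offline bundle (e.g. Dell OpenManage) on an ESXi host and
# confirm it shows up. Run this on the host itself, e.g. over SSH.
BUNDLE=/tmp/OM-SrvAdmin-Dell-Web.zip   # placeholder filename
if command -v esxcli >/dev/null 2>&1; then
    esxcli software vib install -d "$BUNDLE"
    esxcli software vib list | grep -i openmanage
else
    echo "esxcli not found; run this on the ESXi host"
fi
```

After a reboot you can then point the OpenManage web console (or a Server Administrator install elsewhere) at the host to get drive and array status.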
 

Ketchup

Elite Member
Sep 1, 2002
14,541
236
106
I ended up with a Dell Precision 690 that is a dual-socket LGA 771 system with one Xeon 5130 CPU (dual-core 2GHz), 4GB of ECC RAM (64GB max), integrated SAS 5/iR RAID with a total of 7 SATA ports, a 15k RPM 36GB SAS HD, and a pretty nice case... The only downside I found with it is that it is a beast, with its overkill 1000W PSU drawing 180W from the wall even with a low-power video card :(

Something to keep in mind is that the Xeon and the 15k drive are what's causing your power draw, not the high-wattage PSU by itself. The headroom will come in handy if you add another CPU or more drives to the box.
 

zephxiii

Member
Sep 29, 2009
183
0
76
You should be able to configure the policies on the SAS 5/iR by pressing Ctrl-C when prompted during the boot process. Anyway, yeah, the SAS 5/iR is a pretty crap RAID controller all things considered. You're seeing bad performance because it doesn't have any onboard cache at all. Not much you can do about that except live with it or get a new RAID controller.

Yeah, I went into the Ctrl-C utility to set it up, and the disk cache policy setting is nowhere in that BIOS utility. But changing the disk cache policy to Enabled made a massive difference: I'm seeing constant write speeds in the high 30s to low 40s MB/s vs. 6-7MB/s before. Night and day. Read speeds were slightly better too, at around the same speed as the writes. I would imagine this enables the cache built into the HDs. The machine seems to be running just great now with regard to disk access, whereas before it was sluggish.

I've attached a screenshot of the setting in the RAID utility from the setup at work (which uses a 5/iR and the same utility).



As for monitoring the health of the RAID from VMWare, go to support.dell.com, enter your service tag, choose VMWare ESXi 5 (or whatever version) as your OS, and download the OpenManage package for VMWare.

Thanks! That sounds great!


dellraid.jpg
 

zephxiii

Member
Sep 29, 2009
183
0
76
Something to keep in mind is that the Xeon and the 15k drive are what's causing your power draw, not the high-wattage PSU by itself. The headroom will come in handy if you add another CPU or more drives to the box.

Oh, I thought it was more about the size/complexity of the mobo and the inefficiency of a PSU that size :-/ I didn't think the Xeon sapped that much; I figured it would be similar to a Core 2 Duo of the era. Well, that SAS drive is no longer in there, so hopefully the power draw has dropped a little :)
 

mfenn

Elite Member
Super Moderator
Jan 17, 2010
22,400
5
71
www.mfenn.com
Ah OK, I was thinking of the PERCs where it is in the BIOS. 40MB/s is still pretty terrible for a 15K drive, unless it is a really old one.
 

zephxiii

Member
Sep 29, 2009
183
0
76
Oh no, these are 7200RPM drives. Performance is still kinda low, but it's more than plenty; there won't be much activity on these drives once everything is set up.
 

zephxiii

Member
Sep 29, 2009
183
0
76
Yeah, it is way too small at 36GB. I've decided that I will not waste HD bays on drives that small anymore. I've already used over 100GB on the RAIDed 500GB drives for ESXi and the two VMs.