Debian Base OS on RAID 0 + RAID 5 and VirtualBox vs. ESXi

VinylxScratches

Golden Member
Feb 2, 2009
1,666
0
0
I am trying to find a cheap 4- or 8-port SATA RAID card, but it's impossible to find one that works with ESXi.

I am about to say screw it and try setting up Debian with the OS on a RAID 0 config, then set up a RAID 5 across 4 drives and just run VMs that way.

How hard of a hit will I take?

I'd like to run 2 Windows 2008 servers and a Linux file server running TwonkyMedia Server, which I use to stream stuff. Do you think this will work out?

Currently I am running Debian on a P4 shitbox with 512MB of RAM, using a 320GB ATA drive for the OS and a 1TB drive for my files.
 

joetekubi

Member
Nov 6, 2009
176
0
71
I strongly suggest putting the OS on RAID 1 at a minimum. Don't mess with RAID 0, ever. If you're already planning on a 4-drive array, use RAID 10 instead of RAID 5. It's a much easier rebuild if the array or a drive ever borks on you.
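
To put rough numbers on the trade-off, here's a minimal Python sketch (the 1TB drive size is just a placeholder assumption, plug in whatever you actually buy):

```python
# Back-of-the-envelope usable capacity / fault tolerance for a 4-drive array.
# Drive size is a placeholder assumption, not a recommendation.
DRIVE_GB = 1000   # hypothetical 1TB drives
N = 4             # drives in the array

layouts = {
    # layout: (usable GB, worst-case drive failures survived)
    "RAID 0":  (N * DRIVE_GB,        0),
    "RAID 5":  ((N - 1) * DRIVE_GB,  1),
    "RAID 10": ((N // 2) * DRIVE_GB, 1),  # can survive 2 if they hit different mirrors
}

for name, (usable, failures) in layouts.items():
    print(f"{name:8s} usable: {usable:5d} GB   survives at least {failures} failure(s)")
```

On the same four drives, RAID 10 gives up a third of the usable space compared to RAID 5; that's the price of the simpler rebuild.
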
You haven't mentioned what kind of PC you will be running. You will need an i5 or X4 at a minimum; an i7 or X6 would be better, with lots of RAM.
I'm a big fan of VirtualBox, but others prefer VMware. Try both yourself and decide.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I would echo the recommendation of RAID1 or RAID10, RAID0 is just asking for trouble.

And I would recommend VMware Server or KVM over VirtualBox since Oracle bought it.
 

Khyron320

Senior member
Aug 26, 2002
306
0
0
www.khyrolabs.com
The HighPoint 2680 has been great since I got it working... I was having issues with my cheap 2TB drives dropping from the array, but I updated to firmware cc35 and so far it's good.

Maybe the 2680 PCIe works with ESX.
 

spikespiegal

Golden Member
Oct 10, 2005
1,219
9
76
RAID 5 went out of style when Bill Clinton left office. Frankly, I'd like to see parity logic banned by the United Nations given all the data I've seen it wreck, but brainwashed EMC geeks are still pushing it. Back when 9GB SCSI drives cost $500 it was worth considering.

I'm dealing with a VMware server using 4 drives in RAID 5 right now, and write performance and latency are so bad they're hardly worth measuring. Streaming video to multiple sources would likely be its only benefit.

RAID 1 or 10. I vote for RAID 1 and two logical drives because it will give you more flexibility.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
RAID 5 went out of style when Bill Clinton left office. Frankly, I'd like to see parity logic banned by the United Nations given all the data I've seen it wreck, but brainwashed EMC geeks are still pushing it. Back when 9GB SCSI drives cost $500 it was worth considering.

I'm dealing with a VMware server using 4 drives in RAID 5 right now, and write performance and latency are so bad they're hardly worth measuring. Streaming video to multiple sources would likely be its only benefit.

RAID 1 or 10. I vote for RAID 1 and two logical drives because it will give you more flexibility.

You have seen parity checking "wreck data"? Care to elaborate on that?

As for the performance, I'm also not sure why you are having performance problems. RAID 5 should be faster overall than 1 or 0. Are you not running on a dedicated controller with full I/O offload?

RAID 0 should increase your write speed to a theoretical maximum of X*Y (with X being the number of drives in the array and Y being the write speed of the slowest drive). You sacrifice data integrity for that speed boost, as there is no parity checking across the stripes, so one drive failing in any way destroys the entire array.

RAID 1 should increase your read speed to a theoretical maximum of X*Y; however, your write speed could theoretically be cut in half. With mirroring, you ensure data integrity as long as a drive isn't silently failing to write properly (which is why a RAID 2-6 setup with parity is beneficial).
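
A minimal sketch of those ceilings (the per-drive speed is an assumed number, not a measurement, and the RAID 1 write figure here assumes writes are limited to a single drive's pace):

```python
# Theoretical best-case throughput from the X*Y rule of thumb above.
X = 4      # drives in the array
Y = 100.0  # MB/s sustained for the slowest drive (assumption - measure your own)

raid0_read = raid0_write = X * Y   # striping scales reads and writes, no redundancy
raid1_read = X * Y                 # reads can be spread across all mirrors
raid1_write = Y                    # every write hits every mirror, slowest drive sets the pace

print(f"RAID 0: ~{raid0_read:.0f} MB/s read, ~{raid0_write:.0f} MB/s write")
print(f"RAID 1: ~{raid1_read:.0f} MB/s read, ~{raid1_write:.0f} MB/s write")
```
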

OP: Honestly, what all are you trying to do with this computer? Running 2 instances of Server 2008 as well as a media server (an odd setup) makes it difficult to figure out what your goal is.

-GP
 

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
Look at QNAP NAS devices. I just bought a QNAP TS-212 for $260 with no drives.
http://qnap.com/pro_detail_feature.asp?p_id=192

For another $100 you can move up to hot-swappable with more processor power and eSATA connectors... for another $100 you can get that with 2 or more drive bays... ideally you could have 4 drives doing RAID 10 with a hot spare.

These come with gigabit copper for iSCSI, CIFS/NFS, and USB.

Get some cheap 2TB drives:
http://www.amazon.com/Western-Digita.../dp/B002ZCXK0I

ESXi works with iSCSI, so that's your best bet there... just make sure you use gigabit copper, because you'll need about 600Mbit/sec for max iSCSI performance.
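
Quick unit-conversion sketch of why gigabit is enough for that target (nothing here is measured; it's just the 600Mbit figure divided out):

```python
# How much of a GigE link a ~600 Mbit/s iSCSI workload actually uses.
iscsi_needed_mbit = 600    # figure quoted above
gige_mbit = 1000           # gigabit copper wire rate

needed_mb_per_s = iscsi_needed_mbit / 8   # ~75 MB/s
gige_mb_per_s = gige_mbit / 8             # ~125 MB/s ceiling before overhead

print(f"iSCSI target ~{needed_mb_per_s:.0f} MB/s "
      f"of a ~{gige_mb_per_s:.0f} MB/s link "
      f"({iscsi_needed_mbit / gige_mbit:.0%} of the pipe)")
```
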
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
RAID 10 with SATA drives sucks balls for ESXi - it's nearly unusable. Any sort of load and a vMotion will fail (timeout) to vCenter. Dunno what you use your VMs for, but I can fit 8x as much on a 15K SAS solution as on a 7200rpm SATA solution, and I only use flash-backed write cache controllers with either drive type.

SATA drives are okay for backup (D2D2D), but other than that, man, they are too slow.
 

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
RAID 10 with SATA drives sucks balls for ESXi - it's nearly unusable. Any sort of load and a vMotion will fail (timeout) to vCenter. Dunno what you use your VMs for, but I can fit 8x as much on a 15K SAS solution as on a 7200rpm SATA solution, and I only use flash-backed write cache controllers with either drive type.

SATA drives are okay for backup (D2D2D), but other than that, man, they are too slow.
To put it lightly, you may have something else going on.

To get into more detail... SATA II has become the norm these days; it's basically the name the industry has settled on for SATA drives with 3Gb/sec transfer rates. Now that we all know this, let's look at bandwidth requirements for FC/iSCSI, since those are the two most common VMware storage connections. If you have a busy volume, you need a pipe of about 600Mb/sec for iSCSI... this is why gig copper works OK for it, and why FC scales as high as it does (VM-wise) at 2Gb or 4Gb (8Gb if you have money). In most of my DRS clusters, I can have 24 VMs per server without pipe I/O issues, but certain applications are write-hungry, so I have to spread out the array access or use FastCache to deal with hotspots in the array.
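
For a rough sense of what that budget means per guest, here's a sketch using the 600Mb/sec and 24-VM figures above (it's an average, not a measurement, and real VMs are bursty):

```python
# Average storage bandwidth available per VM if 24 guests share one
# ~600 Mbit/s iSCSI pipe. Purely arithmetic, not a benchmark.
pipe_mbit = 600
vms_per_host = 24

per_vm_mb_per_s = pipe_mbit / 8 / vms_per_host
print(f"~{per_vm_mb_per_s:.1f} MB/s sustained per VM on average")
```

Which is why a handful of write-hungry guests can blow that average and force you to spread the array access around.
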

I've been using SATA II drives with VMware for years. It was 'cheap' storage and worked fine for things without a lot of writes. My current employer originally set their VMFS volumes up on RAID 5 (12-14 spindles, though, on an old EMC CX500). The new stuff is a mix of SATA II and 10k SAS. Typically, 15k SAS and SSD are only used for FastCache and Oracle/MS SQL servers. In any case, they all vMotion just fine with SATA, but it can be really slow when client-based backups are running during a transfer.

If you're running into problems with vMotion, make sure you've got it on a private VLAN without any other traffic. If you're doing this locally, you can use a crossover cable, but it won't correct for dropped packets; you really need it to go through a private switch. It doesn't play well with other traffic. I typically just put it on a private, non-routed class C and all's well. The few failures I've seen have been virtual-machine specific, where the VM was originally installed in an ESX 3 or ESX 3.5 environment and needed to be powered down and cold migrated to change its disk type or hardware profile (thin-provisioned vs. thick-provisioned disk, etc.).

Some tuning pointers, though, for a low-resource VMware/Citrix environment: if you're simply running Linux, you should be good. Just size the RAM appropriately on the VMs (low) so you don't end up with a lot of swap eating your disk I/O. You can even look at non-journaling file systems like ext2, because they scale the best; ext3/4, ReiserFS, and most of the newer ones do a lot more touching of the files and end up causing a performance hit in comparison to ext2. Same goes for Windows: adjust your page file/system cache and try turning off the versioning crap where possible, since it creates too many restore points. I've actually had a lot of luck on standard systems disabling the modified timestamp, since it tends to touch a lot of files unnecessarily. It's not something you want to do on a system you need to audit (the modified stamp is a security feature!).
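
If you want to sanity-check the filesystem overhead claim on your own hardware, a crude probe like this gives a rough feel for small fsync'd writes (the mount points are placeholders for wherever you've got ext2 and ext3/4 volumes mounted):

```python
# Crude small-write probe: time a batch of fsync'd files on each mount point.
# Paths are placeholders; point them at real test directories before running.
import os
import time

def timed_writes(path, files=200, size=64 * 1024):
    """Write, fsync, and then delete a batch of small files; return elapsed seconds."""
    os.makedirs(path, exist_ok=True)
    payload = os.urandom(size)
    start = time.monotonic()
    for i in range(files):
        name = os.path.join(path, f"probe_{i}.bin")
        with open(name, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    for i in range(files):                      # clean up probe files
        os.remove(os.path.join(path, f"probe_{i}.bin"))
    return elapsed

for mount in ("/mnt/ext2-test", "/mnt/ext4-test"):   # hypothetical mount points
    try:
        print(f"{mount}: {timed_writes(mount):.2f}s for 200 x 64KB fsync'd writes")
    except OSError as err:
        print(f"{mount}: skipped ({err})")
```
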
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
I would echo the recommendation of RAID1 or RAID10, RAID0 is just asking for trouble.

And I would recommend VMware Server or KVM over VirtualBox since Oracle bought it.

VirtualBox has gotten even better since Oracle acquired it. Since it's open source, if Oracle ever does screw it up, it will likely be forked and carry on, like LibreOffice.

VirtualBox + phpvirtualbox works pretty awesome. I can't remember the sources, but just in the last couple of months I've seen polls showing VirtualBox is more popular than KVM, and that its performance is far better than KVM's and equal to or slightly better than VMware Server's.
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
RAID 10 with SATA drives sucks balls for ESXi - it's nearly unusable. Any sort of load and a vMotion will fail (timeout) to vCenter. Dunno what you use your VMs for, but I can fit 8x as much on a 15K SAS solution as on a 7200rpm SATA solution, and I only use flash-backed write cache controllers with either drive type.

SATA drives are okay for backup (D2D2D), but other than that, man, they are too slow.

Then you set something up wrong.

I'm running a Dell PERC 5i in my Debian file server with 4x 250GB SATA drives in RAID 5. My two ESXi hosts run approximately 14 VMs off that datastore with no issues. Latency averages about 4ms for reads and 1ms for writes, and I can get nearly 90MB/s read inside the VMs.

Prior to that I was using Linux software RAID 10 and had similar performance.