
Help with storage on a Hyper V server

crash331

Member
I am trying to help out a small business on a limited budget, so the hardware that is in place and the configuration it is running are not easily changed.

They have Server 2008 running Hyper-V with a few VMs on it. The storage disks are a RAID 5 array grouped as one giant drive, and all of the VMs' VHDs are on this one drive. The write speed is atrocious. I know you take a penalty for using RAID 5 and that RAID 10 is better, but like I said, I am not likely to change that.

What I am wondering is: is there a way to maximize performance using RAID 5 and what they have? What if we created multiple RAID 5 arrays and put a few machines on each, instead of having all the machines on one array? Would that help?
 

How many disks are in the RAID 5 and what speed are the disks?
 

This is certainly the most important question before proceeding. Chances are that while your storage space is sufficient, your storage I/O is maxed out.

Chopping up the drives into smaller virtual disks may improve performance for some VMs, but the ones that are putting a strain on your server now will only suffer further.
 
Looks like they are actually in a Dell PowerVault, not just an array in the server like I thought. It's an MD1200 with twelve 2TB disks, around 20TB total. Looking at the model numbers, they are only 7200 RPM disks (ugh!), but they are SAS; the model number is ST2000NM. The RAID controller is a Dell H800.

Oh, and only 6TB of disk space is being used, and realistically it will not go beyond 12TB.
 
Are all the physical disks at least allocated to the virtual disk? If not, adding any remaining disks may help, but yeah, otherwise you're in a bind with getting more performance out of that.

You could potentially look into some SSD caching; it looks like the H800 supports CacheCade. It's obviously not free, but it's probably the cheapest option at this point.
 
Yes, all disks are in use. Thanks for the info.

Going forward, if they build another Hyper-V host, what would be the best setup? Would just changing to RAID 10 be sufficient?

The VMs are typically low-load with spikes here and there. They store files which are tagged in a SQL database. On a typical day, 500MB of files may be added, with 500 records added to the database. On a heavy day, maybe 2-3GB is added, with 2,000-3,000 associated database records.
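For scale, the sustained write rate implied by those numbers is tiny; with RAID 5 the pain is random write IOPS, not throughput. A quick back-of-the-envelope check, assuming (my assumption, not stated above) the heavy-day data lands over an 8-hour workday:

```python
# Rough throughput sanity check using the heavy-day figure from the post above.
# Assumption (mine): the 3GB of files arrives spread over an 8-hour workday.
heavy_day_bytes = 3 * 1024**3
workday_seconds = 8 * 3600

sustained_kb_per_s = heavy_day_bytes / workday_seconds / 1024
print(round(sustained_kb_per_s, 1))  # ~109.2 KB/s sustained
```

Even the heavy day averages out to roughly 0.1 MB/s, so raw throughput isn't the bottleneck; it's the small random writes eating the RAID 5 write penalty.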
 
Going to RAID 10 will give you about a 50% bump in theoretical performance based on this calculator (http://www.wmarow.com/strcalc/), with some guesswork on my part. You still don't have a lot to work with, but that's better.

As for adding another Hyper-V host, I'd assume you're looking to share storage on that MD1200. If I understand it correctly (and correct me if I'm wrong, as I've not touched Hyper-V in a while), you'll need to carve that MD1200 up into separate disk groups in order for both servers to access the same enclosure at the same time. They still can't access the same virtual disks, though, due to NTFS limitations.

So that said, even after switching to RAID 10, you're going to be reducing performance with every pair of disks you remove and add to the new host. Again, that builds a strong case for SSD caching as a cheap performance boost to what you have in place already.
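The rough math behind that kind of calculator can be sketched with the standard RAID write penalties (4 back-end I/Os per random write for RAID 5, 2 for RAID 10). The ~75 random IOPS per 7200 RPM disk below is my own assumed figure, and this models pure random writes; a mixed read/write workload like the calculator handles will show a smaller gap:

```python
def random_write_iops(n_disks: int, per_disk_iops: float, level: str) -> float:
    """Theoretical random-write IOPS for an array, using the standard
    RAID write penalties (back-end I/Os generated per front-end write)."""
    write_penalty = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}
    return n_disks * per_disk_iops / write_penalty[level]

# 12 x 7200 RPM SAS drives, assuming ~75 random IOPS each (my estimate)
print(random_write_iops(12, 75, "raid5"))   # 225.0
print(random_write_iops(12, 75, "raid10"))  # 450.0
```

For pure random writes the same twelve spindles roughly double their IOPS going from RAID 5 to RAID 10, which is why the write penalty dominates any tuning you can do within RAID 5.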
 
They would probably just purchase new hardware for a new vhost: a new R720 and a new MD1200. No need to share the MD1200 between the two vhosts.
 
Is the battery installed on the H800? If not, you're running in write-through mode and the performance will be complete crap no matter what you do, as writes will not be cached and must be committed immediately. The battery is required for write caching.

Also, is the RAM module installed? There is a 2MB version that doesn't do squat; you should have the 512MB / 1024MB module.
 
Do they need so much storage for a single host that they need an MD1200?


Sort of. Realistically they will probably use 12TB, but they need to be able to go to 20TB if needed.

You can probably fit almost that much into a 720 by itself. I'm not sure why the MD1200 is needed for the 20TB; that decision is above me, and it may come down to how contracts were written.
 
I believe it has the 512MB module, and it does have a battery.
 

They might want to consider the R720xd; it can fit up to 26 hard disks internally.

You might also want to look into the PE VRTX, especially if the Hyper-V environment is going to keep growing. The VRTX can support up to 4 blade servers with shared storage, so you could build a Hyper-V cluster all within the VRTX chassis.
 