Best way to get more IOPS on a Hyper-V host?

crash331

Member
Sep 26, 2013
I work at a small company, and after running into storage performance problems with our last Hyper-V host, the boss is asking me to research ways to fix it before he spends money on having someone else solve the problem for him.


We basically have one host now with about 20 VMs. CPU and RAM load is fine; we are barely using the CPU and only about half of the 128GB of RAM. But the I/O subsystem is taking a beating.

The current setup is (12) 2TB drives in RAID 5. That gives about 20TB of space and comes out to roughly 560 IOPS. I read that RAID 10 is much faster, but in order to fit 20TB we would need to go to 4TB disks, and the problem is that (10) 4TB disks in RAID 10 give the exact same IOPS as before (albeit with less chance of failure).
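
(For reference, the usual back-of-the-envelope math here folds in a per-RAID-level write penalty, 4 for RAID 5's read-modify-write and 2 for RAID 10's mirroring. A minimal sketch in Python, assuming ~75 IOPS per 7,200 RPM spindle and an illustrative 65% write mix rather than measured figures:)

```python
# Rough effective-IOPS estimate for a RAID array under a mixed workload.
# Assumed figures (not measurements): ~75 IOPS per 7,200 RPM spindle,
# write penalty 4 for RAID 5, 2 for RAID 10, ~65% writes.

def effective_iops(drives, iops_per_drive, write_penalty, write_fraction):
    raw = drives * iops_per_drive
    # Every front-end write turns into `write_penalty` back-end I/Os.
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

for label, drives, penalty in [("12x 2TB RAID 5 ", 12, 4),
                               ("10x 4TB RAID 10", 10, 2)]:
    print(label, round(effective_iops(drives, 75, penalty, 0.65)), "IOPS")
```

The absolute numbers shift a lot with controller cache and access pattern, but the write-penalty term is why a write-heavy load hurts so much more on RAID 5 than on RAID 10.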

What other options should we look at? I know very little about SANs; it seems they would work, but they are also very expensive. It also looks like using an SSD on the RAID controller for CacheCade might help, but I'm not sure how much. Is there an option out there I don't know about that might help? We are looking to roughly double our IOPS (to 1,000-1,200) for about $10,000-$15,000.
 

Viper GTS

Lifer
Oct 13, 1999
RAID5 is just about the worst setup for IOPS performance. What drives do you have now? What kind of system (internal storage? RAID card? Cache configuration?).

What's your read/write mix and access pattern look like? That is going to have a huge impact on what the answer looks like. Can you split out your workloads and put high-IOPS loads on faster storage? A couple thousand IOPS is easy to do for not much money; it's the 20 TB requirement that's killing you.
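
(If you don't have hard numbers yet, a quick way to sample the mix is something like the sketch below; it uses the third-party psutil package and an arbitrary 60-second window. Perfmon's PhysicalDisk counters will give you the same data natively on the host.)

```python
# Sample IOPS and the read/write mix over a short window using psutil's
# cumulative disk counters (pip install psutil).
import time
import psutil

INTERVAL = 60  # seconds; arbitrary sampling window

before = psutil.disk_io_counters()
time.sleep(INTERVAL)
after = psutil.disk_io_counters()

reads = after.read_count - before.read_count
writes = after.write_count - before.write_count
total = reads + writes or 1  # avoid dividing by zero on an idle box

print(f"IOPS over the window: {total / INTERVAL:.0f}")
print(f"write mix: {100 * writes / total:.0f}%")
```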

Viper GTS
 

crash331

Member
Sep 26, 2013
Viper GTS said:
RAID5 is just about the worst setup for IOPS performance. What drives do you have now? What kind of system (internal storage? RAID card? Cache configuration?).

What's your read/write mix and access pattern look like? That is going to have a huge impact on what the answer looks like. Can you split out your workloads and put high-IOPS loads on faster storage? A couple thousand IOPS is easy to do for not much money; it's the 20 TB requirement that's killing you.


Right now we have twelve 2TB 7200RPM SAS drives in a Dell PowerVault using an H800 RAID card. I think the card has 512MB cache. We are looking at upgrading that to 1GB on the new system.


From looking at the logs, the R/W ratio looks like it will be about 60-70% writes. I'm not sure splitting the loads is easy; at least, I wouldn't know how to do it. Each VM handles storage and retrieval of medical images and records for clinics across the US, so it's not a predictable load. A VM could be idle for 8 hours and then suddenly be sent 1,000 1MB images for a CT or MRI or something. Edit: Forgot to mention that as the server stores these images (JPEGs in DICOM format), it also writes patient data to a SQL Express database.
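
(Rough, illustrative numbers for one of those bursts, assuming the whole study arrives over about a minute; nothing here comes from actual measurements:)

```python
# Back-of-the-envelope look at one image burst versus the SQL Express writes.
# All figures are illustrative assumptions.
images = 1000
image_mb = 1.0
window_s = 60  # assume the modality pushes the whole study in ~1 minute

print(f"burst throughput: ~{images * image_mb / window_s:.0f} MB/s")
print(f"burst rate: ~{images / window_s:.0f} large writes/sec")
# The database side is the opposite shape: many small (4-8K) random writes,
# which is where the RAID 5 write penalty and queue depth bite hardest.
```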


I looked at going to 10K drives, but they seem to max out at 1.2TB and are costly at that size. The server we are looking at can fit twelve 3.5-inch drives or twenty-four 2.5-inch drives. If there is a better solution with storage not contained within the server, we could consider that as well.



Edit:

We also looked at getting an SSD and using it for CacheCade, but I'm not sure how much it would help us. From what I can tell, Dell only supports CacheCade 1.0, which only caches reads. From the logs I ran, it looks like writes are mostly what's waiting in the queue, and writes are what take the hit when using RAID 5.
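
(Rough math on why a read-only cache has limited upside with a write-heavy mix; the 65% write split and the RAID 5 write penalty of 4 are assumptions, not measurements:)

```python
# How much back-end work can a read-only cache remove when writes dominate?
# Assumptions: 65% writes, RAID 5 write penalty of 4, perfect read hit rate.
write_frac = 0.65
write_penalty = 4

backend_per_io = (1 - write_frac) + write_frac * write_penalty
backend_reads_cached = write_frac * write_penalty  # reads all served from SSD

saved = 1 - backend_reads_cached / backend_per_io
print(f"back-end I/Os per front-end I/O: {backend_per_io:.2f}")
print(f"best-case reduction from a read-only cache: {saved:.0%}")
```

Even with a perfect hit rate, that only trims the back-end load by about 12% on this mix; the write amplification stays.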
 

Viper GTS

Lifer
Oct 13, 1999
Have you considered replacing the H800 with a newer adapter that supports CacheCade 2.0? There's a good chance you wouldn't even need to rebuild the array (though adding CacheCade may affect this).

From your description it sounds like you are using an external DAS type enclosure, so you may have internal expansion space left for SSDs.

You should be able to do this within your budget. Buying enough mechanical disks to cover your spike performance needs would probably exceed your budget unless you go the DIY route, which is really not appropriate for that kind of environment.

Viper GTS
 

imagoon

Diamond Member
Feb 19, 2003
I generally find that a "wide" RAID 5 like that rarely comes close to the IOPS the math promises. Generally you need something like disk pooling to help shift the load among the array members (look up EMC disk pools vs. pure RAID 5 for an example). The CacheCade that Viper mentioned may help a lot depending on the workload; EMC calls theirs FAST Cache, but just about every vendor has some version of it now.

Since this is DAS, is the H800 set up properly for MPIO? Does the DAS unit have its own controllers, or is it purely a dumb pile of disks (e.g., a Dell MD3220 vs. an MD1220)? Normally you get far better performance out of a controller-based DAS array, since its controllers handle the disk management better than the host's internal RAID card.
 

Jeff7181

Lifer
Aug 21, 2002
Sounds like you need to move to a SAN. 20 TB is a lot of data to house within a single server, especially if you need it to perform.

The problem with your setup, as you know, is that you're not going to get any more IOPS out of it. In my experience in the SAN world, I only expect 50-60 IOPS per spindle from 7.2K drives. Even then, latency is quite high (>30 ms), and you wouldn't want to use them for anything doing small 4K or 8K I/Os.
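
(For reference, a quick sketch of where the per-spindle figure comes from, using typical assumed 7.2K drive numbers rather than a specific model:)

```python
# Per-spindle random IOPS estimate: average seek time plus average
# rotational latency. Typical (assumed) 7,200 RPM figures.
avg_seek_ms = 8.5
rotational_latency_ms = 0.5 * 60_000 / 7200  # half a revolution, ~4.17 ms

service_time_ms = avg_seek_ms + rotational_latency_ms
print(f"~{1000 / service_time_ms:.0f} IOPS per spindle, best case")
# Real arrays land lower once queuing, RAID overhead, and sane latency
# targets are factored in, hence the 50-60 planning figure.
```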

Getting a RAID controller that supports SSD as cache would help, especially if only a very small portion of that 20 TB is "hot."

That's probably your best bet as you're not going to get much of a SAN for $10-15k. A single shelf of 24 1 TB disks for the SANs I manage costs more than that.