I work at a small company, and after running into storage performance problems with our last Hyper-V host, the boss is asking me to research ways to correct it before he spends money on someone else solving the problem for him.
We basically have one host now with about 20 VMs. CPU and RAM load is fine: we are barely using the CPU and only about half of the 128GB of RAM. But the I/O system is taking a beating.
The current setup is (12) 2TB drives in RAID 5. That gives about 20TB of space and comes out to roughly 560 IOPS. I read that RAID 10 is much faster, but in order to fit 20TB we would need to go to 4TB disks. The problem is that (10) 4TB disks in RAID 10 give about the same IOPS as before (albeit with less chance of failure).
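To sanity-check those numbers, here is a rough back-of-the-envelope calculation for effective IOPS under the standard RAID write penalties. The per-disk IOPS figure (~80 for a 7.2k SATA drive) and the 70/30 read/write mix are assumptions, not measurements from the host; the real mix matters a lot.

```python
# Rough effective-IOPS estimate for spinning disks behind a RAID write penalty.
# Assumptions (not measured on this host):
#   - ~80 IOPS per 7.2k RPM SATA disk
#   - RAID 5 write penalty = 4, RAID 10 write penalty = 2
#   - 70% read / 30% write workload mix

def effective_iops(disks, per_disk_iops, write_penalty, read_pct):
    raw = disks * per_disk_iops
    # Each write costs `write_penalty` backend I/Os; reads cost one.
    return raw / (read_pct + (1 - read_pct) * write_penalty)

print(effective_iops(12, 80, 4, 0.7))   # (12) 2TB in RAID 5  -> ~505
print(effective_iops(10, 80, 2, 0.7))   # (10) 4TB in RAID 10 -> ~615
```

Under these assumptions the two layouts land in the same ballpark, which matches what I found: RAID 10 halves the write penalty but also drops two spindles, so the gain mostly cancels out.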
What other options should we look at? I know very little about SANs; it seems they would work, but they are also very expensive. It also looks like using an SSD on the RAID controller for CacheCade might help, but how much I'm not sure. Is there an option out there I don't know about that may help? We are looking to get about double the IOPS (1000-1200) for about $10,000-$15,000.