HP ProLiant, SSDs and virtualization.

pragmature

Junior Member
Dec 2, 2009
Hi all,

We are at the beginning of a virtualization project for our labs. On the server side, we plan to buy some nice HP DL160/DL360 G6 machines, ideal for VDI with their 9 DIMM slots per socket, Xeon 5500 series CPUs and (relatively) low cost. The bottleneck in our future hardware infrastructure appears to be the storage subsystem. We have an old IBM SAN that we could use for that, but with SSDs being an order of magnitude faster in IOPS, I’m not sure about buying lots of 15K SAS drives.

We are toying with the idea of just filling the DL160s with SSDs and virtualizing the SAN itself; that way we can scale nicely by just adding identical machines to the infrastructure. However, I have several questions about this setup:

-HP charges an arm and a leg for its SSDs (rebranded Samsung drives, I believe): $25/GB. Are they that much better than the Intel X25-E ($11/GB)? Both use an SLC design.

-Can we use third-party SSDs (like the X25-E) in HP ProLiant servers?

-Can the Smart Array controllers in HP servers (P410/P411/P800) cope with lots of SSDs? I can’t find any benchmark or review of such a setup.

-And finally: if you have good or bad experience with virtual SANs, feel free to share it here!
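For the first question, the quoted $/GB figures can be turned into a quick back-of-the-envelope array cost comparison. The 64GB capacity and 8-drive array size below are assumptions for illustration, not from the post:

```python
# Back-of-the-envelope cost comparison using the per-GB prices quoted above.
# Capacity (64GB) and drive count (8) are hypothetical illustration values.

def array_cost(price_per_gb, capacity_gb, drives):
    """Total cost of an array of identical SSDs."""
    return price_per_gb * capacity_gb * drives

CAPACITY_GB = 64   # assumed per-drive capacity
DRIVES = 8         # hypothetical array size

hp_cost = array_cost(25, CAPACITY_GB, DRIVES)      # HP-branded SLC at $25/GB
intel_cost = array_cost(11, CAPACITY_GB, DRIVES)   # Intel X25-E at $11/GB

print(f"HP:    ${hp_cost:,}")      # $12,800
print(f"Intel: ${intel_cost:,}")   # $5,632
```

At those prices the HP-branded drives cost more than twice as much for the same raw capacity, which is why the question matters.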
 

zuffy

Senior member
Feb 28, 2000
Using SSDs for virtualization is insane. You're better off investing in an EMC CLARiiON or something with that money.
 

Chiropteran

Diamond Member
Nov 14, 2003
It's not really that insane, depending on the use of the VM servers. If you are virtualizing lots of little servers, 12GB of disk each for example, you can run many more on a single SSD than you ever could on a typical mechanical hard drive, due to performance considerations. Taken to the other extreme, if you need 300GB of capacity for each VM, SSD is kinda silly.

It depends on your needs.

Can't answer your specific questions, as we only use Dell here, but I would expect your RAID card or hard disk controller to see the SSDs the same as any other SATA drive, so if it supports arrays of standard SATA drives I don't see any reason why it would be incompatible with SSDs.
 

DukeN

Golden Member
Dec 12, 1999
I believe you are better off buying newer SAN hardware with spindle-based disks (SAS or FC). Spread the load over 24 or 32 drives and you'll have fantastic performance for virtualization, as your aggregate I/O across that many spindles is much greater.
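The spindle-count argument can be sketched with a common rule-of-thumb figure of roughly 175 random IOPS per 15K drive (an assumption, not a measurement from the post):

```python
# Rough aggregate random IOPS for a spindle array. The ~175 IOPS per 15K
# SAS/FC drive is a commonly cited rule-of-thumb estimate, assumed here.
IOPS_PER_15K_SPINDLE = 175

for drives in (24, 32):
    print(f"{drives} drives: ~{drives * IOPS_PER_15K_SPINDLE} aggregate IOPS")
# 24 drives: ~4200 IOPS, 32 drives: ~5600 IOPS
```

That is still below what a handful of SLC SSDs can deliver on random reads, but it is plenty for many virtualization workloads, and the spindles bring far more capacity per dollar.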

The other thing to consider is that you will get the space you need for virtualization at a drastically lower cost than with SSDs. SSDs are great for random reads and writes in desktop use, but it can be argued that for sequential use, especially in a massive array, those advantages are not so great.

Lastly, some HP storage controllers look for HP part numbers in the drive identifiers. I would definitely test before buying and deploying a bunch of drives. This may also void your support, and even if the drives work now, future firmware releases may stop supporting them.

Just out of curiosity, how much were you thinking of spending on SSDs?
 

pragmature

Junior Member
Dec 2, 2009
I think we all agree that the $/GB cost of SSDs is not really competitive. However, their $/IOPS ratio is worth considering. I must admit that, as desktop virtualization is new to me, I don’t really know what the read/write and random/sequential mix of disk operations is for the whole system.


In our case, we are virtualizing a university lab of 30 machines, with more coming if the pilot is a success. The disk for each machine is small (10-20GB would be enough), and VMware’s linked-clone technology further reduces that. Moreover, the user data can be moved to the slower disks on our old SAN, which cost a lot less per GB.
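The capacity impact of linked clones can be sketched from the numbers in this post. The shared replica size and the per-desktop delta size below are assumptions for illustration:

```python
# Rough storage sizing for the 30-desktop pilot: full clones versus linked
# clones (a shared base replica plus small per-desktop delta disks).
DESKTOPS = 30
FULL_CLONE_GB = 20   # worst case per-desktop size from the post
REPLICA_GB = 20      # one shared base image (assumed same size)
DELTA_GB = 2         # assumed average per-desktop delta disk

full_clones = DESKTOPS * FULL_CLONE_GB
linked_clones = REPLICA_GB + DESKTOPS * DELTA_GB

print(f"Full clones:   {full_clones} GB")    # 600 GB
print(f"Linked clones: {linked_clones} GB")  # 80 GB
```

Under these assumptions linked clones shrink the fast-tier footprint by almost an order of magnitude, which is what makes an SSD tier plausible at all.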
 

pragmature

Junior Member
Dec 2, 2009
Just to clarify: we are doing a VDI (Virtual Desktop Infrastructure) project; we are not virtualizing servers.
 

DukeN

Golden Member
Dec 12, 1999
I think for a smaller-scale deployment an inexpensive newer SAN with spindle drives would probably work better than an older SAN or standalone servers with SSDs (e.g. you can buy a base EVA4400 with 4GB cache and 8x 400GB FC drives for around $20K).

Desktop virtualization seems a bit nascent at this point, IMO; the last time I looked at it, a year or so back, it was pretty weak. From what I gather, disk I/O isn't much of a concern with desktop virtualization, as a lot of the performance issues come from video/CPU processing.
 

pragmature

Junior Member
Dec 2, 2009
Desktop virtualization is indeed an unproven technology at this point. However, competition between the major VDI players (VMware, Citrix, Microsoft…) is driving innovation at a fast pace; last year's tech has almost nothing in common with today's. In any case, we feel the moment has come to give it a try (at a small scale, at least). To be honest, one of the biggest obstacles at the moment is the insane Microsoft licensing for virtualized Windows desktops (the infamous VECD licensing).



As far as I know, disk access in VDI is driven by “storms”: when all the users boot or log in at once, for example. Something that WILL happen in a lab. VMware's documentation is not very explicit about the kind of I/O involved, however.
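The storm effect can be sketched with per-desktop IOPS estimates. Both figures below (peak boot I/O and steady-state I/O per desktop) are assumptions for illustration, since the documentation doesn't give them:

```python
# Very rough boot-storm sizing for the 30-desktop lab. If every desktop
# boots at once, peak IOPS ~= desktops * per-desktop boot IOPS.
# Both per-desktop figures are assumed, not from any vendor documentation.
DESKTOPS = 30
BOOT_IOPS_PER_DESKTOP = 50      # assumed peak during boot/login
STEADY_IOPS_PER_DESKTOP = 8     # assumed steady state

print(f"Boot storm:   ~{DESKTOPS * BOOT_IOPS_PER_DESKTOP} IOPS")    # ~1500
print(f"Steady state: ~{DESKTOPS * STEADY_IOPS_PER_DESKTOP} IOPS")  # ~240
```

The point is the ratio, not the absolute numbers: storage sized for the steady state can be swamped by a factor of five or more when the whole lab logs in together, which is where SSD random-read IOPS pay off.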
 

DukeN

Golden Member
Dec 12, 1999
Nice - please let me know how your test works out, and all the best. I would be curious to hear a real user's unbiased experience, as opposed to some vendor-sponsored testimonial.

If you have storage/SAN/server based questions let me know.

Cheers and good luck!
 

SolMiester

Diamond Member
Dec 19, 2004
We have a virtual infrastructure here that I built. Depending on what the servers are, vSphere has great IOPS output now, so SSDs aren't really necessary, with the exception of SQL or Exchange data files.

SAS in RAID 10 is ample, IMO, with RAID 5 for tier 2 and SSDs for the data files. Not sure about VDI, but these products have been developed to run just fine on SAS arrays!
 

Emulex

Diamond Member
Jan 28, 2001
1. LeftHand VSA is quite nice, but not cheap.
2. DL360/DL380, man - skip the b/s models. You need a ton of RAM for VDI; with 2 sockets, the sweet spot is 6 DIMMs per socket (1333MHz). If you go to 9 DIMMs per socket you drop to 800MHz, so 48GB = best performance, 72GB = best RAM price/$.
3. The P410i with 1GB flash-backed write cache is not going to be as fast as the SSDs due to cache and RAID overhead; it's also optimized for hard drives. The P800 is a generation older (PCI Express 1.0), so negatory.

4. Make sure everything you buy is on the HCL and supported by the software vendors (e.g. VMware View) and hardware vendors - otherwise you will get failures and no support.

You'd be surprised at how much I/O 6x 15K SAS drives in RAID-10 (3.5" LFF) in a DL380 G6 can provide. I've also got 8x 10K 2.5" SFF drives in an older DL380 and it runs about 10 servers fine, 3 of them with SQL. In hindsight, the 15K RPM drives are worth their weight in gold.
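A rough estimate of what that 6-drive RAID-10 set delivers, accounting for the RAID-10 write penalty. The ~175 IOPS per 15K drive and the 70/30 read/write mix are assumptions:

```python
# Rough effective IOPS of 6x 15K SAS in RAID-10. RAID-10 has a write
# penalty of 2 (each logical write lands on two mirrored disks).
# Per-drive IOPS and the read/write mix are assumed rule-of-thumb values.
DRIVES = 6
IOPS_PER_DRIVE = 175
READ_FRACTION, WRITE_FRACTION = 0.7, 0.3
WRITE_PENALTY = 2  # RAID-10: two physical writes per logical write

raw = DRIVES * IOPS_PER_DRIVE
effective = raw / (READ_FRACTION + WRITE_FRACTION * WRITE_PENALTY)
print(f"~{effective:.0f} effective IOPS")  # ~808
```

Around 800 mixed IOPS from a single 6-drive cage is indeed enough for a surprising number of small VMs, which supports the point above.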

Any storage virtualization appliance needs to be on its own box.