SSD in enterprise servers?

holden j caufield

Diamond Member
Dec 30, 1999
We haven't gone that route, wondering if anyone has made the move. Our virtual hosts run pretty happily on SAS and SATA, usually in RAID 5. Wondering what SSD reliability is like.
 

pakotlar

Senior member
Aug 22, 2003
We recently deployed X25-Es in RAID 1 via CacheCade 2.0 Pro on an LSI 9260CV-8i, and it has been very stable. It's harder to pinpoint the performance gain versus a traditional SSD deployment, but at least some improvement is there.
 

Coup27

Platinum Member
Jul 17, 2010
I think if you select a proven drive from a proven SSD vendor it will be stable and reliable, and as long as the workload doesn't hammer it with writes into oblivion it should last.

On the negative side, they won't fit into a hot-swap system, and I would assume they would invalidate a warranty claim should a repair engineer come out?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
No, there is most certainly proof that MLC consumer drives do not last long (1-6 months, average 2-3 months) under server write patterns (one box, 20 VMs, mixed load 24x7) - the 50nm > 34nm shrink especially. But SLC is cool, especially the old 50nm stuff; it will last 4-5 years. Unlike hard drives, which can last 10+ years under the same workload.
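The months-vs-years gap above is basically P/E-cycle arithmetic. A quick back-of-the-envelope sketch (the capacity, write rate, write amplification, and cycle ratings below are my own illustrative assumptions, not figures from this thread):

```python
# Rough SSD wear-out estimate: days until rated P/E cycles are exhausted.
# All input numbers here are illustrative assumptions, not measured values.
def lifetime_days(capacity_gb, pe_cycles, daily_writes_gb, write_amp):
    """Total rated write endurance divided by effective daily writes."""
    total_endurance_gb = capacity_gb * pe_cycles
    return total_endurance_gb / (daily_writes_gb * write_amp)

# Assume a 64GB drive, a VM host writing ~500GB/day, write amplification ~3.
# 34nm MLC is often rated ~5k cycles; older 50nm SLC ~100k cycles.
mlc_days = lifetime_days(64, 5_000, 500, 3)    # a few hundred days
slc_days = lifetime_days(64, 100_000, 500, 3)  # over a decade
```

With those assumptions the MLC drive burns out in well under a year while the SLC part lasts 10+ years, which lines up with the "months vs 4-5 years" experience in the post.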

Just be careful - when they fail, they tend to all fail at the same time, and this can cause any RAID to go belly up. Stagger the SSD deployment (SLC or MLC) - perhaps start with a small two-disk RAID 10, then expand to more as needed.

LSI provides very specific ESX/VM-guest software for optimizing hot data access between host and VM guest - even CacheCade users do not get this (IIRC Fusion-io is the only one selling this technology) - and it is far, far superior to plain CacheCade 2.0. Plus you really want about 400-500GB of RAIDed (!!) SSD to tier properly.

The good thing is $65 for 8GB RDIMMs - a typical server has 18 (now 24) slots - so it's cheaper to out-RAM the problem than to build an SSD tier. The VMware licensing tax is a small problem, but I suppose you could stick with 4.1 or wait for Hyper-V 3.0 in Server 2012 :)

I'm going to replace 60 4GB DIMMs with 8GB ones (16GB modules have some complications) and double down on RAM. The whole database is 50GB (20GB compressed), so I'll reserve 72GB of RAM for the VM. SQL buffers are compressed too.
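The out-RAM-the-problem math from the last two posts checks out; here it is spelled out (prices and sizes as quoted in the posts, nothing else assumed):

```python
# RAM math from the posts above, using the quoted figures.
dimm_price, dimm_gb, slots = 65, 8, 24   # $65 8GB RDIMMs, 24-slot server
max_ram_gb = dimm_gb * slots             # 192 GB fully populated
ram_cost = dimm_price * slots            # $1560 to fill every slot

# The swap described: 60 x 4GB DIMMs -> 60 x 8GB DIMMs doubles total RAM.
old_gb = 60 * 4                          # 240 GB before
new_gb = 60 * 8                          # 480 GB after

# Whole database is 50GB (20GB compressed); the 72GB VM reservation
# caches the entire working set in memory with headroom to spare.
db_gb, reserve_gb = 50, 72
```

Under $2k of DIMMs buys enough RAM to hold the whole database several times over, which is why it beats building a 400-500GB RAIDed SSD tier here.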

But I picked up a 9260-8i for $80, so I might give CacheCade a try if I get a chance.

DO NOT USE consumer MLC in servers - go google the reports they ran/are running - the fail rates were epic, some drives lasting only weeks. And of course OCZ LTT :) lolz