No, there is most certainly proof that MLC consumer drives do not last long (1-6 months, average 2-3 months) under server write patterns (one box, 20 VMs, mixed load 24x7) - especially after the 50nm > 34nm shrink. But SLC is cool, especially the ole 50nm; it will last 4-5 years. Unlike hard drives, which can last 10+ years under the same workload.
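Rough napkin math on why those lifetimes diverge so much - every number below (P/E ratings, daily writes, write amplification) is an assumed ballpark for illustration, not anything measured in this thread:

```python
# Napkin-math NAND wear estimate. All figures are assumed ballparks for
# the era (P/E ratings, host write rate, write amplification), not
# measurements from this thread.

def lifetime_days(capacity_gb, pe_cycles, host_writes_gb_day, write_amp):
    """Days until the rated P/E budget is exhausted by writes."""
    endurance_budget_gb = capacity_gb * pe_cycles
    nand_writes_gb_day = host_writes_gb_day * write_amp
    return endurance_budget_gb / nand_writes_gb_day

HOST_WRITES_GB_DAY = 1000   # guess for 20 VMs, mixed load, 24x7 (~12 MB/s average)
WRITE_AMP = 5.0             # consumer controllers behind RAID, no TRIM

for name, cap_gb, cycles in [
    ("34nm MLC consumer", 120,   5_000),
    ("50nm MLC consumer", 120,  10_000),
    ("50nm SLC",           64, 100_000),
]:
    d = lifetime_days(cap_gb, cycles, HOST_WRITES_GB_DAY, WRITE_AMP)
    print(f"{name:18s} ~{d:6.0f} days (~{d / 365:.1f} years)")
```

With those assumptions the 34nm MLC drive wears out in about four months and the 50nm SLC drive runs three-plus years, which is the same order-of-magnitude gap described above.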
Just be careful - when they fail, they all fail at the same time, which can cause any RAID to go belly up. Stagger the SSD deployment (SLC or MLC) - perhaps start with a small two-disk mirror, then expand to RAID 10 with more drives as needed.
LSI provides very specific ESX/VM-guest software for optimizing hot-data access between the host and the VM guest - even CacheCade users do not get this (IIRC Fusion-io is the only one selling this technology) - and it is far, far superior to plain CacheCade 2.0. Plus, you really want about 400-500 GB of RAIDed (!!) SSD to tier properly.
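For what it's worth, here is where a 400-500 GB figure can fall out of napkin math - the 20-VM count is from above, but the per-VM storage and the hot-set fraction are pure assumptions (a common rule of thumb is that maybe 5-10% of provisioned data is hot at any time):

```python
# Rough cache-tier sizing sketch. 20 VMs is from the post above; the
# per-VM provisioned storage and hot-set fraction are assumptions used
# only to show where a 400-500 GB tier size can come from.

VM_COUNT         = 20
GB_PER_VM        = 250     # assumed average provisioned storage per VM
HOT_SET_FRACTION = 0.08    # assumed ~8% of the data is hot at any time

provisioned_gb = VM_COUNT * GB_PER_VM
hot_gb = provisioned_gb * HOT_SET_FRACTION
print(f"Provisioned: {provisioned_gb} GB -> estimated hot set: ~{hot_gb:.0f} GB")
```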
The good thing is 8 GB RDIMMs are $65, and a typical server has 18 (now 24) slots, so it's cheaper to out-RAM the problem than to build an SSD tier. The VMware tax is a little problem, but I suppose you could stick with 4.1 or wait for Hyper-V 3.0 in Server 2012.
I'm going to replace 60 4 GB DIMMs with 8 GB ones (16 GB has some complications) and double down on RAM - whole database = 50 GB (20 GB compressed), so reserve 72 GB of RAM for the VM. The SQL buffers are compressed too.
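Quick arithmetic on the out-RAM option - the DIMM price, slot count, DIMM swap and DB sizes are from the posts above; the SSD $/GB is an assumed 2012-ish ballpark for comparison, not a quote:

```python
# "Out-RAM it" arithmetic. RDIMM price, slot count, DIMM swap and DB
# sizes come from the posts above; the SSD $/GB is an assumed ballpark
# for comparison only.

RDIMM_PRICE_USD = 65
RDIMM_GB        = 8
SLOTS           = 24

ram_gb, ram_usd = SLOTS * RDIMM_GB, SLOTS * RDIMM_PRICE_USD
print(f"Fill the slots: {ram_gb} GB of RAM for ${ram_usd}")   # 192 GB for $1560

# Planned DIMM swap: 60 x 4 GB -> 60 x 8 GB
print(f"DIMM swap: {60 * 4} GB -> {60 * 8} GB total")

# Working set vs. VM reservation
db_gb, db_compressed_gb, vm_reservation_gb = 50, 20, 72
print(f"DB is {db_gb} GB ({db_compressed_gb} GB compressed); "
      f"a {vm_reservation_gb} GB reservation leaves plenty of headroom")

# Mirrored ~450 GB SSD tier at an assumed ~$2/GB
ssd_tier_usd = 450 * 2 * 2
print(f"Mirrored ~450 GB SSD tier @ ~$2/GB: roughly ${ssd_tier_usd}")
```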
But I picked up a 9260-8i for $80, so I might give CacheCade a try if I get a chance.
DO NOT USE SLC in servers - go google the reports they ran / are running - the failure rates were epic, some drives lasting only weeks - and of course OCZ LTT.

lolz