SSDs are still too new; give it 5-10 years for more cases of write limitations being hit to pile up, and the data will be there. Another issue with them being so new is that firmware and other bugs still cause them to fail randomly, which makes people forget that even without those issues they are destined for a guaranteed wear-out failure. For home users, SSDs will most likely not fail before they are retired because something better came out, but in an enterprise where hundreds of GB are written to each disk every day? I would not do it. If it's for cache or something, that's another story, as long as the SAN can continue to run even if the cache fails. Having to replace a couple of non-critical drives every couple of years is not a huge deal compared to having your whole SAN go read-only on you and your IT manager expecting you to "make it work NOW".
Also:
http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html
Keep in mind they are basing this on 10 GB/day, which is what a typical desktop PC at home will do when it's mostly idling all day with a few hours of gaming or web surfing. An enterprise-class setup will write that much in less than an hour just from shuffling data around, VMs writing to their virtual disks, and of course actual file transfers within the VMs, which all translate to writes to the SAN.
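To put rough numbers on that gap, here is a back-of-the-envelope sketch. All the figures are hypothetical (a 120 GB MLC drive rated for ~3000 program/erase cycles, write amplification of 1); the point is just how much the daily write rate moves the wear-out date:

```python
# Rough wear-out estimate. All numbers are hypothetical assumptions:
# a 120 GB MLC drive rated for ~3000 P/E cycles, write amplification of 1.
rated_cycles = 3000                          # hypothetical P/E cycles per cell
capacity_gb = 120                            # hypothetical drive size
endurance_gb = rated_cycles * capacity_gb    # total GB written before wear-out

def days_until_worn(daily_writes_gb):
    """Days until the rated endurance is exhausted at a given write rate."""
    return endurance_gb / daily_writes_gb

print(days_until_worn(10) / 365)    # desktop, ~10 GB/day: decades
print(days_until_worn(300) / 365)   # enterprise, ~300 GB/day: a few years
```

Same drive, and the desktop workload outlives its owner while the enterprise workload burns it out within a typical warranty period.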
While reading up I also noted that with SMART you can actually tell how much life is left in an SSD via the SSD_LIFE_LEFT attribute: it's a percentage, with 100 meaning full life and 0 meaning basically dead. So in an enterprise environment you could set up a monitoring solution to watch each disk, and when one gets close to 20 you start replacing drives one at a time and rebuilding the array. I would imagine an array rebuild with SSDs would be quick, so this could probably be done in a day or so. That seems feasible even for a business that only has one SAN.
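A minimal sketch of that monitoring idea, assuming you feed it the output of `smartctl -A /dev/sdX` from smartmontools. Note the attribute name varies by vendor (SSD_Life_Left, Wear_Leveling_Count, Media_Wearout_Indicator, etc.), so the list here is an assumption, not exhaustive:

```python
# Hypothetical sketch: scan `smartctl -A` output for a remaining-life
# attribute and flag drives near the replacement threshold.
LIFE_ATTRS = {"SSD_Life_Left", "Wear_Leveling_Count", "Media_Wearout_Indicator"}
THRESHOLD = 20  # percent, per the replace-at-20 idea above

def life_left(smartctl_output):
    """Return the normalized value of the first life-left attribute found."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in LIFE_ATTRS:
            return int(fields[3])  # column 4 is the normalized VALUE (0-100)
    return None  # attribute not reported by this drive

# Example line in smartctl's attribute-table format:
sample = "231 SSD_Life_Left  0x0013  094  094  010  Pre-fail  Always  -  94"
pct = life_left(sample)
if pct is not None and pct <= THRESHOLD:
    print("time to schedule a replacement")
```

Hook something like this into whatever monitoring system you already run (Nagios, Zabbix, etc.) and you get the early warning before the drive goes read-only on you.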