Tell you what - I'll load up an SSD on my VM'd Citrix server host and tell you how long it takes to smoke it.
I've seen my Citrix and Terminal Server boxes exceed 1,000 non-sequential random writes per second and sustain that pattern for hours, choking 15K SCSI drives. Users are simply running Internet Explorer and Outlook; combined with an AV program in the background, the small I/O writes are off the chart. Firefox is an even bigger disk hog.
Load up Perfmon and watch disk writes, or Process Explorer and note just how much disk I/O actually occurs on a Windows desktop while you are sitting doing nothing. This is why I don't use SSD on this type of architecture.
Random writes are not actually the problem as far as NAND lifetime is concerned. For the reasoning, see:
http://www.storagesearch.com/ssdmyths-endurance.html
Let's say you sustain 200MB/s of writes for 12 hours every single day (I doubt your usage is this high even in your enterprise use case). For something like a 160GB MLC drive rated at 5,000 program/erase cycles, it would last somewhere around 92 days. Not so good.
Let's use a more realistic number (as you quoted, 1000 4KB writes/second, 12 hours a day). Assuming a ridiculously horrible write amplification of 5x, that same drive would last just under 3 years. With a write amplification of 1x, it would last almost 13 years.
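The arithmetic above is easy to reproduce with a back-of-the-envelope calculator. This is a rough sketch under the same assumptions (perfect wear leveling, decimal MB/GB, 5,000-cycle MLC); the function name and parameters are my own, not from any tool:

```python
# Rough NAND lifetime estimate under perfect wear leveling.
# total writable data = capacity * P/E cycles
# data written per day = throughput * hours * write amplification

def lifetime_days(capacity_gb, pe_cycles, write_mb_per_s, hours_per_day,
                  write_amplification=1.0):
    total_writable_gb = capacity_gb * pe_cycles
    gb_per_day = (write_mb_per_s / 1000) * hours_per_day * 3600 * write_amplification
    return total_writable_gb / gb_per_day

# 200 MB/s sustained for 12 h/day on a 160 GB, 5,000-cycle drive:
print(int(lifetime_days(160, 5000, 200, 12)), "days")               # 92 days

# 1,000 x 4KB writes/s = 4 MB/s, 12 h/day, write amplification 5x vs 1x:
print(round(lifetime_days(160, 5000, 4, 12, 5) / 365, 1), "years")  # 2.5 years
print(round(lifetime_days(160, 5000, 4, 12) / 365, 1), "years")     # 12.7 years
```

The 200MB/s case and the 4KB-random-write cases differ by a factor of 50 in raw data volume, which is why the lifetimes diverge so sharply even before write amplification enters the picture.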
What I mean is that a) it's easy to fudge the statistics any way you want to support your particular case, and b) the enterprise suitability of MLC really depends on your usage scenario. Long sequential writes are far more stressful than even thousands of random writes, assuming a non-sucky controller implementation. I'd say an SSD is probably quite suitable for your application (but not, say, for a continuous-loop security video recorder).
PS Also note that unlike HDDs, which fail catastrophically (e.g. head crashes), NAND flash fails gracefully and essentially becomes ROM. In other words, with a correct controller implementation no data should be lost when the drive exceeds its NAND lifespan (assuming the controller doesn't fail first, which it almost inevitably will).
PPS I'll assume that you are making backups, so this essentially boils down to a cost comparison: over the expected lifespan of the device, which I/O solution provides the most IOPS per dollar? I think that if you go through the calculations, the SSD will win.
Why, for example, does the data recorder example stress a flash SSD more than say continuously writing to the same sector?
The answer is that the data recorder - by writing to successive sectors - makes the best use of the inbuilt block erase/write circuits and the buffer/cache that sits outside the flash memory but still inside the SSD. In fact, it's the only way you can get anywhere close to the headline-spec write throughput and write IOPS.
This is because writing to different address blocks makes you statistically more likely to hit blocks that are already erased and ready to be written.
If you write a program which keeps rewriting data to exactly the same sector address, each successive write is delayed until the current erase/write cycle for that part of the flash is complete. So it actually runs at the slowest possible write speed.
If you were patient enough to write a million or so times to the same logical sector, at some point the internal wear-leveling processor would have transparently reassigned it to a different physical address in flash. This is invisible to you: you think you're still writing to the same memory, but you're not - only the logical address stays the same. In fact you are spreading data throughout the whole physical flash disk, while operating at the slowest possible write speed.
It will take orders of magnitude longer to wear out the memory this way than in the rogue data recorder example. That's because writing to flash is not the same as writing to RAM, and writing to a flash SSD sector is not the same as writing to a block of dumb flash memory. There are many layers of virtualization between you and the raw memory in an SSD. If you write successively to the same location on a dumb flash memory chip, you can see a bad result quite quickly. But comparing dumb flash storage to intelligent flash SSDs is like comparing the hiss on a 33 RPM vinyl album to that on a CD. They are quite different products - even though they can both play the same music.