SSDs in enterprise?


sub.mesa

Senior member
Feb 16, 2010
Not just softraid, either.

1,000,000 IOPS can easily be achieved with 24 SandForce SSDs on a decent SAS RAID controller.
The Areca ARC1280 was limited to 70,000 IOPS; that's achievable with one Intel SSD doing 512-byte random reads, or two Intel SSDs at 4K random reads. So exceeding 1 million IOPS with hardware RAID might be very difficult; more likely you would get that kind of performance with software RAID instead, which can scale IOPS much higher than hardware RAID.

With 5 Intel X25-V 40GB drives in RAID0 under BSD I got up to 1234MB/s of random reads (512 bytes - 128KiB), which is about 20,000 IOPS at a 64KiB average. I should have tested 4K and 512b as well, but I think this would already bottleneck most hardware RAID, including some very expensive cards.
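For anyone checking the math, IOPS is just throughput divided by the average transfer size. A quick sketch using the numbers above (Python purely for illustration):

```python
def iops(throughput_mb_s: float, block_bytes: int) -> float:
    """IOPS = bytes transferred per second / bytes per I/O."""
    return throughput_mb_s * 1_000_000 / block_bytes

# ~1234 MB/s at a 64 KiB average request size is roughly 19,000 IOPS,
# in line with the "about 20,000 IOPS" figure above.
print(round(iops(1234, 64 * 1024)))      # ~18829

# Conversely, the ARC1280's ~70,000 IOPS ceiling at 512-byte requests is
# only about 36 MB/s of bandwidth: the bottleneck is command rate, not throughput.
print(70_000 * 512 / 1_000_000, "MB/s")  # 35.84 MB/s
```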

I haven't played with newer Intel IOP offerings; I still have my IOP331 @ 500MHz Areca controller, though.
 

alaricljs

Golden Member
May 11, 2005
PCI Express cards are not enterprise. Neither is software mirroring.

800GB MLC drives are coming in February, plus new controllers to back them. Enterprise power dispersement, enterprise cooling, enterprise hot-swap.

PCI Express not enterprise?

So the fact that the Sun/Oracle M8000 can control 112 PCIe slots (some of those in expansion chassis) means nothing? And the fact that every single server platform I'm familiar with comes with PCIe? That's a lot of slots to not be considered an enterprise server.

The ONLY reason PCI-X is still in the enterprise is legacy. It works and plenty of shops like the idea of upgrading their servers and keeping the expensive PCI-X cards they already have. There are also plenty of manufacturers that aren't developing PCIe replacements for their cards since they don't have to in order to keep their clients.
 

jimhsu

Senior member
Mar 22, 2009
Devil's advocate here.

The problem with SSDs in the enterprise is not performance, or cost, or any of those things. It's the other things that consumers care somewhat less about -- variable IOPS under variable workloads, known AFRs and MTBFs, etc.
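For reference, AFR and MTBF are two views of the same failure-rate assumption; a minimal sketch of the usual exponential-model conversion (the 1,200,000-hour MTBF below is just an example figure, not any vendor's spec):

```python
import math

def mtbf_to_afr(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF, assuming a constant
    (exponential) failure rate and 24/7 operation: AFR = 1 - e^(-8760/MTBF)."""
    return 1.0 - math.exp(-8760.0 / mtbf_hours)

# Example figure only: a 1,200,000-hour MTBF implies roughly a 0.73% AFR.
print(f"{mtbf_to_afr(1_200_000) * 100:.2f}% AFR")
```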

A while back, Intel stated:

When following a benchmark test or IOMeter workload that has put the drive into this state which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt to the new workload, and therefore provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally cause extremely long latencies. The old HDD concept of defragmentation applies but in new ways. Standard windows defragmentation tools will not work.

Define "uncharacteristic of client usage". What client are you talking about?

Fusion-io further commented:

Interestingly enough, it's not just because our garbage collection is more efficient than others that we get so much better write performance. It's actually another dirty secret in the Flash SSD world - poor performance with mixed workloads.

Other SSDs get a small fraction of their read or write performance when doing a mix of reads and writes. You'd think that if one gets X IOPS on reads and Y IOPS on writes, one should get 50% * X + 50% * Y under a 50/50 R/W mix. In reality they typically get less than a quarter of that.

This is fundamentally because NAND is half-duplex and writes take much longer than reads. This makes it a bit tricky to interleave reads and writes. The ioDrive, on the other hand, mixes reads and writes with great efficiency.
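To put rough numbers on that claim -- these are purely illustrative figures, not measurements from any drive -- here is the naive weighted-average expectation next to the "less than a quarter of that" behavior described above:

```python
def naive_mixed_iops(read_iops: float, write_iops: float, read_frac: float = 0.5) -> float:
    """The intuitive (and, per Fusion-io, usually wrong) expectation:
    a simple weighted average of the pure-read and pure-write rates."""
    return read_frac * read_iops + (1.0 - read_frac) * write_iops

# Illustrative numbers, not from any datasheet:
reads, writes = 35_000, 8_000
expected = naive_mixed_iops(reads, writes)   # 21,500 "expected" mixed IOPS
observed = expected * 0.25                   # "less than a quarter of that"
print(expected, observed)                    # 21500.0 5375.0
```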

The thing is that enterprises are very insistent on guaranteed performance. There were reports of some Fusion-io SSDs degrading to less than 10% of "new" performance when subjected to "certain write patterns". Enterprises are scared of these things, so they have to plan for the worst case -- i.e. buying 10x as many of those SSDs. In contrast, hard drives have been around for a while and deliver slow but consistent performance.
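That worst-case planning is easy to express as arithmetic. A small sketch with placeholder figures (the 20k rated IOPS and the 10% degraded fraction are assumptions for illustration, not vendor numbers):

```python
import math

def drives_needed(target_iops: float, rated_iops: float, worst_case_fraction: float) -> int:
    """Size the array for the degraded (worst-case) per-drive performance."""
    return math.ceil(target_iops / (rated_iops * worst_case_fraction))

# Placeholder figures: a 100k IOPS target with 20k-IOPS drives needs 5 drives
# on paper, but 50 if you have to assume they can degrade to 10% of rated speed.
print(drives_needed(100_000, 20_000, 1.0))   # 5
print(drives_needed(100_000, 20_000, 0.1))   # 50
```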

I won't cover reliability because 1) there's not that much data on it right now and 2) there's too much speculation. All I'll say is that it's not just URE rates and write cycles.
 

Cable God

Diamond Member
Jun 25, 2000
So you think PCI-E SSDs are not enterprise... Tell that to the hundreds or thousands of companies already running them in production. Quite a few companies are running the Fusion-io and Virident products in software RAID with zero reliability issues and extremely high performance.

I don't worry about SSDs wearing out or dying, any more than I do with 15K SAS disks. Why not? I have a 4-hour replacement guarantee and I run them in RAID10. If you run business-critical data on a single storage device that is not backed up and not in RAID of some sort, that's not the SSD's fault.

Speaking of the variable performance issue: everywhere we've used them, as well as at our clients (running our applications), performance has drastically improved across the board in every case. Benchmark numbers mean diddly squat in the real world and to end users or business users.
 

Emulex

Diamond Member
Jan 28, 2001
As long as you can hot-swap the Fusion-io card without pulling the server out of a full rack, that's cool. Downtime is not.
 

sub.mesa

Senior member
Feb 16, 2010
I thought PCI Express had native support for hot plug? Not sure if devices/drivers and operating systems support that, but PCI Express should.
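For what it's worth, PCIe slots advertise hot-plug support in their Slot Capabilities register. A rough way to check on a Linux box, assuming lspci (pciutils) is installed and you run it as root so the capability dumps aren't hidden:

```python
import subprocess

# List PCI devices whose PCIe Slot Capabilities advertise hot-plug ("HotPlug+").
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():   # device header, e.g. "03:00.0 PCI bridge: ..."
        device = line.split(" ", 1)[0]
    elif "SltCap:" in line and device:   # only PCIe bridges/root ports carry SltCap
        print("HotPlug+" if "HotPlug+" in line else "HotPlug-", device)
```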

I've also heard some are working on a modified PCIe standard to serve as a standardized NAND/SSD interface.