Pros and Cons of running two SSDs in RAID 0 vs. one larger SSD?

cbn

Lifer
Mar 27, 2009
12,968
221
106
What are Pros and Cons of running two SSDs in RAID 0 vs. one larger SSD?

Here are some Pros I thought of....

1. Two smaller 6 Gbps (or PCIe 3.0 x 4) SSDs would have a higher sequential read than one larger SSD.....and maybe even higher sequential write if the smaller SSDs were large enough.

......and some Cons:

1. Uses two SATA ports (or two PCIe 3.0 x 4 slots) compared to one.
2. Less reliability* (re: if one SSD goes down the entire volume is lost)
3. Less overall capacity for the money (and lower sequential write) if the small SSDs carry a higher price per GB.

*assuming the same NAND is used on the small SSDs as the large single SSD
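The reliability con can be put in rough numbers: with independent drives, a two-drive stripe is lost if *either* drive fails. A quick sketch, using an assumed (purely illustrative) per-drive annual failure rate:

```python
# Illustrative reliability math for a 2-drive RAID 0 stripe.
# afr is an assumed annual failure rate per drive, not a measured figure.
afr = 0.02  # hypothetical: 2% per drive per year

# The stripe survives only if BOTH drives survive the year.
p_array_fail = 1 - (1 - afr) ** 2

print(f"single drive: {afr:.1%} chance of failure per year")
print(f"2-drive RAID 0: {p_array_fail:.2%} chance of losing the whole volume")
```

So the stripe roughly doubles the odds of losing the volume, which is why the "same NAND" footnote matters for a fair comparison.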
 

ashetos

Senior member
Jul 23, 2013
254
14
76
Sustained write performance is usually about 10K IOPS even with the most expensive SSDs. What you can do is buy a bunch of cheap SSDs and get 10K times N IOPS for sustained writes with RAID-0. This can be very useful for certain workloads, e.g. a server that runs an SSD caching software solution.
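The scaling claim above is back-of-the-envelope arithmetic (the 10K IOPS figure is taken from the post; the linear scaling assumes no controller bottleneck):

```python
# Idealized sustained-write IOPS scaling for an N-drive RAID 0.
per_drive_iops = 10_000  # sustained write IOPS per SSD (figure from the post)

for n in (1, 2, 4):
    total = n * per_drive_iops  # linear scaling, assuming no controller cap
    print(f"{n} drive(s): ~{total:,} sustained write IOPS")
```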
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
I think you forgot: "Lack of TRIM support".

I tried RAID-0 of two 30GB OCZ Agility SSDs, in Windows 7 64-bit, on a P35 / ICH9R chipset mobo. The version of IRST available for that chipset, did not support passing TRIM under RAID-0. So, the SSDs degraded over a span of maybe a couple of weeks, until the benchmarks were WORSE than a single one of those drives, with TRIM operating nominally.
 
Feb 25, 2011
16,978
1,614
126
I think you forgot: "Lack of TRIM support".

I tried RAID-0 of two 30GB OCZ Agility SSDs, in Windows 7 64-bit, on a P35 / ICH9R chipset mobo. The version of IRST available for that chipset, did not support passing TRIM under RAID-0. So, the SSDs degraded over a span of maybe a couple of weeks, until the benchmarks were WORSE than a single one of those drives, with TRIM operating nominally.
TRIM support depends on the RAID controller.

With a softRAID chip as old as the one in the P35 (YIKES!) the main problem is that it can bottleneck total IOPS in RAID mode - a RAID of "good" SSDs is therefore slower in random read/write than a single drive of the same type. (They don't have the same problem with sequential access though, and there were definitely people "back in the day" who willingly made that tradeoff in order to get their 1GB/sec synthetic benchmarks.)
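The bottleneck described above can be modeled crudely: the old RAID logic caps total random IOPS, while sequential transfers stripe cleanly past it. All numbers below are illustrative assumptions, not measurements:

```python
# Crude model of an old softRAID chip capping random IOPS in RAID mode.
drive_random_iops = 80_000    # assumed: a "good" SATA SSD alone in AHCI mode
controller_iops_cap = 50_000  # hypothetical ceiling of the old RAID logic
drive_seq_mbps = 500          # assumed per-drive sequential read, MB/s

def array_random_iops(n):
    # The controller's overhead caps total random IOPS regardless of drive count.
    return min(n * drive_random_iops, controller_iops_cap)

def array_seq_mbps(n):
    # Sequential reads stripe across drives and mostly dodge the IOPS ceiling.
    return n * drive_seq_mbps

print(array_random_iops(2))  # capped below what one drive does in AHCI
print(array_seq_mbps(2))     # the ~1 GB/s synthetic sequential number
```

Under these assumptions the two-drive array posts worse random numbers than a single AHCI drive, exactly the tradeoff described above.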
 

BonzaiDuck

Lifer
Jun 30, 2004
16,101
1,719
126
I think you forgot: "Lack of TRIM support".

I tried RAID-0 of two 30GB OCZ Agility SSDs, in Windows 7 64-bit, on a P35 / ICH9R chipset mobo. The version of IRST available for that chipset, did not support passing TRIM under RAID-0. So, the SSDs degraded over a span of maybe a couple of weeks, until the benchmarks were WORSE than a single one of those drives, with TRIM operating nominally.

Yeah -- you figure that on past-gen boards, you're limited by the interdependence of the OS, the chipset, and the Intel controller. "No TRIM in RAID" may not be the case with Z97 or Z170, but it was a problem with Z68 without a later BIOS update, and I'm not even sure it was totally reliable even then. I just can't remember. There was a lot of flap about it here, though.

I drifted away from applying RAID mode to the entire onboard Intel controller because of that, and because I needed AHCI mode just to attempt using RAPID. I don't use RAPID anymore, but AHCI gives me more flexibility. If I wanted RAID enough, I'd use a separate x4 controller in a PCIe slot -- if I have one available.

In addition, if I wanted to use RAID from another controller, I'd first be inclined toward a more expensive hardware controller with cache memory. And still -- no TRIM, unless the 3rd-party controller guaranteed it. You don't need to spend top dollar on such cards for a workstation. There are other ways to "boost performance" without RAID; at least for myself, I'm concluding that I found one in different configurations of RAM and SSDs.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,101
1,719
126
Thanks for bringing up the issue of TRIM in RAID 0. According to the following article this wasn't fixed until the 7 series Chipset:

http://www.anandtech.com/show/6161/...ssd-arrays-on-7series-motherboards-we-test-it
I'm not here to sell or promote this, but I've made a point of experimenting with just about every caching program, or hardware/software combination, commonly offered for persistent storage, to suit my consumer-enthusiast needs and imagination. Or I've read reviews or taken a trial-period excursion here or there.

For that reason no less, I'd only "recommend" the following because it used to have a 90-day trial period, and it's still offering a 60-day trial:

Romex PrimoCache

I can point to maybe three other programs, but Primo is the one I'm using.

If you have spare RAM, you can use it to cache an SSD, an HDD, or combinations of SSDs and HDDs. If you throw in the possibility of creating a cache volume on an NVMe or SATA SSD, you can respectively cache SATA SSDs and HDDs, or simply HDDs. The persistent caches fill up slowly -- the program manages this. You could put OS volumes and caching volumes on the same SSD. Any number of combinations is possible, because the program is hardware- and storage-mode agnostic: multiple controllers, and combined RAID and AHCI modes across two or more controllers.

This way, you can have the speed, though tailored to your particular pattern of use. All you're doing is shifting data between tiers of storage based on prior use patterns. So you might play a game stored on an HDD, but suddenly with your mouse and keyboard it's always "blink" and "right there."
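The tiering idea described above can be sketched as a toy two-tier read cache: a small, fast RAM tier spilling into a larger SSD tier, both backed by a slow HDD. This is NOT PrimoCache's actual algorithm, just a simple LRU illustration of the concept:

```python
from collections import OrderedDict

class TieredReadCache:
    """Toy two-tier read cache: a small RAM tier over a larger SSD tier.
    Blocks evicted from RAM spill down to the SSD tier (LRU in both).
    Purely illustrative -- not how PrimoCache actually works."""

    def __init__(self, ram_blocks, ssd_blocks):
        self.ram = OrderedDict()
        self.ssd = OrderedDict()
        self.ram_cap, self.ssd_cap = ram_blocks, ssd_blocks

    def read(self, block, hdd_read):
        if block in self.ram:            # fastest path: RAM hit
            self.ram.move_to_end(block)
            return self.ram[block]
        if block in self.ssd:            # second tier: SSD hit
            data = self.ssd.pop(block)
        else:                            # miss: fetch from the slow HDD
            data = hdd_read(block)
        self._promote(block, data)
        return data

    def _promote(self, block, data):
        self.ram[block] = data
        if len(self.ram) > self.ram_cap:   # spill the LRU block down to SSD
            old, olddata = self.ram.popitem(last=False)
            self.ssd[old] = olddata
            if len(self.ssd) > self.ssd_cap:
                self.ssd.popitem(last=False)

# Usage: repeated reads of a hot block never touch the HDD again.
cache = TieredReadCache(ram_blocks=2, ssd_blocks=4)
hdd_reads = []
hdd = lambda b: (hdd_reads.append(b), f"data{b}")[1]
cache.read(1, hdd); cache.read(2, hdd); cache.read(1, hdd)
print(len(hdd_reads))  # only 2 HDD reads; the repeat of block 1 hit RAM
```

The "blink and it's right there" effect is just this: frequently used blocks migrate up to the fast tiers based on prior access.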

It requires that the system hardware be absolutely stable: CPU overclock or stock settings verified; the RAM configuration put through the wringer of tests, say 500% coverage of HCI Memtest64 at stock; solid graphics-overclock stability. Your disks should be thoroughly checked with the appropriate diagnostic, like WDDiag for WD disks or SeaTools for Seagate.

If you integrate HDDs, there will be less wear and tear on them. Additionally, if your usage and maintenance are in a range of certain patterns, you will accumulate data in those caches to a point where they are filled, and then find at some point after resetting them that the TBW on the caching NVMe or SATA SSD was very modest for the time deployed.

I've only now discovered that I can use as much as 5GB out of a 2x8GB (16GB) kit for caching, and with several programs running, including a game at 1080p and 144Hz, I still have between 5GB and 6GB of free RAM. So I could increase the cache size to 6GB and lose little.

I've never lost data because of a caching program. You could take a walk on the wild side and cache writes -- something you can also do with Primo. That would only increase the risk, but proven hardware stability mitigates it. You take your chances. You can switch it on and off with a mouse-click in a Primo dialog.
 

energee

Member
Jan 27, 2011
55
2
71
Thanks for bringing up the issue of TRIM in RAID 0. According to the following article this wasn't fixed until the 7 series Chipset:

http://www.anandtech.com/show/6161/...ssd-arrays-on-7series-motherboards-we-test-it

The Option ROM on Intel 6-series motherboards can be updated to support TRIM, but most users won't bother. Intel generally only officially releases fixes for consumer hardware in the form of a new SKU, and motherboard manufacturers don't seem to object.
 

energee

Member
Jan 27, 2011
55
2
71
What are Pros and Cons of running two SSDs in RAID 0 vs. one larger SSD?

Here are some Pros I thought of....

1. Two smaller 6 Gbps (or PCIe 3.0 x 4) SSDs would have a higher sequential read than one larger SSD.....and maybe even higher sequential write if the smaller SSDs were large enough.

......and some Cons:

1. Uses two SATA ports (or two PCIe 3.0 x 4 slots) compared to one.

I think you'd see the greatest benefit when using traditional SATA ports, because you'd no longer be choking on the limited bandwidth of a single port. Sequential throughput would see a big boost, especially on reads.

Most modern consumer motherboards have an abundance of SATA ports but limited number of PCIe lanes. I think that makes a compelling case for buying large when it comes to PCIe SSDs, since you can't just pile on more drives when you need more space.
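The port-bandwidth point in rough numbers (the speeds below are nominal assumptions: a 6 Gbps SATA port tops out around ~550 MB/s after encoding overhead, and a good SATA SSD already saturates it):

```python
# Why striping helps most on SATA: each drive gets its own port,
# so two drives roughly double sequential throughput.
SATA_PORT_MBPS = 550   # approximate practical ceiling of one 6 Gbps port
drive_seq_read = 540   # assumed: a good SATA SSD nearly saturates its port

single = min(drive_seq_read, SATA_PORT_MBPS)
striped = 2 * min(drive_seq_read, SATA_PORT_MBPS)  # one port per drive

print(f"one drive:  ~{single} MB/s")
print(f"RAID 0 x2:  ~{striped} MB/s")
```

A single PCIe 3.0 x4 SSD already has several times that ceiling to itself, which is part of why striping matters less there.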
 

BonzaiDuck

Lifer
Jun 30, 2004
16,101
1,719
126
I think you'd see the greatest benefit when using traditional SATA ports because you'd no longer be choking on the limited bandwidth of a single port. Sequential throughput would see a big boost, especially on reads.

Most modern consumer motherboards have an abundance of SATA ports but limited number of PCIe lanes. I think that makes a compelling case for buying large when it comes to PCIe SSDs, since you can't just pile on more drives when you need more space.

Well, that's a traditional view. I gave my own solution in a previous post, and here's a bench result with Magician on an ADATA SP550 in AHCI-mode with a moderate 2.5 GB RAM-cache and a 40GB NVMe M.2 caching volume:

[Image: Magician benchmark result for the ADATA SP550]


Now, obviously, this is just a benchmark. It pretty much conforms to similar results with Anvil, CrystalDiskMark, ATTO and others. The "eye of the needle" where sequential reads fall down is a benchtest whose test size exceeds the cache size. However, this all hinges on how big a single file you might load and how often you'd load a file of that size; for most work, and for most programs and games, a 2.5 GB cache is more than sufficient.
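That test-size-versus-cache-size effect can be put in arithmetic form. Assuming hypothetical tier speeds, the effective read speed is a time-weighted blend of cache hits and misses:

```python
# Effective sequential read speed when the benchmark's test file
# may exceed the cache. Both speeds are illustrative assumptions.
ram_cache_mbps = 20_000   # hypothetical RAM-tier read speed
ssd_mbps = 500            # hypothetical backing SATA SSD speed

def effective_mbps(test_gb, cache_gb):
    hit = min(cache_gb / test_gb, 1.0)  # fraction served from the cache
    # Total data over total time (time-weighted harmonic blend).
    t = hit * test_gb / ram_cache_mbps + (1 - hit) * test_gb / ssd_mbps
    return test_gb / t

print(round(effective_mbps(2.0, 2.5)))  # fits in cache: RAM-class speed
print(round(effective_mbps(8.0, 2.5)))  # 3x the cache: collapses toward SSD speed
```

The collapse is steep because misses are so much slower than hits; a benchmark just slightly larger than the cache already reads mostly at backing-store speed.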

The Primo program is also caching to my NVMe M.2 960 EVO, and it does it in a sort of "stealth" mode, when the system is idle. Unlike ISRT (which also requires RAID mode), the caches fill slowly. The assumption even I would make is that it would hammer the caching NVMe SSD and rack up enormous TBW. But it depends on deployment and usage: for an OS-boot disk, the persistent cache may fill up, but subsequent writes to it are much less frequent. I found another system with a regular SATA caching SSD had accumulated only about 6TB of writes over a two-year deployment.
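That 6TB-over-two-years figure works out to a very light daily write load:

```python
# Back-of-the-envelope: what 6 TB written over two years means per day.
tb_written = 6
days = 2 * 365
gb_per_day = tb_written * 1000 / days
print(f"~{gb_per_day:.1f} GB/day average")  # modest next to typical consumer endurance ratings
```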

You can cache a RAID0 array and an AHCI disk together (the different modes just require different controllers). And it's the best argument I can make for buying a single large SSD: you'd cache it to RAM alone unless you had an NVMe caching volume available, to similar effect. Most of the 21,000 MB/s score shown here comes from the RAM cache, with only some portion representing hits to the NVMe.

RAM is cheap. With 16GB you'll always have RAM to spare unless some task or workload hogs memory.

A couple days of operating this configuration, and everything I use in OS and software is just "right there."

Here's a benchie for a 5400 RPM 2.5" 2TB laptop spinner -- Seagate Barracuda:

[Image: Magician benchmark for the Barracuda with a 2048MB RAM cache]


If I choose to configure these disks for deferred writes -- equivalent to the "Maximum" setting of ISRT -- then the "Write" part of the equation changes also.

So someone would say, "Gee, though! You have to have two SATA SSDs, or 'the spare, small one,' to do this." In fact, you could get a large 1TB SATA SSD, put the OS volumes on it, add a ~50GB caching volume for an HDD, and cache all of that to RAM for both the SSD and the HDD, resulting in similar benchmarks.

You spend $30 on a lifetime license, free upgrades through all future versions. You configure it, tweak it and -- if you want -- forget about it. You can change the RAM-caching on the fly. If you change the size of the SSD caches, they'll be purged and the stealth-caching will begin all over again. But that still means you can adjust it for usage, depending on what you're doing.

My priority is to conserve SATA ports -- for things like hot-swap bays and eSATA. You could hook up two SSDs in RAID mode, but you've used two ports, your storage volume is still more limited, you can't easily break a RAID0 array -- and all the other misgivings. But if you had a RAID0, you could still cache it and throw AHCI-mode disks from a different controller into the configuration.

Put it another way: instead of spending money on duplicate SSDs, you buy a program that lets you add other disks, like spinners, into the mix.

It may be a stopgap anticipating cheaper NVMe or -- whatever -- but it works great. I've got plans to shell out for a 1TB Pro or EVO, and get myself a 27" 1440p gaming monitor.

I can wait. The only thing nagging me? My curiosity. And, of course, the other way to go is excess RAM -- a 32GB kit with larger caches allocated. The only thing about that is that hiberfil.sys will need to grow to at least 16GB -- though 16GB is already the default for a 16GB kit.