SSDs: RAID-0 or no?

CiPHER

Senior member
Mar 5, 2015
226
1
36
All SSDs already use the principle behind RAID0 internally - and some use the equivalent of RAID3 or RAID5. This is called interleaving, or interleaving with parity. All modern SSDs use it, and since the Intel X25-M they can also process random I/O in parallel thanks to AHCI/NCQ.

Generally, modern SSDs do 16-way interleaving, which you can think of as 16 NAND dies put in RAID0. That is why SSDs are faster than USB sticks, which generally have only one or two channels and are therefore much slower, especially at writing. Only the more expensive USB sticks have more channels.

Doing RAID0 on two such SSDs gives you two drives of 16-way interleaving each - effectively 32-way interleaving, and double the speed for anything except blocking random reads.
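
To make the interleaving idea concrete, here is a toy sketch of how striping maps a logical offset to a member (a NAND die inside one SSD, or a whole SSD at the host-RAID level). All numbers are made up for illustration:

```python
# Toy model of striping: map a byte offset to (member, offset-in-member).
# "Member" is a NAND die inside one SSD, or a whole SSD in host RAID0.

STRIPE_SIZE = 128 * 1024  # 128 KiB stripe unit, typical for Intel RST

def stripe_target(offset: int, n_ways: int, stripe: int = STRIPE_SIZE):
    """Return (member index, offset within that member) for a byte offset."""
    stripe_no = offset // stripe
    member = stripe_no % n_ways  # round-robin across members
    member_offset = (stripe_no // n_ways) * stripe + offset % stripe
    return member, member_offset

# One SSD with 16-way interleaving spreads consecutive stripes over 16 dies;
# two such SSDs in RAID0 behave like one 32-way array.
for off in range(0, 4 * STRIPE_SIZE, STRIPE_SIZE):
    print(off, stripe_target(off, n_ways=16), stripe_target(off, n_ways=32))
```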

Putting Samsung SSDs in RAID is potentially risky because of the special protection mechanism these drives use. I would recommend RAID0-ing only SSDs of other brands, such as Crucial.

Also, you should know that a single SSD is already so fast that an even faster SSD - whether thanks to RAID0 or not - would make little difference in practice, except for special usage like servers, or very heavy power users who need more than 1GB/s of throughput for their daily tasks.

In your case, maybe use the second Samsung 850 EVO in a different system instead. Or just use them as two separate SSDs, with one for gaming etc. It depends on your situation and needs.
 

darkfalz

Member
Jul 29, 2007
181
0
76
I don't think you'll notice any benefit at all from RAID-0 with SSDs outside of benchmarks. Get the biggest SSD you can afford instead. Any kind of soft RAID, including RAID-0, will also increase CPU/memory usage - not a huge amount, but often the spike in CPU/RAM use will offset any gain in latency/reads. Like full disk encryption, every read with a soft RAID has to be assembled/buffered in memory before being used by the application - whereas with a dedicated drive, in some instances data can be streamed directly to the target with DMA, or at least direct to memory with no further processing required.

I have 3-disk setups in two of my PCs. Disk 0 is the SSD, which has the OS installed on it, as well as programmes and my newest/favourite games. Disk 1 is the data drive, a regular large HDD which has documents, media etc. on it. Disk 2 is the games drive, where I have the rest of my games library. This works really well: for example, I can game from disk 2 while doing a LAN file copy from disk 1 across the network with no impact, or download to disk 1 with no added delays reading from disk 2.

What I am thinking of doing is re-partitioning the SSD and using 60 GB of it for Intel SRT caching of the other HDDs (or at least the games drive), but it's a lot of work for probably minimal gains.
 
Last edited:

CiPHER

Senior member
Mar 5, 2015
226
1
36
I don't think you'll notice any benefit at all from RAID-0 with SSDs outside of benchmarks
True, but that applies just as well to faster SSDs. It doesn't matter that RAID0 is the reason the storage is faster; a natively faster SSD will also provide very little real-world benefit that can be distinguished from a placebo effect.

All modern SSDs use the principle behind RAID0 to achieve their high speed. Without it, they would be slower than hard drives in some areas, like sequential write.
Any kind of soft RAID, including RAID-0, will also increase CPU/memory usage
Memory usage is virtually zero, and CPU usage is close to zero. RAID0 is as light as JBOD or a single disk.

Like full disk encryption, every read with a soft RAID has to be assembled/buffered in memory before being used by the application
You make it sound like this is an expensive operation taking lots of CPU cycles. It is not. Even RAID5 parity RAID, which is a lot more 'expensive', is very light for reads and only mildly intensive for writes. Any modern CPU can compute parity at more than 2GB/s - it is effectively memory-bound.

It is a myth that simple RAID levels use a lot of CPU cycles. RAID0, RAID1 and JBOD are essentially free. It is also a myth that RAID5 is 'heavy' because of the parity calculations. XOR is pretty much the easiest instruction for your processor; it is capped only by memory bandwidth.
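
If you want to sanity-check that claim yourself, here is a rough sketch using Python/numpy. It is illustrative only - a kernel RAID engine does this in optimized C/SIMD, so treat the result as a conservative lower bound:

```python
# Rough benchmark of RAID5-style XOR parity throughput.
# Even in Python/numpy this ends up memory-bound, not CPU-bound.
import time
import numpy as np

CHUNK = 64 * 1024 * 1024  # 64 MiB per data member (made-up size)
a = np.random.randint(0, 256, CHUNK, dtype=np.uint8)
b = np.random.randint(0, 256, CHUNK, dtype=np.uint8)
c = np.random.randint(0, 256, CHUNK, dtype=np.uint8)

start = time.perf_counter()
parity = a ^ b ^ c  # parity block for a 4-disk RAID5 stripe (3 data + 1 parity)
elapsed = time.perf_counter() - start

print(f"XOR over {3 * CHUNK / 2**20:.0f} MiB of data in {elapsed * 1000:.1f} ms "
      f"-> {3 * CHUNK / 2**30 / elapsed:.1f} GiB/s")
```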
 

darkfalz

Member
Jul 29, 2007
181
0
76
It is a myth that simple RAID levels use a lot of CPU cycles. RAID0, RAID1 and JBOD are essentially free. It is also a myth that RAID5 is 'heavy' because of the parity calculations. XOR is pretty much the easiest instruction for your processor; it is capped only by memory bandwidth.

It's not a myth. They use cycles and a small amount of memory for every operation, and it has an impact on performance. I have experimented with RAID-0 over several ICHR generations, and I can tell you that the latency when, for example, a game loads a new section is greater than with one disk: the section might load faster overall, but the seeks put a greater lag on the system.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
I ran 2x Samsung 830s in RAID0 for around 6 months and decided it didn't really achieve much.

Firstly, the system took a few seconds longer to boot than with a single drive because of the Intel Option ROM. Subjectively, the system was no faster. The sequentials were much higher in a benchmarking utility but not noticeable outside of that. Strangely, 4K random read was ~2MB/s slower in RAID0 than with a single drive.

In summary, I would rather have a single drive of the chosen capacity than two drives of half the capacity in RAID0. Higher-capacity SSDs also tend to be a bit faster than lower-capacity drives.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
It's not a myth. They use cycles and a small amount of memory for every operation, and it has an impact on performance. I have experimented with RAID-0 over several ICHR generations, and I can tell you that the latency when, for example, a game loads a new section is greater than with one disk: the section might load faster overall, but the seeks put a greater lag on the system.
It is a myth.

Your higher latency was most likely due to either too small a stripe size, or a misaligned partition causing non-aligned I/O to hit two stripe blocks instead of only one.

With interleaved I/O, you want each I/O to be handled by one disk. If your stripe size is too small (<128KiB), that will not happen. Generally, stripe sizes of 4MiB or larger are preferred for maximum random IOps, but the Intel Rapid Storage Technology (RST) RAID engine only supports stripes up to 128KiB. Windows also used misaligned partitions in the past (starting at sector 63 rather than on a stripe boundary), causing RAID arrays to lose a lot of their potential performance.
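
To see why alignment matters, here is a small sketch (illustrative numbers) that counts how many 4K reads straddle a stripe boundary for a properly aligned partition versus the old sector-63 start:

```python
# Count how many stripe units a single I/O touches on a RAID0 array.
# A 4 KiB read on a 4 KiB-aligned partition never crosses a 128 KiB
# stripe boundary; the old sector-63 partition start (31.5 KiB offset)
# shifts every I/O, so some of them hit two members instead of one.

STRIPE = 128 * 1024

def stripes_touched(offset: int, length: int, stripe: int = STRIPE) -> int:
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return last - first + 1

ALIGNED, MISALIGNED = 0, 63 * 512  # partition start offsets in bytes
for label, base in (("aligned", ALIGNED), ("sector 63", MISALIGNED)):
    crossings = sum(
        1 for i in range(1024)  # 1024 consecutive 4 KiB reads
        if stripes_touched(base + i * 4096, 4096) > 1
    )
    print(f"{label}: {crossings} of 1024 reads hit two stripe units")
```

On the misaligned layout, one in every 32 of these 4K reads has to wait on two members instead of one.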

I ran 2x Samsung 830s in RAID0 for around 6 months and decided it didn't really achieve much.
True. You can do a very simple test:

1. Reboot/power-cycle your computer so that the RAM filecache is reset.
2. Start your favourite game, measure the loading time in seconds with a stopwatch.
3. Shut down the game; wait for the system to go idle. Now start the game again.

The second time you start the game, it will be read from the VFS file cache instead of from the disk. This means you read at RAM speed instead of SSD speed.

Now the trick is: no SSD will ever be faster than RAM - by definition, because the SSD has to put its data in RAM (via DMA, Direct Memory Access) before the system can use it. If the data is already in RAM, you are basically simulating a condition where you have an infinitely fast SSD.

So if the second launch is only marginally faster than the first, the single SSD is already very close to the real bottleneck, and an insanely fast setup - like 10 SSDs in RAID0 on a proper controller pushing 8GB/s+ - would only marginally increase performance in reality.
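
If you don't want to sit there with a stopwatch, here is a scripted version of the same experiment. The path is just a placeholder - point it at a real multi-GB game or data file:

```python
# Time a cold read (from the SSD, right after a reboot) and a warm read
# (from the RAM file cache) of the same large file.
import time

PATH = "C:/Games/SomeGame/bigfile.dat"  # hypothetical example path

def timed_read(path: str) -> None:
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(8 * 1024 * 1024):  # read in 8 MiB chunks
            total += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"{total / 2**20:.0f} MiB in {elapsed:.2f}s "
          f"({total / 2**20 / elapsed:.0f} MiB/s)")

timed_read(PATH)  # cold: data comes from the SSD
timed_read(PATH)  # warm: data comes from the RAM file cache
```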

The thing is that, by using SSDs, you have shifted the bottleneck from the hard drive to the CPU. The CPU is often still bottlenecked by single-threaded workloads; you would need a 10GHz+ CPU to fully saturate an SSD in reality.

Firstly, the system took a few seconds longer to boot than with a single drive because of the Intel Option ROM.
True, and the faster I/O during boot will only marginally compensate for this.

Subjectively, the system was no faster. The sequentials were much higher in a benchmarking utility but not noticeable outside of that. Strangely, 4K random read was ~2MB/s slower in RAID0 than with a single drive.
There is one area that RAID0 will not improve: blocking random reads. In CrystalDiskMark and AS SSD this is the 4K read score. It is always between 20MB/s and 30MB/s and is fully bottlenecked by latency, because for this performance aspect only one I/O is in flight at a time.

As discussed before, a single SSD without any host-level RAID already utilizes RAID0 internally, using multiple channels and multiple planes per NAND die. On an 8-channel SSD controller with 2 planes per die, that gives 16-way interleaving, so performance can be up to 16 times higher at high queue depth. This is why 4K-64 or 4K-32 is so much faster than plain 4K: the SSD receives multiple commands at once thanks to AHCI/NCQ, and the internal RAID0 lets it execute the I/Os in parallel. One I/O takes x milliseconds, but 16 I/Os take roughly the same time if they can be executed in parallel. That is the power of RAID0!

But as said, RAID0 has one weakness: it cannot improve blocking random read performance. It can improve random writes and multi-queue random reads, though.
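
Here is a back-of-the-envelope model of that queue-depth scaling. The latency figure is assumed for illustration, not measured from any particular drive:

```python
# Why QD1 4K reads don't scale with RAID0, but high-QD reads do:
# parallelism is capped by both the queue depth and the number of ways.
LATENCY_MS = 0.15  # assumed ~150 us service time per 4K read
IO_SIZE_KB = 4

def throughput_mb_s(queue_depth: int, ways: int) -> float:
    in_flight = min(queue_depth, ways)  # can't use more ways than queued I/Os
    ios_per_sec = in_flight * 1000 / LATENCY_MS
    return ios_per_sec * IO_SIZE_KB / 1024

for qd in (1, 4, 16, 32):
    print(f"QD{qd:>2}: 16-way {throughput_mb_s(qd, 16):6.0f} MB/s | "
          f"32-way (two in RAID0) {throughput_mb_s(qd, 32):6.0f} MB/s")
```

At QD1 both come out around 26MB/s - the 20-30MB/s plateau mentioned above - while at QD32 the doubled stripe width doubles the throughput.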

In summary, I would rather have a single drive of the chosen capacity than two drives of half the capacity in RAID0. Higher-capacity SSDs also tend to be a bit faster than lower-capacity drives.
This is only still true for some SSDs. The Crucial M4 and MX100 below 512GB are still not SATA/600-capped, while the 128GB MX200 uses Dynamic Write Acceleration to achieve capped speeds. So the newer generation is not capped anymore.

With so many consumers focused on SSD performance - even though the performance differences between SSDs are insignificant - it amazes me that people obsess over a few MB/s of difference, while doubling or tripling performance with multiple SSDs in RAID0 is rarely considered. This performance increase is essentially free - two 256GB SSDs cost about the same as one 512GB - and you can always use them separately later for two low-end systems you hand off to family, etc.

I still have my battery of Intel X25-V 40GB SSDs and love them for their durability, reliability and performance.
 

darkfalz

Member
Jul 29, 2007
181
0
76
It is a myth.

Your higher latency was most likely due to either too small a stripe size, or a misaligned partition causing non-aligned I/O to hit two stripe blocks instead of only one.

I always went with 128K. YMMV, but I found the gains from increased throughput in some operations were minimal compared to the benchmarks, while the latency from having to seek two drives and assemble the data was, if anything, increased, along with micro CPU spikes.

I find a two-disk setup to be better, spreading OS reads and data reads across two disks - or even better, across three for different workloads. With multiple (spindle) disks there is also the advantage of doing data-intensive operations - say, extracting a large RAR set - between two drives, as both can operate at full throughput without too much seeking slowing things down.

It needs to be remembered that real-world applications and benchmarks are two different things. Loading a level in a game, for example, isn't a case of reading one long sequential file off a disk into memory and nothing else. It's reading different bits and pieces, organising them in memory, loading them into VRAM, performing decryption or decompression on them, etc. So your theoretical "doubling" of performance very quickly reduces to significantly less.

Another thing about RAID-0 with traditional HDDs is that both drives seek and read in tandem, which makes the system noticeably noisier during disk operations. This is why my RAID-5 system is in a case with a lot of sound dampening. The OP was talking about SSDs, which this obviously doesn't apply to.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Another myth is that RAID0 only improves sequential I/O. That is not true; it doubles random I/O performance just as it doubles sequential I/O. The only exception is blocking random reads.

The difference between 20MB/s 4K and 280MB/s 4K-64 random I/O performance is thanks to the internal interleaving/RAID0 used by the SSD.

If you use host RAID0, you can see the effect continue with two SSDs. Beware that Windows' storage backend is single-threaded and will cap high random IOps on some CPUs, especially when aggressive power-saving features like C1E are enabled.

I don't want to contradict your experience, though. But if you say 'latency from having to seek two drives', then your stripe size is too small, because for one I/O only one drive should be busy, not two. If two drives are busy for one I/O request, you have a sub-optimal RAID0 array with either misalignment or too small a stripe size.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Never.
Ever.
Use.
RAID0.

This has been a public service announcement.
Thanks for that useless input. There is nothing wrong with RAID0. You look at the risks involved, the potential gains and your backup policy, and decide whether it's worth it or not.
 

npaladin-2000

Senior member
May 11, 2012
450
3
76
Thanks for that useless input. There is nothing wrong with RAID0. You look at the risks involved, the potential gains and your backup policy, and decide whether it's worth it or not.

Not useless at all. Particularly when it comes to SSDs, RAID0 is useless outside of an iSCSI/Fibre Channel environment, since a single SSD can saturate a SATA controller and a normal in-chassis 8-drive SSD array can saturate a 12G SAS controller.

Regardless of that, the loss of a single drive in a RAID0 array takes down the entire array. If it's important enough to need to be that fast, it's important enough not to be able to tolerate downtime - particularly when it only costs you two extra drives to implement RAID5 with a hot spare.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
You're talking about a data center situation, not the home situation of an enthusiast who wants to see high digits and is perfectly fine with a consumer-grade solution. Doubling the risk of disk failure is not all that spectacular; it doesn't change the fact that you need to back up your valuable data anyway. Besides, good SSDs today don't fail as easily as in the early days.

I mean, I could use the same argument to say that you should not buy quad-core CPUs because those have 4x the chance of failing, or something to that effect. SSDs internally use multiple chips and use the principle behind RAID0 - interleaving - extensively. The rest of your system uses this kind of parallelization as well:

  • Dual channel memory
  • Multi-core processors
  • Any GPU
  • SLI videocards
  • PCI-express multiple lanes
  • SATA-Express
  • Network link aggregation (LACP)
  • DOCSIS channel bonding
Same principle. Doing things in parallel is just a proven and solid way to increase performance, at the cost of complexity.
 
Last edited:

npaladin-2000

Senior member
May 11, 2012
450
3
76
You're talking about a data center situation, not the home situation of an enthusiast who wants to see high digits and is perfectly fine with a consumer-grade solution. Doubling the risk of disk failure is not all that spectacular; it doesn't change the fact that you need to back up your valuable data anyway. Besides, good SSDs today don't fail as easily as in the early days.

I mean, I could use the same argument to say that you should not buy quad-core CPUs because those have 4x the chance of failing, or something to that effect. SSDs internally use multiple chips and use the principle behind RAID0 - interleaving - extensively. The rest of your system uses this kind of parallelization as well:

  • Dual channel memory
  • Multi-core processors
  • Any GPU
  • SLI videocards
  • PCI-express multiple lanes
  • SATA-Express
  • Network link aggregation (LACP)
  • DOCSIS channel bonding
Same principle. Doing things in parallel is just a proven and solid way to increase performance, at the cost of complexity.

Every last one of those you mentioned has methods in place to tolerate the failure of a single sub-component without completely killing the component - all except a RAID-0 array, anyway. And again, in a consumer setup you're likely going to saturate your SATA interface with a single SSD, so what's the point of risking the entire data array by going RAID-0? A single drive failure kills the entire storage volume. SSDs fail a lot less often than they used to, but they still fail at times.
 

Red Squirrel

No Lifer
May 24, 2003
69,677
13,316
126
www.betteroff.ca
Never.
Ever.
Use.
RAID0.

This has been a public service announcement.

This, unless it's purely temporary storage. Basically, treat it like a very large RAM pool.

Never use it for actual live data though, like a NAS. You can have all the backups in the world, but if a drive fails you lose immediate access to all your data and you still need to rebuild the file system from backups. Things like permissions can be a royal pain too, unless you find a way to back those up as well and restore a verbatim copy of the entire FS.

I would use either RAID10 or RAID5. I have not played with RAID on SSDs so I can't speak from experience, but I would imagine RAID5 or 6 is a viable option even for large arrays, since the rebuild times should be pretty fast.

That said, it would be fun to mess around with RAID0 just to play with it, but no way would I use it for anything beyond a quick lab-based test, or for data I really don't care about losing access to.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
RAID0 on SSDs is pointless: it gives you lower 4K speeds, which are the most important for OS snappiness, a longer boot time due to the initialization of the RAID controller on startup, and a higher risk of data loss!

http://www.overclock.net/t/1500862/1-single-ssd-vs-2-ssd-raid-0

Sean Webster said:
If you have a workload in which you need high sequential speeds, then in that case you are good to go with a RAID 0 array. Otherwise, RAID 0 with SSDs is pointless besides having a bit more e-peen. The numbers don't lie, but they can be deceiving...remember, these are synthetic tests, not real-world workloads. Just because you see 1GB/s bandwidth capability, it doesn't mean you necessarily will take advantage of it in actual use. You are right, "right choice is the one the individual decides is best for his/her application." However, many have little knowledge on the matter and cannot make educated decisions. Thus, they have to turn to those with knowledge on the subject for educated advice. Otherwise, you end up losing time and money over uneducated decisions.
==============================================
I've used Samsung 850 PROs in RAID 0 mode, and believe me, there is zero difference between RAID 0 and a single SSD when it comes to the performance a normal user would experience. RAID 0 doesn't improve the 4K random reads/writes that a normal or even power user relies on most of the time; it only helps in sequential reads/writes - say, if all you do is copy large video or data files (10GB++) from one partition to another all the time, which I doubt you do.

RAID 0 mode is great for showing off high benchmarks, but for a normal user it only brings a higher risk of failure, higher latency, and longer boot times due to the 2-3 seconds the RAID controller needs at startup. It's just not worth it: get the largest single SSD that you can afford and be done with it.

Excerpts from the Samsung SSD Whitepaper

"... Fast sequential speeds allow for quick file copies and smoother performance when working with large files, like videos. However, it is random performance, measured in Input/Output Operations Per Second (IOPS) that is, perhaps, the most important performance metric for SSDs.

A large portion of storage activity is made up of 4K random writes, a metric that measures how well a drive will perform when writing small chunks of random data (e.g. changing a small piece of a Word or text file and then saving the changes). Users spend a majority of their time not copying large files or installing applications, but multitasking (e.g. email, web-surfing, listening to music, etc.) and working with various work and media files - tasks influenced by IOPS. An SSD can offer up to a 200x improvement in IOPS over a traditional HDD (results may vary based on HDD model).

For this reason, Samsung put a heavy focus on random performance when designing its SSD lineup, offering users industry leading Random Performance of up to 100,000 IOPS. This is performance for the real world; performance you will notice and appreciate every day ..."

"... most consumer workloads will be similar to 4KB data at QD 1 ..."

"... While the majority of client PC workloads will not exceed a QD of 1, some usage scenarios may generate a QD of 2-6 or even up to 10 (in limited applications). Data center applications, on the other hand, may generate massive numbers of Input/Output (I/O) requests, creating a QD of 32, 64, or even 128 in some cases (depending on the number of access requests per second) ..."

"... For the vast majority of users, the most meaningful Iometer scores will be those of 4K random Read and Write performance at a Queue Depth of 1-32 ..."

"... The most common queue depths to test are a Queue Depth of 1, which is typical of light consumer workloads, and a Queue Depth of 32, which is representative of a heavy workload as might be seen on a on a server (e.g. web server, database server, etc.) ..."

"... peak speeds are not a good indication of everyday performance. Users are typically not installing applications or copying massive files on a regular basis. Many manufacturers like to brag about peak performance ..."
 

darkfalz

Member
Jul 29, 2007
181
0
76
Another myth is that RAID0 only improves sequential I/O. That is not true; it doubles random I/O performance just as it doubles sequential I/O. The only exception is blocking random reads.

This will only be true, even theoretically, 50% of the time - when the random data happens to be on the other drive. There is just as much chance it will be on the same drive, in which case performance will be no better than a single-drive setup.

There's a software layer sitting between the OS and the disks performing all this striping and determining which disk to read from. There's a real overhead here - sure, it may be 5% of one core, but it's there. I found it offset the gains in a lot of scenarios, certainly enough to make the increased risk of losing all data unappealing.
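
The statistical part of that argument is easy to simulate. This toy model uses a fixed service time and ignores controller overhead, so it is purely illustrative:

```python
# At queue depth 1, a two-disk RAID0 gains nothing on blocking random
# reads: each read waits on whichever single disk holds the stripe.
# With overlapping I/Os the disks work in parallel only when the reads
# happen to land on different members.
import random

random.seed(42)
SERVICE_MS = 0.15  # assumed time per 4K read

def total_time(n_ios: int, queue_depth: int, n_disks: int) -> float:
    disk_free_at = [0.0] * n_disks
    t = 0.0
    for batch_start in range(0, n_ios, queue_depth):
        batch_size = min(queue_depth, n_ios - batch_start)
        for _ in range(batch_size):  # queue each read on a random member
            d = random.randrange(n_disks)
            disk_free_at[d] = max(disk_free_at[d], t) + SERVICE_MS
        t = max(disk_free_at)  # blocking: wait until the whole batch is done
    return t

for qd in (1, 2, 8):
    one = total_time(10_000, qd, 1)
    two = total_time(10_000, qd, 2)
    print(f"QD{qd}: two-disk RAID0 speedup over one disk = {one / two:.2f}x")
```

At QD1 the speedup comes out at 1.0x, and even at higher depths this blocking model stays well short of the theoretical 2x.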
 

gus6464

Golden Member
Nov 10, 2005
1,848
32
91
We run RAID 10 with Intel DC S3700s in an MD1220 at work for use with Hyper-V. We have as many as 25 VMs on a volume, and throughput was choking when running RAID 5. After switching to RAID 10 the choking went away. SSDs definitely do benefit from RAID 0.
 

Fred B

Member
Sep 4, 2013
103
0
0
For a couple of years I ran SSD storage in RAID 0, and now a single drive; I cannot detect a real-world difference.
It seems to depend on the workload whether RAID 0 is beneficial; a normal consumer workload will not benefit. High-performance, I/O-demanding applications could benefit from RAID 0 - not to compete with a single SSD, but to compete with expensive high-performance PCIe I/O solutions.
 

npaladin-2000

Senior member
May 11, 2012
450
3
76
For a couple of years I ran SSD storage in RAID 0, and now a single drive; I cannot detect a real-world difference.
It seems to depend on the workload whether RAID 0 is beneficial; a normal consumer workload will not benefit. High-performance, I/O-demanding applications could benefit from RAID 0 - not to compete with a single SSD, but to compete with expensive high-performance PCIe I/O solutions.
But in that kind of enterprise setup you're not going to run RAID0 anyway, you're gonna run RAID10.
 

Fred B

Member
Sep 4, 2013
103
0
0
It is not an enterprise setup, just a normal consumer PC, with the SSD RAID 0 being used as a kind of fast storage/cache for files on the HDD.
 

PhIlLy ChEeSe

Senior member
Apr 1, 2013
962
0
0
RAID-0 is fine for a HOME user (note: back up important files) as you are more likely to experience a failure. If you have important files and do not want to risk data loss, do not use RAID-0.

Why does every topic produce pissing matches - I'm right, you're wrong? Again, if it is work-critical stuff, DO NOT USE RAID-0; if you seek performance and do not mind having to rebuild every so often, then RAID-0 is for you.
 

gus6464

Golden Member
Nov 10, 2005
1,848
32
91
RAID-0 is fine for a HOME user (note: back up important files) as you are more likely to experience a failure. If you have important files and do not want to risk data loss, do not use RAID-0.

Why does every topic produce pissing matches - I'm right, you're wrong? Again, if it is work-critical stuff, DO NOT USE RAID-0; if you seek performance and do not mind having to rebuild every so often, then RAID-0 is for you.

Or run RAID 10 and problem solved. Best of both worlds.