RAID 0 with two SATA 6Gbps drives: worth it?

moriz

Member
Mar 11, 2009
196
0
0
I have the same drives in RAID 0. I'm getting around 230/215 MB/s sequential transfers. However, RAID 0 doesn't do anything for seek times or random access, which is where SSDs excel, and which is what gives SSDs their snappiness. As such, RAID 0 mechanical drives won't give you the same user experience as even the slowest SSDs on the market.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
RAID0 does not decrease access time, but it does improve IOps. The relation between the two is poorly understood.

In short, RAID0 can double random reads and writes just like it doubles sequential throughput, but for random reads it requires a higher queue depth. For writes, a queue depth of one is already sufficient.

The simplest way to see this is to run CrystalDiskMark. Its results break down into:
- sequential read
- sequential write
- random read (qd=1)
- random read (qd=32)
- random write

With RAID0, everything except the random read at qd=1 will improve, potentially doubling. So the idea that RAID0 only helps sequential throughput is simply not true.

Also keep in mind that SSDs use RAID0-style interleaving internally: a 10-channel SSD behaves much the same as 10 single-channel SSDs in RAID0. That's also why the random read at qd=1 stays at ~25MB/s; that is the single-channel performance, and with a queue depth of one you cannot use more than one channel at a time.
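The point above can be sketched with a toy model (my own illustration, not a benchmark): if each random request lands on one stripe member, an n-way stripe can keep at most min(queue depth, n) members busy at once, so qd=1 never scales no matter how many drives or channels you add. The 25 MB/s per-channel figure is the one quoted above.

```python
# Toy model of why queue depth matters for striped random reads.
# Assumption (illustrative, not from the post): outstanding requests
# are spread across stripe members, so at queue depth q an n-way
# stripe can keep at most min(q, n) members busy simultaneously.

def striped_random_read_speedup(n_members: int, queue_depth: int) -> float:
    """Upper-bound speedup over a single member for random reads."""
    return min(queue_depth, n_members)

single_channel_mbps = 25  # per-channel figure quoted above

for qd in (1, 32):
    for n in (1, 2, 10):
        mbps = single_channel_mbps * striped_random_read_speedup(n, qd)
        print(f"qd={qd:2d}, {n:2d} members: up to {mbps} MB/s")
```

At qd=1 the speedup is 1 regardless of the member count, which is why both a RAID0 array and a multi-channel SSD sit at single-device speed for that test.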

I decided not to go with a SSD, they don't seem reliable or large enough yet.
You are missing the point. The point of an SSD is not to store everything on it, but only the data that is accessed randomly, like your operating system and installed applications. Virtually all other files, like the ones you download, won't get any faster on an SSD, and don't need to either. A movie won't play any better from an SSD, and for those kinds of files an SSD can even be slower than a 5400rpm HDD.

You use an SSD for random-access I/O like the OS and applications, and HDDs for sequential-access I/O like large files; so-called contiguous access. Your SSD only needs to be around 32GB to achieve the desired effect.
 
Last edited:

JoeMcJoe

Senior member
May 10, 2011
327
0
0
Thanks for this information.

So a good setup would be an SSD for the OS, and a pair of fast HDDs in RAID0 for the data drive?

How much space does Windows 7 take up?

I have an external RAID5 NAS that my data is backed up to, which is in turn backed up to another RAID1 NAS.
 
Last edited:

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
RAID0 does not decrease access time, but it does improve IOps.

Not that I think you're stating otherwise, but an SSD will kick the snot out of mechanical drives in RAID 0 in terms of IOps as well as latency.

I find RAID 0 fairly useless these days. The more spindles you add, the greater the chance of a hardware failure. Some people don't understand that, but it's true. An extreme example to illustrate:

Let's say 1 in 100 hard drives fail in a given period.
If you have 1 hard drive, there's only a 1% chance it fails.
If you have 100 hard drives, at least one failure is all but certain.

In RAID 0 any drive failure = data loss or expensive data recovery.
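For what it's worth, the math behind this example is simple if you assume (hypothetically) that each drive fails independently with the same probability over the period in question:

```python
# Probability that at least one of n independent drives fails,
# given a per-drive failure probability p over some period.
# With RAID 0, any single failure loses the whole array.

def array_failure_probability(p: float, n_drives: int) -> float:
    """P(at least one of n drives fails) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_drives

p = 0.01  # the 1-in-100 figure from the example above
print(f"1 drive:    {array_failure_probability(p, 1):.1%}")    # 1.0%
print(f"2 drives:   {array_failure_probability(p, 2):.2%}")    # 1.99%
print(f"100 drives: {array_failure_probability(p, 100):.1%}")  # 63.4%
```

So a two-drive RAID 0 roughly doubles the chance of losing the array versus a single disk, while 100 drives make a failure near-certain (about 63%, with one failure expected) rather than literally guaranteed.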

Storage is so cheap these days, and for a pure data-storage application, like housing your media files, a single drive is plenty fast.

If I was building a new computer right now I'd have an SSD around 250 GB in size for my OS and all my applications and a 1 TB mechanical drive for bulk local data. Then I'd do image backups of the SSD and file level backups of the 1 TB mechanical drive to either a USB hard drive or network share.
 

acole1

Golden Member
Sep 28, 2005
1,543
0
0
I have been running RAID0 with two WD Black 640's for 1.5 years now with no problems. I definitely enjoy the speed bump and I don't mind the risk of failure since I back up my data/don't keep important data on my desktop.

I got both drives for $93, so the benefits to me are:
1.2TB of storage for a good price (at the time)
Better speed than single hard drive
Only one drive letter to manage (I have used multiple drives in the past and always hated micromanaging where to put stuff)

It can't really compare to an SSD, but I'm too cheap to buy one right now and this gave me better performance for not much $$.

A big reason I chose the WD Black 640s was their reliability. 1TB drives have been known (or at least were known) to be less reliable than smaller drives, and the WD Black series is reputed to be fast and among the most reliable consumer-level drives. Seagate doesn't seem to have had such a great track record lately.

So overall, it's really up to you whether or not the speed of RAID0 is worth the risk compared to a single HDD solution. Or if the speed (and *theoretical* reliability) of the SSD is worth the $$.
 
Last edited:

Despoiler

Golden Member
Nov 10, 2007
1,967
772
136
The risk of RAID 0 failure is essentially the same as for any single drive. The failure rate applies across all manufactured drives, and owning one more of those drives adds a negligible amount of risk for you. The increased-risk myth has been carried forward from the time when RAID 0 was exotic. Google wrote an excellent white paper on hard drive failure. The number one cause of drive failure is age: at around 2 years the likelihood of failure increases significantly, modified by manufacturer quality. There is a reason drives carry either a 3- or 5-year warranty.

http://static.googleusercontent.com...abs.google.com/en/us/papers/disk_failures.pdf

I have had my Western Digital RE2 RAID 0 array for over 5 years without issue. It will be getting replaced with an SSD in the near future though. RAID 0 is a great, cost-effective way to get better speeds than a single mechanical HDD, but SSDs are generally 2-3x faster at everything.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I just don't see RAID 0 being beneficial. The choke point in a PC has always been hard drives, but that's mostly due to latency. You say the increased-risk "myth" has been carried forward from when RAID 0 was "exotic." I say the benefit of RAID 0 is a myth carried forward from a time when hard drives burst at 33 MB/sec and barely sustained 15. Now we're at a point where a single 7200 RPM drive's sequential throughput can nearly saturate a gigabit link.
 
Last edited:

Despoiler

Golden Member
Nov 10, 2007
1,967
772
136
I just don't see RAID 0 being beneficial. The choke point in a PC has always been hard drives, but that's mostly due to latency. You say the increased-risk "myth" has been carried forward from when RAID 0 was "exotic." I say the benefit of RAID 0 is a myth carried forward from a time when hard drives burst at 33 MB/sec and barely sustained 15. Now we're at a point where a single 7200 RPM drive's sequential throughput can nearly saturate a gigabit link.

Umm yah....
 


sub.mesa

Senior member
Feb 16, 2010
611
0
0
Jeff7181, please keep in mind that SSDs achieve their high IOps because of the use of 'RAID0' or interleaving. This is a great technology that applies to many domains: multi-core processor chips, PCI-express lanes, dual/quad channel memory, etc.

You're right of course that an SSD is unbeatable in terms of latency, but that doesn't nullify the benefits of using RAID0 to achieve higher speeds.

Also, the added risk of data loss is pretty much irrelevant. Whether you have a 1 in 100 or a 2 in 100 chance of losing data is no real consideration: in both cases you need a proper backup to protect against data loss. In fact, the added risk might convince people they need a proper backup, so they end up with a setup that is safer than a bare disk with no backup at all.

When talking about RAID0 for SSDs, you get near-perfect performance scaling while backups stay trivial. A single $70 2TB drive could back up 10-20 SSDs, so the disparity in capacity between SSDs and HDDs makes backing up SSDs a non-issue. Another argument is that SSDs generally store data you can afford to lose, such as the operating system and installed applications. In reality, that's only true if you keep a tight separation between system data and user data, for example by moving the My Documents folder to your hard drive, which you do back up regularly.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
I think folks should write their own driver to do SRT - why not? Software RAID is just a driver, a ramdisk is a driver, and SRT is a persistent ramdisk like the SuperSpeed cache(tm) product. We just need low-level drivers to bring the technology ZFS has had to Windows. Someone here has to know how to code drivers, or find driver examples, and figure out how to sign them :) I'd love to have ZFS for Windows with LVM and tiered SSD - has that been done, or is it being worked on at all? What about running ZFS (Solaris) in a VM, exposing the hardware to it (SLAT/VT-d), and then exporting the tiered storage back to the host over AoE/iSCSI? IIRC an iSCSI initiator comes with every copy of the Windows client, and an iSCSI target comes with every OpenSolaris/ZFS install. Not sure if you'd need ESXi or Workstation 8 to use VT-d to give a VM direct control of the storage controller?

tl;dr: make cool SRT drivers of our own, or build a ZFS tiered-SSD VSA in a VM on the same PC.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
SRT is basically a proprietary software feature that Intel designed because the Windows operating system lacks it. Sure, you could write an alternative driver, but while you're at it you might as well replace the outdated NTFS filesystem Windows still uses. It's no secret that the I/O storage backend in Windows is terribly outdated compared to UNIX.

It's interesting that you propose a VM solution, since this would let Windows work on ZFS and benefit from the L2ARC feature, which is essentially a superior form of the 'SRT' that Intel implemented. Superior because ZFS can detect corruption on the L2ARC cache device, which SRT cannot: if SRT reads from your SSD instead of the HDD, it has no way to tell whether that data differs from the copy stored on the HDD.
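The difference boils down to whether the cache stores a checksum alongside the data. A minimal sketch, using hypothetical toy classes of my own (not any real ZFS or SRT API), of how a checksummed cache catches a corrupted cache device where a plain block cache would silently return bad data:

```python
# Toy illustration: a read cache that stores a SHA-256 digest with
# each entry, so corruption on the cache device is detected and the
# read can fall back to the authoritative copy on the HDD.
# Hypothetical classes for illustration only.
import hashlib

class ChecksummedCache:
    def __init__(self):
        self._store = {}  # key -> (data, sha256 digest)

    def put(self, key, data: bytes):
        self._store[key] = (data, hashlib.sha256(data).digest())

    def get(self, key):
        """Return cached data, or None on a miss OR a checksum mismatch."""
        entry = self._store.get(key)
        if entry is None:
            return None
        data, digest = entry
        if hashlib.sha256(data).digest() != digest:
            return None  # corruption detected: caller re-reads from HDD
        return data

cache = ChecksummedCache()
cache.put("block42", b"hello")
# Simulate bit rot on the cache device:
cache._store["block42"] = (b"hellx", cache._store["block42"][1])
assert cache.get("block42") is None  # bad data caught, not returned
```

A cache without the digest check would hand back `b"hellx"` as if it were valid, which is the failure mode described above for a plain SSD cache.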

What you propose is basically what I'm running: several Linux workstations connected over iSCSI to my ZFS NAS. So while Linux does not support SRT/L2ARC natively, I'm using it indirectly. Such a setup would work for Windows too, except that Windows cannot boot directly from iSCSI disks, I believe, at least not without proprietary third-party software. But you could use it for data storage, such as keeping games on an iSCSI disk.