
Is RAID5 a mistake?

3nd

Junior Member
My plan for the new system I'm building has been to run a 128GB SSD as my system drive and use 3 SATA drives in RAID5 off the board as a data drive.

As I've been reading about it more it sounds like this might be a mistake. I understand redundancy and I'm not looking for anyone to tell me that this doesn't solve backup issues, etc. But will the 3 SATA drives in RAID5 actually give me lower performance than a single SATA drive?

In what order would the following setups rank for average transfer speed, fastest to slowest? I always assumed 2, 5, 4, 1, 3 - but is that incorrect?

1. Single SATA Hard Drive
2. 2 SATA HDDs Striped RAID 0
3. 2 SATA HDDs Mirrored RAID 1
4. 3 SATA HDDs RAID 5
5. 4 SATA HDDs RAID 5

Thanks!
 
* What RAID controller do you plan on running the RAID 5 array with?
* What model are the drives you plan on RAIDing together?
* Do you have an option of RAID 6 with the controller to be used?
 
The easiest way to determine this for yourself is to test the performance of your RAID 5 setup, then clear the array and make a new one of the other type you wish to try. After making the array, repopulate it from your backup, and then test the performance.

There are many factors that affect the speed of an array, and testing your own system is the only way to be sure how it will perform.
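If you want a quick-and-dirty number before reaching for a proper benchmark tool, a minimal sequential-write timing sketch in Python might look like this (the file path is whatever you pass in; the fsync keeps the OS page cache from hiding the write, though a controller's write cache still can, so treat the result as a rough estimate only):

```python
import os
import time

def seq_write_mb_per_s(path, size_mb=256, block_kb=1024):
    """Time a sequential write of size_mb megabytes and return MB/s.

    Rough estimate only: fsync at the end stops the OS page cache
    from hiding the write, but a RAID controller's own cache can
    still inflate the number.
    """
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed
```

Run it against a file on the array you want to measure; for serious testing, a dedicated tool that can also do random I/O will tell you much more.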

Don't try to migrate data into an array as you build it; it is much faster to simply build an empty array, then copy your data onto it.
 
RAID 5 on a good hardware controller can be fast, but it will never be as fast for writes as a single drive or a RAID 0 array; reads will be faster, though. Off the mobo controller it will be slower at everything, for sure.
 
But will the 3 SATA drives in RAID5 actually give me lower performance than a single SATA drive?

Read speed is about the same (assuming a controller decent enough to keep up). Write speed is reduced, typically to around 66% of a single disk's speed; good controllers have a write cache that hides this.

The key thing is the controller. Performance also changes with the number of spindles: a 3-disk RAID 5 will typically be slower than a 7-disk RAID 5 when backed by a good controller.
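The rough textbook model behind numbers like these can be sketched as follows (idealized estimates only; the drive figures are illustrative assumptions, and real controllers with write caches behave quite differently):

```python
def raid5_theoretical(n_drives, drive_seq_mbps, drive_iops):
    """Textbook RAID5 estimates for an n-drive array.

    Sequential full-stripe writes spread data over n-1 drives plus
    parity, so best-case streaming write is (n-1) * single-drive speed.
    Small random writes suffer the read-modify-write penalty:
    read old data + read old parity + write new data + write new parity
    = 4 I/Os per logical write, so random-write IOPS is roughly
    n * drive_iops / 4.
    """
    return {
        "seq_read": n_drives * drive_seq_mbps,         # all spindles stream
        "seq_write": (n_drives - 1) * drive_seq_mbps,  # full-stripe writes
        "rand_write_iops": n_drives * drive_iops / 4,  # RMW penalty
    }

# e.g. three drives at an assumed ~100 MB/s and ~75 IOPS each:
est = raid5_theoretical(3, 100, 75)
```

The random-write line is where the "4 I/Os per write" penalty lives; it's why RAID5 feels fine for streaming but sluggish for small scattered writes unless the controller's cache can batch them into full stripes.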
 
Raid 5 on a good hardware controller can be fast but will never be as fast for writes as a single drive

?!

A 4-drive RAID5 should have faster sequential writes than a 2-drive RAID0 (random I/O would be a different issue, since RAID5 has higher latencies), and any RAID5 array should outperform a single drive in sequential writes (and reads, too).

And you can get this same performance with software RAID5, chipset/firmware RAID5, and true hardware RAID5.

The reason most chipset RAID5 implementations get craptastic sequential write performance is alignment issues. It's like the alignment penalties you get with Advanced Format drives when the file system does not align with the AF sector size, except that with RAID5 the misalignment penalty is HUGE.

There are ways around this, e.g. manually adjusting partition start positions, file system cluster sizes, etc., or using aggressive write caching (not forcing each write to flush immediately opens the possibility for multiple writes to be recombined for optimal alignment). The latter (the caching) is, I suspect, what expensive hardware RAID controllers use to work around the problem, but it can be done with software/firmware RAID5, too. Google it.
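The alignment check itself is simple arithmetic: a write avoids read-modify-write only if it covers a full stripe, and a partition that doesn't start on a full-stripe boundary guarantees it never will. A sketch (the 64 KiB chunk size and the offsets are illustrative assumptions):

```python
def full_stripe_aligned(partition_offset_bytes, chunk_kb, data_drives):
    """Check whether a partition offset lands on a full-stripe boundary.

    For RAID5, the full stripe (the unit that avoids read-modify-write)
    is chunk_size * number_of_data_drives (n - 1 for an n-drive array).
    A partition that does not start on this boundary forces the array
    into partial-stripe writes: the big misalignment penalty.
    """
    stripe_bytes = chunk_kb * 1024 * data_drives
    return partition_offset_bytes % stripe_bytes == 0

# A 3-drive RAID5 with 64 KiB chunks has a 128 KiB full stripe.
# The classic sector-63 MBR partition start (63 * 512 bytes) misses it;
# a modern 1 MiB offset hits it:
legacy_ok = full_stripe_aligned(63 * 512, 64, 2)
modern_ok = full_stripe_aligned(1024 * 1024, 64, 2)
```

That sector-63 legacy offset is exactly the same culprit behind the Advanced Format penalties mentioned above.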

In many respects, software RAID5 is actually the best, because both the RAID alignment and the file system alignment can potentially live under the same roof. The problem is that Microsoft, in their infinite wisdom, decided to disable software RAID5 on non-server SKUs (not even Ultimate has it! It's downright silly, considering that enterprises using a server edition probably use expensive hardware RAID anyway). Of course, there is Linux...
 
Thanks for all the comments.

To answer the first few questions, the RAID controller is the onboard Z68 controller.

The hard drives would be 2TB Seagate Green 5900RPM drives.

I haven't purchased the other two drives, and now I'm probably going to hold off, as a good backup solution and a single 2TB data drive should be fine for now. I'm not willing to give up 30%+ write performance if that is indeed what people are seeing.

thanks
 
I always assumed 2, 5, 4, 1, 3 - but is that incorrect?

1. Single SATA Hard Drive
2. 2 SATA HDDs Striped RAID 0
3. 2 SATA HDDs Mirrored RAID 1
4. 3 SATA HDDs RAID 5
5. 4 SATA HDDs RAID 5

With a good controller for the RAID 5, then 5, 2, 1, 3, 4.

On a poor motherboard one, it's probably more like 2, 1, 3, 5, 4.

This is assuming a mix of reads and writes (fast reads can be good, but not if the writes are at USB2 speeds, which is where some of the original motherboard RAID 5s landed).
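For what it's worth, the idealized streaming-write model gives yet another ordering. This sketch assumes full-stripe sequential writes only, an illustrative 100 MB/s per drive, and zero controller overhead, which is exactly the assumption motherboard RAID 5 violates:

```python
def seq_write_estimate(config, drive_mbps=100):
    """Very rough streaming-write estimates for the five setups.

    Assumes an ideal controller and full-stripe writes; the per-drive
    speed is an illustrative placeholder, not a measured figure.
    """
    kind, n = config
    if kind == "single":
        return drive_mbps
    if kind == "raid0":
        return n * drive_mbps            # stripes across all drives
    if kind == "raid1":
        return drive_mbps                # must write both copies
    if kind == "raid5":
        return (n - 1) * drive_mbps      # one drive's worth goes to parity
    raise ValueError(kind)

setups = {
    1: ("single", 1),
    2: ("raid0", 2),
    3: ("raid1", 2),
    4: ("raid5", 3),
    5: ("raid5", 4),
}
ranking = sorted(setups, key=lambda k: seq_write_estimate(setups[k]),
                 reverse=True)
# idealized order: 5, then 2 and 4 tied, then 1 and 3 tied
```

The gap between this idealized [5, 2, 4, 1, 3] and the real-world orderings quoted above is the whole point of the thread: the controller, its cache, and alignment decide how much of the theoretical number you actually see.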
 
I have had zero luck with software RAID 5, and the only RAID 5 cards which do not use software are expensive because they have their own microprocessor and RAM. Those cards only fit in server boards because they are PCI-X 32. All other cards I know of offload to the system. Admittedly I have not messed with RAID 5 in a few years, so something may have changed, but last I checked you would be looking at a $600+ Adaptec controller. Even then, RAID 5 is very picky about drives, even the ones on the compatibility list.

I also have to agree with Blain that running green drives in RAID is pretty lol for obvious reasons. If you just want redundancy, run RAID 1. If you want speed out of RAID you will have to pay through the nose. Your best option is probably backup to an external drive for data protection.
 
Yeah, you can do RAID 10 with a $50 card setup (I have some): 8 ports, a real RAID controller. You won't get the lame write performance hit, given how cheap those drives are.
 
I note that the OP has a motherboard with the Z68 chipset. Get back to that later, although my thoughts about it should be obvious . . .

Since July, '07, I've run a 3Ware-AMCC 9650-SE 4LP 4-port RAID controller with (trouble remembering) either 128MB or 256MB on-board cache. It was capable of realizing the full potential of SATA-II drives, and it was a four-drive array in RAID5 of nearly 1TB.

I'd even bought spare drives in case one of them crapped out, but it never happened. Every now and then, I'd run the web-page utility to verify the entire array. Nothing untoward ever turned up with that, either.

It was . . . fast enough. And I still had a sense of security about data integrity. I still had other backup solutions that I used periodically.

This time, I want a system that is fast but uses a lot less power -- or somewhat less power.

So I ordered the 600GB SATA-III VelociRaptor and a SATA-III SSD to deploy as ISRT on the Z68 motherboard. I may add a 500 GB SATA-III Cav Black as a hot-swap unit, possibly for extra video storage and recording. I think I also might get a spare drive-caddy and another 500 Cav Black to clone the boot drive periodically. Other backup occurs regularly to (actually, "from") a WHS server -- which in turn is backed up by an offline hot-swap periodically. I won't buy a "spare" Raptor unless I need it.

The experience with the 1TB array: I barely consumed 500GB in usage, after piling up several movies, the entire HBO "Pacific" series, etc. etc. Deleting things I didn't want to keep a few weeks ago, there is less than 400 GB in use. Put all the movies on another drive, and I believe the storage usage will drop to below 250GB.

I saw some performance benchies on an ISRT setup with a 20GB cache the other day. I'll want to go back and check, but I think the testing was done with a "mainstream" SATA-III HDD, not a VelociRaptor. It appears that, after a little initial computer usage, ISRT performance comes close to 80 or 90% of the SSD's speed, more or less. That's with a 20GB cache. I think I might as well go ahead and make my cache double or triple that.

With the remaining SATA-IIs from the previous RAID5, I should have a lot of hot-swap drives to play with . . . maybe add them to increase the WHS capacity. But then, if they're added to the WHS, I'll be burning more electricity. Let me think about that some more.
 