RAID 5 on Intel ICH9R

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Just wondering, I want to create a RAID array on my mobo that has an Intel ICH9R chipset. It has six SATA2 ports on it. How many drives can I put into a single RAID 5 array?

3?
4?
5?
6?

Just wondering how many drives I should purchase for the RAID array. My case has room for six 3.5" HDs.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
There's nothing too wrong with ICH9R; it has had good reviews for performance and capability.
q.v.
http://forums.anandtech.com/me...t_key=y&keyword1=ICH9R

Check your motherboard specifications, though; sometimes some of the SATA ports are controlled by the ICH9R (e.g. 4 ports) and others (e.g. 2 ports) may be controlled by a different "add-on" PATA+SATA controller chip like a JMicron or whatever.

I'd fill the ICH9R controlled ports up with drives, but not include ports from other controllers in the same RAID.

Use at least 2 drives for RAID 0 or RAID 1.
Use at least 3 drives for RAID 5.


 

NesuD

Diamond Member
Oct 9, 1999
4,999
106
106
If you're using Vista, be sure to check "enable advanced performance" in the disk drive's properties in Device Manager. This makes a big difference in the RAID 5 write performance of most onboard RAID controllers. It made a big difference in writes on my ICH10R.
 

Keitero

Golden Member
Jun 28, 2004
1,890
0
0
I wouldn't bother using RAID 5 with any onboard RAID controller. Having to use your CPU and RAM to do the XOR calculations will make performance suffer, especially if you are planning on putting your OS on it. Also, with the lack of a battery backup for your cache, you run the risk of corrupt data (a rare chance, I'll admit). RAID 0 or 1 would be fine, as they're not as demanding on the CPU as RAID 5. To answer your original question, you can put in as many drives as you want, ports permitting, but the minimum is 3.
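For anyone wondering what those XOR calculations actually are, here's a tiny illustrative sketch in Python (not how any real driver is written, just the math): parity is the byte-wise XOR of the data blocks in a stripe, and any one lost block can be rebuilt by XORing the survivors with the parity.

```python
# Illustrative only: RAID 5 parity is the byte-wise XOR of the data blocks.
# A real driver works on fixed-size stripe chunks, but the math is the same.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data blocks, one per drive; the parity block lives on a fourth drive.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# If the drive holding d2 dies, d2 is rebuilt from the survivors plus parity.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
```

Doing that for every stripe written is the extra work the host CPU takes on with onboard or OS software RAID; a dedicated hardware card does it on its own processor instead.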
 

mooseracing

Golden Member
Mar 9, 2006
1,711
0
0
Originally posted by: QuixoticOne
There's nothing too wrong with ICH9R; it has had good reviews for performance and capability.



Other than the fact that it is onboard. You will not be able to use the drives in that array if the controller (or anything else on the board) fails, unless you buy another board with an ICH9R. It would be nice if RAID were standardized across the board so no one had to worry about this.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
True, but that's what the OP said he wanted to use, and it will work for what he wants, so for him it is the best solution.

There is a way around that problem, though: use JBOD non-RAID mode on the ICHx (you don't need the -R for that) and a pure software RAID implementation like ZFS's RAIDZ or Linux's (md) RAID, so your data drives would be more portable between systems. Actually, I think ZFS RAIDZ on soft JBOD has so many compelling advantages for an inexpensive RAID setup that it is what I actually use.
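If it helps, here's a rough sketch of what setting up either flavor looks like. The device names are placeholders for whatever your disks are called, and in practice you'd just type the same commands at a root shell rather than wrap them in Python:

```python
# Rough sketch only -- device names are placeholders; run as root.
import subprocess

def run(cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# OpenSolaris / ZFS: a single-parity raidz pool built from three whole disks.
run(["zpool", "create", "tank", "raidz", "c1t0d0", "c1t1d0", "c1t2d0"])
run(["zpool", "status", "tank"])

# Linux equivalent with md: a 3-drive RAID 5, then a filesystem on top of it.
run(["mdadm", "--create", "/dev/md0", "--level=5",
     "--raid-devices=3", "/dev/sdb", "/dev/sdc", "/dev/sdd"])
run(["mkfs.ext3", "/dev/md0"])
```

Either way the array metadata lives on the disks themselves in a documented format, so the set can move to any box that runs the same OS.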

I've got an Intel CPU and Intel ICH9R motherboard. If the M/B ever failed I can't imagine what else I'd want to replace it with except another Intel ICH9R or maybe ICH10R motherboard to keep my platform compatible with my RAM, CPU, and general stability / functionality expectations. That isn't too big of a deal.

With the NEHALEM CPUs coming out in several months, it seems a good bet that anyone who's now using Intel will continue to desire to do so for at least the next year or two.

I'd be annoyed if the drives didn't port between ICH9R / ICH10R / ICH11R.

I do agree that it would be better if there were clear multi-vendor specifications / standards and implementations for the storage formats in use, and open source pure software RAID that could read & recover any "fake raid" chipset RAID format.


Originally posted by: mooseracing
Other than the fact that it is onboard. You will not be able to use the drives in that array if the controller (or anything else on the board) fails, unless you buy another board with an ICH9R. It would be nice if RAID were standardized across the board so no one had to worry about this.

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Never use RAID 5 on a mobo controller; it is absolute shit, beyond shit actually.

The only thing I would trust an onboard RAID for is RAID 1.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
So no-one has said that you are limited to 4 drives for a RAID5. I guess that means that I could use all six SATA ports in one RAID5 array?
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Yes, that should be fine. As long as they're all controlled by the ICH9R, you can RAID 5 any number of drives, three or more. Six would be great.
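Keep in mind one drive's worth of capacity goes to parity, so usable space is (number of drives - 1) x drive size. A quick back-of-the-envelope (the drive size here is just an example):

```python
# RAID 5 usable capacity: one drive's worth of space is spent on parity.
drive_size_gb = 500  # example drive size, substitute your own
for n in range(3, 7):
    print(f"{n} drives -> {(n - 1) * drive_size_gb} GB usable")
# 3 drives -> 1000 GB, 4 -> 1500 GB, 5 -> 2000 GB, 6 -> 2500 GB
```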

Originally posted by: VirtualLarry
So no-one has said that you are limited to 4 drives for a RAID5. I guess that means that I could use all six SATA ports in one RAID5 array?

 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Could be an isolated incident (i.e. my mobo is bad), but my DFI P35 board with ICH9R would randomly give me hard drive corruption if I had all 6 ports loaded up.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The recommended number of drives for a RAID 5 is 3-5; the recommended number for a RAID 6 (two parity drives) is 5-9...

But yes, you could use 5 or 6 or even more drives in RAID 5. It is just a bad idea, because you start increasing the risk of data loss.
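The reason bigger single-parity arrays get riskier is simple: during a rebuild RAID 5 cannot survive a second failure, and the more surviving drives there are, the more likely one of them dies (or throws an unreadable sector) before the rebuild finishes. A toy back-of-the-envelope with made-up numbers:

```python
# Toy numbers only: chance that at least one surviving drive fails during
# the rebuild window, assuming independent failures.
p_per_drive = 0.01  # assumed per-drive failure chance over the rebuild window

for n in (3, 4, 5, 6):
    survivors = n - 1
    p_array_loss = 1 - (1 - p_per_drive) ** survivors
    print(f"{n}-drive RAID 5: ~{p_array_loss:.1%} chance of a fatal second failure")
# More drives -> more survivors that all have to hold up -> higher risk.
```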

Problems with onboard RAID:
1. Resetting the CMOS causes the array to be lost. To restore it you must delete the broken array, then create a new array, adding the drives in the EXACT order you used when creating it and choosing the exact same stripe size, and then choose NOT to clear the array.
2. Updating the BIOS would most likely also cause the array to be lost (but it should be recoverable in the same manner).
3. Performance (speed) will be BEYOND atrocious.
4. The array is tied to your controller and is not migratable.
5. There are some bugs in the cheap onboard controllers that you might run into.

Either get a real controller ($300+, and a single point of failure: if the controller fails you need to get the same model, which can be very expensive once it is EOL), or use an OS-based implementation (which is the CORRECT way to do RAID: disable RAID in the BIOS, and then in Windows, Linux, or whatever, set up a software RAID).

I am using a RAIDZ2 (like RAID 6) ZFS array of 5x750GB drives on an OpenSolaris machine.
http://opensolaris.org/os/community/zfs/

The best build for it is the latest OpenSolaris, which you can get here:
http://www.genunix.org/

Also, we are volunteering information above and beyond... but that is because your questions were simplistic... just look up RAID on Wikipedia; it answers the questions you asked. In fact, the very LEAST anyone should do is read the entire wiki article before even starting to think about RAID.

Originally posted by: EarthwormJim
Could be an isolated incident (i.e. my mobo is bad), but my DFI P35 board with ICH9R would randomly give me hard drive corruption if I had all 6 ports loaded up.

There are tons of bugs with mobo controllers; don't trust them.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Originally posted by: taltamir
The recommended number of drives for a RAID 5 is 3-5; the recommended number for a RAID 6 (two parity drives) is 5-9...

But yes, you could use 5 or 6 or even more drives in RAID 5. It is just a bad idea, because you start increasing the risk of data loss.

Problems with onboard RAID:
1. Resetting the CMOS causes the array to be lost. To restore it you must delete the broken array, then create a new array, adding the drives in the EXACT order you used when creating it and choosing the exact same stripe size, and then choose NOT to clear the array.
2. Updating the BIOS would most likely also cause the array to be lost (but it should be recoverable in the same manner).
I don't think either one of those is true. The RAID software/BIOS writes metadata to the drives in a reserved location (the last sector?), and the RAID array is even portable between compatible mobos. The RAID metadata is NOT stored in the CMOS or the BIOS, AFAIK.

Originally posted by: taltamir
3. Performance (speed) will be BEYOND atrocious.
4. The array is tied to your controller and is not migratable.
5. There are some bugs in the cheap onboard controllers that you might run into.
I'll agree that there might be bugs, but there could be bugs in ANY RAID implementation. I don't agree with the performance issue; there have been benchmarks in which software RAID does just fine against hardware RAID. As far as price/performance goes, software RAID wins hands-down.

Originally posted by: taltamir
Either get a real controller (300+$, single point of failure, if the controller fails, you need to get the same model, which can be very expensive when it is EOL), or use an OS based implementation (which is the CORRECT way to do raid!: disable raid in bios, and then in windows, linux, or whatever, set up a software raid).
You would think that running OS-level RAID would be best, but for things like a bootable RAID 0 you still need BIOS support, so it seems best if the RAID implementation resides below the OS, at the BIOS/controller level. It can also be more OS-independent that way. The primary problem is that the entire industry has moved to support consumer-level RAID, EXCEPT for Microsoft. They still consider it a "server-only" feature, and thus have RAID disabled in their consumer OSes. Unlike the Mac, which can RAID floppy disks if you really want to.

My main storage server is going to use three PNY S-Cure (Netcell Revolution-based) hardware 5-port SATA1 PCI RAID controllers. But I was thinking of putting a RAID on each workstation as well, with the ICH9R. After all, you can never have too much storage, right?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: VirtualLarry
My main storage server is going to use three PNY S-Cure (Netcell Revolution-based) hardware 5-port SATA1 PCI RAID controllers.

Why did you decide to go with a chipset from a defunct company with a bad reputation on an obsolete low-bandwidth interface?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: VirtualLarry
Originally posted by: taltamir
The recommended number of drives for a RAID 5 is 3-5; the recommended number for a RAID 6 (two parity drives) is 5-9...

But yes, you could use 5 or 6 or even more drives in RAID 5. It is just a bad idea, because you start increasing the risk of data loss.

Problems with onboard RAID:
1. Resetting the CMOS causes the array to be lost. To restore it you must delete the broken array, then create a new array, adding the drives in the EXACT order you used when creating it and choosing the exact same stripe size, and then choose NOT to clear the array.
2. Updating the BIOS would most likely also cause the array to be lost (but it should be recoverable in the same manner).
I don't think either one of those is true. The RAID software/BIOS writes metadata to the drives in a reserved location (the last sector?), and the RAID array is even portable between compatible mobos. The RAID metadata is NOT stored in the CMOS or the BIOS, AFAIK.
I was able to transfer a RAID 5 array between an nForce2, an nForce4, and an nForce5 mobo. Yet when I reset the BIOS it was lost, just like I was warned, and I had to follow the recovery process listed. I suspect the reason is that when you reset the CMOS the settings change to IDE mode instead of RAID mode. So you get "broken" arrays (more than one) (same as if you accidentally unplug and replug a drive... on certain chipsets... the nForce mobos were a LOT more tolerant of RAID disturbance than the Intel ICH9R... the Intel was a total disaster with zero flexibility and migratability; I was reconstructing the array all the time). If you set it to RAID mode before plugging in all the drives at once, then the array is recognized (assuming a compatible RAID format, aka mobo chipset... some mobo moves will not work).

Originally posted by: taltamir
3. Performance (speed) will be BEYOND atrocious.
4. The array is tied to your controller and is not migratable.
5. There are some bugs in the cheap onboard controllers that you might run into.
I'll agree that there might be bugs, but there could be bugs in ANY RAID implementation. I don't agree with the performance issue; there have been benchmarks in which software RAID does just fine against hardware RAID. As far as price/performance goes, software RAID wins hands-down.
Software RAID = mobo "controller" or OS implementation... and remember, I AM recommending the OS implementation. And we are talking about RAID 5 performance here. And I am speaking from experience and from benchmarks. Show me a mobo RAID 5 speed bench that gets good speeds. (And on 6 drives? Hah! As if.)

Originally posted by: taltamir
Either get a real controller ($300+, and a single point of failure: if the controller fails you need to get the same model, which can be very expensive once it is EOL), or use an OS-based implementation (which is the CORRECT way to do RAID: disable RAID in the BIOS, and then in Windows, Linux, or whatever, set up a software RAID).
You would think that running OS-level RAID would be best, but for things like a bootable RAID 0 you still need BIOS support, so it seems best if the RAID implementation resides below the OS, at the BIOS/controller level.
Unrelated conjecture. A bootable RAID 0 can ONLY be done at the controller level, because no OS can boot from a RAID array that it itself creates in software (none that I know of, anyway; in theory it is possible). It can only use it for storage. But he is talking RAID 5 here... which is storage (I hope; the OS should only be on RAID 1 or RAID 0!).
Saying that the mobo controller is better for task A because it is the only thing capable of unrelated task B makes no sense.

It can also be more OS-independent that way. The primary problem is that the entire industry has moved to support consumer-level RAID, EXCEPT for Microsoft. They still consider it a "server-only" feature, and thus have RAID disabled in their consumer OSes. Unlike the Mac, which can RAID floppy disks if you really want to.

That is a serious limitation. I would recommend using a file server with gigabit Ethernet and either OpenSolaris (for the AWESOME ZFS) or FreeNAS.
But out of necessity you might end up doing it on your main PC, which runs Windows.

My main storage server is going to use three PNY S-Cure (Netcell Revolution-based) hardware 5-port SATA1 PCI RAID controllers. But I was thinking of putting a RAID on each workstation as well, with the ICH9R. After all, you can never have too much storage, right?

I like how you don't practice what you preach. People with a lot of experience warned me against RAID 5 on a mobo... I knew better, of course... So I did it, over multiple boards and chipsets, and it was a disaster on nForce2, nForce4, nForce5, and Intel ICH9R. (I actually started it on nForce4, migrated to nForce2 for some reason or another for a few days, then upgraded to nForce5, and finally moved to an ICH9R-based C2D system.)
But if you wanna do it, hop to it.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: taltamir
Show me a mobo RAID 5 speed bench that gets good speeds. (And on 6 drives? Hah! As if.)

I've only benched up to ICH8x, so don't have personal 6-drive results at present, but here are a couple:

nVIDIA 3-drive RAID 5:

http://i89.photobucket.com/alb...d0/atto-nvr53-3264.png

Intel 4-drive RAID 5:

http://i89.photobucket.com/alb...ir5-4-64-4-Vista64.png

The drives used are a couple of years old now, so nothing special in modern terms. Still those figures are fine IMO for typical home file server usage.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Originally posted by: Madwand1
Originally posted by: VirtualLarry
My main storage server is going to use three PNY S-Cure (Netcell Revolution-based) hardware 5-port SATA1 PCI RAID controllers.

Why did you decide to go with a chipset from a defunct company with a bad reputation on an obsolete low-bandwidth interface?

I didn't know about the "bad reputation". What's so bad about it? As for the others, the card was cheap. 5-port SATA *hardware* RAID for $40. That's barely more expensive than a cheap 4-port SATA soft-RAID SI3114-chipset based card. I have one of those too, in my other machine, and they suck.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Madwand, is that about 1MB/s at 4KB file sizes and about 15MB/s at 32KB?
The higher sizes and speeds are impressive, and very surprising, since that is not what I saw in real usage. (Not accusing you, just pointing out the disparity.)

15MB/s-and-under performance is rather reminiscent of what I would sometimes get with mobo controller RAID 5 in real-world usage. But that was not with such small files, so this is kind of odd in comparison.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: taltamir
Madwand, is that about 1MB/s at 4KB file sizes and about 15MB/s at 32KB?
The higher sizes and speeds are impressive, and very surprising, since that is not what I saw in real usage. (Not accusing you, just pointing out the disparity.)

15MB/s-and-under performance is rather reminiscent of what I would sometimes get with mobo controller RAID 5 in real-world usage. But that was not with such small files, so this is kind of odd in comparison.

There are specific technical reasons for such performance patterns (small writes that touch only part of a stripe force the controller into slow read-modify-write cycles). nVIDIA's RAID 5 implementation is notably susceptible to them, so it's not surprising that if you don't know about this you get lousy write performance under nVIDIA RAID 5.

Intel has a better cache implementation, so it has a different performance pattern, and it typically shows decent write performance, provided that the user knows enough to turn the write cache on.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Good reviews of ICHx RAID:
http://www.alternativerecursion.info/?p=31

I've had excellent performance on a Silicon Image 3114 4-port SATA controller plus a couple of motherboard controller ports for a 5-disk RAID 5 using Linux's (md) software RAID. Certainly *nothing* to brag about in terms of motherboard or SATA controller price or features.
That was for a legacy system I assembled a few years back.

Now the ICH9R is even better when used in JBOD mode with OpenSolaris & ZFS.

I don't quite understand the fixation on "performance" for RAID 5 anyway. RAID 5/RAID 6/RAIDZ/RAIDZ2 isn't really "about" performance; it is about REDUNDANCY / RELIABILITY while seeking acceptable performance. If you want raw performance, RAID 0 or RAID 1 is your thing. The fact that RAID 5/6/Z/Z2 actually performs comparably to, or in many I/O operations significantly better than, a single drive alone given appropriate system tuning is just icing on the cake.

If you have a big multi-user enterprise database or file server or streaming media server, of course you care about a performance level, but that is probably still WAY secondary to reliability / availability concerns, and in those cases you have the budget for $12K SANs / SunFire storage servers or whatever gets the job done with uber high levels of tech support & performance & reliability guarantees.

It *is* pathetic that in this day and age we're still having to cobble together reliable, fault-tolerant, good-performance storage systems even on consumer hardware, and that Microsoft makes it even harder than necessary to do RAID / image backups / file backups on their consumer OSes. Granted, things should be DESIGNED better at the hardware, file system, OS, and application levels so we don't even have to worry about this menial stuff. But IMHO at least there are adequately good, readily available, free(ish) solutions like OpenSolaris / ZFS / Linux (md) / ICHxR / FreeNAS / Openfiler et al.

Considering that a full-on entry-level quad-core PC costs $400 minus OS and storage drives, for the cost of even one of these $300+ fancy RAID controllers one can buy an el-cheapo file server box, pop OpenSolaris / FreeNAS / Openfiler on it, add $400 for 3x1TB drives for a RAID 5/Z, and be done with it. I find the costly RAID cards only dubiously economical for personal / small-business usage compared to just switching to a full-on file server with a good UPS and a sound filesystem / backup configuration.

 

mooseracing

Golden Member
Mar 9, 2006
1,711
0
0
Originally posted by: QuixoticOne
I find the costly RAID cards only dubiously economical for personal / small-business usage compared to just switching to a full-on file server with a good UPS and a sound filesystem / backup configuration.

Like it's been said, a PERC 5 can be bought for $100-150 on eBay; it's an LSI card that is comparable to $300-and-up cards. It will beat onboard in every way and can move from system to system.

I've got too much time invested in my data; I don't want to lose it because a surge in a USB port kills my motherboard. Not to mention, with the PERC card being installed in sooo many systems from Dell, you will easily be able to get a replacement down the road if needed.

 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
You're right, I over-generalized without being clear as to why.

I don't mean that the PERCs or other controllers may not be a good solution for some people. They may even be the "best" (subjective) value for some uses/users. Certainly modern hardware-assisted cards should perform better than software-only implementations -- though given cheap CPUs like the quad-cores et al., I have to start wondering if the sheer free PERFORMANCE of CPUs will make any parity-calculation speed advantage of "hardware" cards basically irrelevant, unless of course you're maxing out your CPU on business applications... It is pretty irrelevant to have a checksum-offloading GbE NIC now, at least unless you have several ports and can offload SSL / VPN calculations as well...

What I was getting at was that a well-chosen software-only implementation on any reasonable dual-core CPU, e.g. OpenSolaris + ZFS with a UPS on top of that, keeps your data almost perfectly safe, thanks to the soft-RAID / copy-on-write design ZFS uses to prevent whole-FS or "old data" corruption from an incomplete transaction of any kind. Either a transaction atomically completes, with a single final successful metadata write letting the FS "see" the new, altered copy of the FS that includes that last transaction, or the transaction as a whole does NOT take effect, because some part of it was interrupted by a crash before the WHOLE thing finished. In either case nothing is left "corrupted" by a crash: the transaction 100% works, or it 100% does not, leaving the previous state of the filesystem and its data intact and uncorrupted.

So at least there are *viable* choices for *safe* software-only RAID without using an expensive controller of any kind (no NVRAM or controller-based battery backup is needed for crash safety). Whether you choose to get a fancy controller for performance reasons (CPU loading vs. offloaded calculations) or for capacity / interface reasons (adding SAS ports, or more ports of any kind than cheaper / motherboard controllers support) is another issue. Your data CAN be SAFE with or without fancy controllers/drives, so either way it passes the "will it work for me" bar; the only decisive questions are performance, extensibility, and the type/number of drive ports.
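You can see the same all-or-nothing idea in miniature with an ordinary file: write the new version somewhere else first, then publish it with one atomic rename, so a crash at any moment leaves either the complete old file or the complete new one, never a half-written mix. A toy Python illustration of the principle (this is not ZFS, just the same shape of idea, and the file name is made up):

```python
# Toy illustration of copy-on-write / atomic commit -- not ZFS, just the idea.
import os
import tempfile

def atomic_write(path, data):
    # 1. Write the new version to a temporary file on the same filesystem.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the new copy has hit the disk
    # 2. One atomic rename "commits" it: a crash before this line leaves the
    #    old file untouched, a crash after it leaves the complete new file.
    os.replace(tmp, path)

atomic_write("settings.cfg", b"new contents\n")
```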

Originally posted by: mooseracing

Like it's been said, a PERC 5 can be bought for $100-150 on eBay; it's an LSI card that is comparable to $300-and-up cards. It will beat onboard in every way and can move from system to system.

I've got too much time invested in my data; I don't want to lose it because a surge in a USB port kills my motherboard. Not to mention, with the PERC card being installed in sooo many systems from Dell, you will easily be able to get a replacement down the road if needed.