The recommended number of drives for a RAID 5 is 3-5; the recommended number for a RAID 6 (two parity drives) is 5-9...
But yes, you could use 5 or 6 or even more drives in RAID 5. It is just a bad idea, because every drive you add grows the amount of data a rebuild has to read, and with it the odds of hitting an unrecoverable read error mid-rebuild and losing the array.
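To put a rough number on it, assuming the 1-error-per-10^14-bits unrecoverable read error (URE) rate that consumer drive spec sheets commonly quote (your drives may be better or worse):

Rebuilding a 5x750GB RAID 5 means reading the 4 surviving drives end to end:
4 x 750GB = 3TB ~= 2.4 x 10^13 bits
P(at least one URE during rebuild) = 1 - (1 - 10^-14)^(2.4 x 10^13) ~= 21%

That's roughly a 1-in-5 chance the rebuild trips over a bad sector, and it climbs fast as you add drives. RAID 6's second parity drive is exactly what covers you during that window.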
Problems with onboard RAID:
1. Resetting the CMOS causes the array to be lost. To restore it, you must delete the broken array, then create a new array, adding the drives in the EXACT order you used originally and choosing the exact same stripe size, and then choose NOT to clear the array.
2. Updating the BIOS will most likely also cause the array to be lost (but it's recoverable in the same manner).
3. Performance (speed) will be BEYOND atrocious.
4. The array is tied to your controller and is not migratable.
5. There are some bugs in the cheap onboard controllers that you might run into.
Either get a real controller ($300+, and a single point of failure: if the controller dies, you need to replace it with the exact same model, which can get very expensive once it's EOL), or use an OS-based implementation, which is the CORRECT way to do RAID: disable RAID in the BIOS, then set up a software RAID in Windows, Linux, or whatever OS you run.
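On Linux, for example, it's a couple of mdadm commands. Just a sketch; the /dev/sdX device names and the mount point are placeholders for whatever your drives actually are:

# build a 5-drive RAID 6 out of whole disks (example device names)
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# watch the initial sync progress
cat /proc/mdstat
# put a filesystem on it and mount it
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/array

And because it's all in software, you can move those drives to any Linux box and the array comes right back up, no matching controller required.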
I am using a raidz2 (ZFS's RAID 6 equivalent) array of 5x750GB drives on an OpenSolaris machine.
http://opensolaris.org/os/community/zfs/
The best build for it is the latest OpenSolaris, which you can get here:
http://www.genunix.org/
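For reference, the whole raidz2 setup is basically one command. A sketch, with example cXtYdZ device names (run 'format' to see what yours are actually called):

# create a 5-drive raidz2 pool named "tank" (example device names)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# check health and layout
zpool status tank

ZFS mounts the pool at /tank automatically, and it checksums every block, so it also catches the kind of silent corruption the onboard controllers hand you.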
Also, we are volunteering information above and beyond... but that's because your questions were basic; just look up RAID on Wikipedia, it answers the questions you asked. In fact, the very LEAST anyone should do is read the entire wiki article before even starting to think about RAID.
Originally posted by: EarthwormJim
Could be an isolated incident (i.e. my mobo is bad), but my DFI P35 board with ICH9R would randomly give me hard drive corruption if I had all 6 ports loaded up.
There are tons of bugs in mobo controllers; don't trust them.