Linux software RAID is top-level stuff.
It's probably the best you're going to find, and it has benchmarked faster than even expensive hardware-based RAID setups.
The reason you'd want an expensive hardware RAID setup isn't so much speed as the other features: the SCSI controllers it comes with, the ability to do hot-swap/hot-spares, the speed of rebuilding an array after losing a drive, that sort of thing.
If you think about it, software RAID makes sense: you're comparing an embedded-style 200-400MHz processor on a RAID controller against a 2000-3000MHz CPU.
Software RAID is significantly handicapped in capacity by the speed of the PCI bus right now. The 32-bit, 33MHz PCI bus used in current x86 machines has a maximum bandwidth of 133MB/s, around 100MB/s realistically.
A hard drive usually gets about 45MB/s. Newer ones can go a bit faster, but I am talking about sustained speeds. So after about 4-5 hard drives you've pretty much maxed out your system's internal bandwidth... you're going to have to replicate volumes, and any write has to go to 2 or more drives simultaneously to maintain parity and all that stuff. Beyond that you're going to saturate the bus and lose performance.
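Rough back-of-the-envelope numbers, just to show where the ceiling is (ballpark figures, not benchmarks):

  PCI bus:   32 bits x 33 MHz = ~133 MB/s theoretical, ~100 MB/s realistic
  one drive: ~45 MB/s sustained
  100 / 45   = only 2-3 drives streaming flat out before the bus is the bottleneck

Real workloads don't keep every drive at full sustained speed, which is why 4-5 drives is about where the scaling stops in practice.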
Personally I use a software RAID 5 array for my home folder. It's in a dedicated PC, and that's all that machine does... it's connected to my main desktop by gigabit ethernet, and the share runs over a cluster file system designed for trusted cluster networks. It allows migrating processes back and forth between computers, shared directories, and such (check out OpenSSI if you're curious).
The RAID 5 array consists of 2 Maxtor 7200rpm/8MB-cache 120GB hard drives running on a PCI SATA adapter. The 3rd drive is an older Western Digital 120GB/7200rpm/8MB-cache PATA drive running off the onboard IDE controller. (With PATA, only use drives jumpered as master; using slave drives is OK, but performance will suck.)
All 3 hard drives are partitioned with one big 110GB partition (or something like that... you know how hard drive sizes vary a bit), and those partitions are what I use to build the array.
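For reference, building an array like that with mdadm looks roughly like this (the device names are just examples, yours will differ):

  # one big partition per disk, type 'fd' (Linux raid autodetect)
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/hda1
  # watch it build the initial parity
  cat /proc/mdstat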
*Note that this computer boots off the network via PXE... If it had to boot itself up I'd need a separate /boot partition for the BIOS/bootloader to read the kernel from, since the bootloader can't read a kernel off a RAID 5 + LVM volume. It only needs to be 20MB or so, but I make mine 100-200MB for comfort.
On top of the RAID device I have one big LVM (Logical Volume Management) volume group, and that is divided up into logical volumes, which are basically partitions, but they can be moved around and resized and such without rebooting. (Although whether they can be resized on the fly depends on the file system itself. I use ext3, which I don't think supports that.)
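Setting that part up is roughly this (the volume group and volume names here are made up for the example):

  pvcreate /dev/md0                  # make the RAID device an LVM physical volume
  vgcreate storage /dev/md0          # one big volume group on top of it
  lvcreate -L 60G -n home storage    # carve out a 'partition' (logical volume)
  mkfs.ext3 /dev/storage/home        # put a file system on it and mount as usual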
So basically I have:
3 hard drives used to create one RAID 5 array: 2 SATA, 1 PATA.
On top of the RAID array I use LVM to divide it up into different 'partitions' which can be moved, resized, or deleted based on my needs at any given time.
It's just something I am screwing around with so that I know that I know how to do it.
LVM is nice because you can add hard drives to the volume group and get more capacity, or take hard drive space away, or take a partition from another hard drive and just add that to the volume group. Stuff like that.
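Growing it later looks something like this (again, device and volume names are only examples; since ext3 here can't grow on the fly, you resize it while unmounted):

  pvcreate /dev/sdc1                  # new disk's partition becomes a physical volume
  vgextend storage /dev/sdc1          # add it to the volume group
  lvextend -L +40G /dev/storage/home  # grow the logical volume
  umount /home
  resize2fs /dev/storage/home         # grow the ext3 file system to match
  mount /home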
It's all kinda cool in a geeky sort of way.
Linux software RAID is very nice. It's fast, flexible, and has lots of useful features. You can have RAID 0/1 or RAID 5, or even weird ones like RAID 4 or RAID 10 and stuff like that. It's IDEAL for creating cheap file servers that need massive amounts of space for storing data.
Then you can have hot-swappable drives and stuff like that... but the hardware itself needs to support it. And until SATA gets more established, that's really just SCSI.
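Even without hot-swap hardware, the software side of spares and replacements is already there in mdadm; a rough sketch (device names are examples):

  mdadm /dev/md0 --add /dev/sdd1     # on a healthy array this just sits as a hot spare
  # if a drive dies, mark it failed and remove it; the spare rebuilds automatically
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  cat /proc/mdstat                   # watch the rebuild progress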