Sure.
The so-called 'BIOS RAID' uses a plain SATA (or PATA) interface where all the work of maintaining the actual RAID device is done on your main CPU using software drivers. It's similar to how Winmodems and software modems differ from 'real' hardware-based modems. So essentially what's happening is that you're running a form of software RAID using special drivers. Some of the nicer types of BIOS RAID will actually include a sort of 'accelerator' to assist with some of the calculations, but that's about it.
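If you want to see whether your board's BIOS RAID has written its metadata to the disks, the dmraid tool (assuming your distro ships it) will show you; the exact output depends on your controller:

    dmraid -r     # list any BIOS/'fakeraid' sets found on the drives
    dmraid -ay    # activate them through device-mapper so Linux can use them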
With a 'real' hardware RAID controller you have a dedicated embedded-style processor working with dedicated hardware to do the work, and the OS itself basically just uses it as hard drive space. All the work is done on the controller. They also have other special features like hot-swappability, error detection, hardware failure detection, etc., that you're not going to get through software or 'BIOS RAID'.
Linux software RAID has several advantages and disadvantages compared to real hardware-based controllers. First, it's programmed well (probably the best software RAID you're going to find; this is something Linux is very good at), and your central CPU (2-3 GHz) is much more powerful than the special-purpose processor on a hard drive controller (200-400 MHz, I think), so even with the processing overhead it has a good chance of actually being faster than hardware RAID. Second, it's inexpensive: the software RAID is provided as a feature of the OS itself, so it costs nothing beyond the disks you need and the Linux OS itself.
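You can actually watch the kernel pick the fastest parity routine for your CPU when it boots; something along these lines should dig it out of the kernel log (the exact wording varies by kernel version):

    dmesg | grep -i xor    # e.g. 'xor: using function: ...' with the measured checksumming speed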
Another advantage is that it's very flexible. For instance, I have three hard drives in my file server: two run off a SATA controller in a PCI slot, the third runs off the onboard PATA controller, and they're in a RAID 5 configuration. You can mix and match hardware. For instance, if I had two 200 GB drives and one 80 GB drive, I could divide them up into partitions and build one 160 GB RAID 5 array (from an 80 GB partition on each of the three drives) and one 120 GB RAID 1 array (from the leftovers of the two 200 GB drives). Of course this wouldn't be best for performance, because the different arrays would fight each other a bit, but you could set it up to perform very well under specific circumstances (say the RAID 5 holds the main system and the RAID 1 is only used for safekeeping).
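For a rough idea of what that setup looks like with the md tools, here's a sketch (the device names and the ext3 choice are just examples, not something you have to copy):

    # 80 GB partition on each drive -> RAID 5; the two 120 GB leftovers -> RAID 1
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/hda1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext3 /dev/md0    # main system array
    mkfs.ext3 /dev/md1    # 'safekeeping' array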
It's even nicer on SMP systems than on single-CPU systems, since one CPU can be pretty much dedicated to handling disk access, the RAID array, network overhead, and sound processing, while the other CPU stays free to run your applications at full speed.
The disadvantages compared to hardware RAID are that it makes things more complicated: all the extra configuration can be a pain, especially when you have to recover from a failure. It doesn't have the sophisticated error detection that a very high quality hardware controller will have; for instance, a hard drive could START failing and spread corrupted information over the array before it dies, causing permanent file loss (that can still happen with hardware RAID, of course, just less likely). The number of drives you can have is also limited by PCI bus bandwidth: hardware RAID does the calculations and mirroring internally, while with software RAID all that information has to travel over the PCI bus, so you're limited in what sizes of arrays you can build effectively. (This will be much less of a problem with newer 64-bit/higher-clocked PCI and PCI Express designs, with their serial-style dedicated bandwidth for each slot.) Then there are features like hot-swappable drives, hot spares, and such that are not fully supported by Linux software RAID (at least not yet).
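To give a feel for what that manual recovery involves with Linux software RAID (device names here are just examples, and the exact replacement procedure depends on your setup):

    cat /proc/mdstat                           # quick health overview of every md array
    mdadm --detail /dev/md0                    # per-array status, shows which member failed
    mdadm --manage /dev/md0 --fail /dev/sdb1   # mark the dying disk as failed...
    mdadm --manage /dev/md0 --remove /dev/sdb1 # ...pull it out of the array...
    mdadm --manage /dev/md0 --add /dev/sdd1    # ...and add the replacement; md rebuilds onto it
    mdadm --monitor --scan --mail=root         # daemon mode: mail root when an array degrades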
Now, with cheap onboard BIOS-assisted RAID you get all the disadvantages of both types with few of either's advantages.
Most of the time, especially in Linux, you don't really want RAID on a desktop system anyway, unless you have high disk access requirements. It's better to spend your money on more RAM instead. The way it works is that when you open a program, the information is read from the hard drive, which is slow, but once it's in RAM you just read it from RAM from then on. The more RAM you have, the more you can cache and the faster your system will feel (to a certain extent; probably 2-3 GB would be the maximum really useful amount except for the most demanding workstations).
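You can see that caching at work with free (just a quick illustration; the column layout varies a bit between versions):

    free -m    # the 'cached' column is file data the kernel keeps in otherwise idle RAM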
For data protection, dedicated file servers and real backups (RAID is not a backup) are more useful.
Even for high-end database loads you can simply go out and buy 16-32 GB of RAM and have the entire database cached in RAM, or even placed there manually. You'd probably still want RAID 5 for high availability (to guard against a drive failure taking out your system), but it's not necessary for performance.