When talking about RAID performance, you guys should always keep in mind that real-world performance depends heavily on the actual RAID implementation, not just on the theoretical performance that the RAID scheme allows.
For example, many RAID1 engines (RAID10 or 0+1 for that matter) gain very little from the mirroring when reading large files. Either they simply read from the master disk and don't bother with the identical mirrored slave disk at all, or, more commonly, they employ a 'split' algorithm, which can mean one of two things:
1) both disks get to perform the same I/O; whichever delivers it first 'wins'. Little to no performance increase here.
2) both disks get to perform half of the requested I/O. So when reading 64KiB, disk1 gets the first 32KiB and disk2 gets the second 32KiB. This sounds good, but on sequential reads it means both drives seek through the same LBA range while skipping every other chunk, i.e. 50% of the LBAs. Since each head still has to pass over the sectors it skips, the end result is barely any faster than a single disk reading 100% of the LBAs without skipping anything (see the little model below).
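To put rough numbers on point 2, here is a tiny back-of-the-envelope model; the 100 MB/s media rate and 1 GiB file size are assumed figures purely for illustration, not measurements:

```
# Back-of-the-envelope model of mirror read strategies on a two-disk RAID1.
# The media rate and file size are assumed, illustrative numbers.

MEDIA_RATE_MBPS = 100.0   # sequential rate of a single disk (assumption)
FILE_MB = 1024.0          # size of the sequential read (assumption)

# Single disk: reads every LBA of the file once at the full media rate.
t_single = FILE_MB / MEDIA_RATE_MBPS

# 'Split' mirror: each disk returns only half the data, but its head still
# travels across the whole LBA range, skipping every other chunk, so it
# takes roughly as long as reading the whole file.
t_split = FILE_MB / MEDIA_RATE_MBPS

# Ideal mirror: each disk reads one contiguous half of the file,
# so the read finishes in half the time.
t_ideal = (FILE_MB / 2) / MEDIA_RATE_MBPS

for name, t in [("single disk", t_single),
                ("split mirror", t_split),
                ("ideal mirror", t_ideal)]:
    print(f"{name:12s}: {FILE_MB / t:6.1f} MB/s effective")
```

The 'ideal mirror' row is what a good engine aims for: each disk reads a different contiguous half of the file, which is where the theoretical 2x comes from.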
RAID5 can use all disk members, though this requires the engine to let the drive holding the parity (for that stripe) skip its parity chunk and read the next data chunk instead; not all implementations do this. The skipping ahead also slows the disks down a tiny bit (see the parity-rotation sketch below). RAID1 theoretically has the best read performance, since a good engine can have both disks reading useful data at full throughput. Virtually no implementation is capable of this, however.
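For the RAID5 case, here is a rough sketch of a rotating-parity layout on four disks, just to show which chunk each disk has to skip on a long sequential read; the disk count is an assumption and the exact layout (left/right, symmetric/asymmetric) varies per implementation:

```
# Rotating-parity sketch for a 4-disk RAID5. The exact layout differs per
# implementation; this only shows that on a long sequential read every disk
# serves data but also has to skip over one parity chunk per rotation.

N = 4  # member disks (assumption)

for stripe in range(N):
    parity_disk = (N - 1 - stripe) % N   # parity moves to a different disk each stripe
    row = []
    data_in_stripe = 0
    for disk in range(N):
        if disk == parity_disk:
            row.append(" P ")            # the chunk this disk skips when reading
        else:
            row.append(f"D{stripe * (N - 1) + data_in_stripe:02d}")
            data_in_stripe += 1
    print(f"stripe {stripe}: " + " | ".join(row))
```

Over one full rotation each member reads three data chunks and skips one parity chunk, which is why all four spindles contribute to the read but each one loses a little time skipping ahead.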
As for RAID1 scaling: the best I've seen is from ZFS, whose sequential reads on a mirror come in just below those of RAID0. That is very good, and a sign that the engine does quite a good job of keeping both disks busy with useful work.
@Red Squirrel: you need to increase the test size to at least 8 times your RAM, or your RAM buffer cache will contaminate your results. I recommend using 'dd' instead of bonnie, because bonnie does not employ the cooldown period that is necessary to properly benchmark RAIDs with write-back caching. Something like the snippet below should give you cleaner numbers.
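This is only a sizing sketch (Python just to read MemTotal from /proc/meminfo and print the commands); the test-file path is a placeholder for wherever your array is mounted, and the dd options shown are plain GNU dd bs/count/conv=fdatasync:

```
# Rough sizing helper: make the dd test file ~8x RAM so the page cache
# cannot hide the real array throughput. The test-file path is a placeholder.

TEST_FILE = "/mnt/array/dd_testfile"   # adjust to a path on the RAID array

# MemTotal is reported in KiB in /proc/meminfo.
with open("/proc/meminfo") as f:
    mem_kib = next(int(line.split()[1]) for line in f
                   if line.startswith("MemTotal"))

count_mib = mem_kib * 8 // 1024 + 1    # number of 1 MiB blocks, at least 8x RAM

# conv=fdatasync forces dd to flush before reporting, so a write-back cache
# cannot inflate the write figure.
print(f"write: dd if=/dev/zero of={TEST_FILE} bs=1M count={count_mib} conv=fdatasync")

# Drop the page cache (as root) before reading the file back.
print("cache: sync; echo 3 > /proc/sys/vm/drop_caches")
print(f"read:  dd if={TEST_FILE} of=/dev/null bs=1M")
```

The fdatasync at the end of the write and the cache drop before the read are what give the array's write-back mechanisms nowhere to hide, which is the cooldown-style behaviour bonnie is missing.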