>According to Sandra, my rig is quite a bit faster than your standard
>single drive, yet nowhere near the speed of a theoretical RAID-0.
I haven't taken a look at Sandra (and if it has its own marker for where theoretical RAID0 with drives like yours should land, then never mind ^^), but realize that synthetic benchmarks generally won't show a doubled score, since they spread the IO operations around instead of doing a straight sequential transfer-rate test. I think that shortchanges the real-world effect a little: most of the time you're actually waiting on your hard drives it's for a large file, and small/multiple file accesses are too quick to have much impact on what you're doing anyway. Then again, I do edit video, so. ^^
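To put rough numbers on that, here's a quick back-of-envelope model (the 8 ms seek and 55 MB/sec sustained rate are just assumed round figures for a drive like yours, nothing from Sandra): seek once, then stream the data off one or two striped drives.

# Back-of-envelope model: seek once, then stream the data.
# The 8 ms seek and 55 MB/sec rate are assumed round numbers, not measurements.
AVG_SEEK_MS = 8.0
STR_MBS = 55.0

def access_time_ms(size_kb, drives):
    # Transfer time shrinks with more striped drives; the seek doesn't.
    transfer_ms = (size_kb / 1024.0) / (STR_MBS * drives) * 1000.0
    return AVG_SEEK_MS + transfer_ms

for size_kb in (4, 64, 100 * 1024):
    single = access_time_ms(size_kb, 1)
    raid0 = access_time_ms(size_kb, 2)
    print("%6d KB: single %8.1f ms, RAID0 %8.1f ms, speedup %.2fx"
          % (size_kb, single, raid0, single / raid0))

The 4KB access is all seek, so RAID0 buys you basically nothing there, while the 100MB read comes out close to 2x, which is exactly why the big-file work I do benefits more than a spread-around benchmark suggests.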
>CPU utilization is not an issue, Promise controller manages the
>striping. It is transparent to the OS and CPU.
Sorry, not true. The controller *and the drivers for it* manage the striping. Most of the cheap consumer RAID cards (as in Promise/etc. rather than Adaptec/etc.) do very little in hardware and have the drivers handle most of it, since that's cheaper. For example, I was looking at Linux SATA RAID chipset support the other day; check out the huge number (including Intel's ICH5R) that are marked "proprietary software raid" (and the author's recommendation to just use Linux software RAID since it's programmed better, hehe):
http://www.linuxmafia.com/faq/Hardware/sata.html
Indeed, Promise doesn't do well against even Windows software RAID in the CPU usage department, as per this article at ars (admittedly it's old and uses slow drives, though):
http://arstechnica.com/reviews/4q99/fasttrak66/fasttrak66-2.html
Do remember when checking out CPU usage, however, that it's not RAID's CPU usage vs. 0% you should be comparing, but rather RAID's CPU usage vs. the CPU usage of doing the same operation without RAID.
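If you want to eyeball that yourself, something like this rough sketch works (my own quick hack, nothing to do with the ars test; testfile.bin and the 512MB size are just placeholders): run it once with the file on the RAID array and once on a plain drive, then compare the two CPU percentages against each other.

# Rough CPU-cost check: write a big file sequentially and report how much
# CPU time this process burned relative to wall-clock time.
import os
import time

TARGET = "testfile.bin"   # point this at the volume you want to test
SIZE_MB = 512             # write 512 MB in 1 MB chunks
chunk = b"\0" * (1024 * 1024)

wall_start = time.time()
cpu_start = time.process_time()   # user + system CPU seconds for this process

with open(TARGET, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())          # make sure it actually hits the drive(s)

cpu_used = time.process_time() - cpu_start
wall = time.time() - wall_start

print("%d MB in %.1f s (%.1f MB/sec), CPU busy %.1f%% of the run"
      % (SIZE_MB, wall, SIZE_MB / wall, 100.0 * cpu_used / wall))
os.remove(TARGET)

The interesting number is the difference between the two runs, not the raw percentage (and driver work done in interrupt context won't even show up here, so treat it as a floor).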
>The clock speed of PCI is 33MHz. What speed does the IDE/SATA bus transmit at on your mobos?
OK, PCI is a 32-bit bus, which is 4 bytes wide. So it theoretically tops out at 4*33.3=133.3 MB/sec (note these are "decimal" MB, like most numbers here, since most of that figure comes from the M in MHz ^^). The popular onboard ICH5R SATA RAID has 150MB/sec of bandwidth for each drive, but only a 266MB/sec interconnect between the north and south bridges.
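Same arithmetic in script form, in case anyone wants to plug in their own bus (decimal MB throughout; the 150 and 266 figures are the ones mentioned above):

# Theoretical ceilings for the buses mentioned above (decimal MB/sec).
pci_width_bytes = 4            # 32-bit PCI
pci_clock_mhz = 100.0 / 3      # 33.33... MHz
pci_peak = pci_width_bytes * pci_clock_mhz
print("Shared 32-bit/33MHz PCI bus: %.1f MB/sec" % pci_peak)   # ~133.3

sata_per_drive = 150           # SATA 1.5Gb/sec link per ICH5R port
hub_link = 266                 # ICH5R <-> northbridge interconnect
print("ICH5R per-drive SATA link:   %d MB/sec" % sata_per_drive)
print("ICH5R hub interconnect:      %d MB/sec" % hub_link)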
WinBench's beginning transfer rate is one number where this is noticeable. On my machine a single 73GB Cheetah 15k.3 gets 76 MB/sec, while two of them in Adaptec RAID0, but on the PCI bus, get 119. Two WD740 Raptors on ICH5R get 143, however, which comfortably beats my drives. That last number is from this review:
http://www.extremetech.com/article2/0,1558,1541671,00.asp
I know StorageReview reports 76.4 for a single 73GB 15k.3 Cheetah, so at least for that number you know my testing is OK since they are close. ^^
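Plugging my numbers into those ceilings makes the gap obvious (the 76 and 119 are my measurements from above, the rest is just arithmetic):

# Where a 2-drive Cheetah array *should* land vs. where it actually lands.
single_drive = 76.0                  # measured single 15k.3, MB/sec
pci_ceiling = 133.3                  # shared 32-bit/33MHz PCI bus
ideal_raid0 = 2 * single_drive       # ~152 MB/sec if the bus kept up
expected = min(ideal_raid0, pci_ceiling)
measured = 119.0                     # what WinBench actually reports for me

print("Ideal 2-drive RAID0: %.0f MB/sec" % ideal_raid0)
print("PCI-limited ceiling: %.0f MB/sec" % expected)
print("Measured on PCI:     %.0f MB/sec (the rest is bus overhead/sharing)" % measured)

Meanwhile the Raptors' 143 on ICH5R is still well under the 266 hub link, so they aren't bumping into anything yet.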
So even with two drives and nothing else on the PCI bus taking up its shared bandwidth, you can lose a little bit of performance. Your drives are slower, however, so it's less of an issue. Also, serious RAID users need more than 2 drives, so ICH5R isn't much help there. I'll link my WB transfer-rate graphs below just for kicks. It's so sad seeing that array top out due to the bus limitation, isn't it (it stops early since that's when I took the screenshot ^^)?
http://pics.atofftopic.com/Images/wai/raid_rate.gif
http://pics.atofftopic.com/Images/wai/single_drive_rate.gif
edit: some cleanup, that's the longest post I'll ever do again...