<< You got the point there, without noticing 🙂 Those "onboard IDE RAID" mainboards use standard dual-channel PCI IDE chips,
and then layer (CPU-driven) firmware and drivers on top that do the mirroring and striping.
If you want RAID 5 for true redundancy et al, you need to have a separate RAID processor somewhere that handles
the drives transparently, so that the CPU sees a single mass storage unit. That's what these RAID SCSI cards do, they have
a number of standard SCSI chips on, but these aren't visible as SCSI controllers to the CPU, they're hidden behind a RAID
entity that distributes and reassembles the data that then go over the PCI bus.
There's also a massive difference in PCI load. Do mirroring with the former cheap, CPU-driven solutions and enjoy the
doubled PCI bus load. Not something you want when you're after performance. With real got-my-own-brain RAID solutions,
just the assembled data go over the PCI bus. >>
Or does it? I thought the RAID 0/1 controllers on the motherboards handled the striping/mirroring in hardware, so the stream was broken up at the controller. RAID 5 requires a processor and memory to calculate the parity information that is striped across the drives. I'm more worried about how the RAID controller connects to the motherboard, i.e. does it just take up a PCI connection, or does it connect to the southbridge through a separate link so it doesn't share bandwidth with other PCI devices? It probably wouldn't matter that much given realistic drive performance.
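For what it's worth, the parity math itself is simple: each stripe's parity block is just the byte-wise XOR of the data blocks, so any single lost drive can be rebuilt from the survivors. A minimal sketch (drive count and block contents are made up; a real controller does this in silicon on every write, which is why it wants its own processor):

```python
# RAID 5 parity sketch: parity = byte-wise XOR of the data blocks in a stripe.
def parity(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe across three data drives (hypothetical contents):
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity(d0, d1, d2)

# Drive 1 dies? XOR the surviving blocks with the parity block to rebuild it:
rebuilt = parity(d0, d2, p)
assert rebuilt == d1
```

The XOR per byte is cheap, but it has to touch every byte of every write, and a partial-stripe write means reading old data and old parity back first; that read-modify-write cycle is where a dedicated RAID processor and cache earn their keep.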
<< Besides, what do you think onboard controllers connect to? Thin Air Warp Speed bus? No, it's just the same PCI bus that runs
to the slots as well. Whether you have it on a card or on the board, it's going to eat into the same PCI bus bandwidth.
(The exception being chipset-integrated components, and coming soon onboard native HyperTransport controllers.) >>
I've seen layouts of, say, the KT333 chipset, and its southbridge had a separate connection to a HighPoint RAID controller that was clearly distinct from the six PCI connections. The built-in IDE controller on the southbridge doesn't share PCI bandwidth (i.e. it's not on the PCI bus, and the southbridge<=>northbridge connection supports ample bandwidth for both), so it would make sense for an onboard RAID controller to connect more intimately to the southbridge than through a PCI link. Also, I've seen KT333 motherboards with onboard sound, onboard LAN, and onboard RAID plus 5 PCI slots. With a max of 6 PCI connections to the southbridge, two of those onboard devices must not use a PCI connection.
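A back-of-the-envelope comparison makes the bandwidth point concrete (the bus and link figures are the published specs as I understand them; the drive throughput is a hypothetical number, not from this thread):

```python
# Classic 32-bit/33 MHz PCI is shared by every device on the bus,
# while VIA's V-Link northbridge<->southbridge connection on the
# KT266A/KT333 is a separate, dedicated path.
PCI_BUS_MB_S = 4 * 33.3   # 4 bytes wide * 33.3 MHz ≈ 133 MB/s, shared
VLINK_MB_S = 266          # VIA 4x V-Link, dedicated

# A CPU-driven RAID 1 write crosses the PCI bus once per mirror,
# so the bus sees double the traffic of a single drive:
drive_write_mb_s = 40     # hypothetical sustained write of one IDE drive
mirror_bus_load = 2 * drive_write_mb_s

print(f"Mirrored write load on PCI: {mirror_bus_load} MB/s "
      f"of {PCI_BUS_MB_S:.0f} MB/s shared")
```

With realistic drive speeds of the day, even the doubled load fits on the shared bus, which is why it "probably wouldn't matter that much", but it does crowd out sound, LAN, and anything else sitting on the same 133 MB/s.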
*shrugs* If I were a baller, I'd just go dual Athlon with a 64-bit SCSI RAID card and 15K RPM drives, but I'm not. Actually, there are 64-bit IDE RAID cards, aren't there?
-ryan