On my ABIT NF7-S, the setup you just described has only the advantage of data redundancy (the latest BIOS revision for the RocketRAID 1540 adds RAID 5 support, though I doubt it performs the parity calculations at a reasonable rate; besides, you indicated six channels, which my card can't do anyway). Performance-wise, it'd be no better. The reason is simple: the PCI bus already limits the performance of my quad 6Y060L0s. A six-drive RAID 5 would perform exactly the same, because its only advantage would be higher maximum sustained throughput, and that's precisely what the bus caps. As it is with my current setup, I'm looking at 115MB/s straight across the entire surface of the array. Simply put, four Maxtor 6Y060L0s combined already max out the PCI bus, even at the slow end of the drives. My only option is to upgrade to a platform with 66MHz PCI, but that would mean switching to a board with slower memory performance and limited overclockability (and I wouldn't be changing my CPU), which would leave me with an unbalanced system.
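To put rough numbers on that ceiling, here's a back-of-the-envelope sketch. The ~115MB/s practical PCI figure is what I measured above; the per-drive minimum sustained rate of 30MB/s is an assumed figure for illustration only, not a benchmark of the 6Y060L0:

```python
# Back-of-the-envelope PCI bottleneck check.
# 32-bit/33MHz PCI: ~133MB/s theoretical, ~115MB/s practical sustained.
pci_theoretical = 33.3e6 * 4 / 1e6   # ~133 MB/s
pci_practical = 115                  # MB/s, observed ceiling

# Assumed minimum sustained rate per drive at the inner tracks
# (hypothetical number for illustration; real drives vary).
min_rate_per_drive = 30              # MB/s

def array_rate(drives, per_drive, bus_limit):
    """A striped array is capped by the slower of the combined
    drive rate and the bus ceiling."""
    return min(drives * per_drive, bus_limit)

print(array_rate(4, min_rate_per_drive, pci_practical))  # 115 -> bus-bound
print(array_rate(6, min_rate_per_drive, pci_practical))  # 115 -> still bus-bound
```

Under those assumptions, four drives already saturate the bus, so drives five and six add redundancy and capacity but zero throughput.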

Your choice of RAID controller is good, but I chose a (non-native) SATA RAID controller because of space constraints: going by your idea of six drives, I'd have a total of ten rounded cables piped about in my rig (six for the RAID 5, one for my SCSI, two for my optical drives, one for my floppy), and as it is, my case doesn't even have room for six to run about. With SATA, the thin wires cleared up a huge amount of space, precisely because of the sheer number of cables involved. The extra cable length also helped dramatically, as did the improved airflow. I have also noticed that signal integrity on rounded IDE cables is not perfect; data then has to be resent after failed integrity checks, which slows down performance. SATA doesn't suffer from this, so even with the double conversion (PATA-to-SATA then SATA-to-PATA; remember, my RAID controller's IC, an HPT374, and my hard drives are both technically PATA), performance actually increases over PATA with rounded cables. As for why I didn't go native SATA, the reason is two-fold: firstly, when I made my purchase, no native SATA RAID adapters were out yet, and secondly, waiting wasn't necessary because the PCI bus already limits the controller to about 115MB/s total anyway, below the limit of ATA/133 (the best my controller and drives can do). SATA/150 would buy me nothing: the PCI bus is already the bottleneck, and my drives are only PATA/133, so they couldn't make use of a 150MB/s interface anyway.
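The same bottleneck logic shows why SATA/150 wouldn't have helped. A sketch, where the ATA numbers are interface ceilings (not measured drive rates) and 115MB/s is the practical PCI figure from above:

```python
# Effective throughput is the minimum link in the chain:
# drive interface -> controller interface -> PCI bus.
def channel_ceiling(drive_iface, controller_iface, bus):
    return min(drive_iface, controller_iface, bus)

pci_bus = 115  # MB/s, practical 32-bit/33MHz PCI ceiling (shared by the whole card)

# ATA/133 controller with PATA/133 drives:
print(channel_ceiling(133, 133, pci_bus))  # 115 -> the bus is the wall
# Hypothetical SATA/150 controller, same PATA/133 drives behind bridges:
print(channel_ceiling(133, 150, pci_bus))  # 115 -> no gain at all
```

Whichever link you upgrade, the chain never runs faster than its slowest member, and on this platform that member is the PCI bus.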
All in all, what I'm considering as an upgrade (long term; tight on ca$h atm) is swapping out the WD400BBs for a pair of WD360GDs. My medium-size-files/internet-cache array can benefit from the quicker access times of the Raptors over the Caviars, plus, since this array is "merely" two drives, the PCI bus isn't limiting it yet, so I stand to gain massively in sustained throughput at both ends of the array. The loss of about 8GB in making that switch won't hurt me; going from 416GB total capacity (of which I currently use maybe 25%) down to 408GB won't hurt one bit, and the gain in performance would be well worth it. As a matter of fact, that idea is what brought me to this thread in the first place.
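For anyone checking the capacity math on that swap (using the nominal marketing-GB sizes of the drives):

```python
# Trading two 40GB WD400BBs for two ~36GB WD360GD Raptors.
total_before = 416             # GB, whole rig today
loss = 2 * (40 - 36)           # 8 GB given up in the swap
total_after = total_before - loss
print(total_after)             # 408 GB

# And how much of it is actually in use right now:
in_use = 0.25 * total_before
print(in_use)                  # ~104 GB, so 8GB of headroom is nothing
```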
But yes, I do try to maintain a balanced machine. It has two sticks of 256MB GeIL PC3500 Platinum memory. Because my board is revision 1.0 (I'm going out to get a revision 2.0 NF7-S, hopefully this weekend at the Tri-State Fairs), I can only run an FSB of 175 at my core speed of approximately 2280MHz (this board's FSB limit drops as the core speed increases, and with voltage increases as well). My core is a Barton 2500+, cooled with an Aeroflow.
My video card is an ATi/Sapphire Radeon 9500 128MB (non-Pro); I flashed it with Warp11's Radeon 9700 non-Pro-to-Pro BIOS to unlock its speeds and convert it to a 256-bit memory bus, then used RivaTuner+SoftMod to unlock the four extra pipelines (it now runs 8 pipelines just like a Radeon 9700/9700 Pro, instead of the 9500's four). Clocks right now are 385.7 core and 308.55 memory, with room to spare. I just received all my new VPU cooling components yesterday, and it took me quite a while to get everything assembled and installed. I have TweakMonster revision 4.0 RAMsinks on the frontside memory, CompUSA RAMsinks on the backside memory (the TweakMonsters don't have enough clearance underneath...), a ZM80A-HP cooler for the VPU, and finally dual TMD fans mounted onto the ZM80A-HP, one on each side. Before this setup, I could only get about 360 core and 289 memory stable. I stopped at 388/309 last night 'cause it was getting late (notice my last post was past 3:00am :-D) and I'll keep pushing it farther today.
I already broke 3550 in Code Creatures ("official score"; I've looked around the web and noticed that P4/3.06HT setups with 512MB RAM and a stock Radeon 9700 Pro are doing about 3000-3100). I use Code Creatures to test VPU/VRAM stability because I found it to be the first software to go bonkers when the core or memory is too high; it also appears to be the least CPU-dependent bench I have. For testing CPU stability while overclocking, I use C&C: Generals. At even the slightest hint of instability, that game randomly exits back to the desktop, long before anything else I have peters out.
Right now I have all 7 of my internal drives mounted within my Tt Xaser II A6000A. The Cheetah is in a Cooler Master CoolDrive 5.25" aluminum bay cooler, and the six 3.5" drive bays are occupied by the four 6Y060L0s and the two WD400BBs. The bottom drive cage is cooled by an 80mm LED fan mounted in the supplied purple fan cage; the top drive bay is cooled by a 92mm LED fan I rigged in. The Cheetah's cooler has a weasely 40mm rifle-bearing fan, but that's okay; the drive doesn't overheat anyway. I don't have a floppy drive installed, and when I need one for BIOS updates and whatnot, I open up my case and plug it in (I left the floppy cable in there, along with a free floppy power plug).
-Ed