Originally posted by: Nothinman
What about RAID 5? Is it normal for it to eat up so many resources when it's software RAID?
It requires parity calculations for every write, which take CPU time, but on a modern CPU it shouldn't be that much of a hit.
I'm doing rm -rf * in a folder with 1TB worth of junk data and the load is skyrocketing. It's at 3.75 now.
Load is a fairly worthless performance indicator, especially since on Linux, processes waiting on I/O are included in the load calculation.
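If you want to see what's actually driving the number, here's a rough sketch in Python (not from the original thread): Linux counts tasks in uninterruptible "D" sleep, which usually means waiting on disk, toward the load average even though they use no CPU.

import glob

# Show the 1-minute load average next to the number of tasks currently in
# uninterruptible sleep ("D" state, usually blocked on disk I/O). On Linux
# those tasks count toward load even though they are not using the CPU.
with open("/proc/loadavg") as f:
    load1 = f.read().split()[0]

d_state = 0
for path in glob.glob("/proc/[0-9]*/status"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("State:"):
                    if line.split()[1] == "D":
                        d_state += 1
                    break
    except OSError:
        pass  # process exited while we were scanning

print(f"1-min load: {load1}, tasks in D (I/O-wait) state: {d_state}")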
A lot depends on your file system. Removing files is not going to be fast on RAID 5, because it typically entails clobbering the file's inode: a small, (fairly) random write. A small random write on RAID 5 has to read the data being clobbered along with the parity, recompute the parity, then write the new data and the new parity. Hence, the time required is something like:
T(smallW) = Max( ReadDiskA, ReadDiskB ) + Tcpu + Max( WriteDiskA, WriteDiskB )
In the long run, you are going to be limited by your slowest disk in this configuration (which appears to be sdc, as posted above). Tcpu is going to be absurdly small compared to the disk access times, and the write time is likely to appear small because writes can be queued, at least when you're not doing tons and tons of writes (unlike the rm -rf case).
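To put rough numbers on that formula (purely illustrative latencies, not measurements from this array), here's a tiny Python model:

# Model of the small-write formula above, with made-up latencies. A random
# access on a 7200 rpm disk is on the order of 10 ms, while the parity math
# is microseconds on a modern CPU, so the two disk round-trips dominate.
def small_write_time(read_a, read_b, t_cpu, write_a, write_b):
    return max(read_a, read_b) + t_cpu + max(write_a, write_b)

# e.g. ~10 ms reads, ~0.01 ms of parity XOR, ~10 ms writes
print(small_write_time(10, 9, 0.01, 10, 11), "ms per small random write")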
In other words, small-write performance sucks on RAID 5. It's often orders of magnitude slower than on RAID 0/1, because neither of those RAID levels needs to read data off disk before clobbering it.
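For what it's worth, the parity recomputation itself is just XOR. Here's a sketch of the read-modify-write for one chunk (standard RAID 5 parity math, not the actual md driver code):

# RAID 5 small-write update for a single chunk: read old data and old parity
# (the two reads in the formula above), cancel the old data out of the parity,
# fold the new data in, then write both new data and new parity back.
def raid5_small_write_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

old_data   = bytes.fromhex("00ff1020")
old_parity = bytes.fromhex("aa55aa55")
new_data   = bytes.fromhex("01020304")
new_parity = raid5_small_write_parity(old_data, old_parity, new_data)
print(new_parity.hex())   # both new_data and new_parity must hit the disks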
Large writes are another matter entirely. If your software RAID implementation is worth its salt, you should achieve write bandwidth equal (or close) to the slowest disk's bandwidth multiplied by the number of data disks in the array. But again, that is for large writes only.
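As a back-of-the-envelope example (assumed per-disk speeds, not benchmarks from this box):

# Best-case full-stripe write bandwidth for RAID 5: one disk's worth of every
# stripe is parity, so only n-1 disks carry data, and the stripe as a whole
# moves at the pace of the slowest member.
def raid5_large_write_bw(disk_bw_mb_s):
    return min(disk_bw_mb_s) * (len(disk_bw_mb_s) - 1)

# e.g. four disks that each manage roughly 70-80 MB/s sequential
print(raid5_large_write_bw([80, 75, 70, 78]), "MB/s best case")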