I tried posting a response waaay up above when the post count was ~10, but oh well, I'll try again!
first of all, if I had the money (or the patience to save up what I have, rather than spend it on CPUs etc.), I would immediately go SCSI.
think of the advantages this way.
CPUs are now running on the order of a billionth of a second per clock cycle. in fact, the newer ones (say a P4 at 1.5 GHz) run so fast that the clock cycle is measured in TRILLIONTHS of a second (about 667 trillionths, if my math is right).
now, we go to hard drives. their access times are measured in milliseconds, or thousandths of a second, so the CPU has to wait an ETERNITY just for the data to even start coming.
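just to put numbers on that gap, here's a quick back-of-the-envelope sketch (the 1.5 GHz clock and the ~8 ms average seek time are my assumed figures, not specs for any particular drive):

```python
# rough estimate: how many clock cycles does a CPU burn
# waiting out a single disk seek?

cpu_hz = 1.5e9        # assumed: P4 at 1.5 GHz
seek_s = 8e-3         # assumed: ~8 ms average seek time

cycle_s = 1 / cpu_hz             # length of one clock cycle
cycles_wasted = seek_s * cpu_hz  # cycles elapsed during one seek

print(f"one cycle  = {cycle_s * 1e12:.0f} trillionths of a second")
print(f"one seek   = {cycles_wasted:,.0f} clock cycles of waiting")
# roughly 12 million cycles spent twiddling thumbs per seek
```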
in theory, if hard drive access times were in the same range as a CPU clock cycle, yet their transfer rates stayed the same (say 40 megs a second), the actual total throughput of the device would increase a good amount, because instead of spending massive amounts of time seeking, it would be transferring almost constantly.
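you can see this with a toy model where each request costs access time plus transfer time (the 40 MB/s rate and 64 KB request size are assumptions I picked for illustration):

```python
# toy model of effective throughput:
# time per request = access time + transfer time

transfer_rate = 40e6       # assumed: 40 MB/s sustained transfer
request_size = 64 * 1024   # assumed: 64 KB per request

def effective_throughput(access_s):
    """bytes per second actually delivered, given an access time."""
    time_per_request = access_s + request_size / transfer_rate
    return request_size / time_per_request

for access in (8e-3, 1e-6):  # ~8 ms seek vs. a hypothetical 1 microsecond seek
    mb_s = effective_throughput(access) / 1e6
    print(f"access time {access:.6f}s -> ~{mb_s:.1f} MB/s effective")
```

with an 8 ms access time the drive only delivers around 7 MB/s effective; shrink the access time toward a clock cycle and it delivers nearly the full 40 MB/s, even though the raw transfer rate never changed.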
of course, that's theory.
SCSI on the other hand is reality.
not only do you get lower CPU usage (with today's CPUs you don't notice IDE's overhead THAT much, but it's there), you get SCSI's capability to run more than 4 drives off of one controller card without a big bottleneck. with IDE, the two drives on a channel take turns; neither can be sending or receiving while the other is (it's basically time division multiplexing, if I remember right). with SCSI, a drive can get off the bus while it seeks, so all the drives can work simultaneously and actually utilize the bus's large theoretical throughput (with SCSI 3, that's 160 megs a second).
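here's a crude sketch of why taking turns hurts. the "IDE-like" case serializes everything; the "SCSI-like" case lets the drives seek in parallel while only the transfers share the bus (the drive count, seek time, and transfer numbers are all assumed for illustration):

```python
# toy model: 4 drives each servicing one request of seek + transfer.
# IDE-like: drives take turns, so seeks and transfers are fully serialized.
# SCSI-like: all drives seek in parallel (off the bus), then the
# transfers go one after another on the shared bus.

n_drives = 4
seek_s = 8e-3                  # assumed: 8 ms seek per request
xfer_s = 64 * 1024 / 40e6      # assumed: 64 KB at 40 MB/s

serialized = n_drives * (seek_s + xfer_s)   # one drive at a time
overlapped = seek_s + n_drives * xfer_s     # seeks overlap, transfers queue

print(f"take turns (IDE-like):     {serialized * 1000:.1f} ms for 4 requests")
print(f"overlapped seeks (SCSI-like): {overlapped * 1000:.1f} ms for 4 requests")
```

same drives, same bus speed, but overlapping the seeks gets the batch done in well under half the time, because the expensive part (seeking) happens on all 4 drives at once.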
and of course, because SCSI is a higher-performing standard than IDE, more is charged for it and its drives; the drives are designed with low access times in mind, and nothing currently in the same price range as IDE comes close to that speed.