defenestrate.. heh
You'll get the best performance out of 10,000 or 15,000 RPM 9 GB SCSI drives, but they're expensive. SATA's got the second-best performance, followed by IDE.
The nicest thing about SCSI is that it's hot-pluggable if you have a good controller. If a drive fails, you just unplug it and put in another one... the controller starts rebuilding the array without any downtime... the OS doesn't even get involved. I don't think this is the case with SATA, and I know it's not with IDE.
IDE and SATA drives tend to be of lower quality than SCSI drives, as IDE and SATA are meant for consumer devices, not servers (with the exception of the WD Raptor, which I'd term a workstation drive). You're more likely to have a failure with an IDE or SATA drive, but they're also cheaper to replace.
If you're going to have immediate access to the server, you probably won't need a hot spare drive (a drive whose sole purpose is to take over when another drive fails, so the array gets rebuilt to full redundancy without you having to intervene). If you're any distance away from the server, or it'll take you a couple of weeks to get there, or whatever, then you may want to consider one... In RAID 5, if a second drive dies before the first failure has been replaced and rebuilt, you lose all of your data; a hot spare kicks off that rebuild immediately, so the window where you're exposed is much smaller.
If you need loads of space, the only way to go is IDE. If you want the best reliability, and the best speed, but at a higher cost, go with SCSI. The middle ground is SATA.
If you managed to get your hands on 4x 9 GB SCSI drives, I'd probably set it up as a three-drive RAID 5 with a hot spare. You'll get 18 GB of space, which should be enough for your database, plus you'll be able to have two drives die (one after the other, with time for the rebuild in between) and your server will still keep ticking. If you need more space, and have immediate access to the server to replace a failed drive, go with a four-drive RAID 5 to get 27 GB of space. (The capacity math is sketched below.)
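In case those numbers look arbitrary, here's a rough Python sketch of the RAID 5 capacity arithmetic. It's just the formula, not tied to any particular controller, and the function name is my own:

    # Rough RAID 5 capacity arithmetic (a sketch, not any controller's API).
    # Usable space in RAID 5 is (active drives - 1) * drive size;
    # a hot spare sits idle, so it adds nothing to capacity.
    def raid5_capacity_gb(total_drives: int, drive_size_gb: float, hot_spares: int = 0) -> float:
        data_drives = total_drives - hot_spares
        if data_drives < 3:
            raise ValueError("RAID 5 needs at least 3 active drives")
        return (data_drives - 1) * drive_size_gb

    # 4x 9 GB drives, one kept as a hot spare -> 18 GB usable
    print(raid5_capacity_gb(4, 9, hot_spares=1))   # 18
    # All 4 drives in the array, no spare -> 27 GB usable
    print(raid5_capacity_gb(4, 9))                 # 27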
Edit:
In case anyone's wondering, my development server at home has 4 drives... two RAID 1s - 2x120 GB and 2x80 GB. Every night, the important data from the larger RAID 1 synchronizes with the smaller RAID 1 (counting? that's 4 drives that the important data's on so far...). I then use rsync to synchronize my offsite copy (also on a 2x80 GB RAID 1... 6 drives); a rough sketch of that kind of job is below. The CVS data also gets checked out to my laptop nightly, so my most important data is copied to 7 drives at any one time, two offsite and one on a laptop that I carry with me. And yes, out of this 7 drive setup, I have had up to 3 drives dead at once before I could replace any.
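For anyone curious, the nightly offsite sync is nothing fancy. Here's a rough Python sketch of that sort of cron job; the hostname and paths are placeholders rather than my actual setup, and the rsync flags are just the standard archive/compress/delete ones:

    #!/usr/bin/env python3
    # Sketch of a nightly offsite sync job (run from cron).
    # The source path and offsite host below are made up for illustration.
    import subprocess
    import sys

    SOURCE = "/data/important/"                              # important data on the big RAID 1
    OFFSITE = "backup@offsite.example.com:/backup/important/"  # offsite RAID 1 box

    def sync() -> int:
        # -a: archive mode (preserve permissions, times, etc.)
        # -z: compress over the wire
        # --delete: mirror deletions so the offsite copy matches the source
        result = subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", SOURCE, OFFSITE])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(sync())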