Cheap HDDs for server

Haden

Senior member
Nov 21, 2001
578
0
0
I'm going to upgrade our file server (we're running out of disk space), but I'm not sure which HDD to choose (currently there's a 60GB 7200rpm Maxtor and a 120GB 5400rpm Maxtor).

The IBM 120GXP seems to be a very good choice (very fast and inexpensive), but I'm worried about IBM saying that 120GXPs should never run more than 8 hours a day.
Does anyone have experience with these? Do they tend to fail?
Any suggestions for the best IDE disks for a server? (SCSI is way too expensive to afford :))

BTW, I thought that running an HDD 24/7 is actually easier on it than several boots per day. Am I wrong here?

Thanks
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
The words Cheap and Server shouldn't appear in the same sentence. That's money saved now that costs you double or more later.

If you're in the US, consider the www.storagereview.com / HyperMicro deal of a free U160 SCSI card when you purchase any 15k rpm SCSI drive.

 

amdskip

Lifer
Jan 6, 2001
22,530
13
81
If it's a real server that's used a lot, run SCSI drives, because IDE drives are not made for the abuse.
 

blstriker

Golden Member
Oct 22, 1999
1,432
0
0
I use 2x Maxtor 7200rpm 80GB HDs in a RAID 1 and they work great. I forget which website, but one of the pretty large hardware sites just built a new webserver using 4x 120GB 8MB-cache WD 7200rpm drives. Those are pretty high-performance drives. They're not cheap compared to other IDE drives, but very cheap compared to SCSI drives.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Wait till they start failing. They will. Much much faster than SCSI drives. There goes the money saved, and if you've been equally cheap on the backup solution, your data with it.
 

blstriker

Golden Member
Oct 22, 1999
1,432
0
0
Why would they fail faster if the rated operating life is almost the same? Just curious. Except maybe for IBM ;)
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Because over the lifetime of hard disk surfaces, weak spots do appear. SCSI drives have spare areas, and firmware that automatically detects those weak spots and quietly swaps in the spares - on reads and writes, no data lost. IDE drives, in turn, have neither the brains nor the spares to do that, so every read or write problem kicks through to the operating system and application programs. Not good. With an IDE drive, one bad sector is a problem; a SCSI drive would have to collect many thousands before running out of spares.
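The transparent remapping described above can be sketched as a toy model (hypothetical Python, not real drive firmware; the sector counts and remap table are made up for illustration). The point is that the host only ever sees an error once the spares run out:

```python
# Toy model (NOT real firmware) of in-drive spare-sector remapping:
# the drive keeps a remap table, so a logical sector that goes bad is
# quietly redirected to a spare and the host never notices.

class ToyDrive:
    def __init__(self, sectors, spares):
        self.data = {lba: None for lba in range(sectors)}
        self.spares = list(range(sectors, sectors + spares))  # spare locations
        self.remap = {}          # logical LBA -> spare location
        self.bad = set()         # physical locations known to be weak

    def _phys(self, lba):
        return self.remap.get(lba, lba)

    def write(self, lba, value):
        if self._phys(lba) in self.bad:          # weak spot hit on a write:
            if not self.spares:
                raise IOError("out of spares")   # only NOW does the host see it
            self.remap[lba] = self.spares.pop(0)  # remap transparently
        self.data[self._phys(lba)] = value

    def read(self, lba):
        return self.data[self._phys(lba)]

d = ToyDrive(sectors=8, spares=2)
d.write(3, "payload")
d.bad.add(3)            # sector 3's surface develops a weak spot
d.write(3, "payload2")  # rewrite triggers a silent remap to a spare
print(d.read(3))        # payload2 - the host never saw an error
```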

Then, SCSI drives have seriously longer MTBF figures, even though their MTBF is calculated assuming 24/7 use, which is not the case for IDE drives.

Finally, thanks to their intelligent firmware and SCSI's advanced command processing that allows the drive to collect, sort and complete incoming requests in any order the drive pleases, SCSI drives can (and do) optimize their work to minimize head assembly movement and active head switching. This makes for both MUCH speedier responses and dramatically less mechanical wear on the head assembly. IDE drives can only process one command at a time.
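The head-movement argument can be illustrated with a toy calculation (hypothetical cylinder numbers, not a real scheduler; just the idea behind reordering queued requests versus servicing them strictly first-in, first-out):

```python
# Toy illustration of why reordering queued requests (the idea behind
# SCSI command queuing) beats IDE's one-command-at-a-time FIFO model:
# sorting requests by position slashes total head travel.

def head_travel(start, requests):
    """Total distance the head moves servicing requests in the given order."""
    pos, total = start, 0
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

queue = [500, 20, 480, 40, 460]             # cylinders, in arrival order
fifo = head_travel(0, queue)                # serviced as they arrive: 2300
reordered = head_travel(0, sorted(queue))   # elevator-style sweep: 500

print(fifo, reordered)  # 2300 500 - same work, a fraction of the movement
```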

regards, Peter
 

Haden

Senior member
Nov 21, 2001
578
0
0
I understand SCSI would be much better, but there's no way to get the ~200GB we need using SCSI: just too expensive.

So we'll try slower 5400rpm drives (hoping they tend to fail less) and consider a small SCSI disk for the system/critical data only.

Thanks.
 

rayster

Member
Oct 29, 2002
47
0
0
Originally posted by: Peter
Because over the lifetime of hard disk surfaces, weak spots do appear. SCSI drives have spare areas, and firmware that automatically detects those weak spots and quietly swaps in the spares - on reads and writes, no data lost. IDE drives, in turn, have neither the brains nor the spares to do that, so every read or write problem kicks through to the operating system and application programs. Not good. With an IDE drive, one bad sector is a problem; a SCSI drive would have to collect many thousands before running out of spares.
IDE drives do detect and isolate bad sectors. Whenever you run ScanDisk or any other disk-maintenance utility, if a bad sector is discovered, the data (if any) is written elsewhere, the bad sector is marked in the FAT, and no data is ever written back to that area. SMART IDE drives do this automatically and also test themselves to warn of imminent failure. The SCSI controller/bus is more sophisticated and will give better throughput, but with the price differential, you could easily do a RAID 1 with a couple of large IDE drives for less money. The combined MTBF would almost certainly be equal to or better than that of a single SCSI drive. I've built low-end servers this way for a long time and never had one crap out on me. If you're running a DB server, or anything where performance is hypercritical, or something that's going to take a beating from users (60-75+ clients, depending on usage), SCSI is the way to go. For a workgroup server doing file & print, you're fine with 2 IDE drives and a good backup/recovery strategy.

 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
You're proving my point ... with IDE drives, the host system gets to see the bad sectors, and you need to run utility software to make the operating system avoid them. Besides, if a write access hits the bad spot before it is mapped out, your data will be gone. SCSI drives take care of this internally without any intervention, particularly not losing data when detecting a weak spot on writes.

And no, SMART hard disks do not automate any of this either. SMART drives just monitor their operating parameters, and alert the system if they go out of bounds. This is useful as an early warning on sudden failures, but not for the normal aging of a HDD.

MTBF? If you run several units of a given MTBF in parallel, the total MTBF will _decrease_! Mirroring does little to improve uptime; you've got to have RAID 5 for that. And even if you do, you'll soon find yourself swapping out HDDs a lot more often than with SCSI (and besides, IDE drives are not hot-swappable, which moves that requirement onto the RAID card, making the latter more expensive). You've got to look at the long-term cost. There is a reason SCSI hasn't died out.
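For the non-redundant case this claim is just arithmetic: assuming independent, exponentially distributed failures, an array that dies when ANY single drive dies has its MTBF divided by the drive count. A back-of-envelope sketch (the 500,000-hour figure is made up for illustration):

```python
# Back-of-envelope check, assuming independent exponential failures:
# if ANY single drive failure takes the array down (striping, or just
# spreading data over more spindles), combined MTBF = MTBF / n.

def mtbf_nonredundant(mtbf_single_hours, n_drives):
    """MTBF of an array that fails when any one of n drives fails."""
    return mtbf_single_hours / n_drives

print(mtbf_nonredundant(500_000, 4))  # 125000.0 - four drives, a quarter the MTBF
```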

Backup/recovery is another topic, not to be missed either.
 

rayster

Member
Oct 29, 2002
47
0
0
Is SCSI better? Sure is. Is SCSI expensive? Sure is. The parameters of the original question included cost and took SCSI off the table as an option. I was trying to point out that there are viable alternatives to SCSI that are nearly as good. You're going to take a trade-off for the reduced cost somewhere. A lot of clients are willing to run ScanDisk daily and use that to safeguard data on a low-end system if it saves money up front.

To be honest, I'm not 100% sure on the SMART drives (never had one alert me, never had one fail), so I'll take your word for it.

And, using Bellcore failure-rate models, running a redundant pair of drives in parallel will increase the MTBF of the drive array to 150% of the MTBF of an individual drive. MTBF decreases when the parts are non-redundant. Losing a single drive would be a non-critical failure, and the net result is an increased life expectancy for the system and the data.
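The 150% figure checks out under the usual textbook assumptions (independent, exponentially distributed failures, no repair): the expected time until the last of n drives fails is the single-drive MTBF times the harmonic sum 1 + 1/2 + ... + 1/n, which for a mirrored pair is exactly 1.5x. A quick sketch (the 500,000-hour MTBF is made up for illustration):

```python
# Rough check of the 150% figure for a mirror: assuming independent,
# exponentially distributed failures and no repair, the pair survives
# until BOTH drives die, and the expected time for the last of n drives
# to fail is MTBF * (1 + 1/2 + ... + 1/n).

def mtbf_all_fail(mtbf_single_hours, n_drives):
    """Expected time until every one of n parallel drives has failed."""
    return mtbf_single_hours * sum(1.0 / k for k in range(1, n_drives + 1))

print(mtbf_all_fail(500_000, 2))  # 750000.0 - exactly 150% of a single drive
```

Note this ignores repair: with prompt replacement of the failed drive, a mirror's effective MTBF is far better still.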
 

everman

Lifer
Nov 5, 2002
11,288
1
0
Maybe a RAID array would be better, as it would cost less and still offer high performance and reliability (depending on the setup). Maybe use RAID 5.