1TB IDE RAID: need advice

danwatt

Junior Member
Apr 9, 2002
I am planning on building a 1TB (approx) IDE-RAID fileserver this summer, and I am building it on the cheap. I do not want to go through with it though if the setup is going to suck, so I need some advice.

First off, the specs: I will be using 5400RPM drives, since 80GB drives are the cheapest per GB at the moment. The setup will require 14 or 16 drives (16 if I go with software RAID-5, more on this later) and 2 controller cards: the HighPoint R404, which supports 8 drives. The rest of the computer will be pretty average, with a 1.2-1.6 GHz CPU (probably Athlon XP), 512MB RAM, and a Gig-E network card.

Why 5400 RPM drives? 1) They are cheap, and 2) PCI speed limitations. With 4 drives in a 7200RPM RAID-0 setup, the drive throughput pretty much maxes out the PCI bus; make that 7 or 8 drives if using 66 MHz (or 64-bit) PCI. Well, I am going to be using 14 (or 16) drives, and I feel that that many will definitely hit a few limits in the system. Now I know that the R404 may be a bad choice, since it gets 8 drives by putting 2 drives on each channel, and that is not ideal for RAID. If someone can back that up and give me a better alternative, please let me know. The current plan involves 7 x 2-drive RAID-0 arrays, 8 x 2 if using RAID-5 in software.

Now another concern: I plan on implementing RAID-5 in software (Windows 2000 Server), which will need the extra 2 drives (1 array) to bring the system up to 1TB (remember that in RAID-5 you essentially lose a disk to parity, and in this case, lose an array). My concern here is: how fast is software RAID-5? Will 1.2 GHz be able to handle this without slowing down throughput too much?

Finally, power! How much power is required to run 14 or 16 5400RPM drives?

So, to sum it all up:
1) With 14 or 16 5400RPM drives, will the throughput max out the PCI bus?
2) Is there a better RAID-0 card out there that isn't too expensive? Or is the R404 just fine for 8 drives (given the problem of having 2 drives per cable)?
3) How much of a slowdown will software RAID-5 introduce into the system, given that the CPU is pretty much dedicated to this task?
4) How big of a power supply is necessary, and would I be better off going dual PSU or redundant? (The system does not need to be THAT failsafe though.)
5) Any general concerns / advice?

Finally, right now, it's looking like the cost of disks + cards is about $1500 for 14 drives, or $1700 for 16 drives.
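To put rough numbers on question 1 and the RAID-5 capacity math, here's a back-of-envelope sketch in Python. The per-drive sustained rate and the bus figures are assumptions (theoretical maxima; real sustained numbers will be lower), not measurements of this hardware:

```python
# Theoretical peak bandwidth of the common PCI flavors of the era.
PCI_BUS_MB_S = {
    "32-bit/33MHz": 133,   # standard desktop PCI
    "64-bit/33MHz": 266,
    "64-bit/66MHz": 533,   # server boards
}

def aggregate_throughput(drives, mb_s_per_drive=30):
    """Combined throughput if every drive streams at once.
    ~30 MB/s is an optimistic sustained rate for a 5400RPM drive (assumed)."""
    return drives * mb_s_per_drive

def raid5_usable_gb(drives, drive_gb, hot_spares=0):
    """RAID-5 usable capacity: one drive's worth goes to parity;
    hot spares sit idle until a rebuild."""
    active = drives - hot_spares
    return (active - 1) * drive_gb

for bus, limit in PCI_BUS_MB_S.items():
    need = aggregate_throughput(16)
    verdict = "OK" if need <= limit else "bus-limited"
    print(f"{bus}: bus {limit} MB/s vs {need} MB/s from 16 drives -> {verdict}")

print(raid5_usable_gb(16, 80))                # 1200 GB usable
print(raid5_usable_gb(16, 80, hot_spares=2))  # 1040 GB with 2 hot spares
```

So 16 drives only saturate a plain 32-bit/33 MHz bus; a 64-bit slot has headroom even at these pessimistic rates.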
 

Evadman

Administrator Emeritus, Elite Member
Feb 18, 2001
Answers to the Q's I know...

#1 With 14 drives, assuming ATA/100, the theoretical bandwidth used will be 1400 MB/s, which is far more than a standard 32-bit/33 MHz PCI bus can take (133 MB/s, if I remember correctly). A move to 64-bit PCI would be needed (Tyan Tiger or equivalent).
#2 I have little knowledge here.....
#3 No exact #'s here, but I would think you would be ok. ( according to the RAID article posted here on AT )
#4 I would get at a minimum a 400 watt, probably a 500. 14 drives are going to pull some serious wattage (25-40 W each?). That's a huge chunk of change.
#5. Yes,...... That's a lot of MP3's :p
 

Nothinman

Elite Member
Sep 14, 2001
I would also say be prepared to buy at least 3 extra disks. At the rate I've seen IDE disks come in dead, or die soon after going into use where I work, you'll be replacing a few =)
 

danwatt

Junior Member
Apr 9, 2002
Well, I am considering having a "hot spare" set available, meaning either I need 18 drives in the system or I just reduce the total size by 160GB. Windows 2000's RAID-5 supports hot spares (it just won't rebuild automatically: you have to tell it to rebuild).

One thing though, and this is because I have never had a "serious" drive failure: what does it take for a drive to be rendered "useless", short of a head crash? If a bad sector or something occurs on the disk, can't something like ScanDisk recover it? Or can't you just wipe the drive and rebuild the array onto it? If someone could clear that up, that would be great, since I have never seen this fully explained in any article I have ever read.
 

RU482

Lifer
Apr 9, 2000
With that much equipment, I'd recommend the largest supply you can find, and probably a redundant supply in case you burn one up.

Check Pricewatch, I saw an Enermax 650W ATX supply for a little over $100
 

Nothinman

Elite Member
Sep 14, 2001
<< Well, I am considering having a "hot spare" set available, meaning either I need 18 drives in the system or I just reduce the total size by 160GB. Windows 2000's RAID-5 supports hot spares (it just won't rebuild automatically: you have to tell it to rebuild). >>

I hope you don't mean you plan on hot-swapping the drive because that can blow all the drives on the wire and the controller because IDE wasn't designed for that. If you do want that, you'll need more expensive hardware.

<< What does it take for a drive to be rendered "useless", short of a head crash? If a bad sector or something occurs on the disk, can't something like ScanDisk recover it? Or can't you just wipe the drive and rebuild the array onto it? If someone could clear that up, that would be great, since I have never seen this fully explained in any article I have ever read. >>

One bad sector usually means more will come soon; low-level formatting or zeroing the drive may fix it for a while, but it'll go eventually.
 

danwatt

Junior Member
Apr 9, 2002
Ahhh!! This is the second time today this forum has deleted a reply I have posted.... Here we go again....

"Why?" I do video editing for school organizations and for personal projects, and quite frankly 100GB (what I have dedicated right now) isnt quite enough.

As for maxing out the PCI bus... I was just curious to know if it WAS that fast. I don't really need to max it out - max throughput for video capture (the most strenuous operation, besides moving files) is about 3MB/s for DV, which is what I will be doing most of the time. Yeah, a single drive would do for this, but the more "free" speed I get, the better, especially if I DO have to move files over the network (read: GigE, not 100mbit).

Hot swapping - I meant hot spares: drives that are available all of the time to rebuild the array onto. But, from my past experience with people I know who have RAID, IDE RAID can be hot swapped, though maybe only on higher-end cards. I know someone with a Promise SuperTrak 100, and he has removed drives (no drive activity, mind you) and put them back in while the system was running. Granted, it would be moronic to do this to an active drive in any situation. But the point of "hot swapping" is that the drive that has failed has no activity, so it could be swapped. Correct me if I am wrong.....
 

Sahakiel

Golden Member
Oct 19, 2001
I figure with the amount of time, effort, and money you're gonna pour into this thing, you might as well go with quality. Get about 7 of those Seagate Barracuda 180s and stuff them into a couple of SCSI towers (one for each channel). Hook 'em up to a quality SCSI RAID controller (well worth it, considering how RAID 5's XOR calculations will pretty much monopolize any CPU). Pop the array into a system built around something like a Tyan Thunder, because that's what you'll get when trying to find a board with 66 MHz 64-bit PCI (I think the Thunder is only 33 MHz 64-bit, though; could be wrong). Unless you don't give a damn about drive throughput and just want the capacity above all else, in which case you can do with 33 MHz 32-bit PCI or even ISA (not with a quality controller, though).
At any rate, if you seriously want to use software RAID 5, I'd say go with dual CPUs, because I don't think there are many CPUs that can handle both the OS and the parity calculations, especially if you have a lot of drives in the array.
 

J.Zorg

Member
Feb 20, 2000
As far as I know, the Promise TX4 only supports 4 drives.
I'm running a software RAID 5 under Linux and I have to run every HD on a different channel. But I wouldn't recommend a software RAID solution this big. Your I/O load will be so high that you just can't really use the storage.
I think this solution might be cheap, but you won't have any fun with it.
I would suggest an IDE hardware RAID 5. Get yourself a Promise SuperTrak and 6 x 160GB HDs. It has an Intel RISC chip and RAM on the controller, so you don't have to worry about rebuilding the array every time your computer crashes or a HD fails. Because if you have a software RAID, you always have to remember that your CPU has to do all the parity checking and rebuilding.
If you take the Promise SuperTrak (32-bit PCI) and those 6 x 160GB HDs, you will have 800GB of storage with a simple RAID 5 setup. If you want to have a hot-spare HD, you would be down to 640GB. I think this setup would save you a lot of trouble and won't cost you more money than your setup. If you want more space, you could add another controller, or maybe get a 3ware Escalade controller (64-bit PCI). It supports up to 8 HDs. I think it's about $450, plus 8 Maxtor 160GB drives at $270 each. That would be $2610 total, for 1120GB with a pure RAID 5 setup, or 960GB with one hot-spare HD.
Your setup will cost about 16 x $130 for the drives and maybe 2 x $160 for the controllers. That's about $2400 for a crappy homemade solution.
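Using the rough prices quoted above, the two setups actually come out close on price per usable RAID-5 gigabyte; the real argument for the hardware card is offload and convenience, not raw cost. A quick sketch (2002 prices, all approximate):

```python
def raid5_usable_gb(drives, drive_gb, hot_spares=0):
    # One drive's worth of capacity goes to parity; hot spares sit idle.
    return (drives - hot_spares - 1) * drive_gb

# (total cost in $, usable GB) per setup, from the figures in the post
setups = {
    "3ware Escalade + 8 x 160GB": (450 + 8 * 270, raid5_usable_gb(8, 160)),
    "2 controllers + 16 x 80GB":  (2 * 160 + 16 * 130, raid5_usable_gb(16, 80)),
}

for name, (cost, usable) in setups.items():
    print(f"{name}: ${cost} for {usable}GB usable (${cost / usable:.2f}/GB)")
```

That works out to roughly $2.33/GB for the 3ware setup versus $2.00/GB for the homemade one.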

This would be a pure hardware solution, and since it is a hardware setup, you won't need any software or much configuration. Just a few clicks in the BIOS and you are all done. Spend a bit more money and save yourself a lot of time and trouble. (Ever tried connecting 16 HDs with 8 crappy IDE cables in one case? ;) )
With this setup you would still have some room for further upgrades in the future, and you'll have a lot more performance. You also don't need much CPU speed for this setup... any system above 500MHz should be able to handle it. With multiple users you might want a dual-CPU setup, or a 64-bit PCI bus for the 3ware controller. You should also consider using dual NICs, since those 3ware controllers will give you a lot more than your 100mbit NIC can handle.

Here is a test of these controllers.
 

Nothinman

Elite Member
Sep 14, 2001
<< I know someone with a Promise SuperTrak 100, and he has removed drives (no drive activity, mind you) and put them back in while the system was running. Granted, it would be moronic to do this to an active drive in any situation. But the point of "hot swapping" is that the drive that has failed has no activity, so it could be swapped. Correct me if I am wrong..... >>

Sometimes it works; sometimes you fry both drives on the chain and the controller. It's your decision if you want to take that chance. You need a high-end IDE RAID card that is known to support hot-swapping, because those have special electrical components that power down the drives and channels individually to make it safe; otherwise you have a 50/50 chance.
 

danwatt

Junior Member
Apr 9, 2002
Hey, thanks for the info on the 3ware. With the 7180 and 8 160s, that's 1.12TB plus an extra drive as a hot spare. And it still comes in under $2000 for the storage solution (probably more like $1800 in a couple of months). As for throughput... I will be using Gig-E to connect it to 2 computers, and MAXIMUM throughput isn't my main concern... I need available bandwidth for DV capture (which really 1 drive can do) and possibly 3x-speed retrieval of DV footage (for moving it around and such), but I don't need this setup for RAW video or anything. But at the same time I do want RAID 5 (somewhere in the equation) for redundancy, and I want more than single-drive performance.
 

J.Zorg

Member
Feb 20, 2000


<< With the 7180 and 8 160s, that's 1.12TB plus an extra drive as a hot spare >>


I think you have not really understood how RAID 5 works. If you use 8 HDs with 160GB each, you will have 1120GB of storage, because parity information is stored equally over all the hard drives. You lose the size of one HD to the parity information: with N drives you have (N - 1) x HD_size = total storage available. If one drive fails, your RAID is still operational, because with the parity information on the other drives the controller can still recreate your data. If another drive fails after that, your data is lost.
A hot spare drive is an extra HD which the controller integrates into the RAID if one drive fails. So if one drive fails, and the RAID controller has finished rebuilding the array with the hot spare drive, you have a fully operational setup again. If another drive fails after that, your data is still intact; you just have to add 2 drives then.
If you use one hot spare drive, the size of your array is (N - 2) x HD_size.
With 8 drives you would have 6 x 160GB = 960GB.
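The rebuild described above works because RAID 5 parity is just the XOR of the data blocks in a stripe, so any one missing block can be recomputed from the survivors. A minimal sketch (stripe rotation omitted for simplicity):

```python
from functools import reduce

# Three "data drives", each holding a 2-byte block of one stripe.
data_blocks = [b"\x01\x02", b"\x0a\x0b", b"\xf0\x0f"]

def xor_blocks(blocks):
    """Byte-wise XOR across a list of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data_blocks)  # what the "parity drive" stores

# Drive 1 dies; XOR the surviving data blocks with the parity to rebuild it.
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_blocks[1]  # the lost drive's contents are recovered
```

Lose a second block before the rebuild finishes and the XOR no longer has enough information, which is exactly why the data is gone if two drives fail.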
 

snorks

Junior Member
Apr 12, 2002
Some thoughts to mull over:

Seriously think about SCSI and Hardware RAID. The reasons below will explain why.

MTBF, or mean time between failures: SCSI drives typically have an estimated 5x longer lifetime than IDE, and if you're talking 16 drives, you are going to need it.

Hardware RAID cards have on-board memory used as cache; this alone is enough reason to go SCSI. Hardware SCSI RAID cards also come in 64-bit/66MHz, which you will need without a doubt.

SCSI HDDs have faster spindle speeds and more cache (another BIG plus).

You don't have the IDE channel problem where, if one drive is being accessed, the other can't be.

On that note, I'll correct another user on PCI bus speeds. Standard old PCI slots (32-bit/33MHz) have 133MB/sec of bandwidth. 33MHz/64-bit slots (they look like normal PCI but are a bit longer) have 266MB/sec. 66MHz/64-bit slots (same as 33/64, but the key is at the top of the slot, not the bottom) have 533MB/sec. Some server boards have multiple PCI busses, usually running at 64/66, but typically they are dual-CPU boards (e.g. the Tyan 2460/2 boards have 33/64 (MP chipset), the Tyan 2466/8 have 66/64 (MPX chipset)). The more PCI busses the board has, the better, as you have more bandwidth. Personally, I would have 2 dual-channel RAID cards running on 2 separate 66/64 PCI busses, with 4 drives on each channel of the RAID cards. Something like the Adaptec 3210S card, times 2.

You seem to also be suggesting creating a lot of RAID 0 or 1 arrays and then making those into a RAID 5. This makes no sense, as you will be slowing the whole thing down even more, especially if those RAID 0 or 1 arrays are across 2 drives on the same channel: since you can only write to one at a time, you get no benefit from the array. Make one very large RAID 5 array if you go that way. The same goes for striping as well, since the more drives in the stripe, the faster your access!

On power :)

God, I just thought of another reason to go SCSI: power... You can set SCSI drives to spin up only when they are initialised by the card (delayed spin-up), which means you can get by with a smaller power supply.

Other than that, though, look at the spec sheet of the drives you get; it should be available from the manufacturer's website. This will give you the power required. Take the start-up power and multiply it by the number of drives you have (remember that HDDs use the 12V rail mostly, so that's what you'll need to look for on the PSU), then, depending on the system, add another 250-300 watts overall. Make sure you check each power rail and its maximum, or you might be in for a shock. Power-on is without a doubt the most stressful time for a PSU.
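The sizing rule above can be sketched as a quick calculation. The per-drive and system wattages here are placeholder assumptions, not spec-sheet numbers; substitute the real figures from your drives' datasheets:

```python
SPINUP_W_PER_DRIVE = 25   # 12V-rail surge at power-on (assumed)
REST_OF_SYSTEM_W   = 250  # CPU, board, RAM, fans, cards (assumed)

def psu_estimate(drives, headroom=1.2):
    """Size the PSU for the worst case: power-on, when every drive
    spins up at once. Adds 20% margin on top of the peak draw."""
    peak = drives * SPINUP_W_PER_DRIVE + REST_OF_SYSTEM_W
    return round(peak * headroom)  # whole watts

print(psu_estimate(16))  # 780 W budget: why delayed spin-up (or SCSI) helps
print(psu_estimate(14))  # 720 W for the 14-drive variant
```

Even with modest per-drive numbers, 14-16 drives push past a single ordinary 400-500W supply at spin-up, which supports the redundant/oversized-PSU suggestions earlier in the thread.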

RAID 5 in software will be doable, it won't hurt the CPU that much.

My question to you is: how do you plan to fit 14-16 HDDs in a case and keep them cool?

Remember that you have some fairly restrictive cable lengths with IDE (hey, another reason to go SCSI :)). With SCSI you could have external cases filled with fans/HDDs and cable them to the RAID card.

On the network front: depending on the motherboard you have, you will again be struggling for bandwidth. Gigabit cards are all 64-bit, some 33MHz, some 66MHz.

In all honesty (and speaking from experience), an IDE solution will give you a lot of trouble, if it is even possible with cable lengths etc.

Good luck and let us know how it goes. Incidentally flick me an ICQ or email if you need me to clarify anything.

 

J.Zorg

Member
Feb 20, 2000
snorks: It's all very convincing, what you're saying, and I would always buy a SCSI solution if I had the money. But you haven't talked about prices ;). And since danwatt wants a cheap solution, buying a SCSI solution is just out of the question, I think.
If you read my post, I recommended an IDE hardware RAID controller with 8 x 160GB drives. The 3ware Escalade is 64-bit/66MHz, for that bandwidth concern of yours. You only need 8 cables for 8 drives; it works in standard server towers. The speed shouldn't be a problem; I think they hold the speed record for IDE RAID with 160MB/s, and I think that's enough for a semi-professional RAID solution. It's a hardware RAID: you have an Intel i960 RISC chip onboard and up to 128MB of cache, the same as every SCSI RAID controller. As far as I know, the only differences between a pure SCSI and an IDE hardware solution are the cables, their length, and the HDs. SCSI disks are much better, faster, and more reliable, but they are damn expensive. And this is the point where you have to make some compromises...

 

Peter

Elite Member
Oct 15, 1999
Well if you buy half as many SCSI drives as you intend to buy IDE, the price difference will become smaller,
and the effective throughput will stay about the same, even if you use substantially fewer drives and
controller channels (because SCSI multi-drive setups make more efficient use of the channels they're on).

And what's been forgotten: SCSI drives are also inherently more reliable, because they use a "defect-free
interface" that automagically detects and replaces bad sectors internally with factory-reserved spares -
without the host ever noticing. No more lost write data, surface scans, or reformatting. Ever.
Finally, there's a version of the SCSI physical connector especially made for hot plugging - the SCA-80
connector. This has the SCSI signals, power lines, and ID jumpers (!) on one connector that mates in a
way such that there is no risk of electrically frying something.

Sure, the initial cost is higher, but in the medium to long run, it definitely pays off.

Final question: How are you going to backup that thing? Spend a few brain cells, and then lots of $$$ ...
the largest available budget tape drives are 120 GBytes (from OnStream) - above that, we're going
deeply into four- to five-figure space.

regards, Peter
 

Peter

Elite Member
Oct 15, 1999
PS: You should have your RAID controller and GBit Ethernet on 64-bit, 66 MHz PCI. Otherwise you'll
be too bandwidth limited to make all the neat storage work as fast as it could.
 

Shalmanese

Platinum Member
Sep 29, 2000
There does exist what sounds to me like a VERY impressive IDE RAID solution which has hardware RAID 5, is expandable up to 64MB of cache using standard SDR or EDO RAM (I forget which), and, most impressive of all, has 16 separate IDE channels onboard.

I have no idea where you can get it and I don't know what it is called, but I know it's around the $500 - $1000 mark, which is VERY good value for money when compared to SCSI.
 

Peter

Elite Member
Oct 15, 1999
Oh, $500 to $1000 buys you very neat single or dual channel SCSI RAID controllers ... remember,
you don't need such a pile of channels with SCSI because SCSI drives can share channel bandwidth
- and hot plug from and to a live channel - while IDE drives can't.
Twin U160 SCSI channel setup will run six to eight midrange or four to six high end
drives without choking, and give you about 300 MBytes/s PCI throughput, ideally.
(You remembered to leave some 150 to 200 MB/s for the GBit Ethernet, didn't you?)

You want to look at the product line of these guys:

http://www.icp-vortex.com

They offer a complete range of SCSI RAID solutions, including low range solutions that are
much less pathetic than Adaptec's.

Like that one:

http://www.icp-vortex.com/product/pci/rz64u160/8523rz/8523rz_e.html

This is 64-bit 66 MHz PCI (runs in 32-bit and/or 33 MHz slots too), RAID 0,1,4,5,10,
twin U160 (courtesy of the proven LSI 53c1010-66 chip), Intel RISC processor, SDRAM slot taking
one standard PC133-ECC module.

regards, Peter
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Before I forget it the 4th time in a row: With that huge amount of mass storage, your system is
definitely going to need much more RAM. 3 GBytes would be more appropriate than a measly 512
MBytes, even more wouldn't hurt but then we're definitely off any sort of affordable mainboards.

regards, Peter
 

lifeguard1999

Platinum Member
Jul 3, 2000
I am building an IDE RAID system at work. It uses the Promise UltraTrak100 TX8 ($1,700) and eight Western Digital 120 GB drives (8MB cache, 7200 RPM, for $1,600). It attaches to the computer via a SCSI interface and provides 960 GB of non-RAID disk space. The TX8 is capable of RAID 0, 1, 3, and 5. If you can live with less disk space, you can even keep back a drive as a hot spare. It is also capable of hot-swapping drives. Such a system for you would cost about $3,300. All this is plugged into a dual 2.2 GHz Xeon workstation with 2 GB of RDRAM and its own system disk (not included in the price).

Want even more space? The TX8 supports drives above 137 GB, though not ATA/133. Using the 160 GB 5400 RPM drives would get you 1.28 TB of disk space for about $3,400.

You could bump down to the Promise UltraTrak100 TX4 which holds only 4 disk drives (480 GB total disk space) for a total price of $1,700. Or make that 640 GB of disk space with the Maxtor drives for about $1,800.

Need more disk space, but at a reduced cost? Try the Promise SX6000. It supports 6 drives (720 GB) for about $1,600. At this point, you need to buy the enclosure yourself.

But as SmirnoffIsTasty said: "Before you ask "can I?", ask "why would I?"".

I deal with terabytes of scientific data. We roll in some of the data, process it, produce a few frames, then roll it out. We gather the frames together to make a scientific animation. Throw in some artistic animation to help explain the science, and you can see where we need the disk space. In your case, I am still left scratching my head and wondering "Why?". In the end, all of our movies are less than 4.7 GB of information (Gee, can you tell we write it to DVD?). Most fit on a CD. In our case we are going for a balance of speed, space, and cost. All we are doing is dropping out 1 frame every few minutes. When we video edit, we are using SCSI disks. The OS is on one disk, and we use the others for storing/editing the frames.

If you are doing video editing, I do not see where you need 1+ TB of slow disks, especially in this setup.

Really what you need to do is build a data system that is progressive. Think about it from a memory standpoint for CPUs. A CPU has registers, then L1 cache, then L2 cache, (sometimes L3 cache), then main memory, then a hard drive. You need a data system that mirrors that. For example:

1. SCSI hard drives on which to do the editing. (Say 2 x 73 GB)
2. RAID (IDE or SCSI) for longer term storage. (One of the solutions given above or by other members)
3. CD burner to back up small projects.
4. DVD burner for larger projects.
5. Tape for full and incremental backups.

Now fit all of this within your budget. :)

As an example:

1. 2 x 73 GB SCSI hard drives: $1,000
2. IDE RAID 5 disk farm with 1 hot spare: $1,700
3. 40X CD burner + CDs: $150
4. DVD burner + DVDs: $600
5. Tape drive + tapes: $800

All prices are, of course, approximate.
 

danwatt

Junior Member
Apr 9, 2002
Well, as I have said, speed DOES matter somewhat, but I don't need the performance that an 8 x 7200RPM RAID-0 can provide. With a RAID-0 WinBench 99 run on the 3ware (8 x Fujitsu 10GB, 5400RPM), the drive got a high-end mark of 40350 and a data transfer rate between 155,000-196,000. With RAID-5, the high-end mark was 13250 and the transfer rate was 134,000-185,000. Right now my current video drive (IBM 75GXP) gets the following scores: a transfer rate of 22,000-28,800 and a high-end mark of 21000. Now, my drive's high-end mark was higher than the RAID-5 result, but the RAID-5 transfer rates were a lot higher (more than 5x as fast). I am going to dig around for more benchmarks, and a lot of other factors play in (CPU, RAM, Gig-E transfer rates, etc.), but right now this 8-drive solution is looking pretty good.
 

CZroe

Lifer
Jun 24, 2001
Only use Western Digital or IBM drives, because they are the only drives that have hardware diagnostics programs that can identify AND FIX problems regardless of the file system. These utilities have been invaluable with my RAIDed drives in the past, and many of those drives could never have been formatted again without them. Western Digital 5400RPM 80GB drives were on sale at CompUSA a couple of months ago for $99.
 

Nothinman

Elite Member
Sep 14, 2001
IBM is leaving the hard disk business, so I doubt it would be a good idea to use their drives.