RAID 0 - is it secure enough?? Need advice.

Agrippa

Member
Aug 1, 2000
87
0
0
Hi all. I'm currently running RAID 10 with four 45GB IBM 75GXP drives and have had few problems with this for the last 2 years or so. However, after I got myself an ADSL account a short time ago, I'm running short on space, and I'm trying to figure out whether to throw away a lot of stuff or re-configure into a RAID 0 array in order to double the available space. Sadly, buying 4 120GB drives is not an option at this time...

Anyone here been running RAID 0 for a while? I know it's fast, but is it safe? Opinions would be welcome!!! The full set-up goes like this:

Abit KR7A-RAID
AMD AthlonXP 1600+ @ 1.7GHz
TwinMOS PC2700 512MB
4 x IBM 75GXP 45GB ATA100
Maxtor 60GB ATA133 (extra backup for important stuff)
Pioneer 16X DVD
Plextor 12X/10X CD-RW
ATI Radeon 64 VIVO
CL Audigy Player
Kingston KNE110 TX/100B N/W card
Eicon DIVA 2 ISDN

Thanks for helping,

Agrippa
 

Mavrick007

Diamond Member
Dec 19, 2001
3,198
0
0
What kind of card are you using for RAID 10? RAID 0 is a lot faster, but it's highly unsafe. If one drive fails, you lose your whole array, and since you have 4 drives, the chances are very high that something will go wrong. Chances increase at least fourfold. And from what I've heard about the IBM 75GXP series drives, that's just an accident waiting to happen (I have one that is still going strong *knock on wood*).
 

Windogg

Lifer
Oct 9, 1999
10,241
0
0
RAID 0+1 is one of the safest in terms of data redundancy. RAID 0 has zero data redundancy and is truly a misnomer. If you have data you cannot afford to lose, only use RAID 0 if you have adequate backup. Running two IBM 75GXP drives in RAID 0 is risky; running four of them in a striped array is damn near suicidal IMHO. I only know of 3 scenarios where RAID 0 has real practical benefits, and none of them seem applicable to you. Another idea is to get a decent hardware-based RAID adapter (not your typical Promise FastTrak or Iwill SideRAID) and run it in RAID 5. You will only lose one drive's worth of capacity, as opposed to 50% of available space. In your case, the capacity of three out of four drives will be available.
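For what it's worth, the capacity tradeoff described above is easy to sketch in a few lines of Python. The numbers assume Agrippa's four 45GB drives; this is just back-of-the-envelope arithmetic, not how any real controller reports capacity:

```python
# Rough usable capacity for common RAID levels, assuming n identical drives.
def usable_gb(level, n_drives, drive_gb):
    """Return approximate usable capacity in GB for a given RAID level."""
    if level == 0:             # striping: no redundancy, all space usable
        return n_drives * drive_gb
    if level in (1, 10):       # mirroring (or stripe of mirrors): half the space
        return n_drives * drive_gb // 2
    if level == 5:             # striping with parity: one drive's worth lost
        return (n_drives - 1) * drive_gb
    raise ValueError("unsupported RAID level")

for level in (0, 10, 5):
    print(f"RAID {level}: {usable_gb(level, 4, 45)} GB usable")
```

So with four 45GB drives: RAID 0 gives 180 GB, RAID 10 gives 90 GB, and RAID 5 gives 135 GB.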

Windogg
 

PlatinumGold

Lifer
Aug 11, 2000
23,168
0
71
windogg

it is applicable to agrippa because he is currently running raid 10, striping w/ mirroring i believe. if he goes to raid 0 it doubles his hd space.

<< However, after I got myself an ADSL account a short time ago, I'm getting short on space >>


i would do striping w/ parity (raid 5 i think). it will increase your hd space but it will also give you some fault tolerance. it stripes data across the drives with one drive's worth of capacity used for parity, so you get the space of three out of four.
 

Workin'

Diamond Member
Jan 10, 2000
5,309
0
0
It looks like Agrippa would have to buy a new RAID card to do RAID 5. All the cheapo cards and built-in mobo RAID support 0, 1, 0+1, but not 5. The least expensive IDE RAID 5 card I can find is the Promise SuperTrak SX6000 which is at least $245 plus a stick of SDRAM for cache.

FWIW, I've been running one RAID 0 array with 2 6.8GB Maxtor drives and an original Promise FastTrak for about 4 years without any problems, and have been running another RAID 0 array with 2 WD 40GB and Windows 2000 software RAID for the past year also with no problems. But I back up my data regularly, so I'm not worried about a drive failing. Even so, since drives are so cheap nowadays I'm thinking about getting a few more WD 40 giggers and the SuperTrak SX6000 to make a 5 or 6 drive RAID 5 array just for the heck of it.
 

Amused

Elite Member
Apr 14, 2001
57,151
18,720
146


<< What kind of a card are you using for raid 10? Raid 0 is alot faster but it's highly unsafe. If one drive fails, you lose your whole raid array, and since you have 4 drives, your chances are very high that something will go wrong. Chances increase four fold at least. And from what I've heard about the IBM 75GXP series drives, that's just an accident waiting to happen(I have one that is still going good *knock on wood*). >>



Just curious, but say the chances of one HDD failing are 1 in 500,000. If he has 4 drives, that makes his chance 4 in 500,000. How is that "highly unsafe?"
 

PlatinumGold

Lifer
Aug 11, 2000
23,168
0
71
Actually, agrippa didn't say how he was doing RAID 10 either.

If he has Win2k Server then he can implement RAID 5 via the OS. That would be the cheapest solution if he is running Win2k Server.
 

Workin'

Diamond Member
Jan 10, 2000
5,309
0
0


<< If he has win 2k server than he can implement raid 5 via the OS. That would be cheapest solution if he is running win2k server. >>

That's true but the write performance will be horrible. The parity calculations will also suck up quite a bit of CPU time. A RAID 5 card does all that with its own processor. But it is an option, and for a home system you might not notice the performance hit.
 

jklesel

Member
Sep 13, 2000
30
0
0
Amused -- ever heard of MTTF? Having a RAID 0 array cuts your MTTF (mean time to failure) in half, doubling the chance that one of the two drives will fail.
 

Eug

Lifer
Mar 11, 2000
24,048
1,679
126


<< Just curious, but say the chances of one HDD failing are 1 in 500,000. If he has 4 drives, that makes his chance 4 in 500,000. How is that "highly unsafe?" >>


If, say, the chance of one HDD failing and you losing all the data on that one drive is 1/500000 per unit of time, then with 4 independent drives the chance of losing all the data on all four drives concurrently is (1/500000)x(1/500000)x(1/500000)x(1/500000), i.e. pretty damn unlikely in any reasonable period of time purely by chance, unless there are other circumstances such as your computer being run over by a truck or hit by lightning.

If you're running RAID 0, the chance of losing ALL the data at any one point in time is going to be 4/500000 per unit of time, or 1/125000 per unit of time. If it's true that 75GXP drives are corruption-prone (which is debatable), then that unit of time is going to be shorter than average. 180 GB is quite a bit of data to lose.

My stats here are pretty simplistic and not really correct, but you get the picture, especially when you consider that just about everyone I know who's had a computer for more than a few years has had at least one drive go on them. If that one drive goes in your RAID 0 setup, all of them go.

Basically, you MUST back up if your data is of any importance to you. Whether you do it via RAID or some other method (external 60 GB drive & CD-ROM (& Zip) in my case) is up to you. It sounds like the original poster here would have no form of backup whatsoever.



<< amusedone-- ever heard of MTTF? having a raid 0 cuts your MTTF(mean time to failure) in half, doubling the chance one of the two drives will fail. >>

That about sums up my post, but to reiterate: not only does it cut the time to failure in half, it doubles the amount of data lost. In the case of the original poster, it quarters the time to failure, and quadruples the amount of data lost.
 

Agrippa

Member
Aug 1, 2000
87
0
0
Mavrick007: I'm using the built-in HPT372 controller, so RAID 5 is currently a no-go, I'm afraid, and frankly the cost of a RAID 5 capable card would about equal that of buying new, larger drives and selling my current ones. As for the 75GXPs, I've had no problems, nor indications of any problems with them. I've not really read up on the alleged issues with them, but possibly mine date from before the production issues arose... (they're about 14 months old)

Workin': I'd love a SuperTrak SX6000, but the lowest price over here (Norway) translates to about $450 - and I'm not even sure if any memory is included... Sad, eh?

Anyway, I hear a lot of bad stuff about RAID 0, but no-one actually seems to have had a catastrophe descend on them courtesy of their RAID controller. The lowered MTTF is in itself acceptable, since there's a good chance I'll detect an impending failure before it happens (at least I always have before...) and because I'll certainly get some new drives within a year or so. What I'm trying to decide is whether RAID 0 is worth risking for up to a year, then, not whether to switch from 10 to 0 altogether.

Agrippa
 

Eug

Lifer
Mar 11, 2000
24,048
1,679
126


<< Anyway, I hear a lot of bad stuff about RAID 0, but no-one actually seems to have had a cathastrophy decend on them courtesy of their RAID controller. The lowered MTTF is itself acceptable, since there's a good chance I'll detect the impending failure before it happens (at least I always have before...) and because I'll certainly get some new drives within a year or so. What I'm trying to decide is whether RAID 0 is worth risking for up to a year then, not whether to switch from 10 to 0 altogether. >>


I guess you haven't seen the numerous posts here lately about RAID 0 going bad. Not all were caught before the data was completely lost.
 

MrGrim

Golden Member
Oct 20, 1999
1,653
0
0


<< Just curious, but say the chances of one HDD failing are 1 in 500,000. If he has 4 drives, that makes his chance 4 in 500,000. How is that "highly unsafe?" >>



Hmmm he's running 4 75GXPs ... what are the chances of NONE of them failing? ;)
 

SpideyCU

Golden Member
Nov 17, 2000
1,402
0
0
Yeah, well, ironically enough, I've got a RAID-0 array going with two Maxtor drives (18 months or so), and my 45 GB IBM 75GXP is my backup drive. Somehow I still suspect my backup drive is more likely to die first, if anything dies at all. ;)

People talk about RAID-0 like it's a death trap for drives, but I'd really like to see some large-scale numbers on that. If you have two drives and each has a 99% chance of performing without any problems for at least 5 years (which is a realistic spec), it's not as though one of them will die in two and a half years just because you put them in a RAID-0 configuration. Still, you are in fact going to be doing this with 4 drives. Doing so without backup - unless you feel you can back up everything that's really important to you to that 60 GB drive you listed - is tempting fate. Besides, with all those components inside your case, it must be getting warm in there, and it's been theorized (justifiably, though I can't find the article right now) that the reason the 75GXPs died as often as they did was that they were more sensitive to heat than other drives.
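That 99%-for-5-years figure is a hypothetical spec, but assuming it and independent failures, the joint survival odds work out like this:

```python
# Chance that ALL drives in a stripe survive the period, assuming each
# independently has probability p_single of surviving (hypothetical 99%).
def p_all_survive(p_single, n_drives):
    return p_single ** n_drives

print(round(p_all_survive(0.99, 2), 4))  # two-drive stripe
print(round(p_all_survive(0.99, 4), 4))  # four-drive stripe
```

So a two-drive stripe still has about a 98% chance of surviving the period, and a four-drive stripe about 96% - worse, but nowhere near halving the expected lifetime of each drive.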
 

Amused

Elite Member
Apr 14, 2001
57,151
18,720
146


<<

<< Just curious, but say the chances of one HDD failing are 1 in 500,000. If he has 4 drives, that makes his chance 4 in 500,000. How is that "highly unsafe?" >>



Hmmm he's running 4 75GXPs ... what are the chances of NONE of them failing? ;)
>>



All six of mine still work fine after a year or more.
 

MrGrim

Golden Member
Oct 20, 1999
1,653
0
0


<< All six of mine still work fine after a year or more. >>



I think you are very lucky. :) Both of mine died within 3 months.
 

Mday

Lifer
Oct 14, 1999
18,647
1
81
my raid0 has been working for over 2 years. i keep a single HDD around for storing backup images of the array.

i have had problems, since the drives are bitchy, but it's all good. i did lose some data a while back.

raid10 is very wasteful... raid5 is good, but you can't do that on your system without added cost.

what you can do is set up raid 1 for your important data, and raid 0 for the non-essential stuff, which you should back up often. that will give you an extra 45 GB to work with.

of course you would have to back everything up first and then reconstruct the arrays, which destroys all your data... have fun...
 

dj4005

Member
Oct 19, 1999
141
0
76


<< Raid 0 is alot faster but it's highly unsafe. If one drive fails, you lose your whole raid array, and since you have 4 drives, your chances are very high that something will go wrong. Chances increase four fold at least. >>



If you equate instability with the number of components involved, should we assume that you are running an 8088 based PC? Surely the number of components in the newer CPUs would make them more prone to failure.

A system with a single fan in the P/S would be FAR less prone to failure than a system with CPU fans, chipset fans, video card fans and multiple case fans.

Just how far do you carry this analogy?
 

LostHiWay

Golden Member
Apr 22, 2001
1,544
0
76
If you plan to overclock the FSB, be careful. FSB overclocking frequently breaks the array, forcing you to start over.
 

Yossarian

Lifer
Dec 26, 2000
18,010
1
81
dj4005 wrote:



<< If you equate instability with the number of components involved, should we assume that you are running an 8088 based PC? Surely the number of components in the newer CPUs would make them more prone to failure.

A system with a single fan in the P/S would be FAR less prone to failure than a system with CPU fans, chipset fans, video card fans and multiple case fans.

Just how far do you carry this analogy?
>>



I think you've already taken it to the point of absurdity. There is a world of difference between the failure rate of a solid-state device like a CPU and that of a mechanical one like a HDD, which has a multitude of moving parts.