
I think I killed my primary partition

3Martini

Junior Member
I was having problems getting Windows XP to load after an improper shutdown.

I have a RAID 0 array for my primary drive. I was finally able to boot from CD using my Windows XP disc and a floppy with my RAID drivers (loaded via F6). I tried an installation repair, but it did not work: OS Load error.

Then from the repair console, I noticed that my drive letters were flipped. I have 3 drives - 2 in a RAID 0 array, and a third just for data. But it showed the data drive as C: and my Windows drive as D:

First I tried running bootcfg, but Windows still would not boot.

Then I used the recovery console to map my drives:

D: NTFS 305250MB \Device\Harddisk0\Partition1
C: NTFS 286165MB \Device\Harddisk0\Partition1

And finally (here is where I probably clobbered my root partition) I ran
fixmbr \Device\Harddisk0\Partition1

Now my Raid drive shows up like this:

305251 MB Disk 0 at Id 0 on bus 0 on iaStor [MBR]
D: Partition1 [unknown] 305251 MB (305250 MB free)

286166 MB Disk 0 at Id 1 on bus 0 on iaStor [MBR]
C: Partition (SCSI-Vol1) [NTFS] 286166 MB (919644 free)

Can I still recover my files from my RAID drive (Disk0 at Id0) - or are they gone for good?
 
The files are never really gone, at least not if you don't overwrite the areas where they're recorded. I had a similar problem last week and recovered my files using "GetDataBack for NTFS". Maybe "TestDisk" or "GParted" can help you too, though I've never used either of those.
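Worth noting what fixmbr actually touches. The MBR is a single 512-byte sector: roughly 446 bytes of boot code, a 64-byte partition table (4 entries of 16 bytes), and a 2-byte 0x55AA signature. fixmbr rewrites the boot code and normally leaves the partition table alone, which is why file data tends to survive it. A rough sketch of that layout in Python, using synthetic bytes rather than anyone's real disk:

```python
import struct

# Build a synthetic 512-byte MBR: 446 bytes of boot code, a 64-byte
# partition table (4 entries x 16 bytes), and the 0x55AA signature.
boot_code = b"\x00" * 446
# One NTFS partition entry: bootable flag, CHS start, type 0x07 (NTFS),
# CHS end, LBA start sector, sector count. CHS fields left zeroed here.
entry = struct.pack("<B3sB3sII", 0x80, b"\x00\x00\x00", 0x07,
                    b"\x00\x00\x00", 2048, 1000000)
table = entry + b"\x00" * 48          # remaining 3 entries empty
mbr = boot_code + table + b"\x55\xaa"
assert len(mbr) == 512

def parse_mbr(sector):
    """Return (signature_ok, [(bootable, type, lba_start, sectors), ...])."""
    sig_ok = sector[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        e = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot, _, ptype, _, lba, count = struct.unpack("<B3sB3sII", e)
        if ptype != 0:                # type 0x00 means unused slot
            parts.append((boot == 0x80, ptype, lba, count))
    return sig_ok, parts

ok, parts = parse_mbr(mbr)
print(ok, parts)   # True [(True, 7, 2048, 1000000)]
```

The point: only bytes 0-445 get replaced by a standard fixmbr run, so as long as the partition table region survives, the file system structures on the partitions themselves are untouched.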
 
I think he's screwed since it was a RAID 0 array. If the array's still there but unreadable atm it may be recoverable, but I doubt it. That's why I don't use RAID: it adds complexity to an already complex system, and stuff like this happens. Plenty of people use it successfully, I guess, but I don't trust it or really need it.
 
I think he's screwed since it was a raid0 array.

As long as it's still set up as an array it should be fine. Those cheapo onboard controllers don't do a whole lot anyway; most of them just write a few KB of data at the end of each drive describing the array layout, so there's not much metadata to lose.
 
Originally posted by: pcgeek11
Raid0 = Playing Russian Roulette with two bullets instead of one.

pcgeek11

Yep, most people have to have a RAID 0 setup fail before they learn the (perceived) performance gain isn't worth the risk of failure. I know I did... 😱
 
Originally posted by: Robor
Originally posted by: pcgeek11
Raid0 = Playing Russian Roulette with two bullets instead of one.

pcgeek11

Yep, most people have to have a RAID 0 setup fail before they learn the (perceived) performance gain isn't worth the risk of failure. I know I did... 😱

Although they don't apply to every activity, I quite enjoy the performance gains when they do apply. The only risk I've gained is the hassle of a restore. When I compare the MTBF to the expected replacement time (for size or performance upgrades) I find there is no real added risk at all.

I've gone the better part of a decade now running raid 0. When is it I should be learning my lesson again?
 
Originally posted by: Smilin
I've gone the better part of a decade now running raid 0. When is it I should be learning my lesson again?
I'm willing to bet that for every case like yours there are ten who have experienced failures. I'd also guess only a small % of those running RAID 0 see any real performance benefit, and with huge drives being available for storage the space-benefit argument is pretty much nil.

 
Right now the mtbf on drives is longer than their useful life even if you chop that mtbf in half. Be it raid or single drives it's a safe bet you'll end up upgrading before you have a failure. You aren't really going to see a higher failure rate due to physical drive failure. You're more likely to see a failure due to bozo misconfigs but that's a fault of the user not the technology.

As for performance it comes down to how much you want to spend and when. Take those raptors for an example. The throughput on those new 150s is about what two of the first gen 74s would do when striped. The advantage of the 74s? You could have been running them for two years instead of waiting. If you run raid you can leapfrog ahead over and over. I was running 2x80 WD special editions (with teh 8mg cache!! ooo) for a long time until the first raptors became available. The single raptor was slightly faster but I had been enjoying the speed long before it existed.

My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.
 
Originally posted by: Smilin
Right now the mtbf on drives is longer than their useful life even if you chop that mtbf in half. Be it raid or single drives it's a safe bet you'll end up upgrading before you have a failure. You aren't really going to see a higher failure rate due to physical drive failure. You're more likely to see a failure due to bozo misconfigs but that's a fault of the user not the technology.

As for performance it comes down to how much you want to spend and when. Take those raptors for an example. The throughput on those new 150s is about what two of the first gen 74s would do when striped. The advantage of the 74s? You could have been running them for two years instead of waiting. If you run raid you can leapfrog ahead over and over. I was running 2x80 WD special editions (with teh 8mg cache!! ooo) for a long time until the first raptors became available. The single raptor was slightly faster but I had been enjoying the speed long before it existed.

My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.

My problem isn't so much with physical failure, but with the added complexity of the writing scheme. It seems to me that the added steps increase the chances of problems happening. It may be unfounded, but the small performance increases aren't worth the hassle for me.
 
My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.

I've seen a lot of drives fail, so I wouldn't consider any drive "very reliable". I would have probably phrased it: if a drive is going to fail, it's going to fail whether you RAID it or not. In both cases, unless all of your data is on redundant RAID sets, you're going to end up restoring from backup, so having good backups negates the real problem.

My problem isn't so much with physical failure, but with the added complexity of the writing scheme. It seems to me that the added steps increase the chances of problems happening. It may be unfounded, but the small performance increases aren't worth the hassle for me.

With a little bit of planning there is no hassle. And virtually all of the same problems can also be had with an odd or very new storage controller.
 
Originally posted by: Smilin
Right now the mtbf on drives is longer than their useful life even if you chop that mtbf in half. Be it raid or single drives it's a safe bet you'll end up upgrading before you have a failure. You aren't really going to see a higher failure rate due to physical drive failure. You're more likely to see a failure due to bozo misconfigs but that's a fault of the user not the technology.

As for performance it comes down to how much you want to spend and when. Take those raptors for an example. The throughput on those new 150s is about what two of the first gen 74s would do when striped. The advantage of the 74s? You could have been running them for two years instead of waiting. If you run raid you can leapfrog ahead over and over. I was running 2x80 WD special editions (with teh 8mg cache!! ooo) for a long time until the first raptors became available. The single raptor was slightly faster but I had been enjoying the speed long before it existed.

My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.

I'm speaking of both the drives *and* the controller. In my failure case my onboard RAID controller just 'lost' my RAID 0 config one day. I split the drives up and they ran separately for a while without issue. I didn't update the BIOS or mess with anything; it just decided it didn't want to see my array anymore. I recreated the array, but of course everything was gone. I guess I could have gone to more effort to recover it, but it was just my OS and proggies. All I lost was some old email and programs/games I had to reinstall.

And still, even though drives are more reliable these days, having two instead of one doubles your chance of failure. Oh, and the additional heat, power, and cabling. 😉
 
I'm speaking of both the drives *and* the controller.

That completely changes the story, no matter how you have the drives setup if the controller decides to sh!t itself you're screwed.
 
Originally posted by: Nothinman
I'm speaking of both the drives *and* the controller.

That completely changes the story, no matter how you have the drives setup if the controller decides to sh!t itself you're screwed.

Yeah, but if it's a single drive and the motherboard dies you can take it and put it in another system to copy the data over.
 
Originally posted by: Robor
Originally posted by: Nothinman
I'm speaking of both the drives *and* the controller.

That completely changes the story, no matter how you have the drives setup if the controller decides to sh!t itself you're screwed.

Yeah, but if it's a single drive and the motherboard dies you can take it and put it in another system to copy the data over.

You can do the same thing with RAID 0. The RAID config data is on the drives themselves; I've done this several times as I've upgraded to new motherboards. If you're forced to switch motherboard types you'll need a controller from the same manufacturer, but that limitation applies to single drives too, which will throw a STOP 0x7B at boot if the controller changes.

Also, doubling the chance of failure only matters as you start approaching the MTBF of the drives. With today's drives that timeframe is beyond the useful life of the drive unless you run old crap forever. Most folks itching for the performance of RAID aren't going to sit around with obsolete hardware. It's a non-issue.
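You can put rough numbers on that with the usual constant-failure-rate model (a simplification; real drives follow a bathtub curve, and the 1.2M-hour MTBF figure below is just the Raptor spec number mentioned later in the thread). Striping two drives roughly doubles the probability at any horizon, but both numbers stay small over a typical ownership window:

```python
import math

def p_fail(t_hours, mtbf_hours, n_drives=1):
    """Probability at least one of n drives fails within t hours,
    assuming independent drives with a constant (exponential) failure
    rate of 1/MTBF each -- a common back-of-envelope simplification."""
    rate = n_drives / mtbf_hours
    return 1 - math.exp(-rate * t_hours)

three_years = 3 * 8760                       # hours in three years
single = p_fail(three_years, 1.2e6)          # one drive
striped = p_fail(three_years, 1.2e6, n_drives=2)  # RAID 0 pair
print(round(single, 4), round(striped, 4))   # 0.0217 0.0429
```

So over a three-year upgrade cycle the pair's failure probability is about double the single drive's, but still only around 4%, which is the shape of the argument being made here.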
 
Originally posted by: pcgeek11
Raid0 = Playing Russian Roulette with two bullets instead of one.
Hey, I like that analogy!
Originally posted by: Smilin
My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.
The thing that Smilin isn't mentioning is that I'm sure he keeps backups of anything important. 😉

But I really disagree with the statement that "Drives are very reliable these days". I haven't seen ANY improvement in drive reliability in the last fifteen years (when drive heads started being controlled by servo-motors instead of stepper-motors).

Google's recent study of 100,000 hard drives shows that the odds of a single hard drive failing are about 6 percent each year over the first five years. For RAID 0, that gives 1 − (0.94 × 0.94) ≈ 1 − 0.88 = a 12% chance of array failure each year. That's pretty much the same odds as Russian Roulette.
 
Yeah, but if it's a single drive and the motherboard dies you can take it and put it in another system to copy the data over.

I can do that with my RAID setup too, I just don't use the crappy onboard RAID controllers.
 
Originally posted by: Smilin
You can do the same thing with RAID 0. The RAID config data is on the drives themselves; I've done this several times as I've upgraded to new motherboards. If you're forced to switch motherboard types you'll need a controller from the same manufacturer, but that limitation applies to single drives too, which will throw a STOP 0x7B at boot if the controller changes.

Also, doubling the chance of failure only matters as you start approaching the MTBF of the drives. With today's drives that timeframe is beyond the useful life of the drive unless you run old crap forever. Most folks itching for the performance of RAID aren't going to sit around with obsolete hardware. It's a non-issue.

If the motherboard in my PC with a single drive dies I can take it and stick it into another machine and pull the data off. If my motherboard in my RAID 0 setup dies I can't just stick the drives into another system and pull the data off.

Not all drives make it to their MTBF. Some go longer, others die sooner.

 
Originally posted by: RebateMonger
Originally posted by: pcgeek11
Raid0 = Playing Russian Roulette with two bullets instead of one.
Hey, I like that analogy!
Originally posted by: Smilin
My point I guess: Broadly speaking the added chance of failure just really isn't an issue. Drives are very reliable these days.
The thing that Smilin isn't mentioning is that I'm sure he keeps backups of anything important. 😉

But I really disagree with the statement that "Drives are very reliable these days". I haven't seen ANY improvement in drive reliability in the last fifteen years (when drive heads started being controlled by servo-motors instead of stepper-motors).

Google's recent study of 100,000 hard drives shows that the odds of a single hard drive failing are about 6 percent each year over the first five years. For RAID 0, that gives 1 − (0.94 × 0.94) ≈ 1 − 0.88 = a 12% chance of array failure each year. That's pretty much the same odds as Russian Roulette.

k, horseshit on that stat.

You hand-picked one tiny stat out of a 12-page paper and presented it completely out of context. There are lots of other studies out there that completely contradict it.

First, they don't actually say 6% each year for the first five years; the rate only reaches that in the second, third, and fourth years. Why would anyone wanting RAID 0 performance run four-year-old drives? You'd be better off replacing them with a newer single drive.

Second, they're studying server drives in large arrays under heavier load (it's Google's back-end servers here, not your desktop). They measure usage as a percentage of a drive's maximum throughput. A drive that's been running in the top 50th percentile of its throughput for two years straight shows only a 1.5% failure rate. Do you actually run your drives that continuously?

Third, what they call a "failure" is utter crap.

From that article:
"Therefore, the most accurate definition we can present of a failure event for our study is: a drive is considered to have failed if it was replaced as part of a repairs procedure."


Of course they then say, basically, "yeah, we replaced it so it's failed. Turned out nothing was wrong with it, though."...

"Elerath and Shah [7] report between 15-60% of drives considered to have failed at the user site are found to have no defect by the manufacturers upon returning the unit."


If you've not seen any improvement in disk reliability over fifteen years then you've been living under a rock. The latest Raptors are sporting a 1.2 million hour MTBF, which works out to roughly 137 years. That doesn't mean a drive will last 137 years; it's more akin to: if you run about 137 of them, you can expect one failure per year. That's a fraction of one percent.
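The conversion from spec-sheet MTBF to a fleet-average annual failure rate is straightforward (using the 1.2M-hour Raptor figure above; this assumes the constant-rate reading of MTBF, which is how the spec number is meant):

```python
HOURS_PER_YEAR = 24 * 365    # 8760

mtbf_hours = 1.2e6           # quoted Raptor spec MTBF
mtbf_years = mtbf_hours / HOURS_PER_YEAR
afr = 1 / mtbf_years         # annualized failure rate, fleet-average:
                             # out of N drives, expect N/mtbf_years
                             # failures per year

print(round(mtbf_years, 1), round(afr * 100, 2))  # 137.0 0.73
```

So under the spec number, a fleet of these drives would lose roughly 0.73% per year, and even doubling that for a two-drive stripe keeps it well under 2%.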

So back to my point: you might double your failure rate but even doubled the failure rate is so low the chances are you'll upgrade the drive long before it fails.


And yes, I have backups. Drive reliability has nothing to do with that; I'd keep multiple backups even if I were running RAID 5 with a hot spare.
 