Question: Recovering RAID 5

Hoochee

Junior Member
Mar 26, 2020
I have a Thecus N5550 with 5 x 3TB WD Red drives in RAID 5. Two of the drives failed within a day of each other, and I had one 3TB WD Red on hand (exact model/specs of the two that failed). I hot swapped it, recovery began and completed.

I had a WD 8TB easystore more or less doing nothing (but with data on it), so I shucked the drive, taped the required pins, and dropped it in the NAS. It was recognized and I set it as a spare, but recovery didn't start. I rebooted the NAS and the RAID showed as gone. Rebooted again, no RAID. I put the failed 3TB drive back in, the RAID reappeared and recovery began. That obviously didn't last long, because the NAS soon flagged the drive as bad and the RAID status changed to Degraded. I pulled the 3TB drive, put the 8TB back in, it was recognized, I set it as a spare, but again recovery didn't begin. I rebooted the NAS and the RAID was gone again. Swapped the 3TB back in, rebooted, the RAID showed again... you get the picture: rinse and repeat. What am I dealing with here?

The N5550 has a max capacity of 25TB (5 x 5TB), and I've assumed it will simply ignore the unusable portion of the 8TB drive I'm trying to use. Maybe someone who knows a lot about Thecus can tell me I'm wrong.
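In case it helps with suggestions: as far as I know the N5550 is plain Linux md RAID (mdadm) underneath, so if I can SSH in, something like the read-only sketch below should show what state the array and the new disk are actually in. The device names /dev/md0 and /dev/sdf are just guesses on my part, not necessarily what the Thecus uses.

    #!/usr/bin/env python3
    # Rough read-only sketch, assuming the N5550 exposes a Linux shell with
    # mdadm installed. /dev/md0 and /dev/sdf are placeholders -- check your
    # own device names (e.g. with lsblk) before trusting any of this.
    import subprocess

    ARRAY = "/dev/md0"     # placeholder: the RAID 5 array device
    NEW_DISK = "/dev/sdf"  # placeholder: the shucked 8TB easystore

    def show(cmd):
        # Run a read-only query and print whatever it reports.
        print("$ " + " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout or result.stderr)

    show(["cat", "/proc/mdstat"])           # kernel's view of every md array
    show(["mdadm", "--detail", ARRAY])      # degraded? rebuilding? which members failed?
    show(["mdadm", "--examine", NEW_DISK])  # does the new disk carry old RAID metadata?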
 

ch33zw1z

Lifer
Nov 4, 2004
You introduced a new disk (the shucked WD), which already had array metadata on it, into your 5 x 3TB RAID 5; that's confusing for the N5550.
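If that's what's going on, the usual fix on md-based boxes is to wipe the stale superblock off the shucked drive before adding it as a spare. A rough sketch, assuming the NAS really is mdadm underneath and using placeholder device names; the destructive step erases whatever is on that one disk, so only run it after the 8TB's data has been copied elsewhere:

    #!/usr/bin/env python3
    # Rough sketch: clear stale RAID metadata from the shucked 8TB, then add
    # it as a spare so the rebuild can start. ASSUMPTIONS: the NAS is plain
    # Linux mdadm underneath, the array is /dev/md0 and the 8TB is /dev/sdf
    # (both placeholders). This wipes that one disk -- copy its data off first.
    import subprocess

    DISK = "/dev/sdf"    # placeholder: the shucked 8TB easystore
    ARRAY = "/dev/md0"   # placeholder: the degraded RAID 5 array

    # Read-only checks first: what signatures/metadata does the disk carry?
    subprocess.run(["wipefs", "--no-act", DISK])
    subprocess.run(["mdadm", "--examine", DISK])

    # Destructive step: erase the old md superblock so the disk looks clean.
    # (Harmless error if there is no superblock to remove.)
    subprocess.run(["mdadm", "--zero-superblock", DISK])

    # Add the disk to the degraded array; md should start rebuilding onto it
    # automatically, using only as much of the 8TB as the array needs.
    subprocess.run(["mdadm", ARRAY, "--add", DISK], check=True)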

Answer this: after the first 3TB disk failed, did you rebuild the array before the 2nd 3TB disk failed?
 

Hoochee

Junior Member
Mar 26, 2020
Introducing a new disk shouldn't have been an issue, as that's how the array gets rebuilt onto a replacement: drop in a new disk and the rebuild is supposed to start automatically.

I can see where the timing of adding replacement drives might not have been clear. I did not replace the first failed drive before the second failed; there was less than 24 hours between failures.

I'm guessing my issues had to do with losing two drives on a RAID 5, which from what I understand should have made the data inaccessible (it didn't), and that this is why the NAS only showed the RAID when at least three of the original four drives were installed. Anyway, I just finished pulling all the data off the array, replacing the failed drives, building a new array, and transferring the data back to it.
 

ch33zw1z

Lifer
Nov 4, 2004
If you put in a disk that has existing RAID data on it, that can definitely cause confusion for the controller.

Yea, two failed in a RAID 5 basically means that array has lost its integrity. Starting from scratch is the only real option.

It's possible some of the data will be corrupted and you may not find out until later, but cross your fingers.

You'll want to back up the array in the future. As the old saying goes, RAID is not a backup.
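Even something as simple as a scheduled one-way copy of the share to a disk that isn't part of the array would have covered the two-dead-disks scenario. A minimal sketch, assuming the share and a backup disk are mounted at /mnt/nas and /mnt/backup (both placeholders):

    #!/usr/bin/env python3
    # Minimal one-way backup sketch: mirror the NAS share onto a disk that is
    # not part of the RAID. Both paths are placeholders -- point them at your
    # actual mounts. Add "--dry-run" to the rsync options to preview first.
    import subprocess

    SOURCE = "/mnt/nas/"          # placeholder: mounted NAS share (trailing slash matters to rsync)
    DESTINATION = "/mnt/backup/"  # placeholder: backup disk outside the array

    # -a preserves permissions and timestamps, --delete mirrors deletions so
    # the backup matches the share exactly. Run from cron or by hand.
    subprocess.run(["rsync", "-a", "--delete", SOURCE, DESTINATION], check=True)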