How to upgrade a 7-bay NAS to larger disks while keeping the data

So I have a 7-bay NAS filled with 3TB HGST NAS HDDs. I want to build a new 7-disk RAID 5 array with double the capacity, and with less power draw and noise, using the 6TB HGST He drives. I currently have a little over 14TB of data on the current RAID 5 array, which has about 17TB of capacity.

I can't just copy the data from the old disks onto the new disks and then put the new disks in the NAS, because setting up the new array will erase everything on them. So the only thing I can think of is to transfer all 14TB of my data onto a third set of HDDs that can hold it, take the old HDDs out, put the new HDDs in the NAS, build the array, then transfer the 14TB of data from the third set of HDDs onto the newly built array.

Is having a third set of HDDs the only way I can accomplish this? Is there any way I can get around having to buy an additional 14TB worth of hard drives? Would the 7 originally RAIDed drives still work as a complete RAID array in another PC if I connected all 7 drives to a PCIe RAID card in that PC? I can borrow a RAID card for free for a day or two. Or will the RAID only work in the Thecus it was set up in?
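
For reference, the back-of-envelope capacity math here, as a quick sketch (it assumes the usual RAID 5 layout where one disk's worth of space goes to parity; the "17TB or so" is 18 decimal TB shown in binary TiB):

```python
# Back-of-envelope RAID 5 capacity: one disk's worth goes to parity.
def raid5_usable_tb(disks, size_tb):
    return (disks - 1) * size_tb

old_tb = raid5_usable_tb(7, 3)   # 18 (decimal) TB usable
new_tb = raid5_usable_tb(7, 6)   # 36 (decimal) TB usable

# Drive makers count decimal TB; the NAS reports binary TiB.
TB_TO_TIB = 1e12 / 2**40

print(f"old array: {old_tb} TB ~ {old_tb * TB_TO_TIB:.1f} TiB")  # ~16.4 TiB, i.e. "17TB or so"
print(f"new array: {new_tb} TB ~ {new_tb * TB_TO_TIB:.1f} TiB")  # ~32.7 TiB
```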
 
Either you completely back up the data to another storage device, set up the unit with the new drives, and then copy your files back, or you consider hiring someone else to do it for you.

Another option, one that I don't prefer because of the greater risk of losing all your data, is to remove all your drives from the NAS, put them in a Linux system, and mount the RAID array there. Then copy the data over the network to the newly set up RAID array on your NAS. If you mess up doing this, you risk losing it all or incurring very expensive data recovery services.

It concerns me that you have so much data stored on this NAS without a current backup.
 
In general, you really can't just move the array to a different RAID card unless it's the exact same make/model.

Do you currently have 14TB of data (i.e., is the current array full)?

Sounds like a great time to get a backup plan in place.

Step 1 - Get the three cheapest 5TB HDDs you can and just pool them together (JBOD style). It can be done with most systems pretty easily and for minimal if any investment.
Step 2 - Move the data over from the old array (and verify the copy; see the checksum sketch below).
Step 3 - Build the new array.
Step 4 - Transfer back to the new array.
Step 5 - Keep the 3x5TB running and run a backup to it once in a while.

You didn't say what kind of RAID you were doing, but a FreeNAS/ZFS system would eliminate this problem in the future since you could just pool multiple arrays. If you refuse to use a backup then there's even more reason to do it.
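
Whatever pooling you use, it's worth confirming the copy is intact before the old array gets wiped. A minimal sketch of that kind of check (the mount points are hypothetical):

```python
import hashlib
from pathlib import Path

def tree_checksums(root):
    """Map each file's relative path to its SHA-256 digest."""
    base = Path(root)
    sums = {}
    for f in sorted(p for p in base.rglob("*") if p.is_file()):
        h = hashlib.sha256()
        with f.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        sums[str(f.relative_to(base))] = h.hexdigest()
    return sums

# Hypothetical mount points: the old array and the temporary JBOD pool.
before = tree_checksums("/mnt/old_array")
after = tree_checksums("/mnt/jbod_pool")
print("copy verified" if before == after else "MISMATCH - do not wipe the old array!")
```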
 
Swap first HDD. Rebuild array.
Swap second HDD. Rebuild array.
and so on, until all disks have been replaced by new, bigger disks; then expand the volume.

But you should have a backup made before doing something that risky.
 
If I understand correctly, he wants to replace his current drives with bigger ones, and his thinking is that he needs to create a whole new array to do this.

In my experience with RAID 5 at least, he could remove one of the current drives by telling the RAID controller it failed, put in the new, bigger drive, and the array will use only the amount that matches the current disk size as it rebuilds that drive.

Once he's swapped all of the new drives in, the extra unused space can be assigned to the volume, and it will grow to that size.

But this is based on enterprise RAID solutions, so your mileage may vary depending on the RAID capabilities your solution has.
 
+1

Most NAS devices are set up to do exactly this. Assuming you are staying on RAID 5, doing the hot-swap upgrade is generally the safest bet, even though it can take a while. That said, ideally you should already be maintaining a separate backup as protection against a failed rebuild.
 
If you go the upgrade-one-drive-at-a-time-then-expand route, remember that a single bad sector on any drive will cause the RAID to knock that drive offline, which in turn will leave you with two offline drives and the need for data recovery. So be 100% sure that the data backup is confirmed first.
 
Holy shit! That swap-and-rebuild approach is asking for trouble, like 5 times in his case.

Wow
 
That swap-and-rebuild approach is asking for trouble, like 5 times in his case.
I mean, you could try something like backing up to CrashPlan, swapping in all the drives, then downloading the CrashPlan backup.

It sounds like OP doesn't have a current backup, though, so assuming a 10/100 connection, they'll be uploading to CrashPlan at 10MB/s or so at best. For 14TB of data, that will likely take the better part of 17 days, plus another day or two to download the backup onto the new disks: nearly 3 weeks in total. Swapping and rebuilding one drive at a time will probably take about 12-24 hrs/drive per RAID 5 rebuild, so the entire swap-and-replace might be done in under a week.
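
The arithmetic behind those estimates, as a quick sketch (assuming the 100Mbit side of a 10/100 NIC, roughly 10MB/s, is the bottleneck):

```python
DATA_BYTES = 14e12                # 14 TB of data

# Cloud round trip: ~10 MB/s on a saturated 100Mbit link.
upload_days = DATA_BYTES / 10e6 / 86_400
print(f"upload to CrashPlan: {upload_days:.1f} days")   # ~16.2 days, plus the download

# One-at-a-time swap: 7 rebuilds at 12-24 hours each.
print(f"swap-and-rebuild: {7 * 12 / 24:.1f} to {7 * 24 / 24:.0f} days")  # 3.5 to 7 days
```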
 
And only a few seconds to lose all their data, if one drive has a single bad sector, which is statistically probable with drives of this capacity.
 
Buy a new 7-disk NAS enclosure for your new HDDs.

Transfer the material after the new array is built.

Sell the old enclosure, and possibly the HDDs in it, on eBay after wiping your data off, or donate it to charity for a possible write-off.

 
I would buy another cheap enclosure, big enough to house two 6TB drives as a JBOD. After reorganizing, you may be able to fit everything there. Now you can populate the old enclosure with 5 new drives, copy the data to that, then use the original two 6TB drives to expand the RAID to 7 disks.
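
Running the numbers on this plan, as a quick sketch (it assumes OP can prune the data down to fit the 12TB staging pool):

```python
staging_tb = 2 * 6            # temporary 2x6TB JBOD
data_tb = 14                  # what OP currently stores
print(f"must prune {data_tb - staging_tb} TB to fit the staging pool")  # 2 TB

interim_tb = (5 - 1) * 6      # 5-drive RAID 5 during the migration
final_tb = (7 - 1) * 6        # after expanding back to 7 drives
print(f"interim array: {interim_tb} TB, final array: {final_tb} TB")    # 24 TB, 36 TB
```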
 
And only a few seconds to lose all their data, if one drive has a single bad sector, which is statistically probable with drives of this capacity.
If that were true, I would never be able to rebuild a RAID array, and I've done it twice this week.

Stop fearmongering. That "RAID-5 WILL BE YOUR DOOM!!!" stuff is a sales pitch; it isn't borne out in actual failure rates by a long shot. (Almost as if it's a superficially simplistic view of how these things actually work... but that would be impossible...)
 
I work at a data recovery lab. If it were a sales pitch, I'd encourage you all not to take precautions, not to back up your data, and to just make the same mistakes my clients make. But I do care and want to help you avoid data loss and avoid spending thousands of dollars.

Don't wear your seatbelt when you drive. The number of cars that get in an accident every day is insignificant compared to the number of cars on the road. With those odds of getting into an accident, why bother?

Just because your rebuilds have worked thus far doesn't mean they will next time. I get failed-RAID data recovery jobs frequently enough to know what could have been done to avoid needing my services.

If you are confident that your RAID rebuild process is always safe, I suggest that you assure those who follow your advice on this thread that you will pay for any data recovery services needed, should their RAID fail during a rebuild, as per your advice.
 
I personally think that RAID 5 across seven 6TB disks is a little too far for my comfort level. On my 4 NAS boxes, I always use a 2-disk-protection setup. This includes my Synology DS3612xs, which only has four 4TB drives so far (so I only have 8TB of usable space, a little extreme at the moment, but if I ever need to expand the array to 12 disks, I'll keep the same RAID 6 setup).

My personal recommendation for the OP isn't about RAID 5 vs. RAID 6, though. It's a backup plan. There's always a chance that the NAS box hardware goes bad.

In general, rebuild time for RAID 5/6 is way longer than a wipe/rebuild/restore-from-backup cycle. For my most critical, must-always-be-on NAS, I stick with RAID 1 plus offsite backup.
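
For a rough sense of the trade-off being described, a small sketch of usable space vs. fault tolerance for simple parity layouts on the proposed 7x6TB set:

```python
def usable_tb(disks, size_tb, parity_disks):
    """Simple parity RAID: the parity disks' worth of space is lost."""
    return (disks - parity_disks) * size_tb

for name, parity in (("RAID 5", 1), ("RAID 6", 2)):
    print(f"{name} on 7x6TB: {usable_tb(7, 6, parity)} TB usable, "
          f"survives {parity} simultaneous failure(s)")
# RAID 5 on 7x6TB: 36 TB usable, survives 1 simultaneous failure(s)
# RAID 6 on 7x6TB: 30 TB usable, survives 2 simultaneous failure(s)
```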
 
Seems like a very wise solution to me.
 
And only a few seconds to lose all their data, if one drive has a single bad sector, which is statistically probable with drives of this capacity.

Are you just quoting the result from the binomial distribution applied to the URE rate for ordinary hard drives? URE of ~1 bit per 10^14 bits, assuming a rebuild requires him to read all ~1.7e14 bits (7 drives/pool * 3TB * 1e12 bytes/TB * 8 bits/byte ≈ 1.7e14 bits/pool).

Admittedly, that binomial distribution does look pretty grim (~18% chance of success) for one rebuild, but I'm sort of suspicious about a) a single bit error destroying the array, and b) such errors actually happening that frequently.
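
Spelling that out, a sketch of the same naive URE model (it treats every bit read as an independent event, which is exactly the assumption being questioned):

```python
import math

URE_PER_BIT = 1e-14          # the quoted consumer-drive spec
bits_read = 7 * 3e12 * 8     # the poster's assumption: every bit on all 7 drives

# P(no URE over n independent bit reads) = (1 - p)^n
p_success = math.exp(bits_read * math.log1p(-URE_PER_BIT))
print(f"bits read: {bits_read:.2e}")                 # 1.68e+14
print(f"P(rebuild hits no URE): {p_success:.1%}")    # ~18.6%
```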
 
Are you just quoting the result from the binomial distribution applied to the URE rate for ordinary hard drives? URE ~1 bit per 10^14, assuming a rebuild requires him to read all 1.7e14 bits (7 drives/pool * 3TB * 1e12bytes/TB * 8 bits/byte = 1.7e14 bits/pool).

Admittedly, that binomial distribution does, look pretty grim (~18% chance of success) for 1 rebuild, but I'm sort of suspicious about 1 bit-error a) destroying the array, and b) them actually happening that frequently.
RAID is based on distributed parity. If one drive fails, the parity from the remaining working drives can be used to rebuild the replacement drive. However, this process starts at sector 0 and gradually works to the MAX_LBA of each drive. It is only after the full rebuild is completed that the replacement drive is considered online and active again. However, if a single bad sector is encountered on any of the good drives, the RAID controller will tag that drive as offline and suddenly you now have two drives offline and no longer have access to your data.

I frequently get RAID data recovery projects in here that fail for this very reason. We recently finished up one that had 4 drives; the one bad drive they were trying to rebuild was, in fact, the only good drive in the set.

We also just had a project come in where a tech forced two offline drives back online in hopes of accessing his VMFS. It certainly didn't help that he formatted the newly created RAID volume with a new VMFS volume, but what he didn't take into consideration is that the two drives went offline for a reason. Ironically enough, all 4 drives were in horrible shape, and I'm still amazed that his RAID worked as long as it did.

Anyway, it is usually when we make assumptions that we get into trouble. If you are going to assume anything, assume the worst and hope for the best. Should you assume the best and get the worst, you may just find that it was a very expensive mistake.
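
To illustrate the mechanism being described: with single parity, a missing member is reconstructed by XORing all the survivors, so every surviving stripe read has to succeed. A toy sketch:

```python
from functools import reduce
from operator import xor

# Toy stripe: one byte per drive; parity is the XOR of the data bytes.
data = [0x4F, 0x12, 0xA7, 0x3C, 0x88, 0x51]   # six data "drives"
parity = reduce(xor, data)                     # the seventh "drive"

# Drive 2 dies: rebuild its byte from the five survivors plus parity.
survivors = data[:2] + data[3:] + [parity]
assert reduce(xor, survivors) == data[2]
print(f"rebuilt byte: {reduce(xor, survivors):#04x}")  # 0xa7

# If any survivor can't be read (a URE), there is nothing to XOR against,
# which is why one bad sector mid-rebuild can take the whole array offline.
```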
 
So the answer to Essence's question is "yes." You are.

If you are confident that your RAID rebuild process is always safe, I suggest that you assure those who follow your advice on this thread that you will pay for any data recovery services needed, should their RAID fail during a rebuild, as per your advice.
So in other words, you rarely work with a RAID array that isn't hosed.

You may have noticed that in my first post I called it a risky process and suggested OP have backups. I'm not a fool, but I'm not clutching my pearls either.
 