
RAID volume migration question

Hello,

I am contemplating building a new server for my home. The current server has been running 24/7 for over 5 years and at this stage, I am starting to get nervous and expect a major hardware failure soon.

Currently, the server uses a P35 chipset motherboard (running a Q6600 CPU) and has six 750 GB SATA drives connected to the onboard Intel RAID controller. I don't remember the storage controller name offhand (maybe ICH8 or 9?), but I'm wondering what will happen if I put in an X79 board and connect the drives to the Intel controller. I know that some of the storage controllers were backwards compatible and would see the volume and make it available; would a RAID volume from the P35 chipset era easily transfer over to the X79 era? I'm thinking it will but hope someone has tried.

The reason I ask is that if I do build a new server, I will probably use the existing disk array for now as I don't want to spend the money on 6 new drives at this stage.

Thanks
 
You kinda answered the question for the next step. You need to know which controller you have, and what controller you're going to. Specifics matter. As well as what type of RAID you're running, and what the next controller can do.

In situations like yours, IMO, using onboard RAID is not a good idea. If you use a PCIe or PCI RAID controller, you simply move it to the new machine and make sure you plug the drives in the correct order 🙂

edit: After researching and before attempting, get a backup of anything you don't want to lose...
 
You kinda answered the question for the next step. You need to know which controller you have, and what controller you're going to. Specifics matter. As well as what type of RAID you're running, and what the next controller can do.

In situations like yours, IMO, using onboard RAID is not a good idea. If you use a PCIe or PCI RAID controller, you simply move it to the new machine and make sure you plug the drives in the correct order 🙂

edit: After researching and before attempting, get a backup of anything you don't want to lose...

The idea was to keep the server cost low and the onboard Intel is generally adequate for my needs, plus at least in the past, upgrades seem to have been easy. I've thought about purchasing a RAID card but at this stage, would likely only do it if I needed additional ports. I was hoping someone had actual experience going from P35-->X79 with a RAID volume to offer pointers.

I'm an IT pro (ex server admin, current developer), so I run backups every night to an external 2 TB drive and have done RAID swaps on SCSI server systems in the past. Everything is backed up except my ripped DVDs (several hundred GBs), but I can easily copy them over to temp storage.

EDIT: Quick survey of the X79 boards seems to show that not many, if any, have 6 SATA ports covered by the onboard Intel. So I may not have an option of whether or not to get a RAID controller.
 
If you are an IT pro, then you should know that connecting your RAID to a different controller can break everything completely.
I would build a new RAID and copy everything over.
So, do you dare do it without backing up your data to additional storage, like other disks or tape?
 
After 5 years of 24/7 use, the drives are the next most likely thing to break. I wouldn't recycle them; I'd build a new machine with new drives, then copy the data.
 
Hello,

I am contemplating building a new server for my home. The current server has been running 24/7 for over 5 years and at this stage, I am starting to get nervous and expect a major hardware failure soon.

Currently, the server uses a P35 chipset motherboard (running a Q6600 CPU) and has six 750 GB SATA drives connected to the onboard Intel RAID controller. I don't remember the storage controller name offhand (maybe ICH8 or 9?), but I'm wondering what will happen if I put in an X79 board and connect the drives to the Intel controller. I know that some of the storage controllers were backwards compatible and would see the volume and make it available; would a RAID volume from the P35 chipset era easily transfer over to the X79 era? I'm thinking it will but hope someone has tried.

The reason I ask is that if I do build a new server, I will probably use the existing disk array for now as I don't want to spend the money on 6 new drives at this stage.

Thanks

Frankly, as dave_the_nerd mentions, the drives are the most likely part to fail, so those should be your primary concern for replacement. Over the past ~15 years I've had multiple machines running 24x7 in my place and I can only think of 1 that had a straight-up motherboard failure at some point. As long as you control the heat and humidity within reason and blow out the dust once in a while, most of them will go indefinitely.

6x750G is only 4.5TB so you don't need to buy 6 new drives. You could get away with 4x2TB and have more space or 3 and have slightly less with RAID5. And if you use software RAID like mdadm on Linux you can move the drives to new servers, other ports, etc without any reconfiguration or worry on your part.
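The capacity comparison above is easy to double-check. A quick back-of-the-envelope sketch in Python, using the drive counts and sizes from the posts and assuming single-parity RAID 5 overhead:

```python
def raid5_usable_tb(drive_count, drive_tb):
    """Usable capacity of a RAID 5 set: one drive's worth goes to parity."""
    return (drive_count - 1) * drive_tb

# Current array: six 750 GB drives (4.5 TB raw).
print(raid5_usable_tb(6, 0.75))  # 3.75 TB usable

# Proposed: four 2 TB drives -- more usable space than the old raw total.
print(raid5_usable_tb(4, 2.0))   # 6.0 TB usable

# Or three 2 TB drives -- slightly less than the old 4.5 TB raw.
print(raid5_usable_tb(3, 2.0))   # 4.0 TB usable
```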
 
If you are an IT pro, then you should know that connecting your RAID to a different controller can break everything completely.

Hence the reason I asked, because moving from one Intel controller to another Intel controller has worked in the past.

I would build a new RAID and copy everything over.
So, do you dare do it without backing up your data to additional storage, like other disks or tape?

See earlier post (everything is backed up).
 
After 5 years of 24/7 use, the drives are the next most likely thing to break. I wouldn't recycle them; I'd build a new machine with new drives, then copy the data.

Over the last 5 years, I've lost 2 of the drives and IIRC, 3 of the case fans. I suspect the power supply and drives are going to start croaking soon. Also, I think I'd likely just recycle them into non-critical, backup storage.
 
Frankly, as dave_the_nerd mentions, the drives are the most likely part to fail, so those should be your primary concern for replacement. Over the past ~15 years I've had multiple machines running 24x7 in my place and I can only think of 1 that had a straight-up motherboard failure at some point. As long as you control the heat and humidity within reason and blow out the dust once in a while, most of them will go indefinitely.

6x750G is only 4.5TB so you don't need to buy 6 new drives. You could get away with 4x2TB and have more space or 3 and have slightly less with RAID5. And if you use software RAID like mdadm on Linux you can move the drives to new servers, other ports, etc without any reconfiguration or worry on your part.

Yeah, I was thinking of going with fewer, larger drives and expanding the array down the line to save money now while getting more reliable components. I'm also going to go with only 32 GB of RAM for now and then expand to 64 GB later (free ESXi supports a max of 32 GB of physical RAM, but I read that a $500 VMware Essentials license removes that limit and gives other features).

The other issue I am running into is that I've been toying with running ESXi, and in order to stay on the HCL, I'm probably going to be spending much more than I originally envisioned, especially with regard to a dedicated RAID controller. I was looking at an Intel RAID controller earlier that was on the HCL (around $500, IIRC) and need to do a little more research on that.
 
Over the last 5 years, I've lost 2 of the drives and IIRC, 3 of the case fans. I suspect the power supply and drives are going to start croaking soon. Also, I think I'd likely just recycle them into non-critical, backup storage.

A PSU failure would, more than likely, not be a big deal.

Buy nicer case fans. 😛

But yeah, a quartet of 2TB drives in RAID 5 or 6 would be a nice improvement from where you are now, whether you put them in a new system or just replaced the existing array.

As far as it working before because both controllers were Intel, I will only say this: in my ignorance, I have twice tried to migrate RAID arrays, always between motherboards with the same brand but different model of controller. Sometimes it works, sometimes it doesn't. It has never gone 100% smoothly.

Copying from an old array to a new one is a hassle, to be sure, but it's a comprehensible one. You can set the copy up, walk away, grab coffee, watch Netflix, and not feel like you're trying to bash your head through a brick wall.
 
So, let me ask a question about drives in particular. I'm having trouble phrasing it, so I hope I can explain it adequately. At what capacity are we seeing the reliability of drives starting to die off? I remember that for the longest time, the 2 TB drives seemed to be pretty unreliable -- is that still the case?

Also, with this in mind, what individual drive capacity would have you considering RAID6 over RAID5? If the bigger drives are really that unreliable and a drive dies, the chances of another dying on a RAID rebuild may make it worthwhile to use RAID6 instead. Thoughts?
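The rebuild-risk intuition above can be given rough numbers. Consumer drives are often rated at one unrecoverable read error (URE) per 1e14 bits; under that rate (an assumption, and real errors aren't independent, so treat this strictly as an order-of-magnitude guide), the chance of hitting at least one URE while reading every surviving drive during a RAID 5 rebuild works out like this:

```python
def p_rebuild_ure(surviving_drives, drive_tb, ure_per_bit=1e-14):
    """Rough probability of at least one unrecoverable read error while
    reading all surviving drives front-to-back during a RAID 5 rebuild.
    Assumes independent bit errors at the quoted URE rate."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

# Four 2 TB drives in RAID 5: a rebuild reads the 3 survivors (~38%).
print(f"{p_rebuild_ure(3, 2.0):.0%}")

# Same layout with 6 TB drives: noticeably worse (~76%).
print(f"{p_rebuild_ure(3, 6.0):.0%}")
```

This is the usual argument for RAID 6 once individual drives reach the multi-terabyte range: the second parity stripe means a single URE during a rebuild is recoverable rather than fatal.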
 
Hence the reason I asked, because moving from one Intel controller to another Intel controller has worked in the past.

Intel Matrix RAID uses firmware at boot to present the physical drives to the operating system as a logical volume (or volumes) that looks like a physical drive. But the motherboard is not doing any other work; it's all software RAID. So unless you have identical firmware and software (driver) versions on the new computer, I would not even remotely consider just moving the drives over.

If you're using Linux for this storage brick and not dual-booting, then I wouldn't use IMSM, as there's no advantage. I'd just use the default mdadm superblock format (v1.2) and RAID 5 or 6. If RAID 5, just realize it's risky. Perhaps use the old NAS as the new backup.

If you're open to suggestions, I'd look at NAS4Free for the new system; its RAIDZ1 (single parity, like RAID 5) is more reliable than conventional RAID 5. Another option is NexentaStor Community Edition.

In any case, make sure you're doing periodic RAID scrubs to avoid one day encountering an unexpected read error for a file you haven't touched in five years.

I agree with the others: build a whole new system and rsync the data over NFS. Slower than a direct connection, but safe. I would make as few changes as possible to the source server to get the data off it, especially if you don't have some other backup. Note the log times on both machines just before the copy, and scan the logs on both machines after the copy, especially on the source, because any sector read errors will appear there.
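After a copy like that, it's worth confirming the destination actually matches the source before wiping the old array. A minimal verification sketch in Python (the choice of SHA-256 and the helper names are illustrative, not from any post above); it hashes in chunks so multi-gigabyte rips don't fill RAM:

```python
import hashlib
from pathlib import Path

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def tree_digests(root):
    """Map each file's path, relative to root, to its SHA-256 digest."""
    root = Path(root)
    return {p.relative_to(root): file_sha256(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify_copy(src, dst):
    """Return relative paths that are missing on either side or differ."""
    a, b = tree_digests(src), tree_digests(dst)
    return {p for p in a.keys() | b.keys() if a.get(p) != b.get(p)}
```

An empty set from verify_copy means every file made it over intact; anything it returns is worth re-copying before the old drives are retired.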
 