
RAID'ing mounted partitions in Linux - REDUX

Netopia

Diamond Member
Greetings again!

I managed to get my RAID devices created and partitions assigned, and all seemed well. /proc/mdstat showed two happy mirrors and everything was running nicely... until I rebooted.

Now when I reboot, I get an fsck failure and get dropped to a recovery shell. From the shell, I can check /proc/mdstat and see that (usually) one array "might" be running, but degraded with only one drive, plus two other arrays (that aren't ones I created) with one drive each, inactive.

That sounded pretty convoluted. Hmmm... I've got two arrays, md3 and md4. On boot, they fail and mdstat shows md3 degraded, and then /dev/md_d3 as inactive and /dev/md_d4 as inactive.

If I do an "mdadm --examine --scan", it shows the proper info for the arrays that should be there:

ARRAY /dev/md3 level=raid1 num-devices=2 UUID=a2670247:370af5e6:e3c6662c:564643c7
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=f9beb73d:8545f5c7:e3c6662c:564643c7

If I stop all the devices showing under mdstat and then issue an "mdadm --assemble --scan", then md3 and md4 are found and built properly and I can manually mount them (mount -a) and get into my Ubuntu desktop.
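For anyone hitting the same thing, the manual recovery sequence described above can be sketched as a short shell session. The device names (md_d3, md_d4, md3, md4) are the ones from this thread; substitute whatever your mdstat actually shows:

```shell
# From the recovery shell: stop the misassembled/inactive arrays first,
# since mdadm won't reassemble members that are already claimed.
mdadm --stop /dev/md_d3
mdadm --stop /dev/md_d4
mdadm --stop /dev/md3       # if it came up degraded

# Rescan the superblocks on all partitions and rebuild the arrays.
# Note: two dashes on --scan.
mdadm --assemble --scan

# Mount everything listed in fstab now that the md devices exist.
mount -a
```

This gets you running for the session, but it has to be repeated every boot until the boot-time config is fixed.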

Here's some other output:

fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# / was on /dev/sdc2 during installation
UUID=51d94079-ccae-40cb-96e6-d995648ac739 / ext2 relatime,errors=remount-ro 0 1
# /boot was on /dev/sdc1 during installation
UUID=bbdba62b-629d-4f7d-b91e-154448666854 /boot ext2 relatime 0 2
# /var was on /dev/sda2 during installation
UUID=1db94bf7-575e-4997-b8b3-bf6aaa457fe4 /var ext4 relatime 0 2
# /tmp was on /dev/sda6 during installation
UUID=e6d424ac-71b8-4feb-a21b-3d509fdb4958 /tmp ext4 relatime 0 2
# swap was on /dev/sda5 during installation
UUID=9770b9bf-659e-4e0d-9383-9bf3c50e678b none swap sw 0 0
# swap was on /dev/sdb5 during installation
# UUID=d50d1fef-8f62-451a-a5a4-130dc31807f0 none swap sw 0 0
UUID=9ece3c73-bb95-4966-95da-d93d2ff880f0 none swap sw 0 0
# /home was on /dev/sda3 during installation
UUID=36e32cb5-bced-4eb0-bbc1-639d5f8be7f4 /home ext4 relatime 0 2
# /storage was on /dev/sda4 during installation
UUID=09286288-7ed9-4db1-8320-d2ed039b2a79 /storage ext4 relatime 0 2

/proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid1 sda4[0] sdb4[1]
674561216 blocks [2/2] [UU]

md3 : active raid1 sda3[0] sdb3[1]
50002240 blocks [2/2] [UU]


I'm not sure what else to do or where to look. I've used mdadm a couple of times, but am obviously not an expert. I would have just set up the RAID stuff whilst installing Ubuntu, but there was a problem with the alternate CD hanging at about 71% (known bug), so I had to install from the desktop CD. Since I'm booting from an 8GB CompactFlash card, I put /var, /storage and /home on one of the 750GB SATA drives and figured I'd mirror /home and /storage with mdadm after the initial install.

Anyway, I'm stuck. The boot hangs because the RAID arrays don't get assembled correctly, so they don't get mounted, and for some reason /var doesn't get mounted either.

Any help, any comments... much appreciated.

Joe

 
Well... it's fixed. I thought I should follow up, though, just so there's some resolution.

First, I discovered that Debian-based systems keep mdadm.conf in /etc/mdadm/, not in the root of /etc.

I backed up the original mdadm.conf and copied the one I had created earlier with:
"mdadm --examine --scan >> /etc/mdadm.conf"
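For the record, on Debian/Ubuntu the boot-time assembly is done by the initramfs, which bakes in a copy of mdadm.conf. So after fixing the config in its proper location, it's also worth rebuilding the initramfs so the boot scripts actually see the change. A minimal sketch:

```shell
# Append the scanned array definitions to the Debian-style location
# (/etc/mdadm/mdadm.conf, not /etc/mdadm.conf).
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs for the current kernel so the boot-time
# mdadm scripts pick up the new array definitions.
update-initramfs -u
```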

Then I rebooted. Unfortunately, the system still failed the fsck check. This time it reported that the filesystem size recorded in the superblock didn't match the physical size of the device, and additionally that "Either the superblock or the partition table is likely to be corrupt".

I googled around and found the solution to that. For each array, I had to do the following:

e2fsck -f /dev/md#
resize2fs /dev/md#

After I had done that on each of the arrays, I rebooted and everything was fine.
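For the two arrays in this thread, the fix above amounts to the following (the arrays must be unmounted while you do this):

```shell
# Run the filesystem check and resize on each array in turn.
# md3 and md4 are the arrays from this thread.
for dev in /dev/md3 /dev/md4; do
    e2fsck -f "$dev"     # -f forces a full check even if the fs looks clean
    resize2fs "$dev"     # with no size argument, grows/shrinks the
                         # filesystem to match the device size
done
```

The size mismatch happens because the md superblock takes space at the end of the partition, so the filesystem created on the raw partition is slightly larger than the md device; resize2fs trims it to fit.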

Whew!

Joe
 
Another option that seems to work across all systems is to put "mdadm --assemble /dev/md0" in a startup script. It will automatically find the proper drives and assemble the array.

On some systems I don't need to do it, others I do.
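One way that workaround might look (the file name and placement are assumptions, not something from this thread):

```shell
#!/bin/sh
# Hypothetical snippet for an early boot/rc script: assemble the
# array before local filesystems are mounted. The "|| true" keeps
# boot from aborting if the array is already up.
mdadm --assemble /dev/md0 || true
```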
 
Another option that seems to work across all systems is to put "mdadm --assemble /dev/md0" in a startup script. It will automatically find the proper drives and assemble the array.

As long as the drives themselves spin up right, you've got the partition types set to RAID autodetect, and your mdadm.conf is set up right, the included initramfs scripts should bring up the arrays just fine without any hacks like that.
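A couple of quick checks for the conditions mentioned above (device names here follow this thread's /dev/sda layout; adjust to taste):

```shell
# Check the partition type: RAID members should show as
# "Linux raid autodetect" (type 0xfd) in the partition table.
fdisk -l /dev/sda

# Confirm the kernel/blkid sees the member partitions as RAID members.
blkid /dev/sda3 /dev/sda4    # should report TYPE="linux_raid_member"
```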
 