
Linux RAID problems

Bremen

Senior member
My problem is that I have two drives half the size of two others. As such, I thought I'd create a raid0 with the smaller drives and use it in a raid5 with the larger two. I can make the array fine and everything runs, but if I try to start the array with 'raidstart' it fails to detect the raid0 array (even though the raid0 is running). I've tried without a superblock, since I assumed autodetection may be the problem; no dice. I then tried using LVM instead of the raidtools to combine the smaller drives, and hit the same exact problem. So I was left wondering: is it even possible to use a raid/lvm device inside another raid? Also, if anyone has another method I could use to combine the smaller disks into a single unified device for the RAID5, I'd love to hear it.
 
I just tested it in VMWare here and it worked fine.

$ cat /proc/mdstat
Personalities : [raid0] [raid5] [raid4]
md1 : active raid5 sde[2] sdd[1] md0[0]
4193920 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid0 sdc[1] sdb[0]
2097024 blocks 64k chunks

unused devices: <none>
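For reference, a sketch of the mdadm commands that could produce a layout like the one above (the device names sdb through sde are assumptions, and these need root on real disks):

```shell
# Build the RAID0 out of the two smaller drives first
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Then use the RAID0 device as an ordinary member of the RAID5
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/md0 /dev/sdd /dev/sde

# Check the result
cat /proc/mdstat
```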
 
Were you able to start the array with 'raidstart'? I could make the array with 'mkraid' and it would run, but I was never able to restart the array.

Well, I settled on cutting the large disks in half and doing a raid5, then using the unused partitions on the large disks for a raid1. Anyway, the performance was quite bad with the raid5 over the raid0: it was only syncing at about 500K/sec, as opposed to 27000K/sec for the straight disks. I'll then try to use linear mode to combine the raid5 and raid1.
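That linear concat can be done with mdadm as well; a sketch, assuming the raid5 ended up as /dev/md1 and the raid1 as /dev/md2:

```shell
# Append the RAID1 after the RAID5 as one linear (non-striped) device
mdadm --create /dev/md3 --level=linear --raid-devices=2 /dev/md1 /dev/md2
```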
 
I never did raidstart, it came right up after mdadm was done creating it.
Which wasn't my problem :0)

I'm sorry I was unclear, but I could create the array fine, and it would run fine (if a tad slow, but I digress). What was happening was that if I stopped the array and started it again, it would fail to find the raid0 array. This basically meant that if I ever rebooted I was screwed :0) Anyway, the current setup works fine and it's the same amount of storage space, so...
 
Maybe I'll play with it a bit more in my VM then. But even if it can't find the RAID0 array, the RAID5 should still come up in degraded mode; not ideal, but you should be able to use mdadm to re-add the RAID0 element and let it rebuild.
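One thing that sometimes helps nested arrays reassemble in the right order at boot is listing them explicitly in mdadm.conf, so the RAID0 exists before the RAID5 is assembled. A sketch only; the file location and device names vary by distro:

```
# /etc/mdadm.conf (illustrative; some distros use /etc/mdadm/mdadm.conf)
DEVICE /dev/sd[b-e] /dev/md0
ARRAY /dev/md0 level=raid0 devices=/dev/sdb,/dev/sdc
ARRAY /dev/md1 level=raid5 devices=/dev/md0,/dev/sdd,/dev/sde
```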
 
Uhhh, use mdadm like nothinman said, no reason to raidstart... actually I thought that raidstart and mkraid and the like were deprecated, maybe I'm wrong... dunno...
 
I do think that raidstart and mkraid are deprecated; hell, I can't even find them in sid or edgy. But mdadm doesn't fare any better: creating the array and rebooting still causes it to come up with the RAID0 device dropped out of the RAID5 array. It's a simple matter to re-add the device with 'mdadm --re-add /dev/md1 /dev/md0', but then it has to rebuild the parity. Personally, at this point I think I'd give up and create a logical volume out of the 2 smaller disks and use that in the array. =)
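Something like this is what I had in mind for the LVM route (a sketch; the device names, and the "smallvg"/"concat" names, are made up):

```shell
# Turn the two smaller disks into a single logical volume
pvcreate /dev/sdb /dev/sdc
vgcreate smallvg /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n concat smallvg

# /dev/smallvg/concat can then be handed to mdadm as the third RAID5 member
```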
 
Personally at this point, I think I'd give up and create a logical volume out of the 2 smaller disks and use that in the array. =)
I tried LVM, but still had the same problem. I changed my setup a little as I mentioned in a previous post to a raid 5 and a raid 1 joined in a linear array, and it works fine.

As for mdadm, I was under the impression it was just a frontend for raidtools.
 