
The Linux file system and software RAID

her209

No Lifer
So I'm playing around with software RAID and the Logical Volume Manager on CentOS 4.8 in a VMware Server VM. Is there a best practice for laying out the file system? E.g., I have /boot on a RAID-1, swap on a RAID-0, and everything else on a RAID-5 logical volume. What is usually done for /home and /var?
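A layout like that could be sketched with mdadm and LVM roughly as below. This is only an illustration, not a tested recipe: the device names, partition numbers, and sizes are assumptions you'd adjust to your own VM, and mdadm --create is destructive to whatever is on those partitions.

```shell
# Sketch of the layout described above (device names/sizes are assumptions):
#   /dev/md0 = RAID-1 for /boot
#   /dev/md1 = RAID-0 for swap
#   /dev/md2 = RAID-5 carrying an LVM volume group for everything else
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

# Put LVM on the RAID-5 array; separate LVs let you size /, /home, /var independently
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 10G -n root vg0
lvcreate -L 20G -n home vg0
lvcreate -L 5G  -n var  vg0
```

Keeping /home and /var on their own logical volumes makes it easy to grow them later with lvextend without repartitioning.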
 
I've always kept /home on a RAID-1 mirrored pair. Since a disk failure last year, I've also kept / and /MUSIC on RAID-1 mirrors. I've read more than a few emails crying about busted RAID-5 arrays to feel safe using it on a home machine. If you are religious about backups it might work for you. With disk prices plunging, it seems like too much trouble to go with RAID-5 just to save the cost of redundant disks.
 
I just run RAID-1 on the whole PC and RAID-5 on the backup server. I also use BackupPC for incremental backups. The lack of undelete in Linux makes this critical.

The only reason I use RAID at all is so that if a disk dies I don't have to waste my time rebuilding my system. Also note that to boot from RAID-1 you have to install GRUB on both disks.
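Installing GRUB on both disks can be done from the interactive GRUB-legacy shell (the version shipped with CentOS 4). The session below is a sketch; the device names are assumptions for a two-disk mirror. The trick is remapping (hd0) to the second disk so it installs a boot sector that works even when it becomes the only disk.

```shell
# GRUB-legacy interactive shell (disk names assumed: /dev/sda, /dev/sdb)
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)          # partition holding /boot
grub> setup (hd0)           # install to the first disk's MBR
grub> device (hd0) /dev/sdb # remap hd0 to the second disk
grub> root (hd0,0)
grub> setup (hd0)           # install to the second disk's MBR
grub> quit
```

With this done, either disk can boot the machine on its own after the other one fails.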
 
Yes, and configure it properly as well. When you're done there should be several options on the GRUB screen 🙂
 
By default, that is only on the first drive. If it dies, you won't boot up, which is awkward when you're using RAID-1 for high availability.

Here are some instructions on fixing that.

http://www.jms1.net/grub-raid-1.shtml
I built another VM with a /boot partition and an LVM partition containing the / and swap volumes, both in RAID-1. After installing the OS, I took a snapshot, then removed the first drive (/dev/sda) and powered on the VM. It hung. I reverted to the snapshot and followed the instructions in the link you provided. I then shut down the VM, removed /dev/sda again, and powered it back on, and it booted. Cool! But on the GRUB menu I still see the same two options as before, i.e., no duplicate entries with both drives attached.

Question regarding GRUB: my understanding is that Stage 1 of GRUB lives in the MBR, i.e., the first 512 bytes of the first disk. Does that mean /boot/grub contains Stage 2 of GRUB?
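You can see the stage files for yourself on a GRUB-legacy system. The exact file names vary by filesystem, so treat this listing as a sketch: stage1 is the 512-byte image that gets copied into the MBR, the *_stage1_5 files bridge to the filesystem, and stage2 plus menu.lst live in /boot/grub.

```shell
# Inspect the GRUB-legacy stage files (typical layout; names may differ):
ls /boot/grub
# stage1, e2fs_stage1_5 (and friends), stage2, menu.lst, ...

# Peek at the first sector of the disk, which holds the installed
# copy of stage 1 plus the partition table:
dd if=/dev/sda bs=512 count=1 2>/dev/null | hexdump -C | head
```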
 
Yes. I didn't read through those instructions; with the method I follow, I edit the GRUB menu so there are entries for booting off each drive, and off each drive in single-user mode. This is with Debian.
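For a CentOS-style setup, menu entries like the following would give you that choice of drives at boot. This is a hypothetical /boot/grub/menu.lst fragment: the kernel and initrd names, and the root LV path, are assumptions you'd replace with your own.

```shell
# Hypothetical GRUB-legacy menu.lst entries (kernel/initrd names assumed)
title CentOS (disk 1)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-89.EL ro root=/dev/vg0/root
    initrd /initrd-2.6.9-89.EL.img

title CentOS (disk 2)
    root (hd1,0)
    kernel /vmlinuz-2.6.9-89.EL ro root=/dev/vg0/root
    initrd /initrd-2.6.9-89.EL.img

title CentOS (disk 1, single-user)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-89.EL ro root=/dev/vg0/root single
    initrd /initrd-2.6.9-89.EL.img
```

With entries like these, if one disk is missing you can still pick a working entry at the GRUB screen instead of sitting at a hang.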
 
Drives are cheap, IMO. I've had clients pull a failing disk on a server three hours away, mail it in for replacement, and swap the new one in, and I never had to go there. I like that.
 