
Linux software raid or not

Red Squirrel

No Lifer
I'll be upgrading my server soon, which will also involve an OS reinstall since it will be a different mobo/CPU and will be 64-bit, so I may go with 64-bit Linux (not sure yet). Rather than having separate logical drives, I'm thinking of raiding them all together. I'll have a separate OS drive that's not raided, but for data I'm thinking of striping the rest together with RAID 0. Performance-wise, would this be better, the same, or worse? This is software RAID, not hardware.

As for the risk factor, I have a backup server and I do full disk backups nightly. The downtime involved if a disk dies will be a PITA, but it's not going to cost me anything. (I'm one of the few people who uses this server.)
 
Besides the fact that a new motherboard and CPU don't require a reinstall, and that you can install a 64-bit kernel with a 32-bit userland if you want, you have to decide whether disk I/O is that big of a problem for this server. RAID 0 will definitely be faster, that's the whole point of RAID 0, but if you're not having any problems with the current speed then you'll just be increasing the chance of failure for no reason.
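For reference, Linux software RAID is managed with mdadm. A minimal sketch of building a RAID 0 array, assuming two data disks at /dev/sdb and /dev/sdc (the device names, mount point, and mdadm.conf path are placeholders; substitute your own, and note Debian-based distros use /etc/mdadm/mdadm.conf):

```shell
# Create a striped (RAID 0) md array from two whole disks.
# /dev/sdb and /dev/sdc are assumed device names -- adjust for your system.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it.
mkfs.ext3 /dev/md0
mkdir -p /data
mount /dev/md0 /data

# Record the array definition so it can be assembled on boot.
mdadm --detail --scan >> /etc/mdadm.conf

# Check array status at any time.
cat /proc/mdstat
```

All of the above needs root and real (empty) disks, so try it on scratch devices first; anything on the member disks is destroyed by --create.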
 
Hmm, so I would actually not have to reinstall? I'm going from an AMD 2000+ to a 64-bit AMD, that's quite a big change.

Disk I/O is not a huge issue, but anything to improve performance may help a lot with VMs too. Now that I think of it, I could always just have a RAID 0 for the VMs only.
 
Most distros put just about every module available in the initramfs image, and udev loads the modules for all of the detected hardware on bootup. The only real issue is that if the storage controller changes then you might have to fix up /etc/fstab, but if you're using LVM, volume labels, UUIDs, etc. then that should be fine too.
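To make fstab survive a controller change, you can reference filesystems by UUID rather than by device node. A sketch, assuming /dev/sda2 is your data partition and the UUID shown is whatever blkid actually reports (the example value below is made up):

```shell
# Find the filesystem's UUID (example output; yours will differ):
blkid /dev/sda2
# /dev/sda2: UUID="3e6be9de-8139-11d1-9106-a43f08d823a6" TYPE="ext3"

# Then use that UUID in /etc/fstab, so the entry keeps working
# even if the disk moves from /dev/sda to /dev/sdb after the upgrade:
#
#   UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /data  ext3  defaults  0 2
```

The same idea works with labels (LABEL=data via e2label), or by mounting LVM logical volumes by their /dev/vgname/lvname path, which is also stable across controller changes.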
 
I was under the impression that RAID wasn't even possible, with only one drive, no matter how many partitions you put on it. And even if it is, it wouldn't be any faster, since you'd be using the same cache, and the same drive head.
 
I would not stripe logical disks on a single physical disk. Each read and write of your RAID blocks would most likely come from opposite ends of the disk, so the head would be seeking constantly. You will take a huge hit in performance. I see no reason to do it.

In other news, Linux software RAID is OK. AIX software RAID rocks!
 
Oh no, I'd be doing this with several drives. No point in raiding a single drive. I think it does let you do that, though, but I'd see no point.

But knowing I probably won't have to format for the upgrade, I will probably just keep it as is. I figured since I had to format anyway, I might as well change the configuration. The "new" board has SATA 1, so I might do a SATA hardware RAID in the future, though.
 