Backing up Linux

Circlenaut

Platinum Member
Mar 22, 2001
2,175
5
81
Let me start by saying that I'm only a little above a complete Linux noob.

My Setup:

So I've recently decided to go with a Fedora 10 + Amahi setup for my home network. The thing is, I also want to back up the entire server itself. I have six 1.5TB HDs connected via port-multiplier eSATA, and about 4+TB of data on the main array that needs to be backed up. I plan on taking these 1.5TB HDs and storing them in a safe off-site location, and I want to do this every month.

Solution I'm looking for:

Maybe something that can take my data and spit out 2GB chunks that could be written to the 1.5TB HDs? Somehow it would need to be spread across the drives (I would need to number the HDs for identification?).

Any ideas?
 

Scarpozzi

Lifer
Jun 13, 2000
26,392
1,780
126
You should look at tar. It's a simple utility that can be scripted to back up whatever locations you want.

Try writing a script to back up something, then use cron to run the script daily or weekly.

Here's a link (I just googled it); if you google 'tar backup', there'll be other sites.
http://lantech.geekvenue.net/c...k/994016279/index_html

If you want a full-service backup program, look at AMANDA. It can be set up to use tape libraries, etc. I just think that's a lot more complicated than you want/need to make it. tar will do everything you need. Just remember that it will be disk/processor intensive, so you'll want to make sure it runs when you're not around. Backups can sometimes take hours depending on how much data you're dealing with.
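A minimal sketch of what that script might look like, piping tar through `split` to get the fixed-size chunks the OP asked for. All paths and sizes here are demo values I made up so it's cheap and safe to try; for real use you'd point it at the array's mount point and split with something like `-b 2G`:

```shell
#!/bin/sh
# Hypothetical tar-plus-split backup sketch. Demo paths and a tiny 1KB
# chunk size are used here; substitute your real source and -b 2G.
rm -rf /tmp/tar-demo
SRC=/tmp/tar-demo/src
DEST=/tmp/tar-demo/dest
mkdir -p "$SRC" "$DEST"
dd if=/dev/zero of="$SRC/data.bin" bs=1k count=8 2>/dev/null  # demo data
# Stream the compressed archive into chunks named backup.tar.gz.aa, .ab, ...
tar -czf - -C "$SRC" . | split -b 1k - "$DEST/backup.tar.gz."
ls "$DEST"
# To restore, concatenate the chunks back into tar:
#   cat "$DEST"/backup.tar.gz.* | tar -xzf - -C /some/restore/point
```

Each numbered drive would then hold a known subset of the chunk files, and `cat`-ing them back in order reassembles the archive.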
 

Scarpozzi

Lifer
Jun 13, 2000
26,392
1,780
126
One more point: since you're backing up so much data, you can script a separate backup for each partition or directory you want, and use cron to stagger when they begin.
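For example, a crontab along these lines (the times, paths, and directory names are purely illustrative) staggers three tar jobs a couple of hours apart on Sunday nights:

```
# Edit with `crontab -e`. Times and paths below are made-up examples.
# m  h  dom mon dow  command
0   1   *   *   0    tar -czf /backup/home.tar.gz -C / home
0   3   *   *   0    tar -czf /backup/etc.tar.gz  -C / etc
0   5   *   *   0    tar -czf /backup/srv.tar.gz  -C / srv
```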
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
Check out CrashPlan: http://www.crashplan.com

It can do online backups and back up to removable media, and it's very easy to manage. What I did was get a few friends to each dedicate a 1TB hard disk to backups. We then gave each other our friend codes and use each other for backups, so I have two remote offsite backups of all my important data, all encrypted with my own key. I also use it to maintain a local backup to a USB disk.

It works very well. Plus (I'm about to do this myself), for $5.00 a month (if you pay 3 years up front) you can get unlimited online backup storage space for all of your computers.

They even have a way for you to send them the initial seed (so you don't have to push 1TB of data over your pipe).
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
You probably want to look at something like rsync or a wrapper around it like backuppc or rdiff-backup to save space.

How about image backup? Is that what dd is for?

It can create images of anything but it's not recommended on mounted filesystems because it'll bypass the pagecache and you'll probably end up with a corrupt or incomplete image.
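To make the dd workflow concrete, here's a hedged sketch. Normally you'd point `if=` at an unmounted device such as a hypothetical /dev/sdb1; a small scratch file stands in below so the commands are safe to run as-is:

```shell
# dd imaging sketch. In real use, SRC would be an UNMOUNTED partition,
# e.g. if=/dev/sdb1 (hypothetical device name).
SRC=/tmp/dd-demo-src.img
IMG=/tmp/dd-demo-copy.img
dd if=/dev/zero of="$SRC" bs=1M count=4 2>/dev/null   # stand-in "partition"
dd if="$SRC" of="$IMG" bs=1M 2>/dev/null              # block-for-block copy
cmp -s "$SRC" "$IMG" && echo "images match"
```

Because dd copies every block, the image is exactly the size of the source, used space or not, which is the free-space caveat discussed below.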
 

lxskllr

No Lifer
Nov 30, 2004
60,062
10,548
126
Originally posted by: Nothinman
You probably want to look at something like rsync or a wrapper around it like backuppc or rdiff-backup to save space.

How about image backup? Is that what DD is for?

It can create images of anything but it's not recommended on mounted filesystems because it'll bypass the pagecache and you'll probably end up with a corrupt or incomplete image.

Gotcha. That should work by booting to a USB Linux, then saving an image to a different thumb drive or network drive, right? My specific task is for the SSD in my Eee, so the sizes aren't going to be huge.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Gotcha. That should work by booting to a USB Linux, then saving an image to a different thumb drive or network drive, right? My specific task is for the SSD in my Eee, so the sizes aren't going to be huge.

Yeah, it'll work, but it'll save everything, including free space, since it doesn't understand filesystems. It's usually much better to use tar, partimage, ntfsclone, etc.
 

lxskllr

No Lifer
Nov 30, 2004
60,062
10,548
126
Originally posted by: Nothinman

Yeah, it'll work, but it'll save everything, including free space, since it doesn't understand filesystems. It's usually much better to use tar, partimage, ntfsclone, etc.

Ok, cool. Thanks a lot :^)
 

mundane

Diamond Member
Jun 7, 2002
5,603
8
81
We use rsnapshot, typically copying /home, /etc, and a machine-specific backup directory which is itself populated by cron jobs (these copy/back up elements that aren't best served by rsync directly, such as `svnadmin hotcopy` or `mysqldump` output, stuff that requires synchronized access).

Having said that, it's probably not ideal for your setup: if the entire VM is somehow corrupted, it requires a specific base system install (via xen-create-image and apt-get) before applying the config files. (This is in contrast to keeping a full OS image; it saves space but requires more setup before restoration is complete.)

The LVM route (orthogonal to the above): use each of the eSATA drives as an LVM Physical Volume, aggregate them into a Volume Group, and treat that as a single device (a Logical Volume). I'm unclear how that meshes with your offsite replication, though.
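The LVM steps above, sketched as commands. Device names and the volume group name are hypothetical, and all of this requires root, so treat it as an outline rather than something to paste in:

```
# LVM sketch -- /dev/sdb..sdd and "backupvg" are made-up names; needs root.
pvcreate /dev/sdb /dev/sdc /dev/sdd           # one Physical Volume per eSATA disk
vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd  # aggregate into a Volume Group
lvcreate -l 100%FREE -n backuplv backupvg     # one big Logical Volume
mkfs.ext3 /dev/backupvg/backuplv              # filesystem on top
```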
 

Circlenaut

Platinum Member
Mar 22, 2001
2,175
5
81
Originally posted by: mundane
We use rsnapshot, typically copying /home, /etc, and a machine-specific backup directory which is itself populated by cron jobs (these copy/back up elements that aren't best served by rsync directly, such as `svnadmin hotcopy` or `mysqldump` output, stuff that requires synchronized access).

Having said that, it's probably not ideal for your setup: if the entire VM is somehow corrupted, it requires a specific base system install (via xen-create-image and apt-get) before applying the config files. (This is in contrast to keeping a full OS image; it saves space but requires more setup before restoration is complete.)

The LVM route (orthogonal to the above): use each of the eSATA drives as an LVM Physical Volume, aggregate them into a Volume Group, and treat that as a single device (a Logical Volume). I'm unclear how that meshes with your offsite replication, though.

What if one drive fails in the LVM? Only the data on that drive is gone, right? Not the whole LVM?
 

Red Squirrel

No Lifer
May 24, 2003
70,592
13,807
126
www.anyf.ca
dd is great for EXACT images, such as a specially configured USB stick, but I would not use it for backups. Since it works at the raw level and reads byte by byte, it is very slow, and your resulting image will be the size of the DISK/PARTITION you are backing up. Come to think of it, what are good imaging programs for Linux? I usually use Acronis, but there must be Linux-based ones out there.

For files I use rsync, and have it scheduled. With key-pair setup over ssh you can even remotely back up servers or push backups to a remote server. Great for offsite backups.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: Red Squirrel
...
Come to think of it, what are good imaging programs for Linux? I usually use Acronis but there must be Linux based ones out there.

Partimage. If you want a nice bootable CD and menu driven interface, try the Clonezilla livecd.
 

skyking

Lifer
Nov 21, 2001
22,775
5,937
146
With a large data server like that, I prefer having the OS on a separate software RAID1 array. Just a couple of small drives; 40GB is more than enough. It allows you to back up the server itself separately from the data array, and it opens up some possibilities for unmounting the data array for maintenance without booting into a live CD. It also means a major OS problem can be recovered from quickly, instead of restoring a huge 4TB ball of stuff.
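For reference, creating that small OS mirror might look like the following. Device names are hypothetical and it all requires root, so this is an outline only:

```
# Software RAID1 sketch for the OS disks -- /dev/sda1 and /dev/sdb1 are
# made-up partition names; needs root.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0   # filesystem for the OS, separate from the data array
```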