
Linux Backup System

Anteaus

Platinum Member
I have a bit of general knowledge when it comes to Linux, but I fall short rapidly when it comes to application.

Here is my situation: I'm putting together a basic Linux file server. It will have one system drive and two data drives. I have no plans to use any kind of RAID at this stage; I will mount each drive normally.

One of the data drives will be shared on the network. What I would like to do is use a program, or write a script, that will compare the shared drive with the second data drive, automatically check for differences, and copy/overwrite/remove files as necessary at whatever time interval I set. I realize this is essentially what RAID 1 is designed to do, albeit on a schedule rather than in real time, but I'm specifically avoiding RAID. Also, I like the idea that the secondary drive won't be spinning every time I use the server.

My ideal solution would be a command-line one, with or without a time component, but I'm open to any suggestions. Full automation is not strictly necessary. Thanks in advance.
 
Whatever you choose can just be run via cron. Options include rdiff-backup, a custom rsync script, or one of the many wrappers around rsync.
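For instance, a crontab entry (added via crontab -e) to fire a backup script nightly at 2 AM; the script path is a placeholder, not from this thread:

```
# min hour day-of-month month day-of-week  command
0 2 * * * /usr/local/bin/backup.sh
```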
 
If you're lazy, check out CrashPlan; it can run headless and it works great: de-duplication, compression, and real-time backup.
 
Seconding a simple shell script around rsync plus a cron job. Probably something like -a and --delete for what you want.
 
My rsync wrapper:

Makes for 90 days' worth of snapshot backups, based on hard links.

Code:
#!/bin/bash
#BACKUP shell script
#makes use of rsync and snapshots via hard links
#allows for a directory-based "timeline" of snapshots without wasting hard drive space

#sources
STUDIES_SRC=/studies/
HOME_SRC=/home/gooberlx/
SRV_SRC=/srv/

DEST_TOP=/backup/Linux/vida3
PROGRESS_DIR=$DEST_TOP/backup.inprogress

#link targets (not referenced below)
STUDIES_LNK=$DEST_TOP/backup.0/studies
HOME_LNK=$DEST_TOP/backup.0/gooberlx
SRV_LNK=$DEST_TOP/backup.0/srv

#destinations
STUDIES_DST=$PROGRESS_DIR/studies/
HOME_DST=$PROGRESS_DIR/gooberlx/
SRV_DST=$PROGRESS_DIR/srv/

#commands
RSYNC=/usr/bin/rsync


#first, remove any in-progress dir left over from an interrupted run
[ -d "$PROGRESS_DIR" ] && rm -rf "$PROGRESS_DIR"

#then recycle any existing backup.90 (the oldest snapshot) as the in-progress dir
[ -d "$DEST_TOP/backup.90" ] && mv "$DEST_TOP/backup.90" "$PROGRESS_DIR"

#lastly, if there is still no in-progress dir, create one
[ ! -d "$PROGRESS_DIR" ] && mkdir "$PROGRESS_DIR"

#now rotate the snapshot directories: backup.N becomes backup.N+1
for i in {89..0}; do
	j=$((i + 1))
	[ -d "$DEST_TOP/backup.$i" ] && mv "$DEST_TOP/backup.$i" "$DEST_TOP/backup.$j"
done

#copy hard links from the most recent snapshot (now backup.1) into the in-progress dir,
#so unchanged files cost no extra space
[ -d "$DEST_TOP/backup.1" ] && cp -al "$DEST_TOP/backup.1/." "$PROGRESS_DIR"

#next, do the actual rsync copy for today's changes, removing deleted files
#srv, then gooberlx, then studies
"$RSYNC" -rluz --delete "$SRV_SRC" "$SRV_DST"
"$RSYNC" -rluz --delete "$HOME_SRC" "$HOME_DST"
"$RSYNC" -rluz --delete "$STUDIES_SRC" "$STUDIES_DST"

#finally, when finished, rename in-progress to backup.0
[ -d "$PROGRESS_DIR" ] && mv "$PROGRESS_DIR" "$DEST_TOP/backup.0"
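The space saving in the wrapper above comes from cp -al: each new snapshot starts out as a tree of hard links to the previous one, so only files that actually changed consume new space. A quick standalone demonstration (temporary paths; assumes GNU stat):

```shell
#!/bin/bash
# Show that cp -al duplicates a directory tree by hard-linking,
# not by copying file data.
snap_demo() {
    local work
    work=$(mktemp -d)
    mkdir "$work/backup.1"
    echo "file contents" > "$work/backup.1/data.txt"
    # -a preserves attributes, -l makes hard links instead of copies
    cp -al "$work/backup.1/." "$work/backup.0"
    # Both directory entries now share one inode: link count is 2
    stat -c %h "$work/backup.0/data.txt"
}
```

When rsync later updates a changed file, it writes a new copy and renames it into place by default, which breaks the link and leaves the older snapshots untouched.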

As mentioned, Crashplan is also a great backup program.

edit:
My wrapper assumes the previous night's backup completed, and doesn't check for or kill any backup process that's still running. This hasn't been a problem for me, so I haven't compensated for it. I probably should, though... maybe log the process ID to a file or something. Oh well, food for thought.
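One low-effort guard against overlapping runs (assuming flock from util-linux is available; the lock path is a placeholder, not part of the wrapper above) is to bail out at the top of the script:

```shell
#!/bin/bash
# Skip this run if a previous backup still holds the lock.
# flock -n: fail immediately instead of waiting for the lock.
LOCKFILE=${LOCKFILE:-/tmp/backup.lock}   # placeholder path

exec 200>"$LOCKFILE"
if ! flock -n 200; then
    echo "previous backup still running; exiting" >&2
    exit 1
fi

# ...rest of the backup script runs here; the lock is released on exit...
```

Because the lock lives on the open file descriptor rather than in a PID file, it disappears automatically when the script exits, even after a crash, so there is nothing stale to clean up.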
 