A free perl/rsync backup script
Authored by: icedtrip on Apr 15, '04 12:40:25AM
Below is the script I use to make backups, run nightly from cron. I had a mishap several months ago and lost all my data; since then I keep consistent backups of my /Users folder. I figured that if I have another mishap I will reinstall everything anyway, so there is no need for a full-drive backup. I have other scripts like this one backing up other areas, such as my WebServer folder. I use 'rsync', but anything could be used in its place. I have tested my backups and am not really worried about missing resource forks. Maybe that is naive, but under OS X I am not too concerned.

I got the idea for this script from someone else. It has been so long that I cannot remember who, so I cannot give that person credit by name, but this process (although heavily modified by me) did not originate with me. Hope this helps someone. Let me know what you think.


#!/bin/sh
# ----------------------------------------------------------------------
# incremental-set-backup script (9 backups spanning 2 drives)
# ----------------------------------------------------------------------
# this script calls upon 'rsync' to do daily backups to another partition,
# then uses 'cpio' to move old sets to an external drive.  two full sized
# backups will be made and 7 incremental sets will be produced.  with  
# the use of cron, this will provide 9 days worth of consecutive backups. 
# ----------------------------------------------------------------------

# ------------- partition and external drive locations -----------------

EXTBAK=/Volumes/VolumeName/backupfolder;
PARBAK=/Volumes/PartitionName/backupfolder;

# ------------- the process --------------------------------------------

# check to be sure script is run as root
if [ "$(id -u)" != 0 ]; then echo "Sorry, must be root.  Exiting..."; exit 1; fi

# incremental sets of /Users/ to EXTBAK using 'rsync' and 'cpio', 
# eventually making 2 full sets and 7 incremental sets

# process broken down into 5 easy steps.

# step 1: use 'rm' to delete the oldest set, if it exists:
if [ -d $EXTBAK/Users.7 ]; then
    rm -rf $EXTBAK/Users.7
fi

# step 2: use 'mv' to move the middle sets back by one, if they exist
if [ -d $EXTBAK/Users.6 ]; then
    mv $EXTBAK/Users.6 $EXTBAK/Users.7
fi
if [ -d $EXTBAK/Users.5 ]; then
    mv $EXTBAK/Users.5 $EXTBAK/Users.6
fi
if [ -d $EXTBAK/Users.4 ]; then
    mv $EXTBAK/Users.4 $EXTBAK/Users.5
fi
if [ -d $EXTBAK/Users.3 ]; then
    mv $EXTBAK/Users.3 $EXTBAK/Users.4
fi
if [ -d $EXTBAK/Users.2 ]; then
    mv $EXTBAK/Users.2 $EXTBAK/Users.3
fi
if [ -d $EXTBAK/Users.1 ]; then
    mv $EXTBAK/Users.1 $EXTBAK/Users.2
fi

# step 3: use 'cpio' to make a hard-link-only copy (except for dirs) of 
# the previous set, if it exists.  this will give 2 full sized backups 
# (Users & Users.0) and enable incremental sets to be made on the 2nd  
# harddrive instead of full copies
if [ -d $EXTBAK/Users.0 ]; then
    cd $EXTBAK/Users.0 && find . -print | cpio -pdl $EXTBAK/Users.1
fi

# step 4: use 'cpio' to make a copy of your main backup to EXTBAK if it exists
# 'cpio -pdl' will be used since it will make links "only when possible"
if [ -d $EXTBAK/Users ]; then
    cd $EXTBAK/Users && find . -print | cpio -pdl $EXTBAK/Users.0
fi

# step 5: use 'rsync' on /Users/ to create your main backup ('rsync' behaves
# like 'cp --remove-destination' by default, so when the destination exists,
# it is unlinked first)
rsync									\
	-va --delete							\
	/Users/ $PARBAK/Users ;

# step 6: use 'touch' to update the mtime of $EXTBAK/Users to reflect
# the set's time
touch $EXTBAK/Users ;

# voila...you got your bak


And that is it!

Looks promising ...
Authored by: kmue on Apr 15, '04 06:00:09AM
... but the directories are never created. In step 4, did you perhaps mean $PARBAK as the source? Here is my attempt (it still does not create the directories):

#!/bin/sh
# ----------------------------------------------------------------------
# incremental-set-backup script (9 backups spanning 2 drives)
# ----------------------------------------------------------------------
# this script calls upon 'rsync' to do daily backups to another partition,
# then uses 'cpio' to move old sets to an external drive.  two full sized
# backups will be made and 7 incremental sets will be produced.  with  
# the use of cron, this will provide 9 days worth of consecutive backups. 
# ----------------------------------------------------------------------

# -----------------------------------------------------------------------
# CONFIG
# -----------------------------------------------------------------------
SOURCE=/Volumes/Data/Users/example
PREFIX=test
EXTBAK=/Volumes/Data/Backup
PARBAK=/Volumes/Applications/Backup
NUMBAK=7

# -----------------------------------------------------------------------
# getopts
# -----------------------------------------------------------------------
while getopts :n opt; do
	case "$opt" in
		n)	NODO=1;;
		*)	echo >&2 "Usage: $0 [-n]"; exit 1;;
	esac
done

# -----------------------------------------------------------------------
# SUBROUTINES
# -----------------------------------------------------------------------
me=$(basename $0)
tell() { echo "`/bin/date '+%Y%m%d %H:%M:%S'` $me: $1"; }

# -----------------------------------------------------------------------
exec() {
# -----------------------------------------------------------------------
	tmp=/tmp/$me.exec.tmp
	if [ "$NODO" ]; then
		tell "exec: $*"
	else
		tell "exec: $*"
		eval "$*" > $tmp 2>&1
		if [ "$?" != 0 ]; then
			[ -s "$tmp" ] && tell "`/bin/cat $tmp`"
		fi
		/bin/rm -f $tmp
	fi
}

# -----------------------------------------------------------------------
# MAIN
# -----------------------------------------------------------------------

# check to be sure script is run as root
#if [ $(id -u) != 0 ]; then
#	echo "Sorry, must be root. Exiting..."; exit 1
#fi

TARGET=$PREFIX/$(basename $SOURCE)

# create the backup directories
mkdir -p $EXTBAK/$PREFIX
mkdir -p $PARBAK/$PREFIX

# incremental sets of /Users/ to EXTBAK using 'rsync' and 'cpio', 
# eventually making 2 full sets and 7 incremental sets

# process broken down into 5 easy steps.
# step 1: use 'rm' to delete the oldest set, if it exists:
[ -d $EXTBAK/$TARGET.$NUMBAK ] && exec rm -rf $EXTBAK/$TARGET.$NUMBAK

# step 2: use 'mv' to move the middle sets back by one, if they exist
t=$NUMBAK; s=$((t - 1))
while [ $t -gt 1 ]; do
	if [ -d $EXTBAK/$TARGET.$s ]; then
		exec mv $EXTBAK/$TARGET.$s $EXTBAK/$TARGET.$t
	fi
	t=$((t - 1)); s=$((t - 1))
done

# step 3: use 'cpio' to make a hard-link-only copy (except for dirs) of 
# the previous set, if it exists.  this will give 2 full sized backups 
# ($TARGET & $TARGET.0) and enable incremental sets to be made on the 2nd
# harddrive instead of full copies
if [ -d $EXTBAK/$TARGET.0 ]; then
	exec "cd $EXTBAK/$TARGET.0 && find . -print | cpio -pdl $EXTBAK/$TARGET.1"
fi

# step 4: use 'cpio' to make a copy of your main backup to EXTBAK if it exists
# 'cpio -pdl' will be used since it will make links "only when possible"
if [ -d $PARBAK/$TARGET ]; then
	exec "cd $PARBAK/$TARGET && find . -print | cpio -pdl $EXTBAK/$TARGET.0"
fi

# step 5: use 'rsync' on $SOURCE to create your main backup ('rsync' behaves
# like 'cp --remove-destination' by default, so when the destination exists,
# it is unlinked first)
exec rsync -va --delete $SOURCE $PARBAK/$PREFIX

# step 6: use 'touch' to update the mtime of $EXTBAK/$TARGET to reflect
# the set's time
exec touch $EXTBAK/$TARGET

# voila...you got your bak
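The logging/dry-run wrapper in the script above can be sketched on its own like this. The function is renamed 'run' here, since shadowing the shell's 'exec' builtin (as the original does) works in bash but is easy to trip over; all names and paths are illustrative.

```shell
# Minimal sketch of the dry-run wrapper pattern: with NODO set (the -n
# flag in the script), commands are only printed; otherwise eval'd.
NODO=1                          # as if the script were invoked with -n
run() {
    if [ "$NODO" ]; then
        echo "would run: $*"
    else
        eval "$*"
    fi
}
run rm -rf /some/old/set.7      # with -n: printed, nothing is deleted
```

Passing the command through "$*" and eval is what lets a quoted compound like "cd dir && find . | cpio ..." be logged and executed as one unit.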


Looks promising ...
Authored by: bluehz on Apr 15, '04 08:26:01AM

I don't think it's a very good idea to use cpio in a Mac environment. cpio is not resource-fork/HFS-aware.



That's not incremental
Authored by: advocate on Apr 16, '04 06:37:11PM

You seem to be using a curious definition of 'incremental'. An incremental backup stores only the difference between the current state of the filesystem and the last full backup plus all the incrementals taken since then. Sure, what you do doesn't copy any files except the ones that have changed, but for any file that has changed you copy the whole file, and you also keep around (yes, I know it's via hard link) all the files that haven't changed.

The reason I point this out is that I'm terribly disappointed by the lack of functionality in supposedly professional-grade backup solutions. Even Retrospect doesn't cut it: it backs up entire files when even a byte has changed, and it doesn't allow snapshots to be rotated off. Nobody seems to have anything worth using. Hey, I'm willing to go for an enterprise-grade solution; are there any?



That's not incremental
Authored by: advocate on Apr 16, '04 06:42:43PM

Sorry, I should have made it clear that I don't really care about network bandwidth, I care about long-term storage requirements. So sure, rsync will only spend a handful of packets to send over that one byte difference in a ten gigabyte file, but you're keeping both copies of the whole ten gigabyte file (with a one byte difference between them) on the backup disk afterwards.

It's too bad about resource/info forks. If there weren't multiple forks to deal with, just storing binary diffs by timestamp would do nicely, I would think. And once in a while you'd want to merge the older diffs into the full backup for performance.


