
A free perl/rsync backup script
There are a lot of backup solutions for Mac OS X. I have tried many of them, and most of them either:
  • Cost money.
  • Can only back up to local directories (or to remote ones that you must mount first).
  • Transfer backup data in clear text over the network.
  • Do not support transferring file diffs, so if you have changed your 1GB PG-encrypted disk image, the whole thing has to be transferred again for each backup.
  • Require user interaction, which makes them impossible to run from a cron job.
Therefore I have been wondering what the ultimate backup solution might be. I concluded that the UNIX tool rsync would be close to it, especially when used via RsyncX, a freeware app that lets you back up with resource forks, even to non-HFS+ volumes.

I have created a perl script for automating rsync/RsyncX backups, with multiple sources, multiple destinations, different backup sets for daily, nightly and weekly backups, systematic logs to a log folder, and more. I wrote it for personal use at first, but others might find it useful too. You can read much more about it, and download it, from my rsyncbackup web page.
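A nightly cron entry (in root's crontab) would look something like this; the flags and paths here are only illustrative, so check the rsyncbackup page for the real syntax:

# run the daily backup set at 02:30 and keep cron's output in a log
30 2 * * * /usr/local/bin/rsyncbackup -s sources.daily.conf >> /var/log/rsyncbackup.cron.log 2>&1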

Of course it is free to use. Please drop me a comment, with suggestions for new features, etc., if you wish...

[robg adds: I haven't tested this one, but "How do I back up?" is among the most popular email inquiries, so I figured it was worth sharing.]

A free perl/rsync backup script | 34 comments
Compared to CCC?
Authored by: alexmathew on Apr 14, '04 11:46:29AM

How does this compare to Carbon Copy Cloner?
- Can you boot from the backup, as you can with CCC?
- If I can boot, how does it handle the files that have changed?

I use CCC and like it; it's saved my A*** a few times. However, it has no provision for incremental backups, or for backups to CD-ROMs, etc.

I guess this tool is still a beta, but as a non-geek user it would be nice to know, in plain speak, how it works.
Thanks
AM



RsyncX webcast
Authored by: egan on Apr 14, '04 11:59:49AM

Mere seconds after reading this hint, I found this:

http://www.macosxlabs.org/webcasts/webcasts_next.html

a webcast featuring discussion of RsyncX 2.0 capabilities.



Compared to CCC?
Authored by: ragnar on Apr 14, '04 12:03:07PM

CCC will only back up to an HFS+ volume.



PsyncX
Authored by: kholburn on Apr 14, '04 07:04:52PM

I found CCC was unusable in 10.3; it would just hang. PsyncX works fine.



PsyncX
Authored by: wgscott on Apr 15, '04 02:19:15AM

You have to have a new version of psync for 10.3. PsyncX installs it.



My kludgy solution
Authored by: astack on Apr 14, '04 12:10:43PM
I had the same problem. Right now what I'm doing is periodically running this line of backup:

tar -c -v -f backup.tar --newer-mtime 'Mar 08 00:00 2004' /Users/username --exclude=Library --exclude=\.Trash; tar -r -v -f backup.tar --newer-mtime 'Mar 08 00:00 2004' /Users/username/Library/Mail --append

This is all one line, run from my home directory. Essentially, it is a string of tar commands that goes through my home directory and adds all files that have been modified since Mar 08 00:00 2004 to a tar file (I modify the date before I run it; I haven't gotten around to using arguments for that yet). It also excludes my Library directory and the Trash. Next, it appends my Mail directory to the tar file. To use it, replace username with your user name and modify the date (each appears twice). You can add additional directories by appending another tar command (remember to add a semicolon between commands).

Only use this for incremental backups! Otherwise you'll get a huge file. You can then burn the tar file to disk or gzip it (gzip backup.tar).

Like I said, it's a kludgy solution, but it works for me. One of these days I'll work on making it a little more automated.
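If you would rather not edit the date by hand each time, the same commands can be wrapped in a small script with the cutoff date as a variable. This is just a rough sketch; the username and paths are placeholders:

#!/bin/sh
# rough sketch: same tar backup, with the cutoff date taken from $1
SINCE="${1:-Mar 08 00:00 2004}"
HOMEDIR="/Users/username"        # replace with your own home directory

tar -c -v -f backup.tar --newer-mtime "$SINCE" "$HOMEDIR" --exclude=Library --exclude=.Trash
tar -r -v -f backup.tar --newer-mtime "$SINCE" "$HOMEDIR/Library/Mail"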

My kludgy solution
Authored by: mommsen on Apr 14, '04 01:48:06PM

Just a warning concerning backups with tar: tar is very sensitive to bit errors. If your archive is corrupted by even a single bit, tar will fail to extract the information stored after the affected point.



My kludgy solution
Authored by: Gigacorpse on Apr 15, '04 07:43:12AM

I believe that you can compress the archive directly through the tar command (although I am not at my computer to verify that as I write this). Another option is to use Ditto, which allows you to create zip archives.

I have been using rsync with sparse disk images, which is nice because the disk image is never bigger than it needs to be.
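Roughly, from memory (check the man pages before relying on any of these):

tar -czvf backup.tar.gz /Users/username/Documents        # gzip through tar in one step
ditto -c -k --rsrc /Users/username/Documents backup.zip  # zip archive, keeping resource forks
hdiutil create -size 5g -type SPARSE -fs HFS+ -volname Backup ~/backup
# the last one creates ~/backup.sparseimage, which only grows as data is added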



A free perl/rsync backup script
Authored by: johnevansdo7 on Apr 14, '04 04:22:02PM

If you are in a heterogeneous environment (hopefully with at least one Linux box to act as the server) and have a reliable tape drive, you could look into setting up Amanda (www.amanda.org). Mac OS X works pretty reliably as an Amanda client, and you get incremental backups out of it. It's not a completely trivial setup, but the instructions at http://web.brandeis.edu/pages/view/Bio/AmandaMacOSXCompileNotes work.



A free perl/rsync backup script
Authored by: i5m on Apr 14, '04 04:31:52PM

I tried RsyncX a while back and had problems with it hanging on certain files; it would then fill up a massive log file and take up all the space on my disk. Not good if it's backing up at night or over the weekend when you are away.

I've had more success with psync, which you can search for here on macosxhints.



A free perl/rsync backup script
Authored by: samaaron on Apr 14, '04 04:58:01PM

psync also handles resource forks as well as data forks, a capability that the vanilla UNIX rsync lacks.



A free perl/rsync backup script
Authored by: scrofa on Apr 23, '04 01:41:51PM

The command-line replacement for rsync installed with RsyncX has HFS+ and resource fork compatibility (assuming the RsyncX version of rsync has been installed on both computers involved).

The other added benefit of the RsyncX replacement is that it can be invoked without superuser access. When I tried to add psync to my scripts, it always needed SU access, making it useless for automatic backups (though I concede that I don't know much about the command line, so I could have been doing it wrong). rsync, on the other hand, doesn't need superuser access.

---
*******************************
sic facvs nocti se miscuit atrae


Try Bacula
Authored by: MadProphet on Apr 14, '04 05:03:30PM
Have you tried Bacula?

I use it in a production environment with a mix of Mac, Linux, FreeBSD and Windows clients. Over 50 machines total.

It's a client/server app that's open source and free.



Try Bacula
Authored by: ssevenup on Apr 16, '04 07:58:29PM

As of 4/2/04 the mailing list would indicate that resource forks are not supported at all.

--MM


---
Mark Moorcroft
ELORET Corp. - NASA/Ames RC
Sys. Admin.



A free perl/rsync backup script
Authored by: ChiefTypist on Apr 14, '04 05:56:32PM
A lot of the comments in this thread indicate that some of you may not understand why rsync (and RsyncX for HFS) is a really good idea for network backups.

Here is a very good summary of why rsync kicks butt for backing up large filesystems over a network. When you are dealing with hundreds of GB over a 10 Mbps (or slower) network, it's the only way to go.

HFS+ to non HFS+ volumes?
Authored by: fghorow on Apr 14, '04 08:42:53PM
You mention that RsyncX backs up resource forks to non-HFS+ volumes. I've tried this over the network to a Linux filesystem and have not succeeded in getting it to work.

What magic incantation did you use to do the trick?

Also, what about HFS+ file attributes other than resource forks? Do they survive a round trip to non-HFS+ volumes? (Without such round-trip fidelity, the backups would not be reliable, no?)


HFS+ to non HFS+ volumes?
Authored by: scrofa on Apr 23, '04 12:34:19PM

In order for HFS+ to be supported, the Macosxlabs (Kevin A. Boyd) version of rsync must be installed on both computers being synced. An excerpt from the RsyncX readme:

In order for this version of rsync to work and preserve HFS+ permissions, at least this version of /usr/local/bin/rsync must be installed on every machine that will be involved in the rsync. Check by typing "rsync --version" on all machines before starting, if unsure.

Please note, however, that while you gain some features by doing this (resource fork and HFS+ compatibility), you also lose some features by dropping the native rsync.
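For example, before the first run you can sanity-check both ends (the hostname here is a placeholder):

rsync --version                    # local copy
ssh backuphost rsync --version     # remote copy; both should report the RsyncX build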



rsync then chown
Authored by: oink on Apr 14, '04 11:41:13PM

Still waiting for rsync to handle chown at run time. I back up my web files to a local directory and have to chown all the files before I can modify them. I think cpio can do it, somewhat.
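In the meantime, a crude workaround is to just chown the tree after the transfer. A sketch, with the user, group and paths as placeholders:

rsync -va webhost:/var/www/ ~/backup/www/ && sudo chown -R myuser:staff ~/backup/www/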



A free perl/rsync backup script
Authored by: icedtrip on Apr 15, '04 12:40:25AM
Below is the script I use to make backups; it runs nightly from cron. I had a mishap several months ago and lost all my data. Since then, I make sure to keep consistent backups of my /Users folder. I figured that if I have another mishap I will reinstall everything anyway, so there is no need for a full drive backup. I have other scripts like this one backing up other areas, such as my WebServer folder. I use 'rsync', but anything could be used in its place. I have tested my backups and do not really fear losing the resource forks. Maybe this is dumb, but under OS X I am not too concerned.

I got the idea for this script from someone else. It has been so long that I cannot remember who, so I cannot give this person credit by name, but this process (although heavily modified by me) did not originate in my mind. I hope this helps someone. Let me know what you think.


#!/bin/sh
# ----------------------------------------------------------------------
# incremental-set-backup script (9 backups spanning 2 drives)
# ----------------------------------------------------------------------
# this script calls upon 'rsync' to do daily backups to another partition,
# then uses 'cpio' to move old sets to an external drive.  two full sized
# backups will be made and 7 incremental sets will be produced.  with  
# the use of cron, this will provide 9 days worth of consecutive backups. 
# ----------------------------------------------------------------------

# ------------- partition and external drive locations -----------------

EXTBAK=/Volumes/VolumeName/backupfolder;
PARBAK=/Volumes/PartitionName/backupfolder;

# ------------- the process --------------------------------------------

# check to be sure script is run as root
if (( `id -u` != 0 )); then { echo "Sorry, must be root.  Exiting..."; exit; } fi

# incremental sets of /Users/ to EXTBAK using 'rsync' and 'cpio', 
# eventually making 2 full sets and 7 incremental sets

# process broken down into 5 easy steps.

# step 1: use 'rm' to delete the oldest set, if it exists:
if [ -d $EXTBAK/Users.7 ] ; then				\
rm -rf $EXTBAK/Users.7 ;					\
fi;

# step 2: use 'mv' to move the middle sets back by one, if they exist
if [ -d $EXTBAK/Users.6 ] ; then                                   \
mv $EXTBAK/Users.6 $EXTBAK/Users.7 ;                     \
fi;
if [ -d $EXTBAK/Users.5 ] ; then                                   \
mv $EXTBAK/Users.5 $EXTBAK/Users.6 ;                     \
fi;
if [ -d $EXTBAK/Users.4 ] ; then                                   \
mv $EXTBAK/Users.4 $EXTBAK/Users.5 ;                     \
fi;
if [ -d $EXTBAK/Users.3 ] ; then                                   \
mv $EXTBAK/Users.3 $EXTBAK/Users.4 ;                     \
fi;
if [ -d $EXTBAK/Users.2 ] ; then				\
mv $EXTBAK/Users.2 $EXTBAK/Users.3 ;			\
fi;
if [ -d $EXTBAK/Users.1 ] ; then				\
mv $EXTBAK/Users.1 $EXTBAK/Users.2 ;			\
fi;

# step 3: use 'cpio' to make a hard-link-only copy (except for dirs) of 
# the previous set, if it exists.  this will give 2 full sized backups 
# (Users & Users.0) and enable incremental sets to be made on the 2nd  
# harddrive instead of full copies
if [ -d $EXTBAK/Users.0 ] ; then				\
cd $EXTBAK/Users.0 && find . -print |			\
cpio -pdl $EXTBAK/Users.1 ;					\
fi;

# step 4: use 'cpio' to make a copy of your main backup to EXTBAK if it exists
# 'cpio -pdl' will be used since it will make links "only when possible"
if [ -d $EXTBAK/Users ] ; then					\
cd $EXTBAK/Users.0 && find . -print |			\
cpio -pdl $EXTBAK/Users.0 ;					\
fi;

# step 5: use 'rsync' on /Users/ to create your main backup ('rsync' behaves
# like 'cp --remove-destination' by default, so when the destination exists,
# it is unlinked first)
rsync									\
	-va --delete							\
	/Users/ $PARBAK/Users ;

# step 5: use 'touch' to update the mtime of $EXTBAK/Users to reflect
# the set's time
touch $EXTBAK/Users ;

# voila...you got your bak


And that is it!
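To run it nightly, a root crontab entry along these lines should do (the script path is a placeholder):

# root's crontab (sudo crontab -e): run the backup script at 03:15 and log its output
15 3 * * * /bin/sh /Users/Shared/backup-users.sh >> /var/log/backup-users.log 2>&1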

Looks promising ...
Authored by: kmue on Apr 15, '04 06:00:09AM
... but the directories are never created. In step 4, did you perhaps mean $PARBAK as the source? Here is my attempt (it still does not create the directories):

#!/bin/sh
# ----------------------------------------------------------------------
# incremental-set-backup script (9 backups spanning 2 drives)
# ----------------------------------------------------------------------
# this script calls upon 'rsync' to do daily backups to another partition,
# then uses 'cpio' to move old sets to an external drive.  two full sized
# backups will be made and 7 incremental sets will be produced.  with  
# the use of cron, this will provide 9 days worth of consecutive backups. 
# ----------------------------------------------------------------------

# -----------------------------------------------------------------------
# CONFIG
# -----------------------------------------------------------------------
SOURCE=/Volumes/Data/Users/example
PREFIX=test
EXTBAK=/Volumes/Data/Backup
PARBAK=/Volumes/Applications/Backup
NUMBAK=7

# -----------------------------------------------------------------------
# getopts
# -----------------------------------------------------------------------
while getopts :n opt; do
	case "$opt" in
		n)	NODO=1;;
		*)	echo >&2 "Usage: $0 [-n]"; exit 1;;
	esac
done

# -----------------------------------------------------------------------
# SUBROUTINES
# -----------------------------------------------------------------------
me=$(basename $0)
tell() { echo "`/bin/date '+%Y%d%m %H:%M:%S'` $me: $1"; }

# -----------------------------------------------------------------------
exec() {
# -----------------------------------------------------------------------
	tmp=/tmp/$me.exec.tmp
	if [ "$NODO" ]; then
		tell "exec: $*"
	else
		tell "exec: $*"
		eval "$*" > $tmp 2>&1
		if [ "$?" != 0 ]; then
			[ -s "$tmp" ] && tell "`/bin/cat $tmp`"; /bin/rm -f $tmp
		fi
	fi
}

# -----------------------------------------------------------------------
# MAIN
# -----------------------------------------------------------------------

# check to be sure script is run as root
#if [ $(id -u) != 0 ]; then
#	echo "Sorry, must be root. Exiting..."; exit 1
#fi

TARGET=$PREFIX/$(basename $SOURCE)

# create the backup directories
mkdir -p $EXTBAK/$PREFIX
mkdir -p $PARBAK/$PREFIX

# incremental sets of /Users/ to EXTBAK using 'rsync' and 'cpio', 
# eventually making 2 full sets and 7 incremental sets

# process broken down into 5 easy steps.
# step 1: use 'rm' to delete the oldest set, if it exists:
[ -d $EXTBAK/$TARGET.7 ] && exec rm -rf $EXTBAK/$TARGET.7

# step 2: use 'mv' to move the middle sets back by one, if they exist
t=$NUMBAK; let s=(t - 1)
while [ $t -gt 1 ]; do
	if [ -d $EXTBAK/$TARGET.$s ]; then
		exec mv $EXTBAK/$TARGET.$s $EXTBAK/$TARGET.$t
	fi
	let t=(t - 1); let s=(t - 1)
done

# step 3: use 'cpio' to make a hard-link-only copy (except for dirs) of 
# the previous set, if it exists.  this will give 2 full sized backups 
# (Users & $TARGET.0) and enable incremental sets to be made on the 2nd  
# harddrive instead of full copies
if [ -d $EXTBAK/$TARGET.0 ]; then
	exec cd $EXTBAK/$TARGET.0 && find . -print | cpio -pdl $EXTBAK/$TARGET.1
fi

# step 4: use 'cpio' to make a copy of your main backup to EXTBAK if it exists
# 'cpio -pdl' will be used since it will make links "only when possible"
if [ -d $PARBAK/$TARGET ]; then
	exec "cd $PARBAK/$TARGET.0 && find . -print | cpio -pdl $EXTBAK/$TARGET.0"
fi

# step 5: use 'rsync' on /Users/ to create your main backup ('rsync' behaves
# like 'cp --remove-destination' by default, so when the destination exists,
# it is unlinked first)
exec rsync -va --delete $SOURCE $PARBAK/$PREFIX

# step 5: use 'touch' to update the mtime of $EXTBAK/Users to reflect
# the set's time
exec touch $EXTBAK/$TARGET

# voila...you got your bak


Looks promising ...
Authored by: bluehz on Apr 15, '04 08:26:01AM

I don't think it's a very good idea to use cpio in a Mac environment; cpio is not Mac resource fork/HFS aware.



That's not incremental
Authored by: advocate on Apr 16, '04 06:37:11PM

You seem to be using a curious definition of 'incremental'. An incremental backup is a difference between the last full backup set plus all the incremental backups taken since then and the current state of the filesystem. Sure, what you do doesn't copy any files except the ones that have changed, but for any file that has changed, you copy the whole file, and you also keep around (yes, I know it's via hard link) all the files that haven't changed.

The reason I point this out is that I'm terribly disappointed by the lack of functionality in supposedly professional-grade backup solutions. Even Retrospect doesn't cut it: it backs up entire files when even a byte has changed, and it doesn't allow snapshots to be rotated off. Nobody seems to have anything worth using. Hey, I'm willing to go for an enterprise-grade solution; are there any?



That's not incremental
Authored by: advocate on Apr 16, '04 06:42:43PM

Sorry, I should have made it clear that I don't really care about network bandwidth, I care about long-term storage requirements. So sure, rsync will only spend a handful of packets to send over that one byte difference in a ten gigabyte file, but you're keeping both copies of the whole ten gigabyte file (with a one byte difference between them) on the backup disk afterwards.

It's too bad about resource/info forks. If there weren't multiple forks to deal with, just storing binary diffs by timestamp would do nicely, I would think. And once in a while you'd want to merge the older diffs into the full backup for performance.



How to restore?
Authored by: drauh on Apr 15, '04 01:41:05AM

All these backup hints are great, but they all lack one thing: how does one perform a restore? An example should be posted on the web page.



If you don't want to run it from a crontab
Authored by: Ptitboul on Apr 15, '04 05:57:21AM
For example, if the computer you want to back up is a laptop, it might not be connected when cron runs. The solution I use is to make a backup each time there is a network change. First, you have to edit the file /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/Kicker.xml to add the following lines:

        <dict>
                <key>execCommand</key>
                <string>$BUNDLE/Contents/Resources/disk-backup</string>
                <key>execUID</key>
                <integer>0</integer>
                <key>keys</key>
                <array>
                        <string>State:/Network/Global/DNS</string>
                        <string>State:/Network/Global/IPv4</string>
                        <string>Setup:/</string>
                </array>
                <key>name</key>
                <string>disk-backup</string>
        </dict>
Then you have to create a file named /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/disk-backup that will run the backup, e.g. something like

#! /bin/sh
case "`ifconfig|grep 'inet '`" in *' 64.233.'*)
  # run only if you have a fast connection
  logger -i -p daemon.notice -t disk-backup Backup of local disk data
  exec >> /var/log/disk-backup.log 2>&1
  /usr/local/bin/rsyncbackup (...)
;; esac
Note that with this example the rsyncbackup utility is run with administrator privileges. Moreover, the Kicker.xml file can be overwritten by some system updates, so you have to check that your modification is still there after each system update.

Still Waiting!
Authored by: spodieodie on Apr 15, '04 06:09:51PM

The reason I use Retrospect is that I back up to tape. To my knowledge, there is no other simple way to back up to tape from Mac OS X. Tapes are never mounted, so using rsync to back up to them would be pointless.

I have tinkered around with tar and have yet to find a way to make it recognize my SCSI tape drive. I would drop Retrospect in a heartbeat if I could find a system utility that would back up to tape and still preserve permissions and resource forks. Right now, if you back up to tape, there is no alternative to Retrospect.



Still Waiting!
Authored by: scrofa on Apr 23, '04 01:13:53PM
interesting....

I dropped Retrospect like a rock when I learned that it only supported internet archives via FTP. After 12 emails I finally reached them by using up my free tech support call. When I asked how I could back up my files to the internet securely, they said it wasn't possible with Retrospect, nor was it a priority for the future.

After all, what good is a backup hard drive if your house burns down or is robbed (as mine was last month; the robbery, that is, not the fire)?

So I dug into the command line. I have an rsync incremental backup which I run every night via cron (thanks to RsyncX and Macosxlabs, it's even HFS+ and resource fork compatible!). I also run an archive script which copies the most vital files to a new folder, makes a compressed disk image out of them (thus preserving the resource forks), and then copies it to my brother's BSD server via scp.

Yep, and it only took about 10 months to figure out all the unix crap (ssh-keygen, ssh-agent, crontab, scp, hdiutil, etc...) to get the scripts working (hehe).

So I guess what I'm really trying to say is that Retrospect is fine if you just want to back up your stuff to a tape or an external hard drive. But for a more comprehensive backup plan that safeguards against catastrophic loss of data (fire, theft, etc.), you should back up to a remote site. Retrospect fails in this category.
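The archive part boils down to something like this; a rough sketch only, with the paths, image name and remote host as placeholders:

#!/bin/sh
# rough sketch of the nightly archive step (assumes ssh keys are already set up for cron)
VITAL=/Users/me/vital                      # folder the most vital files are copied into
IMG=/tmp/vital-`date +%Y%m%d`.dmg

hdiutil create -format UDZO -srcfolder "$VITAL" "$IMG"    # compressed image, forks preserved
scp "$IMG" me@remote.example.com:backups/
rm "$IMG"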



Still Waiting!
Authored by: Kalak on Aug 11, '06 08:50:24AM

I was digging around looking for a way to use a remote tape drive with tar, and I found your comment. It's old, but I thought I'd put the trick up.

You can tunnel the FTP from Retrospect through an ssh tunnel if you have ssh access on the destination server. I'm away from my main computer, or I'd give you the command line, but you tunnel ports 20 and 21, then point Retrospect at the localhost address, which is now tunneled via ssh to your destination. It's a hack, but possibly useful.

(rsync is a good resource too, but when backing up a Win/Mac workgroup to the other side of campus, Retrospect has its uses.)
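From memory, the tunnel would be along these lines (untested sketch; user, host and ports are placeholders, and active-mode FTP data connections may still need extra fiddling):

# forward local ports to the backup server's FTP ports over ssh
ssh -N -L 2121:localhost:21 -L 2020:localhost:20 user@backuphost
# then point Retrospect's internet backup at localhost, port 2121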

---
--
Kalak
I am, and always will be, an Idiot.



It's an Apple problem
Authored by: king-of-birds on Apr 26, '05 09:31:00AM
I spent a long time trying to figure out how to get tar to recognize my tape drive as well. It turns out to be a deficiency in OS X, which (still) has no support for /dev/rmt.


Backup across the network to a windows share
Authored by: hankins on Apr 15, '04 09:17:10PM
Thank you for this hint. It might not be the prettiest GUI-wrapped, brushed-metal solution, but it works great.

I came up with some custom code for the "destinations.conf" file to help me mount and back up to a network share. I work in a mixed Mac and PC environment and have tons of storage available on the PC side, so I wanted to automatically mount the Windows share, back up the files from my TiBook, and then unmount the share when done.

This is what I did...

1. Created an AppleScript to mount the Windows volume without opening a Finder window. Here's the script:


tell application "Finder"
	mount volume "smb://10.0.1.70/WindowsShare"
end tell

2. Added some conditional code to the "destinations.conf" backup script:

tag|/Volumes/WindowsShare/|osascript /Users/me/Library/Scripts/Networking/ConnectToWindows.scpt;sleep 5;test -d /Volumes/WindowsShare|true

3. When I launch the backup process, I also tack on an unmount command to release the Volume. This is pretty ugly.

sudo rsyncbackup -s sources.daily.conf; disktool -e //USER@SERVER/WINDOWSSHARE
When all is said and done, the script attempts to connect to the Windows share, sleeps a few seconds to let the mount complete, then double-checks that the volume was successfully mounted. If so, it continues with the backup process. After the backup has completed, the unmount command kills the mount altogether, removing the share icon from the desktop.

I'm 100% sure there are better ways to do all of this, so please pass along any tips. I'm not trying to fool anyone with any kind of "omfg l33t coding skillz" by any means!

FWIW, the only reason I use the AppleScript mount is that I couldn't figure out how to mount the Windows share from the command line. Likewise, the only reason I issue the disktool unmount command as part of the initial backup command is that I couldn't figure out how to insert it into the actual backup process. Please expand on this idea if you're able to; I'd love to see what other possibilities exist.
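For the record, mounting an SMB share from the shell generally looks something like the sketch below (credentials, host and share name are placeholders, and note that credentials in the URL are visible to other local users via ps):

mkdir -p /Volumes/WindowsShare
mount_smbfs //me:mypassword@10.0.1.70/WindowsShare /Volumes/WindowsShare
# ... run the backup ...
umount /Volumes/WindowsShare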


A free perl/rsync backup script
Authored by: vocaro on Apr 16, '04 12:54:40AM

Actually, this has already been done. It's called rdiff-backup. It's been around for a while, so it's very stable. I use it all the time to back up my home directory.

I would recommend rdiff-backup over rsyncbackup, simply because the latter is still at version 0.1 and not fully tested. No sense risking your data with an untested program.

Trevor



Resource forks
Authored by: tezla on Apr 17, '04 05:34:30PM

rdiff-backup looks cool, but I guess it does not support resource forks, and will therefore be useless to many.



New: version 0.2
Authored by: tezla on Apr 17, '04 05:38:00PM

Several cosmetic fixes, and a more streamlined CLI has been added. rsyncbackup is now under the GNU General Public License. New features include:

  • A wizard for creating ssh keys to remote hosts (described later in the manual).
  • A help screen (--help).
  • Several options on the CLI, including control of output verbosity.
  • Listing of cron jobs.
  • Debugging of configuration files, listing all backups in a specific source file.

When upgrading from 0.1 to 0.2 you do not need to change the configuration files, but you do need to add a -b or --do_backup parameter when invoking rsyncbackup; otherwise it will only print the help screen.
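For example, a nightly invocation like the one mentioned earlier in this thread would become something like:

sudo rsyncbackup -b -s sources.daily.conf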

Can be downloaded from http://erlang.no/rsyncbackup


A free psync backup script
Authored by: httpd on May 07, '04 05:18:21PM

Since I use psync to back up my files to a Windows share, I wrote this script so I could put it in /etc/crontab. I use psync-0.65.16, since it works better with Samba shares than the original 0.65 version.
You can get it at http://www.jacek-dom.net/software/psync/
(And yes, you should really hide the username and password for the Windows share better, and not put them in the URL, but that is explained all over the place.)


Backup.sh
----
#!/bin/sh
echo "Starting by making a folder"
mkdir -p /Volumes/mybackupfolder
echo "Mounting network drive"
mount_smbfs //myusername:mypassword@ip_to_my_host/shared_directory /Volumes/mybackupfolder
echo "Stage one is complete"
echo "Let's back up some files"
psync -r -p /Users/myusername/folder_to_backup /Volumes/mybackupfolder
echo "psync is complete, let's unmount the backup drive"
umount /Volumes/mybackupfolder
echo "Backup is done"
---


In /etc/crontab
--
5 5 * * * root /bin/sh /Users/myusername/Documents/backup.sh
---


