This is a fairly specific hint, but it might help someone trying to achieve the same thing. The goal: to back up a FileVault-protected home directory to a UNIX server, keeping the backup encrypted, and without logging off. Time Machine isn't an option, as it doesn't play well with FileVault. A previous hint I submitted does half the job, but the backups are stored unencrypted, so anyone with access to the server can read them.
The solution is to use EncFS on the server. This ensures files are stored encrypted (the password is kept in a file on the client, safely inside the FileVault-protected home directory). Only SSH access to the server is required, no special encrypted volume is used (so the encrypted backup can itself easily be backed up), and root access is needed only to install EncFS and add each user to the correct group. Here's what you need to do:
Set up a user account on the server, and ensure you can log in from the client with SSH using a public key with no password.
On the server, install EncFS. You will need at least version 1.4.
On the server, ensure each user that will be backing up is a member of the "fuse" group, e.g., usermod -a -G fuse mike
If you want to back up files with resource forks or extended attributes, make sure the client and server are both running rsync version 3.0 (version 2.6.3 is what's included in 10.5). It compiles easily on OS X -- just download the source, run ./configure and make, then put the resulting rsync binary somewhere in your PATH. Do the same on the server.
Once that's done, you need to prepare the server. The setup I've chosen is a folder, /mnt/backup, which will contain all the backups for each user. Each user will have a folder in there called .user@hostname (note the dot), which contains the encrypted backup. As with FileVault, the decrypted version of this can then be mounted at user@hostname.
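To make the flow concrete, a single backup run might look roughly like the following. This is only a sketch, not the hint's actual script: the host name, user name, and password-file path are hypothetical, and it assumes encfs's -S option (read the password from stdin) and rsync 3's -X flag for extended attributes.

```shell
# All names here are hypothetical; adjust to your own setup.
# 1. Mount the decrypted view on the server, feeding encfs the password
#    kept inside the FileVault-protected home directory (encfs -S reads
#    the password from standard input):
cat ~/.backup-password | \
    ssh server 'encfs -S /mnt/backup/.mike@mymac /mnt/backup/mike@mymac'
# 2. Sync the home directory into the decrypted mount:
rsync -aX --delete ~/ server:/mnt/backup/mike@mymac/
# 3. Unmount, so only the encrypted files remain visible on the server:
ssh server 'fusermount -u /mnt/backup/mike@mymac'
```

Because the decryption happens on the server side of the FUSE mount, the files written to /mnt/backup/.mike@mymac are always stored encrypted, while rsync sees an ordinary directory tree.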
According to man cal, there's also an ncal command with extra options, such as displaying week numbers via the -w flag. But no ncal is installed on OS X. A little experiment reveals, however, that it's one and the same program -- it changes its behavior based on the name it was invoked under.
If you create an ncal link to the cal utility, you get a fully functional ncal program; note the week numbers below the calendar:
$ mkdir -p $HOME/bin
$ ln `which cal` $HOME/bin/ncal
$ $HOME/bin/ncal -w
Mo 5 12 19 26
Tu 6 13 20 27
We 7 14 21 28
Th 1 8 15 22 29
Fr 2 9 16 23 30
Sa 3 10 17 24 31
Su 4 11 18 25
18 19 20 21 22
[robg adds: This works as described; you can also now use ncal -e or ncal -o to see the date for Easter (Western and Orthodox, respectively), -J for the Julian calendar, and -p to see the "country codes and switching days from Julian to Gregorian calendar as they are assumed by ncal." I tried creating an alias instead of a link, but that didn't work...can anyone explain why they chose to make cal and ncal work like this? Why not just give all the options to cal?]
I was recently trying to do some simple shell scripting with the command httpd -t, which runs a syntax check on the Apache configuration files. If everything is OK, it returns Syntax OK, otherwise, it will try to tell you what's wrong with the files. I wanted to grab the output of the syntax check and display it in my script, but wasn't having any luck -- any attempt to redirect the output, or assign it to a variable, resulted in nothing (empty variable, empty file, etc.). Unix wizards can stop reading this hint now, for I'm sure you know what the problem is...
After much Googling, I discovered that some Unix commands write their output to standard error (stderr) instead of standard output (stdout). Both streams typically appear in the Terminal window, so the message looks the same on screen, but operations that capture standard output won't catch it. I found the solution in the Wikipedia entry for stderr: append 2>&1 to the end of the command (note that this is for sh-style shells, including bash, the OS X default shell). This redirects standard error to the same destination as standard output, so the data can then be captured as you wish. To put the output of httpd -t onto the clipboard, for instance, you could use this:
$ httpd -t 2>&1 | pbcopy
Or if you wished to assign the output to a variable for use in a script:
CONFIGTEST=`httpd -t 2>&1`
I'm sure there are a number of other commands that output via standard error -- one for certain is ssh -V, which returns the version information for ssh.
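You can see the effect without touching Apache at all; in this sketch, the sh -c command is just a stand-in that prints to stderr, the way httpd -t does:

```shell
# A plain command substitution misses stderr entirely:
without=$( sh -c 'echo "Syntax OK" >&2' 2>/dev/null )
# With 2>&1, stderr is merged into stdout and gets captured:
with=$( sh -c 'echo "Syntax OK" >&2' 2>&1 )
echo "without: [$without]  with: [$with]"
# → without: []  with: [Syntax OK]
```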
My iMac is behind a satellite internet connection, which is very slow. Rather than using Screen Sharing or Finder's file sharing for Back to My Mac, I often find it easier to use ssh. Until today I didn't know how to connect to a Back To My Mac computer via ssh. It turns out that it's very easy:
ssh -vvv -p 22 hostname.username.members.mac.com.
Where hostname is the name you gave the machine (i.e., the name that appears in the Finder), and username is your .Mac username (e.g., if your .Mac email address is steve@mac.com, your username is steve). Note that there is a "." at the very end of the command -- I've had more consistent success including it. You can also try Terminal.app » Shell » New Remote Connection (or press Command-Shift-K), then look under 'Secure Shell (ssh)' for discovered servers.
As with all things Back To My Mac related, success is flaky; your best bet is to have an AirPort Extreme base station at both the local and remote ends.
If you have ever used the locate command in Terminal to find a file of yours that, for instance, ends in .doc, you may have found yourself with more than 60,000 hits for files all over the system that you didn't create and didn't want to know about. Hence, it can be beneficial to create a user-level locate database that indexes only your own directory structure, so that you only see what belongs to you.
To do this, and have my new database updated automatically, I modified a copy of the locate database update program that came with the Mac, set up a crontab to update the database hourly, and then created an alias for my local locate, called llocate. Now when I type llocate .doc, I find only 584 .doc files, and I can rest assured that they are all mine! Read on for the how-to...
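The idea can be illustrated with a much simpler stand-in than the real how-to: cache a list of your own files, then search the cache. (The actual method adapts the system's locate.updatedb script; the database path and llocate name below are hypothetical.)

```shell
# Simplified stand-in for a per-user locate database: build a flat list
# of files under $HOME, then grep it. Slower to build than a real locate
# database, but the search behaves the same way for this purpose.
DB="$HOME/.llocate.db"
find "$HOME" -type f > "$DB" 2>/dev/null
# llocate: case-insensitive search of the cached list
llocate() { grep -i -- "$1" "$DB"; }
```

Then llocate '\.doc$' lists only .doc files inside your home directory, with none of the system-wide noise.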
I like the way Mac OS X lets you choose any album from iPhoto to use as a set of desktop background images. In particular, using iPhoto albums to create sets of images saves disk space, because the same image can appear in multiple sets ("albums") even though it's physically stored on disk only once. I had a number of images that I didn't want to store in iPhoto, but that I did want to use as desktop backgrounds in several sets, without duplicating them.
The solution turned out to be simple but a little tricky. Using the Desktop & Screen Saver panel in System Preferences, you can select any arbitrary folder to use for desktop backgrounds. If you fill this folder with Unix symbolic links, each created with an absolute POSIX path to its target, Desktop & Screen Saver finds and follows each image. However, only symbolic links with an absolute path (or hard links, of course) work; notably, Mac OS X aliases do not.
So, by way of example, if I have five pictures, 1.jpg through 5.jpg, that I want to display in two sets of background images, set A and set B, I can simply do this in Terminal:
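A sketch of such commands follows; the ~/Pictures/Masters source folder and the set folder names are assumptions, not the hint's exact paths:

```shell
# Create the two set folders, then link each image. Tilde expands to an
# absolute path, which is what Desktop & Screen Saver requires.
mkdir -p ~/Pictures/SetA ~/Pictures/SetB
for i in 1 2 3 5; do ln -s ~/Pictures/Masters/$i.jpg ~/Pictures/SetA/; done
for i in 1 2 4 5; do ln -s ~/Pictures/Masters/$i.jpg ~/Pictures/SetB/; done
```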
Those commands define Set A as containing 1.jpg, 2.jpg, 3.jpg, and 5.jpg, but not 4.jpg. Set B likewise contains 1.jpg, 2.jpg, and 5.jpg, plus 4.jpg but not 3.jpg. Just as with an album, because we're using Unix links, the reused images (1, 2, and 5) are physically stored on disk only once, even though they appear in both sets of desktop backgrounds.
When I wanted to pass my news feeds from Mail.app to a friend, I realized that there is no "Export RSS feeds to..." button in Mail.app. So I wrote this little workaround to export them from Terminal (assuming the default bash shell):
This prints all the feed URLs from your Mail.app RSS directory directly in the shell. I could then easily pass the list to my friend. As an additional option, here is a little modification that opens all the feed URLs in the default web browser instead:
This opens your feeds in the browser as feed://... URLs. You can then easily add them to your bookmarks bar, or do whatever else you like with them. Note that you may want to add && sleep 60 after the last bracket if you have lots of feeds in Mail.app.
This will allow you to tab complete any hostname you've previously ssh'd to.
[robg adds: This worked as described for me. The complete command is a built-in bash function that lets you specify lists of options to be used with tab completion on a given command. The version above parses your known_hosts to create the list of options. You can read more about the complete built-in function in the bash man pages -- man bash, then search for the section titled Programmable Completion.]
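For reference, a command along these lines produces that behavior. This is a sketch rather than the hint's exact command, and it assumes HashKnownHosts is off, so hostnames appear in plain text in ~/.ssh/known_hosts:

```shell
# Each known_hosts entry starts with "host" or "host,ip", so take the
# first field and split off anything after a comma.
hosts=$(awk '{split($1, a, ","); print a[1]}' "$HOME/.ssh/known_hosts" 2>/dev/null | sort -u)
# Register the host list as tab-completion options for ssh (bash builtin).
complete -W "$hosts" ssh 2>/dev/null || true
```

Putting those two lines in ~/.bash_profile would rebuild the completion list in every new shell.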
With the advent of 10.5, MacPorts has had to undergo a lot of updates, some of which have been painfully slow in coming. As a freelance security consultant, I *have* to have TinyCA2 working. Unfortunately, the MacPorts ticket to update it for 10.5 sat for five months before it was even assigned to a maintainer. (I suppose these folks have a lot to do.) So here are some workarounds to make it compile under Leopard:
Run sudo port install tinyca2, then edit (as root) the files listed in the error output (I had six instances total in four different files).
In those files, delete the ' character (apostrophe) on the line number indicated, which follows the colon in the error output. (If you use nano to edit the files, press Control-C to see the current line number.) These apostrophes are on commented lines, but for some reason, they still keep the files from being processed.
cd to the directory mentioned in the error output (probably /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_security_tinyca2/work/tinyca2-0.7.3/po) and edit (as root) the language file mentioned in the error output. Change something inconsequential (i.e., add a space to the end of the file), then save it.
Re-run the sudo port install command.
Edit the new language file mentioned in the error output, and do the same thing again (edit, change inconsequential info, save).
After doing this for four or five of the language translation files, the compile succeeds, and you should be able to finish building the program. Hopefully the great folks at MacPorts will get this port updated, but until they do, this should get you up and running.