While looking for the perfect product to keep my photos safe, I discovered that sometimes simple is best. My requirements were simple: ensure that all my digital photos, stored on a locally attached USB drive, were duplicated to another drive attached to my AirPort Extreme. My photos are in RAW format (specifically DNG files) and will never change, so I only need to concern myself with new files.
I checked out numerous commercial and free products for backup, synchronization, and more, and nothing quite fit the bill. Whilst rsync could probably do the job, I couldn't get my head around its terminology well enough to be sure I wasn't risking the original files. Then I discovered the solution -- so mind-bogglingly simple, and no third-party software required. In Terminal, I run this command:
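The hint's one-liner itself isn't reproduced above, but a sketch of the shape a built-in-only solution could take (the paths, the helper name, and the choice of cp flags are my assumption, not the original command) might be:

```shell
# A sketch (hypothetical paths and helper name): -R recurses into the photo
# folder, -n never overwrites a file that already exists at the destination
# (so originals and prior copies are untouched), and -v lists each file copied.
SRC="${SRC:-/Volumes/PhotoDrive/Photos}"
DST="${DST:-/Volumes/AirPortBackup/Photos}"
backup_new() {
  cp -Rnv "$SRC/." "$DST/"
}
```

Run `backup_new` whenever new photos arrive; only files not already present on the backup drive are copied.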
I have a need (like for Mutt's query_command) to search for email addresses from OS X's Address Book.app, but from the command line. After trying various hints, code, etc., and getting frustrated, I found that it's just stored in sqlite3. Everything is already built into OS X, you just have to figure out the SQL structure. I've only tested this on 10.6, but I imagine it will either work as is, or be easily adapted to work, on 10.5.
I created a shell script called abook.sh in /usr/local/bin with these contents:
#!/bin/sh
sqlite3 -separator ' ' ~/Library/Application\ Support/AddressBook/AddressBook-v22.abcddb "SELECT e.ZADDRESSNORMALIZED, p.ZFIRSTNAME, p.ZLASTNAME, p.ZORGANIZATION FROM ZABCDRECORD AS p, ZABCDEMAILADDRESS AS e WHERE e.ZOWNER = p.Z_PK;" | grep "$1"
Note: The ' ' after -separator is a literal Tab character (in bash, press Control-V then Control-I to insert one). You can make it a few spaces or whatever you want, but if you are using this as Mutt's query_command, then it must be a Tab character.
The above code dumps every email address for a contact (one address per line), so contacts with several addresses appear more than once -- this isn't useful for importing into anything else. To mess about with the SQL, open the database in sqlite3 and try the .dump command to get a feel for the schema. That's it! I hope this helps someone out there in the world.
[robg adds: If you create the above shell script, remember to make it executable (chmod a+x scriptname), and call it with the name you'd like to find: abook.sh Smith. We covered some other sqlite tricks for Address Book in this previous hint.]
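As a follow-on, wiring the script into Mutt would look something like this in your .muttrc (a sketch, assuming the script lives at /usr/local/bin/abook.sh as described above; %s is replaced by your search term):

```
set query_command = "/usr/local/bin/abook.sh '%s'"
```

With that in place, pressing Mutt's query key (Q by default) searches Address Book directly.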
I had no idea what the map entries were there for; they don't show up if you simply do ls /Volumes. After asking around, a friend managed to find the answer for me: they're related to the autofs feature in 10.5 and 10.6. You can read all about it, if you care, in this Apple PDF. I was more curious than alarmed, as I hadn't remembered seeing those entries before (I don't use df -h all that often).
I looked everywhere on the net to find something that I could afford with which to connect my USB serial port to a Sun machine. I couldn't install Fink or MacPorts, and screen was garbling all of the text for the Solaris install. Finally, I started looking in general UNIX support, which led me to the cu utility.
Use the cu command to get a (more-or-less) clean line. It's part of OS X's BSD heritage, and was originally used to allow UUCP batches to dial modems and link with each other. Here's how to set up a connection:
sudo cu -s [bitrate] --nostop -l /dev/cu.[serialdevice]
This could also be used with /dev/tty.[serialdevice], I believe, though I have not tried it.
Caveat: cu treats a ~ at the start of a line as its escape character, so to send a literal ~, type it twice. Also, the --nostop parameter stops cu from interpreting XON/XOFF software flow control. If you don't use it, and the system receives a Control-S character for whatever reason, you will need to type Control-Q to get the output moving again. If you're using the Terminal, you know how to use man(1), so read the man page for cu(1).
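For example, a session to a console at 9600 baud might start like this (the device name below is hypothetical; run ls /dev/cu.* to find yours):

```
$ ls /dev/cu.*
/dev/cu.usbserial
$ sudo cu -s 9600 --nostop -l /dev/cu.usbserial
Connected.
```

To end the session, type ~. (tilde, then period) at the start of a line.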
For a little project I was recently working on, I wanted to track some memory usage statistics over time. I figured that the Unix command vm_stat would be a good way to do that, as it includes the four basic memory usage types (free, active, inactive, and wired). My intent was to put this in a shell script that would run the command at a specified interval, dumping the output to a text file each time.
However, the basic output of vm_stat is less than ideal for dumping to a file via a shell script:
Parsing the above in a spreadsheet app would require some serious text manipulation. A brief glance at the man page for vm_stat revealed another usage option: vm_stat interval, where interval is how often (in seconds) vm_stat should measure memory usage. Used this way, the output is much nicer for capture:
That was nearly perfect, though I only needed the first 36 characters (the end of the wire column). But I had a problem: I needed to include a timestamp on each row of the output, so I could tie memory usage back to some activities I was starting and stopping at specific times. Read on to see how this is done...
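The hint's own solution isn't shown in this excerpt, but as a sketch of the shape it might take (this helper is my own illustration, not the hint's code), each line can be timestamped and truncated like so:

```shell
# A sketch (not the hint's actual code): prefix each incoming line with a
# timestamp, then keep only the first 36 columns of the vm_stat output.
stamp36() {
  while IFS= read -r line; do
    printf '%s  %.36s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
  done
}
# On a Mac, for a sample every 10 seconds: vm_stat 10 | stamp36 >> memlog.txt
```

The %.36s printf precision does the truncation, so no separate cut or awk step is needed.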
From time to time, when reviewing log files, I like to know where an IP address is geographically located. The following shell script takes a list of IP addresses in a file named list and looks up their geographic locations. Here's the code:
echo "Put the IPs you want to lookup into a file named list."
# url must be set to the lookup page of a geolocation service
# (the service's URL is not shown in this excerpt).
for i in `cat list`
do
  lynx -dump "$url$i" > tmp
  sed -n '/Host Name/,/Postal code/p' tmp
done
jamesk@HOME~/Desktop $ echo 220.127.116.11 > list
jamesk@HOME~/Desktop $ geo
Put the IPs you want to lookup into a file named list.
Host Name: cache04.ns.uu.net
IP Address: 18.104.22.168
Country: United States
Country code: US (USA)
[robg adds: Note that this hint requires the lynx text-only web browser, which you can install via Fink, MacPorts, or a number of other methods. I'm sure there's a way to do this without lynx, but I'll leave that to those who actually know what they're doing...]
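One lynx-free possibility (an untested sketch: curl ships with OS X, but it fetches the raw HTML rather than lynx's rendered text, so the sed patterns may need adjusting to match the page's markup) would be to swap the fetch line:

```
curl -s "$url$i" | sed -n '/Host Name/,/Postal code/p'
```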
Tripwire is a set of open-source Unix command line utilities, spun off by the company of the same name that sells a more-capable commercial product; you can use it to verify the integrity of your system files, detect intrusions, and monitor what files get added or changed by your computer's software processes. For the slightly paranoid, it can have a calming effect.
Fortunately, it is relatively easy to run under OS X. Installing it, however, can be another story. Back in 2003, a member named frodo published a hint here on how to install Tripwire with a precompiled package that he had developed. Sadly, his website is no longer operational, so Mac OS X users who wish to use Tripwire have to muddle through the generic installation process for Unix boxes. This can be quite confusing, so I thought that it would be useful to document it in a step-by-step fashion. The following is based on the sources currently available.
The first step is to install the Xcode tools so that you can compile the source; you don't need the latest and greatest, so you can simply install Xcode from the CD or DVD that your operating system install came on. Alternatively, you can join the Apple Developer Connection (free) and download Xcode.
Next you need the source for Tripwire. Paul Herman's web site still has the portable Tripwire tarball available, so download tripwire-portable-0.9.tar.gz from there. Move the downloaded file to a convenient place (like Documents) in your user's folder hierarchy, and then double-click the downloaded file; it should expand into a folder of sources.
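From there, the build typically follows the standard portable-source routine (a sketch based on common autoconf convention rather than Tripwire-specific documentation; your configure flags may differ):

```
$ cd tripwire-portable-0.9
$ ./configure
$ make
$ sudo make install
```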
I have used many different methods over the years to print documents to PDFs from the command line. Some have been complicated sequences of pipes to and from groff, others required TeX, and occasionally I set up a "virtual" printer simply to print to file. I recently read the documentation for cups-pdf, however, and found that the cupsfilter command, in its bare form, is sufficient for most of my own tasks!
For example, to print 80-column ASCII plaintext (the majority of my code), I can use this:
$ cupsfilter foo.txt > foo.pdf
If you find the output of that command a bit verbose (as I do), you can send the errors silently to the null device using this version:
$ cupsfilter foo.txt > foo.pdf 2> /dev/null
There are many ways to wrap this simple command even more conveniently, but I'll omit those for now. This method is ideal because it uses routines built into OS X; any time you can take advantage of these, do, because many of the core technologies are significantly faster and more secure than third-party alternatives.
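For instance, one such wrapper (my own sketch, with a hypothetical helper name, not from the hint) writes the PDF next to the original file:

```shell
# A sketch (hypothetical helper name): convert a file to a sibling PDF,
# e.g. notes.txt -> notes.pdf, discarding cupsfilter's chatter on stderr.
topdf() {
  cupsfilter "$1" > "${1%.*}.pdf" 2> /dev/null
}
```

Then `topdf foo.txt` leaves foo.pdf in the same directory, with no output cluttering the Terminal.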
Of course, your mileage may vary :-).
[robg adds: cupsfilter PDF conversions work in at least 10.5 and 10.6, and I suspect in earlier OS X releases as well.]