I have long wanted a way to store and retrieve sets of Desktop items -- that is, to change the items on my Desktop according to the task I'm currently working on: dev, doc, design, surf. I tried it with some simple mv commands in Terminal, and it seemed to work fine, so I wrote a short script to handle the task.
I placed my script in ~/bin; remember to make it executable (chmod 755 scriptname). The first time you run it, it will create a top-level storage directory at ~/Library/Desktops. It will then create a file named current in that directory, which contains the name of the set of desktop items currently in ~/Desktop.
The first time the script is run, the "current" desktop item storage will be called "default." If the desktop script is run without any parameters, it will list the available storage names, with the current one marked by >>. If a parameter is supplied to the script, it will either retrieve the items from an existing storage bin with that name, or create a new storage bin if none exists.
I have only tested it on my PowerBook with 10.4, so I made 10.4 a requirement -- I believe Spotlight is needed to get the update reflected in the Finder. One should probably only use storage names that are ordinary identifiers; that is, they should start with a letter and not contain any spaces or other unusual characters. I have not tested non-standard names; the script would need some work to handle them properly, I think.
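To show the shape of such a script, here is a condensed sketch -- my own reconstruction, not the author's script. DESKTOP and DESKDIR are made overridable so the logic can be tried safely outside your real ~/Desktop; the names follow the description above.

```shell
# Sketch of a desktop-set swapper. DESKTOP/DESKDIR default to the real
# locations but can be overridden for safe experimentation.
desktop_swap() {
  DESKTOP=${DESKTOP:-"$HOME/Desktop"}
  DESKDIR=${DESKDIR:-"$HOME/Library/Desktops"}
  mkdir -p "$DESKDIR/default"
  [ -f "$DESKDIR/current" ] || printf 'default\n' > "$DESKDIR/current"
  current=$(cat "$DESKDIR/current")
  if [ $# -eq 0 ]; then                # no argument: list storage names
    for d in "$DESKDIR"/*/; do
      name=$(basename "$d")
      [ "$name" = "$current" ] && printf '>> %s\n' "$name" || printf '   %s\n' "$name"
    done
    return
  fi
  mkdir -p "$DESKDIR/$1"               # existing or brand-new storage bin
  # stash the current items, then bring in the requested set
  find "$DESKTOP" -mindepth 1 -maxdepth 1 -exec mv {} "$DESKDIR/$current/" \;
  find "$DESKDIR/$1" -mindepth 1 -maxdepth 1 -exec mv {} "$DESKTOP/" \;
  printf '%s\n' "$1" > "$DESKDIR/current"
}
```

Running `desktop_swap dev` would stash whatever is on the Desktop into the current bin and pull in the "dev" set; running it with no arguments lists the bins.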
I do not take any responsibility for this script and what it does: mv can be destructive, so be careful. It hasn't broken anything for me yet, but I have only been using it for a couple of days. Usually I only make scripts for my own use -- I'm no expert in scripting, so if you see any flaws or obvious mistakes, please add a comment.
[robg adds: I tested this on a Core Duo mini, and it seems to work just fine. It's an interesting idea, sort of like a simplified version of true virtual desktops.]
A long time ago, I wrote a hint for automatically creating tcsh aliases so that you could launch Mac applications from Terminal by typing their names (i.e., safari would open Safari, quicktimeplayer would open QuickTime Player, etc.). Because this works with the shell's built-in autocompletion, it means you can launch applications faster than Spotlight can find them, and certainly faster than hunting them down in the Finder!
Unfortunately, somewhere along the way, Apple switched the Mac's default shell from tcsh to bash. I stayed with tcsh because, well, habit, really, and I never got around to making the switch until now. It turns out that, with some syntax changes, the alias generator works just as well in bash as it did in tcsh.
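The idea can be sketched in a few lines of bash (my own sketch, not the original generator; APPS_DIR stands in for /Applications so it can be pointed elsewhere): create a lowercase, space-free alias for each .app bundle found.

```shell
# For every Foo Bar.app in the given directory, emit an alias like:
#   alias foobar='open -a "Foo Bar"'
gen_app_aliases() {
  APPS_DIR=${1:-/Applications}
  for app in "$APPS_DIR"/*.app; do
    [ -e "$app" ] || continue
    name=$(basename "$app" .app)
    short=$(printf '%s' "$name" | tr -d ' ' | tr '[:upper:]' '[:lower:]')
    printf "alias %s='open -a \"%s\"'\n" "$short" "$name"
  done
}
```

You could then load the generated aliases into a bash session with something like `eval "$(gen_app_aliases)"` in ~/.bash_profile.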
Ever wanted to view a remote machine's log file entries on your local machine? Here's one way to do it. In Terminal, use ssh to run tail -f on the remote log file you want to view, and direct the output to a local file with a .log extension. For example:

ssh remotehost tail -f /var/log/system.log > ~/Desktop/remote.log
(You'll have to replace remotehost with the full name/IP address of the remote host, obviously.) Note that the command will not return a command prompt. Now launch Console, use File: Open within the application, and point it to the local file with the .log extension you named in the Terminal command.
Now you can use Console's features to view your remote log files in real time. When you want to stop the logging, switch back to Terminal and hit Control-C to stop the ssh command that is sitting in your Terminal window.
Usually, I make TinyURLs with this Dashboard widget. However, I get a bit frustrated by the delay when a widget is launched for the first time. I love TinyURLs for two reasons: (1) they fit in any mail message or forum post with restricted line widths; and (2) they hide the address, which makes it more of a surprise for the reader to open the URL.
So instead, here's a small Perl program that converts each argument (or the contents of the clipboard, if no arguments are provided) into a TinyURL, and then prints the TinyURL version to standard output.
...failed in tests and couldn't be installed. Finally, I found one to download on Dave Cross' CPAN page. When you install it, be careful about where it wants to put itself: it will try to install into /Library/Perl/5.8.6/lib/perl5/site_perl/WWW. That is wrong; you must move it to /Library/Perl/5.8.6/WWW/.
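As a rough idea of the same approach in shell form (hypothetical, not the hint's Perl script): build the TinyURL API request for each argument. The real script would also URL-encode each argument and fetch the result, e.g. with curl; on a Mac, pbpaste could supply the clipboard contents when no arguments are given.

```shell
# Print the TinyURL API request URL for each argument.
# (A real version would URL-encode $url and fetch the result with curl.)
make_tiny_request() {
  for url in "$@"; do
    printf 'http://tinyurl.com/api-create.php?url=%s\n' "$url"
  done
}
```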
There are a few examples of backup schemes on MacOSXHints. In the spirit of giving something back, I thought I would share my backup scripts.
I back up all my user files to an external hard disk using RsyncX; you need to download and install this first. The advantage of this approach is that you synchronise the data on your hard disk with the data on your external backup disk, so each time the script runs, you only copy across any changed files. When I say "all my user files," I mean that I synchronise a copy of all directories under /Users.
Copy this SyncUsers script to /usr/bin on your Mac (here's an executable version). The volume to back up to is represented by the variable EXTDISK in the backup script -- in my case, DataFormac. Change this to the name of the volume you want to back up to. The script should be executed via sudo to ensure that all users' files can be archived, and so that mdutil works. To run the script, just type sudo SyncUsers in a Terminal window.
I've worked in the computer industry a while, and have seen a few disasters. Because of this, I'm overly cautious. Every month I take a copy of my external FireWire backup disk to another disk and store it "offsite" (which in fact means in my desk at work!). This is done with a second script called Offsite (here's the executable version).
Copy the script to /usr/bin on your Mac; again, you need to change the names of some disk volumes in the script. The disk the script copies from (the volume you run SyncUsers to) is stored in the SYNCDISK variable, and the disk volume you will copy to is in the variable EXTDISK. Be aware that the script will erase the volume (EXTDISK) and then take a copy of your backup volume using ditto. Again, this script should be run using sudo.
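And a sketch of the Offsite step (again with hypothetical names; cp -R stands in for ditto here so the sketch stays portable -- on a Mac you would use ditto to preserve resource forks):

```shell
# Erase the offsite volume's contents, then clone the backup volume
# onto it. The :? guards refuse to run if a variable is empty, so an
# unset name can never expand "rm -rf" to something catastrophic.
offsite_copy() {
  SYNCDISK=${1:?backup volume to copy from}
  EXTDISK=${2:?offsite volume to erase and copy to}
  rm -rf "${EXTDISK:?}"/*
  cp -R "$SYNCDISK"/. "$EXTDISK"/
}
```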
I've been meaning to mention this for a while, but I've been somewhat overwhelmed with site-related activities of late (which is also why I'm again behind on the Pick of the Weeks)...if you're interested in learning Unix, we've actually got a pretty good teaching assistant over on the forum site.
Our moderator hayne has put together a Unix FAQ, which addresses many questions that newcomers will have, covering topics such as common commands, permissions, shell login scripts, and more. There's even stuff there that may be of interest to those with some Unix knowledge, including some information on scripting the shell directly and via AppleScript. There's also a collection of links to places where you can learn more about Unix.
While it's obviously not a comprehensive list of everything to know about Unix, it's a good starting point if you're wanting to learn more about this system that lurks beneath the aquafied covers of OS X.
I know this has been visited before, but I found one of the best methods to date for killing a process from the command line: using a variation on a script in this writeup. The secret is using the -o flag to control the output of ps. You can then use awk and grep as usual, and xargs to cycle through the results and kill all matching processes. Consider this command:
ps axco pid,command
That will output only the process ID and the command name, which makes for easy pickings for a script to find and kill! I changed the original script's ps e command to ps axc -- the -a, -x, and other flags will vary depending on the user's needs. However, do not invoke the script with the -u flag (e.g. ps aux), which is how the ps command is normally suggested to be run. The -u flag specifies the format, and overrides the -o flag.
Example: to add this to a script to kill the Finder, you would do:
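A runnable sketch of what that might look like, shown here against a harmless background sleep instead of the real Finder (on a Mac you would match Finder in the awk pattern, and typically send -HUP so the Finder relaunches rather than just dying):

```shell
# Start a throwaway process to act as the kill target.
sleep 300 &
# List only pid + true command name, pick the matching pids, kill them.
ps axco pid,command \
  | awk '$2 == "sleep" {print $1}' \
  | xargs kill
```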
The original article explains how to variablize it, so you could set up a generic script and run it on multiple platforms with different versions of ps by invoking:
script_name $SIG $PROCESSNAME
Remember! This script kills all processes that are named $PROCESSNAME, so it's a good idea to use ps -c, which prints just the executable name instead of the full path. That way it won't accidentally kill another process that happens to match a folder name or something else in its path.
There are some other tricks for using ps in that writeup as well...
Experienced Unix users, look away now, please -- the following is a very simple hint. Over the weekend, I "lost track" of a device on my network. We've got a wireless video camera, but I've had it unplugged for a very long time. I plugged it in this weekend, but couldn't even begin to remember what IP address I'd assigned to it. So I wanted a simple way to just poll my network and see what was out there, which would let me find the camera by process of elimination.
Some versions of the ping command support the -b broadcast flag, which will send a ping request to any device capable of receiving such requests on your network, and report back with the addresses of those that replied. Unfortunately, Mac OS X's version of ping doesn't seem to support the flag -- it doesn't work if you try to use it, and it's not listed in the man page. Just as I was about to go find and build a new ping, a much more Unix-savvy friend of mine offered this alternative:

ping 192.168.1.255
Run that, and you'll see a response from anything on your network (192.168.1.xxx, in my case), like this:
robg $> ping 192.168.1.255
PING 192.168.1.255 (192.168.1.255): 56 data bytes
64 bytes from 192.168.1.53: icmp_seq=0 ttl=64 time=0.175 ms
64 bytes from 192.168.1.2: icmp_seq=0 ttl=150 time=0.660 ms (DUP!)
64 bytes from 192.168.1.70: icmp_seq=0 ttl=64 time=1.027 ms (DUP!)
64 bytes from 192.168.1.116: icmp_seq=0 ttl=60 time=3.966 ms (DUP!)
64 bytes from 192.168.1.92: icmp_seq=0 ttl=64 time=1.728 ms (DUP!)
So our ping does support broadcast pings, by placing the 255 value in the field you wish to vary -- the last field of the IP address for a typical home network. Of course, once I had the list, I then had to figure out what was what, but that was relatively trivial.
See, I told you it was a simple hint. And yet, in all my years of OS X usage, I had no idea you could do such a thing. So perhaps this will help some other relatively inexperienced Unix user out there...
Being a dialup-only user, I've been very frustrated with iTunes' lack of resumable downloads for podcasts, etc. My attempts at downloading with Firefox from sites such as MacObserver (MacGeekGab) were also unsuccessful, since Firefox would often find the connection closed partway through the download. (This is the same thing I'd see in iTunes, where I would never know what had happened, except that the file size was way too small.)
I'm sure there are shareware-type downloaders out there that would work, but money doesn't grow on trees for a lot of us folks. So I use the built-in curl command in Terminal. Running curl with the options -C - -O will work for most sites. However, for sites (like MacObserver) that use a service such as CacheFly, which relies on redirects, it won't. I was stumped until I contacted CacheFly, and they pointed out that "you can have curl follow redirects by passing it the -L flag."
I had read the help for curl, but did not understand what Follow Location: hints (H) meant. Finally! Now I can grab the podcast URL from the website, and then run this command in Terminal:
curl -L -C - -O http://url.of.podcast
Even if the connection gets killed before I've finished downloading (hours later), I can resume and finish it. I hope this helps at least one other dialup user out there. Hopefully Apple will eventually connect iTunes with tools like curl that ship with OS X to upgrade their download capabilities.
...and have Terminal open the Finder, with the specified file(s) selected.
The open command wasn't useful in this case, because I wanted the files to be selected. And sometimes I have the complete file path in the clipboard, and it annoys me to have to strip off the file name and then hunt for the file in the Finder.
My solution is an AppleScript called via a shell function. This way, you can also use the shell's completion to specify one or several files. If you call it normally, you must give the arguments as absolute paths; but if the first argument is Current_path:path, the script will look in the folder path to find the files.
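As a sketch of how that Current_path: argument handling might work on the shell side (my reconstruction; the function name is hypothetical, and the hint pairs this with an AppleScript that tells the Finder to reveal the resulting paths):

```shell
# Resolve arguments to the paths that would be handed to the AppleScript.
# If the first argument is Current_path:<dir>, the remaining names are
# taken relative to <dir>; otherwise each argument is used as given.
resolve_args() {
  case "$1" in
    Current_path:*)
      dir=${1#Current_path:}
      shift
      for f in "$@"; do printf '%s/%s\n' "$dir" "$f"; done ;;
    *)
      for f in "$@"; do printf '%s\n' "$f"; done ;;
  esac
}
```

On a Mac, each resulting path could then be passed to osascript, e.g. to a one-liner like `tell application "Finder" to reveal POSIX file "/the/path"`.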