If you're a web or FileMaker developer who needs access to MySQL databases via ODBC, you probably know that there are no binary distributions of MyODBC drivers that work on an Intel Mac. The PowerPC version won't work, and simply compiling from source fails as well.
After discussions with others, we've succeeded in compiling MyODBC, and it works! The trick is that you need to make a simple change in three files: change the references to odbcinst.h to iodbcinst.h in util/MYODBCUtil.h, driver/myodbc3.h, and myodbcinst/myodbcinst.c.
[robg adds: Apparently this information is in a README.osx in the distribution, but many haven't found it (judging by the results of a quick net search). This thread on the macosxhints forums also covers the problem; if you need more detail on the process, check either link.]
This is a warning as well as a hint. Control-clicking on a file or folder brings up a contextual menu that includes the option to Create Archive of "xxx". The created archive is a zip file that can be unzipped with /usr/bin/unzip, so one would think that OS X uses /usr/bin/zip to create the archive. But that isn't so: judging from the output of ps or top, the Finder creates the zip file with its own internal code, not /usr/bin/zip. Now the warning: /usr/bin/zip is broken on HFS+ file systems, in two ways:
By default, it does not preserve the resource forks of the files it archives. There seems to be no way to force it to preserve resource forks. (Has anyone found a way?)
Its behaviour does not correspond to its man page (/usr/share/man/man1/zip.1).
Those (like me) who want to automate backups of their system using command line tools should perhaps use /usr/bin/tar. The tar utility now (as of OS X 10.4.8 and perhaps earlier) preserves resource forks. It stores the resource fork of foo as ._foo, and then recombines the data and resource forks when untarring to an HFS+ volume. To archive and compress a folder foo and all of its subfolders, one can do:
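For example (the mkdir line just creates a stand-in for your real folder; the tar flags are the standard ones):

```shell
mkdir -p foo                # stand-in for the folder to back up
tar -czvf foo.tgz foo       # c=create, z=gzip, v=verbose, f=archive name
tar -tzf foo.tgz            # optional: list the archive's contents
```

Extracting later with tar -xzf foo.tgz onto an HFS+ volume recombines the data and resource forks.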
As described in this hint, the split utility is a highly-useful tool for splitting a large file into pieces. Recently, however, I discovered that attempting to split a file into pieces 2GB in size or larger fails with an error:
This is caused by Apple shipping an older copy of split that uses a signed 32-bit long for the byte count, limiting it to 2047MB. One workaround is to compile a new split utility that uses 64-bit integers, allowing a much larger byte count. Newer source code may be found on the NetBSD web site.
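Until you've replaced it, the stock split still works for pieces below the limit. A small, self-contained illustration (sizes shrunk so it runs quickly; file names are just examples):

```shell
# Make a 64KB sample file standing in for the large one:
dd if=/dev/zero of=bigfile bs=1024 count=64 2>/dev/null

# Split into 16KB pieces named bigfile.part_aa, bigfile.part_ab, ...
# (with the stock split, keep -b below 2048m):
split -b 16k bigfile bigfile.part_

# Reassemble; cat preserves the exact byte stream:
cat bigfile.part_* > bigfile_restored
cmp bigfile bigfile_restored && echo "identical"
```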
This may be kind of obscure, but if you're into Perl and trying to install modules via CPAN, specifically the DBI and DBD ones, things can get hairy. I spent a few hours trying to install these modules from source and from the CPAN perl shell. Neither worked; both gave frustrating errors that were nearly impossible to track down. After a bunch of testing, I found the solution:
In a Terminal window, type fink list dbi or fink list dbd and you will see a list of available packages.
Now type fink install dbi-pm586 (that's the default DBI package for Perl 5.8.6)
Here's the tricky part. Fink installs the modules in /sw/lib/perl5/5.8.6/darwin-thread-multi-2level, which is not in your normal @INC when you run perl scripts. I'm sure you could recompile Perl to include Fink's installation directory, or move the files to the right places, but instead I found a different way. In your perl script, simply put:
use lib '/sw/lib/perl5/5.8.6/darwin-thread-multi-2level';
and Perl will look in the correct directory.
This solution eliminates the annoying CPAN/compiler/make errors for me.
[robg adds: A much older hint refers to some compiler issues with gcc3 that prevent the DBI and DBD modules from compiling. The hint claims that simply changing the compiler to gcc2 solves those issues. I'm not sure if that's still relevant to the issue in this hint, but it might be something to look at. In any event, using the Fink versions appears to be a valid workaround.]
The tar utility in 10.4 is great in that it now supports copying of resource forks. I've used hfstar for a long while, and thought I'd switch this weekend to using the 10.4 version on a newly-acquired Intel iMac. The machine has its internal disk partitioned into three pieces: OS, Users and Media. On the OS and Media partitions, /usr/bin/tar worked fine and preserved resource forks on the backups.
On the Users partition, however, it successfully created tar archives, but without resource forks being preserved. It also generated errors of the form:
$ tar -cvf test.tar Test
tar: /tmp/tar.md.GPzLI9: Cannot stat: No such file or directory
tar: /tmp/tar.md.ayDXd5: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors
I did some googling on the issue, and could find very little other than some mention that it might be due to disk errors. So I ran a repair with Disk Utility, and it couldn't find anything wrong with the disk.
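For anyone hitting the same thing, a quick way to check whether a given archive actually captured the forks is to look for the AppleDouble (._*) entries that 10.4's tar creates for them. A sketch (Test here is just a stand-in folder):

```shell
mkdir -p Test && echo data > Test/file.txt   # stand-in folder
tar -cf test.tar Test

# Count AppleDouble entries; files with resource forks should each
# contribute a ._name entry, so 0 here means no forks were preserved:
tar -tf test.tar | grep -c '\._' || true
```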
NOTE: This hint was developed under Xcode 2.4 and Mac OS X 10.4.8; it probably won't work on pre-Tiger systems.
We just got some MacBook Pros, and our lab is now mixed PPC and i386. We rely on a number of command line programs developed over the years and ported from various BSD-flavored systems. We decided to convert all these programs and libraries to universal binaries so that all the systems can copy them from a single installed prototype. However, the process of converting bunches of makefiles to universal binaries seemed to be kind of a pain, so I wrote a simple program to convert cc(1) to produce them by default. The source can be downloaded here.
This tiny program works by inserting three critical arguments into a standard cc command and then calling gcc with the revised argument vector. It is installed as a separate command called ccub, and optionally it can replace cc so that universal binaries are the default. There's more explanation in the tarball.
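I haven't seen the linked source, but the idea can be sketched as a trivial wrapper; the three inserted flags below (two -arch flags plus -isysroot with Xcode 2.4's universal SDK path) are my guess at what the program adds:

```shell
cat > ccub <<'EOF'
#!/bin/sh
# Hypothetical ccub: hand everything to the real compiler with
# universal-binary flags prepended (CC override is handy for testing):
exec ${CC:-gcc} -arch ppc -arch i386 \
    -isysroot /Developer/SDKs/MacOSX10.4u.sdk "$@"
EOF
chmod +x ccub
```

Running CC=echo ./ccub -O2 hello.c prints the full argument vector that would reach gcc, which is an easy way to sanity-check a wrapper like this.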
Mac OS X users who attempt to access a Subversion repository hosted on an SMB file server (such as a Windows server) quickly run into difficulty. This is due to how file operations are handled by Mac OS X when dealing with an SMB filesystem. The details can be found in this blog entry: Subversion, Mac OS X, and SMB. A patch for obtaining usable support for repositories hosted on an SMB file server can be found here. Finally, here are instructions on how to install Subversion with the patch applied.
While this will obviously not be as reliable as accessing a repository hosted on a real Subversion server, it does suffice for situations that involve a small number of users.
There are a lot of ways to author a DVD, like iDVD or SmallDVD. They are quite powerful, but require a lot of interaction from the user. How can we author a DVD without using a GUI? The objective of this hint is to create a simple DVD for viewing DivX (or other format) movies on a DVD player. I don't need fancy things like menus; I just want a DVD that works like a VHS tape, with one movie after another, but more comfortable to use because we can jump between the movies using the Chapter button on the DVD player's remote.
The best part of not using a GUI is that you spend just one minute preparing everything, and then there is no more interaction until the end of the process. I prepare my DVDs during the night, which is great because the Mac may need several hours to prepare the DVD. To use this hint, you must be comfortable using Terminal (only a little -- you don't need to be an expert; I am not) and installing applications using Fink.
First you should install DVDauthor and mkisofs, both available in Fink, and MPlayer.
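To give a flavor of where this is going (file names are placeholders, and the movies must first be converted to DVD-compliant MPEG-2, which is where MPlayer's mencoder comes in): dvdauthor is driven by a small XML control file listing the titles in order. A minimal, menu-less sketch:

```shell
# Each <vob> plays in sequence, so the remote's Chapter button
# skips between the movies:
cat > dvd.xml <<'EOF'
<dvdauthor>
  <vmgm />
  <titleset>
    <titles>
      <pgc>
        <vob file="movie1.mpg" />
        <vob file="movie2.mpg" />
      </pgc>
    </titles>
  </titleset>
</dvdauthor>
EOF

# Then, with the Fink tools installed:
#   dvdauthor -o DVD -x dvd.xml
#   mkisofs -dvd-video -o mydvd.iso DVD
```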
I was initially using SSH shortcuts as described in this hint, but after using the hint for a while, I noticed some problems:
There doesn't seem to be a way to create .inetloc files other than drag and drop. You can create a file that has the correct data fork, and the Finder will handle it, but without the resource fork portion that the Finder itself creates on drop, other programs (like Path Finder) won't open the file properly. I keep a list of hosts I regularly connect to in ~/.hosts, and I wanted to use a script to create the .inetloc files from .hosts, rather than drag and drop the location to the Finder for every single one. (That would be fine if it were a one-time task, but the list changes.)
For whatever reason (I suspect a bug in Terminal.app), if you open a new Terminal window via one of these ssh://host shortcuts, .term files will not open until you relaunch Terminal. They'll make Terminal the active application, but nothing happens.
For whatever reason (another bug?), windows opened via the ssh://host .inetloc files don't show the "Command key" (⌘1, ⌘2, etc) in the window's title bar. (Although the command keys still work if you're a good guesser.)
The ssh:// addresses don't allow for many SSH options. From what I can tell, username is the only thing you can change, and even that can be problematic as robg pointed out.
I had to tell QuickSilver where to look for the .inetloc files, but I noticed that it already indexes ~/Library/Application Support/Terminal/ by default.
The solution was to use .term files for everything instead of ssh:// shortcuts.
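This also makes the list scriptable. A rough sketch of generating a minimal .term file per host from the hosts list (a real .term saved by Terminal contains many more keys; the plist skeleton and ExecutionString key below are the essential part, and the paths are stand-ins for ~/.hosts and ~/Library/Application Support/Terminal):

```shell
HOSTS=./hosts.txt    # stand-in for ~/.hosts, one hostname per line
OUT=./terms          # stand-in for the Terminal support folder
mkdir -p "$OUT"
[ -f "$HOSTS" ] || printf 'example.com\n' > "$HOSTS"   # demo data

while read -r host; do
  [ -z "$host" ] && continue
  # Minimal plist: ExecutionString is the command Terminal runs:
  cat > "$OUT/$host.term" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>WindowSettings</key>
  <array>
    <dict>
      <key>CustomTitle</key><string>$host</string>
      <key>ExecutionString</key><string>ssh $host</string>
    </dict>
  </array>
</dict>
</plist>
EOF
done < "$HOSTS"
```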
I often rename files immediately after downloading and stick them in a folder somewhere for later reference. But I also often forget what I've already downloaded. So I wrote this bash shell script to use Spotlight to find files that match a file's name or its size and kind.
Usage, in Terminal:
For example, for a file called 0.pdf, output might look like this (line breaks added for a narrower display):
Possible matches based on filename:
Possible matches based on size and kind:
So it turns out I just downloaded a file that I already have four copies of under different names and locations.
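The script itself isn't reproduced above, but its core can be sketched with mdls and mdfind. Everything below is a reconstruction, not the original: the script name is invented, and while -raw, kMDItemFSName, kMDItemFSSize, and kMDItemKind are standard Spotlight flags and attributes, check them against your OS version:

```shell
cat > find-dupes.sh <<'EOF'
#!/bin/sh
# Report Spotlight matches for "$1": first by file name, then by the
# combination of size and kind.
f="$1"
name=$(basename "$f")
size=$(mdls -raw -name kMDItemFSSize "$f")
kind=$(mdls -raw -name kMDItemKind "$f")

echo "Possible matches based on filename:"
mdfind "kMDItemFSName == '$name'"

echo "Possible matches based on size and kind:"
mdfind "kMDItemFSSize == $size && kMDItemKind == '$kind'"
EOF
chmod +x find-dupes.sh
```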
I've set this up as a command in OnMyCommand. For this to work, the shell script must be in a folder that's included in your $PATH. Here's the OnMyCommand command (assuming you are using OMCEdit):