
Easily create HFS-aware PKZIP and unix archives
HFS files have metadata and resource forks that are lost by common unix archive tools like cpio, tar, and gzip. Panther's new archive maker is HFS-aware, but it's not compatible with StuffIt or tar and zip. Fortunately, you can use ditto to make both PKZIP (Windows) and compressed unix archives that preserve the resource forks! There are three types of archives you can produce: pkzip, cpio-zipped, and tar-zipped. The first two are easy. The last is tricky both to create and to undo, so I don't recommend it, but the method is instructive, so I'll describe it anyhow.

To create a pkzip archive of "some_folder", type:

 % ditto -c -k -X --rsrc some_folder  some_folder.zip
Alternatively, to create a compressed cpio archive, type:
 % ditto -c -X  -z --rsrc some_folder some_folder.cpio
By the way, although most people are more familiar with tar-zip, a cpio archive is a universally compatible unix archive and, if anything, is more versatile (see the man pages for cpio and pax for details). If you send this file to a unix or Windows person, they can open it using cpio or pkzip, respectively. For example, on any unix machine (besides OS X), to open a cpio archive, type:
 % cpio -i -z -I some_folder.cpio 
and it will unarchive it in the current folder. In this folder, you may observe that for each file some_file that has metadata, there is an associated file named ._some_file which contains all of its metadata and resource fork. This is called AppleDouble format. Note that not every file has metadata, so you may not see an AppleDouble companion for every regular HFS file.
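
For example, after extracting you might see something like this (the document name here is just a made-up illustration):

 % ls -a some_folder
 .               ..              ._report.doc    report.doc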

On an OS X machine, you can also extract the archive using cpio, and you will see these same AppleDouble files. However, when you do this on an HFS file system, cpio fails to restore the resource forks to the right files. Therefore, don't use cpio to unarchive on an HFS file system. It does restore the resource forks on Mac OS X if you extract onto a UFS partition or a mounted NFS drive, however. On an HFS system, you recover the resource forks from the cpio archive by using ditto again:

 % ditto -x --rsrc some_folder.cpio some_folder_destination
To recover the resource forks from a pkzipped file, you have to use StuffIt Expander. Strangely, ditto does not work in this case; I'm guessing this is a bug, since it should.
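
For reference, the invocation you would expect to be the inverse of the zip-creation command is shown below; as noted above, it does not restore the forks here, so treat it as a sketch rather than a working recipe:

 % ditto -x -k --rsrc some_folder.zip some_folder_destination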

ADVANCED STUFF
You don't really need to read further, but you might be interested in seeing how to use these tricks to make a tar-gzip archive that preserves resource forks! Well, it's not pretty, and it's hard to undo. But in case you want to, here is a script. Save the following into a file called tar_hfs.sh and make it executable (chmod a+x tar_hfs.sh).

#!/bin/bash
# tar_hfs.sh: write a gzipped tar of the folder named in $1 to stdout,
# with HFS metadata preserved as AppleDouble (._*) files.
# Assumes $1 is a folder in the current directory, not a full path.

TEMPDIR=`mktemp -d /tmp/tar_hfs.XXXXXX` || exit 1
TEMPFIFO=`mktemp "$TEMPDIR/cpio.XXXXXX"` && mkdir "$TEMPDIR/$1" && {

  # Pack with resource forks, then unpack resource-unaware;
  # this leaves the ._ AppleDouble files next to the data files.
  ditto --rsrc -c -X "$1" "$TEMPFIFO"
  ditto -x "$TEMPFIFO" "$TEMPDIR/$1"

  # Tar-gzip the expanded copy (the archive goes to stdout).
  (cd "$TEMPDIR" && tar -cz "$1")
}

rm -fr "$TEMPDIR"
To use this on some_folder, type tar_hfs.sh some_folder > some_folder.tgz. Again, this saves the metadata in AppleDouble format. I use ditto first to create a cpio archive, then use ditto again to unpack it in a resource-unaware manner, which leaves the ._some_file metadata files on disk. Then I tar the result. Note that as an intermediate step, this trick makes a full copy of the folder you are archiving (it cleans up after itself), so don't do this on a nearly full disk! Also note that this script is for illustration purposes only and is not very general: it assumes that you are in the directory containing some_folder, and it won't quite work if you give it a full path on the command line.
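
To check that the AppleDouble entries actually made it into the archive, you can list its contents (the file names here are illustrative):

 % tar -tzf some_folder.tgz
 some_folder/
 some_folder/._report.doc
 some_folder/report.doc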

Okay, now how do I un-tar-zip this and restore the resource forks? To do that, you need to un-tar-zip it, then create a cpio archive from the result, then use ditto to unarchive that. Yuck; the details are left as an exercise for the reader, though a rough sketch follows below. But better yet, just stick with the cpio archives.
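
For the curious, here is a minimal, untested sketch of that reverse procedure, assuming the tarball was made with tar_hfs.sh above; the folder names are placeholders:

 % tar -xzf some_folder.tgz
 % ditto -c -X some_folder some_folder.cpio
 % ditto -x --rsrc some_folder.cpio some_folder_restored

The first step leaves the ._ files visible on disk, the second packs them (resource-unaware) into a cpio archive as ordinary members, and the third unpacks that archive resource-aware, merging the ._ files back into resource forks as described earlier.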

Lastly, I'll mention one final mystery I ran into while figuring this out; maybe some reader can solve it. If you look at the script above, I called one of the intermediate files TEMPFIFO. This is because in my original script I used a unix fifo instead of a real file and backgrounded the first ditto command, figuring I could save some space and time. Oddly, about every ten or so tries, the first ditto command would issue an error message about not being able to complete the fifo. I have no idea why that error would happen. Anyone know why? Also, maybe someone would like to wrap this in AppleScript for drag and drop and post it below?


Easily create HFS-aware PKZIP and unix archives | 19 comments
Easily create HFS-aware PKZIP and unix archives
Authored by: raider on Nov 26, '03 02:02:40PM
All right, let me start by noting that I am a switcher, as of the release of OS X 10.0.

I have never understood the benefit of resource forks, but I have been plagued by the problems they cause. Mail.app before Panther handled them terribly for cross-platform use (or rather, most Windows mail applications handled them incorrectly, since AppleDouble is an internet standard, but it was still ALWAYS perceived as my fault for using a "POS Macintrash" - not good for getting people to like, or even tolerate, Macs...). The common unix commands like cp and mv would kill them and break some files and programs... And then there are things like this, where they are not supported in the formats the whole world uses...

Ideally I would only like to ever have to interact with a Mac - but realistically, if I want to stay employed, I need to interact with various operating systems, and none of them have resource forks except the Mac.

So my question to long-time Mac users (who I am sure will answer): why do we even need them anymore? Why not just get rid of them and not have these problems? What benefits did resource forks provide that warrant keeping them around? Links to sites are appreciated, but I am also interested in personal opinions...

Easily create HFS-aware PKZIP and unix archives
Authored by: SOX on Nov 26, '03 03:08:40PM
Not an answer to your question, but... Microsoft's Longhorn and several GNU projects are moving in the direction of even more metadata, since the user experience will be a file system accessed via a database rather than a folder hierarchy.

The Mac metadata and resource forks serve distinct roles. The metadata lets you know what program created, or should open, a file without having to rely on the filename-extension convention. It can also indicate other things, such as the fact that a file is an HFS alias (as opposed to a unix link).

The resource fork was a necessary item to ensure that programs and documents could carry around auxiliary information without having to have multiple files. In Mac OS X this is replaced with an equally arbitrary solution: one can create a folder that contains multiple files that all pertain to the document or application. For example, applications are now all designated by the arbitrary addition of .app to their folder name. In the Finder they appear as an atomic entity rather than as a folder you can (easily) open. What's arbitrary here is the following. When you double-click the icon, how does the Finder know what to do? Well, it looks inside the folder for a file that obeys a certain naming convention. When you look at the file in the Finder, how does it know what icon to use? Again, a naming convention inside the folder tells the Finder which file is the icon. How do applications register services with the OS? You got it...
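
To illustrate, a typical bundle layout looks something like this (a generic example, not tied to any particular application):

 % ls Some.app/Contents
 Info.plist   MacOS   PkgInfo   Resources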

The resource fork was just a different solution to this problem, and it worked fine. The problem was not that the Mac made a bad choice; rather, the problem was that EVERYONE else (unix, DOS) made a bad choice by not creating a rich enough specification for how to accomplish all these needs when applications require auxiliary files, while still keeping the application an atomic entity. Thus Macs had a hard time mapping their scheme onto these weakling file systems. That was the source of this headache.

The headache recurs now mainly as a legacy; pure OS X files have only metadata, not resource forks. In the future I expect we may see more metadata but lose the resource forks.


Easily create HFS-aware PKZIP and unix archives
Authored by: greed on Nov 26, '03 03:22:19PM

Resource forks on data files are typically used to contain 'state' information. For example, a text editor I used to use would write the current position to the resource fork, so any program that didn't know how to use that resource would just see an ordinary text file in the data fork.

Another thing that gets put in the resource fork is the 'preview' and the 'icon' images. This way, a file can have a custom icon, and you don't have to have a separate file. (Anyone remember .info files on the Amiga?)

I believe the TYPE and CREATOR information is handled in much the same way as the resource forks when dealing with non-HFS-aware programs. On HFS, they are actually attributes of the file, just like creation and modification time, size, and so on.

So forks just give you a way of keeping related 'files' together under one name. It is pretty sloppy on Apple's part to leave the standard BSD utilities unmodified; it would at least be nice if the standard 'ls' command could show type and creator as well. (I despise file extensions for type information, I really liked the MacOS way of making it a full-fledged attribute on the file.)
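
(As an aside: assuming the Developer Tools are installed, their GetFileInfo command can at least show these codes from the shell. The file name and output here are just an illustration.)

 % /Developer/Tools/GetFileInfo -t report.doc
 "W8BN"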

How do you get a custom icon for something on Windows? For executables and DLLs, it seems they use a special section in the COFF file. Can you even have a custom icon on a data file? It would have to be in a separate file if it was.

That's on data files. For program files, resources would contain everything from the icon sets, to the window and menu definitions, sound samples, and so on. Those are now handled with the ".app" directory format.

Generally, on data files, you should always be able to toss the resource fork without damaging the data (as long as you can get the TYPE some other way). Mail.app sending the resource fork is just dumb; I had to go through and remove all the resource forks before e-mailing some attachments. Given all the "be more like Windows" changes in OS X, many of which are frustrating (file extensions... yuck), why didn't they suppress the resource fork by default?



Read.
Authored by: Lectrick on Nov 27, '03 01:28:12PM

http://www.macdisk.com/macforken.php3

---
In /dev/null, no one can hear you scream



Easily create HFS-aware PKZIP and unix archives
Authored by: DavidRavenMoon on Nov 26, '03 03:23:19PM

Resource forks are quite handy, and in fact I'm sad to see them go in OS X.

Some examples. On a PC, or even with data only files in OS X, the file's suffix is the only thing that tells the OS what type of file it is, and what to use to open it. That's fine and dandy for the majority of situations, but I'll give a few examples.

Let's take the EPS (Encapsulated PostScript) file format. I'm a graphic artist, and two of the programs I use all day are Adobe Photoshop and Illustrator. Both of these applications can save a file in the EPS format, but the files are quite different! Illustrator EPS files are mostly vector information, and opening one of these in Photoshop will cause the file to be rasterized. Photoshop can also save EPS files, but these are basically bitmap raster files wrapped up in a PostScript shell. If you open these in Illustrator, you will see a document with a placed image.

If these files had no resource fork, such as when they come from a PC, you have no idea what application created them, you only see "myfile.eps." On a Mac, you can clearly see which icon each has (Photoshop or Illustrator), and each has Type and Creator code meta data embedded in the resource fork, which is how the icons are assigned. Previews and thumbnails are also in the resource fork.

This way, double clicking on the file would launch the correct application, even if you have OS X set to use Preview by default for EPS files.

So if you know you are going to send a file to a PC user (or any OS other than the Mac OS), saving your image files with no previews will prevent resource forks from being sent with the data file. Some Mac files, such as font suitcases, will become damaged with the resource fork stripped off, since the actual font is in the resource fork. OS X's dfonts are data-only. If a file with resource forks needs to be sent, it must be stuffed and not zipped (unless you are doing this trick, but I think it's much easier to use StuffIt).



Resource forks are not the only way to tell the file type!
Authored by: hopthrisC on Nov 26, '03 04:18:48PM
I don't want to start a huge discussion here! I myself tend to ignore the whole resource fork business, hoping that one day the issue will go away one way or the other, but...

You can tell the file type (very often) from the contents of a file. In fact there is a Unix command to do this: file(1). Try it on a .psd! To verify that it does not need the suffix or the resource fork, cp the file on the command line and run file again:

aschenputtel:~ hop$ file test.xxx
test.xxx: Adobe Photoshop Image

Let's take the EPS file format ;) Every PostScript file has to start with "%!PS". Easy to tell the type of content from this.
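
You can check this from the shell, too (the file name is just an example):

 % head -1 drawing.eps
 %!PS-Adobe-3.0 EPSF-3.0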

As for the Photoshop vs. Illustrator example you brought up: that's a valid point in some respects, but you may also see it from another standpoint: I usually open files with the application best suited to the task I intend to perform, which is not necessarily the one used to create the file.

Once the file is saved as .eps in Photoshop, you can't vectorize it anyway, and Photoshop won't let you edit vector data, no matter that it was created in Illustrator. [Ignoring the fact that paths are vector data, just to get my point across ;) ]

The only reason to save a picture as EPS from Photoshop is probably that you want to import it into a vector-based application that doesn't import other formats (LaTeX comes to mind), so will you ever double-click it again to open it with Photoshop?

The fonts... what should I say? Have you noticed that Apple is moving away from PS fonts and to TrueType fonts?

file type info is not usually in the resource fork
Authored by: hayne on Nov 26, '03 05:05:14PM

Just to clarify:
The file type and creator info (that tells the Mac what application to use to open this file) is not usually stored in the resource fork. This information is stored in HFS+ as part of the "catalog" along with other metadata like the time of last modification, etc.

To prove this to yourself, take a file that has type/creator information - e.g. a file created by a Mac application, not one downloaded from elsewhere. Make a duplicate of the file in Finder and rename the file so it is named "junk", then open a Terminal window and then do the following commands:

ls -l junk/..namedfork/rsrc
cp /dev/null junk/..namedfork/rsrc

The first of the above commands shows you the resource fork of the file named "junk". If there is no resource fork, it will show up as zero size.

The second of the above commands will remove the resource fork of the file named "junk".

In spite of the resource fork being removed, you should still be able to open the file as usual by double-clicking on it in Finder.



Easily create HFS-aware PKZIP and unix archives
Authored by: gshenaut on Nov 26, '03 09:27:23PM

EPS is not a good example, since EPS files contain a %%Creator header line that identifies the program that created them; this is just one example of how the material placed into resource forks can be handled painlessly without them.

Greg Shenaut



Creator and Type codes have NOTHING to do with rsrc fork
Authored by: elmimmo on Nov 27, '03 03:18:29AM

At the risk of sounding repetitive, even though some people are glad to have creator and type codes for files, they have NOTHING to do with resource forks. It is a file attribute stored in the HFS+ file catalog, something like the creation date.



Creator and Type codes have NOTHING to do with rsrc fork
Authored by: gshenaut on Nov 27, '03 11:32:38AM

OK, so what good are resource forks?



Easily create HFS-aware PKZIP and unix archives
Authored by: ChiperSoft on Nov 26, '03 06:24:48PM
I put this to work in AppleScript to create a droplet for making MacBinary zip files that StuffIt can handle. Here's the AppleScript:

-- This droplet is based upon the droplet examples on Apple's AppleScript site.
on open these_items
	repeat with i from 1 to the count of these_items
		set this_item to item i of these_items
		set the item_info to info for this_item
		if (alias of the item_info is false) then
			process_item(this_item)
		end if
	end repeat
end open

-- this sub-routine processes folders 
on process_item(this_item)
	
	set file_path to POSIX path of this_item
	set file_path to trim_lineend(file_path, "/") --this is needed so that ditto can properly handle directories
	
	set the_command to "ditto -c -k -X --rsrc " & quoted form of file_path & " " & quoted form of (file_path & ".zip")
	
	do shell script the_command
end process_item

on trim_lineend(this_text, trim_chars) --modified from a subroutine on the AppleScript site
	set x to the length of the trim_chars
	repeat while this_text ends with the trim_chars
		try
			set this_text to characters 1 thru -(x + 1) of this_text as string
		on error
			-- the text contains nothing but the trim characters
			return ""
		end try
	end repeat
	return this_text
end trim_lineend
Save the script as an application. The zip file will be created in the same directory as the dropped file or folder.

Easily create HFS-aware PKZIP and unix archives
Authored by: SOX on Nov 26, '03 11:06:55PM

Thanks!
I've been trying to figure out how to do character editing and regular expressions in AppleScript to delete that last slash. Is there a good online reference that teaches AppleScript? I find either superficial examples or hideous-to-read reference manuals that make no sense to me.

Three things I'm trying to figure out: how to extract the filename from a fully qualified path (which might include spaces); how to extract the enclosing folder name, again without the path; and how to extract the filename without the suffix or path. (E.g., so I can write something like
[code]
command /path/myarch.bzip2 /newpath/myarch.cpio
[/code]
when given the name /path/myarch.bzip2, I need to be able to create the name /newpath/myarch.cpio.)
I can do this by calling perl and using regular expressions, but there must be some way to do it in AppleScript directly. Plus, it gets ugly in perl when there are special characters in the file name that need meta-quoting.

any suggestions?



Easily create HFS-aware PKZIP and unix archives
Authored by: omnivector on Nov 26, '03 09:25:14PM
For those unix people in the house, here's a simple Ruby script to maczip a folder. Usage: maczip some_folder - it creates some_folder.zip and doesn't delete the folder.

#!/usr/bin/ruby

folder = ARGV[0]
folder = folder.sub(/\/+$/, "")  # strip any trailing slash so ditto names the zip correctly

puts `ditto -c -k -X --rsrc "#{folder}" "#{folder}.zip"`
Just drop that in a file named maczip, chmod it to 755, and put it in your bin folder of choice (e.g. ~/bin).

---
- Tristan


Easily create HFS-aware PKZIP and unix archives
Authored by: merlyn on Nov 27, '03 10:55:52AM
If you're gonna go to that much work, you might as well create a .dmg file, which is completely resource aware, obviously. In Panther, it's as simple as:
hdiutil create -srcfolder SomeDirectory result.dmg
And now you have a .dmg (no resource fork) that can be safely emailed and unpacked, or put on a server.
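To get at the contents again later, just mount the image (this reuses the result.dmg name from the example above):
hdiutil attach result.dmg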

Easily create HFS-aware PKZIP and unix archives
Authored by: adrianm on Nov 27, '03 03:00:44PM
I don't know if I just missed it in this thread, but ditto also has the --sequesterRsrc flag that, when creating a zip file, will do so in the format the Finder 'archives' files and folders; that is, it puts all the resource forks in a __MACOSX directory within the zip.
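Presumably the creation side would then look something like the following; I haven't verified this exact flag combination, so treat it as a guess:
ditto -c -k --sequesterRsrc --rsrc some_folder some_folder.zip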

Easily create HFS-aware PKZIP and unix archives
Authored by: LC on Dec 02, '03 02:00:26PM

Yecch, it seems that ditto puts the "-v" or "-V" output on stdout, so if you use "-" as the archive specifier (to use a pipe or file redirect), those messages corrupt the archive... I had wondered why I'd seen cpio errors on the unpack side (I usually archive/dearchive through a compressed remote shell ;). Is it documented somewhere that ditto's verbose messages don't go to stderr when the archive is "-"? Thanks; Larry.



Easily create HFS-aware PKZIP and unix archives
Authored by: LC on Dec 02, '03 04:46:05PM
To make it clearer, these (in 10.3.1) work fine:
 pax -wv mydir | pax
 pax -w  mydir | pax -v

 tar cvf - mydir | tar t

 ditto -c mydir1 - | ditto -V -x - /tmp/mydir2
The following doesn't work (because the archive gets corrupted):
 ditto -V -c mydir1 - | ditto -x - /tmp/mydir2

 ditto: cpio read error: Illegal seek
 ditto: xxx: Broken pipe
 (...)
Remove the "-V" from that last command, and it works o.k. (I just submitted the above in bugreport;) Larry.

Easily create HFS-aware PKZIP and unix archives
Authored by: fy on Dec 06, '03 02:36:10AM

Does this hint say that:
- we can make a pkzip archive with metadata and resource forks by typing
ditto -c -k -X --rsrc some_folder some_folder.zip
- we can recover the resource forks from some_folder.zip using StuffIt Expander?

If it does, does it require a fairly new version of StuffIt Expander?
I can't recover them with StuffIt Expander 7.0.3, while I can with BOMArchiveHelper.



Easily create HFS-aware PKZIP and unix archives
Authored by: carsten on Apr 20, '04 12:26:29PM

To use the command line to decompress ZIP files created with the 10.3 Finder AND have the resource forks properly merged back in:

ditto --sequesterRsrc -k -x test.zip .

(The period at the end just means output the files to the current directory. Replace the period with ~/Desktop or wherever you want to resurrect the files.)

This should be useful for scripting, and maybe for 10.2 users too, but...

Does anyone know if the above ditto command-line works in 10.2? (where there is no BOMArchiveHelper)
