

small improvements on improvements
Authored by: peyote on Oct 19, '02 05:21:00AM
You forgot:-

4. WAY more efficient. With the perl versions, when you run the script, your shell has to fork and exec perl. Perl compiles the script; then, when it gets to the system() command, it forks and execs a shell, that shell goes and finds the "open" program, forks yet again, and execs it. In the shell version, everything but "touch" and "open" is a shell builtin, so there's much less forking going on and no expensive perl interpreter startup.

For bonus points, make the last line "exec open ......", which saves one fork operation. For double bonus points, turn it into a shell function and load that from your .bashrc (assuming you use bash as your login shell), something like this:-
pb () {
	for i in "$@"
	do
		test -e "$i" || touch "$i"
	done
	open -a "/Developer/Applications/Project Builder.app" "$@"
	# NB you *don't* want that extra exec here or you'll blow
	# away your login shell :)
}
(Oh yeah, you forgot the quotes around the $i in your version, so filenames with spaces will leave "droppings" all over the place.)

Anyway, with that shell function defined, you can type "pb foo.c" as before, and if the file exists, the only time the shell has to fork and exec is when it runs the "open" command, which saves about a bazillion CPU cycles and disk IOs compared to the perl way. Obviously in csh people's mileage will vary, but scripting in csh is evil, so don't do that. For other Bourne shell variants, the startup file to place shell functions in is different.

-- Pete.
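The quoting point is easy to demonstrate; a minimal sketch, using a made-up file name with a space in it:

```shell
# Show why "$i" needs the quotes: with a space in the file name,
# the unquoted loop word-splits and leaves "droppings".
cd "$(mktemp -d)"
set -- "my file.c"
for i in "$@"
do
	# unquoted: $i splits into two words, so touch creates
	# two files, "my" and "file.c" (test also complains, hushed here)
	test -e $i 2>/dev/null || touch $i
done
for i in "$@"
do
	# quoted: creates the intended "my file.c"
	test -e "$i" || touch "$i"
done
ls
```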

small improvements on improvements
Authored by: yoel on Oct 19, '02 11:27:45AM

Good point about the quotes...that one slipped by me. Your efficiency suggestions are all good ones, too. I will only note that you have that peculiar obsession with micro-optimization that characterizes many old-time Unix people, which is, I suppose, understandable seeing as how the old Unix machines probably would get smoked by my PalmPilot :-).



small improvements on improvements
Authored by: peyote on Oct 19, '02 01:13:02PM

More of a habit than an obsession... I just hate seeing cycles go to waste.

But yeah, you're right... When I were a lad, hardware was a lot slower... I remember being impressed by how nippy the Sun 3/50 was when it came out.

But to drag this back on topic, every cycle you throw away by (say) needlessly running up a perl interpreter is one that can't be used for that uber-cool raytrace you're doing in the background (or that password cracker or SETI@home or whatever).



small improvements on improvements
Authored by: AndyFyfe on Oct 19, '02 01:06:37PM

> WAY more efficient.

Actually no.

Whether it's a perl script or a bash script, the system is going to fork
and exec the interpreter (perl, bash), which will read, parse, and execute
the script, each having to fork & exec to execute external commands.

Perl's "system" command won't run the command through a shell unless it
contains shell metacharacters. Otherwise it'll break it up and exec it
directly. Still, the system command should really be written as

system "touch", $filename

so that spaces and the like in $filename don't get messed up. (With
multiple arguments, system will never run the command via /bin/sh.)

And the final system can be

exec "open", "-a", "/Developer/Applications/Project Builder.app", $filename

to avoid the final fork(), just like in bash.

Making it a shell function avoids some overhead, though assuming the
script uses exec to run "open", the number of fork() calls is the same.

If it is a script, you can do the following sort of thing:

my %commands = (
    # map the name of the script ($0) to the name of the application to run
    "pb" => "/Developer/Applications/Project Builder.app",
    "te" => "/Applications/TextEdit.app",
    # as many as you want
);

if (!defined($comamnds{$0})) {
    print "$0: Don't know what command to run\n";
    exit 1;
}

and change the exec of the open command to use $commands{$0}.

Then you can take your script "pb" and create links to it (either
symbolic or hard) with the alternative names "te", etc.
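The same one-script-many-names trick can be done on the shell side too. A sketch, with the dispatch factored into a function for clarity and the application paths taken from the hash above (the function name is made up here):

```shell
# Dispatch on the name the script was invoked as. Only the basename
# of $0 is used (${0##*/}), so it also works when run via a full path.
launch_by_name () {
	name=${1##*/}
	shift
	case $name in
	pb) app="/Developer/Applications/Project Builder.app" ;;
	te) app="/Applications/TextEdit.app" ;;
	*)  echo "$name: Don't know what command to run" >&2
	    return 1 ;;
	esac
	for i in "$@"
	do
		test -e "$i" || touch "$i"
	done
	open -a "$app" "$@"
}
# In the script body:  launch_by_name "$0" "$@"
# Then create the extra names:  ln -s pb te
```

With hard or symbolic links named pb and te pointing at the same script, each name picks its own application.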



small improvements on improvements
Authored by: yoel on Oct 19, '02 01:39:52PM

well, making it a shell function avoids
1) starting perl
2) compiling the script to bytecode
which I imagine are the two biggest speed hits. That thing with the hash is a really clever idea, though.



small improvements on improvements
Authored by: AndyFyfe on Oct 20, '02 04:09:52PM

Certainly making it a shell function rather than a separate script
(perl or otherwise) avoids having to start up a script interpreter
and having to read the script.

But it doesn't avoid having to convert the script into some sort of
internal representation. Perl has its internal parse tree (which can
be dumped as bytecode or C or perl); the shell has its own variant.
There are bound to be tradeoffs between doing work up front or delaying
it until execution, but I can't imagine noticing on a script of this size.
I expect the shell does some of the work once when it processes the
function definition and thus saves some time each time it is executed;
another part of the win for shell functions.

On my system, running the perl script takes about 20 times longer
than running the shell function, so the overhead is very real. But
the "open" command takes about 20 times longer than the perl script
(time from hitting "return" until the next command prompt, not the
time until the application is actually running, which is longer still).
And that's where the speed hit really lies in this case. Ignoring the
time it takes me to type the command, of course.
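A crude way to reproduce that sort of measurement in portable sh (date +%s only has one-second resolution, so run the command in a loop to make per-invocation startup cost visible; `sh -c ':'` stands in for whichever command you are timing):

```shell
# Time 100 runs so the per-invocation startup cost shows up
# above the one-second resolution of date +%s.
start=$(date +%s)
i=0
while [ $i -lt 100 ]
do
	sh -c ':'          # the command under test, e.g. the perl script
	i=$((i + 1))
done
end=$(date +%s)
echo "100 runs in $((end - start))s"
```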



heh
Authored by: yoel on Oct 20, '02 05:16:36PM

I'm amused that you actually went and timed this. You are, of course, absolutely right that in the big scheme of things, the overhead of running perl vs. using a shell function is very small potatoes. I think of this sort of optimization as more of a fun game than anything else (you can feel free to call me sick now).



small improvements on improvements-BUG
Authored by: Krioni on Oct 22, '02 04:12:41PM

Um, neat trick!

But, a few things are broken:

First, $0 holds the whole path - you'll need to get just the filename part.

my %commands = (
    # map the name of the script ($0) to the name of the application to run
    "pb" => "/Developer/Applications/Project Builder.app",
    "te" => "/Applications/TextEdit.app",
    # as many as you want
);

if (!defined($comamnds{$0})) {

# typo there - should be commands
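On the first point, stripping the path from $0: the shell side of the same idea has two standard options (in perl, the File::Basename module does the equivalent job). The path below is just an example value:

```shell
# Two standard ways to get just the file name part of a path:
script=/usr/local/bin/pb        # example value of $0
echo "${script##*/}"            # parameter expansion: prints "pb"
basename "$script"              # external command: prints "pb"
```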


