

small improvements on improvements
Authored by: AndyFyfe on Oct 20, '02 04:09:52PM

Making it a shell function rather than a separate script
(perl or otherwise) certainly avoids starting up a separate
script interpreter and reading the script from disk.

But it doesn't avoid having to convert the script into some sort of
internal representation. Perl has its internal parse tree (which can
be dumped as bytecode or C or perl); the shell has its own variant.
There are bound to be tradeoffs between doing work up front or delaying
it until execution, but I can't imagine noticing on a script of this size.
I expect the shell does some of this work once, when it processes the
function definition, and so saves that time on each subsequent
invocation; that is another part of the win for shell functions.

On my system, running the perl script takes about 20 times longer
than running the shell function, so the overhead is very real. But
the "open" command takes about 20 times longer than the perl script
(time from hitting "return" until the next command prompt, not the
time until the application is actually running, which is longer still).
And that's where the speed hit really lies in this case. Ignoring the
time it takes me to type the command, of course.



heh
Authored by: yoel on Oct 20, '02 05:16:36PM

I'm amused that you actually went and timed this. You are, of course, absolutely right that in the big scheme of things, the overhead of running perl vs. using a shell function is very small potatoes. I think of this sort of optimization as more of a fun game than anything else (you can feel free to call me sick now).


