
Disable pipelining in Mozilla to improve performance
I use Privoxy to filter out ads and such, and I was curious whether Privoxy supported pipelining to improve browsing speed (as Mozilla and Chimera do). I asked about this on Privoxy's support pages at SourceForge, and got this interesting answer:

[Editor's note: Please see the comments for some pretty good evidence that this note contains bad advice...]
Date: 2003-01-17 00:49
Sender: nobody
Logged In: NO

You should pay particular attention to the same Mozilla screen where you can specify pipelining. It says: "WARNING: pipelining is an experimental feature, designed to improve page-load performance, that is unfortunately not well supported by some web servers and proxies."

My opinion is that pipelining is NOT a good idea. It can slow down everything, because the results must be streamed back in their entirety in the same sequence that they were requested - so getting a 100-byte GIF will have to wait on a 200 KB SWF before the browser even sees its first byte. Pipelining counteracts both the browser's and the server's multi-threading capabilities. It's just not a good idea even when everything works right - everything waits on the single-threaded pipe.
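To make that head-of-line blocking concrete, here is a minimal Python sketch of what pipelining looks like on the wire (example.com and the two paths are hypothetical stand-ins). Both requests leave on one socket back to back, but HTTP/1.1 requires the responses to come back in request order, so not a byte of the small GIF arrives until the large SWF has been fully streamed:

import socket

# One TCP connection; two requests pipelined back to back.
sock = socket.create_connection(("example.com", 80))
sock.sendall(
    b"GET /big.swf HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /tiny.gif HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)

# The server must stream /big.swf in its entirety before the
# first byte of /tiny.gif - the GIF waits on the SWF.
data = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    data += chunk
sock.close()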
Read the rest of the article for the remainder of the reply I received...

The note continues:
For enhancing browser speed - real speed, either piecemeal or total page loading - you should consider tuning your maximum number of connections per server instead. You can create a new file called user.js in your Mozilla profile directory and put the following lines in it (or add these lines if that file already exists):
 user_pref("network.http.max-connections", 64);
user_pref("network.http.max-connections-per-server", 8);
user_pref("network.http.max-persistent-connections-per-proxy", 20);
user_pref("network.http.max-persistent-connections-per-server", 10);
Now you must tweak those settings to find the values that work best for your particular browser habits and network configuration. Please realize that very high values may hurt performance as much as very low values will. Some sites advocate absurdly high values without having done any real throughput testing.

The maximum number of connections can also be tweaked in other browsers. A few Google lookups will show exactly how to do it:
  • For IE that's done in MaxConnectionsPerServer registry entries.
  • Netscape 4.x uses a preference setting in its prefs.js file.
  • Opera users set it on the Network preferences screen.
In the specific case of Privoxy - NO, pipelining is not supported. Nor should it be in any proxy capable of filtering. A filtering proxy itself responds directly to many requests without sending to or waiting on another server - and that conflicts with the principle of pipelining. In order to support pipelining, the proxy would have to delay a filtered response within a stream and insert it in the appropriate place where the server would have placed it. Get that - the proxy would have to delay its responses.

Others may disagree. My opinion is that pipelining is a terrible idea. Reused HTTP/1.1 connections are very much preferred, since they are more efficient for the client browsers and for all the servers.
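As a rough illustration of the reuse he prefers, the sketch below (Python, with example.com as a stand-in host) issues several requests over one persistent HTTP/1.1 connection: each response is drained completely, then the same socket is reused, so the per-request TCP setup cost is paid only once and nothing is forced into a pipeline:

import http.client

# One persistent HTTP/1.1 connection, reused sequentially.
conn = http.client.HTTPConnection("example.com")
for path in ("/", "/style.css", "/logo.gif"):
    conn.request("GET", path)     # next request reuses the socket
    resp = conn.getresponse()
    body = resp.read()            # drain fully before reusing
    print(path, resp.status, len(body))
conn.close()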

Guy.
Please don't credit me with this information, even though I find it very interesting. It would be interesting to get some reader feedback on good values for these settings for different connection speeds...

Not quite...
Authored by: mholve on Jan 24, '03 10:48:33AM
While that explanation kind of makes sense, what's missing is the fact that the client and server have to initiate multiple connections for each piece of a page without pipelining. With it, it's like having a persistent database connection - it's there, and it's wide open for data. With all the extra connections, you have more overhead. I would say that pipelining is still faster - perhaps just not perceptibly, as the author indicates.

Just my take on it... Anyone seen any real benchmarks or know more about it?


Disable pipelining in Mozilla to improve performance
Authored by: metiure on Jan 24, '03 11:20:31AM

It's definitely faster with pipelining OFF and the maximum number of connections per server tuned with the given values.

I'm using Chimera Build ID: 2003012004

vic



NO!
Authored by: natenate on Jan 24, '03 11:54:30AM

This guy doesn't know what he's talking about. All web servers that conform to HTTP/1.1 are required to support pipelining. That being said, he is correct in saying that some servers don't like it. Mozilla itself is aware of these servers, and if it finds one of these few specific server types at the remote end, it won't use pipelining.

Moreover, he suggests fooling with max connections (and the like). DON'T. If you do, you're putting undue stress on every single server that you load a page from.



can't keep'em alive ?
Authored by: hagbard on Jan 24, '03 04:31:56PM

After reading the comments, I checked my prefs.js file and removed the pipelining entries, which effectively made my Chimera faster...
I added the following, which was not in my file:
user_pref("network.http.keep-alive", true);
user_pref("network.http.proxy.keep-alive", true);
user_pref("network.http.keep-alive.timeout", 180000);
(I guessed the timeout was in milliseconds?)
But my problem is that when I do a netstat -an right after a page is loaded, no connection remains open, i.e. keep-alive is NOT working. I tried increasing the timeout value, but it didn't change anything...
Does anyone else have the same problem?



Re: can't keep'em alive ?
Authored by: marook on Jan 24, '03 06:17:31PM
Well, keep-alive is not meant to keep a connection alive when there is no data that needs to be transferred!

Keep-alive will keep the connection alive (not closed) as long as the page you are loading still has items that need to be loaded, up to a maximum duration. After that timeout expires, a new connection is made!

All good network applications close the connection when they are done with the transfer! Imagine what would happen if your browser kept a connection to every site you had visited since your last boot!

Keep-alive is meant to keep down the overhead of opening/closing connections for every GIF/JPG/SWF, by requesting the items over the same connection.
Hope it makes sense...
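As an illustration of that reuse, a keep-alive exchange looks something like this on the wire (hypothetical host and values; the Keep-Alive response header is the server announcing how long, and for how many more requests, it is willing to hold the socket open):

GET /logo.gif HTTP/1.1
Host: example.com
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: image/gif
Content-Length: 1042
Keep-Alive: timeout=15, max=100
Connection: keep-alive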

Re: can't keep'em alive ?
Authored by: hagbard on Jan 25, '03 01:30:05AM

Please read my comment carefully; I'm talking about the timeout, and about connections not being kept up even before the timeout... Of course we don't want zombie connections, but when you keep surfing on the same site, having live connections speeds things up...



Keep Alive
Authored by: digitalone on Jan 25, '03 06:17:33AM

The keep-alive option mentioned in the user prefs is a good one, but it can only ATTEMPT to keep the connection alive. Most servers will kill the connection by changing their state automatically, forcing the connection closed. If set up right - and many servers do this to prevent denial-of-service attacks - they will kill the connection without waiting for the remote host to ACK whenever multiple requests for different pages arrive from the same remote host while the previous TCP session with that host is still established.
You're right that HTTP/1.1 supports pipelining; otherwise you would have to create a new connection every time you pull any embedded component in a page (such as images or Java applets). Its scope can be severely limited by constraints placed on the server, however. On commercial or busy servers, open connections are typically set to close immediately after all data is transmitted, and the link is kept open by the scripting embedded in the HTML/XML.

Digitalone
This, however, is one man's humble opinion.



Removing pipelining works great
Authored by: uid73397 on Jan 26, '03 01:15:52AM

I set my pipelining options to false and the increase in speed was huge. I have the four-line multiple-connections addition referred to in this post, but with the following numbers:

user_pref("network.http.max-connections", 128);
user_pref("network.http.max-connections-per-server", 48);
user_pref("network.http.max-persistent-connections-per-proxy", 24);
user_pref("network.http.max-persistent-connections-per-server", 12);



Removing pipelining works great
Authored by: mosch on Jan 28, '03 11:19:48AM

Please don't even consider using the settings that uid73397 has suggested. 48 connections per server might give you decent performance, but it does so by slamming the load onto the server, and is actually fairly likely to cause you some delay, because the server may be unable to provide that many connections immediately.

The default Apache configuration allows a total of 150 concurrent connections, with a handful of spare servers at any given time, because individual users are expected to use only a few connections each. Telling the browser to initiate 48 simultaneous connections to the server means that you could momentarily be using one third of a heavy-duty server's connections. The worst part is that if the admin has attempted to improve overall quality of service by limiting per-IP bandwidth to something reasonable, all those connections will stay open for a significant amount of time.
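For reference, the connection limits in a stock Apache 1.3 httpd.conf look roughly like this (the values shown are the shipped defaults of that era - check your own configuration before drawing conclusions):

# Hard cap on simultaneous connections (one child process each):
MaxClients 150
# Idle spare children kept ready to absorb bursts:
MinSpareServers 5
MaxSpareServers 10
# Seconds an idle persistent connection is held open:
KeepAliveTimeout 15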

Well, I'm off to see what I need to do to limit connections per IP, since uid73397 has accidentally pointed out a denial-of-service attack that I'd never previously considered.

Please don't follow this anti-social "hint".



Reply
Authored by: david-bo on Jan 29, '03 05:17:03PM

Guy, the guy (!) who wrote the original text, has commented on the comments found here:

------
Date: 2003-01-27 01:18
Sender: nobody
Logged In: NO

Looks like I have to watch out what I say around here. Some of your readers do seem to agree now that their browsing has been improved using the information we both made available to them.

There are two statements that stand out:
1) Me, saying "Others may disagree."
2) The one who said "This guy doesn't know what he's talking about."

Seriously though, there may be no single answer for everybody. If someone spends all their time at one specific site that has an HTTP server specifically designed for pipelining, then that will do very well. On the other hand, someone who peeps at only very simple sites may be better served with older HTTP/1.0 non-persistent connections.

I would like to highlight how the "average" or "common" browsing pattern is contradictory to pipelining:

- You hit a page.
- As that HTML is retrieved, the browser sees and fetches bunches of GIF, JPG, CSS, JS, etc. that are specified as part of that page. A "burst" of requests.
- Then a long delay while the human absorbs the content.
- Then click to something else == repeat.

That's not what pipelining will be good at supporting. Instead you need an architecture designed for bursts of distinct communication - which in today's technology should involve multiple concurrent threads and several concurrent TCP sockets. In the real-world testing that I have done, persistence beyond a few seconds is irrelevant - and everything crawls with a single pipe.

Just turn on Privoxy's URL logging to a file for a few hours, then review it. You'll see bursts of requests happening simultaneously, with some time lag between them. It's pretty rare to see a single lonely request to a site, and it's also rare to see 100 requests to the same site in the same burst. With advertisements, the burst consists of intermingled requests to a couple of sites.

How would YOU design an architecture to accommodate that? If your answer is to group all sites within a burst together and hand the whole mess to some other server - then Pipelining is your answer. Sort of like delegating responsibility, saying "I don't know how to handle all that stuff very well, so it's your job".

In contrast, re-used HTTP/1.1 connections work quite differently. In this method a browser opens a handful of connections to a server (default 2, I like more) - and as each request is satisfied, it shoves another one down that same pipe. It is quite common for one connection to be reused several times before the first one gets done with its reply. Some requests take longer to process, often just due to sheer size. And while these requests are being satisfied, the data is being pumped back to the browser - which is then using multiple threads to render the page as the various pieces complete. That's the real world that I live in.
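What Guy describes maps naturally onto a small pool of workers, each owning one persistent connection. A rough Python sketch of the pattern (example.com and the paths are stand-ins): as soon as a worker drains a response, it pushes the next request down the same socket, so a burst is spread across a handful of reused pipes instead of one:

import http.client
import queue
import threading

HOST = "example.com"                 # stand-in server
jobs = queue.Queue()
for p in ("/", "/a.css", "/b.js", "/1.gif", "/2.gif", "/3.jpg"):
    jobs.put(p)

def worker():
    # Each worker owns one persistent HTTP/1.1 connection and
    # reuses it for request after request.
    conn = http.client.HTTPConnection(HOST)
    while True:
        try:
            path = jobs.get_nowait()
        except queue.Empty:
            break
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()                  # drain before the socket can be reused
        print(path, resp.status)
    conn.close()

# A "handful" of connections (the default is 2; Guy likes more).
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()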

You see, there are at least two computers capable of multiple threads working to produce the result. By having only a single pipe or two, neither computer can bring its resources to bear on the task at hand. It just doesn't make much sense unless opening a TCP socket is exceedingly expensive - and it's not.

==========

Here are some other arguments against pipelining in the "real world" of multi-threading:

- Even if opening a subsequent TCP socket IS exceedingly expensive, reused HTTP/1.1 connections would still let requests be piped to the previously completed sockets instead of delaying anything else.

- You can open a bunch of TCP sockets concurrently much faster than opening each one at a time. So the initial TCP socket overhead associated with establishing a connection is mostly a parallel event, not a serial event. The overhead does not accumulate, and it must be incurred at least once when pipelining anyway.

- When the HTTP/1.1 RFC was written, the above two items were not as true. Netscape was the first to introduce socket reusability for HTTP/1.0, a sub-protocol that wasn't properly implemented in some other browsers. Many client PCs, ISPs, and backbone routers were each individually subject to bigger performance issues as the number of concurrent connections increased. OS TCP stacks and internet routers have evolved significantly since then. So I think pipelining is a method that was being rendered obsolete at about the same time it was being specified.

- For the advanced college types, the applicable topic is "Queuing Theory". Pipelining is where you drop the class to avoid an F because you haven't got a clue how to apply any of it.

==========

Regarding pipelining being supported by all HTTP/1.1 servers: sure, it's called a QUEUE, where activity on a 2nd request is delayed until the 1st one is completed. Remember that pipelining REQUIRES the complete response for each request to be streamed in exactly the sequence that was requested. The server cannot complete several requests simultaneously and hope they accidentally come out in sequence. The server is FORCED to queue either the incoming requests, OR the outgoing results. Either way, it must put you on hold for a while someplace.
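A toy model of that forced queue: even when the server finishes responses out of order, a pipelining server must buffer them and emit them strictly in request order (the sequence numbers and sizes below are invented for illustration):

# Toy model: completions arrive out of order, but must be sent
# back in request order, so early finishers sit in a buffer.
pending = {}          # seq -> completed response body
next_to_send = 1

def send(body):
    print("sending", len(body), "bytes")

def on_response_done(seq, body):
    global next_to_send
    pending[seq] = body
    while next_to_send in pending:        # flush in request order
        send(pending.pop(next_to_send))
        next_to_send += 1

on_response_done(2, b"x" * 100)       # tiny GIF finishes first: held back
on_response_done(1, b"y" * 200_000)   # big SWF done: both flush, in order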

Meanwhile, your Number5 browser is loafing back at the ranch: "I need Input". It does help keep your processor from overheating. It can also help ensure that speedy new machines will remain competitive with existing low-end machines :)

==========

And now a clarification about Privoxy. Future plans are to better address reusing HTTP/1.1 connections. Currently the methods employed through Privoxy do not fully illustrate the performance variance. Benchmarking the two methods may be more accurate without Privoxy skewing any results.

I encourage the members of your other forum to please do some of their own research. And those that grasp the issues - whether they agree with me or not - are encouraged to join us in this project as we improve the product.

Interesting topic; thanks for the opportunity to respond here. Say HI to your forum for me.

Guy.

------
Date: 2003-01-27 01:45
Sender: nobody
Logged In: NO

OOPS. I was enjoying writing the response so much that I didn't proofread it enough. There's an important misrepresentation of what I meant to say in one of the paragraphs.

The sentence currently reads:

If your answer is to group all sites within a burst together and hand the whole mess to some other server - then Pipelining is your answer.

It should have been (changes shown in upper case):

If your answer is to group all THE REQUESTS FOR EACH SITE within a burst together and hand EACH GROUP IN ITS ENTIRETY to A server - then Pipelining is your answer.

Guy.

------



It works for me!
Authored by: malvolio on Feb 11, '03 11:45:47AM

Turning off pipelining and editing my user.js file as recommended has given a significant speed boost to both Mozilla and Chimera.
Sweeeet!


