Applying the Dogfood Principle

The current set of servers is configured according to the dogfood principle: we're practicing what we preach, to the extent possible. Some of the knobs we are twisting are not well tested in the field, so we might as well test them somewhere! Admittedly, a primary goal is to keep the service(s) running, so if we encounter problems we will modify what is in place, and eventually move into the cloud. Until then, the dogfood principle applies. A set of formal test servers and routers will be up at some point, too.

ECN is turned on. Using ECN does little good unless the routers on the path actually honor it, and work is ongoing to see whether it can be enabled in the general case. In the meantime, feel free to try it.

SACK and DSACK are enabled. These do help.

It's very easy to enable these three options under most Linux distributions, via settings in /etc/sysctl.conf.


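As a sketch of what that looks like on a recent Linux kernel (sysctl names per the kernel's ip-sysctl documentation; verify defaults against your distribution):

```
# /etc/sysctl.conf -- enable ECN, SACK, and DSACK
net.ipv4.tcp_ecn = 1     # request ECN on outgoing connections, accept it on incoming
net.ipv4.tcp_sack = 1    # selective acknowledgments (usually already on by default)
net.ipv4.tcp_dsack = 1   # duplicate SACK (usually already on by default)
```

Apply the changes with `sysctl -p`, or reboot.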
IPv6 is enabled in the primary DNS and on the main website(s) themselves. IPv6 behavior is potentially worse, as IPv6 gets nowhere near as much attention from developers, ISPs, or hardware vendors. It is potentially better in that fewer middleboxes (NATs, shapers) muck with it.

TXQUEUELEN is reduced to 64. This is (probably) the wrong thing for a server, but for one that is not (yet) doing traffic shaping and is handling multiple flows, it makes sense: it pushes more of the decision making back into the TCP portion of the buffer stack, where it belongs.
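The reduction itself is a one-liner with ip(8) (a sketch; `eth0` here is a placeholder for the actual interface name):

```
# shrink the interface transmit queue from the common default of 1000 to 64 packets
ip link set dev eth0 txqueuelen 64

# confirm the new setting (look for "qlen 64")
ip link show dev eth0
```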

Driver buffer sizes are currently unknown. These are older servers, however, so we suspect they are not bloated.
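The NIC ring buffer sizes can be inspected with ethtool (a sketch; `eth0` is a placeholder, and not every driver supports these operations):

```
# show the NIC's current and maximum ring buffer sizes
ethtool -g eth0

# if the driver allows it, the transmit ring can be shrunk, e.g.:
ethtool -G eth0 tx 64
```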

There is (currently) no outgoing traffic shaping in place; however, the SFB and RED qdiscs are under consideration.
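If we do try SFB, attaching it is a one-liner with tc (a sketch with untuned defaults; `eth0` is a placeholder, and the sfb qdisc requires kernel 2.6.39 or later):

```
# attach Stochastic Fair Blue as the root qdisc on the outgoing interface
tc qdisc add dev eth0 root sfb

# inspect the result
tc -s qdisc show dev eth0
```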

The Apache servers are using the event MPM (mpm_event) instead of the more common worker MPM (mpm_worker), theoretically improving HTTP/1.1 performance.
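On Debian-style systems (an assumption about the distribution in use) the swap looks roughly like:

```
# switch from the worker MPM to the event MPM
a2dismod mpm_worker
a2enmod mpm_event
systemctl restart apache2
```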

There is also a fix to MSIE recognition:
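The exact directive deployed is not shown here; a commonly used Apache workaround for old MSIE keep-alive bugs (an assumption, not necessarily the fix in place) looks like:

```
# disable keep-alive and force HTTP/1.0 responses for ancient MSIE versions
BrowserMatch "MSIE [2-6]" nokeepalive downgrade-1.0 force-response-1.0
```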

All major bits of code (e.g. Redmine) are running under a form of FastCGI (mod_fcgid), which load-balances and scales up and down well, with minimal memory use.

TCP Vegas is under consideration.
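Switching the congestion control algorithm is also a sysctl away (a sketch; the `tcp_vegas` module must be available in the running kernel):

```
# list the algorithms the running kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# switch new connections to Vegas
sysctl -w net.ipv4.tcp_congestion_control=vegas
```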

The (low-power) dedicated servers currently running are donated by ISC and Teklibre.

If you encounter problems, please send an email to support AT, detailing your configuration and including a traceroute. You can also take steps to Diagnose your bufferbloat.
