Bug #269

Performance over the LFN 2

Added by Dave Täht on Sep 15, 2011. Updated on Nov 18, 2011.
Status: New · Priority: Normal · Assignee: Dave Täht

Description

I started fiddling with TCP's socket parameters. Over fios and an 85 ms path to io.lab.bufferbloat.net, I get about 10 Mbit/sec with a 256k window, which is the maximum you can get out of the default sysctl settings.

I raised that exorbitantly… and got it up to 14 Mbit, using westwood+. I don't know what the

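For context, the ceiling a fixed window imposes is just window / RTT. A back-of-the-envelope check (a sketch, not from the original report):

awk 'BEGIN { win = 256 * 1024; rtt = 0.085;
  printf "window-limited ceiling: %.1f Mbit/s\n", win * 8 / rtt / 1e6 }'
# prints: window-limited ceiling: 24.7 Mbit/s

Linux reserves part of each socket buffer for overhead (see tcp_adv_win_scale), so observed goodput lands well below that ceiling.
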
root@bob-desktop:~# add these to /etc/sysctl.conf and do a sysctl -p
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
root@bob-desktop:~# exit

exit
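
Whether the new limits took effect can be checked with standard sysctl queries (a suggestion, not part of the original transcript):

sysctl -n net.ipv4.tcp_rmem   # min / default / max autotuning bounds, in bytes
sysctl -n net.core.rmem_max   # hard ceiling for SO_RCVBUF via setsockopt()
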
d@bob-desktop:~$ iperf -w2m -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 4.00 MByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[ 4] local 149.20.63.20 port 5001 connected with 149.20.54.82 port 47337
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-60.0 sec 2.92 GBytes 418 Mbits/sec

VERY SHORT PATH above, gigE.

[ 5] local 149.20.63.20 port 5001 connected with 71.162.243.5 port 41074
[ 5] 0.0-10.5 sec 18.2 MBytes 14.5 Mbits/sec

The 84 ms path above.
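
The quoted RTT is easy to confirm with an ordinary ping against the same host (not part of the original report):

ping -c 10 io.lab.bufferbloat.net
# the avg round-trip time should sit near 84-85 ms on this path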

History

Updated by Dave Täht on Sep 15, 2011.
The above numbers are with a txqueuelen of 8 on the router. A txqueuelen of 1000 gets me:

[ 5] local 149.20.63.20 port 5001 connected with 71.162.243.5 port 41100
[ 5] 0.0-60.3 sec 183 MBytes 25.5 Mbits/sec
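
For anyone reproducing this, txqueuelen is set per interface; either of these works (eth0 stands in for the router's actual interface):

ip link set dev eth0 txqueuelen 1000   # iproute2
ifconfig eth0 txqueuelen 1000          # legacy net-tools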

txqueuelen of 40 gets me:

root@OpenWrt:/etc/config# iperf -t 60 -w1m -c io.lab.bufferbloat.net
------------------------------------------------------------
Client connecting to io.lab.bufferbloat.net, TCP port 5001
TCP window size: 2.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.220 port 41103 connected with 149.20.63.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 191 MBytes 26.6 Mbits/sec

So, obviously, we have a strong interrelationship between txqueuelen and the TCP window size.
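
The mechanism: whatever the window admits beyond the bandwidth-delay product sits in the bottleneck queue. A rough worst case for a 1000-packet queue at the measured ~26 Mbit/s, assuming full 1500-byte packets (a sketch, not a measurement):

awk 'BEGIN { pkts = 1000; bytes = 1500; rate = 26e6;
  printf "worst-case queueing delay: %.0f ms\n", pkts * bytes * 8 / rate * 1000 }'
# prints: worst-case queueing delay: 462 ms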

Updated by Dave Täht on Sep 17, 2011.
Updated by Dave Täht on Nov 18, 2011.
All these bugs are related to having a decent AQM scheme in place that makes a sane compromise between throughput and latency.
Updated by Dave Täht on Nov 18, 2011.
I HAVE, however, settled on about 50 buffers as being a reasonable default without AQM when connected at 100Mbit. Currently this is 4 in the driver and 40 in the stack. I may need to increase this a little bit, but at this setting I get over 400Mbit at gigE speeds, on a local lan.

So, as we add some other AQM technique than pfifo fast, these numbers will need to be tweaked. A lot.
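
For reference, the "4 in the driver and 40 in the stack" split maps onto two knobs (eth0 is a stand-in, and not every driver lets its TX ring go that low):

ethtool -G eth0 tx 4                  # shrink the driver's TX descriptor ring
ip link set dev eth0 txqueuelen 40    # cap the pfifo_fast queue above it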
