Experiment - TCP Cubic vs TCP Vegas¶
PURPOSE: examine TCP's response to short- and long-haul 802.11n packet loss.
On a suggestion from one of the posters to jg's blog, I took a look at TCP Vegas. The results I got were puzzling.
With TCP Cubic, a single stream typically gets 71 Mbit/sec, along with the side effects of bufferbloat.
With Vegas turned on, a single stream peaks at around 20 Mbit/sec.
Ten Vegas streams did about 55 Mbit/sec in total.
Can I surmise that TCP Cubic is like a dragster, able to go really fast in one direction down a straightaway, while TCP Vegas is more like an '80s-model MR2: maneuverable, but underpowered?
The testbed network:
The first test path: laptop->nano-m->nano-m->openrd
(I note that this path almost never exhibits packet loss)
Most of the machines on the path are running with minimal txqueues and with DMA buffers set as low as they will go. (I'll fully document this in a bit.)
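The exact settings aren't documented yet, but a sketch of the kind of tuning involved might look like this. The interface names and sizes below are placeholders, not the values actually used on the testbed:

```shell
# Hypothetical tuning sketch (interface names and sizes are placeholders;
# the real testbed settings will be documented later). Requires root.

# Shrink the transmit queue on the wireless interface.
ip link set dev wlan0 txqueuelen 16

# Shrink the driver's DMA (ring) buffers, where the driver allows it.
ethtool -G eth0 rx 64 tx 64
```

Note that not every driver honors small ring sizes; `ethtool -g eth0` shows the supported range.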
Test 1 - raw throughput, maximum wireless speed¶
    openrd:$ iperf -s
    laptop:$ iperf -t 60 -c openrd
With vegas (on both laptop and server)¶
    modprobe tcp_vegas
    echo vegas > /proc/sys/net/ipv4/tcp_congestion_control

    openrd:$ iperf -s
    laptop:$ iperf -t 60 -c openrd &
    laptop:$ ping openrd
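Since ping runs alongside the transfer, the latency numbers are the interesting part. A small sketch for summarizing the RTTs from a saved ping log; the sample lines below are placeholders to show the format, not measured data:

```shell
# Hypothetical helper: summarize RTTs from a saved ping log.
# These sample lines stand in for real output of `ping openrd > ping.log`.
cat > ping.log <<'EOF'
64 bytes from openrd: icmp_seq=1 ttl=64 time=1.23 ms
64 bytes from openrd: icmp_seq=2 ttl=64 time=250.7 ms
64 bytes from openrd: icmp_seq=3 ttl=64 time=510.0 ms
EOF

# Extract the time= field and print min/avg/max in ms.
awk -F'time=' '/time=/ {
    split($2, a, " "); t = a[1] + 0
    if (min == "" || t < min) min = t
    if (t > max) max = t
    sum += t; n++
}
END { printf "min/avg/max = %.1f/%.1f/%.1f ms\n", min, sum/n, max }' ping.log
# → min/avg/max = 1.2/254.0/510.0 ms
```

A large spread between min and max during a bulk transfer is the bufferbloat signature this test is looking for.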
On a hunch (which didn't pan out), I also re-ran the tests with much larger socket buffers:
    echo 8388608 > /proc/sys/net/core/rmem_max # on both machines
    echo 8388608 > /proc/sys/net/core/wmem_max # on both machines
    openrd:$ iperf -w8m -s
    laptop:$ iperf -t 60 -w8m -c openrd
It made no measurable difference.
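To rule out the sysctls silently not taking effect, the caps can be read straight back out of /proc. This is a read-only check, safe on any Linux box:

```shell
# Read the socket-buffer caps back; after the echoes above these should
# both print 8388608 (an untuned system shows the distro defaults).
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
```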
Conclusion: in part, people like Vegas because it pushes less hard than Cubic and thus has fewer side effects. Vegas's response to latency deserves more investigation, which we can do with test series 2.
Test 2 - testing with minimal wireless speed¶
Test 3 - testing with packet loss¶
Test 4 - testing with de-bufferbloated drivers¶
Test 5 - testing with traffic shaping¶