I’ve been wondering for months why Westwood+ didn’t seem to behave any differently than CUBIC, even in cases where it should, such as at 200+ ms RTTs. This is probably why: I went looking and discovered that TCP timestamping was turned off in sysctl.conf.
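For anyone who wants to check their own box, the setting is a one-line sysctl (path and knob as on stock Linux; your distro’s sysctl.conf location may differ):

```shell
# Check the current setting (1 = on, 0 = off):
sysctl net.ipv4.tcp_timestamps

# Enable at runtime:
sysctl -w net.ipv4.tcp_timestamps=1

# And make it persistent by putting this line in /etc/sysctl.conf:
# net.ipv4.tcp_timestamps = 1
```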
Assuming that netperf doesn’t enable it itself (I have to look at some old packet captures), this invalidates all the testing we’ve done to date against Westwood+. While there is a performance impact to timestamping, it’s more or less required to give TCP an actual clue as to real delays - for example, in a TCP proxy case, or, as I do all the time, when testing how well wireless is working from the host to the router.
Sigh. Enabled by default in rc8.
It is vitally important to use the RTTM mechanism with big
windows; otherwise, the door is opened to some dangerous
instabilities due to aliasing. Furthermore, the option is
probably useful for all TCP’s, since it simplifies the sender.
3.2 TCP Timestamps Option
TCP is a symmetric protocol, allowing data to be sent at any time
in either direction, and therefore timestamp echoing may occur in
either direction. For simplicity and symmetry, we specify that
timestamps always be sent and echoed in both directions. For
efficiency, we combine the timestamp and timestamp reply fields
into a single TCP Timestamps Option.
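The combined option above is what makes cheap RTT measurement possible: every ACK echoes back a timestamp the sender stamped earlier, so no per-segment timers are needed. A minimal sketch of the RTTM idea (toy clock, illustrative names, not kernel code):

```python
# RFC 1323 RTTM sketch: the sender stamps each segment with TSval from
# its own timestamp clock; the receiver echoes the most recent TSval
# back as TSecr. An RTT sample is simply (current clock - TSecr).
# The timestamp clock is a 32-bit modular counter, so the subtraction
# must wrap correctly.

TS_MOD = 2**32

def rtt_sample(now, tsecr):
    # now: sender's current timestamp clock tick
    # tsecr: echoed timestamp carried by an incoming ACK
    return (now - tsecr) % TS_MOD

print(rtt_sample(1212, 1000))      # 212-tick RTT sample
print(rtt_sample(5, TS_MOD - 10))  # 15: still correct across clock wrap
```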
Some Ethernet/wireless hardware makes doing the timestamps cheap or free, so it isn’t clear this change is a big deal (and hardware assistance may help performance where timestamps are otherwise expensive). I suspect the WNDR3700v2’s hardware is recent enough that doing timestamps won’t cost much.
Unless I’m missing something…
Secondly, when the router is used as a web proxy, it was my hope that Westwood+ would help, and it didn’t.
Thirdly, timestamping would help VPN over TCP when the router is the endpoint.
Fourthly, certain network monitoring tools on the router continually update the web page, and would benefit from better congestion control.
As you also note, having it on, even when rarely used, means it isn’t going to hurt…
Thus, timestamping being off by default is a bad idea.
In some preliminary tests I saw no real difference in CPU usage with timestamping on, and a slight reduction in throughput (from about 94 Mbit to 92.x Mbit) due to the increase in ACK size.
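For scale: the timestamps option costs 12 bytes of option space per segment (10-byte option plus padding), shrinking the MSS from 1460 to 1448 on a standard 1500-byte MTU link. A rough back-of-envelope (numbers illustrative; it accounts for most, though not all, of a drop like the one above, with bigger ACKs likely covering the rest):

```python
# Per full-sized segment, the on-wire frame is the same size either
# way; only the data payload shrinks when timestamps are enabled.
mss_plain = 1460          # typical MSS, 1500-byte MTU, no options
mss_ts = mss_plain - 12   # 1448: timestamps option + padding

ratio = mss_ts / mss_plain
print(round(ratio, 4))         # fraction of goodput retained per segment
print(round(94.0 * ratio, 2))  # expected Mbit/s if 94 Mbit/s without timestamps
```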
But I now have several hundred GB of captures to throw out and some more thorough tests to re-run. I really hope to see Westwood behaving like Westwood now.
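The reason accurate delay samples matter so much here: on loss, Westwood+ sets ssthresh from an estimated bandwidth-delay product rather than blindly halving cwnd, so coarse RTT information defeats the whole point. A hypothetical sketch of that idea (illustrative names, not the kernel implementation):

```python
# Westwood+ idea, sketched: on packet loss, instead of the Reno/CUBIC
# multiplicative decrease, set ssthresh to the estimated
# bandwidth-delay product expressed in segments.

def westwood_ssthresh(bwe_bytes_per_s, rtt_min_s, mss):
    # bwe_bytes_per_s: bandwidth estimate derived from the ACK stream
    # rtt_min_s: minimum RTT observed (needs good RTT samples!)
    return max(2, int(bwe_bytes_per_s * rtt_min_s / mss))

# e.g. 1 MB/s estimated bandwidth, 200 ms minimum RTT, 1448-byte MSS:
print(westwood_ssthresh(1_000_000, 0.2, 1448))  # → 138 segments
```

At a long RTT like 200 ms this can leave far more of the window intact after a loss than halving would, which is exactly where Westwood+ should have looked different from CUBIC in our tests.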
And the context of the above comment is not actually applicable so much to APs as to stations…
Relevant commit seems to be -
https://github.com/dtaht/cerofiles/commit/61886bddee89adcd955c4de7dc940f071786d062 Author: Dave Taht
Date: 2012-01-21 (Sat, 21 Jan 2012)
Make sure TCP timestamps are on by default