Christian Huitema's Latest Posts


The list of past posts, including those previously published on WordPress, is available here.

The weird case of the wifi latency spikes

Posted on 18 May 2023

About two weeks ago, I was told by developers of “Media over QUIC” that there was an issue when running over Wi-Fi. After a few seconds, there would be some kind of event that triggered the congestion control implemented in Picoquic to reduce the bandwidth, resulting in pretty bad performance. It seemed to be due to issues with the Wi-Fi driver on the Mac, as I wrote in a toot on Mastodon. Now that I am less busy with other projects, I have the time to measure the issue in detail.

RTT versus time

The figure above shows the evolution of the round trip time (RTT) between two computers in my office: an iMac running macOS Ventura 13.3.1, and a Dell laptop running Windows 11. The measurements were taken with a simple program that generated a UDP packet every 20ms on the iMac, sent it over Wi-Fi to the laptop, and received an echo from the laptop. The program logged the time at which the packet was sent, the time at which the laptop sent the echo, and the time at which the echo was received. The RTT is of course measured as the difference between the time the packet was sent and the time the echo was received.
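A minimal sketch of what the sender side of such a measurement program can look like is shown below. It assumes a cooperating UDP echo server on the laptop; the address and port are placeholders, losses are detected with a simple receive timeout, and the logging of the laptop's echo timestamp is omitted for brevity.

```c
/* Illustrative sketch of the sender side of the UDP echo measurement.
 * Assumptions: an echo server on the peer at ECHO_ADDR:ECHO_PORT returns
 * each datagram unchanged; an echo that arrives after the 20 ms timeout
 * is simply counted as a loss. Not the actual measurement program. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define ECHO_ADDR "192.168.1.20"  /* placeholder: the laptop's address */
#define ECHO_PORT 4433            /* placeholder port */
#define INTERVAL_US 20000         /* one probe every 20 ms */
#define NB_PROBES 30000           /* 10 minutes of probes */

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + ts.tv_nsec / 1000u;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { 0 };
    struct timeval timeout = { 0, INTERVAL_US }; /* wait at most 20 ms for the echo */

    peer.sin_family = AF_INET;
    peer.sin_port = htons(ECHO_PORT);
    inet_pton(AF_INET, ECHO_ADDR, &peer.sin_addr);
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));

    for (uint32_t seq = 0; seq < NB_PROBES; seq++) {
        uint8_t buf[64] = { 0 };
        uint64_t t_send = now_us();

        memcpy(buf, &seq, sizeof(seq)); /* tag the probe with its sequence number */
        sendto(fd, buf, sizeof(buf), 0, (struct sockaddr*)&peer, sizeof(peer));

        if (recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL) > 0) {
            uint64_t t_recv = now_us();
            /* log: sequence, send time, receive time, RTT (all microseconds) */
            printf("%u,%llu,%llu,%llu\n", seq,
                (unsigned long long)t_send, (unsigned long long)t_recv,
                (unsigned long long)(t_recv - t_send));
        } else {
            printf("%u,%llu,,lost\n", seq, (unsigned long long)t_send);
        }
        usleep(INTERVAL_US); /* crude pacing; the real tool keeps a strict 20 ms grid
                                and matches delayed echoes by sequence number */
    }
    close(fd);
    return 0;
}
```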

The RTT versus Time graph shows that most RTT samples are rather short, a few milliseconds: the median RTT is 4.04 milliseconds, and 95% of samples are echoed in less than 8ms. Out of 30,000 packets sent in 10 minutes, 38 were lost, about 0.12%. Some packets take a bit longer, with the 99th percentile at 50.7ms, which is somewhat concerning. But the obvious issues are the 18 spikes on the graph: 18 separate events during which the RTT exceeded 100ms, including 12 events with an RTT above 200ms.

Close up view of a simple spike

The close-up graph shows a detailed view of a single spike. 14 packets were affected. The first one was lost, the second one was echoed after 250ms, and we see the RTT of the next 12 packets decreasing linearly from 250 ms to 4 ms. Looking at the raw data shows that these 13 echoes were received just microseconds apart. Everything happens as if Wi-Fi transmission had been suspended for 250 ms, with packets queued during the suspension and delivered quickly when transmission resumed.
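That shape is exactly what simple queueing arithmetic predicts: probes keep being generated every 20 ms during the suspension, and each one waits for whatever remains of the suspension before it can leave. A small illustration, using the 250 ms suspension observed above as an example value:

```c
/* Illustrative arithmetic: RTTs expected if the radio is suspended for
 * `suspension_us` while probes keep being queued every `interval_us`,
 * then all queued probes are flushed and echoed almost at once. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t interval_us = 20000;    /* one probe every 20 ms */
    const uint64_t suspension_us = 250000; /* 250 ms suspension, as observed */
    const uint64_t base_rtt_us = 4000;     /* nominal RTT, about 4 ms */

    /* Probe i is queued i*interval_us after the suspension starts, so its RTT
     * is roughly the remaining suspension time plus the nominal RTT. */
    for (uint64_t i = 0; i * interval_us < suspension_us; i++) {
        uint64_t rtt = (suspension_us - i * interval_us) + base_rtt_us;
        printf("probe %llu: RTT about %llu us\n",
            (unsigned long long)i, (unsigned long long)rtt);
    }
    /* Output decreases from about 254 ms to about 14 ms in 20 ms steps:
     * 13 probes affected, matching the shape of the observed spike. */
    return 0;
}
```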

The previous graph looked at a “simple” spike happening 23 seconds after the start of the measurements. Simple events appear as narrow spikes in the “time line” graph. Some events are more complex: they appear on the graph as a combination of adjacent lines.

Close up view of a series of spikes

The next graph shows a close-up of a series of spikes happening at short intervals. There are 14 such spikes, spread over a 3 second interval. Each spike has the same structure as the single spike described above: the network transmission appears to stop for an interval, and then packets are delivered. In one case, two spikes overlap. Individual spikes have different durations, ranging from 50 ms to 280 ms.

One-way-delays

The RTT is the sum of two one-way delays: from the Mac to the PC, and back. The previous analysis concludes that the spikes happen when transmission stops, but that could be transmission from the Mac or from the PC. The one-way delay graph shows that it actually happens in both directions. Out of the 18 spikes in the RTT timeline graph, 11 happen because transmission stopped on the Mac, 3 because it stopped on the PC, and 4 because it stopped on both. It seems that the PC and the Mac have Wi-Fi drivers with similar behavior, both creating occasional spikes, but that this happens almost twice as often on the Mac.
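The split can be computed from the three logged timestamps. The two clocks are not synchronized, so the raw one-way differences include an unknown offset, but subtracting each direction's minimum removes the offset and leaves the delay variation, which is enough to see which side stalled. A minimal sketch, assuming the log fields described earlier:

```c
/* Sketch: derive one-way delay variations from the probe log.
 * t_send and t_recv are on the Mac's clock, t_echo on the PC's clock, so
 * each raw difference contains an unknown clock offset. Subtracting the
 * per-direction minimum removes that offset and leaves the queueing delay. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t t_send;  /* microseconds, Mac clock */
    uint64_t t_echo;  /* microseconds, PC clock */
    uint64_t t_recv;  /* microseconds, Mac clock */
} probe_t;

void one_way_variation(const probe_t* probes, size_t n,
    int64_t* fwd_var, int64_t* ret_var)
{
    int64_t min_fwd = INT64_MAX, min_ret = INT64_MAX;

    for (size_t i = 0; i < n; i++) {
        int64_t fwd = (int64_t)(probes[i].t_echo - probes[i].t_send); /* contains +offset */
        int64_t ret = (int64_t)(probes[i].t_recv - probes[i].t_echo); /* contains -offset */
        fwd_var[i] = fwd;
        ret_var[i] = ret;
        if (fwd < min_fwd) min_fwd = fwd;
        if (ret < min_ret) min_ret = ret;
    }
    for (size_t i = 0; i < n; i++) {
        fwd_var[i] -= min_fwd; /* Mac -> PC delay above the baseline */
        ret_var[i] -= min_ret; /* PC -> Mac delay above the baseline */
    }
}
```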

At this stage, we don’t know exactly what causes the Wi-Fi drivers to stop transmission. There are two plausible ideas: wireless drivers sometimes stop in order to save energy; or, wireless drivers sometimes stop operating on one frequency band in order to scan the other bands and locate alternative Wi-Fi routers. Of the two, the scanning hypothesis is the most likely. It would explain the “series of spikes” pattern, with the Wi-Fi radio briefly returning to the nominal frequency band between scans of the other bands.

My next task will be to see how the QUIC stack in Picoquic can be adapted to mitigate the effects of this Wi-Fi behavior, for example by returning quickly to nominal conditions after the end of a spike. But even the best mitigation cannot change the fact that shutting down radios for a quarter of a second does nothing good for end-to-end latency. VoIP over Wi-Fi is not going to sound very good. That issue is for our colleagues at Apple and Microsoft to fix!

The new ACK startled the butterfly

Posted on 29 Apr 2023

I just implemented in Picoquic the new ACK processing algorithm proposed for QUIC multipath (https://github.com/quicwg/multipath/pull/217), which processes ACKs independently of the path over which they arrive. It looked good, but there was an interesting regression: the tests that simulated transmission over satellite links were failing. The previous version showed a file transfer concluding in less than 7 seconds, but with the new version it took about 10 seconds. That was strange, since the only change was the computation of the round trip time, and the logs showed that both versions computed the same value. To solve that, I had to take a look at traces captured in log files. The traces of the new execution looked reasonable, as shown in this graph:

qvis rendering of the new execution log trace

We see the transmission accelerating steadily, just as expected from an implementation of the slow-start algorithm. The curve is very smooth. The congestion window doubles once per RTT, until it becomes large enough to saturate the simulated link, after about 7.5 seconds. And then there is a tail of transmission, including retransmissions of packets sent at the end of the slow start period, for a total duration of almost 10 seconds. But the previous version was actually completing the transfer much faster, as seen in this other graph:

qvis rendering of the old execution log trace

Spot the difference? The old curve was not as smooth. We see a series of segments at progressively higher speed, often starting with a vertical line that indicates many packets sent in quick succession. These packet trains are received by the other end at close to line speed, and the arrival of the ACKs reflects that speed. Picoquic uses that information to compute an estimate of the path capacity. This allows the congestion window to grow much faster than with the regular slow start algorithm, letting the whole transmission complete in less than 7 seconds. But why did this not happen in the new variant?
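The capacity estimate is, in essence, the amount of data covered by the train divided by the spread of the ACK arrival times. A rough sketch of that computation, illustrative only and much simpler than the actual Picoquic estimator:

```c
/* Rough sketch of a packet-train bandwidth estimate: the burst leaves the
 * sender at close to line rate, so the spacing between the ACKs of the
 * first and last packets of the train reflects the bottleneck capacity.
 * Illustrative only, not Picoquic's actual estimator. */
#include <stdint.h>

uint64_t train_bandwidth_bps(
    uint64_t bytes_acked,        /* bytes covered between first and last ACK of the train */
    uint64_t first_ack_time_us,  /* arrival time of the ACK of the first packet */
    uint64_t last_ack_time_us)   /* arrival time of the ACK of the last packet */
{
    uint64_t delta_us = last_ack_time_us - first_ack_time_us;

    if (delta_us == 0) {
        return 0; /* not enough spread to estimate anything */
    }
    /* bytes over microseconds, converted to bits per second */
    return (bytes_acked * 8 * 1000000) / delta_us;
}
```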

If you compare the very beginning of the two curves, you will notice the small vertical lines at the beginning of each new round-trip period in the old curve. They are missing in the new curve. It is hard to pinpoint the exact cause, but some detail changed, and then the whole behavior changed. That reminded me of the old story about a butterfly flapping its wings on a Pacific island, and the next thing you know there is a typhoon approaching Hawaii. There was no butterfly here, just probably a tiny change in the timing and sequence of packets, but then the connection fell into a pattern where pacing enforces a form of ACK clocking, and the code never had a chance to properly estimate the bandwidth of the path.

I fixed it by forcing a pacing pause if the bandwidth estimation fails during slow start. The transmission only restarts after enough pacing credits have been accumulated to send a long enough train. With that, the tests do complete in less than 7 seconds. But I am glad that the tests exposed the issue, which was indeed a bug. The butterfly flapping its wings and causing a typhoon is a metaphor for chaotic systems, in which tiny changes can have unforeseen consequences. The code behavior exposed here was chaotic, and that’s not good. Code should be predictable, and behavior should never be left to chance!
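The shape of that fix, in a simplified sketch with invented names (the actual change lives in Picoquic's pacing and congestion control code): when no bandwidth estimate is available during slow start, hold transmission until the pacing bucket has enough credit for a full train, so that the next burst is long enough to measure.

```c
/* Simplified sketch of "pause pacing until a full train can be sent".
 * Names and structure are invented for illustration. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t credit_bytes;   /* pacing credit accumulated so far */
    uint64_t rate_bps;       /* pacing rate, bits per second */
    uint64_t last_update_us; /* time of the last credit refill */
} pacer_t;

bool can_send_train(pacer_t* p, uint64_t now_us, uint64_t train_bytes,
    bool bandwidth_estimate_available)
{
    /* Convert the elapsed microseconds at rate_bps into a byte credit. */
    p->credit_bytes += ((now_us - p->last_update_us) * p->rate_bps) / 8000000;
    p->last_update_us = now_us;

    if (bandwidth_estimate_available) {
        /* Normal case: send as soon as there is any credit. */
        return p->credit_bytes > 0;
    }
    /* Estimation failed so far: wait until a whole train can go out
     * back to back, so the ACK spacing can be measured. */
    return p->credit_bytes >= train_bytes;
}
```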

QUIC to Mars

Posted on 07 Feb 2023

A friend, Marc Blanchet, asked me last December whether it would be possible to use QUIC in space. Sure, the delays would be longer, but in theory it should be possible to scale the various time-related constants in the protocol, and then everything else should work. I waited to have some free time, and then I took up the challenge, running a couple of simulations to see how Picoquic would behave on space links, such as between the Earth and Mars. I had already tested Picoquic on links with a 10 second round trip time (RTT), so there was hope.

First, I tried a simulation with a one minute one-way delay. A bit short of Mars, but a good first step. Of course, the first trial did not work, because Picoquic was programmed with a “handshake completion timer” of 30 seconds, and the Picoquic server was enforcing a maximum idle timer of 2 minutes. There was already an API to set the idle timer, so I used it to set a value of at least 3 times the maximum supported RTT. Then, I updated the code to keep the handshake going for the larger of the 30 second default timer and the idle timer value. And, success, the handshake did work in the simulation. However, it was very noisy.

At the beginning of the connection, client and server do not know the RTT. The QUIC spec says to repeat the Initial packet if a response does not arrive within a timer, starting with a short initial timer value (Picoquic uses 250ms), and doubling that value after every repeat. That exponential backoff is a good way to explore the delay, but Picoquic capped the timer at 1 second, so that on average there are enough trials to succeed in the face of 30% packet loss; in our case, that cap meant repeating the Initial packet more than 120 times. The fix was to make that cap a fraction of the idle timer value, limiting the repetitions to about a dozen in our test. Still a lot, but acceptable.
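The effect of the cap is easy to check with a little arithmetic: count how many Initial packets are sent before the first response can possibly arrive. The sketch below uses a two minute RTT and an illustrative fraction of the idle timer for the new cap; the fraction actually used in Picoquic may differ.

```c
/* Count Initial repetitions sent before the first response can arrive:
 * the timer starts at `initial_us`, doubles on every repeat, and is
 * capped at `cap_us`. Purely illustrative arithmetic. */
#include <stdint.h>
#include <stdio.h>

uint64_t initial_repeats(uint64_t rtt_us, uint64_t initial_us, uint64_t cap_us)
{
    uint64_t elapsed = 0, timer = initial_us, count = 0;

    while (elapsed < rtt_us) {
        elapsed += timer;
        count++;
        timer *= 2;
        if (timer > cap_us) {
            timer = cap_us;
        }
    }
    return count;
}

int main(void)
{
    uint64_t rtt_us = 120 * 1000000ull;          /* two minute round trip */
    uint64_t idle_timeout_us = 360 * 1000000ull; /* 3 x RTT, as set above */

    /* Old cap of 1 second: about 122 repeats before the first response. */
    printf("cap at 1s: %llu repeats\n",
        (unsigned long long)initial_repeats(rtt_us, 250000, 1000000));
    /* Illustrative new cap at idle/8: about 10 repeats. */
    printf("cap at idle/8: %llu repeats\n",
        (unsigned long long)initial_repeats(rtt_us, 250000, idle_timeout_us / 8));
    return 0;
}
```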

After the handshake things get better, because both ends have by that point measured the RTT at least once. Most timer values used in the data transmission phase are proportional to this RTT, and they naturally adapt. The usual concern with long delay links is the duration of the slow start phase, during which the sender gradually increases the sending rate until the path bandwidth is assessed. The sending rate starts at a low value and is doubled every RTT, and for a 10 Mbps link that might require 5 or 6 RTTs. In our case, that would be 12 minutes before reaching full efficiency, which would not be good. But Picoquic already knew how to cope with that, because it had already been tested on satellite links.
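The cost of classic slow start on such a path follows from a rough calculation: the rate doubles once per RTT, so the number of RTTs needed is about the base-2 logarithm of the ratio between the target rate and the starting rate. A sketch of that arithmetic, with illustrative values:

```c
/* Rough estimate of how many RTTs classic slow start needs to go from a
 * starting rate to a target rate, doubling once per RTT. The values in
 * main() are illustrative, not taken from the actual simulation. */
#include <stdint.h>
#include <stdio.h>

unsigned int slow_start_rtts(uint64_t start_bps, uint64_t target_bps)
{
    unsigned int rtts = 0;
    uint64_t rate = start_bps;

    while (rate < target_bps) {
        rate *= 2;
        rtts++;
    }
    return rtts;
}

int main(void)
{
    /* e.g. starting near 300 kbps and aiming for a 10 Mbps link */
    unsigned int n = slow_start_rtts(300000, 10000000);
    printf("%u RTTs; at a 2 minute RTT that is about %u minutes\n", n, 2 * n);
    return 0;
}
```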

Picoquic uses “chirping” to rapidly discover the path capacity. During the first RTT, Picoquic sends a small train of packets, measures the time between the first and last acknowledgement for that train, and gets a gross estimate of the link data rate. It then uses that estimate to accelerate the start-up algorithm (Picoquic uses Hystart), by propping up the sending rate. That works quite well for our long distance links, and we reach reasonable usage in 3 RTTs instead of 5. It could work even better if Picoquic used the full estimate provided by chirping, or maybe an estimate derived from a previous connection, but estimates can be wrong, and we limit potential issues by only using half their value.
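In outline, the chirp estimate feeds the congestion window as a bandwidth-delay product computed from half of the estimated rate. This is a hedged sketch of that idea, not the actual Hystart integration in Picoquic:

```c
/* Sketch: turn a chirp-based data rate estimate into a congestion window
 * seed. Only half of the estimate is used, to limit the damage if the
 * estimate is wrong. Illustrative only. */
#include <stdint.h>

uint64_t seed_cwnd_from_chirp(
    uint64_t chirp_rate_bps, /* rate estimated from the ACK spacing of the chirp */
    uint64_t rtt_us,         /* measured round trip time */
    uint64_t current_cwnd)   /* congestion window before the adjustment, in bytes */
{
    /* Bandwidth-delay product at half the estimated rate, in bytes. */
    uint64_t target = ((chirp_rate_bps / 2) * rtt_us) / 8000000;

    /* Never shrink the window because of a chirp estimate. */
    return (target > current_cwnd) ? target : current_cwnd;
}
```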

Chirping takes care of congestion control, at least during startup, but we also have to consider flow control. If the client asks to “get this 100MB file” but the flow control allows only 1MB, the transmission on a very long delay link is going to take a very long time. But if the client says something like “get this 100MB file and, by the way, here are an extra 100MB of flow control credits”, the transmission will happen much faster. This is what we do in the tests, but it will have to be somehow automated in practical deployments.

Once we have solved congestion control and flow control, we need to worry about timers. In QUIC, most timers are proportional to the RTT, but a few are not. The idle timer is set before the RTT is measured, as discussed above. The BBR algorithm specifies a “probe RTT” interval of 10 seconds, which would not be good, but Picoquic was already programmed to use the larger of that and 3 RTTs. The main issue in the simulation was the “retire connection ID (CID)” interval.

Picoquic is programmed to switch to a new CID when resuming transmission after a long silence. This is a privacy feature, because long silences often trigger a NAT rebinding. Changing the CID makes it harder for on-path observers to correlate the newly observed packets to the previous connection. However, the “long silence” was defined as 5 seconds, which is way too short in our case. We had to change that and define it as the larger of 5 seconds and 3 times the RTT.
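The pattern is the same for all of these fixed timers: keep the original constant as a floor, but never let the timer be shorter than a few RTTs. A minimal sketch of that rule:

```c
/* Minimal sketch of the "scale fixed timers by the RTT" rule used above:
 * the original constant remains a floor, but the timer is never shorter
 * than a small multiple of the measured RTT. */
#include <stdint.h>

uint64_t scaled_timer_us(uint64_t fixed_us, uint64_t rtt_us, uint64_t rtt_multiple)
{
    uint64_t scaled = rtt_multiple * rtt_us;
    return (scaled > fixed_us) ? scaled : fixed_us;
}

/* Examples matching the text:
 *   probe RTT interval: scaled_timer_us(10 * 1000000, rtt_us, 3)
 *   retire CID delay:   scaled_timer_us( 5 * 1000000, rtt_us, 3)
 */
```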

With these changes, our “60 second delay” experiment was successful. That was a happy result, but Marc pointed out that 60 seconds is not that long. It takes more than 3 minutes to send a signal from Earth to Mars when Mars is at its closest, and 22 minutes when it is at its farthest. Sending signals to Jupiter takes 32 minutes to almost an hour, and to Saturn more than an hour. What if we repeated the experiment by simulating a 20 minute delay? Would things explode?

In theory, the code was ready for this 20 minute trial, but in practice it did in fact explode. Picoquic measures time in microseconds. 20 minutes is 1,200,000,000 microseconds. Multiply by 4 and you get a number that does not fit in 32 bits! The tests quickly surfaced these issues, and they had to be fixed. But after those fixes the transmissions worked as expected.
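The failure mode is the classic one: an intermediate computation done in 32-bit arithmetic wraps around once the delay reaches tens of minutes. A small illustration, not the actual Picoquic code:

```c
/* Illustration of the overflow: 20 minutes is 1,200,000,000 microseconds;
 * multiplying by 4 exceeds 2^32, so a 32-bit intermediate wraps around. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t delay_us_32 = 1200000000u;   /* 20 minutes in microseconds */
    uint64_t delay_us_64 = 1200000000ull;

    uint32_t wrong = delay_us_32 * 4u;    /* wraps modulo 2^32 */
    uint64_t right = delay_us_64 * 4ull;  /* fits comfortably in 64 bits */

    printf("32-bit: %u us (wrapped)\n", wrong);
    printf("64-bit: %llu us\n", (unsigned long long)right);
    return 0;
}
```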

I don’t know whether Picoquic will in fact be used in spaceships, but I found the exercise quite interesting. It reinforces my conviction that “if it is not tested, it does not work”. A bunch of little issues were found, which overall make the code more robust. And, well, one can always dream that QUIC will one day be used for transmissions between Earth and Mars.

Managing QUIC ACKs in Picoquic

Posted on 13 Dec 2022

I have been working on the Picoquic implementation of QUIC since 2017. Picoquic distinguished itself by performing very well on GEO satellite links. The main reason is that 40 years ago, I was studying protocols for the transport of data over satellite links for my PhD. So, of course, I wanted to support that scenario well in my implementation of QUIC. Which explains why this morning someone was asking me about the ACK rate tuning work in Picoquic and why it gets good performance results over GEO. It turns out that I never wrote that down, so here it is.

Sending fewer ACKs reduces transmission overhead and message processing load, which is a good thing. Historically, ACKs were also used for ACK clocking: if ACKs are sent very often, each one acknowledges a few packets, and thus opens the congestion window just enough to allow a few more packets to be sent. If ACKs are too sparse, each one provides many credits, causing implementations to send packets in large bursts, maybe causing congestion on the path. But most implementations today implement some form of pacing, so ACK clocking is no longer necessary to prevent such packet bursts. Of course, while sending fewer ACKs reduces overhead, it also impacts RTT measurements and packet loss detection, so there is a limit to how few ACKs a transport implementation should send.

This was discussed in the QUIC Working Group. The discussions resulted in the publication of the QUIC Acknowledgement Frequency draft. The draft defines a QUIC control frame by which the sender of packets can tell receivers how many packets or how much time they should wait before sending an ACK. However, the draft only provides generic guidance on how these parameters should be set. Picoquic implements the draft, and sets the packet threshold and ACK delay as follows:

The coefficients above were set in an empirical manner, based on simulations of a variety of network configurations. Each of these simulations is actually a test case in the Picoquic suite of tests, which would detect if a code change caused a performance regression in one of the configurations. These simulations include several GEO configurations, including for example the simulation of a high bandwidth data path and a low bandwidth return path. In that asymmetric configuration, having too many ACKs would cause congestion on the return path, but the chosen tunings avoid that.
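As a purely illustrative sketch of the shape such a rule can take (the coefficients below are placeholders, not the values used by Picoquic), the packet threshold can be derived from the number of packets in flight per RTT, and the ACK delay capped at a fraction of the RTT:

```c
/* Purely illustrative sketch of deriving ACK frequency parameters;
 * the coefficients are placeholders, not the values used by Picoquic. */
#include <stdint.h>

typedef struct {
    uint64_t packet_threshold;  /* tell the peer to ACK after this many packets */
    uint64_t max_ack_delay_us;  /* or after this much time */
} ack_frequency_t;

ack_frequency_t compute_ack_frequency(uint64_t cwnd_bytes, uint64_t mtu_bytes,
    uint64_t rtt_us)
{
    ack_frequency_t f;
    uint64_t packets_per_rtt = cwnd_bytes / mtu_bytes;

    /* Aim for a handful of ACKs per RTT (placeholder divisor: 8), but
     * always at least one ACK every 2 packets. */
    f.packet_threshold = packets_per_rtt / 8;
    if (f.packet_threshold < 2) {
        f.packet_threshold = 2;
    }
    /* Cap the delay at a fraction of the RTT (placeholder: RTT/4), with a
     * floor of 1 ms so the timer remains meaningful on short paths. */
    f.max_ack_delay_us = rtt_us / 4;
    if (f.max_ack_delay_us < 1000) {
        f.max_ack_delay_us = 1000;
    }
    return f;
}
```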

In that asymmetric configuration, limiting the number of ACKs is not enough. QUIC ACK frames can grow very large if they are allowed to carry a large number of “ACK ranges”. If the ACKs were too large, that too could saturate a narrow return path. Picoquic limits the number of ACK ranges to 32, and further limits the size of ACKs by not including ranges that are too old, were already acknowledged, or were already announced in 4 previous ACKs. And with all that, yes, we end up with good ACK behavior on GEO satellite links. And on other links too.
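A simplified sketch of that pruning logic, with invented data structures (the actual Picoquic bookkeeping is more involved):

```c
/* Simplified sketch of pruning ACK ranges before building an ACK frame:
 * keep at most MAX_ACK_RANGES ranges, and skip ranges that the peer
 * already acknowledged or that were already announced several times. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_ACK_RANGES 32
#define MAX_ACK_REPEAT 4

typedef struct {
    uint64_t first;        /* lowest packet number in the range */
    uint64_t last;         /* highest packet number in the range */
    unsigned int nb_sent;  /* how many previous ACKs announced this range */
    bool acked_by_peer;    /* the peer has acknowledged our ACK of this range */
} ack_range_t;

/* Copy into `out` the ranges still worth sending; return their count. */
size_t select_ack_ranges(const ack_range_t* ranges, size_t nb_ranges,
    ack_range_t* out)
{
    size_t nb_out = 0;

    for (size_t i = 0; i < nb_ranges && nb_out < MAX_ACK_RANGES; i++) {
        if (ranges[i].acked_by_peer || ranges[i].nb_sent >= MAX_ACK_REPEAT) {
            continue; /* old news: the peer already knows about this range */
        }
        out[nb_out++] = ranges[i];
    }
    return nb_out;
}
```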

Migrating this blog to Private Octopus

Posted on 29 Oct 2022

My blog was first published on WordPress, but I have been getting repeated feedback that not having advertisements would be better, and also that a blog about networking really should be accessible over IPv6. So, I am taking the plunge and migrating the blog to the server of my personal company, Private Octopus.

The new blog is published as a static web site, developed using Jekyll. The upside of Jekyll is that publishing a static web site is much simpler to manage than alternatives that require database management. The downside is that I have to find a way to accept comments. And I would rather do that without adding the bunch of trackers that come with the ready-made solutions of the age of surveillance capitalism.

Net result, the comment section is a bit experimental. I am integrating this with Mastodon, because I like the concept of decentralized social networks. The integration is inspired by the work of Yidhra Farm, which I hope I ported correctly.