26 Jan 2021
I just finished implementing multipath support for QUIC per draft-liu-multipath-quic-02 in picoquic. That took me quite a bit more time than I initially thought. As I was modifying picoquic’s code to handle multipath, I realized I needed to add many tests in the test suite, because there are lots of corner cases.
The initial test was simple: start a QUIC connection on a single path, add a second path to that connection, and verify that data is sent faster than if only one path was available. That test exercised the basic functions, such as scheduling packets according to the capacity of each path, managing end-to-end acknowledgements, and managing end-to-end retransmissions. And yes, the test demonstrates that using two paths in parallel allows for faster transfers than using just one. It also shows that multipath scheduling relies on good per-path congestion control, since good scheduling requires accurate and timely estimates of each path’s capacity.
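As a toy model of that capacity-driven scheduling, one can pick, for each packet, the path with the most unused congestion window. This is a hypothetical sketch, not picoquic’s actual scheduler; the class and function names are illustrative.

```python
# Hypothetical sketch of per-path scheduling: pick the path whose
# congestion window has the most unused room. Illustrative only,
# not picoquic's actual code.

class Path:
    def __init__(self, name, cwnd):
        self.name = name
        self.cwnd = cwnd          # congestion window, in bytes
        self.bytes_in_flight = 0  # unacknowledged bytes on this path

def select_path(paths):
    """Return the path with the most available congestion window, or None."""
    room, best = max(((p.cwnd - p.bytes_in_flight, p) for p in paths),
                     key=lambda t: t[0])
    return best if room > 0 else None

# Toy example: schedule 1200-byte packets across two paths until
# both congestion windows are full.
paths = [Path("terrestrial", cwnd=12000), Path("satellite", cwnd=36000)]
sent = {"terrestrial": 0, "satellite": 0}
while (p := select_path(paths)) is not None:
    p.bytes_in_flight += 1200
    sent[p.name] += 1
print(sent)  # → {'terrestrial': 10, 'satellite': 30}
```

The wider path ends up carrying proportionally more packets, which is the behavior the first test checks for.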
I then added a slightly more complex test, in which the first path was a low-bandwidth, low-delay “terrestrial” path, and the second path was a high-bandwidth, high-delay “satellite” path. As in the previous case, the expectation is that the transmission will use the capacity of both paths, but there is a twist. Classic congestion control algorithms require multiple round trips before the congestion window matches the capacity of the path. For satellite links, that is a challenge, because the round-trip delay is long. On the other hand, draft-liu-multipath-quic-02 allows acknowledgements of packets sent on one path to be carried over any other available path. Performance will be much better if the acknowledgements are sent on the short-delay path rather than on the satellite path. This requires effectively measuring delays on each path in each direction, and using these measurements to schedule acknowledgement packets. This is a bit too complex to explain in detail here, but I will do a write-up soon.
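The acknowledgement-path choice itself can be sketched very simply: among the usable paths, send the ACK on the one with the shortest estimated one-way delay toward the peer. This is a minimal illustration, assuming one-way delay estimates are already available; the numbers and names are made up.

```python
# Hypothetical sketch: send acknowledgements on the path with the
# shortest one-way delay toward the peer. draft-liu-multipath-quic-02
# allows ACKs for packets received on any path to travel on any other
# path, so the scheduler is free to make this choice.

def select_ack_path(paths):
    """paths: list of (name, estimated_one_way_delay_ms) tuples."""
    return min(paths, key=lambda p: p[1])[0]

# Illustrative delay estimates for the two-path test scenario.
paths = [("terrestrial", 25.0), ("satellite", 300.0)]
print(select_ack_path(paths))  # → terrestrial
```

Sending ACKs over the 25 ms path instead of the 300 ms one shortens the feedback loop, which is exactly what lets the satellite path’s congestion window grow faster.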
The next test was a bit harder: start a QUIC connection on a single path, add a second path to that connection, and after a short time drop either the first path or the second path, without explicitly signaling to the sender and receiver that this path has disappeared. This tests a scenario in which using two paths could well be worse than using just one. If one of the paths is not working, packets sent on that path will be lost, and it will take time to correct these losses. If packets had been sent only on the good path, they would all have been received sooner – but of course the sender does not know in advance which of the two paths will end up not working. The success condition for the test was a bit more relaxed than “as fast as if everything had been sent on the good path”. Instead, I just wanted to ensure that the overall time for sending a large file was “not much worse than if everything had been sent on the good path”, but even that relaxed bar required a fair bit of tuning in the scheduling algorithm.
In theory, the congestion control algorithm running on each path will react to packet losses. The congestion window of the poorly working path will shrink, and the simple scheduling algorithm will send most packets on the other link. Eventually, repeated losses will cause the poorly working path to be dropped, and all packets will be sent on the other path. But these processes take time. I was intrigued by a comment made by Christoph Paasch during his presentation of Multipath transports at Apple (slides can be found here) during the QUIC interim meeting in October 2020 (minutes are here): they would “Switch between paths more than once per-RTT sometimes, [because] characteristics are changing on a very short time frame.” I implemented something similar, tweaking the scheduling to avoid sending packets on a path on which the last packet was lost. When a link drops, that rule limits the damage to the overall delivery time to at most one retransmission timeout.
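The tweak described above can be reduced to a one-line eligibility filter. This is a toy model under my own naming, not picoquic’s actual logic; the fallback when every path has just lost a packet is an assumption to keep the connection from stalling.

```python
# Toy sketch of the "skip a path right after a loss" scheduling tweak:
# a path whose most recent packet was declared lost is deprioritized,
# so traffic shifts to the other path in well under one RTT.
# Illustrative names; not picoquic's actual scheduler.

class PathState:
    def __init__(self, name):
        self.name = name
        self.last_packet_lost = False  # updated by loss detection

def eligible_paths(paths):
    good = [p for p in paths if not p.last_packet_lost]
    # Assumed fallback: if every path just suffered a loss, keep them
    # all eligible rather than stalling the connection entirely.
    return good if good else paths

a, b = PathState("terrestrial"), PathState("satellite")
b.last_packet_lost = True  # last packet on "satellite" was declared lost
print([p.name for p in eligible_paths([a, b])])  # → ['terrestrial']
```

Combined with normal loss detection, this caps the cost of a dead link at roughly one retransmission timeout, as noted above.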
Of course, if we just stopped sending on a path after the first loss event, there would be no chance to bring the path back on. That requires more tweaks in the implementation, to force some probing of the “failing” paths until they either recover or are marked abandoned. QUIC makes that easy, because we can send “dummy” packets that just contain a PING frame and some padding, without affecting the flow of application data. I have not fully tested that yet: I need to add to the test suite a test that starts a connection, adds a new path, and then simulates the temporary unavailability of one of the paths. That is on the “to do” list.
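A sketch of such a probe and its scheduling, assuming a simple fixed probe interval and give-up threshold (both invented here for illustration). The frame type bytes are the real QUIC values from RFC 9000 (PING is 0x01, PADDING is 0x00), but everything else is hypothetical.

```python
# Sketch of probing a "failing" path with a dummy packet carrying only
# a PING frame plus PADDING, so application data is unaffected.
# Frame types are the real QUIC values (RFC 9000); the probe-scheduling
# policy (interval, give-up threshold) is an illustrative assumption.

PING = b"\x01"     # QUIC PING frame type
PADDING = b"\x00"  # QUIC PADDING frame type

def build_probe(payload_size=32):
    """A dummy payload: one PING frame, padded to the desired size."""
    return PING + PADDING * (payload_size - 1)

def should_probe(now, last_probe_time, failed_probes,
                 probe_interval=1.0, max_failed_probes=5):
    """Probe periodically until the path either recovers or is abandoned."""
    if failed_probes >= max_failed_probes:
        return False  # give up: the path should be marked abandoned
    return now - last_probe_time >= probe_interval

probe = build_probe()
print(len(probe), probe[:2])  # → 32 b'\x01\x00'
```

If a probe is acknowledged, the path is working again and normal scheduling can resume; if too many probes fail, the path is abandoned for good.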
My initial set of tests was not just verifying basic multipath behavior and reaction to path losses. I also added tests to verify the special form of packet encryption specified in draft-liu-multipath-quic-02, verify that there was no regression compared to “monopath” QUIC, and verify special cases such as support of 0-RTT with and without packet losses, support for packet number hole insertion as part of the defense against optimistic acks, support for key rotation when using either one or two paths, support for renewal of the connection identifiers used for the paths, and support for NAT traversal. These tests were needed to verify the design choices of using connection identifiers to identify paths, and of using separate packet sequence numbers for each path. I have to write up more details about that, specifically the pros and cons of having one sequence number per path versus the single sequence number design in draft-huitema-quic-mpath-option-00.
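To make the per-path sequence number design concrete, here is a minimal sketch of separate packet number spaces, one per path, each counting independently. The class and names are mine, purely for illustration of the draft-liu-multipath-quic-02 approach.

```python
# Illustrative sketch of per-path packet number spaces, as in
# draft-liu-multipath-quic-02: each path keeps its own monotonically
# increasing sequence, which keeps per-path loss detection and RTT
# sampling simple. Hypothetical names, not picoquic's API.

class PerPathNumbering:
    def __init__(self, path_ids):
        self.next_pn = {pid: 0 for pid in path_ids}

    def next_packet_number(self, path_id):
        pn = self.next_pn[path_id]
        self.next_pn[path_id] += 1
        return pn

spaces = PerPathNumbering(["path0", "path1"])
seq = [spaces.next_packet_number("path0"),
       spaces.next_packet_number("path1"),
       spaces.next_packet_number("path0")]
print(seq)  # → [0, 0, 1]
```

With a single shared number space, the interleaving above would instead produce 0, 1, 2 across both paths, and a receiver could no longer detect per-path gaps directly; that trade-off is what the encryption and hole-insertion tests had to account for.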
The implementation exercise uncovered other issues. There is for example an interesting interaction between multipath scheduling algorithms and the congestion control algorithms used on each path, their sensitivity to “buffer bloat”, and whether or not they rely on packet losses. The NAT traversal tests demonstrated support for the basic NAT rebinding scenarios, but the interaction between NAT and multipath requires some more work. Finally, during the whole exercise, I struggled with the absence of multipath support in the “QLOG” format and the associated tools. Each of these topics merits its own development. Hopefully, I will have time to write all that up, and inform the standardization of multipath support in QUIC.