Normally all requests sent via the meta connections are checked so that they
cannot be larger than the input buffer. However, when packets are forwarded via
meta connections, they are copied into a packet buffer without checking whether
they fit into it. Since the packet buffer is allocated on the stack, this in
effect allows an authenticated remote node to cause a stack overflow.
This issue was found by Martin Schobert.
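A minimal sketch of the kind of length check that prevents this; the names
(MAXSIZE, vpn_packet_t, forward_meta_packet) are hypothetical and only stand in
for tinc's actual code:

    /* Sketch only: the names below are illustrative, not tinc's. */
    #include <string.h>

    #define MAXSIZE 1518                 /* assumed maximum packet size */

    typedef struct {
        unsigned short len;
        unsigned char data[MAXSIZE];
    } vpn_packet_t;

    /* Copy a packet received over a meta connection into a stack buffer,
       rejecting anything that would overflow it. */
    static int forward_meta_packet(const unsigned char *buf, size_t len) {
        vpn_packet_t packet;             /* allocated on the stack */

        if (len > sizeof packet.data)
            return -1;                   /* drop oversized packets instead of overflowing */

        memcpy(packet.data, buf, len);
        packet.len = (unsigned short)len;
        /* ... hand the packet to the routing code ... */
        return 0;
    }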
We were checking the socket only for readability, which is not a problem for normal
connections, since the server side of a connection will always send an ID
request. But when using a proxy, the proxy server doesn't send anything before
the client, so tinc would not see that its connection to the proxy had already
been established.
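The general approach, sketched below with illustrative names, is to wait for
the socket to become writable, which signals that the (non-blocking) connect()
to the proxy has completed, instead of waiting for the peer to send data first:

    /* Sketch only: waiting for connect() completion via writability. */
    #include <poll.h>

    /* Returns 1 once the non-blocking connect() on fd has completed,
       0 on timeout, -1 on error. */
    static int wait_for_connect(int fd, int timeout_ms) {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int r = poll(&pfd, 1, timeout_ms);

        if (r <= 0)
            return r;                    /* timeout or poll() error */

        /* Writable means the TCP connection to the proxy is established,
           even though the proxy has not sent any data yet.  A real
           implementation would also check SO_ERROR here. */
        return (pfd.revents & POLLOUT) ? 1 : -1;
    }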
Tinc never restarts PMTU discovery unless a node becomes unreachable. However,
it can happen that the PMTU was very low during the initial discovery, but has
increased later. To detect this, tinc now tries to send an extra packet every
PingInterval, with a size slightly larger than the currently known PMTU. If
this packet is successfully received back, we partially restart PMTU discovery
to find out the new maximum.
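A rough sketch of that mechanism; the struct members and helpers below are
assumptions made for illustration, not tinc's actual fields:

    #include <stdio.h>

    /* Hypothetical per-node state. */
    typedef struct {
        int mtu;                         /* currently known PMTU */
        int mtuprobes;                   /* probe counter of the discovery state machine */
    } node_t;

    /* Stub standing in for the code that actually sends a UDP probe packet. */
    static void send_udp_probe(node_t *n, int size) {
        printf("sending %d byte probe (known PMTU %d)\n", size, n->mtu);
    }

    /* Called once per PingInterval: probe slightly above the known PMTU. */
    static void try_larger_mtu(node_t *n) {
        send_udp_probe(n, n->mtu + 8);
    }

    /* Called when a probe of the given size is successfully received back. */
    static void mtu_probe_reply(node_t *n, int size) {
        if (size > n->mtu) {
            n->mtu = size;               /* the path now allows larger packets... */
            n->mtuprobes = 0;            /* ...so partially restart discovery */
        }
    }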
Commit dd07c9fc1f broke the reception of datagram
SPTPS packets by undoing the conversion of the sequence number to host byte
order before comparison. This caused error messages like "Packet is 16777215
seqs in the future, dropped (1)".
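The fix boils down to converting the wire-format sequence number to host byte
order before comparing it; a minimal sketch, assuming the first four bytes of
the datagram carry the sequence number in network byte order (the real SPTPS
code also keeps a replay window):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>               /* ntohl() */

    static bool seqno_acceptable(const uint8_t *data, uint32_t next_seqno) {
        uint32_t seqno;

        memcpy(&seqno, data, sizeof seqno);
        seqno = ntohl(seqno);            /* skipping this step produces bogus
                                            "16777215 seqs in the future" errors
                                            on little-endian hosts */
        return seqno >= next_seqno;
    }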
Tinc uses Kruskal's algorithm to calculate an MST. However, this was broken in
commit 6e80da3370. Revert to the working
algorithm from tinc 1.0.
Thanks to Cheng LI for spotting the problem.
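For reference, a compact, generic Kruskal sketch using a union-find structure;
it is purely illustrative and does not use tinc's graph data structures:

    #include <stdlib.h>

    typedef struct { int a, b, weight; } edge_t;

    static int find_root(int *parent, int v) {
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];   /* path halving */
            v = parent[v];
        }
        return v;
    }

    static int by_weight(const void *x, const void *y) {
        return ((const edge_t *)x)->weight - ((const edge_t *)y)->weight;
    }

    /* Marks in in_mst[] which edges belong to a minimum spanning tree.
       Nodes are numbered 0 .. num_nodes - 1. */
    static void kruskal(edge_t *edges, int num_edges, int num_nodes, int *in_mst) {
        int *parent = malloc(num_nodes * sizeof *parent);

        for (int i = 0; i < num_nodes; i++)
            parent[i] = i;

        qsort(edges, num_edges, sizeof *edges, by_weight);

        for (int i = 0; i < num_edges; i++) {
            int ra = find_root(parent, edges[i].a);
            int rb = find_root(parent, edges[i].b);

            in_mst[i] = (ra != rb);          /* keep the edge only if it joins two trees */
            if (ra != rb)
                parent[ra] = rb;
        }

        free(parent);
    }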
Without adding any extra traffic, we can measure round trip times, estimate the
bandwidth and packet loss between nodes. The RTT and bandwidth can be measured
by timing the MTU probe packets. The RTT is the difference between the time a
burst of MTU probes was sent and when the first reply is received. The
bandwidth can be estimated by dividing the size of the probe packets by the
time between successive received probe replies of the same burst. The packet
loss can be estimated for incoming traffic by comparing how many packets have
actually been received to the increase in the sequence numbers.
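A rough sketch of how the RTT and bandwidth estimates can be derived from the
probe timings; all field names and time units below are assumptions made for
illustration:

    #include <stdint.h>

    typedef struct {
        uint64_t burst_sent_usec;        /* when the current burst of MTU probes was sent */
        uint64_t last_reply_usec;        /* arrival time of the previous reply in this burst */
        int      replies_in_burst;       /* replies seen so far for this burst */
        double   rtt_usec;               /* estimated round trip time */
        double   bandwidth_Bps;          /* estimated bandwidth in bytes per second */
    } probe_stats_t;

    /* Update the estimates when a probe reply of `size` bytes arrives at `now_usec`. */
    static void probe_reply_received(probe_stats_t *s, int size, uint64_t now_usec) {
        if (s->replies_in_burst == 0) {
            /* RTT: time between sending the burst and receiving its first reply. */
            s->rtt_usec = (double)(now_usec - s->burst_sent_usec);
        } else {
            /* Bandwidth: probe size divided by the gap between successive replies. */
            uint64_t gap = now_usec - s->last_reply_usec;
            if (gap > 0)
                s->bandwidth_Bps = (double)size * 1e6 / (double)gap;
        }
        s->last_reply_usec = now_usec;
        s->replies_in_burst++;
    }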
The estimates are not perfect. Bandwidth in particular is difficult to measure;
the only accurate way is to continuously send as much data as possible, but
that is obviously not desirable. The packet loss rate is also almost always
a few percent when sending a lot of data over the VPN via TCP, since TCP
*needs* packet loss to work properly.
Keep track of the number of correct, non-replayed UDP packets that have been
received, regardless of their content. This can be compared to the sequence
number to determine the real packet loss.
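A minimal sketch of that bookkeeping, with hypothetical field names: count
every packet that passes the replay check and compare the count against how far
the sequence number has advanced:

    #include <stdint.h>

    typedef struct {
        uint32_t received;               /* correct, non-replayed packets actually received */
        uint32_t first_seqno;            /* sequence number at the start of the window */
        uint32_t last_seqno;             /* highest sequence number seen so far */
    } loss_stats_t;

    /* Call for every packet that passed decryption and the replay check. */
    static void count_received(loss_stats_t *s, uint32_t seqno) {
        s->received++;
        if (seqno > s->last_seqno)
            s->last_seqno = seqno;
    }

    /* Fraction of the packets the peer sent that never arrived. */
    static double packet_loss(const loss_stats_t *s) {
        uint32_t expected = s->last_seqno - s->first_seqno + 1;

        if (expected == 0 || s->received >= expected)
            return 0.0;

        return 1.0 - (double)s->received / (double)expected;
    }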