We were checking only for readability, which is not a problem for normal
connections, since the server side of a connection will always send an ID
request. But when using a proxy, the proxy server doesn't send anything before
the client, so tinc would not see that its connection to the proxy had already
been established.
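For illustration only, a minimal sketch of the general technique (not tinc's
actual code): after a non-blocking connect(), the socket becomes writable once
the connection is established, which select() can detect even if the peer has
not sent anything yet.

    #include <sys/select.h>
    #include <sys/socket.h>

    /* Wait until a non-blocking connect() has finished, by polling for
       writability instead of readability. Returns 0 on success, -1 on error. */
    static int wait_for_connect(int fd, struct timeval *timeout) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);

        if(select(fd + 1, NULL, &wfds, NULL, timeout) <= 0)
            return -1;

        /* The socket is writable; check whether the connect() succeeded. */
        int err = 0;
        socklen_t len = sizeof(err);
        if(getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) || err)
            return -1;

        return 0;
    }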
Tinc never restarts PMTU discovery unless a node becomes unreachable. However,
it is possible that the PMTU was very low during the initial discovery but has
increased since then. To detect this, tinc now tries to send an extra packet
every PingInterval, with a size slightly larger than the currently known PMTU.
If this packet is successfully received back, we partially restart PMTU
discovery to find out the new maximum.
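A rough sketch of the idea, with hypothetical field and function names rather
than tinc's own:

    /* Once per PingInterval, probe slightly above the known PMTU; if a reply
       comes back, the path MTU has grown, so partially restart discovery. */
    #define PMTU_INCREASE_STEP 8
    #define MAX_MTU 1500

    struct node {
        int mtu;        /* currently known path MTU */
        int maxmtu;     /* upper bound used during discovery */
        int mtuprobes;  /* discovery state counter */
    };

    static int next_increase_probe_size(const struct node *n) {
        return n->mtu + PMTU_INCREASE_STEP;   /* just above what we know works */
    }

    static void increase_probe_confirmed(struct node *n, int len) {
        if(len <= n->mtu)
            return;
        n->mtu = len;          /* the larger probe got through */
        n->maxmtu = MAX_MTU;   /* reopen the search window... */
        n->mtuprobes = 0;      /* ...and partially restart discovery */
    }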
Commit dd07c9fc1f broke the reception of datagram
SPTPS packets, by undoing the conversion of the sequence number to host byte
order before comparison. This caused error messages like "Packet is 16777215
seqs in the future, dropped (1)".
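The underlying issue is an ordinary byte-order mistake; a simplified sketch of
the correct handling (not the actual SPTPS code):

    #include <stdint.h>
    #include <arpa/inet.h>

    /* The sequence number travels in network byte order; convert it to host
       byte order before comparing it to the expected counter. */
    static int seqno_acceptable(const uint32_t *wire_seqno, uint32_t expected) {
        uint32_t seqno = ntohl(*wire_seqno);
        return seqno >= expected; /* simplified: real code also checks a replay window */
    }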
Tinc uses Kruskal's algorithm to calculate an MST. However, this was broken in
commit 6e80da3370. Revert to the working
algorithm from tinc 1.0.
Thanks to Cheng LI for spotting the problem.
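For reference, a generic, self-contained sketch of Kruskal's algorithm using a
union-find structure; tinc's own implementation works on its edge_t/node_t
graph instead:

    #include <stdlib.h>

    struct edge { int from, to, weight; };

    /* Union-find with path halving. */
    static int find(int *parent, int i) {
        while(parent[i] != i)
            i = parent[i] = parent[parent[i]];
        return i;
    }

    static int cmp_weight(const void *a, const void *b) {
        return ((const struct edge *)a)->weight - ((const struct edge *)b)->weight;
    }

    /* Stores the indices of the MST edges in selected[]; returns how many. */
    static int kruskal(struct edge *edges, int nedges, int nnodes, int *selected) {
        int *parent = malloc(nnodes * sizeof(*parent));
        if(!parent)
            return 0;
        for(int i = 0; i < nnodes; i++)
            parent[i] = i;

        qsort(edges, nedges, sizeof(*edges), cmp_weight);

        int count = 0;
        for(int i = 0; i < nedges; i++) {
            int a = find(parent, edges[i].from);
            int b = find(parent, edges[i].to);
            if(a != b) {              /* edge joins two components: keep it */
                parent[a] = b;
                selected[count++] = i;
            }
        }

        free(parent);
        return count;
    }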
Without adding any extra traffic, we can measure round trip times, estimate the
bandwidth and packet loss between nodes. The RTT and bandwidth can be measured
by timing the MTU probe packets. The RTT is the time between sending a burst
of MTU probes and receiving the first reply. The bandwidth can be estimated by
dividing the size of the probe packets by the time between successive received
probe replies of the same burst. The packet
loss can be estimated for incoming traffic by comparing how many packets have
actually been received to the increase in the sequence numbers.
The estimates are not perfect. Bandwidth in particular is difficult to measure;
the only accurate way is to continuously send as much data as possible, but
that is obviously not desirable. The packet loss rate is also almost always
a few percent when sending a lot of data over the VPN via TCP, since TCP
*needs* packet loss to work properly.
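As a sketch of how such estimates can be derived from probe timestamps
(simplified arithmetic only, not tinc's actual bookkeeping):

    #include <sys/time.h>

    /* Microseconds between two timestamps. */
    static long usec_between(const struct timeval *a, const struct timeval *b) {
        return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
    }

    /* RTT estimate: time between sending the probe burst and the first reply. */
    static long estimate_rtt_usec(const struct timeval *burst_sent,
                                  const struct timeval *first_reply) {
        return usec_between(burst_sent, first_reply);
    }

    /* Bandwidth estimate in bytes/second: probe size divided by the interval
       between two successive replies from the same burst. */
    static double estimate_bandwidth(int probe_len,
                                     const struct timeval *reply1,
                                     const struct timeval *reply2) {
        long usec = usec_between(reply1, reply2);
        return usec > 0 ? probe_len * 1e6 / usec : 0.0;
    }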
Keep track of the number of correct, non-replayed UDP packets that have been
received, regardless of their content. This can be compared to the sequence
number to determine the real packet loss.
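A minimal sketch of that comparison, assuming we track the increase in the
peer's sequence number and the number of packets actually received over the
same period:

    #include <stdint.h>

    /* Estimated incoming packet loss: the sequence number increase tells us
       how many packets were sent, the local counter how many arrived. */
    static double estimate_packet_loss(uint32_t seqno_delta, uint32_t received_delta) {
        if(seqno_delta == 0)
            return 0.0;
        return 1.0 - (double)received_delta / seqno_delta;
    }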
There are several reasons for this:
- MacOS/X doesn't support polling the tap device using kqueue, requiring a
workaround to fall back to select().
- On Windows, only sockets are handled properly, so tinc uses a second
thread that does a blocking ReadFile() on the TAP-Win32/64 device. However,
this does not mix well with libevent.
- Libevent, even just the core, is quite large, and although it is easy to get
and install on many platforms, it can be a burden.
- Libev is more lightweight and seems technically superior, but it doesn't
abstract away all the platform differences (for example, async events are not
supported on Windows).
We don't need to search the whole edge tree; we can use the node's own edge
tree, since each edge has a pointer to its reverse. Also, we do need to make
sure we try the reflexive address often.
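A hypothetical sketch of the data structure idea; the field names are
illustrative, not necessarily tinc's:

    #include <sys/socket.h>

    struct node_t;

    /* Each edge is linked into its owner's own edge list and knows its
       reverse, so candidate addresses can be found by walking one node's
       list instead of searching the global edge tree. */
    struct edge_t {
        struct node_t *to;                 /* node this edge points to */
        struct sockaddr_storage address;   /* address associated with this direction */
        struct edge_t *reverse;            /* the same link, seen from the other side */
        struct edge_t *next;               /* next edge owned by the same node */
    };

    static void try_candidate_addresses(struct edge_t *edges,
                                        void (*try_address)(const struct sockaddr_storage *)) {
        for(struct edge_t *e = edges; e; e = e->next) {
            try_address(&e->address);
            if(e->reverse)
                try_address(&e->reverse->address);  /* address seen from the other side */
        }
    }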
Before, it would always use the first socket and always send an IPv4 broadcast
packet. That works fine in a lot of situations, but it is better to try all
sockets, and to send IPv6 packets on IPv6 sockets. This is especially important
for users that are on IPv6-only networks or that have multiple physical network
interfaces, although in the latter case it probably requires them to use the
ListenAddress variable to create a separate socket for each interface.
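A simplified sketch of per-family discovery sends; the addresses used here
(IPv4 limited broadcast, IPv6 all-nodes multicast) are illustrative, and the
IPv4 case assumes SO_BROADCAST has been enabled on the socket:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Send a discovery packet on every listening socket, using an address
       family that matches the socket. */
    static void send_discovery(const int *fds, const int *families, int nfds,
                               const void *data, size_t len, unsigned short port) {
        for(int i = 0; i < nfds; i++) {
            if(families[i] == AF_INET) {
                struct sockaddr_in sin;
                memset(&sin, 0, sizeof(sin));
                sin.sin_family = AF_INET;
                sin.sin_port = htons(port);
                sin.sin_addr.s_addr = htonl(INADDR_BROADCAST);
                sendto(fds[i], data, len, 0, (struct sockaddr *)&sin, sizeof(sin));
            } else if(families[i] == AF_INET6) {
                struct sockaddr_in6 sin6;
                memset(&sin6, 0, sizeof(sin6));
                sin6.sin6_family = AF_INET6;
                sin6.sin6_port = htons(port);
                inet_pton(AF_INET6, "ff02::1", &sin6.sin6_addr);
                sendto(fds[i], data, len, 0, (struct sockaddr *)&sin6, sizeof(sin6));
            }
        }
    }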
Before, when tinc saw a packet larger than the PMTU with a VLAN tag, it would
not know what to do with it, and would just forward it via TCP. Now, tinc
handles 802.1q packets correctly, as long as there is only one tag.
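A self-contained sketch of skipping a single 802.1q tag when parsing an
Ethernet frame (simplified, not tinc's actual parser):

    #include <stdint.h>
    #include <stddef.h>

    #define ETH_ALEN 6
    #define ETHERTYPE_8021Q 0x8100

    /* Return the offset of the payload, skipping at most one 802.1q tag.
       Stacked (QinQ) tags are not handled. Returns 0 if the frame is too short. */
    static size_t ether_payload_offset(const uint8_t *frame, size_t len, uint16_t *ethertype) {
        size_t offset = 2 * ETH_ALEN;          /* after destination + source MAC */
        if(len < offset + 2)
            return 0;

        uint16_t type = frame[offset] << 8 | frame[offset + 1];
        if(type == ETHERTYPE_8021Q) {          /* one VLAN tag: 4 extra bytes */
            offset += 4;
            if(len < offset + 2)
                return 0;
            type = frame[offset] << 8 | frame[offset + 1];
        }

        *ethertype = type;
        return offset + 2;                     /* payload starts after the EtherType */
    }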
When set to a non-zero value, tinc will try to maintain exactly that number of
meta connections to other nodes. If there are not enough connections, it will
periodically try to set up an outgoing connection to a random node. If there
are too many connections, it will periodically try to remove an outgoing
connection.
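A sketch of the periodic maintenance logic, with hypothetical helper callbacks
standing in for tinc's connection handling:

    #include <stdlib.h>

    /* Keep the number of meta connections close to "target"; a target of zero
       disables the feature. Called once per maintenance interval. */
    static void maintain_connections(int current, int target, int num_nodes,
                                     void (*connect_to)(int node_index),
                                     void (*close_one_outgoing)(void)) {
        if(!target)
            return;

        if(current < target && num_nodes > 0)
            connect_to(rand() % num_nodes);   /* try one random node per interval */
        else if(current > target)
            close_one_outgoing();             /* drop one outgoing connection */
    }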
Only the very first packet of an SPTPS session should be sent with REQ_KEY;
this signals the peer to abort any previous session and start a new one as
well.
Most of the code doesn't care whether requests are terminated with a newline or
not, except that when requests are forwarded, it is assumed they do not have
one and a newline is added. When a node using SPTPS receives a request from
another SPTPS-using node, and forwards it to a non-SPTPS-using node, this will
result in two consecutive newlines, which the latter node will see as an empty,
and thus invalid, request.
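A minimal sketch of the kind of fix involved: make sure a forwarded request
never carries its own trailing newline, since the forwarding code adds one:

    #include <string.h>

    /* Drop a trailing newline in place, if there is one. */
    static void strip_trailing_newline(char *request) {
        size_t len = strlen(request);
        if(len && request[len - 1] == '\n')
            request[len - 1] = '\0';
    }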
The tree functions were never used on the connection_tree; a list is more appropriate.
Also be more paranoid about connections disappearing while traversing the list.
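A sketch of the paranoid traversal: remember the next element before handling
the current one, so the loop survives the current connection being closed and
removed from the list:

    struct list_node { struct list_node *next; void *data; };

    static void foreach_connection(struct list_node *head, void (*handle)(void *)) {
        for(struct list_node *node = head, *next; node; node = next) {
            next = node->next;     /* saved first: handle() may delete "node" */
            handle(node->data);
        }
    }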
The outgoing_t and connection_t structs depended too much on each other,
causing lots of problems, especially with the reuse of a connection_t. Now, whenever
a connection is closed it is immediately removed from the list of connections
and destroyed.
As with old-style key exchange requests, keep track of whether a key exchange
is already in progress and how long it has been going on. If no key is known
yet, or if the key exchange takes too long, (re)start the key exchange.
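A sketch with hypothetical field names: restart the key exchange if no key is
known yet and nothing is in progress, or if the exchange in progress is taking
too long:

    #include <time.h>
    #include <stdbool.h>

    struct peer_state {
        bool validkey;            /* do we have a working key? */
        bool waitingforkey;       /* is an exchange in progress? */
        time_t last_req_key;      /* when the current exchange was started */
    };

    static bool need_new_key_exchange(const struct peer_state *p, time_t now, int timeout) {
        if(p->validkey)
            return false;
        if(!p->waitingforkey)
            return true;                          /* nothing in progress yet */
        return now - p->last_req_key > timeout;   /* exchange is taking too long */
    }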
Most fields should be zero when reusing a connection. In particular, when an
outgoing connection to a node which is reachable on more than one address is
made, the second connection to that node will have status.encryptout set but
outctx will be NULL, causing a NULL pointer dereference when send_meta()
calls EVP_EncryptUpdate() even though it shouldn't.
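A sketch of the defensive reset when reusing a connection; the struct here is
heavily reduced and only illustrative:

    #include <string.h>

    struct connection {
        char *name;
        struct { unsigned int encryptout:1, decryptin:1; } status;
        void *outctx, *inctx;
    };

    /* Clear everything except the fields that must survive, so stale state
       such as status.encryptout cannot outlive the context it belongs to. */
    static void reset_connection(struct connection *c) {
        char *name = c->name;          /* keep what must be kept... */
        memset(c, 0, sizeof(*c));      /* ...and zero the rest */
        c->name = name;
    }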
When starting tincd, tincctl now strips non-options from the command line, and
sets argv[0] to the name of the tincd command instead of copying its own
command name.
When stopping a running tincd, tincctl now waits for it to terminate.