This commit makes tincd capable of discovering UPnP-IGD devices on the
local network, and of adding mappings (port redirects) for its TCP and/or
UDP port.
The goal is to improve reliability and performance of tinc with nodes
sitting behind home routers that support UPnP, by making it less reliant
on UDP Hole Punching, which is prone to failure when "hostile" NATs are
involved.
This is implemented by leveraging the libminiupnpc library, on which we
have just added a new dependency. We use pthreads to run the UPnP client
code in a dedicated thread; we can't use the tinc event loop because
libminiupnpc doesn't have a non-blocking API.
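A minimal sketch of what the discovery thread might do, using the
miniupnpc API (exact signatures vary between miniupnpc versions, and
names such as upnp_thread, the port strings and the lease handling are
illustrative, not tinc's actual code):

    #include <pthread.h>
    #include <miniupnpc/miniupnpc.h>
    #include <miniupnpc/upnpcommands.h>

    /* Illustrative thread body; this runs outside the main event loop
       because libminiupnpc only offers a blocking API. */
    static void *upnp_thread(void *arg) {
        (void)arg;
        int error = 0;
        /* Signature as in recent miniupnpc versions (with a TTL argument);
           older versions take one argument less. */
        struct UPNPDev *devs = upnpDiscover(2000, NULL, NULL, 0, 0, 2, &error);
        struct UPNPUrls urls;
        struct IGDdatas data;
        char lanaddr[64];

        if(devs && UPNP_GetValidIGD(devs, &urls, &data, lanaddr, sizeof lanaddr) == 1) {
            /* Map external port 655 (tinc's default) back to this host. */
            UPNP_AddPortMapping(urls.controlURL, data.first.servicetype,
                                "655", "655", lanaddr, "tinc", "UDP", NULL, "0");
            UPNP_AddPortMapping(urls.controlURL, data.first.servicetype,
                                "655", "655", lanaddr, "tinc", "TCP", NULL, "0");
            FreeUPNPUrls(&urls);
        }

        freeUPNPDevlist(devs);
        return NULL;
    }

    /* Started from setup code with something like:
       pthread_t tid; pthread_create(&tid, NULL, upnp_thread, NULL); */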
This dumps the name of the invitation file, as well as the name of the
node that is being invited. This can make it easier to find the
invitation file belonging to a given node.
Unfortunately, glibc assumes that /etc/resolv.conf is a static file that
never changes. Even on servers, /etc/resolv.conf might be a dynamically
generated file, and we never know when it changes. Therefore, call
res_init() every time, so that glibc uses up-to-date nameserver
information.
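A minimal sketch of the idea (the surrounding lookup helper is
illustrative; only the res_init() call before the lookup is the point):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <resolv.h>
    #include <netdb.h>

    /* Re-read /etc/resolv.conf before every lookup, so that changes made
       while tincd is running are picked up. */
    static struct addrinfo *lookup_host(const char *host, const char *service) {
        struct addrinfo hints = {.ai_socktype = SOCK_DGRAM}, *ai = NULL;

        res_init();  /* refresh glibc's cached nameserver configuration */

        if(getaddrinfo(host, service, &hints, &ai))
            return NULL;

        return ai;
    }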
As a rule, it seems reasonable to make sure that tinc operates correctly
on at least 1G links, since these are pretty common. However, I have
observed replay window issues when operating at speeds of 600 Mbit/s and
above, especially when the receiving end is a Windows system (not sure
why). This commit increases the default so that this won't occur on
fresh setups.
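For reference, a replay window is a sliding bitmap over packet sequence
numbers: a packet arriving more than the window size behind the highest
sequence number seen so far is dropped, which is what starts to happen
when reordering at high packet rates exceeds a small window. A minimal
sketch of the technique (constant and field names are illustrative, not
tinc's actual ones):

    #include <stdbool.h>
    #include <stdint.h>

    #define REPLAY_WINDOW_BYTES 32  /* illustrative: 32 bytes = 256 packets */

    struct replay_state {
        uint32_t next_seqno;                  /* highest sequence number seen + 1 */
        uint8_t window[REPLAY_WINDOW_BYTES];  /* bitmap of recently seen packets */
    };

    /* Returns true if the packet with sequence number seq should be accepted. */
    static bool replay_check(struct replay_state *s, uint32_t seq) {
        const uint32_t window_bits = REPLAY_WINDOW_BYTES * 8;

        if(seq + window_bits < s->next_seqno)
            return false;                     /* too far behind the window: drop */

        if(seq >= s->next_seqno) {
            /* Packet advances the window; clear the bits we skip over. */
            for(uint32_t i = s->next_seqno; i <= seq; i++)
                s->window[(i / 8) % REPLAY_WINDOW_BYTES] &= ~(1 << (i % 8));
            s->next_seqno = seq + 1;
        } else if(s->window[(seq / 8) % REPLAY_WINDOW_BYTES] & (1 << (seq % 8))) {
            return false;                     /* already seen: replayed, drop */
        }

        s->window[(seq / 8) % REPLAY_WINDOW_BYTES] |= 1 << (seq % 8);
        return true;
    }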
It may not be obvious, but due to the way tinc operates (single-threaded
control loop with no intermediate packet buffer), UDP send and receive
buffers can have a massive impact on performance. It is therefore of
paramount importance that the buffers be large enough to prevent packet
drops that could occur while tinc is processing a packet.
Leaving that value to the OS default could be reasonable if we weren't
relying on it so much. Instead, this makes performance somewhat
unpredictable.
In practice, the worst case scenario occurs on Windows, where Microsoft
had the brilliant idea of making the buffers 8K in size by default, no
matter what the link speed is. Considering that 8K flies past in a
matter of microseconds on >1G links, this is extremely inappropriate. On
these systems, changing the buffer size to 1M results in *obscene*
raw throughput improvements; I have observed a 10X jump from 40 Mbit/s
to 400 Mbit/s on my system.
In this commit, we stop trusting the OS to get this right and we use a
fixed 1M value instead, which should be enough for <=1G links.
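A minimal sketch of how the buffers can be enlarged (error handling
omitted; the 1M constant matches the value described above):

    #include <sys/socket.h>

    /* Request 1 MiB send and receive buffers on the UDP socket instead of
       relying on the OS default, which may be as small as 8K on Windows. */
    static void set_udp_buffers(int fd) {
        const int bufsize = 1024 * 1024;

        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const void *)&bufsize, sizeof bufsize);
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, (const void *)&bufsize, sizeof bufsize);
    }

Note that on Linux the kernel caps these requests at net.core.wmem_max and
net.core.rmem_max, so the effective buffer size may still be smaller than
requested.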
This makes sure MTU_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
This makes sure UDP_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
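A sketch of the throttling pattern shared by the UDP_INFO and MTU_INFO
paths (names such as may_send_info are illustrative; the 5-per-second
default is the one described above):

    #include <stdbool.h>
    #include <time.h>

    /* Called from the RX path right before an MTU_INFO or UDP_INFO message
       would be sent; allows at most max_per_second of them. */
    static bool may_send_info(struct timespec *last_sent, int max_per_second) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        long elapsed_ns = (now.tv_sec - last_sent->tv_sec) * 1000000000L
                        + (now.tv_nsec - last_sent->tv_nsec);

        if(elapsed_ns < 1000000000L / max_per_second)
            return false;  /* too soon since the previous message: skip it */

        *last_sent = now;
        return true;
    }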
This will report possible problems in the configuration files, and in
some cases offers to fix them.
The code is far from perfect yet. It expects keys to be in their default
locations, and it doesn't check for Public/PrivateKey[File] statements yet.
It also does not correctly handle Ed25519 public keys yet.
This introduces a new configuration option,
UDPDiscoveryKeepaliveInterval, which is used as the UDP discovery
interval once the UDP tunnel is established. The pre-existing option,
UDPDiscoveryInterval, is therefore only used before UDP connectivity
is established.
The defaults are set so that tinc sends UDP pings more aggressively
if the tunnel is not established yet. This is appropriate since the
size of probes in that scenario is very small (16 bytes).
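A sketch of how the two intervals might be applied (the variable names
and default values shown are illustrative; the real option names are
UDPDiscoveryInterval and UDPDiscoveryKeepaliveInterval):

    #include <stdbool.h>

    /* Illustrative defaults: ping aggressively while the UDP tunnel is not
       yet established, then fall back to a slower keepalive cadence. */
    static int udp_discovery_interval = 2;            /* seconds, before UDP works */
    static int udp_discovery_keepalive_interval = 9;  /* seconds, once UDP works */

    static int next_udp_ping_interval(bool udp_confirmed) {
        return udp_confirmed ? udp_discovery_keepalive_interval
                             : udp_discovery_interval;
    }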
Since UDP discovery is the place where UDP feasibility is checked, it
makes sense to test for local connectivity as well. This was previously
done as part of PMTU discovery.
This adds a new mechanism by which tinc can determine if a node is
reachable via UDP. The new mechanism is currently redundant with the
PMTU discovery mechanism - that will be fixed in a future commit.
Conceptually, the UDP discovery mechanism works similarly to PMTU
discovery: it sends UDP probes (of minmtu size, to make sure the tunnel
is fully usable), and assumes UDP is usable if it gets replies. It
assumes UDP is broken if too much time has passed since the last reply.
The big difference with the current PMTU discovery mechanism, however,
is that UDP discovery probes are only triggered as part of the
packet TX path (through try_tx()). This is quite interesting, because
it means tinc will never send UDP pings more often than normal packets,
and most importantly, it will automatically stop sending pings as soon
as packets stop flowing, thereby nicely reducing network chatter.
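A sketch of the shape of this logic (function and field names here are
illustrative rather than tinc's exact ones):

    #include <stdbool.h>
    #include <time.h>

    struct node_state {
        time_t last_udp_reply;  /* when we last received a probe reply */
        time_t last_udp_probe;  /* when we last sent a probe */
        bool udp_confirmed;     /* do we currently believe UDP works? */
    };

    /* Called from the packet TX path, so probes are never sent faster than
       data packets and stop automatically when traffic stops. */
    static void try_tx_probe(struct node_state *n, time_t now,
                             int probe_interval, int expiry) {
        if(now - n->last_udp_reply > expiry)
            n->udp_confirmed = false;  /* no replies for too long: assume UDP is broken */

        if(now - n->last_udp_probe >= probe_interval) {
            n->last_udp_probe = now;
            /* send_udp_probe(n) would go here; replies update last_udp_reply */
        }
    }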
Of course, there are small drawbacks in some edge cases: for example,
if a node only sends one packet every minute to another node, these
packets will only be sent over TCP, because the interval between packets
is too long for tinc to maintain the UDP tunnel. I consider this a
feature, not a bug: I believe it is appropriate to use TCP in scenarios
where traffic is negligible, so that we don't pollute the network with
pings just to maintain a UDP tunnel that's seeing negligible usage.
ListenAddress works the same as BindToAddress, except that from now on,
outgoing packets are only explicitly bound to a socket's address for
sockets specified with BindToAddress.
Tinc now strictly limits incoming connections from the same host to 1 per
second. For incoming connections from multiple hosts, short bursts of
incoming connections are allowed (by default 100), but on average only 1
connection per second is allowed as well.
When an incoming connection exceeds the limit, tinc will keep the connection in
a tarpit; the connection will be kept open but it is ignored completely. Only
one connection is in a tarpit at a time to limit the number of useless open
connections.
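A sketch of the kind of accounting this involves (a simple token bucket;
the names and the burst constant are illustrative of the defaults
described above):

    #include <stdbool.h>
    #include <time.h>

    #define MAX_BURST 100  /* illustrative: allowed burst of connections */

    static double tokens = MAX_BURST;
    static time_t last_refill;

    /* Returns true if a new incoming connection may be accepted; the bucket
       refills at one token per second, so the long-term average is 1/s. */
    static bool connection_allowed(time_t now) {
        tokens += (double)(now - last_refill);  /* 1 token per second */
        last_refill = now;

        if(tokens > MAX_BURST)
            tokens = MAX_BURST;

        if(tokens < 1.0)
            return false;  /* over the limit: candidate for the tarpit */

        tokens -= 1.0;
        return true;
    }

The per-host 1-per-second limit described above would need similar state
kept per source address.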
When LocalDiscovery is enabled, tinc normally sends its PMTU discovery
broadcast packets to the broadcast address 255.255.255.255 (IPv4) or the
all-nodes multicast address ff02::1 (IPv6). This option lets tinc use a
different address.
At the moment only one LocalDiscoveryAddress can be specified.
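A sketch of sending a discovery probe to a configurable IPv4 address
(the helper name and parameters are illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Send a local discovery probe to the configured LocalDiscoveryAddress
       instead of the default broadcast address. */
    static void send_local_discovery(int fd, const char *discovery_addr,
                                     uint16_t port, const void *buf, size_t len) {
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);

        if(inet_pton(AF_INET, discovery_addr, &sa.sin_addr) != 1)
            return;

        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &one, sizeof one);  /* for broadcast targets */

        sendto(fd, buf, len, 0, (const struct sockaddr *)&sa, sizeof sa);
    }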
Using the tinc command, an administrator of an existing VPN can generate
invitations for new nodes. The invitation is a small URL that can easily
be copied and pasted into email or live chat. Another person can have tinc
automatically set up the necessary configuration files and exchange keys
with the server, using only the invitation URL.
The invitation protocol uses temporary ECDSA keys. The invitation URL
consists of the hostname and port of the server, a hash of the server's
temporary ECDSA key and a cookie. When the client wants to accept an
invitation, it also creates a temporary ECDSA key, connects to the server
and says it wants to accept an invitation. Both sides exchange their
temporary keys. The client verifies that the server's key matches the hash
in the invitation URL. After setting up an SPTPS connection using the
temporary keys, the client gives the cookie to the server. If the cookie
is valid, the server sends the client an invitation file containing the
client's new name and a copy of the server's host config file. If everything
is ok, the client will generate a long-term ECDSA key and send it to the
server, which will add it to a new host config file for the client.
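For example, the client-side verification step amounts to hashing the
server's temporary public key and comparing it against the hash carried in
the invitation URL; a minimal sketch using OpenSSL's SHA-256 (tinc's actual
hash function and encoding may differ):

    #include <stdbool.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Compare a hash of the server's temporary public key, as received
       during the handshake, with the key hash from the invitation URL. */
    static bool verify_server_key(const unsigned char *server_key, size_t keylen,
                                  const unsigned char *hash_from_url, size_t hashlen) {
        unsigned char digest[SHA256_DIGEST_LENGTH];

        SHA256(server_key, keylen, digest);

        /* The URL may only carry a truncated hash. */
        return hashlen <= sizeof digest && !memcmp(digest, hash_from_url, hashlen);
    }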
The invitation protocol currently allows multiple host config files to be
sent from the server to the client. However, the client filters out
most configuration variables for its own host configuration file. In
particular, it only accepts Name, Mode, Broadcast, ConnectTo, Subnet and
AutoConnect. Also, at the moment no tinc-up script is generated.
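A sketch of that filtering step (the list mirrors the variables named
above; the function name is illustrative):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>

    /* Variables from the invitation that the client accepts into its own
       configuration; everything else is dropped. */
    static const char *accepted_variables[] = {
        "Name", "Mode", "Broadcast", "ConnectTo", "Subnet", "AutoConnect",
    };

    static bool variable_accepted(const char *name) {
        for(size_t i = 0; i < sizeof accepted_variables / sizeof *accepted_variables; i++)
            if(!strcasecmp(name, accepted_variables[i]))
                return true;

        return false;
    }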
When an invitation has successfully been accepted, the client needs to start
the tinc daemon manually.