This commit makes tincd capable of discovering UPnP-IGD devices on the
local network and of adding mappings (port redirects) for its TCP
and/or UDP port.
The goal is to improve reliability and performance of tinc with nodes
sitting behind home routers that support UPnP, by making it less reliant
on UDP Hole Punching, which is prone to failure when "hostile" NATs are
involved.
This is implemented using the libminiupnpc library, which becomes a new
dependency. We use pthread to run the
UPnP client code in a dedicated thread; we can't use the tinc event loop
because libminiupnpc doesn't have a non-blocking API.
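For illustration, here is a rough sketch of what such a dedicated thread
could look like. The miniupnpc functions shown do exist, but their exact
signatures vary across MINIUPNPC_API_VERSION, and the tinc-side names
(upnp_init, the hardcoded port string) are placeholders:

    /* Sketch only: spawn a thread that discovers an IGD and maps our
     * port. The upnpDiscover() call shown matches older miniupnpc
     * releases; newer API versions take extra arguments. */
    #include <pthread.h>
    #include <miniupnpc/miniupnpc.h>
    #include <miniupnpc/upnpcommands.h>

    static void *upnp_thread(void *arg) {
        const char *port = arg;     /* e.g. "655" */
        int error = 0;
        struct UPNPDev *devices = upnpDiscover(2000, NULL, NULL, 0, 0, &error);
        struct UPNPUrls urls;
        struct IGDdatas data;
        char lanaddr[64];

        if(devices && UPNP_GetValidIGD(devices, &urls, &data, lanaddr, sizeof lanaddr) == 1) {
            /* Ask the IGD to redirect our TCP and UDP port to us. */
            UPNP_AddPortMapping(urls.controlURL, data.first.servicetype,
                                port, port, lanaddr, "tinc", "TCP", NULL, "0");
            UPNP_AddPortMapping(urls.controlURL, data.first.servicetype,
                                port, port, lanaddr, "tinc", "UDP", NULL, "0");
            FreeUPNPUrls(&urls);
        }

        freeUPNPDevlist(devices);
        return NULL;
    }

    /* Called from tinc's setup code (name hypothetical): */
    void upnp_init(void) {
        pthread_t thread;
        pthread_create(&thread, NULL, upnp_thread, "655");
        pthread_detach(&thread);
    }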
Unfortunately, glibc assumes that /etc/resolv.conf is a static file that
never changes. Even on servers, /etc/resolv.conf might be a dynamically
generated file, and we never know when it changes. So call res_init()
before every name lookup, so that glibc uses up-to-date nameserver
information.
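In practice this boils down to something like the following minimal
sketch (the surrounding lookup helper is hypothetical; only the
res_init() call before getaddrinfo() is the point):

    #include <netdb.h>
    #include <resolv.h>
    #include <sys/socket.h>

    /* Sketch: re-read /etc/resolv.conf before every lookup so that
     * changes made while tincd is running are picked up. */
    static struct addrinfo *lookup(const char *host, const char *service) {
        struct addrinfo hints = {.ai_socktype = SOCK_STREAM}, *ai = NULL;

        res_init();  /* refresh the cached nameserver configuration */

        if(getaddrinfo(host, service, &hints, &ai))
            return NULL;

        return ai;
    }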
Conflicts:
	src/have.h
	src/net.c
	src/net_setup.c
The average RTT can be used to update the edge weight and propagate it
to the network. tinc dump edges has also been extended to show the
current RTT. The edge weight is only updated if EdgeUpdateInterval is
set to a value other than 0 in the configuration.
- Ignore local configuration for editors
- Extended manpage with information about EdgeUpdateInterval
- Added clone_edge and fixed a potential segfault when b->from is not defined
- Compute avg_rtt based on the time values we got back in PONG (see the
sketch after this list)
- Add avg_rtt on dump edge
- Send current time on PING and return it on PONG
- Changed last_ping_time to struct timeval
- Extended edge_t with avg_rtt
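A simplified sketch of the PONG-side computation; the smoothing formula
and helper names are illustrative, only edge_t and avg_rtt come from the
list above:

    #include <sys/time.h>

    typedef struct edge_t {     /* stand-in for tinc's edge_t */
        int avg_rtt;            /* average round-trip time in ms */
        /* ... */
    } edge_t;

    /* Sketch: the PING carried our own gettimeofday() value; the PONG
     * echoes it back, so the RTT is simply "now minus then". */
    static void update_avg_rtt(edge_t *e, const struct timeval *sent) {
        struct timeval now, diff;
        gettimeofday(&now, NULL);
        timersub(&now, sent, &diff);

        int rtt = diff.tv_sec * 1000 + diff.tv_usec / 1000;  /* ms */

        /* Smooth the measurement with a simple moving average. */
        if(e->avg_rtt)
            e->avg_rtt = (e->avg_rtt * 7 + rtt) / 8;
        else
            e->avg_rtt = rtt;
    }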
As a rule, it seems reasonable to make sure that tinc operates correctly
on at least 1G links, since these are pretty common. However, I have
observed replay window issues when operating at speeds of 600 Mbit/s and
above, especially when the receiving end is a Windows system (not sure
why). This commit increases the default replay window so that this
won't occur on
fresh setups.
It may not be obvious, but due to the way tinc operates (single-threaded
control loop with no intermediate packet buffer), UDP send and receive
buffers can have a massive impact on performance. It is therefore of
paramount importance that the buffers be large enough to prevent packet
drops that could occur while tinc is processing a packet.
Leaving that value at the OS default could be reasonable if we weren't
relying on it so much; as it is, this makes performance somewhat
unpredictable.
In practice, the worst case scenario occurs on Windows, where Microsoft
had the brilliant idea of making the buffers 8K in size by default, no
matter what the link speed is. Considering that 8K flies past in a
matter of microseconds on >1G links, this is extremely inappropriate. On
these systems, changing the buffer size to 1M results in *obscene*
raw throughput improvements; I have observed a 10X jump from 40 Mbit/s
to 400 Mbit/s on my system.
In this commit, we stop trusting the OS to get this right and we use a
fixed 1M value instead, which should be enough for <=1G links.
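The change itself amounts to a pair of setsockopt() calls on the UDP
socket at setup time, roughly like this (sketch; the constant and
function names are placeholders):

    #include <sys/socket.h>

    #define UDP_BUFSIZE (1024 * 1024)  /* 1M: headroom for <=1G links */

    /* Sketch: request large send/receive buffers instead of trusting
     * the OS default, which can be as small as 8K on Windows. */
    static void setup_udp_buffers(int fd) {
        int bufsize = UDP_BUFSIZE;
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof bufsize);
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof bufsize);
    }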
This makes sure MTU_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
This makes sure UDP_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
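In both cases the throttling can be as simple as a timestamp check in
the RX path before the message is sent; a sketch with illustrative
names:

    #include <stdbool.h>
    #include <time.h>

    #define INFO_INTERVAL 0.2  /* at most 5 messages/second by default */

    /* Sketch: only emit another *_INFO message if enough time has
     * passed since the previous one was sent to this node. */
    static bool may_send_info(struct timespec *last_sent) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        double elapsed = (now.tv_sec - last_sent->tv_sec)
                       + (now.tv_nsec - last_sent->tv_nsec) / 1e9;

        if(elapsed < INFO_INTERVAL)
            return false;

        *last_sent = now;
        return true;
    }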
This introduces a new configuration option,
UDPDiscoveryKeepaliveInterval, which is used as the UDP discovery
interval once the UDP tunnel is established. The pre-existing option,
UDPDiscoveryInterval, is therefore only used before UDP connectivity
is established.
The defaults are set so that tinc sends UDP pings more aggressively
if the tunnel is not established yet. This is appropriate since the
size of probes in that scenario is very small (16 bytes).
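Conceptually the change boils down to selecting the ping interval based
on the tunnel state, something like (sketch; names are made up):

    #include <stdbool.h>

    /* Sketch: ping aggressively while the UDP tunnel is unconfirmed,
     * and fall back to the longer keepalive interval once it works. */
    static int udp_ping_interval(bool udp_confirmed,
                                 int discovery_interval,
                                 int keepalive_interval) {
        return udp_confirmed ? keepalive_interval : discovery_interval;
    }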
Since UDP discovery is the place where UDP feasibility is checked, it
makes sense to test for local connectivity as well. This was previously
done as part of PMTU discovery.
This adds a new mechanism by which tinc can determine if a node is
reachable via UDP. The new mechanism is currently redundant with the
PMTU discovery mechanism - that will be fixed in a future commit.
Conceptually, the UDP discovery mechanism works similarly to PMTU
discovery: it sends UDP probes (of minmtu size, to make sure the tunnel
is fully usable), and assumes UDP is usable if it gets replies. It
assumes UDP is broken if too much time has passed since the last reply.
The big difference with the current PMTU discovery mechanism, however,
is that UDP discovery probes are only triggered as part of the
packet TX path (through try_tx()). This is quite interesting, because
it means tinc will never send UDP pings more often than normal packets,
and most importantly, it will automatically stop sending pings as soon
as packets stop flowing, thereby nicely reducing network chatter.
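A rough sketch of that TX-path trigger; node_t, minmtu and try_tx are
mentioned above, everything else (field and helper names) is
illustrative:

    #include <time.h>

    typedef struct node_t {        /* stand-in for tinc's node_t */
        int minmtu;                /* probes are sent at minmtu size */
        time_t last_udp_probe;
        /* ... */
    } node_t;

    void send_udp_probe(node_t *n, int len);   /* illustrative helper */

    /* Sketch: called from try_tx(). Because probes are only triggered
     * here, they can never outnumber real packets, and they stop
     * automatically as soon as traffic stops. */
    static void try_udp(node_t *n, int interval) {
        time_t now = time(NULL);

        if(now - n->last_udp_probe < interval)
            return;

        n->last_udp_probe = now;
        send_udp_probe(n, n->minmtu);

        /* Elsewhere, if no reply arrives before a timeout, UDP is
         * considered broken and traffic falls back to TCP. */
    }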
Of course, there are small drawbacks in some edge cases: for example,
if a node only sends one packet every minute to another node, these
packets will only be sent over TCP, because the interval between packets
is too long for tinc to maintain the UDP tunnel. I consider this a
feature, not a bug: I believe it is appropriate to use TCP in scenarios
where traffic is negligible, so that we don't pollute the network with
pings just to maintain a UDP tunnel that's seeing negligible usage.
There are two caveats to be aware of, which are documented in this
commit:
- Because the system will likely assign a different port for each
address family it binds to, it is recommended to use only a single
address family; otherwise, other nodes will only learn one of the
several assigned ports, possibly breaking communication (see the
illustration after this list).
- AutoConnect won't work in this scenario, because it relies on the UDP
port being the same as the TCP port, which is not the case when using
system-assigned ports.
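The first caveat is a direct consequence of how system-assigned ports
work: every bind() to port 0 gets its own ephemeral port, so an IPv4
and an IPv6 socket will almost certainly end up on different ports. A
standalone illustration (not tinc code):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Bind a datagram socket to port 0 and report the port the system
     * actually assigned to it. */
    static int bound_port(int family) {
        int fd = socket(family, SOCK_DGRAM, 0);
        struct sockaddr_storage sa;
        socklen_t len = family == AF_INET ? sizeof(struct sockaddr_in)
                                          : sizeof(struct sockaddr_in6);

        memset(&sa, 0, sizeof sa);
        sa.ss_family = family;     /* port left at 0: the OS picks one */
        bind(fd, (struct sockaddr *)&sa, len);

        len = sizeof sa;
        getsockname(fd, (struct sockaddr *)&sa, &len);

        return family == AF_INET
            ? ntohs(((struct sockaddr_in *)&sa)->sin_port)
            : ntohs(((struct sockaddr_in6 *)&sa)->sin6_port);
    }

    int main(void) {
        /* The two ports will differ, which is exactly the problem. */
        printf("IPv4: %d, IPv6: %d\n",
               bound_port(AF_INET), bound_port(AF_INET6));
        return 0;
    }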
This adds a new option, BroadcastSubnet, that allows the user to
declare broadcast subnets, i.e. subnets which are considered broadcast
addresses by the tinc routing layer. Previously only the global IPv4
and IPv6 broadcast addresses were supported by virtue of being
hardcoded.
This is useful when using tinc in router mode with Ethernet virtual
devices, as it can be used to provide broadcast support for a local
broadcast address (e.g. 10.42.255.255) instead of just the global
address (255.255.255.255).
This is implemented by removing hardcoded broadcast addresses and
introducing "broadcast subnets", which are subnets with a NULL owner.
By default, behavior is unchanged; this is accomplished by adding
the global broadcast addresses for Ethernet, IPv4 and IPv6 at start
time.
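With that in place, the routing decision for a destination whose
covering subnet has no owner is simply "broadcast"; a sketch of the
idea (type and helper names are placeholders, not tinc's exact API):

    typedef struct node_t node_t;      /* stand-ins for tinc's types */
    typedef struct subnet_t {
        node_t *owner;                 /* NULL means broadcast subnet */
        /* ... */
    } subnet_t;

    void broadcast_packet(node_t *source, void *packet);  /* illustrative */
    void unicast_packet(node_t *dest, void *packet);      /* illustrative */

    /* Sketch: route a packet according to the subnet that covers its
     * destination address. */
    static void route_to_subnet(node_t *source, subnet_t *subnet, void *packet) {
        if(!subnet)
            return;                    /* unknown destination: drop */

        if(!subnet->owner)
            broadcast_packet(source, packet);  /* broadcast subnet */
        else
            unicast_packet(subnet->owner, packet);
    }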
Recent improvements to the local discovery mechanism make it cheaper,
more network-friendly, and now it cannot make things worse (as opposed
to the old mechanism). Thus there is no reason not to enable it by
default.
The new local address based local discovery mechanism is technically
superior to the old broadcast-based one. In fact, the old algorithm
can technically make things worse by e.g. sending broadcasts over the
VPN itself and then selecting the VPN address as the node's UDP
address. This cannot happen with the new mechanism.
Note that this means old nodes that don't send their local addresses in
ADD_EDGE messages can't be discovered, because there is no address to
send discovery packets to. Old nodes can still discover new nodes by
sending them broadcasts, though.
This introduces a new way of doing local discovery: when tinc has
local address information for the recipient node, it will send local
discovery packets directly to the local address of that node, instead
of using broadcast packets.
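A sketch of the idea; the local address comes from the node's edges (as
carried in ADD_EDGE messages), and all names here are illustrative:

    #include <sys/socket.h>

    typedef struct node_t {            /* stand-in for tinc's node_t */
        struct sockaddr_storage local_address;  /* learned from edges */
        /* ... */
    } node_t;

    void send_probe_to(node_t *n, const struct sockaddr_storage *sa);  /* illustrative */
    void send_probe_broadcast(node_t *n);                              /* illustrative */

    /* Sketch: prefer aiming the discovery probe straight at the node's
     * known local address; fall back to broadcast if we have none. */
    static void send_local_discovery(node_t *n) {
        if(n->local_address.ss_family != AF_UNSPEC)
            send_probe_to(n, &n->local_address);
        else
            send_probe_broadcast(n);
    }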
This new way of doing local discovery provides numerous advantages compared to
using broadcasts:
- No broadcast packets "polluting" the local network;
- Reliable even if the sending host has multiple network interfaces (in
contrast, broadcasts will only be sent through one unpredictable
interface)
- Works even if the two hosts are not on the same broadcast domain. One
example is a large LAN where the two hosts might be on different local
subnets. In fact, thanks to UDP hole punching this might even work if
there is a NAT sitting in the middle of the LAN between the two nodes!
- Sometimes a node is reachable through its "normal" address, and via a
local subnet as well. One might think the local subnet is the best route
to the node in this case, but more often than not it's actually worse -
one example is where the local segment is a third party VPN running in
parallel, or ironically it can be the local segment formed by the tinc
VPN itself! Because this new algorithm only checks the addresses for
which an edge is already established, it is less likely to fall into
these traps.
Besides controlling when tinc-up and tinc-down get called, this commit makes
DeviceStandby control when the virtual network interface "cable" is "plugged"
on Windows. This is more user-friendly as the status of the tinc network can
be seen just by looking at the state of the network interface, and it makes
Windows behave better when isolated.
This adds a new DeviceStandby option; when it is disabled (the default),
behavior is unchanged. If it is enabled, tinc-up will not be called during
tinc initialization, but will instead be deferred until the first node
becomes reachable, and tinc-down will be called as soon as no nodes are
reachable anymore.
This is useful because it means the device won't be set up until we are fairly
sure there is something listening on the other side. This is more user-friendly,
as one can check on the status of the tinc network connection just by checking
the status of the network interface. Besides, it prevents the OS from thinking
it is connected to some network when it is in fact completely isolated.
ListenAddress works the same as BindToAddress, except that from now on,
explicitly binding outgoing packets to the address of a socket is only done for
sockets specified with BindToAddress.
Tinc now strictly limits incoming connections from the same host to 1
per second. For incoming connections from multiple hosts, short bursts
of incoming connections are allowed (100 by default), but on average
only 1 connection per second is allowed as well.
When an incoming connection exceeds the limit, tinc will put it in a
tarpit: the connection is kept open but otherwise ignored completely.
Only one connection is kept in the tarpit at a time, to limit the
number of useless open connections.
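A sketch of the accept-path logic described above, using a
token-bucket-style limiter and a single tarpit slot (the structure and
names are illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    static int tarpit_fd = -1;   /* at most one tarpitted connection */
    static double tokens;        /* global bucket: 1 token/s, burst 100 */
    static time_t last_refill;

    /* Sketch: called for every accepted TCP connection. Returns true
     * if the connection may proceed, false if it went into the tarpit.
     * The per-host check (1 connection per second) is omitted here. */
    static bool allow_connection(int fd) {
        time_t now = time(NULL);

        tokens += now - last_refill;  /* refill: 1 connection/second */
        if(tokens > 100)              /* allow short bursts (default 100) */
            tokens = 100;
        last_refill = now;

        if(tokens >= 1) {
            tokens -= 1;
            return true;
        }

        /* Over the limit: keep the socket open but ignore it,
         * replacing any previously tarpitted connection. */
        if(tarpit_fd != -1)
            close(tarpit_fd);
        tarpit_fd = fd;
        return false;
    }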
When LocalDiscovery is enabled, tinc normally sends broadcast packets during
PMTU discovery to the broadcast address (255.255.255.255 or ff02::1). This
option lets tinc use a different address.
At the moment only one LocalDiscoveryAddress can be specified.