Since UDP discovery is the place where UDP feasibility is checked, it
makes sense to test for local connectivity as well. This was previously
done as part of PMTU discovery.

This adds a new mechanism by which tinc can determine if a node is
reachable via UDP. The new mechanism is currently redundant with the
PMTU discovery mechanism - that will be fixed in a future commit.
Conceptually, the UDP discovery mechanism works similarly to PMTU
discovery: it sends UDP probes (of minmtu size, to make sure the tunnel
is fully usable), and assumes UDP is usable if it gets replies. It
assumes UDP is broken if too much time has passed since the last reply.
The big difference from the current PMTU discovery mechanism, however,
is that UDP discovery probes are only triggered as part of the packet
TX path (through try_tx()). This is quite interesting, because it means
tinc will never send UDP pings more often than normal packets, and most
importantly, it will automatically stop sending pings as soon as
packets stop flowing, thereby nicely reducing network chatter.

Of course, there are small drawbacks in some edge cases: for example,
if a node only sends one packet every minute to another node, these
packets will only be sent over TCP, because the interval between packets
is too long for tinc to maintain the UDP tunnel. I consider this a
feature, not a bug: I believe it is appropriate to use TCP when traffic
is negligible, so that we don't pollute the network with pings just to
keep a barely used UDP tunnel alive.
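
For reference, current tinc 1.1 manuals expose this mechanism through a
small family of configuration variables. The snippet below is only a
hedged illustration assuming those documented names (UDPDiscovery and
UDPDiscoveryTimeout), not something introduced by this entry:

    UDPDiscovery = yes
    UDPDiscoveryTimeout = 30
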
There are two caveats to be aware of which are documented in this
commit:
- Because the system will likely assign a different port each time tinc
binds to a different address family, it is recommended to use only a
single address family; otherwise other nodes will learn just one of the
several ports that were assigned, possibly breaking communication.
- AutoConnect won't work in this scenario, because it relies on the UDP
port being the same as the TCP port, which is not the case when using
system-assigned ports.

This adds a new option, BroadcastSubnet, that allows the user to
declare broadcast subnets, i.e. subnets which are considered broadcast
addresses by the tinc routing layer. Previously only the global IPv4
and IPv6 broadcast addresses were supported by virtue of being
hardcoded.
This is useful when using tinc in router mode with Ethernet virtual
devices, as it can be used to provide broadcast support for a local
broadcast address (e.g. 10.42.255.255) instead of just the global
address (255.255.255.255).
This is implemented by removing hardcoded broadcast addresses and
introducing "broadcast subnets", which are subnets with a NULL owner.
By default, behavior is unchanged; this is accomplished by adding
the global broadcast addresses for Ethernet, IPv4 and IPv6 at startup.
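
As an illustration, a minimal tinc.conf line matching the example
above; the 10.42.255.255 address is just a placeholder for whatever
local broadcast address the network actually uses:

    BroadcastSubnet = 10.42.255.255
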
Recent improvements to the local discovery mechanism make it cheaper
and more network-friendly, and unlike the old mechanism it can no
longer make things worse. Thus there is no reason not to enable it by
default.
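
Since the mechanism is now on by default, the only configuration most
users may still want is to switch it off; a minimal tinc.conf sketch,
assuming the existing LocalDiscovery boolean:

    LocalDiscovery = no
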
The new local-address-based local discovery mechanism is technically
superior to the old broadcast-based one. In fact, the old algorithm can
actually make things worse, e.g. by sending broadcasts over the VPN
itself and then selecting the VPN address as the node's UDP address.
This cannot happen with the new mechanism.
Note that this means old nodes that don't send their local addresses in
ADD_EDGE messages can't be discovered, because there is no address to
send discovery packets to. Old nodes can still discover new nodes by
sending them broadcasts, though.

This introduces a new way of doing local discovery: when tinc has
local address information for the recipient node, it will send local
discovery packets directly to the local address of that node, instead
of using broadcast packets.
This new way of doing local discovery provides numerous advantages compared to
using broadcasts:
- No broadcast packets "polluting" the local network;
- Reliable even if the sending host has multiple network interfaces (in
contrast, broadcasts will only be sent through one unpredictable
interface)
- Works even if the two hosts are not on the same broadcast domain. One
example is a large LAN where the two hosts might be on different local
subnets. In fact, thanks to UDP hole punching this might even work if
there is a NAT sitting in the middle of the LAN between the two nodes!
- Sometimes a node is reachable through its "normal" address, and via a
local subnet as well. One might think the local subnet is the best route
to the node in this case, but more often than not it's actually worse:
one example is where the local segment is a third-party VPN running in
parallel, or, ironically, it can be the local segment formed by the tinc
VPN itself! Because this new algorithm only checks the addresses for
which an edge is already established, it is less likely to fall into
these traps.

Besides controlling when tinc-up and tinc-down get called, this commit makes
DeviceStandby control when the virtual network interface "cable" is "plugged"
on Windows. This is more user-friendly as the status of the tinc network can
be seen just by looking at the state of the network interface, and it
makes Windows behave better when the node is isolated.

This adds a new DeviceStandby option; when it is disabled (the default),
behavior is unchanged. If it is enabled, tinc-up will not be called
during tinc initialization; instead, it is deferred until the first node
becomes reachable, and tinc-down is called as soon as no nodes are
reachable anymore.
This is useful because it means the device won't be set up until we are fairly
sure there is something listening on the other side. This is more user-friendly,
as one can check on the status of the tinc network connection just by checking
the status of the network interface. Besides, it prevents the OS from thinking
it is connected to some network when it is in fact completely isolated.
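
A minimal tinc.conf sketch enabling the option described above; yes/no
is the usual tinc boolean syntax:

    DeviceStandby = yes
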
ListenAddress works the same as BindToAddress, except that from now on,
outgoing packets are explicitly bound to a socket's address only for
sockets specified with BindToAddress.
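
A hedged tinc.conf sketch contrasting the two variables, using
placeholder addresses and the "<address> [<port>]" form used by
BindToAddress: only the BindToAddress address is also used as the
source address of outgoing packets, while the ListenAddress is merely
listened on.

    BindToAddress = 192.0.2.1 655
    ListenAddress = 198.51.100.1 655
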
Tinc now strictly limits incoming connections from the same host to 1
per second. For incoming connections from multiple hosts, short bursts
of connections are allowed (by default 100), but on average only 1
connection per second is allowed as well.

When an incoming connection exceeds the limit, tinc puts it in a
tarpit: the connection is kept open but ignored completely. Only one
connection is kept in the tarpit at a time, to limit the number of
useless open connections.
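
For illustration only, a hedged tinc.conf line; the MaxConnectionBurst
variable name is an assumption based on the tinc 1.1 manual (this entry
only states that the default burst size is 100), so check the
documentation of your release before relying on it:

    MaxConnectionBurst = 100
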
When LocalDiscovery is enabled, tinc normally sends broadcast packets during
PMTU discovery to the broadcast address (255.255.255.255 or ff02::1). This
option lets tinc use a different address.
At the moment only one LocalDiscoveryAddress can be specified.
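
A minimal sketch of the option in tinc.conf, reusing the illustrative
local broadcast address from the BroadcastSubnet example above; any
address the local peers can receive packets on could be used instead:

    LocalDiscovery = yes
    LocalDiscoveryAddress = 10.42.255.255
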
When AutoConnect is set to a non-zero value, tinc will try to maintain
exactly that number of meta connections to other nodes. If there are
not enough connections, it will periodically try to set up an outgoing
connection to a random node. If there are too many connections, it
will periodically try to remove an outgoing connection.
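
A minimal tinc.conf sketch; the value 3 is only an example, and per the
description above any non-zero count behaves the same way:

    AutoConnect = 3
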
When the Proxy option is used, outgoing connections will be made via the
specified proxy. There is no support for authentication methods or for having
the proxy forward incoming connections, and there is no attempt to proxy UDP.
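
A hedged tinc.conf example, assuming the socks5 proxy type and a local
proxy at 127.0.0.1:1080 purely for illustration; the exact set of
supported proxy types depends on the tinc release, so consult its
manual:

    Proxy = socks5 127.0.0.1 1080
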
When the "Broadcast = direct" option is used, broadcast packets are not sent
and forwarded via the Minimum Spanning Tree to all nodes, but are sent directly
to all nodes that can be reached in one hop.
One use for this is to allow running ad-hoc routing protocols, such as OLSR, on
top of tinc.
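
Written out as a tinc.conf line, the option described above is:

    Broadcast = direct

Leaving the variable unset keeps the default behaviour of forwarding
broadcast packets along the Minimum Spanning Tree.
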
When the Name starts with a $, the rest will be interpreted as the name of an
environment variable containing the real Name. When Name is $HOST, but this
environment variable does not exist, gethostname() will be used to set the
Name. In both cases, illegal characters will be converted to underscores.
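
For example, the following tinc.conf line takes the node name from the
HOST environment variable, falling back to gethostname() as described
above:

    Name = $HOST
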
DeviceType = multicast allows one to specify a multicast address and port with
a Device statement. Tinc will then read packets from and send packets
to that multicast group instead of to a tun/tap device. This allows
interaction with UML, QEMU and KVM
instances that are listening on the same group.
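
A hedged tinc.conf sketch; the "<group> <port>" Device syntax is taken
from the tinc 1.1 manual, and the group address and port below are
placeholders that must match whatever the UML/QEMU/KVM instances use:

    DeviceType = multicast
    Device = 239.192.0.1 655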