The new local-address-based local discovery mechanism is technically
superior to the old broadcast-based one. In fact, the old algorithm
could make things worse, for example by sending broadcasts over the
VPN itself and then selecting the VPN address as the node's UDP
address. This cannot happen with the new mechanism.
Note that this means old nodes that don't send their local addresses in
ADD_EDGE messages can't be discovered, because there is no address to
send discovery packets to. Old nodes can still discover new nodes by
sending them broadcasts, though.
On Windows, tinc uses a separate thread to read from the TAP device.
The rationale was that the notification mechanism for packets arriving
on the virtual network device is based on Win32 events, and the event
loop did not support waiting on such events.
Thanks to recent improvements, this event loop limitation has been
lifted. Therefore we can get rid of the separate thread and simply add
the Win32 "incoming packet" event to the event loop, just like a socket.
The result is cleaner code that's easier to reason about.
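A minimal sketch of the idea, assuming a WaitForMultipleObjects()-based loop;
the handle names and callbacks here are illustrative, not tinc's actual API:

    #include <windows.h>

    extern HANDLE device_read_event;   /* signalled when an overlapped ReadFile() on the TAP device completes */
    extern HANDLE socket_event;        /* from WSACreateEvent()/WSAEventSelect() */
    extern void handle_device_packet(void);
    extern void handle_socket_activity(void);

    void event_loop(void) {
        HANDLE events[2] = {device_read_event, socket_event};

        for(;;) {
            DWORD result = WaitForMultipleObjects(2, events, FALSE, INFINITE);

            if(result == WAIT_OBJECT_0)            /* packet arrived on the TAP device */
                handle_device_packet();
            else if(result == WAIT_OBJECT_0 + 1)   /* activity on a socket */
                handle_socket_activity();
            else
                break;                             /* WAIT_FAILED or similar */
        }
    }
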
This adds a new DeviceStandby option; when it is disabled (the default),
behavior is unchanged. If it is enabled, tinc-up will not be called during
tinc initialization, but will instead be deferred until the first node is
reachable, and the device will be closed as soon as no nodes are reachable.
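In tinc.conf this is just a boolean switch, for example:

    DeviceStandby = yes
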
This is useful because it means the device won't be set up until we are fairly
sure there is something listening on the other side. This is more user-friendly,
as one can check on the status of the tinc network connection just by checking
the status of the network interface. Besides, it prevents the OS from thinking
it is connected to some network when it is in fact completely isolated.
ListenAddress works the same as BindToAddress, except that from now on,
explicitly binding outgoing packets to the address of a socket is only done for
sockets specified with BindToAddress.
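As an illustration, assuming ListenAddress takes an address and an optional
port just like BindToAddress, a hypothetical configuration could be:

    BindToAddress = 192.0.2.1 655
    ListenAddress = 198.51.100.1 655

With this, tinc accepts incoming connections on both addresses, but only the
socket created for 192.0.2.1 has its outgoing packets explicitly bound to
that address. The addresses above are placeholders.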
Tinc now strictly limits incoming connections from the same host to 1 per
second. For connections from multiple hosts, short bursts are tolerated (by
default 100 connections), but on average also only 1 connection per second is
allowed.
When an incoming connection exceeds the limit, tinc puts it in a tarpit: the
connection is kept open but otherwise ignored completely. Only one connection
is kept in the tarpit at a time, to limit the number of useless open
connections.
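A rough sketch of the global token-bucket idea (the stricter per-host limit is
left out); names, defaults and helpers are illustrative, not tinc's actual code:

    #include <stdbool.h>
    #include <time.h>

    static double tokens = 100;             /* start with a full bucket */
    static const double max_burst = 100;    /* default burst size */
    static time_t last_refill;
    static int tarpitted_fd = -1;           /* at most one connection in the tarpit */

    extern void close_connection(int fd);   /* hypothetical helper */

    /* Returns true if the connection may be handled normally,
       false if it was put in the tarpit. */
    bool accept_or_tarpit(int fd) {
        time_t now = time(NULL);

        /* Refill at 1 token per second, capped at the burst size. */
        tokens += (double)(now - last_refill);
        if(tokens > max_burst)
            tokens = max_burst;
        last_refill = now;

        if(tokens >= 1) {
            tokens -= 1;
            return true;                    /* within the limit */
        }

        /* Over the limit: evict the previous tarpit occupant, keep this
           connection open but otherwise ignore it completely. */
        if(tarpitted_fd != -1)
            close_connection(tarpitted_fd);
        tarpitted_fd = fd;
        return false;
    }
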
When LocalDiscovery is enabled, tinc normally sends broadcast packets during
PMTU discovery to the broadcast address (255.255.255.255 or ff02::1). The new
LocalDiscoveryAddress option lets tinc use a different address.
At the moment only one LocalDiscoveryAddress can be specified.
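For example, to use a subnet's directed broadcast address instead (the address
is just a placeholder):

    LocalDiscoveryAddress = 192.0.2.255
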
Normally all requests sent via the meta connections are checked so that they
cannot be larger than the input buffer. However, when packets are forwarded via
meta connections, they are copied into a packet buffer without checking whether
they fit into it. Since the packet buffer is allocated on the stack, this in
effect allows an authenticated remote node to cause a stack overflow.
This issue was found by Martin Schobert.
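The fix boils down to checking the length before copying. A simplified sketch,
with illustrative struct and field names rather than tinc's actual definitions:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define MAXSIZE 1518                   /* illustrative maximum packet size */

    typedef struct {
        uint16_t len;
        uint8_t data[MAXSIZE];
    } packet_t;

    /* Copy a forwarded request into a (possibly stack-allocated) packet
       buffer, refusing anything that does not fit. */
    bool copy_forwarded_packet(packet_t *packet, const uint8_t *request, size_t request_len) {
        if(request_len > sizeof(packet->data))
            return false;                  /* would overflow: drop the request */

        memcpy(packet->data, request, request_len);
        packet->len = request_len;
        return true;
    }
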
There are several reasons for moving away from libevent:
- MacOS/X doesn't support polling the tap device using kqueue, requiring a
workaround to fall back to select().
- On Windows, only sockets are handled properly; therefore tinc uses a second
thread that does a blocking ReadFile() on the TAP-Win32/64 device. However,
this does not mix well with libevent.
- Libevent, even just the core, is quite large, and although it is easy to get
and install on many platforms, it can be a burden.
- Libev is more lightweight and seems technically superior, but it doesn't
abstract away all the platform differences (for example, async events are not
supported on Windows).
When set to a non-zero value, tinc will try to maintain exactly that number of
meta connections to other nodes. If there are not enough connections, it will
periodically try to set up an outgoing connection to a random node. If there
are too many connections, it will periodically try to remove an outgoing
connection.
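This appears to describe the AutoConnect variable; assuming that name, asking
tinc to maintain three meta connections would look like this:

    AutoConnect = 3
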
The outgoing_t and connection_t structs depended too much on each other,
causing lots of problems, especially with the reuse of a connection_t. Now,
whenever a connection is closed, it is immediately removed from the list of
connections and destroyed.
When two nodes which support SPTPS want to send packets to each other, they now
always use SPTPS. The node initiating the SPTPS session sends the first SPTPS
packet via an extended REQ_KEY message. All other handshake messages are sent
using ANS_KEY messages. This ensures that intermediate nodes using an older
version of tinc can still help with NAT traversal. After the authentication
phase is over, SPTPS packets are sent via UDP, or are encapsulated in extended
REQ_KEY messages instead of PACKET messages.
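A rough sketch of the resulting message flow (simplified, record contents
omitted):

    Initiator                                   Responder
    REQ_KEY (first SPTPS handshake record)  -->
                                            <-- ANS_KEY (SPTPS handshake record)
    ANS_KEY (SPTPS handshake record)        -->
          ... authentication phase completes ...
    SPTPS data, sent via UDP or encapsulated in
    extended REQ_KEY messages (instead of PACKET messages)
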
When the Proxy option is used, outgoing connections will be made via the
specified proxy. There is no support for authentication methods or for having
the proxy forward incoming connections, and there is no attempt to proxy UDP.
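For example, to route outgoing meta connections through a local SOCKS 5 proxy;
the exact form of the variable assumed here (type, address, port) should be
checked against the tinc.conf(5) manual page:

    Proxy = socks5 127.0.0.1 1080
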
When the Name starts with a $, the rest will be interpreted as the name of an
environment variable containing the real Name. When Name is $HOST, but this
environment variable does not exist, gethostname() will be used to set the
Name. In both cases, illegal characters will be converted to underscores.
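An illustrative sketch of that lookup (not tinc's actual code; the set of legal
characters is assumed to be alphanumerics and underscore):

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Expand a Name of the form $VAR from the environment; $HOST falls back
       to gethostname(); illegal characters become underscores. */
    char *expand_name(const char *name) {
        char hostname[256] = "";
        const char *result = name;

        if(name[0] == '$') {
            result = getenv(name + 1);

            if(!result && !strcmp(name, "$HOST")) {
                gethostname(hostname, sizeof(hostname));
                hostname[sizeof(hostname) - 1] = '\0';
                result = hostname;
            }
        }

        if(!result)
            return NULL;               /* environment variable not set */

        char *expanded = strdup(result);

        for(char *p = expanded; *p; p++)
            if(!isalnum((unsigned char)*p) && *p != '_')
                *p = '_';

        return expanded;
    }
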
In this situation, the two nodes will start fighting over the edges they announced.
When we have to contradict both ADD_EDGE and DEL_EDGE messages, we log a warning
and, with a 25% chance per PingTimeout, we quit.
Since event.h is not part of tinc, we include it in have.h, where all other
system header files are included. We also ensure -levent comes before -lgdi32
when compiling with MinGW; apparently it doesn't work when the order is
reversed.
When the HUP signal is sent while some outgoing connections have not been made
yet, or are being retried, a NULL pointer could be dereferenced, resulting in
tinc crashing. We fix this by handling outgoing_ts more carefully, and by
deleting all connections that have not been fully activated yet when the HUP
signal is received.
The TAP-Win32 device is not a socket, and select() under Windows only works
with sockets. Tinc used a separate thread to read from the TAP-Win32 device
and passed the packets via a local socket to the main thread, which could then
select() on it. We now use a global mutex, which is only unlocked when the main
thread is waiting in select(), to allow the TAP reader thread to process
packets directly.
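A simplified sketch of that synchronization, with illustrative names rather
than tinc's actual code:

    #include <windows.h>

    /* Created at startup with CreateMutex(NULL, TRUE, NULL), so the main
       thread owns it initially. */
    extern HANDLE global_mutex;
    extern int wait_for_sockets(void);                 /* wrapper around select() */
    extern int read_tap_packet(void *buffer, int size);/* blocking ReadFile() on the device */
    extern void route_packet(void *packet, int len);

    /* Main thread: only let go of the mutex while blocked in select(). */
    void main_loop(void) {
        for(;;) {
            ReleaseMutex(global_mutex);
            wait_for_sockets();
            WaitForSingleObject(global_mutex, INFINITE);
            /* ... handle socket activity while holding the mutex ... */
        }
    }

    /* TAP reader thread: blocking reads, then process under the mutex. */
    DWORD WINAPI tap_reader(LPVOID arg) {
        char buffer[2048];
        (void)arg;

        for(;;) {
            int len = read_tap_packet(buffer, sizeof(buffer));
            if(len <= 0)
                break;

            WaitForSingleObject(global_mutex, INFINITE);
            route_packet(buffer, len);  /* main thread is blocked in select() or waiting */
            ReleaseMutex(global_mutex);
        }

        return 0;
    }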