Tinc now strictly limits incoming connections from the same host to 1 per
second. For incoming connections from multiple hosts, short bursts of incoming
connections are allowed (100 by default), but on average also only 1 connection
per second is allowed.
When an incoming connection exceeds the limit, tinc will keep the connection in
a tarpit; the connection will be kept open but it is ignored completely. Only
one connection is in a tarpit at a time to limit the number of useless open
connections.
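A minimal sketch of the tarpit idea, with made-up names rather than the actual
tincd bookkeeping:

    #include <unistd.h>

    static int tarpit_fd = -1;

    /* Keep the excess connection open but otherwise ignore it completely;
       at most one connection occupies the tarpit at any time. */
    static void tarpit(int fd) {
        if(tarpit_fd != -1)
            close(tarpit_fd);
        tarpit_fd = fd;
    }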
When LocalDiscovery is enabled, tinc normally sends broadcast packets during
PMTU discovery to the broadcast address (255.255.255.255 or ff02::1). This
option lets tinc use a different address.
At the moment only one LocalDiscoveryAddress can be specified.
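For example, in tinc.conf (the multicast address shown is just an illustration):

    LocalDiscovery = yes
    LocalDiscoveryAddress = 239.192.0.1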
Some options can take an optional argument. However, in this case GNU getopt
requires that the optional argument is right next to the option, without
whitespace in between. If there is whitespace, getopt treats it as a
non-option argument, which tincd used to ignore without a warning. Now tincd
will allow optional arguments with whitespace in between, and will give an
error when it encounters any other non-option arguments.
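A self-contained sketch of the workaround, using a hypothetical --debug option
with an optional level (the tincd code differs in the details):

    #include <getopt.h>
    #include <stdio.h>
    #include <stdlib.h>

    static struct option long_options[] = {
        {"debug", optional_argument, NULL, 'd'},
        {NULL, 0, NULL, 0}
    };

    int main(int argc, char *argv[]) {
        int debug_level = 0, r;

        while((r = getopt_long(argc, argv, "d::", long_options, NULL)) != -1) {
            switch(r) {
            case 'd':
                /* Accept "--debug 5" as well as "--debug=5". */
                if(!optarg && optind < argc && argv[optind][0] != '-')
                    optarg = argv[optind++];
                debug_level = optarg ? atoi(optarg) : 1;
                break;
            }
        }

        /* Any remaining non-option arguments are now an error. */
        if(optind < argc) {
            fprintf(stderr, "%s: unexpected argument '%s'\n", argv[0], argv[optind]);
            return 1;
        }

        printf("debug level %d\n", debug_level);
        return 0;
    }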
The tinc binary now requires that all options for itself are given before the
command.
Using the tinc command, an administrator of an existing VPN can generate
invitations for new nodes. The invitation is a small URL that can easily
be copy&pasted into email or live chat. Another person can have tinc
automatically setup the necessary configuration files and exchange keys
with the server, by only using the invitation URL.
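A hypothetical session, assuming a netname of "vpn" and a new node called
"newnode"; the invitation URL shown here is made up:

    server$ tinc -n vpn invite newnode
    server.example.org:655/fBsHJ4ZmXDC0MnxixPf5r3KdFHtzZXQ2BQldwRC0eEE

    newhost$ tinc -n vpn join server.example.org:655/fBsHJ4ZmXDC0MnxixPf5r3KdFHtzZXQ2BQldwRC0eEE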
The invitation protocol uses temporary ECDSA keys. The invitation URL
consists of the hostname and port of the server, a hash of the server's
temporary ECDSA key and a cookie. When the client wants to accept an
invitation, it also creates a temporary ECDSA key, connects to the server
and says it wants to accept an invitation. Both sides exchange their
temporary keys. The client verifies that the server's key matches the hash
in the invitation URL. After setting up an SPTPS connection using the
temporary keys, the client gives the cookie to the server. If the cookie
is valid, the server sends the client an invitation file containing the
client's new name and a copy of the server's host config file. If everything
is ok, the client will generate a long-term ECDSA key and send it to the
server, which will add it to a new host config file for the client.
The invitation protocol currently allows multiple host config files to be
sent from the server to the client. However, the client filters out
most configuration variables for its own host configuration file. In
particular, it only accepts Name, Mode, Broadcast, ConnectTo, Subnet and
AutoConnect. Also, at the moment no tinc-up script is generated.
When an invitation has been successfully accepted, the client needs to start
the tinc daemon manually.
Most important is the annotation of xasprintf() with the format attribute,
which allows the compiler to give warnings about the format string and
arguments.
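The declaration now looks roughly like this (exact prototype and macro wrapping
may differ):

    #ifdef __GNUC__
    #define FORMAT(type, fmt, args) __attribute__((format(type, fmt, args)))
    #else
    #define FORMAT(type, fmt, args)
    #endif

    /* Argument 2 is the printf-style format string, the checked arguments start at 3. */
    extern int xasprintf(char **buf, const char *format, ...) FORMAT(printf, 2, 3);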
ecdh_compute_shared() was changed to immediately delete the ephemeral key after
the shared secret was computed. Therefore, the pointer to the ecdh_t struct
should be zeroed so it won't be freed again when a struct sptps_t is freed.
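In essence the calling convention becomes (names as above, error handling
abbreviated):

    /* ecdh_compute_shared() consumes and frees the ephemeral key, so drop our
       pointer immediately to avoid a double free when the sptps_t is freed. */
    bool ok = ecdh_compute_shared(s->ecdh, pubkey, shared);
    s->ecdh = NULL;
    if(!ok)
        return false;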
At this point, c->config_tree may or may not be NULL, but this does not tell us whether it is an
outgoing connection or not. For incoming connections, we do not know the peer's name yet,
so we always have to claim ECDSA support. For outgoing connections, we always need to check
whether we have the peer's ECDSA public key, so that if we don't, we correctly tell the peer that
we want to upgrade.
This gets rid of the rest of the symbolic links. However, as a consequence, the
crypto header files have now moved to src/, and can no longer contain
library-specific declarations. Therefore, cipher_t, digest_t, ecdh_t, ecdsa_t
and rsa_t are now all opaque types, and only pointers to those types can be
used.
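A sketch of the opaque-type pattern (illustrative declarations, not the literal
headers):

    /* src/ecdsa.h -- no library-specific details leak out. */
    typedef struct ecdsa ecdsa_t;

    extern ecdsa_t *ecdsa_new(void);
    extern bool ecdsa_verify(ecdsa_t *ecdsa, const void *in, size_t len, const void *sig);
    extern void ecdsa_free(ecdsa_t *ecdsa);

    /* src/openssl/ecdsa.c -- only the OpenSSL backend sees the definition. */
    struct ecdsa {
        EC_KEY *key;
    };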
Normally all requests sent via the meta connections are checked so that they
cannot be larger than the input buffer. However, when packets are forwarded via
meta connections, they are copied into a packet buffer without checking whether
it fits into it. Since the packet buffer is allocated on the stack, this in
effect allows an authenticated remote node to cause a stack overflow.
This issue was found by Martin Schobert.
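The fix boils down to a length check before the copy; schematically (names and
buffer size are placeholders):

    char buf[MAXBUFSIZE];

    /* Refuse to forward anything that does not fit into the stack buffer. */
    if(len > sizeof(buf))
        return false;

    memcpy(buf, data, len);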
We were checking only for readability, which is not a problem for normal
connections, since the server side of a connection will always send an ID
request. But when using a proxy, the proxy server doesn't send anything before
the client, so tinc would not see that its connection to the proxy had already
been established.
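Schematically, the connecting socket now also has to be watched for
writability, since that is how connect() completion is signalled (a generic
select() sketch, not the literal tinc code):

    fd_set readset, writeset;
    struct timeval tv = {1, 0};

    FD_ZERO(&readset);
    FD_ZERO(&writeset);
    FD_SET(c->socket, &readset);
    if(c->status.connecting)
        FD_SET(c->socket, &writeset);   /* a completed connect() makes the socket writable */

    select(c->socket + 1, &readset, &writeset, NULL, &tv);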
Tinc never restarts PMTU discovery unless a node becomes unreachable. However,
it can be that the PMTU was very low during the initial discovery, but has
increased later. To detect this, tinc now tries to send an extra packet every
PingInterval, with a size slightly higher than the currently known PMTU. If
this packet is successfully received back, we partially restart PMTU discovery
to find out the new maximum.
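Roughly, with made-up field and function names:

    /* Once per PingInterval, probe just above the PMTU we currently believe in;
       if the reply comes back, the path can apparently carry more. */
    if(now >= n->last_mtu_increase_probe + pinginterval) {
        send_mtu_probe_packet(n, n->mtu + 8);
        n->last_mtu_increase_probe = now;
    }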
Commit dd07c9fc1f broke the reception of datagram
SPTPS packets, by undoing the conversion of the sequence number to host byte
order before comparison. This caused error messages like "Packet is 16777215
seqs in the future, dropped (1)".
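The fix is simply to convert before comparing; in essence:

    uint32_t seqno;
    memcpy(&seqno, data, sizeof(seqno));
    seqno = ntohl(seqno);               /* network to host byte order */

    if(seqno != s->inseqno) {
        /* only now does "x seqs in the future" mean what it says */
    }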
Tinc uses Kruskal's algorithm to calculate a MST. However, this was broken in
commit 6e80da3370. Revert back to the working
algorithm from tinc 1.0.
Thanks to Cheng LI for spotting the problem.
Without adding any extra traffic, we can measure round trip times, estimate the
bandwidth and packet loss between nodes. The RTT and bandwidth can be measured
by timing the MTU probe packets. The RTT is the difference between the time a
burst of MTU probes was sent and when the first reply is received. The
bandwidth can be estimated by dividing the size of the probe packets by the
time between successive received probe replies of the same burst. The packet
loss can be estimated for incoming traffic by comparing how many packets have
actually been received to the increase in the sequence numbers.
The estimates are not perfect. Bandwidth in particular is difficult to measure;
the only accurate way is to continuously send as much data as possible, but
that is obviously not desirable. The packet loss rate is also almost always
a few percent when sending a lot of data over the VPN via TCP, since TCP
*needs* packet loss to work properly.
Keep track of the number of correct, non-replayed UDP packets that have been
received, regardless of their content. This can be compared to the sequence
number to determine the real packet loss.
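A sketch of the arithmetic behind these estimates (variable names are made up):

    /* RTT: time of the first reply minus the time the probe burst was sent. */
    rtt = first_reply_time - burst_sent_time;

    /* Bandwidth: probe size divided by the spacing of successive replies in a burst. */
    bandwidth = probe_len / (reply_time[i] - reply_time[i - 1]);

    /* Packet loss: what was actually counted versus what the sequence numbers promise. */
    packetloss = 1.0 - received_packets / (double)(last_seqno - first_seqno);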
There are several reasons for this:
- MacOS/X doesn't support polling the tap device using kqueue, requiring a
workaround to fall back to select().
- On Windows only sockets are properly handled, therefore tinc uses a second
thread that does a blocking ReadFile() on the TAP-Win32/64 device. However,
this does not mix well with libevent.
- Libevent, even just the core, is quite large, and although it is easy to get
and install on many platforms, it can be a burden.
- Libev is more lightweight and seems technically superior, but it doesn't
abstract away all the platform differences (for example, async events are not
supported on Windows).
We don't need to search the whole edge tree, we can use the node's own edge
tree since each edge has a pointer to its reverse. Also, we do need to make
sure we try the reflexive address often.
Before it would always use the first socket, and always send an IPv4 broadcast packet. That
works fine in a lot of situations, but it is better to try all sockets, and to send IPv6 packets
on IPv6 sockets. This is especially important for users that are on IPv6-only networks or that
have multiple physical network interfaces, although in the latter case it probably requires
them to use the ListenAddress variable to create a separate socket for each interface.
Before, when tinc saw a packet larger than the PMTU with a VLAN tag, it would
not know what to do with it, and would just forward it via TCP. Now, tinc
handles 802.1q packets correctly, as long as there is only one tag.
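Schematically, the header parsing now looks behind one 802.1Q tag (TPID 0x8100)
to find the real EtherType and the start of the IP header:

    uint16_t type = packet->data[12] << 8 | packet->data[13];
    int offset = 14;                    /* untagged Ethernet header */

    if(type == 0x8100) {                /* a single 802.1Q VLAN tag */
        type = packet->data[16] << 8 | packet->data[17];
        offset += 4;
    }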
When set to a non-zero value, tinc will try to maintain exactly that number of
meta connections to other nodes. If there are not enough connections, it will
periodically try to set up an outgoing connection to a random node. If there
are too many connections, it will periodically try to remove an outgoing
connection.
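In tinc.conf this looks something like (AutoConnect is the variable also
mentioned in the invitation description above):

    AutoConnect = 3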
Only the very first packet of an SPTPS session should be sent with REQ_KEY;
this signals the peer to abort any previous session and start a new one as
well.
Most of the code doesn't care whether requests are terminated with a newline or
not, except that when requests are forwarded, it is assumed they do not have
one and a newline is added. When a node using SPTPS receives a request from
another SPTPS-using node, and forwards it to a non-SPTPS-using node, this will
result in two consecutive newlines, which the latter node will see as an empty,
and thus invalid, request.
The tree functions were never used on the connection_tree, a list is more appropriate.
Also be more paranoid about connections disappearing while traversing the list.
The outgoing_t and connection_t structs depended too much on each other,
causing lots of problems, especially with the reuse of a connection_t. Now, whenever
a connection is closed it is immediately removed from the list of connections
and destroyed.
Similar to old style key exchange requests, keep track of whether a key
exchange is already in progress and how long it took. If no key is known yet
or if key exchange takes too long, (re)start a new key exchange.
Most fields should be zero when reusing a connection. In particular, when an
outgoing connection to a node which is reachable on more than one address is
made, the second connection to that node will have status.encryptout set but
outctx will be NULL, causing a NULL pointer dereference when
EVP_EncryptUpdate() is called in send_meta() when it shouldn't.
When starting tincd, tincctl now strips non-options from the command line, and
sets argv[0] to the name of the tincd command instead of copying its own
command name.
When stopping a running tincd, tincctl now waits for it to terminate.
Internally, tinc maintains a directed graph of the meta connections between
nodes. However, this causes graphviz to draw two lines between nodes, which is
not always desirable. The "dump graph" command now defaults to dumping an
undirected graph, the "dump digraph" command will dump a directed graph.
This new setting allows choosing a custom interpreter used to execute the various tinc scripts.
If none is specified, the script itself is called as an executable (as before).
This is particularly useful when the tinc configuration and scripts are stored on a mount point with the no-exec attribute.
Commented out non-existent functions in the Android NDK.
Prefix script execution with the shell binary to allow execution on no-exec mount points.
Everything is currently hard-coded, while it should use preprocessor variables...
When two nodes which support SPTPS want to send packets to each other, they now
always use SPTPS. The node initiating the SPTPS session sends the first SPTPS
packet via an extended REQ_KEY message. All other handshake messages are sent
using ANS_KEY messages. This ensures that intermediate nodes using an older
version of tinc can still help with NAT traversal. After the authentication
phase is over, SPTPS packets are sent via UDP, or are encapsulated in extended
REQ_KEY messages instead of PACKET messages.
Otherwise, the call to daemon() could close filedescriptors in use by libevent
itself; for example if it uses kqueue or epoll instead of a select() or poll()
backend.
In retry() the function do_outgoing_connection() is called, which can delete
items from the connection_tree, so when walking the tree we must first save the
pointer to the next item.
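The usual safe-traversal idiom, roughly (details of the real loop elided):

    for(splay_node_t *node = connection_tree->head, *next; node; node = next) {
        next = node->next;              /* saved before the callee can delete node */
        connection_t *c = node->data;

        if(c->outgoing)
            do_outgoing_connection(c);  /* may remove node from connection_tree */
    }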
The protocol used by the remote side can change between two reconnects, for
example if the remote side has enabled or disabled its ExperimentalProtocols
setting.
Proxy type "exec" can be used to have an external script or binary set
up an outgoing connection. Standard input and output will be used to
exchange data with the external command. The variables REMOTEADDRESS and
REMOTEPORT are set to the intended destination address and port.
When the Proxy option is used, outgoing connections will be made via the
specified proxy. There is no support for authentication methods or for having
the proxy forward incoming connections, and there is no attempt to proxy UDP.
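Some example tinc.conf lines (server names and paths are placeholders, and the
exact arguments depend on the proxy type):

    Proxy = socks4 proxy.example.org 1080
    Proxy = exec /usr/local/sbin/my-proxy-helper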
When the "Broadcast = direct" option is used, broadcast packets are not sent
and forwarded via the Minimum Spanning Tree to all nodes, but are sent directly
to all nodes that can be reached in one hop.
One use for this is to allow running ad-hoc routing protocols, such as OLSR, on
top of tinc.
Most likely the error is that there just is no valid key inside the host file
being used, in which case errno just contains a stale value from the last
failed call.
When the Name starts with a $, the rest will be interpreted as the name of an
environment variable containing the real Name. When Name is $HOST, but this
environment variable does not exist, gethostname() will be used to set the
Name. In both cases, illegal characters will be converted to underscores.
If the LISTEN_FDS environment variable is set and tinc is run in the
foreground, tinc will use filedescriptors 3 to 3 + LISTEN_FDS for its listening
TCP sockets. For now, tinc will create matching listening UDP sockets itself.
There is no dependency on systemd or on libsystemd-daemon.
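Schematically, the convention works like this on the receiving side (simplified,
ignoring the LISTEN_PID check):

    #include <stdlib.h>

    int listen_fds = 0;
    const char *env = getenv("LISTEN_FDS");

    if(env)
        listen_fds = atoi(env);

    for(int i = 0; i < listen_fds; i++) {
        int fd = 3 + i;     /* passed-in sockets start at file descriptor 3 */
        /* use fd as an already-bound, already-listening TCP socket */
    }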
DeviceType = multicast allows one to specify a multicast address and port with
a Device statement. Tinc will then read/send packets to that multicast group
instead of to a tun/tap device. This allows interaction with UML, QEMU and KVM
instances that are listening on the same group.
When making outgoing connections, tinc goes through the list of Addresses and
tries all of them until one succeeds. However, it used to consider
establishing a TCP connection a success, even when the authentication failed.
This would be a problem if the first Address pointed to a hostname and port
combination that belongs to the wrong tinc node, or perhaps even to a non-tinc
service, causing tinc to endlessly try this Address instead of moving to the
next one.
Problem found by Delf Eldkraft.
* Everything is identical except the headers of the records.
* Instead of sending explicit message length and having an implicit sequence
number, datagram mode has an implicit message length and an explicit sequence
number.
* The sequence number is used to set the most significant bytes of the counter.
Seeking in files and rewriting parts of them does not seem to work properly on
Windows. Instead, when old RSA keys are found when generating new ones, the
file containing the old keys is copied to a temporary file where the changes
are made, and that file is renamed back to the original filename. On Windows,
we cannot atomically replace files with a rename(), so we need to move the
original file out of the way first. If anything fails, the new code will warn
that the user has to solve the problem by hand.
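The sequence boils down to the following (helper names are illustrative, error
handling omitted):

    /* Write the updated keys to a temporary copy of the original file... */
    copy_file("rsa_key.priv", "rsa_key.priv.tmp");
    append_new_key("rsa_key.priv.tmp");

    /* ...then swap it into place.  On Windows rename() refuses to overwrite,
       so move the original out of the way first. */
    #ifdef HAVE_MINGW
    rename("rsa_key.priv", "rsa_key.priv.old");
    #endif

    if(rename("rsa_key.priv.tmp", "rsa_key.priv"))
        fprintf(stderr, "Please rename rsa_key.priv.tmp to rsa_key.priv manually.\n");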
This allows tincctl to receive log messages from a running tincd,
independent of what is logged to syslog or to file. Tincctl can receive
debug messages with an arbitrary level.
This allows administrators who frequently want to work with one tinc
network to omit the -n option. Since the NETNAME variable is set by
tincd when executing scripts, this makes it slightly easier to use
tincctl from within scripts.
The Broadcast option can be used to cause tinc to drop all broadcast and
multicast packets. This option might be expanded in the future to selectively
allow only some broadcast packet types.
The code introduced in commit 41a05f59ba is not
needed anymore, since tinc has been able to handle UDP packets from a different
source address than those of the TCP packets since 1.0.10. When using multiple
BindToAddress statements, this code does not make sense anymore, we do want the
kernel to choose the source address on its own.
Tinc will now, by default, decrement the TTL field of incoming IPv4 and IPv6
packets, before forwarding them to the virtual network device or to another
node. Packets with a TTL value of zero will be dropped, and an ICMP Time
Exceeded message will be sent back.
This behaviour can be disabled using the DecrementTTL option.
Scripts called by tinc would inherit its open filedescriptors. This could
be a problem if other long-running daemons are started from those scripts,
if those daemons would not close all filedescriptors before going into the
background.
Problem found and solution suggested by Nick Hibma.
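One way to achieve this (not necessarily the exact approach taken) is to mark
long-lived descriptors close-on-exec as soon as they are created:

    #include <fcntl.h>

    static void set_cloexec(int fd) {
        int flags = fcntl(fd, F_GETFD);

        if(flags >= 0)
            fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }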
Apart from the platform specific tun/tap driver, link with the dummy and
raw_socket devices, and optionally with support for UML and VDE devices.
At runtime, the DeviceType option can be used to select which driver to
use.