This gets rid of xasprintf() in a number of places, and removes the need
to free() the temporary strings. A few potential memory leaks have been
fixed.
This dumps the name of the invitation file, as well as the name of the
node that is being invited. This can make it easier to find the
invitation file belonging to a given node.
It is possible that opening /dev/net/tun works but that interface
creation itself fails, for example if a non-root user tries to create a
new interface, or if the desired interface is already opened by another
process. In this case, the ioctl() fails, but this condition was
silently ignored.
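As a hedged illustration (not tinc's actual device code), the Linux failure mode looks roughly like this: open() succeeds, but the TUNSETIFF ioctl that actually attaches the interface can still fail and must be checked:
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>
int open_tun(const char *name) {
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) {
        perror("open /dev/net/tun");     /* this case was already reported */
        return -1;
    }
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    /* Interface creation can fail even though open() succeeded, e.g. for
       non-root users or when the interface is held by another process. */
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        perror("ioctl TUNSETIFF");       /* the previously ignored error */
        close(fd);
        return -1;
    }
    return fd;
}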
The compile time local state directory is usually /var or
/usr/local/var. If this is not accessible for some reason, for example
because someone ./configured tinc without --localstatedir and
/usr/local/var does not exist, or if tinc is started by a non-root user,
then tinc will fall back to the directory where tinc.conf is stored.
A warning is logged when this happens.
This function is not used for normal traffic, only when a packet from an
unknown source is received and we need to check against candidates. No
failures should be logged in this case; if the packet is really not
valid this will be logged by handle_incoming_vpn_data().
try_tx_sptps() gives up on UDP communication if the recipient doesn't
support relaying. This is too restrictive - we only need the other node
to support relaying if we actually want to relay through them. If the
packet is sent directly, it's fine to send it to an old pre-node-IDs
tinc-1.1 node.
Currently, tinc tries to parse node IDs for all SPTPS packets, including
ones sent from older, pre-node-IDs tinc-1.1 nodes, and therefore doesn't
recognize packets from these nodes. This commit fixes that.
It also makes the code slightly clearer by reducing the amount of
fiddling with packet offsets and lengths.
A condition in try_harder() always evaluates to false when talking
to a SPTPS node because n->status.validkey_in is always false in that
case. Fix the condition so that the SPTPS status is correctly checked.
This prevented recent tinc-1.1 nodes from talking to older, pre-node-ID
tinc-1.1 nodes.
The regression was introduced in
6056f1c13b.
Since commit 13f9bc1ff1, tinc passes the
-I. option to the preprocessor so that version_git.h can be found during
out-of-tree ("VPATH") builds.
The problem is, this option also affects the directory search for files
included *from* system headers. For example, on MinGW, unistd.h contains
the following line:
#include <process.h>
Due to -I. putting the tinc directory at the head of the search order,
this results in tinc's process.h being included instead of the file
from MinGW. Hilarity ensues.
This commit fixes the issue by using -iquote, which doesn't affect
system headers.
KEY_CHANGED messages are only useful to invalidate keys for non-SPTPS nodes;
SPTPS nodes use a different internal mechanism (forced KEX) for that purpose.
Therefore, if we know we can't talk to legacy nodes, there's no point in
sending them these messages.
There are a number of ways a SPTPS tunnel can get into a corrupt state.
For example, during key regeneration, the KEX and SIG messages from
other nodes might arrive out of order, which confuses the hell out of
the SPTPS code. Another possible scenario is not noticing another node
crashed and restarted because there was no point in time where the node
was seen completely disconnected from *all* nodes; this could result in
using the wrong (old) key. There are probably other scenarios which have
not even been considered yet. Distributed systems are hard.
When SPTPS got confused by a packet, it used to crash the entire
process; fortunately that was fixed by commit
2e7f68ad2b. However, the error handling
(or lack thereof) leaves a lot to be desired. Currently, when SPTPS
encounters an error when receiving a packet, it just shrugs it off and
continues as if nothing happened. The problem is, sometimes getting
receive errors means the tunnel is completely stuck and will not recover
on its own. In that case, the node will become unreachable - possibly
indefinitely.
The goal of this commit is to improve SPTPS error handling by taking
proactive action when an incoming packet triggers a failure, which is
often an indicator that the tunnel is stuck in some way. When that
happens, we simply restart SPTPS entirely, which should make the tunnel
recover quickly.
To prevent "storms" where two buggy nodes flood each other with invalid
packets and therefore spend all their time negotiating new tunnels, we
limit the frequency at which tunnel restarts happen to ten seconds.
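A minimal sketch of that rate-limiting idea follows; the field and function names are hypothetical, not tinc's actual identifiers:
#include <time.h>
#define SPTPS_RESTART_MIN_INTERVAL 10    /* seconds */
typedef struct node {
    time_t last_sptps_restart;           /* hypothetical field */
} node_t;
/* Called when an incoming packet made SPTPS report a receive error. */
static void handle_sptps_error(node_t *n) {
    time_t now = time(NULL);
    /* Don't restart more than once every ten seconds, so two buggy nodes
       can't keep each other busy renegotiating tunnels forever. */
    if (now - n->last_sptps_restart < SPTPS_RESTART_MIN_INTERVAL)
        return;
    n->last_sptps_restart = now;
    /* restart_sptps(n);  -- tear the tunnel down and renegotiate */
}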
It is likely this commit will solve the "Invalid KEX record length
during key regeneration" issue that has been seen in the wild. It is
difficult to be sure though because we do not have a full understanding
of all the possible conditions that can trigger this problem.
Commit 10c1f60c64 introduced a mechanism
by which a packet received by REQ_KEY could continue its journey over
UDP. This was based on the assumption that REQ_KEY messages would never
be used for handshake packets (which should never be sent over UDP,
because SPTPS currently doesn't handle lost handshake packets very
well).
Unfortunately, there is one case where handshake packets are sent using
REQ_KEY: when regenerating the SPTPS key for a pre-established channel.
With the current code, such packets risk getting relayed over UDP.
When processing a REQ_KEY message, it is impossible for the receiving
end to distinguish between a data SPTPS packet and a handshake packet,
because this information is stored in the type field which is encrypted
with the end-to-end key.
This commit fixes the issue by making tinc use ANS_KEY for all SPTPS
handshake messages. This works because ANS_KEY messages are never
forwarded using the SPTPS relay mechanisms, therefore they are
guaranteed to stick to TCP.
If the ADD_EDGE is for one of the edges we own, and if it is not the
same as we actually have, send a correcting ADD_EDGE back. Otherwise, if
the ADD_EDGE contains new information, update our idea of the local
address for that edge.
If the ADD_EDGE does not contain local address information, we neither
send a correction nor log a warning.
Currently, SPTPS packets are transported over TCP metaconnections using
extended REQ_KEY requests, in order for the packets to pass through
tinc-1.0 nodes unaltered. Unfortunately, this method presents two
significant downsides:
- An already encrypted SPTPS packet is decrypted and then encrypted
again every time it passes through a node, since it is transported
over the SPTPS channels of the metaconnections. This
double-encryption is unnecessary and wastes CPU cycles.
- More importantly, the only way to transport binary data over
standard metaconnection messages such as REQ_KEY is to encode it
in base64, which has a 33% encoding overhead. This wastes 25% of the
network bandwidth.
This commit introduces a new protocol message, SPTPS_PACKET, which can
be used to transport SPTPS packets over a TCP metaconnection in an
efficient way. The new message is appropriately protected through a
minor protocol version increment, and extended REQ_KEY messages are
still used with nodes that do not support the new message, as well as
for the initial handshake packets, for which efficiency is not a concern.
The way SPTPS_PACKET works is very similar to how the traditional PACKET
message works: after the SPTPS_PACKET message, the raw binary packet is
sent directly over the metaconnection. There is one important
difference, however: in the case of SPTPS_PACKET, the packet is sent
directly over the TCP stream completely bypassing the SPTPS channel of
the metaconnection itself for maximum efficiency. This is secure because
the SPTPS packet that is being sent is already encrypted with an
end-to-end key.
sptps_receive_data() always consumes the entire buffer passed to it,
which is somewhat inflexible. This commit improves the interface so that
sptps_receive_data() consumes at most one record. The goal is to allow
non-SPTPS stuff to be interleaved with SPTPS records in a single TCP
stream.
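A rough sketch of how a caller could drive such an interface; the size_t-returning prototype below is an assumption for illustration, not necessarily the real signature:
#include <stddef.h>
#include <stdbool.h>
typedef struct sptps sptps_t;
/* Assumed prototype: consumes at most one record from `data` and returns
   the number of bytes used, or 0 on error. */
size_t sptps_receive_data(sptps_t *s, const void *data, size_t len);
/* Caller sketch: feed a TCP buffer that may interleave SPTPS records
   with other metaconnection data. */
static bool feed_buffer(sptps_t *s, const char *buf, size_t len) {
    while (len) {
        size_t used = sptps_receive_data(s, buf, len);
        if (!used)
            return false;   /* error, or data the caller must handle itself */
        buf += used;
        len -= used;
    }
    return true;
}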
REQ_SPTPS implies the message has an ANS_ counterpart (like REQ_KEY,
ANS_KEY), but it doesn't. Therefore dropping the REQ_ seems more
appropriate, and we add a _PACKET suffix to reduce the likelihood of
naming conflicts.
Currently, when tinc receives a SPTPS packet over TCP via the REQ_KEY
encapsulation mechanism, it forwards it like any other TCP request. This
is inefficient, because even though we received the packet over TCP,
we might have an UDP link with the next hop, which means the packet
could be sent over UDP.
This commit removes that limitation by making sure SPTPS data packets
received through REQ_KEY requests are not forwarded as-is but passed
to send_sptps_data() instead, thereby using the same code path as if
the packet was received over UDP.
net_packet doesn't actually use send_sptps_data(); it only uses
send_sptps_data_priv(). In addition, the only user of send_sptps_data()
is protocol_key. Therefore it makes sense to expose
send_sptps_data_priv() directly, and move send_sptps_data() (which is
basically just boilerplate) as a local function in protocol_key.
Currently, when relaying SPTPS UDP packets, the code uses the direct
sender as the originator, instead of preserving the original source ID.
This wouldn't cause any issues in most cases because the originator and
the sender are the same in simple one-hop relay chains, but this will
break as soon as there is more than one relay.
This fixes some issues with the build system when building out of tree.
With this commit, it is now possible to do the following:
$ cd /tmp/build
$ /path/to/tinc/configure
$ make
Instead of using the hardcoded version number in configure.ac, this
makes tinc use the live version reported by "git describe",
queried on-the-fly during the build process and regenerated for every
build.
This makes tinc version output more useful, as tinc will now display the
number of commits since the last tag as well as the commit the binary is
built from, following the format described in git-describe(1).
Here's an example of tincd --version output:
tinc version release-1.1pre10-48-gc149315 (built Jun 29 2014 15:21:10, protocol 17.3)
When building directly from a release tag, this will look like the following:
tinc version release-1.1pre10 (built Jun 29 2014 15:21:10, protocol 17.3)
(Note that the format is slightly different - because of the way the
tags are named, it says "release-1.1pre10" instead of just "1.1pre10")
If git describe fails (for example when building from a release
tarball), the build automatically falls back to the autoconf-provided
VERSION macro (i.e. the old behavior).
read_rsa_public_key() was bailing out early if the given node already has an Ed25519 key, and
returned true even though c->rsa was NULL. The early bailout code isn't necessary anymore, so just
remove it.
This deals with the case where one node knows the Ed25519 key of another node, but not the other
way around. This was blocked by an overly paranoid check in id_h(). The upgrade_h() function already
handled this case, and the node that already knows the other's Ed25519 key checks that it has not
been changed, otherwise the connection will be aborted.
Unfortunately, glibc assumes that /etc/resolv.conf is a static file that
never changes. Even on servers, /etc/resolv.conf might be a dynamically
generated file, and we never know when it changes. So just call
res_init() every time, so glibc uses up-to-date nameserver information.
Conflicts:
src/have.h
src/net.c
src/net_setup.c
Testing has revealed that the newer series of Windows TAP drivers (i.e.
9.0.0.21 and later, also known as NDIS6, tap-windows6) suffer from
serious performance issues in the write path. Write operations seem to
take a very long time to complete, resulting in massive packet loss even
for throughputs as low as 10 Mbit/s.
I've made some attempts to alleviate the problem using parallelism. By
using custom code that allows up to 256 write operations at the same
time, the results are much better, but it's still about 2 times worse
than the traditional 9.0.0.9 driver.
We need to investigate more and file a bug against tap-windows6, but in
the meantime, let's inform the user that he might not want to use the
latest drivers.
This is generally useful. We've seen issues that are specific to some
version of these drivers (especially the newer 9.0.0.21 version), so
it's relevant to log it, especially since that means it will be
copy-pasted by people posting their logs asking for help.
As a rule, it seems reasonable to make sure that tinc operates correctly
on at least 1G links, since these are pretty common. However, I have
observed replay window issues when operating at speeds of 600 Mbit/s and
above, especially when the receiving end is a Windows system (not sure
why). This commit increases the default so that this won't occur on
fresh setups.
It may not be obvious, but due to the way tinc operates (single-threaded
control loop with no intermediate packet buffer), UDP send and receive
buffers can have a massive impact on performance. It is therefore of
paramount importance that the buffers be large enough to prevent packet
drops that could occur while tinc is processing a packet.
Leaving that value to the OS default could be reasonable if we weren't
relying on it so much. Instead, this makes performance somewhat
unpredictable.
In practice, the worst case scenario occurs on Windows, where Microsoft
had the brilliant idea of making the buffers 8K in size by default, no
matter what the link speed is. Considering that 8K flies past in a
matter of microseconds on >1G links, this is extremely inappropriate. On
these systems, changing the buffer size to 1M results in *obscene*
raw throughput improvements; I have observed a 10X jump from 40 Mbit/s
to 400 Mbit/s on my system.
In this commit, we stop trusting the OS to get this right and we use a
fixed 1M value instead, which should be enough for <=1G links.
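A hedged sketch of the call involved (values and error handling simplified, not copied from tinc's socket setup; on Linux the kernel may clamp the request to net.core.rmem_max/wmem_max):
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
static void set_udp_buffers(int udp_fd) {
    int size = 1024 * 1024;   /* 1 MiB, enough for <=1G links */
    if (setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
        perror("setsockopt SO_RCVBUF");
    if (setsockopt(udp_fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0)
        perror("setsockopt SO_SNDBUF");
}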
Write operations to the Windows device do not necessarily complete
immediately; in fact, with the latest TAP-Win32 drivers, this never
seems to be the case.
write_packet() does not handle that case correctly, because the
OVERLAPPED structure and the packet data go out of scope before the
write operation completes, resulting in race conditions.
This commit fixes the issue by making sure these data structures are
kept in global scope, and by dropping any packets that may arrive while
the previous write operation is still pending.
On Windows, when disabling the device, tinc uses CancelIo() to
cancel the pending read operation, and then proceeds to delete the event
handle immediately.
This assumes that CancelIo() blocks until the pending read request is
completely torn down and no references to it remain. While MSDN is not
completely clear on that subject, it does suggest that this is not the
case:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa363791.aspx
If the function succeeds [...] the cancel operation for all pending
I/O operations issued by the calling thread for the specified file
handle was successfully requested.
This implies that cancellation was merely "requested", and that there
are no guarantees as to the state of the operation when CancelIo()
returns. Therefore, care must be taken not to close event handles
prematurely.
While I'm not aware of this potential race condition causing any problems
in practice, I don't want to take any chances.
Modern versions of GCC handle structure packing differently when
compiling for Windows, as reported in the following GCC bug report:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52991
In practice, this affects tinc because it uses packed structs as a
convenient way to populate packet headers. "struct ip" is especially
affected - on Linux, sizeof(struct ip) returns 20 as expected, while on
Windows, it returns 24 because of the broken alignment.
This in turn completely breaks code that has to populate an IP header.
Specifically, this breaks route_ipv4_unreachable() which is responsible,
among other things, for the generation of ICMP Fragmentation Needed
messages. On Windows, these messages are corrupted beyond hope because
of this alignment issue. For TCP connections that are established
before tinc obtains a fix on the MTU (and thus are not MSS clamped),
this can result in massive disruption.
This commit fixes the issue by forcing GCC to use standard alignment
for all packed structures in the tinc codebase instead of the MSVC
alignment.
HAVE_DECL_RES_INIT is generated using AC_CHECK_DECLS. tinc checks this
symbol using #ifdef, which is wrong because (according to autoconf docs)
the symbol is always defined; it's just set to zero if the check failed.
This broke the Windows build starting from
0b310bf406, because it introduced this
conditional in code that's not excluded from the Windows build.
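In a nutshell, since AC_CHECK_DECLS always defines the macro (to 1 or 0), the check has to test its value rather than its existence:
/* AC_CHECK_DECLS always defines HAVE_DECL_RES_INIT: to 1 if res_init()
   was declared, to 0 otherwise. */
#ifdef HAVE_DECL_RES_INIT    /* wrong: true even when the value is 0 */
/* ...res_init() code compiled in even when configure said "no"... */
#endif
#if HAVE_DECL_RES_INIT       /* right: only true when the check succeeded */
/* ...res_init() code compiled in only when it is really available... */
#endif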
Ironically, commit 0f8e2cc78c introduced
a regression on its own, since it accidentally removed a return statement
that prevented try_tx_sptps() from sending UDP/MTU probes to nodes that
are past static relays.
This makes sure MTU_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
This makes sure UDP_INFO messages are only sent at the maximum rate of
5 per second (by default). As usual with these "probe" mechanisms, the
rate of these messages cannot be higher than the rate of data packets
themselves, since they are sent from the RX path.
In this commit, nodes use MTU_INFO messages to provide MTU information.
The issue this code is meant to address is the non-trivial problem of
finding the proper MTU when UDP SPTPS relays are involved. Currently,
tinc has no idea what the MTU looks like beyond the first relay, and
will arbitrarily use the first relay's MTU as the limit. This will fail
miserably if the MTU decreases after the first relay, forcing relays to
fall back to TCP. More generally, one should keep in mind that relay
paths can be arbitrarily complex, resulting in packets taking "epic
journeys" through the graph, switching back and forth between UDP (with
variable MTUs) and TCP multiple times along the path.
A solution that was considered consists in sending standard MTU probes
through the relays. This is inefficient (if there are 3 nodes on one
side of relay and 3 nodes on the other side, we end up with 3*3=9 MTU
discoveries taking place at the same time, while technically only
3+3=6 are needed) and would involve eyebrow-raising behaviors such as
probes being sent over TCP.
This commit implements an alternative solution, which consists in
the packet receiver sending MTU_INFO messages to the packet sender.
The message contains an MTU value which is set to maximum when the
message is originally sent. The message gets altered as it travels
through the metagraph, such that when the message arrives to the
destination, the MTU value contained in the message can be used to
send packets while making sure no relays will be forced to fall back to
TCP to deliver them.
The operating principles behind such a protocol message are similar to
how the UDP_INFO message works, but there is a key difference that
prevents us from simply reusing the same message: the UDP_INFO message
only cares about relay-to-relay links (i.e. it is sent between static
relays and the information it contains only makes sense between two
adjacent static relays), while the MTU_INFO cares about the end-to-end
MTU, including the entire relay path. Therefore, UDP_INFO messages stop
when they encounter static relays, while MTU_INFO messages don't stop
until they get to the original packet sender.
Note that, technically, the MTU that is obtained through this mechanism
can be slightly pessimistic, because it can be lowered by an
intermediate node that is not being used as a relay. Since nodes have no
way of knowing whether they'll be used as dynamic relays or not (and
have no say in the matter), this is not a trivial problem. That said,
this is highly unlikely to result in noticeable issues in realistic
scenarios.
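The forwarding rule reduces to a clamp at every hop; a hedged sketch (names are illustrative, not tinc's real code):
/* The originator puts the maximum value in the MTU_INFO message. Every
   node the message passes through clamps it to the MTU it knows for the
   relevant link, so the value that finally reaches the packet sender is
   usable along the entire relay path. */
static int clamp_mtu_info(int advertised_mtu, int local_link_mtu) {
    return advertised_mtu < local_link_mtu ? advertised_mtu : local_link_mtu;
}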
In this commit, nodes use UDP_INFO messages to provide UDP address
information. The basic principle is that the node that receives packets
sends UDP_INFO messages to the node that's sending the packets. The
message originally contains no address information, and is (hopefully)
updated with relevant address information as it gets relayed through the
metagraph - specifically, each intermediate node will update the message
with its best guess as to what the address is while forwarding it.
When a node receives an UDP_INFO message, and it doesn't have a
confirmed UDP tunnel with the originator node, it will update its
records with the new address for that node, so that it always has the
best possible guess as to how to reach that node. This applies to the
destination node of course, but also to any intermediate nodes, because
there's no reason they should pass on the free intel, and because it
results in nice behavior in the presence of relay chains (multiple nodes
in a path all trying to reach the same destination).
If, on the other hand, the node does have a confirmed UDP tunnel, it
will ignore the address information contained in the message.
In all cases, if the node that receives the message is not the
destination node specified in the message, it will forward the message
but not before overriding the address information with the one from its
own records. If the node has a confirmed UDP tunnel, that means the
message is updated with the address of the confirmed tunnel; if not,
the message simply reflects the records of the intermediate node, which
just happen to be the contents of the UDP_INFO message it just got, so
it's simply forwarded with no modification.
This is similar to the way ANS_KEY messages are currently
overloaded to provide UDP address information, with two differences:
- UDP_INFO messages are sent way more often than ANS_KEY messages,
thereby keeping the address information fresh. Previously, if the UDP
situation were to change after the ANS_KEY message was sent, the
sender would virtually never get the updated information.
- Once a node puts address information in an ANS_KEY message, it is
never changed again as the message travels through the metagraph; in
contrast, UDP_INFO messages behave the opposite way, as they get
rewritten every time they travel through a node with a confirmed UDP
tunnel. The latter behavior seems more appropriate because UDP tunnel
information becomes more relevant as it moves closer to the
destination node. The ANS_KEY behavior is not satisfactory in some
cases such as multi-layered graphs where the first hop is located
before a NAT.
Ultimately, the rationale behind this whole process is to improve UDP
hole punching capabilities when port translation is in effect, and more
generally, to make tinc more reliable in (very) hostile network
conditions (such as multi-layered NAT).
This commit adds a new command line option for tincd which allows
running tincd in non-detached mode with log messages still going to
syslog. The motivation for this change is to ease use of tincd
in Docker containers.
If receive_handshake() or the receive_record() user callback returns an
error, sptps_receive_data_datagram() crashes the entire process. This is
heavy-handed, makes tinc very brittle to certain failures (i.e.
unexpected packets), and is inconsistent with the rest of SPTPS code.
Refactoring commit 81578484dc seems to
have introduced a regression as it moved discovery code away from
send_sptps_data_priv() and within send_packet(). The issue is,
send_packet() is not called when the node is simply relaying an UDP
SPTPS packet: indeed, send_sptps_data_priv() is called directly from
handle_incoming_vpn_data() in that case.
As a result, try_tx_sptps() is not called in the relaying case, which in
practice means that a relay doesn't initiate UDP/MTU discovery with the
next relay (unless some other activity compels it to do so). This can
result in packets getting sent over TCP instead of UDP from the relay.
Refactoring commit 0e65326047 broke UDP
SPTPS relaying by accidentally removing try_tx_sptps() logic related to
establishing connectivity to so-called "dynamic" relays (i.e. relays
that are not specified by IndirectData configuration statements, but
are used on-the-fly to circumvent loss of direct UDP connectivity).
Specifically, the TX path was not trying to establish a tunnel to
dynamic relays (nexthop) anymore. This meant that MTU was not being
discovered with dynamic relays, which basically meant that all packets
being sent to dynamic relays went over TCP, thereby defeating the whole
purpose of SPTPS UDP relaying.
Note that this bug could easily go unnoticed if a tunnel was established
with the dynamic relay for some other reason (i.e. exchanging actual
data packets with the relay node).
Unfortunately, glibc assumes that /etc/resolv.conf is a static file that
never changes. Even on servers, /etc/resolv.conf might be a dynamically
generated file, and we never know when it changes. So just call
res_init() every time, so glibc uses up-to-date nameserver information.
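A minimal sketch of the idea, assuming a simple lookup helper (tinc's actual resolver code is more involved):
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
#include <netdb.h>
static struct addrinfo *lookup(const char *host, const char *service) {
    struct addrinfo hints = {0}, *ai = NULL;
    res_init();                 /* reread /etc/resolv.conf every time */
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo(host, service, &hints, &ai))
        return NULL;
    return ai;
}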
This will report possible problems in the configuration files, and in
some cases offers to fix them.
The code is far from perfect yet. It expects keys to be in their default
locations, and it doesn't check for Public/PrivateKey[File] statements yet.
It also does not correctly handle Ed25519 public keys yet.
When no UDP communication has been done yet, tinc establishes a guess
for the UDP address+port of each node. However, when there are multiple nodes
behind a NAT, tinc will guess the exact same address+port combination
for them, because it doesn't know about the NAT mappings yet. So when
receiving a packet, don't trust that guess unless we have confirmed UDP
communication.
This ensures try_harder() is called in such cases. However, this
function was actually very inefficient, trying to verify packets
multiple times for nodes with multiple edges. Only call try_mac() at
most once per node.
If we receive any traffic from another node, we periodically send back a
gratuitous type 2 probe reply with the maximum received packet length.
On the other node, this causes the udp and perhaps mtu probe timers to
be reset, so it does not need to send a probe request. Gratuitous probe
replies from another node also count as received traffic for this
purpose, so for nodes that also have a meta-connection, UDP keepalive
packets in principle can now solely be type 2 replies. This reduces the
amount of probe traffic even more.
To work, gratuitous replies should be sent slightly more often than
udp_discovery_keepalive_interval, so probe requests won't be triggered.
This also means that the timer resolution must be smaller than the
difference between the two, and at the moment it's kind of a hack.
When we have fixed the PMTU, n->mtuprobes == -1. When we send MTU probes
when mtuprobes == -1, decrease mtuprobes, and reset it back to -1 in
mtu_probe_h(). If mtuprobes < -1, send MTU probes every second, until
mtuprobes <= -4, in which case we will restart MTU discovery.
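Restating that counter logic as a hedged sketch (the real handlers differ in detail):
typedef struct node { int mtuprobes; } node_t;
/* mtuprobes == -1      : PMTU is fixed and believed correct.
   -4 < mtuprobes < -1  : fixed-MTU probes sent, still waiting for a reply.
   mtuprobes <= -4      : no replies; assume the path changed, rediscover. */
static void send_fixed_mtu_probe(node_t *n) {
    /* send_udp_probe(n, minmtu);  -- actual transmission elided */
    n->mtuprobes--;              /* count unanswered probes: -2, -3, -4, ... */
    if (n->mtuprobes <= -4)
        n->mtuprobes = 0;        /* restart MTU discovery from scratch */
}
static void mtu_probe_reply_received(node_t *n) {
    if (n->mtuprobes < -1)
        n->mtuprobes = -1;       /* reply arrived: the fixed PMTU still holds */
}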
This is not working at all anymore. Just remove it, and we'll do another
attempt at RTT, bandwidth and packet loss estimation after the new
probing code stabilizes.
We are trying to decouple UDP probing from MTU probing, so only send
very small packets during UDP probing. This significantly reduces the
amount of traffic sent (54 to 67 bytes per probe instead of 1500 bytes).
This means the MTU probing code takes over sending PMTU sized probes,
but this commit does not take care of detecting PMTU decreases.
In tinc 1.0.x, this was tracked in node->inkey, however in tinc 1.1 we have an abstraction layer for
the legacy cipher and digest, and we don't keep an explicit copy of the key around. We cannot use
cipher_active() or digest_active(), since it is possible to set both to the null algorithm. So add a bit to
node_status_t.
This introduces a new configuration option,
UDPDiscoveryKeepaliveInterval, which is used as the UDP discovery
interval once the UDP tunnel is established. The pre-existing option,
UDPDiscoveryInterval, is therefore only used before UDP connectivity
is established.
The defaults are set so that tinc sends UDP pings more aggressively
if the tunnel is not established yet. This is appropriate since the
size of probes in that scenario is very small (16 bytes).
Currently, if a MTU probe is sent and gets rejected by the system
because it is too large (i.e. send() returns EMSGSIZE), the MTU
discovery algorithm is not aware of it and still behaves as if the probe
was actually sent.
This patch makes the MTU discovery algorithm recalculate and send a new
probe when this happens, so that the probe "slot" does not go to waste.
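A hedged sketch of that feedback (simplified, not the actual probe path):
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
/* Send a probe of `len` bytes; if the kernel refuses it as too large,
   lower maxmtu so the caller can immediately recalculate and resend a
   smaller probe instead of wasting this probe slot. */
static int send_probe(int fd, const char *buf, int len, int maxmtu) {
    if (send(fd, buf, len, 0) < 0 && errno == EMSGSIZE && len <= maxmtu)
        maxmtu = len - 1;        /* this path cannot carry `len` bytes */
    return maxmtu;
}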
The original multiplier constant for the MTU discovery algorithm, 0.97,
assumes a somewhat pessimistic scenario where we don't get any help from
the OS - i.e. maxmtu never changes. This can happen if IP_MTU is not
usable and the OS doesn't reject overly large packets.
However, in most systems the OS will, in fact, contribute to the MTU
discovery process. In these situations, an actual MTU equal to maxmtu
is quite likely (as opposed to the maxmtu = 1518 case where that is
highly unlikely, unless the physical network supports jumbo frames).
It therefore makes sense to use a multiplier of 1 - that will make the
first probe length equal to maxmtu.
The best results are obtained if the OS supports the getsockopt(IP_MTU)
call, and its result is accurate. In that case, tinc will typically fix
the MTU after one single probe(!), like so:
Using system-provided maximum tinc MTU for foobar (1.2.3.4 port 655): 1442
Sending UDP probe length 1442 to foobar (1.2.3.4 port 655)
Got type 2 UDP probe reply 1442 from foobar (1.2.3.4 port 655)
Fixing MTU of foobar (1.2.3.4 port 655) to 1442 after 1 probes
Linux provides a getsockopt() option, IP_MTU, to get the kernel's best
guess at a connection MTU. In practice, it seems to return the MTU of
the physical interface the socket is using.
This patch uses this option to initialize maxmtu to a better value when
MTU discovery starts.
Unfortunately, this is not supported on Windows. Winsock has options
such as SO_MAX_MSG_SIZE, SO_MAXDG and SO_MAXPATHDG but they seem useless
as they always return absurdly large values (typically, 65507), as
confirmed by http://support.microsoft.com/kb/822061/
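For reference, a hedged sketch of the Linux-only query; it needs a connected UDP socket and only provides a starting estimate:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>      /* IPPROTO_IP, IP_MTU (Linux) */
static int query_kernel_mtu(int connected_udp_fd) {
#ifdef IP_MTU
    int mtu = 0;
    socklen_t len = sizeof(mtu);
    if (getsockopt(connected_udp_fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        return mtu;              /* use as the initial maxmtu for discovery */
#endif
    return -1;                   /* unsupported (e.g. Windows) or failed */
}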
If MTU discovery comes up with an MTU smaller than 512 bytes (e.g. due
to massive packet loss), it's pretty much guaranteed to be wrong. Even
if it's not, most Internet applications assume the MTU will be at least
512, so fixing the MTU to a small value is likely to cause trouble
anyway.
This also makes the discovery algorithm converge even faster, since the
interval it has to consider is smaller.
The recently introduced new MTU discovery algorithm converges much
faster than the previous one, which allows us to reduce the number
of probes required before we can confidently fix the MTU. This commit
reduces the number of initial discovery probes from 90 to 20. With the
new algorithm this is more than enough to get to the precise (byte-level
accuracy) MTU value; in cases of packet loss or weird MTU values for
which the algorithm is not optimized, we should get close to the actual
value, and then we rely on MTU increase detection (steady state probes)
to fine-tune it later if the need arises.
This patch also triggers MTU increase detection even if the MTU we have
is off by only one byte. Previously we only did that if it was off by at
least 8 bytes. Considering that (1) this should happen less often,
(2) restarting MTU discovery is cheaper than before and (3) having MTUs
that are subtly off from their intended values by just a few bytes
sounds like trouble, this sounds like a good idea.
Currently, tinc uses a naive algorithm for choosing MTU discovery probe
sizes, picking a size at random between minmtu and maxmtu.
This is of course suboptimal - since the behavior of probes is
deterministic (assuming no packet loss), it seems likely that using a
non-deterministic discovery algorithm will not yield the best results.
Furthermore, the randomness introduces a lot of variation in convergence
times.
The random solution also suffers from pathological cases - since it's
using a uniform distribution, it doesn't take into account the fact that
it's often more interesting to send small probes rather than large ones,
because getting replies is the only way we can make progress (assuming
the worst case scenario in which the OS doesn't know anything, therefore
keeping maxmtu constant). This can lead to absurd situations where the
discovery algorithm is close to the real MTU, but can't get to it
because the random number generator keeps generating numbers that are
past it.
The algorithm implemented in this patch aims to improve on the naive
random algorithm. It is organized around "cycles" of 8 probes; the sizes
of the probes decrease as we go through the cycle, thus making sure the
algorithm can cover lots of ground quickly (in case we're far from
actual MTU), but also examining the local area (in case we're close to
actual MTU). Using cycles ensures that the algorithm will "go back" to
large probes to better cover the new interval and to protect against
packet loss.
For the probe size itself, various mathematical models were simulated in
an attempt to find the one that converges the fastest; it has been
determined that using an exponential based on the size of the remaining
interval was the most effective option. The exponential is adjusted with
a magic multiplier fine-tuned to make tinc jump to the "most
interesting" (i.e. 1400+) section as soon as discovery starts.
Simulations indicate that assuming no packet loss and no help from the
OS (i.e. maxmtu stays constant), this algorithm will typically converge
to the *exact* MTU value in less than 10 probes, and will get within 8
bytes in less than 5 probes, for actual MTUs between 1417 and ~1450
(which is the range the algorithm is fine-tuned for). In contrast, the
previous algorithm gives results all over the place, sometimes taking
30+ probes to get in the ballpark. Because of the issues with the
distribution, the previous algorithm sometimes never gets to the precise
MTU value within any reasonable amount of time - in contrast, the new
algorithm will always get to the precise value in less than 30 probes,
even if the actual MTU is completely outside the optimized range.
tinc bandwidth estimation has always been quite unreliable (at least in
my experience), but there's no chance of it working anymore since the
last changes to MTU discovery code, because packets are not sent in
batches of three anymore.
This commit removes the dead code - fortunately, nothing depends on this
estimation (it's not even shown in node info). We probably need to be
smarter about this if we do want this estimation back.
Currently, tinc sends MTU probes in batches of three every second. This
commit changes that to send one packet every 333 milliseconds instead.
This change brings two benefits:
- It makes MTU probing faster, because MTU probe lengths are calculated
based on minmtu, and minmtu is adjusted based on the replies. When
sending batches of three packets, all three packets are based on the
same minmtu estimation; in contrast, by sending one packet more
frequently, each subsequent packet can benefit from the replies that
have been received since the last packet was sent. As a result, MTU
discovery converges much faster (2-3 times as fast, typically).
- It reduces network spikiness - it's more network-friendly to send
one packet from time to time as opposed to sending bursts.
This is a minor cosmetic nit to emphasise the distinction between the
initial MTU discovery phase, and the post-initial phase (i.e. maxmtu
checking).
Furthermore, this is an improvement with regard to the DRY (Don't
Repeat Yourself) principle, as the maximum mtuprobes value is only
written once.
If a probe reply is received that makes minmtu equal to maxmtu, we
have to wait until try_mtu() runs to realize that. Since try_mtu()
runs after a packet is sent, this means there is at least one packet
(possibly more, depending on timing) that won't benefit from the
fixed MTU. This also happens when maxmtu is updated from the send()
path.
This commit fixes that by making sure we check whether the MTU can be
fixed every time minmtu or maxmtu is touched.
This moves related functions together, and is a pure cut-and-paste
change. The reason it was not done in the previous commit is because it
would have made the diff harder to review.
Currently, the PMTU discovery code is run by a timeout callback,
independently of tunnel activity. This commit moves it into the TX
path, meaning that send_mtu_probe_handler() is only called if a
packet is about to be sent. Consequently, it has been renamed to
try_mtu() for consistency with try_tx(), try_udp() and try_sptps().
Running PMTU discovery code only as part of the TX path prevents
PMTU discovery from generating unreasonable amounts of traffic when
the "real" traffic is negligible. One extreme example is sending one
real packet and then going silent: in the current code this one little
packet will result in the entire PMTU discovery algorithm being run
from start to finish, resulting in absurd write traffic amplification.
With this patch, PMTU discovery stops as soon as "real" packets stop
flowing, and will be no more aggressive than the underlying traffic.
Furthermore, try_mtu() only runs if there is confirmed UDP
connectivity as per the UDP discovery mechanism. This prevents
unnecessary network chatter - previously, the PMTU discovery code
would send bursts of (potentially large) probe packets every second
even if there was nothing on the other side. With this patch, the
PMTU code only does that if something replied to the lightweight UDP
discovery pings.
These inefficiencies were made even worse when the node is not a
direct neighbour, as tinc will use PMTU discovery both on the
destination node *and* the relay. UDP discovery is more lightweight for
this purpose.
As a bonus, this code simplifies overall code somewhat - state is
easier to manage when code is run in predictable contexts as opposed
to "surprise callbacks". In addition, there is no need to call PMTU
discovery code outside of net_packet.c anymore, thereby simplifying
module boundaries.
This is a rewrite of the send_mtu_probe_handler() function to make it
focus on the actual discovery of PMTU. In particular, the PMTU
discovery code doesn't care about tunnel state anymore - it only cares
about doing the initial PMTU discovery, and once that's done, making
sure PMTU did not increase by checking it from time to time. All other
duties have already been rewritten in the UDP discovery code.
As a result, the send_mtu_probe_handler(), which previously implemented
a nightmarish state machine which was very difficult to follow and
understand, has been massively simplified. We moved from four persistent
states to only two - initial discovery and steady state.
Furthermore, a side effect is that network chatter is reduced: instead
of sending bursts of three minmtu-sized packets in the steady state,
there is only one such packet that's sent from the UDP discovery code.
However, that introduces a slight regression in the bandwidth estimation
code, which relies on three-packet bursts in order to function.
Considering that this estimation is extremely unreliable (in my
experience) and isn't relied on by anything, this seems like an
acceptable regression.
Since UDP discovery is the place where UDP feasibility is checked, it
makes sense to test for local connectivity as well. This was previously
done as part of PMTU discovery.
This adds a new mechanism by which tinc can determine if a node is
reachable via UDP. The new mechanism is currently redundant with the
PMTU discovery mechanism - that will be fixed in a future commit.
Conceptually, the UDP discovery mechanism works similarly to PMTU
discovery: it sends UDP probes (of minmtu size, to make sure the tunnel
is fully usable), and assumes UDP is usable if it gets replies. It
assumes UDP is broken if too much time has passed since the last reply.
The big difference with the current PMTU discovery mechanism, however,
is that UDP discovery probes are only triggered as part of the
packet TX path (through try_tx()). This is quite interesting, because
it means tinc will never send UDP pings more often than normal packets,
and most importantly, it will automatically stop sending pings as soon
as packets stop flowing, thereby nicely reducing network chatter.
Of course, there are small drawbacks in some edge cases: for example,
if a node only sends one packet every minute to another node, these
packets will only be sent over TCP, because the interval between packets
is too long for tinc to maintain the UDP tunnel. I consider this a
feature, not a bug: I believe it is appropriate to use TCP in scenarios
where traffic is negligible, so that we don't pollute the network with
pings just to maintain a UDP tunnel that's seeing negligible usage.
This moves related functions together. try_tx() is at the right place
since its only caller is send_packet().
This is a pure cut-and-paste change. The reason it was not done in the
previous commit is because it would have made the diff harder to review.
Currently, the TX path (starting from send_packet()) in tinc has three
responsibilities:
- Making sure packets can be sent (e.g. fetching SPTPS keys);
- Making sure they can be sent optimally (e.g. fetching non-SPTPS keys
so that UDP can be used);
- Sending the actual packet, if feasible.
The first two are closely related; the third one, however, can be
cleanly separated from the other two - meaning, we can loosen code
coupling between sending packets and "optimizing" the way packets are
sent. This will become increasingly important as future commits will
move more tunnel establishment and maintenance code into the TX path,
so we will benefit from a cleaner separation of concerns.
This is especially relevant because of the dual nature of the TX path
(SPTPS versus non-SPTPS), which can make things really complicated when
trying to share low-level code between both.
In this commit, code related to establishing or improving tunnels is
moved away from the core TX path by introducing the "try_*()" family of
functions, of which try_sptps() already existed before this commit.
This is a pure refactoring; this commit shouldn't introduce any change
in behavior.
This cleans up the PMTU probing function a little bit. It moves the
low-level sending of packets to a separate function, so that the code
reads naturally instead of using a weird for loop with "special
indexes". In addition, comments are moved inside the body of the
function for additional context.
This shouldn't introduce any change of behavior, except for local
discovery which has some minor logic fixes and which now always uses
small packets (16 bytes) because there's no need for a full-length
probe just to try the local network.
The option "--disable-legacy-protocol" was added to the configure
script. The new protocol does not depend on any external crypto
libraries, so when the option is used tinc is no longer linked to
OpenSSL's libcrypto.
make then thinks there should be a rule to make the .c file, which does
not exist of course. Luckily, we can tell it that version.o is .PHONY,
and this will still cause the .o file to be regenerated and linked into
the binaries every time make is called.
Currently, when sending packets over TCP where the final recipient is
a node we have a direct metaconnection to, tinc first establishes a
SPTPS handshake between the two neighbors.
It turns out this SPTPS tunnel is not actually useful, because the
packet is only being sent over one metaconnection with no intermediate
nodes, and the metaconnection itself is already secured using a separate
SPTPS handshake.
Therefore it seems simpler and more efficient to simply send these
packets directly over the metaconnection itself without any additional
layer. This commit implements this solution without any changes to the
metaprotocol, since the appropriate message already exists: it's the
good old "plaintext" PACKET message.
This change brings two significant benefits:
- Packets to neighbors can be sent immediately - there is no initial
delay and packet loss previously caused by the SPTPS handshake;
- Performance of sending packets to neighbors over TCP is greatly
improved since the data only goes through one round of encryption
instead of two.
Conflicts:
src/net_packet.c
Currently, when tinc establishes a metaconnection, it automatically
starts a VPN SPTPS tunnel with the other side of the metaconnection.
It is not clear what this is trying to accomplish. Having a
metaconnection with a node does not necessarily mean we're going to send
packets to that node. This patch removes this behavior, thereby
simplifying code paths and removing unnecessary network chatter.
Naturally, this introduces a slight delay (as well as at least one
initial packet loss) between the moment a metaconnection is established
and the moment VPN packets can be exchanged between the two nodes.
However this is no different to the non-neighbor case, so it makes
things more consistent and therefore easier to reason about.
The offset value indicates where the actual payload starts, so we can
process both legacy and SPTPS UDP packets without having to do casting
tricks and/or moving memory around.
Limit the amount of address/ID lookups to the minimum in all cases:
1) Legacy packets, need an address lookup.
2) Indirect SPTPS packets, need an address lookup + two ID lookups.
3) Direct SPTPS packets, need an ID or an address lookup.
So we start with an address lookup. If the source is a 1.1 node, we know it's an SPTPS packet,
and then the check for direct packets is a simple check if dstid is zero. If not, do the srcid and dstid
lookup. If the source is a 1.0 node, we don't have to do anything else.
If the address is unknown, we first check whether it's from a 1.1 node by assuming it has a valid srcid
and verifying the packet. If not, use the old try_harder(). This dispatch is sketched below.
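A hedged outline of that dispatch, reduced to its decision structure (identifiers are illustrative, not tinc's real API):
typedef enum {
    LEGACY,             /* case 1: one address lookup */
    SPTPS_DIRECT,       /* case 3: dstid is zero, no extra lookups */
    SPTPS_RELAYED,      /* case 2: srcid and dstid lookups needed */
    UNKNOWN_SPTPS,      /* unknown address, but srcid verifies */
    UNKNOWN_LEGACY      /* unknown address, fall back to try_harder() */
} dispatch_t;
static dispatch_t classify(int sender_known, int sender_is_1_1,
                           int dstid_is_zero, int srcid_verifies) {
    if (sender_known && !sender_is_1_1)
        return LEGACY;
    if (sender_known)
        return dstid_is_zero ? SPTPS_DIRECT : SPTPS_RELAYED;
    return srcid_verifies ? UNKNOWN_SPTPS : UNKNOWN_LEGACY;
}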
If the peer presents a different one from the one we already know, log
an error. Otherwise, log an informational message, and terminate in the
same way as we would if we didn't already have that key.
The SPTPS code doesn't know about nodes, so when it logs an error about
a bad packet, it doesn't log which node it came from. So add a log
message with the node's name and hostname in receive_udppacket().
On Linux, tinc doesn't know the MAC address of the TAP device until the
first read. This means that if no packets are sent through the
interface, tinc won't be able to figure out which MAC address to tag
incoming packets with. As a result, it is impossible to receive any
packet until at least one packet has been sent.
When IPv6 is disabled, Linux does not spontaneously send any packets
when the interface comes up. At first users wonder why the node is not
responding to ICMP pings, and then as soon as at least one packet is
sent through the interface, pings mysteriously start working, resulting
in user confusion.
This change fixes that problem by making sure tinc is aware of the
device's MAC address even before the first packet is sent.
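One way to obtain the address on Linux is the SIOCGIFHWADDR ioctl; a hedged sketch (not necessarily how tinc does it):
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
static int get_mac(const char *ifname, unsigned char mac[6]) {
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFHWADDR, &ifr) < 0) {
        close(fd);
        return -1;
    }
    memcpy(mac, ifr.ifr_hwaddr.sa_data, 6);   /* MAC known before any read */
    close(fd);
    return 0;
}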
Currently, when tinc sends UDP SPTPS datagrams through a relay, it
doesn't automatically start discovering PMTU with the relay. This means
that unless something else triggers PMTU discovery, tinc will keep using
TCP when sending packets through the relay.
This patch fixes the issue by explicitly establishing UDP tunnels with
relays.
Currently, we send MTU probes to each node we receive a key for, even if
we know we will never send UDP packets to that node because of
indirection. This commit disables MTU probing between nodes that have
direct communication disabled, otherwise MTU probes end up getting sent
through relays.
With the legacy protocol this was never a problem because we would never
request the key of a node with indirection enabled; with SPTPS this was
not a problem until we introduced relaying because send_sptps_data()
would simply ignore indirections, but this is not the case anymore.
Note that the fix is implemented in a quick and dirty way, by disabling
the call to send_mtu_probe() in ans_key_h(); this is not a clean fix
because there's no code to resume sending MTU probes in case the
indirection disappears because of a graph change.
This commit changes the layout of UDP datagrams to include a 6-byte
destination node ID at the very beginning of the datagram (i.e. before
the source node ID and the seqno). Note that this only applies to SPTPS.
Thanks to this new field, it is now possible to send SPTPS datagrams to
nodes that are not the final recipient of the packets, thereby using
these nodes as relay nodes. Previously SPTPS was unable to relay packets
using UDP, and required a fallback to TCP if the final recipient could
not be contacted directly using UDP. In that sense it fixes a regression
that SPTPS introduced with regard to the legacy protocol.
This change also updates tinc's low-level routing logic (i.e.
send_sptps_data()) to automatically use this relaying facility if at all
possible. Specifically, it will relay packets if we don't have a
confirmed UDP link to the final recipient (but we have one with the next
hop node), or if IndirectData is specified. This is similar to how the
legacy protocol forwards packets.
When sending packets directly without any relaying, the sender node uses
a special value for the destination node ID: instead of setting the
field to the ID of the recipient node, it writes a zero ID instead. This
allows the recipient node to distinguish between a relayed packet and a
direct packet, which is important when determining the UDP address of
the sending node.
On the relay side, relay nodes will happily relay packets that have a
destination ID which is non-zero *and* is different from their own,
provided that the source IP address of the packet is known. This is to
prevent abuse by random strangers, since a node can't authenticate the
packets that are being relayed through it.
This change keeps the protocol number from the previous datagram format
change (source IDs), 17.4. Compatibility is still preserved with 1.0 and
with pre-1.1 releases. Note, however, that nodes running this code won't
understand datagrams sent from nodes that only use source IDs and
vice-versa (not that we really care).
There is one caveat: in the current state, there is no way for the
original sender to know what the PMTU is beyond the first hop, and
contrary to the legacy protocol, relay nodes can't apply MSS clamping
because they can't decrypt the relayed packets. This leads to
inefficient scenarios where a reduced PMTU over some link that's part of
the relay path will result in relays falling back to TCP to send packets
to their final destinations.
Another caveat is that once a packet gets sent over TCP, it will use
TCP over the entire path, even if it is technically possible to use UDP
beyond the TCP-only link(s).
Arguably, these two caveats can be fixed by improving the
metaconnection protocol, but that's out of scope for this change. TODOs
are added instead. In any case, this is no worse than before.
In addition, this change increases SPTPS datagram overhead by another
6 bytes for the destination ID, on top of the existing 6-byte overhead
from the source ID.
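For orientation, an illustrative sketch of the resulting datagram layout (field order and widths follow the description above; the struct is not taken from tinc's sources):
#include <stdint.h>
typedef struct __attribute__((packed)) sptps_udp_header {
    uint8_t  dstid[6];   /* destination node ID; all zeroes for direct packets */
    uint8_t  srcid[6];   /* node ID of the original sender */
    uint32_t seqno;      /* SPTPS sequence number */
    /* encrypted payload and authentication tag follow */
} sptps_udp_header_t;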
This commit changes the layout of UDP datagrams to include the 6-byte ID
(i.e. node name hash) of the node that crafted the packet at the very
beginning of the datagram (i.e. before the seqno). Note that this only
applies to SPTPS.
This is implemented at the lowest layer, i.e. in
handle_incoming_vpn_data() and send_sptps_data() functions. Source ID is
added and removed there, in such a way that the upper layers are unaware
of its presence.
This is the first stepping stone towards supporting UDP relaying in
SPTPS, by providing information about the original sender in the packet
itself. Nevertheless, even without relaying this commit already provides
a few benefits such as being able to reliably determine the source node
of a packet in the presence of an unknown source IP address, without
having to painfully go through all node keys. This makes tinc's behavior
much more scalable in this regard.
This change does not break anything with regard to the protocol: It
preserves compatibility with 1.0 and even with older pre-1.1 releases
thanks to a minor protocol version change (17.4). Source ID information
won't be included in packets sent to nodes with minor version < 4.
One drawback, however, is that this change increases SPTPS datagram
overhead by 6 bytes (the size of the source ID itself).
This introduces a new type of identifier for nodes, which complements
node names: node IDs. Node IDs are defined as the first 6 bytes of the
SHA-256 hash of the node name. They will be used in future code in lieu
of node names as unique node identifiers in contexts where space is at
a premium (such as VPN packets).
The semantics of node IDs is that they are supposed to be unique in a
tinc graph; i.e. two different nodes that are part of the same graph
should not have the same ID, otherwise things could break. This
solution provides this guarantee based on realistic probabilities:
indeed, according to the birthday problem, with a 48-bit hash, the
probability of at least one collision is 1e-13 with 10 nodes, 1e-11
with 100 nodes, 1e-9 with 1000 nodes and 1e-7 with 10000 nodes. Things
only start getting hairy with more than 1 million nodes, as the
probability gets over 0.2%.
Currently, when tinc receives an UDP packet from an unexpected address
(i.e. an address different from the node's current address), it just
updates its internal UDP address record and carries on like nothing
happened.
This poses two problems:
- It assumes that the PMTU for the new address is the same as the
old address, which is risky. Packets might get dropped if the PMTU
turns out to be smaller (or if UDP communication on the new address
turns out to be impossible).
- Because the source address in the UDP packet itself is not
authenticated (i.e. it can be forged by an attacker), this
introduces a potential vulnerability by which an attacker with
control over one link can trick a tinc node into dumping its network
traffic to an arbitrary IP address.
This commit fixes the issue by invalidating UDP/PMTU state for a node
when its UDP address changes. This will trigger a temporary fallback
to indirect communication until we get confirmation via PMTU discovery
that the node is indeed sitting at the other end of the new UDP address.
Currently tinc only uses type 2 MTU probe replies if the recipient uses
protocol version 17.3. It should of course support any higher minor
protocol version as well.
In this commit, if a node receives a REQ_PUBKEY message from a node it
doesn't have the key for, it will send a REQ_PUBKEY message in return
*before* sending its own key.
The rationale is to prevent delays when establishing communication
between two nodes that see each other for the first time. These delays
are caused by the first SPTPS packet being dropped on the floor, as
shown in the following typical exchange:
node1: No Ed25519 key known for node2
REQ_PUBKEY ->
<- ANS_PUBKEY
node1: Learned Ed25519 public key from node2
REQ_SPTPS_START ->
node2: No Ed25519 key known for node1
<- REQ_PUBKEY
ANS_PUBKEY ->
node2: Learned Ed25519 public key from node1
-- 10-second delay --
node1: No key from node2 after 10 seconds, restarting SPTPS
REQ_SPTPS_START ->
<- SPTPS ->
node1: SPTPS key exchange with node2 successful
node2: SPTPS key exchange with node1 successful
With this patch, the following happens instead:
node1: No Ed25519 key known for node2
REQ_PUBKEY ->
node2: Preemptively requesting Ed25519 key for node1
<- REQ_PUBKEY
<- ANS_PUBKEY
ANS_PUBKEY ->
node2: Learned Ed25519 public key from node1
node1: Learned Ed25519 public key from node2
REQ_SPTPS_START ->
<- SPTPS ->
node1: SPTPS key exchange with node2 successful
node2: SPTPS key exchange with node1 successful
There are platforms on which it is impossible to rename the TUN/TAP
device. An example is Mac OS X (tuntapx). On these platforms,
specifying the Interface option will not rename the interface, but
the specified name will still be passed to tinc-up scripts and the
like, resulting in potential confusion for the user.
A logic bug was introduced in bd451cfe15
in which running graph() several times with zero reachable nodes had
the effect of calling device_enable() (instead of keeping the device
disabled).
This results in weird behavior when DeviceStandby is enabled, especially
on Windows where calling device_enable() several times in a row corrupts
I/O structures for the device, rendering it unusable.
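In sketch form, the fix is to track whether the device is currently
enabled and only act on actual transitions (the wrapper below is
hypothetical; only device_enable()/device_disable() are real calls):

/* Illustrative: graph() can call this every time reachability is
   recalculated; the device is toggled only when the state changes. */
static bool device_active = false;

static void set_device_state(bool want_active) {
	if(want_active && !device_active) {
		device_enable();
		device_active = true;
	} else if(!want_active && device_active) {
		device_disable();
		device_active = false;
	}
}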
The Windows build was broken by commit
826ad11e41 which introduced a dependency
on the HOST_NAME_MAX macro, which is not defined on Windows. According
to MSDN for gethostname(), the maximum length of the returned string
is 256 bytes (including the terminating null byte), so let's use that
as a fallback.
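The fallback boils down to a preprocessor guard along these lines
(whether the chosen value counts the terminating null byte is a detail
the actual fix may handle differently):

#ifndef HOST_NAME_MAX
#define HOST_NAME_MAX 255  /* 256 bytes including the terminating null byte */
#endif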
Successfully getting an existing variable ("tinc get name") should
not result in an error exit code (1) from the tinc command.
This changes the result of test/commandline.test from FAIL to PASS.
The handling of TAP-Win32 virtual network device reads that complete
immediately (ReadFile() returns TRUE) is incorrect - instead of
starting a new read, tinc will continue listening for the overlapped
read completion event which will never fire. As a result, tinc stops
receiving packets on the interface.
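In sketch form, the read path should loop on synchronous completions
instead of returning to the event loop (buffer, read_overlapped and
process_packet() are stand-in names, not tinc's exact identifiers):

/* Illustrative: a read that completes immediately is processed on the
   spot and a new read is issued; only a genuinely pending read leaves
   the function and waits for the overlapped completion event. */
static void queue_device_read(void) {
	for(;;) {
		DWORD len;
		if(!ReadFile(device_handle, buffer, sizeof(buffer), &len, &read_overlapped)) {
			if(GetLastError() != ERROR_IO_PENDING)
				logger(DEBUG_ALWAYS, LOG_ERR, "Error while reading from %s: %s",
				       device, winerror(GetLastError()));
			return;
		}
		process_packet(buffer, len);
	}
}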
With newer TAP-Win32 versions (such as the experimental
tap-windows6 9.21.0), tinc is unable to read from the virtual network
device:
Error while reading from (null) {23810A13-BCA9-44CE-94C6-9AEDFBF85736}: No such file or directory
This is because these new drivers apparently don't accept reads when
the device is not in the connected state (media status).
This commit fixes the issue by making sure we start reading no sooner
than when the device is enabled, and that we stop reading when the
device is disabled. This also makes the behavior somewhat cleaner,
because it doesn't make much sense to read from a disabled device
anyway.
Some tinc commands, such as "tinc generate-keys", use the terminal to
ask the user for information. This can be bypassed by making sure
there is no terminal, which is trivial on *nix but might require
jumping through some hoops on Windows depending on how the command is
invoked.
This commit adds a --batch option that ensures tinc will never ask the
user for input, even if it is attached to a terminal.
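For example, a fully non-interactive invocation would then look like
this (assuming --batch is given like the other global options, before
the command):

> tinc --batch generate-keys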
This is a slight optimization for sptps_verify_datagram(), which might
come in handy since this function is called in a loop via try_harder().
It turns out that since sptps_verify_datagram() doesn't update any
state, it doesn't matter in which order verifications are done. However,
it does affect performance since it's much cheaper to check the seqno
than to try to decrypt the packet.
Since this function is called with the wrong node most of the time, it
makes verification vastly faster for the majority of calls because the
seqno will be wrong in most cases.
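In sketch form, the reordering looks like this (the window check and
decryption helpers are stand-ins for the real SPTPS internals):

/* Illustrative: SPTPS datagrams carry their 32-bit sequence number in
   the clear, so an implausible seqno can be rejected without doing any
   cryptographic work at all. */
bool verify_datagram_sketch(sptps_t *s, const void *data, size_t len) {
	if(len < sizeof(uint32_t))
		return false;

	uint32_t seqno;
	memcpy(&seqno, data, sizeof(seqno));
	seqno = ntohl(seqno);

	if(!seqno_is_plausible(s, seqno))            /* cheap check first */
		return false;

	return decrypt_and_verify_hmac(s, data, len); /* expensive check last */
}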
When invoking tincd, "tinc start" currently uses the execvp() function,
which doesn't behave well in a console: a new prompt is displayed before
the subprocess finishes (which makes me suspect the exit value is not
handled at all). This new code uses spawnvp() instead, which is a
better fit.
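The relevant call ends up looking roughly like this (sketch; 'nargv' is
a stand-in for the argv array that "tinc start" builds for tincd):

#include <errno.h>
#include <process.h>
#include <stdio.h>
#include <string.h>

/* Illustrative: _P_WAIT makes spawnvp() wait for the child to exit and
   hand back its exit status, unlike execvp(), which on Windows returns
   to the console immediately. */
intptr_t status = spawnvp(_P_WAIT, nargv[0], (const char *const *)nargv);
if(status == -1)
	fprintf(stderr, "Could not start %s: %s\n", nargv[0], strerror(errno));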
When invoking "tinc start" with spaces in the path, the following
happens:
> "c:\Program Files (x86)\tinc\tinc.exe" start
c:\Program: unrecognized argument 'Files'
Try `c:\Program --help' for more information.
This is caused by inconsistent handling of command line strings between
execvp() and the spawned process' CRT, as documented on MSDN:
http://msdn.microsoft.com/library/431x4c1w.aspx
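The practical consequence is that arguments containing spaces must be
quoted before being handed to spawnvp(); a minimal sketch follows (the
real escaping rules for embedded quotes and backslashes are more
involved):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative: wrap an argument in double quotes if it contains a
   space, so the child's CRT re-parses it as a single argument. */
static char *quote_arg(const char *arg) {
	if(!strchr(arg, ' '))
		return strdup(arg);

	char *quoted = malloc(strlen(arg) + 3);
	if(quoted)
		sprintf(quoted, "\"%s\"", arg);
	return quoted;
}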
This commit makes tinc exit cleanly on Windows when hitting CTRL+C at
the console or when the user logs off. This change has no effect when
running tinc as a service.
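On Windows this is done with a console control handler, roughly like
the sketch below (event_exit() stands for whatever call makes the main
event loop return):

#include <windows.h>

/* Illustrative: ask the event loop to stop when the console reports
   CTRL+C, CTRL+BREAK, window close or user logoff. */
static BOOL WINAPI console_ctrl_handler(DWORD type) {
	(void)type;
	event_exit();
	return TRUE;
}

/* ...registered once at startup: */
/* SetConsoleCtrlHandler(console_ctrl_handler, TRUE); */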
This fixes the following compiler warning when building for Windows:
In file included from top.c:24:0:
/usr/local/mingw/ncurses/include/curses.h:1478:0: error: "KEY_EVENT" redefined [-Werror]
#define KEY_EVENT 0633 /* We were interrupted by an event */
^
In file included from /usr/share/mingw-w64/include/windows.h:74:0,
from /usr/share/mingw-w64/include/winsock2.h:23,
from have.h:46,
from system.h:26,
from top.c:20:
/usr/share/mingw-w64/include/wincon.h:101:0: note: this is the location of the previous definition
#define KEY_EVENT 0x1
^
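One way to resolve the clash, and roughly what the fix amounts to, is
to drop the wincon.h definition again before pulling in curses (sketch):

#include "system.h"   /* indirectly includes wincon.h, which defines KEY_EVENT */

#undef KEY_EVENT      /* let curses.h supply its own KEY_EVENT without redefinition */
#include <curses.h>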
This removes a bunch of variables that are never actually used anywhere.
This fixes the following compiler warning when building for Windows:
mingw/device.c:46:17: error: ‘device_total_in’ defined but not used [-Werror=unused-variable]
static uint64_t device_total_in = 0;
^
This fixes the following compiler warning when building for Windows:
mingw/device.c: In function ‘setup_device’:
mingw/device.c:92:9: error: unused variable ‘thread’ [-Werror=unused-variable]
HANDLE thread;
^
This fixes the following compiler warning when building for Windows:
mingw/device.c: In function ‘setup_device’:
mingw/device.c:186:2: error: passing argument 2 of ‘io_add_event’ from incompatible pointer type [-Werror]
io_add_event(&device_read_io, device_handle_read, NULL, CreateEvent(NULL, TRUE, FALSE, NULL));
^
In file included from mingw/../net.h:27:0,
from mingw/../subnet.h:24,
from mingw/../conf.h:34,
from mingw/device.c:26:
mingw/../event.h:61:13: note: expected ‘io_cb_t’ but argument is of type ‘void (*)(void *)’
extern void io_add_event(io_t *io, io_cb_t cb, void* data, WSAEVENT event);
This fixes the following compiler warning when building for Windows:
script.c: In function ‘execute_script’:
script.c:52:5: error: value computed is not used [-Werror=unused-value]
*q++;
^
This fixes the following compiler warning when building for Windows:
net_packet.c: In function ‘send_udppacket’:
net_packet.c:633:6: error: unused variable ‘origpriority’ [-Werror=unused-variable]
int origpriority = origpkt->priority;
^
This is so the positions of the other bits don't change, making it easier to
debug problems with different versions of tinc.
Also fix the padding so connection_status_t is exactly 32 bits.
The only places where connection_t::status.active is modified are
ack_h() and terminate_connection(). In both cases, connection_t::edge
is added or removed at the same time, and those are the only places
where connection_t::edge is set. Therefore, the following is true at all
times:
!c->status.active == !c->edge
This commit removes the redundant state information by getting rid of
connection_t::status.active, and using connection_t::edge instead.
In receive_udppacket(), we initialize outpkt to a default value, but the
value is never read anywhere, as every read is preceded by a write.
This issue was found by the clang static analyzer tool:
http://clang-analyzer.llvm.org/
If choose_local_address() is unable to find a local address (e.g.
because of old nodes that don't send their local address information),
then send_sptps_data() ends up using uninitialized variables for the
socket and address.
This regression was introduced in
4159108971. The commit took care of
handling that case in send_udppacket() but was missing the same fix
for send_sptps_data().
This bug was found by the clang static analyzer tool:
http://clang-analyzer.llvm.org/
Based on a patch from Etienne Dechamps. We avoid the use of %hhx, since even
though it is C99, not all compilers support it yet. We use %x instead, since
it's guaranteed that the minimum size of function arguments on the stack or in
registers is that of an int.
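For example (illustration only):

#include <stdint.h>
#include <stdio.h>

int main(void) {
	uint8_t byte = 0xab;

	/* %hhx is C99 but not universally available:
	       printf("%02hhx", byte);
	   %x is always safe, because default argument promotion passes the
	   byte as an int anyway; the cast keeps the types matching exactly. */
	printf("%02x\n", (unsigned int)byte);
	return 0;
}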
On Windows, the event loop io tree uses the Windows Event handle to
differentiate between io_t objects. Unfortunately, there is a bug in
the io_add_event() function (introduced in
2f9a1d4ab5) as it sets the event after
inserting the object into the tree, resulting in objects appearing in
io_tree out of order.
This can lead to crashes on Windows as the event loop is unable to
determine which events fired.
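The fix is simply to assign the event handle before the tree insertion
(sketch; the insertion call and struct layout are approximations):

/* Illustrative: the io tree is keyed on io->event, so that field has to
   be valid before the node is inserted, not after. */
void io_add_event(io_t *io, io_cb_t cb, void *data, WSAEVENT event) {
	io->event = event;                       /* 1. set the key */
	io->cb = cb;
	io->data = data;
	splay_insert_node(&io_tree, &io->node);  /* 2. then insert into the tree */
}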
Setting the Port configuration variable to zero can be used to make tinc
listen on a system-assigned port. Unfortunately, in this scenario myport
will be zero, which means that tinc won't transmit its actual UDP
listening port to other nodes. This breaks UDP hole punching and local
discovery.
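One way to deal with this, and roughly what is needed, is to ask the
kernel for the port it actually assigned once the listening socket is
bound (sketch; listen_fd is a stand-in for the bound socket):

#include <netdb.h>
#include <sys/socket.h>

/* Illustrative: recover the system-assigned port after bind() so it can
   be advertised to other nodes instead of the literal "0". */
struct sockaddr_storage sa;
socklen_t salen = sizeof(sa);
char port[NI_MAXSERV];

if(!getsockname(listen_fd, (struct sockaddr *)&sa, &salen) &&
   !getnameinfo((struct sockaddr *)&sa, salen, NULL, 0, port, sizeof(port), NI_NUMERICSERV)) {
	/* use 'port' as myport when announcing ourselves to other nodes */
}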
Commit 611217c96e introduced a regression
because it accidentally reordered the timeout handler calls and the
fdset setup code. This means that any io_add(), io_del() or io_set()
calls in timeout handlers would be ignored in the current event loop
iteration, resulting in erratic behavior.
The most visible symptom is when a metaconnection timeout occurs and the
connection is closed; the timeout handler closes the socket but it still
ends up in the select() call, typically resulting in the following
crash:
Error while waiting for input: Bad file descriptor
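The correct ordering in the main loop is therefore: run the expired
timeout handlers first, then build the fd sets from the (possibly
modified) io tree, then select. A sketch with hypothetical helper names:

/* Illustrative main-loop ordering: timeout handlers may call io_add(),
   io_del() or io_set(), so the fd sets must be built afterwards. */
while(running) {
	struct timeval diff;
	struct timeval *tv = run_expired_timeouts(&diff);   /* may modify the io tree */

	fd_set readable, writable;
	FD_ZERO(&readable);
	FD_ZERO(&writable);
	int maxfd = build_fdsets(&readable, &writable);      /* built from the current io tree */

	if(select(maxfd + 1, &readable, &writable, NULL, tv) < 0 && errno != EINTR)
		break;

	handle_ready_ios(&readable, &writable);
}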