This is not working at all anymore. Just remove it, and we'll make another
attempt at RTT, bandwidth and packet loss estimation after the new
probing code stabilizes.
We are trying to decouple UDP probing from MTU probing, so only send
very small packets during UDP probing. This significantly reduces the
amount of traffic sent (54 to 67 bytes per probe instead of 1500 bytes).
This means the MTU probing code takes over sending PMTU sized probes,
but this commit does not take care of detecting PMTU decreases.
In tinc 1.0.x, this was tracked in node->inkey; however, in tinc 1.1 we
have an abstraction layer for the legacy cipher and digest, and we don't
keep an explicit copy of the key around. We cannot use cipher_active()
or digest_active(), since it is possible to set both to the null
algorithm. So add a bit to node_status_t.
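A minimal sketch of what this could look like (the name of the new bit and
the surrounding members are illustrative assumptions, not the actual
definition in node.h):

    /* Sketch only: the name of the new bit is hypothetical. */
    typedef struct node_status_t {
        unsigned int validkey:1;          /* node has a valid key for sending */
        unsigned int has_legacy_key:1;    /* new: a legacy cipher/digest key has been set */
        /* ... other status bits ... */
    } node_status_t;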
This introduces a new configuration option,
UDPDiscoveryKeepaliveInterval, which is used as the UDP discovery
interval once the UDP tunnel is established. The pre-existing option,
UDPDiscoveryInterval, is therefore only used before UDP connectivity
is established.
The defaults are set so that tinc sends UDP pings more aggressively
if the tunnel is not established yet. This is appropriate since the
size of probes in that scenario is very small (16 bytes).
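For illustration, the two options could be set in tinc.conf like this (the
values shown are arbitrary examples, not necessarily the defaults):

    # Ping aggressively while UDP connectivity is not established yet...
    UDPDiscoveryInterval = 2
    # ...and much less often once the UDP tunnel is up.
    UDPDiscoveryKeepaliveInterval = 9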
Currently, if an MTU probe is sent and gets rejected by the system
because it is too large (i.e. send() returns EMSGSIZE), the MTU
discovery algorithm is not aware of it and still behaves as if the probe
was actually sent.
This patch makes the MTU discovery algorithm recalculate and send a new
probe when this happens, so that the probe "slot" does not go to waste.
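A rough sketch of the idea (node_t and choose_probe_length() are
hypothetical stand-ins for the real tinc structures and helpers):

    #include <errno.h>
    #include <sys/socket.h>

    static void send_probe_checked(node_t *n, int sock, const void *buf, size_t len) {
        if(send(sock, buf, len, 0) >= 0 || errno != EMSGSIZE)
            return;

        /* The kernel says this length cannot fit, so len - 1 is a valid
           upper bound on the path MTU. */
        if(len - 1 < n->maxmtu)
            n->maxmtu = len - 1;

        /* Recalculate and resend so the probe "slot" is not wasted. */
        len = choose_probe_length(n);
        send(sock, buf, len, 0);
    }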
The original multiplier constant for the MTU discovery algorithm, 0.97,
assumes a somewhat pessimistic scenario where we don't get any help from
the OS - i.e. maxmtu never changes. This can happen if IP_MTU is not
usable and the OS doesn't reject overly large packets.
However, on most systems the OS will, in fact, contribute to the MTU
discovery process. In these situations, an actual MTU equal to maxmtu
is quite likely (as opposed to the maxmtu = 1518 case where that is
highly unlikely, unless the physical network supports jumbo frames).
It therefore makes sense to use a multiplier of 1 - that will make the
first probe length equal to maxmtu.
The best results are obtained if the OS supports the getsockopt(IP_MTU)
call, and its result is accurate. In that case, tinc will typically fix
the MTU after one single probe(!), like so:
    Using system-provided maximum tinc MTU for foobar (1.2.3.4 port 655): 1442
    Sending UDP probe length 1442 to foobar (1.2.3.4 port 655)
    Got type 2 UDP probe reply 1442 from foobar (1.2.3.4 port 655)
    Fixing MTU of foobar (1.2.3.4 port 655) to 1442 after 1 probes
Linux provides a getsockopt() option, IP_MTU, to get the kernel's best
guess at a connection MTU. In practice, it seems to return the MTU of
the physical interface the socket is using.
This patch uses this option to initialize maxmtu to a better value when
MTU discovery starts.
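A minimal sketch of the call (the getsockopt(IPPROTO_IP, IP_MTU, ...)
interface is the real Linux one; the surrounding function and its error
handling are illustrative):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Query the kernel's MTU estimate for a connected socket. */
    static int kernel_mtu_guess(int sock) {
        int mtu = 0;
        socklen_t len = sizeof(mtu);

        if(getsockopt(sock, IPPROTO_IP, IP_MTU, &mtu, &len) == 0 && mtu > 0)
            return mtu;    /* use this to initialize maxmtu */

        return -1;         /* not available; keep the default maxmtu */
    }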
Unfortunately, this is not supported on Windows. Winsock has options
such as SO_MAX_MSG_SIZE, SO_MAXDG and SO_MAXPATHDG but they seem useless
as they always return absurdly large values (typically, 65507), as
confirmed by http://support.microsoft.com/kb/822061/
If MTU discovery comes up with an MTU smaller than 512 bytes (e.g. due
to massive packet loss), it's pretty much guaranteed to be wrong. Even
if it's not, most Internet applications assume the MTU will be at least
512, so fixing the MTU to a small value is likely to cause trouble
anyway.
This also makes the discovery algorithm converge even faster, since the
interval it has to consider is smaller.
The recently introduced new MTU discovery algorithm converges much
faster than the previous one, which allows us to reduce the number
of probes required before we can confidently fix the MTU. This commit
reduces the number of initial discovery probes from 90 to 20. With the
new algorithm this is more than enough to get to the precise (byte-level
accuracy) MTU value; in cases of packet loss or weird MTU values for
which the algorithm is not optimized, we should get close to the actual
value, and then we rely on MTU increase detection (steady state probes)
to fine-tune it later if the need arises.
This patch also triggers MTU increase detection even if the MTU we have
is off by only one byte. Previously we only did that if it was off by at
least 8 bytes. Considering that (1) this should happen less often,
(2) restarting MTU discovery is cheaper than before and (3) having MTUs
that are subtly off from their intended values by just a few bytes
is asking for trouble, this seems like a good idea.
Currently, tinc uses a naive algorithm for choosing MTU discovery probe
sizes, picking a size at random between minmtu and maxmtu.
This is of course suboptimal - since the behavior of probes is
deterministic (assuming no packet loss), it seems likely that using a
non-deterministic discovery algorithm will not yield the best results.
Furthermore, the randomness introduces a lot of variation in convergence
times.
The random solution also suffers from pathological cases - since it's
using a uniform distribution, it doesn't take into account the fact that
it's often more interesting to send small probes rather than large ones,
because getting replies is the only way we can make progress (assuming
the worst case scenario in which the OS doesn't know anything, therefore
keeping maxmtu constant). This can lead to absurd situations where the
discovery algorithm is close to the real MTU, but can't get to it
because the random number generator keeps generating numbers that are
past it.
The algorithm implemented in this patch aims to improve on the naive
random algorithm. It is organized around "cycles" of 8 probes; the sizes
of the probes decrease as we go through the cycle, thus making sure the
algorithm can cover lots of ground quickly (in case we're far from
actual MTU), but also examining the local area (in case we're close to
actual MTU). Using cycles ensures that the algorithm will "go back" to
large probes to better cover the new interval and to protect against
packet loss.
For the probe size itself, various mathematical models were simulated in
an attempt to find the one that converges the fastest; it has been
determined that using an exponential based on the size of the remaining
interval was the most effective option. The exponential is adjusted with
a magic multiplier fine-tuned to make tinc jump to the "most
interesting" (i.e. 1400+) section as soon as discovery starts.
Simulations indicate that assuming no packet loss and no help from the
OS (i.e. maxmtu stays constant), this algorithm will typically converge
to the *exact* MTU value in less than 10 probes, and will get within 8
bytes in less than 5 probes, for actual MTUs between 1417 and ~1450
(which is the range the algorithm is fine-tuned for). In contrast, the
previous algorithm gives results all over the place, sometimes taking
30+ probes to get in the ballpark. Because of the issues with the
distribution, the previous algorithm sometimes never gets to the precise
MTU value within any reasonable amount of time - in contrast, the new
algorithm will always get to the precise value in less than 30 probes,
even if the actual MTU is completely outside the optimized range.
tinc bandwidth estimation has always been quite unreliable (at least in
my experience), but there's no chance of it working anymore since the
last changes to MTU discovery code, because packets are not sent in
batches of three anymore.
This commit removes the dead code - fortunately, nothing depends on this
estimation (it's not even shown in node info). We probably need to be
smarter about this if we do want this estimation back.
Currently, tinc sends MTU probes in batches of three every second. This
commit changes that to send one packet every 333 milliseconds instead.
This change brings two benefits:
- It makes MTU probing faster, because MTU probe lengths are calculated
based on minmtu, and minmtu is adjusted based on the replies. When
sending batches of three packets, all three packets are based on the
same minmtu estimation; in contrast, by sending one packet more
frequently, each subsequent packet can benefit from the replies that
have been received since the last packet was sent. As a result, MTU
discovery converges much faster (2-3 times as fast, typically).
- It reduces network spikiness - it's more network-friendly to send
one packet from time to time as opposed to sending bursts.
This is a minor cosmetic nit to emphasise the distinction between the
initial MTU discovery phase, and the post-initial phase (i.e. maxmtu
checking).
Furthermore, this is an improvement with regard to the DRY (Don't
Repeat Yourself) principle, as the maximum mtuprobes value is only
written once.
If a probe reply is received that makes minmtu equal to maxmtu, we
have to wait until try_mtu() runs to realize that. Since try_mtu()
runs after a packet is sent, this means there is at least one packet
(possibly more, depending on timing) that won't benefit from the
fixed MTU. This also happens when maxmtu is updated from the send()
path.
This commit fixes that by making sure we check whether the MTU can be
fixed every time minmtu or maxmtu is touched.
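A sketch of the check this introduces (the field and function names are
illustrative):

    /* Called from every place that touches minmtu or maxmtu (probe reply
       handling, the send() path), so the MTU is fixed as soon as the
       interval closes instead of waiting for the next try_mtu() run. */
    static void try_fix_mtu(node_t *n) {
        if(n->mtu_fixed)
            return;

        if(n->minmtu >= n->maxmtu) {
            n->mtu = n->minmtu;
            n->mtu_fixed = true;
        }
    }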
This moves related functions together, and is a pure cut-and-paste
change. The reason it was not done in the previous commit is because it
would have made the diff harder to review.
Currently, the PMTU discovery code is run by a timeout callback,
independently of tunnel activity. This commit moves it into the TX
path, meaning that send_mtu_probe_handler() is only called if a
packet is about to be sent. Consequently, it has been renamed to
try_mtu() for consistency with try_tx(), try_udp() and try_sptps().
Running PMTU discovery code only as part of the TX path prevents
PMTU discovery from generating unreasonable amounts of traffic when
the "real" traffic is negligible. One extreme example is sending one
real packet and then going silent: in the current code this one little
packet will result in the entire PMTU discovery algorithm being run
from start to finish, resulting in absurd write traffic amplification.
With this patch, PMTU discovery stops as soon as "real" packets stop
flowing, and will be no more aggressive than the underlying traffic.
Furthermore, try_mtu() only runs if there is confirmed UDP
connectivity as per the UDP discovery mechanism. This prevents
unnecessary network chatter - previously, the PMTU discovery code
would send bursts of (potentially large) probe packets every second
even if there was nothing on the other side. With this patch, the
PMTU code only does that if something replied to the lightweight UDP
discovery pings.
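A simplified sketch of the resulting order of operations in the TX path
(try_udp() and try_mtu() are the functions named above; the body and the
udp_confirmed flag are illustrative):

    static void try_tx(node_t *n) {
        try_udp(n);                /* lightweight pings, small packets */

        if(n->status.udp_confirmed)
            try_mtu(n);            /* PMTU probes only over a working UDP tunnel */
    }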
These inefficiencies are even worse when the node is not a
direct neighbour, as tinc will use PMTU discovery both on the
destination node *and* the relay. UDP discovery is more lightweight for
this purpose.
As a bonus, this change simplifies the overall code somewhat - state is
easier to manage when code is run in predictable contexts as opposed
to "surprise callbacks". In addition, there is no need to call PMTU
discovery code outside of net_packet.c anymore, thereby simplifying
module boundaries.
This is a rewrite of the send_mtu_probe_handler() function to make it
focus on the actual discovery of PMTU. In particular, the PMTU
discovery code doesn't care about tunnel state anymore - it only cares
about doing the initial PMTU discovery, and once that's done, making
sure PMTU did not increase by checking it from time to time. All other
duties have already been rewritten in the UDP discovery code.
As a result, the send_mtu_probe_handler(), which previously implemented
a nightmarish state machine that was very difficult to follow and
understand, has been massively simplified. We moved from four persistent
states to only two - initial discovery and steady state.
Furthermore, a side effect is that network chatter is reduced: instead
of sending bursts of three minmtu-sized packets in the steady state,
there is only one such packet that's sent from the UDP discovery code.
However, that introduces a slight regression in the bandwidth estimation
code, which relies on three-packet bursts in order to function.
Considering that this estimation is extremely unreliable (in my
experience) and isn't relied on by anything, this seems like an
acceptable regression.
Since UDP discovery is the place where UDP feasibility is checked, it
makes sense to test for local connectivity as well. This was previously
done as part of PMTU discovery.
This adds a new mechanism by which tinc can determine if a node is
reachable via UDP. The new mechanism is currently redundant with the
PMTU discovery mechanism - that will be fixed in a future commit.
Conceptually, the UDP discovery mechanism works similarly to PMTU
discovery: it sends UDP probes (of minmtu size, to make sure the tunnel
is fully usable), and assumes UDP is usable if it gets replies. It
assumes UDP is broken if too much time has passed since the last reply.
The big difference from the current PMTU discovery mechanism, however,
is that UDP discovery probes are only triggered as part of the
packet TX path (through try_tx()). This is quite interesting, because
it means tinc will never send UDP pings more often than normal packets,
and most importantly, it will automatically stop sending pings as soon
as packets stop flowing, thereby nicely reducing network chatter.
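A sketch of this feasibility rule (the field names, the helper and the use
of plain time() are simplifications for illustration):

    #include <stdbool.h>
    #include <time.h>

    static void try_udp(node_t *n) {
        time_t now = time(NULL);

        /* Consider UDP broken once replies stop arriving for too long. */
        if(now - n->last_udp_reply > udp_discovery_timeout)
            n->status.udp_confirmed = false;

        /* Send a minmtu-sized probe at the configured discovery interval. */
        if(now - n->last_udp_probe >= udp_discovery_interval) {
            n->last_udp_probe = now;
            send_udp_probe(n, n->minmtu);    /* hypothetical helper */
        }
    }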
Of course, there are small drawbacks in some edge cases: for example,
if a node only sends one packet every minute to another node, these
packets will only be sent over TCP, because the interval between packets
is too long for tinc to maintain the UDP tunnel. I consider this a
feature, not a bug: I believe it is appropriate to use TCP in scenarios
where traffic is negligible, so that we don't pollute the network with
pings just to maintain a UDP tunnel that's seeing negligible usage.
This moves related functions together. try_tx() is at the right place
since its only caller is send_packet().
This is a pure cut-and-paste change. The reason it was not done in the
previous commit is because it would have made the diff harder to review.
Currently, the TX path (starting from send_packet()) in tinc has three
responsibilities:
- Making sure packets can be sent (e.g. fetching SPTPS keys);
- Making sure they can be sent optimally (e.g. fetching non-SPTPS keys
so that UDP can be used);
- Sending the actual packet, if feasible.
The first two are closely related; the third one, however, can be
cleanly separated from the other two - meaning, we can loosen code
coupling between sending packets and "optimizing" the way packets are
sent. This will become increasingly important as future commits will
move more tunnel establishment and maintenance code into the TX path,
so we will benefit from a cleaner separation of concerns.
This is especially relevant because of the dual nature of the TX path
(SPTPS versus non-SPTPS), which can make things really complicated when
trying to share low-level code between both.
In this commit, code related to establishing or improving tunnels is
moved away from the core TX path by introducing the "try_*()" family of
functions, of which try_sptps() already existed before this commit.
This is a pure refactoring; this commit shouldn't introduce any change
in behavior.
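As an illustration of the intended separation (the bodies below are
simplified sketches, and send_packet_via_tcp() is a hypothetical stand-in
for the TCP fallback path):

    /* send_packet() first gives the try_*() family a chance to establish
       or improve tunnels, then delivers the packet over whatever transport
       is currently feasible. */
    void send_packet(node_t *n, vpn_packet_t *packet) {
        try_tx(n);    /* keys, UDP feasibility, PMTU - never blocks the send */

        if(n->status.udp_confirmed)
            send_udppacket(n, packet);
        else
            send_packet_via_tcp(n, packet);
    }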
This cleans up the PMTU probing function a little bit. It moves the
low-level sending of packets to a separate function, so that the code
reads naturally instead of using a weird for loop with "special
indexes". In addition, comments are moved inside the body of the
function for additional context.
This shouldn't introduce any change of behavior, except for local
discovery which has some minor logic fixes and which now always uses
small packets (16 bytes) because there's no need for a full-length
probe just to try the local network.
The option "--disable-legacy-protocol" was added to the configure
script. The new protocol does not depend on any external crypto
libraries, so when the option is used tinc is no longer linked to
OpenSSL's libcrypto.
It then thinks there should be a rule to make the .c file, which does
not exist of course. Luckily, we can tell it that version.o is .PHONY,
and this will still cause the .o file to be regenerated and linked into
the binaries every time make is called.
On some platforms (Mac OS X for example), the res_init() function requires
linking with libresolv. On others (Linux, OpenBSD for example), res_init()
lives in libc.
Currently, when sending packets over TCP where the final recipient is
a node we have a direct metaconnection to, tinc first establishes a
SPTPS handshake between the two neighbors.
It turns out this SPTPS tunnel is not actually useful, because the
packet is only being sent over one metaconnection with no intermediate
nodes, and the metaconnection itself is already secured using a separate
SPTPS handshake.
Therefore it seems simpler and more efficient to simply send these
packets directly over the metaconnection itself without any additional
layer. This commit implements this solution without any changes to the
metaprotocol, since the appropriate message already exists: it's the
good old "plaintext" PACKET message.
This change brings two significant benefits:
- Packets to neighbors can be sent immediately - there is no initial
delay and packet loss previously caused by the SPTPS handshake;
- Performance of sending packets to neighbors over TCP is greatly
improved since the data only goes through one round of encryption
instead of two.
Currently, when tinc establishes a metaconnection, it automatically
starts a VPN SPTPS tunnel with the other side of the metaconnection.
It is not clear what this is trying to accomplish. Having a
metaconnection with a node does not necessarily mean we're going to send
packets to that node. This patch removes this behavior, thereby
simplifying code paths and removing unnecessary network chatter.
Naturally, this introduces a slight delay (as well as at least one
initial packet loss) between the moment a metaconnection is established
and the moment VPN packets can be exchanged between the two nodes.
However, this is no different from the non-neighbor case, so it makes
things more consistent and therefore easier to reason about.