To access the route attribute cache from multiple threads at once, we
have to lock the cache on writing. The route attribute data structures
themselves are safe to read concurrently; only modifications of the
cache itself must be synchronized.
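
A minimal sketch of the scheme, using a plain pthread mutex for
illustration; rta_hash_find(), rta_hash_insert() and rta_clone() are
hypothetical helpers, and we assume the hash table tolerates lock-free
readers concurrent with insertion:

    #include <pthread.h>

    static pthread_mutex_t rta_cache_lock = PTHREAD_MUTEX_INITIALIZER;

    struct rta *
    rta_lookup(struct rta *o)
    {
      /* Cached rta structures are immutable, so readers need no lock. */
      struct rta *r = rta_hash_find(o);
      if (r)
        return rta_clone(r);

      /* Only modification of the cache itself is serialized. */
      pthread_mutex_lock(&rta_cache_lock);
      r = rta_hash_find(o);    /* re-check: another writer may have won */
      if (!r)
        r = rta_hash_insert(o);
      pthread_mutex_unlock(&rta_cache_lock);
      return rta_clone(r);
    }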
This commit prevents use-after-free of routes belonging to protocols
which have already been destroyed, by delaying every protocol's
shutdown until all of its routes have been propagated through all the
pipes down to the appropriate exports.
The use-after-free was hypothetical, yet theoretically possible under
rare conditions: one BGP protocol originates a lot of routes and the
user deletes that protocol by reconfiguring at the same time as a next
hop update is requested, causing rte_better() to be called on a
not-yet-pruned network prefix while the owner protocol has already
been freed.
In parallel execution environments, this would become an inter-thread
use-after-free, causing possible heisenbugs or other nasty problems.
There is a simple universal IO loop, taking care of events, timers and
sockets. Primarily, one instance of a protocol should use exactly one IO
loop to do all its work, as is now done in BFD.
Contrary to previous versions, the loop is now launched and cleaned up
by the nest/proto.c code, allowing a protocol to request its own loop
simply by setting the loop's lock order in config higher than the_bird.
Changing the requested lock order in reconfigure is neither supported
nor checked; no protocol should do it at all.
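
Schematically, one iteration of such a loop looks like this (a sketch;
the function and field names are illustrative, not the exact API):

    void
    birdloop_main(struct birdloop *loop)
    {
      while (!loop->stop_requested)
      {
        timers_fire(&loop->time);        /* run all expired timers */
        ev_run_list(&loop->event_list);  /* run all scheduled events */

        /* Sleep in poll() on the loop's sockets, at most until
         * the nearest timer is due to fire. */
        sockets_poll(loop, timer_remains(&loop->time));
      }
    }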
In some specific configurations, it was possible to send BIRD into an
infinite loop of recursive next hop resolution. This was caused by route
priority inversion.
To prevent priority inversions affecting other next hops, we simply
refuse to resolve any next hop if the best route for the matching
prefix is recursive, or if any other route with the same preference is
recursive.
Next hop resolution doesn't change route priority, therefore it is
perfectly OK to resolve BGP next hops e.g. by an OSPF route; yet if the
same (or a covering) prefix is also announced by iBGP, retraction of
the OSPF route could lead to a priority inversion.
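
The check described above can be sketched like this (net/rte layout
simplified; rte_is_recursive() is a hypothetical predicate):

    /* Return 1 if routes for this net may be used to resolve
     * a recursive next hop, 0 otherwise. */
    static int
    net_usable_for_resolution(net *n)
    {
      rte *best = n->routes;
      if (!best || rte_is_recursive(best))
        return 0;

      /* Any same-preference route could later become best,
       * so a recursive one among them also blocks resolution. */
      for (rte *e = best->next; e; e = e->next)
        if ((e->pref == best->pref) && rte_is_recursive(e))
          return 0;

      return 1;
    }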
* internal tables are now more standalone, having their own import and
export hooks
* route refresh/reload uses a stale counter instead of a stale flag,
  avoiding the initial walk over the whole table (see the sketch after
  this list)
* route modify (by BGP LLGR) is now done by a special refeed hook,
reimporting the modified routes directly without filters
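
The stale counter mentioned above works roughly like this (a sketch;
struct and field names are illustrative):

    /* Beginning a refresh only bumps the import's counter; no walk
     * of the table is needed to mark routes stale. */
    void
    rt_refresh_begin(struct rt_import *im)
    {
      im->stale_set++;
    }

    /* A route is stale if it was last (re)imported in an older cycle;
     * such routes get pruned when the refresh ends. */
    static inline int
    rte_is_stale(struct rt_import *im, struct rte *e)
    {
      return e->stale_cycle != im->stale_set;
    }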
Channels now include rt_import_req and rt_export_req to hook into the
table, instead of just one list node. This will (in future) allow for:
* channel import and export bound to different tables
* more efficient pipe code (dropping most of the channel code)
* conversion of 'show route' to a special kind of export
* temporary static routes from CLI
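
Schematically (a sketch; the request structures carry more state in
reality):

    struct channel {
      /* ... other channel state ... */
      struct rt_import_request in_req;   /* hooks the import side into the table */
      struct rt_export_request out_req;  /* hooks the export side into the table */
    };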
The import / export states are also updated to the new algorithms.
Routes are now allocated only when they are about to be inserted into
the table. Updating a route requires a locally allocated route
structure. Ownership of the attributes is also no longer transferred
between protocols and tables, but merely borrowed, which should be
easier to handle in a multithreaded environment.
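
The resulting update pattern looks roughly like this (a sketch; the
rte fields and the rte_update() signature are simplified):

    /* The caller builds the route in local storage; the table copies
     * what it needs during the call, so the attributes are only
     * borrowed, never handed over. */
    rte e0 = {
      .attrs = attrs,   /* still owned (or cached) by the caller */
      .src = src,
    };
    rte_update(channel, net, &e0, src);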
Add support for specifying a password in hexadecimal format. The result
is the same whether a password is specified as a quoted string or a
hex-encoded byte string; this just makes it more convenient to input
high-entropy byte strings as MAC keys.
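
For instance, the following two definitions would configure the same
key, once as a quoted string and once hex-encoded ("abc" is 61:62:63
in ASCII; the exact hex syntax shown here is illustrative):

    password "abc" { algorithm blake2s128; };
    password 61:62:63 { algorithm blake2s128; };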
The Babel MAC authentication RFC recommends implementing Blake2s as one
of the supported algorithms. To achieve this, add the blake2b and
blake2s hash functions for MAC authentication. The hash function
implementations are the reference implementations from blake2.net.
The Blake2 algorithms allow specifying an arbitrary output size, and
the Babel MAC spec says to implement Blake2s with 128-bit output. To
satisfy this, we add two variants of each algorithm: one using the
default output size (256 bits for Blake2s, 512 bits for Blake2b), and
one using half the default output size.
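
In a Babel interface definition, the variants would then be selectable
per password, e.g. (an illustrative config; the algorithm keywords are
how we would expect them to be named):

    interface "eth0" {
      authentication permissive;
      password "secret" {
        # one of: blake2s128 (RFC-recommended), blake2s256,
        #         blake2b256, blake2b512
        algorithm blake2s128;
      };
    };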
Update to BIRD coding style done by committer.
In general, events are code handling some condition, scheduled when
that condition happens and executed independently of the I/O loop.
Work-events are a subgroup of events that are scheduled repeatedly
until some (often significant) work is done (e.g. feeding routes to a
protocol). All scheduled events are executed during each I/O loop
iteration.
Separate work-events from regular events into their own queue and
rate-limit their execution to a fixed number per I/O loop iteration.
That should prevent excess latency when many work-events are scheduled
at once (e.g. a simultaneous reload of many BGP sessions).
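
A sketch of the limiting logic (the constant and the names are
illustrative):

    #define MAX_WORK_EVENTS_PER_LOOP  4

    /* Called once per I/O loop iteration. Regular events run to
     * completion; work-events get a fixed budget, and whatever is
     * left stays queued for the next iteration. */
    static void
    loop_run_events(struct birdloop *loop)
    {
      ev_run_list(&loop->event_list);
      ev_run_list_limited(&loop->work_list, MAX_WORK_EVENTS_PER_LOOP);
    }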