This commit makes the route chains in the tables atomic. This allows not
only standard exports but also feeds and bulk exports to be processed
without ever locking the table.
Design note: the overall data structures are quite brittle. We're using
RCU read-locks to keep track of readers, and we indicate ongoing
work on the data structures by prepending a REF_OBSOLETE sentinel node
which makes every reader wait.
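For illustration only, a minimal C model of the sentinel mechanism described
above; REF_OBSOLETE is the flag mentioned here, while the structures and the
chain_read() helper are made up and not the real nest/rt-table.c types:

  #include <stdatomic.h>
  #include <stddef.h>

  #define REF_OBSOLETE 0x1   /* sentinel flag, as described above */

  /* hypothetical, simplified stand-ins for the real route chain structures */
  struct rte_node {
    unsigned flags;
    struct rte_node *next;
  };

  struct net_chain {
    _Atomic(struct rte_node *) first;
  };

  /* Reader side: runs under an RCU read-lock. If a writer has prepended the
   * REF_OBSOLETE sentinel, the reader backs off and retries later instead of
   * walking a chain that is being rebuilt. */
  static struct rte_node *
  chain_read(struct net_chain *c)
  {
    struct rte_node *head = atomic_load_explicit(&c->first, memory_order_acquire);
    if (head && (head->flags & REF_OBSOLETE))
      return NULL;   /* ongoing work; caller waits for the writer to finish */
    return head;
  }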
All the operations are intended to stay inside nest/rt-table.c and it
may even be best to further refactor the code to hide the routing table
internal structure in there. Under no circumstances should anyone write
routines manipulating live routes in tables from outside.
This ensures that if somebody passes an event to a loop which
has just started executing, the event gets picked up. Otherwise
there is a race condition leaving stray events pending in the queue
without a ping (because the run finishes too fast to pick up the
later events).
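A rough model of the re-check, with made-up names rather than the actual
BIRD loop code:

  #include <stdatomic.h>
  #include <stdbool.h>

  /* hypothetical model of a loop with an atomic "events pending" flag */
  struct demo_loop { atomic_bool events_pending; };

  static void demo_ping(struct demo_loop *l)       { (void) l; /* wake the loop */ }
  static void demo_run_events(struct demo_loop *l) { atomic_store(&l->events_pending, false); }

  static void
  demo_loop_run(struct demo_loop *l)
  {
    demo_run_events(l);

    /* Re-check after the run: an event enqueued while we were finishing
     * would otherwise sit in the queue without a ping until the next wakeup. */
    if (atomic_load_explicit(&l->events_pending, memory_order_acquire))
      demo_ping(l);
  }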
The UDP logging had to be substantially rewritten due to the different
logging backend and reconfiguration mechanisms.
Conflicts:
	doc/bird.sgml
	sysdep/unix/config.Y
	sysdep/unix/io.c
	sysdep/unix/log.c
	sysdep/unix/unix.h
When a recursive route with an MPLS-labeled nexthop was exported to the
kernel and read back, nexthop_same() failed due to a different labels_orig
field and the kernel protocol reinstalled the route unnecessarily.
For comparing next hops, the route cache has to distinguish ones with
different labels_orig, but KRT has to ignore that, so we need two
nexthop comparison functions.
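A hedged sketch of the distinction; apart from labels_orig, the field and
function names below are illustrative, not the real BIRD definitions:

  #include <string.h>

  /* illustrative nexthop fragment; only the labels_orig semantics matter here */
  struct nexthop_demo {
    int labels;        /* number of MPLS labels */
    int labels_orig;   /* how many of them came from the original route */
    unsigned label[8];
  };

  /* Strict comparison, as the route cache needs it: labels_orig matters. */
  static int
  nexthop_same_demo(const struct nexthop_demo *a, const struct nexthop_demo *b)
  {
    return (a->labels == b->labels)
        && (a->labels_orig == b->labels_orig)
        && !memcmp(a->label, b->label, a->labels * sizeof(a->label[0]));
  }

  /* Looser comparison for KRT: the kernel does not know labels_orig,
   * so ignoring it avoids pointless route reinstalls. */
  static int
  nexthop_equal_demo(const struct nexthop_demo *a, const struct nexthop_demo *b)
  {
    return (a->labels == b->labels)
        && !memcmp(a->label, b->label, a->labels * sizeof(a->label[0]));
  }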
Thanks to Marcel Menzel for the bug report.
Some [redacted] (yes, myself) had a really bad idea
to rename nest/route.h to nest/rt.h while refactoring
some data structures out of it.
This led to unnecessarily complex problems with
merging updates from v2. Reverting this change
to make my life a bit easier.
At least it needed only one find-sed command:
find -name '*.[chlY]' -type f -exec sed -i 's#nest/rt.h#nest/route.h#' '{}' +
The Kernel protocol, even with the option 'learn' enabled, ignores
direct routes created by the OS kernel (on Linux these are routes
with rtm_protocol == RTPROT_KERNEL).
Implement optional behavior where both OS kernel and third-party routes
are learned; it can be enabled by the 'learn all' option.
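A hedged sketch of the resulting decision in the scan path; RTPROT_KERNEL is
the real Linux constant, while the structure and function names below are
illustrative:

  #include <linux/rtnetlink.h>   /* RTPROT_KERNEL */
  #include <stdbool.h>

  /* illustrative per-protocol options, not the real krt config structure */
  struct krt_demo_config {
    bool learn;       /* classic 'learn' */
    bool learn_all;   /* new 'learn all' */
  };

  /* Decide whether a foreign route read from the kernel should be learned. */
  static bool
  krt_should_learn(const struct krt_demo_config *cf, unsigned char rtm_protocol)
  {
    if (!cf->learn)
      return false;
    if (rtm_protocol == RTPROT_KERNEL)
      return cf->learn_all;   /* direct routes created by the OS kernel */
    return true;              /* third-party routes, learned as before */
  }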
Minor changes by committer.
When a regular event was added from a work event, we still remembered
that the regular event list was empty and therefore did not use a zero
timeout in poll(). This led to ~3 s latency in route reload during
reconfiguration.
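Illustrative sketch of the fix idea, with made-up names: the regular event
list is consulted right before poll() instead of relying on a flag remembered
earlier in the cycle.

  #include <stdbool.h>

  /* hypothetical stubs standing in for the real event list and timer code */
  static bool demo_regular_events_pending(void) { return false; }
  static int  demo_timer_timeout_ms(void)       { return 3000; }

  static int
  demo_poll_timeout_ms(void)
  {
    /* Check the list *now*: a work event handled earlier in this cycle may
     * have enqueued a regular event after the list was last looked at. */
    if (demo_regular_events_pending())
      return 0;
    return demo_timer_timeout_ms();
  }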
The MPLS subsystem manages MPLS labels and handles their allocation to
MPLS-aware routing protocols. These labels are then attached to IP or VPN
routes representing label switched paths -- LSPs.
There was already preliminary MPLS support consisting of the MPLS label
net_addr, MPLS routing tables with static MPLS routes, remote labels in
next hops, and kernel protocol support.
This patch adds the MPLS domain as a basic structure representing local
label space with dynamic label allocator and configurable label ranges.
To represent LSPs, allocated local labels can be attached as route
attributes to IP or VPN routes.
There are several steps for handling LSP routes in routing protocols --
deciding to which forwarding equivalence class (FEC) the LSP route
belongs, allocating labels for new FECs, announcing MPLS routes for new
FECs, attaching labels to LSP routes. The FEC map structure implements the
basic code for managing FECs in routing protocols, so existing
protocols can be made MPLS-aware by adding a FEC map and delegating
most of the work related to local label management to it.
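A hedged outline of how a protocol might drive these steps through a FEC map;
all names below are illustrative, not the actual API:

  /* illustrative types and helpers, not the real BIRD FEC map interface */
  struct demo_fec_map;
  struct demo_fec;
  struct demo_route;

  struct demo_fec *demo_fec_get(struct demo_fec_map *m, const struct demo_route *r);
  unsigned         demo_fec_label(const struct demo_fec *fec);
  void             demo_fec_announce_mpls(struct demo_fec_map *m, struct demo_fec *fec);
  void             demo_route_set_local_label(struct demo_route *r, unsigned label);

  /* Handle one LSP route: classify, allocate, announce, attach. */
  static void
  demo_handle_lsp_route(struct demo_fec_map *m, struct demo_route *r)
  {
    struct demo_fec *fec = demo_fec_get(m, r);           /* decide FEC, allocate label if new */
    demo_fec_announce_mpls(m, fec);                      /* announce MPLS route for the FEC */
    demo_route_set_local_label(r, demo_fec_label(fec));  /* attach label to the LSP route */
  }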
If the protocol supports route refresh on export, we keep the stop-start
method of route refeed. This applies to BGP with ERR or with an export
table enabled, and to OSPF, Babel, RIP and Pipe.
For BGP without ERR or for future selective ROA reloads, we're adding an
auxiliary export request that does the refeed while the main export request
is running, somewhat resembling the original BIRD 2 refeed method.
There is also a refeed request queue to keep track of different refeed
requests.
For now, there are 4 phases: Necessary (device), Connector (kernel, pipe), Generator (static, rpki) and Regular.
Protocols are started and reconfigured in order from Necessary to Regular, and shut down in the reverse order.
This way, the kernel protocol can flush routes before actually being shut down.
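The ordering could be modeled roughly like this (illustrative names, not
necessarily the real enum):

  /* illustrative ordering of the four phases described above */
  enum demo_proto_phase {
    PHASE_NECESSARY = 0,   /* device */
    PHASE_CONNECTOR,       /* kernel, pipe */
    PHASE_GENERATOR,       /* static, rpki */
    PHASE_REGULAR,
  };

  static void
  demo_start_all(void)
  {
    for (int ph = PHASE_NECESSARY; ph <= PHASE_REGULAR; ph++)
      ;   /* start / reconfigure protocols belonging to this phase */
  }

  static void
  demo_shutdown_all(void)
  {
    for (int ph = PHASE_REGULAR; ph >= PHASE_NECESSARY; ph--)
      ;   /* shut down this phase; kernel thus flushes routes before it stops */
  }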
This variant of logging avoids calling write() for every log line,
allowing for waitless logging. This makes heavy logging less heavy
and more useful for race condition debugging.
The original logging routines were locking a common mutex. This led to
massive underperformance and unwanted serialization when logging heavily,
due to lock contention. Now the logging is lockless, though still
serializing on write() syscalls to the same file descriptor.
This change also brings in persistent logging channel structures and
thus avoids writing into active configuration data structures during
regular run.
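A rough sketch of the single-write idea, not the actual sysdep/unix/log.c
code: each message is formatted into a local buffer first, so the only
remaining serialization point is the write() syscall itself.

  #include <stdarg.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Format the whole line locally, then emit it with one write().
   * No shared mutex is needed; the kernel serializes writes to the fd. */
  static void
  demo_log_line(int fd, const char *fmt, ...)
  {
    char buf[1024];
    va_list args;

    va_start(args, fmt);
    int n = vsnprintf(buf, sizeof buf, fmt, args);
    va_end(args);

    if (n < 0)
      return;
    if (n >= (int) sizeof buf)
      n = sizeof buf - 1;         /* truncated; keep room for the newline */

    buf[n] = '\n';                /* replace the terminating NUL with a newline */
    (void) write(fd, buf, n + 1); /* single syscall per message */
  }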
Despite not having a 'master interface' defined, VRF interfaces should be
treated as being inside their respective VRFs. They behave as a loopback for
the respective VRF. Treating the VRF interface as inside the VRF allows
e.g. OSPF to pick up IP addresses defined on the VRF interface.
For this, we also need to tell VRF interfaces apart from regular interfaces.
Extend the Netlink code to parse the interface type and mark VRF interfaces
with the IF_VRF flag.
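A hedged sketch of the detection; IFLA_LINKINFO and IFLA_INFO_KIND are the
real netlink attributes carrying the interface kind, while the helper below
is illustrative:

  #include <string.h>
  #include <stdbool.h>

  /* 'kind' would come from the nested IFLA_LINKINFO / IFLA_INFO_KIND
   * attribute of an RTM_NEWLINK message */
  static bool
  demo_iface_is_vrf(const char *kind)
  {
    return kind && !strcmp(kind, "vrf");
  }

  /* ... the caller would then set the IF_VRF flag on the interface. */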
Based on the patch from Erin Shepherd, thanks!
It is necessary for IPv4 over IPv6 nexthop support on FreeBSD,
and RTA_VIA is not really related to MPLS.
It breaks the build on some very old systems like Debian 8 and CentOS 7,
but we generally do not support kernels older than 4.14 LTS anyway.
Add a current_time_now() function which gets an immediate monotonic
timestamp instead of using the cached value from the event loop. This is
useful for callers that need precise times, such as the Babel RTT
measurement code.
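A plausible shape of such a helper, assuming BIRD's microsecond btime modeled
here as a 64-bit integer (the real implementation may differ):

  #include <stdint.h>
  #include <time.h>

  typedef int64_t btime_demo;   /* microseconds, modeled after BIRD's btime */

  /* Read the monotonic clock directly instead of returning the value
   * cached by the event loop; useful when precise timing matters. */
  static btime_demo
  current_time_now_demo(void)
  {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (btime_demo) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
  }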
Minor changes by committer.
A forgotten else-clause caused BIRD to treat some pseudo-random place in
memory as an fd-pair. This was happening only on startup of the first
thread in a group, and the value there in memory was typically zero ... and
writing to stdin succeeded.
When running BIRD with stdin not present (like systemd does), it died on
this spurious write. Now it seems to work correctly.
Thanks to Daniel Suchy <danny@danysek.cz> for reporting.
http://trubka.network.cz/pipermail/bird-users/2023-May/016929.html
The original algorithm was suffering from an ABA race condition:
A: fp = page_stack
B: completely allocates the same page and writes into it some data
A: unsuspecting, loads (invalid) next = fp->next
B: finishes working with the page and returns it back to page_stack
A: compare-exchange page_stack: fp => next succeeds and writes garbage
to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
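For illustration, a minimal model of the fixed pop; the real allocator
differs, but the point is that taking a lock around the head manipulation
removes the unsynchronized fp->next read that opened the ABA window:

  #include <stdatomic.h>
  #include <stddef.h>

  struct free_page { struct free_page *next; };

  static struct free_page *page_stack;               /* hot page list head */
  static atomic_flag page_lock = ATOMIC_FLAG_INIT;   /* the "implicit spinlock" */

  static struct free_page *
  pop_page(void)
  {
    while (atomic_flag_test_and_set_explicit(&page_lock, memory_order_acquire))
      ;   /* spin; the critical section below is only a few instructions */

    struct free_page *fp = page_stack;
    if (fp)
      page_stack = fp->next;   /* safe: push_page() takes the same lock,
                                  so fp cannot be reused underneath us */

    atomic_flag_clear_explicit(&page_lock, memory_order_release);
    return fp;
  }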
If a thread encounters timeout == 0 for poll, it considers itself
"busy" and with some hysteresis it tries to drop loops for others to
pick up, thus better distributing work between threads.
Memory allocation is a fragile part of BIRD and we need to check that
everybody is using the resource pools in an appropriate way. To assure
this, all the resource pools are associated with locking domains and
every resource manipulation is thoroughly checked to ensure that the
appropriate locking domain is locked.
With transitive resource manipulation like resource dumping or mass free
operations, domains are locked and unlocked on the go, thus we require
pool domains to have a higher order than their parents to allow for these
transitive operations.
Adding pool locking revealed some cases of insecure memory manipulation
and this commit fixes that as well.
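Conceptually, the check added to every resource manipulation looks something
like this (illustrative names):

  #include <assert.h>
  #include <stdbool.h>

  /* illustrative stand-ins for BIRD's locking domains and resource pools */
  struct demo_domain { int order; };
  struct demo_pool   { struct demo_domain *domain; };

  static bool
  demo_domain_is_locked(const struct demo_domain *d)
  {
    (void) d;
    return true;   /* stub; the real check asks the locking subsystem */
  }

  /* Every allocation, free or dump on a pool first checks that the pool's
   * locking domain is actually held. When walking into child pools (dump,
   * mass free), the child's domain must have a higher order than the
   * parent's so it can be locked while the parent's is already held. */
  static void
  demo_pool_touch(struct demo_pool *p)
  {
    assert(demo_domain_is_locked(p->domain));
    /* ... the actual resource manipulation follows ... */
  }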
Support for IPv4 routes with IPv6 nexthops was implemented in FreeBSD
13.1; this patch allows importing and exporting such routes from/to the kernel.
Minor change from committer.
Now sk_open() requires an explicit IO loop to open the socket in. Also
specific functions for socket RX pause / resume are added to allow for
BGP corking.
And last but not least, socket reloop is now synchronous to resolve
weird cases of the target loop stopping before actually picking up the
relooped socket. Now the caller must ensure that both loops are locked
while relooping, and this way all sockets always have their respective
loop.
If there are lots of loops in a single thread and only some of the loops
are actually active, the other loops are now kept aside and not checked
until they actually get some timers, events or active sockets.
This should help with extreme loads like 100k tables and protocols.
Also, the ping and loop pickup mechanism was allowing subtle race
conditions. Collisions between loop ping and pickup are now handled properly.