The UDP logging had to be substantially rewritten due to a different
logging backend and reconfiguration mechanisms.
Conflicts:
doc/bird.sgml
sysdep/unix/config.Y
sysdep/unix/io.c
sysdep/unix/log.c
sysdep/unix/unix.h
There is a long-known CC attribute cleanup which allows calling a custom
cleanup function when an auto-storage variable ceases to exist. We're
gonna use it for end-of-loop and leave-locked-block macros.
This commit adds a static unit test for this compiler feature to be sure
that it really does what we want. We're looking forward to the next ISO
C standard where this may finally get a nice syntax and standardization.
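A minimal sketch of what such a static test boils down to (illustrative
code, not the actual test in the tree):

  #include <assert.h>

  static int cleanups_run;  /* counts how many times the cleanup fired */

  /* cleanup callback: gets a pointer to the variable going out of scope */
  static void mark_cleanup(int *p)
  {
    cleanups_run += *p;
  }

  static void scoped_block(void)
  {
    __attribute__((cleanup(mark_cleanup))) int token = 1;
    /* on return, mark_cleanup(&token) is called automatically */
  }

  int main(void)
  {
    scoped_block();
    assert(cleanups_run == 1);  /* the cleanup must have run exactly once */
    return 0;
  }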
Add a new protocol offering route aggregation.
The user can specify a list of route attributes in the configuration file
and run route aggregation on the export side of the pipe protocol. Routes
are sorted, and for every group of equivalent routes a new route is
created and exported to the routing table. It is also possible to specify
a filter which runs for every route before aggregation.
Furthermore, it will be possible to set attributes of new routes
according to attributes of the aggregated routes.
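A rough sketch of the core sort-and-group idea, with toy types standing
in for real routes and for the configured attribute list (illustrative
only, not the actual protocol code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* toy route: the key stands in for the configured attribute list */
  struct toy_route {
    char key[16];   /* routes with equal keys are considered equivalent */
    int  id;
  };

  static int cmp_by_key(const void *a, const void *b)
  {
    return strcmp(((const struct toy_route *) a)->key,
                  ((const struct toy_route *) b)->key);
  }

  int main(void)
  {
    struct toy_route routes[] = {
      { "as65000", 1 }, { "as65001", 2 }, { "as65000", 3 },
    };
    size_t n = sizeof routes / sizeof routes[0];

    /* sort by key, then emit one aggregate per group of equivalent routes */
    qsort(routes, n, sizeof routes[0], cmp_by_key);

    for (size_t i = 0; i < n; )
    {
      size_t j = i;
      while (j < n && !strcmp(routes[j].key, routes[i].key))
        j++;

      printf("aggregate for key %s covering %zu routes\n",
             routes[i].key, j - i);
      i = j;
    }
    return 0;
  }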
This is a work in progress.
Original work by Igor Putovny, subsequent cleanups and finalization by
Maria Matejka.
This is a split-commit of the neighboring aggregator branch
with slightly improved lvalue handling, to make the merge into v3 easier.
Some [redacted] (yes, myself) had a really bad idea
to rename nest/route.h to nest/rt.h while refactoring
some data structures out of it.
This led to unnecessarily complex problems with
merging updates from v2. Reverting this change
to make my life a bit easier.
At least it needed only one find-sed command:
find -name '*.[chlY]' -type f -exec sed -i 's#nest/rt.h#nest/route.h#' '{}' +
This merge was particularly difficult. I finally resorted to deleting the
symbol scope active flag altogether and replacing its usage by other
means.
Also I had to update custom route attribute registration to fit
both the scope updates in v2 and the data model in v3.
The MPLS subsystem manages MPLS labels and handles their allocation to
MPLS-aware routing protocols. These labels are then attached to IP or VPN
routes representing label switched paths -- LSPs.
There was already a preliminary MPLS support consisting of MPLS label
net_addr, MPLS routing tables with static MPLS routes, remote labels in
next hops, and kernel protocol support.
This patch adds the MPLS domain as a basic structure representing local
label space with dynamic label allocator and configurable label ranges.
To represent LSPs, allocated local labels can be attached as route
attributes to IP or VPN routes.
There are several steps for handling LSP routes in routing protocols --
deciding to which forwarding equivalence class (FEC) the LSP route
belongs, allocating labels for new FECs, announcing MPLS routes for new
FECs, attaching labels to LSP routes. The FEC map structure implements
basic code for managing FECs in routing protocols, so existing protocols
can be made MPLS-aware by adding a FEC map and delegating most of the
work related to local label management to it.
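As a purely hypothetical illustration of those steps, with invented names
that are not the real BIRD MPLS API:

  #include <stdio.h>

  struct toy_fec {
    unsigned key;          /* identifies the forwarding equivalence class */
    unsigned local_label;  /* 0 = not allocated yet */
    int announced;         /* MPLS route for this FEC already announced? */
  };

  static unsigned next_label = 1000;   /* stand-in for the label allocator */

  static void toy_handle_lsp_route(struct toy_fec *fec, unsigned route_id)
  {
    /* 1. deciding which FEC the route belongs to is assumed done already */

    /* 2. allocate a local label for a newly seen FEC */
    if (!fec->local_label)
      fec->local_label = next_label++;

    /* 3. announce an MPLS route for the new FEC */
    if (!fec->announced)
    {
      printf("announce MPLS route for label %u\n", fec->local_label);
      fec->announced = 1;
    }

    /* 4. attach the local label to the LSP route as an attribute */
    printf("route %u gets local label %u\n", route_id, fec->local_label);
  }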
If the protocol supports route refresh on export, we keep the stop-start
method of route refeed. This applies to BGP with ERR or with the export
table on, and to OSPF, Babel, RIP and Pipe.
For BGP without ERR or for future selective ROA reloads, we're adding an
auxiliary export request, doing the refeed while the main export request
is running, somewhat resembling the original refeed method of BIRD 2.
There is also a refeed request queue to keep track of different refeed
requests.
The original logging routines were locking a common mutex. This led to
massive underperformance and unwanted serialization when logging heavily,
due to lock contention. Now the logging is lockless, though still
serializing on write() syscalls to the same file descriptor.
This change also brings in persistent logging channel structures and
thus avoids writing into active configuration data structures during
regular run.
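The key idea, sketched here under the assumption that every message fits
into one local buffer: format the whole line first and emit it with a
single write(), so concurrent writers are serialized only by the kernel
on the target file descriptor:

  #include <stdarg.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Sketch: no shared mutex is taken; one write() per message. */
  static void log_line(int fd, const char *fmt, ...)
  {
    char buf[1024];
    va_list args;

    va_start(args, fmt);
    int len = vsnprintf(buf, sizeof buf - 1, fmt, args);
    va_end(args);

    if (len < 0)
      return;
    if ((size_t) len > sizeof buf - 1)
      len = sizeof buf - 1;   /* truncated message still gets written */

    buf[len++] = '\n';
    (void) write(fd, buf, len);   /* single syscall per message */
  }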
Add a current_time_now() function which gets an immediate monotonic
timestamp instead of using the cached value from the event loop. This is
useful for callers that need precise times, such as the Babel RTT
measurement code.
Minor changes by committer.
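A minimal sketch of what such an immediate-timestamp helper typically
boils down to (using CLOCK_MONOTONIC directly; the conversion to BIRD's
microsecond btime is simplified here):

  #include <stdint.h>
  #include <time.h>

  /* Query the monotonic clock right now instead of returning the value
   * cached by the event loop; result is in microseconds. */
  static int64_t current_time_now_sketch(void)
  {
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
  }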
Backport some changes from the branch oz-parametric-hashes. Replace the
naive hash function for IPv6 addresses, fix hashing of VPNx (where the
upper half of the RD was ignored), and fix hashing of MPLS labels (where
identity was used).
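For the VPN case, the point is simply to mix both 32-bit halves of the
64-bit route distinguisher into the hash; a generic version of such a fix
might look like this (the multipliers are ordinary hash constants, not
necessarily the ones BIRD uses):

  #include <stdint.h>

  /* Mix both halves of a 64-bit route distinguisher into one 32-bit hash
   * instead of hashing only the lower half. */
  static inline uint32_t rd_hash(uint64_t rd)
  {
    uint32_t hi = (uint32_t) (rd >> 32);
    uint32_t lo = (uint32_t) rd;

    return (hi * 2654435761u) ^ (lo * 0x9e3779b9u);
  }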
The original algorithm was suffering from an ABA race condition:
A: fp = page_stack
B: completely allocates the same page and writes into it some data
A: unsuspecting, loads (invalid) next = fp->next
B: finishes working with the page and returns it back to page_stack
A: compare-exchange page_stack: fp => next succeeds and writes garbage
to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
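One way to build such an implicit lock, sketched with C11 atomics and
illustrative names (the real allocator may differ in detail): swap a
sentinel "locked" value into the stack head, so a concurrent allocator
spins instead of racing on the next pointer:

  #include <stdatomic.h>
  #include <stddef.h>

  struct free_page { struct free_page *next; };

  /* Illustrative names; not the actual BIRD allocator code. */
  static _Atomic(struct free_page *) page_stack;
  #define PAGE_STACK_LOCKED ((struct free_page *) 1)  /* never a real page */

  static struct free_page *pop_page(void)
  {
    struct free_page *fp;

    /* Implicit spinlock: grab the head by swapping in the sentinel, so no
     * other thread can read fp->next meanwhile -- no ABA window left. */
    while ((fp = atomic_exchange_explicit(&page_stack, PAGE_STACK_LOCKED,
                    memory_order_acq_rel)) == PAGE_STACK_LOCKED)
      ;  /* another thread holds the stack; spin */

    /* publish the new head (or NULL if the stack was empty) */
    atomic_store_explicit(&page_stack, fp ? fp->next : NULL,
                          memory_order_release);
    return fp;
  }

  static void push_page(struct free_page *fp)
  {
    struct free_page *head =
      atomic_load_explicit(&page_stack, memory_order_acquire);

    do {
      while (head == PAGE_STACK_LOCKED)  /* wait out a concurrent pop */
        head = atomic_load_explicit(&page_stack, memory_order_acquire);
      fp->next = head;
    }
    while (!atomic_compare_exchange_weak_explicit(&page_stack, &head, fp,
              memory_order_acq_rel, memory_order_acquire));
  }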
The symbol table used just the symbol name as a key, and used a trick
with an active flag to find symbols in active scopes with one hash table
lookup. The disadvantage is that it can degenerate to O(n) for negative
queries in situations where there are many symbols with the same name in
different scopes.
Thanks to Yanko Kaneti for the bug report.
Memory allocation is a fragile part of BIRD and we need to check that
everybody is using the resource pools in an appropriate way. To ensure
this, all the resource pools are associated with locking domains and
every resource manipulation thoroughly checks whether the appropriate
locking domain is locked.
With transitive resource manipulation like resource dumping or mass free
operations, domains are locked and unlocked on the go, thus we require
pool domains to have a higher order than their parent to allow for these
transitive operations.
Adding pool locking revealed some cases of insecure memory manipulation
and this commit fixes that as well.
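In spirit, every resource operation now starts with a check of roughly
this shape (hypothetical names, just to show the idea, not the real BIRD
macros):

  #include <stdlib.h>
  #include <stddef.h>

  struct toy_domain { int locked_by_me; };
  struct toy_pool   { struct toy_domain *domain; };

  /* refuse to touch a pool whose locking domain is not held */
  #define ASSERT_POOL_LOCKED(p) \
    do { if (!(p)->domain->locked_by_me) abort(); } while (0)

  static void *toy_pool_alloc(struct toy_pool *p, size_t sz)
  {
    ASSERT_POOL_LOCKED(p);
    return malloc(sz);   /* stand-in for the real allocator */
  }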
When lp_save() is called on an empty linpool, then some allocation is
done and lp_restore() is called afterwards, the linpool is restored but
the used chunks become inaccessible. Fix it.
This change adds one pointer worth of memory to every list node.
Keeping this information helps auditing the lists, checking that the
node indeed is outside of list or inside the right one.
The typed lists shouldn't be used anywhere under memory pressure anyway,
thus the one added pointer isn't significant.
Now sk_open() requires an explicit IO loop to open the socket in. Also
specific functions for socket RX pause / resume are added to allow for
BGP corking.
And last but not least, socket reloop is now synchronous to resolve
weird cases of the target loop stopping before actually picking up the
relooped socket. Now the caller must ensure that both loops are locked
while relooping, and this way all sockets always have their respective
loop.
If there are lots of loops in a single thread and only some of the loops
are actually active, the other loops are now kept aside and not checked
until they actually get some timers, events or active sockets.
This should help with extreme loads like 100k tables and protocols.
Also, the ping and loop pickup mechanism was allowing subtle race
conditions. Collisions between loop ping and pickup are now handled
properly.
The import table feed wasn't resetting the table-specific route values
like REF_FILTERED and thus made the route look filtered even though it
should have been re-evaluated as accepted.
and "%M" formats expect "Input/output error" message but musl returns
"I/O error". Proposed change compares the printf output with string
returned from strerror function for EIO constant.
See-also: https://bugs.gentoo.org/836713
Minor change from committer.
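A minimal sketch of that kind of check in plain C, using the libc "%m"
extension and strerror() directly (the actual test exercises BIRD's own
printf code instead):

  #include <assert.h>
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
    char expected[256], got[256];

    /* Build the expected string from the libc itself instead of
     * hardcoding "Input/output error", so glibc and musl both pass. */
    snprintf(expected, sizeof expected, "IO: %s", strerror(EIO));

    /* Stand-in for the tested formatting call. */
    errno = EIO;
    snprintf(got, sizeof got, "IO: %m");

    assert(!strcmp(expected, got));
    return 0;
  }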
When a linpool is used to allocate a one-off big load of memory, it
makes no sense to keep that amount of memory for future use inside the
linpool. Contrary to previous implementations where the memory was
directly free()d, we now use the page allocator, whose internal cache
keeps the released pages for us so that subsequent allocations simply
get them back.
And even if the page cleanup routine kicks in in between, the pages only
get madvise()d, not munmap()ed, so the performance impact is negligible.
This may fix some memory usage peaks in extreme cases.
On large configurations, too many threads would spawn with one thread
per loop. Therefore, threads may now run multiple loops at once. The
thread count is configurable and may be changed at runtime. All threads
are spawned on startup.
This change helps with memory bloating. BIRD filters need large
temporary memory blocks to store their stack, and memory management also
keeps its hot page storage per-thread.
Known bugs:
* Thread autobalancing is not yet implemented.
* Low latency loops are executed together with standard loops.
Log message before aborting due to watchdog timeout. We have to use an
async-signal-safe write to the debug log, as it is done in a signal
handler.
Minor changes from committer.
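A sketch of the async-signal-safe pattern, assuming a plain file
descriptor for the debug log: a fixed message and a single write(),
nothing else in the handler:

  #include <signal.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  static int debug_log_fd = 2;   /* stand-in for the debug log fd */

  /* Only async-signal-safe calls here: no malloc, no stdio, no locks.
   * write() and abort() are on the POSIX async-signal-safe list. */
  static void watchdog_handler(int sig)
  {
    static const char msg[] = "Watchdog timer expired, exiting\n";

    (void) sig;
    (void) write(debug_log_fd, msg, sizeof msg - 1);
    abort();
  }

  static void install_watchdog_handler(void)
  {
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = watchdog_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);
  }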
After a suggestion by Santiago, I added the direct list pointer into
events and the events are now using this value to check whether the
event is active or not. Also the whole trick with a sentinel node unioned
with the event list is now gone.
For debugging, there is also an internal circular buffer to store what
has been recently happening in event code before e.g. a crash happened.
By default, this debug is off and must be manually enabled in
lib/event.c as it eats quite some time and space.
Had to fix route source locking inside the BGP export table, as we need
to keep the route sources properly allocated until even the last pending
BGP update is sent out, so that the export table printout stays accurate.
Some unit tests weren't initializing the birdloop, trying to write the
birdloop ping into stdin. Fixed this and also forced stdin to be closed
on startup of every test, just to be sure that CI and the local build
behave the same in this respect. (CI was failing on this while the local
build was not.)
In a multithreaded environment, we need to pass messages between workers.
This is done by queuing events to their respective queues. The
doubly-linked list is not really useful for that as it needs locking
everywhere.
This commit rewrites the event subsystem to use a single-linked list
where events are enqueued by a single atomic instruction and the queue
is processed after atomically moving the whole queue aside.
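A minimal sketch of that pattern with C11 atomics, with illustrative
names: producers push with one compare-and-swap on the head, the consumer
grabs the whole list with one exchange:

  #include <stdatomic.h>
  #include <stddef.h>

  struct ev {
    struct ev *next;
    void (*hook)(void *);
    void *data;
  };

  /* Illustrative single-linked event queue; newest event at the head. */
  static _Atomic(struct ev *) queue_head;

  /* Producer side: enqueue with one compare-and-swap on the head. */
  static void ev_send(struct ev *e)
  {
    e->next = atomic_load_explicit(&queue_head, memory_order_relaxed);
    while (!atomic_compare_exchange_weak_explicit(&queue_head, &e->next, e,
              memory_order_release, memory_order_relaxed))
      ;  /* e->next now holds the current head; retry */
  }

  /* Consumer side: atomically move the whole queue aside, then run it.
   * Note the grabbed batch comes out newest-first here; a real
   * implementation may reverse it to keep enqueue order. */
  static void ev_run_all(void)
  {
    struct ev *list = atomic_exchange_explicit(&queue_head, NULL,
                                               memory_order_acquire);

    while (list)
    {
      struct ev *e = list;
      list = e->next;
      e->hook(e->data);
    }
  }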
There were more conflicts than I'd like to see, most notably in route
export. If a bisect identifies this commit with something related, it
may simply be true that this commit introduces that bug. Let's hope it
doesn't happen.
For BGP LLGR purposes, there was an API allowing a protocol to directly
modify their stale routes in table before flushing them. This API was
called by the table prune routine which violates the future locking
requirements.
Instead of this, BGP now requests a special route export and reimports
these routes into the table, allowing for asynchronous execution without
locking the table on export.
Until now, we were marking routes as REF_STALE and REF_DISCARD to clean
up old routes after route refresh. This needed a synchronous route table
walk at both the beginning and the end of the route refresh routine,
marking the routes with the flags.
We avoid these walks by using a stale counter. Every route contains:
u8 stale_cycle;
Every import hook contains:
u8 stale_set;
u8 stale_valid;
u8 stale_pruned;
u8 stale_pruning;
In the base state, stale_set == stale_valid == stale_pruned ==
stale_pruning and the stale_cycle of all routes also has the same value.
The route refresh looks as follows:
+ ----------- + --------- + ----------- + ------------- + ------------ +
|             | stale_set | stale_valid | stale_pruning | stale_pruned |
| Base        | x         | x           | x             | x            |
| Begin       | x+1       | x           | x             | x            |
... now routes are being inserted with stale_cycle == (x+1)
| End         | x+1       | x+1         | x             | x            |
... now the table pruning routine is scheduled
| Prune begin | x+1       | x+1         | x+1           | x            |
... now routes with stale_cycle not between stale_set and stale_valid
    are deleted
| Prune end   | x+1       | x+1         | x+1           | x+1          |
+ ----------- + --------- + ----------- + ------------- + ------------ +
The pruning routine is asynchronous and may have high latency in
high-load environments. Therefore, multiple route refresh requests may
happen before the pruning routine starts, leading to this situation:
| Prune begin | x+k       | x+k         | x -> x+k      | x            |
... or even
| Prune begin | x+k+1     | x+k         | x -> x+k      | x            |
... if the prune event starts while another route refresh is running.
In such a case, the pruning routine still deletes routes not fitting
between stale_set and stale_valid, effectively pruning the remnants
of all unpruned route refreshes from before:
| Prune end   | x+k       | x+k         | x+k           | x+k          |
In extremely rare cases, too many route refreshes may happen before any
route prune routine finishes. If the difference between stale_valid and
stale_pruned becomes more than 128 when requesting another route
refresh, the routine walks the table synchronously and resets all the
stale values to the base state, while logging a warning.
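Since these values are u8 counters that wrap around, the checks boil down
to modular arithmetic, roughly like this (illustrative expressions, not
the exact BIRD code):

  #include <stdint.h>

  /* Keep a route iff its stale_cycle lies between stale_valid and
   * stale_set, computed modulo 256 so counter wrap-around is harmless. */
  static int route_is_fresh(uint8_t stale_cycle, uint8_t stale_valid,
                            uint8_t stale_set)
  {
    return (uint8_t)(stale_cycle - stale_valid)
        <= (uint8_t)(stale_set - stale_valid);
  }

  /* Too many refreshes piled up before pruning finished: fall back to
   * the synchronous walk that resets everything to the base state. */
  static int need_synchronous_reset(uint8_t stale_valid,
                                    uint8_t stale_pruned)
  {
    return (uint8_t)(stale_valid - stale_pruned) > 128;
  }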
Until now, if the export table was enabled, Nest was storing exactly the
route before rt_notify() was called on it. This was quite sloppy and
spooky and it also didn't reflect the changes BGP does before sending.
And as BGP is storing the routes to be sent anyway, we are simply keeping
the already-sent routes in there to better rule out unneeded reexports.
Some of the route attributes (IGP metric, preference) make no sense in
BGP, therefore these will probably be replaced by something sensible.
Also the nexthop shown in the short output is the BGP nexthop.
It's now possible to pause iteration through a hash. This requires a
struct hash_iterator to be allocated somewhere handy.
The iteration itself is surrounded by HASH_WALK_ITER and
HASH_WALK_ITER_END. Call HASH_WALK_ITER_PUT to ask for pausing; it may
still do some more iterations until it comes to a suitable pausing
point. The iterator must be initialized to an empty structure. No cleanup
is needed if iteration is abandoned in between.
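The underlying idea, sketched generically in plain C rather than with the
actual macros: the iterator records how far the walk got and pauses only
at a bucket boundary, which is why a few more nodes may still be visited
after asking to pause:

  #include <stddef.h>

  struct node { struct node *next; int key; };
  struct toy_hash { struct node **buckets; size_t n_buckets; };

  /* A zeroed iterator means "start from the beginning". */
  struct toy_hash_iterator { size_t next_bucket; };

  /* Visit up to 'budget' nodes, then pause. Returns 1 when the whole
   * table has been walked, 0 when paused in the middle. */
  static int toy_hash_walk_some(struct toy_hash *h,
                                struct toy_hash_iterator *it,
                                size_t budget, void (*visit)(struct node *))
  {
    size_t done = 0;

    for (; it->next_bucket < h->n_buckets; it->next_bucket++)
    {
      if (done >= budget)
        return 0;    /* pause here; the next call resumes at this bucket */

      for (struct node *n = h->buckets[it->next_bucket]; n; n = n->next)
      {
        visit(n);
        done++;
      }
    }

    it->next_bucket = 0;   /* reset so the iterator is reusable */
    return 1;
  }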
There were quite a lot of conflicts in the flowspec validation code which
ultimately led to some code being rewritten a bit, rather than just
adapted from one branch or the other, yet it still stays within the
limits of a merge.
For now, all route attributes are stored as eattrs in ea_list. This
should make route manipulation easier and it also allows for a layered
approach to route attributes, where updates from filters will be stored
as an overlay over the previous version.
As there is either a nexthop or another destination specification
(or nothing in case of ROAs and Flowspec), it may be merged together.
This code is somewhat quirky and should be replaced in the future by a
better implementation of the nexthop.
Also flowspec validation result has its own attribute now as it doesn't
have anything to do with route nexthop.
This doesn't do anything more than put the whole structure inside
adata. The overall performance is certainly going downhill; we'll
optimize this later.
Anyway, this is one of the last items inside rta and in several
commits we may drop rta completely and move to eattrs-only routes.
The route scope attribute was used for simple user route marking. As
there is a better tool for this (custom attributes), the old and limited
way can be dropped.