Add a new protocol offering route aggregation.
The user can specify a list of route attributes in the configuration
file and run route aggregation on the export side of the pipe protocol.
Routes are sorted, and for every group of equivalent routes a new route
is created and exported to the routing table. It is also possible to
specify a filter which is run for every route before aggregation.
Furthermore, it will be possible to set attributes of the new routes
according to the attributes of the aggregated routes.
This is a work in progress.
Original work by Igor Putovny, subsequent cleanups and finalization by
Maria Matejka.
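Roughly, the grouping step described above could be sketched in plain C
as follows; the struct, field and helper names are invented for
illustration and the actual aggregator works on BIRD's internal route
and attribute structures instead of strings.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical flattened view of a route: the network prefix plus the
   * values of the attributes selected for aggregation, rendered as a key. */
  struct agg_route {
    char key[64];
    char prefix[32];
  };

  static int agg_cmp(const void *a, const void *b)
  {
    return strcmp(((const struct agg_route *) a)->key,
                  ((const struct agg_route *) b)->key);
  }

  /* Sort the candidate routes and announce one aggregate per group of
   * routes whose selected attributes compare equal. */
  static void aggregate(struct agg_route *r, size_t n)
  {
    qsort(r, n, sizeof(*r), agg_cmp);

    for (size_t start = 0, i = 1; i <= n; i++)
      if ((i == n) || strcmp(r[start].key, r[i].key))
      {
        /* In the real protocol this would build a new route, optionally
         * set its attributes from the aggregated ones, and export it. */
        printf("aggregate of %zu routes, key: %s\n", i - start, r[start].key);
        start = i;
      }
  }

  int main(void)
  {
    struct agg_route routes[] = {
      { "bgp_path: 65000 65001", "192.0.2.0/25" },
      { "bgp_path: 65000 65001", "192.0.2.128/25" },
      { "bgp_path: 65000 65002", "198.51.100.0/24" },
    };

    aggregate(routes, sizeof(routes) / sizeof(routes[0]));
    return 0;
  }

Sorting first keeps the grouping itself a single linear pass over the
sorted routes.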
This is a split-commit of the neighboring aggregator branch with
slightly improved lvalue handling, to make the merge into v3 easier.
Some [redacted] (yes, myself) had a really bad idea
to rename nest/route.h to nest/rt.h while refactoring
some data structures out of it.
This led to unnecessarily complex problems with
merging updates from v2. Reverting this change
to make my life a bit easier.
At least it needed only one find-sed command:
find -name '*.[chlY]' -type f -exec sed -i 's#nest/rt.h#nest/route.h#' '{}' +
This merge was particularly difficult. I finally resorted to deleting
the symbol scope active flag altogether and replacing its usage by other
means.
I also had to update custom route attribute registration to fit both the
scope updates in v2 and the data model in v3.
If the protocol supports route refresh on export, we keep the
stop-start method of route refeed. This applies to BGP with ERR or with
an export table enabled, and to OSPF, Babel, RIP and Pipe.
For BGP without ERR, and for future selective ROA reloads, we're adding
an auxiliary export request which does the refeed while the main export
request is running, somewhat resembling the original BIRD 2 refeed
method.
There is also a refeed request queue to keep track of the different
refeed requests.
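As a rough sketch only, with invented names that do not match the
actual BIRD data structures, the queued-refeed idea resembles this:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct refeed_request {
    struct refeed_request *next;
    /* Optional predicate selecting what to refeed, e.g. only routes
     * affected by a changed ROA; NULL means refeed everything. */
    bool (*match)(const char *prefix);
  };

  struct refeed_queue {
    struct refeed_request *head, **tail;
  };

  static void refeed_enqueue(struct refeed_queue *q, struct refeed_request *r)
  {
    r->next = NULL;
    *q->tail = r;
    q->tail = &r->next;
  }

  /* The auxiliary export request replays the selected routes while the
   * primary export request stays attached and keeps exporting regular
   * updates. */
  static void refeed_run(struct refeed_queue *q, const char *const *table, size_t n)
  {
    for (struct refeed_request *r = q->head; r; r = r->next)
      for (size_t i = 0; i < n; i++)
        if (!r->match || r->match(table[i]))
          printf("refeed %s\n", table[i]);

    q->head = NULL;
    q->tail = &q->head;
  }

  int main(void)
  {
    const char *table[] = { "192.0.2.0/24", "198.51.100.0/24" };
    struct refeed_queue q = { .head = NULL, .tail = &q.head };
    struct refeed_request all = { .match = NULL };

    refeed_enqueue(&q, &all);
    refeed_run(&q, table, 2);
    return 0;
  }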
The original logging routines were locking a common mutex. Under heavy
logging this led to massive underperformance and unwanted serialization
due to lock contention. Now the logging is lockless, though it still
serializes on write() syscalls to the same file descriptor.
This change also brings in persistent logging channel structures and
thus avoids writing into the active configuration data structures during
regular run.
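The lockless part boils down to formatting the whole message into a
local buffer and emitting it with a single write(), so concurrent
writers never share a user-space lock; a minimal standalone
illustration, not BIRD's actual logging code:

  #include <stdarg.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Format the message locally and emit it with one write(); concurrent
   * writers only serialize inside the kernel on the shared file
   * descriptor. */
  static void log_line(int fd, const char *fmt, ...)
  {
    char buf[1024];
    va_list args;

    va_start(args, fmt);
    int len = vsnprintf(buf, sizeof(buf) - 1, fmt, args);
    va_end(args);

    if (len < 0)
      return;
    if ((size_t) len > sizeof(buf) - 1)
      len = sizeof(buf) - 1;

    buf[len] = '\n';
    write(fd, buf, (size_t) len + 1);  /* one syscall per message, no mutex */
  }

  int main(void)
  {
    log_line(2, "thread %d: %s", 1, "hello from a lockless logger");
    return 0;
  }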
Add a current_time_now() function which gets an immediate monotonic
timestamp instead of using the cached value from the event loop. This is
useful for callers that need precise times, such as the Babel RTT
measurement code.
Minor changes by committer.
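For illustration, a stand-in with the same intent: read CLOCK_MONOTONIC
directly instead of returning the value cached by the event loop. The
btime-as-microseconds representation is an assumption here, not taken
from the commit.

  #include <stdio.h>
  #include <time.h>

  typedef long long btime;  /* microseconds (assumed representation) */

  /* Stand-in for current_time_now(): query the monotonic clock now. */
  static btime current_time_now(void)
  {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (btime) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
  }

  int main(void)
  {
    /* RTT-style measurement: take precise timestamps around an operation. */
    btime sent = current_time_now();
    /* ... send a probe, receive the reply ... */
    btime rtt = current_time_now() - sent;
    printf("rtt: %lld us\n", rtt);
    return 0;
  }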
Backport some changes from the oz-parametric-hashes branch. Replace the
naive hash function for IPv6 addresses, fix hashing of VPNx (where the
upper half of the RD was ignored), and fix hashing of MPLS labels (where
identity was used).
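For example, a hash of a 64-bit route distinguisher should mix both
halves; an illustrative mixer (using the MurmurHash3 finalizer, not the
function actually backported):

  #include <stdint.h>
  #include <stdio.h>

  /* Mix both 32-bit halves of a 64-bit value (e.g. a VPN route
   * distinguisher) instead of hashing only the lower half. */
  static uint32_t u64_mix(uint64_t v)
  {
    v ^= v >> 33;
    v *= 0xff51afd7ed558ccdULL;
    v ^= v >> 33;
    return (uint32_t) v;
  }

  int main(void)
  {
    uint64_t rd_a = ((uint64_t) 65000 << 32) | 1;  /* RD 65000:1 */
    uint64_t rd_b = ((uint64_t) 65001 << 32) | 1;  /* differs only in the upper half */
    printf("%08x %08x\n", (unsigned) u64_mix(rd_a), (unsigned) u64_mix(rd_b));
    return 0;
  }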
The original algorithm was suffering from an ABA race condition:
A: fp = page_stack
B: completely allocates the same page and writes into it some data
A: unsuspecting, loads (invalid) next = fp->next
B: finishes working with the page and returns it back to page_stack
A: compare-exchange page_stack: fp => next succeeds and writes garbage
to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
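A standalone sketch of that idea, assuming the free pages form a
lock-free stack; names and details differ from the real allocator:

  #include <stdatomic.h>
  #include <stddef.h>

  struct free_page { struct free_page *next; };

  /* The stack head doubles as an implicit spinlock: while one thread is
   * inside the allocator, the head holds a sentinel and everyone else
   * spins. This rules out the ABA scenario above, where a page could be
   * reused and returned between loading fp and fp->next. */
  #define PAGE_STACK_LOCKED ((struct free_page *) 1)

  static _Atomic(struct free_page *) page_stack;

  static struct free_page *pop_page(void)
  {
    struct free_page *fp;

    /* Take the whole stack out, leaving the sentinel behind. */
    while ((fp = atomic_exchange(&page_stack, PAGE_STACK_LOCKED)) == PAGE_STACK_LOCKED)
      ;  /* somebody else is inside the allocator */

    /* Put back the remainder of the stack (or NULL if it was empty). */
    atomic_store(&page_stack, fp ? fp->next : NULL);
    return fp;
  }

  static void push_page(struct free_page *fp)
  {
    struct free_page *top;

    while ((top = atomic_exchange(&page_stack, PAGE_STACK_LOCKED)) == PAGE_STACK_LOCKED)
      ;
    fp->next = top;
    atomic_store(&page_stack, fp);
  }

  int main(void)
  {
    static struct free_page pg;
    push_page(&pg);
    return (pop_page() == &pg) ? 0 : 1;
  }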
The symbol table used just the symbol name as a key, and relied on a
trick with an active flag to find symbols in active scopes with one hash
table lookup. The disadvantage is that this can degenerate to O(n) for
negative queries in situations where there are many symbols with the
same name in different scopes.
Thanks to Yanko Kaneti for the bug report.
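One likely direction, and this is my assumption rather than something
stated in the commit itself, is to key the table by scope and name
together, so that same-named symbols from different scopes land in
different hash chains. Sketched with invented names:

  #include <stdint.h>
  #include <stdio.h>

  struct sym_scope { int dummy; };

  static uint32_t str_hash(const char *s)
  {
    uint32_t h = 5381;
    while (*s)
      h = h * 33 + (unsigned char) *s++;
    return h;
  }

  /* Mix the scope identity into the name hash (illustrative only). */
  static uint32_t symbol_hash(const struct sym_scope *scope, const char *name)
  {
    return str_hash(name) ^ (uint32_t) ((uintptr_t) scope * 2654435761u);
  }

  int main(void)
  {
    struct sym_scope global, local;
    printf("%08x %08x\n",
           (unsigned) symbol_hash(&global, "rt"),
           (unsigned) symbol_hash(&local, "rt"));
    return 0;
  }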
Memory allocation is a fragile part of BIRD and we need to check that
everybody uses the resource pools in an appropriate way. To ensure this,
all the resource pools are associated with locking domains and every
resource manipulation is checked to make sure the appropriate locking
domain is locked.
With transitive resource manipulations like resource dumping or mass
free operations, domains are locked and unlocked on the go, thus we
require pool domains to have a higher order than their parent's to allow
for these transitive operations.
Adding pool locking revealed some cases of unsafe memory manipulation
and this commit fixes that as well.
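A much simplified model of the checks, with invented names that do not
match BIRD's internals:

  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>

  struct lock_domain {
    unsigned order;   /* nested domains must be locked in increasing order */
    bool locked;      /* held by the current thread? (single-threaded demo) */
  };

  struct pool {
    struct pool *parent;
    struct lock_domain *domain;
  };

  static void domain_lock(struct lock_domain *d, unsigned current_max_order)
  {
    /* Transitive operations lock child domains on the go, which only
     * works if a pool's domain has a higher order than its parent's. */
    assert(d->order > current_max_order);
    d->locked = true;
  }

  /* Every resource manipulation checks its pool's domain. */
  static void *pool_alloc(struct pool *p, size_t size)
  {
    assert(p->domain->locked);
    (void) size;
    return NULL;  /* actual allocation elided in this sketch */
  }

  int main(void)
  {
    struct lock_domain root_dom = { .order = 1 }, child_dom = { .order = 2 };
    struct pool root = { .parent = NULL, .domain = &root_dom };
    struct pool child = { .parent = &root, .domain = &child_dom };

    domain_lock(&root_dom, 0);
    pool_alloc(&root, 64);

    /* Dumping or mass-freeing the root pool descends into the child pool,
     * locking its domain on the go; the order check keeps this sound. */
    domain_lock(&child_dom, root_dom.order);
    pool_alloc(&child, 64);
    return 0;
  }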
When lp_save() is called on an empty linpool, then some allocation is
done and lp_restore() is called afterwards, the linpool is restored but
the used chunks become inaccessible. Fix it.
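The triggering sequence, shown on a toy save/restore pool written just
for this illustration; it is not BIRD's linpool implementation:

  #include <assert.h>
  #include <stdlib.h>

  struct chunk { struct chunk *next; char data[4096]; };

  struct linpool {
    struct chunk *first, *current;  /* chunk list and the chunk in use */
    size_t used;                    /* bytes used in the current chunk */
  };

  struct lp_state { struct chunk *current; size_t used; };

  static void *lp_alloc(struct linpool *lp, size_t size)
  {
    if (!lp->current || lp->used + size > sizeof(lp->current->data))
    {
      struct chunk *c = calloc(1, sizeof(*c));
      if (!c)
        abort();
      if (lp->current)
        lp->current->next = c;
      else
        lp->first = c;
      lp->current = c;
      lp->used = 0;
    }

    void *p = lp->current->data + lp->used;
    lp->used += size;
    return p;
  }

  static void lp_save(struct linpool *lp, struct lp_state *s)
  {
    s->current = lp->current;
    s->used = lp->used;
  }

  static void lp_restore(struct linpool *lp, struct lp_state *s)
  {
    /* If the pool was empty at lp_save() time, rewind to the first chunk
     * allocated since then so it stays reachable and reusable. */
    lp->current = s->current ? s->current : lp->first;
    lp->used = s->current ? s->used : 0;
  }

  int main(void)
  {
    struct linpool lp = { 0 };
    struct lp_state s;

    lp_save(&lp, &s);    /* saved on an empty linpool */
    lp_alloc(&lp, 128);  /* forces allocation of the first chunk */
    lp_restore(&lp, &s); /* must not lose track of that chunk */

    assert(lp.current == lp.first);
    return 0;
  }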
This change adds one pointer's worth of memory to every list node.
Keeping this information helps with auditing the lists, checking that a
node is indeed outside of any list or inside the right one.
Typed lists shouldn't be used anywhere under memory pressure anyway, so
the one added pointer isn't significant.
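A simplified illustration of the audited node, not the actual
typed-list macros:

  #include <assert.h>
  #include <stddef.h>

  /* Each node remembers which list it is linked in, so insertion and
   * removal can be checked against the expected list. */
  struct tnode {
    struct tnode *next, *prev;
    struct tlist *list;   /* the one extra pointer per node */
  };

  struct tlist {
    struct tnode *head;
  };

  static void tlist_add(struct tlist *l, struct tnode *n)
  {
    assert(n->list == NULL);   /* must not already be in some list */
    n->list = l;
    n->prev = NULL;
    n->next = l->head;
    if (l->head)
      l->head->prev = n;
    l->head = n;
  }

  static void tlist_rem(struct tlist *l, struct tnode *n)
  {
    assert(n->list == l);      /* must be removed from the right list */
    if (n->prev)
      n->prev->next = n->next;
    else
      l->head = n->next;
    if (n->next)
      n->next->prev = n->prev;
    n->list = NULL;
  }

  int main(void)
  {
    struct tlist l = { 0 };
    struct tnode a = { 0 }, b = { 0 };

    tlist_add(&l, &a);
    tlist_add(&l, &b);
    tlist_rem(&l, &a);
    tlist_rem(&l, &b);
    return 0;
  }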
Now sk_open() requires an explicit IO loop to open the socket in. Also,
specific functions for socket RX pause / resume have been added to allow
for BGP corking.
And last but not least, socket reloop is now synchronous to resolve
weird cases of the target loop stopping before actually picking up the
relooped socket. Now the caller must ensure that both loops are locked
while relooping, and this way all sockets always have their respective
loop.
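A toy model of the relooping invariant with invented names; it only
demonstrates the both-loops-locked rule, not the actual socket handling:

  #include <assert.h>
  #include <pthread.h>

  struct io_loop {
    pthread_mutex_t lock;
    int locked;               /* demo-only flag so we can assert on it */
  };

  struct socket_s {
    struct io_loop *loop;     /* every socket always has its loop */
  };

  static void loop_lock(struct io_loop *l)
  {
    pthread_mutex_lock(&l->lock);
    l->locked = 1;
  }

  static void loop_unlock(struct io_loop *l)
  {
    l->locked = 0;
    pthread_mutex_unlock(&l->lock);
  }

  /* Synchronous move: the caller holds both the old and the new loop. */
  static void sk_reloop(struct socket_s *s, struct io_loop *to)
  {
    assert(s->loop->locked && to->locked);
    s->loop = to;
  }

  int main(void)
  {
    struct io_loop a = { PTHREAD_MUTEX_INITIALIZER, 0 };
    struct io_loop b = { PTHREAD_MUTEX_INITIALIZER, 0 };
    struct socket_s sk = { .loop = &a };

    loop_lock(&a);
    loop_lock(&b);
    sk_reloop(&sk, &b);
    loop_unlock(&b);
    loop_unlock(&a);
    return 0;
  }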
If there are lots of loops in a single thread and only some of them are
actually active, the other loops are now kept aside and not checked
until they actually get some timers, events or active sockets.
This should help with extreme loads like 100k tables and protocols.
Also, the ping and loop pickup mechanism was allowing subtle race
conditions; collisions between loop ping and pickup are now handled
properly.
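An illustrative sketch of the active/idle split, with invented names
and none of the real locking:

  #include <stdbool.h>
  #include <stddef.h>

  struct loop {
    struct loop *next;
    bool has_work;            /* timers, events or active sockets pending */
  };

  struct thread {
    struct loop *active;      /* polled on every iteration */
    struct loop *idle;        /* kept aside, not checked at all */
  };

  /* Move loops without any work from the active list to the idle list;
   * an idle loop is picked up again only when it gains work. */
  static void thread_sort_loops(struct thread *t)
  {
    struct loop **lp = &t->active;

    while (*lp)
    {
      struct loop *l = *lp;
      if (l->has_work)
        lp = &l->next;
      else
      {
        *lp = l->next;
        l->next = t->idle;
        t->idle = l;
      }
    }
  }

  int main(void)
  {
    struct loop a = { NULL, true }, b = { &a, false };
    struct thread t = { .active = &b, .idle = NULL };

    thread_sort_loops(&t);
    /* b had no work, so it is now idle; a stays on the active list. */
    return (t.active == &a && t.idle == &b) ? 0 : 1;
  }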