To allow reading of protocol states from other protocols or from
completely different routines, we have to export these states into
data structures that do not require locking the protocol loops.
On the one hand, this doesn't give the reader the actual state "right
now"; on the other hand, getting that is impossible in a properly
multithreaded environment anyway, and the reader always gets the
information with some (small but noteworthy) delay.
This implementation handles only the basic state information common
to all protocols. Protocol-specific state information should be added
by implementing the protocol hook init_state().
Channel information is stored but not announced, as we don't need the
announcements for now.
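As a rough illustration of the intended access pattern (all names here
are hypothetical, not BIRD's actual API), the protocol loop could
publish an immutable snapshot of its basic state through an atomic
pointer, so readers never have to lock the loop:

  #include <stdatomic.h>
  #include <stdlib.h>

  /* Basic, protocol-independent state only; protocol-specific fields
   * would be added via the init_state() hook. Names are illustrative. */
  struct proto_state_snapshot {
    int state;                    /* e.g. PS_UP / PS_STOP */
    unsigned channels;            /* stored, but not announced for now */
  };

  static _Atomic(struct proto_state_snapshot *) published_state;

  /* Writer: runs inside the protocol loop; readers never block it. */
  void proto_state_publish(int state, unsigned channels)
  {
    struct proto_state_snapshot *s = malloc(sizeof *s);
    *s = (struct proto_state_snapshot) { .state = state, .channels = channels };
    atomic_store(&published_state, s);
    /* The previous snapshot is intentionally not freed here; real code
     * would defer its release until no reader can still hold it. */
  }

  /* Reader: lock-free, sees the state with some small delay. */
  struct proto_state_snapshot proto_state_read(void)
  {
    struct proto_state_snapshot *s = atomic_load(&published_state);
    return s ? *s : (struct proto_state_snapshot) { 0 };
  }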
There is an IP table for every ROA table, holding special records
combining all known ROAs for every top-prefix.
The ROA digestor is now an IP digestor, running over the auxiliary
table.
The channel now just subscribes to yet another journal, announcing
digested tries from the ROA table.
Creating tries on the fly in every channel was too slow and consumed
obnoxious amounts of memory. Instead, the tries are constructed
directly in the table and the channels are notified with the
completed tries.
The delayed export-release mechanism is used to keep the tries allocated
until routes get reloaded.
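To make the shape of this a bit more concrete, here is a rough sketch
of the structures involved; the names and layouts are illustrative
only, not BIRD's actual definitions:

  /* One ROA entry as combined into the auxiliary record. */
  struct roa_item { unsigned asn; unsigned char max_pxlen; };

  /* One record of the auxiliary IP table: all known ROAs combined for
   * a single top-prefix, so the digestor only walks this table. */
  struct roa_aux_record {
    unsigned char pxlen;          /* length of the top-prefix */
    unsigned char addr[16];       /* the top-prefix itself */
    unsigned n_items;
    struct roa_item items[];      /* flexible array: combined ROAs */
  };

  /* What the journal announces to subscribed channels: a completed
   * trie of prefixes whose ROA status changed, built directly in the
   * table and kept allocated (delayed export-release) until routes
   * get reloaded. */
  struct f_trie;                  /* BIRD's prefix trie, opaque here */
  struct roa_digest {
    unsigned long journal_seq;    /* position in the announcement journal */
    struct f_trie *changed;       /* the digested trie handed to channels */
  };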
Originally, this mechanism required checking whether there is enough
time to work and then sending an event. This macro combines all that
logic and sits more straightforwardly at the _end_ of the export
processing loop.
One should note that there were two cases where the export processing
loop was deferred at the _beginning_, which led to some routes being
ignored on reimports. This wasn't easily noticeable in the tests until
the one-task limit got a 300 ms ceiling to keep latency reasonable.
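A minimal sketch of what such a combined check-and-defer macro could
look like (illustrative names only, not the real macro or BIRD's
helpers):

  #include <time.h>

  /* Monotonic clock helper for the sketch below. */
  static long long now_ns(void)
  {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long) ts.tv_sec * 1000000000LL + ts.tv_nsec;
  }

  /* Placed at the _end_ of the export processing loop: the current
   * route is always finished first, and only then do we decide whether
   * to defer the rest of the work by rescheduling ourselves. */
  #define EXPORT_DEFER_IF_OUT_OF_TIME(start_ns, budget_ns, reschedule)  \
    do {                                                                \
      if (now_ns() - (start_ns) > (budget_ns)) {                        \
        (reschedule)();            /* send the continuation event */    \
        return;                                                         \
      }                                                                 \
    } while (0)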
Introducing a new omnipotent internal API to pass route updates from
any point to wherever we want.
From now on, all exports should be processed by the RT_WALK_EXPORTS
macro, and you can also issue a separate feed-only request to just get
a feed and finish.
The exporters can now also stop, and the readers must expect that to
happen and recover from it. Main tables don't stop, though.
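The reader side could then look roughly like this; the loop below is
only an illustration of the expected behaviour and does not show the
real RT_WALK_EXPORTS macro or request structure:

  /* Hypothetical helpers standing in for the real export-walking API. */
  enum walk_result { WALK_UPDATE, WALK_FEED_DONE, WALK_EXPORTER_STOPPED };
  struct export_request { int feed_only; /* just get a feed and finish */ };

  enum walk_result walk_one_export(struct export_request *);  /* hypothetical */
  void reader_recover(struct export_request *);               /* hypothetical */

  void reader_main(struct export_request *req)
  {
    for (;;)
      switch (walk_one_export(req))
      {
        case WALK_UPDATE:
          continue;                 /* regular route update, keep walking */
        case WALK_FEED_DONE:
          if (req->feed_only)
            return;                 /* feed-only request: we are finished */
          continue;
        case WALK_EXPORTER_STOPPED:
          reader_recover(req);      /* exporters may stop; main tables don't */
          return;
      }
  }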
Add a new protocol offering route aggregation.
The user can specify a list of route attributes in the configuration
file and run route aggregation on the export side of the pipe
protocol. Routes are sorted, and for every group of equivalent routes
a new route is created and exported to the routing table. It is also
possible to specify a filter which runs for every route before
aggregation. Furthermore, it will be possible to set attributes of the
new routes according to the attributes of the aggregated routes.
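A simplified sketch of the aggregation step (illustrative data
structures only, not the aggregator's real ones): routes are sorted by
a key built from the configured attributes, and every run of equal
keys becomes one new exported route:

  #include <stdlib.h>
  #include <string.h>

  /* A route reduced to the attribute key selected in the config. */
  struct rte_lite { char key[32]; int id; };

  static int key_cmp(const void *a, const void *b)
  {
    return memcmp(((const struct rte_lite *) a)->key,
                  ((const struct rte_lite *) b)->key, 32);
  }

  void aggregate(struct rte_lite *routes, size_t n,
                 void (*export_merged)(const struct rte_lite *group, size_t cnt))
  {
    qsort(routes, n, sizeof *routes, key_cmp);
    for (size_t i = 0; i < n; ) {
      size_t j = i + 1;
      while (j < n && !memcmp(routes[i].key, routes[j].key, 32))
        j++;
      export_merged(&routes[i], j - i);   /* one new route per group */
      i = j;
    }
  }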
This is a work in progress.
Original work by Igor Putovny, subsequent cleanups and finalization by
Maria Matejka.
This is a split commit of the neighboring aggregator branch with
slightly improved lvalue handling, to allow an easier merge into v3.
Some [redacted] (yes, myself) had a really bad idea
to rename nest/route.h to nest/rt.h while refactoring
some data structures out of it.
This led to unnecessarily complex problems with
merging updates from v2. Reverting this change
to make my life a bit easier.
At least it needed only one find-sed command:
find -name '*.[chlY]' -type f -exec sed -i 's#nest/rt.h#nest/route.h#' '{}' +
There are now 3 different pools with specific lifetimes. All of these
are available since protocol start, yet they get freed at different
moments.
First, pool_up gets freed immediately after announcing PS_STOP, to
e.g. stop all timers and events that regularly update the routing
table while the imports are already flushing.
Then, pool_inloop gets freed just before the protocol loop is finally
stopped, after all channels, imports, exports and other hooks are
cleaned up.
And finally, the pool itself is freed last. Unless you explicitly need
the early free, use this pool.
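In protocol code, the choice among the three then boils down to how
early the allocation must disappear during shutdown; a minimal sketch,
with hypothetical types and helpers standing in for BIRD's resource
primitives:

  struct pool;                                     /* opaque resource pool */
  void *pool_alloc(struct pool *, unsigned long);  /* hypothetical helper */

  struct my_proto {
    struct pool *pool_up;      /* freed right after PS_STOP is announced */
    struct pool *pool_inloop;  /* freed just before the protocol loop stops */
    struct pool *pool;         /* freed last; the default choice */
  };

  void my_proto_start(struct my_proto *p)
  {
    /* periodic table updates must stop as soon as imports start flushing */
    void *refresh_timer = pool_alloc(p->pool_up, 64);

    /* hooks tied to the protocol loop itself */
    void *loop_hook = pool_alloc(p->pool_inloop, 64);

    /* everything else: use the plain pool unless early free is needed */
    void *stats = pool_alloc(p->pool, 64);

    (void) refresh_timer; (void) loop_hook; (void) stats;
  }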
The MPLS subsystem manages MPLS labels and handles their allocation to
MPLS-aware routing protocols. These labels are then attached to IP or VPN
routes representing label switched paths -- LSPs.
There was already preliminary MPLS support consisting of the MPLS
label net_addr, MPLS routing tables with static MPLS routes, remote
labels in next hops, and kernel protocol support.
This patch adds the MPLS domain as a basic structure representing a
local label space with a dynamic label allocator and configurable
label ranges. To represent LSPs, allocated local labels can be
attached as route attributes to IP or VPN routes.
There are several steps for handling LSP routes in routing protocols --
deciding to which forwarding equivalence class (FEC) the LSP route
belongs, allocating labels for new FECs, announcing MPLS routes for
new FECs, and attaching labels to LSP routes. The FEC map structure
implements the basic code for managing FECs in routing protocols, so
existing protocols can be made MPLS-aware by adding a FEC map and
delegating most of the work related to local label management to it.
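Schematically, the per-route work a protocol delegates to its FEC map
could look like this (hypothetical names, not the actual mpls.h
interface):

  /* Minimal stand-in for a FEC entry kept by the FEC map. */
  struct fec { unsigned label; int announced; };

  struct fec *fec_map_get(void *map, const void *route_key);   /* hypothetical */
  unsigned    label_alloc(void *domain);                        /* hypothetical */
  void        announce_mpls_route(unsigned label, const void *nexthop);

  unsigned lsp_route_label(void *map, void *domain,
                           const void *route_key, const void *nexthop)
  {
    struct fec *fec = fec_map_get(map, route_key);   /* 1. decide the FEC */
    if (!fec->label)
      fec->label = label_alloc(domain);              /* 2. allocate for a new FEC */
    if (!fec->announced) {
      announce_mpls_route(fec->label, nexthop);      /* 3. announce the MPLS route */
      fec->announced = 1;
    }
    return fec->label;                               /* 4. attach to the LSP route */
  }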