The symbol table used just the symbol name as a key, and used a trick with
an active flag to find symbols in active scopes with a single hash table
lookup. The disadvantage is that lookups can degenerate to O(n) for
negative queries when there are many symbols with the same name in
different scopes.
Thanks to Yanko Kaneti for the bug report.
The Kernel protocol, even with the option 'learn' enabled, ignores
direct routes created by the OS kernel (on Linux these are routes
with rtm_protocol == RTPROT_KERNEL).
Implement optional behavior where both OS kernel and third-party routes
are learned; it can be enabled by the 'learn all' option.
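A minimal configuration sketch (assuming the option sits at the protocol
level, next to the existing 'learn' option):

  protocol kernel {
    learn all;        # learn both kernel-originated and third-party routes
    ipv4 {
      import all;
      export all;
    };
  }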
Minor changes by committer.
Old configs do not define MPLS domains and may use a static protocol
to define static MPLS routes.
When the MPLS channel is the only channel of the static protocol, handle
it as a main channel. Also, define an implicit MPLS domain if one is
needed and none is defined.
When a regular event was added from a work event, we still remembered
that the regular event list was empty and therefore did not use a zero
timeout in poll(). This led to ~3 s latency in route reload during
reconfiguration.
When an MPLS channel is reloaded, it should reload all regular MPLS-aware
channels. This causes re-evaluation of routes in the FEC map and possibly
re-announcement of MPLS routes.
Use mpls_new_label() / mpls_free_label() also for static labels, to keep
track of allocated labels and to enforce label ranges.
Static label allocations always use the static label range, regardless of
the configured label range.
Instead of using route attributes, static routes with static MPLS labels
can now be defined directly, e.g.:
route 10.1.1.0/24 mpls 100 via 10.1.2.1 mpls 200;
The L3VPN protocol implements RFC 4364 BGP/MPLS VPNs using an MPLS
backbone. It works similarly to the Pipe protocol: it connects an IP table
(one per VRF) with a (global) VPN table. Routes passed from the VPN table
to the IP table are stripped of the RD and filtered by import targets;
routes passed in the other direction are extended with the RD, MPLS labels
and export targets in extended communities. A separate MPLS channel is
used to announce MPLS routes for the labels.
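A hedged configuration sketch of one such protocol instance (channel and
option names follow the description above; the table names, RD and route
target values are only illustrative):

  protocol l3vpn vpn1 {
    vrf "vrf1";
    ipv4 { table vrf1_v4; };          # per-VRF IP table
    vpn4 { table vpntab4; };          # global VPN table
    mpls;                             # separate channel announcing MPLS routes

    rd 65000:1;                       # RD added to routes exported to the VPN table
    import target [(rt, 65000, 1)];   # accept VPN routes carrying this route target
    export target [(rt, 65000, 1)];   # attach this route target as an ext. community
  }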
The new labeling policy MPLS_POLICY_VRF assigns one label to all routes
(from the same FEC map associated with one VRF), and replaces their next
hops with a lookup in the VRF table. This is useful for the L3VPN
protocol.
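In channel configuration this would presumably be selected like this (a
sketch; the 'label policy vrf' keyword mapping to MPLS_POLICY_VRF is an
assumption):

  protocol l3vpn vpn1 {
    # IP, VPN and VRF settings omitted
    mpls {
      label policy vrf;   # one local label per VRF, next hop replaced by a VRF table lookup
    };
  }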
When MPLS is active, received routes on MPLS-aware SAFIs (ipvX-mpls,
vpnX-mpls) are automatically labeled according to the active label policy
and corresponding MPLS routes are automatically generated. Also, routes
sent on MPLS-aware SAFIs announce local labels when appropriate.
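For example, a BGP instance carrying a labeled SAFI might be configured
roughly like this (a sketch; 'ipv4 mpls' stands for the ipv4-mpls SAFI
channel and the addresses and AS numbers are illustrative):

  protocol bgp {
    local as 65000;
    neighbor 192.0.2.1 as 65001;

    ipv4 mpls {         # MPLS-aware SAFI; received routes are labeled automatically
      import all;
      export all;
    };
    mpls;               # generated MPLS routes are announced through this channel
  }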
When MPLS is active, static IP/VPN routes are automatically labeled
according to the active label policy and corresponding MPLS routes are
automatically generated.
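A sketch of a static protocol whose routes would be labeled this way (the
route itself carries no local label; the mpls channel and the active label
policy provide it):

  protocol static {
    ipv4;
    mpls;                               # enables automatic labeling of the routes below
    route 192.0.2.0/24 via 10.1.2.1;    # gets a local label and an MPLS route automatically
  }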
The MPLS subsystem manages MPLS labels and handles their allocation to
MPLS-aware routing protocols. These labels are then attached to IP or VPN
routes representing label switched paths -- LSPs.
There was already preliminary MPLS support consisting of MPLS label
net_addr, MPLS routing tables with static MPLS routes, remote labels in
next hops, and kernel protocol support.
This patch adds the MPLS domain as a basic structure representing the
local label space, with a dynamic label allocator and configurable label
ranges. To represent LSPs, allocated local labels can be attached as
route attributes to IP or VPN routes.
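A configuration sketch of such a domain with a custom label range (the
names and the exact option syntax here are illustrative):

  mpls domain mdom {
    label range custom_range {
      start 1000;
      length 1000;
    };
  }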
There are several steps for handling LSP routes in routing protocols --
deciding to which forwarding equivalence class (FEC) the LSP route
belongs, allocating labels for new FECs, announcing MPLS routes for new
FECs, attaching labels to LSP routes. The FEC map structure implements
basic code for managing FECs in routing protocols; therefore, existing
protocols can be made MPLS-aware by adding a FEC map and delegating
most of the work related to local label management to it.
If the protocol supports route refresh on export, we keep the stop-start
method of route refeed. This applies to BGP with ERR or with an export
table enabled, and to OSPF, Babel, RIP and Pipe.
For BGP without ERR or for future selective ROA reloads, we're adding an
auxiliary export request that does the refeed while the main export
request is running, somewhat resembling the original refeed method of
BIRD 2. There is also a refeed request queue to keep track of different
refeed requests.
In general, private_id is sparse and protocols may want to map some
internal values directly into it. For example, L3VPN needs to map VPN
route distinguishers to private_id. On the other hand, u32 is enough for
global_id, as these identifiers are dense.
Add a new protocol offering route aggregation.
The user can specify a list of route attributes in the configuration file
and run route aggregation on the export side of the pipe protocol. Routes
are sorted, and for every group of equivalent routes a new route is
created and exported to the routing table. It is also possible to specify
a filter which will run for every route before aggregation.
Furthermore, it will be possible to set the attributes of new routes
according to the attributes of the aggregated routes.
This is a work in progress.
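Since the configuration syntax is still settling, the following is only a
hypothetical sketch of what it could look like; the 'aggregator' protocol
name and the 'aggregate on' / 'merge by' options are assumptions here:

  protocol aggregator {
    table agg_tab;                  # table receiving the aggregated routes
    peer table master4;             # table whose routes are aggregated
    aggregate on net, bgp_path;     # routes equal in these attributes form one group
    merge by {                      # build attributes of the new route from the group
      accept;
    };
  }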
Original work by Igor Putovny, subsequent cleanups and finalization by
Maria Matejka.
For now, there are 4 phases: Necessary (device), Connector (kernel, pipe), Generator (static, rpki) and Regular.
Protocols are started and reconfigured in order from Necessary to Regular,
and shut down in the reverse order.
This way, the kernel protocol can flush routes before actually being shut
down.