In very busy deployments, RPKI may kill the cache connection too early.
Instead, we want it to keep loading as long as some data is waiting to
be read and the delay is caused only by our own congestion.
Also, when we do kill the session because the cache is actually slow, we
want to reload from scratch, as the data we have is unreliable and nobody
knows whether the state is still valid.
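For context, a typical RPKI protocol configuration with the cache timing
options looks roughly like this (the timer values are illustrative):

protocol rpki {
    roa4 { table r4; };
    roa6 { table r6; };

    remote "rtr.example.net" port 323;

    # time before retrying after a failed query to the cache
    retry keep 90;
    # time between routine polls of the cache
    refresh keep 900;
    # how long received records are kept without a successful refresh
    expire keep 172800;
}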
Implement BGP Send hold timer according to draft-ietf-idr-bgp-sendholdtimer.
The Send hold timer drops the session if the neighbor keeps sending
keepalives but does not receive our messages, so the TCP connection stalls.
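A configuration sketch; the option name "send hold time" and its placement
here are assumptions based on this feature, the rest is an ordinary session:

protocol bgp peer1 {
    local as 65000;
    neighbor 192.0.2.1 as 65001;

    hold time 240;
    # assumed option: drop the session if the peer stops reading our
    # messages for this long, even though its keepalives still arrive
    send hold time 480;

    ipv4 { import all; export all; };
}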
Some BGP capabilities change the BGP behavior in a significant way, so if
the configuration depends on such a capability, it is better not to
establish the BGP session at all when the capability is not available.
Add several BGP options to require individual BGP capabilities during
session negotiation.
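A sketch of how such options may be used; the exact set and names of the
"require" options are assumptions here:

protocol bgp peer2 {
    local as 65000;
    neighbor 2001:db8::1 as 65001;

    # assumed option names; each one prevents the session from being
    # established if the peer does not advertise that capability
    require as4 on;
    require route refresh on;
    require graceful restart on;

    ipv6 { import all; export all; };
}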
Add a new protocol offering route aggregation.
The user can specify a list of route attributes in the configuration file
and run route aggregation on the export side of the pipe protocol. Routes
are sorted, and for every group of equivalent routes a new route is created
and exported to the routing table. It is also possible to specify a filter
which is run for every route before aggregation.
Furthermore, it will be possible to set attributes of the new routes
according to the attributes of the aggregated routes.
This is a work in progress.
Original work by Igor Putovny, subsequent cleanups and finalization by
Maria Matejka.
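A hypothetical configuration sketch; as this is work in progress, the
option names (aggregate on, merge by) and their semantics are assumptions:

protocol aggregator {
    table lots_of_routes;      # source table, pipe-like setup
    peer table aggregated;     # table receiving the aggregated routes

    # filter run for every route before aggregation
    export where source = RTS_BGP;

    # routes equal in these attributes form one aggregation group
    aggregate on net, bgp_path;

    # how to set attributes of the new route for each group
    merge by {
        preference = 100;
    };
}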
This is a split-commit of the neighboring aggregator branch
with slightly improved lvalue handling, to make the merge into v3 easier.
Some [redacted] (yes, myself) had a really bad idea
to rename nest/route.h to nest/rt.h while refactoring
some data structures out of it.
This led to unnecessarily complex problems with
merging updates from v2. Reverting this change
to make my life a bit easier.
At least it needed only one find-sed command:
find -name '*.[chlY]' -type f -exec sed -i 's#nest/rt.h#nest/route.h#' '{}' +
This merge was particularly difficult. I finally resorted to deleting the
symbol scope active flag altogether and replacing its usage by other
means.
I also had to update custom route attribute registration to fit
both the scope updates in v2 and the data model in v3.
Conflicts:
conf/cf-lex.l
conf/conf.h
conf/confbase.Y
conf/gen_keywords.m4
conf/gen_parser.m4
filter/config.Y
nest/config.Y
proto/bgp/config.Y
proto/static/config.Y
Keywords and attributes are split into separate namespaces to avoid
collisions between regular keyword use and the attribute overlay.
There are now 3 different pools with specific lifetimes. All of them are
available from protocol start, yet they get freed at different moments.
First, pool_up gets freed immediately after announcing PS_STOP, e.g. to
stop all timers and events regularly updating the routing table while the
imports are already flushing.
Then, pool_inloop gets freed just before the protocol loop is finally
stopped, after all channels, imports, exports and other hooks are
cleaned up.
And finally, the pool itself is freed last. Unless you explicitly
need the early free, use this pool.
Old configs do not define MPLS domains and may use a static protocol
to define static MPLS routes.
When the MPLS channel is the only channel of a static protocol, handle it
as a main channel. Also, define an implicit MPLS domain when one is needed
and none is defined.
When an MPLS channel is reloaded, it should reload all regular MPLS-aware
channels. This causes re-evaluation of routes in the FEC map and possibly
reannouncement of MPLS routes.
Instead of using route attributes, static routes with static MPLS labels
can now be defined directly, e.g.:
route 10.1.1.0/24 mpls 100 via 10.1.2.1 mpls 200;
The L3VPN protocol implements RFC 4364 BGP/MPLS VPNs over an MPLS backbone.
It works similarly to pipe. It connects an IP table (one per VRF) with a
(global) VPN table. Routes passed from the VPN table to the IP table are
stripped of the RD and filtered by import targets; routes passed in the
other direction are extended with the RD, MPLS labels and export targets
in extended communities. A separate MPLS channel is used to announce MPLS
routes for the labels.
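A configuration sketch following the shape of the usual l3vpn examples;
names, tables and targets are illustrative:

protocol l3vpn vpn1 {
    vrf "vrf1";

    ipv4 { table vrf1_v4; };
    vpn4 { table vpn4_tab; };
    mpls { label policy vrf; };

    # route distinguisher added to exported routes
    rd 65000:101;
    # extended-community route targets used for filtering and tagging
    import target [(rt, 65000, 101)];
    export target [(rt, 65000, 101)];
}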
When MPLS is active, received routes on MPLS-aware SAFIs (ipvX-mpls,
vpnX-mpls) are automatically labeled according to active label policy and
corresponding MPLS routes are automatically generated. Also, routes sent
on MPLS-aware SAFIs announce local labels when appropriate.
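For example, a BGP session carrying labeled VPN routes together with an
MPLS channel might be configured like this (a sketch; tables and the label
policy value are illustrative):

protocol bgp pe1 {
    local as 65000;
    neighbor 198.51.100.1 as 65000;

    vpn4 mpls {
        table vpn4_tab;
        import all;
        export all;
    };

    # local labels for labeled routes are allocated according to this policy
    mpls { label policy prefix; };
}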
When MPLS is active, static IP/VPN routes are automatically labeled
according to active label policy and corresponding MPLS routes are
automatically generated.
If the protocol supports route refresh on export, we keep the stop-start
method of route refeed. This applies to BGP with ERR or with the export
table enabled, and to OSPF, Babel, RIP and Pipe.
For BGP without ERR, or for future selective ROA reloads, we're adding an
auxiliary export request which does the refeed while the main export
request is running, somewhat resembling the original BIRD 2 refeed method.
There is also a refeed request queue to keep track of different refeed
requests.
In general, private_id is sparse and protocols may want to map some
internal values directly into it. For example, L3VPN needs to
map VPN route distinguishers to private_id.
OTOH, u32 is enough for global_id, as these identifiers are dense.
For now, there are 4 phases: Necessary (device), Connector (kernel, pipe),
Generator (static, rpki) and Regular.
Protocols are started and reconfigured in order from Necessary to Regular,
and shut down in the reverse order.
This way, the kernel protocol can flush routes before actually being shut
down.