This commit prevents use-after-free of routes belonging to protocols
that have already been destroyed, and also delays protocol shutdown
until all of their routes have been propagated through all the pipes
down to the appropriate exports.
The use-after-free was somewhat hypothetical yet theoretically possible
in rare conditions: one BGP protocol authors a lot of routes and the
user deletes that protocol by reconfiguring at the same time as a next
hop update is requested, causing rte_better() to be called on a
not-yet-pruned network prefix while the owner protocol has already been
freed.
In parallel execution environments, this would become an inter-thread
use-after-free, causing possible heisenbugs or other nasty problems.
This basically means that:
* there are some more levels of indirection and asynchronicity, mostly
in cleanup procedures, requiring correct lock ordering
* all the internal table operations (prune, next hop update) are done
without blocking the other parts of BIRD
* the protocols may get their own loops very soon
There is a simple universal IO loop, taking care of events, timers and
sockets. Primarily, one instance of a protocol should use exactly one IO
loop to do all its work, as is now done in BFD.
Contrary to previous versions, the loop is now launched and cleaned up
by the nest/proto.c code, allowing a protocol to request its own loop
simply by setting the loop's lock order in config higher than the_bird.
Changing the requested lock order during reconfiguration is neither
supported nor checked. No protocol should do it at all.
* internal tables are now more standalone, having their own import and
export hooks
* route refresh/reload uses a stale counter instead of a stale flag,
which avoids walking the table at the beginning
* route modify (by BGP LLGR) is now done by a special refeed hook,
reimporting the modified routes directly without filters
Channels now include rt_import_req and rt_export_req to hook into the
table, instead of just one list node. This will (in the future) allow for:
* channel import and export bound to different tables
* more efficient pipe code (dropping most of the channel code)
* conversion of 'show route' to a special kind of export
* temporary static routes from CLI
The import / export states are also updated to the new algorithms.
In general, an event is code handling some condition; it is scheduled
when that condition happens and executed independently of the I/O loop.
Work-events are a subgroup of events that are scheduled repeatedly
until some (often significant) work is done (e.g. feeding routes to a
protocol). All scheduled events are executed during each I/O loop
iteration.
Move work-events from regular events to a separate queue and rate-limit
their execution to a fixed number per I/O loop iteration. That should
prevent excess latency when many work-events are scheduled at once
(e.g. a simultaneous reload of many BGP sessions).
If there are roa_check() calls in channel filters, then the channel
subscribes to ROA table notifications, which are sent when ROA tables
are updated (subject to settle time) and trigger channel reload or
refeed.
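For illustration, a channel filter that would trigger such a
subscription might look like this (a sketch; the ROA table, protocol
and addresses are hypothetical):

roa4 table r4;

protocol bgp peer1 {
  local as 65000;
  neighbor 192.0.2.1 as 65001;
  ipv4 {
    import filter {
      # the roa_check() call makes this channel subscribe to r4
      if roa_check(r4, net, bgp_path.last) = ROA_INVALID then reject;
      accept;
    };
  };
}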
The patch adds support for per-channel debug flags, currently just
'states', 'routes', and 'filters'. Flag 'states' is used for channel
state changes, the remaining two for routes passed through the channel.
The per-protocol debug flags 'routes'/'filters' still enable reporting
of routes for all channels, to keep existing behavior.
The patch causes minor changes in some log messages.
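A minimal sketch of how the per-channel flags might be configured (the
protocol, its neighbor and the other channel options are made up):

protocol bgp peer1 {
  local as 65000;
  neighbor 192.0.2.1 as 65001;
  ipv4 {
    # per-channel debug flags
    debug { states, routes };
    import all;
    export none;
  };
}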
When config structures are copied due to template application,
we need to reset list node structure before calling add_tail().
Thanks to Mikael Magnusson for patches.
Most commands like 'show ospf neighbors' fail when the protocol is not
specified and there are multiple instances of the given protocol type.
This is annoying in BIRD 2, as many protocols have IPv4 and IPv6
instances. The patch changes that by showing output from all protocol
instances of the appropriate type.
Note that the patch also removes the terminating cli_msg() call from
these commands and moves it to the common iterating code.
A channel currently does not have an independent pool and uses the
protocol pool, which is freed when the protocol changes state to down
while the channel is still flushing. Move some cleanup code to
channel_do_flush() so it is done before the protocol pool is freed.
Use a hierarchical bitmap in a routing table to assign ids to routes, and
then use bitmaps (indexed by route id) in channels to keep track whether
routes were exported. This avoids unreliable and inefficient re-evaluation
of filters for old routes in order to determine whether they were exported.
The patch implements an optional internal export table for a channel and
hooks it into BGP so it can be used as an Adj-RIB-Out. When enabled, all
exported (post-filtered) routes are stored there. An export table can be
examined using e.g. 'show route export table bgp1.ipv4'.
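A minimal sketch of enabling it, assuming the channel option keyword is
'export table' (the neighbor and other options are hypothetical):

protocol bgp bgp1 {
  local as 65000;
  neighbor 192.0.2.1 as 65001;
  ipv4 {
    export table on;	# keep post-filtered routes as Adj-RIB-Out
    import all;
    export all;
  };
}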
Several BGP channel options (including 'next hop self') could be
reconfigured without session reset, with just route refeed/refresh.
The patch improves reconfiguration code to do it that way.
A protocol can have a VRF specified; in that case it is restricted to
the set of ifaces associated with the VRF, otherwise it can use all
interfaces. The patch allows specifying the VRF as 'default', in which
case the protocol is restricted to the set of ifaces not associated
with any VRF.
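For example (a sketch; the static protocol and route are just
placeholders):

protocol static {
  vrf default;
  ipv4;
  route 198.51.100.0/24 blackhole;
}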
When 'graceful down' command is entered, protocols are shut down
with regard to graceful restart. Namely, the Kernel protocol does not
remove routes and the BGP protocol does not send a notification, it
just closes the connection.
Support for dynamically spawning BGP protocols for incoming connections.
Use 'neighbor range' to specify a range of valid neighbor addresses;
incoming connections from these addresses then spawn new BGP instances.
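A minimal sketch of such a listener (the AS numbers and the address
range are made up):

protocol bgp listener {
  local as 65000;
  neighbor range 192.0.2.0/24 as 65001;
  ipv4 {
    import all;
    export all;
  };
}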
This protocol is highly experimental and nobody should use it in
production. Anyway, it may help you get some insight into what eats so
much time in filter processing.
The patch d506263d... blocked adding channels during reconfiguration,
which broke protocols that use the same function also during init.
This patch fixes that.
The patch implements an optional internal import table for a channel and
hooks it into BGP so it can be used as an Adj-RIB-In. When enabled, all
received (pre-filtered) routes are stored there and import filters can
be re-evaluated without explicit route refresh. An import table can be
examined using e.g. 'show route import table bgp1.ipv4'.
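A minimal sketch of enabling it, assuming the channel option keyword is
'import table' (the neighbor and the inline filter are hypothetical):

protocol bgp bgp1 {
  local as 65000;
  neighbor 192.0.2.1 as 65001;
  ipv4 {
    import table on;	# keep pre-filtered routes as Adj-RIB-In
    import where bgp_path.len < 64;
  };
}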
When a new channel is found during reconfiguration, force a restart of
the protocol, like with any other un-reconfigurable change.
The old behavior was that the new channel was added but remained in down
state, even if the protocol was up, so a manual protocol restart was
often necessary.
In the future this should be improved such that a reconfigurable
channel addition (e.g. direct) is accepted and the channel is started,
while an un-reconfigurable addition forces a protocol restart.
The new MRT protocol is responsible for periodic RIB table dumps in the
MRT format (RFC 6396). Also, the existing code for BGP4MP MRT dumps is
refactored and split between the BGP and MRT protocols; it will be more
integrated into MRT in the future.
Example:
protocol mrt {
table "*";
filename "%N_%F_%T.mrt";
period 60;
}
It is partially based on the old MRT code from Pavel Tvrdik.
If the export filter is changed during reconfiguration and a route
disappears between reconfiguration and refeed (e.g., if the route is a
static route also removed during the reconfiguration), the route is not
withdrawn. The patch fixes that by adding a tx reconfiguration timestamp.
The old timer interface is still kept, but implemented by the new
timers. The plan is to switch from the old interface to the new one,
then clean it up.
The patch implements BGP Administrative Shutdown Communication (RFC 8203),
allowing BGP operators to pass messages related to BGP session
administrative shutdown/restart. It handles both transmission and
reception of shutdown messages. Messages are logged and may be displayed
by the 'show protocols all' command.
Thanks to Job Snijders for the basic patch.
Add basic VRF (virtual routing and forwarding) support. Protocols can be
associated with VRFs; such protocols will be restricted to interfaces
assigned to the VRF (as reported by the Linux kernel) and will use
sockets bound to the VRF. E.g., different multihop BGP instances can use
different kernel routing tables to handle BGP TCP connections.
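For instance, two multihop BGP instances might be bound to different
VRFs like this (a sketch; names, addresses and AS numbers are
hypothetical):

protocol bgp mh1 {
  vrf "vrf0";
  local as 65000;
  neighbor 203.0.113.1 as 65001;
  multihop;
}

protocol bgp mh2 {
  vrf "vrf1";
  local as 65000;
  neighbor 203.0.113.2 as 65002;
  multihop;
}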
The VRF support is preliminary, currently there are several limitations:
- Recent Linux kernels (4.11) do not correctly handle sockets bound
to interfaces that are part of a VRF, so most protocols other than
multihop BGP do not work. This will be fixed by future kernel versions.
- Neighbor cache ignores VRFs. Breaks configs with the same prefix on
local interfaces in different VRFs. Not much of a problem as single hop
protocols do not work anyway.
- Olock code ignores VRFs. Breaks config with multiple BGP peers with the
same IP address in different VRFs.
- Incoming BGP connections are not dispatched according to VRFs.
Breaks config with multiple BGP peers with the same IP address in
different VRFs. Perhaps we would need some kernel API to read VRF of
incoming connection? Or probably use multiple listening sockets in
int-new branch.
- We should handle master VRF interface up/down events and perhaps
disable associated protocols when VRF goes down. Or at least disable
associated interfaces.
- Also we should check whether the master iface is really a VRF iface
and not some other kind of master iface.
- BFD session request dispatch should be aware of VRFs.
- Perhaps kernel protocol should read default kernel table ID from VRF
iface so it is not necessary to configure it.
- Perhaps we should have per-VRF default table.
Some code cleanup, multiple bugfixes; also allows specifying a channel
for 'show route export'. It is interesting how such an apparently simple
thing as the show route command has plenty of ugly corner cases.
This patch implements the IPv6 subset of the Babel routing protocol.
Based on the patch from Toke Hoiland-Jorgensen, with some heavy
modifications and bugfixes.
Thanks to Toke Hoiland-Jorgensen for the original patch.
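A minimal sketch of how the protocol might be configured (the interface
patterns are made up):

protocol babel {
  interface "eth*" {
    type wired;
  };
  interface "wlan*" {
    type wireless;
  };
}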
The counter exp_routes is increased during the initial route feed after
GR recovery, so it has to start at zero, otherwise BIRD will end up
with a doubled value in exp_routes.
The patch adds support for channels, structures connecting protocols and
tables and handling most interactions between them. The documentation is
still missing.
Symbol lookup by cf_find_symbol() not only did the lookup but also added
new void symbols allocated from the cfg_mem linpool; this breaks when
lookups are done outside of config parsing and may lead to crashes
during reconfiguration.
The patch separates the lookup-only cf_find_symbol() from the
config-modifying cf_get_symbol(), where the latter is called only during
parsing. Also, the new_config and cfg_mem global variables are NULLed
outside of parsing.
When a route was propagated to another rtable through a pipe and then
the pipe was reconfigured softly in such a way that subsequent route
updates were filtered out, the source protocol shutdown did not clean up
the route in the second rtable, which caused stale routes and potential
crashes.
The router ID can be automatically determined based on a subset of
ifaces/addresses specified by the 'router id from' option. The patch
also makes some minor changes related to router ID reconfiguration.
Thanks to Alexander V. Chernikov for most of the work.
Several new configure command variants:
configure undo - undo last reconfiguration
configure timeout - configure with scheduled undo if not confirmed in timeout
configure confirm - confirm last configuration
configure check - just parse and validate config file
When 'import keep rejected' protocol option is activated, routes
rejected by the import filter are kept in the routing table, but they
are hidden and not propagated to other protocols. It is possible to
examine them using 'show route rejected'.
Allows sending and receiving multiple routes for one network over one
BGP session. Also contains the necessary core changes to support this
(routing tables accepting several routes for one network from one
protocol). It needs some more cleanup before merging into the master
branch.
When a protocol went down, all its routes were flushed in one step,
which could block BIRD for too long. The patch fixes that by limiting
the maximum number of routes flushed in one step.
The nest-protocol interaction is changed to better handle multitable
protocols. Multitable protocols now declare that by the 'multitable'
field, which tells the nest that the protocol itself handles things
related to proto-rtable interaction (table locking, adding announce
hooks, reconfiguration of filters).
Filters and stats are moved to announce hooks, so a protocol can have
different filters and stats for different tables.
The patch is based on one from Alexander V. Chernikov, thanks.
Hostcache is a structure for monitoring changes in a routing table that
is used for routes with dynamic/recursive next hops. This is needed for
proper iBGP next hop handling.
When the device protocol goes down, interfaces should be flushed
asynchronously (in the same way as routes from protocols are flushed)
when the protocol goes to DOWN/HUNGRY.
This fixes the problem with static routes staying in the kernel routing
table after BIRD shutdown.
- BSD kernel syncer is now self-conscious and can learn alien routes
- important bugfix in BSD kernel syncer (crash after protocol restart)
- many minor changes and bugfixes in kernel syncers and neighbor cache
- direct protocol does not generate host and link local routes
- min_scope check is removed, all routes have SCOPE_UNIVERSE by default
- also fixes some remaining compiler warnings
It seems that by adding one pipe-specific exception to the route
announcement code and by adding one argument to the rt_notify()
callback, I could completely eliminate the need for the phantom protocol
instance and therefore make the code more straightforward. It will also
fix some minor bugs (like ignoring debug flag changes from the command
line).
When unconfiguring the pipe and the peer table, the peer table was
unlocked when the pipe protocol state changed to down/flushing rather
than down/hungry. This led to the removal of the peer table before the
routes from the pipe were flushed.
The fix leads to adding some pipe-specific hacks to the nest,
but this seems inevitable.
The core state machine was broken - it didn't free resources in the
START -> DOWN transition and might have freed resources after the
UP -> STOP transition before the protocol turned down. This led to a
deadlock on olock acquisition when the lock was not freed during the
previous stop.
The current behavior is that resources allocated during the DOWN -> *
transition are freed in the * -> DOWN transition, and flushing
(scheduled in UP -> *) just counteracts feeding (scheduled in * -> UP).
The protocol falls down when both flushing is done (if needed) and the
protocol reports DOWN.
BTW, is there a reason why a neighbour cache item acquired by a
protocol is not tracked by the resource mechanism?
When a protocol started, feeding was scheduled. If the protocol went
down before feeding was executed, the function responsible for
connecting the protocol to kernel routing tables was called after the
function responsible for disconnecting it; then the resource pool of the
protocol was freed, but the freed linked list structures remained in the
list.