Instead of propagating interface updates as they are loaded from the
kernel, they are enqueued and all the notifications are called from a
protocol-specific event. This change makes it possible to break the
locking loop between protocols and interfaces.
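The mechanism may be sketched as follows (identifiers are hypothetical,
not the actual BIRD symbols): the kernel scan only enqueues updates and
schedules an event, so no protocol hook runs from inside the kernel
interface code.

  struct iface_update {
    node n;                          /* queue node */
    struct iface *iface;             /* updated interface */
    uint flags;                      /* what changed */
  };

  static list iface_update_queue;    /* init_list() at startup */
  static event iface_notify_event;   /* runs the protocol notifications */

  static void
  if_enqueue_update(struct iface *i, uint flags)
  {
    struct iface_update *u = mb_alloc(if_pool, sizeof(struct iface_update));
    u->iface = i;
    u->flags = flags;
    add_tail(&iface_update_queue, &u->n);
    ev_schedule(&iface_notify_event);  /* processed later, outside the scan */
  }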
This change is based on the v2 branch to keep the differences between
v2 and v3 smaller.
The interface list must be flushed when the device protocol is stopped.
Previously, this was done by a hardcoded, device-specific hook inside
the generic protocol routines. The cleanup hook was originally used for
late cleanup of table reference counting, yet it can also simply be
used for a prettier interface list flush.
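The resulting shape may be sketched like this (hook wiring simplified;
assuming the existing if_flush_ifaces() helper):

  static void
  dev_cleanup(struct proto *P)
  {
    /* drop all interfaces announced by this device protocol */
    if_flush_ifaces(P);
  }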
There were missing dependencies for proto-build.c generation, which
sometimes led to failed builds and ignored changes in the set of built
protocols. Fix that, and also improve the formatting of proto-build.c.
When creating a new babel_source object we initialise the seqno to 0. The
caller will update the source object with the right metric and seqno value,
for both newly created and old source objects. However, if we
initialise the source object's seqno to 0, that may actually turn out
to be a valid (higher)
seqno than the one in the routing table, because of seqno wrapping. In this
case the source metric will not be set properly, which breaks feasibility
tracking for subsequent updates.
To fix this, add a new initial_seqno argument to babel_get_source(),
which
is used when allocating a new object, and set that to the seqno value of
the update we're sending.
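A simplified sketch of the new allocation path (fields and helpers
abbreviated, not the exact BIRD code):

  static struct babel_source *
  babel_get_source(struct babel_proto *p, struct babel_entry *e,
                   u64 router_id, u16 initial_seqno)
  {
    struct babel_source *s = babel_find_source(e, router_id);
    if (s)
      return s;                 /* existing sources keep their seqno */

    s = sl_allocz(p->source_slab);
    s->router_id = router_id;
    s->seqno = initial_seqno;   /* was 0; now the seqno of the update */
    s->metric = BABEL_INFINITY;
    add_tail(&e->sources, NODE s);
    return s;
  }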
Thanks to Juliusz Chroboczek for the bug report.
Juliusz noticed there were a couple of places where we were doing
straight inequality comparisons of seqnos in Babel. This is wrong
because seqnos can wrap, so we need to use the modulo-64k comparison
function in these cases as well.
Introduce a strict-inequality version of the modulo-comparison for this
purpose.
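For illustration, the pair of comparators may look like this (a sketch;
the actual BIRD definitions may differ):

  /* a >= b and a > b in modulo-2^16 (seqno) arithmetic */
  static inline int ge_mod64k(uint a, uint b)
  { return (u16)(a - b) < 0x8000; }

  static inline int gt_mod64k(uint a, uint b)
  { return (a != b) && ((u16)(a - b) < 0x8000); }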
Instead of calling custom hooks from object locks, we use the standard
event-sending mechanism to inform protocols about object lock changes.
This is
a backport from version 3 where these events are passed across threads.
This implementation of object locks doesn't use mutexes to lock the
whole data structure. In version 3, this data structure may get accessed
from multiple threads and must be protected by a mutex.
Instead of calling custom hooks from object locks, we use the standard
event-sending mechanism to inform protocols about object lock changes.
As
event sending is lockless, the unlocking protocol simply enqueues the
appropriate event to the given loop when the locking is done.
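The pattern may be sketched like this (identifiers are assumptions):
each lock carries a prepared event, and instead of calling a hook, the
releasing side just sends that event to the acquirer's loop.

  struct object_lock {
    node n;              /* position in the lock queue */
    event *ev;           /* announced when the lock is acquired */
    event_list *target;  /* the acquiring loop's event queue */
    /* ... fields identifying the locked object ... */
  };

  static void
  olock_pass(struct object_lock *next)
  {
    /* lockless: just enqueue; the acquirer's own loop runs the event */
    ev_send(next->target, next->ev);
  }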
This reverts commit 7144c9ca46.
The onlink attribute implementation collides with the nexthop attribute
behavior in v3; we are setting it aside until we find out how to
reimplement it correctly.
For active sessions, ignore received packets with zero local id and
mismatched remote id. That forces a session timeout instead of an
immediate session restart. It makes BFD sessions more resilient to
packet spoofing.
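The receive-path check may be sketched like this (field and constant
names are assumptions):

  static int
  bfd_rx_acceptable(struct bfd_session *s, u32 your_disc, u32 my_disc)
  {
    /* In an established session, a packet carrying no echo of our
     * local id together with a wrong remote id is more likely spoofed
     * than a genuine restart: drop it, let the session time out. */
    if ((s->state == BFD_STATE_UP) && !your_disc &&
        (my_disc != s->remote_id))
      return 0;
    return 1;
  }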
Thanks to André Grüneberg for the suggestion.
Protocols receive if_notify() announcements that are filtered according
to their VRF setting, but during reconfiguration they access iface_list
directly and forget to check the VRF setting there, which leads to all
interfaces being added.
Fix this issue for Babel, OSPF, RAdv and RIP protocols.
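The fix boils down to repeating the same VRF test that if_notify()
filtering applies, this time in the reconfiguration walk (a sketch;
exact field names may differ):

  struct iface *iface;
  WALK_LIST(iface, iface_list)
  {
    /* skip interfaces belonging to a different VRF than the protocol */
    if (p->vrf_set && (p->vrf != iface->master))
      continue;
    /* ... re-evaluate interface patterns as before ... */
  }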
Thanks to Marcel Menzel for the bug report.
On large configurations, too many threads would be spawned, one thread
per loop. Therefore, threads may now run multiple loops at once. The
thread count is configurable and may be changed at runtime. All threads
are spawned on startup.
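For example, with the (assumed) global configuration option:

  threads 4;   # run all loops on four worker threads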
This change also helps with memory bloating: BIRD filters need large
temporary memory blocks to store their stack, and the memory management
keeps its hot page storage per-thread.
Known bugs:
* Thread autobalancing is not yet implemented.
* Low latency loops are executed together with standard loops.
If no channel is flushing, table prune doesn't walk over the routes in
nets and also doesn't walk over the importing channel lists. This
substantially reduces the memory caching burden.
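Schematically, the prune routine may short-circuit like this (the
flush-tracking field is hypothetical):

  static void
  rt_prune_table(rtable *tab)
  {
    /* no channel flushing: neither the per-net route walk nor the
     * importing-channel list walk is needed at all */
    if (EMPTY_LIST(tab->flush_channels))
      return;

    /* ... otherwise walk nets and importing channel lists ... */
  }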
Some CLI actions, notably "show route", are run by queuing an event
elsewhere. If the user closes the socket while such an action is being
executed, the CLI must free the socket immediately from the error hook,
but the pool must remain until the asynchronous event finishes and
cleans everything up.
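The split lifetime may be sketched like this (member names are
assumptions):

  static void
  cli_err_hook(sock *s, int err UNUSED)
  {
    struct cli *c = s->data;

    c->sock = NULL;   /* detach ... */
    rfree(s);         /* ... and free the socket right away */
    c->closing = 1;   /* the pool survives; the queued event checks
                         this flag and frees everything at the end */
  }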
When BIRD has no free memory mapped, it allocates several pages in
advance just to be sure that there is some memory available if needed.
This hysteresis tactic works quite well to reduce memory ping-pong
with the kernel.
Yet it had a subtle bug: this pre-allocation didn't take the memory
coldlist into account, therefore requesting new pages from the kernel
even when other pages were available. This led to slow memory bloating.
To demonstrate this behavior quickly enough to observe it, you may:
* temporarily set the values in sysdep/unix/alloc.c as follows to
  exacerbate the issue:

    #define KEEP_PAGES_MAIN_MAX 4096
    #define KEEP_PAGES_MAIN_MIN 1000
    #define CLEANUP_PAGES_BULK  4096

* create a config file with several million static routes
* periodically disable all static protocols and then reload config
* log memory consumption
This should give you a steady growth rate of about 16kB per cycle. If
you don't set the values this high, the issue happens much more slowly,
yet after 14 days of running, you are going to see an OOM kill.
After this fix, pre-allocation uses the memory coldlist to get some
hot pages, and the same test as described above yields perfectly
stable, constant memory consumption (after some initial wobbling).
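Schematically, the fixed pre-allocation prefers the coldlist (helper
names are hypothetical):

  while (pages_kept < KEEP_PAGES_MAIN_MIN)
  {
    void *page = coldlist_pop();  /* reuse a page we already have */
    if (!page)
      page = alloc_sys_page();    /* ask the kernel only when empty */
    hotlist_push(page);           /* make it available as a hot page */
    pages_kept++;
  }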
Thanks to NIX-CZ for reporting and helping to investigate this issue.
Thanks to Santiago for finding the cause in the code.