If a thread encounters timeout == 0 for poll, it considers itself
"busy" and, with some hysteresis, tries to drop loops for other threads
to pick up, thus distributing work between threads more evenly.
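A minimal sketch of such a heuristic, with all names invented purely for
illustration (the real implementation in the thread scheduling code
differs):

  struct thread_state {
    int busy_counter;                /* consecutive busy poll rounds */
  };

  #define BUSY_THRESHOLD 4           /* hysteresis: ignore single spikes */

  static void
  thread_drop_loops(struct thread_state *t)
  {
    (void) t;  /* would push some of our loops onto the shared pickup list */
  }

  static void
  thread_after_poll(struct thread_state *t, int poll_timeout)
  {
    if (poll_timeout == 0)
    {
      /* Work was pending immediately, so this thread counts as busy. */
      if (++t->busy_counter >= BUSY_THRESHOLD)
        thread_drop_loops(t);
    }
    else
      /* Only an idle round resets the counter. */
      t->busy_counter = 0;
  }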
Memory allocation is a fragile part of BIRD and we need to check that
everybody is using the resource pools in an appropriate way. To ensure
this, all the resource pools are associated with locking domains and
every resource manipulation is checked to verify that the appropriate
locking domain is locked.
With transitive resource manipulation like resource dumping or mass free
operations, domains are locked and unlocked on the go, thus we require
pool domains to have a higher order than their parent to allow for these
transitive operations.
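A minimal sketch of the idea, with hypothetical types and names (BIRD's
real locking-domain machinery differs). The higher-order requirement is
what allows a dump to lock a child pool's domain while the parent's
domain is already held:

  #include <assert.h>

  struct lock_domain {
    int locked;        /* is this domain currently held by us? */
    int order;         /* child pool domains need a higher order than parents */
  };

  struct pool {
    struct lock_domain *dom;
  };

  static void *
  pool_alloc(struct pool *p, unsigned size)
  {
    /* Every manipulation asserts the pool's domain is locked, so any
     * unlocked access aborts loudly instead of corrupting memory. */
    assert(p->dom->locked);
    (void) size;
    return 0;          /* actual allocation elided */
  }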
Adding pool locking revealed some cases of insecure memory manipulation
and this commit fixes those as well.
Now sk_open() requires an explicit IO loop to open the socket in. Also
specific functions for socket RX pause / resume are added to allow for
BGP corking.
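The call shapes might look roughly like this; both signatures below are
assumptions for illustration, not BIRD's verified declarations:

  struct birdloop;
  typedef struct birdsock sock;

  void sk_pause_rx(struct birdloop *loop, sock *s);   /* assumed shape */
  void sk_resume_rx(struct birdloop *loop, sock *s);  /* assumed shape */

  static void
  bgp_cork_sketch(struct birdloop *loop, sock *s)
  {
    sk_pause_rx(loop, s);    /* corked: stop reading further BGP messages */
    /* ... drain pending work ... */
    sk_resume_rx(loop, s);   /* uncorked: RX processing continues */
  }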
And last but not least, socket reloop is now synchronous to resolve
weird cases of the target loop stopping before actually picking up the
relooped socket. Now the caller must ensure that both loops are locked
while relooping, and this way all sockets always have their respective
loop.
If there are lots of loops in a single thread and only some of the loops
are actually active, the other loops are now kept aside and not checked
until they actually get some timers, events or active sockets.
This should help with extreme loads like 100k tables and protocols.
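The activity test that keeps a loop in the polling rotation could be
sketched like this (hypothetical fields, purely for illustration):

  struct loop_state {
    int n_timers;
    int n_events;
    int n_active_sockets;
  };

  /* A loop with nothing to do gets parked on a side list and is not
   * polled again until a timer, event or active socket shows up. */
  static int
  loop_is_active(const struct loop_state *l)
  {
    return l->n_timers || l->n_events || l->n_active_sockets;
  }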
Also, the ping and loop pickup mechanism allowed subtle race conditions.
Collisions between loop ping and pickup are now handled properly.
Instead of propagating interface updates as they are loaded from the
kernel, they are enqueued and all the notifications are called from a
protocol-specific event. This change allows breaking the locking loop
between protocols and interfaces.
Anyway, this change is based on the v2 branch to keep the changes
between v2 and v3 smaller.
The interface list must be flushed when the device protocol is stopped.
This was done in a hardcoded protocol-specific hook inside generic
protocol routines. The cleanup hook was originally used for late cleanup
of table reference counting, yet it can also simply be used for a
prettier interface list flush.
On large configurations, too many threads would spawn with one thread
per loop. Therefore, threads may now run multiple loops at once. The
thread count is configurable and may be changed at run time. All threads
are spawned on startup.
This change helps with memory bloating: BIRD filters need large
temporary memory blocks to store their stack, and memory management
keeps its hot page storage per-thread.
Known bugs:
* Thread autobalancing is not yet implemented.
* Low latency loops are executed together with standard loops.
Some CLI actions, notably "show route", are run by queuing an event
somewhere else. If the user closes the socket while such an action is
being executed, the CLI must free the socket immediately from the error
hook, but the pool must remain until the asynchronous event finishes and
cleans everything up.
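The split lifetime could be sketched like this, with hypothetical names
and the actual freeing elided:

  struct cli_conn {
    void *sock;        /* freed immediately when the user hangs up */
    void *pool;        /* must outlive the queued event */
    int closing;
  };

  static void
  cli_err_hook(struct cli_conn *c)
  {
    /* The socket dies now, but the pool stays: the queued event
     * (e.g. a running "show route") may still be using it. */
    c->sock = NULL;    /* real code would free the socket resource here */
    c->closing = 1;
  }

  static void
  cli_event_finished(struct cli_conn *c)
  {
    if (c->closing)
      c->pool = NULL;  /* real code would free the pool and the cli here */
  }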
When BIRD has no free memory mapped, it allocates several pages in
advance just to be sure that there is some memory available if needed.
This hysteresis tactic works quite well to reduce memory ping-pong with
the kernel.
Yet it had a subtle bug: this pre-allocation didn't take the memory
coldlist into account, and therefore requested new pages from the kernel
even when there were other pages available. This led to slow memory
bloating.
To demonstrate this behavior quickly enough to be seen well, you may:
* temporarily set the values in sysdep/unix/alloc.c as follows to
exacerbate the issue:
#define KEEP_PAGES_MAIN_MAX 4096
#define KEEP_PAGES_MAIN_MIN 1000
#define CLEANUP_PAGES_BULK 4096
* create a config file with several millions of static routes
* periodically disable all static protocols and then reload config
* log memory consumption
This should give you a steady growth rate of about 16kB per cycle. If
you don't set the values this high, the issue happens much more slowly,
yet after 14 days of running, you are going to see an OOM kill.
After this fix, pre-allocation uses the memory coldlist to get some hot
pages and the same test as described here gets you a perfectly stable
constant memory consumption (after some initial wobbling).
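In simplified terms, the fix moves the coldlist check in front of the
mmap() call (a sketch with stand-in list accessors, not the actual
allocator):

  #include <stddef.h>
  #include <sys/mman.h>

  static void *hot_list_pop(void)  { return NULL; }  /* stand-in accessors */
  static void *cold_list_pop(void) { return NULL; }

  static void *
  alloc_page(void)
  {
    void *p;

    if ((p = hot_list_pop()))
      return p;

    /* The fix: reuse a cold page before asking the kernel for new ones. */
    if ((p = cold_list_pop()))
      return p;

    return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }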
Thanks to NIX-CZ for reporting and helping to investigate this issue.
Thanks to Santiago for finding the cause in the code.
The usage pattern implemented in the allocator seems to be incompatible
with transparent huge pages, as memory released using
madvise(MADV_DONTNEED) with regular page size and alignment does not
seem to trigger demotion of huge pages back to regular pages, even when
a significant number of pages is released. Even if demotion is triggered
when system memory is low, it still breaks memory accounting.
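One way to avoid the mismatch is to opt the allocator's mappings out of
THP right after mapping them; whether this change does exactly that is
an assumption here, but MADV_NOHUGEPAGE is the standard knob:

  #include <sys/mman.h>

  /* Request regular pages for a fresh block so that later 4K-granular
   * MADV_DONTNEED calls behave as accounted for. */
  static void *
  map_block(size_t len)
  {
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p != MAP_FAILED)
      madvise(p, len, MADV_NOHUGEPAGE);
    return p;
  }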
Log a message before aborting due to watchdog timeout. We have to use an
async-safe write to the debug log, as it is done in a signal handler.
Minor changes from committer.
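The pattern is roughly this (a sketch; the file descriptor of the debug
log is an assumption):

  #include <stdlib.h>
  #include <unistd.h>

  static int dbg_fd = 2;   /* assumed: file descriptor of the debug log */

  /* Only async-signal-safe functions may run here, hence a raw write()
   * instead of the regular logging machinery. */
  static void
  watchdog_timeout(int sig)
  {
    static const char msg[] = "Watchdog timer expired\n";
    (void) sig;
    (void) write(dbg_fd, msg, sizeof(msg) - 1);
    abort();
  }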
When there is a continuous stream of CLI commands, cli_get_command()
always returns 1 (there is a new command). However, the socket receive
buffer was reset only when there was no command at all, leading to
strange behavior: after a while, the CLI receive buffer came to its end,
then read() was called with a zero-size buffer and returned 0, which was
interpreted as EOF.

The patch fixes that by resetting the buffer position after each command
and moving the remaining data to the beginning of the buffer.
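The compaction step might look like this (field names are hypothetical,
not the actual cli structure):

  #include <string.h>

  struct cli_buf {
    char buf[4096];
    char *end;       /* one past the last received byte */
    char *parsed;    /* one past the last fully parsed command */
  };

  /* After each command, shift leftover bytes to the front of the buffer
   * so the next read() is never called with a zero-size buffer. */
  static void
  cli_compact(struct cli_buf *c)
  {
    size_t remains = c->end - c->parsed;
    memmove(c->buf, c->parsed, remains);
    c->parsed = c->buf;
    c->end = c->buf + remains;
  }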
Thanks to Maria Matejka for examining the bug and for the original bugfix.
BIRD keeps the previous (old) configuration for the purpose of undo. The
existing code frees it after a new configuration is successfully parsed
during reconfiguration. That causes memory usage spikes, as there are
temporarily three configurations (old, current, and new). The patch
changes it to free the old one before parsing the new one (as the user
has already requested a new config). The disadvantage is that undo is
not available after a failed reconfiguration.
Memory unmapping causes slow address space fragmentation, leading in
extreme cases to failure to allocate pages at all. We remove this
problem by keeping all the pages allocated to us, yet calling madvise()
to let the kernel dispose of them.

This adds a little complexity and overhead as we have to keep the
pointers to the free pages; therefore, to hold e.g. 1 GB of 4K pages
with 8 B pointers, we have to store 2 MB of data.
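A minimal sketch of the technique (fixed-size pointer store with
invented limits, not the actual allocator):

  #include <sys/mman.h>

  #define PAGE_SIZE 4096
  #define MAX_KEPT  (1 << 18)   /* pointers for 1 GB of 4K pages, ~2 MB */

  static void *kept[MAX_KEPT];
  static int n_kept;

  /* Instead of munmap(), let the kernel drop the backing memory while
   * we keep the virtual range mapped for cheap reuse later. */
  static void
  page_free(void *p)
  {
    if (n_kept < MAX_KEPT)
    {
      madvise(p, PAGE_SIZE, MADV_DONTNEED);
      kept[n_kept++] = p;
    }
    else
      munmap(p, PAGE_SIZE);
  }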
This is a reimplementation of commit 0f2be469f8 by Alexander Zubkov. In
the master branch, changes in commit eb937358 broke setting of channel
preference for alien routes learned during scan. The preference was set
only for async routes.

The original solution is extended here to accommodate v3 specifics.
Changes in commit eb937358 broke setting of channel preference for alien
routes learned during scan. The preference was set only for async
routes. Move the common attribute processing part of functions
krt_learn_scan() and krt_learn_async() into a separate function to have
only one place for such changes.
Remove the compile-time sysdep option CONFIG_ALL_TABLES_AT_ONCE and
replace it with the runtime ability to run either separate table scans
or a shared scan. On Linux, use separate table scans by default when the
netlink socket option NETLINK_GET_STRICT_CHK is available, but fall back
to the shared scan when it fails.
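The availability probe amounts to a setsockopt() call that older kernels
reject, e.g.:

  #include <sys/socket.h>
  #include <linux/netlink.h>

  /* Returns 0 when strict checking (and thus per-table dumps) works;
   * on older kernels setsockopt() fails and the shared scan is used. */
  static int
  nl_enable_strict(int fd)
  {
    int one = 1;
    return setsockopt(fd, SOL_NETLINK, NETLINK_GET_STRICT_CHK,
                      &one, sizeof(one));
  }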
Running separate table scans has advantages where some routing tables are
managed independently, e.g. when multiple routing daemons are running on
the same machine, as kernel routing table modification performance is
significantly reduced when the table is modified while it is being
scanned.
Thanks to Daniel Gröber for the original patch and to Toke
Høiland-Jørgensen for suggestions.
The learnt routes are now all pushed into the connected table, not only
the best one. This shouldn't do any damage in well managed setups, yet
it should be noted that it is a change of behavior.

If anybody misses a feature which they implemented by misusing this
internal learn table, let us know and we'll consider implementing it in
a better way.
Passing the protocol to preexport was in fact a historical relic from
the old times when channels weren't a thing. This refactors it to match
current extensibility needs.
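The shape of the change, with simplified stand-in types (not BIRD's
exact declarations): the hook receives the channel, and the protocol
remains reachable through it, so nothing is lost while per-channel
decisions become possible.

  struct proto;                  /* stand-in forward declarations */
  struct rte;
  struct channel { struct proto *proto; /* ... */ };

  /* Old hook shape: */
  typedef int (*preexport_old)(struct proto *p, struct rte *e);

  /* New hook shape, matching how exports actually happen per channel: */
  typedef int (*preexport_new)(struct channel *c, struct rte *e);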
There were quite a lot of conflicts in the flowspec validation code,
which ultimately led to some code being rewritten a bit, not merely
adapted from one branch or the other, yet it still stays within the
limits of a merge.