A forgotten else-clause caused BIRD to treat some pseudo-random place in
memory as an fd-pair. This happened only on startup of the first thread
in a group, and the value found there was typically zero, so the
spurious write went to stdin and succeeded.
When BIRD was run with stdin not present (as under systemd), it died on
this spurious write. Now it seems to work correctly.
Thanks to Daniel Suchy <danny@danysek.cz> for reporting.
http://trubka.network.cz/pipermail/bird-users/2023-May/016929.html
The original algorithm suffered from an ABA race condition:

  A: fp = page_stack
  B: pops the same page, allocates it and writes some data into it
  A: unsuspecting, loads the (now invalid) next = fp->next
  B: finishes working with the page and returns it back to page_stack
  A: compare-exchange page_stack: fp => next succeeds and writes garbage
     (the stale next pointer) to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
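
A minimal sketch of the technique (not the actual BIRD code; all names
are illustrative): the stack head pointer itself acts as the lock, so a
popping thread holds the whole stack while it reads the next pointer,
and the ABA window disappears.

  #include <stdatomic.h>
  #include <stddef.h>

  struct free_page { struct free_page *next; };

  /* Sentinel meaning "somebody is currently popping" */
  #define PAGE_STACK_LOCKED ((struct free_page *) 1)

  static _Atomic(struct free_page *) page_stack;

  static struct free_page *pop_page(void)
  {
    struct free_page *fp;

    /* Implicit spinlock: take the whole stack, spin while locked */
    while ((fp = atomic_exchange_explicit(&page_stack, PAGE_STACK_LOCKED,
                  memory_order_acquire)) == PAGE_STACK_LOCKED)
      ;

    /* The stack is ours; reading fp->next is now race-free */
    struct free_page *next = fp ? fp->next : NULL;

    /* Unlock by publishing the remainder of the stack */
    atomic_store_explicit(&page_stack, next, memory_order_release);
    return fp;
  }
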
If a thread encounters timeout == 0 for poll(), it considers itself
"busy", and with some hysteresis it tries to drop loops for other
threads to pick up, thus distributing work between threads more evenly.
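
A hedged sketch of the heuristic; the counter, the threshold and all
names are illustrative, not BIRD's actual code:

  #include <stdbool.h>

  #define BUSY_THRESHOLD 4   /* illustrative hysteresis constant */

  struct thread_state {
    int busy_counter;        /* consecutive zero-timeout polls */
  };

  /* Called once per poll round; timeout_was_zero means the thread
   * had pending work before even entering poll(). */
  static bool should_drop_loops(struct thread_state *thr, bool timeout_was_zero)
  {
    if (!timeout_was_zero)
    {
      /* The thread actually slept, so it is not busy; reset. */
      thr->busy_counter = 0;
      return false;
    }

    /* Hysteresis: one busy poll is not enough, require a streak. */
    if (++thr->busy_counter < BUSY_THRESHOLD)
      return false;

    /* Busy for a while: offer some loops for other threads to pick. */
    thr->busy_counter = 0;
    return true;
  }
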
Memory allocation is a fragile part of BIRD and we need to check that
everybody is using the resource pools in an appropriate way. To assure
this, all resource pools are associated with locking domains and every
resource manipulation is checked to ensure that the appropriate locking
domain is locked.
With transitive resource manipulations like resource dumping or mass
free operations, domains are locked and unlocked on the go, thus we
require pool domains to have a higher order than their parents to allow
for these transitive operations.
Adding pool locking revealed some cases of unsafe memory manipulation,
and this commit fixes those as well.
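
A minimal sketch of what the checking amounts to; the structures and
names are illustrative, not the actual BIRD types:

  #include <assert.h>
  #include <stddef.h>

  struct domain { int order; int locked; };
  struct pool { struct domain *dom; struct pool *parent; };

  /* Every resource manipulation asserts its pool's domain is held. */
  static void pool_assert_locked(struct pool *p)
  {
    assert(p->dom->locked);
  }

  /* Transitive operations (dump, mass free) lock child domains while
   * the parent is already held, hence the strictly rising order. */
  static void pool_check_order(struct pool *p)
  {
    if (p->parent)
      assert(p->dom->order > p->parent->dom->order);
  }
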
Now sk_open() requires an explicit IO loop to open the socket in. Also,
specific functions for pausing and resuming socket RX are added to allow
for BGP corking.
And last but not least, socket reloop is now synchronous, resolving
weird cases of the target loop stopping before actually picking up the
relooped socket. The caller must now ensure that both loops are locked
while relooping, and this way every socket always has its respective
loop.
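
A hedged usage sketch: sk_open() taking the loop follows from the text
above, but the pause/resume function names and exact signatures below
are assumptions for illustration only, as is struct my_proto.

  /* Hypothetical protocol start fragment; `p` stands for a protocol
   * instance owning a resource pool and an IO loop. */
  static void proto_open_socket(struct my_proto *p)
  {
    sock *s = sk_new(p->pool);
    s->type = SK_TCP_ACTIVE;
    /* ... addresses, ports, rx/tx/err hooks ... */

    /* The socket must now be opened in an explicit IO loop. */
    if (sk_open(s, p->loop) < 0)
      return;

    /* BGP corking: stop reading under congestion, resume later.
     * Hypothetical names; the text only says such functions exist. */
    sk_pause_rx(p->loop, s);
    /* ... once the pipeline drains ... */
    sk_resume_rx(p->loop, s);
  }
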
If there are lots of loops in a single thread and only some of them are
actually active, the inactive loops are now kept aside and not checked
until they actually get some timers, events or active sockets.
This should help with extreme loads like 100k tables and protocols.
Also, the ping and loop pickup mechanism allowed subtle race
conditions; collisions between loop ping and pickup are now handled
properly.
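
A simplified sketch of the kept-aside idea; the two-list split and all
names are illustrative:

  #include <stddef.h>

  struct loop {
    struct loop *next;
    int n_timers, n_events, n_active_sockets;
  };

  struct thread {
    struct loop *active;   /* polled every round */
    struct loop *idle;     /* kept aside, not checked at all */
  };

  /* A loop with nothing to do moves to the idle list and returns to
   * the active list only when it gains a timer, an event or an
   * active socket (e.g. via a ping). */
  static int loop_has_work(struct loop *l)
  {
    return l->n_timers || l->n_events || l->n_active_sockets;
  }
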
Instead of propagating interface updates as they are loaded from the
kernel, they are enqueued and all the notifications are called from a
protocol-specific event. This change makes it possible to break the
locking loop between protocols and interfaces.
Anyway, this change is based on the v2 branch to keep the differences
between v2 and v3 smaller.
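
A minimal sketch of the pattern; the queue type and names are
illustrative, only event and ev_schedule() are real BIRD primitives:

  /* Instead of calling the protocol's if_notify() hook directly from
   * the kernel scan, updates are queued and the protocol's own event
   * processes them later, under the protocol's lock. */

  struct iface_update { struct iface_update *next; /* copied data */ };

  struct iface_subscription {
    struct iface_update *head, **tail;
    event ev;                /* protocol-specific event */
  };

  static void iface_enqueue(struct iface_subscription *s, struct iface_update *u)
  {
    u->next = NULL;
    *s->tail = u;
    s->tail = &u->next;
    ev_schedule(&s->ev);     /* wake the protocol; no hooks called here */
  }
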
The interface list must be flushed when the device protocol is stopped.
This was done in a hardcoded protocol-specific hook inside the generic
protocol routines. The cleanup hook was originally used for late cleanup
of table reference counting, yet it can also simply be used for a
prettier interface list flush.
On large configurations, too many threads would spawn with one thread
per loop. Therefore, threads may now run multiple loops at once. The
thread count is configurable and may be changed at runtime. All threads
are spawned on startup.
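
For illustration, assuming the option is spelled "threads" at the top
level of bird.conf (treat the exact keyword as an assumption, not
documentation):

  # run all loops on four pre-spawned worker threads
  threads 4;
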
This change helps with memory bloat: BIRD filters need large temporary
memory blocks to store their stack, and the memory management keeps its
hot page storage per-thread.
Known bugs:
* Thread autobalancing is not yet implemented.
* Low latency loops are executed together with standard loops.
Some CLI actions, notably "show route", are run by queuing an event
somewhere else. If the user closes the socket while such an action is
being executed, the CLI must free the socket immediately from the error
hook, but the pool must remain until the asynchronous event finishes and
cleans everything up.
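
A simplified sketch of the lifetime split; names and structure are
illustrative, not the actual CLI code:

  /* Error hook called when the user side of the socket goes away. */
  static void cli_err_hook(sock *s, int err)
  {
    struct cli *c = s->data;

    /* Free the socket immediately; nobody may write to it anymore. */
    rfree(s);
    c->sock = NULL;

    /* But keep c->pool alive: the queued "show route" event still
     * uses it and frees everything once it finishes. */
    c->shutdown_pending = 1;
  }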