Due to a race condition between rta_apply_hostentry() and rt_update_hostentry(),
triggered when a new route is inserted into a table, this commit makes it
mandatory to lock the next hop resolution table while resolving the next hop.
This may be slow; we'll fix it properly in some future release.
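For illustration only, the required pattern looks roughly like this; the
struct and function names below are simplified stand-ins, not the actual
BIRD code:

    #include <pthread.h>

    struct hostentry { unsigned dest; };   /* placeholder for resolved data */

    struct nh_table {
      pthread_mutex_t lock;
      struct hostentry entries[256];       /* stand-in for the real hash */
    };

    /* Resolve addr and copy the result out while holding the table lock,
     * so a concurrent rt_update_hostentry() cannot rewrite the hostentry
     * under our hands. */
    static void
    resolve_next_hop(struct nh_table *t, unsigned addr, struct hostentry *out)
    {
      pthread_mutex_lock(&t->lock);        /* mandatory now, even on the hot path */
      *out = t->entries[addr % 256];       /* trivial stand-in for the lookup */
      pthread_mutex_unlock(&t->lock);
    }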
After converting BFD to the new IO loop system, reconfiguration never
really worked. Sadly, we missed this case in our testing suite, so it
flew under the radar for quite a while.
Thanks to Andrei Dinu <andrei.dinu@digitalit.ro> for reporting and
isolating this issue.
A forgotten else-clause caused BIRD to treat some pseudo-random place in
memory as an fd pair. This happened only on startup of the first thread
in a group, and the value at that place in memory was typically zero ...
so the write went to stdin and succeeded.
When BIRD was run with stdin not present (as under systemd), it died on
this spurious write. Now it seems to work correctly.
Thanks to Daniel Suchy <danny@danysek.cz> for reporting.
http://trubka.network.cz/pipermail/bird-users/2023-May/016929.html
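For illustration of the forgotten-else failure above, the pattern was
roughly the following; names and control flow are assumptions, not the
actual code:

    #include <unistd.h>

    struct thread_group {
      int threads_running;
      int wakeup_fds[2];             /* pipe used to wake the group up */
    };

    static void
    group_start_thread(struct thread_group *g)
    {
      if (g->threads_running++)
        ;                            /* a wakeup pipe already exists */
      else
        (void) pipe(g->wakeup_fds);  /* this else-clause was the missing one:
                                      * without it, the fd pair kept whatever
                                      * happened to be in memory -- usually 0 */

      /* with the fd pair left at zero, the wakeup write targeted fd 0,
       * i.e. stdin: harmless in a terminal, fatal when stdin is not
       * present, as under systemd */
      (void) write(g->wakeup_fds[1], "", 1);
    }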
There was a bug occurring when one thread looked up a src by its global ID
while another thread was allocating a src with an ID high enough to force
a reallocation of the route src global index. This brief moment of
inconsistency led to a rare use-after-free of the old global index block.
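For illustration, a simplified model of the hazard (not the actual data
structures):

    #include <stdlib.h>
    #include <string.h>

    struct rte_src;                        /* opaque here */

    struct src_index {
      size_t len;
      struct rte_src **block;              /* global ID -> src pointer */
    };

    /* Thread A: lookup by global ID, as it was done -- without anything
     * pinning the index block. */
    static struct rte_src *
    src_find(struct src_index *idx, size_t id)
    {
      return (id < idx->len) ? idx->block[id] : NULL;  /* may hit freed memory */
    }

    /* Thread B: allocating a src whose ID does not fit -> grow the index. */
    static void
    src_index_grow(struct src_index *idx, size_t new_len)
    {
      struct rte_src **nb = calloc(new_len, sizeof(*nb));
      memcpy(nb, idx->block, idx->len * sizeof(*nb));

      struct rte_src **old = idx->block;
      idx->block = nb;
      idx->len = new_len;
      free(old);    /* thread A may still be reading old -> use-after-free */
    }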
The original algorithm suffered from an ABA race condition:
A: fp = page_stack
B: completely allocates the same page and writes some data into it
A: unsuspecting, loads (invalid) next = fp->next
B: finishes working with the page and returns it to page_stack
A: compare-exchange page_stack: fp => next succeeds and writes garbage
   to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
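For illustration, a simplified C11 model of the ABA-prone pop (the real
allocator differs in structure and naming):

    #include <stdatomic.h>
    #include <stddef.h>

    struct free_page {
      struct free_page *next;     /* overwritten by data once the page is reused */
    };

    static _Atomic(struct free_page *) page_stack;

    static struct free_page *
    alloc_hot_page(void)
    {
      struct free_page *fp = atomic_load(&page_stack);
      while (fp)
      {
        /* if the page got reused between the load above and here,
         * this next pointer is already garbage */
        struct free_page *next = fp->next;

        /* succeeds as soon as the head is fp again -- even if the page
         * was popped, reused and pushed back in the meantime (ABA) */
        if (atomic_compare_exchange_weak(&page_stack, &fp, next))
          return fp;
      }
      return NULL;                /* empty: fall back to the cold allocator */
    }

With the implicit spinlock, only one thread at a time walks the hot page
stack, so the next pointer it reads cannot belong to a page that is
currently being reused.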
If a thread encounters timeout == 0 for poll, it considers itself
"busy" and, with some hysteresis, tries to drop loops for other threads
to pick up, thus distributing the work between threads more evenly.
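Illustration only; the thresholds and names below are assumptions, not
the actual implementation:

    struct thread_state {
      int busy_counter;       /* raised by zero-timeout polls, decayed otherwise */
      int is_busy;
    };

    #define BUSY_ENTER  10    /* assumed hysteresis thresholds */
    #define BUSY_LEAVE   2

    static void
    account_poll_timeout(struct thread_state *t, int poll_timeout_ms)
    {
      if (poll_timeout_ms == 0)
      {
        if (t->busy_counter < BUSY_ENTER)
          t->busy_counter++;  /* there is work pending right now */
      }
      else if (t->busy_counter > 0)
        t->busy_counter--;

      if (!t->is_busy && (t->busy_counter >= BUSY_ENTER))
        t->is_busy = 1;       /* busy: start offering loops to other threads */
      else if (t->is_busy && (t->busy_counter <= BUSY_LEAVE))
        t->is_busy = 0;       /* calmed down: keep our loops again */
    }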
Memory allocation is a fragile part of BIRD and we need to check that
everybody is using the resource pools in an appropriate way. To ensure
this, all resource pools are associated with locking domains and every
resource manipulation is checked to verify that the appropriate locking
domain is locked.
With transitive resource manipulations like resource dumping or mass
free operations, domains are locked and unlocked on the go; we therefore
require pool domains to have a higher order than their parent's to allow
for these transitive operations.
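A sketch of the invariants being checked, with illustrative names and
ASSERT-style checks standing in for the actual ones:

    #include <stdlib.h>

    #define ASSERT_DIE(x)  do { if (!(x)) abort(); } while (0)

    struct lock_domain {
      unsigned order;            /* position in the global locking order */
      int locked;                /* held by the current thread? */
    };

    struct pool {
      struct lock_domain *domain;
      struct pool *parent;
    };

    /* Every allocation, free or dump of a resource owned by this pool
     * must run with the pool's locking domain held. */
    static void
    pool_check_access(struct pool *p)
    {
      ASSERT_DIE(p->domain->locked);
    }

    /* A sub-pool's domain must be higher in the locking order than its
     * parent's, so it can be locked and unlocked on the go while walking
     * down from the parent during transitive operations. */
    static void
    pool_check_order(struct pool *p)
    {
      if (p->parent && (p->domain != p->parent->domain))
        ASSERT_DIE(p->domain->order > p->parent->domain->order);
    }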
Adding pool locking revealed some cases of unsafe memory manipulation,
which this commit fixes as well.