In some edge cases, a dynamic BGP instance starts but has not yet picked
up the socket from the peer when it gets shut down, typically during
a complete shutdown. Fix this by just closing the socket instead of
asserting that it has already been picked up.
Different internal allocators (memory blocks, linpools, and slabs) used
different ways to compute alignment. Unify them to use alignment based
on the standard max_align_t type.
On x86_64, this does not change the alignment of memory blocks and
linpools (both old and new are 16), but it increases the alignment of
slabs from 8 to 16.
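For illustration, a minimal sketch of the unified computation (the
macro and helper names here are hypothetical, not the actual BIRD
identifiers):

    #include <stddef.h>   /* max_align_t */

    /* One alignment for all internal allocators, derived from the
       standard type with the strictest alignment requirement. */
    #define ALLOC_ALIGN  _Alignof(max_align_t)

    /* Round a requested size up to a multiple of that alignment. */
    static inline size_t
    align_size(size_t s)
    {
      return (s + ALLOC_ALIGN - 1) & ~(size_t)(ALLOC_ALIGN - 1);
    }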
Minor changes by committer.
Best route refeed is tricky. The journal may repeatedly include the
same route in the old and/or the new position in case of flaps. We
don't like checking that fully in the RCU critical section, which is
already way too long, so we filter out repeated occurrences of the
current best route while keeping possibly more old routes.
We also don't want to send spurious withdraws, and we need to check that
only one notification per net is sent for RA_OPTIMAL.
A rejected map update was also missing in case of an idempotent
squashed update, and last but not least, the best route journal should
not include invalid routes (import keep filtered).
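Schematically, the filtering looks roughly like this (a simplified
sketch with hypothetical types and helpers, not the real journal
structures):

    /* Walk the journal entries for one net; skip repeated
       occurrences of the current best route caused by flaps,
       keep the remaining old routes. */
    int seen_best = 0;
    for (struct journal_entry *e = journal_first(net); e; e = e->next)
    {
      if (e->route == current_best)
      {
        if (seen_best)
          continue;             /* flap: same best route again */
        seen_best = 1;
      }
      refeed_route(e->route);
    }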
Even though allocating from tmp_linpool is quite cheap, it isn't cheap
when the block is larger than a page, which is the case here. Instead,
we now allocate just the result, which typically fits in a page,
avoiding the need for a malloc().
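Schematically (with hypothetical helper names around tmp_alloc):

    /* Before: a worst-case scratch block, possibly larger than
       a page, forcing the linpool into a malloc() fallback. */
    /* char *buf = tmp_alloc(MAX_RESULT_SIZE); */

    /* After: compute the result size first and allocate exactly
       that, which typically fits within a page. */
    size_t len = result_length(input);
    char *result = tmp_alloc(len);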
To allocate and free lots of small mid-term blocks at a fast pace,
mb_alloc is too slow and causes heap bloat. We can already allocate
blocks from slabs, and if we allow for a little bit of inefficiency,
we can just use multiple slabs with stepped sizes.
This technique is already used in ea_list allocation, which is going
to be converted to Stonehenge.
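A minimal sketch of the stepped-slab idea (hypothetical interface;
the actual Stonehenge code may differ, and slab/pool setup is
omitted here):

    /* Slabs with stepped object sizes: round each request up to
       the nearest step, trading a little internal waste for fast
       slab allocation instead of mb_alloc(). */
    static const size_t step_size[] = { 32, 64, 128, 256 };
    #define STEPS  (sizeof step_size / sizeof step_size[0])
    static slab *steps[STEPS];    /* one slab per size class */

    static void *
    stepped_alloc(size_t sz)
    {
      for (unsigned i = 0; i < STEPS; i++)
        if (sz <= step_size[i])
          return sl_alloc(steps[i]);

      return mb_alloc(pool, sz);  /* oversized: fall back */
    }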
There was an inefficiency in the initial scan state machine, causing
routes to be fed several times instead of just once. Now the export
startup is postponed until the first krt_scan() finishes and we can
actually do the pruning with full information.
Commit 21213be523baa7f2cbf0feaa617f265c55e9b17a expanded private_id
in route source to u64, but forgot to modify function arguments, so it
was still cropped at 32 bits, which may cause collisions for L3VPN.
This patch fixes that.
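The failure mode in miniature (illustrative only, not the actual
function involved):

    #include <stdint.h>

    /* A function parameter left at 32 bits silently crops the
       widened private_id: */
    static uint32_t lookup(uint32_t private_id) { return private_id; }

    uint64_t a = 0x100000001ULL;  /* two distinct 64-bit ids...  */
    uint64_t b = 0x000000001ULL;  /* ...with identical low bits  */
    /* lookup(a) == lookup(b) -- the two sources collide */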
On BSD, the onlink flag is not tracked or reported by the kernel. We
use a heuristic that assigns the onlink flag to routes scanned from
the kernel. We should use the same heuristic in the BSD-Netlink case
as well, since the onlink flag is not reported there either.
Thanks to Björn König for the original patch.
The Babel seqno wraps around when reaching its maximum value
(UINT16_MAX). Comparing seqnos has to take this into account, so
plain numeric comparisons do not work.
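The usual remedy is serial number arithmetic: reinterpret the unsigned
16-bit difference as signed, so ordering survives the wraparound. A
minimal sketch (hypothetical helper name):

    #include <stdint.h>

    /* True if seqno a is newer than b, modulo 2^16:
       e.g. a = 2, b = 65534 gives (int16_t)(a - b) == 4 > 0. */
    static inline int
    seqno_gt(uint16_t a, uint16_t b)
    {
      return (int16_t)(a - b) > 0;
    }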
We missed that the protocol spawner violates the prescribed
locking order. When the rtable level is locked, no new protocol can be
started, so we need to (as sketched below):
* create the protocol from a clean mainloop context
* take over the socket in the protocol start hook
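A rough shape of that deferral (hypothetical glue code; the real hooks
and structures differ):

    /* Called with the rtable level locked: do not spawn the
       protocol here, only queue an event for the mainloop. */
    static void
    request_spawn(struct spawn_request *req)
    {
      ev_schedule(&req->spawn_event);   /* runs do_spawn() later */
    }

    /* Mainloop context, no rtable lock held: safe to create the
       protocol; its start hook then takes over the peer socket. */
    static void
    do_spawn(void *data)
    {
      struct spawn_request *req = data;
      proto_spawn(req->cf, 0);
    }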
Testsuite: cf-bgp-autopeer
Fixes: #136
Thanks to Job Snijders <job@fastly.com> for reporting:
https://trubka.network.cz/pipermail/bird-users/2024-December/017980.html
Otherwise it looks like we are sending too much traffic to netlink
every now and then, which is not true. Now we can disambiguate between
in-kernel updates and ignored routes.
If there is no feed pending, the requested one should be activated
immediately; otherwise it is activated only after the full run,
effectively running a full feed first and then the requested one.
When many changes are done during reconfiguration, the table may
start pruning old routes before everything is settled down, slowing
down not only the reconfiguration, but also the shutdown process.
In our use case, these are impossibly greedy because we often free
memory in a different thread than the one where we allocate it,
forcing the default allocator to scatter the used memory all over the
place.
There was a suspicion that the BIRD 3 version of ROA gets the
digesting wrong. This test covers the nastiest corner cases we could
think of, so now we can expect it to be right.
We have quite large critical sections and we need to allocate inside
them. This is something to revise properly later on; for now, instead
of slowly but surely growing the virtual memory address space, it's
better to optimize the cold page cache pickup and count the situations
where this happens inside a critical section.