The prefix hash table in BGP used the same hash function as the rtable.
When a batch of routes is exported to BGP during feed/flush, they all
have similar hash values, so they crowd into a few slots of the BGP
prefix table (which is much smaller, roughly the size of the batch, and
uses the higher bits of the hash values), making lookups much slower due
to excessive collisions. Use a different hash function to avoid this.
Also, increase the batch size to fill 4k BGP packets and increase the
minimum BGP bucket and prefix hash sizes to avoid back-and-forth resizing
during flushes.
This leads to an order of magnitude faster flushes (on my test data).
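As a hedged illustration of the problem and the fix (px_hash() and px_slot()
are made-up names, not the actual functions in the tree): if the smaller
prefix table derives its slot from the top bits of the very same hash the
rtable already used, nets that shared those bits keep colliding; re-mixing
the value with a different function, e.g. a multiplicative step, spreads them.

  #include <stdint.h>

  /* Re-mix the rtable hash with a different (multiplicative) function so the
   * high bits used by the smaller prefix table no longer follow the rtable's
   * slot pattern. 2654435769 is floor(2^32 / golden ratio); any odd constant
   * with good bit diffusion would do. */
  static inline uint32_t px_hash(uint32_t net_hash)
  {
    return net_hash * 2654435769u;
  }

  /* The prefix table of 2^order slots (order >= 1) indexes by the high bits. */
  static inline uint32_t px_slot(uint32_t net_hash, unsigned order)
  {
    return px_hash(net_hash) >> (32 - order);
  }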
The route scope attribute was used for simple user route marking. As
there is a better tool for this (custom attributes), the old and limited
way can be dropped.
Some tokens are both keywords and symbols. For now, we allow only
specific keywords to be redefined; in the future, more keywords may be
added to this category.
The redefinable keywords must be specified in any .Y file as follows:
toksym: THE_KEYWORD ;
See proto/bgp/config.Y for an example.
Also dropped a lot of unused terminals.
Changes in internal API:
* Every route attribute must be defined as struct ea_class somewhere.
* Registration of route attributes known at startup must be done by
ea_register_init() from protocol build functions.
* Every attribute now has its symbol registered in the global symbol
  table, defined as SYM_ATTRIBUTE.
* All attribute IDs are dynamically allocated.
* The custom formatting hook for attribute values is defined in the ea_class.
* Attribute names are the same for display and filters, always prefixed
by protocol name.
Also added some unit testing code for filters with route attributes.
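To illustrate the items above, a minimal registration under the new API might
look like this; the "myproto" names are hypothetical and the exact field
layout of struct ea_class is an assumption based on this description:

  static struct ea_class ea_myproto_metric = {
    .name = "myproto_metric",  /* one name for both display and filters,
                                  prefixed by the protocol name */
    .type = T_INT,             /* filter-based type, not the old EAF_TYPE_* */
    /* the optional custom formatting hook would also be set here */
  };

  static void
  myproto_build(void)
  {
    /* Known at startup, so register from the protocol build function;
     * the attribute ID is allocated dynamically at this point and a
     * SYM_ATTRIBUTE symbol is created in the global symbol table. */
    ea_register_init(&ea_myproto_metric);
  }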
Implicit padding has undefined values in C. We want the eattr blocks
to be comparable by memcmp() and eattrs settable directly by structure
literals. This check ensures that all padding in eattr and bval is
explicit and therefore zeroed in all literals.
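A self-contained illustration of the rule being checked (this is not the real
eattr or bval layout): with the padding made explicit, every
designated-initializer literal zeroes all bytes, so equal values also compare
equal under memcmp().

  #include <stdint.h>

  struct val_like {
    uint8_t type;
    uint8_t flags;
    uint16_t unused;    /* explicit padding, zeroed by every struct literal */
    uint32_t data;
  };

  /* Fails to compile if the compiler had to insert any implicit padding. */
  _Static_assert(sizeof(struct val_like) ==
      sizeof(uint8_t) + sizeof(uint8_t) + sizeof(uint16_t) + sizeof(uint32_t),
      "implicit padding in struct val_like");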
This commit removes the EAF_TYPE_* namespace completely; the filter-based
T_* types are now used for route attributes as well. This simplifies
fetching and setting route attributes from filters.
Also, there is now a union bval which serves as a universal value holder,
replacing the private unions held separately by eattr and the filter code.
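A hedged sketch of the idea (the member names are assumptions, not the actual
definition): one union, tagged by a T_* filter type, holds any attribute or
filter value, so eattr and the filter code no longer need private unions.

  #include <stdint.h>

  struct adata;   /* BIRD's variable-length blob, used for pointer-typed values */

  union bval_sketch {
    uint32_t data;            /* numeric values, e.g. T_INT */
    const struct adata *ptr;  /* pointer values, e.g. AS paths, community lists */
  };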
Before this change, fetch-update-write and bitmasking were hardcoded in
the attribute access code, switched by the attribute type. Several filter
instructions are now used to do this instead.
As this is certainly going to be a little bit slower than before, the
switch block in the attribute access code should be removed completely in
the near future, helping with both performance and code cleanliness.
The user interface should stay intact.
When shutting down a Babel instance, we send a wildcard retraction to make
sure all peers can quickly switch to other route origins. Add another small
optimisation borrowed from babeld: sending a Hello message (along with the
retraction) with a very low interval.
This will cause neighbours to adjust their expiry timers for the node's
state so that it quickly times out, thus conserving resources in the network.
The interface pointer was improperly converted to u32 and back. Fix this
by explicitly allocating an adata structure for it. It's not very memory
efficient; we'll optimize this later.
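A sketch of the idea (the lp_alloc_adata() helper and the surrounding types
are BIRD internals; their use here is an assumption, not necessarily the code
actually committed): copy the whole pointer into an adata blob instead of
truncating it to u32.

  /* assumes BIRD's lib headers for struct adata, struct iface and linpool */
  static const struct adata *
  iface_to_adata(struct linpool *pool, struct iface *ifa)
  {
    /* store the full pointer value instead of casting it down to u32 */
    struct adata *ad = lp_alloc_adata(pool, sizeof(ifa));
    memcpy(ad->data, &ifa, sizeof(ifa));
    return ad;
  }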
When BIRD was munmapping too many pages, it sometimes aborted, saying
that munmap failed with "Not enough memory" as the address space was
getting more and more fragmented.
There is a workaround in place, simply keeping that page for future use,
yet it has never been compiled in because I somehow forgot to include
errno.h. And because I also thought that somebody might not have ENOMEM
defined (why?!), there was a check which quietly omitted that
workaround.
Anyway, ENOMEM is POSIX. It's utter nonsense to check for its
existence. If it doesn't exist, something is broken.
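A sketch of the workaround in generic POSIX C (keep_page_for_reuse() is a
hypothetical stand-in for the page-keeping list, not a function in the tree):
munmap() can legitimately fail with ENOMEM when splitting a mapping would
need an extra VMA that the kernel refuses to create, so the page is kept
instead of aborting.

  #include <errno.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  void keep_page_for_reuse(void *ptr, size_t len);  /* hypothetical stand-in */

  static void
  free_sys_page(void *ptr, size_t len)
  {
    if (munmap(ptr, len) == 0)
      return;

    /* Unmapping a page in the middle of a larger mapping splits it in two;
     * in a heavily fragmented address space the kernel may refuse to create
     * the extra mapping and munmap() fails with ENOMEM. Keep the page for
     * later reuse instead of aborting. */
    if (errno == ENOMEM)
    {
      keep_page_for_reuse(ptr, len);
      return;
    }

    abort();    /* any other failure is a real bug */
  }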
Add the BFD protocol option 'strict bind' to use a separate listening
socket for each BFD interface, bound to its address, instead of using
shared listening sockets.
Flushing tmp_linpool in these cases is too cryptic, and we don't want
anybody in the future to break this code by adding an allocation
somewhere that should persist over that flush.
Saving and restoring the linpool state is safer.
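A sketch of the intended pattern (lp_save(), lp_restore() and lp_state are
assumptions based on this description): remember the linpool position, do the
temporary work, then roll back to that position, so anything allocated from
tmp_linpool before this point stays valid.

  lp_state tmps;
  lp_save(tmp_linpool, &tmps);

  /* ... temporary allocations from tmp_linpool ... */

  lp_restore(tmp_linpool, &tmps);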