To allow reading protocol states from other protocols or from completely
different routines, we have to export these states into data structures
that do not require locking the protocol loops.
On one hand, this doesn't give the reader the actual state "right now";
on the other hand, getting that is impossible in a properly
multithreaded environment anyway, and the information always arrives with
some (small but noteworthy) delay.
This implementation handles only the basic state information common to
all protocols. Protocol-specific state information should be added by
implementing the init_state() protocol hook.
Channel information is stored but not announced, as we don't need the
announcements for now.
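A minimal sketch of what such an export might look like, assuming a snapshot
published via an atomic pointer swap; all names and fields below are
illustrative, not the actual BIRD structures:

  /* Hypothetical lock-free state export: the protocol loop publishes an
   * immutable snapshot, readers load it without locking the loop. */
  #include <stdatomic.h>
  #include <stdlib.h>

  struct proto_state_snapshot {
    int proto_state;        /* basic state common to all protocols */
    unsigned channels_up;   /* channel info is stored but not announced */
    /* protocol-specific fields would be set up by an init_state()-like hook */
  };

  struct proto_state_export {
    _Atomic(struct proto_state_snapshot *) current;
  };

  /* Writer side, runs inside the protocol loop. */
  static void
  proto_state_publish(struct proto_state_export *e,
                      const struct proto_state_snapshot *src)
  {
    struct proto_state_snapshot *next = malloc(sizeof *next);
    *next = *src;
    struct proto_state_snapshot *old = atomic_exchange(&e->current, next);
    /* The old snapshot may still be in use by readers; it has to go
     * through a deferred free, not a plain free(old) right here. */
    (void) old;
  }

  /* Reader side: no protocol-loop lock taken; the data may be slightly stale. */
  static struct proto_state_snapshot *
  proto_state_read(struct proto_state_export *e)
  {
    return atomic_load(&e->current);
  }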
For the upcoming rework of protocol state information propagation,
we need some more eattr types to be defined.
These types are probably not defined completely; before using them
for route attributes, check that they don't lack some crucial methods.
The sync is actually needed only when the pages get freed, not after
every single item cleanup, as the data technically stays intact until the
deferred frees are called.
This partially reverts commit d617801c31.
The common lockfree approach doesn't work well for high-volume structures
like the eattr cache, because it expects the structure to be cleaned up by
a sweeper routine, which is very inefficient for >1M records.
OTOH, we still need the deferred ea_free in all cases, so that part is kept.
The spinlocked hash has a main rw spinlock for the data blocks
and then a rw spinlock for each hash chain. Rehashing is asynchronous,
running from an event, and it happens chain-wise, never blocking more
than one chain at a time.
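A rough sketch of that two-level locking layout, using pthread rwlocks in
place of the rw spinlocks and purely illustrative names:

  #include <pthread.h>
  #include <stdbool.h>

  struct sh_node {
    struct sh_node *next;
    unsigned hash;
    /* payload ... */
  };

  struct sh_chain {
    pthread_rwlock_t lock;      /* per-chain rw lock */
    struct sh_node *first;
  };

  struct spinhash {
    pthread_rwlock_t main_lock; /* protects the block of chains itself */
    unsigned order;             /* 2^order chains */
    struct sh_chain *chains;
  };

  static bool
  spinhash_contains(struct spinhash *h, unsigned hash)
  {
    bool found = false;
    pthread_rwlock_rdlock(&h->main_lock);   /* pin the chain array */
    struct sh_chain *c = &h->chains[hash & ((1u << h->order) - 1)];
    pthread_rwlock_rdlock(&c->lock);        /* lock just this one chain */
    for (struct sh_node *n = c->first; n; n = n->next)
      if (n->hash == hash)
      {
        found = true;
        break;
      }
    pthread_rwlock_unlock(&c->lock);
    pthread_rwlock_unlock(&h->main_lock);
    return found;
  }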
There is an IP table for every ROA table, holding special records
combining all known ROAs for every top-prefix.
The ROA digestor is now an IP digestor, running over the auxiliary
table.
The channel now just subscribes to yet another journal, announcing
digested tries from the ROA table.
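For illustration, the special records mentioned above, combining all known
ROAs under one top-prefix, could look roughly like this; the names and
layout are hypothetical, not the actual BIRD definitions:

  #include <stdint.h>

  /* One ROA as stored in the combined record. */
  struct roa_item {
    uint32_t asn;        /* origin AS number */
    uint8_t pxlen;       /* prefix length of the ROA itself */
    uint8_t max_pxlen;   /* maximum allowed prefix length */
  };

  /* One record in the auxiliary IP table, keyed by its top-prefix and
   * combining all known ROAs covered by it. */
  struct roa_aggregate {
    uint32_t n_items;
    struct roa_item items[];
  };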
Creating tries on the fly in every channel was too slow and ate
obnoxious amounts of memory. Instead, the tries are
constructed directly in the table and the channels are notified
with the completed tries.
The delayed export-release mechanism is used to keep the tries allocated
until routes get reloaded.
Originally, this mechanism required checking whether there's enough time to work
and then sending an event. This macro combines all that logic and moves it
straightforwardly to the _end_ of the export processing loop.
Note that there were two cases where the export processing loop
was deferred at the _beginning_, which led to some routes being ignored on
reimport. This wasn't easily noticeable in the tests until the one-task
limit got a ceiling of 300 ms to keep latency reasonable.
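A standalone sketch of the check-and-defer-at-the-end pattern; all names
here are invented for illustration and are not the actual macro or event API:

  #include <stdbool.h>

  struct loop;                               /* stand-in for the event loop */
  struct event;                              /* stand-in for an event */
  void send_continue_event(struct loop *, struct event *);  /* assumed: reschedules the rest */
  bool task_before_deadline(void);           /* assumed: time budget check */

  /* If the time budget is exhausted, schedule a continuation event
   * and leave the loop; otherwise keep going. */
  #define EXPORT_DEFER_AT_END(l, e, stop_label)  do {   \
    if (!task_before_deadline())                        \
    {                                                   \
      send_continue_event((l), (e));                    \
      goto stop_label;                                  \
    }                                                   \
  } while (0)

  void
  export_loop(struct loop *l, struct event *cont)
  {
    for (;;)
    {
      /* ... process one pending export here ... */

      /* The check sits at the _end_ of the iteration, so the item we have
       * already picked up is always processed, never silently dropped. */
      EXPORT_DEFER_AT_END(l, cont, out);
    }
  out:
    return;
  }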
In the future, this and rtable's data structures should probably be merged,
but it isn't a good idea to do that now. The data structure used is similar
to rtable -- an array of pointers to linked lists.
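A purely illustrative shape of that structure, with hypothetical names:

  /* An array of pointers to linked lists, similar to rtable:
   * each slot heads the list of entries stored under it. */
  struct export_item {
    struct export_item *next;        /* next entry in the same slot */
    /* per-entry data ... */
  };

  struct export_block {
    unsigned len;                    /* number of slots */
    struct export_item *slots[];     /* one list head per slot */
  };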
Feed is lockless, as with all tables.
Full export (receiving updates) is not supported yet, but we have no
way to use it anyway. It will be implemented later.
Introducing a new omnipotent internal API to pass route updates
from any point to wherever we want.
From now on, all exports should be processed by the RT_WALK_EXPORTS
macro, and you can also issue a separate feed-only request to just get a
feed and finish.
The exporters can now also stop and the readers must expect that to
happen and recover. Main tables don't stop, though.
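The exact shape of RT_WALK_EXPORTS is not reproduced here; the following is
only an illustrative reader-side pattern with hypothetical names, showing the
contract that a stopped exporter is a normal condition to recover from:

  enum walk_result { WALK_UPDATE, WALK_DONE, WALK_STOPPED };

  enum walk_result reader_next_export(void *req);  /* assumed helper */
  void reader_process_update(void *req);           /* assumed helper */
  void reader_unsubscribe(void *req);              /* assumed helper */

  static void
  reader_run(void *req)
  {
    for (;;)
      switch (reader_next_export(req))
      {
        case WALK_UPDATE:
          reader_process_update(req);   /* handle one route update */
          break;
        case WALK_DONE:
          return;                       /* nothing more pending for now */
        case WALK_STOPPED:
          /* The exporter has stopped; main tables never do, but other
           * exporters may, and the reader must clean up and recover. */
          reader_unsubscribe(req);
          return;
      }
  }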