We have quite large critical sections and we need to allocate inside
them. This is something to revise properly later on; for now, instead
of slowly but surely growing the virtual memory address space, it's
better to optimize the cold page cache pickup and count the situations
where this happens inside a critical section.
The resource dumping routines needed to be updated in v3 to use the new
API introduced in v2.
Conflicts:
filter/f-util.c
filter/filter.c
lib/birdlib.h
lib/event.c
lib/mempool.c
lib/resource.c
lib/resource.h
lib/slab.c
lib/timer.c
nest/config.Y
nest/iface.c
nest/iface.h
nest/locks.c
nest/neighbor.c
nest/proto.c
nest/route.h
nest/rt-attr.c
nest/rt-table.c
proto/bfd/bfd.c
proto/bmp/bmp.c
sysdep/unix/io.c
sysdep/unix/krt.c
sysdep/unix/main.c
sysdep/unix/unix.h
The EAttr ID space is dense so we can just walk once, sweep the whole
input and go home.
There is a little memory inefficiency in always allocating the largest
possible block, yet it isn't too bad.
There are also unit tests for this.
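In other words, one pass over the input into an array sized for the
entire dense ID space; an illustrative sketch only, where all names and
bounds are made up:

  struct eattr { unsigned id; };         /* simplified stand-in */
  #define EA_ID_MAX 1024                 /* dense ID space upper bound */

  static void sweep(const struct eattr *in, unsigned n)
  {
    /* "Largest possible block": one slot per possible ID. */
    const struct eattr *by_id[EA_ID_MAX] = { 0 };

    for (unsigned i = 0; i < n; i++)     /* one walk over the input */
      by_id[in[i].id] = &in[i];

    /* ... process by_id and go home ... */
    (void) by_id;
  }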
The function lfjour_cleanup_hook() was scheduled each time any of the
journal recipients reached the end of a block of journal items or read
all of the journal items. Because lfjour_cleanup_hook() can clean only
journal items that every recipient has processed, it was often called
uselessly.
This commit removes most of the useless scheduling. Only some
recipients are given a token allowing them to try to schedule the
cleanup hook. When a recipient wants to schedule the cleanup hook, it
checks whether it has a token. If it does, it discards its own token
and decrements the number of tokens the journal has issued
(issued_tokens). If issued_tokens reaches zero, the recipient is
allowed to schedule the cleanup hook.
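A minimal sketch of the recipient-side check, assuming hypothetical
structure layouts (only issued_tokens and max_tokens come from the
text; ev_schedule() is BIRD's event-scheduling API):

  #include <stdatomic.h>

  struct event;                          /* BIRD's lib/event.h */
  void ev_schedule(struct event *);

  struct lfjour {                        /* hypothetical layout */
    atomic_uint issued_tokens;           /* tokens currently held by recipients */
    unsigned max_tokens;                 /* cap on tokens handed out */
    struct event *cleanup;               /* runs lfjour_cleanup_hook() */
  };

  struct lfjour_recipient {
    int has_token;                       /* may this recipient try to schedule? */
  };

  /* Called when a recipient finishes a block of journal items. */
  static void try_schedule_cleanup(struct lfjour *j, struct lfjour_recipient *r)
  {
    if (!r->has_token)
      return;                            /* no token: not our turn to schedule */

    r->has_token = 0;                    /* discard the token; if it was the  */
    if (atomic_fetch_sub(&j->issued_tokens, 1) == 1)
      ev_schedule(j->cleanup);           /* last one issued, schedule the hook */
  }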
There is a maximum number of tokens a journal can give to its recipients
(max_tokens). A new recipient is given a token at its init, unless the
maximum number of tokens has already been reached. The remaining tokens
are handed out to recipients in lfjour_cleanup_hook().
In the cleanup hook, the issued_tokens number is increased first, in
order to avoid the hook being called again before it finishes. Then,
tokens are given to the slowest recipients (but never to more than
max_tokens recipients). Before leaving lfjour_cleanup_hook(), the
issued_tokens number is decreased back.
If no other tokens are given out, we have to make sure that
lfjour_cleanup_hook() will be called again. If every item in the journal
has been read by every recipient, tokens are given to random recipients.
If all recipients holding tokens have managed to finish by now, we give
a token to the first unfinished recipient we find, or we just call the
hook again.
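The hook side could then look roughly like this (same assumptions;
give_tokens_to_slowest() is a hypothetical helper that hands at most
max_tokens tokens to the slowest recipients and returns how many it
gave out):

  unsigned give_tokens_to_slowest(struct lfjour *j, unsigned max);

  static void cleanup_hook_sketch(struct lfjour *j)
  {
    /* Hold a virtual token so the hook cannot be scheduled while running. */
    atomic_fetch_add(&j->issued_tokens, 1);

    /* ... free journal items already processed by every recipient ... */

    unsigned given = give_tokens_to_slowest(j, j->max_tokens);

    /* Drop the virtual token; if nobody received a token, nobody would
       ever schedule the hook again, so ask for another round ourselves. */
    if (atomic_fetch_sub(&j->issued_tokens, 1) == 1 && !given)
      ev_schedule(j->cleanup);
  }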
Fix several errors including:
- Unaligned memory access to 'Length of Error Text' field
- No validation of 'Length of Encapsulated PDU' field
- No validation of 'Error Code' field
- No validation of characters in diagnostic message
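For illustration, the usual cure for the unaligned-access part is to
read multi-byte PDU fields via memcpy() rather than through a cast
pointer (a generic sketch, not the actual patch):

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h>

  /* Read a 32-bit network-order field at a possibly unaligned offset. */
  static uint32_t pdu_get_u32(const uint8_t *p)
  {
    uint32_t v;
    memcpy(&v, p, sizeof v);   /* no direct unaligned load */
    return ntohl(v);
  }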
All the 'dump something' CLI commands now have a new mandatory
argument -- the name of the file where to dump the data. This allows
for more flexible dumping even in production deployments where
the debug output is off by default.
Also, the dump commands are now restricted (they weren't before)
to ensure that only the appropriate users can run these time-consuming
commands.
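For example (the path is hypothetical; the file name is passed as a
quoted string):

  bird> dump routes "/tmp/bird-routes.dump"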
When printing near the end of the buffer, there was an overflow in two cases:
(1) %c when the remaining size is zero
(2) %1N, %1I, %1I4, %1I6 (auto-filled field_width for Net or IP) when the
    remaining size is more than the actual length of the net/ip but less
    than the auto-filled field width.
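Schematically, the second case belongs to this bug class (a simplified
sketch with made-up names, not the actual bsnprintf() code):

  #include <string.h>

  /* Emit 'tmp' (text of length 'len') right-aligned to 'width' columns
     into a buffer with 'remains' bytes left. */
  static int emit_padded(char *pos, int remains, const char *tmp, int len, int width)
  {
    if (len > remains)                  /* BUG: must compare the padded    */
      return -1;                        /* width, not the raw text length  */

    for (int i = len; i < width; i++)
      *pos++ = ' ';                     /* padding alone may already run   */
    memcpy(pos, tmp, len);              /* past the end of the buffer      */
    return (width > len) ? width : len; /* bytes actually written          */
  }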
Manual code examination showed that nothing could have ever triggered
this behavior. All older versions of BIRD, including BIRD 3 development
versions, are totally safe. This exact overflow was found while
implementing a new feature in later commits.
The strcmp() function is not guaranteed to return -1 or +1,
but any negative or positive value, when the input strings
differ. Fixed the false assumption, which triggered
a build bug on emulated arm64.
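The broken pattern and its fix, for illustration:

  #include <string.h>

  /* Wrong: assumes strcmp() returns exactly -1 when a sorts before b. */
  static int comes_first_wrong(const char *a, const char *b)
  { return strcmp(a, b) == -1; }

  /* Right: only the sign of the result is guaranteed. */
  static int comes_first_right(const char *a, const char *b)
  { return strcmp(a, b) < 0; }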
There were some nasty problems with deferred protocol state updates and
race conditions on BGP startup and shutdown, and also with referencing
the cached states.
Now it looks fixed.
For the upcoming rework of protocol state information propagation,
we need some more eattr types to be defined.
These types are probably not defined completely; before using
them for route attributes, you should check that they don't lack
some crucial methods.
The original algorithm assumed principles not consistent with the RFC
and could have led to false invalids.
Also added filter tests showing how the ASPA literals are used in
the static protocol.
The current implementation handles flowspec prefix length and offset
only in bytes, but RFC 8956 (Dissemination of Flow Specification Rules
for IPv6) Section 3.1 [1] and the example in Section 3.8.2 [2] state
that the pattern should begin right after the offset in *bits*.
For example, pattern "::1:1234:5678:9800:0/60-104" is currently
serialized as "02 68 3c 01 12 34 56 78 98", but it should shift its
pattern 4 more bits to the left: "02 68 3c 11 23 45 67 89 80".
This patch implements left/right shifting for the IPv6 type and uses it
to correct the behaviour. Test data are replaced with correct ones.
Minor changes and test vectors done by committer.
[1]: https://www.rfc-editor.org/rfc/rfc8956.html#section-3.1
[2]: https://www.rfc-editor.org/rfc/rfc8956.html#section-3.8.2
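A byte-wise left shift showing the needed transformation (a standalone
sketch; BIRD's ip6_addr actually consists of 32-bit words, so the real
code differs):

  #include <stdint.h>

  /* Shift a 16-byte IPv6 address left by n bits, 0 <= n < 8.
     (Shifts of 8 bits or more would first move whole bytes.) */
  static void ip6_shl(uint8_t a[16], unsigned n)
  {
    for (int i = 0; i < 16; i++)
      a[i] = (uint8_t)((a[i] << n) | ((i < 15) ? (a[i + 1] >> (8 - n)) : 0));
  }

Applied to the example above, shifting the pattern bytes 01 12 34 56 78 98
left by 4 bits indeed yields 11 23 45 67 89 80.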
The period of recurrent timers was stored in a 32-bit field, despite
being a btime-compatible value in us. Therefore, it was limited to
~72 min, which was okay for most purposes, except configurable MRT
dump periods.
Thanks to Felix Friedlander for the bug report.
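The limit follows directly from the field width: 2^32 us ~ 4295 s
~ 71.6 min. The shape of the fix, sketched with a hypothetical struct
(btime is BIRD's 64-bit microsecond time type):

  #include <stdint.h>

  typedef int64_t btime;      /* microseconds in 64 bits */

  struct recurring_timer {    /* illustrative, not the real struct timer */
    btime expires;            /* next firing time */
    btime recurrent;          /* period; the old 32-bit field overflowed
                                 above 2^32 us ~ 72 minutes */
  };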