When a linpool is used to allocate a one-off large amount of memory, it
makes no sense to keep that memory inside the linpool for future use.
Unlike previous implementations, which free()d the memory directly, we
now use the page allocator, which has an internal cache that keeps the
released pages for us; subsequent allocations simply get these pages
back.
And even if the page cleanup routine kicks in in between, the pages are
only madvise()d, not munmap()ed, so the performance impact is negligible.
This may fix some memory usage peaks in extreme cases.
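
A minimal sketch of the idea, assuming a hypothetical page allocator;
the names (alloc_page, free_page, page_cleanup) and the fixed-size
cache are illustrative, not BIRD's actual implementation:

    #include <stddef.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096u
    #define CACHE_MAX 256

    static void *page_cache[CACHE_MAX];  /* released pages kept for reuse */
    static int cached;                   /* number of pages in the cache */

    void *alloc_page(void)
    {
      /* Prefer a cached page: it is still mapped, no syscall needed */
      if (cached)
        return page_cache[--cached];

      void *p = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return (p == MAP_FAILED) ? NULL : p;
    }

    void free_page(void *p)
    {
      /* Keep the page for future use instead of returning it to the OS */
      if (cached < CACHE_MAX)
        page_cache[cached++] = p;
      else
        munmap(p, PAGE_SIZE);
    }

    void page_cleanup(void)
    {
      /* Hand the page contents back to the kernel but keep the mappings;
       * the next alloc_page() reuses them without a fresh mmap() */
      for (int i = 0; i < cached; i++)
        madvise(page_cache[i], PAGE_SIZE, MADV_DONTNEED);
    }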
This feature is intended mostly for checking that BIRD's allocation
strategies don't consume too much memory. There are some cases where
withdrawing routes in a specific order leads to memory fragmentation,
and this output should give the user at least a notion of how much
memory is actually used for data storage and how much is "just
allocated" or used for overhead.
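
The split this output is based on can be sketched as follows; the
struct and the linpool chunk layout are assumptions for illustration,
not BIRD's actual definitions:

    #include <stddef.h>

    struct resmem {
      size_t effective;   /* memory actually used for data storage */
      size_t overhead;    /* "just allocated": headers, padding, free space */
    };

    /* Hypothetical linpool chunk layout, for illustration only */
    struct chunk {
      struct chunk *next;
      size_t size;        /* usable bytes in this chunk */
      size_t used;        /* bytes already handed out to callers */
    };

    /* Walk the chunks and split their memory into the two categories */
    struct resmem lp_memsize_sketch(struct chunk *first)
    {
      struct resmem sum = { 0, 0 };
      for (struct chunk *c = first; c; c = c->next)
      {
        sum.effective += c->used;
        sum.overhead += sizeof(struct chunk) + (c->size - c->used);
      }
      return sum;
    }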
Also raise the "system allocator overhead estimation" from 8 to 16
bytes; the real value is probably even higher. On the machines I could
reach, 16 was the local minimum observed even in the best scenarios.
I couldn't find any reasonable method to estimate this value when BIRD
starts up.
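
In the accounting sketched above, the estimate shows up as a
per-allocation constant charged to overhead for every block obtained
from the system malloc(); the macro and helper names here are
illustrative:

    #define ALLOC_OVERHEAD 16   /* raised from 8; the true cost is likely higher */

    /* Charge a malloc()'d block: the payload is effective memory, the
     * allocator's estimated bookkeeping is overhead */
    static inline void account_malloc_block(struct resmem *sum, size_t size)
    {
      sum->effective += size;
      sum->overhead += ALLOC_OVERHEAD;
    }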
This commit also fixes an inaccurate computation of memory overhead for
slabs, where the "system allocator overhead estimation" was improperly
added to the size of mmap-ed memory.
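
A sketch of the corrected slab accounting, reusing the resmem struct
from above with hypothetical slab fields; the bug was charging the
per-allocation estimate to slab heads even though they are whole
mmap-ed pages with no malloc() header:

    #define PAGE_SIZE 4096u

    /* Hypothetical slab layout, for illustration only */
    struct sl_head {
      struct sl_head *next;
      unsigned used;      /* objects allocated from this page */
    };

    struct slab {
      struct sl_head *heads;
      size_t obj_size;
    };

    struct resmem slab_memsize_sketch(struct slab *s)
    {
      struct resmem sum = { 0, 0 };
      for (struct sl_head *h = s->heads; h; h = h->next)
      {
        size_t used = h->used * s->obj_size;
        sum.effective += used;
        /* the rest of the page (including the head) is overhead;
         * the buggy version also added ALLOC_OVERHEAD here */
        sum.overhead += PAGE_SIZE - used;
      }
      return sum;
    }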
This also fixes a bug where timer->recurrent was not cleared in
tm_new(); the unexpected recurrence of the startup timer confused the
BGP state machine and caused a crash.
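
A simplified sketch of the fix; BIRD allocates timers from resource
pools, but a plain malloc()-based version shows the point:

    #include <stdlib.h>
    #include <string.h>

    /* Simplified timer; BIRD's real struct has more fields */
    typedef struct timer {
      void (*hook)(struct timer *);
      void *data;
      unsigned randomize;
      unsigned recurrent;   /* re-fire interval; 0 = one-shot */
    } timer;

    timer *tm_new(void)
    {
      timer *t = malloc(sizeof(timer));
      if (t)
        /* clear everything, so a stale recurrent value can't make a
         * one-shot timer (like BGP's startup timer) fire again */
        memset(t, 0, sizeof(*t));
      return t;
    }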
- cfg_strcpy() -> cfg_strdup()
- mempool -> linpool, mp_* -> lp_* [to avoid confusion with memblock, mb_*]
Anyway, it might be better to stop ranting about names and do some *real* work.