The resource dumping routines needed to be updated in v3 to use the new
API introduced in v2.
Conflicts:
filter/f-util.c
filter/filter.c
lib/birdlib.h
lib/event.c
lib/mempool.c
lib/resource.c
lib/resource.h
lib/slab.c
lib/timer.c
nest/config.Y
nest/iface.c
nest/iface.h
nest/locks.c
nest/neighbor.c
nest/proto.c
nest/route.h
nest/rt-attr.c
nest/rt-table.c
proto/bfd/bfd.c
proto/bmp/bmp.c
sysdep/unix/io.c
sysdep/unix/krt.c
sysdep/unix/main.c
sysdep/unix/unix.h
All the 'dump something' CLI commands now take a new mandatory
argument: the name of the file to dump the data into. This allows
for more flexible dumping even in production deployments where
debug output is off by default.
The dump commands are also now restricted (they weren't before)
to ensure that only the appropriate users can run these
time-consuming commands.
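For illustration, a session could look like this; the file paths are
just examples and the exact string quoting follows the regular CLI
syntax:

  bird> dump resources "/tmp/bird-resources.dump"
  bird> dump routes "/tmp/bird-routes.dump"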
The original algorithm suffered from an ABA race condition:
A: fp = page_stack
B: allocates the same page and writes some data into it
A: unsuspecting, loads (invalid) next = fp->next
B: finishes working with the page and returns it back to page_stack
A: compare-exchange page_stack: fp => next succeeds and writes garbage
to page_stack
Fixed this by using an implicit spinlock in the hot page allocator.
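The following is a minimal sketch of such an implicit spinlock using
C11 atomics; the names (page_stack, PAGE_STACK_LOCKED, pop_page,
push_page) are illustrative, not the actual BIRD code:

  #include <stdatomic.h>
  #include <stddef.h>

  struct free_page { struct free_page *next; };

  /* Sentinel value acting as the implicit spinlock. */
  #define PAGE_STACK_LOCKED ((struct free_page *) 1)

  static _Atomic(struct free_page *) page_stack;

  static struct free_page *pop_page(void)
  {
    /* Take the whole stack head and leave the sentinel behind;
     * a concurrent popper spins instead of reading a possibly
     * stale ->next pointer, which is what caused the ABA race. */
    struct free_page *fp;
    while ((fp = atomic_exchange_explicit(&page_stack, PAGE_STACK_LOCKED,
	    memory_order_acquire)) == PAGE_STACK_LOCKED)
      ;

    /* Unlock by publishing the new top. */
    atomic_store_explicit(&page_stack, fp ? fp->next : NULL,
	    memory_order_release);
    return fp;
  }

  static void push_page(struct free_page *fp)
  {
    struct free_page *top = atomic_load_explicit(&page_stack,
	    memory_order_relaxed);
    do {
      while (top == PAGE_STACK_LOCKED)	/* wait out a concurrent pop */
	top = atomic_load_explicit(&page_stack, memory_order_relaxed);
      fp->next = top;
    } while (!atomic_compare_exchange_weak_explicit(&page_stack, &top, fp,
	    memory_order_release, memory_order_relaxed));
  }

Pop is serialized by the sentinel while push stays lock-free, which is
enough: only the popping side ever dereferences ->next.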
When BIRD has no free memory mapped, it allocates several pages in
advance just to be sure that there is some memory available if needed.
This hysteresis tactic works quite well to reduce memory ping-pong
with the kernel.
Yet it had a subtle bug: the pre-allocation didn't take the memory
coldlist into account and therefore requested new pages from the
kernel even when other pages were available. This led to slow memory
bloat.
To demonstrate this behavior quickly enough to observe it, you may:
* temporarily set the values in sysdep/unix/alloc.c as follows to
exacerbate the issue:
#define KEEP_PAGES_MAIN_MAX 4096
#define KEEP_PAGES_MAIN_MIN 1000
#define CLEANUP_PAGES_BULK 4096
* create a config file with several millions of static routes
* periodically disable all static protocols and then reload config
* log memory consumption
This should give you a steady growth rate of about 16 kB (four 4K
pages) per cycle. If you don't set the values this high, the issue
develops much more slowly, yet after 14 days of running you are going
to see an OOM kill.
After this fix, pre-allocation uses the memory coldlist to get hot
pages, and the same test as described above yields perfectly stable,
constant memory consumption (after some initial wobbling).
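A sketch of the fixed pre-allocation loop; coldlist_pop(),
hotlist_push() and the counters are illustrative names, not the
actual BIRD code:

  #include <stddef.h>
  #include <sys/mman.h>

  extern void *coldlist_pop(void);	/* NULL if the coldlist is empty */
  extern void hotlist_push(void *page);
  extern size_t pages_kept, page_size;

  #define KEEP_PAGES_MAIN_MIN 32	/* arbitrary value for this sketch */

  static void prealloc_pages(void)
  {
    while (pages_kept < KEEP_PAGES_MAIN_MIN)
    {
      /* Warm up a page from the coldlist first ... */
      void *page = coldlist_pop();

      /* ... and only ask the kernel if there is none. */
      if (!page)
      {
	page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
	  return;
      }

      hotlist_push(page);
      pages_kept++;
    }
  }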
Thanks to NIX-CZ for reporting and helping to investigate this issue.
Thanks to Santiago for finding the cause in the code.
The usage pattern implemented in the allocator seems to be
incompatible with transparent huge pages: memory released using
madvise(MADV_DONTNEED) with regular page size and alignment does not
seem to trigger demotion of huge pages back to regular pages, even
when a significant number of pages is released. And even if demotion
is triggered when system memory is low, it still breaks memory
accounting.
Memory unmapping causes slow address space fragmentation, in extreme
cases leading to failure to allocate pages at all. We remove this
problem by keeping all the pages mapped to us, yet calling madvise()
to let the kernel dispose of them.
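On Linux this boils down to a single call; the helper below is a
minimal sketch and free_page() is an illustrative name:

  #include <stddef.h>
  #include <sys/mman.h>

  static void free_page(void *ptr, size_t page_size)
  {
    /* Keep the mapping so the address space stays contiguous,
     * but let the kernel reclaim the backing memory; the next
     * access to this range faults in a zeroed page. */
    madvise(ptr, page_size, MADV_DONTNEED);
  }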
This adds a little complexity and overhead, as we have to keep the
pointers to the free pages: to hold e.g. 1 GB worth of 4K pages with
8B pointers, we have to store 2 MB of data.
We can also quite simply allocate bigger blocks. These blocks,
however, need to be aligned to their size, which requires one mmap()
of twice the requested size followed by two munmap()s returning the
unaligned head and tail.
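A sketch of that over-map-and-trim trick, assuming the block size is
a power of two (the names are illustrative):

  #include <stdint.h>
  #include <sys/mman.h>

  static void *alloc_aligned_block(uintptr_t block_size)
  {
    /* Map twice the needed size so an aligned block must fit inside. */
    char *base = mmap(NULL, 2 * block_size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
      return NULL;

    uintptr_t addr = (uintptr_t) base;
    uintptr_t aligned = (addr + block_size - 1) & ~(block_size - 1);

    /* Return the unaligned head and tail back to the kernel. */
    if (aligned > addr)
      munmap(base, aligned - addr);
    if (aligned + block_size < addr + 2 * block_size)
      munmap((char *) aligned + block_size,
	     addr + 2 * block_size - (aligned + block_size));

    return (void *) aligned;
  }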
The user can specify -B <N> on startup, where <N> is an exponent of 2,
setting the block size to 2^N. On most systems N is 12; if you know,
however, that your configuration is going to eat gigabytes of RAM, you
are almost forced to raise the block size, as otherwise you may easily
run into memory fragmentation issues or have to raise your maximum
mapping count, e.g. "sysctl vm.max_map_count=(number)".
From now on, no auxiliary pointers are stored in the free slab nodes.
Those pointers used to cause strange debugging problems when a
use-after-free happened in slab-allocated structures, especially when
the structure's first member was a next pointer.
This also reduces the memory needed by one pointer per allocated
object. OTOH, we now rely on pages being aligned to a multiple of
their size, which is quite common anyway.
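With size-aligned pages, finding the slab head from any object pointer
is a single mask, as in this sketch (the page size and the sl_head
contents are illustrative):

  #include <stdint.h>

  #define PAGE_SIZE 4096u

  struct sl_head;	/* per-page slab metadata stored at the page start */

  static inline struct sl_head *sl_head_of(void *obj)
  {
    /* Pages are aligned to a multiple of their size, so masking
     * the low bits of an object address yields its page head. */
    return (struct sl_head *) ((uintptr_t) obj & ~(uintptr_t) (PAGE_SIZE - 1));
  }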