mirror of https://gitlab.nic.cz/labs/bird.git synced 2024-09-19 20:05:21 +00:00
Commit Graph

13 Commits

Author SHA1 Message Date
Maria Matejka
b95dc8f29f Expanded usage of stdbool.h to the whole BIRD 2024-03-25 09:39:58 +01:00
Ondrej Zajicek
804916daa9 Alloc: Minor cleanups
 - Fix THP disable on old systems
 - Failed syscalls should use die() instead of bug()
 - Our printf uses %ld for s64 instead of long
2023-01-18 13:40:21 +01:00
Maria Matejka
6bb992cb04 Merge branch 'master' of https://gitlab.nic.cz/labs/bird 2023-01-18 12:33:06 +01:00
Maria Matejka
973aa37e1e Fix memory pre-allocation
When BIRD has no free memory mapped, it allocates several pages in
advance just to be sure that there is some memory available if needed.
This hysteresis tactic works quite well to reduce memory ping-pong with
the kernel.

Yet it had a subtle bug: the pre-allocation didn't take the memory
coldlist into account, and therefore requested new pages from the
kernel even when other pages were already available. This led to slow
memory bloat.

To demonstrate this behavior quickly enough to observe it, you may:
  * temporarily set the values in sysdep/unix/alloc.c as follows to
    exacerbate the issue:
      #define KEEP_PAGES_MAIN_MAX    4096
      #define KEEP_PAGES_MAIN_MIN    1000
      #define CLEANUP_PAGES_BULK     4096
  * create a config file with several millions of static routes
  * periodically disable all static protocols and then reload config
  * log memory consumption

This should give you a steady growth rate of about 16 kB per cycle. If
you don't set the values this high, the issue shows up much more
slowly, yet after 14 days of running you are still going to see an OOM
kill.

After this fix, pre-allocation takes pages from the memory coldlist
first, and the same test as described above gives you a perfectly
stable, constant memory consumption (after some initial wobbling).

Thanks to NIX-CZ for reporting and helping to investigate this issue.
Thanks to Santiago for finding the cause in the code.
2023-01-18 09:39:45 +01:00
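
As an illustration of the fix above, here is a minimal sketch (my own;
names such as hot_pages, cold_pages, refill_hot_pages and the watermark
value are hypothetical, this is not BIRD's actual code): the
pre-allocation refill path drains the coldlist first and asks the
kernel for new pages only when both page lists are empty.

    #include <stddef.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096
    #define KEEP_PAGES_MAIN_MIN 16        /* hypothetical low watermark */

    struct page_stack { void *pages[4096]; size_t cnt; };

    static struct page_stack hot_pages;   /* immediately reusable pages */
    static struct page_stack cold_pages;  /* mappings kept after madvise() */

    /* Refill the hot list up to the low watermark. The fix: take pages
     * from the coldlist first; go to the kernel only when it is empty. */
    static void refill_hot_pages(void)
    {
      while (hot_pages.cnt < KEEP_PAGES_MAIN_MIN)
      {
        void *pg;

        if (cold_pages.cnt)
          pg = cold_pages.pages[--cold_pages.cnt];
        else
        {
          pg = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (pg == MAP_FAILED)
            return;
        }

        hot_pages.pages[hot_pages.cnt++] = pg;
      }
    }
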
Ondrej Zajicek
928a1cb034 Alloc: Disable transparent huge pages
The usage pattern implemented in the allocator seems to be incompatible
with transparent huge pages: memory released using madvise(MADV_DONTNEED)
with regular page size and alignment does not seem to trigger demotion
of huge pages back to regular pages, even when a significant number of
pages is released. And even if demotion is triggered when system memory
is low, it still breaks memory accounting.
2023-01-17 17:13:50 +01:00
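
A hedged sketch of one way to disable transparent huge pages for an
allocator's own mappings, as described above: ask the kernel for
MADV_NOHUGEPAGE right after mmap(). The block size and function name
are hypothetical; this shows the intent of the commit, not necessarily
BIRD's exact code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define BLOCK_SIZE (256 * 4096)   /* hypothetical allocation block */

    /* Map an anonymous block and ask the kernel not to back it with
     * transparent huge pages, so that later MADV_DONTNEED on single 4K
     * pages actually returns memory and keeps accounting sane. */
    static void *map_block_no_thp(void)
    {
      void *blk = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (blk == MAP_FAILED)
      {
        perror("mmap");
        exit(1);
      }

    #ifdef MADV_NOHUGEPAGE
      /* Not fatal if it fails: old kernels may lack this advice. */
      if (madvise(blk, BLOCK_SIZE, MADV_NOHUGEPAGE) < 0)
        perror("madvise(MADV_NOHUGEPAGE)");
    #endif

      return blk;
    }
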
Maria Matejka
57308fb277 Page allocator: Fixed minor bugs and added commentary 2022-11-03 12:38:57 +01:00
Maria Matejka
9d03c3f56c Memory pages are not munmapped, instead we just madvise()
Unmapping memory causes slow address space fragmentation, in extreme
cases leading to a failure to allocate pages at all. We remove this
problem by keeping all the pages allocated to us and instead calling
madvise() to let the kernel dispose of their contents.

This adds a little complexity and overhead, as we have to keep the
pointers to the free pages: to hold e.g. 1 GB of 4K pages with 8B
pointers, we have to store 2 MB of data.
2022-11-02 12:56:54 +01:00
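
A simplified sketch of the keep-and-madvise() scheme described above
(the names page_alloc, page_free and the capacity are mine, not BIRD's
API): freed pages stay mapped, their addresses go into a separate
pointer array, and the contents are handed back to the kernel with
MADV_DONTNEED; reuse just pops a pointer. Holding pointers for 1 GB of
4K pages indeed takes 262144 * 8 B = 2 MB.

    #include <stddef.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096
    #define MAX_KEPT  262144        /* 1 GB of 4K pages -> 2 MB of pointers */

    static void *kept[MAX_KEPT];    /* pointers live outside the pages */
    static size_t kept_cnt;

    /* Give the page's contents back to the kernel but keep the mapping,
     * avoiding the address space fragmentation caused by munmap(). */
    static void page_free(void *pg)
    {
      if (kept_cnt < MAX_KEPT)
      {
        madvise(pg, PAGE_SIZE, MADV_DONTNEED);
        kept[kept_cnt++] = pg;
      }
      else
        munmap(pg, PAGE_SIZE);      /* pointer storage full, really unmap */
    }

    /* Reuse a kept page if possible; otherwise map a new one. */
    static void *page_alloc(void)
    {
      if (kept_cnt)
        return kept[--kept_cnt];

      return mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }
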
Maria Matejka
4e60b3ee72 Fixed a static assert in page allocator 2022-03-09 13:28:03 +01:00
Maria Matejka
9e60a1fbc3 Fixed resource initialization in unit tests 2022-03-09 10:30:42 +01:00
Maria Matejka
c78247f9b9 Single-threaded version of sark-branch memory page management 2022-03-09 09:10:44 +01:00
Ondrej Zajicek (work)
2fc8b4c4ba Alloc: Use posix_memalign() instead of aligned_alloc()
For compatibility with older systems, use posix_memalign(). We can
switch to aligned_alloc() once we commit to C11 for multithreading.
2022-02-08 22:42:00 +01:00
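
For illustration, a small sketch of the substitution this commit
describes (the wrapper name is hypothetical): page-aligned allocation
through posix_memalign(), which older POSIX systems provide, with the
C11 aligned_alloc() call left as a comment for later.

    #include <stdlib.h>

    #define PAGE_SIZE 4096

    /* Allocate one page-aligned page. posix_memalign() predates C11 and
     * is available on older systems where aligned_alloc() is missing. */
    static void *page_alloc_aligned(void)
    {
      void *ptr;

      if (posix_memalign(&ptr, PAGE_SIZE, PAGE_SIZE) != 0)
        return NULL;

      return ptr;

      /* Once C11 is required anyway for multithreading:
       *   return aligned_alloc(PAGE_SIZE, PAGE_SIZE);
       */
    }
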
Maria Matejka
644e9ca94e Directly mapped pages are kept for future use if temporarily not needed 2021-11-24 19:42:52 +00:00
Maria Matejka
886dd92eee Slab: head now uses bitmask for used/free nodes info instead of lists
From now on, no auxiliary pointers are stored in the free slab nodes.
Those pointers led to strange debugging problems if a use-after-free
happened in slab-allocated structures, especially if the structure's
first member is a next pointer.

This also reduces the memory needed by one pointer per allocated object.
On the other hand, we now rely on pages being aligned to a multiple of
their size, which is quite common anyway.
2021-03-25 16:47:48 +01:00
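
A hedged sketch of the bitmap idea (hypothetical structure and sizes,
not BIRD's slab code): each slab head carries a bitmap of used nodes,
allocation scans the bitmap instead of following pointers stored inside
free objects, and freeing recovers the slab head by masking the node
address, which relies on slabs being aligned to a multiple of their
size.

    #include <stdint.h>
    #include <stddef.h>

    #define SLAB_SIZE 4096        /* hypothetical: one aligned page per slab */
    #define NODE_SIZE 64

    struct sl_head {
      uint32_t used[(SLAB_SIZE / NODE_SIZE + 31) / 32];  /* 1 bit per node */
    };

    #define NODES_PER_SLAB ((SLAB_SIZE - sizeof(struct sl_head)) / NODE_SIZE)

    /* Allocate a node by scanning the bitmap for a clear bit. */
    static void *sl_alloc_node(struct sl_head *h)
    {
      for (size_t i = 0; i < NODES_PER_SLAB; i++)
        if (!(h->used[i / 32] & (1u << (i % 32))))
        {
          h->used[i / 32] |= (1u << (i % 32));
          return (char *) h + sizeof(struct sl_head) + i * NODE_SIZE;
        }

      return NULL;                /* slab full */
    }

    /* Free a node; no pointer is ever written into the node itself. */
    static void sl_free_node(void *node)
    {
      struct sl_head *h =
        (struct sl_head *) ((uintptr_t) node & ~(uintptr_t) (SLAB_SIZE - 1));
      size_t i = ((char *) node - ((char *) h + sizeof(struct sl_head))) / NODE_SIZE;

      h->used[i / 32] &= ~(1u << (i % 32));
    }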