/*
 *	BIRD -- Routing Tables
 *
 *	(c) 1998--2000 Martin Mares <mj@ucw.cz>
 *	(c) 2019--2024 Maria Matejka <mq@jmq.cz>
 *
 *	Can be freely distributed and used under the terms of the GNU GPL.
 */

/**
 * DOC: Routing tables
 *
 * Routing tables are probably the most important structures BIRD uses. They
 * hold all the information about known networks, the associated routes and
 * their attributes.
 *
 * There are multiple routing tables (a primary one together with any
 * number of secondary ones if requested by the configuration). Each table
 * is basically a FIB containing entries describing the individual
 * destination networks. For each network (represented by structure &net),
 * there is a one-way linked list of route entries (&rte), the first entry
 * on the list being the best one (i.e., the one we currently use
 * for routing); the order of the other ones is undetermined.
 *
 * The &rte contains information about the route. There are net and src, which
 * together form a key identifying the route in a routing table. There is a
 * pointer to a &rta structure (see the route attribute module for a precise
 * explanation) holding the route attributes, which are primary data about the
 * route. There are several technical fields used by routing table code (route
 * id, REF_* flags). There is also the pflags field, holding protocol-specific
 * flags. They are not used by routing table code, but by protocol-specific
 * hooks. In contrast to route attributes, they are not primary data and their
 * validity is also limited to the routing table.
 *
 * There are several mechanisms that allow automatic update of routes in one
 * routing table (dst) as a result of changes in another routing table (src).
 * They handle issues of recursive next hop resolving, flowspec validation and
 * RPKI validation.
 *
 * The first such mechanism is handling of recursive next hops. A route in the
 * dst table has an indirect next hop address, which is resolved through a route
 * in the src table (which may also be the same table) to get an immediate next
 * hop. This is implemented using structure &hostcache attached to the src
 * table, which contains &hostentry structures for each tracked next hop
 * address. These structures are linked from recursive routes in dst tables,
 * possibly multiple routes sharing one hostentry (as many routes may have the
 * same indirect next hop). There is also a trie in the hostcache, which matches
 * all prefixes that may influence resolving of tracked next hops.
 *
 * When a best route changes in the src table, the hostcache is notified using
 * an auxiliary export request, which checks using the trie whether the
 * change is relevant and if it is, then it schedules asynchronous hostcache
 * recomputation. The recomputation is done by rt_update_hostcache() (called
 * as an event of the src table); it walks through all hostentries and resolves
 * them (by rt_update_hostentry()). It also updates the trie. If a change in
 * hostentry resolution was found, then it schedules asynchronous nexthop
 * recomputation of the associated dst table. That is done by rt_next_hop_update()
 * (called from rt_event() of the dst table); it iterates over all routes in the dst
 * table and re-examines their hostentries for changes. Note that in contrast to
 * hostcache update, next hop update can be interrupted by the main loop. These two
 * full-table walks (over hostcache and dst table) are necessary due to absence
 * of direct lookups (route -> affected nexthop, nexthop -> its route).
 *
 * The second mechanism is for flowspec validation, where validity of flowspec
 * routes depends on resolving their network prefixes in IP routing tables. This
 * is similar to the recursive next hop mechanism, but simpler as there are no
 * intermediate hostcache and hostentries (because flows are less likely to
 * share a common net prefix than routes sharing a common next hop). Every dst
 * table has its own export request in every src table. Each dst table has its
 * own trie of prefixes that may influence validation of flowspec routes in it
 * (flowspec_trie).
 *
 * When a best route changes in the src table, the notification mechanism is
 * invoked by the export request which checks its dst table's trie to see
 * whether the change is relevant, and if so, an asynchronous re-validation of
 * flowspec routes in the dst table is scheduled. That is also done by function
 * rt_next_hop_update(), like nexthop recomputation above. It iterates over all
 * flowspec routes and re-validates them. It also recalculates the trie.
 *
 * Note that in contrast to the hostcache update, here the trie is recalculated
 * during the rt_next_hop_update(), which may be interleaved with IP route
 * updates. The trie is flushed at the beginning of recalculation, which means
 * that such updates may use a partial trie to see if they are relevant. But it
 * works anyway! Either the affected flowspec route was already re-validated and
 * added to the trie, in which case the IP route change matches the trie and
 * triggers a next round of re-validation, or it was not yet re-validated and
 * added to the trie, but then it will be re-validated later in this round anyway.
 *
 * The third mechanism is used for RPKI re-validation of IP routes and it is the
 * simplest. It is also an auxiliary export request belonging to the
 * appropriate channel, triggering its reload/refeed timer after a settle time.
 */

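/*
 * A rough sketch of the relationships described above (added for orientation;
 * the names refer to the structures and fields used later in this file):
 *
 *   rtable --> routes[netindex] --> net --> rte_storage --> rte_storage --> ...
 *                                            (best route)   (other routes)
 *
 * Each &rte_storage wraps one &rte, which carries the (net, src) key, the
 * attribute set, the route id and the REF_* and pflags fields mentioned above.
 */
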
#undef LOCAL_DEBUG

#include "nest/bird.h"
#include "nest/route.h"
#include "nest/protocol.h"
#include "nest/iface.h"
#include "nest/mpls.h"
#include "lib/resource.h"
#include "lib/event.h"
#include "lib/timer.h"
#include "lib/string.h"
#include "conf/conf.h"
#include "filter/filter.h"
#include "filter/data.h"
#include "lib/hash.h"
#include "lib/alloca.h"
#include "lib/flowspec.h"
#include "lib/idm.h"
#include "lib/netindex_private.h"

#ifdef CONFIG_BGP
#include "proto/bgp/bgp.h"
#endif

#include <stdatomic.h>

pool *rt_table_pool;

list routing_tables;
list deleted_routing_tables;

netindex_hash *rt_global_netindex_hash;
#define RT_INITIAL_ROUTES_BLOCK_SIZE 128

struct rt_cork rt_cork;

/* Data structures for export journal */

static void rt_free_hostcache(struct rtable_private *tab);
static void rt_update_hostcache(void *tab);
static void rt_next_hop_update(void *_tab);
static void rt_nhu_uncork(void *_tab);
static inline void rt_next_hop_resolve_rte(rte *r);
static inline void rt_flowspec_resolve_rte(rte *r, struct channel *c);
static void rt_refresh_trace(struct rtable_private *tab, struct rt_import_hook *ih, const char *msg);
static void rt_kick_prune_timer(struct rtable_private *tab);
static void rt_prune_table(void *_tab);
static void rt_check_cork_low(struct rtable_private *tab);
static void rt_check_cork_high(struct rtable_private *tab);
static void rt_cork_release_hook(void *);
static void rt_shutdown(void *);
static void rt_delete(void *);

int rte_same(const rte *x, const rte *y);

static inline void rt_rte_trace_in(uint flag, struct rt_import_request *req, const rte *e, const char *msg);

const char *rt_import_state_name_array[TIS_MAX] = {
  [TIS_DOWN] = "DOWN",
  [TIS_UP] = "UP",
  [TIS_STOP] = "STOP",
  [TIS_FLUSHING] = "FLUSHING",
  [TIS_WAITING] = "WAITING",
  [TIS_CLEARED] = "CLEARED",
};

const char *rt_export_state_name_array[TES_MAX] = {
#define RT_EXPORT_STATES_ENUM_HELPER(p) [TES_##p] = #p,
  MACRO_FOREACH(RT_EXPORT_STATES_ENUM_HELPER, RT_EXPORT_STATES)
#undef RT_EXPORT_STATES_ENUM_HELPER
};

const char *rt_import_state_name(u8 state)
{
  if (state >= TIS_MAX)
    return "!! INVALID !!";
  else
    return rt_import_state_name_array[state];
}

const char *rt_export_state_name(enum rt_export_state state)
{
  ASSERT_DIE((state < TES_MAX) && (state >= 0));

  return rt_export_state_name_array[state];
}

static struct hostentry *rt_get_hostentry(struct rtable_private *tab, ip_addr a, ip_addr ll, rtable *dep);

static inline rtable *rt_priv_to_pub(struct rtable_private *tab) { return RT_PUB(tab); }
static inline rtable *rt_pub_to_pub(rtable *tab) { return tab; }
#define RT_ANY_TO_PUB(tab) _Generic((tab), rtable *: rt_pub_to_pub, struct rtable_private *: rt_priv_to_pub)((tab))

#define rt_trace(tab, level, fmt, args...) do {\
  rtable *t = RT_ANY_TO_PUB((tab)); \
  if (t->config->debug & (level)) \
    log(L_TRACE "%s: " fmt, t->name, ##args); \
} while (0)

#define req_trace(r, level, fmt, args...) do { \
  if (r->trace_routes & (level)) \
    log(L_TRACE "%s: " fmt, r->name, ##args); \
} while (0)

#define channel_trace(c, level, fmt, args...) do {\
  if ((c->debug & (level)) || (c->proto->debug & (level))) \
    log(L_TRACE "%s.%s: " fmt, c->proto->name, c->name, ##args);\
} while (0)

/*
 * Lockless table feeding helpers
 */
struct rtable_reading {
  rtable *t;
  struct rcu_unwinder *u;
};

#define RT_READ_ANCHORED(_o, _i, _u) \
  struct rtable_reading _s##_i = { .t = _o, .u = _u, }, *_i = &_s##_i;

#define RT_READ(_o, _i) RCU_ANCHOR(_u##_i); RT_READ_ANCHORED(_o, _i, _u##_i);

#define RT_READ_RETRY(tr) RCU_RETRY(tr->u)

#define RT_READ_LOCKED(_o, _i) \
  ASSERT_DIE(RT_IS_LOCKED(_o)); \
  struct rtable_reading _s##_i = { .t = RT_PUB(_o), .u = RCU_WONT_RETRY, }, *_i = &_s##_i;

#define RTE_IS_OBSOLETE(s) ((s)->rte.flags & REF_OBSOLETE)
#define RTE_OBSOLETE_CHECK(tr, _s) ({ \
    struct rte_storage *s = _s; \
    if (s && RTE_IS_OBSOLETE(s)) \
      RT_READ_RETRY(tr); \
    s; })

#define NET_READ_WALK_ROUTES(tr, n, ptr, r) \
  for (struct rte_storage *r, * _Atomic *ptr = &(n)->routes; \
      r = RTE_OBSOLETE_CHECK(tr, atomic_load_explicit(ptr, memory_order_acquire)); \
      ptr = &r->next)

#define NET_READ_BEST_ROUTE(tr, n) RTE_OBSOLETE_CHECK(tr, atomic_load_explicit(&n->routes, memory_order_acquire))

#define NET_WALK_ROUTES(priv, n, ptr, r) \
  for (struct rte_storage *r = ({ ASSERT_DIE(RT_IS_LOCKED(priv)); NULL; }), \
      * _Atomic *ptr = &(n)->routes; \
      r = atomic_load_explicit(ptr, memory_order_acquire); \
      ptr = &r->next)
#define NET_BEST_ROUTE(priv, n) ({ ASSERT_DIE(RT_IS_LOCKED(priv)); atomic_load_explicit(&n->routes, memory_order_acquire); })

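/*
 * A note on how the helpers above are used throughout the rest of this file:
 * a reader opens a read context either with RT_READ() (anchored to an RCU
 * read section) or with RT_READ_LOCKED() (when the table is already locked),
 * looks up nets via net_find() / net_find_valid() below, and walks route
 * lists with NET_READ_WALK_ROUTES() or NET_READ_BEST_ROUTE(). Whenever a
 * route flagged REF_OBSOLETE is seen, RT_READ_RETRY() restarts the whole read
 * section, so lockless readers never act on half-removed routes.
 */
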
static inline net *
net_find(struct rtable_reading *tr, const struct netindex *i)
{
  u32 rbs = atomic_load_explicit(&tr->t->routes_block_size, memory_order_acquire);
  if (i->index >= rbs)
    return NULL;

  net *routes = atomic_load_explicit(&tr->t->routes, memory_order_acquire);
  return &(routes[i->index]);
}

static inline net *
net_find_valid(struct rtable_reading *tr, struct netindex_hash_private *nh, const net_addr *addr)
{
  struct netindex *i = net_find_index_fragile(nh, addr);
  if (!i)
    return NULL;

  net *n = net_find(tr, i);
  if (!n)
    return NULL;

  struct rte_storage *s = NET_READ_BEST_ROUTE(tr, n);

  if (!s || !rte_is_valid(&s->rte))
    return NULL;

  return n;
}

static inline void *
net_route_ip6_sadr_trie(struct rtable_reading *tr, struct netindex_hash_private *nh, const net_addr_ip6_sadr *n0)
{
  u32 bs = atomic_load_explicit(&tr->t->routes_block_size, memory_order_acquire);
  const struct f_trie *trie = atomic_load_explicit(&tr->t->trie, memory_order_acquire);
  TRIE_WALK_TO_ROOT_IP6(trie, (const net_addr_ip6 *) n0, px)
  {
    net_addr_ip6_sadr n = NET_ADDR_IP6_SADR(px.prefix, px.pxlen, n0->src_prefix, n0->src_pxlen);
    net *best = NULL;
    int best_pxlen = 0;

    /* We need to do dst first matching. Since sadr addresses are hashed on dst
       prefix only, find the hash table chain and go through it to find the
       match with the longest matching src prefix. */
    for (struct netindex *i = net_find_index_fragile_chain(nh, (net_addr *) &n); i; i = i->next)
    {
      net_addr_ip6_sadr *a = (void *) i->addr;

      if ((i->index < bs) &&
          net_equal_dst_ip6_sadr(&n, a) &&
          net_in_net_src_ip6_sadr(&n, a) &&
          (a->src_pxlen >= best_pxlen))
      {
        net *cur = &(atomic_load_explicit(&tr->t->routes, memory_order_acquire)[i->index]);
        struct rte_storage *s = NET_READ_BEST_ROUTE(tr, cur);

        if (s && rte_is_valid(&s->rte))
        {
          best = cur;
          best_pxlen = a->src_pxlen;
        }
      }
    }

    if (best)
      return best;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline void *
net_route_ip6_sadr_fib(struct rtable_reading *tr, struct netindex_hash_private *nh, const net_addr_ip6_sadr *n0)
{
  u32 bs = atomic_load_explicit(&tr->t->routes_block_size, memory_order_acquire);

  net_addr_ip6_sadr n;
  net_copy_ip6_sadr(&n, n0);

  while (1)
  {
    net *best = NULL;
    int best_pxlen = 0;

    /* We need to do dst first matching. Since sadr addresses are hashed on dst
       prefix only, find the hash table chain and go through it to find the
       match with the longest matching src prefix. */
    for (struct netindex *i = net_find_index_fragile_chain(nh, (net_addr *) &n); i; i = i->next)
    {
      net_addr_ip6_sadr *a = (void *) i->addr;

      if ((i->index < bs) &&
          net_equal_dst_ip6_sadr(&n, a) &&
          net_in_net_src_ip6_sadr(&n, a) &&
          (a->src_pxlen >= best_pxlen))
      {
        net *cur = &(atomic_load_explicit(&tr->t->routes, memory_order_acquire)[i->index]);
        struct rte_storage *s = NET_READ_BEST_ROUTE(tr, cur);
        if (RTE_IS_OBSOLETE(s))
          RT_READ_RETRY(tr);

        if (s && rte_is_valid(&s->rte))
        {
          best = cur;
          best_pxlen = a->src_pxlen;
        }
      }
    }

    if (best)
      return best;

    if (!n.dst_pxlen)
      break;

    n.dst_pxlen--;
    ip6_clrbit(&n.dst_prefix, n.dst_pxlen);
  }

  return NULL;
}

static net *
net_route(struct rtable_reading *tr, const net_addr *n)
{
  ASSERT(tr->t->addr_type == n->type);
  SKIP_BACK_DECLARE(net_addr_union, nu, n, n);

  const struct f_trie *trie = atomic_load_explicit(&tr->t->trie, memory_order_acquire);

  NH_LOCK(tr->t->netindex, nh);

#define TW(ipv, what) \
  TRIE_WALK_TO_ROOT_IP##ipv(trie, &(nu->ip##ipv), var) \
  { what(ipv, var); } \
  TRIE_WALK_TO_ROOT_END; return NULL;

#define FW(ipv, what) do { \
  net_addr_union nuc; net_copy(&nuc.n, n); \
  while (1) { \
    what(ipv, nuc.ip##ipv); if (!nuc.n.pxlen) return NULL; \
    nuc.n.pxlen--; ip##ipv##_clrbit(&nuc.ip##ipv.prefix, nuc.ip##ipv.pxlen); \
  } \
} while(0); return NULL;

#define FVR_IP(ipv, var) \
    net *r; if (r = net_find_valid(tr, nh, (net_addr *) &var)) return r;

#define FVR_VPN(ipv, var) \
    net_addr_vpn##ipv _var0 = NET_ADDR_VPN##ipv(var.prefix, var.pxlen, nu->vpn##ipv.rd); FVR_IP(ipv, _var0);

  if (trie)
    switch (n->type) {
      case NET_IP4: TW(4, FVR_IP);
      case NET_VPN4: TW(4, FVR_VPN);
      case NET_IP6: TW(6, FVR_IP);
      case NET_VPN6: TW(6, FVR_VPN);

      case NET_IP6_SADR:
        return net_route_ip6_sadr_trie(tr, nh, (net_addr_ip6_sadr *) n);
      default:
        return NULL;
    }
  else
    switch (n->type) {
      case NET_IP4: FW(4, FVR_IP);
      case NET_VPN4: FW(4, FVR_VPN);
      case NET_IP6: FW(6, FVR_IP);
      case NET_VPN6: FW(6, FVR_VPN);

      case NET_IP6_SADR:
        return net_route_ip6_sadr_fib (tr, nh, (net_addr_ip6_sadr *) n);
      default:
        return NULL;
    }

#undef TW
#undef FW
#undef FVR_IP
#undef FVR_VPN
}

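/*
 * Note on net_route() above: longest-prefix matching is done either by walking
 * the table's prefix trie towards the root (when the trie is available, the TW
 * variants), or, without a trie, by repeatedly clearing the lowest prefix bit
 * and retrying the lookup with the shortened prefix (the FW variants).
 */
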
/**
 * roa_check - check validity of route origination in a ROA table
 * @tab: ROA table
 * @n: network prefix to check
 * @asn: AS number of network prefix
 *
 * Implements RFC 6483 route validation for the given network prefix. The
 * procedure is to find all candidate ROAs - ROAs whose prefixes cover the given
 * network prefix. If there is no candidate ROA, return ROA_UNKNOWN. If there is
 * a candidate ROA with matching ASN and maxlen field greater than or equal to
 * the given prefix length, return ROA_VALID. Otherwise, return ROA_INVALID. If
 * the caller cannot determine the origin AS, 0 can be used (in that case
 * ROA_VALID cannot happen). Table @tab must have type NET_ROA4 or NET_ROA6,
 * network @n must have type NET_IP4 or NET_IP6, respectively.
 */
int
net_roa_check(rtable *tp, const net_addr *n, u32 asn)
{
  SKIP_BACK_DECLARE(net_addr_union, nu, n, n);
  int anything = 0;

#define TW(ipv) do { \
  TRIE_WALK_TO_ROOT_IP##ipv(trie, &(nu->ip##ipv), var) { \
    net_addr_roa##ipv roa0 = NET_ADDR_ROA##ipv(var.prefix, var.pxlen, 0, 0); \
    ROA_PARTIAL_CHECK(ipv); \
  } TRIE_WALK_TO_ROOT_END; \
  return anything ? ROA_INVALID : ROA_UNKNOWN; \
} while (0)

#define FW(ipv) do { \
  net_addr_roa##ipv roa0 = NET_ADDR_ROA##ipv(nu->ip##ipv.prefix, nu->ip##ipv.pxlen, 0, 0);\
  while (1) { \
    ROA_PARTIAL_CHECK(ipv); \
    if (roa0.pxlen == 0) break; \
    roa0.pxlen--; ip##ipv##_clrbit(&roa0.prefix, roa0.pxlen); \
  } \
} while (0)

#define ROA_PARTIAL_CHECK(ipv) do { \
  for (struct netindex *i = net_find_index_fragile_chain(nh, (net_addr *) &roa0); i; i = i->next)\
  { \
    if (i->index >= bs) continue; \
    net_addr_roa##ipv *roa = (void *) i->addr; \
    if (!net_equal_prefix_roa##ipv(roa, &roa0)) continue; \
    net *r = &(atomic_load_explicit(&tr->t->routes, memory_order_acquire)[i->index]); \
    struct rte_storage *s = NET_READ_BEST_ROUTE(tr, r); \
    if (s && rte_is_valid(&s->rte)) \
    { \
      anything = 1; \
      if (asn && (roa->asn == asn) && (roa->max_pxlen >= nu->ip##ipv.pxlen)) \
        return ROA_VALID; \
    } \
  } \
} while (0)

  RT_READ(tp, tr);

  {
    u32 bs = atomic_load_explicit(&tr->t->routes_block_size, memory_order_acquire);
    const struct f_trie *trie = atomic_load_explicit(&tr->t->trie, memory_order_acquire);

    NH_LOCK(tr->t->netindex, nh);
    if ((tr->t->addr_type == NET_ROA4) && (n->type == NET_IP4))
    {
      if (trie) TW(4);
      else FW(4);
    }
    else if ((tr->t->addr_type == NET_ROA6) && (n->type == NET_IP6))
    {
      if (trie) TW(6);
      else FW(6);
    }
  }

  return anything ? ROA_INVALID : ROA_UNKNOWN;
#undef ROA_PARTIAL_CHECK
#undef TW
#undef FW
}

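/*
 * A worked example of the check above (hypothetical data, for illustration
 * only): with a ROA table containing { 192.0.2.0/24 max 24, AS64500 } and
 * { 192.0.2.0/22 max 24, AS64501 }, net_roa_check() on 192.0.2.0/24 returns
 * ROA_VALID for asn 64500 or 64501, and ROA_INVALID for any other asn
 * (including 0, since candidate ROAs exist but none can match). For
 * 198.51.100.0/24, which no ROA covers, it returns ROA_UNKNOWN.
 */
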
struct rte_storage *
rte_store(const rte *r, struct netindex *i, struct rtable_private *tab)
{
  struct rte_storage *s = sl_alloc(tab->rte_slab);
  struct rte *e = RTES_WRITE(s);

  *e = *r;
  e->net = i->addr;
  net_lock_index(tab->netindex, i);

  rt_lock_source(e->src);

  e->attrs = ea_lookup(e->attrs, BIT32_ALL(EALS_PREIMPORT, EALS_FILTERED), EALS_IN_TABLE);

#if 0
  debug("(store) %N ", i->addr);
  ea_dump(e->attrs);
  debug("\n");
#endif

  return s;
}

/**
 * rte_free - delete a &rte
 * @e: &struct rte_storage to be deleted
 * @tab: the table which the rte belongs to
 *
 * rte_free() deletes the given &rte from the routing table it's linked to.
 */

static void
rte_free(struct rte_storage *e, struct rtable_private *tab)
{
  /* Wait for very slow table readers */
  synchronize_rcu();

  rt_rte_trace_in(D_ROUTES, e->rte.sender->req, &e->rte, "freeing");

  struct netindex *i = RTE_GET_NETINDEX(&e->rte);
  net_unlock_index(tab->netindex, i);

  rt_unlock_source(e->rte.src);

  ea_free(e->rte.attrs);
  sl_free(e);
}

static int				/* Actually better or at least as good as */
rte_better(const rte *new, const rte *old)
{
  int (*better)(const rte *, const rte *);

  if (!rte_is_valid(old))
    return 1;
  if (!rte_is_valid(new))
    return 0;

  u32 np = rt_get_preference(new);
  u32 op = rt_get_preference(old);

  if (np > op)
    return 1;
  if (np < op)
    return 0;
  if (new->src->owner->class != old->src->owner->class)
  {
    /*
     *  If the user has configured protocol preferences, so that two different protocols
     *  have the same preference, try to break the tie by comparing addresses. Not too
     *  useful, but keeps the ordering of routes unambiguous.
     */
    return new->src->owner->class > old->src->owner->class;
  }
  if (better = new->src->owner->class->rte_better)
    return better(new, old);
  return 0;
}

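/*
 * Example of the comparison order above (with BIRD's default preferences,
 * mentioned here only for illustration): a static route (preference 200)
 * wins over a BGP route (preference 100) before any protocol-specific logic
 * runs, while two BGP routes with equal preference fall through to the
 * protocol's own rte_better hook for tie-breaking.
 */
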
static int
rte_mergable(const rte *pri, const rte *sec)
{
  int (*mergable)(const rte *, const rte *);

  if (!rte_is_valid(pri) || !rte_is_valid(sec))
    return 0;

  if (rt_get_preference(pri) != rt_get_preference(sec))
    return 0;

  if (pri->src->owner->class != sec->src->owner->class)
    return 0;

  if (mergable = pri->src->owner->class->rte_mergable)
    return mergable(pri, sec);

  return 0;
}

static void
rte_trace(const char *name, const rte *e, int dir, const char *msg)
{
  log(L_TRACE "%s %c %s %N ptr %p (%u) src %luL %uG %uS id %u %s",
      name, dir, msg, e->net, e, NET_TO_INDEX(e->net)->index,
      e->src->private_id, e->src->global_id, e->stale_cycle, e->id,
      rta_dest_name(rte_dest(e)));
}

static inline void
channel_rte_trace_in(uint flag, struct channel *c, const rte *e, const char *msg)
{
  if ((c->debug & flag) || (c->proto->debug & flag))
    log(L_TRACE "%s > %s %N ptr %p (-) src %luL %uG %uS id %u %s",
        c->in_req.name, msg, e->net, e,
        e->src->private_id, e->src->global_id, e->stale_cycle, e->id,
        rta_dest_name(rte_dest(e)));
}

static inline void
channel_rte_trace_out(uint flag, struct channel *c, const rte *e, const char *msg)
{
  if ((c->debug & flag) || (c->proto->debug & flag))
    rte_trace(c->out_req.name, e, '<', msg);
}

static inline void
rt_rte_trace_in(uint flag, struct rt_import_request *req, const rte *e, const char *msg)
{
  if (req->trace_routes & flag)
    rte_trace(req->name, e, '>', msg);
}

#if 0
// seems to be unused at all
static inline void
rt_rte_trace_out(uint flag, struct rt_export_request *req, const rte *e, const char *msg)
{
  if (req->trace_routes & flag)
    rte_trace(req->name, e, '<', msg);
}
#endif

static uint
rte_feed_count(struct rtable_reading *tr, net *n)
{
  uint count = 0;
  NET_READ_WALK_ROUTES(tr, n, ep, e)
    count++;

  return count;
}

#if 0
static void
rte_feed_obtain(struct rtable_reading *tr, net *n, const rte **feed, uint count)
{
  uint i = 0;
  NET_READ_WALK_ROUTES(tr, n, ep, e)
  {
    if (i >= count)
      RT_READ_RETRY(tr);

    feed[i++] = &e->rte;
  }

  if (i != count)
    RT_READ_RETRY(tr);
}
#endif

static void
rte_feed_obtain_copy(struct rtable_reading *tr, net *n, rte *feed, uint count)
{
  uint i = 0;
  NET_READ_WALK_ROUTES(tr, n, ep, e)
  {
    if (i >= count)
      RT_READ_RETRY(tr);

    feed[i++] = e->rte;
    ea_free_later(ea_ref(e->rte.attrs));
  }

  if (i != count)
    RT_READ_RETRY(tr);
}

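/*
 * The feed helpers above take a consistent snapshot of a net's route list
 * under the lockless read context: if the list changes underneath them (the
 * route count no longer matches, or an obsolete route is hit inside the walk
 * macro), the whole read section is restarted via RT_READ_RETRY().
 */
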
static rte *
export_filter(struct channel *c, rte *rt, int silent)
{
  struct proto *p = c->proto;
  const struct filter *filter = c->out_filter;
  struct channel_export_stats *stats = &c->export_stats;

  /* Do nothing if we have already rejected the route */
  if (silent && bmap_test(&c->export_rejected_map, rt->id))
    goto reject_noset;

  int v = p->preexport ? p->preexport(c, rt) : 0;
  if (v < 0)
  {
    if (silent)
      goto reject_noset;

    stats->updates_rejected++;
    if (v == RIC_REJECT)
      channel_rte_trace_out(D_FILTERS, c, rt, "rejected by protocol");
    goto reject;
  }
  if (v > 0)
  {
    if (!silent)
      channel_rte_trace_out(D_FILTERS, c, rt, "forced accept by protocol");
    goto accept;
  }

  v = filter && ((filter == FILTER_REJECT) ||
		 (f_run(filter, rt,
			(silent ? FF_SILENT : 0)) > F_ACCEPT));
  if (v)
  {
    if (silent)
      goto reject;

    stats->updates_filtered++;
    channel_rte_trace_out(D_FILTERS, c, rt, "filtered out");
    goto reject;
  }

 accept:
  /* We have accepted the route */
  bmap_clear(&c->export_rejected_map, rt->id);
  return rt;

 reject:
  /* We have rejected the route by filter */
  bmap_set(&c->export_rejected_map, rt->id);

 reject_noset:
  /* Discard temporary rte */
  return NULL;
}

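/*
 * Bookkeeping note for the export path below: export_rejected_map remembers
 * route ids rejected by export_filter() so that later silent re-runs (e.g.
 * from rt_export_merged()) can skip the filters entirely, while
 * export_accepted_map tracks ids that were actually pushed to the protocol,
 * so that withdrawals of never-exported routes can be suppressed.
 */
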
static void
do_rt_notify(struct channel *c, const net_addr *net, rte *new, const rte *old)
{
  struct proto *p = c->proto;
  struct channel_export_stats *stats = &c->export_stats;

  ASSERT_DIE(old || new);

  if (!old && new)
    if (CHANNEL_LIMIT_PUSH(c, OUT))
    {
      stats->updates_rejected++;
      channel_rte_trace_out(D_FILTERS, c, new, "rejected [limit]");
      return;
    }

  if (!new && old)
    CHANNEL_LIMIT_POP(c, OUT);

  if (new)
    stats->updates_accepted++;
  else
    stats->withdraws_accepted++;

  if (old)
    bmap_clear(&c->export_accepted_map, old->id);

  if (new)
    bmap_set(&c->export_accepted_map, new->id);

  if (new && old)
    channel_rte_trace_out(D_ROUTES, c, new, "replaced");
  else if (new)
    channel_rte_trace_out(D_ROUTES, c, new, "added");
  else if (old)
    channel_rte_trace_out(D_ROUTES, c, old, "removed");

  p->rt_notify(p, c, net, new, old);
}

static void
rt_notify_basic(struct channel *c, const rte *new, const rte *old)
{
  const rte *trte = new ?: old;

  /* Ignore invalid routes */
  if (!rte_is_valid(new))
    new = NULL;

  if (!rte_is_valid(old))
    old = NULL;

  if (!new && !old)
  {
    channel_rte_trace_out(D_ROUTES, c, trte, "idempotent withdraw (filtered on import)");
    return;
  }

  /* If this is a refeed, we may need to copy the new route to the old one */
  if (!old && bmap_test(&c->export_accepted_map, new->id))
  {
    ASSERT_DIE(rt_export_get_state(&c->out_req) == TES_PARTIAL);
    old = new;
  }

  /* Run the filters, actually */
  rte n0, *np = NULL;
  if (new)
  {
    n0 = *new;
    np = export_filter(c, &n0, 0);
  }

  /* Have we exported the old route? */
  if (old && !bmap_test(&c->export_accepted_map, old->id))
    old = NULL;

  /* Withdraw to withdraw. */
  if (!np && !old)
  {
    channel_rte_trace_out(D_ROUTES, c, trte, "idempotent withdraw (filtered on export)");
    return;
  }

  /* OK, notify. */
  do_rt_notify(c, np ? np->net : old->net, np, old);
}

static void
rt_notify_accepted(struct channel *c, const struct rt_export_feed *feed)
{
  rte *old_best = NULL, *new_best = NULL;
  _Bool feeding = rt_net_is_feeding(&c->out_req, feed->ni->addr);
  _Bool idempotent = 0;

  for (uint i = 0; i < feed->count_routes; i++)
  {
    rte *r = &feed->block[i];

    /* Previously exported */
    if (!old_best && bmap_test(&c->export_accepted_map, r->id))
    {
      old_best = r;

      /* Is still the best and need not be re-fed anyway */
      if (!new_best && !feeding)
      {
        idempotent = 1;
        new_best = r;
      }
    }

    /* Unflag obsolete routes */
    if (r->flags & REF_OBSOLETE)
      bmap_clear(&c->export_rejected_map, r->id);

    /* Mark invalid as rejected */
    else if (!rte_is_valid(r))
      bmap_set(&c->export_rejected_map, r->id);

    /* Already rejected */
    else if (!feeding && bmap_test(&c->export_rejected_map, r->id))
      ;

    /* No new best route yet and this is a valid candidate */
    else if (!new_best)
    {
      /* This branch should not be executed if this route is old best */
      ASSERT_DIE(r != old_best);

      /* Have no new best route yet, try this route not seen before */
      new_best = export_filter(c, r, 0);
      DBG("rt_notify_accepted: checking route id %u: %s\n", r->id, new_best ? "ok" : "no");
    }
  }

  /* Nothing to export */
  if (!idempotent && (new_best || old_best))
    do_rt_notify(c, feed->ni->addr, new_best, old_best);
  else
    DBG("rt_notify_accepted: nothing to export\n");
}

void
channel_notify_accepted(void *_channel)
{
  struct channel *c = _channel;

  RT_EXPORT_WALK(&c->out_req, u)
  {
    switch (u->kind)
    {
      case RT_EXPORT_STOP:
        bug("Main table export stopped");

      case RT_EXPORT_FEED:
        if (u->feed->count_routes)
          rt_notify_accepted(c, u->feed);
        break;

      case RT_EXPORT_UPDATE:
        {
          struct rt_export_feed *f = rt_net_feed(c->table, u->update->new ? u->update->new->net : u->update->old->net, SKIP_BACK(struct rt_pending_export, it, u->update));
          rt_notify_accepted(c, f);
          for (uint i=0; i<f->count_exports; i++)
            rt_export_processed(&c->out_req, f->exports[i]);
          break;
        }
    }

    MAYBE_DEFER_TASK(c->out_req.r.target, c->out_req.r.event,
        "export to %s.%s (secondary)", c->proto->name, c->name);
  }
}

rte *
rt_export_merged(struct channel *c, const struct rt_export_feed *feed, linpool *pool, int silent)
{
  _Bool feeding = !silent && rt_net_is_feeding(&c->out_req, feed->ni->addr);

  // struct proto *p = c->proto;
  struct nexthop_adata *nhs = NULL;
  rte *best0 = &feed->block[0];
  rte *best = NULL;

  /* First route is obsolete */
  if (best0->flags & REF_OBSOLETE)
    return NULL;

  /* First route is invalid */
  if (!rte_is_valid(best0))
    return NULL;

  /* Already rejected, no need to re-run the filter */
  if (!feeding && bmap_test(&c->export_rejected_map, best0->id))
    return NULL;

  best = export_filter(c, best0, silent);

  /* Best route doesn't pass the filter */
  if (!best)
    return NULL;

  /* Unreachable routes can't be merged */
  if (!rte_is_reachable(best))
    return best;

  for (uint i = 1; i < feed->count_routes; i++)
  {
    rte *r = &feed->block[i];

    /* Obsolete routes can't be merged */
    if (r->flags & REF_OBSOLETE)
      break;

    /* Failed to pass mergable test */
    if (!rte_mergable(best0, r))
      continue;

    /* Already rejected by filters */
    if (!feeding && bmap_test(&c->export_rejected_map, r->id))
      continue;

    /* Running export filter on new or accepted route */
    rte *tmp = export_filter(c, r, silent);

    /* New route rejected or unreachable */
    if (!tmp || !rte_is_reachable(tmp))
      continue;

    /* Merging next hops */
    eattr *nhea = ea_find(tmp->attrs, &ea_gen_nexthop);
    ASSERT_DIE(nhea);

    if (nhs)
      nhs = nexthop_merge(nhs, (struct nexthop_adata *) nhea->u.ptr, c->merge_limit, pool);
    else
      nhs = (struct nexthop_adata *) nhea->u.ptr;
  }

  /* There is some nexthop, we shall set the merged version to the route */
  if (nhs)
  {
    eattr *nhea = ea_find(best->attrs, &ea_gen_nexthop);
    ASSERT_DIE(nhea);

    nhs = nexthop_merge(nhs, (struct nexthop_adata *) nhea->u.ptr, c->merge_limit, pool);

    ea_set_attr(&best->attrs,
        EA_LITERAL_DIRECT_ADATA(&ea_gen_nexthop, 0, &nhs->ad));
  }

  return best;
}

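/*
 * In short, rt_export_merged() above builds an ECMP-style route: it keeps the
 * best route's attributes and replaces its nexthop attribute with the merge
 * of the next hops of all mergeable, filter-accepted routes, bounded by the
 * channel's merge_limit.
 */
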
static void
rt_notify_merged(struct channel *c, const struct rt_export_feed *f)
{
  const rte *old_best = NULL;
  /* Find old best route */
  for (uint i = 0; i < f->count_routes; i++)
    if (bmap_test(&c->export_accepted_map, f->block[i].id))
    {
      old_best = &f->block[i];
      break;
    }

  /* Prepare new merged route */
  rte *new_merged = f->count_routes ? rt_export_merged(c, f, tmp_linpool, 0) : NULL;

  /* And notify the protocol */
  if (new_merged || old_best)
    do_rt_notify(c, f->ni->addr, new_merged, old_best);
}

void
channel_notify_merged(void *_channel)
{
  struct channel *c = _channel;

  RT_EXPORT_WALK(&c->out_req, u)
  {
    switch (u->kind)
    {
      case RT_EXPORT_STOP:
        bug("Main table export stopped");

      case RT_EXPORT_FEED:
        if (u->feed->count_routes)
          rt_notify_merged(c, u->feed);
        break;

      case RT_EXPORT_UPDATE:
        {
          struct rt_export_feed *f = rt_net_feed(c->table, u->update->new ? u->update->new->net : u->update->old->net, SKIP_BACK(struct rt_pending_export, it, u->update));
          rt_notify_merged(c, f);
          for (uint i=0; i<f->count_exports; i++)
            rt_export_processed(&c->out_req, f->exports[i]);
          break;
        }
    }

    MAYBE_DEFER_TASK(c->out_req.r.target, c->out_req.r.event,
        "export to %s.%s (merged)", c->proto->name, c->name);
  }
}

void
channel_notify_basic(void *_channel)
{
  struct channel *c = _channel;

  RT_EXPORT_WALK(&c->out_req, u)
  {
    switch (u->kind)
    {
      case RT_EXPORT_STOP:
        bug("Main table export stopped");

      case RT_EXPORT_FEED:
        {
          /* Find where the old route block begins */
          uint oldpos = 0;
          while ((oldpos < u->feed->count_routes) && !(u->feed->block[oldpos].flags & REF_OBSOLETE))
            oldpos++;

          /* Send updates one after another */
          for (uint i = 0; i < oldpos; i++)
          {
            rte *new = &u->feed->block[i];
            rte *old = NULL;
            for (uint o = oldpos; o < u->feed->count_routes; o++)
              if (new->src == u->feed->block[o].src)
              {
                old = &u->feed->block[o];
                break;
              }

            rt_notify_basic(c, new, old);

            /* Mark old processed */
            if (old)
              old->src = NULL;
          }

          /* Send withdraws */
          for (uint o = oldpos; o < u->feed->count_routes; o++)
            if (u->feed->block[o].src)
              rt_notify_basic(c, NULL, &u->feed->block[o]);
        }
        break;

      case RT_EXPORT_UPDATE:
        {
          const rte *new = u->update->new;
          const rte *old = u->update->old;
          struct rte_src *src = (c->ra_mode == RA_ANY) ? (new ? new->src : old->src) : NULL;

          /* Squashing subsequent updates */
          for (SKIP_BACK_DECLARE(const struct rt_pending_export, rpe, it, u->update);
              rpe = atomic_load_explicit(&rpe->next, memory_order_acquire) ;)
            /* Either new is the same as this update's "old". Then the squash
             * is obvious.
             *
             * Or we're squashing an update-from-nothing with a withdrawal,
             * and then either src is set because it must match (RA_ANY)
             * or it doesn't matter at all (RA_OPTIMAL).
             */
            if ((rpe->it.old == new) && (new || src && (src == rpe->it.new->src)))
            {
              new = rpe->it.new;
              rt_export_processed(&c->out_req, rpe->it.seq);
            }

          if (new && old && rte_same(new, old))
          {
            channel_rte_trace_out(D_ROUTES, c, new, "already exported");

            if ((new->id != old->id) && bmap_test(&c->export_accepted_map, old->id))
            {
              bmap_set(&c->export_accepted_map, new->id);
              bmap_clear(&c->export_accepted_map, old->id);
            }
          }
          else if (!new && !old)
            channel_rte_trace_out(D_ROUTES, c, u->update->new, "idempotent withdraw (squash)");
          else
            rt_notify_basic(c, new, old);

          break;
        }
    }

    MAYBE_DEFER_TASK(c->out_req.r.target, c->out_req.r.event,
        "export to %s.%s (regular)", c->proto->name, c->name);
  }
}

static void
rt_flush_best(struct rtable_private *tab, u64 upto)
{
  RT_EXPORT_WALK(&tab->best_req, u)
  {
    ASSERT_DIE(u->kind == RT_EXPORT_UPDATE);
    ASSERT_DIE(u->update->seq <= upto);
    if (u->update->seq == upto)
      return;
  }
}

static struct rt_pending_export *
rte_announce_to(struct rt_exporter *e, struct rt_net_pending_export *npe, const rte *new, const rte *old)
{
  if (new == old)
    return NULL;

  struct rt_pending_export rpe = {
    .it = {
      .new = new,
      .old = old,
    },
  };

  struct rt_export_item *rei = rt_exporter_push(e, &rpe.it);
  if (!rei)
    return NULL;

  SKIP_BACK_DECLARE(struct rt_pending_export, pushed, it, rei);

  struct rt_pending_export *last = atomic_load_explicit(&npe->last, memory_order_relaxed);
  if (last)
    ASSERT_DIE(atomic_exchange_explicit(&last->next, pushed, memory_order_acq_rel) == NULL);

  atomic_store_explicit(&npe->last, pushed, memory_order_release);
  if (!atomic_load_explicit(&npe->first, memory_order_relaxed))
    atomic_store_explicit(&npe->first, pushed, memory_order_release);

  return pushed;
}

static void
rte_announce(struct rtable_private *tab, const struct netindex *i UNUSED, net *net, const rte *new, const rte *old,
	     const rte *new_best, const rte *old_best)
{
  /* Update network count */
  tab->net_count += (!!new_best - !!old_best);

  int new_best_valid = rte_is_valid(new_best);
  int old_best_valid = rte_is_valid(old_best);

  if ((new == old) && (new_best == old_best))
    return;

  if (new_best_valid)
    new_best->sender->stats.pref++;
  if (old_best_valid)
    old_best->sender->stats.pref--;

  /* Try to push */
  struct rt_pending_export *best_rpe = NULL;
  struct rt_pending_export *all_rpe = rte_announce_to(&tab->export_all, &net->all, new, old);
  if (all_rpe)
  {
    /* Also best may have changed */
    best_rpe = rte_announce_to(&tab->export_best, &net->best, new_best, old_best);
    if (best_rpe)
      /* Announced best, need an anchor to all */
      best_rpe->seq_all = all_rpe->it.seq;
    else if (new_best != old_best)
      /* Would announce best but it's empty with no reader */
      rt_flush_best(tab, all_rpe->it.seq);

    rt_check_cork_high(tab);
  }
  else
  {
    /* Not announced anything, cleanup now */
    ASSERT_DIE(new_best == old_best);
    hmap_clear(&tab->id_map, old->id);
    rte_free(SKIP_BACK(struct rte_storage, rte, old), tab);
  }
}

static net *
|
|
|
|
rt_cleanup_find_net(struct rtable_private *tab, struct rt_pending_export *rpe)
|
|
|
|
{
|
|
|
|
/* Find the appropriate struct network */
|
|
|
|
ASSERT_DIE(rpe->it.new || rpe->it.old);
|
|
|
|
const net_addr *n = rpe->it.new ?
|
|
|
|
rpe->it.new->net :
|
|
|
|
rpe->it.old->net;
|
|
|
|
struct netindex *ni = NET_TO_INDEX(n);
|
|
|
|
ASSERT_DIE(ni->index < atomic_load_explicit(&tab->routes_block_size, memory_order_relaxed));
|
|
|
|
net *routes = atomic_load_explicit(&tab->routes, memory_order_relaxed);
|
|
|
|
return &routes[ni->index];
|
|
|
|
}
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
static _Bool
|
|
|
|
rt_cleanup_update_pointers(struct rt_net_pending_export *npe, struct rt_pending_export *rpe)
|
|
|
|
{
|
|
|
|
struct rt_pending_export *first = atomic_load_explicit(&npe->first, memory_order_relaxed);
|
|
|
|
struct rt_pending_export *last = atomic_load_explicit(&npe->last, memory_order_relaxed);
|
|
|
|
ASSERT_DIE(rpe == first);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
atomic_store_explicit(
|
|
|
|
&npe->first,
|
|
|
|
atomic_load_explicit(&rpe->next, memory_order_relaxed),
|
|
|
|
memory_order_release
|
|
|
|
);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (rpe != last)
|
|
|
|
return 0;
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
atomic_store_explicit(&npe->last, NULL, memory_order_release);
|
|
|
|
return 1;
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
static void
|
|
|
|
rt_cleanup_export_best(struct lfjour *j, struct lfjour_item *i)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rt_pending_export, rpe, it.li, i);
|
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, tab, export_best.journal, j);
|
|
|
|
rt_flush_best(tab, rpe->seq_all);
|
|
|
|
|
|
|
|
/* Find the appropriate struct network */
|
|
|
|
net *net = rt_cleanup_find_net(tab, rpe);
|
|
|
|
|
|
|
|
/* Update the first and last pointers */
|
|
|
|
rt_cleanup_update_pointers(&net->best, rpe);
|
|
|
|
|
|
|
|
/* Wait for readers before releasing */
|
|
|
|
synchronize_rcu();
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
static void
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_cleanup_export_all(struct lfjour *j, struct lfjour_item *i)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rt_pending_export, rpe, it.li, i);
|
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, tab, export_all.journal, j);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/* Find the appropriate struct network */
|
|
|
|
net *net = rt_cleanup_find_net(tab, rpe);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Update the first and last pointers */
|
2024-05-02 09:39:34 +00:00
|
|
|
_Bool is_last = rt_cleanup_update_pointers(&net->all, rpe);
|
2021-06-19 18:50:18 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/* Free the old route */
|
|
|
|
if (rpe->it.old)
|
2024-02-29 13:04:05 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
ASSERT_DIE(rpe->it.old->flags & REF_OBSOLETE);
|
|
|
|
hmap_clear(&tab->id_map, rpe->it.old->id);
|
|
|
|
rte_free(SKIP_BACK(struct rte_storage, rte, rpe->it.old), tab);
|
2024-02-29 13:04:05 +00:00
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (is_last)
|
2024-02-29 13:04:05 +00:00
|
|
|
tab->gc_counter++;
|
2024-05-02 09:39:34 +00:00
|
|
|
|
|
|
|
/* Wait for readers before releasing */
|
|
|
|
synchronize_rcu();
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_dump_best_req(struct rt_export_request *req)
|
|
|
|
{
|
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, tab, best_req, req);
|
|
|
|
debug(" Table %s best cleanup request (%p)\n", tab->name, req);
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
2022-06-27 17:53:06 +00:00
|
|
|
|
2022-09-26 10:09:14 +00:00
|
|
|
static void
|
2024-02-29 13:04:05 +00:00
|
|
|
rt_import_cleared(void *_ih)
|
2022-09-26 10:09:14 +00:00
|
|
|
{
|
2024-02-29 13:04:05 +00:00
|
|
|
struct rt_import_hook *hook = _ih;
|
2022-09-26 10:09:14 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
ASSERT_DIE(hook->import_state == TIS_CLEARED);
|
2022-09-12 08:25:14 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
/* Local copy of the otherwise freed callback data */
|
|
|
|
void (*stopped)(struct rt_import_request *) = hook->stopped;
|
|
|
|
struct rt_import_request *req = hook->req;
|
2022-09-01 09:17:35 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
/* Finally uncouple from the table */
|
|
|
|
RT_LOCKED(hook->table, tab)
|
|
|
|
{
|
|
|
|
req->hook = NULL;
|
2022-09-01 09:17:35 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
rt_trace(tab, D_EVENTS, "Hook %s stopped", req->name);
|
|
|
|
rem_node(&hook->n);
|
|
|
|
mb_free(hook);
|
|
|
|
rt_unlock_table(tab);
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2022-09-12 08:25:14 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
/* And call the callback */
|
|
|
|
stopped(req);
|
2022-09-01 09:17:35 +00:00
|
|
|
}
|
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
static void
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_cleanup_done_all(struct rt_exporter *e, u64 end_seq)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, tab, export_all, e);
|
2024-02-29 13:04:05 +00:00
|
|
|
ASSERT_DIE(DG_IS_LOCKED(tab->lock.rtable));
|
2022-06-27 17:53:06 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
if (~end_seq)
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_trace(tab, D_STATES, "Export all cleanup done up to seq %lu", end_seq);
|
2024-02-29 13:04:05 +00:00
|
|
|
else
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_trace(tab, D_STATES, "Export all cleanup complete");
|
2024-02-29 13:04:05 +00:00
|
|
|
|
|
|
|
rt_check_cork_low(tab);
|
|
|
|
|
|
|
|
struct rt_import_hook *ih; node *x, *n;
|
|
|
|
uint cleared_counter = 0;
|
|
|
|
if (tab->wait_counter)
|
|
|
|
WALK_LIST2_DELSAFE(ih, n, x, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_WAITING)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
2024-02-29 13:04:05 +00:00
|
|
|
if (end_seq >= ih->flush_seq)
|
|
|
|
{
|
|
|
|
ih->import_state = TIS_CLEARED;
|
|
|
|
tab->wait_counter--;
|
|
|
|
cleared_counter++;
|
2022-06-27 17:53:06 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
ih->cleanup_event = (event) {
|
|
|
|
.hook = rt_import_cleared,
|
|
|
|
.data = ih,
|
|
|
|
};
|
|
|
|
ev_send_loop(ih->req->loop, &ih->cleanup_event);
|
|
|
|
}
|
2024-05-02 09:39:34 +00:00
|
|
|
}
|
2024-02-29 13:04:05 +00:00
|
|
|
|
|
|
|
if (!EMPTY_LIST(tab->imports) &&
|
|
|
|
(tab->gc_counter >= tab->config->gc_threshold))
|
|
|
|
rt_kick_prune_timer(tab);
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
2022-06-27 17:53:06 +00:00
|
|
|
|
2021-09-27 11:04:16 +00:00
|
|
|
static void
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_cleanup_done_best(struct rt_exporter *e, u64 end_seq)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, tab, export_best, e);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (~end_seq)
|
|
|
|
rt_trace(tab, D_STATES, "Export best cleanup done up to seq %lu", end_seq);
|
|
|
|
else
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_trace(tab, D_STATES, "Export best cleanup complete, flushing regular");
|
|
|
|
rt_flush_best(tab, ~0ULL);
|
2019-09-09 00:55:32 +00:00
|
|
|
}
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
#define RT_EXPORT_BULK 1024
|
2021-09-27 11:04:16 +00:00
|
|
|
|
1999-03-17 15:01:07 +00:00
|
|
|
static inline int
|
2021-06-21 15:07:31 +00:00
|
|
|
rte_validate(struct channel *ch, rte *e)
|
1999-03-17 15:01:07 +00:00
|
|
|
{
|
|
|
|
int c;
|
2020-01-28 10:42:46 +00:00
|
|
|
const net_addr *n = e->net;
|
1999-03-17 15:01:07 +00:00
|
|
|
|
2023-11-23 22:33:44 +00:00
|
|
|
#define IGNORING(pre, post) do { \
|
|
|
|
log(L_WARN "%s.%s: Ignoring " pre " %N " post, ch->proto->name, ch->name, n); \
|
|
|
|
return 0; \
|
|
|
|
} while (0)
|
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
if (!net_validate(n))
|
2023-11-23 22:33:44 +00:00
|
|
|
IGNORING("bogus prefix", "");
|
2010-02-26 09:55:58 +00:00
|
|
|
|
2017-12-09 23:55:34 +00:00
|
|
|
/* FIXME: better handling different nettypes */
|
2020-01-28 10:42:46 +00:00
|
|
|
c = !net_is_flow(n) ?
|
|
|
|
net_classify(n): (IADDR_HOST | SCOPE_UNIVERSE);
|
2010-02-26 09:55:58 +00:00
|
|
|
if ((c < 0) || !(c & IADDR_HOST) || ((c & IADDR_SCOPE_MASK) <= SCOPE_LINK))
|
2023-11-23 22:33:44 +00:00
|
|
|
IGNORING("bogus route", "");
|
2010-02-26 09:55:58 +00:00
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if (net_type_match(n, NB_DEST))
|
2017-04-05 14:16:04 +00:00
|
|
|
{
|
2022-06-08 13:31:28 +00:00
|
|
|
eattr *nhea = ea_find(e->attrs, &ea_gen_nexthop);
|
2022-06-08 09:47:49 +00:00
|
|
|
int dest = nhea_dest(nhea);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if (dest == RTD_NONE)
|
2023-11-23 22:33:44 +00:00
|
|
|
IGNORING("route", "with no destination");
|
2017-04-05 14:16:04 +00:00
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if ((dest == RTD_UNICAST) &&
|
|
|
|
!nexthop_is_sorted((struct nexthop_adata *) nhea->u.ptr))
|
2023-11-23 22:33:44 +00:00
|
|
|
IGNORING("unsorted multipath route", "");
|
2022-05-05 16:08:37 +00:00
|
|
|
}
|
2022-06-08 13:31:28 +00:00
|
|
|
else if (ea_find(e->attrs, &ea_gen_nexthop))
|
2023-11-23 22:33:44 +00:00
|
|
|
IGNORING("route", "having a superfluous nexthop attribute");
|
2016-08-30 15:17:27 +00:00
|
|
|
|
1999-03-17 15:01:07 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2023-10-31 08:58:42 +00:00
|
|
|
int
|
2023-07-03 18:38:24 +00:00
|
|
|
rte_same(const rte *x, const rte *y)
|
2000-05-06 21:21:19 +00:00
|
|
|
{
|
2023-01-01 19:10:23 +00:00
|
|
|
/* rte.flags / rte.pflags are not checked, as they are internal to rtable */
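/* Stored attribute lists can be compared by pointer alone; the deep ea_same()
 * comparison below is consulted only when at least one side is not stored. */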
|
2000-05-06 21:21:19 +00:00
|
|
|
return
|
2023-11-01 09:58:44 +00:00
|
|
|
(x == y) || (
|
2023-10-31 08:58:42 +00:00
|
|
|
(x->attrs == y->attrs) ||
|
2024-04-04 10:01:35 +00:00
|
|
|
((!x->attrs->stored || !y->attrs->stored) && ea_same(x->attrs, y->attrs))
|
2023-10-31 08:58:42 +00:00
|
|
|
) &&
|
2020-04-10 15:08:29 +00:00
|
|
|
x->src == y->src &&
|
2019-02-22 01:16:39 +00:00
|
|
|
rte_is_filtered(x) == rte_is_filtered(y);
|
2000-05-06 21:21:19 +00:00
|
|
|
}
|
|
|
|
|
2023-07-03 18:38:24 +00:00
|
|
|
static inline int rte_is_ok(const rte *e) { return e && !rte_is_filtered(e); }
|
2012-11-16 12:29:16 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
static void
|
2023-12-08 15:13:14 +00:00
|
|
|
rte_recalculate(struct rtable_private *table, struct rt_import_hook *c, struct netindex *i, net *net, rte *new, struct rte_src *src)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2021-06-21 15:07:31 +00:00
|
|
|
struct rt_import_request *req = c->req;
|
|
|
|
struct rt_import_stats *stats = &c->stats;
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage *old_best_stored = NET_BEST_ROUTE(table, net);
|
2023-07-03 18:38:24 +00:00
|
|
|
const rte *old_best = old_best_stored ? &old_best_stored->rte : NULL;
|
1998-05-20 11:54:33 +00:00
|
|
|
|
2022-06-27 10:32:15 +00:00
|
|
|
/* If the new route is identical to the old one, we find the attributes in
|
|
|
|
* cache and clone these with no performance drop. OTOH, if we were to lookup
|
|
|
|
* the attributes, such a route definitely hasn't been anywhere yet,
|
|
|
|
* therefore it's definitely worth the time. */
|
|
|
|
struct rte_storage *new_stored = NULL;
|
|
|
|
if (new)
|
2023-07-03 18:38:24 +00:00
|
|
|
{
|
2023-12-08 15:13:14 +00:00
|
|
|
new_stored = rte_store(new, i, table);
|
2023-07-03 18:38:24 +00:00
|
|
|
new = RTES_WRITE(new_stored);
|
|
|
|
}
|
2022-06-27 10:32:15 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage * _Atomic *last_ptr = NULL;
|
|
|
|
struct rte_storage *old_stored = NULL;
|
|
|
|
const rte *old = NULL;
|
2020-01-28 10:42:46 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Find the original route from the same protocol */
|
|
|
|
NET_WALK_ROUTES(table, net, ep, e)
|
|
|
|
{
|
|
|
|
last_ptr = &e->next;
|
|
|
|
if (e->rte.src == src)
|
|
|
|
if (old_stored)
|
|
|
|
bug("multiple routes in table with the same src");
|
|
|
|
else
|
|
|
|
old_stored = e;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (old_stored)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
old = &old_stored->rte;
|
2020-01-28 10:42:46 +00:00
|
|
|
|
2020-05-01 20:26:24 +00:00
|
|
|
/* If there is the same route in the routing table but from
|
|
|
|
* a different sender, then there are two paths from the
|
|
|
|
* source protocol to this routing table through transparent
|
|
|
|
* pipes, which is not allowed.
|
|
|
|
* We log that and ignore the route. */
|
2021-06-21 15:07:31 +00:00
|
|
|
if (old->sender != c)
|
2020-05-01 20:26:24 +00:00
|
|
|
{
|
|
|
|
if (!old->generation && !new->generation)
|
|
|
|
bug("Two protocols claim to author a route with the same rte_src in table %s: %N %s/%u:%u",
|
2023-12-08 15:13:14 +00:00
|
|
|
c->table->name, i->addr, old->src->owner->name, old->src->private_id, old->src->global_id);
|
2020-05-01 20:26:24 +00:00
|
|
|
|
|
|
|
log_rl(&table->rl_pipe, L_ERR "Route source collision in table %s: %N %s/%u:%u",
|
2023-12-08 15:13:14 +00:00
|
|
|
c->table->name, i->addr, old->src->owner->name, old->src->private_id, old->src->global_id);
|
2020-05-01 20:26:24 +00:00
|
|
|
}
|
2009-12-02 16:26:16 +00:00
|
|
|
|
2022-06-27 10:32:15 +00:00
|
|
|
if (new && rte_same(old, &new_stored->rte))
|
2000-05-06 21:21:19 +00:00
|
|
|
{
|
2019-02-22 01:16:39 +00:00
|
|
|
/* No changes, ignore the new route and refresh the old one */
|
2023-07-03 18:38:24 +00:00
|
|
|
old_stored->stale_cycle = new->stale_cycle;
|
2012-11-10 13:26:13 +00:00
|
|
|
|
2012-11-15 00:29:01 +00:00
|
|
|
if (!rte_is_filtered(new))
|
2012-11-10 13:26:13 +00:00
|
|
|
{
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->updates_ignored++;
|
2021-06-21 15:07:31 +00:00
|
|
|
rt_rte_trace_in(D_ROUTES, req, new, "ignored");
|
2012-11-10 13:26:13 +00:00
|
|
|
}
|
2022-06-27 10:32:15 +00:00
|
|
|
|
|
|
|
/* We need to free the already stored route here before returning */
|
2023-12-08 15:13:14 +00:00
|
|
|
rte_free(new_stored, table);
|
2024-02-29 13:04:05 +00:00
|
|
|
return;
|
2020-05-01 20:26:24 +00:00
|
|
|
}
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
|
|
|
|
2009-06-03 23:22:56 +00:00
|
|
|
if (!old && !new)
|
|
|
|
{
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->withdraws_ignored++;
|
2024-02-29 13:04:05 +00:00
|
|
|
return;
|
2009-06-03 23:22:56 +00:00
|
|
|
}
|
|
|
|
|
2022-06-27 10:32:15 +00:00
|
|
|
/* If rejected by import limit, we need to pretend there is no route */
|
|
|
|
if (req->preimport && (req->preimport(req, new, old) == 0))
|
|
|
|
{
|
2023-12-08 15:13:14 +00:00
|
|
|
rte_free(new_stored, table);
|
2022-06-27 10:32:15 +00:00
|
|
|
new_stored = NULL;
|
|
|
|
new = NULL;
|
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
if (!new && !old)
|
|
|
|
{
|
|
|
|
stats->withdraws_ignored++;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2013-01-10 12:07:33 +00:00
|
|
|
int new_ok = rte_is_ok(new);
|
|
|
|
int old_ok = rte_is_ok(old);
|
|
|
|
|
2012-11-16 12:29:16 +00:00
|
|
|
if (new_ok)
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->updates_accepted++;
|
2012-11-16 12:29:16 +00:00
|
|
|
else if (old_ok)
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->withdraws_accepted++;
|
2012-11-16 12:29:16 +00:00
|
|
|
else
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->withdraws_ignored++;
|
2009-06-03 23:22:56 +00:00
|
|
|
|
2021-02-10 02:09:57 +00:00
|
|
|
if (old_ok || new_ok)
|
|
|
|
table->last_rt_change = current_time();
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Finalize the new stored route */
|
|
|
|
if (new_stored)
|
|
|
|
{
|
|
|
|
new->lastmod = current_time();
|
|
|
|
new->id = hmap_first_zero(&table->id_map);
|
|
|
|
hmap_set(&table->id_map, new->id);
|
|
|
|
}
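/* Route ids allocated here are released again via hmap_clear() in
 * rt_cleanup_export_all() (or directly in rte_announce() when nothing got
 * announced at all) once the route has been replaced and the exports
 * carrying it are cleaned up. */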
|
|
|
|
|
|
|
|
/* We need to add a spinlock sentinel to the beginning */
|
|
|
|
struct rte_storage local_sentinel = {
|
|
|
|
.flags = REF_OBSOLETE,
|
|
|
|
.next = old_best_stored,
|
|
|
|
};
|
|
|
|
atomic_store_explicit(&net->routes, &local_sentinel, memory_order_release);
|
|
|
|
|
|
|
|
/* Mark also the old route as obsolete. */
|
|
|
|
if (old_stored)
|
|
|
|
old_stored->flags |= REF_OBSOLETE;
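/* Both the sentinel and the replaced route now carry REF_OBSOLETE, giving
 * concurrent readers a way to detect that this chain is being rewritten and
 * retry their read instead of trusting the intermediate state. */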
|
|
|
|
|
2012-07-04 19:31:03 +00:00
|
|
|
if (table->config->sorted)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2012-07-04 19:31:03 +00:00
|
|
|
/* If routes are sorted, just insert new route to appropriate position */
|
2020-01-28 10:42:46 +00:00
|
|
|
if (new_stored)
|
2012-07-04 19:31:03 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage * _Atomic *k = &local_sentinel.next, *kk;
|
|
|
|
for (; kk = atomic_load_explicit(k, memory_order_relaxed); k = &kk->next)
|
|
|
|
if ((kk != old_stored) && rte_better(new, &kk->rte))
|
2012-07-04 19:31:03 +00:00
|
|
|
break;
|
2009-08-11 13:49:56 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Do not flip the operation order, the list must stay consistent */
|
|
|
|
atomic_store_explicit(&new_stored->next, kk, memory_order_release);
|
|
|
|
atomic_store_explicit(k, new_stored, memory_order_release);
|
2020-07-16 13:02:10 +00:00
|
|
|
|
2018-12-11 12:52:30 +00:00
|
|
|
table->rt_count++;
|
2012-07-04 19:31:03 +00:00
|
|
|
}
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
2012-07-04 19:31:03 +00:00
|
|
|
else
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2012-07-04 19:31:03 +00:00
|
|
|
/* If routes are not sorted, find the best route and move it to
|
|
|
|
the first position. There are several optimized cases. */
|
|
|
|
|
2021-09-27 14:40:28 +00:00
|
|
|
if (src->owner->rte_recalculate &&
|
2023-07-03 18:38:24 +00:00
|
|
|
src->owner->rte_recalculate(table, net, new_stored, old_stored, old_best_stored))
|
2012-07-04 19:31:03 +00:00
|
|
|
goto do_recalculate;
|
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
if (new_stored && rte_better(&new_stored->rte, old_best))
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
/* The first case - the new route is clearly optimal,
|
2012-07-04 19:31:03 +00:00
|
|
|
we link it at the first position */
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* First link to the chain */
|
|
|
|
atomic_store_explicit(&new_stored->next,
|
|
|
|
atomic_load_explicit(&local_sentinel.next, memory_order_acquire),
|
|
|
|
memory_order_release);
|
|
|
|
|
|
|
|
/* And then link to the added route */
|
|
|
|
atomic_store_explicit(&local_sentinel.next, new_stored, memory_order_release);
|
2020-07-16 13:02:10 +00:00
|
|
|
|
2018-12-11 12:52:30 +00:00
|
|
|
table->rt_count++;
|
2009-08-11 13:49:56 +00:00
|
|
|
}
|
2012-07-04 19:31:03 +00:00
|
|
|
else if (old == old_best)
|
2009-08-11 13:49:56 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
/* The second case - the old best route will disappear, we add the
|
2012-07-04 19:31:03 +00:00
|
|
|
new route (if we have any) to the list (we don't care about
|
|
|
|
position) and then we elect the new optimal route and relink
|
|
|
|
that route at the first position and announce it. New optimal
|
|
|
|
route might be NULL if there are no more routes */
|
|
|
|
|
|
|
|
do_recalculate:
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Add the new route to the list right behind the old one */
|
2020-01-28 10:42:46 +00:00
|
|
|
if (new_stored)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
atomic_store_explicit(&new_stored->next, atomic_load_explicit(&old_stored->next, memory_order_relaxed), memory_order_release);
|
|
|
|
atomic_store_explicit(&old_stored->next, new_stored, memory_order_release);
|
2020-07-16 13:02:10 +00:00
|
|
|
|
2018-12-11 12:52:30 +00:00
|
|
|
table->rt_count++;
|
2012-07-04 19:31:03 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Find a new optimal route (if there is any) */
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage * _Atomic *bp = &local_sentinel.next;
|
|
|
|
struct rte_storage *best = atomic_load_explicit(bp, memory_order_relaxed);
|
|
|
|
|
|
|
|
/* Best can't be the old one */
|
|
|
|
if (best == old_stored)
|
|
|
|
{
|
|
|
|
bp = &best->next;
|
|
|
|
best = atomic_load_explicit(bp, memory_order_relaxed);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (best)
|
|
|
|
{
|
|
|
|
for (struct rte_storage *kk, * _Atomic *k = &best->next;
|
|
|
|
kk = atomic_load_explicit(k, memory_order_relaxed);
|
|
|
|
k = &kk->next)
|
|
|
|
if (rte_better(&kk->rte, &best->rte))
|
|
|
|
best = atomic_load_explicit(bp = k, memory_order_relaxed);
|
|
|
|
|
|
|
|
/* Now we know which route is the best one, we have to relink it
|
|
|
|
* to the front place. */
|
|
|
|
|
|
|
|
/* First we wait until all readers finish */
|
|
|
|
synchronize_rcu();
|
|
|
|
/* Now all readers must have seen the local spinlock sentinel
|
|
|
|
* and will wait until we re-arrange the structure */
|
|
|
|
|
|
|
|
/* The best route gets removed from its original place */
|
|
|
|
atomic_store_explicit(bp,
|
|
|
|
atomic_load_explicit(&best->next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
|
|
|
|
|
|
|
/* After the best route, the original chain shall be linked */
|
|
|
|
atomic_store_explicit(&best->next,
|
|
|
|
atomic_load_explicit(&local_sentinel.next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
|
|
|
|
|
|
|
/* And now we finally link the best route first */
|
|
|
|
atomic_store_explicit(&local_sentinel.next, best, memory_order_release);
|
|
|
|
}
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
2020-01-28 10:42:46 +00:00
|
|
|
else if (new_stored)
|
2012-07-04 19:31:03 +00:00
|
|
|
{
|
|
|
|
/* The third case - the new route is not better than the old
|
|
|
|
best route (therefore old_best != NULL) and the old best
|
|
|
|
route was not removed (therefore old_best == net->routes).
|
2020-07-16 13:02:10 +00:00
|
|
|
We just link the new route to the old/last position. */
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
if (old_stored)
|
|
|
|
{
|
|
|
|
atomic_store_explicit(&new_stored->next,
|
|
|
|
atomic_load_explicit(&old_stored->next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
|
|
|
atomic_store_explicit(&old_stored->next, new_stored, memory_order_release);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
atomic_store_explicit(&new_stored->next, NULL, memory_order_relaxed);
|
|
|
|
atomic_store_explicit(last_ptr, new_stored, memory_order_release);
|
|
|
|
}
|
2012-07-04 19:31:03 +00:00
|
|
|
}
|
2024-04-03 12:47:15 +00:00
|
|
|
/* The fourth (empty) case - suboptimal route is being removed, nothing to do */
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
2009-08-11 13:49:56 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Finally drop the old route */
|
|
|
|
if (old_stored)
|
|
|
|
{
|
|
|
|
uint seen = 0;
|
|
|
|
NET_WALK_ROUTES(table, net, ep, e)
|
|
|
|
if (e == old_stored)
|
|
|
|
{
|
|
|
|
ASSERT_DIE(e->rte.src == src);
|
|
|
|
atomic_store_explicit(ep,
|
|
|
|
atomic_load_explicit(&e->next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
|
|
|
ASSERT_DIE(!seen++);
|
|
|
|
}
|
|
|
|
ASSERT_DIE(seen == 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct rte_storage *new_best = atomic_load_explicit(&local_sentinel.next, memory_order_relaxed);
|
2012-07-04 19:31:03 +00:00
|
|
|
|
|
|
|
/* Log the route change */
|
2021-06-21 15:07:31 +00:00
|
|
|
if (new_ok)
|
2024-04-03 12:47:15 +00:00
|
|
|
rt_rte_trace_in(D_ROUTES, req, &new_stored->rte, new_stored == new_best ? "added [best]" : "added");
|
2021-06-21 15:07:31 +00:00
|
|
|
else if (old_ok)
|
2009-12-02 13:33:34 +00:00
|
|
|
{
|
2021-06-21 15:07:31 +00:00
|
|
|
if (old != old_best)
|
|
|
|
rt_rte_trace_in(D_ROUTES, req, old, "removed");
|
2024-04-03 12:47:15 +00:00
|
|
|
else if (new_best && rte_is_ok(&new_best->rte))
|
2021-06-21 15:07:31 +00:00
|
|
|
rt_rte_trace_in(D_ROUTES, req, old, "removed [replaced]");
|
|
|
|
else
|
|
|
|
rt_rte_trace_in(D_ROUTES, req, old, "removed [sole]");
|
2009-08-11 13:49:56 +00:00
|
|
|
}
|
2022-10-11 09:08:15 +00:00
|
|
|
else
|
|
|
|
if (req->trace_routes & D_ROUTES)
|
2023-12-08 15:13:14 +00:00
|
|
|
log(L_TRACE "%s > ignored %N %s->%s", req->name, i->addr, old ? "filtered" : "none", new ? "filtered" : "none");
|
2009-08-11 13:49:56 +00:00
|
|
|
|
2012-07-04 19:31:03 +00:00
|
|
|
/* Propagate the route change */
|
2024-02-07 16:30:43 +00:00
|
|
|
rte_announce(table, i, net,
|
|
|
|
RTE_OR_NULL(new_stored), RTE_OR_NULL(old_stored),
|
2024-04-03 12:47:15 +00:00
|
|
|
RTE_OR_NULL(new_best), RTE_OR_NULL(old_best_stored));
|
|
|
|
|
|
|
|
/* Now we can finally release the changes back for reading */
|
|
|
|
atomic_store_explicit(&net->routes, new_best, memory_order_release);
|
2012-04-15 13:07:58 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
return;
|
1998-10-18 11:13:16 +00:00
|
|
|
}
|
|
|
|
|
2022-06-27 10:32:15 +00:00
|
|
|
int
|
2023-07-03 18:38:24 +00:00
|
|
|
channel_preimport(struct rt_import_request *req, rte *new, const rte *old)
|
2021-06-21 15:07:31 +00:00
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct channel, c, in_req, req);
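/* The RX limit counts every route kept from the peer, including ones rejected
 * by the import filter; the IN limit below counts only routes that actually
 * passed the filter. */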
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
if (new && !old)
|
|
|
|
if (CHANNEL_LIMIT_PUSH(c, RX))
|
2022-06-27 10:32:15 +00:00
|
|
|
return 0;
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
if (!new && old)
|
|
|
|
CHANNEL_LIMIT_POP(c, RX);
|
|
|
|
|
|
|
|
int new_in = new && !rte_is_filtered(new);
|
|
|
|
int old_in = old && !rte_is_filtered(old);
|
2024-02-29 13:04:05 +00:00
|
|
|
|
2023-11-08 20:51:46 +00:00
|
|
|
int verdict = 1;
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
if (new_in && !old_in)
|
|
|
|
if (CHANNEL_LIMIT_PUSH(c, IN))
|
2022-06-16 21:24:56 +00:00
|
|
|
if (c->in_keep & RIK_REJECTED)
|
2021-06-21 15:07:31 +00:00
|
|
|
new->flags |= REF_FILTERED;
|
|
|
|
else
|
2023-11-08 20:51:46 +00:00
|
|
|
verdict = 0;
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
if (!new_in && old_in)
|
|
|
|
CHANNEL_LIMIT_POP(c, IN);
|
|
|
|
|
2023-11-08 20:51:46 +00:00
|
|
|
mpls_rte_preimport(new_in ? new : NULL, old_in ? old : NULL);
|
|
|
|
|
|
|
|
return verdict;
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
|
2009-05-31 13:24:27 +00:00
|
|
|
void
|
2020-01-28 10:42:46 +00:00
|
|
|
rte_update(struct channel *c, const net_addr *n, rte *new, struct rte_src *src)
|
1999-04-05 20:25:03 +00:00
|
|
|
{
|
2021-06-21 15:07:31 +00:00
|
|
|
if (!c->in_req.hook)
|
2022-10-11 09:08:15 +00:00
|
|
|
{
|
|
|
|
log(L_WARN "%s.%s: Called rte_update without import hook", c->proto->name, c->name);
|
2021-06-21 15:07:31 +00:00
|
|
|
return;
|
2022-10-11 09:08:15 +00:00
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
ASSERT(c->channel_state == CS_UP);
|
|
|
|
|
2024-04-04 10:01:35 +00:00
|
|
|
/* Storing prefilter routes as an explicit layer */
|
2022-06-16 21:24:56 +00:00
|
|
|
if (new && (c->in_keep & RIK_PREFILTER))
|
2024-04-23 16:28:34 +00:00
|
|
|
new->attrs = ea_lookup_tmp(new->attrs, 0, EALS_PREIMPORT);
|
1999-04-05 20:25:03 +00:00
|
|
|
|
2024-01-28 12:09:48 +00:00
|
|
|
#if 0
|
|
|
|
debug("%s.%s -(prefilter)-> %s: %N ", c->proto->name, c->name, c->table->name, n);
|
|
|
|
if (new) ea_dump(new->attrs);
|
|
|
|
else debug("withdraw");
|
|
|
|
debug("\n");
|
|
|
|
#endif
|
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
const struct filter *filter = c->in_filter;
|
|
|
|
struct channel_import_stats *stats = &c->import_stats;
|
2023-11-23 17:41:07 +00:00
|
|
|
struct mpls_fec *fec = NULL;
|
2016-01-26 10:48:58 +00:00
|
|
|
|
1999-04-05 20:25:03 +00:00
|
|
|
if (new)
|
|
|
|
{
|
2020-01-28 10:42:46 +00:00
|
|
|
new->net = n;
|
2024-01-23 19:25:48 +00:00
|
|
|
new->sender = c->in_req.hook;
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
int fr;
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->updates_received++;
|
2022-05-31 10:51:34 +00:00
|
|
|
if ((filter == FILTER_REJECT) ||
|
2022-05-30 14:41:15 +00:00
|
|
|
((fr = f_run(filter, new, 0)) > F_ACCEPT))
|
2000-03-12 20:30:53 +00:00
|
|
|
{
|
2021-06-21 17:11:42 +00:00
|
|
|
stats->updates_filtered++;
|
2021-06-21 15:07:31 +00:00
|
|
|
channel_rte_trace_in(D_FILTERS, c, new, "filtered out");
|
2012-11-10 13:26:13 +00:00
|
|
|
|
2022-06-16 21:24:56 +00:00
|
|
|
if (c->in_keep & RIK_REJECTED)
|
2021-06-21 15:07:31 +00:00
|
|
|
new->flags |= REF_FILTERED;
|
|
|
|
else
|
|
|
|
new = NULL;
|
2000-03-12 20:30:53 +00:00
|
|
|
}
|
2022-05-15 13:53:35 +00:00
|
|
|
|
2023-11-23 23:05:51 +00:00
|
|
|
if (new && c->proto->mpls_channel)
|
2023-11-23 17:41:07 +00:00
|
|
|
if (mpls_handle_rte(c->proto->mpls_channel, n, new, &fec) < 0)
|
2023-11-09 15:34:04 +00:00
|
|
|
{
|
|
|
|
channel_rte_trace_in(D_FILTERS, c, new, "invalid");
|
|
|
|
stats->updates_invalid++;
|
|
|
|
new = NULL;
|
|
|
|
}
|
2023-11-08 20:51:46 +00:00
|
|
|
|
2022-05-31 10:51:34 +00:00
|
|
|
if (new)
|
2024-01-26 13:42:11 +00:00
|
|
|
{
|
2024-04-23 16:28:34 +00:00
|
|
|
new->attrs = ea_lookup_tmp(new->attrs,
|
|
|
|
(c->in_keep & RIK_PREFILTER) ? BIT32_ALL(EALS_PREIMPORT) : 0, EALS_FILTERED);
|
2024-01-26 13:42:11 +00:00
|
|
|
|
2022-06-07 10:18:23 +00:00
|
|
|
if (net_is_flow(n))
|
|
|
|
rt_flowspec_resolve_rte(new, c);
|
|
|
|
else
|
|
|
|
rt_next_hop_resolve_rte(new);
|
2024-01-26 13:42:11 +00:00
|
|
|
}
|
2022-05-15 13:53:35 +00:00
|
|
|
|
2022-05-31 10:51:34 +00:00
|
|
|
if (new && !rte_validate(c, new))
|
2022-05-15 13:53:35 +00:00
|
|
|
{
|
2022-05-31 10:51:34 +00:00
|
|
|
channel_rte_trace_in(D_FILTERS, c, new, "invalid");
|
|
|
|
stats->updates_invalid++;
|
|
|
|
new = NULL;
|
2022-05-15 13:53:35 +00:00
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
else
|
|
|
|
stats->withdraws_received++;
|
2012-11-10 13:26:13 +00:00
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
rte_import(&c->in_req, n, new, src);
|
2019-03-14 16:22:22 +00:00
|
|
|
|
2023-11-23 17:41:07 +00:00
|
|
|
if (fec)
|
|
|
|
{
|
|
|
|
mpls_unlock_fec(fec);
|
|
|
|
DBGL( "Unlock FEC %p (rte_update %N)", fec, n);
|
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rte_import(struct rt_import_request *req, const net_addr *n, rte *new, struct rte_src *src)
|
|
|
|
{
|
|
|
|
struct rt_import_hook *hook = req->hook;
|
|
|
|
if (!hook)
|
2022-10-11 09:08:15 +00:00
|
|
|
{
|
|
|
|
log(L_WARN "%s: Called rte_import without import hook", req->name);
|
2021-06-21 15:07:31 +00:00
|
|
|
return;
|
2022-10-11 09:08:15 +00:00
|
|
|
}
|
2019-01-31 14:02:15 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(hook->table, tab)
|
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
u32 bs = atomic_load_explicit(&tab->routes_block_size, memory_order_acquire);
|
|
|
|
|
2023-12-08 15:13:14 +00:00
|
|
|
struct netindex *i;
|
2024-04-03 12:47:15 +00:00
|
|
|
net *routes = atomic_load_explicit(&tab->routes, memory_order_acquire);
|
2022-09-07 11:54:20 +00:00
|
|
|
net *nn;
|
|
|
|
if (new)
|
2021-06-21 15:07:31 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
/* An update */
|
|
|
|
/* Set auxiliary values */
|
|
|
|
new->stale_cycle = hook->stale_set;
|
|
|
|
new->sender = hook;
|
|
|
|
|
2023-12-08 15:13:14 +00:00
|
|
|
/* Allocate the key structure */
|
|
|
|
i = net_get_index(tab->netindex, n);
|
|
|
|
new->net = i->addr;
|
2022-07-12 08:36:10 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Block size update */
|
|
|
|
u32 nbs = bs;
|
|
|
|
while (i->index >= nbs)
|
|
|
|
nbs *= 2;
|
|
|
|
|
|
|
|
if (nbs > bs)
|
|
|
|
{
|
|
|
|
net *nb = mb_alloc(tab->rp, nbs * sizeof *nb);
|
|
|
|
memcpy(&nb[0], routes, bs * sizeof *nb);
|
|
|
|
memset(&nb[bs], 0, (nbs - bs) * sizeof *nb);
|
|
|
|
ASSERT_DIE(atomic_compare_exchange_strong_explicit(
|
|
|
|
&tab->routes, &routes, nb,
|
|
|
|
memory_order_acq_rel, memory_order_relaxed));
|
|
|
|
ASSERT_DIE(atomic_compare_exchange_strong_explicit(
|
|
|
|
&tab->routes_block_size, &bs, nbs,
|
|
|
|
memory_order_acq_rel, memory_order_relaxed));
|
2024-05-30 06:22:40 +00:00
|
|
|
ASSERT_DIE(atomic_compare_exchange_strong_explicit(
|
|
|
|
&tab->export_all.max_feed_index, &bs, nbs,
|
|
|
|
memory_order_acq_rel, memory_order_relaxed));
|
|
|
|
ASSERT_DIE(atomic_compare_exchange_strong_explicit(
|
|
|
|
&tab->export_best.max_feed_index, &bs, nbs,
|
|
|
|
memory_order_acq_rel, memory_order_relaxed));
|
2024-04-03 12:47:15 +00:00
|
|
|
|
|
|
|
synchronize_rcu();
|
|
|
|
mb_free(routes);
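/* Readers that sampled the old block pointer have finished by now
 * (synchronize_rcu() above), so freeing it was safe; from here on everybody
 * sees the enlarged block. */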
|
|
|
|
|
|
|
|
routes = nb;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Update table tries */
|
|
|
|
struct f_trie *trie = atomic_load_explicit(&tab->trie, memory_order_relaxed);
|
|
|
|
if (trie)
|
|
|
|
trie_add_prefix(trie, i->addr, i->addr->pxlen, i->addr->pxlen);
|
|
|
|
|
|
|
|
if (tab->trie_new)
|
|
|
|
trie_add_prefix(tab->trie_new, i->addr, i->addr->pxlen, i->addr->pxlen);
|
2023-12-08 15:13:14 +00:00
|
|
|
}
|
2024-04-03 12:47:15 +00:00
|
|
|
else if ((i = net_find_index(tab->netindex, n)) && (i->index < bs))
|
|
|
|
/* Found a block from which we can withdraw */
|
|
|
|
;
|
2023-12-08 15:13:14 +00:00
|
|
|
else
|
2012-08-14 14:25:22 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
/* No route for this net is present at all. Ignore right now. */
|
2021-06-21 15:07:31 +00:00
|
|
|
req->hook->stats.withdraws_ignored++;
|
2022-10-11 09:08:15 +00:00
|
|
|
if (req->trace_routes & D_ROUTES)
|
|
|
|
log(L_TRACE "%s > ignored %N withdraw", req->name, n);
|
2023-11-14 11:53:40 +00:00
|
|
|
return;
|
2012-08-14 14:25:22 +00:00
|
|
|
}
|
2009-06-03 23:22:56 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Resolve the net structure */
|
|
|
|
nn = &routes[i->index];
|
|
|
|
|
|
|
|
/* Recalculate the best route. */
|
2024-02-29 13:04:05 +00:00
|
|
|
rte_recalculate(tab, hook, i, nn, new, src);
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
1999-04-05 20:25:03 +00:00
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/*
|
|
|
|
* Feeding
|
|
|
|
*/
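/*
 * Feeds are built optimistically: the per-net export pointers are sampled,
 * the routes are copied, and if the pointers have moved in the meantime, the
 * whole attempt is restarted via RT_READ_RETRY().
 */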
|
|
|
|
|
|
|
|
static net *
|
|
|
|
rt_net_feed_get_net(struct rtable_reading *tr, uint index)
|
|
|
|
{
|
|
|
|
/* Get the route block from the table */
|
|
|
|
net *routes = atomic_load_explicit(&tr->t->routes, memory_order_acquire);
|
|
|
|
u32 bs = atomic_load_explicit(&tr->t->routes_block_size, memory_order_acquire);
|
|
|
|
|
|
|
|
/* Nothing to actually feed */
|
|
|
|
if (index >= bs)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
/* We have a net to feed! */
|
|
|
|
return &routes[index];
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct rt_pending_export *
|
|
|
|
rt_net_feed_validate_first(
|
|
|
|
struct rtable_reading *tr,
|
|
|
|
const struct rt_pending_export *first_in_net,
|
|
|
|
const struct rt_pending_export *last_in_net,
|
|
|
|
const struct rt_pending_export *first)
|
|
|
|
{
|
|
|
|
/* Inconsistent input */
|
|
|
|
if (!first_in_net != !last_in_net)
|
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
|
|
|
|
if (!first)
|
|
|
|
return first_in_net;
|
|
|
|
|
|
|
|
for (uint i = 1; i < 4096; i++)
|
|
|
|
{
|
|
|
|
/* Export item validity check: we must find it between first_in_net and last_in_net */
|
|
|
|
const struct rt_pending_export *rpe = first_in_net;
|
|
|
|
while (rpe)
|
|
|
|
if (rpe == first)
|
|
|
|
return first;
|
|
|
|
else if (rpe == last_in_net)
|
|
|
|
/* Got to the end without finding the beginning */
|
|
|
|
break;
|
|
|
|
else
|
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire);
|
|
|
|
|
|
|
|
birdloop_yield();
|
|
|
|
}
|
2024-04-23 16:50:22 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
log(L_WARN "Waiting too long for table announcement to finish");
|
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct rt_export_feed *
|
|
|
|
rt_net_feed_index(struct rtable_reading *tr, net *n, const struct rt_pending_export *first)
|
|
|
|
{
|
2024-04-23 16:50:22 +00:00
|
|
|
/* Get the feed itself. It may change under our hands tho. */
|
2024-05-02 09:39:34 +00:00
|
|
|
struct rt_pending_export *first_in_net, *last_in_net;
|
|
|
|
first_in_net = atomic_load_explicit(&n->all.first, memory_order_acquire);
|
|
|
|
last_in_net = atomic_load_explicit(&n->all.last, memory_order_acquire);
|
|
|
|
|
|
|
|
first = rt_net_feed_validate_first(tr, first_in_net, last_in_net, first);
|
2024-04-23 16:50:22 +00:00
|
|
|
|
|
|
|
/* Count the elements */
|
|
|
|
uint rcnt = rte_feed_count(tr, n);
|
|
|
|
uint ecnt = 0;
|
|
|
|
uint ocnt = 0;
|
2024-05-02 09:39:34 +00:00
|
|
|
for (const struct rt_pending_export *rpe = first; rpe;
|
2024-04-23 16:50:22 +00:00
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire))
|
|
|
|
{
|
|
|
|
ecnt++;
|
2024-05-02 09:39:34 +00:00
|
|
|
if (rpe->it.old)
|
2024-04-23 16:50:22 +00:00
|
|
|
ocnt++;
|
|
|
|
}
|
|
|
|
|
|
|
|
struct rt_export_feed *feed = NULL;
|
|
|
|
|
|
|
|
if (rcnt || ocnt || ecnt)
|
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
feed = rt_alloc_feed(rcnt+ocnt, ecnt);
|
2024-04-23 16:50:22 +00:00
|
|
|
|
|
|
|
if (rcnt)
|
|
|
|
rte_feed_obtain_copy(tr, n, feed->block, rcnt);
|
|
|
|
|
|
|
|
if (ecnt)
|
|
|
|
{
|
|
|
|
uint e = 0;
|
|
|
|
uint rpos = rcnt;
|
2024-05-02 09:39:34 +00:00
|
|
|
for (const struct rt_pending_export *rpe = first; rpe;
|
2024-04-23 16:50:22 +00:00
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire))
|
|
|
|
if (e >= ecnt)
|
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
else
|
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
feed->exports[e++] = rpe->it.seq;
|
2024-04-23 16:50:22 +00:00
|
|
|
|
|
|
|
/* Copy also obsolete routes */
|
2024-05-02 09:39:34 +00:00
|
|
|
if (rpe->it.old)
|
2024-04-23 16:50:22 +00:00
|
|
|
{
|
|
|
|
ASSERT_DIE(rpos < rcnt + ocnt);
|
2024-05-02 09:39:34 +00:00
|
|
|
feed->block[rpos++] = *rpe->it.old;
|
|
|
|
ea_free_later(ea_ref(rpe->it.old->attrs));
|
2024-04-23 16:50:22 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_DIE(e == ecnt);
|
|
|
|
}
|
2024-05-02 09:39:34 +00:00
|
|
|
|
|
|
|
feed->ni = NET_TO_INDEX(feed->block[0].net);
|
2024-04-23 16:50:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Check that it indeed didn't change and the last export is still the same. */
|
2024-05-02 09:39:34 +00:00
|
|
|
if (
|
|
|
|
(first_in_net != atomic_load_explicit(&n->all.first, memory_order_acquire))
|
|
|
|
|| (last_in_net != atomic_load_explicit(&n->all.last, memory_order_acquire)))
|
2024-04-23 16:50:22 +00:00
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
|
|
|
|
return feed;
|
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
static struct rt_export_feed *
|
2024-05-30 06:22:40 +00:00
|
|
|
rt_net_feed_internal(struct rtable_reading *tr, const struct netindex *ni, const struct rt_pending_export *first)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
2024-05-30 06:22:40 +00:00
|
|
|
net *n = rt_net_feed_get_net(tr, ni->index);
|
2024-05-02 09:39:34 +00:00
|
|
|
if (!n)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
return rt_net_feed_index(tr, n, first);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct rt_export_feed *
|
|
|
|
rt_net_feed(rtable *t, const net_addr *a, const struct rt_pending_export *first)
|
|
|
|
{
|
|
|
|
RT_READ(t, tr);
|
2024-05-30 06:22:40 +00:00
|
|
|
const struct netindex *ni = net_find_index(tr->t->netindex, a);
|
|
|
|
return ni ? rt_net_feed_internal(tr, ni, first) : NULL;
|
2024-05-02 09:39:34 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct rt_export_feed *
|
2024-06-03 12:23:41 +00:00
|
|
|
rt_feed_net_all(struct rt_exporter *e, struct rcu_unwinder *u, struct netindex *ni, const struct rt_export_item *_first)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
|
|
|
RT_READ_ANCHORED(SKIP_BACK(rtable, export_all, e), tr, u);
|
2024-05-30 06:22:40 +00:00
|
|
|
return rt_net_feed_internal(tr, ni, SKIP_BACK(const struct rt_pending_export, it, _first));
|
2024-05-02 09:39:34 +00:00
|
|
|
}
|
|
|
|
|
2024-04-23 16:50:22 +00:00
|
|
|
rte
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_net_best(rtable *t, const net_addr *a)
|
2013-02-08 22:58:27 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
rte rt = {};
|
2013-02-08 22:58:27 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
RT_READ(t, tr);
|
2013-02-08 22:58:27 +00:00
|
|
|
|
2024-06-03 12:23:41 +00:00
|
|
|
struct netindex *i = net_find_index(t->netindex, a);
|
2024-04-03 12:47:15 +00:00
|
|
|
net *n = i ? net_find(tr, i) : NULL;
|
2024-04-23 16:50:22 +00:00
|
|
|
if (!n)
|
|
|
|
return rt;
|
2024-04-03 12:47:15 +00:00
|
|
|
|
2024-04-23 16:50:22 +00:00
|
|
|
struct rte_storage *e = NET_READ_BEST_ROUTE(tr, n);
|
2024-04-03 12:47:15 +00:00
|
|
|
if (!e || !rte_is_valid(&e->rte))
|
2024-04-23 16:50:22 +00:00
|
|
|
return rt;
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
ASSERT_DIE(e->rte.net == i->addr);
|
2024-04-23 16:50:22 +00:00
|
|
|
ea_free_later(ea_ref(e->rte.attrs));
|
|
|
|
return RTE_COPY(e);
|
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
static struct rt_export_feed *
|
2024-06-03 12:23:41 +00:00
|
|
|
rt_feed_net_best(struct rt_exporter *e, struct rcu_unwinder *u, struct netindex *ni, const struct rt_export_item *_first)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
|
|
|
SKIP_BACK_DECLARE(rtable, t, export_best, e);
|
|
|
|
SKIP_BACK_DECLARE(const struct rt_pending_export, first, it, _first);
|
|
|
|
|
|
|
|
RT_READ_ANCHORED(t, tr, u);
|
|
|
|
|
|
|
|
net *n = rt_net_feed_get_net(tr, ni->index);
|
|
|
|
if (!n)
|
|
|
|
/* No more to feed, we are fed up! */
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
const struct rt_pending_export *first_in_net, *last_in_net;
|
|
|
|
first_in_net = atomic_load_explicit(&n->best.first, memory_order_acquire);
|
|
|
|
last_in_net = atomic_load_explicit(&n->best.last, memory_order_acquire);
|
|
|
|
first = rt_net_feed_validate_first(tr, first_in_net, last_in_net, first);
|
|
|
|
|
|
|
|
uint ecnt = 0;
|
|
|
|
for (const struct rt_pending_export *rpe = first; rpe;
|
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire))
|
|
|
|
ecnt++;
|
|
|
|
|
|
|
|
struct rte_storage *best = atomic_load_explicit(&n->routes, memory_order_acquire);
|
|
|
|
if (!ecnt && !best)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
struct rt_export_feed *feed = rt_alloc_feed(!!best, ecnt);
|
|
|
|
feed->ni = ni;
|
|
|
|
if (best)
|
|
|
|
feed->block[0] = best->rte;
|
|
|
|
|
|
|
|
if (ecnt)
|
|
|
|
{
|
|
|
|
uint e = 0;
|
|
|
|
for (const struct rt_pending_export *rpe = first; rpe;
|
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire))
|
|
|
|
if (e >= ecnt)
|
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
else
|
|
|
|
feed->exports[e++] = rpe->it.seq;
|
|
|
|
|
|
|
|
ASSERT_DIE(e == ecnt);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check that it indeed didn't change and the last export is still the same. */
|
|
|
|
if (
|
|
|
|
(first_in_net != atomic_load_explicit(&n->best.first, memory_order_acquire))
|
|
|
|
|| (last_in_net != atomic_load_explicit(&n->best.last, memory_order_acquire)))
|
|
|
|
RT_READ_RETRY(tr);
|
|
|
|
|
|
|
|
/* And we're finally done */
|
|
|
|
return feed;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2024-04-23 16:50:22 +00:00
|
|
|
/* Check rtable for the best route to a given net and whether it would be exported to p */
|
|
|
|
int
|
|
|
|
rt_examine(rtable *t, net_addr *a, struct channel *c, const struct filter *filter)
|
|
|
|
{
|
|
|
|
rte rt = rt_net_best(t, a);
|
2020-01-28 10:42:46 +00:00
|
|
|
|
|
|
|
int v = c->proto->preexport ? c->proto->preexport(c, &rt) : 0;
|
2013-02-08 22:58:27 +00:00
|
|
|
if (v == RIC_PROCESS)
|
2022-04-10 16:55:15 +00:00
|
|
|
v = (f_run(filter, &rt, FF_SILENT) <= F_ACCEPT);
|
2013-02-08 22:58:27 +00:00
|
|
|
|
|
|
|
return v > 0;
|
|
|
|
}
|
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
static inline void
|
|
|
|
rt_set_import_state(struct rt_import_hook *hook, u8 state)
|
|
|
|
{
|
|
|
|
hook->last_state_change = current_time();
|
|
|
|
hook->import_state = state;
|
|
|
|
|
2022-08-31 09:58:27 +00:00
|
|
|
CALL(hook->req->log_state_change, hook->req, state);
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_request_import(rtable *t, struct rt_import_request *req)
|
2021-06-21 15:07:31 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(t, tab)
|
|
|
|
{
|
|
|
|
rt_lock_table(tab);
|
2021-06-21 15:07:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
struct rt_import_hook *hook = req->hook = mb_allocz(tab->rp, sizeof(struct rt_import_hook));
|
2021-06-21 15:07:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
DBG("Lock table %s for import %p req=%p uc=%u\n", tab->name, hook, req, tab->use_count);
|
2021-06-21 15:07:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
hook->req = req;
|
|
|
|
hook->table = t;
|
2021-06-21 15:07:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_set_import_state(hook, TIS_UP);
|
|
|
|
add_tail(&tab->imports, &hook->n);
|
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rt_stop_import(struct rt_import_request *req, void (*stopped)(struct rt_import_request *))
|
|
|
|
{
|
|
|
|
ASSERT_DIE(req->hook);
|
|
|
|
struct rt_import_hook *hook = req->hook;
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(hook->table, tab)
|
|
|
|
{
|
|
|
|
rt_set_import_state(hook, TIS_STOP);
|
|
|
|
hook->stopped = stopped;
|
2022-09-26 10:09:14 +00:00
|
|
|
|
2023-09-14 12:40:33 +00:00
|
|
|
rt_refresh_trace(tab, hook, "stop import");
|
|
|
|
|
2023-01-19 09:56:16 +00:00
|
|
|
/* Cancel table rr_counter */
|
2022-09-26 10:09:14 +00:00
|
|
|
if (hook->stale_set != hook->stale_pruned)
|
2024-04-09 17:14:30 +00:00
|
|
|
tab->rr_counter -= ((int) hook->stale_set - (int) hook->stale_pruned);
|
2023-01-19 09:56:16 +00:00
|
|
|
|
|
|
|
tab->rr_counter++;
|
2022-09-26 10:09:14 +00:00
|
|
|
|
|
|
|
hook->stale_set = hook->stale_pruned = hook->stale_pruning = hook->stale_valid = 0;
|
2024-02-29 13:04:05 +00:00
|
|
|
|
|
|
|
rt_schedule_prune(tab);
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
}
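/*
 * A minimal illustrative sketch (not part of BIRD, helper name made up) of
 * the stale-cycle window check described in the rt_refresh_begin()
 * documentation below: a route survives pruning iff its stale_cycle lies
 * between the hook's stale_valid and stale_set, computed modulo 256 because
 * all the counters are u8.
 */
static inline int
stale_cycle_in_window_sketch(u8 stale_cycle, u8 stale_valid, u8 stale_set)
{
  u8 from_valid = stale_cycle - stale_valid;	/* distance from the oldest valid cycle */
  u8 window = stale_set - stale_valid;		/* width of the currently valid window */
  return from_valid <= window;
}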
|
|
|
|
|
2014-03-23 00:35:33 +00:00
|
|
|
|
|
|
|
/**
|
|
|
|
* rt_refresh_begin - start a refresh cycle
|
|
|
|
* @t: related routing table
|
2016-01-26 10:48:58 +00:00
|
|
|
* @c: related channel
|
2014-03-23 00:35:33 +00:00
|
|
|
*
|
|
|
|
* This function starts a refresh cycle for given routing table and announce
|
|
|
|
* hook. The refresh cycle is a sequence where the protocol sends all its valid
|
|
|
|
* routes to the routing table (by rte_update()). After that, all protocol
|
2016-01-26 10:48:58 +00:00
|
|
|
* routes (more precisely routes with @c as @sender) not sent during the
|
2014-03-23 00:35:33 +00:00
|
|
|
* refresh cycle but still in the table from the past are pruned. This is
|
|
|
|
* implemented using a per-import stale counter instead of the older
* REF_STALE / REF_DISCARD flags, which required a synchronous table walk at
* both the beginning and the end of the refresh routine. Every route carries
* a u8 stale_cycle and every import hook keeps u8 stale_set, stale_valid,
* stale_pruned and stale_pruning. In the base state all four hook counters
* and all routes' stale_cycle hold the same value. A refresh cycle then
* proceeds as follows:
*
*   + ----------- + --------- + ----------- + ------------- + ------------ +
*   |             | stale_set | stale_valid | stale_pruning | stale_pruned |
*   | Base        | x         | x           | x             | x            |
*   | Begin       | x+1       | x           | x             | x            |
*   |   ... routes are now inserted with stale_cycle == (x+1)              |
*   | End         | x+1       | x+1         | x             | x            |
*   |   ... the table pruning routine is now scheduled                     |
*   | Prune begin | x+1       | x+1         | x+1           | x            |
*   |   ... routes with stale_cycle not between stale_set and stale_valid  |
*   |       are deleted                                                    |
*   | Prune end   | x+1       | x+1         | x+1           | x+1          |
*   + ----------- + --------- + ----------- + ------------- + ------------ +
*
* rt_refresh_begin() performs the Begin step, rt_refresh_end() the End step.
* The pruning routine is asynchronous and may have high latency in high-load
* environments, so several refresh requests may pile up before it starts:
*
*   | Prune begin | x+k       | x+k         | x -> x+k      | x            |
*   ... or even, if the prune event starts while another refresh is running,
*   | Prune begin | x+k+1     | x+k         | x -> x+k      | x            |
*
* In such a case the pruning routine still deletes every route whose
* stale_cycle does not lie between stale_set and stale_valid, pruning the
* remnants of all unpruned refreshes at once:
*
*   | Prune end   | x+k       | x+k         | x+k           | x+k          |
*
* In extremely rare cases too many refreshes may happen before any prune
* finishes; if the gap between stale_set and stale_pruned grows past the
* threshold checked in rt_refresh_begin(), the routine walks the table
* synchronously, resets all stale values to the base state and logs a
* warning.
|
|
|
|
*/
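/*
 * Illustrative call sequence (a sketch, not copied from any protocol; the
 * channel variable c is assumed): a protocol refreshing its routes would
 * roughly do
 *
 *   rt_refresh_begin(&c->in_req);
 *   ... re-announce every valid route via rte_update(c, net, &new, src) ...
 *   rt_refresh_end(&c->in_req);
 *
 * after which the asynchronous prune loop drops whatever was not
 * re-announced during the cycle. rt_refresh_end() is the counterpart
 * mentioned above; it is not part of this excerpt and is assumed to take the
 * same import request.
 */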
|
2014-03-20 13:07:12 +00:00
|
|
|
void
|
2022-07-12 08:36:10 +00:00
|
|
|
rt_refresh_begin(struct rt_import_request *req)
|
2014-03-20 13:07:12 +00:00
|
|
|
{
|
2022-07-12 08:36:10 +00:00
|
|
|
struct rt_import_hook *hook = req->hook;
|
|
|
|
ASSERT_DIE(hook);
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(hook->table, tab)
|
|
|
|
{
|
|
|
|
|
2022-07-12 08:36:10 +00:00
|
|
|
/* If the pruning routine is too slow */
|
2023-03-19 12:21:35 +00:00
|
|
|
if (((hook->stale_set - hook->stale_pruned) & 0xff) >= 240)
|
2022-07-12 08:36:10 +00:00
|
|
|
{
|
2023-03-19 12:21:35 +00:00
|
|
|
log(L_WARN "Route refresh flood in table %s (stale_set=%u, stale_pruned=%u)", hook->table->name, hook->stale_set, hook->stale_pruned);
|
|
|
|
|
|
|
|
/* Forcibly set all old routes' stale cycle to zero. */
|
2024-04-03 12:47:15 +00:00
|
|
|
u32 bs = atomic_load_explicit(&tab->routes_block_size, memory_order_relaxed);
|
|
|
|
net *routes = atomic_load_explicit(&tab->routes, memory_order_relaxed);
|
|
|
|
for (u32 i = 0; i < bs; i++)
|
|
|
|
NET_WALK_ROUTES(tab, &routes[i], ep, e)
|
2023-12-08 15:13:14 +00:00
|
|
|
if (e->rte.sender == req->hook)
|
|
|
|
e->stale_cycle = 0;
|
2023-03-19 12:21:35 +00:00
|
|
|
|
|
|
|
/* Smash the route refresh counter and zero everything. */
|
2024-04-09 17:14:30 +00:00
|
|
|
tab->rr_counter -= ((int) hook->stale_set - (int) hook->stale_pruned);
|
2023-03-19 12:21:35 +00:00
|
|
|
hook->stale_set = hook->stale_valid = hook->stale_pruning = hook->stale_pruned = 0;
|
2022-07-12 08:36:10 +00:00
|
|
|
}
|
|
|
|
|
2023-03-19 12:21:35 +00:00
|
|
|
/* Now we can safely increase the stale_set modifier */
|
|
|
|
hook->stale_set++;
|
|
|
|
|
2023-01-19 09:56:16 +00:00
|
|
|
/* The table must know that we're route-refreshing */
|
|
|
|
tab->rr_counter++;
|
|
|
|
|
2023-09-14 12:40:33 +00:00
|
|
|
rt_refresh_trace(tab, hook, "route refresh begin");
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2014-03-20 13:07:12 +00:00
|
|
|
}
|
|
|
|
|
2014-03-23 00:35:33 +00:00
|
|
|
/**
|
|
|
|
* rt_refresh_end - end a refresh cycle
|
|
|
|
* @req: related import request
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2014-03-23 00:35:33 +00:00
|
|
|
*
|
2016-01-26 10:48:58 +00:00
|
|
|
* This function ends a refresh cycle for the given routing table and import
|
2014-03-23 00:35:33 +00:00
|
|
|
* request. See rt_refresh_begin() for a description of refresh cycles.
|
|
|
|
*/
|
2014-03-20 13:07:12 +00:00
|
|
|
void
|
2022-07-12 08:36:10 +00:00
|
|
|
rt_refresh_end(struct rt_import_request *req)
|
2014-03-20 13:07:12 +00:00
|
|
|
{
|
2022-07-12 08:36:10 +00:00
|
|
|
struct rt_import_hook *hook = req->hook;
|
|
|
|
ASSERT_DIE(hook);
|
2014-03-20 13:07:12 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(hook->table, tab)
|
|
|
|
{
|
2023-03-19 12:21:35 +00:00
|
|
|
/* Now the only valid routes are those with the latest stale_set value */
|
2023-09-14 12:40:33 +00:00
|
|
|
UNUSED uint cnt = hook->stale_set - hook->stale_valid;
|
2023-03-19 12:21:35 +00:00
|
|
|
hook->stale_valid = hook->stale_set;
|
2014-03-20 13:07:12 +00:00
|
|
|
|
2023-01-19 09:56:16 +00:00
|
|
|
/* Here we can't kick the timer as we aren't in the table service loop */
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_schedule_prune(tab);
|
2022-07-12 08:36:10 +00:00
|
|
|
|
2023-09-14 12:40:33 +00:00
|
|
|
rt_refresh_trace(tab, hook, "route refresh end");
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2014-03-20 13:07:12 +00:00
|
|
|
}
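For orientation, a sketch of how one refresh cycle is expected to be driven through an
import request; the signature of rt_refresh_begin() is assumed here to mirror
rt_refresh_end() above, and the middle step stands for whatever re-announcement the
protocol performs.

static void refresh_cycle_sketch(struct rt_import_request *req)
{
  rt_refresh_begin(req);	/* stale_set++; routes imported from now on carry the new stale_cycle */

  /* ... the protocol re-announces its complete set of routes through req ... */

  rt_refresh_end(req);		/* stale_valid = stale_set; a table prune is scheduled */
}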
|
|
|
|
|
2023-09-14 12:40:33 +00:00
|
|
|
/**
|
|
|
|
* rt_refresh_trace - log information about route refresh
|
|
|
|
* @tab: table
|
|
|
|
* @ih: import hook doing the route refresh
|
|
|
|
* @msg: what is happening
|
|
|
|
*
|
|
|
|
* This function consistently logs route refresh messages.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
rt_refresh_trace(struct rtable_private *tab, struct rt_import_hook *ih, const char *msg)
|
|
|
|
{
|
|
|
|
if (ih->req->trace_routes & D_STATES)
|
|
|
|
log(L_TRACE "%s: %s: rr %u set %u valid %u pruning %u pruned %u", ih->req->name, msg,
|
|
|
|
tab->rr_counter, ih->stale_set, ih->stale_valid, ih->stale_pruning, ih->stale_pruned);
|
|
|
|
}
|
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rte_dump - dump a route
|
|
|
|
* @e: &rte to be dumped
|
|
|
|
*
|
|
|
|
* This function dumps contents of a &rte to debug output.
|
|
|
|
*/
|
1998-05-20 11:54:33 +00:00
|
|
|
void
|
2020-01-28 10:42:46 +00:00
|
|
|
rte_dump(struct rte_storage *e)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2024-01-28 12:09:48 +00:00
|
|
|
debug("(%u) %-1N", NET_TO_INDEX(e->rte.net)->index, e->rte.net);
|
|
|
|
debug("ID=%d ", e->rte.id);
|
|
|
|
debug("SENDER=%s ", e->rte.sender->req->name);
|
2020-01-28 10:42:46 +00:00
|
|
|
debug("PF=%02x ", e->rte.pflags);
|
2023-11-01 17:25:40 +00:00
|
|
|
debug("SRC=%uG ", e->rte.src->global_id);
|
2022-06-08 13:31:28 +00:00
|
|
|
ea_dump(e->rte.attrs);
|
1998-06-04 20:28:19 +00:00
|
|
|
debug("\n");
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
1998-05-15 07:54:32 +00:00
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_dump - dump a routing table
|
|
|
|
* @t: routing table to be dumped
|
|
|
|
*
|
|
|
|
* This function dumps contents of a given routing table to debug output.
|
|
|
|
*/
|
1998-05-20 11:54:33 +00:00
|
|
|
void
|
2024-04-03 12:47:15 +00:00
|
|
|
rt_dump(rtable *tab)
|
1998-05-20 11:54:33 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
RT_READ(tab, tp);
|
|
|
|
|
|
|
|
/* Looking at priv.deleted is technically unsafe but we don't care */
|
|
|
|
debug("Dump of routing table <%s>%s\n", tab->name, tab->priv.deleted ? " (deleted)" : "");
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
u32 bs = atomic_load_explicit(&tp->t->routes_block_size, memory_order_relaxed);
|
|
|
|
net *routes = atomic_load_explicit(&tp->t->routes, memory_order_relaxed);
|
|
|
|
for (u32 i = 0; i < bs; i++)
|
|
|
|
NET_READ_WALK_ROUTES(tp, &routes[i], ep, e)
|
2023-12-08 15:13:14 +00:00
|
|
|
rte_dump(e);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
debug("\n");
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
1998-05-15 07:54:32 +00:00
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_dump_all - dump all routing tables
|
|
|
|
*
|
|
|
|
* This function dumps contents of all routing tables to debug output.
|
|
|
|
*/
|
1998-05-24 14:49:14 +00:00
|
|
|
void
|
|
|
|
rt_dump_all(void)
|
|
|
|
{
|
1999-05-17 20:14:52 +00:00
|
|
|
rtable *t;
|
2021-03-30 13:09:53 +00:00
|
|
|
node *n;
|
1999-05-17 20:14:52 +00:00
|
|
|
|
2021-03-30 13:09:53 +00:00
|
|
|
WALK_LIST2(t, n, routing_tables, n)
|
1999-05-17 20:14:52 +00:00
|
|
|
rt_dump(t);
|
2021-06-21 15:07:31 +00:00
|
|
|
|
|
|
|
WALK_LIST2(t, n, deleted_routing_tables, n)
|
|
|
|
rt_dump(t);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_dump_hooks(rtable *tp)
|
2021-06-21 15:07:31 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(tp, tab)
|
|
|
|
{
|
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
debug("Dump of hooks in routing table <%s>%s\n", tab->name, tab->deleted ? " (deleted)" : "");
|
2022-08-31 12:01:59 +00:00
|
|
|
debug(" nhu_state=%u use_count=%d rt_count=%u\n",
|
|
|
|
tab->nhu_state, tab->use_count, tab->rt_count);
|
2021-06-21 15:07:31 +00:00
|
|
|
debug(" last_rt_change=%t gc_time=%t gc_counter=%d prune_state=%u\n",
|
|
|
|
tab->last_rt_change, tab->gc_time, tab->gc_counter, tab->prune_state);
|
|
|
|
|
|
|
|
struct rt_import_hook *ih;
|
|
|
|
WALK_LIST(ih, tab->imports)
|
|
|
|
{
|
|
|
|
ih->req->dump_req(ih->req);
|
|
|
|
debug(" Import hook %p requested by %p: pref=%u"
|
|
|
|
" last_state_change=%t import_state=%u stopped=%p\n",
|
|
|
|
ih, ih->req, ih->stats.pref,
|
|
|
|
ih->last_state_change, ih->import_state, ih->stopped);
|
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
#if 0
|
|
|
|
/* FIXME: too lazy to write this now */
|
2024-02-29 13:04:05 +00:00
|
|
|
WALK_TLIST(lfjour_recipient, r, &tab->journal.recipients)
|
2021-06-21 15:07:31 +00:00
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rt_export_hook, eh, recipient, r);
|
2024-02-09 16:02:44 +00:00
|
|
|
eh->req->dump_req(eh->req);
|
2021-06-21 15:07:31 +00:00
|
|
|
debug(" Export hook %p requested by %p:"
|
2021-09-27 11:04:16 +00:00
|
|
|
" refeed_pending=%u last_state_change=%t export_state=%u\n",
|
2024-02-09 16:02:44 +00:00
|
|
|
eh, eh->req, eh->refeed_pending, eh->last_state_change,
|
|
|
|
atomic_load_explicit(&eh->export_state, memory_order_relaxed));
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
2024-05-02 09:39:34 +00:00
|
|
|
#endif
|
2021-06-21 15:07:31 +00:00
|
|
|
debug("\n");
|
2022-09-07 11:54:20 +00:00
|
|
|
|
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rt_dump_hooks_all(void)
|
|
|
|
{
|
|
|
|
rtable *t;
|
|
|
|
node *n;
|
|
|
|
|
|
|
|
debug("Dump of all table hooks\n");
|
|
|
|
|
|
|
|
WALK_LIST2(t, n, routing_tables, n)
|
|
|
|
rt_dump_hooks(t);
|
|
|
|
|
|
|
|
WALK_LIST2(t, n, deleted_routing_tables, n)
|
|
|
|
rt_dump_hooks(t);
|
1998-05-24 14:49:14 +00:00
|
|
|
}
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
static inline void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_schedule_nhu(struct rtable_private *tab)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-09-07 11:12:44 +00:00
|
|
|
if (tab->nhu_corked)
|
|
|
|
{
|
|
|
|
if (!(tab->nhu_corked & NHU_SCHEDULED))
|
|
|
|
tab->nhu_corked |= NHU_SCHEDULED;
|
|
|
|
}
|
|
|
|
else if (!(tab->nhu_state & NHU_SCHEDULED))
|
|
|
|
{
|
|
|
|
rt_trace(tab, D_EVENTS, "Scheduling NHU");
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-07 11:12:44 +00:00
|
|
|
/* state change:
|
|
|
|
* NHU_CLEAN -> NHU_SCHEDULED
|
|
|
|
* NHU_RUNNING -> NHU_DIRTY
|
|
|
|
*/
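/* Reading of the test below (assuming NHU_DIRTY == NHU_RUNNING | NHU_SCHEDULED,
 * which the transitions above imply): if the state was NHU_CLEAN, OR-ing in
 * NHU_SCHEDULED yields exactly NHU_SCHEDULED and the event is sent; if it was
 * NHU_RUNNING, the result is NHU_DIRTY and no event is sent, as the running
 * update will pick up the dirty bit by itself. */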
|
|
|
|
if ((tab->nhu_state |= NHU_SCHEDULED) == NHU_SCHEDULED)
|
2024-02-22 12:31:11 +00:00
|
|
|
ev_send_loop(tab->loop, tab->nhu_event);
|
2022-09-07 11:12:44 +00:00
|
|
|
}
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_schedule_prune(struct rtable_private *tab)
|
2012-03-28 16:40:04 +00:00
|
|
|
{
|
2016-01-26 10:48:58 +00:00
|
|
|
/* state change 0->1, 2->3 */
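/* (Bit 0 of prune_state means "a prune cycle is requested", bit 1 means "a prune
 *  cycle is currently running"; this reading is derived from the transitions
 *  used here and in rt_prune_table() below.) */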
|
|
|
|
tab->prune_state |= 1;
|
2024-02-29 13:04:05 +00:00
|
|
|
ev_send_loop(tab->loop, tab->prune_event);
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
static void
|
|
|
|
rt_prune_timer(timer *t)
|
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED((rtable *) t->data, tab)
|
|
|
|
if (tab->gc_counter >= tab->config->gc_threshold)
|
|
|
|
rt_schedule_prune(tab);
|
2022-06-04 15:34:57 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_kick_prune_timer(struct rtable_private *tab)
|
2022-06-04 15:34:57 +00:00
|
|
|
{
|
|
|
|
/* Return if prune is already scheduled */
|
|
|
|
if (tm_active(tab->prune_timer) || (tab->prune_state & 1))
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* Randomize GC period to +/- 50% */
|
|
|
|
btime gc_period = tab->config->gc_period;
|
|
|
|
gc_period = (gc_period / 2) + (random_u32() % (uint) gc_period);
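/* e.g. a configured gc_period of 10 s yields a delay drawn uniformly from [5 s, 15 s) */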
|
2022-09-09 11:52:37 +00:00
|
|
|
tm_start_in(tab->prune_timer, gc_period, tab->loop);
|
2022-06-04 15:34:57 +00:00
|
|
|
}
|
|
|
|
|
2024-02-11 21:58:29 +00:00
|
|
|
#define TLIST_PREFIX rt_flowspec_link
|
|
|
|
#define TLIST_TYPE struct rt_flowspec_link
|
|
|
|
#define TLIST_ITEM n
|
|
|
|
#define TLIST_WANT_WALK
|
|
|
|
#define TLIST_WANT_ADD_TAIL
|
|
|
|
#define TLIST_DEFINED_BEFORE
|
|
|
|
|
|
|
|
struct rt_flowspec_link {
|
|
|
|
TLIST_DEFAULT_NODE;
|
|
|
|
rtable *src;
|
|
|
|
rtable *dst;
|
|
|
|
u32 uc;
|
|
|
|
struct rt_export_request req;
|
2024-05-02 09:39:34 +00:00
|
|
|
event event;
|
2024-02-11 21:58:29 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
#include "lib/tlists.h"
|
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
|
2022-08-31 14:04:36 +00:00
|
|
|
static void
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_flowspec_export(void *_link)
|
2022-08-31 14:04:36 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
struct rt_flowspec_link *ln = _link;
|
2022-09-07 11:54:20 +00:00
|
|
|
rtable *dst_pub = ln->dst;
|
|
|
|
ASSUME(rt_is_flow(dst_pub));
|
2023-11-14 11:53:40 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
RT_EXPORT_WALK(&ln->req, u)
|
|
|
|
{
|
|
|
|
const net_addr *n = NULL;
|
|
|
|
switch (u->kind)
|
|
|
|
{
|
|
|
|
case RT_EXPORT_STOP:
|
|
|
|
bug("Main table export stopped");
|
|
|
|
|
|
|
|
case RT_EXPORT_FEED:
|
|
|
|
if (u->feed->count_routes)
|
|
|
|
n = u->feed->block[0].net;
|
|
|
|
break;
|
|
|
|
|
|
|
|
case RT_EXPORT_UPDATE:
|
|
|
|
{
|
|
|
|
/* Conflate following updates */
|
|
|
|
const rte *old = RTE_VALID_OR_NULL(u->update->old);
|
|
|
|
const rte *new = u->update->new;
|
|
|
|
for (
|
|
|
|
SKIP_BACK_DECLARE(struct rt_pending_export, rpe, it, u->update);
|
|
|
|
rpe = atomic_load_explicit(&rpe->next, memory_order_acquire) ;)
|
|
|
|
{
|
|
|
|
ASSERT_DIE(new == rpe->it.old);
|
|
|
|
new = rpe->it.new;
|
|
|
|
rt_export_processed(&ln->req, rpe->it.seq);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Ignore idempotent */
|
|
|
|
if ((old == new) || old && new && rte_same(old, new))
|
|
|
|
continue;
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
n = (new ?: old)->net;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (!n)
|
|
|
|
continue;
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
RT_LOCKED(dst_pub, dst)
|
|
|
|
{
|
|
|
|
/* No need to inspect it further if recalculation is already scheduled */
|
|
|
|
if ((dst->nhu_state == NHU_SCHEDULED) || (dst->nhu_state == NHU_DIRTY))
|
|
|
|
break;
|
|
|
|
|
|
|
|
/* Irrelevant prefix */
|
|
|
|
if (!trie_match_net(dst->flowspec_trie, n))
|
|
|
|
break;
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/* Actually, schedule NHU */
|
|
|
|
rt_schedule_nhu(dst);
|
|
|
|
}
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-06-03 09:12:20 +00:00
|
|
|
MAYBE_DEFER_TASK(birdloop_event_list(dst_pub->loop), &ln->event,
|
|
|
|
"flowspec ctl export from %s to %s", ln->src->name, dst_pub->name);
|
2023-11-14 11:53:40 +00:00
|
|
|
}
|
2022-08-31 14:04:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_flowspec_dump_req(struct rt_export_request *req)
|
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rt_flowspec_link, ln, req, req);
|
2022-08-31 14:04:36 +00:00
|
|
|
debug(" Flowspec link for table %s (%p)\n", ln->dst->name, req);
|
|
|
|
}
|
|
|
|
|
2021-12-20 19:25:35 +00:00
|
|
|
static struct rt_flowspec_link *
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_flowspec_find_link(struct rtable_private *src, rtable *dst)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2024-02-11 21:58:29 +00:00
|
|
|
WALK_TLIST(rt_flowspec_link, ln, &src->flowspec_links)
|
2024-05-02 09:39:34 +00:00
|
|
|
if (ln->dst == dst)
|
|
|
|
switch (rt_export_get_state(&ln->req))
|
2024-02-11 21:58:29 +00:00
|
|
|
{
|
|
|
|
case TES_FEEDING:
|
|
|
|
case TES_READY:
|
|
|
|
return ln;
|
2024-05-02 09:39:34 +00:00
|
|
|
|
|
|
|
default:
|
|
|
|
bug("Unexpected flowspec link state");
|
2024-02-11 21:58:29 +00:00
|
|
|
}
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_flowspec_link(rtable *src_pub, rtable *dst_pub)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
ASSERT(rt_is_ip(src_pub));
|
|
|
|
ASSERT(rt_is_flow(dst_pub));
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
int lock_dst = 0;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2023-02-07 16:01:34 +00:00
|
|
|
birdloop_enter(dst_pub->loop);
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(src_pub, src)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
struct rt_flowspec_link *ln = rt_flowspec_find_link(src, dst_pub);
|
|
|
|
|
|
|
|
if (!ln)
|
|
|
|
{
|
2023-04-21 13:26:06 +00:00
|
|
|
pool *p = birdloop_pool(dst_pub->loop);
|
2022-09-07 11:54:20 +00:00
|
|
|
ln = mb_allocz(p, sizeof(struct rt_flowspec_link));
|
|
|
|
ln->src = src_pub;
|
|
|
|
ln->dst = dst_pub;
|
|
|
|
ln->req = (struct rt_export_request) {
|
|
|
|
.name = mb_sprintf(p, "%s.flowspec.notifier", dst_pub->name),
|
2024-05-02 09:39:34 +00:00
|
|
|
.r = {
|
|
|
|
.event = &ln->event,
|
|
|
|
.target = birdloop_event_list(dst_pub->loop),
|
|
|
|
},
|
2023-04-21 13:26:06 +00:00
|
|
|
.pool = p,
|
2022-09-07 11:54:20 +00:00
|
|
|
.trace_routes = src->config->debug,
|
2024-05-02 09:39:34 +00:00
|
|
|
.dump = rt_flowspec_dump_req,
|
|
|
|
};
|
|
|
|
ln->event = (event) {
|
|
|
|
.hook = rt_flowspec_export,
|
|
|
|
.data = ln,
|
2022-09-07 11:54:20 +00:00
|
|
|
};
|
2024-02-11 21:58:29 +00:00
|
|
|
rt_flowspec_link_add_tail(&src->flowspec_links, ln);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
rtex_export_subscribe(&src->export_best, &ln->req);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
|
|
|
lock_dst = 1;
|
|
|
|
}
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
ln->uc++;
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
if (lock_dst)
|
|
|
|
rt_lock_table(dst_pub);
|
2023-02-07 16:01:34 +00:00
|
|
|
|
|
|
|
birdloop_leave(dst_pub->loop);
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
2022-08-31 14:04:36 +00:00
|
|
|
void
|
|
|
|
rt_flowspec_unlink(rtable *src, rtable *dst)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2023-02-07 16:01:34 +00:00
|
|
|
birdloop_enter(dst->loop);
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
_Bool unlock_dst = 0;
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
struct rt_flowspec_link *ln;
|
|
|
|
RT_LOCKED(src, t)
|
|
|
|
{
|
|
|
|
ln = rt_flowspec_find_link(t, dst);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
ASSERT(ln && (ln->uc > 0));
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
if (!--ln->uc)
|
2024-02-11 21:58:29 +00:00
|
|
|
{
|
|
|
|
rt_flowspec_link_rem_node(&t->flowspec_links, ln);
|
2024-05-02 09:39:34 +00:00
|
|
|
rtex_export_unsubscribe(&ln->req);
|
|
|
|
ev_postpone(&ln->event);
|
|
|
|
mb_free(ln);
|
|
|
|
unlock_dst = 1;
|
2024-02-11 21:58:29 +00:00
|
|
|
}
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2023-02-07 16:01:34 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (unlock_dst)
|
|
|
|
rt_unlock_table(dst);
|
|
|
|
|
2023-02-07 16:01:34 +00:00
|
|
|
birdloop_leave(dst->loop);
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_flowspec_reset_trie(struct rtable_private *tab)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
|
|
|
linpool *lp = tab->flowspec_trie->lp;
|
|
|
|
int ipv4 = tab->flowspec_trie->ipv4;
|
|
|
|
|
|
|
|
lp_flush(lp);
|
|
|
|
tab->flowspec_trie = f_new_trie(lp, 0);
|
|
|
|
tab->flowspec_trie->ipv4 = ipv4;
|
|
|
|
}
|
|
|
|
|
2024-06-03 12:23:41 +00:00
|
|
|
/* ROA digestor */
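/* Summary of the mechanism implemented below: the digestor subscribes to the
 * best-route exports of a ROA table; every added or withdrawn ROA prefix is
 * collected into an accumulating trie (rt_roa_update_net()), a settle timer
 * batches these changes, and the whole trie is then pushed as a single journal
 * item (rt_roa_announce_digest()) for consumers to process; the trie's linpool
 * is freed once the journal item has been consumed (rt_cleanup_roa_digest()). */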
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_dump_roa_digestor_req(struct rt_export_request *req)
|
|
|
|
{
|
|
|
|
debug(" ROA update digestor %s (%p)\n", req->name, req);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_cleanup_roa_digest(struct lfjour *j UNUSED, struct lfjour_item *i)
|
|
|
|
{
|
|
|
|
SKIP_BACK_DECLARE(struct roa_digest, d, li, i);
|
|
|
|
rfree(d->trie->lp);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_roa_announce_digest(struct settle *s)
|
|
|
|
{
|
|
|
|
SKIP_BACK_DECLARE(struct roa_digestor, d, settle, s);
|
|
|
|
|
|
|
|
RT_LOCK(d->tab, tab);
|
|
|
|
|
|
|
|
struct lfjour_item *it = lfjour_push_prepare(&d->digest);
|
|
|
|
if (it)
|
|
|
|
{
|
|
|
|
SKIP_BACK_DECLARE(struct roa_digest, dd, li, it);
|
|
|
|
dd->trie = d->trie;
|
|
|
|
lfjour_push_commit(&d->digest);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
rfree(d->trie->lp);
|
|
|
|
|
|
|
|
d->trie = f_new_trie(lp_new(tab->rp), 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_roa_update_net(struct roa_digestor *d, struct netindex *ni, uint maxlen)
|
|
|
|
{
|
|
|
|
trie_add_prefix(d->trie, ni->addr, net_pxlen(ni->addr), maxlen);
|
|
|
|
settle_kick(&d->settle, d->tab->loop);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_roa_update(void *_d)
|
|
|
|
{
|
|
|
|
struct roa_digestor *d = _d;
|
|
|
|
RT_LOCK(d->tab, tab);
|
|
|
|
|
|
|
|
RT_EXPORT_WALK(&d->req, u)
|
|
|
|
{
|
|
|
|
struct netindex *ni = NULL;
|
|
|
|
switch (u->kind)
|
|
|
|
{
|
|
|
|
case RT_EXPORT_STOP:
|
|
|
|
bug("Main table export stopped");
|
|
|
|
|
|
|
|
case RT_EXPORT_FEED:
|
|
|
|
if (u->feed->count_routes)
|
|
|
|
ni = u->feed->ni;
|
|
|
|
break;
|
|
|
|
|
|
|
|
case RT_EXPORT_UPDATE:
|
|
|
|
/* A ROA merely switched from one source to another? Then nothing changes for validation; only pure additions and withdrawals are indicated. */
|
|
|
|
if (!u->update->new || !u->update->old)
|
|
|
|
ni = NET_TO_INDEX(u->update->new ? u->update->new->net : u->update->old->net);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ni)
|
|
|
|
rt_roa_update_net(d, ni, (tab->addr_type == NET_ROA6) ? 128 : 32);
|
|
|
|
|
|
|
|
MAYBE_DEFER_TASK(birdloop_event_list(tab->loop), &d->event,
|
|
|
|
"ROA digestor update in %s", tab->name);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/* Routing table setup and free */
|
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
static void
|
|
|
|
rt_free(resource *_r)
|
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, r, r, _r);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
DBG("Deleting routing table %s\n", r->name);
|
|
|
|
ASSERT_DIE(r->use_count == 0);
|
|
|
|
|
|
|
|
r->config->table = NULL;
|
|
|
|
rem_node(&r->n);
|
|
|
|
|
|
|
|
if (r->hostcache)
|
|
|
|
rt_free_hostcache(r);
|
|
|
|
|
|
|
|
/* Freed automagically by the resource pool
|
|
|
|
fib_free(&r->fib);
|
|
|
|
hmap_free(&r->id_map);
|
|
|
|
rfree(r->rt_event);
|
|
|
|
mb_free(r);
|
|
|
|
*/
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2024-05-02 09:39:34 +00:00
|
|
|
rt_res_dump(resource *_r, unsigned indent UNUSED)
|
2021-03-30 16:51:31 +00:00
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct rtable_private, r, r, _r);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
debug("name \"%s\", addr_type=%s, rt_count=%u, use_count=%d\n",
|
|
|
|
r->name, net_label[r->addr_type], r->rt_count, r->use_count);
|
2023-02-28 09:42:47 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
#if 0
|
|
|
|
/* TODO: rethink this completely */
|
2024-02-29 13:04:05 +00:00
|
|
|
/* TODO: move this to lfjour */
|
2023-02-28 09:42:47 +00:00
|
|
|
char x[32];
|
|
|
|
bsprintf(x, "%%%dspending export %%p\n", indent + 2);
|
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
WALK_TLIST(lfjour_block, n, &r->journal.pending)
|
2023-02-28 09:42:47 +00:00
|
|
|
debug(x, "", n);
|
2024-05-02 09:39:34 +00:00
|
|
|
#endif
|
2021-03-30 16:51:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct resclass rt_class = {
|
|
|
|
.name = "Routing table",
|
2022-09-07 11:54:20 +00:00
|
|
|
.size = sizeof(rtable),
|
2021-03-30 16:51:31 +00:00
|
|
|
.free = rt_free,
|
|
|
|
.dump = rt_res_dump,
|
|
|
|
.lookup = NULL,
|
|
|
|
.memsize = NULL,
|
|
|
|
};
|
|
|
|
|
2022-09-05 10:55:36 +00:00
|
|
|
static struct idm rtable_idm;
|
|
|
|
uint rtable_max_id = 0;
|
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
rtable *
|
|
|
|
rt_setup(pool *pp, struct rtable_config *cf)
|
2000-03-04 22:21:06 +00:00
|
|
|
{
|
2022-09-09 11:52:37 +00:00
|
|
|
ASSERT_DIE(birdloop_inside(&main_birdloop));
|
|
|
|
|
2023-04-22 19:20:19 +00:00
|
|
|
/* Start the service thread */
|
|
|
|
struct birdloop *loop = birdloop_new(pp, DOMAIN_ORDER(service), 0, "Routing table service %s", cf->name);
|
2023-04-21 13:26:06 +00:00
|
|
|
birdloop_enter(loop);
|
2023-04-22 19:20:19 +00:00
|
|
|
pool *sp = birdloop_pool(loop);
|
2023-04-21 13:26:06 +00:00
|
|
|
|
|
|
|
/* Create the table domain and pool */
|
2024-05-16 08:22:19 +00:00
|
|
|
DOMAIN(rtable) dom = DOMAIN_NEW_RCU_SYNC(rtable);
|
2023-04-21 13:26:06 +00:00
|
|
|
LOCK_DOMAIN(rtable, dom);
|
|
|
|
|
|
|
|
pool *p = rp_newf(sp, dom.rtable, "Routing table data %s", cf->name);
|
2021-03-30 16:51:31 +00:00
|
|
|
|
2023-04-22 19:20:19 +00:00
|
|
|
/* Create the actual table */
|
2022-09-07 11:54:20 +00:00
|
|
|
struct rtable_private *t = ralloc(p, &rt_class);
|
2021-03-30 16:51:31 +00:00
|
|
|
t->rp = p;
|
2023-04-22 19:20:19 +00:00
|
|
|
t->loop = loop;
|
2023-04-21 13:26:06 +00:00
|
|
|
t->lock = dom;
|
2021-03-30 16:51:31 +00:00
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
t->rte_slab = sl_new(p, sizeof(struct rte_storage));
|
|
|
|
|
2018-02-06 15:08:45 +00:00
|
|
|
t->name = cf->name;
|
2000-03-04 22:21:06 +00:00
|
|
|
t->config = cf;
|
2018-02-06 15:08:45 +00:00
|
|
|
t->addr_type = cf->addr_type;
|
2023-12-07 13:38:05 +00:00
|
|
|
t->debug = cf->debug;
|
2022-09-05 10:55:36 +00:00
|
|
|
t->id = idm_alloc(&rtable_idm);
|
|
|
|
if (t->id >= rtable_max_id)
|
|
|
|
rtable_max_id = t->id + 1;
|
2021-03-30 16:51:31 +00:00
|
|
|
|
2023-12-08 15:13:14 +00:00
|
|
|
t->netindex = rt_global_netindex_hash;
|
2024-04-03 12:47:15 +00:00
|
|
|
atomic_store_explicit(&t->routes, mb_allocz(p, RT_INITIAL_ROUTES_BLOCK_SIZE * sizeof(net)), memory_order_relaxed);
|
|
|
|
atomic_store_explicit(&t->routes_block_size, RT_INITIAL_ROUTES_BLOCK_SIZE, memory_order_relaxed);
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2021-11-29 18:23:42 +00:00
|
|
|
if (cf->trie_used)
|
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
struct f_trie *trie = f_new_trie(lp_new_default(p), 0);
|
|
|
|
trie->ipv4 = net_val_match(t->addr_type, NB_IP4 | NB_VPN4 | NB_ROA4);
|
|
|
|
atomic_store_explicit(&t->trie, trie, memory_order_relaxed);
|
2021-11-29 18:23:42 +00:00
|
|
|
}
|
|
|
|
|
2022-07-11 15:08:59 +00:00
|
|
|
init_list(&t->imports);
|
2022-06-20 19:29:10 +00:00
|
|
|
|
2022-07-11 15:08:59 +00:00
|
|
|
hmap_init(&t->id_map, p, 1024);
|
|
|
|
hmap_set(&t->id_map, 0);
|
2021-03-30 16:51:31 +00:00
|
|
|
|
2024-02-22 12:31:11 +00:00
|
|
|
t->nhu_event = ev_new_init(p, rt_next_hop_update, t);
|
2022-09-09 11:52:37 +00:00
|
|
|
t->nhu_uncork_event = ev_new_init(p, rt_nhu_uncork, t);
|
2022-07-13 10:02:34 +00:00
|
|
|
t->prune_timer = tm_new_init(p, rt_prune_timer, t, 0, 0);
|
2024-02-29 13:04:05 +00:00
|
|
|
t->prune_event = ev_new_init(p, rt_prune_table, t);
|
2022-07-11 15:08:59 +00:00
|
|
|
t->last_rt_change = t->gc_time = current_time();
|
2022-09-05 04:58:42 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
t->export_best = (struct rt_exporter) {
|
|
|
|
.journal = {
|
|
|
|
.loop = t->loop,
|
|
|
|
.domain = t->lock.rtable,
|
|
|
|
.item_size = sizeof(struct rt_pending_export),
|
|
|
|
.item_done = rt_cleanup_export_best,
|
|
|
|
},
|
|
|
|
.name = mb_sprintf(p, "%s.export-best", t->name),
|
2024-05-30 06:22:40 +00:00
|
|
|
.net_type = t->addr_type,
|
|
|
|
.max_feed_index = RT_INITIAL_ROUTES_BLOCK_SIZE,
|
|
|
|
.netindex = t->netindex,
|
2024-05-02 09:39:34 +00:00
|
|
|
.trace_routes = t->debug,
|
|
|
|
.cleanup_done = rt_cleanup_done_best,
|
|
|
|
.feed_net = rt_feed_net_best,
|
|
|
|
};
|
|
|
|
|
|
|
|
rt_exporter_init(&t->export_best, &cf->export_settle);
|
|
|
|
|
|
|
|
t->export_all = (struct rt_exporter) {
|
|
|
|
.journal = {
|
|
|
|
.loop = t->loop,
|
|
|
|
.domain = t->lock.rtable,
|
|
|
|
.item_size = sizeof(struct rt_pending_export),
|
|
|
|
.item_done = rt_cleanup_export_all,
|
|
|
|
},
|
|
|
|
.name = mb_sprintf(p, "%s.export-all", t->name),
|
2024-05-30 06:22:40 +00:00
|
|
|
.net_type = t->addr_type,
|
|
|
|
.max_feed_index = RT_INITIAL_ROUTES_BLOCK_SIZE,
|
|
|
|
.netindex = t->netindex,
|
2024-05-02 09:39:34 +00:00
|
|
|
.trace_routes = t->debug,
|
|
|
|
.cleanup_done = rt_cleanup_done_all,
|
|
|
|
.feed_net = rt_feed_net_all,
|
|
|
|
};
|
|
|
|
|
|
|
|
rt_exporter_init(&t->export_all, &cf->export_settle);
|
|
|
|
|
|
|
|
t->best_req = (struct rt_export_request) {
|
|
|
|
.name = mb_sprintf(p, "%s.best-cleanup", t->name),
|
|
|
|
.pool = p,
|
|
|
|
.trace_routes = t->debug,
|
|
|
|
.dump = rt_dump_best_req,
|
|
|
|
};
|
|
|
|
|
|
|
|
/* Subscribe and pre-feed the best_req */
|
|
|
|
rtex_export_subscribe(&t->export_all, &t->best_req);
|
|
|
|
RT_EXPORT_WALK(&t->best_req, u)
|
|
|
|
ASSERT_DIE(u->kind == RT_EXPORT_FEED);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-06-03 12:23:41 +00:00
|
|
|
/* Prepare the ROA digestor */
|
|
|
|
if ((t->addr_type == NET_ROA6) || (t->addr_type == NET_ROA4))
|
|
|
|
{
|
|
|
|
struct roa_digestor *d = mb_alloc(p, sizeof *d);
|
|
|
|
*d = (struct roa_digestor) {
|
|
|
|
.tab = RT_PUB(t),
|
|
|
|
.req = {
|
|
|
|
.name = mb_sprintf(p, "%s.roa-digestor", t->name),
|
|
|
|
.r = {
|
|
|
|
.target = birdloop_event_list(t->loop),
|
|
|
|
.event = &d->event,
|
|
|
|
},
|
|
|
|
.pool = p,
|
|
|
|
.trace_routes = t->debug,
|
|
|
|
.dump = rt_dump_roa_digestor_req,
|
|
|
|
},
|
|
|
|
.digest = {
|
|
|
|
.loop = t->loop,
|
|
|
|
.domain = t->lock.rtable,
|
|
|
|
.item_size = sizeof(struct roa_digest),
|
|
|
|
.item_done = rt_cleanup_roa_digest,
|
|
|
|
},
|
|
|
|
.settle = SETTLE_INIT(&cf->roa_settle, rt_roa_announce_digest, NULL),
|
|
|
|
.event = {
|
|
|
|
.hook = rt_roa_update,
|
|
|
|
.data = d,
|
|
|
|
},
|
|
|
|
.trie = f_new_trie(lp_new(t->rp), 0),
|
|
|
|
};
|
|
|
|
|
|
|
|
struct settle_config digest_settle_config = {};
|
|
|
|
|
|
|
|
rtex_export_subscribe(&t->export_best, &d->req);
|
|
|
|
lfjour_init(&d->digest, &digest_settle_config);
|
|
|
|
|
|
|
|
t->roa_digest = d;
|
|
|
|
}
|
|
|
|
|
2022-07-28 11:50:59 +00:00
|
|
|
t->cork_threshold = cf->cork_threshold;
|
|
|
|
|
2022-07-11 15:08:59 +00:00
|
|
|
t->rl_pipe = (struct tbf) TBF_DEFAULT_LOG_LIMITS;
|
2022-03-09 12:49:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
if (rt_is_flow(RT_PUB(t)))
|
2022-07-11 15:08:59 +00:00
|
|
|
{
|
|
|
|
t->flowspec_trie = f_new_trie(lp_new_default(p), 0);
|
|
|
|
t->flowspec_trie->ipv4 = (t->addr_type == NET_FLOW4);
|
2021-03-30 16:51:31 +00:00
|
|
|
}
|
2021-02-10 02:09:57 +00:00
|
|
|
|
2023-04-21 13:26:06 +00:00
|
|
|
UNLOCK_DOMAIN(rtable, dom);
|
|
|
|
|
2022-09-12 08:25:14 +00:00
|
|
|
birdloop_leave(t->loop);
|
2022-09-09 11:52:37 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
return RT_PUB(t);
|
2000-03-04 22:21:06 +00:00
|
|
|
}
|
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_init - initialize routing tables
|
|
|
|
*
|
|
|
|
* This function is called during BIRD startup. It initializes the
|
|
|
|
* routing table module.
|
|
|
|
*/
|
1998-05-20 11:54:33 +00:00
|
|
|
void
|
|
|
|
rt_init(void)
|
|
|
|
{
|
|
|
|
rta_init();
|
2023-04-21 13:26:06 +00:00
|
|
|
rt_table_pool = rp_new(&root_pool, the_bird_domain.the_bird, "Routing tables");
|
1999-05-17 20:14:52 +00:00
|
|
|
init_list(&routing_tables);
|
2021-06-21 15:07:31 +00:00
|
|
|
init_list(&deleted_routing_tables);
|
2022-07-28 11:50:59 +00:00
|
|
|
ev_init_list(&rt_cork.queue, &main_birdloop, "Route cork release");
|
|
|
|
rt_cork.run = (event) { .hook = rt_cork_release_hook };
|
2022-09-05 10:55:36 +00:00
|
|
|
idm_init(&rtable_idm, rt_table_pool, 256);
|
2024-05-30 20:59:08 +00:00
|
|
|
rt_global_netindex_hash = netindex_hash_new(rt_table_pool, &global_event_list);
|
2023-12-08 15:13:14 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static _Bool
|
|
|
|
rt_prune_net(struct rtable_private *tab, struct network *n)
|
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
NET_WALK_ROUTES(tab, n, ep, e)
|
2023-12-08 15:13:14 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
ASSERT_DIE(!(e->flags & REF_OBSOLETE));
|
2023-12-08 15:13:14 +00:00
|
|
|
struct rt_import_hook *s = e->rte.sender;
|
2024-04-20 16:10:42 +00:00
|
|
|
|
|
|
|
_Bool stale = (s->import_state == TIS_FLUSHING);
|
|
|
|
|
|
|
|
if (!stale)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The range of 0..256 is split by s->stale_* like this:
|
|
|
|
*
|
|
|
|
* pruned pruning valid set
|
|
|
|
* | | | |
|
|
|
|
* 0 v v v v 256
|
|
|
|
* |...........................+++++++++++........|
|
|
|
|
*
|
|
|
|
* We want to drop everything outside the marked range, thus
|
|
|
|
* (e->rte.stale_cycle < s->stale_valid) ||
|
|
|
|
* (e->rte.stale_cycle > s->stale_set))
|
|
|
|
* looks right.
|
|
|
|
*
|
|
|
|
* But the counters may wrap around, and in the following situation, that check would prune all the routes:
|
|
|
|
*
|
|
|
|
* set pruned pruning valid
|
|
|
|
* | | | |
|
|
|
|
* 0 v v v v 256
|
|
|
|
* |++++++..................................++++++|
|
|
|
|
*
|
|
|
|
* In that case, we want
|
|
|
|
* (e->rte.stale_cycle > s->stale_valid) ||
|
|
|
|
* (e->rte.stale_cycle < s->stale_set))
|
|
|
|
*
|
|
|
|
* Full logic table:
|
|
|
|
*
|
|
|
|
* permutation | result | (S < V) + (S < SC) + (SC < V)
|
|
|
|
* -----------------+----------+---------------------------------
|
|
|
|
* SC < V <= S | prune | 0 + 0 + 1 = 1
|
|
|
|
* S < SC < V | prune | 1 + 1 + 1 = 3
|
|
|
|
* V <= S < SC | prune | 0 + 1 + 0 = 1
|
|
|
|
* SC <= S < V | keep | 1 + 0 + 1 = 2
|
|
|
|
* V <= SC <= S | keep | 0 + 0 + 0 = 0
|
|
|
|
* S < V <= SC | keep | 1 + 1 + 0 = 2
|
|
|
|
*
|
|
|
|
* Now the following code hopefully makes sense.
|
|
|
|
*/
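/* Worked example with wrapped counters: stale_valid = 254, stale_set = 1.
 * A route refreshed at stale_cycle = 255 gives (S < V) + (S < SC) + (SC < V)
 * = 1 + 1 + 0 = 2 (even), so it is kept; a leftover route with
 * stale_cycle = 100 gives 1 + 1 + 1 = 3 (odd), so it is pruned. */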
|
|
|
|
|
|
|
|
int sv = (s->stale_set < s->stale_valid);
|
|
|
|
int ssc = (s->stale_set < e->rte.stale_cycle);
|
|
|
|
int scv = (e->rte.stale_cycle < s->stale_valid);
|
|
|
|
stale = (sv + ssc + scv) & 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* By the C standard, either the importer is flushing and stale is 1,
|
|
|
|
* or by the table above, the sum (sv + ssc + scv) is between 0 and 3, where even values
|
|
|
|
* say "keep" and odd values say "prune". */
|
|
|
|
|
|
|
|
if (stale)
|
2023-12-08 15:13:14 +00:00
|
|
|
{
|
2024-04-20 16:10:42 +00:00
|
|
|
/* Announce withdrawal */
|
2023-12-08 15:13:14 +00:00
|
|
|
struct netindex *i = RTE_GET_NETINDEX(&e->rte);
|
|
|
|
rte_recalculate(tab, e->rte.sender, i, n, NULL, e->rte.src);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
1998-05-20 11:54:33 +00:00
|
|
|
}
|
1999-02-13 19:15:28 +00:00
|
|
|
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/**
|
|
|
|
* rt_prune_table - prune a routing table
|
|
|
|
*
|
|
|
|
* The prune loop scans routing tables and removes routes belonging to flushing
|
|
|
|
* protocols, discarded routes and also stale network entries. It is called from
|
|
|
|
* rt_event(). The event is rescheduled if the current iteration does not finish
|
|
|
|
* the table. The pruning is directed by the prune state (@prune_state),
|
|
|
|
* specifying whether the prune cycle is scheduled or running, and there
|
|
|
|
* is also a persistent pruning position (@prune_index).
|
|
|
|
*
|
|
|
|
* The prune loop is also used for channel flushing. For this purpose, the
|
|
|
|
* channels to flush are marked before the iteration and notified after the
|
|
|
|
* iteration.
|
|
|
|
*/
|
|
|
|
static void
|
2024-02-29 13:04:05 +00:00
|
|
|
rt_prune_table(void *_tab)
|
2012-03-28 16:40:04 +00:00
|
|
|
{
|
2024-02-29 13:04:05 +00:00
|
|
|
RT_LOCK((rtable *) _tab, tab);
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
int limit = 2000;	/* Work budget for this run; the prune event reschedules itself when it runs out */
|
2021-06-21 15:07:31 +00:00
|
|
|
struct rt_import_hook *ih;
|
2016-01-26 10:48:58 +00:00
|
|
|
node *n, *x;
|
1999-02-13 19:15:28 +00:00
|
|
|
|
2022-08-30 17:40:58 +00:00
|
|
|
rt_trace(tab, D_STATES, "Pruning");
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
if (tab->prune_state == 0)
|
|
|
|
return;
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
if (tab->prune_state == 1)
|
|
|
|
{
|
|
|
|
/* Mark channels to flush */
|
2021-06-21 15:07:31 +00:00
|
|
|
WALK_LIST2(ih, n, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_STOP)
|
|
|
|
rt_set_import_state(ih, TIS_FLUSHING);
|
2022-07-12 08:36:10 +00:00
|
|
|
else if ((ih->stale_valid != ih->stale_pruning) && (ih->stale_pruning == ih->stale_pruned))
|
|
|
|
{
|
|
|
|
ih->stale_pruning = ih->stale_valid;
|
2023-09-14 12:40:33 +00:00
|
|
|
rt_refresh_trace(tab, ih, "table prune after refresh begin");
|
2022-07-12 08:36:10 +00:00
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2023-12-08 15:13:14 +00:00
|
|
|
tab->prune_index = 0;
|
2016-01-26 10:48:58 +00:00
|
|
|
tab->prune_state = 2;
|
2022-02-03 05:08:51 +00:00
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
tab->gc_counter = 0;
|
|
|
|
tab->gc_time = current_time();
|
|
|
|
|
2022-02-03 05:08:51 +00:00
|
|
|
if (tab->prune_trie)
|
|
|
|
{
|
|
|
|
/* Init prefix trie pruning */
|
|
|
|
tab->trie_new = f_new_trie(lp_new_default(tab->rp), 0);
|
2024-04-03 12:47:15 +00:00
|
|
|
tab->trie_new->ipv4 = atomic_load_explicit(&tab->trie, memory_order_relaxed)->ipv4;
|
2022-02-03 05:08:51 +00:00
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
}
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
u32 bs = atomic_load_explicit(&tab->routes_block_size, memory_order_relaxed);
|
|
|
|
net *routes = atomic_load_explicit(&tab->routes, memory_order_relaxed);
|
|
|
|
for (; tab->prune_index < bs; tab->prune_index++)
|
1999-02-13 19:15:28 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
net *n = &routes[tab->prune_index];
|
2023-12-08 15:13:14 +00:00
|
|
|
while ((limit > 0) && rt_prune_net(tab, n))
|
|
|
|
limit--;
|
|
|
|
|
2022-02-03 05:08:51 +00:00
|
|
|
if (limit <= 0)
|
|
|
|
{
|
2024-02-29 13:04:05 +00:00
|
|
|
ev_send_loop(tab->loop, tab->prune_event);
|
2022-02-03 05:08:51 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage *e = NET_BEST_ROUTE(tab, n);
|
|
|
|
if (tab->trie_new && e)
|
2018-07-31 16:40:38 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
const net_addr *a = e->rte.net;
|
2023-12-08 15:13:14 +00:00
|
|
|
trie_add_prefix(tab->trie_new, a, a->pxlen, a->pxlen);
|
2022-02-03 05:08:51 +00:00
|
|
|
limit--;
|
|
|
|
}
|
1999-02-13 19:15:28 +00:00
|
|
|
}
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
rt_trace(tab, D_EVENTS, "Prune done");
|
2024-05-02 09:39:34 +00:00
|
|
|
lfjour_announce_now(&tab->export_all.journal);
|
|
|
|
lfjour_announce_now(&tab->export_best.journal);
|
2022-09-01 09:17:35 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/* state change 2->0, 3->1 */
|
2022-09-12 16:27:01 +00:00
|
|
|
if (tab->prune_state &= 1)
|
2024-02-29 13:04:05 +00:00
|
|
|
ev_send_loop(tab->loop, tab->prune_event);
|
2014-03-20 13:07:12 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
struct f_trie *trie = atomic_load_explicit(&tab->trie, memory_order_relaxed);
|
2022-02-03 05:08:51 +00:00
|
|
|
if (tab->trie_new)
|
|
|
|
{
|
|
|
|
/* Finish prefix trie pruning */
|
2024-04-03 12:47:15 +00:00
|
|
|
atomic_store_explicit(&tab->trie, tab->trie_new, memory_order_release);
|
|
|
|
tab->trie_new = NULL;
|
|
|
|
tab->prune_trie = 0;
|
|
|
|
|
|
|
|
rt_trace(tab, D_EVENTS, "Trie prune done, new %p, old %p (%s)",
|
|
|
|
atomic_load_explicit(&tab->trie, memory_order_relaxed), trie, tab->trie_lock_count ? "still used" : "freeing");
|
2022-02-04 04:34:02 +00:00
|
|
|
|
|
|
|
if (!tab->trie_lock_count)
|
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
synchronize_rcu();
|
|
|
|
rfree(trie->lp);
|
2022-02-04 04:34:02 +00:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
ASSERT(!tab->trie_old);
|
2024-04-03 12:47:15 +00:00
|
|
|
tab->trie_old = trie;
|
2022-02-04 04:34:02 +00:00
|
|
|
tab->trie_old_lock_count = tab->trie_lock_count;
|
|
|
|
tab->trie_lock_count = 0;
|
|
|
|
}
|
2022-02-03 05:08:51 +00:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Schedule prefix trie pruning */
|
2024-04-03 12:47:15 +00:00
|
|
|
if (trie && !tab->trie_old && (trie->prefix_count > (2 * tab->net_count)))
|
2022-02-03 05:08:51 +00:00
|
|
|
{
|
|
|
|
/* state change 0->1, 2->3 */
|
|
|
|
tab->prune_state |= 1;
|
|
|
|
tab->prune_trie = 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/* Close flushed channels */
|
2021-06-21 15:07:31 +00:00
|
|
|
WALK_LIST2_DELSAFE(ih, n, x, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_FLUSHING)
|
|
|
|
{
|
2023-01-19 09:56:16 +00:00
|
|
|
DBG("flushing %s %s rr %u", ih->req->name, tab->name, tab->rr_counter);
|
2024-05-02 09:39:34 +00:00
|
|
|
ih->flush_seq = tab->export_all.journal.next_seq;
|
2021-09-27 11:04:16 +00:00
|
|
|
rt_set_import_state(ih, TIS_WAITING);
|
2022-09-26 10:09:14 +00:00
|
|
|
tab->rr_counter--;
|
2023-01-19 09:56:16 +00:00
|
|
|
tab->wait_counter++;
|
2024-05-02 09:39:34 +00:00
|
|
|
lfjour_schedule_cleanup(&tab->export_best.journal);
|
|
|
|
lfjour_schedule_cleanup(&tab->export_all.journal);
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
2022-07-12 08:36:10 +00:00
|
|
|
else if (ih->stale_pruning != ih->stale_pruned)
|
|
|
|
{
|
2024-04-09 17:14:30 +00:00
|
|
|
tab->rr_counter -= ((int) ih->stale_pruning - (int) ih->stale_pruned);
|
2022-07-12 08:36:10 +00:00
|
|
|
ih->stale_pruned = ih->stale_pruning;
|
2023-09-14 12:40:33 +00:00
|
|
|
rt_refresh_trace(tab, ih, "table prune after refresh end");
|
2022-07-12 08:36:10 +00:00
|
|
|
}
|
1999-05-17 20:14:52 +00:00
|
|
|
}
|
|
|
|
|
2022-07-28 11:50:59 +00:00
|
|
|
static void
|
|
|
|
rt_cork_release_hook(void *data UNUSED)
|
|
|
|
{
|
2024-05-16 08:22:19 +00:00
|
|
|
do birdloop_yield();
|
2022-07-28 11:50:59 +00:00
|
|
|
while (
|
|
|
|
!atomic_load_explicit(&rt_cork.active, memory_order_acquire) &&
|
|
|
|
ev_run_list(&rt_cork.queue)
|
|
|
|
);
|
|
|
|
}
|
|
|
|
|
2022-02-04 04:34:02 +00:00
|
|
|
/**
|
|
|
|
* rt_lock_trie - lock a prefix trie of a routing table
|
|
|
|
* @tab: routing table with prefix trie to be locked
|
|
|
|
*
|
|
|
|
* The prune loop may rebuild the prefix trie and invalidate f_trie_walk_state
|
|
|
|
* structures. Therefore, asynchronous walks should lock the prefix trie using
|
|
|
|
* this function. That allows the prune loop to rebuild the trie, but postpones
|
|
|
|
* its freeing until all walks are done (unlocked by rt_unlock_trie()).
|
|
|
|
*
|
|
|
|
* Returns the current trie that is locked; the value should be passed back to
|
|
|
|
* rt_unlock_trie() for unlocking.
|
|
|
|
*
|
|
|
|
*/
|
2024-04-03 12:47:15 +00:00
|
|
|
const struct f_trie *
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_lock_trie(struct rtable_private *tab)
|
2022-02-04 04:34:02 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
const struct f_trie *trie = atomic_load_explicit(&tab->trie, memory_order_relaxed);
|
|
|
|
ASSERT(trie);
|
2022-02-04 04:34:02 +00:00
|
|
|
|
|
|
|
tab->trie_lock_count++;
|
2024-04-03 12:47:15 +00:00
|
|
|
return trie;
|
2022-02-04 04:34:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* rt_unlock_trie - unlock a prefix trie of a routing table
|
|
|
|
* @tab: routing table with prefix trie to be locked
|
|
|
|
* @trie: value returned by matching rt_lock_trie()
|
|
|
|
*
|
|
|
|
* Called for a trie locked by rt_lock_trie() after the walk over the trie is done.
|
|
|
|
* It may free the trie and schedule next trie pruning.
|
|
|
|
*/
|
|
|
|
void
|
2024-04-03 12:47:15 +00:00
|
|
|
rt_unlock_trie(struct rtable_private *tab, const struct f_trie *trie)
|
2022-02-04 04:34:02 +00:00
|
|
|
{
|
|
|
|
ASSERT(trie);
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
const struct f_trie *tab_trie = atomic_load_explicit(&tab->trie, memory_order_relaxed);
|
|
|
|
|
|
|
|
if (trie == tab_trie)
|
2022-02-04 04:34:02 +00:00
|
|
|
{
|
|
|
|
/* Unlock the current prefix trie */
|
|
|
|
ASSERT(tab->trie_lock_count);
|
|
|
|
tab->trie_lock_count--;
|
|
|
|
}
|
|
|
|
else if (trie == tab->trie_old)
|
|
|
|
{
|
|
|
|
/* Unlock the old prefix trie */
|
|
|
|
ASSERT(tab->trie_old_lock_count);
|
|
|
|
tab->trie_old_lock_count--;
|
|
|
|
|
|
|
|
/* Free old prefix trie that is no longer needed */
|
|
|
|
if (!tab->trie_old_lock_count)
|
|
|
|
{
|
|
|
|
rfree(tab->trie_old->lp);
|
|
|
|
tab->trie_old = NULL;
|
|
|
|
|
|
|
|
/* Kick prefix trie pruning that was postponed */
|
2024-04-03 12:47:15 +00:00
|
|
|
if (tab_trie && (tab_trie->prefix_count > (2 * tab->net_count)))
|
2022-02-04 04:34:02 +00:00
|
|
|
{
|
|
|
|
tab->prune_trie = 1;
|
2023-01-19 09:56:16 +00:00
|
|
|
rt_kick_prune_timer(tab);
|
2022-02-04 04:34:02 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
log(L_BUG "Invalid arg to rt_unlock_trie()");
|
|
|
|
}
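As a usage illustration of the pair above, the following hypothetical caller (not part of this file) shows the intended bracketing of an asynchronous trie walk. It assumes the TRIE_WALK/TRIE_WALK_END macros and the rtable_private type used elsewhere in this file, and that the caller already holds the table lock when calling the lock/unlock functions.

/* Hypothetical example only: pin the trie, walk it, then release it. */
static void
example_walk_table_prefixes(struct rtable_private *tab, const net_addr *root)
{
  /* Pin the current trie; the prune loop may rebuild tab->trie meanwhile,
   * but must not free this one until we unlock it. */
  const struct f_trie *trie = rt_lock_trie(tab);

  TRIE_WALK(trie, subnet, root)
  {
    /* ... inspect &subnet here, possibly spread over several events ... */
  }
  TRIE_WALK_END;

  /* Return the same pointer; this may free an old trie and kick pruning. */
  rt_unlock_trie(tab, trie);
}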
|
|
|
|
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
void
|
|
|
|
rt_preconfig(struct config *c)
|
|
|
|
{
|
|
|
|
init_list(&c->tables);
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2023-10-27 16:29:31 +00:00
|
|
|
c->def_tables[NET_IP4] = cf_define_symbol(c, cf_get_symbol(c, "master4"), SYM_TABLE, table, NULL);
|
|
|
|
c->def_tables[NET_IP6] = cf_define_symbol(c, cf_get_symbol(c, "master6"), SYM_TABLE, table, NULL);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
void
|
|
|
|
rt_postconfig(struct config *c)
|
|
|
|
{
|
|
|
|
uint num_tables = list_length(&c->tables);
|
|
|
|
btime def_gc_period = 400 MS * num_tables;
|
|
|
|
def_gc_period = MAX(def_gc_period, 10 S);
|
|
|
|
def_gc_period = MIN(def_gc_period, 600 S);
|
|
|
|
|
|
|
|
struct rtable_config *rc;
|
|
|
|
WALK_LIST(rc, c->tables)
|
|
|
|
if (rc->gc_period == (uint) -1)
|
|
|
|
rc->gc_period = (uint) def_gc_period;
|
2022-09-01 12:21:56 +00:00
|
|
|
|
|
|
|
for (uint net_type = 0; net_type < NET_MAX; net_type++)
|
|
|
|
if (c->def_tables[net_type] && !c->def_tables[net_type]->table)
|
|
|
|
{
|
|
|
|
c->def_tables[net_type]->class = SYM_VOID;
|
|
|
|
c->def_tables[net_type] = NULL;
|
|
|
|
}
|
2022-06-04 15:34:57 +00:00
|
|
|
}
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/*
|
2010-07-05 15:50:19 +00:00
|
|
|
* Some functions for handling internal next hop updates
|
|
|
|
* triggered by rt_schedule_nhu().
|
|
|
|
*/
|
|
|
|
|
2017-03-22 14:00:07 +00:00
|
|
|
void
|
2022-09-07 11:54:20 +00:00
|
|
|
ea_set_hostentry(ea_list **to, rtable *dep, rtable *src, ip_addr gw, ip_addr ll, u32 lnum, u32 labels[lnum])
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2024-01-30 22:13:49 +00:00
|
|
|
struct {
|
|
|
|
struct hostentry_adata head;
|
2024-02-22 10:38:13 +00:00
|
|
|
u32 label_space[];
|
|
|
|
} *h;
|
|
|
|
u32 sz = sizeof *h + lnum * sizeof(u32);
|
|
|
|
h = alloca(sz);
|
|
|
|
memset(h, 0, sz);
|
2022-05-15 13:53:35 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED(src, tab)
|
2024-02-22 10:38:13 +00:00
|
|
|
h->head.he = rt_get_hostentry(tab, gw, ll, dep);
|
2024-01-30 22:13:49 +00:00
|
|
|
|
2024-02-22 10:38:13 +00:00
|
|
|
memcpy(h->head.labels, labels, lnum * sizeof(u32));
|
2022-05-15 13:53:35 +00:00
|
|
|
|
2024-02-22 10:38:13 +00:00
|
|
|
ea_set_attr_data(to, &ea_gen_hostentry, 0, h->head.ad.data, (byte *) &h->head.labels[lnum] - h->head.ad.data);
|
2022-05-15 13:53:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static void
|
2024-04-04 09:38:52 +00:00
|
|
|
rta_apply_hostentry(ea_list **to, struct hostentry_adata *head)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
u32 *labels = head->labels;
|
|
|
|
u32 lnum = (u32 *) (head->ad.data + head->ad.length) - labels;
|
2024-04-04 09:38:52 +00:00
|
|
|
struct hostentry *he = head->he;
|
2022-05-15 13:53:35 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
rcu_read_lock();
|
|
|
|
u32 version = atomic_load_explicit(&he->version, memory_order_acquire);
|
2016-08-09 12:47:51 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
while (1)
|
2016-08-09 12:47:51 +00:00
|
|
|
{
|
2024-04-04 09:38:52 +00:00
|
|
|
if (version & 1)
|
|
|
|
{
|
|
|
|
rcu_read_unlock();
|
|
|
|
birdloop_yield();
|
|
|
|
rcu_read_lock();
|
|
|
|
version = atomic_load_explicit(&he->version, memory_order_acquire);
|
|
|
|
continue;
|
|
|
|
}
|
2016-08-09 12:47:51 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/* Jump-away block for applying the actual attributes */
|
|
|
|
do {
|
|
|
|
ea_set_attr_u32(to, &ea_gen_igp_metric, 0, he->igp_metric);
|
2022-05-15 16:09:30 +00:00
|
|
|
|
2024-05-31 07:47:56 +00:00
|
|
|
ea_list *src = atomic_load_explicit(&he->src, memory_order_acquire);
|
|
|
|
if (!src)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
|
|
|
ea_set_dest(to, 0, RTD_UNREACHABLE);
|
|
|
|
break;
|
|
|
|
}
|
2022-05-15 16:09:30 +00:00
|
|
|
|
2024-05-31 07:47:56 +00:00
|
|
|
eattr *he_nh_ea = ea_find(src, &ea_gen_nexthop);
|
2024-05-02 09:39:34 +00:00
|
|
|
ASSERT_DIE(he_nh_ea);
|
2017-03-17 14:48:09 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
struct nexthop_adata *nhad = (struct nexthop_adata *) he_nh_ea->u.ptr;
|
|
|
|
int idest = nhea_dest(he_nh_ea);
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if ((idest != RTD_UNICAST) ||
|
|
|
|
!lnum && he->nexthop_linkable)
|
|
|
|
{
|
|
|
|
/* Just link the nexthop chain, no label append happens. */
|
2024-05-31 07:47:56 +00:00
|
|
|
ea_copy_attr(to, src, &ea_gen_nexthop);
|
2024-05-02 09:39:34 +00:00
|
|
|
break;
|
|
|
|
}
|
2017-02-24 13:05:11 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
uint total_size = OFFSETOF(struct nexthop_adata, nh);
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
NEXTHOP_WALK(nh, nhad)
|
2024-04-04 09:38:52 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
if (nh->labels + lnum > MPLS_MAX_LABEL_STACK)
|
|
|
|
{
|
|
|
|
log(L_WARN "Sum of label stack sizes %d + %d = %d exceedes allowed maximum (%d)",
|
|
|
|
nh->labels, lnum, nh->labels + lnum, MPLS_MAX_LABEL_STACK);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
total_size += NEXTHOP_SIZE_CNT(nh->labels + lnum);
|
2024-04-04 09:38:52 +00:00
|
|
|
}
|
2022-05-15 16:09:30 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (total_size == OFFSETOF(struct nexthop_adata, nh))
|
|
|
|
{
|
|
|
|
log(L_WARN "No valid nexthop remaining, setting route unreachable");
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
struct nexthop_adata nha = {
|
|
|
|
.ad.length = NEXTHOP_DEST_SIZE,
|
|
|
|
.dest = RTD_UNREACHABLE,
|
|
|
|
};
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
ea_set_attr_data(to, &ea_gen_nexthop, 0, &nha.ad.data, nha.ad.length);
|
|
|
|
break;
|
|
|
|
}
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
struct nexthop_adata *new = (struct nexthop_adata *) tmp_alloc_adata(total_size);
|
|
|
|
struct nexthop *dest = &new->nh;
|
2019-10-10 13:25:36 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
NEXTHOP_WALK(nh, nhad)
|
|
|
|
{
|
|
|
|
if (nh->labels + lnum > MPLS_MAX_LABEL_STACK)
|
|
|
|
continue;
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
memcpy(dest, nh, NEXTHOP_SIZE(nh));
|
|
|
|
if (lnum)
|
|
|
|
{
|
|
|
|
memcpy(&(dest->label[dest->labels]), labels, lnum * sizeof labels[0]);
|
|
|
|
dest->labels += lnum;
|
|
|
|
}
|
2017-02-24 13:05:11 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (ipa_nonzero(nh->gw))
|
|
|
|
/* Router nexthop */
|
|
|
|
dest->flags = (dest->flags & RNF_ONLINK);
|
|
|
|
else if (!(nh->iface->flags & IF_MULTIACCESS) || (nh->iface->flags & IF_LOOPBACK))
|
|
|
|
dest->gw = IPA_NONE; /* PtP link - no need for nexthop */
|
|
|
|
else if (ipa_nonzero(he->link))
|
|
|
|
dest->gw = he->link; /* Device nexthop with link-local address known */
|
|
|
|
else
|
|
|
|
dest->gw = he->addr; /* Device nexthop with link-local address unknown */
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
dest = NEXTHOP_NEXT(dest);
|
|
|
|
}
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
/* Fix final length */
|
|
|
|
new->ad.length = (void *) dest - (void *) new->ad.data;
|
|
|
|
ea_set_attr(to, EA_LITERAL_DIRECT_ADATA(
|
|
|
|
&ea_gen_nexthop, 0, &new->ad));
|
2024-04-04 09:38:52 +00:00
|
|
|
}
|
2024-05-02 09:39:34 +00:00
|
|
|
while (0);
|
2022-05-15 16:09:30 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
/* Has the HE version changed? */
|
|
|
|
u32 end_version = atomic_load_explicit(&he->version, memory_order_acquire);
|
2022-05-15 16:09:30 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
/* Stayed stable, we can finalize the route */
|
|
|
|
if (end_version == version)
|
|
|
|
break;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
/* No, retry once again */
|
|
|
|
version = end_version;
|
|
|
|
}
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
rcu_read_unlock();
|
|
|
|
|
|
|
|
ea_set_attr_u32(to, &ea_gen_hostentry_version, 0, version);
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
static inline int
|
2023-07-03 18:38:24 +00:00
|
|
|
rt_next_hop_update_rte(const rte *old, rte *new)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2024-04-04 09:38:52 +00:00
|
|
|
eattr *hev = ea_find(old->attrs, &ea_gen_hostentry_version);
|
|
|
|
if (!hev)
|
|
|
|
return 0;
|
|
|
|
u32 last_version = hev->u.data;
|
|
|
|
|
|
|
|
eattr *heea = ea_find(old->attrs, &ea_gen_hostentry);
|
|
|
|
ASSERT_DIE(heea);
|
|
|
|
struct hostentry_adata *head = (struct hostentry_adata *) heea->u.ptr;
|
|
|
|
|
|
|
|
u32 current_version = atomic_load_explicit(&head->he->version, memory_order_acquire);
|
|
|
|
if (current_version == last_version)
|
2022-09-06 17:38:40 +00:00
|
|
|
return 0;
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
*new = *old;
|
2024-04-04 10:01:35 +00:00
|
|
|
new->attrs = ea_strip_to(new->attrs, BIT32_ALL(EALS_PREIMPORT, EALS_FILTERED));
|
2024-04-04 09:38:52 +00:00
|
|
|
rta_apply_hostentry(&new->attrs, head);
|
2022-09-06 17:38:40 +00:00
|
|
|
return 1;
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-05-31 10:51:34 +00:00
|
|
|
static inline void
|
|
|
|
rt_next_hop_resolve_rte(rte *r)
|
|
|
|
{
|
2022-06-08 13:31:28 +00:00
|
|
|
eattr *heea = ea_find(r->attrs, &ea_gen_hostentry);
|
2022-05-31 10:51:34 +00:00
|
|
|
if (!heea)
|
|
|
|
return;
|
|
|
|
|
2024-04-04 09:38:52 +00:00
|
|
|
rta_apply_hostentry(&r->attrs, (struct hostentry_adata *) heea->u.ptr);
|
2022-05-31 10:51:34 +00:00
|
|
|
}
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_BGP
|
|
|
|
|
|
|
|
static inline int
|
|
|
|
net_flow_has_dst_prefix(const net_addr *n)
|
|
|
|
{
|
|
|
|
ASSUME(net_is_flow(n));
|
|
|
|
|
|
|
|
if (n->pxlen)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
if (n->type == NET_FLOW4)
|
|
|
|
{
|
|
|
|
const net_addr_flow4 *n4 = (void *) n;
|
|
|
|
return (n4->length > sizeof(net_addr_flow4)) && (n4->data[0] == FLOW_TYPE_DST_PREFIX);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
const net_addr_flow6 *n6 = (void *) n;
|
|
|
|
return (n6->length > sizeof(net_addr_flow6)) && (n6->data[0] == FLOW_TYPE_DST_PREFIX);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int
|
2022-05-30 10:03:03 +00:00
|
|
|
rta_as_path_is_empty(ea_list *a)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *e = ea_find(a, "bgp_path");
|
2021-12-20 19:25:35 +00:00
|
|
|
return !e || (as_path_getlen(e->u.ptr) == 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u32
|
2022-05-30 10:03:03 +00:00
|
|
|
rta_get_first_asn(ea_list *a)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *e = ea_find(a, "bgp_path");
|
2021-12-20 19:25:35 +00:00
|
|
|
u32 asn;
|
|
|
|
|
|
|
|
return (e && as_path_get_first_regular(e->u.ptr, &asn)) ? asn : 0;
|
|
|
|
}
|
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
static inline enum flowspec_valid
|
2024-04-03 12:47:15 +00:00
|
|
|
rt_flowspec_check(rtable *tab_ip, struct rtable_private *tab_flow, const net_addr *n, ea_list *a, int interior)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
|
|
|
ASSERT(rt_is_ip(tab_ip));
|
2024-04-03 12:47:15 +00:00
|
|
|
ASSERT(rt_is_flow(RT_PUB(tab_flow)));
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* RFC 8955 6. a) Flowspec has defined dst prefix */
|
|
|
|
if (!net_flow_has_dst_prefix(n))
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* RFC 9117 4.1. Accept when the AS_PATH is empty (interior routes) */
|
|
|
|
if (interior && rta_as_path_is_empty(a))
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_VALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
|
|
|
|
/* RFC 8955 6. b) Flowspec and its best-match route have the same originator */
|
|
|
|
|
|
|
|
/* Find flowspec dst prefix */
|
|
|
|
net_addr dst;
|
|
|
|
if (n->type == NET_FLOW4)
|
|
|
|
net_fill_ip4(&dst, net4_prefix(n), net4_pxlen(n));
|
|
|
|
else
|
|
|
|
net_fill_ip6(&dst, net6_prefix(n), net6_pxlen(n));
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
rte rb = {};
|
2024-04-03 12:47:15 +00:00
|
|
|
|
|
|
|
RT_READ(tab_ip, tip);
|
|
|
|
const struct f_trie *ip_trie = atomic_load_explicit(&tip->t->trie, memory_order_relaxed);
|
|
|
|
ASSERT_DIE(ip_trie);
|
|
|
|
|
|
|
|
/* Find best-match BGP unicast route for flowspec dst prefix */
|
|
|
|
net *nb = net_route(tip, &dst);
|
|
|
|
if (nb)
|
|
|
|
rb = RTE_COPY_VALID(RTE_OR_NULL(NET_READ_BEST_ROUTE(tip, nb)));
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* Register prefix to trie for tracking further changes */
|
|
|
|
int max_pxlen = (n->type == NET_FLOW4) ? IP4_MAX_PREFIX_LENGTH : IP6_MAX_PREFIX_LENGTH;
|
2024-04-03 12:47:15 +00:00
|
|
|
trie_add_prefix(tab_flow->flowspec_trie, &dst, (rb.net ? rb.net->pxlen : 0), max_pxlen);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* No best-match BGP route -> no flowspec */
|
2022-09-07 11:54:20 +00:00
|
|
|
if (!rb.attrs || (rt_get_source_attr(&rb) != RTS_BGP))
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* Find ORIGINATOR_ID values */
|
2022-05-30 10:03:03 +00:00
|
|
|
u32 orig_a = ea_get_int(a, "bgp_originator_id", 0);
|
2022-09-07 11:54:20 +00:00
|
|
|
u32 orig_b = ea_get_int(rb.attrs, "bgp_originator_id", 0);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* Originator is either ORIGINATOR_ID (if present), or BGP neighbor address (if not) */
|
2022-04-20 11:56:04 +00:00
|
|
|
if ((orig_a != orig_b) || (!orig_a && !orig_b && !ipa_equal(
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_get_ip(a, &ea_gen_from, IPA_NONE),
|
2022-09-07 11:54:20 +00:00
|
|
|
ea_get_ip(rb.attrs, &ea_gen_from, IPA_NONE)
|
2022-04-20 11:56:04 +00:00
|
|
|
)))
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
|
|
|
|
/* Find ASN of the best-match route, for use in next checks */
|
2022-09-07 11:54:20 +00:00
|
|
|
u32 asn_b = rta_get_first_asn(rb.attrs);
|
2021-12-20 19:25:35 +00:00
|
|
|
if (!asn_b)
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* RFC 9117 4.2. For EBGP, flowspec and its best-match route are from the same AS */
|
|
|
|
if (!interior && (rta_get_first_asn(a) != asn_b))
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
/* RFC 8955 6. c) More-specific routes are from the same AS as the best-match route */
|
2024-04-03 12:47:15 +00:00
|
|
|
NH_LOCK(tip->t->netindex, nh);
|
|
|
|
|
|
|
|
TRIE_WALK(ip_trie, subnet, &dst)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
net *nc = net_find_valid(tip, nh, &subnet);
|
|
|
|
if (!nc)
|
|
|
|
continue;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage *rs = NET_READ_BEST_ROUTE(tip, nc);
|
|
|
|
const rte *rc = &rs->rte;
|
|
|
|
if (rt_get_source_attr(rc) != RTS_BGP)
|
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
if (rta_get_first_asn(rc->attrs) != asn_b)
|
|
|
|
return FLOWSPEC_INVALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
2024-04-03 12:47:15 +00:00
|
|
|
TRIE_WALK_END;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
return FLOWSPEC_VALID;
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#endif /* CONFIG_BGP */
|
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
static int
|
2024-04-03 12:47:15 +00:00
|
|
|
rt_flowspec_update_rte(struct rtable_private *tab, const rte *r, rte *new)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
|
|
|
#ifdef CONFIG_BGP
|
2022-06-29 10:51:07 +00:00
|
|
|
if (r->generation || (rt_get_source_attr(r) != RTS_BGP))
|
2022-09-06 17:38:40 +00:00
|
|
|
return 0;
|
2022-02-11 21:29:13 +00:00
|
|
|
|
2022-06-07 10:18:23 +00:00
|
|
|
struct bgp_channel *bc = (struct bgp_channel *) SKIP_BACK(struct channel, in_req, r->sender->req);
|
2022-02-11 21:29:13 +00:00
|
|
|
if (!bc->base_table)
|
2022-09-06 17:38:40 +00:00
|
|
|
return 0;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct bgp_proto, p, p, bc->c.proto);
|
2022-06-08 09:47:49 +00:00
|
|
|
|
|
|
|
enum flowspec_valid old = rt_get_flowspec_valid(r),
|
2022-09-06 17:38:40 +00:00
|
|
|
valid = rt_flowspec_check(bc->base_table, tab, r->net, r->attrs, p->is_interior);
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
if (old == valid)
|
2022-09-06 17:38:40 +00:00
|
|
|
return 0;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
*new = *r;
|
2024-04-04 10:01:35 +00:00
|
|
|
new->attrs = ea_strip_to(new->attrs, BIT32_ALL(EALS_PREIMPORT, EALS_FILTERED));
|
2022-09-06 17:38:40 +00:00
|
|
|
ea_set_attr_u32(&new->attrs, &ea_gen_flowspec_valid, 0, valid);
|
|
|
|
return 1;
|
2021-12-20 19:25:35 +00:00
|
|
|
#else
|
2022-09-06 17:38:40 +00:00
|
|
|
return 0;
|
2021-12-20 19:25:35 +00:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2022-06-07 10:18:23 +00:00
|
|
|
static inline void
|
|
|
|
rt_flowspec_resolve_rte(rte *r, struct channel *c)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_BGP
|
2022-06-08 09:47:49 +00:00
|
|
|
enum flowspec_valid valid, old = rt_get_flowspec_valid(r);
|
2022-06-07 10:18:23 +00:00
|
|
|
struct bgp_channel *bc = (struct bgp_channel *) c;
|
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if ( (rt_get_source_attr(r) == RTS_BGP)
|
2023-09-14 13:21:53 +00:00
|
|
|
&& (c->class == &channel_bgp)
|
2022-06-08 09:47:49 +00:00
|
|
|
&& (bc->base_table))
|
|
|
|
{
|
2024-04-26 10:14:33 +00:00
|
|
|
SKIP_BACK_DECLARE(struct bgp_proto, p, p, bc->c.proto);
|
2024-04-03 12:47:15 +00:00
|
|
|
RT_LOCKED(c->in_req.hook->table, tab)
|
|
|
|
valid = rt_flowspec_check(
|
|
|
|
bc->base_table, tab,
|
|
|
|
r->net, r->attrs, p->is_interior);
|
2022-06-08 09:47:49 +00:00
|
|
|
}
|
|
|
|
else
|
|
|
|
valid = FLOWSPEC_UNKNOWN;
|
2022-06-07 10:18:23 +00:00
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if (valid == old)
|
2022-06-07 10:18:23 +00:00
|
|
|
return;
|
|
|
|
|
2022-06-08 09:47:49 +00:00
|
|
|
if (valid == FLOWSPEC_UNKNOWN)
|
2022-06-08 13:31:28 +00:00
|
|
|
ea_unset_attr(&r->attrs, 0, &ea_gen_flowspec_valid);
|
2022-06-08 09:47:49 +00:00
|
|
|
else
|
2022-06-08 13:31:28 +00:00
|
|
|
ea_set_attr_u32(&r->attrs, &ea_gen_flowspec_valid, 0, valid);
|
2022-06-07 10:18:23 +00:00
|
|
|
#endif
|
|
|
|
}
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
static inline int
|
2023-12-08 15:13:14 +00:00
|
|
|
rt_next_hop_update_net(struct rtable_private *tab, struct netindex *ni, net *n)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-09-06 17:38:40 +00:00
|
|
|
uint count = 0;
|
2023-12-08 15:13:14 +00:00
|
|
|
int is_flow = net_val_match(tab->addr_type, NB_FLOW);
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage *old_best = NET_BEST_ROUTE(tab, n);
|
2010-07-05 15:50:19 +00:00
|
|
|
if (!old_best)
|
|
|
|
return 0;
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
NET_WALK_ROUTES(tab, n, ep, e)
|
2022-09-06 17:38:40 +00:00
|
|
|
count++;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2021-02-25 20:52:49 +00:00
|
|
|
if (!count)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
struct rte_multiupdate {
|
2022-09-06 17:38:40 +00:00
|
|
|
struct rte_storage *old, *new_stored;
|
|
|
|
rte new;
|
|
|
|
} *updates = tmp_allocz(sizeof(struct rte_multiupdate) * (count+1));
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
uint pos = 0;
|
2024-04-03 12:47:15 +00:00
|
|
|
NET_WALK_ROUTES(tab, n, ep, e)
|
2022-09-06 17:38:40 +00:00
|
|
|
updates[pos++].old = e;
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
uint mod = 0;
|
|
|
|
if (is_flow)
|
|
|
|
for (uint i = 0; i < pos; i++)
|
2024-04-03 12:47:15 +00:00
|
|
|
mod += rt_flowspec_update_rte(tab, &updates[i].old->rte, &updates[i].new);
|
2022-06-07 10:18:23 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
else
|
|
|
|
for (uint i = 0; i < pos; i++)
|
|
|
|
mod += rt_next_hop_update_rte(&updates[i].old->rte, &updates[i].new);
|
|
|
|
|
|
|
|
if (!mod)
|
|
|
|
return 0;
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* We add a spinlock sentinel to the beginning */
|
|
|
|
struct rte_storage local_sentinel = {
|
|
|
|
.flags = REF_OBSOLETE,
|
|
|
|
.next = old_best,
|
|
|
|
};
|
|
|
|
atomic_store_explicit(&n->routes, &local_sentinel, memory_order_release);
|
2011-12-22 12:20:29 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Now we mark all the old routes obsolete */
|
2022-09-06 17:38:40 +00:00
|
|
|
for (uint i = 0; i < pos; i++)
|
2024-04-03 12:47:15 +00:00
|
|
|
if (updates[i].new.attrs)
|
|
|
|
updates[i].old->flags |= REF_OBSOLETE;
|
|
|
|
|
|
|
|
/* Wait for readers */
|
|
|
|
synchronize_rcu();
|
2022-09-06 17:38:40 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* And now we go backwards to keep the list properly linked */
|
|
|
|
struct rte_storage *next = NULL;
|
|
|
|
for (int i = pos - 1; i >= 0; i--)
|
|
|
|
{
|
|
|
|
struct rte_storage *this;
|
2022-09-07 11:54:20 +00:00
|
|
|
if (updates[i].new.attrs)
|
2024-03-13 12:46:16 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
rte *new = &updates[i].new;
|
|
|
|
new->lastmod = current_time();
|
|
|
|
new->id = hmap_first_zero(&tab->id_map);
|
|
|
|
hmap_set(&tab->id_map, new->id);
|
|
|
|
this = updates[i].new_stored = rte_store(new, ni, tab);
|
2024-03-13 12:46:16 +00:00
|
|
|
}
|
2022-09-07 11:54:20 +00:00
|
|
|
else
|
2024-04-03 12:47:15 +00:00
|
|
|
this = updates[i].old;
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
atomic_store_explicit(&this->next, next, memory_order_release);
|
|
|
|
next = this;
|
2022-09-06 17:38:40 +00:00
|
|
|
}
|
2024-04-03 12:47:15 +00:00
|
|
|
|
|
|
|
/* Add behind the sentinel */
|
|
|
|
atomic_store_explicit(&local_sentinel.next, next, memory_order_release);
|
2022-09-06 17:38:40 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
/* Call the pre-comparison hooks */
|
|
|
|
for (uint i = 0; i < pos; i++)
|
|
|
|
if (updates[i].new_stored)
|
|
|
|
{
|
|
|
|
/* Not really an efficient way to compute this */
|
|
|
|
if (updates[i].old->rte.src->owner->rte_recalculate)
|
2023-07-03 18:38:24 +00:00
|
|
|
updates[i].old->rte.src->owner->rte_recalculate(tab, n, updates[i].new_stored, updates[i].old, old_best);
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
|
|
|
|
2011-12-22 12:20:29 +00:00
|
|
|
/* Find the new best route */
|
2024-04-03 12:47:15 +00:00
|
|
|
uint best_pos = 0;
|
|
|
|
struct rte_storage *new_best = updates[0].new_stored ?: updates[0].old;
|
|
|
|
|
|
|
|
for (uint i = 1; i < pos; i++)
|
|
|
|
{
|
|
|
|
struct rte_storage *s = updates[i].new_stored ?: updates[i].old;
|
|
|
|
if (rte_better(&s->rte, &new_best->rte))
|
2011-12-22 12:20:29 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
best_pos = i;
|
|
|
|
new_best = s;
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
2024-04-03 12:47:15 +00:00
|
|
|
}
|
2010-07-05 15:50:19 +00:00
|
|
|
|
|
|
|
/* Relink the new best route to the first position */
|
2024-04-03 12:47:15 +00:00
|
|
|
struct rte_storage * _Atomic *best_prev;
|
|
|
|
if (best_pos)
|
|
|
|
best_prev = &(updates[best_pos-1].new_stored ?: updates[best_pos-1].old)->next;
|
|
|
|
else
|
|
|
|
best_prev = &local_sentinel.next;
|
|
|
|
|
|
|
|
/* Unlink from the original place */
|
|
|
|
atomic_store_explicit(best_prev,
|
|
|
|
atomic_load_explicit(&new_best->next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
|
|
|
|
|
|
|
/* Link out */
|
|
|
|
atomic_store_explicit(&new_best->next,
|
|
|
|
atomic_load_explicit(&local_sentinel.next, memory_order_relaxed),
|
|
|
|
memory_order_release);
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2024-04-07 11:23:59 +00:00
|
|
|
/* Now we have to announce the routes the right way, to not cause any
|
|
|
|
* strange problems with consistency. */
|
|
|
|
|
|
|
|
ASSERT_DIE(updates[0].old == old_best);
|
|
|
|
|
|
|
|
/* Find new best route original position */
|
|
|
|
uint nbpos = ~0;
|
|
|
|
for (uint i=0; i<count; i++)
|
2024-04-03 12:47:15 +00:00
|
|
|
if ((updates[i].new_stored == new_best) || (updates[i].old == new_best))
|
2024-04-07 11:23:59 +00:00
|
|
|
{
|
|
|
|
nbpos = i;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
ASSERT_DIE(~nbpos);
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
struct rt_pending_export *best_rpe =
|
|
|
|
(new_best != old_best) ?
|
|
|
|
rte_announce_to(&tab->export_best, &n->best, &new_best->rte, &old_best->rte)
|
|
|
|
: NULL;
|
2024-04-07 11:23:59 +00:00
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
uint total = 0;
|
2024-05-02 09:39:34 +00:00
|
|
|
u64 last_seq = 0;
|
2024-04-03 12:47:15 +00:00
|
|
|
|
2021-02-25 20:52:49 +00:00
|
|
|
/* Announce the changes */
|
2022-09-06 17:38:40 +00:00
|
|
|
for (uint i=0; i<count; i++)
|
2021-02-25 20:52:49 +00:00
|
|
|
{
|
2024-04-07 11:23:59 +00:00
|
|
|
/* Not changed at all */
|
2022-09-06 17:38:40 +00:00
|
|
|
if (!updates[i].new_stored)
|
|
|
|
continue;
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
struct rt_pending_export *this_rpe =
|
|
|
|
rte_announce_to(&tab->export_all, &n->all,
|
|
|
|
&updates[i].new_stored->rte, &updates[i].old->rte);
|
2024-04-07 11:23:59 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
ASSERT_DIE(this_rpe);
|
2024-04-03 12:47:15 +00:00
|
|
|
_Bool nb = (new_best->rte.src == updates[i].new.src), ob = (i == 0);
|
2024-05-02 09:39:34 +00:00
|
|
|
char info[96];
|
|
|
|
char best_indicator[2][2] = { { ' ', '+' }, { '-', '=' } };
|
|
|
|
bsnprintf(info, sizeof info, "autoupdated [%cbest]", best_indicator[ob][nb]);
|
|
|
|
|
|
|
|
rt_rte_trace_in(D_ROUTES, updates[i].new.sender->req, &updates[i].new, info);
|
|
|
|
|
|
|
|
/* Double announcement of this specific route */
|
|
|
|
if (ob && best_rpe)
|
|
|
|
{
|
|
|
|
ASSERT_DIE(best_rpe->it.old == &updates[i].old->rte);
|
|
|
|
ASSERT_DIE(!best_rpe->seq_all);
|
|
|
|
best_rpe->seq_all = this_rpe->it.seq;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
last_seq = this_rpe->it.seq;
|
2022-09-06 17:38:40 +00:00
|
|
|
|
|
|
|
total++;
|
2021-02-25 20:52:49 +00:00
|
|
|
}
|
2015-06-08 00:20:43 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (best_rpe && !best_rpe->seq_all)
|
|
|
|
{
|
|
|
|
ASSERT_DIE(!updates[0].new_stored);
|
|
|
|
best_rpe->seq_all = last_seq;
|
|
|
|
}
|
|
|
|
|
2024-04-03 12:47:15 +00:00
|
|
|
/* Now we can finally release the changes back into the table */
|
|
|
|
atomic_store_explicit(&n->routes, new_best, memory_order_release);
|
|
|
|
|
2022-09-06 17:38:40 +00:00
|
|
|
return total;
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2022-09-09 11:52:37 +00:00
|
|
|
rt_nhu_uncork(void *_tab)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
RT_LOCKED((rtable *) _tab, tab)
|
|
|
|
{
|
2022-09-09 11:52:37 +00:00
|
|
|
ASSERT_DIE(tab->nhu_corked);
|
2022-09-07 11:12:44 +00:00
|
|
|
ASSERT_DIE(tab->nhu_state == 0);
|
2022-09-09 11:52:37 +00:00
|
|
|
|
|
|
|
/* Reset the state */
|
2022-09-07 11:12:44 +00:00
|
|
|
tab->nhu_state = tab->nhu_corked;
|
|
|
|
tab->nhu_corked = 0;
|
|
|
|
rt_trace(tab, D_STATES, "Next hop updater uncorked");
|
2022-09-09 11:52:37 +00:00
|
|
|
|
2024-02-22 12:31:11 +00:00
|
|
|
ev_send_loop(tab->loop, tab->nhu_event);
|
2022-09-07 11:12:44 +00:00
|
|
|
}
|
2022-09-09 11:52:37 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2024-02-22 12:31:11 +00:00
|
|
|
rt_next_hop_update(void *_tab)
|
2022-09-09 11:52:37 +00:00
|
|
|
{
|
2024-02-22 12:31:11 +00:00
|
|
|
RT_LOCK((rtable *) _tab, tab);
|
|
|
|
|
2022-09-09 11:52:37 +00:00
|
|
|
ASSERT_DIE(birdloop_inside(tab->loop));
|
|
|
|
|
|
|
|
if (tab->nhu_corked)
|
|
|
|
return;
|
2022-09-07 11:12:44 +00:00
|
|
|
|
|
|
|
if (!tab->nhu_state)
|
2022-09-09 11:52:37 +00:00
|
|
|
return;
|
2022-09-07 11:12:44 +00:00
|
|
|
|
|
|
|
/* Check corkedness */
|
2022-09-09 11:52:37 +00:00
|
|
|
if (rt_cork_check(tab->nhu_uncork_event))
|
2022-09-07 11:12:44 +00:00
|
|
|
{
|
|
|
|
rt_trace(tab, D_STATES, "Next hop updater corked");
|
2024-02-29 13:04:05 +00:00
|
|
|
|
|
|
|
if (tab->nhu_state & NHU_RUNNING)
|
2024-05-02 09:39:34 +00:00
|
|
|
{
|
|
|
|
lfjour_announce_now(&tab->export_best.journal);
|
|
|
|
lfjour_announce_now(&tab->export_all.journal);
|
|
|
|
}
|
2022-09-07 11:12:44 +00:00
|
|
|
|
|
|
|
tab->nhu_corked = tab->nhu_state;
|
|
|
|
tab->nhu_state = 0;
|
2022-09-12 08:25:14 +00:00
|
|
|
return;
|
2022-09-07 11:12:44 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
int max_feed = 32;
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-07 11:12:44 +00:00
|
|
|
/* Initialize a new run */
|
2017-02-22 13:02:03 +00:00
|
|
|
if (tab->nhu_state == NHU_SCHEDULED)
|
2022-09-07 11:12:44 +00:00
|
|
|
{
|
2023-12-08 15:13:14 +00:00
|
|
|
tab->nhu_index = 0;
|
2022-09-07 11:12:44 +00:00
|
|
|
tab->nhu_state = NHU_RUNNING;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-09-07 11:12:44 +00:00
|
|
|
if (tab->flowspec_trie)
|
|
|
|
rt_flowspec_reset_trie(tab);
|
|
|
|
}
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2022-09-07 11:12:44 +00:00
|
|
|
/* Walk the fib one net after another */
|
2024-04-03 12:47:15 +00:00
|
|
|
u32 bs = atomic_load_explicit(&tab->routes_block_size, memory_order_relaxed);
|
|
|
|
net *routes = atomic_load_explicit(&tab->routes, memory_order_relaxed);
|
|
|
|
for (; tab->nhu_index < bs; tab->nhu_index++)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2024-04-03 12:47:15 +00:00
|
|
|
net *n = &routes[tab->nhu_index];
|
|
|
|
struct rte_storage *s = NET_BEST_ROUTE(tab, n);
|
|
|
|
if (!s)
|
2023-12-08 15:13:14 +00:00
|
|
|
continue;
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
if (max_feed <= 0)
|
|
|
|
{
|
2024-02-22 12:31:11 +00:00
|
|
|
ev_send_loop(tab->loop, tab->nhu_event);
|
2022-09-12 08:25:14 +00:00
|
|
|
return;
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
2023-12-08 15:13:14 +00:00
|
|
|
|
2023-05-01 13:10:53 +00:00
|
|
|
TMP_SAVED
|
2024-04-03 12:47:15 +00:00
|
|
|
max_feed -= rt_next_hop_update_net(tab, RTE_GET_NETINDEX(&s->rte), n);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-09-07 11:12:44 +00:00
|
|
|
/* Finished NHU, cleanup */
|
2022-09-01 09:17:35 +00:00
|
|
|
rt_trace(tab, D_EVENTS, "NHU done, scheduling export timer");
|
|
|
|
|
2017-02-22 13:02:03 +00:00
|
|
|
/* State change:
|
|
|
|
* NHU_DIRTY -> NHU_SCHEDULED
|
|
|
|
* NHU_RUNNING -> NHU_CLEAN
|
|
|
|
*/
|
2022-09-07 11:12:44 +00:00
|
|
|
if ((tab->nhu_state &= NHU_SCHEDULED) == NHU_SCHEDULED)
|
2024-02-22 12:31:11 +00:00
|
|
|
ev_send_loop(tab->loop, tab->nhu_event);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
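The two-bit state machine behind nhu_state can be summarized by the following sketch. The concrete values are an assumption here (derived from the DIRTY -> SCHEDULED and RUNNING -> CLEAN transition that the "nhu_state &= NHU_SCHEDULED" statement above implements), not copied from the headers.

/* Assumed encoding for illustration only: bit 0 = "another run requested",
 * bit 1 = "a run is in progress". */
enum nhu_state_sketch {
  SKETCH_NHU_CLEAN     = 0,  /* nothing to do */
  SKETCH_NHU_SCHEDULED = 1,  /* run requested */
  SKETCH_NHU_RUNNING   = 2,  /* run in progress */
  SKETCH_NHU_DIRTY     = 3,  /* run in progress and another one requested */
};

/* Requesting an update sets bit 0: CLEAN -> SCHEDULED, RUNNING -> DIRTY */
static inline unsigned nhu_sketch_request(unsigned s)
{ return s | SKETCH_NHU_SCHEDULED; }

/* Finishing a run keeps only bit 0: DIRTY -> SCHEDULED, RUNNING -> CLEAN,
 * which is what "tab->nhu_state &= NHU_SCHEDULED" does above. */
static inline unsigned nhu_sketch_finish(unsigned s)
{ return s & SKETCH_NHU_SCHEDULED; }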
|
|
|
|
|
2022-09-01 12:21:56 +00:00
|
|
|
void
|
|
|
|
rt_new_default_table(struct symbol *s)
|
|
|
|
{
|
|
|
|
for (uint addr_type = 0; addr_type < NET_MAX; addr_type++)
|
|
|
|
if (s == new_config->def_tables[addr_type])
|
|
|
|
{
|
2023-03-09 15:34:17 +00:00
|
|
|
ASSERT_DIE(!s->table);
|
2022-09-01 12:21:56 +00:00
|
|
|
s->table = rt_new_table(s, addr_type);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
bug("Requested an unknown new default table: %s", s->name);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct rtable_config *
|
|
|
|
rt_get_default_table(struct config *cf, uint addr_type)
|
|
|
|
{
|
|
|
|
struct symbol *ts = cf->def_tables[addr_type];
|
|
|
|
if (!ts)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
if (!ts->table)
|
|
|
|
rt_new_default_table(ts);
|
|
|
|
|
|
|
|
return ts->table;
|
|
|
|
}
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2000-03-04 22:21:06 +00:00
|
|
|
struct rtable_config *
|
2015-11-05 11:48:52 +00:00
|
|
|
rt_new_table(struct symbol *s, uint addr_type)
|
2000-03-04 22:21:06 +00:00
|
|
|
{
|
2023-03-09 15:34:17 +00:00
|
|
|
if (s->table)
|
|
|
|
cf_error("Duplicate configuration of table %s", s->name);
|
|
|
|
|
2000-03-04 22:21:06 +00:00
|
|
|
struct rtable_config *c = cfg_allocz(sizeof(struct rtable_config));
|
|
|
|
|
2022-09-01 12:21:56 +00:00
|
|
|
if (s == new_config->def_tables[addr_type])
|
|
|
|
s->table = c;
|
|
|
|
else
|
2023-10-27 16:29:31 +00:00
|
|
|
cf_define_symbol(new_config, s, SYM_TABLE, table, c);
|
2022-09-01 12:21:56 +00:00
|
|
|
|
2000-03-04 22:21:06 +00:00
|
|
|
c->name = s->name;
|
2015-11-05 11:48:52 +00:00
|
|
|
c->addr_type = addr_type;
|
2022-06-04 15:34:57 +00:00
|
|
|
c->gc_threshold = 1000;
|
|
|
|
c->gc_period = (uint) -1; /* set in rt_postconfig() */
|
2023-01-19 09:56:16 +00:00
|
|
|
c->cork_threshold.low = 1024;
|
|
|
|
c->cork_threshold.high = 8192;
|
2022-09-21 16:43:44 +00:00
|
|
|
c->export_settle = (struct settle_config) {
|
|
|
|
.min = 1 MS,
|
|
|
|
.max = 100 MS,
|
|
|
|
};
|
2022-09-26 10:09:14 +00:00
|
|
|
c->export_rr_settle = (struct settle_config) {
|
|
|
|
.min = 100 MS,
|
|
|
|
.max = 3 S,
|
|
|
|
};
|
2024-06-03 12:23:41 +00:00
|
|
|
c->roa_settle = (struct settle_config) {
|
|
|
|
.min = 1 S,
|
|
|
|
.max = 20 S,
|
|
|
|
};
|
2023-12-07 13:38:05 +00:00
|
|
|
c->debug = new_config->table_default_debug;
|
2016-01-26 10:48:58 +00:00
|
|
|
|
|
|
|
add_tail(&new_config->tables, &c->n);
|
|
|
|
|
|
|
|
/* First table of each type is kept as default */
|
|
|
|
if (! new_config->def_tables[addr_type])
|
2022-09-01 12:21:56 +00:00
|
|
|
new_config->def_tables[addr_type] = s;
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2000-03-04 22:21:06 +00:00
|
|
|
return c;
|
|
|
|
}
|
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_lock_table - lock a routing table
|
|
|
|
* @r: routing table to be locked
|
|
|
|
*
|
|
|
|
* Lock a routing table, because it's in use by a protocol,
|
|
|
|
* preventing it from being freed when it gets undefined in a new
|
|
|
|
* configuration.
|
|
|
|
*/
|
2022-09-07 11:54:20 +00:00
|
|
|
void
|
|
|
|
rt_lock_table_priv(struct rtable_private *r, const char *file, uint line)
|
1999-05-17 20:14:52 +00:00
|
|
|
{
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_trace(r, D_STATES, "Locked at %s:%d", file, line);
|
|
|
|
r->use_count++;
|
2000-01-16 16:44:50 +00:00
|
|
|
}
|
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_unlock_table - unlock a routing table
|
|
|
|
* @r: routing table to be unlocked
|
|
|
|
*
|
|
|
|
* Unlock a routing table formerly locked by rt_lock_table(),
|
|
|
|
* that is decrease its use count and delete it if it's scheduled
|
|
|
|
* for deletion by configuration changes.
|
|
|
|
*/
|
2022-09-07 11:54:20 +00:00
|
|
|
void
|
|
|
|
rt_unlock_table_priv(struct rtable_private *r, const char *file, uint line)
|
2000-01-16 16:44:50 +00:00
|
|
|
{
|
2022-09-07 13:06:22 +00:00
|
|
|
rt_trace(r, D_STATES, "Unlocked at %s:%d", file, line);
|
2000-01-16 16:44:50 +00:00
|
|
|
if (!--r->use_count && r->deleted)
|
2022-09-09 11:52:37 +00:00
|
|
|
/* Stop the service thread to finish this up */
|
2024-05-02 09:39:34 +00:00
|
|
|
ev_send_loop(r->loop, ev_new_init(r->rp, rt_shutdown, r));
|
2022-09-09 11:52:37 +00:00
|
|
|
}
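A hypothetical caller of the pair documented above might look like this. It assumes that the rt_lock_table()/rt_unlock_table() wrappers used elsewhere in this file expand to the _priv functions with __FILE__ and __LINE__, and it stands in for whatever a protocol does while attached to the table.

/* Hypothetical example only: keep the table alive while it is in use. */
static void
example_use_table(struct rtable_private *tab)
{
  rt_lock_table_priv(tab, __FILE__, __LINE__);	/* bump use_count */

  /* ... attach channels, register export/import requests, etc. ... */

  /* Dropping the last lock of a table scheduled for deletion triggers
   * the deferred shutdown via rt_shutdown(). */
  rt_unlock_table_priv(tab, __FILE__, __LINE__);
}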
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_shutdown(void *tab_)
|
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
rtable *t = tab_;
|
|
|
|
RT_LOCK(t, tab);
|
|
|
|
|
2024-06-03 12:23:41 +00:00
|
|
|
if (tab->roa_digest)
|
|
|
|
{
|
|
|
|
rtex_export_unsubscribe(&tab->roa_digest->req);
|
|
|
|
ASSERT_DIE(EMPTY_TLIST(lfjour_recipient, &tab->roa_digest->digest.recipients));
|
|
|
|
ev_postpone(&tab->roa_digest->event);
|
|
|
|
settle_cancel(&tab->roa_digest->settle);
|
|
|
|
}
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
rtex_export_unsubscribe(&tab->best_req);
|
|
|
|
|
|
|
|
rt_exporter_shutdown(&tab->export_best, NULL);
|
|
|
|
rt_exporter_shutdown(&tab->export_all, NULL);
|
|
|
|
|
|
|
|
birdloop_stop_self(t->loop, rt_delete, t);
|
2022-09-07 11:54:20 +00:00
|
|
|
}
|
2021-03-30 16:51:31 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
static void
|
|
|
|
rt_delete(void *tab_)
|
|
|
|
{
|
2023-04-21 13:26:06 +00:00
|
|
|
ASSERT_DIE(birdloop_inside(&main_birdloop));
|
2022-09-09 11:52:37 +00:00
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
/* We assume that nobody holds the table reference now as use_count is zero.
|
|
|
|
* Even so, the last holder may still hold the lock. Therefore we lock and
|
|
|
|
* unlock it the last time to be sure that nobody is there. */
|
2023-11-14 11:53:40 +00:00
|
|
|
struct rtable_private *tab = RT_LOCK_SIMPLE((rtable *) tab_);
|
2022-09-07 11:54:20 +00:00
|
|
|
struct config *conf = tab->deleted;
|
2023-04-21 13:26:06 +00:00
|
|
|
DOMAIN(rtable) dom = tab->lock;
|
2023-11-14 11:53:40 +00:00
|
|
|
RT_UNLOCK_SIMPLE(RT_PUB(tab));
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2023-04-22 19:20:19 +00:00
|
|
|
/* Everything is freed by freeing the loop */
|
2023-02-24 08:13:35 +00:00
|
|
|
birdloop_free(tab->loop);
|
2022-09-07 11:54:20 +00:00
|
|
|
config_del_obstacle(conf);
|
2022-09-09 11:52:37 +00:00
|
|
|
|
2023-04-21 13:26:06 +00:00
|
|
|
/* Also drop the domain */
|
|
|
|
DOMAIN_FREE(rtable, dom);
|
2000-01-16 16:44:50 +00:00
|
|
|
}
|
|
|
|
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2022-07-28 11:50:59 +00:00
|
|
|
static void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_check_cork_low(struct rtable_private *tab)
|
2022-07-28 11:50:59 +00:00
|
|
|
{
|
|
|
|
if (!tab->cork_active)
|
|
|
|
return;
|
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
if (tab->deleted ||
|
|
|
|
(lfjour_pending_items(&tab->export_best.journal) < tab->cork_threshold.low)
|
|
|
|
&& (lfjour_pending_items(&tab->export_all.journal) < tab->cork_threshold.low))
|
2022-07-28 11:50:59 +00:00
|
|
|
{
|
|
|
|
tab->cork_active = 0;
|
|
|
|
rt_cork_release();
|
|
|
|
|
2022-08-30 17:40:58 +00:00
|
|
|
rt_trace(tab, D_STATES, "Uncorked");
|
2022-07-28 11:50:59 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_check_cork_high(struct rtable_private *tab)
|
2022-07-28 11:50:59 +00:00
|
|
|
{
|
2024-05-02 09:39:34 +00:00
|
|
|
if (!tab->deleted && !tab->cork_active && (
|
|
|
|
(lfjour_pending_items(&tab->export_best.journal) >= tab->cork_threshold.high)
|
|
|
|
|| (lfjour_pending_items(&tab->export_all.journal) >= tab->cork_threshold.high)))
|
2022-07-28 11:50:59 +00:00
|
|
|
{
|
|
|
|
tab->cork_active = 1;
|
|
|
|
rt_cork_acquire();
|
2024-05-02 09:39:34 +00:00
|
|
|
lfjour_schedule_cleanup(&tab->export_best.journal);
|
|
|
|
lfjour_schedule_cleanup(&tab->export_all.journal);
|
2024-02-29 13:04:05 +00:00
|
|
|
// rt_export_used(&tab->exporter, tab->name, "corked");
|
2022-07-28 11:50:59 +00:00
|
|
|
|
2022-08-30 17:40:58 +00:00
|
|
|
rt_trace(tab, D_STATES, "Corked");
|
2022-07-28 11:50:59 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2021-12-22 03:32:26 +00:00
|
|
|
static int
|
2022-09-07 11:54:20 +00:00
|
|
|
rt_reconfigure(struct rtable_private *tab, struct rtable_config *new, struct rtable_config *old)
|
2021-12-22 03:32:26 +00:00
|
|
|
{
|
|
|
|
if ((new->addr_type != old->addr_type) ||
|
|
|
|
(new->sorted != old->sorted) ||
|
|
|
|
(new->trie_used != old->trie_used))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
DBG("\t%s: same\n", new->name);
|
2022-09-07 11:54:20 +00:00
|
|
|
new->table = RT_PUB(tab);
|
2021-12-22 03:32:26 +00:00
|
|
|
tab->name = new->name;
|
|
|
|
tab->config = new;
|
2023-12-07 13:38:05 +00:00
|
|
|
tab->debug = new->debug;
|
2024-05-02 09:39:34 +00:00
|
|
|
tab->export_all.trace_routes = tab->export_best.trace_routes = new->debug;
|
2021-12-22 03:32:26 +00:00
|
|
|
|
2022-08-31 12:01:59 +00:00
|
|
|
if (tab->hostcache)
|
|
|
|
tab->hostcache->req.trace_routes = new->debug;
|
|
|
|
|
2024-02-29 13:04:05 +00:00
|
|
|
WALK_TLIST(rt_flowspec_link, ln, &tab->flowspec_links)
|
|
|
|
ln->req.trace_routes = new->debug;
|
2022-08-31 14:04:36 +00:00
|
|
|
|
2022-07-28 11:50:59 +00:00
|
|
|
tab->cork_threshold = new->cork_threshold;
|
|
|
|
|
|
|
|
if (new->cork_threshold.high != old->cork_threshold.high)
|
|
|
|
rt_check_cork_high(tab);
|
|
|
|
|
|
|
|
if (new->cork_threshold.low != old->cork_threshold.low)
|
|
|
|
rt_check_cork_low(tab);
|
|
|
|
|
2024-06-03 12:23:41 +00:00
|
|
|
if (tab->roa_digest && (
|
|
|
|
(new->roa_settle.min != tab->roa_digest->settle.cf.min)
|
|
|
|
|| (new->roa_settle.max != tab->roa_digest->settle.cf.max)))
|
|
|
|
tab->roa_digest->settle.cf = new->roa_settle;
|
|
|
|
|
2021-12-22 03:32:26 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2018-03-18 12:48:47 +00:00
|
|
|
static struct rtable_config *
|
|
|
|
rt_find_table_config(struct config *cf, char *name)
|
|
|
|
{
|
|
|
|
struct symbol *sym = cf_find_symbol(cf, name);
|
2019-02-15 12:53:17 +00:00
|
|
|
return (sym && (sym->class == SYM_TABLE)) ? sym->table : NULL;
|
2018-03-18 12:48:47 +00:00
|
|
|
}
|
|
|
|
|
2000-06-01 17:12:19 +00:00
|
|
|
/**
|
|
|
|
* rt_commit - commit new routing table configuration
|
|
|
|
* @new: new configuration
|
|
|
|
* @old: original configuration or %NULL if it's boot time config
|
|
|
|
*
|
|
|
|
* Scan differences between @old and @new configuration and modify
|
|
|
|
* the routing tables according to these changes. If @new defines a
|
|
|
|
* previously unknown table, create it, if it omits a table existing
|
|
|
|
* in @old, schedule it for deletion (it gets deleted when all protocols
|
|
|
|
* disconnect from it by calling rt_unlock_table()), if it exists
|
|
|
|
* in both configurations, leave it unchanged.
|
|
|
|
*/
|
2000-01-16 16:44:50 +00:00
|
|
|
void
|
|
|
|
rt_commit(struct config *new, struct config *old)
|
|
|
|
{
|
|
|
|
struct rtable_config *o, *r;
|
1999-05-17 20:14:52 +00:00
|
|
|
|
2000-01-16 16:44:50 +00:00
|
|
|
DBG("rt_commit:\n");
|
|
|
|
if (old)
|
1999-05-17 20:14:52 +00:00
|
|
|
{
|
2000-01-16 16:44:50 +00:00
|
|
|
WALK_LIST(o, old->tables)
|
2024-03-05 13:48:37 +00:00
|
|
|
{
|
|
|
|
_Bool ok;
|
2023-11-14 11:53:40 +00:00
|
|
|
RT_LOCKED(o->table, tab)
|
2000-01-16 16:44:50 +00:00
|
|
|
{
|
2024-03-05 13:48:37 +00:00
|
|
|
r = tab->deleted ? NULL : rt_find_table_config(new, o->name);
|
|
|
|
ok = r && !new->shutdown && rt_reconfigure(tab, r, o);
|
|
|
|
}
|
2021-12-22 03:32:26 +00:00
|
|
|
|
2024-03-05 13:48:37 +00:00
|
|
|
if (ok)
|
|
|
|
continue;
|
2021-12-22 03:32:26 +00:00
|
|
|
|
2024-03-05 13:48:37 +00:00
|
|
|
birdloop_enter(o->table->loop);
|
|
|
|
RT_LOCKED(o->table, tab)
|
|
|
|
{
|
2021-12-22 03:32:26 +00:00
|
|
|
DBG("\t%s: deleted\n", o->name);
|
|
|
|
tab->deleted = old;
|
|
|
|
config_add_obstacle(old);
|
|
|
|
rt_lock_table(tab);
|
2022-08-31 12:01:59 +00:00
|
|
|
|
2022-10-06 15:51:32 +00:00
|
|
|
rt_check_cork_low(tab);
|
2022-09-07 11:54:20 +00:00
|
|
|
|
2024-03-05 13:48:37 +00:00
|
|
|
if (tab->hcu_event)
|
|
|
|
{
|
|
|
|
if (ev_get_list(tab->hcu_event) == &rt_cork.queue)
|
|
|
|
ev_postpone(tab->hcu_event);
|
2023-03-06 12:16:12 +00:00
|
|
|
|
2024-05-02 09:39:34 +00:00
|
|
|
rtex_export_unsubscribe(&tab->hostcache->req);
|
2024-03-05 13:48:37 +00:00
|
|
|
}
|
2024-05-02 09:39:34 +00:00
|
|
|
|
2024-03-05 13:48:37 +00:00
|
|
|
rt_unlock_table(tab);
|
2000-01-16 16:44:50 +00:00
|
|
|
}
|
2024-03-05 13:48:37 +00:00
|
|
|
birdloop_leave(o->table->loop);
|
|
|
|
}
|
1999-05-17 20:14:52 +00:00
|
|
|
}
|
2000-01-16 16:44:50 +00:00
|
|
|
|
|
|
|
WALK_LIST(r, new->tables)
|
|
|
|
if (!r->table)
|
|
|
|
{
|
2021-03-30 16:51:31 +00:00
|
|
|
r->table = rt_setup(rt_table_pool, r);
|
2000-01-16 16:44:50 +00:00
|
|
|
DBG("\t%s: created\n", r->name);
|
2021-03-30 16:51:31 +00:00
|
|
|
add_tail(&routing_tables, &r->table->n);
|
2000-01-16 16:44:50 +00:00
|
|
|
}
|
|
|
|
DBG("\tdone\n");
|
1999-05-17 20:14:52 +00:00
|
|
|
}
|
1999-12-01 15:10:21 +00:00
|
|
|
|
2019-08-13 16:22:07 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Hostcache
|
|
|
|
*/
|
|
|
|
|
2015-12-24 14:52:03 +00:00
|
|
|
static inline u32
|
2010-07-26 14:39:27 +00:00
|
|
|
hc_hash(ip_addr a, rtable *dep)
|
|
|
|
{
|
2015-12-24 14:52:03 +00:00
|
|
|
return ipa_hash(a) ^ ptr_hash(dep);
|
2010-07-26 14:39:27 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
hc_insert(struct hostcache *hc, struct hostentry *he)
|
|
|
|
{
|
2015-05-19 06:53:34 +00:00
|
|
|
uint k = he->hash_key >> hc->hash_shift;
|
2010-07-26 14:39:27 +00:00
|
|
|
he->next = hc->hash_table[k];
|
|
|
|
hc->hash_table[k] = he;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
hc_remove(struct hostcache *hc, struct hostentry *he)
|
|
|
|
{
|
|
|
|
struct hostentry **hep;
|
2015-05-19 06:53:34 +00:00
|
|
|
uint k = he->hash_key >> hc->hash_shift;
|
2010-07-26 14:39:27 +00:00
|
|
|
|
|
|
|
for (hep = &hc->hash_table[k]; *hep != he; hep = &(*hep)->next);
|
|
|
|
*hep = he->next;
|
|
|
|
}
|
|
|
|
|
|
|
|
#define HC_DEF_ORDER 10
|
|
|
|
#define HC_HI_MARK *4
|
|
|
|
#define HC_HI_STEP 2
|
|
|
|
#define HC_HI_ORDER 16 /* Must be at most 16 */
|
|
|
|
#define HC_LO_MARK /5
|
|
|
|
#define HC_LO_STEP 2
|
|
|
|
#define HC_LO_ORDER 10
|
|
|
|
|
|
|
|
static void
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_alloc_table(struct hostcache *hc, pool *p, unsigned order)
|
2010-07-26 14:39:27 +00:00
|
|
|
{
|
2016-10-14 13:37:04 +00:00
|
|
|
uint hsize = 1 << order;
|
2010-07-26 14:39:27 +00:00
|
|
|
hc->hash_order = order;
|
2015-12-24 14:52:03 +00:00
|
|
|
hc->hash_shift = 32 - order;
|
2016-10-14 13:37:04 +00:00
|
|
|
hc->hash_max = (order >= HC_HI_ORDER) ? ~0U : (hsize HC_HI_MARK);
|
|
|
|
hc->hash_min = (order <= HC_LO_ORDER) ? 0U : (hsize HC_LO_MARK);
|
2010-07-26 14:39:27 +00:00
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
hc->hash_table = mb_allocz(p, hsize * sizeof(struct hostentry *));
|
2010-07-26 14:39:27 +00:00
|
|
|
}
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
static void
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_resize(struct hostcache *hc, pool *p, unsigned new_order)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2010-07-26 14:39:27 +00:00
|
|
|
struct hostentry **old_table = hc->hash_table;
|
|
|
|
struct hostentry *he, *hen;
|
2016-10-14 13:37:04 +00:00
|
|
|
uint old_size = 1 << hc->hash_order;
|
|
|
|
uint i;
|
2010-07-26 14:39:27 +00:00
|
|
|
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_alloc_table(hc, p, new_order);
|
2010-07-26 14:39:27 +00:00
|
|
|
for (i = 0; i < old_size; i++)
|
|
|
|
for (he = old_table[i]; he != NULL; he=hen)
|
|
|
|
{
|
|
|
|
hen = he->next;
|
|
|
|
hc_insert(hc, he);
|
|
|
|
}
|
|
|
|
mb_free(old_table);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct hostentry *
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_new_hostentry(struct hostcache *hc, pool *p, ip_addr a, ip_addr ll, rtable *dep, unsigned k)
|
2010-07-26 14:39:27 +00:00
|
|
|
{
|
|
|
|
struct hostentry *he = sl_alloc(hc->slab);
|
|
|
|
|
2017-02-24 13:05:11 +00:00
|
|
|
*he = (struct hostentry) {
|
|
|
|
.addr = a,
|
|
|
|
.link = ll,
|
|
|
|
.tab = dep,
|
|
|
|
.hash_key = k,
|
|
|
|
};
|
2010-07-26 14:39:27 +00:00
|
|
|
|
|
|
|
add_tail(&hc->hostentries, &he->ln);
|
|
|
|
hc_insert(hc, he);
|
|
|
|
|
|
|
|
hc->hash_items++;
|
|
|
|
if (hc->hash_items > hc->hash_max)
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_resize(hc, p, hc->hash_order + HC_HI_STEP);
|
2010-07-26 14:39:27 +00:00
|
|
|
|
|
|
|
return he;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_delete_hostentry(struct hostcache *hc, pool *p, struct hostentry *he)
|
2010-07-26 14:39:27 +00:00
|
|
|
{
|
2024-05-31 07:47:56 +00:00
|
|
|
ea_free(atomic_load_explicit(&he->src, memory_order_relaxed));
|
2010-12-07 22:33:55 +00:00
|
|
|
|
2010-07-26 14:39:27 +00:00
|
|
|
rem_node(&he->ln);
|
|
|
|
hc_remove(hc, he);
|
2022-04-04 18:31:14 +00:00
|
|
|
sl_free(he);
|
2010-07-26 14:39:27 +00:00
|
|
|
|
|
|
|
hc->hash_items--;
|
|
|
|
if (hc->hash_items < hc->hash_min)
|
2021-03-30 16:51:31 +00:00
|
|
|
hc_resize(hc, p, hc->hash_order - HC_LO_STEP);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-08-31 12:01:59 +00:00
|
|
|
static void
|
|
|
|
hc_notify_dump_req(struct rt_export_request *req)
|
|
|
|
{
|
|
|
|
debug(" Table %s (%p)\n", req->name, req);
|
|
|
|
}
|
|
|
|
|
2022-09-23 07:58:00 +00:00
|
|
|
static void
hc_notify_export(void *_hc)
{
  struct hostcache *hc = _hc;

  RT_EXPORT_WALK(&hc->req, u)
  {
    const net_addr *n = NULL;
    switch (u->kind)
    {
      case RT_EXPORT_STOP:
        bug("Main table export stopped");
        break;

      case RT_EXPORT_FEED:
        if (u->feed->count_routes)
          n = u->feed->block[0].net;
        break;

      case RT_EXPORT_UPDATE:
      {
        /* Conflate the following updates */
        const rte *old = RTE_VALID_OR_NULL(u->update->old);
        const rte *new = u->update->new;
        for (
            SKIP_BACK_DECLARE(struct rt_pending_export, rpe, it, u->update);
            rpe = atomic_load_explicit(&rpe->next, memory_order_acquire);)
        {
          ASSERT_DIE(new == rpe->it.old);
          new = rpe->it.new;
          rt_export_processed(&hc->req, rpe->it.seq);
        }

        /* Ignore idempotent updates */
        if ((old == new) || (old && new && rte_same(old, new)))
          continue;

        n = (new ?: old)->net;
      }
      break;
    }

    if (!n)
      continue;

    RT_LOCK(hc->tab, tab);
    if (ev_active(tab->hcu_event))
      continue;

    if (!trie_match_net(hc->trie, n))
    {
      /* No interest in this update, mark it seen only */
      if (hc->req.trace_routes & D_ROUTES)
        log(L_TRACE "%s < boring %N (%u)",
            hc->req.name, n, NET_TO_INDEX(n)->index);
    }
    else
    {
      if (hc->req.trace_routes & D_ROUTES)
        log(L_TRACE "%s < checking %N (%u)",
            hc->req.name, n, NET_TO_INDEX(n)->index);

      if ((rt_export_get_state(&hc->req) == TES_READY)
          && !ev_active(tab->hcu_event))
      {
        if (hc->req.trace_routes & D_EVENTS)
          log(L_TRACE "%s requesting HCU", hc->req.name);

        ev_send_loop(tab->loop, tab->hcu_event);
      }
    }

    MAYBE_DEFER_TASK(hc->req.r.target, hc->req.r.event,
        "hostcache updater in %s", tab->name);
  }
}

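/*
 * rt_init_hostcache - allocate and attach a hostcache to a routing table.
 * Sets up the hostentry list, hash table, slab and prefix trie, creates the
 * hostcache update events and schedules the first update; the export request
 * itself is finished lazily in rt_update_hostcache().
 */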
static void
rt_init_hostcache(struct rtable_private *tab)
{
  struct hostcache *hc = mb_allocz(tab->rp, sizeof(struct hostcache));
  init_list(&hc->hostentries);

  hc->hash_items = 0;
  hc_alloc_table(hc, tab->rp, HC_DEF_ORDER);
  hc->slab = sl_new(tab->rp, sizeof(struct hostentry));

  hc->lp = lp_new(tab->rp);
  hc->trie = f_new_trie(hc->lp, 0);

  hc->tab = RT_PUB(tab);

  tab->hcu_event = ev_new_init(tab->rp, rt_update_hostcache, tab);
  tab->hcu_uncork_event = ev_new_init(tab->rp, rt_update_hostcache, tab);
  tab->hostcache = hc;

  ev_send_loop(tab->loop, tab->hcu_event);
}

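/*
 * rt_free_hostcache - tear down the hostcache on table shutdown.
 * Releases the cached route attributes and complains about hostentries that
 * are still in use; the remaining memory is reclaimed together with the
 * table's resource pool.
 */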
static void
rt_free_hostcache(struct rtable_private *tab)
{
  struct hostcache *hc = tab->hostcache;

  node *n;
  WALK_LIST(n, hc->hostentries)
  {
    SKIP_BACK_DECLARE(struct hostentry, he, ln, n);
    ea_free(atomic_load_explicit(&he->src, memory_order_relaxed));

    if (!lfuc_finished(&he->uc))
      log(L_ERR "Hostcache is not empty in table %s", tab->name);
  }

  /* Freed automagically by the resource pool
  rfree(hc->slab);
  rfree(hc->lp);
  mb_free(hc->hash_table);
  mb_free(hc);
  */
}

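/* Check whether the address is a local address of the given interface */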
static int
if_local_addr(ip_addr a, struct iface *i)
{
  struct ifa *b;

  WALK_LIST(b, i->addrs)
    if (ipa_equal(a, b->ip))
      return 1;

  return 0;
}

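/*
 * rt_get_igp_metric - obtain the IGP metric of a route for recursive
 * next hop resolution. Prefers an explicit igp_metric attribute, treats
 * device routes as metric 0, then falls back to the protocol-provided
 * hook, or IGP_METRIC_UNKNOWN if nothing applies.
 */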
u32
rt_get_igp_metric(const rte *rt)
{
  eattr *ea = ea_find(rt->attrs, "igp_metric");

  if (ea)
    return ea->u.data;

  if (rt_get_source_attr(rt) == RTS_DEVICE)
    return 0;

  if (rt->src->owner->class->rte_igp_metric)
    return rt->src->owner->class->rte_igp_metric(rt);

  return IGP_METRIC_UNKNOWN;
}

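/*
 * rt_update_hostentry - re-resolve one tracked next hop address.
 * Looks up the best route covering the address, refuses resolution through
 * another recursive route or through a local address, and atomically swaps
 * the cached source attributes while bumping the entry version so that
 * concurrent readers can detect an update in progress. Returns whether the
 * resolution result has changed.
 */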
static int
rt_update_hostentry(struct rtable_private *tab, struct hostentry *he)
{
  int direct = 0;
  int pxlen = 0;

  /* Signal work in progress */
  ASSERT_DIE((atomic_fetch_add_explicit(&he->version, 1, memory_order_acq_rel) & 1) == 0);

  /* Reset the hostentry */
  ea_list *old_src = atomic_exchange_explicit(&he->src, NULL, memory_order_acq_rel);
  ea_list *new_src = NULL;
  he->nexthop_linkable = 0;
  he->igp_metric = 0;

  RT_READ_LOCKED(tab, tr);
  net_addr he_addr;
  net_fill_ip_host(&he_addr, he->addr);
  net *n = net_route(tr, &he_addr);
  /*
  log(L_DEBUG "rt_update_hostentry(%s %p) got net_route(%N) = %p",
      tab->name, he, &he_addr, n);
  */
  if (n)
  {
    struct rte_storage *e = NET_BEST_ROUTE(tab, n);
    ea_list *a = e->rte.attrs;
    u32 pref = rt_get_preference(&e->rte);

    NET_WALK_ROUTES(tab, n, ep, ee)
      if (rte_is_valid(&ee->rte) &&
          (rt_get_preference(&ee->rte) >= pref) &&
          ea_find(ee->rte.attrs, &ea_gen_hostentry))
      {
        /* A recursive route should not depend on another recursive route */
        log(L_WARN "Next hop address %I resolvable through recursive route for %N",
            he->addr, ee->rte.net);
        goto done;
      }

    pxlen = e->rte.net->pxlen;

    eattr *nhea = ea_find(a, &ea_gen_nexthop);
    ASSERT_DIE(nhea);
    struct nexthop_adata *nhad = (void *) nhea->u.ptr;

    if (NEXTHOP_IS_REACHABLE(nhad))
      NEXTHOP_WALK(nh, nhad)
        if (ipa_zero(nh->gw))
        {
          if (if_local_addr(he->addr, nh->iface))
          {
            /* The host address is a local address; this is not valid */
            log(L_WARN "Next hop address %I is a local address of iface %s",
                he->addr, nh->iface->name);
            goto done;
          }

          direct++;
        }

    new_src = ea_ref(a);
    he->nexthop_linkable = !direct;
    he->igp_metric = rt_get_igp_metric(&e->rte);

    if ((old_src != new_src) && (tab->debug & D_ROUTES))
      if (ipa_zero(he->link) || ipa_equal(he->link, he->addr))
        log(L_TRACE "%s: Hostentry %p for %I in %s resolved via %N (%uG)",
            tab->name, he, he->addr, he->tab->name, e->rte.net, e->rte.src->global_id);
      else
        log(L_TRACE "%s: Hostentry %p for %I %I in %s resolved via %N (%uG)",
            tab->name, he, he->addr, he->link, he->tab->name, e->rte.net, e->rte.src->global_id);
  }
  else if (old_src && (tab->debug & D_ROUTES))
    if (ipa_zero(he->link) || ipa_equal(he->link, he->addr))
      log(L_TRACE "%s: Hostentry %p for %I in %s not resolved",
          tab->name, he, he->addr, he->tab->name);
    else
      log(L_TRACE "%s: Hostentry %p for %I %I in %s not resolved",
          tab->name, he, he->addr, he->link, he->tab->name);

done:
  /* Signal work done and wait for readers */
  ASSERT_DIE(atomic_exchange_explicit(&he->src, new_src, memory_order_acq_rel) == NULL);
  ASSERT_DIE((atomic_fetch_add_explicit(&he->version, 1, memory_order_acq_rel) & 1) == 1);
  synchronize_rcu();

  /* Add a prefix range to the trie */
  trie_add_prefix(tab->hostcache->trie, &he_addr, pxlen, he_addr.pxlen);

  ea_free(old_src);
  return old_src != new_src;
}

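/*
 * rt_update_hostcache - asynchronous hostcache update event hook.
 * Finishes the export request on its first run, postpones itself via
 * tab->hcu_uncork_event while route propagation is corked, rebuilds the
 * prefix trie, re-resolves every hostentry (dropping the unused ones) and
 * finally schedules next hop updates in all dependent tables whose
 * hostentries changed.
 */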
static void
rt_update_hostcache(void *data)
{
  rtable **nhu_pending;

  RT_LOCKED((rtable *) data, tab)
  {
    struct hostcache *hc = tab->hostcache;

    /* Finish initialization */
    if (!hc->req.name)
    {
      hc->req = (struct rt_export_request) {
        .name = mb_sprintf(tab->rp, "%s.hcu.notifier", tab->name),
        .r = {
          .event = &hc->source_event,
          .target = birdloop_event_list(tab->loop),
        },
        .pool = birdloop_pool(tab->loop),
        .trace_routes = tab->config->debug,
        .dump = hc_notify_dump_req,
      };
      hc->source_event = (event) {
        .hook = hc_notify_export,
        .data = hc,
      };

      rtex_export_subscribe(&tab->export_best, &hc->req);
    }

    /* Shutdown shortcut */
    if (rt_export_get_state(&hc->req) == TES_DOWN)
      return;

    if (rt_cork_check(tab->hcu_uncork_event))
    {
      rt_trace(tab, D_STATES, "Hostcache update corked");
      return;
    }

    /* Destination schedule map */
    nhu_pending = tmp_allocz(sizeof(rtable *) * rtable_max_id);

    struct hostentry *he;
    node *n, *x;

    /* Reset the trie */
    lp_flush(hc->lp);
    hc->trie = f_new_trie(hc->lp, 0);

    WALK_LIST_DELSAFE(n, x, hc->hostentries)
    {
      he = SKIP_BACK(struct hostentry, ln, n);
      if (lfuc_finished(&he->uc))
      {
        hc_delete_hostentry(hc, tab->rp, he);
        continue;
      }

      if (rt_update_hostentry(tab, he))
        nhu_pending[he->tab->id] = he->tab;
    }
  }

  for (uint i = 0; i < rtable_max_id; i++)
    if (nhu_pending[i])
      RT_LOCKED(nhu_pending[i], dst)
        rt_schedule_nhu(dst);
}

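/*
 * rt_get_hostentry - find or create a hostentry for the given next hop
 * address (and optional link-local address) tracked on behalf of the
 * dependent table @dep. A freshly created entry is resolved immediately;
 * the returned entry is kept alive at least until the current task ends.
 */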
static struct hostentry *
rt_get_hostentry(struct rtable_private *tab, ip_addr a, ip_addr ll, rtable *dep)
{
  ip_addr link = ipa_zero(ll) ? a : ll;
  struct hostentry *he;

  if (!tab->hostcache)
    rt_init_hostcache(tab);

  u32 k = hc_hash(a, dep);
  struct hostcache *hc = tab->hostcache;
  for (he = hc->hash_table[k >> hc->hash_shift]; he != NULL; he = he->next)
    if (ipa_equal(he->addr, a) && ipa_equal(he->link, link) && (he->tab == dep))
      break;

  if (he)
  {
    if (tab->debug & D_ROUTES)
      if (ipa_zero(ll))
        log(L_TRACE "%s: Found existing hostentry %p for %I in %s",
            tab->name, he, a, he->tab->name);
      else
        log(L_TRACE "%s: Found existing hostentry %p for %I %I in %s",
            tab->name, he, a, ll, he->tab->name);
  }
  else
  {
    he = hc_new_hostentry(hc, tab->rp, a, link, dep, k);
    he->owner = RT_PUB(tab);
    rt_update_hostentry(tab, he);
  }

  /* Keep the hostentry alive until this task ends */
  lfuc_lock_revive(&he->uc);
  lfuc_unlock(&he->uc, birdloop_event_list(tab->loop), tab->hcu_event);

  return he;
}

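/*
 * krt_export_net - obtain the route to be exported to the kernel for the
 * given net. Handles merged-path channels via a full feed and ordinary
 * channels via the best route, applying the channel's export filter.
 * Returns NULL when nothing is to be exported; the returned &rte is kept
 * in thread-local storage and stays valid only until the next call from
 * the same thread.
 */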
rte *
krt_export_net(struct channel *c, const net_addr *a, linpool *lp)
{
  if (c->ra_mode == RA_MERGED)
  {
    struct rt_export_feed *feed = rt_net_feed(c->table, a, NULL);
    if (!feed->count_routes)
      return NULL;

    if (!bmap_test(&c->export_accepted_map, feed->block[0].id))
      return NULL;

    return rt_export_merged(c, feed, lp, 1);
  }

  static _Thread_local rte best;
  best = rt_net_best(c->table, a);

  if (!best.attrs)
    return NULL;

  if (c->out_filter == FILTER_REJECT)
    return NULL;

  /* We could run krt_preexport() here, but it is already handled by krt_is_installed() */

  if (c->out_filter == FILTER_ACCEPT)
    return &best;

  if (f_run(c->out_filter, &best, FF_SILENT) > F_ACCEPT)
    return NULL;

  return &best;
}