/*
 * BIRD -- Routing Tables
 *
 * (c) 1998--2000 Martin Mares <mj@ucw.cz>
 *
 * Can be freely distributed and used under the terms of the GNU GPL.
 */

/**
 * DOC: Routing tables
 *
 * Routing tables are probably the most important structures BIRD uses. They
 * hold all the information about known networks, the associated routes and
 * their attributes.
 *
 * There are multiple routing tables (a primary one together with any
 * number of secondary ones if requested by the configuration). Each table
 * is basically a FIB containing entries describing the individual
 * destination networks. For each network (represented by structure &net),
 * there is a one-way linked list of route entries (&rte), the first entry
 * on the list being the best one (i.e., the one we currently use
 * for routing); the order of the other ones is undetermined.
 *
 * The &rte contains information specific to the route (preference, protocol
 * metrics, time of last modification etc.) and a pointer to a &rta structure
 * (see the route attribute module for a precise explanation) holding the
 * remaining route attributes, which are expected to be shared by multiple
 * routes in order to conserve memory.
 *
 * There are several mechanisms that allow automatic update of routes in one
 * routing table (dst) as a result of changes in another routing table (src).
 * They handle issues of recursive next hop resolving, flowspec validation and
 * RPKI validation.
 *
 * The first such mechanism is handling of recursive next hops. A route in the
 * dst table has an indirect next hop address, which is resolved through a route
 * in the src table (which may also be the same table) to get an immediate next
 * hop. This is implemented using structure &hostcache attached to the src
 * table, which contains &hostentry structures for each tracked next hop
 * address. These structures are linked from recursive routes in dst tables,
 * possibly multiple routes sharing one hostentry (as many routes may have the
 * same indirect next hop). There is also a trie in the hostcache, which matches
 * all prefixes that may influence resolving of tracked next hops.
 *
 * When a best route changes in the src table, the hostcache is notified using
 * rt_notify_hostcache(), which immediately checks using the trie whether the
 * change is relevant and if it is, then it schedules asynchronous hostcache
 * recomputation. The recomputation is done by rt_update_hostcache() (called
 * from rt_event() of the src table), which walks through all hostentries and
 * resolves them (by rt_update_hostentry()). It also updates the trie. If a
 * change in hostentry resolution was found, then it schedules asynchronous
 * nexthop recomputation of the associated dst table. That is done by
 * rt_next_hop_update() (called from rt_event() of the dst table), which
 * iterates over all routes in the dst table and re-examines their hostentries
 * for changes. Note that in contrast to the hostcache update, the next hop
 * update can be interrupted by the main loop. These two full-table walks (over
 * hostcache and dst table) are necessary due to the absence of direct lookups
 * (route -> affected nexthop, nexthop -> its route).
 *
 * The second mechanism is for flowspec validation, where validity of flowspec
 * routes depends on resolving their network prefixes in IP routing tables. This
 * is similar to the recursive next hop mechanism, but simpler as there are no
 * intermediate hostcache and hostentries (because flows are less likely to
 * share a common net prefix than routes are to share a common next hop). In the
 * src table, there is a list of dst tables (list flowspec_links); this list is
 * updated by flowspec channels (by rt_flowspec_link() and rt_flowspec_unlink()
 * during channel start/stop). Each dst table has its own trie of prefixes that
 * may influence validation of flowspec routes in it (flowspec_trie).
 *
 * When a best route changes in the src table, rt_flowspec_notify() immediately
 * checks all dst tables from the list using their tries to see whether the
 * change is relevant for them. If it is, then an asynchronous re-validation of
 * flowspec routes in the dst table is scheduled. That is also done by function
 * rt_next_hop_update(), like nexthop recomputation above. It iterates over all
 * flowspec routes and re-validates them. It also recalculates the trie.
 *
 * Note that in contrast to the hostcache update, here the trie is recalculated
 * during the rt_next_hop_update(), which may be interleaved with IP route
 * updates. The trie is flushed at the beginning of recalculation, which means
 * that such updates may use a partial trie to see if they are relevant. But it
 * works anyway! Either the affected flowspec was already re-validated and added
 * to the trie, then the IP route change would match the trie and trigger a next
 * round of re-validation, or it was not yet re-validated and added to the trie,
 * but will be re-validated later in this round anyway.
 *
 * The third mechanism is used for RPKI re-validation of IP routes and it is the
 * simplest. It is just a list of subscribers in the src table, which are
 * notified when any change happens, but only after a settle time. Also, in the
 * RPKI case the dst is not a table, but a channel, which refeeds routes through
 * a filter.
 */

#undef LOCAL_DEBUG

#include "nest/bird.h"
#include "nest/rt.h"
#include "nest/protocol.h"
#include "nest/iface.h"
#include "lib/resource.h"
#include "lib/event.h"
#include "lib/timer.h"
#include "lib/string.h"
#include "conf/conf.h"
#include "filter/filter.h"
#include "filter/data.h"
#include "lib/hash.h"
#include "lib/alloca.h"
#include "lib/flowspec.h"

#ifdef CONFIG_BGP
#include "proto/bgp/bgp.h"
#endif

#include <stdatomic.h>

pool *rt_table_pool;

static linpool *rte_update_pool;

list routing_tables;
list deleted_routing_tables;

/* Data structures for export journal */
#define RT_PENDING_EXPORT_ITEMS \
  ((page_size - sizeof(struct rt_export_block)) / sizeof(struct rt_pending_export))

struct rt_export_block {
  node n;
  _Atomic u32 end;
  _Atomic _Bool not_last;
  struct rt_pending_export export[];
};

static void rt_free_hostcache(rtable *tab);
static void rt_notify_hostcache(rtable *tab, net *net);
static void rt_update_hostcache(rtable *tab);
static void rt_next_hop_update(rtable *tab);
static inline void rt_next_hop_resolve_rte(rte *r);
static inline void rt_flowspec_resolve_rte(rte *r, struct channel *c);
static inline void rt_prune_table(rtable *tab);
static inline void rt_schedule_notify(rtable *tab);
static void rt_flowspec_notify(rtable *tab, net *net);
static void rt_kick_prune_timer(rtable *tab);
static void rt_feed_by_fib(void *);
static void rt_feed_by_trie(void *);
static void rt_feed_equal(void *);
static void rt_feed_for(void *);
static uint rt_feed_net(struct rt_export_hook *c, net *n);

static inline void rt_export_used(struct rt_exporter *);
static void rt_export_cleanup(rtable *tab);

static inline void rte_update_lock(void);
static inline void rte_update_unlock(void);

static int rte_same(rte *x, rte *y);

const char *rt_import_state_name_array[TIS_MAX] = {
  [TIS_DOWN] = "DOWN",
  [TIS_UP] = "UP",
  [TIS_STOP] = "STOP",
  [TIS_FLUSHING] = "FLUSHING",
  [TIS_WAITING] = "WAITING",
  [TIS_CLEARED] = "CLEARED",
};

const char *rt_export_state_name_array[TES_MAX] = {
  [TES_DOWN] = "DOWN",
  [TES_FEEDING] = "FEEDING",
  [TES_READY] = "READY",
  [TES_STOP] = "STOP"
};

const char *rt_import_state_name(u8 state)
{
  if (state >= TIS_MAX)
    return "!! INVALID !!";
  else
    return rt_import_state_name_array[state];
}

const char *rt_export_state_name(u8 state)
{
  if (state >= TES_MAX)
    return "!! INVALID !!";
  else
    return rt_export_state_name_array[state];
}

static inline struct rte_storage *rt_next_hop_update_rte(rtable *tab, net *n, rte *old);
static struct hostentry *rt_get_hostentry(rtable *tab, ip_addr a, ip_addr ll, rtable *dep);

static void
net_init_with_trie(struct fib *f, void *N)
{
  rtable *tab = SKIP_BACK(rtable, fib, f);
  net *n = N;

  if (tab->trie)
    trie_add_prefix(tab->trie, n->n.addr, n->n.addr->pxlen, n->n.addr->pxlen);

  if (tab->trie_new)
    trie_add_prefix(tab->trie_new, n->n.addr, n->n.addr->pxlen, n->n.addr->pxlen);
}

static inline net *
net_route_ip4_trie(rtable *t, const net_addr_ip4 *n0)
{
  TRIE_WALK_TO_ROOT_IP4(t->trie, n0, n)
  {
    net *r;
    if (r = net_find_valid(t, (net_addr *) &n))
      return r;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline net *
net_route_vpn4_trie(rtable *t, const net_addr_vpn4 *n0)
{
  TRIE_WALK_TO_ROOT_IP4(t->trie, (const net_addr_ip4 *) n0, px)
  {
    net_addr_vpn4 n = NET_ADDR_VPN4(px.prefix, px.pxlen, n0->rd);

    net *r;
    if (r = net_find_valid(t, (net_addr *) &n))
      return r;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline net *
net_route_ip6_trie(rtable *t, const net_addr_ip6 *n0)
{
  TRIE_WALK_TO_ROOT_IP6(t->trie, n0, n)
  {
    net *r;
    if (r = net_find_valid(t, (net_addr *) &n))
      return r;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline net *
net_route_vpn6_trie(rtable *t, const net_addr_vpn6 *n0)
{
  TRIE_WALK_TO_ROOT_IP6(t->trie, (const net_addr_ip6 *) n0, px)
  {
    net_addr_vpn6 n = NET_ADDR_VPN6(px.prefix, px.pxlen, n0->rd);

    net *r;
    if (r = net_find_valid(t, (net_addr *) &n))
      return r;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline void *
net_route_ip6_sadr_trie(rtable *t, const net_addr_ip6_sadr *n0)
{
  TRIE_WALK_TO_ROOT_IP6(t->trie, (const net_addr_ip6 *) n0, px)
  {
    net_addr_ip6_sadr n = NET_ADDR_IP6_SADR(px.prefix, px.pxlen, n0->src_prefix, n0->src_pxlen);
    net *best = NULL;
    int best_pxlen = 0;

    /* We need to do dst first matching. Since sadr addresses are hashed on dst
       prefix only, find the hash table chain and go through it to find the
       match with the longest matching src prefix. */
    for (struct fib_node *fn = fib_get_chain(&t->fib, (net_addr *) &n); fn; fn = fn->next)
    {
      net_addr_ip6_sadr *a = (void *) fn->addr;

      if (net_equal_dst_ip6_sadr(&n, a) &&
          net_in_net_src_ip6_sadr(&n, a) &&
          (a->src_pxlen >= best_pxlen))
      {
        best = fib_node_to_user(&t->fib, fn);
        best_pxlen = a->src_pxlen;
      }
    }

    if (best)
      return best;
  }
  TRIE_WALK_TO_ROOT_END;

  return NULL;
}

static inline net *
net_route_ip4_fib(rtable *t, const net_addr_ip4 *n0)
{
  net_addr_ip4 n;
  net_copy_ip4(&n, n0);

  net *r;
  while (r = net_find_valid(t, (net_addr *) &n), (!r) && (n.pxlen > 0))
  {
    n.pxlen--;
    ip4_clrbit(&n.prefix, n.pxlen);
  }

  return r;
}

static inline net *
net_route_vpn4_fib(rtable *t, const net_addr_vpn4 *n0)
{
  net_addr_vpn4 n;
  net_copy_vpn4(&n, n0);

  net *r;
  while (r = net_find_valid(t, (net_addr *) &n), (!r) && (n.pxlen > 0))
  {
    n.pxlen--;
    ip4_clrbit(&n.prefix, n.pxlen);
  }

  return r;
}

static inline net *
net_route_ip6_fib(rtable *t, const net_addr_ip6 *n0)
{
  net_addr_ip6 n;
  net_copy_ip6(&n, n0);

  net *r;
  while (r = net_find_valid(t, (net_addr *) &n), (!r) && (n.pxlen > 0))
  {
    n.pxlen--;
    ip6_clrbit(&n.prefix, n.pxlen);
  }

  return r;
}

static inline net *
net_route_vpn6_fib(rtable *t, const net_addr_vpn6 *n0)
{
  net_addr_vpn6 n;
  net_copy_vpn6(&n, n0);

  net *r;
  while (r = net_find_valid(t, (net_addr *) &n), (!r) && (n.pxlen > 0))
  {
    n.pxlen--;
    ip6_clrbit(&n.prefix, n.pxlen);
  }

  return r;
}

static inline void *
net_route_ip6_sadr_fib(rtable *t, const net_addr_ip6_sadr *n0)
{
  net_addr_ip6_sadr n;
  net_copy_ip6_sadr(&n, n0);

  while (1)
  {
    net *best = NULL;
    int best_pxlen = 0;

    /* We need to do dst first matching. Since sadr addresses are hashed on dst
       prefix only, find the hash table chain and go through it to find the
       match with the longest matching src prefix. */
    for (struct fib_node *fn = fib_get_chain(&t->fib, (net_addr *) &n); fn; fn = fn->next)
    {
      net_addr_ip6_sadr *a = (void *) fn->addr;

      if (net_equal_dst_ip6_sadr(&n, a) &&
          net_in_net_src_ip6_sadr(&n, a) &&
          (a->src_pxlen >= best_pxlen))
      {
        best = fib_node_to_user(&t->fib, fn);
        best_pxlen = a->src_pxlen;
      }
    }

    if (best)
      return best;

    if (!n.dst_pxlen)
      break;

    n.dst_pxlen--;
    ip6_clrbit(&n.dst_prefix, n.dst_pxlen);
  }

  return NULL;
}

net *
net_route(rtable *tab, const net_addr *n)
{
  ASSERT(tab->addr_type == n->type);

  switch (n->type)
  {
  case NET_IP4:
    if (tab->trie)
      return net_route_ip4_trie(tab, (net_addr_ip4 *) n);
    else
      return net_route_ip4_fib (tab, (net_addr_ip4 *) n);

  case NET_VPN4:
    if (tab->trie)
      return net_route_vpn4_trie(tab, (net_addr_vpn4 *) n);
    else
      return net_route_vpn4_fib (tab, (net_addr_vpn4 *) n);

  case NET_IP6:
    if (tab->trie)
      return net_route_ip6_trie(tab, (net_addr_ip6 *) n);
    else
      return net_route_ip6_fib (tab, (net_addr_ip6 *) n);

  case NET_VPN6:
    if (tab->trie)
      return net_route_vpn6_trie(tab, (net_addr_vpn6 *) n);
    else
      return net_route_vpn6_fib (tab, (net_addr_vpn6 *) n);

  case NET_IP6_SADR:
    if (tab->trie)
      return net_route_ip6_sadr_trie(tab, (net_addr_ip6_sadr *) n);
    else
      return net_route_ip6_sadr_fib (tab, (net_addr_ip6_sadr *) n);

  default:
    return NULL;
  }
}

static int
net_roa_check_ip4_trie(rtable *tab, const net_addr_ip4 *px, u32 asn)
{
  int anything = 0;

  TRIE_WALK_TO_ROOT_IP4(tab->trie, px, px0)
  {
    net_addr_roa4 roa0 = NET_ADDR_ROA4(px0.prefix, px0.pxlen, 0, 0);

    struct fib_node *fn;
    for (fn = fib_get_chain(&tab->fib, (net_addr *) &roa0); fn; fn = fn->next)
    {
      net_addr_roa4 *roa = (void *) fn->addr;
      net *r = fib_node_to_user(&tab->fib, fn);

      if (net_equal_prefix_roa4(roa, &roa0) && r->routes && rte_is_valid(&r->routes->rte))
      {
        anything = 1;
        if (asn && (roa->asn == asn) && (roa->max_pxlen >= px->pxlen))
          return ROA_VALID;
      }
    }
  }
  TRIE_WALK_TO_ROOT_END;

  return anything ? ROA_INVALID : ROA_UNKNOWN;
}

static int
net_roa_check_ip4_fib(rtable *tab, const net_addr_ip4 *px, u32 asn)
{
  struct net_addr_roa4 n = NET_ADDR_ROA4(px->prefix, px->pxlen, 0, 0);
  struct fib_node *fn;
  int anything = 0;

  while (1)
  {
    for (fn = fib_get_chain(&tab->fib, (net_addr *) &n); fn; fn = fn->next)
    {
      net_addr_roa4 *roa = (void *) fn->addr;
      net *r = fib_node_to_user(&tab->fib, fn);

      if (net_equal_prefix_roa4(roa, &n) && r->routes && rte_is_valid(&r->routes->rte))
      {
        anything = 1;
        if (asn && (roa->asn == asn) && (roa->max_pxlen >= px->pxlen))
          return ROA_VALID;
      }
    }

    if (n.pxlen == 0)
      break;

    n.pxlen--;
    ip4_clrbit(&n.prefix, n.pxlen);
  }

  return anything ? ROA_INVALID : ROA_UNKNOWN;
}

static int
net_roa_check_ip6_trie(rtable *tab, const net_addr_ip6 *px, u32 asn)
{
  int anything = 0;

  TRIE_WALK_TO_ROOT_IP6(tab->trie, px, px0)
  {
    net_addr_roa6 roa0 = NET_ADDR_ROA6(px0.prefix, px0.pxlen, 0, 0);

    struct fib_node *fn;
    for (fn = fib_get_chain(&tab->fib, (net_addr *) &roa0); fn; fn = fn->next)
    {
      net_addr_roa6 *roa = (void *) fn->addr;
      net *r = fib_node_to_user(&tab->fib, fn);

      if (net_equal_prefix_roa6(roa, &roa0) && r->routes && rte_is_valid(&r->routes->rte))
      {
        anything = 1;
        if (asn && (roa->asn == asn) && (roa->max_pxlen >= px->pxlen))
          return ROA_VALID;
      }
    }
  }
  TRIE_WALK_TO_ROOT_END;

  return anything ? ROA_INVALID : ROA_UNKNOWN;
}

static int
net_roa_check_ip6_fib(rtable *tab, const net_addr_ip6 *px, u32 asn)
{
  struct net_addr_roa6 n = NET_ADDR_ROA6(px->prefix, px->pxlen, 0, 0);
  struct fib_node *fn;
  int anything = 0;

  while (1)
  {
    for (fn = fib_get_chain(&tab->fib, (net_addr *) &n); fn; fn = fn->next)
    {
      net_addr_roa6 *roa = (void *) fn->addr;
      net *r = fib_node_to_user(&tab->fib, fn);

      if (net_equal_prefix_roa6(roa, &n) && r->routes && rte_is_valid(&r->routes->rte))
      {
        anything = 1;
        if (asn && (roa->asn == asn) && (roa->max_pxlen >= px->pxlen))
          return ROA_VALID;
      }
    }

    if (n.pxlen == 0)
      break;

    n.pxlen--;
    ip6_clrbit(&n.prefix, n.pxlen);
  }

  return anything ? ROA_INVALID : ROA_UNKNOWN;
}

/**
 * roa_check - check validity of route origination in a ROA table
 * @tab: ROA table
 * @n: network prefix to check
 * @asn: AS number of network prefix
 *
 * Implements RFC 6483 route validation for the given network prefix. The
 * procedure is to find all candidate ROAs - ROAs whose prefixes cover the given
 * network prefix. If there is no candidate ROA, return ROA_UNKNOWN. If there is
 * a candidate ROA with matching ASN and maxlen field greater than or equal to
 * the given prefix length, return ROA_VALID. Otherwise, return ROA_INVALID. If
 * the caller cannot determine the origin AS, 0 may be used (in that case
 * ROA_VALID cannot happen). Table @tab must have type NET_ROA4 or NET_ROA6,
 * network @n must have type NET_IP4 or NET_IP6, respectively.
 */
int
net_roa_check(rtable *tab, const net_addr *n, u32 asn)
{
  if ((tab->addr_type == NET_ROA4) && (n->type == NET_IP4))
  {
    if (tab->trie)
      return net_roa_check_ip4_trie(tab, (const net_addr_ip4 *) n, asn);
    else
      return net_roa_check_ip4_fib (tab, (const net_addr_ip4 *) n, asn);
  }
  else if ((tab->addr_type == NET_ROA6) && (n->type == NET_IP6))
  {
    if (tab->trie)
      return net_roa_check_ip6_trie(tab, (const net_addr_ip6 *) n, asn);
    else
      return net_roa_check_ip6_fib (tab, (const net_addr_ip6 *) n, asn);
  }
  else
    return ROA_UNKNOWN; /* Should not happen */
}

/**
 * rte_find - find a route
 * @net: network node
 * @src: route source
 *
 * The rte_find() function returns a pointer to the list link of the route for
 * destination @net which comes from route source @src. A pointer to the list
 * end is returned if no such route is found.
 */
static struct rte_storage **
rte_find(net *net, struct rte_src *src)
{
  struct rte_storage **e = &net->routes;

  while ((*e) && (*e)->rte.src != src)
    e = &(*e)->next;

  return e;
}

struct rte_storage *
rte_store(const rte *r, net *net, rtable *tab)
{
  struct rte_storage *e = sl_alloc(tab->rte_slab);

  e->rte = *r;
  e->rte.net = net->n.addr;

  rt_lock_source(e->rte.src);

  if (ea_is_cached(e->rte.attrs))
    e->rte.attrs = rta_clone(e->rte.attrs);
  else
    e->rte.attrs = rta_lookup(e->rte.attrs, 1);

  return e;
}

/**
 * rte_free - delete a &rte
 * @e: &struct rte_storage to be deleted
 *
 * rte_free() frees the given route storage: it unlocks the route source,
 * releases the route attributes and returns the storage to its slab.
 */
void
rte_free(struct rte_storage *e)
{
  rt_unlock_source(e->rte.src);
  rta_free(e->rte.attrs);
  sl_free(e);
}

static int /* Actually better or at least as good as */
rte_better(rte *new, rte *old)
{
  int (*better)(rte *, rte *);

  if (!rte_is_valid(old))
    return 1;
  if (!rte_is_valid(new))
    return 0;

  u32 np = rt_get_preference(new);
  u32 op = rt_get_preference(old);

  if (np > op)
    return 1;
  if (np < op)
    return 0;
  if (new->src->proto->proto != old->src->proto->proto)
  {
    /*
     * If the user has configured protocol preferences so that two different
     * protocols have the same preference, try to break the tie by comparing
     * protocol addresses. Not too useful, but keeps the ordering of routes
     * unambiguous.
     */
    return new->src->proto->proto > old->src->proto->proto;
  }
  if (better = new->src->proto->rte_better)
    return better(new, old);
  return 0;
}

static int
rte_mergable(rte *pri, rte *sec)
{
  int (*mergable)(rte *, rte *);

  if (!rte_is_valid(pri) || !rte_is_valid(sec))
    return 0;

  if (rt_get_preference(pri) != rt_get_preference(sec))
    return 0;

  if (pri->src->proto->proto != sec->src->proto->proto)
    return 0;

  if (mergable = pri->src->proto->rte_mergable)
    return mergable(pri, sec);

  return 0;
}

static void
rte_trace(const char *name, const rte *e, int dir, const char *msg)
{
  log(L_TRACE "%s %c %s %N src %uL %uG %uS id %u %s",
      name, dir, msg, e->net,
      e->src->private_id, e->src->global_id, e->stale_cycle, e->id,
      rta_dest_name(rte_dest(e)));
}
|
|
|
|
|
|
|
|
static inline void
|
2021-06-21 15:07:31 +00:00
|
|
|
channel_rte_trace_in(uint flag, struct channel *c, const rte *e, const char *msg)
|
2000-03-12 20:30:53 +00:00
|
|
|
{
|
2020-12-07 21:19:40 +00:00
|
|
|
if ((c->debug & flag) || (c->proto->debug & flag))
|
2021-06-21 15:07:31 +00:00
|
|
|
rte_trace(c->in_req.name, e, '>', msg);
|
2000-03-12 20:30:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
2021-06-21 15:07:31 +00:00
|
|
|
channel_rte_trace_out(uint flag, struct channel *c, const rte *e, const char *msg)
|
2000-03-12 20:30:53 +00:00
|
|
|
{
|
2020-12-07 21:19:40 +00:00
|
|
|
if ((c->debug & flag) || (c->proto->debug & flag))
|
2021-06-21 15:07:31 +00:00
|
|
|
rte_trace(c->out_req.name, e, '<', msg);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
rt_rte_trace_in(uint flag, struct rt_import_request *req, const rte *e, const char *msg)
|
|
|
|
{
|
|
|
|
if (req->trace_routes & flag)
|
|
|
|
rte_trace(req->name, e, '>', msg);
|
2000-03-12 20:30:53 +00:00
|
|
|
}
|
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
#if 0
|
|
|
|
// seems to be unused at all
|
|
|
|
static inline void
|
|
|
|
rt_rte_trace_out(uint flag, struct rt_export_request *req, const rte *e, const char *msg)
|
|
|
|
{
|
|
|
|
if (req->trace_routes & flag)
|
|
|
|
rte_trace(req->name, e, '<', msg);
|
2000-03-12 20:30:53 +00:00
|
|
|
}
|
2021-06-21 15:07:31 +00:00
|
|
|
#endif

static uint
rte_feed_count(net *n)
{
  uint count = 0;
  for (struct rte_storage *e = n->routes; e; e = e->next)
    count++;

  return count;
}

static void
rte_feed_obtain(net *n, struct rte **feed, uint count)
{
  uint i = 0;
  for (struct rte_storage *e = n->routes; e; e = e->next)
  {
    ASSERT_DIE(i < count);
    feed[i++] = &e->rte;
  }

  ASSERT_DIE(i == count);
}

static rte *
export_filter(struct channel *c, rte *rt, int silent)
{
  struct proto *p = c->proto;
  const struct filter *filter = c->out_filter;
  struct channel_export_stats *stats = &c->export_stats;

  /* Do nothing if we have already rejected the route */
  if (silent && bmap_test(&c->export_reject_map, rt->id))
    goto reject_noset;

  int v = p->preexport ? p->preexport(c, rt) : 0;
  if (v < 0)
  {
    if (silent)
      goto reject_noset;

    stats->updates_rejected++;
    if (v == RIC_REJECT)
      channel_rte_trace_out(D_FILTERS, c, rt, "rejected by protocol");
    goto reject_noset;
  }
  if (v > 0)
  {
    if (!silent)
      channel_rte_trace_out(D_FILTERS, c, rt, "forced accept by protocol");
    goto accept;
  }

  v = filter && ((filter == FILTER_REJECT) ||
      (f_run(filter, rt,
	     (silent ? FF_SILENT : 0)) > F_ACCEPT));
  if (v)
  {
    if (silent)
      goto reject;

    stats->updates_filtered++;
    channel_rte_trace_out(D_FILTERS, c, rt, "filtered out");
    goto reject;
  }

accept:
  /* We have accepted the route */
  bmap_clear(&c->export_reject_map, rt->id);
  return rt;

reject:
  /* We have rejected the route by filter */
  bmap_set(&c->export_reject_map, rt->id);

reject_noset:
  /* Discard temporary rte */
  return NULL;
}

static void
do_rt_notify(struct channel *c, const net_addr *net, rte *new, const rte *old)
{
  struct proto *p = c->proto;
  struct channel_export_stats *stats = &c->export_stats;

  if (c->refeeding && new)
    c->refeed_count++;

  if (!old && new)
    if (CHANNEL_LIMIT_PUSH(c, OUT))
    {
      stats->updates_rejected++;
      channel_rte_trace_out(D_FILTERS, c, new, "rejected [limit]");
      return;
    }

  if (!new && old)
    CHANNEL_LIMIT_POP(c, OUT);

  if (new)
    stats->updates_accepted++;
  else
    stats->withdraws_accepted++;

  if (old)
    bmap_clear(&c->export_map, old->id);

  if (new)
    bmap_set(&c->export_map, new->id);

  if (p->debug & D_ROUTES)
  {
    if (new && old)
      channel_rte_trace_out(D_ROUTES, c, new, "replaced");
    else if (new)
      channel_rte_trace_out(D_ROUTES, c, new, "added");
    else if (old)
      channel_rte_trace_out(D_ROUTES, c, old, "removed");
  }

  p->rt_notify(p, c, net, new, old);
}

static void
rt_notify_basic(struct channel *c, const net_addr *net, rte *new, rte *old)
{
  if (new && old && rte_same(new, old))
  {
    if ((new->id != old->id) && bmap_test(&c->export_map, old->id))
    {
      bmap_set(&c->export_map, new->id);
      bmap_clear(&c->export_map, old->id);
    }
    return;
  }

  if (new)
    new = export_filter(c, new, 0);

  if (old && !bmap_test(&c->export_map, old->id))
    old = NULL;

  if (!new && !old)
    return;

  do_rt_notify(c, net, new, old);
}

static void
channel_rpe_mark_seen(struct rt_export_request *req, struct rt_pending_export *rpe)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  rpe_mark_seen(req->hook, rpe);
  if (rpe->old)
    bmap_clear(&c->export_reject_map, rpe->old->rte.id);
}

void
rt_notify_accepted(struct rt_export_request *req, const net_addr *n, struct rt_pending_export *rpe,
		   struct rte **feed, uint count)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  rte nb0, *new_best = NULL;
  const rte *old_best = NULL;

  for (uint i = 0; i < count; i++)
  {
    if (!rte_is_valid(feed[i]))
      continue;

    /* Has been already rejected, won't bother with it */
    if (!c->refeeding && bmap_test(&c->export_reject_map, feed[i]->id))
      continue;

    /* Previously exported */
    if (!old_best && bmap_test(&c->export_map, feed[i]->id))
    {
      /* is still best */
      if (!new_best)
      {
	DBG("rt_notify_accepted: idempotent\n");
	goto done;
      }

      /* is superseded */
      old_best = feed[i];
      break;
    }

    /* Have no new best route yet */
    if (!new_best)
    {
      /* Try this route not seen before */
      nb0 = *feed[i];
      new_best = export_filter(c, &nb0, 0);
      DBG("rt_notify_accepted: checking route id %u: %s\n", feed[i]->id, new_best ? "ok" : "no");
    }
  }

done:
  /* Check obsolete routes for previously exported */
  while (rpe)
  {
    channel_rpe_mark_seen(req, rpe);
    if (rpe->old)
    {
      if (bmap_test(&c->export_map, rpe->old->rte.id))
      {
	ASSERT_DIE(old_best == NULL);
	old_best = &rpe->old->rte;
      }
    }
    rpe = rpe_next(rpe, NULL);
  }

  /* Nothing to export */
  if (!new_best && !old_best)
  {
    DBG("rt_notify_accepted: nothing to export\n");
    return;
  }

  do_rt_notify(c, n, new_best, old_best);
}

rte *
rt_export_merged(struct channel *c, struct rte **feed, uint count, linpool *pool, int silent)
{
  _Thread_local static rte rloc;

  // struct proto *p = c->proto;
  struct nexthop_adata *nhs = NULL;
  rte *best0 = feed[0];
  rte *best = NULL;

  if (!rte_is_valid(best0))
    return NULL;

  /* Already rejected, no need to re-run the filter */
  if (!c->refeeding && bmap_test(&c->export_reject_map, best0->id))
    return NULL;

  rloc = *best0;
  best = export_filter(c, &rloc, silent);

  if (!best)
    /* Best route doesn't pass the filter */
    return NULL;

  if (!rte_is_reachable(best))
    /* Unreachable routes can't be merged */
    return best;

  for (uint i = 1; i < count; i++)
  {
    if (!rte_mergable(best0, feed[i]))
      continue;

    rte tmp0 = *feed[i];
    rte *tmp = export_filter(c, &tmp0, 1);

    if (!tmp || !rte_is_reachable(tmp))
      continue;

    eattr *nhea = ea_find(tmp->attrs, &ea_gen_nexthop);
    ASSERT_DIE(nhea);

    if (nhs)
      nhs = nexthop_merge(nhs, (struct nexthop_adata *) nhea->u.ptr, c->merge_limit, pool);
    else
      nhs = (struct nexthop_adata *) nhea->u.ptr;
  }

  if (nhs)
  {
    eattr *nhea = ea_find(best->attrs, &ea_gen_nexthop);
    ASSERT_DIE(nhea);

    nhs = nexthop_merge(nhs, (struct nexthop_adata *) nhea->u.ptr, c->merge_limit, pool);

    ea_set_attr(&best->attrs,
	EA_LITERAL_DIRECT_ADATA(&ea_gen_nexthop, 0, &nhs->ad));
  }

  return best;
}

void
rt_notify_merged(struct rt_export_request *req, const net_addr *n, struct rt_pending_export *rpe,
		 struct rte **feed, uint count)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  // struct proto *p = c->proto;

#if 0 /* TODO: Find whether this check is possible when processing multiple changes at once. */
  /* Check whether the change is relevant to the merged route */
  if ((new_best == old_best) &&
      (new_changed != old_changed) &&
      !rte_mergable(new_best, new_changed) &&
      !rte_mergable(old_best, old_changed))
    return;
#endif

  rte *old_best = NULL;
  /* Find old best route */
  for (uint i = 0; i < count; i++)
    if (bmap_test(&c->export_map, feed[i]->id))
    {
      old_best = feed[i];
      break;
    }

  /* Check obsolete routes for previously exported */
  while (rpe)
  {
    channel_rpe_mark_seen(req, rpe);
    if (rpe->old)
    {
      if (bmap_test(&c->export_map, rpe->old->rte.id))
      {
	ASSERT_DIE(old_best == NULL);
	old_best = &rpe->old->rte;
      }
    }
    rpe = rpe_next(rpe, NULL);
  }

  /* Prepare new merged route */
  rte *new_merged = count ? rt_export_merged(c, feed, count, rte_update_pool, 0) : NULL;

  if (new_merged || old_best)
    do_rt_notify(c, n, new_merged, old_best);
}

void
rt_notify_optimal(struct rt_export_request *req, const net_addr *net, struct rt_pending_export *rpe)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  rte *o = RTE_VALID_OR_NULL(rpe->old_best);
  struct rte_storage *new_best = rpe->new_best;

  while (rpe)
  {
    channel_rpe_mark_seen(req, rpe);
    new_best = rpe->new_best;
    rpe = rpe_next(rpe, NULL);
  }

  rte n0 = RTE_COPY_VALID(new_best);
  if (n0.src || o)
    rt_notify_basic(c, net, n0.src ? &n0 : NULL, o);
}

void
rt_notify_any(struct rt_export_request *req, const net_addr *net, struct rt_pending_export *rpe)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  rte *n = RTE_VALID_OR_NULL(rpe->new);
  rte *o = RTE_VALID_OR_NULL(rpe->old);

  if (!n && !o)
  {
    channel_rpe_mark_seen(req, rpe);
    return;
  }

  struct rte_src *src = n ? n->src : o->src;
  struct rte_storage *new_latest = rpe->new;

  while (rpe)
  {
    channel_rpe_mark_seen(req, rpe);
    new_latest = rpe->new;
    rpe = rpe_next(rpe, src);
  }

  rte n0 = RTE_COPY_VALID(new_latest);
  if (n0.src || o)
    rt_notify_basic(c, net, n0.src ? &n0 : NULL, o);
}

void
rt_feed_any(struct rt_export_request *req, const net_addr *net, struct rt_pending_export *rpe UNUSED, rte **feed, uint count)
{
  struct channel *c = SKIP_BACK(struct channel, out_req, req);

  for (uint i=0; i<count; i++)
    if (rte_is_valid(feed[i]))
    {
      rte n0 = *feed[i];
      rt_notify_basic(c, net, &n0, NULL);
    }
}

void
rpe_mark_seen(struct rt_export_hook *hook, struct rt_pending_export *rpe)
{
  bmap_set(&hook->seq_map, rpe->seq);
}

struct rt_pending_export *
rpe_next(struct rt_pending_export *rpe, struct rte_src *src)
{
  struct rt_pending_export *next = atomic_load_explicit(&rpe->next, memory_order_acquire);

  if (!next)
    return NULL;

  if (!src)
    return next;

  while (rpe = next)
    if (src == (rpe->new ? rpe->new->rte.src : rpe->old->rte.src))
      return rpe;
    else
      next = atomic_load_explicit(&rpe->next, memory_order_acquire);

  return NULL;
}

static struct rt_pending_export * rt_next_export_fast(struct rt_pending_export *last);
static void
rte_export(struct rt_export_hook *hook, struct rt_pending_export *rpe)
{
  if (bmap_test(&hook->seq_map, rpe->seq))
    goto ignore;	/* Seen already */

  const net_addr *n = rpe->new_best ? rpe->new_best->rte.net : rpe->old_best->rte.net;

  switch (hook->req->addr_mode)
  {
    case TE_ADDR_NONE:
      break;

    case TE_ADDR_IN:
      if (!net_in_netX(n, hook->req->addr))
	goto ignore;
      break;

    case TE_ADDR_EQUAL:
      if (!net_equal(n, hook->req->addr))
	goto ignore;
      break;

    case TE_ADDR_FOR:
      bug("Continuous export of best prefix match not implemented yet.");

    default:
      bug("Strange table export address mode: %d", hook->req->addr_mode);
  }

  if (rpe->new)
    hook->stats.updates_received++;
  else
    hook->stats.withdraws_received++;

  if (hook->req->export_one)
    hook->req->export_one(hook->req, n, rpe);
  else if (hook->req->export_bulk)
  {
    net *net = SKIP_BACK(struct network, n.addr, (net_addr (*)[0]) n);
    uint count = rte_feed_count(net);
    rte **feed = NULL;
    if (count)
    {
      feed = alloca(count * sizeof(rte *));
      rte_feed_obtain(net, feed, count);
    }
    hook->req->export_bulk(hook->req, n, rpe, feed, count);
  }
  else
    bug("Export request must always provide an export method");

ignore:
  /* Get the next export if exists */
  hook->rpe_next = rt_next_export_fast(rpe);

  /* The last block may be available to free */
  if (PAGE_HEAD(hook->rpe_next) != PAGE_HEAD(rpe))
    CALL(hook->table->used, hook->table);

  /* Releasing this export for cleanup routine */
  DBG("store hook=%p last_export=%p seq=%lu\n", hook, rpe, rpe->seq);
  atomic_store_explicit(&hook->last_export, rpe, memory_order_release);
}

/**
 * rte_announce - announce a routing table change
 * @tab: table the route has been added to
 * @net: network in question
 * @new: the new or changed route
 * @old: the previous route replaced by the new one
 * @new_best: the new best route for the same network
 * @old_best: the previous best route for the same network
 *
 * This function gets a routing table update and announces it to all protocols
 * that are connected to the same table by their channels.
 *
 * There are two ways of how routing table changes are announced. First, there
 * is a change of just one route in @net (which may cause a change of the best
 * route of the network). In this case @new and @old describe the changed route
 * and @new_best and @old_best describe the best routes. Other routes are not
 * affected, but in a sorted table the order of other routes might change.
 *
 * The function announces the change to all associated channels. For each
 * channel, an appropriate preprocessing is done according to channel &ra_mode.
 * For example, %RA_OPTIMAL channels receive just changes of best routes.
 *
 * In general, we first call preexport() hook of a protocol, which performs
 * basic checks on the route (each protocol has a right to veto or force accept
 * of the route before any filter is asked). Then we consult an export filter
 * of the channel and verify the old route in an export map of the channel.
 * Finally, the rt_notify() hook of the protocol gets called.
 *
 * Note that there are also calls of rt_notify() hooks due to feed, but that is
 * done outside of scope of rte_announce().
 */

static void
rte_announce(rtable *tab, net *net, struct rte_storage *new, struct rte_storage *old,
	     struct rte_storage *new_best, struct rte_storage *old_best)
{
  int new_best_valid = rte_is_valid(RTE_OR_NULL(new_best));
  int old_best_valid = rte_is_valid(RTE_OR_NULL(old_best));

  if ((new == old) && (new_best == old_best))
    return;

  if (new_best_valid || old_best_valid)
  {
    if (new_best_valid)
      new_best->rte.sender->stats.pref++;
    if (old_best_valid)
      old_best->rte.sender->stats.pref--;

    if (tab->hostcache)
      rt_notify_hostcache(tab, net);

    if (!EMPTY_LIST(tab->flowspec_links))
      rt_flowspec_notify(tab, net);
  }

  rt_schedule_notify(tab);

  if (EMPTY_LIST(tab->exporter.hooks) && EMPTY_LIST(tab->exporter.pending))
  {
    /* No export hook and no pending exports to cleanup. We may free the route immediately. */
    if (!old)
      return;

    hmap_clear(&tab->id_map, old->rte.id);
    rte_free(old);
    return;
  }

  /* Get the pending export structure */
  struct rt_export_block *rpeb = NULL, *rpebsnl = NULL;
  u32 end = 0;

  if (!EMPTY_LIST(tab->exporter.pending))
  {
    rpeb = TAIL(tab->exporter.pending);
    end = atomic_load_explicit(&rpeb->end, memory_order_relaxed);
    if (end >= RT_PENDING_EXPORT_ITEMS)
    {
      ASSERT_DIE(end == RT_PENDING_EXPORT_ITEMS);
      rpebsnl = rpeb;

      rpeb = NULL;
      end = 0;
    }
  }

  if (!rpeb)
  {
    rpeb = alloc_page();
    *rpeb = (struct rt_export_block) {};
    add_tail(&tab->exporter.pending, &rpeb->n);
  }

  /* Fill the pending export */
  struct rt_pending_export *rpe = &rpeb->export[rpeb->end];
  *rpe = (struct rt_pending_export) {
    .new = new,
    .new_best = new_best,
    .old = old,
    .old_best = old_best,
    .seq = tab->exporter.next_seq++,
  };

  DBGL("rte_announce: table=%s net=%N new=%p id %u from %s old=%p id %u from %s new_best=%p id %u old_best=%p id %u seq=%lu",
       tab->name, net->n.addr,
       new, new ? new->rte.id : 0, new ? new->rte.sender->req->name : NULL,
       old, old ? old->rte.id : 0, old ? old->rte.sender->req->name : NULL,
       new_best, new_best ? new_best->rte.id : 0,
       old_best, old_best ? old_best->rte.id : 0,
       rpe->seq);

  ASSERT_DIE(atomic_fetch_add_explicit(&rpeb->end, 1, memory_order_release) == end);

  if (rpebsnl)
  {
    _Bool f = 0;
    ASSERT_DIE(atomic_compare_exchange_strong_explicit(&rpebsnl->not_last, &f, 1,
	  memory_order_release, memory_order_relaxed));
  }

  /* Append to the same-network squasher list */
  if (net->last)
  {
    struct rt_pending_export *rpenull = NULL;
    ASSERT_DIE(atomic_compare_exchange_strong_explicit(
	  &net->last->next, &rpenull, rpe,
	  memory_order_relaxed,
	  memory_order_relaxed));
  }

  net->last = rpe;

  if (!net->first)
    net->first = rpe;

  if (tab->exporter.first == NULL)
    tab->exporter.first = rpe;

  if (!tm_active(tab->exporter.export_timer))
    tm_start(tab->exporter.export_timer, tab->config->export_settle_time);
}

static struct rt_pending_export *
rt_next_export_fast(struct rt_pending_export *last)
{
  /* Get the whole export block and find our position in there. */
  struct rt_export_block *rpeb = PAGE_HEAD(last);
  u32 pos = (last - &rpeb->export[0]);
  u32 end = atomic_load_explicit(&rpeb->end, memory_order_acquire);
  ASSERT_DIE(pos < end);

  /* Next is in the same block. */
  if (++pos < end)
    return &rpeb->export[pos];

  /* There is another block. */
  if (atomic_load_explicit(&rpeb->not_last, memory_order_acquire))
  {
    /* This is OK to do non-atomically because of the not_last flag. */
    rpeb = NODE_NEXT(rpeb);
    return &rpeb->export[0];
  }

  /* There is nothing more. */
  return NULL;
}

static struct rt_pending_export *
rt_next_export(struct rt_export_hook *hook, struct rt_exporter *tab)
{
  /* As the table is locked, it is safe to reload the last export pointer */
  struct rt_pending_export *last = atomic_load_explicit(&hook->last_export, memory_order_acquire);

  /* It is still valid, let's reuse it */
  if (last)
    return rt_next_export_fast(last);

  /* No, therefore we must process the table's first pending export */
  else
    return tab->first;
}

static inline void
rt_send_export_event(struct rt_export_hook *hook)
{
  ev_send(hook->req->list, hook->event);
}

static void
rt_announce_exports(timer *tm)
{
  rtable *tab = tm->data;

  struct rt_export_hook *c; node *n;
  WALK_LIST2(c, n, tab->exporter.hooks, n)
  {
    if (atomic_load_explicit(&c->export_state, memory_order_acquire) != TES_READY)
      continue;

    rt_send_export_event(c);
  }
}

static struct rt_pending_export *
rt_last_export(struct rt_exporter *tab)
{
  struct rt_pending_export *rpe = NULL;

  if (!EMPTY_LIST(tab->pending))
  {
    /* We'll continue processing exports from this export on */
    struct rt_export_block *reb = TAIL(tab->pending);
    ASSERT_DIE(reb->end);
    rpe = &reb->export[reb->end - 1];
  }

  return rpe;
}

#define RT_EXPORT_BULK 1024

static void
rt_export_hook(void *_data)
{
  struct rt_export_hook *c = _data;

  ASSERT_DIE(atomic_load_explicit(&c->export_state, memory_order_relaxed) == TES_READY);

  if (!c->rpe_next)
  {
    c->rpe_next = rt_next_export(c, c->table);

    if (!c->rpe_next)
    {
      CALL(c->table->used, c->table);
      return;
    }
  }

  /* Process the export */
  for (uint i=0; i<RT_EXPORT_BULK; i++)
  {
    rte_update_lock();

    rte_export(c, c->rpe_next);

    if (!c->rpe_next)
      break;

    rte_update_unlock();
  }

  rt_send_export_event(c);
}

static inline int
rte_validate(struct channel *ch, rte *e)
{
  int c;
  const net_addr *n = e->net;

  if (!net_validate(n))
  {
    log(L_WARN "Ignoring bogus prefix %N received via %s",
	n, ch->proto->name);
    return 0;
  }

  /* FIXME: better handling different nettypes */
  c = !net_is_flow(n) ?
    net_classify(n) : (IADDR_HOST | SCOPE_UNIVERSE);
  if ((c < 0) || !(c & IADDR_HOST) || ((c & IADDR_SCOPE_MASK) <= SCOPE_LINK))
  {
    log(L_WARN "Ignoring bogus route %N received via %s",
	n, ch->proto->name);
    return 0;
  }

  if (net_type_match(n, NB_DEST))
  {
    eattr *nhea = ea_find(e->attrs, &ea_gen_nexthop);
    int dest = nhea_dest(nhea);

    if (dest == RTD_NONE)
    {
      log(L_WARN "Ignoring route %N with no destination received via %s",
	  n, ch->proto->name);
      return 0;
    }

    if ((dest == RTD_UNICAST) &&
	!nexthop_is_sorted((struct nexthop_adata *) nhea->u.ptr))
    {
      log(L_WARN "Ignoring unsorted multipath route %N received via %s",
	  n, ch->proto->name);
      return 0;
    }
  }
  else if (ea_find(e->attrs, &ea_gen_nexthop))
  {
    log(L_WARN "Ignoring route %N having a nexthop attribute received via %s",
	n, ch->proto->name);
    return 0;
  }

  return 1;
}

static int
rte_same(rte *x, rte *y)
{
  /* rte.flags are not checked, as they are mostly internal to rtable */
  return
    x->attrs == y->attrs &&
    x->pflags == y->pflags &&
    x->src == y->src &&
    rte_is_filtered(x) == rte_is_filtered(y);
}

static inline int rte_is_ok(rte *e) { return e && !rte_is_filtered(e); }
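rte_find() hands rte_recalculate() a `struct rte_storage **` — the address of the pointer that points at the matching route — so unlinking and reinserting need no back-pointer and no special head-of-list case. A minimal standalone sketch of that pointer-to-pointer idiom (hypothetical `node`, `find_slot`, `unlink_key` names; not BIRD code):

```c
#include <assert.h>
#include <stddef.h>

struct node { int key; struct node *next; };

/* Return the location of the pointer pointing at the node with KEY:
 * either &head itself or some &prev->next. Never NULL; may point at NULL
 * when KEY is absent. */
static struct node **
find_slot(struct node **head, int key)
{
  struct node **k = head;
  while (*k && ((*k)->key != key))
    k = &(*k)->next;
  return k;
}

/* Unlink the node with KEY (if present) and return it. */
static struct node *
unlink_key(struct node **head, int key)
{
  struct node **slot = find_slot(head, key);
  struct node *n = *slot;
  if (n)
    *slot = n->next;
  return n;
}
```

Because the slot works the same whether it aliases the list head or an interior `next` field, insertion at the found position is just `n->next = *slot; *slot = n;` — the same shape as the `before_old` handling in rte_recalculate().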

static void
rte_recalculate(struct rt_import_hook *c, net *net, rte *new, struct rte_src *src)
{
  struct rt_import_request *req = c->req;
  struct rtable *table = c->table;
  struct rt_import_stats *stats = &c->stats;
  struct rte_storage *old_best_stored = net->routes, *old_stored = NULL;
  rte *old_best = old_best_stored ? &old_best_stored->rte : NULL;
  rte *old = NULL;

  /* If the new route is identical to the old one, we find the attributes in
   * cache and clone these with no performance drop. OTOH, if we were to lookup
   * the attributes, such a route definitely hasn't been anywhere yet,
   * therefore it's definitely worth the time. */
  struct rte_storage *new_stored = NULL;
  if (new)
    new = &(new_stored = rte_store(new, net, table))->rte;

  /* Find and remove original route from the same protocol */
  struct rte_storage **before_old = rte_find(net, src);

  if (*before_old)
  {
    old = &(old_stored = (*before_old))->rte;

    /* If there is the same route in the routing table but from
     * a different sender, then there are two paths from the
     * source protocol to this routing table through transparent
     * pipes, which is not allowed.
     * We log that and ignore the route. */
    if (old->sender != c)
    {
      if (!old->generation && !new->generation)
	bug("Two protocols claim to author a route with the same rte_src in table %s: %N %s/%u:%u",
	    c->table->name, net->n.addr, old->src->proto->name, old->src->private_id, old->src->global_id);

      log_rl(&table->rl_pipe, L_ERR "Route source collision in table %s: %N %s/%u:%u",
	  c->table->name, net->n.addr, old->src->proto->name, old->src->private_id, old->src->global_id);
    }

    if (new && rte_same(old, &new_stored->rte))
    {
      /* No changes, ignore the new route and refresh the old one */
      old->stale_cycle = new->stale_cycle;

      if (!rte_is_filtered(new))
      {
	stats->updates_ignored++;
	rt_rte_trace_in(D_ROUTES, req, new, "ignored");
      }

      /* We need to free the already stored route here before returning */
      rte_free(new_stored);
      return;
    }

    *before_old = (*before_old)->next;
    table->rt_count--;
  }

  if (!old && !new)
  {
    stats->withdraws_ignored++;
    return;
  }

  /* If rejected by import limit, we need to pretend there is no route */
  if (req->preimport && (req->preimport(req, new, old) == 0))
  {
    rte_free(new_stored);
    new_stored = NULL;
    new = NULL;
  }

  int new_ok = rte_is_ok(new);
  int old_ok = rte_is_ok(old);

  if (new_ok)
    stats->updates_accepted++;
  else if (old_ok)
    stats->withdraws_accepted++;
  else
    stats->withdraws_ignored++;

  if (old_ok || new_ok)
    table->last_rt_change = current_time();

  if (table->config->sorted)
  {
    /* If routes are sorted, just insert the new route at the appropriate position */
    if (new_stored)
    {
      struct rte_storage **k;
      if ((before_old != &net->routes) && !rte_better(new, &SKIP_BACK(struct rte_storage, next, before_old)->rte))
	k = before_old;
      else
	k = &net->routes;

      for (; *k; k = &(*k)->next)
	if (rte_better(new, &(*k)->rte))
	  break;

      new_stored->next = *k;
      *k = new_stored;

      table->rt_count++;
    }
  }
  else
  {
    /* If routes are not sorted, find the best route and move it to
       the first position. There are several optimized cases. */

    if (src->proto->rte_recalculate &&
	src->proto->rte_recalculate(table, net, new_stored ? &new_stored->rte : NULL, old, old_best))
      goto do_recalculate;

    if (new_stored && rte_better(&new_stored->rte, old_best))
    {
      /* The first case - the new route is clearly optimal,
	 we link it at the first position */

      new_stored->next = net->routes;
      net->routes = new_stored;

      table->rt_count++;
    }
    else if (old == old_best)
    {
      /* The second case - the old best route disappeared. We add the
	 new route (if we have any) to the list (we don't care about
	 its position), then we elect the new optimal route, relink
	 it to the first position and announce it. The new optimal
	 route might be NULL if there are no more routes. */

    do_recalculate:
      /* Add the new route to the list */
      if (new_stored)
      {
	new_stored->next = *before_old;
	*before_old = new_stored;

	table->rt_count++;
      }

      /* Find a new optimal route (if there is any) */
      if (net->routes)
      {
	struct rte_storage **bp = &net->routes;
	for (struct rte_storage **k = &(*bp)->next; *k; k = &(*k)->next)
	  if (rte_better(&(*k)->rte, &(*bp)->rte))
	    bp = k;

	/* And relink it */
	struct rte_storage *best = *bp;
	*bp = best->next;
	best->next = net->routes;
	net->routes = best;
      }
    }
    else if (new_stored)
    {
      /* The third case - the new route is not better than the old
	 best route (therefore old_best != NULL) and the old best
	 route was not removed (therefore old_best == net->routes).
	 We just link the new route to the old/last position. */

      new_stored->next = *before_old;
      *before_old = new_stored;

      table->rt_count++;
    }
    /* The fourth (empty) case - suboptimal route was removed, nothing to do */
  }

  if (new_stored)
  {
    new_stored->rte.lastmod = current_time();
    new_stored->rte.id = hmap_first_zero(&table->id_map);
    hmap_set(&table->id_map, new_stored->rte.id);
  }

  /* Log the route change */
  if (new_ok)
    rt_rte_trace_in(D_ROUTES, req, &new_stored->rte, new_stored == net->routes ? "added [best]" : "added");
  else if (old_ok)
  {
    if (old != old_best)
      rt_rte_trace_in(D_ROUTES, req, old, "removed");
    else if (net->routes && rte_is_ok(&net->routes->rte))
      rt_rte_trace_in(D_ROUTES, req, old, "removed [replaced]");
    else
      rt_rte_trace_in(D_ROUTES, req, old, "removed [sole]");
  }

  /* Propagate the route change */
  rte_announce(table, net, new_stored, old_stored,
	       net->routes, old_best_stored);

  if (!net->routes &&
      (table->gc_counter++ >= table->config->gc_threshold))
    rt_kick_prune_timer(table);

#if 0
  /* Enable and reimplement these callbacks if anybody wants to use them */
  if (old_ok && p->rte_remove)
    p->rte_remove(net, old);
  if (new_ok && p->rte_insert)
    p->rte_insert(net, &new_stored->rte);
#endif
}

static int rte_update_nest_cnt;		/* Nesting counter to allow recursive updates */

static inline void
rte_update_lock(void)
{
  rte_update_nest_cnt++;
}

static inline void
rte_update_unlock(void)
{
  if (!--rte_update_nest_cnt)
    lp_flush(rte_update_pool);
}

int
channel_preimport(struct rt_import_request *req, rte *new, rte *old)
{
  struct channel *c = SKIP_BACK(struct channel, in_req, req);

  if (new && !old)
    if (CHANNEL_LIMIT_PUSH(c, RX))
      return 0;

  if (!new && old)
    CHANNEL_LIMIT_POP(c, RX);

  int new_in = new && !rte_is_filtered(new);
  int old_in = old && !rte_is_filtered(old);

  if (new_in && !old_in)
    if (CHANNEL_LIMIT_PUSH(c, IN))
      if (c->in_keep & RIK_REJECTED)
      {
	new->flags |= REF_FILTERED;
	return 1;
      }
      else
	return 0;

  if (!new_in && old_in)
    CHANNEL_LIMIT_POP(c, IN);

  return 1;
}

void
rte_update(struct channel *c, const net_addr *n, rte *new, struct rte_src *src)
{
  if (!c->in_req.hook)
    return;

  ASSERT(c->channel_state == CS_UP);

  /* The import reloader requires prefilter routes to be the first layer */
  if (new && (c->in_keep & RIK_PREFILTER))
    if (ea_is_cached(new->attrs) && !new->attrs->next)
      new->attrs = ea_clone(new->attrs);
    else
      new->attrs = ea_lookup(new->attrs, 0);

  const struct filter *filter = c->in_filter;
  struct channel_import_stats *stats = &c->import_stats;

  rte_update_lock();
  if (new)
  {
    new->net = n;

    int fr;

    stats->updates_received++;
    if ((filter == FILTER_REJECT) ||
	((fr = f_run(filter, new, 0)) > F_ACCEPT))
    {
      stats->updates_filtered++;
      channel_rte_trace_in(D_FILTERS, c, new, "filtered out");

      if (c->in_keep & RIK_REJECTED)
	new->flags |= REF_FILTERED;
      else
	new = NULL;
    }

    if (new)
      if (net_is_flow(n))
	rt_flowspec_resolve_rte(new, c);
      else
	rt_next_hop_resolve_rte(new);

    if (new && !rte_validate(c, new))
    {
      channel_rte_trace_in(D_FILTERS, c, new, "invalid");
      stats->updates_invalid++;
      new = NULL;
    }
  }
  else
    stats->withdraws_received++;

  rte_import(&c->in_req, n, new, src);

  /* Now the route attributes are kept by the in-table cached version
   * and we may drop the local handle */
  if (new && (c->in_keep & RIK_PREFILTER))
  {
    /* There may be some updates on top of the original attribute block */
    ea_list *a = new->attrs;
    while (a->next)
      a = a->next;

    ea_free(a);
  }

  rte_update_unlock();
}

void
rte_import(struct rt_import_request *req, const net_addr *n, rte *new, struct rte_src *src)
{
  struct rt_import_hook *hook = req->hook;
  if (!hook)
    return;

  net *nn;
  if (new)
  {
    /* Use the actual struct network, not the dummy one */
    nn = net_get(hook->table, n);
    new->net = nn->n.addr;
    new->sender = hook;

    /* Set the stale cycle */
    new->stale_cycle = hook->stale_set;
  }
  else if (!(nn = net_find(hook->table, n)))
  {
    req->hook->stats.withdraws_ignored++;
    return;
  }

  /* And recalculate the best route */
  rte_recalculate(hook, nn, new, src);
}

/* Independent call to rte_announce(), used from next hop
   recalculation, outside of rte_update(). new must be non-NULL */
static inline void
rte_announce_i(rtable *tab, net *net, struct rte_storage *new, struct rte_storage *old,
	       struct rte_storage *new_best, struct rte_storage *old_best)
{
  rte_update_lock();
  rte_announce(tab, net, new, old, new_best, old_best);
  rte_update_unlock();
}

static inline void
rte_discard(net *net, rte *old)	/* Non-filtered route deletion, used during garbage collection */
{
  rte_update_lock();
  rte_recalculate(old->sender, net, NULL, old->src);
  rte_update_unlock();
}

/* Check rtable for the best route to a given net, i.e. whether it would be exported to p */
int
rt_examine(rtable *t, net_addr *a, struct channel *c, const struct filter *filter)
{
  net *n = net_find(t, a);

  if (!n || !rte_is_valid(RTE_OR_NULL(n->routes)))
    return 0;

  rte rt = n->routes->rte;

  rte_update_lock();

  /* Rest is stripped down export_filter() */
  int v = c->proto->preexport ? c->proto->preexport(c, &rt) : 0;
  if (v == RIC_PROCESS)
    v = (f_run(filter, &rt, FF_SILENT) <= F_ACCEPT);

  rte_update_unlock();

  return v > 0;
}

static void
rt_table_export_done(struct rt_export_hook *hook)
{
  struct rt_exporter *re = hook->table;
  struct rtable *tab = SKIP_BACK(struct rtable, exporter, re);

  rt_unlock_table(tab);
  DBG("Export hook %p in table %s finished uc=%u\n", hook, tab->name, tab->use_count);
}

static void
rt_export_stopped(void *data)
{
  struct rt_export_hook *hook = data;
  struct rt_exporter *tab = hook->table;

  /* Drop pending exports */
  CALL(tab->used, tab);

  /* Unlist */
  rem_node(&hook->n);

  /* Report the channel as stopped. */
  hook->stopped(hook->req);

  /* Report the hook as finished. */
  CALL(tab->done, hook);

  /* Free the hook. */
  rfree(hook->pool);
}

static inline void
rt_set_import_state(struct rt_import_hook *hook, u8 state)
{
  hook->last_state_change = current_time();
  hook->import_state = state;

  if (hook->req->log_state_change)
    hook->req->log_state_change(hook->req, state);
}

void
rt_set_export_state(struct rt_export_hook *hook, u8 state)
{
  hook->last_state_change = current_time();
  atomic_store_explicit(&hook->export_state, state, memory_order_release);

  if (hook->req->log_state_change)
    hook->req->log_state_change(hook->req, state);
}

void
rt_request_import(rtable *tab, struct rt_import_request *req)
{
  rt_lock_table(tab);

  struct rt_import_hook *hook = req->hook = mb_allocz(tab->rp, sizeof(struct rt_import_hook));

  DBG("Lock table %s for import %p req=%p uc=%u\n", tab->name, hook, req, tab->use_count);

  hook->req = req;
  hook->table = tab;

  rt_set_import_state(hook, TIS_UP);

  hook->n = (node) {};
  add_tail(&tab->imports, &hook->n);
}

void
rt_stop_import(struct rt_import_request *req, void (*stopped)(struct rt_import_request *))
{
  ASSERT_DIE(req->hook);
  struct rt_import_hook *hook = req->hook;

  rt_schedule_prune(hook->table);

  rt_set_import_state(hook, TIS_STOP);

  hook->stopped = stopped;
}

static struct rt_export_hook *
rt_table_export_start(struct rt_exporter *re, struct rt_export_request *req)
{
  rtable *tab = SKIP_BACK(rtable, exporter, re);
  rt_lock_table(tab);

  pool *p = rp_new(tab->rp, "Export hook");
  struct rt_export_hook *hook = mb_allocz(p, sizeof(struct rt_export_hook));
  hook->pool = p;

  /* stats zeroed by mb_allocz */
  switch (req->addr_mode)
  {
    case TE_ADDR_IN:
      if (tab->trie && net_val_match(tab->addr_type, NB_IP))
      {
	hook->walk_state = mb_allocz(p, sizeof (struct f_trie_walk_state));
	hook->walk_lock = rt_lock_trie(tab);
	trie_walk_init(hook->walk_state, tab->trie, req->addr);
	hook->event = ev_new_init(p, rt_feed_by_trie, hook);
	break;
      }
      /* fall through */
    case TE_ADDR_NONE:
      FIB_ITERATE_INIT(&hook->feed_fit, &tab->fib);
      hook->event = ev_new_init(p, rt_feed_by_fib, hook);
      break;

    case TE_ADDR_EQUAL:
      hook->event = ev_new_init(p, rt_feed_equal, hook);
      break;

    case TE_ADDR_FOR:
      hook->event = ev_new_init(p, rt_feed_for, hook);
      break;

    default:
      bug("Requested an unknown export address mode");
  }

  DBG("New export hook %p req %p in table %s uc=%u\n", hook, req, tab->name, tab->use_count);

  return hook;
}

void
rt_request_export(struct rt_exporter *re, struct rt_export_request *req)
{
  struct rt_export_hook *hook = req->hook = re->start(re, req);

  hook->req = req;
  hook->table = re;

  bmap_init(&hook->seq_map, hook->pool, 1024);

  struct rt_pending_export *rpe = rt_last_export(hook->table);
  DBG("store hook=%p last_export=%p seq=%lu\n", hook, rpe, rpe ? rpe->seq : 0);
  atomic_store_explicit(&hook->last_export, rpe, memory_order_relaxed);

  hook->n = (node) {};
  add_tail(&re->hooks, &hook->n);

  /* Regular export */
  rt_set_export_state(hook, TES_FEEDING);
  rt_send_export_event(hook);
}

static void
rt_table_export_stop(struct rt_export_hook *hook)
{
  rtable *tab = SKIP_BACK(rtable, exporter, hook->table);

  if (atomic_load_explicit(&hook->export_state, memory_order_relaxed) != TES_FEEDING)
    return;

  switch (hook->req->addr_mode)
  {
    case TE_ADDR_IN:
      if (hook->walk_lock)
      {
	rt_unlock_trie(tab, hook->walk_lock);
	hook->walk_lock = NULL;
	mb_free(hook->walk_state);
	hook->walk_state = NULL;
	break;
      }
      /* fall through */
    case TE_ADDR_NONE:
      fit_get(&tab->fib, &hook->feed_fit);
      break;
  }
}

void
rt_stop_export(struct rt_export_request *req, void (*stopped)(struct rt_export_request *))
{
  ASSERT_DIE(req->hook);
  struct rt_export_hook *hook = req->hook;

  /* Cancel the feeder event */
  ev_postpone(hook->event);

  /* Stop feeding from the exporter */
  CALL(hook->table->stop, hook);

  /* Reset the event as the stopped event */
  hook->event->hook = rt_export_stopped;
  hook->stopped = stopped;

  /* Update export state */
  rt_set_export_state(hook, TES_STOP);

  /* Run the stopped event */
  rt_send_export_event(hook);
}

/**
 * rt_refresh_begin - start a refresh cycle
 * @req: related import request
 *
 * This function starts a refresh cycle for the given import request. The
 * refresh cycle is a sequence where the protocol sends all its valid routes
 * to the routing table (by rte_update()). After that, all protocol routes
 * (more precisely routes with @req->hook as their sender) not sent during
 * the refresh cycle but still in the table from the past are pruned. This is
 * implemented by a per-hook stale counter: rt_refresh_begin() increments
 * @stale_set, every route inserted during the cycle is stamped with that
 * value as its @stale_cycle, rt_refresh_end() advances @stale_valid to match
 * and schedules the prune loop, which then removes routes whose @stale_cycle
 * does not fit between @stale_valid and @stale_set.
 */
void
rt_refresh_begin(struct rt_import_request *req)
{
  struct rt_import_hook *hook = req->hook;
  ASSERT_DIE(hook);
  ASSERT_DIE(hook->stale_set == hook->stale_valid);

  /* If the pruning routine is too slow */
  if ((hook->stale_pruned < hook->stale_valid) && (hook->stale_pruned + 128 < hook->stale_valid)
      || (hook->stale_pruned > hook->stale_valid) && (hook->stale_pruned > hook->stale_valid + 128))
  {
    log(L_WARN "Route refresh flood in table %s", hook->table->name);
    FIB_WALK(&hook->table->fib, net, n)
    {
      for (struct rte_storage *e = n->routes; e; e = e->next)
	if (e->rte.sender == req->hook)
	  e->rte.stale_cycle = 0;
    }
    FIB_WALK_END;
    hook->stale_set = 1;
    hook->stale_valid = 0;
    hook->stale_pruned = 0;
  }
  /* Setting a new value of the stale modifier */
  else if (!++hook->stale_set)
  {
    /* Let's reserve the stale_cycle zero value for always-invalid routes */
    hook->stale_set = 1;
    hook->stale_valid = 0;
  }

  if (req->trace_routes & D_STATES)
    log(L_TRACE "%s: route refresh begin [%u]", req->name, hook->stale_set);
}

/**
 * rt_refresh_end - end a refresh cycle
 * @req: related import request
 *
 * This function ends a refresh cycle for the given import request. See
 * rt_refresh_begin() for description of refresh cycles.
 */
void
rt_refresh_end(struct rt_import_request *req)
{
  struct rt_import_hook *hook = req->hook;
  ASSERT_DIE(hook);

  hook->stale_valid++;
  ASSERT_DIE(hook->stale_set == hook->stale_valid);

  rt_schedule_prune(hook->table);

  if (req->trace_routes & D_STATES)
    log(L_TRACE "%s: route refresh end [%u]", req->name, hook->stale_valid);
}

/**
 * rte_dump - dump a route
 * @e: &rte to be dumped
 *
 * This function dumps contents of a &rte to debug output.
 */
void
rte_dump(struct rte_storage *e)
{
  debug("%-1N ", e->rte.net);
  debug("PF=%02x ", e->rte.pflags);
  ea_dump(e->rte.attrs);
  debug("\n");
}

/**
 * rt_dump - dump a routing table
 * @t: routing table to be dumped
 *
 * This function dumps contents of a given routing table to debug output.
 */
void
rt_dump(rtable *t)
{
  debug("Dump of routing table <%s>%s\n", t->name, t->deleted ? " (deleted)" : "");
#ifdef DEBUGGING
  fib_check(&t->fib);
#endif
  FIB_WALK(&t->fib, net, n)
  {
    for (struct rte_storage *e = n->routes; e; e = e->next)
      rte_dump(e);
  }
  FIB_WALK_END;
  debug("\n");
}

/**
 * rt_dump_all - dump all routing tables
 *
 * This function dumps contents of all routing tables to debug output.
 */
void
rt_dump_all(void)
{
  rtable *t;
  node *n;

  WALK_LIST2(t, n, routing_tables, n)
    rt_dump(t);

  WALK_LIST2(t, n, deleted_routing_tables, n)
    rt_dump(t);
}

void
rt_dump_hooks(rtable *tab)
{
  debug("Dump of hooks in routing table <%s>%s\n", tab->name, tab->deleted ? " (deleted)" : "");
  debug(" nhu_state=%u hcu_scheduled=%u use_count=%d rt_count=%u\n",
      tab->nhu_state, tab->hcu_scheduled, tab->use_count, tab->rt_count);
  debug(" last_rt_change=%t gc_time=%t gc_counter=%d prune_state=%u\n",
      tab->last_rt_change, tab->gc_time, tab->gc_counter, tab->prune_state);

  struct rt_import_hook *ih;
  WALK_LIST(ih, tab->imports)
  {
    ih->req->dump_req(ih->req);
    debug(" Import hook %p requested by %p: pref=%u"
	" last_state_change=%t import_state=%u stopped=%p\n",
	ih, ih->req, ih->stats.pref,
	ih->last_state_change, ih->import_state, ih->stopped);
  }

  struct rt_export_hook *eh;
  WALK_LIST(eh, tab->exporter.hooks)
  {
    eh->req->dump_req(eh->req);
    debug(" Export hook %p requested by %p:"
	" refeed_pending=%u last_state_change=%t export_state=%u\n",
	eh, eh->req, eh->refeed_pending, eh->last_state_change, atomic_load_explicit(&eh->export_state, memory_order_relaxed));
  }
  debug("\n");
}

void
rt_dump_hooks_all(void)
{
  rtable *t;
  node *n;

  debug("Dump of all table hooks\n");

  WALK_LIST2(t, n, routing_tables, n)
    rt_dump_hooks(t);

  WALK_LIST2(t, n, deleted_routing_tables, n)
    rt_dump_hooks(t);
}

static inline void
rt_schedule_hcu(rtable *tab)
{
  if (tab->hcu_scheduled)
    return;

  tab->hcu_scheduled = 1;
  ev_schedule(tab->rt_event);
}

static inline void
rt_schedule_nhu(rtable *tab)
{
  if (tab->nhu_state == NHU_CLEAN)
    ev_schedule(tab->rt_event);

  /* state change:
   *   NHU_CLEAN -> NHU_SCHEDULED
   *   NHU_RUNNING -> NHU_DIRTY
   */
  tab->nhu_state |= NHU_SCHEDULED;
}

void
rt_schedule_prune(rtable *tab)
{
  if (tab->prune_state == 0)
    ev_schedule(tab->rt_event);

  /* state change 0->1, 2->3 */
  tab->prune_state |= 1;
}

static void
rt_export_used(struct rt_exporter *e)
{
  rtable *tab = SKIP_BACK(rtable, exporter, e);

  if (config->table_debug)
    log(L_TRACE "%s: Export cleanup requested", tab->name);

  if (tab->export_used)
    return;

  tab->export_used = 1;
  ev_schedule(tab->rt_event);
}

static void
rt_event(void *ptr)
{
  rtable *tab = ptr;

  rt_lock_table(tab);

  if (tab->export_used)
    rt_export_cleanup(tab);

  if (tab->hcu_scheduled)
    rt_update_hostcache(tab);

  if (tab->nhu_state)
    rt_next_hop_update(tab);

  if (tab->prune_state)
    rt_prune_table(tab);

  rt_unlock_table(tab);
}

static void
rt_prune_timer(timer *t)
{
  rtable *tab = t->data;

  if (tab->gc_counter >= tab->config->gc_threshold)
    rt_schedule_prune(tab);
}

static void
rt_kick_prune_timer(rtable *tab)
{
  /* Return if prune is already scheduled */
  if (tm_active(tab->prune_timer) || (tab->prune_state & 1))
    return;

  /* Randomize GC period to +/- 50% */
  btime gc_period = tab->config->gc_period;
  gc_period = (gc_period / 2) + (random_u32() % (uint) gc_period);
  tm_start(tab->prune_timer, gc_period);
}

static inline btime
rt_settled_time(rtable *tab)
{
  ASSUME(tab->base_settle_time != 0);

  return MIN(tab->last_rt_change + tab->config->min_settle_time,
	     tab->base_settle_time + tab->config->max_settle_time);
}

static void
rt_settle_timer(timer *t)
{
  rtable *tab = t->data;

  if (!tab->base_settle_time)
    return;

  btime settled_time = rt_settled_time(tab);
  if (current_time() < settled_time)
  {
    tm_set(tab->settle_timer, settled_time);
    return;
  }

  /* Settled */
  tab->base_settle_time = 0;

  struct rt_subscription *s;
  WALK_LIST(s, tab->subscribers)
    s->hook(s);
}

static void
rt_kick_settle_timer(rtable *tab)
{
  tab->base_settle_time = current_time();

  if (!tab->settle_timer)
    tab->settle_timer = tm_new_init(tab->rp, rt_settle_timer, tab, 0, 0);

  if (!tm_active(tab->settle_timer))
    tm_set(tab->settle_timer, rt_settled_time(tab));
}

static inline void
rt_schedule_notify(rtable *tab)
{
  if (EMPTY_LIST(tab->subscribers))
    return;

  if (tab->base_settle_time)
    return;

  rt_kick_settle_timer(tab);
}

void
rt_subscribe(rtable *tab, struct rt_subscription *s)
{
  s->tab = tab;
  rt_lock_table(tab);
  DBG("rt_subscribe(%s)\n", tab->name);
  add_tail(&tab->subscribers, &s->n);
}

void
rt_unsubscribe(struct rt_subscription *s)
{
  rem_node(&s->n);
  rt_unlock_table(s->tab);
}

static struct rt_flowspec_link *
rt_flowspec_find_link(rtable *src, rtable *dst)
{
  struct rt_flowspec_link *ln;
  WALK_LIST(ln, src->flowspec_links)
    if ((ln->src == src) && (ln->dst == dst))
      return ln;

  return NULL;
}

void
rt_flowspec_link(rtable *src, rtable *dst)
{
  ASSERT(rt_is_ip(src));
  ASSERT(rt_is_flow(dst));

  struct rt_flowspec_link *ln = rt_flowspec_find_link(src, dst);

  if (!ln)
  {
    rt_lock_table(src);
    rt_lock_table(dst);

    ln = mb_allocz(src->rp, sizeof(struct rt_flowspec_link));
    ln->src = src;
    ln->dst = dst;
    add_tail(&src->flowspec_links, &ln->n);
  }

  ln->uc++;
}

void
rt_flowspec_unlink(rtable *src, rtable *dst)
{
  struct rt_flowspec_link *ln = rt_flowspec_find_link(src, dst);

  ASSERT(ln && (ln->uc > 0));

  ln->uc--;

  if (!ln->uc)
  {
    rem_node(&ln->n);
    mb_free(ln);

    rt_unlock_table(src);
    rt_unlock_table(dst);
  }
}

static void
rt_flowspec_notify(rtable *src, net *net)
{
  /* Only IP tables are src links */
  ASSERT(rt_is_ip(src));

  struct rt_flowspec_link *ln;
  WALK_LIST(ln, src->flowspec_links)
  {
    rtable *dst = ln->dst;
    ASSERT(rt_is_flow(dst));

    /* No need to inspect it further if recalculation is already active */
    if ((dst->nhu_state == NHU_SCHEDULED) || (dst->nhu_state == NHU_DIRTY))
      continue;

    if (trie_match_net(dst->flowspec_trie, net->n.addr))
      rt_schedule_nhu(dst);
  }
}

static void
rt_flowspec_reset_trie(rtable *tab)
{
  linpool *lp = tab->flowspec_trie->lp;
  int ipv4 = tab->flowspec_trie->ipv4;

  lp_flush(lp);
  tab->flowspec_trie = f_new_trie(lp, 0);
  tab->flowspec_trie->ipv4 = ipv4;
}

static void
rt_free(resource *_r)
{
  rtable *r = (rtable *) _r;

  DBG("Deleting routing table %s\n", r->name);
  ASSERT_DIE(r->use_count == 0);

  r->config->table = NULL;
  rem_node(&r->n);

  if (r->hostcache)
    rt_free_hostcache(r);

  /* Freed automagically by the resource pool
  fib_free(&r->fib);
  hmap_free(&r->id_map);
  rfree(r->rt_event);
  rfree(r->settle_timer);
  mb_free(r);
  */
}

static void
rt_res_dump(resource *_r)
{
  rtable *r = (rtable *) _r;
  debug("name \"%s\", addr_type=%s, rt_count=%u, use_count=%d\n",
      r->name, net_label[r->addr_type], r->rt_count, r->use_count);
}

static struct resclass rt_class = {
  .name = "Routing table",
  .size = sizeof(struct rtable),
  .free = rt_free,
  .dump = rt_res_dump,
  .lookup = NULL,
  .memsize = NULL,
};

rtable *
rt_setup(pool *pp, struct rtable_config *cf)
{
  pool *p = rp_newf(pp, "Routing table %s", cf->name);

  rtable *t = ralloc(p, &rt_class);
  t->rp = p;

  t->rte_slab = sl_new(p, sizeof(struct rte_storage));

  t->name = cf->name;
  t->config = cf;
  t->addr_type = cf->addr_type;

  fib_init(&t->fib, p, t->addr_type, sizeof(net), OFFSETOF(net, n), 0, NULL);

  if (cf->trie_used)
  {
    t->trie = f_new_trie(lp_new_default(p), 0);
    t->trie->ipv4 = net_val_match(t->addr_type, NB_IP4 | NB_VPN4 | NB_ROA4);

    t->fib.init = net_init_with_trie;
  }

  init_list(&t->flowspec_links);

  t->exporter = (struct rt_exporter) {
    .addr_type = t->addr_type,
    .start = rt_table_export_start,
    .stop = rt_table_export_stop,
    .done = rt_table_export_done,
    .used = rt_export_used,
  };

  init_list(&t->exporter.hooks);
  init_list(&t->exporter.pending);

  init_list(&t->imports);

  hmap_init(&t->id_map, p, 1024);
  hmap_set(&t->id_map, 0);

  init_list(&t->subscribers);

  t->rt_event = ev_new_init(p, rt_event, t);
  t->prune_timer = tm_new_init(p, rt_prune_timer, t, 0, 0);
  t->exporter.export_timer = tm_new_init(p, rt_announce_exports, t, 0, 0);
  t->last_rt_change = t->gc_time = current_time();
  t->exporter.next_seq = 1;

  t->rl_pipe = (struct tbf) TBF_DEFAULT_LOG_LIMITS;

  if (rt_is_flow(t))
  {
    t->flowspec_trie = f_new_trie(lp_new_default(p), 0);
    t->flowspec_trie->ipv4 = (t->addr_type == NET_FLOW4);
  }

  return t;
}

/**
 * rt_init - initialize routing tables
 *
 * This function is called during BIRD startup. It initializes the
 * routing table module.
 */
void
rt_init(void)
{
  rta_init();
  rt_table_pool = rp_new(&root_pool, "Routing tables");
  rte_update_pool = lp_new_default(rt_table_pool);
  init_list(&routing_tables);
  init_list(&deleted_routing_tables);
}
|
1999-02-13 19:15:28 +00:00
|
|
|
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/**
|
|
|
|
* rt_prune_table - prune a routing table
|
|
|
|
*
|
|
|
|
* The prune loop scans routing tables and removes routes belonging to flushing
|
|
|
|
* protocols, discarded routes and also stale network entries. It is called from
|
|
|
|
* rt_event(). The event is rescheduled if the current iteration does not finish
|
|
|
|
* the table. The pruning is directed by the prune state (@prune_state),
|
|
|
|
* specifying whether the prune cycle is scheduled or running, and there
|
|
|
|
* is also a persistent pruning iterator (@prune_fit).
|
|
|
|
*
|
|
|
|
* The prune loop is also used for channel flushing. For this purpose, the
|
|
|
|
* channels to flush are marked before the iteration and notified after the
|
|
|
|
* iteration.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
rt_prune_table(rtable *tab)
|
2012-03-28 16:40:04 +00:00
|
|
|
{
|
|
|
|
struct fib_iterator *fit = &tab->prune_fit;
|
2022-05-15 13:05:13 +00:00
|
|
|
int limit = 2000;
|
2016-01-26 10:48:58 +00:00
|
|
|
|
2021-06-21 15:07:31 +00:00
|
|
|
struct rt_import_hook *ih;
|
2016-01-26 10:48:58 +00:00
|
|
|
node *n, *x;
|
1999-02-13 19:15:28 +00:00
|
|
|
|
|
|
|
DBG("Pruning route table %s\n", tab->name);
|
2000-05-08 22:33:02 +00:00
|
|
|
#ifdef DEBUGGING
|
|
|
|
fib_check(&tab->fib);
|
|
|
|
#endif
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
if (tab->prune_state == 0)
|
|
|
|
return;
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
if (tab->prune_state == 1)
|
|
|
|
{
|
|
|
|
/* Mark channels to flush */
|
2021-06-21 15:07:31 +00:00
|
|
|
WALK_LIST2(ih, n, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_STOP)
|
|
|
|
rt_set_import_state(ih, TIS_FLUSHING);
|
Route refresh in tables uses a stale counter.
Until now, we were marking routes as REF_STALE and REF_DISCARD to
clean up old routes after a route refresh. This needed a synchronous route
table walk at both the beginning and the end of the route refresh routine,
marking the routes by these flags.
We avoid these walks by using a stale counter. Every route contains:
u8 stale_cycle;
Every import hook contains:
u8 stale_set;
u8 stale_valid;
u8 stale_pruned;
u8 stale_pruning;
In the base state, stale_set == stale_valid == stale_pruned == stale_pruning,
and every route's stale_cycle has the same value.
The route refresh then looks as follows:
+ ----------- + --------- + ----------- + ------------- + ------------ +
| | stale_set | stale_valid | stale_pruning | stale_pruned |
| Base | x | x | x | x |
| Begin | x+1 | x | x | x |
... now routes are being inserted with stale_cycle == (x+1)
| End | x+1 | x+1 | x | x |
... now table pruning routine is scheduled
| Prune begin | x+1 | x+1 | x+1 | x |
... now routes with stale_cycle not between stale_set and stale_valid
are deleted
| Prune end | x+1 | x+1 | x+1 | x+1 |
+ ----------- + --------- + ----------- + ------------- + ------------ +
The pruning routine is asynchronous and may have high latency in
high-load environments. Therefore, multiple route refresh requests may
happen before the pruning routine starts, leading to this situation:
| Prune begin | x+k | x+k | x -> x+k | x |
... or even
| Prune begin | x+k+1 | x+k | x -> x+k | x |
... if the prune event starts while another route refresh is running.
In such a case, the pruning routine still deletes routes not fitting
between stale_set and stale_valid, effectively pruning the remnants
of all unpruned route refreshes from before:
| Prune end | x+k | x+k | x+k | x+k |
In extremely rare cases, too many route refreshes may happen before
any route prune routine finishes. If the difference between
stale_valid and stale_pruned exceeds 128 when another route refresh
is requested, the routine walks the table synchronously and
resets all the stale values to the base state, while logging a warning.
2022-07-12 08:36:10 +00:00
|
|
|
else if ((ih->stale_valid != ih->stale_pruning) && (ih->stale_pruning == ih->stale_pruned))
|
|
|
|
{
|
|
|
|
ih->stale_pruning = ih->stale_valid;
|
|
|
|
|
|
|
|
if (ih->req->trace_routes & D_STATES)
|
|
|
|
log(L_TRACE "%s: table prune after refresh begin [%u]", ih->req->name, ih->stale_pruning);
|
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
|
|
|
|
FIB_ITERATE_INIT(fit, &tab->fib);
|
|
|
|
tab->prune_state = 2;
|
2022-02-03 05:08:51 +00:00
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
tab->gc_counter = 0;
|
|
|
|
tab->gc_time = current_time();
|
|
|
|
|
2022-02-03 05:08:51 +00:00
|
|
|
if (tab->prune_trie)
|
|
|
|
{
|
|
|
|
/* Init prefix trie pruning */
|
|
|
|
tab->trie_new = f_new_trie(lp_new_default(tab->rp), 0);
|
|
|
|
tab->trie_new->ipv4 = tab->trie->ipv4;
|
|
|
|
}
|
2016-01-26 10:48:58 +00:00
|
|
|
}
|
2012-03-28 16:40:04 +00:00
|
|
|
|
1999-04-12 18:01:07 +00:00
|
|
|
again:
|
2015-12-21 19:16:05 +00:00
|
|
|
FIB_ITERATE_START(&tab->fib, fit, net, n)
|
1999-02-13 19:15:28 +00:00
|
|
|
{
|
1999-04-12 18:01:07 +00:00
|
|
|
rescan:
|
2022-02-03 05:08:51 +00:00
|
|
|
if (limit <= 0)
|
|
|
|
{
|
|
|
|
FIB_ITERATE_PUT(fit);
|
|
|
|
ev_schedule(tab->rt_event);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
for (struct rte_storage *e=n->routes; e; e=e->next)
|
2018-07-31 16:40:38 +00:00
|
|
|
{
|
2022-07-12 08:36:10 +00:00
|
|
|
struct rt_import_hook *s = e->rte.sender;
|
|
|
|
if ((s->import_state == TIS_FLUSHING) ||
|
|
|
|
(e->rte.stale_cycle < s->stale_valid) ||
|
|
|
|
(e->rte.stale_cycle > s->stale_set))
|
1999-04-12 18:01:07 +00:00
|
|
|
{
|
2020-01-28 10:42:46 +00:00
|
|
|
rte_discard(n, &e->rte);
|
2016-01-26 10:48:58 +00:00
|
|
|
limit--;
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2018-07-31 16:40:38 +00:00
|
|
|
goto rescan;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-09-27 11:04:16 +00:00
|
|
|
if (!n->routes && !n->first) /* Orphaned FIB entry */
|
1999-02-13 19:15:28 +00:00
|
|
|
{
|
2015-12-21 19:16:05 +00:00
|
|
|
FIB_ITERATE_PUT(fit);
|
|
|
|
fib_delete(&tab->fib, n);
|
1999-04-12 18:01:07 +00:00
|
|
|
goto again;
|
1999-02-13 19:15:28 +00:00
|
|
|
}
|
2022-02-03 05:08:51 +00:00
|
|
|
|
|
|
|
if (tab->trie_new)
|
|
|
|
{
|
|
|
|
trie_add_prefix(tab->trie_new, n->n.addr, n->n.addr->pxlen, n->n.addr->pxlen);
|
|
|
|
limit--;
|
|
|
|
}
|
1999-02-13 19:15:28 +00:00
|
|
|
}
|
2015-12-21 19:16:05 +00:00
|
|
|
FIB_ITERATE_END;
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2000-05-08 22:33:02 +00:00
|
|
|
#ifdef DEBUGGING
|
|
|
|
fib_check(&tab->fib);
|
|
|
|
#endif
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/* state change 2->0, 3->1 */
|
|
|
|
tab->prune_state &= 1;
|
2014-03-20 13:07:12 +00:00
|
|
|
|
2022-02-03 05:08:51 +00:00
|
|
|
if (tab->trie_new)
|
|
|
|
{
|
|
|
|
/* Finish prefix trie pruning */
|
2022-02-04 04:34:02 +00:00
|
|
|
|
|
|
|
if (!tab->trie_lock_count)
|
|
|
|
{
|
|
|
|
rfree(tab->trie->lp);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
ASSERT(!tab->trie_old);
|
|
|
|
tab->trie_old = tab->trie;
|
|
|
|
tab->trie_old_lock_count = tab->trie_lock_count;
|
|
|
|
tab->trie_lock_count = 0;
|
|
|
|
}
|
|
|
|
|
2022-02-03 05:08:51 +00:00
|
|
|
tab->trie = tab->trie_new;
|
|
|
|
tab->trie_new = NULL;
|
|
|
|
tab->prune_trie = 0;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Schedule prefix trie pruning */
|
2022-02-04 04:34:02 +00:00
|
|
|
if (tab->trie && !tab->trie_old && (tab->trie->prefix_count > (2 * tab->fib.entries)))
|
2022-02-03 05:08:51 +00:00
|
|
|
{
|
|
|
|
/* state change 0->1, 2->3 */
|
|
|
|
tab->prune_state |= 1;
|
|
|
|
tab->prune_trie = 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
rt_prune_sources();
|
2012-03-28 16:40:04 +00:00
|
|
|
|
2021-09-27 11:04:16 +00:00
|
|
|
uint flushed_channels = 0;
|
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/* Close flushed channels */
|
2021-06-21 15:07:31 +00:00
|
|
|
WALK_LIST2_DELSAFE(ih, n, x, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_FLUSHING)
|
|
|
|
{
|
2022-07-15 12:57:02 +00:00
|
|
|
ih->flush_seq = tab->exporter.next_seq;
|
2021-09-27 11:04:16 +00:00
|
|
|
rt_set_import_state(ih, TIS_WAITING);
|
|
|
|
flushed_channels++;
|
2021-06-21 15:07:31 +00:00
|
|
|
}
|
2022-07-12 08:36:10 +00:00
|
|
|
else if (ih->stale_pruning != ih->stale_pruned)
|
|
|
|
{
|
|
|
|
ih->stale_pruned = ih->stale_pruning;
|
|
|
|
if (ih->req->trace_routes & D_STATES)
|
|
|
|
log(L_TRACE "%s: table prune after refresh end [%u]", ih->req->name, ih->stale_pruned);
|
|
|
|
}
|
2021-09-27 11:04:16 +00:00
|
|
|
|
|
|
|
/* In some cases, we may want to directly proceed to export cleanup */
|
2022-07-15 12:57:02 +00:00
|
|
|
if (EMPTY_LIST(tab->exporter.hooks) && flushed_channels)
|
2021-09-27 11:04:16 +00:00
|
|
|
rt_export_cleanup(tab);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rt_export_cleanup(rtable *tab)
|
|
|
|
{
|
|
|
|
tab->export_used = 0;
|
|
|
|
|
|
|
|
u64 min_seq = ~((u64) 0);
|
|
|
|
struct rt_pending_export *last_export_to_free = NULL;
|
2022-07-15 12:57:02 +00:00
|
|
|
struct rt_pending_export *first = tab->exporter.first;
|
2021-09-27 11:04:16 +00:00
|
|
|
|
|
|
|
struct rt_export_hook *eh;
|
|
|
|
node *n;
|
2022-07-15 12:57:02 +00:00
|
|
|
WALK_LIST2(eh, n, tab->exporter.hooks, n)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
|
|
|
switch (atomic_load_explicit(&eh->export_state, memory_order_acquire))
|
|
|
|
{
|
|
|
|
case TES_DOWN:
|
|
|
|
continue;
|
|
|
|
|
|
|
|
case TES_READY:
|
|
|
|
{
|
|
|
|
struct rt_pending_export *last = atomic_load_explicit(&eh->last_export, memory_order_acquire);
|
|
|
|
if (!last)
|
|
|
|
/* No last export means that the channel has exported nothing since last cleanup */
|
|
|
|
goto done;
|
|
|
|
|
|
|
|
else if (min_seq > last->seq)
|
|
|
|
{
|
|
|
|
min_seq = last->seq;
|
|
|
|
last_export_to_free = last;
|
|
|
|
}
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
default:
|
|
|
|
/* It's only safe to cleanup when the export state is idle or regular. No feeding or stopping allowed. */
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
tab->exporter.first = last_export_to_free ? rt_next_export_fast(last_export_to_free) : NULL;
|
2021-09-27 11:04:16 +00:00
|
|
|
|
|
|
|
if (config->table_debug)
|
2022-07-15 12:57:02 +00:00
|
|
|
log(L_TRACE "%s: Export cleanup, old exporter.first seq %lu, new %lu, min_seq %ld",
|
2021-09-27 11:04:16 +00:00
|
|
|
tab->name,
|
2022-07-15 12:57:02 +00:00
|
|
|
first ? first->seq : 0,
|
|
|
|
tab->exporter.first ? tab->exporter.first->seq : 0,
|
2021-09-27 11:04:16 +00:00
|
|
|
min_seq);
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
WALK_LIST2(eh, n, tab->exporter.hooks, n)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
|
|
|
if (atomic_load_explicit(&eh->export_state, memory_order_acquire) != TES_READY)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
struct rt_pending_export *last = atomic_load_explicit(&eh->last_export, memory_order_acquire);
|
|
|
|
if (last == last_export_to_free)
|
|
|
|
{
|
|
|
|
/* This may fail when the channel managed to export more in the meantime. This is OK. */
|
|
|
|
atomic_compare_exchange_strong_explicit(
|
|
|
|
&eh->last_export, &last, NULL,
|
|
|
|
memory_order_release,
|
|
|
|
memory_order_relaxed);
|
|
|
|
|
|
|
|
DBG("store hook=%p last_export=NULL\n", eh);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
while (first && (first->seq <= min_seq))
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2022-07-15 12:57:02 +00:00
|
|
|
ASSERT_DIE(first->new || first->old);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
const net_addr *n = first->new ?
|
|
|
|
first->new->rte.net :
|
|
|
|
first->old->rte.net;
|
2021-09-27 11:04:16 +00:00
|
|
|
net *net = SKIP_BACK(struct network, n.addr, (net_addr (*)[0]) n);
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
ASSERT_DIE(net->first == first);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
if (first == net->last)
|
2021-09-27 11:04:16 +00:00
|
|
|
/* The only export here */
|
|
|
|
net->last = net->first = NULL;
|
|
|
|
else
|
|
|
|
/* First is now the next one */
|
2022-07-15 12:57:02 +00:00
|
|
|
net->first = atomic_load_explicit(&first->next, memory_order_relaxed);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
|
|
|
/* For now, the old route may be finally freed */
|
2022-07-15 12:57:02 +00:00
|
|
|
if (first->old)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
2022-07-15 12:57:02 +00:00
|
|
|
rt_rte_trace_in(D_ROUTES, first->old->rte.sender->req, &first->old->rte, "freed");
|
|
|
|
hmap_clear(&tab->id_map, first->old->rte.id);
|
|
|
|
rte_free(first->old);
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef LOCAL_DEBUG
|
2022-07-15 12:57:02 +00:00
|
|
|
memset(first, 0xbd, sizeof(struct rt_pending_export));
|
2021-09-27 11:04:16 +00:00
|
|
|
#endif
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
struct rt_export_block *reb = HEAD(tab->exporter.pending);
|
|
|
|
ASSERT_DIE(reb == PAGE_HEAD(first));
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
u32 pos = (first - &reb->export[0]);
|
2021-09-27 11:04:16 +00:00
|
|
|
u32 end = atomic_load_explicit(&reb->end, memory_order_relaxed);
|
|
|
|
ASSERT_DIE(pos < end);
|
|
|
|
|
|
|
|
struct rt_pending_export *next = NULL;
|
|
|
|
|
|
|
|
if (++pos < end)
|
|
|
|
next = &reb->export[pos];
|
|
|
|
else
|
|
|
|
{
|
|
|
|
rem_node(&reb->n);
|
|
|
|
|
|
|
|
#ifdef LOCAL_DEBUG
|
|
|
|
memset(reb, 0xbe, page_size);
|
|
|
|
#endif
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
free_page(reb);
|
2021-09-27 11:04:16 +00:00
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
if (EMPTY_LIST(tab->exporter.pending))
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
|
|
|
if (config->table_debug)
|
|
|
|
log(L_TRACE "%s: Resetting export seq", tab->name);
|
|
|
|
|
|
|
|
node *n;
|
2022-07-15 12:57:02 +00:00
|
|
|
WALK_LIST2(eh, n, tab->exporter.hooks, n)
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
|
|
|
if (atomic_load_explicit(&eh->export_state, memory_order_acquire) != TES_READY)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
ASSERT_DIE(atomic_load_explicit(&eh->last_export, memory_order_acquire) == NULL);
|
|
|
|
bmap_reset(&eh->seq_map, 1024);
|
|
|
|
}
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
tab->exporter.next_seq = 1;
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2022-07-15 12:57:02 +00:00
|
|
|
reb = HEAD(tab->exporter.pending);
|
2021-09-27 11:04:16 +00:00
|
|
|
next = &reb->export[0];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
first = next;
|
2021-09-27 11:04:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
done:;
|
|
|
|
struct rt_import_hook *ih; node *x;
|
|
|
|
_Bool imports_stopped = 0;
|
|
|
|
WALK_LIST2_DELSAFE(ih, n, x, tab->imports, n)
|
|
|
|
if (ih->import_state == TIS_WAITING)
|
2022-07-15 12:57:02 +00:00
|
|
|
if (!first || (first->seq >= ih->flush_seq))
|
2021-09-27 11:04:16 +00:00
|
|
|
{
|
|
|
|
ih->import_state = TIS_CLEARED;
|
|
|
|
ih->stopped(ih->req);
|
|
|
|
rem_node(&ih->n);
|
|
|
|
mb_free(ih);
|
|
|
|
rt_unlock_table(tab);
|
|
|
|
imports_stopped = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (tab->export_used)
|
|
|
|
ev_schedule(tab->rt_event);
|
|
|
|
|
|
|
|
if (imports_stopped)
|
|
|
|
{
|
|
|
|
if (config->table_debug)
|
|
|
|
log(L_TRACE "%s: Sources pruning routine requested", tab->name);
|
|
|
|
|
|
|
|
rt_prune_sources();
|
|
|
|
}
|
|
|
|
|
2022-07-15 12:57:02 +00:00
|
|
|
if (EMPTY_LIST(tab->exporter.pending) && tm_active(tab->exporter.export_timer))
|
|
|
|
tm_stop(tab->exporter.export_timer);
|
1999-05-17 20:14:52 +00:00
|
|
|
}
|
|
|
|
|
2022-02-04 04:34:02 +00:00
|
|
|
/**
|
|
|
|
* rt_lock_trie - lock a prefix trie of a routing table
|
|
|
|
* @tab: routing table with prefix trie to be locked
|
|
|
|
*
|
|
|
|
* The prune loop may rebuild the prefix trie and invalidate f_trie_walk_state
|
|
|
|
* structures. Therefore, asynchronous walks should lock the prefix trie using
|
|
|
|
* this function. That allows the prune loop to rebuild the trie, but postpones
|
|
|
|
* its freeing until all walks are done (unlocked by rt_unlock_trie()).
|
|
|
|
*
|
|
|
|
* Returns the current trie, which gets locked; the value should be passed back to
|
|
|
|
* rt_unlock_trie() for unlocking.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
struct f_trie *
|
|
|
|
rt_lock_trie(rtable *tab)
|
|
|
|
{
|
|
|
|
ASSERT(tab->trie);
|
|
|
|
|
|
|
|
tab->trie_lock_count++;
|
|
|
|
return tab->trie;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* rt_unlock_trie - unlock a prefix trie of a routing table
|
|
|
|
* @tab: routing table with prefix trie to be unlocked
|
|
|
|
* @trie: value returned by matching rt_lock_trie()
|
|
|
|
*
|
|
|
|
* Called for a trie locked by rt_lock_trie() after the walk over the trie is done.
|
|
|
|
* It may free the trie and schedule next trie pruning.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
rt_unlock_trie(rtable *tab, struct f_trie *trie)
|
|
|
|
{
|
|
|
|
ASSERT(trie);
|
|
|
|
|
|
|
|
if (trie == tab->trie)
|
|
|
|
{
|
|
|
|
/* Unlock the current prefix trie */
|
|
|
|
ASSERT(tab->trie_lock_count);
|
|
|
|
tab->trie_lock_count--;
|
|
|
|
}
|
|
|
|
else if (trie == tab->trie_old)
|
|
|
|
{
|
|
|
|
/* Unlock the old prefix trie */
|
|
|
|
ASSERT(tab->trie_old_lock_count);
|
|
|
|
tab->trie_old_lock_count--;
|
|
|
|
|
|
|
|
/* Free old prefix trie that is no longer needed */
|
|
|
|
if (!tab->trie_old_lock_count)
|
|
|
|
{
|
|
|
|
rfree(tab->trie_old->lp);
|
|
|
|
tab->trie_old = NULL;
|
|
|
|
|
|
|
|
/* Kick prefix trie pruning that was postponed */
|
|
|
|
if (tab->trie && (tab->trie->prefix_count > (2 * tab->fib.entries)))
|
|
|
|
{
|
|
|
|
tab->prune_trie = 1;
|
|
|
|
rt_schedule_prune(tab);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
log(L_BUG "Invalid arg to rt_unlock_trie()");
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
void
|
|
|
|
rt_preconfig(struct config *c)
|
|
|
|
{
|
|
|
|
init_list(&c->tables);
|
2016-01-26 10:48:58 +00:00
|
|
|
|
|
|
|
rt_new_table(cf_get_symbol("master4"), NET_IP4);
|
|
|
|
rt_new_table(cf_get_symbol("master6"), NET_IP6);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-06-04 15:34:57 +00:00
|
|
|
void
|
|
|
|
rt_postconfig(struct config *c)
|
|
|
|
{
|
|
|
|
uint num_tables = list_length(&c->tables);
|
|
|
|
btime def_gc_period = 400 MS * num_tables;
|
|
|
|
def_gc_period = MAX(def_gc_period, 10 S);
|
|
|
|
def_gc_period = MIN(def_gc_period, 600 S);
|
|
|
|
|
|
|
|
struct rtable_config *rc;
|
|
|
|
WALK_LIST(rc, c->tables)
|
|
|
|
if (rc->gc_period == (uint) -1)
|
|
|
|
rc->gc_period = (uint) def_gc_period;
|
|
|
|
}
|
|
|
|
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2016-01-26 10:48:58 +00:00
|
|
|
/*
|
2010-07-05 15:50:19 +00:00
|
|
|
* Some functions for handing internal next hop updates
|
|
|
|
* triggered by rt_schedule_nhu().
|
|
|
|
*/
|
|
|
|
|
2017-03-22 14:00:07 +00:00
|
|
|
void
|
2022-05-15 13:53:35 +00:00
|
|
|
ea_set_hostentry(ea_list **to, struct rtable *dep, struct rtable *tab, ip_addr gw, ip_addr ll, u32 lnum, u32 labels[lnum])
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
struct {
|
|
|
|
struct adata ad;
|
|
|
|
struct hostentry *he;
|
|
|
|
u32 labels[lnum];
|
|
|
|
} *head = (void *) tmp_alloc_adata(sizeof *head - sizeof(struct adata));
|
|
|
|
|
|
|
|
head->he = rt_get_hostentry(tab, gw, ll, dep);
|
|
|
|
memcpy(head->labels, labels, lnum * sizeof(u32));
|
|
|
|
|
|
|
|
ea_set_attr(to, EA_LITERAL_DIRECT_ADATA(
|
|
|
|
&ea_gen_hostentry, 0, &head->ad));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static void
|
2022-05-30 10:03:03 +00:00
|
|
|
rta_apply_hostentry(ea_list **to, struct hostentry_adata *head)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
struct hostentry *he = head->he;
|
|
|
|
u32 *labels = head->labels;
|
|
|
|
u32 lnum = (u32 *) (head->ad.data + head->ad.length) - labels;
|
|
|
|
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_set_attr_u32(to, &ea_gen_igp_metric, 0, he->igp_metric);
|
2016-08-09 12:47:51 +00:00
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
if (!he->src)
|
2016-08-09 12:47:51 +00:00
|
|
|
{
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_set_dest(to, 0, RTD_UNREACHABLE);
|
2016-08-09 12:47:51 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *he_nh_ea = ea_find(he->src, &ea_gen_nexthop);
|
2022-05-15 16:09:30 +00:00
|
|
|
ASSERT_DIE(he_nh_ea);
|
|
|
|
|
|
|
|
struct nexthop_adata *nhad = (struct nexthop_adata *) he_nh_ea->u.ptr;
|
|
|
|
int idest = nhea_dest(he_nh_ea);
|
|
|
|
|
|
|
|
if ((idest != RTD_UNICAST) ||
|
|
|
|
(!lnum && he->nexthop_linkable))
|
2017-03-17 14:48:09 +00:00
|
|
|
{ /* Just link the nexthop chain, no label append happens. */
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_copy_attr(to, he->src, &ea_gen_nexthop);
|
2017-03-17 14:48:09 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-05-05 16:08:37 +00:00
|
|
|
uint total_size = OFFSETOF(struct nexthop_adata, nh);
|
|
|
|
|
|
|
|
NEXTHOP_WALK(nh, nhad)
|
2016-08-09 12:47:51 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
if (nh->labels + lnum > MPLS_MAX_LABEL_STACK)
|
2017-03-17 14:48:09 +00:00
|
|
|
{
|
2022-05-05 16:08:37 +00:00
|
|
|
log(L_WARN "Sum of label stack sizes %d + %d = %d exceeds allowed maximum (%d)",
|
2022-05-15 13:53:35 +00:00
|
|
|
nh->labels, lnum, nh->labels + lnum, MPLS_MAX_LABEL_STACK);
|
2022-05-05 16:08:37 +00:00
|
|
|
continue;
|
2017-03-17 14:48:09 +00:00
|
|
|
}
|
2017-02-24 13:05:11 +00:00
|
|
|
|
2022-05-15 13:53:35 +00:00
|
|
|
total_size += NEXTHOP_SIZE_CNT(nh->labels + lnum);
|
2022-05-05 16:08:37 +00:00
|
|
|
}
|
2019-10-10 13:25:36 +00:00
|
|
|
|
2022-05-05 16:08:37 +00:00
|
|
|
if (total_size == OFFSETOF(struct nexthop_adata, nh))
|
|
|
|
{
|
|
|
|
log(L_WARN "No valid nexthop remaining, setting route unreachable");
|
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
struct nexthop_adata nha = {
|
|
|
|
.ad.length = NEXTHOP_DEST_SIZE,
|
|
|
|
.dest = RTD_UNREACHABLE,
|
|
|
|
};
|
|
|
|
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_set_attr_data(to, &ea_gen_nexthop, 0, &nha.ad.data, nha.ad.length);
|
2022-05-05 16:08:37 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
struct nexthop_adata *new = (struct nexthop_adata *) tmp_alloc_adata(total_size);
|
|
|
|
struct nexthop *dest = &new->nh;
|
|
|
|
|
|
|
|
NEXTHOP_WALK(nh, nhad)
|
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
if (nh->labels + lnum > MPLS_MAX_LABEL_STACK)
|
2022-05-05 16:08:37 +00:00
|
|
|
continue;
|
|
|
|
|
|
|
|
memcpy(dest, nh, NEXTHOP_SIZE(nh));
|
2022-05-15 13:53:35 +00:00
|
|
|
if (lnum)
|
2019-10-10 13:25:36 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
memcpy(&(dest->label[dest->labels]), labels, lnum * sizeof labels[0]);
|
|
|
|
dest->labels += lnum;
|
2019-10-10 13:25:36 +00:00
|
|
|
}
|
|
|
|
|
2017-03-17 14:48:09 +00:00
|
|
|
if (ipa_nonzero(nh->gw))
|
2022-05-05 16:08:37 +00:00
|
|
|
/* Router nexthop */
|
|
|
|
dest->flags = (dest->flags & RNF_ONLINK);
|
2019-10-10 13:06:32 +00:00
|
|
|
else if (!(nh->iface->flags & IF_MULTIACCESS) || (nh->iface->flags & IF_LOOPBACK))
|
2022-05-05 16:08:37 +00:00
|
|
|
dest->gw = IPA_NONE; /* PtP link - no need for nexthop */
|
2017-03-17 14:48:09 +00:00
|
|
|
else if (ipa_nonzero(he->link))
|
2022-05-05 16:08:37 +00:00
|
|
|
dest->gw = he->link; /* Device nexthop with link-local address known */
|
2017-03-17 14:48:09 +00:00
|
|
|
else
|
2022-05-05 16:08:37 +00:00
|
|
|
dest->gw = he->addr; /* Device nexthop with link-local address unknown */
|
|
|
|
|
|
|
|
dest = NEXTHOP_NEXT(dest);
|
2016-08-09 12:47:51 +00:00
|
|
|
}
|
2017-02-24 13:05:11 +00:00
|
|
|
|
2022-05-05 16:08:37 +00:00
|
|
|
/* Fix final length */
|
|
|
|
new->ad.length = (void *) dest - (void *) new->ad.data;
|
2022-05-30 10:03:03 +00:00
|
|
|
ea_set_attr(to, EA_LITERAL_DIRECT_ADATA(
|
2022-05-05 16:08:37 +00:00
|
|
|
&ea_gen_nexthop, 0, &new->ad));
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-05-15 13:53:35 +00:00
|
|
|
static inline struct hostentry_adata *
|
2022-05-30 10:03:03 +00:00
|
|
|
rta_next_hop_outdated(ea_list *a)
|
2021-12-20 19:25:35 +00:00
|
|
|
{
|
2022-05-15 16:09:30 +00:00
|
|
|
/* First retrieve the hostentry */
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *heea = ea_find(a, &ea_gen_hostentry);
|
2022-05-15 13:53:35 +00:00
|
|
|
if (!heea)
|
|
|
|
return NULL;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-05-15 13:53:35 +00:00
|
|
|
struct hostentry_adata *head = (struct hostentry_adata *) heea->u.ptr;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
/* If no nexthop is present, we have to create one */
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *a_nh_ea = ea_find(a, &ea_gen_nexthop);
|
2022-05-15 16:09:30 +00:00
|
|
|
if (!a_nh_ea)
|
|
|
|
return head;
|
|
|
|
|
|
|
|
struct nexthop_adata *nhad = (struct nexthop_adata *) a_nh_ea->u.ptr;
|
|
|
|
|
|
|
|
/* Shortcut for unresolvable hostentry */
|
2022-05-15 13:53:35 +00:00
|
|
|
if (!head->he->src)
|
2022-05-15 16:09:30 +00:00
|
|
|
return NEXTHOP_IS_REACHABLE(nhad) ? head : NULL;
|
2021-12-20 19:25:35 +00:00
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
/* Comparing our nexthop with the hostentry nexthop */
|
2022-05-30 10:03:03 +00:00
|
|
|
eattr *he_nh_ea = ea_find(head->he->src, &ea_gen_nexthop);
|
2022-05-05 16:08:37 +00:00
|
|
|
|
2022-05-15 16:09:30 +00:00
|
|
|
return (
|
2022-05-30 10:03:03 +00:00
|
|
|
(ea_get_int(a, &ea_gen_igp_metric, IGP_METRIC_UNKNOWN) != head->he->igp_metric) ||
|
2022-05-15 13:53:35 +00:00
|
|
|
(!head->he->nexthop_linkable) ||
|
|
|
|
(!he_nh_ea != !a_nh_ea) ||
|
|
|
|
(he_nh_ea && a_nh_ea && !adata_same(he_nh_ea->u.ptr, a_nh_ea->u.ptr)))
|
|
|
|
? head : NULL;
|
2021-12-20 19:25:35 +00:00
|
|
|
}
|
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
static inline struct rte_storage *
|
|
|
|
rt_next_hop_update_rte(rtable *tab, net *n, rte *old)
|
2010-07-05 15:50:19 +00:00
|
|
|
{
|
2022-05-15 13:53:35 +00:00
|
|
|
struct hostentry_adata *head = rta_next_hop_outdated(old->attrs);
|
|
|
|
if (!head)
|
2021-12-20 19:25:35 +00:00
|
|
|
return NULL;
|
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
rte e0 = *old;
|
2022-06-08 13:31:28 +00:00
|
|
|
rta_apply_hostentry(&e0.attrs, head);
|
2010-07-05 15:50:19 +00:00
|
|
|
|
2020-01-28 10:42:46 +00:00
|
|
|
return rte_store(&e0, n, tab);
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
2022-05-31 10:51:34 +00:00
|
|
|
static inline void
|
|
|
|
rt_next_hop_resolve_rte(rte *r)
|
|
|
|
{
|
2022-06-08 13:31:28 +00:00
|
|
|
eattr *heea = ea_find(r->attrs, &ea_gen_hostentry);
|
2022-05-31 10:51:34 +00:00
|
|
|
if (!heea)
|
|
|
|
return;
|
|
|
|
|
|
|
|
struct hostentry_adata *head = (struct hostentry_adata *) heea->u.ptr;
|
|
|
|
|
2022-06-08 13:31:28 +00:00
|
|
|
rta_apply_hostentry(&r->attrs, head);
|
2022-05-31 10:51:34 +00:00
|
|
|
}
|
2021-12-20 19:25:35 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_BGP
|
|
|
|
|
|
|
|
static inline int
|
|
|
|
net_flow_has_dst_prefix(const net_addr *n)
|
|
|
|
{
|
|
|
|
ASSUME(net_is_flow(n));
|
|
|
|
|
|
|
|
if (n->pxlen)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
if (n->type == NET_FLOW4)
|
|
|
|
{
|
|
|
|
const net_addr_flow4 *n4 = (void *) n;
|
|
|
|
return (n4->length > sizeof(net_addr_flow4)) && (n4->data[0] == FLOW_TYPE_DST_PREFIX);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
const net_addr_flow6 *n6 = (void *) n;
|
|
|
|
return (n6->length > sizeof(net_addr_flow6)) && (n6->data[0] == FLOW_TYPE_DST_PREFIX);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int
rta_as_path_is_empty(ea_list *a)
{
  eattr *e = ea_find(a, "bgp_path");
  return !e || (as_path_getlen(e->u.ptr) == 0);
}

static inline u32
rta_get_first_asn(ea_list *a)
{
  eattr *e = ea_find(a, "bgp_path");
  u32 asn;

  return (e && as_path_get_first_regular(e->u.ptr, &asn)) ? asn : 0;
}

static inline enum flowspec_valid
rt_flowspec_check(rtable *tab_ip, rtable *tab_flow, const net_addr *n, ea_list *a, int interior)
{
  ASSERT(rt_is_ip(tab_ip));
  ASSERT(rt_is_flow(tab_flow));
  ASSERT(tab_ip->trie);

  /* RFC 8955 6. a) Flowspec has defined dst prefix */
  if (!net_flow_has_dst_prefix(n))
    return FLOWSPEC_INVALID;

  /* RFC 9117 4.1. Accept when AS_PATH is empty (for interior flowspec) */
  if (interior && rta_as_path_is_empty(a))
    return FLOWSPEC_VALID;


  /* RFC 8955 6. b) Flowspec and its best-match route have the same originator */

  /* Find flowspec dst prefix */
  net_addr dst;
  if (n->type == NET_FLOW4)
    net_fill_ip4(&dst, net4_prefix(n), net4_pxlen(n));
  else
    net_fill_ip6(&dst, net6_prefix(n), net6_pxlen(n));

  /* Find best-match BGP unicast route for flowspec dst prefix */
  net *nb = net_route(tab_ip, &dst);
  const rte *rb = nb ? &nb->routes->rte : NULL;

  /* Register prefix to trie for tracking further changes */
  int max_pxlen = (n->type == NET_FLOW4) ? IP4_MAX_PREFIX_LENGTH : IP6_MAX_PREFIX_LENGTH;
  trie_add_prefix(tab_flow->flowspec_trie, &dst, (nb ? nb->n.addr->pxlen : 0), max_pxlen);

  /* No best-match BGP route -> no flowspec */
  if (!rb || (rt_get_source_attr(rb) != RTS_BGP))
    return FLOWSPEC_INVALID;

  /* Find ORIGINATOR_ID values */
  u32 orig_a = ea_get_int(a, "bgp_originator_id", 0);
  u32 orig_b = ea_get_int(rb->attrs, "bgp_originator_id", 0);

  /* Originator is either ORIGINATOR_ID (if present), or BGP neighbor address (if not) */
  if ((orig_a != orig_b) || (!orig_a && !orig_b && !ipa_equal(
	ea_get_ip(a, &ea_gen_from, IPA_NONE),
	ea_get_ip(rb->attrs, &ea_gen_from, IPA_NONE)
	)))
    return FLOWSPEC_INVALID;


  /* Find ASN of the best-match route, for use in next checks */
  u32 asn_b = rta_get_first_asn(rb->attrs);
  if (!asn_b)
    return FLOWSPEC_INVALID;

  /* RFC 9117 4.2. For EBGP, flowspec and its best-match route are from the same AS */
  if (!interior && (rta_get_first_asn(a) != asn_b))
    return FLOWSPEC_INVALID;

  /* RFC 8955 6. c) More-specific routes are from the same AS as the best-match route */
  TRIE_WALK(tab_ip->trie, subnet, &dst)
  {
    net *nc = net_find_valid(tab_ip, &subnet);
    if (!nc)
      continue;

    const rte *rc = &nc->routes->rte;
    if (rt_get_source_attr(rc) != RTS_BGP)
      return FLOWSPEC_INVALID;

    if (rta_get_first_asn(rc->attrs) != asn_b)
      return FLOWSPEC_INVALID;
  }
  TRIE_WALK_END;

  return FLOWSPEC_VALID;
}

#endif /* CONFIG_BGP */

static struct rte_storage *
rt_flowspec_update_rte(rtable *tab, net *n, rte *r)
{
#ifdef CONFIG_BGP
  if (r->generation || (rt_get_source_attr(r) != RTS_BGP))
    return NULL;

  struct bgp_channel *bc = (struct bgp_channel *) SKIP_BACK(struct channel, in_req, r->sender->req);
  if (!bc->base_table)
    return NULL;

  struct bgp_proto *p = SKIP_BACK(struct bgp_proto, p, bc->c.proto);

  enum flowspec_valid old = rt_get_flowspec_valid(r),
    valid = rt_flowspec_check(bc->base_table, tab, n->n.addr, r->attrs, p->is_interior);

  if (old == valid)
    return NULL;

  rte new = *r;
  ea_set_attr_u32(&new.attrs, &ea_gen_flowspec_valid, 0, valid);

  return rte_store(&new, n, tab);
#else
  return NULL;
#endif
}

static inline void
rt_flowspec_resolve_rte(rte *r, struct channel *c)
{
#ifdef CONFIG_BGP
  enum flowspec_valid valid, old = rt_get_flowspec_valid(r);
  struct bgp_channel *bc = (struct bgp_channel *) c;

  if ((rt_get_source_attr(r) == RTS_BGP)
      && (c->class == &channel_bgp)
      && (bc->base_table))
  {
    struct bgp_proto *p = SKIP_BACK(struct bgp_proto, p, bc->c.proto);
    valid = rt_flowspec_check(
	bc->base_table,
	c->in_req.hook->table,
	r->net, r->attrs, p->is_interior);
  }
  else
    valid = FLOWSPEC_UNKNOWN;

  if (valid == old)
    return;

  if (valid == FLOWSPEC_UNKNOWN)
    ea_unset_attr(&r->attrs, 0, &ea_gen_flowspec_valid);
  else
    ea_set_attr_u32(&r->attrs, &ea_gen_flowspec_valid, 0, valid);
#endif
}

static inline int
rt_next_hop_update_net(rtable *tab, net *n)
{
  struct rte_storage *new;
  int count = 0;
  int is_flow = net_is_flow(n->n.addr);

  struct rte_storage *old_best = n->routes;
  if (!old_best)
    return 0;

  for (struct rte_storage *e, **k = &n->routes; e = *k; k = &e->next)
    if (is_flow || rta_next_hop_outdated(e->rte.attrs))
      count++;

  if (!count)
    return 0;

  struct rte_multiupdate {
    struct rte_storage *old, *new;
  } *updates = alloca(sizeof(struct rte_multiupdate) * count);

  int pos = 0;
  for (struct rte_storage *e, **k = &n->routes; e = *k; k = &e->next)
    if (is_flow || rta_next_hop_outdated(e->rte.attrs))
    {
      struct rte_storage *new = is_flow
	? rt_flowspec_update_rte(tab, n, &e->rte)
	: rt_next_hop_update_rte(tab, n, &e->rte);

      if (!new)
	continue;

      /* Call a pre-comparison hook */
      /* Not really an efficient way to compute this */
      if (e->rte.src->proto->rte_recalculate)
	e->rte.src->proto->rte_recalculate(tab, n, &new->rte, &e->rte, &old_best->rte);

      updates[pos++] = (struct rte_multiupdate) {
	.old = e,
	.new = new,
      };

      /* Replace the route in the list */
      new->next = e->next;
      *k = e = new;

      /* Get a new ID for the route */
      new->rte.lastmod = current_time();
      new->rte.id = hmap_first_zero(&tab->id_map);
      hmap_set(&tab->id_map, new->rte.id);
    }

  ASSERT_DIE(pos <= count);
  count = pos;

  /* Find the new best route */
  struct rte_storage **new_best = NULL;
  for (struct rte_storage *e, **k = &n->routes; e = *k; k = &e->next)
  {
    if (!new_best || rte_better(&e->rte, &(*new_best)->rte))
      new_best = k;
  }

  /* Relink the new best route to the first position */
  new = *new_best;
  if (new != n->routes)
  {
    *new_best = new->next;
    new->next = n->routes;
    n->routes = new;
  }

  /* Announce the changes */
  for (int i = 0; i < count; i++)
  {
    _Bool nb = (new == updates[i].new), ob = (old_best == updates[i].old);
    const char *best_indicator[2][2] = {
      { "autoupdated", "autoupdated [-best]" },
      { "autoupdated [+best]", "autoupdated [best]" }
    };
    rt_rte_trace_in(D_ROUTES, updates[i].new->rte.sender->req, &updates[i].new->rte, best_indicator[nb][ob]);
    rte_announce_i(tab, n, updates[i].new, updates[i].old, new, old_best);
  }

  return count;
}

static void
rt_next_hop_update(rtable *tab)
{
  struct fib_iterator *fit = &tab->nhu_fit;
  int max_feed = 32;

  if (tab->nhu_state == NHU_CLEAN)
    return;

  if (tab->nhu_state == NHU_SCHEDULED)
  {
    FIB_ITERATE_INIT(fit, &tab->fib);
    tab->nhu_state = NHU_RUNNING;

    if (tab->flowspec_trie)
      rt_flowspec_reset_trie(tab);
  }

  FIB_ITERATE_START(&tab->fib, fit, net, n)
  {
    if (max_feed <= 0)
    {
      FIB_ITERATE_PUT(fit);
      ev_schedule(tab->rt_event);
      return;
    }
    max_feed -= rt_next_hop_update_net(tab, n);
  }
  FIB_ITERATE_END;

  /* State change:
   *   NHU_DIRTY -> NHU_SCHEDULED
   *   NHU_RUNNING -> NHU_CLEAN
   */
  tab->nhu_state &= 1;

  if (tab->nhu_state != NHU_CLEAN)
    ev_schedule(tab->rt_event);
}

struct rtable_config *
rt_new_table(struct symbol *s, uint addr_type)
{
  /* Hack that allows to 'redefine' the master table */
  if ((s->class == SYM_TABLE) &&
      (s->table == new_config->def_tables[addr_type]) &&
      ((addr_type == NET_IP4) || (addr_type == NET_IP6)))
    return s->table;

  struct rtable_config *c = cfg_allocz(sizeof(struct rtable_config));

  cf_define_symbol(s, SYM_TABLE, table, c);
  c->name = s->name;
  c->addr_type = addr_type;
  c->gc_threshold = 1000;
  c->gc_period = (uint) -1;	/* set in rt_postconfig() */
  c->min_settle_time = 1 S;
  c->max_settle_time = 20 S;

  add_tail(&new_config->tables, &c->n);

  /* First table of each type is kept as default */
  if (!new_config->def_tables[addr_type])
    new_config->def_tables[addr_type] = c;

  return c;
}

/**
 * rt_lock_table - lock a routing table
 * @r: routing table to be locked
 *
 * Lock a routing table, because it's in use by a protocol,
 * preventing it from being freed when it gets undefined in a new
 * configuration.
 */
void
rt_lock_table(rtable *r)
{
  r->use_count++;
}

/**
 * rt_unlock_table - unlock a routing table
 * @r: routing table to be unlocked
 *
 * Unlock a routing table formerly locked by rt_lock_table(),
 * that is, decrease its use count and delete it if it's scheduled
 * for deletion by configuration changes.
 */
void
rt_unlock_table(rtable *r)
{
  if (!--r->use_count && r->deleted)
  {
    struct config *conf = r->deleted;

    /* Delete the routing table by freeing its pool */
    rt_shutdown(r);
    config_del_obstacle(conf);
  }
}

static int
rt_reconfigure(rtable *tab, struct rtable_config *new, struct rtable_config *old)
{
  if ((new->addr_type != old->addr_type) ||
      (new->sorted != old->sorted) ||
      (new->trie_used != old->trie_used))
    return 0;

  DBG("\t%s: same\n", new->name);
  new->table = tab;
  tab->name = new->name;
  tab->config = new;

  return 1;
}

static struct rtable_config *
rt_find_table_config(struct config *cf, char *name)
{
  struct symbol *sym = cf_find_symbol(cf, name);
  return (sym && (sym->class == SYM_TABLE)) ? sym->table : NULL;
}

/**
 * rt_commit - commit new routing table configuration
 * @new: new configuration
 * @old: original configuration or %NULL if it's boot time config
 *
 * Scan differences between @old and @new configuration and modify
 * the routing tables according to these changes. If @new defines a
 * previously unknown table, create it; if it omits a table existing
 * in @old, schedule it for deletion (it gets deleted when all protocols
 * disconnect from it by calling rt_unlock_table()); if it exists
 * in both configurations, leave it unchanged.
 */
void
rt_commit(struct config *new, struct config *old)
{
  struct rtable_config *o, *r;

  DBG("rt_commit:\n");
  if (old)
  {
    WALK_LIST(o, old->tables)
    {
      rtable *tab = o->table;
      if (tab->deleted)
	continue;

      r = rt_find_table_config(new, o->name);
      if (r && !new->shutdown && rt_reconfigure(tab, r, o))
	continue;

      DBG("\t%s: deleted\n", o->name);
      tab->deleted = old;
      config_add_obstacle(old);
      rt_lock_table(tab);
      rt_unlock_table(tab);
    }
  }

  WALK_LIST(r, new->tables)
    if (!r->table)
    {
      r->table = rt_setup(rt_table_pool, r);
      DBG("\t%s: created\n", r->name);
      add_tail(&routing_tables, &r->table->n);
    }
  DBG("\tdone\n");
}

static void
rt_feed_done(struct rt_export_hook *c)
{
  c->event->hook = rt_export_hook;

  rt_set_export_state(c, TES_READY);

  rt_send_export_event(c);
}

/**
 * rt_feed_by_fib - advertise all routes to a channel by walking a fib
 * @c: channel to be fed
 *
 * This function performs one pass of advertisement of routes to a channel that
 * is in the TES_FEEDING state. It is called by the protocol code as long as it
 * has something to do. (We avoid transferring all the routes in a single pass
 * in order not to monopolize CPU time.)
 */
static void
rt_feed_by_fib(void *data)
{
  struct rt_export_hook *c = data;

  struct fib_iterator *fit = &c->feed_fit;
  int max_feed = 256;

  ASSERT(atomic_load_explicit(&c->export_state, memory_order_relaxed) == TES_FEEDING);

  rtable *tab = SKIP_BACK(rtable, exporter, c->table);

  FIB_ITERATE_START(&tab->fib, fit, net, n)
  {
    if (max_feed <= 0)
    {
      FIB_ITERATE_PUT(fit);
      rt_send_export_event(c);
      return;
    }

    if (atomic_load_explicit(&c->export_state, memory_order_acquire) != TES_FEEDING)
      return;

    if ((c->req->addr_mode == TE_ADDR_NONE) || net_in_netX(n->n.addr, c->req->addr))
      max_feed -= rt_feed_net(c, n);
  }
  FIB_ITERATE_END;

  rt_feed_done(c);
}

static void
rt_feed_by_trie(void *data)
{
  struct rt_export_hook *c = data;
  rtable *tab = SKIP_BACK(rtable, exporter, c->table);

  ASSERT_DIE(c->walk_state);
  struct f_trie_walk_state *ws = c->walk_state;

  int max_feed = 256;

  ASSERT(atomic_load_explicit(&c->export_state, memory_order_relaxed) == TES_FEEDING);

  net_addr addr;
  while (trie_walk_next(ws, &addr))
  {
    net *n = net_find(tab, &addr);
    if (!n)
      continue;

    if ((max_feed -= rt_feed_net(c, n)) <= 0)
      return;

    if (atomic_load_explicit(&c->export_state, memory_order_acquire) != TES_FEEDING)
      return;
  }

  rt_unlock_trie(tab, c->walk_lock);
  c->walk_lock = NULL;

  mb_free(c->walk_state);
  c->walk_state = NULL;

  rt_feed_done(c);
}

static void
rt_feed_equal(void *data)
{
  struct rt_export_hook *c = data;
  rtable *tab = SKIP_BACK(rtable, exporter, c->table);

  ASSERT_DIE(atomic_load_explicit(&c->export_state, memory_order_relaxed) == TES_FEEDING);
  ASSERT_DIE(c->req->addr_mode == TE_ADDR_EQUAL);

  net *n = net_find(tab, c->req->addr);
  if (n)
    rt_feed_net(c, n);

  rt_feed_done(c);
}

static void
rt_feed_for(void *data)
{
  struct rt_export_hook *c = data;
  rtable *tab = SKIP_BACK(rtable, exporter, c->table);

  ASSERT_DIE(atomic_load_explicit(&c->export_state, memory_order_relaxed) == TES_FEEDING);
  ASSERT_DIE(c->req->addr_mode == TE_ADDR_FOR);

  net *n = net_route(tab, c->req->addr);
  if (n)
    rt_feed_net(c, n);

  rt_feed_done(c);
}

static uint
rt_feed_net(struct rt_export_hook *c, net *n)
{
  uint count = 0;

  if (c->req->export_bulk)
  {
    count = rte_feed_count(n);
    if (count)
    {
      rte_update_lock();
      rte **feed = alloca(count * sizeof(rte *));
      rte_feed_obtain(n, feed, count);
      c->req->export_bulk(c->req, n->n.addr, NULL, feed, count);
      rte_update_unlock();
    }
  }

  else if (n->routes)
  {
    rte_update_lock();
    struct rt_pending_export rpe = { .new = n->routes, .new_best = n->routes };
    c->req->export_one(c->req, n->n.addr, &rpe);
    rte_update_unlock();
    count = 1;
  }

  for (struct rt_pending_export *rpe = n->first; rpe; rpe = rpe_next(rpe, NULL))
    rpe_mark_seen(c, rpe);

  return count;
}

/*
 * Import table
 */

void channel_reload_export_bulk(struct rt_export_request *req, const net_addr *net, struct rt_pending_export *rpe UNUSED, rte **feed, uint count)
{
  struct channel *c = SKIP_BACK(struct channel, reload_req, req);

  for (uint i = 0; i < count; i++)
    if (feed[i]->sender == c->in_req.hook)
    {
      /* Strip the later attribute layers */
      rte new = *feed[i];
      while (new.attrs->next)
	new.attrs = new.attrs->next;

      /* And reload the route */
      rte_update(c, net, &new, new.src);
    }
}

/*
 * Hostcache
 */

static inline u32
hc_hash(ip_addr a, rtable *dep)
{
  return ipa_hash(a) ^ ptr_hash(dep);
}

static inline void
hc_insert(struct hostcache *hc, struct hostentry *he)
{
  uint k = he->hash_key >> hc->hash_shift;
  he->next = hc->hash_table[k];
  hc->hash_table[k] = he;
}

static inline void
hc_remove(struct hostcache *hc, struct hostentry *he)
{
  struct hostentry **hep;
  uint k = he->hash_key >> hc->hash_shift;

  for (hep = &hc->hash_table[k]; *hep != he; hep = &(*hep)->next);
  *hep = he->next;
}

#define HC_DEF_ORDER 10
#define HC_HI_MARK *4
#define HC_HI_STEP 2
#define HC_HI_ORDER 16			/* Must be at most 16 */
#define HC_LO_MARK /5
#define HC_LO_STEP 2
#define HC_LO_ORDER 10

static void
hc_alloc_table(struct hostcache *hc, pool *p, unsigned order)
{
  uint hsize = 1 << order;
  hc->hash_order = order;
  hc->hash_shift = 32 - order;
  hc->hash_max = (order >= HC_HI_ORDER) ? ~0U : (hsize HC_HI_MARK);
  hc->hash_min = (order <= HC_LO_ORDER) ?  0U : (hsize HC_LO_MARK);

  hc->hash_table = mb_allocz(p, hsize * sizeof(struct hostentry *));
}

static void
hc_resize(struct hostcache *hc, pool *p, unsigned new_order)
{
  struct hostentry **old_table = hc->hash_table;
  struct hostentry *he, *hen;
  uint old_size = 1 << hc->hash_order;
  uint i;

  hc_alloc_table(hc, p, new_order);
  for (i = 0; i < old_size; i++)
    for (he = old_table[i]; he != NULL; he = hen)
    {
      hen = he->next;
      hc_insert(hc, he);
    }
  mb_free(old_table);
}

static struct hostentry *
hc_new_hostentry(struct hostcache *hc, pool *p, ip_addr a, ip_addr ll, rtable *dep, unsigned k)
{
  struct hostentry *he = sl_alloc(hc->slab);

  *he = (struct hostentry) {
    .addr = a,
    .link = ll,
    .tab = dep,
    .hash_key = k,
  };

  add_tail(&hc->hostentries, &he->ln);
  hc_insert(hc, he);

  hc->hash_items++;
  if (hc->hash_items > hc->hash_max)
    hc_resize(hc, p, hc->hash_order + HC_HI_STEP);

  return he;
}

static void
hc_delete_hostentry(struct hostcache *hc, pool *p, struct hostentry *he)
{
  rta_free(he->src);

  rem_node(&he->ln);
  hc_remove(hc, he);
  sl_free(he);

  hc->hash_items--;
  if (hc->hash_items < hc->hash_min)
    hc_resize(hc, p, hc->hash_order - HC_LO_STEP);
}

static void
rt_init_hostcache(rtable *tab)
{
  struct hostcache *hc = mb_allocz(tab->rp, sizeof(struct hostcache));
  init_list(&hc->hostentries);

  hc->hash_items = 0;
  hc_alloc_table(hc, tab->rp, HC_DEF_ORDER);
  hc->slab = sl_new(tab->rp, sizeof(struct hostentry));

  hc->lp = lp_new(tab->rp);
  hc->trie = f_new_trie(hc->lp, 0);

  tab->hostcache = hc;
}

static void
rt_free_hostcache(rtable *tab)
{
  struct hostcache *hc = tab->hostcache;

  node *n;
  WALK_LIST(n, hc->hostentries)
  {
    struct hostentry *he = SKIP_BACK(struct hostentry, ln, n);
    rta_free(he->src);

    if (he->uc)
      log(L_ERR "Hostcache is not empty in table %s", tab->name);
  }

  /* Freed automagically by the resource pool
  rfree(hc->slab);
  rfree(hc->lp);
  mb_free(hc->hash_table);
  mb_free(hc);
  */
}

static void
rt_notify_hostcache(rtable *tab, net *net)
{
  if (tab->hcu_scheduled)
    return;

  if (trie_match_net(tab->hostcache->trie, net->n.addr))
    rt_schedule_hcu(tab);
}

static int
if_local_addr(ip_addr a, struct iface *i)
{
  struct ifa *b;

  WALK_LIST(b, i->addrs)
    if (ipa_equal(a, b->ip))
      return 1;

  return 0;
}

u32
rt_get_igp_metric(const rte *rt)
{
  eattr *ea = ea_find(rt->attrs, "igp_metric");

  if (ea)
    return ea->u.data;

  if (rt_get_source_attr(rt) == RTS_DEVICE)
    return 0;

  if (rt->src->proto->rte_igp_metric)
    return rt->src->proto->rte_igp_metric(rt);

  return IGP_METRIC_UNKNOWN;
}

static int
rt_update_hostentry(rtable *tab, struct hostentry *he)
{
  ea_list *old_src = he->src;
  int direct = 0;
  int pxlen = 0;

  /* Reset the hostentry */
  he->src = NULL;
  he->nexthop_linkable = 0;
  he->igp_metric = 0;

  net_addr he_addr;
  net_fill_ip_host(&he_addr, he->addr);
  net *n = net_route(tab, &he_addr);
  if (n)
  {
    struct rte_storage *e = n->routes;
    ea_list *a = e->rte.attrs;
    u32 pref = rt_get_preference(&e->rte);

    for (struct rte_storage *ee = n->routes; ee; ee = ee->next)
      if (rte_is_valid(&ee->rte) &&
	  (rt_get_preference(&ee->rte) >= pref) &&
	  ea_find(ee->rte.attrs, &ea_gen_hostentry))
      {
	/* Recursive route should not depend on another recursive route */
	log(L_WARN "Next hop address %I resolvable through recursive route for %N",
	    he->addr, n->n.addr);
	goto done;
      }

    pxlen = n->n.addr->pxlen;

    eattr *nhea = ea_find(a, &ea_gen_nexthop);
    ASSERT_DIE(nhea);
    struct nexthop_adata *nhad = (void *) nhea->u.ptr;

    if (NEXTHOP_IS_REACHABLE(nhad))
      NEXTHOP_WALK(nh, nhad)
	if (ipa_zero(nh->gw))
	{
	  if (if_local_addr(he->addr, nh->iface))
	  {
	    /* The host address is a local address, this is not valid */
	    log(L_WARN "Next hop address %I is a local address of iface %s",
		he->addr, nh->iface->name);
	    goto done;
	  }

	  direct++;
	}

    he->src = rta_clone(a);
    he->nexthop_linkable = !direct;
    he->igp_metric = rt_get_igp_metric(&e->rte);
  }
|
|
|
done:
|
2010-07-27 16:20:12 +00:00
|
|
|
/* Add a prefix range to the trie */
|
2015-12-24 14:52:03 +00:00
|
|
|
trie_add_prefix(tab->hostcache->trie, &he_addr, pxlen, he_addr.pxlen);
|
2010-07-27 16:20:12 +00:00
|
|
|
|
2010-12-07 22:33:55 +00:00
|
|
|
rta_free(old_src);
|
|
|
|
return old_src != he->src;
|
2010-07-05 15:50:19 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
rt_update_hostcache(rtable *tab)
{
  struct hostcache *hc = tab->hostcache;
  struct hostentry *he;
  node *n, *x;

  /* Reset the trie */
  lp_flush(hc->lp);
  hc->trie = f_new_trie(hc->lp, 0);

  WALK_LIST_DELSAFE(n, x, hc->hostentries)
    {
      he = SKIP_BACK(struct hostentry, ln, n);
      if (!he->uc)
	{
	  hc_delete_hostentry(hc, tab->rp, he);
	  continue;
	}

      if (rt_update_hostentry(tab, he))
	rt_schedule_nhu(he->tab);
    }

  tab->hcu_scheduled = 0;
}

static struct hostentry *
rt_get_hostentry(rtable *tab, ip_addr a, ip_addr ll, rtable *dep)
{
  struct hostentry *he;

  if (!tab->hostcache)
    rt_init_hostcache(tab);

  u32 k = hc_hash(a, dep);
  struct hostcache *hc = tab->hostcache;
  for (he = hc->hash_table[k >> hc->hash_shift]; he != NULL; he = he->next)
    if (ipa_equal(he->addr, a) && (he->tab == dep))
      return he;

  he = hc_new_hostentry(hc, tab->rp, a, ipa_zero(ll) ? a : ll, dep, k);
  rt_update_hostentry(tab, he);
  return he;
}

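/*
 * Example (sketch; variable names are hypothetical): a protocol resolving
 * a recursive next hop through an IGP table would typically do
 *
 *   struct hostentry *he = rt_get_hostentry(igp_table, gw, ll, dep_table);
 *
 * After this, he->src holds the attributes of the best route covering gw
 * and he->igp_metric its IGP metric; whenever the resolving route in
 * igp_table changes, dep_table gets a next-hop update via
 * rt_schedule_nhu().
 */
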

/*
 *  Documentation for functions declared inline in route.h
 */
#if 0

/**
 * net_find - find a network entry
 * @tab: a routing table
 * @addr: address of the network
 *
 * net_find() looks up the given network in routing table @tab and
 * returns a pointer to its &net entry or %NULL if no such network
 * exists.
 */
static inline net *net_find(rtable *tab, net_addr *addr)
{ DUMMY; }

/**
 * net_get - obtain a network entry
 * @tab: a routing table
 * @addr: address of the network
 *
 * net_get() looks up the given network in routing table @tab and
 * returns a pointer to its &net entry. If no such entry exists, it is
 * created.
 */
static inline net *net_get(rtable *tab, net_addr *addr)
{ DUMMY; }

/**
 * rte_cow - copy a route for writing
 * @r: a route entry to be copied
 *
 * rte_cow() takes a &rte and prepares it for modification. The exact action
 * taken depends on the flags of the &rte -- if it's a temporary entry, it's
 * just returned unchanged, else a new temporary entry with the same contents
 * is created.
 *
 * The primary use of this function is inside the filter machinery -- when
 * a filter wants to modify &rte contents (to change the preference or to
 * attach another set of attributes), it must ensure that the &rte is not
 * shared with anyone else (and especially that it isn't stored in any routing
 * table).
 *
 * Result: a pointer to the new writable &rte.
 */
static inline rte *rte_cow(rte *r)
{ DUMMY; }

#endif
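
/*
 * Example (sketch, mirroring the rte_cow() description above; the filter
 * step shown is hypothetical): code that wants to modify a possibly shared
 * route first takes a private copy, then writes freely:
 *
 *   rte *w = rte_cow(r);   /+ no-op if r is already a private copy +/
 *   ... modify w's preference or attach new attributes ...
 *
 * (The inner comment delimiters are spelled with '+' here only because C
 * comments do not nest.)
 */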