/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 * Robert Olsson <robert.olsson@its.uu.se> Uppsala Universitet
 * & Swedish University of Agricultural Sciences.
 *
 * Jens Laas <jens.laas@data.slu.se> Swedish University of
 * Agricultural Sciences.
 *
 * Hans Liss <hans.liss@its.uu.se> Uppsala Universitet
 *
 * This work is based on the LPC-trie which is originally described in:
 *
 * An experimental study of compression methods for dynamic tries
 * Stefan Nilsson and Matti Tikkanen. Algorithmica, 33(1):19-33, 2002.
 * http://www.csc.kth.se/~snilsson/software/dyntrie2/
 *
 * IP-address lookup using LC-tries. Stefan Nilsson and Gunnar Karlsson
 * IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999
 *
 * Code from fib_hash has been reused which includes the following header:
 *
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system. INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		IPv4 FIB: lookup engine and maintenance routines.
 *
 * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 * Substantial contributions to this work comes from:
 *
 *		David S. Miller, <davem@davemloft.net>
 *		Stephen Hemminger <shemminger@osdl.org>
 *		Paul E. McKenney <paulmck@us.ibm.com>
 *		Patrick McHardy <kaber@trash.net>
 */
#define VERSION "0.409"

#include <asm/uaccess.h>
#include <linux/bitops.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/errno.h>
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/if_arp.h>
#include <linux/proc_fs.h>
#include <linux/rcupdate.h>
#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <net/net_namespace.h>
#include <net/ip.h>
#include <net/protocol.h>
#include <net/route.h>
#include <net/tcp.h>
#include <net/sock.h>
#include <net/ip_fib.h>
#include "fib_lookup.h"
#define MAX_STAT_DEPTH 32

#define KEYLENGTH (8*sizeof(t_key))

typedef unsigned int t_key;

#define IS_TNODE(n) ((n)->bits)
#define IS_LEAF(n) (!(n)->bits)

#define get_index(_key, _kv) (((_key) ^ (_kv)->key) >> (_kv)->pos)

struct tnode {
	t_key key;
	unsigned char bits;		/* 2log(KEYLENGTH) bits needed */
	unsigned char pos;		/* 2log(KEYLENGTH) bits needed */
	struct tnode __rcu *parent;
	struct rcu_head rcu;
	union {
		/* The fields in this struct are valid if bits > 0 (TNODE) */
		struct {
			unsigned int full_children;  /* KEYLENGTH bits needed */
			unsigned int empty_children; /* KEYLENGTH bits needed */
			struct tnode __rcu *child[0];
		};
		/* This list pointer is valid if bits == 0 (LEAF) */
		struct hlist_head list;
	};
};
struct leaf_info {
	struct hlist_node hlist;
	int plen;
	u32 mask_plen; /* ntohl(inet_make_mask(plen)) */
	struct list_head falh;
	struct rcu_head rcu;
};

#ifdef CONFIG_IP_FIB_TRIE_STATS
struct trie_use_stats {
	unsigned int gets;
	unsigned int backtrack;
	unsigned int semantic_match_passed;
	unsigned int semantic_match_miss;
	unsigned int null_node_hit;
	unsigned int resize_node_skipped;
};
#endif

struct trie_stat {
	unsigned int totdepth;
	unsigned int maxdepth;
	unsigned int tnodes;
	unsigned int leaves;
	unsigned int nullpointers;
	unsigned int prefixes;
	unsigned int nodesizes[MAX_STAT_DEPTH];
};
struct trie {
	struct tnode __rcu *trie;
#ifdef CONFIG_IP_FIB_TRIE_STATS
	struct trie_use_stats __percpu *stats;
#endif
};

static void tnode_put_child_reorg(struct tnode *tn, unsigned long i,
				  struct tnode *n, int wasfull);
static struct tnode *resize(struct trie *t, struct tnode *tn);
static struct tnode *inflate(struct trie *t, struct tnode *tn);
static struct tnode *halve(struct trie *t, struct tnode *tn);

/* tnodes to free after resize(); protected by RTNL */
static struct callback_head *tnode_free_head;
static size_t tnode_free_size;

/*
 * synchronize_rcu after call_rcu for that many pages; it should be especially
 * useful before resizing the root node with PREEMPT_NONE configs; the value
 * was obtained experimentally, aiming to avoid visible slowdown.
 */
static const int sync_pages = 128;
static struct kmem_cache *fn_alias_kmem __read_mostly;
static struct kmem_cache *trie_leaf_kmem __read_mostly;

/* caller must hold RTNL */
#define node_parent(n) rtnl_dereference((n)->parent)

/* caller must hold RCU read lock or RTNL */
#define node_parent_rcu(n) rcu_dereference_rtnl((n)->parent)

/* wrapper for rcu_assign_pointer */
static inline void node_set_parent(struct tnode *n, struct tnode *tp)
{
	if (n)
		rcu_assign_pointer(n->parent, tp);
}

#define NODE_INIT_PARENT(n, p) RCU_INIT_POINTER((n)->parent, p)

/* This provides us with the number of children in this node, in the case of a
 * leaf this will return 0 meaning none of the children are accessible.
 */
static inline unsigned long tnode_child_length(const struct tnode *tn)
{
	return (1ul << tn->bits) & ~(1ul);
}
/* caller must hold RTNL */
static inline struct tnode *tnode_get_child(const struct tnode *tn,
					    unsigned long i)
{
	BUG_ON(i >= tnode_child_length(tn));

	return rtnl_dereference(tn->child[i]);
}

/* caller must hold RCU read lock or RTNL */
static inline struct tnode *tnode_get_child_rcu(const struct tnode *tn,
						unsigned long i)
{
	BUG_ON(i >= tnode_child_length(tn));

	return rcu_dereference_rtnl(tn->child[i]);
}
/* To understand this stuff, an understanding of keys and all their bits is
 * necessary. Every node in the trie has a key associated with it, but not
 * all of the bits in that key are significant.
 *
 * Consider a node 'n' and its parent 'tp'.
 *
 * If n is a leaf, every bit in its key is significant. Its presence is
 * necessitated by path compression, since during a tree traversal (when
 * searching for a leaf - unless we are doing an insertion) we will completely
 * ignore all skipped bits we encounter. Thus we need to verify, at the end of
 * a potentially successful search, that we have indeed been walking the
 * correct key path.
 *
 * Note that we can never "miss" the correct key in the tree if present by
 * following the wrong path. Path compression ensures that segments of the key
 * that are the same for all keys with a given prefix are skipped, but the
 * skipped part *is* identical for each node in the subtrie below the skipped
 * bit! trie_insert() in this implementation takes care of that.
 *
 * If n is an internal node - a 'tnode' here - the various parts of its key
 * have many different meanings.
 *
 * Example:
 * _________________________________________________________________
 * | i | i | i | i | i | i | i | N | N | N | S | S | S | S | S | C |
 * -----------------------------------------------------------------
 *  31  30  29  28  27  26  25  24  23  22  21  20  19  18  17  16
 *
 * _________________________________________________________________
 * | C | C | C | u | u | u | u | u | u | u | u | u | u | u | u | u |
 * -----------------------------------------------------------------
 *  15  14  13  12  11  10   9   8   7   6   5   4   3   2   1   0
 *
 * tp->pos = 22
 * tp->bits = 3
 * n->pos = 13
 * n->bits = 4
 *
 * First, let's just ignore the bits that come before the parent tp, that is
 * the bits from (tp->pos + tp->bits) to 31. They are *known* but at this
 * point we do not use them for anything.
 *
 * The bits from (tp->pos) to (tp->pos + tp->bits - 1) - "N", above - are the
 * index into the parent's child array. That is, they will be used to find
 * 'n' among tp's children.
 *
 * The bits from (n->pos + n->bits) to (tp->pos - 1) - "S" - are skipped bits
 * for the node n.
 *
 * All the bits we have seen so far are significant to the node n. The rest
 * of the bits are really not needed or indeed known in n->key.
 *
 * The bits from (n->pos) to (n->pos + n->bits - 1) - "C" - are the index into
 * n's child array, and will of course be different for each child.
 *
 * The rest of the bits, from 0 to (n->pos - 1), are completely unknown
 * at this point.
 */
static const int halve_threshold = 25;
static const int inflate_threshold = 50;
static const int halve_threshold_root = 15;
static const int inflate_threshold_root = 30;
static void __alias_free_mem(struct rcu_head *head)
{
	struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
	kmem_cache_free(fn_alias_kmem, fa);
}

static inline void alias_free_mem_rcu(struct fib_alias *fa)
{
	call_rcu(&fa->rcu, __alias_free_mem);
}
#define TNODE_KMALLOC_MAX \
	ilog2((PAGE_SIZE - sizeof(struct tnode)) / sizeof(struct tnode *))

static void __node_free_rcu(struct rcu_head *head)
{
	struct tnode *n = container_of(head, struct tnode, rcu);

	if (IS_LEAF(n))
		kmem_cache_free(trie_leaf_kmem, n);
	else if (n->bits <= TNODE_KMALLOC_MAX)
		kfree(n);
	else
		vfree(n);
}

#define node_free(n) call_rcu(&n->rcu, __node_free_rcu)
static inline void free_leaf_info(struct leaf_info *leaf)
{
	kfree_rcu(leaf, rcu);
}

static struct tnode *tnode_alloc(size_t size)
{
	if (size <= PAGE_SIZE)
		return kzalloc(size, GFP_KERNEL);
	else
		return vzalloc(size);
}
static void tnode_free_safe(struct tnode *tn)
{
	BUG_ON(IS_LEAF(tn));
	tn->rcu.next = tnode_free_head;
	tnode_free_head = &tn->rcu;
}

static void tnode_free_flush(void)
{
	struct callback_head *head;

	while ((head = tnode_free_head)) {
		struct tnode *tn = container_of(head, struct tnode, rcu);

		tnode_free_head = head->next;
		tnode_free_size += offsetof(struct tnode, child[1 << tn->bits]);

		node_free(tn);
	}

	if (tnode_free_size >= PAGE_SIZE * sync_pages) {
		tnode_free_size = 0;
		synchronize_rcu();
	}
}
static struct tnode *leaf_new(t_key key)
{
	struct tnode *l = kmem_cache_alloc(trie_leaf_kmem, GFP_KERNEL);
	if (l) {
		l->parent = NULL;
		/* set key and pos to reflect full key value
		 * any trailing zeros in the key should be ignored
		 * as the nodes are searched
		 */
		l->key = key;
		l->pos = 0;
		/* set bits to 0 indicating we are not a tnode */
		l->bits = 0;

		INIT_HLIST_HEAD(&l->list);
	}
	return l;
}

static struct leaf_info *leaf_info_new(int plen)
{
	struct leaf_info *li = kmalloc(sizeof(struct leaf_info), GFP_KERNEL);
	if (li) {
		li->plen = plen;
		li->mask_plen = ntohl(inet_make_mask(plen));
		INIT_LIST_HEAD(&li->falh);
	}
	return li;
}
static struct tnode *tnode_new(t_key key, int pos, int bits)
{
	size_t sz = offsetof(struct tnode, child[1 << bits]);
	struct tnode *tn = tnode_alloc(sz);
	unsigned int shift = pos + bits;

	/* verify bits is nonzero and pos + bits does not exceed KEYLENGTH */
	BUG_ON(!bits || (shift > KEYLENGTH));

	if (tn) {
		tn->parent = NULL;
		tn->pos = pos;
		tn->bits = bits;
		tn->key = (shift < KEYLENGTH) ? (key >> shift) << shift : 0;
		tn->full_children = 0;
		tn->empty_children = 1<<bits;
	}

	pr_debug("AT %p s=%zu %zu\n", tn, sizeof(struct tnode),
		 sizeof(struct tnode *) << bits);
	return tn;
}
/* Check whether a tnode 'n' is "full", i.e. it is an internal node
 * and no bits are skipped. See discussion in dyntree paper p. 6
 */
static inline int tnode_full(const struct tnode *tn, const struct tnode *n)
{
	return n && ((n->pos + n->bits) == tn->pos) && IS_TNODE(n);
}
static inline void put_child(struct tnode *tn, unsigned long i,
			     struct tnode *n)
{
	tnode_put_child_reorg(tn, i, n, -1);
}
/*
 * Add a child at position i overwriting the old value.
 * Update the value of full_children and empty_children.
 */
static void tnode_put_child_reorg(struct tnode *tn, unsigned long i,
				  struct tnode *n, int wasfull)
{
	struct tnode *chi = rtnl_dereference(tn->child[i]);
	int isfull;

	BUG_ON(i >= tnode_child_length(tn));

	/* update emptyChildren */
	if (n == NULL && chi != NULL)
		tn->empty_children++;
	else if (n != NULL && chi == NULL)
		tn->empty_children--;

	/* update fullChildren */
	if (wasfull == -1)
		wasfull = tnode_full(tn, chi);

	isfull = tnode_full(tn, n);
	if (wasfull && !isfull)
		tn->full_children--;
	else if (!wasfull && isfull)
		tn->full_children++;

	node_set_parent(n, tn);

	rcu_assign_pointer(tn->child[i], n);
}
static void put_child_root(struct tnode *tp, struct trie *t,
			   t_key key, struct tnode *n)
{
	if (tp)
		put_child(tp, get_index(key, tp), n);
	else
		rcu_assign_pointer(t->trie, n);
}
|
|
|
|
|
2009-08-29 14:57:15 +08:00
|
|
|
#define MAX_WORK 10
|
2015-01-01 02:55:47 +08:00
|
|
|
static struct tnode *resize(struct trie *t, struct tnode *tn)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *old_tn, *n = NULL;
|
2005-10-05 04:01:58 +08:00
|
|
|
int inflate_threshold_use;
|
|
|
|
int halve_threshold_use;
|
2009-08-29 14:57:15 +08:00
|
|
|
int max_work;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2007-02-09 22:24:47 +08:00
|
|
|
if (!tn)
|
2005-06-22 03:43:18 +08:00
|
|
|
return NULL;
|
|
|
|
|
2005-08-24 12:59:41 +08:00
|
|
|
pr_debug("In tnode_resize %p inflate_threshold=%d threshold=%d\n",
|
|
|
|
tn, inflate_threshold, halve_threshold);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
|
|
|
/* No children */
|
2015-01-01 02:55:35 +08:00
|
|
|
if (tn->empty_children > (tnode_child_length(tn) - 1))
|
|
|
|
goto no_children;
|
|
|
|
|
2005-06-22 03:43:18 +08:00
|
|
|
/* One child */
|
2015-01-01 02:55:35 +08:00
|
|
|
if (tn->empty_children == (tnode_child_length(tn) - 1))
|
2009-08-29 14:57:15 +08:00
|
|
|
goto one_child;
|
2005-07-20 05:01:51 +08:00
|
|
|
/*
|
2005-06-22 03:43:18 +08:00
|
|
|
* Double as long as the resulting node has a number of
|
|
|
|
* nonempty nodes that are above the threshold.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
2005-07-20 05:01:51 +08:00
|
|
|
* From "Implementing a dynamic compressed trie" by Stefan Nilsson of
|
|
|
|
* the Helsinki University of Technology and Matti Tikkanen of Nokia
|
2005-06-22 03:43:18 +08:00
|
|
|
* Telecommunications, page 6:
|
2005-07-20 05:01:51 +08:00
|
|
|
* "A node is doubled if the ratio of non-empty children to all
|
2005-06-22 03:43:18 +08:00
|
|
|
* children in the *doubled* node is at least 'high'."
|
|
|
|
*
|
2005-07-20 05:01:51 +08:00
|
|
|
* 'high' in this instance is the variable 'inflate_threshold'. It
|
|
|
|
* is expressed as a percentage, so we multiply it with
|
|
|
|
* tnode_child_length() and instead of multiplying by 2 (since the
|
|
|
|
* child array will be doubled by inflate()) and multiplying
|
|
|
|
* the left-hand side by 100 (to handle the percentage thing) we
|
2005-06-22 03:43:18 +08:00
|
|
|
* multiply the left-hand side by 50.
|
2005-07-20 05:01:51 +08:00
|
|
|
*
|
|
|
|
* The left-hand side may look a bit weird: tnode_child_length(tn)
|
|
|
|
* - tn->empty_children is of course the number of non-null children
|
|
|
|
* in the current node. tn->full_children is the number of "full"
|
2005-06-22 03:43:18 +08:00
|
|
|
 * children, that is non-null tnodes with a skip value of 0.
 * All of those will be doubled in the resulting inflated tnode, so
 * we just count them one extra time here.
 *
 * A clearer way to write this would be:
 *
 * to_be_doubled = tn->full_children;
 * not_to_be_doubled = tnode_child_length(tn) - tn->empty_children -
 *     tn->full_children;
 *
 * new_child_length = tnode_child_length(tn) * 2;
 *
 * new_fill_factor = 100 * (not_to_be_doubled + 2*to_be_doubled) /
 *      new_child_length;
 * if (new_fill_factor >= inflate_threshold)
 *
 * ...and so on, though it would mess up the while () loop.
 *
 * anyway,
 * 100 * (not_to_be_doubled + 2*to_be_doubled) / new_child_length >=
 *      inflate_threshold
 *
 * avoid a division:
 * 100 * (not_to_be_doubled + 2*to_be_doubled) >=
 *      inflate_threshold * new_child_length
 *
 * expand not_to_be_doubled and to_be_doubled, and shorten:
 * 100 * (tnode_child_length(tn) - tn->empty_children +
 *    tn->full_children) >= inflate_threshold * new_child_length
 *
 * expand new_child_length:
 * 100 * (tnode_child_length(tn) - tn->empty_children +
 *    tn->full_children) >=
 *      inflate_threshold * tnode_child_length(tn) * 2
 *
 * shorten again:
 * 50 * (tn->full_children + tnode_child_length(tn) -
 *    tn->empty_children) >= inflate_threshold *
 *    tnode_child_length(tn)
 *
 */

	/* Keep root node larger */
	if (!node_parent(tn)) {
		inflate_threshold_use = inflate_threshold_root;
		halve_threshold_use = halve_threshold_root;
	} else {
		inflate_threshold_use = inflate_threshold;
		halve_threshold_use = halve_threshold;
	}

	max_work = MAX_WORK;
	while ((tn->full_children > 0 && max_work-- &&
		50 * (tn->full_children + tnode_child_length(tn)
		      - tn->empty_children)
		>= inflate_threshold_use * tnode_child_length(tn))) {

		old_tn = tn;
		tn = inflate(t, tn);

		if (IS_ERR(tn)) {
			tn = old_tn;
#ifdef CONFIG_IP_FIB_TRIE_STATS
			this_cpu_inc(t->stats->resize_node_skipped);
#endif
			break;
		}
	}

	/* Return if at least one inflate is run */
	if (max_work != MAX_WORK)
		return tn;

	/*
	 * Halve as long as the number of empty children in this
	 * node is above threshold.
	 */
	max_work = MAX_WORK;
	while (tn->bits > 1 && max_work-- &&
	       100 * (tnode_child_length(tn) - tn->empty_children) <
	       halve_threshold_use * tnode_child_length(tn)) {

		old_tn = tn;
		tn = halve(t, tn);
		if (IS_ERR(tn)) {
			tn = old_tn;
#ifdef CONFIG_IP_FIB_TRIE_STATS
			this_cpu_inc(t->stats->resize_node_skipped);
#endif
			break;
		}
	}

	/* Only one child remains */
	if (tn->empty_children == (tnode_child_length(tn) - 1)) {
		unsigned long i;
one_child:
		for (i = tnode_child_length(tn); !n && i;)
			n = tnode_get_child(tn, --i);
no_children:
		/* compress one level */
		node_set_parent(n, NULL);
		tnode_free_safe(tn);
		return n;
	}

	return tn;
}

static void tnode_clean_free(struct tnode *tn)
{
	struct tnode *tofree;
	unsigned long i;

	for (i = 0; i < tnode_child_length(tn); i++) {
		tofree = tnode_get_child(tn, i);
		if (tofree)
			node_free(tofree);
	}
	node_free(tn);
}

static struct tnode *inflate(struct trie *t, struct tnode *oldtnode)
{
	unsigned long olen = tnode_child_length(oldtnode);
	struct tnode *tn;
	unsigned long i;
	t_key m;

	pr_debug("In inflate\n");

	tn = tnode_new(oldtnode->key, oldtnode->pos - 1, oldtnode->bits + 1);

	if (!tn)
		return ERR_PTR(-ENOMEM);

	/*
	 * Preallocate and store tnodes before the actual work so we
	 * don't get into an inconsistent state if memory allocation
	 * fails. In case of failure we return the oldnode and inflate
	 * of tnode is ignored.
	 */
	for (i = 0, m = 1u << tn->pos; i < olen; i++) {
		struct tnode *inode = tnode_get_child(oldtnode, i);

		if (tnode_full(oldtnode, inode) && (inode->bits > 1)) {
			struct tnode *left, *right;

			left = tnode_new(inode->key & ~m, inode->pos,
					 inode->bits - 1);
			if (!left)
				goto nomem;

			right = tnode_new(inode->key | m, inode->pos,
					  inode->bits - 1);

			if (!right) {
				node_free(left);
				goto nomem;
			}

			put_child(tn, 2*i, left);
			put_child(tn, 2*i+1, right);
		}
	}

	for (i = 0; i < olen; i++) {
		struct tnode *inode = tnode_get_child(oldtnode, i);
		struct tnode *left, *right;
		unsigned long size, j;

		/* An empty child */
		if (inode == NULL)
			continue;

		/* A leaf or an internal node with skipped bits */
		if (!tnode_full(oldtnode, inode)) {
			put_child(tn, get_index(inode->key, tn), inode);
			continue;
		}

		/* An internal node with two children */
		if (inode->bits == 1) {
			put_child(tn, 2*i, rtnl_dereference(inode->child[0]));
			put_child(tn, 2*i+1, rtnl_dereference(inode->child[1]));

			tnode_free_safe(inode);
			continue;
		}

		/* An internal node with more than two children */

		/* We will replace this node 'inode' with two new
		 * ones, 'left' and 'right', each with half of the
		 * original children. The two new nodes will have
		 * a position one bit further down the key and this
		 * means that the "significant" part of their keys
		 * (see the discussion near the top of this file)
		 * will differ by one bit, which will be "0" in
		 * left's key and "1" in right's key. Since we are
		 * moving the key position by one step, the bit that
		 * we are moving away from - the bit at position
		 * (inode->pos) - is the one that will differ between
		 * left and right. So... we synthesize that bit in the
		 * two new keys.
		 * The mask 'm' below will be a single "one" bit at
		 * the position (inode->pos)
		 */

		/* Use the old key, but set the new significant
		 * bit to zero.
		 */

		left = tnode_get_child(tn, 2*i);
		put_child(tn, 2*i, NULL);

		BUG_ON(!left);

		right = tnode_get_child(tn, 2*i+1);
		put_child(tn, 2*i+1, NULL);

		BUG_ON(!right);

		size = tnode_child_length(left);
		for (j = 0; j < size; j++) {
			put_child(left, j, rtnl_dereference(inode->child[j]));
			put_child(right, j, rtnl_dereference(inode->child[j + size]));
		}
		put_child(tn, 2*i, resize(t, left));
		put_child(tn, 2*i+1, resize(t, right));

		tnode_free_safe(inode);
	}
	tnode_free_safe(oldtnode);
	return tn;
nomem:
	tnode_clean_free(tn);
	return ERR_PTR(-ENOMEM);
}

static struct tnode *halve(struct trie *t, struct tnode *oldtnode)
{
	unsigned long olen = tnode_child_length(oldtnode);
	struct tnode *tn, *left, *right;
	int i;

	pr_debug("In halve\n");

	tn = tnode_new(oldtnode->key, oldtnode->pos + 1, oldtnode->bits - 1);

	if (!tn)
		return ERR_PTR(-ENOMEM);

	/*
	 * Preallocate and store tnodes before the actual work so we
	 * don't get into an inconsistent state if memory allocation
	 * fails. In case of failure we return the oldnode and halve
	 * of tnode is ignored.
	 */

	for (i = 0; i < olen; i += 2) {
		left = tnode_get_child(oldtnode, i);
		right = tnode_get_child(oldtnode, i+1);

		/* Two nonempty children */
		if (left && right) {
			struct tnode *newn;

			newn = tnode_new(left->key, oldtnode->pos, 1);

			if (!newn)
				goto nomem;

			put_child(tn, i/2, newn);
		}

	}

	for (i = 0; i < olen; i += 2) {
		struct tnode *newBinNode;

		left = tnode_get_child(oldtnode, i);
		right = tnode_get_child(oldtnode, i+1);

		/* At least one of the children is empty */
		if (left == NULL) {
			if (right == NULL)	/* Both are empty */
				continue;
			put_child(tn, i/2, right);
			continue;
		}

		if (right == NULL) {
			put_child(tn, i/2, left);
			continue;
		}

		/* Two nonempty children */
		newBinNode = tnode_get_child(tn, i/2);
		put_child(tn, i/2, NULL);
		put_child(newBinNode, 0, left);
		put_child(newBinNode, 1, right);
		put_child(tn, i/2, resize(t, newBinNode));
	}
	tnode_free_safe(oldtnode);
	return tn;
nomem:
	tnode_clean_free(tn);
	return ERR_PTR(-ENOMEM);
}

/* readside must use rcu_read_lock; currently the dump routines
 * do, via get_fa_head and dump */

static struct leaf_info *find_leaf_info(struct tnode *l, int plen)
{
	struct hlist_head *head = &l->list;
	struct leaf_info *li;

	hlist_for_each_entry_rcu(li, head, hlist)
		if (li->plen == plen)
			return li;

	return NULL;
}

static inline struct list_head *get_fa_head(struct tnode *l, int plen)
{
	struct leaf_info *li = find_leaf_info(l, plen);

	if (!li)
		return NULL;

	return &li->falh;
}

static void insert_leaf_info(struct hlist_head *head, struct leaf_info *new)
{
	struct leaf_info *li = NULL, *last = NULL;

	if (hlist_empty(head)) {
		hlist_add_head_rcu(&new->hlist, head);
	} else {
		hlist_for_each_entry(li, head, hlist) {
			if (new->plen > li->plen)
				break;

			last = li;
		}
		if (last)
			hlist_add_behind_rcu(&new->hlist, &last->hlist);
		else
			hlist_add_before_rcu(&new->hlist, &li->hlist);
	}
}

/* rcu_read_lock needs to be held by caller from readside */
static struct tnode *fib_find_node(struct trie *t, u32 key)
{
	struct tnode *n = rcu_dereference_rtnl(t->trie);

	while (n) {
		unsigned long index = get_index(key, n);

		/* This bit of code is a bit tricky but it combines multiple
		 * checks into a single check.  The prefix consists of the
		 * prefix plus zeros for the bits in the cindex. The index
		 * is the difference between the key and this value.  From
		 * this we can actually derive several pieces of data.
		 *   if !(index >> bits)
		 *     we know the value is cindex
		 *   else
		 *     we have a mismatch in skip bits and failed
		 */
		if (index >> n->bits)
			return NULL;

		/* we have found a leaf. Prefixes have already been compared */
		if (IS_LEAF(n))
			break;

		n = rcu_dereference_rtnl(n->child[index]);
	}

	return n;
}

static void trie_rebalance(struct trie *t, struct tnode *tn)
{
	int wasfull;
	t_key cindex, key;
	struct tnode *tp;

	key = tn->key;

	while (tn != NULL && (tp = node_parent(tn)) != NULL) {
		cindex = get_index(key, tp);
		wasfull = tnode_full(tp, tnode_get_child(tp, cindex));
		tn = resize(t, tn);

		tnode_put_child_reorg(tp, cindex, tn, wasfull);

		tp = node_parent(tn);
		if (!tp)
			rcu_assign_pointer(t->trie, tn);

		tnode_free_flush();
		if (!tp)
			break;
		tn = tp;
	}

	/* Handle last (top) tnode */
	if (IS_TNODE(tn))
		tn = resize(t, tn);

	rcu_assign_pointer(t->trie, tn);
	tnode_free_flush();
}

/* only used from updater-side */

static struct list_head *fib_insert_node(struct trie *t, u32 key, int plen)
{
	struct list_head *fa_head = NULL;
	struct tnode *l, *n, *tp = NULL;
	struct leaf_info *li;

	li = leaf_info_new(plen);
	if (!li)
		return NULL;
	fa_head = &li->falh;

	n = rtnl_dereference(t->trie);

	/* If we point to NULL, stop. Either the tree is empty and we should
	 * just put a new leaf in it, or we have reached an empty child slot,
	 * and we should just put our new leaf in that.
	 *
	 * If we hit a node with a key that doesn't match then we should stop
	 * and create a new tnode to replace that node and insert ourselves
	 * and the other node into the new tnode.
	 */
	while (n) {
		unsigned long index = get_index(key, n);

		/* This bit of code is a bit tricky but it combines multiple
		 * checks into a single check.  The prefix consists of the
		 * prefix plus zeros for the "bits" in the prefix. The index
		 * is the difference between the key and this value.  From
		 * this we can actually derive several pieces of data.
		 *   if !(index >> bits)
		 *     we know the value is child index
		 *   else
		 *     we have a mismatch in skip bits and failed
		 */
		if (index >> n->bits)
			break;

		/* we have found a leaf. Prefixes have already been compared */
		if (IS_LEAF(n)) {
			/* Case 1: n is a leaf, and prefixes match */
			insert_leaf_info(&n->list, li);
			return fa_head;
		}

		tp = n;
		n = rcu_dereference_rtnl(n->child[index]);
	}

	l = leaf_new(key);
	if (!l) {
		free_leaf_info(li);
		return NULL;
	}

	insert_leaf_info(&l->list, li);

	/* Case 2: n is a LEAF or a TNODE and the key doesn't match.
	 *
	 *  Add a new tnode here
	 *  first tnode need some special handling
	 *  leaves us in position for handling as case 3
	 */
	if (n) {
		struct tnode *tn;

		tn = tnode_new(key, __fls(key ^ n->key), 1);
		if (!tn) {
			free_leaf_info(li);
			node_free(l);
			return NULL;
		}

		/* initialize routes out of node */
		NODE_INIT_PARENT(tn, tp);
		put_child(tn, get_index(key, tn) ^ 1, n);

		/* start adding routes into the node */
		put_child_root(tp, t, key, tn);
		node_set_parent(n, tn);

		/* parent now has a NULL spot where the leaf can go */
		tp = tn;
	}

	/* Case 3: n is NULL, and will just insert a new leaf */
	if (tp) {
		NODE_INIT_PARENT(l, tp);
		put_child(tp, get_index(key, tp), l);
		trie_rebalance(t, tp);
	} else {
		rcu_assign_pointer(t->trie, l);
	}

	return fa_head;
}

/*
 * Caller must hold RTNL.
 */
int fib_table_insert(struct fib_table *tb, struct fib_config *cfg)
{
	struct trie *t = (struct trie *) tb->tb_data;
	struct fib_alias *fa, *new_fa;
	struct list_head *fa_head = NULL;
	struct fib_info *fi;
	int plen = cfg->fc_dst_len;
	u8 tos = cfg->fc_tos;
	u32 key, mask;
	int err;
	struct tnode *l;

	if (plen > 32)
		return -EINVAL;

	key = ntohl(cfg->fc_dst);

	pr_debug("Insert table=%u %08x/%d\n", tb->tb_id, key, plen);

	mask = ntohl(inet_make_mask(plen));

	if (key & ~mask)
		return -EINVAL;

	key = key & mask;

	fi = fib_create_info(cfg);
	if (IS_ERR(fi)) {
		err = PTR_ERR(fi);
		goto err;
	}

	l = fib_find_node(t, key);
	fa = NULL;

	if (l) {
		fa_head = get_fa_head(l, plen);
		fa = fib_find_alias(fa_head, tos, fi->fib_priority);
	}

	/* Now fa, if non-NULL, points to the first fib alias
	 * with the same keys [prefix,tos,priority], if such key already
	 * exists or to the node before which we will insert new one.
	 *
	 * If fa is NULL, we will need to allocate a new one and
	 * insert to the head of f.
	 *
	 * If f is NULL, no fib node matched the destination key
	 * and we need to allocate a new one of those as well.
	 */

	if (fa && fa->fa_tos == tos &&
	    fa->fa_info->fib_priority == fi->fib_priority) {
		struct fib_alias *fa_first, *fa_match;

		err = -EEXIST;
		if (cfg->fc_nlflags & NLM_F_EXCL)
			goto out;

		/* We have 2 goals:
		 * 1. Find exact match for type, scope, fib_info to avoid
		 * duplicate routes
		 * 2. Find next 'fa' (or head), NLM_F_APPEND inserts before it
		 */
		fa_match = NULL;
		fa_first = fa;
		fa = list_entry(fa->fa_list.prev, struct fib_alias, fa_list);
		list_for_each_entry_continue(fa, fa_head, fa_list) {
			if (fa->fa_tos != tos)
				break;
			if (fa->fa_info->fib_priority != fi->fib_priority)
				break;
			if (fa->fa_type == cfg->fc_type &&
			    fa->fa_info == fi) {
				fa_match = fa;
				break;
			}
		}

		if (cfg->fc_nlflags & NLM_F_REPLACE) {
			struct fib_info *fi_drop;
			u8 state;

			fa = fa_first;
			if (fa_match) {
				if (fa == fa_match)
					err = 0;
				goto out;
			}
			err = -ENOBUFS;
			new_fa = kmem_cache_alloc(fn_alias_kmem, GFP_KERNEL);
			if (new_fa == NULL)
				goto out;

			fi_drop = fa->fa_info;
			new_fa->fa_tos = fa->fa_tos;
			new_fa->fa_info = fi;
			new_fa->fa_type = cfg->fc_type;
			state = fa->fa_state;
			new_fa->fa_state = state & ~FA_S_ACCESSED;

			list_replace_rcu(&fa->fa_list, &new_fa->fa_list);
			alias_free_mem_rcu(fa);

			fib_release_info(fi_drop);
			if (state & FA_S_ACCESSED)
				rt_cache_flush(cfg->fc_nlinfo.nl_net);
			rtmsg_fib(RTM_NEWROUTE, htonl(key), new_fa, plen,
				tb->tb_id, &cfg->fc_nlinfo, NLM_F_REPLACE);

			goto succeeded;
		}
		/* Error if we find a perfect match which
		 * uses the same scope, type, and nexthop
		 * information.
		 */
		if (fa_match)
			goto out;
|
|
|
|
2006-08-18 09:14:52 +08:00
|
|
|
if (!(cfg->fc_nlflags & NLM_F_APPEND))
|
2008-01-29 13:18:06 +08:00
|
|
|
fa = fa_first;
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
|
|
|
err = -ENOENT;
|
2006-08-18 09:14:52 +08:00
|
|
|
if (!(cfg->fc_nlflags & NLM_F_CREATE))
|
2005-06-22 03:43:18 +08:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
err = -ENOBUFS;
|
2006-12-07 12:33:17 +08:00
|
|
|
new_fa = kmem_cache_alloc(fn_alias_kmem, GFP_KERNEL);
|
2005-06-22 03:43:18 +08:00
|
|
|
if (new_fa == NULL)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
new_fa->fa_info = fi;
|
|
|
|
new_fa->fa_tos = tos;
|
2006-08-18 09:14:52 +08:00
|
|
|
new_fa->fa_type = cfg->fc_type;
|
2005-06-22 03:43:18 +08:00
|
|
|
new_fa->fa_state = 0;
|
|
|
|
/*
|
|
|
|
* Insert the new entry into the list.
|
|
|
|
*/
|
|
|
|
|
2005-07-20 05:01:51 +08:00
|
|
|
if (!fa_head) {
|
2008-01-13 12:57:07 +08:00
|
|
|
fa_head = fib_insert_node(t, key, plen);
|
|
|
|
if (unlikely(!fa_head)) {
|
|
|
|
err = -ENOMEM;
|
2005-06-29 06:00:39 +08:00
|
|
|
goto out_free_new_fa;
|
2008-01-13 12:57:07 +08:00
|
|
|
}
|
2005-06-29 06:00:39 +08:00
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2011-04-15 05:49:37 +08:00
|
|
|
if (!plen)
|
|
|
|
tb->tb_num_default++;
|
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
list_add_tail_rcu(&new_fa->fa_list,
|
|
|
|
(fa ? &fa->fa_list : fa_head));
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2012-09-07 08:45:29 +08:00
|
|
|
rt_cache_flush(cfg->fc_nlinfo.nl_net);
|
2006-08-18 09:14:52 +08:00
|
|
|
rtmsg_fib(RTM_NEWROUTE, htonl(key), new_fa, plen, tb->tb_id,
|
2007-05-24 05:55:06 +08:00
|
|
|
&cfg->fc_nlinfo, 0);
|
2005-06-22 03:43:18 +08:00
|
|
|
succeeded:
|
|
|
|
return 0;
|
2005-06-29 06:00:39 +08:00
|
|
|
|
|
|
|
out_free_new_fa:
|
|
|
|
kmem_cache_free(fn_alias_kmem, new_fa);
|
2005-06-22 03:43:18 +08:00
|
|
|
out:
|
|
|
|
fib_release_info(fi);
|
2005-08-10 11:24:39 +08:00
|
|
|
err:
|
2005-06-22 03:43:18 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2005-09-20 06:31:18 +08:00
|
|
|
/* should be called with rcu_read_lock */
|
2015-01-01 02:55:47 +08:00
|
|
|
static int check_leaf(struct fib_table *tb, struct trie *t, struct tnode *l,
|
2011-03-12 08:54:08 +08:00
|
|
|
t_key key, const struct flowi4 *flp,
|
2010-10-05 18:41:36 +08:00
|
|
|
struct fib_result *res, int fib_flags)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
|
|
|
struct leaf_info *li;
|
|
|
|
struct hlist_head *hhead = &l->list;
|
2005-07-20 05:01:51 +08:00
|
|
|
|
hlist: drop the node parameter from iterators
I'm not sure why, but the hlist for each entry iterators were conceived
list_for_each_entry(pos, head, member)
The hlist ones were greedy and wanted an extra parameter:
hlist_for_each_entry(tpos, pos, head, member)
Why did they need an extra pos parameter? I'm not quite sure. Not only
they don't really need it, it also prevents the iterator from looking
exactly like the list iterator, which is unfortunate.
Besides the semantic patch, there was some manual work required:
- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small amount of places were using the 'node' parameter, this
was modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator
properly, so those had to be fixed up manually.
The semantic patch which is mostly the work of Peter Senna Tschudin is here:
@@
iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
type T;
expression a,c,d,e;
identifier b;
statement S;
@@
-T b;
<+... when != b
(
hlist_for_each_entry(a,
- b,
c, d) S
|
hlist_for_each_entry_continue(a,
- b,
c) S
|
hlist_for_each_entry_from(a,
- b,
c) S
|
hlist_for_each_entry_rcu(a,
- b,
c, d) S
|
hlist_for_each_entry_rcu_bh(a,
- b,
c, d) S
|
hlist_for_each_entry_continue_rcu_bh(a,
- b,
c) S
|
for_each_busy_worker(a, c,
- b,
d) S
|
ax25_uid_for_each(a,
- b,
c) S
|
ax25_for_each(a,
- b,
c) S
|
inet_bind_bucket_for_each(a,
- b,
c) S
|
sctp_for_each_hentry(a,
- b,
c) S
|
sk_for_each(a,
- b,
c) S
|
sk_for_each_rcu(a,
- b,
c) S
|
sk_for_each_from
-(a, b)
+(a)
S
+ sk_for_each_from(a) S
|
sk_for_each_safe(a,
- b,
c, d) S
|
sk_for_each_bound(a,
- b,
c) S
|
hlist_for_each_entry_safe(a,
- b,
c, d, e) S
|
hlist_for_each_entry_continue_rcu(a,
- b,
c) S
|
nr_neigh_for_each(a,
- b,
c) S
|
nr_neigh_for_each_safe(a,
- b,
c, d) S
|
nr_node_for_each(a,
- b,
c) S
|
nr_node_for_each_safe(a,
- b,
c, d) S
|
- for_each_gfn_sp(a, c, d, b) S
+ for_each_gfn_sp(a, c, d) S
|
- for_each_gfn_indirect_valid_sp(a, c, d, b) S
+ for_each_gfn_indirect_valid_sp(a, c, d) S
|
for_each_host(a,
- b,
c) S
|
for_each_host_safe(a,
- b,
c, d) S
|
for_each_mesh_entry(a,
- b,
c, d) S
)
...+>
[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foudnation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-28 09:06:00 +08:00
|
|
|
hlist_for_each_entry_rcu(li, hhead, hlist) {
|
2011-03-08 07:01:10 +08:00
|
|
|
struct fib_alias *fa;
|
2008-01-23 13:53:36 +08:00
|
|
|
|
2011-07-18 11:16:33 +08:00
|
|
|
if (l->key != (key & li->mask_plen))
|
2005-06-22 03:43:18 +08:00
|
|
|
continue;
|
|
|
|
|
2011-03-08 07:01:10 +08:00
|
|
|
list_for_each_entry_rcu(fa, &li->falh, fa_list) {
|
|
|
|
struct fib_info *fi = fa->fa_info;
|
|
|
|
int nhsel, err;
|
2008-01-23 13:53:36 +08:00
|
|
|
|
2011-03-12 08:54:08 +08:00
|
|
|
if (fa->fa_tos && fa->fa_tos != flp->flowi4_tos)
|
2011-03-08 07:01:10 +08:00
|
|
|
continue;
|
2012-05-11 10:16:32 +08:00
|
|
|
if (fi->fib_dead)
|
|
|
|
continue;
|
2011-03-25 09:06:47 +08:00
|
|
|
if (fa->fa_info->fib_scope < flp->flowi4_scope)
|
2011-03-08 07:01:10 +08:00
|
|
|
continue;
|
|
|
|
fib_alias_accessed(fa);
|
|
|
|
err = fib_props[fa->fa_type].error;
|
2015-01-01 02:55:54 +08:00
|
|
|
if (unlikely(err < 0)) {
|
2005-06-22 03:43:18 +08:00
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
2015-01-01 02:55:29 +08:00
|
|
|
this_cpu_inc(t->stats->semantic_match_passed);
|
2011-03-08 07:01:10 +08:00
|
|
|
#endif
|
2011-03-26 11:33:23 +08:00
|
|
|
return err;
|
2011-03-08 07:01:10 +08:00
|
|
|
}
|
|
|
|
if (fi->fib_flags & RTNH_F_DEAD)
|
|
|
|
continue;
|
|
|
|
for (nhsel = 0; nhsel < fi->fib_nhs; nhsel++) {
|
|
|
|
const struct fib_nh *nh = &fi->fib_nh[nhsel];
|
|
|
|
|
|
|
|
if (nh->nh_flags & RTNH_F_DEAD)
|
|
|
|
continue;
|
2011-03-12 08:54:08 +08:00
|
|
|
if (flp->flowi4_oif && flp->flowi4_oif != nh->nh_oif)
|
2011-03-08 07:01:10 +08:00
|
|
|
continue;
|
|
|
|
|
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
2015-01-01 02:55:29 +08:00
|
|
|
this_cpu_inc(t->stats->semantic_match_passed);
|
2011-03-08 07:01:10 +08:00
|
|
|
#endif
|
2011-07-18 11:16:33 +08:00
|
|
|
res->prefixlen = li->plen;
|
2011-03-08 07:01:10 +08:00
|
|
|
res->nh_sel = nhsel;
|
|
|
|
res->type = fa->fa_type;
|
2015-01-01 02:55:54 +08:00
|
|
|
res->scope = fi->fib_scope;
|
2011-03-08 07:01:10 +08:00
|
|
|
res->fi = fi;
|
|
|
|
res->table = tb;
|
|
|
|
res->fa_head = &li->falh;
|
|
|
|
if (!(fib_flags & FIB_LOOKUP_NOREF))
|
2011-07-18 11:16:33 +08:00
|
|
|
atomic_inc(&fi->fib_clntref);
|
2011-03-08 07:01:10 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
2015-01-01 02:55:29 +08:00
|
|
|
this_cpu_inc(t->stats->semantic_match_miss);
|
2005-06-22 03:43:18 +08:00
|
|
|
#endif
|
|
|
|
}
|
2008-01-23 13:53:36 +08:00
|
|
|
|
2008-07-11 07:52:52 +08:00
|
|
|
return 1;
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
static inline t_key prefix_mismatch(t_key key, struct tnode *n)
|
|
|
|
{
|
|
|
|
t_key prefix = n->key;
|
|
|
|
|
|
|
|
return (key ^ prefix) & (prefix | -prefix);
|
|
|
|
}
|
|
|
|
|
2011-03-12 08:54:08 +08:00
|
|
|
int fib_table_lookup(struct fib_table *tb, const struct flowi4 *flp,
|
2010-10-05 18:41:36 +08:00
|
|
|
struct fib_result *res, int fib_flags)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
2015-01-01 02:55:54 +08:00
|
|
|
struct trie *t = (struct trie *)tb->tb_data;
|
2015-01-01 02:55:29 +08:00
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
|
|
|
struct trie_use_stats __percpu *stats = t->stats;
|
|
|
|
#endif
|
2015-01-01 02:55:54 +08:00
|
|
|
const t_key key = ntohl(flp->daddr);
|
|
|
|
struct tnode *n, *pn;
|
|
|
|
t_key cindex;
|
|
|
|
int ret = 1;
|
2005-08-10 11:24:39 +08:00
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
rcu_read_lock();
|
2005-08-10 11:24:39 +08:00
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
n = rcu_dereference(t->trie);
|
2005-07-20 05:01:51 +08:00
|
|
|
if (!n)
|
2005-06-22 03:43:18 +08:00
|
|
|
goto failed;
|
|
|
|
|
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
2015-01-01 02:55:29 +08:00
|
|
|
this_cpu_inc(stats->gets);
|
2005-06-22 03:43:18 +08:00
|
|
|
#endif
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
pn = n;
|
2015-01-01 02:55:54 +08:00
|
|
|
cindex = 0;
|
|
|
|
|
|
|
|
/* Step 1: Travel to the longest prefix match in the trie */
|
|
|
|
for (;;) {
|
|
|
|
unsigned long index = get_index(key, n);
|
|
|
|
|
|
|
|
/* This bit of code is a bit tricky but it combines multiple
|
|
|
|
* checks into a single check. The prefix is the node's key
|
|
|
|
* with zeros for the "bits" below it. The index
|
|
|
|
* is the difference between the key and this value. From
|
|
|
|
* this we can actually derive several pieces of data.
|
|
|
|
* if !(index >> bits)
|
|
|
|
* we know the value is child index
|
|
|
|
* else
|
|
|
|
* we have a mismatch in skip bits and failed
|
|
|
|
*/
|
|
|
|
if (index >> n->bits)
|
|
|
|
break;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* we have found a leaf. Prefixes have already been compared */
|
|
|
|
if (IS_LEAF(n))
|
2008-01-23 13:53:36 +08:00
|
|
|
goto found;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* only record pn and cindex if we are going to be chopping
|
|
|
|
* bits later. Otherwise we are just wasting cycles.
|
2005-08-10 11:24:39 +08:00
|
|
|
*/
|
2015-01-01 02:55:54 +08:00
|
|
|
if (index) {
|
|
|
|
pn = n;
|
|
|
|
cindex = index;
|
2005-08-10 11:24:39 +08:00
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
n = rcu_dereference(n->child[index]);
|
|
|
|
if (unlikely(!n))
|
|
|
|
goto backtrace;
|
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* Step 2: Sort out leaves and begin backtracing for longest prefix */
|
|
|
|
for (;;) {
|
|
|
|
/* record the pointer where our next node pointer is stored */
|
|
|
|
struct tnode __rcu **cptr = n->child;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* This test verifies that none of the bits that differ
|
|
|
|
* between the key and the prefix exist in the region of
|
|
|
|
* the lsb and higher in the prefix.
|
2005-08-10 11:24:39 +08:00
|
|
|
*/
|
2015-01-01 02:55:54 +08:00
|
|
|
if (unlikely(prefix_mismatch(key, n)))
|
|
|
|
goto backtrace;
|
2005-08-10 11:24:39 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* exit out and process leaf */
|
|
|
|
if (unlikely(IS_LEAF(n)))
|
|
|
|
break;
|
2005-08-10 11:24:39 +08:00
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
/* Don't bother recording parent info. Since we are in
|
|
|
|
* prefix match mode we will have to come back to wherever
|
|
|
|
* we started this traversal anyway
|
2005-08-10 11:24:39 +08:00
|
|
|
*/
|
|
|
|
|
2015-01-01 02:55:54 +08:00
|
|
|
while ((n = rcu_dereference(*cptr)) == NULL) {
|
2005-06-22 03:43:18 +08:00
|
|
|
backtrace:
|
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
2015-01-01 02:55:54 +08:00
|
|
|
if (!n)
|
|
|
|
this_cpu_inc(stats->null_node_hit);
|
2005-06-22 03:43:18 +08:00
|
|
|
#endif
|
2015-01-01 02:55:54 +08:00
|
|
|
/* If we are at cindex 0 there are no more bits for
|
|
|
|
* us to strip at this level so we must ascend back
|
|
|
|
* up one level to see if there are any more bits to
|
|
|
|
* be stripped there.
|
|
|
|
*/
|
|
|
|
while (!cindex) {
|
|
|
|
t_key pkey = pn->key;
|
|
|
|
|
|
|
|
pn = node_parent_rcu(pn);
|
|
|
|
if (unlikely(!pn))
|
|
|
|
goto failed;
|
|
|
|
#ifdef CONFIG_IP_FIB_TRIE_STATS
|
|
|
|
this_cpu_inc(stats->backtrack);
|
|
|
|
#endif
|
|
|
|
/* Get Child's index */
|
|
|
|
cindex = get_index(pkey, pn);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* strip the least significant bit from the cindex */
|
|
|
|
cindex &= cindex - 1;
|
|
|
|
|
|
|
|
/* grab pointer for next child node */
|
|
|
|
cptr = &pn->child[cindex];
|
2005-07-20 05:01:51 +08:00
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
2015-01-01 02:55:54 +08:00
|
|
|
|
2005-06-22 03:43:18 +08:00
|
|
|
found:
|
2015-01-01 02:55:54 +08:00
|
|
|
/* Step 3: Process the leaf, if that fails fall back to backtracing */
|
|
|
|
ret = check_leaf(tb, t, n, key, flp, res, fib_flags);
|
|
|
|
if (unlikely(ret > 0))
|
|
|
|
goto backtrace;
|
|
|
|
failed:
|
2005-08-26 04:01:29 +08:00
|
|
|
rcu_read_unlock();
|
2005-06-22 03:43:18 +08:00
|
|
|
return ret;
|
|
|
|
}
|
2011-08-25 19:46:12 +08:00
|
|
|
EXPORT_SYMBOL_GPL(fib_table_lookup);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2008-01-23 13:56:34 +08:00
|
|
|
/*
|
|
|
|
* Remove the leaf and return parent.
|
|
|
|
*/
|
2015-01-01 02:55:47 +08:00
|
|
|
static void trie_leaf_remove(struct trie *t, struct tnode *l)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
2015-01-01 02:55:35 +08:00
|
|
|
struct tnode *tp = node_parent(l);
|
2005-07-20 05:01:51 +08:00
|
|
|
|
2008-01-23 13:56:34 +08:00
|
|
|
pr_debug("entering trie_leaf_remove(%p)\n", l);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-07-20 05:01:51 +08:00
|
|
|
if (tp) {
|
2015-01-01 02:56:06 +08:00
|
|
|
put_child(tp, get_index(l->key, tp), NULL);
|
2009-06-18 15:28:51 +08:00
|
|
|
trie_rebalance(t, tp);
|
2015-01-01 02:56:06 +08:00
|
|
|
} else {
|
2011-08-02 00:19:00 +08:00
|
|
|
RCU_INIT_POINTER(t->trie, NULL);
|
2015-01-01 02:56:06 +08:00
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2015-01-01 02:55:41 +08:00
|
|
|
node_free(l);
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
|
|
|
|
2007-03-27 05:22:22 +08:00
|
|
|
/*
|
|
|
|
* Caller must hold RTNL.
|
|
|
|
*/
|
2009-09-20 18:35:36 +08:00
|
|
|
int fib_table_delete(struct fib_table *tb, struct fib_config *cfg)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
|
|
|
struct trie *t = (struct trie *) tb->tb_data;
|
|
|
|
u32 key, mask;
|
2006-08-18 09:14:52 +08:00
|
|
|
int plen = cfg->fc_dst_len;
|
|
|
|
u8 tos = cfg->fc_tos;
|
2005-06-22 03:43:18 +08:00
|
|
|
struct fib_alias *fa, *fa_to_delete;
|
|
|
|
struct list_head *fa_head;
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *l;
|
2005-08-10 11:24:39 +08:00
|
|
|
struct leaf_info *li;
|
|
|
|
|
2005-07-20 05:01:51 +08:00
|
|
|
if (plen > 32)
|
2005-06-22 03:43:18 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
2006-08-18 09:14:52 +08:00
|
|
|
key = ntohl(cfg->fc_dst);
|
2005-08-10 11:24:39 +08:00
|
|
|
mask = ntohl(inet_make_mask(plen));
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-07-20 05:01:51 +08:00
|
|
|
if (key & ~mask)
|
2005-06-22 03:43:18 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
key = key & mask;
|
|
|
|
l = fib_find_node(t, key);
|
|
|
|
|
2005-07-20 05:01:51 +08:00
|
|
|
if (!l)
|
2005-06-22 03:43:18 +08:00
|
|
|
return -ESRCH;
|
|
|
|
|
2012-08-13 16:26:08 +08:00
|
|
|
li = find_leaf_info(l, plen);
|
|
|
|
|
|
|
|
if (!li)
|
|
|
|
return -ESRCH;
|
|
|
|
|
|
|
|
fa_head = &li->falh;
|
2005-06-22 03:43:18 +08:00
|
|
|
fa = fib_find_alias(fa_head, tos, 0);
|
|
|
|
|
|
|
|
if (!fa)
|
|
|
|
return -ESRCH;
|
|
|
|
|
2005-08-24 12:59:41 +08:00
|
|
|
pr_debug("Deleting %08x/%d tos=%d t=%p\n", key, plen, tos, t);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
|
|
|
fa_to_delete = NULL;
|
2008-01-29 13:18:06 +08:00
|
|
|
fa = list_entry(fa->fa_list.prev, struct fib_alias, fa_list);
|
|
|
|
list_for_each_entry_continue(fa, fa_head, fa_list) {
|
2005-06-22 03:43:18 +08:00
|
|
|
struct fib_info *fi = fa->fa_info;
|
|
|
|
|
|
|
|
if (fa->fa_tos != tos)
|
|
|
|
break;
|
|
|
|
|
2006-08-18 09:14:52 +08:00
|
|
|
if ((!cfg->fc_type || fa->fa_type == cfg->fc_type) &&
|
|
|
|
(cfg->fc_scope == RT_SCOPE_NOWHERE ||
|
2011-03-25 09:06:47 +08:00
|
|
|
fa->fa_info->fib_scope == cfg->fc_scope) &&
|
2011-03-19 20:13:46 +08:00
|
|
|
(!cfg->fc_prefsrc ||
|
|
|
|
fi->fib_prefsrc == cfg->fc_prefsrc) &&
|
2006-08-18 09:14:52 +08:00
|
|
|
(!cfg->fc_protocol ||
|
|
|
|
fi->fib_protocol == cfg->fc_protocol) &&
|
|
|
|
fib_nh_match(cfg, fi) == 0) {
|
2005-06-22 03:43:18 +08:00
|
|
|
fa_to_delete = fa;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2005-08-10 11:24:39 +08:00
|
|
|
if (!fa_to_delete)
|
|
|
|
return -ESRCH;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-08-10 11:24:39 +08:00
|
|
|
fa = fa_to_delete;
|
2006-08-18 09:14:52 +08:00
|
|
|
rtmsg_fib(RTM_DELROUTE, htonl(key), fa, plen, tb->tb_id,
|
2007-05-24 05:55:06 +08:00
|
|
|
&cfg->fc_nlinfo, 0);
|
2005-08-10 11:24:39 +08:00
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
list_del_rcu(&fa->fa_list);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2011-04-15 05:49:37 +08:00
|
|
|
if (!plen)
|
|
|
|
tb->tb_num_default--;
|
|
|
|
|
2005-08-10 11:24:39 +08:00
|
|
|
if (list_empty(fa_head)) {
|
2005-08-26 04:01:29 +08:00
|
|
|
hlist_del_rcu(&li->hlist);
|
2005-08-10 11:24:39 +08:00
|
|
|
free_leaf_info(li);
|
2005-08-26 04:01:29 +08:00
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-08-10 11:24:39 +08:00
|
|
|
if (hlist_empty(&l->list))
|
2008-01-23 13:56:34 +08:00
|
|
|
trie_leaf_remove(t, l);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-08-10 11:24:39 +08:00
|
|
|
if (fa->fa_state & FA_S_ACCESSED)
|
2012-09-07 08:45:29 +08:00
|
|
|
rt_cache_flush(cfg->fc_nlinfo.nl_net);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
fib_release_info(fa->fa_info);
|
|
|
|
alias_free_mem_rcu(fa);
|
2005-08-10 11:24:39 +08:00
|
|
|
return 0;
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
|
|
|
|
2008-04-10 18:46:12 +08:00
|
|
|
static int trie_flush_list(struct list_head *head)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
|
|
|
struct fib_alias *fa, *fa_node;
|
|
|
|
int found = 0;
|
|
|
|
|
|
|
|
list_for_each_entry_safe(fa, fa_node, head, fa_list) {
|
|
|
|
struct fib_info *fi = fa->fa_info;
|
|
|
|
|
2005-08-26 04:01:29 +08:00
|
|
|
if (fi && (fi->fib_flags & RTNH_F_DEAD)) {
|
|
|
|
list_del_rcu(&fa->fa_list);
|
|
|
|
fib_release_info(fa->fa_info);
|
|
|
|
alias_free_mem_rcu(fa);
|
2005-06-22 03:43:18 +08:00
|
|
|
found++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return found;
|
|
|
|
}
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
static int trie_flush_leaf(struct tnode *l)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
|
|
|
int found = 0;
|
|
|
|
struct hlist_head *lih = &l->list;
|
2013-02-28 09:06:00 +08:00
|
|
|
struct hlist_node *tmp;
|
2005-06-22 03:43:18 +08:00
|
|
|
struct leaf_info *li = NULL;
|
|
|
|
|
2013-02-28 09:06:00 +08:00
|
|
|
hlist_for_each_entry_safe(li, tmp, lih, hlist) {
|
2008-04-10 18:46:12 +08:00
|
|
|
found += trie_flush_list(&li->falh);
|
2005-06-22 03:43:18 +08:00
|
|
|
|
|
|
|
if (list_empty(&li->falh)) {
|
2005-08-26 04:01:29 +08:00
|
|
|
hlist_del_rcu(&li->hlist);
|
2005-06-22 03:43:18 +08:00
|
|
|
free_leaf_info(li);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return found;
|
|
|
|
}
|
|
|
|
|
2008-01-23 13:55:32 +08:00
|
|
|
/*
|
|
|
|
* Scan for the next right leaf starting at node p->child[idx]
|
|
|
|
* Since we have a back pointer, no recursion is necessary.
|
|
|
|
*/
|
2015-01-01 02:55:47 +08:00
|
|
|
static struct tnode *leaf_walk_rcu(struct tnode *p, struct tnode *c)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
2008-01-23 13:55:32 +08:00
|
|
|
do {
|
2015-01-01 02:56:18 +08:00
|
|
|
unsigned long idx = c ? get_index(c->key, p) + 1 : 0;
|
2005-08-26 04:01:29 +08:00
|
|
|
|
2015-01-01 02:56:18 +08:00
|
|
|
while (idx < tnode_child_length(p)) {
|
2008-01-23 13:55:32 +08:00
|
|
|
c = tnode_get_child_rcu(p, idx++);
|
2005-08-26 04:01:29 +08:00
|
|
|
if (!c)
|
2005-08-10 11:24:39 +08:00
|
|
|
continue;
|
|
|
|
|
2013-08-06 02:18:49 +08:00
|
|
|
if (IS_LEAF(c))
|
2015-01-01 02:55:47 +08:00
|
|
|
return c;
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
/* Restart scanning in the new node */
|
2015-01-01 02:55:47 +08:00
|
|
|
p = c;
|
2008-01-23 13:55:32 +08:00
|
|
|
idx = 0;
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
/* Node empty, walk back up to parent */
|
2015-01-01 02:55:47 +08:00
|
|
|
c = p;
|
2010-09-10 07:32:28 +08:00
|
|
|
} while ((p = node_parent_rcu(c)) != NULL);
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
return NULL; /* Root of trie */
|
|
|
|
}
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
static struct tnode *trie_firstleaf(struct trie *t)
|
2008-01-23 13:55:32 +08:00
|
|
|
{
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *n = rcu_dereference_rtnl(t->trie);
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
if (!n)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
if (IS_LEAF(n)) /* trie is just a leaf */
|
2015-01-01 02:55:47 +08:00
|
|
|
return n;
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
return leaf_walk_rcu(n, NULL);
|
|
|
|
}
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
static struct tnode *trie_nextleaf(struct tnode *l)
|
2008-01-23 13:55:32 +08:00
|
|
|
{
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *p = node_parent_rcu(l);
|
2008-01-23 13:55:32 +08:00
|
|
|
|
|
|
|
if (!p)
|
|
|
|
return NULL; /* trie with just one leaf */
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
return leaf_walk_rcu(p, l);
|
2005-06-22 03:43:18 +08:00
|
|
|
}
|
|
|
|
|
2015-01-01 02:55:47 +08:00
|
|
|
static struct tnode *trie_leafindex(struct trie *t, int index)
|
2008-02-01 08:45:47 +08:00
|
|
|
{
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *l = trie_firstleaf(t);
|
2008-02-01 08:45:47 +08:00
|
|
|
|
2008-02-12 13:12:49 +08:00
|
|
|
while (l && index-- > 0)
|
2008-02-01 08:45:47 +08:00
|
|
|
l = trie_nextleaf(l);
|
2008-02-12 13:12:49 +08:00
|
|
|
|
2008-02-01 08:45:47 +08:00
|
|
|
return l;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2007-03-27 05:22:22 +08:00
|
|
|
/*
 * Caller must hold RTNL.
 */
int fib_table_flush(struct fib_table *tb)
{
	struct trie *t = (struct trie *) tb->tb_data;
	struct tnode *l, *ll = NULL;
	int found = 0;

	for (l = trie_firstleaf(t); l; l = trie_nextleaf(l)) {
		found += trie_flush_leaf(l);

		if (ll && hlist_empty(&ll->list))
			trie_leaf_remove(t, ll);
		ll = l;
	}

	if (ll && hlist_empty(&ll->list))
		trie_leaf_remove(t, ll);

	pr_debug("trie_flush found=%d\n", found);
	return found;
}

void fib_free_table(struct fib_table *tb)
{
#ifdef CONFIG_IP_FIB_TRIE_STATS
	struct trie *t = (struct trie *)tb->tb_data;

	free_percpu(t->stats);
#endif /* CONFIG_IP_FIB_TRIE_STATS */
	kfree(tb);
}

static int fn_trie_dump_fa(t_key key, int plen, struct list_head *fah,
			   struct fib_table *tb,
			   struct sk_buff *skb, struct netlink_callback *cb)
{
	int i, s_i;
	struct fib_alias *fa;
	__be32 xkey = htonl(key);

	s_i = cb->args[5];
	i = 0;

	/* rcu_read_lock is held by caller */

	list_for_each_entry_rcu(fa, fah, fa_list) {
		if (i < s_i) {
			i++;
			continue;
		}

		if (fib_dump_info(skb, NETLINK_CB(cb->skb).portid,
				  cb->nlh->nlmsg_seq,
				  RTM_NEWROUTE,
				  tb->tb_id,
				  fa->fa_type,
				  xkey,
				  plen,
				  fa->fa_tos,
				  fa->fa_info, NLM_F_MULTI) < 0) {
			cb->args[5] = i;
			return -1;
		}
		i++;
	}
	cb->args[5] = i;
	return skb->len;
}

static int fn_trie_dump_leaf(struct tnode *l, struct fib_table *tb,
			     struct sk_buff *skb, struct netlink_callback *cb)
{
	struct leaf_info *li;
	int i, s_i;

	s_i = cb->args[4];
	i = 0;

	/* rcu_read_lock is held by caller */
	hlist_for_each_entry_rcu(li, &l->list, hlist) {
		if (i < s_i) {
			i++;
			continue;
		}

		if (i > s_i)
			cb->args[5] = 0;

		if (list_empty(&li->falh))
			continue;

		if (fn_trie_dump_fa(l->key, li->plen, &li->falh, tb, skb, cb) < 0) {
			cb->args[4] = i;
			return -1;
		}
		i++;
	}

	cb->args[4] = i;
	return skb->len;
}

int fib_table_dump(struct fib_table *tb, struct sk_buff *skb,
		   struct netlink_callback *cb)
{
	struct tnode *l;
	struct trie *t = (struct trie *) tb->tb_data;
	t_key key = cb->args[2];
	int count = cb->args[3];

	rcu_read_lock();
	/* Dump starting at last key.
	 * Note: 0.0.0.0/0 (ie default) is first key.
	 */
	if (count == 0)
		l = trie_firstleaf(t);
	else {
		/* Normally, continue from last key, but if that is missing
		 * fallback to using slow rescan
		 */
		l = fib_find_node(t, key);
		if (!l)
			l = trie_leafindex(t, count);
	}

	while (l) {
		cb->args[2] = l->key;
		if (fn_trie_dump_leaf(l, tb, skb, cb) < 0) {
			cb->args[3] = count;
			rcu_read_unlock();
			return -1;
		}

		++count;
		l = trie_nextleaf(l);
		memset(&cb->args[4], 0,
		       sizeof(cb->args) - 4*sizeof(cb->args[0]));
	}
	cb->args[3] = count;
	rcu_read_unlock();

	return skb->len;
}

void __init fib_trie_init(void)
{
	fn_alias_kmem = kmem_cache_create("ip_fib_alias",
					  sizeof(struct fib_alias),
					  0, SLAB_PANIC, NULL);

	trie_leaf_kmem = kmem_cache_create("ip_fib_trie",
					   max(sizeof(struct tnode),
					       sizeof(struct leaf_info)),
					   0, SLAB_PANIC, NULL);
}


struct fib_table *fib_trie_table(u32 id)
{
	struct fib_table *tb;
	struct trie *t;

	tb = kmalloc(sizeof(struct fib_table) + sizeof(struct trie),
		     GFP_KERNEL);
	if (tb == NULL)
		return NULL;

	tb->tb_id = id;
	tb->tb_default = -1;
	tb->tb_num_default = 0;

	t = (struct trie *) tb->tb_data;
	RCU_INIT_POINTER(t->trie, NULL);
#ifdef CONFIG_IP_FIB_TRIE_STATS
	t->stats = alloc_percpu(struct trie_use_stats);
	if (!t->stats) {
		kfree(tb);
		tb = NULL;
	}
#endif

	return tb;
}

#ifdef CONFIG_PROC_FS
/* Depth first Trie walk iterator */
struct fib_trie_iter {
	struct seq_net_private p;
	struct fib_table *tb;
	struct tnode *tnode;
	unsigned int index;
	unsigned int depth;
};

static struct tnode *fib_trie_get_next(struct fib_trie_iter *iter)
{
	unsigned long cindex = iter->index;
	struct tnode *tn = iter->tnode;
	struct tnode *p;

	/* A single entry routing table */
	if (!tn)
		return NULL;

	pr_debug("get_next iter={node=%p index=%d depth=%d}\n",
		 iter->tnode, iter->index, iter->depth);
rescan:
	while (cindex < tnode_child_length(tn)) {
		struct tnode *n = tnode_get_child_rcu(tn, cindex);

		if (n) {
			if (IS_LEAF(n)) {
				iter->tnode = tn;
				iter->index = cindex + 1;
			} else {
				/* push down one level */
				iter->tnode = n;
				iter->index = 0;
				++iter->depth;
			}
			return n;
		}

		++cindex;
	}

	/* Current node exhausted, pop back up */
	p = node_parent_rcu(tn);
	if (p) {
		cindex = get_index(tn->key, p) + 1;
		tn = p;
		--iter->depth;
		goto rescan;
	}

	/* got root? */
	return NULL;
}

static struct tnode *fib_trie_get_first(struct fib_trie_iter *iter,
					struct trie *t)
{
	struct tnode *n;

	if (!t)
		return NULL;

	n = rcu_dereference(t->trie);
	if (!n)
		return NULL;

	if (IS_TNODE(n)) {
		iter->tnode = n;
		iter->index = 0;
		iter->depth = 1;
	} else {
		iter->tnode = NULL;
		iter->index = 0;
		iter->depth = 0;
	}

	return n;
}

static void trie_collect_stats(struct trie *t, struct trie_stat *s)
{
	struct tnode *n;
	struct fib_trie_iter iter;

	memset(s, 0, sizeof(*s));

	rcu_read_lock();
	for (n = fib_trie_get_first(&iter, t); n; n = fib_trie_get_next(&iter)) {
		if (IS_LEAF(n)) {
			struct leaf_info *li;

			s->leaves++;
			s->totdepth += iter.depth;
			if (iter.depth > s->maxdepth)
				s->maxdepth = iter.depth;

			hlist_for_each_entry_rcu(li, &n->list, hlist)
				++s->prefixes;
		} else {
			unsigned long i;

			s->tnodes++;
			if (n->bits < MAX_STAT_DEPTH)
				s->nodesizes[n->bits]++;

			for (i = 0; i < tnode_child_length(n); i++) {
				if (!rcu_access_pointer(n->child[i]))
					s->nullpointers++;
			}
		}
	}
	rcu_read_unlock();
}

/*
 *	This outputs /proc/net/fib_triestats
 */
static void trie_show_stats(struct seq_file *seq, struct trie_stat *stat)
{
	unsigned int i, max, pointers, bytes, avdepth;

	if (stat->leaves)
		avdepth = stat->totdepth*100 / stat->leaves;
	else
		avdepth = 0;

	seq_printf(seq, "\tAver depth:     %u.%02d\n",
		   avdepth / 100, avdepth % 100);
	seq_printf(seq, "\tMax depth:      %u\n", stat->maxdepth);

	seq_printf(seq, "\tLeaves:         %u\n", stat->leaves);
	bytes = sizeof(struct tnode) * stat->leaves;

	seq_printf(seq, "\tPrefixes:       %u\n", stat->prefixes);
	bytes += sizeof(struct leaf_info) * stat->prefixes;

	seq_printf(seq, "\tInternal nodes: %u\n\t", stat->tnodes);
	bytes += sizeof(struct tnode) * stat->tnodes;

	max = MAX_STAT_DEPTH;
	while (max > 0 && stat->nodesizes[max-1] == 0)
		max--;

	pointers = 0;
	for (i = 1; i < max; i++)
		if (stat->nodesizes[i] != 0) {
			seq_printf(seq, "  %u: %u",  i, stat->nodesizes[i]);
			pointers += (1<<i) * stat->nodesizes[i];
		}
	seq_putc(seq, '\n');
	seq_printf(seq, "\tPointers: %u\n", pointers);

	bytes += sizeof(struct tnode *) * pointers;
	seq_printf(seq, "Null ptrs: %u\n", stat->nullpointers);
	seq_printf(seq, "Total size: %u  kB\n", (bytes + 1023) / 1024);
}

#ifdef CONFIG_IP_FIB_TRIE_STATS
static void trie_show_usage(struct seq_file *seq,
			    const struct trie_use_stats __percpu *stats)
{
	struct trie_use_stats s = { 0 };
	int cpu;

	/* loop through all of the CPUs and gather up the stats */
	for_each_possible_cpu(cpu) {
		const struct trie_use_stats *pcpu = per_cpu_ptr(stats, cpu);

		s.gets += pcpu->gets;
		s.backtrack += pcpu->backtrack;
		s.semantic_match_passed += pcpu->semantic_match_passed;
		s.semantic_match_miss += pcpu->semantic_match_miss;
		s.null_node_hit += pcpu->null_node_hit;
		s.resize_node_skipped += pcpu->resize_node_skipped;
	}

	seq_printf(seq, "\nCounters:\n---------\n");
	seq_printf(seq, "gets = %u\n", s.gets);
	seq_printf(seq, "backtracks = %u\n", s.backtrack);
	seq_printf(seq, "semantic match passed = %u\n",
		   s.semantic_match_passed);
	seq_printf(seq, "semantic match miss = %u\n", s.semantic_match_miss);
	seq_printf(seq, "null node hit= %u\n", s.null_node_hit);
	seq_printf(seq, "skipped node resize = %u\n\n", s.resize_node_skipped);
}
#endif /*  CONFIG_IP_FIB_TRIE_STATS */

static void fib_table_print(struct seq_file *seq, struct fib_table *tb)
{
	if (tb->tb_id == RT_TABLE_LOCAL)
		seq_puts(seq, "Local:\n");
	else if (tb->tb_id == RT_TABLE_MAIN)
		seq_puts(seq, "Main:\n");
	else
		seq_printf(seq, "Id %d:\n", tb->tb_id);
}


static int fib_triestat_seq_show(struct seq_file *seq, void *v)
{
	struct net *net = (struct net *)seq->private;
	unsigned int h;

	seq_printf(seq,
		   "Basic info: size of leaf:"
		   " %Zd bytes, size of tnode: %Zd bytes.\n",
		   sizeof(struct tnode), sizeof(struct tnode));

	for (h = 0; h < FIB_TABLE_HASHSZ; h++) {
		struct hlist_head *head = &net->ipv4.fib_table_hash[h];
		struct fib_table *tb;

		hlist_for_each_entry_rcu(tb, head, tb_hlist) {
			struct trie *t = (struct trie *) tb->tb_data;
			struct trie_stat stat;

			if (!t)
				continue;

			fib_table_print(seq, tb);

			trie_collect_stats(t, &stat);
			trie_show_stats(seq, &stat);
#ifdef CONFIG_IP_FIB_TRIE_STATS
			trie_show_usage(seq, t->stats);
#endif
		}
	}

	return 0;
}

static int fib_triestat_seq_open(struct inode *inode, struct file *file)
{
	return single_open_net(inode, file, fib_triestat_seq_show);
}

static const struct file_operations fib_triestat_fops = {
	.owner	= THIS_MODULE,
	.open	= fib_triestat_seq_open,
	.read	= seq_read,
	.llseek	= seq_lseek,
	.release = single_release_net,
};

static struct tnode *fib_trie_get_idx(struct seq_file *seq, loff_t pos)
{
	struct fib_trie_iter *iter = seq->private;
	struct net *net = seq_file_net(seq);
	loff_t idx = 0;
	unsigned int h;

	for (h = 0; h < FIB_TABLE_HASHSZ; h++) {
		struct hlist_head *head = &net->ipv4.fib_table_hash[h];
		struct fib_table *tb;

		hlist_for_each_entry_rcu(tb, head, tb_hlist) {
			struct tnode *n;

			for (n = fib_trie_get_first(iter,
						    (struct trie *) tb->tb_data);
			     n; n = fib_trie_get_next(iter))
				if (pos == idx++) {
					iter->tb = tb;
					return n;
				}
		}
	}

	return NULL;
}

static void *fib_trie_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(RCU)
{
	rcu_read_lock();
	return fib_trie_get_idx(seq, *pos);
}

static void *fib_trie_seq_next(struct seq_file *seq, void *v, loff_t *pos)
|
2005-06-22 03:43:18 +08:00
|
|
|
{
|
2005-09-10 04:35:42 +08:00
|
|
|
struct fib_trie_iter *iter = seq->private;
|
2008-03-26 01:36:06 +08:00
|
|
|
struct net *net = seq_file_net(seq);
|
2008-03-24 13:43:56 +08:00
|
|
|
struct fib_table *tb = iter->tb;
|
|
|
|
struct hlist_node *tb_node;
|
|
|
|
unsigned int h;
|
2015-01-01 02:55:47 +08:00
|
|
|
struct tnode *n;
|
2005-09-10 04:35:42 +08:00
|
|
|
|
2005-06-22 03:43:18 +08:00
|
|
|
++*pos;
|
2008-03-24 13:43:56 +08:00
|
|
|
/* next node in same table */
|
|
|
|
n = fib_trie_get_next(iter);
|
|
|
|
if (n)
|
|
|
|
return n;
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2008-03-24 13:43:56 +08:00
|
|
|
/* walk rest of this hash chain */
|
|
|
|
h = tb->tb_id & (FIB_TABLE_HASHSZ - 1);
|
2011-03-31 16:51:35 +08:00
|
|
|
while ((tb_node = rcu_dereference(hlist_next_rcu(&tb->tb_hlist)))) {
|
2008-03-24 13:43:56 +08:00
|
|
|
tb = hlist_entry(tb_node, struct fib_table, tb_hlist);
|
|
|
|
n = fib_trie_get_first(iter, (struct trie *) tb->tb_data);
|
|
|
|
if (n)
|
|
|
|
goto found;
|
|
|
|
}
|
2005-06-22 03:43:18 +08:00
|
|
|
|
2008-03-24 13:43:56 +08:00
|
|
|
/* new hash chain */
|
|
|
|
while (++h < FIB_TABLE_HASHSZ) {
|
|
|
|
struct hlist_head *head = &net->ipv4.fib_table_hash[h];
		hlist_for_each_entry_rcu(tb, head, tb_hlist) {
			n = fib_trie_get_first(iter, (struct trie *) tb->tb_data);
			if (n)
				goto found;
		}
	}
	return NULL;

found:
	iter->tb = tb;
	return n;
}

static void fib_trie_seq_stop(struct seq_file *seq, void *v)
	__releases(RCU)
{
	rcu_read_unlock();
}

static void seq_indent(struct seq_file *seq, int n)
{
	while (n-- > 0)
		seq_puts(seq, "   ");
}

static inline const char *rtn_scope(char *buf, size_t len, enum rt_scope_t s)
{
	switch (s) {
	case RT_SCOPE_UNIVERSE: return "universe";
	case RT_SCOPE_SITE:	return "site";
	case RT_SCOPE_LINK:	return "link";
	case RT_SCOPE_HOST:	return "host";
	case RT_SCOPE_NOWHERE:	return "nowhere";
	default:
		snprintf(buf, len, "scope=%d", s);
		return buf;
	}
}

static const char *const rtn_type_names[__RTN_MAX] = {
	[RTN_UNSPEC] = "UNSPEC",
	[RTN_UNICAST] = "UNICAST",
	[RTN_LOCAL] = "LOCAL",
	[RTN_BROADCAST] = "BROADCAST",
	[RTN_ANYCAST] = "ANYCAST",
	[RTN_MULTICAST] = "MULTICAST",
	[RTN_BLACKHOLE] = "BLACKHOLE",
	[RTN_UNREACHABLE] = "UNREACHABLE",
	[RTN_PROHIBIT] = "PROHIBIT",
	[RTN_THROW] = "THROW",
	[RTN_NAT] = "NAT",
	[RTN_XRESOLVE] = "XRESOLVE",
};

static inline const char *rtn_type(char *buf, size_t len, unsigned int t)
{
	if (t < __RTN_MAX && rtn_type_names[t])
		return rtn_type_names[t];
	snprintf(buf, len, "type %u", t);
	return buf;
}

/* Pretty print the trie */
static int fib_trie_seq_show(struct seq_file *seq, void *v)
{
	const struct fib_trie_iter *iter = seq->private;
	struct tnode *n = v;

	if (!node_parent_rcu(n))
		fib_table_print(seq, iter->tb);

	if (IS_TNODE(n)) {
		__be32 prf = htonl(n->key);

		seq_indent(seq, iter->depth - 1);
		seq_printf(seq, "  +-- %pI4/%zu %u %u %u\n",
			   &prf, KEYLENGTH - n->pos - n->bits, n->bits,
			   n->full_children, n->empty_children);
	} else {
		struct leaf_info *li;
		__be32 val = htonl(n->key);

		seq_indent(seq, iter->depth);
		seq_printf(seq, "  |-- %pI4\n", &val);

		hlist_for_each_entry_rcu(li, &n->list, hlist) {
			struct fib_alias *fa;

			list_for_each_entry_rcu(fa, &li->falh, fa_list) {
				char buf1[32], buf2[32];

				seq_indent(seq, iter->depth + 1);
				seq_printf(seq, "  /%d %s %s", li->plen,
					   rtn_scope(buf1, sizeof(buf1),
						     fa->fa_info->fib_scope),
					   rtn_type(buf2, sizeof(buf2),
						    fa->fa_type));
				if (fa->fa_tos)
					seq_printf(seq, " tos=%d", fa->fa_tos);
				seq_putc(seq, '\n');
			}
		}
	}

	return 0;
}

static const struct seq_operations fib_trie_seq_ops = {
	.start	= fib_trie_seq_start,
	.next	= fib_trie_seq_next,
	.stop	= fib_trie_seq_stop,
	.show	= fib_trie_seq_show,
};

static int fib_trie_seq_open(struct inode *inode, struct file *file)
{
	return seq_open_net(inode, file, &fib_trie_seq_ops,
			    sizeof(struct fib_trie_iter));
}

static const struct file_operations fib_trie_fops = {
	.owner	= THIS_MODULE,
	.open	= fib_trie_seq_open,
	.read	= seq_read,
	.llseek	= seq_lseek,
	.release = seq_release_net,
};

struct fib_route_iter {
	struct seq_net_private p;
	struct trie *main_trie;
	loff_t	pos;
	t_key	key;
};

static struct tnode *fib_route_get_idx(struct fib_route_iter *iter, loff_t pos)
{
	struct tnode *l = NULL;
	struct trie *t = iter->main_trie;

	/* use cache location of last found key */
	if (iter->pos > 0 && pos >= iter->pos &&
	    (l = fib_find_node(t, iter->key)))
		pos -= iter->pos;
	else {
		iter->pos = 0;
		l = trie_firstleaf(t);
	}

	while (l && pos-- > 0) {
		iter->pos++;
		l = trie_nextleaf(l);
	}

	if (l)
		iter->key = l->key;	/* remember it */
	else
		iter->pos = 0;		/* forget it */

	return l;
}

static void *fib_route_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(RCU)
{
	struct fib_route_iter *iter = seq->private;
	struct fib_table *tb;

	rcu_read_lock();
	tb = fib_get_table(seq_file_net(seq), RT_TABLE_MAIN);
	if (!tb)
		return NULL;

	iter->main_trie = (struct trie *) tb->tb_data;
	if (*pos == 0)
		return SEQ_START_TOKEN;
	else
		return fib_route_get_idx(iter, *pos - 1);
}

static void *fib_route_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	struct fib_route_iter *iter = seq->private;
	struct tnode *l = v;

	++*pos;
	if (v == SEQ_START_TOKEN) {
		iter->pos = 0;
		l = trie_firstleaf(iter->main_trie);
	} else {
		iter->pos++;
		l = trie_nextleaf(l);
	}

	if (l)
		iter->key = l->key;
	else
		iter->pos = 0;
	return l;
}

static void fib_route_seq_stop(struct seq_file *seq, void *v)
	__releases(RCU)
{
	rcu_read_unlock();
}

static unsigned int fib_flag_trans(int type, __be32 mask, const struct fib_info *fi)
{
	unsigned int flags = 0;

	if (type == RTN_UNREACHABLE || type == RTN_PROHIBIT)
		flags = RTF_REJECT;
	if (fi && fi->fib_nh->nh_gw)
		flags |= RTF_GATEWAY;
	if (mask == htonl(0xFFFFFFFF))
		flags |= RTF_HOST;
	flags |= RTF_UP;
	return flags;
}

/*
 *	This outputs /proc/net/route.
 *	The format of the file is not supposed to be changed
 *	and needs to be the same as the fib_hash output to avoid
 *	breaking legacy utilities.
 */
static int fib_route_seq_show(struct seq_file *seq, void *v)
{
	struct tnode *l = v;
	struct leaf_info *li;

	if (v == SEQ_START_TOKEN) {
		seq_printf(seq, "%-127s\n", "Iface\tDestination\tGateway "
			   "\tFlags\tRefCnt\tUse\tMetric\tMask\t\tMTU"
			   "\tWindow\tIRTT");
		return 0;
	}

	hlist_for_each_entry_rcu(li, &l->list, hlist) {
		struct fib_alias *fa;
		__be32 mask, prefix;

		mask = inet_make_mask(li->plen);
		prefix = htonl(l->key);

		list_for_each_entry_rcu(fa, &li->falh, fa_list) {
			const struct fib_info *fi = fa->fa_info;
			unsigned int flags = fib_flag_trans(fa->fa_type, mask, fi);

			if (fa->fa_type == RTN_BROADCAST
			    || fa->fa_type == RTN_MULTICAST)
				continue;

			seq_setwidth(seq, 127);

			if (fi)
				seq_printf(seq,
					   "%s\t%08X\t%08X\t%04X\t%d\t%u\t"
					   "%d\t%08X\t%d\t%u\t%u",
					   fi->fib_dev ? fi->fib_dev->name : "*",
					   prefix,
					   fi->fib_nh->nh_gw, flags, 0, 0,
					   fi->fib_priority,
					   mask,
					   (fi->fib_advmss ?
					    fi->fib_advmss + 40 : 0),
					   fi->fib_window,
					   fi->fib_rtt >> 3);
			else
				seq_printf(seq,
					   "*\t%08X\t%08X\t%04X\t%d\t%u\t"
					   "%d\t%08X\t%d\t%u\t%u",
					   prefix, 0, flags, 0, 0, 0,
					   mask, 0, 0, 0);

			seq_pad(seq, '\n');
		}
	}

	return 0;
}

static const struct seq_operations fib_route_seq_ops = {
	.start	= fib_route_seq_start,
	.next	= fib_route_seq_next,
	.stop	= fib_route_seq_stop,
	.show	= fib_route_seq_show,
};

static int fib_route_seq_open(struct inode *inode, struct file *file)
{
	return seq_open_net(inode, file, &fib_route_seq_ops,
			    sizeof(struct fib_route_iter));
}

static const struct file_operations fib_route_fops = {
	.owner	= THIS_MODULE,
	.open	= fib_route_seq_open,
	.read	= seq_read,
	.llseek	= seq_lseek,
	.release = seq_release_net,
};

int __net_init fib_proc_init(struct net *net)
{
	if (!proc_create("fib_trie", S_IRUGO, net->proc_net, &fib_trie_fops))
		goto out1;

	if (!proc_create("fib_triestat", S_IRUGO, net->proc_net,
			 &fib_triestat_fops))
		goto out2;

	if (!proc_create("route", S_IRUGO, net->proc_net, &fib_route_fops))
		goto out3;

	return 0;

out3:
	remove_proc_entry("fib_triestat", net->proc_net);
out2:
	remove_proc_entry("fib_trie", net->proc_net);
out1:
	return -ENOMEM;
}

void __net_exit fib_proc_exit(struct net *net)
{
	remove_proc_entry("fib_trie", net->proc_net);
	remove_proc_entry("fib_triestat", net->proc_net);
	remove_proc_entry("route", net->proc_net);
}

#endif /* CONFIG_PROC_FS */