License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NET_ACT_API_H
#define __NET_ACT_API_H

/*
 * Public action API for classifiers/qdiscs
 */

#include <linux/refcount.h>
#include <net/flow_offload.h>
#include <net/sch_generic.h>
#include <net/pkt_sched.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>
struct tcf_idrinfo {
        struct mutex    lock;
        struct idr      action_idr;
        struct net      *net;
};

struct tc_action_ops;
struct tc_action {
        const struct tc_action_ops      *ops;
        __u32                           type; /* for backward compat(TCA_OLD_COMPAT) */
        struct tcf_idrinfo              *idrinfo;

        u32                             tcfa_index;
        refcount_t                      tcfa_refcnt;
        atomic_t                        tcfa_bindcnt;
        int                             tcfa_action;
        struct tcf_t                    tcfa_tm;
        struct gnet_stats_basic_sync    tcfa_bstats;
        struct gnet_stats_basic_sync    tcfa_bstats_hw;
        struct gnet_stats_queue         tcfa_qstats;
        struct net_rate_estimator __rcu *tcfa_rate_est;
        spinlock_t                      tcfa_lock;
        struct gnet_stats_basic_sync __percpu *cpu_bstats;
        struct gnet_stats_basic_sync __percpu *cpu_bstats_hw;
        struct gnet_stats_queue __percpu *cpu_qstats;
        struct tc_cookie __rcu          *user_cookie;
        struct tcf_chain __rcu          *goto_chain;
        u32                             tcfa_flags;
        u8                              hw_stats;
        u8                              used_hw_stats;
        bool                            used_hw_stats_valid;
        u32                             in_hw_count;
};
#define tcf_index       common.tcfa_index
#define tcf_refcnt      common.tcfa_refcnt
#define tcf_bindcnt     common.tcfa_bindcnt
#define tcf_action      common.tcfa_action
#define tcf_tm          common.tcfa_tm
#define tcf_bstats      common.tcfa_bstats
#define tcf_qstats      common.tcfa_qstats
#define tcf_rate_est    common.tcfa_rate_est
#define tcf_lock        common.tcfa_lock

#define TCA_ACT_HW_STATS_ANY (TCA_ACT_HW_STATS_IMMEDIATE | \
                              TCA_ACT_HW_STATS_DELAYED)
/* Reserve 16 bits for user-space. See TCA_ACT_FLAGS_NO_PERCPU_STATS. */
#define TCA_ACT_FLAGS_USER_BITS 16
#define TCA_ACT_FLAGS_USER_MASK 0xffff
#define TCA_ACT_FLAGS_POLICE    (1U << TCA_ACT_FLAGS_USER_BITS)
#define TCA_ACT_FLAGS_BIND      (1U << (TCA_ACT_FLAGS_USER_BITS + 1))
#define TCA_ACT_FLAGS_REPLACE   (1U << (TCA_ACT_FLAGS_USER_BITS + 2))
#define TCA_ACT_FLAGS_NO_RTNL   (1U << (TCA_ACT_FLAGS_USER_BITS + 3))
#define TCA_ACT_FLAGS_AT_INGRESS        (1U << (TCA_ACT_FLAGS_USER_BITS + 4))
/* Update lastuse only if needed, to avoid dirtying a cache line.
 * We use a temp variable to avoid fetching jiffies twice.
 */
static inline void tcf_lastuse_update(struct tcf_t *tm)
{
        unsigned long now = jiffies;

        if (tm->lastuse != now)
                tm->lastuse = now;
        if (unlikely(!tm->firstuse))
                tm->firstuse = now;
}
static inline void tcf_tm_dump(struct tcf_t *dtm, const struct tcf_t *stm)
{
        dtm->install = jiffies_to_clock_t(jiffies - stm->install);
        dtm->lastuse = jiffies_to_clock_t(jiffies - stm->lastuse);
        dtm->firstuse = stm->firstuse ?
                jiffies_to_clock_t(jiffies - stm->firstuse) : 0;
        dtm->expires = jiffies_to_clock_t(stm->expires);
}
static inline enum flow_action_hw_stats tc_act_hw_stats(u8 hw_stats)
{
        if (WARN_ON_ONCE(hw_stats > TCA_ACT_HW_STATS_ANY))
                return FLOW_ACTION_HW_STATS_DONT_CARE;
        else if (!hw_stats)
                return FLOW_ACTION_HW_STATS_DISABLED;

        return hw_stats;
}
typedef void (*tc_action_priv_destructor)(void *priv);
struct tc_action_ops {
        struct list_head head;
        char    kind[IFNAMSIZ];
        enum tca_id  id; /* identifier should match kind */
        unsigned int    net_id;
        size_t  size;
        struct module   *owner;
        int     (*act)(struct sk_buff *, const struct tc_action *,
                       struct tcf_result *); /* called under RCU BH lock*/
        int     (*dump)(struct sk_buff *, struct tc_action *, int, int);
        void    (*cleanup)(struct tc_action *);
        int     (*lookup)(struct net *net, struct tc_action **a, u32 index);
        int     (*init)(struct net *net, struct nlattr *nla,
                        struct nlattr *est, struct tc_action **act,
                        struct tcf_proto *tp,
                        u32 flags, struct netlink_ext_ack *extack);
        int     (*walk)(struct net *, struct sk_buff *,
                        struct netlink_callback *, int,
                        const struct tc_action_ops *,
                        struct netlink_ext_ack *);
        void    (*stats_update)(struct tc_action *, u64, u64, u64, u64, bool);
        size_t  (*get_fill_size)(const struct tc_action *act);
        struct net_device *(*get_dev)(const struct tc_action *a,
                                      tc_action_priv_destructor *destructor);
        struct psample_group *
        (*get_psample_group)(const struct tc_action *a,
                             tc_action_priv_destructor *destructor);
        int     (*offload_act_setup)(struct tc_action *act, void *entry_data,
                                     u32 *index_inc, bool bind,
                                     struct netlink_ext_ack *extack);
};
#ifdef CONFIG_NET_CLS_ACT

#define ACT_P_CREATED 1
#define ACT_P_DELETED 1

struct tc_action_net {
        struct tcf_idrinfo *idrinfo;
        const struct tc_action_ops *ops;
};
static inline
int tc_action_net_init(struct net *net, struct tc_action_net *tn,
                       const struct tc_action_ops *ops)
{
        int err = 0;

        tn->idrinfo = kmalloc(sizeof(*tn->idrinfo), GFP_KERNEL);
        if (!tn->idrinfo)
                return -ENOMEM;
        tn->ops = ops;
        tn->idrinfo->net = net;
        mutex_init(&tn->idrinfo->lock);
        idr_init(&tn->idrinfo->action_idr);
        return err;
}
void tcf_idrinfo_destroy(const struct tc_action_ops *ops,
                         struct tcf_idrinfo *idrinfo);

static inline void tc_action_net_exit(struct list_head *net_list,
                                      unsigned int id)
{
        struct net *net;

        rtnl_lock();
        list_for_each_entry(net, net_list, exit_list) {
                struct tc_action_net *tn = net_generic(net, id);

                tcf_idrinfo_destroy(tn->ops, tn->idrinfo);
                kfree(tn->idrinfo);
        }
        rtnl_unlock();
}
int tcf_generic_walker(struct tc_action_net *tn, struct sk_buff *skb,
                       struct netlink_callback *cb, int type,
                       const struct tc_action_ops *ops,
                       struct netlink_ext_ack *extack);
int tcf_idr_search(struct tc_action_net *tn, struct tc_action **a, u32 index);
int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
                   struct tc_action **a, const struct tc_action_ops *ops,
                   int bind, bool cpustats, u32 flags);
int tcf_idr_create_from_flags(struct tc_action_net *tn, u32 index,
                              struct nlattr *est, struct tc_action **a,
                              const struct tc_action_ops *ops, int bind,
                              u32 flags);
void tcf_idr_insert_many(struct tc_action *actions[]);
void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
                        struct tc_action **a, int bind);
int tcf_idr_release(struct tc_action *a, bool bind);
net: sched: fix refcount imbalance in actions
Since commit 55334a5db5cd ("net_sched: act: refuse to remove bound action
outside"), we end up with a wrong reference count for a tc action.
Test case 1:
FOO="1,6 0 0 4294967295,"
BAR="1,6 0 0 4294967294,"
tc filter add dev foo parent 1: bpf bytecode "$FOO" flowid 1:1 \
action bpf bytecode "$FOO"
tc actions show action bpf
action order 0: bpf bytecode '1,6 0 0 4294967295' default-action pipe
index 1 ref 1 bind 1
tc actions replace action bpf bytecode "$BAR" index 1
tc actions show action bpf
action order 0: bpf bytecode '1,6 0 0 4294967294' default-action pipe
index 1 ref 2 bind 1
tc actions replace action bpf bytecode "$FOO" index 1
tc actions show action bpf
action order 0: bpf bytecode '1,6 0 0 4294967295' default-action pipe
index 1 ref 3 bind 1
Test case 2:
FOO="1,6 0 0 4294967295,"
tc filter add dev foo parent 1: bpf bytecode "$FOO" flowid 1:1 action ok
tc actions show action gact
action order 0: gact action pass
random type none pass val 0
index 1 ref 1 bind 1
tc actions add action drop index 1
RTNETLINK answers: File exists [...]
tc actions show action gact
action order 0: gact action pass
random type none pass val 0
index 1 ref 2 bind 1
tc actions add action drop index 1
RTNETLINK answers: File exists [...]
tc actions show action gact
action order 0: gact action pass
random type none pass val 0
index 1 ref 3 bind 1
What happens is that in tcf_hash_check(), we check tcf_common for a given
index and increase tcfc_refcnt and conditionally tcfc_bindcnt when we've
found an existing action. Now there are the following cases:
1) We do a late binding of an action. In that case, we leave the
tcfc_refcnt/tcfc_bindcnt increased and are done with the ->init()
handler. This is correctly handled.
2) We replace the given action, or we try to add one without replacing
and find out that the action at a specific index already exists
(thus, we go out with error in that case).
In case of 2), we have to undo the reference count increase from
tcf_hash_check() in the tcf_hash_check() function. Currently, we fail to
do so because of the 'tcfc_bindcnt > 0' check which bails out early with
an -EPERM error.
Now, while commit 55334a5db5cd prevents 'tc actions del action ...' on an
already classifier-bound action from dropping the reference count (which
could then become negative, wrap around, etc.), this restriction only
accounts for invocations outside a specific action's ->init() handler.
One possible solution would be to add a flag so that we only trigger
the -EPERM in situations where it is indeed relevant.
After the patch, above test cases have correct reference count again.
Fixes: 55334a5db5cd ("net_sched: act: refuse to remove bound action outside")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops);
int tcf_unregister_action(struct tc_action_ops *a,
                          struct pernet_operations *ops);
int tcf_action_destroy(struct tc_action *actions[], int bind);
int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
                    int nr_actions, struct tcf_result *res);
int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
                    struct nlattr *est,
                    struct tc_action *actions[], int init_res[], size_t *attr_size,
                    u32 flags, u32 fl_flags, struct netlink_ext_ack *extack);
net_sched: fix RTNL deadlock again caused by request_module()
tcf_action_init_1() loads tc action modules automatically with
request_module() after parsing the tc action names, and it drops the RTNL
lock and re-holds it before and after request_module(). This causes a
lot of trouble, as discovered by syzbot, because we can be in the
middle of batch initializations when we create an array of tc actions.
One of the problems is a deadlock:

CPU 0                                   CPU 1
rtnl_lock();
for (...) {
  tcf_action_init_1();
    -> rtnl_unlock();
    -> request_module();
                                        rtnl_lock();
                                        for (...) {
                                          tcf_action_init_1();
                                            -> tcf_idr_check_alloc();
                                            // Insert one action into idr,
                                            // but it is not committed until
                                            // tcf_idr_insert_many(), then drop
                                            // the RTNL lock in the _next_
                                            // iteration
                                            -> rtnl_unlock();
    -> rtnl_lock();
    -> a_o->init();
      -> tcf_idr_check_alloc();
      // Now waiting for the same index
      // to be committed
                                            -> request_module();
                                              -> rtnl_lock()
                                              // Now waiting for RTNL lock
                                        }
                                        rtnl_unlock();
}
rtnl_unlock();

This is not easy to solve; we can move the request_module() before
this loop, pre-load all the modules we need for this netlink
message, and then do the rest of the initializations. So the loop now
breaks down into two:

for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
        struct tc_action_ops *a_o;

        a_o = tc_action_load_ops(name, tb[i]...);
        ops[i - 1] = a_o;
}
for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
        act = tcf_action_init_1(ops[i - 1]...);
}

Although this looks serious, it has only been reported by syzbot, so it
seems hard for humans to trigger. And given the size of this patch,
I'd suggest it go to net-next and not be backported to stable.
This patch has been tested by syzbot and tested with tdc.py by me.
Fixes: 0fedc63fadf0 ("net_sched: commit action insertions together")
Reported-and-tested-by: syzbot+82752bc5331601cf4899@syzkaller.appspotmail.com
Reported-and-tested-by: syzbot+b3b63b6bff456bd95294@syzkaller.appspotmail.com
Reported-by: syzbot+ba67b12b1ca729912834@syzkaller.appspotmail.com
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Tested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Link: https://lore.kernel.org/r/20210117005657.14810-1-xiyou.wangcong@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

struct tc_action_ops *tc_action_load_ops(struct nlattr *nla, bool police,
                                         bool rtnl_held,
                                         struct netlink_ext_ack *extack);
struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
                                    struct nlattr *nla, struct nlattr *est,
                                    struct tc_action_ops *a_o, int *init_res,
                                    u32 flags, struct netlink_ext_ack *extack);
int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind,
                    int ref, bool terse);
int tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int, int);
int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int, int);
static inline void tcf_action_update_bstats(struct tc_action *a,
                                            struct sk_buff *skb)
{
        if (likely(a->cpu_bstats)) {
                bstats_update(this_cpu_ptr(a->cpu_bstats), skb);
                return;
        }
        spin_lock(&a->tcfa_lock);
        bstats_update(&a->tcfa_bstats, skb);
        spin_unlock(&a->tcfa_lock);
}
static inline void tcf_action_inc_drop_qstats(struct tc_action *a)
{
        if (likely(a->cpu_qstats)) {
                qstats_drop_inc(this_cpu_ptr(a->cpu_qstats));
                return;
        }
        spin_lock(&a->tcfa_lock);
        qstats_drop_inc(&a->tcfa_qstats);
        spin_unlock(&a->tcfa_lock);
}
static inline void tcf_action_inc_overlimit_qstats(struct tc_action *a)
{
        if (likely(a->cpu_qstats)) {
                qstats_overlimit_inc(this_cpu_ptr(a->cpu_qstats));
                return;
        }
        spin_lock(&a->tcfa_lock);
        qstats_overlimit_inc(&a->tcfa_qstats);
        spin_unlock(&a->tcfa_lock);
}
void tcf_action_update_stats(struct tc_action *a, u64 bytes, u64 packets,
                             u64 drops, bool hw);
int tcf_action_copy_stats(struct sk_buff *, struct tc_action *, int);

int tcf_action_update_hw_stats(struct tc_action *action);
int tcf_action_reoffload_cb(flow_indr_block_bind_cb_t *cb,
                            void *cb_priv, bool add);
net/sched: prepare TC actions to properly validate the control action
- pass a pointer to struct tcf_proto in each actions's init() handler,
to allow validating the control action, checking whether the chain
exists and (eventually) refcounting it.
- remove code that validates the control action after a successful call
to the action's init() handler, and replace it with a test that forbids
addition of actions having 'goto_chain' and NULL goto_chain pointer at
the same time.
- add tcf_action_check_ctrlact(), that will validate the control action
and eventually allocate the action 'goto_chain' within the init()
handler.
- add tcf_action_set_ctrlact(), that will assign the control action and
swap the current 'goto_chain' pointer with the new given one.
This disallows 'goto_chain' on actions that don't initialize it properly
in their init() handler, i.e. by calling tcf_action_check_ctrlact() after
a successful IDR reservation and then calling tcf_action_set_ctrlact()
to assign 'goto_chain' and 'tcf_action' consistently.
By doing this, the kernel no longer leaks refcounts when a valid
'goto chain' handle is replaced in TC actions, which used to cause
kmemleak splats like the following one:
# tc chain add dev dd0 chain 42 ingress protocol ip flower \
> ip_proto tcp action drop
# tc chain add dev dd0 chain 43 ingress protocol ip flower \
> ip_proto udp action drop
# tc filter add dev dd0 ingress matchall \
> action gact goto chain 42 index 66
# tc filter replace dev dd0 ingress matchall \
> action gact goto chain 43 index 66
# echo scan >/sys/kernel/debug/kmemleak
<...>
unreferenced object 0xffff93c0ee09f000 (size 1024):
comm "tc", pid 2565, jiffies 4295339808 (age 65.426s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 08 00 06 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<000000009b63f92d>] tc_ctl_chain+0x3d2/0x4c0
[<00000000683a8d72>] rtnetlink_rcv_msg+0x263/0x2d0
[<00000000ddd88f8e>] netlink_rcv_skb+0x4a/0x110
[<000000006126a348>] netlink_unicast+0x1a0/0x250
[<00000000b3340877>] netlink_sendmsg+0x2c1/0x3c0
[<00000000a25a2171>] sock_sendmsg+0x36/0x40
[<00000000f19ee1ec>] ___sys_sendmsg+0x280/0x2f0
[<00000000d0422042>] __sys_sendmsg+0x5e/0xa0
[<000000007a6c61f9>] do_syscall_64+0x5b/0x180
[<00000000ccd07542>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0000000013eaa334>] 0xffffffffffffffff
Fixes: db50514f9a9c ("net: sched: add termination action to allow goto chain")
Fixes: 97763dc0f401 ("net_sched: reject unknown tcfa_action values")
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
int tcf_action_check_ctrlact(int action, struct tcf_proto *tp,
                             struct tcf_chain **handle,
                             struct netlink_ext_ack *newchain);
struct tcf_chain *tcf_action_set_ctrlact(struct tc_action *a, int action,
                                         struct tcf_chain *newchain);
#ifdef CONFIG_INET
DECLARE_STATIC_KEY_FALSE(tcf_frag_xmit_count);
#endif

int tcf_dev_queue_xmit(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb));
#else /* !CONFIG_NET_CLS_ACT */

static inline int tcf_action_reoffload_cb(flow_indr_block_bind_cb_t *cb,
                                          void *cb_priv, bool add) {
        return 0;
}

#endif /* CONFIG_NET_CLS_ACT */
static inline void tcf_action_stats_update(struct tc_action *a, u64 bytes,
                                           u64 packets, u64 drops,
                                           u64 lastuse, bool hw)
{
#ifdef CONFIG_NET_CLS_ACT
        if (!a->ops->stats_update)
                return;

        a->ops->stats_update(a, bytes, packets, drops, lastuse, hw);
#endif
}

#endif