// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2017 Covalent IO, Inc. http://covalent.io
 */

/* A devmap's primary use is as a backend map for the XDP BPF helper call
 * bpf_redirect_map(). Because XDP is mostly concerned with performance we
 * spent some effort to ensure the datapath with redirect maps does not use
 * any locking. This is a quick note on the details.
 *
 * We have three possible paths to get into the devmap control plane: bpf
 * syscalls, bpf programs, and driver side xmit/flush operations. A bpf syscall
 * will invoke an update, delete, or lookup operation. To ensure updates and
 * deletes appear atomic from the datapath side, xchg() is used to modify the
 * netdev_map array. Then, because the datapath does a lookup into the
 * netdev_map array (read-only) from an RCU critical section, we use call_rcu()
 * to wait for an RCU grace period before freeing the old data structures. This
 * ensures the datapath always has a valid copy. However, the datapath does a
 * "flush" operation that pushes any pending packets in the driver outside the
 * RCU critical section. Each bpf_dtab_netdev tracks these pending operations
 * using a per-cpu flush list. The bpf_dtab_netdev object will not be destroyed
 * until this list is empty, indicating that all outstanding flush operations
 * have completed.
 *
 * BPF syscalls may race with BPF program calls on any of the update, delete,
 * or lookup operations. As noted above, the xchg() operation also keeps the
 * netdev_map consistent in this case. From the devmap side, BPF programs
 * calling into these operations are the same as multiple user space threads
 * making system calls.
 *
 * Finally, any of the above may race with a netdev_unregister notifier. The
 * unregister notifier must search the map structure for entries that hold a
 * reference to the net device being removed and remove them. This is a two
 * step process: (a) dereference the bpf_dtab_netdev object in netdev_map and
 * (b) check whether the ifindex is the same as that of the net_device being
 * removed.
 * When removing the dev, a cmpxchg() is used to ensure the correct dev is
 * removed; in the case of a concurrent update or delete operation it is
 * possible that the initially referenced dev is no longer in the map. As the
 * notifier hook walks the map, we know that new dev references cannot be
 * added by the user because core infrastructure ensures dev_get_by_index()
 * calls will fail at this point.
 */

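The cmpxchg() scheme described above can be illustrated with a standalone userspace sketch (an illustration only, not kernel code: C11 `atomic_compare_exchange_strong()` stands in for the kernel's cmpxchg(), and `slot`, `old_dev`, `new_dev`, and `remove_if_expected` are hypothetical names). The notifier-side removal only succeeds when the slot still holds the dev that was dereferenced earlier, so a racing update is never clobbered:

```c
#include <stdatomic.h>
#include <stddef.h>

/* One netdev_map entry and two stand-in "net devices". */
static _Atomic(void *) slot;
static int old_dev, new_dev;

/* Notifier-side removal: clear the slot only if it still holds the
 * dev we dereferenced earlier. Returns nonzero on success, 0 when a
 * racing update already replaced the entry (so we must not NULL it).
 */
static int remove_if_expected(void *expected)
{
	return atomic_compare_exchange_strong(&slot, &expected, NULL);
}
```

After `atomic_store(&slot, &old_dev)` and a racing `atomic_exchange(&slot, &new_dev)`, `remove_if_expected(&old_dev)` fails and leaves `new_dev` in place, whereas a plain exchange would have lost the new reference.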
#include <linux/bpf.h>
#include <net/xdp.h>
#include <linux/filter.h>
#include <trace/events/xdp.h>

#define DEV_CREATE_FLAG_MASK \
	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)

#define DEV_MAP_BULK_SIZE 16

struct bpf_dtab_netdev;

struct xdp_bulk_queue {
	struct xdp_frame *q[DEV_MAP_BULK_SIZE];
	struct list_head flush_node;
	struct net_device *dev_rx;
	struct bpf_dtab_netdev *obj;
	unsigned int count;
};

struct bpf_dtab_netdev {
	struct net_device *dev; /* must be first member, due to tracepoint */
	struct bpf_dtab *dtab;
	struct xdp_bulk_queue __percpu *bulkq;
	struct rcu_head rcu;
	unsigned int idx; /* keep track of map index for tracepoint */
};

struct bpf_dtab {
	struct bpf_map map;
	struct bpf_dtab_netdev **netdev_map;
	struct list_head __percpu *flush_list;
	struct list_head list;
};

static DEFINE_SPINLOCK(dev_map_lock);
static LIST_HEAD(dev_map_list);

static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
{
	int err, cpu;
	u64 cost;

	/* check sanity of attributes */
	if (attr->max_entries == 0 || attr->key_size != 4 ||
	    attr->value_size != 4 || attr->map_flags & ~DEV_CREATE_FLAG_MASK)
		return -EINVAL;

	/* Lookup returns a pointer straight to dev->ifindex, so make sure the
	 * verifier prevents writes from the BPF side
	 */
	attr->map_flags |= BPF_F_RDONLY_PROG;

	bpf_map_init_from_attr(&dtab->map, attr);

	/* make sure page count doesn't overflow */
	cost = (u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *);
	cost += sizeof(struct list_head) * num_possible_cpus();

	/* if map size is larger than memlock limit, reject it */
	err = bpf_map_charge_init(&dtab->map.memory, cost);
	if (err)
		return -EINVAL;

	dtab->flush_list = alloc_percpu(struct list_head);
	if (!dtab->flush_list)
		goto free_charge;

	for_each_possible_cpu(cpu)
		INIT_LIST_HEAD(per_cpu_ptr(dtab->flush_list, cpu));

	dtab->netdev_map = bpf_map_area_alloc(dtab->map.max_entries *
					      sizeof(struct bpf_dtab_netdev *),
					      dtab->map.numa_node);
	if (!dtab->netdev_map)
		goto free_percpu;

	return 0;

free_percpu:
	free_percpu(dtab->flush_list);
free_charge:
	bpf_map_charge_finish(&dtab->map.memory);
	return -ENOMEM;
}

static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
{
	struct bpf_dtab *dtab;
	int err;

	if (!capable(CAP_NET_ADMIN))
		return ERR_PTR(-EPERM);

	dtab = kzalloc(sizeof(*dtab), GFP_USER);
	if (!dtab)
		return ERR_PTR(-ENOMEM);

	err = dev_map_init_map(dtab, attr);
	if (err) {
		kfree(dtab);
		return ERR_PTR(err);
	}

	spin_lock(&dev_map_lock);
	list_add_tail_rcu(&dtab->list, &dev_map_list);
	spin_unlock(&dev_map_lock);

	return &dtab->map;
}

static void dev_map_free(struct bpf_map *map)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	int i, cpu;

	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
	 * so the programs (can be more than one that used this map) were
	 * disconnected from events. Wait for outstanding critical sections in
	 * these programs to complete. The rcu critical section only guarantees
	 * no further reads against netdev_map. It does __not__ ensure pending
	 * flush operations (if any) are complete.
	 */

	spin_lock(&dev_map_lock);
	list_del_rcu(&dtab->list);
	spin_unlock(&dev_map_lock);

	bpf_clear_redirect_map(map);
	synchronize_rcu();

	/* Make sure prior calls to __dev_map_entry_free() have completed. */
	rcu_barrier();

	/* To ensure all pending flush operations have completed, wait for the
	 * flush list to empty on _all_ cpus.
	 * Because the above synchronize_rcu() ensures the map is disconnected
	 * from the program, we can assume no new items will be added.
	 */
	for_each_online_cpu(cpu) {
		struct list_head *flush_list = per_cpu_ptr(dtab->flush_list, cpu);

		while (!list_empty(flush_list))
			cond_resched();
	}

	for (i = 0; i < dtab->map.max_entries; i++) {
		struct bpf_dtab_netdev *dev;

		dev = dtab->netdev_map[i];
		if (!dev)
			continue;

		free_percpu(dev->bulkq);
		dev_put(dev->dev);
		kfree(dev);
	}

	free_percpu(dtab->flush_list);
	bpf_map_area_free(dtab->netdev_map);
	kfree(dtab);
}

static int dev_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	u32 index = key ? *(u32 *)key : U32_MAX;
	u32 *next = next_key;

	if (index >= dtab->map.max_entries) {
		*next = 0;
		return 0;
	}

	if (index == dtab->map.max_entries - 1)
		return -ENOENT;
	*next = index + 1;
	return 0;
}

static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags,
		       bool in_napi_ctx)
{
	struct bpf_dtab_netdev *obj = bq->obj;
	struct net_device *dev = obj->dev;
	int sent = 0, drops = 0, err = 0;
	int i;

	if (unlikely(!bq->count))
		return 0;

	for (i = 0; i < bq->count; i++) {
		struct xdp_frame *xdpf = bq->q[i];

		prefetch(xdpf);
	}

	sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
	if (sent < 0) {
		err = sent;
		sent = 0;
		goto error;
	}
	drops = bq->count - sent;
out:
	bq->count = 0;

	trace_xdp_devmap_xmit(&obj->dtab->map, obj->idx,
			      sent, drops, bq->dev_rx, dev, err);
	bq->dev_rx = NULL;
	__list_del_clearprev(&bq->flush_node);
	return 0;
error:
	/* If ndo_xdp_xmit fails with an errno, no frames have been
	 * xmit'ed and it is our responsibility to free them all.
	 */
	for (i = 0; i < bq->count; i++) {
		struct xdp_frame *xdpf = bq->q[i];

		/* RX path under NAPI protection, can return frames faster */
		if (likely(in_napi_ctx))
			xdp_return_frame_rx_napi(xdpf);
		else
			xdp_return_frame(xdpf);
		drops++;
	}
	goto out;
}

/* __dev_map_flush is called from xdp_do_flush_map() which _must_ be signaled
 * from the driver before returning from its napi->poll() routine. The poll()
 * routine is called either from busy_poll context or net_rx_action signaled
 * from NET_RX_SOFTIRQ. Either way the poll routine must complete before the
 * net device can be torn down. On devmap tear down we ensure the flush list
 * is empty before completing to ensure all flush operations have completed.
 */
void __dev_map_flush(struct bpf_map *map)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct list_head *flush_list = this_cpu_ptr(dtab->flush_list);
	struct xdp_bulk_queue *bq, *tmp;

	rcu_read_lock();
	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
		bq_xmit_all(bq, XDP_XMIT_FLUSH, true);
	rcu_read_unlock();
}

/* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or
 * update happens in parallel here, a dev_put won't happen until after reading
 * the ifindex.
 */
struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *obj;

	if (key >= map->max_entries)
		return NULL;

	obj = READ_ONCE(dtab->netdev_map[key]);
	return obj;
}

/* Runs under RCU-read-side, plus in softirq under NAPI protection.
 * Thus, safe percpu variable access.
 */
static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf,
		      struct net_device *dev_rx)
{
	struct list_head *flush_list = this_cpu_ptr(obj->dtab->flush_list);
	struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq);

	if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
		bq_xmit_all(bq, 0, true);

	/* Ingress dev_rx will be the same for all xdp_frame's in
	 * bulk_queue, because bq is stored per-CPU and must be flushed
	 * from the net_device driver's NAPI func end.
	 */
	if (!bq->dev_rx)
		bq->dev_rx = dev_rx;

	bq->q[bq->count++] = xdpf;

	if (!bq->flush_node.prev)
		list_add(&bq->flush_node, flush_list);

	return 0;
}
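The amortization that bq_enqueue() and bq_xmit_all() implement can be sketched in standalone userspace C (illustrative only: `bulk_queue`, `bq_flush`, and the `xmit_calls` counter are hypothetical stand-ins, and the real code hands the batch to ndo_xdp_xmit() and handles partial sends):

```c
#include <stddef.h>

#define BULK_SIZE 16

struct bulk_queue {
	const void *q[BULK_SIZE];
	unsigned int count;
	unsigned int xmit_calls;  /* how many times we "hit the driver" */
};

/* Stand-in for bq_xmit_all(): hand the whole batch to the driver in
 * one call, then reset the queue. */
static void bq_flush(struct bulk_queue *bq)
{
	if (!bq->count)
		return;
	bq->xmit_calls++;   /* one driver call covers up to 16 frames */
	bq->count = 0;
}

/* Stand-in for bq_enqueue(): flush only when the queue is full, so the
 * cost of the per-call driver overhead is amortized over the batch. */
static void bq_enqueue(struct bulk_queue *bq, const void *frame)
{
	if (bq->count == BULK_SIZE)
		bq_flush(bq);
	bq->q[bq->count++] = frame;
}
```

Enqueuing 40 frames and then flushing costs only three driver calls instead of forty, which is the whole point of the per-CPU bulk queue.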

int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp,
		    struct net_device *dev_rx)
{
	struct net_device *dev = dst->dev;
	struct xdp_frame *xdpf;
	int err;

	if (!dev->netdev_ops->ndo_xdp_xmit)
		return -EOPNOTSUPP;

	err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
	if (unlikely(err))
		return err;

	xdpf = convert_to_xdp_frame(xdp);
	if (unlikely(!xdpf))
		return -EOVERFLOW;

	return bq_enqueue(dst, xdpf, dev_rx);
}

int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
			     struct bpf_prog *xdp_prog)
{
	int err;

	err = xdp_ok_fwd_dev(dst->dev, skb->len);
	if (unlikely(err))
		return err;
	skb->dev = dst->dev;
	generic_xdp_tx(skb, xdp_prog);

	return 0;
}

static void *dev_map_lookup_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab_netdev *obj = __dev_map_lookup_elem(map, *(u32 *)key);
	struct net_device *dev = obj ? obj->dev : NULL;

	return dev ? &dev->ifindex : NULL;
}

static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
{
	if (dev->dev->netdev_ops->ndo_xdp_xmit) {
		struct xdp_bulk_queue *bq;
		int cpu;

		rcu_read_lock();
		for_each_online_cpu(cpu) {
			bq = per_cpu_ptr(dev->bulkq, cpu);
			bq_xmit_all(bq, XDP_XMIT_FLUSH, false);
		}
		rcu_read_unlock();
	}
}

static void __dev_map_entry_free(struct rcu_head *rcu)
{
	struct bpf_dtab_netdev *dev;

	dev = container_of(rcu, struct bpf_dtab_netdev, rcu);
	dev_map_flush_old(dev);
	free_percpu(dev->bulkq);
	dev_put(dev->dev);
	kfree(dev);
}

static int dev_map_delete_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *old_dev;
	int k = *(u32 *)key;

	if (k >= map->max_entries)
		return -EINVAL;

	/* Use call_rcu() here to ensure any rcu critical sections have
	 * completed, but this does not guarantee a flush has happened
	 * yet, because driver side rcu_read_lock/unlock only protects the
	 * running XDP program. However, for pending flush operations the
	 * dev and ctx are stored in another per cpu map. And additionally,
	 * the driver tear down ensures all soft irqs are complete before
	 * removing the net device in the case of dev_put equals zero.
	 */
	old_dev = xchg(&dtab->netdev_map[k], NULL);
	if (old_dev)
		call_rcu(&old_dev->rcu, __dev_map_entry_free);
	return 0;
}
|
|
|
|
|
2019-07-27 00:06:53 +08:00
|
|
|
static struct bpf_dtab_netdev *__dev_map_alloc_node(struct net *net,
						    struct bpf_dtab *dtab,
						    u32 ifindex,
						    unsigned int idx)
{
	gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
	struct bpf_dtab_netdev *dev;
	struct xdp_bulk_queue *bq;
	int cpu;

	dev = kmalloc_node(sizeof(*dev), gfp, dtab->map.numa_node);
	if (!dev)
		return ERR_PTR(-ENOMEM);

	dev->bulkq = __alloc_percpu_gfp(sizeof(*dev->bulkq),
					sizeof(void *), gfp);
	if (!dev->bulkq) {
		kfree(dev);
		return ERR_PTR(-ENOMEM);
	}

	for_each_possible_cpu(cpu) {
		bq = per_cpu_ptr(dev->bulkq, cpu);
		bq->obj = dev;
	}

	dev->dev = dev_get_by_index(net, ifindex);
	if (!dev->dev) {
		free_percpu(dev->bulkq);
		kfree(dev);
		return ERR_PTR(-EINVAL);
	}

	dev->idx = idx;
	dev->dtab = dtab;

	return dev;
}

static int __dev_map_update_elem(struct net *net, struct bpf_map *map,
				 void *key, void *value, u64 map_flags)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *dev, *old_dev;
	u32 ifindex = *(u32 *)value;
	u32 i = *(u32 *)key;

	if (unlikely(map_flags > BPF_EXIST))
		return -EINVAL;
	if (unlikely(i >= dtab->map.max_entries))
		return -E2BIG;
	if (unlikely(map_flags == BPF_NOEXIST))
		return -EEXIST;

	if (!ifindex) {
		dev = NULL;
	} else {
		dev = __dev_map_alloc_node(net, dtab, ifindex, i);
		if (IS_ERR(dev))
			return PTR_ERR(dev);
	}

	/* Use call_rcu() here to ensure rcu critical sections have
	 * completed, remembering that the driver side flush operation
	 * will happen before the net device is removed.
	 */
	old_dev = xchg(&dtab->netdev_map[i], dev);
	if (old_dev)
		call_rcu(&old_dev->rcu, __dev_map_entry_free);

	return 0;
}

static int dev_map_update_elem(struct bpf_map *map, void *key, void *value,
			       u64 map_flags)
{
	return __dev_map_update_elem(current->nsproxy->net_ns,
				     map, key, value, map_flags);
}

const struct bpf_map_ops dev_map_ops = {
	.map_alloc = dev_map_alloc,
	.map_free = dev_map_free,
	.map_get_next_key = dev_map_get_next_key,
	.map_lookup_elem = dev_map_lookup_elem,
	.map_update_elem = dev_map_update_elem,
	.map_delete_elem = dev_map_delete_elem,
	.map_check_btf = map_check_no_btf,
};

static int dev_map_notification(struct notifier_block *notifier,
				ulong event, void *ptr)
{
	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
	struct bpf_dtab *dtab;
	int i;

	switch (event) {
	case NETDEV_UNREGISTER:
		/* This rcu_read_lock/unlock pair is needed because
		 * dev_map_list is an RCU list AND to ensure a delete
		 * operation does not free a netdev_map entry while we
		 * are comparing it against the netdev being unregistered.
		 */
		rcu_read_lock();
		list_for_each_entry_rcu(dtab, &dev_map_list, list) {
			for (i = 0; i < dtab->map.max_entries; i++) {
				struct bpf_dtab_netdev *dev, *odev;

				dev = READ_ONCE(dtab->netdev_map[i]);
				if (!dev || netdev != dev->dev)
					continue;
				odev = cmpxchg(&dtab->netdev_map[i], dev, NULL);
				if (dev == odev)
					call_rcu(&dev->rcu,
						 __dev_map_entry_free);
			}
		}
		rcu_read_unlock();
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block dev_map_notifier = {
	.notifier_call = dev_map_notification,
};

static int __init dev_map_init(void)
{
	/* Assure tracepoint shadow struct _bpf_dtab_netdev is in sync */
	BUILD_BUG_ON(offsetof(struct bpf_dtab_netdev, dev) !=
		     offsetof(struct _bpf_dtab_netdev, dev));
	register_netdevice_notifier(&dev_map_notifier);
	return 0;
}

subsys_initcall(dev_map_init);