linux/kernel/bpf
00442143a2 ("bpf: convert cgroup_bpf.progs to hlist") by Stanislav Fomichev, 2022-06-29:
This lets us reclaim some space to be used by the new cgroup LSM slots.

Before:
struct cgroup_bpf {
	struct bpf_prog_array *    effective[23];        /*     0   184 */
	/* --- cacheline 2 boundary (128 bytes) was 56 bytes ago --- */
	struct list_head           progs[23];            /*   184   368 */
	/* --- cacheline 8 boundary (512 bytes) was 40 bytes ago --- */
	u32                        flags[23];            /*   552    92 */

	/* XXX 4 bytes hole, try to pack */

	/* --- cacheline 10 boundary (640 bytes) was 8 bytes ago --- */
	struct list_head           storages;             /*   648    16 */
	struct bpf_prog_array *    inactive;             /*   664     8 */
	struct percpu_ref          refcnt;               /*   672    16 */
	struct work_struct         release_work;         /*   688    32 */

	/* size: 720, cachelines: 12, members: 7 */
	/* sum members: 716, holes: 1, sum holes: 4 */
	/* last cacheline: 16 bytes */
};

After:
struct cgroup_bpf {
	struct bpf_prog_array *    effective[23];        /*     0   184 */
	/* --- cacheline 2 boundary (128 bytes) was 56 bytes ago --- */
	struct hlist_head          progs[23];            /*   184   184 */
	/* --- cacheline 5 boundary (320 bytes) was 48 bytes ago --- */
	u8                         flags[23];            /*   368    23 */

	/* XXX 1 byte hole, try to pack */

	/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
	struct list_head           storages;             /*   392    16 */
	struct bpf_prog_array *    inactive;             /*   408     8 */
	struct percpu_ref          refcnt;               /*   416    16 */
	struct work_struct         release_work;         /*   432    72 */

	/* size: 504, cachelines: 8, members: 7 */
	/* sum members: 503, holes: 1, sum holes: 1 */
	/* last cacheline: 56 bytes */
};
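
Where the savings come from, per the dumps above: on 64-bit, a struct hlist_head is a single pointer while a struct list_head is two, so progs[23] shrinks from 368 to 184 bytes, and narrowing flags[] from u32 to u8 takes it from 92 to 23 bytes. Below is a minimal userspace sketch of that arithmetic, not the kernel code itself; it uses simplified stand-ins for the kernel list types and a hypothetical NR_SLOTS constant in place of the 23-entry MAX_CGROUP_BPF_ATTACH_TYPE seen above:

#include <stdio.h>

/* Simplified stand-ins for the kernel's list types (64-bit layout assumed). */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };	/* one pointer,  8 bytes */
struct list_head  { struct list_head *next, *prev; };	/* two pointers, 16 bytes */

#define NR_SLOTS 23	/* stand-in for MAX_CGROUP_BPF_ATTACH_TYPE */

int main(void)
{
	printf("progs as list_head[%d]:  %zu bytes\n", NR_SLOTS,
	       sizeof(struct list_head[NR_SLOTS]));	/* 368 */
	printf("progs as hlist_head[%d]: %zu bytes\n", NR_SLOTS,
	       sizeof(struct hlist_head[NR_SLOTS]));	/* 184 */
	printf("flags as u32[%d]:        %zu bytes\n", NR_SLOTS,
	       sizeof(unsigned int[NR_SLOTS]));		/*  92 */
	printf("flags as u8[%d]:         %zu bytes\n", NR_SLOTS,
	       sizeof(unsigned char[NR_SLOTS]));	/*  23 */
	return 0;
}

Compiled and run on a 64-bit host, this prints per-array sizes matching the field sizes pahole reports above.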

Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Reviewed-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20220628174314.1216643-3-sdf@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

File                    Committed                   Last commit
preload/                2022-03-07 22:19:32 -08:00  bpf: Remove redundant slash
arraymap.c              2022-05-11 18:16:54 -07:00  bpf: add bpf_map_lookup_percpu_elem for percpu map
bloom_filter.c          2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
bpf_inode_storage.c     2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
bpf_iter.c              2022-06-20 17:40:51 -07:00  bpf: Inline calls to bpf_loop when callback is known
bpf_local_storage.c     2022-04-19 17:55:45 -07:00  bpf: Fix usage of trace RCU in local storage.
bpf_lru_list.c          2021-02-10 15:54:26 -08:00  bpf_lru_list: Read double-checked variable once without lock
bpf_lru_list.h          2022-05-13 07:20:07 -07:00  printk: stop including cache.h from printk.h
bpf_lsm.c               2022-05-10 21:58:31 -07:00  bpf, x86: Attach a cookie to fentry/fexit/fmod_ret/lsm.
bpf_struct_ops_types.h  2021-11-01 14:10:00 -07:00  bpf: Add dummy BPF STRUCT_OPS for test purpose
bpf_struct_ops.c        2022-06-23 09:49:57 -07:00  bpf: Require only one of cong_avoid() and cong_control() from a TCP CC
bpf_task_storage.c      2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
btf.c                   2022-06-24 14:15:37 -07:00  bpf: Merge "types_are_compat" logic into relo_core.c
cgroup.c                2022-06-29 13:21:51 -07:00  bpf: convert cgroup_bpf.progs to hlist
core.c                  2022-06-21 18:52:04 +02:00  bpf, x64: Add predicate for bpf2bpf with tailcalls support in JIT
cpumap.c                2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
devmap.c                2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
disasm.c                2021-09-02 14:49:23 +02:00  bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause
disasm.h                2021-09-02 14:49:23 +02:00  bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause
dispatcher.c            2020-03-13 12:49:52 -07:00  bpf: Remove bpf_image tree
hashtab.c               2022-05-11 18:16:54 -07:00  bpf: add bpf_map_lookup_percpu_elem for percpu map
helpers.c               2022-06-17 16:00:51 +02:00  bpf: Fix non-static bpf_func_proto struct definitions
inode.c                 2022-02-10 23:31:51 +01:00  bpf: Convert bpf_preload.ko to use light skeleton.
Kconfig                 2022-04-20 16:52:58 -07:00  rcu: Make the TASKS_RCU Kconfig option be selected
link_iter.c             2022-05-10 11:20:45 -07:00  bpf: Add bpf_link iterator
local_storage.c         2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
lpm_trie.c              2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
Makefile                2022-05-10 11:20:45 -07:00  bpf: Add bpf_link iterator
map_in_map.c            2022-04-25 17:31:35 -07:00  bpf: Allow storing unreferenced kptr in map
map_in_map.h            2020-08-28 15:41:30 +02:00  bpf: Add map_meta_equal map ops
map_iter.c              2021-12-18 13:27:41 -08:00  bpf: Introduce MEM_RDONLY flag
mmap_unlock_work.h      2021-11-07 11:54:51 -08:00  bpf: Introduce helper bpf_find_vma
net_namespace.c         2021-12-29 20:03:05 -08:00  net: Add includes masked by netdevice.h including uapi/bpf.h
offload.c               2020-02-17 16:53:49 +01:00  bpf, offload: Replace bitwise AND by logical AND in bpf_prog_offload_info_fill
percpu_freelist.c       2022-06-11 14:25:35 -07:00  bpf: avoid grabbing spin_locks of all cpus when no free elems
percpu_freelist.h       2020-10-06 00:04:11 +02:00  bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI
prog_iter.c             2020-07-25 20:16:32 -07:00  bpf: Refactor bpf_iter_reg to have separate seq_info member
queue_stack_maps.c      2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
reuseport_array.c       2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
ringbuf.c               2022-05-23 14:31:28 -07:00  bpf: Dynptr support for ring buffers
stackmap.c              2022-04-26 11:35:21 -07:00  bpf: Compute map_btf_id during build time
syscall.c               2022-06-17 16:00:51 +02:00  bpf: Fix non-static bpf_func_proto struct definitions
sysfs_btf.c             2020-11-10 15:25:53 -08:00  bpf: Load and verify kernel module BTFs
task_iter.c             2022-04-11 21:14:34 +02:00  bpf: Remove redundant assignment to meta.seq in __task_seq_show()
tnum.c                  2021-06-01 13:34:15 +02:00  bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul
trampoline.c            2022-06-29 13:21:51 -07:00  bpf: add bpf_func_t and trampoline helpers
verifier.c              2022-06-24 16:50:39 +02:00  bpf: Fix for use-after-free bug in inline_bpf_loop