License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- File already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top level
  COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL family license was found in the file or it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation by lawyers
working with the Linux Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
sent earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
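For illustration, a sketch (not taken from the original changelog) of the comment forms such a script emits, following the kernel's usual SPDX conventions for the different file types:

    // SPDX-License-Identifier: GPL-2.0                              (source .c file)
    /* SPDX-License-Identifier: GPL-2.0 */                           (header file)
    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */   (uapi header)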
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0

#include <linux/linkage.h>
#include <linux/errno.h>

#include <asm/unistd.h>

#ifdef CONFIG_ARCH_HAS_SYSCALL_WRAPPER
/* Architectures may override COND_SYSCALL and COND_SYSCALL_COMPAT */
#include <asm/syscall_wrapper.h>
#endif /* CONFIG_ARCH_HAS_SYSCALL_WRAPPER */

/* we can't #include <linux/syscalls.h> here,
   but tell gcc to not warn with -Wmissing-prototypes */
asmlinkage long sys_ni_syscall(void);

/*
 * Non-implemented system calls get redirected here.
 */
asmlinkage long sys_ni_syscall(void)
{
        return -ENOSYS;
}

#ifndef COND_SYSCALL
#define COND_SYSCALL(name) cond_syscall(sys_##name)
#endif /* COND_SYSCALL */

#ifndef COND_SYSCALL_COMPAT
#define COND_SYSCALL_COMPAT(name) cond_syscall(compat_sys_##name)
#endif /* COND_SYSCALL_COMPAT */

/*
 * This list is kept in the same order as include/uapi/asm-generic/unistd.h.
 * Architecture specific entries go below, followed by deprecated or obsolete
 * system calls.
 */

COND_SYSCALL(io_setup);
COND_SYSCALL_COMPAT(io_setup);
COND_SYSCALL(io_destroy);
COND_SYSCALL(io_submit);
COND_SYSCALL_COMPAT(io_submit);
COND_SYSCALL(io_cancel);
COND_SYSCALL(io_getevents_time32);
COND_SYSCALL(io_getevents);
COND_SYSCALL(io_pgetevents_time32);
aio: implement io_pgetevents
This is the io_getevents equivalent of ppoll/pselect and allows one to
properly mix signals and aio completions (especially with IOCB_CMD_POLL);
it atomically executes the following sequence:
sigset_t origmask;
pthread_sigmask(SIG_SETMASK, &sigmask, &origmask);
ret = io_getevents(ctx, min_nr, nr, events, timeout);
pthread_sigmask(SIG_SETMASK, &origmask, NULL);
Note that unlike many other signal related calls we do not pass a sigmask
size, as that would get us to 7 arguments, which aren't easily supported
by the syscall infrastructure. It seems a lot less painful to just add a
new syscall variant in the unlikely case we're going to increase the
sigset size.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
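A minimal usage sketch (illustrative, not part of the original commit message), assuming the raw syscall numbers are exposed by <sys/syscall.h> and struct __aio_sigset by <linux/aio_abi.h>:

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/aio_abi.h>   /* aio_context_t, struct io_event, struct __aio_sigset */

    int main(void)
    {
            aio_context_t ctx = 0;

            if (syscall(__NR_io_setup, 8, &ctx) < 0)
                    return 1;

            sigset_t mask;
            sigfillset(&mask);                    /* block all signals while waiting */

            struct __aio_sigset ksig = {
                    .sigmask    = &mask,
                    .sigsetsize = _NSIG / 8,      /* kernel sigset size, not sizeof(sigset_t) */
            };
            struct io_event ev[1];
            struct timespec ts = { 0, 0 };        /* poll only, return immediately */

            /* one call does the sigmask swap + io_getevents + sigmask restore atomically */
            long n = syscall(__NR_io_pgetevents, ctx, 0, 1, ev, &ts, &ksig);
            printf("io_pgetevents returned %ld\n", n);

            syscall(__NR_io_destroy, ctx);
            return 0;
    }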
COND_SYSCALL(io_pgetevents);
COND_SYSCALL_COMPAT(io_pgetevents_time32);
COND_SYSCALL_COMPAT(io_pgetevents);
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
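A bare-bones sketch of the two new calls (illustrative only; raw syscall numbers are used instead of a helper library such as liburing):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/io_uring.h>   /* struct io_uring_params, IORING_ENTER_GETEVENTS */

    int main(void)
    {
            struct io_uring_params p;
            memset(&p, 0, sizeof(p));

            /* ask for an 8-entry SQ; the kernel fills p with ring sizes and mmap offsets */
            int fd = syscall(__NR_io_uring_setup, 8, &p);
            if (fd < 0)
                    return 1;

            /* nothing submitted yet: GETEVENTS with min_complete == 0 returns immediately */
            int ret = syscall(__NR_io_uring_enter, fd, 0, 0, IORING_ENTER_GETEVENTS, NULL, 0);
            printf("io_uring_enter returned %d\n", ret);

            close(fd);
            return 0;
    }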
COND_SYSCALL(io_uring_setup);
COND_SYSCALL(io_uring_enter);
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
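For example, extending the setup sketch above (illustrative; assumes the same includes plus <sys/uio.h> and a valid ring fd from io_uring_setup()):

    /* pin one 64 KiB buffer for use with IORING_OP_{READ,WRITE}_FIXED */
    static char buf[65536];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };

    if (syscall(__NR_io_uring_register, fd, IORING_REGISTER_BUFFERS, &iov, 1) < 0)
            perror("io_uring_register");

    /* ... submit fixed-buffer IO, then drop the registration ... */
    syscall(__NR_io_uring_register, fd, IORING_UNREGISTER_BUFFERS, NULL, 0);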
COND_SYSCALL(io_uring_register);

/* fs/xattr.c */

/* fs/dcache.c */

/* fs/cookies.c */
COND_SYSCALL(lookup_dcookie);
COND_SYSCALL_COMPAT(lookup_dcookie);

/* fs/eventfd.c */
COND_SYSCALL(eventfd2);

/* fs/eventpoll.c */
COND_SYSCALL(epoll_create1);
COND_SYSCALL(epoll_ctl);
COND_SYSCALL(epoll_pwait);
COND_SYSCALL_COMPAT(epoll_pwait);
COND_SYSCALL(epoll_pwait2);
COND_SYSCALL_COMPAT(epoll_pwait2);

/* fs/fcntl.c */

/* fs/inotify_user.c */
COND_SYSCALL(inotify_init1);
COND_SYSCALL(inotify_add_watch);
COND_SYSCALL(inotify_rm_watch);

/* fs/ioctl.c */

/* fs/ioprio.c */
COND_SYSCALL(ioprio_set);
COND_SYSCALL(ioprio_get);

/* fs/locks.c */
COND_SYSCALL(flock);

/* fs/namei.c */

/* fs/namespace.c */

/* fs/nfsctl.c */

/* fs/open.c */

/* fs/pipe.c */

/* fs/quota.c */
COND_SYSCALL(quotactl);
COND_SYSCALL(quotactl_fd);

/* fs/readdir.c */

/* fs/read_write.c */

/* fs/sendfile.c */

/* fs/select.c */

/* fs/signalfd.c */
COND_SYSCALL(signalfd4);
COND_SYSCALL_COMPAT(signalfd4);

/* fs/splice.c */

/* fs/stat.c */

/* fs/sync.c */

/* fs/timerfd.c */
COND_SYSCALL(timerfd_create);
COND_SYSCALL(timerfd_settime);
COND_SYSCALL(timerfd_settime32);
COND_SYSCALL(timerfd_gettime);
COND_SYSCALL(timerfd_gettime32);

/* fs/utimes.c */

/* kernel/acct.c */
COND_SYSCALL(acct);

/* kernel/capability.c */
COND_SYSCALL(capget);
COND_SYSCALL(capset);

/* kernel/exec_domain.c */

/* kernel/exit.c */

/* kernel/fork.c */
/* __ARCH_WANT_SYS_CLONE3 */
COND_SYSCALL(clone3);

/* kernel/futex/syscalls.c */
COND_SYSCALL(futex);
COND_SYSCALL(futex_time32);
COND_SYSCALL(set_robust_list);
COND_SYSCALL_COMPAT(set_robust_list);
COND_SYSCALL(get_robust_list);
COND_SYSCALL_COMPAT(get_robust_list);
futex: Implement sys_futex_waitv()
Add support to wait on multiple futexes. This is the interface
implemented by this syscall:
futex_waitv(struct futex_waitv *waiters, unsigned int nr_futexes,
unsigned int flags, struct timespec *timeout, clockid_t clockid)
struct futex_waitv {
__u64 val;
__u64 uaddr;
__u32 flags;
__u32 __reserved;
};
Given an array of struct futex_waitv, wait on each uaddr. The thread
wakes if a futex_wake() is performed at any uaddr. The syscall returns
immediately if any waiter has *uaddr != val. *timeout is an optional
absolute timeout value for the operation. This syscall supports only
64bit sized timeout structs. The flags argument of the syscall should be
empty, but it can be used for future extensions. Flags for shared
futexes, sizes, etc. should be used on the individual flags of each
waiter.
__reserved is used for explicit padding and should be 0, but it might be
used for future extensions. If the userspace uses 32-bit pointers, it
should make sure to explicitly cast it when assigning to waitv::uaddr.
Returns the array index of one of the woken futexes. There’s no given
information of how many were woken, or any particular attribute of it
(if it’s the first woken, if it is of the smaller index...).
Signed-off-by: André Almeida <andrealmeid@collabora.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210923171111.300673-17-andrealmeid@collabora.com
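A usage sketch based on the signature and struct quoted above (illustrative; it assumes the uapi <linux/futex.h> exports struct futex_waitv and a FUTEX_32 per-waiter size flag, and that __NR_futex_waitv is known to <sys/syscall.h>):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>      /* struct futex_waitv, FUTEX_32 (assumed names) */

    int main(void)
    {
            uint32_t f1 = 0, f2 = 0;

            struct futex_waitv waiters[2] = {
                    { .val = 0, .uaddr = (uintptr_t)&f1, .flags = FUTEX_32 },
                    { .val = 0, .uaddr = (uintptr_t)&f2, .flags = FUTEX_32 },
            };

            struct timespec to;
            clock_gettime(CLOCK_MONOTONIC, &to);
            to.tv_sec += 1;       /* 64-bit absolute timeout, as required */

            /* blocks until a futex_wake() on f1 or f2, or the timeout; returns the index */
            long idx = syscall(__NR_futex_waitv, waiters, 2, 0, &to, CLOCK_MONOTONIC);
            printf("futex_waitv returned %ld\n", idx);
            return 0;
    }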
COND_SYSCALL(futex_waitv);

/* kernel/hrtimer.c */

/* kernel/itimer.c */

/* kernel/kexec.c */
COND_SYSCALL(kexec_load);
COND_SYSCALL_COMPAT(kexec_load);

/* kernel/module.c */
COND_SYSCALL(init_module);
COND_SYSCALL(delete_module);

/* kernel/posix-timers.c */

/* kernel/printk.c */
COND_SYSCALL(syslog);

/* kernel/ptrace.c */

/* kernel/sched/core.c */

/* kernel/sys.c */
COND_SYSCALL(setregid);
COND_SYSCALL(setgid);
COND_SYSCALL(setreuid);
COND_SYSCALL(setuid);
COND_SYSCALL(setresuid);
COND_SYSCALL(getresuid);
COND_SYSCALL(setresgid);
COND_SYSCALL(getresgid);
COND_SYSCALL(setfsuid);
COND_SYSCALL(setfsgid);
COND_SYSCALL(setgroups);
COND_SYSCALL(getgroups);

/* kernel/time.c */

/* kernel/timer.c */

/* ipc/mqueue.c */
COND_SYSCALL(mq_open);
COND_SYSCALL_COMPAT(mq_open);
COND_SYSCALL(mq_unlink);
COND_SYSCALL(mq_timedsend);
COND_SYSCALL(mq_timedsend_time32);
COND_SYSCALL(mq_timedreceive);
COND_SYSCALL(mq_timedreceive_time32);
COND_SYSCALL(mq_notify);
COND_SYSCALL_COMPAT(mq_notify);
COND_SYSCALL(mq_getsetattr);
COND_SYSCALL_COMPAT(mq_getsetattr);

/* ipc/msg.c */
COND_SYSCALL(msgget);
ipc: rename old-style shmctl/semctl/msgctl syscalls
The behavior of these system calls is slightly different between
architectures, as determined by the CONFIG_ARCH_WANT_IPC_PARSE_VERSION
symbol. Most architectures that implement the split IPC syscalls don't set
that symbol and only get the modern version, but alpha, arm, microblaze,
mips-n32, mips-n64 and xtensa expect the caller to pass the IPC_64 flag.
For the architectures that so far only implement sys_ipc(), i.e. m68k,
mips-o32, powerpc, s390, sh, sparc, and x86-32, we want the new behavior
when adding the split syscalls, so we need to distinguish between the
two groups of architectures.
The method I picked for this distinction is to have a separate system call
entry point: sys_old_*ctl() now uses ipc_parse_version, while sys_*ctl()
does not. The system call tables of the five architectures are changed
accordingly.
As an additional benefit, we no longer need the configuration specific
definition for ipc_parse_version(), it always does the same thing now,
but simply won't get called on architectures with the modern interface.
A small downside is that on architectures that do set
ARCH_WANT_IPC_PARSE_VERSION, we now have an extra set of entry points
that are never called. They only add a few bytes of bloat, so it seems
better to keep them compared to adding yet another Kconfig symbol.
I considered adding new syscall numbers for the IPC_64 variants for
consistency, but decided against that for now.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
COND_SYSCALL(old_msgctl);
COND_SYSCALL(msgctl);
COND_SYSCALL_COMPAT(msgctl);
COND_SYSCALL_COMPAT(old_msgctl);
COND_SYSCALL(msgrcv);
COND_SYSCALL_COMPAT(msgrcv);
COND_SYSCALL(msgsnd);
COND_SYSCALL_COMPAT(msgsnd);

/* ipc/sem.c */
COND_SYSCALL(semget);
COND_SYSCALL(old_semctl);
COND_SYSCALL(semctl);
COND_SYSCALL_COMPAT(semctl);
COND_SYSCALL_COMPAT(old_semctl);
COND_SYSCALL(semtimedop);
COND_SYSCALL(semtimedop_time32);
COND_SYSCALL(semop);

/* ipc/shm.c */
COND_SYSCALL(shmget);
COND_SYSCALL(old_shmctl);
COND_SYSCALL(shmctl);
COND_SYSCALL_COMPAT(shmctl);
COND_SYSCALL_COMPAT(old_shmctl);
COND_SYSCALL(shmat);
COND_SYSCALL_COMPAT(shmat);
COND_SYSCALL(shmdt);

/* net/socket.c */
COND_SYSCALL(socket);
COND_SYSCALL(socketpair);
COND_SYSCALL(bind);
COND_SYSCALL(listen);
COND_SYSCALL(accept);
COND_SYSCALL(connect);
COND_SYSCALL(getsockname);
COND_SYSCALL(getpeername);
COND_SYSCALL(setsockopt);
COND_SYSCALL_COMPAT(setsockopt);
COND_SYSCALL(getsockopt);
COND_SYSCALL_COMPAT(getsockopt);
COND_SYSCALL(sendto);
COND_SYSCALL(shutdown);
COND_SYSCALL(recvfrom);
COND_SYSCALL_COMPAT(recvfrom);
COND_SYSCALL(sendmsg);
COND_SYSCALL_COMPAT(sendmsg);
COND_SYSCALL(recvmsg);
COND_SYSCALL_COMPAT(recvmsg);

/* mm/filemap.c */

/* mm/nommu.c, also with MMU */
COND_SYSCALL(mremap);

/* security/keys/keyctl.c */
COND_SYSCALL(add_key);
COND_SYSCALL(request_key);
COND_SYSCALL(keyctl);
COND_SYSCALL_COMPAT(keyctl);

/* security/landlock/syscalls.c */
COND_SYSCALL(landlock_create_ruleset);
COND_SYSCALL(landlock_add_rule);
COND_SYSCALL(landlock_restrict_self);

/* arch/example/kernel/sys_example.c */

/* mm/fadvise.c */
COND_SYSCALL(fadvise64_64);

/* mm/, CONFIG_MMU only */
COND_SYSCALL(swapon);
COND_SYSCALL(swapoff);
COND_SYSCALL(mprotect);
COND_SYSCALL(msync);
COND_SYSCALL(mlock);
COND_SYSCALL(munlock);
COND_SYSCALL(mlockall);
COND_SYSCALL(munlockall);
COND_SYSCALL(mincore);
COND_SYSCALL(madvise);
mm/madvise: introduce process_madvise() syscall: an external memory hinting API
There is a use case where System Management Software (SMS) wants to give a
memory hint like MADV_[COLD|PAGEOUT] to other processes; in the
case of Android, that software is the ActivityManagerService.
The information required to make the reclaim decision is not known to the
app. Instead, it is known to the centralized userspace
daemon(ActivityManagerService), and that daemon must be able to initiate
reclaim on its own without any app involvement.
To solve the issue, this patch introduces a new syscall
process_madvise(2). It uses the pidfd of an external process to give the
hint. It also supports vector address ranges because an Android app has
thousands of vmas due to zygote, so it's a total waste of CPU and power if
we have to call the syscall one by one for each vma. (With testing, a
2000-vma syscall vs a 1-vector syscall showed a 15% performance improvement;
I think it would be bigger in real practice because the testing ran in a
very cache-friendly environment.)
Another potential use case for the vector range is to amortize the cost
of TLB shootdowns for multiple ranges when using MADV_DONTNEED; this could
benefit users like TCP receive zerocopy and malloc implementations. In
future, we could find more use cases for other advice values, so let's
make it happen as an API since we are introducing a new syscall at this
moment. With that, an existing madvise(2) user could replace it with
process_madvise(2) with their own pid if they want the batch address
range support.
Since it could affect another process's address range, only a privileged
process (PTRACE_MODE_ATTACH_FSCREDS) or something else (e.g., being the
same UID) that gives it the right to ptrace the target process could use
it successfully.
The flag argument is reserved for future use if we need to extend the API.
I think supporting all hints madvise has/will supported/support to
process_madvise is rather risky. Because we are not sure all hints make
sense from external process and implementation for the hint may rely on
the caller being in the current context so it could be error-prone. Thus,
I just limited hints as MADV_[COLD|PAGEOUT] in this patch.
If someone want to add other hints, we could hear the usecase and review
it for each hint. It's safer for maintenance rather than introducing a
buggy syscall but hard to fix it later.
So finally, the API is as follows,
ssize_t process_madvise(int pidfd, const struct iovec *iovec,
unsigned long vlen, int advice, unsigned int flags);
DESCRIPTION
The process_madvise() system call is used to give advice or directions
to the kernel about the address ranges from external process as well as
local process. It provides the advice to address ranges of process
described by iovec and vlen. The goal of such advice is to improve
system or application performance.
The pidfd selects the process referred to by the PID file descriptor
specified in pidfd. (See pidfd_open(2) for further information.)
The pointer iovec points to an array of iovec structures, defined in
<sys/uio.h> as:
struct iovec {
void *iov_base; /* starting address */
size_t iov_len; /* number of bytes to be advised */
};
The iovec describes address ranges beginning at address(iov_base)
and with size length of bytes(iov_len).
The vlen represents the number of elements in iovec.
The advice is indicated in the advice argument, which is one of the
following at this moment if the target process specified by pidfd is
external.
MADV_COLD
MADV_PAGEOUT
Permission to provide a hint to external process is governed by a
ptrace access mode PTRACE_MODE_ATTACH_FSCREDS check; see ptrace(2).
The process_madvise supports every advice madvise(2) has if target
process is in same thread group with calling process so user could
use process_madvise(2) to extend existing madvise(2) to support
vector address ranges.
RETURN VALUE
On success, process_madvise() returns the number of bytes advised.
This return value may be less than the total number of requested
bytes, if an error occurred. The caller should check return value
to determine whether a partial advice occurred.
FAQ:
Q.1 - Why does any external entity have better knowledge?
Quote from Sandeep
"For Android, every application (including the special SystemServer)
are forked from Zygote. The reason of course is to share as many
libraries and classes between the two as possible to benefit from the
preloading during boot.
After applications start, (almost) all of the APIs end up calling into
this SystemServer process over IPC (binder) and back to the
application.
In a fully running system, the SystemServer monitors every single
process periodically to calculate their PSS / RSS and also decides
which process is "important" to the user for interactivity.
So, because of how these processes start _and_ the fact that the
SystemServer is looping to monitor each process, it does tend to *know*
which address range of the application is not used / useful.
Besides, we can never rely on applications to clean things up
themselves. We've had the "hey app1, the system is low on memory,
please trim your memory usage down" notifications for a long time[1].
They rely on applications honoring the broadcasts and very few do.
So, if we want to avoid the inevitable killing of the application and
restarting it, some way to be able to tell the OS about unimportant
memory in these applications will be useful.
- ssp
Q.2 - How to guarantee the race(i.e., object validation) between when
giving a hint from an external process and get the hint from the target
process?
process_madvise operates on the target process's address space as it
exists at the instant that process_madvise is called. If the
target process can run between the time the calling process
inspects the target process address space and the time that
process_madvise is actually called, process_madvise may operate on
memory regions that the calling process does not expect. It's the
responsibility of the process calling process_madvise to close this
race condition. For example, the calling process can suspend the
target process with ptrace, SIGSTOP, or the freezer cgroup so that it
doesn't have an opportunity to change its own address space before
process_madvise is called. Another option is to operate on memory
regions that the caller knows a priori will be unchanged in the target
process. Yet another option is to accept the race for certain
process_madvise calls after reasoning that mistargeting will do no
harm. The suggested API itself does not provide synchronization. It
also applies to other APIs like move_pages and process_vm_writev.
The race isn't really a problem though. Why is it so wrong to require
that callers do their own synchronization in some manner? Nobody
objects to write(2) merely because it's possible for two processes to
open the same file and clobber each other's writes --- instead, we tell
people to use flock or something. Think about mmap. It never
guarantees newly allocated address space is still valid when the user
tries to access it because other threads could unmap the memory right
before. That's where we need synchronization by using other API or
design from userside. It shouldn't be part of API itself. If someone
needs more fine-grained synchronization rather than process level,
there were two ideas suggested - cookie[2] and anon-fd[3]. Both are
applicable via using last reserved argument of the API but I don't
think it's necessary right now since we have already ways to prevent
the race so don't want to add additional complexity with more
fine-grained optimization model.
To make the API extend, it reserved an unsigned long as last argument
so we could support it in future if someone really needs it.
Q.3 - Why doesn't ptrace work?
Injecting an madvise in the target process using ptrace would not work
for us because such injected madvise would have to be executed by the
target process, which means that process would have to be runnable and
that creates the risk of the abovementioned race and hinting a wrong
VMA. Furthermore, we want to act the hint in caller's context, not the
callee's, because the callee is usually limited in cpuset/cgroups or
even freezed state so they can't act by themselves quick enough, which
causes more thrashing/kill. It doesn't work if the target process are
ptraced(e.g., strace, debugger, minidump) because a process can have at
most one ptracer.
[1] https://developer.android.com/topic/performance/memory"
[2] process_getinfo for getting the cookie which is updated whenever
vma of process address layout are changed - Daniel Colascione -
https://lore.kernel.org/lkml/20190520035254.57579-1-minchan@kernel.org/T/#m7694416fd179b2066a2c62b5b139b14e3894e224
[3] anonymous fd which is used for the object(i.e., address range)
validation - Michal Hocko -
https://lore.kernel.org/lkml/20200120112722.GY18451@dhcp22.suse.cz/
[minchan@kernel.org: fix process_madvise build break for arm64]
Link: http://lkml.kernel.org/r/20200303145756.GA219683@google.com
[minchan@kernel.org: fix build error for mips of process_madvise]
Link: http://lkml.kernel.org/r/20200508052517.GA197378@google.com
[akpm@linux-foundation.org: fix patch ordering issue]
[akpm@linux-foundation.org: fix arm64 whoops]
[minchan@kernel.org: make process_madvise() vlen arg have type size_t, per Florian]
[akpm@linux-foundation.org: fix i386 build]
[sfr@canb.auug.org.au: fix syscall numbering]
Link: https://lkml.kernel.org/r/20200905142639.49fc3f1a@canb.auug.org.au
[sfr@canb.auug.org.au: madvise.c needs compat.h]
Link: https://lkml.kernel.org/r/20200908204547.285646b4@canb.auug.org.au
[minchan@kernel.org: fix mips build]
Link: https://lkml.kernel.org/r/20200909173655.GC2435453@google.com
[yuehaibing@huawei.com: remove duplicate header which is included twice]
Link: https://lkml.kernel.org/r/20200915121550.30584-1-yuehaibing@huawei.com
[minchan@kernel.org: do not use helper functions for process_madvise]
Link: https://lkml.kernel.org/r/20200921175539.GB387368@google.com
[akpm@linux-foundation.org: pidfd_get_pid() gained an argument]
[sfr@canb.auug.org.au: fix up for "iov_iter: transparently handle compat iovecs in import_iovec"]
Link: https://lkml.kernel.org/r/20200928212542.468e1fef@canb.auug.org.au
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Brian Geffon <bgeffon@google.com>
Cc: Christian Brauner <christian@brauner.io>
Cc: Daniel Colascione <dancol@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Dias <joaodias@google.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: SeongJae Park <sj38.park@gmail.com>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Tim Murray <timmurray@google.com>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Cc: <linux-man@vger.kernel.org>
Link: http://lkml.kernel.org/r/20200302193630.68771-3-minchan@kernel.org
Link: http://lkml.kernel.org/r/20200508183320.GA125527@google.com
Link: http://lkml.kernel.org/r/20200622192900.22757-4-minchan@kernel.org
Link: https://lkml.kernel.org/r/20200901000633.1920247-4-minchan@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
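A sketch of the call as described above, targeting the caller's own pidfd so no ptrace capability is needed (illustrative; constant and syscall-number availability depends on the kernel headers and libc in use):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #ifndef MADV_COLD
    #define MADV_COLD 20          /* uapi value, in case the libc headers lack it */
    #endif

    int main(void)
    {
            size_t len = 16 * 4096;
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED)
                    return 1;

            int pidfd = syscall(__NR_pidfd_open, getpid(), 0);
            struct iovec iov = { .iov_base = buf, .iov_len = len };

            /* hint that this range of our own process is cold */
            ssize_t ret = syscall(__NR_process_madvise, pidfd, &iov, 1, MADV_COLD, 0);
            printf("process_madvise advised %zd bytes\n", ret);
            return 0;
    }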
COND_SYSCALL(process_madvise);
COND_SYSCALL(process_mrelease);
COND_SYSCALL(remap_file_pages);
COND_SYSCALL(mbind);
COND_SYSCALL(get_mempolicy);
COND_SYSCALL(set_mempolicy);
COND_SYSCALL(migrate_pages);
COND_SYSCALL(move_pages);

COND_SYSCALL(perf_event_open);

COND_SYSCALL(accept4);

COND_SYSCALL(recvmmsg);
y2038: socket: Add compat_sys_recvmmsg_time64
recvmmsg() takes two arguments to pointers of structures that differ
between 32-bit and 64-bit architectures: mmsghdr and timespec.
For y2038 compatibility, we are changing the native system call from
timespec to __kernel_timespec with a 64-bit time_t (in another patch),
and use the existing compat system call on both 32-bit and 64-bit
architectures for compatibility with traditional 32-bit user space.
As we now have two variants of recvmmsg() for 32-bit tasks that are both
different from the variant that we use on 64-bit tasks, this means we
also require two compat system calls!
The solution I picked is to flip things around: The existing
compat_sys_recvmmsg() call gets moved from net/compat.c into net/socket.c
and now handles the case for old user space on all architectures that
have set CONFIG_COMPAT_32BIT_TIME. A new compat_sys_recvmmsg_time64()
call gets added in the old place for 64-bit architectures only, this
one handles the case of a compat mmsghdr structure combined with
__kernel_timespec.
In the indirect sys_socketcall(), we now need to call either
do_sys_recvmmsg() or __compat_sys_recvmmsg(), depending on what kind of
architecture we are on. For compat_sys_socketcall(), no such change is
needed, we always call __compat_sys_recvmmsg().
I decided to not add a new SYS_RECVMMSG_TIME64 socketcall: Any libc
implementation for 64-bit time_t will need significant changes including
an updated asm/unistd.h, and it seems better to consistently use the
separate syscalls in that configuration, leaving the socketcall only for
backward compatibility with 32-bit time_t based libc.
The naming is asymmetric for the moment, so both existing syscall
entry points keep their names, while the new ones are recvmmsg_time32
and compat_recvmmsg_time64 respectively. I expect that we will rename
the compat syscalls later as we start using generated syscall tables
everywhere and add these entry points.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
COND_SYSCALL(recvmmsg_time32);
COND_SYSCALL_COMPAT(recvmmsg_time32);
COND_SYSCALL_COMPAT(recvmmsg_time64);

/*
 * Architecture specific syscalls: see further below
 */

/* fanotify */
COND_SYSCALL(fanotify_init);
COND_SYSCALL(fanotify_mark);

/* open by handle */
COND_SYSCALL(name_to_handle_at);
COND_SYSCALL(open_by_handle_at);
COND_SYSCALL_COMPAT(open_by_handle_at);

COND_SYSCALL(sendmmsg);
COND_SYSCALL_COMPAT(sendmmsg);
COND_SYSCALL(process_vm_readv);
COND_SYSCALL_COMPAT(process_vm_readv);
COND_SYSCALL(process_vm_writev);
COND_SYSCALL_COMPAT(process_vm_writev);

/* compare kernel pointers */
COND_SYSCALL(kcmp);

COND_SYSCALL(finit_module);

/* operate on Secure Computing state */
COND_SYSCALL(seccomp);

COND_SYSCALL(memfd_create);

/* access BPF programs and maps */
COND_SYSCALL(bpf);
syscalls: implement execveat() system call
This patchset adds execveat(2) for x86, and is derived from Meredydd
Luff's patch from Sept 2012 (https://lkml.org/lkml/2012/9/11/528).
The primary aim of adding an execveat syscall is to allow an
implementation of fexecve(3) that does not rely on the /proc filesystem,
at least for executables (rather than scripts). The current glibc version
of fexecve(3) is implemented via /proc, which causes problems in sandboxed
or otherwise restricted environments.
Given the desire for a /proc-free fexecve() implementation, HPA suggested
(https://lkml.org/lkml/2006/7/11/556) that an execveat(2) syscall would be
an appropriate generalization.
Also, having a new syscall means that it can take a flags argument without
back-compatibility concerns. The current implementation just defines the
AT_EMPTY_PATH and AT_SYMLINK_NOFOLLOW flags, but other flags could be
added in future -- for example, flags for new namespaces (as suggested at
https://lkml.org/lkml/2006/7/11/474).
Related history:
- https://lkml.org/lkml/2006/12/27/123 is an example of someone
realizing that fexecve() is likely to fail in a chroot environment.
- http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=514043 covered
documenting the /proc requirement of fexecve(3) in its manpage, to
"prevent other people from wasting their time".
- https://bugzilla.redhat.com/show_bug.cgi?id=241609 described a
problem where a process that did setuid() could not fexecve()
because it no longer had access to /proc/self/fd; this has since
been fixed.
This patch (of 4):
Add a new execveat(2) system call. execveat() is to execve() as openat()
is to open(): it takes a file descriptor that refers to a directory, and
resolves the filename relative to that.
In addition, if the filename is empty and AT_EMPTY_PATH is specified,
execveat() executes the file to which the file descriptor refers. This
replicates the functionality of fexecve(), which is a system call in other
UNIXen, but in Linux glibc it depends on opening "/proc/self/fd/<fd>" (and
so relies on /proc being mounted).
The filename fed to the executed program as argv[0] (or the name of the
script fed to a script interpreter) will be of the form "/dev/fd/<fd>"
(for an empty filename) or "/dev/fd/<fd>/<filename>", effectively
reflecting how the executable was found. This does however mean that
execution of a script in a /proc-less environment won't work; also, script
execution via an O_CLOEXEC file descriptor fails (as the file will not be
accessible after exec).
Based on patches by Meredydd Luff.
Signed-off-by: David Drysdale <drysdale@google.com>
Cc: Meredydd Luff <meredydd@senatehouse.org>
Cc: Shuah Khan <shuah.kh@samsung.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Rich Felker <dalias@aerifal.cx>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
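For instance, a /proc-free fexecve(3) can be sketched like this (illustrative; the raw syscall number is assumed to be available):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
            int fd = open("/bin/echo", O_RDONLY);
            if (fd < 0)
                    return 1;

            char *argv[] = { "echo", "hello from execveat", NULL };
            char *envp[] = { NULL };

            /* empty filename + AT_EMPTY_PATH: execute the file the fd refers to */
            syscall(__NR_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
            return 1;             /* only reached if the exec failed */
    }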
/* execveat */
COND_SYSCALL(execveat);
sys_membarrier(): system-wide memory barrier (generic, x86)
Here is an implementation of a new system call, sys_membarrier(), which
executes a memory barrier on all threads running on the system. It is
implemented by calling synchronize_sched(). It can be used to
distribute the cost of user-space memory barriers asymmetrically by
transforming pairs of memory barriers into pairs consisting of
sys_membarrier() and a compiler barrier. For synchronization primitives
that distinguish between read-side and write-side (e.g. userspace RCU
[1], rwlocks), the read-side can be accelerated significantly by moving
the bulk of the memory barrier overhead to the write-side.
The existing applications of which I am aware that would be improved by
this system call are as follows:
* Through Userspace RCU library (http://urcu.so)
- DNS server (Knot DNS) https://www.knot-dns.cz/
- Network sniffer (http://netsniff-ng.org/)
- Distributed object storage (https://sheepdog.github.io/sheepdog/)
- User-space tracing (http://lttng.org)
- Network storage system (https://www.gluster.org/)
- Virtual routers (https://events.linuxfoundation.org/sites/events/files/slides/DPDK_RCU_0MQ.pdf)
- Financial software (https://lkml.org/lkml/2015/3/23/189)
Those projects use RCU in userspace to increase read-side speed and
scalability compared to locking. Especially in the case of RCU used by
libraries, sys_membarrier can speed up the read-side by moving the bulk of
the memory barrier cost to synchronize_rcu().
* Direct users of sys_membarrier
- core dotnet garbage collector (https://github.com/dotnet/coreclr/issues/198)
Microsoft core dotnet GC developers are planning to use the mprotect()
side-effect of issuing memory barriers through IPIs as a way to implement
Windows FlushProcessWriteBuffers() on Linux. They are referring to
sys_membarrier in their github thread, specifically stating that
sys_membarrier() is what they are looking for.
To explain the benefit of this scheme, let's introduce two example threads:
Thread A (non-frequent, e.g. executing liburcu synchronize_rcu())
Thread B (frequent, e.g. executing liburcu
rcu_read_lock()/rcu_read_unlock())
In a scheme where all smp_mb() in thread A are ordering memory accesses
with respect to smp_mb() present in Thread B, we can change each
smp_mb() within Thread A into calls to sys_membarrier() and each
smp_mb() within Thread B into compiler barriers "barrier()".
Before the change, we had, for each smp_mb() pair:

        Thread A                  Thread B
        previous mem accesses     previous mem accesses
        smp_mb()                  smp_mb()
        following mem accesses    following mem accesses
After the change, these pairs become:

        Thread A                  Thread B
        prev mem accesses         prev mem accesses
        sys_membarrier()          barrier()
        follow mem accesses       follow mem accesses
As we can see, there are two possible scenarios: either Thread B memory
accesses do not happen concurrently with Thread A accesses (1), or they
do (2).
1) Non-concurrent Thread A vs Thread B accesses:

        Thread A                  Thread B
        prev mem accesses
        sys_membarrier()
        follow mem accesses
                                  prev mem accesses
                                  barrier()
                                  follow mem accesses
In this case, thread B accesses will be weakly ordered. This is OK,
because at that point, thread A is not particularly interested in
ordering them with respect to its own accesses.
2) Concurrent Thread A vs Thread B accesses:

        Thread A                  Thread B
        prev mem accesses         prev mem accesses
        sys_membarrier()          barrier()
        follow mem accesses       follow mem accesses
In this case, thread B accesses, which are ensured to be in program
order thanks to the compiler barrier, will be "upgraded" to full
smp_mb() by synchronize_sched().
* Benchmarks
On Intel Xeon E5405 (8 cores)
(one thread is calling sys_membarrier, the other 7 threads are busy
looping)
1000 non-expedited sys_membarrier calls in 33s = 33 milliseconds/call.
* User-space user of this system call: Userspace RCU library
Both the signal-based and the sys_membarrier userspace RCU schemes
permit us to remove the memory barrier from the userspace RCU
rcu_read_lock() and rcu_read_unlock() primitives, thus significantly
accelerating them. These memory barriers are replaced by compiler
barriers on the read-side, and all matching memory barriers on the
write-side are turned into an invocation of a memory barrier on all
active threads in the process. By letting the kernel perform this
synchronization rather than dumbly sending a signal to every process
threads (as we currently do), we diminish the number of unnecessary wake
ups and only issue the memory barriers on active threads. Non-running
threads do not need to execute such barrier anyway, because these are
implied by the scheduler context switches.
Results in liburcu:
Operations in 10s, 6 readers, 2 writers:
memory barriers in reader: 1701557485 reads, 2202847 writes
signal-based scheme: 9830061167 reads, 6700 writes
sys_membarrier: 9952759104 reads, 425 writes
sys_membarrier (dyn. check): 7970328887 reads, 425 writes
The dynamic sys_membarrier availability check adds some overhead to
the read-side compared to the signal-based scheme, but besides that,
sys_membarrier slightly outperforms the signal-based scheme. However,
this non-expedited sys_membarrier implementation has a much slower grace
period than signal and memory barrier schemes.
Besides diminishing the number of wake-ups, one major advantage of the
membarrier system call over the signal-based scheme is that it does not
need to reserve a signal. This plays much more nicely with libraries,
and with processes injected into for tracing purposes, for which we
cannot expect that signals will be unused by the application.
An expedited version of this system call can be added later on to speed
up the grace period. Its implementation will likely depend on reading
the cpu_curr()->mm without holding each CPU's rq lock.
This patch adds the system call to x86 and to asm-generic.
[1] http://urcu.so
membarrier(2) man page:
MEMBARRIER(2) Linux Programmer's Manual MEMBARRIER(2)
NAME
membarrier - issue memory barriers on a set of threads
SYNOPSIS
#include <linux/membarrier.h>
int membarrier(int cmd, int flags);
DESCRIPTION
The cmd argument is one of the following:
MEMBARRIER_CMD_QUERY
Query the set of supported commands. It returns a bitmask of
supported commands.
MEMBARRIER_CMD_SHARED
Execute a memory barrier on all threads running on the system.
Upon return from system call, the caller thread is ensured that
all running threads have passed through a state where all memory
accesses to user-space addresses match program order between
entry to and return from the system call (non-running threads
are de facto in such a state). This covers threads from all
processes running on the system. This command returns 0.
The flags argument needs to be 0; it is reserved for future extensions.
All memory accesses performed in program order from each targeted
thread are guaranteed to be ordered with respect to sys_membarrier(). If
we use the semantic "barrier()" to represent a compiler barrier forcing
memory accesses to be performed in program order across the barrier,
and smp_mb() to represent explicit memory barriers forcing full memory
ordering across the barrier, we have the following ordering table for
each pair of barrier(), sys_membarrier() and smp_mb():
The pair ordering is detailed as (O: ordered, X: not ordered):
                     barrier()   smp_mb()   sys_membarrier()
barrier()                X           X             O
smp_mb()                 X           O             O
sys_membarrier()         O           O             O
RETURN VALUE
On success, these system calls return zero. On error, -1 is returned,
and errno is set appropriately. For a given command, with flags
argument set to 0, this system call is guaranteed to always return the
same value until reboot.
ERRORS
ENOSYS System call is not implemented.
EINVAL Invalid arguments.
Linux 2015-04-15 MEMBARRIER(2)
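As a usage illustration of the interface documented above (a sketch, not
part of the man page), a program would typically query the supported
commands before issuing the system-wide barrier; calling through syscall(2)
is assumed here because libc provides no membarrier() wrapper:
  #define _GNU_SOURCE
  #include <linux/membarrier.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  int main(void)
  {
          long mask = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

          if (mask < 0) {
                  perror("membarrier");   /* e.g. ENOSYS on an older kernel */
                  return 1;
          }
          printf("supported membarrier commands: %#lx\n", mask);
          if (mask & MEMBARRIER_CMD_SHARED)
                  syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0);
          return 0;
  }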
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Nicholas Miell <nmiell@comcast.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Pranith Kumar <bobby.prani@gmail.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-09-12 04:07:39 +08:00
|
|
|
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(userfaultfd);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* membarrier */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(membarrier);
|
2016-09-13 04:38:42 +08:00
|
|
|
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(mlock2);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(copy_file_range);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
2016-09-13 04:38:42 +08:00
|
|
|
/* memory protection keys */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(pkey_mprotect);
|
|
|
|
COND_SYSCALL(pkey_alloc);
|
|
|
|
COND_SYSCALL(pkey_free);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
mm: introduce memfd_secret system call to create "secret" memory areas
Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.
The secretmem feature is off by default and the user must explicitly
enable it at boot time.
Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls from this file descriptor will be unmapped from the kernel
direct map and they will be only mapped in the page table of the processes
that have access to the file descriptor.
Secretmem is designed to provide the following protections:
* Enhanced protection (in conjunction with all the other in-kernel
attack prevention systems) against ROP attacks. Secretmem makes
"simple" ROP insufficient to perform exfiltration, which increases the
required complexity of the attack. Along with other protections like
the kernel stack size limit and address space layout randomization, which
make finding gadgets really hard, the absence of any in-kernel primitive
for accessing secret memory means that a one-gadget ROP attack can't work.
Since the only way to access secret memory is to reconstruct the missing
mapping entry, the attacker has to recover the physical page and insert
a PTE pointing to it in the kernel and then retrieve the contents. That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.
* Prevent cross-process secret userspace memory exposures. Once the
secret memory is allocated, the user can't accidentally pass it into the
kernel to be transmitted somewhere. The secretmem pages cannot be
accessed via the direct map and they are disallowed in GUP.
* Harden against exploited kernel flaws. In order to access secretmem,
a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new privileged userspace process to perform
secrets exfiltration using ptrace.
The file descriptor based memory has several advantages over the
"traditional" mm interfaces, such as mlock(), mprotect(), madvise(). File
descriptor approach allows explicit and controlled sharing of the memory
areas, it allows to seal the operations. Besides, file descriptor based
memory paves the way for VMMs to remove the secret memory range from the
userspace hipervisor process, for instance QEMU. Andy Lutomirski says:
"Getting fd-backed memory into a guest will take some possibly major
work in the kernel, but getting vma-backed memory into a guest without
mapping it in the host user address space seems much, much worse."
memfd_secret() is made a dedicated system call rather than an extension to
memfd_create() because its purpose is to allow the user to create more
secure memory mappings rather than to simply allow file-based access to
the memory. Nowadays the cost of a new system call is negligible, while it
is way simpler for userspace to deal with a clear-cut system call than with
a multiplexer or an overloaded syscall. Moreover, the initial
implementation of memfd_secret() is completely distinct from
memfd_create(), so there is not much sense in overloading memfd_create() to
begin with. If a need for code sharing between these implementations
arises, it can easily be achieved without a need to adjust user-visible
APIs.
The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.
Once there is a use case that requires exposing secretmem to the kernel,
it will be an opt-in request in the system call flags, so that the user
has to decide what data can be exposed to the kernel.
Removing pages from the direct map may cause fragmentation of the direct
map on architectures that use large pages to map the physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to
have secretmem disabled by default with the ability of a system
administrator to enable it at boot time.
Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.
Since the secretmem mappings are locked in memory, they cannot exceed
RLIMIT_MEMLOCK. Because these mappings are already locked independently
of mlock(), an attempt to mlock()/munlock() a secretmem range would fail,
and mlockall()/munlockall() will ignore secretmem mappings.
However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of
secretmem and it poses no real problem in combination with
ZONE_MOVABLE/CMA, but in the future this should be addressed to allow
balanced use of large amounts of secretmem along with ZONE_MOVABLE/CMA.
A page that was a part of the secret memory area is cleared when it is
freed to ensure the data is not exposed to the next user of that page.
The following example demonstrates creation of a secret mapping (error
handling is omitted):
  fd = memfd_secret(0);
  ftruncate(fd, MAP_SIZE);
  ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
             MAP_SHARED, fd, 0);
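A more complete, self-contained sketch of the same sequence (illustrative
only: going through syscall(2) with __NR_memfd_secret from recent kernel
headers is an assumption, as libc provides no wrapper, and the call fails
unless secretmem was enabled at boot):
  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <string.h>
  #include <stdio.h>

  #define MAP_SIZE 4096UL

  int main(void)
  {
          int fd = syscall(__NR_memfd_secret, 0);

          if (fd < 0) {
                  perror("memfd_secret"); /* ENOSYS, or secretmem not enabled */
                  return 1;
          }
          if (ftruncate(fd, MAP_SIZE) < 0) {
                  perror("ftruncate");
                  return 1;
          }
          /* The mapping is locked and accounted against RLIMIT_MEMLOCK. */
          char *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
          if (ptr == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          strcpy(ptr, "visible only in this process' page tables");
          printf("%s\n", ptr);
          munmap(ptr, MAP_SIZE);
          close(fd);
          return 0;
  }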
[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
[akpm@linux-foundation.org: suppress Kconfig whine]
Link: https://lkml.kernel.org/r/20210518072034.31572-5-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: kernel test robot <lkp@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-07-08 09:08:03 +08:00
|
|
|
/* memfd_secret */
|
|
|
|
COND_SYSCALL(memfd_secret);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Architecture specific weak syscall entries.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* pciconfig: alpha, arm, arm64, ia64, sparc */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(pciconfig_read);
|
|
|
|
COND_SYSCALL(pciconfig_write);
|
|
|
|
COND_SYSCALL(pciconfig_iobase);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* sys_socketcall: arm, mips, x86, ... */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(socketcall);
|
|
|
|
COND_SYSCALL_COMPAT(socketcall);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* compat syscalls for arm64, x86, ... */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL_COMPAT(fanotify_mark);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* x86 */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(vm86old);
|
|
|
|
COND_SYSCALL(modify_ldt);
|
|
|
|
COND_SYSCALL(vm86);
|
|
|
|
COND_SYSCALL(kexec_file_load);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* s390 */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(s390_pci_mmio_read);
|
|
|
|
COND_SYSCALL(s390_pci_mmio_write);
|
2019-01-16 21:15:20 +08:00
|
|
|
COND_SYSCALL(s390_ipc);
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL_COMPAT(s390_ipc);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* powerpc */
|
2018-05-02 21:20:48 +08:00
|
|
|
COND_SYSCALL(rtas);
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(spu_run);
|
|
|
|
COND_SYSCALL(spu_create);
|
|
|
|
COND_SYSCALL(subpage_prot);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Deprecated system calls which are still defined in
|
|
|
|
* include/uapi/asm-generic/unistd.h and wanted by >= 1 arch
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* __ARCH_WANT_SYSCALL_NO_FLAGS */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(epoll_create);
|
|
|
|
COND_SYSCALL(inotify_init);
|
|
|
|
COND_SYSCALL(eventfd);
|
|
|
|
COND_SYSCALL(signalfd);
|
|
|
|
COND_SYSCALL_COMPAT(signalfd);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* __ARCH_WANT_SYSCALL_OFF_T */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(fadvise64);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* __ARCH_WANT_SYSCALL_DEPRECATED */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(epoll_wait);
|
|
|
|
COND_SYSCALL(recv);
|
|
|
|
COND_SYSCALL_COMPAT(recv);
|
|
|
|
COND_SYSCALL(send);
|
|
|
|
COND_SYSCALL(uselib);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
2019-07-15 17:46:10 +08:00
|
|
|
/* optional: time32 */
|
|
|
|
COND_SYSCALL(time32);
|
|
|
|
COND_SYSCALL(stime32);
|
|
|
|
COND_SYSCALL(utime32);
|
|
|
|
COND_SYSCALL(adjtimex_time32);
|
|
|
|
COND_SYSCALL(sched_rr_get_interval_time32);
|
|
|
|
COND_SYSCALL(nanosleep_time32);
|
|
|
|
COND_SYSCALL(rt_sigtimedwait_time32);
|
|
|
|
COND_SYSCALL_COMPAT(rt_sigtimedwait_time32);
|
|
|
|
COND_SYSCALL(timer_settime32);
|
|
|
|
COND_SYSCALL(timer_gettime32);
|
|
|
|
COND_SYSCALL(clock_settime32);
|
|
|
|
COND_SYSCALL(clock_gettime32);
|
|
|
|
COND_SYSCALL(clock_getres_time32);
|
|
|
|
COND_SYSCALL(clock_nanosleep_time32);
|
|
|
|
COND_SYSCALL(utimes_time32);
|
|
|
|
COND_SYSCALL(futimesat_time32);
|
|
|
|
COND_SYSCALL(pselect6_time32);
|
|
|
|
COND_SYSCALL_COMPAT(pselect6_time32);
|
|
|
|
COND_SYSCALL(ppoll_time32);
|
|
|
|
COND_SYSCALL_COMPAT(ppoll_time32);
|
|
|
|
COND_SYSCALL(utimensat_time32);
|
|
|
|
COND_SYSCALL(clock_adjtime32);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The syscalls below are not found in include/uapi/asm-generic/unistd.h
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* obsolete: SGETMASK_SYSCALL */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(sgetmask);
|
|
|
|
COND_SYSCALL(ssetmask);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* obsolete: SYSFS_SYSCALL */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(sysfs);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* obsolete: __ARCH_WANT_SYS_IPC */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(ipc);
|
|
|
|
COND_SYSCALL_COMPAT(ipc);
|
2018-03-07 02:53:01 +08:00
|
|
|
|
|
|
|
/* obsolete: UID16 */
|
2018-03-05 02:06:35 +08:00
|
|
|
COND_SYSCALL(chown16);
|
|
|
|
COND_SYSCALL(fchown16);
|
|
|
|
COND_SYSCALL(getegid16);
|
|
|
|
COND_SYSCALL(geteuid16);
|
|
|
|
COND_SYSCALL(getgid16);
|
|
|
|
COND_SYSCALL(getgroups16);
|
|
|
|
COND_SYSCALL(getresgid16);
|
|
|
|
COND_SYSCALL(getresuid16);
|
|
|
|
COND_SYSCALL(getuid16);
|
|
|
|
COND_SYSCALL(lchown16);
|
|
|
|
COND_SYSCALL(setfsgid16);
|
|
|
|
COND_SYSCALL(setfsuid16);
|
|
|
|
COND_SYSCALL(setgid16);
|
|
|
|
COND_SYSCALL(setgroups16);
|
|
|
|
COND_SYSCALL(setregid16);
|
|
|
|
COND_SYSCALL(setresgid16);
|
|
|
|
COND_SYSCALL(setresuid16);
|
|
|
|
COND_SYSCALL(setreuid16);
|
|
|
|
COND_SYSCALL(setuid16);
|
rseq: Introduce restartable sequences system call
Expose a new system call allowing each thread to register one userspace
memory area to be used as an ABI between kernel and user-space for two
purposes: user-space restartable sequences and quick access to read the
current CPU number value from user-space.
* Restartable sequences (per-cpu atomics)
Restartable sequences allow user-space to perform update operations on
per-cpu data without requiring heavy-weight atomic operations.
The restartable critical sections (percpu atomics) work has been started
by Paul Turner and Andrew Hunter. It lets the kernel handle restart of
critical sections. [1] [2] The re-implementation proposed here brings a
few simplifications to the ABI which facilitate porting to other
architectures and speeds up the user-space fast path.
Here are benchmarks of various rseq use-cases.
Test hardware:
arm32: ARMv7 Processor rev 4 (v7l) "Cubietruck", 2-core
x86-64: Intel E5-2630 v3@2.40GHz, 16-core, hyperthreading
The following benchmarks were all performed on a single thread.
* Per-CPU statistic counter increment
                getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:                  344.0                31.4          11.0
x86-64:                  15.3                 2.0           7.7
* LTTng-UST: write event 32-bit header, 32-bit payload into tracer
per-cpu buffer
                getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:                 2502.0              2250.0           1.1
x86-64:                 117.4                98.0           1.2
* liburcu percpu: lock-unlock pair, dereference, read/compare word
                getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:                  751.0               128.5           5.8
x86-64:                  53.4                28.6           1.9
* jemalloc memory allocator adapted to use rseq
Using rseq with per-cpu memory pools in jemalloc at Facebook (based on
rseq 2016 implementation):
The production workload response time shows a 1-2% improvement in average
latency, and the P99 overall latency drops by 2-3%.
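For reference, the "getcpu+atomic" baseline in the tables above corresponds
to a pattern like the following sketch (an assumed reconstruction, not the
benchmark source): the per-cpu bucket is selected with sched_getcpu() and
then updated atomically, because the thread may migrate between the two
steps:
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdatomic.h>

  #define NR_CPUS_MAX 256 /* illustrative upper bound */

  static _Atomic unsigned long percpu_count[NR_CPUS_MAX];

  static void count_event(void)
  {
          int cpu = sched_getcpu();

          atomic_fetch_add_explicit(&percpu_count[cpu], 1,
                                    memory_order_relaxed);
  }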
* Reading the current CPU number
Speeding up reading the current CPU number on which the caller thread is
running is done by keeping the current CPU number up to date within the
cpu_id field of the memory area registered by the thread. This is done
by making scheduler preemption set the TIF_NOTIFY_RESUME flag on the
current thread. Upon return to user-space, a notify-resume handler
updates the current CPU value within the registered user-space memory
area. User-space can then read the current CPU number directly from
memory.
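A minimal user-space sketch of this mechanism follows (illustrative only:
the signature value is arbitrary, recent kernel headers are assumed for
struct rseq and __NR_rseq, and on systems where libc already registers rseq
for each thread this raw registration would fail with EBUSY):
  #define _GNU_SOURCE
  #include <linux/rseq.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  #define RSEQ_SIG 0x53053053     /* arbitrary signature, checked on abort */

  /* One registered area per thread; the uapi header enforces its alignment. */
  static __thread struct rseq rseq_area;

  int main(void)
  {
          if (syscall(__NR_rseq, &rseq_area, sizeof(rseq_area), 0, RSEQ_SIG)) {
                  perror("rseq");
                  return 1;
          }
          /* cpu_id is kept up to date by the kernel on return to user-space. */
          printf("running on cpu %u\n", rseq_area.cpu_id);
          return 0;
  }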
Keeping the current cpu id in a memory area shared between kernel and
user-space is an improvement over the current mechanisms available to read
the current CPU number, and has the following benefits over alternative
approaches:
- 35x speedup on ARM vs system call through glibc
- 20x speedup on x86 compared to calling glibc, which calls vdso
executing a "lsl" instruction,
- 14x speedup on x86 compared to inlined "lsl" instruction,
- Unlike vdso approaches, this cpu_id value can be read from inline
assembly, which makes it a useful building block for restartable
sequences.
- The approach of reading the cpu id through memory mapping shared
between kernel and user-space is portable (e.g. ARM), which is not the
case for the lsl-based x86 vdso.
On x86, yet another possible approach would be to use the gs segment
selector to point to user-space per-cpu data. This approach performs
similarly to the cpu id cache, but it has two disadvantages: it is
not portable, and it is incompatible with existing applications already
using the gs segment selector for other purposes.
Benchmarking various approaches for reading the current CPU number:
ARMv7 Processor rev 4 (v7l)
Machine model: Cubietruck
- Baseline (empty loop): 8.4 ns
- Read CPU from rseq cpu_id: 16.7 ns
- Read CPU from rseq cpu_id (lazy register): 19.8 ns
- glibc 2.19-0ubuntu6.6 getcpu: 301.8 ns
- getcpu system call: 234.9 ns
x86-64 Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz:
- Baseline (empty loop): 0.8 ns
- Read CPU from rseq cpu_id: 0.8 ns
- Read CPU from rseq cpu_id (lazy register): 0.8 ns
- Read using gs segment selector: 0.8 ns
- "lsl" inline assembly: 13.0 ns
- glibc 2.19-0ubuntu6 getcpu: 16.6 ns
- getcpu system call: 53.9 ns
- Speed (benchmark taken on v8 of patchset)
Running 10 runs of hackbench -l 100000 seems to indicate, contrary to
expectations, that enabling CONFIG_RSEQ slightly accelerates the
scheduler:
Configuration: 2 sockets * 8-core Intel(R) Xeon(R) CPU E5-2630 v3 @
2.40GHz (directly on hardware, hyperthreading disabled in BIOS, energy
saving disabled in BIOS, turboboost disabled in BIOS, cpuidle.off=1
kernel parameter), with a Linux v4.6 defconfig+localyesconfig,
restartable sequences series applied.
* CONFIG_RSEQ=n
avg.: 41.37 s
std.dev.: 0.36 s
* CONFIG_RSEQ=y
avg.: 40.46 s
std.dev.: 0.33 s
- Size
On x86-64, between CONFIG_RSEQ=n/y, the text size increase of vmlinux is
567 bytes, and the data size increase of vmlinux is 5696 bytes.
[1] https://lwn.net/Articles/650333/
[2] http://www.linuxplumbersconf.org/2013/ocw/system/presentations/1695/original/LPC%20-%20PerCpu%20Atomics.pdf
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Watson <davejwatson@fb.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Chris Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Andrew Hunter <ahh@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Maurer <bmaurer@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20151027235635.16059.11630.stgit@pjt-glaptop.roam.corp.google.com
Link: http://lkml.kernel.org/r/20150624222609.6116.86035.stgit@kitami.mtv.corp.google.com
Link: https://lkml.kernel.org/r/20180602124408.8430-3-mathieu.desnoyers@efficios.com
2018-06-02 20:43:54 +08:00
|
|
|
|
|
|
|
/* restartable sequence */
|
|
|
|
COND_SYSCALL(rseq);
|