Add io_uring IO interface

The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.

IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring holds indices into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.

Two new system calls are added for this:

io_uring_setup(entries, params)
	Sets up an io_uring instance for doing async IO. On success,
	returns a file descriptor that the application can mmap to
	gain access to the SQ ring, CQ ring, and io_uring_sqes.

io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
	Initiates IO against the rings mapped to this fd, or waits for
	them to complete, or both. The behavior is controlled by the
	parameters passed in. If 'to_submit' is non-zero, we try to
	submit new IO. If IORING_ENTER_GETEVENTS is set, the kernel
	will wait for 'min_complete' events, if they aren't already
	available. It's valid to set IORING_ENTER_GETEVENTS and
	'min_complete' == 0 at the same time; this allows the kernel
	to return already completed events without waiting for them.
	This is useful only for polling, as for IRQ driven IO the
	application can just check the CQ ring without entering the
	kernel.

With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.

For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.

Each io_uring is backed by a workqueue, to support buffered async IO
as well. We only punt to an async context if the command would need
to wait for IO on the device side. Any data that can be accessed
directly in the page cache is handled inline. This avoids the usual
slowness of thread pools, since cached data is accessed as quickly as
with a synchronous interface.

Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
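As a rough illustration of the two system calls (a hypothetical, minimal
sketch rather than part of this patch; error handling, queue-full checks
and wrap-around handling are omitted), an application could set up a small
ring, submit one IORING_OP_NOP and reap its completion roughly like this:

#define _GNU_SOURCE
#include <linux/io_uring.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct io_uring_params p;
	unsigned *sq_tail, *sq_mask, *sq_array, *cq_head, *cq_tail, *cq_mask;
	struct io_uring_sqe *sqes, *sqe;
	struct io_uring_cqe *cqes;
	void *sq_ring, *cq_ring;
	unsigned tail, index;
	int fd, ret = 0;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, 4, &p);

	/* map the SQ ring, the SQE array, and the CQ ring */
	sq_ring = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(unsigned),
		       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		       fd, IORING_OFF_SQ_RING);
	sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		    fd, IORING_OFF_SQES);
	cq_ring = mmap(NULL, p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe),
		       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		       fd, IORING_OFF_CQ_RING);

	sq_tail  = sq_ring + p.sq_off.tail;
	sq_mask  = sq_ring + p.sq_off.ring_mask;
	sq_array = sq_ring + p.sq_off.array;
	cq_head  = cq_ring + p.cq_off.head;
	cq_tail  = cq_ring + p.cq_off.tail;
	cq_mask  = cq_ring + p.cq_off.ring_mask;
	cqes     = cq_ring + p.cq_off.cqes;

	/* fill one SQE and publish it by advancing the SQ tail */
	tail = *sq_tail;
	index = tail & *sq_mask;
	sqe = &sqes[index];
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_NOP;
	sqe->user_data = 0x42;
	sq_array[index] = index;
	__atomic_store_n(sq_tail, tail + 1, __ATOMIC_RELEASE);

	/* submit one SQE and wait for one completion in a single syscall */
	syscall(__NR_io_uring_enter, fd, 1, 1, IORING_ENTER_GETEVENTS, NULL, 0);

	/* consume the CQE and advance the CQ head */
	if (*cq_head != __atomic_load_n(cq_tail, __ATOMIC_ACQUIRE)) {
		struct io_uring_cqe *cqe = &cqes[*cq_head & *cq_mask];

		ret = cqe->res;		/* 0 on success for a NOP */
		__atomic_store_n(cq_head, *cq_head + 1, __ATOMIC_RELEASE);
	}
	return ret;
}

In practice the liburing library wraps this plumbing behind a much
smaller API.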
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side.
 *
 * After the application reads the CQ ring tail, it must use an
 * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
 * before writing the tail (using smp_load_acquire to read the tail will
 * do). It also needs a smp_mb() before updating CQ head (ordering the
 * entry load(s) with the head store), pairing with an implicit barrier
 * through a control-dependency in io_get_cqe (smp_store_release to
 * store head will do). Failure to do so could lead to reading invalid
 * CQ entries.
 *
 * Likewise, the application must use an appropriate smp_wmb() before
 * writing the SQ tail (ordering SQ entry stores with the tail store),
 * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
 * to store the tail will do). And it needs a barrier ordering the SQ
 * head load before writing new SQ entries (smp_load_acquire to read
 * head will do).
 *
 * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
 * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
 * updating the SQ tail; a full memory barrier smp_mb() is needed
 * between.
 *
 * Also see the examples in the liburing library:
 *
 *	git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
*/
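/*
 * A rough sketch (illustrative pseudo-code, not taken from liburing or the
 * kernel) of a userspace consumer following the rules above, where cq_head,
 * cq_tail, cq_mask, cqes, sq_tail and sq_flags are the mmap'ed ring fields:
 *
 *	head = *cq_head;
 *	while (head != smp_load_acquire(cq_tail)) {	// pairs with the kernel's tail store
 *		cqe = &cqes[head++ & cq_mask];
 *		handle(cqe->user_data, cqe->res);
 *	}
 *	smp_store_release(cq_head, head);		// orders entry loads before the head store
 *
 * and, with IORING_SETUP_SQPOLL, the submission side only enters the kernel
 * once the poll thread has gone to sleep:
 *
 *	smp_store_release(sq_tail, tail);
 *	smp_mb();
 *	if (READ_ONCE(*sq_flags) & IORING_SQ_NEED_WAKEUP)
 *		io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP, NULL, 0);
 */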
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <net/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>
#include <linux/bits.h>
#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <net/af_unix.h>
#include <net/scm.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/highmem.h>
#include <linux/fsnotify.h>
#include <linux/fadvise.h>
#include <linux/task_work.h>
#include <linux/io_uring.h>
#include <linux/audit.h>
#include <linux/security.h>
#include <asm/shmparam.h>
#define CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>
#include <uapi/linux/io_uring.h>
#include "io-wq.h"
#include "io_uring.h"
#include "opdef.h"
#include "refs.h"
#include "tctx.h"
#include "sqpoll.h"
#include "fdinfo.h"
#include "kbuf.h"
#include "rsrc.h"
#include "cancel.h"
#include "net.h"
#include "notif.h"

#include "timeout.h"
#include "poll.h"
#include "rw.h"
#include "alloc_cache.h"

#define IORING_MAX_ENTRIES 32768
#define IORING_MAX_CQ_ENTRIES (2 * IORING_MAX_ENTRIES)

#define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \
				 IORING_REGISTER_LAST + IORING_OP_LAST)

#define SQE_COMMON_FLAGS (IOSQE_FIXED_FILE | IOSQE_IO_LINK | \
			  IOSQE_IO_HARDLINK | IOSQE_ASYNC)

#define SQE_VALID_FLAGS	(SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
			IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)

#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
				REQ_F_ASYNC_DATA)

#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
				 IO_REQ_CLEAN_FLAGS)

#define IO_TCTX_REFS_CACHE_NR (1U << 10)

#define IO_COMPL_BATCH 32
#define IO_REQ_ALLOC_BATCH 8

enum {
	IO_CHECK_CQ_OVERFLOW_BIT,
	IO_CHECK_CQ_DROPPED_BIT,
};
enum {
	IO_EVENTFD_OP_SIGNAL_BIT,
	IO_EVENTFD_OP_FREE_BIT,
};
struct io_defer_entry {
	struct list_head	list;
	struct io_kiocb		*req;
	u32			seq;
};
/* requests with any of those set should undergo io_disarm_next() */
#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
					 struct task_struct *task,
					 bool cancel_all);

static void io_queue_sqe(struct io_kiocb *req);
struct kmem_cache *req_cachep;
static int __read_mostly sysctl_io_uring_disabled;
static int __read_mostly sysctl_io_uring_group = -1;
#ifdef CONFIG_SYSCTL
static struct ctl_table kernel_io_uring_disabled_table[] = {
|
|
|
|
{
|
|
|
|
.procname = "io_uring_disabled",
|
|
|
|
.data = &sysctl_io_uring_disabled,
|
|
|
|
.maxlen = sizeof(sysctl_io_uring_disabled),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = SYSCTL_ZERO,
|
|
|
|
.extra2 = SYSCTL_TWO,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "io_uring_group",
|
|
|
|
.data = &sysctl_io_uring_group,
|
|
|
|
.maxlen = sizeof(gid_t),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec,
|
|
|
|
},
|
|
|
|
{},
|
|
|
|
};
#endif
struct sock *io_uring_get_socket(struct file *file)
{
#if defined(CONFIG_UNIX)
	if (io_is_uring_fops(file)) {
		struct io_ring_ctx *ctx = file->private_data;

		return ctx->ring_sock->sk;
	}
#endif
	return NULL;
}
EXPORT_SYMBOL(io_uring_get_socket);
static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
{
	if (!wq_list_empty(&ctx->submit_state.compl_reqs) ||
	    ctx->submit_state.cqes_count)
		__io_submit_flush_completions(ctx);
}
static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
{
	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
}
/*
 * Reads the user-visible CQ tail rather than the kernel's cached value, so
 * the count matches what the application can observe in the ring.
 */
static inline unsigned int __io_cqring_events_user(struct io_ring_ctx *ctx)
{
	return READ_ONCE(ctx->rings->cq.tail) - READ_ONCE(ctx->rings->cq.head);
}
static bool io_match_linked(struct io_kiocb *head)
{
	struct io_kiocb *req;

	io_for_each_link(req, head) {
		if (req->flags & REQ_F_INFLIGHT)
			return true;
	}
	return false;
}
/*
 * As io_match_task() but protected against racing with linked timeouts.
 * User must not hold timeout_lock.
 */
bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
			bool cancel_all)
{
	bool matched;

	if (task && head->task != task)
		return false;
	if (cancel_all)
		return true;

	if (head->flags & REQ_F_LINK_TIMEOUT) {
		struct io_ring_ctx *ctx = head->ctx;

		/* protect against races with linked timeouts */
		spin_lock_irq(&ctx->timeout_lock);
		matched = io_match_linked(head);
		spin_unlock_irq(&ctx->timeout_lock);
	} else {
		matched = io_match_linked(head);
	}
	return matched;
}
static inline void req_fail_link_node(struct io_kiocb *req, int res)
{
	req_set_fail(req);
	io_req_set_res(req, res, 0);
}
|
|
|
|
|
2022-04-12 22:09:48 +08:00
|
|
|
static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
|
2021-08-27 17:46:09 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
|
|
|
|
|
2020-05-15 07:18:39 +08:00
|
|
|
complete(&ctx->ref_comp);
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_fallback_req_func(struct work_struct *work)
|
2021-08-10 03:18:07 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
|
|
|
|
fallback_work.work);
|
|
|
|
struct llist_node *node = llist_del_all(&ctx->fallback_llist);
|
|
|
|
struct io_kiocb *req, *tmp;
|
2023-03-27 23:38:15 +08:00
|
|
|
struct io_tw_state ts = { .locked = true, };
|
2021-08-10 03:18:07 +08:00
|
|
|
|
2023-12-03 23:37:53 +08:00
|
|
|
percpu_ref_get(&ctx->refs);
|
2023-01-17 00:48:59 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2022-06-25 18:52:59 +08:00
|
|
|
llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
|
2023-03-27 23:38:15 +08:00
|
|
|
req->io_task_work.func(req, &ts);
|
|
|
|
if (WARN_ON_ONCE(!ts.locked))
|
2023-01-17 00:48:59 +08:00
|
|
|
return;
|
|
|
|
io_submit_flush_completions(ctx);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2023-12-03 23:37:53 +08:00
|
|
|
percpu_ref_put(&ctx->refs);
|
2021-08-10 03:18:07 +08:00
|
|
|
}
|
|
|
|
|
2022-06-16 17:22:10 +08:00
|
|
|
static int io_alloc_hash_table(struct io_hash_table *table, unsigned bits)
|
|
|
|
{
|
|
|
|
unsigned hash_buckets = 1U << bits;
|
|
|
|
size_t hash_size = hash_buckets * sizeof(table->hbs[0]);
|
|
|
|
|
|
|
|
table->hbs = kmalloc(hash_size, GFP_KERNEL);
|
|
|
|
if (!table->hbs)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
table->hash_bits = bits;
|
|
|
|
init_hash_table(table, hash_buckets);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
2022-05-02 00:52:44 +08:00
|
|
|
int hash_bits;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
|
|
|
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
|
|
|
|
if (!ctx)
|
|
|
|
return NULL;
|
|
|
|
|
2022-05-02 00:52:44 +08:00
|
|
|
xa_init(&ctx->io_bl_xa);
|
|
|
|
|
2019-12-05 10:56:40 +08:00
|
|
|
/*
|
|
|
|
* Use 5 bits less than the max cq entries, that should give us around
|
2022-06-16 17:22:05 +08:00
|
|
|
* 32 entries per hash list if totally full and uniformly spread, but
|
|
|
|
* don't keep too many buckets, to avoid overconsuming memory.
|
2019-12-05 10:56:40 +08:00
|
|
|
*/
|
2022-06-16 17:22:05 +08:00
|
|
|
hash_bits = ilog2(p->cq_entries) - 5;
|
|
|
|
hash_bits = clamp(hash_bits, 1, 8);
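A quick worked example of the sizing above (cq_entries == 4096 is assumed here purely for illustration):

/* ilog2(4096) = 12, so hash_bits = 12 - 5 = 7, within the [1, 8] clamp    */
/* buckets = 1 << 7 = 128, i.e. 4096 / 128 = 32 entries per list when full */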
|
2022-06-16 17:22:10 +08:00
|
|
|
if (io_alloc_hash_table(&ctx->cancel_table, hash_bits))
|
2019-12-05 10:56:40 +08:00
|
|
|
goto err;
|
2022-06-16 17:22:12 +08:00
|
|
|
if (io_alloc_hash_table(&ctx->cancel_table_locked, hash_bits))
|
|
|
|
goto err;
|
2019-05-08 01:01:48 +08:00
|
|
|
if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
|
2022-07-16 01:45:01 +08:00
|
|
|
0, GFP_KERNEL))
|
2019-11-08 09:27:42 +08:00
|
|
|
goto err;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
|
|
|
ctx->flags = p->flags;
|
2020-09-04 02:12:41 +08:00
|
|
|
init_waitqueue_head(&ctx->sqo_sq_wait);
|
2020-09-15 01:16:23 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->sqd_list);
|
io_uring: add support for backlogged CQ ring
Currently we drop completion events, if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them, they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 02:31:17 +08:00
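To make the backpressure described above concrete, here is a hedged userspace sketch (using liburing, which is not part of this file) of reaping the CQ backlog when submission returns -EBUSY; the function name and structure are illustrative only.

#include <liburing.h>

/*
 * Illustrative only: if io_uring_submit() fails with -EBUSY, the CQ ring
 * already holds flushed backlog entries, so reap them before retrying.
 */
static int submit_with_backpressure(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int ret;

	for (;;) {
		ret = io_uring_submit(ring);
		if (ret != -EBUSY)
			return ret;

		/* consume whatever the kernel already flushed into the CQ ring */
		while (io_uring_peek_cqe(ring, &cqe) == 0) {
			/* ... handle cqe->user_data / cqe->res here ... */
			io_uring_cqe_seen(ring, cqe);
		}
	}
}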
|
|
|
INIT_LIST_HEAD(&ctx->cq_overflow_list);
|
2022-03-09 08:46:52 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->io_buffers_cache);
|
2023-11-28 07:47:04 +08:00
|
|
|
INIT_HLIST_HEAD(&ctx->io_buf_list);
|
2023-04-04 20:39:57 +08:00
|
|
|
io_alloc_cache_init(&ctx->rsrc_node_cache, IO_NODE_ALLOC_CACHE_MAX,
|
|
|
|
sizeof(struct io_rsrc_node));
|
|
|
|
io_alloc_cache_init(&ctx->apoll_cache, IO_ALLOC_CACHE_MAX,
|
|
|
|
sizeof(struct async_poll));
|
|
|
|
io_alloc_cache_init(&ctx->netmsg_cache, IO_ALLOC_CACHE_MAX,
|
|
|
|
sizeof(struct io_async_msghdr));
|
2020-05-15 07:18:39 +08:00
|
|
|
init_completion(&ctx->ref_comp);
|
2021-03-08 22:16:16 +08:00
|
|
|
xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
|
2019-01-08 01:46:33 +08:00
|
|
|
mutex_init(&ctx->uring_lock);
|
2021-06-15 06:37:28 +08:00
|
|
|
init_waitqueue_head(&ctx->cq_wait);
|
2023-01-09 22:46:08 +08:00
|
|
|
init_waitqueue_head(&ctx->poll_wq);
|
2023-04-13 22:28:08 +08:00
|
|
|
init_waitqueue_head(&ctx->rsrc_quiesce_wq);
|
2019-01-08 01:46:33 +08:00
|
|
|
spin_lock_init(&ctx->completion_lock);
|
2021-08-11 05:11:51 +08:00
|
|
|
spin_lock_init(&ctx->timeout_lock);
|
2021-09-25 04:59:49 +08:00
|
|
|
INIT_WQ_LIST(&ctx->iopoll_list);
|
2022-03-09 08:46:52 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->io_buffers_pages);
|
|
|
|
INIT_LIST_HEAD(&ctx->io_buffers_comp);
|
2019-04-07 11:51:27 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->defer_list);
|
2019-09-18 02:26:57 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->timeout_list);
|
2021-08-29 09:54:38 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->ltimeout_list);
|
2021-01-16 01:37:46 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->rsrc_ref_list);
|
2022-08-30 20:50:10 +08:00
|
|
|
init_llist_head(&ctx->work_llist);
|
2021-03-06 19:02:12 +08:00
|
|
|
INIT_LIST_HEAD(&ctx->tctx_list);
|
2021-09-25 04:59:47 +08:00
|
|
|
ctx->submit_state.free_list.next = NULL;
|
|
|
|
INIT_WQ_LIST(&ctx->locked_free_list);
|
2021-07-01 04:54:03 +08:00
|
|
|
INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
|
2021-09-25 04:59:44 +08:00
|
|
|
INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
|
2019-01-08 01:46:33 +08:00
|
|
|
return ctx;
|
2019-11-08 09:27:42 +08:00
|
|
|
err:
|
2022-06-16 17:22:10 +08:00
|
|
|
kfree(ctx->cancel_table.hbs);
|
2022-06-16 17:22:12 +08:00
|
|
|
kfree(ctx->cancel_table_locked.hbs);
|
2022-05-02 00:52:44 +08:00
|
|
|
kfree(ctx->io_bl);
|
|
|
|
xa_destroy(&ctx->io_bl_xa);
|
2019-11-08 09:27:42 +08:00
|
|
|
kfree(ctx);
|
|
|
|
return NULL;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2021-05-17 05:58:10 +08:00
|
|
|
static void io_account_cq_overflow(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_rings *r = ctx->rings;
|
|
|
|
|
|
|
|
WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
|
|
|
|
ctx->cq_extra--;
|
|
|
|
}
|
|
|
|
|
2020-07-14 04:37:15 +08:00
|
|
|
static bool req_need_defer(struct io_kiocb *req, u32 seq)
|
2019-10-11 11:42:58 +08:00
|
|
|
{
|
2020-07-09 23:43:27 +08:00
|
|
|
if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-11-08 23:09:12 +08:00
|
|
|
|
2021-05-17 05:58:10 +08:00
|
|
|
return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
|
2020-07-09 23:43:27 +08:00
|
|
|
}
|
2019-04-07 11:51:27 +08:00
|
|
|
|
2019-11-13 18:06:25 +08:00
|
|
|
return false;
|
2019-04-07 11:51:27 +08:00
|
|
|
}
|
|
|
|
|
2023-06-23 19:23:24 +08:00
|
|
|
static void io_clean_op(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (req->flags & REQ_F_BUFFER_SELECTED) {
|
|
|
|
spin_lock(&req->ctx->completion_lock);
|
|
|
|
io_put_kbuf_comp(req);
|
|
|
|
spin_unlock(&req->ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (req->flags & REQ_F_NEED_CLEANUP) {
|
|
|
|
const struct io_cold_def *def = &io_cold_defs[req->opcode];
|
|
|
|
|
|
|
|
if (def->cleanup)
|
|
|
|
def->cleanup(req);
|
|
|
|
}
|
|
|
|
if ((req->flags & REQ_F_POLLED) && req->apoll) {
|
|
|
|
kfree(req->apoll->double_poll);
|
|
|
|
kfree(req->apoll);
|
|
|
|
req->apoll = NULL;
|
|
|
|
}
|
|
|
|
if (req->flags & REQ_F_INFLIGHT) {
|
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
|
|
|
|
|
|
|
atomic_dec(&tctx->inflight_tracked);
|
|
|
|
}
|
|
|
|
if (req->flags & REQ_F_CREDS)
|
|
|
|
put_cred(req->creds);
|
|
|
|
if (req->flags & REQ_F_ASYNC_DATA) {
|
|
|
|
kfree(req->async_data);
|
|
|
|
req->async_data = NULL;
|
|
|
|
}
|
|
|
|
req->flags &= ~IO_REQ_CLEAN_FLAGS;
|
|
|
|
}
|
|
|
|
|
2022-06-02 13:57:02 +08:00
|
|
|
static inline void io_req_track_inflight(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (!(req->flags & REQ_F_INFLIGHT)) {
|
|
|
|
req->flags |= REQ_F_INFLIGHT;
|
2022-06-24 01:06:43 +08:00
|
|
|
atomic_inc(&req->task->io_uring->inflight_tracked);
|
2022-06-02 13:57:02 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-08-12 02:28:31 +08:00
|
|
|
static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 17:40:26 +08:00
|
|
|
if (WARN_ON_ONCE(!req->link))
|
|
|
|
return NULL;
|
|
|
|
|
2021-08-15 17:40:24 +08:00
|
|
|
req->flags &= ~REQ_F_ARM_LTIMEOUT;
|
|
|
|
req->flags |= REQ_F_LINK_TIMEOUT;
|
2021-08-12 02:28:31 +08:00
|
|
|
|
|
|
|
/* linked timeouts should have two refs once prep'ed */
|
2021-08-15 17:40:18 +08:00
|
|
|
io_req_set_refcount(req);
|
2021-08-15 17:40:24 +08:00
|
|
|
__io_req_set_refcount(req->link, 2);
|
|
|
|
return req->link;
|
2021-08-12 02:28:31 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 17:40:24 +08:00
|
|
|
if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
|
2021-08-12 02:28:31 +08:00
|
|
|
return NULL;
|
|
|
|
return __io_prep_linked_timeout(req);
|
|
|
|
}
|
|
|
|
|
2022-04-16 05:08:25 +08:00
|
|
|
static noinline void __io_arm_ltimeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
io_queue_linked_timeout(__io_prep_linked_timeout(req));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void io_arm_ltimeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
|
|
|
|
__io_arm_ltimeout(req);
|
|
|
|
}
|
|
|
|
|
2020-10-15 22:46:24 +08:00
|
|
|
static void io_prep_async_work(struct io_kiocb *req)
|
|
|
|
{
|
2023-01-12 22:44:10 +08:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2020-10-15 22:46:24 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-06-18 01:14:02 +08:00
|
|
|
if (!(req->flags & REQ_F_CREDS)) {
|
|
|
|
req->flags |= REQ_F_CREDS;
|
2021-06-18 01:14:01 +08:00
|
|
|
req->creds = get_current_cred();
|
2021-06-18 01:14:02 +08:00
|
|
|
}
|
2021-03-07 00:22:27 +08:00
|
|
|
|
2021-03-22 09:58:29 +08:00
|
|
|
req->work.list.next = NULL;
|
|
|
|
req->work.flags = 0;
|
2022-04-19 00:44:00 +08:00
|
|
|
req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
|
2020-10-22 23:47:16 +08:00
|
|
|
if (req->flags & REQ_F_FORCE_ASYNC)
|
|
|
|
req->work.flags |= IO_WQ_WORK_CONCURRENT;
|
|
|
|
|
2023-06-20 19:32:31 +08:00
|
|
|
if (req->file && !(req->flags & REQ_F_FIXED_FILE))
|
2023-06-20 19:32:32 +08:00
|
|
|
req->flags |= io_file_get_flags(req->file);
|
2022-07-21 23:06:47 +08:00
|
|
|
|
2023-04-11 19:06:01 +08:00
|
|
|
if (req->file && (req->flags & REQ_F_ISREG)) {
|
2023-03-08 00:47:20 +08:00
|
|
|
bool should_hash = def->hash_reg_file;
|
|
|
|
|
|
|
|
/* don't serialize this request if the fs doesn't need it */
|
|
|
|
if (should_hash && (req->file->f_flags & O_DIRECT) &&
|
|
|
|
(req->file->f_mode & FMODE_DIO_PARALLEL_WRITE))
|
|
|
|
should_hash = false;
|
|
|
|
if (should_hash || (ctx->flags & IORING_SETUP_IOPOLL))
|
2020-10-15 22:46:24 +08:00
|
|
|
io_wq_hash_work(&req->work, file_inode(req->file));
|
2021-04-01 22:38:34 +08:00
|
|
|
} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
|
2020-10-15 22:46:24 +08:00
|
|
|
if (def->unbound_nonreg_file)
|
|
|
|
req->work.flags |= IO_WQ_WORK_UNBOUND;
|
|
|
|
}
|
2019-10-24 21:25:42 +08:00
|
|
|
}
|
2020-01-28 07:34:48 +08:00
|
|
|
|
2020-06-30 00:18:43 +08:00
|
|
|
static void io_prep_async_link(struct io_kiocb *req)
|
2019-10-24 21:25:42 +08:00
|
|
|
{
|
2020-06-30 00:18:43 +08:00
|
|
|
struct io_kiocb *cur;
|
2019-09-10 23:15:04 +08:00
|
|
|
|
2021-07-26 21:14:31 +08:00
|
|
|
if (req->flags & REQ_F_LINK_TIMEOUT) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-11-23 09:45:35 +08:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-07-26 21:14:31 +08:00
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
2021-11-23 09:45:35 +08:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2021-07-26 21:14:31 +08:00
|
|
|
} else {
|
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
|
|
|
}
|
2019-10-24 21:25:42 +08:00
|
|
|
}
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
void io_queue_iowq(struct io_kiocb *req, struct io_tw_state *ts_dont_use)
|
2019-10-24 21:25:42 +08:00
|
|
|
{
|
2020-06-30 00:18:43 +08:00
|
|
|
struct io_kiocb *link = io_prep_linked_timeout(req);
|
2021-02-17 03:56:50 +08:00
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
2019-10-24 21:25:42 +08:00
|
|
|
|
2021-02-17 05:15:30 +08:00
|
|
|
BUG_ON(!tctx);
|
|
|
|
BUG_ON(!tctx->io_wq);
|
2019-10-24 21:25:42 +08:00
|
|
|
|
2020-06-30 00:18:43 +08:00
|
|
|
/* init ->work of the whole link before punting */
|
|
|
|
io_prep_async_link(req);
|
2021-07-24 01:53:54 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Not expected to happen, but if we do have a bug where this _can_
|
|
|
|
* happen, catch it here and ensure the request is marked as
|
|
|
|
* canceled. That will make io-wq go through the usual work cancel
|
|
|
|
* procedure rather than attempt to run this request (or create a new
|
|
|
|
* worker for it).
|
|
|
|
*/
|
|
|
|
if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
|
|
|
|
req->work.flags |= IO_WQ_WORK_CANCEL;
|
|
|
|
|
2022-06-16 20:57:20 +08:00
|
|
|
trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
|
2021-03-02 02:20:47 +08:00
|
|
|
io_wq_enqueue(tctx->io_wq, &req->work);
|
2020-08-10 23:55:22 +08:00
|
|
|
if (link)
|
|
|
|
io_queue_linked_timeout(link);
|
2020-06-30 00:18:43 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
|
2019-04-07 11:51:27 +08:00
|
|
|
{
|
2021-06-15 06:37:31 +08:00
|
|
|
while (!list_empty(&ctx->defer_list)) {
|
2020-07-14 04:37:14 +08:00
|
|
|
struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
|
|
|
|
struct io_defer_entry, list);
|
2019-04-07 11:51:27 +08:00
|
|
|
|
2020-07-14 04:37:15 +08:00
|
|
|
if (req_need_defer(de->req, de->seq))
|
2020-05-27 01:34:05 +08:00
|
|
|
break;
|
2020-07-14 04:37:14 +08:00
|
|
|
list_del_init(&de->list);
|
2021-01-27 07:35:10 +08:00
|
|
|
io_req_task_queue(de->req);
|
2020-07-14 04:37:14 +08:00
|
|
|
kfree(de);
|
2021-06-15 06:37:31 +08:00
|
|
|
}
|
2020-05-27 01:34:05 +08:00
|
|
|
}
|
|
|
|
|
2022-08-30 20:50:12 +08:00
|
|
|
|
|
|
|
static void io_eventfd_ops(struct rcu_head *rcu)
|
2020-01-09 02:04:00 +08:00
|
|
|
{
|
2022-08-30 20:50:11 +08:00
|
|
|
struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
|
2022-08-30 20:50:12 +08:00
|
|
|
int ops = atomic_xchg(&ev_fd->ops, 0);
|
2022-06-20 08:25:55 +08:00
|
|
|
|
2022-08-30 20:50:12 +08:00
|
|
|
if (ops & BIT(IO_EVENTFD_OP_SIGNAL_BIT))
|
2022-11-21 01:18:45 +08:00
|
|
|
eventfd_signal_mask(ev_fd->cq_ev_fd, 1, EPOLL_URING_WAKE);
|
2022-08-30 20:50:11 +08:00
|
|
|
|
2022-08-30 20:50:12 +08:00
|
|
|
/* IO_EVENTFD_OP_FREE_BIT may not be set here depending on callback
|
|
|
|
* ordering in a race, but if references are 0 we know we have to free
|
|
|
|
* it regardless.
|
2022-06-20 08:25:55 +08:00
|
|
|
*/
|
2022-08-30 20:50:12 +08:00
|
|
|
if (atomic_dec_and_test(&ev_fd->refs)) {
|
|
|
|
eventfd_ctx_put(ev_fd->cq_ev_fd);
|
|
|
|
kfree(ev_fd);
|
|
|
|
}
|
2022-08-30 20:50:11 +08:00
|
|
|
}
|
|
|
|
|
2022-02-04 22:51:14 +08:00
|
|
|
static void io_eventfd_signal(struct io_ring_ctx *ctx)
|
2020-01-09 02:04:00 +08:00
|
|
|
{
|
2022-08-30 20:50:12 +08:00
|
|
|
struct io_ev_fd *ev_fd = NULL;
|
2022-02-04 22:51:14 +08:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
/*
|
|
|
|
* rcu_dereference ctx->io_ev_fd once and use it both for checking
|
|
|
|
* and eventfd_signal
|
|
|
|
*/
|
|
|
|
ev_fd = rcu_dereference(ctx->io_ev_fd);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check again if ev_fd exists in case an io_eventfd_unregister call
|
|
|
|
* completed between the NULL check of ctx->io_ev_fd at the start of
|
|
|
|
* the function and rcu_read_lock.
|
|
|
|
*/
|
|
|
|
if (unlikely(!ev_fd))
|
|
|
|
goto out;
|
2020-05-16 00:38:05 +08:00
|
|
|
if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
|
2022-02-04 22:51:14 +08:00
|
|
|
goto out;
|
2022-08-30 20:50:12 +08:00
|
|
|
if (ev_fd->eventfd_async && !io_wq_current_is_worker())
|
|
|
|
goto out;
|
2022-02-04 22:51:14 +08:00
|
|
|
|
2022-08-30 20:50:12 +08:00
|
|
|
if (likely(eventfd_signal_allowed())) {
|
2022-11-21 01:18:45 +08:00
|
|
|
eventfd_signal_mask(ev_fd->cq_ev_fd, 1, EPOLL_URING_WAKE);
|
2022-08-30 20:50:12 +08:00
|
|
|
} else {
|
|
|
|
atomic_inc(&ev_fd->refs);
|
|
|
|
if (!atomic_fetch_or(BIT(IO_EVENTFD_OP_SIGNAL_BIT), &ev_fd->ops))
|
2022-12-16 02:41:38 +08:00
|
|
|
call_rcu_hurry(&ev_fd->rcu, io_eventfd_ops);
|
2022-08-30 20:50:12 +08:00
|
|
|
else
|
|
|
|
atomic_dec(&ev_fd->refs);
|
|
|
|
}
|
|
|
|
|
2022-02-04 22:51:14 +08:00
|
|
|
out:
|
|
|
|
rcu_read_unlock();
|
2020-01-09 02:04:00 +08:00
|
|
|
}
|
|
|
|
|
2022-08-30 20:50:12 +08:00
|
|
|
static void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
bool skip;
|
|
|
|
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Eventfd should only get triggered when at least one event has been
|
|
|
|
* posted. Some applications rely on the eventfd notification count
|
|
|
|
* only changing IFF a new CQE has been added to the CQ ring. There's
|
|
|
|
* no dependency on a 1:1 relationship between how many times this
|
|
|
|
* function is called (and hence the eventfd count) and number of CQEs
|
|
|
|
* posted to the CQ ring.
|
|
|
|
*/
|
|
|
|
skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
|
|
|
|
ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
if (skip)
|
|
|
|
return;
|
|
|
|
|
|
|
|
io_eventfd_signal(ctx);
|
|
|
|
}
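As a usage illustration of the notification behaviour above, the following hedged sketch registers an eventfd through liburing and blocks until at least one new CQE has been posted; it is not part of this file and omits error handling for brevity.

#include <liburing.h>
#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

/* Illustrative only: wait for CQ activity through a registered eventfd. */
static void wait_for_cq_activity(struct io_uring *ring)
{
	int efd = eventfd(0, EFD_CLOEXEC);
	uint64_t count;

	io_uring_register_eventfd(ring, efd);

	/* read() blocks until the kernel signals that new CQEs were posted */
	read(efd, &count, sizeof(count));

	io_uring_unregister_eventfd(ring);
	close(efd);
}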
|
|
|
|
|
2022-06-19 19:26:06 +08:00
|
|
|
void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2023-01-09 22:46:09 +08:00
|
|
|
if (ctx->poll_activated)
|
|
|
|
io_poll_wq_wake(ctx);
|
2022-12-03 01:47:24 +08:00
|
|
|
if (ctx->off_timeout_used)
|
|
|
|
io_flush_timeouts(ctx);
|
|
|
|
if (ctx->drain_active) {
|
2022-06-19 19:26:06 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2022-12-03 01:47:24 +08:00
|
|
|
io_queue_deferred(ctx);
|
2022-06-19 19:26:06 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
if (ctx->has_evfd)
|
2022-08-30 20:50:12 +08:00
|
|
|
io_eventfd_flush_signal(ctx);
|
2022-06-19 19:26:06 +08:00
|
|
|
}
|
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
static inline void __io_cq_lock(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2023-08-25 06:53:29 +08:00
|
|
|
if (!ctx->lockless_cq)
|
2022-12-07 23:50:01 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
2022-12-03 01:47:23 +08:00
|
|
|
static inline void io_cq_lock(struct io_ring_ctx *ctx)
|
|
|
|
__acquires(ctx->completion_lock)
|
|
|
|
{
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
|
2023-01-09 22:46:10 +08:00
|
|
|
{
|
|
|
|
io_commit_cqring(ctx);
|
2023-08-25 06:53:28 +08:00
|
|
|
if (!ctx->task_complete) {
|
2023-08-25 06:53:29 +08:00
|
|
|
if (!ctx->lockless_cq)
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
/* IOPOLL rings only need to wake up if it's also SQPOLL */
|
|
|
|
if (!ctx->syscall_iopoll)
|
|
|
|
io_cqring_wake(ctx);
|
2023-01-09 22:46:10 +08:00
|
|
|
}
|
2023-08-25 06:53:28 +08:00
|
|
|
io_commit_cqring_flush(ctx);
|
2023-01-09 22:46:10 +08:00
|
|
|
}
|
|
|
|
|
2023-06-23 19:23:30 +08:00
|
|
|
static void io_cq_unlock_post(struct io_ring_ctx *ctx)
|
2022-11-25 03:46:41 +08:00
|
|
|
__releases(ctx->completion_lock)
|
2022-06-20 08:25:56 +08:00
|
|
|
{
|
2022-12-07 23:50:01 +08:00
|
|
|
io_commit_cqring(ctx);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
io_cqring_wake(ctx);
|
2023-08-25 06:53:28 +08:00
|
|
|
io_commit_cqring_flush(ctx);
|
2022-06-20 08:25:56 +08:00
|
|
|
}
|
|
|
|
|
2019-11-22 12:01:26 +08:00
|
|
|
/* Discard all backlogged CQ entries without posting them */
|
2022-12-07 11:53:28 +08:00
|
|
|
static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_overflow_cqe *ocqe;
|
|
|
|
LIST_HEAD(list);
|
|
|
|
|
2023-06-23 19:23:27 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2022-12-07 11:53:28 +08:00
|
|
|
list_splice_init(&ctx->cq_overflow_list, &list);
|
|
|
|
clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
|
2023-06-23 19:23:27 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2022-12-07 11:53:28 +08:00
|
|
|
|
|
|
|
while (!list_empty(&list)) {
|
|
|
|
ocqe = list_first_entry(&list, struct io_overflow_cqe, list);
|
|
|
|
list_del(&ocqe->list);
|
|
|
|
kfree(ocqe);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-12-07 11:53:29 +08:00
|
|
|
static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
|
2019-11-07 02:31:17 +08:00
|
|
|
{
|
2022-04-27 02:21:30 +08:00
|
|
|
size_t cqe_size = sizeof(struct io_uring_cqe);
|
2019-11-07 02:31:17 +08:00
|
|
|
|
2022-12-07 11:53:28 +08:00
|
|
|
if (__io_cqring_events(ctx) == ctx->cq_entries)
|
2022-12-07 11:53:29 +08:00
|
|
|
return;
|
2019-11-07 02:31:17 +08:00
|
|
|
|
2022-04-27 02:21:30 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_CQE32)
|
|
|
|
cqe_size <<= 1;
|
|
|
|
|
2022-06-20 08:25:56 +08:00
|
|
|
io_cq_lock(ctx);
|
2021-02-23 20:40:22 +08:00
|
|
|
while (!list_empty(&ctx->cq_overflow_list)) {
|
2023-08-25 06:53:27 +08:00
|
|
|
struct io_uring_cqe *cqe;
|
2021-02-23 20:40:22 +08:00
|
|
|
struct io_overflow_cqe *ocqe;
|
2020-09-29 03:10:13 +08:00
|
|
|
|
2023-08-25 06:53:27 +08:00
|
|
|
if (!io_get_cqe_overflow(ctx, &cqe, true))
|
2019-11-07 02:31:17 +08:00
|
|
|
break;
|
2021-02-23 20:40:22 +08:00
|
|
|
ocqe = list_first_entry(&ctx->cq_overflow_list,
|
|
|
|
struct io_overflow_cqe, list);
|
2022-12-07 11:53:28 +08:00
|
|
|
memcpy(cqe, &ocqe->cqe, cqe_size);
|
2021-02-23 20:40:22 +08:00
|
|
|
list_del(&ocqe->list);
|
|
|
|
kfree(ocqe);
|
2019-11-07 02:31:17 +08:00
|
|
|
}
|
|
|
|
|
2022-12-07 11:53:29 +08:00
|
|
|
if (list_empty(&ctx->cq_overflow_list)) {
|
2022-04-21 17:13:43 +08:00
|
|
|
clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
|
2022-04-26 09:49:00 +08:00
|
|
|
atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
|
2020-12-17 08:24:38 +08:00
|
|
|
}
|
2022-06-20 08:25:56 +08:00
|
|
|
io_cq_unlock_post(ctx);
|
2019-11-07 02:31:17 +08:00
|
|
|
}
|
|
|
|
|
2022-12-21 22:05:09 +08:00
|
|
|
static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
/* iopoll syncs against uring_lock, not completion_lock */
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL)
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
__io_cqring_overflow_flush(ctx);
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL)
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2022-12-07 11:53:29 +08:00
|
|
|
static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
|
2021-01-05 04:36:36 +08:00
|
|
|
{
|
2022-12-21 22:05:09 +08:00
|
|
|
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
|
|
|
|
io_cqring_do_overflow_flush(ctx);
|
2021-01-05 04:36:36 +08:00
|
|
|
}
|
|
|
|
|
2023-01-23 22:37:17 +08:00
|
|
|
/* can be called by any task */
|
2023-06-23 19:23:25 +08:00
|
|
|
static void io_put_task_remote(struct task_struct *task)
|
2021-08-09 20:04:13 +08:00
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = task->io_uring;
|
|
|
|
|
2023-06-23 19:23:25 +08:00
|
|
|
percpu_counter_sub(&tctx->inflight, 1);
|
2023-02-17 23:27:23 +08:00
|
|
|
if (unlikely(atomic_read(&tctx->in_cancel)))
|
2022-03-25 19:52:15 +08:00
|
|
|
wake_up(&tctx->wait);
|
2023-06-23 19:23:25 +08:00
|
|
|
put_task_struct(task);
|
2022-03-25 19:52:15 +08:00
|
|
|
}
|
|
|
|
|
2023-01-23 22:37:17 +08:00
|
|
|
/* used by a task to put its own references */
|
2023-06-23 19:23:25 +08:00
|
|
|
static void io_put_task_local(struct task_struct *task)
|
2023-01-23 22:37:17 +08:00
|
|
|
{
|
2023-06-23 19:23:25 +08:00
|
|
|
task->io_uring->cached_refs++;
|
2023-01-23 22:37:17 +08:00
|
|
|
}
|
|
|
|
|
2023-01-17 00:48:58 +08:00
|
|
|
/* must be called somewhat shortly after putting a request */
|
2023-06-23 19:23:25 +08:00
|
|
|
static inline void io_put_task(struct task_struct *task)
|
2023-01-17 00:48:58 +08:00
|
|
|
{
|
|
|
|
if (likely(task == current))
|
2023-06-23 19:23:25 +08:00
|
|
|
io_put_task_local(task);
|
2023-01-17 00:48:58 +08:00
|
|
|
else
|
2023-06-23 19:23:25 +08:00
|
|
|
io_put_task_remote(task);
|
2023-01-17 00:48:58 +08:00
|
|
|
}
|
|
|
|
|
2022-07-13 04:52:47 +08:00
|
|
|
void io_task_refs_refill(struct io_uring_task *tctx)
|
2021-08-27 18:55:01 +08:00
|
|
|
{
|
|
|
|
unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;
|
|
|
|
|
|
|
|
percpu_counter_add(&tctx->inflight, refill);
|
|
|
|
refcount_add(refill, &current->usage);
|
|
|
|
tctx->cached_refs += refill;
|
|
|
|
}
|
|
|
|
|
2022-01-09 08:53:22 +08:00
|
|
|
static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = task->io_uring;
|
|
|
|
unsigned int refs = tctx->cached_refs;
|
|
|
|
|
|
|
|
if (refs) {
|
|
|
|
tctx->cached_refs = 0;
|
|
|
|
percpu_counter_sub(&tctx->inflight, refs);
|
|
|
|
put_task_struct_many(task, refs);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-06-17 16:48:02 +08:00
|
|
|
static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
|
|
|
|
s32 res, u32 cflags, u64 extra1, u64 extra2)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2021-04-13 09:58:44 +08:00
|
|
|
struct io_overflow_cqe *ocqe;
|
2022-04-27 02:21:30 +08:00
|
|
|
size_t ocq_size = sizeof(struct io_overflow_cqe);
|
|
|
|
bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2023-01-04 09:34:57 +08:00
|
|
|
lockdep_assert_held(&ctx->completion_lock);
|
|
|
|
|
2022-04-27 02:21:30 +08:00
|
|
|
if (is_cqe32)
|
|
|
|
ocq_size += sizeof(struct io_uring_cqe);
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2022-04-27 02:21:30 +08:00
|
|
|
ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
|
2022-04-21 17:13:41 +08:00
|
|
|
trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
|
2021-04-13 09:58:44 +08:00
|
|
|
if (!ocqe) {
|
|
|
|
/*
|
|
|
|
* If we're in ring overflow flush mode, or in task cancel mode,
|
|
|
|
* or cannot allocate an overflow entry, then we need to drop it
|
|
|
|
* on the floor.
|
|
|
|
*/
|
2021-05-17 05:58:10 +08:00
|
|
|
io_account_cq_overflow(ctx);
|
2022-04-21 17:13:44 +08:00
|
|
|
set_bit(IO_CHECK_CQ_DROPPED_BIT, &ctx->check_cq);
|
2021-04-13 09:58:44 +08:00
|
|
|
return false;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
2021-04-13 09:58:44 +08:00
|
|
|
if (list_empty(&ctx->cq_overflow_list)) {
|
2022-04-21 17:13:43 +08:00
|
|
|
set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
|
2022-04-26 09:49:00 +08:00
|
|
|
atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
|
2021-08-08 08:13:42 +08:00
|
|
|
|
2021-04-13 09:58:44 +08:00
|
|
|
}
|
2021-04-25 21:32:17 +08:00
|
|
|
ocqe->cqe.user_data = user_data;
|
2021-04-13 09:58:44 +08:00
|
|
|
ocqe->cqe.res = res;
|
|
|
|
ocqe->cqe.flags = cflags;
|
2022-04-27 02:21:30 +08:00
|
|
|
if (is_cqe32) {
|
|
|
|
ocqe->cqe.big_cqe[0] = extra1;
|
|
|
|
ocqe->cqe.big_cqe[1] = extra2;
|
|
|
|
}
|
2021-04-13 09:58:44 +08:00
|
|
|
list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
|
|
|
|
return true;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
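/*
 * Illustrative userspace-side sketch (not part of this file): once the
 * kernel sets IORING_SQ_CQ_OVERFLOW in the mmap'ed SQ flags word, the
 * application has to enter the kernel so the overflow list can be flushed
 * back into the CQ ring. The fd and flags pointer come from ring setup;
 * the function name is hypothetical.
 */
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static void example_flush_cq_overflow(int ring_fd, const unsigned *sq_kflags)
{
	if (__atomic_load_n(sq_kflags, __ATOMIC_ACQUIRE) & IORING_SQ_CQ_OVERFLOW)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_GETEVENTS, NULL, 0);
}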
|
|
|
|
|
2023-08-11 20:53:44 +08:00
|
|
|
void io_req_cqe_overflow(struct io_kiocb *req)
|
2022-06-17 16:48:02 +08:00
|
|
|
{
|
2023-08-11 20:53:44 +08:00
|
|
|
io_cqring_event_overflow(req->ctx, req->cqe.user_data,
|
|
|
|
req->cqe.res, req->cqe.flags,
|
2023-08-25 06:53:25 +08:00
|
|
|
req->big_cqe.extra1, req->big_cqe.extra2);
|
|
|
|
memset(&req->big_cqe, 0, sizeof(req->big_cqe));
|
2022-06-17 16:48:02 +08:00
|
|
|
}
|
|
|
|
|
2022-06-17 16:48:01 +08:00
|
|
|
/*
|
|
|
|
* writes to the cq entry need to come after reading head; the
|
|
|
|
* control dependency is enough as we're using WRITE_ONCE to
|
|
|
|
* fill the cq entry
|
|
|
|
*/
|
2023-08-25 06:53:26 +08:00
|
|
|
bool io_cqe_cache_refill(struct io_ring_ctx *ctx, bool overflow)
|
2022-06-17 16:48:01 +08:00
|
|
|
{
|
|
|
|
struct io_rings *rings = ctx->rings;
|
|
|
|
unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
|
|
|
|
unsigned int free, queued, len;
|
|
|
|
|
2022-09-23 21:53:25 +08:00
|
|
|
/*
|
|
|
|
* Posting into the CQ when there are pending overflowed CQEs may break
|
|
|
|
* ordering guarantees, which will affect links, F_MORE users and more.
|
|
|
|
* Force the completion into the overflow path instead.
|
|
|
|
*/
|
|
|
|
if (!overflow && (ctx->check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT)))
|
2023-08-25 06:53:26 +08:00
|
|
|
return false;
|
2022-06-17 16:48:01 +08:00
|
|
|
|
|
|
|
/* userspace may cheat by modifying the tail, so be safe and take the min */
|
|
|
|
queued = min(__io_cqring_events(ctx), ctx->cq_entries);
|
|
|
|
free = ctx->cq_entries - queued;
|
|
|
|
/* we need a contiguous range, limit based on the current array offset */
|
|
|
|
len = min(free, ctx->cq_entries - off);
|
|
|
|
if (!len)
|
2023-08-25 06:53:26 +08:00
|
|
|
return false;
|
2022-06-17 16:48:01 +08:00
|
|
|
|
2022-06-17 16:48:05 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
off <<= 1;
|
|
|
|
len <<= 1;
|
|
|
|
}
|
|
|
|
|
2022-06-17 16:48:01 +08:00
|
|
|
ctx->cqe_cached = &rings->cqes[off];
|
|
|
|
ctx->cqe_sentinel = ctx->cqe_cached + len;
|
2023-08-25 06:53:26 +08:00
|
|
|
return true;
|
2022-06-17 16:48:01 +08:00
|
|
|
}
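/*
 * Illustrative sketch (hypothetical helper, not part of this file): one way
 * a caller could consume the contiguous window that io_cqe_cache_refill()
 * prepares, bumping the cached tail and skipping the second half of a big
 * CQE. The real consumer lives in io_get_cqe().
 */
static struct io_uring_cqe *example_get_cached_cqe(struct io_ring_ctx *ctx)
{
	struct io_uring_cqe *cqe;

	if (ctx->cqe_cached >= ctx->cqe_sentinel &&
	    !io_cqe_cache_refill(ctx, false))
		return NULL;

	cqe = ctx->cqe_cached;
	ctx->cached_cq_tail++;
	ctx->cqe_cached++;
	if (ctx->flags & IORING_SETUP_CQE32)
		ctx->cqe_cached++;	/* 32-byte CQEs occupy two slots */
	return cqe;
}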
|
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
|
|
|
|
u32 cflags)
|
2020-02-24 07:42:51 +08:00
|
|
|
{
|
2022-06-15 18:23:06 +08:00
|
|
|
struct io_uring_cqe *cqe;
|
|
|
|
|
2021-11-10 23:49:31 +08:00
|
|
|
ctx->cq_extra++;
|
2022-06-15 18:23:06 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we can't get a cq entry, userspace overflowed the
|
|
|
|
* submission (by quite a lot). Increment the overflow count in
|
|
|
|
* the ring.
|
|
|
|
*/
|
2023-08-25 06:53:27 +08:00
|
|
|
if (likely(io_get_cqe(ctx, &cqe))) {
|
2022-06-30 17:12:31 +08:00
|
|
|
trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
|
|
|
|
|
2022-06-15 18:23:06 +08:00
|
|
|
WRITE_ONCE(cqe->user_data, user_data);
|
|
|
|
WRITE_ONCE(cqe->res, res);
|
|
|
|
WRITE_ONCE(cqe->flags, cflags);
|
2022-06-15 18:23:07 +08:00
|
|
|
|
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
WRITE_ONCE(cqe->big_cqe[0], 0);
|
|
|
|
WRITE_ONCE(cqe->big_cqe[1], 0);
|
|
|
|
}
|
2022-06-15 18:23:06 +08:00
|
|
|
return true;
|
|
|
|
}
|
2022-06-30 17:12:26 +08:00
|
|
|
return false;
|
2020-02-24 07:42:51 +08:00
|
|
|
}
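/*
 * Illustrative userspace-side sketch (not part of this file): reaping the
 * entries that io_fill_cqe_aux() and friends publish. The tail is read with
 * acquire semantics, the head is published with release semantics, and the
 * pointers are assumed to come from the mmap'ed CQ ring (default 16-byte
 * CQEs). The function name is hypothetical.
 */
static unsigned example_reap_cqes(struct io_uring_cqe *cqes, unsigned mask,
				  unsigned *khead, unsigned *ktail)
{
	unsigned head = *khead;
	unsigned tail = __atomic_load_n(ktail, __ATOMIC_ACQUIRE);
	unsigned seen = 0;

	while (head != tail) {
		struct io_uring_cqe *cqe = &cqes[head & mask];

		/* consume cqe->user_data, cqe->res and cqe->flags here */
		(void)cqe;
		head++;
		seen++;
	}
	/* let the kernel reuse the consumed entries */
	__atomic_store_n(khead, head, __ATOMIC_RELEASE);
	return seen;
}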
|
|
|
|
|
2022-11-24 17:35:54 +08:00
|
|
|
static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
|
|
|
|
__must_hold(&ctx->uring_lock)
|
|
|
|
{
|
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
|
|
|
unsigned int i;
|
|
|
|
|
|
|
|
lockdep_assert_held(&ctx->uring_lock);
|
|
|
|
for (i = 0; i < state->cqes_count; i++) {
|
2023-08-25 06:53:36 +08:00
|
|
|
struct io_uring_cqe *cqe = &ctx->completion_cqes[i];
|
2022-11-24 17:35:54 +08:00
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
if (!io_fill_cqe_aux(ctx, cqe->user_data, cqe->res, cqe->flags)) {
|
2023-09-07 20:50:08 +08:00
|
|
|
if (ctx->lockless_cq) {
|
2022-12-07 23:50:01 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
io_cqring_event_overflow(ctx, cqe->user_data,
|
|
|
|
cqe->res, cqe->flags, 0, 0);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
} else {
|
|
|
|
io_cqring_event_overflow(ctx, cqe->user_data,
|
|
|
|
cqe->res, cqe->flags, 0, 0);
|
|
|
|
}
|
|
|
|
}
|
2022-11-24 17:35:54 +08:00
|
|
|
}
|
|
|
|
state->cqes_count = 0;
|
|
|
|
}
|
|
|
|
|
2022-11-24 17:35:58 +08:00
|
|
|
static bool __io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags,
|
|
|
|
bool allow_overflow)
|
2022-06-17 16:48:00 +08:00
|
|
|
{
|
|
|
|
bool filled;
|
|
|
|
|
2022-06-20 08:25:56 +08:00
|
|
|
io_cq_lock(ctx);
|
2022-12-07 23:50:01 +08:00
|
|
|
filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
|
|
|
|
if (!filled && allow_overflow)
|
|
|
|
filled = io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
|
|
|
|
|
2022-06-20 08:25:56 +08:00
|
|
|
io_cq_unlock_post(ctx);
|
2022-06-17 16:48:00 +08:00
|
|
|
return filled;
|
|
|
|
}
|
|
|
|
|
2022-11-24 17:35:58 +08:00
|
|
|
bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
|
|
|
|
{
|
|
|
|
return __io_post_aux_cqe(ctx, user_data, res, cflags, true);
|
|
|
|
}
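/*
 * Illustrative sketch (hypothetical caller, not part of this file): posting
 * a completion that has no io_kiocb behind it, e.g. a notification sent into
 * another ring. io_post_aux_cqe() takes the completion lock and wakes CQ
 * waiters itself.
 */
static bool example_post_notification(struct io_ring_ctx *target_ctx,
				      u64 user_data, s32 res)
{
	return io_post_aux_cqe(target_ctx, user_data, res, 0);
}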
|
|
|
|
|
2023-08-11 20:53:45 +08:00
|
|
|
/*
|
|
|
|
* A helper for multishot requests posting additional CQEs.
|
|
|
|
* Should only be used from task_work where issue_flags include IO_URING_F_MULTISHOT.
|
|
|
|
*/
|
|
|
|
bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2023-06-08 04:41:20 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
u64 user_data = req->cqe.user_data;
|
2022-11-24 17:35:55 +08:00
|
|
|
struct io_uring_cqe *cqe;
|
|
|
|
|
|
|
|
if (!defer)
|
2023-08-11 20:53:45 +08:00
|
|
|
return __io_post_aux_cqe(ctx, user_data, res, cflags, false);
|
2022-11-24 17:35:55 +08:00
|
|
|
|
|
|
|
lockdep_assert_held(&ctx->uring_lock);
|
|
|
|
|
2023-08-25 06:53:36 +08:00
|
|
|
if (ctx->submit_state.cqes_count == ARRAY_SIZE(ctx->completion_cqes)) {
|
2022-12-07 23:50:01 +08:00
|
|
|
__io_cq_lock(ctx);
|
2022-11-24 17:35:55 +08:00
|
|
|
__io_flush_post_cqes(ctx);
|
|
|
|
/* no need to flush - flush is deferred */
|
2022-12-07 23:50:01 +08:00
|
|
|
__io_cq_unlock_post(ctx);
|
2022-11-24 17:35:55 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* For defered completions this is not as strict as it is otherwise,
|
|
|
|
* however it's main job is to prevent unbounded posted completions,
|
|
|
|
* and in that it works just as well.
|
|
|
|
*/
|
2023-08-11 20:53:45 +08:00
|
|
|
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
|
2022-11-24 17:35:55 +08:00
|
|
|
return false;
|
|
|
|
|
2023-08-25 06:53:36 +08:00
|
|
|
cqe = &ctx->completion_cqes[ctx->submit_state.cqes_count++];
|
2022-11-24 17:35:55 +08:00
|
|
|
cqe->user_data = user_data;
|
|
|
|
cqe->res = res;
|
|
|
|
cqe->flags = cflags;
|
|
|
|
return true;
|
|
|
|
}
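/*
 * Illustrative sketch (hypothetical multishot handler, not part of this
 * file): posting an extra completion flagged with IORING_CQE_F_MORE while
 * keeping the request armed, and ending the multishot if the CQ is
 * backlogged. The return values and the 'defer' choice are assumptions for
 * illustration only.
 */
static int example_multishot_post(struct io_kiocb *req, s32 res,
				  unsigned int issue_flags)
{
	bool defer = issue_flags & IO_URING_F_MULTISHOT;

	if (io_fill_cqe_req_aux(req, defer, res, IORING_CQE_F_MORE))
		return 0;		/* CQE posted, request stays active */
	return -ECANCELED;		/* overflow pending: stop the multishot */
}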
|
|
|
|
|
2023-04-04 20:39:49 +08:00
|
|
|
static void __io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2022-11-23 19:33:40 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2023-04-04 20:39:47 +08:00
|
|
|
struct io_rsrc_node *rsrc_node = NULL;
|
2022-11-23 19:33:40 +08:00
|
|
|
|
|
|
|
io_cq_lock(ctx);
|
2023-08-11 20:53:43 +08:00
|
|
|
if (!(req->flags & REQ_F_CQE_SKIP)) {
|
|
|
|
if (!io_fill_cqe_req(ctx, req))
|
|
|
|
io_req_cqe_overflow(req);
|
|
|
|
}
|
2022-11-23 19:33:40 +08:00
|
|
|
|
2021-02-10 10:53:37 +08:00
|
|
|
/*
|
|
|
|
* If we're the last reference to this request, add to our locked
|
|
|
|
* free_list cache.
|
|
|
|
*/
|
2021-02-25 04:28:27 +08:00
|
|
|
if (req_ref_put_and_test(req)) {
|
2022-04-16 05:08:29 +08:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS) {
|
2021-08-15 17:40:25 +08:00
|
|
|
if (req->flags & IO_DISARM_MASK)
|
2021-03-09 08:37:59 +08:00
|
|
|
io_disarm_next(req);
|
|
|
|
if (req->link) {
|
|
|
|
io_req_task_queue(req->link);
|
|
|
|
req->link = NULL;
|
|
|
|
}
|
|
|
|
}
|
2023-01-17 00:49:01 +08:00
|
|
|
io_put_kbuf_comp(req);
|
2023-06-23 19:23:23 +08:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
|
|
|
|
io_clean_op(req);
|
2023-07-08 01:14:40 +08:00
|
|
|
io_put_file(req);
|
2023-06-23 19:23:23 +08:00
|
|
|
|
2023-04-04 20:39:47 +08:00
|
|
|
rsrc_node = req->rsrc_node;
|
2022-03-25 21:00:43 +08:00
|
|
|
/*
|
|
|
|
* Selected buffer deallocation in io_clean_op() assumes that
|
|
|
|
* we don't hold ->completion_lock. Clean them here to avoid
|
|
|
|
* deadlocks.
|
|
|
|
*/
|
2023-06-23 19:23:25 +08:00
|
|
|
io_put_task_remote(req->task);
|
2021-09-25 04:59:47 +08:00
|
|
|
wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
|
2021-05-17 05:58:12 +08:00
|
|
|
ctx->locked_free_nr++;
|
2021-03-15 04:57:09 +08:00
|
|
|
}
|
2022-06-20 08:25:56 +08:00
|
|
|
io_cq_unlock_post(ctx);
|
2023-04-04 20:39:47 +08:00
|
|
|
|
2023-04-04 20:39:49 +08:00
|
|
|
if (rsrc_node) {
|
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
2023-04-04 20:39:55 +08:00
|
|
|
io_put_rsrc_node(ctx, rsrc_node);
|
2023-04-04 20:39:49 +08:00
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
|
|
|
}
|
2021-04-16 07:44:34 +08:00
|
|
|
}
|
|
|
|
|
2022-11-23 19:33:41 +08:00
|
|
|
void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
|
2020-02-24 07:42:51 +08:00
|
|
|
{
|
2023-04-14 15:53:13 +08:00
|
|
|
if (req->ctx->task_complete && req->ctx->submitter_task != current) {
|
2022-12-07 11:53:30 +08:00
|
|
|
req->io_task_work.func = io_req_task_complete;
|
|
|
|
io_req_task_work_add(req);
|
|
|
|
} else if (!(issue_flags & IO_URING_F_UNLOCKED) ||
|
|
|
|
!(req->ctx->flags & IORING_SETUP_IOPOLL)) {
|
2023-04-04 20:39:49 +08:00
|
|
|
__io_req_complete_post(req, issue_flags);
|
2022-11-23 19:33:41 +08:00
|
|
|
} else {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2023-04-04 20:39:49 +08:00
|
|
|
__io_req_complete_post(req, issue_flags & ~IO_URING_F_UNLOCKED);
|
2022-11-23 19:33:41 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
2019-11-08 23:52:53 +08:00
|
|
|
}
|
|
|
|
|
2022-11-24 17:35:53 +08:00
|
|
|
void io_req_defer_failed(struct io_kiocb *req, s32 res)
|
2022-11-23 19:33:37 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-03-01 06:35:12 +08:00
|
|
|
{
|
2023-01-12 22:44:11 +08:00
|
|
|
const struct io_cold_def *def = &io_cold_defs[req->opcode];
|
2022-09-21 19:17:46 +08:00
|
|
|
|
2022-11-23 19:33:37 +08:00
|
|
|
lockdep_assert_held(&req->ctx->uring_lock);
|
|
|
|
|
2021-05-17 05:58:05 +08:00
|
|
|
req_set_fail(req);
|
2022-05-25 05:21:00 +08:00
|
|
|
io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
|
2022-09-21 19:17:46 +08:00
|
|
|
if (def->fail)
|
|
|
|
def->fail(req);
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_complete_defer(req);
|
2021-03-01 06:35:12 +08:00
|
|
|
}
|
|
|
|
|
2021-08-09 20:04:08 +08:00
|
|
|
/*
|
|
|
|
* Don't initialise the fields below on every allocation, but do that in
|
|
|
|
* advance and keep them valid across allocations.
|
|
|
|
*/
|
|
|
|
static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
req->ctx = ctx;
|
|
|
|
req->link = NULL;
|
|
|
|
req->async_data = NULL;
|
|
|
|
/* not necessary, but safer to zero */
|
2023-08-25 06:53:24 +08:00
|
|
|
memset(&req->cqe, 0, sizeof(req->cqe));
|
2023-08-25 06:53:25 +08:00
|
|
|
memset(&req->big_cqe, 0, sizeof(req->big_cqe));
|
2021-08-09 20:04:08 +08:00
|
|
|
}
|
|
|
|
|
2021-03-20 01:22:39 +08:00
|
|
|
static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
|
2021-08-10 03:18:11 +08:00
|
|
|
struct io_submit_state *state)
|
2021-03-20 01:22:39 +08:00
|
|
|
{
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-09-25 04:59:47 +08:00
|
|
|
wq_list_splice(&ctx->locked_free_list, &state->free_list);
|
2021-05-17 05:58:12 +08:00
|
|
|
ctx->locked_free_nr = 0;
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-20 01:22:39 +08:00
|
|
|
}
|
|
|
|
|
2021-08-12 02:28:29 +08:00
|
|
|
/*
|
|
|
|
* A request might get retired back into the request caches even before opcode
|
|
|
|
* handlers and io_issue_sqe() are done with it, e.g. inline completion path.
|
|
|
|
* Because of that, io_alloc_req() should be called only under ->uring_lock
|
|
|
|
* and with extra caution to not get a request that is still worked on.
|
|
|
|
*/
|
2022-07-27 17:30:40 +08:00
|
|
|
__cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
|
2021-08-12 02:28:29 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2021-08-09 20:04:08 +08:00
|
|
|
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
|
2021-09-25 04:59:45 +08:00
|
|
|
void *reqs[IO_REQ_ALLOC_BATCH];
|
2021-08-09 20:04:08 +08:00
|
|
|
int ret, i;
|
2021-02-10 08:03:23 +08:00
|
|
|
|
2022-04-12 22:09:46 +08:00
|
|
|
/*
|
|
|
|
* If we have more than a batch's worth of requests in our IRQ side
|
|
|
|
* locked cache, grab the lock and move them over to our submission
|
|
|
|
* side cache.
|
|
|
|
*/
|
2022-04-16 05:08:33 +08:00
|
|
|
if (data_race(ctx->locked_free_nr) > IO_COMPL_BATCH) {
|
2022-04-12 22:09:46 +08:00
|
|
|
io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
|
2022-04-12 22:09:47 +08:00
|
|
|
if (!io_req_cache_empty(ctx))
|
2022-04-12 22:09:46 +08:00
|
|
|
return true;
|
|
|
|
}
|
2021-02-10 08:03:23 +08:00
|
|
|
|
2021-09-25 04:59:45 +08:00
|
|
|
ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);
|
2019-03-15 06:30:06 +08:00
|
|
|
|
2021-08-09 20:04:08 +08:00
|
|
|
/*
|
|
|
|
* Bulk alloc is all-or-nothing. If we fail to get a batch,
|
|
|
|
* retry single alloc to be on the safe side.
|
|
|
|
*/
|
|
|
|
if (unlikely(ret <= 0)) {
|
2021-09-25 04:59:45 +08:00
|
|
|
reqs[0] = kmem_cache_alloc(req_cachep, gfp);
|
|
|
|
if (!reqs[0])
|
2021-10-05 03:02:49 +08:00
|
|
|
return false;
|
2021-08-09 20:04:08 +08:00
|
|
|
ret = 1;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
2021-08-09 20:04:08 +08:00
|
|
|
|
2021-10-05 03:02:53 +08:00
|
|
|
percpu_ref_get_many(&ctx->refs, ret);
|
2021-09-25 04:59:45 +08:00
|
|
|
for (i = 0; i < ret; i++) {
|
2022-04-12 22:09:46 +08:00
|
|
|
struct io_kiocb *req = reqs[i];
|
2021-09-25 04:59:45 +08:00
|
|
|
|
|
|
|
io_preinit_req(req, ctx);
|
2022-04-12 22:09:48 +08:00
|
|
|
io_req_add_to_cache(req, ctx);
|
2021-09-25 04:59:45 +08:00
|
|
|
}
|
2021-10-05 03:02:49 +08:00
|
|
|
return true;
|
|
|
|
}
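/*
 * Illustrative sketch (not part of this file): how an allocation fast path
 * might sit on top of __io_alloc_req_refill(). io_req_cache_empty() is used
 * above; io_extract_req() is assumed to pop one request off the
 * submission-side free list.
 */
static inline struct io_kiocb *example_alloc_req(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	if (unlikely(io_req_cache_empty(ctx))) {
		if (!__io_alloc_req_refill(ctx))
			return NULL;
	}
	return io_extract_req(ctx);
}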
|
|
|
|
|
2023-04-04 20:39:48 +08:00
|
|
|
__cold void io_free_req(struct io_kiocb *req)
|
|
|
|
{
|
2023-06-23 19:23:22 +08:00
|
|
|
/* refs were already put, restore them for io_req_task_complete() */
|
|
|
|
req->flags &= ~REQ_F_REFCOUNT;
|
|
|
|
/* we only want to free it, don't post CQEs */
|
|
|
|
req->flags |= REQ_F_CQE_SKIP;
|
|
|
|
req->io_task_work.func = io_req_task_complete;
|
2023-04-04 20:39:48 +08:00
|
|
|
io_req_task_work_add(req);
|
|
|
|
}
|
|
|
|
|
2021-09-08 23:40:51 +08:00
|
|
|
static void __io_req_find_next_prep(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2022-12-03 01:47:23 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2022-06-20 08:25:55 +08:00
|
|
|
io_disarm_next(req);
|
2022-12-03 01:47:23 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-09-08 23:40:51 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
|
2019-11-09 11:00:08 +08:00
|
|
|
{
|
2021-03-09 08:37:58 +08:00
|
|
|
struct io_kiocb *nxt;
|
2019-11-22 04:21:01 +08:00
|
|
|
|
2019-05-11 06:07:28 +08:00
|
|
|
/*
|
|
|
|
* If LINK is set, we have dependent requests in this chain. If we
|
|
|
|
* didn't fail this request, queue the first one up, moving any other
|
|
|
|
* dependencies to the next request. In case of failure, fail the rest
|
|
|
|
* of the chain.
|
|
|
|
*/
|
2021-09-08 23:40:51 +08:00
|
|
|
if (unlikely(req->flags & IO_DISARM_MASK))
|
|
|
|
__io_req_find_next_prep(req);
|
2021-03-09 08:37:58 +08:00
|
|
|
nxt = req->link;
|
|
|
|
req->link = NULL;
|
|
|
|
return nxt;
|
2019-11-21 04:03:52 +08:00
|
|
|
}
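/*
 * Illustrative userspace-side sketch (not part of this file): the dependent
 * chain that io_req_find_next() walks is built by submitting SQEs with
 * IOSQE_IO_LINK set, e.g. a read whose completion releases the write that
 * follows it. Uses liburing; the fds, buffer and function name are
 * assumptions of the example.
 */
#include <liburing.h>

static int example_submit_link(struct io_uring *ring, int in_fd, int out_fd,
			       void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	if (!sqe)
		return -1;	/* SQ ring full, submit and retry */
	io_uring_prep_read(sqe, in_fd, buf, len, 0);
	sqe->flags |= IOSQE_IO_LINK;	/* next SQE depends on this one */

	sqe = io_uring_get_sqe(ring);
	if (!sqe)
		return -1;
	io_uring_prep_write(sqe, out_fd, buf, len, 0);

	return io_uring_submit(ring);
}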
|
2019-05-11 06:07:28 +08:00
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
static void ctx_flush_and_put(struct io_ring_ctx *ctx, struct io_tw_state *ts)
|
2021-03-01 06:04:53 +08:00
|
|
|
{
|
|
|
|
if (!ctx)
|
|
|
|
return;
|
2022-04-26 09:49:04 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
|
|
|
|
atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
|
2023-03-27 23:38:15 +08:00
|
|
|
if (ts->locked) {
|
2021-09-08 23:40:52 +08:00
|
|
|
io_submit_flush_completions(ctx);
|
2021-03-01 06:04:53 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2023-03-27 23:38:15 +08:00
|
|
|
ts->locked = false;
|
2021-03-01 06:04:53 +08:00
|
|
|
}
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
|
2022-06-22 21:40:28 +08:00
|
|
|
static unsigned int handle_tw_list(struct llist_node *node,
|
2023-03-27 23:38:15 +08:00
|
|
|
struct io_ring_ctx **ctx,
|
|
|
|
struct io_tw_state *ts,
|
2022-06-22 21:40:28 +08:00
|
|
|
struct llist_node *last)
|
2021-12-07 17:39:49 +08:00
|
|
|
{
|
2022-06-22 21:40:28 +08:00
|
|
|
unsigned int count = 0;
|
|
|
|
|
2023-01-23 22:37:18 +08:00
|
|
|
while (node && node != last) {
|
2022-06-22 21:40:23 +08:00
|
|
|
struct llist_node *next = node->next;
|
2021-12-07 17:39:49 +08:00
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
io_task_work.node);
|
|
|
|
|
2022-03-25 00:17:44 +08:00
|
|
|
prefetch(container_of(next, struct io_kiocb, io_task_work.node));
|
|
|
|
|
2021-12-07 17:39:49 +08:00
|
|
|
if (req->ctx != *ctx) {
|
2023-03-27 23:38:15 +08:00
|
|
|
ctx_flush_and_put(*ctx, ts);
|
2021-12-07 17:39:49 +08:00
|
|
|
*ctx = req->ctx;
|
|
|
|
/* if not contended, grab and improve batching */
|
2023-03-27 23:38:15 +08:00
|
|
|
ts->locked = mutex_trylock(&(*ctx)->uring_lock);
|
2021-12-07 17:39:49 +08:00
|
|
|
percpu_ref_get(&(*ctx)->refs);
|
2023-03-27 23:38:14 +08:00
|
|
|
}
|
2023-06-02 22:41:46 +08:00
|
|
|
INDIRECT_CALL_2(req->io_task_work.func,
|
|
|
|
io_poll_task_func, io_req_rw_complete,
|
|
|
|
req, ts);
|
2021-12-07 17:39:49 +08:00
|
|
|
node = next;
|
2022-06-22 21:40:28 +08:00
|
|
|
count++;
|
2023-01-28 00:50:31 +08:00
|
|
|
if (unlikely(need_resched())) {
|
2023-03-27 23:38:15 +08:00
|
|
|
ctx_flush_and_put(*ctx, ts);
|
2023-01-28 00:50:31 +08:00
|
|
|
*ctx = NULL;
|
|
|
|
cond_resched();
|
|
|
|
}
|
2022-06-22 21:40:25 +08:00
|
|
|
}
|
2022-06-22 21:40:28 +08:00
|
|
|
|
|
|
|
return count;
|
2021-12-07 17:39:49 +08:00
|
|
|
}
|
|
|
|
|
2022-06-22 21:40:24 +08:00
|
|
|
/**
|
|
|
|
* io_llist_xchg - swap all entries in a lock-less list
|
|
|
|
* @head: the head of lock-less list to delete all entries
|
|
|
|
* @new: new entry as the head of the list
|
|
|
|
*
|
|
|
|
* If the list is empty, return NULL; otherwise, return a pointer to the first entry.
|
|
|
|
* The order of entries returned is from the newest to the oldest added one.
|
|
|
|
*/
|
|
|
|
static inline struct llist_node *io_llist_xchg(struct llist_head *head,
|
|
|
|
struct llist_node *new)
|
|
|
|
{
|
|
|
|
return xchg(&head->first, new);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* io_llist_cmpxchg - possibly swap all entries in a lock-less list
|
|
|
|
* @head: the head of lock-less list to delete all entries
|
|
|
|
* @old: expected old value of the first entry of the list
|
|
|
|
* @new: new entry as the head of the list
|
|
|
|
*
|
|
|
|
* Perform a cmpxchg on the first entry of the list.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static inline struct llist_node *io_llist_cmpxchg(struct llist_head *head,
|
|
|
|
struct llist_node *old,
|
|
|
|
struct llist_node *new)
|
|
|
|
{
|
|
|
|
return cmpxchg(&head->first, old, new);
|
|
|
|
}
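/*
 * Illustrative sketch (not part of this file): the basic drain pattern these
 * helpers enable. A consumer grabs the whole list in one xchg and walks it;
 * entries come back newest-first. The function name is hypothetical.
 */
static void example_drain_llist(struct llist_head *list)
{
	struct llist_node *node = io_llist_xchg(list, NULL);

	while (node) {
		struct llist_node *next = node->next;

		/* handle the object embedding 'node' here */
		node = next;
	}
}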
|
|
|
|
|
2023-06-29 01:06:05 +08:00
|
|
|
static __cold void io_fallback_tw(struct io_uring_task *tctx, bool sync)
|
2023-06-28 01:57:53 +08:00
|
|
|
{
|
|
|
|
struct llist_node *node = llist_del_all(&tctx->task_list);
|
2023-06-29 01:06:05 +08:00
|
|
|
struct io_ring_ctx *last_ctx = NULL;
|
2023-06-28 01:57:53 +08:00
|
|
|
struct io_kiocb *req;
|
|
|
|
|
|
|
|
while (node) {
|
|
|
|
req = container_of(node, struct io_kiocb, io_task_work.node);
|
|
|
|
node = node->next;
|
2023-06-29 01:06:05 +08:00
|
|
|
if (sync && last_ctx != req->ctx) {
|
|
|
|
if (last_ctx) {
|
|
|
|
flush_delayed_work(&last_ctx->fallback_work);
|
|
|
|
percpu_ref_put(&last_ctx->refs);
|
|
|
|
}
|
|
|
|
last_ctx = req->ctx;
|
|
|
|
percpu_ref_get(&last_ctx->refs);
|
|
|
|
}
|
2023-06-28 01:57:53 +08:00
|
|
|
if (llist_add(&req->io_task_work.node,
|
|
|
|
&req->ctx->fallback_llist))
|
|
|
|
schedule_delayed_work(&req->ctx->fallback_work, 1);
|
|
|
|
}
|
2023-06-29 01:06:05 +08:00
|
|
|
|
|
|
|
if (last_ctx) {
|
|
|
|
flush_delayed_work(&last_ctx->fallback_work);
|
|
|
|
percpu_ref_put(&last_ctx->refs);
|
|
|
|
}
|
2023-06-28 01:57:53 +08:00
|
|
|
}
|
|
|
|
|
2022-05-26 01:01:04 +08:00
|
|
|
void tctx_task_work(struct callback_head *cb)
|
2020-06-26 05:39:59 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
struct io_tw_state ts = {};
|
2021-06-18 01:14:07 +08:00
|
|
|
struct io_ring_ctx *ctx = NULL;
|
2021-06-18 01:14:06 +08:00
|
|
|
struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
|
|
|
|
task_work);
|
2022-06-22 21:40:25 +08:00
|
|
|
struct llist_node fake = {};
|
2022-12-07 11:53:33 +08:00
|
|
|
struct llist_node *node;
|
2023-01-23 22:37:18 +08:00
|
|
|
unsigned int loops = 0;
|
|
|
|
unsigned int count = 0;
|
2022-06-22 21:40:25 +08:00
|
|
|
|
2022-12-07 11:53:33 +08:00
|
|
|
if (unlikely(current->flags & PF_EXITING)) {
|
2023-06-29 01:06:05 +08:00
|
|
|
io_fallback_tw(tctx, true);
|
2022-12-07 11:53:33 +08:00
|
|
|
return;
|
|
|
|
}
|
2022-06-22 21:40:25 +08:00
|
|
|
|
2023-01-23 22:37:18 +08:00
|
|
|
do {
|
2022-06-22 21:40:28 +08:00
|
|
|
loops++;
|
2022-06-22 21:40:25 +08:00
|
|
|
node = io_llist_xchg(&tctx->task_list, &fake);
|
2023-03-27 23:38:15 +08:00
|
|
|
count += handle_tw_list(node, &ctx, &ts, &fake);
|
2023-01-23 22:37:19 +08:00
|
|
|
|
|
|
|
/* skip expensive cmpxchg if there are items in the list */
|
|
|
|
if (READ_ONCE(tctx->task_list.first) != &fake)
|
|
|
|
continue;
|
2023-03-27 23:38:15 +08:00
|
|
|
if (ts.locked && !wq_list_empty(&ctx->submit_state.compl_reqs)) {
|
2023-01-23 22:37:19 +08:00
|
|
|
io_submit_flush_completions(ctx);
|
|
|
|
if (READ_ONCE(tctx->task_list.first) != &fake)
|
|
|
|
continue;
|
|
|
|
}
|
2022-06-22 21:40:25 +08:00
|
|
|
node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
|
2023-01-23 22:37:18 +08:00
|
|
|
} while (node != &fake);
|
2021-06-18 01:14:07 +08:00
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
ctx_flush_and_put(ctx, &ts);
|
2022-01-09 08:53:22 +08:00
|
|
|
|
2023-02-17 23:27:23 +08:00
|
|
|
/* relaxed read is enough as only the task itself sets ->in_cancel */
|
|
|
|
if (unlikely(atomic_read(&tctx->in_cancel)))
|
2022-01-09 08:53:22 +08:00
|
|
|
io_uring_drop_tctx_refs(current);
|
2022-06-22 21:40:28 +08:00
|
|
|
|
|
|
|
trace_io_uring_task_work_run(tctx, count, loops);
|
2021-02-10 08:03:20 +08:00
|
|
|
}
|
|
|
|
|
2023-06-23 19:23:26 +08:00
|
|
|
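/*
 * Queue a request for DEFER_TASKRUN processing by adding it to the ring's
 * work_llist. req->nr_tw tracks how many entries are pending so that lazy
 * wakeups (IOU_F_TWQ_LAZY_WAKE) can be batched; linked requests always
 * force a wakeup. The first addition marks the SQ ring with
 * IORING_SQ_TASKRUN and signals a registered eventfd, and the submitter
 * task is only woken once enough work has accumulated for any waiter
 * recorded in ctx->cq_wait_nr.
 */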
static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
|
2022-08-30 20:50:10 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2023-04-06 21:20:12 +08:00
|
|
|
unsigned nr_wait, nr_tw, nr_tw_prev;
|
2023-04-06 21:20:11 +08:00
|
|
|
struct llist_node *first;
|
2022-08-30 20:50:10 +08:00
|
|
|
|
2023-04-06 21:20:12 +08:00
|
|
|
if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK))
|
|
|
|
flags &= ~IOU_F_TWQ_LAZY_WAKE;
|
2023-02-17 23:22:17 +08:00
|
|
|
|
2023-04-06 21:20:11 +08:00
|
|
|
first = READ_ONCE(ctx->work_llist.first);
|
|
|
|
do {
|
2023-04-06 21:20:12 +08:00
|
|
|
nr_tw_prev = 0;
|
|
|
|
if (first) {
|
|
|
|
struct io_kiocb *first_req = container_of(first,
|
|
|
|
struct io_kiocb,
|
|
|
|
io_task_work.node);
|
|
|
|
/*
|
|
|
|
* Might be executed at any moment, rely on
|
|
|
|
* SLAB_TYPESAFE_BY_RCU to keep it alive.
|
|
|
|
*/
|
|
|
|
nr_tw_prev = READ_ONCE(first_req->nr_tw);
|
|
|
|
}
|
|
|
|
nr_tw = nr_tw_prev + 1;
|
|
|
|
/* Large enough to fail the nr_wait comparison below */
|
|
|
|
if (!(flags & IOU_F_TWQ_LAZY_WAKE))
|
2024-01-17 08:57:26 +08:00
|
|
|
nr_tw = INT_MAX;
|
2023-04-06 21:20:12 +08:00
|
|
|
|
|
|
|
req->nr_tw = nr_tw;
|
2023-04-06 21:20:11 +08:00
|
|
|
req->io_task_work.node.next = first;
|
|
|
|
} while (!try_cmpxchg(&ctx->work_llist.first, &first,
|
|
|
|
&req->io_task_work.node));
|
|
|
|
|
2023-04-06 21:20:12 +08:00
|
|
|
if (!first) {
|
|
|
|
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
|
|
|
|
atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
|
|
|
|
if (ctx->has_evfd)
|
|
|
|
io_eventfd_signal(ctx);
|
2022-08-30 20:50:10 +08:00
|
|
|
}
|
|
|
|
|
2023-04-06 21:20:12 +08:00
|
|
|
nr_wait = atomic_read(&ctx->cq_wait_nr);
|
|
|
|
/* no one is waiting */
|
|
|
|
if (!nr_wait)
|
|
|
|
return;
|
|
|
|
/* either not enough or the previous add has already woken it up */
|
|
|
|
if (nr_wait > nr_tw || nr_tw_prev >= nr_wait)
|
|
|
|
return;
|
|
|
|
/* pairs with set_current_state() in io_cqring_wait() */
|
|
|
|
smp_mb__after_atomic();
|
|
|
|
wake_up_state(ctx->submitter_task, TASK_INTERRUPTIBLE);
|
2022-08-30 20:50:10 +08:00
|
|
|
}
|
|
|
|
|
2023-06-23 19:23:26 +08:00
|
|
|
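/*
 * Queue a request on the owning task's task_work list. If work was
 * already pending there is nothing more to do; otherwise notify the task
 * via task_work_add(), falling back to the ring's fallback workqueue if
 * the task can no longer accept task_work.
 */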
static void io_req_normal_work_add(struct io_kiocb *req)
|
2021-02-10 08:03:20 +08:00
|
|
|
{
|
2022-06-22 21:40:22 +08:00
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
2022-04-26 09:49:02 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-02-10 08:03:20 +08:00
|
|
|
|
|
|
|
/* task_work already pending, we're done */
|
2022-08-30 20:50:07 +08:00
|
|
|
if (!llist_add(&req->io_task_work.node, &tctx->task_list))
|
2021-07-01 20:26:05 +08:00
|
|
|
return;
|
2021-02-10 08:03:20 +08:00
|
|
|
|
2022-04-26 09:49:04 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
|
|
|
|
atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
|
|
|
|
|
2022-05-21 23:17:05 +08:00
|
|
|
if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
|
2021-07-01 20:26:05 +08:00
|
|
|
return;
|
2021-08-09 20:04:06 +08:00
|
|
|
|
2023-06-29 01:06:05 +08:00
|
|
|
io_fallback_tw(tctx, false);
|
2022-08-30 20:50:10 +08:00
|
|
|
}
|
|
|
|
|
2023-06-23 19:23:26 +08:00
|
|
|
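/*
 * Entry point for queueing request task_work: rings set up with
 * IORING_SETUP_DEFER_TASKRUN use the ring-local list (under RCU, see
 * io_req_local_work_add()), everything else goes through the task's
 * regular task_work.
 */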
void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
|
|
|
|
{
|
|
|
|
if (req->ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
|
|
|
|
rcu_read_lock();
|
|
|
|
io_req_local_work_add(req, flags);
|
|
|
|
rcu_read_unlock();
|
|
|
|
} else {
|
|
|
|
io_req_normal_work_add(req);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-08-30 20:50:10 +08:00
|
|
|
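/*
 * Drain the ring-local work list and requeue every entry as regular
 * task_work on its owning task.
 */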
static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct llist_node *node;
|
|
|
|
|
|
|
|
node = llist_del_all(&ctx->work_llist);
|
|
|
|
while (node) {
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
io_task_work.node);
|
|
|
|
|
|
|
|
node = node->next;
|
2023-06-23 19:23:26 +08:00
|
|
|
io_req_normal_work_add(req);
|
2022-08-30 20:50:10 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
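/*
 * Run all pending ring-local (DEFER_TASKRUN) work. Only the submitter
 * task may do this. The lockless list comes out newest-first, so it is
 * reversed before the handlers are invoked in order. Loops until the list
 * stays empty, flushing batched completions when the uring_lock is held,
 * and returns the number of entries processed.
 */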
static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts)
|
2022-08-30 20:50:10 +08:00
|
|
|
{
|
|
|
|
struct llist_node *node;
|
2023-01-09 22:46:13 +08:00
|
|
|
unsigned int loops = 0;
|
2023-01-05 19:22:23 +08:00
|
|
|
int ret = 0;
|
2022-08-30 20:50:10 +08:00
|
|
|
|
2023-01-05 19:22:23 +08:00
|
|
|
if (WARN_ON_ONCE(ctx->submitter_task != current))
|
2022-08-30 20:50:10 +08:00
|
|
|
return -EEXIST;
|
2023-01-09 22:46:13 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
|
|
|
|
atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
|
2022-08-30 20:50:10 +08:00
|
|
|
again:
|
2023-05-19 23:51:31 +08:00
|
|
|
/*
|
|
|
|
* llists are in reverse order, flip it back the right way before
|
|
|
|
* running the pending items.
|
|
|
|
*/
|
|
|
|
node = llist_reverse_order(io_llist_xchg(&ctx->work_llist, NULL));
|
2023-01-09 22:46:13 +08:00
|
|
|
while (node) {
|
2022-08-30 20:50:10 +08:00
|
|
|
struct llist_node *next = node->next;
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
io_task_work.node);
|
|
|
|
prefetch(container_of(next, struct io_kiocb, io_task_work.node));
|
2023-06-02 22:41:46 +08:00
|
|
|
INDIRECT_CALL_2(req->io_task_work.func,
|
|
|
|
io_poll_task_func, io_req_rw_complete,
|
|
|
|
req, ts);
|
2022-08-30 20:50:10 +08:00
|
|
|
ret++;
|
|
|
|
node = next;
|
|
|
|
}
|
2023-01-09 22:46:13 +08:00
|
|
|
loops++;
|
2022-08-30 20:50:10 +08:00
|
|
|
|
2023-01-09 22:46:13 +08:00
|
|
|
if (!llist_empty(&ctx->work_llist))
|
2022-08-30 20:50:10 +08:00
|
|
|
goto again;
|
2023-03-27 23:38:15 +08:00
|
|
|
if (ts->locked) {
|
2022-08-30 20:50:10 +08:00
|
|
|
io_submit_flush_completions(ctx);
|
2023-01-17 00:48:57 +08:00
|
|
|
if (!llist_empty(&ctx->work_llist))
|
|
|
|
goto again;
|
|
|
|
}
|
2022-08-30 20:50:13 +08:00
|
|
|
trace_io_uring_local_work_run(ctx, ret, loops);
|
2022-08-30 20:50:10 +08:00
|
|
|
return ret;
|
2022-09-04 00:09:22 +08:00
|
|
|
}
|
|
|
|
|
2023-01-09 22:46:07 +08:00
|
|
|
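/*
 * Run ring-local work with the uring_lock already held by the caller;
 * returns the number of entries processed, or 0 if nothing was queued.
 */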
static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
struct io_tw_state ts = { .locked = true, };
|
2023-01-09 22:46:07 +08:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (llist_empty(&ctx->work_llist))
|
|
|
|
return 0;
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
ret = __io_run_local_work(ctx, &ts);
|
2023-01-09 22:46:07 +08:00
|
|
|
/* shouldn't happen! */
|
2023-03-27 23:38:15 +08:00
|
|
|
if (WARN_ON_ONCE(!ts.locked))
|
2023-01-09 22:46:07 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2023-01-09 22:46:06 +08:00
|
|
|
static int io_run_local_work(struct io_ring_ctx *ctx)
|
2022-09-04 00:09:22 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
struct io_tw_state ts = {};
|
2022-09-04 00:09:22 +08:00
|
|
|
int ret;
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
ts.locked = mutex_trylock(&ctx->uring_lock);
|
|
|
|
ret = __io_run_local_work(ctx, &ts);
|
|
|
|
if (ts.locked)
|
2022-09-04 00:09:22 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
return ret;
|
2022-08-30 20:50:10 +08:00
|
|
|
}
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
static void io_req_task_cancel(struct io_kiocb *req, struct io_tw_state *ts)
|
2020-06-26 05:39:59 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
io_tw_lock(req->ctx, ts);
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_defer_failed(req, req->cqe.res);
|
2020-06-26 05:39:59 +08:00
|
|
|
}
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
void io_req_task_submit(struct io_kiocb *req, struct io_tw_state *ts)
|
2020-06-26 05:39:59 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
io_tw_lock(req->ctx, ts);
|
2021-08-19 23:41:42 +08:00
|
|
|
/* req->task == current here, checking PF_EXITING is safe */
|
2023-01-27 21:52:24 +08:00
|
|
|
if (unlikely(req->task->flags & PF_EXITING))
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_defer_failed(req, -EFAULT);
|
2023-01-27 21:52:24 +08:00
|
|
|
else if (req->flags & REQ_F_FORCE_ASYNC)
|
2023-03-27 23:38:15 +08:00
|
|
|
io_queue_iowq(req, ts);
|
2023-01-27 21:52:24 +08:00
|
|
|
else
|
|
|
|
io_queue_sqe(req);
|
2020-06-26 05:39:59 +08:00
|
|
|
}
|
|
|
|
|
2022-05-25 22:57:27 +08:00
|
|
|
void io_req_task_queue_fail(struct io_kiocb *req, int ret)
|
2020-06-26 05:39:59 +08:00
|
|
|
{
|
2022-05-25 05:21:00 +08:00
|
|
|
io_req_set_res(req, ret, 0);
|
2021-07-01 04:54:04 +08:00
|
|
|
req->io_task_work.func = io_req_task_cancel;
|
2022-05-21 23:17:05 +08:00
|
|
|
io_req_task_work_add(req);
|
2020-06-26 05:39:59 +08:00
|
|
|
}
|
|
|
|
|
2022-06-13 21:27:03 +08:00
|
|
|
void io_req_task_queue(struct io_kiocb *req)
|
2021-02-19 06:32:52 +08:00
|
|
|
{
|
2021-07-01 04:54:04 +08:00
|
|
|
req->io_task_work.func = io_req_task_submit;
|
2022-05-21 23:17:05 +08:00
|
|
|
io_req_task_work_add(req);
|
2021-02-19 06:32:52 +08:00
|
|
|
}
|
|
|
|
|
2022-05-25 22:57:27 +08:00
|
|
|
void io_queue_next(struct io_kiocb *req)
|
2019-11-09 11:00:08 +08:00
|
|
|
{
|
2020-06-29 18:13:00 +08:00
|
|
|
struct io_kiocb *nxt = io_req_find_next(req);
|
2019-11-22 04:21:01 +08:00
|
|
|
|
|
|
|
if (nxt)
|
2020-06-27 19:04:55 +08:00
|
|
|
io_req_task_queue(nxt);
|
2019-11-09 11:00:08 +08:00
|
|
|
}
|
|
|
|
|
2023-08-25 06:53:29 +08:00
|
|
|
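/*
 * Release a batch of completed requests from the completion list: drop
 * any remaining references, recycle or free apoll data, queue dependent
 * links, and return each request to the ctx request cache. Called with
 * the uring_lock held.
 */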
static void io_free_batch_list(struct io_ring_ctx *ctx,
|
|
|
|
struct io_wq_work_node *node)
|
2021-09-25 04:59:50 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2020-07-18 16:32:52 +08:00
|
|
|
{
|
2021-09-25 04:59:50 +08:00
|
|
|
do {
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
comp_list);
|
2020-06-28 17:52:33 +08:00
|
|
|
|
2022-03-22 06:02:22 +08:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
|
|
|
|
if (req->flags & REQ_F_REFCOUNT) {
|
|
|
|
node = req->comp_list.next;
|
|
|
|
if (!req_ref_put_and_test(req))
|
|
|
|
continue;
|
|
|
|
}
|
2022-03-22 06:02:23 +08:00
|
|
|
if ((req->flags & REQ_F_POLLED) && req->apoll) {
|
|
|
|
struct async_poll *apoll = req->apoll;
|
|
|
|
|
|
|
|
if (apoll->double_poll)
|
|
|
|
kfree(apoll->double_poll);
|
2022-07-08 04:20:54 +08:00
|
|
|
if (!io_alloc_cache_put(&ctx->apoll_cache, &apoll->cache))
|
|
|
|
kfree(apoll);
|
2022-03-22 06:02:23 +08:00
|
|
|
req->flags &= ~REQ_F_POLLED;
|
|
|
|
}
|
2022-04-16 05:08:29 +08:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
2022-03-22 06:02:24 +08:00
|
|
|
io_queue_next(req);
|
2022-03-22 06:02:22 +08:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
|
|
|
|
io_clean_op(req);
|
2021-10-05 03:02:55 +08:00
|
|
|
}
|
2023-07-08 01:14:40 +08:00
|
|
|
io_put_file(req);
|
2020-06-28 17:52:33 +08:00
|
|
|
|
2021-10-10 06:14:41 +08:00
|
|
|
io_req_put_rsrc_locked(req, ctx);
|
2020-07-18 16:32:52 +08:00
|
|
|
|
2023-06-23 19:23:25 +08:00
|
|
|
io_put_task(req->task);
|
2021-10-05 03:02:55 +08:00
|
|
|
node = req->comp_list.next;
|
2022-04-12 22:09:48 +08:00
|
|
|
io_req_add_to_cache(req, ctx);
|
2021-09-25 04:59:50 +08:00
|
|
|
} while (node);
|
2020-03-04 02:33:13 +08:00
|
|
|
}
|
|
|
|
|
2023-08-25 06:53:29 +08:00
|
|
|
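/*
 * Flush the deferred completion list: post a CQE for every request that
 * hasn't opted out via REQ_F_CQE_SKIP (falling back to the overflow list
 * if the CQ ring is full), then free the whole batch of requests.
 */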
void __io_submit_flush_completions(struct io_ring_ctx *ctx)
|
2021-08-13 02:48:34 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-02-10 08:03:14 +08:00
|
|
|
{
|
2021-08-10 03:18:11 +08:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2023-03-10 00:51:13 +08:00
|
|
|
struct io_wq_work_node *node;
|
2021-02-10 08:03:14 +08:00
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
__io_cq_lock(ctx);
|
2022-11-24 17:35:54 +08:00
|
|
|
/* must come first to preserve CQE ordering in failure cases */
|
|
|
|
if (state->cqes_count)
|
|
|
|
__io_flush_post_cqes(ctx);
|
2023-03-10 00:51:13 +08:00
|
|
|
__wq_list_for_each(node, &state->compl_reqs) {
|
2022-06-19 19:26:08 +08:00
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
comp_list);
|
2021-11-10 23:49:33 +08:00
|
|
|
|
2022-12-07 23:50:01 +08:00
|
|
|
if (!(req->flags & REQ_F_CQE_SKIP) &&
|
2023-08-11 20:53:43 +08:00
|
|
|
unlikely(!io_fill_cqe_req(ctx, req))) {
|
2023-09-07 20:50:08 +08:00
|
|
|
if (ctx->lockless_cq) {
|
2022-12-07 23:50:01 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
io_req_cqe_overflow(req);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
} else {
|
|
|
|
io_req_cqe_overflow(req);
|
|
|
|
}
|
|
|
|
}
|
2021-02-10 08:03:14 +08:00
|
|
|
}
|
2023-06-23 19:23:31 +08:00
|
|
|
__io_cq_unlock_post(ctx);
|
2022-06-19 19:26:08 +08:00
|
|
|
|
2022-11-24 17:35:54 +08:00
|
|
|
if (!wq_list_empty(&ctx->submit_state.compl_reqs)) {
|
|
|
|
io_free_batch_list(ctx, state->compl_reqs.first);
|
|
|
|
INIT_WQ_LIST(&state->compl_reqs);
|
|
|
|
}
|
2020-03-04 02:33:13 +08:00
|
|
|
}
|
|
|
|
|
2021-01-05 04:36:36 +08:00
|
|
|
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
|
2019-08-21 01:03:11 +08:00
|
|
|
{
|
|
|
|
/* See comment at the top of this file */
|
|
|
|
smp_rmb();
|
2020-12-17 08:24:37 +08:00
|
|
|
return __io_cqring_events(ctx);
|
2019-08-21 01:03:11 +08:00
|
|
|
}
|
|
|
|
|
2019-01-09 23:59:42 +08:00
|
|
|
/*
|
|
|
|
* We can't just wait for polled events to come to us, we have to actively
|
|
|
|
* find and complete them.
|
|
|
|
*/
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
|
2019-01-09 23:59:42 +08:00
|
|
|
{
|
|
|
|
if (!(ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-09-25 04:59:49 +08:00
|
|
|
while (!wq_list_empty(&ctx->iopoll_list)) {
|
2020-07-07 21:36:22 +08:00
|
|
|
/* let it sleep and repeat later if can't complete a request */
|
2021-09-25 04:59:43 +08:00
|
|
|
if (io_do_iopoll(ctx, true) == 0)
|
2020-07-07 21:36:22 +08:00
|
|
|
break;
|
2019-08-22 12:19:11 +08:00
|
|
|
/*
|
|
|
|
* Ensure we allow local-to-the-cpu processing to take place;
|
|
|
|
* in this case we need to ensure that we reap all events.
|
2020-07-06 22:59:31 +08:00
|
|
|
* Also let task_work, etc. progress by releasing the mutex
|
2019-08-22 12:19:11 +08:00
|
|
|
*/
|
2020-07-06 22:59:31 +08:00
|
|
|
if (need_resched()) {
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
cond_resched();
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 23:59:42 +08:00
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2020-07-07 21:36:21 +08:00
|
|
|
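/*
 * Poll for completions on an IOPOLL ring until at least @min events have
 * been found or an error/signal occurs. Drops the uring_lock as needed to
 * let task_work and punted submissions make progress, and bails out early
 * if the CQ has dropped entries (-EBADR) or a signal is pending (-EINTR).
 */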
static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
|
2019-01-09 23:59:42 +08:00
|
|
|
{
|
2020-07-07 21:36:21 +08:00
|
|
|
unsigned int nr_events = 0;
|
2022-04-21 17:13:44 +08:00
|
|
|
unsigned long check_cq;
|
2019-08-20 02:15:59 +08:00
|
|
|
|
2022-09-08 23:56:52 +08:00
|
|
|
if (!io_allowed_run_tw(ctx))
|
|
|
|
return -EEXIST;
|
|
|
|
|
2022-06-16 00:33:55 +08:00
|
|
|
check_cq = READ_ONCE(ctx->check_cq);
|
|
|
|
if (unlikely(check_cq)) {
|
|
|
|
if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
|
2022-12-07 11:53:28 +08:00
|
|
|
__io_cqring_overflow_flush(ctx);
|
2022-06-16 00:33:55 +08:00
|
|
|
/*
|
|
|
|
* Similarly do not spin if we have not informed the user of any
|
|
|
|
* dropped CQE.
|
|
|
|
*/
|
|
|
|
if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT))
|
|
|
|
return -EBADR;
|
|
|
|
}
|
2021-04-13 09:58:46 +08:00
|
|
|
/*
|
|
|
|
* Don't enter poll loop if we already have events pending.
|
|
|
|
* If we do, we can potentially be spinning for commands that
|
|
|
|
* already triggered a CQE (e.g. in error).
|
|
|
|
*/
|
|
|
|
if (io_cqring_events(ctx))
|
2022-03-22 22:07:58 +08:00
|
|
|
return 0;
|
2022-04-21 17:13:44 +08:00
|
|
|
|
2019-01-09 23:59:42 +08:00
|
|
|
do {
|
2023-08-10 00:03:00 +08:00
|
|
|
int ret = 0;
|
|
|
|
|
2019-08-20 02:15:59 +08:00
|
|
|
/*
|
|
|
|
* If a submit got punted to a workqueue, we can have the
|
|
|
|
* application entering polling for a command before it gets
|
|
|
|
* issued. That app will hold the uring_lock for the duration
|
|
|
|
* of the poll right here, so we need to take a breather every
|
|
|
|
* now and then to ensure that the issue has a chance to add
|
|
|
|
* the poll to the issued list. Otherwise we can spin here
|
|
|
|
* forever, while the workqueue is stuck trying to acquire the
|
|
|
|
* very same mutex.
|
|
|
|
*/
|
2022-09-03 23:52:01 +08:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list) ||
|
|
|
|
io_task_work_pending(ctx)) {
|
2021-07-08 20:37:06 +08:00
|
|
|
u32 tail = ctx->cached_cq_tail;
|
|
|
|
|
2022-10-27 22:44:28 +08:00
|
|
|
(void) io_run_local_work_locked(ctx);
|
2019-01-09 23:59:42 +08:00
|
|
|
|
2022-09-03 23:52:01 +08:00
|
|
|
if (task_work_pending(current) ||
|
|
|
|
wq_list_empty(&ctx->iopoll_list)) {
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2022-09-08 23:56:54 +08:00
|
|
|
io_run_task_work();
|
2022-09-03 23:52:01 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
2021-07-08 20:37:06 +08:00
|
|
|
/* some requests don't go through iopoll_list */
|
|
|
|
if (tail != ctx->cached_cq_tail ||
|
2021-09-25 04:59:49 +08:00
|
|
|
wq_list_empty(&ctx->iopoll_list))
|
2021-04-13 09:58:45 +08:00
|
|
|
break;
|
2019-08-20 02:15:59 +08:00
|
|
|
}
|
2021-09-25 04:59:43 +08:00
|
|
|
ret = io_do_iopoll(ctx, !min);
|
2023-08-10 00:03:00 +08:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
2023-08-09 23:20:21 +08:00
|
|
|
|
|
|
|
if (task_sigpending(current))
|
|
|
|
return -EINTR;
|
2023-08-10 00:03:00 +08:00
|
|
|
if (need_resched())
|
2021-09-25 04:59:43 +08:00
|
|
|
break;
|
2022-03-22 22:07:58 +08:00
|
|
|
|
2021-09-25 04:59:43 +08:00
|
|
|
nr_events += ret;
|
2023-08-10 00:03:00 +08:00
|
|
|
} while (nr_events < min);
|
2022-03-22 22:07:58 +08:00
|
|
|
|
2023-08-10 00:03:00 +08:00
|
|
|
return 0;
|
2019-01-09 23:59:42 +08:00
|
|
|
}
|
2022-06-16 17:21:59 +08:00
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
void io_req_task_complete(struct io_kiocb *req, struct io_tw_state *ts)
|
2021-08-11 05:15:25 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
if (ts->locked)
|
2022-06-20 08:26:00 +08:00
|
|
|
io_req_complete_defer(req);
|
2022-06-16 17:21:59 +08:00
|
|
|
else
|
2022-11-25 18:34:10 +08:00
|
|
|
io_req_complete_post(req, IO_URING_F_UNLOCKED);
|
2021-08-11 05:15:25 +08:00
|
|
|
}
|
|
|
|
|
2019-01-09 23:59:42 +08:00
|
|
|
/*
|
|
|
|
* After the iocb has been issued, it's safe to be found on the poll list.
|
|
|
|
* Adding the kiocb to the list AFTER submission ensures that we don't
|
2021-04-13 09:58:46 +08:00
|
|
|
* find it from an io_do_iopoll() thread before the issuer is done
|
2019-01-09 23:59:42 +08:00
|
|
|
* accessing the kiocb cookie.
|
|
|
|
*/
|
2021-10-16 00:09:12 +08:00
|
|
|
static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-09 23:59:42 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-10-18 21:34:31 +08:00
|
|
|
const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
|
2021-06-14 09:36:14 +08:00
|
|
|
|
|
|
|
/* workqueue context doesn't hold uring_lock, grab it now */
|
2021-10-18 21:34:31 +08:00
|
|
|
if (unlikely(needs_lock))
|
2021-06-14 09:36:14 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2019-01-09 23:59:42 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Track whether we have multiple files in our lists. This will impact
|
|
|
|
* how we do polling eventually, not spinning if we're on potentially
|
|
|
|
* different devices.
|
|
|
|
*/
|
2021-09-25 04:59:49 +08:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list)) {
|
2021-06-28 05:37:30 +08:00
|
|
|
ctx->poll_multi_queue = false;
|
|
|
|
} else if (!ctx->poll_multi_queue) {
|
2019-01-09 23:59:42 +08:00
|
|
|
struct io_kiocb *list_req;
|
|
|
|
|
2021-09-25 04:59:49 +08:00
|
|
|
list_req = container_of(ctx->iopoll_list.first, struct io_kiocb,
|
|
|
|
comp_list);
|
2021-10-12 19:12:14 +08:00
|
|
|
if (list_req->file != req->file)
|
2021-06-28 05:37:30 +08:00
|
|
|
ctx->poll_multi_queue = true;
|
2019-01-09 23:59:42 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For fast devices, IO may have already completed. If it has, add
|
|
|
|
* it to the front so we find it first.
|
|
|
|
*/
|
2020-06-11 23:39:36 +08:00
|
|
|
if (READ_ONCE(req->iopoll_completed))
|
2021-09-25 04:59:49 +08:00
|
|
|
wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
|
2019-01-09 23:59:42 +08:00
|
|
|
else
|
2021-09-25 04:59:49 +08:00
|
|
|
wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
|
2020-02-25 22:12:08 +08:00
|
|
|
|
2021-10-18 21:34:31 +08:00
|
|
|
if (unlikely(needs_lock)) {
|
2021-06-14 09:36:14 +08:00
|
|
|
/*
|
|
|
|
* If IORING_SETUP_SQPOLL is enabled, sqes are either handled
|
|
|
|
* in sq thread task context or in io worker task context. If
|
|
|
|
* current task context is sq thread, we don't need to check
|
|
|
|
* whether we should wake up the sq thread.
|
|
|
|
*/
|
|
|
|
if ((ctx->flags & IORING_SETUP_SQPOLL) &&
|
|
|
|
wq_has_sleeper(&ctx->sq_data->wait))
|
|
|
|
wake_up(&ctx->sq_data->wait);
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 23:59:42 +08:00
|
|
|
}
|
|
|
|
|
2022-05-26 00:40:19 +08:00
|
|
|
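/*
 * Derive per-request flags from the target file: REQ_F_ISREG for regular
 * files and REQ_F_SUPPORT_NOWAIT when the file is opened non-blocking or
 * supports FMODE_NOWAIT.
 */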
unsigned int io_file_get_flags(struct file *file)
|
2021-10-17 07:07:10 +08:00
|
|
|
{
|
|
|
|
unsigned int res = 0;
|
2020-04-29 03:15:06 +08:00
|
|
|
|
2023-06-20 19:32:29 +08:00
|
|
|
if (S_ISREG(file_inode(file)->i_mode))
|
2023-06-20 19:32:32 +08:00
|
|
|
res |= REQ_F_ISREG;
|
2023-06-20 19:32:28 +08:00
|
|
|
if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT))
|
2023-06-20 19:32:32 +08:00
|
|
|
res |= REQ_F_SUPPORT_NOWAIT;
|
2021-10-17 07:07:10 +08:00
|
|
|
return res;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2022-05-25 19:59:19 +08:00
|
|
|
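/*
 * Allocate the opcode-specific async data for a request. Note the
 * inverted return value: false means the allocation succeeded and
 * REQ_F_ASYNC_DATA is set, true means the allocation failed.
 */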
bool io_alloc_async_data(struct io_kiocb *req)
|
2020-03-27 15:36:52 +08:00
|
|
|
{
|
2023-01-12 22:44:11 +08:00
|
|
|
WARN_ON_ONCE(!io_cold_defs[req->opcode].async_size);
|
|
|
|
req->async_data = kmalloc(io_cold_defs[req->opcode].async_size, GFP_KERNEL);
|
2021-10-05 03:02:56 +08:00
|
|
|
if (req->async_data) {
|
|
|
|
req->flags |= REQ_F_ASYNC_DATA;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
return true;
|
2020-03-27 15:36:52 +08:00
|
|
|
}
|
|
|
|
|
2022-06-13 21:27:03 +08:00
|
|
|
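/*
 * Prepare a request for async (deferred) execution: assign a normal file
 * early if needed, and invoke the opcode's prep_async handler after
 * allocating its async data (unless the opcode manages that allocation
 * itself). Returns 0 if the opcode has no async prep to do.
 */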
int io_req_prep_async(struct io_kiocb *req)
|
2019-12-03 02:03:47 +08:00
|
|
|
{
|
2023-01-12 22:44:11 +08:00
|
|
|
const struct io_cold_def *cdef = &io_cold_defs[req->opcode];
|
2023-01-12 22:44:10 +08:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2022-05-24 06:56:21 +08:00
|
|
|
|
|
|
|
/* assign early for deferred execution for non-fixed file */
|
2023-02-28 12:54:59 +08:00
|
|
|
if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE) && !req->file)
|
2022-05-24 06:56:21 +08:00
|
|
|
req->file = io_file_get_normal(req, req->cqe.fd);
|
2023-01-12 22:44:11 +08:00
|
|
|
if (!cdef->prep_async)
|
2022-05-24 06:56:21 +08:00
|
|
|
return 0;
|
|
|
|
if (WARN_ON_ONCE(req_has_async_data(req)))
|
|
|
|
return -EFAULT;
|
2023-01-12 22:44:11 +08:00
|
|
|
if (!def->manual_alloc) {
|
2022-08-24 20:07:42 +08:00
|
|
|
if (io_alloc_async_data(req))
|
|
|
|
return -EAGAIN;
|
|
|
|
}
|
2023-01-12 22:44:11 +08:00
|
|
|
return cdef->prep_async(req);
|
2020-10-01 03:57:55 +08:00
|
|
|
}
|
|
|
|
|
2020-07-14 04:37:15 +08:00
|
|
|
static u32 io_get_sequence(struct io_kiocb *req)
|
|
|
|
{
|
2021-06-18 01:14:05 +08:00
|
|
|
u32 seq = req->ctx->cached_sq_head;
|
2022-03-25 19:52:16 +08:00
|
|
|
struct io_kiocb *cur;
|
2020-07-14 04:37:15 +08:00
|
|
|
|
2021-06-18 01:14:05 +08:00
|
|
|
/* need original cached_sq_head, but it was increased for each req */
|
2022-03-25 19:52:16 +08:00
|
|
|
io_for_each_link(cur, req)
|
2021-06-18 01:14:05 +08:00
|
|
|
seq--;
|
|
|
|
return seq;
|
2020-07-14 04:37:15 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
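/*
 * Handle a drain-marked request: if nothing earlier is still pending, the
 * request is queued for execution right away and draining is switched
 * off. Otherwise a defer entry recording the request's sequence number is
 * appended to ctx->defer_list so it only runs once all prior requests
 * have completed.
 */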
static __cold void io_drain_req(struct io_kiocb *req)
|
2022-11-23 19:33:37 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-04-07 11:51:27 +08:00
|
|
|
{
|
2019-11-08 23:09:12 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-07-14 04:37:14 +08:00
|
|
|
struct io_defer_entry *de;
|
2019-12-03 02:03:47 +08:00
|
|
|
int ret;
|
2021-10-02 01:07:01 +08:00
|
|
|
u32 seq = io_get_sequence(req);
|
2021-06-15 23:47:57 +08:00
|
|
|
|
2019-11-13 18:06:25 +08:00
|
|
|
/* Still need defer if there is a pending req in the defer list. */
|
2021-11-25 17:21:02 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-09-25 05:00:04 +08:00
|
|
|
if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
|
2021-11-25 17:21:02 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-10-02 01:07:01 +08:00
|
|
|
queue:
|
2021-06-15 23:47:56 +08:00
|
|
|
ctx->drain_active = false;
|
2021-10-02 01:07:01 +08:00
|
|
|
io_req_task_queue(req);
|
|
|
|
return;
|
2021-06-15 23:47:56 +08:00
|
|
|
}
|
2021-11-25 17:21:02 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-07-14 04:37:15 +08:00
|
|
|
|
2020-06-30 00:18:43 +08:00
|
|
|
io_prep_async_link(req);
|
2020-07-14 04:37:14 +08:00
|
|
|
de = kmalloc(sizeof(*de), GFP_KERNEL);
|
2021-06-15 06:37:30 +08:00
|
|
|
if (!de) {
|
2021-07-12 05:41:13 +08:00
|
|
|
ret = -ENOMEM;
|
2023-01-27 18:59:11 +08:00
|
|
|
io_req_defer_failed(req, ret);
|
|
|
|
return;
|
2021-06-15 06:37:30 +08:00
|
|
|
}
|
2019-12-05 02:08:05 +08:00
|
|
|
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-07-14 04:37:15 +08:00
|
|
|
if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-07-14 04:37:14 +08:00
|
|
|
kfree(de);
|
2021-10-02 01:07:01 +08:00
|
|
|
goto queue;
|
2019-04-07 11:51:27 +08:00
|
|
|
}
|
|
|
|
|
2022-06-16 20:57:20 +08:00
|
|
|
trace_io_uring_defer(req);
|
2020-07-14 04:37:14 +08:00
|
|
|
de->req = req;
|
2020-07-14 04:37:15 +08:00
|
|
|
de->seq = seq;
|
2020-07-14 04:37:14 +08:00
|
|
|
list_add_tail(&de->list, &ctx->defer_list);
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2019-04-07 11:51:27 +08:00
|
|
|
}
|
|
|
|
|
2023-01-21 00:10:30 +08:00
|
|
|
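/*
 * Late file assignment for a request: look up either the fixed-file slot
 * or a normal fd, depending on REQ_F_FIXED_FILE. Returns true if the
 * request now has a file (or never needed one), false otherwise.
 */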
static bool io_assign_file(struct io_kiocb *req, const struct io_issue_def *def,
|
|
|
|
unsigned int issue_flags)
|
2022-03-30 00:10:08 +08:00
|
|
|
{
|
2023-01-21 00:10:30 +08:00
|
|
|
if (req->file || !def->needs_file)
|
2022-03-30 00:10:08 +08:00
|
|
|
return true;
|
|
|
|
|
|
|
|
if (req->flags & REQ_F_FIXED_FILE)
|
2022-04-12 22:09:43 +08:00
|
|
|
req->file = io_file_get_fixed(req, req->cqe.fd, issue_flags);
|
2022-03-30 00:10:08 +08:00
|
|
|
else
|
2022-04-12 22:09:43 +08:00
|
|
|
req->file = io_file_get_normal(req, req->cqe.fd);
|
2022-03-30 00:10:08 +08:00
|
|
|
|
2022-04-19 03:51:12 +08:00
|
|
|
return !!req->file;
|
2022-03-30 00:10:08 +08:00
|
|
|
}
|
|
|
|
|
2021-02-10 08:03:09 +08:00
|
|
|
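/*
 * Issue a single request through its opcode handler. Takes care of late
 * file assignment, personality credential overrides and audit hooks
 * around ->issue(). An IOU_OK result is completed here, either deferred
 * or posted depending on the issue flags; IOU_ISSUE_SKIP_COMPLETE means
 * the handler owns completion, in which case pollable requests are added
 * to the IOPOLL list. Anything else is returned to the caller.
 */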
static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2023-01-12 22:44:10 +08:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2021-02-28 06:57:30 +08:00
|
|
|
const struct cred *creds = NULL;
|
2019-12-18 10:53:05 +08:00
|
|
|
int ret;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2023-01-21 00:10:30 +08:00
|
|
|
if (unlikely(!io_assign_file(req, def, issue_flags)))
|
2022-04-15 10:23:40 +08:00
|
|
|
return -EBADF;
|
|
|
|
|
2021-09-25 04:59:41 +08:00
|
|
|
if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
|
2021-06-18 01:14:01 +08:00
|
|
|
creds = override_creds(req->creds);
|
2021-02-28 06:57:30 +08:00
|
|
|
|
2022-05-24 06:53:15 +08:00
|
|
|
if (!def->audit_skip)
|
2021-02-17 08:46:48 +08:00
|
|
|
audit_uring_entry(req->opcode);
|
|
|
|
|
2022-05-24 06:56:21 +08:00
|
|
|
ret = def->issue(req, issue_flags);
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2022-05-24 06:53:15 +08:00
|
|
|
if (!def->audit_skip)
|
2021-02-17 08:46:48 +08:00
|
|
|
audit_uring_exit(!ret, ret);
|
|
|
|
|
2021-02-28 06:57:30 +08:00
|
|
|
if (creds)
|
|
|
|
revert_creds(creds);
|
2022-05-25 05:21:00 +08:00
|
|
|
|
2022-06-16 17:21:58 +08:00
|
|
|
if (ret == IOU_OK) {
|
|
|
|
if (issue_flags & IO_URING_F_COMPLETE_DEFER)
|
2022-06-20 08:26:00 +08:00
|
|
|
io_req_complete_defer(req);
|
2022-06-16 17:21:58 +08:00
|
|
|
else
|
2022-11-23 19:33:41 +08:00
|
|
|
io_req_complete_post(req, issue_flags);
|
2023-12-01 08:38:52 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ret != IOU_ISSUE_SKIP_COMPLETE)
|
2019-01-09 23:59:42 +08:00
|
|
|
return ret;
|
2022-05-25 05:21:00 +08:00
|
|
|
|
2020-05-20 11:20:27 +08:00
|
|
|
/* If the op doesn't have a file, we're not polling for it */
|
2022-12-07 11:53:26 +08:00
|
|
|
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
|
2021-10-16 00:09:12 +08:00
|
|
|
io_iopoll_req_issued(req, issue_flags);
|
2019-01-09 23:59:42 +08:00
|
|
|
|
|
|
|
return 0;
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2023-03-27 23:38:15 +08:00
|
|
|
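/*
 * Issue a request from poll/task_work context: grab the ring lock, then
 * issue it non-blocking with multishot allowed and completion deferred
 * to the caller's batch.
 */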
int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts)
|
2022-05-26 10:31:09 +08:00
|
|
|
{
|
2023-03-27 23:38:15 +08:00
|
|
|
io_tw_lock(req->ctx, ts);
|
2022-11-24 17:35:59 +08:00
|
|
|
return io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_MULTISHOT|
|
|
|
|
IO_URING_F_COMPLETE_DEFER);
|
2022-05-26 10:31:09 +08:00
|
|
|
}
|
|
|
|
|
2022-05-26 01:01:04 +08:00
|
|
|
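/*
 * Called by io-wq when a work item is done: drop the io-wq reference and,
 * if it was the last one, pick up the next linked request (if any) before
 * freeing this one, handing the link back to io-wq as new work.
 */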
struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
|
2021-08-09 20:04:05 +08:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2023-06-23 19:23:21 +08:00
|
|
|
struct io_kiocb *nxt = NULL;
|
2021-08-09 20:04:05 +08:00
|
|
|
|
2023-06-23 19:23:21 +08:00
|
|
|
if (req_ref_put_and_test(req)) {
|
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
|
|
|
nxt = io_req_find_next(req);
|
|
|
|
io_free_req(req);
|
|
|
|
}
|
|
|
|
return nxt ? &nxt->work : NULL;
|
2021-08-09 20:04:05 +08:00
|
|
|
}
|
|
|
|
|
2022-05-26 01:01:04 +08:00
|
|
|
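/*
 * io-wq worker entry point: issue the request from worker context.
 * Cancelled work is failed, pollable requests get a non-blocking attempt
 * with poll arming as a fallback, and everything else is retried until it
 * stops returning -EAGAIN.
 */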
void io_wq_submit_work(struct io_wq_work *work)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2023-01-12 22:44:10 +08:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2022-12-07 11:53:30 +08:00
|
|
|
unsigned int issue_flags = IO_URING_F_UNLOCKED | IO_URING_F_IOWQ;
|
2021-10-23 19:13:57 +08:00
|
|
|
bool needs_poll = false;
|
io_uring: defer file assignment
If an application uses direct open or accept, it knows in advance what
direct descriptor value it will get as it picks it itself. This allows
combined requests such as:
sqe = io_uring_get_sqe(ring);
io_uring_prep_openat_direct(sqe, ..., file_slot);
sqe->flags |= IOSQE_IO_LINK | IOSQE_CQE_SKIP_SUCCESS;
sqe = io_uring_get_sqe(ring);
io_uring_prep_read(sqe, file_slot, buf, buf_size, 0);
sqe->flags |= IOSQE_FIXED_FILE;
io_uring_submit(ring);
where we prepare both a file open and read, and only get a completion
event for the read when both have completed successfully.
Currently links are fully prepared before the head is issued, but that
fails if the dependent link needs a file assigned that isn't valid until
the head has completed.
Conversely, if the same chain is performed but the fixed file slot is
already valid, then we would be unexpectedly returning data from the
old file slot rather than the newly opened one. Make sure we're
consistent here.
Allow deferral of file setup, which makes this documented case work.
Cc: stable@vger.kernel.org # v5.15+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-30 00:10:08 +08:00
|
|
|
int ret = 0, err = -ECANCELED;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2022-11-10 20:21:03 +08:00
|
|
|
/* one will be dropped by ->io_wq_free_work() after returning to io-wq */
|
2021-08-15 17:40:18 +08:00
|
|
|
if (!(req->flags & REQ_F_REFCOUNT))
|
|
|
|
__io_req_set_refcount(req, 2);
|
|
|
|
else
|
|
|
|
req_ref_get(req);
|
io_uring: remove submission references
Requests are by default given with two references, submission and
completion. Completion references are straightforward, they represent
request ownership and are put when a request is completed or so.
Submission references are a bit trickier. They're needed when
io_issue_sqe() goes deep into the submission stack (e.g. into fs,
block, drivers, etc.); the request may have been given away for
concurrent execution or may already have completed, and the code
unwinding back to io_issue_sqe() may still be accessing pieces of the
request, e.g. the file or iov.
Now, we prevent such async/in-depth completions by pushing requests
through task_work. Punting to io-wq is also done through task_work,
apart from a couple of cases with a pretty well known context. So,
there are two cases:
1) io_issue_sqe() from the task context and protected by ->uring_lock.
Either requests return back to io_uring or handed to task_work, which
won't be executed because we're currently controlling that task. So,
we can be sure that requests are staying alive all the time and we don't
need submission references to pin them.
2) io_issue_sqe() from io-wq, which doesn't hold the mutex. The role of
submission reference is played by io-wq reference, which is put by
io_wq_submit_work(). Hence, it should be fine.
Considering that, we can carefully kill the submission reference.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b68f1c763229a590f2a27148aee77767a8d7750.1628705069.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-12 02:28:29 +08:00
|
|
|
|
2022-04-16 05:08:25 +08:00
|
|
|
io_arm_ltimeout(req);
|
2022-03-30 00:10:08 +08:00
|
|
|
|
2021-08-23 20:30:44 +08:00
|
|
|
/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
|
2021-10-23 19:13:57 +08:00
|
|
|
if (work->flags & IO_WQ_WORK_CANCEL) {
|
2022-04-12 22:24:43 +08:00
|
|
|
fail:
|
2022-03-30 00:10:08 +08:00
|
|
|
io_req_task_queue_fail(req, err);
|
2021-10-23 19:13:57 +08:00
|
|
|
return;
|
|
|
|
}
|
2023-01-21 00:10:30 +08:00
|
|
|
if (!io_assign_file(req, def, issue_flags)) {
|
2022-04-12 22:24:43 +08:00
|
|
|
err = -EBADF;
|
|
|
|
work->flags |= IO_WQ_WORK_CANCEL;
|
|
|
|
goto fail;
|
|
|
|
}
|
2019-01-19 13:56:34 +08:00
|
|
|
|
2021-10-23 19:13:57 +08:00
|
|
|
if (req->flags & REQ_F_FORCE_ASYNC) {
|
2021-10-23 19:13:59 +08:00
|
|
|
bool opcode_poll = def->pollin || def->pollout;
|
|
|
|
|
|
|
|
if (opcode_poll && file_can_poll(req->file)) {
|
|
|
|
needs_poll = true;
|
2021-10-23 19:13:57 +08:00
|
|
|
issue_flags |= IO_URING_F_NONBLOCK;
|
2021-10-23 19:13:59 +08:00
|
|
|
}
|
2019-10-24 21:25:42 +08:00
|
|
|
}
|
2019-01-19 13:56:34 +08:00
|
|
|
|
2021-10-23 19:13:57 +08:00
|
|
|
do {
|
|
|
|
ret = io_issue_sqe(req, issue_flags);
|
|
|
|
if (ret != -EAGAIN)
|
|
|
|
break;
|
2023-07-21 03:16:53 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If REQ_F_NOWAIT is set, then don't wait or retry with
|
|
|
|
* poll. -EAGAIN is final for that case.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_NOWAIT)
|
|
|
|
break;
|
|
|
|
|
2021-10-23 19:13:57 +08:00
|
|
|
/*
|
|
|
|
* We can get EAGAIN for iopolled IO even though we're
|
|
|
|
* forcing a sync submission from here, since we can't
|
|
|
|
* wait for request slots on the block side.
|
|
|
|
*/
|
|
|
|
if (!needs_poll) {
|
2022-05-13 18:24:56 +08:00
|
|
|
if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
break;
|
2023-09-07 20:50:07 +08:00
|
|
|
if (io_wq_worker_stopped())
|
|
|
|
break;
|
2021-10-23 19:13:57 +08:00
|
|
|
cond_resched();
|
|
|
|
continue;
|
io_uring: implement async hybrid mode for pollable requests
The current logic for requests with IOSQE_ASYNC is to first queue them to
an io-worker and then execute them synchronously. For unbound work like
pollable requests (e.g. reads/writes on a socket fd), the io-worker may
get stuck waiting for events for a long time, and thus other work waits
in the list for a long time too.
Let's introduce a new way of handling unbound work (currently pollable
requests): a request is first queued to an io-worker, which then makes a
nonblocking attempt rather than issuing it synchronously. If that fails,
the worker arms poll for the request and can move on to handle
other work.
The detailed flow for this kind of request is:
step1: original context:
queue it to io-worker
step2: io-worker context:
nonblock try (the old logic is a synchronous try here)
|
|--fail--> arm poll
|
|--(fail/ready)-->synchronous issue
|
|--(succeed)-->worker finishes its job, tw
takes over the req
This works much better than the old IOSQE_ASYNC logic in cases where the
unbound max_worker is relatively small. In that case the number of
io-workers easily reaches max_worker, new workers cannot be created, and
the running workers are stuck handling old work in IOSQE_ASYNC mode.
On my 64-core machine, with unbound max_worker set to 20 and running an
echo-server, it turns out:
(arguments: register_file, connection count is 1000, message size is 12
bytes)
original IOSQE_ASYNC: 76664.151 tps
after this patch: 166934.985 tps
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211018133445.103438-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-18 21:34:45 +08:00
|
|
|
}
|
|
|
|
|
2022-03-16 00:54:08 +08:00
|
|
|
if (io_arm_poll_handler(req, issue_flags) == IO_APOLL_OK)
|
2021-10-23 19:13:57 +08:00
|
|
|
return;
|
|
|
|
/* aborted or ready, in either case retry blocking */
|
|
|
|
needs_poll = false;
|
|
|
|
issue_flags &= ~IO_URING_F_NONBLOCK;
|
|
|
|
} while (1);
|
2019-01-19 13:56:34 +08:00
|
|
|
|
2021-02-19 06:32:52 +08:00
|
|
|
/* avoid locking problems by failing it from a clean context */
|
2022-05-25 05:21:00 +08:00
|
|
|
if (ret < 0)
|
2021-02-19 06:32:52 +08:00
|
|
|
io_req_task_queue_fail(req, ret);
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2022-05-25 11:19:47 +08:00
|
|
|
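/*
 * Resolve a registered (fixed) file by index: validate the index under the
 * submission lock, fetch the slot, copy its flags into the request and pin
 * the rsrc node.
 */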
inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
|
|
|
|
unsigned int issue_flags)
|
2019-03-14 02:39:28 +08:00
|
|
|
{
|
2022-04-05 07:18:43 +08:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2023-06-20 19:32:35 +08:00
|
|
|
struct io_fixed_file *slot;
|
2022-04-05 07:18:43 +08:00
|
|
|
struct file *file = NULL;
|
2019-03-14 02:39:28 +08:00
|
|
|
|
2022-04-19 03:51:11 +08:00
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
2022-04-05 07:18:43 +08:00
|
|
|
|
2021-08-09 20:04:02 +08:00
|
|
|
if (unlikely((unsigned int)fd >= ctx->nr_user_files))
|
2022-04-05 07:18:43 +08:00
|
|
|
goto out;
|
2021-08-09 20:04:02 +08:00
|
|
|
fd = array_index_nospec(fd, ctx->nr_user_files);
|
2023-06-20 19:32:35 +08:00
|
|
|
slot = io_fixed_file_slot(&ctx->file_table, fd);
|
|
|
|
file = io_slot_file(slot);
|
|
|
|
req->flags |= io_slot_flags(slot);
|
2022-04-05 07:18:43 +08:00
|
|
|
io_req_set_rsrc_node(req, ctx, 0);
|
|
|
|
out:
|
2022-04-19 03:51:11 +08:00
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
2021-08-09 20:04:02 +08:00
|
|
|
return file;
|
|
|
|
}
|
2021-03-12 23:27:05 +08:00
|
|
|
|
2022-05-25 11:19:47 +08:00
|
|
|
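/*
 * Resolve a regular file descriptor via fget(); requests operating on
 * another io_uring instance are tracked as inflight.
 */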
struct file *io_file_get_normal(struct io_kiocb *req, int fd)
|
2021-08-09 20:04:02 +08:00
|
|
|
{
|
io_uring: remove file batch-get optimisation
For requests with non-fixed files, instead of grabbing just one
reference, we grab as many references as there are requests left, so the
following requests using the same file can take one without atomics.
However, it's not all win. If there is one request in the middle that
doesn't use a file, or uses a fixed file, we need to put back the leftover
references. Even worse, if an application submits requests dealing with
different files, it will do a put for each new request, doubling the
number of atomics needed. Also, even when not used, it still takes some
cycles in the submission path.
If a file is used many times, it makes more sense to pre-register it; if
not, we may hit the described pitfall. So this optimisation is a matter of
use case. Go with the simplest way code-wise and remove it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-10 21:52:47 +08:00
|
|
|
struct file *file = fget(fd);
|
2021-08-09 20:04:02 +08:00
|
|
|
|
2022-06-16 20:57:20 +08:00
|
|
|
trace_io_uring_file_get(req, fd);
|
2019-03-14 02:39:28 +08:00
|
|
|
|
2021-08-09 20:04:02 +08:00
|
|
|
/* we don't allow fixed io_uring files */
|
2022-05-26 00:28:04 +08:00
|
|
|
if (file && io_is_uring_fops(file))
|
2022-06-02 13:57:02 +08:00
|
|
|
io_req_track_inflight(req);
|
2020-10-11 01:34:08 +08:00
|
|
|
return file;
|
2019-03-14 02:39:28 +08:00
|
|
|
}
|
|
|
|
|
2022-04-16 05:08:28 +08:00
|
|
|
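/*
 * Inline issue didn't complete: fail the request if the error is final,
 * otherwise try to arm a poll handler and, failing that, punt the request
 * to io-wq. Any linked timeout is queued last.
 */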
static void io_queue_async(struct io_kiocb *req, int ret)
|
2021-09-25 04:59:59 +08:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
|
|
|
{
|
2022-04-16 05:08:28 +08:00
|
|
|
struct io_kiocb *linked_timeout;
|
|
|
|
|
|
|
|
if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_defer_failed(req, ret);
|
2022-04-16 05:08:28 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
linked_timeout = io_prep_linked_timeout(req);
|
2021-09-25 04:59:59 +08:00
|
|
|
|
2022-03-16 00:54:08 +08:00
|
|
|
switch (io_arm_poll_handler(req, 0)) {
|
2021-09-25 04:59:59 +08:00
|
|
|
case IO_APOLL_READY:
|
2022-09-07 00:11:17 +08:00
|
|
|
io_kbuf_recycle(req, 0);
|
2021-09-25 04:59:59 +08:00
|
|
|
io_req_task_queue(req);
|
|
|
|
break;
|
|
|
|
case IO_APOLL_ABORTED:
|
2022-06-17 20:24:26 +08:00
|
|
|
io_kbuf_recycle(req, 0);
|
2022-04-16 05:08:27 +08:00
|
|
|
io_queue_iowq(req, NULL);
|
2021-09-25 04:59:59 +08:00
|
|
|
break;
|
2022-03-10 02:27:52 +08:00
|
|
|
case IO_APOLL_OK:
|
|
|
|
break;
|
2021-09-25 04:59:59 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
if (linked_timeout)
|
|
|
|
io_queue_linked_timeout(linked_timeout);
|
|
|
|
}
|
|
|
|
|
2022-04-16 05:08:26 +08:00
|
|
|
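/*
 * Fast path submission: try to issue the request inline without blocking;
 * on success just arm a linked timeout, otherwise fall back to
 * io_queue_async().
 */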
static inline void io_queue_sqe(struct io_kiocb *req)
|
2021-08-09 20:04:10 +08:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2019-03-13 00:18:47 +08:00
|
|
|
int ret;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2021-02-10 08:03:22 +08:00
|
|
|
ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
|
2020-02-23 14:22:19 +08:00
|
|
|
|
2019-10-17 23:20:46 +08:00
|
|
|
/*
|
|
|
|
* We async punt it if the file wasn't marked NOWAIT, or if the file
|
|
|
|
* doesn't support non-blocking read/write attempts
|
|
|
|
*/
|
2022-04-16 05:08:28 +08:00
|
|
|
if (likely(!ret))
|
2022-04-16 05:08:25 +08:00
|
|
|
io_arm_ltimeout(req);
|
2022-04-16 05:08:28 +08:00
|
|
|
else
|
|
|
|
io_queue_async(req, ret);
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
2021-09-25 04:59:58 +08:00
|
|
|
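/*
 * Slow path for requests that can't be issued inline: failed requests are
 * completed with their error, the rest are prepared for async execution
 * and either drained or punted to io-wq.
 */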
static void io_queue_sqe_fallback(struct io_kiocb *req)
|
2021-08-09 20:04:10 +08:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
2019-09-09 20:50:40 +08:00
|
|
|
{
|
2022-04-16 05:08:32 +08:00
|
|
|
if (unlikely(req->flags & REQ_F_FAIL)) {
|
|
|
|
/*
|
|
|
|
* We don't submit, fail them all, for that replace hardlinks
|
|
|
|
* with normal links. Extra REQ_F_LINK is tolerated.
|
|
|
|
*/
|
|
|
|
req->flags &= ~REQ_F_HARDLINK;
|
|
|
|
req->flags |= REQ_F_LINK;
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_defer_failed(req, req->cqe.res);
|
2021-06-15 06:37:30 +08:00
|
|
|
} else {
|
|
|
|
int ret = io_req_prep_async(req);
|
|
|
|
|
2023-01-27 18:59:11 +08:00
|
|
|
if (unlikely(ret)) {
|
2022-11-24 17:35:53 +08:00
|
|
|
io_req_defer_failed(req, ret);
|
2023-01-27 18:59:11 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(req->ctx->drain_active))
|
|
|
|
io_drain_req(req);
|
2021-06-15 06:37:30 +08:00
|
|
|
else
|
2022-04-16 05:08:27 +08:00
|
|
|
io_queue_iowq(req, NULL);
|
2019-12-17 23:04:44 +08:00
|
|
|
}
|
2019-09-09 20:50:40 +08:00
|
|
|
}
|
|
|
|
|
2021-02-19 02:29:40 +08:00
|
|
|
/*
|
|
|
|
* Check SQE restrictions (opcode and flags).
|
|
|
|
*
|
|
|
|
* Returns 'true' if SQE is allowed, 'false' otherwise.
|
|
|
|
*/
|
|
|
|
static inline bool io_check_restriction(struct io_ring_ctx *ctx,
|
|
|
|
struct io_kiocb *req,
|
|
|
|
unsigned int sqe_flags)
|
2019-09-09 20:50:40 +08:00
|
|
|
{
|
2021-02-19 02:29:40 +08:00
|
|
|
if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
|
|
|
|
ctx->restrictions.sqe_flags_required)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
|
|
|
|
ctx->restrictions.sqe_flags_required))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
2019-09-09 20:50:40 +08:00
|
|
|
}
|
|
|
|
|
2021-10-02 01:07:00 +08:00
|
|
|
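/*
 * An IOSQE_IO_DRAIN request was seen: enable draining for the ring and, if
 * we're in the middle of a link, force the head (and thus the whole link)
 * through the drain path.
 */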
static void io_init_req_drain(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_kiocb *head = ctx->submit_state.link.head;
|
|
|
|
|
|
|
|
ctx->drain_active = true;
|
|
|
|
if (head) {
|
|
|
|
/*
|
|
|
|
* If we need to drain a request in the middle of a link, drain
|
|
|
|
* the head request and the next request/link after the current
|
|
|
|
* link. Considering sequential execution of links,
|
2021-11-25 17:21:03 +08:00
|
|
|
* REQ_F_IO_DRAIN will be maintained for every request of our
|
2021-10-02 01:07:00 +08:00
|
|
|
* link.
|
|
|
|
*/
|
2021-11-25 17:21:03 +08:00
|
|
|
head->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
|
2021-10-02 01:07:00 +08:00
|
|
|
ctx->drain_next = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-02-19 02:29:40 +08:00
|
|
|
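/*
 * Initialise an io_kiocb from an SQE: validate the opcode and flags, set up
 * draining and restrictions, resolve any personality credentials, record
 * the target fd (starting a block plug when useful), then call the
 * opcode's ->prep() handler.
 */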
static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
2021-08-09 20:04:10 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-02-19 02:29:40 +08:00
|
|
|
{
|
2023-01-12 22:44:10 +08:00
|
|
|
const struct io_issue_def *def;
|
2021-02-19 02:29:40 +08:00
|
|
|
unsigned int sqe_flags;
|
2021-10-02 01:07:02 +08:00
|
|
|
int personality;
|
2021-10-06 23:06:49 +08:00
|
|
|
u8 opcode;
|
2021-02-19 02:29:40 +08:00
|
|
|
|
2021-08-09 20:04:08 +08:00
|
|
|
/* req is partially pre-initialised, see io_preinit_req() */
|
2021-10-06 23:06:49 +08:00
|
|
|
req->opcode = opcode = READ_ONCE(sqe->opcode);
|
2021-02-19 02:29:40 +08:00
|
|
|
/* same numerical values with corresponding REQ_F_*, safe to copy */
|
|
|
|
req->flags = sqe_flags = READ_ONCE(sqe->flags);
|
2022-04-12 22:09:43 +08:00
|
|
|
req->cqe.user_data = READ_ONCE(sqe->user_data);
|
2021-02-19 02:29:40 +08:00
|
|
|
req->file = NULL;
|
2022-04-19 03:51:13 +08:00
|
|
|
req->rsrc_node = NULL;
|
2021-02-19 02:29:40 +08:00
|
|
|
req->task = current;
|
|
|
|
|
2021-10-06 23:06:49 +08:00
|
|
|
if (unlikely(opcode >= IORING_OP_LAST)) {
|
|
|
|
req->opcode = 0;
|
2021-02-19 02:29:40 +08:00
|
|
|
return -EINVAL;
|
2021-10-06 23:06:49 +08:00
|
|
|
}
|
2023-01-12 22:44:10 +08:00
|
|
|
def = &io_issue_defs[opcode];
|
2021-09-15 19:03:38 +08:00
|
|
|
if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
|
|
|
|
/* enforce forwards compatibility on users */
|
|
|
|
if (sqe_flags & ~SQE_VALID_FLAGS)
|
|
|
|
return -EINVAL;
|
2022-04-29 09:09:43 +08:00
|
|
|
if (sqe_flags & IOSQE_BUFFER_SELECT) {
|
2022-05-24 06:53:15 +08:00
|
|
|
if (!def->buffer_select)
|
2022-04-29 09:09:43 +08:00
|
|
|
return -EOPNOTSUPP;
|
|
|
|
req->buf_index = READ_ONCE(sqe->buf_group);
|
|
|
|
}
|
2021-11-10 23:49:34 +08:00
|
|
|
if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
|
|
|
|
ctx->drain_disabled = true;
|
|
|
|
if (sqe_flags & IOSQE_IO_DRAIN) {
|
|
|
|
if (ctx->drain_disabled)
|
|
|
|
return -EOPNOTSUPP;
|
2021-10-02 01:07:00 +08:00
|
|
|
io_init_req_drain(req);
|
2021-11-10 23:49:34 +08:00
|
|
|
}
|
2021-09-25 04:59:57 +08:00
|
|
|
}
|
|
|
|
if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
|
|
|
|
if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
|
|
|
|
return -EACCES;
|
|
|
|
/* knock it to the slow queue path, will be drained there */
|
|
|
|
if (ctx->drain_active)
|
|
|
|
req->flags |= REQ_F_FORCE_ASYNC;
|
|
|
|
/* if there is no link, we're at "next" request and need to drain */
|
|
|
|
if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
|
|
|
|
ctx->drain_next = false;
|
|
|
|
ctx->drain_active = true;
|
2021-11-25 17:21:03 +08:00
|
|
|
req->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
|
2021-09-25 04:59:57 +08:00
|
|
|
}
|
2021-09-15 19:03:38 +08:00
|
|
|
}
|
2021-02-19 02:29:40 +08:00
|
|
|
|
2022-05-24 06:53:15 +08:00
|
|
|
if (!def->ioprio && sqe->ioprio)
|
2022-04-27 01:34:56 +08:00
|
|
|
return -EINVAL;
|
2022-05-24 06:53:15 +08:00
|
|
|
if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
|
2022-04-27 01:34:56 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
2022-05-24 06:53:15 +08:00
|
|
|
if (def->needs_file) {
|
2021-10-06 23:06:46 +08:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
|
|
|
|
2022-04-12 22:09:43 +08:00
|
|
|
req->cqe.fd = READ_ONCE(sqe->fd);
|
2022-03-30 00:10:08 +08:00
|
|
|
|
2021-10-06 23:06:46 +08:00
|
|
|
/*
|
|
|
|
* Plug now if we have more than 2 IO left after this, and the
|
|
|
|
* target is potentially a read/write to block based storage.
|
|
|
|
*/
|
2022-05-24 06:53:15 +08:00
|
|
|
if (state->need_plug && def->plug) {
|
2021-10-06 23:06:46 +08:00
|
|
|
state->plug_started = true;
|
|
|
|
state->need_plug = false;
|
2021-10-07 01:01:42 +08:00
|
|
|
blk_start_plug_nr_ios(&state->plug, state->submit_nr);
|
2021-10-06 23:06:46 +08:00
|
|
|
}
|
2021-02-19 02:29:40 +08:00
|
|
|
}
|
2020-10-28 07:25:35 +08:00
|
|
|
|
2021-03-07 00:22:27 +08:00
|
|
|
personality = READ_ONCE(sqe->personality);
|
|
|
|
if (personality) {
|
2021-11-02 12:06:18 +08:00
|
|
|
int ret;
|
|
|
|
|
2021-06-18 01:14:01 +08:00
|
|
|
req->creds = xa_load(&ctx->personalities, personality);
|
|
|
|
if (!req->creds)
|
2021-03-07 00:22:27 +08:00
|
|
|
return -EINVAL;
|
2021-06-18 01:14:01 +08:00
|
|
|
get_cred(req->creds);
|
lsm,io_uring: add LSM hooks to io_uring
A full explanation of io_uring is beyond the scope of this commit
description, but in summary it is an asynchronous I/O mechanism
which allows for I/O requests and the resulting data to be queued
in memory mapped "rings" which are shared between the kernel and
userspace. Optionally, io_uring offers the ability for applications
to spawn kernel threads to dequeue I/O requests from the ring and
submit the requests in the kernel, helping to minimize the syscall
overhead. Rings are accessed in userspace by memory mapping a file
descriptor provided by the io_uring_setup(2), and can be shared
between applications as one might do with any open file descriptor.
Finally, process credentials can be registered with a given ring
and any process with access to that ring can submit I/O requests
using any of the registered credentials.
While the io_uring functionality is widely recognized as offering a
vastly improved, and high performing asynchronous I/O mechanism, its
ability to allow processes to submit I/O requests with credentials
other than its own presents a challenge to LSMs. When a process
creates a new io_uring ring, the ring's credentials are inherited
from the calling process; if this ring is shared with another
process operating with different credentials, there is the potential
to bypass the LSM's security policy. Similarly, registering
credentials with a given ring allows any process with access to that
ring to submit I/O requests with those credentials.
In an effort to allow LSMs to apply security policy to io_uring I/O
operations, this patch adds two new LSM hooks. These hooks, in
conjunction with the LSM anonymous inode support previously
submitted, allow an LSM to apply access control policy to the
sharing of io_uring rings as well as any io_uring credential changes
requested by a process.
The new LSM hooks are described below:
* int security_uring_override_creds(cred)
Controls if the current task, executing an io_uring operation,
is allowed to override its credentials with @cred. In cases
where the current task is a user application, the current
credentials will be those of the user application. In cases
where the current task is a kernel thread servicing io_uring
requests the current credentials will be those of the io_uring
ring (inherited from the process that created the ring).
* int security_uring_sqpoll(void)
Controls if the current task is allowed to create an io_uring
polling thread (IORING_SETUP_SQPOLL). Without a SQPOLL thread
in the kernel, processes must submit I/O requests via
io_uring_enter(2) which allows us to compare any requested
credential changes against the application making the request.
With a SQPOLL thread, we can no longer compare requested
credential changes against the application making the request;
the comparison is made against the ring's credentials instead.
Signed-off-by: Paul Moore <paul@paul-moore.com>
2021-02-02 08:56:49 +08:00
|
|
|
ret = security_uring_override_creds(req->creds);
|
|
|
|
if (ret) {
|
|
|
|
put_cred(req->creds);
|
|
|
|
return ret;
|
|
|
|
}
|
2021-06-18 01:14:02 +08:00
|
|
|
req->flags |= REQ_F_CREDS;
|
2021-03-07 00:22:27 +08:00
|
|
|
}
|
2021-02-19 02:29:40 +08:00
|
|
|
|
2022-05-24 06:56:21 +08:00
|
|
|
return def->prep(req, sqe);
|
2021-02-19 02:29:40 +08:00
|
|
|
}
|
|
|
|
|
2022-04-16 05:08:30 +08:00
|
|
|
static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
|
|
|
|
struct io_kiocb *req, int ret)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_submit_link *link = &ctx->submit_state.link;
|
|
|
|
struct io_kiocb *head = link->head;
|
|
|
|
|
2022-06-16 20:57:20 +08:00
|
|
|
trace_io_uring_req_failed(sqe, req, ret);
|
2022-04-16 05:08:30 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Avoid breaking links in the middle as it renders links with SQPOLL
|
|
|
|
* unusable. Instead of failing eagerly, continue assembling the link if
|
|
|
|
* applicable and mark the head with REQ_F_FAIL. The link flushing code
|
|
|
|
* should find the flag and handle the rest.
|
|
|
|
*/
|
|
|
|
req_fail_link_node(req, ret);
|
|
|
|
if (head && !(head->flags & REQ_F_FAIL))
|
|
|
|
req_fail_link_node(head, -ECANCELED);
|
|
|
|
|
|
|
|
if (!(req->flags & IO_REQ_LINK_FLAGS)) {
|
|
|
|
if (head) {
|
|
|
|
link->last->link = req;
|
|
|
|
link->head = NULL;
|
|
|
|
req = head;
|
|
|
|
}
|
|
|
|
io_queue_sqe_fallback(req);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (head)
|
|
|
|
link->last->link = req;
|
|
|
|
else
|
|
|
|
link->head = req;
|
|
|
|
link->last = req;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
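/*
 * Submit a single SQE: initialise the request, then either attach it to the
 * current link, start a new link, divert it to the slow fallback path, or
 * issue it directly.
 */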
static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
|
2021-02-19 02:29:42 +08:00
|
|
|
const struct io_uring_sqe *sqe)
|
2021-08-09 20:04:10 +08:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-05-11 06:07:28 +08:00
|
|
|
{
|
2021-02-19 02:29:42 +08:00
|
|
|
struct io_submit_link *link = &ctx->submit_state.link;
|
2020-04-12 07:05:05 +08:00
|
|
|
int ret;
|
2019-05-11 06:07:28 +08:00
|
|
|
|
2021-02-19 02:29:41 +08:00
|
|
|
ret = io_init_req(ctx, req, sqe);
|
2022-04-16 05:08:30 +08:00
|
|
|
if (unlikely(ret))
|
|
|
|
return io_submit_fail_init(sqe, req, ret);
|
2021-06-15 06:37:31 +08:00
|
|
|
|
2023-03-31 00:03:41 +08:00
|
|
|
trace_io_uring_submit_req(req);
|
2021-02-19 02:29:41 +08:00
|
|
|
|
2019-05-11 06:07:28 +08:00
|
|
|
/*
|
|
|
|
* If we already have a head request, queue this one for async
|
|
|
|
* submittal once the head completes. If we don't have a head but
|
|
|
|
* IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
|
|
|
|
* submitted sync once the chain is complete. If none of those
|
|
|
|
* conditions are true (normal request), then just queue it.
|
|
|
|
*/
|
2022-04-16 05:08:31 +08:00
|
|
|
if (unlikely(link->head)) {
|
2022-04-16 05:08:30 +08:00
|
|
|
ret = io_req_prep_async(req);
|
|
|
|
if (unlikely(ret))
|
|
|
|
return io_submit_fail_init(sqe, req, ret);
|
|
|
|
|
2022-06-16 20:57:20 +08:00
|
|
|
trace_io_uring_link(req, link->head);
|
2020-10-28 07:25:37 +08:00
|
|
|
link->last->link = req;
|
2020-10-28 07:25:35 +08:00
|
|
|
link->last = req;
|
2019-12-18 03:26:58 +08:00
|
|
|
|
2022-04-16 05:08:29 +08:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
2021-09-25 04:59:56 +08:00
|
|
|
return 0;
|
2022-04-16 05:08:30 +08:00
|
|
|
/* last request of the link, flush it */
|
|
|
|
req = link->head;
|
2021-09-25 04:59:56 +08:00
|
|
|
link->head = NULL;
|
2022-04-16 05:08:31 +08:00
|
|
|
if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
|
|
|
|
goto fallback;
|
|
|
|
|
|
|
|
} else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
|
|
|
|
REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
|
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS) {
|
|
|
|
link->head = req;
|
|
|
|
link->last = req;
|
|
|
|
} else {
|
|
|
|
fallback:
|
|
|
|
io_queue_sqe_fallback(req);
|
|
|
|
}
|
2021-09-25 04:59:56 +08:00
|
|
|
return 0;
|
2019-05-11 06:07:28 +08:00
|
|
|
}
|
2019-12-05 21:15:45 +08:00
|
|
|
|
2021-09-25 04:59:56 +08:00
|
|
|
io_queue_sqe(req);
|
2020-04-12 07:05:03 +08:00
|
|
|
return 0;
|
2019-05-11 06:07:28 +08:00
|
|
|
}
|
|
|
|
|
2019-01-10 00:06:50 +08:00
|
|
|
/*
|
|
|
|
* Batched submission is done, ensure local IO is flushed out.
|
|
|
|
*/
|
2021-09-25 04:59:55 +08:00
|
|
|
static void io_submit_state_end(struct io_ring_ctx *ctx)
|
2019-01-10 00:06:50 +08:00
|
|
|
{
|
2021-09-25 04:59:55 +08:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
|
|
|
|
2022-04-12 22:09:45 +08:00
|
|
|
if (unlikely(state->link.head))
|
|
|
|
io_queue_sqe_fallback(state->link.head);
|
2021-09-25 04:59:55 +08:00
|
|
|
/* flush only after queuing links as they can generate completions */
|
2021-09-08 23:40:52 +08:00
|
|
|
io_submit_flush_completions(ctx);
|
2020-10-28 23:33:23 +08:00
|
|
|
if (state->plug_started)
|
|
|
|
blk_finish_plug(&state->plug);
|
2019-01-10 00:06:50 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Start submission side cache.
|
|
|
|
*/
|
|
|
|
static void io_submit_state_start(struct io_submit_state *state,
|
2021-02-10 08:03:11 +08:00
|
|
|
unsigned int max_ios)
|
2019-01-10 00:06:50 +08:00
|
|
|
{
|
2020-10-28 23:33:23 +08:00
|
|
|
state->plug_started = false;
|
2021-09-08 23:40:49 +08:00
|
|
|
state->need_plug = max_ios > 2;
|
2021-10-07 01:01:42 +08:00
|
|
|
state->submit_nr = max_ios;
|
2021-02-19 02:29:42 +08:00
|
|
|
/* set only head, no need to init link_last in advance */
|
|
|
|
state->link.head = NULL;
|
2019-01-10 00:06:50 +08:00
|
|
|
}
|
|
|
|
|
2019-01-08 01:46:33 +08:00
|
|
|
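/* Publish the new SQ head so the application can reuse consumed SQE slots. */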
static void io_commit_sqring(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2019-08-27 01:23:46 +08:00
|
|
|
struct io_rings *rings = ctx->rings;
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2019-12-31 02:24:46 +08:00
|
|
|
/*
|
|
|
|
* Ensure any loads from the SQEs are done at this point,
|
|
|
|
* since once we write the new head, the application could
|
|
|
|
* write new data to them.
|
|
|
|
*/
|
|
|
|
smp_store_release(&rings->sq.head, ctx->cached_sq_head);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
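
For context, a minimal sketch of the application-side counterpart (hypothetical
names, not code from this file): the application publishes new SQ entries with a
release store on the SQ tail, pairing with the kernel's acquire load, in the same
way io_commit_sqring() above publishes the SQ head it has consumed:
    #include <stdatomic.h>

    /* 'sq_ktail', 'sq_array' and 'ring_mask' stand in for pointers obtained by
     * mmap()ing the SQ ring; 'local_tail' is the application's private copy. */
    static void app_publish_sqe(_Atomic unsigned *sq_ktail, unsigned *sq_array,
                                unsigned ring_mask, unsigned *local_tail,
                                unsigned sqe_index)
    {
            sq_array[*local_tail & ring_mask] = sqe_index;
            (*local_tail)++;
            /* release pairs with the kernel's acquire load of the SQ tail */
            atomic_store_explicit(sq_ktail, *local_tail, memory_order_release);
    }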

/*
 * Fetch an sqe, if one is available. Note this returns a pointer to memory
 * that is mapped by userspace. This means that care needs to be taken to
 * ensure that reads are stable, as we cannot rely on userspace always
 * being a good citizen. If members of the sqe are validated and then later
 * used, it's important that those reads are done through READ_ONCE() to
 * prevent a re-load down the line.
 */
static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
{
        unsigned mask = ctx->sq_entries - 1;
        unsigned head = ctx->cached_sq_head++ & mask;

        if (!(ctx->flags & IORING_SETUP_NO_SQARRAY)) {
                head = READ_ONCE(ctx->sq_array[head]);
                if (unlikely(head >= ctx->sq_entries)) {
                        /* drop invalid entries */
                        spin_lock(&ctx->completion_lock);
                        ctx->cq_extra--;
                        spin_unlock(&ctx->completion_lock);
                        WRITE_ONCE(ctx->rings->sq_dropped,
                                   READ_ONCE(ctx->rings->sq_dropped) + 1);
                        return false;
                }
        }

        /*
         * The cached sq head (or cq tail) serves two purposes:
         *
         * 1) allows us to batch the cost of updating the user visible
         *    head updates.
         * 2) allows the kernel side to track the head on its own, even
         *    though the application is the one updating it.
         */

        /* double index for 128-byte SQEs, twice as long */
        if (ctx->flags & IORING_SETUP_SQE128)
                head <<= 1;
        *sqe = &ctx->sq_sqes[head];
        return true;
}
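
As a concrete illustration of the READ_ONCE() rule in the comment above (a
sketch, not code from this file; checking 'opcode' is just an example of a
validated field):
    const struct io_uring_sqe *sqe;
    u8 opcode;

    if (!io_get_sqe(ctx, &sqe))
            return -EAGAIN;
    opcode = READ_ONCE(sqe->opcode);        /* snapshot the shared field once */
    if (opcode >= IORING_OP_LAST)
            return -EINVAL;
    /* keep using the local 'opcode'; re-reading sqe->opcode after the check
     * could observe a different, unvalidated value written by userspace */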

io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel-side thread will poll for new submissions, and in case
of HIPRI/polled IO, it will also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
    sq_ring->flags |= IORING_SQ_NEED_WAKEUP;
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically, an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
    read_barrier();
    if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
            io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
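
Expanded into a slightly more complete userspace sketch (illustrative only;
the mmap'ed 'sq_flags' pointer and 'ring_fd' are placeholders, and a C11
acquire load stands in for read_barrier()):
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/io_uring.h>

    static void wake_sq_thread_if_needed(int ring_fd, _Atomic unsigned *sq_flags)
    {
            /* pairs with the kernel store that sets IORING_SQ_NEED_WAKEUP */
            unsigned flags = atomic_load_explicit(sq_flags, memory_order_acquire);

            if (flags & IORING_SQ_NEED_WAKEUP)
                    syscall(__NR_io_uring_enter, ring_fd, 0, 0,
                            IORING_ENTER_SQ_WAKEUP, NULL, 0);
    }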

int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
        __must_hold(&ctx->uring_lock)
{
        unsigned int entries = io_sqring_entries(ctx);
        unsigned int left;
        int ret;

        if (unlikely(!entries))
                return 0;
        /* make sure SQ entry isn't read before tail */
        ret = left = min(nr, entries);
        io_get_task_refs(left);
        io_submit_state_start(&ctx->submit_state, left);

        do {
                const struct io_uring_sqe *sqe;
                struct io_kiocb *req;

                if (unlikely(!io_alloc_req(ctx, &req)))
                        break;
                if (unlikely(!io_get_sqe(ctx, &sqe))) {
                        io_req_add_to_cache(req, ctx);
                        break;
                }

                /*
                 * Continue submitting even for sqe failure if the
                 * ring was setup with IORING_SETUP_SUBMIT_ALL
                 */
                if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
                    !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
                        left--;
                        break;
                }
        } while (--left);

        if (unlikely(left)) {
                ret -= left;
                /* try again if it submitted nothing and can't allocate a req */
                if (!ret && io_req_cache_empty(ctx))
                        ret = -EAGAIN;
                current->io_uring->cached_refs += left;
        }

        io_submit_state_end(ctx);
        /* Commit SQ ring head once we've consumed and submitted all SQEs */
        io_commit_sqring(ctx);
        return ret;
}
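
From the application side, io_submit_sqes() consumes everything that was made
visible before the call, so a batch of submissions needs only one syscall. A
rough sketch (hypothetical helpers; fill_sqe(), the ring pointers, 'local_tail'
and 'ring_mask' are placeholders):
    unsigned i, n = 32;                     /* batch size, illustrative */
    int ret;

    for (i = 0; i < n; i++)
            fill_sqe(&sqes[(local_tail + i) & ring_mask], &iovecs[i]);
    local_tail += n;
    /* publish the whole batch with one release store of the SQ tail */
    atomic_store_explicit(sq_ktail, local_tail, memory_order_release);
    ret = syscall(__NR_io_uring_enter, ring_fd, n, 0, 0, NULL, 0);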

struct io_wait_queue {
        struct wait_queue_entry wq;
        struct io_ring_ctx *ctx;
        unsigned cq_tail;
        unsigned nr_timeouts;
        ktime_t timeout;
};

static inline bool io_has_work(struct io_ring_ctx *ctx)
{
        return test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq) ||
               !llist_empty(&ctx->work_llist);
}

io_uring: calculate CQEs from the user visible value
io_cqring_wait (and its wake function io_has_work) used cached_cq_tail in
order to calculate the number of CQEs. cached_cq_tail is set strictly
before the user visible rings->cq.tail.
However, as far as userspace is concerned, if io_uring_enter(2) is called
with a minimum number of events, it will verify by checking
rings->cq.tail.
It is therefore possible for io_uring_enter(2) to return early with fewer
events visible to the user.
Instead, make the wait functions read from the user visible value, so there
will be no discrepancy.
This is triggered eventually by the following reproducer:
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    unsigned int cqe_ready;
    struct io_uring ring;
    int ret, i;

    ret = io_uring_queue_init(N, &ring, 0);
    assert(!ret);
    while (true) {
            for (i = 0; i < N; i++) {
                    sqe = io_uring_get_sqe(&ring);
                    io_uring_prep_nop(sqe);
                    sqe->flags |= IOSQE_ASYNC;
            }
            ret = io_uring_submit(&ring);
            assert(ret == N);
            do {
                    ret = io_uring_wait_cqes(&ring, &cqe, N, NULL, NULL);
            } while (ret == -EINTR);
            cqe_ready = io_uring_cq_ready(&ring);
            assert(!ret);
            assert(cqe_ready == N);
            io_uring_cq_advance(&ring, N);
    }
Fixes: ad3eb2c89fb2 ("io_uring: split overflow state into SQ and CQ side")
Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221108153016.1854297-1-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

static inline bool io_should_wake(struct io_wait_queue *iowq)
{
        struct io_ring_ctx *ctx = iowq->ctx;
        int dist = READ_ONCE(ctx->rings->cq.tail) - (int) iowq->cq_tail;

        /*
         * Wake up if we have enough events, or if a timeout occurred since we
         * started waiting. For timeouts, we always want to return to userspace,
         * regardless of event count.
         */
        return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
}
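
The signed subtraction matters once the 32-bit CQ tail wraps. A small worked
example with illustrative values:
    unsigned cq_head   = 0xfffffffeu;         /* CQ head when the wait started */
    unsigned cq_target = cq_head + 1;         /* iowq->cq_tail == 0xffffffff */
    unsigned new_tail  = 0x00000000u;         /* two completions posted since */
    int dist = (int)(new_tail - cq_target);   /* (int)1 >= 0: enough events */
    /* a naive 'new_tail >= cq_target' compare would wrongly keep waiting here,
     * while the wrapped signed distance still wakes the waiter */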

static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
                            int wake_flags, void *key)
{
        struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue, wq);

        /*
         * Cannot safely flush overflowed CQEs from here, ensure we wake up
         * the task, and the next invocation will do it.
         */
        if (io_should_wake(iowq) || io_has_work(iowq->ctx))
                return autoremove_wake_function(curr, mode, wake_flags, key);
        return -1;
}

int io_run_task_work_sig(struct io_ring_ctx *ctx)
{
        if (!llist_empty(&ctx->work_llist)) {
                __set_current_state(TASK_RUNNING);
                if (io_run_local_work(ctx) > 0)
                        return 0;
        }
        if (io_run_task_work() > 0)
                return 0;
        if (task_sigpending(current))
                return -EINTR;
        return 0;
}

static bool current_pending_io(void)
{
        struct io_uring_task *tctx = current->io_uring;

        if (!tctx)
                return false;
        return percpu_counter_read_positive(&tctx->inflight);
}

/* when returns >0, the caller should retry */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
                                          struct io_wait_queue *iowq)
{
        int io_wait, ret;

        if (unlikely(READ_ONCE(ctx->check_cq)))
                return 1;
        if (unlikely(!llist_empty(&ctx->work_llist)))
                return 1;
        if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
                return 1;
        if (unlikely(task_sigpending(current)))
                return -EINTR;
        if (unlikely(io_should_wake(iowq)))
                return 0;

        /*
         * Mark us as being in io_wait if we have pending requests, so cpufreq
         * can take into account that the task is waiting for IO - turns out
         * to be important for low QD IO.
         */
        io_wait = current->in_iowait;
        if (current_pending_io())
                current->in_iowait = 1;
        ret = 0;
        if (iowq->timeout == KTIME_MAX)
                schedule();
        else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
                ret = -ETIME;
        current->in_iowait = io_wait;
        return ret;
}

/*
 * Wait until events become available, if we don't already have some. The
 * application must reap them itself, as they reside on the shared cq ring.
 */
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
                          const sigset_t __user *sig, size_t sigsz,
                          struct __kernel_timespec __user *uts)
{
        struct io_wait_queue iowq;
        struct io_rings *rings = ctx->rings;
        int ret;

        if (!io_allowed_run_tw(ctx))
                return -EEXIST;
        if (!llist_empty(&ctx->work_llist))
                io_run_local_work(ctx);
        io_run_task_work();
        io_cqring_overflow_flush(ctx);
        /* if user messes with these they will just get an early return */
        if (__io_cqring_events_user(ctx) >= min_events)
                return 0;

        if (sig) {
#ifdef CONFIG_COMPAT
                if (in_compat_syscall())
                        ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
                                                      sigsz);
                else
#endif
                        ret = set_user_sigmask(sig, sigsz);

                if (ret)
                        return ret;
        }

        init_waitqueue_func_entry(&iowq.wq, io_wake_function);
        iowq.wq.private = current;
        INIT_LIST_HEAD(&iowq.wq.entry);
        iowq.ctx = ctx;
        iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
        iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
        iowq.timeout = KTIME_MAX;

        if (uts) {
                struct timespec64 ts;

                if (get_timespec64(&ts, uts))
                        return -EFAULT;
                iowq.timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
        }

io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and io
trace events, but some parts can be hard to identify via this approach.
Making what happens inside io_uring more transparent is important to be
able to reason about many aspects of it, hence introduce the set of
tracing events.
All such events can be roughly divided into two categories:
* those that help to understand correctness (from both the kernel and an
  application point of view). E.g. a ring creation, file registration, or
  waiting for available CQEs. The proposed approach is to get a pointer to
  the original structure of interest (ring context, or request), and then
  find relevant events. io_uring_queue_async_work also exposes a pointer
  to work_struct, to be able to track down corresponding workqueue events.
* those that provide performance-related information. Mostly these are
  events that change the flow of requests, e.g. whether an async work was
  queued, or delayed due to some dependencies. Another important case is
  how io_uring optimizations (e.g. registered files) are utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

        trace_io_uring_cqring_wait(ctx, min_events);

        do {
                unsigned long check_cq;

                if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
                        int nr_wait = (int) iowq.cq_tail - READ_ONCE(ctx->rings->cq.tail);

                        atomic_set(&ctx->cq_wait_nr, nr_wait);
                        set_current_state(TASK_INTERRUPTIBLE);
                } else {
                        prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
                                                  TASK_INTERRUPTIBLE);
                }

                ret = io_cqring_wait_schedule(ctx, &iowq);
                __set_current_state(TASK_RUNNING);
                atomic_set(&ctx->cq_wait_nr, 0);

                /*
                 * Run task_work after scheduling and before io_should_wake().
                 * If we got woken because of task_work being processed, run it
                 * now rather than let the caller do another wait loop.
                 */
                io_run_task_work();
                if (!llist_empty(&ctx->work_llist))
                        io_run_local_work(ctx);

                /*
                 * Non-local task_work will be run on exit to userspace, but
                 * if we're using DEFER_TASKRUN, then we could have waited
                 * with a timeout for a number of requests. If the timeout
                 * hits, we could have some requests ready to process. Ensure
                 * this break is _after_ we have run task_work, to avoid
                 * deferring running potentially pending requests until the
                 * next time we wait for events.
                 */
                if (ret < 0)
                        break;

                check_cq = READ_ONCE(ctx->check_cq);
                if (unlikely(check_cq)) {
                        /* let the caller flush overflows, retry */
                        if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
                                io_cqring_do_overflow_flush(ctx);
                        if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)) {
                                ret = -EBADR;
                                break;
                        }
                }

                if (io_should_wake(&iowq)) {
                        ret = 0;
                        break;
                }
                cond_resched();
        } while (1);

        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
                finish_wait(&ctx->cq_wait, &iowq.wq);
        restore_saved_sigmask_unless(ret == -EINTR);

        return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
}
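
From the application side, the wait path above is what services a call like the
following (a sketch using the raw syscall rather than liburing; 'cq_khead',
'cq_ktail', 'cqes', 'cq_ring_mask' and handle_cqe() are placeholders for the
mmap'ed CQ ring and an application callback):
    unsigned head, tail, want = 8;
    int ret;

    /* block until at least 'want' completions are available */
    ret = syscall(__NR_io_uring_enter, ring_fd, 0, want,
                  IORING_ENTER_GETEVENTS, NULL, 0);

    /* acquire pairs with the kernel's release store of the CQ tail */
    tail = atomic_load_explicit(cq_ktail, memory_order_acquire);
    head = atomic_load_explicit(cq_khead, memory_order_relaxed);
    while (head != tail) {
            struct io_uring_cqe *cqe = &cqes[head & cq_ring_mask];

            handle_cqe(cqe);
            head++;
    }
    /* release the reaped entries back to the kernel */
    atomic_store_explicit(cq_khead, head, memory_order_release);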

void io_mem_free(void *ptr)
{
        if (!ptr)
                return;

        folio_put(virt_to_folio(ptr));
}

io_uring: support for user allocated memory for rings/sqes
Currently io_uring applications must call mmap(2) twice to map the rings
themselves, and the sqes array. This works fine, but it does not support
using huge pages to back the rings/sqes.
Provide a way for the application to pass in pre-allocated memory for
the rings/sqes, which can then suitably be allocated from shmfs or
via mmap to get huge page support.
Particularly for larger rings, this reduces the number of TLB entries needed.
If an application wishes to take advantage of that, it must pre-allocate
the memory needed for the sq/cq ring, and the sqes. The former must
be passed in via the io_uring_params->cq_off.user_data field, while the
latter is passed in via the io_uring_params->sq_off.user_data field. Then
it must set IORING_SETUP_NO_MMAP in the io_uring_params->flags field,
and io_uring will then map the existing memory into the kernel for shared
use. The application must not call mmap(2) to map rings as it otherwise
would have; that will now fail with -EINVAL if this setup flag was used.
The pages used for the rings and sqes must be contiguous. The intent here
is clearly that huge pages should be used, otherwise the normal setup
procedure works fine as-is. The application may use one huge page for
both the rings and sqes.
Outside of those initialization changes, everything works like it did
before.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
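
A userspace sketch of what the commit message describes (illustrative only;
error handling is omitted, sizes are arbitrary, and the io_uring_params offset
fields used to pass the addresses are the ones named in the message above -
consult the uapi header for the exact field names on your kernel):
    #include <sys/mman.h>
    #include <string.h>
    #include <linux/io_uring.h>

    struct io_uring_params p;
    void *ring_mem, *sqe_mem;
    size_t len = 2 * 1024 * 1024;   /* one huge page each, size illustrative */

    ring_mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    sqe_mem  = mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    memset(&p, 0, sizeof(p));
    p.flags = IORING_SETUP_NO_MMAP;
    /* hand ring_mem and sqe_mem to io_uring_setup(2) via the sq_off/cq_off
     * user address fields described above, then use the rings in place */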

static void io_pages_free(struct page ***pages, int npages)
{
        struct page **page_array;
        int i;

        if (!pages)
                return;

        page_array = *pages;
        if (!page_array)
                return;

        for (i = 0; i < npages; i++)
                unpin_user_page(page_array[i]);
        kvfree(page_array);
        *pages = NULL;
}
|
|
|
|
|
|
|
|
static void *__io_uaddr_map(struct page ***pages, unsigned short *npages,
|
|
|
|
unsigned long uaddr, size_t size)
|
|
|
|
{
|
|
|
|
struct page **page_array;
|
|
|
|
unsigned int nr_pages;
|
2023-11-25 12:02:01 +08:00
|
|
|
void *page_addr;
|
2023-10-03 23:59:58 +08:00
|
|
|
int ret, i;
|
2021-11-06 07:20:54 +08:00
|
|
|
|
|
|
|
*npages = 0;
|
|
|
|
|
|
|
|
if (uaddr & (PAGE_SIZE - 1) || !size)
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
|
|
|
nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
|
|
|
if (nr_pages > USHRT_MAX)
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
page_array = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
|
|
|
|
if (!page_array)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
|
|
|
|
|
|
|
ret = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
|
|
|
|
page_array);
|
|
|
|
if (ret != nr_pages) {
|
|
|
|
err:
|
|
|
|
io_pages_free(&page_array, ret > 0 ? ret : 0);
|
|
|
|
return ret < 0 ? ERR_PTR(ret) : ERR_PTR(-EFAULT);
|
|
|
|
}
|
2023-10-03 23:59:58 +08:00
|
|
|
|
2023-11-25 12:02:01 +08:00
|
|
|
page_addr = page_address(page_array[0]);
|
2023-10-03 23:59:58 +08:00
|
|
|
for (i = 0; i < nr_pages; i++) {
|
2023-11-25 12:02:01 +08:00
|
|
|
ret = -EINVAL;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Can't support mapping user allocated ring memory on 32-bit
|
|
|
|
* archs where it could potentially reside in highmem. Just
|
|
|
|
* fail those with -EINVAL, just like we did on kernels that
|
|
|
|
* didn't support this feature.
|
|
|
|
*/
|
|
|
|
if (PageHighMem(page_array[i]))
|
2023-10-03 23:59:58 +08:00
|
|
|
goto err;
|
2023-11-25 12:02:01 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* No support for discontig pages for now, should either be a
|
|
|
|
* single normal page, or a huge page. Later on we can add
|
|
|
|
* support for remapping discontig pages, for now we will
|
|
|
|
* just fail them with EINVAL.
|
|
|
|
*/
|
|
|
|
if (page_address(page_array[i]) != page_addr)
|
|
|
|
goto err;
|
|
|
|
page_addr += PAGE_SIZE;
|
2023-10-03 23:59:58 +08:00
|
|
|
}
|
|
|
|
|
2021-11-06 07:20:54 +08:00
|
|
|
*pages = page_array;
|
|
|
|
*npages = nr_pages;
|
|
|
|
return page_to_virt(page_array[0]);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *io_rings_map(struct io_ring_ctx *ctx, unsigned long uaddr,
|
|
|
|
size_t size)
|
|
|
|
{
|
|
|
|
return __io_uaddr_map(&ctx->ring_pages, &ctx->n_ring_pages, uaddr,
|
|
|
|
size);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *io_sqes_map(struct io_ring_ctx *ctx, unsigned long uaddr,
|
|
|
|
size_t size)
|
|
|
|
{
|
|
|
|
return __io_uaddr_map(&ctx->sqe_pages, &ctx->n_sqe_pages, uaddr,
|
|
|
|
size);
|
|
|
|
}
|
|
|
|
|
2021-11-06 07:15:46 +08:00
|
|
|
static void io_rings_free(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-11-06 07:20:54 +08:00
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_MMAP)) {
|
|
|
|
io_mem_free(ctx->rings);
|
|
|
|
io_mem_free(ctx->sq_sqes);
|
|
|
|
ctx->rings = NULL;
|
|
|
|
ctx->sq_sqes = NULL;
|
|
|
|
} else {
|
|
|
|
io_pages_free(&ctx->ring_pages, ctx->n_ring_pages);
|
2023-10-18 22:09:27 +08:00
|
|
|
ctx->n_ring_pages = 0;
|
2021-11-06 07:20:54 +08:00
|
|
|
io_pages_free(&ctx->sqe_pages, ctx->n_sqe_pages);
|
2023-10-18 22:09:27 +08:00
|
|
|
ctx->n_sqe_pages = 0;
|
2021-11-06 07:20:54 +08:00
|
|
|
}
|
2021-11-06 07:15:46 +08:00
|
|
|
}
|
|
|
|
|
2023-11-28 11:53:52 +08:00
|
|
|
void *io_mem_alloc(size_t size)
|
2021-08-09 23:09:47 +08:00
|
|
|
{
|
2022-06-13 21:12:45 +08:00
|
|
|
gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
|
2021-11-06 07:13:52 +08:00
|
|
|
void *ret;
|
2021-08-09 23:09:47 +08:00
|
|
|
|
2021-11-06 07:13:52 +08:00
|
|
|
ret = (void *) __get_free_pages(gfp, get_order(size));
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
return ERR_PTR(-ENOMEM);
|
2021-08-09 23:09:47 +08:00
|
|
|
}
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
|
|
|
|
unsigned int cq_entries, size_t *sq_offset)
|
2019-01-11 13:13:58 +08:00
|
|
|
{
|
2022-06-13 21:12:45 +08:00
|
|
|
struct io_rings *rings;
|
|
|
|
size_t off, sq_array_size;
|
2019-01-11 13:13:58 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
off = struct_size(rings, cqes, cq_entries);
|
|
|
|
if (off == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
if (check_shl_overflow(off, 1, &off))
|
|
|
|
return SIZE_MAX;
|
|
|
|
}
|
2021-10-10 06:14:41 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
off = ALIGN(off, SMP_CACHE_BYTES);
|
|
|
|
if (off == 0)
|
|
|
|
return SIZE_MAX;
|
|
|
|
#endif
|
2021-04-01 22:43:43 +08:00
|
|
|
|
2023-08-25 06:53:32 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_NO_SQARRAY) {
|
|
|
|
if (sq_offset)
|
|
|
|
*sq_offset = SIZE_MAX;
|
|
|
|
return off;
|
|
|
|
}
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
if (sq_offset)
|
|
|
|
*sq_offset = off;
|
2021-04-01 22:43:43 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
sq_array_size = array_size(sizeof(u32), sq_entries);
|
|
|
|
if (sq_array_size == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
2019-01-11 13:13:58 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
if (check_add_overflow(off, sq_array_size, &off))
|
|
|
|
return SIZE_MAX;
|
2021-02-19 17:19:36 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
return off;
|
2021-02-19 17:19:36 +08:00
|
|
|
}
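/*
 * Worked example (a sketch, not part of this file): on a typical 64-bit
 * build with 16-byte CQEs, 64-byte SQEs, SMP_CACHE_BYTES == 64, and neither
 * IORING_SETUP_CQE32 nor IORING_SETUP_NO_SQARRAY set, rings_size() for
 * sq_entries = cq_entries = 256 works out roughly as:
 *
 *   off  = struct_size(rings, cqes, 256)
 *        = sizeof(struct io_rings) + 256 * 16          (CQ ring + header)
 *   off  = ALIGN(off, 64)                              (cacheline align)
 *   *sq_offset = off
 *   off += 256 * sizeof(u32) = 1024                    (SQ index array)
 *
 * The 256 * 64 = 16384 bytes of SQEs are not included here; they are sized
 * and mapped separately (see io_sqes_map() above).
 */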
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
|
|
|
|
unsigned int eventfd_async)
|
2021-02-19 17:19:36 +08:00
|
|
|
{
|
2022-06-13 21:12:45 +08:00
|
|
|
struct io_ev_fd *ev_fd;
|
|
|
|
__s32 __user *fds = arg;
|
|
|
|
int fd;
|
2021-02-21 02:03:49 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
|
|
|
|
lockdep_is_held(&ctx->uring_lock));
|
|
|
|
if (ev_fd)
|
|
|
|
return -EBUSY;
|
2021-02-19 17:19:36 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
if (copy_from_user(&fd, fds, sizeof(*fds)))
|
|
|
|
return -EFAULT;
|
2021-03-20 01:22:36 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
ev_fd = kmalloc(sizeof(*ev_fd), GFP_KERNEL);
|
|
|
|
if (!ev_fd)
|
|
|
|
return -ENOMEM;
|
2019-12-10 02:22:50 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
ev_fd->cq_ev_fd = eventfd_ctx_fdget(fd);
|
|
|
|
if (IS_ERR(ev_fd->cq_ev_fd)) {
|
|
|
|
int ret = PTR_ERR(ev_fd->cq_ev_fd);
|
|
|
|
kfree(ev_fd);
|
|
|
|
return ret;
|
|
|
|
}
|
2022-06-20 08:25:55 +08:00
|
|
|
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
ev_fd->eventfd_async = eventfd_async;
|
|
|
|
ctx->has_evfd = true;
|
|
|
|
rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
|
2022-08-30 20:50:12 +08:00
|
|
|
atomic_set(&ev_fd->refs, 1);
|
|
|
|
atomic_set(&ev_fd->ops, 0);
|
2022-06-13 21:12:45 +08:00
|
|
|
return 0;
|
2021-01-16 01:37:50 +08:00
|
|
|
}
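/*
 * Userspace sketch (not part of this file): how an application might drive
 * io_eventfd_register() above through io_uring_register(2), so that posting
 * CQEs also signals an eventfd.  The helper name is illustrative; liburing
 * offers equivalent wrappers.
 */
#include <linux/io_uring.h>
#include <sys/eventfd.h>
#include <sys/syscall.h>
#include <unistd.h>

static int register_cq_eventfd(int ring_fd)
{
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;
	/*
	 * IORING_REGISTER_EVENTFD_ASYNC would signal only for completions
	 * that did not complete inline; IORING_UNREGISTER_EVENTFD (NULL, 0)
	 * undoes this, which io_eventfd_unregister() below handles.
	 */
	if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_EVENTFD,
		    &efd, 1) < 0)
		return -1;
	return efd;
}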
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
static int io_eventfd_unregister(struct io_ring_ctx *ctx)
|
2021-01-16 01:37:51 +08:00
|
|
|
{
|
2022-06-13 21:12:45 +08:00
|
|
|
struct io_ev_fd *ev_fd;
|
|
|
|
|
|
|
|
ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
|
|
|
|
lockdep_is_held(&ctx->uring_lock));
|
|
|
|
if (ev_fd) {
|
|
|
|
ctx->has_evfd = false;
|
|
|
|
rcu_assign_pointer(ctx->io_ev_fd, NULL);
|
2022-08-30 20:50:12 +08:00
|
|
|
if (!atomic_fetch_or(BIT(IO_EVENTFD_OP_FREE_BIT), &ev_fd->ops))
|
|
|
|
call_rcu(&ev_fd->rcu, io_eventfd_ops);
|
2022-06-13 21:12:45 +08:00
|
|
|
return 0;
|
|
|
|
}
|
2021-06-14 09:36:21 +08:00
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
return -ENXIO;
|
2021-04-25 21:32:16 +08:00
|
|
|
}
|
|
|
|
|
2022-06-13 21:12:45 +08:00
|
|
|
static void io_req_caches_free(struct io_ring_ctx *ctx)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2023-01-23 22:37:16 +08:00
|
|
|
struct io_kiocb *req;
|
2021-10-05 03:02:53 +08:00
|
|
|
int nr = 0;
|
2021-02-10 08:03:17 +08:00
|
|
|
|
2021-02-14 00:09:44 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2022-10-17 04:30:50 +08:00
|
|
|
io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
|
2021-02-14 00:09:44 +08:00
|
|
|
|
2022-04-12 22:09:47 +08:00
|
|
|
while (!io_req_cache_empty(ctx)) {
|
2023-01-23 22:37:16 +08:00
|
|
|
req = io_extract_req(ctx);
|
2021-09-25 04:59:47 +08:00
|
|
|
kmem_cache_free(req_cachep, req);
|
2021-10-05 03:02:53 +08:00
|
|
|
nr++;
|
2021-09-25 04:59:47 +08:00
|
|
|
}
|
2021-10-05 03:02:53 +08:00
|
|
|
if (nr)
|
|
|
|
percpu_ref_put_many(&ctx->refs, nr);
|
2021-02-14 00:09:44 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2023-04-04 20:39:54 +08:00
|
|
|
static void io_rsrc_node_cache_free(struct io_cache_entry *entry)
|
|
|
|
{
|
|
|
|
kfree(container_of(entry, struct io_rsrc_node, cache));
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2021-02-18 12:03:43 +08:00
|
|
|
io_sq_thread_finish(ctx);
|
2021-08-10 09:44:23 +08:00
|
|
|
/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
|
2023-04-13 22:28:10 +08:00
|
|
|
if (WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list)))
|
|
|
|
return;
|
2021-08-10 09:44:23 +08:00
|
|
|
|
2021-02-19 17:19:36 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-08-10 09:44:23 +08:00
|
|
|
if (ctx->buf_data)
|
2021-04-25 21:32:25 +08:00
|
|
|
__io_sqe_buffers_unregister(ctx);
|
2021-08-10 09:44:23 +08:00
|
|
|
if (ctx->file_data)
|
2021-04-13 09:58:38 +08:00
|
|
|
__io_sqe_files_unregister(ctx);
|
2022-12-07 11:53:28 +08:00
|
|
|
io_cqring_overflow_kill(ctx);
|
2019-04-12 01:45:41 +08:00
|
|
|
io_eventfd_unregister(ctx);
|
2022-07-08 04:16:20 +08:00
|
|
|
io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free);
|
2022-07-08 04:30:09 +08:00
|
|
|
io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
|
2020-02-24 07:23:11 +08:00
|
|
|
io_destroy_buffers(ctx);
|
2023-04-02 03:50:39 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-04-20 19:03:32 +08:00
|
|
|
if (ctx->sq_creds)
|
|
|
|
put_cred(ctx->sq_creds);
|
2022-06-16 17:22:08 +08:00
|
|
|
if (ctx->submitter_task)
|
|
|
|
put_task_struct(ctx->submitter_task);
|
2019-01-09 23:59:42 +08:00
|
|
|
|
2021-04-01 22:43:46 +08:00
|
|
|
/* there are no registered resources left, nobody uses it */
|
|
|
|
if (ctx->rsrc_node)
|
2023-04-04 20:39:54 +08:00
|
|
|
io_rsrc_node_destroy(ctx, ctx->rsrc_node);
|
2021-04-01 22:43:46 +08:00
|
|
|
|
|
|
|
WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
|
2019-01-09 23:59:42 +08:00
|
|
|
|
2019-01-08 01:46:33 +08:00
|
|
|
#if defined(CONFIG_UNIX)
|
2019-06-13 05:58:43 +08:00
|
|
|
if (ctx->ring_sock) {
|
|
|
|
ctx->ring_sock->file = NULL; /* so that iput() is called */
|
2019-01-08 01:46:33 +08:00
|
|
|
sock_release(ctx->ring_sock);
|
2019-06-13 05:58:43 +08:00
|
|
|
}
|
2019-01-08 01:46:33 +08:00
|
|
|
#endif
|
2021-08-29 09:54:38 +08:00
|
|
|
WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
|
2019-01-08 01:46:33 +08:00
|
|
|
|
2023-04-04 20:39:54 +08:00
|
|
|
io_alloc_cache_free(&ctx->rsrc_node_cache, io_rsrc_node_cache_free);
|
2022-10-04 10:19:08 +08:00
|
|
|
if (ctx->mm_account) {
|
|
|
|
mmdrop(ctx->mm_account);
|
|
|
|
ctx->mm_account = NULL;
|
|
|
|
}
|
2021-11-06 07:15:46 +08:00
|
|
|
io_rings_free(ctx);
|
2023-11-28 07:47:04 +08:00
|
|
|
io_kbuf_mmap_list_free(ctx);
|
2019-01-08 01:46:33 +08:00
|
|
|
|
|
|
|
percpu_ref_exit(&ctx->refs);
|
|
|
|
free_uid(ctx->user);
|
2021-02-28 06:04:18 +08:00
|
|
|
io_req_caches_free(ctx);
|
2021-02-20 03:33:30 +08:00
|
|
|
if (ctx->hash_map)
|
|
|
|
io_wq_put_hash(ctx->hash_map);
|
2022-06-16 17:22:10 +08:00
|
|
|
kfree(ctx->cancel_table.hbs);
|
2022-06-16 17:22:12 +08:00
|
|
|
kfree(ctx->cancel_table_locked.hbs);
|
2022-05-02 00:52:44 +08:00
|
|
|
kfree(ctx->io_bl);
|
|
|
|
xa_destroy(&ctx->io_bl_xa);
|
2019-01-08 01:46:33 +08:00
|
|
|
kfree(ctx);
|
|
|
|
}
|
|
|
|
|
2023-01-09 22:46:09 +08:00
|
|
|
static __cold void io_activate_pollwq_cb(struct callback_head *cb)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(cb, struct io_ring_ctx,
|
|
|
|
poll_wq_task_work);
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ctx->poll_activated = true;
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wake ups for some events between start of polling and activation
|
|
|
|
* might've been lost due to loose synchronisation.
|
|
|
|
*/
|
|
|
|
wake_up_all(&ctx->poll_wq);
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
|
|
|
|
static __cold void io_activate_pollwq(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
/* already activated or in progress */
|
|
|
|
if (ctx->poll_activated || ctx->poll_wq_task_work.func)
|
|
|
|
goto out;
|
|
|
|
if (WARN_ON_ONCE(!ctx->task_complete))
|
|
|
|
goto out;
|
|
|
|
if (!ctx->submitter_task)
|
|
|
|
goto out;
|
|
|
|
/*
|
|
|
|
* with ->submitter_task only the submitter task completes requests, we
|
|
|
|
* only need to sync with it, which is done by injecting a task_work
|
|
|
|
*/
|
|
|
|
init_task_work(&ctx->poll_wq_task_work, io_activate_pollwq_cb);
|
|
|
|
percpu_ref_get(&ctx->refs);
|
|
|
|
if (task_work_add(ctx->submitter_task, &ctx->poll_wq_task_work, TWA_SIGNAL))
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
out:
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
2019-01-08 01:46:33 +08:00
|
|
|
static __poll_t io_uring_poll(struct file *file, poll_table *wait)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
__poll_t mask = 0;
|
|
|
|
|
2023-01-09 22:46:09 +08:00
|
|
|
if (unlikely(!ctx->poll_activated))
|
|
|
|
io_activate_pollwq(ctx);
|
|
|
|
|
2023-01-09 22:46:08 +08:00
|
|
|
poll_wait(file, &ctx->poll_wq, wait);
|
2019-04-25 05:54:17 +08:00
|
|
|
/*
|
|
|
|
* synchronizes with barrier from wq_has_sleeper call in
|
|
|
|
* io_commit_cqring
|
|
|
|
*/
|
2019-01-08 01:46:33 +08:00
|
|
|
smp_rmb();
|
2020-09-04 02:12:41 +08:00
|
|
|
if (!io_sqring_full(ctx))
|
2019-01-08 01:46:33 +08:00
|
|
|
mask |= EPOLLOUT | EPOLLWRNORM;
|
2021-02-05 16:34:21 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't flush cqring overflow list here, just do a simple check.
|
|
|
|
* Otherwise there could possibly be an ABBA deadlock:
|
|
|
|
* CPU0 CPU1
|
|
|
|
* ---- ----
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
*
|
|
|
|
* Users may get EPOLLIN while seeing nothing in the cqring; this
|
2022-11-25 18:34:11 +08:00
|
|
|
* pushes them to do the flush.
|
2021-02-05 16:34:21 +08:00
|
|
|
*/
|
2022-08-30 20:50:08 +08:00
|
|
|
|
2023-01-23 22:37:13 +08:00
|
|
|
if (__io_cqring_events_user(ctx) || io_has_work(ctx))
|
2019-01-08 01:46:33 +08:00
|
|
|
mask |= EPOLLIN | EPOLLRDNORM;
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
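/*
 * Userspace sketch (not part of this file): the ring fd itself is pollable,
 * which is what io_uring_poll() above services.  EPOLLIN means completions
 * (or other work) may be pending, EPOLLOUT means the SQ ring has room.  The
 * helper name is illustrative.
 */
#include <sys/epoll.h>

static int wait_for_ring_readable(int epfd, int ring_fd)
{
	struct epoll_event ev = {
		.events = EPOLLIN,
		.data.fd = ring_fd,
	};

	if (epoll_ctl(epfd, EPOLL_CTL_ADD, ring_fd, &ev) < 0)
		return -1;
	/*
	 * Per the comment above, EPOLLIN can be reported while the CQ ring
	 * looks empty if completions sit on the overflow list; the
	 * application is then expected to enter the kernel and flush them.
	 */
	return epoll_wait(epfd, &ev, 1, -1);
}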
|
|
|
|
|
2020-12-24 11:02:20 +08:00
|
|
|
static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
|
2020-01-29 01:04:42 +08:00
|
|
|
{
|
2021-02-16 04:40:22 +08:00
|
|
|
const struct cred *creds;
|
2020-01-29 01:04:42 +08:00
|
|
|
|
2021-03-08 22:16:16 +08:00
|
|
|
creds = xa_erase(&ctx->personalities, id);
|
2021-02-16 04:40:22 +08:00
|
|
|
if (creds) {
|
|
|
|
put_cred(creds);
|
2020-12-24 11:02:20 +08:00
|
|
|
return 0;
|
2020-10-15 22:46:24 +08:00
|
|
|
}
|
2020-12-24 11:02:20 +08:00
|
|
|
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2021-03-06 19:02:13 +08:00
|
|
|
struct io_tctx_exit {
|
|
|
|
struct callback_head task_work;
|
|
|
|
struct completion completion;
|
2021-03-06 19:02:15 +08:00
|
|
|
struct io_ring_ctx *ctx;
|
2021-03-06 19:02:13 +08:00
|
|
|
};
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_tctx_exit_cb(struct callback_head *cb)
|
2021-03-06 19:02:13 +08:00
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
struct io_tctx_exit *work;
|
|
|
|
|
|
|
|
work = container_of(cb, struct io_tctx_exit, task_work);
|
|
|
|
/*
|
2023-02-17 23:27:23 +08:00
|
|
|
* When @in_cancel, we're in cancellation and it's racy to remove the
|
2021-03-06 19:02:13 +08:00
|
|
|
* node. It'll be removed by the end of cancellation, just ignore it.
|
2022-12-06 17:38:32 +08:00
|
|
|
* tctx can be NULL if the queueing of this task_work raced with
|
|
|
|
* work cancelation off the exec path.
|
2021-03-06 19:02:13 +08:00
|
|
|
*/
|
2023-02-17 23:27:23 +08:00
|
|
|
if (tctx && !atomic_read(&tctx->in_cancel))
|
2021-06-14 09:36:15 +08:00
|
|
|
io_uring_del_tctx_node((unsigned long)work->ctx);
|
2021-03-06 19:02:13 +08:00
|
|
|
complete(&work->completion);
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
|
2021-04-26 06:34:45 +08:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
|
|
|
|
|
|
|
return req->ctx == data;
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_ring_exit_work(struct work_struct *work)
|
2020-04-10 08:14:00 +08:00
|
|
|
{
|
2021-03-06 19:02:13 +08:00
|
|
|
struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
|
2021-03-06 19:02:16 +08:00
|
|
|
unsigned long timeout = jiffies + HZ * 60 * 5;
|
2021-08-09 20:04:17 +08:00
|
|
|
unsigned long interval = HZ / 20;
|
2021-03-06 19:02:13 +08:00
|
|
|
struct io_tctx_exit exit;
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
int ret;
|
2020-04-10 08:14:00 +08:00
|
|
|
|
2020-06-18 05:00:04 +08:00
|
|
|
/*
|
|
|
|
* If we're doing polled IO and end up having requests being
|
|
|
|
* submitted async (out-of-line), then completions can come in while
|
|
|
|
* we're waiting for refs to drop. We need to reap these manually,
|
|
|
|
* as nobody else will be looking for them.
|
|
|
|
*/
|
2020-07-07 21:36:22 +08:00
|
|
|
do {
|
2022-12-07 11:53:28 +08:00
|
|
|
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
io_cqring_overflow_kill(ctx);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2022-08-30 20:50:10 +08:00
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
|
|
|
|
io_move_task_work_from_local(ctx);
|
|
|
|
|
2022-06-20 08:25:52 +08:00
|
|
|
while (io_uring_try_cancel_requests(ctx, NULL, true))
|
|
|
|
cond_resched();
|
|
|
|
|
2021-04-26 06:34:45 +08:00
|
|
|
if (ctx->sq_data) {
|
|
|
|
struct io_sq_data *sqd = ctx->sq_data;
|
|
|
|
struct task_struct *tsk;
|
|
|
|
|
|
|
|
io_sq_thread_park(sqd);
|
|
|
|
tsk = sqd->thread;
|
|
|
|
if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
|
|
|
|
io_wq_cancel_cb(tsk->io_uring->io_wq,
|
|
|
|
io_cancel_ctx_cb, ctx, true);
|
|
|
|
io_sq_thread_unpark(sqd);
|
|
|
|
}
|
2021-03-06 19:02:16 +08:00
|
|
|
|
2021-10-05 03:02:53 +08:00
|
|
|
io_req_caches_free(ctx);
|
|
|
|
|
2021-08-09 20:04:17 +08:00
|
|
|
if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
|
|
|
|
/* there is little hope left, don't run it too often */
|
|
|
|
interval = HZ * 60;
|
|
|
|
}
|
io_uring: wait interruptibly for request completions on exit
When the ring exits, cleanup is done and the final cancelation and
waiting on completions is done by io_ring_exit_work. That function is
invoked by kworker, which doesn't take any signals. Because of that, it
doesn't really matter if we wait for completions in TASK_INTERRUPTIBLE
or TASK_UNINTERRUPTIBLE state. However, it does matter to the hung task
detection checker!
Normally we expect cancelations and completions to happen rather
quickly. Some test cases, however, will exit the ring and leave the
owning task stopped (e.g. via SIGSTOP). If the owning task needs to run
task_work to complete requests, then io_ring_exit_work won't make any
progress until the task is runnable again. Hence io_ring_exit_work can
trigger the hung task detection, which is particularly problematic if
panic-on-hung-task is enabled.
As the ring exit doesn't take signals to begin with, have it wait
interruptibly rather than uninterruptibly. io_uring has a separate
stuck-exit warning that triggers independently anyway, so we're not
really missing anything by making this switch.
Cc: stable@vger.kernel.org # 5.10+
Link: https://lore.kernel.org/r/b0e4aaef-7088-56ce-244c-976edeac0e66@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-12 11:14:09 +08:00
|
|
|
/*
|
|
|
|
* This is really an uninterruptible wait, as it has to be
|
|
|
|
* complete. But it's also run from a kworker, which doesn't
|
|
|
|
* take signals, so it's fine to make it interruptible. This
|
|
|
|
* avoids scenarios where we knowingly can wait much longer
|
|
|
|
* on completions, for example if someone does a SIGSTOP on
|
|
|
|
* a task that needs to finish task_work to make this loop
|
|
|
|
* complete. That's a synthetic situation that should not
|
|
|
|
* cause a stuck task backtrace, and hence a potential panic
|
|
|
|
* on stuck tasks if that is enabled.
|
|
|
|
*/
|
|
|
|
} while (!wait_for_completion_interruptible_timeout(&ctx->ref_comp, interval));
|
2021-03-06 19:02:13 +08:00
|
|
|
|
2021-04-14 20:38:34 +08:00
|
|
|
init_completion(&exit.completion);
|
|
|
|
init_task_work(&exit.task_work, io_tctx_exit_cb);
|
|
|
|
exit.ctx = ctx;
|
2023-12-03 23:37:53 +08:00
|
|
|
|
2021-03-06 19:02:13 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
while (!list_empty(&ctx->tctx_list)) {
|
2021-03-06 19:02:16 +08:00
|
|
|
WARN_ON_ONCE(time_after(jiffies, timeout));
|
|
|
|
|
2021-03-06 19:02:13 +08:00
|
|
|
node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
|
|
|
|
ctx_node);
|
2021-04-14 20:38:34 +08:00
|
|
|
/* don't spin on a single task if cancellation failed */
|
|
|
|
list_rotate_left(&ctx->tctx_list);
|
2021-03-06 19:02:13 +08:00
|
|
|
ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
|
|
|
|
if (WARN_ON_ONCE(ret))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2023-06-12 11:14:09 +08:00
|
|
|
/*
|
|
|
|
* See comment above for
|
|
|
|
* wait_for_completion_interruptible_timeout() on why this
|
|
|
|
* wait is marked as interruptible.
|
|
|
|
*/
|
|
|
|
wait_for_completion_interruptible(&exit.completion);
|
2021-03-06 19:02:13 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-06 19:02:13 +08:00
|
|
|
|
2023-04-06 21:20:08 +08:00
|
|
|
/* pairs with RCU read section in io_req_local_work_add() */
|
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
|
|
|
|
synchronize_rcu();
|
|
|
|
|
2020-04-10 08:14:00 +08:00
|
|
|
io_ring_ctx_free(ctx);
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
2021-03-08 22:16:16 +08:00
|
|
|
unsigned long index;
|
|
|
|
struct creds *creds;
|
|
|
|
|
2019-01-08 01:46:33 +08:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
percpu_ref_kill(&ctx->refs);
|
2021-03-08 22:16:16 +08:00
|
|
|
xa_for_each(&ctx->personalities, index, creds)
|
|
|
|
io_unregister_personality(ctx, index);
|
2022-06-16 17:22:12 +08:00
|
|
|
if (ctx->rings)
|
|
|
|
io_poll_remove_all(ctx, NULL, true);
|
2019-01-08 01:46:33 +08:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
2022-10-17 04:30:51 +08:00
|
|
|
/*
|
|
|
|
* If we failed setting up the ctx, we might not have any rings
|
|
|
|
* and therefore did not submit any requests
|
|
|
|
*/
|
|
|
|
if (ctx->rings)
|
2022-03-22 06:02:20 +08:00
|
|
|
io_kill_timeouts(ctx, NULL, true);
|
2020-07-10 23:13:34 +08:00
|
|
|
|
2023-06-29 01:06:05 +08:00
|
|
|
flush_delayed_work(&ctx->fallback_work);
|
|
|
|
|
2020-04-10 08:14:00 +08:00
|
|
|
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
|
2020-08-20 01:10:51 +08:00
|
|
|
/*
|
|
|
|
* Use system_unbound_wq to avoid spawning tons of event kworkers
|
|
|
|
* if we're exiting a ton of rings at the same time. It just adds
|
|
|
|
* noise and overhead; there's no discernible change in runtime
|
|
|
|
* over using system_wq.
|
|
|
|
*/
|
|
|
|
queue_work(system_unbound_wq, &ctx->exit_work);
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_uring_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
|
|
|
|
file->private_data = NULL;
|
|
|
|
io_ring_ctx_wait_and_kill(ctx);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-11-06 21:00:26 +08:00
|
|
|
struct io_task_cancel {
|
|
|
|
struct task_struct *task;
|
2021-05-17 05:58:04 +08:00
|
|
|
bool all;
|
2020-11-06 21:00:26 +08:00
|
|
|
};
|
2020-08-13 07:33:30 +08:00
|
|
|
|
2020-11-06 21:00:26 +08:00
|
|
|
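/* io-wq match callback: true if the work item belongs to the task being cancelled (see struct io_task_cancel). */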
static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
|
2020-08-16 23:23:05 +08:00
|
|
|
{
|
2020-11-06 06:31:37 +08:00
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2020-11-06 21:00:26 +08:00
|
|
|
struct io_task_cancel *cancel = data;
|
2020-11-06 06:31:37 +08:00
|
|
|
|
2021-11-26 22:38:15 +08:00
|
|
|
return io_match_task_safe(req, cancel->task, cancel->all);
|
2020-08-16 23:23:05 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
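/* Cancel deferred (drain) requests: cut the matching entry, and everything queued before it, off ->defer_list and fail them with -ECANCELED. */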
static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
|
|
|
|
struct task_struct *task,
|
|
|
|
bool cancel_all)
|
2020-09-06 05:45:14 +08:00
|
|
|
{
|
2021-03-12 07:29:35 +08:00
|
|
|
struct io_defer_entry *de;
|
2020-09-06 05:45:14 +08:00
|
|
|
LIST_HEAD(list);
|
|
|
|
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-09-06 05:45:14 +08:00
|
|
|
list_for_each_entry_reverse(de, &ctx->defer_list, list) {
|
2021-11-26 22:38:15 +08:00
|
|
|
if (io_match_task_safe(de->req, task, cancel_all)) {
|
2020-09-06 05:45:14 +08:00
|
|
|
list_cut_position(&list, &ctx->defer_list, &de->list);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2021-08-11 05:18:27 +08:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-12 07:29:35 +08:00
|
|
|
if (list_empty(&list))
|
|
|
|
return false;
|
2020-09-06 05:45:14 +08:00
|
|
|
|
|
|
|
while (!list_empty(&list)) {
|
|
|
|
de = list_first_entry(&list, struct io_defer_entry, list);
|
|
|
|
list_del_init(&de->list);
|
2022-11-23 19:33:37 +08:00
|
|
|
io_req_task_queue_fail(de->req, -ECANCELED);
|
2020-09-06 05:45:14 +08:00
|
|
|
kfree(de);
|
|
|
|
}
|
2021-03-12 07:29:35 +08:00
|
|
|
return true;
|
2020-09-06 05:45:14 +08:00
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
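/* Cancel io-wq work targeting this ring, walking every task attached to it via ->tctx_list. */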
static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
|
2021-03-06 19:02:17 +08:00
|
|
|
{
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
enum io_wq_cancel cret;
|
|
|
|
bool ret = false;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
|
|
|
|
struct io_uring_task *tctx = node->task->io_uring;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* io_wq will stay alive while we hold uring_lock, because it's
|
|
|
|
* killed after ctx nodes, which requires taking the lock.
|
|
|
|
*/
|
|
|
|
if (!tctx || !tctx->io_wq)
|
|
|
|
continue;
|
|
|
|
cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
|
|
|
|
ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
|
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2022-06-20 08:25:52 +08:00
|
|
|
static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
|
2021-10-05 03:02:54 +08:00
|
|
|
struct task_struct *task,
|
|
|
|
bool cancel_all)
|
2021-02-04 21:51:56 +08:00
|
|
|
{
|
2021-05-17 05:58:04 +08:00
|
|
|
struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
|
2021-03-06 19:02:17 +08:00
|
|
|
struct io_uring_task *tctx = task ? task->io_uring : NULL;
|
2022-06-20 08:25:52 +08:00
|
|
|
enum io_wq_cancel cret;
|
|
|
|
bool ret = false;
|
2021-02-04 21:51:56 +08:00
|
|
|
|
2023-04-06 21:20:14 +08:00
|
|
|
/* set it so io_req_local_work_add() would wake us up */
|
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
|
|
|
|
atomic_set(&ctx->cq_wait_nr, 1);
|
|
|
|
smp_mb();
|
|
|
|
}
|
|
|
|
|
2022-03-22 06:02:20 +08:00
|
|
|
/* failed during ring init, it couldn't have issued any requests */
|
|
|
|
if (!ctx->rings)
|
2022-06-20 08:25:52 +08:00
|
|
|
return false;
|
2022-03-22 06:02:20 +08:00
|
|
|
|
2022-06-20 08:25:52 +08:00
|
|
|
if (!task) {
|
|
|
|
ret |= io_uring_try_cancel_iowq(ctx);
|
|
|
|
} else if (tctx && tctx->io_wq) {
|
|
|
|
/*
|
|
|
|
* Cancels requests of all rings, not only @ctx, but
|
|
|
|
* it's fine as the task is in exit/exec.
|
|
|
|
*/
|
|
|
|
cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
|
|
|
|
&cancel, true);
|
|
|
|
ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
|
|
|
|
}
|
2021-02-04 21:51:56 +08:00
|
|
|
|
2022-06-20 08:25:52 +08:00
|
|
|
/* SQPOLL thread does its own polling */
|
|
|
|
if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
|
|
|
|
(ctx->sq_data && ctx->sq_data->thread == current)) {
|
|
|
|
while (!wq_list_empty(&ctx->iopoll_list)) {
|
|
|
|
io_iopoll_try_reap_events(ctx);
|
|
|
|
ret = true;
|
2023-01-28 00:28:13 +08:00
|
|
|
cond_resched();
|
2021-02-04 21:51:56 +08:00
|
|
|
}
|
|
|
|
}
|
2022-06-20 08:25:52 +08:00
|
|
|
|
2023-01-05 19:22:23 +08:00
|
|
|
if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
|
|
|
|
io_allowed_defer_tw_run(ctx))
|
2022-08-30 20:50:10 +08:00
|
|
|
ret |= io_run_local_work(ctx) > 0;
|
2022-06-20 08:25:52 +08:00
|
|
|
ret |= io_cancel_defer_files(ctx, task, cancel_all);
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ret |= io_poll_remove_all(ctx, task, cancel_all);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
ret |= io_kill_timeouts(ctx, task, cancel_all);
|
|
|
|
if (task)
|
2022-08-30 20:50:10 +08:00
|
|
|
ret |= io_run_task_work() > 0;
|
2022-06-20 08:25:52 +08:00
|
|
|
return ret;
|
2021-02-04 21:51:56 +08:00
|
|
|
}
|
|
|
|
|
2021-04-11 08:46:27 +08:00
|
|
|
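/* Requests this task still has in flight; with @tracked, only count requests marked for cancellation tracking (REQ_F_INFLIGHT). */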
static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
|
io_uring: cancel sqpoll via task_work
1) The first problem is io_uring_cancel_sqpoll() ->
io_uring_cancel_task_requests() basically doing park(); park(); and so
hanging.
2) Another one is more subtle: the master task is doing cancellations,
but the SQPOLL task submits in between the end of the cancellation and
finish(), with those requests taking a ref to the ctx, and so eternally
locking it up.
3) Yet another is a dying SQPOLL task doing io_uring_cancel_sqpoll()
while the owner task runs the same io_uring_cancel_sqpoll(), and the two
race for tctx->wait events. There are probably more of them.
Instead, do SQPOLL cancellations from within the SQPOLL task context via
task_work, see io_sqpoll_cancel_sync(). With that we don't need the
temporary park()/unpark() during cancellation, which is ugly, subtle and
doesn't allow io_run_task_work() to be run properly anyway.
io_uring_cancel_sqpoll() is called only from SQPOLL task context and
under sqd locking, so all parking is removed from there. As a result,
io_sq_thread_[un]park() and io_sq_thread_stop() are no longer used by
the SQPOLL task, which spares us some headaches.
Also remove ctx->sqd_list early to avoid 2), and kill tctx->sqpoll,
which is not used anymore.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-12 07:29:38 +08:00
|
|
|
{
|
2021-04-11 08:46:27 +08:00
|
|
|
if (tracked)
|
2022-06-02 13:57:02 +08:00
|
|
|
return atomic_read(&tctx->inflight_tracked);
|
2021-03-12 07:29:38 +08:00
|
|
|
return percpu_counter_sum(&tctx->inflight);
|
|
|
|
}
|
|
|
|
|
2021-06-14 09:36:23 +08:00
|
|
|
/*
|
|
|
|
* Find any io_uring ctx that this task has registered or done IO on, and cancel
|
2021-12-09 23:54:29 +08:00
|
|
|
* requests. @sqd should be non-NULL iff it's an SQPOLL thread cancellation.
|
2021-06-14 09:36:23 +08:00
|
|
|
*/
|
2022-05-25 23:13:39 +08:00
|
|
|
__cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
|
2021-02-08 06:34:26 +08:00
|
|
|
{
|
2021-03-12 07:29:38 +08:00
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
2021-04-18 21:52:09 +08:00
|
|
|
struct io_ring_ctx *ctx;
|
2023-04-06 21:20:14 +08:00
|
|
|
struct io_tctx_node *node;
|
|
|
|
unsigned long index;
|
2021-02-08 06:34:26 +08:00
|
|
|
s64 inflight;
|
|
|
|
DEFINE_WAIT(wait);
|
2020-10-30 23:37:30 +08:00
|
|
|
|
2021-06-14 09:36:23 +08:00
|
|
|
WARN_ON_ONCE(sqd && sqd->thread != current);
|
|
|
|
|
2021-04-27 20:51:49 +08:00
|
|
|
if (!current->io_uring)
|
|
|
|
return;
|
2021-05-23 22:48:39 +08:00
|
|
|
if (tctx->io_wq)
|
|
|
|
io_wq_exit_start(tctx->io_wq);
|
|
|
|
|
2023-02-17 23:27:23 +08:00
|
|
|
atomic_inc(&tctx->in_cancel);
|
2021-02-08 06:34:26 +08:00
|
|
|
do {
|
2022-06-20 08:25:52 +08:00
|
|
|
bool loop = false;
|
|
|
|
|
2021-08-09 20:04:20 +08:00
|
|
|
io_uring_drop_tctx_refs(current);
|
2021-02-08 06:34:26 +08:00
|
|
|
/* read completions before cancelations */
|
2021-06-14 09:36:23 +08:00
|
|
|
inflight = tctx_inflight(tctx, !cancel_all);
|
2021-02-08 06:34:26 +08:00
|
|
|
if (!inflight)
|
|
|
|
break;
|
2020-10-30 23:37:30 +08:00
|
|
|
|
2021-06-14 09:36:23 +08:00
|
|
|
if (!sqd) {
|
|
|
|
xa_for_each(&tctx->xa, index, node) {
|
|
|
|
/* sqpoll task will cancel all its requests */
|
|
|
|
if (node->ctx->sq_data)
|
|
|
|
continue;
|
2022-06-20 08:25:52 +08:00
|
|
|
loop |= io_uring_try_cancel_requests(node->ctx,
|
|
|
|
current, cancel_all);
|
2021-06-14 09:36:23 +08:00
|
|
|
}
|
|
|
|
} else {
|
|
|
|
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
|
2022-06-20 08:25:52 +08:00
|
|
|
loop |= io_uring_try_cancel_requests(ctx,
|
|
|
|
current,
|
|
|
|
cancel_all);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (loop) {
|
|
|
|
cond_resched();
|
|
|
|
continue;
|
2021-06-14 09:36:23 +08:00
|
|
|
}
|
2021-05-23 22:48:39 +08:00
|
|
|
|
2021-12-09 23:54:29 +08:00
|
|
|
prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
|
|
|
|
io_run_task_work();
|
2021-08-09 20:04:20 +08:00
|
|
|
io_uring_drop_tctx_refs(current);
|
2023-04-06 21:20:14 +08:00
|
|
|
xa_for_each(&tctx->xa, index, node) {
|
|
|
|
if (!llist_empty(&node->ctx->work_llist)) {
|
|
|
|
WARN_ON_ONCE(node->ctx->submitter_task &&
|
|
|
|
node->ctx->submitter_task != current);
|
|
|
|
goto end_wait;
|
|
|
|
}
|
|
|
|
}
|
2020-09-14 03:09:39 +08:00
|
|
|
/*
|
2021-01-26 23:28:26 +08:00
|
|
|
* If we've seen completions, retry without waiting. This
|
|
|
|
* avoids a race where a completion comes in before we did
|
|
|
|
* prepare_to_wait().
|
2020-09-14 03:09:39 +08:00
|
|
|
*/
|
2021-05-17 05:58:04 +08:00
|
|
|
if (inflight == tctx_inflight(tctx, !cancel_all))
|
2021-01-26 23:28:26 +08:00
|
|
|
schedule();
|
2023-04-06 21:20:14 +08:00
|
|
|
end_wait:
|
2020-12-20 21:21:44 +08:00
|
|
|
finish_wait(&tctx->wait, &wait);
|
2020-10-16 06:24:45 +08:00
|
|
|
} while (1);
|
2021-01-05 04:43:29 +08:00
|
|
|
|
2021-02-27 19:16:46 +08:00
|
|
|
io_uring_clean_tctx(tctx);
|
2021-05-17 05:58:04 +08:00
|
|
|
if (cancel_all) {
|
2022-01-09 08:53:22 +08:00
|
|
|
/*
|
|
|
|
* We shouldn't run task_works after cancel, so just leave
|
2023-02-17 23:27:23 +08:00
|
|
|
* ->in_cancel set for normal exit.
|
2022-01-09 08:53:22 +08:00
|
|
|
*/
|
2023-02-17 23:27:23 +08:00
|
|
|
atomic_dec(&tctx->in_cancel);
|
2021-04-11 08:46:27 +08:00
|
|
|
/* for exec all current's requests should be gone, kill tctx */
|
|
|
|
__io_uring_free(current);
|
|
|
|
}
|
2020-06-15 15:24:04 +08:00
|
|
|
}
|
|
|
|
|
2021-08-12 12:14:35 +08:00
|
|
|
void __io_uring_cancel(bool cancel_all)
|
2021-06-14 09:36:23 +08:00
|
|
|
{
|
2021-08-12 12:14:35 +08:00
|
|
|
io_uring_cancel_generic(cancel_all, NULL);
|
2021-06-14 09:36:23 +08:00
|
|
|
}
|
|
|
|
|
2019-11-28 19:53:22 +08:00
|
|
|
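/* Translate an mmap offset on the ring fd into the kernel address backing that region, or an ERR_PTR if the offset is invalid or mmap is disallowed. */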
static void *io_uring_validate_mmap_request(struct file *file,
|
|
|
|
loff_t pgoff, size_t sz)
|
2019-01-08 01:46:33 +08:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
2019-11-28 19:53:22 +08:00
|
|
|
loff_t offset = pgoff << PAGE_SHIFT;
|
2019-01-08 01:46:33 +08:00
|
|
|
struct page *page;
|
|
|
|
void *ptr;
|
|
|
|
|
2023-03-15 01:07:19 +08:00
|
|
|
switch (offset & IORING_OFF_MMAP_MASK) {
|
2019-01-08 01:46:33 +08:00
|
|
|
case IORING_OFF_SQ_RING:
|
2019-08-27 01:23:46 +08:00
|
|
|
case IORING_OFF_CQ_RING:
|
2023-11-28 08:08:19 +08:00
|
|
|
/* Don't allow mmap if the ring was set up without it */
|
|
|
|
if (ctx->flags & IORING_SETUP_NO_MMAP)
|
|
|
|
return ERR_PTR(-EINVAL);
|
2019-08-27 01:23:46 +08:00
|
|
|
ptr = ctx->rings;
|
2019-01-08 01:46:33 +08:00
|
|
|
break;
|
|
|
|
case IORING_OFF_SQES:
|
2023-11-28 08:08:19 +08:00
|
|
|
/* Don't allow mmap if the ring was set up without it */
|
|
|
|
if (ctx->flags & IORING_SETUP_NO_MMAP)
|
|
|
|
return ERR_PTR(-EINVAL);
|
2019-01-08 01:46:33 +08:00
|
|
|
ptr = ctx->sq_sqes;
|
|
|
|
break;
|
2023-03-15 01:07:19 +08:00
|
|
|
case IORING_OFF_PBUF_RING: {
|
|
|
|
unsigned int bgid;
|
|
|
|
|
|
|
|
bgid = (offset & ~IORING_OFF_MMAP_MASK) >> IORING_OFF_PBUF_SHIFT;
|
io_uring: free io_buffer_list entries via RCU
commit 5cf4f52e6d8aa2d3b7728f568abbf9d42a3af252 upstream.
mmap_lock nests under uring_lock out of necessity, as we may be doing
user copies with uring_lock held. However, for mmap of provided buffer
rings, we attempt to grab uring_lock with mmap_lock already held from
do_mmap(). This makes lockdep, rightfully, complain:
WARNING: possible circular locking dependency detected
6.7.0-rc1-00009-gff3337ebaf94-dirty #4438 Not tainted
------------------------------------------------------
buf-ring.t/442 is trying to acquire lock:
ffff00020e1480a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_uring_validate_mmap_request.isra.0+0x4c/0x140
but task is already holding lock:
ffff0000dc226190 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x124/0x264
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&mm->mmap_lock){++++}-{3:3}:
__might_fault+0x90/0xbc
io_register_pbuf_ring+0x94/0x488
__arm64_sys_io_uring_register+0x8dc/0x1318
invoke_syscall+0x5c/0x17c
el0_svc_common.constprop.0+0x108/0x130
do_el0_svc+0x2c/0x38
el0_svc+0x4c/0x94
el0t_64_sync_handler+0x118/0x124
el0t_64_sync+0x168/0x16c
-> #0 (&ctx->uring_lock){+.+.}-{3:3}:
__lock_acquire+0x19a0/0x2d14
lock_acquire+0x2e0/0x44c
__mutex_lock+0x118/0x564
mutex_lock_nested+0x20/0x28
io_uring_validate_mmap_request.isra.0+0x4c/0x140
io_uring_mmu_get_unmapped_area+0x3c/0x98
get_unmapped_area+0xa4/0x158
do_mmap+0xec/0x5b4
vm_mmap_pgoff+0x158/0x264
ksys_mmap_pgoff+0x1d4/0x254
__arm64_sys_mmap+0x80/0x9c
invoke_syscall+0x5c/0x17c
el0_svc_common.constprop.0+0x108/0x130
do_el0_svc+0x2c/0x38
el0_svc+0x4c/0x94
el0t_64_sync_handler+0x118/0x124
el0t_64_sync+0x168/0x16c
From that mmap(2) path, we really just need to ensure that the buffer
list doesn't go away from underneath us. For the lower indexed entries,
they never go away until the ring is freed and we can always sanely
reference those as long as the caller has a file reference. For the
higher indexed ones in our xarray, we just need to ensure that the
buffer list remains valid while we return the address of it.
Free the higher indexed io_buffer_list entries via RCU. With that we can
avoid needing ->uring_lock inside mmap(2), and simply hold the RCU read
lock around the buffer list lookup and address check.
To ensure that the arrayed lookup only returns a valid, fully formed
entry via RCU lookup, add an 'is_ready' flag that we access with store
and release memory ordering.
but doesn't hurt either. Since this isn't a fast path, retain it across
both types. Similarly, for the allocated array inside the ctx, ensure
we use the proper load/acquire as setup could in theory be running in
parallel with mmap.
While in there, add a few lockdep checks for documentation purposes.
Cc: stable@vger.kernel.org
Fixes: c56e022c0a27 ("io_uring: add support for user mapped provided buffer ring")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-11-28 08:54:40 +08:00
|
|
|
rcu_read_lock();
|
2023-03-15 01:07:19 +08:00
|
|
|
ptr = io_pbuf_get_address(ctx, bgid);
|
2023-11-28 08:54:40 +08:00
|
|
|
rcu_read_unlock();
|
2023-03-15 01:07:19 +08:00
|
|
|
if (!ptr)
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
break;
|
|
|
|
}
|
2019-01-08 01:46:33 +08:00
|
|
|
default:
|
2019-11-28 19:53:22 +08:00
|
|
|
return ERR_PTR(-EINVAL);
|
2019-01-08 01:46:33 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
page = virt_to_head_page(ptr);
|
2019-09-24 06:34:25 +08:00
|
|
|
if (sz > page_size(page))
|
2019-11-28 19:53:22 +08:00
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
|
|
|
return ptr;
|
|
|
|
}
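The offsets validated above are the same IORING_OFF_* constants an application passes to mmap(2) on the ring fd. Below is a minimal, hedged userspace sketch (not part of this file) of how the three mappings are typically established after io_uring_setup(2); the sizes follow the io_uring_setup(2) man page, error handling is abbreviated, and liburing's io_uring_queue_init() wraps roughly this same sequence.

#define _GNU_SOURCE
#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static int example_map_rings(void)
{
	struct io_uring_params p;
	size_t sq_sz, cq_sz, sqes_sz;
	void *sq_ring, *cq_ring, *sqes;
	int fd;

	memset(&p, 0, sizeof(p));
	/* Requires kernel headers that define __NR_io_uring_setup (Linux 5.1+) */
	fd = syscall(__NR_io_uring_setup, 8, &p);	/* 8 SQ entries */
	if (fd < 0)
		return -1;

	/* Ring sizes, as documented in io_uring_setup(2) */
	sq_sz = p.sq_off.array + p.sq_entries * sizeof(__u32);
	cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
	sqes_sz = p.sq_entries * sizeof(struct io_uring_sqe);

	/* These offsets are what io_uring_validate_mmap_request() switches on */
	sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
	cq_ring = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_CQ_RING);
	sqes = mmap(NULL, sqes_sz, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQES);
	if (sq_ring == MAP_FAILED || cq_ring == MAP_FAILED || sqes == MAP_FAILED) {
		close(fd);
		return -1;
	}

	printf("SQ ring %p, CQ ring %p, SQEs %p\n", sq_ring, cq_ring, sqes);
	return fd;
}

int main(void)
{
	return example_map_rings() < 0 ? 1 : 0;
}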
|
|
|
|
|
|
|
|
#ifdef CONFIG_MMU
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
|
2019-11-28 19:53:22 +08:00
|
|
|
{
|
|
|
|
size_t sz = vma->vm_end - vma->vm_start;
|
|
|
|
unsigned long pfn;
|
|
|
|
void *ptr;
|
|
|
|
|
|
|
|
ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
|
|
|
|
if (IS_ERR(ptr))
|
|
|
|
return PTR_ERR(ptr);
|
2019-01-08 01:46:33 +08:00

	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}
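
For orientation, here is a minimal userspace counterpart of the mmap path above. It is an illustrative sketch, not part of this file: the raw syscall wrapper around io_uring_setup(2) is assumed (libc provides none), and error handling is reduced to the bare minimum. After setup returns a ring fd, the application maps the SQ ring, CQ ring and SQE array at the fixed IORING_OFF_* offsets, sizing each region from the offsets returned in struct io_uring_params.

/* Userspace sketch: map the rings published by io_uring_setup(2). */
#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int setup_and_map_rings(unsigned entries, struct io_uring_params *p,
			       void **sq_ring, void **cq_ring,
			       struct io_uring_sqe **sqes)
{
	int fd = syscall(__NR_io_uring_setup, entries, p);

	if (fd < 0)
		return -1;
	*sq_ring = mmap(NULL, p->sq_off.array + p->sq_entries * sizeof(__u32),
			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			fd, IORING_OFF_SQ_RING);
	*cq_ring = mmap(NULL, p->cq_off.cqes + p->cq_entries * sizeof(struct io_uring_cqe),
			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			fd, IORING_OFF_CQ_RING);
	*sqes = mmap(NULL, p->sq_entries * sizeof(struct io_uring_sqe),
		     PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		     fd, IORING_OFF_SQES);
	if (*sq_ring == MAP_FAILED || *cq_ring == MAP_FAILED || *sqes == MAP_FAILED)
		return -1;
	return fd;
}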

static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
					unsigned long addr, unsigned long len,
					unsigned long pgoff, unsigned long flags)
{
	void *ptr;

	/*
	 * Do not allow to map to user-provided address to avoid breaking the
	 * aliasing rules. Userspace is not able to guess the offset address of
	 * kernel kmalloc()ed memory area.
	 */
	if (addr)
		return -EINVAL;

	ptr = io_uring_validate_mmap_request(filp, pgoff, len);
	if (IS_ERR(ptr))
		return -ENOMEM;

	/*
	 * Some architectures have strong cache aliasing requirements.
	 * For such architectures we need a coherent mapping which aliases
	 * kernel memory *and* userspace memory. To achieve that:
	 * - use a NULL file pointer to reference physical memory, and
	 * - use the kernel virtual address of the shared io_uring context
	 *   (instead of the userspace-provided address, which has to be 0UL
	 *   anyway).
	 * - use the same pgoff which the get_unmapped_area() uses to
	 *   calculate the page colouring.
	 * For architectures without such aliasing requirements, the
	 * architecture will return any suitable mapping because addr is 0.
	 */
	filp = NULL;
	flags |= MAP_SHARED;
	pgoff = 0;	/* has been translated to ptr above */
#ifdef SHM_COLOUR
	addr = (uintptr_t) ptr;
	pgoff = addr >> PAGE_SHIFT;
#else
	addr = 0UL;
#endif
	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
}

#else /* !CONFIG_MMU */

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -EINVAL;
}

static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
{
	return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
}

static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
	unsigned long addr, unsigned long len,
	unsigned long pgoff, unsigned long flags)
{
	void *ptr;

	ptr = io_uring_validate_mmap_request(file, pgoff, len);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	return (unsigned long) ptr;
}

#endif /* !CONFIG_MMU */
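
Because the CQ ring is shared, an application doing IRQ-driven IO can reap completions straight from the mapped ring without a system call. The following userspace sketch is illustrative only: the cq_view struct is a hypothetical helper whose pointers are derived from the mapped CQ ring plus the p->cq_off.* offsets returned by io_uring_setup(2).

/* Userspace sketch: consume one CQE directly from the mmap'ed CQ ring. */
#include <linux/io_uring.h>
#include <stdatomic.h>

struct cq_view {
	_Atomic unsigned *khead;	/* cq_ring + p->cq_off.head */
	_Atomic unsigned *ktail;	/* cq_ring + p->cq_off.tail */
	unsigned ring_mask;		/* *(unsigned *)(cq_ring + p->cq_off.ring_mask) */
	struct io_uring_cqe *cqes;	/* cq_ring + p->cq_off.cqes */
};

static int reap_one_cqe(struct cq_view *cq, struct io_uring_cqe *out)
{
	unsigned head = atomic_load_explicit(cq->khead, memory_order_relaxed);
	/* acquire pairs with the kernel's release store of the CQ tail */
	unsigned tail = atomic_load_explicit(cq->ktail, memory_order_acquire);

	if (head == tail)
		return 0;	/* nothing completed yet */
	*out = cq->cqes[head & cq->ring_mask];
	/* release hands the consumed slot back to the kernel */
	atomic_store_explicit(cq->khead, head + 1, memory_order_release);
	return 1;
}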

static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t argsz)
{
	if (flags & IORING_ENTER_EXT_ARG) {
		struct io_uring_getevents_arg arg;

		if (argsz != sizeof(arg))
			return -EINVAL;
		if (copy_from_user(&arg, argp, sizeof(arg)))
			return -EFAULT;
	}
	return 0;
}

static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
			  struct __kernel_timespec __user **ts,
			  const sigset_t __user **sig)
{
	struct io_uring_getevents_arg arg;

	/*
	 * If EXT_ARG isn't set, then we have no timespec and the argp pointer
	 * is just a pointer to the sigset_t.
	 */
	if (!(flags & IORING_ENTER_EXT_ARG)) {
		*sig = (const sigset_t __user *) argp;
		*ts = NULL;
		return 0;
	}

	/*
	 * EXT_ARG is set - ensure we agree on the size of it and copy in our
	 * timespec and sigset_t pointers if good.
	 */
	if (*argsz != sizeof(arg))
		return -EINVAL;
	if (copy_from_user(&arg, argp, sizeof(arg)))
		return -EFAULT;
	if (arg.pad)
		return -EINVAL;
	*sig = u64_to_user_ptr(arg.sigmask);
	*argsz = arg.sigmask_sz;
	*ts = u64_to_user_ptr(arg.ts);
	return 0;
}
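
A userspace sketch of the EXT_ARG convention parsed above (illustrative; the raw io_uring_enter(2) wrapper is assumed): with IORING_ENTER_EXT_ARG set, the last two syscall arguments describe a struct io_uring_getevents_arg rather than a bare sigset_t, which is how a completion wait can carry both a signal mask and a timeout.

/* Userspace sketch: wait for completions with a timeout via EXT_ARG. */
#include <linux/io_uring.h>
#include <linux/time_types.h>
#include <signal.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static int wait_cqes_timeout(int ring_fd, unsigned min_complete,
			     const sigset_t *mask, struct __kernel_timespec *ts)
{
	struct io_uring_getevents_arg arg = {
		.sigmask	= (__u64)(uintptr_t)mask,
		.sigmask_sz	= _NSIG / 8,
		.ts		= (__u64)(uintptr_t)ts,
	};

	return syscall(__NR_io_uring_enter, ring_fd, 0, min_complete,
		       IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
		       &arg, sizeof(arg));
}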

SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
		u32, min_complete, u32, flags, const void __user *, argp,
		size_t, argsz)
{
	struct io_ring_ctx *ctx;
	struct file *file;
	long ret;

	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
			       IORING_ENTER_REGISTERED_RING)))
		return -EINVAL;

	/*
	 * Ring fd has been registered via IORING_REGISTER_RING_FDS, we
	 * need only dereference our task private array to find it.
	 */
	if (flags & IORING_ENTER_REGISTERED_RING) {
		struct io_uring_task *tctx = current->io_uring;

		if (unlikely(!tctx || fd >= IO_RINGFD_REG_MAX))
			return -EINVAL;
		fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
		file = tctx->registered_rings[fd];
		if (unlikely(!file))
			return -EBADF;
	} else {
		file = fget(fd);
		if (unlikely(!file))
			return -EBADF;
		ret = -EOPNOTSUPP;
		if (unlikely(!io_is_uring_fops(file)))
			goto out;
	}

	ctx = file->private_data;
	ret = -EBADFD;
	if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
		goto out;

	/*
	 * For SQ polling, the thread will do all submissions and completions.
	 * Just return the requested submit count, and wake the thread if
	 * we were asked to.
	 */
	ret = 0;
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		io_cqring_overflow_flush(ctx);

		if (unlikely(ctx->sq_data->thread == NULL)) {
			ret = -EOWNERDEAD;
			goto out;
		}
		if (flags & IORING_ENTER_SQ_WAKEUP)
			wake_up(&ctx->sq_data->wait);
		if (flags & IORING_ENTER_SQ_WAIT)
			io_sqpoll_wait_sq(ctx);

		ret = to_submit;
	} else if (to_submit) {
		ret = io_uring_add_tctx_node(ctx);
		if (unlikely(ret))
			goto out;

		mutex_lock(&ctx->uring_lock);
		ret = io_submit_sqes(ctx, to_submit);
		if (ret != to_submit) {
			mutex_unlock(&ctx->uring_lock);
			goto out;
		}
		if (flags & IORING_ENTER_GETEVENTS) {
			if (ctx->syscall_iopoll)
				goto iopoll_locked;
			/*
			 * Ignore errors, we'll soon call io_cqring_wait() and
			 * it should handle ownership problems if any.
			 */
			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
				(void)io_run_local_work_locked(ctx);
		}
		mutex_unlock(&ctx->uring_lock);
	}

	if (flags & IORING_ENTER_GETEVENTS) {
		int ret2;

		if (ctx->syscall_iopoll) {
			/*
			 * We disallow the app entering submit/complete with
			 * polling, but we still need to lock the ring to
			 * prevent racing with polled issue that got punted to
			 * a workqueue.
			 */
			mutex_lock(&ctx->uring_lock);
iopoll_locked:
			ret2 = io_validate_ext_arg(flags, argp, argsz);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_iopoll_check(ctx, min_complete);
			}
			mutex_unlock(&ctx->uring_lock);
		} else {
			const sigset_t __user *sig;
			struct __kernel_timespec __user *ts;

			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_cqring_wait(ctx, min_complete, sig,
						      argsz, ts);
			}
		}

		if (!ret) {
			ret = ret2;

			/*
			 * EBADR indicates that one or more CQE were dropped.
			 * Once the user has been informed we can clear the bit
			 * as they are obviously ok with those drops.
			 */
			if (unlikely(ret2 == -EBADR))
				clear_bit(IO_CHECK_CQ_DROPPED_BIT,
					  &ctx->check_cq);
		}
	}
out:
	if (!(flags & IORING_ENTER_REGISTERED_RING))
		fput(file);
	return ret;
}
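
Two userspace patterns map onto the flag handling in io_uring_enter() above; the sketch below is illustrative only (raw syscall wrappers are assumed). First, a ring fd can be registered with IORING_REGISTER_RING_FDS: the kernel reports back an offset, and later io_uring_enter(2) calls pass that offset together with IORING_ENTER_REGISTERED_RING to skip the per-call fdget/fdput. Second, with IORING_SETUP_SQPOLL the application only needs to enter the kernel once the SQ thread has gone idle and set IORING_SQ_NEED_WAKEUP in the SQ ring flags.

/* Userspace sketch: registered ring fd + SQPOLL wakeup guard. */
#include <linux/io_uring.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register 'ring_fd' and return the registered offset, or -1 on error. */
static int register_ring_fd(int ring_fd)
{
	struct io_uring_rsrc_update reg = {
		.offset	= -1U,		/* let the kernel pick a slot */
		.data	= ring_fd,
	};

	if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_RING_FDS,
		    &reg, 1) != 1)
		return -1;
	return reg.offset;
}

/*
 * Submit on an SQPOLL ring via its registered offset; only enter the kernel
 * if the SQ thread asked to be woken. 'sq_flags' points at the mapped SQ ring
 * flags word (sq_ring + p->sq_off.flags).
 */
static long submit_sqpoll(int reg_offset, _Atomic unsigned *sq_flags,
			  unsigned to_submit)
{
	unsigned enter_flags = IORING_ENTER_REGISTERED_RING;

	if (!(atomic_load_explicit(sq_flags, memory_order_acquire) &
	      IORING_SQ_NEED_WAKEUP))
		return to_submit;	/* SQ thread is awake, no syscall needed */

	enter_flags |= IORING_ENTER_SQ_WAKEUP;
	return syscall(__NR_io_uring_enter, reg_offset, to_submit, 0,
		       enter_flags, NULL, 0);
}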

static const struct file_operations io_uring_fops = {
	.release	= io_uring_release,
	.mmap		= io_uring_mmap,
#ifndef CONFIG_MMU
	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
#else
	.get_unmapped_area = io_uring_mmu_get_unmapped_area,
#endif
	.poll		= io_uring_poll,
#ifdef CONFIG_PROC_FS
	.show_fdinfo	= io_uring_show_fdinfo,
#endif
};
|
|
|
|
|
2022-05-26 01:48:35 +08:00
|
|
|
bool io_is_uring_fops(struct file *file)
|
|
|
|
{
|
|
|
|
return file->f_op == &io_uring_fops;
|
|
|
|
}
|
|
|
|
|
2021-10-05 03:02:54 +08:00
|
|
|
static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
|
|
|
|
struct io_uring_params *p)
{
        struct io_rings *rings;
        size_t size, sq_array_offset;
        void *ptr;

        /* make sure these are sane, as we already accounted them */
        ctx->sq_entries = p->sq_entries;
        ctx->cq_entries = p->cq_entries;

        size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
        if (size == SIZE_MAX)
                return -EOVERFLOW;

        /*
         * IORING_SETUP_NO_MMAP (from the "io_uring: support for user
         * allocated memory for rings/sqes" change by Jens Axboe): normally
         * applications must call mmap(2) twice to map the rings themselves
         * and the sqes array. That works fine, but it does not support using
         * huge pages to back the rings/sqes. Instead, the application may
         * pass in pre-allocated memory for the rings/sqes, suitably allocated
         * from shmfs or via mmap to get huge page support. Particularly for
         * larger rings, this reduces the number of TLB entries needed.
         *
         * To take advantage of this, the application pre-allocates the memory
         * needed for the sq/cq ring and the sqes. The former is passed in via
         * the io_uring_params->cq_off.user_addr field, the latter via the
         * io_uring_params->sq_off.user_addr field, and IORING_SETUP_NO_MMAP
         * is set in io_uring_params->flags. io_uring then maps the existing
         * memory into the kernel for shared use. The application must not
         * call mmap(2) to map the rings as it otherwise would have; that now
         * fails with -EINVAL if this setup flag was used. The pages used for
         * the rings and sqes must be contiguous; the intent is clearly that
         * huge pages should be used, otherwise the normal setup procedure
         * works fine as-is. The application may use one huge page for both
         * the rings and the sqes (see the userspace sketch after this
         * function). Outside of these initialization changes, everything
         * works like it did before.
         */
        if (!(ctx->flags & IORING_SETUP_NO_MMAP))
                rings = io_mem_alloc(size);
        else
                rings = io_rings_map(ctx, p->cq_off.user_addr, size);

        if (IS_ERR(rings))
                return PTR_ERR(rings);

        ctx->rings = rings;
        if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))
                ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
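
        /*
         * The ring sizes are rounded up to powers of two, so subtracting 1
         * yields the mask used for indexing into the rings.
         */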
        rings->sq_ring_mask = p->sq_entries - 1;
        rings->cq_ring_mask = p->cq_entries - 1;
        rings->sq_ring_entries = p->sq_entries;
        rings->cq_ring_entries = p->cq_entries;
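
        /*
         * IORING_SETUP_SQE128 selects double-size (128 byte) SQEs, so the
         * SQE array has to be sized accordingly.
         */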
        if (p->flags & IORING_SETUP_SQE128)
                size = array_size(2 * sizeof(struct io_uring_sqe), p->sq_entries);
        else
                size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
        if (size == SIZE_MAX) {
                io_rings_free(ctx);
                return -EOVERFLOW;
        }

        if (!(ctx->flags & IORING_SETUP_NO_MMAP))
                ptr = io_mem_alloc(size);
        else
                ptr = io_sqes_map(ctx, p->sq_off.user_addr, size);

        if (IS_ERR(ptr)) {
                io_rings_free(ctx);
                return PTR_ERR(ptr);
        }

        ctx->sq_sqes = ptr;
        return 0;
}
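
/*
 * Illustrative userspace sketch (not part of this file's kernel code) of the
 * IORING_SETUP_NO_MMAP path handled above: the application pre-allocates
 * contiguous, ideally huge-page backed, memory and hands it to
 * io_uring_setup() via the user_addr fields instead of mmap()ing the rings
 * afterwards. Region sizes and error handling are simplified; a real
 * application would size the regions from the ring geometry it requested,
 * and needs a uapi header new enough to carry these fields.
 *
 *	#include <linux/io_uring.h>
 *	#include <stdint.h>
 *	#include <string.h>
 *	#include <sys/mman.h>
 *	#include <sys/syscall.h>
 *	#include <unistd.h>
 *
 *	static int setup_no_mmap_ring(unsigned int entries)
 *	{
 *		size_t huge = 2UL * 1024 * 1024;
 *		struct io_uring_params p;
 *		void *rings, *sqes;
 *
 *		rings = mmap(NULL, huge, PROT_READ | PROT_WRITE,
 *			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
 *		sqes = mmap(NULL, huge, PROT_READ | PROT_WRITE,
 *			    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
 *		if (rings == MAP_FAILED || sqes == MAP_FAILED)
 *			return -1;
 *
 *		memset(&p, 0, sizeof(p));
 *		p.flags = IORING_SETUP_NO_MMAP;
 *		p.cq_off.user_addr = (uint64_t)(uintptr_t)rings;
 *		p.sq_off.user_addr = (uint64_t)(uintptr_t)sqes;
 *
 *		return syscall(__NR_io_uring_setup, entries, &p);
 *	}
 */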

static int io_uring_install_fd(struct file *file)
{
        int fd;

        fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
        if (fd < 0)
                return fd;
        fd_install(fd, file);
        return fd;
}

/*
 * Allocate an anonymous fd; this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
 * we have to tie this fd to a socket for file garbage collection purposes.
 */
static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
{
        struct file *file;
#if defined(CONFIG_UNIX)
        int ret;

        ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
                               &ctx->ring_sock);
        if (ret)
                return ERR_PTR(ret);
#endif

        file = anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
                                         O_RDWR | O_CLOEXEC, NULL);
#if defined(CONFIG_UNIX)
        if (IS_ERR(file)) {
                sock_release(ctx->ring_sock);
                ctx->ring_sock = NULL;
        } else {
                ctx->ring_sock->file = file;
        }
#endif
        return file;
}
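
/*
 * For reference, a minimal userspace sketch (not kernel code) of how an
 * application typically maps the fd created above in the default case,
 * i.e. without IORING_SETUP_NO_MMAP, using the fixed mmap offsets from the
 * io_uring uapi header. Only the SQ ring is shown; the CQ ring and the SQE
 * array are mapped the same way at IORING_OFF_CQ_RING and IORING_OFF_SQES.
 * Error handling is omitted for brevity.
 *
 *	#include <linux/io_uring.h>
 *	#include <sys/mman.h>
 *
 *	static void *map_sq_ring(int ring_fd, const struct io_uring_params *p,
 *				 size_t *sq_ring_sz)
 *	{
 *		*sq_ring_sz = p->sq_off.array +
 *			      p->sq_entries * sizeof(unsigned int);
 *		return mmap(NULL, *sq_ring_sz, PROT_READ | PROT_WRITE,
 *			    MAP_SHARED | MAP_POPULATE, ring_fd,
 *			    IORING_OFF_SQ_RING);
 *	}
 */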

static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
                                  struct io_uring_params __user *params)
{
        struct io_ring_ctx *ctx;
        struct io_uring_task *tctx;
        struct file *file;
        int ret;

        if (!entries)
                return -EINVAL;
        if (entries > IORING_MAX_ENTRIES) {
                if (!(p->flags & IORING_SETUP_CLAMP))
                        return -EINVAL;
                entries = IORING_MAX_ENTRIES;
        }

        if ((p->flags & IORING_SETUP_REGISTERED_FD_ONLY)
            && !(p->flags & IORING_SETUP_NO_MMAP))
                return -EINVAL;
	/*
	 * Use twice as many entries for the CQ ring. It's possible for the
	 * application to drive a higher depth than the size of the SQ ring,
	 * since the sqes are only used at submission time. This allows for
	 * some flexibility in overcommitting a bit. If the application has
	 * set IORING_SETUP_CQSIZE, it will have passed in the desired number
	 * of CQ ring entries manually.
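	 *
	 * (Example added for illustration, not part of the original comment:
	 * a request of entries == 100 without IORING_SETUP_CQSIZE ends up
	 * with sq_entries == 128 and cq_entries == 256 after the rounding
	 * done below.)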
	 */
	p->sq_entries = roundup_pow_of_two(entries);
	if (p->flags & IORING_SETUP_CQSIZE) {
		/*
		 * If IORING_SETUP_CQSIZE is set, we do the same roundup
		 * to a power-of-two, if it isn't already. We do NOT impose
		 * any cq vs sq ring sizing.
		 */
		if (!p->cq_entries)
			return -EINVAL;
		if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
			if (!(p->flags & IORING_SETUP_CLAMP))
				return -EINVAL;
			p->cq_entries = IORING_MAX_CQ_ENTRIES;
		}
		p->cq_entries = roundup_pow_of_two(p->cq_entries);
		if (p->cq_entries < p->sq_entries)
			return -EINVAL;
	} else {
		p->cq_entries = 2 * p->sq_entries;
	}

	ctx = io_ring_ctx_alloc(p);
	if (!ctx)
		return -ENOMEM;

	if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
	    !(ctx->flags & IORING_SETUP_IOPOLL) &&
	    !(ctx->flags & IORING_SETUP_SQPOLL))
		ctx->task_complete = true;

	if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL))
		ctx->lockless_cq = true;

	/*
	 * lazy poll_wq activation relies on ->task_complete for synchronisation
	 * purposes, see io_activate_pollwq()
	 */
	if (!ctx->task_complete)
		ctx->poll_activated = true;

	/*
	 * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
	 * space applications don't need to do io completion events
	 * polling again, they can rely on io_sq_thread to do polling
	 * work, which can reduce cpu usage and uring_lock contention.
	 */
	if (ctx->flags & IORING_SETUP_IOPOLL &&
	    !(ctx->flags & IORING_SETUP_SQPOLL))
		ctx->syscall_iopoll = 1;

	ctx->compat = in_compat_syscall();
	if (!ns_capable_noaudit(&init_user_ns, CAP_IPC_LOCK))
		ctx->user = get_uid(current_user());

	/*
	 * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
	 * COOP_TASKRUN is set, then IPIs are never needed by the app.
	 */
	ret = -EINVAL;
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		/* IPI related flags don't make sense with SQPOLL */
		if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
				  IORING_SETUP_TASKRUN_FLAG |
				  IORING_SETUP_DEFER_TASKRUN))
			goto err;
		ctx->notify_method = TWA_SIGNAL_NO_IPI;
	} else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
		ctx->notify_method = TWA_SIGNAL_NO_IPI;
	} else {
		if (ctx->flags & IORING_SETUP_TASKRUN_FLAG &&
		    !(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
			goto err;
		ctx->notify_method = TWA_SIGNAL;
	}

	/*
	 * For DEFER_TASKRUN we require the completion task to be the same as the
	 * submission task. This implies that there is only one submitter, so enforce
	 * that.
	 */
	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN &&
	    !(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
		goto err;
	}

	/*
	 * This is just grabbed for accounting purposes. When a process exits,
	 * the mm is exited and dropped before the files, hence we need to hang
	 * on to this mm purely for the purposes of being able to unaccount
	 * memory (locked/pinned vm). It's not used for anything else.
	 */
	mmgrab(current->mm);
	ctx->mm_account = current->mm;

	ret = io_allocate_scq_urings(ctx, p);
	if (ret)
		goto err;

	ret = io_sq_offload_create(ctx, p);
	if (ret)
		goto err;

	ret = io_rsrc_init(ctx);
	if (ret)
		goto err;

	p->sq_off.head = offsetof(struct io_rings, sq.head);
	p->sq_off.tail = offsetof(struct io_rings, sq.tail);
	p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
	p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
	p->sq_off.flags = offsetof(struct io_rings, sq_flags);
	p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
	if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))
		p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
	p->sq_off.resv1 = 0;

io_uring: support for user allocated memory for rings/sqes
Currently io_uring applications must call mmap(2) twice to map the rings
themselves, and the sqes array. This works fine, but it does not support
using huge pages to back the rings/sqes.
Provide a way for the application to pass in pre-allocated memory for
the rings/sqes, which can then suitably be allocated from shmfs or
via mmap to get huge page support.
Particularly for larger rings, this reduces the TLBs needed.
If an application wishes to take advantage of that, it must pre-allocate
the memory needed for the sq/cq ring, and the sqes. The former must
be passed in via the io_uring_params->cq_off.user_addr field, while the
latter is passed in via the io_uring_params->sq_off.user_addr field. Then
it must set IORING_SETUP_NO_MMAP in the io_uring_params->flags field,
and io_uring will then map the existing memory into the kernel for shared
use. The application must not call mmap(2) to map the rings as it otherwise
would have; that will now fail with -EINVAL if this setup flag was used.
The pages used for the rings and sqes must be contiguous. The intent here
is clearly that huge pages should be used, otherwise the normal setup
procedure works fine as-is. The application may use one huge page for
both the rings and sqes.
Outside of those initialization changes, everything works like it did
before.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
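
A minimal userspace sketch of that flow (illustrative only: the 2MB huge
page size, the MAP_HUGETLB backing, and the helper name are assumptions,
not taken from the patch):

	#define _GNU_SOURCE
	#include <linux/io_uring.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int setup_user_backed_ring(unsigned int entries)
	{
		struct io_uring_params p;
		size_t huge = 2 * 1024 * 1024;	/* assume 2MB huge pages */
		void *rings, *sqes;

		rings = mmap(NULL, huge, PROT_READ | PROT_WRITE,
			     MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
		sqes = mmap(NULL, huge, PROT_READ | PROT_WRITE,
			    MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
		if (rings == MAP_FAILED || sqes == MAP_FAILED)
			return -1;

		memset(&p, 0, sizeof(p));
		p.flags = IORING_SETUP_NO_MMAP;
		p.cq_off.user_addr = (unsigned long long)rings;	/* sq/cq rings */
		p.sq_off.user_addr = (unsigned long long)sqes;	/* sqe array */

		/* the returned fd must not be mmap'ed afterwards */
		return syscall(__NR_io_uring_setup, entries, &p);
	}
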
	if (!(ctx->flags & IORING_SETUP_NO_MMAP))
		p->sq_off.user_addr = 0;

	p->cq_off.head = offsetof(struct io_rings, cq.head);
	p->cq_off.tail = offsetof(struct io_rings, cq.tail);
	p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
	p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
	p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
	p->cq_off.cqes = offsetof(struct io_rings, cqes);
	p->cq_off.flags = offsetof(struct io_rings, cq_flags);
	p->cq_off.resv1 = 0;
	if (!(ctx->flags & IORING_SETUP_NO_MMAP))
		p->cq_off.user_addr = 0;

	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
			IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
			IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING;

	if (copy_to_user(params, p, sizeof(*p))) {
		ret = -EFAULT;
		goto err;
	}

	if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
	    && !(ctx->flags & IORING_SETUP_R_DISABLED))
		WRITE_ONCE(ctx->submitter_task, get_task_struct(current));

	file = io_uring_get_file(ctx);
	if (IS_ERR(file)) {
		ret = PTR_ERR(file);
		goto err;
	}

	ret = __io_uring_add_tctx_node(ctx);
	if (ret)
		goto err_fput;
	tctx = current->io_uring;

	/*
	 * Install ring fd as the very last thing, so we don't risk someone
	 * having closed it before we finish setup
	 */
	if (p->flags & IORING_SETUP_REGISTERED_FD_ONLY)
		ret = io_ring_add_registered_file(tctx, file, 0, IO_RINGFD_REG_MAX);
	else
		ret = io_uring_install_fd(file);
	if (ret < 0)
		goto err_fput;

io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but it looks like some parts could be hard to identify
via this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
	return ret;
err:
	io_ring_ctx_wait_and_kill(ctx);
	return ret;
err_fput:
	fput(file);
	return ret;
}

/*
 * Sets up an aio uring context, and returns the fd. Applications ask for a
 * ring size, we return the actual sq/cq ring sizes (among other things) in the
 * params structure passed in.
 */
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
{
	struct io_uring_params p;
	int i;

	if (copy_from_user(&p, params, sizeof(p)))
		return -EFAULT;
	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
		if (p.resv[i])
			return -EINVAL;
	}

io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
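
A minimal userspace sketch of that guard (illustrative: the sq_flags
pointer is assumed to point at the mmap'ed SQ ring flags word):

	#include <linux/io_uring.h>
	#include <stdatomic.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Wake the kernel-side submission thread only once it has gone idle. */
	static void kick_sqpoll_if_needed(int ring_fd, _Atomic unsigned int *sq_flags)
	{
		unsigned int flags = atomic_load_explicit(sq_flags, memory_order_acquire);

		if (flags & IORING_SQ_NEED_WAKEUP)
			syscall(__NR_io_uring_enter, ring_fd, 0, 0,
				IORING_ENTER_SQ_WAKEUP, NULL, 0);
	}
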
	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
			IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
			IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
			IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
			IORING_SETUP_SQE128 | IORING_SETUP_CQE32 |
			IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
			IORING_SETUP_NO_MMAP | IORING_SETUP_REGISTERED_FD_ONLY |
			IORING_SETUP_NO_SQARRAY))
		return -EINVAL;

	return io_uring_create(entries, &p, params);
}

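/*
 * Note added for clarity (not in the original source): io_uring_allowed()
 * below implements the kernel.io_uring_disabled sysctl: 0 allows io_uring
 * for everyone, 2 refuses to create new rings entirely, and any other
 * value limits ring creation to CAP_SYS_ADMIN or members of the group
 * named by kernel.io_uring_group.
 */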
static inline bool io_uring_allowed(void)
{
	int disabled = READ_ONCE(sysctl_io_uring_disabled);
	kgid_t io_uring_group;

	if (disabled == 2)
		return false;

	if (disabled == 0 || capable(CAP_SYS_ADMIN))
		return true;

	io_uring_group = make_kgid(&init_user_ns, sysctl_io_uring_group);
	if (!gid_valid(io_uring_group))
		return false;

	return in_group_p(io_uring_group);
}

SYSCALL_DEFINE2(io_uring_setup, u32, entries,
		struct io_uring_params __user *, params)
{
	if (!io_uring_allowed())
		return -EPERM;

	return io_uring_setup(entries, params);
}

static __cold int io_probe(struct io_ring_ctx *ctx, void __user *arg,
			   unsigned nr_args)
{
	struct io_uring_probe *p;
	size_t size;
	int i, ret;

	size = struct_size(p, ops, nr_args);
	if (size == SIZE_MAX)
		return -EOVERFLOW;
	p = kzalloc(size, GFP_KERNEL);
	if (!p)
		return -ENOMEM;

	ret = -EFAULT;
	if (copy_from_user(p, arg, size))
		goto out;
	ret = -EINVAL;
	if (memchr_inv(p, 0, size))
		goto out;

	p->last_op = IORING_OP_LAST - 1;
	if (nr_args > IORING_OP_LAST)
		nr_args = IORING_OP_LAST;

	for (i = 0; i < nr_args; i++) {
		p->ops[i].op = i;
		if (!io_issue_defs[i].not_supported)
			p->ops[i].flags = IO_URING_OP_SUPPORTED;
	}
	p->ops_len = i;

	ret = 0;
	if (copy_to_user(arg, p, size))
		ret = -EFAULT;
out:
	kfree(p);
	return ret;
}
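
A hedged userspace sketch of querying this through the IORING_REGISTER_PROBE
opcode (the 256-op sizing and the helper name are illustrative assumptions):

	#include <linux/io_uring.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static void print_supported_ops(int ring_fd)
	{
		/* leave room for 256 ops; the kernel fills in what it supports */
		size_t len = sizeof(struct io_uring_probe) +
			     256 * sizeof(struct io_uring_probe_op);
		struct io_uring_probe *probe = calloc(1, len);
		int i;

		if (!probe)
			return;
		if (syscall(__NR_io_uring_register, ring_fd,
			    IORING_REGISTER_PROBE, probe, 256) == 0) {
			for (i = 0; i < probe->ops_len; i++)
				if (probe->ops[i].flags & IO_URING_OP_SUPPORTED)
					printf("op %d supported\n", probe->ops[i].op);
		}
		free(probe);
	}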

static int io_register_personality(struct io_ring_ctx *ctx)
{
	const struct cred *creds;
	u32 id;
	int ret;

	creds = get_current_cred();

	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
	if (ret < 0) {
		put_cred(creds);
		return ret;
	}
	return id;
}
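
The returned id is what a request later carries in sqe->personality; a small
hedged sketch of the userspace side (the helper name is illustrative):

	#include <linux/io_uring.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * Snapshot the calling task's credentials in the ring; the returned
	 * personality id (or -1 on error) can be placed in sqe->personality
	 * so individual requests are issued with those credentials.
	 */
	static int register_current_creds(int ring_fd)
	{
		return syscall(__NR_io_uring_register, ring_fd,
			       IORING_REGISTER_PERSONALITY, NULL, 0);
	}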

static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
					   void __user *arg, unsigned int nr_args)
{
	struct io_uring_restriction *res;
	size_t size;
	int i, ret;

	/* Restrictions allowed only if rings started disabled */
	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
		return -EBADFD;

	/* We allow only a single restrictions registration */
	if (ctx->restrictions.registered)
		return -EBUSY;

	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
		return -EINVAL;

	size = array_size(nr_args, sizeof(*res));
	if (size == SIZE_MAX)
		return -EOVERFLOW;

	res = memdup_user(arg, size);
	if (IS_ERR(res))
		return PTR_ERR(res);

	ret = 0;

	for (i = 0; i < nr_args; i++) {
		switch (res[i].opcode) {
		case IORING_RESTRICTION_REGISTER_OP:
			if (res[i].register_op >= IORING_REGISTER_LAST) {
				ret = -EINVAL;
				goto out;
			}

			__set_bit(res[i].register_op,
				  ctx->restrictions.register_op);
			break;
		case IORING_RESTRICTION_SQE_OP:
			if (res[i].sqe_op >= IORING_OP_LAST) {
				ret = -EINVAL;
				goto out;
			}

			__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
			break;
		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
			ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
			break;
		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
			ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
			break;
		default:
			ret = -EINVAL;
			goto out;
		}
	}

out:
	/* Reset all restrictions if an error happened */
	if (ret != 0)
		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
	else
		ctx->restrictions.registered = true;

	kfree(res);
	return ret;
}
|
|
|
|
|
static int io_register_enable_rings(struct io_ring_ctx *ctx)
{
	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
		return -EBADFD;

	if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && !ctx->submitter_task) {
		WRITE_ONCE(ctx->submitter_task, get_task_struct(current));
		/*
		 * Lazy activation attempts would fail if it was polled before
		 * submitter_task is set.
		 */
		if (wq_has_sleeper(&ctx->poll_wq))
			io_activate_pollwq(ctx);
	}

	if (ctx->restrictions.registered)
		ctx->restricted = 1;

	ctx->flags &= ~IORING_SETUP_R_DISABLED;
	if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
		wake_up(&ctx->sq_data->wait);
	return 0;
}

static __cold int __io_register_iowq_aff(struct io_ring_ctx *ctx,
					 cpumask_var_t new_mask)
{
	int ret;

	if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
		ret = io_wq_cpu_affinity(current->io_uring, new_mask);
	} else {
		mutex_unlock(&ctx->uring_lock);
		ret = io_sqpoll_wq_cpu_affinity(ctx, new_mask);
		mutex_lock(&ctx->uring_lock);
	}

	return ret;
}

static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
				       void __user *arg, unsigned len)
{
	cpumask_var_t new_mask;
	int ret;

	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_clear(new_mask);
	if (len > cpumask_size())
		len = cpumask_size();

	if (in_compat_syscall()) {
		ret = compat_get_bitmap(cpumask_bits(new_mask),
					(const compat_ulong_t __user *)arg,
					len * 8 /* CHAR_BIT */);
	} else {
		ret = copy_from_user(new_mask, arg, len);
	}

	if (ret) {
		free_cpumask_var(new_mask);
		return -EFAULT;
	}

	ret = __io_register_iowq_aff(ctx, new_mask);
	free_cpumask_var(new_mask);
	return ret;
}

static __cold int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
{
	return __io_register_iowq_aff(ctx, NULL);
}

static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
					       void __user *arg)
	__must_hold(&ctx->uring_lock)
{
	struct io_tctx_node *node;
	struct io_uring_task *tctx = NULL;
	struct io_sq_data *sqd = NULL;
	__u32 new_count[2];
	int i, ret;

	if (copy_from_user(new_count, arg, sizeof(new_count)))
		return -EFAULT;
	for (i = 0; i < ARRAY_SIZE(new_count); i++)
		if (new_count[i] > INT_MAX)
			return -EINVAL;

	if (ctx->flags & IORING_SETUP_SQPOLL) {
		sqd = ctx->sq_data;
		if (sqd) {
			/*
			 * Observe the correct sqd->lock -> ctx->uring_lock
			 * ordering. Fine to drop uring_lock here, we hold
			 * a ref to the ctx.
			 */
			refcount_inc(&sqd->refs);
			mutex_unlock(&ctx->uring_lock);
			mutex_lock(&sqd->lock);
			mutex_lock(&ctx->uring_lock);
			if (sqd->thread)
				tctx = sqd->thread->io_uring;
		}
	} else {
		tctx = current->io_uring;
	}

	BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));

	for (i = 0; i < ARRAY_SIZE(new_count); i++)
		if (new_count[i])
			ctx->iowq_limits[i] = new_count[i];
	ctx->iowq_limits_set = true;

	if (tctx && tctx->io_wq) {
		ret = io_wq_max_workers(tctx->io_wq, new_count);
		if (ret)
			goto err;
	} else {
		memset(new_count, 0, sizeof(new_count));
	}

	if (sqd) {
		mutex_unlock(&sqd->lock);
		io_put_sq_data(sqd);
	}

	if (copy_to_user(arg, new_count, sizeof(new_count)))
		return -EFAULT;

	/* that's it for SQPOLL, only the SQPOLL task creates requests */
	if (sqd)
		return 0;

	/* now propagate the restriction to all registered users */
	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
		struct io_uring_task *tctx = node->task->io_uring;

		if (WARN_ON_ONCE(!tctx->io_wq))
			continue;

		for (i = 0; i < ARRAY_SIZE(new_count); i++)
			new_count[i] = ctx->iowq_limits[i];
		/* ignore errors, it always returns zero anyway */
		(void)io_wq_max_workers(tctx->io_wq, new_count);
	}
	return 0;
err:
	if (sqd) {
		mutex_unlock(&sqd->lock);
		io_put_sq_data(sqd);
	}
	return ret;
}

io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 00:16:05 +08:00
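
For reference, a minimal userspace sketch of this flow, assuming liburing
is available (io_uring_register_buffers() wraps the IORING_REGISTER_BUFFERS
opcode); the file path, buffer size and error handling are illustrative
only and not part of the kernel change:

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov;
	int fd, ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* one 4KB buffer, registered as fixed buffer index 0 */
	iov.iov_len = 4096;
	iov.iov_base = malloc(iov.iov_len);
	memset(iov.iov_base, 0, iov.iov_len);
	ret = io_uring_register_buffers(&ring, &iov, 1);
	if (ret) {
		fprintf(stderr, "register_buffers: %d\n", ret);
		return 1;
	}

	fd = open("/etc/hostname", O_RDONLY);	/* any readable file */
	if (fd < 0)
		return 1;
	sqe = io_uring_get_sqe(&ring);
	/* READ_FIXED: the last argument selects the registered buffer index */
	io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);
	io_uring_submit(&ring);

	io_uring_wait_cqe(&ring, &cqe);
	printf("read %d bytes into fixed buffer 0\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_unregister_buffers(&ring);
	io_uring_queue_exit(&ring);
	return 0;
}
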
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
			       void __user *arg, unsigned nr_args)
	__releases(ctx->uring_lock)
	__acquires(ctx->uring_lock)
{
	int ret;

	/*
	 * We don't quiesce the refs for register anymore and so it can't be
	 * dying as we're holding a file ref here.
	 */
	if (WARN_ON_ONCE(percpu_ref_is_dying(&ctx->refs)))
		return -ENXIO;

	if (ctx->submitter_task && ctx->submitter_task != current)
		return -EEXIST;

	if (ctx->restricted) {
		opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
		if (!test_bit(opcode, ctx->restrictions.register_op))
			return -EACCES;
	}

	switch (opcode) {
	case IORING_REGISTER_BUFFERS:
		ret = -EFAULT;
		if (!arg)
			break;
		ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
		break;
	case IORING_UNREGISTER_BUFFERS:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_sqe_buffers_unregister(ctx);
		break;
	case IORING_REGISTER_FILES:
		ret = -EFAULT;
		if (!arg)
			break;
		ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
		break;
	case IORING_UNREGISTER_FILES:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_sqe_files_unregister(ctx);
		break;
	case IORING_REGISTER_FILES_UPDATE:
		ret = io_register_files_update(ctx, arg, nr_args);
		break;
	case IORING_REGISTER_EVENTFD:
		ret = -EINVAL;
		if (nr_args != 1)
			break;
		ret = io_eventfd_register(ctx, arg, 0);
		break;
	case IORING_REGISTER_EVENTFD_ASYNC:
		ret = -EINVAL;
		if (nr_args != 1)
			break;
		ret = io_eventfd_register(ctx, arg, 1);
		break;
	case IORING_UNREGISTER_EVENTFD:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_eventfd_unregister(ctx);
		break;
	case IORING_REGISTER_PROBE:
		ret = -EINVAL;
		if (!arg || nr_args > 256)
			break;
		ret = io_probe(ctx, arg, nr_args);
		break;
	case IORING_REGISTER_PERSONALITY:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_register_personality(ctx);
		break;
	case IORING_UNREGISTER_PERSONALITY:
		ret = -EINVAL;
		if (arg)
			break;
		ret = io_unregister_personality(ctx, nr_args);
		break;
	case IORING_REGISTER_ENABLE_RINGS:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_register_enable_rings(ctx);
		break;
	case IORING_REGISTER_RESTRICTIONS:
		ret = io_register_restrictions(ctx, arg, nr_args);
		break;

io_uring: change registration/upd/rsrc tagging ABI
There are ABI aspects of the recently added rsrc registration/update and
tagging that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under one
opcode, which breaks fine-grained control over them via restrictions. It
works for now, but once those need to be covered by restrictions it would
require a rework.
It was also inconvenient trying to fit a new resource that doesn't support
all the features (e.g. dynamic update) into the interface, so it's better
to return to IORING_REGISTER_* top-level dispatching.
Second, register/update were expected to accept a resource type, but
that's not a good idea because there might be several ways of registering
a single resource type, e.g. we may want to add non-contiguous buffers or
something more exotic such as DMA-mapped memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them internal
for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 23:37:37 +08:00
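
A rough userspace illustration of the tag-aware path, assuming liburing's
io_uring_register_files_tags() helper (which issues IORING_REGISTER_FILES2
with a struct io_uring_rsrc_register); the descriptors and tag values are
arbitrary:

#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	int fds[2] = { STDIN_FILENO, STDOUT_FILENO };
	__u64 tags[2] = { 1, 2 };	/* non-zero tag: a CQE is posted when the slot is released */
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* liburing helper dispatching to IORING_REGISTER_FILES2 under the hood */
	ret = io_uring_register_files_tags(&ring, fds, tags, 2);
	printf("register_files_tags: %d\n", ret);

	io_uring_queue_exit(&ring);
	return 0;
}
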
	case IORING_REGISTER_FILES2:
		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
		break;
	case IORING_REGISTER_FILES_UPDATE2:
		ret = io_register_rsrc_update(ctx, arg, nr_args,
					      IORING_RSRC_FILE);
		break;
	case IORING_REGISTER_BUFFERS2:
		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
		break;
	case IORING_REGISTER_BUFFERS_UPDATE:
		ret = io_register_rsrc_update(ctx, arg, nr_args,
					      IORING_RSRC_BUFFER);
		break;
	case IORING_REGISTER_IOWQ_AFF:
		ret = -EINVAL;
		if (!arg || !nr_args)
			break;
		ret = io_register_iowq_aff(ctx, arg, nr_args);
		break;
	case IORING_UNREGISTER_IOWQ_AFF:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_unregister_iowq_aff(ctx);
		break;
	case IORING_REGISTER_IOWQ_MAX_WORKERS:
		ret = -EINVAL;
		if (!arg || nr_args != 2)
			break;
		ret = io_register_iowq_max_workers(ctx, arg);
		break;

io_uring: add support for registering ring file descriptors
Lots of workloads use multiple threads, in which case the file table is
shared between them. This makes getting and putting the ring file
descriptor for each io_uring_enter(2) system call more expensive, as it
involves an atomic get and put for each call.
Similarly to how we allow registering normal file descriptors to avoid
this overhead, add support for an io_uring_register(2) API that allows
to register the ring fds themselves:
1) IORING_REGISTER_RING_FDS - takes an array of io_uring_rsrc_update
structs, and registers them with the task.
2) IORING_UNREGISTER_RING_FDS - takes an array of io_uring_rsrc_update
structs, and unregisters them.
When a ring fd is registered, it is internally represented by an offset.
This offset is returned to the application, and the application then
uses this offset and sets IORING_ENTER_REGISTERED_RING for the
io_uring_enter(2) system call. This works just like using a registered
file descriptor, rather than a real one, in an SQE, where
IOSQE_FIXED_FILE gets set to tell io_uring that we're using an internal
offset/descriptor rather than a real file descriptor.
In initial testing, this provides a nice bump in performance for
threaded applications in real world cases where the batch count (eg
number of requests submitted per io_uring_enter(2) invocation) is low.
In a microbenchmark, submitting NOP requests, we see the following
increases in performance:
Requests per syscall Baseline Registered Increase
----------------------------------------------------------------
1 ~7030K ~8080K +15%
2 ~13120K ~14800K +13%
4 ~22740K ~25300K +11%
Co-developed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-04 23:22:22 +08:00
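
A hedged userspace sketch, assuming liburing's io_uring_register_ring_fd()
helper (which issues IORING_REGISTER_RING_FDS and makes later submissions
use IORING_ENTER_REGISTERED_RING internally):

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/*
	 * Swap the real ring fd for a registered offset; subsequent
	 * io_uring_enter(2) calls from this ring then skip the per-call
	 * fget()/fput() on the ring file.
	 */
	ret = io_uring_register_ring_fd(&ring);
	if (ret != 1)
		fprintf(stderr, "ring fd registration unavailable: %d\n", ret);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);
	io_uring_submit(&ring);

	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
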
	case IORING_REGISTER_RING_FDS:
		ret = io_ringfd_register(ctx, arg, nr_args);
		break;
	case IORING_UNREGISTER_RING_FDS:
		ret = io_ringfd_unregister(ctx, arg, nr_args);
		break;

io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how
many buffers are provided back at the time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means applications can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-01 04:38:53 +08:00
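
As a rough sketch of the application side, assuming the liburing >= 2.4
helpers io_uring_setup_buf_ring()/io_uring_buf_ring_add() (which wrap
IORING_REGISTER_PBUF_RING); buffer count, size and group id are arbitrary
choices:

#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFS	8
#define BUF_SZ	4096
#define BGID	0	/* buffer group id */

int main(void)
{
	struct io_uring ring;
	struct io_uring_buf_ring *br;
	char *base;
	int i, ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* mmap the ring and register it with IORING_REGISTER_PBUF_RING */
	br = io_uring_setup_buf_ring(&ring, BUFS, BGID, 0, &ret);
	if (!br) {
		fprintf(stderr, "setup_buf_ring: %d\n", ret);
		return 1;
	}

	/* hand every buffer to the kernel; replenish the same way after use */
	base = malloc(BUFS * BUF_SZ);
	for (i = 0; i < BUFS; i++)
		io_uring_buf_ring_add(br, base + i * BUF_SZ, BUF_SZ, i,
				      io_uring_buf_ring_mask(BUFS), i);
	io_uring_buf_ring_advance(br, BUFS);

	/* reads/recvs with IOSQE_BUFFER_SELECT and ->buf_group = BGID now pick from this pool */

	io_uring_free_buf_ring(&ring, br, BUFS, BGID);
	free(base);
	io_uring_queue_exit(&ring);
	return 0;
}
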
	case IORING_REGISTER_PBUF_RING:
		ret = -EINVAL;
		if (!arg || nr_args != 1)
			break;
		ret = io_register_pbuf_ring(ctx, arg);
		break;
	case IORING_UNREGISTER_PBUF_RING:
		ret = -EINVAL;
		if (!arg || nr_args != 1)
			break;
		ret = io_unregister_pbuf_ring(ctx, arg);
		break;

io_uring: add sync cancelation API through io_uring_register()
The io_uring cancelation API is async, like any other API that we expose
there. For the case of finding a request to cancel, or not finding one,
it is fully sync in that when submission returns, the CQE for both the
cancelation request and the targeted request have been posted to the
CQ ring.
However, if the targeted work is being executed by io-wq, the API can
only start the act of canceling it. This makes it difficult to use in
some circumstances, as the caller then has to wait for the CQEs to come
in and match on the same cancelation data there.
Provide a IORING_REGISTER_SYNC_CANCEL command for io_uring_register()
that does sync cancelations, always. For the io-wq case, it'll wait
for the cancelation to come in before returning. The only expected
returns from this API are:
0 Request found and canceled fine.
> 0 Requests found and canceled. Only happens if asked to
cancel multiple requests, and if the work wasn't in
progress.
-ENOENT Request not found.
-ETIME A timeout on the operation was requested, but the timeout
expired before we could cancel.
and we won't get -EALREADY via this API.
If the timeout value passed in is -1 (tv_sec and tv_nsec), then that
means that no timeout is requested. Otherwise, the timespec passed in
is the amount of time the sync cancel will wait for a successful
cancelation.
Link: https://github.com/axboe/liburing/discussions/608
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-06-19 00:00:50 +08:00
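
A minimal userspace sketch, assuming liburing's io_uring_register_sync_cancel()
helper and the io_uring_sync_cancel_reg layout described above; the user_data
value and the pending read exist only for illustration:

#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_sync_cancel_reg reg;
	char buf[16];
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* queue a read from stdin that will typically sit waiting for data */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, STDIN_FILENO, buf, sizeof(buf), 0);
	io_uring_sqe_set_data64(sqe, 0xdead);
	io_uring_submit(&ring);

	/* cancel synchronously by user_data; -1/-1 timeout means no timeout */
	memset(&reg, 0, sizeof(reg));
	reg.addr = 0xdead;
	reg.timeout.tv_sec = -1;
	reg.timeout.tv_nsec = -1;
	ret = io_uring_register_sync_cancel(&ring, &reg);
	printf("sync cancel returned %d\n", ret);

	io_uring_queue_exit(&ring);
	return 0;
}
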
	case IORING_REGISTER_SYNC_CANCEL:
		ret = -EINVAL;
		if (!arg || nr_args != 1)
			break;
		ret = io_sync_cancel(ctx, arg);
		break;
	case IORING_REGISTER_FILE_ALLOC_RANGE:
		ret = -EINVAL;
		if (!arg || nr_args)
			break;
		ret = io_register_file_alloc_range(ctx, arg);
		break;
	default:
		ret = -EINVAL;
		break;
	}

	return ret;
}

SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
		void __user *, arg, unsigned int, nr_args)
{
	struct io_ring_ctx *ctx;
	long ret = -EBADF;
	struct file *file;
	bool use_registered_ring;

	use_registered_ring = !!(opcode & IORING_REGISTER_USE_REGISTERED_RING);
	opcode &= ~IORING_REGISTER_USE_REGISTERED_RING;

	if (opcode >= IORING_REGISTER_LAST)
		return -EINVAL;

	if (use_registered_ring) {
		/*
		 * Ring fd has been registered via IORING_REGISTER_RING_FDS, we
		 * need only dereference our task private array to find it.
		 */
		struct io_uring_task *tctx = current->io_uring;

		if (unlikely(!tctx || fd >= IO_RINGFD_REG_MAX))
			return -EINVAL;
		fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
		file = tctx->registered_rings[fd];
		if (unlikely(!file))
			return -EBADF;
	} else {
		file = fget(fd);
		if (unlikely(!file))
			return -EBADF;
		ret = -EOPNOTSUPP;
		if (!io_is_uring_fops(file))
			goto out_fput;
	}

	ctx = file->private_data;

	mutex_lock(&ctx->uring_lock);
	ret = __io_uring_register(ctx, opcode, arg, nr_args);
	mutex_unlock(&ctx->uring_lock);
	trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
out_fput:
	if (!use_registered_ring)
		fput(file);
	return ret;
}

static int __init io_uring_init(void)
{
#define __BUILD_BUG_VERIFY_OFFSET_SIZE(stype, eoffset, esize, ename) do { \
	BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
	BUILD_BUG_ON(sizeof_field(stype, ename) != esize); \
} while (0)

#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
	__BUILD_BUG_VERIFY_OFFSET_SIZE(struct io_uring_sqe, eoffset, sizeof(etype), ename)
#define BUILD_BUG_SQE_ELEM_SIZE(eoffset, esize, ename) \
	__BUILD_BUG_VERIFY_OFFSET_SIZE(struct io_uring_sqe, eoffset, esize, ename)

	BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
	BUILD_BUG_SQE_ELEM(0, __u8, opcode);
	BUILD_BUG_SQE_ELEM(1, __u8, flags);
	BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
	BUILD_BUG_SQE_ELEM(4, __s32, fd);
	BUILD_BUG_SQE_ELEM(8, __u64, off);
	BUILD_BUG_SQE_ELEM(8, __u64, addr2);
	BUILD_BUG_SQE_ELEM(8, __u32, cmd_op);
	BUILD_BUG_SQE_ELEM(12, __u32, __pad1);
	BUILD_BUG_SQE_ELEM(16, __u64, addr);
	BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
	BUILD_BUG_SQE_ELEM(24, __u32, len);
	BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
	BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
	BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
	BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, rename_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, unlink_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, hardlink_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, xattr_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, msg_ring_flags);
	BUILD_BUG_SQE_ELEM(32, __u64, user_data);
	BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
	BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
	BUILD_BUG_SQE_ELEM(42, __u16, personality);
	BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
	BUILD_BUG_SQE_ELEM(44, __u32, file_index);
	BUILD_BUG_SQE_ELEM(44, __u16, addr_len);
	BUILD_BUG_SQE_ELEM(46, __u16, __pad3[0]);
	BUILD_BUG_SQE_ELEM(48, __u64, addr3);
	BUILD_BUG_SQE_ELEM_SIZE(48, 0, cmd);
	BUILD_BUG_SQE_ELEM(56, __u64, __pad2);

	BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
		     sizeof(struct io_uring_rsrc_update));
	BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
		     sizeof(struct io_uring_rsrc_update2));

	/* ->buf_index is u16 */
	BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 0);
	BUILD_BUG_ON(offsetof(struct io_uring_buf, resv) !=
		     offsetof(struct io_uring_buf_ring, tail));

	/* should fit into one byte */
	BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
	BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
	BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);

	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));

	BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));

	io_uring_optable_init();

	/*
	 * Allow user copy in the per-command field, which starts after the
	 * file in io_kiocb and until the opcode field. The openat2 handling
	 * requires copying in user memory into the io_kiocb object in that
	 * range, and HARDENED_USERCOPY will complain if we haven't
	 * correctly annotated this range.
	 */
	req_cachep = kmem_cache_create_usercopy("io_kiocb",
				sizeof(struct io_kiocb), 0,
				SLAB_HWCACHE_ALIGN | SLAB_PANIC |
				SLAB_ACCOUNT | SLAB_TYPESAFE_BY_RCU,
				offsetof(struct io_kiocb, cmd.data),
				sizeof_field(struct io_kiocb, cmd.data), NULL);

#ifdef CONFIG_SYSCTL
	register_sysctl_init("kernel", kernel_io_uring_disabled_table);
#endif

	return 0;
};
__initcall(io_uring_init);