Mirror of https://github.com/edk2-porting/linux-next.git, synced 2024-12-14 08:13:56 +08:00
io_uring: fix race condition in task_work add and clear
We clear the bit marking the ctx task_work as active after having run
the queued work, but we really should be clearing it before. Otherwise
we can hit a tiny race ala:
CPU0					CPU1
io_task_work_add()			tctx_task_work()
						run_work
	add_to_list
	test_and_set_bit
						clear_bit
		already set
and CPU0 will return thinking the task_work is queued, while in reality
it's already being run. If we hit the condition after __tctx_task_work()
found no more work, but before we've cleared the bit, then we'll end up
thinking it's queued and will be run. In reality it is queued, but we
didn't queue the ctx task_work to ensure that it gets run.
Fixes: 7cbf1722d5 ("io_uring: provide FIFO ordering for task_work")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent afcc4015d1
commit 1d5f360dd1
@@ -1893,10 +1893,10 @@ static void tctx_task_work(struct callback_head *cb)
 {
 	struct io_uring_task *tctx = container_of(cb, struct io_uring_task, task_work);
 
+	clear_bit(0, &tctx->task_state);
+
 	while (__tctx_task_work(tctx))
 		cond_resched();
-
-	clear_bit(0, &tctx->task_state);
 }
 
 static int io_task_work_add(struct task_struct *tsk, struct io_kiocb *req,