5a26e45edb
When doing io_uring benchmarks on /dev/nullb0, it's easy to crash the
kernel when poll request timeouts are triggered, as reported by David. [1]
BUG: kernel NULL pointer dereference, address: 0000000000000008
Workqueue: kblockd blk_mq_timeout_work
RIP: 0010:null_timeout_rq+0x4e/0x91
Call Trace:
? null_timeout_rq+0x4e/0x91
blk_mq_handle_expired+0x31/0x4b
bt_iter+0x68/0x84
? bt_tags_iter+0x81/0x81
__sbitmap_for_each_set.constprop.0+0xb0/0xf2
? __blk_mq_complete_request_remote+0xf/0xf
bt_for_each+0x46/0x64
? __blk_mq_complete_request_remote+0xf/0xf
? percpu_ref_get_many+0xc/0x2a
blk_mq_queue_tag_busy_iter+0x14d/0x18e
blk_mq_timeout_work+0x95/0x127
process_one_work+0x185/0x263
worker_thread+0x1b5/0x227
This is a race between null_timeout_rq() and null_poll():
null_poll()                                  null_timeout_rq()
  spin_lock(&nq->poll_lock)
  list_splice_init(&nq->poll_list, &list)
  spin_unlock(&nq->poll_lock)

  while (!list_empty(&list))
    req = list_first_entry()
    list_del_init()
    ...
    blk_mq_add_to_batch()
    // req->rq_next = NULL

                                             spin_lock(&nq->poll_lock)

                                             // rq->queuelist->next == NULL
                                             list_del_init(&rq->queuelist)

                                             spin_unlock(&nq->poll_lock)
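For background on why the oops address is 0000000000000008 (this is an
added note, not part of the original report): rq_next and queuelist appear
to share a union in struct request, so blk_mq_add_to_batch() storing a
NULL batch link also clears queuelist.next, and the racing list_del_init()
then writes through that NULL pointer at the offset of prev. The
stand-alone user-space toy below only mimics that aliasing with
simplified, hypothetical re-implementations of the list helpers; it is an
illustration of the hazard, not kernel code.

/*
 * Toy reproduction of the aliasing hazard (assumption: rq_next and
 * queuelist overlap, as the comments in the diagram above suggest).
 * Build with "gcc toy.c && ./a.out"; it crashes on the NULL write at
 * offset 8, matching the oops address 0000000000000008.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *l) { l->next = l->prev = l; }

static void list_del_init(struct list_head *e)
{
    /* Same unlink order as the kernel helper. */
    e->next->prev = e->prev;    /* e->next == NULL: write to (void *)0 + 8 */
    e->prev->next = e->next;
    INIT_LIST_HEAD(e);
}

struct toy_request {
    union {
        struct list_head queuelist;   /* while queued on nq->poll_list */
        struct toy_request *rq_next;  /* while linked into the completion batch */
    };
};

int main(void)
{
    struct toy_request rq;

    INIT_LIST_HEAD(&rq.queuelist);

    /* null_poll(): blk_mq_add_to_batch() stores the batch link here. */
    rq.rq_next = NULL;                /* clobbers queuelist.next */

    /* null_timeout_rq() racing in on the same request: */
    list_del_init(&rq.queuelist);     /* NULL dereference, as in the oops */

    printf("not reached\n");
    return 0;
}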
Fix this by setting the request state to MQ_RQ_COMPLETE under
nq->poll_lock protection, so that null_timeout_rq() can safely detect
the race and return early.
Note that this patch only fixes the kernel panic when a request timeout happens.
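A minimal sketch of the approach described above, assuming the current
null_blk poll-queue layout (nq->poll_lock / nq->poll_list) and the
existing blk-mq helpers blk_mq_set_request_complete() and
blk_mq_request_completed(); error recording and the actual request
completion are abridged, so this is not the literal patch:

#include <linux/blk-mq.h>
#include "null_blk.h"

static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
{
    struct nullb_queue *nq = hctx->driver_data;
    struct request *req;
    LIST_HEAD(list);
    int nr = 0;

    spin_lock(&nq->poll_lock);
    list_splice_init(&nq->poll_list, &list);
    list_for_each_entry(req, &list, queuelist)
        blk_mq_set_request_complete(req);   /* mark MQ_RQ_COMPLETE while still locked */
    spin_unlock(&nq->poll_lock);

    while (!list_empty(&list)) {
        req = list_first_entry(&list, struct request, queuelist);
        list_del_init(&req->queuelist);
        /* end the request / add it to the completion batch (abridged) */
        nr++;
    }

    return nr;
}

static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
{
    struct blk_mq_hw_ctx *hctx = rq->mq_hctx;

    if (hctx->type == HCTX_TYPE_POLL) {
        struct nullb_queue *nq = hctx->driver_data;

        spin_lock(&nq->poll_lock);
        /*
         * null_poll() may already have taken this request off
         * nq->poll_list; in that case its queuelist must not be
         * touched here, so detect the race and return early.
         */
        if (blk_mq_request_completed(rq)) {
            spin_unlock(&nq->poll_lock);
            return BLK_EH_DONE;
        }
        list_del_init(&rq->queuelist);
        spin_unlock(&nq->poll_lock);
    }

    /* fail the request as a timeout and complete it, as before (abridged) */
    blk_mq_complete_request(rq);
    return BLK_EH_DONE;
}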
[1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
Fixes: