blk-mq: order adding requests to hctx->dispatch and checking SCHED_RESTART
commit d7d8535f37 upstream.

The SCHED_RESTART code path is relied on to re-run the queue for requests
left in hctx->dispatch; meanwhile, the SCHED_RESTART flag is checked when
adding requests to hctx->dispatch. Memory barriers have to be used for
ordering the following two pairs of OPs:

1) adding requests to hctx->dispatch and checking SCHED_RESTART in
   blk_mq_dispatch_rq_list()

2) clearing SCHED_RESTART and checking if there is a request in
   hctx->dispatch in blk_mq_sched_restart()

Without the added memory barriers, either:

1) blk_mq_sched_restart() may miss requests added to hctx->dispatch while
   blk_mq_dispatch_rq_list() observes SCHED_RESTART and does not run the
   queue on the dispatch side, or

2) blk_mq_dispatch_rq_list() still sees SCHED_RESTART and does not run the
   queue on the dispatch side, while blk_mq_sched_restart() misses the
   check for requests in hctx->dispatch.

An IO hang in the ltp/fs_fill test was reported by the kernel test robot:

	https://lkml.org/lkml/2020/7/26/77

It turns out to be caused by the out-of-order OPs above, and the IO hang
can no longer be observed after applying this patch.

Fixes: bd166ef183 ("blk-mq-sched: add framework for MQ capable IO schedulers")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Jeffery <djeffery@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 30028c328b
commit 77064570e4
@@ -69,6 +69,15 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
 		return;
 	clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
 
+	/*
+	 * Order clearing SCHED_RESTART and list_empty_careful(&hctx->dispatch)
+	 * in blk_mq_run_hw_queue(). Its pair is the barrier in
+	 * blk_mq_dispatch_rq_list(). So dispatch code won't see SCHED_RESTART,
+	 * meantime new request added to hctx->dispatch is missed to check in
+	 * blk_mq_run_hw_queue().
+	 */
+	smp_mb();
+
 	blk_mq_run_hw_queue(hctx, true);
 }
@@ -1221,6 +1221,15 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		list_splice_init(list, &hctx->dispatch);
 		spin_unlock(&hctx->lock);
 
+		/*
+		 * Order adding requests to hctx->dispatch and checking
+		 * SCHED_RESTART flag. The pair of this smp_mb() is the one
+		 * in blk_mq_sched_restart(). Avoid restart code path to
+		 * miss the new added requests to hctx->dispatch, meantime
+		 * SCHED_RESTART is observed here.
+		 */
+		smp_mb();
+
 		/*
 		 * If SCHED_RESTART was set by the caller of this function and
 		 * it is no longer set that means that it was cleared by another