dm: fix missing imposition of queue_limits from dm_wq_work() thread

If a DM device was suspended when bios were issued to it, those bios
would be deferred using queue_io(). Once the DM device was resumed,
dm_process_bio() could be called by dm_wq_work() for an original bio
that still needed splitting. dm_process_bio()'s check for
current->bio_list (meaning the call chain is within ->submit_bio) as a
prerequisite for calling blk_queue_split() for "abnormal IO" meant that
dm_process_bio() never imposed the corresponding queue_limits
(e.g. discard_granularity, discard_max_bytes, etc).
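
For reference, the gating being removed (shown in the diff below) was:

	if (current->bio_list) {	/* NULL when called from dm_wq_work() */
		if (is_abnormal_io(bio))
			blk_queue_split(&bio);
		/* regular IO is split by __split_and_process_bio */
	}

current->bio_list is only non-NULL while inside submit_bio_noacct()'s
recursion, so the check fails for bios resubmitted directly from the
dm_wq_work() workqueue and the split is silently skipped.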

Fix this by always having dm_wq_work() resubmit deferred bios using
submit_bio_noacct().
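
With the fix, dm_wq_work()'s loop reduces to the following (as in the
diff below):

	while (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
		spin_lock_irq(&md->deferred_lock);
		bio = bio_list_pop(&md->deferred);
		spin_unlock_irq(&md->deferred_lock);

		if (!bio)
			break;

		submit_bio_noacct(bio);
	}

submit_bio_noacct() establishes current->bio_list before calling into
->submit_bio, so deferred bios now take the same path as bios issued
by an application thread.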

A side-effect is that blk_queue_split() is now always called for
"abnormal IO" from ->submit_bio, whether issued by an application
thread or by the dm_wq_work() workqueue, so proper bio splitting and
depth-first bio submission are performed. For the sake of clarity,
remove the current->bio_list check before the call to
blk_queue_split().
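
To illustrate the depth-first behavior, a simplified sketch of the
discard case inside blk_queue_split() (the real code lives in
block/blk-merge.c; tracing and other details elided):

	/* split off a front piece that honors the discard queue_limits */
	split = blk_bio_discard_split(q, *bio, &q->bio_split, &nr_segs);
	if (split) {
		bio_chain(split, *bio);
		/* remainder is queued on current->bio_list: depth-first */
		submit_bio_noacct(*bio);
		*bio = split;
	}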

Also, remove dm_wq_work()'s use of dm_{get,put}_live_table() -- it is
no longer needed since the IO will be reissued through ->submit_bio.
And rename the bio variable from 'c' to 'bio'.
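
Taking the live table in dm_wq_work() is redundant because dm's
->submit_bio entry point already takes it under SRCU, roughly (a
sketch of dm_submit_bio() as of this commit, details elided):

	map = dm_get_live_table(md, &srcu_idx);
	...
	ret = dm_process_bio(md, map, bio);
	...
	dm_put_live_table(md, srcu_idx);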

Fixes: cf9c378655 ("dm: fix comment in dm_process_bio()")
Reported-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
commit 0c2915b8c6
parent 7d837c0dd9
Author: Mike Snitzer <snitzer@redhat.com>
Date:   2020-09-28 13:41:36 -04:00

@@ -1676,17 +1676,11 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 	}
 
 	/*
-	 * If in ->submit_bio we need to use blk_queue_split(), otherwise
-	 * queue_limits for abnormal requests (e.g. discard, writesame, etc)
-	 * won't be imposed.
-	 * If called from dm_wq_work() for deferred bio processing, bio
-	 * was already handled by following code with previous ->submit_bio.
+	 * Use blk_queue_split() for abnormal IO (e.g. discard, writesame, etc)
+	 * otherwise associated queue_limits won't be imposed.
 	 */
-	if (current->bio_list) {
-		if (is_abnormal_io(bio))
-			blk_queue_split(&bio);
-		/* regular IO is split by __split_and_process_bio */
-	}
+	if (is_abnormal_io(bio))
+		blk_queue_split(&bio);
 
 	if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
 		return __process_bio(md, map, bio);
@@ -2383,29 +2377,19 @@ static int dm_wait_for_completion(struct mapped_device *md, long task_state)
  */
 static void dm_wq_work(struct work_struct *work)
 {
-	struct mapped_device *md = container_of(work, struct mapped_device,
-						work);
-	struct bio *c;
-	int srcu_idx;
-	struct dm_table *map;
-
-	map = dm_get_live_table(md, &srcu_idx);
+	struct mapped_device *md = container_of(work, struct mapped_device, work);
+	struct bio *bio;
 
 	while (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
 		spin_lock_irq(&md->deferred_lock);
-		c = bio_list_pop(&md->deferred);
+		bio = bio_list_pop(&md->deferred);
 		spin_unlock_irq(&md->deferred_lock);
 
-		if (!c)
+		if (!bio)
 			break;
 
-		if (dm_request_based(md))
-			(void) submit_bio_noacct(c);
-		else
-			(void) dm_process_bio(md, map, c);
+		submit_bio_noacct(bio);
 	}
-
-	dm_put_live_table(md, srcu_idx);
 }
 
 static void dm_queue_flush(struct mapped_device *md)