
dm cache policy mq: fix promotions to occur as expected

Micro benchmarks that repeatedly issued IO to a single block were
failing to cause a promotion from the origin device to the cache.  Fix
this by not updating the stats during map() if -EWOULDBLOCK will be
returned.

The mq policy will only update stats, consider migration, etc, once per
tick period (a unit of time established between dm-cache core and the
policies).
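
As a concrete illustration of that gating, here is a minimal userspace
sketch (not the kernel code; all names are made up for the example) of
an entry whose stats are only bumped once per tick:

#include <stdbool.h>
#include <stdio.h>

struct sketch_entry {
	unsigned long tick;	/* tick of the last stats update */
	unsigned int hit_count;
};

struct sketch_policy {
	unsigned long tick;	/* advanced once per tick period by the core */
};

static bool sketch_updated_this_tick(struct sketch_policy *p, struct sketch_entry *e)
{
	return e->tick == p->tick;
}

static void sketch_update_stats(struct sketch_policy *p, struct sketch_entry *e)
{
	e->tick = p->tick;	/* remember this entry was already counted */
	e->hit_count++;
}

int main(void)
{
	struct sketch_policy p = { .tick = 1 };
	struct sketch_entry e = { .tick = 0, .hit_count = 0 };
	int i;

	/* Repeated IO to the same block within one tick is counted once. */
	for (i = 0; i < 5; i++)
		if (!sketch_updated_this_tick(&p, &e))
			sketch_update_stats(&p, &e);

	printf("stat updates this tick: %u\n", e.hit_count);	/* prints 1 */
	return 0;
}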

When the IO thread calls the policy's map method, the policy returns
-EWOULDBLOCK if it would like to migrate the associated block; the IO
is then handed over to a worker thread which handles the migration.
The worker thread calls map again to check that the migration is still
needed (this avoids a race, among other things).  *BUT*, before this
fix, if we were still in the same tick period the stats had already
been updated by the previous map call -- so the migration would no
longer be requested.
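
The hand-off can be pictured with the following sketch.  It is
illustrative only, not the dm-cache core code: it reuses the
pre_cache_entry_found() helper and types visible in the hunk below, and
the calling convention (can_migrate false on the fast path, true from
the worker) is an assumption made for the example.

static void sketch_handle_block(struct mq_policy *mq, struct entry *e,
				bool discarded_oblock, int data_dir)
{
	struct policy_result result;
	int r;

	/* Fast path: IO thread, the migration must not block here. */
	r = pre_cache_entry_found(mq, e, false, discarded_oblock,
				  data_dir, &result);
	if (r == -EWOULDBLOCK) {
		/*
		 * Slow path: a worker thread re-checks before migrating.
		 * Before this fix the first call had already bumped the
		 * per-tick stats, so the second call saw the entry as
		 * updated this tick and degraded to POLICY_MISS instead
		 * of requesting the promotion.
		 */
		r = pre_cache_entry_found(mq, e, true, discarded_oblock,
					  data_dir, &result);
	}
}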

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Author: Joe Thornber, 2013-11-15 10:51:20 +00:00 (committed by Mike Snitzer)
parent 9b7aaa64f9
commit af95e7a69b


@@ -730,15 +730,18 @@ static int pre_cache_entry_found(struct mq_policy *mq, struct entry *e,
 	int r = 0;
 	bool updated = updated_this_tick(mq, e);
 
-	requeue_and_update_tick(mq, e);
-
 	if ((!discarded_oblock && updated) ||
-	    !should_promote(mq, e, discarded_oblock, data_dir))
+	    !should_promote(mq, e, discarded_oblock, data_dir)) {
+		requeue_and_update_tick(mq, e);
 		result->op = POLICY_MISS;
-	else if (!can_migrate)
+
+	} else if (!can_migrate)
 		r = -EWOULDBLOCK;
-	else
+
+	else {
+		requeue_and_update_tick(mq, e);
 		r = pre_cache_to_cache(mq, e, result);
+	}
 
 	return r;
 }