Mirror of https://mirrors.bfsu.edu.cn/git/linux.git
commit 2341d662e9
The processes associated with a bfq_queue, say Q, may happen to generate their cumulative I/O at a lower rate than the rate at which the device could serve the same I/O. This is rather probable, e.g., if only one process is associated with Q and the device is an SSD. As a result, Q often becomes empty while in service. If BFQ is not allowed to switch to another queue when Q becomes empty, then, during the service of Q, there will be frequent "service holes", i.e., time intervals during which Q is empty and the device can only consume the I/O already queued in its hardware queues. This easily causes considerable losses of throughput.

To counter this problem, BFQ implements a request injection mechanism, which tries to fill the above service holes with I/O requests taken from other bfq_queues. The hard part in this mechanism is finding the right amount of I/O to inject, so as to both boost throughput and not break Q's bandwidth and latency guarantees. To this end, the current version of this mechanism measures the bandwidth enjoyed by Q while it is being served, and tries to inject the maximum possible amount of extra service that does not cause Q's bandwidth to decrease too much.

This solution has an important shortcoming: for bandwidth measurements to be stable and reliable, Q must remain in service for a much longer time than that needed to serve a single I/O request. Unfortunately, this does not hold with many workloads.

This commit addresses the issue by changing the way the allowed amount of injection is dynamically computed. It tunes injection as a function of the service times of single I/O requests of Q, instead of Q's bandwidth. Single-request service times remain meaningful even if Q gets very few I/O requests completed while it is in service.

As a testbed for this new solution, we measured the throughput reached by BFQ with one of the nastiest workloads and configurations for this scheduler: the workload generated by the dbench test (in the Phoronix suite), with 6 clients, on a filesystem with journaling, and with the journaling daemon enjoying a higher weight than normal processes. With this commit, the throughput grows from ~100 MB/s to ~150 MB/s on a PLEXTOR PX-256M5.

Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Francesco Pollicino <fra.fra.800@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
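The tuning policy described above can be illustrated with a small, self-contained user-space sketch. This is not the code added by the commit (that change lives in bfq-iosched.c); the function name `update_inject_limit`, the ~1.5x tolerance threshold, and the `max_in_flight` cap are illustrative assumptions. The sketch only mirrors the stated idea: raise the injection limit while the observed per-request service time of Q stays close to a baseline measured without injection, and lower it as soon as injection inflates that service time noticeably.

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative sketch of service-time-based injection tuning
 * (assumed names and constants; not the actual bfq-iosched.c code).
 *
 * baseline_ns: service time of a single request of Q measured with no
 *              injected I/O in flight.
 */
struct inject_state {
	uint64_t baseline_ns;       /* reference single-request service time */
	unsigned int limit;         /* extra requests injected while Q is empty */
	unsigned int max_in_flight; /* assumed upper bound on the limit */
};

static void update_inject_limit(struct inject_state *s, uint64_t observed_ns)
{
	/* Assumed tolerance: allow up to ~1.5x the baseline service time. */
	uint64_t threshold = s->baseline_ns + (s->baseline_ns >> 1);

	if (observed_ns > threshold) {
		/* Injection is hurting Q's per-request latency: back off. */
		if (s->limit > 0)
			s->limit--;
	} else if (s->limit < s->max_in_flight) {
		/* Q is not being slowed down noticeably: inject a bit more. */
		s->limit++;
	}
}

int main(void)
{
	struct inject_state s = { .baseline_ns = 200000, .limit = 0,
				  .max_in_flight = 8 };
	/* Simulated per-request service times (ns) observed while injecting. */
	uint64_t samples[] = { 210000, 220000, 350000, 240000, 205000 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		update_inject_limit(&s, samples[i]);
		printf("sample %u: service time %llu ns -> inject limit %u\n",
		       i, (unsigned long long)samples[i], s.limit);
	}
	return 0;
}
```

Run as an ordinary program, the simulated samples show the limit creeping up while service times stay near the 200 µs baseline and stepping back down when a slow request is observed, which is the behavior the commit message describes for single-request service times.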
block/ directory listing at this commit:

partitions/
badblocks.c
bfq-cgroup.c
bfq-iosched.c
bfq-iosched.h
bfq-wf2q.c
bio-integrity.c
bio.c
blk-cgroup.c
blk-core.c
blk-exec.c
blk-flush.c
blk-integrity.c
blk-ioc.c
blk-iolatency.c
blk-lib.c
blk-map.c
blk-merge.c
blk-mq-cpumap.c
blk-mq-debugfs-zoned.c
blk-mq-debugfs.c
blk-mq-debugfs.h
blk-mq-pci.c
blk-mq-rdma.c
blk-mq-sched.c
blk-mq-sched.h
blk-mq-sysfs.c
blk-mq-tag.c
blk-mq-tag.h
blk-mq-virtio.c
blk-mq.c
blk-mq.h
blk-pm.c
blk-pm.h
blk-rq-qos.c
blk-rq-qos.h
blk-settings.c
blk-softirq.c
blk-stat.c
blk-stat.h
blk-sysfs.c
blk-throttle.c
blk-timeout.c
blk-wbt.c
blk-wbt.h
blk-zoned.c
blk.h
bounce.c
bsg-lib.c
bsg.c
cmdline-parser.c
compat_ioctl.c
elevator.c
genhd.c
ioctl.c
ioprio.c
Kconfig
Kconfig.iosched
kyber-iosched.c
Makefile
mq-deadline.c
opal_proto.h
partition-generic.c
scsi_ioctl.c
sed-opal.c
t10-pi.c