Commit 5ce13431dd (linux-next, mirror of https://github.com/edk2-porting/linux-next.git)
Here is a second try at a patch I posted earlier, which also implements suggestions Steve made:

Before this patch, GFS2 would keep searching through all the rgrps until it found one that had a chunk of free blocks big enough to satisfy the size hint, which is based on the file write size, regardless of whether the chunk was big enough to perform the write. However, when doing big writes there may not be a large enough chunk of free blocks in any rgrp, due to file system fragmentation. The largest chunk may be big enough to satisfy the write request, yet still not meet the ideal reservation size from the "size hint". Writes would then slow to a crawl because every write would search every rgrp, finally give up, and default to a single-block write. In my case, performance dropped from 425MB/s to 18KB/s, or roughly 24,000 times slower.

This patch makes it so that if we can't find a contiguous chunk of blocks big enough to satisfy the size hint, we fall back to the largest chunk of blocks we found that will still contain the write. It does so by keeping track of the largest run of blocks within the rgrp.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
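A minimal sketch of the fallback idea described in the commit message, assuming a simplified in-memory array of free-block runs. The names here (`struct run`, `find_reservation`) are hypothetical, not the actual GFS2 code, which tracks the largest extent while scanning the rgrp bitmaps:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical run of contiguous free blocks within a resource group. */
struct run {
	uint64_t start;	/* first block of the run */
	uint32_t len;	/* number of contiguous free blocks */
};

/*
 * Scan the free-block runs of one rgrp. Return a run that satisfies
 * the size hint if one exists; otherwise remember the largest run seen
 * and fall back to it, as long as it can still hold the write itself.
 * Returns false only when even the largest run is too small for the
 * write, in which case the caller would move on to the next rgrp.
 */
static bool find_reservation(const struct run *runs, size_t nruns,
			     uint32_t size_hint, uint32_t write_blocks,
			     struct run *out)
{
	struct run best = { 0, 0 };

	for (size_t i = 0; i < nruns; i++) {
		if (runs[i].len >= size_hint) {
			*out = runs[i];	/* ideal: honors the size hint */
			return true;
		}
		if (runs[i].len > best.len)
			best = runs[i];	/* track the largest run so far */
	}

	if (best.len >= write_blocks) {
		*out = best;	/* fallback: largest run that fits the write */
		return true;
	}
	return false;
}
```

The key point is the two-tier acceptance: the size hint is an optimization target, while the write size is the hard floor, so a fragmented rgrp can still satisfy the write instead of forcing the single-block fallback the message describes.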
Files in fs/gfs2:

acl.c
acl.h
aops.c
bmap.c
bmap.h
dentry.c
dir.c
dir.h
export.c
file.c
gfs2.h
glock.c
glock.h
glops.c
glops.h
incore.h
inode.c
inode.h
Kconfig
lock_dlm.c
log.c
log.h
lops.c
lops.h
main.c
Makefile
meta_io.c
meta_io.h
ops_fstype.c
quota.c
quota.h
recovery.c
recovery.h
rgrp.c
rgrp.h
super.c
super.h
sys.c
sys.h
trace_gfs2.h
trans.c
trans.h
util.c
util.h
xattr.c
xattr.h