readahead: reduce unnecessary mmap_miss increases
The original INT_MAX is too large; reduce it to

- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster on a changed access pattern

Background: in the mosbench exim benchmark, which does multi-threaded page faults on a shared struct file, the ra->mmap_miss updates are found to cause excessive cache line bouncing on tmpfs. The ra state updates are needless for tmpfs because it disables readahead totally (shmem_backing_dev_info.ra_pages == 0).

Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
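The change is a saturating-counter pattern: once mmap_miss has climbed past any threshold the readahead heuristic cares about, further increments carry no information, so the guard skips the write and keeps the shared cache line clean. A minimal user-space sketch of the idea (the names, cap value, and harness below are illustrative, not the kernel's code):

/* saturating_counter.c - a minimal sketch of the pattern; illustrative
 * names, not the kernel's actual code. */
#include <stdio.h>

#define MISS_CAP 1000	/* stand-in for MMAP_LOTSAMISS * 10 */

struct ra_state {
	unsigned int mmap_miss;	/* heuristic state shared by all faulting threads */
};

/*
 * Reading the counter leaves the cache line in the shared state on every
 * CPU; only the write invalidates the other copies.  Once the counter
 * saturates, concurrent threads stop writing the line entirely.  The
 * update is deliberately plain (non-atomic): a lost increment merely
 * perturbs a heuristic.
 */
static void record_miss(struct ra_state *ra)
{
	if (ra->mmap_miss < MISS_CAP)
		ra->mmap_miss++;
}

int main(void)
{
	struct ra_state ra = { 0 };

	for (int i = 0; i < 5000; i++)
		record_miss(&ra);
	printf("mmap_miss saturated at %u\n", ra.mmap_miss);	/* prints 1000 */
	return 0;
}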
commit 207d04baa3
parent 275b12bf54
mm/filemap.c

@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*
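The cap value also explains the "restore mmap read-around faster" point in the changelog. In mm/filemap.c of that era, do_sync_mmap_readahead() skipped read-around once mmap_miss exceeded MMAP_LOTSAMISS (100), and do_async_mmap_readahead() decremented the counter on each readahead hit. A simplified model of that interplay (the function names echo mm/filemap.c, but the control flow and harness here are a sketch, not the kernel source):

/* recovery_model.c - simplified model of the recovery behaviour. */
#include <stdio.h>

#define MMAP_LOTSAMISS	100
#define MISS_CAP	(MMAP_LOTSAMISS * 10)

static unsigned int mmap_miss;

/* modelled on do_sync_mmap_readahead(): count a faulting miss */
static void sync_fault_miss(void)
{
	if (mmap_miss < MISS_CAP)	/* the capped increment from this commit */
		mmap_miss++;
}

/* modelled on do_async_mmap_readahead(): a hit walks the counter back */
static void async_fault_hit(void)
{
	if (mmap_miss > 0)
		mmap_miss--;
}

/* read-around stays disabled while misses dominate */
static int readaround_enabled(void)
{
	return mmap_miss <= MMAP_LOTSAMISS;
}

int main(void)
{
	for (int i = 0; i < 100000; i++)	/* long miss-heavy phase */
		sync_fault_miss();

	int hits = 0;
	while (!readaround_enabled()) {		/* access pattern changes */
		async_fault_hit();
		hits++;
	}
	printf("read-around restored after %d hits\n", hits);	/* 900 */
	return 0;
}

With the old INT_MAX cap, a long miss-heavy phase could push mmap_miss into the billions, and an equal number of hits would be needed before read-around resumed; with the cap at MMAP_LOTSAMISS * 10, at most 900 hits restore read-around after the access pattern changes.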