mm/mremap: optimize the start addresses in move_page_tables()

Patch series "Optimize mremap during mutual alignment within PMD", v6.

This patchset optimizes the start addresses in move_page_tables() and
tests the changes.  It addresses a warning [1] that occurs due to a
downward, overlapping move on a mutually-aligned offset within a PMD
during exec.  By initiating the copy process at the PMD level when such
alignment is present, we can prevent this warning and speed up the copying
process at the same time.  Linus Torvalds suggested this idea.  Check the
individual patches for more details.

[1] https://lore.kernel.org/all/ZB2GTBD%2FLWTrkOiO@dhcp22.suse.cz/


This patch (of 7):

Recently, we have seen reports [1] of a warning that triggers due to
move_page_tables() doing a downward and overlapping move on a
mutually-aligned offset within a PMD.  By mutual alignment, I mean the
source and destination addresses of the mremap are at the same offset
within a PMD.
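
For instance (illustrative addresses, assuming 2MB PMDs as on x86_64,
i.e. ~PMD_MASK == 0x1fffff):

  old_addr = 0x7f5b4c1000  ->  old_addr & ~PMD_MASK = 0xc1000
  new_addr = 0x7f5b2c1000  ->  new_addr & ~PMD_MASK = 0xc1000

Both offsets within the PMD are 0xc1000, so the two addresses are
mutually aligned.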

This mutual alignment along with the fact that the move is downward is
sufficient to cause a warning related to having an allocated PMD that does
not have PTEs in it.

This warning will only trigger when there is mutual alignment in the move
operation.  A solution, as suggested by Linus Torvalds [2], is to initiate
the copy process at the PMD level whenever such alignment is present. 
Implementing this approach will not only prevent the warning from being
triggered, but will also optimize the operation, as this method should
speed up the copy process whenever there is a possibility to start
copying at the PMD level.
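
Concretely, the realignment just masks both addresses down to the PMD
boundary (a sketch of the arithmetic, using the illustrative addresses
above):

	old_addr &= PMD_MASK;	/* 0x7f5b4c1000 -> 0x7f5b400000 */
	new_addr &= PMD_MASK;	/* 0x7f5b2c1000 -> 0x7f5b200000 */

Once both addresses sit on a PMD boundary, the copy loop can move a
whole page table (512 PTEs on x86_64) per iteration via
move_normal_pmd() instead of copying PTE by PTE.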

Some more points:
a. The optimization can be done only when both the source and
destination of the mremap do not have anything mapped below them up to a
PMD boundary. I added support to detect that.

b. Point (a) is not a problem for the call to move_page_tables() from
exec.c, as nothing is expected to be mapped below the source. However,
for non-overlapping, mutually aligned moves as triggered by mremap(2), I
added support for checking such cases (see the sketch after this list).

c. I currently only optimize for PMD moves; in the future, we can build
on this work and do PUD moves as well if there is a need. But I want to
take it one step at a time.

d. We need to be careful about mremap of ranges within the VMA itself.
For this purpose, I added checks to determine whether the realigned
address would still fall within the same VMA.
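
As a rough illustration of (b), here is a minimal userspace sketch that
performs a non-overlapping, mutually aligned move via mremap(2). The
fixed addresses are hypothetical; the sketch assumes 2MB PMDs, x86_64,
and that nothing else is mapped near these addresses:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Both addresses have offset 0x1000 within a 2MB PMD. */
		void *src_hint = (void *)0x600000001000UL;
		void *dst_hint = (void *)0x600000801000UL;
		size_t len = 4UL << 20;	/* spans at least one full PMD */

		void *src = mmap(src_hint, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
				 -1, 0);
		if (src == MAP_FAILED)
			return 1;

		/*
		 * Non-overlapping, mutually aligned move: a candidate for
		 * the PMD-level copy once this series is applied.
		 */
		void *dst = mremap(src, len, len,
				   MREMAP_MAYMOVE | MREMAP_FIXED, dst_hint);
		if (dst == MAP_FAILED)
			return 1;

		printf("moved %p -> %p\n", src, dst);
		return 0;
	}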

[1] https://lore.kernel.org/all/ZB2GTBD%2FLWTrkOiO@dhcp22.suse.cz/
[2] https://lore.kernel.org/all/CAHk-=whd7msp8reJPfeGNyt0LiySMT0egExx3TVZSX3Ok6X=9g@mail.gmail.com/

Link: https://lkml.kernel.org/r/20230903151328.2981432-1-joel@joelfernandes.org
Link: https://lkml.kernel.org/r/20230903151328.2981432-2-joel@joelfernandes.org
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Lokesh Gidra <lokeshgidra@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -489,6 +489,53 @@ static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
 	return moved;
 }
 
+/*
+ * A helper to check if a previous mapping exists. Required for
+ * move_page_tables() and realign_addr() to determine if a previous mapping
+ * exists before we can do realignment optimizations.
+ */
+static bool can_align_down(struct vm_area_struct *vma, unsigned long addr_to_align,
+			   unsigned long mask)
+{
+	unsigned long addr_masked = addr_to_align & mask;
+
+	/*
+	 * If @addr_to_align of either source or destination is not the beginning
+	 * of the corresponding VMA, we can't align down or we will destroy part
+	 * of the current mapping.
+	 */
+	if (vma->vm_start != addr_to_align)
+		return false;
+
+	/*
+	 * Make sure the realignment doesn't cause the address to fall on an
+	 * existing mapping.
+	 */
+	return find_vma_intersection(vma->vm_mm, addr_masked, vma->vm_start) == NULL;
+}
+
+/* Opportunistically realign to specified boundary for faster copy. */
+static void try_realign_addr(unsigned long *old_addr, struct vm_area_struct *old_vma,
+			     unsigned long *new_addr, struct vm_area_struct *new_vma,
+			     unsigned long mask)
+{
+	/* Skip if the addresses are already aligned. */
+	if ((*old_addr & ~mask) == 0)
+		return;
+
+	/* Only realign if the new and old addresses are mutually aligned. */
+	if ((*old_addr & ~mask) != (*new_addr & ~mask))
+		return;
+
+	/* Ensure realignment doesn't cause overlap with existing mappings. */
+	if (!can_align_down(old_vma, *old_addr, mask) ||
+	    !can_align_down(new_vma, *new_addr, mask))
+		return;
+
+	*old_addr = *old_addr & mask;
+	*new_addr = *new_addr & mask;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -508,6 +555,14 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		return move_hugetlb_page_tables(vma, new_vma, old_addr,
 						new_addr, len);
 
+	/*
+	 * If possible, realign addresses to PMD boundary for faster copy.
+	 * Only realign if the mremap copying hits a PMD boundary.
+	 */
+	if ((vma != new_vma)
+		&& (len >= PMD_SIZE - (old_addr & ~PMD_MASK)))
+		try_realign_addr(&old_addr, vma, &new_addr, new_vma, PMD_MASK);
+
 	flush_cache_range(vma, old_addr, old_end);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma->vm_mm,
 				old_addr, old_end);
@@ -577,6 +632,13 @@ again:
 	mmu_notifier_invalidate_range_end(&range);
 
+	/*
+	 * Prevent negative return values when {old,new}_addr was realigned
+	 * but we broke out of the above loop for the first PMD itself.
+	 */
+	if (len + old_addr < old_end)
+		return 0;
+
 	return len + old_addr - old_end;	/* how much done */
 }
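
For instance, with illustrative values: had old_addr been realigned from
0x7f5b4c1000 down to 0x7f5b400000 with len = 0x200000 (so old_end =
0x7f5b4c1000 + 0x200000 = 0x7f5b6c1000), breaking out on the first PMD
would leave:

  len + old_addr = 0x7f5b600000 < old_end = 0x7f5b6c1000

so len + old_addr - old_end would be -0xc1000, a huge value as an
unsigned long; the new check returns 0 instead.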