This patch adds support for verifying that we correctly handle the
situation where something is already mapped before the destination of the remap.
Any realignment of the destination address followed by a PMD copy would
destroy that existing mapping. In such cases, we need to avoid doing the
optimization.
To test this, we map an area, called the preamble, immediately before the
remap region, and then verify after the mremap operation that this region
did not get corrupted.
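The shape of the test is roughly the following. This is a minimal userspace
sketch, not the actual selftest code: the region sizes and fill patterns are
illustrative assumptions; only the destination address 0x5700000 is taken
from the prints further below.

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#define PRE_SZ (4096UL)       /* one preamble page (assumed size)  */
	#define SRC_SZ (5UL << 20)    /* 5 MB source region (assumed size) */

	int main(void)
	{
		/* PMD-unaligned destination; the preamble sits directly
		 * below it, so realigning dst down to a PMD boundary
		 * would overlap the preamble. */
		char *dst = (char *)0x5700000UL;

		char *pre = mmap(dst - PRE_SZ, PRE_SZ, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
		char *src = mmap(NULL, SRC_SZ, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (pre == MAP_FAILED || src == MAP_FAILED)
			return 1;

		memset(pre, 0xab, PRE_SZ);	/* pattern that must survive */
		memset(src, 0xcd, SRC_SZ);

		if (mremap(src, SRC_SZ, SRC_SZ,
			   MREMAP_MAYMOVE | MREMAP_FIXED, dst) == MAP_FAILED)
			return 1;

		/* If the kernel wrongly realigned dst and copied page
		 * tables from the aligned address, the preamble is gone. */
		for (unsigned long i = 0; i < PRE_SZ; i++) {
			if (pre[i] != (char)0xab) {
				puts("preamble corrupted");
				return 1;
			}
		}
		puts("preamble intact");
		return 0;
	}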
Putting some prints in the kernel, I verified that we optimize
correctly in different situations:

1. Optimize when there is alignment and no previous mapping (this is
tested by the previous patch).
<prints>
can_align_down(old_vma->vm_start=2900000, old_addr=2900000, mask=-2097152): 0
can_align_down(new_vma->vm_start=2f00000, new_addr=2f00000, mask=-2097152): 0
=== Starting move_page_tables ===
Doing PUD move for 2800000 -> 2e00000 of extent=200000 <-- Optimization
Doing PUD move for 2a00000 -> 3000000 of extent=200000
Doing PUD move for 2c00000 -> 3200000 of extent=200000
</prints>
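For reference, the mask in these prints is -2097152 == -0x200000, i.e.
~(PMD_SIZE - 1) for the usual 2 MB PMD, so addr & mask rounds an address
down to a 2 MB boundary. That is exactly where the copies above start:
0x2900000 & -0x200000 == 0x2800000 on the source side, and
0x2f00000 & -0x200000 == 0x2e00000 on the destination side.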
2. Don't optimize when there is alignment but there is a previous mapping
(this is tested by this patch).
Notice that can_align_down() returns 1 for the destination mapping,
as we detected there is something there.
<prints>
can_align_down(old_vma->vm_start=2900000, old_addr=2900000, mask=-2097152): 0
can_align_down(new_vma->vm_start=5700000, new_addr=5700000, mask=-2097152): 1
=== Starting move_page_tables ===
Doing move_ptes for 2900000 -> 5700000 of extent=100000 <-- Unoptimized
Doing PUD move for 2a00000 -> 5800000 of extent=200000
Doing PUD move for 2c00000 -> 5a00000 of extent=200000
</prints>
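For context, here is a toy sketch of the check the prints imply. The types
and the lookup helper are hypothetical stand-ins, not the kernel's (the real
code consults the mm's VMA tree); the function returns 1 ("detected
something, skip the realignment") to match the polarity of the prints above.

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy stand-in for a VMA; the kernel uses struct vm_area_struct. */
	struct toy_vma {
		unsigned long start, end;
		struct toy_vma *next;
	};

	/* Toy lookup: does any mapping overlap [lo, hi)? */
	static bool range_is_mapped(struct toy_vma *vmas, unsigned long lo,
				    unsigned long hi)
	{
		for (struct toy_vma *v = vmas; v; v = v->next)
			if (v->start < hi && v->end > lo)
				return true;
		return false;
	}

	/* Returns 1 when aligning @addr down with @mask must be avoided,
	 * matching the 0/1 values printed above. */
	static int realign_would_clobber(struct toy_vma *vmas,
					 struct toy_vma *vma,
					 unsigned long addr,
					 unsigned long mask)
	{
		unsigned long aligned = addr & mask;

		/* Realigning mid-VMA would move part of this mapping. */
		if (vma->start != addr)
			return 1;

		/* Anything mapped in [aligned, addr) would be destroyed
		 * by a page-table copy starting at the aligned address. */
		return range_is_mapped(vmas, aligned, addr) ? 1 : 0;
	}

	int main(void)
	{
		/* Mirror the second print block: a preamble page just
		 * below the destination at 0x5700000, PMD mask -0x200000. */
		struct toy_vma dest = { 0x5700000, 0x5c00000, NULL };
		struct toy_vma pre  = { 0x56ff000, 0x5700000, &dest };

		printf("%d\n", realign_would_clobber(&pre, &dest, 0x5700000,
						     ~(0x200000UL - 1)));
		return 0;	/* prints 1: preamble detected below dst */
	}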