mm: rmap: fix CONT-PTE/PMD size hugetlb issue when unmapping
Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb pages, which means they support not only PMD/PUD size hugetlb (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M) when a 4K base page size is used.

When unmapping a hugetlb page, we currently look up the relevant page table entry with huge_pte_offset() only once and nuke just that entry. This is correct for PMD or PUD size hugetlb, since they always occupy a single pmd or pud entry in the page table. However, it is incorrect for CONT-PTE and CONT-PMD size hugetlb, which span several contiguous pte or pmd entries with the same page table attributes, so we end up nuking only one of those entries.

Moreover, try_to_unmap() is only passed a hugetlb page in the case where the hugetlb page is poisoned. That means we currently unmap only one pte entry of a poisoned CONT-PTE or CONT-PMD size hugetlb page, and the other subpages of the poisoned page remain accessible, which can cause serious problems.

So switch to huge_ptep_clear_flush() to nuke the hugetlb page table entries, which already handles CONT-PTE and CONT-PMD size hugetlb. We already use set_huge_swap_pte_at() to set a poisoned swap entry for a poisoned hugetlb page. Also add a VM_BUG_ON() to make sure the hugetlb page passed to try_to_unmap() is indeed poisoned.

Link: https://lkml.kernel.org/r/0a2e547238cad5bc153a85c3e9658cb9d55f9cac.1652270205.git.baolin.wang@linux.alibaba.com
Link: https://lkml.kernel.org/r/730ea4b6d292f32fb10b7a4e87dad49b0eb30474.1652147571.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
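For illustration only, here is a minimal userspace C sketch (not kernel code; the mock types and helpers are made up for this example) of why clearing a single entry is not enough: a 64K CONT-PTE hugetlb page on ARM64 with 4K base pages is mapped by 16 contiguous PTEs, so nuking only the first entry leaves the other 15 translations to the poisoned page alive, whereas a huge_ptep_clear_flush()-style helper clears the whole contiguous run.

/*
 * Illustrative model only -- not kernel code.  A 64K CONT-PTE hugetlb
 * mapping is represented as 16 mock PTE slots; clearing one slot models
 * the old single-entry nuke, clearing the whole run models what
 * huge_ptep_clear_flush() is expected to do for contiguous mappings.
 */
#include <stdio.h>
#include <stdbool.h>

#define CONT_PTES 16	/* 64K hugetlb page / 4K base pages */

struct mock_pte { bool present; };

/* Old behaviour: nuke only a single page table entry. */
static void clear_one_pte(struct mock_pte *ptep)
{
	ptep->present = false;
}

/* Fixed behaviour: clear every entry of the contiguous run. */
static void clear_contig_range(struct mock_pte *ptep, int nr)
{
	for (int i = 0; i < nr; i++)
		ptep[i].present = false;
}

static int live_entries(const struct mock_pte *ptes, int nr)
{
	int n = 0;

	for (int i = 0; i < nr; i++)
		n += ptes[i].present;
	return n;
}

int main(void)
{
	struct mock_pte ptes[CONT_PTES];

	for (int i = 0; i < CONT_PTES; i++)
		ptes[i].present = true;
	clear_one_pte(&ptes[0]);
	printf("single clear leaves %d live entries\n",
	       live_entries(ptes, CONT_PTES));	/* 15: page still reachable */

	for (int i = 0; i < CONT_PTES; i++)
		ptes[i].present = true;
	clear_contig_range(ptes, CONT_PTES);
	printf("contiguous clear leaves %d live entries\n",
	       live_entries(ptes, CONT_PTES));	/* 0: fully unmapped */
	return 0;
}

Running this prints 15 live entries after the single clear and 0 after the contiguous clear, mirroring the before/after behaviour described above.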
parent 5d4af6195c
commit a00a875925

 mm/rmap.c | 39
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1527,6 +1527,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 PageAnonExclusive(subpage);
 
 		if (folio_test_hugetlb(folio)) {
+			/*
+			 * The try_to_unmap() is only passed a hugetlb page
+			 * in the case where the hugetlb page is poisoned.
+			 */
+			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1562,28 +1567,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					break;
 				}
 			}
+			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
-		}
-
-		/*
-		 * Nuke the page table entry. When having to clear
-		 * PageAnonExclusive(), we always have to flush.
-		 */
-		if (should_defer_flush(mm, flags) && !anon_exclusive) {
 			/*
-			 * We clear the PTE but do not flush so potentially
-			 * a remote CPU could still be writing to the folio.
-			 * If the entry was previously clean then the
-			 * architecture must guarantee that a clear->dirty
-			 * transition on a cached TLB entry is written through
-			 * and traps if the PTE is unmapped.
+			 * Nuke the page table entry. When having to clear
+			 * PageAnonExclusive(), we always have to flush.
 			 */
-			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+			if (should_defer_flush(mm, flags) && !anon_exclusive) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
-		} else {
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/*