mm/memory: don't require head page for do_set_pmd()
The requirement that the head page be passed to do_set_pmd() was added in commit ef37b2ea08 ("mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()") and prevents pmd-mapping in the finish_fault() and filemap_map_pages() paths if the page to be inserted is anything but the head page for an otherwise suitable vma and pmd-sized page.

Matthew said:

: We're going to stop using PMDs to map large folios unless the fault is
: within the first 4KiB of the PMD.  No idea how many workloads that
: affects, but it only needs to be backported as far as v6.8, so we may
: as well backport it.

Link: https://lkml.kernel.org/r/20240611153216.2794513-1-abrestic@rivosinc.com
Fixes: ef37b2ea08 ("mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()")
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
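The effect of the change can be sketched as a minimal userspace model (this is not kernel code: the struct layouts, HPAGE_PMD_ORDER value, and function names below are simplified assumptions for illustration). Before the fix, do_set_pmd() rejected any page that was not the folio's head page; after it, only the folio order is checked and the head page is derived from the folio itself:

```c
#include <assert.h>
#include <stdbool.h>

#define HPAGE_PMD_ORDER 9 /* assumption: 2 MiB PMD with 4 KiB base pages */

struct page { int idx; };
struct folio {
	struct page pages[1 << HPAGE_PMD_ORDER]; /* pages[0] models the head page */
	int order;
};

/* Old check: bail out unless the caller passed the head page itself. */
static bool old_check_ok(struct folio *folio, struct page *page)
{
	return page == &folio->pages[0] && folio->order == HPAGE_PMD_ORDER;
}

/* New check: any subpage of a suitably sized folio is acceptable;
 * the head page is derived from the folio afterwards. */
static bool new_check_ok(struct folio *folio, struct page *page)
{
	(void)page; /* which subpage faulted no longer matters */
	return folio->order == HPAGE_PMD_ORDER;
}
```

With the old check, a fault on any tail page of an otherwise pmd-suitable folio fell back to pte-mapping; with the new check it can still be pmd-mapped.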
parent bf14ed81f5
commit ab1ffc86cb
@@ -4608,8 +4608,9 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
 		return ret;
 
-	if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
+	if (folio_order(folio) != HPAGE_PMD_ORDER)
 		return ret;
+	page = &folio->page;
 
 	/*
 	 * Just backoff if any subpage of a THP is corrupted otherwise