mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
If swapin code would ever decide to not use order-0 pages and supply a
PTE-mapped large folio, we will have to change how we call
__folio_set_anon() -- eventually with exclusive=false and an adjusted
address. For now, let's add a VM_WARN_ON_FOLIO() with a comment about
the situation.

Link: https://lkml.kernel.org/r/20230913125113.313322-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit a1f34ee1de
parent c5c5400347
mm/rmap.c
@@ -1238,6 +1238,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	if (unlikely(!folio_test_anon(folio))) {
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		/*
+		 * For a PTE-mapped large folio, we only know that the single
+		 * PTE is exclusive. Further, __folio_set_anon() might not get
+		 * folio->index right when not given the address of the head
+		 * page.
+		 */
+		VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
 		__folio_set_anon(folio, vma, address,
 				 !!(flags & RMAP_EXCLUSIVE));
 	} else if (likely(!folio_test_ksm(folio))) {
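For illustration only, and not part of this commit: the message above says
that a PTE-mapped large folio reaching this path would eventually require
calling __folio_set_anon() with exclusive=false and an adjusted address.
A minimal sketch of what that might look like follows; the use of
folio_page_idx() to rewind from the mapped subpage to the head page's
address is an assumption about how the adjustment could be computed, not
the author's stated plan.

	/*
	 * Hypothetical sketch, assuming swapin one day supplies a
	 * PTE-mapped large folio here.
	 */
	if (unlikely(!folio_test_anon(folio))) {
		/* Rewind to the address at which the head page is mapped. */
		unsigned long head_addr = address -
			(folio_page_idx(folio, page) << PAGE_SHIFT);

		/*
		 * A single exclusive PTE says nothing about the other
		 * pages of the folio, so the folio as a whole cannot be
		 * marked exclusive.
		 */
		__folio_set_anon(folio, vma, head_addr, false);
	}

Until such a caller exists, the VM_WARN_ON_FOLIO() added by this commit
simply documents and asserts that assumption.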