commit e9adcfecf5
Author: Mike Kravetz <mike.kravetz@oracle.com>
Date:   2023-01-18 17:12:55 -08:00

mm: remove zap_page_range and create zap_vma_pages
zap_page_range was originally designed to unmap pages within an address
range that could span multiple vmas.  While working on [1], it was
discovered that all callers of zap_page_range pass a range entirely within
a single vma.  In addition, the mmu notification call within
zap_page_range does not correctly handle ranges that span multiple vmas.  When
crossing a vma boundary, a new mmu_notifier_range_init/end call pair with
the new vma should be made.
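
To make the notifier problem concrete, here is a minimal sketch, not the
verbatim mm/memory.c code: notifier_begin()/notifier_end(), unmap_one_vma()
and next_vma() are hypothetical stand-ins for the real
mmu_notifier_range_init()/mmu_notifier_invalidate_range_start()/..._end()
sequence and the vma iteration helpers, whose exact signatures vary across
kernel versions.

/* Hypothetical stand-ins; see the note above. */
static void notifier_begin(struct vm_area_struct *vma,
			   unsigned long start, unsigned long end);
static void notifier_end(void);
static void unmap_one_vma(struct vm_area_struct *vma,
			  unsigned long start, unsigned long end);
static struct vm_area_struct *next_vma(struct vm_area_struct *vma);

/* Broken shape: one notifier pair stretched across a multi-vma walk. */
static void zap_spanning_range_sketch(struct vm_area_struct *vma,
				      unsigned long start, unsigned long end)
{
	notifier_begin(vma, start, end);	/* tied to the first vma only */
	do {
		unmap_one_vma(vma, start, end);
		vma = next_vma(vma);
	} while (vma && vma->vm_start < end);
	notifier_end();
	/* Bug: vmas after the first are never announced to notifiers. */
}

/* Correct shape: a fresh notifier_begin/notifier_end pair per vma. */
static void zap_one_vma_sketch(struct vm_area_struct *vma,
			       unsigned long start, unsigned long end)
{
	notifier_begin(vma, start, end);
	unmap_one_vma(vma, start, end);
	notifier_end();
}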

Instead of fixing zap_page_range, do the following:
- Create a new routine zap_vma_pages() that will remove all pages within
  the passed vma.  Most users of zap_page_range pass the entire vma and
  can use this new routine (a sketch follows this list).
- For callers of zap_page_range not passing the entire vma, instead call
  zap_page_range_single().
- Remove zap_page_range.
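
For illustration, a plausible shape for the new helper, assuming it simply
wraps zap_page_range_single() over the vma's full extent as described above
(a sketch, not the patch's verbatim diff; the NULL assumes the final
struct zap_details argument is unused for the whole-vma case):

static inline void zap_vma_pages(struct vm_area_struct *vma)
{
	zap_page_range_single(vma, vma->vm_start,
			      vma->vm_end - vma->vm_start, NULL);
}

A whole-vma caller would then convert along these lines:

-	zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+	zap_vma_pages(vma);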

[1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@oracle.com/
Link: https://lkml.kernel.org/r/20230104002732.232573-1-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>	[s390]
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>