linux/arch/x86/mm
Rick Edgecombe e5136e8765 mm: Warn on shadow stack memory in wrong vma
The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

One sharp edge is that PTEs that are both Write=0 and Dirty=1 are
treated as shadow stack by the CPU, but this combination used to be
created by the kernel on x86. Previous patches have changed the kernel
so that it now avoids creating these PTEs unless they are for shadow
stack memory. In case any
missed corners of the kernel are still creating PTEs like this for
non-shadow stack memory, and to catch any re-introductions of the logic,
warn if any shadow stack PTEs (Write=0, Dirty=1) are found in non-shadow
stack VMAs when they are being zapped. This won't catch transient cases
but should have decent coverage.
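
As an illustration only, the zapped-PTE check could look roughly like the
sketch below. VM_SHADOW_STACK is assumed here as the name of a vma flag
marking shadow stack mappings; it is an assumption of the sketch, not
something defined in this text:

  /*
   * Sketch: warn if a PTE with the shadow stack encoding (Write=0,
   * Dirty=1) is being zapped from a VMA that was never marked as
   * shadow stack. VM_SHADOW_STACK is an assumed flag name.
   */
  static void check_zapped_pte_sketch(struct vm_area_struct *vma, pte_t pte)
  {
          VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
                          !pte_write(pte) && pte_dirty(pte));
  }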

In order to check if a PTE is shadow stack in core mm code, add two arch
breakouts arch_check_zapped_pte/pmd(). This will allow shadow stack
specific code to be kept in arch/x86.
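
One common way to wire up such a breakout, shown here only as a sketch
(the exact override mechanism is an assumption, not quoted from the
patch), is a no-op generic fallback that an architecture can replace with
its own check, called by core mm for each entry it tears down:

  /* Generic fallback in a shared pgtable header: does nothing. */
  #ifndef arch_check_zapped_pte
  static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
                                           pte_t pte)
  {
  }
  #endif

  /*
   * arch/x86 can then provide its own arch_check_zapped_pte()/_pmd()
   * containing the shadow stack sanity check, while the core mm zap
   * paths simply call the hook for every PTE/PMD they clear.
   */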

Only do the check if shadow stack is supported by the CPU and configured
because in rare cases older CPUs may set Dirty=1 on a Write=0 PTE. This
check is handled in pte_shstk()/pmd_shstk().
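
For reference, a minimal sketch of how such a helper could gate the check
on hardware support. X86_FEATURE_SHSTK and the exact flag test are
assumptions of this sketch rather than text quoted from the patch:

  static inline bool pte_shstk_sketch(pte_t pte)
  {
          /* On hardware without shadow stacks, never report a match. */
          if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
                  return false;

          /* Shadow stack encoding: Write=0 (_PAGE_RW clear) and Dirty=1. */
          return (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
  }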

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/all/20230613001108.3040476-18-rick.p.edgecombe%40intel.com
2023-07-11 14:12:19 -07:00
pat x86/mm: Remove _PAGE_DIRTY from kernel RO pages 2023-07-11 14:12:19 -07:00
amdtopology.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
cpu_entry_area.c x86/mm: Do not shuffle CPU entry areas without KASLR 2023-03-22 10:42:47 -07:00
debug_pagetables.c x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules 2023-04-13 13:13:54 -07:00
dump_pagetables.c mm: don't include asm/pgtable.h if linux/mm.h is already included 2020-06-09 09:39:13 -07:00
extable.c x86-64: mm: clarify the 'positive addresses' user address rules 2023-05-03 10:37:22 -07:00
fault.c x86/mm: Check shadow stack page fault errors 2023-07-11 14:12:19 -07:00
highmem_32.c x86/mm: Include asm/numa.h for set_highmem_pages_init() 2023-05-18 11:56:18 -07:00
hugetlbpage.c arch/x86/mm/hugetlbpage.c: pud_huge() returns 0 when using 2-level paging 2022-11-08 15:57:25 -08:00
ident_map.c x86/mm/ident_map: Check for errors from ident_pud_init() 2020-10-28 14:48:30 +01:00
init_32.c x86/mm: Remove Xen-PV leftovers from init_32.c 2023-06-09 11:00:21 +02:00
init_64.c mm/sparse-vmemmap: generalise vmemmap_populate_hugepages() 2022-12-11 18:12:12 -08:00
init.c x86/mm: Avoid incomplete Global INVLPG flushes 2023-05-17 08:55:02 -07:00
iomap_32.c io-mapping: Cleanup atomic iomap 2020-11-06 23:14:58 +01:00
ioremap.c x86/ioremap: Add hypervisor callback for private MMIO mapping in coco VM 2023-03-26 23:42:40 +02:00
kasan_init_64.c x86/kasan: Populate shadow for shared chunk of the CPU entry area 2022-12-15 10:37:28 -08:00
kaslr.c x86/mm: Avoid using set_pgd() outside of real PGD pages 2023-06-16 11:46:42 -07:00
kmmio.c x86/mm/kmmio: Remove redundant preempt_disable() 2022-12-12 10:54:48 -05:00
kmsan_shadow.c x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
maccess.c x86: Share definition of __is_canonical_address() 2022-02-02 13:11:42 +01:00
Makefile x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
mem_encrypt_amd.c - Fix a race window where load_unaligned_zeropad() could cause 2023-06-26 16:32:47 -07:00
mem_encrypt_boot.S x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
mem_encrypt_identity.c - Yosry Ahmed brought back some cgroup v1 stats in OOM logs. 2023-06-28 10:28:11 -07:00
mem_encrypt.c virtio: replace arch_has_restricted_virtio_memory_access() 2022-06-06 08:22:01 +02:00
mm_internal.h x86/mm: thread pgprot_t through init_memory_mapping() 2020-04-10 15:36:21 -07:00
mmap.c x86/mm/mmap: Fix -Wmissing-prototypes warnings 2020-04-22 20:19:48 +02:00
mmio-mod.c x86: Replace cpumask_weight() with cpumask_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa_32.c x86/mm: Drop deprecated DISCONTIGMEM support for 32-bit 2020-05-28 18:34:30 +02:00
numa_64.c
numa_emulation.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa_internal.h
numa.c x86/numa: Use cpumask_available instead of hardcoded NULL check 2022-08-03 11:44:57 +02:00
pf_in.c
pf_in.h
pgprot.c x86/mm: move protection_map[] inside the platform 2022-07-17 17:14:38 -07:00
pgtable_32.c mm: remove unneeded includes of <asm/pgalloc.h> 2020-08-07 11:33:26 -07:00
pgtable.c mm: Warn on shadow stack memory in wrong vma 2023-07-11 14:12:19 -07:00
physaddr.c mm, x86/mm: Untangle address space layout definitions from basic pgtable type definitions 2019-12-10 10:12:55 +01:00
physaddr.h
pkeys.c x86/pkeys: Clarify PKRU_AD_KEY macro 2022-06-07 16:06:33 -07:00
pti.c x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
srat.c
testmmiotrace.c remove ioremap_nocache and devm_ioremap_nocache 2020-01-06 09:45:59 +01:00
tlb.c Add support for new Linear Address Masking CPU feature. This is similar 2023-04-28 09:43:49 -07:00