linux/arch/x86/mm
Linus Torvalds 6014bc2756 x86-64: make access_ok() independent of LAM
The linear address masking (LAM) code made access_ok() more complicated,
in that it now needs to untag the address in order to verify the access
range.  See commit 74c228d20a ("x86/uaccess: Provide untagged_addr()
and remove tags before address check").

We were able to avoid that overhead in the get_user/put_user code paths
by simply using the sign bit for the address check, and relying on the
GP fault to catch non-canonical addresses, which made it all independent
of LAM.
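
Conceptually, that sign-bit check is just the following (a minimal C
sketch of the idea; the real get_user/put_user fast paths do the
equivalent in assembly):

    /*
     * User pointers live in the lower half of the address space, so
     * interpreted as a signed value they are non-negative.  Kernel and
     * non-canonical addresses have the high bit set, and a stray
     * access through them takes a #GP anyway.
     */
    #define valid_user_address(x)  ((long)(x) >= 0)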

And we can do the same thing for access_ok(): simply check that the user
pointer range has the high bit clear.  No need to bother with any
address bit masking.

In fact, we can go a bit further, and just check the starting address
for known small access ranges: any accesses that overflow will still
be in the non-canonical area and will still GP fault.
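
Put together, access_ok() can then look roughly like the sketch below
(it reuses the valid_user_address() helper sketched above; the
PAGE_SIZE cutoff for "known small" is illustrative):

    static inline bool __access_ok(const void __user *ptr, unsigned long size)
    {
        if (__builtin_constant_p(size) && size <= PAGE_SIZE) {
            /* Small constant-size access: the start address is enough */
            return valid_user_address(ptr);
        } else {
            /* Otherwise the whole range must stay in the low half */
            unsigned long sum = size + (unsigned long)ptr;

            return valid_user_address(sum) && sum >= (unsigned long)ptr;
        }
    }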

To still make syzkaller catch any potentially unchecked user addresses,
we'll continue to warn about GP faults that are caused by accesses in
the non-canonical range.  But we'll limit that to purely "high bit set
and past the one-page 'slop' area".
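
In the #GP fixup path that amounts to something like the sketch below
(just the warning condition, not the exact extable plumbing):

    static bool gp_fault_address_ok(unsigned long fault_address)
    {
        /* Plain bad user address (high bit clear): no warning needed */
        if (valid_user_address(fault_address))
            return true;

        /* Within the one-page 'slop' area just above the user half */
        if (valid_user_address(fault_address - PAGE_SIZE))
            return true;

        /* High bit set and past the slop page: warn, for syzkaller */
        return false;
    }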

We could probably just do that "check only starting address" for any
arbitrary range size: realistically all kernel accesses to user space
will be done starting at the low address.  But let's leave that kind of
optimization for later.  As it is, this already allows us to generate
simpler code and not worry about any tag bits in the address.

The one thing to look out for is the GUP address check: instead of
actually copying data in the virtual address range (and thus having bad
addresses caught by the GP fault), GUP will look up the page
tables manually.  As a result, the page table limits need to be checked,
and that was previously implicitly done by the access_ok().

With the relaxed access_ok() check, we need to just do an explicit check
for TASK_SIZE_MAX in the GUP code instead.  The GUP code already needs
to do the tag bit unmasking anyway, so this is all very
straightforward, and there are no LAM issues.
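
A sketch of that explicit bound (gup_range_ok() is a hypothetical
helper for illustration; the real check sits inline in the GUP fast
path):

    /*
     * GUP walks the page tables by hand, so a bad address never hits
     * a #GP fault; the untagged range has to be bounded against
     * TASK_SIZE_MAX directly instead of relying on access_ok().
     */
    static bool gup_range_ok(unsigned long start, unsigned long nr_pages)
    {
        unsigned long end;

        start = untagged_addr(start) & PAGE_MASK;
        if (check_add_overflow(start, nr_pages << PAGE_SHIFT, &end))
            return false;

        return end <= TASK_SIZE_MAX;
    }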

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-03 10:37:22 -07:00
pat - Nick Piggin's "shoot lazy tlbs" series, to improve the performance of 2023-04-27 19:42:02 -07:00
amdtopology.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
cpu_entry_area.c x86/mm: Do not shuffle CPU entry areas without KASLR 2023-03-22 10:42:47 -07:00
debug_pagetables.c x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules 2023-04-13 13:13:54 -07:00
dump_pagetables.c mm: don't include asm/pgtable.h if linux/mm.h is already included 2020-06-09 09:39:13 -07:00
extable.c x86-64: make access_ok() independent of LAM 2023-05-03 10:37:22 -07:00
fault.c x86/mm: try VMA lock-based page fault handling first 2023-04-05 20:03:01 -07:00
highmem_32.c x86/mm/highmem: Use generic kmap atomic implementation 2020-11-06 23:14:55 +01:00
hugetlbpage.c arch/x86/mm/hugetlbpage.c: pud_huge() returns 0 when using 2-level paging 2022-11-08 15:57:25 -08:00
ident_map.c x86/mm/ident_map: Check for errors from ident_pud_init() 2020-10-28 14:48:30 +01:00
init_32.c x86: mm: rename __is_kernel_text() to is_x86_32_kernel_text() 2021-11-09 10:02:51 -08:00
init_64.c mm/sparse-vmemmap: generalise vmemmap_populate_hugepages() 2022-12-11 18:12:12 -08:00
init.c Add support for new Linear Address Masking CPU feature. This is similar 2023-04-28 09:43:49 -07:00
iomap_32.c io-mapping: Cleanup atomic iomap 2020-11-06 23:14:58 +01:00
ioremap.c x86/ioremap: Add hypervisor callback for private MMIO mapping in coco VM 2023-03-26 23:42:40 +02:00
kasan_init_64.c x86/kasan: Populate shadow for shared chunk of the CPU entry area 2022-12-15 10:37:28 -08:00
kaslr.c x86: Fix various typos in comments 2021-03-18 15:31:53 +01:00
kmmio.c x86/mm/kmmio: Remove redundant preempt_disable() 2022-12-12 10:54:48 -05:00
kmsan_shadow.c x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
maccess.c x86: Share definition of __is_canonical_address() 2022-02-02 13:11:42 +01:00
Makefile x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
mem_encrypt_amd.c x86/mm: Handle decryption/re-encryption of bss_decrypted consistently 2023-03-27 09:23:21 +02:00
mem_encrypt_boot.S x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
mem_encrypt_identity.c x86/mm: Fix use of uninitialized buffer in sme_enable() 2023-03-16 12:22:25 +01:00
mem_encrypt.c virtio: replace arch_has_restricted_virtio_memory_access() 2022-06-06 08:22:01 +02:00
mm_internal.h x86/mm: thread pgprot_t through init_memory_mapping() 2020-04-10 15:36:21 -07:00
mmap.c x86/mm/mmap: Fix -Wmissing-prototypes warnings 2020-04-22 20:19:48 +02:00
mmio-mod.c x86: Replace cpumask_weight() with cpumask_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa_32.c x86/mm: Drop deprecated DISCONTIGMEM support for 32-bit 2020-05-28 18:34:30 +02:00
numa_64.c mm: remove include/linux/bootmem.h 2018-10-31 08:54:16 -07:00
numa_emulation.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa_internal.h
numa.c x86/numa: Use cpumask_available instead of hardcoded NULL check 2022-08-03 11:44:57 +02:00
pf_in.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 2019-05-30 11:26:35 -07:00
pf_in.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 2019-05-30 11:26:35 -07:00
pgprot.c x86/mm: move protection_map[] inside the platform 2022-07-17 17:14:38 -07:00
pgtable_32.c mm: remove unneeded includes of <asm/pgalloc.h> 2020-08-07 11:33:26 -07:00
pgtable.c mm/pgtable: Fix multiple -Wstringop-overflow warnings 2022-12-01 08:50:38 -08:00
physaddr.c mm, x86/mm: Untangle address space layout definitions from basic pgtable type definitions 2019-12-10 10:12:55 +01:00
physaddr.h
pkeys.c x86/pkeys: Clarify PKRU_AD_KEY macro 2022-06-07 16:06:33 -07:00
pti.c x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
srat.c
testmmiotrace.c remove ioremap_nocache and devm_ioremap_nocache 2020-01-06 09:45:59 +01:00
tlb.c Add support for new Linear Address Masking CPU feature. This is similar 2023-04-28 09:43:49 -07:00