903f433f8f
Patch series "arm64: untag user pointers passed to the kernel", v19. === Overview arm64 has a feature called Top Byte Ignore, which allows to embed pointer tags into the top byte of each pointer. Userspace programs (such as HWASan, a memory debugging tool [1]) might use this feature and pass tagged user pointers to the kernel through syscalls or other interfaces. Right now the kernel is already able to handle user faults with tagged pointers, due to these patches: 1.81cddd65
("arm64: traps: fix userspace cache maintenance emulation on a tagged pointer") 2.7dcd9dd8
("arm64: hw_breakpoint: fix watchpoint matching for tagged pointers") 3.276e9327
("arm64: entry: improve data abort handling of tagged pointers") This patchset extends tagged pointer support to syscall arguments. As per the proposed ABI change [3], tagged pointers are only allowed to be passed to syscalls when they point to memory ranges obtained by anonymous mmap() or sbrk() (see the patchset [3] for more details). For non-memory syscalls this is done by untaging user pointers when the kernel performs pointer checking to find out whether the pointer comes from userspace (most notably in access_ok). The untagging is done only when the pointer is being checked, the tag is preserved as the pointer makes its way through the kernel and stays tagged when the kernel dereferences the pointer when perfoming user memory accesses. The mmap and mremap (only new_addr) syscalls do not currently accept tagged addresses. Architectures may interpret the tag as a background colour for the corresponding vma. Other memory syscalls (mprotect, etc.) don't do user memory accesses but rather deal with memory ranges, and untagged pointers are better suited to describe memory ranges internally. Thus for memory syscalls we untag pointers completely when they enter the kernel. === Other approaches One of the alternative approaches to untagging that was considered is to completely strip the pointer tag as the pointer enters the kernel with some kind of a syscall wrapper, but that won't work with the countless number of different ioctl calls. With this approach we would need a custom wrapper for each ioctl variation, which doesn't seem practical. An alternative approach to untagging pointers in memory syscalls prologues is to inspead allow tagged pointers to be passed to find_vma() (and other vma related functions) and untag them there. Unfortunately, a lot of find_vma() callers then compare or subtract the returned vma start and end fields against the pointer that was being searched. Thus this approach would still require changing all find_vma() callers. === Testing The following testing approaches has been taken to find potential issues with user pointer untagging: 1. Static testing (with sparse [2] and separately with a custom static analyzer based on Clang) to track casts of __user pointers to integer types to find places where untagging needs to be done. 2. Static testing with grep to find parts of the kernel that call find_vma() (and other similar functions) or directly compare against vm_start/vm_end fields of vma. 3. Static testing with grep to find parts of the kernel that compare user pointers with TASK_SIZE or other similar consts and macros. 4. Dynamic testing: adding BUG_ON(has_tag(addr)) to find_vma() and running a modified syzkaller version that passes tagged pointers to the kernel. Based on the results of the testing the requried patches have been added to the patchset. === Notes This patchset is meant to be merged together with "arm64 relaxed ABI" [3]. This patchset is a prerequisite for ARM's memory tagging hardware feature support [4]. This patchset has been merged into the Pixel 2 & 3 kernel trees and is now being used to enable testing of Pixel phones with HWASan. Thanks! [1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html [2]5f960cb10f
[3] https://lkml.org/lkml/2019/6/12/745 [4] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a This patch (of 11) This patch is a part of a series that extends kernel ABI to allow to pass tagged user pointers (with the top byte set to something else other than 0x00) as syscall arguments. strncpy_from_user and strnlen_user accept user addresses as arguments, and do not go through the same path as copy_from_user and others, so here we need to handle the case of tagged user addresses separately. Untag user pointers passed to these functions. Note, that this patch only temporarily untags the pointers to perform validity checks, but then uses them as is to perform user memory accesses. [andreyknvl@google.com: fix sparc4 build] Link: http://lkml.kernel.org/r/CAAeHK+yx4a-P0sDrXTUxMvO2H0CJZUFPffBrg_cU7oJOZyC7ew@mail.gmail.com Link: http://lkml.kernel.org/r/c5a78bcad3e94d6cda71fcaa60a423231ae71e4c.1563904656.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com> Acked-by: Kees Cook <keescook@chromium.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Eric Auger <eric.auger@redhat.com> Cc: Felix Kuehling <Felix.Kuehling@amd.com> Cc: Jens Wiklander <jens.wiklander@linaro.org> Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
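To make the tagging scheme concrete, here is a minimal 64-bit user-space sketch (the tag_pointer()/untag_pointer() helpers and the 0x2a tag value are invented for illustration; this is not code from the series). With Top Byte Ignore, bits 63:56 of a userspace pointer are ignored on loads and stores, so a tag can be stored there, but it has to be masked off again before any address-range comparison of the kind access_ok() performs:

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	56
#define TAG_MASK	(0xffULL << TAG_SHIFT)

/* Illustrative helper: pack an 8-bit tag into the TBI-ignored top byte. */
static void *tag_pointer(void *ptr, uint8_t tag)
{
	return (void *)(((uint64_t)ptr & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT));
}

/* Illustrative helper: what an untagging step conceptually does. */
static void *untag_pointer(void *ptr)
{
	return (void *)((uint64_t)ptr & ~TAG_MASK);
}

int main(void)
{
	char buf[16] = "hello";
	char *tagged = tag_pointer(buf, 0x2a);

	/*
	 * On arm64 with TBI, loads and stores through 'tagged' behave like
	 * loads and stores through 'buf'; a range check, however, must
	 * compare the untagged value.
	 */
	printf("tagged:   %p\n", (void *)tagged);
	printf("untagged: %p\n", untag_pointer(tagged));
	printf("original: %p\n", (void *)buf);
	return 0;
}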
lib/strnlen_user.c · 127 lines · 3.5 KiB · C
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/uaccess.h>
#include <linux/mm.h>

#include <asm/word-at-a-time.h>

/* Set bits in the first 'n' bytes when loaded from memory */
#ifdef __LITTLE_ENDIAN
# define aligned_byte_mask(n) ((1ul << 8*(n))-1)
#else
# define aligned_byte_mask(n) (~0xfful << (BITS_PER_LONG - 8 - 8*(n)))
#endif
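/*
 * Example: on a 64-bit little-endian build, aligned_byte_mask(3) is
 * 0x0000000000ffffff. do_strnlen_user() below ORs this into the first
 * word it loads, so the 'align' bytes that sit before the real start of
 * the string can never be mistaken for the terminating NUL.
 */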
/*
 * Do a strnlen, return length of string *with* final '\0'.
 * 'count' is the user-supplied count, while 'max' is the
 * address space maximum.
 *
 * Return 0 for exceptions (which includes hitting the address
 * space maximum), or 'count+1' if hitting the user-supplied
 * maximum count.
 *
 * NOTE! We can sometimes overshoot the user-supplied maximum
 * if it fits in an aligned 'long'. The caller needs to check
 * the return value against "> max".
 */
static inline long do_strnlen_user(const char __user *src, unsigned long count, unsigned long max)
{
	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
	unsigned long align, res = 0;
	unsigned long c;

	/*
	 * Truncate 'max' to the user-specified limit, so that
	 * we only have one limit we need to check in the loop
	 */
	if (max > count)
		max = count;

	/*
	 * Do everything aligned. But that means that we
	 * need to also expand the maximum..
	 */
	align = (sizeof(unsigned long) - 1) & (unsigned long)src;
	src -= align;
	max += align;

	unsafe_get_user(c, (unsigned long __user *)src, efault);
	c |= aligned_byte_mask(align);
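	/*
	 * Scan one aligned word per iteration: has_zero() reports whether
	 * the word just loaded contains a NUL byte, and find_zero() then
	 * locates that byte within the word.
	 */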
	for (;;) {
		unsigned long data;
		if (has_zero(c, &data, &constants)) {
			data = prep_zero_mask(c, data, &constants);
			data = create_zero_mask(data);
			return res + find_zero(data) + 1 - align;
		}
		res += sizeof(unsigned long);
		/* We already handled 'unsigned long' bytes. Did we do it all ? */
		if (unlikely(max <= sizeof(unsigned long)))
			break;
		max -= sizeof(unsigned long);
		unsafe_get_user(c, (unsigned long __user *)(src+res), efault);
	}
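	/* 'res' counted from the aligned start; drop the alignment bytes again. */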
	res -= align;

	/*
	 * Uhhuh. We hit 'max'. But was that the user-specified maximum
	 * too? If so, return the marker for "too long".
	 */
	if (res >= count)
		return count+1;

	/*
	 * Nope: we hit the address space limit, and we still had more
	 * characters the caller would have wanted. That's 0.
	 */
efault:
	return 0;
}
/**
 * strnlen_user: - Get the size of a user string INCLUDING final NUL.
 * @str: The string to measure.
 * @count: Maximum count (including NUL character)
 *
 * Context: User context only. This function may sleep if pagefaults are
 *          enabled.
 *
 * Get the size of a NUL-terminated string in user space.
 *
 * Returns the size of the string INCLUDING the terminating NUL.
 * If the string is too long, returns a number larger than @count. User
 * has to check the return value against "> count".
 * On exception (or invalid count), returns 0.
 *
 * NOTE! You should basically never use this function. There is
 * almost never any valid case for using the length of a user space
 * string, since the string can be changed at any time by other
 * threads. Use "strncpy_from_user()" instead to get a stable copy
 * of the string.
 */
long strnlen_user(const char __user *str, long count)
{
	unsigned long max_addr, src_addr;

	if (unlikely(count <= 0))
		return 0;

	max_addr = user_addr_max();
	src_addr = (unsigned long)untagged_addr(str);
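	/*
	 * The tag is stripped only for the range check below; the original,
	 * possibly tagged 'str' is what gets passed to user_access_begin()
	 * and dereferenced by do_strnlen_user().
	 */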
	if (likely(src_addr < max_addr)) {
		unsigned long max = max_addr - src_addr;
		long retval;

		if (user_access_begin(str, max)) {
			retval = do_strnlen_user(str, count, max);
			user_access_end();
			return retval;
		}
	}
	return 0;
}
EXPORT_SYMBOL(strnlen_user);
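As a usage note on the kernel-doc warning above, here is a minimal sketch of the recommended caller pattern (the example_copy_name() helper and its 64-byte buffer are hypothetical, not part of this patch): copy the user string with strncpy_from_user() instead of measuring it with strnlen_user(). With this series applied, both helpers untag the address before their range checks, so a tagged pointer coming from userspace works unchanged.

#include <linux/uaccess.h>
#include <linux/errno.h>
#include <linux/printk.h>

/* Hypothetical helper: copy a possibly tagged, user-supplied name. */
static int example_copy_name(const char __user *uname)
{
	char kname[64];
	long len;

	len = strncpy_from_user(kname, uname, sizeof(kname));
	if (len < 0)
		return len;		/* -EFAULT: bad user address */
	if (len == sizeof(kname))
		return -ENAMETOOLONG;	/* no NUL within the buffer */

	pr_info("name: %s\n", kname);
	return 0;
}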