Merge tag 'hardening-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull kernel hardening updates from Kees Cook:

 - Convert flexible array members, fix -Wstringop-overflow warnings, and
   fix KCFI function type mismatches that went ignored by maintainers
   (Gustavo A. R. Silva, Nathan Chancellor, Kees Cook)

 - Remove the remaining side-effect users of ksize() by converting
   dma-buf, btrfs, and coredump to kmalloc_size_roundup(), add more
   __alloc_size attributes, and introduce full testing of all allocator
   functions. Finally remove the ksize() side-effect so that each
   allocation-aware checker can behave without exceptions (see the
   sketch after this list)

 - Introduce oops_limit (default 10,000) and warn_limit (default off) to
   provide greater granularity of control for panic_on_oops and
   panic_on_warn (Jann Horn, Kees Cook)

 - Introduce overflows_type() and castable_to_type() helpers for cleaner
   overflow checking

 - Improve code generation for strscpy() and update str*() kern-doc

 - Convert strscpy and siphash tests to KUnit, and expand memcpy tests

 - Always use a non-NULL argument for prepare_kernel_cred()

 - Disable structleak plugin in FORTIFY KUnit test (Anders Roxell)

 - Adjust orphan linker section checking to respect CONFIG_WERROR (Xin
   Li)

 - Make sure siginfo is cleared for forced SIGKILL (haifeng.xu)

 - Fix um vs FORTIFY warnings for always-NULL arguments
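
A minimal sketch of the ksize() conversion pattern referenced above, using a
hypothetical grow_buffer() helper and struct buf (not one of the converted
call sites):

    #include <linux/slab.h>

    struct buf {
        char *data;
        size_t len;    /* usable capacity, not just the requested size */
    };

    static int grow_buffer(struct buf *b, size_t want)
    {
        char *p;

        /*
         * Ask for the full kmalloc bucket up front instead of calling
         * ksize() afterwards: with ksize() now reporting-only, the
         * slack bytes must be requested explicitly to be usable.
         */
        want = kmalloc_size_roundup(want);
        p = krealloc(b->data, want, GFP_KERNEL);
        if (!p)
            return -ENOMEM;
        b->data = p;
        b->len = want;    /* record the rounded-up capacity */
        return 0;
    }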

* tag 'hardening-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (31 commits)
  ksmbd: replace one-element arrays with flexible-array members
  hpet: Replace one-element array with flexible-array member
  um: virt-pci: Avoid GCC non-NULL warning
  signal: Initialize the info in ksignal
  lib: fortify_kunit: build without structleak plugin
  panic: Expose "warn_count" to sysfs
  panic: Introduce warn_limit
  panic: Consolidate open-coded panic_on_warn checks
  exit: Allow oops_limit to be disabled
  exit: Expose "oops_count" to sysfs
  exit: Put an upper limit on how often we can oops
  panic: Separate sysctl logic from CONFIG_SMP
  mm/pgtable: Fix multiple -Wstringop-overflow warnings
  mm: Make ksize() a reporting-only function
  kunit/fortify: Validate __alloc_size attribute results
  drm/sti: Fix return type of sti_{dvo,hda,hdmi}_connector_mode_valid()
  drm/fsl-dcu: Fix return type of fsl_dcu_drm_connector_mode_valid()
  driver core: Add __alloc_size hint to devm allocators
  overflow: Introduce overflows_type() and castable_to_type()
  coredump: Proactively round up to kmalloc bucket size
  ...
Linus Torvalds 2022-12-14 12:20:00 -08:00
commit 48ea09cdda
61 changed files with 1533 additions and 463 deletions

@ -0,0 +1,6 @@
What: /sys/kernel/oops_count
Date: November 2022
KernelVersion: 6.2.0
Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
Description:
Shows how many times the system has Oopsed since last boot.

@ -0,0 +1,6 @@
What: /sys/kernel/warn_count
Date: November 2022
KernelVersion: 6.2.0
Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
Description:
Shows how many times the system has Warned since last boot.

@ -670,6 +670,15 @@ This is the default behavior.
an oops event is detected.
oops_limit
==========
Number of kernel oopses after which the kernel should panic when
``panic_on_oops`` is not set. Setting this to 0 disables checking
the count. Setting this to 1 has the same effect as setting
``panic_on_oops=1``. The default value is 10000.
osrelease, ostype & version
===========================
@ -1526,6 +1535,16 @@ entry will default to 2 instead of 0.
2 Unprivileged calls to ``bpf()`` are disabled
= =============================================================
warn_limit
==========
Number of kernel warnings after which the kernel should panic when
``panic_on_warn`` is not set. Setting this to 0 disables checking
the warning count. Setting this to 1 has the same effect as setting
``panic_on_warn=1``. The default value is 0.
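
Both limits share one counting rule; a minimal C sketch of the semantics
documented above (illustrative only, with a helper name of our own; the
kernel's real checks appear in the kernel/exit.c and kernel/panic.c hunks
later in this page):

    /*
     * Sketch of the documented rule for oops_limit and warn_limit:
     * a limit of 0 disables the check; otherwise the system panics
     * once the running event count reaches the limit, so a limit of
     * 1 behaves like panic_on_oops=1 / panic_on_warn=1.
     */
    static bool limit_reached(unsigned int count, unsigned int limit)
    {
        return limit && count >= limit;
    }
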
watchdog
========

@ -36,6 +36,9 @@ String Conversions
String Manipulation
-------------------
.. kernel-doc:: include/linux/fortify-string.h
:internal:
.. kernel-doc:: lib/string.c
:export:

@ -8105,6 +8105,8 @@ S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
F: include/linux/fortify-string.h
F: lib/fortify_kunit.c
F: lib/memcpy_kunit.c
F: lib/strscpy_kunit.c
F: lib/test_fortify/*
F: scripts/test_fortify.sh
K: \b__NO_FORTIFY\b
@ -11208,6 +11210,8 @@ M: Kees Cook <keescook@chromium.org>
L: linux-hardening@vger.kernel.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
F: Documentation/ABI/testing/sysfs-kernel-oops_count
F: Documentation/ABI/testing/sysfs-kernel-warn_count
F: include/linux/overflow.h
F: include/linux/randomize_kstack.h
F: mm/usercopy.c
@ -19050,7 +19054,7 @@ M: Jason A. Donenfeld <Jason@zx2c4.com>
S: Maintained
F: include/linux/siphash.h
F: lib/siphash.c
F: lib/test_siphash.c
F: lib/siphash_kunit.c
SIS 190 ETHERNET DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>

@ -1120,7 +1120,7 @@ endif
# We never want expected sections to be placed heuristically by the
# linker. All sections should be explicitly named in the linker script.
ifdef CONFIG_LD_ORPHAN_WARN
LDFLAGS_vmlinux += --orphan-handling=warn
LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
endif
# Align the bit size of userspace programs with the kernel

@ -124,7 +124,7 @@ LDFLAGS_vmlinux += --no-undefined
LDFLAGS_vmlinux += -X
# Report orphan sections
ifdef CONFIG_LD_ORPHAN_WARN
LDFLAGS_vmlinux += --orphan-handling=warn
LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
endif
# Next argument is a linker script
LDFLAGS_vmlinux += -T

@ -27,7 +27,7 @@ ldflags-y := -shared -soname=linux-vdso.so.1 --hash-style=sysv \
-Bsymbolic --build-id=sha1 -n $(btildflags-y)
ifdef CONFIG_LD_ORPHAN_WARN
ldflags-y += --orphan-handling=warn
ldflags-y += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
endif
ldflags-y += -T

@ -104,7 +104,7 @@ VDSO_AFLAGS += -D__ASSEMBLY__
VDSO_LDFLAGS += -Bsymbolic --no-undefined -soname=linux-vdso.so.1
VDSO_LDFLAGS += -z max-page-size=4096 -z common-page-size=4096
VDSO_LDFLAGS += -shared --hash-style=sysv --build-id=sha1
VDSO_LDFLAGS += --orphan-handling=warn
VDSO_LDFLAGS += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
# Borrow vdsomunge.c from the arm vDSO

@ -97,7 +97,8 @@ static int um_pci_send_cmd(struct um_pci_device *dev,
}
buf = get_cpu_var(um_pci_msg_bufs);
memcpy(buf, cmd, cmd_size);
if (buf)
memcpy(buf, cmd, cmd_size);
if (posted) {
u8 *ncmd = kmalloc(cmd_size + extra_size, GFP_ATOMIC);
@ -182,6 +183,7 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
struct um_pci_message_buffer *buf;
u8 *data;
unsigned long ret = ULONG_MAX;
size_t bytes = sizeof(buf->data);
if (!dev)
return ULONG_MAX;
@ -189,7 +191,8 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
buf = get_cpu_var(um_pci_msg_bufs);
data = buf->data;
memset(buf->data, 0xff, sizeof(buf->data));
if (buf)
memset(data, 0xff, bytes);
switch (size) {
case 1:
@ -204,7 +207,7 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
goto out;
}
if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, 8))
if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, bytes))
goto out;
switch (size) {

@ -68,7 +68,7 @@ KBUILD_LDFLAGS += $(call ld-option,--no-ld-generated-unwind-info)
# address by the bootloader.
LDFLAGS_vmlinux := -pie $(call ld-option, --no-dynamic-linker)
ifdef CONFIG_LD_ORPHAN_WARN
LDFLAGS_vmlinux += --orphan-handling=warn
LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
endif
LDFLAGS_vmlinux += -z noexecstack
ifeq ($(CONFIG_LD_IS_BFD),y)

@ -299,9 +299,6 @@ static void pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmds[])
pud_t *pud;
int i;
if (PREALLOCATED_PMDS == 0) /* Work around gcc-3.4.x bug */
return;
p4d = p4d_offset(pgd, 0);
pud = pud_offset(p4d, 0);
@ -434,10 +431,12 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
mm->pgd = pgd;
if (preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0)
if (sizeof(pmds) != 0 &&
preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0)
goto out_free_pgd;
if (preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0)
if (sizeof(u_pmds) != 0 &&
preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0)
goto out_free_pmds;
if (paravirt_pgd_alloc(mm) != 0)
@ -451,17 +450,22 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
spin_lock(&pgd_lock);
pgd_ctor(mm, pgd);
pgd_prepopulate_pmd(mm, pgd, pmds);
pgd_prepopulate_user_pmd(mm, pgd, u_pmds);
if (sizeof(pmds) != 0)
pgd_prepopulate_pmd(mm, pgd, pmds);
if (sizeof(u_pmds) != 0)
pgd_prepopulate_user_pmd(mm, pgd, u_pmds);
spin_unlock(&pgd_lock);
return pgd;
out_free_user_pmds:
free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS);
if (sizeof(u_pmds) != 0)
free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS);
out_free_pmds:
free_pmds(mm, pmds, PREALLOCATED_PMDS);
if (sizeof(pmds) != 0)
free_pmds(mm, pmds, PREALLOCATED_PMDS);
out_free_pgd:
_pgd_free(pgd);
out:

@ -821,7 +821,7 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
* called by a driver when serving an unrelated request from userland, we use
* the kernel credentials to read the file.
*/
kern_cred = prepare_kernel_cred(NULL);
kern_cred = prepare_kernel_cred(&init_task);
if (!kern_cred) {
ret = -ENOMEM;
goto out;

@ -98,12 +98,17 @@ static void dma_resv_list_set(struct dma_resv_list *list,
static struct dma_resv_list *dma_resv_list_alloc(unsigned int max_fences)
{
struct dma_resv_list *list;
size_t size;
list = kmalloc(struct_size(list, table, max_fences), GFP_KERNEL);
/* Round up to the next kmalloc bucket size. */
size = kmalloc_size_roundup(struct_size(list, table, max_fences));
list = kmalloc(size, GFP_KERNEL);
if (!list)
return NULL;
list->max_fences = (ksize(list) - offsetof(typeof(*list), table)) /
/* Given the resulting bucket size, recalculate max_fences. */
list->max_fences = (size - offsetof(typeof(*list), table)) /
sizeof(*list->table);
return list;

@ -60,8 +60,9 @@ static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
return drm_panel_get_modes(fsl_connector->panel, connector);
}
static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
if (mode->hdisplay & 0xf)
return MODE_ERROR;

@ -51,7 +51,7 @@ int i915_user_extensions(struct i915_user_extension __user *ext,
return err;
if (get_user(next, &ext->next_extension) ||
overflows_type(next, ext))
overflows_type(next, uintptr_t))
return -EFAULT;
ext = u64_to_user_ptr(next);

@ -111,10 +111,6 @@ bool i915_error_injected(void);
#define range_overflows_end_t(type, start, size, max) \
range_overflows_end((type)(start), (type)(size), (type)(max))
/* Note we don't consider signbits :| */
#define overflows_type(x, T) \
(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))
#define ptr_mask_bits(ptr, n) ({ \
unsigned long __v = (unsigned long)(ptr); \
(typeof(ptr))(__v & -BIT(n)); \

@ -346,8 +346,9 @@ static int sti_dvo_connector_get_modes(struct drm_connector *connector)
#define CLK_TOLERANCE_HZ 50
static int sti_dvo_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
sti_dvo_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
int target = mode->clock * 1000;
int target_min = target - CLK_TOLERANCE_HZ;

@ -601,8 +601,9 @@ static int sti_hda_connector_get_modes(struct drm_connector *connector)
#define CLK_TOLERANCE_HZ 50
static int sti_hda_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
sti_hda_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
int target = mode->clock * 1000;
int target_min = target - CLK_TOLERANCE_HZ;

@ -1004,8 +1004,9 @@ fail:
#define CLK_TOLERANCE_HZ 50
static int sti_hdmi_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
sti_hdmi_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
int target = mode->clock * 1000;
int target_min = target - CLK_TOLERANCE_HZ;

@ -485,6 +485,11 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
path_len = p->end - p->start;
old_buf_len = p->buf_len;
/*
* Allocate to the next largest kmalloc bucket size, to let
* the fast path happen most of the time.
*/
len = kmalloc_size_roundup(len);
/*
* First time the inline_buf does not suffice
*/
@ -498,11 +503,7 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
if (!tmp_buf)
return -ENOMEM;
p->buf = tmp_buf;
/*
* The real size of the buffer is bigger, this will let the fast path
* happen most of the time
*/
p->buf_len = ksize(p->buf);
p->buf_len = len;
if (p->reversed) {
tmp_buf = p->buf + old_buf_len - path_len - 1;

@ -189,7 +189,7 @@ init_cifs_spnego(void)
* spnego upcalls.
*/
cred = prepare_kernel_cred(NULL);
cred = prepare_kernel_cred(&init_task);
if (!cred)
return -ENOMEM;

@ -470,7 +470,7 @@ init_cifs_idmap(void)
* this is used to prevent malicious redirections from being installed
* with add_key().
*/
cred = prepare_kernel_cred(NULL);
cred = prepare_kernel_cred(&init_task);
if (!cred)
return -ENOMEM;

@ -68,7 +68,10 @@ struct core_name {
static int expand_corename(struct core_name *cn, int size)
{
char *corename = krealloc(cn->corename, size, GFP_KERNEL);
char *corename;
size = kmalloc_size_roundup(size);
corename = krealloc(cn->corename, size, GFP_KERNEL);
if (!corename)
return -ENOMEM;
@ -76,7 +79,7 @@ static int expand_corename(struct core_name *cn, int size)
if (size > core_name_size) /* racy but harmless */
core_name_size = size;
cn->size = ksize(corename);
cn->size = size;
cn->corename = corename;
return 0;
}

@ -3438,7 +3438,7 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
goto free_conv_name;
}
struct_sz = readdir_info_level_struct_sz(info_level) - 1 + conv_len;
struct_sz = readdir_info_level_struct_sz(info_level) + conv_len;
next_entry_offset = ALIGN(struct_sz, KSMBD_DIR_INFO_ALIGNMENT);
d_info->last_entry_off_align = next_entry_offset - struct_sz;
@ -3690,7 +3690,7 @@ static int reserve_populate_dentry(struct ksmbd_dir_info *d_info,
return -EOPNOTSUPP;
conv_len = (d_info->name_len + 1) * 2;
next_entry_offset = ALIGN(struct_sz - 1 + conv_len,
next_entry_offset = ALIGN(struct_sz + conv_len,
KSMBD_DIR_INFO_ALIGNMENT);
if (next_entry_offset > d_info->out_buf_len) {

@ -443,7 +443,7 @@ struct smb2_posix_info {
/* SidBuffer contain two sids (UNIX user sid(16), UNIX group sid(16)) */
u8 SidBuffer[32];
__le32 name_len;
u8 name[1];
u8 name[];
/*
* var sized owner SID
* var sized group SID

@ -623,7 +623,7 @@ int ksmbd_override_fsids(struct ksmbd_work *work)
if (share->force_gid != KSMBD_SHARE_INVALID_GID)
gid = share->force_gid;
cred = prepare_kernel_cred(NULL);
cred = prepare_kernel_cred(&init_task);
if (!cred)
return -ENOMEM;

@ -277,14 +277,14 @@ struct file_directory_info {
__le64 AllocationSize;
__le32 ExtFileAttributes;
__le32 FileNameLength;
char FileName[1];
char FileName[];
} __packed; /* level 0x101 FF resp data */
struct file_names_info {
__le32 NextEntryOffset;
__u32 FileIndex;
__le32 FileNameLength;
char FileName[1];
char FileName[];
} __packed; /* level 0xc FF resp data */
struct file_full_directory_info {
@ -299,7 +299,7 @@ struct file_full_directory_info {
__le32 ExtFileAttributes;
__le32 FileNameLength;
__le32 EaSize;
char FileName[1];
char FileName[];
} __packed; /* level 0x102 FF resp */
struct file_both_directory_info {
@ -317,7 +317,7 @@ struct file_both_directory_info {
__u8 ShortNameLength;
__u8 Reserved;
__u8 ShortName[24];
char FileName[1];
char FileName[];
} __packed; /* level 0x104 FFrsp data */
struct file_id_both_directory_info {
@ -337,7 +337,7 @@ struct file_id_both_directory_info {
__u8 ShortName[24];
__le16 Reserved2;
__le64 UniqueId;
char FileName[1];
char FileName[];
} __packed;
struct file_id_full_dir_info {
@ -354,7 +354,7 @@ struct file_id_full_dir_info {
__le32 EaSize; /* EA size */
__le32 Reserved;
__le64 UniqueId; /* inode num - le since Samba puts ino in low 32 bit*/
char FileName[1];
char FileName[];
} __packed; /* level 0x105 FF rsp data */
struct smb_version_values {

@ -493,10 +493,10 @@ ff_layout_alloc_lseg(struct pnfs_layout_hdr *lh,
gid = make_kgid(&init_user_ns, id);
if (gfp_flags & __GFP_FS)
kcred = prepare_kernel_cred(NULL);
kcred = prepare_kernel_cred(&init_task);
else {
unsigned int nofs_flags = memalloc_nofs_save();
kcred = prepare_kernel_cred(NULL);
kcred = prepare_kernel_cred(&init_task);
memalloc_nofs_restore(nofs_flags);
}
rc = -ENOMEM;

@ -203,7 +203,7 @@ int nfs_idmap_init(void)
printk(KERN_NOTICE "NFS: Registering the %s key type\n",
key_type_id_resolver.name);
cred = prepare_kernel_cred(NULL);
cred = prepare_kernel_cred(&init_task);
if (!cred)
return -ENOMEM;

@ -942,7 +942,7 @@ static const struct cred *get_backchannel_cred(struct nfs4_client *clp, struct r
} else {
struct cred *kcred;
kcred = prepare_kernel_cred(NULL);
kcred = prepare_kernel_cred(&init_task);
if (!kcred)
return NULL;

@ -236,6 +236,7 @@ static inline void *offset_to_ptr(const int *off)
* bool and also pointer types.
*/
#define is_signed_type(type) (((type)(-1)) < (__force type)1)
#define is_unsigned_type(type) (!is_signed_type(type))
/*
* This is needed in functions which generate the stack canary, see

@ -197,9 +197,9 @@ void devres_remove_group(struct device *dev, void *id);
int devres_release_group(struct device *dev, void *id);
/* managed devm_k.alloc/kfree for device drivers */
void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __malloc;
void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __alloc_size(2);
void *devm_krealloc(struct device *dev, void *ptr, size_t size,
gfp_t gfp) __must_check;
gfp_t gfp) __must_check __realloc_size(3);
__printf(3, 0) char *devm_kvasprintf(struct device *dev, gfp_t gfp,
const char *fmt, va_list ap) __malloc;
__printf(3, 4) char *devm_kasprintf(struct device *dev, gfp_t gfp,
@ -226,7 +226,8 @@ static inline void *devm_kcalloc(struct device *dev,
void devm_kfree(struct device *dev, const void *p);
char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp) __malloc;
const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp);
void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp);
void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp)
__realloc_size(3);
unsigned long devm_get_free_pages(struct device *dev,
gfp_t gfp_mask, unsigned int order);
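
An illustrative view of what the added __alloc_size()/__realloc_size() hints
enable (not from the patch; alloc_size_demo() is hypothetical, and the
diagnostic assumes CONFIG_FORTIFY_SOURCE and a compiler with dynamic object
size support):

    static void alloc_size_demo(struct device *dev)
    {
        /* With __alloc_size(2), the compiler knows p points to 8 bytes. */
        char *p = devm_kmalloc(dev, 8, GFP_KERNEL);

        if (!p)
            return;
        /* A fortified memcpy() can now flag this 18-byte write. */
        memcpy(p, "too long for eight", 18);
    }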

@ -18,7 +18,7 @@ void __write_overflow_field(size_t avail, size_t wanted) __compiletime_warning("
#define __compiletime_strlen(p) \
({ \
unsigned char *__p = (unsigned char *)(p); \
char *__p = (char *)(p); \
size_t __ret = SIZE_MAX; \
size_t __p_size = __member_size(p); \
if (__p_size != SIZE_MAX && \
@ -119,13 +119,13 @@ extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size)
* Instead, please choose an alternative, so that the expectation
* of @p's contents is unambiguous:
*
* +--------------------+-----------------+------------+
* | @p needs to be: | padded to @size | not padded |
* +====================+=================+============+
* | NUL-terminated | strscpy_pad() | strscpy() |
* +--------------------+-----------------+------------+
* | not NUL-terminated | strtomem_pad() | strtomem() |
* +--------------------+-----------------+------------+
* +--------------------+--------------------+------------+
* | **p** needs to be: | padded to **size** | not padded |
* +====================+====================+============+
* | NUL-terminated | strscpy_pad() | strscpy() |
* +--------------------+--------------------+------------+
* | not NUL-terminated | strtomem_pad() | strtomem() |
* +--------------------+--------------------+------------+
*
* Note strscpy*()'s differing return values for detecting truncation,
* and strtomem*()'s expectation that the destination is marked with
@ -144,6 +144,21 @@ char *strncpy(char * const POS p, const char *q, __kernel_size_t size)
return __underlying_strncpy(p, q, size);
}
/**
* strcat - Append a string to an existing string
*
* @p: pointer to NUL-terminated string to append to
* @q: pointer to NUL-terminated source string to append from
*
* Do not use this function. While FORTIFY_SOURCE tries to avoid
* read and write overflows, this is only possible when the
* destination buffer size is known to the compiler. Prefer
* building the string with formatting, via scnprintf() or similar.
* At the very least, use strncat().
*
* Returns @p.
*
*/
__FORTIFY_INLINE __diagnose_as(__builtin_strcat, 1, 2)
char *strcat(char * const POS p, const char *q)
{
@ -157,6 +172,16 @@ char *strcat(char * const POS p, const char *q)
}
extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
/**
* strnlen - Return bounded count of characters in a NUL-terminated string
*
* @p: pointer to NUL-terminated string to count.
* @maxlen: maximum number of characters to count.
*
* Returns number of characters in @p (NOT including the final NUL), or
* @maxlen, if no NUL has been found up to there.
*
*/
__FORTIFY_INLINE __kernel_size_t strnlen(const char * const POS p, __kernel_size_t maxlen)
{
size_t p_size = __member_size(p);
@ -182,6 +207,19 @@ __FORTIFY_INLINE __kernel_size_t strnlen(const char * const POS p, __kernel_size
* possible for strlen() to be used on compile-time strings for use in
* static initializers (i.e. as a constant expression).
*/
/**
* strlen - Return count of characters in a NUL-terminated string
*
* @p: pointer to NUL-terminated string to count.
*
* Do not use this function unless the string length is known at
* compile-time. When @p is unterminated, this function may crash
* or return unexpected counts that could lead to memory content
* exposures. Prefer strnlen().
*
* Returns number of characters in @p (NOT including the final NUL).
*
*/
#define strlen(p) \
__builtin_choose_expr(__is_constexpr(__builtin_strlen(p)), \
__builtin_strlen(p), __fortify_strlen(p))
@ -200,8 +238,26 @@ __kernel_size_t __fortify_strlen(const char * const POS p)
return ret;
}
/* defined after fortified strlen to reuse it */
/* Defined after fortified strlen() to reuse it. */
extern size_t __real_strlcpy(char *, const char *, size_t) __RENAME(strlcpy);
/**
* strlcpy - Copy a string into another string buffer
*
* @p: pointer to destination of copy
* @q: pointer to NUL-terminated source string to copy
* @size: maximum number of bytes to write at @p
*
* If strlen(@q) >= @size, the copy of @q will be truncated at
* @size - 1 bytes. @p will always be NUL-terminated.
*
* Do not use this function. While FORTIFY_SOURCE tries to avoid
* over-reads when calculating strlen(@q), it is still possible.
* Prefer strscpy(), though note its different return values for
* detecting truncation.
*
* Returns total number of bytes written to @p, including terminating NUL.
*
*/
__FORTIFY_INLINE size_t strlcpy(char * const POS p, const char * const POS q, size_t size)
{
size_t p_size = __member_size(p);
@ -227,8 +283,32 @@ __FORTIFY_INLINE size_t strlcpy(char * const POS p, const char * const POS q, si
return q_len;
}
/* defined after fortified strnlen to reuse it */
/* Defined after fortified strnlen() to reuse it. */
extern ssize_t __real_strscpy(char *, const char *, size_t) __RENAME(strscpy);
/**
* strscpy - Copy a C-string into a sized buffer
*
* @p: Where to copy the string to
* @q: Where to copy the string from
* @size: Size of destination buffer
*
* Copy the source string @q, or as much of it as fits, into the destination
* @p buffer. The behavior is undefined if the string buffers overlap. The
* destination @p buffer is always NUL terminated, unless it's zero-sized.
*
* Preferred to strlcpy() since the API doesn't require reading memory
* from the source @q string beyond the specified @size bytes, and since
* the return value is easier to error-check than strlcpy()'s.
* In addition, the implementation is robust to the string changing out
* from underneath it, unlike the current strlcpy() implementation.
*
* Preferred to strncpy() since it always returns a valid string, and
* doesn't unnecessarily force the tail of the destination buffer to be
* zero padded. If padding is desired please use strscpy_pad().
*
* Returns the number of characters copied in @p (not including the
* trailing %NUL) or -E2BIG if @size is 0 or the copy of @q was truncated.
*/
__FORTIFY_INLINE ssize_t strscpy(char * const POS p, const char * const POS q, size_t size)
{
size_t len;
@ -247,6 +327,16 @@ __FORTIFY_INLINE ssize_t strscpy(char * const POS p, const char * const POS q, s
if (__compiletime_lessthan(p_size, size))
__write_overflow();
/* Short-circuit for compile-time known-safe lengths. */
if (__compiletime_lessthan(p_size, SIZE_MAX)) {
len = __compiletime_strlen(q);
if (len < SIZE_MAX && __compiletime_lessthan(len, size)) {
__underlying_memcpy(p, q, len + 1);
return len;
}
}
/*
* This call protects from read overflow, because len will default to q
* length if it is smaller than size.
@ -274,7 +364,26 @@ __FORTIFY_INLINE ssize_t strscpy(char * const POS p, const char * const POS q, s
return __real_strscpy(p, q, len);
}
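
Given the return-value contract in the kern-doc above, a typical caller
looks like this (illustrative sketch with a hypothetical set_name() helper,
not part of the patch):

    static int set_name(char *dst, size_t dst_size, const char *src)
    {
        ssize_t len = strscpy(dst, src, dst_size);

        /* Negative return means @dst_size was 0 or @src was truncated. */
        if (len < 0)
            return -E2BIG;
        return 0;
    }
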
/* defined after fortified strlen and strnlen to reuse them */
/**
* strncat - Append a string to an existing string
*
* @p: pointer to NUL-terminated string to append to
* @q: pointer to source string to append from
* @count: Maximum bytes to read from @q
*
* Appends at most @count bytes from @q (stopping at the first
* NUL byte) after the NUL-terminated string at @p. @p will be
* NUL-terminated.
*
* Do not use this function. While FORTIFY_SOURCE tries to avoid
* read and write overflows, this is only possible when the sizes
* of @p and @q are known to the compiler. Prefer building the
* string with formatting, via scnprintf() or similar.
*
* Returns @p.
*
*/
/* Defined after fortified strlen() and strnlen() to reuse them. */
__FORTIFY_INLINE __diagnose_as(__builtin_strncat, 1, 2, 3)
char *strncat(char * const POS p, const char * const POS q, __kernel_size_t count)
{
@ -573,7 +682,8 @@ __FORTIFY_INLINE void *memchr_inv(const void * const POS0 p, int c, size_t size)
return __real_memchr_inv(p, c, size);
}
extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup)
__realloc_size(2);
__FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp)
{
size_t p_size = __struct_size(p);
@ -585,6 +695,20 @@ __FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp
return __real_kmemdup(p, size, gfp);
}
/**
* strcpy - Copy a string into another string buffer
*
* @p: pointer to destination of copy
* @q: pointer to NUL-terminated source string to copy
*
* Do not use this function. While FORTIFY_SOURCE tries to avoid
* overflows, this is only possible when the sizes of @q and @p are
* known to the compiler. Prefer strscpy(), though note its different
* return values for detecting truncation.
*
* Returns @p.
*
*/
/* Defined after fortified strlen to reuse it. */
__FORTIFY_INLINE __diagnose_as(__builtin_strcpy, 1, 2)
char *strcpy(char * const POS p, const char * const POS q)

@ -30,7 +30,7 @@ struct hpet {
unsigned long _hpet_compare;
} _u1;
u64 hpet_fsb[2]; /* FSB route */
} hpet_timers[1];
} hpet_timers[];
};
#define hpet_mc _u0._hpet_mc


@ -128,6 +128,53 @@ static inline bool __must_check __must_check_overflow(bool overflow)
(*_d >> _to_shift) != _a); \
}))
#define __overflows_type_constexpr(x, T) ( \
is_unsigned_type(typeof(x)) ? \
(x) > type_max(typeof(T)) : \
is_unsigned_type(typeof(T)) ? \
(x) < 0 || (x) > type_max(typeof(T)) : \
(x) < type_min(typeof(T)) || (x) > type_max(typeof(T)))
#define __overflows_type(x, T) ({ \
typeof(T) v = 0; \
check_add_overflow((x), v, &v); \
})
/**
* overflows_type - helper for checking whether a value would overflow a
* variable or data type
*
* @n: source constant value or variable to be checked
* @T: destination variable or data type proposed to store @n
*
* Compares the @n expression for whether or not it can safely fit in
* the storage of the type in @T. @n and @T can have different types.
* If @n is a constant expression, this will also resolve to a constant
* expression.
*
* Returns: true if overflow can occur, false otherwise.
*/
#define overflows_type(n, T) \
__builtin_choose_expr(__is_constexpr(n), \
__overflows_type_constexpr(n, T), \
__overflows_type(n, T))
/**
* castable_to_type - like __same_type(), but also allows for casted literals
*
* @n: variable or constant value
* @T: variable or data type
*
* Unlike the __same_type() macro, this allows a constant value as the
* first argument. If this value would not overflow into an assignment
* of the second argument's type, it returns true. Otherwise, this falls
* back to __same_type().
*/
#define castable_to_type(n, T) \
__builtin_choose_expr(__is_constexpr(n), \
!__overflows_type_constexpr(n, T), \
__same_type(n, T))
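
A short usage sketch of the two helpers; the expected results below mirror
the kernel-doc above and the KUnit cases later in this page (illustrative
only, with a demo function name of our own). Because all the arguments are
constant expressions, the results are compile-time constants:

    static void overflow_helpers_demo(void)
    {
        /* 300 cannot be represented in an s8: overflow expected. */
        BUILD_BUG_ON(!overflows_type(300, s8));
        /* S8_MAX fits in an s8: no overflow. */
        BUILD_BUG_ON(overflows_type(S8_MAX, s8));
        /* A literal that fits the destination type is castable. */
        BUILD_BUG_ON(!castable_to_type(100, s8));
        /* -1 cannot be assigned to an unsigned type. */
        BUILD_BUG_ON(castable_to_type(-1, u8));
    }
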
/**
* size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX
* @factor1: first factor

@ -11,6 +11,7 @@ extern long (*panic_blink)(int state);
__printf(1, 2)
void panic(const char *fmt, ...) __noreturn __cold;
void nmi_panic(struct pt_regs *regs, const char *msg);
void check_panic_on_warn(const char *origin);
extern void oops_enter(void);
extern void oops_exit(void);
extern bool oops_may_print(void);

@ -176,7 +176,7 @@ extern void kfree_const(const void *x);
extern char *kstrdup(const char *s, gfp_t gfp) __malloc;
extern const char *kstrdup_const(const char *s, gfp_t gfp);
extern char *kstrndup(const char *s, size_t len, gfp_t gfp);
extern void *kmemdup(const void *src, size_t len, gfp_t gfp);
extern void *kmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
extern char **argv_split(gfp_t gfp, const char *str, int *argcp);

@ -159,10 +159,12 @@ config WERROR
help
A kernel build should not cause any compiler warnings, and this
enables the '-Werror' (for C) and '-Dwarnings' (for Rust) flags
to enforce that rule by default.
to enforce that rule by default. Certain warnings from other tools
such as the linker may be upgraded to errors with this option as
well.
However, if you have a new (or very old) compiler with odd and
unusual warnings, or you have some architecture with problems,
However, if you have a new (or very old) compiler or linker with odd
and unusual warnings, or you have some architecture with problems,
you may need to disable this config option in order to
successfully build the kernel.
@ -1454,6 +1456,13 @@ config LD_ORPHAN_WARN
def_bool y
depends on ARCH_WANT_LD_ORPHAN_WARN
depends on $(ld-option,--orphan-handling=warn)
depends on $(ld-option,--orphan-handling=error)
config LD_ORPHAN_WARN_LEVEL
string
depends on LD_ORPHAN_WARN
default "error" if WERROR
default "warn"
config SYSCTL
bool

@ -701,9 +701,9 @@ void __init cred_init(void)
* override a task's own credentials so that work can be done on behalf of that
* task that requires a different subjective context.
*
* @daemon is used to provide a base for the security record, but can be NULL.
* If @daemon is supplied, then the security data will be derived from that;
* otherwise they'll be set to 0 and no groups, full capabilities and no keys.
* @daemon is used to provide a base cred, with the security data derived from
* that; if this is "&init_task", they'll be set to 0, no groups, full
* capabilities, and no keys.
*
* The caller may change these controls afterwards if desired.
*
@ -714,17 +714,16 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
const struct cred *old;
struct cred *new;
if (WARN_ON_ONCE(!daemon))
return NULL;
new = kmem_cache_alloc(cred_jar, GFP_KERNEL);
if (!new)
return NULL;
kdebug("prepare_kernel_cred() alloc %p", new);
if (daemon)
old = get_task_cred(daemon);
else
old = get_cred(&init_cred);
old = get_task_cred(daemon);
validate_creds(old);
*new = *old;

@ -67,11 +67,58 @@
#include <linux/io_uring.h>
#include <linux/kprobes.h>
#include <linux/rethook.h>
#include <linux/sysfs.h>
#include <linux/uaccess.h>
#include <asm/unistd.h>
#include <asm/mmu_context.h>
/*
* The default value should be high enough to not crash a system that randomly
* crashes its kernel from time to time, but low enough to at least not permit
* overflowing 32-bit refcounts or the ldsem writer count.
*/
static unsigned int oops_limit = 10000;
#ifdef CONFIG_SYSCTL
static struct ctl_table kern_exit_table[] = {
{
.procname = "oops_limit",
.data = &oops_limit,
.maxlen = sizeof(oops_limit),
.mode = 0644,
.proc_handler = proc_douintvec,
},
{ }
};
static __init int kernel_exit_sysctls_init(void)
{
register_sysctl_init("kernel", kern_exit_table);
return 0;
}
late_initcall(kernel_exit_sysctls_init);
#endif
static atomic_t oops_count = ATOMIC_INIT(0);
#ifdef CONFIG_SYSFS
static ssize_t oops_count_show(struct kobject *kobj, struct kobj_attribute *attr,
char *page)
{
return sysfs_emit(page, "%d\n", atomic_read(&oops_count));
}
static struct kobj_attribute oops_count_attr = __ATTR_RO(oops_count);
static __init int kernel_exit_sysfs_init(void)
{
sysfs_add_file_to_group(kernel_kobj, &oops_count_attr.attr, NULL);
return 0;
}
late_initcall(kernel_exit_sysfs_init);
#endif
static void __unhash_process(struct task_struct *p, bool group_dead)
{
nr_threads--;
@ -897,6 +944,19 @@ void __noreturn make_task_dead(int signr)
preempt_count_set(PREEMPT_ENABLED);
}
/*
* Every time the system oopses, if the oops happens while a reference
* to an object was held, the reference leaks.
* If the oops doesn't also leak memory, repeated oopsing can cause
* reference counters to wrap around (if they're not using refcount_t).
* This means that repeated oopsing can make unexploitable-looking bugs
* exploitable through repeated oopsing.
* To make sure this can't happen, place an upper bound on how often the
* kernel may oops without panic().
*/
if (atomic_inc_return(&oops_count) >= READ_ONCE(oops_limit) && oops_limit)
panic("Oopsed too often (kernel.oops_limit is %d)", oops_limit);
/*
* We're taking recursive faults here in make_task_dead. Safest is to just
* leave this task alone and wait for reboot.

@ -492,8 +492,7 @@ static void print_report(enum kcsan_value_change value_change,
dump_stack_print_info(KERN_DEFAULT);
pr_err("==================================================================\n");
if (panic_on_warn)
panic("panic_on_warn set ...\n");
check_panic_on_warn("KCSAN");
}
static void release_report(unsigned long *flags, struct other_info *other_info)

@ -33,6 +33,7 @@
#include <linux/bug.h>
#include <linux/ratelimit.h>
#include <linux/debugfs.h>
#include <linux/sysfs.h>
#include <trace/events/error_report.h>
#include <asm/sections.h>
@ -59,6 +60,7 @@ bool crash_kexec_post_notifiers;
int panic_on_warn __read_mostly;
unsigned long panic_on_taint;
bool panic_on_taint_nousertaint = false;
static unsigned int warn_limit __read_mostly;
int panic_timeout = CONFIG_PANIC_TIMEOUT;
EXPORT_SYMBOL_GPL(panic_timeout);
@ -76,8 +78,9 @@ ATOMIC_NOTIFIER_HEAD(panic_notifier_list);
EXPORT_SYMBOL(panic_notifier_list);
#if defined(CONFIG_SMP) && defined(CONFIG_SYSCTL)
#ifdef CONFIG_SYSCTL
static struct ctl_table kern_panic_table[] = {
#ifdef CONFIG_SMP
{
.procname = "oops_all_cpu_backtrace",
.data = &sysctl_oops_all_cpu_backtrace,
@ -87,6 +90,14 @@ static struct ctl_table kern_panic_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
#endif
{
.procname = "warn_limit",
.data = &warn_limit,
.maxlen = sizeof(warn_limit),
.mode = 0644,
.proc_handler = proc_douintvec,
},
{ }
};
@ -98,6 +109,25 @@ static __init int kernel_panic_sysctls_init(void)
late_initcall(kernel_panic_sysctls_init);
#endif
static atomic_t warn_count = ATOMIC_INIT(0);
#ifdef CONFIG_SYSFS
static ssize_t warn_count_show(struct kobject *kobj, struct kobj_attribute *attr,
char *page)
{
return sysfs_emit(page, "%d\n", atomic_read(&warn_count));
}
static struct kobj_attribute warn_count_attr = __ATTR_RO(warn_count);
static __init int kernel_panic_sysfs_init(void)
{
sysfs_add_file_to_group(kernel_kobj, &warn_count_attr.attr, NULL);
return 0;
}
late_initcall(kernel_panic_sysfs_init);
#endif
static long no_blink(int state)
{
return 0;
@ -200,6 +230,16 @@ static void panic_print_sys_info(bool console_flush)
ftrace_dump(DUMP_ALL);
}
void check_panic_on_warn(const char *origin)
{
if (panic_on_warn)
panic("%s: panic_on_warn set ...\n", origin);
if (atomic_inc_return(&warn_count) >= READ_ONCE(warn_limit) && warn_limit)
panic("%s: system warned too often (kernel.warn_limit is %d)",
origin, warn_limit);
}
/**
* panic - halt the system
* @fmt: The text string to print
@ -618,8 +658,7 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
if (regs)
show_regs(regs);
if (panic_on_warn)
panic("panic_on_warn set ...\n");
check_panic_on_warn("kernel");
if (!regs)
dump_stack();

@ -5782,8 +5782,7 @@ static noinline void __schedule_bug(struct task_struct *prev)
pr_err("Preemption disabled at:");
print_ip_sym(KERN_ERR, preempt_disable_ip);
}
if (panic_on_warn)
panic("scheduling while atomic\n");
check_panic_on_warn("scheduling while atomic");
dump_stack();
add_taint(TAINT_WARN, LOCKDEP_STILL_OK);

@ -2693,6 +2693,7 @@ relock:
/* Has this task already been marked for death? */
if ((signal->flags & SIGNAL_GROUP_EXIT) ||
signal->group_exec_task) {
clear_siginfo(&ksig->info);
ksig->info.si_signo = signr = SIGKILL;
sigdelset(&current->pending.signal, SIGKILL);
trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,

@ -2234,9 +2234,6 @@ config STRING_SELFTEST
config TEST_STRING_HELPERS
tristate "Test functions located in the string_helpers module at runtime"
config TEST_STRSCPY
tristate "Test strscpy*() family of functions at runtime"
config TEST_KSTRTOX
tristate "Test kstrto*() family of functions at runtime"
@ -2271,15 +2268,6 @@ config TEST_RHASHTABLE
If unsure, say N.
config TEST_SIPHASH
tristate "Perform selftest on siphash functions"
help
Enable this option to test the kernel's siphash (<linux/siphash.h>) hash
functions on boot (or module load).
This is intended to help people writing architecture-specific
optimized versions. If unsure, say N.
config TEST_IDA
tristate "Perform selftest on IDA functions"
@ -2607,6 +2595,22 @@ config HW_BREAKPOINT_KUNIT_TEST
If unsure, say N.
config STRSCPY_KUNIT_TEST
tristate "Test strscpy*() family of functions at runtime" if !KUNIT_ALL_TESTS
depends on KUNIT
default KUNIT_ALL_TESTS
config SIPHASH_KUNIT_TEST
tristate "Perform selftest on siphash functions" if !KUNIT_ALL_TESTS
depends on KUNIT
default KUNIT_ALL_TESTS
help
Enable this option to test the kernel's siphash (<linux/siphash.h>) hash
functions on boot (or module load).
This is intended to help people writing architecture-specific
optimized versions. If unsure, say N.
config TEST_UDELAY
tristate "udelay test driver"
help

@ -62,7 +62,6 @@ obj-$(CONFIG_TEST_BITOPS) += test_bitops.o
CFLAGS_test_bitops.o += -Werror
obj-$(CONFIG_CPUMASK_KUNIT_TEST) += cpumask_kunit.o
obj-$(CONFIG_TEST_SYSCTL) += test_sysctl.o
obj-$(CONFIG_TEST_SIPHASH) += test_siphash.o
obj-$(CONFIG_HASH_KUNIT_TEST) += test_hash.o
obj-$(CONFIG_TEST_IDA) += test_ida.o
obj-$(CONFIG_TEST_UBSAN) += test_ubsan.o
@ -82,7 +81,6 @@ obj-$(CONFIG_TEST_DYNAMIC_DEBUG) += test_dynamic_debug.o
obj-$(CONFIG_TEST_PRINTF) += test_printf.o
obj-$(CONFIG_TEST_SCANF) += test_scanf.o
obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o
obj-$(CONFIG_TEST_UUID) += test_uuid.o
obj-$(CONFIG_TEST_XARRAY) += test_xarray.o
obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o
@ -377,10 +375,15 @@ obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
obj-$(CONFIG_MEMCPY_KUNIT_TEST) += memcpy_kunit.o
obj-$(CONFIG_IS_SIGNED_TYPE_KUNIT_TEST) += is_signed_type_kunit.o
CFLAGS_overflow_kunit.o = $(call cc-disable-warning, tautological-constant-out-of-range-compare)
obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o
CFLAGS_stackinit_kunit.o += $(call cc-disable-warning, switch-unreachable)
obj-$(CONFIG_STACKINIT_KUNIT_TEST) += stackinit_kunit.o
CFLAGS_fortify_kunit.o += $(call cc-disable-warning, unsequenced)
CFLAGS_fortify_kunit.o += $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o
obj-$(CONFIG_STRSCPY_KUNIT_TEST) += strscpy_kunit.o
obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o

@ -16,7 +16,10 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <kunit/test.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/vmalloc.h>
static const char array_of_10[] = "this is 10";
static const char *ptr_of_11 = "this is 11!";
@ -60,9 +63,261 @@ static void control_flow_split_test(struct kunit *test)
KUNIT_EXPECT_EQ(test, want_minus_one(pick), SIZE_MAX);
}
#define KUNIT_EXPECT_BOS(test, p, expected, name) \
KUNIT_EXPECT_EQ_MSG(test, __builtin_object_size(p, 1), \
expected, \
"__alloc_size() not working with __bos on " name "\n")
#if !__has_builtin(__builtin_dynamic_object_size)
#define KUNIT_EXPECT_BDOS(test, p, expected, name) \
/* Silence "unused variable 'expected'" warning. */ \
KUNIT_EXPECT_EQ(test, expected, expected)
#else
#define KUNIT_EXPECT_BDOS(test, p, expected, name) \
KUNIT_EXPECT_EQ_MSG(test, __builtin_dynamic_object_size(p, 1), \
expected, \
"__alloc_size() not working with __bdos on " name "\n")
#endif
/* If the expected size is a constant value, __bos can see it. */
#define check_const(_expected, alloc, free) do { \
size_t expected = (_expected); \
void *p = alloc; \
KUNIT_EXPECT_TRUE_MSG(test, p != NULL, #alloc " failed?!\n"); \
KUNIT_EXPECT_BOS(test, p, expected, #alloc); \
KUNIT_EXPECT_BDOS(test, p, expected, #alloc); \
free; \
} while (0)
/* If the expected size is NOT a constant value, __bos CANNOT see it. */
#define check_dynamic(_expected, alloc, free) do { \
size_t expected = (_expected); \
void *p = alloc; \
KUNIT_EXPECT_TRUE_MSG(test, p != NULL, #alloc " failed?!\n"); \
KUNIT_EXPECT_BOS(test, p, SIZE_MAX, #alloc); \
KUNIT_EXPECT_BDOS(test, p, expected, #alloc); \
free; \
} while (0)
/* Assortment of constant-value kinda-edge cases. */
#define CONST_TEST_BODY(TEST_alloc) do { \
/* Special-case vmalloc()-family to skip 0-sized allocs. */ \
if (strcmp(#TEST_alloc, "TEST_vmalloc") != 0) \
TEST_alloc(check_const, 0, 0); \
TEST_alloc(check_const, 1, 1); \
TEST_alloc(check_const, 128, 128); \
TEST_alloc(check_const, 1023, 1023); \
TEST_alloc(check_const, 1025, 1025); \
TEST_alloc(check_const, 4096, 4096); \
TEST_alloc(check_const, 4097, 4097); \
} while (0)
static volatile size_t zero_size;
static volatile size_t unknown_size = 50;
#if !__has_builtin(__builtin_dynamic_object_size)
#define DYNAMIC_TEST_BODY(TEST_alloc) \
kunit_skip(test, "Compiler is missing __builtin_dynamic_object_size() support\n")
#else
#define DYNAMIC_TEST_BODY(TEST_alloc) do { \
size_t size = unknown_size; \
\
/* \
* Expected size is "size" in each test, before it is then \
* internally incremented in each test. Requires we disable \
* -Wunsequenced. \
*/ \
TEST_alloc(check_dynamic, size, size++); \
/* Make sure incrementing actually happened. */ \
KUNIT_EXPECT_NE(test, size, unknown_size); \
} while (0)
#endif
#define DEFINE_ALLOC_SIZE_TEST_PAIR(allocator) \
static void alloc_size_##allocator##_const_test(struct kunit *test) \
{ \
CONST_TEST_BODY(TEST_##allocator); \
} \
static void alloc_size_##allocator##_dynamic_test(struct kunit *test) \
{ \
DYNAMIC_TEST_BODY(TEST_##allocator); \
}
#define TEST_kmalloc(checker, expected_size, alloc_size) do { \
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \
void *orig; \
size_t len; \
\
checker(expected_size, kmalloc(alloc_size, gfp), \
kfree(p)); \
checker(expected_size, \
kmalloc_node(alloc_size, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, kzalloc(alloc_size, gfp), \
kfree(p)); \
checker(expected_size, \
kzalloc_node(alloc_size, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, kcalloc(1, alloc_size, gfp), \
kfree(p)); \
checker(expected_size, kcalloc(alloc_size, 1, gfp), \
kfree(p)); \
checker(expected_size, \
kcalloc_node(1, alloc_size, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, \
kcalloc_node(alloc_size, 1, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, kmalloc_array(1, alloc_size, gfp), \
kfree(p)); \
checker(expected_size, kmalloc_array(alloc_size, 1, gfp), \
kfree(p)); \
checker(expected_size, \
kmalloc_array_node(1, alloc_size, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, \
kmalloc_array_node(alloc_size, 1, gfp, NUMA_NO_NODE), \
kfree(p)); \
checker(expected_size, __kmalloc(alloc_size, gfp), \
kfree(p)); \
checker(expected_size, \
__kmalloc_node(alloc_size, gfp, NUMA_NO_NODE), \
kfree(p)); \
\
orig = kmalloc(alloc_size, gfp); \
KUNIT_EXPECT_TRUE(test, orig != NULL); \
checker((expected_size) * 2, \
krealloc(orig, (alloc_size) * 2, gfp), \
kfree(p)); \
orig = kmalloc(alloc_size, gfp); \
KUNIT_EXPECT_TRUE(test, orig != NULL); \
checker((expected_size) * 2, \
krealloc_array(orig, 1, (alloc_size) * 2, gfp), \
kfree(p)); \
orig = kmalloc(alloc_size, gfp); \
KUNIT_EXPECT_TRUE(test, orig != NULL); \
checker((expected_size) * 2, \
krealloc_array(orig, (alloc_size) * 2, 1, gfp), \
kfree(p)); \
\
len = 11; \
/* Using memdup() with fixed size, so force unknown length. */ \
if (!__builtin_constant_p(expected_size)) \
len += zero_size; \
checker(len, kmemdup("hello there", len, gfp), kfree(p)); \
} while (0)
DEFINE_ALLOC_SIZE_TEST_PAIR(kmalloc)
/* Sizes are in pages, not bytes. */
#define TEST_vmalloc(checker, expected_pages, alloc_pages) do { \
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \
checker((expected_pages) * PAGE_SIZE, \
vmalloc((alloc_pages) * PAGE_SIZE), vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
vzalloc((alloc_pages) * PAGE_SIZE), vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
__vmalloc((alloc_pages) * PAGE_SIZE, gfp), vfree(p)); \
} while (0)
DEFINE_ALLOC_SIZE_TEST_PAIR(vmalloc)
/* Sizes are in pages (and open-coded for side-effects), not bytes. */
#define TEST_kvmalloc(checker, expected_pages, alloc_pages) do { \
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \
size_t prev_size; \
void *orig; \
\
checker((expected_pages) * PAGE_SIZE, \
kvmalloc((alloc_pages) * PAGE_SIZE, gfp), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvmalloc_node((alloc_pages) * PAGE_SIZE, gfp, NUMA_NO_NODE), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvzalloc((alloc_pages) * PAGE_SIZE, gfp), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvzalloc_node((alloc_pages) * PAGE_SIZE, gfp, NUMA_NO_NODE), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvcalloc(1, (alloc_pages) * PAGE_SIZE, gfp), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvcalloc((alloc_pages) * PAGE_SIZE, 1, gfp), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvmalloc_array(1, (alloc_pages) * PAGE_SIZE, gfp), \
vfree(p)); \
checker((expected_pages) * PAGE_SIZE, \
kvmalloc_array((alloc_pages) * PAGE_SIZE, 1, gfp), \
vfree(p)); \
\
prev_size = (expected_pages) * PAGE_SIZE; \
orig = kvmalloc(prev_size, gfp); \
KUNIT_EXPECT_TRUE(test, orig != NULL); \
checker(((expected_pages) * PAGE_SIZE) * 2, \
kvrealloc(orig, prev_size, \
((alloc_pages) * PAGE_SIZE) * 2, gfp), \
kvfree(p)); \
} while (0)
DEFINE_ALLOC_SIZE_TEST_PAIR(kvmalloc)
#define TEST_devm_kmalloc(checker, expected_size, alloc_size) do { \
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \
const char dev_name[] = "fortify-test"; \
struct device *dev; \
void *orig; \
size_t len; \
\
/* Create dummy device for devm_kmalloc()-family tests. */ \
dev = root_device_register(dev_name); \
KUNIT_ASSERT_FALSE_MSG(test, IS_ERR(dev), \
"Cannot register test device\n"); \
\
checker(expected_size, devm_kmalloc(dev, alloc_size, gfp), \
devm_kfree(dev, p)); \
checker(expected_size, devm_kzalloc(dev, alloc_size, gfp), \
devm_kfree(dev, p)); \
checker(expected_size, \
devm_kmalloc_array(dev, 1, alloc_size, gfp), \
devm_kfree(dev, p)); \
checker(expected_size, \
devm_kmalloc_array(dev, alloc_size, 1, gfp), \
devm_kfree(dev, p)); \
checker(expected_size, \
devm_kcalloc(dev, 1, alloc_size, gfp), \
devm_kfree(dev, p)); \
checker(expected_size, \
devm_kcalloc(dev, alloc_size, 1, gfp), \
devm_kfree(dev, p)); \
\
orig = devm_kmalloc(dev, alloc_size, gfp); \
KUNIT_EXPECT_TRUE(test, orig != NULL); \
checker((expected_size) * 2, \
devm_krealloc(dev, orig, (alloc_size) * 2, gfp), \
devm_kfree(dev, p)); \
\
len = 4; \
/* Using memdup() with fixed size, so force unknown length. */ \
if (!__builtin_constant_p(expected_size)) \
len += zero_size; \
checker(len, devm_kmemdup(dev, "Ohai", len, gfp), \
devm_kfree(dev, p)); \
\
device_unregister(dev); \
} while (0)
DEFINE_ALLOC_SIZE_TEST_PAIR(devm_kmalloc)
static struct kunit_case fortify_test_cases[] = {
KUNIT_CASE(known_sizes_test),
KUNIT_CASE(control_flow_split_test),
KUNIT_CASE(alloc_size_kmalloc_const_test),
KUNIT_CASE(alloc_size_kmalloc_dynamic_test),
KUNIT_CASE(alloc_size_vmalloc_const_test),
KUNIT_CASE(alloc_size_vmalloc_dynamic_test),
KUNIT_CASE(alloc_size_kvmalloc_const_test),
KUNIT_CASE(alloc_size_kvmalloc_dynamic_test),
KUNIT_CASE(alloc_size_devm_kmalloc_const_test),
KUNIT_CASE(alloc_size_devm_kmalloc_dynamic_test),
{}
};

@ -292,6 +292,208 @@ static void memset_test(struct kunit *test)
#undef TEST_OP
}
static u8 large_src[1024];
static u8 large_dst[2048];
static const u8 large_zero[2048];
static void set_random_nonzero(struct kunit *test, u8 *byte)
{
int failed_rng = 0;
while (*byte == 0) {
get_random_bytes(byte, 1);
KUNIT_ASSERT_LT_MSG(test, failed_rng++, 100,
"Is the RNG broken?");
}
}
static void init_large(struct kunit *test)
{
/* Get many bit patterns. */
get_random_bytes(large_src, ARRAY_SIZE(large_src));
/* Make sure we have non-zero edges. */
set_random_nonzero(test, &large_src[0]);
set_random_nonzero(test, &large_src[ARRAY_SIZE(large_src) - 1]);
/* Explicitly zero the entire destination. */
memset(large_dst, 0, ARRAY_SIZE(large_dst));
}
/*
* Instead of an indirect function call for "copy" or a giant macro,
* use a bool to pick memcpy or memmove.
*/
static void copy_large_test(struct kunit *test, bool use_memmove)
{
init_large(test);
/* Copy a growing number of non-overlapping bytes ... */
for (int bytes = 1; bytes <= ARRAY_SIZE(large_src); bytes++) {
/* Over a shifting destination window ... */
for (int offset = 0; offset < ARRAY_SIZE(large_src); offset++) {
int right_zero_pos = offset + bytes;
int right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos;
/* Copy! */
if (use_memmove)
memmove(large_dst + offset, large_src, bytes);
else
memcpy(large_dst + offset, large_src, bytes);
/* Did we touch anything before the copy area? */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(large_dst, large_zero, offset), 0,
"with size %d at offset %d", bytes, offset);
/* Did we touch anything after the copy area? */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(&large_dst[right_zero_pos], large_zero, right_zero_size), 0,
"with size %d at offset %d", bytes, offset);
/* Are we byte-for-byte exact across the copy? */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(large_dst + offset, large_src, bytes), 0,
"with size %d at offset %d", bytes, offset);
/* Zero out what we copied for the next cycle. */
memset(large_dst + offset, 0, bytes);
}
/* Avoid stall warnings if this loop gets slow. */
cond_resched();
}
}
static void memcpy_large_test(struct kunit *test)
{
copy_large_test(test, false);
}
static void memmove_large_test(struct kunit *test)
{
copy_large_test(test, true);
}
/*
* On the assumption that boundary conditions are going to be the most
* sensitive, instead of taking a full step (inc) each iteration,
* take single index steps for at least the first "inc"-many indexes
* from the "start" and at least the last "inc"-many indexes before
* the "end". When in the middle, take full "inc"-wide steps. For
* example, calling next_step(idx, 1, 15, 3) with idx starting at 0
* would see the following pattern: 1 2 3 4 7 10 11 12 13 14 15.
*/
static int next_step(int idx, int start, int end, int inc)
{
start += inc;
end -= inc;
if (idx < start || idx + inc > end)
inc = 1;
return idx + inc;
}
static void inner_loop(struct kunit *test, int bytes, int d_off, int s_off)
{
int left_zero_pos, left_zero_size;
int right_zero_pos, right_zero_size;
int src_pos, src_orig_pos, src_size;
int pos;
/* Place the source in the destination buffer. */
memcpy(&large_dst[s_off], large_src, bytes);
/* Copy to destination offset. */
memmove(&large_dst[d_off], &large_dst[s_off], bytes);
/* Make sure destination entirely matches. */
KUNIT_ASSERT_EQ_MSG(test, memcmp(&large_dst[d_off], large_src, bytes), 0,
"with size %d at src offset %d and dest offset %d",
bytes, s_off, d_off);
/* Calculate the expected zero spans. */
if (s_off < d_off) {
left_zero_pos = 0;
left_zero_size = s_off;
right_zero_pos = d_off + bytes;
right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos;
src_pos = s_off;
src_orig_pos = 0;
src_size = d_off - s_off;
} else {
left_zero_pos = 0;
left_zero_size = d_off;
right_zero_pos = s_off + bytes;
right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos;
src_pos = d_off + bytes;
src_orig_pos = src_pos - s_off;
src_size = right_zero_pos - src_pos;
}
/* Check non-overlapping source is unchanged. */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(&large_dst[src_pos], &large_src[src_orig_pos], src_size), 0,
"with size %d at src offset %d and dest offset %d",
bytes, s_off, d_off);
/* Check leading buffer contents are zero. */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(&large_dst[left_zero_pos], large_zero, left_zero_size), 0,
"with size %d at src offset %d and dest offset %d",
bytes, s_off, d_off);
/* Check trailing buffer contents are zero. */
KUNIT_ASSERT_EQ_MSG(test,
memcmp(&large_dst[right_zero_pos], large_zero, right_zero_size), 0,
"with size %d at src offset %d and dest offset %d",
bytes, s_off, d_off);
/* Zero out everything not already zeroed. */
pos = left_zero_pos + left_zero_size;
memset(&large_dst[pos], 0, right_zero_pos - pos);
}
static void memmove_overlap_test(struct kunit *test)
{
/*
* Running all possible offset and overlap combinations takes a
* very long time. Instead, only check up to 128 bytes offset
* into the destination buffer (which should result in crossing
* cachelines), with a step size of 1 through 7 to try to skip some
* redundancy.
*/
static const int offset_max = 128; /* less than ARRAY_SIZE(large_src); */
static const int bytes_step = 7;
static const int window_step = 7;
static const int bytes_start = 1;
static const int bytes_end = ARRAY_SIZE(large_src) + 1;
init_large(test);
/* Copy a growing number of overlapping bytes ... */
for (int bytes = bytes_start; bytes < bytes_end;
bytes = next_step(bytes, bytes_start, bytes_end, bytes_step)) {
/* Over a shifting destination window ... */
for (int d_off = 0; d_off < offset_max; d_off++) {
int s_start = max(d_off - bytes, 0);
int s_end = min_t(int, d_off + bytes, ARRAY_SIZE(large_src));
/* Over a shifting source window ... */
for (int s_off = s_start; s_off < s_end;
s_off = next_step(s_off, s_start, s_end, window_step))
inner_loop(test, bytes, d_off, s_off);
/* Avoid stall warnings. */
cond_resched();
}
}
}
static void strtomem_test(struct kunit *test)
{
static const char input[sizeof(unsigned long)] = "hi";
@@ -347,7 +549,10 @@ static void strtomem_test(struct kunit *test)
static struct kunit_case memcpy_test_cases[] = {
KUNIT_CASE(memset_test),
KUNIT_CASE(memcpy_test),
KUNIT_CASE(memcpy_large_test),
KUNIT_CASE(memmove_test),
KUNIT_CASE(memmove_large_test),
KUNIT_CASE(memmove_overlap_test),
KUNIT_CASE(strtomem_test),
{}
};


@@ -736,6 +736,384 @@ static void overflow_size_helpers_test(struct kunit *test)
#undef check_one_size_helper
}
static void overflows_type_test(struct kunit *test)
{
int count = 0;
unsigned int var;
#define __TEST_OVERFLOWS_TYPE(func, arg1, arg2, of) do { \
bool __of = func(arg1, arg2); \
KUNIT_EXPECT_EQ_MSG(test, __of, of, \
"expected " #func "(" #arg1 ", " #arg2 " to%s overflow\n",\
of ? "" : " not"); \
count++; \
} while (0)
/* Args are: first type, second type, value, overflow expected */
#define TEST_OVERFLOWS_TYPE(__t1, __t2, v, of) do { \
__t1 t1 = (v); \
__t2 t2; \
__TEST_OVERFLOWS_TYPE(__overflows_type, t1, t2, of); \
__TEST_OVERFLOWS_TYPE(__overflows_type, t1, __t2, of); \
__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, t2, of); \
__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, __t2, of);\
} while (0)
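
For reference, a single table line such as TEST_OVERFLOWS_TYPE(u8, s8, U8_MAX, true) expands to roughly the following, so each value is exercised against both a variable and a bare type, through both the runtime and the constant-expression implementations:

	u8 t1 = U8_MAX;
	s8 t2;
	/* all four checks must agree that U8_MAX overflows s8 */
	__TEST_OVERFLOWS_TYPE(__overflows_type, t1, t2, true);
	__TEST_OVERFLOWS_TYPE(__overflows_type, t1, s8, true);
	__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, t2, true);
	__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, s8, true);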
TEST_OVERFLOWS_TYPE(u8, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(u8, u16, U8_MAX, false);
TEST_OVERFLOWS_TYPE(u8, s8, U8_MAX, true);
TEST_OVERFLOWS_TYPE(u8, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(u8, s8, (u8)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u8, s16, U8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, u8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, u8, -1, true);
TEST_OVERFLOWS_TYPE(s8, u8, S8_MIN, true);
TEST_OVERFLOWS_TYPE(s8, u16, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, u16, -1, true);
TEST_OVERFLOWS_TYPE(s8, u16, S8_MIN, true);
TEST_OVERFLOWS_TYPE(s8, u32, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, u32, -1, true);
TEST_OVERFLOWS_TYPE(s8, u32, S8_MIN, true);
#if BITS_PER_LONG == 64
TEST_OVERFLOWS_TYPE(s8, u64, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, u64, -1, true);
TEST_OVERFLOWS_TYPE(s8, u64, S8_MIN, true);
#endif
TEST_OVERFLOWS_TYPE(s8, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, s8, S8_MIN, false);
TEST_OVERFLOWS_TYPE(s8, s16, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s8, s16, S8_MIN, false);
TEST_OVERFLOWS_TYPE(u16, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(u16, u8, (u16)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u16, u8, U16_MAX, true);
TEST_OVERFLOWS_TYPE(u16, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(u16, s8, (u16)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u16, s8, U16_MAX, true);
TEST_OVERFLOWS_TYPE(u16, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(u16, s16, (u16)S16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u16, s16, U16_MAX, true);
TEST_OVERFLOWS_TYPE(u16, u32, U16_MAX, false);
TEST_OVERFLOWS_TYPE(u16, s32, U16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(s16, u8, (s16)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s16, u8, -1, true);
TEST_OVERFLOWS_TYPE(s16, u8, S16_MIN, true);
TEST_OVERFLOWS_TYPE(s16, u16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, u16, -1, true);
TEST_OVERFLOWS_TYPE(s16, u16, S16_MIN, true);
TEST_OVERFLOWS_TYPE(s16, u32, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, u32, -1, true);
TEST_OVERFLOWS_TYPE(s16, u32, S16_MIN, true);
#if BITS_PER_LONG == 64
TEST_OVERFLOWS_TYPE(s16, u64, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, u64, -1, true);
TEST_OVERFLOWS_TYPE(s16, u64, S16_MIN, true);
#endif
TEST_OVERFLOWS_TYPE(s16, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s16, s8, S8_MIN, false);
TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s16, s8, S16_MAX, true);
TEST_OVERFLOWS_TYPE(s16, s8, S16_MIN, true);
TEST_OVERFLOWS_TYPE(s16, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, s16, S16_MIN, false);
TEST_OVERFLOWS_TYPE(s16, s32, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s16, s32, S16_MIN, false);
TEST_OVERFLOWS_TYPE(u32, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(u32, u8, (u32)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u32, u8, U32_MAX, true);
TEST_OVERFLOWS_TYPE(u32, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(u32, s8, (u32)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u32, s8, U32_MAX, true);
TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX, false);
TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u32, u16, U32_MAX, true);
TEST_OVERFLOWS_TYPE(u32, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(u32, s16, (u32)S16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u32, s16, U32_MAX, true);
TEST_OVERFLOWS_TYPE(u32, u32, U32_MAX, false);
TEST_OVERFLOWS_TYPE(u32, s32, S32_MAX, false);
TEST_OVERFLOWS_TYPE(u32, s32, U32_MAX, true);
TEST_OVERFLOWS_TYPE(u32, s32, (u32)S32_MAX + 1, true);
#if BITS_PER_LONG == 64
TEST_OVERFLOWS_TYPE(u32, u64, U32_MAX, false);
TEST_OVERFLOWS_TYPE(u32, s64, U32_MAX, false);
#endif
TEST_OVERFLOWS_TYPE(s32, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(s32, u8, (s32)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s32, u8, S32_MAX, true);
TEST_OVERFLOWS_TYPE(s32, u8, -1, true);
TEST_OVERFLOWS_TYPE(s32, u8, S32_MIN, true);
TEST_OVERFLOWS_TYPE(s32, u16, U16_MAX, false);
TEST_OVERFLOWS_TYPE(s32, u16, (s32)U16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s32, u16, S32_MAX, true);
TEST_OVERFLOWS_TYPE(s32, u16, -1, true);
TEST_OVERFLOWS_TYPE(s32, u16, S32_MIN, true);
TEST_OVERFLOWS_TYPE(s32, u32, S32_MAX, false);
TEST_OVERFLOWS_TYPE(s32, u32, -1, true);
TEST_OVERFLOWS_TYPE(s32, u32, S32_MIN, true);
#if BITS_PER_LONG == 64
TEST_OVERFLOWS_TYPE(s32, u64, S32_MAX, false);
TEST_OVERFLOWS_TYPE(s32, u64, -1, true);
TEST_OVERFLOWS_TYPE(s32, u64, S32_MIN, true);
#endif
TEST_OVERFLOWS_TYPE(s32, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s32, s8, S8_MIN, false);
TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s32, s8, S32_MAX, true);
TEST_OVERFLOWS_TYPE(s32, s8, S32_MIN, true);
TEST_OVERFLOWS_TYPE(s32, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s32, s16, S16_MIN, false);
TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s32, s16, S32_MAX, true);
TEST_OVERFLOWS_TYPE(s32, s16, S32_MIN, true);
TEST_OVERFLOWS_TYPE(s32, s32, S32_MAX, false);
TEST_OVERFLOWS_TYPE(s32, s32, S32_MIN, false);
#if BITS_PER_LONG == 64
TEST_OVERFLOWS_TYPE(s32, s64, S32_MAX, false);
TEST_OVERFLOWS_TYPE(s32, s64, S32_MIN, false);
TEST_OVERFLOWS_TYPE(u64, u8, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(u64, u8, (u64)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, u16, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, u16, U16_MAX, false);
TEST_OVERFLOWS_TYPE(u64, u16, (u64)U16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, u32, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, u32, U32_MAX, false);
TEST_OVERFLOWS_TYPE(u64, u32, (u64)U32_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, u64, U64_MAX, false);
TEST_OVERFLOWS_TYPE(u64, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(u64, s8, (u64)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, s8, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(u64, s16, (u64)S16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, s16, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, s32, S32_MAX, false);
TEST_OVERFLOWS_TYPE(u64, s32, (u64)S32_MAX + 1, true);
TEST_OVERFLOWS_TYPE(u64, s32, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, s64, S64_MAX, false);
TEST_OVERFLOWS_TYPE(u64, s64, U64_MAX, true);
TEST_OVERFLOWS_TYPE(u64, s64, (u64)S64_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, u8, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, u8, S64_MIN, true);
TEST_OVERFLOWS_TYPE(s64, u8, -1, true);
TEST_OVERFLOWS_TYPE(s64, u8, U8_MAX, false);
TEST_OVERFLOWS_TYPE(s64, u8, (s64)U8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, u16, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, u16, S64_MIN, true);
TEST_OVERFLOWS_TYPE(s64, u16, -1, true);
TEST_OVERFLOWS_TYPE(s64, u16, U16_MAX, false);
TEST_OVERFLOWS_TYPE(s64, u16, (s64)U16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, u32, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, u32, S64_MIN, true);
TEST_OVERFLOWS_TYPE(s64, u32, -1, true);
TEST_OVERFLOWS_TYPE(s64, u32, U32_MAX, false);
TEST_OVERFLOWS_TYPE(s64, u32, (s64)U32_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, u64, S64_MAX, false);
TEST_OVERFLOWS_TYPE(s64, u64, S64_MIN, true);
TEST_OVERFLOWS_TYPE(s64, u64, -1, true);
TEST_OVERFLOWS_TYPE(s64, s8, S8_MAX, false);
TEST_OVERFLOWS_TYPE(s64, s8, S8_MIN, false);
TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s64, s8, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, s16, S16_MAX, false);
TEST_OVERFLOWS_TYPE(s64, s16, S16_MIN, false);
TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s64, s16, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, s32, S32_MAX, false);
TEST_OVERFLOWS_TYPE(s64, s32, S32_MIN, false);
TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MAX + 1, true);
TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MIN - 1, true);
TEST_OVERFLOWS_TYPE(s64, s32, S64_MAX, true);
TEST_OVERFLOWS_TYPE(s64, s64, S64_MAX, false);
TEST_OVERFLOWS_TYPE(s64, s64, S64_MIN, false);
#endif
/* Check for macro side-effects. */
var = INT_MAX - 1;
__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false);
__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false);
__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, true);
var = INT_MAX - 1;
__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false);
__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false);
__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, true);
kunit_info(test, "%d overflows_type() tests finished\n", count);
#undef TEST_OVERFLOWS_TYPE
#undef __TEST_OVERFLOWS_TYPE
}
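
Outside the test, the helper reads naturally at call sites. A hypothetical sketch (store_len() and its parameters are illustrative, not from this series):

	/* Reject a u64 length that cannot be represented in an int. */
	static int store_len(u64 len, int *out)
	{
		if (overflows_type(len, *out))
			return -ERANGE;
		*out = len;
		return 0;
	}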
static void same_type_test(struct kunit *test)
{
int count = 0;
int var;
#define TEST_SAME_TYPE(t1, t2, same) do { \
typeof(t1) __t1h = type_max(t1); \
typeof(t1) __t1l = type_min(t1); \
typeof(t2) __t2h = type_max(t2); \
typeof(t2) __t2l = type_min(t2); \
KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1h)); \
KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1l)); \
KUNIT_EXPECT_EQ(test, true, __same_type(__t1h, t1)); \
KUNIT_EXPECT_EQ(test, true, __same_type(__t1l, t1)); \
KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2h)); \
KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2l)); \
KUNIT_EXPECT_EQ(test, true, __same_type(__t2h, t2)); \
KUNIT_EXPECT_EQ(test, true, __same_type(__t2l, t2)); \
KUNIT_EXPECT_EQ(test, same, __same_type(t1, t2)); \
KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1h)); \
KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1l)); \
KUNIT_EXPECT_EQ(test, same, __same_type(__t1h, t2)); \
KUNIT_EXPECT_EQ(test, same, __same_type(__t1l, t2)); \
KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2h)); \
KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2l)); \
KUNIT_EXPECT_EQ(test, same, __same_type(__t2h, t1)); \
KUNIT_EXPECT_EQ(test, same, __same_type(__t2l, t1)); \
} while (0)
#if BITS_PER_LONG == 64
# define TEST_SAME_TYPE64(base, t, m) TEST_SAME_TYPE(base, t, m)
#else
# define TEST_SAME_TYPE64(base, t, m) do { } while (0)
#endif
#define TEST_TYPE_SETS(base, mu8, mu16, mu32, ms8, ms16, ms32, mu64, ms64) \
do { \
TEST_SAME_TYPE(base, u8, mu8); \
TEST_SAME_TYPE(base, u16, mu16); \
TEST_SAME_TYPE(base, u32, mu32); \
TEST_SAME_TYPE(base, s8, ms8); \
TEST_SAME_TYPE(base, s16, ms16); \
TEST_SAME_TYPE(base, s32, ms32); \
TEST_SAME_TYPE64(base, u64, mu64); \
TEST_SAME_TYPE64(base, s64, ms64); \
} while (0)
TEST_TYPE_SETS(u8, true, false, false, false, false, false, false, false);
TEST_TYPE_SETS(u16, false, true, false, false, false, false, false, false);
TEST_TYPE_SETS(u32, false, false, true, false, false, false, false, false);
TEST_TYPE_SETS(s8, false, false, false, true, false, false, false, false);
TEST_TYPE_SETS(s16, false, false, false, false, true, false, false, false);
TEST_TYPE_SETS(s32, false, false, false, false, false, true, false, false);
#if BITS_PER_LONG == 64
TEST_TYPE_SETS(u64, false, false, false, false, false, false, true, false);
TEST_TYPE_SETS(s64, false, false, false, false, false, false, false, true);
#endif
/* Check for macro side-effects. */
var = 4;
KUNIT_EXPECT_EQ(test, var, 4);
KUNIT_EXPECT_TRUE(test, __same_type(var++, int));
KUNIT_EXPECT_EQ(test, var, 4);
KUNIT_EXPECT_TRUE(test, __same_type(int, var++));
KUNIT_EXPECT_EQ(test, var, 4);
KUNIT_EXPECT_TRUE(test, __same_type(var++, var++));
KUNIT_EXPECT_EQ(test, var, 4);
kunit_info(test, "%d __same_type() tests finished\n", count);
#undef TEST_TYPE_SETS
#undef TEST_SAME_TYPE64
#undef TEST_SAME_TYPE
}
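
__same_type() is the long-standing wrapper around __builtin_types_compatible_p() over typeof() from <linux/compiler_types.h>, so all of the checks above fold at compile time. A two-variable illustration:

	int i;
	unsigned int u;

	BUILD_BUG_ON(!__same_type(i, int));	/* identical types */
	BUILD_BUG_ON(__same_type(i, u));	/* int and unsigned int differ */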
static void castable_to_type_test(struct kunit *test)
{
int count = 0;
#define TEST_CASTABLE_TO_TYPE(arg1, arg2, pass) do { \
bool __pass = castable_to_type(arg1, arg2); \
KUNIT_EXPECT_EQ_MSG(test, __pass, pass, \
"expected castable_to_type(" #arg1 ", " #arg2 ") to%s pass\n",\
pass ? "" : " not"); \
count++; \
} while (0)
TEST_CASTABLE_TO_TYPE(16, u8, true);
TEST_CASTABLE_TO_TYPE(16, u16, true);
TEST_CASTABLE_TO_TYPE(16, u32, true);
TEST_CASTABLE_TO_TYPE(16, s8, true);
TEST_CASTABLE_TO_TYPE(16, s16, true);
TEST_CASTABLE_TO_TYPE(16, s32, true);
TEST_CASTABLE_TO_TYPE(-16, s8, true);
TEST_CASTABLE_TO_TYPE(-16, s16, true);
TEST_CASTABLE_TO_TYPE(-16, s32, true);
#if BITS_PER_LONG == 64
TEST_CASTABLE_TO_TYPE(16, u64, true);
TEST_CASTABLE_TO_TYPE(-16, s64, true);
#endif
#define TEST_CASTABLE_TO_TYPE_VAR(width) do { \
u ## width u ## width ## var = 0; \
s ## width s ## width ## var = 0; \
\
/* Constant expressions that fit types. */ \
TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width, true); \
TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width, true); \
TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width ## var, true); \
TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width ## var, true); \
TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width, true); \
TEST_CASTABLE_TO_TYPE(type_min(s ## width), s ## width, true); \
TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width ## var, true); \
TEST_CASTABLE_TO_TYPE(type_min(u ## width), s ## width ## var, true); \
/* Constant expressions that do not fit types. */ \
TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width, false); \
TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width ## var, false); \
TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width, false); \
TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width ## var, false); \
/* Non-constant expression with mismatched type. */ \
TEST_CASTABLE_TO_TYPE(s ## width ## var, u ## width, false); \
TEST_CASTABLE_TO_TYPE(u ## width ## var, s ## width, false); \
} while (0)
#define TEST_CASTABLE_TO_TYPE_RANGE(width) do { \
unsigned long big = U ## width ## _MAX; \
signed long small = S ## width ## _MIN; \
u ## width u ## width ## var = 0; \
s ## width s ## width ## var = 0; \
\
/* Constant expression in range. */ \
TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width, true); \
TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width ## var, true); \
TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width, true); \
TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width ## var, true); \
/* Constant expression out of range. */ \
TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width, false); \
TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width ## var, false); \
TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width, false); \
TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width ## var, false); \
/* Non-constant expression with mismatched type. */ \
TEST_CASTABLE_TO_TYPE(big, u ## width, false); \
TEST_CASTABLE_TO_TYPE(big, u ## width ## var, false); \
TEST_CASTABLE_TO_TYPE(small, s ## width, false); \
TEST_CASTABLE_TO_TYPE(small, s ## width ## var, false); \
} while (0)
TEST_CASTABLE_TO_TYPE_VAR(8);
TEST_CASTABLE_TO_TYPE_VAR(16);
TEST_CASTABLE_TO_TYPE_VAR(32);
#if BITS_PER_LONG == 64
TEST_CASTABLE_TO_TYPE_VAR(64);
#endif
TEST_CASTABLE_TO_TYPE_RANGE(8);
TEST_CASTABLE_TO_TYPE_RANGE(16);
#if BITS_PER_LONG == 64
TEST_CASTABLE_TO_TYPE_RANGE(32);
#endif
kunit_info(test, "%d castable_to_type() tests finished\n", count);
#undef TEST_CASTABLE_TO_TYPE_RANGE
#undef TEST_CASTABLE_TO_TYPE_VAR
#undef TEST_CASTABLE_TO_TYPE
}
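
Condensing the cases the macros above walk through: castable_to_type(expr, target) accepts a constant expression that fits the target's range, and a non-constant expression only when its type already matches. A small illustration (variables hypothetical):

	u8 dst;
	int nonconst = 16;

	castable_to_type(16, dst);		/* true: constant fits u8 */
	castable_to_type(300, dst);		/* false: constant out of u8 range */
	castable_to_type(nonconst, dst);	/* false: non-constant int is not a u8 */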
static struct kunit_case overflow_test_cases[] = {
KUNIT_CASE(u8_u8__u8_overflow_test),
KUNIT_CASE(s8_s8__s8_overflow_test),
@@ -755,6 +1133,9 @@ static struct kunit_case overflow_test_cases[] = {
KUNIT_CASE(shift_nonsense_test),
KUNIT_CASE(overflow_allocation_test),
KUNIT_CASE(overflow_size_helpers_test),
KUNIT_CASE(overflows_type_test),
KUNIT_CASE(same_type_test),
KUNIT_CASE(castable_to_type_test),
{}
};


@@ -13,6 +13,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <kunit/test.h>
#include <linux/siphash.h>
#include <linux/kernel.h>
#include <linux/string.h>
@@ -109,114 +110,88 @@ static const u32 test_vectors_hsiphash[64] = {
};
#endif
static int __init siphash_test_init(void)
#define chk(hash, vector, fmt...) \
KUNIT_EXPECT_EQ_MSG(test, hash, vector, fmt)
static void siphash_test(struct kunit *test)
{
u8 in[64] __aligned(SIPHASH_ALIGNMENT);
u8 in_unaligned[65] __aligned(SIPHASH_ALIGNMENT);
u8 i;
int ret = 0;
for (i = 0; i < 64; ++i) {
in[i] = i;
in_unaligned[i + 1] = i;
if (siphash(in, i, &test_key_siphash) !=
test_vectors_siphash[i]) {
pr_info("siphash self-test aligned %u: FAIL\n", i + 1);
ret = -EINVAL;
}
if (siphash(in_unaligned + 1, i, &test_key_siphash) !=
test_vectors_siphash[i]) {
pr_info("siphash self-test unaligned %u: FAIL\n", i + 1);
ret = -EINVAL;
}
if (hsiphash(in, i, &test_key_hsiphash) !=
test_vectors_hsiphash[i]) {
pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1);
ret = -EINVAL;
}
if (hsiphash(in_unaligned + 1, i, &test_key_hsiphash) !=
test_vectors_hsiphash[i]) {
pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1);
ret = -EINVAL;
}
chk(siphash(in, i, &test_key_siphash),
test_vectors_siphash[i],
"siphash self-test aligned %u: FAIL", i + 1);
chk(siphash(in_unaligned + 1, i, &test_key_siphash),
test_vectors_siphash[i],
"siphash self-test unaligned %u: FAIL", i + 1);
chk(hsiphash(in, i, &test_key_hsiphash),
test_vectors_hsiphash[i],
"hsiphash self-test aligned %u: FAIL", i + 1);
chk(hsiphash(in_unaligned + 1, i, &test_key_hsiphash),
test_vectors_hsiphash[i],
"hsiphash self-test unaligned %u: FAIL", i + 1);
}
if (siphash_1u64(0x0706050403020100ULL, &test_key_siphash) !=
test_vectors_siphash[8]) {
pr_info("siphash self-test 1u64: FAIL\n");
ret = -EINVAL;
}
if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
&test_key_siphash) != test_vectors_siphash[16]) {
pr_info("siphash self-test 2u64: FAIL\n");
ret = -EINVAL;
}
if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
0x1716151413121110ULL, &test_key_siphash) !=
test_vectors_siphash[24]) {
pr_info("siphash self-test 3u64: FAIL\n");
ret = -EINVAL;
}
if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
chk(siphash_1u64(0x0706050403020100ULL, &test_key_siphash),
test_vectors_siphash[8],
"siphash self-test 1u64: FAIL");
chk(siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
&test_key_siphash),
test_vectors_siphash[16],
"siphash self-test 2u64: FAIL");
chk(siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
0x1716151413121110ULL, &test_key_siphash),
test_vectors_siphash[24],
"siphash self-test 3u64: FAIL");
chk(siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL,
&test_key_siphash) != test_vectors_siphash[32]) {
pr_info("siphash self-test 4u64: FAIL\n");
ret = -EINVAL;
}
if (siphash_1u32(0x03020100U, &test_key_siphash) !=
test_vectors_siphash[4]) {
pr_info("siphash self-test 1u32: FAIL\n");
ret = -EINVAL;
}
if (siphash_2u32(0x03020100U, 0x07060504U, &test_key_siphash) !=
test_vectors_siphash[8]) {
pr_info("siphash self-test 2u32: FAIL\n");
ret = -EINVAL;
}
if (siphash_3u32(0x03020100U, 0x07060504U,
0x0b0a0908U, &test_key_siphash) !=
test_vectors_siphash[12]) {
pr_info("siphash self-test 3u32: FAIL\n");
ret = -EINVAL;
}
if (siphash_4u32(0x03020100U, 0x07060504U,
0x0b0a0908U, 0x0f0e0d0cU, &test_key_siphash) !=
test_vectors_siphash[16]) {
pr_info("siphash self-test 4u32: FAIL\n");
ret = -EINVAL;
}
if (hsiphash_1u32(0x03020100U, &test_key_hsiphash) !=
test_vectors_hsiphash[4]) {
pr_info("hsiphash self-test 1u32: FAIL\n");
ret = -EINVAL;
}
if (hsiphash_2u32(0x03020100U, 0x07060504U, &test_key_hsiphash) !=
test_vectors_hsiphash[8]) {
pr_info("hsiphash self-test 2u32: FAIL\n");
ret = -EINVAL;
}
if (hsiphash_3u32(0x03020100U, 0x07060504U,
0x0b0a0908U, &test_key_hsiphash) !=
test_vectors_hsiphash[12]) {
pr_info("hsiphash self-test 3u32: FAIL\n");
ret = -EINVAL;
}
if (hsiphash_4u32(0x03020100U, 0x07060504U,
0x0b0a0908U, 0x0f0e0d0cU, &test_key_hsiphash) !=
test_vectors_hsiphash[16]) {
pr_info("hsiphash self-test 4u32: FAIL\n");
ret = -EINVAL;
}
if (!ret)
pr_info("self-tests: pass\n");
return ret;
&test_key_siphash),
test_vectors_siphash[32],
"siphash self-test 4u64: FAIL");
chk(siphash_1u32(0x03020100U, &test_key_siphash),
test_vectors_siphash[4],
"siphash self-test 1u32: FAIL");
chk(siphash_2u32(0x03020100U, 0x07060504U, &test_key_siphash),
test_vectors_siphash[8],
"siphash self-test 2u32: FAIL");
chk(siphash_3u32(0x03020100U, 0x07060504U,
0x0b0a0908U, &test_key_siphash),
test_vectors_siphash[12],
"siphash self-test 3u32: FAIL");
chk(siphash_4u32(0x03020100U, 0x07060504U,
0x0b0a0908U, 0x0f0e0d0cU, &test_key_siphash),
test_vectors_siphash[16],
"siphash self-test 4u32: FAIL");
chk(hsiphash_1u32(0x03020100U, &test_key_hsiphash),
test_vectors_hsiphash[4],
"hsiphash self-test 1u32: FAIL");
chk(hsiphash_2u32(0x03020100U, 0x07060504U, &test_key_hsiphash),
test_vectors_hsiphash[8],
"hsiphash self-test 2u32: FAIL");
chk(hsiphash_3u32(0x03020100U, 0x07060504U,
0x0b0a0908U, &test_key_hsiphash),
test_vectors_hsiphash[12],
"hsiphash self-test 3u32: FAIL");
chk(hsiphash_4u32(0x03020100U, 0x07060504U,
0x0b0a0908U, 0x0f0e0d0cU, &test_key_hsiphash),
test_vectors_hsiphash[16],
"hsiphash self-test 4u32: FAIL");
}
static void __exit siphash_test_exit(void)
{
}
static struct kunit_case siphash_test_cases[] = {
KUNIT_CASE(siphash_test),
{}
};
module_init(siphash_test_init);
module_exit(siphash_test_exit);
static struct kunit_suite siphash_test_suite = {
.name = "siphash",
.test_cases = siphash_test_cases,
};
kunit_test_suite(siphash_test_suite);
MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
MODULE_LICENSE("Dual BSD/GPL");
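
With the conversion, the self-test runs under the standard KUnit tooling rather than as a boot-time module check. One way to run it (assuming the suite's Kconfig option, CONFIG_SIPHASH_KUNIT_TEST here, is enabled in the kunitconfig):

	./tools/testing/kunit/kunit.py run 'siphash'

The same invocation pattern applies to the memcpy, overflow, and strscpy suites touched elsewhere in this series.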


@@ -76,11 +76,6 @@ EXPORT_SYMBOL(strcasecmp);
#endif
#ifndef __HAVE_ARCH_STRCPY
/**
* strcpy - Copy a %NUL terminated string
* @dest: Where to copy the string to
* @src: Where to copy the string from
*/
char *strcpy(char *dest, const char *src)
{
char *tmp = dest;
@@ -93,19 +88,6 @@ EXPORT_SYMBOL(strcpy);
#endif
#ifndef __HAVE_ARCH_STRNCPY
/**
* strncpy - Copy a length-limited, C-string
* @dest: Where to copy the string to
* @src: Where to copy the string from
* @count: The maximum number of bytes to copy
*
* The result is not %NUL-terminated if the source exceeds
* @count bytes.
*
* In the case where the length of @src is less than that of
* count, the remainder of @dest will be padded with %NUL.
*
*/
char *strncpy(char *dest, const char *src, size_t count)
{
char *tmp = dest;
@@ -122,17 +104,6 @@ EXPORT_SYMBOL(strncpy);
#endif
#ifndef __HAVE_ARCH_STRLCPY
/**
* strlcpy - Copy a C-string into a sized buffer
* @dest: Where to copy the string to
* @src: Where to copy the string from
* @size: size of destination buffer
*
* Compatible with ``*BSD``: the result is always a valid
* NUL-terminated string that fits in the buffer (unless,
* of course, the buffer size is zero). It does not pad
* out the result like strncpy() does.
*/
size_t strlcpy(char *dest, const char *src, size_t size)
{
size_t ret = strlen(src);
@@ -148,30 +119,6 @@ EXPORT_SYMBOL(strlcpy);
#endif
#ifndef __HAVE_ARCH_STRSCPY
/**
* strscpy - Copy a C-string into a sized buffer
* @dest: Where to copy the string to
* @src: Where to copy the string from
* @count: Size of destination buffer
*
* Copy the string, or as much of it as fits, into the dest buffer. The
* behavior is undefined if the string buffers overlap. The destination
* buffer is always NUL terminated, unless it's zero-sized.
*
* Preferred to strlcpy() since the API doesn't require reading memory
* from the src string beyond the specified "count" bytes, and since
* the return value is easier to error-check than strlcpy()'s.
* In addition, the implementation is robust to the string changing out
* from underneath it, unlike the current strlcpy() implementation.
*
* Preferred to strncpy() since it always returns a valid string, and
* doesn't unnecessarily force the tail of the destination buffer to be
* zeroed. If zeroing is desired please use strscpy_pad().
*
* Returns:
* * The number of characters copied (not including the trailing %NUL)
* * -E2BIG if count is 0 or @src was truncated.
*/
ssize_t strscpy(char *dest, const char *src, size_t count)
{
const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
@@ -266,11 +213,6 @@ char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
EXPORT_SYMBOL(stpcpy);
#ifndef __HAVE_ARCH_STRCAT
/**
* strcat - Append one %NUL-terminated string to another
* @dest: The string to be appended to
* @src: The string to append to it
*/
char *strcat(char *dest, const char *src)
{
char *tmp = dest;
@@ -285,15 +227,6 @@ EXPORT_SYMBOL(strcat);
#endif
#ifndef __HAVE_ARCH_STRNCAT
/**
* strncat - Append a length-limited, C-string to another
* @dest: The string to be appended to
* @src: The string to append to it
* @count: The maximum numbers of bytes to copy
*
* Note that in contrast to strncpy(), strncat() ensures the result is
* terminated.
*/
char *strncat(char *dest, const char *src, size_t count)
{
char *tmp = dest;
@@ -314,12 +247,6 @@ EXPORT_SYMBOL(strncat);
#endif
#ifndef __HAVE_ARCH_STRLCAT
/**
* strlcat - Append a length-limited, C-string to another
* @dest: The string to be appended to
* @src: The string to append to it
* @count: The size of the destination buffer.
*/
size_t strlcat(char *dest, const char *src, size_t count)
{
size_t dsize = strlen(dest);
@@ -484,10 +411,6 @@ EXPORT_SYMBOL(strnchr);
#endif
#ifndef __HAVE_ARCH_STRLEN
/**
* strlen - Find the length of a string
* @s: The string to be sized
*/
size_t strlen(const char *s)
{
const char *sc;
@@ -500,11 +423,6 @@ EXPORT_SYMBOL(strlen);
#endif
#ifndef __HAVE_ARCH_STRNLEN
/**
* strnlen - Find the length of a length-limited string
* @s: The string to be sized
* @count: The maximum number of bytes to search
*/
size_t strnlen(const char *s, size_t count)
{
const char *sc;

lib/strscpy_kunit.c (new file)

@@ -0,0 +1,142 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Kernel module for testing 'strscpy' family of functions.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <kunit/test.h>
#include <linux/string.h>
/*
* tc() - Run a specific test case.
* @src: Source string, argument to strscpy_pad()
* @count: Size of destination buffer, argument to strscpy_pad()
* @expected: Expected return value from call to strscpy_pad()
* @chars: Number of characters from the src string expected to be
* written to the dst buffer.
* @terminator: 1 if there should be a terminating null byte, 0 otherwise.
* @pad: Number of pad characters expected (in the tail of dst buffer).
* (@pad does not include the null terminator byte.)
*
* Calls strscpy_pad() and verifies the return value and state of the
* destination buffer after the call returns.
*/
static void tc(struct kunit *test, char *src, int count, int expected,
int chars, int terminator, int pad)
{
int nr_bytes_poison;
int max_expected;
int max_count;
int written;
char buf[6];
int index, i;
const char POISON = 'z';
KUNIT_ASSERT_TRUE_MSG(test, src != NULL,
"null source string not supported");
memset(buf, POISON, sizeof(buf));
/* Future proofing test suite, validate args */
max_count = sizeof(buf) - 2; /* Space for null and to verify overflow */
max_expected = count - 1; /* Space for the null */
KUNIT_ASSERT_LE_MSG(test, count, max_count,
"count (%d) is too big (%d) ... aborting", count, max_count);
KUNIT_EXPECT_LE_MSG(test, expected, max_expected,
"expected (%d) is bigger than can possibly be returned (%d)",
expected, max_expected);
written = strscpy_pad(buf, src, count);
KUNIT_ASSERT_EQ(test, written, expected);
if (count && written == -E2BIG) {
KUNIT_ASSERT_EQ_MSG(test, 0, strncmp(buf, src, count - 1),
"buffer state invalid for -E2BIG");
KUNIT_ASSERT_EQ_MSG(test, buf[count - 1], '\0',
"too big string is not null terminated correctly");
}
for (i = 0; i < chars; i++)
KUNIT_ASSERT_EQ_MSG(test, buf[i], src[i],
"buf[i]==%c != src[i]==%c", buf[i], src[i]);
if (terminator)
KUNIT_ASSERT_EQ_MSG(test, buf[count - 1], '\0',
"string is not null terminated correctly");
for (i = 0; i < pad; i++) {
index = chars + terminator + i;
KUNIT_ASSERT_EQ_MSG(test, buf[index], '\0',
"padding missing at index: %d", i);
}
nr_bytes_poison = sizeof(buf) - chars - terminator - pad;
for (i = 0; i < nr_bytes_poison; i++) {
index = sizeof(buf) - 1 - i; /* Check from the end back */
KUNIT_ASSERT_EQ_MSG(test, buf[index], POISON,
"poison value missing at index: %d", i);
}
}
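
To make the argument mapping concrete, the checks above imply the following buffer state for one mid-table case, tc(test, "ab", 4, 2, 2, 1, 1), on the 6-byte poisoned buffer:

	/*
	 * buf[0] buf[1] buf[2] buf[3] buf[4] buf[5]
	 *  'a'    'b'   '\0'   '\0'   'z'    'z'
	 *  chars         term   pad   poison (untouched)
	 *
	 * strscpy_pad() returns 2: two characters copied, NUL-terminated,
	 * one byte of zero padding, and the tail keeps the poison value.
	 */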
static void strscpy_test(struct kunit *test)
{
char dest[8];
/*
* tc() uses a destination buffer of size 6 and needs at
* least 2 characters spare (one for null and one to check for
* overflow). This means we should only call tc() with
* strings up to a maximum of 4 characters long and 'count'
* should not exceed 4. To test with longer strings increase
* the buffer size in tc().
*/
/* tc(test, src, count, expected, chars, terminator, pad) */
tc(test, "a", 0, -E2BIG, 0, 0, 0);
tc(test, "", 0, -E2BIG, 0, 0, 0);
tc(test, "a", 1, -E2BIG, 0, 1, 0);
tc(test, "", 1, 0, 0, 1, 0);
tc(test, "ab", 2, -E2BIG, 1, 1, 0);
tc(test, "a", 2, 1, 1, 1, 0);
tc(test, "", 2, 0, 0, 1, 1);
tc(test, "abc", 3, -E2BIG, 2, 1, 0);
tc(test, "ab", 3, 2, 2, 1, 0);
tc(test, "a", 3, 1, 1, 1, 1);
tc(test, "", 3, 0, 0, 1, 2);
tc(test, "abcd", 4, -E2BIG, 3, 1, 0);
tc(test, "abc", 4, 3, 3, 1, 0);
tc(test, "ab", 4, 2, 2, 1, 1);
tc(test, "a", 4, 1, 1, 1, 2);
tc(test, "", 4, 0, 0, 1, 3);
/* Compile-time-known source strings. */
KUNIT_EXPECT_EQ(test, strscpy(dest, "", ARRAY_SIZE(dest)), 0);
KUNIT_EXPECT_EQ(test, strscpy(dest, "", 3), 0);
KUNIT_EXPECT_EQ(test, strscpy(dest, "", 1), 0);
KUNIT_EXPECT_EQ(test, strscpy(dest, "", 0), -E2BIG);
KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", ARRAY_SIZE(dest)), 5);
KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 3), -E2BIG);
KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 1), -E2BIG);
KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 0), -E2BIG);
KUNIT_EXPECT_EQ(test, strscpy(dest, "This is too long", ARRAY_SIZE(dest)), -E2BIG);
}
static struct kunit_case strscpy_test_cases[] = {
KUNIT_CASE(strscpy_test),
{}
};
static struct kunit_suite strscpy_test_suite = {
.name = "strscpy",
.test_cases = strscpy_test_cases,
};
kunit_test_suite(strscpy_test_suite);
MODULE_AUTHOR("Tobin C. Harding <tobin@kernel.org>");
MODULE_LICENSE("GPL");


@@ -1,150 +0,0 @@
// SPDX-License-Identifier: GPL-2.0+
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/string.h>
#include "../tools/testing/selftests/kselftest_module.h"
/*
* Kernel module for testing 'strscpy' family of functions.
*/
KSTM_MODULE_GLOBALS();
/*
* tc() - Run a specific test case.
* @src: Source string, argument to strscpy_pad()
* @count: Size of destination buffer, argument to strscpy_pad()
* @expected: Expected return value from call to strscpy_pad()
* @terminator: 1 if there should be a terminating null byte 0 otherwise.
* @chars: Number of characters from the src string expected to be
* written to the dst buffer.
* @pad: Number of pad characters expected (in the tail of dst buffer).
* (@pad does not include the null terminator byte.)
*
* Calls strscpy_pad() and verifies the return value and state of the
* destination buffer after the call returns.
*/
static int __init tc(char *src, int count, int expected,
int chars, int terminator, int pad)
{
int nr_bytes_poison;
int max_expected;
int max_count;
int written;
char buf[6];
int index, i;
const char POISON = 'z';
total_tests++;
if (!src) {
pr_err("null source string not supported\n");
return -1;
}
memset(buf, POISON, sizeof(buf));
/* Future proofing test suite, validate args */
max_count = sizeof(buf) - 2; /* Space for null and to verify overflow */
max_expected = count - 1; /* Space for the null */
if (count > max_count) {
pr_err("count (%d) is too big (%d) ... aborting", count, max_count);
return -1;
}
if (expected > max_expected) {
pr_warn("expected (%d) is bigger than can possibly be returned (%d)",
expected, max_expected);
}
written = strscpy_pad(buf, src, count);
if ((written) != (expected)) {
pr_err("%d != %d (written, expected)\n", written, expected);
goto fail;
}
if (count && written == -E2BIG) {
if (strncmp(buf, src, count - 1) != 0) {
pr_err("buffer state invalid for -E2BIG\n");
goto fail;
}
if (buf[count - 1] != '\0') {
pr_err("too big string is not null terminated correctly\n");
goto fail;
}
}
for (i = 0; i < chars; i++) {
if (buf[i] != src[i]) {
pr_err("buf[i]==%c != src[i]==%c\n", buf[i], src[i]);
goto fail;
}
}
if (terminator) {
if (buf[count - 1] != '\0') {
pr_err("string is not null terminated correctly\n");
goto fail;
}
}
for (i = 0; i < pad; i++) {
index = chars + terminator + i;
if (buf[index] != '\0') {
pr_err("padding missing at index: %d\n", i);
goto fail;
}
}
nr_bytes_poison = sizeof(buf) - chars - terminator - pad;
for (i = 0; i < nr_bytes_poison; i++) {
index = sizeof(buf) - 1 - i; /* Check from the end back */
if (buf[index] != POISON) {
pr_err("poison value missing at index: %d\n", i);
goto fail;
}
}
return 0;
fail:
failed_tests++;
return -1;
}
static void __init selftest(void)
{
/*
* tc() uses a destination buffer of size 6 and needs at
* least 2 characters spare (one for null and one to check for
* overflow). This means we should only call tc() with
* strings up to a maximum of 4 characters long and 'count'
* should not exceed 4. To test with longer strings increase
* the buffer size in tc().
*/
/* tc(src, count, expected, chars, terminator, pad) */
KSTM_CHECK_ZERO(tc("a", 0, -E2BIG, 0, 0, 0));
KSTM_CHECK_ZERO(tc("", 0, -E2BIG, 0, 0, 0));
KSTM_CHECK_ZERO(tc("a", 1, -E2BIG, 0, 1, 0));
KSTM_CHECK_ZERO(tc("", 1, 0, 0, 1, 0));
KSTM_CHECK_ZERO(tc("ab", 2, -E2BIG, 1, 1, 0));
KSTM_CHECK_ZERO(tc("a", 2, 1, 1, 1, 0));
KSTM_CHECK_ZERO(tc("", 2, 0, 0, 1, 1));
KSTM_CHECK_ZERO(tc("abc", 3, -E2BIG, 2, 1, 0));
KSTM_CHECK_ZERO(tc("ab", 3, 2, 2, 1, 0));
KSTM_CHECK_ZERO(tc("a", 3, 1, 1, 1, 1));
KSTM_CHECK_ZERO(tc("", 3, 0, 0, 1, 2));
KSTM_CHECK_ZERO(tc("abcd", 4, -E2BIG, 3, 1, 0));
KSTM_CHECK_ZERO(tc("abc", 4, 3, 3, 1, 0));
KSTM_CHECK_ZERO(tc("ab", 4, 2, 2, 1, 1));
KSTM_CHECK_ZERO(tc("a", 4, 1, 1, 1, 2));
KSTM_CHECK_ZERO(tc("", 4, 0, 0, 1, 3));
}
KSTM_MODULE_LOADERS(test_strscpy);
MODULE_AUTHOR("Tobin C. Harding <tobin@kernel.org>");
MODULE_LICENSE("GPL");


@@ -154,8 +154,7 @@ static void ubsan_epilogue(void)
current->in_ubsan--;
if (panic_on_warn)
panic("panic_on_warn set ...\n");
check_panic_on_warn("UBSAN");
}
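
check_panic_on_warn() is the helper all of these call sites now share; its body lands in kernel/panic.c earlier in this series and is not shown in this hunk. Roughly sketched (the exact warn_limit bookkeeping is elided here):

	void check_panic_on_warn(const char *origin)
	{
		if (panic_on_warn)
			panic("%s: panic_on_warn set ...\n", origin);
		/* ... plus the new warn_limit accounting from this series ... */
	}

The conversion means each subsystem stops open-coding the panic_on_warn check and picks up warn_limit handling for free.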
void __ubsan_handle_divrem_overflow(void *_data, void *lhs, void *rhs)


@@ -825,23 +825,30 @@ static void kasan_global_oob_left(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}
/* Check that ksize() makes the whole object accessible. */
/* Check that ksize() does NOT unpoison whole object. */
static void ksize_unpoisons_memory(struct kunit *test)
{
char *ptr;
size_t size = 123, real_size;
size_t size = 128 - KASAN_GRANULE_SIZE - 5;
size_t real_size;
ptr = kmalloc(size, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
real_size = ksize(ptr);
KUNIT_EXPECT_GT(test, real_size, size);
OPTIMIZER_HIDE_VAR(ptr);
/* This access shouldn't trigger a KASAN report. */
ptr[size] = 'x';
/* These accesses shouldn't trigger a KASAN report. */
ptr[0] = 'x';
ptr[size - 1] = 'x';
/* This one must. */
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
/* These must trigger a KASAN report. */
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
kfree(ptr);
}


@@ -186,8 +186,8 @@ static void end_report(unsigned long *flags, void *addr)
(unsigned long)addr);
pr_err("==================================================================\n");
spin_unlock_irqrestore(&report_lock, *flags);
if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
panic("panic_on_warn set ...\n");
if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
check_panic_on_warn("KASAN");
if (kasan_arg_fault == KASAN_ARG_FAULT_PANIC)
panic("kasan.fault=panic set ...\n");
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);


@@ -273,8 +273,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
lockdep_on();
if (panic_on_warn)
panic("panic_on_warn set ...\n");
check_panic_on_warn("KFENCE");
/* We encountered a memory safety error, taint the kernel! */
add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);


@@ -1348,11 +1348,11 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
void *ret;
size_t ks;
/* Don't use instrumented ksize to allow precise KASAN poisoning. */
/* Check for double-free before calling ksize. */
if (likely(!ZERO_OR_NULL_PTR(p))) {
if (!kasan_check_byte(p))
return NULL;
ks = kfence_ksize(p) ?: __ksize(p);
ks = ksize(p);
} else
ks = 0;
@@ -1420,21 +1420,21 @@ void kfree_sensitive(const void *p)
void *mem = (void *)p;
ks = ksize(mem);
if (ks)
if (ks) {
kasan_unpoison_range(mem, ks);
memzero_explicit(mem, ks);
}
kfree(mem);
}
EXPORT_SYMBOL(kfree_sensitive);
size_t ksize(const void *objp)
{
size_t size;
/*
* We need to first check that the pointer to the object is valid, and
* only then unpoison the memory. The report printed from ksize() is
* more useful, then when it's printed later when the behaviour could
* be undefined due to a potential use-after-free or double-free.
* We need to first check that the pointer to the object is valid.
* The KASAN report printed from ksize() is more useful than when
* it's printed later, when the behaviour could be undefined due to
* a potential use-after-free or double-free.
*
* We use kasan_check_byte(), which is supported for the hardware
* tag-based KASAN mode, unlike kasan_check_read/write().
@@ -1448,13 +1448,7 @@ size_t ksize(const void *objp)
if (unlikely(ZERO_OR_NULL_PTR(objp)) || !kasan_check_byte(objp))
return 0;
size = kfence_ksize(objp) ?: __ksize(objp);
/*
* We assume that ksize callers could use whole allocated area,
* so we need to unpoison this area.
*/
kasan_unpoison_range(objp, size);
return size;
return kfence_ksize(objp) ?: __ksize(objp);
}
EXPORT_SYMBOL(ksize);
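
Since ksize() no longer unpoisons trailing slack, a caller that wants to use the rounded-up space has to request it at allocation time instead. A minimal sketch of the replacement pattern (identifiers illustrative):

	size_t want = kmalloc_size_roundup(count);
	char *buf = kmalloc(want, GFP_KERNEL);

	if (buf) {
		/* All 'want' bytes belong to the object: KASAN and
		 * FORTIFY track the full size, and ksize(buf) reports it. */
	}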


@@ -337,7 +337,7 @@ static int __init init_dns_resolver(void)
* this is used to prevent malicious redirections from being installed
* with add_key().
*/
cred = prepare_kernel_cred(NULL);
cred = prepare_kernel_cred(&init_task);
if (!cred)
return -ENOMEM;


@@ -1461,6 +1461,8 @@ sub create_parameterlist($$$$) {
foreach my $arg (split($splitter, $args)) {
# strip comments
$arg =~ s/\/\*.*\*\///;
# ignore argument attributes
$arg =~ s/\sPOS0?\s/ /;
# strip leading/trailing spaces
$arg =~ s/^\s*//;
$arg =~ s/\s*$//;
@@ -1670,6 +1672,7 @@ sub dump_function($$) {
$prototype =~ s/^__inline +//;
$prototype =~ s/^__always_inline +//;
$prototype =~ s/^noinline +//;
$prototype =~ s/^__FORTIFY_INLINE +//;
$prototype =~ s/__init +//;
$prototype =~ s/__init_or_module +//;
$prototype =~ s/__deprecated +//;
@@ -1679,7 +1682,8 @@
$prototype =~ s/__weak +//;
$prototype =~ s/__sched +//;
$prototype =~ s/__printf\s*\(\s*\d*\s*,\s*\d*\s*\) +//;
$prototype =~ s/__alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +//;
$prototype =~ s/__(?:re)?alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +//;
$prototype =~ s/__diagnose_as\s*\(\s*\S+\s*(?:,\s*\d+\s*)*\) +//;
my $define = $prototype =~ s/^#\s*define\s+//; #ak added
$prototype =~ s/__attribute_const__ +//;
$prototype =~ s/__attribute__\s*\(\(