Commit Graph

154 Commits

Kees Cook
cf00764747 ARM: ptrace: Restore syscall restart tracing
Since commit 4e57a4ddf6 ("ARM: 9107/1: syscall: always store
thread_info->abi_syscall"), the seccomp selftests "syscall_restart" has
been broken. This was caused by the restart syscall not being stored to
"abi_syscall" during restart setup before branching to the "local_restart"
label. Tracers would see the wrong syscall, and scno would get overwritten
while returning from the TIF_WORK path. Add the missing store.

Cc: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@kernel.org>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Fixes: 4e57a4ddf6 ("ARM: 9107/1: syscall: always store thread_info->abi_syscall")
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20230810195422.2304827-1-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-16 13:58:49 -07:00
Linus Walleij
a9ff696160 ARM: mm: Make virt_to_pfn() a static inline
Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit, and exposes any misuse of the
macro virt_to_pfn() acting polymorphically and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and this is defined in
<asm/page.h>, so this must be included *before* <asm/memory.h>.

The use of macros was obscuring the unclear inclusion order here,
as the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.

The naive solution to include <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places,
however it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.

Put an inclusion guard in place making it impossible to include
<asm/memory.h> explicitly.
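
As a sketch of the shape of the change (simplified, not the exact
upstream patch), the old polymorphic macro

  #define virt_to_pfn(kaddr)	__phys_to_pfn(virt_to_phys(kaddr))

becomes a strongly typed static inline:

  static inline unsigned long virt_to_pfn(const void *p)
  {
          return __phys_to_pfn(virt_to_phys(p));
  }

With the inline, passing anything that does not convert to a
(const void *) now produces a compiler warning instead of being
silently accepted.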

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2023-05-29 11:27:08 +02:00
Arnd Bergmann
b91a69d162 ARM: iop32x: remove the platform
This was marked as unused in 5.19 and can now be removed.

Cc: Lennert Buytenhek <kernel@wantstofly.org>
Cc: Martin Michlmayr <tbm@cyrius.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Wolfram Sang <wsa@kernel.org> # for I2C
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-01-10 23:10:27 +01:00
Linus Torvalds
7d9d077c78 RCU pull request for v5.20 (or whatever)
This pull request contains the following branches:
 
 doc.2022.06.21a: Documentation updates.
 
 fixes.2022.07.19a: Miscellaneous fixes.
 
 nocb.2022.07.19a: Callback-offload updates, perhaps most notably a new
 	RCU_NOCB_CPU_DEFAULT_ALL Kconfig option that causes all CPUs to
 	be offloaded at boot time, regardless of kernel boot parameters.
 	This is useful to battery-powered systems such as ChromeOS
 	and Android.  In addition, a new RCU_NOCB_CPU_CB_BOOST kernel
 	boot parameter prevents offloaded callbacks from interfering
 	with real-time workloads and with energy-efficiency mechanisms.
 
 poll.2022.07.21a: Polled grace-period updates, perhaps most notably
 	making these APIs account for both normal and expedited grace
 	periods.
 
 rcu-tasks.2022.06.21a: Tasks RCU updates, perhaps most notably reducing
 	the CPU overhead of RCU tasks trace grace periods by more than
 	a factor of two on a system with 15,000 tasks.	The reduction
 	is expected to increase with the number of tasks, so it seems
 	reasonable to hypothesize that a system with 150,000 tasks might
 	see a 20-fold reduction in CPU overhead.
 
 torture.2022.06.21a: Torture-test updates.
 
 ctxt.2022.07.05a: Updates that merge RCU's dyntick-idle tracking into
 	context tracking, thus reducing the overhead of transitioning to
 	kernel mode from either idle or nohz_full userspace execution
 	for kernels that track context independently of RCU.  This is
 	expected to be helpful primarily for kernels built with
 	CONFIG_NO_HZ_FULL=y.

Merge tag 'rcu.2022.07.26a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull RCU updates from Paul McKenney:

 - Documentation updates

 - Miscellaneous fixes

 - Callback-offload updates, perhaps most notably a new
   RCU_NOCB_CPU_DEFAULT_ALL Kconfig option that causes all CPUs to be
   offloaded at boot time, regardless of kernel boot parameters.

   This is useful to battery-powered systems such as ChromeOS and
   Android. In addition, a new RCU_NOCB_CPU_CB_BOOST kernel boot
   parameter prevents offloaded callbacks from interfering with
   real-time workloads and with energy-efficiency mechanisms

 - Polled grace-period updates, perhaps most notably making these APIs
   account for both normal and expedited grace periods (see the usage
   sketch after this list)

 - Tasks RCU updates, perhaps most notably reducing the CPU overhead of
   RCU tasks trace grace periods by more than a factor of two on a
   system with 15,000 tasks.

   The reduction is expected to increase with the number of tasks, so it
   seems reasonable to hypothesize that a system with 150,000 tasks
   might see a 20-fold reduction in CPU overhead

 - Torture-test updates

 - Updates that merge RCU's dyntick-idle tracking into context tracking,
   thus reducing the overhead of transitioning to kernel mode from
   either idle or nohz_full userspace execution for kernels that track
   context independently of RCU.

   This is expected to be helpful primarily for kernels built with
   CONFIG_NO_HZ_FULL=y
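
As a usage sketch of the polled grace-period API mentioned above
(start_poll_synchronize_rcu() and poll_state_synchronize_rcu() are the
real primitives; the surrounding code is hypothetical):

  unsigned long cookie;

  cookie = start_poll_synchronize_rcu();  /* start a GP, don't block */
  /* ... do other work ... */
  if (poll_state_synchronize_rcu(cookie))
          kfree(old_ptr);                 /* grace period has elapsed */
  else
          synchronize_rcu();              /* fall back to a blocking wait */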

* tag 'rcu.2022.07.26a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (98 commits)
  rcu: Add irqs-disabled indicator to expedited RCU CPU stall warnings
  rcu: Diagnose extended sync_rcu_do_polled_gp() loops
  rcu: Put panic_on_rcu_stall() after expedited RCU CPU stall warnings
  rcutorture: Test polled expedited grace-period primitives
  rcu: Add polled expedited grace-period primitives
  rcutorture: Verify that polled GP API sees synchronous grace periods
  rcu: Make Tiny RCU grace periods visible to polled APIs
  rcu: Make polled grace-period API account for expedited grace periods
  rcu: Switch polled grace-period APIs to ->gp_seq_polled
  rcu/nocb: Avoid polling when my_rdp->nocb_head_rdp list is empty
  rcu/nocb: Add option to opt rcuo kthreads out of RT priority
  rcu: Add nocb_cb_kthread check to rcu_is_callbacks_kthread()
  rcu/nocb: Add an option to offload all CPUs on boot
  rcu/nocb: Fix NOCB kthreads spawn failure with rcu_nocb_rdp_deoffload() direct call
  rcu/nocb: Invert rcu_state.barrier_mutex VS hotplug lock locking order
  rcu/nocb: Add/del rdp to iterate from rcuog itself
  rcu/tree: Add comment to describe GP-done condition in fqs loop
  rcu: Initialize first_gp_fqs at declaration in rcu_gp_fqs()
  rcu/kvfree: Remove useless monitor_todo flag
  rcu: Cleanup RCU urgency state for offline CPU
  ...
2022-08-02 19:12:45 -07:00
Ard Biesheuvel
29589ca09a ARM: 9208/1: entry: add .ltorg directive to keep literals in range
LKP reports a build issue on Clang, related to a literal load of
__current issued through the ldr_va macro. This turns out to be due to
the fact that group relocations are disabled when CONFIG_COMPILE_TEST=y,
which means that the ldr_va macro resolves to a pair of LDR
instructions, the first one being a literal load issued too far from its
literal pool.

Due to the introduction of a couple of new uses of this macro in commit
508074607c ("ARM: 9195/1: entry: avoid explicit literal loads"),
the literal pools end up getting rearranged in a way that causes the
literal for __current to go out of range. Let's fix this up by putting a
.ltorg directive in a suitable place in the code.

Link: https://lore.kernel.org/all/202205290805.1vZLAr36-lkp@intel.com/

Fixes: 508074607c ("ARM: 9195/1: entry: avoid explicit literal loads")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-07-14 13:19:51 +01:00
Frederic Weisbecker
24a9c54182 context_tracking: Split user tracking Kconfig
Context tracking is going to be used not only to track user transitions
but also idle/IRQs/NMIs. The user tracking part will then become a
separate feature. Prepare Kconfig for that.

[ frederic: Apply Max Filippov feedback. ]

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicolas Saenz Julienne <nsaenz@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Cc: Yu Liao <liaoyu15@huawei.com>
Cc: Phil Auld <pauld@redhat.com>
Cc: Paul Gortmaker<paul.gortmaker@windriver.com>
Cc: Alex Belits <abelits@marvell.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
2022-06-29 17:04:09 -07:00
Ard Biesheuvel
892c608a7d ARM: 9199/1: spectre-bhb: use local DSB and elide ISB in loop8 sequence
The loop8 mitigation for Spectre-BHB only requires a CPU local DSB
rather than a systemwide one, which is much more costly. And by the same
reasoning as why it is justified to omit the ISB after BPIALL, we can
also elide the ISB and rely on the exception return for the context
synchronization.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:33:47 +01:00
Ard Biesheuvel
508074607c ARM: 9195/1: entry: avoid explicit literal loads
ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into
registers without having to rely on literal loads that go via the
D-cache.  For older cores, we now support a similar arrangement, based
on PC-relative group relocations.

This means we can elide most literal loads entirely from the entry path,
by switching to the ldr_va macro to emit the appropriate sequence
depending on the target architecture revision.

While at it, switch to the bl_r macro for invoking the right PABT/DABT
helpers instead of setting the LR register explicitly, which does not
play well with cores that speculate across function returns.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:32:32 +01:00
Linus Torvalds
9c0e6a89b5 ARM development updates for 5.18:
Updates for IRQ stacks and virtually mapped stack support for ARM from
 the following pull requests, etc:
 
 1) ARM: support for IRQ and vmap'ed stacks
 
 This PR covers all the work related to implementing IRQ stacks and
 vmap'ed stacks for all 32-bit ARM systems that are currently supported
 by the Linux kernel, including RiscPC and Footbridge. It has been
 submitted for review in three different waves:
 - IRQ stacks support for v7 SMP systems [0],
 - vmap'ed stacks support for v7 SMP systems [1],
 - extending support for both IRQ stacks and vmap'ed stacks for all
   remaining configurations, including v6/v7 SMP multiplatform kernels
   and uniprocessor configurations including v7-M [2]
 
 [0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
 [1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
 [2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
 
 2) ARM: support for IRQ and vmap'ed stacks [v6]
 
 This tag covers the changes between the version of vmap'ed + IRQ stacks
 support pulled into rmk/devel-stable [0] (which was dropped from v5.17
 due to issues discovered too late in the cycle), and my v5 proposed for
 the v5.18 cycle [1].
 
 [0] git://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git arm-irq-and-vmap-stacks-for-rmk
 [1] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
 
 3) ARM: ftrace fixes and cleanups
 
 Make all flavors of ftrace available on all builds, regardless of ISA
 choice, unwinder choice or compiler:
 - use ADD not POP where possible
 - fix a couple of Thumb2 related issues
 - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
 - enable the graph tracer with the EABI unwinder
 - avoid clobbering frame pointer registers to make Clang happy
 
 Link: https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/
 
 4) Fixes for the above.

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
 "Updates for IRQ stacks and virtually mapped stack support, and ftrace:

   - Support for IRQ and vmap'ed stacks

     This covers all the work related to implementing IRQ stacks and
     vmap'ed stacks for all 32-bit ARM systems that are currently
     supported by the Linux kernel, including RiscPC and Footbridge. It
     has been submitted for review in four different waves:

      - IRQ stacks support for v7 SMP systems [0]

       - vmap'ed stacks support for v7 SMP systems [1]

      - extending support for both IRQ stacks and vmap'ed stacks for all
        remaining configurations, including v6/v7 SMP multiplatform
        kernels and uniprocessor configurations including v7-M [2]

      - fixes and updates in [3]

   - ftrace fixes and cleanups

     Make all flavors of ftrace available on all builds, regardless of
     ISA choice, unwinder choice or compiler [4]:

      - use ADD not POP where possible

      - fix a couple of Thumb2 related issues

      - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness

      - enable the graph tracer with the EABI unwinder

      - avoid clobbering frame pointer registers to make Clang happy

   - Fixes for the above"

[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
[3] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
[4] https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
  ARM: fix building NOMMU ARMv4/v5 kernels
  ARM: unwind: only permit stack switch when unwinding call_with_stack()
  ARM: Revert "unwind: dump exception stack from calling frame"
  ARM: entry: fix unwinder problems caused by IRQ stacks
  ARM: unwind: set frame.pc correctly for current-thread unwinding
  ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
  ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
  Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
  ARM: mach-bcm: disable ftrace in SMC invocation routines
  ARM: cacheflush: avoid clobbering the frame pointer
  ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
  ARM: ftrace: enable the graph tracer with the EABI unwinder
  ARM: unwind: track location of LR value in stack frame
  ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
  ARM: ftrace: avoid unnecessary literal loads
  ARM: ftrace: avoid redundant loads or clobbering IP
  ARM: ftrace: use trampolines to keep .init.text in branching range
  ARM: ftrace: use ADD not POP to counter PUSH at entry
  ARM: ftrace: ensure that ADR takes the Thumb bit into account
  ARM: make get_current() and __my_cpu_offset() __always_inline
  ...
2022-03-23 17:35:57 -07:00
Russell King (Oracle)
b9baf5c8c5 ARM: Spectre-BHB workaround
Work around the Spectre BHB issues for Cortex-A15, Cortex-A57,
Cortex-A72, Cortex-A73 and Cortex-A75. To be safe, we also include
Brahma B15, which is affected by Spectre V2 in the same ways as
Cortex-A15.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-03-05 10:42:07 +00:00
Arnd Bergmann
6f5d248d05 ARM: iop32x: use GENERIC_IRQ_MULTI_HANDLER
iop32x uses the entry-macro.S file for both the IRQ entry and for
hooking into the arch_ret_to_user code path. This is done because the
cp6 registers have to be enabled before accessing any of the interrupt
controller registers but have to be disabled when running in user space.

There is also a lazy-enable logic in cp6.c, but during a hardirq, we
know it has to be enabled.

Both the cp6-enable code and the code to read the IRQ status can be
lifted into the normal generic_handle_arch_irq() path, but the
cp6-disable code has to remain in the user return code. As nothing
other than iop32x uses this hook, just open-code it there with an
ifdef for the platform that can eventually be removed when iop32x
has reached the end of its life.

The cp6-enable path in the IRQ entry has an extra cp_wait barrier that
the trap version does not have, but it is harmless to do it in both
cases to simplify the logic here at the cost of a few extra cycles
for the trap.
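
In rough C terms, the lifted IRQ path looks something like the sketch
below (iop_handle_irq(), cp6_enable(), read_irq_status() and irq_domain
are illustrative names, not the actual iop32x code; set_handle_irq()
and generic_handle_domain_irq() are the real generic APIs):

  static void __exception_irq_entry iop_handle_irq(struct pt_regs *regs)
  {
          u32 mask;

          cp6_enable();                   /* in hardirq: enable eagerly */
          mask = read_irq_status();       /* cp6 interrupt status register */
          while (mask) {
                  generic_handle_domain_irq(irq_domain, __ffs(mask));
                  mask &= mask - 1;       /* clear the lowest set bit */
          }
  }

  set_handle_irq(iop_handle_irq);         /* registered once at boot */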

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06 12:49:04 +01:00
Ard Biesheuvel
50596b7559 ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80 ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256] ; 0x4e8  <- stack canary
       9313            str     r3, [sp, #76]   ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]             <- preempt count
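
A simplified sketch of the resulting helper (the real version in
arch/arm/include/asm/current.h carries additional configuration checks
and fallbacks):

  static __always_inline struct task_struct *get_current(void)
  {
          struct task_struct *cur;

  #if __has_builtin(__builtin_thread_pointer)
          cur = __builtin_thread_pointer();       /* compiler reads TPIDRURO */
  #else
          asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));  /* read TPIDRURO */
  #endif
          return cur;
  }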

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27 16:54:02 +02:00
Arnd Bergmann
8ac6f5d7f8 ARM: 9113/1: uaccess: remove set_fs() implementation
There are no remaining callers of set_fs(), so just remove it
along with all associated code that operates on
thread_info->addr_limit.

There are still further optimizations that can be done:

- In get_user(), the address check could be moved entirely
  into the out of line code, rather than passing a constant
  as an argument,

- I assume the DACR handling can be simplified as we now
  only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN
  is set, but not during set_fs().

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-08-20 11:39:27 +01:00
Arnd Bergmann
4e57a4ddf6 ARM: 9107/1: syscall: always store thread_info->abi_syscall
The system call number is used in a couple of places, in particular
ptrace, seccomp and /proc/<pid>/syscall.

The last one apparently never worked reliably on ARM for tasks that are
not currently getting traced.

Storing the syscall number in the normal entry path makes it work,
as well as allowing us to see if the current system call is for OABI
compat mode, which is the next thing I want to hook into.

Since the thread_info->syscall field is not just the number any more, it
is now renamed to abi_syscall. In kernels that enable both OABI and EABI,
the upper bits of this field encode 0x900000 (__NR_OABI_SYSCALL_BASE)
for OABI tasks, while normal EABI tasks do not set the upper bits. This
makes it possible to implement the in_oabi_syscall() helper later.

All other users of thread_info->syscall go through the syscall_get_nr()
helper, which in turn filters out the ABI bits.

Note that the ABI information is lost with PTRACE_SET_SYSCALL, so one
cannot set the internal number to a particular version, but this was
already the case. We could change it to let gdb encode the ABI type along
with the syscall in a CONFIG_OABI_COMPAT-enabled kernel, but that itself
would be a (backwards-compatible) ABI change, so I don't do it here.
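
A simplified sketch of the resulting helpers (modelled on
arch/arm/include/asm/syscall.h; the EABI-only fast path and other
details are omitted):

  static inline int syscall_get_nr(struct task_struct *task,
                                   struct pt_regs *regs)
  {
          /* Mask off the 0x900000 OABI marker, keeping just the number. */
          return task_thread_info(task)->abi_syscall & __NR_SYSCALL_MASK;
  }

  static inline bool in_oabi_syscall(void)
  {
          return IS_ENABLED(CONFIG_OABI_COMPAT) &&
                 (current_thread_info()->abi_syscall & __NR_OABI_SYSCALL_BASE);
  }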

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-08-20 11:39:26 +01:00
Masahiro Yamada
0047eb9f09 ARM: 9068/1: syscalls: switch to generic syscalltbl.sh
Many architectures duplicate similar shell scripts.

This commit converts ARM to use scripts/syscalltbl.sh.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-25 14:13:13 +00:00
Linus Torvalds
c45647f9f5 ARM updates for 5.11:
- Rework phys/virt translation
 - Add KASan support
 - Move DT out of linear map region
 - Use more PC-relative addressing in assembly
 - Remove FP emulation handling while in kernel mode
 - Link with '-z norelro'
 - remove old check for GCC <= 4.2 in ARM unwinder code
 - disable big endian if using clang's linker

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux

Pull ARM updates from Russell King:

 - Rework phys/virt translation

 - Add KASan support

 - Move DT out of linear map region

 - Use more PC-relative addressing in assembly

 - Remove FP emulation handling while in kernel mode

 - Link with '-z norelro'

 - remove old check for GCC <= 4.2 in ARM unwinder code

 - disable big endian if using clang's linker

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits)
  ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section
  ARM: 9038/1: Link with '-z norelro'
  ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro
  ARM: 9036/1: uncompress: Fix dbgadtb size parameter name
  ARM: 9035/1: uncompress: Add be32tocpu macro
  ARM: 9033/1: arm/smp: Drop the macro S(x,s)
  ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions
  ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol
  ARM: 9044/1: vfp: use undef hook for VFP support detection
  ARM: 9034/1: __div64_32(): straighten up inline asm constraints
  ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode
  ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler
  ARM: 9028/1: disable KASAN in call stack capturing routines
  ARM: 9026/1: unwind: remove old check for GCC <= 4.2
  ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD
  ARM: 9024/1: Drop useless cast of "u64" to "long long"
  ARM: 9023/1: Spelling s/mmeory/memory/
  ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak
  ARM: kvm: replace open coded VA->PA calculations with adr_l call
  ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET
  ...
2020-12-22 13:34:27 -08:00
Jens Axboe
32d59773da arm: add support for TIF_NOTIFY_SIGNAL
Wire up TIF_NOTIFY_SIGNAL handling for arm.

Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-11-12 08:45:51 -07:00
Linus Walleij
c12366ba44 ARM: 9015/2: Define the virtual space of KASan's shadow region
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32-bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:

 +----+ 0xffffffff
 |    |
 |    | |-> Static kernel image (vmlinux) BSS and page table
 |    |/
 +----+ PAGE_OFFSET
 |    |
 |    | |->  Loadable kernel modules virtual address space area
 |    |/
 +----+ MODULES_VADDR = KASAN_SHADOW_END
 |    |
 |    | |-> The shadow area of kernel virtual address.
 |    |/
 +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
 |    |   shadow address of MODULES_VADDR
 |    | |
 |    | |
 |    | |-> The user space area in lowmem. The kernel address
 |    | |   sanitizer does not use this space, nor does it map it.
 |    | |
 |    | |
 |    | |
 |    | |
 |    |/
 ------ 0

0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.

KASAN_SHADOW_START:
 This value is the shadow address of MODULES_VADDR. It is the
 start of kernel virtual space. Since we have modules to load, we need
 to cover that area with shadow memory as well so we can find memory
 bugs in modules.

KASAN_SHADOW_END:
 This value is the shadow address of 0x100000000: the mapping that
 would come after the end of the kernel memory at 0xffffffff. It is
 the end of the kernel address sanitizer shadow area, and also the
 start of the module area.

KASAN_SHADOW_OFFSET:
 This value is used to map an address to the corresponding shadow
 address by the following formula:

   shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

 As you would expect, >> 3 is equal to dividing by 8, meaning each
 byte in the shadow memory covers 8 bytes of kernel memory, so one
 bit of shadow memory is used per byte of kernel memory.
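
 In C, this is the usual KASan mapping helper (as in
 include/linux/kasan.h, with KASAN_SHADOW_SCALE_SHIFT equal to 3):

   static inline void *kasan_mem_to_shadow(const void *addr)
   {
           return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                   + KASAN_SHADOW_OFFSET;
   }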

 The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
 on the VMSPLIT layout of the system: the kernel and userspace can
 split up lowmem in different ways according to needs, so we calculate
 the shadow offset depending on this.

When KASan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in
the assembly (*.S) files.

The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.

We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static
    image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
    so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0xc1000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x18200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to
    KASAN_SHADOW_END at 0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x3f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
    KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static
    image uses 0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
    so the modules use addresses 0x7f000000 .. 0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x81000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x10200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to
    KASAN_SHADOW_END at 0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x7f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
    KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
  and this is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static
    image uses 0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
    so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x41000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x08200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to
    KASAN_SHADOW_END at 0xbfffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xbf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
    KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with
  full 1 GB low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static
    image uses 0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
    so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x51000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x0a200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to
    KASAN_SHADOW_END at 0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xaf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
    KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
  is an error value. We should always match one of the
  above shadow offsets.

When we do this, TASK_SIZE will sometimes take somewhat odd values
that will not fit into an immediate mov assembly instruction.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:

-       mov     r1, #TASK_SIZE
+       ldr     r1, =TASK_SIZE

or

-       cmp     r4, #TASK_SIZE
+       ldr     r0, =TASK_SIZE
+       cmp     r4, r0

This is done to avoid an immediate #TASK_SIZE that would need to
fit into a limited number of bits.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-10-27 12:11:08 +00:00
Thomas Gleixner
d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).
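
Concretely, each affected C file loses the license paragraph above and
gains a single first-line tag:

  // SPDX-License-Identifier: GPL-2.0-only

(Headers and assembly files use the comment style appropriate to them,
e.g. /* SPDX-License-Identifier: GPL-2.0-only */.)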

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Stefan Agner
e44fc38818 ARM: 8844/1: use unified assembler in assembly files
Use unified assembler syntax (UAL) in assembly files. Divided
syntax is considered deprecated. This will also allow building
the kernel using LLVM's integrated assembler.

Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26 11:26:07 +00:00
Timothy E Baldwin
f18aef742c ARM: 8802/1: Call syscall_trace_exit even when system call skipped
On at least x86 and ARM64, and as documented in the ptrace man page,
a skipped system call will still cause a syscall-exit ptrace stop.

Prior to this commit, 32-bit ARM did not, resulting in strace
being confused when seccomp skips system calls.

This change also impacts programs that use ptrace to skip system calls.
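
A minimal user-space sketch of the sequence in question
(PTRACE_SET_SYSCALL is the ARM-specific ptrace request; error handling
omitted):

  /* At syscall-entry-stop: skip the syscall by setting its number to -1. */
  ptrace(PTRACE_SET_SYSCALL, pid, 0, (void *)-1L);
  ptrace(PTRACE_SYSCALL, pid, 0, 0);   /* resume until the next stop */
  waitpid(pid, &status, 0);            /* now also a syscall-exit stop */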

Fixes: ad75b51459 ("ARM: 7579/1: arch/allow a scno of -1 to not cause a SIGILL")
Signed-off-by: Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Signed-off-by: Eugene Syromyatnikov <evgsyr@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Eugene Syromyatnikov <evgsyr@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-10-10 13:53:12 +01:00
Vincent Whitchurch
afc9f65e01 ARM: 8781/1: Fix Thumb-2 syscall return for binutils 2.29+
When building the kernel as Thumb-2 with binutils 2.29 or newer, if the
assembler has seen the .type directive (via ENDPROC()) for a symbol, it
automatically handles the setting of the lowest bit when the symbol is
used with ADR.  The badr macro on the other hand handles this lowest bit
manually.  This leads to a jump to a wrong address in the wrong state
in the syscall return path:

 Internal error: Oops - undefined instruction: 0 [#2] SMP THUMB2
 Modules linked in:
 CPU: 0 PID: 652 Comm: modprobe Tainted: G      D           4.18.0-rc3+ #8
 PC is at ret_fast_syscall+0x4/0x62
 LR is at sys_brk+0x109/0x128
 pc : [<80101004>]    lr : [<801c8a35>]    psr: 60000013
 Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
 Control: 50c5387d  Table: 9e82006a  DAC: 00000051
 Process modprobe (pid: 652, stack limit = 0x(ptrval))

 80101000 <ret_fast_syscall>:
 80101000:       b672            cpsid   i
 80101002:       f8d9 2008       ldr.w   r2, [r9, #8]
 80101006:       f1b2 4ffe       cmp.w   r2, #2130706432 ; 0x7f000000

 80101184 <local_restart>:
 80101184:       f8d9 a000       ldr.w   sl, [r9]
 80101188:       e92d 0030       stmdb   sp!, {r4, r5}
 8010118c:       f01a 0ff0       tst.w   sl, #240        ; 0xf0
 80101190:       d117            bne.n   801011c2 <__sys_trace>
 80101192:       46ba            mov     sl, r7
 80101194:       f5ba 7fc8       cmp.w   sl, #400        ; 0x190
 80101198:       bf28            it      cs
 8010119a:       f04f 0a00       movcs.w sl, #0
 8010119e:       f3af 8014       nop.w   {20}
 801011a2:       f2af 1ea2       subw    lr, pc, #418    ; 0x1a2

To fix this, add a new symbol name which doesn't have ENDPROC used on it
and use that with badr.  We can't remove the badr usage since that
would cause breakage with older binutils.

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-07-30 11:45:19 +01:00
Linus Torvalds
d82991a868 Merge branch 'core-rseq-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull restartable sequence support from Thomas Gleixner:
 "The restartable sequences syscall (finally):

  After a lot of back and forth discussion and massive delays caused by
  the speculative distraction of maintainers, the core set of
  restartable sequences has finally reached a consensus.

  It comes with the basic non disputed core implementation along with
  support for arm, powerpc and x86 and a full set of selftests

  It was exposed to linux-next earlier this week, so it does not fully
  comply with the merge window requirements, but there is really no
  point to drag it out for yet another cycle"

* 'core-rseq-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  rseq/selftests: Provide Makefile, scripts, gitignore
  rseq/selftests: Provide parametrized tests
  rseq/selftests: Provide basic percpu ops test
  rseq/selftests: Provide basic test
  rseq/selftests: Provide rseq library
  selftests/lib.mk: Introduce OVERRIDE_TARGETS
  powerpc: Wire up restartable sequences system call
  powerpc: Add syscall detection for restartable sequences
  powerpc: Add support for restartable sequences
  x86: Wire up restartable sequence system call
  x86: Add support for restartable sequences
  arm: Wire up restartable sequences system call
  arm: Add syscall detection for restartable sequences
  arm: Add restartable sequences support
  rseq: Introduce restartable sequences system call
  uapi/headers: Provide types_32_64.h
2018-06-10 10:17:09 -07:00
Mathieu Desnoyers
b74406f377 arm: Add syscall detection for restartable sequences
Syscalls are not allowed inside restartable sequences, so add a call to
rseq_syscall() at the very beginning of the system call exit path for
CONFIG_DEBUG_RSEQ=y kernels. This can help us detect whether there is a
syscall issued inside a restartable sequence.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Watson <davejwatson@fb.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Chris Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Andrew Hunter <ahh@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Maurer <bmaurer@fb.com>
Cc: linux-api@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20180602124408.8430-5-mathieu.desnoyers@efficios.com
2018-06-06 11:58:31 +02:00
Russell King
10573ae547 ARM: spectre-v1: fix syscall entry
Prevent speculation at the syscall table decoding by clamping the index
used to zero on invalid system call numbers, and using the csdb
speculative barrier.
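
Expressed in C terms, the clamping is equivalent to the generic
array_index_nospec() pattern from <linux/nospec.h> (the actual fix is
in the entry assembly; this fragment is only an analogy):

  if (scno < NR_syscalls) {
          scno = array_index_nospec(scno, NR_syscalls); /* clamped index */
          call = sys_call_table[scno];
  }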

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
2018-05-31 23:27:26 +01:00
Russell King
c608906165 ARM: probes: avoid adding kprobes to sensitive kernel-entry/exit code
Avoid adding kprobes to any of the kernel entry/exit or startup
assembly code, or code in the identity-mapped region.  This code does
not conform to the standard C conventions, which means that the
expectations of the kprobes code are not fulfilled.

Placing kprobes at some of these locations results in the kernel trying
to return to userspace addresses while retaining the CPU in kernel mode.

Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-12-17 22:14:21 +00:00
Linus Torvalds
441692aafc Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:

 - add support for ELF fdpic binaries on both MMU and noMMU platforms

 - linker script cleanups

 - support for compressed .data section for XIP images

 - discard memblock arrays when possible

 - various cleanups

 - atomic DMA pool updates

 - better diagnostics of missing/corrupt device tree

 - export information to allow the userspace kexec tool to place images
   more intelligently, so that the device tree isn't overwritten by the
   booting kernel

 - make early_printk more efficient on semihosted systems

 - noMMU cleanups

 - SA1111 PCMCIA update in preparation for further cleanups

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (38 commits)
  ARM: 8719/1: NOMMU: work around maybe-uninitialized warning
  ARM: 8717/2: debug printch/printascii: translate '\n' to "\r\n" not "\n\r"
  ARM: 8713/1: NOMMU: Support MPU in XIP configuration
  ARM: 8712/1: NOMMU: Use more MPU regions to cover memory
  ARM: 8711/1: V7M: Add support for MPU to M-class
  ARM: 8710/1: Kconfig: Kill CONFIG_VECTORS_BASE
  ARM: 8709/1: NOMMU: Disallow MPU for XIP
  ARM: 8708/1: NOMMU: Rework MPU to be mostly done in C
  ARM: 8707/1: NOMMU: Update MPU accessors to use cp15 helpers
  ARM: 8706/1: NOMMU: Move out MPU setup in separate module
  ARM: 8702/1: head-common.S: Clear lr before jumping to start_kernel()
  ARM: 8705/1: early_printk: use printascii() rather than printch()
  ARM: 8703/1: debug.S: move hexbuf to a writable section
  ARM: add additional table to compressed kernel
  ARM: decompressor: fix BSS size calculation
  pcmcia: sa1111: remove special sa1111 mmio accessors
  pcmcia: sa1111: use sa1111_get_irq() to obtain IRQ resources
  ARM: better diagnostics with missing/corrupt dtb
  ARM: 8699/1: dma-mapping: Remove init_dma_coherent_pool_size()
  ARM: 8698/1: dma-mapping: Mark atomic_pool as __ro_after_init
  ...
2017-11-16 12:50:35 -08:00
Vladimir Murzin
6022f80da0 ARM: 8695/1: entry: Remove dead code in sys_mmap2
We support a page size of 4K only, so remove the dead code.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-09-28 11:13:03 +01:00
Thomas Garnier
e33f8d3267 arm/syscalls: Optimize address limit check
Disable the generic address limit check in favor of an
architecture-specific optimized implementation. The generic
implementation using pending work flags did not work well with ARM
and alignment faults.

The address limit is checked on each syscall return path to user mode,
as well as in the IRQ user-mode return function. If the address limit
was changed, a function is called to report data corruption (stopping
the kernel or process based on configuration).

The address limit check has to be done before any pending work is
handled, because that work can reset the address limit, and the
process is killed using a SIGKILL signal. For example, the lkdtm
address limit check does not work otherwise, because the signal sent
to kill the process resets the user-mode address limit.
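
In rough C terms the per-return check amounts to the following sketch
(illustrative; the actual check is carried out in the assembly return
paths, which branch to addr_limit_check_failed() on failure):

  /* On every return to user mode, before handling pending work: */
  if (unlikely(get_fs() != USER_DS))
          addr_limit_check_failed();    /* reports the data corruption */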

Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Leonard Crestez <leonard.crestez@nxp.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-4-git-send-email-keescook@chromium.org
2017-09-17 19:45:33 +02:00
Thomas Garnier
2404269bc4 Revert "arm/syscalls: Check address limit on user-mode return"
This reverts commit 73ac5d6a2b.

The work pending loop can call set_fs after addr_limit_user_check
removed the _TIF_FSCHECK flag. This may happen at any time based on how
ARM handles alignment exceptions. It leads to an infinite loop condition.

After discussion, it has been agreed that the generic approach is not
tailored to the ARM architecture and any fix might not be complete. This
patch will be replaced by an architecture-specific implementation. The
work flag approach will be kept for other architectures.

Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-3-git-send-email-keescook@chromium.org
2017-09-17 19:45:33 +02:00
Linus Torvalds
8fac2f96ab Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
 "Low priority fixes and updates for ARM:

   - add some missing includes

   - efficiency improvements in system call entry code when tracing is
     enabled

   - ensure ARMv6+ is always built as EABI

   - export save_stack_trace_tsk()

   - fix fatal signal handling during mm fault

   - build translation table base address register from scratch

   - appropriately align the .data section to a word boundary where we
     rely on that data being word aligned"

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8691/1: Export save_stack_trace_tsk()
  ARM: 8692/1: mm: abort uaccess retries upon fatal signal
  ARM: 8690/1: lpae: build TTB control register value from scratch in v7_ttb_setup
  ARM: align .data section
  ARM: always enable AEABI for ARMv6+
  ARM: avoid saving and restoring registers unnecessarily
  ARM: move PC value into r9
  ARM: obtain thread info structure later
  ARM: use aliases for registers in entry-common
  ARM: 8689/1: scu: add missing errno include
  ARM: 8688/1: pm: add missing types include
2017-09-12 06:10:44 -07:00
Russell King
dca778c5bb ARM: avoid saving and restoring registers unnecessarily
Avoid repeatedly saving and restoring registers around the calls to
trace_hardirqs_on() and context_tracking_user_exit().  With the
previous changes, we no longer need to preserve "lr" across these
calls, and if we re-load r0-r3 later, we can avoid preserving these
registers too.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-02 14:15:04 +01:00
Russell King
fcea45236d ARM: move PC value into r9
Move the saved PC value into r9, thereby moving it into a caller-saved
register for functions that we may call during the entry to a syscall.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-02 14:15:04 +01:00
Russell King
da594e3fff ARM: obtain thread info structure later
Obtain the thread info structure later in the syscall processing, so
that we free up a register for earlier code.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-02 14:15:04 +01:00
Russell King
309ee04257 ARM: use aliases for registers in entry-common
Use aliases for the saved (and preserved) PSR and PC values so that we
can control which registers are used.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-02 14:15:04 +01:00
Thomas Garnier
73ac5d6a2b arm/syscalls: Check address limit on user-mode return
Ensure the address limit is a user-mode segment before returning to
user-mode. Otherwise a process can corrupt kernel-mode memory and
elevate privileges [1].

The set_fs function sets the TIF_SETFS flag to force a slow path on
return. In the slow path, the address limit is checked to be USER_DS if
needed.

The TIF_SETFS flag is added to _TIF_WORK_MASK, shifting _TIF_SYSCALL_WORK
for arm instruction immediate support. The global work mask is too big to
be used in a single instruction, so adapt ret_fast_syscall.

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990

Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: kernel-hardening@lists.openwall.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Drewry <wad@chromium.org>
Cc: linux-api@vger.kernel.org
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: http://lkml.kernel.org/r/20170615011203.144108-2-thgarnie@google.com
2017-07-08 14:05:33 +02:00
Russell King
96a8fae0fe ARM: convert to generated system call tables
Convert ARM to use a similar mechanism to x86 to generate the unistd.h
system call numbers and the various kernel system call tables.  This
means that rather than having to edit three places (asm/unistd.h for
the total number of system calls, uapi/asm/unistd.h for the system call
numbers, and arch/arm/kernel/calls.S for the call table) we have only
one place to edit, making the process much simpler.

The scripts have knowledge of the table padding requirements, so there's
no need to worry about __NR_syscalls not fitting within the immediate
constant field of ALU instructions anymore.
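
The single place to edit is then a table, arch/arm/tools/syscall.tbl,
with entries along the lines of:

  # <number> <abi> <name> <entry point>
  0       common  restart_syscall         sys_restart_syscall
  1       common  exit                    sys_exit
  2       common  fork                    sys_fork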

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-10-18 21:34:06 +01:00
Russell King
5745eef6b8 ARM: rename S_FRAME_SIZE to PT_REGS_SIZE
S_FRAME_SIZE is no longer the size of the kernel stack frame, so this
name is misleading.  It is the size of the kernel pt_regs structure.
Name it so.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22 19:54:28 +01:00
Russell King
40d3f02851 Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus 2015-09-03 15:28:37 +01:00
Russell King
2190fed67b ARM: entry: provide uaccess assembly macro hooks
Provide hooks into the kernel entry and exit paths to permit control
of userspace visibility to the kernel.  The intended use is:

- on entry to kernel from user, uaccess_disable will be called to
  disable userspace visibility
- on exit from kernel to user, uaccess_enable will be called to
  enable userspace visibility
- on entry from a kernel exception, uaccess_save_and_disable will be
  called to save the current userspace visibility setting, and disable
  access
- on exit from a kernel exception, uaccess_restore will be called to
  restore the userspace visibility as it was before the exception
  occurred.

These hooks allow us to keep userspace visibility disabled for the
vast majority of the kernel, except for localised regions where we
want to explicitly access userspace.
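
At the C level, the same pairing appears wherever the kernel
deliberately touches userspace; a minimal sketch using ARM's
uaccess_save_and_enable()/uaccess_restore() helpers:

  unsigned int flags = uaccess_save_and_enable(); /* open the user window */
  ret = __copy_to_user(uptr, kbuf, len);          /* localised user access */
  uaccess_restore(flags);                         /* close it again */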

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-26 20:27:02 +01:00
Russell King
e0aa3a6657 ARM: entry: ensure that IRQs are enabled when calling syscall_trace_exit()
The audit code looks like it's been written to cope with being called
with IRQs enabled.  However, it's unclear whether IRQs should be
enabled or disabled when calling the syscall tracing infrastructure.

Right now, sometimes we call this with IRQs enabled, and other times
with IRQs disabled.  Opt for IRQs being enabled for consistency.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-25 10:32:50 +01:00
Russell King
3302caddf1 ARM: entry: efficiency cleanups
Make the "fast" syscall return path fast again.  The addition of IRQ
tracing and context tracking has made this path grossly inefficient.
We can do much better when these options are enabled if we save the
syscall return code on the stack - we then don't need to save a bunch
of registers around every single callout to C code.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-25 10:32:48 +01:00
Drew Richardson
e83dd37700 ARM: 8409/1: Mark ret_fast_syscall as a function
ret_fast_syscall runs when user space makes a syscall. However, it
needs to be marked as a function so that the ELF information is correct.
Before it was:

   101: 8000f300     0 NOTYPE  LOCAL  DEFAULT    2 ret_fast_syscall

But with this change it correctly shows as:

   101: 8000f300    96 FUNC    LOCAL  DEFAULT    2 ret_fast_syscall

I see this function when using perf to unwind call stacks from kernel
space to user space. Without this change I would need to add some
special case logic when using the vmlinux ELF information.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-07 19:57:02 +01:00
Russell King
05c9ca8843 Merge branch 'bsym' into for-next
Conflicts:
	arch/arm/kernel/head.S
2015-06-12 21:18:38 +01:00
Russell King
1b97937246 ARM: fix missing syscall trace exit
Josh Stone reports:

  I've discovered a case where both arm and arm64 will miss a ptrace
  syscall-exit that they should report.  If the syscall is entered
  without TIF_SYSCALL_TRACE set, then it goes on the fast path.  It's
  then possible to have TIF_SYSCALL_TRACE added in the middle of the
  syscall, but ret_fast_syscall doesn't check this flag again.

Fix this by always checking for a syscall trace in the fast exit path.

Reported-by: Josh Stone <jistone@redhat.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-05-15 11:06:35 +01:00
Russell King
14327c6628 ARM: replace BSYM() with badr assembly macro
BSYM() was invented to allow us to work around a problem with the
assembler, where local symbols resolved by the assembler for the 'adr'
instruction did not take account of their ISA.

Since we don't want BSYM() used elsewhere, replace BSYM() with a new
macro 'badr', which is like the 'adr' pseudo-op, but with the BSYM()
mechanics integrated into it.  This ensures that the BSYM()-ification
is only used in conjunction with 'adr'.

Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-05-08 17:33:50 +01:00
Russell King
82112379b7 ARM: move ftrace assembly code to separate file
The ftrace assembly code doesn't need to live in entry-common.S and
be surrounded with #ifdef CONFIG_FUNCTION_TRACER.  Instead, move it
to its own file and conditionally assemble it.

Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-11-21 15:25:01 +00:00
Russell King
195b58add4 ARM: Avoid writing to control register on every exception
If we are not changing the control register value, avoid writing to it.
Writes to the control register can be very expensive, taking around a
hundred cycles or so.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-09-26 14:39:54 +01:00
Russell King
6ebbf2ce43 ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls.  Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).

We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.

Rather than doing this piecemeal, and missing some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code.  This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18 12:29:04 +01:00
Russell King
8229c54fa1 ARM: consolidate last remaining open-coded alignment trap enable
We can use the alignment_trap assembly macro here too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-02 09:20:20 +01:00