Commit Graph

210 Commits

Author SHA1 Message Date
Ard Biesheuvel
8bcba70cb5 ARM: entry: Disregard Thumb undef exception in coproc dispatch
Now that the only remaining coprocessor instructions being handled via
the dispatch in entry-armv.S are ones that only exist in an ARM (A32)
encoding, we can simplify the handling of Thumb undef exceptions, and
send them straight to the undefined instruction handlers in C code.

This also means we can drop the code that partially decodes the
instruction to decide whether it is a 16-bit or 32-bit Thumb
instruction: this is all taken care of by the undef hook.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17 15:08:22 +02:00
Ard Biesheuvel
cdd87465ad ARM: vfp: Use undef hook for handling VFP exceptions
Now that the VFP support code has been reimplemented as a C function
that takes a struct pt_regs pointer and an opcode, we can use the
existing undef_hook framework to deal with undef exceptions triggered by
VFP instructions instead of having special handling in assembler.

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17 15:08:22 +02:00
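
A minimal C sketch of how an undef hook of the kind mentioned above is
registered (the mask/value pair, handler and initcall are illustrative, not
the actual VFP hook added by this commit):

  #include <linux/init.h>
  #include <linux/ptrace.h>
  #include <asm/traps.h>

  /* Hypothetical handler: invoked with the saved registers and the faulting
     opcode when an undefined instruction matching the hook is trapped. */
  static int my_coproc_handler(struct pt_regs *regs, unsigned int instr)
  {
          /* emulate or reject the instruction; returning 0 means "handled" */
          return 0;
  }

  static struct undef_hook my_coproc_hook = {
          .instr_mask = 0x0c000e00,       /* illustrative mask ...           */
          .instr_val  = 0x0c000a00,       /* ... and value for CP10/CP11 ops */
          .cpsr_mask  = MODE_MASK,
          .cpsr_val   = USR_MODE,         /* match exceptions from user mode */
          .fn         = my_coproc_handler,
  };

  static int __init my_hook_init(void)
  {
          register_undef_hook(&my_coproc_hook);
          return 0;
  }
  core_initcall(my_hook_init);
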
Ard Biesheuvel
6ee1e6772e ARM: kernel: Get rid of thread_info::used_cp[] array
We keep track of which coprocessor triggered a fault in the used_cp[]
array in thread_info, but this data is never used anywhere. So let's
remove it.

Linus did some digging and found out that the last user of this field
was removed in commit bb1a773d5b ("kill unused dump_fpu() instances").

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-05-17 15:08:22 +02:00
Ard Biesheuvel
ad12c2f158 ARM: 9201/1: spectre-bhb: rely on linker to emit cross-section literal loads
The assembler does not permit 'LDR PC, <sym>' when the symbol lives in a
different section, which is why we have been relying on rather fragile
open-coded arithmetic to load the address of the vector_swi routine into
the program counter using a single LDR instruction in the SWI slot in
the vector table. The literal was moved to a different section in
commit 19accfd373 ("ARM: move vector stubs") to ensure that the
vector stubs page does not need to be mapped readable for user space,
which is the case for the vector page itself, as it carries the kuser
helpers as well.

So the cross-section literal load is open-coded, and this relies on the
address of vector_swi being at the very start of the vector stubs page;
we won't notice if we get it wrong until booting the kernel and seeing
it break. Fortunately, it was guaranteed to break, so this was fragile
but not problematic.

Now that we have added two other variants of the vector table, we have 3
occurrences of the same trick, and so the size of our ISA/compiler/CPU
validation space has tripled, in a way that may cause regressions to be
observed only when booting the image in question on a CPU that exercises a
particular vector table.

So let's switch to true cross section references, and let the linker fix
them up like it fixes up all the other cross section references in the
vector page.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:33:47 +01:00
Ard Biesheuvel
1290c70d72 ARM: 9200/1: spectre-bhb: avoid cross-subsection jump using a numbered label
In order to minimize potential confusion regarding numbered labels
appearing in a different order in the assembler output due to the use of
subsections, use a named local label to jump back into the vector
handler code from the associated loop8 mitigation sequence.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:33:47 +01:00
Ard Biesheuvel
892c608a7d ARM: 9199/1: spectre-bhb: use local DSB and elide ISB in loop8 sequence
The loop8 mitigation for Spectre-BHB only requires a CPU local DSB
rather than a systemwide one, which is much more costly. And by the same
reasoning as why it is justified to omit the ISB after BPIALL, we can
also elide the ISB and rely on the exception return for the context
synchronization.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:33:47 +01:00
Ard Biesheuvel
c4f486f1e7 ARM: 9198/1: spectre-bhb: simplify BPIALL vector macro
The BPIALL mitigation for Spectre-BHB adds a single instruction to the
handler sequence that doesn't clobber any registers. Given that these
sequences are 10 instructions long, they don't fit neatly into a
cacheline anyway, so we can simply move that single instruction to the
start of the unmitigated one, and rearrange the symbol names accordingly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:32:32 +01:00
Ard Biesheuvel
508074607c ARM: 9195/1: entry: avoid explicit literal loads
ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into
registers without having to rely on literal loads that go via the
D-cache.  For older cores, we now support a similar arrangement, based
on PC-relative group relocations.

This means we can elide most literal loads entirely from the entry path,
by switching to the ldr_va macro to emit the appropriate sequence
depending on the target architecture revision.

While at it, switch to the bl_r macro for invoking the right PABT/DABT
helpers instead of setting the LR register explicitly, which does not
play well with cores that speculate across function returns.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-20 12:32:32 +01:00
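
For reference, the MOVW/MOVT idiom relied on above can be written from C with
inline assembly using the :lower16:/:upper16: operators; a hedged sketch with
a made-up symbol name (the actual ldr_va macro lives in assembler code and
also covers the pre-v7 ADD/LDR fallback):

  extern unsigned long some_kernel_variable;      /* hypothetical symbol */

  /* Load the address of a symbol without going through a literal pool. */
  static inline unsigned long *addr_of_some_kernel_variable(void)
  {
          unsigned long *p;

          asm("movw %0, #:lower16:some_kernel_variable\n\t"
              "movt %0, #:upper16:some_kernel_variable"
              : "=r" (p));
          return p;
  }
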
Ard Biesheuvel
3cfb301997 ARM: 9197/1: spectre-bhb: fix loop8 sequence for Thumb2
In Thumb2, 'b . + 4' produces a branch instruction that uses a narrow
encoding, and so it does not jump to the following instruction as
expected. So use W(b) instead.

Fixes: 6c7cb60bff ("ARM: fix Thumb2 regression with Spectre BHB")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-05-18 11:38:47 +01:00
Linus Torvalds
9c0e6a89b5 ARM development updates for 5.18:
Updates for IRQ stacks and virtually mapped stack support for ARM from
 the following pull requests, etc:
 
 1) ARM: support for IRQ and vmap'ed stacks
 
 This PR covers all the work related to implementing IRQ stacks and
 vmap'ed stacks for all 32-bit ARM systems that are currently supported
 by the Linux kernel, including RiscPC and Footbridge. It has been
 submitted for review in three different waves:
 - IRQ stacks support for v7 SMP systems [0],
 - vmap'ed stacks support for v7 SMP systems[1],
 - extending support for both IRQ stacks and vmap'ed stacks for all
   remaining configurations, including v6/v7 SMP multiplatform kernels
   and uniprocessor configurations including v7-M [2]
 
 [0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
 [1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
 [2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
 
 2) ARM: support for IRQ and vmap'ed stacks [v6]
 
 This tag covers the changes between the version of vmap'ed + IRQ stacks
 support pulled into rmk/devel-stable [0] (which was dropped from v5.17
 due to issues discovered too late in the cycle), and my v5 proposed for
 the v5.18 cycle [1].
 
 [0] git://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git arm-irq-and-vmap-stacks-for-rmk
 [1] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
 
 3) ARM: ftrace fixes and cleanups
 
 Make all flavors of ftrace available on all builds, regardless of ISA
 choice, unwinder choice or compiler:
 - use ADD not POP where possible
 - fix a couple of Thumb2 related issues
 - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
 - enable the graph tracer with the EABI unwinder
 - avoid clobbering frame pointer registers to make Clang happy
 
 Link: https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/
 
 4) Fixes for the above.

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
 "Updates for IRQ stacks and virtually mapped stack support, and ftrace:

   - Support for IRQ and vmap'ed stacks

     This covers all the work related to implementing IRQ stacks and
     vmap'ed stacks for all 32-bit ARM systems that are currently
     supported by the Linux kernel, including RiscPC and Footbridge. It
     has been submitted for review in four different waves:

      - IRQ stacks support for v7 SMP systems [0]

      - vmap'ed stacks support for v7 SMP systems[1]

      - extending support for both IRQ stacks and vmap'ed stacks for all
        remaining configurations, including v6/v7 SMP multiplatform
        kernels and uniprocessor configurations including v7-M [2]

      - fixes and updates in [3]

   - ftrace fixes and cleanups

     Make all flavors of ftrace available on all builds, regardless of
     ISA choice, unwinder choice or compiler [4]:

      - use ADD not POP where possible

      - fix a couple of Thumb2 related issues

      - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness

      - enable the graph tracer with the EABI unwinder

      - avoid clobbering frame pointer registers to make Clang happy

   - Fixes for the above"

[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
[3] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
[4] https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
  ARM: fix building NOMMU ARMv4/v5 kernels
  ARM: unwind: only permit stack switch when unwinding call_with_stack()
  ARM: Revert "unwind: dump exception stack from calling frame"
  ARM: entry: fix unwinder problems caused by IRQ stacks
  ARM: unwind: set frame.pc correctly for current-thread unwinding
  ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
  ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
  Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
  ARM: mach-bcm: disable ftrace in SMC invocation routines
  ARM: cacheflush: avoid clobbering the frame pointer
  ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
  ARM: ftrace: enable the graph tracer with the EABI unwinder
  ARM: unwind: track location of LR value in stack frame
  ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
  ARM: ftrace: avoid unnecessary literal loads
  ARM: ftrace: avoid redundant loads or clobbering IP
  ARM: ftrace: use trampolines to keep .init.text in branching range
  ARM: ftrace: use ADD not POP to counter PUSH at entry
  ARM: ftrace: ensure that ADR takes the Thumb bit into account
  ARM: make get_current() and __my_cpu_offset() __always_inline
  ...
2022-03-23 17:35:57 -07:00
Russell King (Oracle)
6c7cb60bff ARM: fix Thumb2 regression with Spectre BHB
When building for Thumb2, the vectors make use of a local label. Sadly,
the Spectre BHB code also uses a local label with the same number which
results in the Thumb2 reference pointing at the wrong place. Fix this
by changing the number used for the Spectre BHB local label.

Fixes: b9baf5c8c5 ("ARM: Spectre-BHB workaround")
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-11 11:40:08 -08:00
Ard Biesheuvel
7a8ca84a25 ARM: entry: fix unwinder problems caused by IRQ stacks
The IRQ stacks series made some changes to the unwinder, to permit
unwinding across different stacks. This is needed because otherwise, the
call stack would terminate at the point where the stack switch between
the task stack and the IRQ stack occurs, which would defeat any
diagnostics that rely on timer interrupts, such as RCU stall detection.

Unfortunately, getting the unwind annotations correct turns out to be
difficult, given that this now involves a frame pointer which needs to
point into the right location in the task stack when unwinding from the
IRQ stack. Getting this wrong for an exception handling routine results
in the stack pointer being unwound from the wrong location, causing all
kinds of issues in any subsequent unwind attempts, as reported by
Naresh here [0].

So let's simplify this, by deferring the stack switch to
call_with_stack(), which already has the correct unwind annotations, and
removing all the complicated handling of the stack frame from the IRQ
exception entrypoint itself.

[0] https://lore.kernel.org/all/CA+G9fYtpy8VgK+ag6OsA9TDrwi5YGU4hu7GM8xwpO7v6LrCD4Q@mail.gmail.com/

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-03-11 12:59:32 +00:00
Russell King (Oracle)
b9baf5c8c5 ARM: Spectre-BHB workaround
Work around the Spectre BHB issues for Cortex-A15, Cortex-A57,
Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 to be
safe, as it is affected by Spectre V2 in the same ways as
Cortex-A15.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-03-05 10:42:07 +00:00
Ard Biesheuvel
aa0a20f521 ARM: entry: avoid clobbering R9 in IRQ handler
Avoid using R9 in the IRQ handler code, as the entry code uses it for
tsk, and expects it to remain untouched between the IRQ entry and exit
code.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2022-01-25 09:53:52 +01:00
Ard Biesheuvel
8536a5ef88 ARM: 9169/1: entry: fix Thumb2 bug in iWMMXt exception handling
The Thumb2 version of the FP exception handling entry code treats the
register holding the CP number (R8) differently, resulting in the iWMMXT
CP number check being incorrect.

Fix this by unifying the ARM and Thumb2 code paths, and switch the
order of the additions of the TI_USED_CP offset and the shifted CP
index.

Cc: <stable@vger.kernel.org>
Fixes: b86040a59f ("Thumb-2: Implementation of the unified start-up and exceptions code")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-12-17 12:02:17 +00:00
Ard Biesheuvel
9c46929e79 ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.

To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06 12:49:17 +01:00
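
The core idea of tracking 'current' on UP can be sketched in a few lines of C
(names are illustrative; the real kernel additionally uses the ADD/ADD/LDR
relocation trick described above to avoid a literal load):

  struct task_struct;

  /* On a uniprocessor system only one task can be current at any time,
     so a single global variable is sufficient. */
  static struct task_struct *__current_task;      /* hypothetical name */

  static inline struct task_struct *get_current(void)
  {
          return __current_task;
  }

  static inline void set_current(struct task_struct *tsk)
  {
          __current_task = tsk;                   /* called from switch_to() */
  }
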
Ard Biesheuvel
7b9896c352 ARM: percpu: add SMP_ON_UP support
Permit the use of the TPIDRPRW system register for carrying the per-CPU
offset in generic SMP configurations that also target non-SMP capable
ARMv6 cores. This uses the SMP_ON_UP code patching framework to turn all
TPIDRPRW accesses into reads/writes of entry #0 in the __per_cpu_offset
array.

While at it, switch over some existing direct TPIDRPRW accesses in asm
code to invocations of a new helper that is patched in the same way when
necessary.

Note that CPU_V6+SMP without SMP_ON_UP results in a kernel that does not
boot on v6 CPUs without SMP extensions, so add this dependency to
Kconfig as well.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06 12:49:17 +01:00
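
The per-CPU offset access described above boils down to a TPIDRPRW read; a
hedged sketch of the unpatched (true SMP) path:

  /* Read the per-CPU offset from TPIDRPRW (CP15 c13, c0, opc2 4).  On cores
     without the SMP extensions, the SMP_ON_UP patching described above
     replaces such accesses with a load of __per_cpu_offset[0] instead. */
  static inline unsigned long my_cpu_offset(void)
  {
          unsigned long off;

          asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off));
          return off;
  }
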
Ard Biesheuvel
4e918ab13e ARM: assembler: add optimized ldr/str macros to load variables from memory
We will be adding variable loads to various hot paths, so it makes sense
to add a helper macro that can load variables from asm code without the
use of literal pool entries. On v7 or later, we can simply use MOVW/MOVT
pairs, but on earlier cores, this requires a bit of hackery to emit an
instruction sequence that implements this using a sequence of ADD/LDR
instructions.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06 12:49:16 +01:00
Ard Biesheuvel
831a469bc1 ARM: entry: preserve thread_info pointer in switch_to
Tweak the UP stack protector handling code so that the thread info
pointer is preserved in R7 until set_current is called. This is needed
for a subsequent patch that implements THREAD_INFO_IN_TASK and
set_current for UP as well.

This also means we will prefer the per-task protector on UP systems that
implement the thread ID registers, so tweak the preprocessor
conditionals to reflect this.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-06 12:49:16 +01:00
Arnd Bergmann
54f481a230 ARM: remove old-style irq entry
The last user of arch_irq_handler_default is gone now, so the
entry-macro-multi.S file and all references to mach/entry-macro.S can
be removed, as well as the asm_do_IRQ() entrypoint into the interrupt
handling routines implemented in C.

Note: The ARMv7-M entry still uses its own top-level IRQ entry, calling
nvic_handle_irq() from assembly. This could be changed to go through
generic_handle_arch_irq() as well, but it's unclear to me if there are
any benefits.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[ardb: keep irq_handler macro as it carries all the IRQ stack handling]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
2021-12-06 12:49:11 +01:00
Ard Biesheuvel
a1c510d0ad ARM: implement support for vmap'ed stacks
Wire up the generic support for managing task stack allocations via vmalloc,
and implement the entry code that detects whether we faulted because of a
stack overrun (or future stack overrun caused by pushing the pt_regs array).

While this adds a fair amount of tricky entry asm code, it should be
noted that it only adds a TST + branch to the svc_entry path. The code
implementing the non-trivial handling of the overflow stack is emitted
out-of-line into the .text section.

Since on ARM, we rely on do_translation_fault() to keep PMD level page
table entries that cover the vmalloc region up to date, we need to
ensure that we don't hit such a stale PMD entry when accessing the
stack. So we do a dummy read from the new stack while still running from
the old one on the context switch path, and bump the vmalloc_seq counter
when PMD level entries in the vmalloc range are modified, so that the MM
switch fetches the latest version of the entries.

Note that we need to increase the per-mode stack by 1 word, to gain some
space to stash a GPR until we know it is safe to touch the stack.
However, due to the cacheline alignment of the struct, this does not
actually increase the memory footprint of the struct stack array at all.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Keith Packard <keithpac@amazon.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03 15:11:33 +01:00
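
The dummy read mentioned above can be illustrated with a hedged C sketch (the
helper name is made up; in the real kernel the probe is issued from the
context-switch assembly):

  #include <linux/compiler.h>
  #include <linux/sched/task_stack.h>

  /* Touch the next task's vmalloc'ed stack while still running on the old
     one, so that a stale PMD covering it is faulted in via
     do_translation_fault() before SP is switched to it. */
  static inline void probe_new_stack(struct task_struct *next)
  {
          READ_ONCE(*(unsigned long *)task_stack_page(next));
  }
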
Ard Biesheuvel
ae5cc07da8 ARM: entry: rework stack realignment code in svc_entry
The original Thumb-2 enablement patches updated the stack realignment
code in svc_entry to work around the lack of a STMIB instruction in
Thumb-2, by subtracting 4 from the frame size, inverting the sense of
the misalignment check, and changing to a STMIA instruction and a final
stack push of a 4 byte quantity that results in the stack becoming
aligned at the end of the sequence. It also pushes and pops R0 to the
stack in order to have a temp register that Thumb-2 allows in general
purpose ALU instructions, as TST using SP is not permitted.

Both are a bit problematic for vmap'ed stacks, as using the stack is
only permitted after we decide that we did not overflow the stack, or
have already switched to the overflow stack.

As for the alignment check: the current approach creates a corner case
where, if the initial SUB of SP ends up right at the start of the stack,
we will end up subtracting another 8 bytes and overflowing it.  This
means we would need to add the overflow check *after* the SUB that
deliberately misaligns the stack. However, this would require us to keep
local state (i.e., whether we performed the subtract or not) across the
overflow check, but without any GPRs or stack available.

So let's switch to an approach where we don't use the stack, and where
the alignment check of the stack pointer occurs in the usual way, as
this is guaranteed not to result in overflow. This means we will be able
to do the overflow check first.

While at it, switch to R1 so the mode stack pointer in R0 remains
accessible.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03 15:11:33 +01:00
Ard Biesheuvel
b832faec33 ARM: switch_to: clean up Thumb2 code path
The load-multiple instruction that essentially performs the switch_to
operation in ARM mode, by loading all callee-save registers as well as the
stack pointer and the program counter, is split into 3 separate loads
for Thumb-2, with the IP register used as a temporary to capture the
value of R4 before it gets overwritten.

We can clean this up a bit, by sticking with a single LDMIA instruction,
but one that pops SP and PC into IP and LR, respectively, and by using
ordinary move register and branch instructions to get those values into
SP and PC. This also allows us to move the set_current call closer to
the assignment of SP, reducing the window where those are mutually out
of sync. This is especially relevant for CONFIG_VMAP_STACK, which is
being introduced in a subsequent patch, where we need to issue a load
that might fault from the new stack while running from the old one, to
ensure that stale PMD entries in the VMALLOC space are synced up.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Keith Packard <keithpac@amazon.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03 15:11:32 +01:00
Ard Biesheuvel
d4664b6c98 ARM: implement IRQ stacks
Now that we no longer rely on the stack pointer to access the current
task struct or thread info, we can implement support for IRQ stacks
cleanly as well.

Define a per-CPU IRQ stack and switch to this stack when taking an IRQ,
provided that we were not already using that stack in the interrupted
context. This is never the case for IRQs taken from user space, but ones
taken while running in the kernel could fire while one taken from user
space has not completed yet.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Keith Packard <keithpac@amazon.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-12-03 15:11:31 +01:00
Linus Torvalds
ab2e7f4b46 ARM development for 5.16:
- Rejig task/thread info to place thread info in task struct
 - Amba bus cleanups (removing unused functions)
 - Handle Amba device probe without IRQ domains
 - Parse linux,usable-memory-range in decompressor
 - Mark OCRAM as read-only after initialisation
 - Refactor page fault handling
 - Fix PXN handling with LPAE kernels
 - Warning and build fixes from Arnd

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:

 - Rejig task/thread info to place thread info in task struct

 - Amba bus cleanups (removing unused functions)

 - Handle Amba device probe without IRQ domains

 - Parse linux,usable-memory-range in decompressor

 - Mark OCRAM as read-only after initialisation

 - Refactor page fault handling

 - Fix PXN handling with LPAE kernels

 - Warning and build fixes from Arnd

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (32 commits)
  ARM: 9151/1: Thumb2: avoid __builtin_thread_pointer() on Clang
  ARM: 9150/1: Fix PID_IN_CONTEXTIDR regression when THREAD_INFO_IN_TASK=y
  ARM: 9147/1: add printf format attribute to early_print()
  ARM: 9146/1: RiscPC needs older gcc version
  ARM: 9145/1: patch: fix BE32 compilation
  ARM: 9144/1: forbid ftrace with clang and thumb2_kernel
  ARM: 9143/1: add CONFIG_PHYS_OFFSET default values
  ARM: 9142/1: kasan: work around LPAE build warning
  ARM: 9140/1: allow compile-testing without machine record
  ARM: 9137/1: disallow CONFIG_THUMB with ARMv4
  ARM: 9136/1: ARMv7-M uses BE-8, not BE-32
  ARM: 9135/1: kprobes: address gcc -Wempty-body warning
  ARM: 9101/1: sa1100/assabet: convert LEDs to gpiod APIs
  ARM: 9131/1: mm: Fix PXN process with LPAE feature
  ARM: 9130/1: mm: Provide die_kernel_fault() helper
  ARM: 9126/1: mm: Kill page table base print in show_pte()
  ARM: 9127/1: mm: Cleanup access_error()
  ARM: 9129/1: mm: Kill task_struct argument for __do_page_fault()
  ARM: 9128/1: mm: Refactor the __do_page_fault()
  ARM: imx6: mark OCRAM mapping read-only
  ...
2021-11-02 11:33:15 -07:00
Mark Rutland
a7b0872e96 irq: arm: perform irqentry in entry code
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/arm
perform all the irqentry accounting in its entry code.

For configurations with CONFIG_GENERIC_IRQ_MULTI_HANDLER, we can use
generic_handle_arch_irq(). Other than asm_do_IRQ(), all C calls to
handle_IRQ() are from irqchip handlers which will be called from
generic_handle_arch_irq(), so to avoid double accounting IRQ entry, the
entry logic is moved from handle_IRQ() into asm_do_IRQ().

For ARMv7M the entry assembly is tightly coupled with the NVIC irqchip, and
while the entry code should logically live under arch/arm/, moving the
entry logic there makes things more convoluted. So for now, place the
entry logic in the NVIC irqchip, but separated into a separate
function to make the split of responsibility clear.

For all other configurations without CONFIG_GENERIC_IRQ_MULTI_HANDLER,
IRQ entry is already handled in arch code, and requires no changes.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-25 10:05:31 +01:00
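
The generic helper referred to above has roughly the following shape (a hedged
paraphrase of kernel/irq/handle.c, not a verbatim copy):

  #include <linux/hardirq.h>      /* irq_enter()/irq_exit() */
  #include <linux/irq.h>          /* handle_arch_irq with GENERIC_IRQ_MULTI_HANDLER */
  #include <asm/irq_regs.h>       /* set_irq_regs() */

  /* With GENERIC_IRQ_MULTI_HANDLER, the arch entry assembly can simply call
     this, and all irqentry accounting happens in one place. */
  asmlinkage void generic_handle_arch_irq(struct pt_regs *regs)
  {
          struct pt_regs *old_regs;

          irq_enter();
          old_regs = set_irq_regs(regs);
          handle_arch_irq(regs);  /* the handler installed via set_handle_irq() */
          set_irq_regs(old_regs);
          irq_exit();
  }
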
Ard Biesheuvel
18ed1c01a7 ARM: smp: Enable THREAD_INFO_IN_TASK
Now that we no longer rely on thread_info living at the base of the task
stack to be able to access the 'current' pointer, we can wire up the
generic support for moving thread_info into the task struct itself.

Note that this requires us to update the cpu field in thread_info
explicitly, now that the core code no longer does so. Ideally, we would
switch the percpu code to access the cpu field in task_struct instead,
but this unleashes #include circular dependency hell.

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27 16:54:02 +02:00
Ard Biesheuvel
50596b7559 ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80 ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256] ; 0x4e8  <- stack canary
       9313            str     r3, [sp, #76]   ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]             <- preempt count

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27 16:54:02 +02:00
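
A hedged sketch of the access pattern described above, reading the
user-read-only thread ID register (TPIDRURO, CP15 c13, c0, opc2 3) either via
the compiler builtin or an explicit MRC (the preprocessor switch is
illustrative):

  struct task_struct;

  static inline struct task_struct *current_sketch(void)
  {
          struct task_struct *cur;

  #ifdef USE_BUILTIN_THREAD_POINTER
          /* the compiler knows this reads the TLS register, so several
             accesses in one function can share a single MRC */
          cur = __builtin_thread_pointer();
  #else
          asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));  /* TPIDRURO */
  #endif
          return cur;
  }
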
Arnd Bergmann
12c3dca25d ARM: ep93xx: remove MaverickCrunch support
The MaverickCrunch support for ep93xx never made it into glibc and
was removed from gcc in its 4.8 release in 2012. It is now one of
the last parts of arch/arm/ that fails to build with the clang
integrated assembler, which is unlikely to ever want to support it.

The two alternatives are to force the use of binutils/gas when
building the crunch support, or to remove it entirely.

According to Hartley Sweeten:

 "Martin Guy did a lot of work trying to get the maverick crunch working
  but I was never able to successfully use it for anything. It "kind"
  of works but depending on the EP93xx silicon revision there are still
  a number of hardware bugs that either give imprecise or garbage results.

  I have no problem with removing the kernel support for the maverick
  crunch."

Unless someone else comes up with a good reason to keep it around,
remove it now. This touches mostly the ep93xx platform, but removes
a bit of code from ARM common ptrace and signal frame handling as well.

If there are remaining users of MaverickCrunch, they can use LTS
kernels for at least another five years before kernel support ends.

Link: https://lore.kernel.org/linux-arm-kernel/20210802141245.1146772-1-arnd@kernel.org/
Link: https://lore.kernel.org/linux-arm-kernel/20210226164345.3889993-1-arnd@kernel.org/
Link: https://github.com/ClangBuiltLinux/linux/issues/1272
Link: https://gcc.gnu.org/legacy-ml/gcc/2008-03/msg01063.html
Cc: "Martin Guy" <martinwguy@martinwguy@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-08-04 13:30:04 +02:00
Ard Biesheuvel
f77ac2e378 ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode
There are a couple of problems with the exception entry code that deals
with FP exceptions (which are reported as UND exceptions) when building
the kernel in Thumb2 mode:
- the conditional branch to vfp_kmode_exception in vfp_support_entry()
  may be out of range for its target, depending on how the linker decides
  to arrange the sections;
- when the UND exception is taken in kernel mode, the emulation handling
  logic is entered via the 'call_fpe' label, which means we end up using
  the wrong value/mask pairs to match and detect the NEON opcodes.

Since UND exceptions in kernel mode are unlikely to occur on a hot path
(as opposed to the user mode version which is invoked for VFP support
code and lazy restore), we can use the existing undef hook machinery for
any kernel mode instruction emulation that is needed, including calling
the existing vfp_kmode_exception() routine for unexpected cases. So drop
the call to call_fpe, and instead, install an undef hook that will get
called for NEON and VFP instructions that trigger an UND exception in
kernel mode.

While at it, make sure that the PC correction is accurate for the
execution mode where the exception was taken, by checking the PSR
Thumb bit.

Cc: Dmitry Osipenko <digetx@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Fixes: eff8728fe6 ("vmlinux.lds.h: Add PGO and AutoFDO input sections")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-08 10:15:00 +00:00
Linus Walleij
c12366ba44 ARM: 9015/2: Define the virtual space of KASan's shadow region
Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:

 +----+ 0xffffffff
 |    |
 |    | |-> Static kernel image (vmlinux) BSS and page table
 |    |/
 +----+ PAGE_OFFSET
 |    |
 |    | |->  Loadable kernel modules virtual address space area
 |    |/
 +----+ MODULES_VADDR = KASAN_SHADOW_END
 |    |
 |    | |-> The shadow area of kernel virtual address.
 |    |/
 +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
 |    |   shadow address of MODULES_VADDR
 |    | |
 |    | |
 |    | |-> The user space area in lowmem. The kernel address
 |    | |   sanitizer do not use this space, nor does it map it.
 |    | |
 |    | |
 |    | |
 |    | |
 |    |/
 ------ 0

0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.

KASAN_SHADOW_START:
 This value begins with the MODULE_VADDR's shadow address. It is the
 start of kernel virtual space. Since we have modules to load, we need
 to cover also that area with shadow memory so we can find memory
 bugs in modules.

KASAN_SHADOW_END
 This value is the 0x100000000's shadow address: the mapping that would
 be after the end of the kernel memory at 0xffffffff. It is the end of
 kernel address sanitizer shadow area. It is also the start of the
 module area.

KASAN_SHADOW_OFFSET:
 This value is used to map an address to the corresponding shadow
 address by the following formula:

   shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

 As you would expect, >> 3 is equal to dividing by 8, meaning each
 byte in the shadow memory covers 8 bytes of kernel memory, so one
 bit of shadow memory per byte of kernel memory is used.

 The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
 on the VMSPLIT layout of the system: the kernel and userspace can
 split up lowmem in different ways according to needs, so we calculate
 the shadow offset depending on this.

When kasan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.s file.

The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.

We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static
    image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
    so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0xc1000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x18200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to
    KASAN_SHADOW_END at 0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x3f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
    KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static
    image uses 0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
    so the modules use addresses 0x7f000000 .. 0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x81000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x10200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to
    KASAN_SHADOW_END at 0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0x7f000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
    KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
  and this is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static
    image uses 0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
    so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x41000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x08200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to
    KASAN_SHADOW_END at 0xbeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xbf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
    KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with
  full 1 GB low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static
    image uses 0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under
    the worst case (using ARM instructions) is
    PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
    so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be
    covered with shadow memory. That is 0x51000000 bytes
    of memory.
  - 1/8 of that is needed for its shadow memory, so
    0x0a200000 bytes of shadow memory is needed. We
    "steal" that from the remaining lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to
    KASAN_SHADOW_END at 0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any
    kernel address as 0xaf000000 needs to map to the first
    byte of shadow memory and 0xffffffff needs to map to
    the last byte of shadow memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
    KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
  is an error value. We should always match one of the
  above shadow offsets.

When we do this, TASK_SIZE will sometimes take slightly odd values
that will not fit into the immediate field of a MOV instruction.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:

-       mov     r1, #TASK_SIZE
+       ldr     r1, =TASK_SIZE

or

-       cmp     r4, #TASK_SIZE
+       ldr     r0, =TASK_SIZE
+       cmp     r4, r0

This is done to avoid an immediate #TASK_SIZE that needs to
fit into a limited number of bits.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-10-27 12:11:08 +00:00
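
The shadow arithmetic above can be checked with a few lines of host-side C;
the constants are the ones worked out for the default 3G/1G split:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          const uint32_t modules_vaddr = 0xbf000000u; /* lowest shadowed address */
          const uint32_t shadow_start  = 0xb6e00000u; /* KASAN_SHADOW_START      */

          /* shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET, therefore: */
          const uint32_t offset = shadow_start - (modules_vaddr >> 3);

          assert(offset == 0x9f000000u);
          assert((modules_vaddr >> 3) + offset == shadow_start);
          /* the last byte of kernel memory shadows to just below MODULES_VADDR */
          assert((0xffffffffu >> 3) + offset == 0xbeffffffu);
          return 0;
  }
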
Russell King
747ffc2fcf ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h
Consolidate the user access assembly code to asm/uaccess-asm.h.  This
moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable,
uaccess_disable, uaccess_save, uaccess_restore macros, and creates two
new ones for exception entry and exit - uaccess_entry and uaccess_exit.

This makes the uaccess_save and uaccess_restore macros private to
asm/uaccess-asm.h.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03 17:30:24 +01:00
Thomas Gleixner
e7289c6de8 sched/rt, ARM: Use CONFIG_PREEMPTION
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
Both PREEMPT and PREEMPT_RT require the same functionality which today
depends on CONFIG_PREEMPT.

Switch the entry code and cache code over to use CONFIG_PREEMPTION and add output
in show_stack() for PREEMPT_RT.

[bigeasy: +traps.c]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20191015191821.11479-2-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-12-08 14:37:32 +01:00
Mauro Carvalho Chehab
dc7a12bdfc docs: arm: convert docs to ReST and rename to *.rst
Converts the ARM text files to ReST, preparing them to be an
architecture book.

The conversion is actually:
  - add blank lines and indentation in order to identify paragraphs;
  - fix tables markups;
  - add some lists markups;
  - mark literal blocks;
  - adjust title markups.

At its new index.rst, let's add a :orphan: while this is not linked to
the main index.rst file, in order to avoid build warnings.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Reviewed-by Corentin Labbe <clabbe.montjoie@gmail.com> # For sun4i-ss
2019-07-15 09:20:24 -03:00
Thomas Gleixner
d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Stefan Agner
e44fc38818 ARM: 8844/1: use unified assembler in assembly files
Use unified assembler syntax (UAL) in assembly files. Divided
syntax is considered deprecated. This will also allow building
the kernel using LLVM's integrated assembler.

Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26 11:26:07 +00:00
Palmer Dabbelt
4c301f9b6a ARM: Convert to GENERIC_IRQ_MULTI_HANDLER
Converts the ARM interrupt code to use the recently added
GENERIC_IRQ_MULTI_HANDLER, which is essentially just a copy of ARM's
existing MULTI_IRQ_HANDLER.  The only changes are:

* handle_arch_irq is now defined in a generic C file instead of an
  arm-specific assembly file.
 
* handle_arch_irq is now marked as __ro_after_init.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux@armlinux.org.uk
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: jonas@southpole.se
Cc: stefan.kristiansson@saunalahti.fi
Cc: shorne@gmail.com
Cc: jason@lakedaemon.net
Cc: marc.zyngier@arm.com
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: nicolas.pitre@linaro.org
Cc: vladimir.murzin@arm.com
Cc: keescook@chromium.org
Cc: jinb.park7@gmail.com
Cc: yamada.masahiro@socionext.com
Cc: alexandre.belloni@bootlin.com
Cc: pombredanne@nexb.com
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: kstewart@linuxfoundation.org
Cc: jhogan@kernel.org
Cc: mark.rutland@arm.com
Cc: ard.biesheuvel@linaro.org
Cc: james.morse@arm.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: openrisc@lists.librecores.org
Link: https://lkml.kernel.org/r/20180622170126.6308-3-palmer@sifive.com
2018-08-03 12:14:08 +02:00
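
With GENERIC_IRQ_MULTI_HANDLER, a root interrupt controller driver installs
its top-level handler from C; a hedged sketch (driver name and hardware access
are made up):

  #include <linux/init.h>
  #include <linux/irq.h>
  #include <asm/exception.h>

  /* Hypothetical root irqchip handler registered instead of a mach-specific
     entry macro. */
  static void __exception_irq_entry my_soc_handle_irq(struct pt_regs *regs)
  {
          /* read the pending hwirq from the controller, then dispatch it,
             for example via this driver's irq_domain */
  }

  static void __init my_soc_irq_init(void)
  {
          set_handle_irq(my_soc_handle_irq);
  }
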
Linus Torvalds
050e9baa9d Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables
The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.

That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack
protector option, but also the strong one.  This does that by just
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-14 12:21:18 +09:00
Russell King
c608906165 ARM: probes: avoid adding kprobes to sensitive kernel-entry/exit code
Avoid adding kprobes to any of the kernel entry/exit or startup
assembly code, or code in the identity-mapped region.  This code does
not conform to the standard C conventions, which means that the
expectations of the kprobes code are not fulfilled.

Placing kprobes at some of these locations results in the kernel trying
to return to userspace addresses while retaining the CPU in kernel mode.

Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-12-17 22:14:21 +00:00
Russell King
e558bdc21a Merge branches 'fixes' and 'misc' into for-linus 2017-09-09 16:34:41 +01:00
Russell King
1abd350237 ARM: align .data section
Robert Jarzmik reports that his PXA25x system fails to boot with 4.12,
failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:

   0xc0019e20 <+0>:     ldr     r1, [pc, #788]
   0xc0019e24 <+4>:     ldr     r0, [r1]	<== here

with r1 containing 0xc06f82cd, which is the address of "clean_addr".
Examination of the System.map shows:

c06f22c8 D user_pmd_table
c06f22cc d __warned.19178
c06f22cd d clean_addr

indicating that a .data.unlikely section has appeared just before the
.data section from proc-xscale.S.  According to objdump -h, it appears
that our assembly files default their .data alignment to 2**0, which
is bad news if the preceding .data section size is not power-of-2
aligned at link time.

Add the appropriate .align directives to all assembly files in arch/arm
that are missing them where we require an appropriate alignment.

Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-14 16:22:55 +01:00
Arnd Bergmann
ffa47aa678 ARM: Prepare for randomized task_struct
With the new task struct randomization, we can run into a build
failure for certain random seeds, which will place fields beyond
the allowed immediate size in the assembly:

arch/arm/kernel/entry-armv.S: Assembler messages:
arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offset.h are affected, and I'm changing
both of them here to work correctly in all configurations.

One more macro has the problem, but is currently unused, so this
removes it instead of adding complexity.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[kees: Adjust commit log slightly]
Signed-off-by: Kees Cook <keescook@chromium.org>
2017-06-30 12:00:50 -07:00
Russell King
87eed3c74d ARM: fix address limit restoration for undefined instructions
During boot, sometimes the kernel will test to see if an instruction
causes an undefined instruction exception.  Unfortunately, the exit
path for these exceptions did not restore the address limit, which
causes the rootfs mount code to fail.  Fix the missing address limit
restoration.

Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-08-09 22:57:59 +01:00
Russell King
e6978e4bf1 ARM: save and reset the address limit when entering an exception
When we enter an exception, the current address limit should not apply
to the exception context: if the exception context wishes to access
kernel space via the user accessors (eg, perf code), it must explicitly
request such access.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-07-07 16:01:01 +01:00
Russell King
e6a9dc6129 ARM: introduce svc_pt_regs structure
Since the privileged mode pt_regs are an extended version of the saved
userland pt_regs, introduce a new svc_pt_regs structure to describe this
layout.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22 19:54:52 +01:00
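
A hedged sketch of the layout idea: the privileged-mode frame embeds the
ordinary pt_regs and appends kernel-only state (the extra fields shown are
examples rather than an exhaustive list):

  #include <linux/kernel.h>       /* container_of() */
  #include <asm/ptrace.h>         /* struct pt_regs */

  struct svc_pt_regs {
          struct pt_regs regs;    /* the ordinary saved register frame */
          u32 dacr;               /* e.g. saved domain access control register */
          u32 addr_limit;         /* e.g. saved address limit, see commits above */
  };

  #define to_svc_pt_regs(r) container_of(r, struct svc_pt_regs, regs)
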
Russell King
5745eef6b8 ARM: rename S_FRAME_SIZE to PT_REGS_SIZE
S_FRAME_SIZE is no longer the size of the kernel stack frame, so this
name is misleading.  It is the size of the kernel pt_regs structure.
Name it so.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22 19:54:28 +01:00
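
Constants such as PT_REGS_SIZE reach the assembly via asm-offsets; a hedged
sketch of the pattern (the real file is arch/arm/kernel/asm-offsets.c, this
fragment is only illustrative):

  #include <linux/kbuild.h>
  #include <asm/ptrace.h>

  int main(void)
  {
          /* emitted as a #define visible to the assembler, e.g. entry-armv.S */
          DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
          return 0;
  }
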
Ard Biesheuvel
31b96cae5c ARM: 8515/2: move .vectors and .stubs sections back into the kernel VMA
Commit b9b32bf70f ("ARM: use linker magic for vectors and vector stubs")
updated the linker script to emit the .vectors and .stubs sections into a
VMA range that is zero based and disjoint from the normal static kernel
region. The reason for that was that this way, the sections can be placed
exactly 4 KB apart, while the payload of the .vectors section is only 32
bytes.

Since the symbols that are part of the .stubs section are emitted into the
kallsyms table, they appear with zero based addresses as well, e.g.,

  00001004 t vector_rst
  00001020 t vector_irq
  000010a0 t vector_dabt
  00001120 t vector_pabt
  000011a0 t vector_und
  00001220 t vector_addrexcptn
  00001240 t vector_fiq
  00001240 T vector_fiq_offset

As this confuses perf when it accesses the kallsyms tables, commit
7122c3e915 ("scripts/link-vmlinux.sh: only filter kernel symbols for
arm") implemented a somewhat ugly special case for ARM, where the value
of CONFIG_PAGE_OFFSET is passed to scripts/kallsyms, and symbols whose
addresses are below it are filtered out. Note that this special case only
applies to CONFIG_XIP_KERNEL=n, not because the issue the patch addresses
exists only in that case, but because finding a limit below which to apply
the filtering is not entirely straightforward.

Since the .vectors and .stubs sections contain position independent code
that is never executed in place, we can emit it at its most likely runtime
VMA (for more recent CPUs), which is 0xffff0000 for the vector table and
0xffff1000 for the stubs. Not only does this fix the perf issue with
kallsyms, allowing us to drop the special case in scripts/kallsyms
entirely, it also gives debuggers a more realistic view of the address
space, and setting breakpoints or single stepping through code in the
vector table or the stubs is more likely to work as expected on CPUs that
use a high vector address. E.g.,

  00001240 A vector_fiq_offset
  ...
  c0c35000 T __init_begin
  c0c35000 T __vectors_start
  c0c35020 T __stubs_start
  c0c35020 T __vectors_end
  c0c352e0 T _sinittext
  c0c352e0 T __stubs_end
  ...
  ffff1004 t vector_rst
  ffff1020 t vector_irq
  ffff10a0 t vector_dabt
  ffff1120 t vector_pabt
  ffff11a0 t vector_und
  ffff1220 t vector_addrexcptn
  ffff1240 T vector_fiq

(Note that vector_fiq_offset is now an absolute symbol, which kallsyms
already ignores by default)

The LMA footprint is identical with or without this change, only the VMAs
are different:

  Before:
  Idx Name          Size      VMA       LMA       File off  Algn
   ...
   14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   15 .vectors      00000020  00000000  c0c35000  00a40000  2**1
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   16 .stubs        000002c0  00001000  c0c35020  00a41000  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   ...

  After:
  Idx Name          Size      VMA       LMA       File off  Algn
   ...
   14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   15 .vectors      00000020  ffff0000  c0c35000  00a40000  2**1
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   16 .stubs        000002c0  ffff1000  c0c35020  00a41000  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   ...

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-11 15:33:39 +00:00
Ard Biesheuvel
b48da55830 ARM: 8514/1: remove duplicate definitions of __vectors_start and __stubs_start
Commit b9b32bf70f ("ARM: use linker magic for vectors and vector stubs")
introduced new global definitions of __vectors_start and __stubs_start,
and changed the existing ones to have internal linkage only. However, these
symbols are still visible to kallsyms, and due to the way the .vectors and
.stubs sections are emitted at the base of the VMA space, these duplicate
definitions have conflicting values.

  $ nm -n vmlinux | grep -E '__vectors|__stubs'
  00000000 t __vectors_start
  00001000 t __stubs_start
  c0e77000 T __vectors_start
  c0e77020 T __stubs_start

This is completely harmless by itself, since the wrong values are local
symbols that cannot be referenced by other object files directly. However,
since these symbols are also listed in the kallsyms symbol table in some
cases (i.e., CONFIG_KALLSYMS_ALL=y and CONFIG_XIP_KERNEL=y), having these
conflicting values can be confusing. So either remove them, or make them
strictly local.

Acked-by: Chris Brandt <chris.brandt@renesas.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-11 15:33:39 +00:00
Russell King
db695c0509 ARM: remove user cmpxchg syscall
Mark Brand reports that a NEEDS_SYSCALL_FOR_CMPXCHG enabled kernel would
open a security hole in the ghost syscall used to implement cmpxchg, as
it fails to validate the user pointer.

However, in order for this option to be enabled, you'd need to be
building a pre-ARMv6 kernel with SMP support.  There is only one system
known which fits that, which is an early ARM SMP FPGA implementation
based on the ARM926T.

In any case, the Kconfig does not allow SMP to be enabled for pre-ARMv6
systems.

Moreover, even if NEEDS_SYSCALL_FOR_CMPXCHG were to be enabled, the
kernel would not build as __ARM_NR_cmpxchg64 is not defined.

The simple answer is to remove the buggy code.

Reported-by: Mark Brand <markbrand@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-10-03 16:36:45 +01:00
Russell King
40d3f02851 Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus 2015-09-03 15:28:37 +01:00