* for-next/kprobes:
arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler
arm64: Prohibit instrumentation on arch_stack_walk()
* for-next/kdump:
arm64: kdump: Support crashkernel=X fall back to reserve region above DMA zones
arm64: kdump: Provide default size when crashkernel=Y,low is not specified
* for-next/cpufeature:
kselftest/arm64: Add SVE 2.1 to hwcap test
arm64/hwcap: Add support for SVE 2.1
kselftest/arm64: Add FEAT_RPRFM to the hwcap test
arm64/hwcap: Add support for FEAT_RPRFM
kselftest/arm64: Add FEAT_CSSC to the hwcap selftest
arm64/hwcap: Add support for FEAT_CSSC
arm64: Enable data independent timing (DIT) in the kernel
Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it
fails to find a kprobe corresponding to the address.
Since arm64 kprobes uses stop_machine-based text patching for removing
the BRK, any running kprobe_break_handler() is guaranteed to have
finished by that point, and the kprobe is only removed from its hash
list after the BRK has been removed. Thus, if kprobe_break_handler()
fails to find a kprobe in the hash list, there is a bug.
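For illustration, the core of the check looks roughly like this (a sketch
of the BRK hook; helper names follow the existing arm64 kprobes code):

    static int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
    {
            struct kprobe *p;

            /* Look up the kprobe registered for the faulting address. */
            p = get_kprobe((kprobe_opcode_t *)instruction_pointer(regs));
            if (WARN_ON_ONCE(!p))
                    return DBG_HOOK_ERROR; /* no kprobe here: a bug, report it */

            /* ... normal kprobe pre-/post-handler dispatch ... */
            return DBG_HOOK_HANDLED;
    }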
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/166994753273.439920.6629626290560350760.stgit@devnote3
Signed-off-by: Will Deacon <will@kernel.org>
Since arm64's do_page_fault() can handle the page fault more appropriately
than kprobe_fault_handler() according to the context, let it handle the
page fault instead of simply calling fixup_exception() from
kprobe_fault_handler().
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/166994752269.439920.4801339965959400456.stgit@devnote3
Signed-off-by: Will Deacon <will@kernel.org>
The build test robot pointed out that there's a build failure when:
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=n
... due to some mismatched ifdeffery, some of which checks
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and some of which checks
CONFIG_DYNAMIC_FTRACE_WITH_ARGS, leading to some missing definitions expected
by the core code when CONFIG_DYNAMIC_FTRACE=n and consequently
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=n.
There's really not much point in supporting CONFIG_DYNAMIC_FTRACE=n (AKA
static ftrace). All supported toolchains allow us to implement
DYNAMIC_FTRACE, distributions all prefer DYNAMIC_FTRACE, and both
powerpc and s390 removed support for static ftrace in commits:
0c0c52306f ("powerpc: Only support DYNAMIC_FTRACE not static")
5d6a016349 ("s390/ftrace: enforce DYNAMIC_FTRACE if FUNCTION_TRACER is selected")
... and according to Steven, static ftrace is only supported on x86 to
allow testing that the core code still functions in this configuration.
Given that, let's simplify matters by removing arm64's support for
static ftrace. This avoids the problem originally reported, and leaves
us with less code to maintain.
Fixes: 26299b3f6b ("ftrace: arm64: move from REGS to ARGS")
Link: https://lore.kernel.org/r/202211212249.livTPi3Y-lkp@intel.com
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20221122163624.1225912-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
If a Cortex-A715 CPU sees a page mapping permission change from executable
to non-executable, it may corrupt the ESR_ELx and FAR_ELx registers on the
next instruction abort caused by a permission fault.
Only user-space performs the executable to non-executable permission
transition, via the mprotect() system call, which calls the
ptep_modify_prot_start() and ptep_modify_prot_commit() helpers while
changing the page mapping. The platform code can override these helpers
via __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION.
Work around the problem by doing a break-before-make TLB invalidation for
all executable user space mappings that go through the mprotect() system
call. This overrides ptep_modify_prot_start() and ptep_modify_prot_commit()
by defining __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION on the platform, thus
giving an opportunity to intercept user space exec mappings and do the
necessary TLB invalidation. Similar interception is also implemented for
HugeTLB.
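A sketch of such an override could look like the following (the capability
name and exact condition are assumptions for illustration; the helpers are
the generic ptep_modify_prot_*() API):

    pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
                                 pte_t *ptep)
    {
            /*
             * If the erratum workaround applies and the mapping is user
             * executable, do break-before-make: clear the entry and flush
             * the TLB before the new permissions are written back.
             */
            if (cpus_have_const_cap(ARM64_WORKAROUND_2645198) &&    /* assumed cap name */
                pte_user_exec(READ_ONCE(*ptep)))
                    return ptep_clear_flush(vma, addr, ptep);

            return ptep_get_and_clear(vma->vm_mm, addr, ptep);
    }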
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221116140915.356601-3-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
For crashkernel=X without '@offset', select a region within the DMA zones
first, and fall back to reserving a region above the DMA zones. This allows
users to use the same configuration on multiple platforms.
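In terms of the memblock API, the selection is roughly (a sketch; the limit
names follow arm64's reserve_crashkernel() and are illustrative here):

    /* First try to place the crash kernel below the DMA zone limit... */
    crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
                                           CRASH_ALIGN, CRASH_ADDR_LOW_MAX);
    if (!crash_base)
            /* ...then fall back to anywhere in the accessible physical range. */
            crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
                                                   CRASH_ALIGN, CRASH_ADDR_HIGH_MAX);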
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221116121044.1690-3-thunder.leizhen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Try to allocate at least 128 MiB of low memory automatically for the case
where crashkernel=,high is explicitly specified while crashkernel=,low is
omitted. This allows users to focus more on the high memory requirements
of their workload rather than the low memory requirements of crash kernel
booting.
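For example (sizes illustrative):

    crashkernel=2G,high                          # 2 GiB high, plus a default 128 MiB low region
    crashkernel=2G,high crashkernel=256M,low     # an explicit ,low value still overrides the default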
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221116121044.1690-2-thunder.leizhen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
This commit replaces arm64's support for FTRACE_WITH_REGS with support
for FTRACE_WITH_ARGS. This removes some overhead and complexity, and
removes some latent issues with inconsistent presentation of struct
pt_regs (which can only be reliably saved/restored at exception
boundaries).
FTRACE_WITH_REGS has been supported on arm64 since commit:
3b23e4991f ("arm64: implement ftrace with regs")
As noted in the commit message, the major reasons for implementing
FTRACE_WITH_REGS were:
(1) To make it possible to use the ftrace graph tracer with pointer
authentication, where it's necessary to snapshot/manipulate the LR
before it is signed by the instrumented function.
(2) To make it possible to implement LIVEPATCH in future, where we need
to hook function entry before an instrumented function manipulates
the stack or argument registers. Practically speaking, we need to
preserve the argument/return registers, PC, LR, and SP.
Neither of these needs a struct pt_regs, and only requires the set of
registers which are live at function call/return boundaries. Our calling
convention is defined by "Procedure Call Standard for the Arm® 64-bit
Architecture (AArch64)" (AKA "AAPCS64"), which can currently be found
at:
https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst
Per AAPCS64, all function call argument and return values are held in
the following GPRs:
* X0 - X7 : parameter / result registers
* X8 : indirect result location register
* SP : stack pointer (AKA SP)
Additionally, at function call boundaries, the following GPRs hold
context/return information:
* X29 : frame pointer (AKA FP)
* X30 : link register (AKA LR)
... and for ftrace we need to capture the instrumented address:
* PC : program counter
No other GPRs are relevant, as none of the other registers hold
parameters or return values:
* X9 - X17 : temporaries, may be clobbered
* X18 : shadow call stack pointer (or temporary)
* X19 - X28 : callee saved
This patch implements FTRACE_WITH_ARGS for arm64, only saving/restoring
the minimal set of registers necessary. This is always sufficient to
manipulate control flow (e.g. for live-patching) or to manipulate
function arguments and return values.
This reduces the necessary stack usage from 336 bytes for pt_regs down
to 112 bytes for ftrace_regs + 32 bytes for two frame records, freeing
up 188 bytes. This could be reduced further with changes to the
unwinder.
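Concretely, the saved state boils down to a structure along these lines
(14 eight-byte slots, i.e. the 112 bytes quoted above; field names are
illustrative):

    struct ftrace_regs {
            /* x0 - x8: argument/result and indirect result registers */
            unsigned long regs[9];
            unsigned long __unused;         /* padding, keeps the size 16-byte aligned */
            unsigned long fp;               /* x29 */
            unsigned long lr;               /* x30 */
            unsigned long sp;
            unsigned long pc;
    };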
As there is no longer a need to save different sets of registers for
different features, we no longer need distinct `ftrace_caller` and
`ftrace_regs_caller` trampolines. This allows the trampoline assembly to
be simpler, and simplifies code which previously had to handle the two
trampolines.
I've tested this with the ftrace selftests, where there are no
unexpected failures.
Co-developed-by: Florent Revest <revest@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Florent Revest <revest@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20221103170520.931305-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Alexander noted that KFENCE only expects to handle faults from invalid page
table entries (i.e. translation faults), but arm64's fault handling logic will
call kfence_handle_page_fault() for other types of faults, including alignment
faults caused by unaligned atomics. This has the unfortunate property of
causing those other faults to be reported as "KFENCE: use-after-free",
which is misleading and hinders debugging.
Fix this by only forwarding unhandled translation faults to the KFENCE
code, similar to what x86 does already.
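As a sketch, the check amounts to something like the following (field names
per the arm64 ESR definitions; treat the exact condition as illustrative):

    static bool is_translation_fault(unsigned long esr)
    {
            return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
    }

    /* ... in the kernel fault path, only translation faults reach KFENCE: */
    if (is_translation_fault(esr) &&
        kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs))
            return;         /* handled by KFENCE */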
Alexander has verified that this passes all the tests in the KFENCE test
suite and avoids bogus reports on misaligned atomics.
Link: https://lore.kernel.org/all/20221102081620.1465154-1-zhongbaisong@huawei.com/
Fixes: 840b239863 ("arm64, kfence: enable KFENCE for ARM64")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Tested-by: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20221114104411.2853040-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
All users of aarch64_insn_gen_hint() (e.g. aarch64_insn_gen_nop()) pass
a constant argument and generate a constant value. Some of those users
are noinstr code (e.g. for alternatives patching).
For noinstr code it is necessary to either inline these functions or to
ensure the out-of-line versions are noinstr.
Since in all cases these are generating a constant, make them
__always_inline.
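The helpers are trivial, so this costs nothing; as a sketch mirroring the
existing out-of-line definitions:

    static __always_inline u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op)
    {
            /* HINT has a fixed base encoding; 'op' selects NOP, YIELD, PACIASP, ... */
            return aarch64_insn_get_hint_value() | op;
    }

    static __always_inline u32 aarch64_insn_gen_nop(void)
    {
            return aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP);
    }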
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20221114135928.3000571-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
The only code which needs to check for an entire instruction group is
the aarch64_insn_is_steppable() helper function used by kprobes, which
must not be instrumented, and only needs to check for the "Branch,
exception generation and system instructions" class.
Currently we have an out-of-line helper in insn.c which must be marked
as __kprobes, which indexes a table with some bits extracted from the
instruction. In aarch64_insn_is_steppable() we then need to compare the
result with an expected enum value.
It would be simpler to have a predicate for this, as with the other
aarch64_insn_is_*() helpers, which would be always inlined to prevent
inadvertent instrumentation, and would permit better code generation.
This patch adds a predicate function for this instruction group using
the existing __AARCH64_INSN_FUNCS() helpers, and removes the existing
out-of-line helper. As the only class we currently care about is the
branch+exception+sys class, I have only added helpers for this, and left
the other classes unimplemented for now.
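As a sketch, the new predicate follows the existing pattern, with a
mask/value pair selecting the top-level class bits (bits [28:26] == 0b101
for this class; treat the exact encodings as illustrative):

    __AARCH64_INSN_FUNCS(class_branch_sys, 0x1c000000, 0x14000000)

    /* ... which can then be used from noinstr code, e.g.: */
    if (aarch64_insn_is_class_branch_sys(insn))
            return false;   /* not single-steppable by kprobes */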
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20221114135928.3000571-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
We have a number of aarch64_insn_*() predicates which are used in code
which is not instrumentation safe (e.g. alternatives patching, kprobes).
Some of those are marked with __kprobes, but most are not, and are
implemented out-of-line in insn.c.
This patch moves the predicates to insn.h and marks them with
__always_inline. This ensures that they will respect the
instrumentation requirements of the caller into which they are inlined.
At the same time, I've formatted each of the functions consistently as a
list, to make them easier to read and update in future.
Other than preventing unwanted instrumentation, there should be no
functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20221114135928.3000571-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
There are no users of aarch64_insn_gen_prefetch(), which encodes a
PRFM (immediate) with a hard-coded offset of 0.
Remove it for now; we can always restore it with tests if we need it in
future.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20221114135928.3000571-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
The pte_to_phys() assembly definition performs multiple bit-field
transformations to derive the physical address embedded inside a page
table entry. Unlike its C counterpart, __pte_to_phys(), pte_to_phys() is
not very readable. Simplify these operations via a new macro,
PTE_ADDR_HIGH_SHIFT, indicating how far the pte-encoded higher address
bits need to be left-shifted. While here, also update __pte_to_phys() and
__phys_to_pte_val().
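With the 52-bit PA layout, where PA bits [51:48] live in pte bits [15:12],
the C helpers then read roughly as follows (a sketch; the shift of 36 is
simply 48 - 12):

    #define PTE_ADDR_HIGH_SHIFT     36

    #define __pte_to_phys(pte) \
            ((pte_val(pte) & PTE_ADDR_LOW) | \
             ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT))

    #define __phys_to_pte_val(phys) \
            (((phys) | ((phys) >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK)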
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20221107141753.2938621-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Implement dynamic shadow call stack support on Clang, by parsing the
unwind tables at init time to locate all occurrences of PACIASP/AUTIASP
instructions, and replacing them with the shadow call stack push and pop
instructions, respectively.
This is useful because the overhead of the shadow call stack is
difficult to justify on hardware that implements pointer authentication
(PAC), and given that the PAC instructions are executed as NOPs on
hardware that doesn't, we can just replace them without breaking
anything. As PACIASP/AUTIASP are guaranteed to be paired with respect to
manipulations of the return address, replacing them 1:1 with shadow call
stack pushes and pops is guaranteed to result in the desired behavior.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20221027155908.1940624-4-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Enable asynchronous unwind table generation for both the core kernel as
well as modules, and emit the resulting .eh_frame sections as init code
so we can use the unwind directives for code patching at boot or module
load time.
This will be used by dynamic shadow call stack support, which will rely
on code patching rather than compiler codegen to emit the shadow call
stack push and pop instructions.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20221027155908.1940624-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
FEAT_SVE2p1 introduces a number of new SVE instructions. Since there is no
new architectural state added, kernel support is simply a new hwcap which
lets userspace know that the feature is supported.
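From userspace the new capability can then be queried via the auxiliary
vector, for example (assuming the HWCAP2_SVE2P1 bit added by this series):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

            if (hwcap2 & HWCAP2_SVE2P1)
                    printf("SVE 2.1 is supported\n");
            return 0;
    }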
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221017152520.1039165-6-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
FEAT_RPRFM adds a new range prefetch hint within the existing PRFM hint
space. Add a new hwcap to allow userspace to discover support for the new
instruction.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221017152520.1039165-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
FEAT_CSSC adds a number of new instructions usable to optimise common short
sequences of instructions. Add a hwcap indicating that the feature is
available and can be used by userspace.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20221017152520.1039165-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Currently, for reasons lost in the mists of time, the kernel_neon_ APIs
are EXPORT_SYMBOL(), but the general policy for floating point usage is
that it should be GPL-only, given the non-standard runtime environment
that holds while it is in use and the PCS impact when code is compiled
for FP usage.
Given the limited existing deployment of non-GPL modules for arm64, and
the fact that other architectures such as x86 already make their
equivalent functions GPL-only, this is not expected to be disruptive to
existing users.
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221107170747.276910-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
The ARM architecture revision v8.4 introduces a data independent timing
control (DIT) which can be set at any exception level, and instructs the
CPU to avoid optimizations that may result in a correlation between the
execution time of certain instructions and the value of the data they
operate on.
The DIT bit is part of PSTATE, and is therefore context switched as
usual, given that it becomes part of the saved program state (SPSR) when
taking an exception. We have also defined a hwcap for DIT, and so user
space can already discover whether or not DIT is available. This means
that, as far as user space is concerned, DIT is wired up and fully
functional.
In the kernel, however, we never bothered with DIT: we disable it at
boot (i.e., INIT_PSTATE_EL1 has DIT cleared) and ignore the fact that we
might run with DIT enabled if user space happened to set it.
Currently, we have no idea whether or not running privileged code with
DIT disabled on a CPU that implements support for it may result in a
side channel that exposes privileged data to unprivileged user space
processes, so let's be cautious and just enable DIT while running in the
kernel if supported by all CPUs.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Adam Langley <agl@google.com>
Link: https://lore.kernel.org/all/YwgCrqutxmX0W72r@gmail.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20221107172400.1851434-1-ardb@kernel.org
[will: Removed cpu_has_dit() as per Mark's suggestion on the list]
Signed-off-by: Will Deacon <will@kernel.org>
Inspired by x86 commit 864b435514b2 ("x86/jump_label: Mark arguments as
const to satisfy asm constraints"), constify the alternative_has_feature_*()
argument to satisfy asm constraints. Steven also pointed out in [1] that
"The "i" constraint needs to be a constant."
Tested by building a simple external kernel module with "-O0".
Before the patch, gcc produced warnings and errors similar to the following:
In file included from <command-line>:
In function ‘alternative_has_feature_likely’,
inlined from ‘system_capabilities_finalized’ at
arch/arm64/include/asm/cpufeature.h:440:9,
inlined from ‘arm64_preempt_schedule_irq’ at
arch/arm64/kernel/entry-common.c:264:6:
include/linux/compiler_types.h:285:33: warning:
‘asm’ operand 0 probably does not match constraints
285 | #define asm_volatile_goto(x...) asm goto(x)
| ^~~
arch/arm64/include/asm/alternative-macros.h:232:9:
note: in expansion of macro ‘asm_volatile_goto’
232 | asm_volatile_goto(
| ^~~~~~~~~~~~~~~~~
include/linux/compiler_types.h:285:33: error:
impossible constraint in ‘asm’
285 | #define asm_volatile_goto(x...) asm goto(x)
| ^~~
arch/arm64/include/asm/alternative-macros.h:232:9:
note: in expansion of macro ‘asm_volatile_goto’
232 | asm_volatile_goto(
| ^~~~~~~~~~~~~~~~~
After the patch, the simple external test kernel module is built fine
with "-O0".
[1]https://lore.kernel.org/all/20210212094059.5f8d05e8@gandalf.local.home/
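The change itself amounts to a const qualifier on the helper's argument,
roughly (prototype sketch only; the asm goto body is unchanged):

    /* before */
    static __always_inline bool alternative_has_feature_likely(unsigned long feature);
    /* after: the "i" asm constraint needs a compile-time constant */
    static __always_inline bool alternative_has_feature_likely(const unsigned long feature);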
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20221006075542.2658-3-jszhang@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Inspired by x86 commit 864b435514b2 ("x86/jump_label: Mark arguments as
const to satisfy asm constraints"), mark arch_static_branch()'s and
arch_static_branch_jump()'s arguments as const to satisfy asm
constraints. Steven also pointed out in [1] that "The "i" constraint
needs to be a constant."
Tested by building a simple external kernel module with "-O0".
[1]https://lore.kernel.org/all/20210212094059.5f8d05e8@gandalf.local.home/
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20221006075542.2658-2-jszhang@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
The ARM Performance Monitoring Unit Table (APMT) describes the properties
of PMU support in an ARM-based system. The APMT table contains a list of
nodes, each of which represents a PMU in the system that conforms to the
ARM CoreSight PMU architecture. The properties of each node include the
information required to access the PMU (e.g. MMIO base address, interrupt
number) as well as identification. For more detailed information, please
refer to the specifications below:
* APMT: https://developer.arm.com/documentation/den0117/latest
* ARM Coresight PMU:
https://developer.arm.com/documentation/ihi0091/latest
The initial support adds detection of the APMT table and the generic
infrastructure to create platform devices for ARM CoreSight PMUs.
Similar to IORT, the root pointer of the APMT is preserved during runtime,
and each PMU platform device is given a pointer to the corresponding
APMT node.
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20220929002834.32664-1-bwicaksono@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"ARM:
- Fix the pKVM stage-1 walker erroneously using the stage-2 accessor
- Correctly convert vcpu->kvm to a hyp pointer when generating an
exception in a nVHE+MTE configuration
- Check that KVM_CAP_DIRTY_LOG_* are valid before enabling them
- Fix SMPRI_EL1/TPIDR2_EL0 trapping on VHE
- Document the boot requirements for FGT when entering the kernel at
EL1
x86:
- Use SRCU to protect zap in __kvm_set_or_clear_apicv_inhibit()
- Make argument order consistent for kvcalloc()
- Userspace API fixes for DEBUGCTL and LBRs"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: Fix a typo about the usage of kvcalloc()
KVM: x86: Use SRCU to protect zap in __kvm_set_or_clear_apicv_inhibit()
KVM: VMX: Ignore guest CPUID for host userspace writes to DEBUGCTL
KVM: VMX: Fold vmx_supported_debugctl() into vcpu_supported_debugctl()
KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs
arm64: booting: Document our requirements for fine grained traps with SME
KVM: arm64: Fix SMPRI_EL1/TPIDR2_EL0 trapping on VHE
KVM: Check KVM_CAP_DIRTY_LOG_{RING, RING_ACQ_REL} prior to enabling them
KVM: arm64: Fix bad dereference on MTE-enabled systems
KVM: arm64: Use correct accessor to parse stage-1 PTEs
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Avoid kprobe recursion when cortex_a76_erratum_1463225_debug_handler()
is not inlined (change to __always_inline).
- Fix the visibility of compat hwcaps, broken by recent changes to
consolidate the visibility of hwcaps and the user-space view of the
ID registers.
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: cpufeature: Fix the visibility of compat hwcaps
arm64: entry: avoid kprobe recursion
Merge tag 'efi-fixes-for-v6.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi
Pull EFI fixes from Ard Biesheuvel:
- A pair of tweaks to the EFI random seed code so that externally
provided version of this config table are handled more robustly
- Another fix for the v6.0 EFI variable refactor that turned out to
break Apple machines which don't provide QueryVariableInfo()
- Add some guard rails to the EFI runtime service call wrapper so we
can recover from synchronous exceptions caused by firmware
* tag 'efi-fixes-for-v6.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
arm64: efi: Recover from synchronous exceptions occurring in firmware
efi: efivars: Fix variable writes with unsupported query_variable_store()
efi: random: Use 'ACPI reclaim' memory for random seed
efi: random: reduce seed size to 32 bytes
efi/tpm: Pass correct address to memblock_reserve
Commit 237405ebef ("arm64: cpufeature: Force HWCAP to be based on the
sysreg visible to user-space") forced the hwcaps to use the sanitised
user-space view of the ID registers. However, the ID register structures
used to select a few compat cpufeatures (vfp, crc32, ...) are masked, and
hence such hwcaps no longer appear in /proc/cpuinfo for the PER_LINUX32
personality.
Add the ID register structures explicitly and mark the relevant entries as
visible. As these ID registers are now visible, also make them available
to 64-bit userspace by making the necessary changes to the register
emulation logic and documentation.
While at it, update the comment for the ftr_generic_32bits[] structure,
which lists the ID registers that use it.
Fixes: 237405ebef ("arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space")
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Link: https://lore.kernel.org/r/20221103082232.19189-1-amit.kachhap@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unlike x86, which has machinery to deal with page faults that occur
during the execution of EFI runtime services, arm64 has nothing like
that, and a synchronous exception raised by firmware code brings down
the whole system.
With more EFI based systems appearing that were not built to run Linux
(such as the Windows-on-ARM laptops based on Qualcomm SOCs), as well as
the introduction of PRM (platform specific firmware routines that are
callable just like EFI runtime services), we are more likely to run into
issues of this sort, and it is much more likely that we can identify and
work around such issues if they don't bring down the system entirely.
Since we already use an EFI runtime services call wrapper in assembler,
we can quite easily add some code that captures the execution state at
the point where the call is made, allowing us to revert to this state
and proceed with execution if the call triggered a synchronous exception.
Given that the kernel and the firmware don't share any data structures
that could end up in an indeterminate state, we can happily continue
running, as long as we mark the EFI runtime services as unavailable from
that point on.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'juno-fix-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into arm/fixes
Armv8 Juno fix for v6.1
Just a single fix to add the missing critical trip points in the thermal
zones, which have been mandatory in the binding but were only recently
enforced in the code.
* tag 'juno-fix-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux:
arm64: dts: juno: Add thermal critical trip points
Link: https://lore.kernel.org/r/20221102140156.2758137-1-sudeep.holla@arm.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The trapping of SMPRI_EL1 and TPIDR2_EL0 currently only really
works on nVHE, as only this mode uses the fine-grained trapping
that controls these two registers.
Move the trapping enable/disable code into
__{de,}activate_traps_common(), allowing it to be called when it
actually matters on VHE, and remove the flipping of EL2 control
for TPIDR2_EL0, which only affects the host access of this
register.
Fixes: 861262ab86 ("KVM: arm64: Handle SME host state when running guests")
Reported-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/86bkpqer4z.wl-maz@kernel.org
When thermal zones are defined, trip point definitions are mandatory.
Define a couple of critical trip points for monitoring of the existing
PMIC and SoC thermal zones.
This was lost in the txt-to-yaml conversion and was recently re-enforced
via commit 8c59632423 ("dt-bindings: thermal: Fix missing required property").
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Fixes: f7b636a8d8 ("arm64: dts: juno: add thermal zones for scpi sensors")
Link: https://lore.kernel.org/r/20221028140833.280091-8-cristian.marussi@arm.com
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Merge tag 'imx-fixes-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into arm/fixes
i.MX fixes for 6.1:
- Fix imx93-pd driver to release resources when error occurs in probe.
- A series from Ioana Ciornei to add missing clock frequencies for MDIO
controllers on LayerScape SoCs, so that the kernel driver can work
independently from bootloader.
- A series from Li Jun to fix USB power domain setup in i.MX8MM/N device
trees.
- Fix CPLD_Dn pull configuration for MX8Menlo board to avoid interfering
with CPLD power off functionality.
- Fix ctrl_sleep_moci GPIO setup for verdin-imx8mp board.
- Fix DT schema check warnings on uSDHC clocks for imx8-ss-conn device
tree.
- Fix up gpcv2 DT bindings to have an optional `power-domains` property.
- A couple of i.MX93 device tree fixes on S4MU interrupt and gpio-ranges
of GPIO controllers.
- Keep PU regulator on for Quad and QuadPlus based imx6dl-yapp4 boards to
work around a hardware design flaw in supply voltage distribution.
- Fix user push-button GPIO offset on imx6qdl-gw59 boards.
* tag 'imx-fixes-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
arm64: dts: ls208xa: specify clock frequencies for the MDIO controllers
arm64: dts: ls1088a: specify clock frequencies for the MDIO controllers
arm64: dts: lx2160a: specify clock frequencies for the MDIO controllers
soc: imx: imx93-pd: Fix the error handling path of imx93_pd_probe()
arm64: dts: imx93: correct gpio-ranges
arm64: dts: imx93: correct s4mu interrupt names
dt-bindings: power: gpcv2: add power-domains property
arm64: dts: imx8: correct clock order
ARM: dts: imx6dl-yapp4: Do not allow PM to switch PU regulator off on Q/QP
ARM: dts: imx6qdl-gw59{10,13}: fix user pushbutton GPIO offset
arm64: dts: imx8mn: Correct the usb power domain
arm64: dts: imx8mn: remove otg1 power domain dependency on hsio
arm64: dts: imx8mm: correct usb power domains
arm64: dts: imx8mm: remove otg1/2 power domain dependency on hsio
arm64: dts: verdin-imx8mp: fix ctrl_sleep_moci
arm64: dts: imx8mm: Enable CPLD_Dn pull down resistor on MX8Menlo
Link: https://lore.kernel.org/r/20221101031547.GB125525@dragon
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Up until now, the external MDIO controller frequency values relied
either on the default ones out of reset or on those set up by u-boot.
Let's just properly specify the MDC frequency in the DTS so that even
without u-boot's intervention Linux can drive the MDIO bus.
Fixes: 0420dde30a ("arm64: dts: ls208xa: add the external MDIO nodes")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Up until now, the external MDIO controller frequency values relied
either on the default ones out of reset or on those set up by u-boot.
Let's just properly specify the MDC frequency in the DTS so that even
without u-boot's intervention Linux can drive the MDIO bus.
Fixes: bbe75af7b0 ("arm64: dts: ls1088a: add external MDIO device nodes")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Up until now, the external MDIO controller frequency values relied
either on the default ones out of reset or on those set up by u-boot.
Let's just properly specify the MDC frequency in the DTS so that even
without u-boot's intervention Linux can drive the MDIO bus.
Fixes: 6e1b8fae89 ("arm64: dts: lx2160a: add emdio1 node")
Fixes: 5705b9dcda ("arm64: dts: lx2160a: add emdio2 node")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>