In order to guarantee that the patched instruction stream is visible to
a CPU, that CPU must execute an isb instruction after any related cache
maintenance has completed.
The instruction patching routines in kernel/insn.c get this right for
things like jump labels and ftrace, but the alternatives patching omits
it entirely, leaving secondary cores in a potential limbo between the old
and the new code.
This patch adds an isb following the secondary polling loop in the
alternatives patching.
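A minimal sketch of the idea (the flag name is illustrative, not the exact
symbol in the patch): secondary CPUs spin until the boot CPU signals that
patching is complete, then execute an isb before running any patched text.

        while (!READ_ONCE(patching_done))
                cpu_relax();

        isb();          /* discard any stale, pre-patch instructions */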
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ll/sc __cmpxchg_case_##name assembly mostly uses symbolic names for
operands, but in a single case uses %2 to refer to what is otherwise
known as %[v]. This makes the code more painful to read than is
necessary.
Use %[v] instead.
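For reference, a standalone illustration of symbolic operand names in GCC
inline assembly (not the patched macro itself):

        /* %[v] and %[new] refer to the named operands below; the
         * positional equivalent would spell these %0 and %1, which is
         * harder to follow once the operand list grows. */
        asm volatile("str %w[new], %[v]"
                     : [v] "+Q" (*ptr)
                     : [new] "r" (newval)
                     : "memory");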
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since __get_dma_pgprot() does The Right Thing(TM) in the non-coherent
case, and the non-cacheable alias for DMA buffers is private to the
kernel anyway, we can simplify things slightly and make the code more
readable by just using PAGE_KERNEL as the base pgprot.
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
accidentally retained code for !CONFIG_SMP in the cpu_resume function. This
resulted in the hash index being zeroed in x7 after proper computation,
which is then used to get the cpu context pointer while resuming.
This patch removes the remnant code and restores the CPU suspend/resume
functionality.
Fixes: 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On ARM64 PROBE_ONLY PCI systems, resources are not currently claimed;
therefore they can't be enabled, since they do not have a valid
parent pointer. This in turn prevents enabling PCI devices on
ARM64 PROBE_ONLY systems, causing PCI device initialization to
fail.
To solve this issue, resources must be claimed when devices are
added on PROBE_ONLY systems, which ensures that the resource hierarchy
is validated and the resource tree is sane, but this requires changes
in the ARM64 resource management that can adversely affect existing
PCI set-ups (claiming resources on !PROBE_ONLY systems might break
existing ARM64 PCI platform implementations).
As a temporary solution in preparation for a proper resources claiming
implementation in ARM64 core, to enable PCI PROBE_ONLY systems on ARM64,
this patch adds a pcibios_enable_device() arch implementation that
simply prevents enabling resources on PROBE_ONLY systems (mirroring ARM
behaviour).
This is always a safe thing to do because on PROBE_ONLY systems the
configuration space set-up can be considered immutable, and it is in
preparation for proper resource claiming, which would finally validate
the PCI resource tree in the ARM64 arch implementation on PROBE_ONLY
systems.
For !PROBE_ONLY systems, resource enablement in pcibios_enable_device()
on ARM64 is implemented as in the current PCI core, leaving the behaviour
unchanged.
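The resulting arch hook is small; roughly (modulo the exact form used
upstream):

        int pcibios_enable_device(struct pci_dev *dev, int mask)
        {
                if (pci_has_flag(PCI_PROBE_ONLY))
                        return 0;

                return pci_enable_resources(dev, mask);
        }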
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When performing a cmpxchg operation on a signed sub-word type (e.g. s8),
we need to ensure that the upper register bits of the "old" value used
for comparison are zeroed, otherwise we may erroneously fail the cmpxchg
which may even be interpreted as success by the caller (if the compiler
performs the truncation as part of its check). This has been observed
in mod_state, where negative values were causing problems with
this_cpu_cmpxchg.
This patch fixes the issue by explicitly casting 8-bit and 16-bit "old"
values using unsigned types in our cmpxchg wrappers. 32-bit types can be
left alone, since the underlying asm makes use of W registers in this
case.
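Conceptually, the fix amounts to truncating "old" for the narrow cases so
that its upper bits match the zero-extended value loaded from memory
(illustrative switch, not the exact wrapper code):

        switch (size) {
        case 1:
                return __cmpxchg_case_1(ptr, (u8)old, new);
        case 2:
                return __cmpxchg_case_2(ptr, (u16)old, new);
        case 4:
                return __cmpxchg_case_4(ptr, old, new); /* W regs already */
        case 8:
                return __cmpxchg_case_8(ptr, old, new);
        }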
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When patching the kernel text with alternatives, we may end up patching
parts of the stop_machine state machine (e.g. atomic_dec_and_test in
ack_state) and consequently corrupt the instruction stream of any
secondary CPUs.
This patch passes the cpu_online_mask to stop_machine, forcing all of
the CPUs into our own callback which can place the secondary cores into
a dumb (but safe!) polling loop whilst the patching is carried out.
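In outline, the change looks roughly like the following (function and flag
names are illustrative):

        static int __apply_alternatives_multi_stop(void *unused)
        {
                static int patched = 0;

                if (smp_processor_id()) {
                        /* Secondary CPUs: spin safely until patching is done. */
                        while (!READ_ONCE(patched))
                                cpu_relax();
                } else {
                        /* Boot CPU performs the actual patching. */
                        __apply_alternatives_all();     /* hypothetical helper */
                        WRITE_ONCE(patched, 1);
                }

                return 0;
        }

        stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);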
Signed-off-by: Will Deacon <will.deacon@arm.com>
Clarify that the memory below the start of the image but inside the
region covered by the linear mapping has no special significance to
the kernel, and may be used by the firmware provided that it is marked
as reserved.
Also, fix up some whitespace errors.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For some reason, the ll/sc cmpxchg asm is all off to the left and
awkward to read in conjunction with the following (correctly indented)
LSE version.
This patch shifts the ll/sc code back to where it should be.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces SMP on arm64. To build the necessary objects for SMP,
they were added to the arm64-obj-y rule in arch/arm64/kernel/Makefile,
without removing the arm64-obj-$(CONFIG_SMP) rule.
Remove the redundant object file list that depends on the always-enabled
CONFIG_SMP in arch/arm64/kernel/Makefile.
Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces SMP on arm64, and therefore UP_LATE_INIT can not be
selected anymore.
Remove the dead #ifdef block depending on UP_LATE_INIT in
arch/arm64/kernel/setup.c
Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
pte_valid should check if the PTE_VALID bit (1 << 0) is set in the pte,
so fix the macro definition to use bitwise & instead of logical &&.
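That is, the macro should read along the lines of:

        #define pte_valid(pte)          (!!(pte_val(pte) & PTE_VALID))

whereas the '&&' form evaluates to true for any non-zero pte, regardless
of whether the PTE_VALID bit is actually set.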
Signed-off-by: Will Deacon <will.deacon@arm.com>
When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.
In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.
This patch fixes the unlock code to use 16-bit accessors consistently.
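With 16-bit accessors, the non-LSE half of the unlock then looks roughly
like (operand syntax abridged):

        "       ldrh    %w1, %0\n"      /* load only the owner halfword */
        "       add     %w1, %w1, #1\n"
        "       stlrh   %w1, %0"        /* store-release the new owner  */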
Signed-off-by: Will Deacon <will.deacon@arm.com>
The flush_tlb_page() function is used on user address ranges when PTEs
(or PMDs/PUDs for huge pages) were changed (attributes or clearing). For
such cases, it is more efficient to invalidate only the last level of
the TLB with the "tlbi vale1is" instruction.
In the TLB shoot-down case, the TLB caching of the intermediate page
table levels (pmd, pud, pgd) is handled by __flush_tlb_pgtable() via the
__(pte|pmd|pud)_free_tlb() functions and it is not deferred to
tlb_finish_mmu() (as of commit 285994a62c - "arm64: Invalidate the TLB
corresponding to intermediate page table levels"). The tlb_flush()
function only needs to invalidate the TLB for the last level of page
tables; the __flush_tlb_range() function gains a fourth argument for
last level TLBI.
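The gist of the new argument, as a sketch rather than the exact upstream
code: the per-page loop in __flush_tlb_range() picks the last-level-only
TLBI when the caller knows that only leaf entries changed (addr carries
the usual ASID|VA>>12 encoding), and tlb_flush() passes last_level = true.

        if (last_level)
                asm("tlbi vale1is, %0" : : "r" (addr));
        else
                asm("tlbi vae1is, %0" : : "r" (addr));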
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch moves the MAX_TLB_RANGE check into the
flush_tlb(_kernel)_range functions directly to avoid the
underscore-prefixed definitions (and for consistency with a subsequent
patch).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently create_mapping is marked with __ref, apparently because it
refers to early_alloc. However, create_mapping has no logic to prevent
erroneous use of early_alloc after it has been freed, and is only ever
called by __init functions anyway. Thus the __ref marker is misleading
and unnecessary.
Instead, this patch marks create_mapping as __init, resulting in
warnings if it is used from a non-__init function, and allowing its
memory to be reclaimed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a
contraction of "element". This conflicts with 'enum debug_el' in our
asm/debug-monitors.h header file, where "el" stands for Exception Level.
The result is a build failure when targeting allmodconfig, so rename our
enum to 'dbg_active_el' to be slightly more explicit about what it is.
Signed-off-by: Will Deacon <will.deacon@arm.com>
free_initrd_mem() is not needed after booting, so this patch moves it to
the __init section.
This patch also makes keep_initrd __initdata, to reduce kernel size.
Signed-off-by: Wang Long <long.wanglong@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
cpuid_feature_extract_field takes care of the fiddly ID register
field sign-extension, so use that instead of rolling our own version.
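For reference, the helper boils down to a shift pair that sign-extends a
4-bit ID register field:

        static inline int __attribute_const__
        cpuid_feature_extract_field(u64 features, int field)
        {
                return (s64)(features << (64 - 4 - field)) >> 60;
        }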
Signed-off-by: Will Deacon <will.deacon@arm.com>
Rework the cpufeature detection to support ISAR0 and use that for
detecting the presence of LSE atomics.
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv8 CPUs do not support any of the v8.1 features, so group them
together in Kconfig to make it clear that they're part of 8.1 and not
relevant to older cores.
Signed-off-by: Will Deacon <will.deacon@arm.com>
We implement an optimised cmpxchg_local macro, so let the kernel know.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow
and incorrectly decide that the original parameter was positive.
This patch fixes the broken condition code so that we handle this
corner case correctly.
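For reference, the intended semantics match the usual cmpxchg-based C
expression of dec_if_positive (the arm64 fix itself only touches the
condition code in the asm):

        static inline long long atomic64_dec_if_positive(atomic64_t *v)
        {
                long long c, old, dec;

                c = atomic64_read(v);
                for (;;) {
                        dec = c - 1;
                        if (unlikely(dec < 0))
                                break;          /* leave the counter untouched */
                        old = atomic64_cmpxchg(v, c, dec);
                        if (likely(old == c))
                                break;
                        c = old;
                }
                return dec;
        }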
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We don't need duplicate cmpxchg implementations, so use cmpxchg to
implement atomic{,64}_cmpxchg, like we do for xchg already.
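i.e. something along the lines of:

        #define atomic_cmpxchg(v, old, new)   cmpxchg(&((v)->counter), (old), (new))
        #define atomic64_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new))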
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The cost of changing a cacheline from shared to exclusive state can be
significant, especially when this is triggered by an exclusive store,
since it may result in having to retry the transaction.
This patch makes use of prfm to prefetch cachelines for write prior to
ldxr/stxr loops when using the ll/sc atomic routines.
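As an illustration, an ll/sc atomic_add-style loop gains a
prefetch-for-store hint ahead of the exclusive pair (sketch; operand
syntax abridged):

        "       prfm    pstl1strm, %2\n"
        "1:     ldxr    %w0, %2\n"
        "       add     %w0, %w0, %w3\n"
        "       stxr    %w1, %w0, %2\n"
        "       cbnz    %w1, 1b"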
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The common (i.e. identical for ll/sc and lse) atomic macros in atomic.h
are needlessly different for atomic_t and atomic64_t.
This patch tidies up the definitions to make them consistent across the
two atomic types and factors out common code such as the add_unless
implementation based on cmpxchg.
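For example, the add_unless helper can be expressed once, for both atomic
types, in terms of cmpxchg; a sketch of the atomic_t flavour:

        static inline int __atomic_add_unless(atomic_t *v, int a, int u)
        {
                int c, old;

                c = atomic_read(v);
                while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c)
                        c = old;

                return c;
        }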
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
cmpxchg doesn't require memory barrier semantics when the value
comparison fails, so make the barrier conditional on success.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We can perform the cmpxchg comparison using eor and cbnz which avoids
the "cc" clobber for the ll/sc case and consequently for the LSE case
where we may have to fall back on the ll/sc code at runtime.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of our cmpxchg_double primitives
so that the LSE casp instruction is used instead.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of our cmpxchg primitives so that
the LSE cas instruction is used instead.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of our xchg primitives so that
the LSE swp instruction (yes, you read right!) is used instead.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of our bitops functions so that
LSE atomic instructions are used instead.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of our locking functions so that
LSE atomic instructions are used for spinlocks and rwlocks.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of atomic_t and atomic64_t
routines so that the call-site for the out-of-line ll/sc sequences is
patched with an LSE atomic instruction when we detect that
the CPU supports it.
If binutils is not recent enough to assemble the LSE instructions, then
the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to patch in the new atomic instructions at runtime, we need to
generate wrappers around the out-of-line exclusive load/store atomics.
This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which
causes our atomic functions to branch to the out-of-line ll/sc
implementations. To avoid the register spill overhead of the PCS, the
out-of-line functions are compiled with specific compiler flags to
force out-of-line save/restore of any registers that are usually
caller-saved.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add a CPU feature for the LSE atomic instructions, so that they can be
patched in at runtime when we detect that they are supported.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM v8.1 architecture introduces new atomic instructions to the A64
instruction set for things like cmpxchg, so advertise their availability
to userspace using a hwcap.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In preparation for the Large System Extension (LSE) atomic instructions
introduced by ARM v8.1, move the current exclusive load/store (LL/SC)
atomics into their own header file.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
cpufeature.h makes use of DECLARE_BITMAP, which in turn relies on the
BITS_TO_LONGS and DIV_ROUND_UP macros.
This patch includes kernel.h in cpufeature.h to prevent all users having
to do the same thing.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
STXR can fail for a number of reasons, so don't fail an rwlock trylock
operation simply because the STXR reported failure.
I'm not aware of any issues with the current code, but this makes it
consistent with spin_trylock and also other architectures (e.g. arch/arm).
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add a few atomic_t tests to get some compile coverage for the new
operations.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Replace the deprecated atomic_{set,clear}_mask() usage with the now
ubiquitous atomic_{or,andnot}() functions.
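The conversion is mechanical (with v an atomic_t used as a flags word):

        atomic_set_mask(mask, &v);      becomes    atomic_or(mask, &v);
        atomic_clear_mask(mask, &v);    becomes    atomic_andnot(mask, &v);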
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Move the now generic definitions of atomic_{set,clear}_mask() into
linux/atomic.h to avoid endless and pointless repetition.
Also, provide an atomic_andnot() wrapper for those few archs that can
implement that.
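The fallback wrapper itself is a one-liner, roughly:

        #ifndef atomic_andnot
        static inline void atomic_andnot(int i, atomic_t *v)
        {
                atomic_and(~i, v);
        }
        #endif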
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Implement atomic logic ops -- atomic_{or,xor,and}.
These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Implement atomic logic ops -- atomic_{or,xor,and}.
For tilegx, these are relatively straightforward; the architecture
provides atomic "or" and "and", both 32-bit and 64-bit. To support
xor we provide a loop using "cmpexch".
For the older 32-bit tilepro architecture, we have to extend
the set of low-level assembly routines to include 32-bit "and",
as well as all three 64-bit routines. Somewhat confusingly,
some 32-bit versions are already used by the bitops inlines, with
parameter types appropriate for bitops, so we have to do a bit of
casting to match "int" to "unsigned long".
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436474297-32187-1-git-send-email-cmetcalf@ezchip.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Implement atomic logic ops -- atomic_{or,xor,and}
These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
Also rework the atomic implementation in terms of CPP macros to avoid
the typical repetition -- I seem to have missed this arch the last
time around when I did that.
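The macro-based pattern is the usual one; for an IRQ-disable based
implementation it looks roughly like this (illustrative, not this arch's
exact code):

        #define ATOMIC_OP(op, c_op)                                     \
        static inline void atomic_##op(int i, atomic_t *v)             \
        {                                                               \
                unsigned long flags;                                    \
                                                                        \
                raw_local_irq_save(flags);                              \
                v->counter c_op i;                                      \
                raw_local_irq_restore(flags);                           \
        }

        ATOMIC_OP(and, &=)
        ATOMIC_OP(or,  |=)
        ATOMIC_OP(xor, ^=)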
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Mostly complete rewrite of the FRV atomic implementation: instead of
using assembly files, use inline assembler.
The out-of-line CONFIG option makes a bit of a mess of things, but a
little CPP trickery gets that done too.
FRV already had the atomic logic ops, but under a non-standard name;
the reimplementation provides the generic names and the intermediate
form required for the bitops implementation.
The slightly inconsistent __atomic32_fetch_##op naming is because
__atomic_fetch_##op conflicts with GCC builtin functions.
The 64-bit atomic ops use the inline assembly %Ln construct to access
the low word register (r+1); as far as I know, this construct was not
previously used in the kernel and is completely undocumented, but I
found it in the FRV GCC code and it seems to work.
FRV had a non-standard definition of atomic_{clear,set}_mask() which
would work on types other than atomic_t; the one user relying on that
(arch/frv/kernel/dma.c) got converted to use the new intermediate
form.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Implement atomic logic ops -- atomic_{or,xor,and}.
These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>