Commit Graph

1348 Commits

Author SHA1 Message Date
Lv Zheng
8d3523fb3b ACPI / osl: Remove deprecated acpi_get_table_with_size()/early_acpi_os_unmap_memory()
Since all users have been cleaned up, remove the two now-unused deprecated
APIs.
As a Linux variable rather than an ACPICA variable, acpi_gbl_permanent_mmap
is renamed to acpi_permanent_mmap for a consistent coding style across the
entire Linux ACPI subsystem.

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2016-12-21 02:36:38 +01:00
Linus Torvalds
e234832afb ARM fixes. There are a couple pending x86 patches but they'll have to
wait for next week.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "ARM fixes.  There are a couple pending x86 patches but they'll have to
  wait for next week"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: arm/arm64: vgic: Kick VCPUs when queueing already pending IRQs
  KVM: arm/arm64: vgic: Prevent access to invalid SPIs
  arm/arm64: KVM: Perform local TLB invalidation when multiplexing vcpus on a single CPU
2016-11-13 10:28:53 -08:00
Paolo Bonzini
05d36a7dff KVM/ARM updates for v4.9-rc4

Merge tag 'kvm-arm-for-v4.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/ARM updates for v4.9-rc4

- Kick the vcpu when a pending interrupt becomes pending again
- Prevent access to invalid interrupt registers
- Invalidate TLBs when two vcpus from the same VM share a CPU
2016-11-11 11:13:36 +01:00
Catalin Marinas
272d01bd79 arm64: Fix circular include of asm/lse.h through linux/jump_label.h
Commit efd9e03fac ("arm64: Use static keys for CPU features")
introduced support for static keys in asm/cpufeature.h, including
linux/jump_label.h. When CC_HAVE_ASM_GOTO is not defined, this causes a
circular dependency via linux/atomic.h, asm/lse.h and asm/cpufeature.h.

This patch moves the capability macros out of asm/cpufeature.h into
a separate asm/cpucaps.h and modifies some of the #includes accordingly.

Fixes: efd9e03fac ("arm64: Use static keys for CPU features")
Reported-by: Artem Savkov <asavkov@redhat.com>
Tested-by: Artem Savkov <asavkov@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-11-05 20:59:06 +00:00
Marc Zyngier
94d0e5980d arm/arm64: KVM: Perform local TLB invalidation when multiplexing vcpus on a single CPU
Architecturally, TLBs are private to the (physical) CPU they're
associated with. But when multiple vcpus from the same VM are
being multiplexed on the same CPU, the TLBs are not private
to the vcpus (and are actually shared across the VMID).

Let's consider the following scenario:

- vcpu-0 maps PA to VA
- vcpu-1 maps PA' to VA

If run on the same physical CPU, vcpu-1 can hit TLB entries generated
by vcpu-0 accesses, and access the wrong physical page.

The solution to this is to keep a per-VM map of which vcpu ran last
on each given physical CPU, and invalidate local TLBs when switching
to a different vcpu from the same VM.
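
In outline, the scheme is something like the following (an illustrative
sketch; names such as last_vcpu_ran and local_flush_tlb_vmid() are
placeholders, not necessarily the patch's identifiers):

  /* Per-VM: which vcpu ran last on each physical CPU (-1 = none yet). */
  struct kvm_arch_sketch {
          int __percpu *last_vcpu_ran;
  };

  static void vcpu_load_sketch(struct kvm_arch_sketch *arch,
                               int vcpu_id, int cpu)
  {
          int *last_ran = per_cpu_ptr(arch->last_vcpu_ran, cpu);

          /*
           * A different vcpu of this VM ran here last; its TLB entries
           * share our VMID, so invalidate the local TLB before running.
           */
          if (*last_ran != vcpu_id) {
                  if (*last_ran != -1)
                          local_flush_tlb_vmid();  /* hypothetical helper */
                  *last_ran = vcpu_id;
          }
  }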

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-04 17:56:28 +00:00
Neeraj Upadhyay
3fa72fe9c6 arm64: mm: fix __page_to_voff definition
Fix the parameter name of __page_to_voff() to match its definition.
At present we don't see any issue, since page_to_virt()'s caller
declares 'page'.

Fixes: 9f2875912d ("arm64: mm: restrict virt_to_page() to the linear mapping")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-26 18:22:42 +01:00
Linus Torvalds
a23b27ae12 KVM fixes for v4.9-rc2

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Radim Krčmář:
 "ARM:
   - avoid livelock when walking guest page tables
   - fix HYP mode static keys without CC_HAVE_ASM_GOTO

  MIPS:
   - fix a build error without TRACEPOINTS_ENABLED

  s390:
   - reject a malformed userspace configuration

  x86:
   - suppress a warning without CONFIG_CPU_FREQ
   - initialize whole irq_eoi array"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  arm/arm64: KVM: Map the BSS at HYP
  arm64: KVM: Take S1 walks into account when determining S2 write faults
  KVM: s390: reject invalid modes for runtime instrumentation
  kvm: x86: memset whole irq_eoi
  kvm/x86: Fix unused variable warning in kvm_timer_init()
  KVM: MIPS: Add missing uaccess.h include
2016-10-21 19:09:29 -07:00
Will Deacon
60e21a0ef5 arm64: KVM: Take S1 walks into account when determining S2 write faults
The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether the
page table walker was reading or writing the entry.

For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in livelock,
where a page table walk generated by a load instruction attempts to
set the access flag in the stage 1 descriptor, but fails to trigger
CoW in the host since only a read fault is reported.

This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
take into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.

We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.
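
The shape of the check (a sketch using the architectural ESR_ELx bit
names; the actual helper operates on the vcpu's saved fault syndrome):

  /* Treat stage 2 faults taken on a stage 1 walk as writes, so that
   * read-only stage 2 entries still trigger CoW in the host. */
  static inline bool dabt_is_write_sketch(unsigned long esr)
  {
          return (esr & ESR_ELx_S1PTW) || /* fault during a S1 walk */
                 (esr & ESR_ELx_WNR);     /* or a genuine write */
  }
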

Cc: <stable@vger.kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-21 17:25:47 +01:00
James Morse
d08544127d arm64: suspend: Reconfigure PSTATE after resume from idle
The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not
save/restore PSTATE. As a result of this cpufeatures that were detected
and have bits in PSTATE get lost when we resume from idle.

UAO gets set appropriately on the next context switch. PAN will be
re-enabled next time we return from user-space, but on a preemptible
kernel we may run work accessing user space before this point.

Add code to re-enable these two features in __cpu_suspend_exit().
We re-use uao_thread_switch() passing current.
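
Conceptually, the resume path gains something like this (a sketch
assuming the existing uao_thread_switch() helper and PAN capability
check; not the literal patch):

  static void cpu_suspend_exit_sketch(void)
  {
          /* UAO state is tracked per task: restore it for 'current'. */
          uao_thread_switch(current);

          /* PAN must be set before any user-space access can happen. */
          if (cpus_have_cap(ARM64_HAS_PAN))
                  asm(SET_PSTATE_PAN(1));
  }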

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-20 09:50:54 +01:00
James Morse
2a6dcb2b5f arm64: cpufeature: Schedule enable() calls instead of calling them via IPI
The enable() call for a cpufeature/errata is called using on_each_cpu().
This issues a cross-call IPI to get the work done. Implicitly, this
stashes the running PSTATE in SPSR when the CPU receives the IPI, and
restores it when we return. This means an enable() call can never modify
PSTATE.

To allow PAN to do this, change the on_each_cpu() call to use
stop_machine(). This schedules the work on each CPU which allows
us to modify PSTATE.

This involves changing the prototype of all the enable() functions.

enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplug'd CPUs are enabled by verify_local_cpu_features() which only
acts on the local CPU, and can already modify the running PSTATE as it
is called from secondary_start_kernel().
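
The gist of the change, side by side (a sketch; enable() now has to
match the int (*)(void *) prototype that stop_machine() expects):

  /* Before: enable() runs in IPI context; PSTATE is implicitly
   * saved to SPSR on entry and restored on return. */
  on_each_cpu(caps->enable, (void *)caps, true);

  /* After: enable() runs from stop_machine(), so any changes it
   * makes to the live PSTATE survive. */
  stop_machine(caps->enable, (void *)caps, cpu_online_mask);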

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-20 09:50:53 +01:00
Andre Przywara
87261d1904 arm64: Cortex-A53 errata workaround: check for kernel addresses
Commit 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on
errata-affected core") adds code to execute cache maintenance instructions
in the kernel on behalf of userland on CPUs with certain ARM CPU errata.
It turns out that the address hasn't been checked to be a valid user
space address, allowing userland to clean cache lines in kernel space.
Fix this by introducing an address check before executing the
instructions on behalf of userland.

Since the address doesn't come via a syscall parameter, we can't just
reject tagged pointers and instead have to remove the tag when checking
against the user address limit.
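
The check therefore looks roughly like this (a sketch; the helper name
is illustrative, and the mask assumes the 8-bit AArch64 top-byte tag):

  /* Strip the top-byte tag, then bounds-check against the user limit. */
  static inline bool uaddr_ok_sketch(unsigned long addr)
  {
          unsigned long untagged = addr & ~(0xffUL << 56);

          return untagged < current_thread_info()->addr_limit;
  }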

Cc: <stable@vger.kernel.org>
Fixes: 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: rework commit message + replace access_ok with max_user_addr()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-20 09:50:49 +01:00
Will Deacon
1e6e57d9b3 arm64: percpu: rewrite ll/sc loops in assembly
Writing the outer loop of an LL/SC sequence using do {...} while
constructs potentially allows the compiler to hoist memory accesses
between the STXR and the branch back to the LDXR. On CPUs that do not
guarantee forward progress of LL/SC loops when faced with memory
accesses to the same ERG (up to 2k) between the failed STXR and the
branch back, we may end up livelocking.

This patch avoids this issue in our percpu atomics by rewriting the
outer loop as part of the LL/SC inline assembly block.
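
For illustration, a this_cpu_add()-style sequence with the loop kept
entirely inside the asm block (a sketch of the pattern, not the patch
verbatim):

  static inline void percpu_add_sketch(unsigned long *ptr, unsigned long val)
  {
          unsigned long tmp;
          int loop;

          asm volatile(
          /* The branch back to 1: stays inside the asm block, so the
           * compiler cannot hoist memory accesses into the loop. */
          "1:     ldxr    %[tmp], %[p]\n"
          "       add     %[tmp], %[tmp], %[v]\n"
          "       stxr    %w[loop], %[tmp], %[p]\n"
          "       cbnz    %w[loop], 1b"
          : [loop] "=&r" (loop), [tmp] "=&r" (tmp), [p] "+Q" (*ptr)
          : [v] "r" (val));
  }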

Cc: <stable@vger.kernel.org>
Fixes: f97fc81079 ("arm64: percpu: Implement this_cpu operations")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-19 15:37:29 +01:00
Will Deacon
91cb163e4d arm64: sysreg: Fix use of XZR in write_sysreg_s
Commit 8a71f0c656 ("arm64: sysreg: replace open-coded mrs_s/msr_s with
{read,write}_sysreg_s") introduced a write_sysreg_s macro for writing
to system registers that are not supported by binutils.

Unfortunately, this was implemented with the wrong template (%0 vs %x0),
so in the case that we are writing a constant 0, we will generate
invalid instruction syntax and bail with a cryptic assembler error:

  | Error: constant expression required

This patch fixes the template.
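
The fix is easiest to see in the template itself (a sketch of the
macro's shape; msr_s is the gas macro the kernel defines for registers
binutils does not know about):

  /* "%x0" lets the "rZ" constraint emit xzr for a constant zero,
   * where plain "%0" produced invalid instruction syntax. */
  #define write_sysreg_s_sketch(v, r) do {                                \
          u64 __val = (u64)(v);                                           \
          asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val)); \
  } while (0)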

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-17 19:38:30 +01:00
Ard Biesheuvel
9c0e83c371 arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y
As it turns out, the KASLR code breaks CONFIG_MODVERSIONS, since the
kcrctab has an absolute address field that is relocated at runtime
when the kernel offset is randomized.

This has been fixed already for PowerPC in the past, so simply wire up
the existing code dealing with this issue.

Cc: <stable@vger.kernel.org>
Fixes: f80fb3a3d5 ("arm64: add support for kernel ASLR")
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-17 12:42:16 +01:00
Linus Torvalds
b26b5ef5ec Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull more misc uaccess and vfs updates from Al Viro:
 "The rest of the stuff from -next (more uaccess work) + assorted fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  score: traps: Add missing include file to fix build error
  fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
  fs/super.c: fix race between freeze_super() and thaw_super()
  overlayfs: Fix setting IOP_XATTR flag
  iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
  blackfin: no access_ok() for __copy_{to,from}_user()
  arm64: don't zero in __copy_from_user{,_inatomic}
  arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
  arc: don't leak bits of kernel stack into coredump
  alpha: get rid of tail-zeroing in __copy_user()
2016-10-14 18:19:05 -07:00
Al Viro
2692a71bbd Merge branch 'work.uaccess' into for-linus 2016-10-14 20:42:44 -04:00
Masahiro Yamada
97139d4a6f treewide: remove redundant #include <linux/kconfig.h>
Kernel source files need not include <linux/kconfig.h> explicitly
because the top Makefile forces its inclusion with:

  -include $(srctree)/include/linux/kconfig.h

This commit removes explicit includes except the following:

  * arch/s390/include/asm/facilities_src.h
  * tools/testing/radix-tree/linux/kernel.h

These two are used for host programs.

Link: http://lkml.kernel.org/r/1473656164-11929-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-10-11 15:06:33 -07:00
Linus Torvalds
6218590bcb KVM updates for v4.9-rc1

Merge tag 'kvm-4.9-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM updates from Radim Krčmář:
 "All architectures:
   - move `make kvmconfig` stubs from x86
   - use 64 bits for debugfs stats

  ARM:
   - Important fixes for not using an in-kernel irqchip
   - handle SError exceptions and present them to guests if appropriate
   - proxying of GICV access at EL2 if guest mappings are unsafe
   - GICv3 on AArch32 on ARMv8
   - preparations for GICv3 save/restore, including ABI docs
   - cleanups and a bit of optimizations

  MIPS:
   - A couple of fixes in preparation for supporting MIPS EVA host
     kernels
   - MIPS SMP host & TLB invalidation fixes

  PPC:
   - Fix the bug which caused guests to falsely report lockups
   - other minor fixes
   - a small optimization

  s390:
   - Lazy enablement of runtime instrumentation
   - up to 255 CPUs for nested guests
   - rework of machine check deliver
   - cleanups and fixes

  x86:
   - IOMMU part of AMD's AVIC for vmexit-less interrupt delivery
   - Hyper-V TSC page
   - per-vcpu tsc_offset in debugfs
   - accelerated INS/OUTS in nVMX
   - cleanups and fixes"

* tag 'kvm-4.9-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (140 commits)
  KVM: MIPS: Drop dubious EntryHi optimisation
  KVM: MIPS: Invalidate TLB by regenerating ASIDs
  KVM: MIPS: Split kernel/user ASID regeneration
  KVM: MIPS: Drop other CPU ASIDs on guest MMU changes
  KVM: arm/arm64: vgic: Don't flush/sync without a working vgic
  KVM: arm64: Require in-kernel irqchip for PMU support
  KVM: PPC: Book3s PR: Allow access to unprivileged MMCR2 register
  KVM: PPC: Book3S PR: Support 64kB page size on POWER8E and POWER8NVL
  KVM: PPC: Book3S: Remove duplicate setting of the B field in tlbie
  KVM: PPC: BookE: Fix a sanity check
  KVM: PPC: Book3S HV: Take out virtual core piggybacking code
  KVM: PPC: Book3S: Treat VTB as a per-subcore register, not per-thread
  ARM: gic-v3: Work around definition of gic_write_bpr1
  KVM: nVMX: Fix the NMI IDT-vectoring handling
  KVM: VMX: Enable MSR-BASED TPR shadow even if APICv is inactive
  KVM: nVMX: Fix reload apic access page warning
  kvmconfig: add virtio-gpu to config fragment
  config: move x86 kvm_guest.config to a common location
  arm64: KVM: Remove duplicating init code for setting VMID
  ARM: KVM: Support vgic-v3
  ...
2016-10-06 10:49:01 -07:00
Linus Torvalds
999dcbe241 Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
 "The irq departement proudly presents:

   - A rework of the core infrastructure to optimally spread interrupt
     for multiqueue devices. The first version was a bit naive and
     failed to take thread siblings and other details into account.
     Developed in cooperation with Christoph and Keith.

   - Proper delegation of softirqs to ksoftirqd, so if ksoftirqd is
     active then no further softirq processing on interrupt return
     happens. Otherwise we try to delegate and still run another batch
     of network packets in the irq return path, which then tries to
     delegate to ksoftirqd .....

   - A proper machine parseable sysfs based alternative for
     /proc/interrupts.

   - ACPI support for the GICV3-ITS and ARM interrupt remapping

   - Two new irq chips from the ARM SoC zoo: STM32-EXTI and MVEBU-PIC

   - A new irq chip for the JCore (SuperH)

   - The usual pile of small fixlets in core and irqchip drivers"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits)
  softirq: Let ksoftirqd do its job
  genirq: Make function __irq_do_set_handler() static
  ARM/dts: Add EXTI controller node to stm32f429
  ARM/STM32: Select external interrupts controller
  drivers/irqchip: Add STM32 external interrupts support
  Documentation/dt-bindings: Document STM32 EXTI controller bindings
  irqchip/mips-gic: Use for_each_set_bit to iterate over local IRQs
  pci/msi: Retrieve affinity for a vector
  genirq/affinity: Remove old irq spread infrastructure
  genirq/msi: Switch to new irq spreading infrastructure
  genirq/affinity: Provide smarter irq spreading infrastructure
  genirq/msi: Add cpumask allocation to alloc_msi_entry
  genirq: Expose interrupt information through sysfs
  irqchip/gicv3-its: Use MADT ITS subtable to do PCI/MSI domain initialization
  irqchip/gicv3-its: Factor out PCI-MSI part that might be reused for ACPI
  irqchip/gicv3-its: Probe ITS in the ACPI way
  irqchip/gicv3-its: Refactor ITS DT init code to prepare for ACPI
  irqchip/gicv3-its: Cleanup for ITS domain initialization
  PCI/MSI: Setup MSI domain on a per-device basis using IORT ACPI table
  ACPI: Add new IORT functions to support MSI domain handling
  ...
2016-10-03 19:10:15 -07:00
Linus Torvalds
7af8a0f808 arm64 updates for 4.9:

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "It's a bit all over the place this time with no "killer feature" to
  speak of.  Support for mismatched cache line sizes should help people
  seeing whacky JIT failures on some SoCs, and the big.LITTLE perf
  updates have been a long time coming, but a lot of the changes here
  are cleanups.

  We stray outside arch/arm64 in a few areas: the arch/arm/ arch_timer
  workaround is acked by Russell, the DT/OF bits are acked by Rob, the
  arch_timer clocksource changes acked by Marc, CPU hotplug by tglx and
  jump_label by Peter (all CC'd).

  Summary:

   - Support for execute-only page permissions
   - Support for hibernate and DEBUG_PAGEALLOC
   - Support for heterogeneous systems with mismatched cache line sizes
   - Errata workarounds (A53 843419 update and QorIQ A-008585 timer bug)
   - arm64 PMU perf updates, including cpumasks for heterogeneous systems
   - Set UTS_MACHINE for building rpm packages
   - Yet another head.S tidy-up
   - Some cleanups and refactoring, particularly in the NUMA code
   - Lots of random, non-critical fixes across the board"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (100 commits)
  arm64: tlbflush.h: add __tlbi() macro
  arm64: Kconfig: remove SMP dependence for NUMA
  arm64: Kconfig: select OF/ACPI_NUMA under NUMA config
  arm64: fix dump_backtrace/unwind_frame with NULL tsk
  arm/arm64: arch_timer: Use archdata to indicate vdso suitability
  arm64: arch_timer: Work around QorIQ Erratum A-008585
  arm64: arch_timer: Add device tree binding for A-008585 erratum
  arm64: Correctly bounds check virt_addr_valid
  arm64: migrate exception table users off module.h and onto extable.h
  arm64: pmu: Hoist pmu platform device name
  arm64: pmu: Probe default hw/cache counters
  arm64: pmu: add fallback probe table
  MAINTAINERS: Update ARM PMU PROFILING AND DEBUGGING entry
  arm64: Improve kprobes test for atomic sequence
  arm64/kvm: use alternative auto-nop
  arm64: use alternative auto-nop
  arm64: alternative: add auto-nop infrastructure
  arm64: lse: convert lse alternatives NOP padding to use __nops
  arm64: barriers: introduce nops and __nops macros for NOP sequences
  arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
  ...
2016-10-03 08:58:35 -07:00
Radim Krčmář
45ca877ad0 KVM/ARM Changes for v4.9

Merge tag 'kvm-arm-for-v4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into next

KVM/ARM Changes for v4.9

 - Various cleanups and removal of redundant code
 - Two important fixes for not using an in-kernel irqchip
 - A bit of optimizations
 - Handle SError exceptions and present them to guests if appropriate
 - Proxying of GICV access at EL2 if guest mappings are unsafe
 - GICv3 on AArch32 on ARMv8
 - Preparations for GICv3 save/restore, including ABI docs
2016-09-29 16:01:51 +02:00
Mark Rutland
db68f3e759 arm64: tlbflush.h: add __tlbi() macro
As with dsb() and isb(), add a __tlbi() helper so that we can avoid
distracting asm boilerplate every time we want a TLBI. As some TLBI
operations take an argument while others do not, a little preprocessor
logic is used to handle the two cases with different assembly blocks.

The existing tlbflush.h code is moved over to use the helper.
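
The helper ends up looking roughly like this (argument counting via
__VA_ARGS__ selects between the two forms):

  #define __TLBI_0(op, arg)               asm ("tlbi " #op)
  #define __TLBI_1(op, arg)               asm ("tlbi " #op ", %0" : : "r" (arg))
  #define __TLBI_N(op, arg, n, ...)       __TLBI_##n(op, arg)

  /* Usage: __tlbi(vmalle1is) or __tlbi(vaae1is, addr) */
  #define __tlbi(op, ...)                 __TLBI_N(op, ##__VA_ARGS__, 1, 0)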

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
[ rename helper to __tlbi, update comment and commit log ]
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-28 10:44:05 +01:00
Scott Wood
1d8f51d41f arm/arm64: arch_timer: Use archdata to indicate vdso suitability
Instead of comparing the name to a magic string, use archdata to
explicitly communicate whether the arch timer is suitable for
direct vdso access.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-23 17:19:25 +01:00
Scott Wood
f6dc1576cd arm64: arch_timer: Work around QorIQ Erratum A-008585
Erratum A-008585 says that the ARM generic timer counter "has the
potential to contain an erroneous value for a small number of core
clock cycles every time the timer value changes".  Accesses to TVAL
(both read and write) are also affected due to the implicit counter
read.  Accesses to CVAL are not affected.

The workaround is to reread the TVAL and count registers until successive
reads return the same value; writes to TVAL are replaced with an equivalent
write to CVAL.
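
The read side of the workaround amounts to a loop like this (a sketch
against the virtual counter; the real code generates such loops for
several registers):

  static u64 counter_stable_read_sketch(void)
  {
          u64 old, new;

          new = read_sysreg(cntvct_el0);
          do {
                  /* Erroneous values persist only a few cycles, so two
                   * matching back-to-back reads are trustworthy. */
                  old = new;
                  new = read_sysreg(cntvct_el0);
          } while (old != new);

          return new;
  }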

The workaround is enabled if the fsl,erratum-a008585 property is found in
the timer node in the device tree.  This can be overridden with the
clocksource.arm_arch_timer.fsl-a008585 boot parameter, which allows KVM
users to enable the workaround until a mechanism is implemented to
automatically communicate this information.

This erratum can be found on LS1043A and LS2080A.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Scott Wood <oss@buserror.net>
[will: renamed read macro to reflect that it's not usually unstable]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-23 17:19:25 +01:00
AKASHI Takahiro
67787b68ec arm64: kgdb: handle read-only text / modules
Handle read-only cases when CONFIG_DEBUG_RODATA (4.0) or
CONFIG_DEBUG_SET_MODULE_RONX (3.18) are enabled by using
aarch64_insn_write() instead of probe_kernel_write() as introduced by
commit 2f896d5866 ("arm64: use fixmap for text patching") in 4.0.

Fixes: 11d91a770f ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support")
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-09-23 11:25:01 +01:00
Vladimir Murzin
b5525ce898 arm64: KVM: Move GIC accessors to arch_gicv3.h
Since we are going to share vgic-v3 save/restore code with ARM, keep the
arch-specific accessors separate.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2016-09-22 13:21:46 +02:00
Laura Abbott
ca219452c6 arm64: Correctly bounds check virt_addr_valid
virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address it is given and passes the result to pfn_valid to verify. vmalloc
and module addresses can generate a pfn that 'happens' to be valid. Fix
this by only performing the pfn_valid check on addresses that have the
potential to be valid.
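
In essence (a sketch of the macro shape, for a linear map that starts
at PAGE_OFFSET):

  /* Only linear-map addresses may be handed to pfn_valid(). */
  #define _virt_addr_is_linear_sketch(kaddr) \
          (((u64)(kaddr)) >= PAGE_OFFSET)

  #define virt_addr_valid_sketch(kaddr) \
          (_virt_addr_is_linear_sketch(kaddr) && \
           pfn_valid(__pa(kaddr) >> PAGE_SHIFT))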

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-22 10:17:22 +01:00
Al Viro
4855bd255f arm64: don't zero in __copy_from_user{,_inatomic}
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-09-15 19:51:56 -04:00
Daniel Thompson
91ef84428a irqchip/gic-v3: Reset BPR during initialization
Currently, when running on FVP, CPU 0 boots up with its BPR changed from
the reset value. This renders it impossible to (preemptively) prioritize
interrupts on CPU 0.

This is harmless on normal systems since Linux typically does not
support preemptive interrupts. It does however cause problems in
systems with additional changes (such as patches for NMI simulation).

Many thanks to Andrew Thoelke for suggesting the BPR as having the
potential to harm preemption.

Suggested-by: Andrew Thoelke <andrew.thoelke@arm.com>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-09-12 19:46:19 +01:00
Mark Rutland
e506236a7b arm64/kvm: use alternative auto-nop
Make use of the new alternative_if and alternative_else_nop_endif and
get rid of our open-coded NOP sleds, making the code simpler to read.

Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved
out of the alternative sequence, and in the default case there will be
four additional NOPs executed.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-12 10:46:07 +01:00
Mark Rutland
792d47379f arm64: alternative: add auto-nop infrastructure
In some cases, one side of an alternative sequence is simply a number of
NOPs used to balance the other side. Keeping track of this manually is
tedious, and the presence of large chains of NOPs makes the code more
painful to read than necessary.

To ameliorate matters, this patch adds a new alternative_else_nop_endif,
which automatically balances an alternative sequence with a trivial NOP
sled.

In many cases, we would like a NOP-sled in the default case, and
instructions patched in when a feature is present. To enable the NOPs
to be generated automatically for this case, this patch also adds a new
alternative_if, and updates alternative_else and alternative_endif to
work with either alternative_if or alternative_if_not.
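
Usage then reads like this (an assembly sketch with a made-up
capability and register name):

  alternative_if ARM64_HAS_SOME_FEATURE        // hypothetical cap
          mrs     x0, midr_el1                 // patched in when present
  alternative_else_nop_endif                   // NOP sled auto-generated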

Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: use new nops macro to generate nop sequences]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-12 10:45:34 +01:00
Will Deacon
05492f2fd8 arm64: lse: convert lse alternatives NOP padding to use __nops
The LSE atomics are implemented using alternative code sequences of
different lengths, and explicit NOP padding is used to ensure the
patching works correctly.

This patch converts the bulk of the LSE code over to using the __nops
macro, which makes it slightly clearer as to what is going on and also
consolidates all of the padding at the end of the various sequences.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:34 +01:00
Will Deacon
f99a250cb6 arm64: barriers: introduce nops and __nops macros for NOP sequences
NOP sequences tend to get used for padding out alternative sections
and uarch-specific pipeline flushes in errata workarounds.

This patch adds macros for generating these sequences, both as inline
asm blocks and as strings suitable for embedding directly in other asm
blocks.
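
The helpers are small enough to quote in outline (a sketch of their
shape):

  /* String form, for embedding in other asm blocks: */
  #define __nops(n)       ".rept  " #n "\nnop\n.endr\n"

  /* Inline-asm form, for use directly from C: */
  #define nops(n)         asm volatile(__nops(n))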

Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:28 +01:00
Will Deacon
8a71f0c656 arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
Similar to our {read,write}_sysreg accessors for architected, named
system registers, this patch introduces {read,write}_sysreg_s variants
that can take arbitrary sys_reg output and therefore access IMPDEF
registers or registers that are unsupported by binutils.
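
The read variant, in outline (a sketch; mrs_s is the gas macro the
kernel provides for registers binutils does not recognise):

  #define read_sysreg_s_sketch(r) ({                                    \
          u64 __val;                                                    \
          asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val));     \
          __val;                                                        \
  })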

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 18:12:08 +01:00
Robin Murphy
0e27a7fce6 arm64: Remove shadowed asm-generic headers
We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h,
so generating the generic ones as well is unnecessary and a potential
source of build hiccups. At the very least, having them present has
confused my source-indexing tool, and that simply will not do.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:05:08 +01:00
Suzuki K Poulose
116c81f427 arm64: Work around systems with mismatched cache line sizes
Systems with differing CPU i-cache/d-cache line sizes can cause problems
for software cache management when execution is migrated from one CPU to
another. Usually, the application reads
the cache size on a CPU and then uses that length to perform cache
operations. However, if it gets migrated to another CPU with a smaller
cache line size, things could go completely wrong. To prevent such
cases, always use the smallest cache line size among the CPUs. The
kernel CPU feature infrastructure already keeps track of the safe
value for all CPUID registers including CTR. This patch works around
the problem by:

For the kernel, dynamically patch it to read the cache size from the
system-wide copy of CTR_EL0.

For applications, trap read accesses to CTR_EL0 (by clearing SCTLR_EL1.UCT)
and emulate the mrs instruction to return the system-wide safe value of
CTR_EL0.

For faster access (i.e., to avoid looking up the system-wide value of
CTR_EL0 via read_system_reg()), we keep track of the pointer to the table
entry for CTR_EL0 in the CPU feature infrastructure.
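
The userspace side of the trap boils down to something like this (a
sketch; read_system_reg() was the then-current name for fetching the
system-wide safe value):

  /* Emulate a userspace 'mrs xN, ctr_el0' with the safe value. */
  static int ctr_read_handler_sketch(struct pt_regs *regs, int rt)
  {
          regs->regs[rt] = read_system_reg(SYS_CTR_EL0);
          regs->pc += 4;          /* skip the emulated instruction */
          return 0;
  }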

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
9dbd5bb25c arm64: Refactor sysinstr exception handling
Right now we trap some of the user space data cache operations
based on a few errata (ARM 819472, 826319, 827319 and 824069).
We need to trap userspace access to CTR_EL0 if we detect mismatched
cache line sizes. Since both of these traps share the same EC, refactor
the handler a little to make it more reader-friendly.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
072f0a6338 arm64: Introduce raw_{d,i}cache_line_size
On systems with mismatched i/d cache min line sizes, we need to use
the smallest size possible across all CPUs. This will be done by fetching
the system-wide safe value from the CPU feature infrastructure.
However, some special users (e.g. kexec, hibernate) need the line size
of the current CPU (rather than the system-wide value), either when the
system-wide value may not be accessible or when the caller is guaranteed
to execute without being migrated.
Provide another helper which will fetch the cache line size on the current
CPU.
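
For the local-CPU case the helper can decode CTR_EL0 directly (a C
rendering of the idea; the patch itself adds an assembly macro):

  static inline u32 raw_dcache_line_size_sketch(void)
  {
          u32 ctr = read_sysreg(ctr_el0);

          /* DminLine is CTR_EL0[19:16], log2 of the size in words. */
          return 4 << ((ctr >> 16) & 0xf);
  }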

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:29 +01:00
Suzuki K Poulose
46084bc253 arm64: insn: Add helpers for adrp offsets
Adds helpers for decoding/encoding the PC relative addresses for adrp.
This will be used for handling dynamic patching of 'adrp' instructions
in alternative code patching.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Suzuki K Poulose
c47a1900ad arm64: Rearrange CPU errata workaround checks
Right now we run through the work around checks on a CPU
from __cpuinfo_store_cpu. There are some problems with that:

1) We initialise the system wide CPU feature registers only after the
Boot CPU updates its cpuinfo. Now, if a work around depends on the
variance of a CPU ID feature (e.g, check for Cache Line size mismatch),
we have no way of performing it cleanly for the boot CPU.

2) It is out of place: it is invoked from __cpuinfo_store_cpu() in
cpuinfo.c, which is not an obvious home for it.

This patch rearranges the CPU specific capability(aka work around) checks.

1) At the moment we use verify_local_cpu_capabilities() to check if a new
CPU has all the system advertised features. Use this for the secondary CPUs
to perform the work around check. For that we rename
  verify_local_cpu_capabilities() => check_local_cpu_capabilities()
which:

   If the system wide capabilities haven't been initialised (i.e., the CPU
   is activated at boot), update the system wide detected work arounds.

   Otherwise (i.e., a CPU hotplugged in later), verify that this CPU conforms to the
   system wide capabilities.

2) Boot CPU updates the work arounds from smp_prepare_boot_cpu() after we have
initialised the system wide CPU feature values.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Suzuki K Poulose
89ba26458b arm64: Use consistent naming for errata handling
This is a cosmetic change to rename the functions dealing with
the errata work arounds to be more consistent with their naming.

1) check_local_cpu_errata() => update_cpu_errata_workarounds()
check_local_cpu_errata() actually updates the system's errata work
arounds, so rename it to reflect that.

2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
Use errata_workarounds instead of _errata.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Suzuki K Poulose
ee7bc638f1 arm64: Set the safe value for L1 icache policy
Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is
not defined at the moment. The safe value for L1Ip should be the
weakest of the policies, which happens to be AIVIVT. While at it,
fix the comment about safe_val.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 15:03:28 +01:00
Chunyan Zhang
2b9743441a arm64: use preempt_disable_notrace in _percpu_read/write
When debug preempt or the preempt tracer is enabled,
preempt_count_add/sub() can be traced by function and function-graph
tracing, and preempt_disable/enable() call preempt_count_add/sub(), so
in the ftrace subsystem we should use preempt_disable/enable_notrace
instead.

In commit 345ddcc882 ("ftrace: Have set_ftrace_pid use the bitmap
like events do"), this_cpu_read() was added to trace_graph_entry(),
and if this_cpu_read() calls preempt_disable(), the graph tracer will
go into a recursive loop, even if tracing_on is disabled.

So this patch changes this_cpu_read() to use
preempt_disable/enable_notrace instead.
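
The fixed accessor has this shape (a sketch, simplified to a plain
dereference):

  #define _percpu_read_sketch(pcp)                                \
  ({                                                              \
          typeof(pcp) __retval;                                   \
          preempt_disable_notrace();      /* not itself traced */ \
          __retval = *raw_cpu_ptr(&(pcp));                        \
          preempt_enable_notrace();                               \
          __retval;                                               \
  })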

Yonghui Yang helped a lot to find the root cause of this problem, so
his SOB is also added.

Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-09-09 12:34:47 +01:00
Will Deacon
872c63fbf9 arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
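
The override itself is a single line in the arm64 barrier header (the
shape of the fix):

  /* Upgrade the generic smp_wmb() to a full barrier on arm64. */
  #define smp_mb__before_spinlock()       smp_mb()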

Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-09-09 12:33:48 +01:00
Mark Rutland
d3ea42aad5 arm64: simplify contextidr_thread_switch
When CONFIG_PID_IN_CONTEXTIDR is not selected, we use an empty stub
definition of contextidr_thread_switch(). As everything we rely upon
exists regardless of CONFIG_PID_IN_CONTEXTIDR, we don't strictly require
an empty stub.

By using IS_ENABLED() rather than ifdeffery, we avoid duplication, and
get compiler coverage on all the code even when CONFIG_PID_IN_CONTEXTIDR
is not selected and the code is optimised away.
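
The result reads roughly as follows (a sketch close to the description
above):

  static inline void contextidr_thread_switch(struct task_struct *next)
  {
          /* The compiler sees (and type-checks) this even when the
           * option is off; the dead code is then optimised away. */
          if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
                  return;

          write_sysreg(task_pid_nr(next), contextidr_el1);
          isb();
  }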

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:43:50 +01:00
Mark Rutland
adf7589997 arm64: simplify sysreg manipulation
A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these across arm64 to make code shorter and
clearer. For sequences with a trailing ISB, the existing isb() macro is
also used so that asm blocks can be removed entirely.

A few uses of inline assembly for msr/mrs are left as-is. Those
manipulating sp_el0 for the current thread_info value have special
clobber requirements.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:43:50 +01:00
Mark Rutland
1f3d8699be arm64/kvm: use {read,write}_sysreg()
A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 KVM code to make the code
shorter and clearer.

At the same time, a comment style violation next to a system register
access is fixed up in reset_pmcr, and comments describing whether
operations are reads or writes are removed as this is now painfully
obvious.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:42:27 +01:00
Mark Rutland
d0a69d9f38 arm64: dcc: simplify accessors
A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 DCC accessors to make the
code shorter and clearer.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:41:13 +01:00
Mark Rutland
cd5f22d796 arm64: arch_timer: simplify accessors
A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 arch timer accessors to make
the code shorter and clearer.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:41:13 +01:00
Mark Rutland
7aff4a2dd3 arm64: sysreg: allow write_sysreg to use XZR
Currently write_sysreg has to allocate a temporary register to write
zero to a system register, which is unfortunate given that the MSR
instruction accepts XZR as an operand.

Allow XZR to be used when appropriate by fiddling with the assembly
constraints.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 11:40:39 +01:00