mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-25 05:34:00 +08:00
Commit Graph

574582 Commits

Mark Rutland
c661cb1c53 arm64: consistently use p?d_set_huge
Commit 324420bf91 ("arm64: add support for ioremap() block
mappings") added new p?d_set_huge functions which do the hard work to
generate and set a correct block entry.

These differ from open-coded huge page creation in the early page table
code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
retained by mk_sect_prot() for any valid prot), but are otherwise
identical (and cannot fail on arm64).

For simplicity and consistency, make use of these in the initial page
table creation code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-24 16:32:29 +00:00
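
A minimal sketch of what the initial page table code looks like after the
change (not the exact diff; the loop and helper names are assumed from the
arch/arm64/mm/mmu.c of that era):

    do {
        next = pmd_addr_end(addr, end);
        /* try a section mapping first */
        if (((addr | next | phys) & ~SECTION_MASK) == 0) {
            /* was: set_pmd(pmd, __pmd(phys | pgprot_val(mk_sect_prot(prot)))); */
            pmd_set_huge(pmd, phys, prot);  /* sets PMD_TYPE_SECT explicitly */
        } else {
            alloc_init_pte(pmd, addr, next, phys, prot);
        }
        phys += next - addr;
    } while (pmd++, addr = next, addr != end);
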
Ard Biesheuvel
d5e5743797 arm64: kaslr: use callee saved register to preserve SCTLR across C call
The KASLR code incorrectly expects the contents of x18 to be preserved
across a call into C code, and uses it to stash the contents of SCTLR_EL1
before enabling the MMU. If the MMU needs to be disabled again to create
the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
likely to crash the system if x18 has been clobbered by kasan_early_init()
or kaslr_early_init(). So use x22 instead, which is not in use so far in
head.S.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-24 16:19:24 +00:00
Catalin Marinas
f09f1bacfe arm64: Split pr_notice("Virtual kernel memory layout...") into multiple pr_cont()
The printk() implementation has a limit of LOG_LINE_MAX (== 1024 - 32)
buffer per call, which the arm64 mem_init() breaches when printing the
virtual memory layout with CONFIG_KASAN enabled. The result is that the
last line is no longer printed. This patch splits the call into a
pr_notice() + additional pr_cont() calls.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
2016-03-21 12:19:12 +00:00
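
A sketch of the resulting pattern in mem_init() (format strings simplified;
the constants are the era's arm64 layout symbols):

    pr_notice("Virtual kernel memory layout:\n");
    #ifdef CONFIG_KASAN
    pr_cont("    kasan   : 0x%16lx - 0x%16lx   (%6ld GB)\n",
            KASAN_SHADOW_START, KASAN_SHADOW_END,
            (KASAN_SHADOW_END - KASAN_SHADOW_START) >> 30);
    #endif
    pr_cont("    vmalloc : 0x%16lx - 0x%16lx   (%6ld GB)\n",
            VMALLOC_START, VMALLOC_END,
            (VMALLOC_END - VMALLOC_START) >> 30);

Each pr_cont() continues the previous record, so no single call comes near
the LOG_LINE_MAX limit.
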
Kefeng Wang
cc7c0cda8f arm64: drop unused __local_flush_icache_all()
After commit 65da0a8e34 ("arm64: use non-global mappings for UEFI
runtime regions"), nobody use __local_flush_icache_all() anymore,
so drop it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-21 12:10:03 +00:00
Mark Rutland
b90b4a608e arm64: fix KASLR boot-time I-cache maintenance
Commit f80fb3a3d5 ("arm64: add support for kernel ASLR") missed a
DSB necessary to complete I-cache maintenance in the primary boot path,
and hence stale instructions may still be present in the I-cache and may
be executed until the I-cache maintenance naturally completes.

Since commit 8ec4198743 ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU being enabled, arbitrary lines may
have been fetched from the PoC into I-caches. We never patch text
expected to be executed with the MMU off. Thus, it is unnecessary to
perform broadcast I-cache maintenance in the primary boot path.

This patch reduces the scope of the I-cache maintenance to the local
CPU, and adds the missing DSB with similar scope, matching prior
maintenance in the primary boot path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-21 12:08:50 +00:00
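
The local-scope maintenance described above boils down to the following
sequence (shown here as inline asm for illustration; the actual change is
in head.S):

    /* invalidate the local I-cache, then complete with a DSB of matching
     * (non-shareable) scope before any patched text can be fetched */
    asm volatile("ic iallu\n\t"
                 "dsb nsh\n\t"
                 "isb" : : : "memory");
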
Ard Biesheuvel
b660950c60 arm64/kernel: fix incorrect EL0 check in inv_entry macro
The implementation of macro inv_entry refers to its 'el' argument without
the required leading backslash, which results in an undefined symbol
'el' being passed into the kernel_entry macro rather than the index of
the exception level as intended.

This undefined symbol strangely enough does not result in build failures,
although it is visible in vmlinux:

     $ nm -n vmlinux |head
                      U el
     0000000000000000 A _kernel_flags_le_hi32
     0000000000000000 A _kernel_offset_le_hi32
     0000000000000000 A _kernel_size_le_hi32
     000000000000000a A _kernel_flags_le_lo32
     .....

However, it does result in incorrect code being generated for invalid
exceptions taken from EL0, since the argument check in kernel_entry
assumes EL1 if its argument does not equal '0'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-21 12:05:34 +00:00
Catalin Marinas
2776e0e8ef arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
kimg_shadow_end is not page aligned (_end shifted by
KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow previously
mapped via vmemmap_populate() may be overridden by subsequent calls to
kasan_populate_zero_shadow(), leading to kernel panics like below:

------------------------------------------------------------------------------
Unable to handle kernel paging request at virtual address fffffc100135068c
pgd = fffffc8009ac0000
[fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
Internal error: Oops: 9600004f [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
Hardware name: Juno (DT)
task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
PC is at __memset+0x4c/0x200
LR is at kasan_unpoison_shadow+0x34/0x50
pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
sp : fffffe0900203db0
x29: fffffe0900203db0 x28: 0000000000000000
x27: 0000000000000000 x26: 0000000000000000
x25: fffffc80099b69d0 x24: 0000000000000001
x23: 0000000000000000 x22: 0000000000002000
x21: dffffc8000000000 x20: 1fffff9001350a8c
x19: 0000000000002000 x18: 0000000000000008
x17: 0000000000000147 x16: ffffffffffffffff
x15: 79746972100e041d x14: ffffff0000000000
x13: ffff000000000000 x12: 0000000000000000
x11: 0101010101010101 x10: 1fffffc11c000000
x9 : 0000000000000000 x8 : fffffc100135068c
x7 : 0000000000000000 x6 : 000000000000003f
x5 : 0000000000000040 x4 : 0000000000000004
x3 : fffffc100134f651 x2 : 0000000000000400
x1 : 0000000000000000 x0 : fffffc100135068c

Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
Call trace:
[<fffffc800846f1cc>] __memset+0x4c/0x200
[<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
[<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
[<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
[<fffffc80089e1948>] kernel_init+0x10/0xf8
[<fffffc8008093a00>] ret_from_fork+0x10/0x50
------------------------------------------------------------------------------

This patch aligns kimg_shadow_start and kimg_shadow_end to
SWAPPER_BLOCK_SIZE in all configurations.

Fixes: f9040773b7 ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2016-03-11 11:03:35 +00:00
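
A sketch of the fix in kasan_init() (helper names from the mm/kasan code of
that era):

    kimg_shadow_start = round_down((u64)kasan_mem_to_shadow(_text),
                                   SWAPPER_BLOCK_SIZE);
    kimg_shadow_end = round_up((u64)kasan_mem_to_shadow(_end),
                               SWAPPER_BLOCK_SIZE);

With both edges aligned to SWAPPER_BLOCK_SIZE, kasan_populate_zero_shadow()
can no longer replace the edge entries of the image shadow mapping.
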
Catalin Marinas
2f76969f2e arm64: kasan: Use actual memory node when populating the kernel image shadow
With the 16KB or 64KB page configurations, the generic
vmemmap_populate() implementation warns on potential offnode
page_structs via vmemmap_verify() because the arm64 kasan_init() passes
NUMA_NO_NODE instead of the actual node for the kernel image memory.

Fixes: f9040773b7 ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
2016-03-11 11:03:34 +00:00
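
A sketch of the corrected call, assuming early_pfn_to_nid() as the node
lookup (the exact helper used may differ):

    vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
                     early_pfn_to_nid(__pa(_text) >> PAGE_SHIFT));
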
Catalin Marinas
fdc69e7df3 arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission
The set_pte_at() function must update the hardware PTE_RDONLY bit
depending on the state of the PTE_WRITE and PTE_DIRTY bits of the given
entry value. However, it currently only performs this for pte_valid()
entries, ignoring PTE_PROT_NONE. The side-effect is that PROT_NONE
mappings would not have the PTE_RDONLY bit set. Without
CONFIG_ARM64_HW_AFDBM, this is not an issue since such PROT_NONE pages
are not accessible anyway.

With commit 2f4b829c62 ("arm64: Add support for hardware updates of
the access and dirty pte bits"), the ptep_set_wrprotect() function was
re-written to cope with automatic hardware updates of the dirty state.
As an optimisation, only PTE_RDONLY is checked to assess the "dirty"
status. Since set_pte_at() does not set this bit for PROT_NONE mappings,
such pages may be considered "dirty" as a result of
ptep_set_wrprotect().

This patch updates the pte_valid() check to pte_present() in
set_pte_at(). It also adds PTE_PROT_NONE to the swap entry bits comment.

Fixes: 2f4b829c62 ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Tested-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
Cc: <stable@vger.kernel.org>
2016-03-11 11:03:34 +00:00
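
A sketch of the resulting set_pte_at() logic (simplified; pte_sw_dirty()
comes from the CONFIG_ARM64_HW_AFDBM support):

    static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pte)
    {
        if (pte_present(pte)) {     /* was: pte_valid(pte) */
            if (pte_sw_dirty(pte) && pte_write(pte))
                pte_val(pte) &= ~PTE_RDONLY;
            else
                pte_val(pte) |= PTE_RDONLY;
        }
        set_pte(ptep, pte);
    }

pte_present() covers both valid and PTE_PROT_NONE entries, so PROT_NONE
mappings now get PTE_RDONLY set as well.
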
Adam Buchbinder
ef769e3208 arm64: Fix misspellings in comments.
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-04 18:19:17 +00:00
Ard Biesheuvel
cd1b76bb73 arm64: efi: add missing frame pointer assignment
The prologue of the EFI entry point pushes x29 and x30 onto the stack but
fails to create the stack frame correctly by omitting the assignment of x29
to the new value of the stack pointer. So fix that.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-04 18:12:23 +00:00
Mark Rutland
1cc6ed90dd arm64: make mrs_s prefixing implicit in read_cpuid
Commit 0f54b14e76 ("arm64: cpufeature: Change read_cpuid() to use
sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
register names, to allow manual assembly of registers unknown by the
toolchain, using tables in sysreg.h.

This interacts poorly with commit 42b5573403 ("efi/arm64: Check
for h/w support before booting a >4 KB granular kernel"), which is
currently queued via the tip tree, and uses read_cpuid without a SYS_
prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
are selected.

To avoid this issue when trees are merged, move the required SYS_
prefixing into read_cpuid, and revert all of the updated callsites to
pass plain register names. This effectively reverts the bulk of commit
0f54b14e76.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-04 14:12:46 +00:00
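
A sketch of the resulting macro, with the SYS_ prefix pasted on inside:

    #define read_cpuid(reg) ({                                          \
        u64 __val;                                                      \
        asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val));      \
        __val;                                                          \
    })

Callers can then write read_cpuid(ID_AA64MMFR0_EL1) again, without the
SYS_ prefix.
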
Ard Biesheuvel
57efac2f71 arm64: enable CONFIG_DEBUG_RODATA by default
In spite of its name, CONFIG_DEBUG_RODATA is an important hardening feature
for production kernels, and distros all enable it by default in their
kernel configs. However, since enabling it used to result in more granular,
and thus less efficient kernel mappings, it has not been enabled by default
in the kernel itself, for performance reasons.

However, since commit 2f39b5f91e ("arm64: mm: Mark .rodata as RO"), the
various kernel segments (.text, .rodata, .init and .data) are already
mapped individually, and the only effect of setting CONFIG_DEBUG_RODATA is
that the existing .text and .rodata mappings are updated late in the boot
sequence to have their read-only attributes set, which means that any
performance concerns related to enabling CONFIG_DEBUG_RODATA are no longer
valid.

So from now on, make CONFIG_DEBUG_RODATA default to 'y'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-03 18:14:17 +00:00
Mark Rutland
dbd4d7ca56 arm64: Rework valid_user_regs
We validate pstate using PSR_MODE32_BIT, which is part of the
user-provided pstate (and cannot be trusted). Also, we conflate
validation of AArch32 and AArch64 pstate values, making the code
difficult to reason about.

Instead, validate the pstate value based on the associated task. The
task may or may not be current (e.g. when using ptrace), so this must be
passed explicitly by callers. To avoid circular header dependencies via
sched.h, is_compat_task is pulled out of asm/ptrace.h.

To make the code possible to reason about, the AArch64 and AArch32
validation is split into separate functions. Software must respect the
RES0 policy for SPSR bits, and thus the kernel mirrors the hardware
policy (RAZ/WI) for bits as-yet unallocated. When these acquire an
architected meaning, writes may be permitted (potentially with additional
validation).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-02 15:49:28 +00:00
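
A sketch of the task-based dispatch (the validation bodies are omitted):

    int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
    {
        /* dispatch on the task's ABI, not on untrusted pstate bits */
        if (is_compat_thread(task_thread_info(task)))
            return valid_compat_regs(regs);
        else
            return valid_native_regs(regs);
    }
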
Ard Biesheuvel
6d2aa549de arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
Commit 8439e62a15 ("arm64: mm: use bit ops rather than arithmetic in
pa/va translations") changed the boundary check against PAGE_OFFSET from
an arithmetic comparison to a bit test. This means we now silently assume
that PAGE_OFFSET is a power of 2 that divides the kernel virtual address
space into two equal halves. So make that assumption explicit.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-02 09:45:52 +00:00
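
One plausible form of such a build-time check (placed in an init function;
the exact assertion in the commit may differ):

    /* PAGE_OFFSET must be aligned to half the VA space for the bit ops
     * in __virt_to_phys()/__phys_to_virt() to be correct */
    BUILD_BUG_ON((PAGE_OFFSET & ((UL(1) << (VA_BITS - 1)) - 1)) != 0);
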
Marc Zyngier
22b39ca3f2 arm64: KVM: Move kvm_call_hyp back to its original location
In order to reduce the risk of a bad merge, let's move the new
kvm_call_hyp back to its original location in the file. This has
zero impact from a code point of view.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-03-01 13:49:51 +00:00
Ard Biesheuvel
020d044f66 arm64: mm: treat memstart_addr as a signed quantity
Commit c031a4213c ("arm64: kaslr: randomize the linear region")
implements randomization of the linear region, by subtracting a random
multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping
of system RAM to move upwards in the linear region, and at the same time
causes memstart_addr to assume a value which may be negative if the offset
of system RAM in the physical space is smaller than its offset relative to
PAGE_OFFSET in the virtual space.

Since memstart_addr is effectively an offset now, redefine its type as s64
so that expressions involving shifting or division preserve its sign.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-29 18:31:03 +00:00
Ard Biesheuvel
a6e1f7273b arm64: mm: list kernel sections in order
In the boot log, instead of listing .init first, list .text, .rodata,
.init and .data in the same order they appear in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-29 17:15:44 +00:00
Ard Biesheuvel
5be8b70af1 arm64: lse: deal with clobbered IP registers after branch via PLT
The LSE atomics implementation uses runtime patching to patch in calls
to out of line non-LSE atomics implementations on cores that lack hardware
support for LSE. To avoid paying the overhead cost of a function call even
if no call ends up being made, the bl instruction is kept invisible to the
compiler, and the out of line implementations preserve all registers, not
just the ones that they are required to preserve as per the AAPCS64.

However, commit fd045f6cd9 ("arm64: add support for module PLTs") added
support for routing branch instructions via veneers if the branch target
offset exceeds the range of the ordinary relative branch instructions.
Since this deals with jump and call instructions that are exposed to ELF
relocations, the PLT code uses x16 to hold the address of the branch target
when it performs an indirect branch-to-register, something which is
explicitly allowed by the AAPCS64 (and ordinary compiler generated code
does not expect register x16 or x17 to retain their values across a bl
instruction).

Since the lse runtime patched bl instructions don't adhere to the AAPCS64,
they don't deal with this clobbering of registers x16 and x17. So add them
to the clobber list of the asm() statements that perform the call
instructions, and drop x16 and x17 from the list of registers that are
callee saved in the out of line non-LSE implementations.

In addition, since we have given these functions two scratch registers,
they no longer need to stack/unstack temp registers.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 18:35:02 +00:00
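
A sketch of the resulting pattern (this mirrors the post-commit
atomic_add(); the clobber list is the point):

    #define __LL_SC_CLOBBERS "x16", "x17", "x30"

    static inline void atomic_add(int i, atomic_t *v)
    {
        register int w0 asm ("w0") = i;
        register atomic_t *x1 asm ("x1") = v;

        asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC(add),
                     "stadd %w[i], %[v]")
        : [i] "+r" (w0), [v] "+Q" (v->counter)
        : "r" (x1)
        : __LL_SC_CLOBBERS);   /* x16/x17 may be clobbered by a PLT veneer */
    }
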
Kefeng Wang
cc30e6b95c arm64: mm: dump: Use VA_START directly instead of private LOWEST_ADDR
Use VA_START macro in asm/memory.h instead of private LOWEST_ADDR
definition in dump.c.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 18:31:41 +00:00
Will Deacon
f993318bfe arm64: kconfig: add submenu for 8.2 architectural features
UAO is a feature of ARMv8.2, so add a submenu like we have for 8.1.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 18:12:35 +00:00
Lorenzo Pieralisi
c1e4659ba8 arm64: kernel: acpi: fix ioremap in ACPI parking protocol cpu_postboot
When secondary cpus are booted through the ACPI parking protocol, the
booted cpu should check that FW has correctly cleared its mailbox entry
point value to make sure the boot process was correctly executed.
The entry point check is carried out in the cpu_ops->cpu_postboot method,
which is executed by secondary cpus when entering the kernel with irqs disabled.

The ACPI parking protocol cpu_ops maps/unmaps the mailboxes on the
primary CPU to trigger secondary boot in the cpu_ops->cpu_boot method
and on secondary processors to carry out FW checks on the booted CPU
to verify the boot protocol was successfully executed in the
cpu_ops->cpu_postboot method.

Therefore, the cpu_ops->cpu_postboot method is forced to ioremap/unmap the
mailboxes, which is wrong in that ioremap cannot safely be carried out
with irqs disabled.

To fix this issue, this patch reshuffles the code so that the mailboxes
are still mapped after the boot processor executes the cpu_ops->cpu_boot
method for a given cpu, and the VA at which a mailbox is mapped for a given
cpu is stashed in the per-cpu data struct so that secondary cpus can
retrieve it in the cpu_ops->cpu_postboot method and complete the required
FW checks.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Tested-by: Loc Ho <lho@apm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Loc Ho <lho@apm.com>
Cc: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 15:39:52 +00:00
Suzuki K Poulose
bf50061844 arm64: Add support for Half precision floating point
ARMv8.2 extensions [1] include an optional feature, which supports
half precision(16bit) floating point/asimd data processing
instructions. This patch adds support for detecting the feature and
exposing it to userspace via HWCAPs.

[1] https://community.arm.com/groups/processors/blog/2016/01/05/armv8-a-architecture-evolution

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 15:37:01 +00:00
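
The detection plausibly takes the shape of new hwcaps table entries like
the following (field and cap names assumed from the cpufeature.c of that
era):

    HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED,
              1, CAP_HWCAP, HWCAP_FPHP),
    HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED,
              1, CAP_HWCAP, HWCAP_ASIMDHP),
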
Mark Rutland
3eca86e75e arm64: Remove fixmap include fragility
The asm-generic fixmap.h depends on each architecture's fixmap.h to pull
in the definition of PAGE_KERNEL_RO, if this exists. In the absence of
this, FIXMAP_PAGE_RO will not be defined. In mm/early_ioremap.c the
definition of early_memremap_ro is predicated on FIXMAP_PAGE_RO being
defined.

Currently, the arm64 fixmap.h doesn't include pgtable.h for the
definition of PAGE_KERNEL_RO, and as a knock-on effect early_memremap_ro
is not always defined, leading to link-time failures when it is used.
This has been observed with defconfig on next-20160226.

Unfortunately, as pgtable.h includes fixmap.h, adding the include
introduces a circular dependency, which is just as fragile.

Instead, this patch factors out PAGE_KERNEL_RO and other prot
definitions into a new pgtable-prot header which can be included by both
pgtable.h and fixmap.h, avoiding the circular dependency, and ensuring
that early_memremap_ro is always defined where it is used.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 15:22:53 +00:00
Andrew Pinski
104a0c02e8 arm64: Add workaround for Cavium erratum 27456
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI
instructions may cause the icache to become corrupted if it contains
data for a non-current ASID.

This patch implements the workaround (which invalidates the local
icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 15:14:27 +00:00
Jeremy Linton
2f39b5f91e arm64: mm: Mark .rodata as RO
Currently the .rodata section is actually still executable when DEBUG_RODATA
is enabled. This changes that so .rodata is actually read-only, no-execute.
It also adds the .rodata section to the mem_init banner.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 15:08:04 +00:00
Miles Chen
b7dc8d16e7 arm64/mm: remove unnecessary boundary check
Remove the unnecessary boundary check, since there is a huge gap
between the user and kernel address ranges and they can never overlap.
(arm64 does not have enough levels of page tables to cover the full
64-bit virtual address space.)

See Documentation/arm64/memory.txt

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-26 13:39:53 +00:00
Catalin Marinas
cac4b8cdf5 arm64: Fix building error with 16KB pages and 36-bit VA
In such a configuration, Linux uses only two pages of page tables and
__pud_populate() should not be used. However, the BUILD_BUG() triggers
since pud_sect() is still defined and the compiler cannot eliminate such
code, even though at run-time it should not be triggered. This patch
extends the #ifdef ARM64_64K_PAGES condition for pud_sect to include
PGTABLE_LEVELS < 3.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 16:01:26 +00:00
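
A sketch of the widened condition:

    #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
    #define pud_sect(pud)   (0)
    #define pud_table(pud)  (1)
    #else
    #define pud_sect(pud)   ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT)
    #define pud_table(pud)  ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_TABLE)
    #endif

With pud_sect() a constant 0, the compiler can eliminate the dead branch
containing the BUILD_BUG().
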
Suzuki K Poulose
28c5dcb22f arm64: Rename cpuid_feature field extract routines
Now that we have a clear understanding of the sign of a feature,
rename the routines to reflect the sign, so that it is not misused.
The cpuid_feature_extract_field() now accepts a 'sign' parameter.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:08 +00:00
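
A sketch of the renamed helper, with the sign made an explicit parameter:

    static inline int __attribute_const__
    cpuid_feature_extract_field(u64 features, int field, bool sign)
    {
        return sign ?
            cpuid_feature_extract_signed_field(features, field) :
            (int)cpuid_feature_extract_unsigned_field(features, field);
    }
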
Suzuki K Poulose
ff96f7bc7b arm64: capabilities: Handle sign of the feature bit
Use the appropriate accessor for the feature bit by keeping
track of the sign of the feature.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:07 +00:00
Suzuki K Poulose
0710cfdb8d arm64: cpufeature: Fix the sign of feature bits
There is confusion about whether the values of a feature are signed
or not in ARM, and this is not clearly specified in the ARM ARM either.
We have treated most of the bits as signed so far, and marked the
rest as unsigned explicitly. This is being fixed in the ARM ARM and will
be rolled out soon.

Here is the criteria in a nutshell:

1) The fields, which are either signed or unsigned, use increasing
   numerical values to indicate an increase in functionality. Thus, if a value
   of 0x1 indicates the presence of some instructions, then the 0x2 value will
   indicate the presence of those instructions plus some additional instructions
   or functionality.

2) For ID field values where the value 0x0 defines that a feature is not present,
   the number is an unsigned value.

3) For some features where the feature was made optional or removed after the
   start of the definition of the architecture, the value 0x0 is used to
   indicate the presence of a feature, and 0xF indicates the absence of the
   feature. In these cases, the fields are, in effect, holding signed values.

So with these rules applied, we have only the following fields which are signed and
the rest are unsigned.

 a) ID_AA64PFR0_EL1: {FP, ASIMD}
 b) ID_AA64MMFR0_EL1: {TGran4K, TGran64K}
 c) ID_AA64DFR0_EL1: PMUVer (0xf - PMUv3 not implemented)
 d) ID_DFR0_EL1: PerfMon
 e) ID_MMFR0_EL1: {InnerShr, OuterShr}

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:07 +00:00
Suzuki K Poulose
e53435031a arm64: cpufeature: Correct feature register tables
Correct the feature bit entries for:
  ID_DFR0
  ID_MMFR0

to fix the default safe value for some of the bits.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:07 +00:00
Suzuki K Poulose
13f417f3b8 arm64: Ensure the secondary CPUs have safe ASIDBits size
Adds a hook for checking whether a secondary CPU has the
features already used by the kernel during early boot, based
on the boot CPU, and plugs in the check for the ASID size.

The ID_AA64MMFR0_EL1:ASIDBits determines the size of the mm context
id and is used in the early boot to make decisions. The value is
picked up from the Boot CPU and cannot be delayed until other CPUs
are up. If a secondary CPU has a smaller size than that of the Boot
CPU, things will break horribly and the usual SANITY check is not good
enough to prevent the system from crashing. So, crash the system with
enough information.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:06 +00:00
Suzuki K Poulose
038dc9c66a arm64: Add helper for extracting ASIDBits
Add a helper to extract ASIDBits on the current cpu.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:06 +00:00
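
A sketch of the helper (encoding values per the ID_AA64MMFR0_EL1.ASIDBits
field: 0 means 8 bits, 2 means 16 bits):

    static u32 get_cpu_asid_bits(void)
    {
        u32 asid;
        int fld = cpuid_feature_extract_unsigned_field(
                        read_cpuid(ID_AA64MMFR0_EL1),
                        ID_AA64MMFR0_ASID_SHIFT);

        switch (fld) {
        default:
            pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
                    smp_processor_id(), fld);
            /* fallthrough */
        case 0:
            asid = 8;
            break;
        case 2:
            asid = 16;
            break;
        }
        return asid;
    }
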
Suzuki K Poulose
fd9c2790cb arm64: Enable CPU capability verification unconditionally
We verify the capabilities of the secondary CPUs only when
hotplug is enabled. The CPUs activated at boot time do not
go through the verification, as the check is skipped when the system
wide capabilities have not yet been initialised.

This patch removes the capability check dependency on CONFIG_HOTPLUG_CPU,
to make sure that all the secondary CPUs go through the check.
The boot time activated CPUs will still skip the system wide
capability check. The plan is to hook in a check for CPU features
used by the kernel at early boot up, based on the Boot CPU values.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:33:06 +00:00
Suzuki K Poulose
bb9052744f arm64: Handle early CPU boot failures
A secondary CPU could fail to come online due to insufficient
capabilities and could simply die or loop in the kernel.
E.g., a CPU with no support for the selected kernel PAGE_SIZE
loops in the kernel with the MMU turned off, or a hotplugged CPU
which doesn't have one of the advertised system capabilities will
die during activation.

There is no way to synchronise the status of the failing CPU
back to the master. This patch solves the issue by adding a
field to the secondary_data which can be updated by the failing
CPU. If the secondary CPU fails even before turning the MMU on,
it updates the status in a special variable reserved in the head.txt
section to make sure that the update can be cache invalidated safely
without possibly sharing a cache writeback granule with other data.

Here are the possible states :

 -1. CPU_MMU_OFF - Initial value set by the master CPU, this value
indicates that the CPU could not turn the MMU on, hence the status
could not be reliably updated in the secondary_data. Instead, the
CPU has updated the status @ __early_cpu_boot_status.

 0. CPU_BOOT_SUCCESS - CPU has booted successfully.

 1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating the
master CPU to synchronise by issuing a cpu_ops->cpu_kill.

 2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), instead is
looping in the kernel. This information could be used by say,
kexec to check if it is really safe to do a kexec reboot.

 3. CPU_PANIC_KERNEL - CPU detected some serious issues which
requires kernel to crash immediately. The secondary CPU cannot
call panic() until it has initialised the GIC. This flag can
be used to instruct the master to do so.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-25 10:32:23 +00:00
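
The states above map naturally onto constants like these (a sketch, with
the names taken from the list in the message):

    #define CPU_MMU_OFF             (-1)
    #define CPU_BOOT_SUCCESS        (0)
    #define CPU_KILL_ME             (1)
    #define CPU_STUCK_IN_KERNEL     (2)
    #define CPU_PANIC_KERNEL        (3)
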
Suzuki K Poulose
fce6361fe9 arm64: Move cpu_die_early to smp.c
This patch moves cpu_die_early to smp.c, where it fits better.
No functional changes, except for adding the necessary checks
for CONFIG_HOTPLUG_CPU.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 17:17:45 +00:00
Suzuki K Poulose
ee02a15919 arm64: Introduce cpu_die_early
Or in other words, make fail_incapable_cpu() reusable.

We use fail_incapable_cpu() to kill, early during bringup, a secondary
CPU which doesn't have the capabilities advertised by the system.
This patch makes the routine more generic, to kill a secondary
booting CPU, getting rid of the dependency on capability struct.
This can be used by checks which are not necessarily attached to
a capability struct (e.g, cpu ASIDBits).

In the process, the function is renamed to cpu_die_early() to better
match its functionality. It will be moved to arch/arm64/kernel/smp.c
later.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 17:17:44 +00:00
Suzuki K Poulose
c4bc34d202 arm64: Add a helper for parking CPUs in a loop
Adds a routine which can be used to park CPUs (spinning in the kernel)
when they can't be killed.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 17:17:44 +00:00
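
A sketch of such a parking loop (wfe()/wfi() are the arm64 low-power wait
macros):

    void cpu_park_loop(void)
    {
        for (;;) {
            /* sit in low-power wait; the CPU is unrecoverable here */
            wfe();
            wfi();
        }
    }
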
Ard Biesheuvel
2b5fe07a78 arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness
Since arm64 does not use a decompressor that could supply an execution
environment in which to gather some randomness, the arm64 KASLR kernel
depends on the bootloader to supply some random bits in the
/chosen/kaslr-seed DT property upon kernel entry.

On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain
some random bits. At the same time, use it to randomize the offset of the
kernel Image in physical memory.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:29 +00:00
Ard Biesheuvel
48fcb2d021 efi: stub: use high allocation for converted command line
Before we can move the command line processing ahead of the allocation
of the kernel, which is required for detecting the 'nokaslr' option that
controls that allocation, move the converted command line higher
up in memory, to prevent it from interfering with the kernel itself.

Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper
bound there. Otherwise, use ULONG_MAX (i.e., no limit).

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:28 +00:00
Ard Biesheuvel
2ddbfc81ea efi: stub: add implementation of efi_random_alloc()
This implements efi_random_alloc(), which allocates a chunk of memory of
a certain size at a certain alignment, and uses the random_seed argument
it receives to randomize the address of the allocation.

This is implemented by iterating over the UEFI memory map, counting the
number of suitable slots (aligned offsets) within each region, and picking
a random number between 0 and 'number of slots - 1' to select the slot.
This guarantees that each possible offset is equally likely to be chosen.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:28 +00:00
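
A sketch of the slot-counting idea (names assumed, not the stub's exact
code):

    /* number of 'align'-aligned offsets in [start, end) at which an
     * allocation of 'size' bytes still fits entirely in the region */
    static unsigned long region_slots(u64 start, u64 end,
                                      unsigned long size, unsigned long align)
    {
        u64 first = round_up(start, align);

        if (first > end || end - first < size)
            return 0;
        return (end - first - size) / align + 1;
    }

The total over all regions is computed first, target = random_seed % total
selects one slot, and a second pass over the memory map finds and
allocates it.
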
Ard Biesheuvel
e4fbf47674 efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL
This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new
function efi_get_random_bytes().

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:28 +00:00
Ard Biesheuvel
c031a4213c arm64: kaslr: randomize the linear region
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
provided by the bootloader, randomize the placement of RAM inside the
linear region if sufficient space is available. For instance, on a 4KB
granule/3 levels kernel, the linear region is 256 GB in size, and we can
choose any 1 GB aligned offset that is far enough from the top of the
address space to fit the distance between the start of the lowest memblock
and the top of the highest memblock.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:27 +00:00
Ard Biesheuvel
f80fb3a3d5 arm64: add support for kernel ASLR
This adds support for KASLR, based on entropy provided by
the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
of the address space (VA_BITS) and the page size, the entropy in the
virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
4 levels), with the sidenote that displacements that result in the kernel
image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
granule kernels, respectively) are not allowed, and will be rounded up to
an acceptable value.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
randomized independently from the core kernel. This makes it less likely
that the location of core kernel data structures can be determined by an
adversary, but causes all function calls from modules into the core kernel
to be resolved via entries in the module PLTs.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
randomized by choosing a page aligned 128 MB region inside the interval
[_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
entropy (depending on page size), independently of the kernel randomization,
but still guarantees that modules are within the range of relative branch
and jump instructions (with the caveat that, since the module region is
shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:27 +00:00
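
A sketch of the non-full module region choice (simplified with a modulo;
'seed' stands for the extracted entropy):

    /* any window start in [_etext - 128M, _stext) covers the kernel text */
    u64 module_range = SZ_128M - (u64)(_etext - _stext);
    u64 module_alloc_base =
        ((u64)_etext - SZ_128M + (seed % module_range)) & PAGE_MASK;
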
Ard Biesheuvel
1e48ef7fcc arm64: add support for building vmlinux as a relocatable PIE binary
This implements CONFIG_RELOCATABLE, which links the final vmlinux
image with a dynamic relocation section, allowing the early boot code
to perform a relocation to a different virtual address at runtime.

This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:27 +00:00
Ard Biesheuvel
6c94f27ac8 arm64: switch to relative exception tables
Instead of using absolute addresses for both the exception location
and the fixup, use offsets relative to the exception table entry values.
Not only does this cut the size of the exception table in half, it is
also a prerequisite for KASLR, since absolute exception table entries
are subject to dynamic relocation, which is incompatible with the sorting
of the exception table that occurs at build time.

This patch also introduces the _ASM_EXTABLE preprocessor macro (which
exists on x86 as well) and its _asm_extable assembly counterpart, as
shorthands to emit exception table entries.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:26 +00:00
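
A sketch of a relative entry and the emission macro (in the
assembly-string form used from inline asm):

    struct exception_table_entry {
        int insn, fixup;    /* offsets relative to each field's address */
    };

    #define _ASM_EXTABLE(from, to)                              \
        "   .pushsection __ex_table, \"a\"\n"                   \
        "   .align 3\n"                                         \
        "   .long (" #from " - .), (" #to " - .)\n"             \
        "   .popsection\n"
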
Ard Biesheuvel
a272858a3c extable: add support for relative extables to search and sort routines
This adds support to the generic search_extable() and sort_extable()
implementations for dealing with exception table entries whose fields
contain relative offsets rather than absolute addresses.

Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:26 +00:00
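
With relative entries, the absolute addresses are recovered by adding each
field's value to its own address; a sketch of the accessors used by the
search and sort code:

    static inline unsigned long
    ex_to_insn(const struct exception_table_entry *x)
    {
        return (unsigned long)&x->insn + x->insn;
    }

    static inline unsigned long
    ex_to_fixup(const struct exception_table_entry *x)
    {
        return (unsigned long)&x->fixup + x->fixup;
    }
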
Ard Biesheuvel
7b957b6e60 scripts/sortextable: add support for ET_DYN binaries
Add support to scripts/sortextable for handling relocatable (PIE)
executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding
support for the new type, no changes are needed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:26 +00:00
Ard Biesheuvel
4a2e034e5c arm64: make asm/elf.h available to asm files
This reshuffles some code in asm/elf.h and puts an #ifndef __ASSEMBLY__
guard around its C definitions so that the CPP defines can be used in asm
source files as well.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-24 14:57:25 +00:00
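
The resulting header follows the usual pattern (a sketch; the specific
definitions shown are illustrative):

    /* CPP constants: visible to both C and .S files */
    #define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)

    #ifndef __ASSEMBLY__

    typedef unsigned long elf_greg_t;
    /* ... remaining C-only definitions ... */

    #endif /* !__ASSEMBLY__ */
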