Commit Graph

83301 Commits

Author SHA1 Message Date
Steven Capper
809e660f43 ARM: 7775/1: mm: Remove do_sect_fault from LPAE code
For LPAE, do_sect_fault used to be invoked as the second level access
flag handler. When transparent huge pages were introduced for LPAE,
do_page_fault was used instead.

Unfortunately, do_sect_fault remains defined but unused in LPAE builds,
resulting in a compile warning.

This patch surrounds do_sect_fault with #ifndef CONFIG_ARM_LPAE to fix
this warning.
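
Roughly, the change takes the following shape (the handler body is shown
only indicatively):

#ifndef CONFIG_ARM_LPAE
static int
do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
	/* existing short-descriptor section fault handling, unchanged */
	do_bad_area(addr, fsr, regs);
	return 0;
}
#endif	/* CONFIG_ARM_LPAE */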

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-29 11:23:23 +01:00
Lorenzo Pieralisi
7604537bbb ARM: kernel: implement stack pointer save array through MPIDR hashing
The current implementation of cpu_{suspend}/cpu_{resume} relies on the MPIDR
to index the array of pointers where the context is saved and restored.
This approach works as long as the MPIDR can be considered a linear index,
so that the pointers array can simply be dereferenced using the
MPIDR[7:0] value.
On ARM multi-cluster systems, where the MPIDR may not be a linear index,
a mapping function must be applied to it so that it can be used to look up
the stack pointer array.

This patch adds code to the cpu_{suspend}/cpu_{resume} implementation
that relies on a shift-and-OR hashing method to map an MPIDR value to a
set of buckets precomputed at boot, so as to have a collision-free mapping
from MPIDR to context pointers.

The hashing algorithm must be simple, fast, and implementable with few
instructions, since in the cpu_resume path the mapping is carried out with
the MMU and I-cache off, hence code and data are fetched from DRAM with no
caching available. Simplicity is counterbalanced by a small increase of
dynamically allocated memory for the stack pointer buckets, which should
anyway be fairly limited on most systems.

Memory for the context pointers is allocated in an early_initcall, with
the size precomputed and stashed previously in kernel data structures.
The memory is allocated through kmalloc; this guarantees contiguous
physical addresses for the allocated buffer, which is fundamental to the
correct functioning of the resume mechanism, since it relies on the
context pointer array being a chunk of contiguous physical memory.
Virtual-to-physical address conversion for the context pointer array base
is carried out at boot to avoid fiddling with virt_to_phys conversions in
the cpu_resume path, which is quite fragile and should be optimized to
execute as few instructions as possible.
Virtual and physical context pointer base array addresses are stashed in a
struct that is accessible from assembly using values generated through the
asm-offsets.c mechanism.
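
A minimal sketch of what the allocation and stashing could look like (the
names sleep_save_sp, cpu_suspend_alloc_sp and mpidr_hash_size() are
illustrative rather than the exact symbols used by the patch):

struct sleep_save_sp {
	u32 *save_ptr_stash;		/* virtual base of the bucket array */
	phys_addr_t save_ptr_stash_phys;/* physical base, used with the MMU off */
};

static struct sleep_save_sp sleep_save_sp;

static int __init cpu_suspend_alloc_sp(void)
{
	void *ctx_ptr;

	/*
	 * One context pointer per hash bucket; kmalloc/kzalloc returns
	 * physically contiguous memory, which cpu_resume relies on.
	 */
	ctx_ptr = kzalloc(mpidr_hash_size() * sizeof(u32), GFP_KERNEL);
	if (!ctx_ptr)
		return -ENOMEM;

	sleep_save_sp.save_ptr_stash = ctx_ptr;
	sleep_save_sp.save_ptr_stash_phys = virt_to_phys(ctx_ptr);

	return 0;
}
early_initcall(cpu_suspend_alloc_sp);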

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
2013-06-20 11:24:11 +01:00
Lorenzo Pieralisi
8cf72172d7 ARM: kernel: build MPIDR hash function data structure
On ARM SMP systems, cores are identified by their MPIDR register.
The MPIDR guidelines in the ARM ARM do not strictly enforce an MPIDR
layout, only recommendations that, if followed, split the MPIDR on ARM
32-bit platforms into three affinity levels. In multi-cluster systems like
big.LITTLE, if the affinity guidelines are followed, the MPIDR can no
longer be considered an index. This means that the association between a
logical CPU in the kernel and the HW CPU identifier becomes somewhat more
complicated, requiring methods like hashing to associate a given MPIDR
with a logical CPU index, so that the look-up can be carried out in an
efficient and scalable way.

This patch provides a kernel function that, starting from the
cpu_logical_map, implements collision-free hashing of MPIDR values by
checking all significant bits of the MPIDR affinity level bitfields. The
hashing can then be carried out through bit shifting and ORing; the
resulting hash algorithm is a collision-free though not minimal hash that
can be executed with few assembly instructions. The MPIDR is filtered
through an MPIDR mask that is built by checking all bits that toggle in
the set of MPIDRs corresponding to possible CPUs. Bits that do not toggle
carry no information, so they do not contribute to the resulting hash.

Pseudo code:

/* check all bits that toggle, so they are required */
for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
	mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

/*
 * Build shifts to be applied to aff0, aff1, aff2 values to hash the mpidr
 * fls() returns the last bit set in a word, 0 if none
 * ffs() returns the first bit set in a word, 0 if none
 */
fs0 = mpidr_mask[7:0] ? ffs(mpidr_mask[7:0]) - 1 : 0;
fs1 = mpidr_mask[15:8] ? ffs(mpidr_mask[15:8]) - 1 : 0;
fs2 = mpidr_mask[23:16] ? ffs(mpidr_mask[23:16]) - 1 : 0;
ls0 = fls(mpidr_mask[7:0]);
ls1 = fls(mpidr_mask[15:8]);
ls2 = fls(mpidr_mask[23:16]);
bits0 = ls0 - fs0;
bits1 = ls1 - fs1;
bits2 = ls2 - fs2;
aff0_shift = fs0;
aff1_shift = 8 + fs1 - bits0;
aff2_shift = 16 + fs2 - (bits0 + bits1);
u32 hash(u32 mpidr) {
	u32 l0, l1, l2;
	u32 mpidr_masked = mpidr & mpidr_mask;
	l0 = mpidr_masked & 0xff;
	l1 = mpidr_masked & 0xff00;
	l2 = mpidr_masked & 0xff0000;
	return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
}
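
For instance, on a hypothetical two-cluster system with MPIDRs 0x000,
0x001, 0x100 and 0x101, the mask comes out as 0x101, giving fs0 = fs1 = 0,
bits0 = bits1 = 1, aff0_shift = 0, aff1_shift = 7 and aff2_shift = 14; the
four MPIDRs then hash to 0, 1, 2 and 3 respectively, i.e. a collision-free
table of four buckets.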

The hashing algorithm relies on the inherent properties set out in the ARM
ARM recommendations for the MPIDR. Exotic configurations, where for instance
the MPIDR values at a given affinity level have large holes, can end up
requiring big hash tables, since the compression of values achievable
through shifting is somewhat crippled when holes are present. The kernel
warns if the number of buckets of the resulting hash table exceeds the
number of possible CPUs by a factor of 4, which is a symptom of a very
sparse HW MPIDR configuration.

The hash algorithm is quite simple and can easily be implemented in assembly
code, to be used in code paths where the kernel virtual address space is
not set up (i.e. cpu_resume) and instruction and data fetches are strongly
ordered, so code must be compact and must carry out few data accesses.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
2013-06-20 11:22:56 +01:00
Russell King
fd8957a96d Merge branch 'for-rmk/arch-timer-cleanups' of git://linux-arm.org/linux-mr into devel-stable
Please pull these arch_timer cleanups I've been holding onto for a while.
They're the same as my last posting [1], but have been rebased to v3.10-rc3.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2013-May/170602.html
-- Mark Rutland
2013-06-18 20:12:56 +01:00
Russell King
3fbd55ec21 Merge branch 'for-rmk/lpae' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
Conflicts:
	arch/arm/kernel/smp.c

Please pull these miscellaneous LPAE fixes I've been collecting for a while
now for 3.11. They've been tested and reviewed by quite a few people, and most
of the patches are pretty trivial. -- Will Deacon.
2013-06-18 20:11:32 +01:00
Russell King
b3f288de7c Merge branch 'for-rmk/hugepages' of git://git.linaro.org/people/stevecapper/linux into devel-stable
These changes bring both HugeTLB support and Transparent HugePage
(THP) support to ARM.  Only long descriptors (LPAE) are supported
in this series.

The code has been tested on an Arndale board (Exynos 5250).
2013-06-18 20:05:48 +01:00
Russell King
04e71d72ab Merge branch 'ja-nommu-for-rmk-v2' of git://linux-arm.org/linux-ja into devel-stable
This includes the following series sent earlier to the list:
 - nommu-fixes
 - R7 Support
 - MPU support

I've left out the ARCH_MULTIPLATFORM/!MMU stuff that Arnd and I were
discussing today until we've reached a conclusion/that's had some more
review.

This is rebased (and re-tested) on your devel-stable branch because
otherwise there were going to be conflicts with Uwe's V7M work now that
you've merged that. I've included the fix for limiting MPU to CPU_V7.
2013-06-17 16:52:34 +01:00
Jonathan Austin
de8297765d ARM: mpu: Ensure that MPU depends on CPU_V7
The support for the MPU is currently implemented only for R-class
processors (PMSAv7/R). Since the merge of V7M support into the kernel it
is possible to select MPU support on V7M.

This patch ensures that, until MPU support for M-class processors is
implemented, the MPU can only be selected with R-class CPUs.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
2013-06-17 15:13:18 +01:00
Jonathan Austin
9dfc28b630 ARM: mpu: protect the vectors page with an MPU region
Without an MMU it is possible for userspace programs to start executing code
in places that they have no business executing. The MPU allows some level of
protection against this.

This patch protects the vectors page from access by userspace processes.
Userspace tasks that dereference a null pointer are already protected by an
svc at 0x0 that kills them. However, when tasks use an offset from a null
pointer (e.g. a function in a null struct) they miss this carefully placed svc
and enter the exception vectors in user mode, ending up in the kernel.

This patch causes programs that do this to receive a SEGV instead of happily
entering the kernel in user-mode, and hence avoid a 'Bad Mode' panic.

As part of this change it is necessary to make sigreturn happen via the
stack when there is not an sa_restorer function. This change is invisible to
userspace, and irrelevant to code compiled using a uClibc toolchain, which
always uses an sa_restorer function.

Because we don't get to remap the vectors in !MMU, the kuser helpers are
not at a defined location and hence aren't usable. This means we don't
need to worry about keeping them accessible from PL0.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
2013-06-17 15:13:18 +01:00
Jonathan Austin
801bb21c60 ARM: mpu: Allow enabling of the MPU via kconfig
Allows the user to select MPU support when compiling for ARM processors
that support the PMSAv7.

This ensures that CONFIG_SMP depends on the MPU in the case that no MMU
is present.

CONFIG_SMP_ON_UP is not implemented for nommu, so introduce an MMU
dependency there.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-17 15:13:03 +01:00
Jonathan Austin
eb08375ea6 ARM: mpu: add MPU initialisation for secondary cores
The MPU initialisation on the primary core is performed in two stages, one
minimal stage to ensure the CPU can boot and a second one after
sanity_check_meminfo. As the memory configuration is known by the time we
boot secondary cores, only a single step is necessary, provided the values
for DRSR are passed to the secondaries.

This patch implements this arrangement. The configuration generated for the
MPU regions is made available to the secondary core, which can then use the
asm MPU initialisation code to program a complete region configuration.

This is necessary for SMP configurations without an MMU, as the MPU
initialisation is the only way to ensure that memory is specified as
'shared'.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
2013-06-07 17:02:53 +01:00
Jonathan Austin
9a271567fe ARM: mpu: Complete initialisation of the MPU after reaching the C-world
Much like with the MMU, MPU initialisation is performed in two stages; the
first in the pre-C world and the 'real' initialisation during arch setup.

This patch wires in previously added MPU initialisation functions so that
the whole of memory is mapped with the appropriate region properties for
'normal' RAM (the appropriate properties depend on whether the system is
SMP).

Stub initialisation functions are added for the case where MPU support
is not configured into the kernel.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
2013-06-07 17:02:53 +01:00
Jonathan Austin
5ad7dcbe40 ARM: mpu: add MPU probe and initialisation functions in C
This patch adds new functions for probing and initialising the ARMv7
PMSA-compliant MPU.

These use the pre-defined and reserved MPU_PROBE_REGION for establishing
properties of the MPU, which is necessary because certain probe operations
require modifying region properties and reading back the results.

This patch also introduces a minimal sanity_check_meminfo_mpu function that
ensures the memory set-up passed to the kernel can be used in conjunction
with the MPU. The base address of a region must be aligned to the region
size, otherwise behavior is unpredictable, and region sizes can only be
specified as powers of two. To simplify satisfying these requirements,
this implementation currently enforces that all memory is contiguous from
PHYS_OFFSET, merging banks that are contiguous but passed in separately.

The functions are added in this patch but wired in to the boot process later
in the series.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
2013-06-07 17:02:52 +01:00
Jonathan Austin
67c9845bea ARM: mpu: add early bring-up code for the ARMv7 PMSA-compliant MPU
This patch adds initial support for using the MPU, which is necessary for
SMP operation on PMSAv7 processors because it is the only way to ensure
memory is shared. This is an initial patch and full SMP support is added
later in this series.

The setup of the MPU is performed in a way analogous to that for the MMU:
Very early initialisation before the C environment is brought up, followed
by a sanity check and more complete initialisation in C.

This patch provides the simplest possible memory region configuration:
MPU_PROBE_REGION: Reserved for probing MPU details, not enabled
MPU_BG_REGION: A 'background' region that specifies all memory strongly ordered
MPU_RAM_REGION: A single shared, cacheable, normal region for the valid RAM.

In this early initialisation code we simply map the whole of the address
space with the BG_REGION and (at least) the kernel with the RAM_REGION. The
MPU has region alignment constraints that require us to round past the end
of the kernel.

As region 2 has a higher priority than region 1, it overrides the
strongly-ordered behaviour for RAM only.
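
A rough sketch of what programming one such region involves, assuming the
usual PMSAv7 CP15 encodings for RGNR/DRBAR/DRACR/DRSR (the helper name is
illustrative):

static inline void mpu_setup_region(unsigned int num, unsigned long base,
				    unsigned long size_en, unsigned long acr)
{
	/* select the region, then program base, access control, size+enable */
	asm volatile("mcr p15, 0, %0, c6, c2, 0" : : "r" (num));	/* RGNR */
	asm volatile("mcr p15, 0, %0, c6, c1, 0" : : "r" (base));	/* DRBAR */
	asm volatile("mcr p15, 0, %0, c6, c1, 4" : : "r" (acr));	/* DRACR */
	asm volatile("mcr p15, 0, %0, c6, c1, 2" : : "r" (size_en));	/* DRSR */
	isb();
}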

Subsequent patches will add more complete initialisation from the C-world
and support for bringing up secondary CPUs.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
2013-06-07 17:02:51 +01:00
Jonathan Austin
a2b45b0da8 ARM: mpu: add header for MPU register layouts and region data
This commit adds definitions relevant to the ARM v7 PMSA compliant MPU.

The register layouts and region configuration data are made accessible to
asm as well as C code so that they can be used in early bring-up of the MPU.

The MPU region information structs assume that the properties for the I/D
side are the same, though the implementation could be trivially extended
for future platforms where this is no longer true.

The MPU_*_REGION defines are used for the basic, static MPU region setup.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-07 17:02:50 +01:00
Jonathan Austin
aca7e5920c ARM: mpu: add PMSA related registers and bitfields to existing headers
This patch adds the following definitions relevant to the PMSA:

Add SCTLR bit 17 (CR_BR, the Background Region bit) to the list of CR_*
bitfields. This bit determines whether to use the architecturally defined
memory map.

Add the MPUIR to the registers available through the read_cpuid macro. The
MPUIR is the MPU type register.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC:"Uwe Kleine-König" <u.kleine-koenig@pengutronix.de>
2013-06-07 17:02:49 +01:00
Jonathan Austin
ed18bdc875 ARM: vexpress: Add Cortex-R Series UART, selectable via DEBUG_LL
The Cortex-R series processors on Versatile Express have a different memory
map to the RS1 and CA9X4 tiles. Most of the platform difference can be
expressed in device-trees, but the UART definitions for DEBUG_LL cannot.

This patch defines the UART location for R-Series processors on
versatile-express, allowing low-level debug and output from the decompressor.
These definitions are selectable via Kconfig.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
CC: Pawel Moll <pawel.moll@arm.com>
2013-06-07 17:02:48 +01:00
Jonathan Austin
c90ad5c940 ARM: add Cortex-R7 Processor Info
This patch adds processor info for ARM Ltd. Cortex-R7.

The R7 has many similarities to the A9 and, though the ACTLR layout is not
identical, the bits associated with cache operations broadcasting and SMP
modes are the same for the A9, A5 and R7 (though in the A-class processors
the same bits toggle TLB-ops broadcasting as well as cache-ops).

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Stephen Boyd <sboyd@codeaurora.org>
2013-06-07 17:02:47 +01:00
Jonathan Austin
66567618f3 ARM: select CPU_CP15_MMU/MPU appropriately
Currently CPU_V7 selects CPU_CP15_MMU, however in the case of a V7 CPU
implementing the PMSA, such as the Cortex-R7, the CP15_MMU operations are
not available. Selecting CPU_CP15_MPU is appropriate in this case.

This patch makes CPU_CP15_MMU dependent on the use of the MMU, selecting
CPU_CP15_MPU for v7 processors when !MMU is chosen.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
2013-06-07 17:02:46 +01:00
Jonathan Austin
8d655d835b ARM: nommu: add stub local_flush_bp_all() for !CONFIG_MMU
Since the merging of Will's tlb-ops branch, specifically 89c7e4b8bb
(ARM: 7661/1: mm: perform explicit branch predictor maintenance when required),
building SMP without CONFIG_MMU has been broken.

The local_flush_bp_all function is only called for operations related to
changing the kernel's view of memory and ASID rollover - both of which are
irrelevant to an !MMU kernel.

This patch adds a stub local_flush_bp_all() function alongside the other
tlb maintenance stubs and restores the ability to build an SMP !MMU kernel.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
2013-06-07 17:02:46 +01:00
Jonathan Austin
8006b4d1a7 ARM: nommu: Don't build smp_tlb.c for !CONFIG_MMU
Without an MMU we don't need to do any TLB maintenance. Until the addition
of 93dc68876b (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181
(TLBI/DSB operations)) building the tlb maintenance ops in smp_tlb.c worked,
though none of the contents were used.

Since that commit, however, SMP NOMMU has not been able to build. This patch
restores that ability by making the building of smp_tlb.c dependent on MMU.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
2013-06-07 17:02:45 +01:00
Will Deacon
aa1aadc330 ARM: suspend: fix CPU suspend code for !CONFIG_MMU configurations
The ARM CPU suspend code can be selected even for a !CONFIG_MMU
configuration. The resulting kernel will not compile and, even if it did,
would access undefined co-processor registers when executing.

This patch fixes the v6 and v7 CPU suspend code for the nommu case.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> (commit_signer:1/3=33%)
CC: Santosh Shilimkar <santosh.shilimkar@ti.com> (commit_signer:1/3=33%)
CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
2013-06-07 17:02:44 +01:00
Will Deacon
c4a1f032ed ARM: nommu: do not initialise page tables in secondary_data structure
nommu systems do not require any page tables, so don't try to initialise
them when bringing up secondary cores.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-06-07 17:02:43 +01:00
Will Deacon
02ed1c7bba ARM: nommu: provide dummy cpu_switch_mm implementation
cpu_switch_mm is a logical nop on nommu systems, so define it as such
when !CONFIG_MMU.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-06-07 17:02:43 +01:00
Will Deacon
5c709e6998 ARM: nommu: define dummy TLB operations for nommu configurations
nommu platforms do not perform address translation and therefore clearly
don't have TLBs. However, some SMP code assumes the presence of the TLB
flushing routines and will therefore fail to compile for a nommu system.

This patch defines dummy local_* TLB operations and #defines
tlb_ops_need_broadcast() as 0, therefore causing the usual ARM SMP TLB
operations to call the local variants instead.
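
A minimal sketch of what such stubs can look like for !CONFIG_MMU (the
exact form in the patch may differ):

#ifndef CONFIG_MMU
#define local_flush_tlb_all()			do { } while (0)
#define local_flush_tlb_mm(mm)			do { } while (0)
#define local_flush_tlb_page(vma, addr)		do { } while (0)
#define local_flush_tlb_kernel_page(kaddr)	do { } while (0)
/* no TLBs, so nothing ever needs to be broadcast to other cores */
#define tlb_ops_need_broadcast()		0
#endif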

Signed-off-by: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
2013-06-07 17:02:42 +01:00
Will Deacon
01fafcab20 ARM: nommu: add entry point for secondary CPUs to head-nommu.S
This patch adds a secondary_startup entry point to head-nommu.S so that
we can boot secondary CPUs on an SMP nommu configuration.

Signed-off-by: Will Deacon <will.deacon@arm.com>
CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
CC: Nicolas Pitre <nico@linaro.org>
2013-06-07 17:02:41 +01:00
Marc Zyngier
3f71be237c ARM: arch_timer: stop virtual timer when booted in HYP mode
When booting the kernel, a bootloader could have left the virtual
timer ticking away, potentially generating interrupts. This could
be troublesome if the user of the virtual timer is not careful
when enabling the interrupt.

In order to avoid any surprise, stop the virtual timer from
interrupting us when booted in HYP mode, as we'll use the physical
timer in this case.

Reported-by: Giridhar Maruthy <giridhar.m@samsung.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Dave Martin <dave.martin@linaro.org>
2013-06-07 10:20:29 +01:00
Mark Rutland
fb521a0da1 arm: fix up ARM_ARCH_TIMER selects
In 8a4da6e: "arm: arch_timer: move core to drivers/clocksource", the
selection of ARM_ARCH_TIMER was indirected via HAVE_ARM_ARCH_TIMER,
though mach-exynos's selection of ARM_ARCH_TIMER was missed, and since
then mach-shmobile, mach-tegra, and mach-virt have begun selecting
ARM_ARCH_TIMER. This can lead to architected timer support erroneously
appearing to not be selected in menuconfig.

This patch fixes up the Kconfigs for those platforms to select
HAVE_ARM_ARCH_TIMER.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
2013-06-07 10:20:28 +01:00
Mark Rutland
0d651e4e65 clocksource: arch_timer: use virtual counters
Switching between reading the virtual or physical counters is
problematic, as some core code wants a view of time before we're fully
set up. Using a function pointer and switching the source after the
first read can make time appear to go backwards, and having a check in
the read function is an unfortunate block on what we want to be a fast
path.

Instead, this patch makes us always use the virtual counters. If we're a
guest, or don't have hyp mode, we'll use the virtual timers, and as such
don't care about CNTVOFF as long as it doesn't change in such a way as
to make time appear to travel backwards. As the guest will use the
virtual timers, a (potential) KVM host must use the physical timers
(which can wake up the host even if they fire while a guest is
executing), and hence a host must have CNTVOFF set to zero so as to have
a consistent view of time between the physical timers and virtual
counters.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Rob Herring <rob.herring@calxeda.com>
2013-06-07 10:20:28 +01:00
Mark Rutland
f793c23ebb ARM: KVM: arch_timers: zero CNTVOFF upon return to host
To use the virtual counters from the host, we need to ensure that
CNTVOFF doesn't change unexpectedly. When we change to a guest, we
replace the host's CNTVOFF, but we don't restore it when returning to
the host.

As the host sets CNTVOFF to zero, and never changes it, we can simply
zero CNTVOFF when returning to the host. This patch adds said zeroing to
the return to host path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-06-07 10:20:27 +01:00
Marc Zyngier
0af0b189ab ARM: hyp: initialize CNTVOFF to zero
In order to be able to use the virtual counter in a safe way,
make sure it is initialized to zero before dropping to SVC.
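
Conceptually this amounts to the following sketch, using the 64-bit CP15
access for CNTVOFF (the helper name is illustrative and the write is only
permitted from HYP mode):

static inline void cntvoff_zero(void)
{
	u64 zero = 0;

	/* CNTVOFF: MCRR p15, 4, <lo>, <hi>, c14 */
	asm volatile("mcrr p15, 4, %Q0, %R0, c14" : : "r" (zero));
	isb();
}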

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Dave Martin <dave.martin@linaro.org>
2013-06-07 10:20:27 +01:00
Arnd Bergmann
fdeb94b5dc ARM: 7745/1: psci: fix building without HOTPLUG_CPU
The cpu_die field in smp_operations is not valid without CONFIG_HOTPLUG_CPU,
so we must enclose it in #ifdef, but at least that lets us remove
two other lines.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-05 23:36:22 +01:00
Catalin Marinas
8d96250700 ARM: mm: Transparent huge page support for LPAE systems.
The patch adds support for THP (transparent huge pages) to LPAE
systems. When this feature is enabled, the kernel tries to map
anonymous pages as 2MB sections where possible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of
PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h,
added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-04 16:52:38 +01:00
Catalin Marinas
1355e2a6eb ARM: mm: HugeTLB support for LPAE systems.
This patch adds support for hugetlbfs based on the x86 implementation.
It allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt
for usage). The 64K pages configuration is not supported (section size
is 512MB in this case).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants replace numbers in places.
Split up into multiple files, to simplify future non-LPAE support,
removed huge_pmd_share code, as this is very rarely executed,
Added PROT_NONE support].
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-04 16:52:37 +01:00
Steve Capper
0b19f93351 ARM: mm: Add support for flushing HugeTLB pages.
On ARM we use the __flush_dcache_page function to flush the dcache
of pages when needed; usually when the PG_dcache_clean bit is unset
and we are setting a PTE.

A HugeTLB page is represented as a compound page consisting of an
array of pages. Thus to flush the dcache of a HugeTLB page, one must
flush more than a single page.

This patch modifies __flush_dcache_page such that all constituent
pages of a HugeTLB page are flushed.
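
One way to express this is the following sketch (the patch may equally
flush the whole compound area with a single call):

static void __flush_dcache_page_sketch(struct page *page)
{
	unsigned int i, nr = 1 << compound_order(page);

	/* a huge page is a compound page: flush every constituent 4K page */
	for (i = 0; i < nr; i++)
		__cpuc_flush_dcache_area(page_address(page + i), PAGE_SIZE);
}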

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-04 16:52:37 +01:00
Steve Capper
dde1b65110 ARM: mm: correct pte_same behaviour for LPAE.
For 3 levels of paging the PTE_EXT_NG bit will be set for user
address ptes that are written to a page table but not for ptes
created with mk_pte.

This can cause some comparison tests made by pte_same to fail
spuriously and lead to other problems.

To correct this behaviour, we mask off PTE_EXT_NG for any pte that
is present before running the comparison.
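
A sketch of the resulting comparison (the helper name is illustrative):

static inline pteval_t pte_cmp_val(pte_t pte)
{
	pteval_t val = pte_val(pte);

	/*
	 * nG is only set once a pte has been written to a page table,
	 * so ignore it for present ptes when comparing.
	 */
	return pte_present(pte) ? val & ~PTE_EXT_NG : val;
}

#define __HAVE_ARCH_PTE_SAME
#define pte_same(a, b)	(pte_cmp_val(a) == pte_cmp_val(b))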

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-04 16:52:37 +01:00
Will Deacon
a469abd0f8 ARM: elf: add new hwcap for identifying atomic ldrd/strd instructions
CPUs implementing LPAE have atomic ldrd/strd instructions, meaning that
userspace software can avoid having to use the exclusive variants of
these instructions if they wish.

This patch advertises the atomicity of these instructions via the
hwcaps, so userspace can detect this CPU feature.
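
From userspace the feature can be tested via the ELF auxiliary vector; a
small sketch, assuming the new bit is exported as HWCAP_LPAE:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_LPAE
#define HWCAP_LPAE	(1 << 20)	/* assumed value of the new hwcap bit */
#endif

int main(void)
{
	if (getauxval(AT_HWCAP) & HWCAP_LPAE)
		printf("ldrd/strd are single-copy atomic on this CPU\n");
	else
		printf("use ldrexd/strexd for atomic 64-bit accesses\n");
	return 0;
}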

Reported-by: Vladimir Danushevsky <vladimir.danushevsky@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:34 +01:00
Will Deacon
e38a517578 ARM: lpae: fix definition of PTE_HWTABLE_PTRS
For 2-level page tables, PTE_HWTABLE_PTRS describes the offset between
Linux PTEs and hardware PTEs. On LPAE, there is no distinction (since
we have 64-bit descriptors with plenty of space) so PTE_HWTABLE_PTRS
should be 0. Unfortunately, it is wrongly defined as PTRS_PER_PTE,
meaning that current pte table flushing is off by a page. Luckily,
all current LPAE implementations are SMP, so the hardware walker can
snoop L1.

This patch fixes the broken definition.
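
In terms of definitions, the fix amounts to roughly the following:

/* pgtable-2level.h: Linux ptes precede the hardware ptes in the same page */
#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)

/* pgtable-3level.h (LPAE): single 64-bit descriptors, no separate hw table */
#define PTE_HWTABLE_PTRS	(0)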

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:33 +01:00
Cyril Chemparathy
28d4bf7a29 ARM: mm: clean up membank size limit checks
This patch cleans up the highmem sanity check code by simplifying the range
checks with a pre-calculated size_limit.  This patch should otherwise have no
functional impact on behavior.

This patch also removes a redundant (bank->start < vmalloc_limit) check, since
this is already covered by the !highmem condition.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:30 +01:00
Cyril Chemparathy
adf2e9fda3 ARM: mm: cleanup checks for membank overlap with vmalloc area
On Keystone platforms, physical memory is entirely outside the 32-bit
addressable range.  Therefore, the (bank->start > ULONG_MAX) check below marks
the entire system memory as highmem, and this causes unpleasantness all over.

This patch eliminates the extra bank start check (against ULONG_MAX) by
checking bank->start against the physical address corresponding to vmalloc_min
instead.

In the process, this patch also cleans up parts of the highmem sanity check
code by removing what has now become a redundant check for banks that entirely
overlap with the vmalloc range.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:26 +01:00
Cyril Chemparathy
5b20c5b2f0 ARM: fix type of PHYS_PFN_OFFSET to unsigned long
On LPAE machines, PHYS_OFFSET evaluates to a phys_addr_t and this type is
inherited by the PHYS_PFN_OFFSET definition as well.  Consequently, the kernel
build emits warnings of the form:

init/main.c: In function 'start_kernel':
init/main.c:588:7: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'phys_addr_t' [-Wformat]

This patch fixes this warning by pinning down the PFN type to unsigned long.
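
The fix boils down to a cast in the definition, roughly:

/* before: inherits phys_addr_t (64-bit under LPAE) from PHYS_OFFSET */
#define PHYS_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)

/* after: a PFN always fits in unsigned long, so pin the type down */
#define PHYS_PFN_OFFSET	((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))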

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:22 +01:00
Cyril Chemparathy
82f667046e ARM: mm: use physical addresses in highmem sanity checks
This patch modifies the highmem sanity checking code to use physical addresses
instead.  This change eliminates the wrap-around problems associated with the
original virtual address based checks, and this simplifies the code a bit.

The one constraint imposed here is that low physical memory must be mapped in
a monotonically increasing fashion if there are multiple banks of memory,
i.e., x < y must imply pa(x) < pa(y).

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:18 +01:00
Cyril Chemparathy
4756dcbfd3 ARM: LPAE: accommodate >32-bit addresses for page table base
This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems.  This allows for up to
38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:15 +01:00
Cyril Chemparathy
a7fbc0d62a ARM: LPAE: factor out T1SZ and TTBR1 computations
This patch moves the TTBR1 offset calculation and the T1SZ calculation out
of the TTB setup assembly code.  This should not affect functionality in
any way, but improves code readability as well as readability of subsequent
patches in this series.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:11 +01:00
Cyril Chemparathy
1fc84ae84b ARM: LPAE: use 64-bit accessors for TTBR registers
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and
the LPAE version of cpu_set_reserved_ttbr0() to use these instead.

In the process, we also fix these functions to correctly handle cases
where the physical address lies beyond the 4G limit of 32-bit addressing.
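
A sketch of such accessors for TTBR0, using the 64-bit MCRR/MRRC forms
defined by LPAE (the function names are illustrative):

static inline u64 ttbr0_read_64(void)
{
	u64 ttbr;

	asm volatile("mrrc p15, 0, %Q0, %R0, c2" : "=r" (ttbr));
	return ttbr;
}

static inline void ttbr0_write_64(u64 ttbr)
{
	asm volatile("mcrr p15, 0, %Q0, %R0, c2" : : "r" (ttbr));
	isb();
}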

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:07 +01:00
Cyril Chemparathy
13f659b0f3 ARM: LPAE: use phys_addr_t in switch_mm()
This patch modifies the switch_mm() processor functions to use phys_addr_t.
On LPAE systems, we now honor the upper 32-bits of the physical address that
is being passed in, and program these into TTBR as expected.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
[will: fixed up conflict in 3-level switch_mm with big-endian changes]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:03 +01:00
Vitaly Andrianov
de22cc6e33 ARM: LPAE: use phys_addr_t for initrd location
This patch fixes the initrd setup code to use phys_addr_t instead of assuming
32-bit addressing.  Without this we cannot boot on systems where initrd is
located above the 4G physical address limit.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:01:59 +01:00
Vitaly Andrianov
56bc628666 ARM: LPAE: use phys_addr_t in free_memmap()
The free_memmap() function was mistakenly using the unsigned long type to
represent physical addresses.  This breaks on PAE systems where memory could
be placed above the 32-bit addressable limit.

This patch fixes this function to properly use phys_addr_t instead.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:01:56 +01:00
Vitaly Andrianov
20d6956d8c ARM: LPAE: use phys_addr_t in alloc_init_pud()
This patch fixes the alloc_init_pud() function to use phys_addr_t instead of
unsigned long when passing in the phys argument.

This is an extension to commit 97092e0c56 (ARM:
pgtable: use phys_addr_t for physical addresses), which applied similar changes
elsewhere in the ARM memory management code.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:01:52 +01:00
Cyril Chemparathy
926edcc747 ARM: LPAE: use signed arithmetic for mask definitions
This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing
unsigned long math truncates the mask at 32 bits.  This clearly does bad
things on PAE systems.

This patch fixes this problem by defining these masks as signed quantities.
We then rely on sign extension to do the right thing.
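
The effect can be demonstrated with a small standalone program, using
uint32_t and int32_t to stand in for the 32-bit unsigned long and the
signed definition respectively:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t phys = 0x8FFE00000ULL;				/* address above 4GB */
	uint32_t mask_unsigned = ~((UINT32_C(1) << 21) - 1);	/* old: 0xffe00000 */
	int32_t  mask_signed   = ~((INT32_C(1)  << 21) - 1);	/* new: sign-extends */

	/* the unsigned mask zero-extends and throws away bits above 32 */
	printf("unsigned: %#llx\n", (unsigned long long)(phys & mask_unsigned));
	/* the signed mask sign-extends to 0xffffffffffe00000 and keeps them */
	printf("signed:   %#llx\n", (unsigned long long)(phys & mask_signed));
	return 0;
}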

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:01:30 +01:00