The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. The fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bug that can be created
by improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. It also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text") that also get removed here.
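As a minimal illustration (the function name below is hypothetical, not taken
from this patch), the conversion simply drops the annotation:

/* before */
static int __cpuinit foo_cpu_notify(struct notifier_block *nb,
                                    unsigned long action, void *hcpu);
/* after */
static int foo_cpu_notify(struct notifier_block *nb,
                          unsigned long action, void *hcpu);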
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Pull ARM fixes from Russell King:
"A few fixes for ARM, mostly just one liners with the exception of the
missing section specification. We decided not to rely on .previous to
fix this but to explicitly state the section we want the code to be
in."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7778/1: smp_twd: twd_update_frequency need be run on all online CPUs
ARM: 7782/1: Kconfig: Let ARM_ERRATA_364296 not depend on CONFIG_SMP
ARM: mm: fix boot on SA1110 Assabet
ARM: 7781/1: mmu: Add debug_ll_io_init() mappings to early mappings
ARM: 7780/1: add missing linker section markup to head-common.S
Since all architectures have been converted to use vm_unmapped_area(),
there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 83db0384 (mm/ARM: use common help functions to free reserved
pages) broke booting on the Assabet by trying to convert a PFN to
a virtual address using the __va() macro. This macro takes the
physical address, not a PFN. Fix this.
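As a minimal sketch of the general pattern (not the literal diff of this fix),
a PFN has to be converted to a physical address before __va() is applied:

/* wrong: treats a PFN as if it were a physical address
 *      void *vaddr = __va(pfn);
 * correct: convert the PFN to a physical address first */
void *vaddr = __va(__pfn_to_phys(pfn));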
Cc: <stable@vger.kernel.org> # 3.10
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Failure to add the mapping created in debug_ll_io_init() can lead
to the BUG_ON() triggering in lib/ioremap.c:27 if the static
virtual address decided for the debug_ll mapping overlaps with
another mapping that is created later. This happens because the
generic ioremap code has no idea there is a mapping there and it
tries to place a mapping in the same location and blows up when
it sees that there is a pte already present.
kernel BUG at lib/ioremap.c:27!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-rc2-00042-g2af0c67-dirty #316
task: ef088000 ti: ef082000 task.ti: ef082000
PC is at ioremap_page_range+0x16c/0x198
LR is at ioremap_page_range+0xf0/0x198
pc : [<c04cb874>] lr : [<c04cb7f8>] psr: 20000113
sp : ef083e78 ip : af140000 fp : ef083ebc
r10: ef7fc100 r9 : ef7fc104 r8 : 000af174
r7 : 00000647 r6 : beffffff r5 : f004c000 r4 : f0040000
r3 : af173417 r2 : 16440653 r1 : af173e07 r0 : ef7fc8fc
Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment kernel
Control: 10c5787d Table: 8020406a DAC: 00000015
Process swapper/0 (pid: 1, stack limit = 0xef082238)
Stack: (0xef083e78 to 0xef084000)
3e60: 00040000 ef083eec
3e80: bf134000 f004bfff c0207c00 f004c000 c02fc120 f000c000 c15e7800 00040000
3ea0: ef083eec 00000647 c098ba9c c0953544 ef083edc ef083ec0 c021b82c c04cb714
3ec0: c09cdc50 00000040 ef0f1e00 ef1003c0 ef083f14 ef083ee0 c09535bc c021b7bc
3ee0: c0953544 c04d0c6c c094e2cc c1600be4 c07440c4 c09a6888 00000002 c0a15f00
3f00: ef082000 00000000 ef083f54 ef083f18 c0208728 c0953550 00000002 c1600bfc
3f20: c08e3fac c0839918 ef083f54 c1600b80 c09a6888 c0a15f00 0000008b c094e2cc
3f40: c098ba9c c098bab8 ef083f94 ef083f58 c094ea0c c020865c 00000002 00000002
3f60: c094e2cc 00000000 c025b674 00000000 c06ff860 00000000 00000000 00000000
3f80: 00000000 00000000 ef083fac ef083f98 c06ff878 c094e910 00000000 00000000
3fa0: 00000000 ef083fb0 c020efe8 c06ff86c 00000000 00000000 00000000 00000000
3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
3fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 c0595108
[<c04cb874>] (ioremap_page_range+0x16c/0x198) from [<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4)
[<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4) from [<c09535bc>] (atomic_pool_init+0x78/0x128)
[<c09535bc>] (atomic_pool_init+0x78/0x128) from [<c0208728>] (do_one_initcall+0xd8/0x198)
[<c0208728>] (do_one_initcall+0xd8/0x198) from [<c094ea0c>] (kernel_init_freeable+0x108/0x1d0)
[<c094ea0c>] (kernel_init_freeable+0x108/0x1d0) from [<c06ff878>] (kernel_init+0x18/0xf4)
[<c06ff878>] (kernel_init+0x18/0xf4) from [<c020efe8>] (ret_from_fork+0x14/0x20)
Code: e50b0040 ebf54b2f e51b0040 eaffffee (e7f001f2)
Fix it by telling generic layers about the static mapping via
iotable_init(). This also has the nice side effect of letting
you see the mapping in procfs' vmallocinfo file.
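As a hedged sketch of what such a registration can look like (the symbol names
for the debug UART addresses below are illustrative, not from the patch),
iotable_init() is handed a static map_desc describing the fixed mapping:

static struct map_desc debug_ll_desc __initdata = {
	.virtual = DEBUG_LL_VIRT_BASE,	/* assumed fixed debug VA */
	.pfn     = __phys_to_pfn(DEBUG_LL_PHYS_BASE),
	.length  = SZ_4K,
	.type    = MT_DEVICE,
};

void __init debug_ll_io_init(void)
{
	iotable_init(&debug_ll_desc, 1);
}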
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM DMA mapping updates from Marek Szyprowski:
"This contains important bugfixes and an update for IOMMU integration
support for ARM architecture"
* 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
ARM: dma: Drop __GFP_COMP for iommu dma memory allocations
ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach
ARM: dma-mapping: convert DMA direction into IOMMU protection attributes
ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool
Prepare for removing num_physpages and simplify mem_init().
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Concentrate the code that modifies totalram_pages in the mm core, so the
arch memory initialization code doesn't need to take care of it. With
these changes applied, only the following functions from the mm core
modify the global variable totalram_pages: free_bootmem_late(),
free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count().
With this patch applied, it will be much easier for us to keep
totalram_pages and zone->managed_pages consistent.
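As a one-line sketch of the direction (not a quote from the patch), arch code
is expected to go through the mm core helper rather than touching the counter
directly:

/* instead of: totalram_pages += pages; */
adjust_managed_page_count(page, pages);	/* updates zone->managed_pages and totalram_pages */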
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Address more review comments from the last round of code review.
1) Enhance free_reserved_area() to support poisoning freed memory with
pattern '0'. This could be used to get rid of poison_init_mem()
on ARM64.
2) A previous patch disabled memory poisoning for initmem on s390
by mistake, so restore the original behavior.
3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change the signature of free_reserved_area() according to Russell King's
suggestion to fix the following build warnings:
arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
^
In file included from include/linux/mman.h:4:0,
from arch/arm/mm/init.c:15:
include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
mm/page_alloc.c: In function 'free_reserved_area':
>> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
In file included from arch/mips/include/asm/page.h:49:0,
from include/linux/mmzone.h:20,
from include/linux/gfp.h:4,
from include/linux/mm.h:8,
from mm/page_alloc.c:18:
arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
mm/page_alloc.c: In function 'free_area_init_nodes':
mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]
Also address some minor code review comments.
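For reference, a sketch of the reworked prototype implied by the warnings
above (the exact declaration may differ slightly):

extern unsigned long free_reserved_area(void *start, void *end,
					int poison, char *s);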
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull ARM updates from Russell King:
"This contains the usual updates from other people (listed below) and
the usual random muddle of miscellaneous ARM updates which cover some
low priority bug fixes and performance improvements.
I've started to put the pull request wording into the merge commits,
which are:
- NoMMU stuff:
This includes the following series sent earlier to the list:
- nommu-fixes
- R7 Support
- MPU support
I've left out the ARCH_MULTIPLATFORM/!MMU stuff that Arnd and I
were discussing today until we've reached a conclusion/that's had
some more review.
This is rebased (and re-tested) on your devel-stable branch because
otherwise there were going to be conflicts with Uwe's V7M work now
that you've merged that. I've included the fix for limiting MPU to
CPU_V7.
- Huge page support
These changes bring both HugeTLB support and Transparent HugePage
(THP) support to ARM. Only long descriptors (LPAE) are supported
in this series.
The code has been tested on an Arndale board (Exynos 5250).
- LPAE updates
Please pull these miscellaneous LPAE fixes I've been collecting for
a while now for 3.11. They've been tested and reviewed by quite a
few people, and most of the patches are pretty trivial. -- Will Deacon.
- arch_timer cleanups
Please pull these arch_timer cleanups I've been holding onto for a
while. They're the same as my last posting, but have been rebased
to v3.10-rc3.
- mpidr linearisation (multiprocessor id register - identifies which
CPU number we are in the system)
This patch series implements MPIDR linearization through a
simple hashing algorithm and updates the current cpu_{suspend}/{resume}
code to use the newly created hash structures to retrieve context
pointers. It represents a stepping stone for the implementation of
power management code on forthcoming multi-cluster ARM systems.
It has been tested on TC2 (dual cluster A15xA7 system), iMX6q,
OMAP4 and Tegra, with processors hitting low-power states requiring
warm-boot resume through the cpu_resume code path"
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
ARM: 7775/1: mm: Remove do_sect_fault from LPAE code
ARM: 7777/1: Avoid extra calls to the C compiler
ARM: 7774/1: Fix dtb dependency to use order-only prerequisites
ARM: 7770/1: remove residual ARMv2 support from decompressor
ARM: 7769/1: Cortex-A15: fix erratum 798181 implementation
ARM: 7768/1: prevent risks of out-of-bound access in ASID allocator
ARM: 7767/1: let the ASID allocator handle suspended animation
ARM: 7766/1: versatile: don't mark pen as __INIT
ARM: 7765/1: perf: Record the user-mode PC in the call chain.
ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork
ARM: kernel: implement stack pointer save array through MPIDR hashing
ARM: kernel: build MPIDR hash function data structure
ARM: mpu: Ensure that MPU depends on CPU_V7
ARM: mpu: protect the vectors page with an MPU region
ARM: mpu: Allow enabling of the MPU via kconfig
ARM: 7758/1: introduce config HAS_BANDGAP
ARM: 7757/1: mm: don't flush icache in switch_mm with hardware broadcasting
ARM: 7751/1: zImage: don't overwrite ourself with a page table
ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock
ARM: 7748/1: oabi: handle faults when loading swi instruction from userspace
...
These changes are all to SoC-specific code; a total of 33 branches on
17 platforms were pulled into this. Like last time, Renesas sh-mobile
is now the platform with the most changes, followed by OMAP and EXYNOS.
Two new platforms, TI Keystone and Rockchips RK3xxx are added in
this branch, both containing almost no platform specific code at all,
since they are using generic subsystem interfaces for clocks, pinctrl,
interrupts etc. The device drivers are getting merged through the
respective subsystem maintainer trees.
One more SoC (u300) is now multiplatform capable and several others
(shmobile, exynos, msm, integrator, kirkwood, clps711x) are moving
towards that goal with this series but need more work.
Also noteworthy is the work on PCI here, which is traditionally part of
the SoC specific code. With the changes done by Thomas Petazzoni, we can
now more easily have PCI host controller drivers as loadable modules and
keep them separate from the platform code in drivers/pci/host. This has
already led to the discovery that three platforms (exynos, spear and imx)
are actually using an identical PCIe host controller and will be able
to share a driver once support for spear and imx is added.
Conflicts:
* asm/glue-proc.h has one CPU type getting added that conflicts
with another addition in 3.10-rc7
* Simple context changes in arch/arm/Makefile and arch/arm/Kconfig
Merge tag 'soc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC specific changes from Arnd Bergmann:
"These changes are all to SoC-specific code, a total of 33 branches on
17 platforms were pulled into this. Like last time, Renesas sh-mobile
is now the platform with the most changes, followed by OMAP and
EXYNOS.
Two new platforms, TI Keystone and Rockchips RK3xxx are added in this
branch, both containing almost no platform specific code at all, since
they are using generic subsystem interfaces for clocks, pinctrl,
interrupts etc. The device drivers are getting merged through the
respective subsystem maintainer trees.
One more SoC (u300) is now multiplatform capable and several others
(shmobile, exynos, msm, integrator, kirkwood, clps711x) are moving
towards that goal with this series but need more work.
Also noteworthy is the work on PCI here, which is traditionally part
of the SoC specific code. With the changes done by Thomas Petazzoni,
we can now more easily have PCI host controller drivers as loadable
modules and keep them separate from the platform code in
drivers/pci/host. This has already led to the discovery that three
platforms (exynos, spear and imx) are actually using an identical PCIe
host controller and will be able to share a driver once support for
spear and imx is added."
* tag 'soc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (480 commits)
ARM: integrator: let pciv3 use mem/premem from device tree
ARM: integrator: set local side PCI addresses right
ARM: dts: Add pcie controller node for exynos5440-ssdk5440
ARM: dts: Add pcie controller node for Samsung EXYNOS5440 SoC
ARM: EXYNOS: Enable PCIe support for Exynos5440
pci: Add PCIe driver for Samsung Exynos
ARM: OMAP5: voltagedomain data: remove temporary OMAP4 voltage data
ARM: keystone: Move CPU bringup code to dedicated asm file
ARM: multiplatform: always pick one CPU type
ARM: imx: select syscon for IMX6SL
ARM: keystone: select ARM_ERRATA_798181 only for SMP
ARM: imx: Synertronixx scb9328 needs to select SOC_IMX1
ARM: OMAP2+: AM43x: resolve SMP related build error
dmaengine: edma: enable build for AM33XX
ARM: edma: Add EDMA crossbar event mux support
ARM: edma: Add DT and runtime PM support to the private EDMA API
dmaengine: edma: Add TI EDMA device tree binding
arm: add basic support for Rockchip RK3066a boards
arm: add debug uarts for rockchip rk29xx and rk3xxx series
arm: Add basic clocks for Rockchip rk3066a SoCs
...
This contains cleanups as preparation for other branches adding new
features; we pulled 16 branches for 9 platforms into this one.
Most notable here is the removal of support for ATAGS based OMAP4
systems. Since all OMAP4 machines are fully functional with DT based
booting in 3.10, we can remove a lot of code here.
Also noteworthy is Maxime Ripard's cleanup of the machine descriptors,
which means we need no machine descriptors in a lot more cases and
can boot additional machines by just having the respective device
drivers enabled.
Merge tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC cleanups from Arnd Bergmann:
"This contains cleanups as preparation for other branches adding new
features, we pulled 16 branches for 9 platforms into this one.
Most notable here is the removal of support for ATAGS based OMAP4
systems. Since all OMAP4 machines are fully functional with DT based
booting in 3.10, we can remove a lot of code here.
Also noteworthy is Maxime Ripard's cleanup of the machine descriptors,
which means we need no machine descriptors in a lot more cases and can
boot additional machines by just having the respective device drivers
enabled."
* tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (76 commits)
ARM: picoxcell: remove .nr_irqs reference
ARM: s5p64x0: avoid build warning for uncompress.h
ARM: SAMSUNG: Remove unused plat/regs-watchdog.h header
ARM: SAMSUNG: Remove legacy watchdog reset code
ARM: SAMSUNG: Let platforms use the new watchdog reset driver
ARM: SAMSUNG: Add watchdog reset driver
ARM: SAMSUNG: Use local definitions of watchdog registers
watchdog: s3c2410_wdt: Use local register definitions
ARM: S5P64X0: Use common uncompress.h part for plat-samsung
ARM: SAMSUNG: Consolidate uncompress subroutine
ARM: at91: drop rm9200dk board support
ARM: dts: msm: Fix merge resolution
ARM: OMAP1: Remove dma.h
ARM: OMAP1: Remove legacy irda.h and irda setup from board files
ARM: OMAP1: Remove duplicated DMA channel definitions
ARM: OMAP1: Remove McBSP DMA channel definitions
ARM: OMAP2+: Remove dma.h
ARM: OMAP2+: hwmod: Remove remaining DMA channel definitions
ARM: OMAP2+: Remove duplicated DMA channel definitions
ARM: OMAP2+: Remove AES crypto device DMA channel definitions
...
For LPAE, do_sect_fault used to be invoked as the second level access
flag handler. When transparent huge pages were introduced for LPAE,
do_page_fault was used instead.
Unfortunately, do_sect_fault remains defined but unused in LPAE code,
resulting in a compile warning.
This patch surrounds do_sect_fault with #ifndef CONFIG_ARM_LPAE to fix
this warning.
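A minimal sketch of the shape of the fix (the body shown is assumed, not
quoted from the patch):

#ifndef CONFIG_ARM_LPAE
static void do_sect_fault(unsigned long addr, unsigned int fsr,
			  struct pt_regs *regs)
{
	do_bad_area(addr, fsr, regs);
}
#endif /* CONFIG_ARM_LPAE */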
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
__iommu_alloc_buffer wants to split pages after allocation in order to
reduce the memory footprint. This does not work well with __GFP_COMP
pages, so drop this flag before allocation.
One failure example is snd_malloc_dev_pages calling dma_alloc_coherent
with __GFP_COMP.
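A one-line sketch of the change described above (variable name assumed):

gfp &= ~__GFP_COMP;	/* the pages are split up after allocation anyway */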
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
It is common for one sg to include many pages, so mark all these
pages as clean to avoid unnecessary flushing on them in
set_pte_at() or update_mmu_cache().
The patch might improve the loading performance of application code a bit.
With the test code below, which reads a file (~1GByte) from a USB mass
storage disk into a buffer created with mmap(PROT_READ | PROT_EXEC) on a
Pandaboard, an average improvement of ~1% can be observed with the patch
over 10 runs.
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

unsigned int sum = 0;

static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
{
	return (tv2->tv_sec - tv1->tv_sec) * 1000000 +
	       (tv2->tv_usec - tv1->tv_usec);
}

int main(int argc, char *argv[])
{
	char *mbuffer;
	int fd;
	int i;
	unsigned long page_size, size;
	struct stat stat;
	struct timeval t1, t2;

	page_size = getpagesize();
	fd = open(argv[1], O_RDONLY);
	assert(fd >= 0);
	fstat(fd, &stat);
	size = stat.st_size;
	printf("%s: file %s, file size %lu, page size %lu\n", argv[0],
	       argv[1], size, page_size);
	gettimeofday(&t1, NULL);
	mbuffer = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
	assert(mbuffer != MAP_FAILED);
	/* touch one byte per page so every page is faulted in and read */
	for (i = 0 ; i < size ; i += page_size)
		sum += mbuffer[i];
	munmap(mbuffer, size);
	gettimeofday(&t2, NULL);
	printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));
	close(fd);
	return 0;
}
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
The current code only clobbers a local variable, so the device is left
with a stale mapping pointer.
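A minimal sketch of the fix, as described in the subject line: clear the
per-device pointer itself, not a local copy of it:

dev->archdata.mapping = NULL;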
Cc: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
IOMMU mappings take a prot parameter, identifying the protection bits
to enforce on the newly created mapping (READ or WRITE). The ARM
dma-mapping framework currently just passes 0 as the prot argument,
resulting in faulting mappings.
This patch infers the protection attributes based on the direction of
the DMA transfer.
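A sketch of such an inference (helper name assumed, not necessarily the one
used in the patch):

static int dma_dir_to_iommu_prot(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return IOMMU_READ;	/* device reads from memory */
	case DMA_FROM_DEVICE:
		return IOMMU_WRITE;	/* device writes to memory */
	default:
		return 0;
	}
}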
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
In __iommu_get_pages(), the cpu_addr is checked for whether it lies in
the atomic_pool range. So if the cpu_addr is already known to be in the
atomic_pool range, there is no need to check it twice.
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Looking into the active_asids array is not enough, as we also need
to look into the reserved_asids array (they both represent processes
that are currently running).
Also, not holding the ASID allocator lock is racy, as another CPU
could schedule that process and trigger a rollover, making the erratum
workaround miss an IPI.
Exposing this outside of context.c would be a little ugly, so let's
define a new entry point that the erratum workaround can call to
obtain the cpumask.
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On a CPU that never ran anything, both the active and reserved ASID
fields are set to zero. In this case the ASID_TO_IDX() macro will
return -1, which is not a very useful value to index a bitmap.
Instead of trying to offset the ASID so that ASID #1 is actually
bit 0 in the asid_map bitmap, just always ignore bit 0 and start
the search from bit 1. This makes the code a bit more readable,
and without risk of OoB access.
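A one-line sketch of the idea (the bitmap and constant names are assumed):

asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);	/* bit 0 is never used */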
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When a CPU is running a process, the ASID for that process is
held in a per-CPU variable (the "active ASIDs" array). When
the ASID allocator handles a rollover, it copies the active
ASIDs into a "reserved ASIDs" array to ensure that a process
currently running on another CPU will continue to run unaffected.
The active array is zero-ed to indicate that a rollover occurred.
Because of this mechanism, a reserved ASID is only remembered for
a single rollover. A subsequent rollover will completely refill
the reserved ASIDs array.
In a severely oversubscribed environment where a CPU can be
prevented from running for extended periods of time (think virtual
machines), the above has a horrible side effect:
[P{a} denotes process P running with ASID a]
CPU-0 CPU-1
A{x} [active = <x 0>]
[suspended] runs B{y} [active = <x y>]
[rollover:
active = <0 0>
reserved = <x y>]
runs B{y} [active = <0 y>
reserved = <x y>]
[rollover:
active = <0 0>
reserved = <0 y>]
runs C{x} [active = <0 x>]
[resumes]
runs A{x}
At that stage, both A and C have the same ASID, with deadly
consequences.
The fix is to preserve reserved ASIDs across rollovers if
the CPU doesn't have an active ASID when the rollover occurs.
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This commit fixes the regression on Armada 370 (the kernel hang during
boot) introduced by the commit: "ARM: 7691/1: mm: kill unused
TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead".
When coming out of either a Wait for Interrupt (WFI) or a Wait for
Event (WFE) IDLE states, a specific timing sensitivity exists between
the retiring WFI/WFE instructions and the newly issued subsequent
instructions. This sensitivity can result in a CPU hang scenario. The
workaround is to insert either a Data Synchronization Barrier (DSB) or
Data Memory Barrier (DMB) command immediately after the WFI/WFE
instruction.
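As a hedged illustration of the idea only (the commit itself applies the fix
dynamically per processor type rather than open-coding it like this):

static inline void wfi_with_errata_workaround(void)
{
	asm volatile("wfi\n\tdsb\n" ::: "memory");	/* barrier right after WFI */
}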
This commit was based on the work of Lior Amsalem, but heavily
modified to apply the errata fix dynamically according to the
processor type thanks to the suggestions of Russell King and Nicolas
Pitre.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Willy Tarreau <w@1wt.eu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 1bc3974 (ARM: 7755/1: handle user space mapped pages in
flush_kernel_dcache_page) moved the implementation of
flush_kernel_dcache_page() into mm/flush.c but did not implement it
on noMMU ARM.
Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Cc: <stable@vger.kernel.org> # 3.2+: 1bc3974: ARM: 7755/1
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As was already suggested by Russell King and Arnd Bergmann:
https://lkml.org/lkml/2013/5/16/133
moxart and gemini seem to be the only platforms using CPU_FA526,
and instead of pointing arm_pm_idle to an empty function from
platform code, it makes sense to remove WFI code from the processor
specific idle function.
Applies to arm-soc/for-next (and 3.10-rc1).
Changes since v1:
1. remove WFI but make sure cpu_fa526_do_idle does not fall through
to cpu_fa526_dcache_clean_area
Note: moxart boots and prints to UART without this patch, but input is broken.
Signed-off-by: Jonas Jensen <jonas.jensen@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Conflicts:
arch/arm/kernel/smp.c
Please pull these miscellaneous LPAE fixes I've been collecting for a while
now for 3.11. They've been tested and reviewed by quite a few people, and most
of the patches are pretty trivial. -- Will Deacon.
These changes bring both HugeTLB support and Transparent HugePage
(THP) support to ARM. Only long descriptors (LPAE) are supported
in this series.
The code has been tested on an Arndale board (Exynos 5250).
Commit f8b63c1 made flush_kernel_dcache_page a no-op assuming that
the pages it needs to handle are kernel mapped only. However, for
example when doing direct I/O, pages with user space mappings may
occur.
Thus, continue to do lazy flushing if there are no user space
mappings. Otherwise, flush the kernel cache lines directly.
Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This commit fixes the ID and mask for the PJ4B, which were too
restrictive and didn't match the CPU of the Armada 370 SoC.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This bug was introduced in commit e651eab0.
Some v4/v5 platforms failed to boot due to this.
Signed-off-by: Po-Yu Chuang <ratbert.chuang@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On Cortex-A9 before version r1p0, the LoUIS bit field of the CLIDR
register returns zero when it should return one. This leads to cache
maintenance operations which rely on this value to not function as
intended, causing data corruption.
The workaround for this erratum is to detect affected CPUs and correct
the LoUIS value read.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Much like with the MMU, MPU initialisation is performed in two stages; the
first in the pre-C world and the 'real' initialisation during arch setup.
This patch wires in previously added MPU initialisation functions so that
the whole of memory is mapped with the appropriate region properties for
'normal' RAM (the appropriate properties depend on whether the system is
SMP).
Stub initialisation functions are added for the case where MPU support
is not configured into the kernel.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
This patch adds new functions for probing and initialising the ARMv7
PMSA-compliant MPU.
These use the pre-defined and reserved MPU_PROBE_REGION for establishing
properties of the MPU, which is necessary because certain probe operations
require modifying region properties and reading back the results.
This patch also introduces a minimal sanity_check_meminfo_mpu function that
ensures the memory set-up passed to the kernel can be used in conjunction
with the MPU. The base address of a region must be aligned to the region size
(otherwise behaviour is unpredictable), and region sizes can only be specified
as a power of two. To simplify satisfying these requirements, this
implementation currently enforces that all memory is contiguous from
PHYS_OFFSET, merging banks that are contiguous but passed in separately.
The functions are added in this patch but wired in to the boot process later
in the series.
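A tiny sketch of the kind of check those constraints imply (helper name
assumed, not from the patch):

static bool mpu_region_params_ok(phys_addr_t base, phys_addr_t size)
{
	/* size must be a power of two and base aligned to that size */
	return is_power_of_2(size) && !(base & (size - 1));
}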
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
This patch adds processor info for ARM Ltd. Cortex-R7.
The R7 has many similarities to the A9 and, though the ACTLR layout is not
identical, the bits associated with cache operations broadcasting and SMP
modes are the same for the A9, A5 and R7 (though in the A-class processors
the same bits toggle TLB-ops broadcasting as well as cache-ops).
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Stephen Boyd <sboyd@codeaurora.org>
Currently CPU_V7 selects CPU_CP15_MMU, however in the case of a V7 CPU
implementing the PMSA, such as the Cortex-R7, the CP15_MMU operations are
not available. Selecting CPU_CP15_MPU is appropriate in this case.
This patch makes CPU_CP15_MMU dependent on the use of the MMU, selecting
CPU_CP15_MPU for v7 processors when !MMU is chosen.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
The ARM CPU suspend code can be selected even for a !CONFIG_MMU
configuration. The resulting kernel will not compile and, even if it did,
would access undefined co-processor registers when executing.
This patch fixes the v6 and v7 CPU suspend code for the nommu case.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Santosh Shilimkar <santosh.shilimkar@ti.com>
CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Currently flush_dcache_page() considers pages non-mapped only if
mapping_mapped(mapping) returns false. This approach is very
coarse:
- an mmap on part of a file may cause all pages backed by
the file to be treated as mapped
- file-backed pages aren't actually mapped into user space
if the memory mmaped on the file is never accessed
This patch uses page_mapped() instead to decide whether the page
has been mapped.
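A rough sketch of the resulting decision (simplified, not the literal hunk):

if (mapping && !page_mapped(page))
	clear_bit(PG_dcache_clean, &page->flags);	/* defer: no user mappings */
else
	__flush_dcache_page(mapping, page);		/* user mappings exist, flush now */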
With the test code below, I find there is a big performance improvement
(>25%) when accessing page-cache pages via read() in this situation, so
memcpy benefits a lot from not flushing the cache in this case.
No. read time without the patch No. read time with the patch
================================================================
No. 0, time 22615636 us No. 0, time 22014717 us
No. 1, time 4387851 us No. 1, time 3113184 us
No. 2, time 4276535 us No. 2, time 3005244 us
No. 3, time 4259821 us No. 3, time 3001565 us
No. 4, time 4263811 us No. 4, time 3002748 us
No. 5, time 4258486 us No. 5, time 3004104 us
No. 6, time 4253009 us No. 6, time 3002188 us
No. 7, time 4262809 us No. 7, time 2998196 us
No. 8, time 4264525 us No. 8, time 3007255 us
No. 9, time 4267795 us No. 9, time 3005094 us
1) Run No. 0 reads the file from the storage device; the others basically
read the file from the page cache.
2) The file size is 512M, on ext4 over USB mass storage.
3) The test is done on a Pandaboard.
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

unsigned int sum = 0;
unsigned long sum_val = 0;

static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
{
	return (tv2->tv_sec - tv1->tv_sec) * 1000000 +
	       (tv2->tv_usec - tv1->tv_usec);
}

int main(int argc, char *argv[])
{
	char *mbuf;
	unsigned char *rbuf;
	int fd;
	int i;
	unsigned long page_size, size;
	struct stat stat;
	struct timeval t1, t2;

	page_size = getpagesize();
	rbuf = malloc(32 * page_size);
	if (!rbuf) {
		printf(" %s\n", "malloc failed");
		exit(-1);
	}
	fd = open(argv[1], O_RDWR);
	assert(fd >= 0);
	fstat(fd, &stat);
	size = stat.st_size;
	printf("%s: file %s, size %lu, page size %lu\n",
	       argv[0], argv[1], size, page_size);
	gettimeofday(&t1, NULL);
	/* the writable shared mapping only exists to give the pages
	 * user-space mappings; it is never accessed directly */
	mbuf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mbuf == MAP_FAILED) {
		printf(" %s\n", "mmap failed");
		exit(-1);
	}
	/* read the whole file through the page cache, 32 pages at a time */
	for (i = 0 ; i < size ; i += (page_size * 32)) {
		int rcnt;

		lseek(fd, i, SEEK_SET);
		rcnt = read(fd, rbuf, page_size * 32);
		if (rcnt != page_size * 32) {
			printf("%s: read failed\n", __func__);
			exit(-1);
		}
	}
	free(rbuf);
	munmap(mbuf, size);
	gettimeofday(&t2, NULL);
	printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));
	close(fd);
	return 0;
}
Cc: Michel Lespinasse <walken@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The patch adds support for THP (transparent huge pages) to LPAE
systems. When this feature is enabled, the kernel tries to map
anonymous pages as 2MB sections where possible.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of
PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h,
added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
This patch adds support for hugetlbfs based on the x86 implementation.
It allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt
for usage). The 64K pages configuration is not supported (section size
is 512MB in this case).
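An illustrative user-space usage sketch (not part of the patch; it assumes
huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;		/* one 2MB huge page */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	((char *)p)[0] = 0;			/* touch it */
	munmap(p, len);
	return 0;
}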
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants replace numbers in places.
Split up into multiple files, to simplify future non-LPAE support,
removed huge_pmd_share code, as this is very rarely executed,
Added PROT_NONE support].
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
On ARM we use the __flush_dcache_page function to flush the dcache
of pages when needed; usually when the PG_dcache_clean bit is unset
and we are setting a PTE.
A HugeTLB page is represented as a compound page consisting of an
array of pages. Thus to flush the dcache of a HugeTLB page, one must
flush more than a single page.
This patch modifies __flush_dcache_page such that all constituent
pages of a HugeTLB page are flushed.
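A simplified sketch of the loop this implies (not the exact hunk):

unsigned int i;

for (i = 0; i < (1 << compound_order(page)); i++)
	__cpuc_flush_dcache_area(page_address(page + i), PAGE_SIZE);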
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
This patch cleans up the highmem sanity check code by simplifying the range
checks with a pre-calculated size_limit. This patch should otherwise have no
functional impact on behavior.
This patch also removes a redundant (bank->start < vmalloc_limit) check, since
this is already covered by the !highmem condition.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On Keystone platforms, physical memory is entirely outside the 32-bit
addressable range. Therefore, the (bank->start > ULONG_MAX) check below marks
the entire system memory as highmem, and this causes unpleasantness all over.
This patch eliminates the extra bank start check (against ULONG_MAX) by
checking bank->start against the physical address corresponding to vmalloc_min
instead.
In the process, this patch also cleans up parts of the highmem sanity check
code by removing what has now become a redundant check for banks that entirely
overlap with the vmalloc range.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch modifies the highmem sanity checking code to use physical addresses
instead. This change eliminates the wrap-around problems associated with the
original virtual address based checks, and this simplifies the code a bit.
The one constraint imposed here is that low physical memory must be mapped in
a monotonically increasing fashion if there are multiple banks of memory,
i.e., x < y must imply pa(x) < pa(y).
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to
38-bit physical addresses.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch moves the TTBR1 offset calculation and the T1SZ calculation out
of the TTB setup assembly code. This should not affect functionality in
any way, but improves code readability as well as readability of subsequent
patches in this series.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and
the LPAE version of cpu_set_reserved_ttbr0() to use these instead.
In the process, we also fix these functions to correctly handle cases
where the physical address lies beyond the 4G limit of 32-bit addressing.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch modifies the switch_mm() processor functions to use phys_addr_t.
On LPAE systems, we now honor the upper 32-bits of the physical address that
is being passed in, and program these into TTBR as expected.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
[will: fixed up conflict in 3-level switch_mm with big-endian changes]
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch fixes the initrd setup code to use phys_addr_t instead of assuming
32-bit addressing. Without this we cannot boot on systems where initrd is
located above the 4G physical address limit.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>