The non-highmem and the __pfn_to_bus() based page_to_dma() implementations
both compile to the same code, so it's pointless having two different
approaches. Use the __pfn_to_bus() based version.
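A minimal sketch of the surviving definition, roughly as it would appear in
asm/dma-mapping.h (the __arch_page_to_dma override branch is unaffected and
not shown):

  static inline dma_addr_t page_to_dma(struct device *dev, struct page *page)
  {
          /* pfn-based, so it also works for highmem pages */
          return (dma_addr_t)__pfn_to_bus(page_to_pfn(page));
  }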
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
This reverts commit 75f4aa15cf.
We have a couple of platforms which require non-linear P:V mappings,
so we need these to be overridable.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On OMAP platforms, some people want to segment the memory between the
kernel and a separate application such that there is a hole in the
middle of the memory as far as Linux is concerned. However, they want
to be able to mmap() the hole.
This currently causes problems, because update_mmu_cache() thinks that
there are valid struct pages for the "hole". Fix this by making
pfn_valid() slightly more expensive, checking whether the PFN falls
within one of the memory banks recorded in the meminfo array.
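A minimal sketch of that check, assuming the usual meminfo/membank accessors
(bank_pfn_start()/bank_pfn_end()) from asm/setup.h:

  int pfn_valid(unsigned long pfn)
  {
          struct meminfo *mi = &meminfo;
          unsigned int i;

          /* valid only if the pfn falls inside one of the registered banks */
          for (i = 0; i < mi->nr_banks; i++)
                  if (pfn >= bank_pfn_start(&mi->bank[i]) &&
                      pfn < bank_pfn_end(&mi->bank[i]))
                          return 1;
          return 0;
  }
  EXPORT_SYMBOL(pfn_valid);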
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
Modules compiled to Thumb-2 have two additional relocations needing to
be resolved at load time, R_ARM_THM_CALL and R_ARM_THM_JUMP24, for BL
and B.W instructions. The maximum Thumb-2 addressing range is +/-2^24
bytes (+/-16MB); therefore, the MODULES_VADDR macro in asm/memory.h is
set to (MODULES_END - 8MB) for Thumb-2 compiled kernels.
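A hedged sketch of the asm/memory.h change (the CONFIG_THUMB2_KERNEL guard is
an assumption; the existing non-Thumb-2 definition is left out):

  #ifdef CONFIG_THUMB2_KERNEL
  /* BL/B.W reach only +/-2^24 bytes (+/-16MB), so keep modules closer in */
  #define MODULES_VADDR        (MODULES_END - 8*1024*1024)
  #endif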
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If a machine class has a custom __virt_to_bus() implementation then it
must provide a __arch_page_to_dma() implementation as well which is
_not_ based on page_address() to support highmem.
This patch fixes the existing __arch_page_to_dma() implementations and
provides a default implementation otherwise. The default implementation
for highmem is
based on __pfn_to_bus() which is defined only when no custom
__virt_to_bus() is provided by the machine class.
That leaves only ebsa110 and footbridge, which cannot support highmem
until they provide their own __arch_page_to_dma() implementations.
But highmem support on those legacy platforms with limited memory is
certainly not a priority.
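A hedged sketch of the resulting structure in asm/dma-mapping.h (the exact
preprocessor guards are assumptions):

  #ifndef __arch_page_to_dma
  #if !defined(CONFIG_HIGHMEM)
  static inline dma_addr_t page_to_dma(struct device *dev, struct page *page)
  {
          /* lowmem only: page_address() is always valid here */
          return (dma_addr_t)__virt_to_bus((unsigned long)page_address(page));
  }
  #elif defined(__pfn_to_bus)
  static inline dma_addr_t page_to_dma(struct device *dev, struct page *page)
  {
          /* highmem-safe: no page_address() involved */
          return (dma_addr_t)__pfn_to_bus(page_to_pfn(page));
  }
  #else
  #error "this machine class needs its own __arch_page_to_dma() for highmem"
  #endif
  #else
  static inline dma_addr_t page_to_dma(struct device *dev, struct page *page)
  {
          return __arch_page_to_dma(dev, page);
  }
  #endif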
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel. This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction. Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).
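A hedged sketch of the resulting layout definitions (asm/highmem.h and
asm/memory.h; the exact config guard is an assumption):

  /* the kmap area is the single pmd (2MB) right below PAGE_OFFSET */
  #define PKMAP_BASE           (PAGE_OFFSET - PMD_SIZE)

  #ifdef CONFIG_HIGHMEM
  #define MODULES_END          (PAGE_OFFSET - PMD_SIZE)  /* 14MB left for modules */
  #else
  #define MODULES_END          (PAGE_OFFSET)
  #endif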
Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.
The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.
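A minimal sketch of that bypass in arch/arm/mm/flush.c (the surrounding
deferral logic is simplified here):

  void flush_dcache_page(struct page *page)
  {
          struct address_space *mapping = page_mapping(page);

          if (!PageHighMem(page) && mapping && !mapping_mapped(mapping))
                  /* lowmem and not mapped into user space: flush can wait */
                  set_bit(PG_dcache_dirty, &page->flags);
          else
                  /* a highmem mapping may vanish after kunmap(): flush now */
                  __flush_dcache_page(mapping, page);
  }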
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Let's provide an overridable default instead of having every machine
class define __virt_to_bus and __bus_to_virt to the same thing. Most
platforms use bus_addr == phys_addr, so that is the default.
One exception is ebsa110, which has no DMA whatsoever, so the actual
definition matters only for proper compilation. Also add a comment
about the special footbridge bus translation.
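A sketch of the overridable default in asm/memory.h:

  #ifndef __virt_to_bus
  #define __virt_to_bus        __virt_to_phys
  #define __bus_to_virt        __phys_to_virt
  #endif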
Let's also remove comments alluding to set_dma_addr, which is not
(and should not be) commonly used.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is no machine class overriding this. If non-linear translations
are implemented again for some machines then this could be restored at
that time.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As of 73bdf0a60e, the kernel needs
to know where modules are located in the virtual address space.
On ARM, we located this region between MODULE_START and MODULE_END.
Unfortunately, everyone else calls it MODULES_VADDR and MODULES_END.
Update ARM to use the same naming, so is_vmalloc_or_module_addr()
can work properly. Also update the comment in mm/vmalloc.c to
reflect that ARM also places modules in a separate region from the
vmalloc space.
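For reference, the generic helper in mm/vmalloc.c that relies on this
naming looks roughly like:

  static inline int is_vmalloc_or_module_addr(const void *x)
  {
  #if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
          unsigned long addr = (unsigned long)x;

          /* modules live in their own window on some architectures */
          if (addr >= MODULES_VADDR && addr < MODULES_END)
                  return 1;
  #endif
          return is_vmalloc_addr(x);
  }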
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Most ARM machines don't need a special "DMA" memory zone, and when it
is configured out, the kernel becomes a bit smaller:
|    text    data     bss      dec     hex  filename
| 3826182  102384  111700  4040266  3da64a  vmlinux
| 3823593  101616  111700  4036909  3d992d  vmlinux.nodmazone
This is because the system now has only one zone in total, the effect
of which is to optimize away many conditionals in the page allocation
paths.
So let's configure this zone only on machines that need split zones.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no point scattering this around the tree; the parsing
of the parameter might as well live beside the code which uses
it. That also means we can make vmalloc_reserve a static
variable.
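A minimal sketch of the idea using the generic early_param() mechanism (the
ARM code of that era had its own __early_param() helper, so the registration
details differ):

  /* lives next to its only user now, so static is enough */
  static unsigned long __initdata vmalloc_reserve = SZ_128M;

  static int __init early_vmalloc(char *arg)
  {
          vmalloc_reserve = memparse(arg, NULL);
          return 0;
  }
  early_param("vmalloc", early_vmalloc);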
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds a config option (CONFIG_VMSPLIT_*) to allow choosing
between 3:1, 2:2 and 1:3 user:kernel memory splits.
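A hedged sketch of how the chosen split reaches the code: the selected
CONFIG_VMSPLIT_* option determines CONFIG_PAGE_OFFSET, which asm/memory.h
then uses for the kernel/user boundary (the 16MB gap below PAGE_OFFSET is
the module area):

  #define PAGE_OFFSET          UL(CONFIG_PAGE_OFFSET)
  #define TASK_SIZE            (UL(CONFIG_PAGE_OFFSET) - UL(0x01000000))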
Tested-by: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
OMAP at least gets the return type(s) for the DMA translation functions
wrong, which can lead to subtle errors. Avoid this by moving the DMA
translation functions to asm/dma-mapping.h, and converting them to
inline functions.
Fix the OMAP DMA translation macros to use the correct argument and
result types.
Also, remove the unnecessary casts in dmabounce.c.
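A sketch of the inline-function form in asm/dma-mapping.h; unlike the old
macros, these give the compiler real argument and return types to check:

  static inline void *dma_to_virt(struct device *dev, dma_addr_t addr)
  {
          return (void *)__bus_to_virt(addr);
  }

  static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
  {
          return (dma_addr_t)__virt_to_bus((unsigned long)addr);
  }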
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch will truncate and/or ignore memory banks if their kernel
direct mappings would (partially) overlap with the vmalloc area or
the mappings between the vmalloc area and the address space top. This
prevents crashing during early boot if there happens to be more RAM
installed than we are expecting.
Since the start of the vmalloc area is not at a fixed address (but
the vmalloc end address is, via the per-platform VMALLOC_END define),
a default area of 128M is reserved for vmalloc mappings, which can
be shrunk or enlarged by passing an appropriate vmalloc= command line
option, as is done on x86.
On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe000000,
two 512M RAM banks and vmalloc=128M (the default), this patch gives:
Truncating RAM at 20000000-3fffffff to -35ffffff (vmalloc region overlap).
Memory: 512MB 352MB = 864MB total
On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe800000,
two 256M RAM banks and vmalloc=768M, this patch gives:
Truncating RAM at 00000000-0fffffff to -0e7fffff (vmalloc region overlap).
Ignoring RAM at 10000000-1fffffff (vmalloc region overlap).
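A hedged sketch of the per-bank check (the function name and the handling of
the vmalloc_reserve value are assumptions about the surrounding code):

  static void __init sanity_check_meminfo(struct meminfo *mi)
  {
          unsigned long vmalloc_min = VMALLOC_END - vmalloc_reserve;
          int i, j;

          for (i = 0, j = 0; i < mi->nr_banks; i++) {
                  struct membank *bank = &mi->bank[j];
                  unsigned long start;

                  *bank = mi->bank[i];
                  start = (unsigned long)__va(bank->start);

                  if (start >= vmalloc_min) {
                          /* whole bank above the lowmem limit: drop it */
                          printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
                                 "(vmalloc region overlap).\n", bank->start,
                                 bank->start + bank->size - 1);
                          continue;
                  }

                  if (start + bank->size > vmalloc_min) {
                          /* bank straddles the limit: truncate it */
                          unsigned long newsize = vmalloc_min - start;
                          printk(KERN_NOTICE "Truncating RAM at %.8lx-%.8lx "
                                 "to -%.8lx (vmalloc region overlap).\n",
                                 bank->start, bank->start + bank->size - 1,
                                 bank->start + newsize - 1);
                          bank->size = newsize;
                  }
                  j++;
          }
          mi->nr_banks = j;
  }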
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Riku Voipio <riku.voipio@iki.fi>
Move platform-independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>