Our copy_user_highpage() implementations may require cache maintenance.
Ensure that implementations have all the details necessary to perform
this maintenance.
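For reference, a hedged sketch of the interface in question; the prototype
below follows the generic copy_user_highpage() and is meant only to
illustrate the context (destination page, source page, user virtual
address and vma) an implementation needs in order to decide what cache
maintenance to perform:

	void copy_user_highpage(struct page *to, struct page *from,
				unsigned long vaddr,
				struct vm_area_struct *vma);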
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is a fix for the following crash observed in 2.6.29-rc3:
http://lkml.org/lkml/2009/1/29/150
On ARM it doesn't make sense to trace a naked function: mcount would be
called without the stack and frame pointer having been set up, so there
is no way to restore the lr register to the value it held before mcount
was called.
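A hedged sketch of the idea, not necessarily the exact upstream change:
make every naked function also notrace, so that a -pg build does not emit
an mcount() call into a function which has no prologue and therefore no
saved copy of lr to return through.

	/* notrace expands to GCC's no_instrument_function attribute,
	 * which keeps the profiling call from being emitted for the
	 * annotated function. */
	#define notrace	__attribute__((no_instrument_function))
	#define __naked	__attribute__((naked)) notrace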
Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: Steven Rostedt <rostedt@home.goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In all cases the kaddr is assigned to an input register even though it
is modified in the assembly code. Let's assign a new variable to hold
the modified value and mark those inline asm statements volatile,
otherwise they get optimized away because the output variable is not
otherwise used.
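A hedged, stand-alone illustration of the constraint pattern described
above (ARM GCC inline asm; the function and its loop body are simplified
and not taken from any particular copypage variant): the pointers modified
by the post-indexed ldmia/stmia are written to new output variables tied
to the original inputs, and the asm is marked volatile so it survives even
though those outputs are never read afterwards.

	static void example_copy_chunk(void *kto, const void *kfrom)
	{
		void *kto_mod;		/* receives the modified kto */
		const void *kfrom_mod;	/* receives the modified kfrom */

		asm volatile(
		"	ldmia	%1!, {r3, r4, ip, lr}\n"
		"	stmia	%0!, {r3, r4, ip, lr}\n"
		"	ldmia	%1!, {r3, r4, ip, lr}\n"
		"	stmia	%0!, {r3, r4, ip, lr}\n"
		: "=r" (kto_mod), "=r" (kfrom_mod)
		: "0" (kto), "1" (kfrom)
		: "r3", "r4", "ip", "lr", "memory");
	}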
Also fix a few conversion errors in copypage-feroceon.c and
copypage-v4mc.c.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For reasons similar to those for copy_user_page(), we want to avoid the
additional kmap_atomic if it's unnecessary.
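For context, the generic helper being replaced looks roughly like the
following (a hedged paraphrase of include/linux/highmem.h from that era);
the kmap_atomic() here is the extra work we want to avoid when the
architecture implementation does not actually use that mapping:

	/* generic fallback, paraphrased */
	static inline void clear_user_highpage(struct page *page,
					       unsigned long vaddr)
	{
		void *kaddr = kmap_atomic(page, KM_USER0);
		clear_user_page(kaddr, vaddr, page);
		kunmap_atomic(kaddr, KM_USER0);
	}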
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We used to override the copy_user_page() function. However, this
is not only inefficient, it also causes additional complexity for
highmem support, since we convert from a struct page to a kernel
direct mapped address and back to a struct page again.
Moreover, with highmem support, we end up pointlessly setting up
kmap entries for pages which we're going to remap. So, push the
kmapping down into the copypage implementation files where it's
required.
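As a hedged sketch only (the function name is illustrative, the
two-argument kmap_atomic() matches that era), a simple non-aliasing
implementation then takes roughly this shape, with the kmapping done
right where the copy happens:

	static void example_copy_user_highpage(struct page *to,
					       struct page *from,
					       unsigned long vaddr)
	{
		void *kto = kmap_atomic(to, KM_USER0);
		void *kfrom = kmap_atomic(from, KM_USER1);

		copy_page(kto, kfrom);

		kunmap_atomic(kfrom, KM_USER1);
		kunmap_atomic(kto, KM_USER0);
	}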
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide L_PTE_MT_xxx definitions to describe the memory types that we
use in Linux/ARM. These definitions are carefully picked such that:
1. their LSBs match what is required for pre-ARMv6 CPUs.
2. they all have a unique encoding, including after modification by
build_mem_type_table() (with the result that some memory types end up
with more than one encoding); see the sketch below.
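The sketch below shows the shape such definitions take; the low bits of
each value are meant to line up with the pre-ARMv6 C and B bits, per
point 1, but treat the exact field position and numbers as illustrative
rather than authoritative:

	#define L_PTE_MT_UNCACHED	(0x00 << 2)
	#define L_PTE_MT_BUFFERABLE	(0x01 << 2)
	#define L_PTE_MT_WRITETHROUGH	(0x02 << 2)
	#define L_PTE_MT_WRITEBACK	(0x03 << 2)
	#define L_PTE_MT_MASK		(0x0f << 2)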
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If PG_dcache_dirty is set for a page, we need to flush the source page
before performing any copypage operation using a different virtual address.
This fixes the copypage implementations for XScale, StrongARM and ARMv6.
This patch fixes segmentation faults seen in the dynamic linker under
the usage patterns in glibc 2.4/2.5.
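A hedged sketch of the check this adds ahead of the copy (the wrapper
function here is hypothetical; PG_dcache_dirty, page_mapping() and
__flush_dcache_page() follow the ARM mm code of that era):

	static void example_flush_src_if_dirty(struct page *from)
	{
		/* the source may have dirty D-cache lines at its kernel
		 * mapping; write them out before copying the page through
		 * a different virtual alias */
		if (test_and_clear_bit(PG_dcache_dirty, &from->flags))
			__flush_dcache_page(page_mapping(from), from);
	}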
Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
L_PTE_ASID is not really required to be stored in every PTE, since we
can identify it via the address passed to set_pte_at(). So, create
set_pte_ext() which takes the address of the PTE to set, the Linux
PTE value, and the additional CPU PTE bits which aren't encoded in
the Linux PTE value.
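A hedged sketch of the resulting relationship (close to, but not
guaranteed to match, the pgtable.h of that era): set_pte_at() derives the
extra CPU bits, here the not-global bit for user addresses, from the
address and hands them to set_pte_ext() alongside the Linux PTE value:

	#define set_pte_at(mm, addr, ptep, pteval)			\
		set_pte_ext(ptep, pteval,				\
			    (addr) >= TASK_SIZE ? 0 : PTE_EXT_NG)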
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move top_pmd into arch/arm/mm/mm.h - nothing outside arch/arm/mm
references it.
Move the repeated definition of TOP_PTE into mm/mm.h, as well as
a few function prototypes.
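For reference, a sketch of the kind of definitions now collected in
arch/arm/mm/mm.h (illustrative):

	/* the pmd covering the top of the kernel address space, used for
	 * the temporary page table entries set up via TOP_PTE() */
	extern pmd_t *top_pmd;

	#define TOP_PTE(x)	pte_offset_kernel(top_pmd, x)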
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the locking for copy_user_page() and clear_user_page() into the
implementations which require locking. For simple memcpy/memset based
implementations, the locking is extra overhead which is not necessary,
and it prevents preemption from occurring.
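A hedged sketch of the resulting split (simplified, with illustrative
function names): only the implementations that copy through a shared
temporary mapping keep a lock, while the plain memcpy/memset based ones
run lock-free and preemptible.

	static DEFINE_SPINLOCK(minicache_lock);

	/* aliasing-cache implementation: serialises use of its shared
	 * temporary mapping */
	static void example_mc_copy_user_page(void *kto, const void *kfrom,
					      unsigned long vaddr)
	{
		spin_lock(&minicache_lock);
		/* ... remap the source to a congruent alias and copy ... */
		spin_unlock(&minicache_lock);
	}

	/* simple implementation: a straight copy, no locking needed */
	static void example_simple_copy_user_page(void *kto, const void *kfrom,
						  unsigned long vaddr)
	{
		memcpy(kto, kfrom, PAGE_SIZE);
	}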
Signed-off-by: Russell King <rmk@arm.linux.org.uk>