Commit Graph

51444 Commits

Paul Mackerras
b142eb3a5a Merge branch 'for-2.6.22' of master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6 into for-2.6.22 2007-04-24 11:46:09 +10:00
Paul Mackerras
13177c8b7e Merge branch 'spufs' of master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6 into for-2.6.22 2007-04-24 11:45:03 +10:00
Paul Mackerras
445c9b5507 Merge branch 'kconfig' of master.kernel.org:/pub/scm/linux/kernel/git/galak/powerpc into for-2.6.22 2007-04-24 08:42:11 +10:00
Arnd Bergmann
c6d344819e [POWERPC] update cell_defconfig
Sync with the Kconfig changes, and enable some options for celleb

Cc: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:41 +02:00
Jeremy Kerr
150f7e3cfe [POWERPC] cell: enable RTAS-based PTCAL for Cell XDR memory
Enable Periodic Recalibration (PTCAL) support for Cell XDR memory,
using the new ibm,cbe-start-ptcal and ibm,cbe-stop-ptcal RTAS calls.

Tested on QS20 and QS21 (by Thomas Huth). It seems that SLOF has
problems disabling PTCAL, at least on QS20, so this patch should only be
used once those problems have been addressed.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:41 +02:00
Christian Krafft
9dd855a729 [POWERPC] cell: add support for proper device-tree
This patch adds support for a proper device-tree.
A proper device-tree on cell contains "be" nodes
for each CBE, each containing nodes for the SPEs and all the
other special devices on it.
Of course, the old-style device tree is still supported.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:40 +02:00
Christian Krafft
6bf05fd776 [POWERPC] add of_iomap function
The of_iomap function maps memory for a given
device_node and returns a pointer to that memory.
This pattern is used in several places, so it makes sense to
turn it into a separate function.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:40 +02:00
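
A rough sketch of what an of_iomap()-style helper of that era looks like, assuming it is built on of_address_to_resource() and ioremap(); exact header locations and error handling in the actual patch may differ:

        #include <asm/prom.h>
        #include <asm/io.h>

        /* Map the index'th memory region of a device node, or return NULL. */
        void __iomem *of_iomap(struct device_node *np, int index)
        {
                struct resource res;

                if (of_address_to_resource(np, index, &res))
                        return NULL;

                return ioremap(res.start, res.end - res.start + 1);
        }
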
Christian Krafft
4a065f9418 [POWERPC] pmi probe device by device-type
At the moment the pmi device driver probes for devices with
a given type and a given name. As there may be devices of
the same type but with a different name, probing should
also be done by device type alone.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:40 +02:00
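
A hedged illustration of the kind of match-table change this implies, assuming the driver uses an of_device_id table; the exact type/name strings shown here are an assumption based on the device-tree binding:

        /* old behaviour: match on both type and name;
         * new behaviour: an extra entry matches on the type alone */
        static struct of_device_id pmi_match[] = {
                { .type = "ibm,pmi", .name = "ibm,pmi" },
                { .type = "ibm,pmi" },
                {},
        };
        MODULE_DEVICE_TABLE(of, pmi_match);
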
Christian Krafft
79baf4a60e [POWERPC] add check for initialized driver data to pmi driver
This patch adds a check that the private driver data has been initialized.
The bug showed up because the caller found a pmi device by its type,
whereas the pmi driver probes for both the type and the name.
Since the name was not what the driver expected, the driver data was
never initialized. A more relaxed probing scheme will be supplied in a
separate patch.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:40 +02:00
Christian Krafft
5050063c04 [POWERPC] cell: use pmi in cpufreq driver
The new PMI driver was added in order to support
cpufreq on blades that require the frequency to
be controlled by the service processor, so use it
on those.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:39 +02:00
Christian Krafft
5f7bdaee2a [POWERPC] cbe_thermal: add throttling attributes to cpu and spu nodes
This patch adds some attributes to the cpu and spu nodes:
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_begin
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_end
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_full_stop

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:39 +02:00
Christian Krafft
24d560d7b9 [POWERPC] cbe_thermal: clean up computation of temperature
This patch introduces a little function for transforming
register values into temperature.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:39 +02:00
Christian Krafft
91a69c9646 [POWERPC] cell: add cbe_node_to_cpu function
This patch adds code to deal with converting
logical CPU numbers to cbe nodes. It removes code that
assumed there were two logical CPUs per CBE.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:38 +02:00
Jeremy Kerr
ccf17e9d00 [POWERPC] spu_base: fix initialisation on systems with no SPEs
This change fixes the case where spu_base and spufs are initialised on a
system with no SPEs - unconditionally create the spu_lists so spu_alloc
doesn't explode, and check for spu_management ops before starting spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>

 arch/powerpc/platforms/cell/spu_base.c    |    7 ++++---
 arch/powerpc/platforms/cell/spufs/inode.c |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)
2007-04-23 21:19:00 +02:00
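
A minimal sketch of the spufs side of that check, assuming a global spu_management_ops pointer that stays NULL when no SPE management backend registered (i.e. on SPE-less systems); the rest of the init path is elided:

        static int __init spufs_init(void)
        {
                /* Without SPE management ops there is nothing spufs can do,
                 * so bail out instead of registering the filesystem. */
                if (!spu_management_ops)
                        return -ENODEV;

                return register_filesystem(&spufs_type);  /* remaining init elided */
        }
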
Christoph Hellwig
befdc746ee [POWERPC] spu_base: remove cleanup_spu_base
spu_base.c is always built into the kernel image, so there is no need
for a cleanup function.  And some of the things it does are in the
way for my following patches, so I'd rather get rid of it ASAP.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Christoph Hellwig
aa45e2569f [POWERPC] spufs: various run.c cleanups
- remove spu_acquire_runnable from spu_run_init; I need to
  open-code it in spufs_run_spu in the next patch
- remove various inline attributes; we don't really want to inline
  long functions with multiple call sites
- clean up return values and runcntl_write calls in spu_run_init
- use normal kernel coding style in spu_reacquire_runnable

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Akinobu Mita
fe8a29db5b [POWERPC] spufs: enable SPU coredump for kernel-builtin spufs
spu_coredump_calls.owner is NULL in the case of a built-in spufs,
so the checks in here break.
Check for the availability of the spu_coredump_calls variable
instead.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:59 +02:00
Arnd Bergmann
6cf2179202 [POWERPC] spufs: fix memory leak on coredump
The dynamically allocated read/write buffer in spufs_arch_write_note()
is never freed. Convert it to get_free_page at the same time.

Cc: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
Jeremy Kerr
d3764397d0 [POWERPC] spufs: Minor cleanup of spu_wait
Change the loop in spu_wait to be a little more straightforward.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
Jeremy Kerr
f11f5ee70f [POWERPC] spufs: add mode= mount option
Add a 'mode=' option to spufs mount arguments. This allows more
control over access to the top-level spufs directory.

Tested on Cell.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:58 +02:00
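
A hedged sketch of how such a mount option is typically parsed with the match_token() helpers from <linux/parser.h>; the option table and the way the mode is applied in spufs itself may differ (the real code also handles uid/gid options):

        #include <linux/parser.h>

        enum { Opt_mode, Opt_err };

        static match_table_t spufs_tokens = {
                { Opt_mode, "mode=%o" },
                { Opt_err,  NULL },
        };

        static int spufs_parse_options(char *options, struct inode *root)
        {
                char *p;
                substring_t args[MAX_OPT_ARGS];

                while ((p = strsep(&options, ",")) != NULL) {
                        int token, option;

                        if (!*p)
                                continue;

                        token = match_token(p, spufs_tokens, args);
                        switch (token) {
                        case Opt_mode:
                                if (match_octal(&args[0], &option))
                                        return 0;
                                /* apply the mode to the spufs root directory */
                                root->i_mode = option | S_IFDIR;
                                break;
                        default:
                                return 0;
                        }
                }
                return 1;
        }
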
Akinobu Mita
9e2fe2ce4e [POWERPC] spufs: use memcpy_fromio() to copy from local store
GCC may generate an inline copy loop for the memcpy() function
instead of calling the kernel-defined memcpy(). This inlined version
causes an alignment interrupt when copying from local store.

This patch uses memcpy_fromio() and memcpy_toio() to copy to and from
local store, preventing memcpy() from being inlined.

Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
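
Illustrative helpers showing the io-safe copy calls when one side is SPU local store (an __iomem mapping), so gcc cannot substitute an inlined, alignment-sensitive copy loop; the wrapper names are made up for this sketch:

        #include <asm/io.h>

        static void copy_from_ls(void *dst, const void __iomem *ls, size_t len)
        {
                memcpy_fromio(dst, ls, len);    /* never inlined by gcc */
        }

        static void copy_to_ls(void __iomem *ls, const void *src, size_t len)
        {
                memcpy_toio(ls, src, len);
        }
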
Christoph Hellwig
8a7d86bdb2 [POWERPC] spufs: avoid spurious memory barriers
We now have proper locking around assignments of the mapping pointers,
and the spin_unlock implies enough of a barrier to get rid of the
explicit one.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
Akinobu Mita
db1384b40d [POWERPC] spufs: fix memory leak on spufs reloading
When SPU isolation mode is enabled, isolated_loader is
allocated by spufs_init_isolated_loader() at module_init(),
but nothing ever frees it.

This patch introduces spufs_exit_isolated_loader() which is
the opposite of spufs_init_isolated_loader() and called on
module_exit().

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:57 +02:00
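
A minimal sketch of the symmetry being introduced, assuming the loader image is obtained with kmalloc() in the init path (the allocation details are an assumption of this sketch, not taken from the patch):

        static void spufs_exit_isolated_loader(void)
        {
                /* undo spufs_init_isolated_loader(); kfree(NULL) is a no-op */
                kfree(isolated_loader);
                isolated_loader = NULL;
        }

        static void __exit spufs_exit(void)
        {
                spufs_exit_isolated_loader();
                /* remaining module teardown elided */
        }
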
Akinobu Mita
c99c1994a2 [POWERPC] spufs: fix missing error handling in module_init()
spufs module_init forgot to call a few cleanup functions
on the error path. This patch also includes cosmetic changes in
spu_sched_init() (an indentation fix and returning an error code).

[modified by hch to apply ontop of the latest schedule changes]

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
Akinobu Mita
577f8f1021 [POWERPC] spufs: check spu_acquire_runnable() return value
This patch checks the return value of spu_acquire_runnable() in
spufs_mfc_write().

Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
Christoph Hellwig
e45d48a34d [POWERPC] spufs: turn run_sema into run_mutex
There is no reason for run_sema to be a struct semaphore.  Change
it to a mutex and rename it accordingly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:56 +02:00
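
The conversion follows the usual semaphore-to-mutex pattern; a standalone illustration only (the real field lives in struct spu_context and serialises the SPU run loop, and whether the interruptible lock variant is used is an assumption of this sketch):

        #include <linux/mutex.h>
        #include <linux/errno.h>

        struct run_ctx {
                struct mutex run_mutex;         /* was: struct semaphore run_sema */
        };

        static void run_ctx_init(struct run_ctx *ctx)
        {
                mutex_init(&ctx->run_mutex);    /* was: init_MUTEX(&ctx->run_sema) */
        }

        static int run_ctx_enter(struct run_ctx *ctx)
        {
                /* was: down_interruptible(&ctx->run_sema) */
                if (mutex_lock_interruptible(&ctx->run_mutex))
                        return -ERESTARTSYS;
                return 0;
        }

        static void run_ctx_exit(struct run_ctx *ctx)
        {
                mutex_unlock(&ctx->run_mutex);  /* was: up(&ctx->run_sema) */
        }
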
Jeremy Kerr
c8a1e9393a [POWERPC] spufs: provide siginfo for SPE faults
This change populates a siginfo struct for SPE application exceptions
(ie, invalid DMAs and illegal instructions).

Tested on an IBM Cell Blade.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
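
A hedged sketch of filling in a siginfo for a failed SPE access; which signal number and si_code each fault type maps to is an assumption of this sketch, not taken from the patch:

        #include <linux/sched.h>
        #include <linux/string.h>
        #include <asm/siginfo.h>

        static void spe_deliver_fault_sketch(unsigned long ea, int signo, int code)
        {
                siginfo_t info;

                memset(&info, 0, sizeof(info));
                info.si_signo = signo;                  /* e.g. SIGSEGV or SIGBUS */
                info.si_code  = code;                   /* e.g. SEGV_MAPERR */
                info.si_addr  = (void __user *)ea;      /* faulting effective address */
                force_sig_info(info.si_signo, &info, current);
        }
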
Arnd Bergmann
57dace2391 [POWERPC] spufs: make spu page faults not block scheduling
Until now, we have always entered the spu page fault handler
with a mutex for the spu context held. This has multiple
bad side-effects:
- it becomes impossible to suspend the context during
  page faults
- if an spu program attempts to access its own mmio
  areas through DMA, we get an immediate livelock when
  the nopage function tries to acquire the same mutex

This patch makes the page fault logic operate on a
struct spu_context instead of a struct spu, and moves it
from spu_base.c to a new file fault.c inside of spufs.

We now also need to copy the dar and dsisr contents
of the last fault into the saved context to have it
accessible in case we schedule out the context before
activating the page fault handler.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Christoph Hellwig
62c05d583e [POWERPC] spu_base: move spu_init_channels out of spu_mutex
There is no reason to execute spu_init_channels under spu_mutex:
after the spu has been taken off the freelist, it's ours.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Luke Browning
4e0f4ed0df [POWERPC] spu sched: make addition to stop_wq and runque atomic vs wakeup
Addition to stop_wq needs to happen before adding to the runqueue and
under the same lock, so that we don't have a race window for a lost
wakeup in the spu scheduler.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:55 +02:00
Christoph Hellwig
7ec18ab923 [POWERPC] spufs: streamline locking for isolated spu setup
For quite a while now, spu state has been protected by a simple mutex instead
of the old rw_semaphore, and this means we can simplify the locking
around spu_setup_isolated a lot.

Instead of doing an spu_release before entering spu_setup_isolated and
then calling the complicated spu_acquire_exclusive, we can now simply
enter the function locked and in a guaranteed runnable state, so that the
only bit of spu_acquire_exclusive that's left is the call to
spu_unmap_mappings.

Similarly, there's no more need to unlock and reacquire the state_mutex
when spu_setup_isolated is done; we can always return with the
lock held and only drop it in spu_run_init in the failure case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
Christoph Hellwig
a475c2f435 [POWERPC] spufs: remove woken threads from the runqueue early
A single context should only be woken once, and we should not have
more wakeups for a given priority than the number of contexts on
that runqueue position.

Also add some asserts to trap future problems in this area more
easily.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
Arnd Bergmann
390c534304 [POWERPC] spufs: add memory barriers after set_bit
set_bit does not guarantee ordering on powerpc, so using it
for communication between threads requires explicit
mb() calls.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:54 +02:00
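
The pattern being added is the classic flag-then-wake ordering; a hedged, standalone illustration (the flag bit and wait queue are made up for this sketch):

        #include <linux/bitops.h>
        #include <linux/wait.h>

        static void signal_stop(unsigned long *flags, wait_queue_head_t *wq)
        {
                set_bit(0, flags);      /* illustrative bit number */
                /* set_bit() does not imply a memory barrier on powerpc, so
                 * make the flag update visible before the waker/waiter on
                 * the other side checks it */
                mb();
                wake_up_all(wq);
        }
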
Christoph Hellwig
e097b51328 [POWERPC] spu sched: ensure preempted threads are put back on the runqueue, part2
To not lose a spu thread we need to make sure it always gets put back
on the runqueue: in find_victim as well as in the scheduler tick, as done
in the previous patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
b3e76cc324 [POWERPC] spu sched: ensure preempted threads are put back on the runqueue
To not lose a spu thread we need to make sure it always gets put back
on the runqueue.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
43c2bbd932 [POWERPC] spufs: clear mapping pointers after last close
Make sure the pointers to the various mappings are cleared once the last
user has stopped using them.  This avoids accessing freed memory when
tearing down the gang directory, as well as optimizing away
pte invalidations if no one uses these mappings.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:53 +02:00
Christoph Hellwig
0887309589 [POWERPC] spufs: use cancel_rearming_delayed_workqueue when stopping spu contexts
The scheduler workqueue may rearm itself and deadlock when we try to stop
it.  Put a flag in place so the work is skipped if we're tearing down
the context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:18:52 +02:00
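
A hedged sketch of the stop path, assuming the scheduler tick is a delayed work that normally re-queues itself; the flag bit, struct fields, and workqueue name are illustrative, not taken from the patch:

        #include <linux/bitops.h>
        #include <linux/workqueue.h>

        static struct workqueue_struct *sched_wq;       /* illustrative */

        struct tick_ctx {
                unsigned long flags;                    /* bit 0: tearing down */
                struct delayed_work tick_work;          /* normally rearms itself */
        };

        static void stop_tick(struct tick_ctx *ctx)
        {
                /* the tick handler checks this bit and skips re-queueing */
                set_bit(0, &ctx->flags);
                mb();
                cancel_rearming_delayed_workqueue(sched_wq, &ctx->tick_work);
        }
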
Paul Mackerras
390cbb56a7 [POWERPC] Fix detection of loader-supplied initrd on OF platforms
Commit 79c8541924 introduced code to move
the initrd if it was in a place where it would get overwritten by the
kernel image.  Unfortunately this exposed the fact that the code that
checks whether the values passed in r3 and r4 are intended to indicate
the start address and size of an initrd image was not as thorough as the
kernel's checks.  The symptom is that on OF-based platforms, the
bootwrapper can cause an exception which causes the system to drop back
into OF.

Previously it didn't matter so much if the code incorrectly thought that
there was an initrd, since the values for start and size were just passed
through to the kernel.  Now the bootwrapper needs to apply the same checks
as the kernel since it is now using the initrd data itself (in the process
of copying it if necessary).  This adds the code to do that.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 10:46:21 +10:00
Kumar Gala
98750261fb [POWERPC] Miscellaneous arch/powerpc Kconfig and platform/Kconfig cleanup
* Cleaned up some whitespace in arch/powerpc/Kconfig
* Moved sourcing of platforms/embedded6xx/Kconfig into platform/Kconfig
* Moved sourcing of platforms/4xx/Kconfig into platform/Kconfig and disabled it
* Removed EMBEDDEDBOOT since it's not supported in arch/powerpc
* Removed PC_KEYBOARD since it's not used anywhere
* Moved a few CONFIG options around in platform/Kconfig
* Moved interrupt controllers into platform/Kconfig out of bus section

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 18:01:34 -05:00
Kumar Gala
db9478086d [POWERPC] Convert 85xx platform to unified platform Kconfig
Moved 85xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 17:44:07 -05:00
Kumar Gala
c8a55f3dda [POWERPC] Convert 8xx platform to unified platform Kconfig
Moved 8xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig.  Also, cleaned up whitespace issues in 8xx
Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 17:35:54 -05:00
Kumar Gala
d6071f881f [POWERPC] Convert 82xx platform to unified platform Kconfig
Moved 82xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig.  Also, cleaned up whitespace issues in 82xx
Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 16:53:32 -05:00
Kumar Gala
b5a4834692 [POWERPC] Convert 83xx platform to unified platform Kconfig
Moved 83xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 16:41:12 -05:00
Kumar Gala
4a89f7fa7a [POWERPC] Convert 86xx platform to unified platform Kconfig
Moved 86xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 15:41:26 -05:00
Kumar Gala
164a460d46 [POWERPC] Ensure platform CONFIG options have correct dependencies
We currently support TAU and CPU frequency scaling only on discrete
(non-SOC) processors.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-04-12 15:35:50 -05:00
Joachim Fenkes
0727702a3a [POWERPC] ibmebus: change probe/remove interface from using loc-code to DT path
In some cases, multiple OFDT nodes might share the same location code, so
the location code is not a unique identifier for an OFDT node. Changed the
ibmebus probe/remove interface to use the DT path of the device node instead
of the location code.

The DT path must be written into probe/remove exactly as it would appear in
the "devspec" attribute of the ebus device: relative to the DT root, with a
leading slash and without a trailing slash. One trailing newline will not
hurt; multiple newlines will (like perl's chomp()).

Example:

 Add a device "/proc/device-tree/foo@12345678" to ibmebus like this:
    echo /foo@12345678 > /sys/bus/ibmebus/probe

 Remove the device like this:
    echo /foo@12345678 > /sys/bus/ibmebus/remove

Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 06:12:42 +10:00
Benjamin Herrenschmidt
370a908db1 [POWERPC] DEBUG_PAGEALLOC for 64-bit
Here's an implementation of DEBUG_PAGEALLOC for 64-bit powerpc.
It applies on top of the 32-bit patch.

Unlike Anton's previous attempt, I'm not using updatepp. I'm removing
the hash entries from the bolted mapping (using a map in RAM of all the
slots). Expensive, but it doesn't really matter, does it? :-)

Hot-added memory doesn't benefit from this unless it's added at an
address that is below end_of_DRAM() as calculated at boot time.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

 arch/powerpc/Kconfig.debug      |    2
 arch/powerpc/mm/hash_utils_64.c |   84 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 82 insertions(+), 4 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 04:09:39 +10:00
Benjamin Herrenschmidt
88df6e90fa [POWERPC] DEBUG_PAGEALLOC for 32-bit
Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables BAT
mapping and has only been tested with hash-table-based processors, though it
shouldn't be too hard to adapt it to others.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

 arch/powerpc/Kconfig.debug       |    9 ++++++
 arch/powerpc/mm/init_32.c        |    4 +++
 arch/powerpc/mm/pgtable_32.c     |   52 +++++++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/ppc_mmu_32.c     |    4 ++-
 include/asm-powerpc/cacheflush.h |    6 ++++
 5 files changed, 74 insertions(+), 1 deletion(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 04:09:39 +10:00
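
The architecture entry point for DEBUG_PAGEALLOC is kernel_map_pages(); a hedged sketch of the ppc32 hook, assuming a change_page_attr()-style helper that rewrites linear-mapping PTEs and flushes the matching hash/TLB entries:

        #include <linux/mm.h>

        void kernel_map_pages(struct page *page, int numpages, int enable)
        {
                if (PageHighMem(page))
                        return;

                /* unmap freed pages so stray accesses fault immediately,
                 * and map them back when the allocator hands them out again */
                change_page_attr(page, numpages,
                                 enable ? PAGE_KERNEL : __pgprot(0));
        }
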
Benjamin Herrenschmidt
ee4f2ea486 [POWERPC] Fix 32-bit mm operations when not using BATs
On hash-table-based 32-bit powerpcs, the hash management code runs with
a big spinlock. It's thus important that it never causes itself a hash
fault. That code is generally safe (it does memory accesses in real mode,
among other things) with the exception of the actual access to the code
itself. That is, the kernel text needs to be accessible without taking
hash miss exceptions.

This is currently guaranteed by having a BAT register mapping part of the
linear mapping permanently, which includes the kernel text. But this is
not true if using the "nobats" kernel command line option (which can be
useful for debugging) and will not be true when using DEBUG_PAGEALLOC
implemented in a subsequent patch.

This patch fixes this by pre-faulting in the hash table pages that hit
the kernel text, and making sure we never evict such a page under hash
pressure.

Signed-off-by: Benjamin Herrenchmidt <benh@kernel.crashing.org>

 arch/powerpc/mm/hash_low_32.S |   22 ++++++++++++++++++++--
 arch/powerpc/mm/mem.c         |    3 ---
 arch/powerpc/mm/mmu_decl.h    |    4 ++++
 arch/powerpc/mm/pgtable_32.c  |   11 +++++++----
 4 files changed, 31 insertions(+), 9 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 04:09:39 +10:00
Benjamin Herrenschmidt
3be4e6990e [POWERPC] Cleanup 32-bit map_page
The 32-bit map_page() function is used internally by the mm code
for early mmu mappings and for ioremap. It should never be called
for an address that already has a valid PTE or hash entry, so we
add a BUG_ON for that and remove the useless flush_HPTE call.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

 arch/powerpc/mm/pgtable_32.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 04:09:39 +10:00
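
A hedged sketch of what the 32-bit map_page() looks like after this cleanup, following the ppc32 mm code of the time; macro and helper names are assumed from that code and details may differ:

        int map_page(unsigned long va, phys_addr_t pa, int flags)
        {
                pmd_t *pd;
                pte_t *pg;
                int err = -ENOMEM;

                pd = pmd_offset(pgd_offset_k(va), va);
                pg = pte_alloc_kernel(pd, va);
                if (pg != NULL) {
                        err = 0;
                        /* the PTE should never be already set nor present
                         * in the hash table */
                        BUG_ON(pte_val(*pg) & (_PAGE_PRESENT | _PAGE_HASHPTE));
                        set_pte_at(&init_mm, va, pg,
                                   pfn_pte(pa >> PAGE_SHIFT, __pgprot(flags)));
                }
                return err;
        }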