Fix array initialization in lots of arches
The number of zones may now be reduced from 4 to 2 on many architectures. Fix
the initialization of the zones array for all architectures so that it does
not initialize a fixed number of elements.
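As a hedged illustration (the per-architecture code varies), the change
amounts to sizing the array by MAX_NR_ZONES and zero-initializing it rather
than spelling out four elements:

    /* before: silently assumes exactly four zones */
    unsigned long zones_size[] = { 0, 0, 0, 0 };

    /* after: sized by the configuration, zero-initialized */
    unsigned long zones_size[MAX_NR_ZONES] = { 0 };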
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
With sparsemem, a pfn must be checked with pfn_valid() before it is passed to
pfn_to_page().
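A minimal sketch of the required pattern:

    /* with sparsemem, holes leave some pfns without a struct page */
    if (pfn_valid(pfn)) {
            struct page *page = pfn_to_page(pfn);
            /* safe to use 'page' here */
    }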
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
With a memory model other than FLATMEM, a single node can
contain holes, so there may be many invalid pages. For
example, with two 256MB memory banks and one 256MB hole, some variables
(num_physpages, totalpages, nr_kernel_pages, nr_all_pages, etc.) will
indicate that there is 768MB on this system. This is not desired
because, for example, alloc_large_system_hash() then allocates too many
entries.
Use free_area_init_node() with a counted zholes_size[] instead of
free_area_init().
For num_physpages, use the number of RAM pages instead of max_low_pfn.
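A hedged sketch of the idea, assuming a single node with only ZONE_DMA
populated (num_ram_pages is an illustrative name for the counted RAM pages,
not an identifier from this patch):

    unsigned long zones_size[MAX_NR_ZONES] = { 0 };
    unsigned long zholes_size[MAX_NR_ZONES] = { 0 };

    zones_size[ZONE_DMA] = max_low_pfn;
    /* pages in the span that are not backed by RAM are holes */
    zholes_size[ZONE_DMA] = max_low_pfn - num_ram_pages;

    free_area_init_node(0, NODE_DATA(0), zones_size, min_low_pfn,
                        zholes_size);
    num_physpages = num_ram_pages;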
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Move the memory_present() call in arch/mips/kernel/setup.c. When using
sparsemem extreme, this function allocates from bootmem, which would always
fail since init_bootmem hasn't been called yet.
Move memory_present() after free_bootmem(). This also marks only the actual
memory ranges as present instead of the entire address space.
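A sketch of the corrected ordering (the pfn bounds are illustrative):

    /* bootmem must exist before memory_present() can allocate from it */
    init_bootmem(first_usable_pfn, max_low_pfn);
    free_bootmem(PFN_PHYS(start_pfn), size);
    /* now safe, and only real RAM gets marked present */
    memory_present(0, start_pfn, end_pfn);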
Signed-off-by: Chad Reese <creese@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Save the Config.OD bit from being clobbered by coherency_setup(). This
bit, when set, fixes various errata in the early steppings of Au1x00
SOCs. Unfortunately, the bit was write-only on the earliest of them.
In addition, restore the bit after a wakeup from sleep.
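The shape of the fix, as a hedged sketch (the real change touches the
Au1x00-specific paths): read-modify-write the CP0 Config register so that
unrelated bits such as Config.OD survive.

    unsigned int cfg = read_c0_config();

    cfg &= ~CONF_CM_CMASK;          /* change only the coherency field */
    cfg |= CONF_CM_DEFAULT;
    write_c0_config(cfg);           /* Config.OD left untouched */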
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
A proper fix would involve introducing the notion of shared caches, but at
this stage of 2.6.17 that would be too intrusive and is not needed for
current hardware; besides, I think some discussion will be needed.
So for now on the affected SMP configurations which happen to suffer from
cache aliases we make use of the fact that a single cache will be shared
by all processors. This solves the deadlock issue and will improve
performance by getting rid of the smp_call_function overhead.
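A hedged sketch of the interim approach: with one physical cache shared by
every processor, a local flush is globally visible, so the cross-CPU call can
simply be skipped.

    /* illustrative helper; the real logic lives in arch/mips/mm/c-r4k.c */
    static inline void r4k_on_each_cpu(void (*func)(void *), void *info)
    {
            preempt_disable();
            func(info);             /* shared cache: one flush serves all */
            preempt_enable();
    }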
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Nothing exciting; Linux just didn't know about it yet, so this is mostly
adding a value to a case statement.
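A hypothetical sketch of the shape of such a change (the PRID and CPU names
below are placeholders, not the values from this patch):

    switch (c->processor_id & 0xff00) {
    /* ... existing entries ... */
    case PRID_IMP_NEWCPU:           /* hypothetical new ID */
            c->cputype = CPU_NEWCPU;
            break;
    }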
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix the cache index value in tx49_blast_icache32_page_indexed().
This damage was introduced by commit de62893bc0.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Just about every architecture defines some macros to do operations on pfns.
They're all virtually identical. This patch consolidates all of them.
One minor glitch is that at least i386 uses them in a very skeletal header
file. To keep away from #include dependency hell, I stuck the new
definitions in a new, isolated header.
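The definitions being unified are of this flavor (a sketch of the new
header's contents, not necessarily verbatim):

    #define PFN_ALIGN(x)    (((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK)
    #define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)
    #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)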
Of all of the implementations, sh64 is the only one that varied a bit.
It used some masks to ensure that any sign extension got stripped away before
the arithmetic was done. This has been posted to the sh64 maintainers and
the development list.
Compiles on x86, x86_64, ia64 and ppc64.
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
set_page_count usage outside mm/ is limited to setting the refcount to 1.
Remove set_page_count from outside mm/, and replace those users with
init_page_count() and set_page_refcounted().
This allows more debug checking, and tighter control on how code is allowed
to play around with page->_count.
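The conversion is mechanical; a sketch:

    /* before, outside mm/: */
    set_page_count(page, 1);

    /* after: */
    init_page_count(page);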
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Have an explicit mm call to split higher-order pages into individual pages.
This should help avoid bugs and make the code's intention more explicit.
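A hedged usage sketch:

    /* split one order-2 allocation into four independently freeable pages */
    struct page *page = alloc_pages(GFP_KERNEL, 2);

    if (page) {
            split_page(page, 2);
            __free_page(page + 3);  /* constituent pages can now be freed */
    }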
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The TX49XX has the prefetch instruction, but it supports only Pref_Load
(hint 0). The changes in this patch, except for the Kconfig change, do not
actually have any effect; I added them to prevent misuse of unsupported
hints.
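For illustration, the only safe form on this CPU uses hint 0 (the helper
name here is hypothetical):

    /* Pref_Load (hint 0) is the sole hint the TX49XX implements */
    static inline void prefetch_load(const void *addr)
    {
            __asm__ __volatile__("pref 0, 0(%0)" : : "r" (addr));
    }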
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Basically identical to c-r4k.c, so maintaining one is really enough.
Signed-off-by: Thiemo Seufer <ths@networkno.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This option is no longer usable with supported compilers. It will be
replaced by usage of -msym32 in a separate patch.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If dcache_size != icache_size, dcache_size != scache_size, or the caches are
set-associative, the icache/scache is not flushed properly. Make
blast_?cache_page_indexed() mask its index value correctly. Also, use the
physical address for the physically indexed pcache/scache.
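A simplified sketch of the corrected index computation (the real code is
generated by macros in r4kcache.h):

    /* mask with the per-way size, not the whole cache size, so that
       set-associative caches are indexed correctly */
    unsigned long indexmask = current_cpu_data.dcache.waysize - 1;
    unsigned long addr = INDEX_BASE + (page & indexmask);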
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When a CPU has no scache, the scache flushing functions currently aren't
initialized, and the NULL pointer is eventually called as a function.
Initialize the scache flushing functions as no-ops when there is no scache.
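A hedged sketch of the fix:

    /* give the scache hooks a harmless target instead of NULL */
    static void cache_noop(void) {}

    if (scache_size == 0) {
            r4k_blast_scache = (void *)cache_noop;
            r4k_blast_scache_page = (void *)cache_noop;
            r4k_blast_scache_page_indexed = (void *)cache_noop;
    }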
Initial patch by me and most of the debugging done by Martin Michlmayr.
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add blast_xxx_range(), protected_blast_xxx_range(), etc. for common use.
They are built by __BUILD_BLAST_CACHE_RANGE().
Use the protected_cache_op() macro for the various protected_ routines.
The generated code should be logically the same.
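A simplified sketch of what the builder macro expands to for the dcache:

    /* flush every cache line overlapping [start, end) */
    static inline void blast_dcache_range(unsigned long start,
                                          unsigned long end)
    {
            unsigned long lsize = cpu_dcache_line_size();
            unsigned long addr = start & ~(lsize - 1);
            unsigned long aend = (end - 1) & ~(lsize - 1);

            while (1) {
                    cache_op(Hit_Writeback_Inv_D, addr);
                    if (addr == aend)
                            break;
                    addr += lsize;
            }
    }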
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
I'm pretty sure that the CKSEG0 bits are wrong, but I did need to
cover that region - because the SB-1 kernel links at 0xffffffff80100000
or so, disassembly and printing static variables don't work unless the
debugger can read that region.
Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Page count should be initialized to 1 on each of the MIPS empty zero pages,
to avoid a bad_page warning whenever one of them is freed from all mappings.
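A hedged sketch, assuming the zero pages form an order-'order' block starting
at empty_zero_page:

    struct page *page = virt_to_page((void *)empty_zero_page);
    struct page *end = page + (1UL << order);

    /* give every empty zero page a refcount of one */
    while (page < end)
            init_page_count(page++);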
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
First step in pushing down the page_table_lock. init_mm.page_table_lock has
been used throughout the architectures (usually for ioremap): not to serialize
kernel address space allocation (that's usually vmlist_lock), but because
pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
and drop it when allocating a new one, to check lest a racing task already
did. Similarly no page_table_lock in vmalloc's map_vm_area.
Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
user mms, which are converted only by a later patch, for now they have to lock
differently according to whether or not it's init_mm.
If sources get muddled, there's a danger that an arch source taking
init_mm.page_table_lock will be mixed with common source also taking it (or
neither take it). So break the rules and make another change, which should
break the build for such a mismatch: remove the redundant mm arg from
pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
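The interface change itself, as a sketch:

    /* before */
    pte = pte_alloc_kernel(&init_mm, pmd, addr);

    /* after: no mm argument; the lock is taken internally */
    pte = pte_alloc_kernel(pmd, addr);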
Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
took page_table_lock for no good reason.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>