Commit Graph

349 Commits

Author SHA1 Message Date
Paul Mundt
deaef20e97 sh: Rework sh4_flush_cache_page() for coherent kmap mapping.
This builds on top of the MIPS r4k code that does roughly the same thing.
This permits the use of kmap_coherent() for mapped pages with dirty
dcache lines and falls back on kmap_atomic() otherwise.

This also fixes up a problem with the alias check and defers to
shm_align_mask directly.
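
A rough sketch of the mapping choice this describes; page_has_dirty_dcache_lines()
is a hypothetical helper standing in for whatever test the real code uses, and the
actual logic lives inside sh4_flush_cache_page() itself:

    /* Pick an alias-safe kernel mapping only when it is actually needed. */
    static void *map_page_for_flush(struct page *page, unsigned long address)
    {
            if (page_mapcount(page) && page_has_dirty_dcache_lines(page))
                    return kmap_coherent(page, address);    /* colour-matched mapping */

            return kmap_atomic(page);                       /* no dirty aliases to purge */
    }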

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 16:06:39 +09:00
Paul Mundt
bd6df57481 sh: Kill off segment-based d-cache flushing on SH-4.
This kills off the unrolled segment-based flushers on SH-4 and switches
over to a generic unrolled approach derived from the write-through segment
flusher.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:22:15 +09:00
Paul Mundt
31c9efde78 sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().
PHYSADDR() runs in to issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped, as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.
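
In other words (a one-line sketch, not the commit's exact code):

    /* Works in both legacy 29-bit mode and 32-bit mode, unlike PHYSADDR(),
     * which assumed the P1/P2 segments were mapped. */
    unsigned long phys = page_to_phys(page);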

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:10:28 +09:00
Paul Mundt
654d364e26 sh: sh4_flush_cache_mm() optimizations.
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

Careful profiling has also shown that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.
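
Roughly the shape of the generic check described above (a sketch; the function
and field names follow arch/sh conventions but are assumptions here):

    void flush_cache_mm(struct mm_struct *mm)
    {
            /* No d-cache aliases: nothing to evict, so skip the cacheop IPIs. */
            if (boot_cpu_data.dcache.n_aliases == 0)
                    return;

            cacheop_on_each_cpu(local_flush_cache_mm, mm, 1);
    }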

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:04:06 +09:00
Paul Mundt
682f88ab74 sh: Cleanup whitespace damage in sh4_flush_icache_range().
There was quite a lot of tab->space damage done here from a former patch;
clean it up once and for all.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 13:19:46 +09:00
Paul Mundt
6e4154d4c2 sh: Use more aggressive dcache purging in kmap teardown.
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This requires some more optimal
handling, but is a safe fallback until all of the corner cases have been
handled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-08 16:21:00 +09:00
Paul Mundt
0906a3ad33 sh: Fix up and optimize the kmap_coherent() interface.
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.

One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
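
A sketch of the deferred teardown described above (names follow arch/sh, but the
details are assumed):

    /* Tear down a coherent kmap: both the purge of the aliased lines and the
     * TLB entry flush are deferred to unmap time. */
    static void teardown_coherent_kmap(unsigned long vaddr)
    {
            __flush_purge_region((void *)vaddr, PAGE_SIZE);  /* write back + invalidate */
            local_flush_tlb_one(get_asid(), vaddr);          /* only now drop the TLB entry */
    }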

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-03 17:21:10 +09:00
Paul Mundt
6f3795788b sh: Fix up UP deadlock with SMP-aware cache ops.
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.
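
The variant in question amounts to the following (a sketch; the name
cacheop_on_each_cpu() is an assumption):

    /* Like on_each_cpu(), but only preemption is disabled; the interrupt state
     * is left to the called function, avoiding the UP deadlock when a cache op
     * is invoked from IRQ context. */
    static void cacheop_on_each_cpu(void (*func)(void *info), void *info, int wait)
    {
            preempt_disable();
            smp_call_function(func, info, wait);    /* remote CPUs */
            func(info);                             /* local CPU, IRQs untouched */
            preempt_enable();
    }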

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:21:36 +09:00
Paul Mundt
983f4c514c Revert "sh: Kill off now redundant local irq disabling."
This reverts commit 64a6d72213.

Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the MIPS r4k caches, which this code was based on.
This fixes up a deadlock that showed up in some IRQ context cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:12:55 +09:00
Paul Mundt
ac6a0cf671 Merge branch 'master' into sh/smp
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-09-01 13:54:14 +09:00
Matt Fleming
ce3f7cb96e sh: Fix dcache flushing for N-way write-through caches.
This adopts the special-cased 2-way write-through dcache flusher for
N-ways and moves it into the generic path. Assignment is done at runtime
via the check for the CCR_CACHE_WT bit in the same path as the per-way
writeback flushers.
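
A sketch of the runtime assignment described above (the function-pointer names
are assumptions; CCR_CACHE_WT is the write-through bit in the SH-4 cache control
register):

    extern void flush_dcache_segment_writethrough(unsigned long start, unsigned long extent);
    extern void flush_dcache_segment_writeback(unsigned long start, unsigned long extent);

    static void (*flush_dcache_segment)(unsigned long start, unsigned long extent);

    static void __init select_dcache_flusher(unsigned long ccr)
    {
            if (ccr & CCR_CACHE_WT)
                    flush_dcache_segment = flush_dcache_segment_writethrough;
            else
                    flush_dcache_segment = flush_dcache_segment_writeback;
    }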

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 13:32:48 +09:00
Paul Mundt
e76a0136a3 sh: Fix up sh4_flush_dcache_page() build on UP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-27 11:31:16 +09:00
Stuart Menefy
ffad9d7a54 sh: Fix problems with cache flushing when cache is in write-through mode
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 18:39:39 +09:00
Stuart Menefy
a1fce73235 sh: Fix overzealous checking in __ioremap()
Allow peripherals before the start of RAM to be remapped.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 18:29:25 +09:00
Stuart Menefy
a5cf9e2444 sh: Improve comments in SH4 cache flushing code
This is a pure documentation change, attempting to explain why the cache
flushing code for the SH4 is implemented the way it is.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 17:36:24 +09:00
Paul Mundt
64a6d72213 sh: Kill off now redundant local irq disabling.
on_each_cpu() takes care of IRQ and preempt handling, so the localized
handling in each of the called functions can be killed off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 18:21:07 +09:00
Yoshihiro Shimoda
c01f0f1a4a sh: Add initial support for SH7757 CPU subtype
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:25:47 +09:00
Paul Mundt
f26b2a562b sh: Make cache flushers SMP-aware.
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:23:14 +09:00
Paul Mundt
c139a59587 sh: Fix up cache-sh4 build on SMP.
mapping is unused on the SMP build, triggering a build error. Move it under
the ifdef.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-20 15:24:41 +09:00
Michael Trimarchi
6503fe4a65 sh: Better description of SH-4 PTEA register update.
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-20 13:27:44 +09:00
Paul Mundt
e055d41ff5 sh: Build fix for disabled caches.
This fixes up the build when caches are disabled, by linking in all of
the cache routines directly. This paves the way for splitting out
separate I and D cache disabling, similar to what sh64 had, and which
we want for SH-X3 anyways.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-19 17:57:01 +09:00
Paul Mundt
1b3edd9745 sh: Merge the _32/_64 variants of arch/sh/mm/Makefile.
Now that there is sufficient shared infrastructure, merge the Makefiles.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 03:49:21 +09:00
Paul Mundt
2b4315185a sh: Wire up sh5_cache_init().
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 02:16:44 +09:00
Paul Mundt
8c41cdcaff sh64: Kill off dead i/d-cache disabled bits.
These will be handled through the shared cache interface instead, and
they are presently undefined anyway.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 02:15:50 +09:00
Paul Mundt
94ecd224c9 sh: Fix up the SH-5 build with caches enabled.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 01:50:17 +09:00
Paul Mundt
65305ae816 sh: Convert cache disabled SH-5 over to new cache interface.
The caches-enabled case needs more work, but is presently broken
regardless, so this can be done incrementally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 00:53:56 +09:00
Paul Mundt
0d051d90bb sh: Convert SH7705 extended mode to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:53:39 +09:00
Paul Mundt
79f1c9da5e sh: Convert SH-3 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:42:55 +09:00
Paul Mundt
a58e1a2ab4 sh: Convert SH-2A to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:38:29 +09:00
Paul Mundt
109b44a82a sh: Convert SH-2 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:35:15 +09:00
Paul Mundt
37443ef3f0 sh: Migrate SH-4 cacheflush ops to function pointers.
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
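
The scheme looks roughly like this (a sketch; cache_noop() and the exact set of
pointers are illustrative):

    /* Every op defaults to a no-op; a CPU family's init routine overrides only
     * the hooks it actually cares about. */
    static void cache_noop(void *args) { }

    void (*local_flush_cache_all)(void *args) = cache_noop;
    void (*local_flush_dcache_page)(void *args) = cache_noop;

    extern void sh4_flush_cache_all(void *args);
    extern void sh4_flush_dcache_page(void *args);

    void __init sh4_cache_init(void)
    {
            local_flush_cache_all = sh4_flush_cache_all;
            local_flush_dcache_page = sh4_flush_dcache_page;
    }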

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:29:49 +09:00
Paul Mundt
916e97834e sh: Kill off unused flush_icache_user_range().
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:38:05 +09:00
Paul Mundt
0b445dcaf3 sh: Don't export flush_dcache_all().
flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:22:50 +09:00
Paul Mundt
27d59ec170 sh: Move alias computation to shared cache init.
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().

This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
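
The probe itself amounts to something like this (a sketch; field names follow the
arch/sh cache_info layout, but treat the details as illustrative):

    /* Index bits above PAGE_SHIFT are what give rise to d-cache aliases. */
    static void compute_alias(struct cache_info *c)
    {
            c->alias_mask = ((c->sets - 1) << c->entry_shift) & ~(PAGE_SIZE - 1);
            c->n_aliases = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }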

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:11:16 +09:00
Paul Mundt
ecba106058 sh: Centralize the CPU cache initialization routines.
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:05:42 +09:00
Paul Mundt
aae4d1428c sh: consolidate nommu stubs in arch/sh/mm/nommu.c.
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:57:57 +09:00
Paul Mundt
dde5e3ffb7 sh: rework nommu for generic cache.c use.
This does a bit of reorganizing to allow nommu to use the new and
generic cache.c; no functional changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:49:32 +09:00
Paul Mundt
cbbe2f68f6 sh: rename pg-mmu.c -> cache.c, enable generically.
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:30:39 +09:00
Paul Mundt
2739742c24 sh: Provide the kmap_coherent() interface generically.
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:19:19 +09:00
Paul Mundt
8edcfcbbd1 sh: Bail from kmap_coherent_init() if we have no dcache aliases.
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:03:59 +09:00
Paul Mundt
d2dcd9101b Merge branch 'master' into sh/cachetlb 2009-08-15 05:58:45 +09:00
Paul Mundt
8010fbe7a6 sh: TLB fast path optimizations for load/store exceptions.
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 03:06:41 +09:00
Paul Mundt
112e58471d sh: TLB protection violation exception optimizations.
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.

Based on an earlier patch by SUGIOKA Toshinobu.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:49:40 +09:00
Paul Mundt
e7b8b7f16e sh: NO_CONTEXT ASID optimizations for SH-4 cache flush.
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
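
A sketch of the early-out (cpu_context() is assumed here to be the MIPS-style
ASID accessor used by arch/sh):

    static void sh4_flush_cache_mm(void *arg)
    {
            struct mm_struct *mm = arg;

            /* The mm has never run on this CPU, so nothing can be cached
             * under its ASID: there is no flushing to do. */
            if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
                    return;

            flush_dcache_all();
    }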

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:21:16 +09:00
Paul Mundt
795687265d sh64: Wire up the shared __flush_xxx_region() flushers.
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:00:54 +09:00
Paul Mundt
43bc61d86f sh: Add register alignment helpers for shared flushers.
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is taken care of by
the SH-5 code already today, and is otherwise unused on the SH-4 code.
This combines the two and allows us to kill off the SH-5 implementation.
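
The helpers boil down to something like this (a sketch; reg_size_t and
register_align() follow the commit's intent, and neff_sign_extend() is the
SH-5 extension helper, assumed here):

    #ifdef CONFIG_SUPERH64
    typedef unsigned long long reg_size_t;
    /* Pointer is narrower than the register: extend it first. */
    # define register_align(val)    neff_sign_extend((unsigned long)(val))
    #else
    typedef unsigned long reg_size_t;
    /* Pointer and register widths match: nothing to do. */
    # define register_align(val)    ((unsigned long)(val))
    #endif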

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 01:57:36 +09:00
Marcin Slusarz
922b0dc59b sh: use printk_once
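
For reference, a hypothetical use of the stock helper this switches to; it prints
at most once without an open-coded static flag:

    printk_once(KERN_NOTICE "sh: cache flushers initialized\n");
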
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-13 11:48:08 +09:00
Paul Mundt
0837f52463 sh: Partially unroll the SH-4 __flush_xxx_region() flushers.
This does a bit of unrolling for the SH-4 region flushers.

Based on an earlier patch by SUGIOKA Toshinobu.
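
The unrolling amounts to something like this (a sketch; the in-tree version
differs in detail, and __ocbwb() is assumed to wrap the SH-4 "ocbwb"
write-back instruction):

    void __flush_wback_region(void *start, int size)
    {
            unsigned long v = (unsigned long)start & ~(L1_CACHE_BYTES - 1);
            unsigned long end = ((unsigned long)start + size + L1_CACHE_BYTES - 1)
                                & ~(L1_CACHE_BYTES - 1);
            unsigned long cnt = (end - v) / L1_CACHE_BYTES;

            /* Write back four lines per iteration... */
            while (cnt >= 4) {
                    __ocbwb(v); v += L1_CACHE_BYTES;
                    __ocbwb(v); v += L1_CACHE_BYTES;
                    __ocbwb(v); v += L1_CACHE_BYTES;
                    __ocbwb(v); v += L1_CACHE_BYTES;
                    cnt -= 4;
            }

            /* ...then mop up whatever is left. */
            while (cnt--) {
                    __ocbwb(v);
                    v += L1_CACHE_BYTES;
            }
    }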

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 18:09:54 +09:00
Paul Mundt
8174252752 sh: Split out SH-4 __flush_xxx_region() ops.
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 18:06:01 +09:00
Paul Mundt
c7914834ef sh: Tidy up NEFF-based sign extension for SH-5.
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.
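
The extension itself is simple enough to show (a sketch; NEFF is the number of
implemented effective-address bits on SH-5, and the helper name is an
assumption):

    static inline unsigned long long neff_sign_extend(unsigned long val)
    {
            unsigned long long extended = val;

            /* If bit NEFF-1 is set, fill in the bits above it. */
            return (extended & (1ULL << (NEFF - 1))) ?
                   (extended | ~((1ULL << NEFF) - 1)) : extended;
    }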

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 17:14:39 +09:00