mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-21 11:44:01 +08:00
Commit Graph

11 Commits

Author SHA1 Message Date
David Howells
e839ca5287 Disintegrate asm/system.h for SH
Disintegrate asm/system.h for SH.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-sh@vger.kernel.org
2012-03-28 18:30:03 +01:00
Paul Mundt
be97d758e5 sh: Fix up the SH-3 build for recent TLB changes.
While the MMUCR.URB and ITLB/UTLB differentiation works fine for all SH-4
and later TLBs, these features are absent on SH-3. This splits out
local_flush_tlb_all() into SH-4 and PTEAEX copies while restoring the
old SH-3 one, subsequently fixing up the build.

This will probably want some further reordering and tidying in the
future, but that's out of scope at present.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-02 16:13:27 +09:00
Matt Fleming
a9eb4f6d1a sh: Flush ITLB too in PTEAEX's flush_tlb_page()
flush_tlb_page() can be used to flush TLB entries that map executable
pages. Therefore, we need to ensure that the ITLB is also flushed in
local_flush_tlb_page().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23 13:36:15 +09:00
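A minimal sketch of the flush described above, written as freestanding C; the address-array bases, the association bit, and the helper names below are assumptions for illustration rather than the actual arch/sh values:

    #include <stdint.h>

    /* Assumed memory-mapped address-array bases (illustrative values only). */
    #define SKETCH_UTLB_ADDRESS_ARRAY  0xF6000000u
    #define SKETCH_ITLB_ADDRESS_ARRAY  0xF7000000u
    #define SKETCH_ASSOC_BIT           0x00000080u  /* associative-write select */

    static inline void mmio_write32(uintptr_t addr, uint32_t val)
    {
        *(volatile uint32_t *)addr = val;           /* stand-in for __raw_writel() */
    }

    /* Drop one page's translation from both TLBs via associative writes. */
    static void sketch_local_flush_tlb_page(unsigned long page, unsigned long asid)
    {
        uint32_t data = (uint32_t)((page & ~0xFFFul) | asid);

        mmio_write32(SKETCH_UTLB_ADDRESS_ARRAY | SKETCH_ASSOC_BIT, data);
        /* The fix: executable mappings can also sit in the ITLB, so flush it too. */
        mmio_write32(SKETCH_ITLB_ADDRESS_ARRAY | SKETCH_ASSOC_BIT, data);
    }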
Paul Mundt
2dc2f8e0c4 sh: Kill off the special uncached section and fixmap.
Now that cached_to_uncached works as advertised in 32-bit mode and we're
never going to be able to map < 16MB anyway, there's no need for the
special uncached section. Kill it off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-21 16:05:25 +09:00
Paul Mundt
bb29c677b3 sh: Split out MMUCR.URB based entry wiring in to shared helper.
Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the
helpers out into a generic tlb-urb that can be used by any parts
equipped with MMUCR.URB.

At the same time, move the SH-5 code out-of-line, as we require a single
global state for DTLB entry wiring.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 15:20:35 +09:00
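A rough sketch of what MMUCR.URB based wiring amounts to, assuming a hypothetical register address and field layout; in this scheme the URB field marks the index at which hardware replacement wraps, so entries at or above it are never evicted:

    #include <stdint.h>

    #define SKETCH_MMUCR            ((volatile uint32_t *)0xFF000010u) /* assumed */
    #define SKETCH_MMUCR_URB_MASK   0x00FC0000u                        /* assumed */
    #define SKETCH_MMUCR_URB_SHIFT  18

    /* Reserve one more TLB entry for a wired mapping by lowering the boundary. */
    static void sketch_tlb_wire_entry(void)
    {
        uint32_t mmucr = *SKETCH_MMUCR;
        uint32_t urb   = (mmucr & SKETCH_MMUCR_URB_MASK) >> SKETCH_MMUCR_URB_SHIFT;

        if (urb == 0)
            return;                     /* nothing left to wire */

        urb--;                          /* entries at index >= URB are not replaced */
        mmucr &= ~SKETCH_MMUCR_URB_MASK;
        mmucr |= urb << SKETCH_MMUCR_URB_SHIFT;
        *SKETCH_MMUCR = mmucr;

        /* The caller would now load the wired translation (e.g. via ldtlb). */
    }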
Matt Fleming
8eda551420 sh: New extended page flag to wire/unwire TLB entries
Provide a new extended page flag, _PAGE_WIRED, and an SH4 implementation
for wiring TLB entries and use it in the fixmap code path so that we can
wire the fixmap TLB entry.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:28:57 +00:00
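A sketch of how such a flag might be consumed on the fixmap path; the flag value, structure, and function names are hypothetical stand-ins, not the real arch/sh interfaces:

    /* Assumed software PTE bit reserved for wired mappings (illustrative). */
    #define SKETCH_PAGE_WIRED  (1ul << 10)

    typedef struct { unsigned long val; } sketch_pte_t;

    static void sketch_set_pte(unsigned long addr, sketch_pte_t pte)
    {
        /* write the page-table entry covering addr */
        (void)addr; (void)pte;
    }

    static void sketch_tlb_wire_entry(unsigned long addr, sketch_pte_t pte)
    {
        /* pin a TLB entry for addr, e.g. via an MMUCR.URB style boundary */
        (void)addr; (void)pte;
    }

    /* Fixmap slots carrying the wired bit get a permanent TLB entry up front. */
    static void sketch_set_fixmap(unsigned long addr, sketch_pte_t pte)
    {
        sketch_set_pte(addr, pte);
        if (pte.val & SKETCH_PAGE_WIRED)
            sketch_tlb_wire_entry(addr, pte);
    }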
Paul Mundt
3ed6e12939 sh: Handle a NULL vma in __update_tlb() for the fast-path.
The TLB miss fast-path presently calls into update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-29 22:06:58 +09:00
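A freestanding sketch of the guard described above; the structures are simplified stand-ins for the kernel types, and the function names are illustrative:

    #include <stddef.h>

    /* Simplified stand-ins for the kernel structures involved. */
    struct sketch_mm   { int id; };
    struct sketch_vma  { struct sketch_mm *vm_mm; };
    struct sketch_task { struct sketch_mm *active_mm; };

    static struct sketch_task *sketch_current;   /* stand-in for "current" */

    static void sketch_update_tlb(struct sketch_vma *vma, unsigned long address,
                                  unsigned long pte)
    {
        /*
         * The debugger-faulting-for-debuggee check only makes sense when a
         * real vma is supplied; the TLB-miss fast path passes NULL, so the
         * dereference has to be guarded.
         */
        if (vma && sketch_current->active_mm != vma->vm_mm)
            return;

        /* ...load the translation into the TLB... */
        (void)address;
        (void)pte;
    }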
Paul Mundt
9cef749269 sh: update_mmu_cache() consolidation.
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap into. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-29 00:12:17 +09:00
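The shape of the split, sketched with simplified names; the real arch/sh signatures and call sites may differ:

    struct sketch_vma;

    /* Generic cache maintenance shared by every CPU type. */
    static void sketch_update_cache(struct sketch_vma *vma, unsigned long addr,
                                    unsigned long pte)
    {
        (void)vma; (void)addr; (void)pte;
    }

    /* CPU-specific TLB preload, kept separate per TLB flavour. */
    static void sketch_update_tlb(struct sketch_vma *vma, unsigned long addr,
                                  unsigned long pte)
    {
        (void)vma; (void)addr; (void)pte;
    }

    /* update_mmu_cache() becomes a thin wrapper around the two halves. */
    static void sketch_update_mmu_cache(struct sketch_vma *vma, unsigned long addr,
                                        unsigned long pte)
    {
        sketch_update_cache(vma, addr, pte);
        sketch_update_tlb(vma, addr, pte);
    }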
Paul Mundt
2277ab4a1d sh: Migrate from PG_mapped to PG_dcache_dirty.
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.

SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-22 19:20:49 +09:00
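A sketch of the "defer the flush, test one bit at update_mmu_cache() time" pattern; the flag value and helpers are illustrative stand-ins:

    #include <stdbool.h>

    #define SKETCH_PG_DCACHE_DIRTY  (1ul << 0)   /* assumed page-flag bit */

    struct sketch_page { unsigned long flags; };

    static bool sketch_test_and_clear(unsigned long *flags, unsigned long bit)
    {
        bool was_set = (*flags & bit) != 0;
        *flags &= ~bit;
        return was_set;
    }

    static void sketch_flush_dcache(void *kaddr, unsigned long len)
    {
        /* write back / invalidate the dcache lines covering kaddr..kaddr+len */
        (void)kaddr; (void)len;
    }

    /* Called from update_mmu_cache(): flush only if a deferred flush is pending. */
    static void sketch_update_cache(struct sketch_page *page, void *kaddr,
                                    unsigned long page_size)
    {
        if (sketch_test_and_clear(&page->flags, SKETCH_PG_DCACHE_DIRTY))
            sketch_flush_dcache(kaddr, page_size);
    }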
Paul Mundt
c54a43e90b sh: tlb-pteaex: Kill off legacy PTEA updates.
While harmless, PTEA has different semantics on these parts, and is only
used in extended TLB mode. Kill off the legacy support.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-17 17:58:33 +09:00
Paul Mundt
8263a67e16 sh: Support for extended ASIDs on PTEAEX-capable SH-X3 cores.
This adds support for extended ASIDs (up to 16 bits) on newer SH-X3 cores
that implement the PTEAEX register and its associated functionality. Presently
this is only the 65nm SH7786 (the 90nm part supports only legacy 8-bit ASIDs).

The main change is in how the PTE is written out when loading the entry
in to the TLB, as well as in how the TLB entry is selectively flushed.

While SH-X2 extended mode splits out the memory-mapped U and I-TLB data
arrays for extra bits, extended ASID mode splits out the address arrays.
While we don't use the memory-mapped data array access, the address
array accesses are necessary for selective TLB flushes, so these are
newly implemented here and replace the generic SH-4 implementation.

With this, TLB flushes in switch_mm() are almost non-existent on newer
parts.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-17 17:49:49 +09:00
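A sketch of what loading an entry looks like when the ASID lives in its own register; the register addresses and names are assumptions for illustration:

    #include <stdint.h>

    /* Assumed register addresses (illustrative only). */
    #define SKETCH_PTEH    0xFF000000u   /* VPN for the entry being loaded  */
    #define SKETCH_PTEL    0xFF000004u   /* PPN and protection bits         */
    #define SKETCH_PTEAEX  0xFF0000A0u   /* extended (up to 16-bit) ASID    */

    static inline void mmio_write32(uintptr_t addr, uint32_t val)
    {
        *(volatile uint32_t *)addr = val;   /* stand-in for __raw_writel() */
    }

    /* With extended ASIDs the ASID is no longer packed into PTEH alongside
     * the VPN; it is written through a separate register before the load. */
    static void sketch_load_tlb_entry(unsigned long vpn, unsigned long pte,
                                      unsigned long asid)
    {
        mmio_write32(SKETCH_PTEH, (uint32_t)vpn);
        mmio_write32(SKETCH_PTEAEX, (uint32_t)asid);
        mmio_write32(SKETCH_PTEL, (uint32_t)pte);
        /* a real implementation would follow with an "ldtlb" instruction */
    }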