Commit Graph

21 Commits

Author SHA1 Message Date
David S. Miller
6c8927c963 [SPARC64]: Fix some SUN4V TLB handling bugs.
1) Add error return checking for TLB load hypervisor
   calls.

2) Don't fall through to the dtlb tsb miss handler from the
   itlb tsb miss handler, oops.

3) On window fixups, propagate fault information to the fixup
   handler correctly.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:13:32 -08:00
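A minimal C sketch of what fix (1) above amounts to, assuming a wrapper named hv_mmu_map_addr() around the sun4v MMU map service and an assumed error path; the real checks live in the assembly TLB-miss handlers, so this is purely illustrative:

    #define HV_EOK  0UL     /* sun4v "no error" status code */

    /* Assumed wrapper and assumed error path; in the kernel both
     * live in assembly. */
    extern unsigned long hv_mmu_map_addr(unsigned long vaddr, unsigned long ctx,
                                         unsigned long tte, unsigned long flags);
    extern void sun4v_tlb_load_error(unsigned long status);

    static void sun4v_load_tlb(unsigned long vaddr, unsigned long ctx,
                               unsigned long tte, unsigned long flags)
    {
            unsigned long status = hv_mmu_map_addr(vaddr, ctx, tte, flags);

            if (status != HV_EOK)           /* don't assume the load worked */
                    sun4v_tlb_load_error(status);
    }
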
David S. Miller
a7b31bac69 [SPARC64]: Do not write garbage into %pstate in tsb_context_switch().
For SUN4V, we were clobbering %o5 to do the hypervisor call.
This clobbers the saved %pstate value and we end up writing
garbage into that register as a result.  Oops.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:13:08 -08:00
David S. Miller
c4bce90ea2 [SPARC64]: Deal with PTE layout differences in SUN4V.
Yes, you heard it right, they changed the PTE layout for
SUN4V.  Ho hum...

This is the simple and inefficient way to support this.
It'll get optimized, don't worry.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:25 -08:00
David S. Miller
36a68e77c5 [SPARC64]: Simplify sun4v TLB handling using macros.
There was also a bug in sun4v_itlb_miss: it loaded the
MMU Fault Status base into %g3 instead of %g2.

This pointed out a fast path for TSB miss processing:
since we already have the MMU Fault Status base in %g2,
we can use it to quickly load up the PGD phys address.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:16 -08:00
David S. Miller
164c220fa3 [SPARC64]: Fix hypervisor call arg passing.
Function goes in %o5, args go in %o0 --> %o5.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:14 -08:00
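A hedged sketch of the convention the fix above enforces, written as an illustrative single-argument inline-asm wrapper rather than the kernel's actual helper; it assumes the sun4v fast-trap scheme where the function number rides in %o5, arguments start at %o0, and the status comes back in %o0 via software trap 0x80:

    /* Illustrative sun4v fast-trap call; not kernel code. */
    static inline unsigned long hv_fast_call1(unsigned long function,
                                              unsigned long arg0)
    {
            register unsigned long func asm("o5") = function;  /* function number */
            register unsigned long a0   asm("o0") = arg0;      /* first argument  */

            __asm__ __volatile__("ta 0x80"          /* enter the hypervisor */
                                 : "+r" (a0), "+r" (func)
                                 : /* no other inputs */
                                 : "o1", "o2", "o3", "o4", "memory");
            return a0;                              /* hypervisor status */
    }
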
David S. Miller
618e9ed98a [SPARC64]: Hypervisor TSB context switching.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:06 -08:00
David S. Miller
aa9143b971 [SPARC64]: Implement sun4v TSB miss handlers.
When we register a TSB with the hypervisor, so that it or the hardware
can handle TLB misses and do the TSB walk for us, the hypervisor traps
down to these handlers when it incurs a TSB miss.

Processing is simple: we load the missing virtual address and context,
and do a full page table walk.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:05 -08:00
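Rendered in C with the generic page-table helpers, the "full page table walk" step looks roughly like the sketch below; the real handlers are hand-written assembly driven from the MMU Fault Status Area, so treat this only as a picture of the control flow:

    #include <linux/mm.h>

    /* Sketch only: find the PTE for a faulting address, or NULL if the
     * mapping isn't there and the full fault path must run instead. */
    static pte_t *lookup_fault_pte(struct mm_struct *mm, unsigned long vaddr)
    {
            pgd_t *pgd = pgd_offset(mm, vaddr);
            pud_t *pud;
            pmd_t *pmd;

            if (pgd_none(*pgd))
                    return NULL;
            pud = pud_offset(pgd, vaddr);
            if (pud_none(*pud))
                    return NULL;
            pmd = pmd_offset(pud, vaddr);
            if (pmd_none(*pmd))
                    return NULL;
            return pte_offset_kernel(pmd, vaddr);
    }
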
David S. Miller
df7d6aec96 [SPARC64]: Rename gl_{1,2}insn_patch --> sun4v_{1,2}insn_patch
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:53 -08:00
David S. Miller
d257d5da39 [SPARC64]: Initial sun4v TLB miss handling infrastructure.
Things are a little tricky because, unlike sun4u, we have
to:

1) do a hypervisor trap to do the TLB load.
2) do the TSB lookup calculations by hand.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:52 -08:00
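For point 2 above, the by-hand TSB lookup boils down to the following calculation, sketched in C and assuming the 8KB base page size and the usual 16-byte TSB entry; on sun4u the MMU's TSB pointer registers hand us this address for free:

    #define TSB_ENTRY_BYTES 16      /* 8-byte tag + 8-byte data per entry */

    /* Hash the missing virtual address down to a TSB bucket and return
     * the address of that entry. */
    static unsigned long tsb_entry_address(unsigned long tsb_base,
                                           unsigned long nentries,
                                           unsigned long vaddr)
    {
            unsigned long index = (vaddr >> 13) & (nentries - 1);

            return tsb_base + index * TSB_ENTRY_BYTES;
    }
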
David S. Miller
45fec05f80 [SPARC64]: Sanitize %pstate writes for sun4v.
If we're just switching between different alternate global
sets, nop it out on sun4v.  Also, get rid of all of the
alternate global save/restore in the OBP CIF trampoline code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:50 -08:00
David S. Miller
314ef68597 [SPARC64]: Refine register window trap handling.
When saving and restoring trap state, do the window spill/fill
handling inline so that we never trap deeper than 2 trap levels.
This is important for chips like Niagara.

The window fixup code is massively simplified, and many more
improvements are now possible.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:36 -08:00
David S. Miller
ffe483d552 [SPARC64]: Add explicit register args to trap state loading macros.
This, as well as making the code cleaner, allows a simplification in
the TSB miss handling path.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:35 -08:00
David S. Miller
517af33237 [SPARC64]: Access TSB with physical addresses when possible.
This way we don't need to lock the TSB into the TLB.
The trick is that every TSB load/store is registered into
a special instruction patch section.  The default uses
virtual addresses, and the patched instructions use
physical-address loads/stores.

We can't do this on all chips because only cheetah+ and later
have the physical variant of the atomic quad load.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:32 -08:00
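A hedged sketch of the patch-section mechanism described above; the entry layout and symbol names are assumptions (the real tables are emitted by assembler macros around each TSB access), but the boot-time rewrite loop captures the idea:

    /* One record per TSB load/store: where the default (virtual address)
     * instruction sits, and the physical-address instruction to patch in
     * on cheetah+ and later chips. */
    struct tsb_phys_patch_entry {
            unsigned int addr;      /* location of the instruction to rewrite */
            unsigned int insn;      /* replacement physical-address access    */
    };

    extern struct tsb_phys_patch_entry __tsb_phys_patch, __tsb_phys_patch_end;

    static void tsb_phys_patch(void)
    {
            struct tsb_phys_patch_entry *p;

            for (p = &__tsb_phys_patch; p < &__tsb_phys_patch_end; p++) {
                    unsigned int *insn = (unsigned int *)(unsigned long) p->addr;

                    *insn = p->insn;
                    __asm__ __volatile__("flush %0" : : "r" (insn));  /* I-cache */
            }
    }
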
David S. Miller
9bc657b28e [SPARC64]: Fix too early reference to %g6
%g6 is not necessarily set to current_thread_info()
at sparc64_realfault_common.  So store the fault
code and address after we invoke etrap and %g6 is
properly set up.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:27 -08:00
David S. Miller
3487d1d441 [SPARC64]: Kill PROM locked TLB entry preservation code.
It is totally unnecessary complexity.  After we take over
the trap table, we handle all PROM tlb misses fully.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:24 -08:00
David S. Miller
6b6d017235 [SPARC64]: Use sparc64_highest_unlocked_tlb_ent in __tsb_context_switch()
Instead of an ugly hard-coded value.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:23 -08:00
David S. Miller
b70c0fa161 [SPARC64]: Preload TSB entries from update_mmu_cache().
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:19 -08:00
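Roughly, the hook ends up looking like the sketch below; tsb_insert() is an assumed helper name and the real function has more bookkeeping:

    #include <linux/mm.h>

    /* tsb_insert() is an assumed helper: drop one (tag, data) pair into
     * the current address space's TSB. */
    extern void tsb_insert(struct mm_struct *mm, unsigned long vaddr,
                           unsigned long pte_bits);

    /* Called by the generic MM code right after a PTE is established;
     * preload a matching TSB entry so the very next access to this
     * address doesn't have to take a TSB miss at all. */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                          pte_t pte)
    {
            if (pte_present(pte))
                    tsb_insert(vma->vm_mm, address, pte_val(pte));
    }
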
David S. Miller
98c5584cfc [SPARC64]: Add infrastructure for dynamic TSB sizing.
This also cleans up tsb_context_switch().  The assembler
routine is now __tsb_context_switch() and the former is
an inline function that picks out the bits from the mm_struct
and passes them into the assembler code as arguments.

setup_tsb_parms() computes the locked TLB entry to map the
TSB.  Later, when we support using the physical-address quad
load instructions of Cheetah+ and later chips, we'll simply use
the physical address for the TSB register value and set
the map virtual and PTE both to zero.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:17 -08:00
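A hedged sketch of the split described above; the mm_context_t field names are assumptions about the layout, not a quote of the real header:

    #include <linux/mm.h>

    /* The assembler side: takes the page-table physical base plus the
     * TSB parameters that setup_tsb_parms() computed. */
    extern void __tsb_context_switch(unsigned long pgd_pa,
                                     unsigned long tsb_reg_val,
                                     unsigned long tsb_map_vaddr,
                                     unsigned long tsb_map_pte);

    /* The C side: just pick the pieces out of the mm and pass them in. */
    static inline void tsb_context_switch(struct mm_struct *mm)
    {
            __tsb_context_switch(__pa(mm->pgd),
                                 mm->context.tsb_reg_val,
                                 mm->context.tsb_map_vaddr,
                                 mm->context.tsb_map_pte);
    }
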
David S. Miller
09f94287f7 [SPARC64]: TSB refinements.
Move {init_new,destroy}_context() out of line.

Do not put huge pages into the TSB, only base page size translations.
There are some clever things we could do here, but for now let's be
correct instead of fancy.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:16 -08:00
David S. Miller
56fb4df6da [SPARC64]: Eliminate all usage of hard-coded trap globals.
UltraSPARC has special sets of global registers which are switched to
for certain trap types.  There is one set for MMU related traps, one
set of Interrupt Vector processing, and another set (called the
Alternate globals) for all other trap types.

For what seems like forever we've hard coded the values in some of
these trap registers.  Some examples include:

1) Interrupt Vector global %g6 holds the current processor's interrupt
   work struct where received interrupts are managed for IRQ handler
   dispatch.

2) MMU global %g7 holds the base of the page tables of the currently
   active address space.

3) Alternate global %g6 held the current_thread_info() value.

Such hardcoding has resulted in some serious issues in many areas.
There are some code sequences where having another register available
would help clean up the implementation.  Taking traps such as
cross-calls from the OBP firmware requires some tricky code sequences
wherein we have to save away and restore all of the special sets of
global registers when we enter/exit OBP.

We were also using the IMMU TSB register on SMP to hold the per-cpu
area base address, which doesn't work any longer now that we actually
use the TSB facility of the cpu.

The implementation is pretty straightforward.  One tricky bit is
getting the current processor ID, as that is different on different cpu
variants.  We use a stub with a fancy calling convention which we
patch at boot time.  The calling convention is that the stub is
branched to, with the return PC minus 4 passed in register %g1.  The cpu
number is left in %g6.  This stub can be invoked by using the
__GET_CPUID macro.

We use an array of per-cpu trap state to store the current thread and
physical address of the current address space's page tables.  The
TRAP_LOAD_THREAD_REG macro loads %g6 with the current thread from
this table; it uses __GET_CPUID and also clobbers %g1.

TRAP_LOAD_IRQ_WORK is used by the interrupt vector processing to load
the current processor's IRQ software state into %g6.  It also uses
__GET_CPUID and clobbers %g1.

Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the
current address space's page tables into %g7; it clobbers %g1 and uses
__GET_CPUID.

Many refinements are possible, as well as some tuning, with this stuff
in place.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:16 -08:00
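A hedged sketch of the per-cpu trap state table the commit describes; the real structure is larger and carefully aligned for the assembly that indexes it with __GET_CPUID:

    #include <linux/threads.h>      /* NR_CPUS */

    struct thread_info;

    /* One entry per cpu, indexed via __GET_CPUID from the trap handlers. */
    struct trap_per_cpu {
            struct thread_info *thread;     /* TRAP_LOAD_THREAD_REG -> %g6 */
            unsigned long pgd_paddr;        /* TRAP_LOAD_PGD_PHYS   -> %g7 */
    } __attribute__((aligned(64)));

    struct trap_per_cpu trap_block[NR_CPUS];
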
David S. Miller
74bf4312ff [SPARC64]: Move away from virtual page tables, part 1.
We now use the TSB hardware assist features of the UltraSPARC
MMUs.

SMP is currently knowingly broken; we need to find another place
to store the per-cpu base pointers.  We hid them away in the TSB
base register, and that obviously will not work any more :-)

Another known broken case is non-8KB base page size.

We also noticed that flush_tlb_all() is not referenced anywhere; only
the internal __flush_tlb_all() (local cpu only) is used by the
sparc64 port, so we can get rid of flush_tlb_all().

The kernel gets its own 8KB TSB (swapper_tsb) and each address space
gets its own private 8KB TSB.  Later we can add code to dynamically
increase the size of the per-process TSB as the RSS grows.  An 8KB TSB is
good enough for up to about a 4MB RSS, after which the TSB starts to
incur many capacity and conflict misses.

We even accumulate OBP translations into the kernel TSB.

Another area for refinement is large page size support.  We could use
a secondary address space TSB to handle those.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:13 -08:00
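The "about a 4MB RSS" figure above follows from simple arithmetic, assuming the usual 16-byte TSB entry (8-byte tag plus 8-byte data):

    /* 8KB TSB / 16 bytes per entry = 512 entries; with one 8KB base page
     * per entry that covers 512 * 8KB = 4MB before capacity and conflict
     * misses start to bite. */
    enum {
            TSB_BYTES       = 8 * 1024,
            TSB_ENTRY_BYTES = 16,
            TSB_NENTRIES    = TSB_BYTES / TSB_ENTRY_BYTES,  /* 512 */
    };

    static const unsigned long TSB_RSS_COVERED = TSB_NENTRIES * 8 * 1024UL;  /* 4MB */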