Commit Graph

166 Commits

David S. Miller
cb736fdbb2 sparc64: Convert U1copy_{from,to}_user to accurate exception reporting.
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
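
For reference, the contract these conversions tighten, as a minimal
kernel-style usage sketch (kbuf, ubuf and len are illustrative names,
not taken from this patch):

    unsigned long left = copy_from_user(kbuf, ubuf, len);
    if (left) {
        /* exactly 'left' trailing bytes were not copied */
        return -EFAULT;
    }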

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-10-24 11:32:12 -07:00
David S. Miller
d0796b555b sparc64: Convert GENcopy_{from,to}_user to accurate exception reporting.
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-10-24 11:31:58 -07:00
David S. Miller
0096ac9f47 sparc64: Convert copy_in_user to accurate exception reporting.
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-10-24 11:31:58 -07:00
David S. Miller
83a17d2661 sparc64: Prepare to move to more saner user copy exception handling.
The fixup helper function mechanism for handling user copy faults is
not 100% accurate, and can never be made so.

We are going to transition the code to return the running remaining
length, which is always tracked in one or more registers of each of
these routines.

In order to convert them one by one, we have to allow the existing
behavior to continue functioning.

Therefore make all the copy code that wants the fixup helper to be
used return negative one.

After all of the user copy routines have been converted, this logic
and the fixup helpers themselves can be removed completely.
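
The transition shape described above, as a rough sketch (both function
names here are hypothetical, purely to illustrate the dispatch):

    ret = some_copy_routine(to, from, size);
    if (ret == -1)
        /* unconverted routine: fall back to the old fixup helper */
        ret = fixup_helper_remaining_bytes(to, from, size);
    /* converted routines already return the running remaining length */
    return ret;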

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-10-24 11:31:58 -07:00
Al Viro
fb2e6fdbbd sparc32: debride memcpy.S a bit
unreachable code, unused macros...

Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-07 23:55:49 -04:00
Al Viro
70a6fcf328 [sparc] unify 32bit and 64bit string.h
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-07 23:55:48 -04:00
Al Viro
d3867f0483 sparc: move exports to definitions
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-07 23:55:43 -04:00
Peter Zijlstra
3a1adb23a5 locking/atomic, arch/sparc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
Implement FETCH-OP atomic primitives; these are very similar to the
existing OP-RETURN primitives we already have, except they return the
value of the atomic variable _before_ modification.

This is especially useful for irreversible operations -- such as
bitops (because it becomes impossible to reconstruct the state prior
to modification).
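
A C11 analogue of the distinction (illustration only, not the kernel
API itself):

    #include <stdatomic.h>
    #include <assert.h>

    int main(void)
    {
        _Atomic int v = 5;
        int old = atomic_fetch_add(&v, 3);   /* FETCH-OP: value before, i.e. 5 */
        assert(old == 5 && atomic_load(&v) == 8);
        /* the OP-RETURN style add_return() would instead yield old + 3 == 8 */
        return 0;
    }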

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: James Y Knight <jyknight@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-16 10:48:30 +02:00
Rob Gardner
a7c5724b5c sparc64: fix FP corruption in user copy functions
Short story: Exception handlers used by some copy_to_user() and
copy_from_user() functions do not diligently clean up floating point
register usage, and this can result in a user process seeing invalid
values in floating point registers. This sometimes makes the process
fail.

Long story: Several cpu-specific (NG4, NG2, U1, U3) memcpy functions
use floating point registers and VIS alignaddr/faligndata to
accelerate data copying when source and dest addresses don't align
well. Linux uses a lazy scheme for saving floating point registers; it
is not done upon entering the kernel since it's a very expensive
operation. Rather, it is done only when needed. If the kernel ends up
not using FP regs during the course of some trap or system call, then
it can return to user space without saving or restoring them.

The various memcpy functions begin their FP code with VISEntry (or a
variation thereof), which saves the FP regs. They conclude their FP
code with VISExit (or a variation) which essentially marks the FP regs
"clean", ie, they contain no unsaved values. fprs.FPRS_FEF is turned
off so that a lazy restore will be triggered when/if the user process
accesses floating point regs again.

The bug is that the user copy variants of memcpy, copy_from_user() and
copy_to_user(), employ an exception handling mechanism to detect faults
when accessing user space addresses, and when this handler is invoked,
an immediate return from the function is forced, and VISExit is not
executed, thus leaving the fprs register in an indeterminate state,
but often with fprs.FPRS_FEF set and one or more dirty bits. This
results in a return to user space with invalid values in the FP regs,
and since fprs.FPRS_FEF is on, no lazy restore occurs.

This bug affects copy_to_user() and copy_from_user() for NG4, NG2,
U3, and U1. All are fixed by using a new exception handler for those
loads and stores that are done during the time between VISEnter and
VISExit.

n.b. In NG4memcpy, the problematic code can be triggered by a copy
size greater than 128 bytes and an unaligned source address.  This bug
is known to be the cause of random user process memory corruptions
while perf is running with the callgraph option (ie, perf record -g).
This occurs because perf uses copy_from_user() to read user stacks,
and may fault when it follows a stack frame pointer off to an
invalid page. Validation checks on the stack address just obscure
the underlying problem.

Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
Signed-off-by: Dave Aldridge <david.j.aldridge@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-24 12:13:18 -05:00
Linus Torvalds
2c302e7e41 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc
Pull sparc updates from David Miller:
 "Just a couple of fixes/cleanups:

   - Correct NUMA latency calculations on sparc64, from Nitin Gupta.

   - ASI_ST_BLKINIT_MRU_S value was wrong, from Rob Gardner.

   - Fix non-faulting load handling of non-quad values, also from Rob
     Gardner.

   - Cleanup VISsave assembler, from Sam Ravnborg.

   - Fix iommu-common code so it doesn't emit ridiculous warnings on
     some architectures, particularly ARM"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix numa distance values
  sparc64: Don't restrict fp regs for no-fault loads
  iommu-common: Fix error code used in iommu_tbl_range_{alloc,free}().
  sparc64: use ENTRY/ENDPROC in VISsave
  sparc64: Fix incorrect ASI_ST_BLKINIT_MRU_S value
2015-11-05 16:34:48 -08:00
Linus Torvalds
ca520cab25 Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking and atomic updates from Ingo Molnar:
 "Main changes in this cycle are:

   - Extend atomic primitives with coherent logic op primitives
     (atomic_{or,and,xor}()) and deprecate the old partial APIs
     (atomic_{set,clear}_mask())

     The old ops were incoherent with incompatible signatures across
     architectures and with incomplete support.  Now every architecture
     supports the primitives consistently (by Peter Zijlstra)

   - Generic support for 'relaxed atomics':

       - _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return()
       - atomic_read_acquire()
       - atomic_set_release()

     This came out of porting qwrlock code to arm64 (by Will Deacon)

   - Clean up the fragile static_key APIs that were causing repeat bugs,
     by introducing a new one:

       DEFINE_STATIC_KEY_TRUE(name);
       DEFINE_STATIC_KEY_FALSE(name);

     which define a key of different types with an initial true/false
     value.

     Then allow:

       static_branch_likely()
       static_branch_unlikely()

     to take a key of either type and emit the right instruction for the
     case.  To be able to know the 'type' of the static key we encode it
     in the jump entry (by Peter Zijlstra)

   - Static key self-tests (by Jason Baron)

   - qrwlock optimizations (by Waiman Long)

   - small futex enhancements (by Davidlohr Bueso)

   - ... and misc other changes"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
  jump_label/x86: Work around asm build bug on older/backported GCCs
  locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
  locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
  locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
  locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
  locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
  locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
  locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
  locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
  locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
  locking/static_keys: Make verify_keys() static
  jump label, locking/static_keys: Update docs
  locking/static_keys: Provide a selftest
  jump_label: Provide a self-test
  s390/uaccess, locking/static_keys: employ static_branch_likely()
  x86, tsc, locking/static_keys: Employ static_branch_likely()
  locking/static_keys: Add selftest
  locking/static_keys: Add a new static_key interface
  locking/static_keys: Rework update logic
  locking/static_keys: Add static_key_{en,dis}able() helpers
  ...
2015-09-03 15:46:07 -07:00
Sam Ravnborg
73958c651f sparc64: use ENTRY/ENDPROC in VISsave

Commit 44922150d8
("sparc64: Fix userspace FPU register corruptions") left a
stale globl symbol which was not used.

Fix this and introduce use of ENTRY/ENDPROC

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 15:22:40 -07:00
David S. Miller
44922150d8 sparc64: Fix userspace FPU register corruptions.
If we have a series of events from userspace, with %fprs=FPRS_FEF,
like follows:

ETRAP
	ETRAP
		VIS_ENTRY(fprs=0x4)
		VIS_EXIT
		RTRAP (kernel FPU restore with fpu_saved=0x4)
	RTRAP

We will not restore the user registers that were clobbered by the FPU
using kernel code in the inner-most trap.

Traps allocate FPU save slots in the thread struct, and FPU using
sequences save the "dirty" FPU registers only.

This works at the initial trap level because all of the registers
get recorded into the top-level FPU save area, and we'll return
to userspace with the FPU disabled so that any FPU use by the user
will take an FPU disabled trap wherein we'll load the registers
back up properly.

But this is not how trap returns from kernel to kernel operate.

The simplest fix for this bug is to always save all FPU register state
for anything other than the top-most FPU save area.

Getting rid of the optimized inner-slot FPU saving code ends up
making VISEntryHalf degenerate into plain VISEntry.

Longer term we need to do something smarter to reinstate the partial
save optimizations.  Perhaps the fundamental error is having trap entry
and exit allocate FPU save slots and restore register state.  Instead,
the VISEntry et al. calls should be doing that work.

This bug is about two decades old.

Reported-by: James Y Knight <jyknight@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 19:13:25 -07:00
Peter Zijlstra
304a0d699a sparc: Provide atomic_{or,xor,and}
Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
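
The intended replacement pattern, as a kernel-flavoured sketch
(IRQ_PENDING and flags are made-up names; flags is an atomic_t):

    atomic_or(IRQ_PENDING, &flags);    /* was: atomic_set_mask(IRQ_PENDING, &flags)   */
    atomic_and(~IRQ_PENDING, &flags);  /* was: atomic_clear_mask(IRQ_PENDING, &flags) */
    atomic_xor(IRQ_PENDING, &flags);   /* new operation, no old equivalent            */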

Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27 14:06:23 +02:00
David S. Miller
2077cef4d5 sparc64: Fix several bugs in memmove().
Firstly, handle zero length calls properly.  Believe it or not there
are a few of these happening during early boot.

Next, we can't just drop to a memcpy() call in the forward copy case
where dst <= src.  The reason is that the cache initializing stores
used in the Niagara memcpy() implementations can end up clearing out
cache lines before we've sourced their original contents completely.

For example, considering NG4memcpy, the main unrolled loop begins like
this:

     load   src + 0x00
     load   src + 0x08
     load   src + 0x10
     load   src + 0x18
     load   src + 0x20
     store  dst + 0x00

Assume dst is 64 byte aligned and let's say that dst is src - 8 for
this memcpy() call.  That store at the end there is the one to the
start of the cache line, thus clearing the whole line, which then
clobbers "src + 0x28" before it even gets loaded.

To avoid this, just fall through to a simple copy only mildly
optimized for the case where src and dst are 8 byte aligned and the
length is a multiple of 8 as well.  We could get fancy and call
GENmemcpy() but this is good enough for how this thing is actually
used.
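
A minimal C sketch of that fallback (illustrative only, the real code is
sparc assembler): plain forward copies with no cache-initializing
stores, so an overlapping dst <= src copy cannot clobber source bytes
before they have been read.

    #include <stddef.h>
    #include <stdint.h>

    static void *forward_copy(void *dst, const void *src, size_t n)
    {
        char *d = dst;
        const char *s = src;

        if ((((uintptr_t)d | (uintptr_t)s | n) & 7) == 0) {
            /* 8-byte aligned pointers and length: one word at a time */
            while (n) {
                *(uint64_t *)(void *)d = *(const uint64_t *)(const void *)s;
                d += 8;
                s += 8;
                n -= 8;
            }
        } else {
            while (n--)
                *d++ = *s++;
        }
        return dst;
    }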

Reported-by: David Ahern <david.ahern@oracle.com>
Reported-by: Bob Picco <bpicco@meloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-23 09:22:10 -07:00
Andreas Larsson
1a17fdc4f4 sparc32: Implement xchg and atomic_xchg using ATOMIC_HASH locks
Atomicity between xchg and cmpxchg cannot be guaranteed when xchg is
implemented with a swap and cmpxchg is implemented with locks.
Without this, e.g. mcs_spin_lock and mcs_spin_unlock are broken.
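
A userspace sketch of the idea, with pthread mutexes standing in for the
sparc32 ATOMIC_HASH spinlocks (demo_ names are illustrative): both
operations take the same hashed lock for the address, so each is atomic
with respect to the other.

    #include <pthread.h>
    #include <stdint.h>

    #define HASH_SZ 4
    static pthread_mutex_t hash_lock[HASH_SZ] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    };
    #define LOCK_FOR(p) (&hash_lock[((uintptr_t)(p) >> 2) % HASH_SZ])

    static unsigned int demo_xchg(unsigned int *p, unsigned int val)
    {
        pthread_mutex_lock(LOCK_FOR(p));
        unsigned int old = *p;
        *p = val;
        pthread_mutex_unlock(LOCK_FOR(p));
        return old;
    }

    static unsigned int demo_cmpxchg(unsigned int *p, unsigned int expect,
                                     unsigned int new_val)
    {
        pthread_mutex_lock(LOCK_FOR(p));
        unsigned int old = *p;
        if (old == expect)
            *p = new_val;
        pthread_mutex_unlock(LOCK_FOR(p));
        return old;
    }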

Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-07 12:51:44 -08:00
David S. Miller
f4da3628dc sparc64: Fix FPU register corruption with AES crypto offload.
The AES loops in arch/sparc/crypto/aes_glue.c use a scheme where the
key material is preloaded into the FPU registers, and then we loop
over and over doing the crypt operation, reusing those pre-cooked key
registers.

There are intervening blkcipher*() calls between the crypt operation
calls.  And those might perform memcpy() and thus also try to use the
FPU.

The sparc64 kernel FPU usage mechanism is designed to allow such
recursive uses, but with a catch.

There has to be a trap between the two FPU using threads of control.

The mechanism works by, when the FPU is already in use by the kernel,
allocating a slot for FPU saving at trap time.  Then if, within the
trap handler, we try to use the FPU registers, the pre-trap FPU
register state is saved into the slot.  Then at trap return time we
notice this and restore the pre-trap FPU state.

Over the long term there are various more involved ways we can make
this work, but for a quick fix let's take advantage of the fact that
the situation where this happens is very limited.

All sparc64 chips that support the crypto instructions are also using
the Niagara4 memcpy routine, and that routine only uses the FPU for
large copies where we can't get the source aligned properly to a
multiple of 8 bytes.

We look to see if the FPU is already in use in this context, and if so
we use the non-large copy path which only uses integer registers.

Furthermore, we also limit this special logic to when we are doing a
kernel copy, rather than a user copy.

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-10-14 19:37:58 -07:00
Linus Torvalds
dbb885fecc Merge branch 'locking-arch-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull arch atomic cleanups from Ingo Molnar:
 "This is a series kept separate from the main locking tree, which
  cleans up and improves various details in the atomics type handling:

   - Remove the unused atomic_or_long() method

   - Consolidate and compress atomic ops implementations between
     architectures, to reduce linecount and to make it easier to add new
     ops.

   - Rewrite generic atomic support to only require cmpxchg() from an
     architecture - generate all other methods from that"

* 'locking-arch-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  locking,arch: Use ACCESS_ONCE() instead of cast to volatile in atomic_read()
  locking, mips: Fix atomics
  locking, sparc64: Fix atomics
  locking,arch: Rewrite generic atomic support
  locking,arch,xtensa: Fold atomic_ops
  locking,arch,sparc: Fold atomic_ops
  locking,arch,sh: Fold atomic_ops
  locking,arch,powerpc: Fold atomic_ops
  locking,arch,parisc: Fold atomic_ops
  locking,arch,mn10300: Fold atomic_ops
  locking,arch,mips: Fold atomic_ops
  locking,arch,metag: Fold atomic_ops
  locking,arch,m68k: Fold atomic_ops
  locking,arch,m32r: Fold atomic_ops
  locking,arch,ia64: Fold atomic_ops
  locking,arch,hexagon: Fold atomic_ops
  locking,arch,cris: Fold atomic_ops
  locking,arch,avr32: Fold atomic_ops
  locking,arch,arm64: Fold atomic_ops
  locking,arch,arm: Fold atomic_ops
  ...
2014-10-13 15:48:00 +02:00
Peter Zijlstra
caa17d49f9 locking, sparc64: Fix atomics
The patch folding the atomic ops had a silly fail in the _return primitives.

Fixes: 4f3316c2b5 ("locking,arch,sparc: Fold atomic_ops")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140902094016.GD31157@worktop.ger.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-09-10 11:45:04 +02:00
Andreas Larsson
74cad25c07 sparc: Let memset return the address argument
This makes memset follow the standard (instead of returning 0 on success). This
is needed when certain versions of gcc optimize around memset calls and assume
that the address argument is preserved in %o0.
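
The standard behaviour being restored, as a standalone illustration:

    #include <string.h>
    #include <assert.h>

    int main(void)
    {
        char buf[16];
        /* C requires memset() to return its first argument, so callers
         * (and gcc, when it reuses the register holding the address) may
         * assume the destination pointer comes back unchanged in %o0. */
        assert(memset(buf, 0, sizeof(buf)) == buf);
        return 0;
    }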

Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-09 16:38:10 -07:00
Peter Zijlstra
4f3316c2b5 locking,arch,sparc: Fold atomic_ops
Many of the atomic op implementations are the same except for one
instruction; fold the lot into a few CPP macros and reduce LoC.

This also prepares for easy addition of new ops.
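
The shape of the folding, sketched with C11 atomics rather than the
kernel's sparc implementations (demo_ names are illustrative):

    #include <stdatomic.h>

    #define DEMO_ATOMIC_OP(op, c_op)                                    \
    static inline void demo_atomic_##op(int i, _Atomic int *v)          \
    {                                                                   \
        int old = atomic_load(v);                                       \
        while (!atomic_compare_exchange_weak(v, &old, old c_op i))      \
            ;  /* 'old' was refreshed on failure, just retry */         \
    }

    DEMO_ATOMIC_OP(add, +)
    DEMO_ATOMIC_OP(sub, -)
    DEMO_ATOMIC_OP(and, &)
    DEMO_ATOMIC_OP(or,  |)
    DEMO_ATOMIC_OP(xor, ^)

Each generated op differs only in the single c_op it plugs in, which is
exactly what makes the fold worthwhile.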

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140508135852.825281379@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-14 12:48:13 +02:00
Linus Torvalds
049711bf3c Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next
Pull sparc updates from David Miller:

 1) Add sparc RAM output to /proc/iomem, from Bob Picco.

 2) Allow seeks on /dev/mdesc, from Khalid Aziz.

 3) Cleanup sparc64 I/O accessors, from Sam Ravnborg.

 4) If update_mmu_cache{,_pmd}() is called with a non-valid mapping, do
    not insert it into the TLB miss hash tables otherwise we'll
    livelock.  Based upon work by Christopher Alexander Tobias Schulze.

 5) Fix BREAK detection in sunsab driver when no actual characters are
    pending, from Christopher Alexander Tobias Schulze.

 6) Because we have modules --> openfirmware --> vmalloc ordering of
    virtual memory, the lazy VMAP TLB flusher can cons up an invocation
    of flush_tlb_kernel_range() that covers the openfirmware address
    range.  Unfortunately this will flush out the firmware's locked TLB
    mapping which causes all kinds of trouble.  Just split up the flush
    request if this happens, but in the long term the lazy VMAP flusher
    should probably be made a little bit smarter.

    Based upon work by Christopher Alexander Tobias Schulze.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next:
  sparc64: Fix up merge thinko.
  sparc: Add "install" target
  arch/sparc/math-emu/math_32.c: drop stray break operator
  sparc64: ldc_connect() should not return EINVAL when handshake is in progress.
  sparc64: Guard against flushing openfirmware mappings.
  sunsab: Fix detection of BREAK on sunsab serial console
  bbc-i2c: Fix BBC I2C envctrl on SunBlade 2000
  sparc64: Do not insert non-valid PTEs into the TSB hash table.
  sparc64: avoid code duplication in io_64.h
  sparc64: reorder functions in io_64.h
  sparc64: drop unused SLOW_DOWN_IO definitions
  sparc64: remove macro indirection in io_64.h
  sparc64: update IO access functions in PeeCeeI
  sparcspkr: use sbus_*() primitives for IO
  sparc: Add support for seek and shorter read to /dev/mdesc
  sparc: use %s for unaligned panic
  drivers/sbus/char: Micro-optimization in display7seg.c
  display7seg: Introduce the use of the managed version of kzalloc
  sparc64 - add mem to iomem resource
2014-08-06 09:41:23 -07:00
Sam Ravnborg
6b8b5507ed sparc64: update IO access functions in PeeCeeI
The PeeCeeI.c code used in*() + out*() for IO access.
But these are little-endian accessors, and the native (big) endian
result was required, which resulted in some bit-shifting.
Shift the code over to use the __raw_*() variants all over.

This simplifies the code as we can drop the calls
to le16_to_cpu() and le32_to_cpu().
And it should be a little faster too.

With this change we now use the same type of IO access functions
throughout the file.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-21 21:43:18 -07:00
Steven Rostedt (Red Hat)
2563b9d965 sparc64,ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/20140703.211820.1674895115102216877.davem@davemloft.net

Cc: David S. Miller <davem@davemloft.net>
OKed-to-go-through-tracing-tree-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:03 -04:00
Linus Torvalds
c4222e4635 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next
Pull sparc fixes from David Miller:
 "Sparc sparse fixes from Sam Ravnborg"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next: (67 commits)
  sparc64: fix sparse warnings in int_64.c
  sparc64: fix sparse warning in ftrace.c
  sparc64: fix sparse warning in kprobes.c
  sparc64: fix sparse warning in kgdb_64.c
  sparc64: fix sparse warnings in compat_audit.c
  sparc64: fix sparse warnings in init_64.c
  sparc64: fix sparse warnings in aes_glue.c
  sparc: fix sparse warnings in smp_32.c + smp_64.c
  sparc64: fix sparse warnings in perf_event.c
  sparc64: fix sparse warnings in kprobes.c
  sparc64: fix sparse warning in tsb.c
  sparc64: clean up compat_sigset_t.seta handling
  sparc64: fix sparse "Should it be static?" warnings in signal32.c
  sparc64: fix sparse warnings in sys_sparc32.c
  sparc64: fix sparse warning in pci.c
  sparc64: fix sparse warnings in smp_64.c
  sparc64: fix sparse warning in prom_64.c
  sparc64: fix sparse warning in btext.c
  sparc64: fix sparse warnings in sys_sparc_64.c + unaligned_64.c
  sparc64: fix sparse warning in process_64.c
  ...

Conflicts:
	arch/sparc/include/asm/pgtable_64.h
2014-06-19 07:50:07 -10:00
David S. Miller
5aa4ecfd0d sparc64: Add membar to Niagara2 memcpy code.
This is to prevent previous stores from overlapping the block stores
done by the memcpy loop.

Based upon a glibc patch by Jose E. Marchesi

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-17 11:28:05 -07:00
Sam Ravnborg
e1039fb426 sparc32: introduce asm-generic/io.h
Use asm-generic/io.h definitions where applicable.
The inxx() and outxx() methods which were duplicated in pcic.c +
leon_pci.c are replaced by a set of static inlines from asm-generic/io.h.

iomap.c is replaced by the generic versions, but is still
present to support sparc64.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Daniel Hellstrom <daniel@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-02 01:30:21 -04:00
David S. Miller
b2d4383480 sparc64: Make PAGE_OFFSET variable.
Choose PAGE_OFFSET dynamically based upon cpu type.

Original UltraSPARC-I (spitfire) chips only supported a 44-bit
virtual address space.

Newer chips (T4 and later) support 52-bit virtual addresses
and up to 47-bits of physical memory space.

Therefore we have to adjust PAGE_OFFSET dynamically based upon
the capabilities of the chip.

Note that this change alone does not allow us to support > 43-bit
physical memory, to do that we need to re-arrange our page table
support.  The current encodings of the pmd_t and pgd_t pointers
restricts us to "32 + 11" == 43 bits.

This change can waste quite a bit of memory for the various tables.
In particular, a future change should work to size and allocate
kern_linear_bitmap[] and sparc64_valid_addr_bitmap[] dynamically.
This isn't easy as we really cannot take a TLB miss when accessing
kern_linear_bitmap[].  We'd have to lock it into the TLB or similar.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Bob Picco <bob.picco@oracle.com>
2013-11-12 15:22:34 -08:00
Kirill Tkhai
61d9b9355b sparc64: Remove RWSEM export leftovers
The functions

			__down_read
			__down_read_trylock
			__down_write
			__down_write_trylock
			__up_read
			__up_write
			__downgrade_write

are implemented inline, so remove corresponding EXPORT_SYMBOLs
(They lead to compile errors on RT kernel).

Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-05 12:12:51 -07:00
Stephen Boyd
446f24d119 Kconfig: consolidate CONFIG_DEBUG_STRICT_USER_COPY_CHECKS
The help text for this config is duplicated across the x86, parisc, and
s390 Kconfig.debug files.  Arnd Bergmann noted that the help text was
slightly misleading and should be fixed to state that enabling this
option isn't a problem when using pre 4.4 gcc.

To simplify the rewording, consolidate the text into lib/Kconfig.debug
and modify it there to be more explicit about when you should say N to
this config.

Also, make the text a bit more generic by stating that this option
enables compile time checks so we can cover architectures which emit
warnings vs.  ones which emit errors.  The details of how an
architecture decided to implement the checks isn't as important as the
concept of compile time checking of copy_from_user() calls.

While we're doing this, remove all the copy_from_user_overflow() code
that's duplicated many times and place it into lib/ so that any
architecture supporting this option can get the function for free.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Helge Deller <deller@gmx.de>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-30 17:04:09 -07:00
Akinobu Mita
54df2db36c sparc/srmmu: clear trailing edge of bitmap properly
srmmu_nocache_bitmap is cleared by bit_map_init().  But bit_map_init()
attempts to clear by memset(), so it can't clear the trailing edge of
bitmap properly on big-endian architecture if the number of bits is not
a multiple of BITS_PER_LONG.

Actually, the number of bits in srmmu_nocache_bitmap is not always
a multiple of BITS_PER_LONG.  It is calculated as below:

        bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT;

srmmu_nocache_size is decided proportionally by the amount of system RAM
and it is rounded to a multiple of PAGE_SIZE.  SRMMU_NOCACHE_BITMAP_SHIFT
is defined as (PAGE_SHIFT - 4).  So it can only be said that bitmap_bits
is a multiple of 16.

This fixes the problem by using bitmap_clear() instead of memset()
in bit_map_init() and this also uses BITS_TO_LONGS() to calculate correct
size at bitmap allocation time.
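
The shape of the fix, as a kernel-flavoured sketch (the allocation call
and variable names are simplified; the real code obtains its buffer
differently):

    /* size the buffer in whole longs ... */
    map = kzalloc(BITS_TO_LONGS(bitmap_bits) * sizeof(unsigned long),
                  GFP_KERNEL);
    /* ... and clear it bit-wise instead of with memset(), so the
     * trailing partial word is handled correctly on big-endian too */
    bitmap_clear(map, 0, bitmap_bits);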

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-03-31 19:29:12 -04:00
David S. Miller
193d2aadc0 sparc: Support atomic64_dec_if_positive properly.
Sparc32 already supported it, as a consequence of using the
generic atomic64 implementation.  And the sparc64 implementation
is rather trivial.

This allows us to set ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE for all
of sparc, and avoid the annoying warning from lib/atomic64_test.c

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-09 19:37:59 -08:00
David S. Miller
9f825962ef sparc64: Niagara-4 bzero/memset, plus use MRU stores in page copy.
This adds optimized memset/bzero/page-clear routines for Niagara-4.

We basically can do what powerpc has been able to do for a decade (via
the "dcbz" instruction), which is use cache line clearing stores for
bzero and memsets with a 'c' argument of zero.

As long as we make the cache initializing store to each 32-byte
subblock of the L2 cache line, it works.

As with other Niagara-4 optimized routines, the key is to make sure to
avoid any usage of the %asi register, as reads and writes to it cost
at least 50 cycles.

For the user clear cases, we don't use these new routines, we use the
Niagara-1 variants instead.  Those have to use %asi in an unavoidable
way.

A Niagara-4 8K page clear costs just under 600 cycles.

Add definitions of the MRU variants of the cache initializing store
ASIs.  By default, cache initializing stores install the line as Least
Recently Used.  If we know we're going to use the data immediately
(which is true for page copies and clears) we can use the Most
Recently Used variant, to decrease the likelihood of the lines being
evicted before they get used.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-10-05 13:45:26 -07:00
David S. Miller
954f9ac43b Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
There's a Niagara 2 memcpy fix in this tree and I have
a Kconfig fix from Dave Jones which requires the sparc-next
changes which went upstream yesterday.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-10-02 23:02:10 -04:00
David S. Miller
42a4172b6e sparc64: Fix trailing whitespace in NG4 memcpy.
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-28 13:08:22 -07:00
David S. Miller
9019205732 sparc64: Fix comment typo in NG4 copy from user.
Noticed by Greg Onufer.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27 14:26:41 -07:00
David S. Miller
1b62ca7bf5 sparc64: Fix return value of Niagara-2 memcpy.
It gets clobbered by the kernel's VISEntryHalf, so we have to save it
in a different register than the set clobbered by that macro.

The instance in glibc is OK and doesn't have this problem.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27 01:06:43 -07:00
David S. Miller
ae2c6ca641 sparc64: Add SPARC-T4 optimized memcpy.
                Before          After
                --------------  --------------
bw_tcp:         1288.53 MB/sec  1637.77 MB/sec
bw_pipe:        1517.18 MB/sec  2107.61 MB/sec
bw_unix:        1838.38 MB/sec  2640.91 MB/sec

make -s -j128
allmodconfig    5min 49sec      5min 31sec

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27 00:35:11 -07:00
David S. Miller
4ff28d4ca9 sparc64: Add SHA1 driver making use of the 'sha1' instruction.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-08-20 15:08:49 -07:00
David S. Miller
6f1d827f29 sparc64: Consistently use fsrc2 rather than fmovd in optimized asm.
Because fsrc2, unlike fmovd, does not update the %fsr register.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-27 01:25:23 -07:00
David Miller
2c66f62363 sparc: use the new generic strnlen_user() function
This throws away the sparc-specific functions in favor of the generic
optimized version.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-05-26 11:33:54 -07:00
David S. Miller
2922585b93 lib: Sparc's strncpy_from_user is generic enough, move under lib/
To use this, an architecture simply needs to:

1) Provide a user_addr_max() implementation via asm/uaccess.h

2) Add "select GENERIC_STRNCPY_FROM_USER" to their arch Kcnfig

3) Remove the existing strncpy_from_user() implementation and symbol
   exports their architecture had.
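
For step (1), the kind of definition an architecture provides
(illustrative only; the exact expression is arch-specific):

    #define user_addr_max() (current_thread_info()->addr_limit.seg)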

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: David Howells <dhowells@redhat.com>
2012-05-24 13:12:28 -07:00
David S. Miller
446969084d kernel: Move REPEAT_BYTE definition into linux/kernel.h
And make sure that everything using it explicitly includes
that header file.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-24 13:10:05 -07:00
David S. Miller
35c9646062 sparc: Increase portability of strncpy_from_user() implementation.
Hide details of maximum user address calculation in a new
asm/uaccess.h interface named user_addr_max().

Provide little-endian implementation in find_zero(), which should work
but can probably be improved.

Abstract the alignment check behind an IS_UNALIGNED() macro.

Kill double-semicolon, noticed by David Howells.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-24 13:04:24 -07:00
David S. Miller
4efcac3a24 sparc: Optimize strncpy_from_user() zero byte search.
Compute a mask that will only have 0x80 in the bytes which
had a zero in them.  The formula is:

	~(((x & 0x7f7f7f7f) + 0x7f7f7f7f) | x | 0x7f7f7f7f)

In the inner word iteration, we have to compute the "x | 0x7f7f7f7f"
part, so we can reuse that in the above calculation.

Once we have this mask, we perform divide and conquer to find the
highest 0x80 location.
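
A standalone demonstration of the mask (32-bit word for brevity; the
kernel operates on full machine words):

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes of x that were zero come out as 0x80, all others as 0x00. */
    static uint32_t zero_byte_mask(uint32_t x)
    {
        return ~(((x & 0x7f7f7f7fu) + 0x7f7f7f7fu) | x | 0x7f7f7f7fu);
    }

    int main(void)
    {
        printf("%08x\n", zero_byte_mask(0x41004243u)); /* prints 00800000 */
        return 0;
    }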

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-23 19:20:20 -07:00
David S. Miller
ff06dffbc8 sparc: Add full proper error handling to strncpy_from_user().
Linus removed the end-of-address-space hackery from
fs/namei.c:do_getname() so we really have to validate these edge
conditions and cannot cheat any more (as x86 used to as well).

Move to a common C implementation like x86 did.  And if both
src and dst are sufficiently aligned we'll do word at a time
copies and checks as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-22 23:32:27 -07:00
David S. Miller
74c7b28953 sparc32: Add ucmpdi2.o to obj-y instead of lib-y.
Otherwise if no references exist in the static kernel image,
we won't export the symbol properly to modules.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-19 15:27:01 -07:00
Sam Ravnborg
de36e66d5f sparc32: add ucmpdi2
Based on a copy from microblaze, add a ucmpdi2 implementation.
This fixes build of niu driver which failed with:

drivers/built-in.o: In function `niu_get_nfc':
niu.c:(.text+0x91494): undefined reference to `__ucmpdi2'

This driver will never be used on a sparc32 system,
but the patch is added to fix build breakage with all*config builds.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-19 15:23:57 -07:00
David S. Miller
1b35a57b1c sparc32: Kill off software 32-bit multiply/divide routines.
For the explicit calls to .udiv/.umul in assembler, I made a
mechanical (read as: safe) transformation.  I didn't attempt
to make any simplifications.

In particular, __ndelay and __udelay can be simplified significantly.
Some of the %y reads are unnecessary and these routines have no need
any longer for allocating a register window, they can be leaf
functions.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-15 11:23:47 -07:00
David S. Miller
73c1377da9 sparc32: Kill btfixup for xchg()'s 'swap' instruction.
We always have this instruction available, so no need to use
btfixup for it any more.

This also eradicates the whole of atomic_32.S and thus the
__atomic_begin and __atomic_end symbols completely.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-13 13:07:16 -07:00
David S. Miller
8695c37d06 sparc: Convert some assembler over to linkage.h's ENTRY/ENDPROC
Use those, instead of doing it all by hand.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-11 20:33:22 -07:00
David S. Miller
b55e81b9f8 sparc32: Remove inline strncmp "optimization" for constant counts.
Let the compiler do stuff like this.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-11 19:53:29 -07:00
Sam Ravnborg
593fc6ea47 sparc32: drop sun4c specific ___xchg32 implementation
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-11 19:27:47 -07:00
David Miller
c6df4b17c8 lib: Fix multiple definitions of clz_tab
Both sparc 32-bit's software divide assembler and MPILIB provide
clz_tab[] with identical contents.

Break it out into a separate object file and select it when
SPARC32 or MPILIB is set.

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: James Morris <jmorris@namei.org>
2012-02-02 10:34:23 +11:00
Linus Torvalds
e343a895a9 lib: use generic pci_iomap on all architectures
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

lib: use generic pci_iomap on all architectures

Many architectures don't want to pull in iomap.c,
so they ended up duplicating pci_iomap from that file.
That function isn't trivial, and we are going to modify it
https://lkml.org/lkml/2011/11/14/183
so the duplication hurts.

This reduces the scope of the problem significantly,
by moving pci_iomap to a separate file and
referencing that from all architectures.

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  alpha: drop pci_iomap/pci_iounmap from pci-noop.c
  mn10300: switch to GENERIC_PCI_IOMAP
  mn10300: add missing __iomap markers
  frv: switch to GENERIC_PCI_IOMAP
  tile: switch to GENERIC_PCI_IOMAP
  tile: don't panic on iomap
  sparc: switch to GENERIC_PCI_IOMAP
  sh: switch to GENERIC_PCI_IOMAP
  powerpc: switch to GENERIC_PCI_IOMAP
  parisc: switch to GENERIC_PCI_IOMAP
  mips: switch to GENERIC_PCI_IOMAP
  microblaze: switch to GENERIC_PCI_IOMAP
  arm: switch to GENERIC_PCI_IOMAP
  alpha: switch to GENERIC_PCI_IOMAP
  lib: add GENERIC_PCI_IOMAP
  lib: move GENERIC_IOMAP to lib/Kconfig

Fix up trivial conflicts due to changes nearby in arch/{m68k,score}/Kconfig
2012-01-10 18:04:27 -08:00
Sam Ravnborg
348738afe5 sparc32: drop unused atomic24 support
atomic24 support was used for semaphores in the past - but is no longer used.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-27 14:11:40 -05:00
Michael S. Tsirkin
a21a2fd403 sparc: switch to GENERIC_PCI_IOMAP
sparc copied pci_iomap from generic code, probably to avoid
pulling the rest of iomap.c in.  Since that's in
a separate file now, we can reuse the common implementation.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-12-04 15:59:49 +02:00
David S. Miller
a52312b88c sparc32: Correct the return value of memcpy.
Properly return the original destination buffer pointer.

Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Kjetil Oftedal <oftedal@gmail.com>
2011-10-20 15:17:23 -07:00
David S. Miller
21f74d361d sparc32: Remove uses of %g7 in memcpy implementation.
This is setting things up so that we can correct the return
value, so that it properly returns the original destination
buffer pointer.

Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Kjetil Oftedal <oftedal@gmail.com>
2011-10-20 15:17:22 -07:00
David S. Miller
045b7de9ca sparc32: Remove non-kernel code from memcpy implementation.
Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Kjetil Oftedal <oftedal@gmail.com>
2011-10-20 15:17:22 -07:00
Josip Rodin
a61b582954 sparc: Fix __atomic_add_unless() return value.
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-08-04 02:47:40 -07:00
David S. Miller
56d205cc5c sparc: Use popc when possible for ffs/__ffs/ffz.
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-08-02 21:28:53 -07:00
David S. Miller
ef7c4d4675 sparc: Use popc if possible for hweight routines.
Just like powerpc, we code patch at boot time.

Signed-off-by: David S. Miller <davem@davemloft.net>
2011-08-02 21:28:50 -07:00
David S. Miller
e95ade0839 sparc: Minor tweaks to Niagara page copy/clear.
Don't use floating point on Niagara2, use the traditional
plain Niagara code instead.

Unroll Niagara loops to 128 bytes for copy, and 256 bytes
for clear.

Signed-off-by: David S. Miller <davem@davemloft.net>
2011-08-02 21:28:32 -07:00
Stephen Rothwell
678624e401 sparc: rename atomic_add_unless
Should have been done in commit 1af08a1407f4 ("This is in preparation
for more generic atomic").

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Arun Sharma <asharma@fb.com>
Cc: David Miller <davem@davemloft.net>
Cc: "Hans-Christian Egtvedt" <hans-christian.egtvedt@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-07-27 12:53:36 -07:00
Arun Sharma
60063497a9 atomic: use <linux/atomic.h>
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-07-26 16:49:47 -07:00
David S. Miller
9fafbd8061 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6 2011-05-20 12:59:54 -07:00
Tkhai Kirill
b1054282d7 sparc32: Fixed unaligned memory copying in function __csum_partial_copy_sparc_generic
When we are in the label cc_dword_align, registers %o0 and %o1 have the same last 2 bits,
but it's not guaranteed that those bits are zero. So we can get an unaligned memory access
in label ccte. Example of parameters which lead to this:
%o0=0x7ff183e9, %o1=0x8e709e7d, %g1=3

With these parameters I got memory corruption, where an additional 5 bytes were overwritten.
This patch corrects the error.

One comment to the patch. We don't care about the third bit in %o1, because cc_end_cruft
stores a word or less.

Signed-off-by: Tkhai Kirill <tkhai@yandex.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-11 21:35:04 -07:00
Daniel Hellstrom
1827237065 sparc32: removed unused code, implemented by generic code
Signed-off-by: Daniel Hellstrom <daniel@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-04-21 16:44:44 -07:00
Linus Torvalds
0586bed3e8 Merge branch 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rtmutex: tester: Remove the remaining BKL leftovers
  lockdep/timers: Explain in detail the locking problems del_timer_sync() may cause
  rtmutex: Simplify PI algorithm and make highest prio task get lock
  rwsem: Remove redundant asmregparm annotation
  rwsem: Move duplicate function prototypes to linux/rwsem.h
  rwsem: Unify the duplicate rwsem_is_locked() inlines
  rwsem: Move duplicate init macros and functions to linux/rwsem.h
  rwsem: Move duplicate struct rwsem declaration to linux/rwsem.h
  x86: Cleanup rwsem_count_t typedef
  rwsem: Cleanup includes
  locking: Remove deprecated lock initializers
  cred: Replace deprecated spinlock initialization
  kthread: Replace deprecated spinlock initialization
  xtensa: Replace deprecated spinlock initialization
  um: Replace deprecated spinlock initialization
  sparc: Replace deprecated spinlock initialization
  mips: Replace deprecated spinlock initialization
  cris: Replace deprecated spinlock initialization
  alpha: Replace deprecated spinlock initialization
  rtmutex-tester: Remove BKL tests
2011-03-15 18:28:30 -07:00
Akinobu Mita
e637804c33 sparc: use bitmap_set()
Use bitmap_set() instead of calling __set_bit() for each bit.
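
The before/after shape, as a kernel-flavoured sketch (map, start and len
are illustrative names):

    unsigned int i;

    /* before: one __set_bit() call per bit */
    for (i = start; i < start + len; i++)
        __set_bit(i, map);

    /* after: a single ranged call */
    bitmap_set(map, start, len);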

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-02-08 22:52:53 -08:00
Thomas Gleixner
24774fbdea sparc: Replace deprecated spinlock initialization
SPIN_LOCK_UNLOCK is deprecated. Use the lockdep capable variant
instead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
2011-01-27 12:30:37 +01:00
David S. Miller
0f58189d4a sparc64: Make lock backoff really a NOP on UP builds.
As noticed by Mikulas Patocka, the backoff macros don't
completely nop out for UP builds; we still get a
branch-always and a delay slot nop.

Fix this by making the branch to the backoff spin loop
selective, then we can nop out the spin loop completely.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-08-18 22:53:26 -07:00
Mikulas Patocka
6ec274750c sparc64: simple microoptimizations for atomic functions
Simple microoptimizations for sparc64 atomic functions:
Save one instruction by using a delay slot.
Use %g1 instead of %g7, because %g1 is written earlier.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-08-18 22:51:08 -07:00
David S. Miller
9b3bb86aca sparc64: Make rwsems 64-bit.
Basically tip-off the powerpc code, use a 64-bit type and atomic64_t
interfaces for the implementation.

This gets us off of the by-hand asm code I wrote, which frankly I
think probably ruins I-cache hit rates.

The idea was to keep the call chains less deep, but anything taking
the rw-semaphores probably is also calling other stuff and therefore
already has allocated a stack-frame.  So no real stack frame savings
ever.

Ben H. has posted patches to make powerpc use 64-bit too and with some
abstractions we can probably use a shared header file somewhere.

With suggestions from Sam Ravnborg.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-08-17 22:49:26 -07:00
David S. Miller
035df35d96 sparc64: Allocate sufficient stack space in ftrace stubs.
128 bytes is sufficient for the register window save area, but the
calling conventions allow the callee to save up to 6 incoming argument
registers into the stack frame after the register window save area.

This means a minimal stack frame is 176 bytes (128 + (6 * 8)).

This fixes random crashes when using the function tracer.

Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-13 18:59:02 -07:00
David S. Miller
9960e9e894 sparc64: Add function graph tracer support.
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-12 22:37:26 -07:00
David S. Miller
a71d1d6bb1 sparc64: Give a stack frame to the ftrace call sites.
It's the only way we'll be able to implement the function
graph tracer properly.

A positive is that we no longer have to worry about the
linker over-optimizing the tail call, since we don't
use a tail call any more.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-12 22:37:15 -07:00
David S. Miller
ddacd0bc70 sparc64: Kill CONFIG_STACK_DEBUG code.
The generic stack tracer does this job just as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-12 22:36:03 -07:00
David S. Miller
63b7549573 sparc64: Add HAVE_FUNCTION_TRACE_MCOUNT_TEST and tidy up.
Check function_trace_stop at ftrace_caller

Toss mcount_call and dummy call of ftrace_stub, unnecessary.

Document problems we'll have if the final kernel image link
ever turns on relaxation.

Properly size 'ftrace_call' so it looks right when inspecting
instructions under gdb et al.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-12 22:35:24 -07:00
David S. Miller
4d14a45985 sparc: Stop trying to be so fancy and use __builtin_{memcpy,memset}()
This mirrors commit ff60fab71b
(x86: Use __builtin_memset and __builtin_memcpy for memset/memcpy)
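
The gist of the approach, sketched (the general pattern, not the exact
sparc header change):

    #define memcpy(d, s, n)  __builtin_memcpy(d, s, n)
    #define memset(p, c, n)  __builtin_memset(p, c, n)

gcc then open-codes small constant-size cases itself and falls back to
the ordinary library routines otherwise.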

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-12-10 23:32:10 -08:00
David S. Miller
fb34035e7b sparc: Use __builtin_object_size() to validate the buffer size for copy_from_user()
This mirrors x86 commit 9f0cf4adb6
(x86: Use __builtin_object_size() to validate the buffer size for copy_from_user())
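
A simplified, self-contained sketch of the check pattern (demo_ names
are illustrative; the real kernel helpers differ in detail).
__builtin_object_size(p, 0) is a gcc builtin that evaluates to the
object's size when the compiler can prove it, or (size_t)-1 when it
cannot:

    #include <stdio.h>
    #include <string.h>

    static void report_overflow(void)  /* stand-in for the warning hook */
    {
        fprintf(stderr, "copy_from_user(): buffer size check failed\n");
    }

    #define demo_check_copy(to, n)                                  \
    do {                                                            \
        if (__builtin_object_size(to, 0) != (size_t)-1 &&           \
            (n) > __builtin_object_size(to, 0))                     \
            report_overflow();                                      \
    } while (0)

    int main(void)
    {
        char buf[8];
        demo_check_copy(buf, 16);   /* known-size buffer, oversized request */
        memset(buf, 0, sizeof(buf));
        return 0;
    }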

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-12-10 23:05:23 -08:00
David S. Miller
166e553a57 sparc64: Fix stack debugging IRQ stack regression.
Commit 4f70f7a91b
(sparc64: Implement IRQ stacks.) has two bugs.

First, the softirq range check forgets to subtract STACK_BIAS
before comparing with %sp.  Next, on failure the wrong label
is jumped to, resulting in a bogus stack being loaded.

Reported-by: Igor Kovalenko <igor.v.kovalenko@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-12-09 01:43:45 -08:00
David S. Miller
6373fffc5d sparc64: Fix section attribute warnings.
CSUM copy to/from user assembler was missing allocatable and
executable attributes for .fixup

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-05-29 16:12:02 -07:00
David S. Miller
aeb3987683 sparc64: Fix probe_kernel_{read,write}().
This is based upon a report from Chris Torek and his initial patch.
From Chris's report:

--------------------
This came up in testing kgdb, using the built-in tests -- turn
on CONFIG_KGDB_TESTS, then

    echo V1 > /sys/module/kgdbts/parameters/kgdbts

-- but it would affect using kgdb if you were debugging and looking
at bad pointers.
--------------------

When we get a copy_{from,to}_user() request and the %asi is set to
something other than ASI_AIUS (which is userspace) then we branch off
to a routine called memcpy_user_stub().  It just does a straight
memcpy since we are copying from kernel to kernel in this case.

The logic was that since source and destination are both kernel
pointers we don't need to have exception checks.

But for what probe_kernel_{read,write}() is trying to do, we have to
have the checks, otherwise things like kgdb bad kernel pointer
accesses don't do the right thing.
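
The behaviour being fixed, as a kernel-flavoured usage sketch (addr is
an arbitrary, possibly bad kernel pointer):

    unsigned long insn;

    /* must return -EFAULT on a faulting kernel address, not crash */
    if (probe_kernel_read(&insn, (void *)addr, sizeof(insn)) != 0)
        return -EFAULT;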

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-08 22:32:31 -08:00
David S. Miller
40bdac7dbc sparc64: Kill .fixup section bloat.
This is an implementation of a suggestion made by Chris Torek:
--------------------
Something else I noticed in passing: the EX and EX_LD/EX_ST macros
scattered throughout the various .S files make a fair bit of .fixup
code, all of which does the same thing.  At the cost of one symbol
in copy_in_user.S, you could just have one common two-instruction
retl-and-mov-1 fixup that they all share.
--------------------

The following is with a defconfig build:

   text	   data	    bss	    dec	    hex	filename
3972767	 344024	 584449	4901240	 4ac978	vmlinux.orig
3968887	 344024	 584449	4897360	 4aba50	vmlinux

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-08 22:00:55 -08:00
Sam Ravnborg
62dfcd336c sparc64: fix modpost failure
Previously PeeCeeI.o was a library but it
was always pulled in due to insw and friends being exported
(at least for a modular kernel).

But this resulted in modpost failures if there were no in-kernel
users because then insw & friends were not linked in.

Fix this by including PeeCeeI.o in the kernel unconditionally.

The only drawback of this solution is that a nonmodular kernel
will always include insw & friends whether they are in use or not.

Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-01-17 23:12:27 -08:00
Sam Ravnborg
917c3660d6 sparc64: move EXPORT_SYMBOL to the symbols definition
Move all applicable EXPORT_SYMBOL()s to the file where the respective
symbol is defined.

Removed all the includes that are no longer needed in sparc_ksyms_64.c

Comment all remaining EXPORT_SYMBOL()s in sparc_ksyms_64.c

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

Additions by Julian Calaby:
* Moved EXPORT_SYMBOL()s for prom functions to their rightful places.
* Made some minor cleanups to the includes and comments of sparc_ksyms_64.c
* Updated and tidied commit message.
* Rebased patch over sparc-2.6.git HEAD.
* Ensured that all modified files have the correct includes.

Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-01-08 16:58:20 -08:00
Sam Ravnborg
45536ffc8d sparc: Create a new file lib/ksyms.c and add export of all symbols defined in assembler in lib/ to this file.
Remove the duplicate entries from kernel/sparc_ksyms_*.c

The rationale behind this is that the EXPORT_SYMBOL()s should be close to
their definitions and we cannot designate a symbol to be exported in
assembler, so at least put it in a file in the same directory.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

Additions by Julian Calaby:
* Rebased over sparc-2.6.git HEAD

Signed-off-by: Julian Calaby <julian.calaby@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-01-08 16:57:35 -08:00
David S. Miller
18cdae68e7 sparc: Commonize memcmp assembler.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-12-09 04:09:07 -08:00
David S. Miller
ae984d72e0 sparc: Unify strlen assembler.
Use the new asm/asm.h header to help commonize the
strlen assembler between 32-bit and 64-bit

While we're here, use proper linux/linkage.h macros
instead of by-hand stuff.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-12-09 01:07:09 -08:00
David S. Miller
8bf68e4d90 sparc: Kill memcmp_32.S code which has been ifdef'd out for centuries.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-12-08 16:05:49 -08:00
Sam Ravnborg
478b8fecda sparc,sparc64: unify lib/
o Renamed files in sparc64 to <name>_64.S when identical
  to sparc32 files.
o iomap.c was equal for sparc32 and sparc64
o adjusted sparc/Makefile now that we have only one lib/

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-12-04 09:17:19 -08:00
Sam Ravnborg
18269c0fd4 sparc: prepare lib/ for unification
Identically named files renamed to <name>_32.S
Refactored Makefile to prepare for unification.

Linking order was altered slightly - but this is a lib.a file so
it should not matter.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-12-04 09:17:18 -08:00
Adrian Bunk
88278ca27a sparc: remove CVS keywords
This patch removes the CVS keywords that weren't updated for a long time
from comments.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-05-20 00:33:44 -07:00
Benjamin Herrenschmidt
b70d3a2c59 iomap: fix 64 bits resources on 32 bits
Almost all implementations of pci_iomap() in the kernel, including the generic
lib/iomap.c one, copies the content of a struct resource into unsigned longs,
which will break on 32-bit platforms with 64-bit resources.

This fixes all definitions of pci_iomap() to use resource_size_t.  I also
"fixed" the 64-bit arch for consistency.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 08:06:02 -07:00
Sam Ravnborg
c6d64c16bb [SPARC/SPARC64]: Fix usage of .section .sched.text in assembler code.
ld will generate a uniquely named section when the assembler does not use
"ax" but gcc does. Add the missing annotation.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-01-31 19:32:43 -08:00
David S. Miller
6cc0735d0d [SPARC32]: Add __cmpdi2() libcall implementation ala. MIPS.
Device mapper generates calls to this with recent versions
of gcc.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-26 18:49:09 -07:00
Alexander Shmelev
f61698e648 [SPARC32]: Fix bug in sparc optimized memset.
Sparc optimized memset (arch/sparc/lib/memset.S) does not fill the last
byte of the memory area if the area size is less than 8 bytes and the start
address is not word (4-byte) aligned.

Here is code chunk where bug located:
/* %o0 - memory address, %o1 - size, %g3 - value */
8:
     add    %o0, 1, %o0
    subcc    %o1, 1, %o1
    bne,a    8b
     stb %g3, [%o0 - 1]

This code should write a byte every loop iteration, but on the last
iteration the delay-slot instruction stb is not executed because the
branch instruction sets the "annul" bit.

The patch replaces the bne,a instruction by bne.

Error can be reproduced by simple kernel module:

--------------------
#include <linux/module.h>
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>

static void do_memset(void **p, int size)
{
        memset(p, 0x00, size);
}

static int __init memset_test_init(void)
{
    char fooc[8];
    int *fooi;
    memset(fooc, 0xba, sizeof(fooc));

    do_memset((void**)(fooc + 3), 1);

    fooi = (int*) fooc;
    printk("%08X %08X\n", fooi[0], fooi[1]);

    return -1;
}

static void __exit memset_test_cleanup(void)
{
    return;
}

module_init(memset_test_init);
module_exit(memset_test_cleanup);

MODULE_LICENSE("GPL");
EXPORT_NO_SYMBOLS;
--------------------

Signed-off-by: Alexander Shmelev <ashmelev@task.sun.mcst.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-24 13:41:44 -07:00
Andrew Morton
1fb8812ba5 [SPARC32]: Build fix.
Fix 6197fe4d72

arch/sparc/lib/atomic32.c: In function '__cmpxchg_u32':
arch/sparc/lib/atomic32.c:127: error: 'addr' undeclared (first use in this function)
arch/sparc/lib/atomic32.c:127: error: (Each undeclared identifier is reported only once
arch/sparc/lib/atomic32.c:127: error: for each function it appears in.)

I assume this is what was intended..

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: William Irwin <wli@holomorphy.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-05-31 01:52:51 -07:00