The default values of HARDIRQ_BITS and PREEMPT_BITS in common code lead to a
build failure:
In file included from include/linux/interrupt.h:12,
from include/linux/kernel_stat.h:8,
from arch/blackfin/kernel/asm-offsets.c:32:
include/linux/hardirq.h:66:2: error: #error PREEMPT_ACTIVE is too low!
So until that gets resolved, just declare our own default value again.
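A minimal sketch of the kind of arch-level override meant here (the header
path and the value are assumptions, not taken from the actual patch):

/* arch/blackfin/include/asm/hardirq.h -- location and value assumed */
#define HARDIRQ_BITS	8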
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-sched:
sched: Fix bug in SCHED_IDLE interaction with group scheduling
sched: Fix rt_rq->pushable_tasks initialization in init_rt_rq()
sched: Reset sched stats on fork()
sched_rt: Fix overload bug on rt group scheduling
sched: Documentation/sched-rt-group: Fix style issues & bump version
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc: Fix another bug in move of altivec code to vector.S
powerpc: Fix booke user_disable_single_step()
If a tty in N_TTY mode with echo enabled manages to get itself into a state
where
- echo characters are pending
- FASYNC is enabled
- tty_write_wakeup is called from either
- a device write path (pty)
- an IRQ (serial)
then it either deadlocks or explodes taking a mutex in the IRQ path.
On the serial side it is almost impossible to reproduce because you have to
go from a full serial port to a near-empty one with echo characters
pending. The pty case has become possible to trigger using emacs and ptys,
the recent pty changes having created a scenario which shows up this bug.
The code path is
n_tty:process_echoes() (takes mutex)
  tty_io:tty_put_char()
    pty:pty_write (or serial paths)
      tty_wakeup (from pty_write or serial IRQ)
        n_tty_write_wakeup()
          process_echoes()
            *KABOOM*
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Don't forget to drop the tty reference on the failure paths in
receive_data().
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bootmem is not used for the vt screen buffer anymore as slab is now
available at the time the console is initialized.
Get rid of the now superfluous distinction between slab and bootmem;
it's always slab.
This also fixes a kmalloc leak which Catalin described thusly:
Commit a5f4f52e ("vt: use kzalloc() instead of the bootmem allocator")
replaced the alloc_bootmem() with kzalloc() but didn't set vc_kmalloced to
1 and the memory block is later leaked. The corresponding kmemleak trace:
unreferenced object 0xdf828000 (size 8192):
comm "swapper", pid 0, jiffies 4294937296
backtrace:
[<c006d473>] __save_stack_trace+0x17/0x1c
[<c000d869>] log_early+0x55/0x84
[<c01cfa4b>] kmemleak_alloc+0x33/0x3c
[<c006c013>] __kmalloc+0xd7/0xe4
[<c00108c7>] con_init+0xbf/0x1b8
[<c0010149>] console_init+0x11/0x20
[<c0008797>] start_kernel+0x137/0x1e4
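As a rough sketch of the simplification (vc_screenbuf and vc_screenbuf_size
are real struct vc_data members; the surrounding code and the GFP flag are
assumptions):

	/* allocation: always slab now */
	vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_NOWAIT);
	...
	/* teardown: unconditionally kfree(), no vc_kmalloced flag to consult */
	kfree(vc->vc_screenbuf);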
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
dcb314@hotmail.com notes that this memset has its args reversed.
It's unneeded anyway, so remove it.
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13587
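For illustration only (a hypothetical buffer, not the code that was removed),
a reversed-argument memset looks like this:

	char buf[16];
	memset(buf, sizeof(buf), 0);	/* args reversed: length 0, does nothing */
	memset(buf, 0, sizeof(buf));	/* what would have been intended */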
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
msm_serial_driver is registered using platform_driver_probe(), which takes
care of the probe function itself. So don't also pass it in the driver
struct.
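A hedged sketch of the resulting registration pattern; the identifiers around
msm_serial (struct name, probe/remove functions, driver name) are assumptions:

#include <linux/module.h>
#include <linux/platform_device.h>

static struct platform_driver msm_platform_driver = {
	/* no .probe member: the probe routine goes to platform_driver_probe() */
	.remove = __devexit_p(msm_serial_remove),
	.driver = {
		.name = "msm_serial",
		.owner = THIS_MODULE,
	},
};

static int __init msm_serial_init(void)
{
	/* probe is passed here instead of in the struct above */
	return platform_driver_probe(&msm_platform_driver, msm_serial_probe);
}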
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We can get a situation where a hangup occurs during or after a close. In
that case the ldisc gets disposed of by the close and the hangup then
explodes.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Turning on this flag prevents the compiler from optimising away some
"useless" checks for null pointers. Such bugs can become exploitable
because the -O2 optimisations delete those checks at compile time.
See http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Optimize-Options.html
An example that clearly shows this 'problem' is commit 6bf67672.
static void __devexit agnx_pci_remove(struct pci_dev *pdev)
{
struct ieee80211_hw *dev = pci_get_drvdata(pdev);
- struct agnx_priv *priv = dev->priv;
+ struct agnx_priv *priv;
AGNX_TRACE;
if (!dev)
return;
+ priv = dev->priv;
By reverting this patch and compiling it with and without the
-fno-delete-null-pointer-checks flag, we can see that the check for dev
is compiled away.
call printk #
- testq %r12, %r12 # dev
- je .L94 #,
movq %r12, %rdi # dev,
Clearly the 'fix' is to stop using dev before it is tested, but building
with the -fno-delete-null-pointer-checks flag at least makes it harder to
abuse.
Signed-off-by: Eugene Teo <eugeneteo@kernel.sg>
Acked-by: Eric Paris <eparis@redhat.com>
Acked-by: Wang Cong <amwang@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit a6540f731d, as
requested by Alan:
"... as it was wrong, the pty code is now fixed and the fact this
isn't reverted is breaking pptp setups."
Requested-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when allocating
objects, since the slab allocator can hand back a freed object that is still in
use by lockless readers.
In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next
always being valid (i.e. containing a valid 'nulls' value, or a valid pointer to
the next object in the hash chain).
kmem_cache_zalloc() sets up the object with NULL values, but NULL is not valid
for ct->tuplehash[xxx].hnnode.next.
The fix is to call kmem_cache_alloc() and do the zeroing ourselves.
As spotted by Patrick, we also need to make sure the lookup keys are committed
to memory before setting the refcount to 1, or a lockless reader could get a
reference on the old version of the object and its key re-check could then
wrongly succeed.
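A hedged sketch of the allocation pattern described above (the member names
follow the commit text; the exact nf_conntrack_alloc() layout and error
handling are assumptions):

	ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
	if (ct == NULL)
		return ERR_PTR(-ENOMEM);

	/*
	 * Don't use kmem_cache_zalloc(): with SLAB_DESTROY_BY_RCU a lockless
	 * reader may still be walking this object, and tuplehash[].hnnode.next
	 * must always hold a valid 'nulls' value or pointer, never NULL.
	 * Zero only the fields that follow the tuplehash array.
	 */
	memset(&ct->tuplehash[IP_CT_DIR_MAX], 0,
	       sizeof(*ct) - offsetof(struct nf_conn, tuplehash[IP_CT_DIR_MAX]));

	ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig;
	ct->tuplehash[IP_CT_DIR_REPLY].tuple = *repl;

	/* commit the lookup keys to memory before readers can take a reference */
	smp_wmb();
	atomic_set(&ct->ct_general.use, 1);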
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Check that the result of kmalloc is not NULL before passing it to other
functions.
In the first two cases, the new code returns -ENOMEM, which seems
compatible with what is done for similar functions on other architectures.
In the last two cases, the new code fails silently, i.e. just returns,
because the function has a void return type.
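A hypothetical example of the pattern being introduced (the identifiers here
are illustrative, not the actual Blackfin code):

	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;	/* or a bare 'return;' in the void-returning cases */
	setup_hardware(buf);	/* only reached with a valid allocation */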
The semantic match that finds this problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression *x;
identifier f;
constant char *C;
@@
x = \(kmalloc\|kcalloc\|kzalloc\)(...);
... when != x == NULL
when != x != NULL
when != (x || ...)
(
kfree(x)
|
f(...,C,...,x,...)
|
*f(...,x,...)
|
*x->f
)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The Blackfin SMP port was missing CPLB entries for Core B on-chip L1 SRAM
regions. Any code that attempted to use these would wrongly crash due to
a CPLB miss.
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Similar to anomaly 05000281 but not as bad: we cannot return to the
instruction causing a fault, otherwise we'll trigger a second, false
exception. The system can still recover, but it isn't correct.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
On Blackfin SMP, a per-cpu loops_per_jiffy is pointless since both cores
always run at the same CCLK. In addition, the current implementation has
flaws since the main consumer for loops_per_jiffy (asm/delay.h) uses the
global kernel loops_per_jiffy and not the per_cpu one. So punt all of the
per-cpu handling and go back to the global shared one.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Change the bfin_gpio_pm_hibernate_restore() function to:
1) AND restored DATA with DIR (not OR) to get correct final state
2) Restore DATA before setting DIR to avoid glitches
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The AD7142 add-on card hooks the IRQ line up to PG5, not PF5.
Signed-off-by: Barry Song <barry.song@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The interrupt context save logic incorrectly stored the address of the
IPEND register rather than its value due to a missing dereference. While
we're here, also enable this code for all kernel debugging scenarios and
not just when KGDB is enabled.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
We already catch this anomaly at compile time, and the runtime version is
such that it ends up checking on all parts rather than just the ones that
might actually have it.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The sed used to rename the bfin-twi-lcd only replaced the first instance
rather than all which led to the resources not being enabled when the
driver was built as a module.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The Blackfin serial headers were inverting the CTS value leading to wrong
handling of the CTS line which broke CTS/RTS handling completely.
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
This anomaly only applies to the BF527-0.1, not the BF526-0.1, and not any
other revision of the BF527. So make sure we don't go returning 0xffff
for other cases.
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The early logic to locate a free DMA channel and then set it up was broken
in a few ways that only manifested themselves when we needed to set up more
than two on-chip SRAM regions (most board defaults set up 1 or 2). First, we
checked the wrong status register (the destination gets updated, not the
source), and second, we did the ssync before rather than after resetting a
DMA config register.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Rather than assume Core B is always run with caches turned on, let people
load into any of the on-chip memory regions. It is their business how the
SRAM/Cache regions are utilized, so don't prevent them from being able to
load into them.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
The code used in the Blackfin lshrdi3 utilizes gcc constructs. However,
the structures declared don't line up with the code gcc generates, so
under certain optimizations, we get bad code and things crap out in fun
random ways. So rather than trying to maintain different gcc definitions
ourselves, just use the ones available in gcclib.h.
URL: http://blackfin.uclinux.org/gf/tracker/5286
Signed-off-by: Jie Zhang <jie.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Since we need to relocate the attached filesystem with the uClinux MTD map
(to handle some anomalies), we need to know its real filesize. If we boot
a kernel without a filesystem actually attached, we end up blindly reading
and copying garbage (since there is no magic value to detect validity).
Often this results in an early crash and no output. So add a few
basic sanity checks before operating on things to catch the majority of
cases.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Previous unification code put the exception banner behind the "is oops"
logic when it should have been printed all the time.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Add the missing workaround for anomaly 05000281 - we can't return to
instructions which cause hardware errors, otherwise we trigger the error
again, which means we go into an infinite loop of handling, returning, and
retriggering. This workaround confuses gdb when the error occurs, as the
PC will seem to have moved, so a better long-term fix will need to be
figured out, but for now this is better than an infinite crash loop.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Cliff Cai <cliff.cai@analog.com>
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
There are no CONFIG_{BLK,CHR}_DEV_FLASH Kconfig options, and there is no
flash_probe() function, so it's not really clear what this code is all about.
It seems to be dead code that stretches way back to the start of the Blackfin
port.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Improve the assembly with a few explanatory comments and use symbolic
defines rather than numeric values for bit positions.
Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Get rid of the extenddisksize parameter of ext3_get_blocks_handle(). It seems
to be a relic from the old days, and setting disksize in this function does not
make much sense. Currently it is set only by ext3_getblk(). Since the
parameter has an effect only if create == 1, it is easy to check that the
three callers which end up calling ext3_getblk() with create == 1 (ext3_append,
ext3_quota_write, ext3_mkdir) do the right thing and set disksize themselves.
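For illustration, the resulting interface looks roughly like this (the
prototype is paraphrased, not copied from fs/ext3/inode.c):

	int ext3_get_blocks_handle(handle_t *handle, struct inode *inode,
				   sector_t iblock, unsigned long maxblocks,
				   struct buffer_head *bh_result, int create);
	/* the trailing 'int extend_disksize' argument is gone */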
Signed-off-by: Jan Kara <jack@suse.cz>
The following race can happen:
  CPU1                                       CPU2
  checkpointing code checks the buffer,
    adds it to an array for writeback
                                             do_get_write_access()
                                               ...
                                               lock_buffer()
                                               unlock_buffer()
  flush_batch() submits the buffer for IO
                                               __jbd_journal_file_buffer()
So a buffer under writeout is returned from do_get_write_access(). Since
the filesystem code relies on the fact that journaled buffers cannot be
written out, it does not take the buffer lock and so it can modify the buffer
while it is under writeout. That can lead to filesystem corruption
if we crash at the right moment. A similar problem can happen with
the journal_get_create_access() path.
We fix the problem by clearing the buffer dirty bit under the buffer lock
even if the buffer is on the BJ_None list. In fact, we clear the dirty bit
regardless of which list the buffer is on, and warn about it if
the buffer is already journalled.
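A hedged paraphrase of the resulting __journal_file_buffer() behaviour (not
the literal jbd source; the warning text is made up):

	lock_buffer(bh);
	was_dirty = test_clear_buffer_dirty(bh);	/* for any list, even BJ_None */
	if (was_dirty && jh->b_transaction)
		printk(KERN_WARNING "JBD: dirty bit set on a journalled buffer\n");
	if (was_dirty)
		set_buffer_jbddirty(bh);	/* jbd's own dirty tracking */
	unlock_buffer(bh);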
Thanks to dingdinghua <dingdinghua85@gmail.com> for spotting the problem.
Reported-by: dingdinghua <dingdinghua85@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
The contents of long symlinks are written via the standard write methods, so
when a write fails, we add the inode to the orphan list. But symlinks don't
have a .truncate method defined, so nobody properly removes them from the
orphan list (both on disk and in memory).
Fix this by calling ext3_truncate() directly instead of calling vmtruncate()
(which is saner anyway, since we don't need anything vmtruncate() does except
calling .truncate in these paths). We also add the inode to the orphan list
only if ext3_can_truncate() is true (currently it can be false for symlinks
when there are no blocks allocated) - otherwise orphan list processing will
complain and ext3_truncate() will not remove the inode from the on-disk
orphan list.
Signed-off-by: Jan Kara <jack@suse.cz>
Due to on-disk corruption, it can happen that the journal is too short. Fail
to load it in such a case so that we don't oops somewhere later.
Reported-by: Nageswara R Sastry <rnsastry@linux.vnet.ibm.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Add the appropriate MODULE_ALIAS() entries to facilitate autoloading of the
CAN protocol drivers.
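For example (a hedged guess at the alias format matched by af_can.c's
request_module(); the protocol number shown is an assumption):

MODULE_ALIAS("can-proto-1");	/* e.g. in the CAN_RAW protocol module */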
Signed-off-by: Lothar Wassmann <LW@KARO-electronics.de>
Acked-by: Oliver Hartkopp <oliver@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a use-after-free bug in the CAN protocol drivers.
The release functions of the CAN protocol drivers lack a call to
sock_orphan(), which leads to referencing freed memory under certain
circumstances.
This patch fixes a bug reported here:
https://lists.berlios.de/pipermail/socketcan-users/2009-July/000985.html
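A hedged sketch of the kind of change meant here; the function below is a
simplified stand-in for the protocols' .release handlers, not the actual
raw/bcm code:

static int can_proto_release(struct socket *sock)
{
	struct sock *sk = sock->sk;

	/* ... protocol specific teardown ... */

	lock_sock(sk);
	sock_orphan(sk);	/* the missing call: detach sk from the socket */
	sock->sk = NULL;
	release_sock(sk);

	sock_put(sk);
	return 0;
}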
Signed-off-by: Lothar Wassmann <LW@KARO-electronics.de>
Acked-by: Oliver Hartkopp <oliver@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
ahci: add device ID for 82801JI sata controller
drivers/ata: Move a dereference below a NULL test
libata: implement and use HORKAGE_NOSETXFER, take#2
libata: fix follow-up SRST failure path
We need to check the error returned by pci_register_driver(&joystick_driver).
On failure, we should unregister the formerly registered audio driver.
This also fixes the compiler warning:
CC [M] sound/pci/riptide/riptide.o
sound/pci/riptide/riptide.c: In function ‘alsa_card_riptide_init’:
sound/pci/riptide/riptide.c:2200: warning: ignoring return value of ‘__pci_register_driver’, declared with attribute warn_unused_result
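A hedged sketch of the init-path handling described above (the riptide
symbols and the SUPPORT_JOYSTICK guard are taken on assumption):

static int __init alsa_card_riptide_init(void)
{
	int err;

	err = pci_register_driver(&driver);
	if (err < 0)
		return err;
#if defined(SUPPORT_JOYSTICK)
	err = pci_register_driver(&joystick_driver);
	if (err < 0) {
		/* unregister the formerly registered audio driver */
		pci_unregister_driver(&driver);
		return err;
	}
#endif
	return 0;
}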
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Blue Microphones USB devices have an alternate setting that sends two
channels of data to the computer. Unfortunately, the descriptors of
that altsetting have a wrong channel setting, which means that any
recorded data from such a device has twice the expected sample rate.
This patch adds a workaround to ignore that altsetting. Since these
devices have only one actual channel, no data is lost.
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Cc: <stable@kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
This patch fixes a bug in the image sequence number handling at the
scanning level: the assignment of image_seq was incorrect.
Signed-off-by: Holger Brunck <holger.brunck@keymile.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>