* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
fix endian lossage in forcedeth
net/tokenring/olympic.c section fixes
net: marvell.c fix sparse shadowed variable warning
[VLAN]: Fix egress priority mappings leak.
[TG3]: Add PHY workaround for 5784
[NET]: srandom32 fixes for networking v2
[IPV6]: Fix refcounting for anycast dst entries.
[IPV6]: inet6_dev on loopback should be kept until namespace stop.
[IPV6]: Event type in addrconf_ifdown is mis-used.
[ICMP]: Ensure that ICMP relookup maintains status quo
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SPARC64]: Fix user accesses in regset code.
[SPARC64]: Fix FPU saving in 64-bit signal handling.
* 'pci_id_updates' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb:
V4L/DVB (7497): pvrusb2: add new usb pid for 73xxx models
V4L/DVB (7496): pvrusb2: add new usb pid for 75xxx models
We handle a broken tsc these days, so there is no need to panic. We clear
the TSC bit when tsc_init decides it's unreliable (e.g. under lguest with a
bad host TSC), which then led to a bogus panic.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Now that we're mapping registers in the DRM driver at load time, the
driver actually checks the PCI ID, so we need to make sure the macros
have all the right bits (and longer term use the DRM headers as the sole
copy of the PCI & register definitions).
This patch adds 945GME support to the DRM headers, fixing a regression
reported in http://bugzilla.kernel.org/show_bug.cgi?id=10395.
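For illustration, the change amounts to extending the chipset-detection
macro in the DRM headers; the macro shape and the 945GME device ID below
are assumptions shown only as a sketch, not verified against the tree:

    /* illustrative: real names/IDs live in the i915 DRM headers */
    #define IS_I945GM(dev)  ((dev)->pci_device == 0x27A2 || \
                             (dev)->pci_device == 0x27AE)   /* add 945GME */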
Tested-by: Alexander Oltu <alexander@all-2.com>
Signed-off-by: Jesse Barnes <jesse.barnes@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since 2.6.25-rc7, I've been seeing an occasional livelock on one x86_64
machine, copying kernel trees to tmpfs, paging out to swap.
Signature: 6000 pages under writeback but never getting written; most
tasks of interest trying to reclaim, but each get_swap_bio waiting for a
bio in mempool_alloc's io_schedule_timeout(5*HZ); every five seconds an
atomic page allocation failure report from kblockd failing to allocate a
sense_buffer in __scsi_get_command.
__scsi_get_command has a (one item) free_list to protect against this,
but -rc1's commit de25deb180 ("[SCSI] use dynamically allocated sense
buffer") upset that slightly. When it
fails to allocate from the separate sense_slab, instead of giving up, it
must fall back to the command free_list, which is sure to have a
sense_buffer attached.
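A minimal sketch of that fallback in __scsi_get_command, assuming the
host's free_list/free_list_lock fields; the allocator helper name is
illustrative:

    cmd = alloc_cmd_and_sense(shost, gfp_mask);  /* cmd_pool + sense_slab;
                                                    helper name illustrative */
    if (unlikely(!cmd)) {
        unsigned long flags;

        /* sense_slab allocation failed: fall back to the host's
         * one-entry free_list, whose command already carries a
         * sense_buffer, instead of giving up entirely. */
        spin_lock_irqsave(&shost->free_list_lock, flags);
        if (!list_empty(&shost->free_list)) {
            cmd = list_entry(shost->free_list.next, struct scsi_cmnd, list);
            list_del_init(&cmd->list);
        }
        spin_unlock_irqrestore(&shost->free_list_lock, flags);

        if (cmd) {
            unsigned char *buf = cmd->sense_buffer;

            memset(cmd, 0, sizeof(*cmd));
            cmd->sense_buffer = buf;    /* keep the attached buffer */
        }
    }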
Either my earlier -rc testing missed this, or there's some recent
contributory factor. One very significant factor is SLUB, which merges
slab caches when it can, and on 64-bit happens to merge both bio cache
and sense_slab cache into kmalloc's 128-byte cache: so that under this
swapping load, bios above are liable to gobble up all the slots needed
for scsi_cmnd sense_buffers below.
That's disturbing behaviour, and I tried a few things to fix it. Adding
a no-op constructor to the sense_slab inhibits SLUB from merging it, and
stops all the allocation failures I was seeing; but it's rather a hack,
and perhaps in different configurations we have other caches on the
swapout path which are ill-merged.
Another alternative is to revert the separate sense_slab, using
cache-line-aligned sense_buffer allocated beyond scsi_cmnd from the one
kmem_cache; but that might waste more memory, and is only a way of
diverting around the known problem.
While I don't like seeing the allocation failures, and hate the idea of
all those bios piled up above a scsi host working one by one, it does
seem to emerge fairly soon with the livelock fix. So lacking better
ideas, stick with that one clear fix for now.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Preserve all other bits when setting gpio.
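A minimal read-modify-write sketch of what "preserve all other bits"
means here; the register and accessor names are illustrative, not the
driver's actual API:

    u32 val;

    val  = reg_read(GPIO_REG);     /* read current register contents   */
    val &= ~mask;                  /* clear only the bits being set up */
    val |= (state & mask);         /* merge in the requested bits      */
    reg_write(GPIO_REG, val);      /* all other bits stay untouched    */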
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Steven Toth <stoth@hauppauge.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
I noticed this while testing the latest code. I'm not sure if it is required,
but the normal (or LSB) timeout value is set to zero, so the MSB should
be as well to stay consistent.
If the chip revision is >= 8, set MSB of the 16-bit timeout value to zero
when disabling the watchdog in it8712f_wdt_disable().
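The relevant lines of it8712f_wdt_disable() would look roughly like this
(register accessors and names assumed for illustration):

    superio_outb(0, WDT_TIMEOUT);          /* LSB of the timeout       */
    if (revision >= 0x08)
        superio_outb(0, WDT_TIMEOUT + 1);  /* MSB: keep it consistent  */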
Signed-off-by: Andrew Paprocki <andrew@ishiboo.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This reverts commit 7c0ea45be4 which
caused a regression with the backlight being set to off when a laptop
doesn't have a _BQC entry to query the actual backlight value. The code
then blindly falls back on a value of 0.
See
http://bugzilla.kernel.org/show_bug.cgi?id=10387 and
http://lkml.org/lkml/2008/4/2/366
for details.
Bisected-and-reported-by: Andrey Borzenkov <arvidjaar@mail.ru>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In 2.6.14 a patch was merged which switched the order of the ipmi device
naming from in-order-of-discovery over to reverse-order-of-discovery.
So on systems with multiple BMC interfaces, the ipmi device names are being
created in reverse order relative to how they are discovered on the system
(e.g. on an IBM x3950 multinode server with N nodes, the device name for the
BMC in the first node is /dev/ipmiN-1 and the device name for the BMC in the
last node is /dev/ipmi0, etc.).
The problem is caused by the list handling routines chosen in dmi_scan.c.
Using list_add() causes the multiple ipmi devices to be added to the device
list using a stack-paradigm and so the ipmi driver subsequently pulls them off
during initialization in LIFO order. This patch changes the
dmi_save_ipmi_device() list handling paradigm to a queue, thereby allowing the
ipmi driver to build the ipmi device names in the order in which they are
found on the system.
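The fix boils down to switching dmi_save_ipmi_device() from stack to
queue semantics when linking the discovered device (sketch; the list
head name is the one used by dmi_scan.c):

    /* before: LIFO -- devices come back out in reverse discovery order */
    list_add(&dev->list, &dmi_devices);

    /* after: FIFO -- append, so /dev/ipmi0 maps to the first BMC found */
    list_add_tail(&dev->list, &dmi_devices);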
Signed-off-by: Carol Hebert <cah@us.ibm.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The CFI driver in the 2.6.24 kernel is broken. Even not-so-intensive read/write
operations cause incomplete writes, which lead to kernel panics in JFFS2.
We investigated the issue - it is caused by a bug in the FL_SHUTDOWN parsing code.
Sometimes the chip returns -EIO, as if it were in the FL_SHUTDOWN state, when it
should wait in FL_POINT (an error in the order of conditions).
The following patch fixes the bug in the state parsing code of CFI. Also I've
added comments to notify developers if they want to add a new case in future.
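As a rough illustration of the order-of-conditions problem (not the
literal driver code): the specific usable/busy states have to be tested
before the catch-all branch that treats the chip as shut down:

    switch (chip->state) {
    case FL_READY:
    case FL_POINT:
        /* chip is usable: proceed with the operation */
        break;
    case FL_SHUTDOWN:
        /* the machine really is going down: fail the request */
        return -EIO;
    default:
        /* chip is busy: wait and re-check instead of
         * falling through into the -EIO path */
        goto sleep;
    }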
Signed-off-by: Alexey Korolev <akorolev@infradead.org>
Reviewed-by: Joern Engel <joern@logfs.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A boot option for the memory controller was discussed on lkml. It is a good
idea to add it, since it saves memory for people who want to turn off the
memory controller.
By default the option is on for the following two reasons:
1. It provides compatibility with the current scheme where the memory
controller turns on if the config option is enabled
2. It allows for wider testing of the memory controller, once the config
option is enabled
We still allow the create, destroy callbacks to succeed, since they are not
aware of boot options. We do not populate the directory with memory resource
controller specific files.
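A minimal sketch of the "don't populate" behaviour, assuming a disabled
flag on the subsystem (the flag and helper usage are shown as
assumptions, not as the exact committed code):

    static int mem_cgroup_populate(struct cgroup_subsys *ss,
                                   struct cgroup *cont)
    {
        if (mem_cgroup_subsys.disabled)
            return 0;    /* group exists, but gets no memcg files */

        return cgroup_add_files(cont, ss, mem_cgroup_files,
                                ARRAY_SIZE(mem_cgroup_files));
    }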
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The effects of cgroup_disable=foo are:
- foo isn't auto-mounted if you mount all cgroups in a single hierarchy
- foo isn't visible as an individually mountable subsystem
As a result there will only ever be one call to foo->create(), at init time;
all processes will stay in this group, and the group will never be mounted on
a visible hierarchy. Any additional effects (e.g. not allocating metadata)
are up to the foo subsystem.
This doesn't handle early_init subsystems (their "disabled" bit isn't set),
but it could easily be extended to do so if any of the early_init systems
wanted it - I think it would just involve some nastier parameter processing
since it would occur before the command-line argument parser had been run.
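A sketch of how such a boot parameter can be parsed at __setup time,
matching subsystem names and setting their disabled bit (a sketch of the
mechanism using the cgroup core's internal names, not necessarily the
exact mainline code):

    static int __init cgroup_disable(char *str)
    {
        char *token;
        int i;

        while ((token = strsep(&str, ",")) != NULL) {
            if (!*token)
                continue;
            for (i = 0; i < CGROUP_SUBSYS_COUNT; i++) {
                struct cgroup_subsys *ss = subsys[i];

                if (!strcmp(token, ss->name)) {
                    ss->disabled = 1;  /* hidden, never individually mounted */
                    break;
                }
            }
        }
        return 1;
    }
    __setup("cgroup_disable=", cgroup_disable);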
Hugh said:
Ballpark figures, I'm trying to get this question out rather than
processing the exact numbers: CONFIG_CGROUP_MEM_RES_CTLR adds 15% overhead
to the affected paths, booting with cgroup_disable=memory cuts that back to
1% overhead (due to slightly bigger struct page).
I'm no expert on distros, they may have no interest whatever in
CONFIG_CGROUP_MEM_RES_CTLR=y; and the rest of us can easily build with or
without it, or apply the cgroup_disable=memory patches.
Unix bench's execl test result on x86_64 was
== just after boot without mounting any cgroup fs.==
mem_cgroup=off : Execl Throughput    43.0   3150.1   732.6
mem_cgroup=on  : Execl Throughput    43.0   2932.6   682.0
==
[lizf@cn.fujitsu.com: fix boot option parsing]
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Building UP kernel with KGDB enabled produces the following errors and warning
(fatal due to -Werror in arch/mips/kernel/Makefile):
In file included from arch/mips/kernel/gdb-stub.c:142:
include/asm/smp.h:25:1: "raw_smp_processor_id" redefined
In file included from include/linux/sched.h:69,
from arch/mips/kernel/gdb-stub.c:126:
include/linux/smp.h:88:1: this is the location of the previous definition
In file included from arch/mips/kernel/gdb-stub.c:142:
include/asm/smp.h:62: error: redefinition of 'smp_send_reschedule'
include/linux/smp.h:102: error: previous definition of 'smp_send_reschedule' was here
include/asm/smp.h: In function `smp_send_reschedule':
include/asm/smp.h:65: error: dereferencing pointer to incomplete type
arch/mips/kernel/gdb-stub.c: At top level:
arch/mips/kernel/gdb-stub.c:660: warning: 'kgdb_wait' defined but not used
Fix the errors by not directly including <asm/smp.h> (which is already included
by <linux/smp.h>) and the warning by enclosing kgdb_wait() in #ifdef CONFIG_SMP.
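The warning half of the fix amounts to something like this in
gdb-stub.c (a sketch; the function body is elided):

    #ifdef CONFIG_SMP
    /* only referenced from the SMP handling code, so hide it on UP builds */
    static void kgdb_wait(void *arg)
    {
        /* ... existing body unchanged ... */
    }
    #endif /* CONFIG_SMP */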
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Long overdue update of the m68k defconfigs
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The default defconfig should be one from arch/m68k/configs/
arch/m68k/defconfig was not exactly identical to amiga_defconfig, but
considering how long both have gone without any update, that doesn't
seem to have been on purpose.
Signed-off-by: Adrian Bunk <adrian.bunk@movial.fi>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
pata_ali: disable ATAPI DMA
libata: ATA_12/16 doesn't fall into ATAPI_MISC
libata: uninline atapi_cmd_type()
libata: fix IDENTIFY order in ata_bus_probe()
Mikulas Patocka noted that the optimization where we check if a buffer
was already dirty (and we avoid re-dirtying it) was not really SMP-safe.
Since the read of the old status was not synchronized with anything, an
aggressive CPU re-ordering of memory accesses might have moved that read
up to before the data was even written to the buffer, racing with
another CPU that cleaned it again, so the newly dirtied state never
actually hit the disk.
Admittedly this would probably never trigger in practice, but it's still
wrong.
Mikulas sent a patch that fixed the problem, but I dislike the subtlety
of the whole optimization, so this is an alternate fix that is more
explicit about the particular SMP ordering for the optimization, and
separates out the speculative reads of the buffer state into its own
conditional (and makes the memory barrier only happen if we are likely
to actually hit the optimized case in the first place).
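The shape of the fix, as a sketch following the description above (not
necessarily byte-for-byte the committed code): the speculative dirty
check gets its own conditional, and the barrier only runs when the
buffer already looks dirty:

    void mark_buffer_dirty(struct buffer_head *bh)
    {
        /*
         * Speculative check: only if the buffer already looks dirty do
         * we pay for the barrier, which orders the earlier buffer-data
         * writes against the re-read of the dirty bit.
         */
        if (buffer_dirty(bh)) {
            smp_mb();
            if (buffer_dirty(bh))
                return;
        }

        if (!test_set_buffer_dirty(bh)) {
            /* slow path: dirty the page/inode as before (unchanged) */
        }
    }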
I considered removing the optimization entirely, but Andrew argued for
its continued existence. I'm a push-over.
Cc: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit f63fd7e299 ("parport_pc: detection
for SuperIO IT87XX POST") only released the IO port region on success,
not when the probe for the IT87XX chip failed.
That caused not only a reserved region to leak, but also caused an oops
when the driver module was unloaded and somebody tried to cat
/proc/ioports - because the string that was assigned to the IO port
region was a static string in the module virtual address area.
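An illustration of the missing release in the failure path of the
IT87XX probe (the base address, length and 'found' flag are
placeholders):

    if (!request_region(base, 2, __func__))
        return 0;                    /* someone else owns the ports */

    /* ... probe for the IT87XX chip ... */

    if (!found) {
        release_region(base, 2);     /* was missing: region (and its
                                        module-static name string) leaked */
        return 0;
    }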
Reported-by: Lubos Lunak <l.lunak@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Petr Cvek <petr.cvek@tul.cz>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
a) if you initialize something with le32_to_cpu(...), then |= it
with host-endian and feed to cpu_to_le32(), it's most definitely
*not* __le32. As sparse would've told you...
b) the whole sequence is |= cpu_to_le32(host-endian constant)
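In other words (a sketch with made-up field and flag names):

    /* wrong: a variable declared __le32 but holding a host-endian value */
    __le32 flags = le32_to_cpu(desc->flaglen);   /* host-endian, not __le32 */
    flags |= DESC_VALID;                         /* host-endian constant    */
    desc->flaglen = cpu_to_le32(flags);

    /* right: the whole sequence collapses to a single __le32 OR */
    desc->flaglen |= cpu_to_le32(DESC_VALID);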
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
My previous section fix only turned one section problem into another
section problem.
This patch fixes it for real.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The other if blocks don't redeclare temp; remove the redeclaration in
the final if() block.
drivers/net/phy/marvell.c:214:7: warning: symbol 'temp' shadows an earlier one
drivers/net/phy/marvell.c:160:6: originally declared here
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
These entries are allocated in vlan_dev_set_egress_priority,
but are never released, and so they leak on vlan device removal.
Drop these in vlan's ->uninit callback - after the device is
brought down and everyone has been notified that it is going to
be unregistered.
Found while testing the vlan netnsization patchset.
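A sketch of the ->uninit side of the fix, assuming the per-device
egress_priority_map hash used by vlan_dev_set_egress_priority:

    static void vlan_dev_uninit(struct net_device *dev)
    {
        struct vlan_priority_tci_mapping *pm;
        struct vlan_dev_info *vlan = vlan_dev_info(dev);
        int i;

        for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) {
            while ((pm = vlan->egress_priority_map[i]) != NULL) {
                vlan->egress_priority_map[i] = pm->next;
                kfree(pm);   /* entry allocated in set_egress_priority */
            }
        }
    }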
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The commits:
commit 37a47db8d7
Author: Balaji Rao <balajirrao@gmail.com>
Date: Wed Jan 30 13:30:03 2008 +0100
x86: assign IRQs to HPET timers, fix
and
commit e3f37a54f6
Author: Balaji Rao <balajirrao@gmail.com>
Date: Wed Jan 30 13:30:03 2008 +0100
x86: assign IRQs to HPET timers
have been identified to cause a regression on some platforms due to
the assignment of legacy IRQs, which makes the legacy devices
connected to those IRQs non-functional.
Revert them.
This fixes http://bugzilla.kernel.org/show_bug.cgi?id=10382
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We already catch most of the TSC problems by sanity checks, but there
is a subtle bug which has been in the code for ever. This can cause
time jumps in the range of hours.
This was reported in:
http://lkml.org/lkml/2007/8/23/96
and
http://lkml.org/lkml/2008/3/31/23
I was able to reproduce the problem with a gettimeofday loop test on a
dual core and a quad core machine which both have synchronized
TSCs. The TSCs seem not to be perfectly in sync though, but the
kernel is not able to detect the slight delta in the sync check. Still
there exists an extremely small window where this delta can be observed
as a really big time jump. So far I was only able to reproduce this
with the vsyscall gettimeofday implementation, but in theory this
might be observable with the syscall based version as well.
CPU 0 updates the clock source variables under the xtime/vsyscall lock and
CPU 1, where the TSC is slightly behind CPU 0, reads the time right
after the seqlock was unlocked.
The clocksource reference data was updated with the TSC from CPU0 and
the value which is read from TSC on CPU1 is less than the reference
data. This results in a huge delta value due to the unsigned
subtraction of the TSC value and the reference value. This algorithm
can not be changed due to the support of wrapping clock sources like
pm timer.
The huge delta is converted to nanoseconds and added to xtime, which
is then observable by the caller. The next gettimeofday call on CPU1
will show the correct time again as now the TSC has advanced above the
reference value.
To prevent this TSC specific wreckage we need to compare the TSC value
against the reference value and return the latter when it is larger
than the actual TSC value.
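A sketch of that clamp in the TSC clocksource read path (field and
symbol names follow the generic clocksource code, shown as an
illustration rather than the exact committed hunk):

    static cycle_t read_tsc(void)
    {
        cycle_t ret = (cycle_t)get_cycles();

        /* Never return a value behind the last update reference: a
         * slightly-behind TSC would otherwise turn the unsigned delta
         * into a huge forward jump. */
        return ret >= clocksource_tsc.cycle_last ?
               ret : clocksource_tsc.cycle_last;
    }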
I pondered marking the TSC unstable when the readout is smaller than
the reference value, but this would render an otherwise good and fast
clocksource unusable without a really good reason.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Fix obsolete printks in aperture-64. We used not to handle missing
agpgart, but we handle it okay now.
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Fix memory corruption and a crash due to a mis-sized grant-table free list.
A PV OS has two grant table data structures: the grant table itself
and a free list. The free list is composed of an array of pages,
which grow dynamically as the guest OS requires more grants. While
the grant table contains 8-byte entries, the free list contains 4-byte
entries. So the free list needs half as many pages as the grant
table.
There was a bug in the free list allocation code. The free list was
indexed as if it was the same size as the grant table. But it's only
half as large. So memory got corrupted, and I was seeing crashes in
the slab allocator later on.
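The sizing is easiest to see as a sketch, assuming PAGE_SIZE-sized
frames with 8-byte grant entries and 4-byte free-list entries (the
helper name is illustrative):

    #define GREFS_PER_GRANT_FRAME (PAGE_SIZE / sizeof(struct grant_entry)) /* 8-byte */
    #define RPP                   (PAGE_SIZE / sizeof(grant_ref_t))        /* 4-byte */

    /* Number of pages the free list needs for a given grant-table size:
     * index by grant reference and round up, rather than assuming one
     * free-list page per grant-table page. */
    static unsigned int nr_freelist_frames(unsigned int nr_grant_frames)
    {
        return (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
    }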
Taken from:
http://xenbits.xensource.com/linux-2.6.18-xen.hg?rev/4018c0da3360
Signed-off-by: Michael Abd-El-Malek <mabdelmalek@cmu.edu>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2.6.25-rc* stopped working with CONFIG_X86_VSMP on vSMP machines.
Looks like the vsmp irq ops got accidentally removed by commit
6abcd98ffa during the merge of the x86_64 pvops in 2.6.25.
Tested with and without CONFIG_X86_VSMP, on vSMP and non-vSMP
x86_64 machines.
Please apply.
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Right now, if there's no CPU support for nmi_watchdog=2, we just
refuse it silently.
Print a useful warning instead.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
implement nmi_watchdog=2 on this class of CPUs:
cpu family : 15
model : 6
model name : Intel(R) Pentium(R) D CPU 3.00GHz
the watchdog's ->setup() method is safe anyway, so if the CPU
cannot support it we'll bail out safely.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
ATAPI DMA just doesn't work reliably on pata_ali. The IDE driver can
do it but for some mysterious reason, pata_ali can't. This patch
disables it by default and makes the driver whine during
initialization. The "pata_ali.atapi_dma" parameter is added so that the
user can bypass the workaround.
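The override knob is a standard module parameter; roughly (the variable
name is illustrative):

    static int ali_atapi_dma = 0;
    module_param_named(atapi_dma, ali_atapi_dma, int, 0444);
    MODULE_PARM_DESC(atapi_dma, "Enable ATAPI DMA (0=disable, 1=enable)");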
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
SAT passthrus don't really fit into the ATAPI_MISC class. SAT passthru
commands always transfer a multiple of 512 bytes and a variable-length
response is not allowed. This patch creates a separate category -
ATAPI_PASS_THRU - for these.
This fixes HSM violation on "hdparm -I".
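Conceptually, the classification in atapi_cmd_type() becomes (a sketch,
not the literal libata code):

    switch (opcode) {
    case ATA_16:
    case ATA_12:
        /* SAT passthru: fixed 512-byte multiples, no variable-length
         * response, so give it its own category */
        return ATAPI_PASS_THRU;
    default:
        return ATAPI_MISC;
    }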
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Uninline atapi_cmd_type(). It doesn't really have to be inline, and
more cases will be added which need to access an unexported libata
variable.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Commit f58229f806 accidentally made
ata_bus_probe() not use reverse order probing. Fix it.
There currently isn't any PATA driver which uses the obsolete
ata_bus_probe() path, so this patch is mainly for correctness.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The 5784 B step and newer chips require the PHY DSPs to be fine-tuned
based on one-time programmable values stored in the chip. This is
essential to achieve optimal PHY operations especially when using
long cables. We also need to properly handle the 10Mbit RX bit in the
CPMU_CTRL register during PHY reset.
Update version to 3.89.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>