The generic networking code ensures that no two network devices have
the same name, so device_rename, when called from dev_change_name,
should never fail unless sysfs has implementation bugs.
The current handling of errors from device_rename in dev_change_name
is wrong and results in an unusable and unrecoverable network device
if device_rename happens to return an error.
This patch removes the buggy error handling, which confines the mess
to sysfs when device_rename hits a problem instead of propagating it
to the rest of the network stack, making Linux a little more robust.
Without this patch you can observe what happens when sysfs has a bug:
with CONFIG_SYSFS_DEPRECATED unset, attempt to rename a real network
device to a name such as one of (broken_parity_status, device,
modalias, power, resource2, subsystem_vendor, class, driver, irq,
msi_bus, resource, subsystem, uevent, config, enable, local_cpus,
numa_node, resource0, subsystem_device, vendor)
Greg has a patch that fixes the sysfs bugs, but he doesn't trust it
for the 2.6.21 timeframe. This patch, which just ignores the errors,
should be safe and keeps the system from going completely wacky.
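For illustration, a minimal sketch of what the rename path reduces to once
the error handling is gone (not the full dev_change_name() code; the embedded
struct device member name follows the 2.6.21 netdevice layout):
        /* Sketch only: commit the network-level rename and simply ignore
         * any sysfs failure instead of trying to unwind it. */
        device_rename(&dev->dev, dev->name);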
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mention the slab name when listing corrupt objects. Although the function
that released the memory is mentioned, that is frequently ambiguous as such
functions often release several pieces of memory.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev:
libata: Limit ATAPI DMA to R/W commands only for TORiSAN DVD drives (take 3)
libata: Limit max sector to 128 for TORiSAN DVD drives (take 3)
libata: Clear tf before doing request sense (take 3)
libata: reorder HSM_ST_FIRST for easier decoding (take 3)
libata bugfix: preserve LBA bit for HDIO_DRIVE_TASK
2.6.21 fix lba48 bug in libata fill_result_tf()
This adds some NCQ blacklist entries taken from the Silicon Image 3124/3132
Windows driver .inf files. There are some confirming reports of problems
with these drives under Linux (for example http://lkml.org/lkml/2007/3/4/178)
so let's disable NCQ on these drives.
[ I'm personally starting to wonder whether we shouldn't disable NCQ by
default, and perhaps have a white-list. There seems to be a *lot* of
drives that do this wrong.. - Linus ]
Signed-off-by: Robert Hancock <hancockr@shaw.ca>
Acked-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
r8169: fix suspend/resume for down interface
r8169: issue request_irq after the private data are completely initialized
b44: fix IFF_ALLMULTI handling of CAM slots
cxgb3 - Firmware update
cxgb3 - Tighten xgmac workaround
cxgb3 - detect NIC only adapters
cxgb3 - Safeguard TCAM size usage
Wipe the internal irb if the clear function bit is set, before
accumulating bits from the irb, in order to follow hardware behaviour.
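A hedged sketch of the accumulate path with this change (field and constant
names follow the s390 cio code, simplified here):
        /* If the clear function is active, start from a zeroed internal
         * irb before merging in the newly presented bits, as the hardware
         * itself would present them. */
        if (irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC)
                memset(&cdev->private->irb, 0, sizeof(struct irb));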
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The git commit c2fda5fed8, which
added the page_test_and_clear_dirty call to page_mkclean, and the
git commit 7658cc2892, which fixes
the "nasty and subtle race in shared mmap'ed page writeback"
problem in clear_page_dirty_for_io, cause data corruption on s390.
The effect of the two changes is that a page_test_and_clear_dirty
is done for every call to clear_page_dirty_for_io. If the per-page
dirty bit is set, set_page_dirty is called. Strangely,
clear_page_dirty_for_io is called even for not-uptodate pages, e.g.
over this call chain:
[<000000000007c0f2>] clear_page_dirty_for_io+0x12a/0x130
[<000000000007c494>] generic_writepages+0x258/0x3e0
[<000000000007c692>] do_writepages+0x76/0x7c
[<00000000000c7a26>] __writeback_single_inode+0xba/0x3e4
[<00000000000c831a>] sync_sb_inodes+0x23e/0x398
[<00000000000c8802>] writeback_inodes+0x12e/0x140
[<000000000007b9ee>] wb_kupdate+0xd2/0x178
[<000000000007cca2>] pdflush+0x162/0x23c
The bad news now is that page_test_and_clear_dirty might claim
that a not-uptodate page is dirty, since SetPageUptodate, which
resets the per-page dirty bit, has not yet been called. The page
writeback that follows then clobbers the data on disk.
The simplest solution to this problem is to move the call to
page_test_and_clear_dirty under the "if (page_mapped(page))" check.
If a file-backed page is mapped, it is uptodate.
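A hedged sketch of the resulting shape of page_mkclean() (simplified; the
real function also walks the rmap):
        int page_mkclean(struct page *page)
        {
                int ret = 0;

                BUG_ON(!PageLocked(page));

                if (page_mapped(page)) {
                        struct address_space *mapping = page_mapping(page);
                        if (mapping)
                                ret = page_mkclean_file(mapping, page);
                        /* moved here: mapped file-backed pages are uptodate,
                         * so the s390 per-page dirty bit can no longer mark
                         * stale data for writeback */
                        if (page_test_and_clear_dirty(page))
                                ret = 1;
                }

                return ret;
        }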
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
patch 4/4:
Limit ATAPI DMA to R/W commands only for TORiSAN DRD-N216 DVD-ROM drives
(http://bugzilla.kernel.org/show_bug.cgi?id=6710)
Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
patch 3/4:
The TORiSAN drive locks up when max sector == 256.
Limit max sector to 128 for the TORiSAN DRD-N216 drives.
(http://bugzilla.kernel.org/show_bug.cgi?id=6710)
Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
patch 2/4:
Clear tf before doing request sense.
This fixes the AOpen 56X/AKH timeout problem.
(http://bugzilla.kernel.org/show_bug.cgi?id=8244)
Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
patch 1/4:
Reorder HSM_ST_FIRST so that the task state transitions are easier to decode with human eyes.
Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Preserve the LBA bit in the DevSel/Head register for HDIO_DRIVE_TASK.
Signed-off-by: Mark Lord <mlord@pobox.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Current 2.6.21 libata does the following:
void ata_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
{
        struct ata_ioports *ioaddr = &ap->ioaddr;

        tf->command = ata_check_status(ap);
        ...
        if (tf->flags & ATA_TFLAG_LBA48) {
                iowrite8(tf->ctl | ATA_HOB, ioaddr->ctl_addr);
                tf->hob_feature = ioread8(ioaddr->error_addr);
                ...
        }
}
...
static void fill_result_tf(struct ata_queued_cmd *qc)
{
        struct ata_port *ap = qc->ap;

        ap->ops->tf_read(ap, &qc->result_tf);
        qc->result_tf.flags = qc->tf.flags;
}
Based on this, those last two statements in fill_result_tf()
appear to me to be in the wrong order, in that the tf->flags
are uninitialized at the point where tf_read() is invoked.
So for lba48 commands, tf_read() won't be reading back the
full lba48 register contents.
Correct?
This patch corrects fill_result_tf() so that the flags
get copied to result_tf before they are used by tf_read().
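A sketch of the corrected ordering, based on the snippet quoted above:
        static void fill_result_tf(struct ata_queued_cmd *qc)
        {
                struct ata_port *ap = qc->ap;

                /* copy the flags first so tf_read() sees ATA_TFLAG_LBA48
                 * and reads back the HOB registers for lba48 commands */
                qc->result_tf.flags = qc->tf.flags;
                ap->ops->tf_read(ap, &qc->result_tf);
        }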
Signed-off-by: Mark Lord <mlord@pobox.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The PM hooks are no-ops if the r8169 interface is down (i.e. !IFF_UP).
However, as the chipset is still enabled, the device will not work after
a suspend/resume cycle. The patch always issues the required PCI suspend
sequence and removes the module unload/reload workaround.
Signed-off-by: Arnaud Patard <apatard@mandriva.com>
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The irq handler schedules a NAPI poll request unconditionally as soon as
the status register is not clean. It has been there - and wrong - for
ages but a recent timing change made it apparently easier to trigger.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Cc: Jay Cliburn <jacliburn@bellsouth.net>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
If you set the IFF_ALLMULTI flag on a b44 device, or if you join more than
B44_MCAST_TABLE_SIZE multicast groups, the device will stop receiving unicast
messages. This is because the __b44_set_mac_addr call sets the zeroth CAM
entry to the MAC address of the device, and then the loop at line 1722
proceeds to overwrite it unless the value of i is set by the __b44_load_mcast
call. However, when IFF_ALLMULTI is set, that call is bypassed, leaving i set
to zero.
Fixed by starting the loop at 1 to make it skip the CAM entry for the MAC
address.
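A hedged sketch of the fixed loop (abbreviated; the surrounding
__b44_set_rx_mode() code, variables and helper names are as described above):
        i = 1;                          /* CAM entry 0 holds the MAC address */
        if (dev->flags & IFF_ALLMULTI)
                val |= RXCONFIG_ALLMULTI;
        else
                i = __b44_load_mcast(bp, dev);

        for (; i < 64; i++)             /* clear the remaining CAM entries */
                __b44_cam_write(bp, zero, i);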
Signed-off-by: Bill Helfinstine <bhelf@flitterfly.whirpon.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Run the watchdog task when the link is up.
Flush the XGMAC Tx FIFO when the link drops.
Also remove a statistics update that should have gone
in the previous modification of xgmac.c.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Differentiate NIC only adapters from RNICs.
Initialize offload capabilities for RNICs only.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This is a simplified and actually more comprehensive form of a bug
fix from Mitch Williams <mitch.a.williams@intel.com>.
When we mask or unmask an MSI-X irq, the writes may be posted because
we are writing to a memory-mapped region. This means the mask and
unmask don't happen immediately but at some unspecified time in the
future, which is out of sync with how the mask/unmask logic works
for ioapic irqs.
The practical result is that we get very subtle and hard-to-track-down
irq migration bugs.
This patch performs a read flush after writes to the MSI-X table for mask
and unmask operations. Since the SMP affinity is set while the interrupt
is masked, and since it's unmasked immediately after, no additional flushes
are required in the various affinity setting routines.
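A hedged sketch of the flush itself (how the entry address is computed is
omitted, and the variable names are illustrative):
        /* addr points at the vector-control dword of the MSI-X table entry
         * being masked or unmasked */
        writel(ctrl_bits, addr);        /* posted write: may not reach the device yet */
        readl(addr);                    /* read back to force the write to complete   */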
The testing by Mitch Williams on his especially problematic system should
still be valid as I have only simplified the code, not changed the
functionality.
We currently have 7 drivers in 2.6.21 (cciss, mthca, cxgb3, forcedeth,
s2io, pcie/portdrv_core, and qla2xxx) that are affected by this
problem when the hardware they drive is plugged into the right slot.
Given the difficulty of reproducing this bug and tracing it down to
anything that even remotely resembles a cause, we aren't likely to see
many meaningful bug reports even if people are being affected, and the
people who do see it aren't likely to be able to reproduce it in a
timely fashion. So it is best to get this problem fixed as soon as we
can so people don't have problems.
Then if people do have a kernel message stating "No irq for vector" we
will know it is yet another novel cause that needs a complete new
investigation.
Cc: Greg KH <greg@kroah.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SCSI]: Fix scsi_send_eh_cmnd scatterlist handling
[SPARC]: Add unsigned to unused bit field in a.out.h
This fixes a regression caused by commit:
2dc611de5a
The sense buffer code in scsi_send_eh_cmnd was changed to use
alloc_page() and a scatter list, but the sense data copy was not
updated to match, so what we actually get in the sense buffer is total
garbage starting with the kernel address of the struct page we got.
Basically the stack frame of scsi_send_eh_cmnd() is what ends up
in the sense buffer.
Depending upon how pointers look on a given platform, you can
end up getting sr_ioctl.c errors when you mount a cdrom. If
the CDROM gives a check condition for GPCMD_GET_CONFIGURATION issued
by drivers/cdrom/cdrom.c:cdrom_mmc_profile(), sr_ioctl will
spit out this error message in sr_do_ioctl() with the way pointers
are on sparc64:
        default:
                printk(KERN_ERR "%s: CDROM (ioctl) error, command: ", cd->cdi.name);
                __scsi_print_command(cgc->cmd);
                scsi_print_sense_hdr("sr", &sshdr);
                err = -EIO;
This is the error Tom Callaway reported in:
http://marc.info/?l=linux-sparc&m=117407453208101&w=2
Anyway, fix this by using page_address(sgl.page), which is OK
because we know this is lowmem due to GFP_ATOMIC.
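A sketch of the resulting copy (essentially a one-liner once the
scatterlist page is addressed directly):
        /* sgl.page was allocated with GFP_ATOMIC, so it is lowmem and can
         * be addressed directly; copy the real sense data, not the stack */
        memcpy(scmd->sense_buffer, page_address(sgl.page),
               sizeof(scmd->sense_buffer));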
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Christoph Hellwig <hch@lst.de>
Add unsigned to unused bit field in a.out.h to make sparse happy.
[ I took care of the sparc64 side as well -DaveM ]
Signed-off-by: Robert Reif <reif@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The nvram dword alignment logic was broken when writing fewer than 4
bytes at a non-aligned offset. It was missing the logic to round the
length up to a multiple of 4 bytes.
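A hedged sketch of the missing rounding (variable names are illustrative,
not the driver's exact code):
        /* expand a sub-dword write to full 4-byte boundaries before the
         * read-modify-write of the affected NVRAM words */
        u32 aligned_start = offset & ~3U;
        u32 aligned_end   = (offset + len + 3) & ~3U;
        u32 aligned_len   = aligned_end - aligned_start;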
The page erase code is also moved so that it is only called when
using non-buffered flash for better code clarity.
Update version to 1.5.7.
Based on initial patch from Tony Cureington <tony.cureington@hp.com>.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In article <20070329.142644.70222545.davem@davemloft.net> (at Thu, 29 Mar 2007 14:26:44 -0700 (PDT)), David Miller <davem@davemloft.net> says:
> From: Sridhar Samudrala <sri@us.ibm.com>
> Date: Thu, 29 Mar 2007 14:17:28 -0700
>
> > The check for length in rawv6_sendmsg() is incorrect.
> > As len is an unsigned int, (len < 0) will never be TRUE.
> > I think checking for IPV6_MAXPLEN(65535) is better.
> >
> > Is it possible to send ipv6 jumbo packets using raw
> > sockets? If so, we can remove this check.
>
> I don't see why such a limitation against jumbo would exist,
> does anyone else?
>
> Thanks for catching this Sridhar. A good compiler should simply
> fail to compile "if (x < 0)" when 'x' is an unsigned type, don't
> you think :-)
Dave, we use "int" for the return value,
so we should fix this anyway, IMHO;
we should not allow len > INT_MAX.
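A sketch of the corrected check in rawv6_sendmsg() (the error code used here
is an assumption):
        /* len is unsigned, so bound it against INT_MAX instead of testing
         * for a negative value that can never occur */
        if (len > INT_MAX)
                return -EMSGSIZE;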
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This changes the "not found" error return for the lookup
function to -ESRCH so that it can be distinguished from
the case where a rule or route resulting in -ENETUNREACH
has been found during the search.
It fixes a bug where if DECnet was compiled with routing
support, but no routes were added to the routing table,
it was failing to fall back to endnode routing.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Patrick Caulfield <pcaulfie@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6:
[PATCH] x86: Don't probe for DDC on VBE1.2
[PATCH] x86-64: Increase NMI watchdog probing timeout
[PATCH] x86-64: Let oprofile reserve MSR on all CPUs
[PATCH] x86-64: Disable local APIC timer use on AMD systems with C1E
The __copy_to_user_inatomic() calls in file_read_actor() and pipe_read()
are broken on original i386 machines, where WP-works-ok == false, as
__copy_to_user_inatomic() on such systems calls functions which might
sleep and/or contain cond_resched() calls inside of a kmap_atomic()
region.
The original check for WP-works-ok was in access_ok(), but got moved
during the 2.5 series to fix a race vs. swap.
Return the number of bytes to copy in the case where we are in an atomic
region, so the non-atomic code paths in file_read_actor() and
pipe_read() are taken.
This could be optimized to avoid the kmap_atomic by moving the check for
WP-works-ok into fault_in_pages_writeable(), but this is more intrusive
and can be done later.
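A hedged sketch of the idea only, not the actual patch; the wrapper name is
hypothetical:
        static unsigned long
        copy_to_user_wp_checked(void __user *to, const void *from, unsigned long n)
        {
                /* if the CPU cannot enforce write protection from kernel
                 * mode, report every byte as uncopied so file_read_actor()
                 * and pipe_read() fall back to their non-atomic paths */
                if (unlikely(!boot_cpu_data.wp_works_ok))
                        return n;
                return __copy_to_user_inatomic(to, from, n);
        }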
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On a multiprocessor machine the VT_WAITACTIVE ioctl call may return 0 if
fg_console has already been updated in redraw_screen() but the console
switch itself hasn't been completed. Fix this by checking fg_console in
vt_waitactive() with the console sem held.
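A hedged sketch of the re-check inside the wait loop of vt_waitactive()
(the surrounding loop is omitted):
        acquire_console_sem();
        if (vt == fg_console) {
                /* with the console sem held, fg_console == vt means the
                 * console switch has fully completed */
                release_console_sem();
                break;
        }
        release_console_sem();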
Signed-off-by: Michal Januszewski <spock@gentoo.org>
Acked-by: Antonino Daplas <adaplas@pol.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix the regression resulting from the recent change of suspend code
ordering that causes systems based on Intel x86 CPUs using the microcode
driver to hang during the resume.
The problem occurs because the microcode driver uses request_firmware() in
its CPU hotplug notifier, which is called after tasks have been frozen and
therefore hangs. It can be fixed by telling the microcode driver to use the
microcode stored in memory during the resume instead of trying to load it
from disk.
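A hedged, purely illustrative sketch of the notifier logic; the flag and
helper names below are hypothetical, not the microcode driver's real
identifiers:
        static int mc_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                int cpu = (long)hcpu;

                if (action == CPU_ONLINE) {
                        if (resuming)                             /* hypothetical flag */
                                apply_microcode_from_memory(cpu); /* reuse saved image */
                        else
                                load_microcode_from_disk(cpu);    /* request_firmware() path */
                }
                return NOTIFY_OK;
        }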
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Adrian Bunk <bunk@stusta.de>
Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Maxim <maximlevitsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Lockdep reported cmos_suspend() and cmos_resume() calling rtc_update_irq()
with IRQs enabled, which is not allowed.
Also fix problems seen on some hardware, whereby false alarm IRQs could be
reported (primarily to userspace); and update two comments to match changes
in ACPI. Those make up most of this patch, by volume.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change the PnP resource handling code to use the proper type for resource
start and length. This fixes bogus regions reported in /proc/iomem.
I've also made some pointers constant, as they are constant...
Signed-off-by: Petr Vandrovec <petr@vandrovec.name>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Adam Belay <ambx1@neo.rr.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Revert e92a4d595b.
Dmitry points out:
"When block_prepare_write() fails inside ext3_prepare_write() we jump to
the "failure" label and call ext3_prepare_failure(), which searches for the
last mapped bh and invokes commit_write up to it. This is wrong, because
some bh from the beginning up to the last mapped bh may not be uptodate.
As a result we commit to disk not-uptodate page content, which contains
garbage from previous usage."
and
"Unexpected file size increase."
The call trace is the same as in the first issue but the result is
different. For example, we have a file whose i_size is zero and we want
to write two blocks, but the fs has only one free block.
->ext3_prepare_write(...from == 0, to == 2048)
retry:
    ->block_prepare_write() == -ENOSPC   # we failed, but allocated one block here
    ->ext3_prepare_failure()
        ->commit_write(from == 0, to == 1024)   # after this i_size becomes 1024 :)
    if (ret == -ENOSPC && ext3_should_retry_alloc(inode->i_sb, &retries))
        goto retry;
Finally, when all retries have been spent, ext3_prepare_failure returns
-ENOSPC, but i_size has been increased and the later block trimming
procedures can't help here.
We don't appear to have the horsepower to fix these issues, so let's put
things back the way they were for now.
Cc: Kirill Korotaev <dev@openvz.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ken Chen <kenneth.w.chen@intel.com>
Cc: Andrey Savochkin <saw@sw.ru>
Cc: <linux-ext4@vger.kernel.org>
Cc: Dmitriy Monakhov <dmonakhov@openvz.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When the dump cannot occur, most likely because of a full file system,
and the page to be written is the zero page, the call to
page_cache_release() is missed.
Signed-off-by: Brian Pomerantz <bapper@mvista.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: David Howells <dhowells@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It seems that there must be at least one node in mems and at least one CPU
in cpus in order to be able to assign tasks to a cpuset. This makes sense.
And I think it would also make sense to include a mems setting in the
basic usage section of the documentation.
I also wonder if something logged to dmesg, explaining why a write failed,
would be a good enhancement. I ended up having to rummage around in cpuset.c
in order to work out why my configuration was failing.
Signed-off-by: Simon Horman <horms@verge.net.au>
Acked-by: Paul Jackson <pj@sgi.com>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix an off-by-one spotted by the Coverity checker.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Ben Dooks <ben-linux@fluff.org>
Cc: Vincent Sanders <vince@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently we have a confused udelay implementation:
* __const_udelay does not accept usecs but xloops on i386 and x86_64
* our implementation requires usecs as its argument
* it gets an xloops count when called via asm/arch/delay.h
Bugs related to this (extremely long shutdown times) were reported by some
x86_64 users, especially those using Device Mapper.
To hit this bug, a compile-time constant time parameter must be passed -
that's why UML seems to work most of the time. Fix this with a simple
udelay implementation.
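A hedged sketch of such a simple implementation (the calibration against
loops_per_jiffy is illustrative):
        void __udelay(unsigned long usecs)
        {
                unsigned long i, n;

                /* take real microseconds, not pre-scaled xloops */
                n = (loops_per_jiffy * HZ * usecs) / 1000000UL;
                for (i = 0; i < n; i++)
                        cpu_relax();
        }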
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We're using #ifdef CONFIG_SYSCTL, but we should be using CONFIG_PROC_SYSCTL,
so we get
fs/built-in.o: In function `proc_root_init':
/usr/src/linux/fs/proc/root.c:83: undefined reference to `proc_sys_init'
Fix that up and remove an ifdef-in-C.
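A hedged sketch of the header-side guard that replaces the in-C #ifdef
(the exact prototype is an assumption):
        #ifdef CONFIG_PROC_SYSCTL
        extern int proc_sys_init(void);
        #else
        static inline int proc_sys_init(void) { return 0; }
        #endif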
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Helge Hafting <helgehaf@aitel.hist.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The ADEF bits in the TSCR register have different meanings in read and
write mode. For this reason ADEF has to be reset on every
read-modify-write operation.
This patch introduces a special write function for this register, which
takes care of it.
Thanks to Holger Magnussen for pointing my nose at this problem.
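A hedged, generic sketch of such a write helper; the register address, field
mask, state type and I/O helpers below are hypothetical, not the driver's
actual identifiers:
        #define TSCR_REG   0x40         /* hypothetical register address */
        #define TSCR_ADEF  0x07         /* hypothetical ADEF field mask  */

        static int tscr_write(struct demod_state *st, u8 mask, u8 bits)
        {
                u8 val = demod_readreg(st, TSCR_REG);

                /* ADEF reads back with a different meaning, so never
                 * write it back as read */
                val &= ~TSCR_ADEF;
                val = (val & ~mask) | (bits & mask);
                return demod_writereg(st, TSCR_REG, val);
        }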
Signed-off-by: Andreas Oberritter <obi@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
Setting the message length to zero means to send one byte, so you need a
subtraction instead of an addition.
Signed-off-by: Andreas Oberritter <obi@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
VBE 1.2 doesn't support function 15h (DDC), resulting in a 'hang' while
uncompressing the kernel with some video cards. Make sure we check the VBE
version before fiddling around with DDC.
http://bugzilla.kernel.org/show_bug.cgi?id=1458
Opened: 2003-10-30 09:12 Last update: 2007-02-13 22:03
Much thanks to Tobias Hain for help in testing and investigating the bug.
Tested on:
i386, Chips & Technologies 65548 VESA VBE 1.2
CONFIG_VIDEO_SELECT=Y
CONFIG_FIRMWARE_EDID=Y
Untested on x86_64.
Signed-off-by: Zwane Mwaikambo <zwane@infradead.org>
Signed-off-by: Andi Kleen <ak@suse.de>
The MSR reservation is per CPU and oprofile would only allocate them
on the CPU it was initialized on. Change this to handle all CPUs.
This also fixes a warning about unprotected use of smp_processor_id()
in preemptible kernels.
Signed-off-by: Andi Kleen <ak@suse.de>