SMP balancing is done with IRQs disabled and can iterate the full rq.
When rqs are large this can cause long IRQ latencies. Limit the number
of iterations on each run.
This fixes a scheduling latency regression reported by the -rt folks.
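As a rough sketch of the idea (illustrative only, not the scheduler source; the
limit name, default value and data structures below are made up for the example),
capping the per-pass scan keeps the IRQs-off section bounded:

struct task_sketch {
	struct task_sketch *next;
	unsigned long weight;
};

static unsigned int nr_migrate_limit = 32;	/* assumed default */

/* Scan at most nr_migrate_limit tasks per balance pass so the
 * IRQs-off section stays bounded even on very long runqueues. */
static unsigned long scan_for_balance(struct task_sketch *head,
				      unsigned long max_load)
{
	unsigned long pulled = 0;
	unsigned int scanned = 0;
	struct task_sketch *p;

	for (p = head; p; p = p->next) {
		if (scanned++ >= nr_migrate_limit)
			break;			/* cap the IRQs-off work */
		if (pulled + p->weight > max_load)
			continue;
		pulled += p->weight;		/* "migrate" this task */
	}
	return pulled;
}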
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Sukadev Bhattiprolu reported a kernel crash with control groups.
There are a couple of problems discovered by Suka's test:
- The test requires the cgroup filesystem to be mounted with
at least the cpu and ns options (i.e. both the namespace and cpu
controllers are active in the same hierarchy).
# mkdir /dev/cpuctl
# mount -t cgroup -ocpu,ns none cpuctl
(or simply)
# mount -t cgroup none cpuctl   (activates all controllers in the same hierarchy)
- The test invokes clone() with CLONE_NEWNS set. This causes a new child
to be created, as well as a new group (do_fork->copy_namespaces->ns_cgroup_clone->
cgroup_clone), and the child is attached to the new group (cgroup_clone->
attach_task->sched_move_task). At this point in time, the child's scheduler
related fields are uninitialized (including its on_rq field, which it has
inherited from the parent). As a result sched_move_task thinks it is on a
runqueue, when it isn't.
As a solution to this problem, I moved the sched_fork() call, which
initializes scheduler related fields on a new task, before
copy_namespaces(). I am not sure though whether moving it up will
cause other side-effects. Do you see any issue?
- The second problem exposed by this test is that task_new_fair()
assumes that parent and child will be part of the same group (which
need not be the case, as this test shows). As a result, cfs_rq->curr can
be NULL for the child.
The solution is to test for the curr pointer being NULL in
task_new_fair().
With the patch below, I could run ns_exec() fine w/o a crash.
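(The original patch itself is not reproduced here; the following is only an
illustrative sketch of the NULL check, with simplified stand-in structures
rather than the kernel definitions.)

struct sched_entity_sketch {
	unsigned long long vruntime;
};

struct cfs_rq_sketch {
	struct sched_entity_sketch *curr; /* NULL if no entity of this group is running */
	unsigned long long min_vruntime;
};

/* Parent and child may live in different groups, so curr can be NULL. */
static void task_new_fair_sketch(struct cfs_rq_sketch *cfs_rq,
				 struct sched_entity_sketch *se)
{
	unsigned long long vruntime = cfs_rq->min_vruntime;

	if (cfs_rq->curr)
		vruntime = cfs_rq->curr->vruntime;

	se->vruntime = vruntime;
}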
Reported-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
clean up the preemption check to not use unnecessary 64-bit
variables. This improves code size:
text data bss dec hex filename
44227 3326 36 47589 b9e5 sched.o.before
44201 3326 36 47563 b9cb sched.o.after
Signed-off-by: Ingo Molnar <mingo@elte.hu>
wakeup preemption fix: do not make it dependent on p->prio.
Preemption purely depends on ->vruntime.
This improves preemption in mixed-nice-level workloads.
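A sketch of what a purely vruntime-based check looks like (names and the
granularity handling are placeholders, not the actual check_preempt code):

#include <stdint.h>

static int should_preempt_on_wakeup(uint64_t curr_vruntime,
				    uint64_t wakee_vruntime,
				    uint64_t wakeup_gran)
{
	/* signed delta copes with vruntime wrap-around */
	int64_t delta = (int64_t)(curr_vruntime - wakee_vruntime);

	/* preempt only if the wakee is sufficiently behind in virtual
	 * time; nice levels only enter via the vruntime bookkeeping */
	return delta > (int64_t)wakeup_gran;
}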
Signed-off-by: Ingo Molnar <mingo@elte.hu>
remove PREEMPT_RESTRICT. (this is a separate commit so that any
regression related to the removal itself is bisectable)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
PREEMPT_RESTRICT was a method aimed at reducing the amount of wakeup
related preemption. It has a disadvantage though: it can prevent
legitimate wakeups if a task is 'unlucky' enough to be hit too early by a
tick that clears peer_preempt.
Now that the wakeup preemption has been cleaned up we don't seem to have
excessive preemptions anymore, so this feature can be turned off. (and
removed in the next patch)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
fix a !SMP build error:
drivers/kvm/kvm_main.c: In function 'kvm_flush_remote_tlbs':
drivers/kvm/kvm_main.c:220: error: implicit declaration of function 'smp_call_function_mask'
(and also avoid an unused function warning related to up_smp_call_function()
not making use of the 'func' parameter.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
prepare for up_smp_call_function() to ensure that the 'func'
pointer is unused. (which is related to a KVM build fix)
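Roughly, the UP stub ends up shaped like this (a sketch, not the exact header):
passing 'func' through the macro means the caller's callback is referenced even
on UP builds, so it no longer triggers a defined-but-unused warning.

typedef void (*call_func_t)(void *info);

static inline int up_smp_call_function(call_func_t func, void *info)
{
	(void)func;	/* the parameter itself is deliberately unused */
	(void)info;
	return 0;	/* nothing to do with only one CPU */
}

#define smp_call_function_sketch(func, info, wait) \
	up_smp_call_function((func), (info))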
Signed-off-by: Ingo Molnar <mingo@elte.hu>
1) The hardcoded 1000000000 value is used five times in places where
NSEC_PER_SEC would be more readable.
2) A conversion from nsec to msec uses the hardcoded 1000000 value,
which is a candidate for NSEC_PER_MSEC.
no code changed:
text data bss dec hex filename
44359 3326 36 47721 ba69 sched.o.before
44359 3326 36 47721 ba69 sched.o.after
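For illustration, the shape of the change (no behavioral difference, just
named constants; the helpers below are examples, not the patched call sites):

#define NSEC_PER_SEC	1000000000L
#define NSEC_PER_MSEC	1000000L

static inline unsigned long long ns_to_msec(unsigned long long ns)
{
	return ns / NSEC_PER_MSEC;	/* instead of ns / 1000000 */
}

static inline unsigned long long sec_to_ns(unsigned long long sec)
{
	return sec * NSEC_PER_SEC;	/* instead of sec * 1000000000 */
}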
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Yanmin Zhang reported an aim7 regression and bisected it down to:
| commit 38ad464d41
| Author: Ingo Molnar <mingo@elte.hu>
| Date: Mon Oct 15 17:00:02 2007 +0200
|
| sched: uniform tunings
|
| use the same defaults on both UP and SMP.
fix this by reintroducing similar SMP tunings. This resolves
the regression.
(also update the comments to match the ilog2(nr_cpus) tuning effect)
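A sketch of the tuning effect (the helper and scaling below are illustrative,
not the kernel's granularity setup code):

static unsigned int ilog2_sketch(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* scale a UP default by 1 + ilog2(nr_cpus): e.g. 4 CPUs -> factor 3 */
static unsigned long long scale_for_smp(unsigned long long base_ns,
					unsigned int nr_cpus)
{
	return base_ns * (1 + ilog2_sketch(nr_cpus));
}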
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Since powerpc started using CONFIG_GENERIC_CLOCKEVENTS, the
deterministic CPU accounting (CONFIG_VIRT_CPU_ACCOUNTING) has been
broken on powerpc, because we end up counting user time twice: once in
timer_interrupt() and once in update_process_times().
This fixes the problem by pulling the code in update_process_times
that updates utime and stime into a separate function called
account_process_tick. If CONFIG_VIRT_CPU_ACCOUNTING is not defined,
there is a version of account_process_tick in kernel/timer.c that
simply accounts a whole tick to either utime or stime as before. If
CONFIG_VIRT_CPU_ACCOUNTING is defined, then arch code gets to
implement account_process_tick.
This also lets us simplify the s390 code a bit; it means that the s390
timer interrupt can now call update_process_times even when
CONFIG_VIRT_CPU_ACCOUNTING is turned on, and can just implement a
suitable account_process_tick().
account_process_tick() now takes the task_struct * as an argument.
Tested both with and without CONFIG_VIRT_CPU_ACCOUNTING.
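In simplified form, the generic tick path looks roughly like this (a sketch
with made-up names, not the kernel/timer.c source); architectures with
CONFIG_VIRT_CPU_ACCOUNTING provide their own account_process_tick() with
finer-grained accounting:

struct task_times_sketch {
	unsigned long utime;	/* user time, in ticks for this sketch */
	unsigned long stime;	/* system time, in ticks */
};

#ifndef VIRT_CPU_ACCOUNTING_SKETCH
static void account_process_tick_sketch(struct task_times_sketch *t,
					int user_tick)
{
	if (user_tick)
		t->utime++;	/* whole tick charged to user time */
	else
		t->stime++;	/* whole tick charged to system time */
}
#endif	/* otherwise the architecture supplies the implementation */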
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Fix the delay accounting regression introduced by commit
75d4ef16a6. rq no longer has sched_info
data associated with it. The task_struct sched_info structure is used by
delay accounting to report statistics back to user space.
Also remove the direct use of sched_clock() (which is no longer a valid
thing to do) and use rq->clock instead.
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
we lost the sched_min_granularity tunable to a clever optimization
that uses the sched_latency/min_granularity ratio - but the ratio
is quite unintuitive to users and can also crash the kernel if the
ratio is set to 0. So reintroduce the min_granularity tunable,
while keeping the ratio maintained internally.
no functionality changed.
[ mingo@elte.hu: some fixlets. ]
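The internal bookkeeping can be pictured like this (a sketch with made-up
names and example values, not the actual sysctl handler):

static unsigned long long sched_latency_ns         = 20000000ULL; /* example */
static unsigned long long sched_min_granularity_ns =  4000000ULL; /* example */
static unsigned int sched_nr_latency;			/* derived ratio */

/* recompute the internal ratio whenever either tunable is written */
static void update_sched_nr_latency(void)
{
	unsigned long long gran = sched_min_granularity_ns;

	if (!gran)
		gran = 1;	/* never divide by zero */

	sched_nr_latency = (unsigned int)(sched_latency_ns / gran);
}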
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add a few comments to place_entity(). No code changed.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
vslice was missing a factor of NICE_0_LOAD, as weight is in
weight*NICE_0_LOAD units.
the effect of this bug was larger initial slices and
thus latency-noisier forks.
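In unit terms the fix amounts to something like the following simplified
sketch (NICE_0_LOAD is the scheduler's unit weight; the function name is a
placeholder, not the kernel's vslice code):

#define NICE_0_LOAD	1024ULL		/* the unit weight */

/* convert a wall-clock slice into virtual time for a given weight */
static unsigned long long vslice_sketch(unsigned long long slice_ns,
					unsigned long long weight)
{
	/* weight is expressed in NICE_0_LOAD units, so converting
	 * wall-clock time to virtual time needs the NICE_0_LOAD
	 * factor in the numerator */
	return slice_ns * NICE_0_LOAD / weight;
}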
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
On Altix (sn2) machines the "Error parsing MADT" message is
misleading because the lack of IOSAPIC entries is expected.
Since I am sure someone will ask, I have been told that
the chance of this changing anytime soon is close to nil.
Signed-off-by: George Beshers <gbeshers@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Newer Itanium versions have added additional processor feature set
bits. This patch prints all the implemented feature set bits. Some
bit descriptions have not been made public. For those bits, a generic
"Feature set X bit Y" message is printed. Bits that are not implemented
will no longer be printed.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Fix the problem that the redirect hint bit in the I/O SAPIC RTE is set
even when it must be disabled (e.g. when the nointroute boot option is
set, CPU hotplug is enabled, or the percpu vector is enabled).
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Currently, XPC's heartbeat timer function runs on whatever CPU modprobe/insmod
ran on when XPC was started. To keep the heartbeat from being delayed for
long periods, the timer function must run on CPU 0.
N.B. Altix doesn't currently allow cpu0 to be taken offline, so this is
safe for now. This code must be revised when offline of cpu0 is enabled.
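A kernel-style sketch of pinning such a timer (era-appropriate timer API;
the names and period are illustrative, not the XPC source):

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list hb_timer;

static void hb_beat(unsigned long ignore)
{
	/* ... emit the heartbeat ... */
	hb_timer.expires = jiffies + HZ;
	add_timer_on(&hb_timer, 0);	/* re-arm on CPU 0 */
}

static void hb_start(void)
{
	init_timer(&hb_timer);
	hb_timer.function = hb_beat;
	hb_timer.expires = jiffies + HZ;
	add_timer_on(&hb_timer, 0);	/* first expiry on CPU 0 */
}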
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
When the code calls uncork, trigger a queue flush, even
if the queue was not corked. Most callers that explicitly
cork the queue will have additional checks to see if they
corked it. Callers who do not cork the queue expect packets
to flow when they call uncork.
The scenario that showcased this bug happened when we were not
able to bundle DATA with an outgoing COOKIE-ECHO. As a result
the data just sat in the outqueue and did not get transmitted.
The application expected a response, but nothing happened.
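The behavioral change can be sketched like this (simplified stand-ins, not
the SCTP outqueue code):

struct outq_sketch {
	int cork;
	/* ... queued chunks ... */
};

static int flush_queue(struct outq_sketch *q)
{
	/* transmit whatever is queued; 0 on success */
	return 0;
}

static int uncork_queue(struct outq_sketch *q)
{
	if (q->cork)
		q->cork = 0;

	/* flush unconditionally: callers that never corked still expect
	 * their queued packets to go out when they uncork */
	return flush_queue(q);
}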
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
There is a small bug when we process a FWD-TSN. We'll deliver
anything up to the current next expected SSN. However, if the
next expected SSN is already in the queue, it will take another
chunk to trigger its delivery. The fix is to simply check whether
the currently queued SSN is the next expected one.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
SCTP-AUTH and future ADD-IP updates have a requirement to
do additional verification of parameters and an ability to
ABORT the association if verification fails. So, introduce an
additional return code so that we can clearly signal a required
action.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
A SCTP endpoint may have a lot of associations on it, and walking
the list is fairly inefficient. Instead, use a hashed lookup,
and filter the hash list based on the endpoint we already have.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Added a blk_unplug interface, allowing all invocations of unplug to result
in a generated blktrace UNPLUG event.
Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Credit goes to juergen.kadidlo@exasol.com for diagnosing this issue
and supplying the initial patch.
blk_queue_invalidate_tags() must use the proper requeueing paths instead
of open coding the re-add of the request, otherwise we bug out in rq
accounting. Just switch to using blk_requeue_request(), that takes care
of end-tag handling as well and also adds the blktrace REQUEUE notify
event that is also appropriate here.
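In outline, the fix boils down to using the helper (a sketch of the pattern,
not the exact block-layer code):

#include <linux/blkdev.h>
#include <linux/list.h>

/* requeue every tagged request through the proper path */
static void invalidate_tags_sketch(struct request_queue *q,
				   struct list_head *tag_busy_list)
{
	struct request *rq, *tmp;

	list_for_each_entry_safe(rq, tmp, tag_busy_list, queuelist)
		blk_requeue_request(q, rq);	/* ends the tag and emits
						 * the blktrace REQUEUE event */
}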
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
One-shot timer mode on PXA has various bugs which prevent kernels
built with NO_HZ enabled from booting. They end up spinning on a
permanently asserted timer interrupt because we don't properly
clear it down - clearing the OIER bit does not clear the pending
interrupt status. Fix this in the set_mode handler as well.
Moreover, the code which sets the next expiry point may race with
the hardware, and we might not set the match register sufficiently
in the future. If we encounter that situation, return -ETIME so
the generic time code retries.
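The race check amounts to something like the following (the pointers stand
in for the OS timer count/match registers; this is a sketch, not the actual
PXA code):

#include <errno.h>
#include <stdint.h>

static int set_next_event_sketch(volatile uint32_t *oscr,  /* free-running counter */
				 volatile uint32_t *osmr,  /* match register */
				 uint32_t delta)
{
	uint32_t next = *oscr + delta;

	*osmr = next;

	/* signed comparison handles counter wrap; if the counter already
	 * passed the match, tell the generic code to retry */
	if ((int32_t)(next - *oscr) <= 0)
		return -ETIME;

	return 0;
}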
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cyberpro: when the user requests 16bpp, use it rather than 24bpp.
There was a missing break causing requests for 16bpp mode
to end up in 24bpp mode.
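For illustration, the shape of the bug (a sketch, not the driver source):

static void set_pixel_format_sketch(int bits_per_pixel)
{
	switch (bits_per_pixel) {
	case 8:
		/* program 8bpp registers */
		break;
	case 16:
		/* program 16bpp registers */
		break;	/* this break was missing, so 16bpp fell into 24bpp */
	case 24:
		/* program 24bpp registers */
		break;
	}
}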
Signed-off-by: Jan Rinze Peterzon <janrinze@home.nl>
Acked-by: Ralph Siemsen <ralphs@netwinder.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When mounted with cifsacl mount option, readdir can not
instantiate the inode with the estimated mode based on the ACL
for each file since we have not queried for the ACL for
each of these files yet. So set the refresh time to zero
for these inodes so that the next stat will cause the client
to go to the server for the ACL info so we can build the estimated
mode (this means we also will issue an extra QueryPathInfo if
the stat happens within 1 second, but this is trivial compared to
the time required to open/getacl/close for each).
ls -l is slower when the cifsacl mount option is specified, but
displays correct mode information.
Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Port / host stop calls used to be made from ata_host_release() which
is called after all hardware resources acquired after host allocation
are released. This is wrong as port and host stop routines often
access the hardware.
Add separate devres for port / host stop which is invoked right after
IRQ is released but with all other hardware resources intact. The
devres is added iff ->host_stop and/or ->port_stop exist.
This problem has been spotted by Mark Lord.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: Mark Lord <liml@rtr.ca>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
In a presentation of true workmanship, pata_ali asserts IRQ
permanently if the TF status register is read more than once when
there's no device attached to the port.
Avoid polling for !0xff if it's PATA. It's needed only for
some rare SATA devices anyway.
This problem is reported by Luca Tettamanti in bugzilla bug 9298.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Tested-By: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Some SH boards (old R2D-1 boards) have generally not had working CF
under libata, due to both buswidth issues (handled by Aoi Shinkai
in 43f4b8c757), and buggy interrupt
controllers. For these sorts of boards simply disabling the IRQ and
polling ends up working fine.
This conditionalizes the IRQ resource for pata_platform and lets
platforms that want to use polling mode simply omit the resource
entirely.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
By default ata_host_activate() expects a valid IRQ in order to
successfully register the host. This patch enables a special case
for registering polling-only hosts that either don't have IRQs
or have buggy IRQ generation (either in terms of handling or
sensing), which otherwise work fine.
Hosts that want to use polling mode can simply set ATA_FLAG_PIO_POLLING
and pass in an invalid IRQ.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
sata_qstor conversion to new error handling (EH).
Convert sata_qstor to use the newer libata EH mechanisms.
Based on earlier work by Jeff Garzik.
Signed-off-by: Mark Lord <mlord@pobox.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
sata_qstor workaround for spurious interrupts.
The qstor hardware generates spurious interrupts from time to time when
switching in and out of packet mode. These eventually result in the
IRQ being disabled, which kills other devices sharing this IRQ with us.
This workaround isn't perfect, but it's about the best we can do for
this hardware. Spurious interrupts will still happen, but won't be
logged as such, and therefore won't cause the IRQ to be inadvertently
disabled.
Signed-off-by: Mark Lord <mlord@pobox.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
sata_qstor nuke idle state.
We're really only ever in one of two hardware states: packet, or mmio.
Get rid of unnecessary "qs_state_idle" state.
Signed-off-by: Mark Lord <mlord@pobox.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Please warmly welcome the PRO variant of Satellite U200 to the broken
suspend list.
Original patch is from Yann Chachkoff. Patch reformatted and
forwarded by Tejun Heo.
Signed-off-by: Yann Chachkoff <yann.chachkoff@myrealbox.com>
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
When mounted with the cifsacl mount option, we were
treating any deny ACEs found like allow ACEs, and it turns out that for
SFU and SUA Windows sets these types of access control entries often.
The order of ACEs is important too. The canonical order in which most
ACL tools and Windows Explorer construct ACLs is to begin with
DENY entries and then follow with ALLOW entries; otherwise an allow entry
could be encountered first, making the subsequent deny entry like "dead
code", which would be superfluous since Windows stops when a match is
made for the operation you are trying to perform for your user.
We start with no permissions in the mode and build up as we find
permissions (i.e. allow ACEs). This fixes deny ACEs so they affect
the mask used to set the subsequent allow ACEs.
Acked-by: Shirish Pargaonkar <shirishp@us.ibm.com>
CC: Alexander Bokovoy <ab@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Adds the uid to the key description for supporting user mounts,
plus minor formatting changes.
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Igor Mammedov <niallain@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
crypto/crc32.c:chksum_final() is computing the digest as
*(__le32 *)out = ~cpu_to_le32(mctx->crc);
so the low-level crc32c_le routines should just keep
the crc in cpu order, otherwise it is getting swabbed
one too many times on big-endian machines.
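Put differently, the contract is roughly this (a sketch, not the crypto layer
source): the low-level update keeps the CRC in CPU order, and only the final
step complements it and stores it little-endian.

#include <stdint.h>

static void chksum_final_sketch(uint32_t crc_cpu_order, uint8_t *out)
{
	uint32_t digest = ~crc_cpu_order;	/* final complement */

	/* store little-endian regardless of host endianness, i.e. the
	 * single byte swap happens here and nowhere in the low-level
	 * crc32c_le routines */
	out[0] = digest & 0xff;
	out[1] = (digest >> 8) & 0xff;
	out[2] = (digest >> 16) & 0xff;
	out[3] = (digest >> 24) & 0xff;
}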
Signed-off-by: Benny Halevy <bhalevy@fs1.bhalevy.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 45711f1a ("[SG] Update drivers to use sg helpers") had the
following bogus change in drivers/mmc/card/queue.c:
> - src_buf = page_address(src->page) + src->offset;
> + src_buf = sg_virt(dst);
(Notice that "src" is converted to "dst"). Turn this "dst" back into
the intended "src".
Signed-off-by: Roland Dreier <roland@digitalvampire.org>
Tested-by: Romano Giannetti <romano.giannetti@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
For kernel addresses between TASK_SIZE and PAGE_OFFSET,
flush_tlb_kern_range() does not work as would be expected.
The TLB invalidate works with a matching ASID, or on entries marked as
global. The set_pte_at() macro marks addresses >= PAGE_OFFSET as
global, but not addresses from TASK_SIZE to PAGE_OFFSET, which are
also kernel addresses.
The result is that the entries in this range are not actually
invalidated by flush_tlb_kern_range().
This patch instead marks addresses >= TASK_SIZE as global.
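The condition change can be pictured like this (the addresses and the bit
are placeholders, not the real ARM definitions):

#include <stdint.h>

#define TASK_SIZE_SKETCH	0xbf000000UL	/* assumed split */
#define PAGE_OFFSET_SKETCH	0xc0000000UL
#define PTE_NON_GLOBAL		(1UL << 11)	/* placeholder nG bit */

static uint32_t pte_flags_for(uint32_t vaddr, uint32_t pte)
{
	/* was: vaddr >= PAGE_OFFSET_SKETCH, which left the kernel range
	 * TASK_SIZE..PAGE_OFFSET marked per-ASID and thus unaffected by
	 * flush_tlb_kern_range() */
	if (vaddr >= TASK_SIZE_SKETCH)
		return pte & ~PTE_NON_GLOBAL;	/* global: no ASID match */

	return pte | PTE_NON_GLOBAL;		/* user mapping: per-ASID */
}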
Signed-off-by: Satoru Fujii <s-fujii@ct.jp.nec.com>
Signed-off-by: Kevin Hilman <khilman@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
'invd' can destroy host data, and 'wbinvd' allows the guest to induce
long (milliseconds) latencies.
Noted by Ben Serebrin.
Signed-off-by: Avi Kivity <avi@qumranet.com>