64c7c8f885

Make some changes to the NEED_RESCHED and POLLING_NRFLAG handling to
reduce confusion, and make their semantics rigid.  Improves efficiency
of resched_task and some cpu_idle routines.

* In resched_task:

  - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
    and as we hold it during resched_task, there is no need for an
    atomic test and set there.  The only other time this should be set
    is when the task's quantum expires, in the timer interrupt - this is
    protected against because the rq lock is irq-safe.

  - If TIF_NEED_RESCHED is already set, then we don't need to do
    anything.  It won't get unset until the task gets schedule()d off.

  - If we are running on the same CPU as the task we resched, then set
    TIF_NEED_RESCHED and no further action is required.

  - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not*
    set after TIF_NEED_RESCHED has been set, then we need to send an IPI.

  Using these rules, we are able to remove the test-and-set operation in
  resched_task, and make clear the previously vague semantics of
  POLLING_NRFLAG.

* In idle routines:

  - Enter cpu_idle with preempt disabled.  When the need_resched()
    condition becomes true, explicitly call schedule().  This makes
    things a bit clearer (IMO), but haven't updated all architectures
    yet.

  - Many do a test and clear of TIF_NEED_RESCHED for some reason.
    According to the resched_task rules, this isn't needed (and actually
    breaks the assumption that TIF_NEED_RESCHED is only cleared with the
    runqueue lock held).  So remove that.  Generally one less locked
    memory op when switching to the idle thread.

  - Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
    innermost polling idle loops.  The above resched_task semantics
    allow it to stay set until just before the last time need_resched()
    is checked before going into a halt requiring an interrupt wakeup.

    Many idle routines simply never enter such a halt, and so
    POLLING_NRFLAG can be left set at all times, completely eliminating
    resched IPIs when rescheduling the idle task.

    The width of the region in which POLLING_NRFLAG is set can be
    increased, to reduce the chance of resched IPIs.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
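
Taken together, the resched_task rules above reduce to a small decision
procedure.  The following is a minimal sketch of that logic as it might
appear in kernel/sched.c; it is illustrative only, not the exact
committed code, although the helpers used (assert_spin_locked(),
test_tsk_thread_flag(), set_tsk_thread_flag(), task_cpu(),
smp_send_reschedule()) are real kernel primitives:

        /*
         * Sketch only: a simplified resched_task() following the rules
         * described above.  Not the exact in-tree implementation.
         */
        static void resched_task(struct task_struct *p)
        {
                int cpu;

                /* Callers must hold p's runqueue lock. */
                assert_spin_locked(&task_rq(p)->lock);

                /* Already set: it stays set until p is schedule()d off. */
                if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
                        return;

                set_tsk_thread_flag(p, TIF_NEED_RESCHED);

                cpu = task_cpu(p);
                if (cpu == smp_processor_id())
                        return;         /* same CPU: the flag alone is enough */

                /* Make NEED_RESCHED visible before testing POLLING_NRFLAG. */
                smp_mb();
                if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
                        smp_send_reschedule(cpu);   /* remote CPU not polling: IPI */
        }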

CPU Scheduler implementation hints for architecture specific code

        Nick Piggin, 2005

Context switch
==============
1. Runqueue locking
By default, the switch_to arch function is called with the runqueue
locked. This is usually not a problem unless switch_to may need to
take the runqueue lock. This is usually due to a wake up operation in
the context switch. See include/asm-ia64/system.h for an example.

To request the scheduler call switch_to with the runqueue unlocked,
you must `#define __ARCH_WANT_UNLOCKED_CTXSW` in a header file
(typically the one where switch_to is defined).
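
For example, an architecture opting in would add a one-line define to
its own system header, along these lines (illustrative path and
placement only, not a real in-tree file):

        /* include/asm-<yourarch>/system.h -- illustrative only */
        #define __ARCH_WANT_UNLOCKED_CTXSW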

Unlocked context switches introduce only a very minor performance
penalty to the core scheduler implementation in the CONFIG_SMP case.

2. Interrupt status
By default, the switch_to arch function is called with interrupts
disabled. If keeping interrupts disabled over the call is likely to
introduce significant interrupt latency, they may be enabled over the
call by adding the line `#define __ARCH_WANT_INTERRUPTS_ON_CTXSW` in
the same place as for unlocked context switches. This define also
implies `__ARCH_WANT_UNLOCKED_CTXSW`. See include/asm-arm/system.h
for an example.
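
As above, the opt-in is a single define in the architecture's system
header (illustrative path only, not a real in-tree file):

        /* include/asm-<yourarch>/system.h -- illustrative only */
        #define __ARCH_WANT_INTERRUPTS_ON_CTXSW  /* implies __ARCH_WANT_UNLOCKED_CTXSW */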


CPU idle
========
Your cpu_idle routines need to obey the following rules:

1. Preempt should now be disabled over idle routines. It should only
   be enabled to call schedule(), then disabled again.

2. need_resched/TIF_NEED_RESCHED is only ever set, and will never
   be cleared until the running task has called schedule(). Idle
   threads need only ever query need_resched, and may never set or
   clear it.

3. When cpu_idle finds (need_resched() == 'true'), it should call
   schedule(). It should not call schedule() otherwise.

4. The only time interrupts need to be disabled when checking
   need_resched is if we are about to sleep the processor until
   the next interrupt (this doesn't provide any protection of
   need_resched, it prevents losing an interrupt).

    4a. Common problem with this type of sleep appears to be:

            local_irq_disable();
            if (!need_resched()) {
                    local_irq_enable();
                    *** resched interrupt arrives here ***
                    __asm__("sleep until next interrupt");
            }

5. TIF_POLLING_NRFLAG can be set by idle routines that do not
   need an interrupt to wake them up when need_resched goes high.
   In other words, they must be periodically polling need_resched,
   although it may be reasonable to do some background work or enter
   a low CPU priority.

    5a. If TIF_POLLING_NRFLAG is set, and we do decide to enter
        an interrupt sleep, it needs to be cleared then a memory
        barrier issued (followed by a test of need_resched with
        interrupts disabled, as explained in 4 above).

arch/i386/kernel/process.c has examples of both polling and
sleeping idle functions.
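
To make the rules concrete, here is a rough sketch of what the two
styles of idle loop can look like when the rules above are followed.
It is illustrative only, not code lifted from arch/i386 or any other
architecture; in particular poll_idle(), sleep_idle() and
arch_safe_halt() are stand-in names, with arch_safe_halt() assumed to
re-enable interrupts and halt atomically (in the style of i386's
"sti; hlt"):

        /* Polling idle: TIF_POLLING_NRFLAG stays set the whole time, so
         * resched_task() never needs to send an IPI to wake this CPU
         * (rule 5). */
        static void poll_idle(void)
        {
                set_thread_flag(TIF_POLLING_NRFLAG);
                while (!need_resched())
                        cpu_relax();
        }

        /* Sleeping idle: clear TIF_POLLING_NRFLAG, issue a memory
         * barrier, then re-check need_resched() with interrupts
         * disabled before halting (rules 4, 4a and 5a). */
        static void sleep_idle(void)
        {
                clear_thread_flag(TIF_POLLING_NRFLAG);
                smp_mb();               /* pairs with resched_task() */
                local_irq_disable();
                if (!need_resched())
                        arch_safe_halt();       /* enables IRQs and halts atomically */
                else
                        local_irq_enable();
                set_thread_flag(TIF_POLLING_NRFLAG);
        }

        /* Main loop: entered with preempt disabled (rule 1); preempt is
         * only enabled around the explicit schedule() call (rule 3). */
        void cpu_idle(void)
        {
                while (1) {
                        while (!need_resched())
                                sleep_idle();   /* or poll_idle() */
                        preempt_enable_no_resched();
                        schedule();
                        preempt_disable();
                }
        }

Note that sleep_idle() only clears TIF_POLLING_NRFLAG for the duration
of the halt itself, keeping the window in which a resched IPI is needed
as small as possible.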


Possible arch/ problems
=======================

Possible arch problems I found (and either tried to fix or didn't):

h8300 - Is such sleeping racy vs interrupts? (See #4a).
        The H8/300 manual I found indicates yes, however disabling IRQs
        over the sleep means only NMIs can wake it up, so it can't be
        fixed easily without doing spin waiting.

ia64 - is safe_halt call racy vs interrupts? (does it sleep?) (See #4a)

sh64 - Is sleeping racy vs interrupts? (See #4a)

sparc - IRQs on at this point(?), change local_irq_save to _disable.
      - TODO: needs secondary CPUs to disable preempt (See #1)