
sched/clock: Use static_branch_likely() with sched_clock_running

sched_clock_running is enabled early during boot and never disabled,
so hint that to the compiler by using static_branch_likely() rather
than static_branch_unlikely().
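
For illustration, a minimal sketch of the pattern (hypothetical names --
my_key, my_key_init, my_read -- not the scheduler's actual code; the
jump-label and initcall APIs are the real ones): a key that is enabled
once early in boot and then only read on hot paths:

  /*
   * Sketch: a static key that is flipped on once during boot and
   * never turned off again.
   */
  #include <linux/init.h>
  #include <linux/jump_label.h>
  #include <linux/types.h>

  static DEFINE_STATIC_KEY_FALSE(my_key);	/* starts disabled */

  static int __init my_key_init(void)
  {
  	static_branch_enable(&my_key);		/* enabled once, early */
  	return 0;
  }
  early_initcall(my_key_init);

  u64 my_read(void)
  {
  	/*
  	 * For the rest of the system's lifetime the key is true, so the
  	 * enabled path should be the straight-line code:
  	 * static_branch_likely().  Writing
  	 * !static_branch_unlikely(&my_key) instead annotates the
  	 * disabled path as the hot one, which is backwards here.
  	 */
  	if (!static_branch_likely(&my_key))
  		return 0;

  	return 42;				/* stand-in for real work */
  }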

The branch probability mis-annotation was introduced in the original
commit that converted the plain sched_clock_running flag to a static key:

  46457ea464 ("sched/clock: Use static key for sched_clock_running")

Steve further notes:

  | Looks like the confusion was the moving of the "!":
  |
  | -       if (unlikely(!sched_clock_running))
  | +       if (!static_branch_unlikely(&sched_clock_running))
  |
  | Where, it was unlikely that !sched_clock_running would be true, but
  | because the "!" was moved outside the "unlikely()" it makes the test
  | "likely()". That is, if we added an intermediate step, it would have
  | been:
  |
  |         if (!likely(sched_clock_running))
  |
  | which would have prevented the mistake that this patch fixes.
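
To make that observation concrete with the plain branch hints
(illustrative only; old_check() and moved_bang() are hypothetical
helpers, not kernel functions):

  #include <linux/compiler.h>

  /* unlikely(!x) hints that "!x" is rarely true ... */
  int old_check(int running)
  {
  	if (unlikely(!running))		/* "not yet running" is the rare case */
  		return 0;
  	return 1;
  }

  /*
   * ... whereas !unlikely(x) hints that "x" itself is rarely true and
   * then negates the result, i.e. the opposite prediction.
   */
  int moved_bang(int running)
  {
  	if (!unlikely(running))		/* now "running" is annotated as rare */
  		return 0;
  	return 1;
  }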

  [ mingo: Edited the changelog. ]

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: juri.lelli@redhat.com
Cc: mgorman@suse.de
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/1574843848-26825-1-git-send-email-zhenzhong.duan@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -370,7 +370,7 @@ u64 sched_clock_cpu(int cpu)
 	if (sched_clock_stable())
 		return sched_clock() + __sched_clock_offset;
 
-	if (!static_branch_unlikely(&sched_clock_running))
+	if (!static_branch_likely(&sched_clock_running))
 		return sched_clock();
 
 	preempt_disable_notrace();
@@ -393,7 +393,7 @@ void sched_clock_tick(void)
 	if (sched_clock_stable())
 		return;
 
-	if (!static_branch_unlikely(&sched_clock_running))
+	if (!static_branch_likely(&sched_clock_running))
 		return;
 
 	lockdep_assert_irqs_disabled();
@@ -460,7 +460,7 @@ void __init sched_clock_init(void)
 
 u64 sched_clock_cpu(int cpu)
 {
-	if (!static_branch_unlikely(&sched_clock_running))
+	if (!static_branch_likely(&sched_clock_running))
 		return 0;
 
 	return sched_clock();