linux/kernel/sched
Yang Yingliang 2a3548c7ef sched/smt: Fix unbalance sched_smt_present dec/inc
commit e22f910a26 upstream.

I got the following warning while running a stress test:

jump label: negative count!
WARNING: CPU: 3 PID: 38 at kernel/jump_label.c:263 static_key_slow_try_dec+0x9d/0xb0
Call Trace:
 <TASK>
 __static_key_slow_dec_cpuslocked+0x16/0x70
 sched_cpu_deactivate+0x26e/0x2a0
 cpuhp_invoke_callback+0x3ad/0x10d0
 cpuhp_thread_fun+0x3f5/0x680
 smpboot_thread_fn+0x56d/0x8d0
 kthread+0x309/0x400
 ret_from_fork+0x41/0x70
 ret_from_fork_asm+0x1b/0x30
 </TASK>

When cpuset_cpu_inactive() fails in sched_cpu_deactivate(), the CPU
offline is aborted, but sched_smt_present has already been decremented
earlier in sched_cpu_deactivate(). This leaves the dec/inc unbalanced,
so fix it by incrementing sched_smt_present again in the error path.
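
As a rough sketch of the fix described above (not necessarily the exact
upstream diff; the surrounding error-path code may differ between kernel
versions), the cpuset_cpu_inactive() failure path in sched_cpu_deactivate()
re-increments the static key it decremented earlier:

	ret = cpuset_cpu_inactive(cpu);
	if (ret) {
#ifdef CONFIG_SCHED_SMT
		/* Undo the earlier decrement so the static key stays balanced. */
		if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
			static_branch_inc_cpuslocked(&sched_smt_present);
#endif
		balance_push_set(cpu, false);
		set_cpu_active(cpu, true);
		return ret;
	}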

Fixes: c5511d03ec ("sched/smt: Make sched_smt_present track topology")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Link: https://lore.kernel.org/r/20240703031610.587047-3-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-19 05:45:47 +02:00
autogroup.c sched/fair: Prevent dead task groups from regaining cfs_rq's 2021-11-25 09:48:32 +01:00
autogroup.h
clock.c sched: Fix various typos 2021-03-22 00:11:52 +01:00
completion.c
core_sched.c sched: prctl() core-scheduling interface 2021-05-12 11:43:31 +02:00
core.c sched/smt: Fix unbalance sched_smt_present dec/inc 2024-08-19 05:45:47 +02:00
cpuacct.c sched/cpuacct: Optimize away RCU read lock 2023-10-06 13:18:19 +02:00
cpudeadline.c sched/core: Introduce sched_asym_cpucap_active() 2022-12-31 13:14:01 +01:00
cpudeadline.h
cpufreq_schedutil.c sched/uclamp: Fix iowait boost escaping uclamp restriction 2022-04-08 14:23:10 +02:00
cpufreq.c
cpupri.c sched/rt: Fix live lock between select_fallback_rq() and RT push 2023-10-06 13:18:22 +02:00
cpupri.h sched/cpupri: Add CPUPRI_HIGHER 2020-10-29 11:00:30 +01:00
cputime.c sched/cputime: Fix mul_u64_u64_div_u64() precision for cputime 2024-08-19 05:45:39 +02:00
deadline.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:08:13 +01:00
debug.c sched: Fix DEBUG && !SCHEDSTATS warn 2023-05-11 23:00:40 +09:00
fair.c profiling: remove profile=sleep support 2024-08-19 05:45:39 +02:00
features.h sched/fair: Introduce SIS_UTIL to search idle CPU based on sum of util_avg 2022-08-17 14:23:00 +02:00
idle.c Revert "kernel/sched: Modify initial boot task idle setup" 2023-10-19 23:05:38 +02:00
isolation.c sched/isolation: Reconcile rcu_nocbs= and nohz_full= 2021-05-13 14:12:47 +02:00
loadavg.c sched: Make multiple runqueue task counters 32-bit 2021-05-12 21:34:17 +02:00
Makefile sched: Trivial core scheduling cookie management 2021-05-12 11:43:31 +02:00
membarrier.c sched/membarrier: reduce the ability to hammer on sys_membarrier 2024-02-23 08:55:14 +01:00
pelt.c sched: Fix various typos 2021-03-22 00:11:52 +01:00
pelt.h sched/fair: Fix cfs_rq_clock_pelt() for throttled cfs_rq 2022-06-09 10:22:48 +02:00
psi.c sched/psi: Fix use-after-free in ep_remove_wait_queue() 2023-02-22 12:57:06 +01:00
rt.c sched/rt: Disallow writing invalid values to sched_rt_period_us 2024-03-01 13:21:43 +01:00
sched-pelt.h
sched.h sched/fair: set_load_weight() must also call reweight_task() for SCHED_IDLE tasks 2024-08-19 05:45:11 +02:00
smp.h
stats.c sched: Fix various typos 2021-03-22 00:11:52 +01:00
stats.h sched: Make struct sched_statistics independent of fair sched class 2023-05-11 23:00:34 +09:00
stop_task.c sched: Make struct sched_statistics independent of fair sched class 2023-05-11 23:00:34 +09:00
swait.c
topology.c sched/fair: Allow disabling sched_balance_newidle with sched_relax_domain_level 2024-06-16 13:39:34 +02:00
wait_bit.c
wait.c wait: add wake_up_pollfree() 2021-12-14 10:57:15 +01:00