net: give more chances to rcu in netdev_wait_allrefs_any()
[ Upstream commit cd42ba1c8a ]

This came while reviewing commit c4e86b4363 ("net: add two more call_rcu_hurry()").

Paolo asked if adding one synchronize_rcu() would help.

While synchronize_rcu() does not help, making sure to call rcu_barrier() before msleep(wait) definitely helps to make sure lazy call_rcu() callbacks are completed.

Instead of waiting ~100 seconds in my tests, the ref_tracker splat occurs one time only, and netdev_wait_allrefs_any() latency is reduced to the strict minimum.

Ideally we should audit our call_rcu() users to make sure no refcount (or cascading call_rcu()) is held too long, because rcu_barrier() is quite expensive.

Fixes: 0e4be9e57e ("net: use exponential backoff in netdev_wait_allrefs")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/all/28bbf698-befb-42f6-b561-851c67f464aa@kernel.org/T/#m76d73ed6b03cd930778ac4d20a777f22a08d6824
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent b1e86f1ef8
commit e06cc14cff
@@ -10488,8 +10488,9 @@ static struct net_device *netdev_wait_allrefs_any(struct list_head *list)
 			rebroadcast_time = jiffies;
 		}
 
+		rcu_barrier();
+
 		if (!wait) {
-			rcu_barrier();
 			wait = WAIT_REFS_MIN_MSECS;
 		} else {
 			msleep(wait);
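For illustration, a small user-space analog of the wait loop shape after this change: the flush now runs before every backoff decision instead of only on the wait == 0 branch. flush_deferred_work(), device_refs_remaining() and the msleep() helper are hypothetical stand-ins for rcu_barrier(), the netdev refcount check and the kernel's msleep(); the backoff constants mirror the kernel's WAIT_REFS_MIN_MSECS/WAIT_REFS_MAX_MSECS values (250 and 5000 ms at the time of this change).

#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define WAIT_REFS_MIN_MSECS 250
#define WAIT_REFS_MAX_MSECS 5000

static int pending = 3;	/* pretend three deferred callbacks hold references */

/* Stand-in for rcu_barrier(): deferred work completes here. */
static void flush_deferred_work(void)
{
	if (pending > 0)
		pending--;
}

/* Stand-in for "some device on the list still has references". */
static bool device_refs_remaining(void)
{
	return pending > 0;
}

/* User-space replacement for the kernel's msleep(). */
static void msleep(unsigned int msecs)
{
	struct timespec ts = {
		.tv_sec = msecs / 1000,
		.tv_nsec = (msecs % 1000) * 1000000L,
	};

	nanosleep(&ts, NULL);
}

int main(void)
{
	unsigned int wait = 0;

	while (device_refs_remaining()) {
		/*
		 * After the patch the flush sits before the backoff
		 * decision, so deferred work is drained even on the very
		 * first pass (wait == 0) and before every sleep, instead
		 * of only on the wait == 0 branch as before.
		 */
		flush_deferred_work();

		if (!wait) {
			wait = WAIT_REFS_MIN_MSECS;
		} else {
			msleep(wait);
			wait = wait * 2 > WAIT_REFS_MAX_MSECS ?
			       WAIT_REFS_MAX_MSECS : wait * 2;
		}
		printf("still waiting, next backoff %u ms\n", wait);
	}
	return 0;
}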