mm/slub: simplify get_partial_node()

The break conditions for filling the cpu partial list can be made
simpler and more readable.

If slub_get_cpu_partial() returns 0, we know we don't need to fill the
cpu partial list at all, so we break from the loop. Otherwise, we break
from the loop once we have added enough cpu partial slabs.

Meanwhile, the new logic gets rid of the #ifdef and also fixes a weird
corner case: if cpu_partial_slabs is set to 0 from sysfs, we would
still put at least one slab on the cpu partial list here.

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Xiongwei Song 2024-04-04 13:58:26 +08:00 committed by Vlastimil Babka
parent 721a2f8be1
commit ff99b18fee

@ -2614,18 +2614,18 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 		if (!partial) {
 			partial = slab;
 			stat(s, ALLOC_FROM_PARTIAL);
+
+			if ((slub_get_cpu_partial(s) == 0)) {
+				break;
+			}
 		} else {
 			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
-		}
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		if (partial_slabs > s->cpu_partial_slabs / 2)
-			break;
-#else
-		break;
-#endif
-
+
+			if (++partial_slabs > slub_get_cpu_partial(s) / 2) {
+				break;
+			}
+		}
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return partial;