Mirror of https://mirrors.bfsu.edu.cn/git/linux.git (synced 2024-11-30 07:34:12 +08:00)
badea125d7
The patch below should fix a race which could cause stale TLB entries. Specifically, when two CPUs ended up racing for entrance to wrap_mmu_context(), the losing CPU would find that by the time it acquired ctx.lock, mm->context already had a valid value, but it then failed to (re-)check the delayed TLB flushing logic and hence could end up using a context number while there were still stale entries in its TLB. The fix is to check for delayed TLB flushes only after mm->context is valid (non-zero).

The patch also makes GCC v4.x happier by defining a non-volatile variant of mm_context_t called nv_mm_context_t.

Signed-off-by: David Mosberger-Tang <David.Mosberger@acm.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
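To make the race and its fix concrete, here is a minimal sketch of a get_mmu_context()-style allocation path, reconstructed from the commit message rather than copied from the patch; the helper names ia64_ctx, wrap_mmu_context() and delayed_tlb_flush() are assumptions. The key points are the re-check of mm->context after ctx.lock is acquired and the delayed-TLB-flush check happening only once the context is known to be valid.

/*
 * Sketch only: reconstructed from the commit message above, not the
 * literal patch.  ia64_ctx, wrap_mmu_context() and delayed_tlb_flush()
 * are assumed helper names.
 */
static inline nv_mm_context_t
get_mmu_context (struct mm_struct *mm)
{
	unsigned long flags;
	nv_mm_context_t context = mm->context;	/* non-volatile snapshot */

	if (unlikely(!context)) {
		spin_lock_irqsave(&ia64_ctx.lock, flags);
		/* Re-check: another CPU may have assigned a context while we waited. */
		context = mm->context;
		if (context == 0) {
			if (ia64_ctx.next >= ia64_ctx.limit)
				wrap_mmu_context(mm);
			mm->context = context = ia64_ctx.next++;
		}
		spin_unlock_irqrestore(&ia64_ctx.lock, flags);
	}
	/*
	 * Only now, with a valid (non-zero) context, check for a delayed
	 * TLB flush, so no stale translations for this context remain.
	 */
	delayed_tlb_flush();

	return context;
}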
14 lines | 335 B | C
#ifndef __MMU_H
#define __MMU_H

/*
 * Type for a context number. We declare it volatile to ensure proper
 * ordering when it's accessed outside of spinlock'd critical sections
 * (e.g., as done in activate_mm() and init_new_context()).
 */
typedef volatile unsigned long mm_context_t;

typedef unsigned long nv_mm_context_t;

#endif
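As a usage note on the two typedefs above: mm->context is volatile so that lockless readers such as activate_mm() and init_new_context() always perform a real load, while nv_mm_context_t lets callers copy that value into an ordinary unsigned long once and work with the snapshot, which is the "makes GCC v4.x happier" point from the commit message. A hypothetical reader might look like this (the function name is made up for illustration):

/* Hypothetical helper, for illustration only. */
static inline int
mm_has_context (struct mm_struct *mm)
{
	nv_mm_context_t ctx = mm->context;	/* single volatile load */

	return ctx != 0;			/* plain, non-volatile use afterwards */
}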