mirror of https://github.com/edk2-porting/linux-next.git
22e4ebb975
Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
from all runqueues for which current thread's mm is the same as the
thread calling sys_membarrier. It executes faster than the non-expedited
variant (no blocking). It also works on NOHZ_FULL configurations.

Scheduler-wise, it requires a memory barrier before and after context
switching between processes (which have different mm). The memory
barrier before context switch is already present. For the barrier after
context switch:

* Our TSO archs can do RELEASE without being a full barrier. Look at
  x86 spin_unlock() being a regular STORE for example. But for those
  archs, all atomics imply smp_mb and all of them have atomic ops in
  switch_mm() for mm_cpumask(), and on x86 the CR3 load acts as a full
  barrier.

* From all weakly ordered machines, only ARM64 and PPC can do RELEASE,
  the rest does indeed do smp_mb(), so there the spin_unlock() is a
  full barrier and we're good.

* ARM64 has a very heavy barrier in switch_to(), which suffices.

* PPC just removed its barrier from switch_to(), but appears to be
  talking about adding something to switch_mm(). So add a
  smp_mb__after_unlock_lock() for now, until this is settled on the
  PPC side.

Changes since v3:
- Properly document the memory barriers provided by each architecture.

Changes since v2:
- Address comments from Peter Zijlstra,
- Add smp_mb__after_unlock_lock() after finish_lock_switch() in
  finish_task_switch() to add the memory barrier we need after storing
  to rq->curr. This is much simpler than the previous approach relying
  on atomic_dec_and_test() in mmdrop(), which actually added a memory
  barrier in the common case of switching between userspace processes.
- Return -EINVAL when MEMBARRIER_CMD_SHARED is used on a nohz_full
  kernel, rather than having the whole membarrier system call returning
  -ENOSYS. Indeed, CMD_PRIVATE_EXPEDITED is compatible with nohz_full.
  Adapt the CMD_QUERY mask accordingly.

Changes since v1:
- move membarrier code under kernel/sched/ because it uses the
  scheduler runqueue,
- only add the barrier when we switch from a kernel thread. The case
  where we switch from a user-space thread is already handled by the
  atomic_dec_and_test() in mmdrop().
- add a comment to mmdrop() documenting the requirement on the implicit
  memory barrier.

CC: Peter Zijlstra <peterz@infradead.org>
CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@gmail.com>
CC: Andrew Hunter <ahh@google.com>
CC: Maged Michael <maged.michael@gmail.com>
CC: gromer@google.com
CC: Avi Kivity <avi@scylladb.com>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dave Watson <davejwatson@fb.com>
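To make the barrier placement concrete, here is a minimal, abridged sketch of finish_task_switch() with the smp_mb__after_unlock_lock() added after finish_lock_switch(), as described in the "Changes since v2" entry above. This is only an illustration of the placement, not the verbatim upstream hunk; the surrounding scheduler code is elided.

static struct rq *finish_task_switch(struct task_struct *prev)
{
        struct rq *rq = this_rq();

        /* ... teardown of the previous task elided ... */

        finish_lock_switch(rq, prev);
        /*
         * The membarrier system call requires a full memory barrier
         * after storing to rq->curr, before returning to user-space.
         * The unlock above is only a RELEASE, not necessarily a full
         * barrier on every architecture, so promote it here.
         * smp_mb__after_unlock_lock() is a no-op where unlock/lock
         * already implies a full barrier, and smp_mb() on PPC.
         */
        smp_mb__after_unlock_lock();

        /* ... remaining finish_task_switch() work elided ... */

        return rq;
}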
73 lines
3.6 KiB
C
#ifndef _UAPI_LINUX_MEMBARRIER_H
#define _UAPI_LINUX_MEMBARRIER_H

/*
 * linux/membarrier.h
 *
 * membarrier system call API
 *
 * Copyright (c) 2010, 2015 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

/**
 * enum membarrier_cmd - membarrier system call command
 * @MEMBARRIER_CMD_QUERY:   Query the set of supported commands. It returns
 *                          a bitmask of valid commands.
 * @MEMBARRIER_CMD_SHARED:  Execute a memory barrier on all running threads.
 *                          Upon return from system call, the caller thread
 *                          is ensured that all running threads have passed
 *                          through a state where all memory accesses to
 *                          user-space addresses match program order between
 *                          entry to and return from the system call
 *                          (non-running threads are de facto in such a
 *                          state). This covers threads from all processes
 *                          running on the system. This command returns 0.
 * @MEMBARRIER_CMD_PRIVATE_EXPEDITED:
 *                          Execute a memory barrier on each running
 *                          thread belonging to the same process as the current
 *                          thread. Upon return from system call, the
 *                          caller thread is ensured that all its running
 *                          threads siblings have passed through a state
 *                          where all memory accesses to user-space
 *                          addresses match program order between entry
 *                          to and return from the system call
 *                          (non-running threads are de facto in such a
 *                          state). This only covers threads from the
 *                          same processes as the caller thread. This
 *                          command returns 0. The "expedited" commands
 *                          complete faster than the non-expedited ones,
 *                          they never block, but have the downside of
 *                          causing extra overhead.
 *
 * Command to be passed to the membarrier system call. The commands need to
 * be a single bit each, except for MEMBARRIER_CMD_QUERY which is assigned to
 * the value 0.
 */
enum membarrier_cmd {
        MEMBARRIER_CMD_QUERY                    = 0,
        MEMBARRIER_CMD_SHARED                   = (1 << 0),
        /* reserved for MEMBARRIER_CMD_SHARED_EXPEDITED (1 << 1) */
        /* reserved for MEMBARRIER_CMD_PRIVATE (1 << 2) */
        MEMBARRIER_CMD_PRIVATE_EXPEDITED        = (1 << 3),
};

#endif /* _UAPI_LINUX_MEMBARRIER_H */
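For completeness, here is a minimal user-space sketch of driving the commands declared above through syscall(2). The membarrier() function below is a local helper (no libc wrapper is assumed), and error handling is abridged.

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>

/* Local helper: invoke the raw membarrier system call. */
static int membarrier(int cmd, int flags)
{
        return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
        int supported = membarrier(MEMBARRIER_CMD_QUERY, 0);

        if (supported < 0) {
                perror("MEMBARRIER_CMD_QUERY");
                return 1;
        }

        /* Prefer the non-blocking, process-local expedited command. */
        if (supported & MEMBARRIER_CMD_PRIVATE_EXPEDITED) {
                if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0))
                        perror("MEMBARRIER_CMD_PRIVATE_EXPEDITED");
        } else if (supported & MEMBARRIER_CMD_SHARED) {
                /* Slower, blocking, system-wide barrier; the changelog
                   notes it returns -EINVAL on nohz_full kernels. */
                if (membarrier(MEMBARRIER_CMD_SHARED, 0))
                        perror("MEMBARRIER_CMD_SHARED");
        }
        return 0;
}

MEMBARRIER_CMD_QUERY returns the bitmask of supported commands, so a caller can pick the expedited private command when the kernel offers it and fall back to MEMBARRIER_CMD_SHARED otherwise.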