mirror of https://gcc.gnu.org/git/gcc.git
commit 5ed77fb3ed
Consider the following omp fragment:
...
  #pragma omp target
  #pragma omp parallel num_threads (2)
  #pragma omp task
    ;
...

This hangs at -O0 for nvptx.

Investigating the behaviour gives us the following trace of events:
- both threads execute GOMP_task, where they:
  - deposit a task, and
  - execute gomp_team_barrier_wake
- thread 1 executes gomp_team_barrier_wait_end and, not being the last
  thread, proceeds to wait at the team barrier
- thread 0 executes gomp_team_barrier_wait_end and, being the last thread,
  calls gomp_barrier_handle_tasks, where it:
  - executes both tasks and marks the team barrier done
  - executes a gomp_team_barrier_wake, which wakes up thread 1
- thread 1 exits the team barrier
- thread 0 returns from gomp_barrier_handle_tasks and goes to wait at the
  team barrier
- thread 0 hangs

To understand why there is a hang here, it's good to understand how things
are set up for nvptx.  The libgomp/config/nvptx/bar.c implementation is a
copy of the libgomp/config/linux/bar.c implementation, with uses of both
futex_wake and do_wait replaced with uses of the ptx insn bar.sync:
...
  if (bar->total > 1)
    asm ("bar.sync 1, %0;" : : "r" (32 * bar->total));
...

The point where thread 0 goes to wait at the team barrier corresponds in
the linux implementation with a do_wait.  In the linux case, the call to
do_wait doesn't hang, because it's waiting for bar->generation to become a
certain value, and if bar->generation already has that value, it just
proceeds, without any need for coordination with other threads.

In the nvptx case, the bar.sync waits until thread 1 joins it in the same
logical barrier, which never happens: thread 1 is lingering in the thread
pool at the thread pool barrier (using a different logical barrier),
waiting to join a new team.

The easiest way to fix this is to revert to the posix implementation for
bar.{c,h}.  That however falls back on a busy-waiting approach, and does
not take advantage of the ptx bar.sync insn.

Instead, we revert to the linux implementation for bar.c, and implement the
bar.c local functions futex_wait and futex_wake using the bar.sync insn.

The bar.sync insn takes an argument specifying how many threads are
participating, and that doesn't play well with the futex syntax, where it's
not clear in advance how many threads will be woken up.  This is solved by
waking up all waiting threads each time a futex_wait or futex_wake happens,
and possibly going back to sleep with an updated thread count.

Tested libgomp on x86_64 with nvptx accelerator.

libgomp/ChangeLog:

2021-04-20  Tom de Vries  <tdevries@suse.de>

        PR target/99555
        * config/nvptx/bar.c (generation_to_barrier): New function, copied
        from config/rtems/bar.c.
        (futex_wait, futex_wake): New function.
        (do_spin, do_wait): New function, copied from config/linux/wait.h.
        (gomp_barrier_wait_end, gomp_barrier_wait_last)
        (gomp_team_barrier_wake, gomp_team_barrier_wait_end)
        (gomp_team_barrier_wait_cancel_end, gomp_team_barrier_cancel):
        Remove and replace with include of config/linux/bar.c.
        * config/nvptx/bar.h (gomp_barrier_t): Add fields waiters and lock.
        (gomp_barrier_init): Init new fields.
        * testsuite/libgomp.c-c++-common/task-detach-6.c: Remove
        nvptx-specific workarounds.
        * testsuite/libgomp.c/pr99555-1.c: Same.
        * testsuite/libgomp.fortran/task-detach-6.f90: Same.
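
For reference, here is a minimal self-contained version of the fragment
above.  The main wrapper, file name and build line are illustrative
additions, not taken from the commit (the commit's own reproducer is the
testsuite/libgomp.c/pr99555-1.c test listed in the ChangeLog); the hang
shows up when the program is built at -O0 with nvptx offloading and run on
an nvptx device:
...
  /* Minimal reproducer sketch.  Build e.g. with
     "gcc -O0 -fopenmp pr99555.c" using a compiler configured for nvptx
     offloading, then run with offloading enabled.  */
  int
  main (void)
  {
  #pragma omp target
  #pragma omp parallel num_threads (2)
  #pragma omp task
    ;
    return 0;
  }
...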
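
To make the linux-side behaviour concrete, here is a sketch of the do_wait
shape, simplified from config/linux/wait.h (the real do_spin reads its spin
count from an ICV and throttles when oversubscribed; the futex_wait body
here is a plain syscall stand-in for libgomp's version):
...
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static void
  futex_wait (int *addr, int val)
  {
    /* Sleep until woken, but only if *ADDR still equals VAL; the kernel
       re-checks this atomically before sleeping.  */
    syscall (SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
  }

  static int
  do_spin (int *addr, int val)
  {
    /* Busy-wait while *ADDR == VAL, for a bounded number of iterations;
       return nonzero iff it is still equal afterwards.  */
    for (unsigned long i = 0; i < 300000; i++)
      if (__atomic_load_n (addr, __ATOMIC_ACQUIRE) != val)
        return 0;
    return 1;
  }

  static void
  do_wait (int *addr, int val)
  {
    /* Only go to sleep if the value is still VAL.  This is why the trace
       above does not hang on linux: thread 0's wait sees that
       bar->generation has already advanced and falls straight through,
       with no coordination with other threads needed.  */
    if (do_spin (addr, val))
      futex_wait (addr, val);
  }
...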
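
The shape of the wake-everyone scheme on nvptx can be sketched as follows.
This is an illustrative model only, not the code the commit adds: the names
nvptx_futex_wait/nvptx_futex_wake and the bare waiters counter are made up,
and the model assumes that registrations and wake-ups are serialized so
that all participants pass matching thread counts to bar.sync.  Keeping
those counts in agreement while the waiter set changes is exactly what the
patch's new waiters and lock fields on gomp_barrier_t are for.  The count
is scaled by 32 because each logical OpenMP thread occupies a warp, as in
the bar.sync snippet quoted above:
...
  /* Illustrative model of futex-style wait/wake on top of bar.sync;
     the serialization of updates to WAITERS (handled in the real patch
     via bar->lock) is assumed, not shown.  */

  static unsigned int waiters;  /* Threads currently asleep in wait.  */

  static void
  nvptx_futex_wait (int *addr, int val)
  {
    /* Futex semantics: sleep only while *ADDR == VAL.  */
    while (__atomic_load_n (addr, __ATOMIC_ACQUIRE) == val)
      {
        /* Register, then sleep until one more thread (a waker, or a
           newly arriving waiter) joins the barrier.  */
        unsigned int n = __atomic_add_fetch (&waiters, 1, __ATOMIC_ACQ_REL);
        asm volatile ("bar.sync 1, %0;" : : "r" (32 * (n + 1)));
        __atomic_sub_fetch (&waiters, 1, __ATOMIC_ACQ_REL);
        /* Everyone was released; loop to re-check the futex word and, if
           it is still unchanged, go back to sleep with whatever thread
           count is current by then.  */
      }
  }

  static void
  nvptx_futex_wake (int *addr, int count)
  {
    /* COUNT cannot be honoured: there is no way to wake just one
       thread, so join the sleepers' barrier once, releasing all of
       them to re-evaluate their condition.  */
    unsigned int n = __atomic_load_n (&waiters, __ATOMIC_ACQUIRE);
    if (n > 0)
      asm volatile ("bar.sync 1, %0;" : : "r" (32 * (n + 1)));
  }
...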