Mirror of https://gcc.gnu.org/git/gcc.git (synced 2024-12-12 05:13:50 +08:00)
Commit 41dbbb3789
contrib/ * gcc_update (files_and_dependencies): Update rules for new libgomp/plugin/Makefrag.am and libgomp/plugin/configfrag.ac files. gcc/ * builtin-types.def (BT_FN_VOID_INT_INT_VAR) (BT_FN_VOID_INT_PTR_SIZE_PTR_PTR_PTR_INT_INT_VAR) (BT_FN_VOID_INT_OMPFN_PTR_SIZE_PTR_PTR_PTR_INT_INT_INT_INT_INT_VAR): New function types. * builtins.c: Include "gomp-constants.h". (expand_builtin_acc_on_device): New function. (expand_builtin, is_inexpensive_builtin): Handle BUILT_IN_ACC_ON_DEVICE. * builtins.def (DEF_GOACC_BUILTIN, DEF_GOACC_BUILTIN_COMPILER): New macros. * cgraph.c (cgraph_node::create): Consider flag_openacc next to flag_openmp. * config.gcc <nvptx-*> (tm_file): Add nvptx/offload.h. <*-intelmic-* | *-intelmicemul-*> (tm_file): Add i386/intelmic-offload.h. * gcc.c (LINK_COMMAND_SPEC, GOMP_SELF_SPECS): For -fopenacc, link to libgomp and its dependencies. * config/arc/arc.h (LINK_COMMAND_SPEC): Likewise. * config/darwin.h (LINK_COMMAND_SPEC_A): Likewise. * config/i386/mingw32.h (GOMP_SELF_SPECS): Likewise. * config/ia64/hpux.h (LIB_SPEC): Likewise. * config/pa/pa-hpux11.h (LIB_SPEC): Likewise. * config/pa/pa64-hpux.h (LIB_SPEC): Likewise. * doc/generic.texi: Update for OpenACC changes. * doc/gimple.texi: Likewise. * doc/invoke.texi: Likewise. * doc/sourcebuild.texi: Likewise. * gimple-pretty-print.c (dump_gimple_omp_for): Handle GF_OMP_FOR_KIND_OACC_LOOP. (dump_gimple_omp_target): Handle GF_OMP_TARGET_KIND_OACC_KERNELS, GF_OMP_TARGET_KIND_OACC_PARALLEL, GF_OMP_TARGET_KIND_OACC_DATA, GF_OMP_TARGET_KIND_OACC_UPDATE, GF_OMP_TARGET_KIND_OACC_ENTER_EXIT_DATA. Dump more data. * gimple.c: Update comments for OpenACC changes. * gimple.def: Likewise. * gimple.h: Likewise. (enum gf_mask): Add GF_OMP_FOR_KIND_OACC_LOOP, GF_OMP_TARGET_KIND_OACC_PARALLEL, GF_OMP_TARGET_KIND_OACC_KERNELS, GF_OMP_TARGET_KIND_OACC_DATA, GF_OMP_TARGET_KIND_OACC_UPDATE, GF_OMP_TARGET_KIND_OACC_ENTER_EXIT_DATA. (gimple_omp_for_cond, gimple_omp_for_set_cond): Sort in the appropriate place. (is_gimple_omp_oacc, is_gimple_omp_offloaded): New functions. * gimplify.c: Include "gomp-constants.h". Update comments for OpenACC changes. (is_gimple_stmt): Handle OACC_PARALLEL, OACC_KERNELS, OACC_DATA, OACC_HOST_DATA, OACC_DECLARE, OACC_UPDATE, OACC_ENTER_DATA, OACC_EXIT_DATA, OACC_CACHE, OACC_LOOP. (gimplify_scan_omp_clauses, gimplify_adjust_omp_clauses): Handle OMP_CLAUSE__CACHE_, OMP_CLAUSE_ASYNC, OMP_CLAUSE_WAIT, OMP_CLAUSE_NUM_GANGS, OMP_CLAUSE_NUM_WORKERS, OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_GANG, OMP_CLAUSE_WORKER, OMP_CLAUSE_VECTOR, OMP_CLAUSE_DEVICE_RESIDENT, OMP_CLAUSE_USE_DEVICE, OMP_CLAUSE_INDEPENDENT, OMP_CLAUSE_AUTO, OMP_CLAUSE_SEQ. (gimplify_adjust_omp_clauses_1, gimplify_adjust_omp_clauses): Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. (gimplify_oacc_cache): New function. (gimplify_omp_for): Handle OACC_LOOP. (gimplify_omp_workshare): Handle OACC_KERNELS, OACC_PARALLEL, OACC_DATA. (gimplify_omp_target_update): Handle OACC_ENTER_DATA, OACC_EXIT_DATA, OACC_UPDATE. (gimplify_expr): Handle OACC_LOOP, OACC_CACHE, OACC_HOST_DATA, OACC_DECLARE, OACC_KERNELS, OACC_PARALLEL, OACC_DATA, OACC_ENTER_DATA, OACC_EXIT_DATA, OACC_UPDATE. (gimplify_body): Consider flag_openacc next to flag_openmp. * lto-streamer-out.c: Include "gomp-constants.h". 
* omp-builtins.def (BUILT_IN_ACC_GET_DEVICE_TYPE) (BUILT_IN_GOACC_DATA_START, BUILT_IN_GOACC_DATA_END) (BUILT_IN_GOACC_ENTER_EXIT_DATA, BUILT_IN_GOACC_PARALLEL) (BUILT_IN_GOACC_UPDATE, BUILT_IN_GOACC_WAIT) (BUILT_IN_GOACC_GET_THREAD_NUM, BUILT_IN_GOACC_GET_NUM_THREADS) (BUILT_IN_ACC_ON_DEVICE): New builtins. * omp-low.c: Include "gomp-constants.h". Update comments for OpenACC changes. (struct omp_context): Add reduction_map, gwv_below, gwv_this members. (extract_omp_for_data, use_pointer_for_field, install_var_field) (new_omp_context, delete_omp_context, scan_sharing_clauses) (create_omp_child_function, scan_omp_for, scan_omp_target) (check_omp_nesting_restrictions, lower_reduction_clauses) (build_omp_regions_1, diagnose_sb_0, make_gimple_omp_edges): Update for OpenACC changes. (scan_sharing_clauses): Handle OMP_CLAUSE_NUM_GANGS: OMP_CLAUSE_NUM_WORKERS: OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_ASYNC, OMP_CLAUSE_WAIT, OMP_CLAUSE_GANG, OMP_CLAUSE_WORKER, OMP_CLAUSE_VECTOR, OMP_CLAUSE_DEVICE_RESIDENT, OMP_CLAUSE_USE_DEVICE, OMP_CLAUSE__CACHE_, OMP_CLAUSE_INDEPENDENT, OMP_CLAUSE_AUTO, OMP_CLAUSE_SEQ. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. (expand_omp_for_static_nochunk, expand_omp_for_static_chunk): Handle GF_OMP_FOR_KIND_OACC_LOOP. (expand_omp_target, lower_omp_target): Handle GF_OMP_TARGET_KIND_OACC_PARALLEL, GF_OMP_TARGET_KIND_OACC_KERNELS, GF_OMP_TARGET_KIND_OACC_UPDATE, GF_OMP_TARGET_KIND_OACC_ENTER_EXIT_DATA, GF_OMP_TARGET_KIND_OACC_DATA. (pass_expand_omp::execute, execute_lower_omp) (pass_diagnose_omp_blocks::gate): Consider flag_openacc next to flag_openmp. (offload_symbol_decl): New variable. (oacc_get_reduction_array_id, oacc_max_threads) (get_offload_symbol_decl, get_base_type, lookup_oacc_reduction) (maybe_lookup_oacc_reduction, enclosing_target_ctx) (oacc_loop_or_target_p, oacc_lower_reduction_var_helper) (oacc_gimple_assign, oacc_initialize_reduction_data) (oacc_finalize_reduction_data, oacc_process_reduction_data): New functions. (is_targetreg_ctx): Remove function. * tree-core.h (enum omp_clause_code): Add OMP_CLAUSE__CACHE_, OMP_CLAUSE_DEVICE_RESIDENT, OMP_CLAUSE_USE_DEVICE, OMP_CLAUSE_GANG, OMP_CLAUSE_ASYNC, OMP_CLAUSE_WAIT, OMP_CLAUSE_AUTO, OMP_CLAUSE_SEQ, OMP_CLAUSE_INDEPENDENT, OMP_CLAUSE_WORKER, OMP_CLAUSE_VECTOR, OMP_CLAUSE_NUM_GANGS, OMP_CLAUSE_NUM_WORKERS, OMP_CLAUSE_VECTOR_LENGTH. * tree.c (omp_clause_code_name, walk_tree_1): Update accordingly. * tree.h (OMP_CLAUSE_GANG_EXPR, OMP_CLAUSE_GANG_STATIC_EXPR) (OMP_CLAUSE_ASYNC_EXPR, OMP_CLAUSE_WAIT_EXPR) (OMP_CLAUSE_VECTOR_EXPR, OMP_CLAUSE_WORKER_EXPR) (OMP_CLAUSE_NUM_GANGS_EXPR, OMP_CLAUSE_NUM_WORKERS_EXPR) (OMP_CLAUSE_VECTOR_LENGTH_EXPR): New macros. * tree-core.h: Update comments for OpenACC changes. (enum omp_clause_map_kind): Remove. (struct tree_omp_clause): Change type of map_kind member from enum omp_clause_map_kind to unsigned char. * tree-inline.c: Update comments for OpenACC changes. * tree-nested.c: Likewise. Include "gomp-constants.h". (convert_nonlocal_reference_stmt, convert_local_reference_stmt) (convert_tramp_reference_stmt, convert_gimple_call): Update for OpenACC changes. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. * tree-pretty-print.c: Include "gomp-constants.h". 
(dump_omp_clause): Handle OMP_CLAUSE_DEVICE_RESIDENT, OMP_CLAUSE_USE_DEVICE, OMP_CLAUSE__CACHE_, OMP_CLAUSE_GANG, OMP_CLAUSE_ASYNC, OMP_CLAUSE_AUTO, OMP_CLAUSE_SEQ, OMP_CLAUSE_WAIT, OMP_CLAUSE_WORKER, OMP_CLAUSE_VECTOR, OMP_CLAUSE_NUM_GANGS, OMP_CLAUSE_NUM_WORKERS, OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_INDEPENDENT. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. (dump_generic_node): Handle OACC_PARALLEL, OACC_KERNELS, OACC_DATA, OACC_HOST_DATA, OACC_DECLARE, OACC_UPDATE, OACC_ENTER_DATA, OACC_EXIT_DATA, OACC_CACHE, OACC_LOOP. * tree-streamer-in.c: Include "gomp-constants.h". (unpack_ts_omp_clause_value_fields) Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. * tree-streamer-out.c: Include "gomp-constants.h". (pack_ts_omp_clause_value_fields): Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. * tree.def (OACC_PARALLEL, OACC_KERNELS, OACC_DATA) (OACC_HOST_DATA, OACC_LOOP, OACC_CACHE, OACC_DECLARE) (OACC_ENTER_DATA, OACC_EXIT_DATA, OACC_UPDATE): New tree codes. * tree.c (omp_clause_num_ops): Update accordingly. * tree.h (OMP_BODY, OMP_CLAUSES, OMP_LOOP_CHECK, OMP_CLAUSE_SIZE): Likewise. (OACC_PARALLEL_BODY, OACC_PARALLEL_CLAUSES, OACC_KERNELS_BODY) (OACC_KERNELS_CLAUSES, OACC_DATA_BODY, OACC_DATA_CLAUSES) (OACC_HOST_DATA_BODY, OACC_HOST_DATA_CLAUSES, OACC_CACHE_CLAUSES) (OACC_DECLARE_CLAUSES, OACC_ENTER_DATA_CLAUSES) (OACC_EXIT_DATA_CLAUSES, OACC_UPDATE_CLAUSES) (OACC_KERNELS_COMBINED, OACC_PARALLEL_COMBINED): New macros. * tree.h (OMP_CLAUSE_MAP_KIND): Cast it to enum gomp_map_kind. (OMP_CLAUSE_SET_MAP_KIND): New macro. * varpool.c (varpool_node::get_create): Consider flag_openacc next to flag_openmp. * config/i386/intelmic-offload.h: New file. * config/nvptx/offload.h: Likewise. gcc/ada/ * gcc-interface/utils.c (DEF_FUNCTION_TYPE_VAR_8) (DEF_FUNCTION_TYPE_VAR_12): New macros. gcc/c-family/ * c.opt (fopenacc): New option. * c-cppbuiltin.c (c_cpp_builtins): Conditionally define _OPENACC. * c-common.c (DEF_FUNCTION_TYPE_VAR_8, DEF_FUNCTION_TYPE_VAR_12): New macros. * c-common.h (c_finish_oacc_wait): New prototype. * c-omp.c: Include "omp-low.h" and "gomp-constants.h". (c_finish_oacc_wait): New function. * c-pragma.c (oacc_pragmas): New variable. (c_pp_lookup_pragma, init_pragma): Handle it. * c-pragma.h (enum pragma_kind): Add PRAGMA_OACC_CACHE, PRAGMA_OACC_DATA, PRAGMA_OACC_ENTER_DATA, PRAGMA_OACC_EXIT_DATA, PRAGMA_OACC_KERNELS, PRAGMA_OACC_LOOP, PRAGMA_OACC_PARALLEL, PRAGMA_OACC_UPDATE, PRAGMA_OACC_WAIT. (enum pragma_omp_clause): Add PRAGMA_OACC_CLAUSE_ASYNC, PRAGMA_OACC_CLAUSE_AUTO, PRAGMA_OACC_CLAUSE_COLLAPSE, PRAGMA_OACC_CLAUSE_COPY, PRAGMA_OACC_CLAUSE_COPYIN, PRAGMA_OACC_CLAUSE_COPYOUT, PRAGMA_OACC_CLAUSE_CREATE, PRAGMA_OACC_CLAUSE_DELETE, PRAGMA_OACC_CLAUSE_DEVICE, PRAGMA_OACC_CLAUSE_DEVICEPTR, PRAGMA_OACC_CLAUSE_FIRSTPRIVATE, PRAGMA_OACC_CLAUSE_GANG, PRAGMA_OACC_CLAUSE_HOST, PRAGMA_OACC_CLAUSE_IF, PRAGMA_OACC_CLAUSE_NUM_GANGS, PRAGMA_OACC_CLAUSE_NUM_WORKERS, PRAGMA_OACC_CLAUSE_PRESENT, PRAGMA_OACC_CLAUSE_PRESENT_OR_COPY, PRAGMA_OACC_CLAUSE_PRESENT_OR_COPYIN, PRAGMA_OACC_CLAUSE_PRESENT_OR_COPYOUT, PRAGMA_OACC_CLAUSE_PRESENT_OR_CREATE, PRAGMA_OACC_CLAUSE_PRIVATE, PRAGMA_OACC_CLAUSE_REDUCTION, PRAGMA_OACC_CLAUSE_SELF, PRAGMA_OACC_CLAUSE_SEQ, PRAGMA_OACC_CLAUSE_VECTOR, PRAGMA_OACC_CLAUSE_VECTOR_LENGTH, PRAGMA_OACC_CLAUSE_WAIT, PRAGMA_OACC_CLAUSE_WORKER. gcc/c/ * c-parser.c: Include "gomp-constants.h". (c_parser_omp_clause_map): Use enum gomp_map_kind instead of enum omp_clause_map_kind. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. 
(c_parser_pragma): Handle PRAGMA_OACC_ENTER_DATA, PRAGMA_OACC_EXIT_DATA, PRAGMA_OACC_UPDATE. (c_parser_omp_construct): Handle PRAGMA_OACC_CACHE, PRAGMA_OACC_DATA, PRAGMA_OACC_KERNELS, PRAGMA_OACC_LOOP, PRAGMA_OACC_PARALLEL, PRAGMA_OACC_WAIT. (c_parser_omp_clause_name): Handle "auto", "async", "copy", "copyout", "create", "delete", "deviceptr", "gang", "host", "num_gangs", "num_workers", "present", "present_or_copy", "pcopy", "present_or_copyin", "pcopyin", "present_or_copyout", "pcopyout", "present_or_create", "pcreate", "seq", "self", "vector", "vector_length", "wait", "worker". (OACC_DATA_CLAUSE_MASK, OACC_KERNELS_CLAUSE_MASK) (OACC_ENTER_DATA_CLAUSE_MASK, OACC_EXIT_DATA_CLAUSE_MASK) (OACC_LOOP_CLAUSE_MASK, OACC_PARALLEL_CLAUSE_MASK) (OACC_UPDATE_CLAUSE_MASK, OACC_WAIT_CLAUSE_MASK): New macros. (c_parser_omp_variable_list): Handle OMP_CLAUSE__CACHE_. (c_parser_oacc_wait_list, c_parser_oacc_data_clause) (c_parser_oacc_data_clause_deviceptr) (c_parser_omp_clause_num_gangs, c_parser_omp_clause_num_workers) (c_parser_oacc_clause_async, c_parser_oacc_clause_wait) (c_parser_omp_clause_vector_length, c_parser_oacc_all_clauses) (c_parser_oacc_cache, c_parser_oacc_data, c_parser_oacc_kernels) (c_parser_oacc_enter_exit_data, c_parser_oacc_loop) (c_parser_oacc_parallel, c_parser_oacc_update) (c_parser_oacc_wait): New functions. * c-tree.h (c_finish_oacc_parallel, c_finish_oacc_kernels) (c_finish_oacc_data): New prototypes. * c-typeck.c: Include "gomp-constants.h". (handle_omp_array_sections): Handle GOMP_MAP_FORCE_DEVICEPTR. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. (c_finish_oacc_parallel, c_finish_oacc_kernels) (c_finish_oacc_data): New functions. (c_finish_omp_clauses): Handle OMP_CLAUSE__CACHE_, OMP_CLAUSE_NUM_GANGS, OMP_CLAUSE_NUM_WORKERS, OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_ASYNC, OMP_CLAUSE_WAIT, OMP_CLAUSE_AUTO, OMP_CLAUSE_SEQ, OMP_CLAUSE_GANG, OMP_CLAUSE_WORKER, OMP_CLAUSE_VECTOR, and OMP_CLAUSE_MAP's GOMP_MAP_FORCE_DEVICEPTR. gcc/cp/ * parser.c: Include "gomp-constants.h". (cp_parser_omp_clause_map): Use enum gomp_map_kind instead of enum omp_clause_map_kind. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. (cp_parser_omp_construct, cp_parser_pragma): Handle PRAGMA_OACC_CACHE, PRAGMA_OACC_DATA, PRAGMA_OACC_ENTER_DATA, PRAGMA_OACC_EXIT_DATA, PRAGMA_OACC_KERNELS, PRAGMA_OACC_PARALLEL, PRAGMA_OACC_LOOP, PRAGMA_OACC_UPDATE, PRAGMA_OACC_WAIT. (cp_parser_omp_clause_name): Handle "async", "copy", "copyout", "create", "delete", "deviceptr", "host", "num_gangs", "num_workers", "present", "present_or_copy", "pcopy", "present_or_copyin", "pcopyin", "present_or_copyout", "pcopyout", "present_or_create", "pcreate", "vector_length", "wait". (OACC_DATA_CLAUSE_MASK, OACC_ENTER_DATA_CLAUSE_MASK) (OACC_EXIT_DATA_CLAUSE_MASK, OACC_KERNELS_CLAUSE_MASK) (OACC_LOOP_CLAUSE_MASK, OACC_PARALLEL_CLAUSE_MASK) (OACC_UPDATE_CLAUSE_MASK, OACC_WAIT_CLAUSE_MASK): New macros. (cp_parser_omp_var_list_no_open): Handle OMP_CLAUSE__CACHE_. (cp_parser_oacc_data_clause, cp_parser_oacc_data_clause_deviceptr) (cp_parser_oacc_clause_vector_length, cp_parser_oacc_wait_list) (cp_parser_oacc_clause_wait, cp_parser_omp_clause_num_gangs) (cp_parser_omp_clause_num_workers, cp_parser_oacc_clause_async) (cp_parser_oacc_all_clauses, cp_parser_oacc_cache) (cp_parser_oacc_data, cp_parser_oacc_enter_exit_data) (cp_parser_oacc_kernels, cp_parser_oacc_loop) (cp_parser_oacc_parallel, cp_parser_oacc_update) (cp_parser_oacc_wait): New functions. 
* cp-tree.h (finish_oacc_data, finish_oacc_kernels) (finish_oacc_parallel): New prototypes. * semantics.c: Include "gomp-constants.h". (handle_omp_array_sections): Handle GOMP_MAP_FORCE_DEVICEPTR. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. (finish_omp_clauses): Handle OMP_CLAUSE_ASYNC, OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_WAIT, OMP_CLAUSE__CACHE_. Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. (finish_oacc_data, finish_oacc_kernels, finish_oacc_parallel): New functions. gcc/fortran/ * lang.opt (fopenacc): New option. * cpp.c (cpp_define_builtins): Conditionally define _OPENACC. * dump-parse-tree.c (show_omp_node): Split part of it into... (show_omp_clauses): ... this new function. (show_omp_node, show_code_node): Handle EXEC_OACC_PARALLEL_LOOP, EXEC_OACC_PARALLEL, EXEC_OACC_KERNELS_LOOP, EXEC_OACC_KERNELS, EXEC_OACC_DATA, EXEC_OACC_HOST_DATA, EXEC_OACC_LOOP, EXEC_OACC_UPDATE, EXEC_OACC_WAIT, EXEC_OACC_CACHE, EXEC_OACC_ENTER_DATA, EXEC_OACC_EXIT_DATA. (show_namespace): Update for OpenACC. * f95-lang.c (DEF_FUNCTION_TYPE_VAR_2, DEF_FUNCTION_TYPE_VAR_8) (DEF_FUNCTION_TYPE_VAR_12, DEF_GOACC_BUILTIN) (DEF_GOACC_BUILTIN_COMPILER): New macros. * types.def (BT_FN_VOID_INT_INT_VAR) (BT_FN_VOID_INT_PTR_SIZE_PTR_PTR_PTR_INT_INT_VAR) (BT_FN_VOID_INT_OMPFN_PTR_SIZE_PTR_PTR_PTR_INT_INT_INT_INT_INT_VAR): New function types. * gfortran.h (gfc_statement): Add ST_OACC_PARALLEL_LOOP, ST_OACC_END_PARALLEL_LOOP, ST_OACC_PARALLEL, ST_OACC_END_PARALLEL, ST_OACC_KERNELS, ST_OACC_END_KERNELS, ST_OACC_DATA, ST_OACC_END_DATA, ST_OACC_HOST_DATA, ST_OACC_END_HOST_DATA, ST_OACC_LOOP, ST_OACC_END_LOOP, ST_OACC_DECLARE, ST_OACC_UPDATE, ST_OACC_WAIT, ST_OACC_CACHE, ST_OACC_KERNELS_LOOP, ST_OACC_END_KERNELS_LOOP, ST_OACC_ENTER_DATA, ST_OACC_EXIT_DATA, ST_OACC_ROUTINE. (struct gfc_expr_list): New data type. (gfc_get_expr_list): New macro. (gfc_omp_map_op): Add OMP_MAP_FORCE_ALLOC, OMP_MAP_FORCE_DEALLOC, OMP_MAP_FORCE_TO, OMP_MAP_FORCE_FROM, OMP_MAP_FORCE_TOFROM, OMP_MAP_FORCE_PRESENT, OMP_MAP_FORCE_DEVICEPTR. (OMP_LIST_FIRST, OMP_LIST_DEVICE_RESIDENT, OMP_LIST_USE_DEVICE) (OMP_LIST_CACHE): New enumerators. (struct gfc_omp_clauses): Add async_expr, gang_expr, worker_expr, vector_expr, num_gangs_expr, num_workers_expr, vector_length_expr, wait_list, tile_list, async, gang, worker, vector, seq, independent, wait, par_auto, gang_static, and loc members. (struct gfc_namespace): Add oacc_declare_clauses member. (gfc_exec_op): Add EXEC_OACC_KERNELS_LOOP, EXEC_OACC_PARALLEL_LOOP, EXEC_OACC_PARALLEL, EXEC_OACC_KERNELS, EXEC_OACC_DATA, EXEC_OACC_HOST_DATA, EXEC_OACC_LOOP, EXEC_OACC_UPDATE, EXEC_OACC_WAIT, EXEC_OACC_CACHE, EXEC_OACC_ENTER_DATA, EXEC_OACC_EXIT_DATA. (gfc_free_expr_list, gfc_resolve_oacc_directive) (gfc_resolve_oacc_declare, gfc_resolve_oacc_parallel_loop_blocks) (gfc_resolve_oacc_blocks): New prototypes. * match.c (match_exit_cycle): Handle EXEC_OACC_LOOP and EXEC_OACC_PARALLEL_LOOP. * match.h (gfc_match_oacc_cache, gfc_match_oacc_wait) (gfc_match_oacc_update, gfc_match_oacc_declare) (gfc_match_oacc_loop, gfc_match_oacc_host_data) (gfc_match_oacc_data, gfc_match_oacc_kernels) (gfc_match_oacc_kernels_loop, gfc_match_oacc_parallel) (gfc_match_oacc_parallel_loop, gfc_match_oacc_enter_data) (gfc_match_oacc_exit_data, gfc_match_oacc_routine): New prototypes. * openmp.c: Include "diagnostic.h" and "gomp-constants.h". (gfc_free_omp_clauses): Update for members added to struct gfc_omp_clauses. (gfc_match_omp_clauses): Change mask paramter to uint64_t. Add openacc parameter. 
(resolve_omp_clauses): Add openacc parameter. Update for OpenACC. (struct fortran_omp_context): Add is_openmp member. (gfc_resolve_omp_parallel_blocks): Initialize it. (gfc_resolve_do_iterator): Update for OpenACC. (gfc_resolve_omp_directive): Call resolve_omp_directive_inside_oacc_region. (OMP_CLAUSE_PRIVATE, OMP_CLAUSE_FIRSTPRIVATE) (OMP_CLAUSE_LASTPRIVATE, OMP_CLAUSE_COPYPRIVATE) (OMP_CLAUSE_SHARED, OMP_CLAUSE_COPYIN, OMP_CLAUSE_REDUCTION) (OMP_CLAUSE_IF, OMP_CLAUSE_NUM_THREADS, OMP_CLAUSE_SCHEDULE) (OMP_CLAUSE_DEFAULT, OMP_CLAUSE_ORDERED, OMP_CLAUSE_COLLAPSE) (OMP_CLAUSE_UNTIED, OMP_CLAUSE_FINAL, OMP_CLAUSE_MERGEABLE) (OMP_CLAUSE_ALIGNED, OMP_CLAUSE_DEPEND, OMP_CLAUSE_INBRANCH) (OMP_CLAUSE_LINEAR, OMP_CLAUSE_NOTINBRANCH, OMP_CLAUSE_PROC_BIND) (OMP_CLAUSE_SAFELEN, OMP_CLAUSE_SIMDLEN, OMP_CLAUSE_UNIFORM) (OMP_CLAUSE_DEVICE, OMP_CLAUSE_MAP, OMP_CLAUSE_TO) (OMP_CLAUSE_FROM, OMP_CLAUSE_NUM_TEAMS, OMP_CLAUSE_THREAD_LIMIT) (OMP_CLAUSE_DIST_SCHEDULE): Use uint64_t. (OMP_CLAUSE_ASYNC, OMP_CLAUSE_NUM_GANGS, OMP_CLAUSE_NUM_WORKERS) (OMP_CLAUSE_VECTOR_LENGTH, OMP_CLAUSE_COPY, OMP_CLAUSE_COPYOUT) (OMP_CLAUSE_CREATE, OMP_CLAUSE_PRESENT) (OMP_CLAUSE_PRESENT_OR_COPY, OMP_CLAUSE_PRESENT_OR_COPYIN) (OMP_CLAUSE_PRESENT_OR_COPYOUT, OMP_CLAUSE_PRESENT_OR_CREATE) (OMP_CLAUSE_DEVICEPTR, OMP_CLAUSE_GANG, OMP_CLAUSE_WORKER) (OMP_CLAUSE_VECTOR, OMP_CLAUSE_SEQ, OMP_CLAUSE_INDEPENDENT) (OMP_CLAUSE_USE_DEVICE, OMP_CLAUSE_DEVICE_RESIDENT) (OMP_CLAUSE_HOST_SELF, OMP_CLAUSE_OACC_DEVICE, OMP_CLAUSE_WAIT) (OMP_CLAUSE_DELETE, OMP_CLAUSE_AUTO, OMP_CLAUSE_TILE): New macros. (gfc_match_omp_clauses): Handle those. (OACC_PARALLEL_CLAUSES, OACC_KERNELS_CLAUSES, OACC_DATA_CLAUSES) (OACC_LOOP_CLAUSES, OACC_PARALLEL_LOOP_CLAUSES) (OACC_KERNELS_LOOP_CLAUSES, OACC_HOST_DATA_CLAUSES) (OACC_DECLARE_CLAUSES, OACC_UPDATE_CLAUSES) (OACC_ENTER_DATA_CLAUSES, OACC_EXIT_DATA_CLAUSES) (OACC_WAIT_CLAUSES): New macros. (gfc_free_expr_list, match_oacc_expr_list, match_oacc_clause_gang) (gfc_match_omp_map_clause, gfc_match_oacc_parallel_loop) (gfc_match_oacc_parallel, gfc_match_oacc_kernels_loop) (gfc_match_oacc_kernels, gfc_match_oacc_data) (gfc_match_oacc_host_data, gfc_match_oacc_loop) (gfc_match_oacc_declare, gfc_match_oacc_update) (gfc_match_oacc_enter_data, gfc_match_oacc_exit_data) (gfc_match_oacc_wait, gfc_match_oacc_cache) (gfc_match_oacc_routine, oacc_is_loop) (resolve_oacc_scalar_int_expr, resolve_oacc_positive_int_expr) (check_symbol_not_pointer, check_array_not_assumed) (resolve_oacc_data_clauses, resolve_oacc_deviceptr_clause) (oacc_compatible_clauses, oacc_is_parallel, oacc_is_kernels) (omp_code_to_statement, oacc_code_to_statement) (resolve_oacc_directive_inside_omp_region) (resolve_omp_directive_inside_oacc_region) (resolve_oacc_nested_loops, resolve_oacc_params_in_parallel) (resolve_oacc_loop_blocks, gfc_resolve_oacc_blocks) (resolve_oacc_loop, resolve_oacc_cache, gfc_resolve_oacc_declare) (gfc_resolve_oacc_directive): New functions. * parse.c (next_free): Update for OpenACC. Move some code into... (verify_token_free): ... this new function. (next_fixed): Update for OpenACC. Move some code into... (verify_token_fixed): ... this new function. (case_executable): Add ST_OACC_UPDATE, ST_OACC_WAIT, ST_OACC_CACHE, ST_OACC_ENTER_DATA, and ST_OACC_EXIT_DATA. (case_exec_markers): Add ST_OACC_PARALLEL_LOOP, ST_OACC_PARALLEL, ST_OACC_KERNELS, ST_OACC_DATA, ST_OACC_HOST_DATA, ST_OACC_LOOP, ST_OACC_KERNELS_LOOP. (case_decl): Add ST_OACC_ROUTINE. (push_state, parse_critical_block, parse_progunit): Update for OpenACC. 
(gfc_ascii_statement): Handle ST_OACC_PARALLEL_LOOP, ST_OACC_END_PARALLEL_LOOP, ST_OACC_PARALLEL, ST_OACC_END_PARALLEL, ST_OACC_KERNELS, ST_OACC_END_KERNELS, ST_OACC_KERNELS_LOOP, ST_OACC_END_KERNELS_LOOP, ST_OACC_DATA, ST_OACC_END_DATA, ST_OACC_HOST_DATA, ST_OACC_END_HOST_DATA, ST_OACC_LOOP, ST_OACC_END_LOOP, ST_OACC_DECLARE, ST_OACC_UPDATE, ST_OACC_WAIT, ST_OACC_CACHE, ST_OACC_ENTER_DATA, ST_OACC_EXIT_DATA, ST_OACC_ROUTINE. (verify_st_order, parse_spec): Handle ST_OACC_DECLARE. (parse_executable): Handle ST_OACC_PARALLEL_LOOP, ST_OACC_KERNELS_LOOP, ST_OACC_LOOP, ST_OACC_PARALLEL, ST_OACC_KERNELS, ST_OACC_DATA, ST_OACC_HOST_DATA. (decode_oacc_directive, parse_oacc_structured_block) (parse_oacc_loop, is_oacc): New functions. * parse.h (struct gfc_state_data): Add oacc_declare_clauses member. (is_oacc): New prototype. * resolve.c (gfc_resolve_blocks, gfc_resolve_code): Handle EXEC_OACC_PARALLEL_LOOP, EXEC_OACC_PARALLEL, EXEC_OACC_KERNELS_LOOP, EXEC_OACC_KERNELS, EXEC_OACC_DATA, EXEC_OACC_HOST_DATA, EXEC_OACC_LOOP, EXEC_OACC_UPDATE, EXEC_OACC_WAIT, EXEC_OACC_CACHE, EXEC_OACC_ENTER_DATA, EXEC_OACC_EXIT_DATA. (resolve_codes): Call gfc_resolve_oacc_declare. * scanner.c (openacc_flag, openacc_locus): New variables. (skip_free_comments): Update for OpenACC. Move some code into... (skip_omp_attribute): ... this new function. (skip_oacc_attribute): New function. (skip_fixed_comments, gfc_next_char_literal): Update for OpenACC. * st.c (gfc_free_statement): Handle EXEC_OACC_PARALLEL_LOOP, EXEC_OACC_PARALLEL, EXEC_OACC_KERNELS_LOOP, EXEC_OACC_KERNELS, EXEC_OACC_DATA, EXEC_OACC_HOST_DATA, EXEC_OACC_LOOP, EXEC_OACC_UPDATE, EXEC_OACC_WAIT, EXEC_OACC_CACHE, EXEC_OACC_ENTER_DATA, EXEC_OACC_EXIT_DATA. * trans-decl.c (gfc_generate_function_code): Update for OpenACC. * trans-openmp.c: Include "gomp-constants.h". (gfc_omp_finish_clause, gfc_trans_omp_clauses): Use GOMP_MAP_* instead of OMP_CLAUSE_MAP_*. Use OMP_CLAUSE_SET_MAP_KIND. (gfc_trans_omp_clauses): Handle OMP_LIST_USE_DEVICE, OMP_LIST_DEVICE_RESIDENT, OMP_LIST_CACHE, and OMP_MAP_FORCE_ALLOC, OMP_MAP_FORCE_DEALLOC, OMP_MAP_FORCE_TO, OMP_MAP_FORCE_FROM, OMP_MAP_FORCE_TOFROM, OMP_MAP_FORCE_PRESENT, OMP_MAP_FORCE_DEVICEPTR, and gfc_omp_clauses' async, seq, independent, wait_list, num_gangs_expr, num_workers_expr, vector_length_expr, vector, vector_expr, worker, worker_expr, gang, gang_expr members. (gfc_trans_omp_do): Handle EXEC_OACC_LOOP. (gfc_convert_expr_to_tree, gfc_trans_oacc_construct) (gfc_trans_oacc_executable_directive) (gfc_trans_oacc_wait_directive, gfc_trans_oacc_combined_directive) (gfc_trans_oacc_declare, gfc_trans_oacc_directive): New functions. * trans-stmt.c (gfc_trans_block_construct): Update for OpenACC. * trans-stmt.h (gfc_trans_oacc_directive, gfc_trans_oacc_declare): New prototypes. * trans.c (tranc_code): Handle EXEC_OACC_CACHE, EXEC_OACC_WAIT, EXEC_OACC_UPDATE, EXEC_OACC_LOOP, EXEC_OACC_HOST_DATA, EXEC_OACC_DATA, EXEC_OACC_KERNELS, EXEC_OACC_KERNELS_LOOP, EXEC_OACC_PARALLEL, EXEC_OACC_PARALLEL_LOOP, EXEC_OACC_ENTER_DATA, EXEC_OACC_EXIT_DATA. * gfortran.texi: Update for OpenACC. * intrinsic.texi: Likewise. * invoke.texi: Likewise. gcc/lto/ * lto-lang.c (DEF_FUNCTION_TYPE_VAR_8, DEF_FUNCTION_TYPE_VAR_12): New macros. * lto.c: Include "gomp-constants.h". gcc/testsuite/ * lib/target-supports.exp (check_effective_target_fopenacc): New procedure. * g++.dg/goacc-gomp/goacc-gomp.exp: New file. * g++.dg/goacc/goacc.exp: Likewise. * gcc.dg/goacc-gomp/goacc-gomp.exp: Likewise. * gcc.dg/goacc/goacc.exp: Likewise. 
* gfortran.dg/goacc/goacc.exp: Likewise. * c-c++-common/cpp/openacc-define-1.c: New file. * c-c++-common/cpp/openacc-define-2.c: Likewise. * c-c++-common/cpp/openacc-define-3.c: Likewise. * c-c++-common/goacc-gomp/nesting-1.c: Likewise. * c-c++-common/goacc-gomp/nesting-fail-1.c: Likewise. * c-c++-common/goacc/acc_on_device-2-off.c: Likewise. * c-c++-common/goacc/acc_on_device-2.c: Likewise. * c-c++-common/goacc/asyncwait-1.c: Likewise. * c-c++-common/goacc/cache-1.c: Likewise. * c-c++-common/goacc/clauses-fail.c: Likewise. * c-c++-common/goacc/collapse-1.c: Likewise. * c-c++-common/goacc/data-1.c: Likewise. * c-c++-common/goacc/data-2.c: Likewise. * c-c++-common/goacc/data-clause-duplicate-1.c: Likewise. * c-c++-common/goacc/deviceptr-1.c: Likewise. * c-c++-common/goacc/deviceptr-2.c: Likewise. * c-c++-common/goacc/deviceptr-3.c: Likewise. * c-c++-common/goacc/if-clause-1.c: Likewise. * c-c++-common/goacc/if-clause-2.c: Likewise. * c-c++-common/goacc/kernels-1.c: Likewise. * c-c++-common/goacc/loop-1.c: Likewise. * c-c++-common/goacc/loop-private-1.c: Likewise. * c-c++-common/goacc/nesting-1.c: Likewise. * c-c++-common/goacc/nesting-data-1.c: Likewise. * c-c++-common/goacc/nesting-fail-1.c: Likewise. * c-c++-common/goacc/parallel-1.c: Likewise. * c-c++-common/goacc/pcopy.c: Likewise. * c-c++-common/goacc/pcopyin.c: Likewise. * c-c++-common/goacc/pcopyout.c: Likewise. * c-c++-common/goacc/pcreate.c: Likewise. * c-c++-common/goacc/pragma_context.c: Likewise. * c-c++-common/goacc/present-1.c: Likewise. * c-c++-common/goacc/reduction-1.c: Likewise. * c-c++-common/goacc/reduction-2.c: Likewise. * c-c++-common/goacc/reduction-3.c: Likewise. * c-c++-common/goacc/reduction-4.c: Likewise. * c-c++-common/goacc/sb-1.c: Likewise. * c-c++-common/goacc/sb-2.c: Likewise. * c-c++-common/goacc/sb-3.c: Likewise. * c-c++-common/goacc/update-1.c: Likewise. * gcc.dg/goacc/acc_on_device-1.c: Likewise. * gfortran.dg/goacc/acc_on_device-1.f95: Likewise. * gfortran.dg/goacc/acc_on_device-2-off.f95: Likewise. * gfortran.dg/goacc/acc_on_device-2.f95: Likewise. * gfortran.dg/goacc/assumed.f95: Likewise. * gfortran.dg/goacc/asyncwait-1.f95: Likewise. * gfortran.dg/goacc/asyncwait-2.f95: Likewise. * gfortran.dg/goacc/asyncwait-3.f95: Likewise. * gfortran.dg/goacc/asyncwait-4.f95: Likewise. * gfortran.dg/goacc/branch.f95: Likewise. * gfortran.dg/goacc/cache-1.f95: Likewise. * gfortran.dg/goacc/coarray.f95: Likewise. * gfortran.dg/goacc/continuation-free-form.f95: Likewise. * gfortran.dg/goacc/cray.f95: Likewise. * gfortran.dg/goacc/critical.f95: Likewise. * gfortran.dg/goacc/data-clauses.f95: Likewise. * gfortran.dg/goacc/data-tree.f95: Likewise. * gfortran.dg/goacc/declare-1.f95: Likewise. * gfortran.dg/goacc/enter-exit-data.f95: Likewise. * gfortran.dg/goacc/fixed-1.f: Likewise. * gfortran.dg/goacc/fixed-2.f: Likewise. * gfortran.dg/goacc/fixed-3.f: Likewise. * gfortran.dg/goacc/fixed-4.f: Likewise. * gfortran.dg/goacc/host_data-tree.f95: Likewise. * gfortran.dg/goacc/if.f95: Likewise. * gfortran.dg/goacc/kernels-tree.f95: Likewise. * gfortran.dg/goacc/list.f95: Likewise. * gfortran.dg/goacc/literal.f95: Likewise. * gfortran.dg/goacc/loop-1.f95: Likewise. * gfortran.dg/goacc/loop-2.f95: Likewise. * gfortran.dg/goacc/loop-3.f95: Likewise. * gfortran.dg/goacc/loop-tree-1.f90: Likewise. * gfortran.dg/goacc/omp.f95: Likewise. * gfortran.dg/goacc/parallel-kernels-clauses.f95: Likewise. * gfortran.dg/goacc/parallel-kernels-regions.f95: Likewise. * gfortran.dg/goacc/parallel-tree.f95: Likewise. 
* gfortran.dg/goacc/parameter.f95: Likewise. * gfortran.dg/goacc/private-1.f95: Likewise. * gfortran.dg/goacc/private-2.f95: Likewise. * gfortran.dg/goacc/private-3.f95: Likewise. * gfortran.dg/goacc/pure-elemental-procedures.f95: Likewise. * gfortran.dg/goacc/reduction-2.f95: Likewise. * gfortran.dg/goacc/reduction.f95: Likewise. * gfortran.dg/goacc/routine-1.f90: Likewise. * gfortran.dg/goacc/routine-2.f90: Likewise. * gfortran.dg/goacc/sentinel-free-form.f95: Likewise. * gfortran.dg/goacc/several-directives.f95: Likewise. * gfortran.dg/goacc/sie.f95: Likewise. * gfortran.dg/goacc/subarrays.f95: Likewise. * gfortran.dg/gomp/map-1.f90: Likewise. * gfortran.dg/openacc-define-1.f90: Likewise. * gfortran.dg/openacc-define-2.f90: Likewise. * gfortran.dg/openacc-define-3.f90: Likewise. * g++.dg/gomp/block-1.C: Update for changed compiler output. * g++.dg/gomp/block-2.C: Likewise. * g++.dg/gomp/block-3.C: Likewise. * g++.dg/gomp/block-5.C: Likewise. * g++.dg/gomp/target-1.C: Likewise. * g++.dg/gomp/target-2.C: Likewise. * g++.dg/gomp/taskgroup-1.C: Likewise. * g++.dg/gomp/teams-1.C: Likewise. * gcc.dg/cilk-plus/jump-openmp.c: Likewise. * gcc.dg/cilk-plus/jump.c: Likewise. * gcc.dg/gomp/block-1.c: Likewise. * gcc.dg/gomp/block-10.c: Likewise. * gcc.dg/gomp/block-2.c: Likewise. * gcc.dg/gomp/block-3.c: Likewise. * gcc.dg/gomp/block-4.c: Likewise. * gcc.dg/gomp/block-5.c: Likewise. * gcc.dg/gomp/block-6.c: Likewise. * gcc.dg/gomp/block-7.c: Likewise. * gcc.dg/gomp/block-8.c: Likewise. * gcc.dg/gomp/block-9.c: Likewise. * gcc.dg/gomp/target-1.c: Likewise. * gcc.dg/gomp/target-2.c: Likewise. * gcc.dg/gomp/taskgroup-1.c: Likewise. * gcc.dg/gomp/teams-1.c: Likewise. include/ * gomp-constants.h: New file. libgomp/ * Makefile.am (search_path): Add $(top_srcdir)/../include. (libgomp_la_SOURCES): Add splay-tree.c, libgomp-plugin.c, oacc-parallel.c, oacc-host.c, oacc-init.c, oacc-mem.c, oacc-async.c, oacc-plugin.c, oacc-cuda.c. [USE_FORTRAN] (libgomp_la_SOURCES): Add openacc.f90. Include $(top_srcdir)/plugin/Makefrag.am. (nodist_libsubinclude_HEADERS): Add openacc.h. [USE_FORTRAN] (nodist_finclude_HEADERS): Add openacc_lib.h, openacc.f90, openacc.mod, openacc_kinds.mod. (omp_lib.mod): Generalize into... (%.mod): ... this new rule. (openacc_kinds.mod, openacc.mod): New rules. * plugin/configfrag.ac: New file. * configure.ac: Move plugin/offloading support into it. Include it. Instantiate testsuite/libgomp-test-support.pt.exp. * plugin/Makefrag.am: New file. * testsuite/Makefile.am (OFFLOAD_TARGETS) (OFFLOAD_ADDITIONAL_OPTIONS, OFFLOAD_ADDITIONAL_LIB_PATHS): Don't export. (libgomp-test-support.exp): New rule. (all-local): Depend on it. * Makefile.in: Regenerate. * testsuite/Makefile.in: Regenerate. * config.h.in: Likewise. * configure: Likewise. * configure.tgt: Harden shell syntax. * env.c: Include "oacc-int.h". (parse_acc_device_type): New function. (gomp_debug_var, goacc_device_type, goacc_device_num): New variables. (initialize_env): Initialize those. Call goacc_runtime_initialize. * error.c (gomp_vdebug, gomp_debug, gomp_vfatal): New functions. (gomp_fatal): Call gomp_vfatal. * libgomp.h: Include "libgomp-plugin.h" and <stdarg.h>. 
(gomp_debug_var, goacc_device_type, goacc_device_num, gomp_vdebug) (gomp_debug, gomp_verror, gomp_vfatal, gomp_init_targets_once) (splay_tree_node, splay_tree, splay_tree_key) (struct target_mem_desc, struct splay_tree_key_s) (struct gomp_memory_mapping, struct acc_dispatch_t) (struct gomp_device_descr, gomp_acc_insert_pointer) (gomp_acc_remove_pointer, target_mem_desc, gomp_copy_from_async) (gomp_unmap_vars, gomp_init_device, gomp_init_tables) (gomp_free_memmap, gomp_fini_device): New declarations. (gomp_vdebug, gomp_debug): New macros. Include "splay-tree.h". * libgomp.map (OACC_2.0): New symbol version. Use for acc_get_num_devices, acc_get_num_devices_h_, acc_set_device_type, acc_set_device_type_h_, acc_get_device_type, acc_get_device_type_h_, acc_set_device_num, acc_set_device_num_h_, acc_get_device_num, acc_get_device_num_h_, acc_async_test, acc_async_test_h_, acc_async_test_all, acc_async_test_all_h_, acc_wait, acc_wait_h_, acc_wait_async, acc_wait_async_h_, acc_wait_all, acc_wait_all_h_, acc_wait_all_async, acc_wait_all_async_h_, acc_init, acc_init_h_, acc_shutdown, acc_shutdown_h_, acc_on_device, acc_on_device_h_, acc_malloc, acc_free, acc_copyin, acc_copyin_32_h_, acc_copyin_64_h_, acc_copyin_array_h_, acc_present_or_copyin, acc_present_or_copyin_32_h_, acc_present_or_copyin_64_h_, acc_present_or_copyin_array_h_, acc_create, acc_create_32_h_, acc_create_64_h_, acc_create_array_h_, acc_present_or_create, acc_present_or_create_32_h_, acc_present_or_create_64_h_, acc_present_or_create_array_h_, acc_copyout, acc_copyout_32_h_, acc_copyout_64_h_, acc_copyout_array_h_, acc_delete, acc_delete_32_h_, acc_delete_64_h_, acc_delete_array_h_, acc_update_device, acc_update_device_32_h_, acc_update_device_64_h_, acc_update_device_array_h_, acc_update_self, acc_update_self_32_h_, acc_update_self_64_h_, acc_update_self_array_h_, acc_map_data, acc_unmap_data, acc_deviceptr, acc_hostptr, acc_is_present, acc_is_present_32_h_, acc_is_present_64_h_, acc_is_present_array_h_, acc_memcpy_to_device, acc_memcpy_from_device, acc_get_current_cuda_device, acc_get_current_cuda_context, acc_get_cuda_stream, acc_set_cuda_stream. (GOACC_2.0): New symbol version. Use for GOACC_data_end, GOACC_data_start, GOACC_enter_exit_data, GOACC_parallel, GOACC_update, GOACC_wait, GOACC_get_thread_num, GOACC_get_num_threads. (GOMP_PLUGIN_1.0): New symbol version. Use for GOMP_PLUGIN_malloc, GOMP_PLUGIN_malloc_cleared, GOMP_PLUGIN_realloc, GOMP_PLUGIN_debug, GOMP_PLUGIN_error, GOMP_PLUGIN_fatal, GOMP_PLUGIN_async_unmap_vars, GOMP_PLUGIN_acc_thread. * libgomp.texi: Update for OpenACC changes, and GOMP_DEBUG environment variable. * libgomp_g.h (GOACC_data_start, GOACC_data_end) (GOACC_enter_exit_data, GOACC_parallel, GOACC_update, GOACC_wait) (GOACC_get_num_threads, GOACC_get_thread_num): New declarations. * splay-tree.h (splay_tree_lookup, splay_tree_insert) (splay_tree_remove): New declarations. (rotate_left, rotate_right, splay_tree_splay, splay_tree_insert) (splay_tree_remove, splay_tree_lookup): Move into... * splay-tree.c: ... this new file. * target.c: Include "oacc-plugin.h", "oacc-int.h", <assert.h>. (splay_tree_node, splay_tree, splay_tree_key) (struct target_mem_desc, struct splay_tree_key_s) (struct gomp_device_descr): Don't declare. (num_devices_openmp): New variable. (gomp_get_num_devices ): Use it. (gomp_init_targets_once): New function. (gomp_get_num_devices ): Use it. (get_kind, gomp_copy_from_async, gomp_free_memmap) (gomp_fini_device, gomp_register_image_for_device): New functions. 
(gomp_map_vars): Add devaddrs parameter. (gomp_update): Add mm parameter. (gomp_init_device): Move most of it into... (gomp_init_tables): ... this new function. (gomp_register_images_for_device): Remove function. (splay_compare, gomp_map_vars, gomp_unmap_vars, gomp_init_device): Make them hidden instead of static. (gomp_map_vars_existing, gomp_map_vars, gomp_unmap_vars) (gomp_update, gomp_init_device, GOMP_target, GOMP_target_data) (GOMP_target_end_data, GOMP_target_update) (gomp_load_plugin_for_device, gomp_target_init): Update for OpenACC changes. * oacc-async.c: New file. * oacc-cuda.c: Likewise. * oacc-host.c: Likewise. * oacc-init.c: Likewise. * oacc-int.h: Likewise. * oacc-mem.c: Likewise. * oacc-parallel.c: Likewise. * oacc-plugin.c: Likewise. * oacc-plugin.h: Likewise. * oacc-ptx.h: Likewise. * openacc.f90: Likewise. * openacc.h: Likewise. * openacc_lib.h: Likewise. * plugin/plugin-host.c: Likewise. * plugin/plugin-nvptx.c: Likewise. * libgomp-plugin.c: Likewise. * libgomp-plugin.h: Likewise. * libgomp_target.h: Remove file after merging content into the former file. Update all users. * testsuite/lib/libgomp.exp: Load libgomp-test-support.exp. (offload_targets_s, offload_targets_s_openacc): New variables. (check_effective_target_openacc_nvidia_accel_present) (check_effective_target_openacc_nvidia_accel_selected): New procedures. (libgomp_init): Update for OpenACC changes. * testsuite/libgomp-test-support.exp.in: New file. * testsuite/libgomp.oacc-c++/c++.exp: Likewise. * testsuite/libgomp.oacc-c/c.exp: Likewise. * testsuite/libgomp.oacc-fortran/fortran.exp: Likewise. * testsuite/libgomp.oacc-c-c++-common/abort-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/abort-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/abort-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/abort-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/acc_on_device-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/asyncwait-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/cache-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/clauses-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/clauses-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/collapse-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/collapse-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/collapse-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/collapse-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/context-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/context-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/context-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/context-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-5.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-6.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-7.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/data-already-8.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/deviceptr-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/if-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/kernels-1.c: Likewise. 
* testsuite/libgomp.oacc-c-c++-common/kernels-empty.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-10.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-11.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-12.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-13.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-14.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-15.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-16.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-17.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-18.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-19.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-20.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-21.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-22.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-23.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-24.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-25.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-26.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-27.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-28.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-29.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-30.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-31.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-32.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-33.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-34.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-35.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-36.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-37.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-38.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-39.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-40.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-41.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-42.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-43.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-44.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-45.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-46.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-47.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-48.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-49.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-5.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-50.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-51.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-52.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-53.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-54.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-55.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-56.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-57.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-58.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-59.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-6.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-60.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-61.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-62.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-63.c: Likewise. 
* testsuite/libgomp.oacc-c-c++-common/lib-64.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-65.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-66.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-67.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-68.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-69.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-7.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-70.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-71.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-72.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-73.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-74.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-75.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-76.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-77.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-78.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-79.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-80.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-81.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-82.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-83.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-84.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-85.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-86.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-87.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-88.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-89.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-9.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-90.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-91.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/lib-92.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/nested-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/nested-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/offset-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/parallel-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/parallel-empty.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/pointer-align-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/present-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/present-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-3.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-4.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-5.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/reduction-initial-1.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/subr.h: Likewise. * testsuite/libgomp.oacc-c-c++-common/subr.ptx: Likewise. * testsuite/libgomp.oacc-c-c++-common/timer.h: Likewise. * testsuite/libgomp.oacc-c-c++-common/update-1-2.c: Likewise. * testsuite/libgomp.oacc-c-c++-common/update-1.c: Likewise. * testsuite/libgomp.oacc-fortran/abort-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/abort-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/acc_on_device-1-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/acc_on_device-1-2.f: Likewise. * testsuite/libgomp.oacc-fortran/acc_on_device-1-3.f: Likewise. * testsuite/libgomp.oacc-fortran/asyncwait-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/asyncwait-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/asyncwait-3.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-1.f90: Likewise. 
* testsuite/libgomp.oacc-fortran/collapse-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-3.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-4.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-5.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-6.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-7.f90: Likewise. * testsuite/libgomp.oacc-fortran/collapse-8.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-3.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-4-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-4.f90: Likewise. * testsuite/libgomp.oacc-fortran/data-already-1.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-2.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-3.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-4.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-5.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-6.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-7.f: Likewise. * testsuite/libgomp.oacc-fortran/data-already-8.f: Likewise. * testsuite/libgomp.oacc-fortran/lib-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-10.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-2.f: Likewise. * testsuite/libgomp.oacc-fortran/lib-3.f: Likewise. * testsuite/libgomp.oacc-fortran/lib-4.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-5.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-6.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-7.f90: Likewise. * testsuite/libgomp.oacc-fortran/lib-8.f90: Likewise. * testsuite/libgomp.oacc-fortran/map-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/openacc_version-1.f: Likewise. * testsuite/libgomp.oacc-fortran/openacc_version-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/pointer-align-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/pset-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-3.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-4.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-5.f90: Likewise. * testsuite/libgomp.oacc-fortran/reduction-6.f90: Likewise. * testsuite/libgomp.oacc-fortran/routine-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/routine-2.f90: Likewise. * testsuite/libgomp.oacc-fortran/routine-3.f90: Likewise. * testsuite/libgomp.oacc-fortran/routine-4.f90: Likewise. * testsuite/libgomp.oacc-fortran/subarrays-1.f90: Likewise. * testsuite/libgomp.oacc-fortran/subarrays-2.f90: Likewise. liboffloadmic/ * plugin/libgomp-plugin-intelmic.cpp (GOMP_OFFLOAD_get_name) (GOMP_OFFLOAD_get_caps, GOMP_OFFLOAD_fini_device): New functions. Co-Authored-By: Bernd Schmidt <bernds@codesourcery.com> Co-Authored-By: Cesar Philippidis <cesar@codesourcery.com> Co-Authored-By: Dmitry Bocharnikov <dmitry.b@samsung.com> Co-Authored-By: Evgeny Gavrin <e.gavrin@samsung.com> Co-Authored-By: Ilmir Usmanov <i.usmanov@samsung.com> Co-Authored-By: Jakub Jelinek <jakub@redhat.com> Co-Authored-By: James Norris <jnorris@codesourcery.com> Co-Authored-By: Julian Brown <julian@codesourcery.com> Co-Authored-By: Nathan Sidwell <nathan@codesourcery.com> Co-Authored-By: Tobias Burnus <burnus@net-b.de> Co-Authored-By: Tom de Vries <tom@codesourcery.com> From-SVN: r219682
\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2015 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below). A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

A GNU Manual

(b) The FSF's Back-Cover Text is:

You have freedom to copy and modify this GNU Manual, like GNU
software. Copies published by the Free Software Foundation raise
funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library. This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo
@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page
@node Top
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library. This includes the GNU
implementation of the @uref{http://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{http://www.openacc.org/, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
on this, support for OpenACC and offloading (both OpenACC and OpenMP
4's target construct) was added later on, and the library's name
changed to GNU Offloading and Multi Processing Runtime Library.


@comment
@comment When you add a new menu item, please keep the right hand
@comment aligned to the same column. Do not use tabs. This provides
@comment better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* Runtime Library Routines::   The OpenMP runtime application programming
                               interface.
* Environment Variables::      Influencing runtime behavior with environment
                               variables.
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu
@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified. This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).
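
The following example is only an illustration and not part of the OpenMP
specification; the file name @file{hello.c} is arbitrary. Compiling with
@command{gcc -fopenmp hello.c -o hello} enables the directive and links
libgomp automatically:

@smallexample
/* hello.c */
#include <stdio.h>
#include <omp.h>

int
main (void)
{
  /* Each thread of the team created by the parallel directive
     prints its own thread number and the team size.  */
  #pragma omp parallel
  printf ("Hello from thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
}
@end smallexample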
A complete description of all OpenMP directives accepted may be found in
the @uref{http://www.openmp.org, OpenMP Application Program Interface} manual,
version 4.0.
@c ---------------------------------------------------------------------
@c Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.0. The routines are structured in the following
three parts:
@menu
Control threads, processors and the parallel environment. They have C
linkage, and do not throw exceptions.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_get_default_device::      Get the default device for target regions
* omp_get_dynamic::             Dynamic teams setting
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Maximum number of active regions
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_devices::         Number of target devices
* omp_get_num_procs::           Number of processors online
* omp_get_num_teams::           Number of teams
* omp_get_num_threads::         Size of the active team
* omp_get_proc_bind::           Whether threads may be moved between CPUs
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_team_num::            Get team number
* omp_get_team_size::           Number of threads in a team
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_is_initial_device::       Whether executing on the host device
* omp_set_default_device::      Set the default device for target regions
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::            Initialize simple lock
* omp_set_lock::             Wait for and set simple lock
* omp_test_lock::            Test and set simple lock if available
* omp_unset_lock::           Unset simple lock
* omp_destroy_lock::         Destroy simple lock
* omp_init_nest_lock::       Initialize nested lock
* omp_set_nest_lock::        Wait for and set nested lock
* omp_test_nest_lock::       Test and set nested lock if available
* omp_unset_nest_lock::      Unset nested lock
* omp_destroy_nest_lock::    Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::            Get timer precision.
* omp_get_wtime::            Elapsed wall clock time.
@end menu
@node omp_get_active_level
|
|
@section @code{omp_get_active_level} -- Number of parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the nesting level for the active parallel blocks,
|
|
which enclose the calling call.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.20.
|
|
@end table
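
As a minimal C sketch (illustrative only, not normative text), the following
program contrasts @code{omp_get_active_level} with @code{omp_get_level}; it
assumes nested parallelism has been enabled and uses an @code{if} clause to
make the inner region inactive:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_set_nested (1);
  #pragma omp parallel num_threads (2)
  #pragma omp parallel num_threads (2) if (0)  /* Inactive inner region.  */
  #pragma omp master
  printf ("level %d, active level %d\n",
          omp_get_level (), omp_get_active_level ());
  return 0;
@}
@end smallexample
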
@node omp_get_ancestor_thread_num
|
|
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the thread identification number for the given
|
|
nesting level of the current thread. For values of @var{level} outside
the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
|
|
@item @tab @code{integer level}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.18.
|
|
@end table
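
The following small C sketch (an illustration, not part of the routine's
definition) prints, from inside a nested region, the thread number in the
enclosing team next to the current one; nested parallelism is assumed to
be enabled:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_set_nested (1);
  #pragma omp parallel num_threads (2)
  #pragma omp parallel num_threads (2)
  printf ("ancestor (level 1): %d, this thread: %d\n",
          omp_get_ancestor_thread_num (1), omp_get_thread_num ());
  return 0;
@}
@end smallexample
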
@node omp_get_cancellation
|
|
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if cancellation is activated, @code{false}
|
|
otherwise. Here, @code{true} and @code{false} represent their language-specific
|
|
counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
|
|
deactivated.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_CANCELLATION}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.9.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_default_device
|
|
@section @code{omp_get_default_device} -- Get the default device for target regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Get the default device for target regions without device clause.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.24.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_dynamic
|
|
@section @code{omp_get_dynamic} -- Dynamic teams setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if enabled, @code{false} otherwise.
|
|
Here, @code{true} and @code{false} represent their language-specific
|
|
counterparts.
|
|
|
|
The dynamic team setting may be initialized at startup by the
|
|
@env{OMP_DYNAMIC} environment variable or at runtime using
|
|
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
|
|
disabled by default.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.8.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_level
|
|
@section @code{omp_get_level} -- Obtain the current nesting level
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the nesting level for the parallel blocks,
which enclose the call to this routine.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.17.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_max_active_levels
|
|
@section @code{omp_get_max_active_levels} -- Maximum number of active regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function obtains the maximum allowed number of nested, active parallel regions.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.16.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_max_threads
|
|
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Return the maximum number of threads used for the current parallel region
|
|
that does not use the clause @code{num_threads}.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.3.
|
|
@end table
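
As a short, non-normative illustration, the value returned outside a
parallel region is the team size that a subsequent @code{parallel}
construct without a @code{num_threads} clause would use:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  printf ("up to %d threads; current team has %d thread(s)\n",
          omp_get_max_threads (), omp_get_num_threads ());
  return 0;
@}
@end smallexample
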
@node omp_get_nested
|
|
@section @code{omp_get_nested} -- Nested parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if nested parallel regions are
|
|
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
|
|
represent their language-specific counterparts.
|
|
|
|
Nested parallel regions may be initialized at startup by the
|
|
@env{OMP_NESTED} environment variable or at runtime using
|
|
@code{omp_set_nested}. If undefined, nested parallel regions are
|
|
disabled by default.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nested}, @ref{OMP_NESTED}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.11.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_devices
|
|
@section @code{omp_get_num_devices} -- Number of target devices
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of target devices.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.25.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_procs
|
|
@section @code{omp_get_num_procs} -- Number of processors online
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of processors online on that device.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.5.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_teams
|
|
@section @code{omp_get_num_teams} -- Number of teams
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of teams in the current team region.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.26.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_num_threads
|
|
@section @code{omp_get_num_threads} -- Size of the active team
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the number of threads in the current team. In a sequential section of
|
|
the program @code{omp_get_num_threads} returns 1.
|
|
|
|
The default team size may be initialized at startup by the
|
|
@env{OMP_NUM_THREADS} environment variable. At runtime, the size
|
|
of the current team may be set either by the @code{NUM_THREADS}
|
|
clause or by @code{omp_set_num_threads}. If none of the above were
|
|
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
|
|
one thread per CPU online is used.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.2.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_proc_bind
|
|
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the currently active thread affinity policy, which is
|
|
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
|
|
@code{omp_proc_bind_true}, @code{omp_proc_bind_master},
|
|
@code{omp_proc_bind_close} and @code{omp_proc_bind_spread}.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.22.
|
|
@end table
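
A minimal C sketch (illustrative only) that maps the returned value to a
human-readable policy name might look as follows:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  switch (omp_get_proc_bind ())
    @{
    case omp_proc_bind_false:  puts ("no binding");          break;
    case omp_proc_bind_true:   puts ("bound, unspecified");  break;
    case omp_proc_bind_master: puts ("master policy");       break;
    case omp_proc_bind_close:  puts ("close policy");        break;
    case omp_proc_bind_spread: puts ("spread policy");       break;
    default:                   puts ("unknown policy");      break;
    @}
  return 0;
@}
@end smallexample
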
@node omp_get_schedule
|
|
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Obtain the runtime scheduling method. The @var{kind} argument will be
|
|
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
|
|
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
|
|
@var{modifier}, is set to the chunk size.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *modifier);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, modifier)}
|
|
@item @tab @code{integer(kind=omp_sched_kind) kind}
|
|
@item @tab @code{integer modifier}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.13.
|
|
@end table
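
For illustration only, a small C sketch that reads back the current runtime
schedule and chunk size:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk;

  omp_get_schedule (&kind, &chunk);
  printf ("schedule kind %d, chunk size %d\n", (int) kind, chunk);
  return 0;
@}
@end smallexample
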
@node omp_get_team_num
|
|
@section @code{omp_get_team_num} -- Get team number
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns the team number of the calling thread.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.27.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_team_size
|
|
@section @code{omp_get_team_size} -- Number of threads in a team
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns the number of threads in a thread team to which
|
|
either the current thread or its ancestor belongs. For values of @var{level}
|
|
outside zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
|
|
1 is returned, and for @code{omp_get_level}, the result is identical
|
|
to @code{omp_get_num_threads}.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
|
|
@item @tab @code{integer level}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.19.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_thread_limit
|
|
@section @code{omp_get_thread_limit} -- Maximum number of threads
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Return the maximum number of threads of the program.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.14.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_thread_num
|
|
@section @code{omp_get_thread_num} -- Current thread ID
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Returns a unique thread identification number within the current team.
|
|
In sequential parts of the program, @code{omp_get_thread_num}
|
|
always returns 0. In parallel regions the return value varies
|
|
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
|
|
value of the master thread of a team is always 0.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.4.
|
|
@end table
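
A minimal, non-normative C example printing each thread's number and the
team size:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  #pragma omp parallel
  printf ("thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
@}
@end smallexample
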
@node omp_in_parallel
|
|
@section @code{omp_in_parallel} -- Whether a parallel region is active
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running in parallel,
|
|
@code{false} otherwise. Here, @code{true} and @code{false} represent
|
|
their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.6.
|
|
@end table
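
As a short illustration (not normative), the function returns the
language-specific @code{false} outside and @code{true} inside an active
parallel region:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  printf ("outside: %d\n", omp_in_parallel ());
  #pragma omp parallel num_threads (2)
  #pragma omp master
  printf ("inside:  %d\n", omp_in_parallel ());
  return 0;
@}
@end smallexample
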
@node omp_in_final
|
|
@section @code{omp_in_final} -- Whether in final or included task region
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running in a final
|
|
or included task region, @code{false} otherwise. Here, @code{true}
|
|
and @code{false} represent their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.21.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_is_initial_device
|
|
@section @code{omp_is_initial_device} -- Whether executing on the host device
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function returns @code{true} if currently running on the host device,
|
|
@code{false} otherwise. Here, @code{true} and @code{false} represent
|
|
their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
|
|
@end multitable
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.28.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_default_device
|
|
@section @code{omp_set_default_device} -- Set the default device for target regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set the default device for target regions without device clause. The argument
|
|
shall be a nonnegative device number.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
|
|
@item @tab @code{integer device_num}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.23.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_dynamic
|
|
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable the dynamic adjustment of the number of threads
|
|
within a team. The function takes the language-specific equivalent
|
|
of @code{true} and @code{false}, where @code{true} enables dynamic
|
|
adjustment of team sizes and @code{false} disables it.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
|
|
@item @tab @code{logical, intent(in) :: dynamic_threads}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.7.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_max_active_levels
|
|
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
This function limits the maximum allowed number of nested, active
|
|
parallel regions.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
|
|
@item @tab @code{integer max_levels}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.15.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_nested
|
|
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable nested parallel regions, i.e., whether team members
|
|
are allowed to create new teams. The function takes the language-specific
|
|
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
|
|
@item @tab @code{logical, intent(in) :: nested}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NESTED}, @ref{omp_get_nested}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.10.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_num_threads
|
|
@section @code{omp_set_num_threads} -- Set upper team size limit
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the number of threads used by default in subsequent parallel
|
|
sections, if those do not specify a @code{num_threads} clause. The
|
|
argument of @code{omp_set_num_threads} shall be a positive integer.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
|
|
@item @tab @code{integer, intent(in) :: num_threads}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.1.
|
|
@end table
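
A minimal, non-normative sketch combining @code{omp_set_dynamic} and
@code{omp_set_num_threads} to request a team of exactly four threads:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_set_dynamic (0);       /* Disable dynamic adjustment of team sizes.  */
  omp_set_num_threads (4);
  #pragma omp parallel
  #pragma omp master
  printf ("team size: %d\n", omp_get_num_threads ());
  return 0;
@}
@end smallexample
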
@node omp_set_schedule
|
|
@section @code{omp_set_schedule} -- Set the runtime scheduling method
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Sets the runtime scheduling method. The @var{kind} argument can have the
|
|
value @code{omp_sched_static}, @code{omp_sched_dynamic},
|
|
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
|
|
@code{omp_sched_auto}, the chunk size is set to the value of
|
|
@var{modifier} if positive, or to the default value if zero or negative.
|
|
For @code{omp_sched_auto} the @var{modifier} argument is ignored.
|
|
|
|
@item @emph{C/C++}
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int modifier);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, modifier)}
|
|
@item @tab @code{integer(kind=omp_sched_kind) kind}
|
|
@item @tab @code{integer modifier}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_schedule}
|
|
@ref{OMP_SCHEDULE}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.2.12.
|
|
@end table
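
For illustration only, the following sketch selects dynamic scheduling with
a chunk size of 16 and then reads the setting back via
@code{omp_get_schedule}:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk;

  omp_set_schedule (omp_sched_dynamic, 16);
  omp_get_schedule (&kind, &chunk);
  printf ("kind %d, chunk %d\n", (int) kind, chunk);
  return 0;
@}
@end smallexample
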
@node omp_init_lock
|
|
@section @code{omp_init_lock} -- Initialize simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Initialize a simple lock. After initialization, the lock is in
|
|
an unlocked state.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_destroy_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.1.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_set_lock
|
|
@section @code{omp_set_lock} -- Wait for and set simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a simple lock, the lock variable must be initialized by
|
|
@code{omp_init_lock}. The calling thread is blocked until the lock
|
|
is available. If the lock is already held by the current thread,
|
|
a deadlock occurs.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.3.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_test_lock
|
|
@section @code{omp_test_lock} -- Test and set simple lock if available
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a simple lock, the lock variable must be initialized by
|
|
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
|
|
does not block if the lock is not available. This function returns
|
|
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
|
|
@code{false} represent their language-specific counterparts.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.5.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_unset_lock
|
|
@section @code{omp_unset_lock} -- Unset simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
A simple lock about to be unset must have been locked by @code{omp_set_lock}
|
|
or @code{omp_test_lock} before. In addition, the lock must be held by the
|
|
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
|
|
or more threads attempted to set the lock before, one of them is chosen to,
|
|
again, set the lock to itself.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_lock}, @ref{omp_test_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.4.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_destroy_lock
|
|
@section @code{omp_destroy_lock} -- Destroy simple lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Destroy a simple lock. In order to be destroyed, a simple lock must be
|
|
in the unlocked state.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
|
|
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.2.
|
|
@end table
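
The following non-normative C sketch exercises the simple lock routines
described in the preceding sections; the lock protects a shared counter:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  omp_lock_t lock;
  int counter = 0;

  omp_init_lock (&lock);
  #pragma omp parallel num_threads (4)
  @{
    omp_set_lock (&lock);      /* Blocks until the lock is acquired.  */
    counter++;
    omp_unset_lock (&lock);
  @}
  omp_destroy_lock (&lock);
  printf ("counter = %d\n", counter);
  return 0;
@}
@end smallexample
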
@node omp_init_nest_lock
|
|
@section @code{omp_init_nest_lock} -- Initialize nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Initialize a nested lock. After initialization, the lock is in
|
|
an unlocked state and the nesting count is set to zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_destroy_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.1.
|
|
@end table
|
|
|
|
|
|
@node omp_set_nest_lock
|
|
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a nested lock, the lock variable must be initialized by
|
|
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
|
|
is available. If the lock is already held by the current thread, the
|
|
nesting count for the lock is incremented.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.3.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_test_nest_lock
|
|
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Before setting a nested lock, the lock variable must be initialized by
|
|
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
|
|
@code{omp_test_nest_lock} does not block if the lock is not available.
|
|
If the lock is already held by the current thread, the new nesting count
|
|
is returned. Otherwise, the return value equals zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.5.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_unset_nest_lock
|
|
@section @code{omp_unset_nest_lock} -- Unset nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to, again, set the lock to itself.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.4.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_destroy_nest_lock
|
|
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Destroy a nested lock. In order to be destroyed, a nested lock must be
|
|
in the unlocked state and its nesting count must equal zero.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
|
|
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_init_nest_lock}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.3.2.
|
|
@end table
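
A minimal, non-normative sketch of the nested lock routines; the same thread
acquires the lock twice, so a simple lock would deadlock here:

@smallexample
#include <omp.h>

static omp_nest_lock_t lock;

static void
helper (void)
@{
  omp_set_nest_lock (&lock);   /* Re-acquired by the owning thread.  */
  /* ... work on the protected data ...  */
  omp_unset_nest_lock (&lock);
@}

int
main (void)
@{
  omp_init_nest_lock (&lock);
  omp_set_nest_lock (&lock);
  helper ();
  omp_unset_nest_lock (&lock);
  omp_destroy_nest_lock (&lock);
  return 0;
@}
@end smallexample
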
@node omp_get_wtick
|
|
@section @code{omp_get_wtick} -- Get timer precision
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Gets the timer precision, i.e., the number of seconds between two
|
|
successive clock ticks.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_wtime}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.4.2.
|
|
@end table
|
|
|
|
|
|
|
|
@node omp_get_wtime
|
|
@section @code{omp_get_wtime} -- Elapsed wall clock time
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
|
|
Time is measured from some "time in the past", which is an arbitrary time
|
|
guaranteed not to change during the execution of the program.
|
|
|
|
@item @emph{C/C++}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
|
|
@end multitable
|
|
|
|
@item @emph{Fortran}:
|
|
@multitable @columnfractions .20 .80
|
|
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
|
|
@end multitable
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_wtick}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 3.4.1.
|
|
@end table
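
A small, non-normative timing sketch using both routines:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
@{
  double start = omp_get_wtime ();
  /* ... work to be timed ...  */
  double end = omp_get_wtime ();
  printf ("took %f seconds (timer resolution %g s)\n",
          end - start, omp_get_wtick ());
  return 0;
@}
@end smallexample
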
@c ---------------------------------------------------------------------
|
|
@c Environment Variables
|
|
@c ---------------------------------------------------------------------
|
|
|
|
@node Environment Variables
|
|
@chapter Environment Variables
|
|
|
|
The environment variables beginning with @env{OMP_} are defined by
Section 4 of the OpenMP specification in version 4.0, while those
beginning with @env{GOMP_} are GNU extensions.
|
|
|
|
@menu
|
|
* OMP_CANCELLATION:: Set whether cancellation is activated
|
|
* OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
|
|
* OMP_DEFAULT_DEVICE:: Set the device used in target regions
|
|
* OMP_DYNAMIC:: Dynamic adjustment of threads
|
|
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
|
|
* OMP_NESTED:: Nested parallel regions
|
|
* OMP_NUM_THREADS:: Specifies the number of threads to use
|
|
* OMP_PROC_BIND:: Whether threads may be moved between CPUs
* OMP_PLACES:: Specifies on which CPUs the threads should be placed
|
|
* OMP_STACKSIZE:: Set default thread stack size
|
|
* OMP_SCHEDULE:: How threads are scheduled
|
|
* OMP_THREAD_LIMIT:: Set the maximum number of threads
|
|
* OMP_WAIT_POLICY:: How waiting threads are handled
|
|
* GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
|
|
* GOMP_DEBUG:: Enable debugging output
|
|
* GOMP_STACKSIZE:: Set default thread stack size
|
|
* GOMP_SPINCOUNT:: Set the busy-wait spin count
|
|
@end menu
|
|
|
|
|
|
@node OMP_CANCELLATION
|
|
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
|
|
if unset, cancellation is disabled and the @code{cancel} construct is ignored.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_cancellation}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.11
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DISPLAY_ENV
|
|
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
If set to @code{TRUE}, the OpenMP version number and the values
|
|
associated with the OpenMP environment variables are printed to @code{stderr}.
|
|
If set to @code{VERBOSE}, it additionally shows the value of the environment
|
|
variables which are GNU extensions. If undefined or set to @code{FALSE},
|
|
this information will not be shown.
|
|
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.12
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DEFAULT_DEVICE
|
|
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set to choose the device which is used in a @code{target} region, unless the
|
|
value is overridden by @code{omp_set_default_device} or by a @code{device}
|
|
clause. The value shall be the nonnegative device number. If no device with
|
|
the given device number exists, the code is executed on the host. If unset,
|
|
device number 0 will be used.
|
|
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_get_default_device}, @ref{omp_set_default_device},
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.11
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_DYNAMIC
|
|
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable the dynamic adjustment of the number of threads
|
|
within a team. The value of this environment variable shall be
|
|
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
|
|
disabled by default.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_dynamic}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.3
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_MAX_ACTIVE_LEVELS
|
|
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the initial value for the maximum number of nested parallel
|
|
regions. The value of this variable shall be a positive integer.
|
|
If undefined, the number of active levels is unlimited.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_max_active_levels}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.9
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_NESTED
|
|
@section @env{OMP_NESTED} -- Nested parallel regions
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable or disable nested parallel regions, i.e., whether team members
|
|
are allowed to create new teams. The value of this environment variable
|
|
shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
|
|
regions are disabled by default.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_nested}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.6
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_NUM_THREADS
|
|
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the default number of threads to use in parallel regions. The
|
|
value of this variable shall be a comma-separated list of positive integers;
|
|
each value specifies the number of threads to use for the corresponding nesting
level. If undefined, one thread per CPU is used.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_num_threads}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.2
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_PROC_BIND
|
|
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies whether threads may be moved between processors. If set to
|
|
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
|
|
they may be moved. Alternatively, a comma separated list with the
|
|
values @code{MASTER}, @code{CLOSE} and @code{SPREAD} can be used to specify
|
|
the thread affinity policy for the corresponding nesting level. With
|
|
@code{MASTER} the worker threads are in the same place partition as the
|
|
master thread. With @code{CLOSE} those are kept close to the master thread
|
|
in contiguous place partitions. And with @code{SPREAD} a sparse distribution
|
|
across the place partitions is used.
|
|
|
|
When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
|
|
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.4
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_PLACES
|
|
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
The thread placement can be either specified using an abstract name or by an
|
|
explicit list of the places. The abstract names @code{threads}, @code{cores}
|
|
and @code{sockets} can be optionally followed by a positive number in
|
|
parentheses, which denotes how many places shall be created. With
|
|
@code{threads} each place corresponds to a single hardware thread; @code{cores}
|
|
to a single core with the corresponding number of hardware threads; and with
|
|
@code{sockets} the place corresponds to a single socket. The resulting
|
|
placement can be shown by setting the @env{OMP_DISPLAY_ENV} environment
|
|
variable.
|
|
|
|
Alternatively, the placement can be specified explicitly as comma-separated
|
|
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The hardware threads
|
|
belonging to a place can either be specified as comma-separated list of
|
|
nonnegative thread numbers or using an interval. Multiple places can also be
|
|
either specified by a comma-separated list of places or by an interval. To
|
|
specify an interval, a colon followed by the count is placed after
|
|
the hardware thread number or the place. Optionally, the length can be
|
|
followed by a colon and the stride number -- otherwise a unit stride is
|
|
assumed. For instance, the following specifies the same places list:
|
|
@code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
|
|
@code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
|
|
|
|
If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
|
|
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
|
|
between CPUs following no placement policy.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
|
|
@ref{OMP_DISPLAY_ENV}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.5
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_STACKSIZE
|
|
@section @env{OMP_STACKSIZE} -- Set default thread stack size
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set the default thread stack size in kilobytes, unless the number
|
|
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
|
|
case the size is, respectively, in bytes, kilobytes, megabytes
|
|
or gigabytes. This is different from @code{pthread_attr_setstacksize}
|
|
which gets the number of bytes as an argument. If the stack size cannot
|
|
be set due to system constraints, an error is reported and the initial
|
|
stack size is left unchanged. If undefined, the stack size is system
|
|
dependent.
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.7
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_SCHEDULE
|
|
@section @env{OMP_SCHEDULE} -- How threads are scheduled
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form: @code{type[,chunk]} where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer. If undefined,
dynamic scheduling and a chunk size of 1 is used.
|
|
|
|
@item @emph{See also}:
|
|
@ref{omp_set_schedule}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Sections 2.7.1 and 4.1
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_THREAD_LIMIT
|
|
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies the number of threads to use for the whole program. The
|
|
value of this variable shall be a positive integer. If undefined,
|
|
the number of threads is not limited.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.10
|
|
@end table
|
|
|
|
|
|
|
|
@node OMP_WAIT_POLICY
|
|
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they may consume
CPU power while waiting. If undefined, threads wait actively for a
short time before waiting passively.
|
|
|
|
@item @emph{See also}:
|
|
@ref{GOMP_SPINCOUNT}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://www.openmp.org/, OpenMP specification v4.0}, Section 4.8
|
|
@end table
|
|
|
|
|
|
|
|
@node GOMP_CPU_AFFINITY
|
|
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Binds threads to specific CPUs. The variable should contain a space-separated
|
|
or comma-separated list of CPUs. This list may contain different kinds of
|
|
entries: either single CPU numbers in any order, a range of CPUs (M-N)
|
|
or a range with some stride (M-N:S). CPU numbers are zero based. For example,
|
|
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
|
|
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
|
|
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
|
|
and 14 respectively and then start assigning back from the beginning of
|
|
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
|
|
|
|
There is no libgomp library routine to determine whether a CPU affinity
|
|
specification is in effect. As a workaround, language-specific library
|
|
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
|
|
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
|
|
environment variable. A defined CPU affinity on startup cannot be changed
|
|
or disabled during the runtime of the application.
|
|
|
|
If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
|
|
@env{OMP_PROC_BIND} has a higher precedence. If neither @env{GOMP_CPU_AFFINITY}
nor @env{OMP_PROC_BIND} has been set, or when @env{OMP_PROC_BIND} is set to
@code{FALSE}, the host system will handle the assignment of threads to CPUs.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
|
|
@end table
|
|
|
|
|
|
|
|
@node GOMP_DEBUG
|
|
@section @env{GOMP_DEBUG} -- Enable debugging output
|
|
@cindex Environment Variable
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Enable debugging output. The variable should be set to @code{0}
|
|
(disabled, also the default if not set), or @code{1} (enabled).
|
|
|
|
If enabled, some debugging output will be printed during execution.
|
|
This is currently not specified in more detail, and subject to change.
|
|
@end table
|
|
|
|
|
|
|
|
@node GOMP_STACKSIZE
|
|
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Set the default thread stack size in kilobytes. This is different from
|
|
@code{pthread_attr_setstacksize} which gets the number of bytes as an
|
|
argument. If the stack size cannot be set due to system constraints, an
|
|
error is reported and the initial stack size is left unchanged. If undefined,
|
|
the stack size is system dependent.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_STACKSIZE}
|
|
|
|
@item @emph{Reference}:
|
|
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
|
|
GCC Patches Mailinglist},
|
|
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
|
|
GCC Patches Mailinglist}
|
|
@end table
|
|
|
|
|
|
|
|
@node GOMP_SPINCOUNT
|
|
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
|
|
@cindex Environment Variable
|
|
@cindex Implementation specific setting
|
|
@table @asis
|
|
@item @emph{Description}:
|
|
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
|
|
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
|
|
integer may optionally be followed by the following suffixes acting
|
|
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
|
|
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
|
|
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
|
|
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
|
|
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
|
|
If there are more OpenMP threads than available CPUs, 1000 and 100
|
|
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
|
|
undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
|
|
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
|
|
|
|
@item @emph{See also}:
|
|
@ref{OMP_WAIT_POLICY}
|
|
@end table
|
|
|
|
|
|
|
|
@c ---------------------------------------------------------------------
|
|
@c The libgomp ABI
|
|
@c ---------------------------------------------------------------------
|
|
|
|
@node The libgomp ABI
|
|
@chapter The libgomp ABI
|
|
|
|
The following sections present notes on the external ABI as
|
|
presented by libgomp. Only maintainers should need them.
|
|
|
|
@menu
|
|
* Implementing MASTER construct::
|
|
* Implementing CRITICAL construct::
|
|
* Implementing ATOMIC construct::
|
|
* Implementing FLUSH construct::
|
|
* Implementing BARRIER construct::
|
|
* Implementing THREADPRIVATE construct::
|
|
* Implementing PRIVATE clause::
|
|
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
|
|
* Implementing REDUCTION clause::
|
|
* Implementing PARALLEL construct::
|
|
* Implementing FOR construct::
|
|
* Implementing ORDERED construct::
|
|
* Implementing SECTIONS construct::
|
|
* Implementing SINGLE construct::
|
|
@end menu
|
|
|
|
|
|
@node Implementing MASTER construct
|
|
@section Implementing MASTER construct
|
|
|
|
@smallexample
|
|
if (omp_get_thread_num () == 0)
|
|
block
|
|
@end smallexample
|
|
|
|
Alternately, we generate two copies of the parallel subfunction
|
|
and only include this in the version run by the master thread.
|
|
Surely this is not worthwhile though...
|
|
|
|
|
|
|
|
@node Implementing CRITICAL construct
|
|
@section Implementing CRITICAL construct
|
|
|
|
Without a specified name,
|
|
|
|
@smallexample
|
|
void GOMP_critical_start (void);
|
|
void GOMP_critical_end (void);
|
|
@end smallexample
|
|
|
|
so that we don't get COPY relocations from libgomp to the main
|
|
application.
|
|
|
|
With a specified name, use omp_set_lock and omp_unset_lock with
|
|
name being transformed into a variable declared like
|
|
|
|
@smallexample
|
|
omp_lock_t gomp_critical_user_<name> __attribute__((common))
|
|
@end smallexample
|
|
|
|
Ideally the ABI would specify that all zero is a valid unlocked
|
|
state, and so we wouldn't need to initialize this at
|
|
startup.
|
|
|
|
|
|
|
|
@node Implementing ATOMIC construct
|
|
@section Implementing ATOMIC construct
|
|
|
|
The target should implement the @code{__sync} builtins.
|
|
|
|
Failing that we could add
|
|
|
|
@smallexample
|
|
void GOMP_atomic_enter (void)
|
|
void GOMP_atomic_exit (void)
|
|
@end smallexample
|
|
|
|
which reuses the regular lock code, but with yet another lock
|
|
object private to the library.
|
|
|
|
|
|
|
|
@node Implementing FLUSH construct
|
|
@section Implementing FLUSH construct
|
|
|
|
Expands to the @code{__sync_synchronize} builtin.
|
|
|
|
|
|
|
|
@node Implementing BARRIER construct
|
|
@section Implementing BARRIER construct
|
|
|
|
@smallexample
|
|
void GOMP_barrier (void)
|
|
@end smallexample
|
|
|
|
|
|
@node Implementing THREADPRIVATE construct
|
|
@section Implementing THREADPRIVATE construct
|
|
|
|
In _most_ cases we can map this directly to @code{__thread}. Except
|
|
that OMP allows constructors for C++ objects. We can either
|
|
refuse to support this (how often is it used?) or we can
|
|
implement something akin to .ctors.
|
|
|
|
Even more ideally, this ctor feature is handled by extensions
|
|
to the main pthreads library. Failing that, we can have a set
|
|
of entry points to register ctor functions to be called.

@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantic of new variable creation.
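
For instance, with a PARALLEL the privatized variable simply becomes
an uninitialized local in the subfunction; a sketch:

@smallexample
  /* #pragma omp parallel private(x)
       body;  */
  void subfunction (void *data)
  @{
    int x;      /* fresh local, replaces the outer x inside body */
    body;
  @}
@end smallexample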

@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.
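
As a sketch, for a scalar @code{firstprivate(x)} plus one
TREE_ADDRESSABLE object @code{a}, the communication struct and the
copies might look like this (the struct and field names are purely
illustrative, not what the compiler actually generates):

@smallexample
  struct data_s
  @{
    int x;          /* scalar: value copied in by the parent */
    struct s *a;    /* TREE_ADDRESSABLE: address copied in */
  @};

  /* Parent: */
  struct data_s data;
  data.x = x;
  data.a = &a;
  GOMP_parallel_start (subfunction, &data, 0);

  /* Subfunction: */
  struct data_s *datap = ptr;
  int x = datap->x; /* copy the value into the local variable */
@end smallexample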

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
  #pragma omp for firstprivate(x) lastprivate(y)
  for (int i = 0; i < n; ++i)
    body;
@end smallexample

which becomes

@smallexample
  @{
    int x = x, y;

    // for stuff

    if (i == n)
      y = y;
  @}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.

@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the master thread iterates over the
array to collect the values.
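
A rough sketch for @code{reduction(+:sum)}, with illustrative names
only:

@smallexample
  /* Communication struct holds a per-thread slot array.  */
  struct data_s @{ long *sum_array; @};

  /* Each thread accumulates locally, then publishes its value.  */
  long sum = 0;
  body;                                     /* updates sum */
  datap->sum_array[omp_get_thread_num ()] = sum;
  GOMP_barrier ();

  /* Master thread only, after the barrier: */
  for (i = 0; i < nthreads; i++)
    sum += datap->sum_array[i];
@end smallexample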

@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and evaluates to false; otherwise it is the value of the
NUM_THREADS clause, if present, or 0.
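
For example, a directive such as
@code{#pragma omp parallel if (cond) num_threads (4)} could pass the
argument as sketched here:

@smallexample
  GOMP_parallel_start (subfunction, &data, cond ? 4 : 1);
@end smallexample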

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.

@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
      @{
        long _e1 = _e0, i;
        for (i = _s0; i < _e1; i++)
          body;
      @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also not if we like.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations, which would mean that we wouldn't need to call any
of these routines.
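
A sketch of that per-thread arithmetic, for a trip count of @var{n}
(the variable names are illustrative; the compiler's generated code
may differ):

@smallexample
  /* Each thread computes its own chunk; no libgomp calls needed.  */
  long id = omp_get_thread_num ();
  long nthreads = omp_get_num_threads ();
  long chunk = n / nthreads, rest = n % nthreads;
  long _s0 = id * chunk + (id < rest ? id : rest);
  long _e0 = _s0 + chunk + (id < rest ? 1 : 0);
  for (i = _s0; i < _e0; i++)
    body;
@end smallexample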

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...

@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample
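
The body of an @code{ordered} region is simply bracketed by these
calls; roughly:

@smallexample
  /* #pragma omp ordered
       body;  */
  GOMP_ordered_start ();
  body;
  GOMP_ordered_end ();
@end smallexample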

@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample

@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample


@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{http://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye