Commit Graph

188927 Commits

Patrick Palka
861440a77b libstdc++: Implement LWG 3523 changes to ranges::iota_view
libstdc++-v3/ChangeLog:

	* include/std/ranges (iota_view::_Iterator): Befriend iota_view.
	(iota_view::_Sentinel): Likewise.
	(iota_view::iota_view): Add three overloads, each taking an
	iterator/sentinel pair as per LWG 3523.
	* testsuite/std/ranges/iota/iota_view.cc (test06): New test.
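
A minimal sketch of what the new overloads allow (illustrative values,
assuming -std=c++20 with this change applied):

#include <ranges>

void sketch ()
{
  std::ranges::iota_view<int, int> v (0, 10);
  auto first = v.begin () + 3;
  // LWG 3523: an iota_view can now be (re)constructed from its own
  // iterator/sentinel pair, here yielding the values 3..9.
  std::ranges::iota_view<int, int> w (first, v.end ());
  (void) w;
}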
2021-10-19 17:54:24 -04:00
Patrick Palka
53b1c382d5 libstdc++: Implement LWG 3549 changes to ranges::enable_view
This patch also reverts r11-3504 since that workaround is now obsolete
after this resolution.
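
A sketch of the user-visible effect (my_view is a hypothetical type):
deriving from view_interface is now enough for enable_view, with no
view_base base class required.

#include <ranges>

struct my_view : std::ranges::view_interface<my_view>
{
  int* begin () const;
  int* end () const;
};

// After LWG 3549, enable_view detects public, unambiguous derivation from a
// specialization of view_interface, so my_view models the view concept.
static_assert (std::ranges::view<my_view>);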

libstdc++-v3/ChangeLog:

	* include/bits/ranges_base.h (view_interface): Forward declare.
	(__detail::__is_derived_from_view_interface_fn): Declare.
	(__detail::__is_derived_from_view_interface): Define as per LWG 3549.
	(enable_view): Adjust as per LWG 3549.
	* include/bits/ranges_util.h (view_interface): Don't derive from
	view_base.
	* include/std/ranges (filter_view): Revert r11-3504 change.
	(transform_view): Likewise.
	(take_view): Likewise.
	(take_while_view): Likewise.
	(drop_view): Likewise.
	(drop_while_view): Likewise.
	(join_view): Likewise.
	(lazy_split_view): Likewise.
	(split_view): Likewise.
	(reverse_view): Likewise.
	* testsuite/std/ranges/adaptors/sizeof.cc: Update expected sizes.
	* testsuite/std/ranges/view.cc (test_view::test_view): Remove
	this default ctor since views no longer need to be default-initializable.
	(test01): New test.
2021-10-19 17:50:56 -04:00
Jonathan Wakely
c6a1fdd6dd doc: Fix typo in name of PowerPC __builtin_cpu_supports built-in
gcc/ChangeLog:

	* doc/extend.texi (Basic PowerPC Built-in Functions): Fix typo.
2021-10-19 20:39:46 +01:00
Jonathan Wakely
58f339fc5e libstdc++: Implement std::random_device::entropy() for other sources
Currently this function only returns a non-zero value for /dev/random
and /dev/urandom. When a hardware instruction such as RDRAND is in use
it should (in theory) be perfectly random and produce 32 bits of entropy
in each 32-bit result. Add a helper function to identify the source of
randomness from the _M_func and _M_file data members, and return a
suitable value when RDRAND or RDSEED is being used.
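
A small usage sketch of the observable change (the printed value depends on
which source backs the device):

#include <random>
#include <iostream>

int main ()
{
  std::random_device rd;
  // entropy() now also reports a non-zero value when a hardware source such
  // as RDRAND or RDSEED is in use, not only for /dev/random and /dev/urandom.
  std::cout << rd.entropy () << '\n';
}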

libstdc++-v3/ChangeLog:

	* src/c++11/random.cc (which_source): New helper function.
	(random_device::_M_getentropy()): Use which_source and return
	suitable values for sources other than device files.
	* testsuite/26_numerics/random/random_device/entropy.cc: New test.
2021-10-19 17:27:06 +01:00
Paul A. Clarke
3cfbe5dc08 rs6000: Guard some x86 intrinsics implementations
Some compatibility implementations of x86 intrinsics include
Power intrinsics which require POWER8.  Guard them.

emmintrin.h:
- _mm_cmpord_pd: Remove code which was ostensibly for pre-POWER8,
  but which indeed depended on POWER8 (vec_cmpgt(v2du)/vcmpgtud).
  The "POWER8" version works fine on pre-POWER8.
- _mm_mul_epu32: vec_mule(v4su) uses vmuleuw.
pmmintrin.h:
- _mm_movehdup_ps: vec_mergeo(v4su) uses vmrgow.
- _mm_moveldup_ps: vec_mergee(v4su) uses vmrgew.
smmintrin.h:
- _mm_cmpeq_epi64: vec_cmpeq(v2di) uses vcmpequd.
- _mm_mul_epi32: vec_mule(v4si) uses vmuluwm.
- _mm_cmpgt_epi64: vec_cmpgt(v2di) uses vcmpgtsd.
tmmintrin.h:
- _mm_sign_epi8: vec_neg(v4si) uses vsububm.
- _mm_sign_epi16: vec_neg(v4si) uses vsubuhm.
- _mm_sign_epi32: vec_neg(v4si) uses vsubuwm.
  Note that the above three could actually be supported pre-POWER8,
  but current GCC does not support them before POWER8.
- _mm_sign_pi8: depends on _mm_sign_epi8.
- _mm_sign_pi16: depends on _mm_sign_epi16.
- _mm_sign_pi32: depends on _mm_sign_epi32.

sse4_2-pcmpgtq.c:
- _mm_cmpgt_epi64: vec_cmpeq(v2di) uses vcmpequd.
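
A sketch of the guarding pattern, assuming the _ARCH_PWR8 macro these
headers already use for POWER8-only code (the exact guards are in the files
listed in the ChangeLog below):

#include <emmintrin.h>   /* the rs6000 compatibility header */

#ifdef _ARCH_PWR8
/* Only defined when POWER8 instructions are available; here vec_cmpeq on
   v2di requires vcmpequd.  */
static inline __m128i
cmpeq_epi64_sketch (__m128i a, __m128i b)
{
  return (__m128i) vec_cmpeq ((__v2di) a, (__v2di) b);
}
#endif /* _ARCH_PWR8 */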

2021-10-19  Paul A. Clarke  <pc@us.ibm.com>

gcc
	PR target/101893
	PR target/102719
	* config/rs6000/emmintrin.h: Guard POWER8 intrinsics.
	* config/rs6000/pmmintrin.h: Same.
	* config/rs6000/smmintrin.h: Same.
	* config/rs6000/tmmintrin.h: Same.

gcc/testsuite
	* gcc.target/powerpc/sse4_2-pcmpgtq.c: Tighten dg constraints
	to minimally Power8.
2021-10-19 10:36:59 -05:00
Paul A. Clarke
ce8add4b0e rs6000: Add nmmintrin.h to extra_headers
Fix an omission in commit 29fb1e831b.

2021-10-19  Paul A. Clarke  <pc@us.ibm.com>

gcc
	* config.gcc (extra_headers): Add nmmintrin.h.
2021-10-19 10:33:25 -05:00
Jonathan Wakely
04d392e843 libstdc++: Fix doxygen generation to work with relative paths
In r12-826 I tried to remove some redundant steps from the doxygen
build, but they are needed when configure is run as a relative path. The
use of pwd is to resolve the relative path to an absolute one.

libstdc++-v3/ChangeLog:

	* doc/Makefile.am (stamp-html-doxygen, stamp-html-doxygen)
	(stamp-latex-doxygen, stamp-man-doxygen): Fix recipes for
	relative ${top_srcdir}.
	* doc/Makefile.in: Regenerate.
2021-10-19 16:07:41 +01:00
Tobias Burnus
ff0eec94e8 Fortran: Fix 'fn spec' for deferred character length
This now shows up with gfortran.dg/deferred_type_param_6.f90 due to more ME
optimizations, causing failures without this commit.

gcc/fortran/ChangeLog:

	* trans-types.c (create_fn_spec): For allocatable/pointer
	character(len=:), use 'w' not 'R' as fn spec for the length dummy
	argument.
2021-10-19 16:43:56 +02:00
Martin Liska
7ef0cc4444 Make file valid UTF-8 input.
liboffloadmic/ChangeLog:

	* include/coi/source/COIBuffer_source.h: Convert 2 chars to
	unicode.
2021-10-19 16:13:56 +02:00
Richard Biener
93bd021388 Refactor vect_supportable_dr_alignment
This refactors vect_supportable_dr_alignment to get the misalignment
as input parameter which allows us to elide modifying/restoring
of DR_MISALIGNMENT during alignment peeling analysis which eventually
makes it more straightforward to split out the negative step
handling.

2021-10-19  Richard Biener  <rguenther@suse.de>

	* tree-vectorizer.h (vect_supportable_dr_alignment): Add
	misalignment parameter.
	* tree-vect-data-refs.c (vect_get_peeling_costs_all_drs):
	Do not change DR_MISALIGNMENT in place, instead pass the
	adjusted misalignment to vect_supportable_dr_alignment.
	(vect_peeling_supportable): Likewise.
	(vect_peeling_hash_get_lowest_cost): Adjust.
	(vect_enhance_data_refs_alignment): Likewise.
	(vect_vfa_access_size): Likewise.
	(vect_supportable_dr_alignment): Add misalignment
	parameter and simplify.
	* tree-vect-stmts.c (get_negative_load_store_type): Adjust.
	(get_group_load_store_type): Likewise.
	(get_load_store_type): Likewise.
2021-10-19 16:09:01 +02:00
Jonathan Wakely
5a8832b165 libstdc++: Change std::variant union member to empty struct
This more clearly expresses the intent (a completely unused, trivial
type) than using char. It's also consistent with the unions in
std::optional.
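
The shape of the change, as a standalone sketch (names are illustrative, not
the exact libstdc++ ones):

// Before, the inactive union member was a char; an empty struct now makes it
// explicit that the member is never read and carries no value.
template<typename T>
union uninitialized_sketch
{
  struct empty_byte { } empty;   // unused, trivial placeholder
  T storage;                     // the real payload, constructed on demand
};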

libstdc++-v3/ChangeLog:

	* include/std/variant (_Uninitialized): Use an empty struct
	for the unused union member, instead of char.
2021-10-19 15:01:16 +01:00
Jonathan Wakely
c4ecb11e4f libstdc++: Fix std::stack deduction guide
libstdc++-v3/ChangeLog:

	* include/bits/stl_stack.h (stack(Iterator, Iterator)): Remove
	non-deducible template parameter from deduction guide.
	* testsuite/23_containers/stack/deduction.cc: Check new C++23
	deduction guides.
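
A usage sketch of the C++23 iterator-pair deduction this fixes (assuming
-std=c++23):

#include <stack>
#include <vector>
#include <type_traits>

void sketch ()
{
  std::vector<int> v {1, 2, 3};
  // With the non-deducible template parameter removed, class template
  // argument deduction from an iterator pair works and yields std::stack<int>.
  std::stack s (v.begin (), v.end ());
  static_assert (std::is_same_v<decltype (s), std::stack<int>>);
}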
2021-10-19 15:01:16 +01:00
Jonathan Wakely
82b2e4f8cf libstdc++: Implement monadic operations for std::optional (P0798R8)
Another new addition to the C++23 working draft.

The new member functions of std::optional are only defined for C++23,
but the new members of _Optional_payload_base are defined for C++20 so
that they can be used in non-propagating-cache in <ranges>. The
_Optional_payload_base::_M_construct member can also be used in
non-propagating-cache now, because it's constexpr since r12-4389.

There will be an LWG issue about the feature test macro, suggesting that
we should just bump the value of __cpp_lib_optional instead. I haven't
done that here, but it can be changed once consensus is reached on the
change.
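
A usage sketch of the new members (parse_int is a hypothetical helper;
assumes -std=c++23):

#include <optional>
#include <string>

std::optional<int> parse_int (const std::string& s);   // hypothetical

std::optional<std::string> describe (const std::string& s)
{
  return parse_int (s)
    // and_then: invoked only if a value is present; must itself return
    // a std::optional.
    .and_then ([] (int i) -> std::optional<int> {
	return i >= 0 ? std::optional<int> (i) : std::nullopt;
      })
    // transform: the callable's plain return value is wrapped in an optional.
    .transform ([] (int i) { return std::to_string (i); })
    // or_else: invoked only if no value is present.
    .or_else ([] { return std::optional<std::string> ("none"); });
}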

libstdc++-v3/ChangeLog:

	* include/std/optional (_Optional_payload_base::_Storage): Add
	constructor taking a callable function to invoke.
	(_Optional_payload_base::_M_apply): New function.
	(__cpp_lib_monadic_optional): Define for C++23.
	(optional::and_then, optional::transform, optional::or_else):
	Define for C++23.
	* include/std/ranges (__detail::__cached): Remove.
	(__detail::__non_propagating_cache): Remove use of __cached for
	contained value. Use _Optional_payload_base::_M_construct and
	_Optional_payload_base::_M_apply to set the contained value.
	* include/std/version (__cpp_lib_monadic_optional): Define.
	* testsuite/20_util/optional/monadic/and_then.cc: New test.
	* testsuite/20_util/optional/monadic/or_else.cc: New test.
	* testsuite/20_util/optional/monadic/or_else_neg.cc: New test.
	* testsuite/20_util/optional/monadic/transform.cc: New test.
	* testsuite/20_util/optional/monadic/version.cc: New test.
2021-10-19 15:01:16 +01:00
Tobias Burnus
6920d5a1a2 Fortran: Fix "str" to scalar descriptor conversion [PR92482]
PR fortran/92482
gcc/fortran/ChangeLog:

	* trans-expr.c (gfc_conv_procedure_call): Use TREE_OPERAND not
	build_fold_indirect_ref_loc to undo an ADDR_EXPR.

gcc/testsuite/ChangeLog:

	* gfortran.dg/bind-c-char-descr.f90: Remove xfail; extend a bit.
2021-10-19 15:16:01 +02:00
Clément Chigot
e3ef92e79f aix: ensure reference to __tls_get_addr is in text section.
The garbage collector of the AIX linker might remove the reference to
__tls_get_addr if it's added inside an unused csect, which can be
the case for .data with very simple programs.

gcc/ChangeLog:
2021-10-19  Clément Chigot  <clement.chigot@atos.net>

	* config/rs6000/rs6000.c (rs6000_xcoff_file_end): Move
	__tls_get_addr reference to .text csect.
2021-10-19 14:42:45 +02:00
Martin Liska
6b34f5c5ec target: Support whitespaces in target attr/pragma.
PR target/102375

gcc/ChangeLog:

	* config/aarch64/aarch64.c (aarch64_process_one_target_attr):
	Strip whitespaces.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/pr102375.c: New test.
2021-10-19 14:39:31 +02:00
Clément Chigot
5f5baf7992 MAINTAINERS: Add myself for write after approval
ChangeLog:
2021-10-19  Clément Chigot  <clement.chigot@atos.net>

	* MAINTAINERS: Add myself for write after approval.
2021-10-19 13:38:04 +02:00
Richard Biener
793d2549b1 Refactor load/store costing
This passes down the already available alignment scheme and
misalignment to the load/store costing routines, removing
redundant queries.

2021-10-19  Richard Biener  <rguenther@suse.de>

	* tree-vectorizer.h (vect_get_store_cost): Adjust signature.
	(vect_get_load_cost): Likewise.
	* tree-vect-data-refs.c (vect_get_data_access_cost): Get
	alignment support scheme and misalignment as arguments
	and pass them down.
	(vect_get_peeling_costs_all_drs): Compute that info here
	and note that we shouldn't need to.
	* tree-vect-stmts.c (vect_model_store_cost): Get
	alignment support scheme and misalignment as arguments.
	(vect_get_store_cost): Likewise.
	(vect_model_load_cost): Likewise.
	(vect_get_load_cost): Likewise.
	(vectorizable_store): Pass down alignment support scheme
	and misalignment to costing.
	(vectorizable_load): Likewise.
2021-10-19 13:34:47 +02:00
Jonathan Wakely
9890b12c72 libstdc++: Fix mem-initializer in std::move_only_function [PR102825]
libstdc++-v3/ChangeLog:

	PR libstdc++/102825
	* include/bits/mofunc_impl.h (move_only_function): Remove
	invalid base initializer.
	* testsuite/20_util/move_only_function/cons.cc: Instantiate
	constructors to check bodies.
2021-10-19 11:50:46 +01:00
Richard Biener
476ca5ade8 Compute negative offset in get_load_store_type
This moves the computation of a negative offset that needs to be
applied when we vectorize a negative stride access to
get_load_store_type alongside where we compute the actual access
method.

2021-10-19  Richard Biener  <rguenther@suse.de>

	* tree-vect-stmts.c (get_negative_load_store_type): Add
	offset output parameter and initialize it.
	(get_group_load_store_type): Likewise.
	(get_load_store_type): Likewise.
	(vectorizable_store): Use offset as computed by
	get_load_store_type.
	(vectorizable_load): Likewise.
2021-10-19 12:29:33 +02:00
Richard Biener
d996799a50 tree-optimization/102827 - avoid stmts in preheader
The PR shows that when carefully crafting the runtime alias
condition in the vectorizer we might end up using defs from
the loop preheader but will end up inserting the condition
before the .LOOP_VECTORIZED call.  So the following makes
sure to insert invariants before that when we versioned the
loop, preserving the invariant the vectorizer relies on.

2021-10-19  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/102827
	* tree-if-conv.c (predicate_statements): Add pe parameter
	and use that edge to insert invariant stmts on.
	(combine_blocks): Pass through pe.
	(tree_if_conversion): Compute the edge to insert invariant
	stmts on and pass it along.

	* gcc.dg/pr102827.c: New testcase.
2021-10-19 12:29:33 +02:00
Roger Sayle
f98359ba9d PR target/102785: Correct addsub/subadd patterns on bfin.
This patch resolves PR target/102785 where my recent patch to constant
fold saturating addition/subtraction exposed a latent bug in the bfin
backend.  The patterns used for blackfin's V2HI ssaddsub and sssubadd
instructions had the indices/operations swapped.  This was harmless
until we started evaluating these expressions at compile-time, when
the mismatch was caught by the testsuite.

2021-10-19  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	PR target/102785
	* config/bfin/bfin.md (addsubv2hi3, subaddv2hi3, ssaddsubv2hi3,
	sssubaddv2hi3):  Swap the order of operators in vec_concat.
2021-10-19 11:00:10 +01:00
Xionghu Luo
0910c516a3 rs6000: Remove unspecs for vec_mrghl[bhw]
vmrghb only accepts the permute index {0, 16, 1, 17, 2, 18, 3, 19, 4, 20,
5, 21, 6, 22, 7, 23} in the ISA, no matter whether BE or LE; similarly for
vmrglb.  Replace the UNSPEC_VMRGH_DIRECT/UNSPEC_VMRGL_DIRECT patterns with
vec_select + vec_concat as normal RTL.

Tested pass on P8LE, P9LE and P8BE{m32}.

gcc/ChangeLog:

2021-10-19  Xionghu Luo  <luoxhu@linux.ibm.com>

	* config/rs6000/altivec.md (*altivec_vmrghb_internal): Delete.
	(altivec_vmrghb_direct): New.
	(*altivec_vmrghh_internal): Delete.
	(altivec_vmrghh_direct): New.
	(*altivec_vmrghw_internal): Delete.
	(altivec_vmrghw_direct_<mode>): New.
	(altivec_vmrghw_direct): Delete.
	(*altivec_vmrglb_internal): Delete.
	(altivec_vmrglb_direct): New.
	(*altivec_vmrglh_internal): Delete.
	(altivec_vmrglh_direct): New.
	(*altivec_vmrglw_internal): Delete.
	(altivec_vmrglw_direct_<mode>): New.
	(altivec_vmrglw_direct): Delete.
	* config/rs6000/rs6000-p8swap.c (rtx_is_swappable_p): Adjust.
	* config/rs6000/rs6000.c (altivec_expand_vec_perm_const):
	Adjust.
	* config/rs6000/vsx.md (vsx_xxmrghw_<mode>): Adjust.
	(vsx_xxmrglw_<mode>): Adjust.

gcc/testsuite/ChangeLog:

2021-10-19  Xionghu Luo  <luoxhu@linux.ibm.com>

	* gcc.target/powerpc/builtins-1.c: Update instruction counts.
2021-10-19 04:02:04 -05:00
Aldy Hernandez
d2161caffb Change threading comment before pass_ccp pass.
gcc/ChangeLog:

	* passes.def: Change threading comment before pass_ccp pass.
2021-10-19 10:48:46 +02:00
Haochen Gui
91419baf4d Optimize the builtin vec_xl_sext
gcc/
	* config/rs6000/rs6000-call.c (altivec_expand_lxvr_builtin):
	Modify the expansion for sign extension. All extensions are done
	within VSX registers.

gcc/testsuite/
	* gcc.target/powerpc/p10_vec_xl_sext.c: New test.
2021-10-19 16:47:22 +08:00
prathamesh.kulkarni
6b4c18b981 [sve] PR93183 - Add support for conditional neg.
gcc/testsuite/ChangeLog:
	PR target/93183
	* gcc.target/aarch64/sve/pr93183.c: Remove -mcpu=generic+sve from dg-options.
2021-10-19 13:51:51 +05:30
Richard Biener
d19d90289d Add misalignment output parameter to get_load_store_type
This makes us compute the misalignment alongside the alignment support
scheme in get_load_store_type, removing some out-of-place calls to
the DR alignment API.

2021-10-18  Richard Biener  <rguenther@suse.de>

	* tree-vect-stmts.c (get_group_load_store_type): Add
	misalignment output parameter and initialize it.
	(get_group_load_store_type): Likewise.
	(vectorizable_store): Remove now redundant queries.
	(vectorizable_load): Likewise.
2021-10-19 10:12:45 +02:00
Jakub Jelinek
f45610a452 c++: Don't reject calls through PMF during constant evaluation [PR102786]
The following testcase incorrectly rejects the c initializer: while in
the s.*a case cxx_eval_* sees .__pfn reads etc., in the s.*&S::foo case
get_member_function_from_ptrfunc creates expressions which use
INTEGER_CSTs with type of pointer to METHOD_TYPE, and
cxx_eval_constant_expression rejects any INTEGER_CSTs with pointer
type if they aren't 0.
Either we'd need to make sure we defer such folding till cp_fold (but
the function and pfn_from_ptrmemfunc are used from lots of places), or,
as the following patch does, reject only non-zero INTEGER_CSTs
with pointer types if they don't point to METHOD_TYPE, in the hope that
all such INTEGER_CSTs with POINTER_TYPE to METHOD_TYPE are the result of
folding valid pointer-to-member function expressions.
I don't immediately see how one could create such INTEGER_CSTs otherwise,
cast of integers to PMF is rejected and would have the PMF RECORD_TYPE
anyway, etc.
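
A sketch of the kind of code involved, using the names from the description
above (the committed test is g++.dg/cpp2a/constexpr-virtual19.C and may
differ):

struct S
{
  constexpr virtual int foo () const { return 42; }
};

constexpr S s;
constexpr auto a = &S::foo;
constexpr int b = (s.*a)();          // was already accepted
constexpr int c = (s.*&S::foo)();    // was rejected before this fix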

2021-10-19  Jakub Jelinek  <jakub@redhat.com>

	PR c++/102786
	* constexpr.c (cxx_eval_constant_expression): Don't reject
	INTEGER_CSTs with type POINTER_TYPE to METHOD_TYPE.

	* g++.dg/cpp2a/constexpr-virtual19.C: New test.
2021-10-19 09:24:57 +02:00
Richard Biener
caab013976 Remove check_aligned parameter from vect_supportable_dr_alignment
There are two calls with true as parameter: one is only relevant
for the case of the misalignment being unknown, which means the
access is never aligned there; the other is in the peeling hash
insert code, used conditionally on the unlimited cost model, which
adds an artificial count.  But the way it works right now is
that it boosts the count if the specific misalignment when not peeling
is unsupported - in particular when the access is currently aligned
we'll query the backend with a misalign value of zero.  I've
changed it to boost the peeling when unknown alignment is not
supported instead and noted how we could in principle improve this.

2021-10-19  Richard Biener  <rguenther@suse.de>

	* tree-vectorizer.h (vect_supportable_dr_alignment): Remove
	check_aligned argument.
	* tree-vect-data-refs.c (vect_supportable_dr_alignment):
	Likewise.
	(vect_peeling_hash_insert): Add supportable_if_not_aligned
	argument and do not call vect_supportable_dr_alignment here.
	(vect_peeling_supportable): Adjust.
	(vect_enhance_data_refs_alignment): Compute whether the
	access is supported with different alignment here and
	pass that down to vect_peeling_hash_insert.
	(vect_vfa_access_size): Adjust.
	* tree-vect-stmts.c (vect_get_store_cost): Likewise.
	(vect_get_load_cost): Likewise.
	(get_negative_load_store_type): Likewise.
	(get_group_load_store_type): Likewise.
	(get_load_store_type): Likewise.
2021-10-19 09:12:41 +02:00
Martin Liska
df592811f9 target: support spaces in target attribute.
PR target/102374

gcc/ChangeLog:

	* config/i386/i386-options.c (ix86_valid_target_attribute_inner_p): Strip whitespaces.
	* system.h (strip_whilespaces): New function.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr102374.c: New test.
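
A sketch of the kind of input now accepted (hypothetical function; before
this change the embedded whitespace made the options unrecognized):

__attribute__((target (" avx2 , fma ")))
void with_spaces (void)
{
}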
2021-10-19 08:51:32 +02:00
dianhong xu
38f6ee6bfc AVX512FP16: Add *_set1_pch intrinsics.
Add *_set1_pch (_Float16 _Complex A) intrinsics.

gcc/ChangeLog:

	* config/i386/avx512fp16intrin.h:
	(_mm512_set1_pch): New intrinsic.
	* config/i386/avx512fp16vlintrin.h:
	(_mm256_set1_pch): New intrinsic.
	(_mm_set1_pch): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/avx512fp16-set1-pch-1a.c: New test.
	* gcc.target/i386/avx512fp16-set1-pch-1b.c: New test.
	* gcc.target/i386/avx512fp16vl-set1-pch-1a.c: New test.
	* gcc.target/i386/avx512fp16vl-set1-pch-1b.c: New test.
2021-10-19 14:48:21 +08:00
GCC Administrator
ce4d1f632f Daily bump.
2021-10-19 00:16:23 +00:00
Andrew MacLeod
4d92a69fc5 Process EH edges again and call get_tree_range on non gimple_range_ssa_p names.
PR tree-optimization/102796
	gcc/
	* gimple-range.cc (gimple_ranger::range_on_edge): Process EH edges
	normally.  Return get_tree_range for non gimple_range_ssa_p names.
	(gimple_ranger::range_of_stmt): Use get_tree_range for non
	gimple_range_ssa_p names.

	gcc/testsuite/
	* g++.dg/pr102796.C: New.
2021-10-18 18:01:22 -04:00
Kwok Cheung Yeung
3873323402 openmp: Add additional tests for declare variant in Fortran
Add tests to check that explicitly specifying the containing procedure as the
base name for declare variant works.

2021-10-18  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/testsuite/

	* gfortran.dg/gomp/declare-variant-15.f90 (variant2, base2, test2):
	Add tests.
	* gfortran.dg/gomp/declare-variant-16.f90 (base2, variant2, test2):
	Add tests.
2021-10-18 13:56:59 -07:00
Uros Bizjak
4abc0c196b i386: Fix ICE in ix86_print_operand_address [PR 102761]
2021-10-18  Uroš Bizjak  <ubizjak@gmail.com>

	PR target/102761

gcc/ChangeLog:

	* config/i386/i386.c (ix86_print_operand_address):
	Error out for non-address_operand asm operands.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr102761.c: New test.
2021-10-18 17:04:26 +02:00
Jason Merrill
582d43a48c c++: improve template/crash90.C
In r208350 I improved the diagnostic location of the initializer-list
pedwarn in C++98 mode on crash90.C, but didn't adjust the testcase to verify
the location, so reverting that change didn't break regression testing.

gcc/testsuite/ChangeLog:

	* g++.dg/template/crash90.C: Check location of pedwarn.
2021-10-18 10:21:16 -04:00
Richard Biener
1257aad107 Apply TLC to vect_supportable_dr_alignment
This fixes handling of the return value of vect_supportable_dr_alignment
in multiple places.  We should use the enum type and not int for
storage and not auto-convert the enum return value to bool.  It also
commonizes the read/write path in vect_supportable_dr_alignment.

2021-10-18  Richard Biener  <rguenther@suse.de>

	* tree-vect-data-refs.c (vect_peeling_hash_insert): Do
	not auto-convert dr_alignment_support to bool.
	(vect_peeling_supportable): Likewise.
	(vect_enhance_data_refs_alignment): Likewise.
	(vect_supportable_dr_alignment): Commonize read/write case.
	* tree-vect-stmts.c (vect_get_store_cost): Use
	dr_alignment_support, not int, for the vect_supportable_dr_alignment
	result.
	(vect_get_load_cost): Likewise.
2021-10-18 16:15:19 +02:00
Siddhesh Poyarekar
30d6ff3916 tree-object-size: Avoid unnecessary processing of __builtin_object_size
This is a minor cleanup to bail out early if the result of
__builtin_object_size is not assigned to anything and avoid initializing
the object size arrays.
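
A minimal sketch of the situation this short-circuits (the call's result is
not assigned to anything, so there is nothing for the pass to fold):

void f (char *p)
{
  // Result unused: the pass can now bail out early instead of initializing
  // the object size arrays for the whole function.
  (void) __builtin_object_size (p, 0);
}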

gcc/ChangeLog:

	* tree-object-size.c (object_sizes_execute): Consolidate LHS
	null check and do it early.

Signed-off-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
2021-10-18 19:04:16 +05:30
Richard Biener
c9ff458184 Reduce the number of aligned_access_p calls
This uses the computed alignment scheme in vectorizable_store
much like vectorizable_load does instead of re-querying
it via aligned_access_p.

2021-10-18  Richard Biener  <rguenther@suse.de>

	* tree-vect-stmts.c (vectorizable_store): Use the
	computed alignment scheme instead of querying
	aligned_access_p.
2021-10-18 15:18:03 +02:00
Richard Biener
b0ea7a8409 Remove redundant alignment scheme recomputation
The following avoids the recomputation of the alignment scheme
which is already fully determined by get_load_store_type.

2021-10-18  Richard Biener  <rguenther@suse.de>

	* tree-vect-stmts.c (vectorizable_store): Do not recompute
	alignment scheme already determined by get_load_store_type.
2021-10-18 15:18:03 +02:00
Jakub Jelinek
3adcf7e104 openmp: Fix handling of numa_domains(1)
If numa-domains is used with num-places count, sometimes the function
could create more places than requested and crash.  This depended on the
content of /sys/devices/system/node/online file, e.g. if the file
contains
0-1,16-17
and all NUMA nodes contain at least one CPU in the cpuset of the program,
then numa_domains(2) or numa_domains(4) (or 5+) work fine while
numa_domains(1) or numa_domains(3) misbehave.  I.e. the function was able
to stop after reaching the limit at the ',' separators (or trivially at the
end), but not within the ranges.

2021-10-18  Jakub Jelinek  <jakub@redhat.com>

	* config/linux/affinity.c (gomp_affinity_init_numa_domains): Add
	&& gomp_places_list_len < count after nfirst <= nlast loop condition.
2021-10-18 15:00:46 +02:00
Aldy Hernandez
dece6ae772 Clone correct pass in class pass_thread_jumps_full.
The pass_thread_jumps_full pass was cloning the wrong pass.

gcc/ChangeLog:

	* tree-ssa-threadbackward.c (class pass_thread_jumps_full):
	Clone corresponding pass.
2021-10-18 14:55:04 +02:00
H.J. Lu
80d360fa72 387-12.c: Require ia32 target instead of -m32
On x86-64,

$ make check RUNTESTFLAGS="--target_board='unix{-m32,}'"

can be used to test both 64-bit and 32-bit targets.  Require ia32 target
instead of explicit -m32 for 32-bit only test.
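
The DejaGnu side of the change amounts to the following, a sketch of the
test header rather than the exact committed file:

/* Restrict the test to 32-bit x86 instead of forcing -m32, so that
   RUNTESTFLAGS="--target_board='unix{-m32,}'" covers it in its -m32 run.  */
/* { dg-do compile { target ia32 } } */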

	* gcc.target/i386/387-12.c (dg-do compile): Require ia32.
	(dg-options): Remove -m32.
2021-10-18 05:39:53 -07:00
Roger Sayle
247c407c83 Try placing RTL folded constants in the constant pool.
My recent attempts to come up with a testcase for my patch to evaluate
ss_plus in simplify-rtx.c, identified a missed optimization opportunity
(that's potentially a long-time regression): The RTL optimizers no longer
place constants in the constant pool.

The motivating x86_64 example is the simple program:

typedef char v8qi __attribute__ ((vector_size (8)));

v8qi foo()
{
  v8qi tx = { 1, 0, 0, 0, 0, 0, 0, 0 };
  v8qi ty = { 2, 0, 0, 0, 0, 0, 0, 0 };
  v8qi t = __builtin_ia32_paddsb(tx, ty);
  return t;
}

which (with my previous patch) currently results in:
foo:	movq    .LC0(%rip), %xmm0
        movq    .LC1(%rip), %xmm1
        paddsb  %xmm1, %xmm0
        ret

even though the RTL contains the result in a REG_EQUAL note:

(insn 7 6 12 2 (set (reg:V8QI 83)
        (ss_plus:V8QI (reg:V8QI 84)
            (reg:V8QI 85))) "ssaddqi3.c":7:12 1419 {*mmx_ssaddv8qi3}
     (expr_list:REG_DEAD (reg:V8QI 85)
        (expr_list:REG_DEAD (reg:V8QI 84)
            (expr_list:REG_EQUAL (const_vector:V8QI [
                        (const_int 3 [0x3])
                        (const_int 0 [0]) repeated x7
                    ])
                (nil)))))

Together with the patch below, GCC will now generate the much
more sensible:
foo:	movq    .LC2(%rip), %xmm0
        ret

My first approach was to look in cse.c (where the REG_EQUAL note gets
added) and notice that the constant pool handling functionality has been
unreachable for a while.  A quick search for constant_pool_entries_cost
shows that it's initialized to zero, but never set to a non-zero value,
meaning that force_const_mem is never called.  This functionality used
to work way back in 2003, but has been lost over time:
https://gcc.gnu.org/pipermail/gcc-patches/2003-October/116435.html

The changes to cse.c below restore this functionality (placing suitable
constants in the constant pool) with two significant refinements;
(i) it only attempts to do this if the function already uses a constant
pool (thanks to the availability of crtl->uses_constant_pool since 2003).
(ii) it allows different constants (i.e. modes) to have different costs,
so that floating point "doubles" and 64-bit, 128-bit, 256-bit and 512-bit
vectors don't all have to share the same cost.  Back in 2003, the
assumption was that everything in a constant pool had the same
cost, hence the global variable constant_pool_entries_cost.

Although this is a useful CSE fix, it turns out that it doesn't cure
my motivating problem above.  CSE only considers a single instruction,
so determines that it's cheaper to perform the ss_plus (COSTS_N_INSNS(1))
than read the result from the constant pool (COSTS_N_INSNS(2)).  It's
only when the other reads from the constant pool are also eliminated,
that this transformation is a win.  Hence a better place to perform
this transformation is in combine, where after failing to "recog" the
load of a suitable constant, it can retry after calling force_const_mem.
This achieves the desired transformation and allows the backend insn_cost
call-back to control whether or not using the constant pool is preferable.

Alas, it's rare to change code generation without affecting something in
GCC's testsuite.  On x86_64-pc-linux-gnu there were two families of new
failures (and I'd predict similar benign fallout on other platforms).
One failure was gcc.target/i386/387-12.c (aka PR target/26915), where
the test is missing an explicit -m32 flag.  On i686, it's very reasonable
to materialize -1.0 using "fld1; fchs", but on x86_64-pc-linux-gnu we
currently generate the awkward:
testm1: fld1
        fchs
        fstpl   -8(%rsp)
        movsd   -8(%rsp), %xmm0
        ret

which combine now very reasonably simplifies to just:
testm1: movsd   .LC3(%rip), %xmm0
	ret

The other class of x86_64-pc-linux-gnu failure was from materialization
of vector constants using vpbroadcast (e.g. gcc.target/i386/pr90773-17.c)
where the decision is finely balanced; the load of an integer register
with an immediate constant, followed by a vpbroadcast is deemed to be
COSTS_N_INSNS(2), whereas a load from the constant pool is also reported
as COSTS_N_INSNS(2).  My solution is to tweak i386.c's rtx_costs
so that all other things being equal, an instruction (sequence) that
accesses memory is fractionally more expensive than one that doesn't.

2021-10-18  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* combine.c (recog_for_combine): For an unrecognized move/set of
	a constant, try force_const_mem to place it in the constant pool.
	* cse.c (constant_pool_entries_cost, constant_pool_entries_regcost):
	Delete global variables (that are no longer assigned a cost value).
	(cse_insn): Simplify logic for deciding whether to place a folded
	constant in the constant pool using force_const_mem.
	(cse_main): Remove zero initialization of constant_pool_entries_cost
	and constant_pool_entries_regcost.

	* config/i386/i386.c (ix86_rtx_costs): Make memory accesses
	fractionally more expensive, when optimizing for speed.

gcc/testsuite/ChangeLog
	* gcc.target/i386/387-12.c: Add explicit -m32 option.
2021-10-18 12:18:00 +01:00
Martin Liska
815f15d338 gcov: return proper exit code when error happens
PR gcov-profile/102746
	PR gcov-profile/102747

gcc/ChangeLog:

	* gcov.c (main): Return return_code.
	(output_gcov_file): Mark return_code when error happens.
	(generate_results): Likewise.
	(read_graph_file): Likewise.
	(read_count_file): Likewise.
2021-10-18 13:12:29 +02:00
Roger Sayle
fecda57e60 bfin: Popcount-related improvements to machine description.
Blackfin processors support a ONES instruction that implements a
32-bit popcount returning a 16-bit result.  This instruction was
previously described by GCC's bfin backend using an UNSPEC, which
this patch changes to use a popcount:SI rtx that captures its semantics,
allowing it to be evaluated and simplified at compile-time.  I've decided
to keep the instruction name the same (avoiding any changes to the
__builtin_bfin_ones machinery), but have provided popcountsi2 and
popcounthi2 expanders so that the middle-end can use this instruction
to implement __builtin_popcount (and __builtin_parity).

The new testcase ones.c
short foo ()
{
  int t = 5;
  short r = __builtin_bfin_ones(t);
  return r;
}

previously generated:
_foo:   nop;
        nop;
        R0 = 5 (X);
        R0.L = ONES R0;
        rts;

with this patch, now generates:
_foo:   nop;
        nop;
        nop;
        R0 = 2 (X);
        rts;

The new testcase popcount.c
int foo(int x)
{
  return __builtin_popcount(x);
}

previously generated:
_foo:   [--SP] = RETS;
        SP += -12;
        call ___popcountsi2;
        SP += 12;
        RETS = [SP++];
        rts;

now generates:
_foo:   nop;
        nop;
        R0.L = ONES R0;
        R0 = R0.L (Z);
        rts;

And the new testcase parity.c
int foo(int x)
{
  return __builtin_parity(x);
}

previously generated:
_foo:	[--SP] = RETS;
        SP += -12;
        call ___paritysi2;
        SP += 12;
        RETS = [SP++];
        rts;

now generates:
_foo:	nop;
        R1 = 1 (X);
        R0.L = ONES R0;
        R0 = R1 & R0;
        rts;

2021-10-18  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* config/bfin/bfin.md (define_constants): Remove UNSPEC_ONES.
	(define_insn "ones"): Replace UNSPEC_ONES with a truncate of
	a popcount, allowing compile-time evaluation/simplification.
	(popcountsi2, popcounthi2): New expanders using a "ones" insn.

gcc/testsuite/ChangeLog
	* gcc.target/bfin/ones.c: New test case.
	* gcc.target/bfin/parity.c: New test case.
	* gcc.target/bfin/popcount.c: New test case.
2021-10-18 11:59:31 +01:00
Richard Biener
eb03289367 tree-optimization/102788 - avoid spurious bool pattern fails
Bool pattern recog is required for correctness since vectorized
compares otherwise produce -1 for true, so any context where a bool
is used as a value and not as a condition or mask needs to be replaced
with CMP ? 1 : 0.  When we fail to find a vector type for the
result of such a use we may not simply elide the transform, since
a new bool result can emerge when for example the cast_forwprop
pattern is applied.  So the following avoids failing the
bool pattern recog process and instead does not assign a vector type
for the stmt.
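
An illustrative case of a bool used as a value rather than as a condition (a
sketch, not the committed testcase):

void f (int *a, int *b, short *out, int n)
{
  for (int i = 0; i < n; ++i)
    // The comparison result is stored as a value, so after vectorization it
    // must become (a[i] < b[i]) ? 1 : 0 rather than the -1/0 mask a vector
    // compare would produce.
    out[i] = a[i] < b[i];
}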

2021-10-18  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/102788
	* tree-vect-patterns.c (vect_init_pattern_stmt): Allow
	a NULL vectype.
	(vect_pattern_recog_1): Likewise.
	(vect_recog_bool_pattern): Continue matching the pattern
	even if we do not have a vector type for a conversion
	result.

	* g++.dg/vect/pr102788.cc: New testcase.
2021-10-18 12:57:43 +02:00
Roger Sayle
94dff03f67 Constant fold SS_NEG and SS_ABS in simplify-rtx.c
This simple patch performs compile-time constant folding of
signed saturating negation and signed saturating absolute value
in the RTL optimizers.  Normally in two's complement arithmetic
the lowest representable signed value overflows on negation,
With these saturating operators they "saturate" to the maximum
representable signed value, so SS_NEG:QI -128 is 127, and
SS_ABS:HI -32768 is 32767.

On bfin-elf, the following two short functions:

short foo()
{
  short t = -32768;
  short r = __builtin_bfin_negate_fr1x16(t);
  return r;
}

int bar()
{
  int t = -2147483648;
  int r = __builtin_bfin_abs_fr1x32(t);
  return r;
}

currently compile to:
_foo:	nop;
        nop;
        R0 = -32768 (X);
        R0 = -R0 (V);
        rts;

_bar:	nop;
        R0 = -1 (X);
        R0 <<= 31;
        R0 = abs R0;
        rts;

but with this middle-end patch now compile to:

_foo:	nop;
        nop;
        nop;
        R0 = 32767 (X);
        rts;

_bar:	nop;
        nop;
        R0 = -1 (X);
        R0.H = 32767;
        rts;

2021-10-18  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	* simplify-rtx.c (simplify_const_unary_operation) [SS_NEG, SS_ABS]:
	Evaluate SS_NEG and SS_ABS of a constant argument.

gcc/testsuite/ChangeLog
	* gcc.target/bfin/ssabs.c: New test case.
	* gcc.target/bfin/ssneg.c: New test case.
2021-10-18 11:51:07 +01:00
prathamesh.kulkarni
20dcda98ed [sve] PR93183 - Add support for conditional neg.
gcc/ChangeLog:
	PR target/93183
	* gimple-match-head.c (try_conditional_simplification): Add case for single operand.
	* internal-fn.def: Add entry for COND_NEG internal function.
	* internal-fn.c (FOR_EACH_CODE_MAPPING): Add entry for
	NEGATE_EXPR, COND_NEG mapping.
	* optabs.def: Add entry for cond_neg_optab.
	* match.pd (UNCOND_UNARY, COND_UNARY): New operator lists.
	(vec_cond COND (foo A) B) -> (IFN_COND_FOO COND A B): New pattern.
	(vec_cond COND B (foo A)) -> (IFN_COND_FOO ~COND A B): Likewise.

gcc/testsuite/ChangeLog:
	PR target/93183
	* gcc.target/aarch64/sve/cond_unary_4.c: Adjust.
	* gcc.target/aarch64/sve/pr93183.c: New test.
2021-10-18 15:44:06 +05:30
Martin Liska
85ce673378 gcc-changelog: update error message location
contrib/ChangeLog:

	* gcc-changelog/git_commit.py: Update location of
	'bad parentheses wrapping'.
	* gcc-changelog/test_email.py: Test it.
2021-10-18 11:07:14 +02:00