This patch adds support for unpacked SVE LSL, ASR and LSR.
For right shifts, the type suffix needs to be taken from the
element size rather than the container size.
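As a rough illustration (my own sketch, not the new shift_2.c testcase),
unpacked vectors arise when a loop mixes element sizes, and the right
shift then still has to use the element-size suffix:
void
f (short *__restrict a, int *__restrict b, int n)
{
  for (int i = 0; i < n; ++i)
    {
      /* With unpacked SVE vectors each 16-bit element sits in a 32-bit
         container; the shift should still be an ASR on .h elements
         (element size), not on .s containers.  */
      a[i] >>= 2;
      b[i] += 1;
    }
}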
gcc/
* config/aarch64/aarch64-sve.md (<ASHIFT:optab><mode>3)
(v<ASHIFT:optab><mode>3, @aarch64_pred_<optab><mode>)
(*post_ra_v<ASHIFT:optab><mode>3): Extend from SVE_FULL_I to SVE_I.
gcc/testsuite/
* gcc.target/aarch64/sve/shift_2.c: New test.
In GCC 10, cp_walk_subtrees was changed to also walk template arguments.
As the following testcase shows, that changed the mangling of some functions.
I believe the previous behavior, where find_abi_tags_r doesn't recurse into
template args, was the correct one, but setting *walk_subtrees = 0
for the types and handling the type subtree walking manually in
find_abi_tags_r looks too hard: there are a lot of subtrees and details about
what should and shouldn't be walked, both in tree.c (walk_type_fields there,
which is static) and in cp_walk_subtrees itself.
The following patch abuses the fact that *walk_subtrees is an int to
tell cp_walk_subtrees it shouldn't walk the template args.
Co-authored-by: Jason Merrill <jason@redhat.com>
gcc/cp/ChangeLog:
PR c++/98481
* class.c (find_abi_tags_r): Set *walk_subtrees to 2 instead of 1
for types.
(mark_abi_tags_r): Likewise.
* decl2.c (min_vis_r): Likewise.
* tree.c (cp_walk_subtrees): If *walk_subtrees_p is 2, look through
typedefs.
gcc/testsuite/ChangeLog:
PR c++/98481
* g++.dg/abi/abi-tag24.C: New test.
The vectorizer, for large permuted grouped loads, generates
inefficient intermediate code (cleaned up only later) that runs
into complexity issues in SCEV analysis and elsewhere. For the
non-single-element interleaving case we already put a hard limit
in place; this patch applies the same limit to the missing case.
2021-01-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/91403
* tree-vect-data-refs.c (vect_analyze_group_access_1): Cap
single-element interleaving group size at 4096 elements.
* gcc.dg/vect/pr91403.c: New testcase.
The .ld1_args file is not created when HAVE_GNU_LD is false.
The ltrans0.ltrans_arg file is not created when the make jobserver
is available, so unset the MAKEFLAGS variable.
Add an exception for *.gcc_args files similar to the
exception for *.cdtor.* files.
Limit both exceptions to targets that define EH_FRAME_THROUGH_COLLECT2.
That means that although the test case does not use C++ constructors
or destructors, it still uses dwarf2 frame info.
2021-01-11 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR testsuite/98225
* gcc.misc-tests/outputs.exp: Unset MAKEFLAGS.
Expect .ld1_args only when GNU LD is used.
Add an exception for *.gcc_args files.
This fixes a double-counting in the reduction cost when vectorizing
the reduction through the regular vectorizable_* functions.
2021-01-11 Richard Biener <rguenther@suse.de>
PR tree-optimization/98526
* tree-vect-loop.c (vect_model_reduction_cost): Remove costing
of the actual reduction op for the regular case.
(vectorizable_reduction): Cost the stmts
vect_transform_reduction produces here.
The deprecation phase for access checks is finished.
The `-ftransition=import` and `-ftransition=checkimports` switches no
longer have an effect and are now removed. Symbols that are not visible
in a particular scope will no longer be found by the compiler.
Reviewed-on: https://github.com/dlang/dmd/pull/12124
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd 2d3d13748.
* d-lang.cc (d_handle_option): Remove OPT_ftransition_checkimports and
OPT_ftransition_import.
* gdc.texi (Warnings): Remove documentation for -ftransition=import
and -ftransition=checkimports.
* lang.opt (ftransition=checkimports): Remove.
(ftransition=import): Remove.
The vec-abi-varargs-1.c testcase on IBM Z currently fails.
While adding an SI mode vector to a DI mode vector, the former is unpacked using:
_28 = BIT_INSERT_EXPR <{ 0, 0, 0, 0 }, _2, 0>;
_34 = [vec_unpack_lo_expr] _28;
However, on big-endian targets lo refers to the right-hand side of the vector, in this case the zeroes.
2021-01-11 Andreas Krebbel <krebbel@linux.ibm.com>
* tree-ssa-forwprop.c (simplify_vector_constructor): For
big-endian, use UNPACK[_FLOAT]_HI.
This fixes a memory leak in complex_add_pattern because I was not calling
vect_free_slp_tree when dissolving one side of the TWO_OPERANDS nodes.
Secondly, it upgrades the class to the new interface required by the other
patterns.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (class complex_pattern,
class complex_add_pattern): Add parameters to matches.
(complex_add_pattern::build): Free memory.
(complex_add_pattern::matches): Move validation to the end of the match.
(complex_add_pattern::recognize): Likewise.
This fixes a bug with externals and linear_loads_p where I forgot to save the
value before returning.
It also fixes handling of nodes with multiple children on a non-VEC_PERM node.
There the child iteration already resolves the kind, and the loads are all
expected to be the same if valid, so just return one.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (linear_loads_p): Fix externals.
This fixes an issue where is_linear_load_p could return the incorrect
permutation kind because it is single pass.
This arranges the candidates in such a way that there won't be any ambiguity,
so the function can stay linear but give correct values.
gcc/ChangeLog:
* tree-vect-slp-patterns.c (is_linear_load_p): Fix ambiguity.
For floating-point multiplication, we have nice code in reassoc to
reassociate multiplications into an almost optimal sequence of as few
multiplications as possible (or a library call), but for integral types
we just give up because there is no __builtin_powi* for those types.
As there is no library routine we could use, instead of adding a new
internal call just to hold it temporarily and then lower it to
multiplications again, this patch, for the integral types, calls into the
sincos pass routine that expands it into multiplications right away.
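A hedged illustration (my own example, not necessarily the shape of the
new pr95867.c testcase) of what the reassociation now achieves for
integral types:
int
f (int x)
{
  /* Eight identical factors; powi_as_mults can expand this as
     t = x * x; t = t * t; return t * t;
     i.e. three multiplications instead of seven.  */
  return x * x * x * x * x * x * x * x;
}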
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95867
* tree-ssa-math-opts.h: New header.
* tree-ssa-math-opts.c: Include tree-ssa-math-opts.h.
(powi_as_mults): No longer static. Use build_one_cst instead of
build_real. Formatting fix.
* tree-ssa-reassoc.c: Include tree-ssa-math-opts.h.
(attempt_builtin_powi): Handle multiplication reassociation without
powi_fndecl using powi_as_mults.
(reassociate_bb): For integral types don't require
-funsafe-math-optimizations to call attempt_builtin_powi.
* gcc.dg/tree-ssa/pr95867.c: New test.
On top of the previous widening_mul patch, this one also recognizes
(non-perfect) signed multiplication with overflow check, like:
int
f5 (int x, int y, int *res)
{
*res = (unsigned) x * y;
return x && (*res / x) != y;
}
The problem with such checks is that they invoke UB if x is -1 and
y is INT_MIN during the division, but perhaps the code knows that
those values won't appear. As that case is UB, we can do whatever we
want for it, and handling it as signed overflow
is the best option. If x is a constant not equal to -1, then the checks
are 100% correct though.
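A hedged sketch (mine, not part of the patch) of what the recognized form
effectively turns into:
int
f5 (int x, int y, int *res)
{
  /* Equivalent of the check above after the transformation; the
     division and the guarding "x &&" test are gone.  */
  return __builtin_mul_overflow (x, y, res);
}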
I haven't tried to pattern match bullet-proof checks, because I really
don't know whether users would write them like that in real-world code;
perhaps
*res = (unsigned) x * y;
return x && (x == -1 ? (*res / y) != x : (*res / x) != y);
?
https://wiki.sei.cmu.edu/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow
suggests using a multiplication twice as wide (perhaps we should handle
that too, for both signed and unsigned), or some very large code
with 4 different divisions nested in many conditionals; there is no way
one can match all the possible variants thereof.
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95852
* tree-ssa-math-opts.c (maybe_optimize_guarding_check): Change
mul_stmts parameter type to vec<gimple *> &. Before cond_stmt
allow in the bb any of the stmts in that vector, div_stmt and
up to 3 cast stmts.
(arith_cast_equal_p): New function.
(arith_overflow_check_p): Add cast_stmt argument, handle signed
multiply overflow checks.
(match_arith_overflow): Adjust caller. Handle signed multiply
overflow checks.
* gcc.target/i386/pr95852-3.c: New test.
* gcc.target/i386/pr95852-4.c: New test.
The following patch pattern-recognizes some forms of multiplication
followed by an overflow check that divides the result by one of the
operands and compares against the other one, with optional removal of the
guarding non-zero check for that operand where possible. The patterns are
replaced with effectively
__builtin_mul_overflow or __builtin_mul_overflow_p. The testcases cover 64
different forms of that.
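One of the simpler matched forms, as a hedged example (hypothetical, not
copied from the new pr95852-1.c testcase):
int
f1 (unsigned x, unsigned y, unsigned *res)
{
  *res = x * y;
  /* Overflow check through division by one operand compared to the
     other; the guarding "x &&" test can be removed as part of the
     transformation.  */
  return x && (*res / x) != y;
}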
2021-01-11 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95852
* tree-ssa-math-opts.c (maybe_optimize_guarding_check): New function.
(uaddsub_overflow_check_p): Renamed to ...
(arith_overflow_check_p): ... this. Handle also multiplication
with overflow check.
(match_uaddsub_overflow): Renamed to ...
(match_arith_overflow): ... this. Add cfg_changed argument. Handle
also multiplication with overflow check. Adjust function comment.
(math_opts_dom_walker::after_dom_children): Adjust callers. Call
match_arith_overflow also for MULT_EXPR.
* gcc.target/i386/pr95852-1.c: New test.
* gcc.target/i386/pr95852-2.c: New test.
__builtin_convertvector seems well-suited to implementing the vmovl and
vmovn intrinsics that widen and narrow
the integer elements in a vector.
This removes some more inline assembly from the intrinsics.
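For example, a widening intrinsic reduces to a plain conversion; a hedged
stand-alone sketch of the shape (the hypothetical my_vmovl_s8 mirrors what
the header now does for vmovl_s8):
#include <arm_neon.h>
int16x8_t
my_vmovl_s8 (int8x8_t a)
{
  /* Widen each of the eight 8-bit lanes to 16 bits.  */
  return __builtin_convertvector (a, int16x8_t);
}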
gcc/
* config/aarch64/arm_neon.h (vmovl_s8): Reimplement using
__builtin_convertvector.
(vmovl_s16): Likewise.
(vmovl_s32): Likewise.
(vmovl_u8): Likewise.
(vmovl_u16): Likewise.
(vmovl_u32): Likewise.
(vmovn_s16): Likewise.
(vmovn_s32): Likewise.
(vmovn_s64): Likewise.
(vmovn_u16): Likewise.
(vmovn_u32): Likewise.
(vmovn_u64): Likewise.
gcc/testsuite/ChangeLog:
PR gcov-profile/98273
* lib/gcov.exp: Add run-gcov-pytest function which runs pytest.
* g++.dg/gcov/pr98273.C: New test.
* g++.dg/gcov/gcov.py: New test.
* g++.dg/gcov/test-pr98273.py: New test.
gcc/ChangeLog:
* gimple-if-to-switch.cc (struct condition_info): Use auto_var.
(if_chain::is_beneficial): Delete clusters.
(find_conditions): Make second argument of conditions_in_bbs a
pointer so that we have control over its lifetime.
(pass_if_to_switch::execute): Delete them.
This patch makes move_unallocated_pseudos consistent with what
we have in function find_moveable_pseudos, where we record the
original pseudo into pseudo_replaced_reg only if validate_change
succeeds with newreg. To ensure every unallocated pseudo in
move_unallocated_pseudos has the expected information, it's better
to add a check and skip it if it's unexpected. This avoids
possible ICEs in the future.
gcc/ChangeLog:
* ira.c (move_unallocated_pseudos): Check other_reg and skip if
it isn't set.
Adds TEST_OUTPUT directives and reduces the verbosity of many tests.
Reviewed-on: https://github.com/dlang/dmd/pull/12112
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd cb1106ad5.
Expression-based contract syntax has been added. Contracts that consist
of a single assertion can now be written more succinctly and multiple
`in` or `out` contracts can be specified for the same function.
Reviewed-on: https://github.com/dlang/dmd/pull/12106
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd e598f69c0.
Remove the fallout from commit 0bd675183d ("match.pd: Add ~(X - Y) -> ~X
+ Y simplification [PR96685]") and paper over the regression it caused,
as it is not the fault of the test cases affected.
Previously assembly like this:
.text
.align 1
.globl eq_notsi
.type eq_notsi, @function
eq_notsi:
.word 0 # 35 [c=0] procedure_entry_mask
subl2 $4,%sp # 46 [c=32] *addsi3
mcoml 4(%ap),%r0 # 32 [c=16] *one_cmplsi2_ccz
jeql .L1 # 34 [c=26] *branch_ccz
addl2 $2,%r0 # 31 [c=32] *addsi3
.L1:
ret # 40 [c=0] return
.size eq_notsi, .-eq_notsi
was produced. Now this:
.text
.align 1
.globl eq_notsi
.type eq_notsi, @function
eq_notsi:
.word 0 # 36 [c=0] procedure_entry_mask
subl2 $4,%sp # 48 [c=32] *addsi3
movl 4(%ap),%r0 # 33 [c=16] *movsi_2
cmpl %r0,$-1 # 34 [c=8] *cmpsi_ccz/1
jeql .L3 # 35 [c=26] *branch_ccz
subl3 %r0,$1,%r0 # 32 [c=32] *subsi3/1
ret # 27 [c=0] return
.L3:
clrl %r0 # 31 [c=2] *movsi_2
ret # 41 [c=0] return
.size eq_notsi, .-eq_notsi
is, which cannot work with post-reload comparison elimination, due to
the comparison against -1 rather than 0.
Therefore use subtraction from a constant rather than addition, as the
former operation is not transformed, removing these regressions:
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-eq-notsi.c -O1 scan-assembler one_cmplsi[^ ]*_ccz(/[0-9]+)?\n
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-rtl-dump-times cmpelim "deleting insn with uid" 1
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-assembler-not \t(bit|cmpz?|tst).
FAIL: gcc.target/vax/cmpelim-lt-notsi.c -O1 scan-assembler one_cmplsi[^ ]*_ccn(/[0-9]+)?\n
and likewise across some of the other optimization levels verified.
The LE variant appears unaffected as the new transformation produces
slightly different although still suboptimal code:
.text
.align 1
.globl le_notsi
.type le_notsi, @function
le_notsi:
.word 0 # 27 [c=0] procedure_entry_mask
subl2 $4,%sp # 34 [c=32] *addsi3
movl 4(%ap),%r1 # 23 [c=16] *movsi_2
mcoml %r1,%r0 # 24 [c=8] *one_cmplsi2_ccnz
jleq .L1 # 26 [c=26] *branch_ccnz
subl3 %r1,$1,%r0 # 22 [c=32] *subsi3/1
.L1:
ret # 32 [c=0] return
.size le_notsi, .-le_notsi
but update the test case too, for consistency with the other two.
gcc/testsuite/
* gcc.target/vax/cmpelim-eq-notsi.c: Use subtraction from a
constant rather than addition.
* gcc.target/vax/cmpelim-le-notsi.c: Likewise.
* gcc.target/vax/cmpelim-lt-notsi.c: Likewise.
For predictable semantics propagate the mode from the operands referred
to by the FP substitution to the `const_double_zero' expressions used with
the associated condition code calculation. Use an iterator to make copies
of the FP substitution across the supported FP modes, as the substitution
now has to match the mode of the operands.
gcc/
* config/vax/vax.md (subst_f<cc>): Add mode to operands and
`const_double_zero'.
For predictable semantics propagate the mode from the operands referred
to by the FP substitutions to the `const_double_zero' expressions used with the
associated condition code calculation, resulting in the following update
to insn-emit.c code produced for the `pdp11-aout' target (with machine
description line numbering change noise removed):
@@ -1514,7 +1514,7 @@
gen_rtx_COMPARE (CCmode,
gen_rtx_ABS (DFmode,
operand1),
- CONST_DOUBLE_ATOF ("0", VOIDmode))),
+ CONST_DOUBLE_ATOF ("0", DFmode))),
gen_rtx_SET (operand0,
gen_rtx_ABS (DFmode,
copy_rtx (operand1)))));
@@ -1555,7 +1555,7 @@
gen_rtx_COMPARE (CCmode,
gen_rtx_NEG (DFmode,
operand1),
- CONST_DOUBLE_ATOF ("0", VOIDmode))),
+ CONST_DOUBLE_ATOF ("0", DFmode))),
gen_rtx_SET (operand0,
gen_rtx_NEG (DFmode,
copy_rtx (operand1)))));
@@ -1790,7 +1790,7 @@
gen_rtx_MULT (DFmode,
operand1,
operand2),
- CONST_DOUBLE_ATOF ("0", VOIDmode))),
+ CONST_DOUBLE_ATOF ("0", DFmode))),
gen_rtx_SET (operand0,
gen_rtx_MULT (DFmode,
copy_rtx (operand1),
@@ -1942,7 +1942,7 @@
gen_rtx_DIV (DFmode,
operand1,
operand2),
- CONST_DOUBLE_ATOF ("0", VOIDmode))),
+ CONST_DOUBLE_ATOF ("0", DFmode))),
gen_rtx_SET (operand0,
gen_rtx_DIV (DFmode,
copy_rtx (operand1),
Provide a new iterator to make copies of the FP substitutions across the
supported FP modes, as the substitutions now need to match the mode of
the operands.
gcc/
* config/pdp11/pdp11.md (PDPfp): New mode iterator.
(fcc_cc, fcc_ccnz): Use it. Add mode to `const_double_zero' and
operands.
As conversions between signed integers and signed enums with the same
precision are useless in GIMPLE, it seems strange that we require the
result of POINTER_DIFF_EXPR to be an INTEGER_TYPE.
If we really wanted to require that, we'd need to change the gimplifier
to ensure it, which isn't the case for the following testcase.
What is going on during the gimplification is that when we have the
(enum T) (p - q) cast, it is stripped through
/* Strip away as many useless type conversions as possible
at the toplevel. */
STRIP_USELESS_TYPE_CONVERSION (*expr_p);
and when the MODIFY_EXPR is gimplified, *to_p has enum T type,
while *from_p has intptr_t type, and as there is no conversion in between,
we just create a GIMPLE_ASSIGN from that.
2021-01-09 Jakub Jelinek <jakub@redhat.com>
PR c++/98556
* tree-cfg.c (verify_gimple_assign_binary): Allow lhs of
POINTER_DIFF_EXPR to be any integral type.
* c-c++-common/pr98556.c: New test.
If an asm insn fails constraint checking during vregs, it is just deleted.
We don't delete asm goto though because of the edges to the labels, so
instantiate_virtual_regs_in_insn would just remove the inputs and their
constraints, the pattern etc.
This worked fine when asm goto couldn't have output operands, but causes
ICEs later on when it has more than one output (and furthermore doesn't
really remove the problematic outputs). The problem is that
for multiple outputs we have a PARALLEL with multiple ASM_OPERANDS, and
those must all use the same ASM_OPERANDS_INPUT_VEC etc., but the code was
adjusting just one of them.
The following patch turns invalid asm goto into a bare
asm goto ("" : : : : lab, lab2, lab3);
i.e. no inputs/outputs/clobbers, just the labels.
2021-01-09 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/98603
* function.c (instantiate_virtual_regs_in_insn): For asm goto
with impossible constraints, drop all SETs, CLOBBERs, drop PARALLEL
if any, set ASM_OPERANDS mode to VOIDmode and change
ASM_OPERANDS_OUTPUT_CONSTRAINT and ASM_OPERANDS_OUTPUT_IDX.
* gcc.target/i386/pr98603.c: New test.
* gcc.target/aarch64/pr98603.c: New test.
Back when I introduced debug markers, I seem to have been under the
impression that location line 0 would only ever occur for unknown and
builtin locations.
Though line 0 never comes up in normal processing of source files, and
debug info formats often cannot represent it, I suppose there's no need
to preemptively discard it during final.
for gcc/ChangeLog
PR debug/97714
* final.c (notice_source_line): Narrow down the condition to
skip a line-0 marker.
for gcc/testsuite/ChangeLog
PR debug/97714
* gcc.dg/debug/pr97714.c: New.
The destination register is only partially overwritten, so + should be
used instead of =.
gcc/ChangeLog:
2021-01-08 Ilya Leoshkevich <iii@linux.ibm.com>
* config/s390/vector.md (*tf_to_fprx2_0): Rename from
"*mov_tf_to_fprx2_0" for consistency, fix constraint.
(*tf_to_fprx2_1): Rename from "*mov_tf_to_fprx2_1" for
consistency, fix constraint.
Give end users the opportunity to find out whether long doubles are
stored in floating-point register pairs or in vector registers, so that
they can fine-tune their asm statements.
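A hedged sketch of the intended use (hypothetical user code, not part of
the patch):
#include <stdio.h>
void
report_long_double_location (void)
{
#ifdef __LONG_DOUBLE_VX__
  /* long double values live in vector registers here.  */
  puts ("long double: vector registers");
#else
  /* long double values live in floating-point register pairs here.  */
  puts ("long double: floating-point register pairs");
#endif
}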
gcc/ChangeLog:
2020-12-14 Ilya Leoshkevich <iii@linux.ibm.com>
* config/s390/s390-c.c (s390_def_or_undef_macro): Accept
callables instead of mask values.
(struct target_flag_set_p): New predicate.
(s390_cpu_cpp_builtins_internal): Define or undefine
__LONG_DOUBLE_VX__ macro.
2020-12-14 Ilya Leoshkevich <iii@linux.ibm.com>
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/long-double-vx-macro-off-on.c: New test.
* gcc.target/s390/vector/long-double-vx-macro-on-off.c: New test.
We shouldn't do replace_result_decl after evaluating a call that returns
a PMF because PMF temporaries aren't wrapped in a TARGET_EXPR (and so we
can't trust ctx->object), and PMF initializers can't be self-referential
anyway, so replace_result_decl would always be a no-op.
To that end, this patch changes the relevant AGGREGATE_TYPE_P test to
CLASS_TYPE_P, which should rule out PMFs (as well as arrays, which we
can't return and therefore won't see here). This fixes an ICE from the
sanity check in replace_result_decl in the below testcase during
constexpr evaluation of the call f() in the initializer g(f()).
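A hedged guess at the shape being described (my own reduction, not the
actual constexpr-pmf2.C testcase):
struct A { void m (); };
using pmf = void (A::*) ();
constexpr pmf f () { return &A::m; }
constexpr pmf g (pmf p) { return p; }
/* Constexpr evaluation of the call f() inside the initializer g(f()),
   with f returning a pointer to member function.  */
constexpr pmf x = g (f ());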
gcc/cp/ChangeLog:
PR c++/98551
* constexpr.c (cxx_eval_call_expression): Check CLASS_TYPE_P
instead of AGGREGATE_TYPE_P before calling replace_result_decl.
gcc/testsuite/ChangeLog:
PR c++/98551
* g++.dg/cpp0x/constexpr-pmf2.C: New test.
In the first testcase below, we incorrectly reject the use of the
protected non-static member A::var0 from C<int>::g() because
check_accessibility_of_qualified_id, at template parse time, determines
that the access doesn't go through 'this'. (This happens because the
dependent base B<T> of C<T> doesn't have a binfo object, so it appears
to DERIVED_FROM_P that A is not an indirect base of C<T>.) From there
we create the corresponding deferred access check, which we then
perform at instantiation time and which (expectedly) fails.
The problem ultimately seems to be that we can't in general determine
whether a use of a scoped non-static member goes through 'this' until
instantiation time, as the second testcase below illustrates. So this
patch makes check_accessibility_of_qualified_id punt in such situations
to avoid creating a bogus deferred access check.
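A hedged reconstruction from the description above (the actual access32.C
may differ):
struct A
{
protected:
  int var0;
};
template <typename T> struct B : A { };
template <typename T> struct C : B<T>
{
  /* Qualified use of the protected member; whether it goes through
     'this' is only known at instantiation time.  */
  int g () { return A::var0; }
};
template struct C<int>;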
gcc/cp/ChangeLog:
PR c++/98515
* semantics.c (check_accessibility_of_qualified_id): Punt if
we're checking access of a scoped non-static member inside a
class template.
gcc/testsuite/ChangeLog:
PR c++/98515
* g++.dg/template/access32.C: New test.
* g++.dg/template/access33.C: New test.
For NO_PROFILE_COUNTERS targets, R11 is a scratch register. We can use
R10 and R11 to call mcount in large model with PIC.
gcc/
PR target/98482
* config/i386/i386.c (x86_function_profiler): Use R10 and R11
to call mcount in large model with PIC for NO_PROFILE_COUNTERS
targets.
gcc/testsuite/
PR target/98482
* gcc.target/i386/pr98482-2.c: Updated.
When running FRE in the loop pipeline (as part of the conditionally
scheduled scalar cleanups) we have to reset the SCEV hashtable as
otherwise we can end up with stale entries and all sorts of problems.
Caught by my out-of-tree verifier for this problem.
2021-01-08 Richard Biener <rguenther@suse.de>
* tree-ssa-sccvn.c (pass_fre::execute): Reset the SCEV hash table.
This plugs two memleaks in the vectorizer.
2021-01-08 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (scalar_stmts_to_slp_tree_map_t): Fix.
(vect_build_slp_tree): On cache hit release the matched
scalar stmts vector.
* tree-vect-stmts.c (vectorizable_store): Properly free
vec_oprnds before possibly gathering them again.
Permute nodes are not transparent to the permutes of their children.
Instead we always have to materialize child permutes; in the future we
may treat permute nodes as the source of arbitrary permutes, since
we can permute the lane permutation vector at will (to the extent the
target supports it in the end).
2021-01-08 Richard Biener <rguenther@suse.de>
PR tree-optimization/98544
* tree-vect-slp.c (vect_optimize_slp): Always materialize
permutes at a permute node.
* gcc.dg/vect/bb-slp-pr98544.c: New testcase.
R10 is caller-saved. Although it can be used as a static chain register,
it is preserved when calling mcount for nested functions. Use R10 as a
scratch register to call mcount in large model.
gcc/
PR target/98482
* config/i386/i386.c (x86_function_profiler): Use R10 to call
mcount in large model. Issue a sorry diagnostic for large model with PIC.
gcc/testsuite/
PR target/98482
* gcc.target/i386/pr98482-1.c: New test.
* gcc.target/i386/pr98482-2.c: Likewise.
My patch to save/restore opts_set, rather than essentially treating
global_options_set as a logical OR of whether some option has ever been
explicitly set somewhere, apparently broke -mcmodel= vs. the target
attribute (and, as the patch shows, some other options too).
The thing is, at least for options for which we ever test opts_set->x_*
or global_options_set.x_*, we need to save/restore them next to the
saving/restoring of the actual option values.
If an option has the Save keyword, or in the case of TargetVariable, the
generic code handles the saving and restoring of both the option and the
corresponding opts_set flag automatically; for other variables
(TargetSave, or Target without Save) the backend needs to do that
manually in the target hook, and in that case it should save/restore both
the option values (the hooks mostly did that) and opts_set (they didn't).
As it seems much easier to let the automatic saving/restoring do the work
for us unless the saving/restoring of the option needs some specific magic,
the following patch is the result of grepping through the backend for
opts_set->x_ and global_options_set.x_ and, for all the variables
referenced that way, checking whether they are saved/restored properly,
including opts_set, in the generated options-save.c.
2021-01-08 Jakub Jelinek <jakub@redhat.com>
PR target/98585
* config/i386/i386.opt (ix86_cmodel, ix86_incoming_stack_boundary_arg,
ix86_pmode, ix86_preferred_stack_boundary_arg, ix86_regparm,
ix86_veclibabi_type): Remove x_ prefix, use TargetVariable instead of
TargetSave and initialize for variables with enum types.
(mfentry, mstack-protector-guard-reg=, mstack-protector-guard-offset=,
mstack-protector-guard-symbol=): Add Save.
* config/i386/i386-options.c (ix86_function_specific_save,
ix86_function_specific_restore): Don't save or restore x_ix86_cmodel,
x_ix86_incoming_stack_boundary_arg, x_ix86_pmode,
x_ix86_preferred_stack_boundary_arg, x_ix86_regparm,
x_ix86_veclibabi_type.
* gcc.target/i386/pr98585.c: New test.
This patch adds unpacked support for unconditional and
conditional CNOT. The type suffix has to be taken from
the element size rather than the container size.
gcc/
* config/aarch64/aarch64-sve.md (*cnot<mode>): Extend from
SVE_FULL_I to SVE_I.
(*cond_cnot<mode>_2, *cond_cnot<mode>_any): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/cnot_2.c: New test.
* gcc.target/aarch64/sve/cond_cnot_4.c: Likewise.
* gcc.target/aarch64/sve/cond_cnot_4_run.c: Likewise.
* gcc.target/aarch64/sve/cond_cnot_5.c: Likewise.
* gcc.target/aarch64/sve/cond_cnot_5_run.c: Likewise.
* gcc.target/aarch64/sve/cond_cnot_6.c: Likewise.
* gcc.target/aarch64/sve/cond_cnot_6_run.c: Likewise.
This patch extends the conditional UXT patterns from SVE_FULL_I
to SVE_I. It doesn't matter in this case whether the type suffix
is taken from the element size or the container size.
gcc/
* config/aarch64/aarch64-sve.md (*cond_uxt<mode>_2): Extend from
SVE_FULL_I to SVE_I.
(*cond_uxt<mode>_any): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/cond_uxt_5.c: New test.
* gcc.target/aarch64/sve/cond_uxt_5_run.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_6.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_6_run.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_7.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_7_run.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_8.c: Likewise.
* gcc.target/aarch64/sve/cond_uxt_8_run.c: Likewise.