Merge commit 'af4bb221153359f5948da917d5ef2df738bb1e61' into HEAD

Thomas Schwinge 2024-03-11 22:51:28 +01:00
commit a95e21151a
1201 changed files with 54710 additions and 24918 deletions


@ -1,3 +1,34 @@
2023-10-15  Mike Frysinger  <vapier@gentoo.org>

	* Makefile.def: Add distclean-sim dependency on distclean-gnulib.
	* Makefile.in: Regenerate.

2023-10-11  Filip Kastl  <fkastl@suse.cz>

	* MAINTAINERS: Fix name order.

2023-10-10  Christoph Müllner  <christoph.muellner@vrull.eu>

	* MAINTAINERS: Add myself.

2023-10-06  Sergei Trofimovich  <siarheit@google.com>

	PR bootstrap/111663
	* Makefile.tpl (STAGEfeedback_CONFIGURE_FLAGS): Disable -Werror.
	* Makefile.in: Regenerate.

2023-10-05  Jan Engelhardt  <jengelh@inai.de>

	* SECURITY.txt: Fix up indentation.

2023-10-05  Jan Engelhardt  <jengelh@inai.de>

	* SECURITY.txt: Fix up commas.

2023-10-04  Siddhesh Poyarekar  <siddhesh@gotplt.org>

	* SECURITY.txt: New file.

2023-09-18  Fei Gao  <gaofei@eswincomputing.com>

	* MAINTAINERS: Add myself.


@ -579,6 +579,7 @@ Brooks Moses <bmoses@google.com>
Dirk Mueller <dmueller@suse.de>
Phil Muldoon <pmuldoon@redhat.com>
Gaius Mulley <gaiusmod2@gmail.com>
Christoph Müllner <christoph.muellner@vrull.eu>
Steven Munroe <munroesj52@gmail.com>
Szabolcs Nagy <szabolcs.nagy@arm.com>
Victor Do Nascimento <victor.donascimento@arm.com>


@ -615,6 +615,7 @@ dependencies = { module=check-libctf; on=all-ld; };
// gdb and gdbserver.
dependencies = { module=distclean-gnulib; on=distclean-gdb; };
dependencies = { module=distclean-gnulib; on=distclean-gdbserver; };
dependencies = { module=distclean-gnulib; on=distclean-sim; };
// Warning, these are not well tested.
dependencies = { module=all-bison; on=all-intl; };


@ -638,6 +638,10 @@ STAGEtrain_TFLAGS = $(filter-out -fchecking=1,$(STAGE3_TFLAGS))
STAGEfeedback_CFLAGS = $(STAGE4_CFLAGS) -fprofile-use -fprofile-reproducible=parallel-runs
STAGEfeedback_TFLAGS = $(STAGE4_TFLAGS)
# Disable warnings as errors for a few reasons:
# - sources for gen* binaries do not have .gcda files available
# - inlining decisions generate extra warnings
STAGEfeedback_CONFIGURE_FLAGS = $(filter-out --enable-werror-always,$(STAGE_CONFIGURE_FLAGS))
STAGEautoprofile_CFLAGS = $(filter-out -gtoggle,$(STAGE2_CFLAGS)) -g
STAGEautoprofile_TFLAGS = $(STAGE2_TFLAGS)
@ -68671,6 +68675,7 @@ check-stageautoprofile-libctf: maybe-all-stageautoprofile-ld
check-stageautofeedback-libctf: maybe-all-stageautofeedback-ld
distclean-gnulib: maybe-distclean-gdb
distclean-gnulib: maybe-distclean-gdbserver
distclean-gnulib: maybe-distclean-sim
all-bison: maybe-all-build-texinfo
all-flex: maybe-all-build-bison
all-flex: maybe-all-m4


@ -561,6 +561,10 @@ STAGEtrain_TFLAGS = $(filter-out -fchecking=1,$(STAGE3_TFLAGS))
STAGEfeedback_CFLAGS = $(STAGE4_CFLAGS) -fprofile-use -fprofile-reproducible=parallel-runs
STAGEfeedback_TFLAGS = $(STAGE4_TFLAGS)
# Disable warnings as errors for a few reasons:
# - sources for gen* binaries do not have .gcda files available
# - inlining decisions generate extra warnings
STAGEfeedback_CONFIGURE_FLAGS = $(filter-out --enable-werror-always,$(STAGE_CONFIGURE_FLAGS))
STAGEautoprofile_CFLAGS = $(filter-out -gtoggle,$(STAGE2_CFLAGS)) -g
STAGEautoprofile_TFLAGS = $(STAGE2_TFLAGS)

SECURITY.txt Normal file

@ -0,0 +1,205 @@
What is a GCC security bug?
===========================

A security bug is one that threatens the security of a system or
network, or might compromise the security of data stored on it.

In the context of GCC, there are multiple ways in which this might
happen and some common scenarios are detailed below.

If you're reporting a security issue and feel like it does not fit
into any of the descriptions below, you're encouraged to reach out
through the GCC bugzilla or, if needed, privately, by following the
instructions in the last two sections of this document.
Compiler drivers, programs, libgccjit and support libraries
-----------------------------------------------------------

The compiler driver processes source code, invokes other programs
such as the assembler and linker and generates the output result,
which may be assembly code or machine code.  Compiling untrusted
sources can result in arbitrary code execution and unconstrained
resource consumption in the compiler.  As a result, compilation of
such code should be done inside a sandboxed environment to ensure
that it does not compromise the host environment.

The libgccjit library can, despite the name, be used both for
ahead-of-time compilation and for just-in-time compilation.  In both
cases, it can be used to translate input representations (such as
source code) in the application context; in the latter case, the
generated code is also run in the application context.

Limitations that apply to the compiler driver apply here too in
terms of trusting inputs, and it is recommended that both the
compilation *and* execution context of the code are appropriately
sandboxed to contain the effects of any bugs in libgccjit, the
application code using it, or its generated code to the sandboxed
environment.

Libraries such as libiberty, libcc1 and libcpp are not distributed
for runtime support and have similar challenges to compiler drivers.
While they are expected to be robust against arbitrary input, they
should only be used with trusted inputs when linked into the
compiler.

Libraries such as zlib that are bundled with GCC to build it will be
treated the same as the compiler drivers and programs as far as
security coverage is concerned.  However, if you find an issue in
these libraries independent of their use in GCC, you should reach
out to their upstream projects to report them.

As a result, the only case for a potential security issue in the
compiler is when it generates vulnerable application code for
trusted input source code that is conforming to the relevant
programming standard or extensions documented as supported by GCC
and the algorithm expressed in the source code does not have the
vulnerability.  The output application code could be considered
vulnerable if it produces an actual vulnerability in the target
application, for example:

- The application dereferences an invalid memory location despite
  the application sources being valid.

- The application reads from or writes to a valid but incorrect
  memory location, resulting in an information integrity issue or an
  information leak.

- The application ends up running in an infinite loop or with
  severe degradation in performance despite the input sources having
  no such issue, resulting in a Denial of Service.  Note that
  correct but non-performant code is not a security issue candidate;
  this only applies to incorrect code that may result in performance
  degradation severe enough to amount to a denial of service.

- The application crashes due to the generated incorrect code,
  resulting in a Denial of Service.
Language runtime libraries
--------------------------

GCC also builds and distributes libraries that are intended to be
used widely to implement runtime support for various programming
languages.  These include the following:

* libada
* libatomic
* libbacktrace
* libcc1
* libcody
* libcpp
* libdecnumber
* libffi
* libgcc
* libgfortran
* libgm2
* libgo
* libgomp
* libitm
* libobjc
* libphobos
* libquadmath
* libssp
* libstdc++

These libraries are intended to be used in arbitrary contexts and, as
a result, bugs in these libraries may be evaluated for security
impact.  However, some of these libraries, e.g. libgo, libphobos,
etc., are not maintained in the GCC project, so the GCC project may
not be the correct point of contact for them.  You are encouraged to
look at the README files within those library directories to locate
the canonical security contact point for those projects and include
them in the report.  Once the issue is fixed in the upstream project,
the fix will be synced into GCC in a future release.

Most security vulnerabilities in these runtime libraries arise when
an application uses functionality in a specific way.  As a result,
not all bugs qualify as security relevant.  The following guidelines
can help with the decision:

- Buffer overflows and integer overflows should be treated as
  security issues if it is conceivable that the data triggering them
  can come from an untrusted source.

- Bugs that cause memory corruption which is likely exploitable
  should be treated as security bugs.

- Information disclosure can be a security bug, especially if
  exposure through applications can be determined.

- Memory leaks and races are security bugs if they cause service
  breakage.

- Stack overflow through unbounded alloca calls or variable-length
  arrays is a security bug if it is conceivable that the data
  triggering the overflow could come from an untrusted source.

- Stack overflow through deep recursion and other crashes are
  security bugs if they cause service breakage.

- Bugs that cripple the whole system (so that it doesn't even boot
  or does not run most applications) are not security bugs, because
  they will not be exploitable in practice due to general system
  instability.
Diagnostic libraries
--------------------

Libraries like libvtv and the sanitizers are intended to be used in
diagnostic cases and are not intended for use in sensitive
environments.  As a result, bugs in these libraries will not be
considered security sensitive.

GCC plugins
-----------

It should be noted that GCC may execute arbitrary code loaded by a
user through the GCC plugin mechanism or through the system
preloading mechanism.  Such custom code should be vetted by the user
for safety, as bugs exposed through such code will not be considered
security issues.

Security features implemented in GCC
------------------------------------

GCC implements a number of security features that reduce the impact
of security issues in applications, such as -fstack-protector,
-fstack-clash-protection, _FORTIFY_SOURCE and so on.  A failure of
these features to function perfectly in all situations is not an
exploitable vulnerability in itself, since it does not affect the
correctness of programs.  Further, they depend on heuristics and may
not always provide full protection coverage.

Similarly, GCC may transform code in a way that preserves the
correctness of the expressed algorithm but not supplementary
properties that are not specifically expressible in a high-level
language.  Examples of such supplementary properties include the
absence of sensitive data in the program's address space after an
attempt to wipe it, or data-independent timing of code.  When the
source code attempts to express such properties, failure to preserve
them in the resulting machine code is not a security issue in GCC.
Reporting private security bugs
===============================

*All bugs reported in the GCC Bugzilla are public.*

In order to report a private security bug that is not immediately
public, please contact one of the downstream distributions with
security teams.  The following teams have volunteered to handle
such bugs:

  Debian:  security@debian.org
  Red Hat: secalert@redhat.com
  SUSE:    security@suse.de
  AdaCore: product-security@adacore.com

Please report the bug to just one of these teams.  It will be shared
with other teams as necessary.

The team contacted will take care of details such as vulnerability
rating and CVE assignment (http://cve.mitre.org/about/).  It is
likely that the team will ask to file a public bug because the issue
is sufficiently minor and does not warrant an embargo.  An embargo is
not a requirement for being credited with the discovery of a security
vulnerability.

Reporting public security bugs
==============================

It is expected that critical security bugs will be rare, and that
most security bugs can be reported in the GCC Bugzilla, thus making
them public immediately.  The system can be found here:

  https://gcc.gnu.org/bugzilla/
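The advice above to compile untrusted sources inside a sandboxed environment can be partially illustrated with plain POSIX resource limits. This is only a sketch of the resource-consumption half of that advice (the `run_limited` helper name and the limit values are made up for illustration; it provides no filesystem or network isolation, for which containers or namespaces are needed):

```shell
#!/bin/sh
# Sketch: run a command (e.g. a compiler invocation on untrusted input)
# under resource limits, so runaway compilation cannot exhaust the host.
# The limits below are illustrative only; this is NOT a complete sandbox.
run_limited() {
    (
        ulimit -t 60         # cap CPU time at 60 seconds
        ulimit -v 2097152    # cap virtual memory at 2 GiB (in KiB)
        exec "$@"
    )
}

# Example use; a real invocation might be: run_limited gcc -c untrusted.c
run_limited echo "command ran under limits"
```

A fuller sandbox would additionally drop privileges and isolate the filesystem, but the subshell-plus-ulimit pattern is the minimal building block.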


@ -1,3 +1,42 @@
2023-10-05  Andrea Corallo  <andrea.corallo@arm.com>

	* mdcompact/mdcompact-testsuite.el: New file.
	* mdcompact/mdcompact.el: Likewise.
	* mdcompact/tests/1.md: Likewise.
	* mdcompact/tests/1.md.out: Likewise.
	* mdcompact/tests/2.md: Likewise.
	* mdcompact/tests/2.md.out: Likewise.
	* mdcompact/tests/3.md: Likewise.
	* mdcompact/tests/3.md.out: Likewise.
	* mdcompact/tests/4.md: Likewise.
	* mdcompact/tests/4.md.out: Likewise.
	* mdcompact/tests/5.md: Likewise.
	* mdcompact/tests/5.md.out: Likewise.
	* mdcompact/tests/6.md: Likewise.
	* mdcompact/tests/6.md.out: Likewise.
	* mdcompact/tests/7.md: Likewise.
	* mdcompact/tests/7.md.out: Likewise.

2023-10-03  Martin Jambor  <mjambor@suse.cz>

	* mklog.py (skip_line_in_changelog): Compare to None using is
	instead of ==, add an extra newline after the function.

2023-10-02  Iain Sandoe  <iain@sandoe.co.uk>

	* config-list.mk: Add newer Darwin versions, trim one older.
	Remove cases with no OS version, which is not supported for
	cross-compilers.

2023-09-29  Patrick O'Neill  <patrick@rivosinc.com>

	* check_GNU_style_lib.py: Skip machine description file bracket
	linting.

2023-09-29  Paul Iannetta  <piannetta@kalrayinc.com>

	* dg-extract-results.py: Print the "Test run" line.
	* dg-extract-results.sh: Print the "Host" line.

2023-09-12  Jonathan Wakely  <jwakely@redhat.com>

	PR other/111360
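The mklog.py entry above replaces an `==` comparison against `None` with `is`. A short self-contained illustration of why `is None` is the idiomatic test (the `AlwaysEqual` class here is hypothetical, standing in for objects whose `__eq__` is overridden, as with mocks or rich-comparison types):

```python
class AlwaysEqual:
    # Hypothetical object whose __eq__ answers True for everything,
    # which is possible for any class that overrides rich comparison.
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
print(obj == None)   # True -- the overridden __eq__ hijacks the comparison
print(obj is None)   # False -- identity comparison cannot be overridden
```

Because `is` compares object identity rather than invoking `__eq__`, `x is None` is reliable for any object, which is why PEP 8 mandates it.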


@ -182,6 +182,9 @@ class SquareBracketCheck:
        self.re = re.compile('\w\s+(\[)')

    def check(self, filename, lineno, line):
        if filename.endswith('.md'):
            return None

        m = self.re.search(line)
        if m != None:
            return CheckError(filename, lineno,
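The hunk above makes the square-bracket style check skip machine description files, whose new compact syntax legitimately puts `[` after a word. A standalone sketch of that behavior (the function name and the `'error'` return value are simplifications of the real `CheckError` plumbing; a raw string is used for the regex here, unlike the original):

```python
import re

# Same pattern as SquareBracketCheck: a word character, whitespace, then '['.
SQUARE_BRACKET_RE = re.compile(r'\w\s+(\[)')

def check_square_bracket(filename, line):
    # .md (machine description) files are skipped entirely, since the
    # compact pattern syntax uses "word [" layout by design.
    if filename.endswith('.md'):
        return None
    m = SQUARE_BRACKET_RE.search(line)
    return 'error' if m else None

print(check_square_bracket('aarch64.md', 'cons: =0 , 1 ['))  # skipped
print(check_square_bracket('foo.c', 'int x [4];'))           # flagged
```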


@ -29,7 +29,8 @@ GCC_SRC_DIR=../../gcc
# > make.out 2>&1 &
#
LIST = aarch64-elf aarch64-freebsd13 aarch64-linux-gnu aarch64-rtems \
LIST = \
aarch64-elf aarch64-freebsd13 aarch64-linux-gnu aarch64-rtems \
alpha-linux-gnu alpha-netbsd alpha-openbsd \
alpha64-dec-vms alpha-dec-vms \
amdgcn-amdhsa \
@ -47,11 +48,11 @@ LIST = aarch64-elf aarch64-freebsd13 aarch64-linux-gnu aarch64-rtems \
hppa-linux-gnuOPT-enable-sjlj-exceptions=yes hppa64-linux-gnu \
hppa64-hpux11.3 \
hppa64-hpux11.0OPT-enable-sjlj-exceptions=yes \
i686-pc-linux-gnu i686-apple-darwin i686-apple-darwin9 i686-apple-darwin10 \
i686-apple-darwin9 i686-apple-darwin13 i686-apple-darwin17 \
i686-freebsd13 i686-kfreebsd-gnu \
i686-netbsdelf9 \
i686-openbsd i686-elf i686-kopensolaris-gnu i686-gnu \
i686-pc-msdosdjgpp i686-lynxos i686-nto-qnx \
i686-pc-linux-gnu i686-pc-msdosdjgpp i686-lynxos i686-nto-qnx \
i686-rtems i686-solaris2.11 i686-wrs-vxworks \
i686-wrs-vxworksae \
i686-cygwinOPT-enable-threads=yes i686-mingw32crt ia64-elf \
@ -75,8 +76,8 @@ LIST = aarch64-elf aarch64-freebsd13 aarch64-linux-gnu aarch64-rtems \
nvptx-none \
or1k-elf or1k-linux-uclibc or1k-linux-musl or1k-rtems \
pdp11-aout \
powerpc-darwin8 \
powerpc-darwin7 powerpc64-darwin powerpc-freebsd13 powerpc-netbsd \
powerpc-apple-darwin9 powerpc64-apple-darwin9 powerpc-apple-darwin8 \
powerpc-freebsd13 powerpc-netbsd \
powerpc-eabisimaltivec powerpc-eabisim ppc-elf \
powerpc-eabialtivec powerpc-xilinx-eabi powerpc-eabi \
powerpc-rtems \
@ -96,8 +97,9 @@ LIST = aarch64-elf aarch64-freebsd13 aarch64-linux-gnu aarch64-rtems \
sparc-wrs-vxworks sparc64-elf sparc64-rtems sparc64-linux \
sparc64-netbsd sparc64-openbsd \
v850e1-elf v850e-elf v850-elf v850-rtems vax-linux-gnu \
vax-netbsdelf visium-elf x86_64-apple-darwin x86_64-gnu \
x86_64-pc-linux-gnuOPT-with-fpmath=avx \
vax-netbsdelf visium-elf \
x86_64-apple-darwin10 x86_64-apple-darwin15 x86_64-apple-darwin21 \
x86_64-gnu x86_64-pc-linux-gnuOPT-with-fpmath=avx \
x86_64-elfOPT-with-fpmath=sse x86_64-freebsd13 x86_64-netbsd \
x86_64-w64-mingw32 \
x86_64-mingw32OPT-enable-sjlj-exceptions=yes x86_64-rtems \


@ -113,7 +113,7 @@ class Prog:
        # Whether to create .sum rather than .log output.
        self.do_sum = True

        # Regexps used while parsing.
        self.test_run_re = re.compile (r'^Test Run By (\S+) on (.*)$')
        self.test_run_re = re.compile (r'^Test run by (\S+) on (.*)$')
        self.tool_re = re.compile (r'^\t\t=== (.*) tests ===$')
        self.result_re = re.compile (r'^(PASS|XPASS|FAIL|XFAIL|UNRESOLVED'
                                     r'|WARNING|ERROR|UNSUPPORTED|UNTESTED'
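The one-line change above relaxes the header regexp from "Test Run By" to "Test run by", presumably matching the casing DejaGnu actually emits. A quick self-contained check of the new pattern against both spellings (the sample header strings are made up):

```python
import re

# Updated pattern from dg-extract-results.py.
test_run_re = re.compile(r'^Test run by (\S+) on (.*)$')

new_header = 'Test run by alice on Mon Oct  2 12:00:00 2023'
old_header = 'Test Run By alice on Mon Oct  2 12:00:00 2023'

print(bool(test_run_re.match(new_header)))  # True: new casing matches
print(bool(test_run_re.match(old_header)))  # False: old casing does not
```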


@ -271,7 +271,7 @@ cat $SUM_FILES \
# Write the beginning of the combined summary file.
head -n 2 $FIRST_SUM
head -n 3 $FIRST_SUM
echo
echo " === $TOOL tests ==="
echo


@ -0,0 +1,56 @@
;;; -*- lexical-binding: t; -*-
;; This file is part of GCC.
;; GCC is free software: you can redistribute it and/or modify it
;; under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.
;; GCC is distributed in the hope that it will be useful, but WITHOUT
;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
;; or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
;; License for more details.
;; You should have received a copy of the GNU General Public License
;; along with GCC. If not, see <https://www.gnu.org/licenses/>.
;;; Commentary:
;;; Usage:
;; $ emacs -batch -l mdcompact.el -l mdcompact-testsuite.el -f ert-run-tests-batch-and-exit
;;; Code:
(require 'mdcompact)
(require 'ert)
(defconst mdcompat-test-directory (concat (file-name-directory
(or load-file-name
buffer-file-name))
"tests/"))
(defun mdcompat-test-run (f)
(with-temp-buffer
(insert-file-contents f)
(mdcomp-run-at-point)
(let ((a (buffer-string))
(b (with-temp-buffer
(insert-file-contents (concat f ".out"))
(buffer-string))))
(should (string= a b)))))
(defmacro mdcompat-gen-tests ()
`(progn
,@(cl-loop
for f in (directory-files mdcompat-test-directory t "md$")
collect
`(ert-deftest ,(intern (concat "mdcompat-test-"
(file-name-sans-extension
(file-name-nondirectory f))))
()
(mdcompat-test-run ,f)))))
(mdcompat-gen-tests)
;;; mdcompact-testsuite.el ends here


@ -0,0 +1,296 @@
;;; -*- lexical-binding: t; -*-
;; Author: Andrea Corallo <andrea.corallo@arm.com>
;; Package: mdcompact
;; Keywords: languages, extensions
;; Package-Requires: ((emacs "29"))
;; This file is part of GCC.
;; GCC is free software: you can redistribute it and/or modify it
;; under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.
;; GCC is distributed in the hope that it will be useful, but WITHOUT
;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
;; or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
;; License for more details.
;; You should have received a copy of the GNU General Public License
;; along with GCC. If not, see <https://www.gnu.org/licenses/>.
;;; Commentary:
;; Convert multi choice GCC machine description patterns to compact
;; syntax.
;;; Usage:
;; With the point on a pattern run 'M-x mdcomp-run-at-point' to
;; convert that pattern.
;; Run 'M-x mdcomp-run-buffer' to convert all convertible patterns in
;; the current buffer.
;; Run 'M-x mdcomp-run-directory' to convert all convertible patterns
;; in a directory.
;; One can invoke the tool from the shell as well, e.g. to run it on
;; the arm backend from the GCC checkout directory:
;; emacs -batch -l ./contrib/mdcompact/mdcompact.el -f mdcomp-run-directory ./gcc/config/arm/
;;; Code:
(require 'cl-lib)
(require 'rx)
(defconst
mdcomp-constr-rx
(rx "(match_operand" (? ":" (1+ (or punct alnum)))
(1+ space) (group-n 1 num) (1+ space) "\""
(1+ (or alnum "_" "<" ">")) "\""
(group-n 2 (1+ space) "\"" (group-n 3 (0+ (not "\""))) "\"")
")"))
(cl-defstruct mdcomp-operand
num
cstr)
(cl-defstruct mdcomp-attr
name
vals)
;; A reasonable name
(rx-define mdcomp-name (1+ (or alnum "_")))
(defconst mdcomp-attr-rx
(rx "(set_attr" (1+ space) "\""
(group-n 1 mdcomp-name)
"\"" (1+ space) "\""
(group-n 2 (1+ (not ")")))
"\"" (0+ space) ")"))
(defun mdcomp-parse-delete-attr ()
(save-match-data
(when (re-search-forward mdcomp-attr-rx nil t)
(let ((res (save-match-data
(make-mdcomp-attr
:name (match-string-no-properties 1)
:vals (cl-delete-if #'string-empty-p
(split-string
(replace-regexp-in-string
(rx "\\") ""
(match-string-no-properties 2))
(rx (1+ (or space ",")))))))))
(if (length= (mdcomp-attr-vals res) 1)
'short
(delete-region (match-beginning 0) (match-end 0))
res)))))
(defun mdcomp-parse-attrs ()
(save-excursion
(let* ((res (cl-loop for x = (mdcomp-parse-delete-attr)
while x
collect x))
(beg (re-search-backward (rx bol (1+ space) "["))))
(unless (memq 'short res)
(when res
(delete-region beg (re-search-forward (rx "]")))))
(cl-delete 'short res))))
(defun mdcomp-remove-quoting (beg)
(save-excursion
(save-match-data
(replace-regexp-in-region (regexp-quote "\\\\") "\\\\" beg (point-max))
(replace-regexp-in-region (regexp-quote "\\\"") "\"" beg (point-max)))))
(defun mdcomp-remove-escaped-newlines (beg)
(save-excursion
(save-match-data
(replace-regexp-in-region (rx "\\" eol (0+ space)) " " beg (point-max)))))
(defun mdcomp-parse-delete-cstr ()
(cl-loop while (re-search-forward mdcomp-constr-rx nil t)
unless (string= "" (match-string-no-properties 3))
collect (save-match-data
(make-mdcomp-operand
:num (string-to-number (match-string-no-properties 1))
:cstr (cl-delete-if #'string-empty-p
(split-string
(replace-regexp-in-string " " ""
(match-string-no-properties 3))
(rx (1+ ","))))))
do (delete-region (match-beginning 2) (match-end 2))))
(defun mdcomp-run* ()
(let* ((ops (mdcomp-parse-delete-cstr))
(attrs (mdcomp-parse-attrs))
(beg (re-search-forward "\"@")))
(cl-sort ops (lambda (x y)
(< (mdcomp-operand-num x) (mdcomp-operand-num y))))
(mdcomp-remove-escaped-newlines beg)
(save-match-data
(save-excursion
(left-char 2)
(forward-sexp)
(left-char 1)
(delete-char 1)
(insert "\n }")))
(mdcomp-remove-quoting beg)
(replace-match "{@")
(re-search-forward (rx (or "\"" ")")))
(re-search-backward "@")
(right-char 1)
(insert "[ cons: ")
(cl-loop
for op in ops
when (string-match "=" (cl-first (mdcomp-operand-cstr op)))
do (insert "=")
do (insert (number-to-string (mdcomp-operand-num op)) ", ")
finally
(progn
;; If present, also add attribute names
(when attrs
(delete-char -2)
(insert "; attrs: ")
(cl-loop for attr in attrs
do (insert (mdcomp-attr-name attr) ", ")))
(delete-char -2)
(insert "]")))
(cl-loop
while (re-search-forward (rx bol (0+ space) (or (group-n 1 "* return")
(group-n 2 "}")
"#" alpha "<"))
nil t)
for i from 0
when (match-string 2)
do (cl-return)
when (match-string 1)
do (progn
(delete-region (match-beginning 1) (+ (match-beginning 1) (length "* return")))
(insert "<<")
(left-char 1))
do
(progn
(left-char 1)
(cl-loop
initially (insert " [ ")
for op in ops
for c = (nth i (mdcomp-operand-cstr op))
unless c
do (cl-return)
do (insert (if (string-match "=" c)
(substring c 1 nil)
c)
", ")
finally (progn
(when attrs
(delete-char -2)
(insert "; ")
(cl-loop for attr in attrs
for str = (nth i (mdcomp-attr-vals attr))
when str
do (insert str)
do (insert ", ")))
(delete-char -2)
(insert " ] ")
(move-end-of-line 1)))))
;; remove everything after ] align what needs to be aligned
;; and re-add the asm template
(re-search-backward (regexp-quote "@[ cons:"))
(let* ((n (length (mdcomp-operand-cstr (car ops))))
(asms (cl-loop
initially (re-search-forward "]")
repeat n
collect (let* ((beg (re-search-forward "]"))
(end (re-search-forward (rx eol)))
(str (buffer-substring-no-properties beg end)))
(delete-region beg end)
str)))
(beg (re-search-backward (regexp-quote "@[ cons:")))
(indent-tabs-mode nil))
(re-search-forward "}")
(align-regexp beg (point) (rx (group-n 1 "") "["))
(align-regexp beg (point) (rx (group-n 1 "") (or "," ";")) nil nil t)
(align-regexp beg (point) (rx (group-n 1 "") "]"))
(goto-char beg)
(cl-loop
initially (re-search-forward "]")
for i below n
do (progn
(re-search-forward "]")
(insert (nth i asms))))
(when (re-search-forward (rx (1+ (or space eol)) ")") nil t)
(replace-match "\n)" nil t)))))
(defun mdcomp-narrow-to-md-pattern ()
(condition-case nil
(let ((beg (re-search-forward "\n("))
(end (re-search-forward (rx bol (1+ ")")))))
(narrow-to-region beg end))
(error
(narrow-to-defun))))
(defun mdcomp-run-at-point ()
"Convert the multi choice top-level form around point to compact syntax."
(interactive)
(save-restriction
(save-mark-and-excursion
(mdcomp-narrow-to-md-pattern)
(goto-char (point-min))
(let ((pattern-name (save-excursion
(re-search-forward (rx "\"" (group-n 1 (1+ (not "\""))) "\""))
(match-string-no-properties 1)))
(orig-text (buffer-substring-no-properties (point-min) (point-max))))
(condition-case nil
(progn
(mdcomp-run*)
(message "Converted: %s" pattern-name))
(error
(message "Skipping conversion for: %s" pattern-name)
(delete-region (point-min) (point-max))
(insert orig-text)
'fail))))))
(defun mdcomp-run-buffer ()
"Convert the multi choice top-level forms in the buffer to compact syntax."
(interactive)
(save-excursion
(message "Conversion for buffer %s started" (buffer-file-name))
(goto-char (point-min))
(while (re-search-forward
(rx "match_operand" (1+ any) letter (0+ space) "," (0+ space) letter) nil t)
(when (eq (mdcomp-run-at-point) 'fail)
(condition-case nil
(forward-sexp)
(error
;; If forward-sexp fails, fall back.
(re-search-forward (rx ")" eol eol))))))
(message "Conversion done")))
(defconst mdcomp-file-rx (rx bol alpha (0+ not-newline) ".md" eol))
(defun mdcomp-run-directory (folder &optional recursive)
"Run mdcompact on FOLDER, possibly in a RECURSIVE fashion."
(interactive "D")
(let ((before-save-hook nil)
(init-time (current-time)))
(mapc (lambda (f)
(with-temp-file f
(message "Working on %s" f)
(insert-file-contents f)
(mdcomp-run-buffer)
(message "Done with %s" f)))
(if recursive
(directory-files-recursively folder mdcomp-file-rx)
(directory-files folder t mdcomp-file-rx)))
(message "Converted in %f sec" (float-time (time-since init-time)))))
(defun mdcomp-batch-run-directory ()
"Same as `mdcomp-run-directory' but use cmd line args."
(mdcomp-run-directory (nth 0 argv) (nth 1 argv)))
(provide 'mdcompact)
;;; mdcompact.el ends here


@ -0,0 +1,36 @@
(define_insn_and_split "*movsi_aarch64"
[(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w")
(match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
"(register_operand (operands[0], SImode)
|| aarch64_reg_or_zero (operands[1], SImode))"
"@
mov\\t%w0, %w1
mov\\t%w0, %w1
mov\\t%w0, %w1
mov\\t%w0, %1
#
* return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
ldr\\t%w0, %1
ldr\\t%s0, %1
str\\t%w1, %0
str\\t%s1, %0
adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
adr\\t%x0, %c1
adrp\\t%x0, %A1
fmov\\t%s0, %w1
fmov\\t%w0, %s1
fmov\\t%s0, %s1
* return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
"CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
[(const_int 0)]
"{
aarch64_expand_mov_immediate (operands[0], operands[1]);
DONE;
}"
[(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
(set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
(set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")
]
)


@ -0,0 +1,32 @@
(define_insn_and_split "*movsi_aarch64"
[(set (match_operand:SI 0 "nonimmediate_operand")
(match_operand:SI 1 "aarch64_mov_operand"))]
"(register_operand (operands[0], SImode)
|| aarch64_reg_or_zero (operands[1], SImode))"
{@ [ cons: =0 , 1 ; attrs: type , arch , length ]
[ r , r ; mov_reg , * , 4 ] mov\t%w0, %w1
[ k , r ; mov_reg , * , 4 ] mov\t%w0, %w1
[ r , k ; mov_reg , * , 4 ] mov\t%w0, %w1
[ r , M ; mov_imm , * , 4 ] mov\t%w0, %1
[ r , n ; mov_imm , * , * ] #
[ r , Usv ; mov_imm , sve , 4 ] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]);
[ r , m ; load_4 , * , 4 ] ldr\t%w0, %1
[ w , m ; load_4 , fp , 4 ] ldr\t%s0, %1
[ m , rZ ; store_4 , * , 4 ] str\t%w1, %0
[ m , w ; store_4 , fp , 4 ] str\t%s1, %0
[ r , Usw ; load_4 , * , 8 ] adrp\t%x0, %A1\;ldr\t%w0, [%x0, %L1]
[ r , Usa ; adr , * , 4 ] adr\t%x0, %c1
[ r , Ush ; adr , * , 4 ] adrp\t%x0, %A1
[ w , rZ ; f_mcr , fp , 4 ] fmov\t%s0, %w1
[ r , w ; f_mrc , fp , 4 ] fmov\t%w0, %s1
[ w , w ; fmov , fp , 4 ] fmov\t%s0, %s1
[ w , Ds ; neon_move , simd , 4 ] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);
}
"CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
[(const_int 0)]
"{
aarch64_expand_mov_immediate (operands[0], operands[1]);
DONE;
}"
)


@ -0,0 +1,25 @@
(define_insn "*movti_aarch64"
[(set (match_operand:TI 0
"nonimmediate_operand" "= r,w,w,w, r,w,r,m,m,w,m")
(match_operand:TI 1
"aarch64_movti_operand" " rUti,Z,Z,r, w,w,m,r,Z,m,w"))]
"(register_operand (operands[0], TImode)
|| aarch64_reg_or_zero (operands[1], TImode))"
"@
#
movi\\t%0.2d, #0
fmov\t%d0, xzr
#
#
mov\\t%0.16b, %1.16b
ldp\\t%0, %H0, %1
stp\\t%1, %H1, %0
stp\\txzr, xzr, %0
ldr\\t%q0, %1
str\\t%q1, %0"
[(set_attr "type" "multiple,neon_move,f_mcr,f_mcr,f_mrc,neon_logic_q, \
load_16,store_16,store_16,\
load_16,store_16")
(set_attr "length" "8,4,4,8,8,4,4,4,4,4,4")
(set_attr "arch" "*,simd,*,*,*,simd,*,*,*,fp,fp")]
)


@ -0,0 +1,21 @@
(define_insn "*movti_aarch64"
[(set (match_operand:TI 0
"nonimmediate_operand")
(match_operand:TI 1
"aarch64_movti_operand"))]
"(register_operand (operands[0], TImode)
|| aarch64_reg_or_zero (operands[1], TImode))"
{@ [ cons: =0 , 1 ; attrs: type , length , arch ]
[ r , rUti ; multiple , 8 , * ] #
[ w , Z ; neon_move , 4 , simd ] movi\t%0.2d, #0
[ w , Z ; f_mcr , 4 , * ] fmov\t%d0, xzr
[ w , r ; f_mcr , 8 , * ] #
[ r , w ; f_mrc , 8 , * ] #
[ w , w ; neon_logic_q , 4 , simd ] mov\t%0.16b, %1.16b
[ r , m ; load_16 , 4 , * ] ldp\t%0, %H0, %1
[ m , r ; store_16 , 4 , * ] stp\t%1, %H1, %0
[ m , Z ; store_16 , 4 , * ] stp\txzr, xzr, %0
[ w , m ; load_16 , 4 , fp ] ldr\t%q0, %1
[ m , w ; store_16 , 4 , fp ] str\t%q1, %0
}
)


@ -0,0 +1,16 @@
(define_insn "*add<mode>3_compareV_cconly_imm"
[(set (reg:CC_V CC_REGNUM)
(compare:CC_V
(plus:<DWI>
(sign_extend:<DWI> (match_operand:GPI 0 "register_operand" "r,r"))
(match_operand:<DWI> 1 "const_scalar_int_operand" ""))
(sign_extend:<DWI>
(plus:GPI
(match_dup 0)
(match_operand:GPI 2 "aarch64_plus_immediate" "I,J")))))]
"INTVAL (operands[1]) == INTVAL (operands[2])"
"@
cmn\\t%<w>0, %<w>1
cmp\\t%<w>0, #%n1"
[(set_attr "type" "alus_imm")]
)


@ -0,0 +1,17 @@
(define_insn "*add<mode>3_compareV_cconly_imm"
[(set (reg:CC_V CC_REGNUM)
(compare:CC_V
(plus:<DWI>
(sign_extend:<DWI> (match_operand:GPI 0 "register_operand"))
(match_operand:<DWI> 1 "const_scalar_int_operand"))
(sign_extend:<DWI>
(plus:GPI
(match_dup 0)
(match_operand:GPI 2 "aarch64_plus_immediate")))))]
"INTVAL (operands[1]) == INTVAL (operands[2])"
{@ [ cons: 0 , 2 ]
[ r , I ] cmn\t%<w>0, %<w>1
[ r , J ] cmp\t%<w>0, #%n1
}
[(set_attr "type" "alus_imm")]
)


@ -0,0 +1,17 @@
(define_insn "*sibcall_insn"
[(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "Ucs, Usf"))
(match_operand 1 ""))
(unspec:DI [(match_operand:DI 2 "const_int_operand")] UNSPEC_CALLEE_ABI)
(return)]
"SIBLING_CALL_P (insn)"
{
if (which_alternative == 0)
{
output_asm_insn ("br\\t%0", operands);
return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
}
return "b\\t%c0";
}
[(set_attr "type" "branch, branch")
(set_attr "sls_length" "retbr,none")]
)


@ -0,0 +1,17 @@
(define_insn "*sibcall_insn"
[(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "Ucs, Usf"))
(match_operand 1 ""))
(unspec:DI [(match_operand:DI 2 "const_int_operand")] UNSPEC_CALLEE_ABI)
(return)]
"SIBLING_CALL_P (insn)"
{
if (which_alternative == 0)
{
output_asm_insn ("br\\t%0", operands);
return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
}
return "b\\t%c0";
}
[(set_attr "type" "branch, branch")
(set_attr "sls_length" "retbr,none")]
)


@ -0,0 +1,12 @@
(define_insn "<optab><mode>3"
[(set (match_operand:GPI 0 "register_operand" "=r,rk,w")
(LOGICAL:GPI (match_operand:GPI 1 "register_operand" "%r,r,w")
(match_operand:GPI 2 "aarch64_logical_operand" "r,<lconst>,w")))]
""
"@
<logical>\\t%<w>0, %<w>1, %<w>2
<logical>\\t%<w>0, %<w>1, %2
<logical>\\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>"
[(set_attr "type" "logic_reg,logic_imm,neon_logic")
(set_attr "arch" "*,*,simd")]
)


@ -0,0 +1,11 @@
(define_insn "<optab><mode>3"
[(set (match_operand:GPI 0 "register_operand")
(LOGICAL:GPI (match_operand:GPI 1 "register_operand")
(match_operand:GPI 2 "aarch64_logical_operand")))]
""
{@ [ cons: =0 , 1 , 2 ; attrs: type , arch ]
[ r , %r , r ; logic_reg , * ] <logical>\t%<w>0, %<w>1, %<w>2
[ rk , r , <lconst> ; logic_imm , * ] <logical>\t%<w>0, %<w>1, %2
[ w , w , w ; neon_logic , simd ] <logical>\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
}
)


@ -0,0 +1,11 @@
(define_insn "aarch64_wrffr"
[(set (reg:VNx16BI FFR_REGNUM)
(match_operand:VNx16BI 0 "aarch64_simd_reg_or_minus_one"))
(set (reg:VNx16BI FFRT_REGNUM)
(unspec:VNx16BI [(match_dup 0)] UNSPEC_WRFFR))]
"TARGET_SVE"
{@ [ cons: 0 ]
[ Dm ] setffr
[ Upa ] wrffr\t%0.b
}
)


@ -0,0 +1,11 @@
(define_insn "aarch64_wrffr"
[(set (reg:VNx16BI FFR_REGNUM)
(match_operand:VNx16BI 0 "aarch64_simd_reg_or_minus_one"))
(set (reg:VNx16BI FFRT_REGNUM)
(unspec:VNx16BI [(match_dup 0)] UNSPEC_WRFFR))]
"TARGET_SVE"
{@ [ cons: 0 ]
[ Dm ] setffr
[ Upa ] wrffr\t%0.b
}
)


@ -0,0 +1,11 @@
(define_insn "and<mode>3<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w")
(and:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,0")
(match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm" "w,Db")))]
"TARGET_SIMD"
"@
and\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
* return aarch64_output_simd_mov_immediate (operands[2], <bitsize>,\
AARCH64_CHECK_BIC);"
[(set_attr "type" "neon_logic<q>")]
)


@ -0,0 +1,11 @@
(define_insn "and<mode>3<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand")
(and:VDQ_I (match_operand:VDQ_I 1 "register_operand")
(match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm")))]
"TARGET_SIMD"
{@ [ cons: =0 , 1 , 2 ]
[ w , w , w ] and\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
[ w , 0 , Db ] << aarch64_output_simd_mov_immediate (operands[2], <bitsize>, AARCH64_CHECK_BIC);
}
[(set_attr "type" "neon_logic<q>")]
)


@ -357,7 +357,8 @@ def update_copyright(data):
def skip_line_in_changelog(line):
return FIRST_LINE_OF_END_RE.match(line) == None
return FIRST_LINE_OF_END_RE.match(line) is None
if __name__ == '__main__':
extra_args = os.getenv('GCC_MKLOG_ARGS')
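The mklog.py hunk above swaps `== None` for `is None`. As a hedged aside, a minimal sketch of why the identity test is preferred — the regex below is a hypothetical stand-in, not the real pattern from contrib/mklog.py:

```python
import re

# Hypothetical stand-in for mklog.py's FIRST_LINE_OF_END_RE; the real
# pattern is not reproduced here.
FIRST_LINE_OF_END_RE = re.compile(r'^---')

def skip_line_in_changelog(line):
    # `is None` tests identity against the None singleton; `== None`
    # would dispatch to __eq__, which arbitrary objects can override.
    return FIRST_LINE_OF_END_RE.match(line) is None
```

Since `re.match` returns `None` on failure, ordinary ChangeLog lines yield `True` (keep skipping) and the terminator line yields `False`.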

File diff suppressed because it is too large


@ -1 +1 @@
20230928
20231018


@ -1443,6 +1443,7 @@ OBJS = \
fixed-value.o \
fold-const.o \
fold-const-call.o \
fold-mem-offsets.o \
function.o \
function-abi.o \
function-tests.o \


@ -1,3 +1,68 @@
2023-10-10 Eric Botcazou <ebotcazou@adacore.com>
* gcc-interface/decl.cc (inline_status_for_subprog): Minor tweak.
(gnat_to_gnu_field): Try harder to get a packable form of the type
for a bitfield.
2023-10-10 Ronan Desplanques <desplanques@adacore.com>
* libgnat/a-direct.adb (Start_Search_Internal): Tweak subprogram
body.
2023-10-10 Eric Botcazou <ebotcazou@adacore.com>
* sem_util.ads (Set_Scope_Is_Transient): Delete.
* sem_util.adb (Set_Scope_Is_Transient): Likewise.
* exp_ch7.adb (Create_Transient_Scope): Set Is_Transient directly.
2023-10-10 Eric Botcazou <ebotcazou@adacore.com>
* exp_aggr.adb (Is_Build_In_Place_Aggregate_Return): Return true
if the aggregate is a dependent expression of a conditional
expression being returned from a build-in-place function.
2023-10-10 Eric Botcazou <ebotcazou@adacore.com>
PR ada/111434
* sem_ch10.adb (Replace): New procedure to replace an entity with
another on the homonym chain.
(Install_Limited_With_Clause): Rename Non_Lim_View to Typ for the
sake of consistency. Call Replace to do the replacements and split
the code into the regular and the special cases. Add debugging
output controlled by -gnatdi.
(Install_With_Clause): Print the Parent_With and Implicit_With flags
in the debugging output controlled by -gnatdi.
(Remove_Limited_With_Unit.Restore_Chain_For_Shadow (Shadow)): Rewrite
using a direct replacement of E4 by E2. Call Replace to do the
replacements. Add debugging output controlled by -gnatdi.
2023-10-10 Ronan Desplanques <desplanques@adacore.com>
* libgnat/a-direct.adb: Fix filesystem entry filtering.
2023-10-10 Ronan Desplanques <desplanques@adacore.com>
* atree.ads, nlists.ads, types.ads: Remove references to extended
nodes. Fix typo.
* sinfo.ads: Likewise and fix position of
Comes_From_Check_Or_Contract description.
2023-10-10 Javier Miranda <miranda@adacore.com>
* sem_attr.adb (Analyze_Attribute): Protect the frontend against
replacing 'Size by its static value if 'Size is not known at
compile time and we are processing pragmas Compile_Time_Warning or
Compile_Time_Errors.
2023-10-03 David Malcolm <dmalcolm@redhat.com>
* gcc-interface/misc.cc: Use text_info ctor.
2023-10-02 David Malcolm <dmalcolm@redhat.com>
* gcc-interface/misc.cc (gnat_post_options): Update for renaming
of diagnostic_context's show_caret to m_source_printing.enabled.
2023-09-26 Eric Botcazou <ebotcazou@adacore.com>
* exp_ch7.adb (Build_Finalizer.Process_Declarations): Remove call


@ -252,7 +252,7 @@ package Atree is
-- The usual approach is to build a new node using this function and
-- then, using the value returned, use the Set_xxx functions to set
-- fields of the node as required. New_Node can only be used for
-- non-entity nodes, i.e. it never generates an extended node.
-- non-entity nodes.
--
-- If we are currently parsing, as indicated by a previous call to
-- Set_Comes_From_Source_Default (True), then this call also resets
@ -308,8 +308,7 @@ package Atree is
-- returns Empty, and New_Copy (Error) returns Error. Note that, unlike
-- Copy_Separate_Tree, New_Copy does not recursively copy any descendants,
-- so in general parent pointers are not set correctly for the descendants
-- of the copied node. Both normal and extended nodes (entities) may be
-- copied using New_Copy.
-- of the copied node.
function Relocate_Node (Source : Node_Id) return Node_Id;
-- Source is a non-entity node that is to be relocated. A new node is
@ -359,7 +358,7 @@ package Atree is
-- caller, according to context.
procedure Extend_Node (Source : Node_Id);
-- This turns a node into an entity; it function is used only by Sinfo.CN.
-- This turns a node into an entity; it is only used by Sinfo.CN.
type Ignored_Ghost_Record_Proc is access procedure (N : Node_Or_Entity_Id);
@ -540,7 +539,7 @@ package Atree is
-- newly constructed replacement subtree. The actual mechanism is to swap
-- the contents of these two nodes fixing up the parent pointers of the
-- replaced node (we do not attempt to preserve parent pointers for the
-- original node). Neither Old_Node nor New_Node can be extended nodes.
-- original node).
-- ??? The above explanation is incorrect, instead Copy_Node is called.
--
-- Note: New_Node may not contain references to Old_Node, for example as


@ -173,8 +173,11 @@ package body Exp_Aggr is
------------------------------------------------------
function Is_Build_In_Place_Aggregate_Return (N : Node_Id) return Boolean;
-- True if N is an aggregate (possibly qualified or converted) that is
-- being returned from a build-in-place function.
-- True if N is an aggregate (possibly qualified or a dependent expression
-- of a conditional expression, and possibly recursively so) that is being
-- returned from a build-in-place function. Such qualified and conditional
-- expressions are transparent for this purpose because an enclosing return
-- is propagated resp. distributed into these expressions by the expander.
function Build_Record_Aggr_Code
(N : Node_Id;
@ -8463,7 +8466,11 @@ package body Exp_Aggr is
P : Node_Id := Parent (N);
begin
while Nkind (P) = N_Qualified_Expression loop
while Nkind (P) in N_Case_Expression
| N_Case_Expression_Alternative
| N_If_Expression
| N_Qualified_Expression
loop
P := Parent (P);
end loop;


@ -4529,7 +4529,7 @@ package body Exp_Ch7 is
Push_Scope (Trans_Scop);
Scope_Stack.Table (Scope_Stack.Last).Node_To_Be_Wrapped := Context;
Set_Scope_Is_Transient;
Scope_Stack.Table (Scope_Stack.Last).Is_Transient := True;
-- The transient scope must also manage the secondary stack


@ -5114,7 +5114,7 @@ inline_status_for_subprog (Entity_Id subprog)
tree gnu_type;
/* This is a kludge to work around a pass ordering issue: for small
record types with many components, i.e. typically bit-fields, the
record types with many components, i.e. typically bitfields, the
initialization routine can contain many assignments that will be
merged by the GIMPLE store merging pass. But this pass runs very
late in the pipeline, in particular after the inlining decisions
@ -7702,6 +7702,18 @@ gnat_to_gnu_field (Entity_Id gnat_field, tree gnu_record_type, int packed,
gnu_field_type = maybe_pad_type (gnu_field_type, gnu_size, 0, gnat_field,
false, definition, true);
/* For a bitfield, if the type still has BLKmode, try again to change it
to an integral mode form. This may be necessary on strict-alignment
platforms with a size clause that is much larger than the field type,
because maybe_pad_type has preserved the alignment of the field type,
which may be too low for the new size. */
if (!needs_strict_alignment
&& RECORD_OR_UNION_TYPE_P (gnu_field_type)
&& !TYPE_FAT_POINTER_P (gnu_field_type)
&& TYPE_MODE (gnu_field_type) == BLKmode
&& is_bitfield)
gnu_field_type = make_packable_type (gnu_field_type, true, 1);
/* If a padding record was made, declare it now since it will never be
declared otherwise. This is necessary to ensure that its subtrees
are properly marked. */


@ -269,7 +269,7 @@ gnat_post_options (const char **pfilename ATTRIBUTE_UNUSED)
/* No caret by default for Ada. */
if (!OPTION_SET_P (flag_diagnostics_show_caret))
global_dc->show_caret = false;
global_dc->m_source_printing.enabled = false;
/* Copy global settings to local versions. */
gnat_encodings = global_options.x_gnat_encodings;
@ -293,7 +293,6 @@ static void
internal_error_function (diagnostic_context *context, const char *msgid,
va_list *ap)
{
text_info tinfo;
char *buffer, *p, *loc;
String_Template temp, temp_loc;
String_Pointer sp, sp_loc;
@ -309,9 +308,7 @@ internal_error_function (diagnostic_context *context, const char *msgid,
pp_clear_output_area (context->printer);
/* Format the message into the pretty-printer. */
tinfo.format_spec = msgid;
tinfo.args_ptr = ap;
tinfo.err_no = errno;
text_info tinfo (msgid, ap, errno);
pp_format_verbatim (context->printer, &tinfo);
/* Extract a (writable) pointer to the formatted text. */


@ -1379,13 +1379,21 @@ package body Ada.Directories is
Compose (Directory, File_Name) & ASCII.NUL;
Path : String renames
Path_C (Path_C'First .. Path_C'Last - 1);
Found : Boolean := False;
Attr : aliased File_Attributes;
Exists : Integer;
Error : Integer;
Kind : File_Kind;
Size : File_Size;
type Result (Found : Boolean := False) is record
case Found is
when True =>
Kind : File_Kind;
Size : File_Size;
when False =>
null;
end case;
end record;
Res : Result := (Found => False);
begin
-- Get the file attributes for the directory item
@ -1414,32 +1422,30 @@ package body Ada.Directories is
elsif Exists = 1 then
if Is_Regular_File_Attr (Path_C'Address, Attr'Access) = 1
and then Filter (Ordinary_File)
then
Found := True;
Kind := Ordinary_File;
Size :=
File_Size
(File_Length_Attr
(-1, Path_C'Address, Attr'Access));
if Filter (Ordinary_File) then
Res := (Found => True,
Kind => Ordinary_File,
Size => File_Size
(File_Length_Attr
(-1, Path_C'Address, Attr'Access)));
end if;
elsif Is_Directory_Attr (Path_C'Address, Attr'Access) = 1
and then Filter (File_Kind'First)
then
Found := True;
Kind := File_Kind'First;
-- File_Kind'First is used instead of Directory due
-- to a name overload issue with the procedure
-- parameter Directory.
Size := 0;
if Filter (File_Kind'First) then
Res := (Found => True,
Kind => File_Kind'First,
Size => 0);
end if;
elsif Filter (Special_File) then
Found := True;
Kind := Special_File;
Size := 0;
Res := (Found => True,
Kind => Special_File,
Size => 0);
end if;
if Found then
if Res.Found then
Search.State.Dir_Contents.Append
(Directory_Entry_Type'
(Valid => True,
@ -1447,9 +1453,9 @@ package body Ada.Directories is
To_Unbounded_String (File_Name),
Full_Name => To_Unbounded_String (Path),
Attr_Error_Code => 0,
Kind => Kind,
Kind => Res.Kind,
Modification_Time => Modification_Time (Path),
Size => Size));
Size => Res.Size));
end if;
end if;
end;


@ -43,9 +43,6 @@ package Nlists is
-- this header, which may be used to access the nodes in the list using
-- the set of routines that define this interface.
-- Note: node lists can contain either nodes or entities (extended nodes)
-- or a mixture of nodes and extended nodes.
function In_Same_List (N1, N2 : Node_Or_Entity_Id) return Boolean;
pragma Inline (In_Same_List);
-- Equivalent to List_Containing (N1) = List_Containing (N2)


@ -6457,17 +6457,30 @@ package body Sem_Attr is
or else Size_Known_At_Compile_Time (Entity (P)))
then
declare
Siz : Uint;
Prefix_E : Entity_Id := Entity (P);
Siz : Uint;
begin
if Known_Static_RM_Size (Entity (P)) then
Siz := RM_Size (Entity (P));
else
Siz := Esize (Entity (P));
-- Handle private and incomplete types
if Present (Underlying_Type (Prefix_E)) then
Prefix_E := Underlying_Type (Prefix_E);
end if;
Rewrite (N, Make_Integer_Literal (Sloc (N), Siz));
Analyze (N);
if Known_Static_RM_Size (Prefix_E) then
Siz := RM_Size (Prefix_E);
else
Siz := Esize (Prefix_E);
end if;
-- Protect the frontend against cases where the attribute
-- Size_Known_At_Compile_Time is set, but the Esize value
-- is not available (see Einfo.ads).
if Present (Siz) then
Rewrite (N, Make_Integer_Literal (Sloc (N), Siz));
Analyze (N);
end if;
end;
end if;


@ -238,6 +238,9 @@ package body Sem_Ch10 is
-- Reset all visibility flags on unit after compiling it, either as a main
-- unit or as a unit in the context.
procedure Replace (Old_E, New_E : Entity_Id);
-- Replace Old_E by New_E on visibility list
procedure Unchain (E : Entity_Id);
-- Remove single entity from visibility list
@ -5310,15 +5313,12 @@ package body Sem_Ch10 is
and then not Is_Child_Unit (Lim_Typ)
then
declare
Non_Lim_View : constant Entity_Id :=
Non_Limited_View (Lim_Typ);
Typ : constant Entity_Id := Non_Limited_View (Lim_Typ);
Prev : Entity_Id;
begin
Prev := Current_Entity (Lim_Typ);
-- Replace Non_Lim_View in the homonyms list, so that the
-- Replace Typ by Lim_Typ in the homonyms list, so that the
-- limited view becomes available.
-- If the nonlimited view is a record with an anonymous
@ -5350,38 +5350,47 @@ package body Sem_Ch10 is
--
-- [*] denotes the visible entity (Current_Entity)
if Prev = Non_Lim_View
or else
(Ekind (Prev) = E_Incomplete_Type
and then Full_View (Prev) = Non_Lim_View)
or else
(Ekind (Prev) = E_Incomplete_Type
and then From_Limited_With (Prev)
and then
Ekind (Non_Limited_View (Prev)) = E_Incomplete_Type
and then
Full_View (Non_Limited_View (Prev)) = Non_Lim_View)
then
Set_Current_Entity (Lim_Typ);
Prev := Current_Entity (Lim_Typ);
else
while Present (Homonym (Prev))
and then Homonym (Prev) /= Non_Lim_View
loop
Prev := Homonym (Prev);
end loop;
while Present (Prev) loop
-- This is a regular replacement
Set_Homonym (Prev, Lim_Typ);
end if;
if Prev = Typ
or else (Ekind (Prev) = E_Incomplete_Type
and then Full_View (Prev) = Typ)
then
Replace (Prev, Lim_Typ);
Set_Homonym (Lim_Typ, Homonym (Non_Lim_View));
if Debug_Flag_I then
Write_Str (" (homonym) replace ");
Write_Name (Chars (Typ));
Write_Eol;
end if;
exit;
-- This is where E1 is replaced with E4
elsif Ekind (Prev) = E_Incomplete_Type
and then From_Limited_With (Prev)
and then
Ekind (Non_Limited_View (Prev)) = E_Incomplete_Type
and then Full_View (Non_Limited_View (Prev)) = Typ
then
Replace (Prev, Lim_Typ);
if Debug_Flag_I then
Write_Str (" (homonym) E1 -> E4 ");
Write_Name (Chars (Typ));
Write_Eol;
end if;
exit;
end if;
Prev := Homonym (Prev);
end loop;
end;
if Debug_Flag_I then
Write_Str (" (homonym) chain ");
Write_Name (Chars (Lim_Typ));
Write_Eol;
end if;
end if;
Next_Entity (Lim_Typ);
@ -5474,6 +5483,10 @@ package body Sem_Ch10 is
if Debug_Flag_I then
if Private_Present (With_Clause) then
Write_Str ("install private withed unit ");
elsif Parent_With (With_Clause) then
Write_Str ("install parent withed unit ");
elsif Implicit_With (With_Clause) then
Write_Str ("install implicit withed unit ");
else
Write_Str ("install withed unit ");
end if;
@ -6816,9 +6829,10 @@ package body Sem_Ch10 is
------------------------------
procedure Restore_Chain_For_Shadow (Shadow : Entity_Id) is
Is_E3 : Boolean;
Typ : constant Entity_Id := Non_Limited_View (Shadow);
pragma Assert (not In_Chain (Typ));
Prev : Entity_Id;
Typ : Entity_Id;
begin
-- If the package has incomplete types, the limited view of the
@ -6827,9 +6841,8 @@ package body Sem_Ch10 is
-- the incomplete type at stake. This in turn has a full view
-- E3 that is the full declaration, with a corresponding
-- shadow entity E4. When reinstalling the nonlimited view,
-- the nonvisible entity E1 is first replaced with E2, but then
-- E3 must *not* become the visible entity as it is replacing E4
-- in the homonyms list and simply be ignored.
-- the visible entity E4 is replaced directly with E2 in the
-- the homonyms list and E3 is simply ignored.
--
-- regular views limited views
--
@ -6842,40 +6855,42 @@ package body Sem_Ch10 is
--
-- [*] denotes the visible entity (Current_Entity)
Typ := Non_Limited_View (Shadow);
pragma Assert (not In_Chain (Typ));
Is_E3 := Nkind (Parent (Typ)) = N_Full_Type_Declaration
and then Present (Incomplete_View (Parent (Typ)));
Prev := Current_Entity (Shadow);
if Prev = Shadow then
if Is_E3 then
Set_Name_Entity_Id (Chars (Prev), Homonym (Prev));
return;
while Present (Prev) loop
-- This is a regular replacement
else
Set_Current_Entity (Typ);
if Prev = Shadow then
Replace (Prev, Typ);
if Debug_Flag_I then
Write_Str (" (homonym) replace ");
Write_Name (Chars (Typ));
Write_Eol;
end if;
exit;
-- This is where E4 is replaced with E2
elsif Ekind (Prev) = E_Incomplete_Type
and then From_Limited_With (Prev)
and then Ekind (Typ) = E_Incomplete_Type
and then Full_View (Typ) = Non_Limited_View (Prev)
then
Replace (Prev, Typ);
if Debug_Flag_I then
Write_Str (" (homonym) E4 -> E2 ");
Write_Name (Chars (Typ));
Write_Eol;
end if;
exit;
end if;
else
while Present (Homonym (Prev))
and then Homonym (Prev) /= Shadow
loop
Prev := Homonym (Prev);
end loop;
if Is_E3 then
Set_Homonym (Prev, Homonym (Shadow));
return;
else
Set_Homonym (Prev, Typ);
end if;
end if;
Set_Homonym (Typ, Homonym (Shadow));
Prev := Homonym (Prev);
end loop;
end Restore_Chain_For_Shadow;
--------------------
@ -7177,6 +7192,35 @@ package body Sem_Ch10 is
null;
end sm;
-------------
-- Replace --
-------------
procedure Replace (Old_E, New_E : Entity_Id) is
Prev : Entity_Id;
begin
Prev := Current_Entity (Old_E);
if No (Prev) then
return;
elsif Prev = Old_E then
Set_Current_Entity (New_E);
Set_Homonym (New_E, Homonym (Old_E));
else
while Present (Prev) and then Homonym (Prev) /= Old_E loop
Prev := Homonym (Prev);
end loop;
if Present (Prev) then
Set_Homonym (Prev, New_E);
Set_Homonym (New_E, Homonym (Old_E));
end if;
end if;
end Replace;
-------------
-- Unchain --
-------------

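The new Replace procedure above is a classic node swap on a singly linked homonym chain. A minimal sketch of the same logic, with hypothetical node objects standing in for GNAT entities (not GNAT's actual data structures):

```python
class Entity:
    """Hypothetical stand-in for a GNAT entity on a homonym chain."""
    def __init__(self, name):
        self.name = name
        self.homonym = None

def replace(head, old_e, new_e):
    """Swap old_e for new_e on the chain starting at head, mirroring the
    shape of sem_ch10.adb's Replace; returns the (possibly new) head."""
    if head is None:                   # No (Prev): nothing to do
        return None
    if head is old_e:                  # old_e is the visible (head) entity
        new_e.homonym = old_e.homonym  # Set_Homonym (New_E, Homonym (Old_E))
        return new_e                   # Set_Current_Entity (New_E)
    prev = head
    while prev is not None and prev.homonym is not old_e:
        prev = prev.homonym
    if prev is not None:               # found old_e: splice new_e in
        prev.homonym = new_e           # Set_Homonym (Prev, New_E)
        new_e.homonym = old_e.homonym
    return head
```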

@ -27792,15 +27792,6 @@ package body Sem_Util is
end if;
end Set_Rep_Info;
----------------------------
-- Set_Scope_Is_Transient --
----------------------------
procedure Set_Scope_Is_Transient (V : Boolean := True) is
begin
Scope_Stack.Table (Scope_Stack.Last).Is_Transient := V;
end Set_Scope_Is_Transient;
-------------------
-- Set_Size_Info --
-------------------


@ -3165,9 +3165,6 @@ package Sem_Util is
-- from sub(type) entity T2 to (sub)type entity T1, as well as Is_Volatile
-- if T1 is a base type.
procedure Set_Scope_Is_Transient (V : Boolean := True);
-- Set the flag Is_Transient of the current scope
procedure Set_Size_Info (T1, T2 : Entity_Id);
pragma Inline (Set_Size_Info);
-- Copies the Esize field and Has_Biased_Representation flag from sub(type)


@ -82,12 +82,6 @@ package Sinfo is
-- for this purpose, so e.g. in X := (if A then B else C);
-- Paren_Count for the right side will be 1.
-- Comes_From_Check_Or_Contract
-- This flag is present in all N_If_Statement nodes and
-- gets set when an N_If_Statement is generated as part of
-- the expansion of a Check, Assert, or contract-related
-- pragma.
-- Comes_From_Source
-- This flag is present in all nodes. It is set if the
-- node is built by the scanner or parser, and clear if
@ -953,6 +947,12 @@ package Sinfo is
-- attribute definition clause is given, rather than testing this at the
-- freeze point.
-- Comes_From_Check_Or_Contract
-- This flag is present in all N_If_Statement nodes and
-- gets set when an N_If_Statement is generated as part of
-- the expansion of a Check, Assert, or contract-related
-- pragma.
-- Comes_From_Extended_Return_Statement
-- Present in N_Simple_Return_Statement nodes. True if this node was
-- constructed as part of the N_Extended_Return_Statement expansion.
@ -2809,12 +2809,6 @@ package Sinfo is
-- fields are defined (and access subprograms declared) in package
-- Einfo.
-- Note: N_Defining_Identifier is an extended node whose fields are
-- deliberately laid out to match the layout of fields in an ordinary
-- N_Identifier node allowing for easy alteration of an identifier
-- node into a defining identifier node. For details, see procedure
-- Sinfo.CN.Change_Identifier_To_Defining_Identifier.
-- N_Defining_Identifier
-- Sloc points to identifier
-- Chars contains the Name_Id for the identifier
@ -3156,12 +3150,6 @@ package Sinfo is
-- additional fields are defined (and access subprograms declared)
-- in package Einfo.
-- Note: N_Defining_Character_Literal is an extended node whose fields
-- are deliberately laid out to match layout of fields in an ordinary
-- N_Character_Literal node, allowing for easy alteration of a character
-- literal node into a defining character literal node. For details, see
-- Sinfo.CN.Change_Character_Literal_To_Defining_Character_Literal.
-- N_Defining_Character_Literal
-- Sloc points to literal
-- Chars contains the Name_Id for the identifier
@ -5416,13 +5404,6 @@ package Sinfo is
-- additional fields are defined (and access subprograms declared)
-- in package Einfo.
-- Note: N_Defining_Operator_Symbol is an extended node whose fields
-- are deliberately laid out to match the layout of fields in an
-- ordinary N_Operator_Symbol node allowing for easy alteration of
-- an operator symbol node into a defining operator symbol node.
-- See Sinfo.CN.Change_Operator_Symbol_To_Defining_Operator_Symbol
-- for further details.
-- N_Defining_Operator_Symbol
-- Sloc points to literal
-- Chars contains the Name_Id for the operator symbol


@ -405,9 +405,7 @@ package Types is
subtype Entity_Id is Node_Id;
-- A synonym for node types, used in the Einfo package to refer to nodes
-- that are entities (i.e. nodes with an Nkind of N_Defining_xxx). All such
-- nodes are extended nodes and these are the only extended nodes, so that
-- in practice entity and extended nodes are synonymous.
-- that are entities (i.e. nodes with an Nkind of N_Defining_xxx).
--
-- Note that Sinfo.Nodes.N_Entity_Id is the same as Entity_Id, except it
-- has a predicate requiring the correct Nkind. Opt_N_Entity_Id is the same


@ -28,8 +28,12 @@ inline enum reg_class
base_reg_class (machine_mode mode ATTRIBUTE_UNUSED,
addr_space_t as ATTRIBUTE_UNUSED,
enum rtx_code outer_code ATTRIBUTE_UNUSED,
enum rtx_code index_code ATTRIBUTE_UNUSED)
enum rtx_code index_code ATTRIBUTE_UNUSED,
rtx_insn *insn ATTRIBUTE_UNUSED = NULL)
{
#ifdef INSN_BASE_REG_CLASS
return INSN_BASE_REG_CLASS (insn);
#else
#ifdef MODE_CODE_BASE_REG_CLASS
return MODE_CODE_BASE_REG_CLASS (MACRO_MODE (mode), as, outer_code,
index_code);
@ -44,6 +48,17 @@ base_reg_class (machine_mode mode ATTRIBUTE_UNUSED,
return BASE_REG_CLASS;
#endif
#endif
#endif
}
inline enum reg_class
index_reg_class (rtx_insn *insn ATTRIBUTE_UNUSED = NULL)
{
#ifdef INSN_INDEX_REG_CLASS
return INSN_INDEX_REG_CLASS (insn);
#else
return INDEX_REG_CLASS;
#endif
}
/* Wrapper function to unify target macros REGNO_MODE_CODE_OK_FOR_BASE_P,
@ -56,8 +71,12 @@ ok_for_base_p_1 (unsigned regno ATTRIBUTE_UNUSED,
machine_mode mode ATTRIBUTE_UNUSED,
addr_space_t as ATTRIBUTE_UNUSED,
enum rtx_code outer_code ATTRIBUTE_UNUSED,
enum rtx_code index_code ATTRIBUTE_UNUSED)
enum rtx_code index_code ATTRIBUTE_UNUSED,
rtx_insn* insn ATTRIBUTE_UNUSED = NULL)
{
#ifdef REGNO_OK_FOR_INSN_BASE_P
return REGNO_OK_FOR_INSN_BASE_P (regno, insn);
#else
#ifdef REGNO_MODE_CODE_OK_FOR_BASE_P
return REGNO_MODE_CODE_OK_FOR_BASE_P (regno, MACRO_MODE (mode), as,
outer_code, index_code);
@ -72,6 +91,7 @@ ok_for_base_p_1 (unsigned regno ATTRIBUTE_UNUSED,
return REGNO_OK_FOR_BASE_P (regno);
#endif
#endif
#endif
}
/* Wrapper around ok_for_base_p_1, for use after register allocation is
@ -79,12 +99,13 @@ ok_for_base_p_1 (unsigned regno ATTRIBUTE_UNUSED,
inline bool
regno_ok_for_base_p (unsigned regno, machine_mode mode, addr_space_t as,
enum rtx_code outer_code, enum rtx_code index_code)
enum rtx_code outer_code, enum rtx_code index_code,
rtx_insn *insn = NULL)
{
if (regno >= FIRST_PSEUDO_REGISTER && reg_renumber[regno] >= 0)
regno = reg_renumber[regno];
return ok_for_base_p_1 (regno, mode, as, outer_code, index_code);
return ok_for_base_p_1 (regno, mode, as, outer_code, index_code, insn);
}
#endif /* GCC_ADDRESSES_H */


@ -774,7 +774,22 @@ reference_alias_ptr_type_1 (tree *t)
&& (TYPE_MAIN_VARIANT (TREE_TYPE (inner))
!= TYPE_MAIN_VARIANT
(TREE_TYPE (TREE_TYPE (TREE_OPERAND (inner, 1))))))
return TREE_TYPE (TREE_OPERAND (inner, 1));
{
tree alias_ptrtype = TREE_TYPE (TREE_OPERAND (inner, 1));
/* Unless we have the (aggregate) effective type of the access
somewhere on the access path. If we have for example
(&a->elts[i])->l.len exposed by abstraction we'd see
MEM <A> [(B *)a].elts[i].l.len and we can use the alias set
of 'len' when typeof (MEM <A> [(B *)a].elts[i]) == B for
example. See PR111715. */
tree inner = *t;
while (handled_component_p (inner)
&& (TYPE_MAIN_VARIANT (TREE_TYPE (inner))
!= TYPE_MAIN_VARIANT (TREE_TYPE (alias_ptrtype))))
inner = TREE_OPERAND (inner, 0);
if (TREE_CODE (inner) == MEM_REF)
return alias_ptrtype;
}
/* Otherwise, pick up the outermost object that we could have
a pointer to. */

View File

@ -1,3 +1,74 @@
2023-10-09 David Malcolm <dmalcolm@redhat.com>
* access-diagram.cc (boundaries::add): Explicitly state
"boundaries::" scope for "kind" enum.
2023-10-08 David Malcolm <dmalcolm@redhat.com>
PR analyzer/111155
* access-diagram.cc (boundaries::boundaries): Add logger param
(boundaries::add): Add logging.
(boundaries::get_hard_boundaries_in_range): New.
(boundaries::m_logger): New field.
(boundaries::get_table_x_for_offset): Make public.
(class svalue_spatial_item): New.
(class compound_svalue_spatial_item): New.
(add_ellipsis_to_gaps): New.
(valid_region_spatial_item::valid_region_spatial_item): Add theme
param. Initialize m_boundaries, m_existing_sval, and
m_existing_sval_spatial_item.
(valid_region_spatial_item::add_boundaries): Set m_boundaries.
Add boundaries for any m_existing_sval_spatial_item.
(valid_region_spatial_item::add_array_elements_to_table): Rewrite
creation of min/max index in terms of
maybe_add_array_index_to_table. Rewrite ellipsis code using
add_ellipsis_to_gaps. Add index values for any hard boundaries
within the valid region.
(valid_region_spatial_item::maybe_add_array_index_to_table): New,
based on code formerly in add_array_elements_to_table.
(valid_region_spatial_item::make_table): Make use of
m_existing_sval_spatial_item, if any.
(valid_region_spatial_item::m_boundaries): New field.
(valid_region_spatial_item::m_existing_sval): New field.
(valid_region_spatial_item::m_existing_sval_spatial_item): New
field.
(class svalue_spatial_item): Rename to...
(class written_svalue_spatial_item): ...this.
(class string_region_spatial_item): Rename to...
(class string_literal_spatial_item): ...this. Add "kind".
(string_literal_spatial_item::add_boundaries): Use m_kind to
determine kind of boundary. Update for renaming of m_actual_bits
to m_bits.
(string_literal_spatial_item::make_table): Likewise. Support not
displaying a row for byte indexes, and not displaying a row for
the type.
(string_literal_spatial_item::add_column_for_byte): Make byte index
row optional.
(svalue_spatial_item::make): Convert to...
(make_written_svalue_spatial_item): ...this.
(make_existing_svalue_spatial_item): New.
(access_diagram_impl::access_diagram_impl): Pass theme to
m_valid_region_spatial_item ctor. Update for renaming of
m_svalue_spatial_item.
(access_diagram_impl::find_boundaries): Pass logger to boundaries.
Update for renaming of...
(access_diagram_impl::m_svalue_spatial_item): Rename to...
(access_diagram_impl::m_written_svalue_spatial_item): ...this.
2023-10-03 David Malcolm <dmalcolm@redhat.com>
* analyzer-logging.cc (logger::log_va_partial): Use text_info
ctor.
* analyzer.cc (make_label_text): Likewise.
(make_label_text_n): Likewise.
* pending-diagnostic.cc (evdesc::event_desc::formatted_print):
Likewise.
2023-10-02 David Malcolm <dmalcolm@redhat.com>
* program-point.cc: Update for grouping of source printing fields
within diagnostic_context.
2023-09-15 David Malcolm <dmalcolm@redhat.com>
* analyzer.cc (get_stmt_location): Handle null stmt.


@ -630,8 +630,8 @@ class boundaries
public:
enum class kind { HARD, SOFT};
boundaries (const region &base_reg)
: m_base_reg (base_reg)
boundaries (const region &base_reg, logger *logger)
: m_base_reg (base_reg), m_logger (logger)
{
}
@ -646,6 +646,16 @@ public:
{
add (range.m_start, kind);
add (range.m_next, kind);
if (m_logger)
{
m_logger->start_log_line ();
m_logger->log_partial ("added access_range: ");
range.dump_to_pp (m_logger->get_printer (), true);
m_logger->log_partial (" (%s)",
(kind == boundaries::kind::HARD)
? "HARD" : "soft");
m_logger->end_log_line ();
}
}
void add (const region &reg, region_model_manager *mgr, enum kind kind)
@ -714,8 +724,30 @@ public:
return m_all_offsets.size ();
}
std::vector<region_offset>
get_hard_boundaries_in_range (byte_offset_t min_offset,
byte_offset_t max_offset) const
{
std::vector<region_offset> result;
for (auto &offset : m_hard_offsets)
{
if (!offset.concrete_p ())
continue;
byte_offset_t byte;
if (!offset.get_concrete_byte_offset (&byte))
continue;
if (byte < min_offset)
continue;
if (byte > max_offset)
continue;
result.push_back (offset);
}
return result;
}
private:
const region &m_base_reg;
logger *m_logger;
std::set<region_offset> m_all_offsets;
std::set<region_offset> m_hard_offsets;
};
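get_hard_boundaries_in_range above is a straightforward filter over the recorded hard offsets. A sketch of the same selection logic, modeling offsets as ints with `None` for non-concrete (symbolic) offsets — a hypothetical simplification, since the real region_offset carries more state:

```python
def hard_boundaries_in_range(hard_offsets, min_offset, max_offset):
    """Keep only concrete byte offsets within [min_offset, max_offset],
    mirroring boundaries::get_hard_boundaries_in_range.  None models an
    offset whose concrete_p () is false."""
    result = []
    for off in hard_offsets:
        if off is None:                      # !offset.concrete_p ()
            continue
        if off < min_offset or off > max_offset:
            continue
        result.append(off)
    return result
```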
@ -1085,7 +1117,6 @@ public:
logger.dec_indent ();
}
private:
int get_table_x_for_offset (region_offset offset) const
{
auto slot = m_table_x_for_offset.find (offset);
@ -1097,6 +1128,7 @@ private:
return slot->second;
}
private:
int get_table_x_for_prev_offset (region_offset offset) const
{
auto slot = m_table_x_for_prev_offset.find (offset);
@ -1132,6 +1164,124 @@ public:
style_manager &sm) const = 0;
};
/* A spatial_item that involves showing an svalue at a particular offset. */
class svalue_spatial_item : public spatial_item
{
public:
enum class kind
{
WRITTEN,
EXISTING
};
protected:
svalue_spatial_item (const svalue &sval,
access_range bits,
enum kind kind)
: m_sval (sval), m_bits (bits), m_kind (kind)
{
}
const svalue &m_sval;
access_range m_bits;
enum kind m_kind;
};
static std::unique_ptr<spatial_item>
make_existing_svalue_spatial_item (const svalue *sval,
const access_range &bits,
const theme &theme);
class compound_svalue_spatial_item : public svalue_spatial_item
{
public:
compound_svalue_spatial_item (const compound_svalue &sval,
const access_range &bits,
enum kind kind,
const theme &theme)
: svalue_spatial_item (sval, bits, kind),
m_compound_sval (sval)
{
const binding_map &map = m_compound_sval.get_map ();
auto_vec <const binding_key *> binding_keys;
for (auto iter : map)
{
const binding_key *key = iter.first;
const svalue *bound_sval = iter.second;
if (const concrete_binding *concrete_key
= key->dyn_cast_concrete_binding ())
{
access_range range (nullptr,
concrete_key->get_bit_range ());
if (std::unique_ptr<spatial_item> child
= make_existing_svalue_spatial_item (bound_sval,
range,
theme))
m_children.push_back (std::move (child));
}
}
}
void add_boundaries (boundaries &out, logger *logger) const final override
{
LOG_SCOPE (logger);
for (auto &iter : m_children)
iter->add_boundaries (out, logger);
}
table make_table (const bit_to_table_map &btm,
style_manager &sm) const final override
{
std::vector<table> child_tables;
int max_rows = 0;
for (auto &iter : m_children)
{
table child_table (iter->make_table (btm, sm));
max_rows = MAX (max_rows, child_table.get_size ().h);
child_tables.push_back (std::move (child_table));
}
table t (table::size_t (btm.get_num_columns (), max_rows));
for (auto &&child_table : child_tables)
t.add_other_table (std::move (child_table),
table::coord_t (0, 0));
return t;
}
private:
const compound_svalue &m_compound_sval;
std::vector<std::unique_ptr<spatial_item>> m_children;
};
/* Loop through the TABLE_X_RANGE columns of T, adding
cells containing "..." in any unoccupied runs of table cells. */
static void
add_ellipsis_to_gaps (table &t,
style_manager &sm,
const table::range_t &table_x_range,
const table::range_t &table_y_range)
{
int table_x = table_x_range.get_min ();
while (table_x < table_x_range.get_next ())
{
/* Find a run of unoccupied table cells. */
const int start_table_x = table_x;
while (table_x < table_x_range.get_next ()
&& !t.get_placement_at (table::coord_t (table_x,
table_y_range.get_min ())))
table_x++;
const table::range_t unoccupied_x_range (start_table_x, table_x);
if (unoccupied_x_range.get_size () > 0)
t.set_cell_span (table::rect_t (unoccupied_x_range, table_y_range),
styled_string (sm, "..."));
/* Skip occupied table cells. */
while (table_x < table_x_range.get_next ()
&& t.get_placement_at (table::coord_t (table_x,
table_y_range.get_min ())))
table_x++;
}
}
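The scanning loop in add_ellipsis_to_gaps can be sketched independently of the table classes. A hedged Python rendition, where a plain `occupied` list stands in for `t.get_placement_at` and the returned ranges are the spans that would receive a "..." cell:

```python
def gap_runs(occupied, lo, hi):
    """Yield [start, end) runs of unoccupied cells in [lo, hi),
    mirroring the two inner while-loops of add_ellipsis_to_gaps."""
    runs = []
    x = lo
    while x < hi:
        start = x
        while x < hi and not occupied[x]:   # find a run of unoccupied cells
            x += 1
        if x > start:
            runs.append((start, x))         # this span would get a "..." cell
        while x < hi and occupied[x]:       # skip occupied cells
            x += 1
    return runs
```

Each run found is spanned with a single ellipsis cell rather than one per column, which keeps the rendered diagram narrow.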
/* Subclass of spatial_item for visualizing the region of memory
that's valid to access relative to the base region of the region accessed in
the operation. */
@ -1140,14 +1290,23 @@ class valid_region_spatial_item : public spatial_item
{
public:
valid_region_spatial_item (const access_operation &op,
diagnostic_event_id_t region_creation_event_id)
diagnostic_event_id_t region_creation_event_id,
const theme &theme)
: m_op (op),
m_region_creation_event_id (region_creation_event_id)
{}
m_region_creation_event_id (region_creation_event_id),
m_boundaries (nullptr),
m_existing_sval (op.m_model.get_store_value (op.m_base_region, nullptr)),
m_existing_sval_spatial_item
(make_existing_svalue_spatial_item (m_existing_sval,
op.get_valid_bits (),
theme))
{
}
void add_boundaries (boundaries &out, logger *logger) const final override
{
LOG_SCOPE (logger);
m_boundaries = &out;
access_range valid_bits = m_op.get_valid_bits ();
if (logger)
{
@ -1158,6 +1317,18 @@ public:
}
out.add (valid_bits, boundaries::kind::HARD);
if (m_existing_sval_spatial_item)
{
if (logger)
{
logger->start_log_line ();
logger->log_partial ("existing svalue: ");
m_existing_sval->dump_to_pp (logger->get_printer (), true);
logger->end_log_line ();
}
m_existing_sval_spatial_item->add_boundaries (out, logger);
}
/* Support for showing first and final element in array types. */
if (tree base_type = m_op.m_base_region->get_type ())
if (TREE_CODE (base_type) == ARRAY_TYPE)
@ -1193,65 +1364,102 @@ public:
{
tree base_type = m_op.m_base_region->get_type ();
gcc_assert (TREE_CODE (base_type) == ARRAY_TYPE);
gcc_assert (m_boundaries != nullptr);
tree domain = TYPE_DOMAIN (base_type);
if (!(TYPE_MIN_VALUE (domain) && TYPE_MAX_VALUE (domain)))
return;
region_model_manager * const mgr = m_op.get_manager ();
const int table_y = 0;
const int table_h = 1;
const table::range_t table_y_range (table_y, table_y + table_h);
t.add_row ();
const svalue *min_idx_sval
= mgr->get_or_create_constant_svalue (TYPE_MIN_VALUE (domain));
const region *min_element = mgr->get_element_region (m_op.m_base_region,
const table::range_t min_x_range
= maybe_add_array_index_to_table (t, btm, sm, table_y_range,
TYPE_MIN_VALUE (domain));
const table::range_t max_x_range
= maybe_add_array_index_to_table (t, btm, sm, table_y_range,
TYPE_MAX_VALUE (domain));
if (TREE_TYPE (base_type) == char_type_node)
{
/* For a char array: if there are any hard boundaries in
m_boundaries that are *within* the valid region,
then show those index values. */
std::vector<region_offset> hard_boundaries
= m_boundaries->get_hard_boundaries_in_range
(tree_to_shwi (TYPE_MIN_VALUE (domain)),
tree_to_shwi (TYPE_MAX_VALUE (domain)));
for (auto &offset : hard_boundaries)
{
const int table_x = btm.get_table_x_for_offset (offset);
if (!offset.concrete_p ())
continue;
byte_offset_t byte;
if (!offset.get_concrete_byte_offset (&byte))
continue;
table::range_t table_x_range (table_x, table_x + 1);
t.maybe_set_cell_span (table::rect_t (table_x_range,
table_y_range),
fmt_styled_string (sm, "[%wi]",
byte.to_shwi ()));
}
}
add_ellipsis_to_gaps (t, sm,
table::range_t (min_x_range.get_next (),
max_x_range.get_min ()),
table_y_range);
}
table::range_t
maybe_add_array_index_to_table (table &t,
const bit_to_table_map &btm,
style_manager &sm,
const table::range_t table_y_range,
tree idx_cst) const
{
region_model_manager * const mgr = m_op.get_manager ();
tree base_type = m_op.m_base_region->get_type ();
const svalue *idx_sval
= mgr->get_or_create_constant_svalue (idx_cst);
const region *element_reg = mgr->get_element_region (m_op.m_base_region,
TREE_TYPE (base_type),
min_idx_sval);
const access_range min_element_range (*min_element, mgr);
const table::range_t min_element_x_range
= btm.get_table_x_for_range (min_element_range);
idx_sval);
const access_range element_range (*element_reg, mgr);
const table::range_t element_x_range
= btm.get_table_x_for_range (element_range);
t.set_cell_span (table::rect_t (min_element_x_range,
table_y_range),
fmt_styled_string (sm, "[%E]",
TYPE_MIN_VALUE (domain)));
t.maybe_set_cell_span (table::rect_t (element_x_range,
table_y_range),
fmt_styled_string (sm, "[%E]", idx_cst));
const svalue *max_idx_sval
= mgr->get_or_create_constant_svalue (TYPE_MAX_VALUE (domain));
const region *max_element = mgr->get_element_region (m_op.m_base_region,
TREE_TYPE (base_type),
max_idx_sval);
if (min_element == max_element)
return; // 1-element array
const access_range max_element_range (*max_element, mgr);
const table::range_t max_element_x_range
= btm.get_table_x_for_range (max_element_range);
t.set_cell_span (table::rect_t (max_element_x_range,
table_y_range),
fmt_styled_string (sm, "[%E]",
TYPE_MAX_VALUE (domain)));
const table::range_t other_elements_x_range (min_element_x_range.next,
max_element_x_range.start);
if (other_elements_x_range.get_size () > 0)
t.set_cell_span (table::rect_t (other_elements_x_range, table_y_range),
styled_string (sm, "..."));
return element_x_range;
}
table make_table (const bit_to_table_map &btm,
style_manager &sm) const final override
{
table t (table::size_t (btm.get_num_columns (), 1));
table t (table::size_t (btm.get_num_columns (), 0));
if (tree base_type = m_op.m_base_region->get_type ())
if (TREE_CODE (base_type) == ARRAY_TYPE)
add_array_elements_to_table (t, btm, sm);
/* Make use of m_existing_sval_spatial_item, if any. */
if (m_existing_sval_spatial_item)
{
table table_for_existing
= m_existing_sval_spatial_item->make_table (btm, sm);
const int table_y = t.add_rows (table_for_existing.get_size ().h);
t.add_other_table (std::move (table_for_existing),
table::coord_t (0, table_y));
}
access_range valid_bits = m_op.get_valid_bits ();
const int table_y = t.get_size ().h - 1;
const int table_y = t.add_row ();
const int table_h = 1;
table::rect_t rect = btm.get_table_rect (valid_bits, table_y, table_h);
styled_string s;
@ -1306,6 +1514,9 @@ public:
private:
const access_operation &m_op;
diagnostic_event_id_t m_region_creation_event_id;
mutable const boundaries *m_boundaries;
const svalue *m_existing_sval;
std::unique_ptr<spatial_item> m_existing_sval_spatial_item;
};
/* Subclass of spatial_item for visualizing the region of memory
@ -1362,15 +1573,10 @@ private:
to the accessed region.
Can be subclassed to give visualizations of specific kinds of svalue. */
class svalue_spatial_item : public spatial_item
class written_svalue_spatial_item : public spatial_item
{
public:
static std::unique_ptr<svalue_spatial_item> make (const access_operation &op,
const svalue &sval,
access_range actual_bits,
const theme &theme);
svalue_spatial_item (const access_operation &op,
written_svalue_spatial_item (const access_operation &op,
const svalue &sval,
access_range actual_bits)
: m_op (op), m_sval (sval), m_actual_bits (actual_bits)
@ -1479,15 +1685,15 @@ protected:
*/
class string_region_spatial_item : public svalue_spatial_item
class string_literal_spatial_item : public svalue_spatial_item
{
public:
string_region_spatial_item (const access_operation &op,
const svalue &sval,
access_range actual_bits,
const string_region &string_reg,
const theme &theme)
: svalue_spatial_item (op, sval, actual_bits),
string_literal_spatial_item (const svalue &sval,
access_range actual_bits,
const string_region &string_reg,
const theme &theme,
enum kind kind)
: svalue_spatial_item (sval, actual_bits, kind),
m_string_reg (string_reg),
m_theme (theme),
m_ellipsis_threshold (param_analyzer_text_art_string_ellipsis_threshold),
@ -1501,16 +1707,18 @@ public:
void add_boundaries (boundaries &out, logger *logger) const override
{
LOG_SCOPE (logger);
out.add (m_actual_bits, boundaries::kind::HARD);
out.add (m_bits, m_kind == svalue_spatial_item::kind::WRITTEN
? boundaries::kind::HARD
: boundaries::kind::SOFT);
tree string_cst = get_string_cst ();
/* TREE_STRING_LENGTH is sizeof, not strlen. */
if (m_show_full_string)
out.add_all_bytes_in_range (m_actual_bits);
out.add_all_bytes_in_range (m_bits);
else
{
byte_range bytes (0, 0);
bool valid = m_actual_bits.as_concrete_byte_range (&bytes);
bool valid = m_bits.as_concrete_byte_range (&bytes);
gcc_assert (valid);
byte_range head_of_string (bytes.get_start_byte_offset (),
m_ellipsis_head_len);
@ -1532,11 +1740,13 @@ public:
{
table t (table::size_t (btm.get_num_columns (), 0));
const int byte_idx_table_y = t.add_row ();
const int byte_idx_table_y = (m_kind == svalue_spatial_item::kind::WRITTEN
? t.add_row ()
: -1);
const int byte_val_table_y = t.add_row ();
byte_range bytes (0, 0);
bool valid = m_actual_bits.as_concrete_byte_range (&bytes);
bool valid = m_bits.as_concrete_byte_range (&bytes);
gcc_assert (valid);
tree string_cst = get_string_cst ();
if (m_show_full_string)
@ -1616,14 +1826,17 @@ public:
byte_idx,
byte_idx_table_y, byte_val_table_y);
/* Ellipsis (two rows high). */
/* Ellipsis. */
const byte_range ellipsis_bytes
(m_ellipsis_head_len + bytes.get_start_byte_offset (),
TREE_STRING_LENGTH (string_cst)
- (m_ellipsis_head_len + m_ellipsis_tail_len));
const table::rect_t table_rect
= btm.get_table_rect (&m_string_reg, ellipsis_bytes,
byte_idx_table_y, 2);
= ((byte_idx_table_y != -1)
? btm.get_table_rect (&m_string_reg, ellipsis_bytes,
byte_idx_table_y, 2)
: btm.get_table_rect (&m_string_reg, ellipsis_bytes,
byte_val_table_y, 1));
t.set_cell_span (table_rect, styled_string (sm, "..."));
/* Tail of string. */
@ -1637,12 +1850,15 @@ public:
byte_idx_table_y, byte_val_table_y);
}
const int summary_table_y = t.add_row ();
t.set_cell_span (btm.get_table_rect (&m_string_reg, bytes,
summary_table_y, 1),
fmt_styled_string (sm,
_("string literal (type: %qT)"),
TREE_TYPE (string_cst)));
if (m_kind == svalue_spatial_item::kind::WRITTEN)
{
const int summary_table_y = t.add_row ();
t.set_cell_span (btm.get_table_rect (&m_string_reg, bytes,
summary_table_y, 1),
fmt_styled_string (sm,
_("string literal (type: %qT)"),
TREE_TYPE (string_cst)));
}
return t;
}
@ -1687,7 +1903,7 @@ private:
gcc_assert (byte_idx_within_string < TREE_STRING_LENGTH (string_cst));
const byte_range bytes (byte_idx_within_cluster, 1);
if (1) // show_byte_indices
if (byte_idx_table_y != -1)
{
const table::rect_t idx_table_rect
= btm.get_table_rect (&m_string_reg, bytes, byte_idx_table_y, 1);
@ -1729,18 +1945,54 @@ private:
const bool m_show_utf8;
};
std::unique_ptr<svalue_spatial_item>
svalue_spatial_item::make (const access_operation &op,
const svalue &sval,
access_range actual_bits,
const theme &theme)
static std::unique_ptr<spatial_item>
make_written_svalue_spatial_item (const access_operation &op,
const svalue &sval,
access_range actual_bits,
const theme &theme)
{
if (const initial_svalue *initial_sval = sval.dyn_cast_initial_svalue ())
if (const string_region *string_reg
= initial_sval->get_region ()->dyn_cast_string_region ())
return make_unique <string_region_spatial_item> (op, sval, actual_bits,
*string_reg, theme);
return make_unique <svalue_spatial_item> (op, sval, actual_bits);
return make_unique <string_literal_spatial_item>
(sval, actual_bits,
*string_reg, theme,
svalue_spatial_item::kind::WRITTEN);
return make_unique <written_svalue_spatial_item> (op, sval, actual_bits);
}
static std::unique_ptr<spatial_item>
make_existing_svalue_spatial_item (const svalue *sval,
const access_range &bits,
const theme &theme)
{
if (!sval)
return nullptr;
switch (sval->get_kind ())
{
default:
return nullptr;
case SK_INITIAL:
{
const initial_svalue *initial_sval = (const initial_svalue *)sval;
if (const string_region *string_reg
= initial_sval->get_region ()->dyn_cast_string_region ())
return make_unique <string_literal_spatial_item>
(*sval, bits,
*string_reg, theme,
svalue_spatial_item::kind::EXISTING);
return nullptr;
}
case SK_COMPOUND:
return make_unique<compound_svalue_spatial_item>
(*((const compound_svalue *)sval),
bits,
svalue_spatial_item::kind::EXISTING,
theme);
}
}
/* Widget subclass implementing access diagrams. */
@ -1759,7 +2011,7 @@ public:
m_theme (theme),
m_logger (logger),
m_invalid (false),
m_valid_region_spatial_item (op, region_creation_event_id),
m_valid_region_spatial_item (op, region_creation_event_id, theme),
m_accessed_region_spatial_item (op),
m_btm (),
m_calc_req_size_called (false)
@ -1800,10 +2052,11 @@ public:
if (op.m_sval_hint)
{
access_range actual_bits = m_op.get_actual_bits ();
m_svalue_spatial_item = svalue_spatial_item::make (m_op,
*op.m_sval_hint,
actual_bits,
m_theme);
m_written_svalue_spatial_item
= make_written_svalue_spatial_item (m_op,
*op.m_sval_hint,
actual_bits,
m_theme);
}
/* Two passes:
@ -1856,9 +2109,9 @@ public:
add_aligned_child_table (std::move (t_headings));
}
if (m_svalue_spatial_item)
if (m_written_svalue_spatial_item)
{
table t_sval (m_svalue_spatial_item->make_table (m_btm, m_sm));
table t_sval (m_written_svalue_spatial_item->make_table (m_btm, m_sm));
add_aligned_child_table (std::move (t_sval));
}
else
@ -1942,12 +2195,12 @@ private:
find_boundaries () const
{
std::unique_ptr<boundaries> result
= make_unique<boundaries> (*m_op.m_base_region);
= make_unique<boundaries> (*m_op.m_base_region, m_logger);
m_valid_region_spatial_item.add_boundaries (*result, m_logger);
m_accessed_region_spatial_item.add_boundaries (*result, m_logger);
if (m_svalue_spatial_item)
m_svalue_spatial_item->add_boundaries (*result, m_logger);
if (m_written_svalue_spatial_item)
m_written_svalue_spatial_item->add_boundaries (*result, m_logger);
return result;
}
@ -2324,7 +2577,7 @@ private:
valid_region_spatial_item m_valid_region_spatial_item;
accessed_region_spatial_item m_accessed_region_spatial_item;
std::unique_ptr<svalue_spatial_item> m_svalue_spatial_item;
std::unique_ptr<spatial_item> m_written_svalue_spatial_item;
std::unique_ptr<boundaries> m_boundaries;

View File

@ -144,10 +144,7 @@ logger::log_partial (const char *fmt, ...)
void
logger::log_va_partial (const char *fmt, va_list *ap)
{
text_info text;
text.format_spec = fmt;
text.args_ptr = ap;
text.err_no = 0;
text_info text (fmt, ap, 0);
pp_format (m_pp, &text);
pp_output_formatted_text (m_pp);
}


@ -425,19 +425,13 @@ make_label_text (bool can_colorize, const char *fmt, ...)
if (!can_colorize)
pp_show_color (pp) = false;
text_info ti;
rich_location rich_loc (line_table, UNKNOWN_LOCATION);
va_list ap;
va_start (ap, fmt);
ti.format_spec = _(fmt);
ti.args_ptr = &ap;
ti.err_no = 0;
ti.x_data = NULL;
ti.m_richloc = &rich_loc;
text_info ti (_(fmt), &ap, 0, NULL, &rich_loc);
pp_format (pp, &ti);
pp_output_formatted_text (pp);
@ -461,7 +455,6 @@ make_label_text_n (bool can_colorize, unsigned HOST_WIDE_INT n,
if (!can_colorize)
pp_show_color (pp) = false;
text_info ti;
rich_location rich_loc (line_table, UNKNOWN_LOCATION);
va_list ap;
@ -470,11 +463,7 @@ make_label_text_n (bool can_colorize, unsigned HOST_WIDE_INT n,
const char *fmt = ngettext (singular_fmt, plural_fmt, n);
ti.format_spec = fmt;
ti.args_ptr = &ap;
ti.err_no = 0;
ti.x_data = NULL;
ti.m_richloc = &rich_loc;
text_info ti (fmt, &ap, 0, NULL, &rich_loc);
pp_format (pp, &ti);
pp_output_formatted_text (pp);


@ -96,15 +96,10 @@ evdesc::event_desc::formatted_print (const char *fmt, ...) const
pp_show_color (pp) = m_colorize;
text_info ti;
rich_location rich_loc (line_table, UNKNOWN_LOCATION);
va_list ap;
va_start (ap, fmt);
ti.format_spec = _(fmt);
ti.args_ptr = &ap;
ti.err_no = 0;
ti.x_data = NULL;
ti.m_richloc = &rich_loc;
text_info ti (_(fmt), &ap, 0, nullptr, &rich_loc);
pp_format (pp, &ti);
pp_output_formatted_text (pp);
va_end (ap);


@ -256,8 +256,8 @@ public:
debug_diagnostic_context ()
{
diagnostic_initialize (this, 0);
show_line_numbers_p = true;
show_caret = true;
m_source_printing.show_line_numbers_p = true;
m_source_printing.enabled = true;
}
~debug_diagnostic_context ()
{


@ -1434,7 +1434,7 @@ afdo_calculate_branch_prob (bb_set *annotated_bb)
else
total_count += AFDO_EINFO (e)->get_count ();
}
if (num_unknown_succ == 0 && total_count > profile_count::zero ())
if (num_unknown_succ == 0 && total_count.nonzero_p ())
{
FOR_EACH_EDGE (e, ei, bb->succs)
e->probability
@ -1571,7 +1571,7 @@ afdo_annotate_cfg (const stmt_set &promoted_stmts)
DECL_SOURCE_LOCATION (current_function_decl));
afdo_source_profile->mark_annotated (cfun->function_start_locus);
afdo_source_profile->mark_annotated (cfun->function_end_locus);
if (max_count > profile_count::zero ())
if (max_count.nonzero_p ())
{
/* Calculate, propagate count and probability information on CFG. */
afdo_calculate_branch_prob (&annotated_bb);


@ -743,39 +743,22 @@ c_strlen (tree arg, int only_value, c_strlen_data *data, unsigned eltsize)
as needed. */
rtx
c_readstr (const char *str, scalar_int_mode mode,
c_readstr (const char *str, fixed_size_mode mode,
bool null_terminated_p/*=true*/)
{
HOST_WIDE_INT ch;
unsigned int i, j;
HOST_WIDE_INT tmp[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT];
auto_vec<target_unit, MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT> bytes;
gcc_assert (GET_MODE_CLASS (mode) == MODE_INT);
unsigned int len = (GET_MODE_PRECISION (mode) + HOST_BITS_PER_WIDE_INT - 1)
/ HOST_BITS_PER_WIDE_INT;
bytes.reserve (GET_MODE_SIZE (mode));
gcc_assert (len <= MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT);
for (i = 0; i < len; i++)
tmp[i] = 0;
ch = 1;
for (i = 0; i < GET_MODE_SIZE (mode); i++)
target_unit ch = 1;
for (unsigned int i = 0; i < GET_MODE_SIZE (mode); ++i)
{
j = i;
if (WORDS_BIG_ENDIAN)
j = GET_MODE_SIZE (mode) - i - 1;
if (BYTES_BIG_ENDIAN != WORDS_BIG_ENDIAN
&& GET_MODE_SIZE (mode) >= UNITS_PER_WORD)
j = j + UNITS_PER_WORD - 2 * (j % UNITS_PER_WORD) - 1;
j *= BITS_PER_UNIT;
if (ch || !null_terminated_p)
ch = (unsigned char) str[i];
tmp[j / HOST_BITS_PER_WIDE_INT] |= ch << (j % HOST_BITS_PER_WIDE_INT);
bytes.quick_push (ch);
}
wide_int c = wide_int::from_array (tmp, len, GET_MODE_PRECISION (mode));
return immed_wide_int_const (c, mode);
return native_decode_rtx (mode, bytes, 0);
}
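The rewritten c_readstr replaces the old hand-rolled word swizzling with "collect target bytes in order, then let native_decode_rtx handle byte order". As a rough model of the byte-collection loop (a hedged Python sketch, not the GCC API; target byte order is collapsed into `int.from_bytes` here, whereas the real code defers it to native_decode_rtx):

```python
def read_str_as_int(data, size, big_endian=False, null_terminated=True):
    """Model of c_readstr: collect `size` bytes from `data`,
    zero-padding once a NUL terminator has been seen."""
    out = bytearray()
    ch = 1                      # sentinel: "terminator not yet seen"
    for i in range(size):
        if ch or not null_terminated:
            ch = data[i]
        out.append(ch)          # ch stays 0 after the terminator
    return int.from_bytes(bytes(out), "big" if big_endian else "little")
```

The `ch = 1` sentinel is preserved from the original: once a NUL is read, `ch` stays 0 and the remaining bytes are zero-filled, exactly as in the loop above.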
/* Cast a target constant CST to target CHAR and if that value fits into
@ -3530,10 +3513,7 @@ builtin_memcpy_read_str (void *data, void *, HOST_WIDE_INT offset,
string but the caller guarantees it's large enough for MODE. */
const char *rep = (const char *) data;
/* The by-pieces infrastructure does not try to pick a vector mode
for memcpy expansion. */
return c_readstr (rep + offset, as_a <scalar_int_mode> (mode),
/*nul_terminated=*/false);
return c_readstr (rep + offset, mode, /*nul_terminated=*/false);
}
/* LEN specify length of the block of memcpy/memset operation.
@ -3994,9 +3974,7 @@ builtin_strncpy_read_str (void *data, void *, HOST_WIDE_INT offset,
if ((unsigned HOST_WIDE_INT) offset > strlen (str))
return const0_rtx;
/* The by-pieces infrastructure does not try to pick a vector mode
for strncpy expansion. */
return c_readstr (str + offset, as_a <scalar_int_mode> (mode));
return c_readstr (str + offset, mode);
}
/* Helper to check the sizes of sequences and the destination of calls
@ -4227,8 +4205,7 @@ builtin_memset_read_str (void *data, void *prev,
memset (p, *c, size);
/* Vector modes should be handled above. */
return c_readstr (p, as_a <scalar_int_mode> (mode));
return c_readstr (p, mode);
}
/* Callback routine for store_by_pieces. Return the RTL of a register
@ -4275,8 +4252,7 @@ builtin_memset_gen_str (void *data, void *prev,
p = XALLOCAVEC (char, size);
memset (p, 1, size);
/* Vector modes should be handled above. */
coeff = c_readstr (p, as_a <scalar_int_mode> (mode));
coeff = c_readstr (p, mode);
target = convert_to_mode (mode, (rtx) data, 1);
target = expand_mult (mode, target, coeff, NULL_RTX, 1);


@ -105,7 +105,7 @@ struct c_strlen_data
};
extern tree c_strlen (tree, int, c_strlen_data * = NULL, unsigned = 1);
extern rtx c_readstr (const char *, scalar_int_mode, bool = true);
extern rtx c_readstr (const char *, fixed_size_mode, bool = true);
extern void expand_builtin_setjmp_setup (rtx, rtx);
extern void expand_builtin_setjmp_receiver (rtx);
extern void expand_builtin_update_setjmp_buf (rtx);


@ -1,3 +1,24 @@
2023-10-15 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/111800
* c-warn.cc (match_case_to_enum_1): Assert w.get_precision ()
is smaller or equal to WIDE_INT_MAX_INL_PRECISION rather than
w.get_len () is smaller or equal to WIDE_INT_MAX_INL_ELTS.
2023-10-12 Jakub Jelinek <jakub@redhat.com>
PR c/102989
* c-warn.cc (match_case_to_enum_1): Use wi::to_wide just once instead
of 3 times, assert get_len () is smaller than WIDE_INT_MAX_INL_ELTS.
2023-10-02 David Malcolm <dmalcolm@redhat.com>
* c-common.cc (maybe_add_include_fixit): Update for renaming of
diagnostic_context's show_caret to m_source_printing.enabled.
* c-opts.cc (c_common_init_options): Update for renaming of
diagnostic_context's colorize_source_p to
m_source_printing.colorize_source_p.
2023-09-20 Jakub Jelinek <jakub@redhat.com>
PR c++/111392


@ -9569,7 +9569,7 @@ maybe_add_include_fixit (rich_location *richloc, const char *header,
richloc->add_fixit_insert_before (include_insert_loc, text);
free (text);
if (override_location && global_dc->show_caret)
if (override_location && global_dc->m_source_printing.enabled)
{
/* Replace the primary location with that of the insertion point for the
fix-it hint.


@ -272,7 +272,7 @@ c_common_init_options (unsigned int decoded_options_count,
if (c_dialect_cxx ())
set_std_cxx17 (/*ISO*/false);
global_dc->colorize_source_p = true;
global_dc->m_source_printing.colorize_source_p = true;
}
/* Handle switch SCODE with argument ARG. VALUE is true, unless no-


@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree type, tree label)
return;
char buf[WIDE_INT_PRINT_BUFFER_SIZE];
wide_int w = wi::to_wide (key);
gcc_assert (w.get_precision () <= WIDE_INT_MAX_INL_PRECISION);
if (tree_fits_uhwi_p (key))
print_dec (wi::to_wide (key), buf, UNSIGNED);
print_dec (w, buf, UNSIGNED);
else if (tree_fits_shwi_p (key))
print_dec (wi::to_wide (key), buf, SIGNED);
print_dec (w, buf, SIGNED);
else
print_hex (wi::to_wide (key), buf);
print_hex (w, buf);
if (TYPE_NAME (type) == NULL_TREE)
warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)),


@ -1,3 +1,17 @@
2023-10-17 Martin Uecker <uecker@tugraz.at>
PR c/111708
* c-decl.cc (grokdeclarator): Add error.
2023-10-03 David Malcolm <dmalcolm@redhat.com>
* c-objc-common.cc (c_tree_printer): Update for "m_" prefixes to
text_info fields.
2023-09-30 Eugene Rozenfeld <erozen@microsoft.com>
* Make-lang.in: Make create_fdas_for_cc1 target not .PHONY
2023-09-20 Jakub Jelinek <jakub@redhat.com>
* c-parser.cc (c_parser_postfix_expression_after_primary): Parse


@ -91,8 +91,6 @@ cc1$(exeext): $(C_OBJS) cc1-checksum.o $(BACKEND) $(LIBDEPS)
components_in_prev = "bfd opcodes binutils fixincludes gas gcc gmp mpfr mpc isl gold intl ld libbacktrace libcpp libcody libdecnumber libiberty libiberty-linker-plugin libiconv zlib lto-plugin libctf libsframe"
components_in_prev_target = "libstdc++-v3 libsanitizer libvtv libgcc libbacktrace libphobos zlib libgomp libatomic"
.PHONY: create_fdas_for_cc1
cc1.fda: create_fdas_for_cc1
$(PROFILE_MERGER) $(shell ls -ha cc1_*.fda) --output_file cc1.fda -gcov_version 2
@ -116,6 +114,8 @@ create_fdas_for_cc1: ../stage1-gcc/cc1$(exeext) ../prev-gcc/$(PERF_DATA)
$(CREATE_GCOV) -binary ../prev-gcc/cc1$(exeext) -gcov $$profile_name -profile $$perf_path -gcov_version 2; \
fi; \
done;
$(STAMP) $@
#
# Build hooks:


@ -8032,6 +8032,27 @@ grokdeclarator (const struct c_declarator *declarator,
TREE_THIS_VOLATILE (decl) = 1;
}
}
/* C99 6.2.2p7: It is invalid (compile-time undefined
behavior) to create an 'extern' declaration for a
function if there is a global declaration that is
'static' and the global declaration is not visible.
(If the static declaration _is_ currently visible,
the 'extern' declaration is taken to refer to that decl.) */
if (!initialized
&& TREE_PUBLIC (decl)
&& current_scope != file_scope)
{
tree global_decl = identifier_global_value (declarator->u.id.id);
tree visible_decl = lookup_name (declarator->u.id.id);
if (global_decl
&& global_decl != visible_decl
&& VAR_OR_FUNCTION_DECL_P (global_decl)
&& !TREE_PUBLIC (global_decl))
error_at (loc, "function previously declared %<static%> "
"redeclared %<extern%>");
}
}
else
{


@ -272,7 +272,7 @@ c_tree_printer (pretty_printer *pp, text_info *text, const char *spec,
if (*spec != 'v')
{
t = va_arg (*text->args_ptr, tree);
t = va_arg (*text->m_args_ptr, tree);
if (set_locus)
text->set_location (0, DECL_SOURCE_LOCATION (t),
SHOW_RANGE_WITH_CARET);
@ -316,7 +316,7 @@ c_tree_printer (pretty_printer *pp, text_info *text, const char *spec,
return true;
case 'v':
pp_c_cv_qualifiers (cpp, va_arg (*text->args_ptr, int), hash);
pp_c_cv_qualifiers (cpp, va_arg (*text->m_args_ptr, int), hash);
return true;
default:


@ -1291,7 +1291,7 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
cumulative_args_t args_so_far,
int reg_parm_stack_space,
rtx *old_stack_level,
poly_int64_pod *old_pending_adj,
poly_int64 *old_pending_adj,
bool *must_preallocate, int *ecf_flags,
bool *may_tailcall, bool call_from_thunk_p)
{
@ -2298,7 +2298,7 @@ load_register_parameters (struct arg_data *args, int num_actuals,
bytes that should be popped after the call. */
static bool
combine_pending_stack_adjustment_and_call (poly_int64_pod *adjustment_out,
combine_pending_stack_adjustment_and_call (poly_int64 *adjustment_out,
poly_int64 unadjusted_args_size,
struct args_size *args_size,
unsigned int preferred_unit_stack_boundary)


@ -468,7 +468,7 @@ control_dependences::control_dependences ()
bitmap_obstack_initialize (&m_bitmaps);
control_dependence_map.create (last_basic_block_for_fn (cfun));
control_dependence_map.quick_grow (last_basic_block_for_fn (cfun));
control_dependence_map.quick_grow_cleared (last_basic_block_for_fn (cfun));
for (int i = 0; i < last_basic_block_for_fn (cfun); ++i)
bitmap_initialize (&control_dependence_map[i], &m_bitmaps);
for (int i = 0; i < num_edges; ++i)


@ -693,6 +693,43 @@ compute_outgoing_frequencies (basic_block b)
}
}
/* Update the profile information for BB, which was created by splitting
an RTL block that had a non-final jump. */
static void
update_profile_for_new_sub_basic_block (basic_block bb)
{
edge e;
edge_iterator ei;
bool initialized_src = false, uninitialized_src = false;
bb->count = profile_count::zero ();
FOR_EACH_EDGE (e, ei, bb->preds)
{
if (e->count ().initialized_p ())
{
bb->count += e->count ();
initialized_src = true;
}
else
uninitialized_src = true;
}
/* When some edges are missing with read profile, this is
most likely because RTL expansion introduced a loop.
When the profile is guessed we may have a BB that is reachable
from an unlikely path as well as from a normal path.
TODO: We should handle loops created during BB expansion
correctly here. For now we assume all those loops cycle
precisely once. */
if (!initialized_src
|| (uninitialized_src
&& profile_status_for_fn (cfun) < PROFILE_GUESSED))
bb->count = profile_count::uninitialized ();
compute_outgoing_frequencies (bb);
}
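The profile-count propagation in update_profile_for_new_sub_basic_block reduces to a small decision rule. A hedged Python model, where `None` stands for profile_count::uninitialized and the enum mirrors the ordering of profile status values (ABSENT < GUESSED < READ):

```python
from enum import IntEnum

class Profile(IntEnum):
    ABSENT = 0
    GUESSED = 1
    READ = 2

def new_block_count(pred_edge_counts, status):
    """Sum initialized predecessor-edge counts into the new block's count;
    fall back to 'uninitialized' (None) when nothing is known, or when some
    edges are unknown and no profile was read or guessed."""
    known = [c for c in pred_edge_counts if c is not None]
    some_unknown = len(known) < len(pred_edge_counts)
    if not known or (some_unknown and status < Profile.GUESSED):
        return None  # profile_count::uninitialized ()
    return sum(known)
```

With a guessed or read profile, partially-known predecessors still yield a count (the sum of what is known), matching the TODO's "assume each loop cycles once" approximation.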
/* Assume that some pass has inserted labels or control flow
instructions within a basic block. Split basic blocks as needed
and create edges. */
@ -744,40 +781,15 @@ find_many_sub_basic_blocks (sbitmap blocks)
if (profile_status_for_fn (cfun) != PROFILE_ABSENT)
FOR_BB_BETWEEN (bb, min, max->next_bb, next_bb)
{
edge e;
edge_iterator ei;
if (STATE (bb) == BLOCK_ORIGINAL)
continue;
if (STATE (bb) == BLOCK_NEW)
{
bool initialized_src = false, uninitialized_src = false;
bb->count = profile_count::zero ();
FOR_EACH_EDGE (e, ei, bb->preds)
{
if (e->count ().initialized_p ())
{
bb->count += e->count ();
initialized_src = true;
}
else
uninitialized_src = true;
}
/* When some edges are missing with read profile, this is
most likely because RTL expansion introduced a loop.
When the profile is guessed we may have a BB that is reachable
from an unlikely path as well as from a normal path.
TODO: We should handle loops created during BB expansion
correctly here. For now we assume all those loops cycle
precisely once. */
if (!initialized_src
|| (uninitialized_src
&& profile_status_for_fn (cfun) < PROFILE_GUESSED))
bb->count = profile_count::uninitialized ();
update_profile_for_new_sub_basic_block (bb);
continue;
}
/* If nothing changed, there is no need to create new BBs. */
else if (EDGE_COUNT (bb->succs) == n_succs[bb->index])
/* If nothing changed, there is no need to create new BBs. */
if (EDGE_COUNT (bb->succs) == n_succs[bb->index])
{
/* In rare occasions RTL expansion might have mistakenly assigned
probabilities different from what is in CFG. This happens
@ -788,10 +800,33 @@ find_many_sub_basic_blocks (sbitmap blocks)
update_br_prob_note (bb);
continue;
}
compute_outgoing_frequencies (bb);
}
FOR_EACH_BB_FN (bb, cfun)
SET_STATE (bb, 0);
}
/* Like find_many_sub_basic_blocks, but look only within BB. */
void
find_sub_basic_blocks (basic_block bb)
{
basic_block end_bb = bb->next_bb;
find_bb_boundaries (bb);
if (bb->next_bb == end_bb)
return;
/* Re-scan and wire in all edges. This expects simple (conditional)
jumps at the end of each new basic block. */
make_edges (bb, end_bb->prev_bb, 1);
/* Update branch probabilities. Expect only (un)conditional jumps
to be created with only the forward edges. */
if (profile_status_for_fn (cfun) != PROFILE_ABSENT)
{
compute_outgoing_frequencies (bb);
for (bb = bb->next_bb; bb != end_bb; bb = bb->next_bb)
update_profile_for_new_sub_basic_block (bb);
}
}


@ -24,5 +24,6 @@ extern bool inside_basic_block_p (const rtx_insn *);
extern bool control_flow_insn_p (const rtx_insn *);
extern void rtl_make_eh_edge (sbitmap, basic_block, rtx);
extern void find_many_sub_basic_blocks (sbitmap);
extern void find_sub_basic_blocks (basic_block);
#endif /* GCC_CFGBUILD_H */


@ -1895,33 +1895,38 @@ void
record_niter_bound (class loop *loop, const widest_int &i_bound,
bool realistic, bool upper)
{
if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
return;
bound_wide_int bound = bound_wide_int::from (i_bound, SIGNED);
/* Update the bounds only when there is no previous estimation, or when the
current estimation is smaller. */
if (upper
&& (!loop->any_upper_bound
|| wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
|| wi::ltu_p (bound, loop->nb_iterations_upper_bound)))
{
loop->any_upper_bound = true;
loop->nb_iterations_upper_bound = i_bound;
loop->nb_iterations_upper_bound = bound;
if (!loop->any_likely_upper_bound)
{
loop->any_likely_upper_bound = true;
loop->nb_iterations_likely_upper_bound = i_bound;
loop->nb_iterations_likely_upper_bound = bound;
}
}
if (realistic
&& (!loop->any_estimate
|| wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
|| wi::ltu_p (bound, loop->nb_iterations_estimate)))
{
loop->any_estimate = true;
loop->nb_iterations_estimate = i_bound;
loop->nb_iterations_estimate = bound;
}
if (!realistic
&& (!loop->any_likely_upper_bound
|| wi::ltu_p (i_bound, loop->nb_iterations_likely_upper_bound)))
|| wi::ltu_p (bound, loop->nb_iterations_likely_upper_bound)))
{
loop->any_likely_upper_bound = true;
loop->nb_iterations_likely_upper_bound = i_bound;
loop->nb_iterations_likely_upper_bound = bound;
}
/* If an upper bound is smaller than the realistic estimate of the
@ -2018,7 +2023,7 @@ get_estimated_loop_iterations (class loop *loop, widest_int *nit)
return false;
}
*nit = loop->nb_iterations_estimate;
*nit = widest_int::from (loop->nb_iterations_estimate, SIGNED);
return true;
}
@ -2032,7 +2037,7 @@ get_max_loop_iterations (const class loop *loop, widest_int *nit)
if (!loop->any_upper_bound)
return false;
*nit = loop->nb_iterations_upper_bound;
*nit = widest_int::from (loop->nb_iterations_upper_bound, SIGNED);
return true;
}
@ -2066,7 +2071,7 @@ get_likely_max_loop_iterations (class loop *loop, widest_int *nit)
if (!loop->any_likely_upper_bound)
return false;
*nit = loop->nb_iterations_likely_upper_bound;
*nit = widest_int::from (loop->nb_iterations_likely_upper_bound, SIGNED);
return true;
}

View File

@ -44,6 +44,9 @@ enum iv_extend_code
IV_UNKNOWN_EXTEND
};
typedef generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION> >
bound_wide_int;
/* The structure describing a bound on number of iterations of a loop. */
class GTY ((chain_next ("%h.next"))) nb_iter_bound {
@ -58,7 +61,7 @@ public:
overflows (as MAX + 1 is sometimes produced as the estimate on number
of executions of STMT).
b) it is consistent with the result of number_of_iterations_exit. */
widest_int bound;
bound_wide_int bound;
/* True if, after executing the statement BOUND + 1 times, we will
leave the loop; that is, all the statements after it are executed at most
@ -161,14 +164,14 @@ public:
/* An integer guaranteed to be greater or equal to nb_iterations. Only
valid if any_upper_bound is true. */
widest_int nb_iterations_upper_bound;
bound_wide_int nb_iterations_upper_bound;
widest_int nb_iterations_likely_upper_bound;
bound_wide_int nb_iterations_likely_upper_bound;
/* An integer giving an estimate on nb_iterations. Unlike
nb_iterations_upper_bound, there is no guarantee that it is at least
nb_iterations. */
widest_int nb_iterations_estimate;
bound_wide_int nb_iterations_estimate;
/* If > 0, an integer, where the user asserted that for any
I in [ 0, nb_iterations ) and for any J in

View File

@ -11923,7 +11923,7 @@ simplify_compare_const (enum rtx_code code, machine_mode mode,
/* (unsigned) < 0x80000000 is equivalent to >= 0. */
else if (is_a <scalar_int_mode> (mode, &int_mode)
&& GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
&& ((unsigned HOST_WIDE_INT) const_op
&& (((unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode))
== HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
{
const_op = 0;
@ -11962,7 +11962,7 @@ simplify_compare_const (enum rtx_code code, machine_mode mode,
/* (unsigned) >= 0x80000000 is equivalent to < 0. */
else if (is_a <scalar_int_mode> (mode, &int_mode)
&& GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
&& ((unsigned HOST_WIDE_INT) const_op
&& (((unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode))
== HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
{
const_op = 0;
@ -12003,14 +12003,15 @@ simplify_compare_const (enum rtx_code code, machine_mode mode,
&& !MEM_VOLATILE_P (op0)
/* The optimization only makes sense for constants which are big enough
so that we have a chance to chop off something at all. */
&& (unsigned HOST_WIDE_INT) const_op > 0xff
/* Bail out, if the constant does not fit into INT_MODE. */
&& (unsigned HOST_WIDE_INT) const_op
< ((HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1) << 1) - 1)
&& ((unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode)) > 0xff
/* Ensure that we do not overflow during normalization. */
&& (code != GTU || (unsigned HOST_WIDE_INT) const_op < HOST_WIDE_INT_M1U))
&& (code != GTU
|| ((unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode))
< HOST_WIDE_INT_M1U)
&& trunc_int_for_mode (const_op, int_mode) == const_op)
{
unsigned HOST_WIDE_INT n = (unsigned HOST_WIDE_INT) const_op;
unsigned HOST_WIDE_INT n
= (unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode);
enum rtx_code adjusted_code;
/* Normalize code to either LEU or GEU. */
@ -12051,15 +12052,15 @@ simplify_compare_const (enum rtx_code code, machine_mode mode,
HOST_WIDE_INT_PRINT_HEX ") to (MEM %s "
HOST_WIDE_INT_PRINT_HEX ").\n", GET_MODE_NAME (int_mode),
GET_MODE_NAME (narrow_mode_iter), GET_RTX_NAME (code),
(unsigned HOST_WIDE_INT)const_op, GET_RTX_NAME (adjusted_code),
n);
(unsigned HOST_WIDE_INT) const_op & GET_MODE_MASK (int_mode),
GET_RTX_NAME (adjusted_code), n);
}
poly_int64 offset = (BYTES_BIG_ENDIAN
? 0
: (GET_MODE_SIZE (int_mode)
- GET_MODE_SIZE (narrow_mode_iter)));
*pop0 = adjust_address_nv (op0, narrow_mode_iter, offset);
*pop1 = GEN_INT (n);
*pop1 = gen_int_mode (n, narrow_mode_iter);
return adjusted_code;
}
}
@ -13410,27 +13411,43 @@ record_dead_and_set_regs_1 (rtx dest, const_rtx setter, void *data)
if (REG_P (dest))
{
/* If we are setting the whole register, we know its value. Otherwise
show that we don't know the value. We can handle a SUBREG if it's
the low part, but we must be careful with paradoxical SUBREGs on
RISC architectures because we cannot strip e.g. an extension around
a load and record the naked load since the RTL middle-end considers
that the upper bits are defined according to LOAD_EXTEND_OP. */
/* If we are setting the whole register, we know its value. */
if (GET_CODE (setter) == SET && dest == SET_DEST (setter))
record_value_for_reg (dest, record_dead_insn, SET_SRC (setter));
/* We can handle a SUBREG if it's the low part, but we must be
careful with paradoxical SUBREGs on RISC architectures because
we cannot strip e.g. an extension around a load and record the
naked load since the RTL middle-end considers that the upper bits
are defined according to LOAD_EXTEND_OP. */
else if (GET_CODE (setter) == SET
&& GET_CODE (SET_DEST (setter)) == SUBREG
&& SUBREG_REG (SET_DEST (setter)) == dest
&& known_le (GET_MODE_PRECISION (GET_MODE (dest)),
BITS_PER_WORD)
&& subreg_lowpart_p (SET_DEST (setter)))
record_value_for_reg (dest, record_dead_insn,
WORD_REGISTER_OPERATIONS
&& word_register_operation_p (SET_SRC (setter))
&& paradoxical_subreg_p (SET_DEST (setter))
? SET_SRC (setter)
: gen_lowpart (GET_MODE (dest),
SET_SRC (setter)));
{
if (WORD_REGISTER_OPERATIONS
&& word_register_operation_p (SET_SRC (setter))
&& paradoxical_subreg_p (SET_DEST (setter)))
record_value_for_reg (dest, record_dead_insn, SET_SRC (setter));
else if (!partial_subreg_p (SET_DEST (setter)))
record_value_for_reg (dest, record_dead_insn,
gen_lowpart (GET_MODE (dest),
SET_SRC (setter)));
else
{
record_value_for_reg (dest, record_dead_insn,
gen_lowpart (GET_MODE (dest),
SET_SRC (setter)));
unsigned HOST_WIDE_INT mask;
reg_stat_type *rsp = &reg_stat[REGNO (dest)];
mask = GET_MODE_MASK (GET_MODE (SET_DEST (setter)));
rsp->last_set_nonzero_bits |= ~mask;
rsp->last_set_sign_bit_copies = 1;
}
}
/* Otherwise show that we don't know the value. */
else
record_value_for_reg (dest, record_dead_insn, NULL_RTX);
}

View File

@ -1252,6 +1252,10 @@ fcprop-registers
Common Var(flag_cprop_registers) Optimization
Perform a register copy-propagation optimization pass.
ffold-mem-offsets
Target Bool Var(flag_fold_mem_offsets) Init(1)
Fold instructions calculating memory offsets to the memory access instruction if possible.
fcrossjumping
Common Var(flag_crossjumping) Optimization
Perform cross-jumping optimization.

View File

@ -608,6 +608,20 @@ get_intel_cpu (struct __processor_model *cpu_model,
cpu_model->__cpu_type = INTEL_COREI7;
cpu_model->__cpu_subtype = INTEL_COREI7_ARROWLAKE_S;
break;
case 0xdd:
/* Clearwater Forest. */
cpu = "clearwaterforest";
CHECK___builtin_cpu_is ("clearwaterforest");
cpu_model->__cpu_type = INTEL_CLEARWATERFOREST;
break;
case 0xcc:
/* Panther Lake. */
cpu = "pantherlake";
CHECK___builtin_cpu_is ("corei7");
CHECK___builtin_cpu_is ("pantherlake");
cpu_model->__cpu_type = INTEL_COREI7;
cpu_model->__cpu_subtype = INTEL_COREI7_PANTHERLAKE;
break;
case 0x17:
case 0x1d:
/* Penryn. */
@ -678,6 +692,7 @@ get_available_features (struct __processor_model *cpu_model,
#define XSTATE_HI_ZMM 0x80
#define XSTATE_TILECFG 0x20000
#define XSTATE_TILEDATA 0x40000
#define XSTATE_APX_F 0x80000
#define XCR_AVX_ENABLED_MASK \
(XSTATE_SSE | XSTATE_YMM)
@ -685,11 +700,13 @@ get_available_features (struct __processor_model *cpu_model,
(XSTATE_SSE | XSTATE_YMM | XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM)
#define XCR_AMX_ENABLED_MASK \
(XSTATE_TILECFG | XSTATE_TILEDATA)
#define XCR_APX_F_ENABLED_MASK XSTATE_APX_F
/* Check if AVX and AVX512 are usable. */
/* Check if AVX, AVX512 and APX are usable. */
int avx_usable = 0;
int avx512_usable = 0;
int amx_usable = 0;
int apx_usable = 0;
/* Check if KL is usable. */
int has_kl = 0;
if ((ecx & bit_OSXSAVE))
@ -709,6 +726,8 @@ get_available_features (struct __processor_model *cpu_model,
}
amx_usable = ((xcrlow & XCR_AMX_ENABLED_MASK)
== XCR_AMX_ENABLED_MASK);
apx_usable = ((xcrlow & XCR_APX_F_ENABLED_MASK)
== XCR_APX_F_ENABLED_MASK);
}
#define set_feature(f) \
@ -833,6 +852,8 @@ get_available_features (struct __processor_model *cpu_model,
set_feature (FEATURE_IBT);
if (edx & bit_UINTR)
set_feature (FEATURE_UINTR);
if (edx & bit_USER_MSR)
set_feature (FEATURE_USER_MSR);
if (amx_usable)
{
if (edx & bit_AMX_TILE)
@ -922,6 +943,11 @@ get_available_features (struct __processor_model *cpu_model,
if (edx & bit_AMX_COMPLEX)
set_feature (FEATURE_AMX_COMPLEX);
}
if (apx_usable)
{
if (edx & bit_APX_F)
set_feature (FEATURE_APX_F);
}
}
}

View File

@ -123,6 +123,9 @@ along with GCC; see the file COPYING3. If not see
#define OPTION_MASK_ISA2_SM3_SET OPTION_MASK_ISA2_SM3
#define OPTION_MASK_ISA2_SHA512_SET OPTION_MASK_ISA2_SHA512
#define OPTION_MASK_ISA2_SM4_SET OPTION_MASK_ISA2_SM4
#define OPTION_MASK_ISA2_APX_F_SET OPTION_MASK_ISA2_APX_F
#define OPTION_MASK_ISA2_EVEX512_SET OPTION_MASK_ISA2_EVEX512
#define OPTION_MASK_ISA2_USER_MSR_SET OPTION_MASK_ISA2_USER_MSR
/* SSE4 includes both SSE4.1 and SSE4.2. -msse4 should be the same
as -msse4.2. */
@ -309,6 +312,9 @@ along with GCC; see the file COPYING3. If not see
#define OPTION_MASK_ISA2_SM3_UNSET OPTION_MASK_ISA2_SM3
#define OPTION_MASK_ISA2_SHA512_UNSET OPTION_MASK_ISA2_SHA512
#define OPTION_MASK_ISA2_SM4_UNSET OPTION_MASK_ISA2_SM4
#define OPTION_MASK_ISA2_APX_F_UNSET OPTION_MASK_ISA2_APX_F
#define OPTION_MASK_ISA2_EVEX512_UNSET OPTION_MASK_ISA2_EVEX512
#define OPTION_MASK_ISA2_USER_MSR_UNSET OPTION_MASK_ISA2_USER_MSR
/* SSE4 includes both SSE4.1 and SSE4.2. -mno-sse4 should the same
as -mno-sse4.1. */
@ -1341,6 +1347,47 @@ ix86_handle_option (struct gcc_options *opts,
}
return true;
case OPT_mapxf:
if (value)
{
opts->x_ix86_isa_flags2 |= OPTION_MASK_ISA2_APX_F_SET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_APX_F_SET;
opts->x_ix86_apx_features = apx_all;
}
else
{
opts->x_ix86_isa_flags2 &= ~OPTION_MASK_ISA2_APX_F_UNSET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_APX_F_UNSET;
opts->x_ix86_apx_features = apx_none;
}
return true;
case OPT_mevex512:
if (value)
{
opts->x_ix86_isa_flags2 |= OPTION_MASK_ISA2_EVEX512_SET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_EVEX512_SET;
}
else
{
opts->x_ix86_isa_flags2 &= ~OPTION_MASK_ISA2_EVEX512_UNSET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_EVEX512_UNSET;
}
return true;
case OPT_musermsr:
if (value)
{
opts->x_ix86_isa_flags2 |= OPTION_MASK_ISA2_USER_MSR_SET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_USER_MSR_SET;
}
else
{
opts->x_ix86_isa_flags2 &= ~OPTION_MASK_ISA2_USER_MSR_UNSET;
opts->x_ix86_isa_flags2_explicit |= OPTION_MASK_ISA2_USER_MSR_UNSET;
}
return true;
case OPT_mfma:
if (value)
{
@ -2030,6 +2077,7 @@ const char *const processor_names[] =
"tremont",
"sierraforest",
"grandridge",
"clearwaterforest",
"knl",
"knm",
"skylake",
@ -2047,6 +2095,7 @@ const char *const processor_names[] =
"graniterapids-d",
"arrowlake",
"arrowlake-s",
"pantherlake",
"intel",
"lujiazui",
"geode",
@ -2179,6 +2228,8 @@ const pta processor_alias_table[] =
M_CPU_SUBTYPE (INTEL_COREI7_ARROWLAKE_S), P_PROC_AVX2},
{"lunarlake", PROCESSOR_ARROWLAKE_S, CPU_HASWELL, PTA_ARROWLAKE_S,
M_CPU_SUBTYPE (INTEL_COREI7_ARROWLAKE_S), P_PROC_AVX2},
{"pantherlake", PROCESSOR_PANTHERLAKE, CPU_HASWELL, PTA_PANTHERLAKE,
M_CPU_SUBTYPE (INTEL_COREI7_PANTHERLAKE), P_PROC_AVX2},
{"bonnell", PROCESSOR_BONNELL, CPU_ATOM, PTA_BONNELL,
M_CPU_TYPE (INTEL_BONNELL), P_PROC_SSSE3},
{"atom", PROCESSOR_BONNELL, CPU_ATOM, PTA_BONNELL,
@ -2199,6 +2250,8 @@ const pta processor_alias_table[] =
M_CPU_SUBTYPE (INTEL_SIERRAFOREST), P_PROC_AVX2},
{"grandridge", PROCESSOR_GRANDRIDGE, CPU_HASWELL, PTA_GRANDRIDGE,
M_CPU_TYPE (INTEL_GRANDRIDGE), P_PROC_AVX2},
{"clearwaterforest", PROCESSOR_CLEARWATERFOREST, CPU_HASWELL,
PTA_CLEARWATERFOREST, M_CPU_TYPE (INTEL_CLEARWATERFOREST), P_PROC_AVX2},
{"knl", PROCESSOR_KNL, CPU_SLM, PTA_KNL,
M_CPU_TYPE (INTEL_KNL), P_PROC_AVX512F},
{"knm", PROCESSOR_KNM, CPU_SLM, PTA_KNM,

View File

@ -62,6 +62,7 @@ enum processor_types
ZHAOXIN_FAM7H,
INTEL_SIERRAFOREST,
INTEL_GRANDRIDGE,
INTEL_CLEARWATERFOREST,
CPU_TYPE_MAX,
BUILTIN_CPU_TYPE_MAX = CPU_TYPE_MAX
};
@ -101,6 +102,7 @@ enum processor_subtypes
INTEL_COREI7_GRANITERAPIDS_D,
INTEL_COREI7_ARROWLAKE,
INTEL_COREI7_ARROWLAKE_S,
INTEL_COREI7_PANTHERLAKE,
CPU_SUBTYPE_MAX
};
@ -261,6 +263,8 @@ enum processor_features
FEATURE_SM3,
FEATURE_SHA512,
FEATURE_SM4,
FEATURE_APX_F,
FEATURE_USER_MSR,
CPU_FEATURE_MAX
};

View File

@ -191,4 +191,6 @@ ISA_NAMES_TABLE_START
ISA_NAMES_TABLE_ENTRY("sm3", FEATURE_SM3, P_NONE, "-msm3")
ISA_NAMES_TABLE_ENTRY("sha512", FEATURE_SHA512, P_NONE, "-msha512")
ISA_NAMES_TABLE_ENTRY("sm4", FEATURE_SM4, P_NONE, "-msm4")
ISA_NAMES_TABLE_ENTRY("apxf", FEATURE_APX_F, P_NONE, "-mapxf")
ISA_NAMES_TABLE_ENTRY("usermsr", FEATURE_USER_MSR, P_NONE, "-musermsr")
ISA_NAMES_TABLE_END

View File

@ -310,6 +310,9 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
{"svnapot", ISA_SPEC_CLASS_NONE, 1, 0},
{"svpbmt", ISA_SPEC_CLASS_NONE, 1, 0},
{"xcvmac", ISA_SPEC_CLASS_NONE, 1, 0},
{"xcvalu", ISA_SPEC_CLASS_NONE, 1, 0},
{"xtheadba", ISA_SPEC_CLASS_NONE, 1, 0},
{"xtheadbb", ISA_SPEC_CLASS_NONE, 1, 0},
{"xtheadbs", ISA_SPEC_CLASS_NONE, 1, 0},
@ -1036,6 +1039,41 @@ riscv_subset_list::parse_std_ext (const char *p)
return p;
}
/* Parsing function for a single standard extension.
Return Value:
Points to the end of extensions.
Arguments:
`p`: Current parsing position. */
const char *
riscv_subset_list::parse_single_std_ext (const char *p)
{
if (*p == 'x' || *p == 's' || *p == 'z')
{
error_at (m_loc,
"%<-march=%s%>: Not single-letter extension. "
"%<%c%>",
m_arch, *p);
return nullptr;
}
unsigned major_version = 0;
unsigned minor_version = 0;
bool explicit_version_p = false;
char subset[2] = {0, 0};
subset[0] = *p;
p++;
p = parsing_subset_version (subset, p, &major_version, &minor_version,
/* std_ext_p= */ true, &explicit_version_p);
add (subset, major_version, minor_version, explicit_version_p, false);
return p;
}
/* Check any implied extensions for EXT. */
void
@ -1138,6 +1176,105 @@ riscv_subset_list::handle_combine_ext ()
}
}
/* Parsing function for multi-letter extensions.
Return Value:
Points to the end of extensions.
Arguments:
`p`: Current parsing position.
`ext_type`: What kind of extensions, 's', 'z' or 'x'.
`ext_type_str`: Full name for kind of extension. */
const char *
riscv_subset_list::parse_single_multiletter_ext (const char *p,
const char *ext_type,
const char *ext_type_str)
{
unsigned major_version = 0;
unsigned minor_version = 0;
size_t ext_type_len = strlen (ext_type);
if (strncmp (p, ext_type, ext_type_len) != 0)
return NULL;
char *subset = xstrdup (p);
const char *end_of_version;
bool explicit_version_p = false;
char *ext;
char backup;
size_t len = strlen (p);
size_t end_of_version_pos, i;
bool found_any_number = false;
bool found_minor_version = false;
end_of_version_pos = len;
/* Find the beginning of the version string. */
for (i = len - 1; i > 0; --i)
{
if (ISDIGIT (subset[i]))
{
found_any_number = true;
continue;
}
/* Might be the version separator, but we need to check one more char;
we only allow <major>p<minor>, so we can stop parsing if we find
any more `p`s.
if (subset[i] == 'p' &&
!found_minor_version &&
found_any_number && ISDIGIT (subset[i-1]))
{
found_minor_version = true;
continue;
}
end_of_version_pos = i + 1;
break;
}
backup = subset[end_of_version_pos];
subset[end_of_version_pos] = '\0';
ext = xstrdup (subset);
subset[end_of_version_pos] = backup;
end_of_version
= parsing_subset_version (ext, subset + end_of_version_pos, &major_version,
&minor_version, /* std_ext_p= */ false,
&explicit_version_p);
free (ext);
if (end_of_version == NULL)
{
free (subset);
return NULL;
}
subset[end_of_version_pos] = '\0';
if (strlen (subset) == 1)
{
error_at (m_loc, "%<-march=%s%>: name of %s must be more than 1 letter",
m_arch, ext_type_str);
free (subset);
return NULL;
}
add (subset, major_version, minor_version, explicit_version_p, false);
p += end_of_version - subset;
free (subset);
if (*p != '\0' && *p != '_')
{
error_at (m_loc, "%<-march=%s%>: %s must separate with %<_%>",
m_arch, ext_type_str);
return NULL;
}
return p;
}
/* Parsing function for multi-letter extensions.
Return Value:
@ -1250,6 +1387,30 @@ riscv_subset_list::parse_multiletter_ext (const char *p,
return p;
}
/* Parsing function for a single single-letter or multi-letter extension.
Return Value:
Points to the end of extensions.
Arguments:
`p`: Current parsing position. */
const char *
riscv_subset_list::parse_single_ext (const char *p)
{
switch (p[0])
{
case 'x':
return parse_single_multiletter_ext (p, "x", "non-standard extension");
case 'z':
return parse_single_multiletter_ext (p, "z", "sub-extension");
case 's':
return parse_single_multiletter_ext (p, "s", "supervisor extension");
default:
return parse_single_std_ext (p);
}
}
/* Parsing arch string to subset list, return NULL if parsing failed. */
riscv_subset_list *
@ -1342,6 +1503,26 @@ fail:
return NULL;
}
/* Clone whole subset list. */
riscv_subset_list *
riscv_subset_list::clone () const
{
riscv_subset_list *new_list = new riscv_subset_list (m_arch, m_loc);
for (riscv_subset_t *itr = m_head; itr != NULL; itr = itr->next)
new_list->add (itr->name.c_str (), itr->major_version, itr->minor_version,
itr->explicit_version_p, true);
new_list->m_xlen = m_xlen;
return new_list;
}
void
riscv_subset_list::set_loc (location_t loc)
{
m_loc = loc;
}
/* Return the current arch string. */
std::string
@ -1480,6 +1661,9 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
{"ztso", &gcc_options::x_riscv_ztso_subext, MASK_ZTSO},
{"xcvmac", &gcc_options::x_riscv_xcv_subext, MASK_XCVMAC},
{"xcvalu", &gcc_options::x_riscv_xcv_subext, MASK_XCVALU},
{"xtheadba", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADBA},
{"xtheadbb", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADBB},
{"xtheadbs", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADBS},
@ -1498,6 +1682,37 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
{NULL, NULL, 0}
};
/* Apply SUBSET_LIST to OPTS if OPTS is not null; also set CURRENT_SUBSET_LIST
to SUBSET_LIST. Note this WON'T delete the old CURRENT_SUBSET_LIST. */
void
riscv_set_arch_by_subset_list (riscv_subset_list *subset_list,
struct gcc_options *opts)
{
if (opts)
{
const riscv_ext_flag_table_t *arch_ext_flag_tab;
/* Clean up target flags before we set them. */
for (arch_ext_flag_tab = &riscv_ext_flag_table[0]; arch_ext_flag_tab->ext;
++arch_ext_flag_tab)
opts->*arch_ext_flag_tab->var_ref &= ~arch_ext_flag_tab->mask;
if (subset_list->xlen () == 32)
opts->x_target_flags &= ~MASK_64BIT;
else if (subset_list->xlen () == 64)
opts->x_target_flags |= MASK_64BIT;
for (arch_ext_flag_tab = &riscv_ext_flag_table[0]; arch_ext_flag_tab->ext;
++arch_ext_flag_tab)
{
if (subset_list->lookup (arch_ext_flag_tab->ext))
opts->*arch_ext_flag_tab->var_ref |= arch_ext_flag_tab->mask;
}
}
current_subset_list = subset_list;
}
/* Parse a RISC-V ISA string into an option mask. Must clear or set all arch
dependent mask bits, in case more than one -march string is passed. */

View File

@ -436,18 +436,20 @@ i[34567]86-*-* | x86_64-*-*)
avx512vbmi2vlintrin.h avx512vnniintrin.h
avx512vnnivlintrin.h vaesintrin.h vpclmulqdqintrin.h
avx512vpopcntdqvlintrin.h avx512bitalgintrin.h
pconfigintrin.h wbnoinvdintrin.h movdirintrin.h
waitpkgintrin.h cldemoteintrin.h avx512bf16vlintrin.h
avx512bf16intrin.h enqcmdintrin.h serializeintrin.h
avx512vp2intersectintrin.h avx512vp2intersectvlintrin.h
tsxldtrkintrin.h amxtileintrin.h amxint8intrin.h
amxbf16intrin.h x86gprintrin.h uintrintrin.h
hresetintrin.h keylockerintrin.h avxvnniintrin.h
mwaitintrin.h avx512fp16intrin.h avx512fp16vlintrin.h
avxifmaintrin.h avxvnniint8intrin.h avxneconvertintrin.h
avx512bitalgvlintrin.h pconfigintrin.h wbnoinvdintrin.h
movdirintrin.h waitpkgintrin.h cldemoteintrin.h
avx512bf16vlintrin.h avx512bf16intrin.h enqcmdintrin.h
serializeintrin.h avx512vp2intersectintrin.h
avx512vp2intersectvlintrin.h tsxldtrkintrin.h
amxtileintrin.h amxint8intrin.h amxbf16intrin.h
x86gprintrin.h uintrintrin.h hresetintrin.h
keylockerintrin.h avxvnniintrin.h mwaitintrin.h
avx512fp16intrin.h avx512fp16vlintrin.h avxifmaintrin.h
avxvnniint8intrin.h avxneconvertintrin.h
cmpccxaddintrin.h amxfp16intrin.h prfchiintrin.h
raointintrin.h amxcomplexintrin.h avxvnniint16intrin.h
sm3intrin.h sha512intrin.h sm4intrin.h"
sm3intrin.h sha512intrin.h sm4intrin.h
usermsrintrin.h"
;;
ia64-*-*)
extra_headers=ia64intrin.h
@ -706,7 +708,7 @@ skylake goldmont goldmont-plus tremont cascadelake tigerlake cooperlake \
sapphirerapids alderlake rocketlake eden-x2 nano nano-1000 nano-2000 nano-3000 \
nano-x2 eden-x4 nano-x4 lujiazui x86-64 x86-64-v2 x86-64-v3 x86-64-v4 \
sierraforest graniterapids graniterapids-d grandridge arrowlake arrowlake-s \
native"
clearwaterforest pantherlake native"
# Additional x86 processors supported by --with-cpu=. Each processor
# MUST be separated by exactly one space.
@ -2524,7 +2526,7 @@ riscv*-*-freebsd*)
loongarch*-*-linux*)
tm_file="elfos.h gnu-user.h linux.h linux-android.h glibc-stdint.h ${tm_file}"
tm_file="${tm_file} loongarch/gnu-user.h loongarch/linux.h"
tm_file="${tm_file} loongarch/gnu-user.h loongarch/linux.h loongarch/loongarch-driver.h"
extra_options="${extra_options} linux-android.opt"
tmake_file="${tmake_file} loongarch/t-multilib loongarch/t-linux"
gnu_ld=yes
@ -2537,7 +2539,7 @@ loongarch*-*-linux*)
loongarch*-*-elf*)
tm_file="elfos.h newlib-stdint.h ${tm_file}"
tm_file="${tm_file} loongarch/elf.h loongarch/linux.h"
tm_file="${tm_file} loongarch/elf.h loongarch/linux.h loongarch/loongarch-driver.h"
tmake_file="${tmake_file} loongarch/t-multilib loongarch/t-linux"
gnu_ld=yes
gas=yes

View File

@ -604,6 +604,12 @@
#endif
/* Define if your macOS assembler supports .build_version directives */
#ifndef USED_FOR_TARGET
#undef HAVE_AS_MACOS_BUILD_VERSION
#endif
/* Define if the assembler understands -march=rv*_zifencei. */
#ifndef USED_FOR_TARGET
#undef HAVE_AS_MARCH_ZIFENCEI

View File

@ -82,6 +82,7 @@ aarch64_update_cpp_builtins (cpp_reader *pfile)
{
aarch64_def_or_undef (flag_unsafe_math_optimizations, "__ARM_FP_FAST", pfile);
cpp_undef (pfile, "__ARM_ARCH");
builtin_define_with_int_value ("__ARM_ARCH", AARCH64_ISA_V9A ? 9 : 8);
builtin_define_with_int_value ("__ARM_SIZEOF_MINIMAL_ENUM",

View File

@ -182,6 +182,8 @@ AARCH64_CORE("cortex-x2", cortexx2, cortexa57, V9A, (SVE2_BITPERM, MEMTAG, I8M
AARCH64_CORE("cortex-x3", cortexx3, cortexa57, V9A, (SVE2_BITPERM, MEMTAG, I8MM, BF16), neoversen2, 0x41, 0xd4e, -1)
AARCH64_CORE("cortex-x4", cortexx4, cortexa57, V9_2A, (SVE2_BITPERM, MEMTAG, PROFILE), neoversen2, 0x41, 0xd81, -1)
AARCH64_CORE("neoverse-n2", neoversen2, cortexa57, V9A, (I8MM, BF16, SVE2_BITPERM, RNG, MEMTAG, PROFILE), neoversen2, 0x41, 0xd49, -1)
AARCH64_CORE("neoverse-v2", neoversev2, cortexa57, V9A, (I8MM, BF16, SVE2_BITPERM, RNG, MEMTAG, PROFILE), neoversev2, 0x41, 0xd4f, -1)

View File

@ -108,20 +108,18 @@ enum aarch64_key_type {
AARCH64_KEY_B
};
/* Load pair policy type. */
enum aarch64_ldp_policy {
LDP_POLICY_DEFAULT,
LDP_POLICY_ALWAYS,
LDP_POLICY_NEVER,
LDP_POLICY_ALIGNED
};
/* Store pair policy type. */
enum aarch64_stp_policy {
STP_POLICY_DEFAULT,
STP_POLICY_ALWAYS,
STP_POLICY_NEVER,
STP_POLICY_ALIGNED
/* An enum specifying how to handle load and store pairs using
a fine-grained policy:
- LDP_STP_POLICY_DEFAULT: Use the policy defined in the tuning structure.
- LDP_STP_POLICY_ALIGNED: Emit ldp/stp if the source pointer is aligned
to at least double the alignment of the type.
- LDP_STP_POLICY_ALWAYS: Emit ldp/stp regardless of alignment.
- LDP_STP_POLICY_NEVER: Do not emit ldp/stp. */
enum aarch64_ldp_stp_policy {
AARCH64_LDP_STP_POLICY_DEFAULT,
AARCH64_LDP_STP_POLICY_ALIGNED,
AARCH64_LDP_STP_POLICY_ALWAYS,
AARCH64_LDP_STP_POLICY_NEVER
};
#endif

View File

@ -568,30 +568,9 @@ struct tune_params
/* Place prefetch struct pointer at the end to enable type checking
errors when tune_params misses elements (e.g., from erroneous merges). */
const struct cpu_prefetch_tune *prefetch;
/* An enum specifying how to handle load pairs using a fine-grained policy:
- LDP_POLICY_ALIGNED: Emit ldp if the source pointer is aligned
to at least double the alignment of the type.
- LDP_POLICY_ALWAYS: Emit ldp regardless of alignment.
- LDP_POLICY_NEVER: Do not emit ldp. */
enum aarch64_ldp_policy_model
{
LDP_POLICY_ALIGNED,
LDP_POLICY_ALWAYS,
LDP_POLICY_NEVER
} ldp_policy_model;
/* An enum specifying how to handle store pairs using a fine-grained policy:
- STP_POLICY_ALIGNED: Emit stp if the source pointer is aligned
to at least double the alignment of the type.
- STP_POLICY_ALWAYS: Emit stp regardless of alignment.
- STP_POLICY_NEVER: Do not emit stp. */
enum aarch64_stp_policy_model
{
STP_POLICY_ALIGNED,
STP_POLICY_ALWAYS,
STP_POLICY_NEVER
} stp_policy_model;
/* Define models for the aarch64_ldp_stp_policy. */
enum aarch64_ldp_stp_policy ldp_policy_model, stp_policy_model;
};
/* Classifies an address.
@ -789,6 +768,7 @@ bool aarch64_emit_approx_div (rtx, rtx, rtx);
bool aarch64_emit_approx_sqrt (rtx, rtx, bool);
tree aarch64_vector_load_decl (tree);
void aarch64_expand_call (rtx, rtx, rtx, bool);
bool aarch64_expand_cpymem_mops (rtx *, bool);
bool aarch64_expand_cpymem (rtx *);
bool aarch64_expand_setmem (rtx *);
bool aarch64_float_const_zero_rtx_p (rtx);

View File

@ -91,25 +91,25 @@
})
(define_insn "aarch64_simd_dup<mode>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w, w")
[(set (match_operand:VDQ_I 0 "register_operand")
(vec_duplicate:VDQ_I
(match_operand:<VEL> 1 "register_operand" "w,?r")))]
(match_operand:<VEL> 1 "register_operand")))]
"TARGET_SIMD"
"@
dup\\t%0.<Vtype>, %1.<Vetype>[0]
dup\\t%0.<Vtype>, %<vwcore>1"
[(set_attr "type" "neon_dup<q>, neon_from_gp<q>")]
{@ [ cons: =0 , 1 ; attrs: type ]
[ w , w ; neon_dup<q> ] dup\t%0.<Vtype>, %1.<Vetype>[0]
[ w , ?r ; neon_from_gp<q> ] dup\t%0.<Vtype>, %<vwcore>1
}
)
(define_insn "aarch64_simd_dup<mode>"
[(set (match_operand:VDQF_F16 0 "register_operand" "=w,w")
[(set (match_operand:VDQF_F16 0 "register_operand")
(vec_duplicate:VDQF_F16
(match_operand:<VEL> 1 "register_operand" "w,r")))]
(match_operand:<VEL> 1 "register_operand")))]
"TARGET_SIMD"
"@
dup\\t%0.<Vtype>, %1.<Vetype>[0]
dup\\t%0.<Vtype>, %<vwcore>1"
[(set_attr "type" "neon_dup<q>, neon_from_gp<q>")]
{@ [ cons: =0 , 1 ; attrs: type ]
[ w , w ; neon_dup<q> ] dup\t%0.<Vtype>, %1.<Vetype>[0]
[ w , r ; neon_from_gp<q> ] dup\t%0.<Vtype>, %<vwcore>1
}
)
(define_insn "aarch64_dup_lane<mode>"
@ -143,54 +143,59 @@
)
(define_insn "*aarch64_simd_mov<VDMOV:mode>"
[(set (match_operand:VDMOV 0 "nonimmediate_operand"
"=w, r, m, m, m, w, ?r, ?w, ?r, w, w")
(match_operand:VDMOV 1 "general_operand"
"m, m, Dz, w, r, w, w, r, r, Dn, Dz"))]
[(set (match_operand:VDMOV 0 "nonimmediate_operand")
(match_operand:VDMOV 1 "general_operand"))]
"TARGET_FLOAT
&& (register_operand (operands[0], <MODE>mode)
|| aarch64_simd_reg_or_zero (operands[1], <MODE>mode))"
"@
ldr\t%d0, %1
ldr\t%x0, %1
str\txzr, %0
str\t%d1, %0
str\t%x1, %0
* return TARGET_SIMD ? \"mov\t%0.<Vbtype>, %1.<Vbtype>\" : \"fmov\t%d0, %d1\";
* return TARGET_SIMD ? \"umov\t%0, %1.d[0]\" : \"fmov\t%x0, %d1\";
fmov\t%d0, %1
mov\t%0, %1
* return aarch64_output_simd_mov_immediate (operands[1], 64);
fmov\t%d0, xzr"
[(set_attr "type" "neon_load1_1reg<q>, load_8, store_8, neon_store1_1reg<q>,\
store_8, neon_logic<q>, neon_to_gp<q>, f_mcr,\
mov_reg, neon_move<q>, f_mcr")
(set_attr "arch" "*,*,*,*,*,*,*,*,*,simd,*")]
{@ [cons: =0, 1; attrs: type, arch]
[w , m ; neon_load1_1reg<q> , * ] ldr\t%d0, %1
[r , m ; load_8 , * ] ldr\t%x0, %1
[m , Dz; store_8 , * ] str\txzr, %0
[m , w ; neon_store1_1reg<q>, * ] str\t%d1, %0
[m , r ; store_8 , * ] str\t%x1, %0
[w , w ; neon_logic<q> , simd] mov\t%0.<Vbtype>, %1.<Vbtype>
[w , w ; neon_logic<q> , * ] fmov\t%d0, %d1
[?r, w ; neon_to_gp<q> , simd] umov\t%0, %1.d[0]
[?r, w ; neon_to_gp<q> , * ] fmov\t%x0, %d1
[?w, r ; f_mcr , * ] fmov\t%d0, %1
[?r, r ; mov_reg , * ] mov\t%0, %1
[w , Dn; neon_move<q> , simd] << aarch64_output_simd_mov_immediate (operands[1], 64);
[w , Dz; f_mcr , * ] fmov\t%d0, xzr
}
)
(define_insn "*aarch64_simd_mov<VQMOV:mode>"
[(set (match_operand:VQMOV 0 "nonimmediate_operand"
"=w, Umn, m, w, ?r, ?w, ?r, w, w")
(match_operand:VQMOV 1 "general_operand"
"m, Dz, w, w, w, r, r, Dn, Dz"))]
(define_insn_and_split "*aarch64_simd_mov<VQMOV:mode>"
[(set (match_operand:VQMOV 0 "nonimmediate_operand")
(match_operand:VQMOV 1 "general_operand"))]
"TARGET_FLOAT
&& (register_operand (operands[0], <MODE>mode)
|| aarch64_simd_reg_or_zero (operands[1], <MODE>mode))"
"@
ldr\t%q0, %1
stp\txzr, xzr, %0
str\t%q1, %0
mov\t%0.<Vbtype>, %1.<Vbtype>
#
#
#
* return aarch64_output_simd_mov_immediate (operands[1], 128);
fmov\t%d0, xzr"
[(set_attr "type" "neon_load1_1reg<q>, store_16, neon_store1_1reg<q>,\
neon_logic<q>, multiple, multiple,\
multiple, neon_move<q>, fmov")
(set_attr "length" "4,4,4,4,8,8,8,4,4")
(set_attr "arch" "*,*,*,simd,*,*,*,simd,*")]
{@ [cons: =0, 1; attrs: type, arch, length]
[w , m ; neon_load1_1reg<q> , * , 4] ldr\t%q0, %1
[Umn, Dz; store_16 , * , 4] stp\txzr, xzr, %0
[m , w ; neon_store1_1reg<q>, * , 4] str\t%q1, %0
[w , w ; neon_logic<q> , simd, 4] mov\t%0.<Vbtype>, %1.<Vbtype>
[?r , w ; multiple , * , 8] #
[?w , r ; multiple , * , 8] #
[?r , r ; multiple , * , 8] #
[w , Dn; neon_move<q> , simd, 4] << aarch64_output_simd_mov_immediate (operands[1], 128);
[w , Dz; fmov , * , 4] fmov\t%d0, xzr
}
"&& reload_completed
&& (REG_P (operands[0])
&& REG_P (operands[1])
&& !(FP_REGNUM_P (REGNO (operands[0]))
&& FP_REGNUM_P (REGNO (operands[1]))))"
[(const_int 0)]
{
if (GP_REGNUM_P (REGNO (operands[0]))
&& GP_REGNUM_P (REGNO (operands[1])))
aarch64_simd_emit_reg_reg_move (operands, DImode, 2);
else
aarch64_split_simd_move (operands[0], operands[1]);
DONE;
}
)
;; When storing lane zero we can use the normal STR and its more permissive
@ -207,45 +212,45 @@
)
(define_insn "load_pair<DREG:mode><DREG2:mode>"
[(set (match_operand:DREG 0 "register_operand" "=w,r")
(match_operand:DREG 1 "aarch64_mem_pair_operand" "Ump,Ump"))
(set (match_operand:DREG2 2 "register_operand" "=w,r")
(match_operand:DREG2 3 "memory_operand" "m,m"))]
[(set (match_operand:DREG 0 "register_operand")
(match_operand:DREG 1 "aarch64_mem_pair_operand"))
(set (match_operand:DREG2 2 "register_operand")
(match_operand:DREG2 3 "memory_operand"))]
"TARGET_FLOAT
&& rtx_equal_p (XEXP (operands[3], 0),
plus_constant (Pmode,
XEXP (operands[1], 0),
GET_MODE_SIZE (<DREG:MODE>mode)))"
"@
ldp\t%d0, %d2, %z1
ldp\t%x0, %x2, %z1"
[(set_attr "type" "neon_ldp,load_16")]
{@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ]
[ w , Ump , w , m ; neon_ldp ] ldp\t%d0, %d2, %z1
[ r , Ump , r , m ; load_16 ] ldp\t%x0, %x2, %z1
}
)
(define_insn "vec_store_pair<DREG:mode><DREG2:mode>"
[(set (match_operand:DREG 0 "aarch64_mem_pair_operand" "=Ump,Ump")
(match_operand:DREG 1 "register_operand" "w,r"))
(set (match_operand:DREG2 2 "memory_operand" "=m,m")
(match_operand:DREG2 3 "register_operand" "w,r"))]
[(set (match_operand:DREG 0 "aarch64_mem_pair_operand")
(match_operand:DREG 1 "register_operand"))
(set (match_operand:DREG2 2 "memory_operand")
(match_operand:DREG2 3 "register_operand"))]
"TARGET_FLOAT
&& rtx_equal_p (XEXP (operands[2], 0),
plus_constant (Pmode,
XEXP (operands[0], 0),
GET_MODE_SIZE (<DREG:MODE>mode)))"
"@
stp\t%d1, %d3, %z0
stp\t%x1, %x3, %z0"
[(set_attr "type" "neon_stp,store_16")]
{@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ]
[ Ump , w , m , w ; neon_stp ] stp\t%d1, %d3, %z0
[ Ump , r , m , r ; store_16 ] stp\t%x1, %x3, %z0
}
)
(define_insn "aarch64_simd_stp<mode>"
[(set (match_operand:VP_2E 0 "aarch64_mem_pair_lanes_operand" "=Umn,Umn")
(vec_duplicate:VP_2E (match_operand:<VEL> 1 "register_operand" "w,r")))]
[(set (match_operand:VP_2E 0 "aarch64_mem_pair_lanes_operand")
(vec_duplicate:VP_2E (match_operand:<VEL> 1 "register_operand")))]
"TARGET_SIMD"
"@
stp\\t%<Vetype>1, %<Vetype>1, %y0
stp\\t%<vw>1, %<vw>1, %y0"
[(set_attr "type" "neon_stp, store_<ldpstp_vel_sz>")]
{@ [ cons: =0 , 1 ; attrs: type ]
[ Umn , w ; neon_stp ] stp\t%<Vetype>1, %<Vetype>1, %y0
[ Umn , r ; store_<ldpstp_vel_sz> ] stp\t%<vw>1, %<vw>1, %y0
}
)
(define_insn "load_pair<VQ:mode><VQ2:mode>"
@@ -276,33 +281,6 @@
[(set_attr "type" "neon_stp_q")]
)
(define_split
[(set (match_operand:VQMOV 0 "register_operand" "")
(match_operand:VQMOV 1 "register_operand" ""))]
"TARGET_FLOAT
&& reload_completed
&& GP_REGNUM_P (REGNO (operands[0]))
&& GP_REGNUM_P (REGNO (operands[1]))"
[(const_int 0)]
{
aarch64_simd_emit_reg_reg_move (operands, DImode, 2);
DONE;
})
(define_split
[(set (match_operand:VQMOV 0 "register_operand" "")
(match_operand:VQMOV 1 "register_operand" ""))]
"TARGET_FLOAT
&& reload_completed
&& ((FP_REGNUM_P (REGNO (operands[0])) && GP_REGNUM_P (REGNO (operands[1])))
|| (GP_REGNUM_P (REGNO (operands[0])) && FP_REGNUM_P (REGNO (operands[1]))))"
[(const_int 0)]
{
aarch64_split_simd_move (operands[0], operands[1]);
DONE;
})
(define_expand "@aarch64_split_simd_mov<mode>"
[(set (match_operand:VQMOV 0)
(match_operand:VQMOV 1))]
@@ -372,35 +350,37 @@
)
(define_insn_and_split "aarch64_simd_mov_from_<mode>low"
[(set (match_operand:<VHALF> 0 "register_operand" "=w,?r")
[(set (match_operand:<VHALF> 0 "register_operand")
(vec_select:<VHALF>
(match_operand:VQMOV_NO2E 1 "register_operand" "w,w")
(match_operand:VQMOV_NO2E 2 "vect_par_cnst_lo_half" "")))]
(match_operand:VQMOV_NO2E 1 "register_operand")
(match_operand:VQMOV_NO2E 2 "vect_par_cnst_lo_half")))]
"TARGET_SIMD"
"@
#
umov\t%0, %1.d[0]"
{@ [ cons: =0 , 1 ; attrs: type ]
[ w , w ; mov_reg ] #
[ ?r , w ; neon_to_gp<q> ] umov\t%0, %1.d[0]
}
"&& reload_completed && aarch64_simd_register (operands[0], <VHALF>mode)"
[(set (match_dup 0) (match_dup 1))]
{
operands[1] = aarch64_replace_reg_mode (operands[1], <VHALF>mode);
}
[(set_attr "type" "mov_reg,neon_to_gp<q>")
[
(set_attr "length" "4")]
)
(define_insn "aarch64_simd_mov_from_<mode>high"
[(set (match_operand:<VHALF> 0 "register_operand" "=w,?r,?r")
[(set (match_operand:<VHALF> 0 "register_operand")
(vec_select:<VHALF>
(match_operand:VQMOV_NO2E 1 "register_operand" "w,w,w")
(match_operand:VQMOV_NO2E 2 "vect_par_cnst_hi_half" "")))]
(match_operand:VQMOV_NO2E 1 "register_operand")
(match_operand:VQMOV_NO2E 2 "vect_par_cnst_hi_half")))]
"TARGET_FLOAT"
"@
dup\t%d0, %1.d[1]
umov\t%0, %1.d[1]
fmov\t%0, %1.d[1]"
[(set_attr "type" "neon_dup<q>,neon_to_gp<q>,f_mrc")
(set_attr "arch" "simd,simd,*")
{@ [ cons: =0 , 1 ; attrs: type , arch ]
[ w , w ; neon_dup<q> , simd ] dup\t%d0, %1.d[1]
[ ?r , w ; neon_to_gp<q> , simd ] umov\t%0, %1.d[1]
[ ?r , w ; f_mrc , * ] fmov\t%0, %1.d[1]
}
[
(set_attr "length" "4")]
)
@@ -500,7 +480,7 @@
}
)
(define_expand "xorsign<mode>3"
(define_expand "@xorsign<mode>3"
[(match_operand:VHSDF 0 "register_operand")
(match_operand:VHSDF 1 "register_operand")
(match_operand:VHSDF 2 "register_operand")]
@@ -1204,27 +1184,27 @@
;; For AND (vector, register) and BIC (vector, immediate)
(define_insn "and<mode>3<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w")
(and:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,0")
(match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm" "w,Db")))]
[(set (match_operand:VDQ_I 0 "register_operand")
(and:VDQ_I (match_operand:VDQ_I 1 "register_operand")
(match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm")))]
"TARGET_SIMD"
"@
and\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
* return aarch64_output_simd_mov_immediate (operands[2], <bitsize>,\
AARCH64_CHECK_BIC);"
{@ [ cons: =0 , 1 , 2 ]
[ w , w , w ] and\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
[ w , 0 , Db ] << aarch64_output_simd_mov_immediate (operands[2], <bitsize>, AARCH64_CHECK_BIC);
}
[(set_attr "type" "neon_logic<q>")]
)
;; For ORR (vector, register) and ORR (vector, immediate)
(define_insn "ior<mode>3<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w")
(ior:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,0")
(match_operand:VDQ_I 2 "aarch64_reg_or_orr_imm" "w,Do")))]
[(set (match_operand:VDQ_I 0 "register_operand")
(ior:VDQ_I (match_operand:VDQ_I 1 "register_operand")
(match_operand:VDQ_I 2 "aarch64_reg_or_orr_imm")))]
"TARGET_SIMD"
"@
orr\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
* return aarch64_output_simd_mov_immediate (operands[2], <bitsize>,\
AARCH64_CHECK_ORR);"
{@ [ cons: =0 , 1 , 2 ]
[ w , w , w ] orr\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>
[ w , 0 , Do ] << aarch64_output_simd_mov_immediate (operands[2], <bitsize>, AARCH64_CHECK_ORR);
}
[(set_attr "type" "neon_logic<q>")]
)
@@ -1353,14 +1333,14 @@
)
(define_insn "aarch64_simd_ashr<mode><vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w")
(ashiftrt:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,w")
(match_operand:VDQ_I 2 "aarch64_simd_rshift_imm" "D1,Dr")))]
[(set (match_operand:VDQ_I 0 "register_operand")
(ashiftrt:VDQ_I (match_operand:VDQ_I 1 "register_operand")
(match_operand:VDQ_I 2 "aarch64_simd_rshift_imm")))]
"TARGET_SIMD"
"@
cmlt\t%0.<Vtype>, %1.<Vtype>, #0
sshr\t%0.<Vtype>, %1.<Vtype>, %2"
[(set_attr "type" "neon_compare<q>,neon_shift_imm<q>")]
{@ [ cons: =0 , 1 , 2 ; attrs: type ]
[ w , w , D1 ; neon_compare<q> ] cmlt\t%0.<Vtype>, %1.<Vtype>, #0
[ w , w , Dr ; neon_shift_imm<q> ] sshr\t%0.<Vtype>, %1.<Vtype>, %2
}
)
(define_insn "aarch64_<sra_op>sra_n<mode>_insn"
@@ -3701,20 +3681,21 @@
;; in *aarch64_simd_bsl<mode>_alt.
(define_insn "aarch64_simd_bsl<mode>_internal<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w,w")
[(set (match_operand:VDQ_I 0 "register_operand")
(xor:VDQ_I
(and:VDQ_I
(xor:VDQ_I
(match_operand:<V_INT_EQUIV> 3 "register_operand" "w,0,w")
(match_operand:VDQ_I 2 "register_operand" "w,w,0"))
(match_operand:VDQ_I 1 "register_operand" "0,w,w"))
(match_operand:<V_INT_EQUIV> 3 "register_operand")
(match_operand:VDQ_I 2 "register_operand"))
(match_operand:VDQ_I 1 "register_operand"))
(match_dup:<V_INT_EQUIV> 3)
))]
"TARGET_SIMD"
"@
bsl\\t%0.<Vbtype>, %2.<Vbtype>, %3.<Vbtype>
bit\\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>
bif\\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>"
{@ [ cons: =0 , 1 , 2 , 3 ]
[ w , 0 , w , w ] bsl\t%0.<Vbtype>, %2.<Vbtype>, %3.<Vbtype>
[ w , w , w , 0 ] bit\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>
[ w , w , 0 , w ] bif\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>
}
[(set_attr "type" "neon_bsl<q>")]
)
@@ -3725,19 +3706,20 @@
;; permutations of commutative operations, we have to have a separate pattern.
(define_insn "*aarch64_simd_bsl<mode>_alt<vczle><vczbe>"
[(set (match_operand:VDQ_I 0 "register_operand" "=w,w,w")
[(set (match_operand:VDQ_I 0 "register_operand")
(xor:VDQ_I
(and:VDQ_I
(xor:VDQ_I
(match_operand:VDQ_I 3 "register_operand" "w,w,0")
(match_operand:<V_INT_EQUIV> 2 "register_operand" "w,0,w"))
(match_operand:VDQ_I 1 "register_operand" "0,w,w"))
(match_operand:VDQ_I 3 "register_operand")
(match_operand:<V_INT_EQUIV> 2 "register_operand"))
(match_operand:VDQ_I 1 "register_operand"))
(match_dup:<V_INT_EQUIV> 2)))]
"TARGET_SIMD"
"@
bsl\\t%0.<Vbtype>, %3.<Vbtype>, %2.<Vbtype>
bit\\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>
bif\\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>"
{@ [ cons: =0 , 1 , 2 , 3 ]
[ w , 0 , w , w ] bsl\t%0.<Vbtype>, %3.<Vbtype>, %2.<Vbtype>
[ w , w , 0 , w ] bit\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>
[ w , w , w , 0 ] bif\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>
}
[(set_attr "type" "neon_bsl<q>")]
)
@@ -3752,21 +3734,22 @@
;; would be better calculated on the integer side.
(define_insn_and_split "aarch64_simd_bsldi_internal"
[(set (match_operand:DI 0 "register_operand" "=w,w,w,&r")
[(set (match_operand:DI 0 "register_operand")
(xor:DI
(and:DI
(xor:DI
(match_operand:DI 3 "register_operand" "w,0,w,r")
(match_operand:DI 2 "register_operand" "w,w,0,r"))
(match_operand:DI 1 "register_operand" "0,w,w,r"))
(match_operand:DI 3 "register_operand")
(match_operand:DI 2 "register_operand"))
(match_operand:DI 1 "register_operand"))
(match_dup:DI 3)
))]
"TARGET_SIMD"
"@
bsl\\t%0.8b, %2.8b, %3.8b
bit\\t%0.8b, %2.8b, %1.8b
bif\\t%0.8b, %3.8b, %1.8b
#"
{@ [ cons: =0 , 1 , 2 , 3 ; attrs: type , length ]
[ w , 0 , w , w ; neon_bsl , 4 ] bsl\t%0.8b, %2.8b, %3.8b
[ w , w , w , 0 ; neon_bsl , 4 ] bit\t%0.8b, %2.8b, %1.8b
[ w , w , 0 , w ; neon_bsl , 4 ] bif\t%0.8b, %3.8b, %1.8b
[ &r , r , r , r ; multiple , 12 ] #
}
"&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
[(match_dup 1) (match_dup 1) (match_dup 2) (match_dup 3)]
{
@@ -3789,26 +3772,25 @@
emit_insn (gen_xordi3 (operands[0], scratch, operands[3]));
DONE;
}
[(set_attr "type" "neon_bsl,neon_bsl,neon_bsl,multiple")
(set_attr "length" "4,4,4,12")]
)
(define_insn_and_split "aarch64_simd_bsldi_alt"
[(set (match_operand:DI 0 "register_operand" "=w,w,w,&r")
[(set (match_operand:DI 0 "register_operand")
(xor:DI
(and:DI
(xor:DI
(match_operand:DI 3 "register_operand" "w,w,0,r")
(match_operand:DI 2 "register_operand" "w,0,w,r"))
(match_operand:DI 1 "register_operand" "0,w,w,r"))
(match_operand:DI 3 "register_operand")
(match_operand:DI 2 "register_operand"))
(match_operand:DI 1 "register_operand"))
(match_dup:DI 2)
))]
"TARGET_SIMD"
"@
bsl\\t%0.8b, %3.8b, %2.8b
bit\\t%0.8b, %3.8b, %1.8b
bif\\t%0.8b, %2.8b, %1.8b
#"
{@ [ cons: =0 , 1 , 2 , 3 ; attrs: type , length ]
[ w , 0 , w , w ; neon_bsl , 4 ] bsl\t%0.8b, %3.8b, %2.8b
[ w , w , 0 , w ; neon_bsl , 4 ] bit\t%0.8b, %3.8b, %1.8b
[ w , w , w , 0 ; neon_bsl , 4 ] bif\t%0.8b, %2.8b, %1.8b
[ &r , r , r , r ; multiple , 12 ] #
}
"&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
[(match_dup 0) (match_dup 1) (match_dup 2) (match_dup 3)]
{
@@ -3831,8 +3813,6 @@
emit_insn (gen_xordi3 (operands[0], scratch, operands[2]));
DONE;
}
[(set_attr "type" "neon_bsl,neon_bsl,neon_bsl,multiple")
(set_attr "length" "4,4,4,12")]
)
(define_expand "aarch64_simd_bsl<mode>"
@@ -4385,15 +4365,15 @@
;; This dedicated pattern must come first.
(define_insn "store_pair_lanes<mode>"
[(set (match_operand:<VDBL> 0 "aarch64_mem_pair_lanes_operand" "=Umn, Umn")
[(set (match_operand:<VDBL> 0 "aarch64_mem_pair_lanes_operand")
(vec_concat:<VDBL>
(match_operand:VDCSIF 1 "register_operand" "w, r")
(match_operand:VDCSIF 2 "register_operand" "w, r")))]
(match_operand:VDCSIF 1 "register_operand")
(match_operand:VDCSIF 2 "register_operand")))]
"TARGET_FLOAT"
"@
stp\t%<single_type>1, %<single_type>2, %y0
stp\t%<single_wx>1, %<single_wx>2, %y0"
[(set_attr "type" "neon_stp, store_16")]
{@ [ cons: =0 , 1 , 2 ; attrs: type ]
[ Umn , w , w ; neon_stp ] stp\t%<single_type>1, %<single_type>2, %y0
[ Umn , r , r ; store_16 ] stp\t%<single_wx>1, %<single_wx>2, %y0
}
)
;; Form a vector whose least significant half comes from operand 1 and whose
@@ -4404,73 +4384,70 @@
;; the register alternatives either don't accept or themselves disparage.
(define_insn "*aarch64_combine_internal<mode>"
[(set (match_operand:<VDBL> 0 "aarch64_reg_or_mem_pair_operand" "=w, w, w, w, Umn, Umn")
[(set (match_operand:<VDBL> 0 "aarch64_reg_or_mem_pair_operand")
(vec_concat:<VDBL>
(match_operand:VDCSIF 1 "register_operand" "0, 0, 0, 0, ?w, ?r")
(match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand" "w, ?r, ?r, Utv, w, ?r")))]
(match_operand:VDCSIF 1 "register_operand")
(match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand")))]
"TARGET_FLOAT
&& !BYTES_BIG_ENDIAN
&& (register_operand (operands[0], <VDBL>mode)
|| register_operand (operands[2], <MODE>mode))"
"@
ins\t%0.<single_type>[1], %2.<single_type>[0]
ins\t%0.<single_type>[1], %<single_wx>2
fmov\t%0.d[1], %2
ld1\t{%0.<single_type>}[1], %2
stp\t%<single_type>1, %<single_type>2, %y0
stp\t%<single_wx>1, %<single_wx>2, %y0"
[(set_attr "type" "neon_ins<dblq>, neon_from_gp<dblq>, f_mcr,
neon_load1_one_lane<dblq>, neon_stp, store_16")
(set_attr "arch" "simd,simd,*,simd,*,*")]
{@ [ cons: =0 , 1 , 2 ; attrs: type , arch ]
[ w , 0 , w ; neon_ins<dblq> , simd ] ins\t%0.<single_type>[1], %2.<single_type>[0]
[ w , 0 , ?r ; neon_from_gp<dblq> , simd ] ins\t%0.<single_type>[1], %<single_wx>2
[ w , 0 , ?r ; f_mcr , * ] fmov\t%0.d[1], %2
[ w , 0 , Utv ; neon_load1_one_lane<dblq> , simd ] ld1\t{%0.<single_type>}[1], %2
[ Umn , ?w , w ; neon_stp , * ] stp\t%<single_type>1, %<single_type>2, %y0
[ Umn , ?r , ?r ; store_16 , * ] stp\t%<single_wx>1, %<single_wx>2, %y0
}
)
(define_insn "*aarch64_combine_internal_be<mode>"
[(set (match_operand:<VDBL> 0 "aarch64_reg_or_mem_pair_operand" "=w, w, w, w, Umn, Umn")
[(set (match_operand:<VDBL> 0 "aarch64_reg_or_mem_pair_operand")
(vec_concat:<VDBL>
(match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand" "w, ?r, ?r, Utv, ?w, ?r")
(match_operand:VDCSIF 1 "register_operand" "0, 0, 0, 0, ?w, ?r")))]
(match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand")
(match_operand:VDCSIF 1 "register_operand")))]
"TARGET_FLOAT
&& BYTES_BIG_ENDIAN
&& (register_operand (operands[0], <VDBL>mode)
|| register_operand (operands[2], <MODE>mode))"
"@
ins\t%0.<single_type>[1], %2.<single_type>[0]
ins\t%0.<single_type>[1], %<single_wx>2
fmov\t%0.d[1], %2
ld1\t{%0.<single_type>}[1], %2
stp\t%<single_type>2, %<single_type>1, %y0
stp\t%<single_wx>2, %<single_wx>1, %y0"
[(set_attr "type" "neon_ins<dblq>, neon_from_gp<dblq>, f_mcr, neon_load1_one_lane<dblq>, neon_stp, store_16")
(set_attr "arch" "simd,simd,*,simd,*,*")]
{@ [ cons: =0 , 1 , 2 ; attrs: type , arch ]
[ w , 0 , w ; neon_ins<dblq> , simd ] ins\t%0.<single_type>[1], %2.<single_type>[0]
[ w , 0 , ?r ; neon_from_gp<dblq> , simd ] ins\t%0.<single_type>[1], %<single_wx>2
[ w , 0 , ?r ; f_mcr , * ] fmov\t%0.d[1], %2
[ w , 0 , Utv ; neon_load1_one_lane<dblq> , simd ] ld1\t{%0.<single_type>}[1], %2
[ Umn , ?w , ?w ; neon_stp , * ] stp\t%<single_type>2, %<single_type>1, %y0
[ Umn , ?r , ?r ; store_16 , * ] stp\t%<single_wx>2, %<single_wx>1, %y0
}
)
;; In this insn, operand 1 should be low, and operand 2 the high part of the
;; dest vector.
(define_insn "*aarch64_combinez<mode>"
[(set (match_operand:<VDBL> 0 "register_operand" "=w,w,w")
[(set (match_operand:<VDBL> 0 "register_operand")
(vec_concat:<VDBL>
(match_operand:VDCSIF 1 "nonimmediate_operand" "w,?r,m")
(match_operand:VDCSIF 1 "nonimmediate_operand")
(match_operand:VDCSIF 2 "aarch64_simd_or_scalar_imm_zero")))]
"TARGET_FLOAT && !BYTES_BIG_ENDIAN"
"@
fmov\\t%<single_type>0, %<single_type>1
fmov\t%<single_type>0, %<single_wx>1
ldr\\t%<single_type>0, %1"
[(set_attr "type" "neon_move<q>, neon_from_gp, neon_load1_1reg")]
{@ [ cons: =0 , 1 ; attrs: type ]
[ w , w ; neon_move<q> ] fmov\t%<single_type>0, %<single_type>1
[ w , ?r ; neon_from_gp ] fmov\t%<single_type>0, %<single_wx>1
[ w , m ; neon_load1_1reg ] ldr\t%<single_type>0, %1
}
)
(define_insn "*aarch64_combinez_be<mode>"
[(set (match_operand:<VDBL> 0 "register_operand" "=w,w,w")
[(set (match_operand:<VDBL> 0 "register_operand")
(vec_concat:<VDBL>
(match_operand:VDCSIF 2 "aarch64_simd_or_scalar_imm_zero")
(match_operand:VDCSIF 1 "nonimmediate_operand" "w,?r,m")))]
(match_operand:VDCSIF 1 "nonimmediate_operand")))]
"TARGET_FLOAT && BYTES_BIG_ENDIAN"
"@
fmov\\t%<single_type>0, %<single_type>1
fmov\t%<single_type>0, %<single_wx>1
ldr\\t%<single_type>0, %1"
[(set_attr "type" "neon_move<q>, neon_from_gp, neon_load1_1reg")]
{@ [ cons: =0 , 1 ; attrs: type ]
[ w , w ; neon_move<q> ] fmov\t%<single_type>0, %<single_type>1
[ w , ?r ; neon_from_gp ] fmov\t%<single_type>0, %<single_wx>1
[ w , m ; neon_load1_1reg ] ldr\t%<single_type>0, %1
}
)
;; Form a vector whose first half (in array order) comes from operand 1
@@ -7051,17 +7028,17 @@
;; have different ideas of what should be passed to this pattern.
(define_insn "aarch64_cm<optab><mode><vczle><vczbe>"
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand" "=w,w")
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand")
(neg:<V_INT_EQUIV>
(COMPARISONS:<V_INT_EQUIV>
(match_operand:VDQ_I 1 "register_operand" "w,w")
(match_operand:VDQ_I 2 "aarch64_simd_reg_or_zero" "w,ZDz")
(match_operand:VDQ_I 1 "register_operand")
(match_operand:VDQ_I 2 "aarch64_simd_reg_or_zero")
)))]
"TARGET_SIMD"
"@
cm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
cm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, #0"
[(set_attr "type" "neon_compare<q>, neon_compare_zero<q>")]
{@ [ cons: =0 , 1 , 2 ; attrs: type ]
[ w , w , w ; neon_compare<q> ] cm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
[ w , w , ZDz ; neon_compare_zero<q> ] cm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, #0
}
)
(define_insn_and_split "aarch64_cm<optab>di"
@@ -7100,17 +7077,17 @@
)
(define_insn "*aarch64_cm<optab>di"
[(set (match_operand:DI 0 "register_operand" "=w,w")
[(set (match_operand:DI 0 "register_operand")
(neg:DI
(COMPARISONS:DI
(match_operand:DI 1 "register_operand" "w,w")
(match_operand:DI 2 "aarch64_simd_reg_or_zero" "w,ZDz")
(match_operand:DI 1 "register_operand")
(match_operand:DI 2 "aarch64_simd_reg_or_zero")
)))]
"TARGET_SIMD && reload_completed"
"@
cm<n_optab>\t%d0, %d<cmp_1>, %d<cmp_2>
cm<optab>\t%d0, %d1, #0"
[(set_attr "type" "neon_compare, neon_compare_zero")]
{@ [ cons: =0 , 1 , 2 ; attrs: type ]
[ w , w , w ; neon_compare ] cm<n_optab>\t%d0, %d<cmp_1>, %d<cmp_2>
[ w , w , ZDz ; neon_compare_zero ] cm<optab>\t%d0, %d1, #0
}
)
;; cm(hs|hi)
@@ -7268,16 +7245,17 @@
;; fcm(eq|ge|gt|le|lt)
(define_insn "aarch64_cm<optab><mode><vczle><vczbe>"
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand" "=w,w")
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand")
(neg:<V_INT_EQUIV>
(COMPARISONS:<V_INT_EQUIV>
(match_operand:VHSDF_HSDF 1 "register_operand" "w,w")
(match_operand:VHSDF_HSDF 2 "aarch64_simd_reg_or_zero" "w,YDz")
(match_operand:VHSDF_HSDF 1 "register_operand")
(match_operand:VHSDF_HSDF 2 "aarch64_simd_reg_or_zero")
)))]
"TARGET_SIMD"
"@
fcm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
fcm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, 0"
{@ [ cons: =0 , 1 , 2 ]
[ w , w , w ] fcm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
[ w , w , YDz ] fcm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, 0
}
[(set_attr "type" "neon_fp_compare_<stype><q>")]
)
@@ -7880,33 +7858,29 @@
)
(define_insn "*aarch64_mov<mode>"
[(set (match_operand:VSTRUCT_QD 0 "aarch64_simd_nonimmediate_operand" "=w,Utv,w")
(match_operand:VSTRUCT_QD 1 "aarch64_simd_general_operand" " w,w,Utv"))]
[(set (match_operand:VSTRUCT_QD 0 "aarch64_simd_nonimmediate_operand")
(match_operand:VSTRUCT_QD 1 "aarch64_simd_general_operand"))]
"TARGET_SIMD && !BYTES_BIG_ENDIAN
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
"@
#
st1\\t{%S1.<Vtype> - %<Vendreg>1.<Vtype>}, %0
ld1\\t{%S0.<Vtype> - %<Vendreg>0.<Vtype>}, %1"
[(set_attr "type" "multiple,neon_store<nregs>_<nregs>reg_q,\
neon_load<nregs>_<nregs>reg_q")
(set_attr "length" "<insn_count>,4,4")]
{@ [ cons: =0 , 1 ; attrs: type , length ]
[ w , w ; multiple , <insn_count> ] #
[ Utv , w ; neon_store<nregs>_<nregs>reg_q , 4 ] st1\t{%S1.<Vtype> - %<Vendreg>1.<Vtype>}, %0
[ w , Utv ; neon_load<nregs>_<nregs>reg_q , 4 ] ld1\t{%S0.<Vtype> - %<Vendreg>0.<Vtype>}, %1
}
)
(define_insn "*aarch64_mov<mode>"
[(set (match_operand:VSTRUCT 0 "aarch64_simd_nonimmediate_operand" "=w,Utv,w")
(match_operand:VSTRUCT 1 "aarch64_simd_general_operand" " w,w,Utv"))]
[(set (match_operand:VSTRUCT 0 "aarch64_simd_nonimmediate_operand")
(match_operand:VSTRUCT 1 "aarch64_simd_general_operand"))]
"TARGET_SIMD && !BYTES_BIG_ENDIAN
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
"@
#
st1\\t{%S1.16b - %<Vendreg>1.16b}, %0
ld1\\t{%S0.16b - %<Vendreg>0.16b}, %1"
[(set_attr "type" "multiple,neon_store<nregs>_<nregs>reg_q,\
neon_load<nregs>_<nregs>reg_q")
(set_attr "length" "<insn_count>,4,4")]
{@ [ cons: =0 , 1 ; attrs: type , length ]
[ w , w ; multiple , <insn_count> ] #
[ Utv , w ; neon_store<nregs>_<nregs>reg_q , 4 ] st1\t{%S1.16b - %<Vendreg>1.16b}, %0
[ w , Utv ; neon_load<nregs>_<nregs>reg_q , 4 ] ld1\t{%S0.16b - %<Vendreg>0.16b}, %1
}
)
(define_insn "*aarch64_movv8di"
@@ -7939,50 +7913,45 @@
)
(define_insn "*aarch64_be_mov<mode>"
[(set (match_operand:VSTRUCT_2D 0 "nonimmediate_operand" "=w,m,w")
(match_operand:VSTRUCT_2D 1 "general_operand" " w,w,m"))]
[(set (match_operand:VSTRUCT_2D 0 "nonimmediate_operand")
(match_operand:VSTRUCT_2D 1 "general_operand"))]
"TARGET_FLOAT
&& (!TARGET_SIMD || BYTES_BIG_ENDIAN)
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
"@
#
stp\\t%d1, %R1, %0
ldp\\t%d0, %R0, %1"
[(set_attr "type" "multiple,neon_stp,neon_ldp")
(set_attr "length" "8,4,4")]
{@ [ cons: =0 , 1 ; attrs: type , length ]
[ w , w ; multiple , 8 ] #
[ m , w ; neon_stp , 4 ] stp\t%d1, %R1, %0
[ w , m ; neon_ldp , 4 ] ldp\t%d0, %R0, %1
}
)
(define_insn "*aarch64_be_mov<mode>"
[(set (match_operand:VSTRUCT_2Q 0 "nonimmediate_operand" "=w,m,w")
(match_operand:VSTRUCT_2Q 1 "general_operand" " w,w,m"))]
[(set (match_operand:VSTRUCT_2Q 0 "nonimmediate_operand")
(match_operand:VSTRUCT_2Q 1 "general_operand"))]
"TARGET_FLOAT
&& (!TARGET_SIMD || BYTES_BIG_ENDIAN)
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
"@
#
stp\\t%q1, %R1, %0
ldp\\t%q0, %R0, %1"
[(set_attr "type" "multiple,neon_stp_q,neon_ldp_q")
(set_attr "arch" "simd,*,*")
(set_attr "length" "8,4,4")]
{@ [ cons: =0 , 1 ; attrs: type , arch , length ]
[ w , w ; multiple , simd , 8 ] #
[ m , w ; neon_stp_q , * , 4 ] stp\t%q1, %R1, %0
[ w , m ; neon_ldp_q , * , 4 ] ldp\t%q0, %R0, %1
}
)
(define_insn "*aarch64_be_movoi"
[(set (match_operand:OI 0 "nonimmediate_operand" "=w,m,w")
(match_operand:OI 1 "general_operand" " w,w,m"))]
[(set (match_operand:OI 0 "nonimmediate_operand")
(match_operand:OI 1 "general_operand"))]
"TARGET_FLOAT
&& (!TARGET_SIMD || BYTES_BIG_ENDIAN)
&& (register_operand (operands[0], OImode)
|| register_operand (operands[1], OImode))"
"@
#
stp\\t%q1, %R1, %0
ldp\\t%q0, %R0, %1"
[(set_attr "type" "multiple,neon_stp_q,neon_ldp_q")
(set_attr "arch" "simd,*,*")
(set_attr "length" "8,4,4")]
{@ [ cons: =0 , 1 ; attrs: type , arch , length ]
[ w , w ; multiple , simd , 8 ] #
[ m , w ; neon_stp_q , * , 4 ] stp\t%q1, %R1, %0
[ w , m ; neon_ldp_q , * , 4 ] ldp\t%q0, %R0, %1
}
)
(define_insn "*aarch64_be_mov<mode>"

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,5 +1,5 @@
;; -*- buffer-read-only: t -*-
;; Generated automatically by gentune.sh from aarch64-cores.def
(define_attr "tune"
"cortexa34,cortexa35,cortexa53,cortexa57,cortexa72,cortexa73,thunderx,thunderxt88p1,thunderxt88,octeontx,octeontxt81,octeontxt83,thunderxt81,thunderxt83,ampere1,ampere1a,emag,xgene1,falkor,qdf24xx,exynosm1,phecda,thunderx2t99p1,vulcan,thunderx2t99,cortexa55,cortexa75,cortexa76,cortexa76ae,cortexa77,cortexa78,cortexa78ae,cortexa78c,cortexa65,cortexa65ae,cortexx1,cortexx1c,neoversen1,ares,neoversee1,octeontx2,octeontx2t98,octeontx2t96,octeontx2t93,octeontx2f95,octeontx2f95n,octeontx2f95mm,a64fx,tsv110,thunderx3t110,neoversev1,zeus,neoverse512tvb,saphira,cortexa57cortexa53,cortexa72cortexa53,cortexa73cortexa35,cortexa73cortexa53,cortexa75cortexa55,cortexa76cortexa55,cortexr82,cortexa510,cortexa520,cortexa710,cortexa715,cortexa720,cortexx2,cortexx3,neoversen2,neoversev2,demeter"
"cortexa34,cortexa35,cortexa53,cortexa57,cortexa72,cortexa73,thunderx,thunderxt88p1,thunderxt88,octeontx,octeontxt81,octeontxt83,thunderxt81,thunderxt83,ampere1,ampere1a,emag,xgene1,falkor,qdf24xx,exynosm1,phecda,thunderx2t99p1,vulcan,thunderx2t99,cortexa55,cortexa75,cortexa76,cortexa76ae,cortexa77,cortexa78,cortexa78ae,cortexa78c,cortexa65,cortexa65ae,cortexx1,cortexx1c,neoversen1,ares,neoversee1,octeontx2,octeontx2t98,octeontx2t96,octeontx2t93,octeontx2f95,octeontx2f95n,octeontx2f95mm,a64fx,tsv110,thunderx3t110,neoversev1,zeus,neoverse512tvb,saphira,cortexa57cortexa53,cortexa72cortexa53,cortexa73cortexa35,cortexa73cortexa53,cortexa75cortexa55,cortexa76cortexa55,cortexr82,cortexa510,cortexa520,cortexa710,cortexa715,cortexa720,cortexx2,cortexx3,cortexx4,neoversen2,neoversev2,demeter"
(const (symbol_ref "((enum attr_tune) aarch64_tune)")))


@@ -257,7 +257,7 @@ public:
machine_mode orig_mode;
/* The offset in bytes of the piece from the start of the type. */
poly_uint64_pod offset;
poly_uint64 offset;
};
/* Divides types analyzed as IS_PST into individual pieces. The pieces
@@ -1358,8 +1358,8 @@ static const struct tune_params generic_tunings =
have at most a very minor effect on SVE2 cores. */
(AARCH64_EXTRA_TUNE_CSE_SVE_VL_CONSTANTS), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params cortexa35_tunings =
@@ -1394,8 +1394,8 @@ static const struct tune_params cortexa35_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params cortexa53_tunings =
@@ -1430,8 +1430,8 @@ static const struct tune_params cortexa53_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params cortexa57_tunings =
@@ -1466,8 +1466,8 @@ static const struct tune_params cortexa57_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_RENAME_FMA_REGS), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params cortexa72_tunings =
@@ -1502,8 +1502,8 @@ static const struct tune_params cortexa72_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params cortexa73_tunings =
@@ -1538,12 +1538,10 @@ static const struct tune_params cortexa73_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params exynosm1_tunings =
{
&exynosm1_extra_costs,
@@ -1575,8 +1573,8 @@ static const struct tune_params exynosm1_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&exynosm1_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params thunderxt88_tunings =
@@ -1610,8 +1608,8 @@ static const struct tune_params thunderxt88_tunings =
tune_params::AUTOPREFETCHER_OFF, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&thunderxt88_prefetch_tune,
tune_params::LDP_POLICY_ALIGNED, /* ldp_policy_model. */
tune_params::STP_POLICY_ALIGNED /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED /* stp_policy_model. */
};
static const struct tune_params thunderx_tunings =
@@ -1645,8 +1643,8 @@ static const struct tune_params thunderx_tunings =
tune_params::AUTOPREFETCHER_OFF, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_CHEAP_SHIFT_EXTEND), /* tune_flags. */
&thunderx_prefetch_tune,
tune_params::LDP_POLICY_ALIGNED, /* ldp_policy_model. */
tune_params::STP_POLICY_ALIGNED /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED /* stp_policy_model. */
};
static const struct tune_params tsv110_tunings =
@@ -1681,8 +1679,8 @@ static const struct tune_params tsv110_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&tsv110_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params xgene1_tunings =
@@ -1716,8 +1714,8 @@ static const struct tune_params xgene1_tunings =
tune_params::AUTOPREFETCHER_OFF, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS), /* tune_flags. */
&xgene1_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params emag_tunings =
@@ -1751,8 +1749,8 @@ static const struct tune_params emag_tunings =
tune_params::AUTOPREFETCHER_OFF, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NO_LDP_STP_QREGS), /* tune_flags. */
&xgene1_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params qdf24xx_tunings =
@@ -1787,8 +1785,8 @@ static const struct tune_params qdf24xx_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
AARCH64_EXTRA_TUNE_RENAME_LOAD_REGS, /* tune_flags. */
&qdf24xx_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
/* Tuning structure for the Qualcomm Saphira core. Default to falkor values
@ -1825,8 +1823,8 @@ static const struct tune_params saphira_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params thunderx2t99_tunings =
@ -1861,8 +1859,8 @@ static const struct tune_params thunderx2t99_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&thunderx2t99_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params thunderx3t110_tunings =
@ -1897,8 +1895,8 @@ static const struct tune_params thunderx3t110_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&thunderx3t110_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params neoversen1_tunings =
@ -1932,8 +1930,8 @@ static const struct tune_params neoversen1_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_CHEAP_SHIFT_EXTEND), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params ampere1_tunings =
@ -1971,8 +1969,8 @@ static const struct tune_params ampere1_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&ampere1_prefetch_tune,
tune_params::LDP_POLICY_ALIGNED, /* ldp_policy_model. */
tune_params::STP_POLICY_ALIGNED /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED /* stp_policy_model. */
};
static const struct tune_params ampere1a_tunings =
@ -2011,8 +2009,8 @@ static const struct tune_params ampere1a_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&ampere1_prefetch_tune,
tune_params::LDP_POLICY_ALIGNED, /* ldp_policy_model. */
tune_params::STP_POLICY_ALIGNED /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALIGNED /* stp_policy_model. */
};
static const advsimd_vec_cost neoversev1_advsimd_vector_cost =
@ -2194,8 +2192,8 @@ static const struct tune_params neoversev1_tunings =
| AARCH64_EXTRA_TUNE_MATCHED_VECTOR_THROUGHPUT
| AARCH64_EXTRA_TUNE_CHEAP_SHIFT_EXTEND), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const sve_vec_cost neoverse512tvb_sve_vector_cost =
@ -2333,8 +2331,8 @@ static const struct tune_params neoverse512tvb_tunings =
| AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS
| AARCH64_EXTRA_TUNE_MATCHED_VECTOR_THROUGHPUT), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const advsimd_vec_cost neoversen2_advsimd_vector_cost =
@ -2525,8 +2523,8 @@ static const struct tune_params neoversen2_tunings =
| AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS
| AARCH64_EXTRA_TUNE_MATCHED_VECTOR_THROUGHPUT), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const advsimd_vec_cost neoversev2_advsimd_vector_cost =
@ -2717,8 +2715,8 @@ static const struct tune_params neoversev2_tunings =
| AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS
| AARCH64_EXTRA_TUNE_MATCHED_VECTOR_THROUGHPUT), /* tune_flags. */
&generic_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
static const struct tune_params a64fx_tunings =
@ -2752,8 +2750,8 @@ static const struct tune_params a64fx_tunings =
tune_params::AUTOPREFETCHER_WEAK, /* autoprefetcher_model. */
(AARCH64_EXTRA_TUNE_NONE), /* tune_flags. */
&a64fx_prefetch_tune,
tune_params::LDP_POLICY_ALWAYS, /* ldp_policy_model. */
tune_params::STP_POLICY_ALWAYS /* stp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS, /* ldp_policy_model. */
AARCH64_LDP_STP_POLICY_ALWAYS /* stp_policy_model. */
};
/* Support for fine-grained override of the tuning structures. */
@ -8529,13 +8527,17 @@ aarch64_save_regs_above_locals_p ()
static void
aarch64_layout_frame (void)
{
int regno, last_fp_reg = INVALID_REGNUM;
unsigned regno, last_fp_reg = INVALID_REGNUM;
machine_mode vector_save_mode = aarch64_reg_save_mode (V8_REGNUM);
poly_int64 vector_save_size = GET_MODE_SIZE (vector_save_mode);
bool frame_related_fp_reg_p = false;
aarch64_frame &frame = cfun->machine->frame;
poly_int64 top_of_locals = -1;
vec_safe_truncate (frame.saved_gprs, 0);
vec_safe_truncate (frame.saved_fprs, 0);
vec_safe_truncate (frame.saved_prs, 0);
frame.emit_frame_chain = aarch64_needs_frame_chain ();
/* Adjust the outgoing arguments size if required. Keep it in sync with what
@ -8620,6 +8622,7 @@ aarch64_layout_frame (void)
for (regno = P0_REGNUM; regno <= P15_REGNUM; regno++)
if (known_eq (frame.reg_offset[regno], SLOT_REQUIRED))
{
vec_safe_push (frame.saved_prs, regno);
if (frame.sve_save_and_probe == INVALID_REGNUM)
frame.sve_save_and_probe = regno;
frame.reg_offset[regno] = offset;
@ -8641,7 +8644,7 @@ aarch64_layout_frame (void)
If we don't have any vector registers to save, and we know how
big the predicate save area is, we can just round it up to the
next 16-byte boundary. */
if (last_fp_reg == (int) INVALID_REGNUM && offset.is_constant ())
if (last_fp_reg == INVALID_REGNUM && offset.is_constant ())
offset = aligned_upper_bound (offset, STACK_BOUNDARY / BITS_PER_UNIT);
else
{
@ -8655,10 +8658,11 @@ aarch64_layout_frame (void)
}
/* If we need to save any SVE vector registers, add them next. */
if (last_fp_reg != (int) INVALID_REGNUM && crtl->abi->id () == ARM_PCS_SVE)
if (last_fp_reg != INVALID_REGNUM && crtl->abi->id () == ARM_PCS_SVE)
for (regno = V0_REGNUM; regno <= V31_REGNUM; regno++)
if (known_eq (frame.reg_offset[regno], SLOT_REQUIRED))
{
vec_safe_push (frame.saved_fprs, regno);
if (frame.sve_save_and_probe == INVALID_REGNUM)
frame.sve_save_and_probe = regno;
frame.reg_offset[regno] = offset;
@ -8679,13 +8683,8 @@ aarch64_layout_frame (void)
auto allocate_gpr_slot = [&](unsigned int regno)
{
if (frame.hard_fp_save_and_probe == INVALID_REGNUM)
frame.hard_fp_save_and_probe = regno;
vec_safe_push (frame.saved_gprs, regno);
frame.reg_offset[regno] = offset;
if (frame.wb_push_candidate1 == INVALID_REGNUM)
frame.wb_push_candidate1 = regno;
else if (frame.wb_push_candidate2 == INVALID_REGNUM)
frame.wb_push_candidate2 = regno;
offset += UNITS_PER_WORD;
};
@ -8695,7 +8694,7 @@ aarch64_layout_frame (void)
allocate_gpr_slot (R29_REGNUM);
allocate_gpr_slot (R30_REGNUM);
}
else if (flag_stack_clash_protection
else if ((flag_stack_clash_protection || !frame.is_scs_enabled)
&& known_eq (frame.reg_offset[R30_REGNUM], SLOT_REQUIRED))
/* Put the LR save slot first, since it makes a good choice of probe
for stack clash purposes. The idea is that the link register usually
@ -8714,8 +8713,7 @@ aarch64_layout_frame (void)
for (regno = V0_REGNUM; regno <= V31_REGNUM; regno++)
if (known_eq (frame.reg_offset[regno], SLOT_REQUIRED))
{
if (frame.hard_fp_save_and_probe == INVALID_REGNUM)
frame.hard_fp_save_and_probe = regno;
vec_safe_push (frame.saved_fprs, regno);
/* If there is an alignment gap between integer and fp callee-saves,
allocate the last fp register to it if possible. */
if (regno == last_fp_reg
@ -8728,21 +8726,25 @@ aarch64_layout_frame (void)
}
frame.reg_offset[regno] = offset;
if (frame.wb_push_candidate1 == INVALID_REGNUM)
frame.wb_push_candidate1 = regno;
else if (frame.wb_push_candidate2 == INVALID_REGNUM
&& frame.wb_push_candidate1 >= V0_REGNUM)
frame.wb_push_candidate2 = regno;
offset += vector_save_size;
}
offset = aligned_upper_bound (offset, STACK_BOUNDARY / BITS_PER_UNIT);
auto saved_regs_size = offset - frame.bytes_below_saved_regs;
gcc_assert (known_eq (saved_regs_size, below_hard_fp_saved_regs_size)
|| (frame.hard_fp_save_and_probe != INVALID_REGNUM
&& known_eq (frame.reg_offset[frame.hard_fp_save_and_probe],
frame.bytes_below_hard_fp)));
array_slice<unsigned int> push_regs = (!vec_safe_is_empty (frame.saved_gprs)
? frame.saved_gprs
: frame.saved_fprs);
if (!push_regs.empty ()
&& known_eq (frame.reg_offset[push_regs[0]], frame.bytes_below_hard_fp))
{
frame.hard_fp_save_and_probe = push_regs[0];
frame.wb_push_candidate1 = push_regs[0];
if (push_regs.size () > 1)
frame.wb_push_candidate2 = push_regs[1];
}
else
gcc_assert (known_eq (saved_regs_size, below_hard_fp_saved_regs_size));
/* With stack-clash, a register must be saved in non-leaf functions.
The saving of the bottommost register counts as an implicit probe,
@ -8906,12 +8908,14 @@ aarch64_layout_frame (void)
+ frame.sve_callee_adjust
+ frame.final_adjust, frame.frame_size));
if (!frame.emit_frame_chain && frame.callee_adjust == 0)
if (frame.callee_adjust == 0)
{
/* We've decided not to associate any register saves with the initial
stack allocation. */
frame.wb_pop_candidate1 = frame.wb_push_candidate1 = INVALID_REGNUM;
frame.wb_pop_candidate2 = frame.wb_push_candidate2 = INVALID_REGNUM;
/* We've decided not to do a "real" push and pop. However,
setting up the frame chain is treated as being essentially
a multi-instruction push. */
frame.wb_pop_candidate1 = frame.wb_pop_candidate2 = INVALID_REGNUM;
if (!frame.emit_frame_chain)
frame.wb_push_candidate1 = frame.wb_push_candidate2 = INVALID_REGNUM;
}
frame.laid_out = true;
@ -8926,17 +8930,6 @@ aarch64_register_saved_on_entry (int regno)
return known_ge (cfun->machine->frame.reg_offset[regno], 0);
}
/* Return the next register up from REGNO up to LIMIT for the callee
to save. */
static unsigned
aarch64_next_callee_save (unsigned regno, unsigned limit)
{
while (regno <= limit && !aarch64_register_saved_on_entry (regno))
regno ++;
return regno;
}
/* Push the register number REGNO of mode MODE to the stack with write-back
adjusting the stack by ADJUSTMENT. */
@ -9254,41 +9247,46 @@ aarch64_add_cfa_expression (rtx_insn *insn, rtx reg,
add_reg_note (insn, REG_CFA_EXPRESSION, gen_rtx_SET (mem, reg));
}
/* Emit code to save the callee-saved registers from register number START
to LIMIT to the stack. The stack pointer is currently BYTES_BELOW_SP
bytes above the bottom of the static frame. Skip any write-back
candidates if SKIP_WB is true. HARD_FP_VALID_P is true if the hard
frame pointer has been set up. */
/* Emit code to save the callee-saved registers in REGS. Skip any
write-back candidates if SKIP_WB is true, otherwise consider only
write-back candidates.
The stack pointer is currently BYTES_BELOW_SP bytes above the bottom
of the static frame. HARD_FP_VALID_P is true if the hard frame pointer
has been set up. */
static void
aarch64_save_callee_saves (poly_int64 bytes_below_sp,
unsigned start, unsigned limit, bool skip_wb,
array_slice<unsigned int> regs, bool skip_wb,
bool hard_fp_valid_p)
{
aarch64_frame &frame = cfun->machine->frame;
rtx_insn *insn;
unsigned regno;
unsigned regno2;
rtx anchor_reg = NULL_RTX, ptrue = NULL_RTX;
for (regno = aarch64_next_callee_save (start, limit);
regno <= limit;
regno = aarch64_next_callee_save (regno + 1, limit))
auto skip_save_p = [&](unsigned int regno)
{
rtx reg, mem;
if (cfun->machine->reg_is_wrapped_separately[regno])
return true;
if (skip_wb == (regno == frame.wb_push_candidate1
|| regno == frame.wb_push_candidate2))
return true;
return false;
};
for (unsigned int i = 0; i < regs.size (); ++i)
{
unsigned int regno = regs[i];
poly_int64 offset;
bool frame_related_p = aarch64_emit_cfi_for_reg_p (regno);
if (skip_wb
&& (regno == frame.wb_push_candidate1
|| regno == frame.wb_push_candidate2))
continue;
if (cfun->machine->reg_is_wrapped_separately[regno])
if (skip_save_p (regno))
continue;
machine_mode mode = aarch64_reg_save_mode (regno);
reg = gen_rtx_REG (mode, regno);
rtx reg = gen_rtx_REG (mode, regno);
offset = frame.reg_offset[regno] - bytes_below_sp;
rtx base_rtx = stack_pointer_rtx;
poly_int64 sp_offset = offset;
@ -9315,12 +9313,13 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp,
}
offset -= fp_offset;
}
mem = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset));
rtx mem = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset));
bool need_cfa_note_p = (base_rtx != stack_pointer_rtx);
unsigned int regno2;
if (!aarch64_sve_mode_p (mode)
&& (regno2 = aarch64_next_callee_save (regno + 1, limit)) <= limit
&& !cfun->machine->reg_is_wrapped_separately[regno2]
&& i + 1 < regs.size ()
&& (regno2 = regs[i + 1], !skip_save_p (regno2))
&& known_eq (GET_MODE_SIZE (mode),
frame.reg_offset[regno2] - frame.reg_offset[regno]))
{
@ -9346,6 +9345,7 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp,
}
regno = regno2;
++i;
}
else if (mode == VNx2DImode && BYTES_BIG_ENDIAN)
{
@ -9363,49 +9363,57 @@ aarch64_save_callee_saves (poly_int64 bytes_below_sp,
}
}
/* Emit code to restore the callee registers from register number START
up to and including LIMIT. The stack pointer is currently BYTES_BELOW_SP
bytes above the bottom of the static frame. Skip any write-back
candidates if SKIP_WB is true. Write the appropriate REG_CFA_RESTORE
notes into CFI_OPS. */
/* Emit code to restore the callee registers in REGS, ignoring pop candidates
and any other registers that are handled separately. Write the appropriate
REG_CFA_RESTORE notes into CFI_OPS.
The stack pointer is currently BYTES_BELOW_SP bytes above the bottom
of the static frame. */
static void
aarch64_restore_callee_saves (poly_int64 bytes_below_sp, unsigned start,
unsigned limit, bool skip_wb, rtx *cfi_ops)
aarch64_restore_callee_saves (poly_int64 bytes_below_sp,
array_slice<unsigned int> regs, rtx *cfi_ops)
{
aarch64_frame &frame = cfun->machine->frame;
unsigned regno;
unsigned regno2;
poly_int64 offset;
rtx anchor_reg = NULL_RTX, ptrue = NULL_RTX;
for (regno = aarch64_next_callee_save (start, limit);
regno <= limit;
regno = aarch64_next_callee_save (regno + 1, limit))
auto skip_restore_p = [&](unsigned int regno)
{
bool frame_related_p = aarch64_emit_cfi_for_reg_p (regno);
if (cfun->machine->reg_is_wrapped_separately[regno])
continue;
return true;
rtx reg, mem;
if (regno == frame.wb_pop_candidate1
|| regno == frame.wb_pop_candidate2)
return true;
if (skip_wb
&& (regno == frame.wb_pop_candidate1
|| regno == frame.wb_pop_candidate2))
/* The shadow call stack code restores LR separately. */
if (frame.is_scs_enabled && regno == LR_REGNUM)
return true;
return false;
};
for (unsigned int i = 0; i < regs.size (); ++i)
{
unsigned int regno = regs[i];
bool frame_related_p = aarch64_emit_cfi_for_reg_p (regno);
if (skip_restore_p (regno))
continue;
machine_mode mode = aarch64_reg_save_mode (regno);
reg = gen_rtx_REG (mode, regno);
rtx reg = gen_rtx_REG (mode, regno);
offset = frame.reg_offset[regno] - bytes_below_sp;
rtx base_rtx = stack_pointer_rtx;
if (mode == VNx2DImode && BYTES_BIG_ENDIAN)
aarch64_adjust_sve_callee_save_base (mode, base_rtx, anchor_reg,
offset, ptrue);
mem = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset));
rtx mem = gen_frame_mem (mode, plus_constant (Pmode, base_rtx, offset));
unsigned int regno2;
if (!aarch64_sve_mode_p (mode)
&& (regno2 = aarch64_next_callee_save (regno + 1, limit)) <= limit
&& !cfun->machine->reg_is_wrapped_separately[regno2]
&& i + 1 < regs.size ()
&& (regno2 = regs[i + 1], !skip_restore_p (regno2))
&& known_eq (GET_MODE_SIZE (mode),
frame.reg_offset[regno2] - frame.reg_offset[regno]))
{
@ -9418,6 +9426,7 @@ aarch64_restore_callee_saves (poly_int64 bytes_below_sp, unsigned start,
*cfi_ops = alloc_reg_note (REG_CFA_RESTORE, reg2, *cfi_ops);
regno = regno2;
++i;
}
else if (mode == VNx2DImode && BYTES_BIG_ENDIAN)
emit_insn (gen_aarch64_pred_mov (mode, reg, ptrue, mem));
@ -10239,13 +10248,10 @@ aarch64_expand_prologue (void)
- frame.bytes_above_hard_fp);
gcc_assert (known_ge (chain_offset, 0));
gcc_assert (reg1 == R29_REGNUM && reg2 == R30_REGNUM);
if (callee_adjust == 0)
{
reg1 = R29_REGNUM;
reg2 = R30_REGNUM;
aarch64_save_callee_saves (bytes_below_sp, reg1, reg2,
false, false);
}
aarch64_save_callee_saves (bytes_below_sp, frame.saved_gprs,
false, false);
else
gcc_assert (known_eq (chain_offset, 0));
aarch64_add_offset (Pmode, hard_frame_pointer_rtx,
@ -10283,8 +10289,7 @@ aarch64_expand_prologue (void)
aarch64_emit_stack_tie (hard_frame_pointer_rtx);
}
aarch64_save_callee_saves (bytes_below_sp, R0_REGNUM, R30_REGNUM,
callee_adjust != 0 || emit_frame_chain,
aarch64_save_callee_saves (bytes_below_sp, frame.saved_gprs, true,
emit_frame_chain);
if (maybe_ne (sve_callee_adjust, 0))
{
@ -10295,10 +10300,9 @@ aarch64_expand_prologue (void)
!frame_pointer_needed, false);
bytes_below_sp -= sve_callee_adjust;
}
aarch64_save_callee_saves (bytes_below_sp, P0_REGNUM, P15_REGNUM,
false, emit_frame_chain);
aarch64_save_callee_saves (bytes_below_sp, V0_REGNUM, V31_REGNUM,
callee_adjust != 0 || emit_frame_chain,
aarch64_save_callee_saves (bytes_below_sp, frame.saved_prs, true,
emit_frame_chain);
aarch64_save_callee_saves (bytes_below_sp, frame.saved_fprs, true,
emit_frame_chain);
/* We may need to probe the final adjustment if it is larger than the guard
@ -10344,8 +10348,6 @@ aarch64_expand_epilogue (bool for_sibcall)
poly_int64 bytes_below_hard_fp = frame.bytes_below_hard_fp;
unsigned reg1 = frame.wb_pop_candidate1;
unsigned reg2 = frame.wb_pop_candidate2;
unsigned int last_gpr = (frame.is_scs_enabled
? R29_REGNUM : R30_REGNUM);
rtx cfi_ops = NULL;
rtx_insn *insn;
/* A stack clash protection prologue may not have left EP0_REGNUM or
@ -10409,10 +10411,8 @@ aarch64_expand_epilogue (bool for_sibcall)
/* Restore the vector registers before the predicate registers,
so that we can use P4 as a temporary for big-endian SVE frames. */
aarch64_restore_callee_saves (final_adjust, V0_REGNUM, V31_REGNUM,
callee_adjust != 0, &cfi_ops);
aarch64_restore_callee_saves (final_adjust, P0_REGNUM, P15_REGNUM,
false, &cfi_ops);
aarch64_restore_callee_saves (final_adjust, frame.saved_fprs, &cfi_ops);
aarch64_restore_callee_saves (final_adjust, frame.saved_prs, &cfi_ops);
if (maybe_ne (sve_callee_adjust, 0))
aarch64_add_sp (NULL_RTX, NULL_RTX, sve_callee_adjust, true);
@ -10420,8 +10420,7 @@ aarch64_expand_epilogue (bool for_sibcall)
restore x30, we don't need to restore x30 again in the traditional
way. */
aarch64_restore_callee_saves (final_adjust + sve_callee_adjust,
R0_REGNUM, last_gpr,
callee_adjust != 0, &cfi_ops);
frame.saved_gprs, &cfi_ops);
if (need_barrier_p)
aarch64_emit_stack_tie (stack_pointer_rtx);
@ -17866,36 +17865,6 @@ aarch64_parse_tune (const char *to_parse, const struct processor **res)
return AARCH_PARSE_INVALID_ARG;
}
/* Parse a command-line -param=aarch64-ldp-policy= parameter. VALUE is
the value of the parameter. */
static void
aarch64_parse_ldp_policy (enum aarch64_ldp_policy value,
struct tune_params* tune)
{
if (value == LDP_POLICY_ALWAYS)
tune->ldp_policy_model = tune_params::LDP_POLICY_ALWAYS;
else if (value == LDP_POLICY_NEVER)
tune->ldp_policy_model = tune_params::LDP_POLICY_NEVER;
else if (value == LDP_POLICY_ALIGNED)
tune->ldp_policy_model = tune_params::LDP_POLICY_ALIGNED;
}
/* Parse a command-line -param=aarch64-stp-policy= parameter. VALUE is
the value of the parameter. */
static void
aarch64_parse_stp_policy (enum aarch64_stp_policy value,
struct tune_params* tune)
{
if (value == STP_POLICY_ALWAYS)
tune->stp_policy_model = tune_params::STP_POLICY_ALWAYS;
else if (value == STP_POLICY_NEVER)
tune->stp_policy_model = tune_params::STP_POLICY_NEVER;
else if (value == STP_POLICY_ALIGNED)
tune->stp_policy_model = tune_params::STP_POLICY_ALIGNED;
}
/* Parse TOKEN, which has length LENGTH to see if it is an option
described in FLAG. If it is, return the index bit for that fusion type.
If not, error (printing OPTION_NAME) and return zero. */
@ -18245,12 +18214,10 @@ aarch64_override_options_internal (struct gcc_options *opts)
&aarch64_tune_params);
if (opts->x_aarch64_ldp_policy_param)
aarch64_parse_ldp_policy (opts->x_aarch64_ldp_policy_param,
&aarch64_tune_params);
aarch64_tune_params.ldp_policy_model = opts->x_aarch64_ldp_policy_param;
if (opts->x_aarch64_stp_policy_param)
aarch64_parse_stp_policy (opts->x_aarch64_stp_policy_param,
&aarch64_tune_params);
aarch64_tune_params.stp_policy_model = opts->x_aarch64_stp_policy_param;
/* This target defaults to strict volatile bitfields. */
if (opts->x_flag_strict_volatile_bitfields < 0 && abi_version_at_least (2))
@ -25313,10 +25280,11 @@ aarch64_copy_one_block_and_progress_pointers (rtx *src, rtx *dst,
*dst = aarch64_progress_pointer (*dst);
}
/* Expand a cpymem using the MOPS extension. OPERANDS are taken
from the cpymem pattern. Return true iff we succeeded. */
static bool
aarch64_expand_cpymem_mops (rtx *operands)
/* Expand a cpymem/movmem using the MOPS extension. OPERANDS are taken
from the cpymem/movmem pattern. IS_MEMMOVE is true if this is a memmove
rather than memcpy. Return true iff we succeeded. */
bool
aarch64_expand_cpymem_mops (rtx *operands, bool is_memmove = false)
{
if (!TARGET_MOPS)
return false;
@ -25328,8 +25296,10 @@ aarch64_expand_cpymem_mops (rtx *operands)
rtx dst_mem = replace_equiv_address (operands[0], dst_addr);
rtx src_mem = replace_equiv_address (operands[1], src_addr);
rtx sz_reg = copy_to_mode_reg (DImode, operands[2]);
emit_insn (gen_aarch64_cpymemdi (dst_mem, src_mem, sz_reg));
if (is_memmove)
emit_insn (gen_aarch64_movmemdi (dst_mem, src_mem, sz_reg));
else
emit_insn (gen_aarch64_cpymemdi (dst_mem, src_mem, sz_reg));
return true;
}
@ -26548,30 +26518,18 @@ aarch64_mergeable_load_pair_p (machine_mode mode, rtx mem1, rtx mem2)
bool
aarch64_mem_ok_with_ldpstp_policy_model (rtx mem, bool load, machine_mode mode)
{
/* If we have LDP_POLICY_NEVER, reject the load pair. */
if (load
&& aarch64_tune_params.ldp_policy_model == tune_params::LDP_POLICY_NEVER)
auto policy = (load
? aarch64_tune_params.ldp_policy_model
: aarch64_tune_params.stp_policy_model);
/* If we have AARCH64_LDP_STP_POLICY_NEVER, reject the load pair. */
if (policy == AARCH64_LDP_STP_POLICY_NEVER)
return false;
/* If we have STP_POLICY_NEVER, reject the store pair. */
if (!load
&& aarch64_tune_params.stp_policy_model == tune_params::STP_POLICY_NEVER)
return false;
/* If we have LDP_POLICY_ALIGNED,
/* If we have AARCH64_LDP_STP_POLICY_ALIGNED,
do not emit the load pair unless the alignment is checked to be
at least double the alignment of the type. */
if (load
&& aarch64_tune_params.ldp_policy_model == tune_params::LDP_POLICY_ALIGNED
&& !optimize_function_for_size_p (cfun)
&& MEM_ALIGN (mem) < 2 * GET_MODE_ALIGNMENT (mode))
return false;
/* If we have STP_POLICY_ALIGNED,
do not emit the store pair unless the alignment is checked to be
at least double the alignment of the type. */
if (!load
&& aarch64_tune_params.stp_policy_model == tune_params::STP_POLICY_ALIGNED
if (policy == AARCH64_LDP_STP_POLICY_ALIGNED
&& !optimize_function_for_size_p (cfun)
&& MEM_ALIGN (mem) < 2 * GET_MODE_ALIGNMENT (mode))
return false;


@ -762,7 +762,7 @@ extern enum aarch64_processor aarch64_tune;
#define DEFAULT_PCC_STRUCT_RETURN 0
#ifdef HAVE_POLY_INT_H
#if defined(HAVE_POLY_INT_H) && defined(GCC_VEC_H)
struct GTY (()) aarch64_frame
{
/* The offset from the bottom of the static frame (the bottom of the
@ -770,6 +770,13 @@ struct GTY (()) aarch64_frame
needed. */
poly_int64 reg_offset[LAST_SAVED_REGNUM + 1];
/* The list of GPRs, FPRs and predicate registers that have nonnegative
entries in reg_offset. The registers are listed in order of
increasing offset (rather than increasing register number). */
vec<unsigned, va_gc_atomic> *saved_gprs;
vec<unsigned, va_gc_atomic> *saved_fprs;
vec<unsigned, va_gc_atomic> *saved_prs;
/* The number of extra stack bytes taken up by register varargs.
This area is allocated by the callee at the very top of the
frame. This value is rounded up to a multiple of


@ -339,39 +339,24 @@ Target Joined UInteger Var(aarch64_vect_unroll_limit) Init(4) Param
Limit how much the autovectorizer may unroll a loop.
-param=aarch64-ldp-policy=
Target Joined Var(aarch64_ldp_policy_param) Enum(aarch64_ldp_policy) Init(LDP_POLICY_DEFAULT) Param
Target Joined Var(aarch64_ldp_policy_param) Enum(aarch64_ldp_stp_policy) Init(AARCH64_LDP_STP_POLICY_DEFAULT) Param
--param=aarch64-ldp-policy=[default|always|never|aligned] Fine-grained policy for load pairs.
Enum
Name(aarch64_ldp_policy) Type(enum aarch64_ldp_policy) UnknownError(unknown aarch64_ldp_policy mode %qs)
EnumValue
Enum(aarch64_ldp_policy) String(default) Value(LDP_POLICY_DEFAULT)
EnumValue
Enum(aarch64_ldp_policy) String(always) Value(LDP_POLICY_ALWAYS)
EnumValue
Enum(aarch64_ldp_policy) String(never) Value(LDP_POLICY_NEVER)
EnumValue
Enum(aarch64_ldp_policy) String(aligned) Value(LDP_POLICY_ALIGNED)
-param=aarch64-stp-policy=
Target Joined Var(aarch64_stp_policy_param) Enum(aarch64_stp_policy) Init(STP_POLICY_DEFAULT) Param
Target Joined Var(aarch64_stp_policy_param) Enum(aarch64_ldp_stp_policy) Init(AARCH64_LDP_STP_POLICY_DEFAULT) Param
--param=aarch64-stp-policy=[default|always|never|aligned] Fine-grained policy for store pairs.
Enum
Name(aarch64_stp_policy) Type(enum aarch64_stp_policy) UnknownError(unknown aarch64_stp_policy mode %qs)
Name(aarch64_ldp_stp_policy) Type(enum aarch64_ldp_stp_policy) UnknownError(unknown LDP/STP policy %qs)
EnumValue
Enum(aarch64_stp_policy) String(default) Value(STP_POLICY_DEFAULT)
Enum(aarch64_ldp_stp_policy) String(default) Value(AARCH64_LDP_STP_POLICY_DEFAULT)
EnumValue
Enum(aarch64_stp_policy) String(always) Value(STP_POLICY_ALWAYS)
Enum(aarch64_ldp_stp_policy) String(always) Value(AARCH64_LDP_STP_POLICY_ALWAYS)
EnumValue
Enum(aarch64_stp_policy) String(never) Value(STP_POLICY_NEVER)
Enum(aarch64_ldp_stp_policy) String(never) Value(AARCH64_LDP_STP_POLICY_NEVER)
EnumValue
Enum(aarch64_stp_policy) String(aligned) Value(STP_POLICY_ALIGNED)
Enum(aarch64_ldp_stp_policy) String(aligned) Value(AARCH64_LDP_STP_POLICY_ALIGNED)


@ -1428,7 +1428,8 @@
(V4HF "V8HF") (V8HF "V8HF")
(V2SF "V4SF") (V4SF "V4SF")
(V2DF "V2DF") (SI "V4SI")
(HI "V8HI") (QI "V16QI")])
(HI "V8HI") (QI "V16QI")
(SF "V4SF") (DF "V2DF")])
;; Half modes of all vector modes.
(define_mode_attr VHALF [(V8QI "V4QI") (V16QI "V8QI")


@ -17,12 +17,6 @@
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
/* First target dependent ARC if-conversion pass. */
INSERT_PASS_AFTER (pass_delay_slots, 1, pass_arc_ifcvt);
/* Second target dependent ARC if-conversion pass. */
INSERT_PASS_BEFORE (pass_shorten_branches, 1, pass_arc_ifcvt);
/* Find annulled delay insns and convert them to use the appropriate
predicate. This allows branch shortening to size up these
instructions properly. */


@ -35,7 +35,7 @@ extern const char *arc_output_libcall (const char *);
extern int arc_output_commutative_cond_exec (rtx *operands, bool);
extern bool arc_expand_cpymem (rtx *operands);
extern bool prepare_move_operands (rtx *operands, machine_mode mode);
extern void emit_shift (enum rtx_code, rtx, rtx, rtx);
extern bool arc_pre_reload_split (void);
extern void arc_expand_atomic_op (enum rtx_code, rtx, rtx, rtx, rtx, rtx);
extern void arc_split_compare_and_swap (rtx *);
extern void arc_expand_compare_and_swap (rtx *);
@ -52,8 +52,6 @@ extern bool arc_can_use_return_insn (void);
extern bool arc_split_move_p (rtx *);
#endif /* RTX_CODE */
extern bool arc_ccfsm_branch_deleted_p (void);
extern void arc_ccfsm_record_branch_deleted (void);
void arc_asm_output_aligned_decl_local (FILE *, tree, const char *,
unsigned HOST_WIDE_INT,
@ -67,7 +65,6 @@ extern bool arc_raw_symbolic_reference_mentioned_p (rtx, bool);
extern bool arc_is_longcall_p (rtx);
extern bool arc_is_shortcall_p (rtx);
extern bool valid_brcc_with_delay_p (rtx *);
extern bool arc_ccfsm_cond_exec_p (void);
extern rtx disi_highpart (rtx);
extern int arc_adjust_insn_length (rtx_insn *, int, bool);
extern int arc_corereg_hazard (rtx, rtx);
@ -76,15 +73,10 @@ extern int arc_write_ext_corereg (rtx);
extern rtx gen_acc1 (void);
extern rtx gen_acc2 (void);
extern bool arc_branch_size_unknown_p (void);
struct arc_ccfsm;
extern void arc_ccfsm_record_condition (rtx, bool, rtx_insn *,
struct arc_ccfsm *);
extern void arc_expand_prologue (void);
extern void arc_expand_epilogue (int);
extern void arc_init_expanders (void);
extern int arc_check_millicode (rtx op, int offset, int load_p);
extern void arc_clear_unalign (void);
extern void arc_toggle_unalign (void);
extern void split_subsi (rtx *);
extern void arc_split_move (rtx *);
extern const char *arc_short_long (rtx_insn *insn, const char *, const char *);
@ -106,5 +98,4 @@ extern bool arc_is_jli_call_p (rtx);
extern void arc_file_end (void);
extern bool arc_is_secure_call_p (rtx);
rtl_opt_pass * make_pass_arc_ifcvt (gcc::context *ctxt);
rtl_opt_pass * make_pass_arc_predicate_delay_insns (gcc::context *ctxt);


@ -1312,20 +1312,6 @@ do { \
/* Defined to also emit an .align in elfos.h. We don't want that. */
#undef ASM_OUTPUT_CASE_LABEL
/* ADDR_DIFF_VECs are in the text section and thus can affect the
current alignment. */
#define ASM_OUTPUT_CASE_END(FILE, NUM, JUMPTABLE) \
do \
{ \
if (GET_CODE (PATTERN (JUMPTABLE)) == ADDR_DIFF_VEC \
&& ((GET_MODE_SIZE (as_a <scalar_int_mode> \
(GET_MODE (PATTERN (JUMPTABLE)))) \
* XVECLEN (PATTERN (JUMPTABLE), 1) + 1) \
& 2)) \
arc_toggle_unalign (); \
} \
while (0)
#define JUMP_ALIGN(LABEL) (arc_size_opt_level < 2 ? 2 : 0)
#define LABEL_ALIGN_AFTER_BARRIER(LABEL) \
(JUMP_ALIGN(LABEL) \
@ -1346,8 +1332,6 @@ do { \
#define ASM_OUTPUT_ALIGN(FILE,LOG) \
do { \
if ((LOG) != 0) fprintf (FILE, "\t.align %d\n", 1 << (LOG)); \
if ((LOG) > 1) \
arc_clear_unalign (); \
} while (0)
/* ASM_OUTPUT_ALIGNED_DECL_LOCAL (STREAM, DECL, NAME, SIZE, ALIGNMENT)


@ -547,16 +547,6 @@
(const_string "false")]
(const_string "true")))
;; Instructions that we can put into a delay slot and conditionalize.
(define_attr "cond_delay_insn" "no,yes"
(cond [(eq_attr "cond" "!canuse") (const_string "no")
(eq_attr "type" "call,branch,uncond_branch,jump,brcc")
(const_string "no")
(match_test "find_reg_note (insn, REG_SAVE_NOTE, GEN_INT (2))")
(const_string "no")
(eq_attr "length" "2,4") (const_string "yes")]
(const_string "no")))
(define_attr "in_ret_delay_slot" "no,yes"
(cond [(eq_attr "in_delay_slot" "false")
(const_string "no")
@ -565,19 +555,6 @@
(const_string "no")]
(const_string "yes")))
(define_attr "cond_ret_delay_insn" "no,yes"
(cond [(eq_attr "in_ret_delay_slot" "no") (const_string "no")
(eq_attr "cond_delay_insn" "no") (const_string "no")]
(const_string "yes")))
(define_attr "annul_ret_delay_insn" "no,yes"
(cond [(eq_attr "cond_ret_delay_insn" "yes") (const_string "yes")
(match_test "TARGET_AT_DBR_CONDEXEC") (const_string "no")
(eq_attr "type" "!call,branch,uncond_branch,jump,brcc,return,sfunc")
(const_string "yes")]
(const_string "no")))
;; Delay slot definition for ARCompact ISA
;; ??? FIXME:
;; When outputting an annul-true insn eligible for cond-exec
@ -590,14 +567,7 @@
(eq_attr "in_call_delay_slot" "true")
(nil)])
(define_delay (and (match_test "!TARGET_AT_DBR_CONDEXEC")
(eq_attr "type" "brcc"))
[(eq_attr "in_delay_slot" "true")
(eq_attr "in_delay_slot" "true")
(nil)])
(define_delay (and (match_test "TARGET_AT_DBR_CONDEXEC")
(eq_attr "type" "brcc"))
(define_delay (eq_attr "type" "brcc")
[(eq_attr "in_delay_slot" "true")
(nil)
(nil)])
@ -605,39 +575,26 @@
(define_delay
(eq_attr "type" "return")
[(eq_attr "in_ret_delay_slot" "yes")
(eq_attr "annul_ret_delay_insn" "yes")
(eq_attr "cond_ret_delay_insn" "yes")])
(nil)
(nil)])
(define_delay (eq_attr "type" "loop_end")
[(eq_attr "in_delay_slot" "true")
(eq_attr "in_delay_slot" "true")
(nil)
(nil)])
;; For ARC600, unexposing the delay slot incurs a penalty also in the
;; non-taken case, so the only meaningful way to have an annull-true
;; The only meaningful way to have an annull-true
;; filled delay slot is to conditionalize the delay slot insn.
(define_delay (and (match_test "TARGET_AT_DBR_CONDEXEC")
(eq_attr "type" "branch,uncond_branch,jump")
(define_delay (and (eq_attr "type" "branch,uncond_branch,jump")
(match_test "!optimize_size"))
[(eq_attr "in_delay_slot" "true")
(eq_attr "cond_delay_insn" "yes")
(eq_attr "cond_delay_insn" "yes")])
;; For ARC700, anything goes for annulled-true insns, since there is no
;; penalty for the unexposed delay slot when the branch is not taken,
;; however, we must avoid things that have a delay slot themselvese to
;; avoid confusing gcc.
(define_delay (and (match_test "!TARGET_AT_DBR_CONDEXEC")
(eq_attr "type" "branch,uncond_branch,jump")
(match_test "!optimize_size"))
[(eq_attr "in_delay_slot" "true")
(eq_attr "type" "!call,branch,uncond_branch,jump,brcc,return,sfunc")
(eq_attr "cond_delay_insn" "yes")])
(nil)
(nil)])
;; -mlongcall -fpic sfuncs use r12 to load the function address
(define_delay (eq_attr "type" "sfunc")
[(eq_attr "in_sfunc_delay_slot" "true")
(eq_attr "in_sfunc_delay_slot" "true")
(nil)
(nil)])
;; ??? need to use a working strategy for canuse_limm:
;; - either canuse_limm is not eligible for delay slots, and has no
@ -712,19 +669,19 @@ archs4x, archs4xd"
|| (satisfies_constraint_Cm3 (operands[1])
&& memory_operand (operands[0], QImode))"
"@
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
ldb%? %0,%1%&
stb%? %1,%0%&
ldb%? %0,%1%&
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
ldb%? %0,%1
stb%? %1,%0
ldb%? %0,%1
xldb%U1 %0,%1
ldb%U1%V1 %0,%1
xstb%U0 %1,%0
@ -756,19 +713,19 @@ archs4x, archs4xd"
|| (satisfies_constraint_Cm3 (operands[1])
&& memory_operand (operands[0], HImode))"
"@
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1%&
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1%&
mov%? %0,%1
mov%? %0,%1
ld%_%? %0,%1%&
st%_%? %1,%0%&
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
mov%? %0,%1
ld%_%? %0,%1
st%_%? %1,%0
xld%_%U1 %0,%1
ld%_%U1%V1 %0,%1
xst%_%U0 %1,%0
@ -822,15 +779,15 @@ archs4x, archs4xd"
mov%?\\t%0,%j1 ;14
ld%?\\t%0,%1 ;15
st%?\\t%1,%0 ;16
* return arc_short_long (insn, \"push%?\\t%1%&\", \"st%U0\\t%1,%0%&\");
* return arc_short_long (insn, \"pop%?\\t%0%&\", \"ld%U1\\t%0,%1%&\");
* return arc_short_long (insn, \"push%?\\t%1\", \"st%U0\\t%1,%0\");
* return arc_short_long (insn, \"pop%?\\t%0\", \"ld%U1\\t%0,%1\");
ld%?\\t%0,%1 ;19
xld%U1\\t%0,%1 ;20
ld%?\\t%0,%1 ;21
ld%?\\t%0,%1 ;22
ld%U1%V1\\t%0,%1 ;23
xst%U0\\t%1,%0 ;24
st%?\\t%1,%0%& ;25
st%?\\t%1,%0 ;25
st%U0%V0\\t%1,%0 ;26
st%U0%V0\\t%1,%0 ;37
st%U0%V0\\t%1,%0 ;28"
@ -1034,9 +991,9 @@ archs4x, archs4xd"
case 1:
return \"btst%? %1,%z2\";
case 4:
return \"bmsk%?.f 0,%1,%Z2%&\";
return \"bmsk%?.f 0,%1,%Z2\";
case 5:
return \"bclr%?.f 0,%1,%M2%&\";
return \"bclr%?.f 0,%1,%M2\";
case 6:
return \"asr.f 0,%1,%p2\";
default:
@ -1145,34 +1102,33 @@ archs4x, archs4xd"
; the combiner needs this pattern
(define_insn "*addsi_compare"
[(set (reg:CC_ZN CC_REG)
(compare:CC_ZN (match_operand:SI 0 "register_operand" "c")
(neg:SI (match_operand:SI 1 "register_operand" "c"))))]
(compare:CC_ZN (neg:SI
(match_operand:SI 0 "register_operand" "r"))
(match_operand:SI 1 "register_operand" "r")))]
""
"add.f 0,%0,%1"
"add.f\\t0,%0,%1"
[(set_attr "cond" "set")
(set_attr "type" "compare")
(set_attr "length" "4")])
; for flag setting 'add' instructions like if (a+b < a) { ...}
; the combiner needs this pattern
(define_insn "addsi_compare_2"
[(set (reg:CC_C CC_REG)
(compare:CC_C (plus:SI (match_operand:SI 0 "register_operand" "c,c")
(match_operand:SI 1 "nonmemory_operand" "cL,Cal"))
(match_dup 0)))]
(compare:CC_C (plus:SI (match_operand:SI 0 "register_operand" "r,r")
(match_operand:SI 1 "nonmemory_operand" "rL,Cal"))
(match_dup 0)))]
""
"add.f 0,%0,%1"
"add.f\\t0,%0,%1"
[(set_attr "cond" "set")
(set_attr "type" "compare")
(set_attr "length" "4,8")])
(define_insn "*addsi_compare_3"
[(set (reg:CC_C CC_REG)
(compare:CC_C (plus:SI (match_operand:SI 0 "register_operand" "c")
(match_operand:SI 1 "register_operand" "c"))
(match_dup 1)))]
(compare:CC_C (plus:SI (match_operand:SI 0 "register_operand" "r")
(match_operand:SI 1 "register_operand" "r"))
(match_dup 1)))]
""
"add.f 0,%0,%1"
"add.f\\t0,%0,%1"
[(set_attr "cond" "set")
(set_attr "type" "compare")
(set_attr "length" "4")])
@ -1960,7 +1916,7 @@ archs4x, archs4xd"
"@
sex%_%?\\t%0,%1
sex%_\\t%0,%1
ldh%?.x\\t%0,%1%&
ldh%?.x\\t%0,%1
ld%_.x%U1%V1\\t%0,%1
ld%_.x%U1%V1\\t%0,%1"
[(set_attr "type" "unary,unary,load,load,load")
@ -1988,7 +1944,7 @@ archs4x, archs4xd"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,w,w")
(abs:SI (match_operand:SI 1 "nonmemory_operand" "q,cL,Cal")))]
""
"abs%? %0,%1%&"
"abs%? %0,%1"
[(set_attr "type" "two_cycle_core")
(set_attr "length" "*,4,8")
(set_attr "iscompact" "true,false,false")])
@ -2286,7 +2242,7 @@ archs4x, archs4xd"
(sign_extend:DI (match_operand:SI 0 "register_operand" "%q, c,c, c"))
(sign_extend:DI (match_operand:SI 1 "nonmemory_operand" "q,cL,L,C32"))))]
"TARGET_MUL64_SET"
"mul64%? \t0, %0, %1%&"
"mul64%? \t0, %0, %1"
[(set_attr "length" "*,4,4,8")
(set_attr "iscompact" "maybe,false,false,false")
(set_attr "type" "multi,multi,multi,multi")
@ -2321,7 +2277,7 @@ archs4x, archs4xd"
(zero_extend:DI (match_operand:SI 0 "register_operand" "%c,c,c"))
(zero_extend:DI (match_operand:SI 1 "nonmemory_operand" "cL,L,C32"))))]
"TARGET_MUL64_SET"
"mulu64%? \t0, %0, %1%&"
"mulu64%? \t0, %0, %1"
[(set_attr "length" "4,4,8")
(set_attr "iscompact" "false")
(set_attr "type" "umulti")
@ -2902,8 +2858,8 @@ archs4x, archs4xd"
"register_operand (operands[1], SImode)
|| register_operand (operands[2], SImode)"
"@
sub%?\\t%0,%1,%2%&
sub%?\\t%0,%1,%2%&
sub%?\\t%0,%1,%2
sub%?\\t%0,%1,%2
sub%?\\t%0,%1,%2
rsub%?\\t%0,%2,%1
sub\\t%0,%1,%2
@ -3211,26 +3167,26 @@ archs4x, archs4xd"
switch (which_alternative)
{
case 0: case 5: case 10: case 11: case 16: case 17: case 18:
return "and%? %0,%1,%2%&";
return "and%? %0,%1,%2";
case 1: case 6:
return "and%? %0,%2,%1%&";
return "and%? %0,%2,%1";
case 2:
return "bmsk%? %0,%1,%Z2%&";
return "bmsk%? %0,%1,%Z2";
case 7: case 12:
if (satisfies_constraint_C2p (operands[2]))
{
operands[2] = GEN_INT ((~INTVAL (operands[2])));
return "bmskn%? %0,%1,%Z2%&";
return "bmskn%? %0,%1,%Z2";
}
else
{
return "bmsk%? %0,%1,%Z2%&";
return "bmsk%? %0,%1,%Z2";
}
case 3: case 8: case 13:
return "bclr%? %0,%1,%M2%&";
return "bclr%? %0,%1,%M2";
case 4:
return (INTVAL (operands[2]) == 0xff
? "extb%? %0,%1%&" : "ext%_%? %0,%1%&");
? "extb%? %0,%1" : "ext%_%? %0,%1");
case 9: case 14: return \"bic%? %0,%1,%n2-1\";
case 15:
return "movb.cl %0,%1,%p2,%p2,%x2";
@ -3288,7 +3244,7 @@ archs4x, archs4xd"
(match_operand:SI 2 "nonmemory_operand" "0,0,0,0,r,r,Cal")))]
""
"@
bic%?\\t%0, %2, %1%& ;;constraint 0
bic%?\\t%0, %2, %1 ;;constraint 0
bic%?\\t%0,%2,%1 ;;constraint 1
bic\\t%0,%2,%1 ;;constraint 2, FIXME: will it ever get generated ???
bic%?\\t%0,%2,%1 ;;constraint 3, FIXME: will it ever get generated ???
@ -3343,9 +3299,9 @@ archs4x, archs4xd"
switch (which_alternative)
{
case 0: case 2: case 5: case 6: case 8: case 9: case 10:
return \"xor%?\\t%0,%1,%2%&\";
return \"xor%?\\t%0,%1,%2\";
case 1: case 3:
return \"xor%?\\t%0,%2,%1%&\";
return \"xor%?\\t%0,%2,%1\";
case 4: case 7:
return \"bxor%?\\t%0,%1,%z2\";
default:
@ -3362,7 +3318,7 @@ archs4x, archs4xd"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,q,r,r")
(neg:SI (match_operand:SI 1 "register_operand" "0,q,0,r")))]
""
"neg%?\\t%0,%1%&"
"neg%?\\t%0,%1"
[(set_attr "type" "unary")
(set_attr "iscompact" "maybe,true,false,false")
(set_attr "predicable" "no,no,yes,no")])
@ -3371,7 +3327,7 @@ archs4x, archs4xd"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,w")
(not:SI (match_operand:SI 1 "register_operand" "q,c")))]
""
"not%? %0,%1%&"
"not%? %0,%1"
[(set_attr "type" "unary,unary")
(set_attr "iscompact" "true,false")])
@ -3401,70 +3357,19 @@ archs4x, archs4xd"
[(set (match_operand:SI 0 "dest_reg_operand" "")
(ashift:SI (match_operand:SI 1 "register_operand" "")
(match_operand:SI 2 "nonmemory_operand" "")))]
""
"
{
if (!TARGET_BARREL_SHIFTER)
{
emit_shift (ASHIFT, operands[0], operands[1], operands[2]);
DONE;
}
}")
"")
(define_expand "ashrsi3"
[(set (match_operand:SI 0 "dest_reg_operand" "")
(ashiftrt:SI (match_operand:SI 1 "register_operand" "")
(match_operand:SI 2 "nonmemory_operand" "")))]
""
"
{
if (!TARGET_BARREL_SHIFTER)
{
emit_shift (ASHIFTRT, operands[0], operands[1], operands[2]);
DONE;
}
}")
"")
(define_expand "lshrsi3"
[(set (match_operand:SI 0 "dest_reg_operand" "")
(lshiftrt:SI (match_operand:SI 1 "register_operand" "")
(match_operand:SI 2 "nonmemory_operand" "")))]
""
"
{
if (!TARGET_BARREL_SHIFTER)
{
emit_shift (LSHIFTRT, operands[0], operands[1], operands[2]);
DONE;
}
}")
(define_insn "shift_si3"
[(set (match_operand:SI 0 "dest_reg_operand" "=r")
(match_operator:SI 3 "shift4_operator"
[(match_operand:SI 1 "register_operand" "0")
(match_operand:SI 2 "const_int_operand" "n")]))
(clobber (match_scratch:SI 4 "=&r"))
(clobber (reg:CC CC_REG))
]
"!TARGET_BARREL_SHIFTER"
"* return output_shift (operands);"
[(set_attr "type" "shift")
(set_attr "length" "16")])
(define_insn "shift_si3_loop"
[(set (match_operand:SI 0 "dest_reg_operand" "=r,r")
(match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "register_operand" "0,0")
(match_operand:SI 2 "nonmemory_operand" "rn,Cal")]))
(clobber (match_scratch:SI 4 "=X,X"))
(clobber (reg:SI LP_COUNT))
(clobber (reg:CC CC_REG))
]
"!TARGET_BARREL_SHIFTER"
"* return output_shift (operands);"
[(set_attr "type" "shift")
(set_attr "length" "16,20")])
"")
; asl, asr, lsr patterns:
; There is no point in including an 'I' alternative since only the lowest 5
@ -3499,18 +3404,215 @@ archs4x, archs4xd"
(set_attr "cond" "canuse,nocond,canuse,canuse,nocond,nocond")])
(define_insn "*lshrsi3_insn"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,q, q, r, r, r")
(lshiftrt:SI (match_operand:SI 1 "nonmemory_operand" "!0,q, 0, 0, r,rCal")
(match_operand:SI 2 "nonmemory_operand" "N,N,qM,rL,rL,rCal")))]
[(set (match_operand:SI 0 "dest_reg_operand" "=q, q, r, r, r")
(lshiftrt:SI (match_operand:SI 1 "nonmemory_operand" "q, 0, 0, r,rCal")
(match_operand:SI 2 "nonmemory_operand" "N,qM,rL,rL,rCal")))]
"TARGET_BARREL_SHIFTER
&& (register_operand (operands[1], SImode)
|| register_operand (operands[2], SImode))"
"*return (which_alternative <= 1 && !arc_ccfsm_cond_exec_p ()
? \"lsr%?\\t%0,%1%&\" : \"lsr%?\\t%0,%1,%2%&\");"
"@
lsr_s\\t%0,%1
lsr_s\\t%0,%1,%2
lsr%?\\t%0,%1,%2
lsr%?\\t%0,%1,%2
lsr%?\\t%0,%1,%2"
[(set_attr "type" "shift")
(set_attr "iscompact" "maybe,maybe,maybe,false,false,false")
(set_attr "predicable" "no,no,no,yes,no,no")
(set_attr "cond" "canuse,nocond,canuse,canuse,nocond,nocond")])
(set_attr "iscompact" "maybe,maybe,false,false,false")
(set_attr "predicable" "no,no,yes,no,no")
(set_attr "cond" "nocond,canuse,canuse,nocond,nocond")])
;; Split asl dst,1,src into bset dst,0,src.
(define_insn_and_split "*ashlsi3_1"
[(set (match_operand:SI 0 "dest_reg_operand")
(ashift:SI (const_int 1)
(match_operand:SI 1 "nonmemory_operand")))]
"!TARGET_BARREL_SHIFTER
&& arc_pre_reload_split ()"
"#"
"&& 1"
[(set (match_dup 0)
(ior:SI (ashift:SI (const_int 1) (match_dup 1))
(const_int 0)))]
""
[(set_attr "type" "shift")
(set_attr "length" "8")])
(define_insn_and_split "*ashlsi3_nobs"
[(set (match_operand:SI 0 "dest_reg_operand")
(ashift:SI (match_operand:SI 1 "register_operand")
(match_operand:SI 2 "nonmemory_operand")))]
"!TARGET_BARREL_SHIFTER
&& operands[2] != const1_rtx
&& arc_pre_reload_split ()"
"#"
"&& 1"
[(const_int 0)]
{
if (CONST_INT_P (operands[2]))
{
int n = INTVAL (operands[2]) & 0x1f;
if (n <= 9)
{
if (n == 0)
emit_move_insn (operands[0], operands[1]);
else if (n <= 2)
{
emit_insn (gen_ashlsi3_cnt1 (operands[0], operands[1]));
if (n == 2)
emit_insn (gen_ashlsi3_cnt1 (operands[0], operands[0]));
}
else
{
rtx zero = gen_reg_rtx (SImode);
emit_move_insn (zero, const0_rtx);
emit_insn (gen_add_shift (operands[0], operands[1],
GEN_INT (3), zero));
for (n -= 3; n >= 3; n -= 3)
emit_insn (gen_add_shift (operands[0], operands[0],
GEN_INT (3), zero));
if (n == 2)
emit_insn (gen_add_shift (operands[0], operands[0],
const2_rtx, zero));
else if (n)
emit_insn (gen_ashlsi3_cnt1 (operands[0], operands[0]));
}
DONE;
}
else if (n >= 29)
{
if (n < 31)
{
if (n == 29)
{
emit_insn (gen_andsi3_i (operands[0], operands[1],
GEN_INT (7)));
emit_insn (gen_rotrsi3_cnt1 (operands[0], operands[0]));
}
else
emit_insn (gen_andsi3_i (operands[0], operands[1],
GEN_INT (3)));
emit_insn (gen_rotrsi3_cnt1 (operands[0], operands[0]));
}
else
emit_insn (gen_andsi3_i (operands[0], operands[1], const1_rtx));
emit_insn (gen_rotrsi3_cnt1 (operands[0], operands[0]));
DONE;
}
}
rtx shift = gen_rtx_fmt_ee (ASHIFT, SImode, operands[1], operands[2]);
emit_insn (gen_shift_si3_loop (operands[0], operands[1],
operands[2], shift));
DONE;
})
(define_insn_and_split "*ashlri3_nobs"
[(set (match_operand:SI 0 "dest_reg_operand")
(ashiftrt:SI (match_operand:SI 1 "register_operand")
(match_operand:SI 2 "nonmemory_operand")))]
"!TARGET_BARREL_SHIFTER
&& operands[2] != const1_rtx
&& arc_pre_reload_split ()"
"#"
"&& 1"
[(const_int 0)]
{
if (CONST_INT_P (operands[2]))
{
int n = INTVAL (operands[2]) & 0x1f;
if (n <= 4)
{
if (n != 0)
{
emit_insn (gen_ashrsi3_cnt1 (operands[0], operands[1]));
while (--n > 0)
emit_insn (gen_ashrsi3_cnt1 (operands[0], operands[0]));
}
else
emit_move_insn (operands[0], operands[1]);
DONE;
}
}
rtx pat;
rtx shift = gen_rtx_fmt_ee (ASHIFTRT, SImode, operands[1], operands[2]);
if (shiftr4_operator (shift, SImode))
pat = gen_shift_si3 (operands[0], operands[1], operands[2], shift);
else
pat = gen_shift_si3_loop (operands[0], operands[1], operands[2], shift);
emit_insn (pat);
DONE;
})
(define_insn_and_split "*lshrsi3_nobs"
[(set (match_operand:SI 0 "dest_reg_operand")
(lshiftrt:SI (match_operand:SI 1 "register_operand")
(match_operand:SI 2 "nonmemory_operand")))]
"!TARGET_BARREL_SHIFTER
&& operands[2] != const1_rtx
&& arc_pre_reload_split ()"
"#"
"&& 1"
[(const_int 0)]
{
if (CONST_INT_P (operands[2]))
{
int n = INTVAL (operands[2]) & 0x1f;
if (n <= 4)
{
if (n != 0)
{
emit_insn (gen_lshrsi3_cnt1 (operands[0], operands[1]));
while (--n > 0)
emit_insn (gen_lshrsi3_cnt1 (operands[0], operands[0]));
}
else
emit_move_insn (operands[0], operands[1]);
DONE;
}
}
rtx pat;
rtx shift = gen_rtx_fmt_ee (LSHIFTRT, SImode, operands[1], operands[2]);
if (shiftr4_operator (shift, SImode))
pat = gen_shift_si3 (operands[0], operands[1], operands[2], shift);
else
pat = gen_shift_si3_loop (operands[0], operands[1], operands[2], shift);
emit_insn (pat);
DONE;
})
;; shift_si3 appears after {ashr,lshr}si3_nobs
(define_insn "shift_si3"
[(set (match_operand:SI 0 "dest_reg_operand" "=r")
(match_operator:SI 3 "shiftr4_operator"
[(match_operand:SI 1 "register_operand" "0")
(match_operand:SI 2 "const_int_operand" "n")]))
(clobber (match_scratch:SI 4 "=&r"))
(clobber (reg:CC CC_REG))
]
"!TARGET_BARREL_SHIFTER
&& operands[2] != const1_rtx"
"* return output_shift (operands);"
[(set_attr "type" "shift")
(set_attr "length" "16")])
;; shift_si3_loop appears after {ashl,ashr,lshr}si3_nobs
(define_insn "shift_si3_loop"
[(set (match_operand:SI 0 "dest_reg_operand" "=r,r")
(match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "register_operand" "0,0")
(match_operand:SI 2 "nonmemory_operand" "rn,Cal")]))
(clobber (reg:SI LP_COUNT))
(clobber (reg:CC CC_REG))
]
"!TARGET_BARREL_SHIFTER
&& operands[2] != const1_rtx"
"* return output_shift (operands);"
[(set_attr "type" "shift")
(set_attr "length" "16,20")])
;; Rotate instructions.
(define_insn "rotrsi3"
[(set (match_operand:SI 0 "dest_reg_operand" "=r, r, r")
@ -3550,7 +3652,7 @@ archs4x, archs4xd"
(compare:CC (match_operand:SI 0 "register_operand" "q, q, h, c, c, q,c")
(match_operand:SI 1 "nonmemory_operand" "cO,hO,Cm1,cI,cL,Cal,Cal")))]
""
"cmp%? %0,%B1%&"
"cmp%? %0,%B1"
[(set_attr "type" "compare")
(set_attr "iscompact" "true,true,true,false,false,true_limm,false")
(set_attr "predicable" "no,no,no,no,yes,no,yes")
@ -3563,7 +3665,7 @@ archs4x, archs4xd"
(compare:CC_ZN (match_operand:SI 0 "register_operand" "q,c")
(const_int 0)))]
""
"tst%? %0,%0%&"
"tst%? %0,%0"
[(set_attr "type" "compare,compare")
(set_attr "iscompact" "true,false")
(set_attr "predicable" "no,yes")
@ -3592,7 +3694,7 @@ archs4x, archs4xd"
(match_operand:SI 1 "p2_immediate_operand" "O,n")))]
""
"@
cmp%? %0,%1%&
cmp%? %0,%1
bxor.f 0,%0,%z1"
[(set_attr "type" "compare,compare")
(set_attr "iscompact" "true,false")
@ -3604,7 +3706,7 @@ archs4x, archs4xd"
(compare:CC_C (match_operand:SI 0 "register_operand" "q, q, h, c, q, c")
(match_operand:SI 1 "nonmemory_operand" "cO,hO,Cm1,cI,Cal,Cal")))]
""
"cmp%? %0,%1%&"
"cmp%? %0,%1"
[(set_attr "type" "compare")
(set_attr "iscompact" "true,true,true,false,true_limm,false")
(set_attr "cond" "set")
@ -3658,12 +3760,24 @@ archs4x, archs4xd"
(define_expand "scc_insn"
[(set (match_operand:SI 0 "dest_reg_operand" "=w") (match_operand:SI 1 ""))])
(define_mode_iterator CC_ltu [CC_C CC])
(define_insn "scc_ltu_<mode>"
[(set (match_operand:SI 0 "dest_reg_operand" "=w")
(ltu:SI (reg:CC_ltu CC_REG) (const_int 0)))]
""
"rlc\\t%0,0"
[(set_attr "type" "shift")
(set_attr "predicable" "no")
(set_attr "length" "4")])
(define_insn_and_split "*scc_insn"
[(set (match_operand:SI 0 "dest_reg_operand" "=w")
(match_operator:SI 1 "proper_comparison_operator" [(reg CC_REG) (const_int 0)]))]
""
"#"
"reload_completed"
"reload_completed
&& GET_CODE (operands[1]) != LTU"
[(set (match_dup 0) (const_int 1))
(cond_exec
(match_dup 1)
@ -3787,19 +3901,10 @@ archs4x, archs4xd"
""
"*
{
if (arc_ccfsm_branch_deleted_p ())
{
arc_ccfsm_record_branch_deleted ();
return \"; branch deleted, next insns conditionalized\";
}
else
{
arc_ccfsm_record_condition (operands[1], false, insn, 0);
if (get_attr_length (insn) == 2)
return \"b%d1%? %^%l0%&\";
return \"b%d1%?\\t%l0\";
else
return \"b%d1%# %^%l0\";
}
return \"b%d1%*\\t%l0\";
}"
[(set_attr "type" "branch")
(set
@ -3835,22 +3940,7 @@ archs4x, archs4xd"
(pc)
(label_ref (match_operand 0 "" ""))))]
"REVERSIBLE_CC_MODE (GET_MODE (XEXP (operands[1], 0)))"
"*
{
if (arc_ccfsm_branch_deleted_p ())
{
arc_ccfsm_record_branch_deleted ();
return \"; branch deleted, next insns conditionalized\";
}
else
{
arc_ccfsm_record_condition (operands[1], true, insn, 0);
if (get_attr_length (insn) == 2)
return \"b%D1%? %^%l0\";
else
return \"b%D1%# %^%l0\";
}
}"
"b%D1%?\\t%l0"
[(set_attr "type" "branch")
(set
(attr "length")
@ -3888,7 +3978,7 @@ archs4x, archs4xd"
(define_insn "jump_i"
[(set (pc) (label_ref (match_operand 0 "" "")))]
"!TARGET_LONG_CALLS_SET || !CROSSING_JUMP_P (insn)"
"b%!%* %^%l0%&"
"b%!%*\\t%l0"
[(set_attr "type" "uncond_branch")
(set (attr "iscompact")
(if_then_else (match_test "get_attr_length (insn) == 2")
@ -3917,11 +4007,11 @@ archs4x, archs4xd"
[(set (pc) (match_operand:SI 0 "nonmemory_operand" "L,I,Cal,q,r"))]
""
"@
j%!%* %0%&
j%!%* %0%&
j%!%* %0%&
j%!%* [%0]%&
j%!%* [%0]%&"
j%!%* %0
j%!%* %0
j%!%* %0
j%!%* [%0]
j%!%* [%0]"
[(set_attr "type" "jump")
(set_attr "iscompact" "false,false,false,maybe,false")
(set_attr "cond" "canuse,canuse_limm,canuse,canuse,canuse")])
@ -4006,14 +4096,14 @@ archs4x, archs4xd"
switch (GET_MODE (diff_vec))
{
case E_SImode:
return \"ld.as\\t%0,[%1,%2]%&\";
return \"ld.as\\t%0,[%1,%2]\";
case E_HImode:
if (ADDR_DIFF_VEC_FLAGS (diff_vec).offset_unsigned)
return \"ld%_.as\\t%0,[%1,%2]\";
return \"ld%_.x.as\\t%0,[%1,%2]\";
case E_QImode:
if (ADDR_DIFF_VEC_FLAGS (diff_vec).offset_unsigned)
return \"ldb%?\\t%0,[%1,%2]%&\";
return \"ldb%?\\t%0,[%1,%2]\";
return \"ldb.x\\t%0,[%1,%2]\";
default:
gcc_unreachable ();
@ -4049,7 +4139,7 @@ archs4x, archs4xd"
[(set (pc) (match_operand:SI 0 "register_operand" "Cal,q,c"))
(use (label_ref (match_operand 1 "" "")))]
""
"j%!%* [%0]%&"
"j%!%* [%0]"
[(set_attr "type" "jump")
(set_attr "iscompact" "false,maybe,false")
(set_attr "cond" "canuse")])
@ -4085,7 +4175,7 @@ archs4x, archs4xd"
(clobber (reg:SI 31))]
""
"@
jl%!%* [%0]%&
jl%!%* [%0]
jl%!%* [%0]
jli_s %J0
sjli %J0
@ -4129,7 +4219,7 @@ archs4x, archs4xd"
(clobber (reg:SI 31))]
""
"@
jl%!%* [%1]%&
jl%!%* [%1]
jl%!%* [%1]
jli_s %J1
sjli %J1
@ -4648,7 +4738,6 @@ archs4x, archs4xd"
{
if (which_alternative == 0)
{
arc_toggle_unalign ();
return \"trap_s %0\";
}
@ -4809,12 +4898,7 @@ archs4x, archs4xd"
[(reg CC_REG) (const_int 0)])
(simple_return) (pc)))]
"reload_completed"
{
output_asm_insn (\"j%d0%!%#\\t[blink]\", operands);
/* record the condition in case there is a delay insn. */
arc_ccfsm_record_condition (operands[0], false, insn, 0);
return \"\";
}
"j%d0%!%*\\t[blink]"
[(set_attr "type" "return")
(set_attr "cond" "use")
(set_attr "iscompact" "maybe" )
@ -4853,13 +4937,13 @@ archs4x, archs4xd"
"*
switch (get_attr_length (insn))
{
case 2: return \"br%d0%? %1, %2, %^%l3%&\";
case 4: return \"br%d0%* %1, %B2, %^%l3\";
case 2: return \"br%d0%?\\t%1,%2,%l3\";
case 4: return \"br%d0%*\\t%1,%B2,%l3\";
case 8: if (!brcc_nolimm_operator (operands[0], VOIDmode))
return \"br%d0%* %1, %B2, %^%l3\";
return \"br%d0%*\\t%1,%B2,%l3\";
/* FALLTHRU */
case 6: case 10:
case 12:return \"cmp%? %1, %B2\\n\\tb%d0%* %^%l3%& ;br%d0 out of range\";
case 12:return \"cmp%? %1, %B2\\n\\tb%d0%*\\t%l3 ;br%d0 out of range\";
default: fprintf (stderr, \"unexpected length %d\\n\", get_attr_length (insn)); fflush (stderr); gcc_unreachable ();
}
"
@ -5038,7 +5122,7 @@ archs4x, archs4xd"
(clobber (match_scratch:SI 2 "=X,r"))]
"TARGET_DBNZ"
"@
dbnz%#\\t%0,%l1
dbnz%*\\t%0,%l1
#"
"TARGET_DBNZ && reload_completed && memory_operand (operands[0], SImode)"
[(set (match_dup 2) (match_dup 0))
@ -5122,7 +5206,7 @@ archs4x, archs4xd"
[(set (match_operand:SF 0 "dest_reg_operand" "=q,r,r")
(abs:SF (match_operand:SF 1 "register_operand" "0,0,r")))]
""
"bclr%?\\t%0,%1,31%&"
"bclr%?\\t%0,%1,31"
[(set_attr "type" "unary")
(set_attr "iscompact" "maybe,false,false")
(set_attr "length" "2,4,4")
@ -5911,7 +5995,7 @@ archs4x, archs4xd"
(zero_extract:SI (match_dup 1) (match_dup 5) (match_dup 7)))])
(match_dup 1)])
(define_insn "*rotrsi3_cnt1"
(define_insn "rotrsi3_cnt1"
[(set (match_operand:SI 0 "dest_reg_operand" "=r")
(rotatert:SI (match_operand:SI 1 "nonmemory_operand" "rL")
(const_int 1)))]
@ -5931,15 +6015,15 @@ archs4x, archs4xd"
(set_attr "predicable" "no")
(set_attr "length" "4")])
(define_insn "*ashlsi2_cnt1"
(define_insn "ashlsi3_cnt1"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,w")
(ashift:SI (match_operand:SI 1 "register_operand" "q,c")
(const_int 1)))]
""
"asl%? %0,%1%&"
[(set_attr "type" "shift")
"asl%? %0,%1"
[(set_attr "type" "unary")
(set_attr "iscompact" "maybe,false")
(set_attr "length" "4")
(set_attr "length" "*,4")
(set_attr "predicable" "no,no")])
(define_insn "*ashlsi2_cnt8"
@ -5964,23 +6048,23 @@ archs4x, archs4xd"
(set_attr "length" "4")
(set_attr "predicable" "no")])
(define_insn "*lshrsi3_cnt1"
(define_insn "lshrsi3_cnt1"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,w")
(lshiftrt:SI (match_operand:SI 1 "register_operand" "q,c")
(const_int 1)))]
""
"lsr%? %0,%1%&"
[(set_attr "type" "shift")
"lsr%? %0,%1"
[(set_attr "type" "unary")
(set_attr "iscompact" "maybe,false")
(set_attr "predicable" "no,no")])
(define_insn "*ashrsi3_cnt1"
(define_insn "ashrsi3_cnt1"
[(set (match_operand:SI 0 "dest_reg_operand" "=q,w")
(ashiftrt:SI (match_operand:SI 1 "register_operand" "q,c")
(const_int 1)))]
""
"asr%? %0,%1%&"
[(set_attr "type" "shift")
"asr%? %0,%1"
[(set_attr "type" "unary")
(set_attr "iscompact" "maybe,false")
(set_attr "predicable" "no,no")])
@ -6330,7 +6414,7 @@ archs4x, archs4xd"
(set_attr "type" "multi")
(set_attr "predicable" "yes")])
(define_insn "*add_shift"
(define_insn "add_shift"
[(set (match_operand:SI 0 "register_operand" "=q,r,r")
(plus:SI (ashift:SI (match_operand:SI 1 "register_operand" "q,r,r")
(match_operand:SI 2 "_1_2_3_operand" ""))


@ -300,8 +300,8 @@ Target Var(TARGET_MEDIUM_CALLS) Init(TARGET_MMEDIUM_CALLS_DEFAULT)
Don't use less than 25 bit addressing range for calls.
mannotate-align
Target Var(TARGET_ANNOTATE_ALIGN)
Explain what alignment considerations lead to the decision to make an insn short or long.
Target Ignore
Does nothing. Preserved for backward compatibility.
malign-call
Target Ignore


@ -549,16 +549,6 @@
(match_code "ashiftrt, lshiftrt, ashift")
)
;; Return true if OP is a left shift operator that can be implemented in
;; four insn words or less without a barrel shifter or multiplier.
(define_predicate "shiftl4_operator"
(and (match_code "ashift")
(match_test "const_int_operand (XEXP (op, 1), VOIDmode) ")
(match_test "UINTVAL (XEXP (op, 1)) <= 9U
|| INTVAL (XEXP (op, 1)) == 29
|| INTVAL (XEXP (op, 1)) == 30
|| INTVAL (XEXP (op, 1)) == 31")))
;; Return true if OP is a right shift operator that can be implemented in
;; four insn words or less without a barrel shifter or multiplier.
(define_predicate "shiftr4_operator"
@ -568,12 +558,6 @@
|| INTVAL (XEXP (op, 1)) == 30
|| INTVAL (XEXP (op, 1)) == 31")))
;; Return true if OP is a shift operator that can be implemented in
;; four insn words or less without a barrel shifter or multiplier.
(define_predicate "shift4_operator"
(ior (match_operand 0 "shiftl4_operator")
(match_operand 0 "shiftr4_operator")))
(define_predicate "mult_operator"
(and (match_code "mult") (match_test "TARGET_MPY"))
)


@ -36,7 +36,7 @@
;; in Thumb-1 state: Pa, Pb, Pc, Pd, Pe
;; in Thumb-2 state: Ha, Pj, PJ, Ps, Pt, Pu, Pv, Pw, Px, Py, Pz, Rd, Rf, Rb, Ra,
;; Rg, Ri
;; in all states: Pf, Pg
;; in all states: Pg
;; The following memory constraints have been used:
;; in ARM/Thumb-2 state: Uh, Ut, Uv, Uy, Un, Um, Us, Up, Uf, Ux, Ul
@ -239,13 +239,6 @@
(and (match_code "const_int")
(match_test "TARGET_THUMB1 && ival >= 256 && ival <= 510")))
(define_constraint "Pf"
"Memory models except relaxed, consume or release ones."
(and (match_code "const_int")
(match_test "!is_mm_relaxed (memmodel_from_int (ival))
&& !is_mm_consume (memmodel_from_int (ival))
&& !is_mm_release (memmodel_from_int (ival))")))
(define_constraint "Pg"
"@internal In Thumb-2 state a constant in range 1 to 32"
(and (match_code "const_int")

Some files were not shown because too many files have changed in this diff