* sysdeps/htl/sem-timedwait.c (struct cancel_ctx): Add cancel_wake
field.
(cancel_hook): When unblocking thread, set cancel_wake field to 1.
(__sem_timedwait_internal): Set cancel_wake field to 0 by default.
On cancellation exit, check whether we hold a token that needs to be put
back, aligning the implementation with that of pthread_cond_wait.
* sysdeps/htl/sem-timedwait.c (cancel_ctx): New structure.
(cancel_hook): New function.
(__sem_timedwait_internal): Check for cancellation and register
cancellation hook that wakes the thread up, and check again for
cancellation on exit.
* nptl/tst-cancel13.c, nptl/tst-cancelx13.c: Move to...
* sysdeps/pthread/: ... here.
* nptl/Makefile: Move corresponding references and rules to...
* sysdeps/pthread/Makefile: ... here.
Since __pthread_exit does not return, we do not need to indent the
non-cancelled path.
* sysdeps/htl/pt-cond-timedwait.c (__pthread_cond_timedwait_internal):
Move cancelled path before non-cancelled path, to avoid "else"
indentation.
* nptl/tst-cancel25.c: Move to...
* sysdeps/pthread/tst-cancel25.c: ... here.
(tf2): Do not test for SIGCANCEL when it is not defined.
* nptl/Makefile: Move corresponding reference to...
* sysdeps/pthread/Makefile: ... here.
Linux commit ID ee988c11acf6f9464b7b44e9a091bf6afb3b3a49 reserved 2 new
bits in AT_HWCAP2:
- PPC_FEATURE2_ARCH_3_1 indicates the availability of the POWER ISA
3.1;
- PPC_FEATURE2_MMA indicates the availability of the Matrix-Multiply
Assist facility.
Add support for MTE to strncmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
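For reference, the access pattern these MTE-safe string routines share can
be modelled in C as follows (an illustrative sketch only, not the actual
AArch64 assembly; the helper name is hypothetical):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Scan for the terminating NUL while only ever loading whole 16-byte
       granules that contain at least one byte of the string, so no access
       crosses into an unrelated MTE granule.  */
    static size_t
    mte_safe_strlen (const char *s)
    {
      const char *p = (const char *) ((uintptr_t) s & ~(uintptr_t) 15);
      size_t off = s - p;               /* Offset of s within its granule.  */
      for (;;)
        {
          char chunk[16];
          memcpy (chunk, p, 16);        /* One aligned granule at a time.  */
          for (size_t i = off; i < 16; i++)
            if (chunk[i] == '\0')
              return (size_t) ((p + i) - s);
          p += 16;
          off = 0;
        }
    }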
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to strcmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to strrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to memrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to memchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Gabor Kertesz <gabor.kertesz@arm.com>
Add support for MTE to strcpy. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
1. Divide architecture features into the usable features and the preferred
features. The usable features are for correctness and can be exported in
a stable ABI. The preferred features are for performance and only for
glibc internal use.
2. Change struct cpu_features to
struct cpu_features
{
struct cpu_features_basic basic;
unsigned int *usable_p;
struct cpuid_registers cpuid[COMMON_CPUID_INDEX_MAX];
unsigned int usable[USABLE_FEATURE_INDEX_MAX];
unsigned int preferred[PREFERRED_FEATURE_INDEX_MAX];
...
};
and initialize usable_p to point to the usable array so that
struct cpu_features
{
struct cpu_features_basic basic;
unsigned int *usable_p;
struct cpuid_registers cpuid[COMMON_CPUID_INDEX_MAX];
};
can be exported via a stable ABI. The cpuid and usable arrays can be
expanded with backward binary compatibility for both .o and .so files
(see the sketch after this list).
3. Add COMMON_CPUID_INDEX_7_ECX_1 for AVX512_BF16.
4. Detect ENQCMD, PKS, AVX512_VP2INTERSECT, MD_CLEAR, SERIALIZE, HYBRID,
TSXLDTRK, L1D_FLUSH, CORE_CAPABILITIES and AVX512_BF16.
5. Rename CAPABILITIES to ARCH_CAPABILITIES.
6. Check if AVX512_VP2INTERSECT, AVX512_BF16 and PKU are usable.
7. Update CPU feature detection test.
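A minimal sketch of how a consumer of the stable prefix could test a usable
feature through usable_p (the index/bit names are hypothetical stand-ins for
whatever the feature headers provide):

    /* cpu points at the struct cpu_features shown above; index and bit
       identify a feature in the usable[] array (hypothetical names such as
       index_usable_AVX2 / bit_usable_AVX2 would come from the feature
       headers).  */
    static inline int
    cpu_feature_usable_sketch (const struct cpu_features *cpu,
                               unsigned int index, unsigned int bit)
    {
      /* Going through usable_p rather than addressing usable[] directly
         keeps already-built .o files working when USABLE_FEATURE_INDEX_MAX
         grows in a later glibc version.  */
      return (cpu->usable_p[index] & bit) != 0;
    }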
-fno-math-errno is already added by default, and the minimum GCC required
to build glibc (6.2) makes -ffinite-math-only superfluous.
Checked on aarch64-linux-gnu.
Checked with a build for riscv64-linux-gnu-rv64imac-lp64 (no
builtin support), riscv64-linux-gnu-rv64imafdc-lp64, and
riscv64-linux-gnu-rv64imafdc-lp64d.
The generic implementation is simplified by removing the
'optimization' for !_IEEE_FP_INEXACT (which handles neither the
inexact exception nor some input values correctly).
Checked on alpha-linux-gnu.
The powerpc sqrt implementation is also simplified:
- the static constants are open coded within the implementation.
- for !USE_SQRT_BUILTIN the function is implemented directly in
__ieee754_sqrt (this avoids a superfluous extra jump).
Checked on powerpc-linux-gnu and powerpc64le-linux-gnu.
The define is already set in math-use-builtins-ceil.h; the patch
just removes the implementations (this was missed in c9feb1be93).
Checked on aarch64-linux-gnu.
Each symbol definition is moved to a separate file, covering all
symbol types (float, double, long double, and float128).
This allows architectures to set support without the boilerplate
of copying default values.
Checked with a build on the affected ABIs.
The generic implementation is slightly worse (Itanium(R) Processor 9020):
Before new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 3.61582e+08,
"iterations": 2.384e+07,
"reciprocal-throughput": 14.8334,
"latency": 15.5006,
"max-throughput": 6.74153e+07,
"min-throughput": 6.45136e+07
}
}
With new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 3.85549e+08,
"iterations": 2.384e+07,
"reciprocal-throughput": 15.8391,
"latency": 16.5056,
"max-throughput": 6.31348e+07,
"min-throughput": 6.05857e+07
}
}
However it fixes all the issues on both:
math/test-float-exp10
math/test-float32-exp10
(all the issues are wrong results for non-default rounding modes).
The existing ia64 libm interface uses matherrf and matherrl in addition
to matherr for SVID error handling. However, there is no such error
handling support for exp10f in ia64 libm. So replacing it with the
generic implementation should be fine.
Checked on ia64-linux-gnu.
This patch changes the exp10f error handling semantics to only set
errno according to POSIX rules. New symbol version is introduced at
GLIBC_2.32. The old wrappers are kept for compat symbols.
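The usual glibc pattern for such a change looks roughly like the sketch
below (the internal names and the old version, GLIBC_2_1 here, are
illustrative and vary per port):

    #include <shlib-compat.h>

    float __exp10f (float x);           /* New errno-only implementation.  */
    float __exp10f_compat (float x);    /* Old SVID wrapper kept around.  */

    /* New binaries bind to the GLIBC_2.32 version...  */
    versioned_symbol (libm, __exp10f, exp10f, GLIBC_2_32);

    /* ...while old binaries keep getting the wrapper with SVID semantics.  */
    #if SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_32)
    compat_symbol (libm, __exp10f_compat, exp10f, GLIBC_2_1);
    #endif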
There are some outliers that need special handling:
- ia64 provides an optimized implementation of exp10f that uses ia64
specific routines to set SVID compatibility. The new symbol version
is aliased to the exp10f one.
- m68k also provides an optimized implementation, and the new version
uses it instead of the sysdeps/ieee754/flt32 one.
- riscv and csky uses the generic template implementation that
does not provide SVID support. For both cases a new exp10f
version is not added, but rather the symbols version of the
generic sysdeps/ieee754/flt32 is adjusted instead.
Checked on aarch64-linux-gnu, x86_64-linux-gnu, i686-linux-gnu,
powerpc64le-linux-gnu.
It is inspired by expf and reuses its tables and internal functions.
The error checks are inlined and errno setting is in separate tail
called functions, but the wrappers are kept in this patch to handle
the _LIB_VERSION==_SVID_ case.
Double precision arithmetic is used, which is expected to be faster on
most targets (including soft-float) than single precision, and it makes
it easier to get good precision results.
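A heavily simplified scalar sketch of that structure (the cutoffs, helper
names, and the call to exp2 are illustrative; the real code reuses expf's
table and a short polynomial instead):

    #include <errno.h>
    #include <math.h>

    /* Stand-ins for glibc's __math_oflowf/__math_uflowf: the slow path sets
       errno and returns an overflowed/underflowed value, so the fast path
       can simply tail call on error.  */
    static float __attribute__ ((noinline))
    oflowf (void) { errno = ERANGE; return 0x1p97f * 0x1p97f; }

    static float __attribute__ ((noinline))
    uflowf (void) { errno = ERANGE; return 0x1p-95f * 0x1p-95f; }

    static float
    exp10f_sketch (float x)
    {
      if (x > 38.6f)              /* Approximate overflow cutoff.  */
        return oflowf ();
      if (x < -46.0f)             /* Approximate underflow cutoff.  */
        return uflowf ();
      /* Double-precision core: 10^x == 2^(x * log2(10)).  */
      return (float) exp2 ((double) x * 3.321928094887362);
    }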
Results for x86_64 (i7-4790K CPU @ 4.00GHz) are:
Before new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 4.0414e+09,
"iterations": 1.00128e+08,
"reciprocal-throughput": 26.6818,
"latency": 54.043,
"max-throughput": 3.74787e+07,
"min-throughput": 1.85038e+07
}
With new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 4.11951e+09,
"iterations": 1.23968e+08,
"reciprocal-throughput": 21.0581,
"latency": 45.4028,
"max-throughput": 4.74876e+07,
"min-throughput": 2.20251e+07
}
Results for aarch64 (A72 @ 2GHz) are:
Before new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 4.62362e+09,
"iterations": 3.3376e+07,
"reciprocal-throughput": 127.698,
"latency": 149.365,
"max-throughput": 7.831e+06,
"min-throughput": 6.69501e+06
}
With new code:
"exp10f": {
"workload-spec2017.wrf (adapted)": {
"duration": 4.29108e+09,
"iterations": 6.6752e+07,
"reciprocal-throughput": 51.2111,
"latency": 77.3568,
"max-throughput": 1.9527e+07,
"min-throughput": 1.29271e+07
}
Checked on x86_64-linux-gnu, powerpc64le-linux-gnu, aarch64-linux-gnu,
and sparc64-linux-gnu.
strcmp-avx2.S: In the AVX2 strncmp function, strings are compared in
chunks of 4 vector sizes (i.e. 32x4 = 128 bytes for AVX2). After the
first 4-vector-size comparison, the code must check whether it has
already passed the given offset. This patch implements the AVX2 offset
check condition for strncmp when both strings compare equal for the
first 4 vector sizes.
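A scalar C model of the added check (VEC_SIZE is 32 bytes for AVX2; the
names here are illustrative, since the real change is in assembly):

    #include <stddef.h>
    #include <string.h>

    #define VEC_SIZE 32   /* AVX2 vector width in bytes.  */

    /* Compare the first 4*VEC_SIZE bytes, then apply the offset check this
       patch adds before falling through to the main 4-vector loop.  */
    static int
    strncmp_prefix_model (const char *s1, const char *s2, size_t n)
    {
      size_t prefix = 4 * VEC_SIZE;                 /* 128 bytes.  */
      size_t take = n < prefix ? n : prefix;
      int r = strncmp (s1, s2, take);
      if (r != 0 || n <= prefix)
        return r;      /* Strings differ, or the limit was already reached.  */
      if (memchr (s1, '\0', prefix) != NULL)
        return 0;      /* Both strings end (equally) within the prefix.  */
      /* Otherwise continue from offset 4*VEC_SIZE with n reduced, which is
         what the main loop of the assembly implementation does.  */
      return strncmp (s1 + prefix, s2 + prefix, n - prefix);
    }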
Linux 5.7 has no new syscalls. Update the version number in
syscall-names.list to reflect that it is still current for 5.7.
Tested with build-many-glibcs.py.
This came to light when adding hard-float support to the ARC glibc port
without hardware sqrt support, causing the glibc build to fail:
| ../sysdeps/ieee754/dbl-64/e_sqrt.c: In function '__ieee754_sqrt':
| ../sysdeps/ieee754/dbl-64/e_sqrt.c:58:54: error: unused variable 'ty' [-Werror=unused-variable]
| double y, t, del, res, res1, hy, z, zz, p, hx, tx, ty, s;
The reason is that the EMULV() macro uses the hardware-provided
__builtin_fma() variant, leaving the temporary variables 'p, hx, tx, hy, ty'
unused, hence the compiler warning and ensuing error.
The intent of the patch was to fix that error, but EMULV is pervasive
and used a fair bit indirectly via other macros, hence this patch.
Functionally it should not result in code generation changes, and if it
does, those should be better since the scope of those temporaries is now
greatly reduced.
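For reference, with a hardware FMA the exact multiplication that EMULV
performs reduces to the following (a minimal sketch; variable names are
illustrative):

    #include <math.h>

    /* Split x*y into the rounded product z and the exact rounding error zz,
       so that x*y == z + zz.  With __builtin_fma available, none of the
       Dekker-splitting temporaries (p, hx, tx, hy, ty) are needed, which is
       why they end up unused and trigger -Werror=unused-variable.  */
    static void
    emulv_fma (double x, double y, double *z, double *zz)
    {
      *z = x * y;
      *zz = fma (x, y, -*z);
    }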
Build tested with aarch64-linux-gnu arm-linux-gnueabi arm-linux-gnueabihf hppa-linux-gnu x86_64-linux-gnu arm-linux-gnueabihf riscv64-linux-gnu-rv64imac-lp64 riscv64-linux-gnu-rv64imafdc-lp64 powerpc-linux-gnu microblaze-linux-gnu nios2-linux-gnu hppa-linux-gnu
Also as suggested by Joseph [1] used --strip and compared the libs with
and w/o patch and they are byte-for-byte unchanged (with gcc 9).
| for i in `find . -name libm-2.31.9000.so`;
| do
| echo $i; diff $i /SCRATCH/vgupta/gnu2/install/glibcs/$i ; echo $?;
| done
| ./aarch64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabi/lib/libm-2.31.9000.so
| 0
| ./x86_64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabihf/lib/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imac-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imafdc-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./powerpc-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./microblaze-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./nios2-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./hppa-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./s390x-linux-gnu/lib64/libm-2.31.9000.so
[1] https://sourceware.org/pipermail/libc-alpha/2019-November/108267.html
* sysdeps/mach/hurd/Makefile [subdir=misc] (sysdep_routines): Add
writev_nocancel writev_nocancel_nostatus.
* sysdeps/mach/hurd/not-cancel.h (__writev_nocancel_nostatus): Replace
macro with function declaration (with hidden prototype in libc).
(__writev_nocancel): New function declaration (with hidden prototype in libc).
* sysdeps/mach/hurd/writev_nocancel_nostatus.c: New file.
* sysdeps/posix/writev_nocancel.c: New file, includes writev.c to make a
nocancel variant that calls __write_nocancel.
* sysdeps/posix/writev.c (writev): Do not define alias if __writev is
renamed.
and add _nocancel variants.
* sysdeps/mach/hurd/write.c (__libc_write): Call __write_nocancel
surrounded by enabling async cancel, to replace implementation moved
to...
* sysdeps/mach/hurd/write_nocancel.c (__write_nocancel): ... here.
* sysdeps/mach/hurd/pwrite64.c (__libc_pwrite64): Call
__pwrite64_nocancel surrounded by enabling async cancel, to replace
implementation moved to...
* sysdeps/mach/hurd/pwrite64_nocancel.c (__pwrite64_nocancel): ... here.
* sysdeps/mach/hurd/Makefile (sysdep_routines): Add write_nocancel and
pwrite64_nocancel.
* sysdeps/mach/hurd/not-cancel.h (__write_nocancel,
__pwrite64_nocancel): Replace macros with function prototypes, with a
hidden prototype in libc.
* sysdeps/mach/hurd/dl-sysdep.c (__write_nocancel): New alias, check
that it is not hidden.
* sysdeps/mach/hurd/Versions (libc.GLIBC_PRIVATE): Add __write_nocancel.
(ld.GLIBC_PRIVATE): Add __write_nocancel.
* sysdeps/mach/hurd/i386/localplt.data (__write_nocancel): Add
reference.
* sysdeps/htl/stdio-lock.h: New file, registers locking cleanup to htl.
* sysdeps/htl/libc-lockP.h: Include <libc-lock.h>.
(__libc_cleanup_region_start, __libc_cleanup_end,
__libc_cleanup_region_end): Override macros from <libc-lock.h> with
versions which register cleanup to htl.
(__pthread_get_cleanup_stack): Make the reference weak to skip
registration in the static non-libpthread case.
* sysdeps/mach/hurd/recv.c (__recv): Make the __socket_recv call
cancellable.
* sysdeps/mach/hurd/recvfrom.c (__recvfrom): Make the __socket_recv and
__socket_whatis_address calls cancellable.
* sysdeps/mach/hurd/recvmsg.c (__libc_recvmsg): Make the __socket_recv,
__socket_whatis_address, __io_reauthenticate, and __auth_user_authenticate calls
cancellable.
Add a check to detect the CPU value in preconfigure, so that glibc is
built with the correct --with-cpu value, and move the existing checks
into preconfigure.ac.
Co-Authored-By: Carlos Eduardo Seo <cseo@linux.vnet.ibm.com>
Co-Authored-By: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Introduce an Arm MTE compatible strlen implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance on modern cores. On cores with less
efficient Advanced SIMD implementation such as Cortex-A53 it can
be slower.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Introduce an Arm MTE compatible strchr implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Introduce an Arm MTE compatible strchrnul implementation.
The existing implementation assumes that any access to the pages in
which the string resides is safe. This assumption is not true when
MTE is enabled. This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.
Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Falkor's memcpy and memmove share some implementation details,
therefore, the two routines are moved to a single source file
for code reuse.
The two routines now share code for small and medium copies
(up to and including 128 bytes). Large copies in memcpy do not
handle overlap correctly, consequently, the loops for
moving/copying more than 128 bytes stay separate for memcpy
and memmove.
To increase code reuse a number of small modifications were made:
1. The old implementation of memcpy copied the first 16-bytes as
soon as the size of data was determined to be greater than 32 bytes.
For memcpy code to also work when copying small/medium overlapping
data, the first load and store was moved to the large copy case.
2. Medium memcpy case no longer assumes that 16 bytes were already
copied and uses 8 registers to copy up to 128 bytes.
3. Small case for memmove was enlarged to that of memcpy, which is
less than or equal to 32 bytes.
4. Medium case for memmove was enlarged to that of memcpy, which is
less than or equal to 128 bytes.
Other changes include:
1. Improve alignment of existing loop bodies.
2. 'Delouse' memmove and memcpy input arguments. Make sure that
upper 32-bits of input registers are zeroed if unused.
3. Do one more iteration in memmove loops and reduce the number of
copies made from the start/end of the buffer, depending on
the direction of the memmove loop.
Benchmarking:
Looking at the results from bench-memcpy-random.out, we can see that
now memmove_falkor is about 5% faster than memcpy_falkor_old, while
memmove_falkor_old was more than 15% slower. The memcpy implementation
remained largely unmodified, so there is no significant performance
change.
The reason for such a significant memmove performance gain is the
increase of the upper bound on the small copy case to 32 bytes and
the increase of the upper bound on the medium copy case to 128 bytes.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
d6d74ec16 ('htl: Enable more tests') moved the linking rules from
nptl/Makefile and htl/Makefile to the shared sysdeps/pthread/Makefile. But
e.g. on powerpc some tests are added in sysdeps/powerpc/Makefile, which is
included *after* sysdeps/pthread/Makefile, and thus the tests don't get
affected by the rules and fail to link. For now let's just copy the set
of rules over to both nptl/Makefile and htl/Makefile.
* sysdeps/pthread/Makefile: Move libpthread linking rules to...
* htl/Makefile: ... here and...
* nptl/Makefile: ... there.
We really need modules to use their own pthread_atfork so that
__dso_handle properly identifies them.
* sysdeps/htl/pt-atfork.c (__pthread_atfork): Hide function.
(pthread_atfork): Hide alias.
* sysdeps/htl/old_pt-atfork.c (pthread_atfork): Rename macro to
__pthread_atfork to fix building the compatibility alias.
* sysdeps/htl/pthreadP.h: Include <link.h>.
(__pthread_init_static_tls): New prototype.
* htl/pt-alloc.c (__pthread_init_static_tls): New function.
* sysdeps/mach/hurd/htl/pt-sysdep.c (_init_routine): Initialize tcb
field of initial thread. Set GL(dl_init_static_tls) to
&__pthread_init_static_tls.
and add _nocancel variants.
* sysdeps/mach/hurd/pread64.c (__libc_pread64): Call __pread64_nocancel
surrounded by enabling async cancel, to replace implementation moved to...
* sysdeps/mach/hurd/pread64_nocancel.c (__pread64_nocancel): ... here.
* sysdeps/mach/hurd/read.c (__libc_read): Call __read_nocancel surrounded by
enabling async cancel, to replace implementation moved to...
* sysdeps/mach/hurd/read_nocancel.c (__read_nocancel): ... here.
* sysdeps/mach/hurd/Makefile (sysdep_routines): Add read_nocancel and
pread64_nocancel.
* sysdeps/mach/hurd/not-cancel.h (__read_nocancel, __pread64_nocancel):
Replace macros with function prototypes, with a hidden prototype in libc.
* sysdeps/mach/hurd/dl-sysdep.c: Include <not-cancel.h>.
(__pread64_nocancel): New alias, check that it is not hidden.
(__read_nocancel): New alias, check that it is not hidden.
* sysdeps/mach/hurd/Versions (libc.GLIBC_PRIVATE): Add __read_nocancel and
__pread64_nocancel.
(ld.GLIBC_2.1): Add __pread64.
(ld.GLIBC_PRIVATE): Add __read_nocancel and __pread64_nocancel.
* sysdeps/mach/hurd/i386/ld.abilist (__pread64): Add symbol.
* sysdeps/mach/hurd/i386/localplt.data (__read_nocancel, __pread64,
__pread64_nocancel): Add references.
* sysdeps/i386/htl/Makefile: New file.
* sysdeps/i386/htl/tcb-offsets.sym: New file.
* sysdeps/mach/hurd/i386/Makefile [setjmp] (gen-as-const-headers): Add
signal-defines.sym.
* sysdeps/mach/hurd/i386/____longjmp_chk.S: Include tcb-offsets.h.
(____longjmp_chk): Harmonize with i386's __longjmp. Clear SS_ONSTACK
when jumping off the alternate stack.
* sysdeps/mach/hurd/i386/__longjmp.S: New file.
The existing macros are fragile and expect local variables with a
certain name. Fix this by defining them as functions with default
implementation in a new header dl-runtime.h which arches can override
if need be.
This came up during ARC port review, hence the need for argument pltgot
in reloc_index() which is not needed by existing ports.
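The default definitions in the new header look roughly like this (a sketch;
exact parameter names may differ from the committed code):

    #include <stddef.h>
    #include <stdint.h>

    /* sysdeps/generic/dl-runtime.h (sketch).  Ports override these instead
       of relying on the magically named local variables the old macros
       expected.  */
    static inline uintptr_t
    reloc_offset (uintptr_t plt0, uintptr_t pltn)
    {
      return pltn;
    }

    /* The PLT/GOT argument is unused by the default, but lets ports such
       as ARC compute the index from their PLTGOT layout.  */
    static inline uintptr_t
    reloc_index (uintptr_t plt0, uintptr_t pltn, size_t size)
    {
      return pltn / size;
    }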
This patch potentially only affects hppa/x86 ports,
build tested for both those configs and a few more.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This started as a trivial change to Anton's rawmemchr. I got
carried away. This is a hybrid of P8's asymptotically faster 64B
checks and extremely efficient small-string checks, e.g. <64B (and
sometimes a little bit more depending on alignment).
The second trick is to align to 64B by running a 48B checking loop
16B at a time until we naturally align to 64B (i.e. checking 48/96/144
bytes per iteration based on the alignment after the first 5 comparisons).
This alleviates the need to check page boundaries.
Finally, explicitly use the P7 strlen with the runtime loader when building
for P9. We need to be cautious about vector/VSX extensions here on P9-only
builds.
This defines the macro such that it should behave best on all
supported powerpc targets. Likewise, this allows us to remove the
ppc64le specific s_fmaf128.c.
I have verified that powerpc64le multiarch and powerpc64le power9
no-multiarch builds continue to generate optimized fmaf128.
commit e9698175b0
Author: Lukasz Majewski <lukma@denx.de>
Date: Mon Mar 16 08:31:41 2020 +0100
y2038: Replace __clock_gettime with __clock_gettime64
breaks benchtests with sysdeps/generic/hp-timing.h:
In file included from ./bench-timing.h:23,
from ./bench-skeleton.c:25,
from
/export/build/gnu/tools-build/glibc-gitlab/build-x86_64-linux/benchtests/bench-rint.c:45:
./bench-skeleton.c: In function ‘main’:
../sysdeps/generic/hp-timing.h:37:23: error: storage size of ‘tv’ isn’t known
37 | struct __timespec64 tv; \
| ^~
Define HP_TIMING_NOW with clock_gettime in sysdeps/generic/hp-timing.h
if _ISOMAC is defined. Don't define __clock_gettime in bench-timing.h
since it is no longer needed.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The build uses an undefined macro evaluation for the fmaf128 build.
For now set USE_FMAL_BUILTIN and USE_FMAF128_BUILTIN to 0.
Checked with a build for:
powerpc64le-linux-gnu-power9-disable-multi-arch
powerpc64le-linux-gnu-power9
powerpc64le-linux-gnu
powerpc64-linux-gnu-power8
powerpc64-linux-gnu
powerpc-linux-gnu-power4
powerpc-linux-gnu
timer_create needs to create threads with all signals blocked,
including SIGTIMER (which happens to equal SIGCANCEL).
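A rough model of the intent (the real fix uses glibc-internal primitives so
that SIGTIMER/SIGCANCEL really are blocked; the public APIs shown here would
filter those two out):

    #include <pthread.h>
    #include <signal.h>

    static int
    create_fully_blocked_thread (pthread_t *thr,
                                 void *(*fn) (void *), void *arg)
    {
      sigset_t all, old;
      int err;

      sigfillset (&all);
      pthread_sigmask (SIG_SETMASK, &all, &old);   /* Block everything.  */
      err = pthread_create (thr, NULL, fn, arg);   /* Child inherits mask.  */
      pthread_sigmask (SIG_SETMASK, &old, NULL);   /* Restore caller mask.  */
      return err;
    }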
Fixes commit b3cae39dcb ("nptl: Start
new threads with all signals blocked [BZ #25098]").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This introduces the function __pthread_attr_extension to allocate the
extension space, which is freed by pthread_attr_destroy.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows reusing the storage after calling pthread_cond_destroy.
* sysdeps/htl/bits/types/struct___pthread_cond.h (__pthread_cond):
Replace unused struct __pthread_condimpl *__impl field with unsigned int
__wrefs.
(__PTHREAD_COND_INITIALIZER): Update accordingly.
* sysdeps/htl/pt-cond-timedwait.c (__pthread_cond_timedwait_internal):
Register as waiter in __wrefs field. On unregistering, wake any pending
pthread_cond_destroy.
* sysdeps/htl/pt-cond-destroy.c (__pthread_cond_destroy): Register wake
request in __wrefs.
* nptl/Makefile (tests): Move tst-cond20 tst-cond21 to...
* sysdeps/pthread/Makefile (tests): ... here.
* nptl/tst-cond20.c nptl/tst-cond21.c: Move to...
* sysdeps/pthread/tst-cond20.c sysdeps/pthread/tst-cond21.c: ... here.
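A rough C model of the __wrefs protocol described above (illustrative only;
the real code blocks on gsync/futex-style waits instead of spinning):

    #include <stdatomic.h>

    /* __wrefs keeps the waiter count in bits 1..31 and a "destroy
       requested" flag in bit 0.  */
    struct cond_sketch { atomic_uint wrefs; };

    static void
    waiter_register (struct cond_sketch *c)
    {
      atomic_fetch_add (&c->wrefs, 2);
    }

    static void
    waiter_unregister (struct cond_sketch *c)
    {
      unsigned int old = atomic_fetch_sub (&c->wrefs, 2);
      if ((old & 1) != 0 && (old >> 1) == 1)
        {
          /* Last waiter left and a destroy is pending: wake the thread
             sleeping in pthread_cond_destroy (a wake on __wrefs in the
             real implementation).  */
        }
    }

    static void
    destroy_wait_for_waiters (struct cond_sketch *c)
    {
      unsigned int val = atomic_fetch_or (&c->wrefs, 1);
      while ((val >> 1) != 0)
        /* Waiters still registered: block until woken, then re-check
           (spinning here only for the sake of the sketch).  */
        val = atomic_load (&c->wrefs);
    }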
Linux overrides this file via sysdeps/unix/sysv/linux/i386/sysdep.c.
Hurd does not have sysdeps/unix/i386 on its search path, so it uses
csu/sysdep.c instead.
_hurdsig_preemptors and _hurdsig_preempted_set are not ABI symbols,
so do not declare them. HURD_PREEMPT_SIGNAL_P is an implementation
detail, so move it as well.
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
This fixes various build errors due to deprecation warnings.
Fixes commit 02802fafcf
("signal: Deprecate additional legacy signal handling functions").
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
This change makes it easier to set a breakpoint on these calls.
This also addresses the issue that including <ldsodefs.h> without
<unistd.h> does not result in usable _dl_*printf macros because of the
use of the STD*_FILENO macros there.
(The private symbol for _dl_fatal_printf will go away again
once the exception handling implementation is unified between
libc and ld.so.)
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Also add the private type union pthread_attr_transparent, to reduce
the amount of casting that is required.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
Use __getline instead of __getdelim to avoid a localplt failure.
Likewise for __getrlimit/getrlimit.
The abilist updates were performed by:
git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
| while read x ; do
echo "GLIBC_2.32 pthread_getattr_np F" >> $x
done
python3 scripts/move-symbol-to-libc.py --only-linux pthread_getattr_np
The private export of __pthread_getaffinity_np is no longer needed, but
the hidden alias is still necessary so that the symbol can be exported with
versioned_symbol.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
The abilist updates were performed by:
git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
| while read x ; do
echo "GLIBC_2.32 pthread_getaffinity_np F" >> $x
done
python3 scripts/move-symbol-to-libc.py pthread_getaffinity_np
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
The symbol did not previously exist in libc, so a new GLIBC_2.32
symbol is needed to get a correct dependency for binaries which
use the symbol but no longer link against libpthread.
The abilist updates were performed by:
git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
| while read x ; do
echo "GLIBC_2.32 pthread_attr_setaffinity_np F" >> $x
done
python3 scripts/move-symbol-to-libc.py pthread_attr_setaffinity_np
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The stubs for pthread_getaffinity_np, pthread_getname_np,
pthread_setaffinity_np, pthread_setname_np are replaced, and corresponding
tests are moved.
After the removal of the NaCl port, nptl is Linux-specific, and the stubs
are no longer needed. This effectively reverts commit
c76d1ff514 ("NPTL: Add stubs for Linux-only
extension functions.").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This fixes a build error:
../sysdeps/unix/sysv/linux/ntp_gettime.c: In function ‘__ntp_gettime’:
../sysdeps/unix/sysv/linux/ntp_gettime.c:56:10: error: ‘ntv64.tai’ is used uninitialized in this function [-Werror=uninitialized]
56 | *ntv = valid_ntptimeval64_to_ntptimeval (ntv64);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The internal __clock_gettime function does not support 64-bit time on
architectures with __WORDSIZE == 32 and __TIMESIZE != 64 (e.g. 32-bit
ARM).
The __clock_gettime64 function shall be used instead within glibc itself,
as it supports 64-bit time on those systems.
This patch does not bring any changes to systems with __WORDSIZE == 64 as
for them the __clock_gettime64 is aliased to __clock_gettime (in
./include/time.h).
This patch provides a new explicit 64-bit function, __ntp_gettimex64, for
getting time parameters via the NTP interface.
The call to __adjtimex in the __ntp_gettime64 function has been replaced
with a direct call to the __clock_adjtime64 syscall, to simplify the code.
Moreover, a 32 bit version - __ntp_gettimex has been refactored to internally
use __ntp_gettimex64.
The __ntp_gettimex is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversions between struct
ntptimeval and 64 bit struct __ntptimeval64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without to
test the proper usage of both __ntp_gettimex64 and __ntp_gettimex.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __ntp_gettime64, for
getting time parameters via the NTP interface.
Internally, the __clock_adjtime64 syscall is used instead of __adjtimex.
This patch is necessary to make architectures with __WORDSIZE == 32
Y2038 safe.
Moreover, a 32 bit version - __ntp_gettime has been refactored to internally
use __ntp_gettime64.
The __ntp_gettime is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversions between struct
ntptimeval and 64 bit struct __ntptimeval64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without to
test the proper usage of both __ntp_gettime64 and __ntp_gettime.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Those functions allow easy conversion between Y2038 safe, glibc internal
struct __ntptimeval64 and struct ntptimeval.
The reserved fields (i.e. __glibc_reserved{1234}) are zeroed during
conversion as well, to provide behavior similar to that of the
ntp_gettimex function (where those are cleared before the struct
ntptimeval is returned).
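The shape of these helpers is roughly as follows (a sketch assuming the
struct __ntptimeval64 type introduced earlier in this series; the real code
lives in the Linux-specific sys/timex.h as noted below):

    /* Convert a Y2038-safe struct __ntptimeval64 to the public struct
       ntptimeval, zeroing the reserved fields like ntp_gettimex does.
       Only valid when the time value fits in the 32-bit time_t.  */
    static inline struct ntptimeval
    valid_ntptimeval64_to_ntptimeval (struct __ntptimeval64 ntv64)
    {
      struct ntptimeval ntv;

      ntv.time.tv_sec = ntv64.time.tv_sec;
      ntv.time.tv_usec = ntv64.time.tv_usec;
      ntv.maxerror = ntv64.maxerror;
      ntv.esterror = ntv64.esterror;
      ntv.tai = ntv64.tai;
      ntv.__glibc_reserved1 = 0;
      ntv.__glibc_reserved2 = 0;
      ntv.__glibc_reserved3 = 0;
      ntv.__glibc_reserved4 = 0;
      return ntv;
    }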
Those functions are put in Linux specific sys/timex.h file, as putting
them into glibc's local include/time.h would cause build break on HURD as
it doesn't support struct timex related syscalls.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This type is glibc's 'internal' type for getting time parameter data from
the Linux kernel (NTP daemon interface). It stores time in struct
__timeval64 rather than struct timeval, which makes it Y2038-proof.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __adjtime64, for
adjusting the Linux kernel clock.
Internally, the __clock_adjtime64 syscall is used instead of __adjtimex.
This patch is necessary to make architectures with __WORDSIZE == 32
Y2038 safe.
Moreover, a 32 bit version - __adjtime has been refactored to internally use
__adjtime64.
The __adjtime is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversions between struct
timeval and 64 bit struct __timeval64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without to
test the proper usage of both __adjtime64 and __adjtime.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, ___adjtimex64, for
adjusting the Linux kernel clock.
Internally, the __clock_adjtime64 syscall is used. This patch is
necessary to make architectures with __WORDSIZE == 32 Y2038 safe.
Moreover, a 32 bit version - ___adjtimex has been refactored to internally
use ___adjtimex64.
The ___adjtimex is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversions between struct
timex and 64 bit struct __timex64.
Last but not least, in ___adjtimex64 function the __clock_adjtime syscall has
been replaced with __clock_adjtime64 to support 64 bit time on architectures
with __WORDSIZE == 32 and __TIMESIZE != 64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without to
test the proper usage of both ___adjtimex64 and ___adjtimex.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch replaces the auto-generated wrapper (as described in
sysdeps/unix/sysv/linux/syscalls.list) for clock_adjtime with one which adds
extra support for reading 64-bit time values on machines with __TIMESIZE != 64.
To achieve this goal, a new explicit 64-bit function, __clock_adjtime64, for
adjusting the Linux clock has been added.
Moreover, a 32 bit version - __clock_adjtime has been refactored to internally
use __clock_adjtime64.
The __clock_adjtime is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversions between 64 bit
struct __timespec64 and struct timespec.
The new __clock_adjtime64 syscall available from Linux 5.1+ has been used, when
applicable.
Up until v5.4, the Linux kernel had a bug preventing this call from
obtaining the correct struct timex time.tv_sec value after a time_t
overflow (i.e. it was not Y2038 safe).
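The resulting wrapper follows the usual time64-with-fallback shape, roughly
as below (a simplified sketch; the struct timex conversion helpers are
assumed to be the ones described later in this series):

    #include <errno.h>
    #include <sys/timex.h>
    #include <sysdep.h>

    int
    __clock_adjtime64 (clockid_t clock_id, struct __timex64 *tx64)
    {
    #ifndef __NR_clock_adjtime64
    # define __NR_clock_adjtime64 __NR_clock_adjtime
    #endif
      int r = INLINE_SYSCALL_CALL (clock_adjtime64, clock_id, tx64);
    #ifndef __ASSUME_TIME64_SYSCALLS
      if (r >= 0 || errno != ENOSYS)
        return r;

      /* Kernel without the time64 syscall: convert, use the 32-bit time
         syscall, and convert the result back.  */
      struct timex tx32 = valid_timex64_to_timex (*tx64);
      r = INLINE_SYSCALL_CALL (clock_adjtime, clock_id, &tx32);
      if (r >= 0)
        *tx64 = valid_timex_to_timex64 (tx32);
    #endif
      return r;
    }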
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for glibc build test matrix:
- Linux v5.1 (with clock_adjtime64) and glibc build with v5.1 as
minimal kernel version (--enable-kernel="5.1.0")
The __ASSUME_TIME64_SYSCALLS flag defined.
- Linux v5.1 and default minimal kernel version
The __ASSUME_TIME64_SYSCALLS not defined, but kernel supports clock_adjtime64
syscall.
- Linux v4.19 (no clock_adjtime64 support) with default minimal kernel version
for contemporary glibc (3.2.0)
This kernel doesn't support clock_adjtime64 syscall, so the fallback to
clock_adjtime is tested.
Above tests were performed with Y2038 redirection applied as well as without
(so the __TIMESIZE != 64 execution path is checked as well).
No regressions were observed.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This version uses vector instructions and is up to 60% faster on medium
matches and up to 90% faster on long matches, compared to the POWER7
version. A few examples:
__rawmemchr_power9 __rawmemchr_power7
Length 32, alignment 0: 2.27566 3.77765
Length 64, alignment 2: 2.46231 3.51064
Length 1024, alignment 0: 17.3059 32.6678
When CET is enabled, it is an error to dlopen a non CET enabled shared
library in a CET enabled application. It may be desirable to make CET
permissive, that is, to disable CET when dlopening a non CET enabled shared
library. With the new --enable-cet=permissive configure option, CET is
disabled when dlopening a non CET enabled shared library.
Add DEFAULT_DL_X86_CET_CONTROL to config.h.in:
/* The default value of x86 CET control. */
#define DEFAULT_DL_X86_CET_CONTROL cet_elf_property
which enables CET features based on ELF property note.
--enable-cet=permissive changes it to
/* The default value of x86 CET control. */
#define DEFAULT_DL_X86_CET_CONTROL cet_permissive
which enables CET features permissively.
Update tst-cet-legacy-5a, tst-cet-legacy-5b, tst-cet-legacy-6a and
tst-cet-legacy-6b to check --enable-cet and --enable-cet=permissive.
This was originally added to support binutils older than version
2.22:
<https://sourceware.org/ml/libc-alpha/2010-12/msg00051.html>
Since 2.22 is older than the minimum required binutils version
for building glibc, we no longer need this. (The changes do
not impact the statically linked startup code.)
Add stpcpy support to the POWER9 strcpy. This is up to 40% faster on
small strings and up to 90% faster on long relatively unaligned strings,
compared to the POWER8 version. A few examples:
__stpcpy_power9 __stpcpy_power8
Length 20, alignments in bytes 4/ 4: 2.58246 4.8788
Length 1024, alignments in bytes 1/ 6: 24.8186 47.8528
This version uses VSX store vector with length instructions and is
significantly faster on small strings and relatively unaligned large
strings, compared to the POWER8 version. A few examples:
__strcpy_power9 __strcpy_power8
Length 16, alignments in bytes 0/ 0: 2.52454 4.62695
Length 412, alignments in bytes 4/ 0: 11.6 22.9185
1. Include <dl-procruntime.c> to get architecture specific initializer in
rtld_global.
2. Change _dl_x86_feature_1[2] to _dl_x86_feature_1.
3. Add _dl_x86_feature_control after _dl_x86_feature_1, which is a
struct of 2 bitfields for IBT and SHSTK control.
This fixes [BZ #25887].
The getcpu cache was removed from the kernel in Linux 2.6.24. glibc
support from the sched_getcpu implementation was removed in commit
dd26c44403 ("Consolidate sched_getcpu").
This patch fixes the optimized implementation of strcpy and strnlen
on a big-endian arm64 machine.
The optimized method uses NEON, which can process 128 bits with one
instruction. On a big-endian machine, the bit order should be reversed
for the whole 128-bit double word. But the instruction
rev64 datav.16b, datav.16b
reverses 64 bits within each of the two halves rather than reversing the
whole 128 bits. There is no rev128 instruction to reverse all 128 bits,
but we can fix this by loading the data registers accordingly.
Fixes 0237b61526e7("aarch64: Optimized implementation of strcpy") and
2911cb68ed3d("aarch64: Optimized implementation of strnlen").
Signed-off-by: Lexi Shao <shaolexi@huawei.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
After using "make update-syscall-lists" to update arch-syscall.h for
new kernel versions, sysd-syscalls will not be regenerated.
This will cause a compile error because the new data is not being
picked up.
Fixes commit a1bd5f8673
("Linux: Use system call tables during build").
Reviewed-by: Florian Weimer <fweimer@redhat.com>
When using outline atomics (-moutline-atomics, the default for ARMv8-A
starting with GCC 10), libgcc contains an ELF constructor which calls
__getauxval. This code is built outside of glibc, so none of its
internal PLT avoidance schemes can be applied to it. This change
suppresses the elf/check-localplt failure.
The script can now be called to query the definition status of
system call numbers across all architectures, like this:
$ python3 sysdeps/unix/sysv/linux/glibcsyscalls.py query-syscall sync_file_range sync_file_range2
sync_file_range:
defined: aarch64 alpha csky hppa i386 ia64 m68k microblaze mips/mips32 mips/mips64/n32 mips/mips64/n64 nios2 riscv/rv64 s390/s390-32 s390/s390-64 sh sparc/sparc32 sparc/sparc64 x86_64/64 x86_64/x32
undefined: arm powerpc/powerpc32 powerpc/powerpc64
sync_file_range2:
defined: arm powerpc/powerpc32 powerpc/powerpc64
undefined: aarch64 alpha csky hppa i386 ia64 m68k microblaze mips/mips32 mips/mips64/n32 mips/mips64/n64 nios2 riscv/rv64 s390/s390-32 s390/s390-64 sh sparc/sparc32 sparc/sparc64 x86_64/64 x86_64/x32
This command lists the headers containing the system call numbers:
$ python3 sysdeps/unix/sysv/linux/glibcsyscalls.py list-headers
The argument parser code is based on a suggestion from Adhemerval Zanella.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since __x86_shared_non_temporal_threshold is defined as
long int __x86_shared_non_temporal_threshold;
and long int is 4 bytes for x32, use RDX_LP to compare against
__x86_shared_non_temporal_threshold in assembly code.
Only alpha and ia64 do not support __NR_umount2 (defined as
__NR_umount), but recent kernel fixes (74cd2184833f for ia64, and
12b57c5c70f39 for alpha) add the required alias.
Checked with a build against all affected ABIs.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
This consolidates the copy-pasted arch-specific semaphore headers into a
single version (based on s390) which suffices for 32-bit and 64-bit
arches/ABIs based on the canonical WORDSIZE.
For now I've left out arches which use alternate defines to choose for
32 vs 64-bit builds (aarch64, mips) which in theory can also use the same
header.
Passes build-many for
aarch64-linux-gnu arm-linux-gnueabi arm-linux-gnueabihf
riscv64-linux-gnu-rv64imac-lp64 riscv64-linux-gnu-rv64imafdc-lp64
x86_64-linux-gnu microblaze-linux-gnu nios2-linux-gnu
Suggested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Those functions allow easy conversion between Y2038 safe, glibc internal
struct __timex64 and struct timex.
Those functions are put in Linux specific sys/timex.h file, as putting
them into glibc's local include/time.h would cause build break on HURD as
it doesn't support struct timex related syscalls.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The newly introduced glibc 'internal' struct __timex64 is a copy of the
Linux kernel's struct __kernel_timex (v5.6), introduced for properly
handling data for the clock_adjtime64 syscall.
As struct __kernel_timex has the same size as on archs with
__WORDSIZE == 64, proper padding and data type conversions (i.e. long to
long long) had to be added for architectures with __WORDSIZE == 32 &&
__TIMESIZE != 64.
Moreover, it stores time in struct __timeval64 rather than struct
timeval, which makes it Y2038-proof.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
For Linux glibc ports, __TIMESIZE == 64 ensures proper aliasing of
__clock_gettime64 (to __clock_gettime).
When __TIMESIZE != 64 (like ARM32, PPC), glibc expects a separate
definition of __clock_gettime64.
The Hurd port only provides __clock_gettime, so this patch adds
__clock_gettime64 as a tiny wrapper around it.
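A minimal sketch of such a wrapper (assuming the usual
valid_timespec_to_timespec64 conversion helper; the actual Hurd file may
differ in detail):

    #include <time.h>

    int
    __clock_gettime64 (clockid_t clock_id, struct __timespec64 *ts64)
    {
      struct timespec ts32;
      int r = __clock_gettime (clock_id, &ts32);
      if (r == 0)
        *ts64 = valid_timespec_to_timespec64 (ts32);
      return r;
    }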
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
strcmp is used while resolving PLT references. Vector registers
should not be used during this. The P9 strcmp makes heavy use of
vector registers, so it should be avoided in rtld.
This prevents quiet vector register corruption when glibc is configured
with --disable-multi-arch and --with-cpu=power9. This can be seen with
test-float64x-compat_totalordermag during the first call into
totalordermagf64x@GLIBC_2.27.
Add a guard to fallback to the power8 implementation when building
power9 strcmp for libraries other than libc.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The minimum GCC version has been raised to 6.2 for building
glibc. Therefore, follow the advice inside the implementation
and remove the GCC < 6 codepath.
Likewise, remove the hidden_proto as all internal usages should
inline now.
Commit a98dc92dd1 ("x86: Add cache
information support for Zhaoxin processors") introduced an unused
variable warning in the default i686-linux-gnu build:
In file included from ../sysdeps/i386/cacheinfo.c:3:
../sysdeps/x86/cacheinfo.c: In function 'init_cacheinfo':
../sysdeps/x86/cacheinfo.c:762:16: error: unused variable 'eax' [-Werror=unused-variable]
762 | unsigned int eax;
| ^~~
Add a C wrapper to pass arguments in
/* Control process execution. */
extern int prctl (int __option, ...) __THROW;
to prctl syscall:
extern int prctl (int, unsigned long int, unsigned long int,
unsigned long int, unsigned long int);
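A sketch of such a wrapper (simplified; it unconditionally fetches four
variadic unsigned long arguments, which is what the prctl interface allows,
and lets the kernel ignore any unused ones):

    #include <stdarg.h>
    #include <sysdep.h>

    int
    __prctl (int option, ...)
    {
      va_list arg;
      va_start (arg, option);
      unsigned long int a1 = va_arg (arg, unsigned long int);
      unsigned long int a2 = va_arg (arg, unsigned long int);
      unsigned long int a3 = va_arg (arg, unsigned long int);
      unsigned long int a4 = va_arg (arg, unsigned long int);
      va_end (arg);
      return INLINE_SYSCALL_CALL (prctl, option, a1, a2, a3, a4);
    }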
On platforms where long double may have two different formats, i.e.: the
same format as double (64-bits) or something else (128-bits), building
with -mlong-double-128 is the default and function calls in the user
program match the name of the function in Glibc. When building with
-mlong-double-64, Glibc installed headers redirect such calls to the
appropriate function.
Likewise, the internals of glibc are now built against IEEE long double.
However, the only (minimally) notable usage of long double is difftime.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
GCC 7.5.0 (PR94200) will refuse to compile if both -mabi=% and
-mlong-double-128 are passed on the command line. Surprisingly,
it will work happily if the latter is not. For the sake of
maintaining status quo, test for and blacklist such compilers.
Tested with a GCC 8.3.1 and GCC 7.5.0 compiler for ppc64le.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This is a small step up from 2.25 which brings in support for
rewriting the .gnu.attributes section of libc/libm.so.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Add compiler feature tests to ensure we can build ieee128 long double.
These test for -mabi=ieeelongdouble, -mno-gnu-attribute, and -Wno-psabi.
Likewise, verify some compiler bugs have been addressed. These aren't
helpful for building glibc, but may cause test failures when testing
the new long double. See notes below from Raji.
On powerpc64le, some older compiler versions give error for the function
signbit() for 128-bit floating point types. This is fixed by PR83862
in gcc 8.0 and backported to gcc6 and gcc7. This patch adds a test
to check compiler version to avoid compiler errors during make check.
Likewise, test for -mno-gnu-attribute support which was
On powerpc64le, a few files are built on IEEE long double mode
(-mabi=ieeelongdouble), whereas most are built on IBM long double mode
(-mabi=ibmlongdouble, the default for -mlong-double-128). Since binutils
2.31, linking object files with different long double modes causes
errors similar to:
ld: libc_pic.a(s_isinfl.os) uses IBM long double,
libc_pic.a(ieee128-qefgcvt.os) uses IEEE long double.
collect2: error: ld returned 1 exit status
make[2]: *** [../Makerules:649: libc_pic.os] Error 1
The warnings are fair and correct, but in order for glibc to have
support for both long double modes on powerpc64le, they have to be
ignored. This can be accomplished with the use of -mno-gnu-attribute
option when building the few files that require IEEE long double mode.
However, -mno-gnu-attribute is not available in GCC 6, the minimum
version required to build glibc, so this patch adds a test for this
feature in powerpc64le builds, and fails early if it's not available.
Co-Authored-By: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
Co-Authored-By: Gabriel F. T. Gomes <gabrielftg@linux.ibm.com>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Improve the commentary to aid future developers who will stumble
upon this novel, yet not always perfect, mechanism to support
alternative formats for long double.
Likewise, rename __LONG_DOUBLE_USES_FLOAT128 to
__LDOUBLE_REDIRECTS_TO_FLOAT128_ABI now that development work
has settled down. The command used was
git grep -l __LONG_DOUBLE_USES_FLOAT128 ':!./ChangeLog*' | \
xargs sed -i 's/__LONG_DOUBLE_USES_FLOAT128/__LDOUBLE_REDIRECTS_TO_FLOAT128_ABI/g'
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
To obtain Zhaoxin CPU cache information, add a new function
handle_zhaoxin().
Add a new function get_common_cache_info() that extracts the code in
init_cacheinfo() to get the values of the variables shared and threads.
Add Zhaoxin branch in init_cacheinfo() for initializing variables,
such as __x86_shared_cache_size.
Since the U marker can only be applied to 2 unsigned long arguments
in syscalls.list files, add a C wrapper for the process_vm_readv and
process_vm_writev syscalls, which have more than 2 unsigned long arguments.
Update the default typesizes.h to match the new kernel sizes for 32-bit
architectures with a 64-bit time_t and friends. This follows the sizes
used for RV32 which is a y2038 safe architecture added after Linux 5.1.
Reviewed-by: Vineet Gupta <vgupta@synopsys.com>
Tested-by: Vineet Gupta <vgupta@synopsys.com>
Remove the sem-pad.h file and instead have architectures override the
struct semid_ds via the bits/types/struct_semid_ds.h file.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Split out struct semid_ds into its own file. This will allow us to
have architectures specify their own version.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Mark unsigned long arguments in mmap, read, recv, recvfrom, send, sendto,
write, ioperm, sendfile64, setxattr, lsetxattr, fsetxattr, getxattr,
lgetxattr, fgetxattr, listxattr, llistxattr and flistxattr with U in
syscalls.list files.
X32 has 32-bit long and pointer with 64-bit off_t. Since x32 psABI
requires that pointers passed in registers must be zero-extended to
64bit, x32 can share many syscall interfaces with LP64. When a LP64
syscall with long and unsigned long int arguments is used for x32, these
arguments must be properly extended to 64-bit. Otherwise if the upper
32 bits of the register have undefined value, such a syscall will be
rejected by kernel.
For syscalls implemented in assembly codes, 'U' is added to syscall
signature key letters for unsigned long, which is zero-extended to
64-bit types. SYSCALL_ULONG_ARG_1 and SYSCALL_ULONG_ARG_2 are passed
to syscall-template.S for the first and the second unsigned long int
arguments if PSEUDOS_HAVE_ULONG_INDICES is defined. They are used by
x32 to zero-extend 32-bit arguments to 64 bits.
Tested on i386, x86-64 and x32 as well as with build-many-glibcs.py.
This change should not have an effect because the system call was
never defined. Also add the missing attribute_compat_text_section
attribute to the sstk function (a minor optimization). Also update the
NEWS file to document the change.
Fixes commit 9cc93ba097
("misc: Turn sstk into a compat symbol").
This patch removes the IEEE_DOUBLE_BIG_ENDIAN and
IEEE_DOUBLE_MIXED_ENDIAN macros from gmp-impl.h and gmp-mparam.h, and
the ieee_double_extract union from gmp-impl.h. The macros were used
only in defining the union, which was used nowhere in glibc. As GMP's
gmp-impl.h is over 5000 lines, the file in glibc is so far from the
GMP version that it doesn't seem to make sense to keep things there
that are not relevant in glibc. (I expect there is plenty more in the
header after this patch that is also not relevant in glibc and can be
cleaned up later.)
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.
Most gmp-mparam.h headers in glibc define various macros to the same
values they would be defined to by the generic version of that header,
plus macros IEEE_DOUBLE_BIG_ENDIAN or IEEE_DOUBLE_MIXED_ENDIAN related
to the representation of double. The latter macros are in turn only
used in gmp-impl.h to define union ieee_double_extract, which is not
used in glibc. Thus all of these headers, except for the generic one
and those that define _LONG_LONG_LIMB for ILP32 configurations with
64-bit registers, are redundant, and this patch removes them.
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.
This function is defined in libc.so, and the dynamic loader calls it
right after relocation has finished, before any ELF constructors
or the preinit function is invoked. It is also used in the static
build for initializing parts of the static libc.
To locate __libc_early_init, a direct symbol lookup function is used,
_dl_lookup_direct. It does not search the entire symbol scope and
consults merely a single link map. This function could also be used
to implement lookups in the vDSO (as an optimization).
A per-namespace variable (libc_map) is added for locating libc.so,
to avoid repeated traversals of the search scope. It is similar to
GL(dl_initfirst). An alternative would have been to thread a context
argument from _dl_open down to _dl_map_object_from_fd (where libc.so
is identified). This could have avoided the global variable, but
the change would be larger as a result. It would not have been
possible to use this to replace GL(dl_initfirst) because that global
variable is used to pass the function pointer past the stack switch
from dl_main to the main program. Replacing that requires adding
a new argument to _dl_init, which in turn needs changes to the
architecture-specific libc.so startup code written in assembler.
__libc_early_init should not be used to replace _dl_var_init (as
it exists today on some architectures). Instead, _dl_lookup_direct
should be used to look up a new variable symbol in libc.so, and
that should then be initialized from the dynamic loader, immediately
after the object has been loaded in _dl_map_object_from_fd (before
relocation is run). This way, more IFUNC resolvers which depend on
these variables will work.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
MIPS needs to ignore certain existing symbols during symbol lookup.
The old scheme used the ELF_MACHINE_SYM_NO_MATCH macro; it is replaced
with an inline function in its own header, with a sysdeps override for
MIPS. This allows re-use of the function from another file (without
having to include <dl-machine.h> or providing the default definition
for ELF_MACHINE_SYM_NO_MATCH).
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The upper bits of the sigset_t are not fully initialized by the signal
mask calls that return information from the kernel (sigprocmask,
sigpending, and pthread_sigmask), since the exported sigset_t size
(1024 bits) is larger than the one Linux supports (64 or 128 bits).
It might make sigisemptyset/sigorset/sigandset fail if the mask
is filled prior to the call.
This patch changes the internal signal functions to handle up to the
supported Linux signal number (_NSIG); the remaining bits are left
untouched.
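The idea can be illustrated with a small helper (a sketch; the internal code
uses its own size constants rather than this formula):

    #include <signal.h>
    #include <string.h>

    /* Kernel-provided masks only cover _NSIG - 1 signals; copy just those
       bytes into the user's sigset_t and leave the upper bits exactly as
       the caller had them (e.g. as set by a prior sigfillset).  */
    static void
    copy_kernel_sigset (sigset_t *dst, const unsigned long int *kmask)
    {
      size_t kernel_bytes = (_NSIG - 1 + 7) / 8;   /* 8 on most Linux ABIs.  */
      memcpy (dst, kmask, kernel_bytes);
    }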
Checked on x86_64-linux-gnu and i686-linux-gnu.
It is required because __libc_unwind_longjmp (used on thread
cancellation) calls __sigprocmask. Replace with a direct call.
They are required because __libc_unwind_longjmp (used for thread
cancellation) calls __sigprocmask. Replace this with a direct call.
The sigblock function is not exported and is not used internally, so
it can be removed.
Checked on cross build for ia64-linux-gnu.
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
A new symbol version is added in libc to force a loading failure, instead
of a lazy-binding failure, for new binaries used with old loaders.
Checked with a build against all affected ABIs.
__sfp_handle_exceptions is not fully correct regarding raising
exceptions, since there is no direct way to raise only FP_EX_OVERFLOW
or FP_EX_UNDERFLOW in SSE mode. Both libgcc and feraiseexcept rely
on x87 mode to accomplish it.
This reverts commit 460ee50de0.
Checked on x86_64.
These will be used by upcoming RV32 and ARC ports and any future ports.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The exported x86_64 fenv.h functions operate on both i387 and SSE (since
they need to work on float, double, and long double), while the
internal libc_fe* functions set either SSE (float, double, and float128)
or i387 (long double) state.
The libgcc __sfp_handle_exceptions (used in the float128 implementation),
however, sets either the SSE or the i387 exception state depending on
the exception to raise. This broke the internal assumption of float128
that only SSE operations will be used.
This patch reimplements the libgcc __sfp_handle_exceptions to use only
SSE operations and makes glibc use it instead of the libgcc one.
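The idea is the one sketched below (illustrative code only, not the
actual glibc implementation): on x86_64, double arithmetic uses SSE, so
an operation that overflows or underflows sets the corresponding MXCSR
status bits without ever touching the x87 status word.

#include <float.h>

static void
raise_sse_overflow (void)
{
  volatile double d = DBL_MAX;
  d *= 2.0;        /* Sets the SSE overflow (and inexact) flags.  */
}

static void
raise_sse_underflow (void)
{
  volatile double d = DBL_MIN;
  d *= DBL_MIN;    /* Sets the SSE underflow (and inexact) flags.  */
}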
I think we should fix libgcc in a similar manner: checking
config/i386/64/sfp-machine.h, it already supports only the SSE rounding
mode, and the x86_64 ABI also expects float128 to use SSE registers [1]
(although it is not clear how future implementations might handle it).
Checked on x86_64-linux-gnu.
[1] https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI
It is required for i686 BZ#12683 support when building with -Os
or -fno-omit-frame-pointer on some GCC versions. It is not used
in current code.
Checked on i686-linux-gnu.
Linux 5.5 removed the system call in commit
61a47c1ad3a4dc6882f01ebdc88138ac62d0df03 ("sysctl: Remove the sysctl
system call"). Therefore, the compat function is just a stub that
sets errno to ENOSYS.
Due to SHLIB_COMPAT, new ports will no longer add the sysctl function
automatically.
x32 already lacks the sysctl function, so an empty sysctl.c file is
used to suppress it. Otherwise, a new compat symbol would be added.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Patch 600f00b "linux: Use long time_t for wait4/getrusage" introduced
two bugs:
- The usage32 struct was set if the wait4 syscall had an error.
- For 32-bit systems the usage struct was set even if it was specified
as NULL.
This patch fixes the two issues.
X32 has 32-bit long and pointers with 64-bit off_t. Since the x32 psABI
requires that pointers passed in registers must be zero-extended to
64 bits, x32 can share many syscall interfaces with LP64. When an LP64
syscall with long and unsigned long arguments is used for x32, these
arguments must be properly extended to 64 bits. Otherwise, if the upper
32 bits of the register have an undefined value, such a syscall will be
rejected by the kernel.
Enforce zero-extension for pointer and array system call arguments.
For integer types, extend to int64_t (the full register) using a
regular cast, resulting in zero or sign extension based on the
signedness of the original type.
For
void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset);
we now generate
0: 41 f7 c1 ff 0f 00 00 test $0xfff,%r9d
7: 75 1f jne 28 <__mmap64+0x28>
9: 48 63 d2 movslq %edx,%rdx
c: 89 f6 mov %esi,%esi
e: 4d 63 c0 movslq %r8d,%r8
11: 4c 63 d1 movslq %ecx,%r10
14: b8 09 00 00 40 mov $0x40000009,%eax
19: 0f 05 syscall
That is:
1. addr is unchanged.
2. length is zero-extended to 64 bits.
3. prot is sign-extended to 64 bits.
4. flags is sign-extended to 64 bits.
5. fd is sign-extended to 64 bits.
6. offset is unchanged.
For int arguments, since the kernel uses only the lower 32 bits and
ignores the upper 32 bits of 64-bit registers, these work correctly.
Tested on x86-64 and x32. There are no code changes on x86-64.
This patch updates the kernel version in the test tst-mman-consts.py
to 5.6. (There are no new constants covered by this test in 5.6 that
need any other header changes.)
Tested with build-many-glibcs.py.
There are 2 new input values that need to be marked as
xfail-rounding:ibm128-libgcc, as they're known to fail because of libgcc
issues with different rounding modes.
Otherwise, the other tests just need an increase in ULP.
Since GCC 6.2 or later is required to build glibc, remove build support
for GCC older than GCC 6.
Tested with GCC 6.4 and GCC 9.3.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __mq_timedreceive_time64,
for receiving messages with an absolute timeout.
Moreover, a 32 bit version - __mq_timedreceive has been refactored to
internally use __mq_timedreceive_time64.
The __mq_timedreceive is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversion to 64 bit struct
__timespec64 from struct timespec.
The new mq_timedreceive_time64 syscall, available from Linux 5.1+, has been
used when applicable.
As this wrapper function is also used internally in glibc, e.g. to provide
the mq_receive implementation, an explicit check for abs_timeout being NULL
has been added due to the conversions between struct timespec and struct
__timespec64. Before this change, the Linux kernel handled this NULL pointer.
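A simplified sketch of that wrapper pattern (illustrative only, not the
actual glibc source; it relies on glibc-internal names such as struct
__timespec64 and the timespec conversion helper, which are assumptions
here):

/* glibc-internal context assumed: <mqueue.h>, <time.h>, struct __timespec64.  */
ssize_t
__mq_timedreceive (mqd_t mqdes, char *msg_ptr, size_t msg_len,
                   unsigned int *msg_prio,
                   const struct timespec *abs_timeout)
{
  struct __timespec64 ts64;
  /* Only convert when a timeout was actually supplied; previously the
     kernel was the one handling the NULL case.  */
  if (abs_timeout != NULL)
    ts64 = valid_timespec_to_timespec64 (*abs_timeout);

  return __mq_timedreceive_time64 (mqdes, msg_ptr, msg_len, msg_prio,
                                   abs_timeout != NULL ? &ts64 : NULL);
}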
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for glibc build test matrix:
- Linux v5.1 (with mq_timedreceive_time64) and glibc built with v5.1 as
minimal kernel version (--enable-kernel="5.1.0")
The __ASSUME_TIME64_SYSCALLS flag defined.
- Linux v5.1 and default minimal kernel version
The __ASSUME_TIME64_SYSCALLS not defined, but kernel supports
mq_timedreceive_time64 syscall.
- Linux v4.19 (no mq_timedreceive_time64 support) with default minimal kernel
version for contemporary glibc (3.2.0)
This kernel doesn't support mq_timedreceive_time64 syscall, so the fallback to
mq_timedreceive is tested.
Above tests were performed with Y2038 redirection applied as well as without
(so the __TIMESIZE != 64 execution path is checked as well).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __mq_timedsend_time64,
for sending messages with an absolute timeout.
Moreover, a 32 bit version - __mq_timedsend has been refactored to internally
use __mq_timedsend_time64.
The __mq_timedsend is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversion to 64 bit struct
__timespec64 from struct timespec.
The new __mq_timedsend_time64 syscall available from Linux 5.1+ has been used,
when applicable.
As this wrapper function is also used internally in glibc, e.g. to provide
the mq_send implementation, an explicit check for abs_timeout being NULL
has been added due to the conversions between struct timespec and struct
__timespec64. Before this change, the Linux kernel handled this NULL pointer.
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for glibc build test matrix:
- Linux v5.1 (with mq_timedsend_time64) and glibc built with v5.1 as a
minimal kernel version (--enable-kernel="5.1.0")
The __ASSUME_TIME64_SYSCALLS flag defined.
- Linux v5.1 and default minimal kernel version
The __ASSUME_TIME64_SYSCALLS not defined, but kernel supports
mq_timedsend_time64 syscall.
- Linux v4.19 (no mq_timedsend_time64 support) with default minimal kernel
version for contemporary glibc (3.2.0)
This kernel doesn't support mq_timedsend_time64 syscall, so the fallback to
mq_timedsend is tested.
Above tests were performed with Y2038 redirection applied as well as without
(so the __TIMESIZE != 64 execution path is checked as well).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
We turn off this feature to avoid polluting our shared library with
a specific value. However, static libgcc is not under our control,
and has enabled this for ibm128 routines. This pollutes the
resulting shared libraries with it.
Attach a post-linking hook to replace this section with one crafted
as hard-float + indeterminate ldbl. This allows IEEE ldbl users to
avoid having to disable the gnu attributes feature, which should
protect them from linking against ibm ldbl libraries that use the gnu
attributes feature.
Currently, this only replaces libc and libm which support both ldbl
formats and rely on application code to explicitly determine which
is to be used.
Strictly speaking, the section could be deleted with minimal lost value.
However, correctly set attributes could prove useful for some future change,
and similarly so could missing attributes.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
-mabi=ieeelongdouble triggers the stdc++ library's _Float128
support, which then breaks if <algorithm> is included. For now,
explicitly disable _Float128 for such tests.
I have opened up GCC BZ 94080 to track this.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
I have observed a bug with GCC 7.4.0 whereby __mulkc3 calls are
swapped with __multc3, depending on ABI selection. For the
sake of being overly cautious, build all _Float128 files
with ibm128 to work around these compilers. This has been
noted in GCC BZ 84914, and will not be fixed for GCC 7.
Likewise, non-math files built with _Float128 are assumed
to have ibm long double. Explicitly preserve this
assumption.
Finally, add some bootstrapping code to avoid applying
these options until IEEE long double is enabled as they
require GCC 7 and above.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
The test for enabling _Float128 or IEEE 128 long double can be
greatly simplified knowing that there is no ibm128, thus we require
no special cases, and everything is canonical.
This reverts the changes to ldbl-128ibm iscanonical.h from commit
8dbfea3a20 and extends the check
for __NO_LONG_DOUBLE_MATH to include a check for float128 redirects
to long double.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
check_consistency should be disabled for GCC 5 and above since there is
no fixed PIC register in GCC 5 and above. Check __GNUC_PREREQ (5,0)
instead of OPTIMIZE_FOR_GCC_5, since OPTIMIZE_FOR_GCC_5 is false with
-fno-omit-frame-pointer.
Linux 5.6 has new openat2 and pidfd_getfd syscalls. This patch adds
them to syscall-names.list and regenerates the arch-syscall.h files.
Tested with build-many-glibcs.py.
All cancellable syscalls are done by C implementations, so there is
no need to use a specialized implementation to optimize register usage.
It fixes BZ #25765.
Checked on x86_64-linux-gnu.
Now that there is a generic __timeval32 and helpers, we can use them for
Alpha instead of the Alpha-specific ones.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Linux kernel expects rusage to use a 32-bit time_t, even on archs
with a 64-bit time_t (like RV32). To address this let's convert
rusage to/from 32-bit and 64-bit to ensure the kernel always gets
a 32-bit time_t.
While we are converting these functions let's also convert them to be
the y2038 safe versions. This means there is a *64 function that is
called by a backwards compatible wrapper.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Linux kernel expects itimerval to use a 32-bit time_t, even on archs
with a 64-bit time_t (like RV32). To address this let's convert
itimerval to/from 32-bit and 64-bit to ensure the kernel always gets
a 32-bit time_t.
While we are converting these functions let's also convert them to be
the y2038 safe versions. This means there is a *64 function that is
called by a backwards compatible wrapper.
Tested-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
On y2038 safe 32-bit systems the Linux kernel expects itimerval
and rusage to use a 32-bit time_t, even though the other time_t's
are 64-bit. There are currently no plans to make 64-bit time_t versions
of these structs.
There are also other occurrences where the time passed to the kernel via
timeval doesn't match the wordsize.
To handle these cases let's define a new macro,
__KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64. This macro specifies whether the
kernel's old_timeval matches the new timeval64. It should be 1 for
64-bit architectures, except for Alpha's osf syscalls, and 0 for 32-bit
architectures and Alpha's osf syscalls.
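As a rough illustration of how such a guard can be used in a wrapper
(hypothetical code, not the actual implementation; struct __rusage32 and
the rusage32_to_rusage64 helper are assumed names):

int
__getrusage64 (int who, struct __rusage64 *usage)
{
#if __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64
  /* The kernel struct already matches the 64-bit layout.  */
  return INLINE_SYSCALL_CALL (getrusage, who, usage);
#else
  /* The kernel fills a struct whose timevals are 32-bit, so read into
     that layout and widen afterwards, only on success and only when a
     result was actually requested.  */
  struct __rusage32 usage32;
  int ret = INLINE_SYSCALL_CALL (getrusage, who, &usage32);
  if (ret == 0 && usage != NULL)
    rusage32_to_rusage64 (&usage32, usage);
  return ret;
#endif
}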
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The corner cases included were generated using exhaustive search
for all float/binary32 values on x86_64 (comparing to MPFR for
correct rounding to nearest).
For the j0/j1/y0 functions, only cases with ulp error <= 9 were
included.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Adds a POWER9 version of fmaf128 that uses the xsmaddqp
instruction.
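For illustration, the core of such an implementation could look roughly
like the sketch below (not the actual glibc source): xsmaddqp computes
VRT = VRA * VRB + VRT in IEEE binary128, so the fused multiply-add
becomes a single instruction.

_Float128
fmaf128_sketch (_Float128 x, _Float128 y, _Float128 z)
{
  /* POWER9 only; "v" keeps the operands in VSX/Altivec registers.  */
  __asm__ ("xsmaddqp %0,%1,%2" : "+v" (z) : "v" (x), "v" (y));
  return z;
}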
Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This addresses an issue that is present mainly on SMP machines running
threaded code. In a typical indirect call or PLT import stub, the
target address is loaded first. Then the global pointer is loaded into
the PIC register in the delay slot of a branch to the target address.
During lazy binding, the target address is a trampoline which transfers
to _dl_runtime_resolve().
_dl_runtime_resolve() uses the relocation offset stored in the global
pointer and the linkage map stored in the trampoline to find the
relocation. Then, the function descriptor is updated.
In a multi-threaded application, it is possible for the global pointer
to be updated between the load of the target address and the global
pointer. When this happens, the relocation offset has been replaced
by the new global pointer. The function pointer has probably been
updated as well but there is no way to find the address of the function
descriptor and to transfer to the target. So, _dl_runtime_resolve()
typically crashes.
HP-UX addressed this problem by adding an extra pc-relative branch to
the trampoline. The descriptor is initially set up to point to the
branch. The branch then transfers to the trampoline. This allowed
the trampoline code to figure out which descriptor was being used
without any modification to user code. I didn't use this approach
as it is more complex and changes function pointer canonicalization.
The order of loading the target address and global pointer in
indirect calls was not consistent with the order used in import stubs.
In particular, $$dyncall and some inline versions of it loaded the
global pointer first. This was inconsistent with the global pointer
being updated first in dl-machine.h. Assuming the accesses are
ordered, we want elf_machine_fixup_plt() to store the global pointer
first and calls to load it last. Then, the global pointer will be
correct when the target function is entered.
However, just to make things more fun, HP added support for
out-of-order execution of accesses in PA 2.0. The accesses used by
calls are weakly ordered. So, it's possible under some circumstances
that a function might be entered with the wrong global pointer.
However, HP uses weakly ordered accesses in 64-bit HP-UX, so I assume
that loading the global pointer in the delay slot of the branch must
work consistently.
The basic fix for the race is a combination of modifying user code to
preserve the address of the function descriptor in register %r22 and
setting the least-significant bit in the relocation offset. The
latter was suggested by Carlos as a way to distinguish relocation
offsets from global pointer values. Conventionally, %r22 is used
as the address of the function descriptor in calls to $$dyncall.
So, it wasn't hard to preserve the address in %r22.
I have updated gcc trunk and gcc-9 branch to not clobber %r22 in
$$dyncall and inline indirect calls. I have also modified the import
stubs in binutils trunk and the 2.33 branch to preserve %r22. This
required making the stubs one instruction longer but we save one
relocation. I also modified binutils to align the .plt section on
an 8-byte boundary. This allows descriptors to be updated atomically
with a floating-point store.
With these changes, _dl_runtime_resolve() can fall back to an alternate
mechanism to find the relocation offset when it has been clobbered.
There's just one additional instruction in the fast path. I tested
the fallback function, _dl_fix_reloc_arg(), by changing the branch to
always use the fallback. Old code still runs as it did before.
Fixes bug 23296.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Similar to the fenvinline.h removal, this kind of optimization is better
implemented by the compiler. Also, newer code avoids setting exceptions
directly (for instance, the code that makes the new logf, log2f and powf
implementations support SVID compat).
BZ#94194 [1] is the corresponding GCC bug for adding replacements
for these on x86.
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94194
Similar to string2.h (18b10de7ce) and string3.h (09a596cc2c), this
patch removes fenvinline.h on all architectures. Currently
only powerpc implements some optimizations. This kind of optimization
is better implemented by the compiler (which handles the architecture
ISA transparently).
Also, for the specific optimized powerpc implementation, the code is
becoming convoluted and these micro-optimizations are hardly widely
used, let alone a hotspot in real-world cases (non-default rounding
modes are used only in specific cases, and exception handling is most
likely done only on error paths). The fact that only x86 implements a
similar optimization (in fenv.h) also indicates that these should not
be in libc.
The math/test-fenv already covers all math/test-fenvinline tests,
so it is safe to remove it.
The powerpc fegetround optimization is moved to internal
fenv_libc.h.
BZ#94193 [1] is the corresponding GCC bug for adding replacements
for these on powerpc.
Checked on x86_64-linux-gnu and powerpc64le-linux-gnu.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94193
These functions are Alpha-specific; rename them to make that clear.
Let's also rename the header file from tv32-compat.h to
alpha-tv32-compat.h. This is to avoid conflicts with the one we will
introduce later.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some of these files depend on the avoidance of using the various
register sets of POWER. When enabling the IEEE 128 long double,
we must be sure to disable this ABI as some compilers will
refuse to compile if -mno-vsx and -mabi=ieeelongdouble are both
present.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
In practice, this flag should be applied globally, but it makes a good
sanity check to ensure ibm128 and ieee128 long double files are not
getting mismatched. _Float128 files use no long double, thus are
always safe to use this option.
Similarly, when investigating the linker complaints, difftime
makes trivial, self-contained usage of long double, so it
is also explicitly marked as such.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This better resembles the default linking process with the gnulibs,
and also resolves the increasingly difficult to maintain
f128-loader-link usage on powerpc64le as some libgcc symbols are
dependent on those found in the loader (ld).
Ensure the correct ldouble abi flags are applied to ibm128 files and
nldbl files. Remove the IEEE options if used, and apply the flags
used to build ldouble files which are ibm128 abi.
nldbl tests are a little tricky. To use the support, we must remove
all ldouble abi flags, and ensure -mlong-double-64 is used.
Co-authored-by: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Co-authored-by: Paul E. Murphy <murphyp@linux.vnet.ibm.com>
Tweak the PLT bypass magic when building glibc with long double
redirects. This is made more difficult by the fact we only get
one chance to redirect functions. This happens via the public
headers.
There are roughly three classes of redirect we need to attend to
today:
1. Simple redirects, redirected via cdef macro overrides and a
   new libc_hidden_ldbl_proto macro.
2. Internal usage of internal API, e.g. __snprintf, which has
   no direct analogue. This is bypassed directly on a case-by-case
   basis.
3. Double redirects, e.g. sscanf and related. These require
   a heavier-handed approach of macro renaming to existing
   symbols.
Most simple redirects are handled via 1. Ideally, the libc_*
macro would live in libc-symbols.h, but in practice the macros
needed for it to do anything useful live in cdefs.h, so they
are defined in the local override.
Notably, the internal name of the asprintf generated for ieee ldbl
redirects is renamed to work with internal prefixed usage.
This resolves the local plt usage introduced when building glibc
with ldbl == ieee128 on ppc64le.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
During the conversion to support 64-bit time on some architectures with
__WORDSIZE == 32 && __TIMESIZE != 64, the libc_hidden_def attribute for
eligible functions was omitted by mistake.
This patch fixes this issue and exports (and allows using) those
functions when Y2038 support is enabled in glibc.
With the mathinline removal there is no need to keep building and testing
inline math tests.
The gen-libm-tests.py support to generate ULP_I_* is removed and all
libm-test-ulps files are updated to no longer have the
i{float,double,ldouble} entries. The support for no-test-inline is
also removed from gen-auto-libm-tests, and the
auto-libm-test-out-* files were regenerated.
Checked on x86_64-linux-gnu and i686-linux-gnu.
This is similar to x86 (da75c1b180) and powerpc (32ea729996)
mathinline.h removal. The required macros to build the fpu routines
are moved to mathimpl.h, while the inline optimization macros for
atan, tanh, rint, log1p, significand, trunc, floor, ceil, isinf,
finite, scalbn, isnan, scalbln, nearbyint, lrint, and sincos are removed.
The gcc bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94204 was
created to track builtin support.
Checked with a build against m68k-linux-gnu, resulting binaries
are similar with and without the patch.
Since the legacy bitmap doesn't cover jitted code generated by a legacy JIT
engine, it isn't very useful. This patch removes ARCH_CET_LEGACY_BITMAP
and treats indirect branch tracking similarly to shadow stack by removing
legacy bitmap support.
Tested on CET Linux/x86-64 and non-CET Linux/x86-64.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Further optimize integer memcpy. Small cases now include copies up
to 32 bytes. 64-128 byte copies are split into two cases to improve
performance of 64-96 byte copies. Comments have been rewritten.
This conversion patch for supporting 64-bit time for futimesat only differs
from the work performed for futimes (when providing __futimes64) by also
passing the file name (and path) to utimensat.
All the design and conversion decisions are exactly the same as for futimens
conversion.
This conversion patch for supporting 64-bit time for lutimes mostly differs
from the work performed for futimes (when providing __futimes64) by adding
the AT_SYMLINK_NOFOLLOW flag to utimensat.
It also supports passing a file name instead of a file descriptor number, but
this is not relevant for utimensat, which is used to implement it.
All the design and conversion decisions are exactly the same as for futimens
conversion.
This patch provides a new explicit 64-bit function, __futimes64, for setting
a file's 64-bit access and modification times (by specifying a file
descriptor number).
Internally, the __utimensat64_helper function is used. This patch is necessary
for making architectures with __WORDSIZE == 32 Y2038 safe.
Moreover, a 32 bit version - __futimes has been refactored to internally use
__futimes64.
The __futimes is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversion of struct timeval
to 64 bit struct __timeval64.
The check whether the struct timevals' usec fields are in the range between 0
and 1000000 has been removed, as the Linux kernel performs it internally in
its implementation of utimensat (the conversion between struct __timeval64 and
__timespec64 is not relevant for this particular check).
Last but not least, checks for tvp{64} not being NULL have been preserved from
the original code as some legacy user space programs may rely on it.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without to
test the proper usage of both __futimes64 and __futimes.
It seems that some GCC versions might generate a stack frame for the
sigreturn stub required by sparc signal handling. For instance:
$ cat test.c
#define _GNU_SOURCE
#include <sys/syscall.h>
__attribute__ ((__optimize__ ("-fno-stack-protector")))
void
__sigreturn_stub (void)
{
__asm__ ("mov %0, %%g1\n\t"
"ta 0x10\n\t"
: /* no outputs */
: "i" (SYS_rt_sigreturn));
}
$ gcc -v
[...]
gcc version 9.2.1 20200224 (Debian 9.2.1-30)
$ gcc -O2 -m64 test.c -S -o -
[...]
__sigreturn_stub:
save %sp, -176, %sp
#APP
! 9 "t.c" 1
mov 101, %g1
ta 0x10
! 0 "" 2
#NO_APP
.size __sigreturn_stub, .-__sigreturn_stub
As indicated by kernel developers [1], the sigreturn stub can not change
the register window or the stack pointer since the kernel has setup the
restore frame at a precise location relative to the stack pointer when
the stub is invoked.
I tried to play with some compiler flags, and even _Noreturn and
__builtin_unreachable after the asm do not help (and sparc does not
support naked functions).
To avoid similar issues, and since the stack-protector support has also
stumbled on this, this patch moves the implementation of the sigreturn
stubs to assembly.
Checked on sparcv9-linux-gnu and sparc64-linux-gnu with gcc 9.2.1
and gcc 7.5.0.
[1] https://lkml.org/lkml/2016/5/27/465
Soon, powerpc64le will need to provide extra compiler flags to the long
double files in order to continue to build using the IBM 128-bit
extended floating point type as long double.
This patch creates test-ibm128* tests from the long double function tests.
In order to explicitly test IBM long double functions -mabi=ibmlongdouble is
added to CFLAGS.
Likewise, update the test headers to correctly choose ULPs when redirects
are enabled.
Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Co-authored-by: Paul E. Murphy <murphyp@linux.vnet.ibm.com>
A recent change to fenvinline.h modified the check whether __e is
a power of 2 inside the feraiseexcept and feclearexcept macros. It
introduced the use of the powerof2 macro but also removed the
if statement checking whether __e != 0 before issuing an mtfsb*
instruction. This is problematic because powerof2 (0) evaluates
to 1, and without the removed if, __e is allowed to be 0 when
__builtin_clz is called. In that case the value 32 is passed
to __MTFSB*, which is invalid.
This commit uses __builtin_popcount instead of powerof2 to fix this
issue and avoid the extra check for __e != 0. This was the approach
used by the initial versions of that previous patch.
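An illustration of the difference (sketch only, not the fenvinline.h code
itself):

/* powerof2 (x) is ((((x) - 1) & (x)) == 0), which is also true for x == 0,
   so a zero mask would have reached __builtin_clz and produced bit number
   32, which __MTFSB* cannot encode.  Counting set bits accepts exactly the
   single-bit masks and rejects zero.  */
static inline int
single_bit_p (unsigned int e)
{
  return __builtin_popcount (e) == 1;   /* false for e == 0, unlike powerof2 */
}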
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
The commit "arm: Split BE/LE abilist"
(1673ba87fe) changed the soft-fp order for
ARM selection when __SOFTFP__ is defined by the compiler.
On 2.30 the sysdeps order is:
sysdeps/unix/sysv/linux/arm
sysdeps/arm/nptl
sysdeps/unix/sysv/linux
sysdeps/nptl
sysdeps/pthread
sysdeps/gnu
sysdeps/unix/inet
sysdeps/unix/sysv
sysdeps/unix/arm
sysdeps/unix
sysdeps/posix
sysdeps/arm/nofpu
sysdeps/ieee754/soft-fp
sysdeps/arm
sysdeps/wordsize-32
sysdeps/ieee754/flt-32
sysdeps/ieee754/dbl-64
sysdeps/ieee754
sysdeps/generic
While on master is:
sysdeps/unix/sysv/linux/arm/le
sysdeps/unix/sysv/linux/arm
sysdeps/arm/nptl
sysdeps/unix/sysv/linux
sysdeps/nptl
sysdeps/pthread
sysdeps/gnu
sysdeps/unix/inet
sysdeps/unix/sysv
sysdeps/unix/arm
sysdeps/unix
sysdeps/posix
sysdeps/arm/le
sysdeps/arm
sysdeps/wordsize-32
sysdeps/ieee754/flt-32
sysdeps/ieee754/dbl-64
sysdeps/arm/nofpu
sysdeps/ieee754/soft-fp
sysdeps/ieee754
sysdeps/generic
It makes the build select some routines (fadd, fdiv, fmul, fsub, and fma)
from ieee754/flt-32 and ieee754/dbl-64 that require fenv support to be
correctly rounded, which in turn leads to math failures since the
__SOFTFP__ build does not have fenv support.
With this patch the order is now:
sysdeps/unix/sysv/linux/arm/le
sysdeps/unix/sysv/linux/arm
sysdeps/arm/nptl
sysdeps/unix/sysv/linux
sysdeps/nptl
sysdeps/pthread
sysdeps/gnu
sysdeps/unix/inet
sysdeps/unix/sysv
sysdeps/unix/arm
sysdeps/unix
sysdeps/posix
sysdeps/arm/le/nofpu
sysdeps/arm/nofpu
sysdeps/ieee754/soft-fp
sysdeps/arm/le
sysdeps/arm
sysdeps/wordsize-32
sysdeps/ieee754/flt-32
sysdeps/ieee754/dbl-64
sysdeps/ieee754
sysdeps/generic
Checked on arm-linux-gnueabi.
The kernel might not clear the padding value of the ipc_perm mode
fields in compat mode (32-bit running on a 64-bit kernel). It was
fixed in v4.14 when the ipc compat code was refactored (commits
553f770ef71b, 469391684626, c0ebccb6fa1e).
Although it is most likely a kernel issue, it was only exposed by the
BZ#18231 fix, which made all the SysVIPC mode_t fields 32-bit regardless
of the kABI.
This patch fixes it by explicitly zeroing the upper bits for such
cases. The __ASSUME_SYSVIPC_BROKEN_MODE_T case already handles
it with the shift.
(The aarch64 ipc_priv.h is superfluous since
__ASSUME_SYSVIPC_DEFAULT_IPC_64 is now defined as the default.)
Checked on i686-linux-gnu on a 3.10 and a 4.15 kernel.
fstatat64 depends on inlining to produce the desired __fxstatat64
call, which does not happen with -Os, leading to a link failure
with an undefined reference to fstatat64. __fxstatat64 has a macro
definition in include/sys/stat.h and thus avoids the problem.
After recent discussions:
- "[PATCH] s390: Remove backchain-based fallback from backtrace"
https://www.sourceware.org/ml/libc-alpha/2020-02/msg00287.html
- "Re: [PATCH 07/11] s390: Implement backtrace on top of <unwind-link.h>"
https://www.sourceware.org/ml/libc-alpha/2020-02/msg00637.html
We've checked and decided to remove the backchain:
We don't know of any environments without libgcc. Thus the backchain
unwinder is not used. If somebody builds with -mbackchain and without
-fasynchronous-unwind-tables and has libgcc installed, then the
libgcc unwinder is called but not the backchain-based fallback.
This step allows us to get rid of the s390x-specific backtrace.c files
entirely.
Furthermore, the debug/backtrace.c version now used has some more
advantages:
- Free all resources if necessary. (libc_freeres_fn)
- Remove NULL address above _start.
- Check whether we make any progress while getting addresses.
The combination of GCC 10 and binutils 2.35 (both unreleased) is no
longer able to link the dynamic linker, due to a GP16 relocation
overflow error:
glibc/alpha-linux-gnu/elf/librtld.os: in function `calloc': glibc/elf/../include/rtld-malloc.h:44:(.text+0xd98): relocation truncated to fit: GPREL16 against symbol `__rtld_calloc' defined in .data.rel.ro section in glibc/alpha-linux-gnu/elf/librtld.os
glibc/alpha-linux-gnu/elf/librtld.os: in function `malloc': glibc/elf/../include/rtld-malloc.h:56:(.text+0x2978): relocation truncated to fit: GPREL16 against symbol `__rtld_malloc' defined in .data.rel.ro section in glibc/alpha-linux-gnu/elf/librtld.os
This is arguably a linker bug; the object files and their section size
requirements look reasonable enough.
Using -fPIC (the default) works around this issue.
This patch replaces the auto-generated wrapper (as described in
sysdeps/unix/sysv/linux/syscalls.list) for utime with one which adds extra
support for setting a file's 64-bit access and modification times on machines
with __TIMESIZE != 64.
Internally, the __utimensat_time64 helper function is used. This patch is
necessary for making architectures with __WORDSIZE == 32 && __TIMESIZE != 64
Y2038 safe.
Moreover, a 32 bit version - __utime has been refactored to internally use
__utime64.
The __utime is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversion between struct
utimbuf and struct __utimbuf64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as
without to test proper usage of both __utime64 and __utime.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __utimes64, for setting
a file's 64-bit access and modification times.
Internally, the __utimensat64_helper function is used. This patch is necessary
for making architectures with __WORDSIZE == 32 Y2038 safe.
Moreover, a 32 bit version - __utimes has been refactored to internally use
__utimes64.
The __utimes is now supposed to be used on systems still supporting 32
bit time (__TIMESIZE != 64) - hence the necessary conversion of struct
timeval to 64 bit struct __timeval64.
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Above tests were performed with Y2038 redirection applied as well as without
to test proper usage of both __utimes64 and __utimes.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Due to the built-in tables, __NR_vfork is always defined, so the
fork-based fallback code is never used.
(It appears that the vfork system call was wired up when the port was
contributed to the kernel.)
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Due to the built-in tables, __NR_set_robust_list is always defined
(although it may not be available at run time).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Due to the built-in tables, __NR_getdents64 is always defined,
although it may not be supported at run time.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
With the built-in tables __NR_preadv2 and __NR_pwritev2 are always
defined.
The kernel has never defined __NR_preadv64v2 and __NR_pwritev64v2
and is unlikely to do so, given that the preadv2 and pwritev2 system
calls themselves are 64-bit.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Due to the built-in tables, __NR_rt_sigqueueinfo is always defined.
sysdeps/pthread/time_routines.c is not updated because it is shared with
Hurd.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The names __NR_preadv64, __NR_pwritev64 appear to be a glibc invention.
With the built-in tables, __NR_preadv and __NR_pwritev are always defined.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Linux removed the last definitions of __NR_pread and __NR_pwrite
in commit 4ba66a9760722ccbb691b8f7116cad2f791cca7b, the removal
of the blackfin port. All architectures now define __NR_pread64 and
__NR_pwrite64 only.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>