The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_wait@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_timedwait@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_signal@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_init@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_destroy@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_broadcast@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
This change also turns __pthread_once into a compatibility symbol
because after the call_once move, an internal call to __pthread_once
can be used. This requires an adjustment to __libc_once: outside libc
(e.g., in nscd), it has to call pthread_once. With __pthread_once as a
compatibility symbol, it is no longer necessary to add a new GLIBC_2.34
version after the move from libpthread, and this commit removes
the new __pthread_once@@GLIBC_2.34 version.
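A minimal sketch of the resulting __libc_once dispatch (illustrative,
not the verbatim glibc macro):

    #if IS_IN (libc)
    /* Inside libc, call the internal entry point directly.  */
    # define __libc_once(ONCE, INIT) __pthread_once (&(ONCE), INIT)
    #else
    /* Outside libc (e.g., in nscd), the public symbol must be used.  */
    # define __libc_once(ONCE, INIT) pthread_once (&(ONCE), INIT)
    #endif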
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
These make variables can be used to add routines to different
libraries for the Hurd and Linux builds.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This implementation is based on __memset_power8 and integrates a lot
of suggestions from Anton Blanchard.
The biggest difference is that it makes extensive use of stxvl in the
alignment and tail code to avoid branches and small stores. It has
three main execution paths:
a) "Short lengths" for lengths up to 64 bytes, avoiding as many
branches as possible.
b) "General case" for larger lengths, it has an alignment section
using stxvl to avoid branches, a 128 bytes loop and then a tail
code, again using stxvl with few branches.
c) "Zeroing cache blocks" for lengths from 256 bytes upwards and set
value being zero. It is mostly the __memset_power8 code but the
alignment phase was simplified because, at this point, address is
already 16-bytes aligned and also changed to use vector stores.
The tail code was also simplified to reuse the general case tail.
All unaligned stores use stxvl instructions that do not generate
alignment interrupts on POWER10, making it safe to use on
caching-inhibited memory.
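For illustration, a hedged sketch (a hypothetical C helper using GCC
inline asm, not the actual assembly implementation) of such a
branchless tail store:

    #include <altivec.h>

    /* Store the leftmost N (0..16) bytes of V to DST without a branch;
       the store length is carried in bits 0:7 of the last operand.  */
    static inline void
    store_tail (unsigned char *dst, vector unsigned char v,
                unsigned long n)
    {
      unsigned long len = n << 56;
      __asm__ volatile ("stxvl %x0, %1, %2"
                        : : "wa" (v), "b" (dst), "r" (len)
                        : "memory");
    }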
On average, this implementation provides around a 30%
improvement when compared to __memset_power8.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This implementation is based on __memcpy_power8_cached and integrates
suggestions from Anton Blanchard.
It benefits from loads and stores with length for short lengths and for
tail code, simplifying the code.
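As a hedged illustration (hypothetical helper, not the actual
assembly), a load/store-with-length pair that copies up to 16 bytes
without branching:

    #include <altivec.h>

    /* Copy the leftmost N (0..16) bytes from SRC to DST using
       lxvl/stxvl; the length rides in bits 0:7 of LEN.  */
    static inline void
    copy_tail (unsigned char *dst, const unsigned char *src,
               unsigned long n)
    {
      unsigned long len = n << 56;
      vector unsigned char v;
      __asm__ volatile ("lxvl %x0, %1, %2"
                        : "=wa" (v) : "b" (src), "r" (len) : "memory");
      __asm__ volatile ("stxvl %x0, %1, %2"
                        : : "wa" (v), "b" (dst), "r" (len)
                        : "memory");
    }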
All unaligned memory accesses use instructions that do not generate
alignment interrupts on POWER10, making it safe to use on
caching-inhibited memory.
The main loop has also been modified in order to increase instruction
throughput by reducing the dependency on updates from previous iterations.
On average, this implementation provides around a 30% improvement when
compared to __memcpy_power7 and a 10% improvement compared to
__memcpy_power8_cached.
This patch was initially based on __memmove_power7, with some ideas
from the strncpy implementation for POWER9.
Improvements from __memmove_power7:
1. Use lxvl/stxvl for alignment code.
The code for Power 7 uses branches when the input is not naturally
aligned to the width of a vector. The new implementation uses
lxvl/stxvl instead, which reduces pressure on GPRs. It also allows
the removal of branch instructions, implicitly removing branch stalls
and mispredictions.
2. The lxv/stxv and lxvl/stxvl instruction pairs are safe to use on
Cache Inhibited memory.
On POWER10, vector loads and stores are safe to use on CI memory for
addresses unaligned to 16B. This code takes advantage of this to
do unaligned loads.
The unaligned loads don't have a significant performance impact by
themselves. However, using them decreases register pressure on GPRs
and interdependence stalls on load/store pairs. This also improves
readability, as there are now fewer code paths for different
alignments. Finally, this reduces the overall code size.
3. Improved performance.
This version runs on average about 30% faster than __memmove_power7
for lengths larger than 8KB. For input lengths shorter than 8KB,
the improvement is smaller, averaging about 17% better performance.
This version shows a degradation of about 50% for input lengths
in the 0 to 31 byte range when dest is unaligned.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This patch updates the kernel version in the test tst-mman-consts.py
to 5.12. (There are no new MAP_* constants covered by this test in
5.12 that need any other header changes.)
Tested with build-many-glibcs.py.
Linux 5.12 has one new syscall, mount_setattr. Update
syscall-names.list and regenerate the arch-syscall.h headers with
build-many-glibcs.py update-syscalls.
Tested with build-many-glibcs.py.
On x86_64, when configuring glibc with CFLAGS="-O2 -g -march=native",
some tests fail. After this patch, "make check" succeeds.
Tested on Intel Core i5-4590 with gcc 10.2.1.
GCC 11 warns when a pointer to an uninitialized object is passed
to a function that takes a const-qualified argument. This is done
on the assumption that most such functions read from the object.
For the rare case of a function that doesn't, GCC 11 extends
attribute access to add a new mode called none.
POSIX pthread_setspecific() is one such rare function that takes
a const void* argument but that doesn't read from the object it
points to. To suppress the -Wmaybe-uninitialized issued by GCC
11 when the address of an uninitialized object is passed to it
(e.g., the result of malloc()), this change #defines
__attr_access_none in cdefs.h and uses the macro on the function
in sysdeps/htl/pthread.h and sysdeps/nptl/pthread.h.
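A hedged sketch of what the cdefs.h definition and its use might look
like (assuming GCC's access attribute with the new "none" mode; the
real headers may differ in detail):

    /* cdefs.h (sketch): expand to the attribute only when the
       compiler understands access mode "none".  */
    #if __GNUC_PREREQ (11, 0)
    # define __attr_access_none(argno) \
        __attribute__ ((__access__ (__none__, argno)))
    #else
    # define __attr_access_none(argno)
    #endif

    /* pthread.h (sketch): the object is never read through the
       pointer, so suppress -Wmaybe-uninitialized at call sites.  */
    extern int pthread_setspecific (pthread_key_t __key,
                                    const void *__pointer)
      __attr_access_none (2);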
No bug. This commit optimizes strchr-evex.S. The optimizations are
mostly small things such as saving an ALU operation in the alignment
process and a few instructions in the loop return. The one significant
change is saving 2 instructions in the 4x loop. test-strchr,
test-strchrnul, test-wcschr, and test-wcschrnul are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit optimizes strchr-avx2.S. The optimizations are all
small things such as saving an ALU operation in the alignment process, saving a
few instructions in the loop return, saving some bytes in the main
loop, and increasing the ILP in the return cases. test-strchr,
test-strchrnul, test-wcschr, and test-wcschrnul are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
For some architectures, the two functions are aliased, so these
symbols need to be moved at the same time.
The symbols were moved using scripts/move-symbol-to-libc.py.
And pthread_mutexattr_setkind_np as a compatibility symbol.
__pthread_mutexattr_settype is used in mtx_init from libpthread,
so this commit adds a GLIBC_2.34 symbol version for it.
The symbols were moved using scripts/move-symbol-to-libc.py.
__pthread_mutexattr_init cannot be made a compat symbol because
it is used in mtx_init, which is still in libpthread.
The symbols were moved using scripts/move-symbol-to-libc.py.
And pthread_mutexattr_getkind_np as a compatibility symbol.
(There is no declaration in <pthread.h>, so there is no need
to add an alias or a deprecation warning there.)
The symbols were moved using scripts/move-symbol-to-libc.py.
And __pthread_mutexattr_destroy as a compat symbol (so no
GLIBC_2.34 symbol version is added for it).
The symbols were moved using scripts/move-symbol-to-libc.py.
The symbols were moved using scripts/move-symbol-to-libc.py.
__pthread_mutex_trylock is used to implement mtx_timedlock,
which still resides in libpthread, so add a GLIBC_2.34 version
for it, to match the existing GLIBC_2.0 version.
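A hedged sketch of the kind of versioning directives this implies
(using glibc's shlib-compat.h macros; the exact names in the real
file may differ):

    #include <shlib-compat.h>

    /* New default version for the internal alias, alongside the
       existing GLIBC_2.0 binding for old binaries.  */
    versioned_symbol (libc, ___pthread_mutex_trylock,
                      __pthread_mutex_trylock, GLIBC_2_34);
    #if OTHER_SHLIB_COMPAT (libpthread, GLIBC_2_0, GLIBC_2_34)
    compat_symbol (libc, ___pthread_mutex_trylock,
                   __pthread_mutex_trylock, GLIBC_2_0);
    #endif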
The symbols were moved using scripts/move-symbol-to-libc.py.
The symbol aliasing follows pthread_cond_timedwait et al.
Missing hidden prototypes had to be added to nptl/pthreadP.h
for consistency.
Improvements compared to the POWER9 version:
1. Take into account the first 16B comparison for aligned strings
The previous version compares the first 16B and increments r4 by the number
of bytes until the address is 16B-aligned, then starts doing aligned loads at
that address. For aligned strings, this causes the first 16B to be compared
twice, because the increment is 0. Here we calculate the next 16B-aligned
address differently, which avoids that issue.
2. Use simple comparisons for the first ~192 bytes
The main loop is good for big strings, but comparing 16B each time is better
for smaller strings. So after aligning the address to 16 bytes, we check
another 176B in 16B chunks. There may be some overlaps with the main loop for
unaligned strings, but we avoid using the more aggressive strategy too soon,
and also allow the loop to start at a 64B-aligned address. This greatly
benefits smaller strings and avoids overlapping checks if the string is
already aligned at a 64B boundary.
3. Reduce dependencies between load blocks caused by address calculation in the loop
Doing a precise time tracing on the code showed many loads in the loop were
stalled waiting for updates to r4 from previous code blocks. This
implementation avoids that as much as possible by using 2 registers (r4 and
r5) to hold addresses to be used by different parts of the code.
Also, the previous code aligned the address to 16B, then to 64B by doing a
few 48B loops (if needed) until the address was aligned. The main loop could
not start until that 48B loop had finished and r4 was updated with the
current address. Here we calculate the address used by the loop very early,
so it can start sooner.
The main loop now uses 2 pointers 128B apart to make pointer updates less
frequent, and also unrolls 1 iteration to guarantee there is enough time
between iterations to update the pointers, reducing stalled cycles.
4. Use new P10 instructions
lxvp is used to load 32B with a single instruction, reducing contention in
the load queue.
vextractbm allows simplifying the tail code for the loop, replacing
vbpermq and avoiding having to generate a permute control vector.
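For illustration, a hedged sketch (hypothetical C helper with GCC
inline asm, not the actual assembly) of how vextractbm turns a
byte-compare result into a bitmask without a permute control vector:

    #include <altivec.h>

    /* Return a 16-bit mask with one bit per byte of CMP (the MSB of
       each byte), replacing the vbpermq + control-vector sequence.  */
    static inline unsigned int
    byte_mask (vector unsigned char cmp)
    {
      unsigned long mask;
      __asm__ ("vextractbm %0, %1" : "=r" (mask) : "v" (cmp));
      return (unsigned int) mask;
    }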
Reviewed-by: Paul E Murphy <murphyp@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
There is no new symbol version because of the compatibility symbol
status. The __pthread_atfork reference in nptl/Versions was unused.
This is required for GCC versions before 10, which default to -fcommon.
Fixes commit 442e8a40da ("nptl: Move part
of TCB initialization from libpthread to __tls_init_tp").
The signal handler is exported as __nptl_setxid_sighandler, so
that the libpthread initialization code can install it. This
is sufficient for now because it is guaranteed to happen before
the first pthread_create call.
Only pthread_cond_clockwait did not have a forwarder, so it needs
a new symbol version.
Some complications arise due to the need to supply hidden aliases,
GLIBC_PRIVATE exports (for the C11 condition variable implementation
that still remains in libpthread) and 64-bit time_t stubs.
pthread_cond_broadcast, pthread_cond_signal, pthread_cond_timedwait,
pthread_cond_wait, pthread_cond_clockwait have been moved using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is complicated because of a second compilation of
nptl/pthread_mutex_lock.c via nptl/pthread_mutex_cond_lock.c.
PTHREAD_MUTEX_VERSIONS is introduced to suppress symbol versions
in that case.
The symbols __pthread_mutex_lock, __pthread_mutex_unlock,
__pthread_mutex_init, __pthread_mutex_destroy, pthread_mutex_lock,
pthread_mutex_unlock, pthread_mutex_init, pthread_mutex_destroy
have been moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The current approach is to do these optimizations at a higher level,
in generic code, so that single-threaded cases can be specifically
targeted.
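A hedged, purely illustrative sketch of such a higher-level fast path
(SINGLE_THREAD_P is glibc's existing predicate; the helper name and
fallback are hypothetical):

    static int
    lock_generic (int *futex, int private)
    {
      /* Single-threaded: a plain store suffices, with no atomics
         and no futex syscall.  */
      if (SINGLE_THREAD_P)
        {
          *futex = 1;
          return 0;
        }
      /* Multi-threaded: take the usual atomic path.  */
      lll_lock (*futex, private);
      return 0;
    }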
Furthermore, using IS_IN (libc) as a compile-time indicator that
all locks are private is no longer correct once process-shared lock
implementations are moved into libc.
The generic <lowlevellock.h> is not compatible with assembler code
(obviously), so it's necessary to remove two long-unused #includes.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is in preparation of moving the mutex code into libc.
__pthread_tunables_init is now called via __pthread_early_init.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The forwarders were only used internally, so new symbol versions
are needed. All symbols are moved at once because the forwarders
are no-ops if libpthread is not loaded, leading to inconsistencies
in case of a partial migration.
The symbols __pthread_rwlock_rdlock, __pthread_rwlock_unlock,
__pthread_rwlock_wrlock, pthread_rwlock_rdlock,
pthread_rwlock_unlock, pthread_rwlock_wrlock have been moved using
scripts/move-symbol-to-libc.py.
The __ symbol variants are turned into compat symbols, which is why they
do not receive a GLIBC_2.34 version.
The symbol was moved using scripts/move-symbol-to-libc.py.
tss_delete (still in libpthread) uses the __pthread_key_create
alias, so that is now exported under GLIBC_PRIVATE.
This initialization should only happen once for the main thread's TCB.
At present, auditors can achieve this by not linking against
libpthread. If libpthread becomes part of libc, doing this
initialization in libc would happen for every audit namespace,
or too late (if it happens from the main libc only). That's why
moving this code into ld.so seems the right thing to do, right after
the TCB initialization.
For !__ASSUME_SET_ROBUST_LIST ports, this also moves the symbol
__set_robust_list_avail into ld.so, as __nptl_set_robust_list_avail.
It is also turned into a proper boolean flag.
Inline the __pthread_initialize_pids function because it no
longer seems useful as a separate function.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
TLS_INIT_TP is processor-specific, so it is not a good place to
put thread library initialization code (it would have to be repeated
for all CPUs). Introduce __tls_init_tp as a separate function,
to be called immediately after TLS_INIT_TP. Move the existing
stack list setup code for NPTL to this function.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Calling free directly may end up freeing a pointer allocated by the
dynamic loader using malloc from libc.so in the base namespace using
the allocator from libc.so in a secondary namespace, which results in
crashes.
This commit redirects the free call through GLRO and the dynamic
linker, to reach the correct namespace. It also cleans up the dlerror
handling along the way, so that pthread_setspecific is no longer
needed (which avoids triggering bug 24774).
Commit 9e78f6f6e7 ("Implement
_dl_catch_error, _dl_signal_error in libc.so [BZ #16628]") has the
side effect that distinct namespaces, as created by dlmopen, now have
separate implementations of the rtld exception mechanism. This means
that the call to _dl_catch_error from libdl in a secondary namespace
does not actually install an exception handler because the
thread-local variable catch_hook in the libc.so copy in the secondary
namespace is distinct from that of the base namespace. As a result, a
dlsym/dlopen/... failure in a secondary namespace terminates the process
with a dynamic linker error because it looks to the exception handler
mechanism as if no handler has been installed.
This commit restores GLRO (dl_catch_error) and uses it to set the
handler in the base namespace.
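A hedged sketch of a call through the restored hook (the signature
follows _dl_catch_error; the helper itself is hypothetical):

    #include <stdbool.h>

    /* Run OPERATE (ARGS) with the rtld exception handler of the base
       namespace installed, so errors are caught there.  */
    static int
    catch_in_base (void (*operate) (void *), void *args)
    {
      const char *objname;
      const char *errstring;
      bool malloced;
      return GLRO (dl_catch_error) (&objname, &errstring, &malloced,
                                    operate, args);
    }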
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No new symbol version is required because there was a forwarder.
The symbol has been moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No new symbol version is required because there was a forwarder.
The symbol has been moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The pthread_exit symbol was moved using
scripts/move-symbol-to-libc.py. No new symbol version is needed
because there was a forwarder.
The new tests nptl/tst-pthread_exit-nothreads and
nptl/tst-pthread_exit-nothreads-static exercise the scenario
that pthread_exit is called without libpthread having been linked in.
This is not possible for the generic code, so these tests do not
live in sysdeps/pthread for now.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It's necessary to stub out __libc_disable_asynccancel and
__libc_enable_asynccancel via rtld-stubbed-symbols because the new
direct references to the unwinder result in symbol conflicts when the
rtld exception handling from libc is linked in during the construction
of librtld.map.
unwind-forcedunwind.c is merged into unwind-resume.c. libc now needs
the functions that were previously only used in libpthread.
The GLIBC_PRIVATE exports of __libc_longjmp and __libc_siglongjmp are
no longer needed, so switch them to hidden symbols.
The symbol __pthread_unwind_next has been moved using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And also the fork generation counter, __fork_generation. This
eliminates the need for __fork_generation_pointer.
call_once remains in libpthread and calls the exported __pthread_once
symbol.
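A hedged sketch of the resulting call_once forwarder (the real file
may differ in details):

    #include <threads.h>

    void
    call_once (once_flag *flag, void (*func) (void))
    {
      /* once_flag is ABI-compatible with pthread_once_t, so forward
         to the exported __pthread_once symbol.  */
      __pthread_once ((pthread_once_t *) flag, func);
    }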
pthread_once and __pthread_once have been moved using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This internal symbol is used as part of the longjmp implementation.
Rename the file from nptl/pt-cleanup.c to nptl/pthread_cleanup_upto.c
so that the pt-* files remain restricted to libpthread.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The definitions in libc are sufficient, the forwarders are no longer
needed.
The symbols have been moved using scripts/move-symbol-to-libc.py.
s390-linux-gnu and s390x-linux-gnu need a new version placeholder
to keep the GLIBC_2.19 symbol version in libpthread.
Tested on i386-linux-gnu, powerpc64le-linux-gnu, s390x-linux-gnu,
x86_64-linux-gnu. Built with build-many-glibcs.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This affects _pthread_cleanup_pop, _pthread_cleanup_pop_restore,
_pthread_cleanup_push, _pthread_cleanup_push_defer. The symbols
have been moved using scripts/move-symbol-to-libc.py.
No new symbol versions are added because the symbols are turned into
compatibility symbols at the same time.
__pthread_cleanup_pop and __pthread_cleanup_push are added as
GLIBC_PRIVATE symbols because they are also used internally, for
glibc's own cancellation handling.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It is still used internally. Since unwinding is now available
unconditionally, avoid indirect calls through function pointers loaded
from the stack by inlining the non-cancellation cleanup code. This
avoids a regression in security hardening.
The out-of-line __libc_cleanup_routine implementation is no longer
needed because the inline definition is now static __always_inline.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove generic tlsdesc code related to lazy tlsdesc processing since
lazy tlsdesc relocation is no longer supported. This includes removing
GL(dl_load_lock) from _dl_make_tlsdesc_dynamic, which is only called at
load time when that lock is already held.
Added a documentation comment too.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No bug. This commit optimizes strlen-avx2.S. The optimizations are
mostly small things but they add up to roughly 10-30% performance
improvement for strlen. The results for strnlen are a bit more
ambiguous. test-strlen, test-strnlen, test-wcslen, and test-wcsnlen
are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit optimizes strlen-evex.S. The
optimizations are mostly small things but they add up to roughly
10-30% performance improvement for strlen. The results for strnlen are
a bit more ambiguous. test-strlen, test-strnlen, test-wcslen, and
test-wcsnlen are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit adds an optimized implementation for the less_vec
memset case that uses the avx512vl/avx512bw mask store, avoiding the
excessive branches. test-memset and test-wmemset are passing.
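A hedged C sketch of the masked-store idea using intrinsics
(hypothetical helper; the real code is assembly):

    #include <immintrin.h>

    /* Set the first N (< 32) bytes of DST with one masked store
       instead of a branch ladder; requires AVX512VL+AVX512BW
       and BMI2.  */
    static void
    set_small (char *dst, int c, unsigned int n)
    {
      __mmask32 m = _bzhi_u32 (~0u, n);        /* low n bits set */
      __m256i v = _mm256_set1_epi8 ((char) c);
      _mm256_mask_storeu_epi8 (dst, m, v);     /* masked bytes only */
    }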
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Since strchr-avx2.S updated by
commit 1f745ecc21
Author: noah <goldstein.w.n@gmail.com>
Date: Wed Feb 3 00:38:59 2021 -0500
x86-64: Refactor and improve performance of strchr-avx2.S
uses sarx:
c4 e2 72 f7 c0 sarx %ecx,%eax,%eax
for strchr-avx2 family functions, require BMI2 in ifunc-impl-list.c and
ifunc-avx2.h.
Since __strlen_evex and __strnlen_evex added by
commit 1fd8c163a8
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Fri Mar 5 06:24:52 2021 -0800
x86-64: Add ifunc-avx2.h functions with 256-bit EVEX
use sarx:
c4 e2 6a f7 c0 sarx %edx,%eax,%eax
require BMI2 for __strlen_evex and __strnlen_evex in ifunc-impl-list.c.
ifunc-avx2.h already requires BMI2 for EVEX implementation.
No bug. This commit updates the large memcpy case (no overlap). The
update is to perform memcpy on either 2 or 4 contiguous pages at
once. This 1) helps to alleviate the effects of false memory aliasing
when destination and source have a close 4k alignment and 2) in most
cases and for most DRAM units is a modestly more efficient access
pattern. These changes are a clear performance improvement for
VEC_SIZE=16/32, though more ambiguous for VEC_SIZE=64. test-memcpy,
test-memccpy, test-mempcpy, test-memmove, and tst-memmove-overflow all
pass.
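A hedged C sketch of the access pattern only (the real code uses
vector moves and prefetching):

    #include <stddef.h>
    #include <string.h>

    /* Copy two contiguous 4 KiB pages by interleaving 64-byte chunks
       from both pages, diluting 4k-aliasing conflicts between source
       and destination.  */
    static void
    copy_2_pages (char *dst, const char *src)
    {
      for (size_t off = 0; off < 4096; off += 64)
        {
          memcpy (dst + off, src + off, 64);
          memcpy (dst + 4096 + off, src + 4096 + off, 64);
        }
    }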
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Some registers that can be clobbered by the kernel during a syscall are not
listed in the clobbers list in sysdeps/unix/sysv/linux/powerpc/sysdep.h.
For syscalls using sc:
- XER is zeroed by the kernel on exit
For syscalls using scv:
- XER is zeroed by the kernel on exit
- Unlike the sc case, most CR fields can be clobbered (according to
the ELF ABI and the Linux kernel's syscall ABI for powerpc,
linux/Documentation/powerpc/syscall64-abi.rst)
The same should apply to vsyscalls, which effectively execute a function call
but do not currently list these registers as clobbers either.
These are likely not causing issues today, but they should be added to the
clobbers list just in case things change on the kernel side in the future.
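A hedged, simplified sketch (not the verbatim sysdep.h macro) of an
sc-based inline syscall that lists XER among the clobbers; the
wrapper and its clobber set are illustrative only:

    /* Hypothetical single-argument syscall wrapper.  */
    static inline long
    my_syscall1 (long number, long arg1)
    {
      register long r0 __asm__ ("r0") = number;
      register long r3 __asm__ ("r3") = arg1;
      __asm__ __volatile__ ("sc"
                            : "+r" (r0), "+r" (r3)
                            : /* no separate inputs */
                            : "r4", "r5", "r6", "r7", "r8", "r9",
                              "r10", "r11", "r12", "cr0", "ctr",
                              "xer", "memory");
      return r3;
    }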
Reported-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
MSG_NOSIGNAL was added in POSIX 2008, and Hurd seems to support it.
The SIGPIPE handling also makes the implementation not thread-safe
(due to the sigaction usage).
Checked on x86_64-linux-gnu.
Now that libsupport abstracts possibly missing Linux support (either
due to filesystem limitations that can't handle 64-bit timestamps or
architectures that do not handle values larger than unsigned 32-bit
values), the tests can be made generic.
Checked on x86_64-linux-gnu and i686-linux-gnu. I also built the
tests for i686-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Lazy tlsdesc relocation is racy because the static tls optimization and
tlsdesc management operations are done without holding the dlopen lock.
This is similar to commit b7cf203b5c
for aarch64, but it fixes a different race: bug 27137.
On i386 the code is a bit more complicated than on x86_64 because both
rel and rela relocs are supported.
Lazy tlsdesc relocation is racy because the static tls optimization and
tlsdesc management operations are done without holding the dlopen lock.
This is similar to commit b7cf203b5c
for aarch64, but it fixes a different race: bug 27137.
Another issue is that ld auditing ignores DT_BIND_NOW and thus tries to
relocate tlsdesc lazily, but that does not work in a BIND_NOW module
due to missing DT_TLSDESC_PLT. Unconditionally relocating tlsdesc at
load time fixes bug 27721 too.