this change is consistent with the corresponding glibc functions and
is semantically const-correct. the incorrect argument types without
const seem to have been taken from erroneous man pages.
this functionality has essentially always been deprecated in linux,
and was never supported by musl. the presence of the header was
reported to cause some software to attempt to use the nonexistent
function, so removing the header is the cleanest solution.
this was wrong since the original commit adding inotify, and I don't
see any explanation for it. not even the man pages have it wrong. it
was most likely a copy-and-paste error.
this practice came from very early, before internal/syscall.h defined
macros that could accept pointer arguments directly and handle them
correctly. aside from being ugly and unnecessary, it looks like it
will be problematic when we add support for 32-bit ABIs on archs where
registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
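a sketch of the macro technique; __scc matches the spirit of the real
internal macro, while raw_syscall3 and my_syscall3 are hypothetical
stand-ins:

    /* __scc converts each argument itself, so callers can pass
       pointers directly instead of writing (long)ptr casts; on
       an ILP32 ABI with 64-bit syscall arguments, __scc can be
       redefined to widen correctly */
    #define __scc(x) ((long)(x))
    #define my_syscall3(n,a,b,c) \
        raw_syscall3((n), __scc(a), __scc(b), __scc(c))

    /* caller: my_syscall3(SYS_read, fd, buf, len);
       no (long)buf cast needed at the call site */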
this agrees with implementation practice on glibc and BSD systems, and
is the const-correct way to do things; it eliminates warnings from
passing pointers to const. the prototype without const came from
seemingly erroneous man pages.
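as a purely illustrative example (the function name is hypothetical):

    /* with const in the prototype, callers holding pointers
       to const compile cleanly */
    int send_query(const char *name);

    void caller(void)
    {
        const char *host = "example.com";
        send_query(host); /* would warn if the parameter
                             were plain char * */
    }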
the header is included only as a guard to check that the declaration
and definition match, so the typo didn't cause any breakage aside
from omitting this check.
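for illustration, using strlen:

    /* the definition file includes the header that declares
       the function, so the compiler checks that declaration
       and definition agree */
    #include <string.h>

    size_t strlen(const char *s)
    {
        const char *a = s;
        while (*s) s++;
        return s - a;
    }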
the reasons are the same as for sbrk. unlike sbrk, there is no safe
usage because brk does not return any useful information, so it should
just fail unconditionally.
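the resulting function is trivial; a sketch consistent with the
above:

    #include <errno.h>

    int brk(void *end)
    {
        /* no safe usage exists, so fail unconditionally */
        errno = ENOMEM;
        return -1;
    }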
use of sbrk is never safe; it conflicts with malloc, and malloc may be
used internally by the implementation basically anywhere. prior to
this change, applications attempting to use sbrk to do their own heap
management simply caused untrackable memory corruption; now, they will
fail with ENOMEM allowing the errors to be fixed.
sbrk(0) is still permitted as a way to get the current brk; some
misguided applications use this as a measurement of their memory
usage or for other related purposes, and such usage is harmless.
eventually sbrk may be re-added if/when malloc is changed to avoid
using the brk by using mmap for all allocations.
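a sketch of the described behavior, using the public syscall()
interface in place of libc-internal machinery:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    void *sbrk(intptr_t inc)
    {
        /* sbrk(0) remains permitted as a harmless query of
           the current brk; any attempt to move it fails */
        if (inc) {
            errno = ENOMEM;
            return (void *)-1;
        }
        /* the raw brk syscall with argument 0 returns the
           current brk on Linux */
        return (void *)syscall(SYS_brk, 0);
    }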
ssi_ptr is really 64-bit in the kernel, so fix that. assuming
sizeof(void*) for it also caused incorrect padding on 32-bit archs,
since the 64-bit fields that follow are aligned to 64 bits (and the
padding was not taken into account), so fix the padding as well. add
the ssi_addr_lsb field while at it.
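the relevant run of fields, in kernel order:

    /* all fixed-width, so the layout is the same on 32- and
       64-bit archs and no implicit padding precedes ssi_utime */
    uint64_t ssi_ptr;
    uint64_t ssi_utime;
    uint64_t ssi_stime;
    uint64_t ssi_addr;
    uint16_t ssi_addr_lsb;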
the workaround/fallback code for supporting O_PATH file descriptors
when the kernel lacks support for performing these operations on them
caused EBADF to get replaced by ENOENT (due to missing entry in
/proc/self/fd). this is unlikely to affect real-world code (calls that
might yield EBADF are generally unsafe, especially in library code)
but it was breaking some test cases.
the fix I've applied is something of a tradeoff: it adds one syscall
to these operations on kernels where the workaround is needed. the
alternative would be to catch ENOENT from the /proc lookup and
translate it to EBADF, but I want to avoid doing that in the interest
of not touching/depending on /proc at all in these functions as long
as the kernel correctly supports the operations. this is following the
general principle of isolating hacks to code paths that are taken on
broken systems, and keeping the code for correct systems completely
hack-free.
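a sketch of the resulting shape, using fstat as the example (the
__syscall spellings are illustrative of libc-internal calls):

    long r = __syscall(SYS_fstat, fd, st);
    /* only take the /proc fallback when the failed fd is
       actually valid; the extra F_GETFD syscall costs one
       round trip but keeps EBADF reporting correct */
    if (r != -EBADF || __syscall(SYS_fcntl, fd, F_GETFD) < 0)
        return __syscall_ret(r);
    /* fd is valid but the kernel refused the operation on an
       O_PATH fd: fall back to stat via /proc/self/fd/... */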
the ABI allows the callee to clobber stack slots that correspond to
arguments passed in registers, so the caller must adjust the stack
pointer to reserve space appropriately. prior to this fix, the argv
array was possibly clobbered by dynamic linker code before passing
control to the main program.
our getcwd already (as an extension) supports allocation of a buffer
when the buffer argument is a null pointer, so there's no need to
duplicate the allocation logic in this wrapper function. duplicating
it is actually harmful in that it doubles the stack usage from
PATH_MAX to 2*PATH_MAX.
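the wrapper can then be reduced to the following sketch (simplified;
the real function may also consult $PWD):

    #define _GNU_SOURCE
    #include <unistd.h>

    char *get_current_dir_name(void)
    {
        /* getcwd with a null buffer allocates (an extension),
           so no PATH_MAX-sized stack buffer is needed here */
        return getcwd(0, 0);
    }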
the wildcard function in GNU make includes dangling symlinks; if any
exist under the .git directory, they get added as dependencies,
causing make to exit with an error because it lacks a rule to build
the missing file.
as far as I can tell, git operations which should force version.h to
be rebuilt must all touch the mtime of the top-level .git directory.
historically these functions appeared in BSD 4.3 without prototypes,
then in the bind project prototypes were added to resolv.h, but those
were incompatible with the definitions of the implementation.
the bind resolv.h became the de facto api most systems use now, but the
old internal definitions found their way into the linux manuals and thus
into musl.
previously this flag was defined and accepted as a no-op, possibly
breaking some software that uses it. given the choice to remove the
definition and possibly break applications that were already working,
or simply implement the feature, the latter turned out to be easy
enough to make the decision obvious.
in the case where the FNM_PATHNAME flag is also set, this
implementation is clean and essentially optimal. otherwise, it's an
inefficient "brute force" implementation. at some point, when cleaning
up and refactoring this code, I may add a more direct code path for
handling FNM_LEADING_DIR in the non-FNM_PATHNAME case, but at this
point my main interest is avoiding introducing new bugs in the code
that implements the standard fnmatch features specified by POSIX.
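the brute-force idea, expressed standalone in terms of the public
interface (the in-tree code works on internal state instead):

    #include <fnmatch.h>
    #include <stdlib.h>
    #include <string.h>

    /* FNM_LEADING_DIR semantics: the pattern may match either
       the whole string or a prefix of it ending just before
       a '/' */
    static int leading_dir_match(const char *pat,
                                 const char *str, int flags)
    {
        if (!fnmatch(pat, str, flags)) return 0;
        char *tmp = strdup(str);
        if (!tmp) return FNM_NOMATCH;
        int r = FNM_NOMATCH;
        for (char *s = tmp; *s; s++) {
            if (*s != '/') continue;
            *s = 0;             /* truncate at this component */
            if (!fnmatch(pat, tmp, flags)) { r = 0; break; }
            *s = '/';           /* restore and keep scanning */
        }
        free(tmp);
        return r;
    }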
this is still experimental and subject to change. for git checkouts,
an attempt is made to record the exact revision to aid in bug reports
and debugging. no version information is recorded in the static libc.a
or binaries it's linked into.
the FNM_PATHNAME logic for advancing by /-delimited components was
incorrect when the / character was escaped (i.e. \/), and a final \
at the end of the pattern was not handled correctly.
a '/' in the pattern could be incorrectly matched against the
terminating null byte in the string, causing an arbitrarily long
sequence of out-of-bounds accesses, as in fnmatch("/","",FNM_PATHNAME).
a v6 socket will only be used if there is at least one v6 nameserver
address. if the kernel lacks v6 support, the code will fall back to
using a v4 socket and requests to v6 servers will silently fail. when
using a v6 socket, v4 addresses are converted to v4-mapped form and
setsockopt is used to ensure that the v6 socket can accept both v4 and
v6 traffic (accepting both is the default on Linux, but the default
is configurable in /proc, so it needs to be set explicitly at the
socket level). this scheme avoids increasing resource usage during
lookups and allows the existing network io loop to be used without
modification.
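the socket setup this describes, roughly (open_dns_socket is a
hypothetical name):

    #include <netinet/in.h>
    #include <sys/socket.h>

    static int open_dns_socket(void)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, 0);
        if (fd < 0) return -1;
        /* explicitly allow v4-mapped traffic instead of relying
           on the configurable kernel default for IPV6_V6ONLY */
        int zero = 0;
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY,
                   &zero, sizeof zero);
        /* a v4 server 1.2.3.4 is then addressed as
           ::ffff:1.2.3.4 */
        return fd;
    }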
previously, nameservers whose address family did not match the address
family of the first-listed nameserver were simply ignored. prior to
recent __ipparse fixes, they were not ignored but erroneously parsed.
the old value of 20 was reported by Laurent Bercot as being
insufficient for a reasonable real-world usage case. the actual problem
was the internal buffer used by ttyname(), but the implementation of
ttyname uses TTY_NAME_MAX, and for consistency it's best to increase
both. the new value is aligned with glibc.
subsequent code assumes the address family requested is either
unspecified or one of IPv4/IPv6, and could malfunction if this
constraint is not met, so other address families should be explicitly
rejected.
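i.e., a check along these lines at the top of the lookup (with
EAI_FAMILY being the natural error to report):

    /* fail early on address families the code cannot handle */
    if (family != AF_UNSPEC && family != AF_INET
     && family != AF_INET6)
        return EAI_FAMILY;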
on archs with excess precision, the floating point constant 1e40f may
be evaluated such that it does not actually produce an infinity.
1e5000f is sufficiently large to produce an infinity for all supported
floating point formats. note that this definition of INFINITY is only
used for old or non-GNUC compilers anyway; despite being a portable,
conforming definition, it leads to erroneous warnings on many
compilers and thus using the builtin is preferred.
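so the definitions sit roughly like this (the exact version check is
illustrative):

    #if 100*__GNUC__ + __GNUC_MINOR__ >= 303
    #define INFINITY __builtin_inff()
    #else
    /* portable fallback: must overflow to infinity even when
       evaluated with excess precision; 1e5000f does on all
       supported formats where 1e40f may not */
    #define INFINITY 1e5000f
    #endif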
these functions were spuriously failing in the case where the buffer
size was exactly the number of bytes/characters to be written,
including null termination. since these functions do not have defined
error conditions other than buffer size, a reasonable application may
fail to check the return value when the format string and buffer size
are known to be valid; such an application could then attempt to use a
non-terminated buffer.
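for example, with strftime, the following exact-fit case should
succeed (and now does):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm tm = { .tm_hour = 23, .tm_min = 59 };
        char buf[6]; /* exactly "23:59" plus the null byte */
        size_t n = strftime(buf, sizeof buf, "%H:%M", &tm);
        /* n is 5 and buf is terminated; the bug made this
           boundary case spuriously return 0 */
        printf("%zu \"%s\"\n", n, buf);
        return 0;
    }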
in addition to fixing the bug, I have changed the error handling
behavior so that these functions always null-terminate the output
except in the case where the buffer size is zero, and so that they
always write as many characters as possible before failing, rather
than dropping whole fields that do not fit. this actually simplifies
the logic somewhat anyway.
unfortunately this eliminates the ability of the compiler to diagnose
some dangerous/incorrect usage, but POSIX requires (as an extension to
the C language, i.e. CX shaded) that NULL have type void *. plain C
allows it to be defined as any null pointer constant.
the definition 0L is preserved for C++ rather than reverting to plain
0 to avoid dangerous behavior in non-conforming programs which use
NULL as a variadic sentinel. (it's impossible to use (void *)0 for C++
since C++ lacks the proper implicit pointer conversions, and other
popular alternatives like the GCC __null extension seem non-conforming
to the standard's requirements.)
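the resulting definition is along these lines:

    #ifdef __cplusplus
    #define NULL 0L
    #else
    #define NULL ((void*)0)
    #endif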