git/git-compat-util.h

#ifndef GIT_COMPAT_UTIL_H
#define GIT_COMPAT_UTIL_H
#if __STDC_VERSION__ - 0 < 199901L
/*
* Git is in a testing period for mandatory C99 support in the compiler. If
* your compiler is reasonably recent, you can try to enable C99 support (or,
* for MSVC, C11 support). If you encounter a problem and can't enable C99
* support with your compiler (such as with "-std=gnu99") and don't have access
* to one with this support, such as GCC or Clang, you can remove this #if
* directive, but please report the details of your system to
* git@vger.kernel.org.
*/
#error "Required C99 support is in a test phase. Please see git-compat-util.h for more details."
#endif
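/*
 * Illustrative note (added; not part of the original header): with a GCC-
 * or Clang-compatible compiler, C99 support can usually be requested by
 * passing "-std=gnu99" in the compiler flags used for the build, e.g.:
 *
 *	make CC=gcc CFLAGS="-g -O2 -std=gnu99"
 */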
#ifdef USE_MSVC_CRTDBG
/*
* For these to work they must appear very early in each
* file -- before most of the standard header files.
*/
#include <stdlib.h>
#include <crtdbg.h>
#endif
struct strbuf;
#define _FILE_OFFSET_BITS 64
/* Derived from Linux "Features Test Macro" header
* Convenience macros to test the versions of gcc (or
* a compatible compiler).
* Use them like this:
* #if GIT_GNUC_PREREQ (2,8)
* ... code requiring gcc 2.8 or later ...
* #endif
*/
#if defined(__GNUC__) && defined(__GNUC_MINOR__)
# define GIT_GNUC_PREREQ(maj, min) \
	((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
#else
#define GIT_GNUC_PREREQ(maj, min) 0
#endif
#ifndef FLEX_ARRAY
/*
* See if our compiler is known to support flexible array members.
*/
/*
* Check vendor specific quirks first, before checking the
* __STDC_VERSION__, as vendor compilers can lie and we need to be
* able to work them around. Note that by not defining FLEX_ARRAY
* here, we can fall back to use the "safer but a bit wasteful" one
* later.
*/
#if defined(__SUNPRO_C) && (__SUNPRO_C <= 0x580)
#elif defined(__GNUC__)
# if (__GNUC__ >= 3)
# define FLEX_ARRAY /* empty */
# else
# define FLEX_ARRAY 0 /* older GNU extension */
# endif
#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
# define FLEX_ARRAY /* empty */
#endif
/*
* Otherwise, default to safer but a bit wasteful traditional style
*/
#ifndef FLEX_ARRAY
# define FLEX_ARRAY 1
#endif
#endif
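/*
 * Illustrative sketch (added; "struct example" is a made-up name):
 * FLEX_ARRAY is used as the dimension of a trailing struct member so the
 * struct can be over-allocated to carry a variable-length payload, e.g.:
 *
 *	struct example {
 *		size_t len;
 *		char name[FLEX_ARRAY];
 *	};
 *
 * An allocation then reserves sizeof(struct example) plus however many
 * bytes "name" actually needs.
 */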
/*
* BUILD_ASSERT_OR_ZERO - assert a build-time dependency, as an expression.
* @cond: the compile-time condition which must be true.
*
* Your compile will fail if the condition isn't true, or can't be evaluated
* by the compiler. This can be used in an expression: its value is "0".
*
* Example:
* #define foo_to_char(foo) \
* ((char *)(foo) \
* + BUILD_ASSERT_OR_ZERO(offsetof(struct foo, string) == 0))
*/
#define BUILD_ASSERT_OR_ZERO(cond) \
	(sizeof(char [1 - 2*!(cond)]) - 1)
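/*
 * Explanatory note (added): when "cond" is false, the array type above
 * becomes char[1 - 2*1], i.e. char[-1], which is a compile-time error;
 * when "cond" is true it is char[1], so the whole expression evaluates
 * to sizeof(char[1]) - 1 == 0.
 */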
#if GIT_GNUC_PREREQ(3, 1)
/* &arr[0] degrades to a pointer: a different type from an array */
# define BARF_UNLESS_AN_ARRAY(arr) \
	BUILD_ASSERT_OR_ZERO(!__builtin_types_compatible_p(__typeof__(arr), \
		__typeof__(&(arr)[0])))
# define BARF_UNLESS_COPYABLE(dst, src) \
	BUILD_ASSERT_OR_ZERO(__builtin_types_compatible_p(__typeof__(*(dst)), \
		__typeof__(*(src))))
#else
# define BARF_UNLESS_AN_ARRAY(arr) 0
# define BARF_UNLESS_COPYABLE(dst, src) \
	BUILD_ASSERT_OR_ZERO(0 ? ((*(dst) = *(src)), 0) : \
		sizeof(*(dst)) == sizeof(*(src)))
#endif
/*
* ARRAY_SIZE - get the number of elements in a visible array
* @x: the array whose size you want.
*
* This does not work on pointers, or arrays declared as [], or
* function parameters. With correct compiler support, such usage
* will cause a build error (see the BUILD_ASSERT_OR_ZERO macro).
*/
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + BARF_UNLESS_AN_ARRAY(x))
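/*
 * Illustrative example (added): ARRAY_SIZE() counts the elements of an
 * array that is visible at the point of use, e.g.
 *
 *	static const char *dirs[] = { "objects", "refs", "hooks" };
 *	for (size_t i = 0; i < ARRAY_SIZE(dirs); i++)
 *		... use dirs[i] ...
 *
 * Passing a pointer such as "const char **p" instead triggers the
 * BARF_UNLESS_AN_ARRAY() build assertion on compilers that support it.
 */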
#define bitsizeof(x) (CHAR_BIT * sizeof(x))
#define maximum_signed_value_of_type(a) \
	(INTMAX_MAX >> (bitsizeof(intmax_t) - bitsizeof(a)))
#define maximum_unsigned_value_of_type(a) \
	(UINTMAX_MAX >> (bitsizeof(uintmax_t) - bitsizeof(a)))
/*
* Signed integer overflow is undefined in C, so here's a helper macro
* to detect if the sum of two integers will overflow.
*
* Requires: a >= 0, typeof(a) equals typeof(b)
*/
#define signed_add_overflows(a, b) \
	((b) > maximum_signed_value_of_type(a) - (a))
#define unsigned_add_overflows(a, b) \
	((b) > maximum_unsigned_value_of_type(a) - (a))
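/*
 * Illustrative example (added): callers are expected to check before
 * doing the arithmetic, e.g.
 *
 *	if (unsigned_add_overflows(len, extra))
 *		die("integer overflow");
 *	len += extra;
 *
 * where die() stands in for whatever error handling the caller uses, and
 * "len" and "extra" share the same unsigned type.
 */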
/*
* Returns true if the multiplication of "a" and "b" will
* overflow. The types of "a" and "b" must match and must be unsigned.
* Note that this macro evaluates "a" twice!
*/
#define unsigned_mult_overflows(a, b) \
	((a) && (b) > maximum_unsigned_value_of_type(a) / (a))
/*
* Returns true if the left shift of "a" by "shift" bits will
* overflow. The type of "a" must be unsigned.
*/
#define unsigned_left_shift_overflows(a, shift) \
	((shift) < bitsizeof(a) && \
	 (a) > maximum_unsigned_value_of_type(a) >> (shift))
#ifdef __GNUC__
#define TYPEOF(x) (__typeof__(x))
#else
#define TYPEOF(x)
#endif
#define MSB(x, bits) ((x) & TYPEOF(x)(~0ULL << (bitsizeof(x) - (bits))))
#define HAS_MULTI_BITS(i) ((i) & ((i) - 1)) /* checks if an integer has more than 1 bit set */
#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
/* Approximation of the length of the decimal representation of this type. */
#define decimal_length(x) ((int)(sizeof(x) * 2.56 + 0.5) + 1)
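/*
 * Worked example (added): each byte contributes at most log10(256), about
 * 2.41, decimal digits, which the factor 2.56 rounds up, and the trailing
 * "+ 1" leaves room for a leading '-'. For a 4-byte int this yields
 * (int)(4 * 2.56 + 0.5) + 1 = 11, enough for "-2147483648".
 */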
#ifdef __MINGW64__
#define _POSIX_C_SOURCE 1
#elif defined(__sun__)
/*
* On Solaris, when _XOPEN_EXTENDED is set, its header file
* forces the programs to be XPG4v2, defeating any _XOPEN_SOURCE
* setting to say we are XPG5 or XPG6. Also on Solaris,
* XPG6 programs must be compiled with a c99 compiler, while
* non XPG6 programs must be compiled with a pre-c99 compiler.
*/
# if __STDC_VERSION__ - 0 >= 199901L
# define _XOPEN_SOURCE 600
# else
# define _XOPEN_SOURCE 500
# endif
#elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && \
!defined(_M_UNIX) && !defined(__sgi) && !defined(__DragonFly__) && \
!defined(__TANDEM) && !defined(__QNX__) && !defined(__MirBSD__) && \
!defined(__CYGWIN__)
#define _XOPEN_SOURCE 600 /* glibc2 and AIX 5.3L need 500, OpenBSD needs 600 for S_ISLNK() */
#define _XOPEN_SOURCE_EXTENDED 1 /* AIX 5.3L needs this */
#endif
#define _ALL_SOURCE 1
#define _GNU_SOURCE 1
#define _BSD_SOURCE 1
#define _DEFAULT_SOURCE 1
#define _NETBSD_SOURCE 1
#define _SGI_SOURCE 1
/*
* UNUSED marks a function parameter that is always unused. It also
* can be used to annotate a function, a variable, or a type that is
* always unused.
*
* A callback interface may dictate that a function accepts a
* parameter at that position, but the implementation of the function
* may not need to use the parameter. In such a case, mark the parameter
* with UNUSED.
*
* When a parameter may be used or unused, depending on conditional
* compilation, consider using MAYBE_UNUSED instead.
*/
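/*
 * Illustrative example (added; the function and parameter names are made
 * up): a callback that must match an interface but does not use all of
 * its parameters can be declared as
 *
 *	static int show_ref_cb(const char *refname, void *cb_data UNUSED)
 *	{
 *		... only refname is used ...
 *	}
 */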
#if GIT_GNUC_PREREQ(4, 5)
#define UNUSED __attribute__((unused)) \
	__attribute__((deprecated ("parameter declared as UNUSED")))
#elif defined(__GNUC__)
#define UNUSED __attribute__((unused)) \
	__attribute__((deprecated))
#else
#define UNUSED
#endif
#if defined(WIN32) && !defined(__CYGWIN__) /* Both MinGW and MSVC */
# if !defined(_WIN32_WINNT)
# define _WIN32_WINNT 0x0600
# endif
#define WIN32_LEAN_AND_MEAN /* stops windows.h including winsock.h */
#include <winsock2.h>
#ifndef NO_UNIX_SOCKETS
#include <afunix.h>
#endif
#include <windows.h>
#define GIT_WINDOWS_NATIVE
#endif
#if defined(NO_UNIX_SOCKETS) || !defined(GIT_WINDOWS_NATIVE)
static inline int _have_unix_sockets(void)
{
#if defined(NO_UNIX_SOCKETS)
	return 0;
#else
	return 1;
#endif
}
#define have_unix_sockets _have_unix_sockets
#endif
#include <unistd.h>
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdarg.h>
#include <stdbool.h>
#include <string.h>
#ifdef HAVE_STRINGS_H
#include <strings.h> /* for strcasecmp() */
#endif
#include <errno.h>
#include <limits.h>
#include <locale.h>
#ifdef NEEDS_SYS_PARAM_H
#include <sys/param.h>
#endif
#include <sys/types.h>
#include <dirent.h>
#include <sys/time.h>
#include <time.h>
#include <signal.h>
#include <assert.h>
#include <regex.h>
#include <utime.h>
#include <syslog.h>
#if !defined(NO_POLL_H)
#include <poll.h>
#elif !defined(NO_SYS_POLL_H)
#include <sys/poll.h>
#else
/* Pull the compat stuff */
#include <poll.h>
#endif
#ifdef HAVE_BSD_SYSCTL
#include <sys/sysctl.h>
#endif
/* Used by compat/win32/path-utils.h, and more */
static inline int is_xplatform_dir_sep(int c)
{
	return c == '/' || c == '\\';
}
#if defined(__CYGWIN__)
#include "compat/win32/path-utils.h"
#endif
#if defined(__MINGW32__)
/* pull in Windows compatibility stuff */
#include "compat/win32/path-utils.h"
#include "compat/mingw.h"
#elif defined(_MSC_VER)
#include "compat/win32/path-utils.h"
#include "compat/msvc.h"
#else
#include <sys/utsname.h>
#include <sys/wait.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/statvfs.h>
#include <termios.h>
#ifndef NO_SYS_SELECT_H
#include <sys/select.h>
#endif
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <pwd.h>
#include <sys/un.h>
#ifndef NO_INTTYPES_H
#include <inttypes.h>
#else
#include <stdint.h>
#endif
#ifdef HAVE_ARC4RANDOM_LIBBSD
#include <bsd/stdlib.h>
#endif
#ifdef HAVE_GETRANDOM
#include <sys/random.h>
#endif
#ifdef NO_INTPTR_T
/*
* On I16LP32, ILP32 and LP64 "long" is the safe bet, however
* on LLP64, IL32LLP64 and P64 it needs to be "long long",
* while on IP16 and IP16L32 it is "int" (resp. "short").
* Size needs to match (or exceed) 'sizeof(void *)'.
* We can't take "long long" here as not everybody has it.
*/
typedef long intptr_t;
typedef unsigned long uintptr_t;
#endif
#undef _ALL_SOURCE /* AIX 5.3L defines a struct list with _ALL_SOURCE. */
#include <grp.h>
#define _ALL_SOURCE 1
#endif
/* used on Mac OS X */
#ifdef PRECOMPOSE_UNICODE
#include "compat/precompose_utf8.h"
#else
static inline const char *precompose_argv_prefix(int argc UNUSED,
						  const char **argv UNUSED,
						  const char *prefix)
{
	return prefix;
}
static inline const char *precompose_string_if_needed(const char *in)
{
	return in;
}
#define probe_utf8_pathname_composition()
#endif
#ifdef MKDIR_WO_TRAILING_SLASH
#define mkdir(a,b) compat_mkdir_wo_trailing_slash((a),(b))
int compat_mkdir_wo_trailing_slash(const char*, mode_t);
#endif
#ifdef time
#undef time
#endif
static inline time_t git_time(time_t *tloc)
{
	struct timeval tv;

	/*
	 * Avoid time(NULL), which can disagree with gettimeofday(2)
	 * and filesystem timestamps.
	 */
	gettimeofday(&tv, NULL);
	if (tloc)
		*tloc = tv.tv_sec;
	return tv.tv_sec;
}
#define time git_time
#ifdef NO_STRUCT_ITIMERVAL
struct itimerval {
	struct timeval it_interval;
	struct timeval it_value;
};
#endif
#ifdef NO_SETITIMER
static inline int git_setitimer(int which UNUSED,
				const struct itimerval *value UNUSED,
				struct itimerval *newvalue UNUSED) {
	return 0; /* pretend success */
}
#undef setitimer
#define setitimer(which,value,ovalue) git_setitimer(which,value,ovalue)
#endif
#ifndef NO_LIBGEN_H
#include <libgen.h>
#else
#define basename gitbasename
char *gitbasename(char *);
#define dirname gitdirname
char *gitdirname(char *);
#endif
#ifndef NO_ICONV
#include <iconv.h>
#endif
#ifndef NO_OPENSSL
#ifdef __APPLE__
#undef __AVAILABILITY_MACROS_USES_AVAILABILITY
#define __AVAILABILITY_MACROS_USES_AVAILABILITY 0
#include <AvailabilityMacros.h>
#undef DEPRECATED_ATTRIBUTE
#define DEPRECATED_ATTRIBUTE
#undef __AVAILABILITY_MACROS_USES_AVAILABILITY
#endif
#include <openssl/ssl.h>
#include <openssl/err.h>
#endif
#ifdef HAVE_SYSINFO
# include <sys/sysinfo.h>
#endif
/* On most systems <netdb.h> would have given us this, but
* not on some systems (e.g. z/OS).
*/
#ifndef NI_MAXHOST
#define NI_MAXHOST 1025
#endif
#ifndef NI_MAXSERV
#define NI_MAXSERV 32
#endif
/* On most systems <limits.h> would have given us this, but
* not on some systems (e.g. GNU/Hurd).
*/
#ifndef PATH_MAX
#define PATH_MAX 4096
#endif
#ifndef NAME_MAX
#define NAME_MAX 255
#endif
typedef uintmax_t timestamp_t;
#define PRItime PRIuMAX
#define parse_timestamp strtoumax
#define TIME_MAX UINTMAX_MAX
#define TIME_MIN 0
#ifndef PATH_SEP
#define PATH_SEP ':'
#endif
#ifdef HAVE_PATHS_H
#include <paths.h>
#endif
#ifndef _PATH_DEFPATH
#define _PATH_DEFPATH "/usr/local/bin:/usr/bin:/bin"
#endif
#ifndef platform_core_config
struct config_context;
static inline int noop_core_config(const char *var UNUSED,
				   const char *value UNUSED,
				   const struct config_context *ctx UNUSED,
				   void *cb UNUSED)
{
	return 0;
}
#define platform_core_config noop_core_config
#endif
int lstat_cache_aware_rmdir(const char *path);
#if !defined(__MINGW32__) && !defined(_MSC_VER)
#define rmdir lstat_cache_aware_rmdir
#endif
#ifndef has_dos_drive_prefix
static inline int git_has_dos_drive_prefix(const char *path UNUSED)
{
	return 0;
}
#define has_dos_drive_prefix git_has_dos_drive_prefix
#endif
#ifndef skip_dos_drive_prefix
static inline int git_skip_dos_drive_prefix(char **path UNUSED)
{
	return 0;
}
#define skip_dos_drive_prefix git_skip_dos_drive_prefix
#endif
static inline int git_is_dir_sep(int c)
{
	return c == '/';
}
#ifndef is_dir_sep
#define is_dir_sep git_is_dir_sep
#endif
#ifndef offset_1st_component
static inline int git_offset_1st_component(const char *path)
{
	return is_dir_sep(path[0]);
}
#define offset_1st_component git_offset_1st_component
#endif
#ifndef fspathcmp
#define fspathcmp git_fspathcmp
#endif
#ifndef fspathncmp
#define fspathncmp git_fspathncmp
#endif
#ifndef is_valid_path
#define is_valid_path(path) 1
#endif
#ifndef is_path_owned_by_current_user
#ifdef __TANDEM
#define ROOT_UID 65535
#else
#define ROOT_UID 0
#endif
/*
 * Do not use this function when
 * (1) geteuid() does not report that we are running as 'root', or
 * (2) using this function would compromise the system.
 *
 * PORTABILITY WARNING:
 * This code assumes uid_t is unsigned because that is what sudo does.
 * If your uid_t type is signed and all your ids are positive then it
 * should all work fine.
 * If your version of sudo uses negative values for uid_t, or if it is
 * buggy and returns an overflowed value in SUDO_UID, then git might
 * fail to grant access to your repository properly, or might even
 * mistakenly grant access to someone else.
 * In the unlikely scenario that this happens to you, and that is how
 * you got to this message, we would like to know about it; so send us
 * an email to git@vger.kernel.org indicating which platform you are
 * using and which version of sudo, so that we can improve this logic
 * and maybe provide you with a patch that prevents this issue in the
 * future.
*/
static inline void extract_id_from_env(const char *env, uid_t *id)
{
const char *real_uid = getenv(env);
/* discard anything empty to avoid a more complex check below */
if (real_uid && *real_uid) {
char *endptr = NULL;
unsigned long env_id;
errno = 0;
/* silent overflow errors could trigger a bug here */
env_id = strtoul(real_uid, &endptr, 10);
if (!*endptr && !errno)
*id = env_id;
}
}
static inline int is_path_owned_by_current_uid(const char *path,
struct strbuf *report UNUSED)
{
struct stat st;
uid_t euid;
if (lstat(path, &st))
return 0;
euid = geteuid();
if (euid == ROOT_UID)
{
if (st.st_uid == ROOT_UID)
return 1;
else
extract_id_from_env("SUDO_UID", &euid);
}
return st.st_uid == euid;
}
#define is_path_owned_by_current_user is_path_owned_by_current_uid
#endif
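/*
 * Illustrative example (a sketch, not a prescribed call site; the path is
 * hypothetical, and STRBUF_INIT comes from strbuf.h): when git runs under
 * "sudo", geteuid() reports ROOT_UID, so the check above falls back to the
 * original uid that sudo records in SUDO_UID and compares that against the
 * owner of the path:
 *
 *   struct strbuf report = STRBUF_INIT;
 *   if (!is_path_owned_by_current_user("/home/guy/repo", &report))
 *       die("unsafe repository");
 */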
#ifndef find_last_dir_sep
static inline char *git_find_last_dir_sep(const char *path)
{
return strrchr(path, '/');
}
#define find_last_dir_sep git_find_last_dir_sep
#endif
#ifndef has_dir_sep
static inline int git_has_dir_sep(const char *path)
{
return !!strchr(path, '/');
}
#define has_dir_sep(path) git_has_dir_sep(path)
#endif
#ifndef query_user_email
#define query_user_email() NULL
#endif
#ifdef __TANDEM
#include <floss.h(floss_execl,floss_execlp,floss_execv,floss_execvp)>
#include <floss.h(floss_getpwuid)>
#ifndef NSIG
/*
 * NonStop NSE and NSX do not provide NSIG. SIGGUARDIAN(99) is the highest
 * signal known; this was determined by detective work with "kill -l", which
 * lists all signals, instead of via signal.h where it should be defined.
*/
# define NSIG 100
#endif
#endif
#if defined(__HP_cc) && (__HP_cc >= 61000)
#define NORETURN __attribute__((noreturn))
#define NORETURN_PTR
#elif defined(__GNUC__) && !defined(NO_NORETURN)
#define NORETURN __attribute__((__noreturn__))
#define NORETURN_PTR __attribute__((__noreturn__))
#elif defined(_MSC_VER)
#define NORETURN __declspec(noreturn)
#define NORETURN_PTR
#else
#define NORETURN
#define NORETURN_PTR
#ifndef __GNUC__
#ifndef __attribute__
#define __attribute__(x)
#endif
#endif
#endif
/* The sentinel attribute is valid from gcc version 4.0 */
#if defined(__GNUC__) && (__GNUC__ >= 4)
#define LAST_ARG_MUST_BE_NULL __attribute__((sentinel))
/* warn_unused_result exists as of gcc 3.4.0, but be lazy and check 4.0 */
#define RESULT_MUST_BE_USED __attribute__ ((warn_unused_result))
#else
#define LAST_ARG_MUST_BE_NULL
#define RESULT_MUST_BE_USED
#endif
/*
* MAYBE_UNUSED marks a function parameter that may be unused, but
* whose use is not an error. It also can be used to annotate a
* function, a variable, or a type that may be unused.
*
* Depending on a configuration, all uses of such a thing may become
* #ifdef'ed away. Marking it with UNUSED would give a warning in a
* compilation where it is indeed used, and not marking it at all
* would give a warning in a compilation where it is unused. In such
* a case, MAYBE_UNUSED is the appropriate annotation to use.
*/
#define MAYBE_UNUSED __attribute__((__unused__))
#include "compat/bswap.h"
#include "wrapper.h"
/* General helper functions */
NORETURN void usage(const char *err);
NORETURN void usagef(const char *err, ...) __attribute__((format (printf, 1, 2)));
NORETURN void die(const char *err, ...) __attribute__((format (printf, 1, 2)));
NORETURN void die_errno(const char *err, ...) __attribute__((format (printf, 1, 2)));
int die_message(const char *err, ...) __attribute__((format (printf, 1, 2)));
int die_message_errno(const char *err, ...) __attribute__((format (printf, 1, 2)));
int error(const char *err, ...) __attribute__((format (printf, 1, 2)));
int error_errno(const char *err, ...) __attribute__((format (printf, 1, 2)));
void warning(const char *err, ...) __attribute__((format (printf, 1, 2)));
void warning_errno(const char *err, ...) __attribute__((format (printf, 1, 2)));
#ifndef NO_OPENSSL
#ifdef APPLE_COMMON_CRYPTO
#include "compat/apple-common-crypto.h"
#else
#include <openssl/evp.h>
#include <openssl/hmac.h>
#endif /* APPLE_COMMON_CRYPTO */
#include <openssl/x509v3.h>
#endif /* NO_OPENSSL */
#ifdef HAVE_OPENSSL_CSPRNG
#include <openssl/rand.h>
#endif
make error()'s constant return value more visible When git is compiled with "gcc -Wuninitialized -O3", some inlined calls provide an additional opportunity for the compiler to do static analysis on variable initialization. For example, with two functions like this: int get_foo(int *foo) { if (something_that_might_fail() < 0) return error("unable to get foo"); *foo = 0; return 0; } void some_fun(void) { int foo; if (get_foo(&foo) < 0) return -1; printf("foo is %d\n", foo); } If get_foo() is not inlined, then when compiling some_fun, gcc sees only that a pointer to the local variable is passed, and must assume that it is an out parameter that is initialized after get_foo returns. However, when get_foo() is inlined, the compiler may look at all of the code together and see that some code paths in get_foo() do not initialize the variable. As a result, it prints a warning. But what the compiler can't see is that error() always returns -1, and therefore we know that either we return early from some_fun, or foo ends up initialized, and the code is safe. The warning is a false positive. If we can make the compiler aware that error() will always return -1, it can do a better job of analysis. The simplest method would be to inline the error() function. However, this doesn't work, because gcc will not inline a variadc function. We can work around this by defining a macro. This relies on two gcc extensions: 1. Variadic macros (these are present in C99, but we do not rely on that). 2. Gcc treats the "##" paste operator specially between a comma and __VA_ARGS__, which lets our variadic macro work even if no format parameters are passed to error(). Since we are using these extra features, we hide the macro behind an #ifdef. This is OK, though, because our goal was just to help gcc. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-12-16 01:37:36 +08:00
/*
* Let callers be aware of the constant return value; this can help
* gcc with -Wuninitialized analysis. We restrict this trick to gcc, though,
* because other compilers may be confused by this.
*/
#if defined(__GNUC__)
static inline int const_error(void)
{
return -1;
}
#define error(...) (error(__VA_ARGS__), const_error())
#define error_errno(...) (error_errno(__VA_ARGS__), const_error())
#endif
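/*
 * Illustrative sketch of what the wrapper buys us (function names here are
 * hypothetical): because error() is known to evaluate to -1, gcc can see
 * that "foo" is initialized on every path that does not return early:
 *
 *   static int get_foo(int *foo)
 *   {
 *       if (something_that_might_fail() < 0)
 *           return error("unable to get foo");
 *       *foo = 0;
 *       return 0;
 *   }
 */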
typedef void (*report_fn)(const char *, va_list params);
void set_die_routine(NORETURN_PTR report_fn routine);
report_fn get_die_message_routine(void);
void set_error_routine(report_fn routine);
report_fn get_error_routine(void);
void set_warn_routine(report_fn routine);
report_fn get_warn_routine(void);
void set_die_is_recursing_routine(int (*routine)(void));
/*
* If the string "str" begins with the string found in "prefix", return true.
* The "out" parameter is set to "str + strlen(prefix)" (i.e., to the point in
* the string right after the prefix).
*
* Otherwise, return false and leave "out" untouched.
*
* Examples:
*
* [extract branch name, fail if not a branch]
 * if (!skip_prefix(ref, "refs/heads/", &branch))
* return -1;
*
* [skip prefix if present, otherwise use whole string]
* skip_prefix(name, "refs/heads/", &name);
*/
static inline bool skip_prefix(const char *str, const char *prefix,
const char **out)
{
do {
if (!*prefix) {
*out = str;
return true;
}
} while (*str++ == *prefix++);
return false;
}
/*
* Like skip_prefix, but promises never to read past "len" bytes of the input
* buffer, and returns the remaining number of bytes in "out" via "outlen".
*/
static inline bool skip_prefix_mem(const char *buf, size_t len,
const char *prefix,
const char **out, size_t *outlen)
{
size_t prefix_len = strlen(prefix);
if (prefix_len <= len && !memcmp(buf, prefix, prefix_len)) {
*out = buf + prefix_len;
*outlen = len - prefix_len;
return true;
}
return false;
}
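/*
 * Example (a sketch; buf, len and handle_data() are hypothetical): parse a
 * "data " prefix out of a length-limited buffer that need not be
 * NUL-terminated:
 *
 *   const char *rest;
 *   size_t restlen;
 *   if (skip_prefix_mem(buf, len, "data ", &rest, &restlen))
 *       handle_data(rest, restlen);
 */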
/*
* If buf ends with suffix, return true and subtract the length of the suffix
* from *len. Otherwise, return false and leave *len untouched.
*/
static inline bool strip_suffix_mem(const char *buf, size_t *len,
const char *suffix)
{
size_t suflen = strlen(suffix);
if (*len < suflen || memcmp(buf + (*len - suflen), suffix, suflen))
return false;
*len -= suflen;
return true;
}
/*
* If str ends with suffix, return true and set *len to the size of the string
* without the suffix. Otherwise, return false and set *len to the size of the
* string.
*
* Note that we do _not_ NUL-terminate str to the new length.
*/
static inline bool strip_suffix(const char *str, const char *suffix,
size_t *len)
{
*len = strlen(str);
return strip_suffix_mem(str, len, suffix);
}
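/*
 * Example (a sketch with a hypothetical path): drop a ".lock" suffix if
 * present; note that the string itself is not truncated, only *len is
 * adjusted:
 *
 *   size_t len;
 *   const char *path = "refs/heads/main.lock";
 *   if (strip_suffix(path, ".lock", &len))
 *       printf("%.*s\n", (int)len, path);   // prints "refs/heads/main"
 */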
#define SWAP(a, b) do { \
void *_swap_a_ptr = &(a); \
void *_swap_b_ptr = &(b); \
unsigned char _swap_buffer[sizeof(a)]; \
memcpy(_swap_buffer, _swap_a_ptr, sizeof(a)); \
memcpy(_swap_a_ptr, _swap_b_ptr, sizeof(a) + \
BUILD_ASSERT_OR_ZERO(sizeof(a) == sizeof(b))); \
memcpy(_swap_b_ptr, _swap_buffer, sizeof(a)); \
} while (0)
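/*
 * Example: the two operands of SWAP() must have the same size, which is
 * enforced at compile time by the BUILD_ASSERT_OR_ZERO() above:
 *
 *   int a = 1, b = 2;
 *   SWAP(a, b);   // now a == 2 and b == 1
 */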
#if defined(NO_MMAP) || defined(USE_WIN32_MMAP)
#ifndef PROT_READ
#define PROT_READ 1
#define PROT_WRITE 2
#define MAP_PRIVATE 1
#endif
#define mmap git_mmap
#define munmap git_munmap
void *git_mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
int git_munmap(void *start, size_t length);
#else /* NO_MMAP || USE_WIN32_MMAP */
#include <sys/mman.h>
#endif /* NO_MMAP || USE_WIN32_MMAP */
#ifdef NO_MMAP
/* This value must be a multiple of (pagesize * 2) */
#define DEFAULT_PACKED_GIT_WINDOW_SIZE (1 * 1024 * 1024)
#else /* NO_MMAP */
/* This value must be a multiple of (pagesize * 2) */
#define DEFAULT_PACKED_GIT_WINDOW_SIZE \
(sizeof(void*) >= 8 \
? 1 * 1024 * 1024 * 1024 \
: 32 * 1024 * 1024)
#endif /* NO_MMAP */
#ifndef MAP_FAILED
#define MAP_FAILED ((void *)-1)
#endif
#ifdef NO_ST_BLOCKS_IN_STRUCT_STAT
#define on_disk_bytes(st) ((st).st_size)
#else
#define on_disk_bytes(st) ((st).st_blocks * 512)
#endif
#ifdef NEEDS_MODE_TRANSLATION
#undef S_IFMT
#undef S_IFREG
#undef S_IFDIR
#undef S_IFLNK
#undef S_IFBLK
#undef S_IFCHR
#undef S_IFIFO
#undef S_IFSOCK
#define S_IFMT 0170000
#define S_IFREG 0100000
#define S_IFDIR 0040000
#define S_IFLNK 0120000
#define S_IFBLK 0060000
#define S_IFCHR 0020000
#define S_IFIFO 0010000
#define S_IFSOCK 0140000
#ifdef stat
#undef stat
#endif
#define stat(path, buf) git_stat(path, buf)
int git_stat(const char *, struct stat *);
#ifdef fstat
#undef fstat
#endif
#define fstat(fd, buf) git_fstat(fd, buf)
int git_fstat(int, struct stat *);
#ifdef lstat
#undef lstat
#endif
#define lstat(path, buf) git_lstat(path, buf)
int git_lstat(const char *, struct stat *);
#endif
#define DEFAULT_PACKED_GIT_LIMIT \
((1024L * 1024L) * (size_t)(sizeof(void*) >= 8 ? (32 * 1024L * 1024L) : 256))
#ifdef NO_PREAD
#define pread git_pread
ssize_t git_pread(int fd, void *buf, size_t count, off_t offset);
#endif
#ifdef NO_SETENV
#define setenv gitsetenv
int gitsetenv(const char *, const char *, int);
#endif
#ifdef NO_MKDTEMP
#define mkdtemp gitmkdtemp
char *gitmkdtemp(char *);
#endif
#ifdef NO_UNSETENV
#define unsetenv gitunsetenv
int gitunsetenv(const char *);
#endif
#ifdef NO_STRCASESTR
#define strcasestr gitstrcasestr
char *gitstrcasestr(const char *haystack, const char *needle);
#endif
#ifdef NO_STRLCPY
#define strlcpy gitstrlcpy
size_t gitstrlcpy(char *, const char *, size_t);
#endif
#ifdef NO_STRTOUMAX
#define strtoumax gitstrtoumax
uintmax_t gitstrtoumax(const char *, char **, int);
#define strtoimax gitstrtoimax
intmax_t gitstrtoimax(const char *, char **, int);
#endif
#ifdef NO_HSTRERROR
#define hstrerror githstrerror
const char *githstrerror(int herror);
#endif
#ifdef NO_MEMMEM
#define memmem gitmemmem
void *gitmemmem(const void *haystack, size_t haystacklen,
const void *needle, size_t needlelen);
#endif
#ifdef OVERRIDE_STRDUP
#ifdef strdup
#undef strdup
#endif
#define strdup gitstrdup
char *gitstrdup(const char *s);
#endif
#ifdef NO_GETPAGESIZE
#define getpagesize() sysconf(_SC_PAGESIZE)
#endif
#ifndef O_CLOEXEC
#define O_CLOEXEC 0
#endif
#ifdef FREAD_READS_DIRECTORIES
# if !defined(SUPPRESS_FOPEN_REDEFINITION)
# ifdef fopen
# undef fopen
# endif
# define fopen(a,b) git_fopen(a,b)
# endif
FILE *git_fopen(const char*, const char*);
#endif
#ifdef SNPRINTF_RETURNS_BOGUS
#ifdef snprintf
#undef snprintf
#endif
#define snprintf git_snprintf
int git_snprintf(char *str, size_t maxsize,
const char *format, ...);
#ifdef vsnprintf
#undef vsnprintf
#endif
#define vsnprintf git_vsnprintf
int git_vsnprintf(char *str, size_t maxsize,
const char *format, va_list ap);
#endif
Makefile: add OPEN_RETURNS_EINTR knob On some platforms, open() reportedly returns EINTR when opening regular files and we receive a signal (usually SIGALRM from our progress meter). This shouldn't happen, as open() should be a restartable syscall, and we specify SA_RESTART when setting up the alarm handler. So it may actually be a kernel or libc bug for this to happen. But it has been reported on at least one version of Linux (on a network filesystem): https://lore.kernel.org/git/c8061cce-71e4-17bd-a56a-a5fed93804da@neanderfunk.de/ as well as on macOS starting with Big Sur even on a regular filesystem. We can work around it by retrying open() calls that get EINTR, just as we do for read(), etc. Since we don't ever _want_ to interrupt an open() call, we can get away with just redefining open, rather than insisting all callsites use xopen(). We actually do have an xopen() wrapper already (and it even does this retry, though there's no indication of it being an observed problem back then; it seems simply to have been lifted from xread(), etc). But it is used hardly anywhere, and isn't suitable for general use because it will die() on error. In theory we could combine the two, but it's awkward to do so because of the variable-args interface of open(). This patch adds a Makefile knob for enabling the workaround. It's not enabled by default for any platforms in config.mak.uname yet, as we don't have enough data to decide how common this is (I have not been able to reproduce on either Linux or Big Sur myself). It may be worth enabling preemptively anyway, since the cost is pretty low (if we don't see an EINTR, it's just an extra conditional). However, note that we must not enable this on Windows. It doesn't do anything there, and the macro overrides the existing mingw_open() redirection. I've added a preemptive #undef here in the mingw header (which is processed first) to just quietly disable it (we could also make it an #error, but there is little point in being so aggressive). Reported-by: Aleksey Kliger <alklig@microsoft.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-02-26 14:14:35 +08:00
#ifdef OPEN_RETURNS_EINTR
#undef open
#define open git_open_with_retry
int git_open_with_retry(const char *path, int flag, ...);
#endif
#ifdef __GLIBC_PREREQ
#if __GLIBC_PREREQ(2, 1)
#define HAVE_STRCHRNUL
#endif
#endif
#ifndef HAVE_STRCHRNUL
#define strchrnul gitstrchrnul
static inline char *gitstrchrnul(const char *s, int c)
{
while (*s && *s != c)
s++;
return (char *)s;
}
#endif
#ifdef NO_INET_PTON
int inet_pton(int af, const char *src, void *dst);
#endif
#ifdef NO_INET_NTOP
const char *inet_ntop(int af, const void *src, char *dst, size_t size);
#endif
#ifdef NO_PTHREADS
#define atexit git_atexit
int git_atexit(void (*handler)(void));
#endif
static inline size_t st_add(size_t a, size_t b)
{
if (unsigned_add_overflows(a, b))
die("size_t overflow: %"PRIuMAX" + %"PRIuMAX,
(uintmax_t)a, (uintmax_t)b);
return a + b;
}
#define st_add3(a,b,c) st_add(st_add((a),(b)),(c))
#define st_add4(a,b,c,d) st_add(st_add3((a),(b),(c)),(d))
static inline size_t st_mult(size_t a, size_t b)
{
if (unsigned_mult_overflows(a, b))
die("size_t overflow: %"PRIuMAX" * %"PRIuMAX,
(uintmax_t)a, (uintmax_t)b);
return a * b;
}
static inline size_t st_sub(size_t a, size_t b)
{
if (a < b)
die("size_t underflow: %"PRIuMAX" - %"PRIuMAX,
(uintmax_t)a, (uintmax_t)b);
return a - b;
}
static inline size_t st_left_shift(size_t a, unsigned shift)
{
if (unsigned_left_shift_overflows(a, shift))
die("size_t overflow: %"PRIuMAX" << %u",
(uintmax_t)a, shift);
return a << shift;
}
static inline unsigned long cast_size_t_to_ulong(size_t a)
{
if (a != (unsigned long)a)
die("object too large to read on this platform: %"
PRIuMAX" is cut off to %lu",
(uintmax_t)a, (unsigned long)a);
return (unsigned long)a;
}
static inline uint32_t cast_size_t_to_uint32_t(size_t a)
{
if (a != (uint32_t)a)
die("object too large to read on this platform: %"
PRIuMAX" is cut off to %u",
(uintmax_t)a, (uint32_t)a);
return (uint32_t)a;
}
static inline int cast_size_t_to_int(size_t a)
{
if (a > INT_MAX)
die("number too large to represent as int on this platform: %"PRIuMAX,
(uintmax_t)a);
return (int)a;
}
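/*
 * Example (a sketch; nmemb and elemsize are hypothetical run-time values):
 * compute an allocation size with overflow checking instead of a plain
 * "nmemb * elemsize + 1":
 *
 *   char *buf = xmalloc(st_add(st_mult(nmemb, elemsize), 1));
 */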
/*
* Limit size of IO chunks, because huge chunks only cause pain. OS X
* 64-bit is buggy, returning EINVAL if len >= INT_MAX; and even in
* the absence of bugs, large chunks can result in bad latencies when
* you decide to kill the process.
*
* We pick 8 MiB as our default, but if the platform defines SSIZE_MAX
* that is smaller than that, clip it to SSIZE_MAX, as a call to
 * read(2) or write(2) larger than that is allowed to fail. As a last
* resort, we allow a port to pass via CFLAGS e.g. "-DMAX_IO_SIZE=value"
* to override this, if the definition of SSIZE_MAX given by the platform
* is broken.
*/
#ifndef MAX_IO_SIZE
# define MAX_IO_SIZE_DEFAULT (8*1024*1024)
# if defined(SSIZE_MAX) && (SSIZE_MAX < MAX_IO_SIZE_DEFAULT)
# define MAX_IO_SIZE SSIZE_MAX
# else
# define MAX_IO_SIZE MAX_IO_SIZE_DEFAULT
# endif
#endif
Portable alloca for Git In the next patch we'll have to use alloca() for performance reasons, but since alloca is non-standardized and is not portable, let's have a trick with compatibility wrappers: 1. at configure time, determine, do we have working alloca() through alloca.h, and define #define HAVE_ALLOCA_H if yes. 2. in code #ifdef HAVE_ALLOCA_H # include <alloca.h> # define xalloca(size) (alloca(size)) # define xalloca_free(p) do {} while(0) #else # define xalloca(size) (xmalloc(size)) # define xalloca_free(p) (free(p)) #endif and use it like func() { p = xalloca(size); ... xalloca_free(p); } This way, for systems, where alloca is available, we'll have optimal on-stack allocations with fast executions. On the other hand, on systems, where alloca is not available, this gracefully fallbacks to xmalloc/free. Both autoconf and config.mak.uname configurations were updated. For autoconf, we are not bothering considering cases, when no alloca.h is available, but alloca() works some other way - its simply alloca.h is available and works or not, everything else is deep legacy. For config.mak.uname, I've tried to make my almost-sure guess for where alloca() is available, but since I only have access to Linux it is the only change I can be sure about myself, with relevant to other changed systems people Cc'ed. NOTE SunOS and Windows had explicit -DHAVE_ALLOCA_H in their configurations. I've changed that to now-common HAVE_ALLOCA_H=YesPlease which should be correct. Cc: Brandon Casey <drafnel@gmail.com> Cc: Marius Storm-Olsen <mstormo@gmail.com> Cc: Johannes Sixt <j6t@kdbg.org> Cc: Johannes Schindelin <Johannes.Schindelin@gmx.de> Cc: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Cc: Gerrit Pape <pape@smarden.org> Cc: Petr Salinger <Petr.Salinger@seznam.cz> Cc: Jonathan Nieder <jrnieder@gmail.com> Acked-by: Thomas Schwinge <thomas@codesourcery.com> (GNU Hurd changes) Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-27 22:22:50 +08:00
#ifdef HAVE_ALLOCA_H
# include <alloca.h>
# define xalloca(size) (alloca(size))
# define xalloca_free(p) do {} while (0)
#else
# define xalloca(size) (xmalloc(size))
# define xalloca_free(p) (free(p))
#endif
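/*
 * Example (a sketch; "len" is a hypothetical size): pair every xalloca()
 * with xalloca_free() so that the xmalloc() fallback used on platforms
 * without alloca.h does not leak:
 *
 *   char *tmp = xalloca(len);
 *   ... fill and use tmp ...
 *   xalloca_free(tmp);
 */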
/*
* FREE_AND_NULL(ptr) is like free(ptr) followed by ptr = NULL. Note
* that ptr is used twice, so don't pass e.g. ptr++.
*/
#define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)
#define ALLOC_ARRAY(x, alloc) (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
#define CALLOC_ARRAY(x, alloc) (x) = xcalloc((alloc), sizeof(*(x)))
#define REALLOC_ARRAY(x, alloc) (x) = xrealloc((x), st_mult(sizeof(*(x)), (alloc)))
#define COPY_ARRAY(dst, src, n) copy_array((dst), (src), (n), sizeof(*(dst)) + \
BARF_UNLESS_COPYABLE((dst), (src)))
static inline void copy_array(void *dst, const void *src, size_t n, size_t size)
{
if (n)
memcpy(dst, src, st_mult(size, n));
}
#define MOVE_ARRAY(dst, src, n) move_array((dst), (src), (n), sizeof(*(dst)) + \
BARF_UNLESS_COPYABLE((dst), (src)))
static inline void move_array(void *dst, const void *src, size_t n, size_t size)
{
if (n)
memmove(dst, src, st_mult(size, n));
}
#define DUP_ARRAY(dst, src, n) do { \
size_t dup_array_n_ = (n); \
COPY_ARRAY(ALLOC_ARRAY((dst), dup_array_n_), (src), dup_array_n_); \
} while (0)
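/*
 * Example (a sketch; "entries", "nr" and struct entry are hypothetical):
 * DUP_ARRAY() allocates the destination itself, so "copy" only needs to be
 * declared here and free()'d later:
 *
 *   struct entry *copy;
 *   DUP_ARRAY(copy, entries, nr);
 */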
/*
* These functions help you allocate structs with flex arrays, and copy
* the data directly into the array. For example, if you had:
*
* struct foo {
* int bar;
* char name[FLEX_ARRAY];
* };
*
* you can do:
*
* struct foo *f;
* FLEX_ALLOC_MEM(f, name, src, len);
*
* to allocate a "foo" with the contents of "src" in the "name" field.
* The resulting struct is automatically zero'd, and the flex-array field
* is NUL-terminated (whether the incoming src buffer was or not).
*
* The FLEXPTR_* variants operate on structs that don't use flex-arrays,
* but do want to store a pointer to some extra data in the same allocated
* block. For example, if you have:
*
* struct foo {
* char *name;
* int bar;
* };
*
* you can do:
*
* struct foo *f;
* FLEXPTR_ALLOC_STR(f, name, src);
*
* and "name" will point to a block of memory after the struct, which will be
* freed along with the struct (but the pointer can be repointed anywhere).
*
* The *_STR variants accept a string parameter rather than a ptr/len
* combination.
*
* Note that these macros will evaluate the first parameter multiple
* times, and it must be assignable as an lvalue.
*/
#define FLEX_ALLOC_MEM(x, flexname, buf, len) do { \
size_t flex_array_len_ = (len); \
(x) = xcalloc(1, st_add3(sizeof(*(x)), flex_array_len_, 1)); \
memcpy((void *)(x)->flexname, (buf), flex_array_len_); \
} while (0)
#define FLEXPTR_ALLOC_MEM(x, ptrname, buf, len) do { \
size_t flex_array_len_ = (len); \
(x) = xcalloc(1, st_add3(sizeof(*(x)), flex_array_len_, 1)); \
memcpy((x) + 1, (buf), flex_array_len_); \
(x)->ptrname = (void *)((x)+1); \
} while(0)
#define FLEX_ALLOC_STR(x, flexname, str) \
FLEX_ALLOC_MEM((x), flexname, (str), strlen(str))
#define FLEXPTR_ALLOC_STR(x, ptrname, str) \
FLEXPTR_ALLOC_MEM((x), ptrname, (str), strlen(str))
#define alloc_nr(x) (((x)+16)*3/2)
/**
* Dynamically growing an array using realloc() is error prone and boring.
*
* Define your array with:
*
* - a pointer (`item`) that points at the array, initialized to `NULL`
* (although please name the variable based on its contents, not on its
* type);
*
* - an integer variable (`alloc`) that keeps track of how big the current
* allocation is, initialized to `0`;
*
* - another integer variable (`nr`) to keep track of how many elements the
* array currently has, initialized to `0`.
*
 * Then, before adding the `n`th element to the array, call `ALLOC_GROW(item,
 * n, alloc)`. This ensures that the array can hold at least `n` elements by
 * calling `realloc(3)` and adjusting the `alloc` variable.
*
* ------------
* sometype *item;
* size_t nr;
 * size_t alloc;
*
* for (i = 0; i < nr; i++)
* if (we like item[i] already)
* return;
*
* // we did not like any existing one, so add one
* ALLOC_GROW(item, nr + 1, alloc);
* item[nr++] = value you like;
* ------------
*
* You are responsible for updating the `nr` variable.
*
* If you need to specify the number of elements to allocate explicitly
* then use the macro `REALLOC_ARRAY(item, alloc)` instead of `ALLOC_GROW`.
*
* Consider using ALLOC_GROW_BY instead of ALLOC_GROW as it has some
* added niceties.
*
* DO NOT USE any expression with side-effect for 'x', 'nr', or 'alloc'.
*/
#define ALLOC_GROW(x, nr, alloc) \
do { \
if ((nr) > alloc) { \
if (alloc_nr(alloc) < (nr)) \
alloc = (nr); \
else \
alloc = alloc_nr(alloc); \
REALLOC_ARRAY(x, alloc); \
} \
} while (0)
/*
* Similar to ALLOC_GROW but handles updating of the nr value and
* zeroing the bytes of the newly-grown array elements.
*
* DO NOT USE any expression with side-effect for any of the
* arguments.
*/
#define ALLOC_GROW_BY(x, nr, increase, alloc) \
do { \
if (increase) { \
size_t new_nr = nr + (increase); \
if (new_nr < nr) \
BUG("negative growth in ALLOC_GROW_BY"); \
ALLOC_GROW(x, new_nr, alloc); \
memset((x) + nr, 0, sizeof(*(x)) * (increase)); \
nr = new_nr; \
} \
} while (0)
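/*
 * Example (a sketch; item/nr/alloc follow the ALLOC_GROW conventions
 * above): append three zero-initialized elements and bump nr in one go:
 *
 *   ALLOC_GROW_BY(item, nr, 3, alloc);
 *   // item[nr - 3] .. item[nr - 1] are now zeroed and ready to fill
 */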
static inline char *xstrdup_or_null(const char *str)
{
return str ? xstrdup(str) : NULL;
}
static inline size_t xsize_t(off_t len)
{
xsize_t: avoid implementation defined behavior when len < 0 The xsize_t helper aims to safely convert an off_t to a size_t, erroring out when a file offset is too large to fit into a memory address. It does this by using two casts: size_t size = (size_t) len; if (len != (off_t) size) ... error out ... On a platform with sizeof(size_t) < sizeof(off_t), this check is safe and correct. The first cast truncates to a size_t by finding the remainder modulo SIZE_MAX+1 (see C99 section 6.3.1.3 Signed and unsigned integers) and the second promotes to an off_t, meaning the result is true if and only if len is representable as a size_t. On other platforms, this two-casts strategy still works well (always succeeds) for len >= 0. But for len < 0, when the first cast succeeds and produces SIZE_MAX + 1 + len, the resulting value is too large to be represented as an off_t, so the second cast produces implementation defined behavior. In practice, it is likely to produce a result of true despite len not being representable as size_t. Simplify by replacing with a more straightforward check: compare len to the relevant bounds and then cast it. (To avoid a -Wsign-compare warning, after checking that len >= 0, we explicitly convert to a sufficiently-large unsigned type before comparing to SIZE_MAX.) In practice, this is not likely to come up since typical callers use nonnegative len. Still, it's helpful to handle this case to make the behavior easy to reason about. Historical note: the original bounds-checking in 46be82dfd0 (xsize_t: check whether we lose bits, 2010-07-28) did not produce this implementation-defined behavior, though it still did not handle negative offsets. It was not until 73560c793a (git-compat-util.h: xsize_t() - avoid -Wsign-compare warnings, 2017-09-21) introduced the double cast that the implementation-defined behavior was triggered. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-19 09:52:56 +08:00
if (len < 0 || (uintmax_t) len > SIZE_MAX)
die("Cannot handle files this big");
return (size_t) len;
}
#ifndef HOST_NAME_MAX
#define HOST_NAME_MAX 256
#endif
#include "sane-ctype.h"
/*
* Like skip_prefix, but compare case-insensitively. Note that the comparison
* is done via tolower(), so it is strictly ASCII (no multi-byte characters or
* locale-specific conversions).
*/
static inline int skip_iprefix(const char *str, const char *prefix,
const char **out)
{
do {
if (!*prefix) {
*out = str;
return 1;
}
} while (tolower(*str++) == tolower(*prefix++));
return 0;
}
/*
* Like skip_prefix_mem, but compare case-insensitively. Note that the
* comparison is done via tolower(), so it is strictly ASCII (no multi-byte
* characters or locale-specific conversions).
*/
static inline int skip_iprefix_mem(const char *buf, size_t len,
const char *prefix,
const char **out, size_t *outlen)
{
do {
if (!*prefix) {
*out = buf;
*outlen = len;
return 1;
}
} while (len-- > 0 && tolower(*buf++) == tolower(*prefix++));
return 0;
}
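/*
 * Example (the header line is hypothetical): match an HTTP-style header
 * case-insensitively and keep a pointer to its value:
 *
 *   const char *value;
 *   if (skip_iprefix("Content-Length: 42", "content-length: ", &value))
 *       ... value now points at "42" ...
 */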
static inline int strtoul_ui(char const *s, int base, unsigned int *result)
{
unsigned long ul;
char *p;
errno = 0;
/* negative values would be accepted by strtoul */
if (strchr(s, '-'))
return -1;
ul = strtoul(s, &p, base);
if (errno || *p || p == s || (unsigned int) ul != ul)
return -1;
*result = ul;
return 0;
}
static inline int strtol_i(char const *s, int base, int *result)
{
long ul;
char *p;
errno = 0;
ul = strtol(s, &p, base);
if (errno || *p || p == s || (int) ul != ul)
return -1;
*result = ul;
return 0;
}
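/*
 * Example: both helpers parse strictly, rejecting trailing garbage, empty
 * input and out-of-range values ("depth" is a hypothetical variable):
 *
 *   int depth;
 *   if (strtol_i("42", 10, &depth))
 *       die("not a valid integer");
 */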
void git_stable_qsort(void *base, size_t nmemb, size_t size,
int(*compar)(const void *, const void *));
#ifdef INTERNAL_QSORT
#define qsort git_stable_qsort
#endif
#define QSORT(base, n, compar) sane_qsort((base), (n), sizeof(*(base)), compar)
static inline void sane_qsort(void *base, size_t nmemb, size_t size,
int(*compar)(const void *, const void *))
{
if (nmemb > 1)
qsort(base, nmemb, size, compar);
}
#define STABLE_QSORT(base, n, compar) \
git_stable_qsort((base), (n), sizeof(*(base)), compar)
#ifndef HAVE_ISO_QSORT_S
int git_qsort_s(void *base, size_t nmemb, size_t size,
int (*compar)(const void *, const void *, void *), void *ctx);
#define qsort_s git_qsort_s
#endif
#define QSORT_S(base, n, compar, ctx) do { \
if (qsort_s((base), (n), sizeof(*(base)), compar, ctx)) \
BUG("qsort_s() failed"); \
} while (0)
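/*
 * Example (a sketch; "values" and "nr" are hypothetical): QSORT() derives
 * the element size from the array itself, and skips the call entirely for
 * zero- or one-element arrays:
 *
 *   static int cmp_int(const void *a, const void *b)
 *   {
 *       int x = *(const int *)a, y = *(const int *)b;
 *       return (x > y) - (x < y);
 *   }
 *   ...
 *   QSORT(values, nr, cmp_int);
 */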
#ifndef REG_STARTEND
#error "Git requires REG_STARTEND support. Compile with NO_REGEX=NeedsStartEnd"
#endif
static inline int regexec_buf(const regex_t *preg, const char *buf, size_t size,
size_t nmatch, regmatch_t pmatch[], int eflags)
{
assert(nmatch > 0 && pmatch);
pmatch[0].rm_so = 0;
pmatch[0].rm_eo = size;
return regexec(preg, buf, nmatch, pmatch, eflags | REG_STARTEND);
}
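/*
 * Example (a sketch; "preg", "buf" and "len" are hypothetical): run a
 * compiled regex over a buffer that is not NUL-terminated:
 *
 *   regmatch_t match[1];
 *   if (!regexec_buf(&preg, buf, len, 1, match, 0))
 *       ... match[0].rm_so and match[0].rm_eo delimit the hit ...
 */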
#ifdef USE_ENHANCED_BASIC_REGULAR_EXPRESSIONS
int git_regcomp(regex_t *preg, const char *pattern, int cflags);
#define regcomp git_regcomp
#endif
#ifndef DIR_HAS_BSD_GROUP_SEMANTICS
# define FORCE_DIR_SET_GID S_ISGID
#else
# define FORCE_DIR_SET_GID 0
#endif
#ifdef NO_NSEC
#undef USE_NSEC
#define ST_CTIME_NSEC(st) 0
#define ST_MTIME_NSEC(st) 0
#else
#ifdef USE_ST_TIMESPEC
#define ST_CTIME_NSEC(st) ((unsigned int)((st).st_ctimespec.tv_nsec))
#define ST_MTIME_NSEC(st) ((unsigned int)((st).st_mtimespec.tv_nsec))
#else
#define ST_CTIME_NSEC(st) ((unsigned int)((st).st_ctim.tv_nsec))
#define ST_MTIME_NSEC(st) ((unsigned int)((st).st_mtim.tv_nsec))
#endif
#endif
#ifdef UNRELIABLE_FSTAT
#define fstat_is_reliable() 0
#else
#define fstat_is_reliable() 1
#endif
#ifndef va_copy
/*
* Since an obvious implementation of va_list would be to make it a
* pointer into the stack frame, a simple assignment will work on
* many systems. But let's try to be more portable.
*/
#ifdef __va_copy
#define va_copy(dst, src) __va_copy(dst, src)
#else
#define va_copy(dst, src) ((dst) = (src))
#endif
#endif
/* usage.c: only to be used for testing BUG() implementation (see test-tool) */
extern int BUG_exit_code;
/* usage.c: if bug() is called we should have a BUG_if_bug() afterwards */
extern int bug_called_must_BUG;
usage.c: add BUG() function There's a convention in Git's code base to write assertions as: if (...some_bad_thing...) die("BUG: the terrible thing happened"); with the idea that users should never see a "BUG:" message (but if they, it at least gives a clue what happened). We use die() here because it's convenient, but there are a few draw-backs: 1. Without parsing the messages, it's hard for callers to distinguish BUG assertions from regular errors. For instance, it would be nice if the test suite could check that we don't hit any assertions, but test_must_fail will pass BUG deaths as OK. 2. It would be useful to add more debugging features to BUG assertions, like file/line numbers or dumping core. 3. The die() handler can be replaced, and might not actually exit the whole program (e.g., it may just pthread_exit()). This is convenient for normal errors, but for an assertion failure (which is supposed to never happen), we're probably better off taking down the whole process as quickly and cleanly as possible. We could address these by checking in die() whether the error message starts with "BUG", and behaving appropriately. But there's little advantage at that point to sharing the die() code, and only downsides (e.g., we can't change the BUG() interface independently). Moreover, converting all of the existing BUG calls reveals that the test suite does indeed trigger a few of them. Instead, this patch introduces a new BUG() function, which prints an error before dying via SIGABRT. This gives us test suite checking and core dumps. The function is actually a macro (when supported) so that we can show the file/line number. We can convert die("BUG") invocations to BUG() in further patches, dealing with any test fallouts individually. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-13 11:28:50 +08:00
__attribute__((format (printf, 3, 4))) NORETURN
void BUG_fl(const char *file, int line, const char *fmt, ...);
#define BUG(...) BUG_fl(__FILE__, __LINE__, __VA_ARGS__)
__attribute__((format (printf, 3, 4)))
void bug_fl(const char *file, int line, const char *fmt, ...);
#define bug(...) bug_fl(__FILE__, __LINE__, __VA_ARGS__)
#define BUG_if_bug(...) do { \
if (bug_called_must_BUG) \
BUG_fl(__FILE__, __LINE__, __VA_ARGS__); \
} while (0)
#ifndef FSYNC_METHOD_DEFAULT
#ifdef __APPLE__
#define FSYNC_METHOD_DEFAULT FSYNC_METHOD_WRITEOUT_ONLY
#else
#define FSYNC_METHOD_DEFAULT FSYNC_METHOD_FSYNC
#endif
#endif
#ifndef SHELL_PATH
# define SHELL_PATH "/bin/sh"
#endif
#ifndef _POSIX_THREAD_SAFE_FUNCTIONS
static inline void git_flockfile(FILE *fh UNUSED)
{
; /* nothing */
}
static inline void git_funlockfile(FILE *fh UNUSED)
{
; /* nothing */
}
#undef flockfile
#undef funlockfile
#undef getc_unlocked
git-compat-util: avoid redefining system function names Our git-compat-util header defines a few noop wrappers for system functions if they are not available. This was originally done with a macro, but in 15b52a44e0 (compat-util: type-check parameters of no-op replacement functions, 2020-08-06) we switched to inline functions, because it gives us basic type-checking. This can cause compilation failures when the system _does_ declare those functions but we choose not to use them, since the compiler will complain about the redeclaration. This was seen in the real world when compiling against certain builds of uclibc, which may leave _POSIX_THREAD_SAFE_FUNCTIONS unset, but still declare flockfile() and funlockfile(). It can also be seen on any platform that has setitimer() if you choose to compile without it (which plausibly could happen if the system implementation is buggy). E.g., on Linux: $ make NO_SETITIMER=IWouldPreferNotTo git.o CC git.o In file included from builtin.h:4, from git.c:1: git-compat-util.h:344:19: error: conflicting types for ‘setitimer’; have ‘int(int, const struct itimerval *, struct itimerval *)’ 344 | static inline int setitimer(int which UNUSED, | ^~~~~~~~~ In file included from git-compat-util.h:234: /usr/include/x86_64-linux-gnu/sys/time.h:155:12: note: previous declaration of ‘setitimer’ with type ‘int(__itimer_which_t, const struct itimerval * restrict, struct itimerval * restrict)’ 155 | extern int setitimer (__itimer_which_t __which, | ^~~~~~~~~ make: *** [Makefile:2714: git.o] Error 1 Here I think the compiler is complaining about the lack of "restrict" annotations in our version, but even if we matched it completely (and there is no way to match all platforms anyway), it would still complain about a static declaration following a non-static one. Using macros doesn't have this problem, because the C preprocessor rewrites the name in our code before we hit this level of compilation. One way to fix this would just be to revert most of 15b52a44e0. What we really cared about there was catching build problems with precompose_argv(), which most platforms _don't_ build, and which is our custom function. So we could just switch the system wrappers back to macros; most people build the real versions anyway, and they don't change. So the extra type-checking isn't likely to catch bugs. But with a little work, we can have our cake and eat it, too. If we define the type-checking wrappers with a unique name, and then redirect the system names to them with macros, we still get our type checking, but without redeclaring the system function names. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-01 05:15:14 +08:00
#define flockfile(fh) git_flockfile(fh)
#define funlockfile(fh) git_funlockfile(fh)
#define getc_unlocked(fh) getc(fh)
#endif
#ifdef FILENO_IS_A_MACRO
int git_fileno(FILE *stream);
# ifndef COMPAT_CODE_FILENO
# undef fileno
# define fileno(p) git_fileno(p)
# endif
#endif
#ifdef NEED_ACCESS_ROOT_HANDLER
int git_access(const char *path, int mode);
# ifndef COMPAT_CODE_ACCESS
# ifdef access
# undef access
# endif
# define access(path, mode) git_access(path, mode)
# endif
#endif
/*
* Our code often opens a path to an optional file, to work on its
* contents when we can successfully open it. We can ignore a failure
* to open if such an optional file does not exist, but we do want to
* report a failure in opening for other reasons (e.g. we got an I/O
 * error, or the file is there but we lack the permission to open it).
*
* Call this function after seeing an error from open() or fopen() to
* see if the errno indicates a missing file that we can safely ignore.
*/
static inline int is_missing_file_error(int errno_)
{
return (errno_ == ENOENT || errno_ == ENOTDIR);
}
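/*
 * Example (a sketch; "path" is hypothetical): open an optional file and
 * ignore only the "file is missing" class of errors:
 *
 *   FILE *fp = fopen(path, "r");
 *   if (!fp) {
 *       if (!is_missing_file_error(errno))
 *           die_errno("cannot open '%s'", path);
 *       return 0;   // nothing to do
 *   }
 */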
int cmd_main(int, const char **);
/*
* Intercept all calls to exit() and route them to trace2 to
* optionally emit a message before calling the real exit().
*/
int common_exit(const char *file, int line, int code);
#define exit(code) exit(common_exit(__FILE__, __LINE__, (code)))
add UNLEAK annotation for reducing leak false positives

It's a common pattern in git commands to allocate some memory that should last for the lifetime of the program and then not bother to free it, relying on the OS to throw it away. This keeps the code simple, and it's fast (we don't waste time traversing structures or calling free at the end of the program). But it also triggers warnings from memory-leak checkers like valgrind or LSAN. They know that the memory was still allocated at program exit, but they don't know _when_ the leaked memory stopped being useful. If it was early in the program, then it's probably a real and important leak. But if it was used right up until program exit, it's not an interesting leak and we'd like to suppress it so that we can see the real leaks.

This patch introduces an UNLEAK() macro that lets us do so. To understand its design, let's first look at some of the alternatives.

Unfortunately the suppression systems offered by leak-checking tools don't quite do what we want. A leak-checker basically knows two things:

  1. Which blocks were allocated via malloc, and the callstack during the allocation.

  2. Which blocks were left un-freed at the end of the program (and which are unreachable, but more on that later).

Their suppressions work by mentioning the function or callstack of a particular allocation, and marking it as OK to leak. So imagine you have code like this:

  int cmd_foo(...)
  {
	/* this allocates some memory */
	char *p = some_function();
	printf("%s", p);
	return 0;
  }

You can say "ignore allocations from some_function(), they're not leaks". But that's not right. That function may be called elsewhere, too, and we would potentially want to know about those leaks.

So you can say "ignore the callstack when main calls some_function". That works, but your annotations are brittle. In this case it's only two functions, but you can imagine that the actual allocation is much deeper. If any of the intermediate code changes, you have to update the suppression.

What we _really_ want to say is that "the value assigned to p at the end of the function is not a real leak". But leak-checkers can't understand that; they don't know about "p" in the first place.

However, we can do something a little bit tricky if we make some assumptions about how leak-checkers work. They generally don't just report all un-freed blocks. That would report even globals which are still accessible when the leak-check is run. Instead they take some set of memory (like BSS) as a root and mark it as "reachable". Then they scan the reachable blocks for anything that looks like a pointer to a malloc'd block, and consider that block reachable. And then they scan those blocks, and so on, transitively marking anything reachable from a global as "not leaked" (or at least leaked in a different category).

So we can mark the value of "p" as reachable by putting it into a variable with program lifetime. One way to do that is to just mark "p" as static. But that actually affects the run-time behavior if the function is called twice (you aren't likely to call main() twice, but some of our cmd_*() functions are called from other commands).

Instead, we can trick the leak-checker by putting the value into _any_ reachable bytes. This patch keeps a global linked-list of bytes copied from "unleaked" variables. That list is reachable even at program exit, which confers recursive reachability on whatever values we unleak. In other words, you can do:

  int cmd_foo(...)
  {
	char *p = some_function();
	printf("%s", p);
	UNLEAK(p);
	return 0;
  }

to annotate "p" and suppress the leak report. But wait, couldn't we just say "free(p)"? In this toy example, yes. But UNLEAK()'s byte-copying strategy has several advantages over actually freeing the memory:

  1. It's recursive across structures. In many cases our "p" is not just a pointer, but a complex struct whose fields may have been allocated by a sub-function. And in some cases (e.g., dir_struct) we don't even have a function which knows how to free all of the struct members. By marking the struct itself as reachable, that confers reachability on any pointers it contains (including those found in embedded structs, or reachable by walking heap blocks recursively).

  2. It works on cases where we're not sure if the value is allocated or not. For example:

	char *p = argc > 1 ? argv[1] : some_function();

     It's safe to use UNLEAK(p) here, because it's not freeing any memory. In the case that we're pointing to argv here, the reachability checker will just ignore our bytes.

  3. Likewise, it works even if the variable has _already_ been freed. We're just copying the pointer bytes. If the block has been freed, the leak-checker will skip over those bytes as uninteresting.

  4. Because it's not actually freeing memory, you can UNLEAK() before we are finished accessing the variable. This is helpful in cases like this:

	char *p = some_function();
	return another_function(p);

     Writing this with free() requires:

	int ret;
	char *p = some_function();
	ret = another_function(p);
	free(p);
	return ret;

     But with unleak we can just write:

	char *p = some_function();
	UNLEAK(p);
	return another_function(p);

This patch adds the UNLEAK() macro and enables it automatically when Git is compiled with SANITIZE=leak. In normal builds it's a noop, so we pay no runtime cost.

It also adds some UNLEAK() annotations to show off how the feature works. On top of other recent leak fixes, these are enough to get t0000 and t0001 to pass when compiled with LSAN.

Note the case in commit.c which actually converts a strbuf_release() into an UNLEAK. This code was already non-leaky, but the free didn't do anything useful, since we're exiting. Converting it to an annotation means that non-leak-checking builds pay no runtime cost. The cost is minimal enough that it's probably not worth going on a crusade to convert these kinds of frees to UNLEAKs. I did it here for consistency with the "sb" leak (though it would have been equally correct to go the other way, and turn them both into strbuf_release() calls).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-08 14:38:41 +08:00
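The "global list of copied bytes" mechanism described above is easy to picture with a short sketch. The code below is illustrative only: the node type, the use of plain malloc()/memcpy(), and the silent failure path are assumptions for this sketch, not Git's actual unleak_memory() implementation (which lives in its own source file and uses Git's allocation helpers).

#include <stdlib.h>	/* malloc(), for a standalone build of this sketch */
#include <string.h>	/* memcpy() */

struct suppressed_leak_root {
	struct suppressed_leak_root *next;
	char data[];	/* flexible array member: copy of the "unleaked" bytes */
};

static struct suppressed_leak_root *suppressed_leaks;

void unleak_memory(const void *ptr, size_t len)
{
	struct suppressed_leak_root *root = malloc(sizeof(*root) + len);

	if (!root)
		return;	/* best effort; the worst case is a spurious leak report */
	memcpy(root->data, ptr, len);
	root->next = suppressed_leaks;
	suppressed_leaks = root;	/* stays reachable until exit, so LSAN scans the copy */
}

Because the copied bytes sit on a list rooted in a static variable, any pointers they contain (and anything those point to, transitively) are treated as reachable by the leak checker at program exit.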
/*
* You can mark a stack variable with UNLEAK(var) to avoid it being
* reported as a leak by tools like LSAN or valgrind. The argument
* should generally be the variable itself (not its address and not what
* it points to). It's safe to use this on pointers which may already
* have been freed, or on pointers which may still be in use.
*
* Use this _only_ for a variable that leaks by going out of scope at
* program exit (so only from cmd_* functions or their direct helpers).
* Normal functions, especially those which may be called multiple
* times, should actually free their memory. This is only meant as
* an annotation, and does nothing in non-leak-checking builds.
*/
#ifdef SUPPRESS_ANNOTATED_LEAKS
void unleak_memory(const void *ptr, size_t len);
#define UNLEAK(var) unleak_memory(&(var), sizeof(var))
#else
#define UNLEAK(var) do {} while (0)
#endif
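As a usage illustration, the annotation is typically the last thing a top-level command does before returning. The command name, option struct, and fields below are made up for this sketch, and the cmd_* signature is the conventional one assumed here:

struct example_opts {
	char *path;	/* heap-allocated while parsing options */
	char *message;	/* ditto */
};

int cmd_example(int argc, const char **argv, const char *prefix)
{
	struct example_opts opts = { NULL, NULL };

	/* ... parse options, fill opts, do the real work ... */

	/*
	 * About to return from the top-level command: marking the whole
	 * struct confers reachability on both pointers, so no free
	 * routine needs to be written or maintained for it.
	 */
	UNLEAK(opts);
	return 0;
}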
compat: auto-detect if zlib has uncompress2()

We have a copy of the uncompress2() implementation in compat/ so that we can build with an older version of zlib that lacks the function, and the build procedure selects if it is used via the NO_UNCOMPRESS2 $(MAKE) variable. This is yet another "annoying" knob the porters need to tweak on platforms that are not common enough to have the default set in the config.mak.uname file.

Attempt to instead ask the system header <zlib.h> to decide if we need the compatibility implementation. This is a deviation from the way we have been handling the "compatibility" features so far, and if it can be done cleanly enough, it could work as a model for features that need compatibility definitions we discover in the future.

With that goal in mind, avoid expedient but ugly hacks, like shoving the code that is conditionally compiled into an unrelated .c file, which may not work in future cases; instead, take an approach that uses a file that is independently compiled and stands on its own.

Compile and link the compat/zlib-uncompress2.c file unconditionally, but conditionally hide the implementation behind #if/#endif when the zlib version is 1.2.9 or newer, and unconditionally archive the resulting object file in libgit.a to be picked up by the linker.

There are a few things to note in the shape of the code base after this change:

 - We no longer use the NO_UNCOMPRESS2 knob; if the system header <zlib.h> claims a version that is more recent than the library actually is, this would break, but it is easy to add it back when we find such a system.

 - The object file compat/zlib-uncompress2.o is always compiled and archived in libgit.a, just like a few other compat/ object files already are.

 - The inclusion of <zlib.h> is done in <git-compat-util.h>; we used to do so from <cache.h> which includes <git-compat-util.h> as the first thing it does, so from the *.c codes, there is no practical change.

 - Until an object in libgit.a that is already used gains a reference to the function, the reftable code will be the only one that wants it, so libgit.a on the linker command line needs to appear once more at the end to satisfy the mutual dependency.

 - Beat found a trick used by OpenSSL to avoid making the conditionally-compiled object truly empty (apparently because they had to deal with compilers that do not want to see an effectively empty input file). Our compat/zlib-uncompress2.c file borrows the same trick for portability.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-25 02:27:59 +08:00
#define z_const
#include <zlib.h>
#if ZLIB_VERNUM < 0x1290
/*
* This is uncompress2, which is only available in zlib >= 1.2.9
 * (released in early 2017). See compat/zlib-uncompress2.c.
*/
int uncompress2(Bytef *dest, uLongf *destLen, const Bytef *source,
uLong *sourceLen);
#endif
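The commit message above describes compiling compat/zlib-uncompress2.c unconditionally while hiding its body behind the same version check, plus a trick to keep the object file from being effectively empty. Below is a hedged sketch of that layout: the fallback body is written here against zlib's documented inflate() API and does not handle inputs larger than UINT_MAX, whereas Git's actual compat file carries the implementation taken from zlib itself and may differ in detail.

/* sketch of a compat/zlib-uncompress2.c-style translation unit */
#include "git-compat-util.h"

#if ZLIB_VERNUM < 0x1290
int uncompress2(Bytef *dest, uLongf *destLen,
		const Bytef *source, uLong *sourceLen)
{
	z_stream stream;
	int err;

	memset(&stream, 0, sizeof(stream));
	stream.next_in = (Bytef *)source;	/* older zlib lacks z_const */
	stream.avail_in = (uInt)*sourceLen;
	stream.next_out = dest;
	stream.avail_out = (uInt)*destLen;

	err = inflateInit(&stream);
	if (err != Z_OK)
		return err;

	err = inflate(&stream, Z_FINISH);
	*destLen = stream.total_out;
	*sourceLen = stream.total_in;	/* report how much input was consumed */
	inflateEnd(&stream);

	if (err == Z_STREAM_END)
		return Z_OK;
	if (err == Z_NEED_DICT || (err == Z_BUF_ERROR && stream.avail_out))
		return Z_DATA_ERROR;	/* input was truncated or corrupt */
	return err;
}
#else
/*
 * Keep the translation unit from being effectively empty (the trick the
 * commit message credits to OpenSSL), for compilers that object to one.
 */
static void *zlib_compat_dummy = &zlib_compat_dummy;
#endif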
automatically ban strcpy()

There are a few standard C functions (like strcpy) which are easy to misuse. E.g.:

  char path[PATH_MAX];
  strcpy(path, arg);

may overflow the "path" buffer. Sometimes there's an earlier constraint on the size of "arg", but even in such a case it's hard to verify that the code is correct. If the size really is unbounded, you're better off using a dynamic helper like strbuf:

  struct strbuf path = STRBUF_INIT;
  strbuf_addstr(path, arg);

or if it really is bounded, then use xsnprintf to show your expectation (and get a run-time assertion):

  char path[PATH_MAX];
  xsnprintf(path, sizeof(path), "%s", arg);

which makes further auditing easier.

We'd usually catch undesirable code like this in a review, but there's no automated enforcement. Adding that enforcement can help us be more consistent and save effort (and a round-trip) during review.

This patch teaches the compiler to report an error when it sees strcpy (and will become a model for banning a few other functions). This has a few advantages over a separate linting tool:

  1. We know it's run as part of a build cycle, so it's hard to ignore. Whereas an external linter is an extra step the developer needs to remember to do.

  2. Likewise, it's basically free since the compiler is parsing the code anyway.

  3. We know it's robust against false positives (unlike a grep-based linter).

The two big disadvantages are:

  1. We'll only check code that is actually compiled, so it may miss code that isn't triggered on your particular system. But since presumably people don't add new code without compiling it (and if they do, the banned function list is the least of their worries), we really only care about failing to clean up old code when adding new functions to the list. And that's easy enough to address with a manual audit when adding a new function (which is what I did for the functions here).

  2. If this ends up generating false positives, it's going to be harder to disable (as opposed to a separate linter, which may have mechanisms for overriding a particular case). But the intent is to only ban functions which are obviously bad, and for which we accept using an alternative even when this particular use isn't buggy (e.g., the xsnprintf alternative above).

The implementation here is simple: we'll define a macro for the banned function which replaces it with a reference to a descriptively named but undeclared identifier. Replacing it with any invalid code would work (since we just want to break compilation). But ideally we'd meet these goals:

  - it should be portable; ideally this would trigger everywhere, and does not need to be part of a DEVELOPER=1 setup (because unlike warnings which may depend on the compiler or system, this is a clear indicator of something wrong in the code).

  - it should generate a readable error that gives the developer a clue what happened

  - it should avoid generating too much other cruft that makes it hard to see the actual error

  - it should mention the original callsite in the error

The output with this patch looks like this (using gcc 7, on a checkout with 022d2ac1f3 reverted, which removed the final strcpy from blame.c):

      CC builtin/blame.o
  In file included from ./git-compat-util.h:1246,
                   from ./cache.h:4,
                   from builtin/blame.c:8:
  builtin/blame.c: In function ‘cmd_blame’:
  ./banned.h:11:22: error: ‘sorry_strcpy_is_a_banned_function’ undeclared (first use in this function)
   #define BANNED(func) sorry_##func##_is_a_banned_function
                        ^~~~~~
  ./banned.h:14:21: note: in expansion of macro ‘BANNED’
   #define strcpy(x,y) BANNED(strcpy)
                       ^~~~~~
  builtin/blame.c:1074:4: note: in expansion of macro ‘strcpy’
      strcpy(repeated_meta_color, GIT_COLOR_CYAN);
      ^~~~~~
  ./banned.h:11:22: note: each undeclared identifier is reported only once for each function it appears in
   #define BANNED(func) sorry_##func##_is_a_banned_function
                        ^~~~~~
  ./banned.h:14:21: note: in expansion of macro ‘BANNED’
   #define strcpy(x,y) BANNED(strcpy)
                       ^~~~~~
  builtin/blame.c:1074:4: note: in expansion of macro ‘strcpy’
      strcpy(repeated_meta_color, GIT_COLOR_CYAN);
      ^~~~~~

This prominently shows the phrase "strcpy is a banned function", along with the original callsite in blame.c and the location of the ban code in banned.h. Which should be enough to get even a developer seeing this for the first time pointed in the right direction.

This doesn't match our ideals perfectly, but it's a pretty good balance. A few alternatives I tried:

  1. Instead of using an undeclared variable, using an undeclared function. This shortens the message, because the "each undeclared identifier" message is not needed (and as you can see above, it triggers a separate mention of each of the expansion points). But it doesn't actually stop compilation unless you use -Werror=implicit-function-declaration in your CFLAGS. This is the case for DEVELOPER=1, but not for a default build (on the other hand, we'd eventually produce a link error pointing to the correct source line with the descriptive name).

  2. The linux kernel uses a similar mechanism in its BUILD_BUG_ON_MSG(), where they actually declare the function but do so with gcc's error attribute. But that's not portable to other compilers (and it also runs afoul of our error() macro). We could make a gcc-specific technique and fallback on other compilers, but it's probably not worth the complexity. It also isn't significantly shorter than the error message shown above.

  3. We could drop the BANNED() macro, which would shorten the number of lines in the error. But curiously, removing it (and just expanding strcpy directly to the bogus identifier) causes gcc _not_ to report the original line of code.

So this strategy seems to be an acceptable mix of information, portability, simplicity, and robustness, without _too_ much extra clutter. I also tested it with clang, and it looks as good (actually, slightly less cluttered than with gcc).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-26 15:21:05 +08:00
/*
* This include must come after system headers, since it introduces macros that
* replace system names.
*/
#include "banned.h"
/*
* container_of - Get the address of an object containing a field.
*
* @ptr: pointer to the field.
* @type: type of the object.
* @member: name of the field within the object.
*/
#define container_of(ptr, type, member) \
((type *) ((char *)(ptr) - offsetof(type, member)))
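A usage sketch for container_of(); the struct and field names below are invented purely for illustration:

struct example_node {
	struct example_node *next;	/* made-up intrusive link type */
};

struct example_entry {
	int value;
	struct example_node node;	/* embedded field */
};

static inline struct example_entry *entry_from_node(struct example_node *n)
{
	/* recover the enclosing object from a pointer to its member */
	return container_of(n, struct example_entry, node);
}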
/*
* helper function for `container_of_or_null' to avoid multiple
* evaluation of @ptr
*/
static inline void *container_of_or_null_offset(void *ptr, size_t offset)
{
return ptr ? (char *)ptr - offset : NULL;
}
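/*
 * (Why a helper?  A naive macro along the lines of
 *
 *	#define container_of_or_null(ptr, type, member) \
 *		((ptr) ? container_of(ptr, type, member) : NULL)
 *
 * would evaluate "ptr" twice, which misbehaves when the argument has
 * side effects; the helper above evaluates it exactly once.)
 */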
/*
* like `container_of', but allows returned value to be NULL
*/
#define container_of_or_null(ptr, type, member) \
(type *)container_of_or_null_offset(ptr, offsetof(type, member))
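A companion sketch for the _or_null form, reusing the illustrative struct example_node defined in the container_of() sketch above; the point is that a NULL pointer (say, from a failed lookup) passes straight through instead of being offset into garbage:

struct example_cache_entry {
	int refcount;
	struct example_node node;	/* same illustrative link type as above */
};

static inline struct example_cache_entry *cache_entry_from_node(struct example_node *maybe)
{
	/* "maybe" may legitimately be NULL; the result is then NULL too */
	return container_of_or_null(maybe, struct example_cache_entry, node);
}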
/*
* like offsetof(), but takes a pointer to a variable of type which
* contains @member, instead of a specified type.
* @ptr is subject to multiple evaluation since we can't rely on __typeof__
* everywhere.
*/
#if defined(__GNUC__) /* clang sets this, too */
#define OFFSETOF_VAR(ptr, member) offsetof(__typeof__(*ptr), member)
#else /* !__GNUC__ */
#define OFFSETOF_VAR(ptr, member) \
((uintptr_t)&(ptr)->member - (uintptr_t)(ptr))
#endif /* !__GNUC__ */
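Finally, a small sketch of OFFSETOF_VAR(); "struct example_opt" and "eo" are made up for this illustration. The macro is convenient in generic helpers that receive a pointer variable but would rather not spell out the enclosing struct type:

struct example_opt {
	int flags;
	char name[16];
};

static inline size_t example_opt_name_offset(struct example_opt *eo)
{
	/* yields the same value as offsetof(struct example_opt, name) */
	return OFFSETOF_VAR(eo, name);
}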
#endif