Document ORIG_HEAD a bit more.
* pb/doc-orig-head:
git-rebase.txt: add a note about 'ORIG_HEAD' being overwritten
revisions.txt: be explicit about commands writing 'ORIG_HEAD'
git-merge.txt: mention 'ORIG_HEAD' in the Description
git-reset.txt: mention 'ORIG_HEAD' in the Description
git-cherry-pick.txt: do not use 'ORIG_HEAD' in example
Test clean-up.
* ar/test-cleanup:
t7527: use test_when_finished in 'case insensitive+preserving'
t6422: drop commented out code
t6003: uncomment test '--max-age=c3, --topo-order'
Code clean-up.
* rs/dup-array:
use DUP_ARRAY
add DUP_ARRAY
do full type check in BARF_UNLESS_COPYABLE
factor out BARF_UNLESS_COPYABLE
mingw: make argv2 in try_shell_exec() non-const
Test updates.
* jx/t1301-updates:
t1301: do not change $CWD in "shared=all" test case
t1301: use test_when_finished for cleanup
t1301: fix wrong template dir for git-init
The cURL topic hasn't cooked for a week in 'next', but let's
fast-track it so that the linux-musl CI job will be happy.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Deal with a few deprecation warnings from the cURL library.
* jk/curl-avoid-deprecated-api:
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
The fsck code operates on an object buffer represented as a pointer/len
combination. However, the parsing of commits and tags is a little bit
loose; we mostly scan left-to-right through the buffer, without checking
whether we've gone past the length we were given.
This has traditionally been OK because the buffers we feed to fsck
always have an extra NUL after the end of the object content, which ends
any left-to-right scan. That has always been true for objects we read
from the odb, and we made it true for incoming index-pack/unpack-objects
checks in a1e920a0a7 (index-pack: terminate object buffers with NUL,
2014-12-08).
However, we recently added an exception: hash-object asks index_fd() to
do fsck checks. That _may_ have an extra NUL (if we read from a pipe
into a strbuf), but it might not (if we read the contents from the
file). Nor can we just teach it to always add a NUL. We may mmap the
on-disk file, which will not have any extra bytes (if it's a multiple of
the page size). Not to mention that this is a rather subtle assumption
for the fsck code to make.
Instead, let's make sure that the fsck parsers don't ever look past the
size of the buffer they've been given. This _almost_ works already,
thanks to earlier work in 4d0d89755e (Make sure fsck_commit_buffer()
does not run out of the buffer, 2014-09-11). The theory there is that we
check up front whether we have the end of header double-newline
separator. And then any left-to-right scanning we do is OK as long as it
stops when it hits that boundary.
However, we later softened that in 84d18c0bcf (fsck: it is OK for a tag
and a commit to lack the body, 2015-06-28), which allows the
double-newline header to be missing, but does require that the header
ends in a newline. That was OK back then, because of the NUL-termination
guarantees (including the one from a1e920a0a7 mentioned above).
Because 84d18c0bcf guarantees that any header line does end in a
newline, we are still OK with most of the left-to-right scanning. We
only need to take care after completing a line, to check that there is
another line (and we didn't run out of buffer).
Most of these checks just need to verify "buffer < buffer_end" (where
buffer is advanced as we parse) before scanning for the next header
line (a minimal sketch of this pattern follows the notes below). But
here are a few notes:
- we don't technically need to check for remaining buffer before
parsing the very first line ("tree" for a commit, or "object" for a
tag), because verify_headers() rejects a totally empty buffer. But
we'll do so in the name of consistency and defensiveness.
- there are some calls to strchr('\n'). These are actually OK by the
"the final header line must end in a newline" guarantee from
verify_headers(). They will always find that rather than run off the
end of the buffer. Curiously, they do check for a NULL return and
complain, but I believe that condition can never be reached.
However, I converted them to use memchr() with a proper size and
retained the NULL checks. Using memchr() is not much longer and
makes it more obvious what is going on. Likewise, retaining the NULL
checks serves as a defensive measure in case my analysis is wrong.
- commit 9a1a3a4d4c (mktag: allow omitting the header/body \n
separator, 2021-01-05), does check for the end-of-buffer condition,
but does so with "!*buffer", relying explicitly on the NUL
termination. We can accomplish the same thing with a pointer
comparison. I also folded it into the follow-on conditional that
checks the contents of the buffer, for consistency with the other
checks.
- fsck_ident() uses parse_timestamp(), which is based on strtoumax().
That function will happily skip past leading whitespace, including
newlines, which makes it a risk. We can fix this by scanning to the
first digit ourselves, and then using parse_timestamp() to do the
actual numeric conversion.
Note that as a side effect this fixes the fact that we missed
zero-padded timestamps preceded by extra whitespace, like
"<email>  0123" (whereas we would correctly complain about
"<email> 0123"). I doubt anybody cares, but I mention it here for
completeness.
- fsck_tree() does not need any modifications. It relies on
decode_tree_entry() to do the actual parsing, and that function
checks both that there are enough bytes in the buffer to represent
an entry, and that there is a NUL at the appropriate spot (one
hash-length from the end; this may not be the NUL for the entry we
are parsing, but we know that in the worst case, everything from our
current position to that NUL is a filename, so we won't run out of
bytes).
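As a minimal standalone sketch of that pattern (illustrative names and
a toy header parser, not the actual fsck.c code):

    #include <string.h>

    /*
     * Scan "key value\n" header lines in a buffer that is NOT
     * NUL-terminated, never looking past buf + len. This assumes the
     * caller has already verified, as verify_headers() does, that the
     * final header line ends in a newline.
     */
    static int parse_headers(const char *buf, size_t len)
    {
            const char *p = buf;
            const char *end = buf + len;

            while (p < end) {       /* ran out of buffer? stop cleanly */
                    const char *eol = memchr(p, '\n', end - p);
                    if (!eol)
                            return -1; /* defensive; should be unreachable */
                    /* ... examine the line [p, eol) here ... */
                    p = eol + 1;    /* advance past the newline */
            }
            return 0;
    }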
In addition to fixing the code itself, we'd like to make sure our rather
subtle assumptions are not violated in the future. So this patch does
two more things:
- add comments around verify_headers() documenting the link between
what it checks and the memory safety of the callers. I don't expect
this code to be modified frequently, but this may help somebody from
accidentally breaking things.
- add a thorough set of tests covering truncations at various key
spots (e.g., for a "tree $oid" line, in the middle of the word
"tree", right after it, after the space, in the middle of the $oid,
and right at the end of the line). Most of these are fine already (it
is only truncating right at the end of the line that is currently
broken). And some of them are not even possible with the current
code (we parse "tree " as a unit, so truncating before the space is
equivalent). But I aimed here to consider the code a black box and
look for any truncations that would be a problem for a left-to-right
parser.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching in parallel from a remote group with a duplicated remote results
in the following:
    error: cannot lock ref '<ref>': is at <oid> but expected <oid>
This doesn't happen with serial fetching, since fetching from a remote
that has already been fetched from is a noop. Therefore, remove any
duplicated remotes after remote groups are parsed.
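The shape of the fix, sketched under the assumption that the remotes
end up in a git string_list (the actual names in builtin/fetch.c may
differ):

    /* after the remote groups have been expanded into "list" ... */
    string_list_sort(&list);
    string_list_remove_duplicates(&list, 0); /* 0: do not free util fields */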
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commits added clarifications to the column alignment
placeholders; note that the spaces around the parameters are optional.
Also, a proposed extension [1] to allow hard truncation (without
ellipsis '..') highlighted that the existing code does not play well
with wide characters, such as Asian fonts and emojis.
For example, N wide characters take 2N columns so won't fit an odd number
column width, causing misalignment somewhere.
Further analysis also showed that decomposed characters, e.g. separate
`a` + `umlaut` Unicode code points, may also be mis-counted, in some
cases leaving multiple loose `umlauts` all combined together.
Add some notes about these limitations, and add basic tests to demonstrate
them.
The chosen solution for the tests is to replace any wide character
that overlaps a splitting boundary with the Unicode vertical ellipsis
code point, as a rare but 'obvious' substitution.
An alternative could be substituting a single dot '.', which matches
regular-expression usage and our two-dot ellipsis, and which would be
obvious in scenarios where the bulk of the text is wide characters. In
mainly 'ascii' scenarios, however, a singleton emoji being replaced by
a dot could be confusing.
It is enough that the tests fail cleanly. The final choice for the
substitute character can be deferred.
[1]
https://lore.kernel.org/git/20221030185614.3842-1-philipoakley@iee.email/
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit a7f01c6b4d (pretty: support truncating in %>, %< and %><,
2013-04-19) added the use of ellipsis when truncating placeholder
values.
Show our 'two dot' ellipsis, and examples for the left, middle and
right truncation, to avoid any confusion as to which end of the string
is adjusted (cf. justification and sub-strings).
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 066790d7cb (pretty.c: support <direction>|(<negative number>) forms,
2016-06-16) added the option for right-justified column alignment without
updating the documentation.
Add an explanation of its use of negative column values.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit a57523428b (pretty: support padding placeholders, %< %> and %><,
2013-04-19) introduced column width placeholders. It also added
separate column position `%<|(` placeholders for display-screen-based
placement.
Change the display screen parameter reference from 'N' to 'M' and the
corresponding descriptions to make the distinction clearer.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit a57523428b (pretty: support padding placeholders, %< %> and %><,
2013-04-19) introduced columnated placeholders. These placeholders
can be confusing as they contain `<` and `>` characters of their own,
adjacent to the `<N>` parameters.
Add spaces on either side of the `<N>` parameters in the title line.
The code (strtol) will consume any spaces around the number values
(assuming they are passed as a quoted string with spaces).
Note that the spaces are optional.
Subsequent commits will clarify other confusions.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On platforms where `size_t` does not have the same width as `unsigned
long`, passing a pointer to the former when a pointer to the latter is
expected can lead to problems.
Windows and 32-bit Linux are among the affected platforms.
In this instance, we want to store the size of the blob that was read in
that variable. However, `read_blob_data_from_index()` passes that
pointer to `read_object_file()` which expects an `unsigned long *`.
This means that on affected platforms, the variable is not fully
populated and part of its value is left uninitialized. (On Big-Endian
platforms, this problem would be even worse.)
The consequence is that depending on the uninitialized memory's
contents, we may erroneously reject perfectly fine attributes.
Let's address this by passing a pointer to a variable of the expected
data type.
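A self-contained illustration of the width mismatch (toy function, not
git's actual code):

    #include <stdio.h>

    /* Toy stand-in for read_object_file(): like the real function, it
     * writes the size through an "unsigned long *". */
    static void get_size(unsigned long *out)
    {
            *out = 42;
    }

    int main(void)
    {
            size_t wrong;        /* 64-bit on LLP64 platforms... */
            unsigned long right; /* ...while unsigned long is 32-bit */

            get_size((unsigned long *)&wrong); /* upper half never written */
            get_size(&right);                  /* fully populated */
            printf("%lu\n", right);
            return 0;
    }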
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing text read as if "git checkout -b/-B name" were
equivalent to "git branch [-f] name", which clearly was not
what we wanted to say.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In hash_object(), we open a descriptor for each file to hash (whether we
got the filename from the command line or --stdin-paths), but never
close it. For the traditional code path, which feeds the result to
index_fd(), this is OK; it closes the descriptor for us.
But 5ba9a93b39 (hash-object: add --literally option, 2014-09-11) added a
second code path, which does not close the descriptor. There we need to
do so ourselves.
You can see the problem in a clone of git.git like this:
    $ git ls-files -s | grep ^100644 | cut -f2 |
          git hash-object --stdin-paths --literally >/dev/null
    fatal: could not open 'builtin/var.c' for reading: Too many open files
After this patch, it completes successfully. I didn't bother with a
test, as it's a pain to deal with descriptor limits portably, and the
fix is so trivial.
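For reference, the shape of the fix, sketched from the description
above (simplified; not the verbatim builtin/hash-object.c diff):

    static void hash_object_fd(int fd, const char *type, const char *path,
                               unsigned flags, int literally)
    {
            struct object_id oid;
            struct stat st;

            if (literally) {
                    hash_literally(&oid, fd, type, flags);
                    close(fd);      /* the close() this patch adds */
            } else {
                    if (fstat(fd, &st) < 0)
                            die_errno("unable to stat");
                    /* index_fd() closes the descriptor for the caller */
                    index_fd(&oid, fd, &st, type_from_string(type),
                             path, flags);
            }
    }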
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git branch -f name start" forces to recreate the named branch, but
the forcing does not defeat the "do not touch a branch that is
checked out elsewhere" safety valve.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When UTF is enabled for a PCRE match, the corresponding flags are
added to the pcre2_compile() call, but PCRE2_UCP wasn't included.
This prevents the meaning of the character classes from being
extended to include those new valid characters, and therefore results
in failed matches for expressions that rely on that extension, for
example:
    $ git grep -P '\bÆvar'
Add PCRE2_UCP so that \w will include Æ and therefore \b could
correctly match the beginning of that word.
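A minimal standalone sketch of the change at the pcre2_compile() call
site (not git's actual grep.c):

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>

    /* With PCRE2_UTF alone, \w and \b keep their ASCII-only meaning;
     * PCRE2_UCP extends them to all Unicode word characters, so
     * '\bÆvar' can match. */
    static pcre2_code *compile_utf_pattern(const char *pat)
    {
            int errcode;
            PCRE2_SIZE erroff;

            return pcre2_compile((PCRE2_SPTR)pat, PCRE2_ZERO_TERMINATED,
                                 PCRE2_UTF | PCRE2_UCP,
                                 &errcode, &erroff, NULL);
    }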
This has an impact on performance that has been estimated to be
between 20% and 40%, and that is shown through the added performance
test.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Acked-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'git branch --recurse-submodules start from-here' fails if any submodule
present in 'from-here' is not yet cloned (under
submodule.propagateBranches=true). We then give this advice:
"You may try updating the submodules using 'git checkout from-here && git submodule update --init'"
If 'submodule.recurse' is set, 'git checkout from-here' will also fail,
since it will try to recursively check out the submodules.
Improve the advice by adding '--no-recurse-submodules' to the checkout
command.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Reviewed-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since c879daa237 (Make hash-object more robust against malformed
objects, 2011-02-05), we've done some rudimentary checks against objects
we're about to write by running them through our usual parsers for
trees, commits, and tags.
These parsers catch some problems, but they are not nearly as careful as
the fsck functions (which makes sense; the parsers are designed to be
fast and forgiving, bailing only when the input is unintelligible). We
are better off doing the more thorough fsck checks when writing objects.
Doing so at write time is much better than writing garbage only to find
out later (after building more history atop it!) that fsck complains
about it, or hosts with transfer.fsckObjects reject it.
This is obviously going to be a user-visible behavior change, and the
test changes earlier in this series show the scope of the impact. But
I'd argue that this is OK:
- the documentation for hash-object is already vague about which
checks we might do, saying that --literally will allow "any
garbage[...] which might not otherwise pass standard object parsing
or git-fsck checks". So we are already covered under the documented
behavior.
- users don't generally run hash-object anyway. There are a lot of
spots in the tests that needed to be updated because creating
garbage objects is something that Git's tests disproportionately do.
- it's hard to imagine anyone thinking the new behavior is worse. Any
object we reject would be a potential problem down the road for the
user. And if they really want to create garbage, --literally is
already the escape hatch they need.
Note that the change here is actually in index_mem(), which handles the
HASH_FORMAT_CHECK flag passed by hash-object. That flag is also used by
"git-replace --edit" to sanity-check the result. Covering that with more
thorough checks likewise seems like a good thing.
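The change is roughly of this shape (a sketch; the exact code and
helper names in object-file.c may differ):

    /* in index_mem(), when HASH_FORMAT_CHECK is set */
    if (flags & HASH_FORMAT_CHECK) {
            struct fsck_options opts = FSCK_OPTIONS_STRICT;

            opts.error_func = hash_format_check_report;
            if (fsck_buffer(null_oid(), type, buf, size, &opts))
                    die(_("refusing to create malformed object"));
            fsck_finish(&opts);
    }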
Besides being more thorough, there are a few other bonuses:
- we get rid of some questionable stack allocations of object structs.
These don't seem to currently cause any problems in practice, but
they subtly violate some of the assumptions made by the rest of the
code (e.g., the "struct commit" we put on the stack and
zero-initialize will not have a proper index from
alloc_commit_index()).
- likewise, those parsed object structs are the source of some small
memory leaks
- the resulting messages are much better. For example:
    [before]
    $ echo 'tree 123' | git hash-object -t commit --stdin
    error: bogus commit object 0000000000000000000000000000000000000000
    fatal: corrupt commit

    [after]
    $ echo 'tree 123' | git.compile hash-object -t commit --stdin
    error: object fails fsck: badTreeSha1: invalid 'tree' line format - bad sha1
    fatal: refusing to create malformed object
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fsck code has been slowly moving away from requiring an object
struct in commits like 103fb6d43b (fsck: accept an oid instead of a
"struct tag" for fsck_tag(), 2019-10-18), c5b4269b57 (fsck: accept an
oid instead of a "struct commit" for fsck_commit(), 2019-10-18), etc.
However, the only external interface that fsck.c provides is
fsck_object(), which requires an object struct, then promptly discards
everything except its oid and type. Let's factor out the post-discard
part of that function as fsck_buffer(), leaving fsck_object() as a thin
wrapper around it. That will provide more flexibility for callers which
may not have a struct.
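The wrapper then becomes quite thin; roughly (a sketch, not the
verbatim fsck.c diff):

    int fsck_object(struct object *obj, void *data, unsigned long size,
                    struct fsck_options *options)
    {
            if (!obj)
                    return report(options, NULL, OBJ_NONE,
                                  FSCK_MSG_BAD_OBJECT_SHA1,
                                  "no valid object to fsck");
            return fsck_buffer(&obj->oid, obj->type, data, size, options);
    }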
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many test scripts use hash-object to create malformed objects to see how
we handle the results in various commands. In some cases we already have
to use "hash-object --literally", because it does some rudimentary
quality checks. But let's use "--literally" more consistently to
future-proof these tests against hash-object learning to be more
careful.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We intentionally invalidate the signature of a tag by switching its tag
name from "seventh" to "7th forged". However, the latter is not a valid
tag name because it contains a space. This doesn't currently affect the
test, but we're better off using something syntactically valid. That
reduces the number of possible failure modes in the test, and
future-proofs us if git hash-object gets more picky about its input.
The t7031 script, which was mostly copied from t7030, has the same
problem, so we'll fix it, too.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fake objects in t1006 use dummy timestamps like "0000000000 +0000".
While this does make them look more like normal timestamps (which,
unless it is 1970, have many digits), it actually violates our fsck
checks, which complain about zero-padded timestamps.
This doesn't currently break anything, but let's future-proof our tests
against a version of hash-object which is a little more careful about
its input. We don't actually care about the exact values here (and in
fact, the helper functions in this script end up removing the timestamps
anyway, so we don't even have to adjust other parts of the tests).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tests in t1007 for detecting malformed objects have two
anachronisms:
- they use "sha1" instead of "oid" in variable names, even though the
script as a whole has been adapted to handle sha256
- they use test_i18ngrep, which is no longer necessary
Since we'll be adding a new similar test, let's clean these up so they
are all consistently using the modern style.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With Asciidoctor, all of the '+' introduced in a797c0ea04 ("cat-file:
add mailmap support to --batch-check option", 2022-12-20) render
literally rather than functioning as list continuations. With asciidoc,
this renders just fine. It's not too surprising that there is room for
ambiguity and surprises here, since we have lists within lists.
Simply replacing all of these '+' with empty lines makes this render
fine using both tools. Except, in the third hunk, where after this inner
'*' list ends, we want to continue with more contents of the outer list
item (`--batch-command=<format>`). We can solve any ambiguity here and
make this clear to both tools by wrapping the inner list in an open
block (using "--").
For consistency, let's wrap all three of these inner lists from
a797c0ea04 in open blocks. This also future-proofs us a little -- if we
ever gain more contents after any of those first two lists, as we did
already in a797c0ea04 for the third list, we're prepared and should
render fine with both asciidoc and Asciidoctor from the start.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the "repo" member was added to "the_index" in [1] the
repo_read_index() was made to populate it, but the unpopulated
"the_index" variable didn't get the same treatment.
Let's do that in initialize_the_repository() when we set it up, and
likewise for all of the current callers initialized an empty "struct
index_state".
This simplifies code that needs to deal with "the_index" or a custom
"struct index_state"; we no longer need to second-guess this part of
the "index_state" deep in the stack. A recent example of such
second-guessing is the "istate->repo ? istate->repo : the_repository"
code in [2]. We can now simply use "istate->repo".
We're doing this by making use of the INDEX_STATE_INIT() macro (and
corresponding function) added in [3], which now have mandatory "repo"
arguments.
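For reference, a sketch of the macro/function shape from [3], with
the now-mandatory "repo" argument (from memory, not a verbatim quote
of cache.h / read-cache.c):

    #define INDEX_STATE_INIT(r) { \
            .repo = (r), \
    }

    void index_state_init(struct index_state *istate, struct repository *r)
    {
            struct index_state blank = INDEX_STATE_INIT(r);
            memcpy(istate, &blank, sizeof(*istate));
    }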
Because we now call index_state_init() in repository.c's
initialize_the_repository() we don't need to handle the case where we
have a "repo->index" whose "repo" member doesn't match the "repo"
we're setting up, i.e. the "Complete the double-reference" code in
repo_read_index() being altered here. That logic was originally added
in [1], and was working around the lack of what we now have in
initialize_the_repository().
For "fsmonitor-settings.c" we can remove the initialization of a NULL
"r" argument to "the_repository". This was added back in [4], and was
needed at the time for callers that would pass us the "r" from an
"istate->repo". Before this change such a change to
"fsmonitor-settings.c" would segfault all over the test suite (e.g. in
t0002-gitfile.sh).
This change has wider eventual implications for
"fsmonitor-settings.c". The reason the other lazy loading behavior in
it is required (starting with "if (!r->settings.fsmonitor) ...") is
that the previously passed "r" could be "NULL".
I have other local changes on top of this which move its configuration
reading to "prepare_repo_settings()" in "repo-settings.c", as we could
now start to rely on it being called for our "r". But let's leave all
of that for now, and narrowly remove this particular part of the
lazy-loading.
1. 1fd9ae517c (repository: add repo reference to index_state,
2021-01-23)
2. ee1f0c242e (read-cache: add index.skipHash config option,
2023-01-06)
3. 2f6b1eb794 (cache API: add a "INDEX_STATE_INIT" macro/function,
add release_index(), 2023-01-12)
4. 1e0ea5c431 (fsmonitor: config settings are repository-specific,
2022-03-25)
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ab/cache-api-cleanup:
cache API: add a "INDEX_STATE_INIT" macro/function, add release_index()
read-cache.c: refactor set_new_index_sparsity() for subsequent commit
sparse-index API: BUG() out on NULL ensure_full_index()
sparse-index.c: expand_to_path() can assume non-NULL "istate"
builtin/difftool.c: { 0 }-initialize rather than using memset()
The CURLOPT_PROTOCOLS (and matching CURLOPT_REDIR_PROTOCOLS) flag was
deprecated in curl 7.85.0, and using it generates compiler warnings as of
curl 7.87.0. The path forward is to use CURLOPT_PROTOCOLS_STR, but we
can't just do so unilaterally, as it was only introduced less than a
year ago in 7.85.0.
Until that version becomes ubiquitous, we have to either disable the
deprecation warning or conditionally use the "STR" variant on newer
versions of libcurl. This patch switches to the new variant, which is
nice for two reasons:
- we don't have to worry that silencing curl's deprecation warnings
might cause us to miss other more useful ones
- we'd eventually want to move to the new variant anyway, so this gets
us set up (albeit with some extra ugly boilerplate for the
conditional)
There are a lot of ways to split up the two cases. One way would be to
abstract the storage type (strbuf versus a long), how to append
(strbuf_addstr vs bitwise OR), how to initialize, which CURLOPT to use,
and so on. But the resulting code looks pretty magical:
    GIT_CURL_PROTOCOL_TYPE allowed = GIT_CURL_PROTOCOL_TYPE_INIT;
    if (...http is allowed...)
            GIT_CURL_PROTOCOL_APPEND(&allowed, "http", CURLOPT_HTTP);
and you end up with more "#define GIT_CURL_PROTOCOL_TYPE" macros than
actual code.
On the other end of the spectrum, we could just implement two separate
functions, one that handles a string list and one that handles bits. But
then we end up repeating our list of protocols (http, https, ftp, ftps).
This patch takes the middle ground. The run-time code is always there to
handle both types, and we just choose which one to feed to curl.
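Sketched concretely (illustrative names and a trimmed protocol list,
not the exact http.c code):

    #include <string.h>
    #include <curl/curl.h>

    static void set_allowed_protocols(CURL *c, int allow_http, int allow_https)
    {
            long bits = 0;
            char str[64] = "";

            /* always build both representations... */
            if (allow_http) {
                    bits |= CURLPROTO_HTTP;
                    strcat(str, "http,");
            }
            if (allow_https) {
                    bits |= CURLPROTO_HTTPS;
                    strcat(str, "https,");
            }
            if (*str)
                    str[strlen(str) - 1] = '\0'; /* drop trailing comma */

            /* ...and feed curl whichever one it understands */
    #if LIBCURL_VERSION_NUM >= 0x075500 /* 7.85.0 */
            curl_easy_setopt(c, CURLOPT_PROTOCOLS_STR, str);
    #else
            curl_easy_setopt(c, CURLOPT_PROTOCOLS, bits);
    #endif
    }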
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The IOCTLFUNCTION option has been deprecated, and generates a compiler
warning in recent versions of curl. We can switch to using SEEKFUNCTION
instead. It was added in 2008 via curl 7.18.0; our INSTALL file already
indicates we require at least curl 7.19.4.
But there's one catch: curl says we should use CURL_SEEKFUNC_{OK,FAIL},
and those didn't arrive until 7.19.5. One workaround would be to use a
bare 0/1 here (or define our own macros). But let's just bump the
minimum required version to 7.19.5. That version is only a minor version
bump from our existing requirement, and is only a 2 month time bump for
versions that are almost 13 years old. So it's not likely that anybody
cares about the distinction.
Switching means we have to rewrite the ioctl functions into seek
functions. In some ways they are simpler (seeking is the only
operation), but in some ways more complex (the ioctl allowed only a full
rewind, but now we can seek to arbitrary offsets).
Curl will only ever use SEEK_SET (per their documentation), so I didn't
bother implementing anything else, since it would naturally be
completely untested. This seems unlikely to change, but I added an
assertion just in case.
Likewise, I doubt curl will ever try to seek outside of the buffer sizes
we've told it, but I erred on the defensive side here, rather than do an
out-of-bounds read.
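A sketch of such a seek callback (the buffer struct here is
illustrative; the real code lives in http.c):

    #include <stdio.h>          /* SEEK_SET */
    #include <curl/curl.h>

    struct buffer {
            const char *buf;
            size_t len;
            size_t posn;
    };

    static int seek_buffer(void *clientp, curl_off_t offset, int origin)
    {
            struct buffer *buffer = clientp;

            if (origin != SEEK_SET)
                    return CURL_SEEKFUNC_FAIL; /* curl documents SEEK_SET only */
            if (offset < 0 || (size_t)offset >= buffer->len)
                    return CURL_SEEKFUNC_FAIL; /* refuse out-of-bounds seeks */
            buffer->posn = offset;
            return CURL_SEEKFUNC_OK;
    }

It is registered via CURLOPT_SEEKFUNCTION and CURLOPT_SEEKDATA, in
place of the old IOCTLFUNCTION/IOCTLDATA pair.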
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>