Many commands learned to pay attention to submodule.recurse
configuration.
* sb/submodule-blanket-recursive:
builtin/fetch.c: respect 'submodule.recurse' option
builtin/push.c: respect 'submodule.recurse' option
builtin/grep.c: respect 'submodule.recurse' option
Introduce 'submodule.recurse' option for worktree manipulators
submodule loading: separate code path for .gitmodules and config overlay
reset/checkout/read-tree: unify config callback for submodule recursion
submodule test invocation: only pass additional arguments
submodule recursing: do not write a config variable twice
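With that in place, a user can opt in for all commands that honor the
setting with a single knob, for example:

    git config submodule.recurse true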
Just because the files backend can't retain reflogs for deleted
references is no reason that they shouldn't be supported by the
virtual method interface. Also, `delete_ref()` and `refs_delete_ref()`
have already gained `msg` parameters. Now let's add them to
`delete_refs()` and `refs_delete_refs()`.
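The resulting declarations would look roughly like this (a sketch,
not the verbatim diff):

    int delete_refs(const char *msg, struct string_list *refnames,
                    unsigned int flags);
    int refs_delete_refs(struct ref_store *refs, const char *msg,
                         struct string_list *refnames, unsigned int flags);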
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert lookup_commit, lookup_commit_or_die,
lookup_commit_reference, and lookup_commit_reference_gently to take
struct object_id arguments.
Introduce a temporary in parse_object_buffer in order to convert this
function. This is required because converting parse_object and
parse_object_buffer would in turn require converting
lookup_commit_reference_gently and lookup_commit_or_die. Without the
temporary, lookup_commit_or_die would have to take a
struct object_id * while lookup_commit still took unsigned char *,
leaving a confusing and hard-to-use interface.
parse_object_buffer will lose this temporary in a later patch.
This commit was created with manual changes to commit.c, commit.h, and
object.c, plus the following semantic patch:
@@
expression E1, E2;
@@
- lookup_commit_reference_gently(E1.hash, E2)
+ lookup_commit_reference_gently(&E1, E2)
@@
expression E1, E2;
@@
- lookup_commit_reference_gently(E1->hash, E2)
+ lookup_commit_reference_gently(E1, E2)
@@
expression E1;
@@
- lookup_commit_reference(E1.hash)
+ lookup_commit_reference(&E1)
@@
expression E1;
@@
- lookup_commit_reference(E1->hash)
+ lookup_commit_reference(E1)
@@
expression E1;
@@
- lookup_commit(E1.hash)
+ lookup_commit(&E1)
@@
expression E1;
@@
- lookup_commit(E1->hash)
+ lookup_commit(E1)
@@
expression E1, E2;
@@
- lookup_commit_or_die(E1.hash, E2)
+ lookup_commit_or_die(&E1, E2)
@@
expression E1, E2;
@@
- lookup_commit_or_die(E1->hash, E2)
+ lookup_commit_or_die(E1, E2)
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Conversion from unsigned char [20] to struct object_id continues.
* bc/object-id:
Documentation: update and rename api-sha1-array.txt
Rename sha1_array to oid_array
Convert sha1_array_for_each_unique and for_each_abbrev to object_id
Convert sha1_array_lookup to take struct object_id
Convert remaining callers of sha1_array_lookup to object_id
Make sha1_array_append take a struct object_id *
sha1-array: convert internal storage for struct sha1_array to object_id
builtin/pull: convert to struct object_id
submodule: convert check_for_new_submodule_commits to object_id
sha1_name: convert disambiguate_hint_fn to take object_id
sha1_name: convert struct disambiguate_state to object_id
test-sha1-array: convert most code to struct object_id
parse-options-cb: convert sha1_array_append caller to struct object_id
fsck: convert init_skiplist to struct object_id
builtin/receive-pack: convert portions to struct object_id
builtin/pull: convert portions to struct object_id
builtin/diff: convert to struct object_id
Convert GIT_SHA1_RAWSZ used for allocation to GIT_MAX_RAWSZ
Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ
Define new hash-size constants for allocating memory
Part of the reflog content comes from the environment, which
can be much larger than our fixed buffer. Let's use a heap
buffer so we avoid truncating it.
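A minimal sketch of the shape of the fix, using the strbuf API (the
variable names here are illustrative, not the exact code):

    struct strbuf msg = STRBUF_INIT;
    strbuf_addstr(&msg, reflog_action);  /* may be arbitrarily long */
    /* ... use msg.buf where the fixed-size char buf[] used to be ... */
    strbuf_release(&msg);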
Signed-off-by: Jeff King <peff@peff.net>
All of the callers of this function have been converted, so convert this
function and update the callers. This function also calls
sha1_array_append, which we'll convert shortly.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With anticipatory tweaking for remotes defined in ~/.gitconfig
(e.g. "remote.origin.prune" set to true, even though there may or
may not actually be an "origin" remote defined in a particular Git
repository), "git remote rename" and other commands misinterpreted
the configuration and behaved as if such a non-existing remote
actually existed.
* js/remote-rename-with-half-configured-remote:
remote rename: more carefully determine whether a remote is configured
remote rename: demonstrate a bogus "remote exists" bug
One of the really nice features of the ~/.gitconfig file is that users
can override defaults with their own preferred settings for all of
their repositories.
One such default that some users like to override is whether the
"origin" remote gets auto-pruned or not. The user would simply call
git config --global remote.origin.prune true
and from now on all "origin" remotes would be pruned automatically when
fetching into the local repository.
There is just one catch: now Git thinks that the "origin" remote is
configured, even if the repository config has no [remote "origin"]
section at all, as it does not realize that the "prune" setting was
configured globally and that there really is no "origin" remote
configured in this repository.
That is a problem e.g. when renaming a remote to a new name, when Git
may be fooled into thinking that there is already a remote of that new
name.
Let's fix this by paying more attention to *where* the remote settings
came from: if they are configured in the local repository config, we
must not overwrite them. If they were configured elsewhere, we cannot
overwrite them to begin with, as we only write the repository config.
There is only one caller of remote_is_configured() (in `git fetch`) that
may want to take remotes into account even if they were configured
outside the repository config; all other callers essentially try to
prevent the Git command from overwriting settings in the repository
config.
To accommodate that fact, the remote_is_configured() function now
requires a parameter that states whether the caller is interested in all
remotes, or only in those that were configured in the repository config.
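In code, the updated interface would look roughly like this (sketch):

    /* remote.h */
    int remote_is_configured(struct remote *remote, int in_repo);

Callers that rewrite the repository config pass 1 for in_repo, while
"git fetch" passes 0 to also honor remotes configured elsewhere.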
Many thanks to Jeff King whose tireless review helped with settling for
nothing less than the current strategy.
This fixes https://github.com/git-for-windows/git/issues/888
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Code cleanup to avoid using redundant refspecs while fetching with
the --tags option.
* jt/fetch-no-redundant-tag-fetch-map:
fetch: do not redundantly calculate tag refmap
"deepen by excluding" does not make sense because excluding a revision
does not deepen a repository; it makes the repository more shallow.
Signed-off-by: Alex Henrie <alexhenrie24@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/fetch.c redundantly calculates refmaps for tags twice. Remove
the first calculation.
This is only a code simplification and a slight performance
improvement - the result is unchanged, as the redundant refmaps are
subsequently removed by the invocation of "ref_remove_duplicates"
anyway.
This was introduced in commit c5a84e9 ("fetch --tags: fetch tags *in
addition to* other stuff", 2013-10-29) when modifying the effect of the
--tags parameter to "git fetch". The refmap-for-tag calculation was
copied instead of moved.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching from a remote that has many tags that are irrelevant
to the branches we are following, we used to waste way too many
cycles by checking too carefully whether the object pointed at by a
tag (that we are not going to fetch!) exists in our repository.
* jk/fetch-quick-tag-following:
fetch: use "quick" has_sha1_file for tag following
"git push" and "git fetch" reports from what old object to what new
object each ref was updated, using abbreviated refnames, and they
attempt to align the columns for this and other pieces of
information. The way these codepaths compute how many display
columns to allocate for the object names portion of this output has
been updated to match the recent "auto scale the default
abbreviation length" change.
* jc/abbrev-auto:
transport: compute summary-width dynamically
transport: allow summary-width to be computed dynamically
fetch: pass summary_width down the callchain
transport: pass summary_width down the callchain
Allow the default abbreviation length, which has historically been
7, to scale as the repository grows. The logic suggests using 12
hexdigits for the Linux kernel, and 9 to 10 for Git itself.
* lt/abbrev-auto:
abbrev: auto size the default abbreviation
abbrev: prepare for new world order
abbrev: add FALLBACK_DEFAULT_ABBREV to prepare for auto sizing
Now that we have identified three callchains that have a set of refs
whose <old, new> object names they want to show in aligned output, we
can replace their references to the constant TRANSPORT_SUMMARY_WIDTH
with a call to a helper function, transport_summary_width(), that
takes the set of refs as a parameter. This step does not yet iterate
over the refs to compute the width; that is left to the next step.
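At this step the helper is just a thin wrapper (sketch):

    int transport_summary_width(const struct ref *refs)
    {
            return TRANSPORT_SUMMARY_WIDTH;  /* not yet computed from refs */
    }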
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The leaf function on the "fetch" side that uses TRANSPORT_SUMMARY_WIDTH
constant is builtin/fetch.c::format_display() and it has two distinct
callchains. The one that reports the primary result of fetch originates
at store_updated_refs(); the other one that reports the pruning of
the remote-tracking refs originates at prune_refs().
Teach these two places to pass summary_width down the callchain,
just like we did for the "push" side in the previous commit.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we auto-follow tags in a fetch, we look at all of the
tags advertised by the remote and fetch ones where we don't
already have the tag, but we do have the object it peels to.
This involves a lot of calls to has_sha1_file(), some of
which we can reasonably expect to fail. Since 45e8a74
(has_sha1_file: re-check pack directory before giving up,
2013-08-30), this may cause many calls to
reprepare_packed_git(), which is potentially expensive.
This has gone unnoticed for several years because it
requires a fairly unique setup to matter:
1. You need to have a lot of packs on the client side to
make reprepare_packed_git() expensive (the most
expensive part is finding duplicates in an unsorted
list, which is currently quadratic).
2. You need a large number of tag refs on the server side
that are candidates for auto-following (i.e., that the
client doesn't have). Each one triggers a re-read of
the pack directory.
3. Under normal circumstances, the client would
auto-follow those tags and after one large fetch, (2)
would no longer be true. But if those tags point to
history which is disconnected from what the client
otherwise fetches, then it will never auto-follow, and
those candidates will impact it on every fetch.
So when all three are true, each fetch pays an extra
O(nr_tags * nr_packs^2) cost, mostly in string comparisons
on the pack names. This was exacerbated by 47bf4b0
(prepare_packed_git_one: refactor duplicate-pack check,
2014-06-30) which uses a slightly more expensive string
check, under the assumption that the duplicate check doesn't
happen very often (and it shouldn't; the real problem here
is how often we are calling reprepare_packed_git()).
This patch teaches fetch to use HAS_SHA1_QUICK to sacrifice
accuracy for speed, in cases where we might be racy with a
simultaneous repack. This is similar to the fix in 0eeb077
(index-pack: avoid excessive re-reading of pack directory,
2015-06-09). As with that case, it's OK for has_sha1_file() to
occasionally say "no I don't have it" when we do, because the worst
case is not a corruption, but simply that we may fail to auto-follow
a tag that points to it.
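The change amounts to something like this in the tag-following code
(illustrative control flow, not the verbatim diff):

    /* before: a miss may trigger an expensive reprepare_packed_git() */
    if (!has_sha1_file(ref->old_sha1))
            continue;  /* do not auto-follow this tag */

    /* after: accept occasional false negatives in exchange for speed */
    if (!has_sha1_file_with_flags(ref->old_sha1, HAS_SHA1_QUICK))
            continue;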
Here are results from the included perf script, which sets
up a situation similar to the one described above:
Test            HEAD^               HEAD
----------------------------------------------------------
5550.4: fetch   11.21(10.42+0.78)   0.08(0.04+0.02) -99.3%
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing "git fetch --depth=<n>" option was hard to use
correctly when making the history of an existing shallow clone
deeper. A new option, "--deepen=<n>", has been added to make this
easier to use. "git clone" also learned "--shallow-since=<date>"
and "--shallow-exclude=<tag>" options to make it easier to specify
"I am interested only in the recent N months worth of history" and
"Give me only the history since that version".
* nd/shallow-deepen: (27 commits)
fetch, upload-pack: --deepen=N extends shallow boundary by N commits
upload-pack: add get_reachable_list()
upload-pack: split check_unreachable() in two, prep for get_reachable_list()
t5500, t5539: tests for shallow depth excluding a ref
clone: define shallow clone boundary with --shallow-exclude
fetch: define shallow boundary with --shallow-exclude
upload-pack: support define shallow boundary by excluding revisions
refs: add expand_ref()
t5500, t5539: tests for shallow depth since a specific date
clone: define shallow clone boundary based on time with --shallow-since
fetch: define shallow boundary with --shallow-since
upload-pack: add deepen-since to cut shallow repos based on time
shallow.c: implement a generic shallow boundary finder based on rev-list
fetch-pack: use a separate flag for fetch in deepening mode
fetch-pack.c: mark strings for translating
fetch-pack: use a common function for verbose printing
fetch-pack: use skip_prefix() instead of starts_with()
upload-pack: move rev-list code out of check_non_tip()
upload-pack: make check_non_tip() clean things up on error
upload-pack: tighten number parsing at "deepen" lines
...
We'll be introducing a new way to decide the default abbreviation
length by initialising DEFAULT_ABBREV to -1 to signal the first call
to "find unique abbreviation" codepath to compute a reasonable value
based on the number of objects we have to avoid collisions.
We have long relied on DEFAULT_ABBREV being a positive concrete
value that is used as the abbreviation length when no extra
configuration or command line option has overridden it. Some
codepaths want to use such a positive concrete default value
even before making their first request to actually trigger the
computation for the auto sized default.
Introduce FALLBACK_DEFAULT_ABBREV and use it in the code that
attempts to align the report from "git fetch". For now, this
macro is also used to initialize the default_abbrev variable,
but the auto-sizing code will use -1 and then use the value of
FALLBACK_DEFAULT_ABBREV as the starting point of auto-sizing.
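A sketch of the arrangement this sets up:

    #define FALLBACK_DEFAULT_ABBREV 7

    int default_abbrev = FALLBACK_DEFAULT_ABBREV;
    /* a later patch initializes this to -1 so that the first
     * abbreviation request auto-sizes based on the object count */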
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git push" and "git clone" learned to give better progress meters
to the end user who is waiting on the terminal.
* jk/push-progress:
receive-pack: send keepalives during quiet periods
receive-pack: turn on connectivity progress
receive-pack: relay connectivity errors to sideband
receive-pack: turn on index-pack resolving progress
index-pack: add flag for showing delta-resolution progress
clone: use a real progress meter for connectivity check
check_connected: add progress flag
check_connected: relay errors to alternate descriptor
check_everything_connected: use a struct with named options
check_everything_connected: convert to argv_array
rev-list: add optional progress reporting
check_everything_connected: always pass --quiet to rev-list
The API to iterate over all the refs (i.e. for_each_ref(), etc.)
has been revamped.
* mh/ref-iterators:
for_each_reflog(): reimplement using iterators
dir_iterator: new API for iterating over a directory tree
for_each_reflog(): don't abort for bad references
do_for_each_ref(): reimplement using reference iteration
refs: introduce an iterator interface
ref_resolves_to_object(): new function
entry_resolves_to_object(): rename function from ref_resolves_to_object()
get_ref_cache(): only create an instance if there is a submodule
remote rm: handle symbolic refs correctly
delete_refs(): add a flags argument
refs: use name "prefix" consistently
do_for_each_ref(): move docstring to the header file
refs: remove unnecessary "extern" keywords
The number of variants of check_everything_connected has
grown over the years, so that the "real" function takes
several possibly-zero, possibly-NULL arguments. We hid the
complexity behind some wrapper functions, but this doesn't
scale well when we want to add new options.
If we add more wrapper variants to handle the new options,
then we can get a combinatorial explosion when those options
might be used together (right now nobody wants to use both
"shallow" and "transport" together, so we get by with just a
few wrappers).
If instead we add new parameters to each function, each of
which can have a default value, then callers who want the
defaults end up with confusing invocations like:
check_everything_connected(fn, 0, data, -1, 0, NULL);
where it is unclear which parameter is which (and every caller needs
to be updated when we add new options).
Instead, let's add a struct to hold all of the optional
parameters. This is a little more verbose for the callers
(who have to declare the struct and fill it in), but it
makes their code much easier to follow, because every option
is named as it is set (and unused options do not have to be
mentioned at all).
Note that we could also stick the iteration function and its
callback data into the option struct, too. But since those are
required for each call, keeping them as separate parameters lets
very simple callers just pass "NULL" for the options and not
worry about the struct at all.
While we're touching each site, let's also rename the
function to check_connected(). The existing name was quite
long, and not all of the wrappers even used the full name.
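The resulting interface looks roughly like this (a sketch; member
list abridged):

    struct check_connected_options {
            int quiet;                   /* avoid printing errors to stderr */
            const char *shallow_file;    /* --shallow-file for the rev-list child */
            struct transport *transport; /* transport whose objects we check */
    };
    #define CHECK_CONNECTED_INIT { 0 }

    int check_connected(sha1_iterate_fn fn, void *cb_data,
                        struct check_connected_options *opt);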
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Improve the look of the way "git fetch" reports what happened to
each ref that was fetched.
* nd/fetch-ref-summary:
fetch: reduce duplicate in ref update status lines with placeholder
fetch: align all "remote -> local" output
fetch: change flag code for displaying tag update and deleted ref
fetch: refactor ref update status formatting code
git-fetch.txt: document fetch output
The ownership rule for the piece of memory that holds the references
to be fetched in "git fetch" was screwy, which has been cleaned up.
* km/fetch-do-not-free-remote-name:
builtin/fetch.c: don't free remote->name after fetch
In the "remote -> local" line, if either ref is a substring of the
other, the common part in the other string is replaced with "*". For
example
abc -> origin/abc
refs/pull/123/head -> pull/123
become
abc -> origin/*
refs/*/head -> pull/123
Activated with fetch.output=compact.
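For example:

    git config fetch.output compact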
For the record, this output is not perfect. A single giant ref can
push all refs very far to the right and likely be wrapped around. We
may have a few options:
- exclude these long lines smarter
- break the line after "->", exclude it from column width calculation
- implement a new format, { -> origin/}foo, which makes the problem
go away at the cost of being a bit harder to read
- reverse all the arrows so we have "* <- looong-ref", again still
hard to read.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do align "remote -> local" output by allocating 10 columns to
"remote". That produces aligned output only for short refs. An extra
pass is performed to find the longest remote ref name (that does not
produce a line longer than the terminal width) to produce better
aligned output.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes the fetch flag code consistent with push, where '-' means
deleted ref.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes it easier to change the formatting later. And it makes sure
translators cannot mess up format specifiers and break Git.
There are a couple of call sites where the length of the second
column is TRANSPORT_SUMMARY_WIDTH instead of being calculated by
TRANSPORT_SUMMARY(), which is now enforced. The result should be the
same because these call sites do not contain characters outside the
ASCII range.
The two strbuf_addf() calls instead of one are mostly to reduce
diff noise in a future patch where "ref -> ref" is reformatted
differently.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This will be useful for passing REF_NODEREF through.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make fetch's string_list of remote names own all of its string items
(strdup'ing when necessary) so that it can deallocate them safely
when clearing.
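A sketch of the ownership arrangement with the string_list API:

    struct string_list remotes = STRING_LIST_INIT_DUP; /* strdup()s items */
    string_list_append(&remotes, remote->name);        /* stores its own copy */
    ...
    string_list_clear(&remotes, 0); /* safe: frees only what the list owns */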
Signed-off-by: Keith McGuigan <kmcguigan@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In git-fetch, the --depth argument is always relative to the latest
remote refs. This makes it a bit difficult to cover the use case
where the user wants to make the shallow history, say, 3 levels
deeper. It would work if the remote refs have not moved yet, but
nobody can guarantee that, especially when that use case is performed
a couple of months after the last clone or "git fetch --depth". Also,
modifying the shallow boundary using --depth does not work well with
clones created by --since or --not.
This patch fixes that. A new argument --deepen=<N> will add <N> more (*)
parent commits to the current history regardless of where remote refs
are.
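For example, to grow an existing shallow history by three more
levels of parents:

    git fetch --deepen=3 origin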
Have/Want negotiation is still respected. So if remote refs move, the
server will send two chunks: one between "have" and "want" and another
to extend shallow history. In theory, the client could send no "want"s
in order to get the second chunk only. But the protocol does not allow
that. Either you send no want lines, which means ls-remote; or you
have to send at least one want line that carries "deepen-relative"
to the server.
The main work was done by Dongcan Jiang. I fixed it up here and there.
And of course all the bugs belong to me.
(*) We could even support --deepen=<N> where <N> is negative. In that
case we can cut some history from the shallow clone. This operation
(and --depth=<shorter depth>) does not require interaction with the
remote side (and is more complicated to implement as a result).
Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.
* nd/error-errno: (41 commits)
wrapper.c: use warning_errno()
vcs-svn: use error_errno()
upload-pack.c: use error_errno()
unpack-trees.c: use error_errno()
transport-helper.c: use error_errno()
sha1_file.c: use {error,die,warning}_errno()
server-info.c: use error_errno()
sequencer.c: use error_errno()
run-command.c: use error_errno()
rerere.c: use error_errno() and warning_errno()
reachable.c: use error_errno()
mailmap.c: use error_errno()
ident.c: use warning_errno()
http.c: use error_errno() and warning_errno()
grep.c: use error_errno()
gpg-interface.c: use error_errno()
fast-import.c: use error_errno()
entry.c: use error_errno()
editor.c: use error_errno()
diff-no-index.c: use error_errno()
...
A couple of newlines are also removed, because both error() and
error_errno() automatically append a newline.
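The typical shape of the conversion (illustrative before/after):

    /* before */
    error("cannot open %s: %s\n", path, strerror(errno));

    /* after - errno text and the trailing newline are added for us */
    error_errno("cannot open %s", path);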
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A major part of "git submodule update" has been ported to C to take
advantage of the recently added framework to run download tasks in
parallel.
* sb/submodule-parallel-update:
clone: allow an explicit argument for parallel submodule clones
submodule update: expose parallelism to the user
submodule helper: remove double 'fatal: ' prefix
git submodule update: have a dedicated helper for cloning
run_processes_parallel: rename parameters for the callbacks
run_processes_parallel: treat output of children as byte array
submodule update: direct error message to stderr
fetching submodules: respect `submodule.fetchJobs` config option
submodule-config: drop check against NULL
submodule-config: keep update strategy around
This allows configuring parallel fetching and updating without
passing a command line option.
This moves the responsibility for determining how many parallel
processes to start from builtin/fetch to submodule.c, as we need a
way to communicate "the user did not specify the number of parallel
processes on the command line" in the builtin fetch. The submodule
code takes care of the precedence (CLI > config > default).
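For example, a user could set the default in one place:

    git config submodule.fetchJobs 4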
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update various codepaths to avoid manually-counted malloc().
* jk/tighten-alloc: (22 commits)
ewah: convert to REALLOC_ARRAY, etc
convert ewah/bitmap code to use xmalloc
diff_populate_gitlink: use a strbuf
transport_anonymize_url: use xstrfmt
git-compat-util: drop mempcpy compat code
sequencer: simplify memory allocation of get_message
test-path-utils: fix normalize_path_copy output buffer size
fetch-pack: simplify add_sought_entry
fast-import: simplify allocation in start_packfile
write_untracked_extension: use FLEX_ALLOC helper
prepare_{git,shell}_cmd: use argv_array
use st_add and st_mult for allocation size computation
convert trivial cases to FLEX_ARRAY macros
use xmallocz to avoid size arithmetic
convert trivial cases to ALLOC_ARRAY
convert manual allocations to argv_array
argv-array: add detach function
add helpers for allocating flex-array structs
harden REALLOC_ARRAY and xcalloc against size_t overflow
tree-diff: catch integer overflow in combine_diff_path allocation
...
The internal API to interact with "remote.*" configuration
variables has been streamlined.
* tg/git-remote:
remote: use remote_is_configured() for add and rename
remote: actually check if remote exists
remote: simplify remote_is_configured()
remote: use parse_config_key
"git fetch" and friends that make network connections can now be
told to only use ipv4 (or ipv6).
* ew/force-ipv4:
connect & http: support -4 and -6 switches for remote operations
If our size computation overflows size_t, we may allocate a
much smaller buffer than we expected and overflow it. It's
probably impossible to trigger an overflow in most of these
sites in practice, but it is easy enough to convert their
additions and multiplications into overflow-checking
variants. This may be fixing real bugs, and it makes
auditing the code easier.
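The conversion pattern looks like this (illustrative):

    /* before: n * sizeof(*items) + 1 may silently wrap around */
    buf = xmalloc(n * sizeof(*items) + 1);

    /* after: st_mult()/st_add() die() on size_t overflow */
    buf = xmalloc(st_add(st_mult(n, sizeof(*items)), 1));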
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote_is_configured() function allows checking whether a remote
exists or not. The function, however, only works if remote_get()
wasn't called before it. In addition, it only checks remotes in the
configuration, not those from the remotes or branches files.
Make use of the origin member of struct remote instead, which indicates
where the remote comes from. It will be set to some value if the remote
is configured in any file in the repository, but is initialized to 0 if
the remote is only created in make_remote().
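After the simplification the check boils down to something like this
(sketch):

    int remote_is_configured(struct remote *remote)
    {
            return remote && remote->origin;
    }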
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sometimes it is necessary to force IPv4-only or IPv6-only operation
on networks where name lookups may return a non-routable address and
stall remote operations.
The ssh(1) command has equivalent switches, which we may pass when
we run it. There may be old ssh(1) implementations out there
that do not support these switches; they should report the
appropriate error in that case.
rsync support is untouched for now since it is deprecated and
scheduled to be removed.
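For example, to force IPv4 for a single remote operation:

    git fetch -4 origin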
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many codepaths that run "gc --auto" before exiting kept packfiles
mapped and left the file descriptors to them open, which was not
friendly to systems that cannot remove files that are open. They
now close the packs before doing so.
* js/close-packs-before-gc:
receive-pack: release pack files before garbage-collecting
merge: release pack files before garbage-collecting
am: release pack files before garbage-collecting
fetch: release pack files before garbage-collecting
Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
(e.g. COMMIT_EDITMSG) that is meant to be left after the command is
done. This however did not work well if the repository is set to
be shared with core.sharedRepository and the umask of the previous
user is tighter. They have been made to work better by calling
unlink(2) and retrying after fopen(3) fails with EPERM.
* js/fopen-harder:
Handle more file writes correctly in shared repos
commit: allow editing the commit message even in shared repos