refs: introduce an iterator interface

Currently, the API for iterating over references is via a family of
for_each_ref()-type functions that invoke a callback function for each
selected reference. All of these eventually call do_for_each_ref(),
which knows how to do one thing: iterate in parallel through two
ref_caches, one for loose and one for packed refs, giving loose
references precedence over packed refs. This is rather complicated code,
and it is quite specialized to the files backend. It also requires
callers to encapsulate their work into a callback function, which often
means that they have to define and use a "cb_data" struct to manage
their context.

The current design is already bursting at the seams, and it will become
even more awkward in the upcoming world of multiple reference storage
backends:

* Per-worktree vs. shared references are currently handled via a kludge
  in git_path() rather than by iterating over each part of the
  reference namespace separately and merging the results. This kludge
  will cease to work when we have multiple reference storage backends.

* The current scheme is inflexible. What if we sometimes want to bypass
  the ref_cache, or use it only for packed or only for loose refs? What
  if we want to store symbolic refs in one type of storage backend and
  non-symbolic ones in another?

In the future, each reference backend will need to define its own way
of iterating over references. The crux of the problem with the current
design is that it is impossible to compose for_each_ref()-style
iterations, because the flow of control is owned by the for_each_ref()
function. There is nothing that a caller can do but iterate through all
references in a single burst, so there is no way for it to interleave
references from multiple backends and present the result to the rest of
the world as a single compound backend.

This commit introduces a new iteration primitive for references: a
ref_iterator. A ref_iterator is a polymorphic object that a reference
storage backend can be asked to instantiate. There are three functions
that can be applied to a ref_iterator:

* ref_iterator_advance(): move to the next reference in the iteration
* ref_iterator_abort(): end the iteration before it is exhausted
* ref_iterator_peel(): peel the reference currently being looked at

Iterating using a ref_iterator leaves the flow of control in the hands
of the caller, which means that ref_iterators from multiple
sources (e.g., loose and packed refs) can be composed and presented to
the world as a single compound ref_iterator.

It also means that the backend code for implementing reference
iteration will sometimes be more complicated. For example, the
cache_ref_iterator (which iterates over a ref_cache) can't use the C
stack to recurse; instead, it must manage its own stack internally as
explicit data structures. There is also a lot of boilerplate connected
with object-oriented programming in C.

Eventually, end-user callers can be written in a more natural way,
managing their own flow of control rather than having to work via
callbacks. Since there will only be a few reference backends but there
are many consumers of this API, this is a good tradeoff.

More importantly, we gain composability, and especially the possibility
of writing interchangeable parts that can work with any ref_iterator.

For example, merge_ref_iterator implements a generic way of merging the
contents of any two ref_iterators. It is used to merge loose + packed
refs as part of the implementation of the files_ref_iterator. But it
will also be possible to use it to merge other pairs of reference
sources (e.g., per-worktree vs. shared refs).

Another example is prefix_ref_iterator, which can be used to trim a
prefix off the front of reference names before presenting them to the
caller (e.g., "refs/heads/master" -> "master").

In this patch, we introduce the iterator abstraction and many
utilities, and implement a reference iterator for the files ref storage
backend. (I've written several other obvious utilities, for example a
generic way to filter references being iterated over. These will
probably be useful in the future. But they are not needed for this
patch series, so I am not including them at this time.)

In a moment we will rewrite do_for_each_ref() to work via reference
iterators (allowing some special-purpose code to be discarded), and do
something similar for reflogs. In future patch series, we will expose
the ref_iterator abstraction in the public refs API so that callers can
use it directly.

Implementation note: I tried abstracting this a layer further to allow
generic iterators (over arbitrary types of objects) and generic
utilities like a generic merge_iterator. But the implementation in C
was very cumbersome, involving (in my opinion) too much boilerplate and
too much unsafe casting, some of which would have had to be done on the
caller side. However, I did put a few iterator-related constants in a
top-level header file, iterator.h, as they will be useful in a moment
to implement iteration over directory trees and possibly other types of
iterators in the future.

Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-06-18 12:15:15 +08:00
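
Since direct use of ref_iterators only becomes public API in a later
series, here is a minimal sketch of the caller-driven loop the three
primitives enable, using only functions defined in this file. How
`iter` is obtained is backend-specific, and `stop_after` is made up to
demonstrate ending the iteration early:

/*
 * Hedged sketch (not part of this patch): count references from any
 * ref_iterator, optionally stopping early. On ITER_DONE, ITER_ERROR,
 * or an explicit abort, the iterator frees itself.
 */
static int count_refs(struct ref_iterator *iter, int stop_after)
{
	int count = 0, ok;

	while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
		/* iter->refname, iter->oid, and iter->flags are now valid. */
		if (++count == stop_after)
			/* End early; abort() frees the iterator. */
			return ref_iterator_abort(iter) == ITER_DONE ? count : -1;
	}

	return ok == ITER_DONE ? count : -1;
}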

/*
 * Generic reference iterator infrastructure. See refs-internal.h for
 * documentation about the design and use of reference iterators.
 */

#include "cache.h"
#include "refs.h"
#include "refs/refs-internal.h"
#include "iterator.h"

int ref_iterator_advance(struct ref_iterator *ref_iterator)
{
	return ref_iterator->vtable->advance(ref_iterator);
}

int ref_iterator_peel(struct ref_iterator *ref_iterator,
		      struct object_id *peeled)
{
	return ref_iterator->vtable->peel(ref_iterator, peeled);
}

int ref_iterator_abort(struct ref_iterator *ref_iterator)
{
	return ref_iterator->vtable->abort(ref_iterator);
}

void base_ref_iterator_init(struct ref_iterator *iter,
			    struct ref_iterator_vtable *vtable)
{
	iter->vtable = vtable;
	iter->refname = NULL;
	iter->oid = NULL;
	iter->flags = 0;
}

void base_ref_iterator_free(struct ref_iterator *iter)
{
	/* Help make use-after-free bugs fail quickly: */
	iter->vtable = NULL;
	free(iter);
}

struct empty_ref_iterator {
	struct ref_iterator base;
};

static int empty_ref_iterator_advance(struct ref_iterator *ref_iterator)
{
	return ref_iterator_abort(ref_iterator);
}

static int empty_ref_iterator_peel(struct ref_iterator *ref_iterator,
				   struct object_id *peeled)
{
	die("BUG: peel called for empty iterator");
}

static int empty_ref_iterator_abort(struct ref_iterator *ref_iterator)
{
	base_ref_iterator_free(ref_iterator);
	return ITER_DONE;
}

static struct ref_iterator_vtable empty_ref_iterator_vtable = {
	empty_ref_iterator_advance,
	empty_ref_iterator_peel,
	empty_ref_iterator_abort
};

struct ref_iterator *empty_ref_iterator_begin(void)
{
	struct empty_ref_iterator *iter = xcalloc(1, sizeof(*iter));
	struct ref_iterator *ref_iterator = &iter->base;

	base_ref_iterator_init(ref_iterator, &empty_ref_iterator_vtable);
	return ref_iterator;
}

int is_empty_ref_iterator(struct ref_iterator *ref_iterator)
{
	return ref_iterator->vtable == &empty_ref_iterator_vtable;
}

struct merge_ref_iterator {
	struct ref_iterator base;

	struct ref_iterator *iter0, *iter1;

	ref_iterator_select_fn *select;
	void *cb_data;

	/*
	 * A pointer to iter0 or iter1 (whichever is supplying the
	 * current value), or NULL if advance has not yet been called.
	 */
	struct ref_iterator **current;
};

static int merge_ref_iterator_advance(struct ref_iterator *ref_iterator)
{
	struct merge_ref_iterator *iter =
		(struct merge_ref_iterator *)ref_iterator;
	int ok;

	if (!iter->current) {
		/* Initialize: advance both iterators to their first entries */
		if ((ok = ref_iterator_advance(iter->iter0)) != ITER_OK) {
			iter->iter0 = NULL;
			if (ok == ITER_ERROR)
				goto error;
		}
		if ((ok = ref_iterator_advance(iter->iter1)) != ITER_OK) {
			iter->iter1 = NULL;
			if (ok == ITER_ERROR)
				goto error;
		}
	} else {
		/*
		 * Advance the current iterator past the just-used
		 * entry:
		 */
		if ((ok = ref_iterator_advance(*iter->current)) != ITER_OK) {
			*iter->current = NULL;
			if (ok == ITER_ERROR)
				goto error;
		}
	}

	/* Loop until we find an entry that we can yield. */
	while (1) {
		struct ref_iterator **secondary;
		enum iterator_selection selection =
			iter->select(iter->iter0, iter->iter1, iter->cb_data);

		if (selection == ITER_SELECT_DONE) {
			return ref_iterator_abort(ref_iterator);
		} else if (selection == ITER_SELECT_ERROR) {
			ref_iterator_abort(ref_iterator);
			return ITER_ERROR;
		}

		if ((selection & ITER_CURRENT_SELECTION_MASK) == 0) {
			iter->current = &iter->iter0;
			secondary = &iter->iter1;
		} else {
			iter->current = &iter->iter1;
			secondary = &iter->iter0;
		}

		if (selection & ITER_SKIP_SECONDARY) {
			if ((ok = ref_iterator_advance(*secondary)) != ITER_OK) {
				*secondary = NULL;
				if (ok == ITER_ERROR)
					goto error;
			}
		}

		if (selection & ITER_YIELD_CURRENT) {
			iter->base.refname = (*iter->current)->refname;
			iter->base.oid = (*iter->current)->oid;
			iter->base.flags = (*iter->current)->flags;
			return ITER_OK;
		}
	}

error:
	ref_iterator_abort(ref_iterator);
	return ITER_ERROR;
}

static int merge_ref_iterator_peel(struct ref_iterator *ref_iterator,
				   struct object_id *peeled)
{
	struct merge_ref_iterator *iter =
		(struct merge_ref_iterator *)ref_iterator;

	if (!iter->current) {
		die("BUG: peel called before advance for merge iterator");
	}
	return ref_iterator_peel(*iter->current, peeled);
}

static int merge_ref_iterator_abort(struct ref_iterator *ref_iterator)
{
	struct merge_ref_iterator *iter =
		(struct merge_ref_iterator *)ref_iterator;
	int ok = ITER_DONE;

	if (iter->iter0) {
		if (ref_iterator_abort(iter->iter0) != ITER_DONE)
			ok = ITER_ERROR;
	}
	if (iter->iter1) {
		if (ref_iterator_abort(iter->iter1) != ITER_DONE)
			ok = ITER_ERROR;
	}
	base_ref_iterator_free(ref_iterator);
	return ok;
}

static struct ref_iterator_vtable merge_ref_iterator_vtable = {
	merge_ref_iterator_advance,
	merge_ref_iterator_peel,
	merge_ref_iterator_abort
};

struct ref_iterator *merge_ref_iterator_begin(
		struct ref_iterator *iter0, struct ref_iterator *iter1,
		ref_iterator_select_fn *select, void *cb_data)
{
	struct merge_ref_iterator *iter = xcalloc(1, sizeof(*iter));
	struct ref_iterator *ref_iterator = &iter->base;

	/*
	 * We can't do the same kind of is_empty_ref_iterator()-style
	 * optimization here as overlay_ref_iterator_begin() does,
	 * because we don't know the semantics of the select function.
	 * It might, for example, implement "intersect" by passing
	 * references through only if they exist in both iterators.
	 */

	base_ref_iterator_init(ref_iterator, &merge_ref_iterator_vtable);
	iter->iter0 = iter0;
	iter->iter1 = iter1;
	iter->select = select;
	iter->cb_data = cb_data;
	iter->current = NULL;
	return ref_iterator;
}
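
To illustrate why merge_ref_iterator_begin() can't assume anything
about the select function's semantics, here is a hedged sketch of the
"intersect" selector mentioned in the comment above. It is not part of
this patch; it assumes the ITER_CURRENT_SELECTION_{0,1} constants from
iterator.h, and the function name is made up:

/*
 * Hypothetical select function: yield only refnames present in both
 * iterators, discarding entries that are unique to either side.
 */
static enum iterator_selection intersect_iterator_select(
		struct ref_iterator *iter0, struct ref_iterator *iter1,
		void *cb_data)
{
	int cmp;

	if (!iter0 || !iter1)
		return ITER_SELECT_DONE;

	cmp = strcmp(iter0->refname, iter1->refname);
	if (cmp < 0)
		/* Only in iter0: discard it (iter1 is "current", no yield). */
		return ITER_CURRENT_SELECTION_1 | ITER_SKIP_SECONDARY;
	else if (cmp > 0)
		/* Only in iter1: discard it (iter0 is "current", no yield). */
		return ITER_CURRENT_SELECTION_0 | ITER_SKIP_SECONDARY;
	else
		/* In both: yield iter0's entry and discard iter1's. */
		return ITER_SELECT_0_SKIP_1;
}

A caller would then compose two sorted iterators via
merge_ref_iterator_begin(iter0, iter1, intersect_iterator_select, NULL).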

/*
 * A ref_iterator_select_fn that overlays the items from front on top
 * of those from back (like loose refs over packed refs). See
 * overlay_ref_iterator_begin().
 */
static enum iterator_selection overlay_iterator_select(
		struct ref_iterator *front, struct ref_iterator *back,
		void *cb_data)
{
	int cmp;

	if (!back)
		return front ? ITER_SELECT_0 : ITER_SELECT_DONE;
	else if (!front)
		return ITER_SELECT_1;

	cmp = strcmp(front->refname, back->refname);

	if (cmp < 0)
		return ITER_SELECT_0;
	else if (cmp > 0)
		return ITER_SELECT_1;
	else
		return ITER_SELECT_0_SKIP_1;
}

struct ref_iterator *overlay_ref_iterator_begin(
		struct ref_iterator *front, struct ref_iterator *back)
{
	/*
	 * Optimization: if one of the iterators is empty, return the
	 * other one rather than incurring the overhead of wrapping
	 * them.
	 */
	if (is_empty_ref_iterator(front)) {
		ref_iterator_abort(front);
		return back;
	} else if (is_empty_ref_iterator(back)) {
		ref_iterator_abort(back);
		return front;
	}

	return merge_ref_iterator_begin(front, back,
					overlay_iterator_select, NULL);
}

struct prefix_ref_iterator {
	struct ref_iterator base;

	struct ref_iterator *iter0;
	char *prefix;
	int trim;
};

static int prefix_ref_iterator_advance(struct ref_iterator *ref_iterator)
{
	struct prefix_ref_iterator *iter =
		(struct prefix_ref_iterator *)ref_iterator;
	int ok;

	while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
		if (!starts_with(iter->iter0->refname, iter->prefix))
			continue;

		if (iter->trim) {
			/*
			 * It is nonsense to trim off characters that
			 * you haven't already checked for via a
			 * prefix check (whether via this
			 * `prefix_ref_iterator` or upstream in
			 * `iter0`). So if there wouldn't be at least
			 * one character left in the refname after
			 * trimming, report it as a bug:
			 */
			if (strlen(iter->iter0->refname) <= iter->trim)
				die("BUG: attempt to trim too many characters");
			iter->base.refname = iter->iter0->refname + iter->trim;
		} else {
			iter->base.refname = iter->iter0->refname;
		}

		iter->base.oid = iter->iter0->oid;
		iter->base.flags = iter->iter0->flags;
		return ITER_OK;
	}

	iter->iter0 = NULL;
	if (ref_iterator_abort(ref_iterator) != ITER_DONE)
		return ITER_ERROR;
	return ok;
}

static int prefix_ref_iterator_peel(struct ref_iterator *ref_iterator,
				    struct object_id *peeled)
{
	struct prefix_ref_iterator *iter =
		(struct prefix_ref_iterator *)ref_iterator;

	return ref_iterator_peel(iter->iter0, peeled);
}

static int prefix_ref_iterator_abort(struct ref_iterator *ref_iterator)
{
	struct prefix_ref_iterator *iter =
		(struct prefix_ref_iterator *)ref_iterator;
	int ok = ITER_DONE;

	if (iter->iter0)
		ok = ref_iterator_abort(iter->iter0);
	free(iter->prefix);
	base_ref_iterator_free(ref_iterator);
	return ok;
}

static struct ref_iterator_vtable prefix_ref_iterator_vtable = {
	prefix_ref_iterator_advance,
	prefix_ref_iterator_peel,
	prefix_ref_iterator_abort
};

struct ref_iterator *prefix_ref_iterator_begin(struct ref_iterator *iter0,
					       const char *prefix,
					       int trim)
{
	struct prefix_ref_iterator *iter;
	struct ref_iterator *ref_iterator;

	if (!*prefix && !trim)
		return iter0; /* optimization: no need to wrap iterator */

	iter = xcalloc(1, sizeof(*iter));
	ref_iterator = &iter->base;

	base_ref_iterator_init(ref_iterator, &prefix_ref_iterator_vtable);

	iter->iter0 = iter0;
	iter->prefix = xstrdup(prefix);
	iter->trim = trim;

	return ref_iterator;
}
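
A hedged usage sketch for the commit message's "refs/heads/master" ->
"master" example, using only functions defined in this file (how the
underlying `iter` is obtained is backend-specific, and the wrapper name
is made up):

/*
 * Hypothetical usage (not in this patch): present branches by their
 * short names. The prefix acts as both a filter and the trimmed part,
 * so the trim length never exceeds what the prefix check verified.
 */
static struct ref_iterator *branch_name_iterator(struct ref_iterator *iter)
{
	const char *prefix = "refs/heads/";

	return prefix_ref_iterator_begin(iter, prefix, strlen(prefix));
}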

do_for_each_ref(): reimplement using reference iteration

Use the reference iterator interface to implement do_for_each_ref().
Delete a bunch of code supporting the old for_each_ref()
implementation. And now that do_for_each_ref() is generic code (it is
no longer tied to the files backend), move it to refs.c.

The implementation is via a new function, do_for_each_ref_iterator(),
which takes a reference iterator as argument and calls a callback
function for each of the references in the iterator.

This change requires the current_ref performance hack for peel_ref() to
be implemented via ref_iterator_peel() rather than peel_entry(),
because we don't have a ref_entry handy (it is hidden under three
layers: files_ref_iterator, merge_ref_iterator, and
cache_ref_iterator). So:

* do_for_each_ref_iterator() records the active iterator in
  current_ref_iter while it is running.

* peel_ref() checks whether current_ref_iter is pointing at the
  requested reference. If so, it asks the iterator to peel the
  reference (which it can do efficiently via its "peel" virtual
  function). For extra safety, we do the optimization only if the
  refname *addresses* are the same, not merely if the refname *strings*
  are the same, to forestall possible mixups between refnames that come
  from different ref_iterators.

Please note that this optimization of peel_ref() is only available when
iterating via do_for_each_ref_iterator() (including all of the
for_each_ref() functions, which call it indirectly). It would be
complicated to implement a similar optimization when iterating directly
using a reference iterator, because multiple reference iterators can be
in use at the same time, with interleaved calls to
ref_iterator_advance(). (In fact we do exactly that in
merge_ref_iterator.)

But that is not necessary. peel_ref() is only called while iterating
over references. Callers who iterate using the for_each_ref() functions
benefit from the optimization described above. Callers who iterate
using reference iterators directly have access to the ref_iterator, so
they can call ref_iterator_peel() themselves to get an analogous
optimization in a more straightforward manner.

If we rewrite all callers to use the reference iteration API, then we
can remove the current_ref_iter hack permanently.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-06-18 12:15:16 +08:00
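
The peel_ref() side of this hack lives in the files backend rather than
in this file. A hedged sketch of the check described above, written
here as a standalone helper purely for illustration (the helper name is
made up; in the real code the logic is inlined in peel_ref()):

/*
 * Sketch of the peel_ref() fast path: if the requested refname is the
 * very one the active iterator is sitting on, peel via the iterator.
 * Returns 0 on success, -1 if the fast path doesn't apply or fails.
 */
static int try_peel_via_current_ref_iter(const char *refname,
					 struct object_id *peeled)
{
	/*
	 * Compare refname *addresses*, not contents, to forestall
	 * mixups between refnames from different ref_iterators:
	 */
	if (!current_ref_iter || current_ref_iter->refname != refname)
		return -1;

	return ref_iterator_peel(current_ref_iter, peeled);
}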

struct ref_iterator *current_ref_iter = NULL;

int do_for_each_ref_iterator(struct ref_iterator *iter,
			     each_ref_fn fn, void *cb_data)
{
	int retval = 0, ok;
	struct ref_iterator *old_ref_iter = current_ref_iter;

	current_ref_iter = iter;
	while ((ok = ref_iterator_advance(iter)) == ITER_OK) {
		retval = fn(iter->refname, iter->oid, iter->flags, cb_data);
		if (retval) {
			/*
			 * If ref_iterator_abort() returns ITER_ERROR,
			 * we ignore that error in deference to the
			 * callback function's return value.
			 */
			ref_iterator_abort(iter);
			goto out;
		}
	}

out:
	current_ref_iter = old_ref_iter;
	if (ok == ITER_ERROR)
		return -1;
	return retval;
}
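
With this in place, the rewritten do_for_each_ref() described in the
commit message (which lives in refs.c, not in this file) reduces to
composing iterators and handing the result to
do_for_each_ref_iterator(). A hedged sketch, with the exact parameters
of files_ref_iterator_begin() assumed and details such as
GIT_REF_PARANOIA handling omitted:

/*
 * Hedged sketch of the rewritten do_for_each_ref(): build a files-
 * backend iterator, wrap it to filter/trim by prefix, then drive the
 * callback loop above.
 */
static int do_for_each_ref(const char *submodule, const char *prefix,
			   each_ref_fn fn, int trim, int flags,
			   void *cb_data)
{
	struct ref_iterator *iter;

	iter = files_ref_iterator_begin(submodule, prefix, flags);
	iter = prefix_ref_iterator_begin(iter, prefix, trim);

	return do_for_each_ref_iterator(iter, fn, cb_data);
}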