commit f1068efefe
Long ago in 628522ec14
(sha1-lookup: more memory efficient
search in sorted list of SHA-1, 2007-12-29) we added
sha1_entry_pos(), a binary search that uses the uniform
distribution of sha1s to scale the selection of mid-points.
As this was a performance experiment, we tied it to the
GIT_USE_LOOKUP environment variable and never enabled it by
default.
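
The idea behind that scaling is interpolation: because sha1s are
effectively uniformly distributed, the target's position within
[lo, hi) can be estimated from its numeric value instead of always
probing the middle. A minimal standalone sketch of that mid-point
choice (not the removed sha1_entry_pos() itself; the helper name and
parameters are purely illustrative):

#include <stddef.h>
#include <stdint.h>

/*
 * Pick a probe point by where the target's leading bytes fall between
 * the lowest and highest entries, assuming the caller has already
 * checked that lov <= target <= hiv.
 */
static size_t scaled_mid(size_t lo, size_t hi,
			 uint16_t lov, uint16_t hiv, uint16_t target)
{
	size_t range = hi - lo - 1;	/* candidate slots above lo */

	if (hiv <= lov)
		return lo + range / 2;	/* no spread; fall back to the middle */
	return lo + range * (target - lov) / (hiv - lov);
}

The result always satisfies lo <= mi < hi, which is the same invariant
the real code enforces before it trusts a scaled probe.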
This code was successful in reducing the number of steps in
each search. But the overhead of the scaling ends up making
it slower when the cache is warm. Here are best-of-five
timings for running rev-list on linux.git, which will have
to look up every object:
$ time git rev-list --objects --all >/dev/null
real 0m35.357s
user 0m35.016s
sys 0m0.340s
$ time GIT_USE_LOOKUP=1 git rev-list --objects --all >/dev/null
real 0m37.364s
user 0m37.045s
sys 0m0.316s
The USE_LOOKUP version might have more benefit on a cold
cache, as the time to fault in each page would dominate. But
that would be for a single lookup. In practice, most
operations tend to look up many objects, and the whole pack
.idx will end up warm.
It's possible that the code could be better optimized to
compete with a naive binary search for the warm-cache case,
and we could have the best of both worlds. But over the
years nobody has done so, and this is largely dead code that
is rarely run outside of the test suite. Let's drop it in
the name of simplicity.
This lets us remove sha1_entry_pos() entirely, as the .idx
lookup code was the only caller. Note that sha1-lookup.c
still contains sha1_pos(), which differs from
sha1_entry_pos() in two ways:
- it has a different interface; it uses a function pointer
to access sha1 entries rather than a size/offset pair
describing the table's memory layout
- it only scales the initial selection of "mi", rather
than each iteration of the search
We can't get rid of this function, as it's called from
several places. It may be that we could replace it with a
simple binary search, but that's out of scope for this patch
(and would need benchmarking).
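
For reference, sha1_pos() takes a sha1_access_fn callback that maps an
index to the sha1 bytes of the i-th table entry. A hedged sketch of a
caller, assuming a flat table of packed, sorted 20-byte entries (the
function names and layout here are illustrative, not existing git
code):

#include "cache.h"
#include "sha1-lookup.h"

/* illustrative accessor: "table" is nr packed 20-byte sha1s, sorted */
static const unsigned char *flat_access(size_t index, void *table)
{
	return (const unsigned char *)table + index * 20;
}

/*
 * Returns the index of "sha1" in the table, or a negative value when
 * it is absent; in that case -result - 1 is the slot where it would
 * be inserted.
 */
static int lookup_example(const unsigned char *sha1,
			  unsigned char *table, size_t nr)
{
	return sha1_pos(sha1, table, nr, flat_access);
}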
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sha1-lookup.c (102 lines, 2.6 KiB, C)
#include "cache.h"
#include "sha1-lookup.h"

static uint32_t take2(const unsigned char *sha1)
{
	return ((sha1[0] << 8) | sha1[1]);
}

/*
 * Conventional binary search loop looks like this:
 *
 *	do {
 *		int mi = (lo + hi) / 2;
 *		int cmp = "entry pointed at by mi" minus "target";
 *		if (!cmp)
 *			return (mi is the wanted one)
 *		if (cmp > 0)
 *			hi = mi; "mi is larger than target"
 *		else
 *			lo = mi+1; "mi is smaller than target"
 *	} while (lo < hi);
 *
 * The invariants are:
 *
 * - When entering the loop, lo points at a slot that is never
 *   above the target (it could be at the target), hi points at a
 *   slot that is guaranteed to be above the target (it can never
 *   be at the target).
 *
 * - We find a point 'mi' between lo and hi (mi could be the same
 *   as lo, but never can be the same as hi), and check if it hits
 *   the target.  There are three cases:
 *
 *    - if it is a hit, we are happy.
 *
 *    - if it is strictly higher than the target, we update hi
 *      with it.
 *
 *    - if it is strictly lower than the target, we update lo to be
 *      one slot after it, because we allow lo to be at the target.
 *
 * When choosing 'mi', we do not have to take the "middle" but
 * anywhere in between lo and hi, as long as lo <= mi < hi is
 * satisfied.  When we somehow know that the distance between the
 * target and lo is much shorter than the target and hi, we could
 * pick mi that is much closer to lo than the midway.
 */
/*
 * The table should contain "nr" elements.
 * The sha1 of element i (between 0 and nr - 1) should be returned
 * by "fn(i, table)".
 */
int sha1_pos(const unsigned char *sha1, void *table, size_t nr,
	     sha1_access_fn fn)
{
	size_t hi = nr;
	size_t lo = 0;
	size_t mi = 0;

	if (!nr)
		return -1;

	if (nr != 1) {
		size_t lov, hiv, miv, ofs;

		for (ofs = 0; ofs < 18; ofs += 2) {
			lov = take2(fn(0, table) + ofs);
			hiv = take2(fn(nr - 1, table) + ofs);
			miv = take2(sha1 + ofs);
			if (miv < lov)
				return -1;
			if (hiv < miv)
				return -1 - nr;
			if (lov != hiv) {
				/*
				 * At this point miv could be equal
				 * to hiv (but sha1 could still be higher);
				 * the invariant of (mi < hi) should be
				 * kept.
				 */
				mi = (nr - 1) * (miv - lov) / (hiv - lov);
				if (lo <= mi && mi < hi)
					break;
				die("BUG: assertion failed in binary search");
			}
		}
	}

	do {
		int cmp;
		cmp = hashcmp(fn(mi, table), sha1);
		if (!cmp)
			return mi;
		if (cmp > 0)
			hi = mi;
		else
			lo = mi + 1;
		mi = (hi + lo) / 2;
	} while (lo < hi);
	return -lo-1;
}