sha1-lookup: more memory efficient search in sorted list of SHA-1
Currently, when looking for a packed object from the pack idx, a
simple binary search is used.
A conventional binary search loop looks like this:

	unsigned lo, hi;
	do {
		unsigned mi = (lo + hi) / 2;
		int cmp = "entry pointed at by mi" minus "target";
		if (!cmp)
			return mi; "mi is the wanted one"
		if (cmp > 0)
			hi = mi; "mi is larger than target"
		else
			lo = mi+1; "mi is smaller than target"
	} while (lo < hi);
	"did not find what we wanted"
The invariants are:
- When entering the loop, 'lo' points at a slot that is never
above the target (it could be at the target), 'hi' points at
a slot that is guaranteed to be above the target (it can
never be at the target).
- We find a point 'mi' between 'lo' and 'hi' ('mi' could be
the same as 'lo', but never can be as high as 'hi'), and
check if 'mi' hits the target. There are three cases:
- if it is a hit, we have found what we are looking for;
  - if it is strictly higher than the target, we make it the
    new 'hi', and repeat the search.
- if it is strictly lower than the target, we update 'lo'
to one slot after it, because we allow 'lo' to be at the
target and 'mi' is known to be below the target.
If the loop exits, there is no matching entry.
When choosing 'mi', we do not have to take the "middle"; any point
in between 'lo' and 'hi' works, as long as lo <= mi < hi is
satisfied. When we somehow know that the distance between the
target and 'lo' is much shorter than the distance between the
target and 'hi', we could pick 'mi' much closer to 'lo' than the
(hi+lo)/2 a conventional binary search would pick.
This patch takes advantage of the fact that SHA-1 is a good
hash function, and as long as there are enough entries in the
table, we can expect a uniform distribution. An entry that begins
with, for example, "deadbeef..." is likely to appear much later
than the midway point of a reasonably populated table. In fact,
it can be expected to sit near 87% (222/256) of the way from the
top of the table.
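
To make the arithmetic concrete (an illustrative sketch, not the exact
code in the patch; the names lo_key, hi_key and target are made up):
0xde is 222, and 222/256 is about 87%. Interpolating on the first two
bytes of the keys at both ends of the range gives a guess such as

	/* requires lov < hiv; then lo <= mi < hi holds */
	unsigned lov = (lo_key[0] << 8) | lo_key[1];
	unsigned hiv = (hi_key[0] << 8) | hi_key[1];
	unsigned kyv = (target[0] << 8) | target[1];
	unsigned mi  = lo + (hi - 1 - lo) * (kyv - lov) / (hiv - lov);

instead of the (lo + hi) / 2 a plain binary search would use.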
This is a work-in-progress and has switches to allow easier
experiments and debugging. Exporting the GIT_USE_LOOKUP environment
variable enables this code.
On my admittedly memory-starved machine, with a partial KDE
repository (3.0G pack with 95M idx):
$ GIT_USE_LOOKUP=t git log -800 --stat HEAD >/dev/null
3.93user 0.16system 0:04.09elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+55588minor)pagefaults 0swaps
Without the patch, the numbers are:
$ git log -800 --stat HEAD >/dev/null
4.00user 0.15system 0:04.17elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+60258minor)pagefaults 0swaps
In the same repository:
$ GIT_USE_LOOKUP=t git log -2000 HEAD >/dev/null
0.12user 0.00system 0:00.12elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+4241minor)pagefaults 0swaps
Without the patch, the numbers are:
$ git log -2000 HEAD >/dev/null
0.05user 0.01system 0:00.07elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+8506minor)pagefaults 0swaps
There isn't much difference in time, but the number of minor faults
shows that we are touching a much smaller number of pages, which is
expected.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-29 18:05:47 +08:00
#include "cache.h"
#include "sha1-lookup.h"

static uint32_t take2(const unsigned char *sha1)
{
	return ((sha1[0] << 8) | sha1[1]);
}
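
/*
 * For illustration: take2() folds the first two bytes of a key into a
 * single 16-bit value, e.g. take2 of "\xde\xad\xbe\xef..." yields
 * 0xdead, i.e. 57005.
 */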

/*
 * Conventional binary search loop looks like this:
 *
 *	do {
 *		int mi = (lo + hi) / 2;
 *		int cmp = "entry pointed at by mi" minus "target";
 *		if (!cmp)
 *			return (mi is the wanted one)
 *		if (cmp > 0)
 *			hi = mi; "mi is larger than target"
 *		else
 *			lo = mi+1; "mi is smaller than target"
 *	} while (lo < hi);
 *
 * The invariants are:
 *
 * - When entering the loop, lo points at a slot that is never
 *   above the target (it could be at the target), hi points at a
 *   slot that is guaranteed to be above the target (it can never
 *   be at the target).
 *
 * - We find a point 'mi' between lo and hi (mi could be the same
 *   as lo, but never can be the same as hi), and check if it hits
 *   the target.  There are three cases:
 *
 *    - if it is a hit, we are happy.
 *
 *    - if it is strictly higher than the target, we update hi with
 *      it.
 *
 *    - if it is strictly lower than the target, we update lo to be
 *      one slot after it, because we allow lo to be at the target.
 *
 * When choosing 'mi', we do not have to take the "middle" but
 * anywhere in between lo and hi, as long as lo <= mi < hi is
 * satisfied.  When we somehow know that the distance between the
 * target and lo is much shorter than the target and hi, we could
 * pick mi that is much closer to lo than the midway.
 */
/*
 * The table should contain "nr" elements.
 * The sha1 of element i (between 0 and nr - 1) should be returned
 * by "fn(i, table)".
 */
int sha1_pos(const unsigned char *sha1, void *table, size_t nr,
	     sha1_access_fn fn)
{
	size_t hi = nr;
	size_t lo = 0;
	size_t mi = 0;

	if (!nr)
		return -1;

	if (nr != 1) {
		size_t lov, hiv, miv, ofs;

		for (ofs = 0; ofs < 18; ofs += 2) {
			lov = take2(fn(0, table) + ofs);
			hiv = take2(fn(nr - 1, table) + ofs);
			miv = take2(sha1 + ofs);
			if (miv < lov)
				return -1;
			if (hiv < miv)
				return -1 - nr;
			if (lov != hiv) {
				/*
				 * At this point miv could be equal
				 * to hiv (but sha1 could still be higher);
				 * the invariant of (mi < hi) should be
				 * kept.
				 */
				mi = (nr - 1) * (miv - lov) / (hiv - lov);
				if (lo <= mi && mi < hi)
					break;
				die("BUG: assertion failed in binary search");
			}
		}
		if (18 <= ofs)
			die("cannot happen -- lo and hi are identical");
	}

	do {
		int cmp;
		cmp = hashcmp(fn(mi, table), sha1);
		if (!cmp)
			return mi;
		if (cmp > 0)
			hi = mi;
		else
			lo = mi + 1;
		mi = (hi + lo) / 2;
	} while (lo < hi);
	return -lo-1;
}
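
/*
 * Illustrative usage sketch (not part of the original file; the names
 * "flat_sha1_access" and "flat_sha1_lookup" are made up).  A caller
 * that keeps a plain array of 20-byte binary SHA-1s, sorted in
 * memcmp() order, could use sha1_pos() like this:
 */
static const unsigned char *flat_sha1_access(size_t index, void *table)
{
	/* entries are bare 20-byte SHA-1s laid out back to back */
	return (const unsigned char *)table + index * 20;
}

static int flat_sha1_lookup(const unsigned char *sha1,
			    unsigned char *table, size_t nr)
{
	/* >= 0: index of the match; < 0: -1 - (insertion position) */
	return sha1_pos(sha1, table, nr, flat_sha1_access);
}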

/*
 * Conventional binary search loop looks like this:
 *
 *	unsigned lo, hi;
 *	do {
 *		unsigned mi = (lo + hi) / 2;
 *		int cmp = "entry pointed at by mi" minus "target";
 *		if (!cmp)
 *			return (mi is the wanted one)
 *		if (cmp > 0)
 *			hi = mi; "mi is larger than target"
 *		else
 *			lo = mi+1; "mi is smaller than target"
 *	} while (lo < hi);
 *
 * The invariants are:
 *
 * - When entering the loop, lo points at a slot that is never
 *   above the target (it could be at the target), hi points at a
 *   slot that is guaranteed to be above the target (it can never
 *   be at the target).
 *
 * - We find a point 'mi' between lo and hi (mi could be the same
 *   as lo, but never can be the same as hi), and check if it hits
 *   the target.  There are three cases:
 *
 *    - if it is a hit, we are happy.
 *
 *    - if it is strictly higher than the target, we set it to hi,
 *      and repeat the search.
 *
 *    - if it is strictly lower than the target, we update lo to
 *      one slot after it, because we allow lo to be at the target.
 *
 * If the loop exits, there is no matching entry.
 *
 * When choosing 'mi', we do not have to take the "middle" but
 * anywhere in between lo and hi, as long as lo <= mi < hi is
 * satisfied.  When we somehow know that the distance between the
 * target and lo is much shorter than the target and hi, we could
 * pick mi that is much closer to lo than the midway.
 *
 * Now, we can take advantage of the fact that SHA-1 is a good hash
 * function, and as long as there are enough entries in the table, we
 * can expect uniform distribution.  An entry that begins with, for
 * example, "deadbeef..." is likely to appear much later than the
 * midway point of the table.  It can reasonably be expected to be
 * near 87% (222/256) from the top of the table.
 *
 * However, we do not want to pick "mi" too precisely.  If the entry at
 * the 87% in the above example turns out to be higher than the target
 * we are looking for, we would end up narrowing the search space down
 * only by 13%, instead of the 50% we would get if we did a simple
 * binary search.  So we want to hedge our bets by being less
 * aggressive.
 *
 * The table at "table" holds at least "nr" entries of "elem_size"
 * bytes each.  Each entry has the SHA-1 key at "key_offset".  The
 * table is sorted by the SHA-1 key of the entries.  The caller wants
 * to find the entry with "key", and knows that the entry at "lo" is
 * not higher than the entry it is looking for, and that the entry at
 * "hi" is higher than the entry it is looking for.
 */
int sha1_entry_pos(const void *table,
		   size_t elem_size,
		   size_t key_offset,
		   unsigned lo, unsigned hi, unsigned nr,
		   const unsigned char *key)
{
	const unsigned char *base = table;
	const unsigned char *hi_key, *lo_key;
	unsigned ofs_0;
	static int debug_lookup = -1;

	if (debug_lookup < 0)
		debug_lookup = !!getenv("GIT_DEBUG_LOOKUP");

	if (!nr || lo >= hi)
		return -1;

	if (nr == hi)
		hi_key = NULL;
	else
		hi_key = base + elem_size * hi + key_offset;
	lo_key = base + elem_size * lo + key_offset;

	ofs_0 = 0;
	do {
		int cmp;
		unsigned ofs, mi, range;
		unsigned lov, hiv, kyv;
		const unsigned char *mi_key;

		range = hi - lo;
		if (hi_key) {
			for (ofs = ofs_0; ofs < 20; ofs++)
				if (lo_key[ofs] != hi_key[ofs])
					break;
			ofs_0 = ofs;
			/*
			 * byte 0 thru (ofs-1) are the same between
			 * lo and hi; ofs is the first byte that is
			 * different.
			 *
			 * If ofs==20, then no bytes are different,
			 * meaning we have entries with duplicate
			 * keys.  We know that we are in a solid run
			 * of this entry (because the entries are
			 * sorted, and our lo and hi are the same,
			 * there can be nothing but this single key
			 * in between).  So we can stop the search.
			 * Either one of these entries is it (and
			 * we do not care which), or we do not have
			 * it.
			 *
			 * Furthermore, we know that one of our
			 * endpoints must be the edge of the run of
			 * duplicates.  For example, given this
			 * sequence:
			 *
			 *     idx 0 1 2 3 4 5
			 *     key A C C C C D
			 *
			 * If we are searching for "B", we might
			 * hit the duplicate run at lo=1, hi=3
			 * (e.g., by first mi=3, then mi=0).  But we
			 * can never have lo > 1, because B < C.
			 * That is, if our key is less than the
			 * run, we know that "lo" is the edge, but
			 * we can say nothing of "hi".  Similarly,
			 * if our key is greater than the run, we
			 * know that "hi" is the edge, but we can
			 * say nothing of "lo".
			 *
			 * Therefore if we do not find it, we also
			 * know where it would go if it did exist:
			 * just on the far side of the edge that we
			 * know about.
			 */
			if (ofs == 20) {
				mi = lo;
				mi_key = base + elem_size * mi + key_offset;
				cmp = memcmp(mi_key, key, 20);
				if (!cmp)
					return mi;
				if (cmp < 0)
					return -1 - hi;
				else
					return -1 - lo;
			}

			hiv = hi_key[ofs_0];
			if (ofs_0 < 19)
				hiv = (hiv << 8) | hi_key[ofs_0+1];
		} else {
			hiv = 256;
			if (ofs_0 < 19)
				hiv <<= 8;
		}
		lov = lo_key[ofs_0];
		kyv = key[ofs_0];
		if (ofs_0 < 19) {
			lov = (lov << 8) | lo_key[ofs_0+1];
			kyv = (kyv << 8) | key[ofs_0+1];
		}
		assert(lov < hiv);

		if (kyv < lov)
			return -1 - lo;
		if (hiv < kyv)
			return -1 - hi;

		/*
		 * Even if we know the target is much closer to 'hi'
		 * than 'lo', if we pick too precisely and overshoot
		 * (e.g. when we know 'mi' is closer to 'hi' than to
		 * 'lo', pick 'mi' that is higher than the target), we
		 * end up narrowing the search space by a smaller
		 * amount (i.e. the distance between 'mi' and 'hi')
		 * than what we would have (i.e. about half of 'lo'
		 * and 'hi').  Hedge our bets to pick 'mi' less
		 * aggressively, i.e. make 'mi' a bit closer to the
		 * middle than we would otherwise pick.
		 */
		kyv = (kyv * 6 + lov + hiv) / 8;
		if (lov < hiv - 1) {
			if (kyv == lov)
				kyv++;
			else if (kyv == hiv)
				kyv--;
		}
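		/*
		 * A worked example of the hedge (illustrative numbers,
		 * not from the original code): with lov = 0, hiv = 65536
		 * and kyv = 0xdead = 57005 (about 87% of the range),
		 * (57005 * 6 + 0 + 65536) / 8 = 50945, i.e. roughly 78%
		 * of the range, which is exactly three quarters of the
		 * way from the middle towards the original estimate.
		 */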
		mi = (range - 1) * (kyv - lov) / (hiv - lov) + lo;

		if (debug_lookup) {
			printf("lo %u hi %u rg %u mi %u ", lo, hi, range, mi);
			printf("ofs %u lov %x, hiv %x, kyv %x\n",
			       ofs_0, lov, hiv, kyv);
		}
		if (!(lo <= mi && mi < hi))
			die("assertion failure lo %u mi %u hi %u %s",
			    lo, mi, hi, sha1_to_hex(key));

		mi_key = base + elem_size * mi + key_offset;
		cmp = memcmp(mi_key + ofs_0, key + ofs_0, 20 - ofs_0);
		if (!cmp)
			return mi;
		if (cmp > 0) {
			hi = mi;
			hi_key = mi_key;
		} else {
			lo = mi + 1;
			lo_key = mi_key + elem_size;
		}
	} while (lo < hi);
	return -lo-1;
}
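
/*
 * Illustrative usage sketch (the struct and function names below are
 * made up, not part of the original file).  A caller that keeps a
 * sorted table of fixed-size records with the 20-byte SHA-1 key at a
 * known offset could call sha1_entry_pos() like this:
 */
struct example_rec {
	unsigned char sha1[20];
	uint32_t pack_offset;	/* arbitrary payload */
};

static int example_rec_lookup(const struct example_rec *recs, unsigned nr,
			      const unsigned char *sha1)
{
	/*
	 * lo=0 and hi=nr say we know nothing yet about where the key
	 * falls; a caller could narrow them first (e.g. using a pack
	 * index fan-out table).  The result is the index if found,
	 * or -1 - (insertion position) if not.
	 */
	return sha1_entry_pos(recs, sizeof(*recs),
			      offsetof(struct example_rec, sha1),
			      0, nr, nr, sha1);
}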