sha1-lookup: more memory efficient search in sorted list of SHA-1
Currently, when looking for a packed object from the pack idx, a
simple binary search is used.
A conventional binary search loop looks like this:
    unsigned lo, hi;
    do {
            unsigned mi = (lo + hi) / 2;
            int cmp = "entry pointed at by mi" minus "target";
            if (!cmp)
                    return mi; "mi is the wanted one"
            if (cmp > 0)
                    hi = mi; "mi is larger than target"
            else
                    lo = mi+1; "mi is smaller than target"
    } while (lo < hi);
    "did not find what we wanted"
The invariants are:

 - When entering the loop, 'lo' points at a slot that is never
   above the target (it could be at the target), while 'hi' points
   at a slot that is guaranteed to be above the target (it can
   never be at the target).

 - We find a point 'mi' between 'lo' and 'hi' ('mi' could be the
   same as 'lo', but never can be as high as 'hi'), and check if
   'mi' hits the target.  There are three cases:

    - if it is a hit, we have found what we are looking for;

    - if it is strictly higher than the target, we set 'hi' to it,
      and repeat the search;

    - if it is strictly lower than the target, we update 'lo' to
      one slot after it, because we allow 'lo' to be at the target
      and 'mi' is known to be below the target.

If the loop exits, there is no matching entry.
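
For concreteness, here is a minimal, self-contained instance of the
loop above for a sorted array of ints (a sketch, not code from this
patch; the midpoint is written as lo + (hi - lo) / 2 so that lo + hi
cannot overflow):

    #include <stddef.h>

    /* Return the index of "target" in the sorted "table" of "nr"
     * entries, or -1 when there is no matching entry.
     */
    static int find(const int *table, size_t nr, int target)
    {
            size_t lo = 0, hi = nr;

            while (lo < hi) {
                    size_t mi = lo + (hi - lo) / 2;

                    if (table[mi] == target)
                            return (int)mi;  /* mi is the wanted one */
                    if (table[mi] > target)
                            hi = mi;         /* mi is above the target */
                    else
                            lo = mi + 1;     /* mi is below the target */
            }
            return -1;                       /* did not find it */
    }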
When choosing 'mi', we do not have to take the "middle" but
anywhere in between 'lo' and 'hi', as long as lo <= mi < hi is
satisfied.  When we somehow know that the distance between the
target and 'lo' is much shorter than that between the target and
'hi', we could pick a 'mi' that is much closer to 'lo' than the
(hi+lo)/2 that a conventional binary search would pick.
This patch takes advantage of the fact that SHA-1 is a good hash
function, and as long as there are enough entries in the table, we
can expect a uniform distribution.  An entry that begins with, for
example, "deadbeef..." is likely to appear much later than the
midway point of a reasonably populated table.  In fact, it can be
expected to sit near 87% (0xde is 222, and 222/256 is about 0.87)
of the way from the top of the table.
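
As a hedged sketch of that idea (illustrative only; the code below
interpolates between the values observed at the current endpoints
rather than assuming the full 0x0000..0xffff range), the first
probe could be estimated from the leading 16 bits of the target:

    #include <stddef.h>
    #include <stdint.h>

    /* Estimate the first probe into a table of "nr" (>= 1) entries
     * from the leading 16 bits of a raw SHA-1, assuming the entries
     * are uniformly distributed.  For a target starting with
     * "dead..." this lands near 87% of the table.
     */
    static size_t first_probe(const unsigned char *sha1, size_t nr)
    {
            uint32_t v = ((uint32_t)sha1[0] << 8) | sha1[1];

            return (size_t)((uint64_t)(nr - 1) * v / 0xffff);
    }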
This is a work-in-progress and has switches to allow easier
experiments and debugging.  Exporting the GIT_USE_LOOKUP
environment variable enables this code.

On my admittedly memory-starved machine, with a partial KDE
repository (3.0G pack with 95M idx):
    $ GIT_USE_LOOKUP=t git log -800 --stat HEAD >/dev/null
    3.93user 0.16system 0:04.09elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (0major+55588minor)pagefaults 0swaps

Without the patch, the numbers are:

    $ git log -800 --stat HEAD >/dev/null
    4.00user 0.15system 0:04.17elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (0major+60258minor)pagefaults 0swaps

In the same repository:

    $ GIT_USE_LOOKUP=t git log -2000 HEAD >/dev/null
    0.12user 0.00system 0:00.12elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (0major+4241minor)pagefaults 0swaps

Without the patch, the numbers are:

    $ git log -2000 HEAD >/dev/null
    0.05user 0.01system 0:00.07elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (0major+8506minor)pagefaults 0swaps
There isn't much time difference, but the number of minor faults
seems to show that we are touching a much smaller number of pages,
which is expected.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-29 18:05:47 +08:00

#include "cache.h"
#include "hash-lookup.h"

/* Pick two consecutive bytes of the oid, as a 16-bit value */
static uint32_t take2(const struct object_id *oid, size_t ofs)
{
	return ((oid->hash[ofs] << 8) | oid->hash[ofs + 1]);
}
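
/*
 * Hedged illustration (not part of this file): for an oid whose
 * raw hash begins de ad be ef, take2(oid, 0) yields 0xdead (57005)
 * and take2(oid, 2) yields 0xbeef (48879).
 */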

/*
 * Conventional binary search loop looks like this:
 *
 *	do {
 *		int mi = lo + (hi - lo) / 2;
 *		int cmp = "entry pointed at by mi" minus "target";
 *		if (!cmp)
 *			return (mi is the wanted one)
 *		if (cmp > 0)
 *			hi = mi; "mi is larger than target"
 *		else
 *			lo = mi+1; "mi is smaller than target"
 *	} while (lo < hi);
 *
 * The invariants are:
 *
 * - When entering the loop, lo points at a slot that is never
 *   above the target (it could be at the target), hi points at a
 *   slot that is guaranteed to be above the target (it can never
 *   be at the target).
 *
 * - We find a point 'mi' between lo and hi (mi could be the same
 *   as lo, but never can be the same as hi), and check if it hits
 *   the target.  There are three cases:
 *
 *    - if it is a hit, we are happy.
 *
 *    - if it is strictly higher than the target, we update hi with
 *      it.
 *
 *    - if it is strictly lower than the target, we update lo to be
 *      one slot after it, because we allow lo to be at the target.
 *
 * When choosing 'mi', we do not have to take the "middle" but
 * anywhere in between lo and hi, as long as lo <= mi < hi is
 * satisfied.  When we somehow know that the distance between the
 * target and lo is much shorter than that between the target and
 * hi, we could pick mi that is much closer to lo than the midway.
 */

/*
 * The table should contain "nr" elements.
 * The oid of element i (between 0 and nr - 1) should be returned
 * by "fn(i, table)".
 */
int oid_pos(const struct object_id *oid, const void *table, size_t nr,
	    oid_access_fn fn)
{
	size_t hi = nr;
	size_t lo = 0;
	size_t mi = 0;

	if (!nr)
		return -1;

	if (nr != 1) {
		size_t lov, hiv, miv, ofs;

		/*
		 * Scan the hash two bytes at a time until we find a
		 * 16-bit window in which the first and the last
		 * entries of the table differ, and use the target's
		 * value in that window to interpolate the first probe.
		 */
		for (ofs = 0; ofs < the_hash_algo->rawsz - 2; ofs += 2) {
			lov = take2(fn(0, table), ofs);
			hiv = take2(fn(nr - 1, table), ofs);
			miv = take2(oid, ofs);
			if (miv < lov)
				return -1;
			if (hiv < miv)
				return index_pos_to_insert_pos(nr);
			if (lov != hiv) {
				/*
				 * At this point miv could be equal
				 * to hiv (but the hash could still be
				 * higher); the invariant of (mi < hi)
				 * should be kept.
				 */
				mi = (nr - 1) * (miv - lov) / (hiv - lov);
				if (lo <= mi && mi < hi)
					break;
				BUG("assertion failed in binary search");
			}
		}
	}
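	/*
	 * Hedged worked example (illustrative numbers only): with
	 * nr = 1000, lov = 0x0000, hiv = 0xffff and a target whose
	 * hash begins "dead...", miv = 0xdead = 57005, so the first
	 * probe is mi = 999 * 57005 / 65535 = 868, about 87% of the
	 * way into the table.
	 */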

	do {
		int cmp;
		cmp = oidcmp(fn(mi, table), oid);
		if (!cmp)
			return mi;
		if (cmp > 0)
			hi = mi;
		else
			lo = mi + 1;
		mi = lo + (hi - lo) / 2;
	} while (lo < hi);
	return index_pos_to_insert_pos(lo);
}
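
/*
 * Hedged usage sketch (not part of this file; names are
 * illustrative): with a plain sorted array of object_id as the
 * table, the access function and the call could look like this.
 * oid_pos() returns the index on a hit, or -1 - (insert position)
 * on a miss.
 *
 *	static const struct object_id *access_oid(size_t i,
 *						  const void *table)
 *	{
 *		return &((const struct object_id *)table)[i];
 *	}
 *
 *	int pos = oid_pos(&target, oids, nr, access_oid);
 *	if (pos >= 0)
 *		...found at index pos...
 *	else
 *		...not found; insert position is -1 - pos...
 */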
2020-12-31 19:56:23 +08:00
|
|
|
int bsearch_hash(const unsigned char *hash, const uint32_t *fanout_nbo,
|
2018-02-14 02:39:39 +08:00
|
|
|
const unsigned char *table, size_t stride, uint32_t *result)
|
|
|
|
{
|
|
|
|
uint32_t hi, lo;
|
|
|
|
|
2020-12-31 19:56:23 +08:00
|
|
|
hi = ntohl(fanout_nbo[*hash]);
|
|
|
|
lo = ((*hash == 0x0) ? 0 : ntohl(fanout_nbo[*hash - 1]));
|
2018-02-14 02:39:39 +08:00
|
|
|
|
|
|
|
while (lo < hi) {
|
|
|
|
unsigned mi = lo + (hi - lo) / 2;
|
2020-12-31 19:56:23 +08:00
|
|
|
int cmp = hashcmp(table + mi * stride, hash);
|
2018-02-14 02:39:39 +08:00
|
|
|
|
|
|
|
if (!cmp) {
|
|
|
|
if (result)
|
|
|
|
*result = mi;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
if (cmp > 0)
|
|
|
|
hi = mi;
|
|
|
|
else
|
|
|
|
lo = mi + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (result)
|
|
|
|
*result = lo;
|
|
|
|
return 0;
|
|
|
|
}
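
/*
 * Hedged usage sketch (not part of this file; names are
 * illustrative): "table" is a sorted array of raw hashes laid out
 * "stride" bytes apart, and "fanout" is the pack-idx style
 * 256-entry fanout in network byte order, where entry k counts
 * the hashes whose first byte is <= k.
 *
 *	uint32_t pos;
 *
 *	if (bsearch_hash(target, fanout, table,
 *			 the_hash_algo->rawsz, &pos))
 *		...hit: the entry is at table + pos * stride...
 *	else
 *		...miss: pos is where it would be inserted...
 */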

msvc: avoid using minus operator on unsigned types

MSVC complains about this with `-Wall`, which can be taken as a sign
that this is indeed a real bug.  The symptom is:

    C4146: unary minus operator applied to unsigned type, result
    still unsigned

Let's avoid this warning in the minimal way, e.g. writing `-1 -
<unsigned value>` instead of `-<unsigned value> - 1`.

Note that the change in the `estimate_cache_size()` function is
needed because MSVC considers the "return type" of the `sizeof()`
operator to be `size_t`, i.e. unsigned, and therefore it cannot be
negated using the unary minus operator.

Even worse, that arithmetic is doing extra work, in vain.  We want to
calculate the entry extra cache size as the difference between the
size of the `cache_entry` structure and the size of the
`ondisk_cache_entry` structure, padded to the appropriate alignment
boundary.

To that end, we start by assigning that difference to the `per_entry`
variable, and then abuse the `len` parameter of the
`align_padding_size()` macro to take the negative size of the ondisk
entry size.  Essentially, we try to avoid passing the already calculated
difference to that macro by passing the operands of that difference
instead, when the macro expects operands of an addition:

    #define align_padding_size(size, len) \
            ((size + (len) + 8) & ~7) - (size + len)

Currently, we pass A and -B to that macro instead of passing A - B and
0, where A - B is already stored in the `per_entry` variable, ready to
be used.

This is neither necessary nor intuitive.  Let's fix this, and have code
that is both easier to read and also does not trigger MSVC's warning.

While at it, we take care of reporting overflows (which are unlikely,
but hey, defensive programming is good!).

We _also_ take pains to cast the unsigned value to signed: otherwise,
the signed operand (i.e. the `-1`) would be cast to unsigned before
doing the arithmetic.

Helped-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-04 23:09:26 +08:00
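
A minimal sketch of the pitfall the message describes (illustrative,
not code from the patch):

    #include <stdio.h>

    int main(void)
    {
            unsigned int u = 4;

            /*
             * Unary minus on an unsigned operand stays unsigned;
             * this is what MSVC's C4146 flags.  With a 32-bit
             * unsigned int this prints 4294967292, not -4.
             */
            printf("%u\n", -u);

            /* Warning-free spellings: */
            printf("%d\n", -(int)u);      /* cast before negating: -4 */
            printf("%d\n", -1 - (int)u);  /* the insert-pos encoding: -5 */
            return 0;
    }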