git/oidtree.c

oidtree: a crit-bit tree for odb_loose_cache (2021-07-08)

This saves 8K per `struct object_directory', meaning it saves around 800MB in my case involving 100K alternates (half or more of those alternates are unlikely to hold loose objects).

This is implemented in two parts: a generic, allocation-free `cbtree' and the `oidtree' wrapper on top of it. The latter provides allocation using alloc_state as a memory pool to improve locality and reduce free(3) overhead.

Unlike oid-array, the crit-bit tree does not require sorting. Performance is bound by the key length, for oidtree that is fixed at sizeof(struct object_id). There's no need to have 256 oidtrees to mitigate the O(n log n) overhead like we did with oid-array.

Being a prefix trie, it is natively suited for expanding short object IDs via prefix-limited iteration in `find_short_object_filename'.

On my busy workstation, p4205 performance seems to be roughly unchanged (+/-8%). Startup with 100K total alternates with no loose objects seems around 10-20% faster on a hot cache. (800MB in memory savings means more memory for the kernel FS cache).

The generic cbtree implementation does impose some extra overhead for oidtree in that it uses memcmp(3) on "struct object_id" so it wastes cycles comparing 12 extra bytes on SHA-1 repositories. I've not yet explored reducing this overhead, but I expect there are many places in our code base where we'd want to investigate this.

More information on crit-bit trees: https://cr.yp.to/critbit.html

Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

oidtree: avoid unaligned access to crit-bit tree (2021-08-15)

The flexible array member "k" of struct cb_node is used to store the key of the crit-bit tree node. It offers no alignment guarantees -- in fact the current struct layout puts it one byte after a 4-byte aligned address, i.e. guaranteed to be misaligned.

oidtree uses a struct object_id as cb_node key. Since cf0983213c (hash: add an algo member to struct object_id, 2021-04-26) it requires 4-byte alignment. The mismatch is reported by UndefinedBehaviorSanitizer at runtime like this:

    hash.h:277:2: runtime error: member access within misaligned address 0x00015000802d for type 'struct object_id', which requires 4 byte alignment
    0x00015000802d: note: pointer points here
     00 00 00  00 00 00 00 00 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
                  ^
    SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior hash.h:277:2 in

We can fix that by:

1. eliminating the alignment requirement of struct object_id,
2. providing the alignment in struct cb_node, or
3. avoiding the issue by only using memcpy to access "k".

Currently we only store one of two values in "algo" in struct object_id. We could use a uint8_t for that instead and widen it only once we add support for our twohundredth algorithm or so. That would not only avoid alignment issues, but also reduce the memory requirements for each instance of struct object_id by ca. 9%.

Supporting keys with alignment requirements might be useful to spread the use of crit-bit trees. It can be achieved by using a wider type for "k" (e.g. uintmax_t), using different types for the members "byte" and "otherbits" (e.g. uint16_t or uint32_t for each), or by avoiding the use of flexible arrays like khash.h does.

This patch implements the third option, though, because it has the least potential for causing side-effects and we're close to the next release. If one of the other options is implemented later as well to get their additional benefits we can get rid of the extra copies introduced here.

Reported-by: Andrzej Hunt <andrzej@ahunt.org>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
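
To make the misalignment concrete, here is a small standalone sketch. This is illustration only: struct cb_node_demo mirrors what cbtree.h's struct cb_node is assumed to look like, and the offsets assume a typical LP64 platform.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* stand-in for cbtree.h's struct cb_node (assumed layout) */
    struct cb_node_demo {
    	struct cb_node_demo *child[2];
    	uint32_t byte;
    	uint8_t otherbits;
    	uint8_t k[];	/* flexible array member holding the key */
    };

    int main(void)
    {
    	/* on LP64: 2*8 + 4 + 1 = 21, one byte past a 4-byte boundary */
    	printf("k starts at offset %zu\n", offsetof(struct cb_node_demo, k));
    	return 0;
    }

With "k" starting one byte past a 4-byte boundary, casting it to struct object_id * would be undefined behavior, which is why oidtree_insert() and iter() below go through a correctly aligned local copy instead.
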
/*
 * A wrapper around cbtree which stores oids
 * May be used to replace oid-array for prefix (abbreviation) matches
 */
#include "oidtree.h"
#include "alloc.h"
#include "hash.h"

struct oidtree_iter_data {
	oidtree_iter fn;	/* callback invoked for each matching oid */
	void *arg;		/* opaque pointer passed through to fn */
	size_t *last_nibble_at;	/* hash[] offset of an odd trailing nibble, if any */
	int algo;		/* algorithm filter; GIT_HASH_UNKNOWN matches all */
	uint8_t last_byte;	/* query byte containing the trailing nibble */
};

void oidtree_init(struct oidtree *ot)
{
	cb_init(&ot->tree);
	mem_pool_init(&ot->mem_pool, 0);
}

void oidtree_clear(struct oidtree *ot)
{
	if (ot) {
		mem_pool_discard(&ot->mem_pool, 0);
		oidtree_init(ot);
	}
}

void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
{
	struct cb_node *on;
	struct object_id k;

	if (!oid->algo)
		BUG("oidtree_insert requires oid->algo");

	on = mem_pool_alloc(&ot->mem_pool, sizeof(*on) + sizeof(*oid));

	/*
	 * Clear the padding and copy the result in separate steps to
	 * respect the 4-byte alignment needed by struct object_id.
	 */
	oidcpy_with_padding(&k, oid);
	memcpy(on->k, &k, sizeof(k));

	/*
	 * n.b. Current callers won't get us duplicates, here.  If a
	 * future caller causes duplicates, there'll be a small leak
	 * that won't be freed until oidtree_clear.  Currently it's
	 * not worth maintaining a free list.
	 */
	cb_insert(&ot->tree, on, sizeof(*oid));
}

int oidtree_contains(struct oidtree *ot, const struct object_id *oid)
{
	struct object_id k;
	size_t klen = sizeof(k);

	oidcpy_with_padding(&k, oid);

	/* an unspecified algo matches any algo: drop it from the key */
	if (oid->algo == GIT_HASH_UNKNOWN)
		klen -= sizeof(oid->algo);

	/* cb_lookup relies on memcmp on the struct, so order matters: */
	klen += BUILD_ASSERT_OR_ZERO(offsetof(struct object_id, hash) <
				offsetof(struct object_id, algo));

	return cb_lookup(&ot->tree, (const uint8_t *)&k, klen) ? 1 : 0;
}

static enum cb_next iter(struct cb_node *n, void *arg)
{
	struct oidtree_iter_data *x = arg;
	struct object_id k;

	/* Copy to provide 4-byte alignment needed by struct object_id. */
	memcpy(&k, n->k, sizeof(k));

	if (x->algo != GIT_HASH_UNKNOWN && x->algo != k.algo)
		return CB_CONTINUE;

	if (x->last_nibble_at) {
		/* compare only the high nibble of the odd trailing hex char */
		if ((k.hash[*x->last_nibble_at] ^ x->last_byte) & 0xf0)
			return CB_CONTINUE;
	}

	return x->fn(&k, x->arg);
}

void oidtree_each(struct oidtree *ot, const struct object_id *oid,
		  size_t oidhexsz, oidtree_iter fn, void *arg)
{
	size_t klen = oidhexsz / 2;
	struct oidtree_iter_data x = { 0 };

	assert(oidhexsz <= GIT_MAX_HEXSZ);

	x.fn = fn;
	x.arg = arg;
	x.algo = oid->algo;

	/*
	 * An odd hex length leaves half a byte that cb_each cannot
	 * match; remember it so iter() can check the high nibble.
	 */
	if (oidhexsz & 1) {
		x.last_byte = oid->hash[klen];
		x.last_nibble_at = &klen;
	}
	cb_each(&ot->tree, (const uint8_t *)oid, klen, iter, &x);
}