Commit Graph

85096 Commits

Kent Overstreet
6fed42bb77 bcachefs: Plumb through subvolume id
To implement snapshots, we need every filesystem btree operation (every
btree operation within a subvolume) to start by looking up the
subvolume and getting the current snapshot ID, with
bch2_subvolume_get_snapshot() - then, that snapshot ID is used for doing
btree lookups in BTREE_ITER_FILTER_SNAPSHOTS mode.

This patch adds those bch2_subvolume_get_snapshot() calls, and also
switches to passing around a subvol_inum instead of just an inode
number.
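
A rough sketch of the resulting pattern (illustrative only - the iterator
helpers, SPOS() usage, and error handling here are approximations, not the
literal patch):

    /*
     * Sketch: a filesystem operation is handed a subvol_inum (subvolume +
     * inode number) instead of a bare inode number, resolves the subvolume
     * to its current snapshot ID first, and then does all of its btree
     * lookups filtered on that snapshot.
     */
    static int lookup_inode_sketch(struct btree_trans *trans, subvol_inum inum)
    {
        struct btree_iter iter;
        struct bkey_s_c k;
        u32 snapshot;
        int ret;

        ret = bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot);
        if (ret)
            return ret;

        /* all further lookups in this operation use that snapshot: */
        bch2_trans_iter_init(trans, &iter, BTREE_ID_inodes,
                             SPOS(0, inum.inum, snapshot),
                             BTREE_ITER_FILTER_SNAPSHOTS);
        k = bch2_btree_iter_peek_slot(&iter);
        ret = bkey_err(k);
        bch2_trans_iter_exit(trans, &iter);
        return ret;
    }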

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
c075ff700f bcachefs: BTREE_ITER_FILTER_SNAPSHOTS
For snapshots, we need to implement btree lookups that return the first
key whose snapshot is an ancestor of the snapshot ID the lookup is being
done in -
and filter out keys in unrelated snapshots. This patch adds the btree
iterator flag BTREE_ITER_FILTER_SNAPSHOTS which does that filtering.
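
Conceptually, the visibility test behind the flag is something like the
following (a sketch: bch2_snapshot_is_ancestor() is assumed to be the
ancestry check, and the real filtering lives inside the iterator code):

    /*
     * Sketch: a key is visible to a lookup done in 'snapshot' if the key's
     * snapshot is 'snapshot' itself or one of its ancestors; with
     * BTREE_ITER_FILTER_SNAPSHOTS the iterator skips everything else and
     * returns the first visible key.
     */
    static bool key_visible_in_snapshot(struct bch_fs *c, struct bkey_s_c k,
                                        u32 snapshot)
    {
        return bch2_snapshot_is_ancestor(c, snapshot, k.k->p.snapshot);
    }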

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
284ae18c1d bcachefs: Add subvolume to ei_inode_info
Filesystem operations generally operate within a subvolume: at the start
of every btree transaction we'll be looking up (and locking) the
subvolume to get the current snapshot ID, which we then use for our
other btree lookups in BTREE_ITER_FILTER_SNAPSHOTS mode.

But inodes don't record what subvolume they're in - they can't, because
if they did we'd have to update every single inode within a subvolume
when taking a snapshot in order to keep that field up to date. So it
needs to be tracked in memory, based on how we got to that inode.

Hence this patch adds a subvolume field to ei_inode_info, and switches
to iget5() so we can index by it in the inode hash table.
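
On the VFS side this is the standard iget5_locked() pattern, keyed on
(subvolume, inode number) rather than the inode number alone. A sketch -
the hash helper and the ei_subvol field name are assumptions, not taken
from the patch:

    /* Sketch: inode cache lookups now compare subvolume as well as inum. */
    static int bch2_test_inode_sketch(struct inode *vinode, void *p)
    {
        struct bch_inode_info *inode = to_bch_ei(vinode);
        subvol_inum *inum = p;

        return inode->ei_subvol == inum->subvol &&
               inode->v.i_ino   == inum->inum;
    }

    static int bch2_set_inode_sketch(struct inode *vinode, void *p)
    {
        struct bch_inode_info *inode = to_bch_ei(vinode);
        subvol_inum *inum = p;

        inode->ei_subvol = inum->subvol;
        inode->v.i_ino   = inum->inum;
        return 0;
    }

    static struct inode *bch2_inode_hash_get_sketch(struct super_block *sb,
                                                    subvol_inum inum)
    {
        return iget5_locked(sb, subvol_inum_hash(inum), /* hash fn assumed */
                            bch2_test_inode_sketch,
                            bch2_set_inode_sketch, &inum);
    }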

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
81ed9ce367 bcachefs: Per subvolume lost+found
On existing filesystems, we have a single global lost+found. Introducing
subvolumes means we need to introduce per subvolume lost+found
directories, because inodes are added to lost+found by their inode
number, and inode numbers are now only unique within a subvolume.

This patch adds support to fsck for per subvolume lost+found.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
b9e1adf579 bcachefs: Add support for dirents that point to subvolumes
Dirents currently always point to inodes. Subvolumes add a new type of
dirent, with d_type DT_SUBVOL, that instead points to an entry in the
subvolumes btree, and the subvolume has a pointer to the root inode.

This patch adds bch2_dirent_read_target() to get the inode (and
potentially subvolume) a dirent points to, and changes existing code to
use that instead of reading from d_inum directly.
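
Schematically, resolving a dirent's target now has two cases (a sketch, not
the real bch2_dirent_read_target(): the subvolume-lookup helper is assumed,
as is the reuse of d_inum for the subvolume ID in DT_SUBVOL dirents):

    /*
     * Sketch: an ordinary dirent points straight at an inode number; a
     * DT_SUBVOL dirent points at a subvolume, whose btree entry records
     * the subvolume's root inode.
     */
    static int dirent_target_sketch(struct btree_trans *trans,
                                    struct bkey_s_c_dirent d, u64 *target)
    {
        if (d.v->d_type != DT_SUBVOL) {
            *target = le64_to_cpu(d.v->d_inum);
            return 0;
        }

        /* d_inum holds a subvolume ID here; resolve it to the root inode: */
        return subvolume_get_root_inum(trans,
                                       le64_to_cpu(d.v->d_inum), target);
    }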

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
14b393ee76 bcachefs: Subvolumes, snapshots
This patch adds subvolume.c - support for the subvolumes and snapshots
btrees and related data types and on disk data structures. The next
patches will start hooking up this new code to existing code.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
8948fc8f15 bcachefs: Disable quota support
Existing quota support breaks badly with snapshots. We're not deleting
the code because some of it will be needed when we reimplement quotas
along the lines of btrfs subvolume quotas.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
3074bc0f7d Revert "bcachefs: Add more assertions for locking btree iterators out of order"
Figured out the bug we were chasing, and it had nothing to do with
locking btree iterators/paths out of order.

This reverts commit ff08733dd298c969aec7c7828095458f73fd5374.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
aae4eea60c bcachefs: Improve btree_node_mem_ptr optimization
This patch checks b->hash_val before attempting to lock the node in the
btree, which makes it more equivalent to the "lookup in hash table"
path - and potentially avoids an unnecessary transaction restart if
btree_node_mem_ptr(k) no longer points to the node we want.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
aa76bd3321 bcachefs: Add a missing bch2_trans_relock() call
This was causing an assertion to pop in fsck, in one of the repair
paths.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
c79272d1e4 bcachefs: Fix some compiler warnings
gcc couldn't always deduce that 'written' wasn't used uninitialized.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
5b5b03e732 bcachefs: Add missing BTREE_ITER_INTENT
No reason not to be using it here...

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
caaa66aa54 bcachefs: Better approach to write vs. read lock deadlocks
Instead of unconditionally upgrading read locks to intent locks in
do_bch2_trans_commit(), this patch changes the path that takes write
locks to first trylock, and then if trylock fails check if we have a
conflicting read lock, and restart the transaction if necessary.
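
In outline, the write-lock path becomes something like this (a sketch, not
the actual do_bch2_trans_commit() code; the conflicting-read-lock check is
an assumed helper, and function names are approximate):

    /*
     * Sketch: trylock the write lock first; only on failure check whether
     * this transaction itself holds a read lock on the node (which would
     * deadlock), and restart if so - otherwise block normally.
     */
    static int btree_node_lock_for_insert_sketch(struct btree_trans *trans,
                                                 struct btree *b)
    {
        if (six_trylock_write(&b->c.lock))
            return 0;

        if (trans_holds_read_lock(trans, b))    /* assumed helper */
            return btree_trans_restart(trans);

        six_lock_write(&b->c.lock, NULL, NULL);
        return 0;
    }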

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
b301105b48 bcachefs: normalize_read_intent_locks
This is a new approach to avoiding the self deadlock we'd get if we
tried to take a write lock on a node while holding a read lock - we
simply upgrade the readers to intent.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
8ee0134e03 bcachefs: Consolidate intent lock code in btree_path_up_until_good_node
We need to take all needed intent locks when relocking an iterator:
bch2_btree_path_traverse() had a special-cased, faster version of this,
but it really should be in up_until_good_node() so that set_pos() can
use it too.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
db92f2ea5e bcachefs: Optimize btree lookups in write path
This patch significantly reduces the number of btree lookups required in
the extent update path.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
c404f20386 bcachefs: Add a missing btree_path_make_mut() call
Also add another small helper, btree_path_clone().

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:12 -04:00
Kent Overstreet
8ffa63cd7e bcachefs: Enabled shard_inode_numbers by default
We'd like performance-increasing options to be on by default.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
cf3c68cda6 bcachefs: No need to clone iterators for update
Since btree_path is now internally refcounted, we don't need to clone an
iterator before calling bch2_trans_update() if we'll be mutating it.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
22b383ad7e bcachefs: Kill retry loop in btree merge path
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
f48361b00c bcachefs: Drop some fast path tracepoints
These haven't turned out to be useful

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
1d3ecd7ea7 bcachefs: Tighten up btree locking invariants
New rule is: if a btree path holds any locks it should be holding
precisely the locks wanted (according to path->level and
path->locks_want).
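
Stated as an assertion, the invariant is roughly the following (a sketch;
the has-any-locks helper is assumed):

    /*
     * Sketch: a path either holds no locks at all, or holds a lock on
     * every level from path->level up to (but not including)
     * path->locks_want.
     */
    static void btree_path_verify_locks_sketch(struct btree_path *path)
    {
        unsigned l;

        if (!btree_path_has_locks(path))        /* assumed helper */
            return;

        for (l = path->level; l < path->locks_want; l++)
            BUG_ON(!btree_node_locked(path, l));
    }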

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
1ae29c1faa bcachefs: Extent btree iterators are no longer special
Since iter->real_pos was introduced, we no longer have to deal with
extent btree iterators that have skipped past deleted keys - this is a
real performance improvement on btree updates.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
068bcaa589 bcachefs: Add more assertions for locking btree iterators out of order
btree_path_traverse_all() traverses btree iterators in sorted order, and
thus shouldn't see transaction restarts due to potential deadlocks - but
sometimes we do. This patch adds some more assertions and tracks some
more state to help track this down.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
807dda8c83 bcachefs: Kill bpos_diff() XXX check for perf regression
This improves the btree iterator lookup code by using
trans_for_each_iter_inorder().

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
67e0dd8f0d bcachefs: btree_path
This splits btree_iter into two components: btree_iter is now the
externally visible component, and it points to a btree_path which is now
reference counted.

This means we no longer have to clone iterators up front if they might
be mutated - btree_path can be shared by multiple iterators, and cloned
if an iterator would mutate a shared btree_path. This will help us use
iterators more efficiently, as well as slimming down the main long lived
state in btree_trans, and significantly cleans up the logic for iterator
lifetimes.
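
The split, in rough outline (fields heavily simplified; the clone-on-mutate
helpers are named after the ones that appear later in this series, see
c404f20386, with approximate signatures):

    /*
     * Sketch: btree_iter is the externally visible handle; btree_path is
     * the refcounted traversal state it points to, which may be shared
     * between several iterators.
     */
    struct btree_path_sketch {
        u8                          ref;        /* total references */
        u8                          intent_ref; /* references wanting intent locks */
        /* position, locks held, pointers to btree nodes, ... */
    };

    struct btree_iter_sketch {
        struct btree_trans          *trans;
        struct btree_path_sketch    *path;      /* shared; cloned before mutation */
        /* iteration flags, snapshot, ... */
    };

    /* Clone-on-mutate: get a private copy before changing a shared path. */
    static struct btree_path_sketch *
    btree_path_make_mut_sketch(struct btree_trans *trans,
                               struct btree_path_sketch *path)
    {
        if (path->ref > 1)
            path = btree_path_clone(trans, path);   /* see c404f20386 */
        return path;
    }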

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22 17:09:11 -04:00
Kent Overstreet
8f54337dc6 bcachefs: Fix initialization of bch_write_op.nonce
If an extent ends up with a replica that is encrypted and a replica that
isn't encrypted (due to the user changing options), and then
copygc/rebalance moves one of the replicas by reading from the
unencrypted replica, we had a bug where we wouldn't correctly initialize
op->nonce - for each crc field in an extent, crc.offset + crc.nonce must
be equal.

This patch fixes that by moving op.nonce initialization to
bch2_migrate_write_init.
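
Written out as a check, the invariant being restored is simply this (a
sketch based only on the rule stated above - not how the write path
actually iterates over an extent's crcs):

    /*
     * Sketch: crc.offset + crc.nonce must agree across all the crc fields
     * in an extent.
     */
    static bool extent_nonces_consistent_sketch(const struct bch_extent_crc_unpacked *crc,
                                                unsigned nr)
    {
        unsigned i;

        for (i = 1; i < nr; i++)
            if (crc[i].offset + crc[i].nonce !=
                crc[0].offset + crc[0].nonce)
                return false;
        return true;
    }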

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
fbf14104da bcachefs: Improve an error message
When we detect an invalid key being inserted, we should print what code
was doing the update.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
cab8e23373 bcachefs: Add an assertion for removing btree nodes from cache
Chasing a bug that has something to do with the btree node cache.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
f21566f17a bcachefs: Kill BTREE_ITER_NODES
We really only need to distinguish between btree iterators and btree key
cache iterators - this is more prep work for btree_path.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
deb0e573b4 bcachefs: Kill BTREE_ITER_NEED_PEEK
This was used for an optimization that hasn't existed in quite a while
- iter->uptodate will probably be going away as well.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
6fba6b83b4 bcachefs: Prefer using btree_insert_entry to btree_iter
This moves some data dependencies forward, to improve pipelining.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
a0a568794d bcachefs: More renaming
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
f7a966a3e2 bcachefs: Clean up/rename bch2_trans_node_* fns
These utility functions are for managing btree node state within a
btree_trans - rename them for consistency, and drop some unneeded
arguments.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
78cf784eaa bcachefs: Further reduce iter->trans usage
This is prep work for splitting btree_path out from btree_iter -
btree_path will not have a pointer to btree_trans.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
05046a962f bcachefs: Better algorithm for btree node merging in write path
The existing algorithm was O(n^2) in the number of updates in the
commit.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
5f8077cca8 bcachefs: Kill BTREE_ITER_SET_POS_AFTER_COMMIT
BTREE_ITER_SET_POS_AFTER_COMMIT is used internally to automagically
advance extent btree iterators on successful commit.

But with the upcoming btree_path patch it's getting more awkward to
support, and it adds overhead to core data structures for something
that's only used in a few places and can easily be done by the caller
instead.
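
What callers do instead is simple enough (a sketch):

    /*
     * Sketch: rather than relying on the iterator flag, the caller
     * advances its extent iterator itself after a successful commit.
     */
    static int commit_and_advance_sketch(struct btree_trans *trans,
                                         struct btree_iter *iter,
                                         struct bkey_i *k)
    {
        int ret = bch2_trans_commit(trans, NULL, NULL, 0);

        if (!ret)
            bch2_btree_iter_set_pos(iter, k->k.p);
        return ret;
    }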

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
638c6ff951 bcachefs: Refactor bch2_trans_update_extent()
This consolidates the code for doing extent updates, and makes the btree
iterator usage a bit cleaner and more efficient.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:11 -04:00
Kent Overstreet
9f6bd30703 bcachefs: Reduce iter->trans usage
Disfavoured, and should go away.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
84841b0d13 bcachefs: bch2_dump_trans_iters_updates()
This factors out bch2_dump_trans_iters_updates() from the iter alloc
overflow path, and makes some small improvements to what it prints.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
e6e024e9bf bcachefs: Ensure iter->real_pos is consistent with key returned
iter->real_pos needs to match the key returned or bad things will happen
when we go to update the key at that position. When we returned a
pending update from btree_trans_peek_updates(), this wasn't necessarily
the case.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
1865ccff15 bcachefs: Add SPOS_MAX to bpos_to_text()
Better pretty printing ftw

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
dc02bed6d9 bcachefs: Free iterator if we have duplicate
This helps - but does not fully fix - the outstanding "transaction
iterator overflow" bugs.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
f4ccfe07e2 bcachefs: Fix unhandled transaction restart in bch2_gc_btree_gens()
This fixes https://github.com/koverstreet/bcachefs/issues/305

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Brett Holman
8dd6ed9451 bcachefs: add progress stats to sysfs
This adds progress stats to sysfs for copygc, rebalance, recovery, and the
cmd_job ioctls.

Signed-off-by: Brett Holman <bholman.devel@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22 17:09:10 -04:00
Brett Holman
fd0bd123d5 bcachefs: Fix 32 bit build failures
This fix replaces multiple 64-bit divisions with do_div() equivalents.
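
For reference, do_div() is the standard kernel helper here: it divides a
u64 by a u32 in place (returning the remainder), avoiding the __udivdi3
libgcc call that a plain 64-bit division generates on 32-bit targets. A
sketch with made-up variable names:

    #include <asm/div64.h>

    /* Sketch: "sectors / bucket_size" rewritten for 32-bit builds. */
    static u64 sectors_to_buckets_sketch(u64 sectors, u32 bucket_size)
    {
        do_div(sectors, bucket_size);   /* sectors now holds the quotient */
        return sectors;
    }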

Signed-off-by: Brett Holman <bholman.devel@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22 17:09:10 -04:00
Kent Overstreet
28624ba424 bcachefs: Be sure to check ptr->dev in copygc pred function
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
62df3d443c bcachefs: Disk space accounting fix
DIV_ROUND_UP() wasn't doing what we wanted when passing it negative
numbers - fix it by just not passing it negative numbers anymore.
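
For context, DIV_ROUND_UP(n, d) expands to ((n) + (d) - 1) / (d), which
only behaves as a round-up for non-negative n. A worked example with
made-up names, not code from the patch:

    /*
     * With a negative numerator, C's truncate-toward-zero division gives:
     *
     *   DIV_ROUND_UP(-10, 8) == (-10 + 7) / 8 == -3 / 8 == 0
     *
     * whereas the mathematical ceiling of -10/8 is -1; and if the
     * expression is evaluated in an unsigned type, the negative value
     * wraps and the result is nonsense. Keeping the argument non-negative
     * avoids both problems:
     */
    static u64 sectors_round_up_sketch(s64 sectors, unsigned block_sectors)
    {
        if (WARN_ON(sectors < 0))
            sectors = 0;

        return DIV_ROUND_UP((u64) sectors, block_sectors);
    }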

Also, no need to do the scaling by compression ratio for incompressible
data.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
8ddef4d6cc bcachefs: Fix a valgrind conditional jump
Valgrind was complaining about a jump depending on uninitialized memory
- we weren't, but this change makes the code less confusing for valgrind
to follow.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00
Kent Overstreet
c8476a4eb2 bcachefs: Minor btree iter refactoring
This makes the flow control in bch2_btree_iter_peek() and
bch2_btree_iter_peek_prev() a bit cleaner.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22 17:09:10 -04:00