Add two new helpers for allocating memory with btree locks held: the
idea is to first try the allocation with GFP_NOWAIT|__GFP_NOWARN, then
if that fails - unlock, retry with GFP_KERNEL, and then call
trans_relock().
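Roughly, the pattern the helpers encapsulate (a sketch - the helper
name and signature here are illustrative, not the actual interface):

  static void *alloc_dropping_locks(struct btree_trans *trans,
                                    size_t size, int *ret)
  {
          /* fast path: don't block while btree locks are held */
          void *p = kmalloc(size, GFP_NOWAIT|__GFP_NOWARN);

          *ret = 0;
          if (!p) {
                  bch2_trans_unlock(trans);
                  p = kmalloc(size, GFP_KERNEL);
                  /* on relock failure, the caller still owns @p */
                  *ret = p ? bch2_trans_relock(trans) : -ENOMEM;
          }
          return p;
  }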
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The promote path had a BUG_ON() for an unknown error type, which we're now
seeing: change it to a WARN_ON() - because we're curious what this is -
and otherwise handle it in the normal error path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
GFP_NOFS doesn't ever make sense. If we're allocating memory it should
be GFP_NOWAIT if btree locks are held, GFP_KERNEL otherwise.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When allocating memory, gfp flags should generally be
- GFP_NOWAIT|__GFP_NOWARN if btree locks are held
- GFP_NOFS if in the IO path or otherwise holding resources needed for
IO submission
- GFP_KERNEL otherwise
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new helper for the common pattern of:
- trans_unlock()
- do something
- trans_relock()
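Something like this, as a sketch (the real helper may differ in name
and detail):

  /*
   * Run an expression with btree locks dropped, then relock; if the
   * relock fails (e.g. a transaction restart), that error wins.
   */
  #define drop_locks_do(_trans, _do)                              \
  ({                                                              \
          bch2_trans_unlock(_trans);                              \
          (_do) ?: bch2_trans_relock(_trans);                     \
  })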
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
GFP_NOIO dates from the bcache days, when we operated under the block
layer. Now, GFP_NOFS is more appropriate, so switch all GFP_NOIO uses to
GFP_NOFS.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fix a bug where bch2_btree_node_get() might call bch2_trans_unlock() (in
fill) without calling bch2_trans_relock(); this is a bug when it's done
in the core btree code.
Also, tweak bch2_btree_node_mem_alloc() to drop btree locks before doing
a blocking memory allocation.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've been using __GFP_NOFAIL for allocating struct bch_folio, our
private per-folio state.
However, that struct is variable size - it holds state for each sector
in the folio, and folios can be quite large now, which means it's
possible for bch_folio to be larger than PAGE_SIZE.
__GFP_NOFAIL allocations are undesirable in normal circumstances, but
particularly so at >= PAGE_SIZE, and warnings are emitted for that.
So, this patch adds proper error paths and eliminates most uses of
__GFP_NOFAIL. Also, do some more cleanup of gfp flags w.r.t. btree node
locks: we can use GFP_KERNEL, but only if we're not holding btree locks,
and if we are holding btree locks we should be using GFP_NOWAIT.
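Concretely, the allocation site ends up looking something like this (a
sketch - struct layout abbreviated and helper names assumed):

  struct bch_folio {
          spinlock_t              lock;
          atomic_t                write_count;
          struct bch_folio_sector s[];    /* one entry per sector */
  };

  static struct bch_folio *__bch2_folio_create(struct folio *folio,
                                               gfp_t gfp)
  {
          /* may exceed PAGE_SIZE for large folios - hence a real
           * error path instead of __GFP_NOFAIL */
          struct bch_folio *s = kzalloc(sizeof(*s) +
                                sizeof(struct bch_folio_sector) *
                                folio_sectors(folio), gfp);
          if (!s)
                  return NULL;

          spin_lock_init(&s->lock);
          folio_attach_private(folio, s);
          return s;
  }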
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When partially overwriting an extent in an older snapshot, the existing
extent has to be split.
If the existing extent was overwritten in a different (sibling)
snapshot, we have to ensure that the split won't be visible in the
sibling snapshot.
data_update.c already has code for this,
bch2_insert_snapshot_whiteouts() - we just need to move it into
btree_update_leaf.c and change bch2_trans_update_extent() to use it as
well.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As with previous conversions, replace -ENOENT uses with more informative
private error codes.
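For example (the specific error code name here is illustrative):

  /* before: all lookup failures look alike */
  return -ENOENT;

  /* after: the failure identifies its source; private error codes
   * are mapped back to -ENOENT at the syscall boundary */
  return -BCH_ERR_ENOENT_subvolume;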
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_btree_trans_to_text() is used on btree_trans objects that are owned
by different threads - when printing out deadlock cycles - so we need a
safe version of trans_for_each_path(), else we race with seeing a
btree_path that was just allocated and not fully initialized.
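Conceptually, the safe iterator looks something like this (a sketch -
field names and the bitmap representation are assumed):

  /*
   * Walk paths by index, rechecking the allocated bitmap on each
   * iteration, so a path that was concurrently allocated but not
   * yet initialized is skipped rather than dereferenced.
   */
  #define trans_for_each_path_safe(_trans, _path, _idx)                 \
          for (_idx = 0; _idx < (_trans)->nr_paths; _idx++)             \
                  if (((_trans)->paths_allocated & (1ULL << (_idx))) && \
                      (_path = (_trans)->paths + (_idx), true))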
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_fs_quota_read() could see an inode that's been deleted
(KEY_TYPE_inode_generation) - bch2_fs_quota_read_inode() needs to check
for that instead of erroring.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fail counters need to be events, not numbers of sectors - otherwise
the calculations the tests use for determining if we've had too many
slowpath events don't work.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've been seeing difficult-to-debug "missing indirect extent" bugs
that fsck doesn't seem to find.
One possibility is that there was a missing indirect extent, but then a
new indirect extent was created at the location of the previous indirect
extent.
This patch eliminates that possibility by always creating new indirect
extents right after the last one, at the end of the reflink btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add some more tests that exercise the conventional and weighted mean
simultaneously, with a table of values representing the events we'll
be using this to look for, so we can verify by eyeball that the
output looks sane.
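For reference, the weighted mean being tested is the standard
exponentially-weighted update (a sketch; the in-tree version also
tracks variance and uses fixed-point arithmetic):

  /* fold a new sample into a running mean with weight 1/2^w */
  static inline s64 ewma_add(s64 mean, s64 val, unsigned w)
  {
          return mean + ((val - mean) >> w);
  }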
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When running in userspace, we don't have a real percpu
implementation available - at least in bcachefs-tools, which is where
this code is currently used in userspace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This switches to a newer cmpxchg variant which updates @old for us on
failure, simplifying the cmpxchg loops a bit and supposedly generating
better code.
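The shape of the conversion, on a generic example rather than a
specific call site from this patch:

  /* before: cmpxchg() returns the old value, compared by hand */
  v = atomic64_read(&lock->state);
  do {
          old = v;
          new = old | mask;
  } while ((v = atomic64_cmpxchg(&lock->state, old, new)) != old);

  /* after: try_cmpxchg() reloads @old for us on failure */
  old = atomic64_read(&lock->state);
  do {
          new = old | mask;
  } while (!atomic64_try_cmpxchg(&lock->state, &old, new));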
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In the conversion to atomic_t, six_lock_slowpath() ended up calling
six_lock_wakeup() in the failure path with a state variable that was
never initialized - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since we're not generating different versions of the lock functions for
each lock type, the constant propagation we were trying to do before is
no longer useful - this is now a small code size decrease.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This deletes the crazy cast-atomic-to-unsigned-long hacks, replacing
them with atomic_and() and atomic_or().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
lock->state.seq is shortly being moved out of lock->state, to kill the
dependency on atomic64; in preparation for that, we change the write
locking bit to a write locked bit.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The next patch is going to move lock->seq out of lock->state. This
replaces six_relock() with a much simpler implementation based on
trylock.
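Conceptually, the new implementation is just (a sketch; function
names abbreviated):

  /*
   * Relock succeeds only if we get the lock and the sequence number
   * (bumped whenever the lock is write locked) hasn't changed, i.e.
   * nothing was modified since the caller dropped the lock.
   */
  bool six_relock(struct six_lock *lock, enum six_lock_type type, u32 seq)
  {
          if (lock->seq != seq || !six_trylock(lock, type))
                  return false;

          if (lock->seq != seq) {
                  six_unlock(lock, type);
                  return false;
          }
          return true;
  }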
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- Expanded and revamped overview documentation in six.h, giving an
overview of all features
- docbook-comments for all external interfaces
- Rename some functions for simplicity, e.g.
six_lock_ip_type() -> six_lock_ip()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As suggested by Linus, this drops the six_lock_state union in favor of
raw bitmasks.
On the one hand, bitfields give more type-level structure to the code.
However, a significant amount of the code was working with
six_lock_state as a u64/atomic64_t, and the conversions from the
bitfields to the u64 were deemed a bit too out-there.
More significantly, because bitfield order is poorly defined (#ifdef
__LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the
sequence number would overflow into the rest of the bitfield if the
compiler didn't put the sequence number at the high end of the word.
The new code is a bit saner when we're on an architecture without real
atomic64_t support - all accesses to lock->state now go through
atomic64_*() operations.
On architectures with real atomic64_t support, we additionally use
atomic bit ops for setting/clearing individual bits.
Text size: 7467 bytes -> 4649 bytes - compilers still suck at
bitfields.
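The flavor of the change (masks illustrative - widths and positions
here are assumed, not the actual layout):

  /* lock->state is now a bare atomic64_t, with explicit masks: */
  #define SIX_LOCK_HELD_read      ((1ULL << 26) - 1)  /* reader count */
  #define SIX_LOCK_HELD_intent    (1ULL << 26)
  #define SIX_LOCK_HELD_write     (1ULL << 27)
  #define SIX_LOCK_SEQ_SHIFT      32                  /* seq in high bits */

  static inline u32 six_state_seq(u64 state)
  {
          return state >> SIX_LOCK_SEQ_SHIFT;
  }

With the seq in the topmost bits, incrementing it (adding
1ULL << SIX_LOCK_SEQ_SHIFT) simply wraps at the top of the word
instead of overflowing into neighboring fields.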
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Originally, we used inlining/flattening to cause the compiler to
generate different versions of lock/trylock/relock/unlock for each lock
type - read, intent, and write. This made the individual functions
smaller and let the compiler eliminate table lookups: however, as the
code has gotten more complicated these optimizations have gotten less
worthwhile, and all the tricky inlining and dispatching made the code
less readable.
Text size: 11015 bytes -> 7467 bytes, and benchmarks show no loss of
performance.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Originally, the waiting bit was always set by trylock() on failure:
however, it's now set by __six_lock_type_slowpath(), with wait_lock held
- which is the more correct place to do it.
That made setting the waiting bit in trylock redundant, so this patch
deletes that.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The lost wakeup bug hasn't been observed in a while, and we're trying to
provoke it and determine if it still exists.
This patch removes some defenses that were added to attempt to track it
down; if it still exists, this should make it easier to see it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate
or free the percpu reader count on an existing lock that's in use; the
only safe time to allocate percpu readers is when the lock is first
being initialized.
This patch adds a flags parameter to six_lock_init(), and instead of
six_lock_pcpu_free() we now expose six_lock_exit(), which does the same
thing but is less likely to be misused.
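Usage after this change looks roughly like (flag name assumed):

  struct six_lock lock;

  six_lock_init(&lock, SIX_LOCK_INIT_PCPU); /* percpu readers from birth */
  /* ... use the lock ... */
  six_lock_exit(&lock);                     /* frees the percpu read counts */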
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This moves a helper out of the bcachefs code that shouldn't have been
there, since it touches six lock internals.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We were copying a struct bch_fs_usage_online's worth of data into a
struct bch_fs_usage, which is 8 bytes smaller.
This adds some new helpers so we can do this correctly, and get rid of
some magic +1s too.
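For context, the relationship between the two structs (layout per the
commit message; field names assumed):

  /*
   * bch_fs_usage_online wraps bch_fs_usage plus one extra u64 -
   * hence the 8 byte difference, and why copying with the wrong
   * sizeof overruns the destination.
   */
  struct bch_fs_usage_online {
          u64                     online_reserved;
          struct bch_fs_usage     u;
  };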
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With the recent bkey_ops.min_val_size addition, bkey values are
automatically extended to the size of the current version.
The check in bch2_alloc_v4_invalid() needs to be updated to take this
into account.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This deletes a bch2_trans_unlock() call from __bch2_move_data(). It was
redundant; bch2_move_extent() has the correct unlock call, and it was
buggy because when move_extent calls bch2_extent_drop_ptrs() we don't
want the transaction to be unlocked yet - this fixes a btree_iter.c
assertion.
Fixes https://github.com/koverstreet/bcachefs/issues/511.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Small performance optimization: an open-coded loop is better than
rep ; movsq for small copies.
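The copy in question, sketched (bcachefs copies btree keys in u64
units; helper name illustrative):

  /* for small u64-aligned copies this beats rep ; movsq, which has
   * significant startup overhead */
  static inline void copy_u64s_small(u64 *dst, const u64 *src,
                                     unsigned u64s)
  {
          while (u64s--)
                  *dst++ = *src++;
  }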
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A user hit this BUG_ON() - it's unclear how it happened, so replace it
with a fatal error that will cause us to go read only, and print out
more information.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_replicas_gc_(start|end) is now only used for journal replicas
entries, which don't have bucket sector counts - so this code is
entirely dead and can be deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The journal write submission path marks the associated replica
entries for journal data in journal_write_done(), which is just
after journal write bio submission. This creates a small window
where journal entries might have been written out, but the
associated replica is not marked such that recovery does not know
that the associated device contains journal data.
Move the replica marking a bit earlier in the write path such that
recovery is guaranteed to recognize that the device contains journal
data in the event of a crash.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we can reliably designate and find the master subvolume out of
a tree of snapshots, we can finally make quotas work with snapshots.
That is: quotas will now _ignore_ snapshot subvolumes, and only be in
effect for the master (non-snapshot) subvolume.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add two new fields to bch_subvolume:
- otime: creation time
- parent: For snapshots, this is the id of the subvolume the snapshot
was created from
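Sketched (field types and placement assumed):

  struct bch_subvolume {
          struct bch_val          v;
          __le32                  flags;
          __le32                  snapshot;
          __le64                  inode;
          /* new fields: */
          __le32                  parent; /* snapshot source, 0 if none */
          __le32                  pad;
          __le64                  otime;  /* creation time */
  };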
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new btree which gets us a persistent per-snapshot-tree
identifier.
- BTREE_ID_snapshot_trees
- KEY_TYPE_snapshot_tree
- bch_snapshot now has a field that points to a snapshot_tree
This is going to be used to designate one snapshot ID/subvolume out of a
given tree of snapshots as the "main" subvolume, so that we can do quota
accounting in that subvolume and not the rest.
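A sketch of the new key type and the back-pointer from bch_snapshot
(field names assumed):

  /* keys live in BTREE_ID_snapshot_trees; the key's position is the
   * persistent tree id */
  struct bch_snapshot_tree {
          struct bch_val          v;
          __le32                  master_subvol;  /* the "main" subvolume */
          __le32                  root_snapshot;
  };

  /* bch_snapshot gains a field pointing at its tree: */
  struct bch_snapshot {
          struct bch_val          v;
          __le32                  flags;
          __le32                  parent;
          __le32                  children[2];
          __le32                  subvol;
          __le32                  tree;           /* new */
  };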
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>