If a device has keys in the bucket_gens btree associated with its
buckets and is removed from a bcachefs volume, fsck will complain
about the presence of keys associated with an invalid device index.
A repair removes the associated keys and restores correctness.
Update bch2_dev_remove_alloc() to remove device-related keys at
device removal time to avoid the problem.
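The change amounts to something like the following sketch; the helper
name is illustrative, while bch2_btree_delete_range() and
BTREE_ID_bucket_gens are taken from the tree:

  /*
   * Sketch: delete every bucket_gens key belonging to the departing
   * device. The key's inode field holds the device index.
   */
  static int dev_remove_bucket_gens(struct bch_fs *c, unsigned dev_idx)
  {
          return bch2_btree_delete_range(c, BTREE_ID_bucket_gens,
                                         POS(dev_idx, 0),
                                         POS(dev_idx + 1, 0),
                                         0, NULL);
  }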
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fsck removes bucket_gens keys for devices that do not exist in the
volume (i.e., if the device was removed). In 'fsck -n' mode, the
associated fsck_err_on() wrapper returns false to skip the key
removal. Execution then proceeds to the rest of the function, which
eventually segfaults on a NULL bch_dev because the device does not
exist.
Update bch2_check_bucket_gens_key() to skip out of the rest of the
function when the associated device does not exist, regardless of
running fsck in check or repair mode.
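In code, the fix looks roughly like this (sketch; repair details
abbreviated from the tree):

  if (fsck_err_on(!bch2_dev_exists2(c, k.k->p.inode), c,
                  "bucket_gens key for nonexistent device %llu",
                  k.k->p.inode)) {
          ret = bch2_btree_delete_at(trans, iter, 0);
          goto out;
  }

  /*
   * -n mode: fsck_err_on() returned false and the key was kept,
   * but there is still no bch_dev to validate against below.
   */
  if (!bch2_dev_exists2(c, k.k->p.inode))
          goto out;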
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Create a small helper to translate from file offset to the
associated bch_folio_sector index in the underlying bch_folio. The
helper assumes the file offset is covered by the passed folio.
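A sketch of such a helper (assuming 512-byte sectors via SECTOR_SHIFT):

  static inline int folio_pos_to_s(struct folio *folio, loff_t pos)
  {
          /* the caller guarantees pos lies within this folio */
          BUG_ON(pos < folio_pos(folio) ||
                 pos >= folio_pos(folio) + folio_size(folio));

          return (pos - folio_pos(folio)) >> SECTOR_SHIFT;
  }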
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It turns out that, in rare situations, we need to be passing in a disk
reservation, which will be used internally by the transaction commit
path when needed. Pass one in...
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In __bch2_alloc_to_v4_mut(), we overrun the buffer we allocate if the
alloc key has backpointers stored in it (which we no longer support).
Fix this with a max() call.
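The sizing fix is roughly this (sketch; allocation helper as used
elsewhere in the tree):

  /*
   * Size the buffer for whichever is larger: the source key, which
   * may still carry legacy backpointers, or a bare v4 key.
   */
  unsigned bytes = max_t(unsigned, bkey_bytes(k.k),
                         sizeof(struct bkey_i_alloc_v4));
  struct bkey_i_alloc_v4 *ret = bch2_trans_kmalloc(trans, bytes);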
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This changes the ask_yn() function used by fsck to accept Y or N,
meaning yes or no for all errors of a given type.
With this, the user can be prompted only for distinct error types -
useful when a filesystem has lots of errors.
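The answer set then looks something like this (sketch; treat the names
as approximations of the tree):

  enum ask_yn {
          YN_NO,          /* 'n': no, this error only */
          YN_YES,         /* 'y': yes, this error only */
          YN_ALLNO,       /* 'N': no to all errors of this type */
          YN_ALLYES,      /* 'Y': yes to all errors of this type */
  };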
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Some of the folio_end_*() helpers are prone to overflow of signed
64-bit types because the mapping is only limited by the max value of
loff_t and the associated helpers return the start offset of the
next folio. Therefore, a folio_end_pos() of the max allowable folio in a
mapping returns a value that overflows loff_t.
This makes it hard to rely on such values when doing folio
processing across a range of a file, as bcachefs attempts to do with
the recent folio changes. For example, generic/564 causes problems
in the buffered write path when testing writes at max boundary
conditions.
The current understanding is that the pagecache historically limited
the mapping to one less page to avoid this problem, that the limit was
dropped with some of the folio conversions, and that it may be
reinstated to properly address the problem. In the meantime, update the
internal
folio_end_*() helpers in bcachefs to return a u64, and all of the
associated code to use or cast to u64 to avoid overflow problems.
This allows generic/564 to pass and can be reverted back to using
loff_t if at any point the pagecache subsystem can guarantee these
boundary conditions will not overflow.
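The helpers become, roughly (sketch; bcachefs sectors are 512 bytes,
hence the shift by 9):

  static inline u64 folio_end_pos(struct folio *folio)
  {
          /*
           * For the last folio in a mapping this sum can exceed
           * LLONG_MAX, hence u64 rather than loff_t.
           */
          return folio_pos(folio) + folio_size(folio);
  }

  static inline u64 folio_end_sector(struct folio *folio)
  {
          return folio_end_pos(folio) >> 9;
  }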
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The buffered write path batches folio creations in the file mapping
based on the requested size of the write. Under low free space
conditions, it is possible to add a bunch of folios to the mapping
and then return a short write or -ENOSPC due to lack of space. If
this occurs on an extending write, the file size is updated based on
the amount of data successfully written to the file. If folios were
added beyond the final i_size, they may hang around until reclaimed,
truncated or encountered unexpectedly by another operation.
For example, generic/083 reproduces a sequence of events where a
short write leaves around one or more post-EOF folios on an inode, a
subsequent zero range request extends beyond i_size and overlaps
with one such folio, and __bch2_truncate_folio() stumbles across it
and complains.
Update __bch2_buffered_write() to keep track of the start offset of
the last folio added to the mapping for a prospective write. After
i_size is updated, check whether this offset starts beyond EOF. If
so, truncate pagecache beyond the latest EOF to clean up any folios
that don't reside at least partially within EOF upon completion of
the write.
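The cleanup amounts to something like this (sketch; last_folio_pos is
a stand-in name for the tracked offset):

  /*
   * If the last folio we instantiated starts at or beyond the final
   * i_size, everything past EOF is ours to drop.
   */
  if (last_folio_pos >= inode->v.i_size)
          truncate_pagecache(&inode->v, inode->v.i_size);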
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
generic/083 occasionally reproduces a panic caused by an overflow
when accessing the bch_folio_sector array of the folio being
processed by __bch2_truncate_folio(). The immediate cause of the
overflow is that the folio offset is beyond i_size, and therefore
the sector index calculation underflows on subtraction of the folio
offset.
This is mainly observed on nocow mounts. When nocow is
enabled, fallocate performs physical block allocation (as opposed to
block reservation in cow mode), which range_has_data() then
interprets as valid data that requires partial zeroing on truncate.
Therefore, if a post-EOF zero range request lands across post-EOF
preallocated blocks, __bch2_truncate_folio() may actually create a
post-EOF folio in order to perform zeroing. To avoid this problem,
update range_has_data() to filter out unwritten blocks from folio
creation and partial zeroing.
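The filter is a one-line change in spirit (sketch;
bkey_extent_is_unwritten() is assumed from the tree's nocow support):

  /* unwritten (preallocated) extents no longer count as data */
  if (bkey_extent_is_data(k.k) && !bkey_extent_is_unwritten(k)) {
          ret = 1;
          break;
  }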
Even though we should never create folios beyond EOF like this, the
mere existence of such folios is not necessarily a fatal error. Fix
up the truncate code to warn about this condition and not overflow
the sector array and possibly crash the system. The addition of this
warning without the corresponding unwritten extent fix has shown
that various other fstests are able to reproduce this problem fairly
frequently, but often in ways that don't necessarily result in a
kernel panic or a change in user observable behavior, and therefore
the problem goes undetected.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With large folios, it's now incidentally possible to end up with a
clean, uptodate folio in the page cache that doesn't have a bch_folio
attached, if a folio has to be split.
This patch fixes __bch2_truncate_folio() to check for this; other code
paths appear to handle it.
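The check is roughly this (sketch; the creation helper name is an
assumption):

  struct bch_folio *s;

  /* a split can leave a clean, uptodate folio with no bch_folio */
  s = bch2_folio(folio) ?: __bch2_folio_create(folio, GFP_KERNEL);
  if (!s) {
          ret = -ENOMEM;
          goto unlock;
  }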
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- X-macro-ize the bch_folio_sector_state enum: this means we can easily
generate strings, which is helpful for debugging.
- Add helpers for state transitions: folio_sector_dirty(),
folio_sector_undirty(), folio_sector_reserve()
- Add folio_sector_set(), a single helper for changing folio sector
state just so that we have a single place to instrument when we're
debugging.
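The pattern, roughly (sketch; state names as in the tree):

  #define BCH_FOLIO_SECTOR_STATES()     \
          x(unallocated)                \
          x(reserved)                   \
          x(dirty)                      \
          x(dirty_reserved)             \
          x(allocated)

  /* one list expands into both the enum and a string table: */
  enum bch_folio_sector_state {
  #define x(n)    SECTOR_##n,
          BCH_FOLIO_SECTOR_STATES()
  #undef x
  };

  static const char * const bch2_folio_sector_states[] = {
  #define x(n)    #n,
          BCH_FOLIO_SECTOR_STATES()
  #undef x
          NULL
  };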
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This converts fs-io.c to pass folios, not pages. We're not handling
large folios yet, and there are no functional changes in this patch - just a
lot of churn doing the initial type conversions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're seeing an odd bug with page/folio state not being properly
initialized; this is to help track it down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're just doing cpu work here and it could take a while; a
cond_resched() is definitely needed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This greatly expands the move_extent_fail tracepoint - now it includes
all the information we have available, including exactly why the extent
wasn't updated.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Most counters aren't in units of sectors, and the ones that are should
just be switched to bytes, for simplicity.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We don't store backpointers in alloc keys anymore, since we gained the
btree write buffer.
This patch drops support for backpointers in alloc keys, and revs the on
disk format version so that we know a fsck is required.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
If we block on journal reservation attempting to log journal
messages during recovery, particularly for the first message(s)
before we start doing actual work, chances are the filesystem ends
up deadlocked.
Allow logged messages to use reserved journal space to mitigate this
problem. In the worst case where no space is available whatsoever,
this at least allows the fs to recognize that the journal is stuck
and fail the mount gracefully.
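Sketched, the mitigation is something like the following; the flag
name is an assumption standing in for whatever reserved-space
watermark the journal uses:

  /* log messages may dip into reserved journal space */
  ret = bch2_journal_res_get(&c->journal, &res, u64s,
                             JOURNAL_RES_GET_RESERVED);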
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Seeing occasional test failures where we get stuck in a livelock that
involves this event - this will help track it down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It turns out that it's currently impossible to invalidate buckets
containing only cached data if they're part of a stripe. The normal
bucket invalidate path can't do it because we have to be able to
increment the bucket's gen, which isn't correct because it's still a
member of the stripe - and the bucket invalidate path makes the bucket
available for reuse right away, which also isn't correct for buckets in
stripes.
What would work is invalidating cached data by following backpointers,
except that cached replicas don't currently get backpointers - because
they would be awkward for the existing bucket invalidate path to delete
and they haven't been needed elsewhere.
So for the time being, to prevent running out of space in stripes,
switch the data update path to not leave cached replicas; we may revisit
this in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, copygc used a fifo for tracking buckets in flight - this had
the disadvantage of being fixed size, since we pass references to
elements into the move code.
This restructures it to be a hash table and linked list, since with
erasure coding we need to be able to pipeline across an arbitrary number
of buckets.
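The shape of the new tracking, as a sketch (field names are
assumptions):

  struct move_bucket_in_flight {
          struct rhash_head       hash;   /* membership lookups */
          struct list_head        list;   /* submission order */
          struct move_bucket      bucket;
          atomic_t                count;  /* moves still in flight */
  };

  struct buckets_in_flight {
          struct rhashtable       table;
          struct list_head        list;   /* oldest first */
  };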
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a flags param to bch2_backpointer_get_key() so that we can
pass BTREE_ITER_INTENT, since ec_stripe_update_extent() is updating the
extent immediately.
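The new signature, roughly (sketch; argument list approximate):

  struct bkey_s_c bch2_backpointer_get_key(struct btree_trans *trans,
                                           struct btree_iter *iter,
                                           struct bpos bucket,
                                           u64 bp_offset,
                                           struct bch_backpointer bp,
                                           unsigned iter_flags);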
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It appears freespace init can still take a while, and we've had a report
or two of it getting stuck - let's have it print out where it's at every
10 seconds.
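Sketch of the periodic message (message format illustrative):

  /* within the per-device bucket init loop: */
  unsigned long last_msg = jiffies;

  while (bucket < bucket_end) {
          if (time_after(jiffies, last_msg + 10 * HZ)) {
                  bch_info(c, "%s: %llu/%llu buckets initialized",
                           ca->name, bucket, bucket_end);
                  last_msg = jiffies;
          }
          /* ... initialize freespace for this bucket ... */
          bucket++;
  }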
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As in recovery and device add, we have to check for devices that don't
have the freespace btree initialized - this was missed in the device hot
add path.
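The missing check mirrors the recovery path (sketch; helper arguments
abbreviated):

  if (!ca->mi.freespace_initialized) {
          ret = bch2_dev_freespace_init(c, ca);
          if (ret)
                  goto err;
  }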
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This just adds a line for how long copygc has been waiting to sysfs
copygc_wait, helpful for debugging why copygc isn't running.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_path_put_nokeep() is sketchy, and we should consider removing it:
it unconditionally frees btree_paths once their ref hits 0.
The assumption is that we only use it for paths that have never been
visible outside the core btree code; i.e. higher level code will
never be making assumptions about locking based on these paths.
However, there's subtle brokenness with this approach:
- If we call bch2_path_put(), then bch2_path_put_nokeep(),
bch2_path_put() may free the first path on the assumption that we
have another path keeping a node locked - but then
bch2_path_put_nokeep() just unconditionally frees it.
The same bug may arise if we're calling bch2_path_put() and
bch2_path_put_nokeep() on the same (refcounted) path, or two adjacent
paths that point to the same btree node.
This patch hacks around one of these bugs by calling
bch2_path_put_nokeep() first in bch2_trans_iter_exit().
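Sketch of the resulting ordering in bch2_trans_iter_exit() (abridged):

  /*
   * nokeep put first, so the normal put below can't be left
   * holding a pointer the nokeep put just freed.
   */
  if (iter->update_path)
          bch2_path_put_nokeep(trans, iter->update_path,
                               iter->flags & BTREE_ITER_INTENT);
  if (iter->path)
          bch2_path_put(trans, iter->path,
                        iter->flags & BTREE_ITER_INTENT);
  iter->update_path = NULL;
  iter->path        = NULL;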
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The journal stuck check in bch2_journal_space_available() is
particularly aggressive and can lead to premature shutdown in some
rare cases. This is difficult to reproduce, but since it also comes
along with a fatal error, it is worth being cautious about.
For example, we've seen instances where the journal is under heavy
reservation pressure, the journal allocation path transitions into
the final available journal bucket, the journal write path
immediately consumes that bucket and calls into
bch2_journal_space_available(), which then in turn flags the journal
as stuck because there is no available space and shuts down the
filesystem instead of submitting the journal write (that would have
otherwise succeeded).
To avoid this problem, simplify the journal stuck checking by just
relying on the higher level logic in the journal reservation path.
This produces more useful debug output and is a more reliable
indicator that things have bogged down.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs checks for journal stuck conditions both in the journal
space calculation code and the journal reservation slow path. The
logic in both places is rather tricky and can result in
non-deterministic failure characteristics and debug output.
In preparation to condense journal stuck handling to a single place,
refactor the __journal_res_get() logic into a standalone helper.
Since multiple callers into the reservation code can result in
duplicate reports, use the ->err_seq field as a serialization
mechanism for the debug dump. Finally, add some comments to help
explain the logic and hopefully facilitate further improvements in
the future.
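The serialization, roughly (sketch; fields as in struct journal):

  bool do_dump = false;

  spin_lock(&j->lock);
  if (!j->err_seq) {
          j->err_seq = journal_cur_seq(j);
          do_dump    = true;      /* first reporter dumps state */
  }
  spin_unlock(&j->lock);

  if (do_dump)
          bch2_journal_debug_to_text(&buf, j);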
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs detects journal stuck conditions in a couple different
places. If the logic in the journal reservation slow path happens to
detect the problem, I've seen instances where the filesystem remains
deadlocked even though it has been shut down. This is occasionally
reproduced by generic/333, and usually manifests as one or more
tasks stuck in the journal reservation slow path.
To help avoid this problem, repeat the journal error check in
__journal_res_get() once under spinlock to cover the case where the
previous lock holder might have triggered shutdown. This also helps
avoid spurious/duplicate stuck reports. Also, wake the journal from
the halt code to make sure blocked callers of the journal res
slowpath have a chance to wake up and observe the pending error.
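The two pieces, sketched (the error code name is an assumption):

  /* in __journal_res_get(), after taking j->lock: */
  if (bch2_journal_error(j)) {
          spin_unlock(&j->lock);
          return -BCH_ERR_erofs_journal_err;
  }

  /* and in the journal halt path: */
  wake_up(&j->wait);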
This survives an overnight looping run of generic/333 without the
aforementioned lockups.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The btree write buffer flush code is prone to causing journal
deadlock due to inefficient use and release of reservation space.
Space is not pre-reserved for write buffered keys (as is done
for key cache keys, for example), because the write buffer flush
side uses a fast path that attempts insertion without need for any
reservation at all.
The write buffer flush attempts to deal with this by inserting keys
using the BTREE_INSERT_JOURNAL_RECLAIM flag to return an error on
journal reservations that require blocking. Upon first error, it
falls back to a slow path that inserts in journal order and supports
moving the associated journal pin forward.
The problem is that under pathological conditions (i.e. smaller log,
larger write buffer and journal reservation pressure), we've seen
instances where the fast path fails fairly quickly without having
completed many insertions, and then the slow path is unable to push
the journal pin forward enough to free up the space it needs to
completely flush the buffer. This problem is occasionally reproduced
by fstest generic/333.
To avoid this problem, update the fast path algorithm to skip key
inserts that fail due to inability to acquire the needed journal
reservation, rather than immediately breaking out of the loop. Instead,
insert as many keys as possible, zap the sequence numbers to mark
them as processed, and then fall back to the slow path to process
the remaining set in journal order. This reduces the amount of
journal reservation that might be required to flush the entire
buffer and increases the odds that the slow path is able to move the
journal pin forward and free up space as keys are processed.
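The revised loop, as a sketch (wb_flush_one() is a hypothetical
stand-in for the per-key insert; the error name matches what
BTREE_INSERT_JOURNAL_RECLAIM produces when a reservation would block):

  for (i = keys; i < keys + nr; i++) {
          ret = wb_flush_one(trans, i);
          if (ret == -BCH_ERR_journal_reclaim_would_deadlock) {
                  slowpath++;
                  continue;       /* leave this key for the slow path */
          }
          if (ret)
                  break;
          i->journal_seq = 0;     /* zap: mark as flushed */
  }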
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A workqueue resource deadlock has been observed when running fsck
on a filesystem with a full/stuck journal. fsck is not currently
able to repair the fs due to fairly rapid emergency shutdown, but
rather than exit gracefully the fsck process hangs during the
shutdown sequence. Fortunately this is easily recoverable from
userspace, but the root cause involves code shared between the
kernel and userspace and so should be addressed.
The deadlock scenario involves the main task in the bch2_fs_stop()
-> bch2_fs_read_only() path waiting on write references to drain
with the fs state lock held. A bch2_read_only_work() workqueue task
is scheduled on the system_long_wq, blocked on the state lock.
Finally, various other write ref holding workqueue tasks are
scheduled to run on the same workqueue and must complete in order to
release references that the initial task is waiting on.
To avoid this problem, we can split the dependent workqueue tasks
across different workqueues. It's a bit of a waste to create a
dedicated wq for the read-only worker, but there are several tasks
throughout the fs that follow the pattern of acquiring a write
reference and then scheduling to the system wq. Use a local wq
for such tasks to break the subtle dependency between these and the
read-only worker.
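Sketch of the split (wq name and flags illustrative):

  c->write_ref_wq = alloc_workqueue("bcachefs_write_ref",
                                    WQ_FREEZABLE, 0);
  if (!c->write_ref_wq)
          goto err;

  /*
   * Write-ref-holding work items then queue here instead of the
   * system workqueues, e.g.:
   */
  queue_work(c->write_ref_wq, &ca->io_error_work);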
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We were going into an infinite loop when printing out backpointers, due
to never incrementing bp_offset - whoops.
Also limit the number of backpointers we print to 10; this is debug code
and we only need to print a sample, not all of them.
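The fixed loop, roughly (sketch; helper arguments abbreviated):

  for (nr = 0; nr < 10; nr++) {
          ret = bch2_get_next_backpointer(trans, bucket, gen,
                                          &bp_offset, &bp);
          if (ret || bp_offset == U64_MAX)
                  break;
          /* ... print the backpointer ... */
          bp_offset++;    /* previously missing, hence the infinite loop */
  }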
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>