The write path may (rarely) see an encoded (checksummed) extent that
exceeds encoded_extent_max - this can happen when we're moving an
existing extent that was not checksummed, but was given a checksum by
bch2_write_rechecksum().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're going to be using bch2_target_to_text() ->
bch2_disk_path_to_text() from bch2_bkey_ptrs_to_text() and
bch2_bkey_ptrs_invalid(), which can be called in any context.
This patch adds the actual label to bch_disk_group_cpu so that it can be
used by bch2_disk_path_to_text, and splits out bch2_disk_path_to_text()
into two variants - like the previous patch, one for when we have a
running filesystem and another for when we only have a superblock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously we just had bch2_opt_target_to_text() which could be passed
either a filesystem object or just a superblock - depending on if we
have a running filesystem or not.
Split these into two functions for clarity.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Upcoming rebalance_work btree will require extent triggers to be
BTREE_TRIGGER_WANTS_OLD_AND_NEW - so to reduce potential confusion,
let's just make all triggers BTREE_TRIGGER_WANTS_OLD_AND_NEW.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The data move path now correctly picks IO options when inodes in
different snapshots have different options applied.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We can't mark device superblocks or allocate journal on a device that
isn't online.
That means we may need to do this on every mount, because we may have
formatted a new filesystem and then done the first mount
(bch2_fs_initialize()) in degraded mode.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The ca->oldest_gen array needs to be the same size as the bucket_gens
array; ca->mi.nbuckets is updated with only state_lock held, not
gc_lock, so bch2_gc_gens() could race with device resize and allocate
too small of an oldest_gens array.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
More forwards compatibility fixups: having BKEY_TYPE_btree at the end of
the enum conflicts with unknown btree IDs; this shifts BKEY_TYPE_btree
to slot 0 and fixes things up accordingly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since we can run with unknown btree IDs, we can't directly index btree
IDs into fixed size arrays.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Be a bit more careful about when bch2_delete_dead_snapshots needs to
run: it only needs to run synchronously if we're running fsck, and it
only needs to run at all if we have snapshot nodes to delete or if fsck
has noticed that it needs to run.
Also:
- Rename BCH_FS_HAVE_DELETED_SNAPSHOTS -> BCH_FS_NEED_DELETE_DEAD_SNAPSHOTS
- Kill bch2_delete_dead_snapshots_hook(), move functionality to
  bch2_mark_snapshot()
- Factor out bch2_check_snapshot_needs_deletion(), to explicitly check
  if we need to be running snapshot deletion.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We must not hold btree locks while taking snapshot_create_lock - this
fixes a lockdep splat.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As pointed out by Linus, closure_sync() was racy; we could skip blocking
immediately after a get() and a put(), but then that would skip any
barrier corresponding to the other thread's put() barrier.
To fix this, always do the full __closure_sync() sequence whenever any
get() has happened and the closure might have been used by other
threads.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Here's the bcachefs filesystem pull request.
One new patch since last week: the exportfs constants ended up
conflicting with other filesystems that are also getting added to the
global enum, so switched to new constants picked by Amir.
I'll also be sending another pull request later on in the cycle bringing
things up to date with my master branch that people are currently running;
that will be restricted to fs/bcachefs/, naturally.
Testing - fstests as well as the bcachefs specific tests in ktest:
https://evilpiepirate.org/~testdashboard/ci?branch=bcachefs-for-upstream
It's also been soaking in linux-next, which resulted in a whole bunch of
smatch complaints and fixes and a patch or two from Kees.
The only new non fs/bcachefs/ patch is the objtool patch that adds
bcachefs functions to the list of noreturns. The patch that exports
osq_lock() has been dropped for now, per Ingo.
Prereq patch list:
faf1dce852 objtool: Add bcachefs noreturns
73badee428 lib/generic-radix-tree.c: Add peek_prev()
9492261ff2 lib/generic-radix-tree.c: Don't overflow in peek()
0fb5d567f5 MAINTAINERS: Add entry for generic-radix-tree
b414e8ecd4 closures: Add a missing include
48b7935722 closures: closure_nr_remaining()
ced58fc7ab closures: closure_wait_event()
bd0d22e41e MAINTAINERS: Add entry for closures
8c8d2d9670 bcache: move closures to lib/
957e48087d locking: export contention tracepoints for bcachefs six locks
21db931445 lib: Export errname
83feeb1955 lib/string_helpers: string_get_size() now returns characters wrote
7d672f4094 stacktrace: Export stack_trace_save_tsk
771eb4fe8b fs: factor out d_mark_tmpfile()
2b69987be5 sched: Add task_struct->faults_disabled_mapping
Merge tag 'bcachefs-2023-10-30' of https://evilpiepirate.org/git/bcachefs
Pull initial bcachefs updates from Kent Overstreet:
"Here's the bcachefs filesystem pull request.
One new patch since last week: the exportfs constants ended up
conflicting with other filesystems that are also getting added to the
global enum, so switched to new constants picked by Amir.
The only new non fs/bcachefs/ patch is the objtool patch that adds
bcachefs functions to the list of noreturns. The patch that exports
osq_lock() has been dropped for now, per Ingo"
* tag 'bcachefs-2023-10-30' of https://evilpiepirate.org/git/bcachefs: (2781 commits)
exportfs: Change bcachefs fid_type enum to avoid conflicts
bcachefs: Refactor memcpy into direct assignment
bcachefs: Fix drop_alloc_keys()
bcachefs: snapshot_create_lock
bcachefs: Fix snapshot skiplists during snapshot deletion
bcachefs: bch2_sb_field_get() refactoring
bcachefs: KEY_TYPE_error now counts towards i_sectors
bcachefs: Fix handling of unknown bkey types
bcachefs: Switch to unsafe_memcpy() in a few places
bcachefs: Use struct_size()
bcachefs: Correctly initialize new buckets on device resize
bcachefs: Fix another smatch complaint
bcachefs: Use strsep() in split_devs()
bcachefs: Add iops fields to bch_member
bcachefs: Rename bch_sb_field_members -> bch_sb_field_members_v1
bcachefs: New superblock section members_v2
bcachefs: Add new helper to retrieve bch_member from sb
bcachefs: bucket_lock() is now a sleepable lock
bcachefs: fix crc32c checksum merge byte order problem
bcachefs: Fix bch2_inode_delete_keys()
...
The memcpy() in bch2_bkey_append_ptr() is operating on an embedded fake
flexible array which looks to the compiler like it has 0 size. This
causes W=1 builds to emit warnings due to -Wstringop-overflow:
In file included from include/linux/string.h:254,
from include/linux/bitmap.h:11,
from include/linux/cpumask.h:12,
from include/linux/smp.h:13,
from include/linux/lockdep.h:14,
from include/linux/radix-tree.h:14,
from include/linux/backing-dev-defs.h:6,
from fs/bcachefs/bcachefs.h:182:
fs/bcachefs/extents.c: In function 'bch2_bkey_append_ptr':
include/linux/fortify-string.h:57:33: warning: writing 8 bytes into a region of size 0 [-Wstringop-overflow=]
57 | #define __underlying_memcpy __builtin_memcpy
| ^
include/linux/fortify-string.h:648:9: note: in expansion of macro '__underlying_memcpy'
648 | __underlying_##op(p, q, __fortify_size); \
| ^~~~~~~~~~~~~
include/linux/fortify-string.h:693:26: note: in expansion of macro '__fortify_memcpy_chk'
693 | #define memcpy(p, q, s) __fortify_memcpy_chk(p, q, s, \
| ^~~~~~~~~~~~~~~~~~~~
fs/bcachefs/extents.c:235:17: note: in expansion of macro 'memcpy'
235 | memcpy((void *) &k->v + bkey_val_bytes(&k->k),
| ^~~~~~
fs/bcachefs/bcachefs_format.h:287:33: note: destination object 'v' of size 0
287 | struct bch_val v;
| ^
Avoid making any structure changes and just replace the u64 memcpy() with
a direct assignment, side-stepping the entire problem.
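A minimal sketch of the change, using simplified stand-in types rather than
the real bcachefs bkey definitions:

  #include <linux/string.h>
  #include <linux/types.h>

  /* Simplified stand-in for a key with an inline value area. */
  struct key_sketch {
          unsigned int    u64s;           /* value size, in u64s */
          u64             val[16];
  };

  static void append_ptr_memcpy(struct key_sketch *k, u64 ptr)
  {
          /* Old pattern: memcpy() into the tail of the value, which the
           * fortified memcpy() treats as writing past a zero-sized object. */
          memcpy(&k->val[k->u64s], &ptr, sizeof(ptr));
          k->u64s++;
  }

  static void append_ptr_assign(struct key_sketch *k, u64 ptr)
  {
          /* Replacement: the same single-u64 store as a plain assignment,
           * leaving nothing for the fortify machinery to second-guess. */
          k->val[k->u64s++] = ptr;
  }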
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Brian Foster <bfoster@redhat.com>
Cc: linux-bcachefs@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202309192314.VBsjiIm5-lkp@intel.com/
Link: https://lore.kernel.org/r/20231010235609.work.594-kees@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
For consistency with the rest of the reconstruct_alloc option, we should
be skipping all alloc keys.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new lock for snapshot creation - this addresses a few races with
logged operations and snapshot deletion.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In snapshot deletion, we have to pick new skiplist nodes for entries that
point to nodes being deleted.
The function that finds a new skiplist node, skipping over entries being
deleted, was incorrect: if n = 0, but the parent node is being deleted,
we also need to skip over that node.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Instead of using token pasting to generate methods for each superblock
section, just make the type a parameter to bch2_sb_field_get().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
KEY_TYPE_error is used when all replicas in an extent are marked as
failed; it indicates that data was present, but has been lost.
So that i_sectors doesn't change when replacing extents with
KEY_TYPE_error, we now have to count error keys as allocations - this
fixes fsck errors later.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
min_val_size was U8_MAX for unknown key types, causing us to flag any
unknown key as invalid - it should have been 0.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The new fortify checking doesn't work for us in all places; this
switches to unsafe_memcpy() where appropriate to silence a few
warnings/errors.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Use struct_size() instead of hand writing it.
This is less verbose and more robust.
While at it, prepare for the coming implementation by GCC and Clang of the
__counted_by attribute. Flexible array members annotated with __counted_by
can have their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for
strcpy/memcpy-family functions).
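A small sketch of the pattern with a hypothetical structure (not the
bcachefs one; __counted_by() is a no-op until compiler support lands):

  #include <linux/overflow.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct entry_table {
          unsigned int    nr;
          u64             entries[] __counted_by(nr);
  };

  static struct entry_table *entry_table_alloc(unsigned int nr, gfp_t gfp)
  {
          /* struct_size() computes header + nr * element size with
           * overflow checking, instead of hand-written arithmetic. */
          struct entry_table *t = kzalloc(struct_size(t, entries, nr), gfp);

          if (t)
                  t->nr = nr;
          return t;
  }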
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_dev_resize() was never updated for the allocator rewrite with
persistent freelists, and it wasn't noticed because the tests weren't
running fsck - oops.
Fix this by running bch2_dev_freespace_init() for the new buckets.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
members_v2 has dynamically resizable entries so that we can extend
bch_member. The members can no longer be accessed with simple array
indexing; instead members_v2_get is used to find a member's exact
location within the array and return a copy of that member.
Alternatively, member_v2_get_mut retrieves a mutable pointer to a member.
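A rough sketch of the lookup, with hypothetical field names standing in for
the real on-disk layout:

  #include <asm/byteorder.h>
  #include <linux/minmax.h>
  #include <linux/string.h>
  #include <linux/types.h>

  struct member_sketch {
          __le64  nbuckets;
          __le16  bucket_size;
          /* newer revisions may append fields here */
  };

  struct members_sketch {
          __le16  entry_size;     /* bytes per entry in this superblock */
          __le16  nr_entries;
          u8      entries[];
  };

  /* Step entry_size bytes per index and copy the member out: unknown
   * trailing fields in a larger on-disk entry are ignored, and fields
   * missing from a smaller entry read back as zero. */
  static struct member_sketch members_get_sketch(const struct members_sketch *s,
                                                 unsigned int i)
  {
          size_t sz = le16_to_cpu(s->entry_size);
          struct member_sketch m = {};

          memcpy(&m, s->entries + (size_t) i * sz, min(sizeof(m), sz));
          return m;
  }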
Signed-off-by: Hunter Shaffer <huntershaffer182456@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Prep work for introducing bch_sb_field_members_v2 - introduce new
helpers that check for members_v2 if it exists, otherwise falling back to v1.
Signed-off-by: Hunter Shaffer <huntershaffer182456@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fsck_err() may sleep - it takes a mutex and may allocate memory, so
bucket_lock() needs to be a sleepable lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
An fsstress task on a big endian system (s390x) quickly produces a
bunch of CRC errors in the system logs. Most of these are related to
the narrow CRCs path, but the fundamental problem can be reduced to
a single write and re-read (after dropping caches) of a previously
merged extent.
The key merge path that handles extent merges eventually calls into
bch2_checksum_merge() to combine the CRCs of the associated extents.
This code attempts to avoid a byte order swap by feeding the le64
values into the crc32c code, but the latter casts the resulting u64
value down to a u32, which truncates the high bytes where the actual
crc value ends up. This results in a CRC value that does not change
(since it is merged with a CRC of 0), and checksum failures ensue.
Fix the checksum merge code to swap to cpu byte order on the
boundaries to the external crc code such that any value casting is
handled properly.
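A userspace illustration of the truncation (not the bcachefs code): a 32-bit
checksum stored in a little-endian u64 sits in the high bytes of the native
value on a big-endian CPU, so narrowing before byte-swapping discards it.

  #include <endian.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t crc = 0xdeadbeef;
          uint64_t on_disk = htole64(crc);        /* crc as stored on disk */

          /* Wrong: the cast keeps the low native bytes; on big endian these
           * are zero, so the "merged" checksum never changes. */
          uint32_t truncated = (uint32_t) on_disk;

          /* Right: convert to CPU byte order before narrowing. */
          uint32_t fixed = (uint32_t) le64toh(on_disk);

          printf("truncated=%08x fixed=%08x\n",
                 (unsigned) truncated, (unsigned) fixed);
          return 0;
  }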
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_inode_delete_keys() was using BTREE_ITER_NOT_EXTENTS, on the
assumption that it would never need to split extents.
But that caused a race with extents being split by other threads -
specifically, the data move path. Extents iterators have the iterator
position pointing to the start of the extent, which avoids the race.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The entire btree will be lost, but that is better than the entire
filesystem not being recoverable.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We can only do this in userspace, unfortunately - but kernel keyrings
have never seemed to work reliably, so this is a useful fallback.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
These errors aren't actual errors, and should never be printed - do this
in the common helpers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- assert in shutdown path that no nocow locks are held
- check for overflow when taking nocow locks
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This makes mount option handling consistent with other filesystems -
options may be handled at different layers, so an option we don't know
about might not be intended for us.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, we would check for invalid bkeys at transaction commit time,
but only if CONFIG_BCACHEFS_DEBUG=y.
This check is important enough to always be on - it appears there's been
corruption making it into the journal that would have been caught by it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, equiv was set in the snapshot deletion path, which is where
it's needed - equiv, for snapshot ID equivalence classes, would ideally
be a private data structure to the snapshot deletion path.
But if a new snapshot is created while snapshot deletion is running,
move_key_to_correct_snapshot() moves a key to snapshot id 0 - oops.
Fixes: https://github.com/koverstreet/bcachefs/issues/593
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Initial support for the vfs superblock freeze and unfreeze
operations. Superblock freeze occurs in stages, where the vfs
attempts to quiesce high level write operations, page faults, fs
internal operations, and then finally calls into the filesystem for
any last stage steps (i.e. log flushing, etc.) before marking the
superblock frozen.
The majority of write paths are covered by freeze protection (i.e.
sb_start_write() and friends) in higher level common code, with the
exception of the fs-internal SB_FREEZE_FS stage (i.e.
sb_start_intwrite()). This typically maps to active filesystem
transactions in a manner that allows the vfs to implement a barrier
of internal fs operations during the freeze sequence. This is not a
viable model for bcachefs, however, because it utilizes transactions
both to populate the journal as well as to perform journal reclaim.
This means that mapping intwrite protection to transaction lifecycle
or transaction commit is likely to deadlock freeze, as quiescing the
journal requires transactional operations blocked by the final stage
of freeze.
The flipside of this is that bcachefs does already maintain its own
internal sets of write references for similar purposes, currently
utilized for transitions from read-write to read-only mode. Since
this largely mirrors the high level sequence involved with freeze,
we can simply invoke this mechanism in the freeze callback to fully
quiesce the filesystem in the final stage. This means that while the
SB_FREEZE_FS stage is essentially a no-op, the ->freeze_fs()
callback that immediately follows begins by performing effectively
the same step by quiescing all internal write references.
One caveat to this approach is that without integration of internal
freeze protection, write operations gated on internal write refs
will fail with an internal -EROFS error rather than block on
acquiring freeze protection. IOW, this is roughly equivalent to only
having support for sb_start_intwrite_trylock(), and not the blocking
variant. Many of these paths already use non-blocking internal write
refs and so would map into an sb_start_intwrite_trylock() anyways.
The only instance of this I've been able to uncover that doesn't
explicitly rely on a higher level non-blocking write ref is the
bch2_rbio_narrow_crcs() path, which updates crcs in certain read
cases, and Kent has pointed out isn't critical if it happens to fail
due to read-only status.
Given that, implement basic freeze support as described above and
leave tighter integration with internal freeze protection as a
possible future enhancement. There are multiple potential ideas
worth exploring here. For example, we could implement a multi-stage
freeze callback that might allow bcachefs to quiesce its internal
write references without deadlocks, we could integrate intwrite
protection with bcachefs' internal write references somehow or
another, or perhaps consider implementing blocking support for
internal write refs to be used specifically for freeze, etc. In the
meantime, this enables functional freeze support and the associated
test coverage that comes with it.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- it's no longer possible for trans to be NULL
- also, move "wait for read to complete" to the slowpath,
__bch2_btree_node_get().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
strndup_user() returns an error pointer, not NULL.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_journal_write() expects process context, it takes journal_lock as
needed.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
crypto_alloc_sync_skcipher() returns an ERR_PTR, not NULL.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When bucket sector counts were changed from u16s to u32s, a few things
were missed. This fixes an overflow check, and a truncation that
prevented the overflow check from firing.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
copy_to_user() returns the number of bytes successfully copied - not an
errcode.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs freeze testing via fstests generic/390 occasionally
reproduces the following BUG from bch2_fs_read_only():
BUG_ON(atomic_long_read(&c->btree_key_cache.nr_dirty));
This indicates that one or more dirty key cache keys still exist
after the attempt to flush and quiesce the fs. The sequence that
leads to this problem actually occurs on unfreeze (ro->rw), and
looks something like the following:
- Task A begins a transaction commit and acquires journal_res for
the current seq. This transaction intends to perform key cache
insertion.
- Task B begins a bch2_journal_flush() via bch2_sync_fs(). This ends
up in journal_entry_want_write(), which closes the current journal
entry and drops the reference to the pin list created on entry open.
The pin put pops the front of the journal via fast reclaim since the
reference count has dropped to 0.
- Task A attempts to set the journal pin for the associated cached
key, but bch2_journal_pin_set() skips the pin insert because the
seq of the transaction reservation is behind the front of the pin
list fifo.
The end result is that the pin associated with the cached key is not
added, which prevents a subsequent reclaim from processing the key
and thus leaves it dangling at freeze time. The fundamental cause of
this problem is that the front of the journal is allowed to pop
before a transaction with outstanding reservation on the associated
journal seq is able to add a pin. The count for the pin list
associated with the seq drops to zero and is prematurely reclaimed
as a result.
The logical fix for this problem lies in how the journal buffer is
managed in similar scenarios where the entry might have been closed
before a transaction with outstanding reservations happens to be
committed.
When a journal entry is opened, the current sequence number is
bumped, the associated pin list is initialized with a reference
count of 1, and the journal buffer reference count is bumped (via
journal_state_inc()). When a journal reservation is acquired, the
reservation also acquires a reference on the associated buffer. If
the journal entry is closed in the meantime, it drops both the pin
and buffer references held by the open entry, but the buffer still
has references held by outstanding reservation. After the associated
transaction commits, the reservation release drops the associated
buffer references and the buffer is written out once the reference
count has dropped to zero.
The fundamental problem here is that the lifecycle of the pin list
reference held by an open journal entry is too short to cover the
processing of transactions with outstanding reservations. The
simplest way to address this is to expand the pin list reference to
the lifecycle of the buffer vs. the shorter lifecycle of the open
journal entry. This ensures the pin list for a seq with outstanding
reservation cannot be popped and reclaimed before all outstanding
reservations have been released, even if the associated journal
entry has been closed for further reservations.
Move the pin put from journal entry close to where final processing
of the journal buffer occurs. Create a duplicate helper to cover the
case where the caller doesn't already hold the journal lock. This
allows generic/390 to pass reliably.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs freeze testing has uncovered some raciness between journal
entry open/close and pin list reference count management. The
details of the problem are described in a separate patch. In
preparation for the associated fix, refactor the journal buffer put
path a bit to allow it to eventually handle dropping the pin list
reference currently held by an open journal entry.
Retain the journal write dispatch helper since the closure code is
inlined and we don't want to increase the amount of inline code in
the transaction commit path, but rename the function to reflect
the purpose of final processing of the journal buffer.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We have a couple journal pin put helpers to handle cases where the
journal lock is already held or not. Refactor the helpers to lock
and reclaim from the highest level and open code the reclaim from
the one caller of the internal variant. The latter call will be
moved into the journal buf release helper in a later patch.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This code accidentally left out the "ret = " assignment so the errors
from for_each_btree_key2() are not checked.
Fixes: 53534482a250 ("bcachefs: for_each_btree_key2()")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The copy_to_user() function returns the number of bytes that it wasn't
able to copy but we want to return -EFAULT to the user.
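A hedged sketch of the usual idiom, with a hypothetical helper rather than
the actual ioctl code:

  #include <linux/errno.h>
  #include <linux/uaccess.h>

  /* copy_to_user() returns the number of bytes it could NOT copy, so any
   * nonzero result is translated to -EFAULT instead of being handed back
   * to userspace as a positive value. */
  static long put_to_user(void __user *ubuf, const void *kbuf, size_t len)
  {
          return copy_to_user(ubuf, kbuf, len) ? -EFAULT : 0;
  }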
Fixes: e0750d947352 ("bcachefs: Initial commit")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The "ret = bkey_err(k);" assignment was accidentally left out so the
call to bch2_btree_iter_peek_slot() is not checked for errors.
Fixes: 53306e096d91 ("bcachefs: Always check for transaction restarts")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The clean up code at the end of the function uses "acl" so it needs
to be initialized to NULL.
Fixes: 53306e096d91 ("bcachefs: Always check for transaction restarts")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fixes the following observed error reported by Nathan on IRC.
fs/bcachefs/io_misc.c:467:6: error: explicitly assigning value of
variable of type 'int' to itself [-Werror,-Wself-assign]
467 | ret = ret;
| ~~~ ^ ~~~
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
./fs/bcachefs/btree_update.h: journal.h is included more than once.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6573
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
There is a typo here where it uses ";" instead of "?:". The result is
that bch2_fs_fs_io_direct_init() is called unconditionally and the errors
from it are not checked.
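A hedged sketch of the intended chaining, with hypothetical init functions
standing in for the real ones:

  #include <linux/errno.h>

  struct fs_ctx { int dummy; };                     /* hypothetical */

  static int init_fs_io_buffered(struct fs_ctx *c) { return 0; }
  static int init_fs_io_direct(struct fs_ctx *c)   { return 0; }

  static int fs_io_init(struct fs_ctx *c)
  {
          /* "?:" keeps the first nonzero error; the typo'd ";" version
           * still called the second init but threw its result away. */
          return init_fs_io_buffered(c) ?:
                 init_fs_io_direct(c);
  }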
Fixes: 0060c68159fc ("bcachefs: Split up fs-io.[ch]")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Reviewed-by: Brian Foster <bfoster@redhat.com>
On 32 bit systems, "sizeof(*arg) + replica_entries_bytes" can have an
integer overflow leading to memory corruption. Use size_add() to
prevent this.
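A short sketch of the pattern (hypothetical allocation, not the actual
ioctl argument):

  #include <linux/overflow.h>
  #include <linux/slab.h>

  /* size_add() saturates at SIZE_MAX on overflow, so a wrapped 32-bit sum
   * can't yield an undersized buffer - the allocation simply fails. */
  static void *alloc_usage_arg(size_t hdr_bytes, size_t replica_entries_bytes)
  {
          return kzalloc(size_add(hdr_bytes, replica_entries_bytes),
                         GFP_KERNEL);
  }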
Fixes: b44dd3797034 ("bcachefs: Redo filesystem usage ioctls")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The copy_to_user() function returns the number of bytes remaining but
we want to return -EFAULT to the user.
Fixes: e0750d947352 ("bcachefs: Initial commit")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bucket_lock() previously open coded a spinlock, because we need to cram
a spinlock into a single byte.
But it turns out not all archs support xchg() on a single byte; since we
need struct bucket to be small, this means we have to play fun games
with casts and ifdefs for endianness.
This fixes building on 32 bit arm, and likely other architectures.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Cc: linux-bcachefs@vger.kernel.org
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In general it's a good idea to avoid using bare unreachable() because it
introduces undefined behavior in compiled code. In this case it even
confuses GCC into emitting an empty unused
bch2_dev_buckets_reserved.part.0() function.
Use BUG() instead, which is nice and defined. While in theory it should
never trigger, if something were to go awry and the BCH_WATERMARK_NR
case were to actually hit, the failure mode is much more robust.
Fixes the following warnings:
vmlinux.o: warning: objtool: bch2_bucket_alloc_trans() falls through to next function bch2_reset_alloc_cursors()
vmlinux.o: warning: objtool: bch2_dev_buckets_reserved.part.0() is missing an ELF size annotation
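A hedged sketch of the change, with a hypothetical enum and helper rather
than the real bcachefs watermark code:

  #include <linux/bug.h>

  enum watermark { WATERMARK_NORMAL, WATERMARK_COPYGC, WATERMARK_NR };

  static unsigned int buckets_reserved(enum watermark w)
  {
          switch (w) {
          case WATERMARK_NORMAL:
                  return 0;
          case WATERMARK_COPYGC:
                  return 8;
          case WATERMARK_NR:
          default:
                  /* was: unreachable() - BUG() keeps control flow defined
                   * if the "impossible" case is ever reached. */
                  BUG();
          }
  }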
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Remove a redundant call to bch2_free_super().
This is harmless because bch2_free_super() has a memset() at its end, so a
second call only results in kfree(NULL).
Remove the redundant call and only rely on the error handling path.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
If __bch2_dev_attach_bdev() fails, bch2_dev_free() is called twice.
Once here and another time in the error handling path.
This leads to several use-after-frees.
Remove the redundant call and only rely on the error handling path.
Fixes: 6a44735653d4 ("bcachefs: Improved superblock-related error messages")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
modpost produces the following warning:
WARNING: modpost: missing MODULE_DESCRIPTION() in fs/bcachefs/bcachefs.o
Add a module description for bcachefs.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're using more stack than we'd like in a number of functions, and
btree_trans is the biggest object that we stack allocate.
But we have to do a heap allocation to initialize it anyways, so
there's no real downside to heap allocating the entire thing.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a warning when using
max() to compare an expression involving 'size_t' with an 'unsigned
long' literal:
fs/bcachefs/movinggc.c:159:21: error: comparison of distinct pointer types ('typeof (16UL) *' (aka 'unsigned long *') and 'typeof (buckets_in_flight->nr / 4) *' (aka 'unsigned int *')) [-Werror,-Wcompare-distinct-pointer-types]
159 | size_t nr_to_get = max(16UL, buckets_in_flight->nr / 4);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/minmax.h:76:19: note: expanded from macro 'max'
76 | #define max(x, y) __careful_cmp(x, y, >)
| ^~~~~~~~~~~~~~~~~~~~~~
include/linux/minmax.h:38:24: note: expanded from macro '__careful_cmp'
38 | __builtin_choose_expr(__safe_cmp(x, y), \
| ^~~~~~~~~~~~~~~~
include/linux/minmax.h:28:4: note: expanded from macro '__safe_cmp'
28 | (__typecheck(x, y) && __no_side_effects(x, y))
| ^~~~~~~~~~~~~~~~~
include/linux/minmax.h:22:28: note: expanded from macro '__typecheck'
22 | (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
| ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
1 error generated.
On 64-bit architectures, size_t is 'unsigned long', so there is no
warning when comparing these two expressions. Use max_t(size_t, ...) for
this situation, eliminating the warning.
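A minimal sketch of the fix pattern:

  #include <linux/minmax.h>
  #include <linux/types.h>

  /* max_t() casts both operands to the named type, so a size_t expression
   * can be compared against an integer constant without tripping
   * -Wcompare-distinct-pointer-types on 32-bit targets. */
  static size_t nr_buckets_to_get(size_t nr_in_flight)
  {
          return max_t(size_t, 16, nr_in_flight / 4);
  }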
Fixes: dd49018737d4 ("bcachefs: Rhashtable based buckets_in_flight for copygc")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a warning when using
min() to compare a variable of type 'size_t' with an expression of type
'unsigned long':
fs/bcachefs/checksum.c:142:22: error: comparison of distinct pointer types ('typeof (len) *' (aka 'unsigned int *') and 'typeof (((1UL) << 12) - offset) *' (aka 'unsigned long *')) [-Werror,-Wcompare-distinct-pointer-types]
142 | unsigned pg_len = min(len, PAGE_SIZE - offset);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/minmax.h:69:19: note: expanded from macro 'min'
69 | #define min(x, y) __careful_cmp(x, y, <)
| ^~~~~~~~~~~~~~~~~~~~~~
include/linux/minmax.h:38:24: note: expanded from macro '__careful_cmp'
38 | __builtin_choose_expr(__safe_cmp(x, y), \
| ^~~~~~~~~~~~~~~~
include/linux/minmax.h:28:4: note: expanded from macro '__safe_cmp'
28 | (__typecheck(x, y) && __no_side_effects(x, y))
| ^~~~~~~~~~~~~~~~~
include/linux/minmax.h:22:28: note: expanded from macro '__typecheck'
22 | (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
| ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
1 error generated.
On 64-bit architectures, size_t is 'unsigned long', so there is no
warning when comparing these two expressions. Use min_t(size_t, ...) for
this situation, eliminating the warning.
Fixes: 1fb50457684f ("bcachefs: Fix memory corruption in encryption path")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs with -Wincompatible-function-pointer-types-strict,
a clang warning designed to catch issues with mismatched function
pointer types, which will be fatal at runtime due to kernel Control Flow
Integrity (kCFI), there are several instances along the lines of:
fs/bcachefs/bkey_methods.c:118:2: error: incompatible function pointer types initializing 'int (*)(const struct bch_fs *, struct bkey_s_c, enum bkey_invalid_flags, struct printbuf *)' with an expression of type 'int (const struct bch_fs *, struct bkey_s_c, unsigned int, struct printbuf *)' [-Werror,-Wincompatible-function-pointer-types-strict]
118 | BCH_BKEY_TYPES()
| ^~~~~~~~~~~~~~~~
fs/bcachefs/bcachefs_format.h:342:2: note: expanded from macro 'BCH_BKEY_TYPES'
342 | x(deleted, 0) \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
fs/bcachefs/bkey_methods.c:117:41: note: expanded from macro 'x'
117 | #define x(name, nr) [KEY_TYPE_##name] = bch2_bkey_ops_##name,
| ^~~~~~~~~~~~~~~~~~~~
<scratch space>:206:1: note: expanded from here
206 | bch2_bkey_ops_deleted
| ^~~~~~~~~~~~~~~~~~~~~
fs/bcachefs/bkey_methods.c:34:17: note: expanded from macro 'bch2_bkey_ops_deleted'
34 | .key_invalid = deleted_key_invalid, \
| ^~~~~~~~~~~~~~~~~~~
The flags parameter should be of type 'enum bkey_invalid_flags', not
'unsigned int'. Adjust the type everywhere so that there is no more
warning.
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a compiler warning in
bch2_bucket_gens_invalid() due to use of an incorrect format specifier:
fs/bcachefs/alloc_background.c:530:10: error: format specifies type 'unsigned long' but the argument has type 'size_t' (aka 'unsigned int') [-Werror,-Wformat]
529 | prt_printf(err, "bad val size (%lu != %zu)",
| ~~~
| %zu
530 | bkey_val_bytes(k.k), sizeof(struct bch_bucket_gens));
| ^~~~~~~~~~~~~~~~~~~
fs/bcachefs/util.h:223:54: note: expanded from macro 'prt_printf'
223 | #define prt_printf(_out, ...) bch2_prt_printf(_out, __VA_ARGS__)
| ^~~~~~~~~~~
On 64-bit architectures, size_t is 'unsigned long', so there is no
warning when using %lu but on 32-bit architectures, size_t is 'unsigned
int'. Use '%zu', the format specifier for 'size_t', to eliminate the
warning.
Fixes: 4be0d766a7e9 ("bcachefs: bucket_gens btree")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a compiler warning in
bch2_alloc_v4_invalid() due to use of an incorrect format specifier:
fs/bcachefs/alloc_background.c:246:30: error: format specifies type 'unsigned long' but the argument has type 'unsigned int' [-Werror,-Wformat]
245 | prt_printf(err, "bad val size (%u > %lu)",
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| %u
246 | alloc_v4_u64s(a.v), bkey_val_u64s(k.k));
| ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
fs/bcachefs/bkey.h:58:27: note: expanded from macro 'bkey_val_u64s'
58 | #define bkey_val_u64s(_k) ((_k)->u64s - BKEY_U64s)
| ^
fs/bcachefs/util.h:223:54: note: expanded from macro 'prt_printf'
223 | #define prt_printf(_out, ...) bch2_prt_printf(_out, __VA_ARGS__)
| ^~~~~~~~~~~
This expression is of type 'size_t'. On 64-bit architectures, size_t is
'unsigned long', so there is no warning when using %lu but on 32-bit
architectures, size_t is 'unsigned int'. Use '%zu', the format specifier
for 'size_t' to eliminate the warning.
Fixes: 11be8e8db283 ("bcachefs: New on disk format: Backpointers")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a compiler warning in
bch2_btree_key_cache_to_text() due to use of an incorrect format
specifier:
fs/bcachefs/btree_key_cache.c:1060:36: error: format specifies type 'size_t' (aka 'unsigned int') but the argument has type 'long' [-Werror,-Wformat]
1060 | prt_printf(out, "nr_freed:\t%zu", atomic_long_read(&c->nr_freed));
| ~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| %ld
fs/bcachefs/util.h:223:54: note: expanded from macro 'prt_printf'
223 | #define prt_printf(_out, ...) bch2_prt_printf(_out, __VA_ARGS__)
| ^~~~~~~~~~~
1 error generated.
On 64-bit architectures, size_t is 'unsigned long', so there is no
warning when using %zu but on 32-bit architectures, size_t is
'unsigned int'. Use '%lu' to match the other format specifiers used in
this function for printing values returned from atomic_long_read().
Fixes: 6d799930ce0f ("bcachefs: btree key cache pcpu freedlist")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When building bcachefs for 32-bit ARM, there is a compiler warning in
bch2_set_bucket_needs_journal_commit() due to a debug print using the
wrong specifier:
fs/bcachefs/buckets_waiting_for_journal.c:137:30: error: format specifies type 'size_t' (aka 'unsigned int') but the argument has type 'unsigned long' [-Werror,-Wformat]
136 | pr_debug("took %zu rehashes, table at %zu/%zu elements",
| ~~~
| %lu
137 | nr_rehashes, nr_elements, 1UL << b->t->bits);
| ^~~~~~~~~~~~~~~~~
include/linux/printk.h:579:26: note: expanded from macro 'pr_debug'
579 | dynamic_pr_debug(fmt, ##__VA_ARGS__)
| ~~~ ^~~~~~~~~~~
include/linux/dynamic_debug.h:270:22: note: expanded from macro 'dynamic_pr_debug'
270 | pr_fmt(fmt), ##__VA_ARGS__)
| ~~~ ^~~~~~~~~~~
include/linux/dynamic_debug.h:250:59: note: expanded from macro '_dynamic_func_call'
250 | _dynamic_func_call_cls(_DPRINTK_CLASS_DFLT, fmt, func, ##__VA_ARGS__)
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:248:65: note: expanded from macro '_dynamic_func_call_cls'
248 | __dynamic_func_call_cls(__UNIQUE_ID(ddebug), cls, fmt, func, ##__VA_ARGS__)
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:224:15: note: expanded from macro '__dynamic_func_call_cls'
224 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
1 error generated.
On 64-bit architectures, size_t is 'unsigned long', so there is no
warning when using %zu but on 32-bit architectures, size_t is
'unsigned int'. Use the correct specifier to resolve the warning.
Fixes: 7a82e75ddaef ("bcachefs: New data structure for buckets waiting on journal commit")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
There are several spelling mistakes in error messages. Fix these.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The pointer q is being assigned a value but it is never read. The
assignment and pointer are redundant and can be removed.
Cleans up clang scan build warning:
fs/bcachefs/quota.c:813:2: warning: Value stored to 'q' is never
read [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Variable offset_into_extent is being assigned to zero and a few
statements later it is being re-assigned again to the same value.
The second assignment is redundant and can be removed. Cleans up
clang-scan build warning:
fs/bcachefs/io.c:2722:3: warning: Value stored to 'offset_into_extent'
is never read [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The variables start_offset and end_offset are being initialized with
values that are never read, they being re-assigned later on. The
initializations are redundant and can be removed.
Cleans up clang-scan build warnings:
fs/bcachefs/fs-io.c:243:11: warning: Value stored to 'start_offset' during
its initialization is never read [deadcode.DeadStores]
fs/bcachefs/fs-io.c:244:11: warning: Value stored to 'end_offset' during
its initialization is never read [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The pointer dst is being initialized with a value that is never read,
it is being re-assigned later on when it is used in a while-loop.
The initialization is redundant and can be removed.
Cleans up clang-scan build warning:
fs/bcachefs/disk_groups.c:186:30: warning: Value stored to 'dst' during
its initialization is never read [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The pointer d is being initialized with a value that is never read,
it is being re-assigned later on when it is used in a for-loop.
The initialization is redundant and can be removed.
Cleans up clang-scan build warning:
fs/bcachefs/buckets.c:1303:25: warning: Value stored to 'd' during its
initialization is never read [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we have the logged operations btree, we can make
finsert/fcollapse atomic w.r.t. unclean shutdown as well.
This adds bch_logged_op_finsert to represent the state of an finsert or
fcollapse, which is a bit more complicated than truncate since we need
to track our position in the "shift extents" operation.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, we guaranteed atomicity of truncate after unclean shutdown
with the BCH_INODE_I_SIZE_DIRTY flag - which required a full scan of the
inodes btree.
Recently the deleted inodes btree was added so that we no longer have to
scan for deleted inodes, but truncate was unfinished and that change
left it broken.
This patch uses the new logged operations btree to fix truncate
atomicity; we now log an operation that can be replayed at the start of
a truncate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new btree for long running logged operations - i.e. for logging
operations that we can't do within a single btree transaction, so that
they can be resumed if we crash.
Keys in the logged operations btree will represent operations in
progress, with the state of the operation stored in the value.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This pulls the non vfs specific parts of truncate and finsert/fcollapse
out of fs-io.c, and moves them to io_misc.c.
This is prep work for logging these operations, to make them atomic in
the event of a crash.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
More reorganization, this splits up io.c into
- io_read.c
- io_misc.c - fallocate, fpunch, truncate
- io_write.c
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In the bch2_fs_alloc() error path we call bch2_fs_free() without setting
BCH_FS_STOPPING - this is fine.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_quota_read(), when scanning for inodes, may attempt to look up
inodes that have been deleted in the main subvolume - this is not an
error.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When we handle a transaction restart in a nested context, we need to
return -BCH_ERR_transaction_restart_nested because we invalidated the
outer context's iterators and locks.
bch2_propagate_key_to_snapshot_leaves() wasn't doing this, this patch
fixes it to use trans_was_restarted().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This changes mark_btree_node_locked() to take an enum
btree_node_locked_type, not a six_lock_type, since BTREE_NODE_UNLOCKED
is -1 which may cause problems converting back and forth to
six_lock_type if short enums are in use.
With this change, we never store BTREE_NODE_UNLOCKED in a six_lock_type
enum.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It's no longer legal to use a zero size array as a flexible array
member - this causes UBSAN to complain.
This patch switches our zero size arrays to normal flexible array
members when possible, and inserts casts in other places (e.g. where we
use the zero size array as a marker partway through an array).
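A small sketch of the two spellings, with a hypothetical struct:

  #include <linux/types.h>

  struct entry_old {
          __u32   nr;
          __u64   data[0];        /* zero-size array: UBSAN complains */
  };

  struct entry_new {
          __u32   nr;
          __u64   data[];         /* flexible array member */
  };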
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Commit c2d5ff36065a4 ("bcachefs: Start journal reclaim thread
earlier") tweaked reclaim thread management to start a bit earlier
in the mount sequence by moving the start call from
__bch2_fs_read_write() to bch2_fs_journal_start(). This has the side
effect of never starting the reclaim thread on a ro->rw transition,
which can be observed by monitoring reclaim behavior via the
journal_reclaim tracepoints. I.e. once an fs has remounted ro->rw,
we only ever rely on direct reclaim from that point forward.
Since bch2_journal_reclaim_start() properly handles the case where
the reclaim thread has already been created, restore the start call
in the read-write helper. This allows the reclaim thread to start
early when appropriate and also exit/restart on remounts or freeze
cycles. In the latter case it may be possible to simply allow the
task to freeze rather than destroy it, but for now just fix the
immediate bug.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We weren't correctly checking snapshot skiplist nodes - we were checking
if they were in the same tree, not if they were an actual ancestor.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since we set bucket data type to BCH_DATA_stripe based on the data
pointer, not just the stripe pointer, it doesn't make sense to check for
no stripe in the .key_invalid method - this is a situation that
shouldn't happen, but our other fsck/repair code handles it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Print more information out about moving contexts - fold in the output of
the redundant bch2_data_jobs_to_text(), and also include information
relevant to whether move_data() should be blocked.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When doing updates early in recovery, before we can go RW, we still want
to check that keys are valid at commit time - this moves key invalid
checking to before the "btree updates to journal" path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
If fsck finds a key that needs work done, the primary example being an
unlinked inode that needs to be deleted, and the key is in an internal
snapshot node, we have a bit of a conundrum.
The conundrum is that internal snapshot nodes are shared, and we in
general do updates in internal snapshot nodes because there may be
overwrites in some snapshots and not others, and this may affect other
keys referenced by this key (i.e. extents).
For example, we might be seeing an unlinked inode in an internal
snapshot node, but then in one child snapshot the inode might have been
reattached and might not be unlinked. Deleting the inode in the internal
snapshot node would be wrong, because then we'll delete all the extents
that the child snapshot references.
But if an unlinked inode does not have any overwrites in child
snapshots, we're fine: the inode is overwritten in all child snapshots,
so we can do the deletion at the point of commonality in the snapshot
tree, i.e. the node where we found it.
This patch adds a new helper, bch2_propagate_key_to_snapshot_leaves(),
to handle the case where we need to update a key that does have
overwrites in child snapshots: we copy the key to leaf snapshot nodes,
and then rewind fsck and process the needed updates there.
With this, fsck can now always correctly handle unlinked inodes found in
internal snapshot nodes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
After deleteing snapshots, we may be left with a snapshot tree where
some nodes only have one child, and we have a linear chain.
Interior snapshot nodes are never used directly (i.e. they never have
subvolumes that point to them), they are only referred to by child
snapshot nodes - hence, they are redundant.
The existing code talks about redundant snapshot nodes as forming an
equivalence class; i.e. nodes for which snapshot_t->equiv is equal. In a
given equivalence class, we only ever need a single key at a given
position - i.e. multiple versions with different snapshot fields are
redundant.
The existing snapshot cleanup code deletes these redundant keys, but not
redundant nodes. It turns out this is buggy, because we assume that
after snapshot deletion finishes we should only have a single key per
equivalence class, but the btree update path doesn't preserve this -
overwriting keys in old snapshots doesn't check for the equivalence
class being equal, and thus we can end up with duplicate keys in the
same equivalence class and fsck complaining about snapshot deletion not
having run correctly.
The equivalence class notion has been leaking out of the core snapshots
code and into too much other code, i.e. fsck, so this patch takes a
different approach: snapshot deletion now moves keys to the node in an
equivalence class being kept (the leafiest node) and then deletes the
redundant nodes in the equivalence class.
Some work has to be done to correctly delete interior snapshot nodes;
snapshot node depth and skiplist fields for descendant nodes have to be
fixed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The is_ancestor bitmap is an optimization for bch2_snapshot_is_ancestor:
once we get sufficiently close to the ancestor ID we're searching for, we
test a bitmap.
But initialization of the is_ancestor bitmap was broken; we do it by
using bch2_snapshot_parent(), but we call that on nodes that haven't
been initialized yet with bch2_mark_snapshot().
Fix this by adding a separate loop in bch2_snapshots_read() for
initializing the is_ancestor bitmap, and also add some new debug asserts
for checking this sort of breakage in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In the bch2_mount() error path, we were calling
deactivate_locked_super(), which calls ->kill_sb(), which in our case
was calling bch2_fs_free() without __bch2_fs_stop().
This changes bch2_mount() to just call bch2_fs_stop() directly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In https://github.com/koverstreet/bcachefs/issues/450, we're seeing
unexplained btree_path_relock_fail events - according to the information
currently in the tracepoint, it appears the relock should be succeeding.
This adds lock counts to the tracepoint to help track it down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
subvolume.c has gotten a bit large, this splits out a separate file just
for managing snapshot trees - BTREE_ID_snapshots.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In __bch2_buffered_write, if we fail to write to an entire !uptodate
folio, we have to back out the write, bail out and retry.
But we were missing an iov_iter_revert() call, so the data written to
the folio was lost and the rest of the write shifted to the wrong
offset.
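A hedged sketch of the backout, with a hypothetical helper:

  #include <linux/uio.h>

  /* Bytes already pulled out of the iov_iter for a folio we then abandon
   * must be handed back with iov_iter_revert(), otherwise the retry
   * resumes at the wrong offset and the abandoned bytes are lost. */
  static size_t back_out_short_write(struct iov_iter *iter, size_t copied)
  {
          iov_iter_revert(iter, copied);  /* the call that was missing */
          return 0;                       /* nothing counts as written */
  }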
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The folio_hole_offset() helper returns a mix of bool and int types.
The latter is to support a possible -EAGAIN error code when using
nonblocking locks. This is not only confusing, but the only caller
also essentially ignores errors outside of stopping the range
iteration. This means an -EAGAIN error can't return directly from
folio_hole_offset() and may be lost via bch2_clamp_data_hole().
Fix up the error handling and make it more readable.
__filemap_get_folio() returns -ENOENT instead of NULL when no folio
exists, so reuse the same error code in folio_hole_offset(). Fix up
bch2_seek_pagecache_hole() to return the current offset on -ENOENT,
but otherwise return unexpected error code up to the caller.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
For extents, we increase the number of bits of the size field to allow
extents to get bigger due to merging - but this code didn't check for
overflow.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- There was no need for a retry loop in bch2_extent_fallocate(); if we
have to retry we may be overwriting something different and we need
to return an error and let the caller retry.
- The bch2_alloc_sectors_start() error path was wrong, and wasn't
running our cleanup at the end of the function
This also fixes a very rare open bucket leak due to the missing cleanup.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug in the cycle detector, bch2_check_for_deadlock() - we
have to make sure the node pointers in the btree paths array are set to
something not-garbage before another thread may see them.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes the device removal tests, which have been failing at random
due to the fact that when we're running the .key_invalid checks in the
write path the key may actually no longer exist - we might be racing
with the keys being deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
To ensure we aren't shooting ourselves in the foot after merge, in case we
later do revisions of dirent or store multiple names for casefolding, limit
this to 512 for now.
Previously this define was tied to the maximum size a d_name in bch_dirent
could be.
Signed-off-by: Joshua Ashton <joshua@froggi.es>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Avoids doing a full strnlen for getting the length of the name of a
dirent entry.
Given the fact that the name of dirents is stored at the end of the
bkey's value, and we know the length of that in u64s, we can find the
last u64 and figure out how many NUL bytes are at the end of the string.
On little endian systems this ends up being the leading zeros of the
last u64, whereas on big endian systems this ends up being the trailing
zeros of the last u64.
We can take that value in bits and divide it by 8 to get the number of
NUL bytes at the end.
There is no endian-fixup or other compatibility here as this is string
data interpreted as a u64.
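A simplified sketch of the calculation (hypothetical helper; assumes a
non-empty name NUL-padded out to the u64 boundary):

  #include <asm/byteorder.h>
  #include <linux/types.h>

  static unsigned int dirent_name_bytes_sketch(const u64 *val,
                                               unsigned int nr_u64s)
  {
          u64 last = val[nr_u64s - 1];
          unsigned int nul_bytes;

  #ifdef __LITTLE_ENDIAN
          /* Later string bytes land in the high bits of a LE load. */
          nul_bytes = last ? __builtin_clzll(last) / 8 : 8;
  #else
          /* ...and in the low bits of a BE load. */
          nul_bytes = last ? __builtin_ctzll(last) / 8 : 8;
  #endif
          return nr_u64s * sizeof(u64) - nul_bytes;
  }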
Signed-off-by: Joshua Ashton <joshua@froggi.es>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A nice cleanup that avoids a bunch of open-coding name/string usage
around dirent usage.
Will be used by casefolding impl in future commits.
Signed-off-by: Joshua Ashton <joshua@froggi.es>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're hunting for an open_bucket leak, add an assertion to help track it
down: also, we can't use the bch_fs after dropping our write ref to it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Six locks do lock handoff via the wakeup path: the thread doing the
wakeup also takes the lock on behalf of the waiter, which means the
waiter only has to look at its waitlist entry, and doesn't have to touch
the lock cacheline while another thread is using it.
Linus noticed that this needs a real barrier, which this patch fixes.
Also add a comment for the should_sleep_fn() error path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: linux-bcachefs@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
This will be used when we need to re-hash a directory tree when setting
flags.
It is not possible to have concurrent btree_trans on a thread.
Signed-off-by: Joshua Ashton <joshua@froggi.es>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now we also print the open_buckets owned by each write_point - this is
to help with debugging a shutdown hang.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes the replicas_write_errors test: the patch
bcachefs: mark journal replicas before journal write submission
partially fixed replicas marking for the journal, but it broke the case
where one replica failed - this patch re-adds marking after the journal
write completes, when we know how many replicas succeeded.
Additionally, we do not consider it a fsck error when the very last
journal entry is not correctly marked, since there is an inherent race
there.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Split out a new file from recovery.c for managing the list of keys we
read from the journal: before journal replay finishes, the btree
iterator code needs to be able to iterate over and return keys from the
journal as well, so there's a fair bit of code here.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Split out a new file for bch_sb_field_members - we'll likely want to
move more code here in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fs-io.c is too big - time for some reorganization
- fs-dio.c: direct io
- fs-pagecache.c: pagecache data structures (bch_folio), utility code
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
On old kernels, kmalloc() may return an allocation that's not naturally
aligned - this resulted in a bug where we allocated a bio with too few
biovecs. Fix this by using buf_pages().
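For reference, a minimal sketch of the idea behind buf_pages() (relying
on the kernel's PAGE_SIZE and DIV_ROUND_UP macros; the exact helper may
differ):

  /*
   * A buffer that isn't page aligned can straddle one more page than
   * len / PAGE_SIZE suggests, so size the bio's biovec array from the
   * buffer's actual page span rather than from its length alone.
   */
  static inline unsigned buf_pages(void *p, size_t len)
  {
          return DIV_ROUND_UP(len + ((unsigned long) p & (PAGE_SIZE - 1)),
                              PAGE_SIZE);
  }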
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug where we were already passing bkey_invalid_flags
around, but treating the parameter as just read/write - so the compat
code wasn't being run correctly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A while back, we changed bkey_format generation to ensure that the
packed representation could never represent fields larger than the
unpacked representation.
This was to ensure that bkey_packed_successor() always gave a sensible
result, but in the current code bkey_packed_successor() is only used in
a debug assertion - not for anything important.
This kills the requirement that those weird bkey formats have already
been eliminated, and instead changes the assertion to check whether
we're dealing with an old weird bkey format.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add error messages when we fail to look up an inode, and also add a few
missing bch2_err_class() calls.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've observed significant lock thrashing on fstests generic/083 in
fallocate, due to dropping and retaking btree locks when checking the
pagecache for data.
This adds a nonblocking mode to bch2_clamp_data_hole(), which only uses
folio_trylock() and can thus be used safely while btree locks are held -
we now only have to drop btree locks as a fallback, on actual lock
contention.
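Roughly, the nonblocking probe looks something like this (a hedged
sketch under assumed names, not the actual bch2_clamp_data_hole() code;
it also assumes a kernel where filemap_get_folio() returns an ERR_PTR
when nothing is cached):

  /*
   * Check whether a cached folio at @index holds data, without sleeping;
   * if the folio lock is contended, return -EAGAIN so the caller can
   * drop btree locks and retry in blocking mode.
   */
  static int folio_has_data_nonblock(struct address_space *mapping,
                                     pgoff_t index, bool *has_data)
  {
          struct folio *folio = filemap_get_folio(mapping, index);

          if (IS_ERR(folio)) {                    /* nothing cached here */
                  *has_data = false;
                  return 0;
          }

          if (!folio_trylock(folio)) {            /* contended - fall back */
                  folio_put(folio);
                  return -EAGAIN;
          }

          /* illustrative predicate only; the real code consults bch_folio state */
          *has_data = folio_test_dirty(folio) || folio_test_uptodate(folio);
          folio_unlock(folio);
          folio_put(folio);
          return 0;
  }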
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes should_restart_for_topology_repair() - previously it was
returning false if the btree io path had already selected topology
repair to run, even if it hadn't run yet.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
UBSAN was complaining about a shift by 64 in set_inc_field().
This only happened when the value being shifted was 0, so in theory it
should be harmless - a shift by 64 (or the register width) should
logically give a result of 0, but in practice CPUs leave the input
unchanged when the shift count wraps - and since our input here is 0,
the output is still what we want.
But, it's still undefined behaviour and we need our UBSAN output to be
clean, so it needs to be fixed.
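A minimal sketch of the general fix pattern (not the actual
set_inc_field() code) - guard the shift count instead of relying on
hardware wrap behaviour:

  #include <stdint.h>

  /*
   * v << 64 is undefined behaviour in C even when the intended result
   * is 0, so make the full-width case explicit.
   */
  static inline uint64_t shl_no_ub(uint64_t v, unsigned shift)
  {
          return shift < 64 ? v << shift : 0;
  }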
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- add a to_text() method for bkey_format
- convert bch2_bkey_format_validate() to modern error message style,
where we pass a printbuf for the error string instead of returning a
static string - see the sketch below
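A rough illustration of that style (a hedged sketch - the validation
check shown is purely illustrative, not the real one):

  int bkey_format_validate_sketch(const struct bkey_format *f,
                                  struct printbuf *err)
  {
          for (unsigned i = 0; i < f->nr_fields; i++)
                  if (f->bits_per_field[i] > 64) {   /* illustrative check */
                          prt_printf(err, "field %u too large: %u bits",
                                     i, f->bits_per_field[i]);
                          return -EINVAL;
                  }
          return 0;
  }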
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new bitset btree for inodes pending deletion; this means we no
longer have to scan the full inodes btree after an unclean shutdown.
Specifically, this adds:
- a trigger to update the deleted_inodes btree based on changes to the
inodes btree
- a new recovery pass
- and check_inodes is now only a fsck pass.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A number of smallish fixes for overlapping extent repair, and (part of)
a new unit test. This fixes all the issues turned up by bhzhu203, in his
filesystem image from running mongodb + snapshots.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We want to ensure that fsck actually fixed all the errors it found - the
second fsck run should be clean.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
clang had a few more warnings about enum conversion, and also didn't
like the opts.c initializer.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This refactoring centralizes the definition of per-btree properties.
bch2_key_types_allowed was also about to overflow a u32, so expand that
to a u64.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds bch2_run_explicit_recovery_pass(), for rewinding recovery and
explicitly running a specific recovery pass - this is a more general
replacement for how we were running topology repair before.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This introduces bch2_run_explicit_recovery_pass() and uses it when fsck
detects that we need to re-run dead snapshot cleanup, and makes dead
snapshot cleanup more like a normal recovery pass.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The write buffer mechanism journals keys twice in certain
situations. A key is always journaled on write buffer insertion, and
is potentially journaled again if a write buffer flush falls into
either of the slow btree insert paths. This has been shown to cause
journal recovery ordering problems in the event of an untimely
crash.
For example, consider if a key is inserted into index 0 of a write
buffer, the active write buffer switches to index 1, the key is
deleted in index 1, and then index 0 is flushed. If the original key
is rejournaled in the btree update from the index 0 flush, the (now
deleted) key is journaled in a seq buffer ahead of the latest
version of the key (which was journaled when the key was deleted in
index 1). If the fs crashes while this is still observable in the
log, recovery sees the key from the btree update after the delete
key from the write buffer insert, which is the incorrect order. This
problem is occasionally reproduced by generic/388 and generally
manifests as one or more backpointer entry inconsistencies.
To avoid this problem, never rejournal write buffered key updates to
the associated btree. Instead, use prejournaled key updates to pass
the journal seq of the write buffer insert down to the btree insert,
which updates the btree leaf pin to reflect the seq of the key.
Note that tracking the seq is required instead of just using
NOJOURNAL here because otherwise we lose protection of the write
buffer pin when the buffer is flushed, which means the key can fall
off the tail of the on-disk journal before the btree leaf is flushed
and lead to similar recovery inconsistencies.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Introduce support for prejournaled key updates. This allows a
transaction to commit an update for a key that already exists (and
is pinned) in the journal. This is required for btree write buffer
updates as the current scheme of journaling both on write buffer
insertion and write buffer (slow path) flush is unsafe in certain
crash recovery scenarios.
Create a small trans update wrapper to pass along the seq where the
key resides into the btree_insert_entry. From there, trans commit
passes the seq into the btree insert path where it is used to manage
the journal pin for the associated btree leaf.
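As a rough picture of the wrapper idea (a hedged sketch only - the names
and fields here are assumptions, not the actual bcachefs interface):

  /* Per-update record of where the key already lives in the journal. */
  struct insert_entry_sketch {
          struct bkey_i   *k;
          u64             journal_seq;  /* 0: journal normally; else prejournaled */
  };

  /*
   * Wrapper: queue the update and remember the seq the key was
   * originally journaled at, so the commit path can pin the btree leaf
   * to that seq instead of journaling the key a second time.
   */
  static void trans_update_prejournaled(struct insert_entry_sketch *i,
                                        struct bkey_i *k, u64 seq)
  {
          i->k            = k;
          i->journal_seq  = seq;
  }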
Note that this patch only introduces the underlying mechanism and
otherwise includes no functional changes.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
There is only one other caller so eliminate some boilerplate.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This is in preparation to support prejournaled keys. We want the
ability to optionally pass a seq stored in the btree update rather
than the seq of the committing transaction.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We commonly use no_data_io mode when debugging filesystem metadata
dumps, where data checksum/compression errors are expected and
unimportant - this patch suppresses them.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>