// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2007 Oracle.  All rights reserved.
 */

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/radix-tree.h>
#include <linux/writeback.h>
#include <linux/buffer_head.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/migrate.h>
#include <linux/ratelimit.h>
#include <linux/uuid.h>
#include <linux/semaphore.h>
#include <linux/error-injection.h>
#include <linux/crc32c.h>
#include <linux/sched/mm.h>
#include <asm/unaligned.h>
#include "ctree.h"
#include "disk-io.h"
#include "transaction.h"
#include "btrfs_inode.h"
#include "volumes.h"
#include "print-tree.h"
#include "locking.h"
#include "tree-log.h"
#include "free-space-cache.h"
#include "free-space-tree.h"
#include "inode-map.h"
#include "check-integrity.h"
#include "rcu-string.h"
#include "dev-replace.h"
#include "raid56.h"
#include "sysfs.h"
#include "qgroup.h"
#include "compression.h"
#include "tree-checker.h"
#include "ref-verify.h"

#ifdef CONFIG_X86
#include <asm/cpufeature.h>
#endif

#define BTRFS_SUPER_FLAG_SUPP	(BTRFS_HEADER_FLAG_WRITTEN |\
				 BTRFS_HEADER_FLAG_RELOC |\
				 BTRFS_SUPER_FLAG_ERROR |\
				 BTRFS_SUPER_FLAG_SEEDING |\
				 BTRFS_SUPER_FLAG_METADUMP |\
				 BTRFS_SUPER_FLAG_METADUMP_V2)

static const struct extent_io_ops btree_extent_io_ops;
static void end_workqueue_fn(struct btrfs_work *work);
static void btrfs_destroy_ordered_extents(struct btrfs_root *root);
static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
				      struct btrfs_fs_info *fs_info);
static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root);
static int btrfs_destroy_marked_extents(struct btrfs_fs_info *fs_info,
					struct extent_io_tree *dirty_pages,
					int mark);
static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
				       struct extent_io_tree *pinned_extents);
static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info);
static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info);

/*
 * btrfs_end_io_wq structs are used to do processing in task context when an IO
 * is complete. This is used during reads to verify checksums, and it is used
 * by writes to insert metadata for new file extents after IO is complete.
 */
struct btrfs_end_io_wq {
	struct bio *bio;
	bio_end_io_t *end_io;
	void *private;
	struct btrfs_fs_info *info;
	blk_status_t status;
	enum btrfs_wq_endio_type metadata;
	struct btrfs_work work;
};

static struct kmem_cache *btrfs_end_io_wq_cache;

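/*
 * Create the slab cache backing btrfs_end_io_wq allocations; called once at
 * module init.
 */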
int __init btrfs_end_io_wq_init(void)
{
	btrfs_end_io_wq_cache = kmem_cache_create("btrfs_end_io_wq",
					sizeof(struct btrfs_end_io_wq),
					0,
					SLAB_MEM_SPREAD,
					NULL);
	if (!btrfs_end_io_wq_cache)
		return -ENOMEM;
	return 0;
}

void __cold btrfs_end_io_wq_exit(void)
{
	kmem_cache_destroy(btrfs_end_io_wq_cache);
}

/*
 * async submit bios are used to offload expensive checksumming
 * onto the worker threads.  They checksum file and metadata bios
 * just before they are sent down the IO stack.
 */
struct async_submit_bio {
	void *private_data;
	struct bio *bio;
	extent_submit_bio_start_t *submit_bio_start;
	int mirror_num;
	/*
	 * bio_offset is optional, can be used if the pages in the bio
	 * can't tell us where in the file the bio should go
	 */
	u64 bio_offset;
	struct btrfs_work work;
	blk_status_t status;
};

/*
 * Lockdep class keys for extent_buffer->lock's in this root.  For a given
 * eb, the lockdep key is determined by the btrfs_root it belongs to and
 * the level the eb occupies in the tree.
 *
 * Different roots are used for different purposes and may nest inside each
 * other and they require separate keysets.  As lockdep keys should be
 * static, assign keysets according to the purpose of the root as indicated
 * by btrfs_root->root_key.objectid.  This ensures that all special purpose
 * roots have separate keysets.
 *
 * Lock-nesting across peer nodes is always done with the immediate parent
 * node locked thus preventing deadlock.  As lockdep doesn't know this, use
 * subclass to avoid triggering lockdep warning in such cases.
 *
 * The key is set by the readpage_end_io_hook after the buffer has passed
 * csum validation but before the pages are unlocked.  It is also set by
 * btrfs_init_new_buffer on freshly allocated blocks.
 *
 * We also add a check to make sure the highest level of the tree is the
 * same as our lockdep setup here.  If BTRFS_MAX_LEVEL changes, this code
 * needs update as well.
 */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
# if BTRFS_MAX_LEVEL != 8
#  error
# endif

static struct btrfs_lockdep_keyset {
	u64			id;		/* root objectid */
	const char		*name_stem;	/* lock name stem */
	char			names[BTRFS_MAX_LEVEL + 1][20];
	struct lock_class_key	keys[BTRFS_MAX_LEVEL + 1];
} btrfs_lockdep_keysets[] = {
	{ .id = BTRFS_ROOT_TREE_OBJECTID,	.name_stem = "root"	},
	{ .id = BTRFS_EXTENT_TREE_OBJECTID,	.name_stem = "extent"	},
	{ .id = BTRFS_CHUNK_TREE_OBJECTID,	.name_stem = "chunk"	},
	{ .id = BTRFS_DEV_TREE_OBJECTID,	.name_stem = "dev"	},
	{ .id = BTRFS_FS_TREE_OBJECTID,		.name_stem = "fs"	},
	{ .id = BTRFS_CSUM_TREE_OBJECTID,	.name_stem = "csum"	},
	{ .id = BTRFS_QUOTA_TREE_OBJECTID,	.name_stem = "quota"	},
	{ .id = BTRFS_TREE_LOG_OBJECTID,	.name_stem = "log"	},
	{ .id = BTRFS_TREE_RELOC_OBJECTID,	.name_stem = "treloc"	},
	{ .id = BTRFS_DATA_RELOC_TREE_OBJECTID,	.name_stem = "dreloc"	},
	{ .id = BTRFS_UUID_TREE_OBJECTID,	.name_stem = "uuid"	},
	{ .id = BTRFS_FREE_SPACE_TREE_OBJECTID,	.name_stem = "free-space" },
	{ .id = 0,				.name_stem = "tree"	},
};

void __init btrfs_init_lockdep(void)
{
	int i, j;

	/* initialize lockdep class names */
	for (i = 0; i < ARRAY_SIZE(btrfs_lockdep_keysets); i++) {
		struct btrfs_lockdep_keyset *ks = &btrfs_lockdep_keysets[i];

		for (j = 0; j < ARRAY_SIZE(ks->names); j++)
			snprintf(ks->names[j], sizeof(ks->names[j]),
				 "btrfs-%s-%02d", ks->name_stem, j);
	}
}

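/*
 * Pick the lockdep class for @eb->lock from the keyset table above, based on
 * the owning root's objectid and the node's level in the tree.
 */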
void btrfs_set_buffer_lockdep_class(u64 objectid, struct extent_buffer *eb,
				    int level)
{
	struct btrfs_lockdep_keyset *ks;

	BUG_ON(level >= ARRAY_SIZE(ks->keys));

	/* find the matching keyset, id 0 is the default entry */
	for (ks = btrfs_lockdep_keysets; ks->id; ks++)
		if (ks->id == objectid)
			break;

	lockdep_set_class_and_name(&eb->lock,
				   &ks->keys[level], ks->names[level]);
}

#endif

/*
 * extents on the btree inode are pretty simple, there's one extent
 * that covers the entire device
 */
struct extent_map *btree_get_extent(struct btrfs_inode *inode,
		struct page *page, size_t pg_offset, u64 start, u64 len,
		int create)
{
	struct btrfs_fs_info *fs_info = inode->root->fs_info;
	struct extent_map_tree *em_tree = &inode->extent_tree;
	struct extent_map *em;
	int ret;

	read_lock(&em_tree->lock);
	em = lookup_extent_mapping(em_tree, start, len);
	if (em) {
		em->bdev = fs_info->fs_devices->latest_bdev;
		read_unlock(&em_tree->lock);
		goto out;
	}
	read_unlock(&em_tree->lock);

	em = alloc_extent_map();
	if (!em) {
		em = ERR_PTR(-ENOMEM);
		goto out;
	}
	em->start = 0;
	em->len = (u64)-1;
	em->block_len = (u64)-1;
	em->block_start = 0;
	em->bdev = fs_info->fs_devices->latest_bdev;

	write_lock(&em_tree->lock);
	ret = add_extent_mapping(em_tree, em, 0);
	if (ret == -EEXIST) {
		free_extent_map(em);
		em = lookup_extent_mapping(em_tree, start, len);
		if (!em)
			em = ERR_PTR(-EIO);
	} else if (ret) {
		free_extent_map(em);
		em = ERR_PTR(ret);
	}
	write_unlock(&em_tree->lock);

out:
	return em;
}

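/* Thin wrapper around the generic crc32c(), kept for the btrfs call sites. */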
u32 btrfs_csum_data(const char *data, u32 seed, size_t len)
{
	return crc32c(seed, data, len);
}

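/* Fold the crc32c state (final inversion) and store it as a little-endian u32. */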
void btrfs_csum_final(u32 crc, u8 *result)
{
	put_unaligned_le32(~crc, result);
}

/*
 * Compute the csum of a btree block and store the result to provided buffer.
 *
 * Returns error if the extent buffer cannot be mapped.
 */
static int csum_tree_block(struct extent_buffer *buf, u8 *result)
{
	unsigned long len;
	unsigned long cur_len;
	unsigned long offset = BTRFS_CSUM_SIZE;
	char *kaddr;
	unsigned long map_start;
	unsigned long map_len;
	int err;
	u32 crc = ~(u32)0;

	len = buf->len - offset;
	while (len > 0) {
		/*
		 * Note: we don't need to check for the err == 1 case here, as
		 * with the given combination of 'start = BTRFS_CSUM_SIZE (32)'
		 * and 'min_len = 32' and the currently implemented mapping
		 * algorithm we cannot cross a page boundary.
		 */
		err = map_private_extent_buffer(buf, offset, 32,
					&kaddr, &map_start, &map_len);
		if (WARN_ON(err))
			return err;
		cur_len = min(len, map_len - (offset - map_start));
		crc = btrfs_csum_data(kaddr + offset - map_start,
				      crc, cur_len);
		len -= cur_len;
		offset += cur_len;
	}
	memset(result, 0, BTRFS_CSUM_SIZE);

	btrfs_csum_final(crc, result);

	return 0;
}

/*
 * we can't consider a given block up to date unless the transid of the
 * block matches the transid in the parent node's pointer.  This is how we
 * detect blocks that either didn't get written at all or got written
 * in the wrong place.
 */
static int verify_parent_transid(struct extent_io_tree *io_tree,
				 struct extent_buffer *eb, u64 parent_transid,
				 int atomic)
{
	struct extent_state *cached_state = NULL;
	int ret;
	bool need_lock = (current->journal_info == BTRFS_SEND_TRANS_STUB);

	if (!parent_transid || btrfs_header_generation(eb) == parent_transid)
		return 0;

	if (atomic)
		return -EAGAIN;

	if (need_lock) {
		btrfs_tree_read_lock(eb);
		btrfs_set_lock_blocking_read(eb);
	}

	lock_extent_bits(io_tree, eb->start, eb->start + eb->len - 1,
			 &cached_state);
	if (extent_buffer_uptodate(eb) &&
	    btrfs_header_generation(eb) == parent_transid) {
		ret = 0;
		goto out;
	}
	btrfs_err_rl(eb->fs_info,
		"parent transid verify failed on %llu wanted %llu found %llu",
			eb->start,
			parent_transid, btrfs_header_generation(eb));
	ret = 1;

	/*
	 * Things reading via commit roots that don't have normal protection,
	 * like send, can have a really old block in cache that may point at a
	 * block that has been freed and re-allocated.  So don't clear uptodate
	 * if we find an eb that is under IO (dirty/writeback) because we could
	 * end up reading in the stale data and then writing it back out and
	 * making everybody very sad.
	 */
	if (!extent_buffer_under_io(eb))
		clear_extent_buffer_uptodate(eb);
out:
	unlock_extent_cached(io_tree, eb->start, eb->start + eb->len - 1,
			     &cached_state);
	if (need_lock)
		btrfs_tree_read_unlock_blocking(eb);
	return ret;
}

/*
 * Return 0 if the superblock checksum type matches the checksum value of that
 * algorithm. Pass the raw disk superblock data.
 */
static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
				  char *raw_disk_sb)
{
	struct btrfs_super_block *disk_sb =
		(struct btrfs_super_block *)raw_disk_sb;
	u16 csum_type = btrfs_super_csum_type(disk_sb);
	int ret = 0;

	if (csum_type == BTRFS_CSUM_TYPE_CRC32) {
		u32 crc = ~(u32)0;
		char result[sizeof(crc)];

		/*
		 * The super_block structure does not span the whole
		 * BTRFS_SUPER_INFO_SIZE range, we expect that the unused space
		 * is filled with zeros and is included in the checksum.
		 */
		crc = btrfs_csum_data(raw_disk_sb + BTRFS_CSUM_SIZE,
				crc, BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE);
		btrfs_csum_final(crc, result);

		if (memcmp(raw_disk_sb, result, sizeof(result)))
			ret = 1;
	}

	if (csum_type >= ARRAY_SIZE(btrfs_csum_sizes)) {
		btrfs_err(fs_info, "unsupported checksum algorithm %u",
			  csum_type);
		ret = 1;
	}

	return ret;
}

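/*
 * Verify that @eb's level and, optionally, its first key match what the
 * parent node recorded for this block; @first_key may be NULL to skip the
 * key comparison.
 */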
int btrfs_verify_level_key(struct extent_buffer *eb, int level,
			   struct btrfs_key *first_key, u64 parent_transid)
{
	struct btrfs_fs_info *fs_info = eb->fs_info;
	int found_level;
	struct btrfs_key found_key;
	int ret;

	found_level = btrfs_header_level(eb);
	if (found_level != level) {
		WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
		     KERN_ERR "BTRFS: tree level check failed\n");
		btrfs_err(fs_info,
"tree level mismatch detected, bytenr=%llu level expected=%u has=%u",
			  eb->start, level, found_level);
		return -EIO;
	}

	if (!first_key)
		return 0;

	/*
	 * For live tree blocks (new tree blocks in the current transaction),
	 * we need proper lock context to avoid races, which is impossible
	 * here.  So we only check tree blocks that were read from disk, whose
	 * generation <= fs_info->last_trans_committed.
	 */
	if (btrfs_header_generation(eb) > fs_info->last_trans_committed)
		return 0;
	if (found_level)
		btrfs_node_key_to_cpu(eb, &found_key, 0);
	else
		btrfs_item_key_to_cpu(eb, &found_key, 0);
	ret = btrfs_comp_cpu_keys(first_key, &found_key);

	if (ret) {
		WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
		     KERN_ERR "BTRFS: tree first key check failed\n");
		btrfs_err(fs_info,
"tree first key mismatch detected, bytenr=%llu parent_transid=%llu key expected=(%llu,%u,%llu) has=(%llu,%u,%llu)",
			  eb->start, parent_transid, first_key->objectid,
			  first_key->type, first_key->offset,
			  found_key.objectid, found_key.type,
			  found_key.offset);
	}
	return ret;
}

/*
 * helper to read a given tree block, doing retries as required when
 * the checksums don't match and we have alternate mirrors to try.
 *
 * @parent_transid:	expected transid, skip check if 0
 * @level:		expected level, mandatory check
 * @first_key:		expected key of first slot, skip check if NULL
 */
static int btree_read_extent_buffer_pages(struct extent_buffer *eb,
					  u64 parent_transid, int level,
					  struct btrfs_key *first_key)
{
	struct btrfs_fs_info *fs_info = eb->fs_info;
	struct extent_io_tree *io_tree;
	int failed = 0;
	int ret;
	int num_copies = 0;
	int mirror_num = 0;
	int failed_mirror = 0;

	io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
	while (1) {
		clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
		ret = read_extent_buffer_pages(eb, WAIT_COMPLETE, mirror_num);
		if (!ret) {
			if (verify_parent_transid(io_tree, eb,
						  parent_transid, 0))
				ret = -EIO;
			else if (btrfs_verify_level_key(eb, level,
						first_key, parent_transid))
				ret = -EUCLEAN;
			else
				break;
		}

		num_copies = btrfs_num_copies(fs_info,
					      eb->start, eb->len);
		if (num_copies == 1)
			break;

		if (!failed_mirror) {
			failed = 1;
			failed_mirror = eb->read_mirror;
		}

		mirror_num++;
		if (mirror_num == failed_mirror)
			mirror_num++;

		if (mirror_num > num_copies)
			break;
	}

	if (failed && !ret && failed_mirror)
		btrfs_repair_eb_io_failure(eb, failed_mirror);

	return ret;
}

/*
 * checksum a dirty tree block before IO.  This has extra checks to make sure
 * we only fill in the checksum field in the first page of a multi-page block
 */
static int csum_dirty_buffer(struct btrfs_fs_info *fs_info, struct page *page)
{
	u64 start = page_offset(page);
	u64 found_start;
	u8 result[BTRFS_CSUM_SIZE];
	u16 csum_size = btrfs_super_csum_size(fs_info->super_copy);
	struct extent_buffer *eb;
	int ret;

	eb = (struct extent_buffer *)page->private;
	if (page != eb->pages[0])
		return 0;

	found_start = btrfs_header_bytenr(eb);
	/*
	 * Please do not consolidate these warnings into a single if.
	 * It is useful to know what went wrong.
	 */
	if (WARN_ON(found_start != start))
		return -EUCLEAN;
	if (WARN_ON(!PageUptodate(page)))
		return -EUCLEAN;

	ASSERT(memcmp_extent_buffer(eb, fs_info->fs_devices->metadata_uuid,
				    btrfs_header_fsid(), BTRFS_FSID_SIZE) == 0);

	if (csum_tree_block(eb, result))
		return -EINVAL;

	if (btrfs_header_level(eb))
		ret = btrfs_check_node(eb);
	else
		ret = btrfs_check_leaf_full(eb);

	if (ret < 0) {
		btrfs_err(fs_info,
		"block=%llu write time tree block corruption detected",
			  eb->start);
		return ret;
	}
	write_extent_buffer(eb, result, 0, csum_size);

	return 0;
}

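/*
 * Return 0 if the fsid in @eb's header matches this filesystem (or one of
 * its seed devices), nonzero otherwise.
 */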
static int check_tree_block_fsid(struct extent_buffer *eb)
{
	struct btrfs_fs_info *fs_info = eb->fs_info;
	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
	u8 fsid[BTRFS_FSID_SIZE];
	int ret = 1;

	read_extent_buffer(eb, fsid, btrfs_header_fsid(), BTRFS_FSID_SIZE);
	while (fs_devices) {
		u8 *metadata_uuid;

		/*
		 * Checking the incompat flag is only valid for the current
		 * fs.  For seed devices it's forbidden to have their uuid
		 * changed so reading ->fsid in this case is fine
		 */
		if (fs_devices == fs_info->fs_devices &&
		    btrfs_fs_incompat(fs_info, METADATA_UUID))
			metadata_uuid = fs_devices->metadata_uuid;
		else
			metadata_uuid = fs_devices->fsid;

		if (!memcmp(fsid, metadata_uuid, BTRFS_FSID_SIZE)) {
			ret = 0;
			break;
		}
		fs_devices = fs_devices->seed;
	}
	return ret;
}

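/*
 * Read endio validation for btree pages: verify the block's bytenr, fsid,
 * level and checksum, then run the tree checker before marking the extent
 * buffer uptodate.
 */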
static int btree_readpage_end_io_hook(struct btrfs_io_bio *io_bio,
				      u64 phy_offset, struct page *page,
				      u64 start, u64 end, int mirror)
{
	u64 found_start;
	int found_level;
	struct extent_buffer *eb;
	struct btrfs_root *root = BTRFS_I(page->mapping->host)->root;
	struct btrfs_fs_info *fs_info = root->fs_info;
	u16 csum_size = btrfs_super_csum_size(fs_info->super_copy);
	int ret = 0;
	u8 result[BTRFS_CSUM_SIZE];
	int reads_done;

	if (!page->private)
		goto out;

	eb = (struct extent_buffer *)page->private;

	/* the pending IO might have been the only thing that kept this buffer
	 * in memory.  Make sure we have a ref for all these other checks
	 */
	extent_buffer_get(eb);

	reads_done = atomic_dec_and_test(&eb->io_pages);
	if (!reads_done)
		goto err;

	eb->read_mirror = mirror;
	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
		ret = -EIO;
		goto err;
	}

	found_start = btrfs_header_bytenr(eb);
	if (found_start != eb->start) {
		btrfs_err_rl(fs_info, "bad tree block start, want %llu have %llu",
			     eb->start, found_start);
		ret = -EIO;
		goto err;
	}
	if (check_tree_block_fsid(eb)) {
		btrfs_err_rl(fs_info, "bad fsid on block %llu",
			     eb->start);
		ret = -EIO;
		goto err;
	}
	found_level = btrfs_header_level(eb);
	if (found_level >= BTRFS_MAX_LEVEL) {
		btrfs_err(fs_info, "bad tree block level %d on %llu",
			  (int)btrfs_header_level(eb), eb->start);
		ret = -EIO;
		goto err;
	}

	btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb),
				       eb, found_level);

	ret = csum_tree_block(eb, result);
	if (ret)
		goto err;

	if (memcmp_extent_buffer(eb, result, 0, csum_size)) {
		u32 val;
		u32 found = 0;

		memcpy(&found, result, csum_size);

		read_extent_buffer(eb, &val, 0, csum_size);
		btrfs_warn_rl(fs_info,
		"%s checksum verify failed on %llu wanted %x found %x level %d",
			      fs_info->sb->s_id, eb->start,
			      val, found, btrfs_header_level(eb));
		ret = -EUCLEAN;
		goto err;
	}

	/*
	 * If this is a leaf block and it is corrupt, set the corrupt bit so
	 * that we don't try and read the other copies of this block, just
	 * return -EIO.
	 */
	if (found_level == 0 && btrfs_check_leaf_full(eb)) {
		set_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
		ret = -EIO;
	}

	if (found_level > 0 && btrfs_check_node(eb))
		ret = -EIO;

	if (!ret)
		set_extent_buffer_uptodate(eb);
	else
		btrfs_err(fs_info,
			  "block=%llu read time tree block corruption detected",
			  eb->start);
err:
	if (reads_done &&
	    test_and_clear_bit(EXTENT_BUFFER_READAHEAD, &eb->bflags))
		btree_readahead_hook(eb, ret);

	if (ret) {
		/*
		 * our io error hook is going to dec the io pages
		 * again, we have to make sure it has something
		 * to decrement
		 */
		atomic_inc(&eb->io_pages);
		clear_extent_buffer_uptodate(eb);
	}
	free_extent_buffer(eb);
out:
	return ret;
}

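/*
 * Bio completion callback: record the bio status and hand the remaining
 * completion work off to a btrfs workqueue, so checksum verification and
 * metadata updates run in task context (see btrfs_end_io_wq above).
 */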
static void end_workqueue_bio(struct bio *bio)
|
2008-04-10 04:28:12 +08:00
|
|
|
{
|
2014-07-30 06:55:42 +08:00
|
|
|
struct btrfs_end_io_wq *end_io_wq = bio->bi_private;
|
2008-04-10 04:28:12 +08:00
|
|
|
struct btrfs_fs_info *fs_info;
|
Btrfs: fix task hang under heavy compressed write
This has been reported and discussed for a long time, and this hang occurs in
both 3.15 and 3.16.
Btrfs now migrates to use kernel workqueue, but it introduces this hang problem.
Btrfs has a kind of work queued as an ordered way, which means that its
ordered_func() must be processed in the way of FIFO, so it usually looks like --
normal_work_helper(arg)
work = container_of(arg, struct btrfs_work, normal_work);
work->func() <---- (we name it work X)
for ordered_work in wq->ordered_list
ordered_work->ordered_func()
ordered_work->ordered_free()
The hang is a rare case, first when we find free space, we get an uncached block
group, then we go to read its free space cache inode for free space information,
so it will
file a readahead request
btrfs_readpages()
for page that is not in page cache
__do_readpage()
submit_extent_page()
btrfs_submit_bio_hook()
btrfs_bio_wq_end_io()
submit_bio()
end_workqueue_bio() <--(ret by the 1st endio)
queue a work(named work Y) for the 2nd
also the real endio()
So the hang occurs when work Y's work_struct and work X's work_struct happens
to share the same address.
A bit more explanation,
A,B,C -- struct btrfs_work
arg -- struct work_struct
kthread:
worker_thread()
pick up a work_struct from @worklist
process_one_work(arg)
worker->current_work = arg; <-- arg is A->normal_work
worker->current_func(arg)
normal_work_helper(arg)
A = container_of(arg, struct btrfs_work, normal_work);
A->func()
A->ordered_func()
A->ordered_free() <-- A gets freed
B->ordered_func()
submit_compressed_extents()
find_free_extent()
load_free_space_inode()
... <-- (the above readhead stack)
end_workqueue_bio()
btrfs_queue_work(work C)
B->ordered_free()
As if work A has a high priority in wq->ordered_list and there are more ordered
works queued after it, such as B->ordered_func(), its memory could have been
freed before normal_work_helper() returns, which means that kernel workqueue
code worker_thread() still has worker->current_work pointer to be work
A->normal_work's, ie. arg's address.
Meanwhile, work C is allocated after work A is freed, work C->normal_work
and work A->normal_work are likely to share the same address(I confirmed this
with ftrace output, so I'm not just guessing, it's rare though).
When another kthread picks up work C->normal_work to process, and finds our
kthread is processing it(see find_worker_executing_work()), it'll think
work C as a collision and skip then, which ends up nobody processing work C.
So the situation is that our kthread is waiting forever on work C.
Besides, there're other cases that can lead to deadlock, but the real problem
is that all btrfs workqueue shares one work->func, -- normal_work_helper,
so this makes each workqueue to have its own helper function, but only a
wraper pf normal_work_helper.
With this patch, I no long hit the above hang.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-08-15 23:36:53 +08:00
|
|
|
struct btrfs_workqueue *wq;
|
|
|
|
btrfs_work_func_t func;
|
2008-04-10 04:28:12 +08:00
|
|
|
|
|
|
|
fs_info = end_io_wq->info;
|
2017-06-03 15:38:06 +08:00
|
|
|
end_io_wq->status = bio->bi_status;
|
Btrfs: move data checksumming into a dedicated tree
Btrfs stores checksums for each data block. Until now, they have
been stored in the subvolume trees, indexed by the inode that is
referencing the data block. This means that when we read the inode,
we've probably read in at least some checksums as well.
But, this has a few problems:
* The checksums are indexed by logical offset in the file. When
compression is on, this means we have to do the expensive checksumming
on the uncompressed data. It would be faster if we could checksum
the compressed data instead.
* If we implement encryption, we'll be checksumming the plain text and
storing that on disk. This is significantly less secure.
* For either compression or encryption, we have to get the plain text
back before we can verify the checksum as correct. This makes the raid
layer balancing and extent moving much more expensive.
* It makes the front end caching code more complex, as we have to touch
the subvolume and inodes as we cache extents.
* There is potentially one copy of the checksum in each subvolume
referencing an extent.
The solution used here is to store the extent checksums in a dedicated
tree. This allows us to index the checksums by physical extent
start and length. It means:
* The checksum is against the data stored on disk, after any compression
or encryption is done.
* The checksum is stored in a central location, and can be verified without
following back references, or reading inodes.
This makes compression significantly faster by reducing the amount of
data that needs to be checksummed. It will also allow much faster
raid management code in general.
The checksums are indexed by a key with a fixed objectid (a magic value
in ctree.h) and offset set to the starting byte of the extent. This
allows us to copy the checksum items into the fsync log tree directly (or
any other tree), without having to invent a second format for them.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-12-09 05:58:54 +08:00
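As a concrete illustration of the key layout described above, a checksum
item's key can be built like this (a sketch; the constant names follow
ctree.h, and disk_bytenr stands for the extent's physical start):

    struct btrfs_key key;

    key.objectid = BTRFS_EXTENT_CSUM_OBJECTID;  /* fixed magic objectid */
    key.type = BTRFS_EXTENT_CSUM_KEY;
    key.offset = disk_bytenr;                   /* starting byte of the extent */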
|
|
|
|
2016-06-06 03:31:52 +08:00
|
|
|
if (bio_op(bio) == REQ_OP_WRITE) {
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 23:36:53 +08:00
|
|
|
if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA) {
|
|
|
|
wq = fs_info->endio_meta_write_workers;
|
|
|
|
func = btrfs_endio_meta_write_helper;
|
|
|
|
} else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE) {
|
|
|
|
wq = fs_info->endio_freespace_worker;
|
|
|
|
func = btrfs_freespace_write_helper;
|
|
|
|
} else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
|
|
|
|
wq = fs_info->endio_raid56_workers;
|
|
|
|
func = btrfs_endio_raid56_helper;
|
|
|
|
} else {
|
|
|
|
wq = fs_info->endio_write_workers;
|
|
|
|
func = btrfs_endio_write_helper;
|
|
|
|
}
|
Btrfs: move data checksumming into a dedicated tree
2008-12-09 05:58:54 +08:00
|
|
|
} else {
|
2014-09-12 18:44:03 +08:00
|
|
|
if (unlikely(end_io_wq->metadata ==
|
|
|
|
BTRFS_WQ_ENDIO_DIO_REPAIR)) {
|
|
|
|
wq = fs_info->endio_repair_workers;
|
|
|
|
func = btrfs_endio_repair_helper;
|
|
|
|
} else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 23:36:53 +08:00
|
|
|
wq = fs_info->endio_raid56_workers;
|
|
|
|
func = btrfs_endio_raid56_helper;
|
|
|
|
} else if (end_io_wq->metadata) {
|
|
|
|
wq = fs_info->endio_meta_workers;
|
|
|
|
func = btrfs_endio_meta_helper;
|
|
|
|
} else {
|
|
|
|
wq = fs_info->endio_workers;
|
|
|
|
func = btrfs_endio_helper;
|
|
|
|
}
|
Btrfs: move data checksumming into a dedicated tree
2008-12-09 05:58:54 +08:00
|
|
|
}
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 23:36:53 +08:00
|
|
|
|
|
|
|
btrfs_init_work(&end_io_wq->work, func, end_workqueue_fn, NULL, NULL);
|
|
|
|
btrfs_queue_work(wq, &end_io_wq->work);
|
2008-04-10 04:28:12 +08:00
|
|
|
}
|
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
blk_status_t btrfs_bio_wq_end_io(struct btrfs_fs_info *info, struct bio *bio,
|
2014-07-30 06:25:45 +08:00
|
|
|
enum btrfs_wq_endio_type metadata)
|
2008-03-25 03:01:56 +08:00
|
|
|
{
|
2014-07-30 06:55:42 +08:00
|
|
|
struct btrfs_end_io_wq *end_io_wq;
|
2014-09-12 18:44:03 +08:00
|
|
|
|
2014-07-30 06:55:42 +08:00
|
|
|
end_io_wq = kmem_cache_alloc(btrfs_end_io_wq_cache, GFP_NOFS);
|
2008-04-10 04:28:12 +08:00
|
|
|
if (!end_io_wq)
|
2017-06-03 15:38:06 +08:00
|
|
|
return BLK_STS_RESOURCE;
|
2008-04-10 04:28:12 +08:00
|
|
|
|
|
|
|
end_io_wq->private = bio->bi_private;
|
|
|
|
end_io_wq->end_io = bio->bi_end_io;
|
2008-04-10 04:28:12 +08:00
|
|
|
end_io_wq->info = info;
|
2017-06-03 15:38:06 +08:00
|
|
|
end_io_wq->status = 0;
|
2008-04-10 04:28:12 +08:00
|
|
|
end_io_wq->bio = bio;
|
2008-04-10 04:28:12 +08:00
|
|
|
end_io_wq->metadata = metadata;
|
2008-04-10 04:28:12 +08:00
|
|
|
|
|
|
|
bio->bi_private = end_io_wq;
|
|
|
|
bio->bi_end_io = end_workqueue_bio;
|
2008-04-10 04:28:12 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
Btrfs: Add ordered async work queues
Btrfs uses kernel threads to create async work queues for cpu intensive
operations such as checksumming and decompression. These work well,
but they make it difficult to keep IO order intact.
A single writepages call from pdflush or fsync will turn into a number
of bios, and each bio is checksummed in parallel. Once the checksum is
computed, the bio is sent down to the disk, and since we don't control
the order in which the parallel operations happen, they might go down to
the disk in almost any order.
The code deals with this somewhat by having deep work queues for a single
kernel thread, making it very likely that a single thread will process all
the bios for a single inode.
This patch introduces an explicitly ordered work queue. As work structs
are placed into the queue they are put onto the tail of a list. They have
three callbacks:
->func (cpu intensive processing here)
->ordered_func (order sensitive processing here)
->ordered_free (free the work struct, all processing is done)
The func callback does the cpu intensive
work, and when it completes the work struct is marked as done.
Every time a work struct completes, the list is checked to see if the head
is marked as done. If so the ordered_func callback is used to do the
order sensitive processing and the ordered_free callback is used to do
any cleanup. Then we loop back and check the head of the list again.
This patch also changes the checksumming code to use the ordered workqueues.
On a 4 drive array, it increases streaming writes from 280MB/s to 350MB/s.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-11-07 11:03:00 +08:00
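A sketch of how a caller wires up the three callbacks, modeled on the
btrfs_init_work()/btrfs_queue_work() calls that appear later in this file
(struct my_async_ctx and the my_* callbacks are hypothetical):

    struct my_async_ctx {
            struct btrfs_work work;
            /* caller state ... */
    };

    static void my_func(struct btrfs_work *work)
    {
            /* cpu intensive processing; may run in parallel */
    }

    static void my_ordered_func(struct btrfs_work *work)
    {
            /* order sensitive processing; runs in queue (FIFO) order */
    }

    static void my_ordered_free(struct btrfs_work *work)
    {
            /* all processing is done; free the containing structure */
            kfree(container_of(work, struct my_async_ctx, work));
    }

    /* queueing: */
    btrfs_init_work(&ctx->work, btrfs_worker_helper, my_func,
                    my_ordered_func, my_ordered_free);
    btrfs_queue_work(fs_info->workers, &ctx->work);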
|
|
|
static void run_one_async_start(struct btrfs_work *work)
|
|
|
|
{
|
|
|
|
struct async_submit_bio *async;
|
2017-06-03 15:38:06 +08:00
|
|
|
blk_status_t ret;
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
|
|
|
|
async = container_of(work, struct async_submit_bio, work);
|
2017-05-05 23:57:13 +08:00
|
|
|
ret = async->submit_bio_start(async->private_data, async->bio,
|
2012-03-12 23:03:00 +08:00
|
|
|
async->bio_offset);
|
|
|
|
if (ret)
|
2017-06-03 15:38:06 +08:00
|
|
|
async->status = ret;
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
}
|
|
|
|
|
2018-07-18 04:08:41 +08:00
|
|
|
/*
|
|
|
|
* In order to insert checksums into the metadata in large chunks, we wait
|
|
|
|
* until bio submission time. All the pages in the bio are checksummed and
|
|
|
|
* sums are attached onto the ordered extent record.
|
|
|
|
*
|
|
|
|
* At IO completion time the csums attached on the ordered extent record are
|
|
|
|
* inserted into the tree.
|
|
|
|
*/
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
static void run_one_async_done(struct btrfs_work *work)
|
2008-06-12 04:50:36 +08:00
|
|
|
{
|
|
|
|
struct async_submit_bio *async;
|
2018-07-18 04:08:41 +08:00
|
|
|
struct inode *inode;
|
|
|
|
blk_status_t ret;
|
2008-06-12 04:50:36 +08:00
|
|
|
|
|
|
|
async = container_of(work, struct async_submit_bio, work);
|
2018-07-18 04:08:41 +08:00
|
|
|
inode = async->private_data;
|
2008-08-16 03:34:17 +08:00
|
|
|
|
2016-03-05 03:23:12 +08:00
|
|
|
/* If an error occurred we just want to clean up the bio and move on */
|
2017-06-03 15:38:06 +08:00
|
|
|
if (async->status) {
|
|
|
|
async->bio->bi_status = async->status;
|
2015-07-20 21:29:37 +08:00
|
|
|
bio_endio(async->bio);
|
2012-03-12 23:03:00 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2018-07-18 04:08:41 +08:00
|
|
|
ret = btrfs_map_bio(btrfs_sb(inode->i_sb), async->bio,
|
|
|
|
async->mirror_num, 1);
|
|
|
|
if (ret) {
|
|
|
|
async->bio->bi_status = ret;
|
|
|
|
bio_endio(async->bio);
|
|
|
|
}
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static void run_one_async_free(struct btrfs_work *work)
|
|
|
|
{
|
|
|
|
struct async_submit_bio *async;
|
|
|
|
|
|
|
|
async = container_of(work, struct async_submit_bio, work);
|
2008-06-12 04:50:36 +08:00
|
|
|
kfree(async);
|
|
|
|
}
|
|
|
|
|
2017-07-06 07:41:23 +08:00
|
|
|
blk_status_t btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
|
|
|
|
int mirror_num, unsigned long bio_flags,
|
|
|
|
u64 bio_offset, void *private_data,
|
2018-07-18 23:36:24 +08:00
|
|
|
extent_submit_bio_start_t *submit_bio_start)
|
2008-04-16 23:14:51 +08:00
|
|
|
{
|
|
|
|
struct async_submit_bio *async;
|
|
|
|
|
|
|
|
async = kmalloc(sizeof(*async), GFP_NOFS);
|
|
|
|
if (!async)
|
2017-06-03 15:38:06 +08:00
|
|
|
return BLK_STS_RESOURCE;
|
2008-04-16 23:14:51 +08:00
|
|
|
|
2017-05-05 23:57:13 +08:00
|
|
|
async->private_data = private_data;
|
2008-04-16 23:14:51 +08:00
|
|
|
async->bio = bio;
|
|
|
|
async->mirror_num = mirror_num;
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
async->submit_bio_start = submit_bio_start;
|
|
|
|
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 23:36:53 +08:00
|
|
|
btrfs_init_work(&async->work, btrfs_worker_helper, run_one_async_start,
|
2014-02-28 10:46:06 +08:00
|
|
|
run_one_async_done, run_one_async_free);
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
|
2010-05-25 21:48:28 +08:00
|
|
|
async->bio_offset = bio_offset;
|
2008-09-29 23:19:10 +08:00
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
async->status = 0;
|
2012-03-12 23:03:00 +08:00
|
|
|
|
2016-11-01 21:40:06 +08:00
|
|
|
if (op_is_sync(bio->bi_opf))
|
2014-02-28 10:46:06 +08:00
|
|
|
btrfs_set_work_high_priority(&async->work);
|
2009-04-21 03:50:09 +08:00
|
|
|
|
2014-02-28 10:46:06 +08:00
|
|
|
btrfs_queue_work(fs_info->workers, &async->work);
|
2008-04-16 23:14:51 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
static blk_status_t btree_csum_one_bio(struct bio *bio)
|
2008-09-24 01:14:12 +08:00
|
|
|
{
|
2013-11-08 04:20:26 +08:00
|
|
|
struct bio_vec *bvec;
|
2008-09-24 01:14:12 +08:00
|
|
|
struct btrfs_root *root;
|
2013-11-08 04:20:26 +08:00
|
|
|
int i, ret = 0;
|
2019-02-15 19:13:19 +08:00
|
|
|
struct bvec_iter_all iter_all;
|
2008-09-24 01:14:12 +08:00
|
|
|
|
2017-07-14 00:10:07 +08:00
|
|
|
ASSERT(!bio_flagged(bio, BIO_CLONED));
|
2019-02-15 19:13:19 +08:00
|
|
|
bio_for_each_segment_all(bvec, bio, i, iter_all) {
|
2008-09-24 01:14:12 +08:00
|
|
|
root = BTRFS_I(bvec->bv_page->mapping->host)->root;
|
2014-11-21 16:15:07 +08:00
|
|
|
ret = csum_dirty_buffer(root->fs_info, bvec->bv_page);
|
2012-03-12 23:03:00 +08:00
|
|
|
if (ret)
|
|
|
|
break;
|
2008-09-24 01:14:12 +08:00
|
|
|
}
|
2013-11-08 04:20:26 +08:00
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
return errno_to_blk_status(ret);
|
2008-09-24 01:14:12 +08:00
|
|
|
}
|
|
|
|
|
2018-03-08 21:35:48 +08:00
|
|
|
static blk_status_t btree_submit_bio_start(void *private_data, struct bio *bio,
|
2017-07-06 07:41:23 +08:00
|
|
|
u64 bio_offset)
|
2008-04-10 04:28:12 +08:00
|
|
|
{
|
2008-06-12 04:50:36 +08:00
|
|
|
/*
|
|
|
|
* when we're called for a write, we're already in the async
|
2008-08-16 03:34:16 +08:00
|
|
|
* submission context. Just jump into btrfs_map_bio
|
2008-06-12 04:50:36 +08:00
|
|
|
*/
|
2012-03-12 23:03:00 +08:00
|
|
|
return btree_csum_one_bio(bio);
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
}
|
2008-04-10 04:28:12 +08:00
|
|
|
|
Btrfs: remove bio_flags which indicates a meta block of log-tree
Since both committing transaction and writing log-tree are doing
plugging on metadata IO, we can unify to use %sync_writers to benefit
both cases, instead of checking bio_flags while writing meta blocks of
log-tree.
We can remove this bio_flags because, in order to write dirty blocks, the
log tree also uses btrfs_write_marked_extents(), inside which we
have enabled %sync_writers; therefore, every write goes in a
synchronous way, and so does checksumming.
Please also note that bio_flags is applied per-context while
%sync_writers is applied per-inode, so this might incur some overhead, i.e.
1) while the log tree is flushing its dirty blocks via
btrfs_write_marked_extents(), %sync_writers is increased
by one.
2) in the meantime, some writeback operations may happen upon btrfs's
metadata inode, so these writes go synchronously, too.
However, AFAICS, the overhead is not a big one, while the win is that
we unify the two places that need a synchronous way and remove a
special hack/flag.
This removes the bio_flags related stuff for writing log-tree.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-09-14 02:18:22 +08:00
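The %sync_writers counter relied on above is a per-inode atomic that
check_async_write() below reads; a sketch of the assumed call-site pattern
around btrfs_write_marked_extents() (the surrounding code and exact signature
are assumptions, not quoted from the patch):

    /* flush dirty metadata synchronously, e.g. from a commit path */
    atomic_inc(&BTRFS_I(btree_inode)->sync_writers);
    ret = btrfs_write_marked_extents(fs_info, dirty_pages, mark);
    atomic_dec(&BTRFS_I(btree_inode)->sync_writers);

    /*
     * While the counter is elevated, check_async_write() returns 0 and
     * the checksum is computed inline instead of via an async worker.
     */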
|
|
|
static int check_async_write(struct btrfs_inode *bi)
|
2012-09-26 02:25:58 +08:00
|
|
|
{
|
2017-08-22 05:49:59 +08:00
|
|
|
if (atomic_read(&bi->sync_writers))
|
|
|
|
return 0;
|
2012-09-26 02:25:58 +08:00
|
|
|
#ifdef CONFIG_X86
|
2016-01-27 05:12:05 +08:00
|
|
|
if (static_cpu_has(X86_FEATURE_XMM4_2))
|
2012-09-26 02:25:58 +08:00
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2019-04-10 22:24:39 +08:00
|
|
|
static blk_status_t btree_submit_bio_hook(struct inode *inode, struct bio *bio,
|
2019-04-11 00:46:04 +08:00
|
|
|
int mirror_num,
|
|
|
|
unsigned long bio_flags)
|
2008-04-16 23:14:51 +08:00
|
|
|
{
|
2016-06-23 06:54:23 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
Btrfs: remove bio_flags which indicates a meta block of log-tree
2017-09-14 02:18:22 +08:00
|
|
|
int async = check_async_write(BTRFS_I(inode));
|
2017-06-03 15:38:06 +08:00
|
|
|
blk_status_t ret;
|
2008-12-18 03:51:42 +08:00
|
|
|
|
2016-06-06 03:31:52 +08:00
|
|
|
if (bio_op(bio) != REQ_OP_WRITE) {
|
Btrfs: Add ordered async work queues
2008-11-07 11:03:00 +08:00
|
|
|
/*
|
|
|
|
* called for a read, do the setup so that checksum validation
|
|
|
|
* can happen in the async kernel threads
|
|
|
|
*/
|
2016-06-23 06:54:23 +08:00
|
|
|
ret = btrfs_bio_wq_end_io(fs_info, bio,
|
|
|
|
BTRFS_WQ_ENDIO_METADATA);
|
2012-03-29 08:31:37 +08:00
|
|
|
if (ret)
|
2012-11-06 01:51:52 +08:00
|
|
|
goto out_w_error;
|
2016-06-23 06:54:24 +08:00
|
|
|
ret = btrfs_map_bio(fs_info, bio, mirror_num, 0);
|
2012-09-26 02:25:58 +08:00
|
|
|
} else if (!async) {
|
|
|
|
ret = btree_csum_one_bio(bio);
|
|
|
|
if (ret)
|
2012-11-06 01:51:52 +08:00
|
|
|
goto out_w_error;
|
2016-06-23 06:54:24 +08:00
|
|
|
ret = btrfs_map_bio(fs_info, bio, mirror_num, 0);
|
2012-11-06 01:51:52 +08:00
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* kthread helpers are used to submit writes so that
|
|
|
|
* checksumming can happen in parallel across all CPUs
|
|
|
|
*/
|
2017-05-05 23:57:13 +08:00
|
|
|
ret = btrfs_wq_submit_bio(fs_info, bio, mirror_num, 0,
|
2019-04-10 22:24:42 +08:00
|
|
|
0, inode, btree_submit_bio_start);
|
2008-04-16 23:14:51 +08:00
|
|
|
}
|
2009-04-21 03:50:09 +08:00
|
|
|
|
2015-07-20 21:29:37 +08:00
|
|
|
if (ret)
|
|
|
|
goto out_w_error;
|
|
|
|
return 0;
|
|
|
|
|
2012-11-06 01:51:52 +08:00
|
|
|
out_w_error:
|
2017-06-03 15:38:06 +08:00
|
|
|
bio->bi_status = ret;
|
2015-07-20 21:29:37 +08:00
|
|
|
bio_endio(bio);
|
2012-11-06 01:51:52 +08:00
|
|
|
return ret;
|
2008-04-16 23:14:51 +08:00
|
|
|
}
|
|
|
|
|
2010-12-07 22:54:09 +08:00
|
|
|
#ifdef CONFIG_MIGRATION
|
2010-11-22 11:20:49 +08:00
|
|
|
static int btree_migratepage(struct address_space *mapping,
|
2012-01-13 09:19:43 +08:00
|
|
|
struct page *newpage, struct page *page,
|
|
|
|
enum migrate_mode mode)
|
2010-11-22 11:20:49 +08:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* we can't safely write a btree page from here,
|
|
|
|
* we haven't done the locking hook
|
|
|
|
*/
|
|
|
|
if (PageDirty(page))
|
|
|
|
return -EAGAIN;
|
|
|
|
/*
|
|
|
|
* Buffers may be managed in a filesystem specific way.
|
|
|
|
* We must have no buffers or drop them.
|
|
|
|
*/
|
|
|
|
if (page_has_private(page) &&
|
|
|
|
!try_to_release_page(page, GFP_KERNEL))
|
|
|
|
return -EAGAIN;
|
2012-01-13 09:19:43 +08:00
|
|
|
return migrate_page(mapping, newpage, page, mode);
|
2010-11-22 11:20:49 +08:00
|
|
|
}
|
2010-12-07 22:54:09 +08:00
|
|
|
#endif
|
2010-11-22 11:20:49 +08:00
|
|
|
|
2007-11-08 10:08:01 +08:00
|
|
|
|
|
|
|
static int btree_writepages(struct address_space *mapping,
|
|
|
|
struct writeback_control *wbc)
|
|
|
|
{
|
2013-01-29 18:09:20 +08:00
|
|
|
struct btrfs_fs_info *fs_info;
|
|
|
|
int ret;
|
|
|
|
|
2007-12-12 01:42:00 +08:00
|
|
|
if (wbc->sync_mode == WB_SYNC_NONE) {
|
2007-11-27 23:52:01 +08:00
|
|
|
|
|
|
|
if (wbc->for_kupdate)
|
|
|
|
return 0;
|
|
|
|
|
2013-01-29 18:09:20 +08:00
|
|
|
fs_info = BTRFS_I(mapping->host)->root->fs_info;
|
2009-03-13 23:00:37 +08:00
|
|
|
/* this is a bit racy, but that's ok */
|
2018-07-02 15:44:58 +08:00
|
|
|
ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
|
|
|
|
BTRFS_DIRTY_METADATA_THRESH,
|
|
|
|
fs_info->dirty_metadata_batch);
|
2013-01-29 18:09:20 +08:00
|
|
|
if (ret < 0)
|
2007-11-27 08:34:41 +08:00
|
|
|
return 0;
|
|
|
|
}
|
2012-03-13 21:38:00 +08:00
|
|
|
return btree_write_cache_pages(mapping, wbc);
|
2007-11-08 10:08:01 +08:00
|
|
|
}
|
|
|
|
|
2008-12-02 22:54:17 +08:00
|
|
|
static int btree_readpage(struct file *file, struct page *page)
|
2007-10-16 04:14:19 +08:00
|
|
|
{
|
2008-01-25 05:13:08 +08:00
|
|
|
struct extent_io_tree *tree;
|
|
|
|
tree = &BTRFS_I(page->mapping->host)->io_tree;
|
2011-06-14 02:02:58 +08:00
|
|
|
return extent_read_full_page(tree, page, btree_get_extent, 0);
|
2007-10-16 04:14:19 +08:00
|
|
|
}
|
2007-03-30 20:47:31 +08:00
|
|
|
|
2008-01-29 22:59:12 +08:00
|
|
|
static int btree_releasepage(struct page *page, gfp_t gfp_flags)
|
2007-10-16 04:14:19 +08:00
|
|
|
{
|
2008-09-12 03:51:43 +08:00
|
|
|
if (PageWriteback(page) || PageDirty(page))
|
2009-01-06 10:25:51 +08:00
|
|
|
return 0;
|
2012-01-27 04:01:12 +08:00
|
|
|
|
2013-04-26 22:56:29 +08:00
|
|
|
return try_release_extent_buffer(page);
|
2007-03-29 01:57:48 +08:00
|
|
|
}
|
|
|
|
|
2013-05-22 11:17:23 +08:00
|
|
|
static void btree_invalidatepage(struct page *page, unsigned int offset,
|
|
|
|
unsigned int length)
|
2007-03-29 01:57:48 +08:00
|
|
|
{
|
2008-01-25 05:13:08 +08:00
|
|
|
struct extent_io_tree *tree;
|
|
|
|
tree = &BTRFS_I(page->mapping->host)->io_tree;
|
2007-10-16 04:14:19 +08:00
|
|
|
extent_invalidatepage(tree, page, offset);
|
|
|
|
btree_releasepage(page, GFP_NOFS);
|
2008-04-19 04:11:30 +08:00
|
|
|
if (PagePrivate(page)) {
|
2013-12-21 00:37:06 +08:00
|
|
|
btrfs_warn(BTRFS_I(page->mapping->host)->root->fs_info,
|
|
|
|
"page private not zero on page %llu",
|
|
|
|
(unsigned long long)page_offset(page));
|
2008-04-19 04:11:30 +08:00
|
|
|
ClearPagePrivate(page);
|
|
|
|
set_page_private(page, 0);
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straightforward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
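For a typical call site, the mechanical effect of the conversion looks like
this (illustrative lines, not taken from this file's diff):

    /* before */
    page_cache_release(page);
    /* after */
    put_page(page);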
|
|
|
put_page(page);
|
2008-04-19 04:11:30 +08:00
|
|
|
}
|
2007-03-29 01:57:48 +08:00
|
|
|
}
|
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
static int btree_set_page_dirty(struct page *page)
|
|
|
|
{
|
2012-10-16 01:30:43 +08:00
|
|
|
#ifdef DEBUG
|
2012-03-13 21:38:00 +08:00
|
|
|
struct extent_buffer *eb;
|
|
|
|
|
|
|
|
BUG_ON(!PagePrivate(page));
|
|
|
|
eb = (struct extent_buffer *)page->private;
|
|
|
|
BUG_ON(!eb);
|
|
|
|
BUG_ON(!test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
|
|
|
|
BUG_ON(!atomic_read(&eb->refs));
|
|
|
|
btrfs_assert_tree_locked(eb);
|
2012-10-16 01:30:43 +08:00
|
|
|
#endif
|
2012-03-13 21:38:00 +08:00
|
|
|
return __set_page_dirty_nobuffers(page);
|
|
|
|
}
|
|
|
|
|
2009-09-22 08:01:10 +08:00
|
|
|
static const struct address_space_operations btree_aops = {
|
2007-03-29 01:57:48 +08:00
|
|
|
.readpage = btree_readpage,
|
2007-11-08 10:08:01 +08:00
|
|
|
.writepages = btree_writepages,
|
2007-10-16 04:14:19 +08:00
|
|
|
.releasepage = btree_releasepage,
|
|
|
|
.invalidatepage = btree_invalidatepage,
|
2010-11-29 22:49:11 +08:00
|
|
|
#ifdef CONFIG_MIGRATION
|
2010-11-22 11:20:49 +08:00
|
|
|
.migratepage = btree_migratepage,
|
2010-11-29 22:49:11 +08:00
|
|
|
#endif
|
2012-03-13 21:38:00 +08:00
|
|
|
.set_page_dirty = btree_set_page_dirty,
|
2007-03-29 01:57:48 +08:00
|
|
|
};
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr)
|
2007-05-01 20:53:32 +08:00
|
|
|
{
|
2007-10-16 04:14:19 +08:00
|
|
|
struct extent_buffer *buf = NULL;
|
2019-03-14 15:52:35 +08:00
|
|
|
int ret;
|
2007-05-01 20:53:32 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
buf = btrfs_find_create_tree_block(fs_info, bytenr);
|
2016-06-07 03:01:23 +08:00
|
|
|
if (IS_ERR(buf))
|
2014-06-15 06:49:36 +08:00
|
|
|
return;
|
2019-03-14 15:52:35 +08:00
|
|
|
|
2019-04-10 22:24:40 +08:00
|
|
|
ret = read_extent_buffer_pages(buf, WAIT_NONE, 0);
|
2019-03-14 15:52:35 +08:00
|
|
|
if (ret < 0)
|
|
|
|
free_extent_buffer_stale(buf);
|
|
|
|
else
|
|
|
|
free_extent_buffer(buf);
|
2007-05-01 20:53:32 +08:00
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
int reada_tree_block_flagged(struct btrfs_fs_info *fs_info, u64 bytenr,
|
2011-05-23 20:25:41 +08:00
|
|
|
int mirror_num, struct extent_buffer **eb)
|
|
|
|
{
|
|
|
|
struct extent_buffer *buf = NULL;
|
|
|
|
int ret;
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
buf = btrfs_find_create_tree_block(fs_info, bytenr);
|
2016-06-07 03:01:23 +08:00
|
|
|
if (IS_ERR(buf))
|
2011-05-23 20:25:41 +08:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
set_bit(EXTENT_BUFFER_READAHEAD, &buf->bflags);
|
|
|
|
|
2019-04-10 22:24:40 +08:00
|
|
|
ret = read_extent_buffer_pages(buf, WAIT_PAGE_LOCK, mirror_num);
|
2011-05-23 20:25:41 +08:00
|
|
|
if (ret) {
|
2019-03-14 15:52:35 +08:00
|
|
|
free_extent_buffer_stale(buf);
|
2011-05-23 20:25:41 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (test_bit(EXTENT_BUFFER_CORRUPT, &buf->bflags)) {
|
2019-03-14 15:52:35 +08:00
|
|
|
free_extent_buffer_stale(buf);
|
2011-05-23 20:25:41 +08:00
|
|
|
return -EIO;
|
2012-03-13 21:38:00 +08:00
|
|
|
} else if (extent_buffer_uptodate(buf)) {
|
2011-05-23 20:25:41 +08:00
|
|
|
*eb = buf;
|
|
|
|
} else {
|
|
|
|
free_extent_buffer(buf);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
struct extent_buffer *btrfs_find_create_tree_block(
|
|
|
|
struct btrfs_fs_info *fs_info,
|
|
|
|
u64 bytenr)
|
2008-04-02 01:48:14 +08:00
|
|
|
{
|
2016-06-23 06:54:23 +08:00
|
|
|
if (btrfs_is_testing(fs_info))
|
|
|
|
return alloc_test_extent_buffer(fs_info, bytenr);
|
|
|
|
return alloc_extent_buffer(fs_info, bytenr);
|
2008-04-02 01:48:14 +08:00
|
|
|
}
|
|
|
|
|
2018-03-29 09:08:11 +08:00
|
|
|
/*
|
|
|
|
* Read tree block at logical address @bytenr and do variant basic but critical
|
|
|
|
* verification.
|
|
|
|
*
|
|
|
|
* @parent_transid: expected transid of this tree block, skip check if 0
|
|
|
|
* @level: expected level, mandatory check
|
|
|
|
* @first_key: expected key in slot 0, skip check if NULL
|
|
|
|
*/
|
2016-06-23 06:54:24 +08:00
|
|
|
struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
|
2018-03-29 09:08:11 +08:00
|
|
|
u64 parent_transid, int level,
|
|
|
|
struct btrfs_key *first_key)
|
2008-04-02 01:48:14 +08:00
|
|
|
{
|
|
|
|
struct extent_buffer *buf = NULL;
|
|
|
|
int ret;
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
buf = btrfs_find_create_tree_block(fs_info, bytenr);
|
2016-06-07 03:01:23 +08:00
|
|
|
if (IS_ERR(buf))
|
|
|
|
return buf;
|
2008-04-02 01:48:14 +08:00
|
|
|
|
2019-03-20 21:56:39 +08:00
|
|
|
ret = btree_read_extent_buffer_pages(buf, parent_transid,
|
2018-03-29 09:08:11 +08:00
|
|
|
level, first_key);
|
2013-07-31 07:39:56 +08:00
|
|
|
if (ret) {
|
2019-03-14 15:52:35 +08:00
|
|
|
free_extent_buffer_stale(buf);
|
2015-05-25 17:30:15 +08:00
|
|
|
return ERR_PTR(ret);
|
2013-07-31 07:39:56 +08:00
|
|
|
}
|
2007-10-16 04:14:19 +08:00
|
|
|
return buf;
|
2008-04-10 04:28:12 +08:00
|
|
|
|
2007-02-02 22:18:22 +08:00
|
|
|
}
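A hypothetical caller of read_tree_block(), passing the expected transid,
level and first key that the comment above documents (the error handling is a
sketch):

    struct extent_buffer *eb;

    eb = read_tree_block(fs_info, bytenr, parent_transid, level, &first_key);
    if (IS_ERR(eb))
            return PTR_ERR(eb);
    /* eb has been read and passed the basic verification described above */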
|
|
|
|
|
2019-03-20 21:30:02 +08:00
|
|
|
void btrfs_clean_tree_block(struct extent_buffer *buf)
|
2007-03-02 07:59:40 +08:00
|
|
|
{
|
2019-03-20 21:30:02 +08:00
|
|
|
struct btrfs_fs_info *fs_info = buf->fs_info;
|
2008-01-10 04:55:33 +08:00
|
|
|
if (btrfs_header_generation(buf) ==
|
2013-01-29 18:09:20 +08:00
|
|
|
fs_info->running_transaction->transid) {
|
2009-03-09 23:45:38 +08:00
|
|
|
btrfs_assert_tree_locked(buf);
|
Btrfs: Change btree locking to use explicit blocking points
Most of the btrfs metadata operations can be protected by a spinlock,
but some operations still need to schedule.
So far, btrfs has been using a mutex along with a trylock loop;
most of the time it is able to avoid going for the full mutex, so
the trylock loop is a big performance gain.
This commit is step one for getting rid of the blocking locks entirely.
btrfs_tree_lock takes a spinlock, and the code explicitly switches
to a blocking lock when it starts an operation that can schedule.
We'll be able get rid of the blocking locks in smaller pieces over time.
Tracing allows us to find the most common cause of blocking, so we
can start with the hot spots first.
The basic idea is:
btrfs_tree_lock() returns with the spin lock held
btrfs_set_lock_blocking() sets the EXTENT_BUFFER_BLOCKING bit in
the extent buffer flags, and then drops the spin lock. The buffer is
still considered locked by all of the btrfs code.
If btrfs_tree_lock gets the spinlock but finds the blocking bit set, it drops
the spin lock and waits on a wait queue for the blocking bit to go away.
Much of the code that needs to set the blocking bit finishes without actually
blocking a good percentage of the time. So, an adaptive spin is still
used against the blocking bit to avoid very high context switch rates.
btrfs_clear_lock_blocking() clears the blocking bit and returns
with the spinlock held again.
btrfs_tree_unlock() can be called on either blocking or spinning locks,
it does the right thing based on the blocking bit.
ctree.c has a helper function to set/clear all the locked buffers in a
path as blocking.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-04 22:25:08 +08:00
|
|
|
|
2009-03-13 23:00:37 +08:00
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &buf->bflags)) {
|
2017-06-21 02:01:20 +08:00
|
|
|
percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
|
|
|
|
-buf->len,
|
|
|
|
fs_info->dirty_metadata_batch);
|
2012-10-16 01:33:54 +08:00
|
|
|
/* ugh, clear_extent_buffer_dirty needs to lock the page */
|
2018-04-04 08:03:48 +08:00
|
|
|
btrfs_set_lock_blocking_write(buf);
|
2012-10-16 01:33:54 +08:00
|
|
|
clear_extent_buffer_dirty(buf);
|
|
|
|
}
|
2008-06-26 04:01:30 +08:00
|
|
|
}
|
2007-10-16 04:14:19 +08:00
|
|
|
}
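
/*
 * Subvolume writers are tracked with a percpu counter paired with a wait
 * queue, so a task that needs the subvolume quiescent can wait for the
 * counter to drain without making every writer take a shared lock.
 */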
static struct btrfs_subvolume_writers *btrfs_alloc_subvolume_writers(void)
{
	struct btrfs_subvolume_writers *writers;
	int ret;

	writers = kmalloc(sizeof(*writers), GFP_NOFS);
	if (!writers)
		return ERR_PTR(-ENOMEM);

	ret = percpu_counter_init(&writers->counter, 0, GFP_NOFS);
	if (ret < 0) {
		kfree(writers);
		return ERR_PTR(ret);
	}

	init_waitqueue_head(&writers->wait);
	return writers;
}

static void
btrfs_free_subvolume_writers(struct btrfs_subvolume_writers *writers)
{
	percpu_counter_destroy(&writers->counter);
	kfree(writers);
}
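
/*
 * Initialize the in-memory fields of a freshly allocated root: lists,
 * locks, counters and default values.  Nothing is read from disk here, and
 * dummy (self-test) roots skip the parts that need a fully set up fs_info.
 */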
static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
			 u64 objectid)
{
	bool dummy = test_bit(BTRFS_FS_STATE_DUMMY_FS_INFO, &fs_info->fs_state);

	root->node = NULL;
	root->commit_root = NULL;
	root->state = 0;
	root->orphan_cleanup_state = 0;

	root->last_trans = 0;
	root->highest_objectid = 0;
	root->nr_delalloc_inodes = 0;
	root->nr_ordered_extents = 0;
	root->inode_tree = RB_ROOT;
	INIT_RADIX_TREE(&root->delayed_nodes_tree, GFP_ATOMIC);
	root->block_rsv = NULL;

	INIT_LIST_HEAD(&root->dirty_list);
	INIT_LIST_HEAD(&root->root_list);
	INIT_LIST_HEAD(&root->delalloc_inodes);
	INIT_LIST_HEAD(&root->delalloc_root);
	INIT_LIST_HEAD(&root->ordered_extents);
	INIT_LIST_HEAD(&root->ordered_root);
	INIT_LIST_HEAD(&root->reloc_dirty_list);
	INIT_LIST_HEAD(&root->logged_list[0]);
	INIT_LIST_HEAD(&root->logged_list[1]);
	spin_lock_init(&root->inode_lock);
	spin_lock_init(&root->delalloc_lock);
	spin_lock_init(&root->ordered_extent_lock);
	spin_lock_init(&root->accounting_lock);
	spin_lock_init(&root->log_extents_lock[0]);
	spin_lock_init(&root->log_extents_lock[1]);
	spin_lock_init(&root->qgroup_meta_rsv_lock);
	mutex_init(&root->objectid_mutex);
	mutex_init(&root->log_mutex);
	mutex_init(&root->ordered_extent_mutex);
	mutex_init(&root->delalloc_mutex);
	init_waitqueue_head(&root->log_writer_wait);
	init_waitqueue_head(&root->log_commit_wait[0]);
	init_waitqueue_head(&root->log_commit_wait[1]);
	INIT_LIST_HEAD(&root->log_ctxs[0]);
	INIT_LIST_HEAD(&root->log_ctxs[1]);
	atomic_set(&root->log_commit[0], 0);
	atomic_set(&root->log_commit[1], 0);
	atomic_set(&root->log_writers, 0);
	atomic_set(&root->log_batch, 0);
	refcount_set(&root->refs, 1);
	atomic_set(&root->will_be_snapshotted, 0);
	atomic_set(&root->snapshot_force_cow, 0);
	atomic_set(&root->nr_swapfiles, 0);
	root->log_transid = 0;
	root->log_transid_committed = -1;
	root->last_log_commit = 0;
	if (!dummy)
		extent_io_tree_init(fs_info, &root->dirty_log_pages,
				    IO_TREE_ROOT_DIRTY_LOG_PAGES, NULL);

	memset(&root->root_key, 0, sizeof(root->root_key));
	memset(&root->root_item, 0, sizeof(root->root_item));
	memset(&root->defrag_progress, 0, sizeof(root->defrag_progress));
	if (!dummy)
		root->defrag_trans_start = fs_info->generation;
	else
		root->defrag_trans_start = 0;
	root->root_key.objectid = objectid;
	root->anon_dev = 0;

	spin_lock_init(&root->root_item_lock);
	btrfs_qgroup_init_swapped_blocks(&root->swapped_blocks);
}
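
/*
 * Allocate a zeroed btrfs_root tied to @fs_info.  The caller still has to
 * run __setup_root() on it before use.
 */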
static struct btrfs_root *btrfs_alloc_root(struct btrfs_fs_info *fs_info,
					   gfp_t flags)
{
	struct btrfs_root *root = kzalloc(sizeof(*root), flags);
	if (root)
		root->fs_info = fs_info;
	return root;
}

#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
/* Should only be used by the testing infrastructure */
struct btrfs_root *btrfs_alloc_dummy_root(struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *root;

	if (!fs_info)
		return ERR_PTR(-EINVAL);

	root = btrfs_alloc_root(fs_info, GFP_KERNEL);
	if (!root)
		return ERR_PTR(-ENOMEM);

	/* We don't use the stripesize in selftest, set it as sectorsize */
	__setup_root(root, fs_info, BTRFS_ROOT_TREE_OBJECTID);
	root->alloc_bytenr = 0;

	return root;
}
#endif
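
/*
 * Create a new tree: allocate the in-memory root, allocate and dirty its
 * first (empty) leaf, and insert a root item for it into the root tree.
 */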
struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
				     u64 objectid)
{
	struct btrfs_fs_info *fs_info = trans->fs_info;
	struct extent_buffer *leaf;
	struct btrfs_root *tree_root = fs_info->tree_root;
	struct btrfs_root *root;
	struct btrfs_key key;
	unsigned int nofs_flag;
	int ret = 0;
	uuid_le uuid = NULL_UUID_LE;

	/*
	 * We're holding a transaction handle, so use a NOFS memory allocation
	 * context to avoid deadlock if reclaim happens.
	 */
	nofs_flag = memalloc_nofs_save();
	root = btrfs_alloc_root(fs_info, GFP_KERNEL);
	memalloc_nofs_restore(nofs_flag);
	if (!root)
		return ERR_PTR(-ENOMEM);

	__setup_root(root, fs_info, objectid);
	root->root_key.objectid = objectid;
	root->root_key.type = BTRFS_ROOT_ITEM_KEY;
	root->root_key.offset = 0;

	leaf = btrfs_alloc_tree_block(trans, root, 0, objectid, NULL, 0, 0, 0);
	if (IS_ERR(leaf)) {
		ret = PTR_ERR(leaf);
		leaf = NULL;
		goto fail;
	}

	root->node = leaf;
	btrfs_mark_buffer_dirty(leaf);

	root->commit_root = btrfs_root_node(root);
	set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);

	root->root_item.flags = 0;
	root->root_item.byte_limit = 0;
	btrfs_set_root_bytenr(&root->root_item, leaf->start);
	btrfs_set_root_generation(&root->root_item, trans->transid);
	btrfs_set_root_level(&root->root_item, 0);
	btrfs_set_root_refs(&root->root_item, 1);
	btrfs_set_root_used(&root->root_item, leaf->len);
	btrfs_set_root_last_snapshot(&root->root_item, 0);
	btrfs_set_root_dirid(&root->root_item, 0);
	if (is_fstree(objectid))
		uuid_le_gen(&uuid);
	memcpy(root->root_item.uuid, uuid.b, BTRFS_UUID_SIZE);
	root->root_item.drop_level = 0;

	key.objectid = objectid;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = 0;
	ret = btrfs_insert_root(trans, tree_root, &key, &root->root_item);
	if (ret)
		goto fail;

	btrfs_tree_unlock(leaf);

	return root;

fail:
	if (leaf) {
		btrfs_tree_unlock(leaf);
		free_extent_buffer(root->commit_root);
		free_extent_buffer(leaf);
	}
	kfree(root);

	return ERR_PTR(ret);
}
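
/*
 * Allocate the in-memory root and the first leaf for a tree log.  Log
 * roots are not inserted into the root tree; they go away before a real
 * commit is done.
 */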
static struct btrfs_root *alloc_log_tree(struct btrfs_trans_handle *trans,
					 struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *root;
	struct extent_buffer *leaf;

	root = btrfs_alloc_root(fs_info, GFP_NOFS);
	if (!root)
		return ERR_PTR(-ENOMEM);

	__setup_root(root, fs_info, BTRFS_TREE_LOG_OBJECTID);

	root->root_key.objectid = BTRFS_TREE_LOG_OBJECTID;
	root->root_key.type = BTRFS_ROOT_ITEM_KEY;
	root->root_key.offset = BTRFS_TREE_LOG_OBJECTID;

	/*
	 * DON'T set REF_COWS for log trees.
	 *
	 * Log trees do not get reference counted because they go away
	 * before a real commit is actually done.  They do store pointers
	 * to file data extents, and those reference counts still get
	 * updated (along with back refs to the log tree).
	 */

	leaf = btrfs_alloc_tree_block(trans, root, 0, BTRFS_TREE_LOG_OBJECTID,
			NULL, 0, 0, 0);
	if (IS_ERR(leaf)) {
		kfree(root);
		return ERR_CAST(leaf);
	}

	root->node = leaf;

	btrfs_mark_buffer_dirty(root->node);
	btrfs_tree_unlock(root->node);
	return root;
}
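
/*
 * Create the per-filesystem log root tree, which collects the log roots of
 * all subvolumes being logged in the current transaction.
 */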
int btrfs_init_log_root_tree(struct btrfs_trans_handle *trans,
			     struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *log_root;

	log_root = alloc_log_tree(trans, fs_info);
	if (IS_ERR(log_root))
		return PTR_ERR(log_root);
	WARN_ON(fs_info->log_root_tree);
	fs_info->log_root_tree = log_root;
	return 0;
}
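
/*
 * Attach a fresh log tree to @root so that fsync can log changes for this
 * subvolume during the current transaction.
 */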
int btrfs_add_log_tree(struct btrfs_trans_handle *trans,
		       struct btrfs_root *root)
{
	struct btrfs_fs_info *fs_info = root->fs_info;
	struct btrfs_root *log_root;
	struct btrfs_inode_item *inode_item;

	log_root = alloc_log_tree(trans, fs_info);
	if (IS_ERR(log_root))
		return PTR_ERR(log_root);

	log_root->last_trans = trans->transid;
	log_root->root_key.offset = root->root_key.objectid;

	inode_item = &log_root->root_item.inode;
	btrfs_set_stack_inode_generation(inode_item, 1);
	btrfs_set_stack_inode_size(inode_item, 3);
	btrfs_set_stack_inode_nlink(inode_item, 1);
	btrfs_set_stack_inode_nbytes(inode_item,
				     fs_info->nodesize);
	btrfs_set_stack_inode_mode(inode_item, S_IFDIR | 0755);

	btrfs_set_root_node(&log_root->root_item, log_root->node);

	WARN_ON(root->log_root);
	root->log_root = log_root;
	root->log_transid = 0;
	root->log_transid_committed = -1;
	root->last_log_commit = 0;
	return 0;
}
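
/*
 * Look up the root item for @key in @tree_root and read the tree block it
 * points to, checking that the block is up to date for the recorded
 * generation and level.
 */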
static struct btrfs_root *btrfs_read_tree_root(struct btrfs_root *tree_root,
					       struct btrfs_key *key)
{
	struct btrfs_root *root;
	struct btrfs_fs_info *fs_info = tree_root->fs_info;
	struct btrfs_path *path;
	u64 generation;
	int ret;
	int level;

	path = btrfs_alloc_path();
	if (!path)
		return ERR_PTR(-ENOMEM);

	root = btrfs_alloc_root(fs_info, GFP_NOFS);
	if (!root) {
		ret = -ENOMEM;
		goto alloc_fail;
	}

	__setup_root(root, fs_info, key->objectid);

	ret = btrfs_find_root(tree_root, key, path,
			      &root->root_item, &root->root_key);
	if (ret) {
		if (ret > 0)
			ret = -ENOENT;
		goto find_fail;
	}

	generation = btrfs_root_generation(&root->root_item);
	level = btrfs_root_level(&root->root_item);
	root->node = read_tree_block(fs_info,
				     btrfs_root_bytenr(&root->root_item),
				     generation, level, NULL);
	if (IS_ERR(root->node)) {
		ret = PTR_ERR(root->node);
		goto find_fail;
	} else if (!btrfs_buffer_uptodate(root->node, generation, 0)) {
		ret = -EIO;
		free_extent_buffer(root->node);
		goto find_fail;
	}
	root->commit_root = btrfs_root_node(root);
out:
	btrfs_free_path(path);
	return root;

find_fail:
	kfree(root);
alloc_fail:
	root = ERR_PTR(ret);
	goto out;
}
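
/*
 * Like btrfs_read_tree_root(), but additionally marks anything that is not
 * a log tree as reference counted (REF_COWS) and sanity-checks its root
 * item.
 */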
struct btrfs_root *btrfs_read_fs_root(struct btrfs_root *tree_root,
				      struct btrfs_key *location)
{
	struct btrfs_root *root;

	root = btrfs_read_tree_root(tree_root, location);
	if (IS_ERR(root))
		return root;

	if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
		set_bit(BTRFS_ROOT_REF_COWS, &root->state);
		btrfs_check_and_init_root_item(&root->root_item);
	}

	return root;
}
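
/*
 * Set up the runtime state a root needs before it can be used: the free
 * inode caches, the subvolume writers counter, an anonymous device number
 * for stat(), and the highest objectid currently in the tree.
 */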
int btrfs_init_fs_root(struct btrfs_root *root)
{
	int ret;
	struct btrfs_subvolume_writers *writers;

	root->free_ino_ctl = kzalloc(sizeof(*root->free_ino_ctl), GFP_NOFS);
	root->free_ino_pinned = kzalloc(sizeof(*root->free_ino_pinned),
					GFP_NOFS);
	if (!root->free_ino_pinned || !root->free_ino_ctl) {
		ret = -ENOMEM;
		goto fail;
	}

	writers = btrfs_alloc_subvolume_writers();
	if (IS_ERR(writers)) {
		ret = PTR_ERR(writers);
		goto fail;
	}
	root->subv_writers = writers;

	btrfs_init_free_ino_ctl(root);
	spin_lock_init(&root->ino_cache_lock);
	init_waitqueue_head(&root->ino_cache_wait);

	ret = get_anon_bdev(&root->anon_dev);
	if (ret)
		goto fail;

	mutex_lock(&root->objectid_mutex);
	ret = btrfs_find_highest_objectid(root,
					&root->highest_objectid);
	if (ret) {
		mutex_unlock(&root->objectid_mutex);
		goto fail;
	}

	ASSERT(root->highest_objectid <= BTRFS_LAST_FREE_OBJECTID);

	mutex_unlock(&root->objectid_mutex);

	return 0;
fail:
	/* The caller is responsible for calling btrfs_free_fs_root() */
	return ret;
}
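
/*
 * Every fs root that has been read is cached in the fs_roots radix tree,
 * keyed by objectid, so repeated lookups do not have to hit the disk.
 */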
struct btrfs_root *btrfs_lookup_fs_root(struct btrfs_fs_info *fs_info,
					u64 root_id)
{
	struct btrfs_root *root;

	spin_lock(&fs_info->fs_roots_radix_lock);
	root = radix_tree_lookup(&fs_info->fs_roots_radix,
				 (unsigned long)root_id);
	spin_unlock(&fs_info->fs_roots_radix_lock);
	return root;
}

int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info,
			 struct btrfs_root *root)
{
	int ret;

	ret = radix_tree_preload(GFP_NOFS);
	if (ret)
		return ret;

	spin_lock(&fs_info->fs_roots_radix_lock);
	ret = radix_tree_insert(&fs_info->fs_roots_radix,
				(unsigned long)root->root_key.objectid,
				root);
	if (ret == 0)
		set_bit(BTRFS_ROOT_IN_RADIX, &root->state);
	spin_unlock(&fs_info->fs_roots_radix_lock);
	radix_tree_preload_end();

	return ret;
}
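
/*
 * Main entry point for looking up a root: global roots are returned
 * directly from fs_info, subvolume roots are first looked up in the radix
 * tree cache and only read from disk on a miss.  With @check_ref set,
 * roots whose root item has zero refs are treated as nonexistent.
 */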
struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
				     struct btrfs_key *location,
				     bool check_ref)
{
	struct btrfs_root *root;
	struct btrfs_path *path;
	struct btrfs_key key;
	int ret;

	if (location->objectid == BTRFS_ROOT_TREE_OBJECTID)
		return fs_info->tree_root;
	if (location->objectid == BTRFS_EXTENT_TREE_OBJECTID)
		return fs_info->extent_root;
	if (location->objectid == BTRFS_CHUNK_TREE_OBJECTID)
		return fs_info->chunk_root;
	if (location->objectid == BTRFS_DEV_TREE_OBJECTID)
		return fs_info->dev_root;
	if (location->objectid == BTRFS_CSUM_TREE_OBJECTID)
		return fs_info->csum_root;
	if (location->objectid == BTRFS_QUOTA_TREE_OBJECTID)
		return fs_info->quota_root ? fs_info->quota_root :
					     ERR_PTR(-ENOENT);
	if (location->objectid == BTRFS_UUID_TREE_OBJECTID)
		return fs_info->uuid_root ? fs_info->uuid_root :
					    ERR_PTR(-ENOENT);
	if (location->objectid == BTRFS_FREE_SPACE_TREE_OBJECTID)
		return fs_info->free_space_root ? fs_info->free_space_root :
						  ERR_PTR(-ENOENT);
again:
	root = btrfs_lookup_fs_root(fs_info, location->objectid);
	if (root) {
		if (check_ref && btrfs_root_refs(&root->root_item) == 0)
			return ERR_PTR(-ENOENT);
		return root;
	}

	root = btrfs_read_fs_root(fs_info->tree_root, location);
	if (IS_ERR(root))
		return root;

	if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
		ret = -ENOENT;
		goto fail;
	}

	ret = btrfs_init_fs_root(root);
	if (ret)
		goto fail;

	path = btrfs_alloc_path();
	if (!path) {
		ret = -ENOMEM;
		goto fail;
	}
	key.objectid = BTRFS_ORPHAN_OBJECTID;
	key.type = BTRFS_ORPHAN_ITEM_KEY;
	key.offset = location->objectid;

	ret = btrfs_search_slot(NULL, fs_info->tree_root, &key, path, 0, 0);
	btrfs_free_path(path);
	if (ret < 0)
		goto fail;
	if (ret == 0)
		set_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED, &root->state);

	ret = btrfs_insert_fs_root(fs_info, root);
	if (ret) {
		if (ret == -EEXIST) {
			btrfs_free_fs_root(root);
			goto again;
		}
		goto fail;
	}
	return root;
fail:
	btrfs_free_fs_root(root);
	return ERR_PTR(ret);
}
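
/*
 * Report the filesystem as congested when the backing device of any member
 * device is congested, so writeback can throttle accordingly.
 */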
static int btrfs_congested_fn(void *congested_data, int bdi_bits)
{
	struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
	int ret = 0;
	struct btrfs_device *device;
	struct backing_dev_info *bdi;

	rcu_read_lock();
	list_for_each_entry_rcu(device, &info->fs_devices->devices, dev_list) {
		if (!device->bdev)
			continue;
		bdi = device->bdev->bd_bdi;
		if (bdi_congested(bdi, bdi_bits)) {
			ret = 1;
			break;
		}
	}
	rcu_read_unlock();
	return ret;
}

/*
 * Called by the kthread helper functions to finally call the bio end_io
 * functions.  This is where read checksum verification actually happens.
 */
static void end_workqueue_fn(struct btrfs_work *work)
{
	struct bio *bio;
	struct btrfs_end_io_wq *end_io_wq;

	end_io_wq = container_of(work, struct btrfs_end_io_wq, work);
	bio = end_io_wq->bio;

	bio->bi_status = end_io_wq->status;
	bio->bi_private = end_io_wq->private;
	bio->bi_end_io = end_io_wq->end_io;
	kmem_cache_free(btrfs_end_io_wq_cache, end_io_wq);
	bio_endio(bio);
}
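
/*
 * Background thread for deferred cleanup work such as delayed iputs.  It
 * goes back to sleep early whenever btrfs_need_cleaner_sleep() says so,
 * the filesystem is not fully open yet, or the cleaner_mutex cannot be
 * taken.
 */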
static int cleaner_kthread(void *arg)
{
	struct btrfs_root *root = arg;
	struct btrfs_fs_info *fs_info = root->fs_info;
	int again;

	while (1) {
		again = 0;

		set_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);

		/* Make the cleaner go to sleep early. */
		if (btrfs_need_cleaner_sleep(fs_info))
			goto sleep;

		/*
		 * Do not do anything if we might cause open_ctree() to block
		 * before we have finished mounting the filesystem.
		 */
		if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
			goto sleep;

		if (!mutex_trylock(&fs_info->cleaner_mutex))
			goto sleep;

		/*
		 * Avoid the problem that we change the status of the fs
		 * during the above check and trylock.
		 */
		if (btrfs_need_cleaner_sleep(fs_info)) {
			mutex_unlock(&fs_info->cleaner_mutex);
			goto sleep;
		}

		btrfs_run_delayed_iputs(fs_info);
[ 886.489542] [<ffffffff814872d7>] entry_SYSCALL_64_fastpath+0x12/0x6f
[ 1081.852335] INFO: task fio:8244 blocked for more than 120 seconds.
[ 1081.854348] Not tainted 4.4.0-rc6-btrfs-next-18+ #1
[ 1081.857560] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1081.863227] fio D ffff880213f9bb28 0 8244 8240 0x00000000
[ 1081.868719] ffff880213f9bb28 00ffffff810fc6b0 ffffffff0000000a ffff88023ed55240
[ 1081.872499] ffff880206b5d400 ffff880213f9c000 ffff88020a4d5318 ffff880206b5d400
[ 1081.876834] ffffffff00000001 ffff880206b5d400 ffff880213f9bb40 ffffffff81482ba4
[ 1081.880782] Call Trace:
[ 1081.881793] [<ffffffff81482ba4>] schedule+0x7f/0x97
[ 1081.883340] [<ffffffff81485eb5>] rwsem_down_write_failed+0x2d5/0x325
[ 1081.895525] [<ffffffff8108d48d>] ? trace_hardirqs_on_caller+0x16/0x1ab
[ 1081.897419] [<ffffffff81269723>] call_rwsem_down_write_failed+0x13/0x20
[ 1081.899251] [<ffffffff81269723>] ? call_rwsem_down_write_failed+0x13/0x20
[ 1081.901063] [<ffffffff81089fae>] ? __down_write_nested.isra.0+0x1f/0x21
[ 1081.902365] [<ffffffff814855bd>] down_write+0x43/0x57
[ 1081.903846] [<ffffffffa05211b0>] ? btrfs_alloc_data_chunk_ondemand+0x1f6/0x288 [btrfs]
[ 1081.906078] [<ffffffffa05211b0>] btrfs_alloc_data_chunk_ondemand+0x1f6/0x288 [btrfs]
[ 1081.908846] [<ffffffff8108d461>] ? mark_held_locks+0x56/0x6c
[ 1081.910409] [<ffffffffa0521282>] btrfs_check_data_free_space+0x40/0x59 [btrfs]
[ 1081.912482] [<ffffffffa05228f5>] btrfs_delalloc_reserve_space+0x1e/0x4e [btrfs]
[ 1081.914597] [<ffffffffa053620a>] btrfs_direct_IO+0x10c/0x27e [btrfs]
[ 1081.919037] [<ffffffff8111d9a1>] generic_file_direct_write+0xb3/0x128
[ 1081.920754] [<ffffffffa05463c3>] btrfs_file_write_iter+0x229/0x408 [btrfs]
[ 1081.922496] [<ffffffff8108ae38>] ? __lock_is_held+0x38/0x50
[ 1081.923922] [<ffffffff8117279e>] __vfs_write+0x7c/0xa5
[ 1081.925275] [<ffffffff81172cda>] vfs_write+0xa0/0xe4
[ 1081.926584] [<ffffffff811734cc>] SyS_write+0x50/0x7e
[ 1081.927968] [<ffffffff814872d7>] entry_SYSCALL_64_fastpath+0x12/0x6f
[ 1081.985293] INFO: lockdep is turned off.
[ 1081.986132] INFO: task fio:8249 blocked for more than 120 seconds.
[ 1081.987434] Not tainted 4.4.0-rc6-btrfs-next-18+ #1
[ 1081.988534] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1081.990147] fio D ffff880218febbb8 0 8249 8240 0x00000000
[ 1081.991626] ffff880218febbb8 00ffffff81486b8e ffff88020000000b ffff88023ed75240
[ 1081.993258] ffff8802120a9a00 ffff880218fec000 ffff88020a4d5318 ffff8802120a9a00
[ 1081.994850] ffffffff00000001 ffff8802120a9a00 ffff880218febbd0 ffffffff81482ba4
[ 1081.996485] Call Trace:
[ 1081.997037] [<ffffffff81482ba4>] schedule+0x7f/0x97
[ 1081.998017] [<ffffffff81485eb5>] rwsem_down_write_failed+0x2d5/0x325
[ 1081.999241] [<ffffffff810852a5>] ? finish_wait+0x6d/0x76
[ 1082.000306] [<ffffffff81269723>] call_rwsem_down_write_failed+0x13/0x20
[ 1082.001533] [<ffffffff81269723>] ? call_rwsem_down_write_failed+0x13/0x20
[ 1082.002776] [<ffffffff81089fae>] ? __down_write_nested.isra.0+0x1f/0x21
[ 1082.003995] [<ffffffff814855bd>] down_write+0x43/0x57
[ 1082.005000] [<ffffffffa05211b0>] ? btrfs_alloc_data_chunk_ondemand+0x1f6/0x288 [btrfs]
[ 1082.007403] [<ffffffffa05211b0>] btrfs_alloc_data_chunk_ondemand+0x1f6/0x288 [btrfs]
[ 1082.008988] [<ffffffffa0545064>] btrfs_fallocate+0x7c1/0xc2f [btrfs]
[ 1082.010193] [<ffffffff8108a1ba>] ? percpu_down_read+0x4e/0x77
[ 1082.011280] [<ffffffff81174c4c>] ? __sb_start_write+0x5f/0xb0
[ 1082.012265] [<ffffffff81174c4c>] ? __sb_start_write+0x5f/0xb0
[ 1082.013021] [<ffffffff811712e4>] vfs_fallocate+0x170/0x1ff
[ 1082.013738] [<ffffffff81181ebb>] ioctl_preallocate+0x89/0x9b
[ 1082.014778] [<ffffffff811822d7>] do_vfs_ioctl+0x40a/0x4ea
[ 1082.015778] [<ffffffff81176ea7>] ? SYSC_newfstat+0x25/0x2e
[ 1082.016806] [<ffffffff8118b4de>] ? __fget_light+0x4d/0x71
[ 1082.017789] [<ffffffff8118240e>] SyS_ioctl+0x57/0x79
[ 1082.018706] [<ffffffff814872d7>] entry_SYSCALL_64_fastpath+0x12/0x6f
This happens because we can recursively acquire the semaphore
fs_info->delayed_iput_sem when attempting to allocate space to satisfy
a file write request, as shown in the first trace above: when committing
a transaction we acquire (down_read) the semaphore before running the
delayed iputs, and when running a delayed iput() we can end up calling
an inode's eviction handler, which in turn commits another transaction
and attempts to acquire (down_read) the semaphore again to run more
delayed iput operations.
This results in a deadlock because a task that acquires the same
semaphore multiple times must invoke down_read_nested() with a different
lockdep class for each level of recursion.
Fix this by simplifying the implementation: use a mutex instead, acquired
by the cleaner kthread before it runs the delayed iputs, rather than
always acquiring a semaphore before delayed iputs are run from anywhere.
Fixes: d7c151717a1e ("btrfs: Fix NO_SPACE bug caused by delayed-iput")
Cc: stable@vger.kernel.org # 4.1+
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2016-01-15 19:05:12 +08:00
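To make the recursion concrete, here is a distilled sketch of the pattern
lockdep complains about (not the actual kernel code: delayed_iput_sem was
removed by this very fix, and the helper name is made up for illustration):

#include <linux/rwsem.h>

/*
 * The same task takes the rwsem twice with plain down_read(), so
 * lockdep cannot tell intended nesting from a real deadlock.
 */
static void delayed_iputs_recursion_sketch(struct rw_semaphore *sem)
{
	down_read(sem);		/* from btrfs_commit_transaction() */
	/*
	 * iput() -> btrfs_evict_inode() -> btrfs_commit_transaction()
	 * re-enters the delayed iput path here ...
	 */
	down_read(sem);		/* recursion: would need down_read_nested() */
	up_read(sem);
	up_read(sem);
}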
|
|
|
|
2013-05-14 18:20:40 +08:00
|
|
|
again = btrfs_clean_one_deleted_snapshot(root);
|
2016-06-23 06:54:23 +08:00
|
|
|
mutex_unlock(&fs_info->cleaner_mutex);
|
2013-05-14 18:20:40 +08:00
|
|
|
|
|
|
|
/*
|
2013-05-14 18:20:41 +08:00
|
|
|
* The defragger has dealt with the R/O remount and umount,
|
|
|
|
* so we needn't do anything special here.
|
2013-05-14 18:20:40 +08:00
|
|
|
*/
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_run_defrag_inodes(fs_info);
|
Btrfs: fix race between balance and unused block group deletion
We have a race between deleting an unused block group and balancing the
same block group that leads to an assertion failure/BUG(), producing the
following trace:
[181631.208236] BTRFS: assertion failed: 0, file: fs/btrfs/volumes.c, line: 2622
[181631.220591] ------------[ cut here ]------------
[181631.222959] kernel BUG at fs/btrfs/ctree.h:4062!
[181631.223932] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[181631.224566] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse acpi_cpufreq parpor$
[181631.224566] CPU: 8 PID: 17451 Comm: btrfs Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[181631.224566] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[181631.224566] task: ffff880127e09590 ti: ffff8800b5824000 task.ti: ffff8800b5824000
[181631.224566] RIP: 0010:[<ffffffffa03f19f6>] [<ffffffffa03f19f6>] assfail.constprop.50+0x1e/0x20 [btrfs]
[181631.224566] RSP: 0018:ffff8800b5827ae8 EFLAGS: 00010246
[181631.224566] RAX: 0000000000000040 RBX: ffff8800109fc218 RCX: ffffffff81095dce
[181631.224566] RDX: 0000000000005124 RSI: ffffffff81464819 RDI: 00000000ffffffff
[181631.224566] RBP: ffff8800b5827ae8 R08: 0000000000000001 R09: 0000000000000000
[181631.224566] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800109fc200
[181631.224566] R13: ffff880020095000 R14: ffff8800b1a13f38 R15: ffff880020095000
[181631.224566] FS: 00007f70ca0b0c80(0000) GS:ffff88013ec00000(0000) knlGS:0000000000000000
[181631.224566] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[181631.224566] CR2: 00007f2872ab6e68 CR3: 00000000a717c000 CR4: 00000000000006e0
[181631.224566] Stack:
[181631.224566] ffff8800b5827ba8 ffffffffa03f3916 ffff8800b5827b38 ffffffffa03d080e
[181631.224566] ffffffffa03d1423 ffff880020095000 ffff88001233c000 0000000000000001
[181631.224566] ffff880020095000 ffff8800b1a13f38 0000000a69c00000 0000000000000000
[181631.224566] Call Trace:
[181631.224566] [<ffffffffa03f3916>] btrfs_remove_chunk+0xa4/0x6bb [btrfs]
[181631.224566] [<ffffffffa03d080e>] ? join_transaction.isra.8+0xb9/0x3ba [btrfs]
[181631.224566] [<ffffffffa03d1423>] ? wait_current_trans.isra.13+0x22/0xfc [btrfs]
[181631.224566] [<ffffffffa03f3fbc>] btrfs_relocate_chunk.isra.29+0x8f/0xa7 [btrfs]
[181631.224566] [<ffffffffa03f54df>] btrfs_balance+0xaa4/0xc52 [btrfs]
[181631.224566] [<ffffffffa03fd388>] btrfs_ioctl_balance+0x23f/0x2b0 [btrfs]
[181631.224566] [<ffffffff810872f9>] ? trace_hardirqs_on+0xd/0xf
[181631.224566] [<ffffffffa04019a3>] btrfs_ioctl+0xfe2/0x2220 [btrfs]
[181631.224566] [<ffffffff812603ed>] ? __this_cpu_preempt_check+0x13/0x15
[181631.224566] [<ffffffff81084669>] ? arch_local_irq_save+0x9/0xc
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff8103e48c>] ? __do_page_fault+0x211/0x424
[181631.224566] [<ffffffff811755e6>] do_vfs_ioctl+0x3c6/0x479
(...)
The sequence of steps leading to this is:

        CPU 0                                CPU 1

                                      btrfs_balance()
                                        btrfs_relocate_chunk()
                                          btrfs_relocate_block_group(bg X)
                                            btrfs_lookup_block_group(bg X)
  cleaner_kthread
    locks fs_info->cleaner_mutex
    btrfs_delete_unused_bgs()
      finds bg X, which became
      unused in the previous
      transaction
      checks bg X ->ro == 0,
      so it proceeds
                                            sets bg X ->ro to 1
                                            (btrfs_set_block_group_ro(bg X))
                                            blocks on fs_info->cleaner_mutex
      btrfs_remove_chunk(bg X)
      unlocks fs_info->cleaner_mutex
                                            acquires fs_info->cleaner_mutex
                                            relocate_block_group()
                                            --> does nothing, no extents
                                                found in the extent tree
                                                from bg X
                                            unlocks fs_info->cleaner_mutex
                                            btrfs_relocate_block_group(bg X)
                                              returns
                                            btrfs_remove_chunk(bg X)
                                              extent map not found
                                              --> ASSERT(0)
Fix this by using a new mutex to make sure these 2 operations, block
group relocation and removal, are serialized.
This issue is reproducible by running fstests generic/038 (which stresses
chunk allocation and automatic removal of unused block groups) together
with the following balance loop:
while true; do btrfs balance start -dusage=0 <mountpoint> ; done
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-06-11 07:58:53 +08:00
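Roughly, the fix serializes the two paths on a dedicated mutex. A sketch of
both sides (simplified; error handling and the surrounding loops are omitted,
and the field and function names follow the comment below):

	/* Balance/relocation side, around btrfs_relocate_chunk(): */
	mutex_lock(&fs_info->delete_unused_bgs_mutex);
	ret = btrfs_relocate_chunk(fs_info, chunk_offset);
	mutex_unlock(&fs_info->delete_unused_bgs_mutex);

	/* Cleaner side, inside btrfs_delete_unused_bgs(), per block group: */
	mutex_lock(&fs_info->delete_unused_bgs_mutex);
	/* re-check that the block group is still unused and not read-only
	 * before removing its chunk */
	mutex_unlock(&fs_info->delete_unused_bgs_mutex);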
|
|
|
|
|
|
|
/*
|
|
|
|
* Acquires fs_info->delete_unused_bgs_mutex to avoid racing
|
|
|
|
* with relocation (btrfs_relocate_chunk) and relocation
|
|
|
|
* acquires fs_info->cleaner_mutex (btrfs_relocate_block_group)
|
|
|
|
* after acquiring fs_info->delete_unused_bgs_mutex. So we
|
|
|
|
* can't hold, nor need to, fs_info->cleaner_mutex when deleting
|
|
|
|
* unused block groups.
|
|
|
|
*/
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_delete_unused_bgs(fs_info);
|
2013-05-14 18:20:40 +08:00
|
|
|
sleep:
|
2019-01-11 23:21:02 +08:00
|
|
|
clear_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);
|
Btrfs: fix missing delayed iputs on unmount
There's a race between close_ctree() and cleaner_kthread().
close_ctree() sets btrfs_fs_closing(), and the cleaner stops when it
sees it set, but this is racy; the cleaner might have already checked
the bit and could be cleaning stuff. In particular, if it deletes unused
block groups, it will create delayed iputs for the free space cache
inodes. As of "btrfs: don't run delayed_iputs in commit", we're no
longer running delayed iputs after a commit. Therefore, if the cleaner
creates more delayed iputs after delayed iputs are run in
btrfs_commit_super(), we will leak inodes on unmount and get a busy
inode crash from the VFS.
Fix it by parking the cleaner before we actually close anything. Then,
any remaining delayed iputs will always be handled in
btrfs_commit_super(). This also ensures that the commit in close_ctree()
is really the last commit, so we can get rid of the commit in
cleaner_kthread().
Running fstests generic/475 followed by generic/476 can trigger a crash
that manifests as slab corruption, caused by a wake-up function accessing
the freed kthread structure. Sample trace:
[ 5657.077612] BUG: unable to handle kernel NULL pointer dereference at 00000000000000cc
[ 5657.079432] PGD 1c57a067 P4D 1c57a067 PUD da10067 PMD 0
[ 5657.080661] Oops: 0000 [#1] PREEMPT SMP
[ 5657.081592] CPU: 1 PID: 5157 Comm: fsstress Tainted: G W 4.19.0-rc8-default+ #323
[ 5657.083703] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.2-0-gf9626cc-prebuilt.qemu-project.org 04/01/2014
[ 5657.086577] RIP: 0010:shrink_page_list+0x2f9/0xe90
[ 5657.091937] RSP: 0018:ffffb5c745c8f728 EFLAGS: 00010287
[ 5657.092953] RAX: 0000000000000074 RBX: ffffb5c745c8f830 RCX: 0000000000000000
[ 5657.094590] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff9a8747fdf3d0
[ 5657.095987] RBP: ffffb5c745c8f9e0 R08: 0000000000000000 R09: 0000000000000000
[ 5657.097159] R10: ffff9a8747fdf5e8 R11: 0000000000000000 R12: ffffb5c745c8f788
[ 5657.098513] R13: ffff9a877f6ff2c0 R14: ffff9a877f6ff2c8 R15: dead000000000200
[ 5657.099689] FS: 00007f948d853b80(0000) GS:ffff9a877d600000(0000) knlGS:0000000000000000
[ 5657.101032] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5657.101953] CR2: 00000000000000cc CR3: 00000000684bd000 CR4: 00000000000006e0
[ 5657.103159] Call Trace:
[ 5657.103776] shrink_inactive_list+0x194/0x410
[ 5657.104671] shrink_node_memcg.constprop.84+0x39a/0x6a0
[ 5657.105750] shrink_node+0x62/0x1c0
[ 5657.106529] try_to_free_pages+0x1a4/0x500
[ 5657.107408] __alloc_pages_slowpath+0x2c9/0xb20
[ 5657.108418] __alloc_pages_nodemask+0x268/0x2b0
[ 5657.109348] kmalloc_large_node+0x37/0x90
[ 5657.110205] __kmalloc_node+0x236/0x310
[ 5657.111014] kvmalloc_node+0x3e/0x70
Fixes: 30928e9baac2 ("btrfs: don't run delayed_iputs in commit")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add trace ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-11-01 01:06:08 +08:00
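The cleaner's half of that park handshake is the kthread_should_park() /
kthread_parkme() pair visible just below; the unmount half is, in sketch
form (close_ctree() does effectively this before tearing anything down):

	/*
	 * Park, but do not yet stop, the cleaner so it can no longer
	 * queue new delayed iputs; anything it already queued is then
	 * drained by the final btrfs_commit_super().
	 */
	kthread_park(fs_info->cleaner_kthread);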
|
|
|
if (kthread_should_park())
|
|
|
|
kthread_parkme();
|
|
|
|
if (kthread_should_stop())
|
|
|
|
return 0;
|
2016-03-15 18:28:54 +08:00
|
|
|
if (!again) {
|
2008-06-26 04:01:31 +08:00
|
|
|
set_current_state(TASK_INTERRUPTIBLE);
|
2018-11-01 01:06:08 +08:00
|
|
|
schedule();
|
2008-06-26 04:01:31 +08:00
|
|
|
__set_current_state(TASK_RUNNING);
|
|
|
|
}
|
Btrfs: fix crash on close_ctree() if cleaner starts new transaction
Often when running fstests btrfs/079 I was running into the following
trace during umount on one of my qemu/kvm test vms:
[ 8245.682441] WARNING: CPU: 8 PID: 25064 at fs/btrfs/extent-tree.c:138 btrfs_put_block_group+0x51/0x69 [btrfs]()
[ 8245.685039] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse parport_pc i2c_piix4 acpi_cpufreq processor psmouse i2c_core thermal_sys parport evdev serio_raw button pcspkr microcode ext4 crc16 jbd2 mbcache sg sr_mod cdrom sd_mod ata_generic virtio_scsi ata_piix libata floppy virtio_pci virtio_ring scsi_mod virtio e1000 [last unloaded: btrfs]
[ 8245.693860] CPU: 8 PID: 25064 Comm: umount Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[ 8245.695081] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[ 8245.697583] 0000000000000009 ffff88020d047ce8 ffffffff8145eec7 ffffffff81095dce
[ 8245.699234] 0000000000000000 ffff88020d047d28 ffffffff8104b399 0000000000000028
[ 8245.700995] ffffffffa04db07b ffff8801c6036c00 ffff8801c6036d68 ffff880202eb40b0
[ 8245.702510] Call Trace:
[ 8245.703006] [<ffffffff8145eec7>] dump_stack+0x4f/0x7b
[ 8245.705393] [<ffffffff81095dce>] ? console_unlock+0x356/0x3a2
[ 8245.706569] [<ffffffff8104b399>] warn_slowpath_common+0xa1/0xbb
[ 8245.707747] [<ffffffffa04db07b>] ? btrfs_put_block_group+0x51/0x69 [btrfs]
[ 8245.709101] [<ffffffff8104b456>] warn_slowpath_null+0x1a/0x1c
[ 8245.710274] [<ffffffffa04db07b>] btrfs_put_block_group+0x51/0x69 [btrfs]
[ 8245.711823] [<ffffffffa04e3473>] btrfs_free_block_groups+0x145/0x322 [btrfs]
[ 8245.713251] [<ffffffffa04ef31a>] close_ctree+0x1ef/0x325 [btrfs]
[ 8245.714448] [<ffffffff8117d26e>] ? evict_inodes+0xdc/0xeb
[ 8245.715539] [<ffffffffa04cb3ad>] btrfs_put_super+0x19/0x1b [btrfs]
[ 8245.716835] [<ffffffff81167607>] generic_shutdown_super+0x73/0xef
[ 8245.718015] [<ffffffff81167a3a>] kill_anon_super+0x13/0x1e
[ 8245.719101] [<ffffffffa04cb1b6>] btrfs_kill_super+0x17/0x23 [btrfs]
[ 8245.720316] [<ffffffff81167544>] deactivate_locked_super+0x3b/0x68
[ 8245.721517] [<ffffffff81167dd6>] deactivate_super+0x3f/0x43
[ 8245.722581] [<ffffffff8117fbb9>] cleanup_mnt+0x59/0x78
[ 8245.723538] [<ffffffff8117fc18>] __cleanup_mnt+0x12/0x14
[ 8245.724572] [<ffffffff81065371>] task_work_run+0x8f/0xbc
[ 8245.725598] [<ffffffff810028fb>] do_notify_resume+0x45/0x53
[ 8245.726892] [<ffffffff814651ac>] int_signal+0x12/0x17
[ 8245.737887] ---[ end trace a01d038397e99b92 ]---
[ 8245.769363] general protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[ 8245.770737] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse parport_pc i2c_piix4 acpi_cpufreq processor psmouse i2c_core thermal_sys parport evdev serio_raw button pcspkr microcode ext4 crc16 jbd2 mbcache sg sr_mod cdrom sd_mod ata_generic virtio_scsi ata_piix libata floppy virtio_pci virtio_ring scsi_mod virtio e1000 [last unloaded: btrfs]
[ 8245.772641] CPU: 2 PID: 25064 Comm: umount Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[ 8245.772641] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[ 8245.772641] task: ffff880013005810 ti: ffff88020d044000 task.ti: ffff88020d044000
[ 8245.772641] RIP: 0010:[<ffffffffa051c8e6>] [<ffffffffa051c8e6>] btrfs_queue_work+0x2c/0x14d [btrfs]
[ 8245.772641] RSP: 0018:ffff88020d0478b8 EFLAGS: 00010202
[ 8245.772641] RAX: 0000000000000004 RBX: 6b6b6b6b6b6b6b6b RCX: ffffffffa0581488
[ 8245.772641] RDX: 0000000000000000 RSI: ffff880194b7bf48 RDI: ffff880144b6a7a0
[ 8245.772641] RBP: ffff88020d0478d8 R08: 0000000000000000 R09: 000000000000ffff
[ 8245.772641] R10: 0000000000000004 R11: 0000000000000005 R12: ffff880194b7bf48
[ 8245.772641] R13: ffff880194b7bf48 R14: 0000000000000410 R15: 0000000000000000
[ 8245.772641] FS: 00007f991e77d840(0000) GS:ffff88023e280000(0000) knlGS:0000000000000000
[ 8245.772641] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 8245.772641] CR2: 00007fbbd325ee68 CR3: 000000021de8e000 CR4: 00000000000006e0
[ 8245.772641] Stack:
[ 8245.772641] ffff880194b7bf00 ffff880202eb4000 ffff880194b7bf48 0000000000000410
[ 8245.772641] ffff88020d047958 ffffffffa04ec6d5 ffff8801629b2ee8 0000000082987570
[ 8245.772641] 0000000000a5813f 0000000000000001 ffff880013006100 0000000000000002
[ 8245.772641] Call Trace:
[ 8245.772641] [<ffffffffa04ec6d5>] btrfs_wq_submit_bio+0xe1/0x17b [btrfs]
[ 8245.772641] [<ffffffff81086bff>] ? check_irq_usage+0x76/0x87
[ 8245.772641] [<ffffffffa04ec825>] btree_submit_bio_hook+0xb6/0xd9 [btrfs]
[ 8245.772641] [<ffffffffa04ebb7c>] ? btree_csum_one_bio+0xad/0xad [btrfs]
[ 8245.772641] [<ffffffffa04eb1a6>] ? btree_io_failed_hook+0x5e/0x5e [btrfs]
[ 8245.772641] [<ffffffffa050a6e7>] submit_one_bio+0x8c/0xc7 [btrfs]
[ 8245.772641] [<ffffffffa050d75b>] submit_extent_page.isra.18+0x9d/0x186 [btrfs]
[ 8245.772641] [<ffffffffa050d95b>] write_one_eb+0x117/0x1ae [btrfs]
[ 8245.772641] [<ffffffffa050a79b>] ? end_extent_buffer_writeback+0x21/0x21 [btrfs]
[ 8245.772641] [<ffffffffa0510510>] btree_write_cache_pages+0x2ab/0x385 [btrfs]
[ 8245.772641] [<ffffffffa04eb2b8>] btree_writepages+0x23/0x5c [btrfs]
[ 8245.772641] [<ffffffff8111c661>] do_writepages+0x23/0x2c
[ 8245.772641] [<ffffffff81189cd4>] __writeback_single_inode+0xda/0x5bd
[ 8245.772641] [<ffffffff8118aa60>] ? writeback_single_inode+0x2b/0x173
[ 8245.772641] [<ffffffff8118aafd>] writeback_single_inode+0xc8/0x173
[ 8245.772641] [<ffffffff8118ac95>] write_inode_now+0x8a/0x95
[ 8245.772641] [<ffffffff81247bf0>] ? _atomic_dec_and_lock+0x30/0x4e
[ 8245.772641] [<ffffffff8117cc5e>] iput+0x17d/0x26a
[ 8245.772641] [<ffffffffa04ef355>] close_ctree+0x22a/0x325 [btrfs]
[ 8245.772641] [<ffffffff8117d26e>] ? evict_inodes+0xdc/0xeb
[ 8245.772641] [<ffffffffa04cb3ad>] btrfs_put_super+0x19/0x1b [btrfs]
[ 8245.772641] [<ffffffff81167607>] generic_shutdown_super+0x73/0xef
[ 8245.772641] [<ffffffff81167a3a>] kill_anon_super+0x13/0x1e
[ 8245.772641] [<ffffffffa04cb1b6>] btrfs_kill_super+0x17/0x23 [btrfs]
[ 8245.772641] [<ffffffff81167544>] deactivate_locked_super+0x3b/0x68
[ 8245.772641] [<ffffffff81167dd6>] deactivate_super+0x3f/0x43
[ 8245.772641] [<ffffffff8117fbb9>] cleanup_mnt+0x59/0x78
[ 8245.772641] [<ffffffff8117fc18>] __cleanup_mnt+0x12/0x14
[ 8245.772641] [<ffffffff81065371>] task_work_run+0x8f/0xbc
[ 8245.772641] [<ffffffff810028fb>] do_notify_resume+0x45/0x53
[ 8245.772641] [<ffffffff814651ac>] int_signal+0x12/0x17
[ 8245.772641] Code: 1f 44 00 00 55 48 89 e5 41 56 41 55 41 54 53 49 89 f4 48 8b 46 70 a8 04 74 09 48 8b 5f 08 48 85 db 75 03 48 8b 1f 49 89 5c 24 68 <83> 7b 5c ff 74 04 f0 ff 43 50 49 83 7c 24 08 00 74 2c 4c 8d 6b
[ 8245.772641] RIP [<ffffffffa051c8e6>] btrfs_queue_work+0x2c/0x14d [btrfs]
[ 8245.772641] RSP <ffff88020d0478b8>
[ 8245.845040] ---[ end trace a01d038397e99b93 ]---
For logical reasons such as the phase of the moon, this happened more
often with "-o inode_cache" than without any mount options.
After some debugging it turned out to be simple to understand what was
happening:
1) close_ctree() is called;
2) It then stops the transaction kthread, which commits the current
transaction;
3) It asks the cleaner kthread to stop, which is currently running
btrfs_delete_unused_bgs();
4) btrfs_delete_unused_bgs() finds an unused block group, starts a new
transaction, deletes the block group, which implies COWing some
tree nodes and leaves and dirtying their respective pages, and then
finally it ends the transaction it started, without committing it;
5) The cleaner kthread stops;
6) close_ctree() releases (from memory) the block group objects, which
produces the warning in the trace pasted above;
7) Then it invalidates all pages of the btree inode, by calling
invalidate_inode_pages2(), which waits for any pages under writeback,
and releases any non-dirty pages;
8) All work queues are destroyed (waiting first for their current tasks
to finish execution);
9) A final iput() is called against the btree inode;
10) This iput triggers a writeback of the btree inode because it still
has dirty pages;
11) This starts the whole chain of callbacks for the btree inode until
it eventually reaches btrfs_wq_submit_bio() where it leads to a
NULL pointer dereference because the work queues were already
destroyed.
Fix this by making the cleaner commit any transaction that it started
after the transaction kthread was stopped.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-06-13 13:55:31 +08:00
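In sketch form, the fix appended roughly the following to cleaner_kthread()
(this block was removed again once the cleaner became parked before
close_ctree() commits, per the 2018-11-01 change above):

	/*
	 * The transaction kthread is stopped before the cleaner, so a
	 * transaction the cleaner opened afterwards (e.g. while deleting
	 * unused block groups) would never be committed. Commit it here
	 * so the btree inode has no dirty pages left at the final iput().
	 */
	trans = btrfs_attach_transaction(root);
	if (IS_ERR(trans)) {
		if (PTR_ERR(trans) != -ENOENT)
			btrfs_err(fs_info,
				  "cleaner transaction attach returned %ld",
				  PTR_ERR(trans));
	} else if (btrfs_commit_transaction(trans)) {
		btrfs_err(fs_info, "cleaner failed to commit transaction");
	}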
|
|
|
}
|
2008-06-26 04:01:31 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int transaction_kthread(void *arg)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root = arg;
|
2016-06-23 06:54:23 +08:00
|
|
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
2008-06-26 04:01:31 +08:00
|
|
|
struct btrfs_trans_handle *trans;
|
|
|
|
struct btrfs_transaction *cur;
|
2010-05-16 22:49:58 +08:00
|
|
|
u64 transid;
|
2018-06-12 19:48:25 +08:00
|
|
|
time64_t now;
|
2008-06-26 04:01:31 +08:00
|
|
|
unsigned long delay;
|
2012-03-12 23:05:50 +08:00
|
|
|
bool cannot_commit;
|
2008-06-26 04:01:31 +08:00
|
|
|
|
|
|
|
do {
|
2012-03-12 23:05:50 +08:00
|
|
|
cannot_commit = false;
|
2016-06-23 06:54:23 +08:00
|
|
|
delay = HZ * fs_info->commit_interval;
|
|
|
|
mutex_lock(&fs_info->transaction_kthread_mutex);
|
2008-06-26 04:01:31 +08:00
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
cur = fs_info->running_transaction;
|
2008-06-26 04:01:31 +08:00
|
|
|
if (!cur) {
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2008-06-26 04:01:31 +08:00
|
|
|
goto sleep;
|
|
|
|
}
|
2008-07-29 03:32:19 +08:00
|
|
|
|
2018-06-22 00:04:05 +08:00
|
|
|
now = ktime_get_seconds();
|
Btrfs: make the state of the transaction more readable
We used three variables to track the state of the transaction, which was
complex and wasted memory. Besides that, it was hard to understand which
types of transaction handles should be blocked in each transaction
state, so developers often made mistakes.
This patch improves on that. In this patch, we define 6 states
for the transaction,
enum btrfs_trans_state {
TRANS_STATE_RUNNING = 0,
TRANS_STATE_BLOCKED = 1,
TRANS_STATE_COMMIT_START = 2,
TRANS_STATE_COMMIT_DOING = 3,
TRANS_STATE_UNBLOCKED = 4,
TRANS_STATE_COMPLETED = 5,
TRANS_STATE_MAX = 6,
}
and just use one variable to track the state.
In order to make the blocked handle types for each state clearer,
we introduce an array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
[TRANS_STATE_BLOCKED] = (__TRANS_USERSPACE |
__TRANS_START),
[TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH),
[TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN),
[TRANS_STATE_UNBLOCKED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
[TRANS_STATE_COMPLETED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
}
This is very intuitive.
Besides that, because we remove ->in_commit from the transaction structure,
the lock ->commit_lock that was used to protect it is unnecessary, so remove
->commit_lock as well.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 11:53:43 +08:00
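As a sketch of how such a table gates new handles (the helper name is made
up for illustration; the real check lives in the transaction start/join path):

static inline bool handle_blocked_by_state(struct btrfs_transaction *cur,
					   unsigned int type)
{
	/*
	 * A handle of the given __TRANS_* type may not start while the
	 * running transaction is in a state whose mask includes it.
	 */
	return (btrfs_blocked_trans_types[cur->state] & type) != 0;
}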
|
|
|
if (cur->state < TRANS_STATE_BLOCKED &&
|
2017-12-22 16:06:39 +08:00
|
|
|
!test_bit(BTRFS_FS_NEED_ASYNC_COMMIT, &fs_info->flags) &&
|
2013-08-02 00:14:52 +08:00
|
|
|
(now < cur->start_time ||
|
2016-06-23 06:54:23 +08:00
|
|
|
now - cur->start_time < fs_info->commit_interval)) {
|
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2008-06-26 04:01:31 +08:00
|
|
|
delay = HZ * 5;
|
|
|
|
goto sleep;
|
|
|
|
}
|
2010-05-16 22:49:58 +08:00
|
|
|
transid = cur->transid;
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2009-03-13 22:10:06 +08:00
|
|
|
|
2012-03-12 23:03:00 +08:00
|
|
|
/* If the file system is aborted, this will always fail. */
|
Btrfs: fix orphan transaction on the frozen filesystem
With the following debug patch:
static int btrfs_freeze(struct super_block *sb)
{
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+ struct btrfs_transaction *trans;
+
+ spin_lock(&fs_info->trans_lock);
+ trans = fs_info->running_transaction;
+ if (trans) {
+ printk("Transid %llu, use_count %d, num_writer %d\n",
+ trans->transid, atomic_read(&trans->use_count),
+ atomic_read(&trans->num_writers));
+ }
+ spin_unlock(&fs_info->trans_lock);
return 0;
}
I found there was an orphan transaction after the freeze operation was done.
This is because the transaction may not be committed when the transaction
handle ends, even though it is the last handle of the current transaction.
This design avoids committing the transaction too frequently, but it also
introduces the above problem.
So I add btrfs_attach_transaction(), which can catch the current transaction
and commit it. If there is no transaction, it returns -ENOENT and does
nothing.
This function can also be used instead of btrfs_join_transaction_freeze(),
because it doesn't increase the writer counter and doesn't start a new
transaction, so it also fixes the deadlock between sync and freeze.
Besides that, it is used instead of btrfs_join_transaction() in
transaction_kthread(), because if there is no transaction, the transaction
kthread needn't do anything.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
2012-09-20 15:54:00 +08:00
|
|
|
trans = btrfs_attach_transaction(root);
|
2012-03-12 23:05:50 +08:00
|
|
|
if (IS_ERR(trans)) {
|
2012-09-20 15:54:00 +08:00
|
|
|
if (PTR_ERR(trans) != -ENOENT)
|
|
|
|
cannot_commit = true;
|
2012-03-12 23:03:00 +08:00
|
|
|
goto sleep;
|
2012-03-12 23:05:50 +08:00
|
|
|
}
|
2010-05-16 22:49:58 +08:00
|
|
|
if (transid == trans->transid) {
|
2016-09-10 09:39:03 +08:00
|
|
|
btrfs_commit_transaction(trans);
|
2010-05-16 22:49:58 +08:00
|
|
|
} else {
|
2016-09-10 09:39:03 +08:00
|
|
|
btrfs_end_transaction(trans);
|
2010-05-16 22:49:58 +08:00
|
|
|
}
|
2008-06-26 04:01:31 +08:00
|
|
|
sleep:
|
2016-06-23 06:54:23 +08:00
|
|
|
wake_up_process(fs_info->cleaner_kthread);
|
|
|
|
mutex_unlock(&fs_info->transaction_kthread_mutex);
|
2008-06-26 04:01:31 +08:00
|
|
|
|
2013-09-28 04:32:39 +08:00
|
|
|
if (unlikely(test_bit(BTRFS_FS_STATE_ERROR,
|
2016-06-23 06:54:23 +08:00
|
|
|
&fs_info->fs_state)))
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_cleanup_transaction(fs_info);
|
2016-03-15 18:28:59 +08:00
|
|
|
if (!kthread_should_stop() &&
|
2016-06-23 06:54:23 +08:00
|
|
|
(!btrfs_transaction_blocked(fs_info) ||
|
2016-03-15 18:28:59 +08:00
|
|
|
cannot_commit))
|
2018-01-23 20:46:53 +08:00
|
|
|
schedule_timeout_interruptible(delay);
|
2008-06-26 04:01:31 +08:00
|
|
|
} while (!kthread_should_stop());
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-11-04 03:17:42 +08:00
|
|
|
/*
|
|
|
|
* this will find the highest generation in the array of
|
|
|
|
* root backups. The index of the backup with the highest generation is returned,
|
|
|
|
* or -1 if we can't find anything.
|
|
|
|
*
|
|
|
|
* We check to make sure the array is valid by comparing the
|
|
|
|
* generation of the latest root in the array with the generation
|
|
|
|
* in the super block. If they don't match we pitch it.
|
|
|
|
*/
|
|
|
|
static int find_newest_super_backup(struct btrfs_fs_info *info, u64 newest_gen)
|
|
|
|
{
|
|
|
|
u64 cur;
|
|
|
|
int newest_index = -1;
|
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < BTRFS_NUM_BACKUP_ROOTS; i++) {
|
|
|
|
root_backup = info->super_copy->super_roots + i;
|
|
|
|
cur = btrfs_backup_tree_root_gen(root_backup);
|
|
|
|
if (cur == newest_gen)
|
|
|
|
newest_index = i;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* check to see if we actually wrapped around */
|
|
|
|
if (newest_index == BTRFS_NUM_BACKUP_ROOTS - 1) {
|
|
|
|
root_backup = info->super_copy->super_roots;
|
|
|
|
cur = btrfs_backup_tree_root_gen(root_backup);
|
|
|
|
if (cur == newest_gen)
|
|
|
|
newest_index = 0;
|
|
|
|
}
|
|
|
|
return newest_index;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* find the oldest backup so we know where to store new entries
|
|
|
|
* in the backup array. This will set the backup_root_index
|
|
|
|
* field in the fs_info struct
|
|
|
|
*/
|
|
|
|
static void find_oldest_super_backup(struct btrfs_fs_info *info,
|
|
|
|
u64 newest_gen)
|
|
|
|
{
|
|
|
|
int newest_index = -1;
|
|
|
|
|
|
|
|
newest_index = find_newest_super_backup(info, newest_gen);
|
|
|
|
/* if there was garbage in there, just move along */
|
|
|
|
if (newest_index == -1) {
|
|
|
|
info->backup_root_index = 0;
|
|
|
|
} else {
|
|
|
|
info->backup_root_index = (newest_index + 1) % BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* copy all the root pointers into the super backup array.
|
|
|
|
* this will bump the backup pointer by one when it is
|
|
|
|
* done
|
|
|
|
*/
|
|
|
|
static void backup_super_roots(struct btrfs_fs_info *info)
|
|
|
|
{
|
|
|
|
int next_backup;
|
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
int last_backup;
|
|
|
|
|
|
|
|
next_backup = info->backup_root_index;
|
|
|
|
last_backup = (next_backup + BTRFS_NUM_BACKUP_ROOTS - 1) %
|
|
|
|
BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* just overwrite the last backup if we're at the same generation
|
|
|
|
* this happens only at umount
|
|
|
|
*/
|
|
|
|
root_backup = info->super_for_commit->super_roots + last_backup;
|
|
|
|
if (btrfs_backup_tree_root_gen(root_backup) ==
|
|
|
|
btrfs_header_generation(info->tree_root->node))
|
|
|
|
next_backup = last_backup;
|
|
|
|
|
|
|
|
root_backup = info->super_for_commit->super_roots + next_backup;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* make sure all of our padding and empty slots get zero filled
|
|
|
|
* regardless of which ones we use today
|
|
|
|
*/
|
|
|
|
memset(root_backup, 0, sizeof(*root_backup));
|
|
|
|
|
|
|
|
info->backup_root_index = (next_backup + 1) % BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
|
|
|
|
btrfs_set_backup_tree_root(root_backup, info->tree_root->node->start);
|
|
|
|
btrfs_set_backup_tree_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->tree_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_tree_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->tree_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_chunk_root(root_backup, info->chunk_root->node->start);
|
|
|
|
btrfs_set_backup_chunk_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->chunk_root->node));
|
|
|
|
btrfs_set_backup_chunk_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->chunk_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_extent_root(root_backup, info->extent_root->node->start);
|
|
|
|
btrfs_set_backup_extent_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->extent_root->node));
|
|
|
|
btrfs_set_backup_extent_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->extent_root->node));
|
|
|
|
|
2011-11-07 07:50:56 +08:00
|
|
|
/*
|
|
|
|
* we might commit during log recovery, which happens before we set
|
|
|
|
* the fs_root. Make sure it is valid before we fill it in.
|
|
|
|
*/
|
|
|
|
if (info->fs_root && info->fs_root->node) {
|
|
|
|
btrfs_set_backup_fs_root(root_backup,
|
|
|
|
info->fs_root->node->start);
|
|
|
|
btrfs_set_backup_fs_root_gen(root_backup,
|
2011-11-04 03:17:42 +08:00
|
|
|
btrfs_header_generation(info->fs_root->node));
|
2011-11-07 07:50:56 +08:00
|
|
|
btrfs_set_backup_fs_root_level(root_backup,
|
2011-11-04 03:17:42 +08:00
|
|
|
btrfs_header_level(info->fs_root->node));
|
2011-11-07 07:50:56 +08:00
|
|
|
}
|
2011-11-04 03:17:42 +08:00
|
|
|
|
|
|
|
btrfs_set_backup_dev_root(root_backup, info->dev_root->node->start);
|
|
|
|
btrfs_set_backup_dev_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->dev_root->node));
|
|
|
|
btrfs_set_backup_dev_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->dev_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_csum_root(root_backup, info->csum_root->node->start);
|
|
|
|
btrfs_set_backup_csum_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->csum_root->node));
|
|
|
|
btrfs_set_backup_csum_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->csum_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_total_bytes(root_backup,
|
|
|
|
btrfs_super_total_bytes(info->super_copy));
|
|
|
|
btrfs_set_backup_bytes_used(root_backup,
|
|
|
|
btrfs_super_bytes_used(info->super_copy));
|
|
|
|
btrfs_set_backup_num_devices(root_backup,
|
|
|
|
btrfs_super_num_devices(info->super_copy));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* if we don't copy this out to the super_copy, it won't get remembered
|
|
|
|
* for the next commit
|
|
|
|
*/
|
|
|
|
memcpy(&info->super_copy->super_roots,
|
|
|
|
&info->super_for_commit->super_roots,
|
|
|
|
sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* this copies info out of the root backup array and back into
|
|
|
|
* the in-memory super block. It is meant to help iterate through
|
|
|
|
* the array, so you send it the number of backups you've already
|
|
|
|
* tried and the last backup index you used.
|
|
|
|
*
|
|
|
|
* this returns -1 when it has tried all the backups
|
|
|
|
*/
|
|
|
|
static noinline int next_root_backup(struct btrfs_fs_info *info,
|
|
|
|
struct btrfs_super_block *super,
|
|
|
|
int *num_backups_tried, int *backup_index)
|
|
|
|
{
|
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
int newest = *backup_index;
|
|
|
|
|
|
|
|
if (*num_backups_tried == 0) {
|
|
|
|
u64 gen = btrfs_super_generation(super);
|
|
|
|
|
|
|
|
newest = find_newest_super_backup(info, gen);
|
|
|
|
if (newest == -1)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
*backup_index = newest;
|
|
|
|
*num_backups_tried = 1;
|
|
|
|
} else if (*num_backups_tried == BTRFS_NUM_BACKUP_ROOTS) {
|
|
|
|
/* we've tried all the backups, all done */
|
|
|
|
return -1;
|
|
|
|
} else {
|
|
|
|
/* jump to the next oldest backup */
|
|
|
|
newest = (*backup_index + BTRFS_NUM_BACKUP_ROOTS - 1) %
|
|
|
|
BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
*backup_index = newest;
|
|
|
|
*num_backups_tried += 1;
|
|
|
|
}
|
|
|
|
root_backup = super->super_roots + newest;
|
|
|
|
|
|
|
|
btrfs_set_super_generation(super,
|
|
|
|
btrfs_backup_tree_root_gen(root_backup));
|
|
|
|
btrfs_set_super_root(super, btrfs_backup_tree_root(root_backup));
|
|
|
|
btrfs_set_super_root_level(super,
|
|
|
|
btrfs_backup_tree_root_level(root_backup));
|
|
|
|
btrfs_set_super_bytes_used(super, btrfs_backup_bytes_used(root_backup));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* fixme: the total bytes and num_devices need to match or we
|
|
|
|
* need a fsck
|
|
|
|
*/
|
|
|
|
btrfs_set_super_total_bytes(super, btrfs_backup_total_bytes(root_backup));
|
|
|
|
btrfs_set_super_num_devices(super, btrfs_backup_num_devices(root_backup));
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-03-17 10:10:31 +08:00
|
|
|
/* helper to cleanup workers */
|
|
|
|
static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
2014-02-28 10:46:14 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->fixup_workers);
|
2014-02-28 10:46:07 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->delalloc_workers);
|
2014-02-28 10:46:06 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->workers);
|
2014-02-28 10:46:10 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->endio_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_raid56_workers);
|
2014-09-12 18:44:03 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->endio_repair_workers);
|
2014-02-28 10:46:11 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->rmw_workers);
|
2014-02-28 10:46:10 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->endio_write_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_freespace_worker);
|
2014-02-28 10:46:08 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->submit_workers);
|
2014-02-28 10:46:15 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->delayed_workers);
|
2014-02-28 10:46:12 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->caching_workers);
|
2014-02-28 10:46:13 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->readahead_workers);
|
2014-02-28 10:46:09 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->flush_workers);
|
2014-02-28 10:46:16 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->qgroup_rescan_workers);
|
2014-05-23 07:18:52 +08:00
|
|
|
btrfs_destroy_workqueue(fs_info->extent_workers);
|
2017-02-05 01:12:00 +08:00
|
|
|
/*
|
|
|
|
* Now that all other work queues are destroyed, we can safely destroy
|
|
|
|
* the queues used for metadata I/O, since tasks from those other work
|
|
|
|
* queues can do metadata I/O operations.
|
|
|
|
*/
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_meta_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_meta_write_workers);
|
2013-03-17 10:10:31 +08:00
|
|
|
}
|
|
|
|
|
2013-10-31 05:15:20 +08:00
|
|
|
static void free_root_extent_buffers(struct btrfs_root *root)
|
|
|
|
{
|
|
|
|
if (root) {
|
|
|
|
free_extent_buffer(root->node);
|
|
|
|
free_extent_buffer(root->commit_root);
|
|
|
|
root->node = NULL;
|
|
|
|
root->commit_root = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-11-04 03:17:42 +08:00
|
|
|
/* helper to cleanup tree roots */
|
|
|
|
static void free_root_pointers(struct btrfs_fs_info *info, int chunk_root)
|
|
|
|
{
|
2013-10-31 05:15:20 +08:00
|
|
|
free_root_extent_buffers(info->tree_root);
|
2013-05-18 02:06:51 +08:00
|
|
|
|
2013-10-31 05:15:20 +08:00
|
|
|
free_root_extent_buffers(info->dev_root);
|
|
|
|
free_root_extent_buffers(info->extent_root);
|
|
|
|
free_root_extent_buffers(info->csum_root);
|
|
|
|
free_root_extent_buffers(info->quota_root);
|
|
|
|
free_root_extent_buffers(info->uuid_root);
|
|
|
|
if (chunk_root)
|
|
|
|
free_root_extent_buffers(info->chunk_root);
|
2015-09-30 11:50:38 +08:00
|
|
|
free_root_extent_buffers(info->free_space_root);
|
2011-11-04 03:17:42 +08:00
|
|
|
}
|
|
|
|
|
2014-05-08 05:06:09 +08:00
|
|
|
void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info)
|
2013-04-25 04:35:41 +08:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct btrfs_root *gang[8];
|
|
|
|
int i;
|
|
|
|
|
|
|
|
while (!list_empty(&fs_info->dead_roots)) {
|
|
|
|
gang[0] = list_entry(fs_info->dead_roots.next,
|
|
|
|
struct btrfs_root, root_list);
|
|
|
|
list_del(&gang[0]->root_list);
|
|
|
|
|
2014-04-02 19:51:05 +08:00
|
|
|
if (test_bit(BTRFS_ROOT_IN_RADIX, &gang[0]->state)) {
|
2013-05-15 15:48:19 +08:00
|
|
|
btrfs_drop_and_free_fs_root(fs_info, gang[0]);
|
2013-04-25 04:35:41 +08:00
|
|
|
} else {
|
|
|
|
free_extent_buffer(gang[0]->node);
|
|
|
|
free_extent_buffer(gang[0]->commit_root);
|
2013-05-15 15:48:20 +08:00
|
|
|
btrfs_put_fs_root(gang[0]);
|
2013-04-25 04:35:41 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
|
|
|
|
(void **)gang, 0,
|
|
|
|
ARRAY_SIZE(gang));
|
|
|
|
if (!ret)
|
|
|
|
break;
|
|
|
|
for (i = 0; i < ret; i++)
|
2013-05-15 15:48:19 +08:00
|
|
|
btrfs_drop_and_free_fs_root(fs_info, gang[i]);
|
2013-04-25 04:35:41 +08:00
|
|
|
}
|
2014-01-13 19:53:53 +08:00
|
|
|
|
|
|
|
if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
|
|
|
|
btrfs_free_log_root_tree(NULL, fs_info);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_destroy_pinned_extent(fs_info, fs_info->pinned_extents);
|
2014-01-13 19:53:53 +08:00
|
|
|
}
|
2013-04-25 04:35:41 +08:00
|
|
|
}
|
2011-11-04 03:17:42 +08:00
|
|
|
|
2014-08-02 07:12:38 +08:00
|
|
|
static void btrfs_init_scrub(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
mutex_init(&fs_info->scrub_lock);
|
|
|
|
atomic_set(&fs_info->scrubs_running, 0);
|
|
|
|
atomic_set(&fs_info->scrub_pause_req, 0);
|
|
|
|
atomic_set(&fs_info->scrubs_paused, 0);
|
|
|
|
atomic_set(&fs_info->scrub_cancel_req, 0);
|
|
|
|
init_waitqueue_head(&fs_info->scrub_pause_wait);
|
2019-01-30 14:45:02 +08:00
|
|
|
refcount_set(&fs_info->scrub_workers_refcnt, 0);
|
2014-08-02 07:12:38 +08:00
|
|
|
}
|
|
|
|
|
2014-08-02 07:12:39 +08:00
|
|
|
static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
spin_lock_init(&fs_info->balance_lock);
|
|
|
|
mutex_init(&fs_info->balance_mutex);
|
|
|
|
atomic_set(&fs_info->balance_pause_req, 0);
|
|
|
|
atomic_set(&fs_info->balance_cancel_req, 0);
|
|
|
|
fs_info->balance_ctl = NULL;
|
|
|
|
init_waitqueue_head(&fs_info->balance_wait_q);
|
|
|
|
}
|
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
|
2014-08-02 07:12:40 +08:00
|
|
|
{
|
2016-06-23 06:54:24 +08:00
|
|
|
struct inode *inode = fs_info->btree_inode;
|
|
|
|
|
|
|
|
inode->i_ino = BTRFS_BTREE_INODE_OBJECTID;
|
|
|
|
set_nlink(inode, 1);
|
2014-08-02 07:12:40 +08:00
|
|
|
/*
|
|
|
|
* we set the i_size on the btree inode to the max possible int.
|
|
|
|
* the real end of the address space is determined by all of
|
|
|
|
* the devices in the system
|
|
|
|
*/
|
2016-06-23 06:54:24 +08:00
|
|
|
inode->i_size = OFFSET_MAX;
|
|
|
|
inode->i_mapping->a_ops = &btree_aops;
|
2014-08-02 07:12:40 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
RB_CLEAR_NODE(&BTRFS_I(inode)->rb_node);
|
2019-03-01 10:47:59 +08:00
|
|
|
extent_io_tree_init(fs_info, &BTRFS_I(inode)->io_tree,
|
|
|
|
IO_TREE_INODE_IO, inode);
|
2019-03-11 22:58:30 +08:00
|
|
|
BTRFS_I(inode)->io_tree.track_uptodate = false;
|
2016-06-23 06:54:24 +08:00
|
|
|
extent_map_tree_init(&BTRFS_I(inode)->extent_tree);
|
2014-08-02 07:12:40 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
BTRFS_I(inode)->io_tree.ops = &btree_extent_io_ops;
|
2014-08-02 07:12:40 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
BTRFS_I(inode)->root = fs_info->tree_root;
|
|
|
|
memset(&BTRFS_I(inode)->location, 0, sizeof(struct btrfs_key));
|
|
|
|
set_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags);
|
|
|
|
btrfs_insert_inode_hash(inode);
|
2014-08-02 07:12:40 +08:00
|
|
|
}
|
|
|
|
|
2014-08-02 07:12:41 +08:00
|
|
|
static void btrfs_init_dev_replace_locks(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
mutex_init(&fs_info->dev_replace.lock_finishing_cancel_unmount);
|
2018-04-05 07:29:24 +08:00
|
|
|
init_rwsem(&fs_info->dev_replace.rwsem);
|
2018-04-05 07:04:49 +08:00
|
|
|
init_waitqueue_head(&fs_info->dev_replace.replace_wait);
|
2014-08-02 07:12:41 +08:00
|
|
|
}
|
|
|
|
|
2014-08-02 07:12:42 +08:00
|
|
|
static void btrfs_init_qgroup(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
spin_lock_init(&fs_info->qgroup_lock);
|
|
|
|
mutex_init(&fs_info->qgroup_ioctl_lock);
|
|
|
|
fs_info->qgroup_tree = RB_ROOT;
|
|
|
|
INIT_LIST_HEAD(&fs_info->dirty_qgroups);
|
|
|
|
fs_info->qgroup_seq = 1;
|
|
|
|
fs_info->qgroup_ulist = NULL;
|
2016-08-16 00:10:33 +08:00
|
|
|
fs_info->qgroup_rescan_running = false;
|
2014-08-02 07:12:42 +08:00
|
|
|
mutex_init(&fs_info->qgroup_rescan_lock);
|
|
|
|
}
|
|
|
|
|
2015-02-16 23:29:26 +08:00
|
|
|
static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info,
|
|
|
|
struct btrfs_fs_devices *fs_devices)
|
|
|
|
{
|
2018-02-13 17:50:42 +08:00
|
|
|
u32 max_active = fs_info->thread_pool_size;
|
2015-02-17 01:34:01 +08:00
|
|
|
unsigned int flags = WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_UNBOUND;
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
fs_info->workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "worker",
|
|
|
|
flags | WQ_HIGHPRI, max_active, 16);
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
fs_info->delalloc_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "delalloc",
|
|
|
|
flags, max_active, 2);
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
fs_info->flush_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "flush_delalloc",
|
|
|
|
flags, max_active, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
fs_info->caching_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "cache", flags, max_active, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* a higher idle thresh on the submit workers makes it much more
|
|
|
|
* likely that bios will be sent down in a sane order to the
|
|
|
|
* devices
|
|
|
|
*/
|
|
|
|
fs_info->submit_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "submit", flags,
|
2015-02-16 23:29:26 +08:00
|
|
|
min_t(u64, fs_devices->num_devices,
|
|
|
|
max_active), 64);
|
|
|
|
|
|
|
|
fs_info->fixup_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "fixup", flags, 1, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* endios are largely parallel and should have a very
|
|
|
|
* low idle thresh
|
|
|
|
*/
|
|
|
|
fs_info->endio_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio", flags, max_active, 4);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_meta_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio-meta", flags,
|
|
|
|
max_active, 4);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_meta_write_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio-meta-write", flags,
|
|
|
|
max_active, 2);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_raid56_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio-raid56", flags,
|
|
|
|
max_active, 4);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_repair_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio-repair", flags, 1, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->rmw_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "rmw", flags, max_active, 2);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_write_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "endio-write", flags,
|
|
|
|
max_active, 2);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->endio_freespace_worker =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "freespace-write", flags,
|
|
|
|
max_active, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->delayed_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "delayed-meta", flags,
|
|
|
|
max_active, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->readahead_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "readahead", flags,
|
|
|
|
max_active, 2);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->qgroup_rescan_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "qgroup-rescan", flags, 1, 0);
|
2015-02-16 23:29:26 +08:00
|
|
|
fs_info->extent_workers =
|
2016-06-10 04:22:11 +08:00
|
|
|
btrfs_alloc_workqueue(fs_info, "extent-refs", flags,
|
2015-02-16 23:29:26 +08:00
|
|
|
min_t(u64, fs_devices->num_devices,
|
|
|
|
max_active), 8);
|
|
|
|
|
|
|
|
if (!(fs_info->workers && fs_info->delalloc_workers &&
|
|
|
|
fs_info->submit_workers && fs_info->flush_workers &&
|
|
|
|
fs_info->endio_workers && fs_info->endio_meta_workers &&
|
|
|
|
fs_info->endio_meta_write_workers &&
|
|
|
|
fs_info->endio_repair_workers &&
|
|
|
|
fs_info->endio_write_workers && fs_info->endio_raid56_workers &&
|
|
|
|
fs_info->endio_freespace_worker && fs_info->rmw_workers &&
|
|
|
|
fs_info->caching_workers && fs_info->readahead_workers &&
|
|
|
|
fs_info->fixup_workers && fs_info->delayed_workers &&
|
|
|
|
fs_info->extent_workers &&
|
|
|
|
fs_info->qgroup_rescan_workers)) {
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
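The workqueue setup above is a deliberate pattern: every queue is allocated first and a single combined NULL test decides success, so there is exactly one -ENOMEM exit and the caller's failure path tears down whatever did get allocated. A minimal user-space sketch of that allocate-all-then-check pattern, assuming a hypothetical alloc_queue() in place of btrfs_alloc_workqueue():

#include <stdlib.h>

struct queue { int unused; };

/* hypothetical stand-in for btrfs_alloc_workqueue() */
static struct queue *alloc_queue(void)
{
	return calloc(1, sizeof(struct queue));
}

static int init_queues(struct queue **a, struct queue **b, struct queue **c)
{
	*a = alloc_queue();
	*b = alloc_queue();
	*c = alloc_queue();
	/* one combined check; partial allocations are left for the caller to free */
	if (!(*a && *b && *c))
		return -1;	/* stands in for -ENOMEM */
	return 0;
}

int main(void)
{
	struct queue *a, *b, *c;

	if (init_queues(&a, &b, &c))
		return 1;
	free(a); free(b); free(c);
	return 0;
}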
|
|
|
|
|
2014-08-02 07:12:46 +08:00
|
|
|
static int btrfs_replay_log(struct btrfs_fs_info *fs_info,
|
|
|
|
struct btrfs_fs_devices *fs_devices)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct btrfs_root *log_tree_root;
|
|
|
|
struct btrfs_super_block *disk_super = fs_info->super_copy;
|
|
|
|
u64 bytenr = btrfs_super_log_root(disk_super);
|
2018-03-29 09:08:11 +08:00
|
|
|
int level = btrfs_super_log_root_level(disk_super);
|
2014-08-02 07:12:46 +08:00
|
|
|
|
|
|
|
if (fs_devices->rw_devices == 0) {
|
2015-10-08 17:37:06 +08:00
|
|
|
btrfs_warn(fs_info, "log replay required on RO media");
|
2014-08-02 07:12:46 +08:00
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
2016-02-11 18:01:55 +08:00
|
|
|
log_tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
|
2014-08-02 07:12:46 +08:00
|
|
|
if (!log_tree_root)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
__setup_root(log_tree_root, fs_info, BTRFS_TREE_LOG_OBJECTID);
|
2014-08-02 07:12:46 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
log_tree_root->node = read_tree_block(fs_info, bytenr,
|
2018-03-29 09:08:11 +08:00
|
|
|
fs_info->generation + 1,
|
|
|
|
level, NULL);
|
2015-05-25 17:30:15 +08:00
|
|
|
if (IS_ERR(log_tree_root->node)) {
|
2015-10-08 17:37:06 +08:00
|
|
|
btrfs_warn(fs_info, "failed to read log tree");
|
2015-06-11 14:16:44 +08:00
|
|
|
ret = PTR_ERR(log_tree_root->node);
|
2015-05-25 17:30:15 +08:00
|
|
|
kfree(log_tree_root);
|
2015-06-11 14:16:44 +08:00
|
|
|
return ret;
|
2015-05-25 17:30:15 +08:00
|
|
|
} else if (!extent_buffer_uptodate(log_tree_root->node)) {
|
2015-10-08 17:37:06 +08:00
|
|
|
btrfs_err(fs_info, "failed to read log tree");
|
2014-08-02 07:12:46 +08:00
|
|
|
free_extent_buffer(log_tree_root->node);
|
|
|
|
kfree(log_tree_root);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
/* returns with log_tree_root freed on success */
|
|
|
|
ret = btrfs_recover_log_trees(log_tree_root);
|
|
|
|
if (ret) {
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_handle_fs_error(fs_info, ret,
|
|
|
|
"Failed to recover log tree");
|
2014-08-02 07:12:46 +08:00
|
|
|
free_extent_buffer(log_tree_root->node);
|
|
|
|
kfree(log_tree_root);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2017-07-17 15:45:34 +08:00
|
|
|
if (sb_rdonly(fs_info->sb)) {
|
2016-06-22 09:16:51 +08:00
|
|
|
ret = btrfs_commit_super(fs_info);
|
2014-08-02 07:12:46 +08:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
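btrfs_replay_log() has to distinguish two failure modes of read_tree_block(): an ERR_PTR-encoded errno when the read could not complete, and a buffer that came back but is not up to date. A self-contained sketch of the pointer-encoded error convention, with IS_ERR/PTR_ERR modeled on the kernel's err.h and a hypothetical read_block():

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO	4095
#define IS_ERR(p)	((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)
#define ERR_PTR(e)	((void *)(long)(e))
#define PTR_ERR(p)	((long)(p))

struct block { int uptodate; };

/* hypothetical reader: may fail outright or return stale contents */
static struct block *read_block(int fail_mode)
{
	static struct block b;

	if (fail_mode == 1)
		return ERR_PTR(-EIO);
	b.uptodate = (fail_mode != 2);
	return &b;
}

int main(void)
{
	struct block *node = read_block(0);

	if (IS_ERR(node))
		return (int)-PTR_ERR(node);	/* the read itself failed */
	if (!node->uptodate)
		return EIO;			/* read "succeeded", data is bad */
	printf("log tree node ok\n");
	return 0;
}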
|
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
static int btrfs_read_roots(struct btrfs_fs_info *fs_info)
|
2014-08-02 07:12:45 +08:00
|
|
|
{
|
2016-06-22 09:16:51 +08:00
|
|
|
struct btrfs_root *tree_root = fs_info->tree_root;
|
2015-02-17 01:44:34 +08:00
|
|
|
struct btrfs_root *root;
|
2014-08-02 07:12:45 +08:00
|
|
|
struct btrfs_key location;
|
|
|
|
int ret;
|
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
BUG_ON(!fs_info->tree_root);
|
|
|
|
|
2014-08-02 07:12:45 +08:00
|
|
|
location.objectid = BTRFS_EXTENT_TREE_OBJECTID;
|
|
|
|
location.type = BTRFS_ROOT_ITEM_KEY;
|
|
|
|
location.offset = 0;
|
|
|
|
|
2015-02-17 01:44:34 +08:00
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
2018-03-29 06:11:45 +08:00
|
|
|
if (IS_ERR(root)) {
|
|
|
|
ret = PTR_ERR(root);
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-17 01:44:34 +08:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
|
|
|
fs_info->extent_root = root;
|
2014-08-02 07:12:45 +08:00
|
|
|
|
|
|
|
location.objectid = BTRFS_DEV_TREE_OBJECTID;
|
2015-02-17 01:44:34 +08:00
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
2018-03-29 06:11:45 +08:00
|
|
|
if (IS_ERR(root)) {
|
|
|
|
ret = PTR_ERR(root);
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-17 01:44:34 +08:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
|
|
|
fs_info->dev_root = root;
|
2014-08-02 07:12:45 +08:00
|
|
|
btrfs_init_devices_late(fs_info);
|
|
|
|
|
|
|
|
location.objectid = BTRFS_CSUM_TREE_OBJECTID;
|
2015-02-17 01:44:34 +08:00
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
2018-03-29 06:11:45 +08:00
|
|
|
if (IS_ERR(root)) {
|
|
|
|
ret = PTR_ERR(root);
|
|
|
|
goto out;
|
|
|
|
}
|
2015-02-17 01:44:34 +08:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
|
|
|
fs_info->csum_root = root;
|
2014-08-02 07:12:45 +08:00
|
|
|
|
|
|
|
location.objectid = BTRFS_QUOTA_TREE_OBJECTID;
|
2015-02-17 01:44:34 +08:00
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
|
|
|
if (!IS_ERR(root)) {
|
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
|
2015-02-17 01:44:34 +08:00
|
|
|
fs_info->quota_root = root;
|
2014-08-02 07:12:45 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
location.objectid = BTRFS_UUID_TREE_OBJECTID;
|
2015-02-17 01:44:34 +08:00
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
|
|
|
if (IS_ERR(root)) {
|
|
|
|
ret = PTR_ERR(root);
|
2014-08-02 07:12:45 +08:00
|
|
|
if (ret != -ENOENT)
|
2018-03-29 06:11:45 +08:00
|
|
|
goto out;
|
2014-08-02 07:12:45 +08:00
|
|
|
} else {
|
2015-02-17 01:44:34 +08:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
|
|
|
fs_info->uuid_root = root;
|
2014-08-02 07:12:45 +08:00
|
|
|
}
|
|
|
|
|
2015-09-30 11:50:38 +08:00
|
|
|
if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
|
|
|
|
location.objectid = BTRFS_FREE_SPACE_TREE_OBJECTID;
|
|
|
|
root = btrfs_read_tree_root(tree_root, &location);
|
2018-03-29 06:11:45 +08:00
|
|
|
if (IS_ERR(root)) {
|
|
|
|
ret = PTR_ERR(root);
|
|
|
|
goto out;
|
|
|
|
}
|
2015-09-30 11:50:38 +08:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
|
|
|
fs_info->free_space_root = root;
|
|
|
|
}
|
|
|
|
|
2014-08-02 07:12:45 +08:00
|
|
|
return 0;
|
2018-03-29 06:11:45 +08:00
|
|
|
out:
|
|
|
|
btrfs_warn(fs_info, "failed to read root (objectid=%llu): %d",
|
|
|
|
location.objectid, ret);
|
|
|
|
return ret;
|
2014-08-02 07:12:45 +08:00
|
|
|
}
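Each root above is fetched with the same sequence: set location.objectid, read the root, propagate the error, set BTRFS_ROOT_TRACK_DIRTY, store the pointer into fs_info. A hedged sketch of how a helper could factor that repetition; read_and_track() and the simplified types are invented for illustration, not btrfs APIs:

#include <stdlib.h>

struct root { unsigned long state; };
#define TRACK_DIRTY	(1UL << 0)

/* hypothetical stand-in for btrfs_read_tree_root() */
static struct root *read_tree_root(unsigned long long objectid)
{
	(void)objectid;
	return calloc(1, sizeof(struct root));
}

static int read_and_track(unsigned long long objectid, struct root **slot)
{
	struct root *root = read_tree_root(objectid);

	if (!root)
		return -1;	/* the kernel code propagates PTR_ERR(root) */
	root->state |= TRACK_DIRTY;
	*slot = root;
	return 0;
}

int main(void)
{
	struct root *extent_root;

	/* 2 is BTRFS_EXTENT_TREE_OBJECTID */
	return read_and_track(2ULL, &extent_root) ? 1 : 0;
}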
|
|
|
|
|
2018-05-11 13:35:26 +08:00
|
|
|
/*
|
|
|
|
* Real super block validation
|
|
|
|
* NOTE: super csum type and incompat features will not be checked here.
|
|
|
|
*
|
|
|
|
* @sb: super block to check
|
|
|
|
* @mirror_num: which super block copy to check the bytenr of:
|
|
|
|
* 0 the primary (1st) sb
|
|
|
|
* 1, 2 2nd and 3rd backup copy
|
|
|
|
* -1 skip bytenr check
|
|
|
|
*/
|
|
|
|
static int validate_super(struct btrfs_fs_info *fs_info,
|
|
|
|
struct btrfs_super_block *sb, int mirror_num)
|
2018-05-11 13:35:25 +08:00
|
|
|
{
|
|
|
|
u64 nodesize = btrfs_super_nodesize(sb);
|
|
|
|
u64 sectorsize = btrfs_super_sectorsize(sb);
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (btrfs_super_magic(sb) != BTRFS_MAGIC) {
|
|
|
|
btrfs_err(fs_info, "no valid FS found");
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP) {
|
|
|
|
btrfs_err(fs_info, "unrecognized or unsupported super flag: %llu",
|
|
|
|
btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_root_level(sb) >= BTRFS_MAX_LEVEL) {
|
|
|
|
btrfs_err(fs_info, "tree_root level too big: %d >= %d",
|
|
|
|
btrfs_super_root_level(sb), BTRFS_MAX_LEVEL);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_chunk_root_level(sb) >= BTRFS_MAX_LEVEL) {
|
|
|
|
btrfs_err(fs_info, "chunk_root level too big: %d >= %d",
|
|
|
|
btrfs_super_chunk_root_level(sb), BTRFS_MAX_LEVEL);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_log_root_level(sb) >= BTRFS_MAX_LEVEL) {
|
|
|
|
btrfs_err(fs_info, "log_root level too big: %d >= %d",
|
|
|
|
btrfs_super_log_root_level(sb), BTRFS_MAX_LEVEL);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check sectorsize and nodesize first; other checks will need them.
|
|
|
|
* Check all possible sectorsizes (4K, 8K, 16K, 32K, 64K) here.
|
|
|
|
*/
|
|
|
|
if (!is_power_of_2(sectorsize) || sectorsize < 4096 ||
|
|
|
|
sectorsize > BTRFS_MAX_METADATA_BLOCKSIZE) {
|
|
|
|
btrfs_err(fs_info, "invalid sectorsize %llu", sectorsize);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
/* Only PAGE_SIZE is supported for now */
|
|
|
|
if (sectorsize != PAGE_SIZE) {
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"sectorsize %llu not supported yet, only support %lu",
|
|
|
|
sectorsize, PAGE_SIZE);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (!is_power_of_2(nodesize) || nodesize < sectorsize ||
|
|
|
|
nodesize > BTRFS_MAX_METADATA_BLOCKSIZE) {
|
|
|
|
btrfs_err(fs_info, "invalid nodesize %llu", nodesize);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (nodesize != le32_to_cpu(sb->__unused_leafsize)) {
|
|
|
|
btrfs_err(fs_info, "invalid leafsize %u, should be %llu",
|
|
|
|
le32_to_cpu(sb->__unused_leafsize), nodesize);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Root alignment check */
|
|
|
|
if (!IS_ALIGNED(btrfs_super_root(sb), sectorsize)) {
|
|
|
|
btrfs_warn(fs_info, "tree_root block unaligned: %llu",
|
|
|
|
btrfs_super_root(sb));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (!IS_ALIGNED(btrfs_super_chunk_root(sb), sectorsize)) {
|
|
|
|
btrfs_warn(fs_info, "chunk_root block unaligned: %llu",
|
|
|
|
btrfs_super_chunk_root(sb));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (!IS_ALIGNED(btrfs_super_log_root(sb), sectorsize)) {
|
|
|
|
btrfs_warn(fs_info, "log_root block unaligned: %llu",
|
|
|
|
btrfs_super_log_root(sb));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
2018-10-30 22:43:24 +08:00
|
|
|
if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,
|
2018-10-30 22:43:23 +08:00
|
|
|
BTRFS_FSID_SIZE) != 0) {
|
2018-05-11 13:35:25 +08:00
|
|
|
btrfs_err(fs_info,
|
2018-10-30 22:43:23 +08:00
|
|
|
"dev_item UUID does not match metadata fsid: %pU != %pU",
|
2018-10-30 22:43:24 +08:00
|
|
|
fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid);
|
2018-05-11 13:35:25 +08:00
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Hint to catch really bogus numbers, bitflips and the like; more exact checks are
|
|
|
|
* done later
|
|
|
|
*/
|
|
|
|
if (btrfs_super_bytes_used(sb) < 6 * btrfs_super_nodesize(sb)) {
|
|
|
|
btrfs_err(fs_info, "bytes_used is too small %llu",
|
|
|
|
btrfs_super_bytes_used(sb));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (!is_power_of_2(btrfs_super_stripesize(sb))) {
|
|
|
|
btrfs_err(fs_info, "invalid stripesize %u",
|
|
|
|
btrfs_super_stripesize(sb));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_num_devices(sb) > (1UL << 31))
|
|
|
|
btrfs_warn(fs_info, "suspicious number of devices: %llu",
|
|
|
|
btrfs_super_num_devices(sb));
|
|
|
|
if (btrfs_super_num_devices(sb) == 0) {
|
|
|
|
btrfs_err(fs_info, "number of devices is 0");
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
2018-05-11 13:35:26 +08:00
|
|
|
if (mirror_num >= 0 &&
|
|
|
|
btrfs_super_bytenr(sb) != btrfs_sb_offset(mirror_num)) {
|
2018-05-11 13:35:25 +08:00
|
|
|
btrfs_err(fs_info, "super offset mismatch %llu != %u",
|
|
|
|
btrfs_super_bytenr(sb), BTRFS_SUPER_INFO_OFFSET);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Obvious sys_chunk_array corruption checks: the array must hold at least one key
|
|
|
|
* and one chunk
|
|
|
|
*/
|
|
|
|
if (btrfs_super_sys_array_size(sb) > BTRFS_SYSTEM_CHUNK_ARRAY_SIZE) {
|
|
|
|
btrfs_err(fs_info, "system chunk array too big %u > %u",
|
|
|
|
btrfs_super_sys_array_size(sb),
|
|
|
|
BTRFS_SYSTEM_CHUNK_ARRAY_SIZE);
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
if (btrfs_super_sys_array_size(sb) < sizeof(struct btrfs_disk_key)
|
|
|
|
+ sizeof(struct btrfs_chunk)) {
|
|
|
|
btrfs_err(fs_info, "system chunk array too small %u < %zu",
|
|
|
|
btrfs_super_sys_array_size(sb),
|
|
|
|
sizeof(struct btrfs_disk_key)
|
|
|
|
+ sizeof(struct btrfs_chunk));
|
|
|
|
ret = -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The generation is a global counter; we'll trust it more than the others,
|
|
|
|
* but it's still possible that it's the one that's wrong.
|
|
|
|
*/
|
|
|
|
if (btrfs_super_generation(sb) < btrfs_super_chunk_root_generation(sb))
|
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"suspicious: generation < chunk_root_generation: %llu < %llu",
|
|
|
|
btrfs_super_generation(sb),
|
|
|
|
btrfs_super_chunk_root_generation(sb));
|
|
|
|
if (btrfs_super_generation(sb) < btrfs_super_cache_generation(sb)
|
|
|
|
&& btrfs_super_cache_generation(sb) != (u64)-1)
|
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"suspicious: generation < cache_generation: %llu < %llu",
|
|
|
|
btrfs_super_generation(sb),
|
|
|
|
btrfs_super_cache_generation(sb));
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
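Note the validation style: validate_super() never bails out at the first bad field; it latches ret = -EINVAL and keeps checking, so a single mount attempt logs every problem it can find. A minimal sketch of the same accumulate-and-continue pattern with made-up fields:

#include <stdio.h>

static int validate(unsigned int sectorsize, unsigned int nodesize)
{
	int ret = 0;

	if (sectorsize == 0 || (sectorsize & (sectorsize - 1))) {
		fprintf(stderr, "invalid sectorsize %u\n", sectorsize);
		ret = -1;	/* latch the error ... */
	}
	if (nodesize < sectorsize) {
		fprintf(stderr, "invalid nodesize %u\n", nodesize);
		ret = -1;	/* ... but keep checking the rest */
	}
	return ret;
}

int main(void)
{
	return validate(4096, 16384) ? 1 : 0;
}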
|
|
|
|
|
2018-05-11 13:35:26 +08:00
|
|
|
/*
|
|
|
|
* Validation of super block at mount time.
|
|
|
|
* Some checks, like csum type and incompat flags, were already done early at
|
|
|
|
* mount time and will be skipped.
|
|
|
|
*/
|
|
|
|
static int btrfs_validate_mount_super(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
return validate_super(fs_info, fs_info->super_copy, 0);
|
|
|
|
}
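The mirror_num values documented above map to fixed on-disk offsets: the primary super block sits at 64KiB, the backups at 64MiB and 256GiB. A runnable sketch of that mapping, with the constants reproduced here for illustration (in the kernel this is btrfs_sb_offset()):

#include <stdio.h>

#define SUPER_INFO_OFFSET	(64 * 1024ULL)	/* primary sb at 64KiB */
#define SUPER_MIRROR_SHIFT	12

static unsigned long long sb_offset(int mirror)
{
	unsigned long long start = 16 * 1024ULL;

	if (mirror)
		return start << (SUPER_MIRROR_SHIFT * mirror);	/* 64MiB, 256GiB */
	return SUPER_INFO_OFFSET;
}

int main(void)
{
	for (int mirror = 0; mirror < 3; mirror++)
		printf("super copy %d at byte %llu\n", mirror, sb_offset(mirror));
	return 0;
}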
|
|
|
|
|
btrfs: Do super block verification before writing it to disk
There are already 2 reports about strangely corrupted super blocks,
where the csum still matches but extra garbage gets slipped into the super block.
The corruption looks like:
------
superblock: bytenr=65536, device=/dev/sdc1
---------------------------------------------------------
csum_type 41700 (INVALID)
csum 0x3b252d3a [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
...
incompat_flags 0x5b22400000000169
( MIXED_BACKREF |
COMPRESS_LZO |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA |
unknown flag: 0x5b22400000000000 )
...
------
Or
------
superblock: bytenr=65536, device=/dev/mapper/x
---------------------------------------------------------
csum_type 35355 (INVALID)
csum_size 32
csum 0xf0dbeddd [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
...
incompat_flags 0x176d200000000169
( MIXED_BACKREF |
COMPRESS_LZO |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA |
unknown flag: 0x176d200000000000 )
------
Obviously, csum_type and incompat_flags get some garbage, but its csum
still matches, which means the kernel calculates the csum based on corrupted
super block memory.
And after manually fixing these values, the filesystem is completely
healthy without any problem exposed by btrfs check.
Although the cause is still unknown, at least detect it and prevent further
corruption.
Both reports have the same symptoms: a 4-byte overwrite at offset 192 of
the superblock. The superblock structure is not allocated or
freed and stays in the memory for the whole filesystem lifetime, so it's
not a use-after-free kind of error on someone else's leaked page.
A vague hint at the probable cause is the mention of other system freezes
related to graphics card drivers.
Reported-by: Ken Swenson <flat@imo.uto.moe>
Reported-by: Ben Parsons <9parsonsb@gmail.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add brief analysis of the reports ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-05-11 13:35:27 +08:00
|
|
|
/*
|
|
|
|
* Validation of super block at write time.
|
|
|
|
* Some checks like bytenr check will be skipped as their values will be
|
|
|
|
* overwritten soon.
|
|
|
|
* Extra checks like csum type and incompat flags will be done here.
|
|
|
|
*/
|
|
|
|
static int btrfs_validate_write_super(struct btrfs_fs_info *fs_info,
|
|
|
|
struct btrfs_super_block *sb)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = validate_super(fs_info, sb, -1);
|
|
|
|
if (ret < 0)
|
|
|
|
goto out;
|
|
|
|
if (btrfs_super_csum_type(sb) != BTRFS_CSUM_TYPE_CRC32) {
|
|
|
|
ret = -EUCLEAN;
|
|
|
|
btrfs_err(fs_info, "invalid csum type, has %u want %u",
|
|
|
|
btrfs_super_csum_type(sb), BTRFS_CSUM_TYPE_CRC32);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (btrfs_super_incompat_flags(sb) & ~BTRFS_FEATURE_INCOMPAT_SUPP) {
|
|
|
|
ret = -EUCLEAN;
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"invalid incompat flags, has 0x%llx valid mask 0x%llx",
|
|
|
|
btrfs_super_incompat_flags(sb),
|
|
|
|
(unsigned long long)BTRFS_FEATURE_INCOMPAT_SUPP);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
out:
|
|
|
|
if (ret < 0)
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"super block corruption detected before writing it to disk");
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2011-11-17 14:10:02 +08:00
|
|
|
int open_ctree(struct super_block *sb,
|
|
|
|
struct btrfs_fs_devices *fs_devices,
|
|
|
|
char *options)
|
2007-03-21 23:12:56 +08:00
|
|
|
{
|
2007-10-16 04:15:53 +08:00
|
|
|
u32 sectorsize;
|
|
|
|
u32 nodesize;
|
2007-12-01 00:30:34 +08:00
|
|
|
u32 stripesize;
|
2008-10-30 02:49:05 +08:00
|
|
|
u64 generation;
|
2008-12-02 19:36:08 +08:00
|
|
|
u64 features;
|
2008-11-18 10:02:50 +08:00
|
|
|
struct btrfs_key location;
|
2008-05-07 23:43:44 +08:00
|
|
|
struct buffer_head *bh;
|
2011-11-09 06:08:15 +08:00
|
|
|
struct btrfs_super_block *disk_super;
|
2011-11-18 04:40:49 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(sb);
|
2011-11-18 04:57:57 +08:00
|
|
|
struct btrfs_root *tree_root;
|
2011-11-09 06:08:15 +08:00
|
|
|
struct btrfs_root *chunk_root;
|
2007-02-02 22:18:22 +08:00
|
|
|
int ret;
|
2008-04-01 23:21:34 +08:00
|
|
|
int err = -EINVAL;
|
2011-11-04 03:17:42 +08:00
|
|
|
int num_backups_tried = 0;
|
|
|
|
int backup_index = 0;
|
2016-09-23 08:24:22 +08:00
|
|
|
int clear_free_space_tree = 0;
|
2018-03-29 09:08:11 +08:00
|
|
|
int level;
|
2008-06-12 09:47:56 +08:00
|
|
|
|
2016-02-11 18:01:55 +08:00
|
|
|
tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
|
|
|
|
chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
|
2013-05-15 15:48:19 +08:00
|
|
|
if (!tree_root || !chunk_root) {
|
2007-06-12 18:35:45 +08:00
|
|
|
err = -ENOMEM;
|
|
|
|
goto fail;
|
|
|
|
}
|
2009-09-22 04:00:26 +08:00
|
|
|
|
|
|
|
ret = init_srcu_struct(&fs_info->subvol_srcu);
|
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
2019-04-11 03:56:09 +08:00
|
|
|
ret = percpu_counter_init(&fs_info->dio_bytes, 0, GFP_KERNEL);
|
2013-01-29 18:09:20 +08:00
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
2017-04-12 18:24:32 +08:00
|
|
|
goto fail_srcu;
|
2013-01-29 18:09:20 +08:00
|
|
|
}
|
2019-04-11 03:56:09 +08:00
|
|
|
|
|
|
|
ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0, GFP_KERNEL);
|
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
|
|
|
goto fail_dio_bytes;
|
|
|
|
}
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
fs_info->dirty_metadata_batch = PAGE_SIZE *
|
2013-01-29 18:09:20 +08:00
|
|
|
(1 + ilog2(nr_cpu_ids));
|
|
|
|
|
2014-09-08 08:51:29 +08:00
|
|
|
ret = percpu_counter_init(&fs_info->delalloc_bytes, 0, GFP_KERNEL);
|
2013-01-29 18:10:51 +08:00
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
|
|
|
goto fail_dirty_metadata_bytes;
|
|
|
|
}
|
|
|
|
|
2018-04-05 07:04:49 +08:00
|
|
|
ret = percpu_counter_init(&fs_info->dev_replace.bio_counter, 0,
|
|
|
|
GFP_KERNEL);
|
Btrfs: fix use-after-free in the finishing procedure of the device replace
During device replace tests, we hit a null pointer dereference (it was very easy
to reproduce it by running xfstests' btrfs/011 on the devices with the virtio
scsi driver). There were two bugs that caused this problem:
- We might allocate new chunks on the replaced device after we updated
the mapping tree. And we forgot to replace the source device in those
mapping of the new chunks.
- We might get the mapping information, which included the source device,
before the mapping information update. And then submit the bio which was
based on that mapping information after we freed the source device.
For the first bug, we can fix it by doing the mapping tree update and the source
device removal within the same chunk mutex critical section. The chunk mutex is
used to protect the allocable device list, the above method can avoid
the new chunk allocation, and after we remove the source device, all
the new chunks will be allocated on the new device. So it can fix
the first bug.
For the second bug, we need to make sure all in-flight bios are finished and
no new bios are produced while we are removing the source device. To fix
this problem, we introduced a global @bio_counter; we not only inc/dec
@bio_counter outside of map_blocks, but also inc it before submitting a bio
and dec @bio_counter when ending bios.
Since Raid56 is a little different and device replace doesn't support raid56
yet, it is not addressed in the patch, and I added comments to make sure we will
fix it in the future.
Reported-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
2014-01-30 16:46:55 +08:00
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
|
|
|
goto fail_delalloc_bytes;
|
|
|
|
}
|
|
|
|
|
2009-09-22 04:00:26 +08:00
|
|
|
INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
|
2013-12-17 02:24:27 +08:00
|
|
|
INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
|
2007-04-20 09:01:03 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->trans_list);
|
2007-06-09 06:11:48 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->dead_roots);
|
2009-11-12 17:36:34 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->delayed_iputs);
|
2013-05-15 15:48:22 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->delalloc_roots);
|
2009-09-12 04:11:19 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->caching_block_groups);
|
2018-03-21 03:25:26 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->pending_raid_kobjs);
|
|
|
|
spin_lock_init(&fs_info->pending_raid_kobjs_lock);
|
2013-05-15 15:48:22 +08:00
|
|
|
spin_lock_init(&fs_info->delalloc_root_lock);
|
2011-04-12 05:25:13 +08:00
|
|
|
spin_lock_init(&fs_info->trans_lock);
|
2009-09-22 04:00:26 +08:00
|
|
|
spin_lock_init(&fs_info->fs_roots_radix_lock);
|
2009-11-12 17:36:34 +08:00
|
|
|
spin_lock_init(&fs_info->delayed_iput_lock);
|
2011-05-25 03:35:30 +08:00
|
|
|
spin_lock_init(&fs_info->defrag_inodes_lock);
|
2012-05-16 23:55:38 +08:00
|
|
|
spin_lock_init(&fs_info->tree_mod_seq_lock);
|
2013-04-11 18:30:16 +08:00
|
|
|
spin_lock_init(&fs_info->super_lock);
|
2013-12-17 02:24:27 +08:00
|
|
|
spin_lock_init(&fs_info->buffer_lock);
|
2014-09-18 23:20:02 +08:00
|
|
|
spin_lock_init(&fs_info->unused_bgs_lock);
|
2012-05-16 23:55:38 +08:00
|
|
|
rwlock_init(&fs_info->tree_mod_log_lock);
|
2015-02-26 10:49:20 +08:00
|
|
|
mutex_init(&fs_info->unused_bg_unpin_mutex);
|
Btrfs: fix race between balance and unused block group deletion
We have a race between deleting an unused block group and balancing the
same block group that leads to an assertion failure/BUG(), producing the
following trace:
[181631.208236] BTRFS: assertion failed: 0, file: fs/btrfs/volumes.c, line: 2622
[181631.220591] ------------[ cut here ]------------
[181631.222959] kernel BUG at fs/btrfs/ctree.h:4062!
[181631.223932] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[181631.224566] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse acpi_cpufreq parpor$
[181631.224566] CPU: 8 PID: 17451 Comm: btrfs Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[181631.224566] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[181631.224566] task: ffff880127e09590 ti: ffff8800b5824000 task.ti: ffff8800b5824000
[181631.224566] RIP: 0010:[<ffffffffa03f19f6>] [<ffffffffa03f19f6>] assfail.constprop.50+0x1e/0x20 [btrfs]
[181631.224566] RSP: 0018:ffff8800b5827ae8 EFLAGS: 00010246
[181631.224566] RAX: 0000000000000040 RBX: ffff8800109fc218 RCX: ffffffff81095dce
[181631.224566] RDX: 0000000000005124 RSI: ffffffff81464819 RDI: 00000000ffffffff
[181631.224566] RBP: ffff8800b5827ae8 R08: 0000000000000001 R09: 0000000000000000
[181631.224566] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800109fc200
[181631.224566] R13: ffff880020095000 R14: ffff8800b1a13f38 R15: ffff880020095000
[181631.224566] FS: 00007f70ca0b0c80(0000) GS:ffff88013ec00000(0000) knlGS:0000000000000000
[181631.224566] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[181631.224566] CR2: 00007f2872ab6e68 CR3: 00000000a717c000 CR4: 00000000000006e0
[181631.224566] Stack:
[181631.224566] ffff8800b5827ba8 ffffffffa03f3916 ffff8800b5827b38 ffffffffa03d080e
[181631.224566] ffffffffa03d1423 ffff880020095000 ffff88001233c000 0000000000000001
[181631.224566] ffff880020095000 ffff8800b1a13f38 0000000a69c00000 0000000000000000
[181631.224566] Call Trace:
[181631.224566] [<ffffffffa03f3916>] btrfs_remove_chunk+0xa4/0x6bb [btrfs]
[181631.224566] [<ffffffffa03d080e>] ? join_transaction.isra.8+0xb9/0x3ba [btrfs]
[181631.224566] [<ffffffffa03d1423>] ? wait_current_trans.isra.13+0x22/0xfc [btrfs]
[181631.224566] [<ffffffffa03f3fbc>] btrfs_relocate_chunk.isra.29+0x8f/0xa7 [btrfs]
[181631.224566] [<ffffffffa03f54df>] btrfs_balance+0xaa4/0xc52 [btrfs]
[181631.224566] [<ffffffffa03fd388>] btrfs_ioctl_balance+0x23f/0x2b0 [btrfs]
[181631.224566] [<ffffffff810872f9>] ? trace_hardirqs_on+0xd/0xf
[181631.224566] [<ffffffffa04019a3>] btrfs_ioctl+0xfe2/0x2220 [btrfs]
[181631.224566] [<ffffffff812603ed>] ? __this_cpu_preempt_check+0x13/0x15
[181631.224566] [<ffffffff81084669>] ? arch_local_irq_save+0x9/0xc
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff8103e48c>] ? __do_page_fault+0x211/0x424
[181631.224566] [<ffffffff811755e6>] do_vfs_ioctl+0x3c6/0x479
(...)
The sequence of steps leading to this is:
CPU 0 CPU 1
btrfs_balance()
btrfs_relocate_chunk()
btrfs_relocate_block_group(bg X)
btrfs_lookup_block_group(bg X)
cleaner_kthread
locks fs_info->cleaner_mutex
btrfs_delete_unused_bgs()
finds bg X, which became
unused in the previous
transaction
checks bg X ->ro == 0,
so it proceeds
sets bg X ->ro to 1
(btrfs_set_block_group_ro(bg X))
blocks on fs_info->cleaner_mutex
btrfs_remove_chunk(bg X)
unlocks fs_info->cleaner_mutex
acquires fs_info->cleaner_mutex
relocate_block_group()
--> does nothing, no extents found in
the extent tree from bg X
unlocks fs_info->cleaner_mutex
btrfs_relocate_block_group(bg X) returns
btrfs_remove_chunk(bg X)
extent map not found
--> ASSERT(0)
Fix this by using a new mutex to make sure these 2 operations, block
group relocation and removal, are serialized.
This issue is reproducible by running fstests generic/038 (which stresses
chunk allocation and automatic removal of unused block groups) together
with the following balance loop:
while true; do btrfs balance start -dusage=0 <mountpoint> ; done
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-06-11 07:58:53 +08:00
|
|
|
mutex_init(&fs_info->delete_unused_bgs_mutex);
|
2011-06-14 08:00:16 +08:00
|
|
|
mutex_init(&fs_info->reloc_mutex);
|
2014-03-06 13:55:03 +08:00
|
|
|
mutex_init(&fs_info->delalloc_root_mutex);
|
2013-01-29 18:13:12 +08:00
|
|
|
seqlock_init(&fs_info->profiles_lock);
|
2007-10-16 04:19:22 +08:00
|
|
|
|
2008-03-25 03:01:56 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots);
|
2008-03-25 03:01:59 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->space_info);
|
2012-05-16 23:55:38 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->tree_mod_seq_list);
|
2014-09-18 23:20:02 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->unused_bgs);
|
2008-03-25 03:01:56 +08:00
|
|
|
btrfs_mapping_init(&fs_info->mapping_tree);
|
2012-09-06 18:02:28 +08:00
|
|
|
btrfs_init_block_rsv(&fs_info->global_block_rsv,
|
|
|
|
BTRFS_BLOCK_RSV_GLOBAL);
|
|
|
|
btrfs_init_block_rsv(&fs_info->trans_block_rsv, BTRFS_BLOCK_RSV_TRANS);
|
|
|
|
btrfs_init_block_rsv(&fs_info->chunk_block_rsv, BTRFS_BLOCK_RSV_CHUNK);
|
|
|
|
btrfs_init_block_rsv(&fs_info->empty_block_rsv, BTRFS_BLOCK_RSV_EMPTY);
|
|
|
|
btrfs_init_block_rsv(&fs_info->delayed_block_rsv,
|
|
|
|
BTRFS_BLOCK_RSV_DELOPS);
|
btrfs: introduce delayed_refs_rsv
Traditionally we've had voodoo in btrfs to account for the space that
delayed refs may take up by having a global_block_rsv. This works most
of the time, except when it doesn't. We've had issues reported and seen
in production where sometimes the global reserve is exhausted during
transaction commit before we can run all of our delayed refs, resulting
in an aborted transaction. Because of this voodoo we have equally
dubious flushing semantics around throttling delayed refs which we often
get wrong.
So instead give them their own block_rsv. This way we can always know
exactly how much outstanding space we need for delayed refs. This
allows us to make sure we are constantly filling that reservation up
with space, and allows us to put more precise pressure on the enospc
system. Instead of doing math to see if it's a good time to throttle,
the normal enospc code will be invoked if we have a lot of delayed refs
pending, and they will be run via the normal flushing mechanism.
For now the delayed_refs_rsv will hold the reservations for the delayed
refs, the block group updates, and deleting csums. We could have a
separate rsv for the block group updates, but the csum deletion stuff is
still handled via the delayed_refs so that will stay there.
Historical background:
The global reserve has grown to cover everything we don't reserve space
explicitly for, and we've grown a lot of weird ad-hoc heuristics to know
if we're running short on space and when it's time to force a commit. A
failure rate of 20-40 file systems when we run hundreds of thousands of
them isn't super high, but cleaning up this code will make things less
ugly and more predictable.
Thus the delayed refs rsv. We always know how many delayed refs we have
outstanding, and although running them generates more we can use the
global reserve for that spill over, which fits better into its desired
use than a full blown reservation. This first approach is to simply
take how many times we're reserving space for and multiply that by 2 in
order to save enough space for the delayed refs that could be generated.
This is a naive approach and will probably evolve, but for now it works.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com> # high-level review
[ added background notes from the cover letter ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-12-03 23:20:33 +08:00
|
|
|
btrfs_init_block_rsv(&fs_info->delayed_refs_rsv,
|
|
|
|
BTRFS_BLOCK_RSV_DELREFS);
|
|
|
|
|
2008-11-07 11:02:51 +08:00
|
|
|
atomic_set(&fs_info->async_delalloc_pages, 0);
|
2011-05-25 03:35:30 +08:00
|
|
|
atomic_set(&fs_info->defrag_running, 0);
|
2016-01-07 18:38:48 +08:00
|
|
|
atomic_set(&fs_info->reada_works_cnt, 0);
|
2018-12-04 00:06:52 +08:00
|
|
|
atomic_set(&fs_info->nr_delayed_iputs, 0);
|
2013-04-25 00:57:33 +08:00
|
|
|
atomic64_set(&fs_info->tree_mod_seq, 0);
|
2007-03-23 00:13:20 +08:00
|
|
|
fs_info->sb = sb;
|
2013-08-09 05:45:48 +08:00
|
|
|
fs_info->max_inline = BTRFS_DEFAULT_MAX_INLINE;
|
2009-09-12 04:12:44 +08:00
|
|
|
fs_info->metadata_ratio = 0;
|
2011-05-25 03:35:30 +08:00
|
|
|
fs_info->defrag_inodes = RB_ROOT;
|
2017-05-11 14:17:46 +08:00
|
|
|
atomic64_set(&fs_info->free_chunk_space, 0);
|
2012-05-16 23:55:38 +08:00
|
|
|
fs_info->tree_mod_log = RB_ROOT;
|
2013-08-02 00:14:52 +08:00
|
|
|
fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
|
2015-01-17 00:21:12 +08:00
|
|
|
fs_info->avg_delayed_ref_runtime = NSEC_PER_SEC >> 6; /* div by 64 */
|
2011-05-23 20:30:00 +08:00
|
|
|
/* readahead state */
|
2015-11-07 08:28:21 +08:00
|
|
|
INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
|
2011-05-23 20:30:00 +08:00
|
|
|
spin_lock_init(&fs_info->reada_lock);
|
2017-09-30 03:43:50 +08:00
|
|
|
btrfs_init_ref_verify(fs_info);
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
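One detail above deserves a concrete illustration: when compressing a range does not make it smaller, the file is flagged so later writes skip compression. A runnable zlib sketch of that size comparison; only compress() is the real zlib API, the flagging is a stand-in for the inode flag:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	unsigned char src[4096], dst[8192];
	uLongf dst_len = sizeof(dst);

	memset(src, 0xab, sizeof(src));	/* trivially compressible test data */
	if (compress(dst, &dst_len, src, sizeof(src)) != Z_OK)
		return 1;
	if (dst_len >= sizeof(src)) {
		/* here btrfs would flag the inode to skip future attempts */
		puts("compression did not help, flagging file");
	} else {
		printf("compressed %zu -> %lu bytes\n", sizeof(src), dst_len);
	}
	return 0;
}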
2008-10-30 02:49:59 +08:00
|
|
|
|
2008-12-20 04:43:22 +08:00
|
|
|
fs_info->thread_pool_size = min_t(unsigned long,
|
|
|
|
num_online_cpus() + 2, 8);
|
2008-04-19 02:17:20 +08:00
|
|
|
|
2013-05-15 15:48:23 +08:00
|
|
|
INIT_LIST_HEAD(&fs_info->ordered_roots);
|
|
|
|
spin_lock_init(&fs_info->ordered_root_lock);
|
2017-10-20 02:15:57 +08:00
|
|
|
|
|
|
|
fs_info->btree_inode = new_inode(sb);
|
|
|
|
if (!fs_info->btree_inode) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto fail_bio_counter;
|
|
|
|
}
|
|
|
|
mapping_set_gfp_mask(fs_info->btree_inode->i_mapping, GFP_NOFS);
|
|
|
|
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes which are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. And the
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory name
index which is going to be inserted into b+ tree, and the other is used to
manage the directory name index which is going to be deleted from b+ tree.
- introduce a worker to deal with the delayed operations. This worker is used
to deal with the works of the delayed directory name index items insertion
and deletion and the delayed inode update.
When the number of delayed items goes beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the number of delayed items goes beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, and insert them into the work
queue of the worker, and then wait until the number of untreated items is below some
threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree first. If we find it, just drop it. If not,
add the key of it into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same as for the insert operation.)
- When we want to update the metadata of some inode, we cache the data of the
inode into the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertion and deletion.
- We will move the delayed node to the tail of the list after we access the
delayed node. This way, we can cache more delayed items and merge more
inode updates.
- If we want to commit a transaction, we will deal with all the delayed nodes.
- The delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test by the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
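The two-tree bookkeeping described above has a useful cancellation property: deleting a name index that is still only queued for insertion just drops it from the insertion tree, so neither operation ever touches the b+ tree. A small runnable sketch of that logic, with linked lists standing in for the two rb-trees and all names invented:

#include <stdio.h>
#include <stdlib.h>

struct item { unsigned long long key; struct item *next; };

static void push(struct item **list, unsigned long long key)
{
	struct item *it = malloc(sizeof(*it));

	it->key = key;
	it->next = *list;
	*list = it;
}

/* remove key from list if present; returns 1 when it was found */
static int find_remove(struct item **list, unsigned long long key)
{
	for (struct item **pp = list; *pp; pp = &(*pp)->next) {
		if ((*pp)->key == key) {
			struct item *victim = *pp;

			*pp = victim->next;
			free(victim);
			return 1;
		}
	}
	return 0;
}

static struct item *ins_tree, *del_tree;	/* the two per-node "rb-trees" */

static void delayed_insert(unsigned long long key)
{
	push(&ins_tree, key);
}

static void delayed_delete(unsigned long long key)
{
	/* a pending insertion and its deletion cancel out entirely */
	if (!find_remove(&ins_tree, key))
		push(&del_tree, key);
}

int main(void)
{
	delayed_insert(42);
	delayed_delete(42);	/* cancels: never reaches the b+ tree */
	delayed_delete(7);	/* queued for a real b+ tree deletion */
	printf("pending inserts: %s, pending deletes: %s\n",
	       ins_tree ? "yes" : "none", del_tree ? "yes" : "none");
	return 0;
}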
2011-04-22 18:12:22 +08:00
|
|
|
fs_info->delayed_root = kmalloc(sizeof(struct btrfs_delayed_root),
|
2016-02-11 18:01:55 +08:00
|
|
|
GFP_KERNEL);
|
btrfs: implement delayed inode items operation
2011-04-22 18:12:22 +08:00
|
|
|
if (!fs_info->delayed_root) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto fail_iput;
|
|
|
|
}
|
|
|
|
btrfs_init_delayed_root(fs_info->delayed_root);
|
2008-07-24 23:57:52 +08:00
|
|
|
|
2014-08-02 07:12:38 +08:00
|
|
|
btrfs_init_scrub(fs_info);
|
2011-11-09 20:44:05 +08:00
|
|
|
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
|
|
|
|
fs_info->check_integrity_print_mask = 0;
|
|
|
|
#endif
|
2014-08-02 07:12:39 +08:00
|
|
|
btrfs_init_balance(fs_info);
|
Btrfs: reclaim the reserved metadata space at background
Before applying this patch, the task had to reclaim the metadata space
by itself if the metadata space was not enough. And when the task started
the space reclamation, all the other tasks which wanted to reserve the
metadata space were blocked. In some cases, they would be blocked for
a long time, it made the performance fluctuate wildly.
So we introduce the background metadata space reclamation, when the space
is about to be exhausted, we insert a reclaim work into the workqueue, the
worker of the workqueue helps us to reclaim the reserved space at the
background. This way, the tasks needn't reclaim the space by themselves in
most cases, and even if the tasks have to reclaim the space or are blocked
for the space reclamation, they will get enough space more quickly.
Here is my test result(Tested by compilebench):
Memory: 2GB
CPU: 2Cores * 1CPU
Partition: 40GB(SSD)
Test command:
# compilebench -D <mnt> -m
Without this patch:
intial create total runs 30 avg 54.36 MB/s (user 0.52s sys 2.44s)
compile total runs 30 avg 123.72 MB/s (user 0.13s sys 1.17s)
read compiled tree total runs 3 avg 81.15 MB/s (user 0.74s sys 4.89s)
delete compiled tree total runs 30 avg 5.32 seconds (user 0.35s sys 4.37s)
With this patch:
intial create total runs 30 avg 59.80 MB/s (user 0.52s sys 2.53s)
compile total runs 30 avg 151.44 MB/s (user 0.13s sys 1.11s)
read compiled tree total runs 3 avg 83.25 MB/s (user 0.76s sys 4.91s)
delete compiled tree total runs 30 avg 5.29 seconds (user 0.34s sys 4.34s)
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-05-14 08:29:04 +08:00
|
|
|
btrfs_init_async_reclaim_work(&fs_info->async_reclaim_work);
|
2011-03-08 21:14:00 +08:00
|
|
|
|
2017-06-16 07:48:05 +08:00
|
|
|
sb->s_blocksize = BTRFS_BDEV_BLOCKSIZE;
|
|
|
|
sb->s_blocksize_bits = blksize_bits(BTRFS_BDEV_BLOCKSIZE);
|
2008-05-07 23:43:44 +08:00
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
btrfs_init_btree_inode(fs_info);
|
2009-09-22 04:00:26 +08:00
|
|
|
|
Btrfs: free space accounting redo
1) replace the per fs_info extent_io_tree that tracked free space with two
rb-trees per block group to track free space areas via offset and size. The
reason to do this is because most allocations come with a hint byte where to
start, so we can usually find a chunk of free space at that hint byte to satisfy
the allocation and get good space packing. If we cannot find free space at or
after the given offset we fall back on looking for a chunk of the given size as
close to that given offset as possible. When we fall back on the size search we
also try to find a slot as close to the size we want as possible, to avoid
breaking small chunks off of huge areas if possible.
2) remove the extent_io_tree that tracked the block group cache from fs_info and
replaced it with an rb-tree that tracks the block group cache via offset. Also
added a per space_info list that tracks the block group cache for the particular
space so we can lookup related block groups easily.
3) cleaned up the allocation code to make it a little easier to read and a
little less complicated. Basically there are 3 steps, first look from our
provided hint. If we couldn't find from that given hint, start back at our
original search start and look for space from there. If that fails try to
allocate space if we can and start looking again. If not we're screwed and need
to start over again.
4) small fixes. There were some issues in volumes.c where we wouldn't allocate
the rest of the disk. Fixed cow_file_range to actually pass the alloc_hint,
which has helped a good bit in making the fs_mark test I run have semi-normal
results as we run out of space. Generally with data allocations we don't track
where we last allocated from, so every time we did a data allocation we'd search
through every block group that we have looking for free space. Now searching a
block group with no free space isn't terribly time consuming, but it was causing a
slight degradation as we got more data block groups. The alloc_hint has fixed
this slight degradation and made things semi-normal.
There is still one nagging problem I'm working on where we will get ENOSPC when
there is definitely plenty of space. This only happens with metadata
allocations, and only when we are almost full. So you generally hit the 85%
mark first, but sometimes you'll hit the BUG before you hit the 85% wall. I'm
still tracking it down, but until then this seems to be pretty stable and make a
significant performance gain.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-09-24 01:14:11 +08:00
|
|
|
spin_lock_init(&fs_info->block_group_cache_lock);
|
2010-02-24 03:43:04 +08:00
|
|
|
fs_info->block_group_cache_tree = RB_ROOT;
|
2012-12-27 17:01:23 +08:00
|
|
|
fs_info->first_logical_byte = (u64)-1;
|
Btrfs: free space accounting redo
2008-09-24 01:14:11 +08:00
|
|
|
|
2019-03-01 10:47:59 +08:00
|
|
|
extent_io_tree_init(fs_info, &fs_info->freed_extents[0],
|
|
|
|
IO_TREE_FS_INFO_FREED_EXTENTS0, NULL);
|
|
|
|
extent_io_tree_init(fs_info, &fs_info->freed_extents[1],
|
|
|
|
IO_TREE_FS_INFO_FREED_EXTENTS1, NULL);
|
2009-09-12 04:11:19 +08:00
|
|
|
fs_info->pinned_extents = &fs_info->freed_extents[0];
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_BARRIER, &fs_info->flags);
|
2007-06-12 18:35:45 +08:00
|
|
|
|
2009-04-01 01:27:11 +08:00
|
|
|
mutex_init(&fs_info->ordered_operations_mutex);
|
2008-09-06 04:13:11 +08:00
|
|
|
mutex_init(&fs_info->tree_log_mutex);
|
2008-06-26 04:01:30 +08:00
|
|
|
mutex_init(&fs_info->chunk_mutex);
|
2008-06-26 04:01:31 +08:00
|
|
|
mutex_init(&fs_info->transaction_kthread_mutex);
|
|
|
|
mutex_init(&fs_info->cleaner_mutex);
|
2015-04-07 03:46:08 +08:00
|
|
|
mutex_init(&fs_info->ro_block_group_mutex);
|
2014-03-14 03:42:13 +08:00
|
|
|
init_rwsem(&fs_info->commit_root_sem);
|
2009-11-12 17:34:40 +08:00
|
|
|
init_rwsem(&fs_info->cleanup_work_sem);
|
2009-09-22 04:00:26 +08:00
|
|
|
init_rwsem(&fs_info->subvol_sem);
|
2013-08-15 23:11:21 +08:00
|
|
|
sema_init(&fs_info->uuid_tree_rescan_sem, 1);
|
2009-04-03 21:47:43 +08:00
|
|
|
|
2014-08-02 07:12:41 +08:00
|
|
|
btrfs_init_dev_replace_locks(fs_info);
|
2014-08-02 07:12:42 +08:00
|
|
|
btrfs_init_qgroup(fs_info);
|
2011-09-13 18:56:09 +08:00
|
|
|
|
2009-04-03 21:47:43 +08:00
|
|
|
btrfs_init_free_cluster(&fs_info->meta_alloc_cluster);
|
|
|
|
btrfs_init_free_cluster(&fs_info->data_alloc_cluster);
|
|
|
|
|
2008-07-18 00:53:50 +08:00
|
|
|
init_waitqueue_head(&fs_info->transaction_throttle);
|
2008-07-18 00:54:14 +08:00
|
|
|
init_waitqueue_head(&fs_info->transaction_wait);
|
2010-10-30 03:37:34 +08:00
|
|
|
init_waitqueue_head(&fs_info->transaction_blocked_wait);
|
2008-08-16 03:34:17 +08:00
|
|
|
init_waitqueue_head(&fs_info->async_submit_wait);
|
2018-12-04 00:06:52 +08:00
|
|
|
init_waitqueue_head(&fs_info->delayed_iputs_wait);
|
2007-03-14 04:47:54 +08:00
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
/* Usable values until the real ones are cached from the superblock */
|
|
|
|
fs_info->nodesize = 4096;
|
|
|
|
fs_info->sectorsize = 4096;
|
|
|
|
fs_info->stripesize = 4096;
|
|
|
|
|
Btrfs: prevent ioctls from interfering with a swap file
A later patch will implement swap file support for Btrfs, but before we
do that, we need to make sure that the various Btrfs ioctls cannot
change a swap file.
When a swap file is active, we must make sure that the extents of the
file are not moved and that they don't become shared. That means that
the following are not safe:
- chattr +c (enable compression)
- reflink
- dedupe
- snapshot
- defrag
Don't allow those to happen on an active swap file.
Additionally, balance, resize, device remove, and device replace are
also unsafe if they affect an active swapfile. Add a red-black tree of
block groups and devices which contain an active swapfile. Relocation
checks each block group against this tree and skips it or errors out for
balance or resize, respectively. Device remove and device replace check
the tree for the device they will operate on.
Note that we don't have to worry about chattr -C (disable nocow), which
we ignore for non-empty files, because an active swapfile must be
non-empty and can't be truncated. We also don't have to worry about
autodefrag because it's only done on COW files. Truncate and fallocate
are already taken care of by the generic code. Device add doesn't do
relocation so it's not an issue, either.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-11-04 01:28:12 +08:00
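As a rough sketch of the pin tree described above (field and function names are
illustrative, not the patch's exact code), a lookup keyed on the pinned
object's address using the kernel rb-tree API might look like:

#include <linux/rbtree.h>

/* One pin per block group or device that backs an active swap file. */
struct swapfile_pin_sketch {
	struct rb_node node;
	void *ptr;		/* the pinned block group or device */
};

/* Return true if @ptr backs an active swap file and must not be touched. */
static bool swapfile_pinned(struct rb_root *root, void *ptr)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct swapfile_pin_sketch *sp;

		sp = rb_entry(n, struct swapfile_pin_sketch, node);
		if (ptr < sp->ptr)
			n = n->rb_left;
		else if (ptr > sp->ptr)
			n = n->rb_right;
		else
			return true;	/* pinned: skip or fail the operation */
	}
	return false;
}

Relocation, resize, device remove, and device replace can then consult such a
tree before touching a block group or device.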
|
|
|
spin_lock_init(&fs_info->swapfile_pins_lock);
|
|
|
|
fs_info->swapfile_pins = RB_ROOT;
|
|
|
|
|
2013-01-30 07:40:14 +08:00
|
|
|
ret = btrfs_alloc_stripe_hash_table(fs_info);
|
|
|
|
if (ret) {
|
2013-03-01 23:03:00 +08:00
|
|
|
err = ret;
|
2013-01-30 07:40:14 +08:00
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
__setup_root(tree_root, fs_info, BTRFS_ROOT_TREE_OBJECTID);
|
2007-04-12 03:53:25 +08:00
|
|
|
|
2012-03-28 06:56:56 +08:00
|
|
|
invalidate_bdev(fs_devices->latest_bdev);
|
2013-03-06 22:57:46 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Read super block and check the signature bytes only
|
|
|
|
*/
|
2008-12-09 05:46:26 +08:00
|
|
|
bh = btrfs_read_dev_super(fs_devices->latest_bdev);
|
2015-08-14 18:32:51 +08:00
|
|
|
if (IS_ERR(bh)) {
|
|
|
|
err = PTR_ERR(bh);
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix an OOM when the memory load is high, by storing the delayed nodes in the
root's radix tree and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which was spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix a deadlock between readdir() and memory fault, which was reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix a nested lock, reported by Itaru Kitayama, by updating the space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed item
balancing, which was reported by Tsutomu Itoh.
- Modify the patch to address David Sterba's comment.
- Fix the CPU recursion spinlock bug, reported by Chris Mason.
Changelog V1 -> V2:
- Break up the global rb-tree; use a list to manage the delayed nodes,
which are created for every directory and file and used to manage the
delayed directory name index items and the delayed inode item.
- Introduce a worker to deal with the delayed nodes.
Compared with ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as the inode item, directory name item, directory name index and so on.
If we can delay some b+ tree insertions or deletions, we can improve the
performance, so we made this patch, which implements delayed directory name
index insertion/deletion and delayed inode updates.
Implementation:
- Introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes that are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items, and the
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory name
indexes which are going to be inserted into the b+ tree, and the other is used
to manage the directory name indexes which are going to be deleted from the
b+ tree.
- Introduce a worker to deal with the delayed operations. This worker handles
the delayed directory name index item insertions and deletions and the
delayed inode updates.
When the number of delayed items goes beyond the lower limit, we create works
for some delayed nodes, insert them into the work queue of the worker, and
then go back.
When the number of delayed items goes beyond the upper bound, we create works
for all the delayed nodes that haven't been dealt with, insert them into the
work queue of the worker, and then wait until the number of untreated items
drops below some threshold value.
- When we want to insert a directory name index into the b+ tree, we just add
the information to the delayed insertion rb-tree.
We then check the number of delayed items and do delayed item
balancing. (The balance policy is described above.)
- When we want to delete a directory name index from the b+ tree, we first
search for it in the insertion rb-tree. If we find it, we just drop it. If not,
we add its key to the delayed deletion rb-tree.
As with the delayed insertion rb-tree, we then check the number of
delayed items and do delayed item balancing.
(The same as for the insertion path.)
- When we want to update the metadata of some inode, we cache the data of the
inode in the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertions and deletions.
- We move the delayed node to the tail of the list after we access it.
This way, we can cache more delayed items and merge more
inode updates.
- When we commit a transaction, we deal with all the delayed nodes.
- The delayed node is freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test with the benchmark tool [1] and found we can improve the
performance of file creation by ~15% and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks to Kitayama-san for his help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 18:12:22 +08:00
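A compressed sketch of the structures the message describes, with illustrative
field names only (the real patch carries many more fields and locks):

#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>

struct delayed_root_sketch {
	spinlock_t lock;
	struct list_head node_list;	/* every delayed node with pending items */
	struct list_head prepare_list;	/* nodes handed to the worker */
};

struct delayed_node_sketch {
	struct list_head list;		/* linkage into the delayed root */
	struct rb_root ins_root;	/* dir-index items awaiting insertion */
	struct rb_root del_root;	/* dir-index keys awaiting deletion */
	int count;			/* pending items; drives the balancing */
};

Accessed nodes move to the tail of node_list, so the head holds the coldest
nodes, which are the first to be handed to the worker when the item count
crosses the limits.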
|
|
|
goto fail_alloc;
|
2011-01-08 18:09:13 +08:00
|
|
|
}
|
2007-06-12 18:35:45 +08:00
|
|
|
|
2013-03-06 22:57:46 +08:00
|
|
|
/*
|
|
|
|
* We want to check superblock checksum, the type is stored inside.
|
|
|
|
* Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k).
|
|
|
|
*/
|
2016-09-20 22:05:02 +08:00
|
|
|
if (btrfs_check_super_csum(fs_info, bh->b_data)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "superblock checksum mismatch");
|
2013-03-06 22:57:46 +08:00
|
|
|
err = -EINVAL;
|
2015-10-07 17:23:23 +08:00
|
|
|
brelse(bh);
|
2013-03-06 22:57:46 +08:00
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* super_copy is zeroed at allocation time and we never touch the
|
|
|
|
* following bytes up to INFO_SIZE, the checksum is calculated from
|
|
|
|
* the whole block of INFO_SIZE
|
|
|
|
*/
|
2011-04-13 21:41:04 +08:00
|
|
|
memcpy(fs_info->super_copy, bh->b_data, sizeof(*fs_info->super_copy));
|
2008-05-07 23:43:44 +08:00
|
|
|
brelse(bh);
|
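Conceptually, the verification just performed is a crc32c over everything in
the 4k super block that follows the stored checksum, compared against that
stored value. A hedged userspace sketch, where crc32c() stands in for whatever
checksum helper is available, and seed/finalization details are deliberately
glossed over:

#include <stdint.h>
#include <string.h>

#define CSUM_SIZE	32	/* mirrors BTRFS_CSUM_SIZE */
#define SUPER_SIZE	4096	/* mirrors BTRFS_SUPER_INFO_SIZE */

/* Assumed helper: any crc32c implementation with this shape will do. */
uint32_t crc32c(uint32_t seed, const void *data, size_t len);

/* Return 0 if the checksum stored at the start of the block matches. */
static int check_super_csum_sketch(const uint8_t *super)
{
	uint32_t want, got;

	memcpy(&want, super, sizeof(want));	/* crc32c uses 4 of the 32 csum bytes */
	got = crc32c(~0U, super + CSUM_SIZE, SUPER_SIZE - CSUM_SIZE);
	return want == got ? 0 : -1;	/* the real code also validates the csum type */
}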
2007-10-16 04:14:19 +08:00
|
|
|
|
2018-10-30 22:43:25 +08:00
|
|
|
disk_super = fs_info->super_copy;
|
|
|
|
|
2018-10-30 22:43:24 +08:00
|
|
|
ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
|
|
|
|
BTRFS_FSID_SIZE));
|
|
|
|
|
2018-10-30 22:43:23 +08:00
|
|
|
if (btrfs_fs_incompat(fs_info, METADATA_UUID)) {
|
2018-10-30 22:43:24 +08:00
|
|
|
ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid,
|
|
|
|
fs_info->super_copy->metadata_uuid,
|
|
|
|
BTRFS_FSID_SIZE));
|
2018-10-30 22:43:23 +08:00
|
|
|
}
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2018-10-30 22:43:25 +08:00
|
|
|
features = btrfs_super_flags(disk_super);
|
|
|
|
if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
|
|
|
|
features &= ~BTRFS_SUPER_FLAG_CHANGING_FSID_V2;
|
|
|
|
btrfs_set_super_flags(disk_super, features);
|
|
|
|
btrfs_info(fs_info,
|
|
|
|
"found metadata UUID change in progress flag, clearing");
|
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(fs_info->super_for_commit, fs_info->super_copy,
|
|
|
|
sizeof(*fs_info->super_for_commit));
|
2018-10-30 22:43:24 +08:00
|
|
|
|
2018-05-11 13:35:26 +08:00
|
|
|
ret = btrfs_validate_mount_super(fs_info);
|
2013-03-06 22:57:46 +08:00
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "superblock contains fatal errors");
|
2013-03-06 22:57:46 +08:00
|
|
|
err = -EINVAL;
|
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
2007-04-09 22:42:37 +08:00
|
|
|
if (!btrfs_super_root(disk_super))
|
btrfs: implement delayed inode items operation
2011-04-22 18:12:22 +08:00
|
|
|
goto fail_alloc;
|
2007-04-09 22:42:37 +08:00
|
|
|
|
2011-01-06 19:30:25 +08:00
|
|
|
/* check FS state, whether FS is broken. */
|
2013-01-29 18:14:48 +08:00
|
|
|
if (btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_ERROR)
|
|
|
|
set_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2011-11-04 03:17:42 +08:00
|
|
|
/*
|
|
|
|
* run through our array of backup supers and setup
|
|
|
|
* our ring pointer to the oldest one
|
|
|
|
*/
|
|
|
|
generation = btrfs_super_generation(disk_super);
|
|
|
|
find_oldest_super_backup(fs_info, generation);
|
|
|
|
|
Btrfs: Per file/directory controls for COW and compression
Data compression and data cow are controlled across the entire FS by mount
options right now. Ioctls are needed to set this on a per-file or
per-directory basis. This has been proposed previously, but VFS developers
wanted us to use generic ioctls rather than btrfs-specific ones.
According to Chris's comment, there should be just one true compression
method (probably LZO) stored in the super. However, before that, we will
wait until that one method is stable enough to be adopted into the super.
So I list it as a long-term goal, and just store it in RAM today.
After applying this patch, we can use the generic "FS_IOC_SETFLAGS" ioctl to
control a file's or directory's datacow and compression attributes.
NOTE:
- The compression type is selected by the following rules:
If we mount btrfs with a compress option, i.e. zlib/lzo, that type is used.
Otherwise, we'll use the default compress type (zlib today).
v1->v2:
- rebase to the latest btrfs.
v2->v3:
- fix a problem where, when a file is set NOCOW via the mount option, the
NOCOW flag is clobbered by inheritance from the parent directory.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-03-22 18:12:20 +08:00
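Since the control point is the generic FS_IOC_SETFLAGS ioctl, a minimal
userspace example (equivalent to what chattr +c does) is enough to toggle
compression on a file; FS_NOCOW_FL works the same way for datacow:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	int fd, flags;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
		perror(argv[1]);
		return 1;
	}
	flags |= FS_COMPR_FL;	/* or FS_NOCOW_FL to disable data cow */
	if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
		perror("FS_IOC_SETFLAGS");
		return 1;
	}
	close(fd);
	return 0;
}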
|
|
|
/*
|
|
|
|
* In the long term, we'll store the compression type in the super
|
|
|
|
* block, and it'll be used for per file compression control.
|
|
|
|
*/
|
|
|
|
fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
ret = btrfs_parse_options(fs_info, options, sb->s_flags);
|
2008-11-18 10:11:30 +08:00
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
btrfs: implement delayed inode items operation
2011-04-22 18:12:22 +08:00
|
|
|
goto fail_alloc;
|
2008-11-18 10:11:30 +08:00
|
|
|
}
|
2008-05-14 01:46:40 +08:00
|
|
|
|
2008-12-02 19:36:08 +08:00
|
|
|
features = btrfs_super_incompat_flags(disk_super) &
|
|
|
|
~BTRFS_FEATURE_INCOMPAT_SUPP;
|
|
|
|
if (features) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info,
|
|
|
|
"cannot mount because of unsupported optional features (%llx)",
|
|
|
|
features);
|
2008-12-02 19:36:08 +08:00
|
|
|
err = -EINVAL;
|
btrfs: implement delayed inode items operation
2011-04-22 18:12:22 +08:00
|
|
|
goto fail_alloc;
|
2008-12-02 19:36:08 +08:00
|
|
|
}
|
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and the decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits the old block's references. When a tree block with a
reference count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update the back refs for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about the pointer's key, level and the
tree the pointer lives in. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case, where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers fall within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 22:45:14 +08:00
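A toy illustration of the two back-ref flavours described above (names and
layout are illustrative, not the on-disk format):

#include <stdint.h>

enum backref_kind {
	TREE_BLOCK_REF,		/* "fuzzy": records only the owning root */
	SHARED_BLOCK_REF,	/* "full": records the exact parent block */
};

struct backref_sketch {
	enum backref_kind kind;
	uint64_t ref;		/* root objectid, or parent block bytenr */
};

/*
 * Policy sketch: while a block is referenced by a single root, the cheap
 * fuzzy ref is enough; once a second root gains a reference (e.g. via a
 * snapshot), a full ref records the parent so resolution stays cheap
 * instead of O(number_of_snapshots).
 */
static struct backref_sketch make_backref(int shared, uint64_t root_id,
					  uint64_t parent_bytenr)
{
	struct backref_sketch br;

	if (shared) {
		br.kind = SHARED_BLOCK_REF;
		br.ref = parent_bytenr;
	} else {
		br.kind = TREE_BLOCK_REF;
		br.ref = root_id;
	}
	return br;
}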
|
|
|
features = btrfs_super_incompat_flags(disk_super);
|
2010-10-25 15:12:26 +08:00
|
|
|
features |= BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF;
|
2016-06-23 06:54:23 +08:00
|
|
|
if (fs_info->compress_type == BTRFS_COMPRESS_LZO)
|
2010-10-25 15:12:26 +08:00
|
|
|
features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
|
btrfs: Add zstd support
Add zstd compression and decompression support to BtrFS. zstd at its
fastest level compresses almost as well as zlib, while offering much
faster compression and decompression, approaching lzo speeds.
I benchmarked btrfs with zstd compression against no compression, lzo
compression, and zlib compression. I benchmarked two scenarios. Copying
a set of files to btrfs, and then reading the files. Copying a tarball
to btrfs, extracting it to btrfs, and then reading the extracted files.
After every operation, I call `sync` and include the sync time.
Between every pair of operations I unmount and remount the filesystem
to avoid caching. The benchmark files can be found in the upstream
zstd source repository under
`contrib/linux-kernel/{btrfs-benchmark.sh,btrfs-extract-benchmark.sh}`
[1] [2].
I ran the benchmarks on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
16 GB of RAM, and a SSD.
The first compression benchmark is copying 10 copies of the unzipped
Silesia corpus [3] into a BtrFS filesystem mounted with
`-o compress-force=Method`. The decompression benchmark times how long
it takes to `tar` all 10 copies into `/dev/null`. The compression ratio is
measured by comparing the output of `df` and `du`. See the benchmark file
[1] for details. I benchmarked multiple zstd compression levels, although
the patch uses zstd level 1.
| Method | Ratio | Compression MB/s | Decompression speed |
|---------|-------|------------------|---------------------|
| None | 0.99 | 504 | 686 |
| lzo | 1.66 | 398 | 442 |
| zlib | 2.58 | 65 | 241 |
| zstd 1 | 2.57 | 260 | 383 |
| zstd 3 | 2.71 | 174 | 408 |
| zstd 6 | 2.87 | 70 | 398 |
| zstd 9 | 2.92 | 43 | 406 |
| zstd 12 | 2.93 | 21 | 408 |
| zstd 15 | 3.01 | 11 | 354 |
The next benchmark first copies `linux-4.11.6.tar` [4] to btrfs. Then it
measures the compression ratio, extracts the tar, and deletes the tar.
Then it measures the compression ratio again, and `tar`s the extracted
files into `/dev/null`. See the benchmark file [2] for details.
| Method | Tar Ratio | Extract Ratio | Copy (s) | Extract (s)| Read (s) |
|--------|-----------|---------------|----------|------------|----------|
| None | 0.97 | 0.78 | 0.981 | 5.501 | 8.807 |
| lzo | 2.06 | 1.38 | 1.631 | 8.458 | 8.585 |
| zlib | 3.40 | 1.86 | 7.750 | 21.544 | 11.744 |
| zstd 1 | 3.57 | 1.85 | 2.579 | 11.479 | 9.389 |
[1] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-benchmark.sh
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-extract-benchmark.sh
[3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
[4] https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.11.6.tar.xz
zstd source repository: https://github.com/facebook/zstd
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2017-08-10 10:39:02 +08:00
|
|
|
else if (fs_info->compress_type == BTRFS_COMPRESS_ZSTD)
|
|
|
|
features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_ZSTD;
|
2010-08-07 01:21:20 +08:00
|
|
|
|
2013-03-08 03:22:04 +08:00
|
|
|
if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_info(fs_info, "has skinny extents");
|
2013-03-08 03:22:04 +08:00
|
|
|
|
2010-08-07 01:21:20 +08:00
|
|
|
/*
|
|
|
|
* flag our filesystem as having big metadata blocks if
|
|
|
|
* they are bigger than the page size
|
|
|
|
*/
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement the page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and it is unlikely it ever will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE. And it's a constant source of confusion about whether
PAGE_CACHE_* or PAGE_* constants should be used in a particular case,
especially on the border between fs and mm.
Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in the page cache are special. They are
not.
The changes are pretty straightforward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the
script below. For some reason, coccinelle doesn't patch header files;
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
|
2010-08-07 01:21:20 +08:00
|
|
|
if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_info(fs_info,
|
|
|
|
"flagging fs with big metadata feature");
|
2010-08-07 01:21:20 +08:00
|
|
|
features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
|
|
|
|
}
|
|
|
|
|
2012-03-30 05:02:47 +08:00
|
|
|
nodesize = btrfs_super_nodesize(disk_super);
|
|
|
|
sectorsize = btrfs_super_sectorsize(disk_super);
|
2016-06-23 17:46:44 +08:00
|
|
|
stripesize = sectorsize;
|
2014-06-05 01:22:26 +08:00
|
|
|
fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
|
2013-01-29 18:10:51 +08:00
|
|
|
fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
|
2012-03-30 05:02:47 +08:00
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
/* Cache block sizes */
|
|
|
|
fs_info->nodesize = nodesize;
|
|
|
|
fs_info->sectorsize = sectorsize;
|
|
|
|
fs_info->stripesize = stripesize;
|
|
|
|
|
2012-03-30 05:02:47 +08:00
|
|
|
/*
|
|
|
|
* mixed block groups end up with duplicate but slightly offset
|
|
|
|
* extent buffers for the same range. It leads to corruptions
|
|
|
|
*/
|
|
|
|
if ((features & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS) &&
|
2014-06-05 01:22:26 +08:00
|
|
|
(sectorsize != nodesize)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info,
|
|
|
|
"unequal nodesize/sectorsize (%u != %u) are not allowed for mixed block groups",
|
|
|
|
nodesize, sectorsize);
|
2012-03-30 05:02:47 +08:00
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
2013-04-11 18:30:16 +08:00
|
|
|
/*
|
|
|
|
* No need to take the lock because there is no other task which will
|
|
|
|
* update the flag.
|
|
|
|
*/
|
2010-10-25 15:12:26 +08:00
|
|
|
btrfs_set_super_incompat_flags(disk_super, features);
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 22:45:14 +08:00
|
|
|
|
2008-12-02 19:36:08 +08:00
|
|
|
features = btrfs_super_compat_ro_flags(disk_super) &
|
|
|
|
~BTRFS_FEATURE_COMPAT_RO_SUPP;
|
2017-07-17 15:45:34 +08:00
|
|
|
if (!sb_rdonly(sb) && features) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info,
|
|
|
|
"cannot mount read-write because of unsupported optional features (%llx)",
|
2013-08-20 19:20:07 +08:00
|
|
|
features);
|
2008-12-02 19:36:08 +08:00
|
|
|
err = -EINVAL;
|
btrfs: implement delayed inode items operation
2011-04-22 18:12:22 +08:00
|
|
|
goto fail_alloc;
|
2008-12-02 19:36:08 +08:00
|
|
|
}
|
2009-10-03 07:11:56 +08:00
|
|
|
|
2015-02-16 23:29:26 +08:00
|
|
|
ret = btrfs_init_workqueues(fs_info, fs_devices);
|
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
2011-11-19 03:37:27 +08:00
|
|
|
goto fail_sb_buffer;
|
|
|
|
}
|
2008-06-12 09:47:56 +08:00
|
|
|
|
2017-04-12 18:24:32 +08:00
|
|
|
sb->s_bdi->congested_fn = btrfs_congested_fn;
|
|
|
|
sb->s_bdi->congested_data = fs_info;
|
|
|
|
sb->s_bdi->capabilities |= BDI_CAP_CGROUP_WRITEBACK;
|
2019-03-12 14:28:13 +08:00
|
|
|
sb->s_bdi->ra_pages = VM_READAHEAD_PAGES;
|
2017-04-12 18:24:32 +08:00
|
|
|
sb->s_bdi->ra_pages *= btrfs_super_num_devices(disk_super);
|
|
|
|
sb->s_bdi->ra_pages = max(sb->s_bdi->ra_pages, SZ_4M / PAGE_SIZE);
|
2008-04-19 04:13:31 +08:00
|
|
|
|
2008-05-07 23:43:44 +08:00
|
|
|
sb->s_blocksize = sectorsize;
|
|
|
|
sb->s_blocksize_bits = blksize_bits(sectorsize);
|
2018-10-30 22:43:24 +08:00
|
|
|
memcpy(&sb->s_uuid, fs_info->fs_devices->fsid, BTRFS_FSID_SIZE);
|
2007-10-16 04:15:53 +08:00
|
|
|
|
2008-06-26 04:01:30 +08:00
|
|
|
mutex_lock(&fs_info->chunk_mutex);
|
2016-06-22 09:16:51 +08:00
|
|
|
ret = btrfs_read_sys_array(fs_info);
|
2008-06-26 04:01:30 +08:00
|
|
|
mutex_unlock(&fs_info->chunk_mutex);
|
2008-04-25 21:04:37 +08:00
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to read the system array: %d", ret);
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 22:45:14 +08:00
|
|
|
goto fail_sb_buffer;
|
2008-04-25 21:04:37 +08:00
|
|
|
}
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2008-10-30 02:49:05 +08:00
|
|
|
generation = btrfs_super_chunk_root_generation(disk_super);
|
2018-03-29 09:08:11 +08:00
|
|
|
level = btrfs_super_chunk_root_level(disk_super);
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
__setup_root(chunk_root, fs_info, BTRFS_CHUNK_TREE_OBJECTID);
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
chunk_root->node = read_tree_block(fs_info,
|
2008-03-25 03:01:56 +08:00
|
|
|
btrfs_super_chunk_root(disk_super),
|
2018-03-29 09:08:11 +08:00
|
|
|
generation, level, NULL);
|
2015-05-25 17:30:15 +08:00
|
|
|
if (IS_ERR(chunk_root->node) ||
|
|
|
|
!extent_buffer_uptodate(chunk_root->node)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to read chunk root");
|
2015-10-06 00:44:25 +08:00
|
|
|
if (!IS_ERR(chunk_root->node))
|
|
|
|
free_extent_buffer(chunk_root->node);
|
2015-07-15 21:02:09 +08:00
|
|
|
chunk_root->node = NULL;
|
2011-11-04 03:17:42 +08:00
|
|
|
goto fail_tree_roots;
|
2009-07-23 04:52:13 +08:00
|
|
|
}
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 22:45:14 +08:00
|
|
|
btrfs_set_root_node(&chunk_root->root_item, chunk_root->node);
|
|
|
|
chunk_root->commit_root = btrfs_root_node(chunk_root);
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2008-04-16 03:41:47 +08:00
|
|
|
read_extent_buffer(chunk_root->node, fs_info->chunk_tree_uuid,
|
2013-08-20 19:20:15 +08:00
|
|
|
btrfs_header_chunk_tree_uuid(chunk_root->node), BTRFS_UUID_SIZE);
|
2008-04-16 03:41:47 +08:00
|
|
|
|
2016-06-21 22:40:19 +08:00
|
|
|
ret = btrfs_read_chunk_tree(fs_info);
|
2008-11-18 10:11:30 +08:00
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to read chunk tree: %d", ret);
|
2011-11-04 03:17:42 +08:00
|
|
|
goto fail_tree_roots;
|
2008-11-18 10:11:30 +08:00
|
|
|
}
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2012-11-06 20:15:27 +08:00
|
|
|
/*
|
2018-02-27 12:41:59 +08:00
|
|
|
* Keep the devid that is marked to be the target device for the
|
|
|
|
* device replace procedure
|
2012-11-06 20:15:27 +08:00
|
|
|
*/
|
2018-02-27 12:41:59 +08:00
|
|
|
btrfs_free_extra_devids(fs_devices, 0);
|
2008-05-14 01:46:40 +08:00
|
|
|
|
2012-02-21 09:53:43 +08:00
|
|
|
if (!fs_devices->latest_bdev) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to read devices");
|
2012-02-21 09:53:43 +08:00
|
|
|
goto fail_tree_roots;
|
|
|
|
}
|
|
|
|
|
2011-11-04 03:17:42 +08:00
|
|
|
retry_root_backup:
|
2008-10-30 02:49:05 +08:00
|
|
|
generation = btrfs_super_generation(disk_super);
|
2018-03-29 09:08:11 +08:00
|
|
|
level = btrfs_super_root_level(disk_super);
|
2008-03-25 03:01:56 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
tree_root->node = read_tree_block(fs_info,
|
2007-10-16 04:15:53 +08:00
|
|
|
btrfs_super_root(disk_super),
|
2018-03-29 09:08:11 +08:00
|
|
|
generation, level, NULL);
|
2015-05-25 17:30:15 +08:00
|
|
|
if (IS_ERR(tree_root->node) ||
|
|
|
|
!extent_buffer_uptodate(tree_root->node)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info, "failed to read tree root");
|
2015-10-06 00:44:25 +08:00
|
|
|
if (!IS_ERR(tree_root->node))
|
|
|
|
free_extent_buffer(tree_root->node);
|
2015-07-15 21:02:09 +08:00
|
|
|
tree_root->node = NULL;
|
2011-11-04 03:17:42 +08:00
|
|
|
goto recovery_tree_root;
|
2009-07-23 04:52:13 +08:00
|
|
|
}
|
2011-11-04 03:17:42 +08:00
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits the old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about the pointer's key, level and in which
tree the pointer lives. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers are within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 22:45:14 +08:00
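As a rough sketch of the COW reference-count rules the message above
describes (the tree_block type and every helper below are hypothetical
stand-ins for illustration, not btrfs APIs):

/* Hypothetical types and helpers, for illustration only. */
struct extent;
struct tree_block {
	int refs;
	int nr_pointers;
	struct extent *pointers[64];
};
void inc_extent_ref(struct extent *e);
void dec_block_ref(struct tree_block *b);
void free_extent_block(struct tree_block *b);

static void update_refs_on_cow(struct tree_block *old, struct tree_block *new)
{
	int i;

	if (old->refs == 1) {
		/* Non-shared: the new block inherits the old block's
		 * references, so the old block is freed at once. */
		free_extent_block(old);
		return;
	}
	/* Shared (refs > 1): bump every extent the new block points to... */
	for (i = 0; i < old->nr_pointers; i++)
		inc_extent_ref(old->pointers[i]);
	/* ...and drop one reference on the old block itself. */
	dec_block_ref(old);
}

Either way the increments and decrements stay local to the cow'd block,
which is exactly what removes the need for the dead-root walk.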
|
|
|
btrfs_set_root_node(&tree_root->root_item, tree_root->node);
|
|
|
|
tree_root->commit_root = btrfs_root_node(tree_root);
|
2013-09-05 22:58:43 +08:00
|
|
|
btrfs_set_root_refs(&tree_root->root_item, 1);
|
2007-10-16 04:15:53 +08:00
|
|
|
|
2016-01-07 21:26:59 +08:00
|
|
|
mutex_lock(&tree_root->objectid_mutex);
|
|
|
|
ret = btrfs_find_highest_objectid(tree_root,
|
|
|
|
&tree_root->highest_objectid);
|
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&tree_root->objectid_mutex);
|
|
|
|
goto recovery_tree_root;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT(tree_root->highest_objectid <= BTRFS_LAST_FREE_OBJECTID);
|
|
|
|
|
|
|
|
mutex_unlock(&tree_root->objectid_mutex);
|
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
ret = btrfs_read_roots(fs_info);
|
2014-08-02 07:12:45 +08:00
|
|
|
if (ret)
|
2011-11-04 03:17:42 +08:00
|
|
|
goto recovery_tree_root;
|
2013-08-15 23:11:19 +08:00
|
|
|
|
2010-05-16 22:49:58 +08:00
|
|
|
fs_info->generation = generation;
|
|
|
|
fs_info->last_trans_committed = generation;
|
|
|
|
|
2018-08-01 10:37:19 +08:00
|
|
|
ret = btrfs_verify_dev_extents(fs_info);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"failed to verify dev extents against chunks: %d",
|
|
|
|
ret);
|
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
2012-06-23 02:24:12 +08:00
|
|
|
ret = btrfs_recover_balance(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to recover balance: %d", ret);
|
2012-06-23 02:24:12 +08:00
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
|
|
|
|
2012-05-25 22:06:10 +08:00
|
|
|
ret = btrfs_init_dev_stats(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to init dev_stats: %d", ret);
|
2012-05-25 22:06:10 +08:00
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
|
|
|
|
2012-11-06 20:15:27 +08:00
|
|
|
ret = btrfs_init_dev_replace(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to init dev_replace: %d", ret);
|
2012-11-06 20:15:27 +08:00
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
|
|
|
|
2018-02-27 12:41:59 +08:00
|
|
|
btrfs_free_extra_devids(fs_devices, 1);
|
2012-11-06 20:15:27 +08:00
|
|
|
|
2015-03-10 06:38:38 +08:00
|
|
|
ret = btrfs_sysfs_add_fsid(fs_devices, NULL);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to init sysfs fsid interface: %d",
|
|
|
|
ret);
|
2015-03-10 06:38:38 +08:00
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = btrfs_sysfs_add_device(fs_devices);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to init sysfs device interface: %d",
|
|
|
|
ret);
|
2015-03-10 06:38:38 +08:00
|
|
|
goto fail_fsdev_sysfs;
|
|
|
|
}
|
|
|
|
|
2015-08-14 18:32:46 +08:00
|
|
|
ret = btrfs_sysfs_add_mounted(fs_info);
|
2011-03-07 10:13:14 +08:00
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to init sysfs interface: %d", ret);
|
2015-03-10 06:38:38 +08:00
|
|
|
goto fail_fsdev_sysfs;
|
2011-03-07 10:13:14 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
ret = btrfs_init_space_info(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to initialize space info: %d", ret);
|
2014-01-22 11:15:51 +08:00
|
|
|
goto fail_sysfs;
|
2011-03-07 10:13:14 +08:00
|
|
|
}
|
|
|
|
|
2016-06-21 22:40:19 +08:00
|
|
|
ret = btrfs_read_block_groups(fs_info);
|
2010-03-20 04:49:55 +08:00
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_err(fs_info, "failed to read block groups: %d", ret);
|
2014-01-22 11:15:51 +08:00
|
|
|
goto fail_sysfs;
|
2010-03-20 04:49:55 +08:00
|
|
|
}
|
2017-03-09 09:34:37 +08:00
|
|
|
|
2017-12-18 17:08:59 +08:00
|
|
|
if (!sb_rdonly(sb) && !btrfs_check_rw_degradable(fs_info, NULL)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info,
|
2018-11-28 19:05:13 +08:00
|
|
|
"writable mount is not allowed due to too many missing devices");
|
2014-01-22 11:15:51 +08:00
|
|
|
goto fail_sysfs;
|
2012-10-31 01:16:16 +08:00
|
|
|
}
|
2007-04-27 04:46:15 +08:00
|
|
|
|
2008-06-26 04:01:31 +08:00
|
|
|
fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
|
|
|
|
"btrfs-cleaner");
|
2009-01-21 23:49:16 +08:00
|
|
|
if (IS_ERR(fs_info->cleaner_kthread))
|
2014-01-22 11:15:51 +08:00
|
|
|
goto fail_sysfs;
|
2008-06-26 04:01:31 +08:00
|
|
|
|
|
|
|
fs_info->transaction_kthread = kthread_run(transaction_kthread,
|
|
|
|
tree_root,
|
|
|
|
"btrfs-transaction");
|
2009-01-21 23:49:16 +08:00
|
|
|
if (IS_ERR(fs_info->transaction_kthread))
|
2008-06-26 04:01:31 +08:00
|
|
|
goto fail_cleaner;
|
2008-06-26 04:01:31 +08:00
|
|
|
|
btrfs: Do not use data_alloc_cluster in ssd mode
This patch provides a band aid to improve the 'out of the box'
behaviour of btrfs for disks that are detected as being an ssd. In a
general purpose mixed workload scenario, the current ssd mode causes
overallocation of available raw disk space for data, while leaving
behind increasing amounts of unused fragmented free space. This
situation leads to early ENOSPC problems which are harming user
experience and adoption of btrfs as a general purpose filesystem.
This patch modifies the data extent allocation behaviour of the ssd mode
to make it behave identical to nossd mode. The metadata behaviour and
additional ssd_spread option stay untouched so far.
Recommendations for future development are to reconsider the current
oversimplified nossd / ssd distinction and the broken detection
mechanism based on the rotational attribute in sysfs and provide
experienced users with a more flexible way to choose allocator behaviour
for data and metadata, optimized for certain use cases, while keeping
sane 'out of the box' default settings. The internals of the current
btrfs code have more potential than what currently gets exposed to the
user to choose from.
The SSD story...
In the first year of btrfs development, around early 2008, btrfs
gained a mount option which enables specific functionality for
filesystems on solid state devices. The first occurrence of this
functionality is in commit e18e4809, labeled "Add mount -o ssd, which
includes optimizations for seek free storage".
The effect on allocating free space for doing (data) writes is to
'cluster' writes together, writing them out in contiguous space, as
opposed to a 'tetris' way of putting all separate writes into any free
space fragment that fits (which is what the -o nossd behaviour does).
A somewhat simplified explanation of what happens is that when, for
example, the 'cluster' size is set to 2MiB and we do some writes, the
data allocator will search for a free space block that is 2MiB big, and
put the writes in there. The ssd mode itself might allow a 2MiB cluster
to be composed of multiple free space extents with some existing data in
between, while the additional ssd_spread mount option kills off this
option and requires fully free space.
The idea behind this is (commit 536ac8ae): "The [...] clusters make it
more likely a given IO will completely overwrite the ssd block, so it
doesn't have to do an internal rwm cycle."; ssd block meaning nand erase
block. So, effectively this means applying a "locality based algorithm"
and trying to outsmart the actual ssd.
Since then, various changes have been made to the involved code, but the
basic idea is still present, and gets activated whenever the ssd mount
option is active. This also happens by default, when the rotational flag
as seen at /sys/block/<device>/queue/rotational is set to 0.
However, there are a number of problems with this approach.
First, what the optimization is trying to do is outsmart the ssd by
assuming there is a relation between the physical address space of the
block device as seen by btrfs and the actual physical storage of the
ssd, and then adjusting data placement. However, since the introduction
of the Flash Translation Layer (FTL) which is a part of the internal
controller of an ssd, these attempts are futile. The use of good quality
FTL in consumer ssd products might have been limited in 2008, but this
situation has changed drastically soon after that time. Today, even the
flash memory in your automatic cat feeding machine or your grandma's
wheelchair has a full featured one.
Second, the behaviour as described above results in the filesystem being
filled up with badly fragmented free space extents because of relatively
small pieces of space that are freed up by deletes, but not selected
again as part of a 'cluster'. Since the algorithm prefers allocating a
new chunk over going back to tetris mode, the end result is a filesystem
in which all raw space is allocated, but which is composed of
underutilized chunks with a 'shotgun blast' pattern of fragmented free
space. Usually, the next problematic thing that happens is the
filesystem wanting to allocate new space for metadata, which causes the
filesystem to fail in spectacular ways.
Third, the default mount options you get for an ssd ('ssd' mode enabled,
'discard' not enabled), in combination with spreading out writes over
the full address space and ignoring freed up space leads to worst case
behaviour in providing information to the ssd itself, since it will
never learn that all the free space left behind is actually free. There
are two ways to let an ssd know previously written data does not have to
be preserved, which are sending explicit signals using discard or
fstrim, or by simply overwriting the space with new data. The worst
case behaviour is the btrfs ssd_spread mount option in combination with
not having discard enabled. It has a side effect of minimizing the reuse
of free space previously written in.
Fourth, the rotational flag in /sys/ does not reliably indicate if the
device is a locally attached ssd. For example, iSCSI or NBD displays as
non-rotational, while a loop device on an ssd shows up as rotational.
The combination of the second and third problem effectively means that
despite all the good intentions, the btrfs ssd mode reliably causes the
ssd hardware and the filesystem structures and performance to be choked
to death. The clickbait version of the title of this story would have
been "Btrfs ssd optimizations considered harmful for ssds".
The current nossd 'tetris' mode (even still without discard) allows a
pattern of overwriting much more previously used space, causing many
more implicit discards to happen because of the overwrite information
the ssd gets. The actual location in the physical address space, as seen
from the point of view of btrfs is irrelevant, because the actual writes
to the low level flash are reordered anyway thanks to the FTL.
Changes made in the code
1. Make ssd mode data allocation identical to tetris mode, like nossd.
2. Adjust and clean up filesystem mount messages so that we can easily
identify if a kernel has this patch applied or not, when providing
support to end users. Also, make better use of the *_and_info helpers to
only trigger messages on actual state changes.
Backporting notes
Notes for whoever wants to backport this patch to their 4.9 LTS kernel:
* First apply commit 951e7966 "btrfs: drop the nossd flag when
remounting with -o ssd", or fixup the differences manually.
* The rest of the conflicts are because of the fs_info refactoring. So,
for example, instead of using fs_info, it's root->fs_info in
extent-tree.c
Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-07-28 14:31:28 +08:00
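To make the allocation-policy difference concrete, here is a minimal
sketch (the free_space type, find_free_extent_min() and the flag are
assumptions for illustration; the real allocator is far more involved):

typedef unsigned long long u64;
struct free_space;	/* hypothetical free-space index */
/* Hypothetical: return the start of a hole of at least @min_hole bytes. */
u64 find_free_extent_min(struct free_space *fs, u64 len, u64 min_hole);

#define SSD_CLUSTER_SIZE (2ULL * 1024 * 1024)	/* the 2MiB example above */

static u64 alloc_data_extent(struct free_space *fs, u64 len, int ssd_cluster)
{
	if (ssd_cluster)
		/* old ssd mode: insist on a contiguous cluster-sized hole,
		 * preferring a fresh chunk over small freed fragments */
		return find_free_extent_min(fs, len, SSD_CLUSTER_SIZE);
	/* 'tetris' mode (nossd, and ssd data after this patch): reuse any
	 * free fragment that fits */
	return find_free_extent_min(fs, len, len);
}

The patch effectively forces the second branch for data, which is what
lets freed fragments be overwritten and thus implicitly discarded.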
|
|
|
if (!btrfs_test_opt(fs_info, NOSSD) &&
|
2009-06-10 21:51:32 +08:00
|
|
|
!fs_info->fs_devices->rotating) {
|
2017-07-28 14:31:28 +08:00
|
|
|
btrfs_set_and_info(fs_info, SSD, "enabling ssd optimizations");
|
2009-06-10 21:51:32 +08:00
|
|
|
}
|
|
|
|
|
2014-02-05 22:26:17 +08:00
|
|
|
/*
|
2016-05-20 09:18:45 +08:00
|
|
|
* Mount does not set all options immediately, we can do it now and do
|
2014-02-05 22:26:17 +08:00
|
|
|
* not have to wait for transaction commit
|
|
|
|
*/
|
|
|
|
btrfs_apply_pending_changes(fs_info);
|
2014-01-13 13:36:06 +08:00
|
|
|
|
2011-11-09 20:44:05 +08:00
|
|
|
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
|
2016-06-23 06:54:23 +08:00
|
|
|
if (btrfs_test_opt(fs_info, CHECK_INTEGRITY)) {
|
2016-06-23 06:54:24 +08:00
|
|
|
ret = btrfsic_mount(fs_info, fs_devices,
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_test_opt(fs_info,
|
2011-11-09 20:44:05 +08:00
|
|
|
CHECK_INTEGRITY_INCLUDING_EXTENT_DATA) ?
|
|
|
|
1 : 0,
|
|
|
|
fs_info->check_integrity_print_mask);
|
|
|
|
if (ret)
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"failed to initialize integrity check module: %d",
|
|
|
|
ret);
|
2011-11-09 20:44:05 +08:00
|
|
|
}
|
|
|
|
#endif
|
2011-09-13 21:23:30 +08:00
|
|
|
ret = btrfs_read_qgroup_config(fs_info);
|
|
|
|
if (ret)
|
|
|
|
goto fail_trans_kthread;
|
2011-11-09 20:44:05 +08:00
|
|
|
|
2017-09-30 03:43:50 +08:00
|
|
|
if (btrfs_build_ref_tree(fs_info))
|
|
|
|
btrfs_err(fs_info, "couldn't build ref tree");
|
|
|
|
|
2016-01-19 10:23:03 +08:00
|
|
|
/* do not make disk changes in broken FS or nologreplay is given */
|
|
|
|
if (btrfs_super_log_root(disk_super) != 0 &&
|
2016-06-23 06:54:23 +08:00
|
|
|
!btrfs_test_opt(fs_info, NOLOGREPLAY)) {
|
2014-08-02 07:12:46 +08:00
|
|
|
ret = btrfs_replay_log(fs_info, fs_devices);
|
2012-03-12 23:03:00 +08:00
|
|
|
if (ret) {
|
2014-08-02 07:12:46 +08:00
|
|
|
err = ret;
|
2014-04-23 19:33:35 +08:00
|
|
|
goto fail_qgroup;
|
2012-03-12 23:03:00 +08:00
|
|
|
}
|
2008-09-06 04:13:11 +08:00
|
|
|
}
|
Btrfs: update space balancing code
This patch updates the space balancing code to utilize the new
backref format. Before, btrfs-vol -b would break any COW links
on data blocks or metadata. This was slow and caused the amount
of space used to explode if a large number of snapshots were present.
The new code can keep the sharing of all data extents and
most of the tree blocks.
To maintain the sharing of data extents, the space balance code uses
a separate inode to hold data extent pointers, then updates the references
to point to the new location.
To maintain the sharing of tree blocks, the space balance code uses
reloc trees to relocate tree blocks in reference counted roots.
There is one reloc tree for each subvol, and all reloc trees share
the same root key objectid. Reloc trees are snapshots of the latest
committed roots of subvols (root->commit_root).
To relocate a tree block referenced by a subvol, there are two steps.
COW the block through the subvol's reloc tree, then update the block pointer in
the subvol to point to the new block. Since all reloc trees share the
same root key objectid, doing special handling for tree blocks
owned by them is easy. Once a tree block has been COWed in one
reloc tree, we can use the resulting new block directly when the
same block is required to COW again through other reloc trees.
In this way, relocated tree blocks are shared between reloc trees,
so they are also shared between subvols.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-09-26 22:09:34 +08:00
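The two-step relocation described above could be outlined like this
(a sketch; both helper functions are hypothetical names standing in for
the relocation internals):

struct btrfs_root;
struct extent_buffer;
/* Hypothetical helpers standing in for the relocation internals. */
int cow_through_reloc_tree(struct btrfs_root *reloc, struct extent_buffer *eb,
			   struct extent_buffer **new_eb);
int update_block_pointer(struct btrfs_root *subvol, struct extent_buffer *old,
			 struct extent_buffer *new);

static int relocate_one_block(struct btrfs_root *subvol,
			      struct btrfs_root *reloc_root,
			      struct extent_buffer *eb)
{
	struct extent_buffer *new_eb;
	int ret;

	/* Step 1: COW the block through the subvol's reloc tree.  Because
	 * all reloc trees share one root key objectid, a block already
	 * COWed via another reloc tree yields that same new block here. */
	ret = cow_through_reloc_tree(reloc_root, eb, &new_eb);
	if (ret)
		return ret;

	/* Step 2: point the subvol at the relocated copy, so the block
	 * stays shared between reloc trees and thus between subvols. */
	return update_block_pointer(subvol, eb, new_eb);
}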
|
|
|
|
2016-06-22 09:16:51 +08:00
|
|
|
ret = btrfs_find_orphan_roots(fs_info);
|
2012-03-12 23:03:00 +08:00
|
|
|
if (ret)
|
2014-04-23 19:33:35 +08:00
|
|
|
goto fail_qgroup;
|
2009-09-22 04:00:26 +08:00
|
|
|
|
2017-07-17 15:45:34 +08:00
|
|
|
if (!sb_rdonly(sb)) {
|
2010-05-16 22:49:58 +08:00
|
|
|
ret = btrfs_cleanup_fs_roots(fs_info);
|
2012-06-23 02:14:13 +08:00
|
|
|
if (ret)
|
2014-04-23 19:33:35 +08:00
|
|
|
goto fail_qgroup;
|
2016-06-13 11:39:58 +08:00
|
|
|
|
|
|
|
mutex_lock(&fs_info->cleaner_mutex);
|
2009-06-10 22:45:14 +08:00
|
|
|
ret = btrfs_recover_relocation(tree_root);
|
2016-06-13 11:39:58 +08:00
|
|
|
mutex_unlock(&fs_info->cleaner_mutex);
|
2010-02-02 16:46:44 +08:00
|
|
|
if (ret < 0) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info, "failed to recover relocation: %d",
|
|
|
|
ret);
|
2010-02-02 16:46:44 +08:00
|
|
|
err = -EINVAL;
|
2011-09-13 21:23:30 +08:00
|
|
|
goto fail_qgroup;
|
2010-02-02 16:46:44 +08:00
|
|
|
}
|
2008-11-20 04:13:35 +08:00
|
|
|
}
|
2008-09-26 22:09:34 +08:00
|
|
|
|
2008-11-18 10:02:50 +08:00
|
|
|
location.objectid = BTRFS_FS_TREE_OBJECTID;
|
|
|
|
location.type = BTRFS_ROOT_ITEM_KEY;
|
2013-05-15 15:48:19 +08:00
|
|
|
location.offset = 0;
|
2008-11-18 10:02:50 +08:00
|
|
|
|
|
|
|
fs_info->fs_root = btrfs_read_fs_root_no_name(fs_info, &location);
|
2010-05-29 17:44:10 +08:00
|
|
|
if (IS_ERR(fs_info->fs_root)) {
|
|
|
|
err = PTR_ERR(fs_info->fs_root);
|
2018-03-29 06:11:45 +08:00
|
|
|
btrfs_warn(fs_info, "failed to read fs tree: %d", err);
|
2011-09-13 21:23:30 +08:00
|
|
|
goto fail_qgroup;
|
2010-05-29 17:44:10 +08:00
|
|
|
}
|
2009-06-10 21:51:32 +08:00
|
|
|
|
2017-07-17 15:45:34 +08:00
|
|
|
if (sb_rdonly(sb))
|
2012-06-23 02:24:13 +08:00
|
|
|
return 0;
|
2012-01-17 04:04:48 +08:00
|
|
|
|
2016-09-23 08:24:21 +08:00
|
|
|
if (btrfs_test_opt(fs_info, CLEAR_CACHE) &&
|
|
|
|
btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
|
2016-09-23 08:24:22 +08:00
|
|
|
clear_free_space_tree = 1;
|
|
|
|
} else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
|
|
|
|
!btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) {
|
|
|
|
btrfs_warn(fs_info, "free space tree is invalid");
|
|
|
|
clear_free_space_tree = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (clear_free_space_tree) {
|
2016-09-23 08:24:21 +08:00
|
|
|
btrfs_info(fs_info, "clearing free space tree");
|
|
|
|
ret = btrfs_clear_free_space_tree(fs_info);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"failed to clear free space tree: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2016-09-23 08:24:21 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
if (btrfs_test_opt(fs_info, FREE_SPACE_TREE) &&
|
2015-12-30 23:52:35 +08:00
|
|
|
!btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_info(fs_info, "creating free space tree");
|
2015-12-30 23:52:35 +08:00
|
|
|
ret = btrfs_create_free_space_tree(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"failed to create free space tree: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2015-12-30 23:52:35 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-06-23 02:24:13 +08:00
|
|
|
down_read(&fs_info->cleanup_work_sem);
|
|
|
|
if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) ||
|
|
|
|
(ret = btrfs_orphan_cleanup(fs_info->tree_root))) {
|
2010-01-26 22:30:53 +08:00
|
|
|
up_read(&fs_info->cleanup_work_sem);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2012-06-23 02:24:13 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
up_read(&fs_info->cleanup_work_sem);
|
2012-01-17 04:04:48 +08:00
|
|
|
|
2012-06-23 02:24:13 +08:00
|
|
|
ret = btrfs_resume_balance_async(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info, "failed to resume balance: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2012-06-23 02:24:13 +08:00
|
|
|
return ret;
|
2010-01-26 22:30:53 +08:00
|
|
|
}
|
|
|
|
|
2012-11-06 20:15:27 +08:00
|
|
|
ret = btrfs_resume_dev_replace_async(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info, "failed to resume device replace: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2012-11-06 20:15:27 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
Btrfs: fix qgroup rescan resume on mount
When called during mount, we cannot start the rescan worker thread until
open_ctree is done. This commit restructures the qgroup rescan internals to
enable a clean deferral of the rescan resume operation.
First of all, the struct qgroup_rescan is removed, saving us a malloc and
some initialization synchronization problems. Its only element (the worker
struct) now lives within fs_info just as the rest of the rescan code.
Then setting up a rescan worker is split into several reusable stages.
Currently we have three different rescan startup scenarios:
(A) rescan ioctl
(B) rescan resume by mount
(C) rescan by quota enable
Each case needs its own combination of the four following steps:
(1) set the progress [A, C: zero; B: state of umount]
(2) commit the transaction [A]
(3) set the counters [A, C: zero; B: state of umount]
(4) start worker [A, B, C]
qgroup_rescan_init does step (1). There's no extra function added to commit
a transaction, we've got that already. qgroup_rescan_zero_tracking does
step (3). Step (4) is nothing more than a call to the generic
btrfs_queue_worker.
We also get rid of a double check for the rescan progress during
btrfs_qgroup_account_ref, which is no longer required due to having step 2
from the list above.
As a side effect, this commit prepares to move the rescan start code from
btrfs_run_qgroups (which is run during commit) to a less time critical
section.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-28 23:47:24 +08:00
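Composed from steps (1)-(4) above, the startup scenarios reduce to the
following shape (the signatures are assumptions of this sketch; only the
step composition comes from the message):

struct btrfs_fs_info;
/* Illustrative wrappers for the four steps. */
void qgroup_rescan_set_progress(struct btrfs_fs_info *fs_info,
				unsigned long long progress);	/* step 1 */
void commit_current_transaction(struct btrfs_fs_info *fs_info);/* step 2 */
void qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info);/* step 3 */
void queue_rescan_worker(struct btrfs_fs_info *fs_info);	/* step 4 */

/* (A) rescan ioctl: all four steps, progress and counters zeroed. */
static void rescan_by_ioctl(struct btrfs_fs_info *fs_info)
{
	qgroup_rescan_set_progress(fs_info, 0);
	commit_current_transaction(fs_info);
	qgroup_rescan_zero_tracking(fs_info);
	queue_rescan_worker(fs_info);
}

/* (B) resume by mount: progress and counters keep the state of umount,
 * so only step 4 runs, deferred until open_ctree has finished. */
static void rescan_resume_on_mount(struct btrfs_fs_info *fs_info)
{
	queue_rescan_worker(fs_info);
}

/* (C) rescan by quota enable: steps 1, 3 and 4 with zeroed state. */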
|
|
|
btrfs_qgroup_rescan_resume(fs_info);
|
|
|
|
|
2014-08-02 07:12:45 +08:00
|
|
|
if (!fs_info->uuid_root) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_info(fs_info, "creating UUID tree");
|
2013-08-15 23:11:19 +08:00
|
|
|
ret = btrfs_create_uuid_tree(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"failed to create the UUID tree: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2013-08-15 23:11:19 +08:00
|
|
|
return ret;
|
|
|
|
}
|
2016-06-23 06:54:23 +08:00
|
|
|
} else if (btrfs_test_opt(fs_info, RESCAN_UUID_TREE) ||
|
2014-08-02 07:12:45 +08:00
|
|
|
fs_info->generation !=
|
|
|
|
btrfs_super_uuid_tree_generation(disk_super)) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_info(fs_info, "checking UUID tree");
|
2013-08-15 23:11:23 +08:00
|
|
|
ret = btrfs_check_uuid_tree(fs_info);
|
|
|
|
if (ret) {
|
2016-05-09 17:32:39 +08:00
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"failed to check the UUID tree: %d", ret);
|
2016-06-22 09:16:51 +08:00
|
|
|
close_ctree(fs_info);
|
2013-08-15 23:11:23 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
} else {
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags);
|
2013-08-15 23:11:19 +08:00
|
|
|
}
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_OPEN, &fs_info->flags);
|
2014-09-18 23:20:02 +08:00
|
|
|
|
2016-01-19 10:23:02 +08:00
|
|
|
/*
|
|
|
|
* backuproot only affect mount behavior, and if open_ctree succeeded,
|
|
|
|
* no need to keep the flag
|
|
|
|
*/
|
|
|
|
btrfs_clear_opt(fs_info->mount_opt, USEBACKUPROOT);
|
|
|
|
|
2011-11-17 14:10:02 +08:00
|
|
|
return 0;
|
2007-06-12 18:35:45 +08:00
|
|
|
|
2011-09-13 21:23:30 +08:00
|
|
|
fail_qgroup:
|
|
|
|
btrfs_free_qgroup_config(fs_info);
|
2008-11-20 04:13:35 +08:00
|
|
|
fail_trans_kthread:
|
|
|
|
kthread_stop(fs_info->transaction_kthread);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_cleanup_transaction(fs_info);
|
2014-05-08 05:06:09 +08:00
|
|
|
btrfs_free_fs_roots(fs_info);
|
2008-06-26 04:01:31 +08:00
|
|
|
fail_cleaner:
|
2008-06-26 04:01:31 +08:00
|
|
|
kthread_stop(fs_info->cleaner_kthread);
|
2008-11-20 04:13:35 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* make sure we're done with the btree inode before we stop our
|
|
|
|
* kthreads
|
|
|
|
*/
|
|
|
|
filemap_write_and_wait(fs_info->btree_inode->i_mapping);
|
|
|
|
|
2014-01-22 11:15:51 +08:00
|
|
|
fail_sysfs:
|
2015-08-14 18:32:47 +08:00
|
|
|
btrfs_sysfs_remove_mounted(fs_info);
|
2014-01-22 11:15:51 +08:00
|
|
|
|
2015-03-10 06:38:38 +08:00
|
|
|
fail_fsdev_sysfs:
|
|
|
|
btrfs_sysfs_remove_fsid(fs_info->fs_devices);
|
|
|
|
|
2010-03-20 04:49:55 +08:00
|
|
|
fail_block_groups:
|
2013-04-26 01:44:38 +08:00
|
|
|
btrfs_put_block_group_cache(fs_info);
|
2011-11-04 03:17:42 +08:00
|
|
|
|
|
|
|
fail_tree_roots:
|
|
|
|
free_root_pointers(fs_info, 1);
|
2013-02-07 14:01:35 +08:00
|
|
|
invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
|
2011-11-04 03:17:42 +08:00
|
|
|
|
2007-06-12 18:35:45 +08:00
|
|
|
fail_sb_buffer:
|
2013-03-17 10:10:31 +08:00
|
|
|
btrfs_stop_all_workers(fs_info);
|
2017-02-02 06:39:50 +08:00
|
|
|
btrfs_free_block_groups(fs_info);
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch to address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, that uses two lists to
manage the delayed nodes which are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. And the
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory name
index which is going to be inserted into b+ tree, and the other is used to
manage the directory name index which is going to be deleted from b+ tree.
- introduce a worker to deal with the delayed operation. This worker is used
to deal with the works of the delayed directory name index items insertion
and deletion and the delayed inode update.
When the delayed items are beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the delayed items are beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, and insert them into the work
queue of the worker, and then wait until the untreated items are below some
threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree at first. If we find it, just drop it. If not,
add the key of it into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same as the inserting manipulation.)
- When we want to update the metadata of some inode, we cache the data of the
inode into the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertion and deletion.
- We will move the delayed node to the tail of the list after we access the
delayed node. In this way, we can cache more delayed items and merge more
inode updates.
- If we want to commit a transaction, we will deal with all the delayed nodes.
- the delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test with the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 18:12:22 +08:00
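The deletion rule above (search the inserting rb-tree first) can be
sketched as follows; the rb-tree wrappers and types here are hypothetical,
while the real code uses the kernel rbtree on delayed items:

typedef unsigned long long u64;
struct rbtree;
struct delayed_item;
struct delayed_node {
	struct rbtree *ins_root;	/* indexes waiting to be inserted */
	struct rbtree *del_root;	/* keys waiting to be deleted */
};
struct delayed_item *rb_find_index(struct rbtree *root, u64 index);
void rb_drop_item(struct rbtree *root, struct delayed_item *item);
void rb_add_index(struct rbtree *root, u64 index);

static void delay_dir_index_delete(struct delayed_node *node, u64 index)
{
	struct delayed_item *item;

	/* An insertion of the same index that never reached the b+ tree
	 * cancels out: just drop it from the inserting rb-tree. */
	item = rb_find_index(node->ins_root, index);
	if (item) {
		rb_drop_item(node->ins_root, item);
		return;
	}
	/* Otherwise record the key; the worker deletes it from the b+
	 * tree later, after the usual delayed-items balance check. */
	rb_add_index(node->del_root, index);
}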
|
|
|
fail_alloc:
|
2008-06-12 09:47:56 +08:00
|
|
|
fail_iput:
|
2011-11-09 19:26:37 +08:00
|
|
|
btrfs_mapping_tree_free(&fs_info->mapping_tree);
|
|
|
|
|
2008-06-12 09:47:56 +08:00
|
|
|
iput(fs_info->btree_inode);
|
Btrfs: fix use-after-free in the finishing procedure of the device replace
During device replace testing, we hit a null pointer dereference (it was very easy
to reproduce it by running xfstests' btrfs/011 on the devices with the virtio
scsi driver). There were two bugs that caused this problem:
- We might allocate new chunks on the replaced device after we updated
the mapping tree. And we forgot to replace the source device in those
mapping of the new chunks.
- We might get the mapping information which includes the source device
before the mapping information update, and then submit the bio which was
based on that mapping information after we freed the source device.
For the first bug, we can fix it by doing the mapping tree update and source
device removal in the same context of the chunk mutex. The chunk mutex is
used to protect the allocable device list, the above method can avoid
the new chunk allocation, and after we remove the source device, all
the new chunks will be allocated on the new device. So it can fix
the first bug.
For the second bug, we need to make sure all in-flight bios are finished and
no new bios are produced while we are removing the source device. To fix
this problem, we introduced a global @bio_counter; we not only inc/dec
@bio_counter outside of map_blocks, but also inc it before submitting a bio
and dec @bio_counter when ending bios.
Since Raid56 is a little different and device replace doesn't support raid56
yet, it is not addressed in the patch and I add comments to make sure we will
fix it in the future.
Reported-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
2014-01-30 16:46:55 +08:00
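The @bio_counter protocol from the message could look like this in
outline (btrfs_bio_counter_inc_blocked()/btrfs_bio_counter_dec() are the
helpers this patch introduced, but the wrapper, the endio handler and the
single-argument submit_bio() are assumptions of this sketch):

/* Sketch only; assumes btrfs's struct btrfs_fs_info and <linux/bio.h>. */
static void counted_end_io(struct bio *bio)
{
	struct btrfs_fs_info *fs_info = bio->bi_private;

	btrfs_bio_counter_dec(fs_info);	/* dec when the bio ends */
	bio_put(bio);			/* drop the submitter's reference */
}

static void submit_counted_bio(struct btrfs_fs_info *fs_info, struct bio *bio)
{
	btrfs_bio_counter_inc_blocked(fs_info);	/* inc before submitting;
						 * waits while the counter
						 * is frozen by dev replace */
	bio->bi_private = fs_info;
	bio->bi_end_io = counted_end_io;
	submit_bio(bio);
}

With the counter pinned across every in-flight bio, the replace-finish
path can freeze it, wait for it to drain, swap the source device out of
the mapping tree, and only then let new bios through.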
|
|
|
fail_bio_counter:
|
2018-04-05 07:04:49 +08:00
|
|
|
percpu_counter_destroy(&fs_info->dev_replace.bio_counter);
|
2013-01-29 18:10:51 +08:00
|
|
|
fail_delalloc_bytes:
|
|
|
|
percpu_counter_destroy(&fs_info->delalloc_bytes);
|
2013-01-29 18:09:20 +08:00
|
|
|
fail_dirty_metadata_bytes:
|
|
|
|
percpu_counter_destroy(&fs_info->dirty_metadata_bytes);
|
2019-04-11 03:56:09 +08:00
|
|
|
fail_dio_bytes:
|
|
|
|
percpu_counter_destroy(&fs_info->dio_bytes);
|
2009-09-22 04:00:26 +08:00
|
|
|
fail_srcu:
|
|
|
|
cleanup_srcu_struct(&fs_info->subvol_srcu);
|
2009-01-21 23:49:16 +08:00
|
|
|
fail:
|
2013-01-30 07:40:14 +08:00
|
|
|
btrfs_free_stripe_hash_table(fs_info);
|
2011-11-09 19:26:37 +08:00
|
|
|
btrfs_close_devices(fs_info->fs_devices);
|
2011-11-17 14:10:02 +08:00
|
|
|
return err;
|
2011-11-04 03:17:42 +08:00
|
|
|
|
|
|
|
recovery_tree_root:
|
2016-06-23 06:54:23 +08:00
|
|
|
if (!btrfs_test_opt(fs_info, USEBACKUPROOT))
|
2011-11-04 03:17:42 +08:00
|
|
|
goto fail_tree_roots;
|
|
|
|
|
|
|
|
free_root_pointers(fs_info, 0);
|
|
|
|
|
|
|
|
/* don't use the log in recovery mode, it won't be valid */
|
|
|
|
btrfs_set_super_log_root(disk_super, 0);
|
|
|
|
|
|
|
|
/* we can't trust the free space cache either */
|
|
|
|
btrfs_set_opt(fs_info->mount_opt, CLEAR_CACHE);
|
|
|
|
|
|
|
|
ret = next_root_backup(fs_info, fs_info->super_copy,
|
|
|
|
&num_backups_tried, &backup_index);
|
|
|
|
if (ret == -1)
|
|
|
|
goto fail_block_groups;
|
|
|
|
goto retry_root_backup;
|
2007-02-02 22:18:22 +08:00
|
|
|
}
|
2018-01-13 01:55:33 +08:00
|
|
|
ALLOW_ERROR_INJECTION(open_ctree, ERRNO);
|
2007-02-02 22:18:22 +08:00
|
|
|
|
2008-04-11 04:19:33 +08:00
|
|
|
static void btrfs_end_buffer_write_sync(struct buffer_head *bh, int uptodate)
|
|
|
|
{
|
|
|
|
if (uptodate) {
|
|
|
|
set_buffer_uptodate(bh);
|
|
|
|
} else {
|
2012-05-25 22:06:08 +08:00
|
|
|
struct btrfs_device *device = (struct btrfs_device *)
|
|
|
|
bh->b_private;
|
|
|
|
|
2016-06-23 06:54:56 +08:00
|
|
|
btrfs_warn_rl_in_rcu(device->fs_info,
|
2015-10-08 16:43:10 +08:00
|
|
|
"lost page write due to IO error on %s",
|
2012-06-05 02:03:51 +08:00
|
|
|
rcu_str_deref(device->name));
|
2016-05-20 09:18:45 +08:00
|
|
|
/* note, we don't set_buffer_write_io_error because we have
|
2008-05-13 01:39:03 +08:00
|
|
|
* our own ways of dealing with the IO errors
|
|
|
|
*/
|
2008-04-11 04:19:33 +08:00
|
|
|
clear_buffer_uptodate(bh);
|
2012-05-25 22:06:08 +08:00
|
|
|
btrfs_dev_stat_inc_and_print(device, BTRFS_DEV_STAT_WRITE_ERRS);
|
2008-04-11 04:19:33 +08:00
|
|
|
}
|
|
|
|
unlock_buffer(bh);
|
|
|
|
put_bh(bh);
|
|
|
|
}
|
|
|
|
|
2015-08-14 18:32:58 +08:00
|
|
|
int btrfs_read_dev_one_super(struct block_device *bdev, int copy_num,
|
|
|
|
struct buffer_head **bh_ret)
|
|
|
|
{
|
|
|
|
struct buffer_head *bh;
|
|
|
|
struct btrfs_super_block *super;
|
|
|
|
u64 bytenr;
|
|
|
|
|
|
|
|
bytenr = btrfs_sb_offset(copy_num);
|
|
|
|
if (bytenr + BTRFS_SUPER_INFO_SIZE >= i_size_read(bdev->bd_inode))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2017-06-16 07:48:05 +08:00
|
|
|
bh = __bread(bdev, bytenr / BTRFS_BDEV_BLOCKSIZE, BTRFS_SUPER_INFO_SIZE);
|
2015-08-14 18:32:58 +08:00
|
|
|
/*
|
|
|
|
* If we fail to read from the underlying devices, as of now
|
|
|
|
* the best option we have is to mark it EIO.
|
|
|
|
*/
|
|
|
|
if (!bh)
|
|
|
|
return -EIO;
|
|
|
|
|
|
|
|
super = (struct btrfs_super_block *)bh->b_data;
|
|
|
|
if (btrfs_super_bytenr(super) != bytenr ||
|
|
|
|
btrfs_super_magic(super) != BTRFS_MAGIC) {
|
|
|
|
brelse(bh);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
*bh_ret = bh;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2008-12-09 05:46:26 +08:00
|
|
|
struct buffer_head *btrfs_read_dev_super(struct block_device *bdev)
|
|
|
|
{
|
|
|
|
struct buffer_head *bh;
|
|
|
|
struct buffer_head *latest = NULL;
|
|
|
|
struct btrfs_super_block *super;
|
|
|
|
int i;
|
|
|
|
u64 transid = 0;
|
2015-08-14 18:32:51 +08:00
|
|
|
int ret = -EINVAL;
|
2008-12-09 05:46:26 +08:00
|
|
|
|
|
|
|
/* we would like to check all the supers, but that would make
|
|
|
|
* a btrfs mount succeed after a mkfs from a different FS.
|
|
|
|
* So, we need to add a special mount option to scan for
|
|
|
|
* later supers, using BTRFS_SUPER_MIRROR_MAX instead
|
|
|
|
*/
|
|
|
|
for (i = 0; i < 1; i++) {
|
2015-08-14 18:32:58 +08:00
|
|
|
ret = btrfs_read_dev_one_super(bdev, i, &bh);
|
|
|
|
if (ret)
|
2008-12-09 05:46:26 +08:00
|
|
|
continue;
|
|
|
|
|
|
|
|
super = (struct btrfs_super_block *)bh->b_data;
|
|
|
|
|
|
|
|
if (!latest || btrfs_super_generation(super) > transid) {
|
|
|
|
brelse(latest);
|
|
|
|
latest = bh;
|
|
|
|
transid = btrfs_super_generation(super);
|
|
|
|
} else {
|
|
|
|
brelse(bh);
|
|
|
|
}
|
|
|
|
}
|
2015-08-14 18:32:51 +08:00
|
|
|
|
|
|
|
if (!latest)
|
|
|
|
return ERR_PTR(ret);
|
|
|
|
|
2008-12-09 05:46:26 +08:00
|
|
|
return latest;
|
|
|
|
}
|
|
|
|
|
2009-06-11 03:28:55 +08:00
|
|
|
/*
|
2017-06-16 06:50:33 +08:00
|
|
|
* Write superblock @sb to the @device. Do not wait for completion, all the
|
|
|
|
* buffer heads we write are pinned.
|
2009-06-11 03:28:55 +08:00
|
|
|
*
|
2017-06-16 06:50:33 +08:00
|
|
|
* Write @max_mirrors copies of the superblock, where 0 means default that fit
|
|
|
|
* the expected device size at commit time. Note that max_mirrors must be
|
|
|
|
* same for write and wait phases.
|
2009-06-11 03:28:55 +08:00
|
|
|
*
|
2017-06-16 06:50:33 +08:00
|
|
|
* Return number of errors when buffer head is not found or submission fails.
|
2009-06-11 03:28:55 +08:00
|
|
|
*/
|
2008-12-09 05:46:26 +08:00
|
|
|
static int write_dev_supers(struct btrfs_device *device,
|
2017-06-16 06:50:33 +08:00
|
|
|
struct btrfs_super_block *sb, int max_mirrors)
|
2008-12-09 05:46:26 +08:00
|
|
|
{
|
|
|
|
struct buffer_head *bh;
|
|
|
|
int i;
|
|
|
|
int ret;
|
|
|
|
int errors = 0;
|
|
|
|
u32 crc;
|
|
|
|
u64 bytenr;
|
2017-12-06 14:54:02 +08:00
|
|
|
int op_flags;
|
2008-12-09 05:46:26 +08:00
|
|
|
|
|
|
|
if (max_mirrors == 0)
|
|
|
|
max_mirrors = BTRFS_SUPER_MIRROR_MAX;
|
|
|
|
|
|
|
|
for (i = 0; i < max_mirrors; i++) {
|
|
|
|
bytenr = btrfs_sb_offset(i);
|
2014-09-03 21:35:33 +08:00
|
|
|
if (bytenr + BTRFS_SUPER_INFO_SIZE >=
|
|
|
|
device->commit_total_bytes)
|
2008-12-09 05:46:26 +08:00
|
|
|
break;
|
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
btrfs_set_super_bytenr(sb, bytenr);
|
2009-06-11 03:28:55 +08:00
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
crc = ~(u32)0;
|
|
|
|
crc = btrfs_csum_data((const char *)sb + BTRFS_CSUM_SIZE, crc,
|
|
|
|
BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE);
|
|
|
|
btrfs_csum_final(crc, sb->csum);
|
2009-06-11 03:28:55 +08:00
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
/* One reference for us, and we leave it for the caller */
|
2017-06-16 07:48:05 +08:00
|
|
|
bh = __getblk(device->bdev, bytenr / BTRFS_BDEV_BLOCKSIZE,
|
2017-06-16 06:50:33 +08:00
|
|
|
BTRFS_SUPER_INFO_SIZE);
|
|
|
|
if (!bh) {
|
|
|
|
btrfs_err(device->fs_info,
|
|
|
|
"couldn't get super buffer head for bytenr %llu",
|
|
|
|
bytenr);
|
|
|
|
errors++;
|
2009-06-11 03:28:55 +08:00
|
|
|
continue;
|
2017-06-16 06:50:33 +08:00
|
|
|
}
|
2013-04-29 22:05:57 +08:00
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
memcpy(bh->b_data, sb, BTRFS_SUPER_INFO_SIZE);
|
2008-12-09 05:46:26 +08:00
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
/* one reference for submit_bh */
|
|
|
|
get_bh(bh);
|
2009-06-11 03:28:55 +08:00
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
set_buffer_uptodate(bh);
|
|
|
|
lock_buffer(bh);
|
|
|
|
bh->b_end_io = btrfs_end_buffer_write_sync;
|
|
|
|
bh->b_private = device;
|
2008-12-09 05:46:26 +08:00
|
|
|
|
2011-11-19 04:07:51 +08:00
|
|
|
/*
|
|
|
|
* we fua the first super. The others we allow
|
|
|
|
* to go down lazy.
|
|
|
|
*/
|
2017-12-06 14:54:02 +08:00
|
|
|
op_flags = REQ_SYNC | REQ_META | REQ_PRIO;
|
|
|
|
if (i == 0 && !btrfs_test_opt(device->fs_info, NOBARRIER))
|
|
|
|
op_flags |= REQ_FUA;
|
|
|
|
ret = btrfsic_submit_bh(REQ_OP_WRITE, op_flags, bh);
|
2009-06-11 03:28:55 +08:00
|
|
|
if (ret)
|
2008-12-09 05:46:26 +08:00
|
|
|
errors++;
|
|
|
|
}
|
|
|
|
return errors < i ? 0 : -1;
|
|
|
|
}
|
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
/*
|
|
|
|
* Wait for write completion of superblocks done by write_dev_supers,
|
|
|
|
* @max_mirrors same for write and wait phases.
|
|
|
|
*
|
|
|
|
* Return number of errors when buffer head is not found or not marked up to
|
|
|
|
* date.
|
|
|
|
*/
|
|
|
|
static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
|
|
|
|
{
|
|
|
|
struct buffer_head *bh;
|
|
|
|
int i;
|
|
|
|
int errors = 0;
|
2018-02-03 03:09:01 +08:00
|
|
|
bool primary_failed = false;
|
2017-06-16 06:50:33 +08:00
|
|
|
u64 bytenr;
|
|
|
|
|
|
|
|
if (max_mirrors == 0)
|
|
|
|
max_mirrors = BTRFS_SUPER_MIRROR_MAX;
|
|
|
|
|
|
|
|
for (i = 0; i < max_mirrors; i++) {
|
|
|
|
bytenr = btrfs_sb_offset(i);
|
|
|
|
if (bytenr + BTRFS_SUPER_INFO_SIZE >=
|
|
|
|
device->commit_total_bytes)
|
|
|
|
break;
|
|
|
|
|
2017-06-16 07:48:05 +08:00
|
|
|
bh = __find_get_block(device->bdev,
|
|
|
|
bytenr / BTRFS_BDEV_BLOCKSIZE,
|
2017-06-16 06:50:33 +08:00
|
|
|
BTRFS_SUPER_INFO_SIZE);
|
|
|
|
if (!bh) {
|
|
|
|
errors++;
|
2018-02-03 03:09:01 +08:00
|
|
|
if (i == 0)
|
|
|
|
primary_failed = true;
|
2017-06-16 06:50:33 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
wait_on_buffer(bh);
|
2018-02-03 03:09:01 +08:00
|
|
|
if (!buffer_uptodate(bh)) {
|
2017-06-16 06:50:33 +08:00
|
|
|
errors++;
|
2018-02-03 03:09:01 +08:00
|
|
|
if (i == 0)
|
|
|
|
primary_failed = true;
|
|
|
|
}
|
2017-06-16 06:50:33 +08:00
|
|
|
|
|
|
|
/* drop our reference */
|
|
|
|
brelse(bh);
|
|
|
|
|
|
|
|
/* drop the reference from the writing run */
|
|
|
|
brelse(bh);
|
|
|
|
}
|
|
|
|
|
2018-02-03 03:09:01 +08:00
|
|
|
/* log error, force error return */
|
|
|
|
if (primary_failed) {
|
|
|
|
btrfs_err(device->fs_info, "error writing primary super block to device %llu",
|
|
|
|
device->devid);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2017-06-16 06:50:33 +08:00
|
|
|
return errors < i ? 0 : -1;
|
|
|
|
}
|
|
|
|
|
2011-11-19 04:07:51 +08:00
|
|
|
/*
|
|
|
|
* endio for the write_dev_flush, this will wake anyone waiting
|
|
|
|
* for the barrier when it is done
|
|
|
|
*/
|
2015-07-20 21:29:37 +08:00
|
|
|
static void btrfs_end_empty_barrier(struct bio *bio)
|
2011-11-19 04:07:51 +08:00
|
|
|
{
|
2017-06-06 23:06:06 +08:00
|
|
|
complete(bio->bi_private);
|
2011-11-19 04:07:51 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2017-06-13 17:05:41 +08:00
|
|
|
* Submit a flush request to the device if it supports it. Error handling is
|
|
|
|
* done in the waiting counterpart.
|
2011-11-19 04:07:51 +08:00
|
|
|
*/
|
2017-06-13 17:05:41 +08:00
|
|
|
static void write_dev_flush(struct btrfs_device *device)
|
2011-11-19 04:07:51 +08:00
|
|
|
{
|
2017-04-06 11:22:53 +08:00
|
|
|
struct request_queue *q = bdev_get_queue(device->bdev);
|
2017-06-06 23:06:06 +08:00
|
|
|
struct bio *bio = device->flush_bio;
|
2011-11-19 04:07:51 +08:00
|
|
|
|
2017-04-06 11:22:53 +08:00
|
|
|
if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
|
2017-06-13 17:05:41 +08:00
|
|
|
return;
|
2011-11-19 04:07:51 +08:00
|
|
|
|
2017-06-06 23:06:06 +08:00
|
|
|
bio_reset(bio);
|
2011-11-19 04:07:51 +08:00
|
|
|
bio->bi_end_io = btrfs_end_empty_barrier;
|
2017-08-24 01:10:32 +08:00
|
|
|
bio_set_dev(bio, device->bdev);
|
2017-05-02 23:03:50 +08:00
|
|
|
bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH;
|
2011-11-19 04:07:51 +08:00
|
|
|
init_completion(&device->flush_wait);
|
|
|
|
bio->bi_private = &device->flush_wait;
|
|
|
|
|
2017-08-18 16:38:07 +08:00
|
|
|
btrfsic_submit_bio(bio);
|
2017-12-04 12:54:56 +08:00
|
|
|
set_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state);
|
2017-06-13 17:05:41 +08:00
|
|
|
}
|
2011-11-19 04:07:51 +08:00
|
|
|
|
2017-06-13 17:05:41 +08:00
|
|
|
/*
 * If the flush bio has been submitted by write_dev_flush, wait for it.
 */
static blk_status_t wait_dev_flush(struct btrfs_device *device)
{
	struct bio *bio = device->flush_bio;

	if (!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state))
		return BLK_STS_OK;

	clear_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state);
	wait_for_completion_io(&device->flush_wait);

	return bio->bi_status;
}

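/*
 * Decide whether barrier failures are fatal: return -EIO when the remaining
 * healthy devices can no longer support a read-write mount.
 */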
static int check_barrier_error(struct btrfs_fs_info *fs_info)
{
	if (!btrfs_check_rw_degradable(fs_info, NULL))
		return -EIO;
	return 0;
}

/*
 * send an empty flush down to each device in parallel,
 * then wait for them
 */
static int barrier_all_devices(struct btrfs_fs_info *info)
{
	struct list_head *head;
	struct btrfs_device *dev;
	int errors_wait = 0;
	blk_status_t ret;

	lockdep_assert_held(&info->fs_devices->device_list_mutex);
	/* send down all the barriers */
	head = &info->fs_devices->devices;
	list_for_each_entry(dev, head, dev_list) {
		if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
			continue;
		if (!dev->bdev)
			continue;
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		write_dev_flush(dev);
		dev->last_flush_error = BLK_STS_OK;
	}

	/* wait for all the barriers */
	list_for_each_entry(dev, head, dev_list) {
		if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
			continue;
		if (!dev->bdev) {
			errors_wait++;
			continue;
		}
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		ret = wait_dev_flush(dev);
		if (ret) {
			dev->last_flush_error = ret;
			btrfs_dev_stat_inc_and_print(dev,
					BTRFS_DEV_STAT_FLUSH_ERRS);
			errors_wait++;
		}
	}

	if (errors_wait) {
		/*
		 * We need the status of all disks to arrive at the status of
		 * the whole volume, so the error checking is pushed out to a
		 * separate helper.
		 */
		return check_barrier_error(info);
	}
	return 0;
}

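/*
 * Return the minimum number of device failures that can be tolerated across
 * all of the block group profiles set in @flags.
 */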
int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags)
{
	int raid_type;
	int min_tolerated = INT_MAX;

	if ((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 ||
	    (flags & BTRFS_AVAIL_ALLOC_BIT_SINGLE))
		min_tolerated = min(min_tolerated,
				    btrfs_raid_array[BTRFS_RAID_SINGLE].
				    tolerated_failures);

	for (raid_type = 0; raid_type < BTRFS_NR_RAID_TYPES; raid_type++) {
		if (raid_type == BTRFS_RAID_SINGLE)
			continue;
		if (!(flags & btrfs_raid_array[raid_type].bg_flag))
			continue;
		min_tolerated = min(min_tolerated,
				    btrfs_raid_array[raid_type].
				    tolerated_failures);
	}

	if (min_tolerated == INT_MAX) {
		pr_warn("BTRFS: unknown raid flag: %llu", flags);
		min_tolerated = 0;
	}

	return min_tolerated;
}

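/*
 * Write the super block to all writeable devices: send a barrier to each
 * device first (unless barriers are disabled), fill in the per-device item,
 * validate the super block, then submit the writes and wait for them,
 * tolerating up to num_devices - 1 individual failures.
 */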
int write_all_supers(struct btrfs_fs_info *fs_info, int max_mirrors)
{
	struct list_head *head;
	struct btrfs_device *dev;
	struct btrfs_super_block *sb;
	struct btrfs_dev_item *dev_item;
	int ret;
	int do_barriers;
	int max_errors;
	int total_errors = 0;
	u64 flags;

	do_barriers = !btrfs_test_opt(fs_info, NOBARRIER);

	/*
	 * max_mirrors == 0 indicates we're from commit_transaction,
	 * not from fsync where the tree roots in fs_info have not
	 * been consistent on disk.
	 */
	if (max_mirrors == 0)
		backup_super_roots(fs_info);

	sb = fs_info->super_for_commit;
	dev_item = &sb->dev_item;

	mutex_lock(&fs_info->fs_devices->device_list_mutex);
	head = &fs_info->fs_devices->devices;
	max_errors = btrfs_super_num_devices(fs_info->super_copy) - 1;

	if (do_barriers) {
		ret = barrier_all_devices(fs_info);
		if (ret) {
			mutex_unlock(
				&fs_info->fs_devices->device_list_mutex);
			btrfs_handle_fs_error(fs_info, ret,
					      "errors while submitting device barriers.");
			return ret;
		}
	}

	list_for_each_entry(dev, head, dev_list) {
		if (!dev->bdev) {
			total_errors++;
			continue;
		}
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		btrfs_set_stack_device_generation(dev_item, 0);
		btrfs_set_stack_device_type(dev_item, dev->type);
		btrfs_set_stack_device_id(dev_item, dev->devid);
		btrfs_set_stack_device_total_bytes(dev_item,
						   dev->commit_total_bytes);
		btrfs_set_stack_device_bytes_used(dev_item,
						  dev->commit_bytes_used);
		btrfs_set_stack_device_io_align(dev_item, dev->io_align);
		btrfs_set_stack_device_io_width(dev_item, dev->io_width);
		btrfs_set_stack_device_sector_size(dev_item, dev->sector_size);
		memcpy(dev_item->uuid, dev->uuid, BTRFS_UUID_SIZE);
		memcpy(dev_item->fsid, dev->fs_devices->metadata_uuid,
		       BTRFS_FSID_SIZE);

		flags = btrfs_super_flags(sb);
		btrfs_set_super_flags(sb, flags | BTRFS_HEADER_FLAG_WRITTEN);

		ret = btrfs_validate_write_super(fs_info, sb);
		if (ret < 0) {
			mutex_unlock(&fs_info->fs_devices->device_list_mutex);
			btrfs_handle_fs_error(fs_info, -EUCLEAN,
				"unexpected superblock corruption detected");
			return -EUCLEAN;
		}

		ret = write_dev_supers(dev, sb, max_mirrors);
		if (ret)
			total_errors++;
	}
	if (total_errors > max_errors) {
		btrfs_err(fs_info, "%d errors while writing supers",
			  total_errors);
		mutex_unlock(&fs_info->fs_devices->device_list_mutex);

		/* FUA is masked off if unsupported and can't be the reason */
		btrfs_handle_fs_error(fs_info, -EIO,
				      "%d errors while writing supers",
				      total_errors);
		return -EIO;
	}

	total_errors = 0;
	list_for_each_entry(dev, head, dev_list) {
		if (!dev->bdev)
			continue;
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		ret = wait_dev_supers(dev, max_mirrors);
		if (ret)
			total_errors++;
	}
	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
	if (total_errors > max_errors) {
		btrfs_handle_fs_error(fs_info, -EIO,
				      "%d errors while writing supers",
				      total_errors);
		return -EIO;
	}
	return 0;
}

/* Drop a fs root from the radix tree and free it. */
void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
				 struct btrfs_root *root)
{
	spin_lock(&fs_info->fs_roots_radix_lock);
	radix_tree_delete(&fs_info->fs_roots_radix,
			  (unsigned long)root->root_key.objectid);
	spin_unlock(&fs_info->fs_roots_radix_lock);

	if (btrfs_root_refs(&root->root_item) == 0)
		synchronize_srcu(&fs_info->subvol_srcu);

	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
		btrfs_free_log(NULL, root);
		if (root->reloc_root) {
			free_extent_buffer(root->reloc_root->node);
			free_extent_buffer(root->reloc_root->commit_root);
			btrfs_put_fs_root(root->reloc_root);
			root->reloc_root = NULL;
		}
	}

	if (root->free_ino_pinned)
		__btrfs_remove_free_space_cache(root->free_ino_pinned);
	if (root->free_ino_ctl)
		__btrfs_remove_free_space_cache(root->free_ino_ctl);
	btrfs_free_fs_root(root);
}

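/* Free the in-memory resources of a fs root and drop our reference. */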
void btrfs_free_fs_root(struct btrfs_root *root)
{
	iput(root->ino_cache_inode);
	WARN_ON(!RB_EMPTY_ROOT(&root->inode_tree));
	if (root->anon_dev)
		free_anon_bdev(root->anon_dev);
	if (root->subv_writers)
		btrfs_free_subvolume_writers(root->subv_writers);
	free_extent_buffer(root->node);
	free_extent_buffer(root->commit_root);
	kfree(root->free_ino_ctl);
	kfree(root->free_ino_pinned);
	btrfs_put_fs_root(root);
}

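/*
 * Run orphan cleanup on every fs root in the radix tree. Roots are looked up
 * in batches under SRCU and grabbed before use so that roots already on the
 * dead list are skipped safely.
 */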
int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info)
{
	u64 root_objectid = 0;
	struct btrfs_root *gang[8];
	int i = 0;
	int err = 0;
	unsigned int ret = 0;
	int index;

	while (1) {
		index = srcu_read_lock(&fs_info->subvol_srcu);
		ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
					     (void **)gang, root_objectid,
					     ARRAY_SIZE(gang));
		if (!ret) {
			srcu_read_unlock(&fs_info->subvol_srcu, index);
			break;
		}
		root_objectid = gang[ret - 1]->root_key.objectid + 1;

		for (i = 0; i < ret; i++) {
			/* Avoid grabbing roots in dead_roots */
			if (btrfs_root_refs(&gang[i]->root_item) == 0) {
				gang[i] = NULL;
				continue;
			}
			/* grab all the search results for later use */
			gang[i] = btrfs_grab_fs_root(gang[i]);
		}
		srcu_read_unlock(&fs_info->subvol_srcu, index);

		for (i = 0; i < ret; i++) {
			if (!gang[i])
				continue;
			root_objectid = gang[i]->root_key.objectid;
			err = btrfs_orphan_cleanup(gang[i]);
			if (err)
				break;
			btrfs_put_fs_root(gang[i]);
		}
		root_objectid++;
	}

	/* release the uncleaned roots due to error */
	for (; i < ret; i++) {
		if (gang[i])
			btrfs_put_fs_root(gang[i]);
	}
	return err;
}

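/*
 * Run the delayed iputs and wait for ongoing cleanup work, then commit the
 * current transaction.
 */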
int btrfs_commit_super(struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *root = fs_info->tree_root;
	struct btrfs_trans_handle *trans;

	mutex_lock(&fs_info->cleaner_mutex);
	btrfs_run_delayed_iputs(fs_info);
	mutex_unlock(&fs_info->cleaner_mutex);
	wake_up_process(fs_info->cleaner_kthread);

	/* wait until ongoing cleanup work is done */
	down_write(&fs_info->cleanup_work_sem);
	up_write(&fs_info->cleanup_work_sem);

	trans = btrfs_join_transaction(root);
	if (IS_ERR(trans))
		return PTR_ERR(trans);
	return btrfs_commit_transaction(trans);
}

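/*
 * Tear down the filesystem at unmount time: park and stop the background
 * kthreads and workers, commit the final transaction on read-write mounts,
 * and release all remaining in-memory state.
 */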
void close_ctree(struct btrfs_fs_info *fs_info)
{
	int ret;

	set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
	/*
	 * We don't want the cleaner to start new transactions, add more delayed
	 * iputs, etc. while we're closing. We can't use kthread_stop() yet
	 * because that frees the task_struct, and the transaction kthread might
	 * still try to wake up the cleaner.
	 */
	kthread_park(fs_info->cleaner_kthread);

	/* wait for the qgroup rescan worker to stop */
	btrfs_qgroup_wait_for_completion(fs_info, false);

	/* wait for the uuid_scan task to finish */
	down(&fs_info->uuid_tree_rescan_sem);
	/* avoid complaints from lockdep et al., set sem back to initial state */
	up(&fs_info->uuid_tree_rescan_sem);

	/* pause restriper - we want to resume on mount */
	btrfs_pause_balance(fs_info);

	btrfs_dev_replace_suspend_for_unmount(fs_info);

	btrfs_scrub_cancel(fs_info);

	/* wait for any defraggers to finish */
	wait_event(fs_info->transaction_wait,
		   (atomic_read(&fs_info->defrag_running) == 0));

	/* clear out the rbtree of defraggable inodes */
	btrfs_cleanup_defrag_inodes(fs_info);

	cancel_work_sync(&fs_info->async_reclaim_work);

	if (!sb_rdonly(fs_info->sb)) {
		/*
		 * The cleaner kthread is stopped, so do one final pass over
		 * unused block groups.
		 */
		btrfs_delete_unused_bgs(fs_info);

		ret = btrfs_commit_super(fs_info);
		if (ret)
			btrfs_err(fs_info, "commit super ret %d", ret);
	}

	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state) ||
	    test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state))
		btrfs_error_commit_super(fs_info);

	kthread_stop(fs_info->transaction_kthread);
	kthread_stop(fs_info->cleaner_kthread);

	ASSERT(list_empty(&fs_info->delayed_iputs));
	set_bit(BTRFS_FS_CLOSING_DONE, &fs_info->flags);

	btrfs_free_qgroup_config(fs_info);
	ASSERT(list_empty(&fs_info->delalloc_roots));

	if (percpu_counter_sum(&fs_info->delalloc_bytes)) {
		btrfs_info(fs_info, "at unmount delalloc count %lld",
			   percpu_counter_sum(&fs_info->delalloc_bytes));
	}

	if (percpu_counter_sum(&fs_info->dio_bytes))
		btrfs_info(fs_info, "at unmount dio bytes count %lld",
			   percpu_counter_sum(&fs_info->dio_bytes));

	btrfs_sysfs_remove_mounted(fs_info);
	btrfs_sysfs_remove_fsid(fs_info->fs_devices);

	btrfs_free_fs_roots(fs_info);

	btrfs_put_block_group_cache(fs_info);

	/*
	 * We must make sure there are no read requests submitted after we
	 * stop all the workers.
	 */
	invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
	btrfs_stop_all_workers(fs_info);

	btrfs_free_block_groups(fs_info);

	clear_bit(BTRFS_FS_OPEN, &fs_info->flags);
	free_root_pointers(fs_info, 1);

	iput(fs_info->btree_inode);

#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
	if (btrfs_test_opt(fs_info, CHECK_INTEGRITY))
		btrfsic_unmount(fs_info->fs_devices);
#endif

	btrfs_mapping_tree_free(&fs_info->mapping_tree);
	btrfs_close_devices(fs_info->fs_devices);

	percpu_counter_destroy(&fs_info->dirty_metadata_bytes);
	percpu_counter_destroy(&fs_info->delalloc_bytes);
	percpu_counter_destroy(&fs_info->dio_bytes);
	percpu_counter_destroy(&fs_info->dev_replace.bio_counter);
	cleanup_srcu_struct(&fs_info->subvol_srcu);

	btrfs_free_stripe_hash_table(fs_info);
	btrfs_free_ref_cache(fs_info);
}

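/*
 * Returns 1 if the buffer is up to date and its generation matches
 * @parent_transid, 0 if it is not, or -EAGAIN when @atomic is set and the
 * check cannot be completed without blocking.
 */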
int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid,
			  int atomic)
{
	int ret;
	struct inode *btree_inode = buf->pages[0]->mapping->host;

	ret = extent_buffer_uptodate(buf);
	if (!ret)
		return ret;

	ret = verify_parent_transid(&BTRFS_I(btree_inode)->io_tree, buf,
				    parent_transid, atomic);
	if (ret == -EAGAIN)
		return ret;
	return !ret;
}

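/* Mark a tree block dirty and account it against dirty_metadata_bytes. */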
void btrfs_mark_buffer_dirty(struct extent_buffer *buf)
{
	struct btrfs_fs_info *fs_info;
	struct btrfs_root *root;
	u64 transid = btrfs_header_generation(buf);
	int was_dirty;

#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
	/*
	 * This is a fast path so only do this check if we have sanity tests
	 * enabled. Normal people shouldn't be using unmapped buffers as dirty
	 * outside of the sanity tests.
	 */
	if (unlikely(test_bit(EXTENT_BUFFER_UNMAPPED, &buf->bflags)))
		return;
#endif
	root = BTRFS_I(buf->pages[0]->mapping->host)->root;
	fs_info = root->fs_info;
	btrfs_assert_tree_locked(buf);
	if (transid != fs_info->generation)
		WARN(1, KERN_CRIT "btrfs transid mismatch buffer %llu, found %llu running %llu\n",
			buf->start, transid, fs_info->generation);
	was_dirty = set_extent_buffer_dirty(buf);
	if (!was_dirty)
		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
					 buf->len,
					 fs_info->dirty_metadata_batch);
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
	/*
	 * btrfs_mark_buffer_dirty() can be called with the item pointer set
	 * but the item data not yet updated, so only check item pointers
	 * here, not item data.
	 */
	if (btrfs_header_level(buf) == 0 &&
	    btrfs_check_leaf_relaxed(buf)) {
		btrfs_print_leaf(buf);
		ASSERT(0);
	}
#endif
}

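/*
 * Throttle the caller when too much btree metadata is dirty, optionally
 * balancing the delayed items first.
 */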
static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
					int flush_delayed)
{
	/*
	 * looks as though older kernels can get into trouble with
	 * this code; they end up stuck in balance_dirty_pages forever
	 */
	int ret;

	if (current->flags & PF_MEMALLOC)
		return;

	if (flush_delayed)
		btrfs_balance_delayed_items(fs_info);

	ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
				       BTRFS_DIRTY_METADATA_THRESH,
				       fs_info->dirty_metadata_batch);
	if (ret > 0) {
		balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
	}
}
2011-04-22 18:12:22 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info)
|
2007-05-03 03:53:43 +08:00
|
|
|
{
|
2016-06-23 06:54:24 +08:00
|
|
|
__btrfs_btree_balance_dirty(fs_info, 1);
|
2012-11-14 22:34:34 +08:00
|
|
|
}
|
2009-05-18 22:41:58 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info)
|
2012-11-14 22:34:34 +08:00
|
|
|
{
|
2016-06-23 06:54:24 +08:00
|
|
|
__btrfs_btree_balance_dirty(fs_info, 0);
|
2007-05-03 03:53:43 +08:00
|
|
|
}
|
2007-10-16 04:17:34 +08:00
|
|
|
|
2018-03-29 09:08:11 +08:00
|
|
|
int btrfs_read_buffer(struct extent_buffer *buf, u64 parent_transid, int level,
|
|
|
|
struct btrfs_key *first_key)
|
2007-10-16 04:17:34 +08:00
|
|
|
{
|
2019-03-20 21:56:39 +08:00
|
|
|
return btree_read_extent_buffer_pages(buf, parent_transid,
|
2018-03-29 09:08:11 +08:00
|
|
|
level, first_key);
|
2007-10-16 04:17:34 +08:00
|
|
|
}
|
2007-11-08 10:08:01 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info)
|
2011-01-06 19:30:25 +08:00
|
|
|
{
|
2018-04-27 17:21:53 +08:00
|
|
|
/* cleanup FS via transaction */
|
|
|
|
btrfs_cleanup_transaction(fs_info);
|
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
mutex_lock(&fs_info->cleaner_mutex);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_run_delayed_iputs(fs_info);
|
2016-06-23 06:54:23 +08:00
|
|
|
mutex_unlock(&fs_info->cleaner_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
down_write(&fs_info->cleanup_work_sem);
|
|
|
|
up_write(&fs_info->cleanup_work_sem);
|
2011-01-06 19:30:25 +08:00
|
|
|
}
|
|
|
|
|
2012-03-01 21:56:26 +08:00
|
|
|
static void btrfs_destroy_ordered_extents(struct btrfs_root *root)
|
2011-01-06 19:30:25 +08:00
|
|
|
{
|
|
|
|
struct btrfs_ordered_extent *ordered;
|
|
|
|
|
2013-05-15 15:48:23 +08:00
|
|
|
spin_lock(&root->ordered_extent_lock);
|
2013-02-01 03:30:08 +08:00
|
|
|
/*
|
|
|
|
* This will just short circuit the ordered completion stuff which will
|
|
|
|
* make sure the ordered extent gets properly cleaned up.
|
|
|
|
*/
|
2013-05-15 15:48:23 +08:00
|
|
|
list_for_each_entry(ordered, &root->ordered_extents,
|
2013-02-01 03:30:08 +08:00
|
|
|
root_extent_list)
|
|
|
|
set_bit(BTRFS_ORDERED_IOERR, &ordered->flags);
|
2013-05-15 15:48:23 +08:00
|
|
|
spin_unlock(&root->ordered_extent_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void btrfs_destroy_all_ordered_extents(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
|
|
|
spin_lock(&fs_info->ordered_root_lock);
|
|
|
|
list_splice_init(&fs_info->ordered_roots, &splice);
|
|
|
|
while (!list_empty(&splice)) {
|
|
|
|
root = list_first_entry(&splice, struct btrfs_root,
|
|
|
|
ordered_root);
|
2013-09-28 04:36:02 +08:00
|
|
|
list_move_tail(&root->ordered_root,
|
|
|
|
&fs_info->ordered_roots);
|
2013-05-15 15:48:23 +08:00
|
|
|
|
2014-02-10 17:07:16 +08:00
|
|
|
spin_unlock(&fs_info->ordered_root_lock);
|
2013-05-15 15:48:23 +08:00
|
|
|
btrfs_destroy_ordered_extents(root);
|
|
|
|
|
2014-02-10 17:07:16 +08:00
|
|
|
cond_resched();
|
|
|
|
spin_lock(&fs_info->ordered_root_lock);
|
2013-05-15 15:48:23 +08:00
|
|
|
}
|
|
|
|
spin_unlock(&fs_info->ordered_root_lock);
|
2018-11-22 03:05:45 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We need this here because if we've been flipped read-only we won't
|
|
|
|
* get sync() from the umount, so we need to make sure any ordered
|
|
|
|
* extents that haven't had their dirty pages IO start writeout yet
|
|
|
|
* actually get run and error out properly.
|
|
|
|
*/
|
|
|
|
btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
|
2011-01-06 19:30:25 +08:00
|
|
|
}
|
|
|
|
|
2013-08-15 00:12:25 +08:00
|
|
|
static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
|
2016-06-23 06:54:24 +08:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2011-01-06 19:30:25 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
struct btrfs_delayed_ref_root *delayed_refs;
|
|
|
|
struct btrfs_delayed_ref_node *ref;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
delayed_refs = &trans->delayed_refs;
|
|
|
|
|
|
|
|
spin_lock(&delayed_refs->lock);
|
2014-01-23 22:21:38 +08:00
|
|
|
if (atomic_read(&delayed_refs->num_entries) == 0) {
|
2011-04-26 07:43:52 +08:00
|
|
|
spin_unlock(&delayed_refs->lock);
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_info(fs_info, "delayed_refs has NO entry");
|
2011-01-06 19:30:25 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-08-23 03:51:49 +08:00
|
|
|
while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) {
|
2014-01-23 22:21:38 +08:00
|
|
|
struct btrfs_delayed_ref_head *head;
|
2017-10-20 02:16:00 +08:00
|
|
|
struct rb_node *n;
|
2013-06-04 04:42:36 +08:00
|
|
|
bool pin_bytes = false;
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2014-01-23 22:21:38 +08:00
|
|
|
head = rb_entry(node, struct btrfs_delayed_ref_head,
|
|
|
|
href_node);
|
2018-11-22 03:05:39 +08:00
|
|
|
if (btrfs_delayed_ref_lock(delayed_refs, head))
|
2014-01-23 22:21:38 +08:00
|
|
|
continue;
|
2018-11-22 03:05:39 +08:00
|
|
|
|
2014-01-23 22:21:38 +08:00
|
|
|
spin_lock(&head->lock);
|
2018-08-23 03:51:50 +08:00
|
|
|
while ((n = rb_first_cached(&head->ref_tree)) != NULL) {
|
2017-10-20 02:16:00 +08:00
|
|
|
ref = rb_entry(n, struct btrfs_delayed_ref_node,
|
|
|
|
ref_node);
|
2014-01-23 22:21:38 +08:00
|
|
|
ref->in_tree = 0;
|
2018-08-23 03:51:50 +08:00
|
|
|
rb_erase_cached(&ref->ref_node, &head->ref_tree);
|
2017-10-20 02:16:00 +08:00
|
|
|
RB_CLEAR_NODE(&ref->ref_node);
|
btrfs: improve delayed refs iterations
This issue was found when I tried to delete a heavily reflinked file.
When deleting such files, other transaction operations do not have a
chance to make progress; for example, start_transaction() will block
in wait_current_trans(root) for a long time, sometimes even triggering
soft lockups. The time taken to delete such a heavily reflinked file
is also very large, often hundreds of seconds. perf top reports:
PerfTop: 7416 irqs/sec kernel:99.8% exact: 0.0% [4000Hz cpu-clock], (all, 4 CPUs)
---------------------------------------------------------------------------------------
84.37% [btrfs] [k] __btrfs_run_delayed_refs.constprop.80
11.02% [kernel] [k] delay_tsc
0.79% [kernel] [k] _raw_spin_unlock_irq
0.78% [kernel] [k] _raw_spin_unlock_irqrestore
0.45% [kernel] [k] do_raw_spin_lock
0.18% [kernel] [k] __slab_alloc
It seems __btrfs_run_delayed_refs() took most of the cpu time. After some
debugging I found that select_delayed_ref() causes this issue: a delayed
head will, in our case, be full of BTRFS_DROP_DELAYED_REF nodes, but
select_delayed_ref() first iterates the whole node list looking for
BTRFS_ADD_DELAYED_REF nodes, which is obviously a disaster in this case
and wastes much time.
To fix this issue, we introduce a new ref_add_list in struct
btrfs_delayed_ref_head; then in select_delayed_ref(), if this list is not
empty, we can directly use nodes from it. With this patch, it took only
about 10~15 seconds to delete the same file. perf top now reports:
PerfTop: 2734 irqs/sec kernel:99.5% exact: 0.0% [4000Hz cpu-clock], (all, 4 CPUs)
----------------------------------------------------------------------------------------
20.74% [kernel] [k] _raw_spin_unlock_irqrestore
16.33% [kernel] [k] __slab_alloc
5.41% [kernel] [k] lock_acquired
4.42% [kernel] [k] lock_acquire
4.05% [kernel] [k] lock_release
3.37% [kernel] [k] _raw_spin_unlock_irq
This patch also helps for normal files; at least we no longer need to
iterate the whole list to find BTRFS_ADD_DELAYED_REF nodes. (A sketch of
the idea follows this message.)
Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-10-26 18:07:33 +08:00
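The core of the fix can be sketched in a few lines of userspace C. This is
an illustrative model only; the struct layouts and names here are simplified
assumptions, not the actual btrfs_delayed_ref_head definition from
fs/btrfs/delayed-ref.h.

/*
 * Sketch of the ref_add_list idea: keep a second list holding only the
 * ADD nodes so selection is O(1) instead of scanning every DROP node.
 * Types and names are assumed for illustration.
 */
#include <stddef.h>
#include <stdio.h>

struct ref_node {
	int action;                /* ADD or DROP */
	struct ref_node *next;     /* position in the full node list */
	struct ref_node *add_next; /* position in the add-only list, or NULL */
};

struct ref_head {
	struct ref_node *node_list; /* every pending ref, adds and drops */
	struct ref_node *add_list;  /* only the ADD nodes */
};

#define ADD  1
#define DROP 2

/*
 * Old behaviour: walk the whole node list hunting for an ADD node,
 * which is O(n) when the head is full of DROP nodes.
 * New behaviour: if add_list is non-empty, take its first node in O(1).
 */
static struct ref_node *select_delayed_ref(struct ref_head *head)
{
	if (head->add_list)
		return head->add_list;	/* O(1) fast path */
	return head->node_list;		/* only drops remain */
}

int main(void)
{
	struct ref_node drop = { DROP, NULL, NULL };
	struct ref_node add  = { ADD, &drop, NULL };
	struct ref_head head = { &add, &add };

	printf("selected action=%d\n", select_delayed_ref(&head)->action);
	return 0;
}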
|
|
|
if (!list_empty(&ref->add_list))
|
|
|
|
list_del(&ref->add_list);
|
2014-01-23 22:21:38 +08:00
|
|
|
atomic_dec(&delayed_refs->num_entries);
|
|
|
|
btrfs_put_delayed_ref(ref);
|
2013-06-04 04:42:36 +08:00
|
|
|
}
|
2014-01-23 22:21:38 +08:00
|
|
|
if (head->must_insert_reserved)
|
|
|
|
pin_bytes = true;
|
|
|
|
btrfs_free_delayed_extent_op(head->extent_op);
|
2018-11-22 03:05:40 +08:00
|
|
|
btrfs_delete_ref_head(delayed_refs, head);
|
2014-01-23 22:21:38 +08:00
|
|
|
spin_unlock(&head->lock);
|
|
|
|
spin_unlock(&delayed_refs->lock);
|
|
|
|
mutex_unlock(&head->mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2014-01-23 22:21:38 +08:00
|
|
|
if (pin_bytes)
|
2017-09-30 03:43:57 +08:00
|
|
|
btrfs_pin_extent(fs_info, head->bytenr,
|
|
|
|
head->num_bytes, 1);
|
2018-11-22 03:05:41 +08:00
|
|
|
btrfs_cleanup_ref_head_accounting(fs_info, delayed_refs, head);
|
2017-09-30 03:43:57 +08:00
|
|
|
btrfs_put_delayed_ref_head(head);
|
2011-01-06 19:30:25 +08:00
|
|
|
cond_resched();
|
|
|
|
spin_lock(&delayed_refs->lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
spin_unlock(&delayed_refs->lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2012-03-01 21:56:26 +08:00
|
|
|
static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
|
2011-01-06 19:30:25 +08:00
|
|
|
{
|
|
|
|
struct btrfs_inode *btrfs_inode;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
2013-05-15 15:48:22 +08:00
|
|
|
spin_lock(&root->delalloc_lock);
|
|
|
|
list_splice_init(&root->delalloc_inodes, &splice);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
|
|
|
while (!list_empty(&splice)) {
|
2018-04-27 17:21:53 +08:00
|
|
|
struct inode *inode = NULL;
|
2013-05-15 15:48:22 +08:00
|
|
|
btrfs_inode = list_first_entry(&splice, struct btrfs_inode,
|
|
|
|
delalloc_inodes);
|
2018-04-27 17:21:53 +08:00
|
|
|
__btrfs_del_delalloc_inode(root, btrfs_inode);
|
2013-05-15 15:48:22 +08:00
|
|
|
spin_unlock(&root->delalloc_lock);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2018-04-27 17:21:53 +08:00
|
|
|
/*
|
|
|
|
* Make sure we get a live inode and that it'll not disappear
|
|
|
|
* meanwhile.
|
|
|
|
*/
|
|
|
|
inode = igrab(&btrfs_inode->vfs_inode);
|
|
|
|
if (inode) {
|
|
|
|
invalidate_inode_pages2(inode->i_mapping);
|
|
|
|
iput(inode);
|
|
|
|
}
|
2013-05-15 15:48:22 +08:00
|
|
|
spin_lock(&root->delalloc_lock);
|
2011-01-06 19:30:25 +08:00
|
|
|
}
|
2013-05-15 15:48:22 +08:00
|
|
|
spin_unlock(&root->delalloc_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void btrfs_destroy_all_delalloc_inodes(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
|
|
|
spin_lock(&fs_info->delalloc_root_lock);
|
|
|
|
list_splice_init(&fs_info->delalloc_roots, &splice);
|
|
|
|
while (!list_empty(&splice)) {
|
|
|
|
root = list_first_entry(&splice, struct btrfs_root,
|
|
|
|
delalloc_root);
|
|
|
|
root = btrfs_grab_fs_root(root);
|
|
|
|
BUG_ON(!root);
|
|
|
|
spin_unlock(&fs_info->delalloc_root_lock);
|
|
|
|
|
|
|
|
btrfs_destroy_delalloc_inodes(root);
|
|
|
|
btrfs_put_fs_root(root);
|
|
|
|
|
|
|
|
spin_lock(&fs_info->delalloc_root_lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&fs_info->delalloc_root_lock);
|
2011-01-06 19:30:25 +08:00
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
static int btrfs_destroy_marked_extents(struct btrfs_fs_info *fs_info,
|
2011-01-06 19:30:25 +08:00
|
|
|
struct extent_io_tree *dirty_pages,
|
|
|
|
int mark)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct extent_buffer *eb;
|
|
|
|
u64 start = 0;
|
|
|
|
u64 end;
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
ret = find_first_extent_bit(dirty_pages, start, &start, &end,
|
2012-09-28 05:07:30 +08:00
|
|
|
mark, NULL);
|
2011-01-06 19:30:25 +08:00
|
|
|
if (ret)
|
|
|
|
break;
|
|
|
|
|
2016-04-27 05:54:39 +08:00
|
|
|
clear_extent_bits(dirty_pages, start, end, mark);
|
2011-01-06 19:30:25 +08:00
|
|
|
while (start <= end) {
|
2016-06-23 06:54:23 +08:00
|
|
|
eb = find_extent_buffer(fs_info, start);
|
|
|
|
start += fs_info->nodesize;
|
2013-04-25 04:41:19 +08:00
|
|
|
if (!eb)
|
2011-01-06 19:30:25 +08:00
|
|
|
continue;
|
2013-04-25 04:41:19 +08:00
|
|
|
wait_on_extent_buffer_writeback(eb);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2013-04-25 04:41:19 +08:00
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_DIRTY,
|
|
|
|
&eb->bflags))
|
|
|
|
clear_extent_buffer_dirty(eb);
|
|
|
|
free_extent_buffer_stale(eb);
|
2011-01-06 19:30:25 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
|
2011-01-06 19:30:25 +08:00
|
|
|
struct extent_io_tree *pinned_extents)
|
|
|
|
{
|
|
|
|
struct extent_io_tree *unpin;
|
|
|
|
u64 start;
|
|
|
|
u64 end;
|
|
|
|
int ret;
|
2012-06-14 16:23:21 +08:00
|
|
|
bool loop = true;
|
2011-01-06 19:30:25 +08:00
|
|
|
|
|
|
|
unpin = pinned_extents;
|
2012-06-14 16:23:21 +08:00
|
|
|
again:
|
2011-01-06 19:30:25 +08:00
|
|
|
while (1) {
|
2018-11-16 21:04:44 +08:00
|
|
|
struct extent_state *cached_state = NULL;
|
|
|
|
|
btrfs: fix pinned underflow after transaction aborted
When running generic/475, we may get the following warning in dmesg:
[ 6902.102154] WARNING: CPU: 3 PID: 18013 at fs/btrfs/extent-tree.c:9776 btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.109160] CPU: 3 PID: 18013 Comm: umount Tainted: G W O 4.19.0-rc8+ #8
[ 6902.110971] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[ 6902.112857] RIP: 0010:btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.118921] RSP: 0018:ffffc9000459bdb0 EFLAGS: 00010286
[ 6902.120315] RAX: ffff880175050bb0 RBX: ffff8801124a8000 RCX: 0000000000170007
[ 6902.121969] RDX: 0000000000000002 RSI: 0000000000170007 RDI: ffffffff8125fb74
[ 6902.123716] RBP: ffff880175055d10 R08: 0000000000000000 R09: 0000000000000000
[ 6902.125417] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880175055d88
[ 6902.127129] R13: ffff880175050bb0 R14: 0000000000000000 R15: dead000000000100
[ 6902.129060] FS: 00007f4507223780(0000) GS:ffff88017ba00000(0000) knlGS:0000000000000000
[ 6902.130996] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6902.132558] CR2: 00005623599cac78 CR3: 000000014b700001 CR4: 00000000003606e0
[ 6902.134270] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6902.135981] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 6902.137836] Call Trace:
[ 6902.138939] close_ctree+0x171/0x330 [btrfs]
[ 6902.140181] ? kthread_stop+0x146/0x1f0
[ 6902.141277] generic_shutdown_super+0x6c/0x100
[ 6902.142517] kill_anon_super+0x14/0x30
[ 6902.143554] btrfs_kill_super+0x13/0x100 [btrfs]
[ 6902.144790] deactivate_locked_super+0x2f/0x70
[ 6902.146014] cleanup_mnt+0x3b/0x70
[ 6902.147020] task_work_run+0x9e/0xd0
[ 6902.148036] do_syscall_64+0x470/0x600
[ 6902.149142] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 6902.150375] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 6902.151640] RIP: 0033:0x7f45077a6a7b
[ 6902.157324] RSP: 002b:00007ffd589f3e68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 6902.159187] RAX: 0000000000000000 RBX: 000055e8eec732b0 RCX: 00007f45077a6a7b
[ 6902.160834] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000055e8eec73490
[ 6902.162526] RBP: 0000000000000000 R08: 000055e8eec734b0 R09: 00007ffd589f26c0
[ 6902.164141] R10: 0000000000000000 R11: 0000000000000246 R12: 000055e8eec73490
[ 6902.165815] R13: 00007f4507ac61a4 R14: 0000000000000000 R15: 00007ffd589f40d8
[ 6902.167553] irq event stamp: 0
[ 6902.168998] hardirqs last enabled at (0): [<0000000000000000>] (null)
[ 6902.170731] hardirqs last disabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.172773] softirqs last enabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.174671] softirqs last disabled at (0): [<0000000000000000>] (null)
[ 6902.176407] ---[ end trace 463138c2986b275c ]---
[ 6902.177636] BTRFS info (device dm-3): space_info 4 has 273465344 free, is not full
[ 6902.179453] BTRFS info (device dm-3): space_info total=276824064, used=4685824, pinned=18446744073708158976, reserved=0, may_use=0, readonly=65536
In the above line, "pinned=18446744073708158976" is the unsigned u64
representation of -1392640, an obvious underflow.
When transaction_kthread is running cleanup_transaction(), another
fsstress is running btrfs_commit_transaction().
btrfs_finish_extent_commit() may pick up the same range that
btrfs_destroy_pinned_extent() got, which causes the pinned counter to
underflow.
Fixes: d4b450cd4b33 ("Btrfs: fix race between transaction commit and empty block group removal")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2018-10-24 20:24:03 +08:00
|
|
|
/*
|
|
|
|
* The btrfs_finish_extent_commit() may get the same range as
|
|
|
|
* ours between find_first_extent_bit and clear_extent_dirty.
|
|
|
|
* Hence, hold the unused_bg_unpin_mutex to avoid double unpin
|
|
|
|
* the same extent range.
|
|
|
|
*/
|
|
|
|
mutex_lock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
ret = find_first_extent_bit(unpin, 0, &start, &end,
|
2018-11-16 21:04:44 +08:00
|
|
|
EXTENT_DIRTY, &cached_state);
|
2018-10-24 20:24:03 +08:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
break;
|
2018-10-24 20:24:03 +08:00
|
|
|
}
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2018-11-16 21:04:44 +08:00
|
|
|
clear_extent_dirty(unpin, start, end, &cached_state);
|
|
|
|
free_extent_state(cached_state);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_error_unpin_extent_range(fs_info, start, end);
|
2018-10-24 20:24:03 +08:00
|
|
|
mutex_unlock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
cond_resched();
|
|
|
|
}
|
|
|
|
|
2012-06-14 16:23:21 +08:00
|
|
|
if (loop) {
|
2016-06-23 06:54:23 +08:00
|
|
|
if (unpin == &fs_info->freed_extents[0])
|
|
|
|
unpin = &fs_info->freed_extents[1];
|
2012-06-14 16:23:21 +08:00
|
|
|
else
|
2016-06-23 06:54:23 +08:00
|
|
|
unpin = &fs_info->freed_extents[0];
|
2012-06-14 16:23:21 +08:00
|
|
|
loop = false;
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
|
2011-01-06 19:30:25 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-07-21 08:44:12 +08:00
|
|
|
static void btrfs_cleanup_bg_io(struct btrfs_block_group_cache *cache)
|
|
|
|
{
|
|
|
|
struct inode *inode;
|
|
|
|
|
|
|
|
inode = cache->io_ctl.inode;
|
|
|
|
if (inode) {
|
|
|
|
invalidate_inode_pages2(inode->i_mapping);
|
|
|
|
BTRFS_I(inode)->generation = 0;
|
|
|
|
cache->io_ctl.inode = NULL;
|
|
|
|
iput(inode);
|
|
|
|
}
|
|
|
|
btrfs_put_block_group(cache);
|
|
|
|
}
|
|
|
|
|
|
|
|
void btrfs_cleanup_dirty_bgs(struct btrfs_transaction *cur_trans,
|
2016-06-23 06:54:24 +08:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2016-07-21 08:44:12 +08:00
|
|
|
{
|
|
|
|
struct btrfs_block_group_cache *cache;
|
|
|
|
|
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
while (!list_empty(&cur_trans->dirty_bgs)) {
|
|
|
|
cache = list_first_entry(&cur_trans->dirty_bgs,
|
|
|
|
struct btrfs_block_group_cache,
|
|
|
|
dirty_list);
|
|
|
|
|
|
|
|
if (!list_empty(&cache->io_list)) {
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
list_del_init(&cache->io_list);
|
|
|
|
btrfs_cleanup_bg_io(cache);
|
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
list_del_init(&cache->dirty_list);
|
|
|
|
spin_lock(&cache->lock);
|
|
|
|
cache->disk_cache_state = BTRFS_DC_ERROR;
|
|
|
|
spin_unlock(&cache->lock);
|
|
|
|
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
btrfs_put_block_group(cache);
|
btrfs: introduce delayed_refs_rsv
Traditionally we've had voodoo in btrfs to account for the space that
delayed refs may take up by having a global_block_rsv. This works most
of the time, except when it doesn't. We've had issues reported and seen
in production where sometimes the global reserve is exhausted during
transaction commit before we can run all of our delayed refs, resulting
in an aborted transaction. Because of this voodoo we have equally
dubious flushing semantics around throttling delayed refs which we often
get wrong.
So instead give them their own block_rsv. This way we can always know
exactly how much outstanding space we need for delayed refs. This
allows us to make sure we are constantly filling that reservation up
with space, and allows us to put more precise pressure on the enospc
system. Instead of doing math to see if it's a good time to throttle,
the normal enospc code will be invoked if we have a lot of delayed refs
pending, and they will be run via the normal flushing mechanism.
For now the delayed_refs_rsv will hold the reservations for the delayed
refs, the block group updates, and deleting csums. We could have a
separate rsv for the block group updates, but the csum deletion stuff is
still handled via the delayed_refs so that will stay there.
Historical background:
The global reserve has grown to cover everything we don't reserve space
explicitly for, and we've grown a lot of weird ad-hoc heuristics to know
if we're running short on space and when it's time to force a commit. A
failure rate of 20-40 file systems when we run hundreds of thousands of
them isn't super high, but cleaning up this code will make things less
ugly and more predictable.
Thus the delayed refs rsv. We always know how many delayed refs we have
outstanding, and although running them generates more we can use the
global reserve for that spillover, which fits its desired use better
than a full-blown reservation. The first approach is simply to take
the number of items we are reserving space for and multiply it by 2,
in order to save enough space for the delayed refs that could be
generated. This is a naive approach and will probably evolve, but for
now it works. (A sketch of this sizing rule follows the sign-offs.)
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com> # high-level review
[ added background notes from the cover letter ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-12-03 23:20:33 +08:00
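The "multiply by 2" rule from the last paragraph can be sketched as follows.
This is a hedged illustration: the helper names and the worst-case-bytes
formula are assumptions, not the exact calculation btrfs performs.

/*
 * Sketch of the delayed_refs_rsv sizing rule described above.
 * num_refs, nodesize and both helper names are illustrative only.
 */
#include <stdio.h>

typedef unsigned long long u64;

/* Worst-case metadata bytes one queued item may dirty (assumed formula). */
static u64 calc_insert_metadata_size(u64 nodesize, u64 num_items)
{
	return nodesize * num_items;
}

/*
 * Reserve for each ref we queue, doubled so that the refs generated
 * while running the queued ones are covered too.
 */
static u64 delayed_refs_rsv_size(u64 nodesize, u64 num_refs)
{
	return calc_insert_metadata_size(nodesize, num_refs) * 2;
}

int main(void)
{
	printf("reserve %llu bytes for 10 refs\n",
	       delayed_refs_rsv_size(16384, 10));
	return 0;
}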
|
|
|
btrfs_delayed_refs_rsv_release(fs_info, 1);
|
2016-07-21 08:44:12 +08:00
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
|
2018-02-09 00:25:18 +08:00
|
|
|
/*
|
|
|
|
* Refer to the definition of io_bgs member for details why it's safe
|
|
|
|
* to use it without any locking
|
|
|
|
*/
|
2016-07-21 08:44:12 +08:00
|
|
|
while (!list_empty(&cur_trans->io_bgs)) {
|
|
|
|
cache = list_first_entry(&cur_trans->io_bgs,
|
|
|
|
struct btrfs_block_group_cache,
|
|
|
|
io_list);
|
|
|
|
|
|
|
|
list_del_init(&cache->io_list);
|
|
|
|
spin_lock(&cache->lock);
|
|
|
|
cache->disk_cache_state = BTRFS_DC_ERROR;
|
|
|
|
spin_unlock(&cache->lock);
|
|
|
|
btrfs_cleanup_bg_io(cache);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-03-02 00:24:58 +08:00
|
|
|
void btrfs_cleanup_one_transaction(struct btrfs_transaction *cur_trans,
|
2016-06-23 06:54:24 +08:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2012-03-02 00:24:58 +08:00
|
|
|
{
|
2019-03-25 20:31:22 +08:00
|
|
|
struct btrfs_device *dev, *tmp;
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_cleanup_dirty_bgs(cur_trans, fs_info);
|
2016-07-21 08:44:12 +08:00
|
|
|
ASSERT(list_empty(&cur_trans->dirty_bgs));
|
|
|
|
ASSERT(list_empty(&cur_trans->io_bgs));
|
|
|
|
|
2019-03-25 20:31:22 +08:00
|
|
|
list_for_each_entry_safe(dev, tmp, &cur_trans->dev_update_list,
|
|
|
|
post_commit_list) {
|
|
|
|
list_del_init(&dev->post_commit_list);
|
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_destroy_delayed_refs(cur_trans, fs_info);
|
2012-03-02 00:24:58 +08:00
|
|
|
|
Btrfs: make the state of the transaction more readable
We used 3 variables to track the state of the transaction; it was complex
and wasted memory. Besides that, it was hard to understand which types
of transaction handles should be blocked in each transaction state, so
developers often made mistakes.
This patch improves on the above. In this patch, we define 6 states
for the transaction,
enum btrfs_trans_state {
TRANS_STATE_RUNNING = 0,
TRANS_STATE_BLOCKED = 1,
TRANS_STATE_COMMIT_START = 2,
TRANS_STATE_COMMIT_DOING = 3,
TRANS_STATE_UNBLOCKED = 4,
TRANS_STATE_COMPLETED = 5,
TRANS_STATE_MAX = 6,
}
and just use one variable to track those states.
In order to make the blocked handle types for each state clearer,
we introduce an array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
[TRANS_STATE_BLOCKED] = (__TRANS_USERSPACE |
__TRANS_START),
[TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH),
[TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN),
[TRANS_STATE_UNBLOCKED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
[TRANS_STATE_COMPLETED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
}
It is very intuitive.
Besides that, because we remove ->in_commit from the transaction
structure, the ->commit_lock that was used to protect it is no longer
necessary, so remove ->commit_lock.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 11:53:43 +08:00
|
|
|
cur_trans->state = TRANS_STATE_COMMIT_START;
|
2016-06-23 06:54:23 +08:00
|
|
|
wake_up(&fs_info->transaction_blocked_wait);
|
2012-03-02 00:24:58 +08:00
|
|
|
|
2013-05-17 11:53:43 +08:00
|
|
|
cur_trans->state = TRANS_STATE_UNBLOCKED;
|
2016-06-23 06:54:23 +08:00
|
|
|
wake_up(&fs_info->transaction_wait);
|
2012-03-02 00:24:58 +08:00
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_destroy_delayed_inodes(fs_info);
|
|
|
|
btrfs_assert_delayed_root_empty(fs_info);
|
2012-03-02 00:24:58 +08:00
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_destroy_marked_extents(fs_info, &cur_trans->dirty_pages,
|
2012-03-02 00:24:58 +08:00
|
|
|
EXTENT_DIRTY);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_destroy_pinned_extent(fs_info,
|
2016-06-23 06:54:23 +08:00
|
|
|
fs_info->pinned_extents);
|
2012-03-02 00:24:58 +08:00
|
|
|
|
2013-05-17 11:53:43 +08:00
|
|
|
cur_trans->state = TRANS_STATE_COMPLETED;
|
|
|
|
wake_up(&cur_trans->commit_wait);
|
2012-03-02 00:24:58 +08:00
|
|
|
}
|
|
|
|
|
2016-06-23 06:54:24 +08:00
|
|
|
static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info)
|
2011-01-06 19:30:25 +08:00
|
|
|
{
|
|
|
|
struct btrfs_transaction *t;
|
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
mutex_lock(&fs_info->transaction_kthread_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
while (!list_empty(&fs_info->trans_list)) {
|
|
|
|
t = list_first_entry(&fs_info->trans_list,
|
2013-09-30 23:36:38 +08:00
|
|
|
struct btrfs_transaction, list);
|
|
|
|
if (t->state >= TRANS_STATE_COMMIT_START) {
|
2017-03-03 16:55:11 +08:00
|
|
|
refcount_inc(&t->use_count);
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_wait_for_commit(fs_info, t->transid);
|
2013-09-30 23:36:38 +08:00
|
|
|
btrfs_put_transaction(t);
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
2013-09-30 23:36:38 +08:00
|
|
|
continue;
|
|
|
|
}
|
2016-06-23 06:54:23 +08:00
|
|
|
if (t == fs_info->running_transaction) {
|
2013-09-30 23:36:38 +08:00
|
|
|
t->state = TRANS_STATE_COMMIT_DOING;
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2013-09-30 23:36:38 +08:00
|
|
|
/*
|
|
|
|
* We wait for 0 num_writers since we don't hold a trans
|
|
|
|
* handle open currently for this transaction.
|
|
|
|
*/
|
|
|
|
wait_event(t->writer_wait,
|
|
|
|
atomic_read(&t->num_writers) == 0);
|
|
|
|
} else {
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2013-09-30 23:36:38 +08:00
|
|
|
}
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_cleanup_one_transaction(t, fs_info);
|
2013-05-17 11:53:43 +08:00
|
|
|
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
if (t == fs_info->running_transaction)
|
|
|
|
fs_info->running_transaction = NULL;
|
2011-01-06 19:30:25 +08:00
|
|
|
list_del_init(&t->list);
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
2013-09-30 23:36:38 +08:00
|
|
|
btrfs_put_transaction(t);
|
2016-06-23 06:54:24 +08:00
|
|
|
trace_btrfs_transaction_commit(fs_info->tree_root);
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
2013-09-30 23:36:38 +08:00
|
|
|
}
|
2016-06-23 06:54:23 +08:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
|
|
|
btrfs_destroy_all_ordered_extents(fs_info);
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_destroy_delayed_inodes(fs_info);
|
|
|
|
btrfs_assert_delayed_root_empty(fs_info);
|
2016-06-23 06:54:24 +08:00
|
|
|
btrfs_destroy_pinned_extent(fs_info, fs_info->pinned_extents);
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_destroy_all_delalloc_inodes(fs_info);
|
|
|
|
mutex_unlock(&fs_info->transaction_kthread_mutex);
|
2011-01-06 19:30:25 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-01-03 01:23:10 +08:00
|
|
|
static const struct extent_io_ops btree_extent_io_ops = {
|
2017-02-17 22:27:44 +08:00
|
|
|
/* mandatory callbacks */
|
2008-03-25 03:01:56 +08:00
|
|
|
.submit_bio_hook = btree_submit_bio_hook,
|
2017-02-17 22:27:44 +08:00
|
|
|
.readpage_end_io_hook = btree_readpage_end_io_hook,
|
2007-11-08 10:08:01 +08:00
|
|
|
};
|