1aec7c3d05
In commit f8f2835a9c
we changed the behavior of XFS to use EFIs to remove blocks from an overfilled AGFL, because there were complaints about transaction overruns that stemmed from trying to free multiple blocks in a single transaction. Unfortunately, that commit missed a subtlety in the debug-mode transaction accounting when a realtime volume is attached.

If a realtime file undergoes a data fork mapping change such that realtime extents are allocated (or freed) in the same transaction that a data device block is also allocated (or freed), we can trip a debugging assertion. This can happen (for example) if a realtime extent is allocated and it is necessary to reshape the bmbt to hold the new mapping.

When we go to allocate a bmbt block from an AG, the first thing the data device block allocator does is ensure that the freelist is the proper length. If the freelist is too long, it will trim the freelist to the proper length.

In debug mode, trimming the freelist calls xfs_trans_agflist_delta() to record the decrement in the AG free list count. Prior to f8f28 we would put the free block back in the free space btrees in the same transaction, which calls xfs_trans_agblocks_delta() to record the increment in the AG free block count. Since AGFL blocks are included in the global free block count (fdblocks), there is no corresponding fdblocks update, so the AGFL free satisfies the following condition in xfs_trans_apply_sb_deltas:

	/*
	 * Check that superblock mods match the mods made to AGF counters.
	 */
	ASSERT((tp->t_fdblocks_delta + tp->t_res_fdblocks_delta) ==
	       (tp->t_ag_freeblks_delta + tp->t_ag_flist_delta +
		tp->t_ag_btree_delta));

The comparison here used to be: (X + 0) == ((X+1) + -1 + 0), where X is the number of blocks that were allocated.

After commit f8f28 we defer the block freeing to the next chained transaction, which means that the calls to xfs_trans_agflist_delta and xfs_trans_agblocks_delta occur in separate transactions.
The (first) transaction that shortens the free list trips on the comparison, which has now become: (X + 0) == ((X) + -1 + 0), because we haven't freed the AGFL block yet; we've only logged an intention to free it.

When the second transaction (the deferred free) commits, it will evaluate the expression as: (0 + 0) == (1 + 0 + 0) and trip over that in turn.

At this point, the astute reader may note that the two commits tagged by this patch have been in the kernel for a long time but haven't generated any bug reports. How is it that the author became aware of this bug? This originally surfaced as an intermittent failure when I was testing realtime rmap, but a different bug report by Zorro Lang reveals the same assertion occurring on !lazysbcount filesystems.

The common factor to both reports (and why this problem wasn't previously reported) becomes apparent if we consider when xfs_trans_apply_sb_deltas is called by __xfs_trans_commit():

	if (tp->t_flags & XFS_TRANS_SB_DIRTY)
		xfs_trans_apply_sb_deltas(tp);

With a modern lazysbcount filesystem, transactions update only the percpu counters, so they don't need to set XFS_TRANS_SB_DIRTY, hence xfs_trans_apply_sb_deltas is rarely called. However, updates to the count of free realtime extents are not part of lazysbcount, so XFS_TRANS_SB_DIRTY will be set on transactions adding or removing data fork mappings to realtime files; similarly, XFS_TRANS_SB_DIRTY is always set on !lazysbcount filesystems.

Dave mentioned in response to an earlier version of this patch:

"IIUC, what you are saying is that this debug code is simply not exercised in normal testing and hasn't been for the past decade? And it still won't be exercised on anything other than realtime device testing?

"...it was debugging code from 1994 that was largely turned into dead code when lazysbcounters were introduced in 2007. Hence I'm not sure it holds any value anymore."
This debugging code isn't especially helpful - you can modify the flcount on one AG and the freeblks of another AG, and it won't trigger. Add the fact that nobody noticed for a decade, and let's just get rid of it (and start testing realtime :P).

This bug was found by running generic/051 on either a V4 filesystem lacking lazysbcount; or a V5 filesystem with a realtime volume.

Cc: bfoster@redhat.com, zlang@redhat.com
Fixes: f8f2835a9c ("xfs: defer agfl block frees when dfops is available")
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Brian Foster <bfoster@redhat.com>
636 lines
16 KiB
C
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2014 Red Hat, Inc.
 * All Rights Reserved.
 */
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_sb.h"
#include "xfs_mount.h"
#include "xfs_trans.h"
#include "xfs_alloc.h"
#include "xfs_btree.h"
#include "xfs_btree_staging.h"
#include "xfs_rmap.h"
#include "xfs_rmap_btree.h"
#include "xfs_trace.h"
#include "xfs_error.h"
#include "xfs_extent_busy.h"
#include "xfs_ag_resv.h"

/*
 * Reverse map btree.
 *
 * This is a per-ag tree used to track the owner(s) of a given extent. With
 * reflink it is possible for there to be multiple owners, which is a departure
 * from classic XFS. Owner records for data extents are inserted when the
 * extent is mapped and removed when an extent is unmapped. Owner records for
 * all other block types (i.e. metadata) are inserted when an extent is
 * allocated and removed when an extent is freed. There can only be one owner
 * of a metadata extent, usually an inode or some other metadata structure like
 * an AG btree.
 *
 * The rmap btree is part of the free space management, so blocks for the tree
 * are sourced from the agfl. Hence we need transaction reservation support for
 * this tree so that the freelist is always large enough. This also impacts on
 * the minimum space we need to leave free in the AG.
 *
 * The tree is ordered by [ag block, owner, offset]. This is a large key size,
 * but it is the only way to enforce unique keys when a block can be owned by
 * multiple files at any offset. There's no need to order/search by extent
 * size for online updating/management of the tree. It is intended that most
 * reverse lookups will be to find the owner(s) of a particular block, or to
 * try to recover tree and file data from corrupt primary metadata.
 */

static struct xfs_btree_cur *
xfs_rmapbt_dup_cursor(
	struct xfs_btree_cur	*cur)
{
	return xfs_rmapbt_init_cursor(cur->bc_mp, cur->bc_tp,
			cur->bc_ag.agbp, cur->bc_ag.agno);
}

STATIC void
xfs_rmapbt_set_root(
	struct xfs_btree_cur	*cur,
	union xfs_btree_ptr	*ptr,
	int			inc)
{
	struct xfs_buf		*agbp = cur->bc_ag.agbp;
	struct xfs_agf		*agf = agbp->b_addr;
	int			btnum = cur->bc_btnum;
	struct xfs_perag	*pag = agbp->b_pag;

	ASSERT(ptr->s != 0);

	agf->agf_roots[btnum] = ptr->s;
	be32_add_cpu(&agf->agf_levels[btnum], inc);
	pag->pagf_levels[btnum] += inc;

	xfs_alloc_log_agf(cur->bc_tp, agbp, XFS_AGF_ROOTS | XFS_AGF_LEVELS);
}

STATIC int
xfs_rmapbt_alloc_block(
	struct xfs_btree_cur	*cur,
	union xfs_btree_ptr	*start,
	union xfs_btree_ptr	*new,
	int			*stat)
{
	struct xfs_buf		*agbp = cur->bc_ag.agbp;
	struct xfs_agf		*agf = agbp->b_addr;
	int			error;
	xfs_agblock_t		bno;

	/* Allocate the new block from the freelist. If we can't, give up. */
	error = xfs_alloc_get_freelist(cur->bc_tp, cur->bc_ag.agbp,
				       &bno, 1);
	if (error)
		return error;

	trace_xfs_rmapbt_alloc_block(cur->bc_mp, cur->bc_ag.agno,
			bno, 1);
	if (bno == NULLAGBLOCK) {
		*stat = 0;
		return 0;
	}

	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1,
			false);

	new->s = cpu_to_be32(bno);
	be32_add_cpu(&agf->agf_rmap_blocks, 1);
	xfs_alloc_log_agf(cur->bc_tp, agbp, XFS_AGF_RMAP_BLOCKS);

	xfs_ag_resv_rmapbt_alloc(cur->bc_mp, cur->bc_ag.agno);

	*stat = 1;
	return 0;
}

STATIC int
xfs_rmapbt_free_block(
	struct xfs_btree_cur	*cur,
	struct xfs_buf		*bp)
{
	struct xfs_buf		*agbp = cur->bc_ag.agbp;
	struct xfs_agf		*agf = agbp->b_addr;
	struct xfs_perag	*pag;
	xfs_agblock_t		bno;
	int			error;

	bno = xfs_daddr_to_agbno(cur->bc_mp, XFS_BUF_ADDR(bp));
	trace_xfs_rmapbt_free_block(cur->bc_mp, cur->bc_ag.agno,
			bno, 1);
	be32_add_cpu(&agf->agf_rmap_blocks, -1);
	xfs_alloc_log_agf(cur->bc_tp, agbp, XFS_AGF_RMAP_BLOCKS);
	error = xfs_alloc_put_freelist(cur->bc_tp, agbp, NULL, bno, 1);
	if (error)
		return error;

	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
			      XFS_EXTENT_BUSY_SKIP_DISCARD);

	pag = cur->bc_ag.agbp->b_pag;
	xfs_ag_resv_free_extent(pag, XFS_AG_RESV_RMAPBT, NULL, 1);
	return 0;
}

STATIC int
xfs_rmapbt_get_minrecs(
	struct xfs_btree_cur	*cur,
	int			level)
{
	return cur->bc_mp->m_rmap_mnr[level != 0];
}

STATIC int
xfs_rmapbt_get_maxrecs(
	struct xfs_btree_cur	*cur,
	int			level)
{
	return cur->bc_mp->m_rmap_mxr[level != 0];
}

STATIC void
xfs_rmapbt_init_key_from_rec(
	union xfs_btree_key	*key,
	union xfs_btree_rec	*rec)
{
	key->rmap.rm_startblock = rec->rmap.rm_startblock;
	key->rmap.rm_owner = rec->rmap.rm_owner;
	key->rmap.rm_offset = rec->rmap.rm_offset;
}

/*
 * The high key for a reverse mapping record can be computed by shifting
 * the startblock and offset to the highest value that would still map
 * to that record. In practice this means that we add blockcount-1 to
 * the startblock for all records, and if the record is for a data/attr
 * fork mapping, we add blockcount-1 to the offset too.
 */
STATIC void
xfs_rmapbt_init_high_key_from_rec(
	union xfs_btree_key	*key,
	union xfs_btree_rec	*rec)
{
	uint64_t		off;
	int			adj;

	adj = be32_to_cpu(rec->rmap.rm_blockcount) - 1;

	key->rmap.rm_startblock = rec->rmap.rm_startblock;
	be32_add_cpu(&key->rmap.rm_startblock, adj);
	key->rmap.rm_owner = rec->rmap.rm_owner;
	key->rmap.rm_offset = rec->rmap.rm_offset;
	if (XFS_RMAP_NON_INODE_OWNER(be64_to_cpu(rec->rmap.rm_owner)) ||
	    XFS_RMAP_IS_BMBT_BLOCK(be64_to_cpu(rec->rmap.rm_offset)))
		return;
	off = be64_to_cpu(key->rmap.rm_offset);
	off = (XFS_RMAP_OFF(off) + adj) | (off & ~XFS_RMAP_OFF_MASK);
	key->rmap.rm_offset = cpu_to_be64(off);
}

STATIC void
xfs_rmapbt_init_rec_from_cur(
	struct xfs_btree_cur	*cur,
	union xfs_btree_rec	*rec)
{
	rec->rmap.rm_startblock = cpu_to_be32(cur->bc_rec.r.rm_startblock);
	rec->rmap.rm_blockcount = cpu_to_be32(cur->bc_rec.r.rm_blockcount);
	rec->rmap.rm_owner = cpu_to_be64(cur->bc_rec.r.rm_owner);
	rec->rmap.rm_offset = cpu_to_be64(
			xfs_rmap_irec_offset_pack(&cur->bc_rec.r));
}

STATIC void
xfs_rmapbt_init_ptr_from_cur(
	struct xfs_btree_cur	*cur,
	union xfs_btree_ptr	*ptr)
{
	struct xfs_agf		*agf = cur->bc_ag.agbp->b_addr;

	ASSERT(cur->bc_ag.agno == be32_to_cpu(agf->agf_seqno));

	ptr->s = agf->agf_roots[cur->bc_btnum];
}

STATIC int64_t
xfs_rmapbt_key_diff(
	struct xfs_btree_cur	*cur,
	union xfs_btree_key	*key)
{
	struct xfs_rmap_irec	*rec = &cur->bc_rec.r;
	struct xfs_rmap_key	*kp = &key->rmap;
	__u64			x, y;
	int64_t			d;

	d = (int64_t)be32_to_cpu(kp->rm_startblock) - rec->rm_startblock;
	if (d)
		return d;

	x = be64_to_cpu(kp->rm_owner);
	y = rec->rm_owner;
	if (x > y)
		return 1;
	else if (y > x)
		return -1;

	x = XFS_RMAP_OFF(be64_to_cpu(kp->rm_offset));
	y = rec->rm_offset;
	if (x > y)
		return 1;
	else if (y > x)
		return -1;
	return 0;
}

STATIC int64_t
xfs_rmapbt_diff_two_keys(
	struct xfs_btree_cur	*cur,
	union xfs_btree_key	*k1,
	union xfs_btree_key	*k2)
{
	struct xfs_rmap_key	*kp1 = &k1->rmap;
	struct xfs_rmap_key	*kp2 = &k2->rmap;
	int64_t			d;
	__u64			x, y;

	d = (int64_t)be32_to_cpu(kp1->rm_startblock) -
		       be32_to_cpu(kp2->rm_startblock);
	if (d)
		return d;

	x = be64_to_cpu(kp1->rm_owner);
	y = be64_to_cpu(kp2->rm_owner);
	if (x > y)
		return 1;
	else if (y > x)
		return -1;

	x = XFS_RMAP_OFF(be64_to_cpu(kp1->rm_offset));
	y = XFS_RMAP_OFF(be64_to_cpu(kp2->rm_offset));
	if (x > y)
		return 1;
	else if (y > x)
		return -1;
	return 0;
}

static xfs_failaddr_t
xfs_rmapbt_verify(
	struct xfs_buf		*bp)
{
	struct xfs_mount	*mp = bp->b_mount;
	struct xfs_btree_block	*block = XFS_BUF_TO_BLOCK(bp);
	struct xfs_perag	*pag = bp->b_pag;
	xfs_failaddr_t		fa;
	unsigned int		level;

	/*
	 * magic number and level verification
	 *
	 * During growfs operations, we can't verify the exact level or owner as
	 * the perag is not fully initialised and hence not attached to the
	 * buffer. In this case, check against the maximum tree depth.
	 *
	 * Similarly, during log recovery we will have a perag structure
	 * attached, but the agf information will not yet have been initialised
	 * from the on disk AGF. Again, we can only check against maximum limits
	 * in this case.
	 */
	if (!xfs_verify_magic(bp, block->bb_magic))
		return __this_address;

	if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
		return __this_address;
	fa = xfs_btree_sblock_v5hdr_verify(bp);
	if (fa)
		return fa;

	level = be16_to_cpu(block->bb_level);
	if (pag && pag->pagf_init) {
		if (level >= pag->pagf_levels[XFS_BTNUM_RMAPi])
			return __this_address;
	} else if (level >= mp->m_rmap_maxlevels)
		return __this_address;

	return xfs_btree_sblock_verify(bp, mp->m_rmap_mxr[level != 0]);
}

static void
xfs_rmapbt_read_verify(
	struct xfs_buf	*bp)
{
	xfs_failaddr_t	fa;

	if (!xfs_btree_sblock_verify_crc(bp))
		xfs_verifier_error(bp, -EFSBADCRC, __this_address);
	else {
		fa = xfs_rmapbt_verify(bp);
		if (fa)
			xfs_verifier_error(bp, -EFSCORRUPTED, fa);
	}

	if (bp->b_error)
		trace_xfs_btree_corrupt(bp, _RET_IP_);
}

static void
xfs_rmapbt_write_verify(
	struct xfs_buf	*bp)
{
	xfs_failaddr_t	fa;

	fa = xfs_rmapbt_verify(bp);
	if (fa) {
		trace_xfs_btree_corrupt(bp, _RET_IP_);
		xfs_verifier_error(bp, -EFSCORRUPTED, fa);
		return;
	}
	xfs_btree_sblock_calc_crc(bp);

}

const struct xfs_buf_ops xfs_rmapbt_buf_ops = {
	.name			= "xfs_rmapbt",
	.magic			= { 0, cpu_to_be32(XFS_RMAP_CRC_MAGIC) },
	.verify_read		= xfs_rmapbt_read_verify,
	.verify_write		= xfs_rmapbt_write_verify,
	.verify_struct		= xfs_rmapbt_verify,
};

STATIC int
xfs_rmapbt_keys_inorder(
	struct xfs_btree_cur	*cur,
	union xfs_btree_key	*k1,
	union xfs_btree_key	*k2)
{
	uint32_t		x;
	uint32_t		y;
	uint64_t		a;
	uint64_t		b;

	x = be32_to_cpu(k1->rmap.rm_startblock);
	y = be32_to_cpu(k2->rmap.rm_startblock);
	if (x < y)
		return 1;
	else if (x > y)
		return 0;
	a = be64_to_cpu(k1->rmap.rm_owner);
	b = be64_to_cpu(k2->rmap.rm_owner);
	if (a < b)
		return 1;
	else if (a > b)
		return 0;
	a = XFS_RMAP_OFF(be64_to_cpu(k1->rmap.rm_offset));
	b = XFS_RMAP_OFF(be64_to_cpu(k2->rmap.rm_offset));
	if (a <= b)
		return 1;
	return 0;
}

STATIC int
xfs_rmapbt_recs_inorder(
	struct xfs_btree_cur	*cur,
	union xfs_btree_rec	*r1,
	union xfs_btree_rec	*r2)
{
	uint32_t		x;
	uint32_t		y;
	uint64_t		a;
	uint64_t		b;

	x = be32_to_cpu(r1->rmap.rm_startblock);
	y = be32_to_cpu(r2->rmap.rm_startblock);
	if (x < y)
		return 1;
	else if (x > y)
		return 0;
	a = be64_to_cpu(r1->rmap.rm_owner);
	b = be64_to_cpu(r2->rmap.rm_owner);
	if (a < b)
		return 1;
	else if (a > b)
		return 0;
	a = XFS_RMAP_OFF(be64_to_cpu(r1->rmap.rm_offset));
	b = XFS_RMAP_OFF(be64_to_cpu(r2->rmap.rm_offset));
	if (a <= b)
		return 1;
	return 0;
}

static const struct xfs_btree_ops xfs_rmapbt_ops = {
	.rec_len		= sizeof(struct xfs_rmap_rec),
	.key_len		= 2 * sizeof(struct xfs_rmap_key),

	.dup_cursor		= xfs_rmapbt_dup_cursor,
	.set_root		= xfs_rmapbt_set_root,
	.alloc_block		= xfs_rmapbt_alloc_block,
	.free_block		= xfs_rmapbt_free_block,
	.get_minrecs		= xfs_rmapbt_get_minrecs,
	.get_maxrecs		= xfs_rmapbt_get_maxrecs,
	.init_key_from_rec	= xfs_rmapbt_init_key_from_rec,
	.init_high_key_from_rec	= xfs_rmapbt_init_high_key_from_rec,
	.init_rec_from_cur	= xfs_rmapbt_init_rec_from_cur,
	.init_ptr_from_cur	= xfs_rmapbt_init_ptr_from_cur,
	.key_diff		= xfs_rmapbt_key_diff,
	.buf_ops		= &xfs_rmapbt_buf_ops,
	.diff_two_keys		= xfs_rmapbt_diff_two_keys,
	.keys_inorder		= xfs_rmapbt_keys_inorder,
	.recs_inorder		= xfs_rmapbt_recs_inorder,
};

static struct xfs_btree_cur *
xfs_rmapbt_init_common(
	struct xfs_mount	*mp,
	struct xfs_trans	*tp,
	xfs_agnumber_t		agno)
{
	struct xfs_btree_cur	*cur;

	cur = kmem_cache_zalloc(xfs_btree_cur_zone, GFP_NOFS | __GFP_NOFAIL);
	cur->bc_tp = tp;
	cur->bc_mp = mp;
	/* Overlapping btree; 2 keys per pointer. */
	cur->bc_btnum = XFS_BTNUM_RMAP;
	cur->bc_flags = XFS_BTREE_CRC_BLOCKS | XFS_BTREE_OVERLAPPING;
	cur->bc_blocklog = mp->m_sb.sb_blocklog;
	cur->bc_statoff = XFS_STATS_CALC_INDEX(xs_rmap_2);
	cur->bc_ag.agno = agno;
	cur->bc_ops = &xfs_rmapbt_ops;

	return cur;
}

/* Create a new reverse mapping btree cursor. */
struct xfs_btree_cur *
xfs_rmapbt_init_cursor(
	struct xfs_mount	*mp,
	struct xfs_trans	*tp,
	struct xfs_buf		*agbp,
	xfs_agnumber_t		agno)
{
	struct xfs_agf		*agf = agbp->b_addr;
	struct xfs_btree_cur	*cur;

	cur = xfs_rmapbt_init_common(mp, tp, agno);
	cur->bc_nlevels = be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAP]);
	cur->bc_ag.agbp = agbp;
	return cur;
}

/* Create a new reverse mapping btree cursor with a fake root for staging. */
struct xfs_btree_cur *
xfs_rmapbt_stage_cursor(
	struct xfs_mount	*mp,
	struct xbtree_afakeroot	*afake,
	xfs_agnumber_t		agno)
{
	struct xfs_btree_cur	*cur;

	cur = xfs_rmapbt_init_common(mp, NULL, agno);
	xfs_btree_stage_afakeroot(cur, afake);
	return cur;
}

/*
 * Install a new reverse mapping btree root. Caller is responsible for
 * invalidating and freeing the old btree blocks.
 */
void
xfs_rmapbt_commit_staged_btree(
	struct xfs_btree_cur	*cur,
	struct xfs_trans	*tp,
	struct xfs_buf		*agbp)
{
	struct xfs_agf		*agf = agbp->b_addr;
	struct xbtree_afakeroot	*afake = cur->bc_ag.afake;

	ASSERT(cur->bc_flags & XFS_BTREE_STAGING);

	agf->agf_roots[cur->bc_btnum] = cpu_to_be32(afake->af_root);
	agf->agf_levels[cur->bc_btnum] = cpu_to_be32(afake->af_levels);
	agf->agf_rmap_blocks = cpu_to_be32(afake->af_blocks);
	xfs_alloc_log_agf(tp, agbp, XFS_AGF_ROOTS | XFS_AGF_LEVELS |
				    XFS_AGF_RMAP_BLOCKS);
	xfs_btree_commit_afakeroot(cur, tp, agbp, &xfs_rmapbt_ops);
}

/*
 * Calculate number of records in an rmap btree block.
 */
int
xfs_rmapbt_maxrecs(
	int			blocklen,
	int			leaf)
{
	blocklen -= XFS_RMAP_BLOCK_LEN;

	if (leaf)
		return blocklen / sizeof(struct xfs_rmap_rec);
	return blocklen /
		(2 * sizeof(struct xfs_rmap_key) + sizeof(xfs_rmap_ptr_t));
}

/* Compute the maximum height of an rmap btree. */
void
xfs_rmapbt_compute_maxlevels(
	struct xfs_mount		*mp)
{
	/*
	 * On a non-reflink filesystem, the maximum number of rmap
	 * records is the number of blocks in the AG, hence the max
	 * rmapbt height is log_$maxrecs($agblocks). However, with
	 * reflink each AG block can have up to 2^32 (per the refcount
	 * record format) owners, which means that theoretically we
	 * could face up to 2^64 rmap records.
	 *
	 * That effectively means that the max rmapbt height must be
	 * XFS_BTREE_MAXLEVELS. "Fortunately" we'll run out of AG
	 * blocks to feed the rmapbt long before the rmapbt reaches
	 * maximum height. The reflink code uses ag_resv_critical to
	 * disallow reflinking when less than 10% of the per-AG metadata
	 * block reservation remains free, since the fallback is a
	 * regular file copy.
	 */
	if (xfs_sb_version_hasreflink(&mp->m_sb))
		mp->m_rmap_maxlevels = XFS_BTREE_MAXLEVELS;
	else
		mp->m_rmap_maxlevels = xfs_btree_compute_maxlevels(
				mp->m_rmap_mnr, mp->m_sb.sb_agblocks);
}

/* Calculate the rmap btree size for some records. */
xfs_extlen_t
xfs_rmapbt_calc_size(
	struct xfs_mount	*mp,
	unsigned long long	len)
{
	return xfs_btree_calc_size(mp->m_rmap_mnr, len);
}

/*
 * Calculate the maximum rmap btree size.
 */
xfs_extlen_t
xfs_rmapbt_max_size(
	struct xfs_mount	*mp,
	xfs_agblock_t		agblocks)
{
	/* Bail out if we're uninitialized, which can happen in mkfs. */
	if (mp->m_rmap_mxr[0] == 0)
		return 0;

	return xfs_rmapbt_calc_size(mp, agblocks);
}

/*
 * Figure out how many blocks to reserve and how many are used by this btree.
 */
int
xfs_rmapbt_calc_reserves(
	struct xfs_mount	*mp,
	struct xfs_trans	*tp,
	xfs_agnumber_t		agno,
	xfs_extlen_t		*ask,
	xfs_extlen_t		*used)
{
	struct xfs_buf		*agbp;
	struct xfs_agf		*agf;
	xfs_agblock_t		agblocks;
	xfs_extlen_t		tree_len;
	int			error;

	if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
		return 0;

	error = xfs_alloc_read_agf(mp, tp, agno, 0, &agbp);
	if (error)
		return error;

	agf = agbp->b_addr;
	agblocks = be32_to_cpu(agf->agf_length);
	tree_len = be32_to_cpu(agf->agf_rmap_blocks);
	xfs_trans_brelse(tp, agbp);

	/*
	 * The log is permanently allocated, so the space it occupies will
	 * never be available for the kinds of things that would require btree
	 * expansion. We therefore can pretend the space isn't there.
	 */
	if (mp->m_sb.sb_logstart &&
	    XFS_FSB_TO_AGNO(mp, mp->m_sb.sb_logstart) == agno)
		agblocks -= mp->m_sb.sb_logblocks;

	/* Reserve 1% of the AG or enough for 1 block per record. */
	*ask += max(agblocks / 100, xfs_rmapbt_max_size(mp, agblocks));
	*used += tree_len;

	return error;
}