mirror of
https://mirrors.bfsu.edu.cn/git/linux.git
synced 2024-11-11 21:38:32 +08:00
New code for 5.15:
- Fix a potential log livelock on busy filesystems when there's so much work going on that we can't finish a quotaoff before filling up the log by removing the ability to disable quota accounting. - Introduce the ability to use per-CPU data structures in XFS so that we can do a better job of maintaining CPU locality for certain operations. - Defer inode inactivation work to per-CPU lists, which will help us batch that processing. Deletions of large sparse files will *appear* to run faster, but all that means is that we've moved the work to the backend. - Drop the EXPERIMENTAL warnings from the y2038+ support and the inode btree counters, since it's been nearly a year and no complaints have come in. - Remove more of our bespoke kmem* variants in favor of using the standard Linux calls. - Prepare for the addition of log incompat features in upcoming cycles by actually adding code to support this. - Small cleanups of the xattr code in preparation for landing support for full logging of extended attribute updates in a future cycle. - Replace the various log shutdown state and flag code all over xfs with a single atomic bit flag. - Fix a serious log recovery bug where log item replay can be skipped based on the start lsn of a transaction even though the transaction commit lsn is the key data point for that by enforcing start lsns to appear in the log in the same order as commit lsns. - Enable pipelining in the code that pushes log items to disk. - Drop ->writepage. - Fix some bugs in GETFSMAP where the last fsmap record reported for a device could extend beyond the end of the device, and a separate bug where query keys for one device could be applied to another. - Don't let GETFSMAP query functions edit their input parameters. - Small cleanups to the scrub code's handling of perag structures. - Small cleanups to the incore inode tree walk code. - Constify btree function parameters that aren't changed, so that there will never again be confusion about range query functions changing their input parameters. - Standardize the format and names of tracepoint data attributes. - Clean up all the mount state and feature flags to use wrapped bitset functions instead of inconsistently open-coded flag checks. - Fix some confusion between xfs_buf hash table key variable vs. block number. - Fix a mis-interaction with iomap where we reported shared delalloc cow fork extents to iomap, which would cause the iomap unshare operation to return IO errors unnecessarily. - Fix DONTCACHE behavior. -----BEGIN PGP SIGNATURE----- iQIzBAABCgAdFiEEUzaAxoMeQq6m2jMV+H93GTRKtOsFAmEnwqcACgkQ+H93GTRK tOtpZg/9G1RD9oDbVhKJy67bxkeLPX990dUtQFhcVjL3AMMyCJez2PBTqkQY3tL9 WDQveIF0UL5TjP5QUO2/6fncIXBmf5yXtinkfeQwkvkStb/yxs10zlpn2ZDEvJ7H EUWwkV3cBY6Q+ftJIfXJmNW6eCcaxYs6KFiBwodbcoBxy2dIx6KFBQuqwtxOA97s ZYfv1mPGOIg6AVJN9oxFWtF36qM8loFDNQeZj1ATfCsP25VNHbQf7YOFnJEnwLOB rzz2zKQ3lP0hWavA6M2lX+IGymDphngx7qe4lZYcjAsh2BzL0IZf0QmFrXGQKuY/ kD0dWeStM8OHQbqCdkYx4XxcjucvJ7qmIYCtrWdpFqrrrQHygaJW6nI8LgsNTdvb OPXpPPz58jdGY3ATaRYX/IFmpJExj655ZHUfpkeVGacBTa5KCVDykYKv1eYOfNsk Aj+bZ4g++bx3dlGFHGsPScRn+hwg5h/+UyQJpAYupuaUsq3rpBhH/bhAJNyPUsYu ej8LIeAWB3EPLozT4ewop8G0WWDBOe0MlYeO5gQho2AfFZzFInf15cSR62KZqx+v XTZgITnnp0ND4wzgqAhgdU4USS9z5MtHGvhSkuYejg85R/bKirrwRu2P0n681sHv UioiIVbXGWSAJqDQicfSjncafS3POIAUmMt4tgmDI33/3mTKwZQ= =HPJr -----END PGP SIGNATURE----- Merge tag 'xfs-5.15-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux Pull xfs updates from Darrick Wong: "There's a lot in this cycle. 
Starting with bug fixes: To avoid livelocks between the logging code and the quota code, we've disabled the ability of quotaoff to turn off quota accounting. (Admins can still disable quota enforcement, but truly turning off accounting requires a remount.) We've tried to do this in a careful enough way that there shouldn't be any user visible effects aside from quotaoff no longer randomly hanging the system. We've also fixed some bugs in runtime log behavior that could trip up log recovery if (otherwise unrelated) transactions manage to start and commit concurrently; some bugs in the GETFSMAP ioctl where we would incorrectly restrict the range of records output if the two xfs devices are of different sizes; a bug that resulted in fallocate funshare failing unnecessarily; and broken behavior in the xfs inode cache when DONTCACHE is in play. As for new features: we now batch inode inactivations in percpu background threads, which sharply decreases frontend thread wait time when performing file deletions and should improve overall directory tree deletion times. This eliminates both the problem where closing an unlinked file (especially on a frozen fs) can stall for a long time, and should also ease complaints about direct reclaim bogging down on unlinked file cleanup. Starting with this release, we've enabled pipelining of the XFS log. On workloads with high rates of metadata updates to different shards of the filesystem, multiple threads can be used to format committed log updates into log checkpoints. Lastly, with this release, two new features have graduated to supported status: inode btree counters (for faster mounts), and support for dates beyond Y2038. Expect these to be enabled by default in a future release of xfsprogs. Summary: - Fix a potential log livelock on busy filesystems when there's so much work going on that we can't finish a quotaoff before filling up the log by removing the ability to disable quota accounting. - Introduce the ability to use per-CPU data structures in XFS so that we can do a better job of maintaining CPU locality for certain operations. - Defer inode inactivation work to per-CPU lists, which will help us batch that processing. Deletions of large sparse files will *appear* to run faster, but all that means is that we've moved the work to the backend. - Drop the EXPERIMENTAL warnings from the y2038+ support and the inode btree counters, since it's been nearly a year and no complaints have come in. - Remove more of our bespoke kmem* variants in favor of using the standard Linux calls. - Prepare for the addition of log incompat features in upcoming cycles by actually adding code to support this. - Small cleanups of the xattr code in preparation for landing support for full logging of extended attribute updates in a future cycle. - Replace the various log shutdown state and flag code all over xfs with a single atomic bit flag. - Fix a serious log recovery bug where log item replay can be skipped based on the start lsn of a transaction even though the transaction commit lsn is the key data point for that by enforcing start lsns to appear in the log in the same order as commit lsns. - Enable pipelining in the code that pushes log items to disk. - Drop ->writepage. - Fix some bugs in GETFSMAP where the last fsmap record reported for a device could extend beyond the end of the device, and a separate bug where query keys for one device could be applied to another. - Don't let GETFSMAP query functions edit their input parameters. 
- Small cleanups to the scrub code's handling of perag structures. - Small cleanups to the incore inode tree walk code. - Constify btree function parameters that aren't changed, so that there will never again be confusion about range query functions changing their input parameters. - Standardize the format and names of tracepoint data attributes. - Clean up all the mount state and feature flags to use wrapped bitset functions instead of inconsistently open-coded flag checks. - Fix some confusion between xfs_buf hash table key variable vs. block number. - Fix a mis-interaction with iomap where we reported shared delalloc cow fork extents to iomap, which would cause the iomap unshare operation to return IO errors unnecessarily. - Fix DONTCACHE behavior" * tag 'xfs-5.15-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: (103 commits) xfs: fix I_DONTCACHE xfs: only set IOMAP_F_SHARED when providing a srcmap to a write xfs: fix perag structure refcounting error when scrub fails xfs: rename buffer cache index variable b_bn xfs: convert bp->b_bn references to xfs_buf_daddr() xfs: introduce xfs_buf_daddr() xfs: kill xfs_sb_version_has_v3inode() xfs: introduce xfs_sb_is_v5 helper xfs: remove unused xfs_sb_version_has wrappers xfs: convert xfs_sb_version_has checks to use mount features xfs: convert scrub to use mount-based feature checks xfs: open code sb verifier feature checks xfs: convert xfs_fs_geometry to use mount feature checks xfs: replace XFS_FORCED_SHUTDOWN with xfs_is_shutdown xfs: convert remaining mount flags to state flags xfs: convert mount flags to features xfs: consolidate mount option features in m_features xfs: replace xfs_sb_version checks with feature flag checks xfs: reflect sb features in xfs_mount xfs: rework attr2 feature and mount options ...
This commit is contained in:
commit
90c90cda05
@ -29,67 +29,3 @@ kmem_alloc(size_t size, xfs_km_flags_t flags)
|
||||
congestion_wait(BLK_RW_ASYNC, HZ/50);
|
||||
} while (1);
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* __vmalloc() will allocate data pages and auxiliary structures (e.g.
|
||||
* pagetables) with GFP_KERNEL, yet we may be under GFP_NOFS context here. Hence
|
||||
* we need to tell memory reclaim that we are in such a context via
|
||||
* PF_MEMALLOC_NOFS to prevent memory reclaim re-entering the filesystem here
|
||||
* and potentially deadlocking.
|
||||
*/
|
||||
static void *
|
||||
__kmem_vmalloc(size_t size, xfs_km_flags_t flags)
|
||||
{
|
||||
unsigned nofs_flag = 0;
|
||||
void *ptr;
|
||||
gfp_t lflags = kmem_flags_convert(flags);
|
||||
|
||||
if (flags & KM_NOFS)
|
||||
nofs_flag = memalloc_nofs_save();
|
||||
|
||||
ptr = __vmalloc(size, lflags);
|
||||
|
||||
if (flags & KM_NOFS)
|
||||
memalloc_nofs_restore(nofs_flag);
|
||||
|
||||
return ptr;
|
||||
}
|
||||
|
||||
/*
|
||||
* Same as kmem_alloc_large, except we guarantee the buffer returned is aligned
|
||||
* to the @align_mask. We only guarantee alignment up to page size, we'll clamp
|
||||
* alignment at page size if it is larger. vmalloc always returns a PAGE_SIZE
|
||||
* aligned region.
|
||||
*/
|
||||
void *
|
||||
kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags)
|
||||
{
|
||||
void *ptr;
|
||||
|
||||
trace_kmem_alloc_io(size, flags, _RET_IP_);
|
||||
|
||||
if (WARN_ON_ONCE(align_mask >= PAGE_SIZE))
|
||||
align_mask = PAGE_SIZE - 1;
|
||||
|
||||
ptr = kmem_alloc(size, flags | KM_MAYFAIL);
|
||||
if (ptr) {
|
||||
if (!((uintptr_t)ptr & align_mask))
|
||||
return ptr;
|
||||
kfree(ptr);
|
||||
}
|
||||
return __kmem_vmalloc(size, flags);
|
||||
}
|
||||
|
||||
void *
|
||||
kmem_alloc_large(size_t size, xfs_km_flags_t flags)
|
||||
{
|
||||
void *ptr;
|
||||
|
||||
trace_kmem_alloc_large(size, flags, _RET_IP_);
|
||||
|
||||
ptr = kmem_alloc(size, flags | KM_MAYFAIL);
|
||||
if (ptr)
|
||||
return ptr;
|
||||
return __kmem_vmalloc(size, flags);
|
||||
}
|
||||
|
@ -57,8 +57,6 @@ kmem_flags_convert(xfs_km_flags_t flags)
|
||||
}
|
||||
|
||||
extern void *kmem_alloc(size_t, xfs_km_flags_t);
|
||||
extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags);
|
||||
extern void *kmem_alloc_large(size_t size, xfs_km_flags_t);
|
||||
static inline void kmem_free(const void *ptr)
|
||||
{
|
||||
kvfree(ptr);
|
||||
|
@ -313,7 +313,6 @@ xfs_get_aghdr_buf(
|
||||
if (error)
|
||||
return error;
|
||||
|
||||
bp->b_bn = blkno;
|
||||
bp->b_maps[0].bm_bn = blkno;
|
||||
bp->b_ops = ops;
|
||||
|
||||
@ -469,7 +468,7 @@ xfs_rmaproot_init(
|
||||
rrec->rm_offset = 0;
|
||||
|
||||
/* account for refc btree root */
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
|
||||
if (xfs_has_reflink(mp)) {
|
||||
rrec = XFS_RMAP_REC_ADDR(block, 5);
|
||||
rrec->rm_startblock = cpu_to_be32(xfs_refc_block(mp));
|
||||
rrec->rm_blockcount = cpu_to_be32(1);
|
||||
@ -528,7 +527,7 @@ xfs_agfblock_init(
|
||||
agf->agf_roots[XFS_BTNUM_CNTi] = cpu_to_be32(XFS_CNT_BLOCK(mp));
|
||||
agf->agf_levels[XFS_BTNUM_BNOi] = cpu_to_be32(1);
|
||||
agf->agf_levels[XFS_BTNUM_CNTi] = cpu_to_be32(1);
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
|
||||
if (xfs_has_rmapbt(mp)) {
|
||||
agf->agf_roots[XFS_BTNUM_RMAPi] =
|
||||
cpu_to_be32(XFS_RMAP_BLOCK(mp));
|
||||
agf->agf_levels[XFS_BTNUM_RMAPi] = cpu_to_be32(1);
|
||||
@ -541,9 +540,9 @@ xfs_agfblock_init(
|
||||
tmpsize = id->agsize - mp->m_ag_prealloc_blocks;
|
||||
agf->agf_freeblks = cpu_to_be32(tmpsize);
|
||||
agf->agf_longest = cpu_to_be32(tmpsize);
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (xfs_has_crc(mp))
|
||||
uuid_copy(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid);
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
|
||||
if (xfs_has_reflink(mp)) {
|
||||
agf->agf_refcount_root = cpu_to_be32(
|
||||
xfs_refc_block(mp));
|
||||
agf->agf_refcount_level = cpu_to_be32(1);
|
||||
@ -569,7 +568,7 @@ xfs_agflblock_init(
|
||||
__be32 *agfl_bno;
|
||||
int bucket;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
agfl->agfl_magicnum = cpu_to_be32(XFS_AGFL_MAGIC);
|
||||
agfl->agfl_seqno = cpu_to_be32(id->agno);
|
||||
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_meta_uuid);
|
||||
@ -599,17 +598,17 @@ xfs_agiblock_init(
|
||||
agi->agi_freecount = 0;
|
||||
agi->agi_newino = cpu_to_be32(NULLAGINO);
|
||||
agi->agi_dirino = cpu_to_be32(NULLAGINO);
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (xfs_has_crc(mp))
|
||||
uuid_copy(&agi->agi_uuid, &mp->m_sb.sb_meta_uuid);
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb)) {
|
||||
if (xfs_has_finobt(mp)) {
|
||||
agi->agi_free_root = cpu_to_be32(XFS_FIBT_BLOCK(mp));
|
||||
agi->agi_free_level = cpu_to_be32(1);
|
||||
}
|
||||
for (bucket = 0; bucket < XFS_AGI_UNLINKED_BUCKETS; bucket++)
|
||||
agi->agi_unlinked[bucket] = cpu_to_be32(NULLAGINO);
|
||||
if (xfs_sb_version_hasinobtcounts(&mp->m_sb)) {
|
||||
if (xfs_has_inobtcounts(mp)) {
|
||||
agi->agi_iblocks = cpu_to_be32(1);
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb))
|
||||
if (xfs_has_finobt(mp))
|
||||
agi->agi_fblocks = cpu_to_be32(1);
|
||||
}
|
||||
}
|
||||
@ -719,14 +718,14 @@ xfs_ag_init_headers(
|
||||
.ops = &xfs_finobt_buf_ops,
|
||||
.work = &xfs_btroot_init,
|
||||
.type = XFS_BTNUM_FINO,
|
||||
.need_init = xfs_sb_version_hasfinobt(&mp->m_sb)
|
||||
.need_init = xfs_has_finobt(mp)
|
||||
},
|
||||
{ /* RMAP root block */
|
||||
.daddr = XFS_AGB_TO_DADDR(mp, id->agno, XFS_RMAP_BLOCK(mp)),
|
||||
.numblks = BTOBB(mp->m_sb.sb_blocksize),
|
||||
.ops = &xfs_rmapbt_buf_ops,
|
||||
.work = &xfs_rmaproot_init,
|
||||
.need_init = xfs_sb_version_hasrmapbt(&mp->m_sb)
|
||||
.need_init = xfs_has_rmapbt(mp)
|
||||
},
|
||||
{ /* REFC root block */
|
||||
.daddr = XFS_AGB_TO_DADDR(mp, id->agno, xfs_refc_block(mp)),
|
||||
@ -734,7 +733,7 @@ xfs_ag_init_headers(
|
||||
.ops = &xfs_refcountbt_buf_ops,
|
||||
.work = &xfs_btroot_init,
|
||||
.type = XFS_BTNUM_REFC,
|
||||
.need_init = xfs_sb_version_hasreflink(&mp->m_sb)
|
||||
.need_init = xfs_has_reflink(mp)
|
||||
},
|
||||
{ /* NULL terminating block */
|
||||
.daddr = XFS_BUF_DADDR_NULL,
|
||||
|
@ -51,7 +51,7 @@ xfs_agfl_size(
|
||||
{
|
||||
unsigned int size = mp->m_sb.sb_sectsize;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (xfs_has_crc(mp))
|
||||
size -= sizeof(struct xfs_agfl);
|
||||
|
||||
return size / sizeof(xfs_agblock_t);
|
||||
@ -61,9 +61,9 @@ unsigned int
|
||||
xfs_refc_block(
|
||||
struct xfs_mount *mp)
|
||||
{
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
return XFS_RMAP_BLOCK(mp) + 1;
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb))
|
||||
if (xfs_has_finobt(mp))
|
||||
return XFS_FIBT_BLOCK(mp) + 1;
|
||||
return XFS_IBT_BLOCK(mp) + 1;
|
||||
}
|
||||
@ -72,11 +72,11 @@ xfs_extlen_t
|
||||
xfs_prealloc_blocks(
|
||||
struct xfs_mount *mp)
|
||||
{
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb))
|
||||
if (xfs_has_reflink(mp))
|
||||
return xfs_refc_block(mp) + 1;
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
return XFS_RMAP_BLOCK(mp) + 1;
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb))
|
||||
if (xfs_has_finobt(mp))
|
||||
return XFS_FIBT_BLOCK(mp) + 1;
|
||||
return XFS_IBT_BLOCK(mp) + 1;
|
||||
}
|
||||
@ -126,11 +126,11 @@ xfs_alloc_ag_max_usable(
|
||||
blocks = XFS_BB_TO_FSB(mp, XFS_FSS_TO_BB(mp, 4)); /* ag headers */
|
||||
blocks += XFS_ALLOC_AGFL_RESERVE;
|
||||
blocks += 3; /* AGF, AGI btree root blocks */
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb))
|
||||
if (xfs_has_finobt(mp))
|
||||
blocks++; /* finobt root block */
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
blocks++; /* rmap root block */
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb))
|
||||
if (xfs_has_reflink(mp))
|
||||
blocks++; /* refcount root block */
|
||||
|
||||
return mp->m_sb.sb_agblocks - blocks;
|
||||
@ -598,7 +598,7 @@ xfs_agfl_verify(
|
||||
* AGFL is what the AGF says is active. We can't get to the AGF, so we
|
||||
* can't verify just those entries are valid.
|
||||
*/
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return NULL;
|
||||
|
||||
if (!xfs_verify_magic(bp, agfl->agfl_magicnum))
|
||||
@ -638,7 +638,7 @@ xfs_agfl_read_verify(
|
||||
* AGFL is what the AGF says is active. We can't get to the AGF, so we
|
||||
* can't verify just those entries are valid.
|
||||
*/
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
if (!xfs_buf_verify_cksum(bp, XFS_AGFL_CRC_OFF))
|
||||
@ -659,7 +659,7 @@ xfs_agfl_write_verify(
|
||||
xfs_failaddr_t fa;
|
||||
|
||||
/* no verification of non-crc AGFLs */
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
fa = xfs_agfl_verify(bp);
|
||||
@ -2264,7 +2264,7 @@ xfs_alloc_min_freelist(
|
||||
min_free += min_t(unsigned int, levels[XFS_BTNUM_CNTi] + 1,
|
||||
mp->m_ag_maxlevels);
|
||||
/* space needed reverse mapping used space btree */
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
min_free += min_t(unsigned int, levels[XFS_BTNUM_RMAPi] + 1,
|
||||
mp->m_rmap_maxlevels);
|
||||
|
||||
@ -2373,7 +2373,7 @@ xfs_agfl_needs_reset(
|
||||
int active;
|
||||
|
||||
/* no agfl header on v4 supers */
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return false;
|
||||
|
||||
/*
|
||||
@ -2877,7 +2877,7 @@ xfs_agf_verify(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
struct xfs_agf *agf = bp->b_addr;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
if (!uuid_equal(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid))
|
||||
return __this_address;
|
||||
if (!xfs_log_check_lsn(mp, be64_to_cpu(agf->agf_lsn)))
|
||||
@ -2907,12 +2907,12 @@ xfs_agf_verify(
|
||||
be32_to_cpu(agf->agf_levels[XFS_BTNUM_CNT]) > mp->m_ag_maxlevels)
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb) &&
|
||||
if (xfs_has_rmapbt(mp) &&
|
||||
(be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAP]) < 1 ||
|
||||
be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAP]) > mp->m_rmap_maxlevels))
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb) &&
|
||||
if (xfs_has_rmapbt(mp) &&
|
||||
be32_to_cpu(agf->agf_rmap_blocks) > be32_to_cpu(agf->agf_length))
|
||||
return __this_address;
|
||||
|
||||
@ -2925,16 +2925,16 @@ xfs_agf_verify(
|
||||
if (bp->b_pag && be32_to_cpu(agf->agf_seqno) != bp->b_pag->pag_agno)
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_haslazysbcount(&mp->m_sb) &&
|
||||
if (xfs_has_lazysbcount(mp) &&
|
||||
be32_to_cpu(agf->agf_btreeblks) > be32_to_cpu(agf->agf_length))
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb) &&
|
||||
if (xfs_has_reflink(mp) &&
|
||||
be32_to_cpu(agf->agf_refcount_blocks) >
|
||||
be32_to_cpu(agf->agf_length))
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb) &&
|
||||
if (xfs_has_reflink(mp) &&
|
||||
(be32_to_cpu(agf->agf_refcount_level) < 1 ||
|
||||
be32_to_cpu(agf->agf_refcount_level) > mp->m_refc_maxlevels))
|
||||
return __this_address;
|
||||
@ -2950,7 +2950,7 @@ xfs_agf_read_verify(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
xfs_failaddr_t fa;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb) &&
|
||||
if (xfs_has_crc(mp) &&
|
||||
!xfs_buf_verify_cksum(bp, XFS_AGF_CRC_OFF))
|
||||
xfs_verifier_error(bp, -EFSBADCRC, __this_address);
|
||||
else {
|
||||
@ -2975,7 +2975,7 @@ xfs_agf_write_verify(
|
||||
return;
|
||||
}
|
||||
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
if (bip)
|
||||
@ -3073,13 +3073,13 @@ xfs_alloc_read_agf(
|
||||
* counter only tracks non-root blocks.
|
||||
*/
|
||||
allocbt_blks = pag->pagf_btreeblks;
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
allocbt_blks -= be32_to_cpu(agf->agf_rmap_blocks) - 1;
|
||||
if (allocbt_blks > 0)
|
||||
atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
|
||||
}
|
||||
#ifdef DEBUG
|
||||
else if (!XFS_FORCED_SHUTDOWN(mp)) {
|
||||
else if (!xfs_is_shutdown(mp)) {
|
||||
ASSERT(pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks));
|
||||
ASSERT(pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks));
|
||||
ASSERT(pag->pagf_flcount == be32_to_cpu(agf->agf_flcount));
|
||||
@ -3166,7 +3166,7 @@ xfs_alloc_vextent(
|
||||
* the first a.g. fails.
|
||||
*/
|
||||
if ((args->datatype & XFS_ALLOC_INITIAL_USER_DATA) &&
|
||||
(mp->m_flags & XFS_MOUNT_32BITINODES)) {
|
||||
xfs_is_inode32(mp)) {
|
||||
args->fsbno = XFS_AGB_TO_FSB(mp,
|
||||
((mp->m_agfrotor / rotorstep) %
|
||||
mp->m_sb.sb_agcount), 0);
|
||||
@ -3392,7 +3392,7 @@ struct xfs_alloc_query_range_info {
|
||||
STATIC int
|
||||
xfs_alloc_query_range_helper(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_rec *rec,
|
||||
const union xfs_btree_rec *rec,
|
||||
void *priv)
|
||||
{
|
||||
struct xfs_alloc_query_range_info *query = priv;
|
||||
@ -3407,8 +3407,8 @@ xfs_alloc_query_range_helper(
|
||||
int
|
||||
xfs_alloc_query_range(
|
||||
struct xfs_btree_cur *cur,
|
||||
struct xfs_alloc_rec_incore *low_rec,
|
||||
struct xfs_alloc_rec_incore *high_rec,
|
||||
const struct xfs_alloc_rec_incore *low_rec,
|
||||
const struct xfs_alloc_rec_incore *high_rec,
|
||||
xfs_alloc_query_range_fn fn,
|
||||
void *priv)
|
||||
{
|
||||
|
@ -220,13 +220,13 @@ int xfs_free_extent_fix_freelist(struct xfs_trans *tp, struct xfs_perag *pag,
|
||||
xfs_extlen_t xfs_prealloc_blocks(struct xfs_mount *mp);
|
||||
|
||||
typedef int (*xfs_alloc_query_range_fn)(
|
||||
struct xfs_btree_cur *cur,
|
||||
struct xfs_alloc_rec_incore *rec,
|
||||
void *priv);
|
||||
struct xfs_btree_cur *cur,
|
||||
const struct xfs_alloc_rec_incore *rec,
|
||||
void *priv);
|
||||
|
||||
int xfs_alloc_query_range(struct xfs_btree_cur *cur,
|
||||
struct xfs_alloc_rec_incore *low_rec,
|
||||
struct xfs_alloc_rec_incore *high_rec,
|
||||
const struct xfs_alloc_rec_incore *low_rec,
|
||||
const struct xfs_alloc_rec_incore *high_rec,
|
||||
xfs_alloc_query_range_fn fn, void *priv);
|
||||
int xfs_alloc_query_all(struct xfs_btree_cur *cur, xfs_alloc_query_range_fn fn,
|
||||
void *priv);
|
||||
@ -243,7 +243,7 @@ static inline __be32 *
|
||||
xfs_buf_to_agfl_bno(
|
||||
struct xfs_buf *bp)
|
||||
{
|
||||
if (xfs_sb_version_hascrc(&bp->b_mount->m_sb))
|
||||
if (xfs_has_crc(bp->b_mount))
|
||||
return bp->b_addr + sizeof(struct xfs_agfl);
|
||||
return bp->b_addr;
|
||||
}
|
||||
|
@ -31,9 +31,9 @@ xfs_allocbt_dup_cursor(
|
||||
|
||||
STATIC void
|
||||
xfs_allocbt_set_root(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_ptr *ptr,
|
||||
int inc)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_ptr *ptr,
|
||||
int inc)
|
||||
{
|
||||
struct xfs_buf *agbp = cur->bc_ag.agbp;
|
||||
struct xfs_agf *agf = agbp->b_addr;
|
||||
@ -50,10 +50,10 @@ xfs_allocbt_set_root(
|
||||
|
||||
STATIC int
|
||||
xfs_allocbt_alloc_block(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_ptr *start,
|
||||
union xfs_btree_ptr *new,
|
||||
int *stat)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_ptr *start,
|
||||
union xfs_btree_ptr *new,
|
||||
int *stat)
|
||||
{
|
||||
int error;
|
||||
xfs_agblock_t bno;
|
||||
@ -87,7 +87,7 @@ xfs_allocbt_free_block(
|
||||
xfs_agblock_t bno;
|
||||
int error;
|
||||
|
||||
bno = xfs_daddr_to_agbno(cur->bc_mp, XFS_BUF_ADDR(bp));
|
||||
bno = xfs_daddr_to_agbno(cur->bc_mp, xfs_buf_daddr(bp));
|
||||
error = xfs_alloc_put_freelist(cur->bc_tp, agbp, NULL, bno, 1);
|
||||
if (error)
|
||||
return error;
|
||||
@ -103,11 +103,11 @@ xfs_allocbt_free_block(
|
||||
*/
|
||||
STATIC void
|
||||
xfs_allocbt_update_lastrec(
|
||||
struct xfs_btree_cur *cur,
|
||||
struct xfs_btree_block *block,
|
||||
union xfs_btree_rec *rec,
|
||||
int ptr,
|
||||
int reason)
|
||||
struct xfs_btree_cur *cur,
|
||||
const struct xfs_btree_block *block,
|
||||
const union xfs_btree_rec *rec,
|
||||
int ptr,
|
||||
int reason)
|
||||
{
|
||||
struct xfs_agf *agf = cur->bc_ag.agbp->b_addr;
|
||||
struct xfs_perag *pag;
|
||||
@ -177,8 +177,8 @@ xfs_allocbt_get_maxrecs(
|
||||
|
||||
STATIC void
|
||||
xfs_allocbt_init_key_from_rec(
|
||||
union xfs_btree_key *key,
|
||||
union xfs_btree_rec *rec)
|
||||
union xfs_btree_key *key,
|
||||
const union xfs_btree_rec *rec)
|
||||
{
|
||||
key->alloc.ar_startblock = rec->alloc.ar_startblock;
|
||||
key->alloc.ar_blockcount = rec->alloc.ar_blockcount;
|
||||
@ -186,10 +186,10 @@ xfs_allocbt_init_key_from_rec(
|
||||
|
||||
STATIC void
|
||||
xfs_bnobt_init_high_key_from_rec(
|
||||
union xfs_btree_key *key,
|
||||
union xfs_btree_rec *rec)
|
||||
union xfs_btree_key *key,
|
||||
const union xfs_btree_rec *rec)
|
||||
{
|
||||
__u32 x;
|
||||
__u32 x;
|
||||
|
||||
x = be32_to_cpu(rec->alloc.ar_startblock);
|
||||
x += be32_to_cpu(rec->alloc.ar_blockcount) - 1;
|
||||
@ -199,8 +199,8 @@ xfs_bnobt_init_high_key_from_rec(
|
||||
|
||||
STATIC void
|
||||
xfs_cntbt_init_high_key_from_rec(
|
||||
union xfs_btree_key *key,
|
||||
union xfs_btree_rec *rec)
|
||||
union xfs_btree_key *key,
|
||||
const union xfs_btree_rec *rec)
|
||||
{
|
||||
key->alloc.ar_blockcount = rec->alloc.ar_blockcount;
|
||||
key->alloc.ar_startblock = 0;
|
||||
@ -229,23 +229,23 @@ xfs_allocbt_init_ptr_from_cur(
|
||||
|
||||
STATIC int64_t
|
||||
xfs_bnobt_key_diff(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *key)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *key)
|
||||
{
|
||||
xfs_alloc_rec_incore_t *rec = &cur->bc_rec.a;
|
||||
xfs_alloc_key_t *kp = &key->alloc;
|
||||
struct xfs_alloc_rec_incore *rec = &cur->bc_rec.a;
|
||||
const struct xfs_alloc_rec *kp = &key->alloc;
|
||||
|
||||
return (int64_t)be32_to_cpu(kp->ar_startblock) - rec->ar_startblock;
|
||||
}
|
||||
|
||||
STATIC int64_t
|
||||
xfs_cntbt_key_diff(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *key)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *key)
|
||||
{
|
||||
xfs_alloc_rec_incore_t *rec = &cur->bc_rec.a;
|
||||
xfs_alloc_key_t *kp = &key->alloc;
|
||||
int64_t diff;
|
||||
struct xfs_alloc_rec_incore *rec = &cur->bc_rec.a;
|
||||
const struct xfs_alloc_rec *kp = &key->alloc;
|
||||
int64_t diff;
|
||||
|
||||
diff = (int64_t)be32_to_cpu(kp->ar_blockcount) - rec->ar_blockcount;
|
||||
if (diff)
|
||||
@ -256,9 +256,9 @@ xfs_cntbt_key_diff(
|
||||
|
||||
STATIC int64_t
|
||||
xfs_bnobt_diff_two_keys(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *k1,
|
||||
union xfs_btree_key *k2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *k1,
|
||||
const union xfs_btree_key *k2)
|
||||
{
|
||||
return (int64_t)be32_to_cpu(k1->alloc.ar_startblock) -
|
||||
be32_to_cpu(k2->alloc.ar_startblock);
|
||||
@ -266,11 +266,11 @@ xfs_bnobt_diff_two_keys(
|
||||
|
||||
STATIC int64_t
|
||||
xfs_cntbt_diff_two_keys(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *k1,
|
||||
union xfs_btree_key *k2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *k1,
|
||||
const union xfs_btree_key *k2)
|
||||
{
|
||||
int64_t diff;
|
||||
int64_t diff;
|
||||
|
||||
diff = be32_to_cpu(k1->alloc.ar_blockcount) -
|
||||
be32_to_cpu(k2->alloc.ar_blockcount);
|
||||
@ -295,7 +295,7 @@ xfs_allocbt_verify(
|
||||
if (!xfs_verify_magic(bp, block->bb_magic))
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
fa = xfs_btree_sblock_v5hdr_verify(bp);
|
||||
if (fa)
|
||||
return fa;
|
||||
@ -376,9 +376,9 @@ const struct xfs_buf_ops xfs_cntbt_buf_ops = {
|
||||
|
||||
STATIC int
|
||||
xfs_bnobt_keys_inorder(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *k1,
|
||||
union xfs_btree_key *k2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *k1,
|
||||
const union xfs_btree_key *k2)
|
||||
{
|
||||
return be32_to_cpu(k1->alloc.ar_startblock) <
|
||||
be32_to_cpu(k2->alloc.ar_startblock);
|
||||
@ -386,9 +386,9 @@ xfs_bnobt_keys_inorder(
|
||||
|
||||
STATIC int
|
||||
xfs_bnobt_recs_inorder(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_rec *r1,
|
||||
union xfs_btree_rec *r2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_rec *r1,
|
||||
const union xfs_btree_rec *r2)
|
||||
{
|
||||
return be32_to_cpu(r1->alloc.ar_startblock) +
|
||||
be32_to_cpu(r1->alloc.ar_blockcount) <=
|
||||
@ -397,9 +397,9 @@ xfs_bnobt_recs_inorder(
|
||||
|
||||
STATIC int
|
||||
xfs_cntbt_keys_inorder(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_key *k1,
|
||||
union xfs_btree_key *k2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_key *k1,
|
||||
const union xfs_btree_key *k2)
|
||||
{
|
||||
return be32_to_cpu(k1->alloc.ar_blockcount) <
|
||||
be32_to_cpu(k2->alloc.ar_blockcount) ||
|
||||
@ -410,9 +410,9 @@ xfs_cntbt_keys_inorder(
|
||||
|
||||
STATIC int
|
||||
xfs_cntbt_recs_inorder(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_rec *r1,
|
||||
union xfs_btree_rec *r2)
|
||||
struct xfs_btree_cur *cur,
|
||||
const union xfs_btree_rec *r1,
|
||||
const union xfs_btree_rec *r2)
|
||||
{
|
||||
return be32_to_cpu(r1->alloc.ar_blockcount) <
|
||||
be32_to_cpu(r2->alloc.ar_blockcount) ||
|
||||
@ -498,7 +498,7 @@ xfs_allocbt_init_common(
|
||||
atomic_inc(&pag->pag_ref);
|
||||
cur->bc_ag.pag = pag;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (xfs_has_crc(mp))
|
||||
cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
|
||||
|
||||
return cur;
|
||||
|
@ -20,7 +20,7 @@ struct xbtree_afakeroot;
|
||||
* Btree block header size depends on a superblock flag.
|
||||
*/
|
||||
#define XFS_ALLOC_BLOCK_LEN(mp) \
|
||||
(xfs_sb_version_hascrc(&((mp)->m_sb)) ? \
|
||||
(xfs_has_crc(((mp))) ? \
|
||||
XFS_BTREE_SBLOCK_CRC_LEN : XFS_BTREE_SBLOCK_LEN)
|
||||
|
||||
/*
|
||||
|
@ -146,7 +146,7 @@ xfs_attr_get(
|
||||
|
||||
XFS_STATS_INC(args->dp->i_mount, xs_attr_get);
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(args->dp->i_mount))
|
||||
if (xfs_is_shutdown(args->dp->i_mount))
|
||||
return -EIO;
|
||||
|
||||
args->geo = args->dp->i_mount->m_attr_geo;
|
||||
@ -224,7 +224,7 @@ xfs_attr_try_sf_addname(
|
||||
if (!error && !(args->op_flags & XFS_DA_OP_NOTIME))
|
||||
xfs_trans_ichgtime(args->trans, dp, XFS_ICHGTIME_CHG);
|
||||
|
||||
if (dp->i_mount->m_flags & XFS_MOUNT_WSYNC)
|
||||
if (xfs_has_wsync(dp->i_mount))
|
||||
xfs_trans_set_sync(args->trans);
|
||||
|
||||
return error;
|
||||
@ -335,6 +335,7 @@ xfs_attr_sf_addname(
|
||||
* the attr fork to leaf format and will restart with the leaf
|
||||
* add.
|
||||
*/
|
||||
trace_xfs_attr_sf_addname_return(XFS_DAS_UNINIT, args->dp);
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
return -EAGAIN;
|
||||
}
|
||||
@ -394,6 +395,8 @@ xfs_attr_set_iter(
|
||||
* handling code below
|
||||
*/
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
trace_xfs_attr_set_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
} else if (error) {
|
||||
return error;
|
||||
@ -411,6 +414,7 @@ xfs_attr_set_iter(
|
||||
|
||||
dac->dela_state = XFS_DAS_FOUND_NBLK;
|
||||
}
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
case XFS_DAS_FOUND_LBLK:
|
||||
/*
|
||||
@ -438,6 +442,8 @@ xfs_attr_set_iter(
|
||||
error = xfs_attr_rmtval_set_blk(dac);
|
||||
if (error)
|
||||
return error;
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state,
|
||||
args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -472,6 +478,7 @@ xfs_attr_set_iter(
|
||||
* series.
|
||||
*/
|
||||
dac->dela_state = XFS_DAS_FLIP_LFLAG;
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
case XFS_DAS_FLIP_LFLAG:
|
||||
/*
|
||||
@ -488,11 +495,15 @@ xfs_attr_set_iter(
|
||||
/* Set state in case xfs_attr_rmtval_remove returns -EAGAIN */
|
||||
dac->dela_state = XFS_DAS_RM_LBLK;
|
||||
if (args->rmtblkno) {
|
||||
error = __xfs_attr_rmtval_remove(dac);
|
||||
error = xfs_attr_rmtval_remove(dac);
|
||||
if (error == -EAGAIN)
|
||||
trace_xfs_attr_set_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
if (error)
|
||||
return error;
|
||||
|
||||
dac->dela_state = XFS_DAS_RD_LEAF;
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -542,6 +553,8 @@ xfs_attr_set_iter(
|
||||
error = xfs_attr_rmtval_set_blk(dac);
|
||||
if (error)
|
||||
return error;
|
||||
trace_xfs_attr_set_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -577,6 +590,7 @@ xfs_attr_set_iter(
|
||||
* series
|
||||
*/
|
||||
dac->dela_state = XFS_DAS_FLIP_NFLAG;
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
|
||||
case XFS_DAS_FLIP_NFLAG:
|
||||
@ -595,11 +609,16 @@ xfs_attr_set_iter(
|
||||
/* Set state in case xfs_attr_rmtval_remove returns -EAGAIN */
|
||||
dac->dela_state = XFS_DAS_RM_NBLK;
|
||||
if (args->rmtblkno) {
|
||||
error = __xfs_attr_rmtval_remove(dac);
|
||||
error = xfs_attr_rmtval_remove(dac);
|
||||
if (error == -EAGAIN)
|
||||
trace_xfs_attr_set_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
|
||||
if (error)
|
||||
return error;
|
||||
|
||||
dac->dela_state = XFS_DAS_CLR_FLAG;
|
||||
trace_xfs_attr_set_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -623,8 +642,8 @@ out:
|
||||
/*
|
||||
* Return EEXIST if attr is found, or ENOATTR if not
|
||||
*/
|
||||
int
|
||||
xfs_has_attr(
|
||||
static int
|
||||
xfs_attr_lookup(
|
||||
struct xfs_da_args *args)
|
||||
{
|
||||
struct xfs_inode *dp = args->dp;
|
||||
@ -691,7 +710,7 @@ xfs_attr_set(
|
||||
int rmt_blks = 0;
|
||||
unsigned int total;
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(dp->i_mount))
|
||||
if (xfs_is_shutdown(dp->i_mount))
|
||||
return -EIO;
|
||||
|
||||
error = xfs_qm_dqattach(dp);
|
||||
@ -761,8 +780,8 @@ xfs_attr_set(
|
||||
goto out_trans_cancel;
|
||||
}
|
||||
|
||||
error = xfs_attr_lookup(args);
|
||||
if (args->value) {
|
||||
error = xfs_has_attr(args);
|
||||
if (error == -EEXIST && (args->attr_flags & XATTR_CREATE))
|
||||
goto out_trans_cancel;
|
||||
if (error == -ENOATTR && (args->attr_flags & XATTR_REPLACE))
|
||||
@ -777,7 +796,6 @@ xfs_attr_set(
|
||||
if (!args->trans)
|
||||
goto out_unlock;
|
||||
} else {
|
||||
error = xfs_has_attr(args);
|
||||
if (error != -EEXIST)
|
||||
goto out_trans_cancel;
|
||||
|
||||
@ -790,7 +808,7 @@ xfs_attr_set(
|
||||
* If this is a synchronous mount, make sure that the
|
||||
* transaction goes to disk before returning to the user.
|
||||
*/
|
||||
if (mp->m_flags & XFS_MOUNT_WSYNC)
|
||||
if (xfs_has_wsync(mp))
|
||||
xfs_trans_set_sync(args->trans);
|
||||
|
||||
if (!(args->op_flags & XFS_DA_OP_NOTIME))
|
||||
@ -1176,6 +1194,8 @@ xfs_attr_node_addname(
|
||||
* this point.
|
||||
*/
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
trace_xfs_attr_node_addname_return(
|
||||
dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -1421,11 +1441,14 @@ xfs_attr_remove_iter(
|
||||
* May return -EAGAIN. Roll and repeat until all remote
|
||||
* blocks are removed.
|
||||
*/
|
||||
error = __xfs_attr_rmtval_remove(dac);
|
||||
if (error == -EAGAIN)
|
||||
error = xfs_attr_rmtval_remove(dac);
|
||||
if (error == -EAGAIN) {
|
||||
trace_xfs_attr_remove_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
return error;
|
||||
else if (error)
|
||||
} else if (error) {
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* Refill the state structure with buffers (the prior
|
||||
@ -1438,6 +1461,7 @@ xfs_attr_remove_iter(
|
||||
goto out;
|
||||
dac->dela_state = XFS_DAS_RM_NAME;
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
trace_xfs_attr_remove_iter_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -1466,6 +1490,8 @@ xfs_attr_remove_iter(
|
||||
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
dac->dela_state = XFS_DAS_RM_SHRINK;
|
||||
trace_xfs_attr_remove_iter_return(
|
||||
dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@ -1514,7 +1540,7 @@ xfs_attr_fillstate(xfs_da_state_t *state)
|
||||
ASSERT((path->active >= 0) && (path->active < XFS_DA_NODE_MAXDEPTH));
|
||||
for (blk = path->blk, level = 0; level < path->active; blk++, level++) {
|
||||
if (blk->bp) {
|
||||
blk->disk_blkno = XFS_BUF_ADDR(blk->bp);
|
||||
blk->disk_blkno = xfs_buf_daddr(blk->bp);
|
||||
blk->bp = NULL;
|
||||
} else {
|
||||
blk->disk_blkno = 0;
|
||||
@ -1529,7 +1555,7 @@ xfs_attr_fillstate(xfs_da_state_t *state)
|
||||
ASSERT((path->active >= 0) && (path->active < XFS_DA_NODE_MAXDEPTH));
|
||||
for (blk = path->blk, level = 0; level < path->active; blk++, level++) {
|
||||
if (blk->bp) {
|
||||
blk->disk_blkno = XFS_BUF_ADDR(blk->bp);
|
||||
blk->disk_blkno = xfs_buf_daddr(blk->bp);
|
||||
blk->bp = NULL;
|
||||
} else {
|
||||
blk->disk_blkno = 0;
|
||||
|
@ -490,7 +490,6 @@ int xfs_attr_get_ilocked(struct xfs_da_args *args);
|
||||
int xfs_attr_get(struct xfs_da_args *args);
|
||||
int xfs_attr_set(struct xfs_da_args *args);
|
||||
int xfs_attr_set_args(struct xfs_da_args *args);
|
||||
int xfs_has_attr(struct xfs_da_args *args);
|
||||
int xfs_attr_remove_args(struct xfs_da_args *args);
|
||||
int xfs_attr_remove_iter(struct xfs_delattr_context *dac);
|
||||
bool xfs_attr_namecheck(const void *name, size_t length);
|
||||
|
@ -384,7 +384,7 @@ xfs_attr3_leaf_write_verify(
|
||||
return;
|
||||
}
|
||||
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
if (bip)
|
||||
@ -406,7 +406,7 @@ xfs_attr3_leaf_read_verify(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
xfs_failaddr_t fa;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb) &&
|
||||
if (xfs_has_crc(mp) &&
|
||||
!xfs_buf_verify_cksum(bp, XFS_ATTR3_LEAF_CRC_OFF))
|
||||
xfs_verifier_error(bp, -EFSBADCRC, __this_address);
|
||||
else {
|
||||
@ -489,7 +489,7 @@ xfs_attr_copy_value(
|
||||
}
|
||||
|
||||
if (!args->value) {
|
||||
args->value = kmem_alloc_large(valuelen, KM_NOLOCKDEP);
|
||||
args->value = kvmalloc(valuelen, GFP_KERNEL | __GFP_NOLOCKDEP);
|
||||
if (!args->value)
|
||||
return -ENOMEM;
|
||||
}
|
||||
@ -568,7 +568,7 @@ xfs_attr_shortform_bytesfit(
|
||||
* literal area, but for the old format we are done if there is no
|
||||
* space in the fixed attribute fork.
|
||||
*/
|
||||
if (!(mp->m_flags & XFS_MOUNT_ATTR2))
|
||||
if (!xfs_has_attr2(mp))
|
||||
return 0;
|
||||
|
||||
dsize = dp->i_df.if_bytes;
|
||||
@ -576,7 +576,7 @@ xfs_attr_shortform_bytesfit(
|
||||
switch (dp->i_df.if_format) {
|
||||
case XFS_DINODE_FMT_EXTENTS:
|
||||
/*
|
||||
* If there is no attr fork and the data fork is extents,
|
||||
* If there is no attr fork and the data fork is extents,
|
||||
* determine if creating the default attr fork will result
|
||||
* in the extents form migrating to btree. If so, the
|
||||
* minimum offset only needs to be the space required for
|
||||
@ -621,21 +621,27 @@ xfs_attr_shortform_bytesfit(
|
||||
}
|
||||
|
||||
/*
|
||||
* Switch on the ATTR2 superblock bit (implies also FEATURES2)
|
||||
* Switch on the ATTR2 superblock bit (implies also FEATURES2) unless:
|
||||
* - noattr2 mount option is set,
|
||||
* - on-disk version bit says it is already set, or
|
||||
* - the attr2 mount option is not set to enable automatic upgrade from attr1.
|
||||
*/
|
||||
STATIC void
|
||||
xfs_sbversion_add_attr2(xfs_mount_t *mp, xfs_trans_t *tp)
|
||||
xfs_sbversion_add_attr2(
|
||||
struct xfs_mount *mp,
|
||||
struct xfs_trans *tp)
|
||||
{
|
||||
if ((mp->m_flags & XFS_MOUNT_ATTR2) &&
|
||||
!(xfs_sb_version_hasattr2(&mp->m_sb))) {
|
||||
spin_lock(&mp->m_sb_lock);
|
||||
if (!xfs_sb_version_hasattr2(&mp->m_sb)) {
|
||||
xfs_sb_version_addattr2(&mp->m_sb);
|
||||
spin_unlock(&mp->m_sb_lock);
|
||||
xfs_log_sb(tp);
|
||||
} else
|
||||
spin_unlock(&mp->m_sb_lock);
|
||||
}
|
||||
if (xfs_has_noattr2(mp))
|
||||
return;
|
||||
if (mp->m_sb.sb_features2 & XFS_SB_VERSION2_ATTR2BIT)
|
||||
return;
|
||||
if (!xfs_has_attr2(mp))
|
||||
return;
|
||||
|
||||
spin_lock(&mp->m_sb_lock);
|
||||
xfs_add_attr2(mp);
|
||||
spin_unlock(&mp->m_sb_lock);
|
||||
xfs_log_sb(tp);
|
||||
}
|
||||
|
||||
/*
|
||||
@ -810,8 +816,7 @@ xfs_attr_sf_removename(
|
||||
* Fix up the start offset of the attribute fork
|
||||
*/
|
||||
totsize -= size;
|
||||
if (totsize == sizeof(xfs_attr_sf_hdr_t) &&
|
||||
(mp->m_flags & XFS_MOUNT_ATTR2) &&
|
||||
if (totsize == sizeof(xfs_attr_sf_hdr_t) && xfs_has_attr2(mp) &&
|
||||
(dp->i_df.if_format != XFS_DINODE_FMT_BTREE) &&
|
||||
!(args->op_flags & XFS_DA_OP_ADDNAME)) {
|
||||
xfs_attr_fork_remove(dp, args->trans);
|
||||
@ -821,7 +826,7 @@ xfs_attr_sf_removename(
|
||||
ASSERT(dp->i_forkoff);
|
||||
ASSERT(totsize > sizeof(xfs_attr_sf_hdr_t) ||
|
||||
(args->op_flags & XFS_DA_OP_ADDNAME) ||
|
||||
!(mp->m_flags & XFS_MOUNT_ATTR2) ||
|
||||
!xfs_has_attr2(mp) ||
|
||||
dp->i_df.if_format == XFS_DINODE_FMT_BTREE);
|
||||
xfs_trans_log_inode(args->trans, dp,
|
||||
XFS_ILOG_CORE | XFS_ILOG_ADATA);
|
||||
@ -997,7 +1002,7 @@ xfs_attr_shortform_allfit(
|
||||
bytes += xfs_attr_sf_entsize_byname(name_loc->namelen,
|
||||
be16_to_cpu(name_loc->valuelen));
|
||||
}
|
||||
if ((dp->i_mount->m_flags & XFS_MOUNT_ATTR2) &&
|
||||
if (xfs_has_attr2(dp->i_mount) &&
|
||||
(dp->i_df.if_format != XFS_DINODE_FMT_BTREE) &&
|
||||
(bytes == sizeof(struct xfs_attr_sf_hdr)))
|
||||
return -1;
|
||||
@ -1122,7 +1127,7 @@ xfs_attr3_leaf_to_shortform(
|
||||
goto out;
|
||||
|
||||
if (forkoff == -1) {
|
||||
ASSERT(dp->i_mount->m_flags & XFS_MOUNT_ATTR2);
|
||||
ASSERT(xfs_has_attr2(dp->i_mount));
|
||||
ASSERT(dp->i_df.if_format != XFS_DINODE_FMT_BTREE);
|
||||
xfs_attr_fork_remove(dp, args->trans);
|
||||
goto out;
|
||||
@ -1199,9 +1204,9 @@ xfs_attr3_leaf_to_node(
|
||||
xfs_trans_buf_set_type(args->trans, bp2, XFS_BLFT_ATTR_LEAF_BUF);
|
||||
bp2->b_ops = bp1->b_ops;
|
||||
memcpy(bp2->b_addr, bp1->b_addr, args->geo->blksize);
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
struct xfs_da3_blkinfo *hdr3 = bp2->b_addr;
|
||||
hdr3->blkno = cpu_to_be64(bp2->b_bn);
|
||||
hdr3->blkno = cpu_to_be64(xfs_buf_daddr(bp2));
|
||||
}
|
||||
xfs_trans_log_buf(args->trans, bp2, 0, args->geo->blksize - 1);
|
||||
|
||||
@ -1264,12 +1269,12 @@ xfs_attr3_leaf_create(
|
||||
memset(&ichdr, 0, sizeof(ichdr));
|
||||
ichdr.firstused = args->geo->blksize;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
struct xfs_da3_blkinfo *hdr3 = bp->b_addr;
|
||||
|
||||
ichdr.magic = XFS_ATTR3_LEAF_MAGIC;
|
||||
|
||||
hdr3->blkno = cpu_to_be64(bp->b_bn);
|
||||
hdr3->blkno = cpu_to_be64(xfs_buf_daddr(bp));
|
||||
hdr3->owner = cpu_to_be64(dp->i_ino);
|
||||
uuid_copy(&hdr3->uuid, &mp->m_sb.sb_meta_uuid);
|
||||
|
||||
|
@ -51,7 +51,7 @@ xfs_attr3_rmt_blocks(
|
||||
struct xfs_mount *mp,
|
||||
int attrlen)
|
||||
{
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
int buflen = XFS_ATTR3_RMT_BUF_SPACE(mp, mp->m_sb.sb_blocksize);
|
||||
return (attrlen + buflen - 1) / buflen;
|
||||
}
|
||||
@ -126,11 +126,11 @@ __xfs_attr3_rmt_read_verify(
|
||||
int blksize = mp->m_attr_geo->blksize;
|
||||
|
||||
/* no verification of non-crc buffers */
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return 0;
|
||||
|
||||
ptr = bp->b_addr;
|
||||
bno = bp->b_bn;
|
||||
bno = xfs_buf_daddr(bp);
|
||||
len = BBTOB(bp->b_length);
|
||||
ASSERT(len >= blksize);
|
||||
|
||||
@ -191,11 +191,11 @@ xfs_attr3_rmt_write_verify(
|
||||
xfs_daddr_t bno;
|
||||
|
||||
/* no verification of non-crc buffers */
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
ptr = bp->b_addr;
|
||||
bno = bp->b_bn;
|
||||
bno = xfs_buf_daddr(bp);
|
||||
len = BBTOB(bp->b_length);
|
||||
ASSERT(len >= blksize);
|
||||
|
||||
@ -246,7 +246,7 @@ xfs_attr3_rmt_hdr_set(
|
||||
{
|
||||
struct xfs_attr3_rmt_hdr *rmt = ptr;
|
||||
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return 0;
|
||||
|
||||
rmt->rm_magic = cpu_to_be32(XFS_ATTR3_RMT_MAGIC);
|
||||
@ -284,7 +284,7 @@ xfs_attr_rmtval_copyout(
|
||||
uint8_t **dst)
|
||||
{
|
||||
char *src = bp->b_addr;
|
||||
xfs_daddr_t bno = bp->b_bn;
|
||||
xfs_daddr_t bno = xfs_buf_daddr(bp);
|
||||
int len = BBTOB(bp->b_length);
|
||||
int blksize = mp->m_attr_geo->blksize;
|
||||
|
||||
@ -296,7 +296,7 @@ xfs_attr_rmtval_copyout(
|
||||
|
||||
byte_cnt = min(*valuelen, byte_cnt);
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
if (xfs_attr3_rmt_hdr_ok(src, ino, *offset,
|
||||
byte_cnt, bno)) {
|
||||
xfs_alert(mp,
|
||||
@ -332,7 +332,7 @@ xfs_attr_rmtval_copyin(
|
||||
uint8_t **src)
|
||||
{
|
||||
char *dst = bp->b_addr;
|
||||
xfs_daddr_t bno = bp->b_bn;
|
||||
xfs_daddr_t bno = xfs_buf_daddr(bp);
|
||||
int len = BBTOB(bp->b_length);
|
||||
int blksize = mp->m_attr_geo->blksize;
|
||||
|
||||
@ -672,7 +672,7 @@ xfs_attr_rmtval_invalidate(
|
||||
* routine until it returns something other than -EAGAIN.
|
||||
*/
|
||||
int
|
||||
__xfs_attr_rmtval_remove(
|
||||
xfs_attr_rmtval_remove(
|
||||
struct xfs_delattr_context *dac)
|
||||
{
|
||||
struct xfs_da_args *args = dac->da_args;
|
||||
@ -696,6 +696,7 @@ __xfs_attr_rmtval_remove(
|
||||
*/
|
||||
if (!done) {
|
||||
dac->flags |= XFS_DAC_DEFER_FINISH;
|
||||
trace_xfs_attr_rmtval_remove_return(dac->dela_state, args->dp);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
|
@ -12,7 +12,7 @@ int xfs_attr_rmtval_get(struct xfs_da_args *args);
|
||||
int xfs_attr_rmtval_stale(struct xfs_inode *ip, struct xfs_bmbt_irec *map,
|
||||
xfs_buf_flags_t incore_flags);
|
||||
int xfs_attr_rmtval_invalidate(struct xfs_da_args *args);
|
||||
int __xfs_attr_rmtval_remove(struct xfs_delattr_context *dac);
|
||||
int xfs_attr_rmtval_remove(struct xfs_delattr_context *dac);
|
||||
int xfs_attr_rmt_find_hole(struct xfs_da_args *args);
|
||||
int xfs_attr_rmtval_set_value(struct xfs_da_args *args);
|
||||
int xfs_attr_rmtval_set_blk(struct xfs_delattr_context *dac);
|
||||
|
@ -242,7 +242,7 @@ xfs_bmap_get_bp(
|
||||
for (i = 0; i < XFS_BTREE_MAXLEVELS; i++) {
|
||||
if (!cur->bc_bufs[i])
|
||||
break;
|
||||
if (XFS_BUF_ADDR(cur->bc_bufs[i]) == bno)
|
||||
if (xfs_buf_daddr(cur->bc_bufs[i]) == bno)
|
||||
return cur->bc_bufs[i];
|
||||
}
|
||||
|
||||
@ -251,7 +251,7 @@ xfs_bmap_get_bp(
|
||||
struct xfs_buf_log_item *bip = (struct xfs_buf_log_item *)lip;
|
||||
|
||||
if (bip->bli_item.li_type == XFS_LI_BUF &&
|
||||
XFS_BUF_ADDR(bip->bli_buf) == bno)
|
||||
xfs_buf_daddr(bip->bli_buf) == bno)
|
||||
return bip->bli_buf;
|
||||
}
|
||||
|
||||
@ -739,7 +739,7 @@ xfs_bmap_extents_to_btree(
|
||||
*/
|
||||
abp->b_ops = &xfs_bmbt_buf_ops;
|
||||
ablock = XFS_BUF_TO_BLOCK(abp);
|
||||
xfs_btree_init_block_int(mp, ablock, abp->b_bn,
|
||||
xfs_btree_init_block_int(mp, ablock, xfs_buf_daddr(abp),
|
||||
XFS_BTNUM_BMAP, 0, 0, ip->i_ino,
|
||||
XFS_BTREE_LONG_PTRS);
|
||||
|
||||
@ -1047,7 +1047,7 @@ xfs_bmap_set_attrforkoff(
|
||||
ip->i_forkoff = xfs_attr_shortform_bytesfit(ip, size);
|
||||
if (!ip->i_forkoff)
|
||||
ip->i_forkoff = default_size;
|
||||
else if ((ip->i_mount->m_flags & XFS_MOUNT_ATTR2) && version)
|
||||
else if (xfs_has_attr2(ip->i_mount) && version)
|
||||
*version = 2;
|
||||
break;
|
||||
default:
|
||||
@ -1115,17 +1115,17 @@ xfs_bmap_add_attrfork(
|
||||
xfs_trans_log_inode(tp, ip, logflags);
|
||||
if (error)
|
||||
goto trans_cancel;
|
||||
if (!xfs_sb_version_hasattr(&mp->m_sb) ||
|
||||
(!xfs_sb_version_hasattr2(&mp->m_sb) && version == 2)) {
|
||||
if (!xfs_has_attr(mp) ||
|
||||
(!xfs_has_attr2(mp) && version == 2)) {
|
||||
bool log_sb = false;
|
||||
|
||||
spin_lock(&mp->m_sb_lock);
|
||||
if (!xfs_sb_version_hasattr(&mp->m_sb)) {
|
||||
xfs_sb_version_addattr(&mp->m_sb);
|
||||
if (!xfs_has_attr(mp)) {
|
||||
xfs_add_attr(mp);
|
||||
log_sb = true;
|
||||
}
|
||||
if (!xfs_sb_version_hasattr2(&mp->m_sb) && version == 2) {
|
||||
xfs_sb_version_addattr2(&mp->m_sb);
|
||||
if (!xfs_has_attr2(mp) && version == 2) {
|
||||
xfs_add_attr2(mp);
|
||||
log_sb = true;
|
||||
}
|
||||
spin_unlock(&mp->m_sb_lock);
|
||||
@ -3422,7 +3422,7 @@ xfs_bmap_compute_alignments(
|
||||
int stripe_align = 0;
|
||||
|
||||
/* stripe alignment for allocation is determined by mount parameters */
|
||||
if (mp->m_swidth && (mp->m_flags & XFS_MOUNT_SWALLOC))
|
||||
if (mp->m_swidth && xfs_has_swalloc(mp))
|
||||
stripe_align = mp->m_swidth;
|
||||
else if (mp->m_dalign)
|
||||
stripe_align = mp->m_dalign;
|
||||
@ -3938,7 +3938,7 @@ xfs_bmapi_read(
|
||||
XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BMAPIFORMAT))
|
||||
return -EFSCORRUPTED;
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
XFS_STATS_INC(mp, xs_blk_mapr);
|
||||
@ -4420,7 +4420,7 @@ xfs_bmapi_write(
|
||||
return -EFSCORRUPTED;
|
||||
}
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
XFS_STATS_INC(mp, xs_blk_mapw);
|
||||
@ -4703,7 +4703,7 @@ xfs_bmapi_remap(
|
||||
return -EFSCORRUPTED;
|
||||
}
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
error = xfs_iread_extents(tp, ip, whichfork);
|
||||
@ -5361,7 +5361,7 @@ __xfs_bunmapi(
|
||||
ifp = XFS_IFORK_PTR(ip, whichfork);
|
||||
if (XFS_IS_CORRUPT(mp, !xfs_ifork_has_extents(ifp)))
|
||||
return -EFSCORRUPTED;
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
|
||||
@ -5852,7 +5852,7 @@ xfs_bmap_collapse_extents(
|
||||
return -EFSCORRUPTED;
|
||||
}
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL));
|
||||
@ -5930,7 +5930,7 @@ xfs_bmap_can_insert_extents(
|
||||
|
||||
ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL));
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(ip->i_mount))
|
||||
if (xfs_is_shutdown(ip->i_mount))
|
||||
return -EIO;
|
||||
|
||||
xfs_ilock(ip, XFS_ILOCK_EXCL);
|
||||
@ -5967,7 +5967,7 @@ xfs_bmap_insert_extents(
|
||||
return -EFSCORRUPTED;
|
||||
}
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL | XFS_ILOCK_EXCL));
|
||||
@ -6070,7 +6070,7 @@ xfs_bmap_split_extent(
|
||||
return -EFSCORRUPTED;
|
||||
}
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
/* Read in all the extents */
|
||||
|
@ -58,7 +58,7 @@ xfs_bmdr_to_bmbt(
|
||||
|
||||
void
|
||||
xfs_bmbt_disk_get_all(
|
||||
struct xfs_bmbt_rec *rec,
|
||||
const struct xfs_bmbt_rec *rec,
|
||||
struct xfs_bmbt_irec *irec)
|
||||
{
|
||||
uint64_t l0 = get_unaligned_be64(&rec->l0);
|
||||
@ -78,7 +78,7 @@ xfs_bmbt_disk_get_all(
|
||||
*/
|
||||
xfs_filblks_t
|
||||
xfs_bmbt_disk_get_blockcount(
|
||||
xfs_bmbt_rec_t *r)
|
||||
const struct xfs_bmbt_rec *r)
|
||||
{
|
||||
return (xfs_filblks_t)(be64_to_cpu(r->l1) & xfs_mask64lo(21));
|
||||
}
|
||||
@ -88,7 +88,7 @@ xfs_bmbt_disk_get_blockcount(
|
||||
*/
|
||||
xfs_fileoff_t
|
||||
xfs_bmbt_disk_get_startoff(
|
||||
xfs_bmbt_rec_t *r)
|
||||
const struct xfs_bmbt_rec *r)
|
||||
{
|
||||
return ((xfs_fileoff_t)be64_to_cpu(r->l0) &
|
||||
xfs_mask64lo(64 - BMBT_EXNTFLAG_BITLEN)) >> 9;
|
||||
@ -136,7 +136,7 @@ xfs_bmbt_to_bmdr(
|
||||
xfs_bmbt_key_t *tkp;
|
||||
__be64 *tpp;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
ASSERT(rblock->bb_magic == cpu_to_be32(XFS_BMAP_CRC_MAGIC));
|
||||
ASSERT(uuid_equal(&rblock->bb_u.l.bb_uuid,
|
||||
&mp->m_sb.sb_meta_uuid));
|
||||
@ -193,10 +193,10 @@ xfs_bmbt_update_cursor(
|
||||
|
||||
STATIC int
|
||||
xfs_bmbt_alloc_block(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_ptr *start,
|
||||
- union xfs_btree_ptr *new,
- int *stat)
+ struct xfs_btree_cur *cur,
+ const union xfs_btree_ptr *start,
+ union xfs_btree_ptr *new,
+ int *stat)
 {
 xfs_alloc_arg_t args; /* block allocation args */
 int error; /* error return value */

@@ -282,7 +282,7 @@ xfs_bmbt_free_block(
- xfs_fsblock_t fsbno = XFS_DADDR_TO_FSB(mp, XFS_BUF_ADDR(bp));
+ xfs_fsblock_t fsbno = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp));

@@ -352,8 +352,8 @@ xfs_bmbt_get_dmaxrecs(
 STATIC void
 xfs_bmbt_init_key_from_rec(
 union xfs_btree_key *key,
- union xfs_btree_rec *rec)
+ const union xfs_btree_rec *rec)

@@ -361,8 +361,8 @@ xfs_bmbt_init_key_from_rec(
 STATIC void
 xfs_bmbt_init_high_key_from_rec(
 union xfs_btree_key *key,
- union xfs_btree_rec *rec)
+ const union xfs_btree_rec *rec)

@@ -387,8 +387,8 @@ xfs_bmbt_init_ptr_from_cur(
 STATIC int64_t
 xfs_bmbt_key_diff(
 struct xfs_btree_cur *cur,
- union xfs_btree_key *key)
+ const union xfs_btree_key *key)

@@ -396,12 +396,12 @@ xfs_bmbt_key_diff(
 STATIC int64_t
 xfs_bmbt_diff_two_keys(
 struct xfs_btree_cur *cur,
- union xfs_btree_key *k1,
- union xfs_btree_key *k2)
+ const union xfs_btree_key *k1,
+ const union xfs_btree_key *k2)
 {
 uint64_t a = be64_to_cpu(k1->bmbt.br_startoff);
 uint64_t b = be64_to_cpu(k2->bmbt.br_startoff);

@@ -428,7 +428,7 @@ xfs_bmbt_verify(
 if (!xfs_verify_magic(bp, block->bb_magic))
 return __this_address;
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {

@@ -497,9 +497,9 @@ const struct xfs_buf_ops xfs_bmbt_buf_ops = {
 STATIC int
 xfs_bmbt_keys_inorder(
 struct xfs_btree_cur *cur,
- union xfs_btree_key *k1,
- union xfs_btree_key *k2)
+ const union xfs_btree_key *k1,
+ const union xfs_btree_key *k2)

@@ -507,9 +507,9 @@ xfs_bmbt_keys_inorder(
 STATIC int
 xfs_bmbt_recs_inorder(
 struct xfs_btree_cur *cur,
- union xfs_btree_rec *r1,
- union xfs_btree_rec *r2)
+ const union xfs_btree_rec *r1,
+ const union xfs_btree_rec *r2)

@@ -563,7 +563,7 @@ xfs_bmbt_init_cursor(
 cur->bc_ops = &xfs_bmbt_ops;
 cur->bc_flags = XFS_BTREE_LONG_PTRS | XFS_BTREE_ROOT_IN_INODE;
- if (xfs_sb_version_hascrc(&mp->m_sb))
+ if (xfs_has_crc(mp))
 cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
 cur->bc_ino.forksize = XFS_IFORK_SIZE(ip, whichfork);
@@ -16,7 +16,7 @@ struct xfs_trans;
 * Btree block header size depends on a superblock flag.
 */
 #define XFS_BMBT_BLOCK_LEN(mp) \
- (xfs_sb_version_hascrc(&((mp)->m_sb)) ? \
+ (xfs_has_crc(((mp))) ? \
 XFS_BTREE_LBLOCK_CRC_LEN : XFS_BTREE_LBLOCK_LEN)

@@ -88,9 +88,10 @@ extern void xfs_bmdr_to_bmbt(struct xfs_inode *, xfs_bmdr_block_t *, int,
 void xfs_bmbt_disk_set_all(struct xfs_bmbt_rec *r, struct xfs_bmbt_irec *s);
- extern xfs_filblks_t xfs_bmbt_disk_get_blockcount(xfs_bmbt_rec_t *r);
- extern xfs_fileoff_t xfs_bmbt_disk_get_startoff(xfs_bmbt_rec_t *r);
- extern void xfs_bmbt_disk_get_all(xfs_bmbt_rec_t *r, xfs_bmbt_irec_t *s);
+ extern xfs_filblks_t xfs_bmbt_disk_get_blockcount(const struct xfs_bmbt_rec *r);
+ extern xfs_fileoff_t xfs_bmbt_disk_get_startoff(const struct xfs_bmbt_rec *r);
+ void xfs_bmbt_disk_get_all(const struct xfs_bmbt_rec *r,
+ struct xfs_bmbt_irec *s);
 extern void xfs_bmbt_to_bmdr(struct xfs_mount *, struct xfs_btree_block *, int,
 xfs_bmdr_block_t *, int);
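Every xfs_sb_version_hascrc(&mp->m_sb) test above becomes xfs_has_crc(mp): the superblock is decoded once at mount time into a feature bitmask cached on the xfs_mount, and hot paths then test a single bit instead of re-deriving the answer from raw superblock fields. A minimal standalone sketch of that pattern follows; the struct, field and bit names are invented for illustration and are not the kernel's.

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented stand-ins for the real xfs_sb/xfs_mount types. */
    struct demo_sb    { unsigned int versionnum; unsigned int features_incompat; };
    struct demo_mount {
        struct demo_sb sb;
        unsigned long  features;   /* cached feature bits, set once at mount */
    };

    #define DEMO_FEAT_CRC   (1UL << 0)
    #define DEMO_FEAT_FTYPE (1UL << 1)

    /* Done once when the superblock is read in. */
    static void demo_set_features(struct demo_mount *mp)
    {
        if (mp->sb.versionnum == 5)
            mp->features |= DEMO_FEAT_CRC;
        if (mp->sb.features_incompat & 0x1)
            mp->features |= DEMO_FEAT_FTYPE;
    }

    /* Hot-path check in the style of xfs_has_crc(mp): one bit test. */
    static inline bool demo_has_crc(const struct demo_mount *mp)
    {
        return (mp->features & DEMO_FEAT_CRC) != 0;
    }

    int main(void)
    {
        struct demo_mount m = { .sb = { .versionnum = 5, .features_incompat = 1 } };

        demo_set_features(&m);
        printf("crc: %d\n", demo_has_crc(&m));
        return 0;
    }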
@@ -64,13 +64,13 @@ __xfs_btree_check_lblock(
- int crc = xfs_sb_version_hascrc(&mp->m_sb);
+ int crc = xfs_has_crc(mp);
 if (crc) {
 if (block->bb_u.l.bb_blkno !=
- cpu_to_be64(bp ? bp->b_bn : XFS_BUF_DADDR_NULL))
+ cpu_to_be64(bp ? xfs_buf_daddr(bp) : XFS_BUF_DADDR_NULL))
 return __this_address;

@@ -129,13 +129,13 @@ __xfs_btree_check_sblock(
- int crc = xfs_sb_version_hascrc(&mp->m_sb);
+ int crc = xfs_has_crc(mp);
 if (crc) {
 if (block->bb_u.s.bb_blkno !=
- cpu_to_be64(bp ? bp->b_bn : XFS_BUF_DADDR_NULL))
+ cpu_to_be64(bp ? xfs_buf_daddr(bp) : XFS_BUF_DADDR_NULL))
 return __this_address;

@@ -225,10 +225,10 @@ xfs_btree_check_sptr(
 static int
 xfs_btree_check_ptr(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 int index,
 int level)

@@ -273,7 +273,7 @@ xfs_btree_lblock_calc_crc(
- if (!xfs_sb_version_hascrc(&bp->b_mount->m_sb))
+ if (!xfs_has_crc(bp->b_mount))
 return;

@@ -287,7 +287,7 @@ xfs_btree_lblock_verify_crc(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 if (!xfs_log_check_lsn(mp, be64_to_cpu(block->bb_u.l.bb_lsn)))
 return false;

@@ -311,7 +311,7 @@ xfs_btree_sblock_calc_crc(
- if (!xfs_sb_version_hascrc(&bp->b_mount->m_sb))
+ if (!xfs_has_crc(bp->b_mount))
 return;

@@ -325,7 +325,7 @@ xfs_btree_sblock_verify_crc(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 if (!xfs_log_check_lsn(mp, be64_to_cpu(block->bb_u.s.bb_lsn)))
 return false;

@@ -374,7 +374,7 @@ xfs_btree_del_cursor(
 ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 ||
- XFS_FORCED_SHUTDOWN(cur->bc_mp));
+ xfs_is_shutdown(cur->bc_mp));
 if (unlikely(cur->bc_flags & XFS_BTREE_STAGING))
 kmem_free(cur->bc_ops);

@@ -420,7 +420,7 @@ xfs_btree_dup_cursor(
 error = xfs_trans_read_buf(mp, tp, mp->m_ddev_targp,
- XFS_BUF_ADDR(bp), mp->m_bsize,
+ xfs_buf_daddr(bp), mp->m_bsize,
 0, &bp,
 cur->bc_ops->buf_ops);

@@ -935,9 +935,9 @@ xfs_btree_readahead(
 STATIC int
 xfs_btree_ptr_to_daddr(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 xfs_daddr_t *daddr)

@@ -1012,8 +1012,8 @@ xfs_btree_setbuf(
 bool
 xfs_btree_ptr_is_null(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr)
+ const union xfs_btree_ptr *ptr)

@@ -1059,10 +1059,10 @@ xfs_btree_get_sibling(
 void
 xfs_btree_set_sibling(
 struct xfs_btree_cur *cur,
 struct xfs_btree_block *block,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 int lr)

@@ -1090,7 +1090,7 @@ xfs_btree_init_block_int(
- int crc = xfs_sb_version_hascrc(&mp->m_sb);
+ int crc = xfs_has_crc(mp);
 __u32 magic = xfs_btree_magic(crc, btnum);

@@ -1131,7 +1131,7 @@ xfs_btree_init_block(
- xfs_btree_init_block_int(mp, XFS_BUF_TO_BLOCK(bp), bp->b_bn,
+ xfs_btree_init_block_int(mp, XFS_BUF_TO_BLOCK(bp), xfs_buf_daddr(bp),
 btnum, level, numrecs, owner, 0);

@@ -1155,9 +1155,9 @@ xfs_btree_init_block_cur(
- xfs_btree_init_block_int(cur->bc_mp, XFS_BUF_TO_BLOCK(bp), bp->b_bn,
- cur->bc_btnum, level, numrecs,
- owner, cur->bc_flags);
+ xfs_btree_init_block_int(cur->bc_mp, XFS_BUF_TO_BLOCK(bp),
+ xfs_buf_daddr(bp), cur->bc_btnum, level,
+ numrecs, owner, cur->bc_flags);

@@ -1192,10 +1192,10 @@ xfs_btree_buf_to_ptr(
 ptr->l = cpu_to_be64(XFS_DADDR_TO_FSB(cur->bc_mp,
- XFS_BUF_ADDR(bp)));
+ xfs_buf_daddr(bp)));
 ptr->s = cpu_to_be32(xfs_daddr_to_agbno(cur->bc_mp,
- XFS_BUF_ADDR(bp)));
+ xfs_buf_daddr(bp)));

@@ -1229,10 +1229,10 @@ xfs_btree_set_refs(
 int
 xfs_btree_get_buf_block(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 struct xfs_btree_block **block,
 struct xfs_buf **bpp)

@@ -1257,11 +1257,11 @@ xfs_btree_get_buf_block(
 STATIC int
 xfs_btree_read_buf_block(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 int flags,
 struct xfs_btree_block **block,
 struct xfs_buf **bpp)

@@ -1289,10 +1289,10 @@ xfs_btree_read_buf_block(
 void
 xfs_btree_copy_keys(
 struct xfs_btree_cur *cur,
 union xfs_btree_key *dst_key,
- union xfs_btree_key *src_key,
+ const union xfs_btree_key *src_key,
 int numkeys)

@@ -1713,10 +1713,10 @@ error0:
 int
 xfs_btree_lookup_get_block(
 struct xfs_btree_cur *cur, /* btree cursor */
 int level, /* level in the btree */
- union xfs_btree_ptr *pp, /* ptr to btree block */
+ const union xfs_btree_ptr *pp, /* ptr to btree block */
 struct xfs_btree_block **blkp) /* return btree block */

@@ -1739,7 +1739,7 @@ xfs_btree_lookup_get_block(
- if (bp && XFS_BUF_ADDR(bp) == daddr) {
+ if (bp && xfs_buf_daddr(bp) == daddr) {
 *blkp = XFS_BUF_TO_BLOCK(bp);
 return 0;

@@ -1749,7 +1749,7 @@ xfs_btree_lookup_get_block(
 /* Check the inode owner since the verifiers don't. */
- if (xfs_sb_version_hascrc(&cur->bc_mp->m_sb) &&
+ if (xfs_has_crc(cur->bc_mp) &&
 !(cur->bc_ino.flags & XFS_BTCUR_BMBT_INVALID_OWNER) &&
 (cur->bc_flags & XFS_BTREE_LONG_PTRS) &&

@@ -2923,10 +2923,11 @@ xfs_btree_new_iroot(
 memcpy(cblock, block, xfs_btree_block_len(cur));
 if (cur->bc_flags & XFS_BTREE_CRC_BLOCKS) {
+ __be64 bno = cpu_to_be64(xfs_buf_daddr(cbp));
 if (cur->bc_flags & XFS_BTREE_LONG_PTRS)
- cblock->bb_u.l.bb_blkno = cpu_to_be64(cbp->b_bn);
+ cblock->bb_u.l.bb_blkno = bno;
 else
- cblock->bb_u.s.bb_blkno = cpu_to_be64(cbp->b_bn);
+ cblock->bb_u.s.bb_blkno = bno;
 }

@@ -3225,7 +3226,7 @@ xfs_btree_insrec(
 block = xfs_btree_get_block(cur, level, &bp);
- old_bn = bp ? bp->b_bn : XFS_BUF_DADDR_NULL;
+ old_bn = bp ? xfs_buf_daddr(bp) : XFS_BUF_DADDR_NULL;
 numrecs = xfs_btree_get_numrecs(block);

@@ -3341,7 +3342,7 @@ xfs_btree_insrec(
- if (bp && bp->b_bn != old_bn) {
+ if (bp && xfs_buf_daddr(bp) != old_bn) {
 xfs_btree_get_keys(cur, block, lkey);
 } else if (xfs_btree_needs_key_update(cur, optr)) {

@@ -4418,11 +4419,11 @@ xfs_btree_lblock_v5hdr_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return __this_address;
- if (block->bb_u.l.bb_blkno != cpu_to_be64(bp->b_bn))
+ if (block->bb_u.l.bb_blkno != cpu_to_be64(xfs_buf_daddr(bp)))
 return __this_address;

@@ -4468,11 +4469,11 @@ xfs_btree_sblock_v5hdr_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return __this_address;
- if (block->bb_u.s.bb_blkno != cpu_to_be64(bp->b_bn))
+ if (block->bb_u.s.bb_blkno != cpu_to_be64(xfs_buf_daddr(bp)))
 return __this_address;

@@ -4499,7 +4500,7 @@ xfs_btree_sblock_verify(
 /* sibling pointer verification */
- agno = xfs_daddr_to_agno(mp, XFS_BUF_ADDR(bp));
+ agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp));

@@ -4536,8 +4537,8 @@ xfs_btree_compute_maxlevels(
 STATIC int
 xfs_btree_simple_query_range(
 struct xfs_btree_cur *cur,
- union xfs_btree_key *low_key,
- union xfs_btree_key *high_key,
+ const union xfs_btree_key *low_key,
+ const union xfs_btree_key *high_key,
 xfs_btree_query_range_fn fn,
 void *priv)

@@ -4627,8 +4628,8 @@ out:
 STATIC int
 xfs_btree_overlapped_query_range(
 struct xfs_btree_cur *cur,
- union xfs_btree_key *low_key,
- union xfs_btree_key *high_key,
+ const union xfs_btree_key *low_key,
+ const union xfs_btree_key *high_key,
 xfs_btree_query_range_fn fn,
 void *priv)

@@ -4769,8 +4770,8 @@ out:
 int
 xfs_btree_query_range(
 struct xfs_btree_cur *cur,
- union xfs_btree_irec *low_rec,
- union xfs_btree_irec *high_rec,
+ const union xfs_btree_irec *low_rec,
+ const union xfs_btree_irec *high_rec,
 xfs_btree_query_range_fn fn,
 void *priv)

@@ -4877,7 +4878,7 @@ xfs_btree_diff_two_ptrs(
 STATIC int
 xfs_btree_has_record_helper(
 struct xfs_btree_cur *cur,
- union xfs_btree_rec *rec,
+ const union xfs_btree_rec *rec,
 void *priv)

@@ -4886,12 +4887,12 @@ xfs_btree_has_record_helper(
 /* Is there a record covering a given range of keys? */
 int
 xfs_btree_has_record(
 struct xfs_btree_cur *cur,
- union xfs_btree_irec *low,
- union xfs_btree_irec *high,
+ const union xfs_btree_irec *low,
+ const union xfs_btree_irec *high,
 bool *exists)
 {
 int error;
 error = xfs_btree_query_range(cur, low, high,
 &xfs_btree_has_record_helper, NULL);
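The other mechanical substitution running through xfs_btree.c is replacing direct reads of bp->b_bn (and the XFS_BUF_ADDR() wrapper) with the xfs_buf_daddr() accessor, so callers no longer depend on the buffer's internal field layout. A tiny sketch of the same accessor idea, with invented type names:

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t demo_daddr_t;

    /* Callers should not reach into this struct directly. */
    struct demo_buf {
        demo_daddr_t b_bn;    /* disk address the buffer maps */
    };

    /* Accessor in the style of xfs_buf_daddr(bp). */
    static inline demo_daddr_t demo_buf_daddr(const struct demo_buf *bp)
    {
        return bp->b_bn;
    }

    int main(void)
    {
        struct demo_buf b = { .b_bn = 4096 };

        printf("daddr: %lld\n", (long long)demo_buf_daddr(&b));
        return 0;
    }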
@@ -106,19 +106,19 @@ struct xfs_btree_ops {
 /* update btree root pointer */
 void (*set_root)(struct xfs_btree_cur *cur,
- union xfs_btree_ptr *nptr, int level_change);
+ const union xfs_btree_ptr *nptr, int level_change);

 /* block allocation / freeing */
 int (*alloc_block)(struct xfs_btree_cur *cur,
- union xfs_btree_ptr *start_bno,
+ const union xfs_btree_ptr *start_bno,
 union xfs_btree_ptr *new_bno,
 int *stat);

 /* update last record information */
 void (*update_lastrec)(struct xfs_btree_cur *cur,
- struct xfs_btree_block *block,
- union xfs_btree_rec *rec,
+ const struct xfs_btree_block *block,
+ const union xfs_btree_rec *rec,
 int ptr, int reason);

@@ -130,37 +130,37 @@ struct xfs_btree_ops {
 /* init values of btree structures */
 void (*init_key_from_rec)(union xfs_btree_key *key,
- union xfs_btree_rec *rec);
+ const union xfs_btree_rec *rec);
 void (*init_high_key_from_rec)(union xfs_btree_key *key,
- union xfs_btree_rec *rec);
+ const union xfs_btree_rec *rec);

 /* difference between key value and cursor value */
 int64_t (*key_diff)(struct xfs_btree_cur *cur,
- union xfs_btree_key *key);
+ const union xfs_btree_key *key);

 /*
 * Difference between key2 and key1 -- positive if key1 > key2,
 * negative if key1 < key2, and zero if equal.
 */
 int64_t (*diff_two_keys)(struct xfs_btree_cur *cur,
- union xfs_btree_key *key1,
- union xfs_btree_key *key2);
+ const union xfs_btree_key *key1,
+ const union xfs_btree_key *key2);

 /* check that k1 is lower than k2 */
 int (*keys_inorder)(struct xfs_btree_cur *cur,
- union xfs_btree_key *k1,
- union xfs_btree_key *k2);
+ const union xfs_btree_key *k1,
+ const union xfs_btree_key *k2);

 /* check that r1 is lower than r2 */
 int (*recs_inorder)(struct xfs_btree_cur *cur,
- union xfs_btree_rec *r1,
- union xfs_btree_rec *r2);
+ const union xfs_btree_rec *r1,
+ const union xfs_btree_rec *r2);
 };

@@ -423,7 +423,7 @@ void xfs_btree_log_recs(struct xfs_btree_cur *, struct xfs_buf *, int, int);
-static inline int xfs_btree_get_numrecs(struct xfs_btree_block *block)
+static inline int xfs_btree_get_numrecs(const struct xfs_btree_block *block)
 {
 return be16_to_cpu(block->bb_numrecs);
 }

@@ -434,7 +434,7 @@ static inline void xfs_btree_set_numrecs(struct xfs_btree_block *block,
-static inline int xfs_btree_get_level(struct xfs_btree_block *block)
+static inline int xfs_btree_get_level(const struct xfs_btree_block *block)
 {
 return be16_to_cpu(block->bb_level);
 }

@@ -471,10 +471,11 @@ unsigned long long xfs_btree_calc_size(uint *limits, unsigned long long len);
 typedef int (*xfs_btree_query_range_fn)(struct xfs_btree_cur *cur,
- union xfs_btree_rec *rec, void *priv);
+ const union xfs_btree_rec *rec, void *priv);

 int xfs_btree_query_range(struct xfs_btree_cur *cur,
- union xfs_btree_irec *low_rec, union xfs_btree_irec *high_rec,
+ const union xfs_btree_irec *low_rec,
+ const union xfs_btree_irec *high_rec,
 xfs_btree_query_range_fn fn, void *priv);

@@ -502,10 +503,11 @@ union xfs_btree_key *xfs_btree_high_key_addr(struct xfs_btree_cur *cur, int n,
 int xfs_btree_lookup_get_block(struct xfs_btree_cur *cur, int level,
- union xfs_btree_ptr *pp, struct xfs_btree_block **blkp);
+ const union xfs_btree_ptr *pp, struct xfs_btree_block **blkp);
-bool xfs_btree_ptr_is_null(struct xfs_btree_cur *cur, union xfs_btree_ptr *ptr);
+bool xfs_btree_ptr_is_null(struct xfs_btree_cur *cur,
+ const union xfs_btree_ptr *ptr);
 int64_t xfs_btree_diff_two_ptrs(struct xfs_btree_cur *cur,
 const union xfs_btree_ptr *a,
 const union xfs_btree_ptr *b);

@@ -516,8 +518,9 @@ void xfs_btree_get_keys(struct xfs_btree_cur *cur,
-int xfs_btree_has_record(struct xfs_btree_cur *cur, union xfs_btree_irec *low,
- union xfs_btree_irec *high, bool *exists);
+int xfs_btree_has_record(struct xfs_btree_cur *cur,
+ const union xfs_btree_irec *low,
+ const union xfs_btree_irec *high, bool *exists);
 bool xfs_btree_has_more_records(struct xfs_btree_cur *cur);

@@ -540,10 +543,11 @@ xfs_btree_islastblock(
-int xfs_btree_get_buf_block(struct xfs_btree_cur *cur, union xfs_btree_ptr *ptr,
- struct xfs_btree_block **block, struct xfs_buf **bpp);
+int xfs_btree_get_buf_block(struct xfs_btree_cur *cur,
+ const union xfs_btree_ptr *ptr, struct xfs_btree_block **block,
+ struct xfs_buf **bpp);
 void xfs_btree_set_sibling(struct xfs_btree_cur *cur,
- struct xfs_btree_block *block, union xfs_btree_ptr *ptr,
+ struct xfs_btree_block *block, const union xfs_btree_ptr *ptr,
 int lr);

@@ -551,7 +555,7 @@ void xfs_btree_copy_ptrs(struct xfs_btree_cur *cur,
 void xfs_btree_copy_keys(struct xfs_btree_cur *cur,
- union xfs_btree_key *dst_key, union xfs_btree_key *src_key,
- int numkeys);
+ union xfs_btree_key *dst_key,
+ const union xfs_btree_key *src_key, int numkeys);

 #endif /* __XFS_BTREE_H__ */
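The xfs_btree_ops hunks const-qualify every input-only argument (keys, records, pointers) so the compiler can prove that comparison and query callbacks never modify their inputs, which is what lets the range-query declarations above accept const records. A stripped-down sketch of a const-correct ops table; the struct and field names are invented for the sketch:

    #include <stdint.h>
    #include <stdio.h>

    struct demo_key { uint64_t startoff; };

    struct demo_ops {
        /* Inputs are const: a compare callback must not scribble on its keys. */
        int64_t (*diff_two_keys)(const struct demo_key *k1,
                                 const struct demo_key *k2);
    };

    /* Positive if k1 > k2, negative if k1 < k2, zero if equal. */
    static int64_t demo_diff(const struct demo_key *k1, const struct demo_key *k2)
    {
        if (k1->startoff > k2->startoff)
            return 1;
        if (k1->startoff < k2->startoff)
            return -1;
        return 0;
    }

    static const struct demo_ops demo_btree_ops = { .diff_two_keys = demo_diff };

    int main(void)
    {
        struct demo_key a = { 10 }, b = { 20 };

        printf("%lld\n", (long long)demo_btree_ops.diff_two_keys(&a, &b));
        return 0;
    }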
@@ -59,10 +59,10 @@ xfs_btree_fakeroot_dup_cursor(
 STATIC int
 xfs_btree_fakeroot_alloc_block(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *start_bno,
+ const union xfs_btree_ptr *start_bno,
 union xfs_btree_ptr *new_bno,
 int *stat)
 {
 ASSERT(0);
 return -EFSCORRUPTED;

@@ -112,9 +112,9 @@ xfs_btree_fakeroot_init_ptr_from_cur(
 /* Update the btree root information for a per-AG fake root. */
 STATIC void
 xfs_btree_afakeroot_set_root(
 struct xfs_btree_cur *cur,
- union xfs_btree_ptr *ptr,
+ const union xfs_btree_ptr *ptr,
 int inc)
 {
 struct xbtree_afakeroot *afake = cur->bc_ag.afake;
@@ -129,7 +129,7 @@ xfs_da3_node_hdr_from_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_da3_intnode *from3 = (struct xfs_da3_intnode *)from;

@@ -156,7 +156,7 @@ xfs_da3_node_hdr_to_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_da3_intnode *to3 = (struct xfs_da3_intnode *)to;

@@ -191,10 +191,10 @@ xfs_da3_blkinfo_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 if (!uuid_equal(&hdr3->uuid, &mp->m_sb.sb_meta_uuid))
 return __this_address;
- if (be64_to_cpu(hdr3->blkno) != bp->b_bn)
+ if (be64_to_cpu(hdr3->blkno) != xfs_buf_daddr(bp))
 return __this_address;
 if (!xfs_log_check_lsn(mp, be64_to_cpu(hdr3->lsn)))
 return __this_address;

@@ -253,7 +253,7 @@ xfs_da3_node_write_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return;

@@ -442,12 +442,12 @@ xfs_da3_node_create(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_da3_node_hdr *hdr3 = bp->b_addr;
 memset(hdr3, 0, sizeof(struct xfs_da3_node_hdr));
 ichdr.magic = XFS_DA3_NODE_MAGIC;
- hdr3->info.blkno = cpu_to_be64(bp->b_bn);
+ hdr3->info.blkno = cpu_to_be64(xfs_buf_daddr(bp));
 hdr3->info.owner = cpu_to_be64(args->dp->i_ino);

@@ -711,7 +711,7 @@ xfs_da3_root_split(
- node3->hdr.info.blkno = cpu_to_be64(bp->b_bn);
+ node3->hdr.info.blkno = cpu_to_be64(xfs_buf_daddr(bp));
 xfs_trans_log_buf(tp, bp, 0, size - 1);

@@ -1219,7 +1219,7 @@ xfs_da3_root_join(
 struct xfs_da3_blkinfo *da3 = root_blk->bp->b_addr;
- da3->blkno = cpu_to_be64(root_blk->bp->b_bn);
+ da3->blkno = cpu_to_be64(xfs_buf_daddr(root_blk->bp));
 xfs_trans_log_buf(args->trans, root_blk->bp, 0,
 args->geo->blksize - 1);
@@ -789,7 +789,7 @@ struct xfs_attr3_rmt_hdr {
 #define XFS_ATTR3_RMT_CRC_OFF offsetof(struct xfs_attr3_rmt_hdr, rm_crc)

 #define XFS_ATTR3_RMT_BUF_SPACE(mp, bufsize) \
- ((bufsize) - (xfs_sb_version_hascrc(&(mp)->m_sb) ? \
+ ((bufsize) - (xfs_has_crc((mp)) ? \
 sizeof(struct xfs_attr3_rmt_hdr) : 0))

 /* Number of bytes in a directory block. */
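XFS_ATTR3_RMT_BUF_SPACE() computes how many payload bytes fit in a remote attribute block: on CRC-enabled (v5) filesystems each block carries an xfs_attr3_rmt_hdr, so that header is subtracted from the buffer size; otherwise the whole buffer is payload. A hedged arithmetic sketch of the same idea; the header size below is invented, not the on-disk value:

    #include <stdbool.h>
    #include <stdio.h>

    #define DEMO_RMT_HDR_SIZE 56   /* invented header size for the sketch */

    /* Usable bytes in a remote attr block: drop the v5 header if present. */
    static int demo_rmt_buf_space(bool has_crc, int bufsize)
    {
        return bufsize - (has_crc ? DEMO_RMT_HDR_SIZE : 0);
    }

    int main(void)
    {
        printf("v5: %d  v4: %d\n", demo_rmt_buf_space(true, 4096),
                                   demo_rmt_buf_space(false, 4096));
        return 0;
    }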
@@ -115,7 +115,7 @@ xfs_da_mount(
 dageo->blksize = xfs_dir2_dirblock_bytes(&mp->m_sb);
 dageo->fsbcount = 1 << mp->m_sb.sb_dirblklog;
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 dageo->node_hdr_size = sizeof(struct xfs_da3_node_hdr);
 dageo->leaf_hdr_size = sizeof(struct xfs_dir3_leaf_hdr);
 dageo->free_hdr_size = sizeof(struct xfs_dir3_free_hdr);

@@ -730,7 +730,7 @@ xfs_dir2_hashname(
- if (unlikely(xfs_sb_version_hasasciici(&mp->m_sb)))
+ if (unlikely(xfs_has_asciici(mp)))
 return xfs_ascii_ci_hashname(name);
 return xfs_da_hashname(name->name, name->len);

@@ -741,7 +741,7 @@ xfs_dir2_compname(
- if (unlikely(xfs_sb_version_hasasciici(&args->dp->i_mount->m_sb)))
+ if (unlikely(xfs_has_asciici(args->dp->i_mount)))
 return xfs_ascii_ci_compname(args, name, len);
 return xfs_da_compname(args, name, len);
@@ -53,10 +53,10 @@ xfs_dir3_block_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 if (!uuid_equal(&hdr3->uuid, &mp->m_sb.sb_meta_uuid))
 return __this_address;
- if (be64_to_cpu(hdr3->blkno) != bp->b_bn)
+ if (be64_to_cpu(hdr3->blkno) != xfs_buf_daddr(bp))
 return __this_address;

@@ -71,7 +71,7 @@ xfs_dir3_block_read_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb) &&
+ if (xfs_has_crc(mp) &&
 !xfs_buf_verify_cksum(bp, XFS_DIR3_DATA_CRC_OFF))
 xfs_verifier_error(bp, -EFSBADCRC, __this_address);

@@ -96,7 +96,7 @@ xfs_dir3_block_write_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return;

@@ -121,7 +121,7 @@ xfs_dir3_block_header_check(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_blk_hdr *hdr3 = bp->b_addr;
 if (be64_to_cpu(hdr3->owner) != dp->i_ino)

@@ -171,10 +171,10 @@ xfs_dir3_block_init(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 memset(hdr3, 0, sizeof(*hdr3));
 hdr3->magic = cpu_to_be32(XFS_DIR3_BLOCK_MAGIC);
- hdr3->blkno = cpu_to_be64(bp->b_bn);
+ hdr3->blkno = cpu_to_be64(xfs_buf_daddr(bp));
 hdr3->owner = cpu_to_be64(dp->i_ino);
 uuid_copy(&hdr3->uuid, &mp->m_sb.sb_meta_uuid);
 return;
@@ -29,7 +29,7 @@ xfs_dir2_data_bestfree_p(
- if (xfs_sb_version_hascrc(&mp->m_sb))
+ if (xfs_has_crc(mp))
 return ((struct xfs_dir3_data_hdr *)hdr)->best_free;
 return hdr->bestfree;

@@ -51,7 +51,7 @@ xfs_dir2_data_get_ftype(
- if (xfs_sb_version_hasftype(&mp->m_sb)) {
+ if (xfs_has_ftype(mp)) {
 uint8_t ftype = dep->name[dep->namelen];
 if (likely(ftype < XFS_DIR3_FT_MAX))

@@ -70,7 +70,7 @@ xfs_dir2_data_put_ftype(
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 dep->name[dep->namelen] = ftype;

@@ -297,10 +297,10 @@ xfs_dir3_data_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 if (!uuid_equal(&hdr3->uuid, &mp->m_sb.sb_meta_uuid))
 return __this_address;
- if (be64_to_cpu(hdr3->blkno) != bp->b_bn)
+ if (be64_to_cpu(hdr3->blkno) != xfs_buf_daddr(bp))
 return __this_address;

@@ -343,7 +343,7 @@ xfs_dir3_data_read_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb) &&
+ if (xfs_has_crc(mp) &&
 !xfs_buf_verify_cksum(bp, XFS_DIR3_DATA_CRC_OFF))
 xfs_verifier_error(bp, -EFSBADCRC, __this_address);

@@ -368,7 +368,7 @@ xfs_dir3_data_write_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return;

@@ -401,7 +401,7 @@ xfs_dir3_data_header_check(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_data_hdr *hdr3 = bp->b_addr;
 if (be64_to_cpu(hdr3->hdr.owner) != dp->i_ino)

@@ -717,12 +717,12 @@ xfs_dir3_data_init(
 hdr = bp->b_addr;
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_blk_hdr *hdr3 = bp->b_addr;
 memset(hdr3, 0, sizeof(*hdr3));
 hdr3->magic = cpu_to_be32(XFS_DIR3_DATA_MAGIC);
- hdr3->blkno = cpu_to_be64(bp->b_bn);
+ hdr3->blkno = cpu_to_be64(xfs_buf_daddr(bp));
 hdr3->owner = cpu_to_be64(dp->i_ino);
 uuid_copy(&hdr3->uuid, &mp->m_sb.sb_meta_uuid);
@@ -37,7 +37,7 @@ xfs_dir2_leaf_hdr_from_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_leaf *from3 = (struct xfs_dir3_leaf *)from;

@@ -68,7 +68,7 @@ xfs_dir2_leaf_hdr_to_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_leaf *to3 = (struct xfs_dir3_leaf *)to;

@@ -108,7 +108,7 @@ xfs_dir3_leaf1_check(
 if (leafhdr.magic == XFS_DIR3_LEAF1_MAGIC) {
 struct xfs_dir3_leaf_hdr *leaf3 = bp->b_addr;
- if (be64_to_cpu(leaf3->info.blkno) != bp->b_bn)
+ if (be64_to_cpu(leaf3->info.blkno) != xfs_buf_daddr(bp))
 return __this_address;
 } else if (leafhdr.magic != XFS_DIR2_LEAF1_MAGIC)

@@ -209,7 +209,7 @@ xfs_dir3_leaf_read_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb) &&
+ if (xfs_has_crc(mp) &&
 !xfs_buf_verify_cksum(bp, XFS_DIR3_LEAF_CRC_OFF))
 xfs_verifier_error(bp, -EFSBADCRC, __this_address);

@@ -234,7 +234,7 @@ xfs_dir3_leaf_write_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return;

@@ -308,7 +308,7 @@ xfs_dir3_leaf_init(
 ASSERT(type == XFS_DIR2_LEAF1_MAGIC || type == XFS_DIR2_LEAFN_MAGIC);
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_leaf_hdr *leaf3 = bp->b_addr;
 memset(leaf3, 0, sizeof(*leaf3));

@@ -316,7 +316,7 @@ xfs_dir3_leaf_init(
 : cpu_to_be16(XFS_DIR3_LEAFN_MAGIC);
- leaf3->info.blkno = cpu_to_be64(bp->b_bn);
+ leaf3->info.blkno = cpu_to_be64(xfs_buf_daddr(bp));
 leaf3->info.owner = cpu_to_be64(owner);
 uuid_copy(&leaf3->info.uuid, &mp->m_sb.sb_meta_uuid);
@@ -68,7 +68,7 @@ xfs_dir3_leafn_check(
 if (leafhdr.magic == XFS_DIR3_LEAFN_MAGIC) {
 struct xfs_dir3_leaf_hdr *leaf3 = bp->b_addr;
- if (be64_to_cpu(leaf3->info.blkno) != bp->b_bn)
+ if (be64_to_cpu(leaf3->info.blkno) != xfs_buf_daddr(bp))
 return __this_address;
 } else if (leafhdr.magic != XFS_DIR2_LEAFN_MAGIC)

@@ -105,12 +105,12 @@ xfs_dir3_free_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_blk_hdr *hdr3 = bp->b_addr;
 if (!uuid_equal(&hdr3->uuid, &mp->m_sb.sb_meta_uuid))
 return __this_address;
- if (be64_to_cpu(hdr3->blkno) != bp->b_bn)
+ if (be64_to_cpu(hdr3->blkno) != xfs_buf_daddr(bp))
 return __this_address;

@@ -128,7 +128,7 @@ xfs_dir3_free_read_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb) &&
+ if (xfs_has_crc(mp) &&
 !xfs_buf_verify_cksum(bp, XFS_DIR3_FREE_CRC_OFF))
 xfs_verifier_error(bp, -EFSBADCRC, __this_address);

@@ -153,7 +153,7 @@ xfs_dir3_free_write_verify(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return;

@@ -185,7 +185,7 @@ xfs_dir3_free_header_check(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_free_hdr *hdr3 = bp->b_addr;
 if (be32_to_cpu(hdr3->firstdb) != firstdb)

@@ -247,7 +247,7 @@ xfs_dir2_free_hdr_from_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_free *from3 = (struct xfs_dir3_free *)from;

@@ -274,7 +274,7 @@ xfs_dir2_free_hdr_to_disk(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_free *to3 = (struct xfs_dir3_free *)to;

@@ -341,12 +341,12 @@ xfs_dir3_free_get_buf(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 struct xfs_dir3_free_hdr *hdr3 = bp->b_addr;
 hdr.magic = XFS_DIR3_FREE_MAGIC;
- hdr3->hdr.blkno = cpu_to_be64(bp->b_bn);
+ hdr3->hdr.blkno = cpu_to_be64(xfs_buf_daddr(bp));
 hdr3->hdr.owner = cpu_to_be64(dp->i_ino);
 uuid_copy(&hdr3->hdr.uuid, &mp->m_sb.sb_meta_uuid);
 } else
@@ -196,7 +196,7 @@ xfs_dir2_data_entsize(
 len = offsetof(struct xfs_dir2_data_entry, name[0]) + namelen +
 sizeof(xfs_dir2_data_off_t) /* tag */;
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 len += sizeof(uint8_t);
 return round_up(len, XFS_DIR2_DATA_ALIGN);
 }
@@ -48,7 +48,7 @@ xfs_dir2_sf_entsize(
 count += sizeof(struct xfs_dir2_sf_entry); /* namelen + offset */
 count += hdr->i8count ? XFS_INO64_SIZE : XFS_INO32_SIZE; /* ino # */
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 count += sizeof(uint8_t);
 return count;

@@ -76,7 +76,7 @@ xfs_dir2_sf_get_ino(
 uint8_t *from = sfep->name + sfep->namelen;
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 from++;
 if (!hdr->i8count)

@@ -95,7 +95,7 @@ xfs_dir2_sf_put_ino(
 ASSERT(ino <= XFS_MAXINUMBER);
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 to++;
 if (hdr->i8count)

@@ -135,7 +135,7 @@ xfs_dir2_sf_get_ftype(
- if (xfs_sb_version_hasftype(&mp->m_sb)) {
+ if (xfs_has_ftype(mp)) {
 uint8_t ftype = sfep->name[sfep->namelen];
 if (ftype < XFS_DIR3_FT_MAX)

@@ -153,7 +153,7 @@ xfs_dir2_sf_put_ftype(
 ASSERT(ftype < XFS_DIR3_FT_MAX);
- if (xfs_sb_version_hasftype(&mp->m_sb))
+ if (xfs_has_ftype(mp))
 sfep->name[sfep->namelen] = ftype;

@@ -192,7 +192,7 @@ xfs_dir2_block_sfsize(
 * if there is a filetype field, add the extra byte to the namelen
 * for each entry that we see.
 */
- has_ftype = xfs_sb_version_hasftype(&mp->m_sb) ? 1 : 0;
+ has_ftype = xfs_has_ftype(mp) ? 1 : 0;
 count = i8count = namelen = 0;
 btp = xfs_dir2_block_tail_p(geo, hdr);
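xfs_dir2_sf_entsize() sizes a short-form directory entry as the fixed namelen/offset header plus the name, a 4- or 8-byte inode number depending on the parent's i8count, and one extra trailing byte when the filesystem stores file types in directory entries, which is exactly the xfs_has_ftype(mp) test in the hunks above. A simplified worked example of that arithmetic; the fixed sizes are invented placeholders, not the on-disk layout:

    #include <stdbool.h>
    #include <stdio.h>

    #define DEMO_ENTRY_FIXED 3   /* namelen byte + 2-byte offset, for the sketch */
    #define DEMO_INO32_SIZE  4
    #define DEMO_INO64_SIZE  8

    static int demo_sf_entsize(bool has_ftype, bool i8count, int namelen)
    {
        int count = DEMO_ENTRY_FIXED + namelen;

        count += i8count ? DEMO_INO64_SIZE : DEMO_INO32_SIZE;
        if (has_ftype)
            count++;    /* one trailing file-type byte */
        return count;
    }

    int main(void)
    {
        /* 5-char name, 32-bit inode numbers, ftype enabled: 3 + 5 + 4 + 1 */
        printf("entsize: %d\n", demo_sf_entsize(true, false, 5));
        return 0;
    }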
@@ -70,7 +70,7 @@ xfs_dquot_verify(
 if ((ddq->d_type & XFS_DQTYPE_BIGTIME) &&
- !xfs_sb_version_hasbigtime(&mp->m_sb))
+ !xfs_has_bigtime(mp))
 return __this_address;
 if ((ddq->d_type & XFS_DQTYPE_BIGTIME) && !ddq->d_id)

@@ -106,7 +106,7 @@ xfs_dqblk_verify(
- if (xfs_sb_version_hascrc(&mp->m_sb) &&
+ if (xfs_has_crc(mp) &&
 !uuid_equal(&dqb->dd_uuid, &mp->m_sb.sb_meta_uuid))
 return __this_address;

@@ -134,7 +134,7 @@ xfs_dqblk_repair(
- if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ if (xfs_has_crc(mp)) {
 uuid_copy(&dqb->dd_uuid, &mp->m_sb.sb_meta_uuid);
 xfs_update_cksum((char *)dqb, sizeof(struct xfs_dqblk),
 XFS_DQUOT_CRC_OFF);

@@ -151,7 +151,7 @@ xfs_dquot_buf_verify_crc(
- if (!xfs_sb_version_hascrc(&mp->m_sb))
+ if (!xfs_has_crc(mp))
 return true;
@@ -9,7 +9,7 @@
 /*
 * XFS On Disk Format Definitions
 *
- * This header file defines all the on-disk format definitions for
+ * This header file defines all the on-disk format definitions for
 * general XFS objects. Directory and attribute related objects are defined in
 * xfs_da_format.h, which log and log item formats are defined in
 * xfs_log_format.h. Everything else goes here.

@@ -265,7 +265,6 @@ typedef struct xfs_dsb {
 /* must be padded to 64 bit alignment */
 } xfs_dsb_t;
-
 /*
 * Misc. Flags - warning - these will be cleared by xfs_repair unless
 * a feature bit is set when the flag is used.

@@ -280,37 +279,9 @@ typedef struct xfs_dsb {
 #define XFS_SB_VERSION_NUM(sbp) ((sbp)->sb_versionnum & XFS_SB_VERSION_NUMBITS)

-/*
- * The first XFS version we support is a v4 superblock with V2 directories.
- */
-static inline bool xfs_sb_good_v4_features(struct xfs_sb *sbp)
+static inline bool xfs_sb_is_v5(struct xfs_sb *sbp)
 {
- if (!(sbp->sb_versionnum & XFS_SB_VERSION_DIRV2BIT))
- return false;
- if (!(sbp->sb_versionnum & XFS_SB_VERSION_EXTFLGBIT))
- return false;
- /* check for unknown features in the fs */
- if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) ||
- ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) &&
- (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS)))
- return false;
- return true;
-}
-static inline bool xfs_sb_good_version(struct xfs_sb *sbp)
-{
- if (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5)
- return true;
- if (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_4)
- return xfs_sb_good_v4_features(sbp);
- return false;
-}
-static inline bool xfs_sb_version_hasrealtime(struct xfs_sb *sbp)
-{
- return sbp->sb_rblocks > 0;
+ return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5;
 }

@@ -322,9 +293,10 @@ static inline bool xfs_sb_has_mismatched_features2(struct xfs_sb *sbp)
 return sbp->sb_bad_features2 != sbp->sb_features2;
 }
-static inline bool xfs_sb_version_hasattr(struct xfs_sb *sbp)
+static inline bool xfs_sb_version_hasmorebits(struct xfs_sb *sbp)
 {
- return (sbp->sb_versionnum & XFS_SB_VERSION_ATTRBIT);
+ return xfs_sb_is_v5(sbp) ||
+ (sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT);
 }
 static inline void xfs_sb_version_addattr(struct xfs_sb *sbp)

@@ -332,87 +304,18 @@ static inline void xfs_sb_version_addattr(struct xfs_sb *sbp)
 sbp->sb_versionnum |= XFS_SB_VERSION_ATTRBIT;
 }
-static inline bool xfs_sb_version_hasquota(struct xfs_sb *sbp)
-{
- return (sbp->sb_versionnum & XFS_SB_VERSION_QUOTABIT);
-}
 static inline void xfs_sb_version_addquota(struct xfs_sb *sbp)
 {
 sbp->sb_versionnum |= XFS_SB_VERSION_QUOTABIT;
 }
-static inline bool xfs_sb_version_hasalign(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 ||
- (sbp->sb_versionnum & XFS_SB_VERSION_ALIGNBIT));
-}
-static inline bool xfs_sb_version_hasdalign(struct xfs_sb *sbp)
-{
- return (sbp->sb_versionnum & XFS_SB_VERSION_DALIGNBIT);
-}
-static inline bool xfs_sb_version_haslogv2(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 ||
- (sbp->sb_versionnum & XFS_SB_VERSION_LOGV2BIT);
-}
-static inline bool xfs_sb_version_hassector(struct xfs_sb *sbp)
-{
- return (sbp->sb_versionnum & XFS_SB_VERSION_SECTORBIT);
-}
-static inline bool xfs_sb_version_hasasciici(struct xfs_sb *sbp)
-{
- return (sbp->sb_versionnum & XFS_SB_VERSION_BORGBIT);
-}
-static inline bool xfs_sb_version_hasmorebits(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 ||
- (sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT);
-}
-/*
- * sb_features2 bit version macros.
- */
-static inline bool xfs_sb_version_haslazysbcount(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) ||
- (xfs_sb_version_hasmorebits(sbp) &&
- (sbp->sb_features2 & XFS_SB_VERSION2_LAZYSBCOUNTBIT));
-}
-static inline bool xfs_sb_version_hasattr2(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) ||
- (xfs_sb_version_hasmorebits(sbp) &&
- (sbp->sb_features2 & XFS_SB_VERSION2_ATTR2BIT));
-}
 static inline void xfs_sb_version_addattr2(struct xfs_sb *sbp)
 {
 sbp->sb_versionnum |= XFS_SB_VERSION_MOREBITSBIT;
 sbp->sb_features2 |= XFS_SB_VERSION2_ATTR2BIT;
 }
-static inline void xfs_sb_version_removeattr2(struct xfs_sb *sbp)
-{
- sbp->sb_features2 &= ~XFS_SB_VERSION2_ATTR2BIT;
- if (!sbp->sb_features2)
- sbp->sb_versionnum &= ~XFS_SB_VERSION_MOREBITSBIT;
-}
-static inline bool xfs_sb_version_hasprojid32bit(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) ||
- (xfs_sb_version_hasmorebits(sbp) &&
- (sbp->sb_features2 & XFS_SB_VERSION2_PROJID32BIT));
-}
-static inline void xfs_sb_version_addprojid32bit(struct xfs_sb *sbp)
+static inline void xfs_sb_version_addprojid32(struct xfs_sb *sbp)
 {
 sbp->sb_versionnum |= XFS_SB_VERSION_MOREBITSBIT;
 sbp->sb_features2 |= XFS_SB_VERSION2_PROJID32BIT;

@@ -495,106 +398,21 @@ xfs_sb_has_incompat_log_feature(
 return (sbp->sb_features_log_incompat & feature) != 0;
 }
-/*
- * V5 superblock specific feature checks
- */
-static inline bool xfs_sb_version_hascrc(struct xfs_sb *sbp)
+static inline void
+xfs_sb_remove_incompat_log_features(
+ struct xfs_sb *sbp)
 {
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5;
+ sbp->sb_features_log_incompat &= ~XFS_SB_FEAT_INCOMPAT_LOG_ALL;
 }
-/*
- * v5 file systems support V3 inodes only, earlier file systems support
- * v2 and v1 inodes.
- */
-static inline bool xfs_sb_version_has_v3inode(struct xfs_sb *sbp)
+static inline void
+xfs_sb_add_incompat_log_features(
+ struct xfs_sb *sbp,
+ unsigned int features)
 {
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5;
+ sbp->sb_features_log_incompat |= features;
 }
-static inline bool xfs_dinode_good_version(struct xfs_sb *sbp,
- uint8_t version)
-{
- if (xfs_sb_version_has_v3inode(sbp))
- return version == 3;
- return version == 1 || version == 2;
-}
-static inline bool xfs_sb_version_has_pquotino(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5;
-}
-static inline int xfs_sb_version_hasftype(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- xfs_sb_has_incompat_feature(sbp, XFS_SB_FEAT_INCOMPAT_FTYPE)) ||
- (xfs_sb_version_hasmorebits(sbp) &&
- (sbp->sb_features2 & XFS_SB_VERSION2_FTYPE));
-}
-static inline bool xfs_sb_version_hasfinobt(xfs_sb_t *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) &&
- (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_FINOBT);
-}
-static inline bool xfs_sb_version_hassparseinodes(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- xfs_sb_has_incompat_feature(sbp, XFS_SB_FEAT_INCOMPAT_SPINODES);
-}
-/*
- * XFS_SB_FEAT_INCOMPAT_META_UUID indicates that the metadata UUID
- * is stored separately from the user-visible UUID; this allows the
- * user-visible UUID to be changed on V5 filesystems which have a
- * filesystem UUID stamped into every piece of metadata.
- */
-static inline bool xfs_sb_version_hasmetauuid(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) &&
- (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_META_UUID);
-}
-static inline bool xfs_sb_version_hasrmapbt(struct xfs_sb *sbp)
-{
- return (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) &&
- (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_RMAPBT);
-}
-static inline bool xfs_sb_version_hasreflink(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_REFLINK);
-}
-static inline bool xfs_sb_version_hasbigtime(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_BIGTIME);
-}
-/*
- * Inode btree block counter. We record the number of inobt and finobt blocks
- * in the AGI header so that we can skip the finobt walk at mount time when
- * setting up per-AG reservations.
- */
-static inline bool xfs_sb_version_hasinobtcounts(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_INOBTCNT);
-}
-static inline bool xfs_sb_version_needsrepair(struct xfs_sb *sbp)
-{
- return XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5 &&
- (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_NEEDSREPAIR);
-}
-/*
- * end of superblock version macros
- */
 static inline bool
 xfs_is_quota_inode(struct xfs_sb *sbp, xfs_ino_t ino)

@@ -1062,12 +880,12 @@ enum xfs_dinode_fmt {
 /*
 * Inode size for given fs.
 */
-#define XFS_DINODE_SIZE(sbp) \
- (xfs_sb_version_has_v3inode(sbp) ? \
+#define XFS_DINODE_SIZE(mp) \
+ (xfs_has_v3inodes(mp) ? \
 sizeof(struct xfs_dinode) : \
 offsetof(struct xfs_dinode, di_crc))
 #define XFS_LITINO(mp) \
- ((mp)->m_sb.sb_inodesize - XFS_DINODE_SIZE(&(mp)->m_sb))
+ ((mp)->m_sb.sb_inodesize - XFS_DINODE_SIZE(mp))

@@ -1454,7 +1272,7 @@ struct xfs_dsymlink_hdr {
 #define XFS_SYMLINK_MAPS 3
 #define XFS_SYMLINK_BUF_SPACE(mp, bufsize) \
- ((bufsize) - (xfs_sb_version_hascrc(&(mp)->m_sb) ? \
+ ((bufsize) - (xfs_has_crc((mp)) ? \
 sizeof(struct xfs_dsymlink_hdr) : 0))

@@ -1686,7 +1504,7 @@ struct xfs_rmap_key {
 typedef __be32 xfs_rmap_ptr_t;
 #define XFS_RMAP_BLOCK(mp) \
- (xfs_sb_version_hasfinobt(&((mp)->m_sb)) ? \
+ (xfs_has_finobt(((mp))) ? \
 XFS_FIBT_BLOCK(mp) + 1 : \
 XFS_IBT_BLOCK(mp) + 1)

@@ -1918,7 +1736,7 @@ struct xfs_acl {
 * limited only by the maximum size of the xattr that stores the information.
 */
 #define XFS_ACL_MAX_ENTRIES(mp) \
- (xfs_sb_version_hascrc(&mp->m_sb) \
+ (xfs_has_crc(mp) \
 ? (XFS_XATTR_SIZE_MAX - sizeof(struct xfs_acl)) / \
 sizeof(struct xfs_acl_entry) \
 : 25)
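The hunks above drop the old v4 "good version" gate and the per-feature xfs_sb_version_has*() predicates from xfs_format.h, leaving xfs_sb_is_v5() as the header's version primitive; in the removed code a v4 superblock was acceptable only if it had V2 directories and unwritten-extent support and carried no unknown feature bits. A compact sketch of what that gate computed, with invented bit values standing in for the real XFS_SB_VERSION_* constants:

    #include <stdbool.h>
    #include <stdio.h>

    #define DEMO_VERSION_NUMBITS   0x000f
    #define DEMO_VERSION_4         4
    #define DEMO_VERSION_5         5
    #define DEMO_VERSION_DIRV2BIT  0x0020
    #define DEMO_VERSION_EXTFLGBIT 0x0040
    #define DEMO_VERSION_OKBITS    0x006f   /* every bit we understand */

    struct demo_sb { unsigned int versionnum; };

    static bool demo_sb_is_v5(const struct demo_sb *sbp)
    {
        return (sbp->versionnum & DEMO_VERSION_NUMBITS) == DEMO_VERSION_5;
    }

    static bool demo_sb_good_version(const struct demo_sb *sbp)
    {
        if (demo_sb_is_v5(sbp))
            return true;
        if ((sbp->versionnum & DEMO_VERSION_NUMBITS) != DEMO_VERSION_4)
            return false;
        /* v4 needs V2 directories and unwritten extents, no unknown bits. */
        if (!(sbp->versionnum & DEMO_VERSION_DIRV2BIT))
            return false;
        if (!(sbp->versionnum & DEMO_VERSION_EXTFLGBIT))
            return false;
        return !(sbp->versionnum & ~DEMO_VERSION_OKBITS);
    }

    int main(void)
    {
        struct demo_sb v5 = { DEMO_VERSION_5 };
        struct demo_sb v4 = { DEMO_VERSION_4 | DEMO_VERSION_DIRV2BIT |
                              DEMO_VERSION_EXTFLGBIT };

        printf("v5: %d  v4: %d\n", demo_sb_good_version(&v5),
                                   demo_sb_good_version(&v4));
        return 0;
    }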
@@ -58,7 +58,7 @@ xfs_inobt_update(
 rec.inobt.ir_startino = cpu_to_be32(irec->ir_startino);
- if (xfs_sb_version_hassparseinodes(&cur->bc_mp->m_sb)) {
+ if (xfs_has_sparseinodes(cur->bc_mp)) {
 rec.inobt.ir_u.sp.ir_holemask = cpu_to_be16(irec->ir_holemask);
 rec.inobt.ir_u.sp.ir_count = irec->ir_count;
 rec.inobt.ir_u.sp.ir_freecount = irec->ir_freecount;

@@ -74,11 +74,11 @@ xfs_inobt_update(
 void
 xfs_inobt_btrec_to_irec(
 struct xfs_mount *mp,
- union xfs_btree_rec *rec,
+ const union xfs_btree_rec *rec,
 struct xfs_inobt_rec_incore *irec)
 {
 irec->ir_startino = be32_to_cpu(rec->inobt.ir_startino);
- if (xfs_sb_version_hassparseinodes(&mp->m_sb)) {
+ if (xfs_has_sparseinodes(mp)) {
 irec->ir_holemask = be16_to_cpu(rec->inobt.ir_u.sp.ir_holemask);
 irec->ir_count = rec->inobt.ir_u.sp.ir_count;
 irec->ir_freecount = rec->inobt.ir_u.sp.ir_freecount;

@@ -241,7 +241,7 @@ xfs_check_agi_freecount(
- if (!XFS_FORCED_SHUTDOWN(cur->bc_mp))
+ if (!xfs_is_shutdown(cur->bc_mp))
 ASSERT(freecount == cur->bc_ag.pag->pagi_freecount);

@@ -302,7 +302,7 @@ xfs_ialloc_inode_init(
- if (xfs_sb_version_has_v3inode(&mp->m_sb)) {
+ if (xfs_has_v3inodes(mp)) {
 version = 3;
 ino = XFS_AGINO_TO_INO(mp, agno, XFS_AGB_TO_AGINO(mp, agbno));

@@ -337,7 +337,6 @@ xfs_ialloc_inode_init(
 for (i = 0; i < M_IGEO(mp)->inodes_per_cluster; i++) {
 int ioffset = i << mp->m_sb.sb_inodelog;
- uint isize = XFS_DINODE_SIZE(&mp->m_sb);
 free = xfs_make_iptr(mp, fbuf, i);
 free->di_magic = cpu_to_be16(XFS_DINODE_MAGIC);

@@ -354,7 +353,7 @@ xfs_ialloc_inode_init(
 /* just log the inode core */
 xfs_trans_log_buf(tp, fbuf, ioffset,
- ioffset + isize - 1);
+ ioffset + XFS_DINODE_SIZE(mp) - 1);

@@ -635,7 +634,7 @@ xfs_ialloc_ag_alloc(
 /* randomly do sparse inode allocations */
- if (xfs_sb_version_hassparseinodes(&tp->t_mountp->m_sb) &&
+ if (xfs_has_sparseinodes(tp->t_mountp) &&
 igeo->ialloc_min_blks < igeo->ialloc_blks)
 do_sparse = prandom_u32() & 1;

@@ -712,7 +711,7 @@ xfs_ialloc_ag_alloc(
 if (igeo->ialloc_align) {
- ASSERT(!(args.mp->m_flags & XFS_MOUNT_NOALIGN));
+ ASSERT(!xfs_has_noalign(args.mp));
 args.alignment = args.mp->m_dalign;

@@ -754,7 +753,7 @@ xfs_ialloc_ag_alloc(
- if (xfs_sb_version_hassparseinodes(&args.mp->m_sb) &&
+ if (xfs_has_sparseinodes(args.mp) &&
 igeo->ialloc_min_blks < igeo->ialloc_blks &&
 args.fsbno == NULLFSBLOCK) {

@@ -856,7 +855,7 @@ sparse_alloc:
- if (xfs_sb_version_hasfinobt(&args.mp->m_sb)) {
+ if (xfs_has_finobt(args.mp)) {
 error = xfs_inobt_insert_sprec(args.mp, tp, agbp, pag,
 XFS_BTNUM_FINO, &rec, false);

@@ -869,7 +868,7 @@ sparse_alloc:
- if (xfs_sb_version_hasfinobt(&args.mp->m_sb)) {
+ if (xfs_has_finobt(args.mp)) {
 error = xfs_inobt_insert(args.mp, tp, agbp, pag, newino,
 newlen, XFS_BTNUM_FINO);

@@ -1448,7 +1447,7 @@ xfs_dialloc_ag(
- if (!xfs_sb_version_hasfinobt(&mp->m_sb))
+ if (!xfs_has_finobt(mp))
 return xfs_dialloc_ag_inobt(tp, agbp, pag, parent, inop);

@@ -1784,7 +1783,7 @@ xfs_dialloc(
- if (XFS_FORCED_SHUTDOWN(mp)) {
+ if (xfs_is_shutdown(mp)) {
 error = -EFSCORRUPTED;
 break;
 }
@ -1953,8 +1952,7 @@ xfs_difree_inobt(
|
||||
* remove the chunk if the block size is large enough for multiple inode
|
||||
* chunks (that might not be free).
|
||||
*/
|
||||
if (!(mp->m_flags & XFS_MOUNT_IKEEP) &&
|
||||
rec.ir_free == XFS_INOBT_ALL_FREE &&
|
||||
if (!xfs_has_ikeep(mp) && rec.ir_free == XFS_INOBT_ALL_FREE &&
|
||||
mp->m_sb.sb_inopblock <= XFS_INODES_PER_CHUNK) {
|
||||
struct xfs_perag *pag = agbp->b_pag;
|
||||
|
||||
@ -1994,7 +1992,7 @@ xfs_difree_inobt(
|
||||
goto error0;
|
||||
}
|
||||
|
||||
/*
|
||||
/*
|
||||
* Change the inode free counts and log the ag/sb changes.
|
||||
*/
|
||||
be32_add_cpu(&agi->agi_freecount, 1);
|
||||
@ -2098,9 +2096,8 @@ xfs_difree_finobt(
|
||||
* enough for multiple chunks. Leave the finobt record to remain in sync
|
||||
* with the inobt.
|
||||
*/
|
||||
if (rec.ir_free == XFS_INOBT_ALL_FREE &&
|
||||
mp->m_sb.sb_inopblock <= XFS_INODES_PER_CHUNK &&
|
||||
!(mp->m_flags & XFS_MOUNT_IKEEP)) {
|
||||
if (!xfs_has_ikeep(mp) && rec.ir_free == XFS_INOBT_ALL_FREE &&
|
||||
mp->m_sb.sb_inopblock <= XFS_INODES_PER_CHUNK) {
|
||||
error = xfs_btree_delete(cur, &i);
|
||||
if (error)
|
||||
goto error;
|
||||
@ -2189,7 +2186,7 @@ xfs_difree(
|
||||
/*
|
||||
* Fix up the free inode btree.
|
||||
*/
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb)) {
|
||||
if (xfs_has_finobt(mp)) {
|
||||
error = xfs_difree_finobt(mp, tp, agbp, pag, agino, &rec);
|
||||
if (error)
|
||||
goto error0;
|
||||
@ -2478,7 +2475,7 @@ xfs_agi_verify(
|
||||
struct xfs_agi *agi = bp->b_addr;
|
||||
int i;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
if (!uuid_equal(&agi->agi_uuid, &mp->m_sb.sb_meta_uuid))
|
||||
return __this_address;
|
||||
if (!xfs_log_check_lsn(mp, be64_to_cpu(agi->agi_lsn)))
|
||||
@ -2497,7 +2494,7 @@ xfs_agi_verify(
|
||||
be32_to_cpu(agi->agi_level) > M_IGEO(mp)->inobt_maxlevels)
|
||||
return __this_address;
|
||||
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb) &&
|
||||
if (xfs_has_finobt(mp) &&
|
||||
(be32_to_cpu(agi->agi_free_level) < 1 ||
|
||||
be32_to_cpu(agi->agi_free_level) > M_IGEO(mp)->inobt_maxlevels))
|
||||
return __this_address;
|
||||
@ -2528,7 +2525,7 @@ xfs_agi_read_verify(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
xfs_failaddr_t fa;
|
||||
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb) &&
|
||||
if (xfs_has_crc(mp) &&
|
||||
!xfs_buf_verify_cksum(bp, XFS_AGI_CRC_OFF))
|
||||
xfs_verifier_error(bp, -EFSBADCRC, __this_address);
|
||||
else {
|
||||
@ -2553,7 +2550,7 @@ xfs_agi_write_verify(
|
||||
return;
|
||||
}
|
||||
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
if (bip)
|
||||
@ -2626,7 +2623,7 @@ xfs_ialloc_read_agi(
|
||||
* we are in the middle of a forced shutdown.
|
||||
*/
|
||||
ASSERT(pag->pagi_freecount == be32_to_cpu(agi->agi_freecount) ||
|
||||
XFS_FORCED_SHUTDOWN(mp));
|
||||
xfs_is_shutdown(mp));
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -2716,7 +2713,7 @@ struct xfs_ialloc_count_inodes {
|
||||
STATIC int
|
||||
xfs_ialloc_count_inodes_rec(
|
||||
struct xfs_btree_cur *cur,
|
||||
union xfs_btree_rec *rec,
|
||||
const union xfs_btree_rec *rec,
|
||||
void *priv)
|
||||
{
|
||||
struct xfs_inobt_rec_incore irec;
|
||||
@ -2773,7 +2770,7 @@ xfs_ialloc_setup_geometry(
|
||||
uint inodes;
|
||||
|
||||
igeo->new_diflags2 = 0;
|
||||
if (xfs_sb_version_hasbigtime(&mp->m_sb))
|
||||
if (xfs_has_bigtime(mp))
|
||||
igeo->new_diflags2 |= XFS_DIFLAG2_BIGTIME;
|
||||
|
||||
/* Compute inode btree geometry. */
|
||||
@ -2828,7 +2825,7 @@ xfs_ialloc_setup_geometry(
|
||||
* cannot change the behavior.
|
||||
*/
|
||||
igeo->inode_cluster_size_raw = XFS_INODE_BIG_CLUSTER_SIZE;
|
||||
if (xfs_sb_version_has_v3inode(&mp->m_sb)) {
|
||||
if (xfs_has_v3inodes(mp)) {
|
||||
int new_size = igeo->inode_cluster_size_raw;
|
||||
|
||||
new_size *= mp->m_sb.sb_inodesize / XFS_DINODE_MIN_SIZE;
|
||||
@ -2846,7 +2843,7 @@ xfs_ialloc_setup_geometry(
|
||||
igeo->inodes_per_cluster = XFS_FSB_TO_INO(mp, igeo->blocks_per_cluster);
|
||||
|
||||
/* Calculate inode cluster alignment. */
|
||||
if (xfs_sb_version_hasalign(&mp->m_sb) &&
|
||||
if (xfs_has_align(mp) &&
|
||||
mp->m_sb.sb_inoalignmt >= igeo->blocks_per_cluster)
|
||||
igeo->cluster_align = mp->m_sb.sb_inoalignmt;
|
||||
else
|
||||
@ -2894,15 +2891,15 @@ xfs_ialloc_calc_rootino(
|
||||
first_bno += xfs_alloc_min_freelist(mp, NULL);
|
||||
|
||||
/* ...the free inode btree root... */
|
||||
if (xfs_sb_version_hasfinobt(&mp->m_sb))
|
||||
if (xfs_has_finobt(mp))
|
||||
first_bno++;
|
||||
|
||||
/* ...the reverse mapping btree root... */
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
first_bno++;
|
||||
|
||||
/* ...the reference count btree... */
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb))
|
||||
if (xfs_has_reflink(mp))
|
||||
first_bno++;
|
||||
|
||||
/*
|
||||
@ -2920,9 +2917,9 @@ xfs_ialloc_calc_rootino(
|
||||
* Now round first_bno up to whatever allocation alignment is given
|
||||
* by the filesystem or was passed in.
|
||||
*/
|
||||
if (xfs_sb_version_hasdalign(&mp->m_sb) && igeo->ialloc_align > 0)
|
||||
if (xfs_has_dalign(mp) && igeo->ialloc_align > 0)
|
||||
first_bno = roundup(first_bno, sunit);
|
||||
else if (xfs_sb_version_hasalign(&mp->m_sb) &&
|
||||
else if (xfs_has_align(mp) &&
|
||||
mp->m_sb.sb_inoalignmt > 1)
|
||||
first_bno = roundup(first_bno, mp->m_sb.sb_inoalignmt);
|
||||
|
||||
@ -2953,7 +2950,7 @@ xfs_ialloc_check_shrink(
|
||||
int has;
|
||||
int error;
|
||||
|
||||
if (!xfs_sb_version_hassparseinodes(&mp->m_sb))
|
||||
if (!xfs_has_sparseinodes(mp))
|
||||
return 0;
|
||||
|
||||
pag = xfs_perag_get(mp, agno);
|
||||
|
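Taken together, these hunks replace the old superblock-version predicates with mount-based feature predicates. Below is a minimal standalone sketch of that pattern; the struct layout, the bit value and the exact helper definition are assumptions made for illustration (only the names xfs_has_finobt and XFS_FEAT_FINOBT appear in this patch), not the kernel's real code.

/*
 * Illustrative sketch only: a simplified stand-in for the new mount-feature
 * predicates.  The struct layout and bit value here are invented for the
 * example and do not match the kernel's real xfs_mount.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define XFS_FEAT_FINOBT	(1ULL << 0)	/* illustrative bit value */

struct xfs_mount {
	uint64_t	m_features;	/* feature bits cached at mount time */
};

/* New style: one cheap bit test against the in-core mount. */
static inline bool xfs_has_finobt(struct xfs_mount *mp)
{
	return (mp->m_features & XFS_FEAT_FINOBT) != 0;
}

int main(void)
{
	struct xfs_mount m = { .m_features = XFS_FEAT_FINOBT };

	/*
	 * Callers in the patch change from
	 *	xfs_sb_version_hasfinobt(&mp->m_sb)
	 * to the mount-based predicate below.
	 */
	printf("finobt enabled: %d\n", xfs_has_finobt(&m));
	return 0;
}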
[Header hunk @@ -106,7 +106,8 @@: the xfs_inobt_btrec_to_irec() prototype now takes a const union xfs_btree_rec *rec, split across two lines; the surrounding xfs_read_agi() and xfs_ialloc_has_inodes_at_extent() declarations are unchanged.]
[Inode btree ops hunks (xfs_inobt_set_root, xfs_finobt_set_root, xfs_inobt_mod_blockcount, __xfs_inobt_alloc_block, xfs_inobt_alloc_block, xfs_finobt_alloc_block, __xfs_inobt_free_block, xfs_inobt_init_key_from_rec, xfs_inobt_init_high_key_from_rec, xfs_inobt_init_rec_from_cur, xfs_inobt_key_diff, xfs_inobt_diff_two_keys, xfs_inobt_verify, xfs_inobt_keys_inorder, xfs_inobt_recs_inorder, xfs_inobt_init_common, xfs_inobt_commit_staged_btree, xfs_finobt_calc_reserves). The btree callback arguments that are never modified (union xfs_btree_ptr *start/nptr, union xfs_btree_key *key/k1/k2, union xfs_btree_rec *rec/r1/r2) gain const; xfs_sb_version_hasinobtcounts/hassparseinodes/hascrc/hasfinobt(&...->m_sb) checks become xfs_has_inobtcounts/sparseinodes/crc/finobt(); XFS_DADDR_TO_FSB(cur->bc_mp, XFS_BUF_ADDR(bp)) becomes XFS_DADDR_TO_FSB(cur->bc_mp, xfs_buf_daddr(bp)) in __xfs_inobt_free_block().]
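The constification above is the same change applied across all of the per-AG btree implementations in this series: callbacks that only read the keys, records and pointers they are given now say so in their signatures, so a range query can no longer scribble on its inputs. Here is a toy standalone sketch of the idea, with invented types standing in for union xfs_btree_key; it is not the kernel's actual code.

/*
 * Sketch of the const-ification pattern: a comparison callback that takes
 * const pointers cannot modify the keys handed to it.  The union and struct
 * below are toy stand-ins, not the real XFS definitions.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_inobt_key {
	uint32_t	ir_startino;
};

union toy_btree_key {
	struct toy_inobt_key	inobt;
};

/* The old style took "union toy_btree_key *k1"; const enforces read-only use. */
static int64_t toy_diff_two_keys(const union toy_btree_key *k1,
				 const union toy_btree_key *k2)
{
	return (int64_t)k1->inobt.ir_startino - (int64_t)k2->inobt.ir_startino;
}

int main(void)
{
	union toy_btree_key a = { .inobt = { .ir_startino = 128 } };
	union toy_btree_key b = { .inobt = { .ir_startino = 64 } };

	printf("diff = %lld\n", (long long)toy_diff_two_keys(&a, &b));
	return 0;
}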
[Header hunk @@ -19,7 +19,7 @@: XFS_INOBT_BLOCK_LEN(mp) now keys off xfs_has_crc(mp) instead of xfs_sb_version_hascrc(&(mp)->m_sb) when choosing between XFS_BTREE_SBLOCK_CRC_LEN and XFS_BTREE_SBLOCK_LEN.]
[Inode buffer and on-disk inode hunks (xfs_inode_buf_verify, xfs_inode_from_disk, xfs_inode_to_disk, xfs_dinode_verify, xfs_dinode_calc_crc, xfs_inode_validate_cowextsize): xfs_sb_version_has_v3inode/hasreflink/hasbigtime/hascrc(&mp->m_sb) checks become xfs_has_v3inodes/reflink/bigtime/crc(mp); xfs_dinode_good_version() is passed the mount instead of the superblock; XFS_BUF_ADDR(bp) and bp->b_bn become xfs_buf_daddr(bp), including in the debug "bad inode magic/vsn" message.]
[Header hunks @@ -21,7 +21,7 @@ and @@ -42,4 +42,13 @@: the xfs_dinode_calc_crc() prototype gains parameter names (mp, dip), and xfs_dinode_good_version() becomes a static inline here that takes the mount: it returns (version == 3) when xfs_has_v3inodes(mp), otherwise (version == 1 || version == 2).]
[Log format header hunks @@ -41,10 +41,10 @@ and @@ -434,7 +434,7 @@: XLOG_REC_SHIFT(), XLOG_TOTAL_REC_SHIFT() and xfs_log_dinode_size() switch from the xfs_sb_version_haslogv2()/has_v3inode() superblock checks to the xfs_has_logv2(log->l_mp) and xfs_has_v3inodes(mp) predicates.]
[Log recovery header hunk @@ -122,6 +122,8 @@: adds a declaration of xlog_recover_iget(struct xfs_mount *mp, xfs_ino_t ino, struct xfs_inode **ipp) alongside the existing xlog_buf_readahead(), xlog_is_buffer_cancelled() and xlog_recover_release_intent() prototypes.]
[Hunk @@ -92,7 +92,7 @@ in xfs_log_calc_minimum_size(): the log stripe unit check now uses xfs_has_logv2(mp) instead of xfs_sb_version_haslogv2(&mp->m_sb).]
[Quota definitions hunk @@ -60,36 +60,14 @@: the XFS_IS_*QUOTA_RUNNING() macros, the XFS_UQUOTA/GQUOTA/PQUOTA_ACTIVE incore quotaoff flags and the ACTIVE-based XFS_IS_*QUOTA_ON() definitions are removed; XFS_IS_QUOTA_ON(), XFS_IS_UQUOTA_ON(), XFS_IS_GQUOTA_ON() and XFS_IS_PQUOTA_ON() now test the corresponding XFS_*QUOTA_ACCT bits in m_qflags, and the XFS_IS_*QUOTA_ENFORCED() macros are unchanged.]
[Reference count hunks (xfs_refcount_btrec_to_irec, xfs_refcount_increase_extent, xfs_refcount_decrease_extent, xfs_refcount_alloc_cow_extent, xfs_refcount_free_cow_extent, xfs_refcount_recover_extent): the btree record arguments become const union xfs_btree_rec *, and the xfs_sb_version_hasreflink() early-return checks become xfs_has_reflink().]
[Header hunk @@ -78,7 +78,7 @@: the xfs_refcount_btrec_to_irec() prototype now takes a const union xfs_btree_rec *rec.]
[Refcount btree ops hunks (xfs_refcountbt_set_root, xfs_refcountbt_alloc_block, xfs_refcountbt_free_block, xfs_refcountbt_init_key_from_rec, xfs_refcountbt_init_high_key_from_rec, xfs_refcountbt_key_diff, xfs_refcountbt_diff_two_keys, xfs_refcountbt_verify, xfs_refcountbt_keys_inorder, xfs_refcountbt_recs_inorder, xfs_refcountbt_calc_reserves): the unmodified ptr/key/rec callback arguments and the local struct xfs_refcount_key pointers gain const, XFS_DADDR_TO_FSB(mp, XFS_BUF_ADDR(bp)) becomes XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp)), and the xfs_sb_version_hasreflink() checks become xfs_has_reflink().]
[Reverse mapping hunks (xfs_rmap_btrec_to_irec, xfs_rmap_find_left_neighbor_helper, xfs_rmap_lookup_le_range_helper, xfs_rmap_free, xfs_rmap_alloc, xfs_rmap_query_range_helper, xfs_rmap_query_range, xfs_rmap_update_is_needed, xfs_rmap_has_other_keys_helper): the record arguments to the helpers and the low_rec/high_rec query keys become const, and the xfs_sb_version_hasrmapbt() checks become xfs_has_rmapbt().]
[Header hunks @@ -134,12 +134,13 @@ and @@ -192,7 +193,7 @@: the xfs_rmap_query_range_fn callback type, the xfs_rmap_query_range() prototype and the xfs_rmap_btrec_to_irec() prototype all take const record/key pointers now.]
[Rmap btree ops hunks (xfs_rmapbt_set_root, xfs_rmapbt_alloc_block, xfs_rmapbt_free_block, xfs_rmapbt_init_key_from_rec, xfs_rmapbt_init_high_key_from_rec, xfs_rmapbt_key_diff, xfs_rmapbt_diff_two_keys, xfs_rmapbt_verify, xfs_rmapbt_keys_inorder, xfs_rmapbt_recs_inorder, xfs_rmapbt_compute_maxlevels, xfs_rmapbt_calc_reserves): the same pattern as the other btrees, that is const ptr/key/rec callback arguments and const struct xfs_rmap_key locals, xfs_daddr_to_agbno(cur->bc_mp, xfs_buf_daddr(bp)) instead of XFS_BUF_ADDR(bp), and xfs_has_rmapbt()/xfs_has_reflink() instead of the superblock version checks.]
[Header hunk @@ -59,4 +59,4 @@: no functional change; only the closing #endif /* __XFS_RMAP_BTREE_H__ */ line is rewritten, apparently a trailing-newline fixup.]
[Realtime bitmap hunks @@ -1009,8 +1009,8 @@ and following in xfs_rtalloc_query_range(): low_rec and high_rec become const pointers; instead of clamping high_rec->ar_startext in place, the function computes a local high_key = min(high_rec->ar_startext, mp->m_sb.sb_rextents - 1) and uses it for the bitmap walk and the xfs_rtfind_forw() call, so the caller's query keys are no longer modified.]
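The same rule, do not edit the caller's query keys, shows up here: the clamp that used to overwrite high_rec->ar_startext is replaced by a local. A small standalone sketch of that pattern follows, with invented types and limits; it is only an illustration of the idea, not the kernel code.

/*
 * Sketch of the xfs_rtalloc_query_range() change above: clamp the caller's
 * high key into a local variable instead of writing the clamped value back
 * through the (now const) input pointer.  Types and limits are invented.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_rtalloc_rec {
	uint64_t	ar_startext;
};

#define TOY_REXTENTS	1000ULL		/* stand-in for mp->m_sb.sb_rextents */

static void toy_query_range(const struct toy_rtalloc_rec *low,
			    const struct toy_rtalloc_rec *high)
{
	uint64_t rtstart = low->ar_startext;
	/* local clamp; *high is left untouched for the caller */
	uint64_t high_key = high->ar_startext < TOY_REXTENTS - 1 ?
					high->ar_startext : TOY_REXTENTS - 1;

	while (rtstart <= high_key) {
		/* ... walk the bitmap one extent at a time ... */
		rtstart++;
	}
	printf("queried up to extent %llu\n", (unsigned long long)high_key);
}

int main(void)
{
	struct toy_rtalloc_rec lo = { .ar_startext = 0 };
	struct toy_rtalloc_rec hi = { .ar_startext = 5000 };

	toy_query_range(&lo, &hi);
	return 0;
}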
[Superblock hunks, starting at @@ -30,13 +30,110 @@. Two helpers are added here: xfs_sb_good_version(), which accepts any v5 superblock and otherwise requires at least a v4 superblock with V2 directories and unwritten extents and no unknown v4 or features2 bits; and xfs_sb_version_to_features(), which translates the on-disk version and feature bits into an XFS_FEAT_* mask (optional v4 bits such as REALTIME, ATTR, QUOTA, ALIGN, LOGV2, DALIGN, EXTFLG, SECTOR, ASCIICI, LAZYSBCOUNT, ATTR2, PROJID32 and FTYPE; for v5, the always-on ALIGN/LOGV2/EXTFLG/LAZYSBCOUNT/ATTR2/PROJID32/V3INODES/CRC/PQUOTINO set plus optional FINOBT, RMAPBT, REFLINK, INOBTCNT, FTYPE, SPINODES, META_UUID, BIGTIME and NEEDSREPAIR). The remaining hunks convert the rest of the file: xfs_validate_sb_read/write() test !xfs_sb_is_v5(sbp) instead of XFS_SB_VERSION_NUM(), use xfs_is_readonly(mp) and xfs_buf_daddr(bp); xfs_validate_sb_common() moves the minimum CRC block size check, the OQUOTA flag rejection and the sparse inode alignment check into the xfs_sb_is_v5() branch and derives the dalign sanity check from XFS_SB_VERSION_DALIGNBIT directly; xfs_sb_quota_from_disk(), xfs_sb_quota_to_disk(), __xfs_sb_from_disk() and xfs_sb_to_disk() key the v5-only fields (qflags, pquotino, feature words, spino_align, lsn, meta_uuid) off xfs_sb_is_v5(); xfs_sb_read_verify() and xfs_sb_write_verify() use xfs_buf_daddr()/xfs_has_crc()/xfs_sb_is_v5(); xfs_log_sb() uses xfs_has_lazysbcount(mp); and xfs_fs_geometry() now takes a struct xfs_mount * (reading sbp from mp->m_sb) and reports all the geometry flags through the xfs_has_*() predicates, with the sector-size branch restructured around xfs_has_sector(mp).]
@ -20,11 +20,13 @@ extern void xfs_sb_mount_common(struct xfs_mount *mp, struct xfs_sb *sbp);
|
||||
extern void xfs_sb_from_disk(struct xfs_sb *to, struct xfs_dsb *from);
|
||||
extern void xfs_sb_to_disk(struct xfs_dsb *to, struct xfs_sb *from);
|
||||
extern void xfs_sb_quota_from_disk(struct xfs_sb *sbp);
|
||||
extern bool xfs_sb_good_version(struct xfs_sb *sbp);
|
||||
extern uint64_t xfs_sb_version_to_features(struct xfs_sb *sbp);
|
||||
|
||||
extern int xfs_update_secondary_sbs(struct xfs_mount *mp);
|
||||
|
||||
#define XFS_FS_GEOM_MAX_STRUCT_VER (4)
|
||||
extern void xfs_fs_geometry(struct xfs_sb *sbp, struct xfs_fsop_geom *geo,
|
||||
extern void xfs_fs_geometry(struct xfs_mount *mp, struct xfs_fsop_geom *geo,
|
||||
int struct_version);
|
||||
extern int xfs_sb_read_secondary(struct xfs_mount *mp,
|
||||
struct xfs_trans *tp, xfs_agnumber_t agno,
|
||||
|
@@ -42,7 +42,7 @@ xfs_symlink_hdr_set(
{
struct xfs_dsymlink_hdr *dsl = bp->b_addr;

if (!xfs_sb_version_hascrc(&mp->m_sb))
if (!xfs_has_crc(mp))
return 0;

memset(dsl, 0, sizeof(struct xfs_dsymlink_hdr));
@@ -51,7 +51,7 @@ xfs_symlink_hdr_set(
dsl->sl_bytes = cpu_to_be32(size);
uuid_copy(&dsl->sl_uuid, &mp->m_sb.sb_meta_uuid);
dsl->sl_owner = cpu_to_be64(ino);
dsl->sl_blkno = cpu_to_be64(bp->b_bn);
dsl->sl_blkno = cpu_to_be64(xfs_buf_daddr(bp));
bp->b_ops = &xfs_symlink_buf_ops;

return sizeof(struct xfs_dsymlink_hdr);
@@ -89,13 +89,13 @@ xfs_symlink_verify(
struct xfs_mount *mp = bp->b_mount;
struct xfs_dsymlink_hdr *dsl = bp->b_addr;

if (!xfs_sb_version_hascrc(&mp->m_sb))
if (!xfs_has_crc(mp))
return __this_address;
if (!xfs_verify_magic(bp, dsl->sl_magic))
return __this_address;
if (!uuid_equal(&dsl->sl_uuid, &mp->m_sb.sb_meta_uuid))
return __this_address;
if (bp->b_bn != be64_to_cpu(dsl->sl_blkno))
if (xfs_buf_daddr(bp) != be64_to_cpu(dsl->sl_blkno))
return __this_address;
if (be32_to_cpu(dsl->sl_offset) +
be32_to_cpu(dsl->sl_bytes) >= XFS_SYMLINK_MAXLEN)
@@ -116,7 +116,7 @@ xfs_symlink_read_verify(
xfs_failaddr_t fa;

/* no verification of non-crc buffers */
if (!xfs_sb_version_hascrc(&mp->m_sb))
if (!xfs_has_crc(mp))
return;

if (!xfs_buf_verify_cksum(bp, XFS_SYMLINK_CRC_OFF))
@@ -137,7 +137,7 @@ xfs_symlink_write_verify(
xfs_failaddr_t fa;

/* no verification of non-crc buffers */
if (!xfs_sb_version_hascrc(&mp->m_sb))
if (!xfs_has_crc(mp))
return;

fa = xfs_symlink_verify(bp);
@@ -173,7 +173,7 @@ xfs_symlink_local_to_remote(

xfs_trans_buf_set_type(tp, bp, XFS_BLFT_SYMLINK_BUF);

if (!xfs_sb_version_hascrc(&mp->m_sb)) {
if (!xfs_has_crc(mp)) {
bp->b_ops = NULL;
memcpy(bp->b_addr, ifp->if_u1.if_data, ifp->if_bytes);
xfs_trans_log_buf(tp, bp, 0, ifp->if_bytes - 1);

@@ -136,7 +136,7 @@ xfs_trans_log_inode(
* to upgrade this inode to bigtime format, do so now.
*/
if ((flags & (XFS_ILOG_CORE | XFS_ILOG_TIMESTAMP)) &&
xfs_sb_version_hasbigtime(&ip->i_mount->m_sb) &&
xfs_has_bigtime(ip->i_mount) &&
!xfs_inode_has_bigtime(ip)) {
ip->i_diflags2 |= XFS_DIFLAG2_BIGTIME;
flags |= XFS_ILOG_CORE;
@@ -71,9 +71,9 @@ xfs_allocfree_log_count(
uint blocks;

blocks = num_ops * 2 * (2 * mp->m_ag_maxlevels - 1);
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
if (xfs_has_rmapbt(mp))
blocks += num_ops * (2 * mp->m_rmap_maxlevels - 1);
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
blocks += num_ops * (2 * mp->m_refc_maxlevels - 1);

return blocks;
@@ -155,7 +155,7 @@ STATIC uint
xfs_calc_finobt_res(
struct xfs_mount *mp)
{
if (!xfs_sb_version_hasfinobt(&mp->m_sb))
if (!xfs_has_finobt(mp))
return 0;

return xfs_calc_inobt_res(mp);
@@ -187,7 +187,7 @@ xfs_calc_inode_chunk_res(
XFS_FSB_TO_B(mp, 1));
if (alloc) {
/* icreate tx uses ordered buffers */
if (xfs_sb_version_has_v3inode(&mp->m_sb))
if (xfs_has_v3inodes(mp))
return res;
size = XFS_FSB_TO_B(mp, 1);
}
@@ -268,7 +268,7 @@ xfs_calc_write_reservation(
xfs_calc_buf_res(3, mp->m_sb.sb_sectsize) +
xfs_calc_buf_res(xfs_allocfree_log_count(mp, 2), blksz);

if (xfs_sb_version_hasrealtime(&mp->m_sb)) {
if (xfs_has_realtime(mp)) {
t2 = xfs_calc_inode_res(mp, 1) +
xfs_calc_buf_res(XFS_BM_MAXLEVELS(mp, XFS_DATA_FORK),
blksz) +
@@ -317,7 +317,7 @@ xfs_calc_itruncate_reservation(
t2 = xfs_calc_buf_res(9, mp->m_sb.sb_sectsize) +
xfs_calc_buf_res(xfs_allocfree_log_count(mp, 4), blksz);

if (xfs_sb_version_hasrealtime(&mp->m_sb)) {
if (xfs_has_realtime(mp)) {
t3 = xfs_calc_buf_res(5, mp->m_sb.sb_sectsize) +
xfs_calc_buf_res(xfs_rtalloc_log_count(mp, 2), blksz) +
xfs_calc_buf_res(xfs_allocfree_log_count(mp, 2), blksz);
@@ -798,29 +798,6 @@ xfs_calc_qm_dqalloc_reservation(
XFS_FSB_TO_B(mp, XFS_DQUOT_CLUSTER_SIZE_FSB) - 1);
}

/*
* Turning off quotas.
* the quota off logitems: sizeof(struct xfs_qoff_logitem) * 2
* the superblock for the quota flags: sector size
*/
STATIC uint
xfs_calc_qm_quotaoff_reservation(
struct xfs_mount *mp)
{
return sizeof(struct xfs_qoff_logitem) * 2 +
xfs_calc_buf_res(1, mp->m_sb.sb_sectsize);
}

/*
* End of turning off quotas.
* the quota off logitems: sizeof(struct xfs_qoff_logitem) * 2
*/
STATIC uint
xfs_calc_qm_quotaoff_end_reservation(void)
{
return sizeof(struct xfs_qoff_logitem) * 2;
}

/*
* Syncing the incore super block changes to disk.
* the super block to reflect the changes: sector size
@@ -842,14 +819,14 @@ xfs_trans_resv_calc(
* require a permanent reservation on space.
*/
resp->tr_write.tr_logres = xfs_calc_write_reservation(mp);
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
resp->tr_write.tr_logcount = XFS_WRITE_LOG_COUNT_REFLINK;
else
resp->tr_write.tr_logcount = XFS_WRITE_LOG_COUNT;
resp->tr_write.tr_logflags |= XFS_TRANS_PERM_LOG_RES;

resp->tr_itruncate.tr_logres = xfs_calc_itruncate_reservation(mp);
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
resp->tr_itruncate.tr_logcount =
XFS_ITRUNCATE_LOG_COUNT_REFLINK;
else
@@ -910,7 +887,7 @@ xfs_trans_resv_calc(
resp->tr_growrtalloc.tr_logflags |= XFS_TRANS_PERM_LOG_RES;

resp->tr_qm_dqalloc.tr_logres = xfs_calc_qm_dqalloc_reservation(mp);
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
resp->tr_qm_dqalloc.tr_logcount = XFS_WRITE_LOG_COUNT_REFLINK;
else
resp->tr_qm_dqalloc.tr_logcount = XFS_WRITE_LOG_COUNT;
@@ -923,13 +900,6 @@ xfs_trans_resv_calc(
resp->tr_qm_setqlim.tr_logres = xfs_calc_qm_setqlim_reservation();
resp->tr_qm_setqlim.tr_logcount = XFS_DEFAULT_LOG_COUNT;

resp->tr_qm_quotaoff.tr_logres = xfs_calc_qm_quotaoff_reservation(mp);
resp->tr_qm_quotaoff.tr_logcount = XFS_DEFAULT_LOG_COUNT;

resp->tr_qm_equotaoff.tr_logres =
xfs_calc_qm_quotaoff_end_reservation();
resp->tr_qm_equotaoff.tr_logcount = XFS_DEFAULT_LOG_COUNT;

resp->tr_sb.tr_logres = xfs_calc_sb_reservation(mp);
resp->tr_sb.tr_logcount = XFS_DEFAULT_LOG_COUNT;

@@ -46,8 +46,6 @@ struct xfs_trans_resv {
struct xfs_trans_res tr_growrtfree; /* grow realtime freeing */
struct xfs_trans_res tr_qm_setqlim; /* adjust quota limits */
struct xfs_trans_res tr_qm_dqalloc; /* allocate quota on disk */
struct xfs_trans_res tr_qm_quotaoff; /* turn quota off */
struct xfs_trans_res tr_qm_equotaoff;/* end of turn quota off */
struct xfs_trans_res tr_sb; /* modify superblock */
struct xfs_trans_res tr_fsyncts; /* update timestamps on fsync */
};
@@ -57,8 +57,7 @@
XFS_DAREMOVE_SPACE_RES(mp, XFS_DATA_FORK)
#define XFS_IALLOC_SPACE_RES(mp) \
(M_IGEO(mp)->ialloc_blks + \
((xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1) * \
M_IGEO(mp)->inobt_maxlevels))
((xfs_has_finobt(mp) ? 2 : 1) * M_IGEO(mp)->inobt_maxlevels))

/*
* Space reservation values for various transactions.
@@ -94,8 +93,7 @@
#define XFS_SYMLINK_SPACE_RES(mp,nl,b) \
(XFS_IALLOC_SPACE_RES(mp) + XFS_DIRENTER_SPACE_RES(mp,nl) + (b))
#define XFS_IFREE_SPACE_RES(mp) \
(xfs_sb_version_hasfinobt(&mp->m_sb) ? \
M_IGEO(mp)->inobt_maxlevels : 0)
(xfs_has_finobt(mp) ? M_IGEO(mp)->inobt_maxlevels : 0)


#endif /* __XFS_TRANS_SPACE_H__ */

@@ -169,7 +169,7 @@ xfs_internal_inum(
xfs_ino_t ino)
{
return ino == mp->m_sb.sb_rbmino || ino == mp->m_sb.sb_rsumino ||
(xfs_sb_version_hasquota(&mp->m_sb) &&
(xfs_has_quota(mp) &&
xfs_is_quota_inode(&mp->m_sb, ino));
}
@@ -87,6 +87,11 @@ typedef void * xfs_failaddr_t;
#define XFS_ATTR_FORK 1
#define XFS_COW_FORK 2

#define XFS_WHICHFORK_STRINGS \
{ XFS_DATA_FORK, "data" }, \
{ XFS_ATTR_FORK, "attr" }, \
{ XFS_COW_FORK, "cow" }

/*
* Min numbers of data/attr fork btree root pointers.
*/
@@ -36,7 +36,7 @@ xchk_superblock_xref(

agbno = XFS_SB_BLOCK(mp);

error = xchk_ag_init(sc, agno, &sc->sa);
error = xchk_ag_init_existing(sc, agno, &sc->sa);
if (!xchk_xref_process_error(sc, agno, agbno, &error))
return;

@@ -63,6 +63,7 @@ xchk_superblock(
struct xfs_mount *mp = sc->mp;
struct xfs_buf *bp;
struct xfs_dsb *sb;
struct xfs_perag *pag;
xfs_agnumber_t agno;
uint32_t v2_ok;
__be32 features_mask;
@@ -73,6 +74,15 @@ xchk_superblock(
if (agno == 0)
return 0;

/*
* Grab an active reference to the perag structure. If we can't get
* it, we're racing with something that's tearing down the AG, so
* signal that the AG no longer exists.
*/
pag = xfs_perag_get(mp, agno);
if (!pag)
return -ENOENT;

error = xfs_sb_read_secondary(mp, sc->tp, agno, &bp);
/*
* The superblock verifier can return several different error codes
@@ -92,7 +102,7 @@ xchk_superblock(
break;
}
if (!xchk_process_error(sc, agno, XFS_SB_BLOCK(mp), &error))
return error;
goto out_pag;

sb = bp->b_addr;

@@ -248,7 +258,7 @@ xchk_superblock(
xchk_block_set_corrupt(sc, bp);
} else {
v2_ok = XFS_SB_VERSION2_OKBITS;
if (XFS_SB_VERSION_NUM(&mp->m_sb) >= XFS_SB_VERSION_5)
if (xfs_sb_is_v5(&mp->m_sb))
v2_ok |= XFS_SB_VERSION2_CRCBIT;

if (!!(sb->sb_features2 & cpu_to_be32(~v2_ok)))
@@ -273,7 +283,7 @@ xchk_superblock(
(cpu_to_be32(mp->m_sb.sb_features2) & features_mask))
xchk_block_set_corrupt(sc, bp);

if (!xfs_sb_version_hascrc(&mp->m_sb)) {
if (!xfs_has_crc(mp)) {
/* all v5 fields must be zero */
if (memchr_inv(&sb->sb_features_compat, 0,
sizeof(struct xfs_dsb) -
@@ -324,7 +334,7 @@ xchk_superblock(
/* Don't care about sb_lsn */
}

if (xfs_sb_version_hasmetauuid(&mp->m_sb)) {
if (xfs_has_metauuid(mp)) {
/* The metadata UUID must be the same for all supers */
if (!uuid_equal(&sb->sb_meta_uuid, &mp->m_sb.sb_meta_uuid))
xchk_block_set_corrupt(sc, bp);
@@ -336,7 +346,8 @@ xchk_superblock(
xchk_block_set_corrupt(sc, bp);

xchk_superblock_xref(sc, bp);

out_pag:
xfs_perag_put(pag);
return error;
}

@@ -346,7 +357,7 @@ xchk_superblock(
STATIC int
xchk_agf_record_bno_lengths(
struct xfs_btree_cur *cur,
struct xfs_alloc_rec_incore *rec,
const struct xfs_alloc_rec_incore *rec,
void *priv)
{
xfs_extlen_t *blocks = priv;
@@ -419,7 +430,7 @@ xchk_agf_xref_btreeblks(
int error;

/* agf_btreeblks didn't exist before lazysbcount */
if (!xfs_sb_version_haslazysbcount(&sc->mp->m_sb))
if (!xfs_has_lazysbcount(sc->mp))
return;

/* Check agf_rmap_blocks; set up for agf_btreeblks check */
@@ -438,7 +449,7 @@ xchk_agf_xref_btreeblks(
* No rmap cursor; we can't xref if we have the rmapbt feature.
* We also can't do it if we're missing the free space btree cursors.
*/
if ((xfs_sb_version_hasrmapbt(&mp->m_sb) && !sc->sa.rmap_cur) ||
if ((xfs_has_rmapbt(mp) && !sc->sa.rmap_cur) ||
!sc->sa.bno_cur || !sc->sa.cnt_cur)
return;

@@ -527,6 +538,7 @@ xchk_agf(
xchk_buffer_recheck(sc, sc->sa.agf_bp);

agf = sc->sa.agf_bp->b_addr;
pag = sc->sa.pag;

/* Check the AG length */
eoag = be32_to_cpu(agf->agf_length);
@@ -550,7 +562,7 @@ xchk_agf(
if (level <= 0 || level > XFS_BTREE_MAXLEVELS)
xchk_block_set_corrupt(sc, sc->sa.agf_bp);

if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
if (xfs_has_rmapbt(mp)) {
agbno = be32_to_cpu(agf->agf_roots[XFS_BTNUM_RMAP]);
if (!xfs_verify_agbno(mp, agno, agbno))
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
@@ -560,7 +572,7 @@ xchk_agf(
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
}

if (xfs_sb_version_hasreflink(&mp->m_sb)) {
if (xfs_has_reflink(mp)) {
agbno = be32_to_cpu(agf->agf_refcount_root);
if (!xfs_verify_agbno(mp, agno, agbno))
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
@@ -582,15 +594,13 @@ xchk_agf(
xchk_block_set_corrupt(sc, sc->sa.agf_bp);

/* Do the incore counters match? */
pag = xfs_perag_get(mp, agno);
if (pag->pagf_freeblks != be32_to_cpu(agf->agf_freeblks))
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
if (pag->pagf_flcount != be32_to_cpu(agf->agf_flcount))
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
if (xfs_sb_version_haslazysbcount(&sc->mp->m_sb) &&
if (xfs_has_lazysbcount(sc->mp) &&
pag->pagf_btreeblks != be32_to_cpu(agf->agf_btreeblks))
xchk_block_set_corrupt(sc, sc->sa.agf_bp);
xfs_perag_put(pag);

xchk_agf_xref(sc);
out:
@@ -630,7 +640,7 @@ xchk_agfl_block(
{
struct xchk_agfl_info *sai = priv;
struct xfs_scrub *sc = sai->sc;
xfs_agnumber_t agno = sc->sa.agno;
xfs_agnumber_t agno = sc->sa.pag->pag_agno;

if (xfs_verify_agbno(mp, agno, agbno) &&
sai->nr_entries < sai->sz_entries)
@@ -787,7 +797,7 @@ xchk_agi_xref_fiblocks(
xfs_agblock_t blocks;
int error = 0;

if (!xfs_sb_version_hasinobtcounts(&sc->mp->m_sb))
if (!xfs_has_inobtcounts(sc->mp))
return;

if (sc->sa.ino_cur) {
@@ -857,6 +867,7 @@ xchk_agi(
xchk_buffer_recheck(sc, sc->sa.agi_bp);

agi = sc->sa.agi_bp->b_addr;
pag = sc->sa.pag;

/* Check the AG length */
eoag = be32_to_cpu(agi->agi_length);
@@ -872,7 +883,7 @@ xchk_agi(
if (level <= 0 || level > XFS_BTREE_MAXLEVELS)
xchk_block_set_corrupt(sc, sc->sa.agi_bp);

if (xfs_sb_version_hasfinobt(&mp->m_sb)) {
if (xfs_has_finobt(mp)) {
agbno = be32_to_cpu(agi->agi_free_root);
if (!xfs_verify_agbno(mp, agno, agbno))
xchk_block_set_corrupt(sc, sc->sa.agi_bp);
@@ -909,12 +920,10 @@ xchk_agi(
xchk_block_set_corrupt(sc, sc->sa.agi_bp);

/* Do the incore counters match? */
pag = xfs_perag_get(mp, agno);
if (pag->pagi_count != be32_to_cpu(agi->agi_count))
xchk_block_set_corrupt(sc, sc->sa.agi_bp);
if (pag->pagi_freecount != be32_to_cpu(agi->agi_freecount))
xchk_block_set_corrupt(sc, sc->sa.agi_bp);
xfs_perag_put(pag);

xchk_agi_xref(sc);
out:
@@ -70,7 +70,7 @@ struct xrep_agf_allocbt {
STATIC int
xrep_agf_walk_allocbt(
struct xfs_btree_cur *cur,
struct xfs_alloc_rec_incore *rec,
const struct xfs_alloc_rec_incore *rec,
void *priv)
{
struct xrep_agf_allocbt *raa = priv;
@@ -94,7 +94,7 @@ xrep_agf_check_agfl_block(
{
struct xfs_scrub *sc = priv;

if (!xfs_verify_agbno(mp, sc->sa.agno, agbno))
if (!xfs_verify_agbno(mp, sc->sa.pag->pag_agno, agbno))
return -EFSCORRUPTED;
return 0;
}
@@ -164,7 +164,7 @@ xrep_agf_find_btrees(
return -EFSCORRUPTED;

/* We must find the refcountbt root if that feature is enabled. */
if (xfs_sb_version_hasreflink(&sc->mp->m_sb) &&
if (xfs_has_reflink(sc->mp) &&
!xrep_check_btree_root(sc, &fab[XREP_AGF_REFCOUNTBT]))
return -EFSCORRUPTED;

@@ -188,12 +188,13 @@ xrep_agf_init_header(
memset(agf, 0, BBTOB(agf_bp->b_length));
agf->agf_magicnum = cpu_to_be32(XFS_AGF_MAGIC);
agf->agf_versionnum = cpu_to_be32(XFS_AGF_VERSION);
agf->agf_seqno = cpu_to_be32(sc->sa.agno);
agf->agf_length = cpu_to_be32(xfs_ag_block_count(mp, sc->sa.agno));
agf->agf_seqno = cpu_to_be32(sc->sa.pag->pag_agno);
agf->agf_length = cpu_to_be32(xfs_ag_block_count(mp,
sc->sa.pag->pag_agno));
agf->agf_flfirst = old_agf->agf_flfirst;
agf->agf_fllast = old_agf->agf_fllast;
agf->agf_flcount = old_agf->agf_flcount;
if (xfs_sb_version_hascrc(&mp->m_sb))
if (xfs_has_crc(mp))
uuid_copy(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid);

/* Mark the incore AGF data stale until we're done fixing things. */
@@ -223,7 +224,7 @@ xrep_agf_set_roots(
agf->agf_levels[XFS_BTNUM_RMAPi] =
cpu_to_be32(fab[XREP_AGF_RMAPBT].height);

if (xfs_sb_version_hasreflink(&sc->mp->m_sb)) {
if (xfs_has_reflink(sc->mp)) {
agf->agf_refcount_root =
cpu_to_be32(fab[XREP_AGF_REFCOUNTBT].root);
agf->agf_refcount_level =
@@ -280,7 +281,7 @@ xrep_agf_calc_from_btrees(
agf->agf_btreeblks = cpu_to_be32(btreeblks);

/* Update the AGF counters from the refcountbt. */
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
if (xfs_has_reflink(mp)) {
cur = xfs_refcountbt_init_cursor(mp, sc->tp, agf_bp,
sc->sa.pag);
error = xfs_btree_count_blocks(cur, &blocks);
@@ -363,16 +364,16 @@ xrep_agf(
int error;

/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
if (!xfs_has_rmapbt(mp))
return -EOPNOTSUPP;

xchk_perag_get(sc->mp, &sc->sa);
/*
* Make sure we have the AGF buffer, as scrub might have decided it
* was corrupt after xfs_alloc_read_agf failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGF_DADDR(mp)),
XFS_AG_DADDR(mp, sc->sa.pag->pag_agno,
XFS_AGF_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agf_bp, NULL);
if (error)
return error;
@@ -388,7 +389,7 @@ xrep_agf(
* btrees rooted in the AGF. If the AGFL contents are obviously bad
* then we'll bail out.
*/
error = xfs_alloc_read_agfl(mp, sc->tp, sc->sa.agno, &agfl_bp);
error = xfs_alloc_read_agfl(mp, sc->tp, sc->sa.pag->pag_agno, &agfl_bp);
if (error)
return error;

@@ -442,7 +443,7 @@ struct xrep_agfl {
STATIC int
xrep_agfl_walk_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
const struct xfs_rmap_irec *rec,
void *priv)
{
struct xrep_agfl *ra = priv;
@@ -586,7 +587,7 @@ xrep_agfl_init_header(
agfl = XFS_BUF_TO_AGFL(agfl_bp);
memset(agfl, 0xFF, BBTOB(agfl_bp->b_length));
agfl->agfl_magicnum = cpu_to_be32(XFS_AGFL_MAGIC);
agfl->agfl_seqno = cpu_to_be32(sc->sa.agno);
agfl->agfl_seqno = cpu_to_be32(sc->sa.pag->pag_agno);
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_meta_uuid);

/*
@@ -599,7 +600,8 @@ xrep_agfl_init_header(
for_each_xbitmap_extent(br, n, agfl_extents) {
agbno = XFS_FSB_TO_AGBNO(mp, br->start);

trace_xrep_agfl_insert(mp, sc->sa.agno, agbno, br->len);
trace_xrep_agfl_insert(mp, sc->sa.pag->pag_agno, agbno,
br->len);

while (br->len > 0 && fl_off < flcount) {
agfl_bno[fl_off] = cpu_to_be32(agbno);
@@ -638,10 +640,9 @@ xrep_agfl(
int error;

/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
if (!xfs_has_rmapbt(mp))
return -EOPNOTSUPP;

xchk_perag_get(sc->mp, &sc->sa);
xbitmap_init(&agfl_extents);

/*
@@ -649,7 +650,8 @@ xrep_agfl(
* nothing wrong with the AGF, but all the AG header repair functions
* have this chicken-and-egg problem.
*/
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.agno, 0, &agf_bp);
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.pag->pag_agno, 0,
&agf_bp);
if (error)
return error;

@@ -658,7 +660,8 @@ xrep_agfl(
* was corrupt after xfs_alloc_read_agfl failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGFL_DADDR(mp)),
XFS_AG_DADDR(mp, sc->sa.pag->pag_agno,
XFS_AGFL_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agfl_bp, NULL);
if (error)
return error;
@@ -723,7 +726,8 @@ xrep_agi_find_btrees(
int error;

/* Read the AGF. */
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.agno, 0, &agf_bp);
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.pag->pag_agno, 0,
&agf_bp);
if (error)
return error;

@@ -737,7 +741,7 @@ xrep_agi_find_btrees(
return -EFSCORRUPTED;

/* We must find the finobt root if that feature is enabled. */
if (xfs_sb_version_hasfinobt(&mp->m_sb) &&
if (xfs_has_finobt(mp) &&
!xrep_check_btree_root(sc, &fab[XREP_AGI_FINOBT]))
return -EFSCORRUPTED;

@@ -761,11 +765,12 @@ xrep_agi_init_header(
memset(agi, 0, BBTOB(agi_bp->b_length));
agi->agi_magicnum = cpu_to_be32(XFS_AGI_MAGIC);
agi->agi_versionnum = cpu_to_be32(XFS_AGI_VERSION);
agi->agi_seqno = cpu_to_be32(sc->sa.agno);
agi->agi_length = cpu_to_be32(xfs_ag_block_count(mp, sc->sa.agno));
agi->agi_seqno = cpu_to_be32(sc->sa.pag->pag_agno);
agi->agi_length = cpu_to_be32(xfs_ag_block_count(mp,
sc->sa.pag->pag_agno));
agi->agi_newino = cpu_to_be32(NULLAGINO);
agi->agi_dirino = cpu_to_be32(NULLAGINO);
if (xfs_sb_version_hascrc(&mp->m_sb))
if (xfs_has_crc(mp))
uuid_copy(&agi->agi_uuid, &mp->m_sb.sb_meta_uuid);

/* We don't know how to fix the unlinked list yet. */
@@ -787,7 +792,7 @@ xrep_agi_set_roots(
agi->agi_root = cpu_to_be32(fab[XREP_AGI_INOBT].root);
agi->agi_level = cpu_to_be32(fab[XREP_AGI_INOBT].height);

if (xfs_sb_version_hasfinobt(&sc->mp->m_sb)) {
if (xfs_has_finobt(sc->mp)) {
agi->agi_free_root = cpu_to_be32(fab[XREP_AGI_FINOBT].root);
agi->agi_free_level = cpu_to_be32(fab[XREP_AGI_FINOBT].height);
}
@@ -811,7 +816,7 @@ xrep_agi_calc_from_btrees(
error = xfs_ialloc_count_inodes(cur, &count, &freecount);
if (error)
goto err;
if (xfs_sb_version_hasinobtcounts(&mp->m_sb)) {
if (xfs_has_inobtcounts(mp)) {
xfs_agblock_t blocks;

error = xfs_btree_count_blocks(cur, &blocks);
@@ -824,8 +829,7 @@ xrep_agi_calc_from_btrees(
agi->agi_count = cpu_to_be32(count);
agi->agi_freecount = cpu_to_be32(freecount);

if (xfs_sb_version_hasfinobt(&mp->m_sb) &&
xfs_sb_version_hasinobtcounts(&mp->m_sb)) {
if (xfs_has_finobt(mp) && xfs_has_inobtcounts(mp)) {
xfs_agblock_t blocks;

cur = xfs_inobt_init_cursor(mp, sc->tp, agi_bp,
@@ -893,16 +897,16 @@ xrep_agi(
int error;

/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
if (!xfs_has_rmapbt(mp))
return -EOPNOTSUPP;

xchk_perag_get(sc->mp, &sc->sa);
/*
* Make sure we have the AGI buffer, as scrub might have decided it
* was corrupt after xfs_ialloc_read_agi failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGI_DADDR(mp)),
XFS_AG_DADDR(mp, sc->sa.pag->pag_agno,
XFS_AGI_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agi_bp, NULL);
if (error)
return error;
@@ -91,7 +91,7 @@ xchk_allocbt_xref(
STATIC int
xchk_allocbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
const union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
xfs_agnumber_t agno = bs->cur->bc_ag.pag->pag_agno;
@@ -25,11 +25,11 @@
* reallocating the buffer if necessary. Buffer contents are not preserved
* across a reallocation.
*/
int
static int
xchk_setup_xattr_buf(
struct xfs_scrub *sc,
size_t value_size,
xfs_km_flags_t flags)
gfp_t flags)
{
size_t sz;
struct xchk_xattr_buf *ab = sc->buf;
@@ -57,7 +57,7 @@ xchk_setup_xattr_buf(
* Don't zero the buffer upon allocation to avoid runtime overhead.
* All users must be careful never to read uninitialized contents.
*/
ab = kmem_alloc_large(sizeof(*ab) + sz, flags);
ab = kvmalloc(sizeof(*ab) + sz, flags);
if (!ab)
return -ENOMEM;

@@ -79,7 +79,7 @@ xchk_setup_xattr(
* without the inode lock held, which means we can sleep.
*/
if (sc->flags & XCHK_TRY_HARDER) {
error = xchk_setup_xattr_buf(sc, XATTR_SIZE_MAX, 0);
error = xchk_setup_xattr_buf(sc, XATTR_SIZE_MAX, GFP_KERNEL);
if (error)
return error;
}
@@ -138,7 +138,8 @@ xchk_xattr_listent(
* doesn't work, we overload the seen_enough variable to convey
* the error message back to the main scrub function.
*/
error = xchk_setup_xattr_buf(sx->sc, valuelen, KM_MAYFAIL);
error = xchk_setup_xattr_buf(sx->sc, valuelen,
GFP_KERNEL | __GFP_RETRY_MAYFAIL);
if (error == -ENOMEM)
error = -EDEADLOCK;
if (error) {
@@ -323,7 +324,8 @@ xchk_xattr_block(
return 0;

/* Allocate memory for block usage checking. */
error = xchk_setup_xattr_buf(ds->sc, 0, KM_MAYFAIL);
error = xchk_setup_xattr_buf(ds->sc, 0,
GFP_KERNEL | __GFP_RETRY_MAYFAIL);
if (error == -ENOMEM)
return -EDEADLOCK;
if (error)
@@ -334,7 +336,7 @@ xchk_xattr_block(
bitmap_zero(usedmap, mp->m_attr_geo->blksize);

/* Check all the padding. */
if (xfs_sb_version_hascrc(&ds->sc->mp->m_sb)) {
if (xfs_has_crc(ds->sc->mp)) {
struct xfs_attr3_leafblock *leaf = bp->b_addr;

if (leaf->hdr.pad1 != 0 || leaf->hdr.pad2 != 0 ||

@@ -65,7 +65,4 @@ xchk_xattr_dstmap(
BITS_TO_LONGS(sc->mp->m_attr_geo->blksize);
}

int xchk_setup_xattr_buf(struct xfs_scrub *sc, size_t value_size,
xfs_km_flags_t flags);

#endif /* __XFS_SCRUB_ATTR_H__ */
@@ -260,7 +260,7 @@ xbitmap_set_btcur_path(
xfs_btree_get_block(cur, i, &bp);
if (!bp)
continue;
fsb = XFS_DADDR_TO_FSB(cur->bc_mp, bp->b_bn);
fsb = XFS_DADDR_TO_FSB(cur->bc_mp, xfs_buf_daddr(bp));
error = xbitmap_set(bitmap, fsb, 1);
if (error)
return error;
@@ -284,7 +284,7 @@ xbitmap_collect_btblock(
if (!bp)
return 0;

fsbno = XFS_DADDR_TO_FSB(cur->bc_mp, bp->b_bn);
fsbno = XFS_DADDR_TO_FSB(cur->bc_mp, xfs_buf_daddr(bp));
return xbitmap_set(bitmap, fsbno, 1);
}

@@ -260,10 +260,10 @@ xchk_bmap_iextent_xref(
agbno = XFS_FSB_TO_AGBNO(mp, irec->br_startblock);
len = irec->br_blockcount;

error = xchk_ag_init(info->sc, agno, &info->sc->sa);
error = xchk_ag_init_existing(info->sc, agno, &info->sc->sa);
if (!xchk_fblock_process_error(info->sc, info->whichfork,
irec->br_startoff, &error))
return;
goto out_free;

xchk_xref_is_used_space(info->sc, agbno, len);
xchk_xref_is_not_inode_chunk(info->sc, agbno, len);
@@ -283,6 +283,7 @@ xchk_bmap_iextent_xref(
break;
}

out_free:
xchk_ag_free(info->sc, &info->sc->sa);
}

@@ -383,7 +384,7 @@ xchk_bmap_iextent(
STATIC int
xchk_bmapbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
const union xfs_btree_rec *rec)
{
struct xfs_bmbt_irec irec;
struct xfs_bmbt_irec iext_irec;
@@ -400,7 +401,7 @@ xchk_bmapbt_rec(
* Check the owners of the btree blocks up to the level below
* the root since the verifiers don't do that.
*/
if (xfs_sb_version_hascrc(&bs->cur->bc_mp->m_sb) &&
if (xfs_has_crc(bs->cur->bc_mp) &&
bs->cur->bc_ptrs[0] == 1) {
for (i = 0; i < bs->cur->bc_nlevels - 1; i++) {
block = xfs_btree_get_block(bs->cur, i, &bp);
@@ -473,10 +474,11 @@ struct xchk_bmap_check_rmap_info {
STATIC int
xchk_bmap_check_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
const struct xfs_rmap_irec *rec,
void *priv)
{
struct xfs_bmbt_irec irec;
struct xfs_rmap_irec check_rec;
struct xchk_bmap_check_rmap_info *sbcri = priv;
struct xfs_ifork *ifp;
struct xfs_scrub *sc = sbcri->sc;
@@ -510,28 +512,30 @@ xchk_bmap_check_rmap(
* length, so we have to loop through the bmbt to make sure that the
* entire rmap is covered by bmbt records.
*/
check_rec = *rec;
while (have_map) {
if (irec.br_startoff != rec->rm_offset)
if (irec.br_startoff != check_rec.rm_offset)
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
check_rec.rm_offset);
if (irec.br_startblock != XFS_AGB_TO_FSB(sc->mp,
cur->bc_ag.pag->pag_agno, rec->rm_startblock))
cur->bc_ag.pag->pag_agno,
check_rec.rm_startblock))
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
if (irec.br_blockcount > rec->rm_blockcount)
check_rec.rm_offset);
if (irec.br_blockcount > check_rec.rm_blockcount)
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
check_rec.rm_offset);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
break;
rec->rm_startblock += irec.br_blockcount;
rec->rm_offset += irec.br_blockcount;
rec->rm_blockcount -= irec.br_blockcount;
if (rec->rm_blockcount == 0)
check_rec.rm_startblock += irec.br_blockcount;
check_rec.rm_offset += irec.br_blockcount;
check_rec.rm_blockcount -= irec.br_blockcount;
if (check_rec.rm_blockcount == 0)
break;
have_map = xfs_iext_next_extent(ifp, &sbcri->icur, &irec);
if (!have_map)
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
check_rec.rm_offset);
}

out:
@@ -581,7 +585,7 @@ xchk_bmap_check_rmaps(
bool zero_size;
int error;

if (!xfs_sb_version_hasrmapbt(&sc->mp->m_sb) ||
if (!xfs_has_rmapbt(sc->mp) ||
whichfork == XFS_COW_FORK ||
(sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
return 0;
@@ -659,8 +663,7 @@ xchk_bmap(
}
break;
case XFS_ATTR_FORK:
if (!xfs_sb_version_hasattr(&mp->m_sb) &&
!xfs_sb_version_hasattr2(&mp->m_sb))
if (!xfs_has_attr(mp) && !xfs_has_attr2(mp))
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
break;
default:
@@ -374,10 +374,10 @@ xchk_btree_check_block_owner(

init_sa = bs->cur->bc_flags & XFS_BTREE_LONG_PTRS;
if (init_sa) {
error = xchk_ag_init(bs->sc, agno, &bs->sc->sa);
error = xchk_ag_init_existing(bs->sc, agno, &bs->sc->sa);
if (!xchk_btree_xref_process_error(bs->sc, bs->cur,
level, &error))
return error;
goto out_free;
}

xchk_xref_is_used_space(bs->sc, agbno, 1);
@@ -393,6 +393,7 @@ xchk_btree_check_block_owner(
if (!bs->sc->sa.rmap_cur && btnum == XFS_BTNUM_RMAP)
bs->cur = NULL;

out_free:
if (init_sa)
xchk_ag_free(bs->sc, &bs->sc->sa);

@@ -435,12 +436,12 @@ xchk_btree_check_owner(
if (!co)
return -ENOMEM;
co->level = level;
co->daddr = XFS_BUF_ADDR(bp);
co->daddr = xfs_buf_daddr(bp);
list_add_tail(&co->list, &bs->to_check);
return 0;
}

return xchk_btree_check_block_owner(bs, level, XFS_BUF_ADDR(bp));
return xchk_btree_check_block_owner(bs, level, xfs_buf_daddr(bp));
}

/* Decide if we want to check minrecs of a btree block in the inode root. */

@@ -26,8 +26,8 @@ void xchk_btree_xref_set_corrupt(struct xfs_scrub *sc,

struct xchk_btree;
typedef int (*xchk_btree_rec_fn)(
struct xchk_btree *bs,
union xfs_btree_rec *rec);
struct xchk_btree *bs,
const union xfs_btree_rec *rec);

struct xchk_btree {
/* caller-provided scrub state */
@@ -186,7 +186,7 @@ xchk_block_set_preen(
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_PREEN;
trace_xchk_block_preen(sc, bp->b_bn, __return_address);
trace_xchk_block_preen(sc, xfs_buf_daddr(bp), __return_address);
}

/*
@@ -219,7 +219,7 @@ xchk_block_set_corrupt(
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xchk_block_error(sc, bp->b_bn, __return_address);
trace_xchk_block_error(sc, xfs_buf_daddr(bp), __return_address);
}

/* Record a corruption while cross-referencing. */
@@ -229,7 +229,7 @@ xchk_block_xref_set_corrupt(
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_XCORRUPT;
trace_xchk_block_error(sc, bp->b_bn, __return_address);
trace_xchk_block_error(sc, xfs_buf_daddr(bp), __return_address);
}

/*
@@ -324,7 +324,7 @@ struct xchk_rmap_ownedby_info {
STATIC int
xchk_count_rmap_ownedby_irec(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
const struct xfs_rmap_irec *rec,
void *priv)
{
struct xchk_rmap_ownedby_info *sroi = priv;
@@ -394,11 +394,11 @@ want_ag_read_header_failure(
}

/*
* Grab all the headers for an AG.
* Grab the perag structure and all the headers for an AG.
*
* The headers should be released by xchk_ag_free, but as a fail
* safe we attach all the buffers we grab to the scrub transaction so
* they'll all be freed when we cancel it.
* The headers should be released by xchk_ag_free, but as a fail safe we attach
* all the buffers we grab to the scrub transaction so they'll all be freed
* when we cancel it. Returns ENOENT if we can't grab the perag structure.
*/
int
xchk_ag_read_headers(
@@ -409,22 +409,24 @@ xchk_ag_read_headers(
struct xfs_mount *mp = sc->mp;
int error;

sa->agno = agno;
ASSERT(!sa->pag);
sa->pag = xfs_perag_get(mp, agno);
if (!sa->pag)
return -ENOENT;

error = xfs_ialloc_read_agi(mp, sc->tp, agno, &sa->agi_bp);
if (error && want_ag_read_header_failure(sc, XFS_SCRUB_TYPE_AGI))
goto out;
return error;

error = xfs_alloc_read_agf(mp, sc->tp, agno, 0, &sa->agf_bp);
if (error && want_ag_read_header_failure(sc, XFS_SCRUB_TYPE_AGF))
goto out;
return error;

error = xfs_alloc_read_agfl(mp, sc->tp, agno, &sa->agfl_bp);
if (error && want_ag_read_header_failure(sc, XFS_SCRUB_TYPE_AGFL))
goto out;
error = 0;
out:
return error;
return error;

return 0;
}

/* Release all the AG btree cursors. */
@@ -461,7 +463,6 @@ xchk_ag_btcur_init(
{
struct xfs_mount *mp = sc->mp;

xchk_perag_get(sc->mp, sa);
if (sa->agf_bp &&
xchk_ag_btree_healthy_enough(sc, sa->pag, XFS_BTNUM_BNO)) {
/* Set up a bnobt cursor for cross-referencing. */
@@ -484,21 +485,21 @@ xchk_ag_btcur_init(
}

/* Set up a finobt cursor for cross-referencing. */
if (sa->agi_bp && xfs_sb_version_hasfinobt(&mp->m_sb) &&
if (sa->agi_bp && xfs_has_finobt(mp) &&
xchk_ag_btree_healthy_enough(sc, sa->pag, XFS_BTNUM_FINO)) {
sa->fino_cur = xfs_inobt_init_cursor(mp, sc->tp, sa->agi_bp,
sa->pag, XFS_BTNUM_FINO);
}

/* Set up a rmapbt cursor for cross-referencing. */
if (sa->agf_bp && xfs_sb_version_hasrmapbt(&mp->m_sb) &&
if (sa->agf_bp && xfs_has_rmapbt(mp) &&
xchk_ag_btree_healthy_enough(sc, sa->pag, XFS_BTNUM_RMAP)) {
sa->rmap_cur = xfs_rmapbt_init_cursor(mp, sc->tp, sa->agf_bp,
sa->pag);
}

/* Set up a refcountbt cursor for cross-referencing. */
if (sa->agf_bp && xfs_sb_version_hasreflink(&mp->m_sb) &&
if (sa->agf_bp && xfs_has_reflink(mp) &&
xchk_ag_btree_healthy_enough(sc, sa->pag, XFS_BTNUM_REFC)) {
sa->refc_cur = xfs_refcountbt_init_cursor(mp, sc->tp,
sa->agf_bp, sa->pag);
@@ -528,15 +529,14 @@ xchk_ag_free(
xfs_perag_put(sa->pag);
sa->pag = NULL;
}
sa->agno = NULLAGNUMBER;
}

/*
* For scrub, grab the AGI and the AGF headers, in that order. Locking
* order requires us to get the AGI before the AGF. We use the
* transaction to avoid deadlocking on crosslinked metadata buffers;
* either the caller passes one in (bmap scrub) or we have to create a
* transaction ourselves.
* For scrub, grab the perag structure, the AGI, and the AGF headers, in that
* order. Locking order requires us to get the AGI before the AGF. We use the
* transaction to avoid deadlocking on crosslinked metadata buffers; either the
* caller passes one in (bmap scrub) or we have to create a transaction
* ourselves. Returns ENOENT if the perag struct cannot be grabbed.
*/
int
xchk_ag_init(
@@ -554,19 +554,6 @@ xchk_ag_init(
return 0;
}

/*
* Grab the per-ag structure if we haven't already gotten it. Teardown of the
* xchk_ag will release it for us.
*/
void
xchk_perag_get(
struct xfs_mount *mp,
struct xchk_ag *sa)
{
if (!sa->pag)
sa->pag = xfs_perag_get(mp, sa->agno);
}

/* Per-scrubber setup functions */

/*
@@ -797,7 +784,7 @@ xchk_buffer_recheck(
if (!fa)
return;
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xchk_block_error(sc, bp->b_bn, fa);
trace_xchk_block_error(sc, xfs_buf_daddr(bp), fa);
}

/*
@@ -842,7 +829,7 @@ xchk_metadata_inode_forks(
return error;

/* Look for incorrect shared blocks. */
if (xfs_sb_version_hasreflink(&sc->mp->m_sb)) {
if (xfs_has_reflink(sc->mp)) {
error = xfs_reflink_inode_has_shared_extents(sc->tp, sc->ip,
&shared);
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0,
@@ -884,6 +871,7 @@ xchk_stop_reaping(
{
sc->flags |= XCHK_REAPING_DISABLED;
xfs_blockgc_stop(sc->mp);
xfs_inodegc_stop(sc->mp);
}

/* Restart background reaping of resources. */
@@ -891,6 +879,13 @@ void
xchk_start_reaping(
struct xfs_scrub *sc)
{
xfs_blockgc_start(sc->mp);
/*
* Readonly filesystems do not perform inactivation or speculative
* preallocation, so there's no need to restart the workers.
*/
if (!xfs_is_readonly(sc->mp)) {
xfs_inodegc_start(sc->mp);
xfs_blockgc_start(sc->mp);
}
sc->flags &= ~XCHK_REAPING_DISABLED;
}
@@ -107,7 +107,23 @@ int xchk_setup_fscounters(struct xfs_scrub *sc);
void xchk_ag_free(struct xfs_scrub *sc, struct xchk_ag *sa);
int xchk_ag_init(struct xfs_scrub *sc, xfs_agnumber_t agno,
struct xchk_ag *sa);
void xchk_perag_get(struct xfs_mount *mp, struct xchk_ag *sa);

/*
* Grab all AG resources, treating the inability to grab the perag structure as
* a fs corruption. This is intended for callers checking an ondisk reference
* to a given AG, which means that the AG must still exist.
*/
static inline int
xchk_ag_init_existing(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
struct xchk_ag *sa)
{
int error = xchk_ag_init(sc, agno, sa);

return error == -ENOENT ? -EFSCORRUPTED : error;
}

int xchk_ag_read_headers(struct xfs_scrub *sc, xfs_agnumber_t agno,
struct xchk_ag *sa);
void xchk_ag_btcur_free(struct xchk_ag *sa);
@@ -367,11 +367,11 @@ xchk_da_btree_block(
pmaxrecs = &ds->maxrecs[level];

/* We only started zeroing the header on v5 filesystems. */
if (xfs_sb_version_hascrc(&ds->sc->mp->m_sb) && hdr3->hdr.pad)
if (xfs_has_crc(ds->sc->mp) && hdr3->hdr.pad)
xchk_da_set_corrupt(ds, level);

/* Check the owner. */
if (xfs_sb_version_hascrc(&ip->i_mount->m_sb)) {
if (xfs_has_crc(ip->i_mount)) {
owner = be64_to_cpu(hdr3->owner);
if (owner != ip->i_ino)
xchk_da_set_corrupt(ds, level);
@@ -51,7 +51,7 @@ xchk_dir_check_ftype(
int ino_dtype;
int error = 0;

if (!xfs_sb_version_hasftype(&mp->m_sb)) {
if (!xfs_has_ftype(mp)) {
if (dtype != DT_UNKNOWN && dtype != DT_DIR)
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
@@ -140,7 +140,7 @@ xchk_dir_actor(

if (!strncmp(".", name, namelen)) {
/* If this is "." then check that the inum matches the dir. */
if (xfs_sb_version_hasftype(&mp->m_sb) && type != DT_DIR)
if (xfs_has_ftype(mp) && type != DT_DIR)
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
checked_ftype = true;
@@ -152,7 +152,7 @@ xchk_dir_actor(
* If this is ".." in the root inode, check that the inum
* matches this dir.
*/
if (xfs_sb_version_hasftype(&mp->m_sb) && type != DT_DIR)
if (xfs_has_ftype(mp) && type != DT_DIR)
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
checked_ftype = true;
@@ -526,7 +526,7 @@ xchk_directory_leaf1_bestfree(
bestcount = be32_to_cpu(ltp->bestcount);
bestp = xfs_dir2_leaf_bests_p(ltp);

if (xfs_sb_version_hascrc(&sc->mp->m_sb)) {
if (xfs_has_crc(sc->mp)) {
struct xfs_dir3_leaf_hdr *hdr3 = bp->b_addr;

if (hdr3->pad != cpu_to_be32(0))
@@ -623,7 +623,7 @@ xchk_directory_free_bestfree(
return error;
xchk_buffer_recheck(sc, bp);

if (xfs_sb_version_hascrc(&sc->mp->m_sb)) {
if (xfs_has_crc(sc->mp)) {
struct xfs_dir3_free_hdr *hdr3 = bp->b_addr;

if (hdr3->pad != cpu_to_be32(0))
@@ -148,9 +148,9 @@ xchk_fscount_btreeblks(
xfs_extlen_t blocks;
int error;

error = xchk_ag_init(sc, agno, &sc->sa);
error = xchk_ag_init_existing(sc, agno, &sc->sa);
if (error)
return error;
goto out_free;

error = xfs_btree_count_blocks(sc->sa.bno_cur, &blocks);
if (error)
@@ -207,7 +207,7 @@ retry:
/* Add up the free/freelist/bnobt/cntbt blocks */
fsc->fdblocks += pag->pagf_freeblks;
fsc->fdblocks += pag->pagf_flcount;
if (xfs_sb_version_haslazysbcount(&sc->mp->m_sb)) {
if (xfs_has_lazysbcount(sc->mp)) {
fsc->fdblocks += pag->pagf_btreeblks;
} else {
error = xchk_fscount_btreeblks(sc, fsc, agno);
@@ -418,7 +418,7 @@ xchk_iallocbt_rec_alignment(
STATIC int
xchk_iallocbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
const union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
struct xchk_iallocbt *iabt = bs->private;
@@ -517,7 +517,7 @@ xchk_iallocbt_xref_rmap_btreeblks(
int error;

if (!sc->sa.ino_cur || !sc->sa.rmap_cur ||
(xfs_sb_version_hasfinobt(&sc->mp->m_sb) && !sc->sa.fino_cur) ||
(xfs_has_finobt(sc->mp) && !sc->sa.fino_cur) ||
xchk_skip_xref(sc->sm))
return;
@@ -181,7 +181,7 @@ xchk_inode_flags2(

/* reflink flag requires reflink feature */
if ((flags2 & XFS_DIFLAG2_REFLINK) &&
!xfs_sb_version_hasreflink(&mp->m_sb))
!xfs_has_reflink(mp))
goto bad;

/* cowextsize flag is checked w.r.t. mode separately */
@@ -199,8 +199,7 @@ xchk_inode_flags2(
goto bad;

/* no bigtime iflag without the bigtime feature */
if (xfs_dinode_has_bigtime(dip) &&
!xfs_sb_version_hasbigtime(&mp->m_sb))
if (xfs_dinode_has_bigtime(dip) && !xfs_has_bigtime(mp))
goto bad;

return;
@@ -278,7 +277,7 @@ xchk_dinode(
xchk_ino_set_corrupt(sc, ino);

if (dip->di_projid_hi != 0 &&
!xfs_sb_version_hasprojid32bit(&mp->m_sb))
!xfs_has_projid32(mp))
xchk_ino_set_corrupt(sc, ino);
break;
default:
@@ -532,9 +531,9 @@ xchk_inode_xref(
agno = XFS_INO_TO_AGNO(sc->mp, ino);
agbno = XFS_INO_TO_AGBNO(sc->mp, ino);

error = xchk_ag_init(sc, agno, &sc->sa);
error = xchk_ag_init_existing(sc, agno, &sc->sa);
if (!xchk_xref_process_error(sc, agno, agbno, &error))
return;
goto out_free;

xchk_xref_is_used_space(sc, agbno, 1);
xchk_inode_xref_finobt(sc, ino);
@@ -542,6 +541,7 @@ xchk_inode_xref(
xchk_xref_is_not_shared(sc, agbno, 1);
xchk_inode_xref_bmap(sc, dip);

out_free:
xchk_ag_free(sc, &sc->sa);
}

@@ -560,7 +560,7 @@ xchk_inode_check_reflink_iflag(
bool has_shared;
int error;

if (!xfs_sb_version_hasreflink(&mp->m_sb))
if (!xfs_has_reflink(mp))
return;

error = xfs_reflink_inode_has_shared_extents(sc->tp, sc->ip,
@@ -42,7 +42,7 @@ xchk_setup_quota(
xfs_dqtype_t dqtype;
int error;

if (!XFS_IS_QUOTA_RUNNING(sc->mp) || !XFS_IS_QUOTA_ON(sc->mp))
if (!XFS_IS_QUOTA_ON(sc->mp))
return -ENOENT;

dqtype = xchk_quota_to_dqtype(sc);
@@ -127,7 +127,7 @@ xchk_quota_item(
* a reflink filesystem we're allowed to exceed physical space
* if there are no quota limits.
*/
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
if (xfs_has_reflink(mp)) {
if (mp->m_sb.sb_dblocks < dq->q_blk.count)
xchk_fblock_set_warning(sc, XFS_DATA_FORK,
offset);
@@ -91,7 +91,7 @@ struct xchk_refcnt_check {
STATIC int
xchk_refcountbt_rmap_check(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
const struct xfs_rmap_irec *rec,
void *priv)
{
struct xchk_refcnt_check *refchk = priv;
@@ -330,7 +330,7 @@ xchk_refcountbt_xref(
STATIC int
xchk_refcountbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
const union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
xfs_agblock_t *cow_blocks = bs->private;
@@ -248,19 +248,19 @@ xrep_calc_ag_resblks(
* bnobt/cntbt or inobt/finobt as pairs.
*/
bnobt_sz = 2 * xfs_allocbt_calc_size(mp, freelen);
if (xfs_sb_version_hassparseinodes(&mp->m_sb))
if (xfs_has_sparseinodes(mp))
inobt_sz = xfs_iallocbt_calc_size(mp, icount /
XFS_INODES_PER_HOLEMASK_BIT);
else
inobt_sz = xfs_iallocbt_calc_size(mp, icount /
XFS_INODES_PER_CHUNK);
if (xfs_sb_version_hasfinobt(&mp->m_sb))
if (xfs_has_finobt(mp))
inobt_sz *= 2;
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
refcbt_sz = xfs_refcountbt_calc_size(mp, usedlen);
else
refcbt_sz = 0;
if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
if (xfs_has_rmapbt(mp)) {
/*
* Guess how many blocks we need to rebuild the rmapbt.
* For non-reflink filesystems we can't have more records than
@@ -269,7 +269,7 @@ xrep_calc_ag_resblks(
* many rmaps there could be in the AG, so we start off with
* what we hope is an generous over-estimation.
*/
if (xfs_sb_version_hasreflink(&mp->m_sb))
if (xfs_has_reflink(mp))
rmapbt_sz = xfs_rmapbt_calc_size(mp,
(unsigned long long)aglen * 2);
else
@@ -306,9 +306,9 @@ xrep_alloc_ag_block(
return -ENOSPC;
xfs_extent_busy_reuse(sc->mp, sc->sa.pag, bno,
1, false);
*fsbno = XFS_AGB_TO_FSB(sc->mp, sc->sa.agno, bno);
*fsbno = XFS_AGB_TO_FSB(sc->mp, sc->sa.pag->pag_agno, bno);
if (resv == XFS_AG_RESV_RMAPBT)
xfs_ag_resv_rmapbt_alloc(sc->mp, sc->sa.agno);
xfs_ag_resv_rmapbt_alloc(sc->mp, sc->sa.pag->pag_agno);
return 0;
default:
break;
@@ -317,7 +317,7 @@ xrep_alloc_ag_block(
args.tp = sc->tp;
args.mp = sc->mp;
args.oinfo = *oinfo;
args.fsbno = XFS_AGB_TO_FSB(args.mp, sc->sa.agno, 0);
args.fsbno = XFS_AGB_TO_FSB(args.mp, sc->sa.pag->pag_agno, 0);
args.minlen = 1;
args.maxlen = 1;
args.prod = 1;
@@ -352,14 +352,14 @@ xrep_init_btblock(
trace_xrep_init_btblock(mp, XFS_FSB_TO_AGNO(mp, fsb),
XFS_FSB_TO_AGBNO(mp, fsb), btnum);

ASSERT(XFS_FSB_TO_AGNO(mp, fsb) == sc->sa.agno);
ASSERT(XFS_FSB_TO_AGNO(mp, fsb) == sc->sa.pag->pag_agno);
error = xfs_trans_get_buf(tp, mp->m_ddev_targp,
XFS_FSB_TO_DADDR(mp, fsb), XFS_FSB_TO_BB(mp, 1), 0,
&bp);
if (error)
return error;
xfs_buf_zero(bp, 0, BBTOB(bp->b_length));
xfs_btree_init_block(mp, bp, btnum, 0, 0, sc->sa.agno);
xfs_btree_init_block(mp, bp, btnum, 0, 0, sc->sa.pag->pag_agno);
xfs_trans_buf_set_type(tp, bp, XFS_BLFT_BTREE_BUF);
xfs_trans_log_buf(tp, bp, 0, BBTOB(bp->b_length) - 1);
bp->b_ops = ops;
@@ -481,7 +481,7 @@ xrep_fix_freelist(

args.mp = sc->mp;
args.tp = sc->tp;
args.agno = sc->sa.agno;
args.agno = sc->sa.pag->pag_agno;
args.alignment = 1;
args.pag = sc->sa.pag;

@@ -611,11 +611,11 @@ xrep_reap_extents(
xfs_fsblock_t fsbno;
int error = 0;

ASSERT(xfs_sb_version_hasrmapbt(&sc->mp->m_sb));
ASSERT(xfs_has_rmapbt(sc->mp));

for_each_xbitmap_block(fsbno, bmr, n, bitmap) {
ASSERT(sc->ip != NULL ||
XFS_FSB_TO_AGNO(sc->mp, fsbno) == sc->sa.agno);
XFS_FSB_TO_AGNO(sc->mp, fsbno) == sc->sa.pag->pag_agno);
trace_xrep_dispose_btree_extent(sc->mp,
XFS_FSB_TO_AGNO(sc->mp, fsbno),
XFS_FSB_TO_AGBNO(sc->mp, fsbno), 1);
@@ -690,7 +690,7 @@ xrep_findroot_block(
int block_level;
int error = 0;

daddr = XFS_AGB_TO_DADDR(mp, ri->sc->sa.agno, agbno);
daddr = XFS_AGB_TO_DADDR(mp, ri->sc->sa.pag->pag_agno, agbno);

/*
* Blocks in the AGFL have stale contents that might just happen to
@@ -819,7 +819,7 @@ xrep_findroot_block(
else
fab->root = NULLAGBLOCK;

trace_xrep_findroot_block(mp, ri->sc->sa.agno, agbno,
trace_xrep_findroot_block(mp, ri->sc->sa.pag->pag_agno, agbno,
be32_to_cpu(btblock->bb_magic), fab->height - 1);
out:
xfs_trans_brelse(ri->sc->tp, bp);
@@ -833,7 +833,7 @@ out:
STATIC int
xrep_findroot_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
const struct xfs_rmap_irec *rec,
void *priv)
{
struct xrep_findroot *ri = priv;
@@ -88,7 +88,7 @@ xchk_rmapbt_xref(
STATIC int
xchk_rmapbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
const union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
struct xfs_rmap_irec irec;
@@ -41,7 +41,7 @@ xchk_setup_rt(
STATIC int
xchk_rtbitmap_rec(
struct xfs_trans *tp,
struct xfs_rtalloc_rec *rec,
const struct xfs_rtalloc_rec *rec,
void *priv)
{
struct xfs_scrub *sc = priv;
@@ -239,21 +239,21 @@ static const struct xchk_meta_ops meta_scrub_ops[] = {
.type = ST_PERAG,
.setup = xchk_setup_ag_iallocbt,
.scrub = xchk_finobt,
.has = xfs_sb_version_hasfinobt,
.has = xfs_has_finobt,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_RMAPBT] = { /* rmapbt */
.type = ST_PERAG,
.setup = xchk_setup_ag_rmapbt,
.scrub = xchk_rmapbt,
.has = xfs_sb_version_hasrmapbt,
.has = xfs_has_rmapbt,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_REFCNTBT] = { /* refcountbt */
.type = ST_PERAG,
.setup = xchk_setup_ag_refcountbt,
.scrub = xchk_refcountbt,
.has = xfs_sb_version_hasreflink,
.has = xfs_has_reflink,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_INODE] = { /* inode record */
@@ -308,14 +308,14 @@ static const struct xchk_meta_ops meta_scrub_ops[] = {
.type = ST_FS,
.setup = xchk_setup_rt,
.scrub = xchk_rtbitmap,
.has = xfs_sb_version_hasrealtime,
.has = xfs_has_realtime,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_RTSUM] = { /* realtime summary */
.type = ST_FS,
.setup = xchk_setup_rt,
.scrub = xchk_rtsummary,
.has = xfs_sb_version_hasrealtime,
.has = xfs_has_realtime,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_UQUOTA] = { /* user quota */
@@ -383,7 +383,7 @@ xchk_validate_inputs(
if (ops->setup == NULL || ops->scrub == NULL)
goto out;
/* Does this fs even support this type of metadata? */
if (ops->has && !ops->has(&mp->m_sb))
if (ops->has && !ops->has(mp))
goto out;

error = -EINVAL;
@@ -415,11 +415,11 @@ xchk_validate_inputs(
*/
if (sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR) {
error = -EOPNOTSUPP;
if (!xfs_sb_version_hascrc(&mp->m_sb))
if (!xfs_has_crc(mp))
goto out;

error = -EROFS;
if (mp->m_flags & XFS_MOUNT_RDONLY)
if (xfs_is_readonly(mp))
goto out;
}

@@ -464,9 +464,6 @@ xfs_scrub_metadata(
struct xfs_scrub sc = {
.file = file,
.sm = sm,
.sa = {
.agno = NULLAGNUMBER,
},
};
struct xfs_mount *mp = XFS_I(file_inode(file))->i_mount;
int error = 0;
@@ -480,10 +477,10 @@ xfs_scrub_metadata(

/* Forbidden if we are shut down or mounted norecovery. */
error = -ESHUTDOWN;
if (XFS_FORCED_SHUTDOWN(mp))
if (xfs_is_shutdown(mp))
goto out;
error = -ENOTRECOVERABLE;
if (mp->m_flags & XFS_MOUNT_NORECOVERY)
if (xfs_has_norecovery(mp))
goto out;

error = xchk_validate_inputs(mp, sm);
@ -27,7 +27,7 @@ struct xchk_meta_ops {
|
||||
int (*repair)(struct xfs_scrub *);
|
||||
|
||||
/* Decide if we even have this piece of metadata. */
|
||||
bool (*has)(struct xfs_sb *);
|
||||
bool (*has)(struct xfs_mount *);
|
||||
|
||||
/* type describing required/allowed inputs */
|
||||
enum xchk_type type;
|
||||
@ -35,7 +35,6 @@ struct xchk_meta_ops {
|
||||
|
||||
/* Buffer pointers and btree cursors for an entire AG. */
|
||||
struct xchk_ag {
|
||||
xfs_agnumber_t agno;
|
||||
struct xfs_perag *pag;
|
||||
|
||||
/* AG btree roots */
|
||||
|
@ -22,11 +22,11 @@ xchk_btree_cur_fsbno(
|
||||
int level)
|
||||
{
|
||||
if (level < cur->bc_nlevels && cur->bc_bufs[level])
|
||||
return XFS_DADDR_TO_FSB(cur->bc_mp, cur->bc_bufs[level]->b_bn);
|
||||
else if (level == cur->bc_nlevels - 1 &&
|
||||
cur->bc_flags & XFS_BTREE_LONG_PTRS)
|
||||
return XFS_DADDR_TO_FSB(cur->bc_mp,
|
||||
xfs_buf_daddr(cur->bc_bufs[level]));
|
||||
if (level == cur->bc_nlevels - 1 && cur->bc_flags & XFS_BTREE_LONG_PTRS)
|
||||
return XFS_INO_TO_FSB(cur->bc_mp, cur->bc_ino.ip->i_ino);
|
||||
else if (!(cur->bc_flags & XFS_BTREE_LONG_PTRS))
|
||||
if (!(cur->bc_flags & XFS_BTREE_LONG_PTRS))
|
||||
return XFS_AGB_TO_FSB(cur->bc_mp, cur->bc_ag.pag->pag_agno, 0);
|
||||
return NULLFSBLOCK;
|
||||
}
|
||||
|
@ -2,6 +2,10 @@
|
||||
/*
|
||||
* Copyright (C) 2017 Oracle. All Rights Reserved.
|
||||
* Author: Darrick J. Wong <darrick.wong@oracle.com>
|
||||
*
|
||||
* NOTE: none of these tracepoints shall be considered a stable kernel ABI
|
||||
* as they can change at any time. See xfs_trace.h for documentation of
|
||||
* specific units found in tracepoint output.
|
||||
*/
|
||||
#undef TRACE_SYSTEM
|
||||
#define TRACE_SYSTEM xfs_scrub
|
||||
@ -79,6 +83,16 @@ TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_FSCOUNTERS);
|
||||
{ XFS_SCRUB_TYPE_PQUOTA, "prjquota" }, \
|
||||
{ XFS_SCRUB_TYPE_FSCOUNTERS, "fscounters" }
|
||||
|
||||
#define XFS_SCRUB_FLAG_STRINGS \
|
||||
{ XFS_SCRUB_IFLAG_REPAIR, "repair" }, \
|
||||
{ XFS_SCRUB_OFLAG_CORRUPT, "corrupt" }, \
|
||||
{ XFS_SCRUB_OFLAG_PREEN, "preen" }, \
|
||||
{ XFS_SCRUB_OFLAG_XFAIL, "xfail" }, \
|
||||
{ XFS_SCRUB_OFLAG_XCORRUPT, "xcorrupt" }, \
|
||||
{ XFS_SCRUB_OFLAG_INCOMPLETE, "incomplete" }, \
|
||||
{ XFS_SCRUB_OFLAG_WARNING, "warning" }, \
|
||||
{ XFS_SCRUB_OFLAG_NO_REPAIR_NEEDED, "norepair" }
|
||||
|
||||
DECLARE_EVENT_CLASS(xchk_class,
|
||||
TP_PROTO(struct xfs_inode *ip, struct xfs_scrub_metadata *sm,
|
||||
int error),
|
||||
@ -103,14 +117,14 @@ DECLARE_EVENT_CLASS(xchk_class,
|
||||
__entry->flags = sm->sm_flags;
|
||||
__entry->error = error;
|
||||
),
|
||||
TP_printk("dev %d:%d ino 0x%llx type %s agno %u inum %llu gen %u flags 0x%x error %d",
|
||||
TP_printk("dev %d:%d ino 0x%llx type %s agno 0x%x inum 0x%llx gen 0x%x flags (%s) error %d",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->ino,
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__entry->agno,
|
||||
__entry->inum,
|
||||
__entry->gen,
|
||||
__entry->flags,
|
||||
__print_flags(__entry->flags, "|", XFS_SCRUB_FLAG_STRINGS),
|
||||
__entry->error)
|
||||
)
|
||||
#define DEFINE_SCRUB_EVENT(name) \
|
||||
@ -145,7 +159,7 @@ TRACE_EVENT(xchk_op_error,
|
||||
__entry->error = error;
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d type %s agno %u agbno %u error %d ret_ip %pS",
|
||||
TP_printk("dev %d:%d type %s agno 0x%x agbno 0x%x error %d ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__entry->agno,
|
||||
@ -176,10 +190,10 @@ TRACE_EVENT(xchk_file_op_error,
|
||||
__entry->error = error;
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %d type %s offset %llu error %d ret_ip %pS",
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %s type %s fileoff 0x%llx error %d ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->ino,
|
||||
__entry->whichfork,
|
||||
__print_symbolic(__entry->whichfork, XFS_WHICHFORK_STRINGS),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__entry->offset,
|
||||
__entry->error,
|
||||
@ -193,29 +207,21 @@ DECLARE_EVENT_CLASS(xchk_block_error_class,
|
||||
__field(dev_t, dev)
|
||||
__field(unsigned int, type)
|
||||
__field(xfs_agnumber_t, agno)
|
||||
__field(xfs_agblock_t, bno)
|
||||
__field(xfs_agblock_t, agbno)
|
||||
__field(void *, ret_ip)
|
||||
),
|
||||
TP_fast_assign(
|
||||
xfs_fsblock_t fsbno;
|
||||
xfs_agnumber_t agno;
|
||||
xfs_agblock_t bno;
|
||||
|
||||
fsbno = XFS_DADDR_TO_FSB(sc->mp, daddr);
|
||||
agno = XFS_FSB_TO_AGNO(sc->mp, fsbno);
|
||||
bno = XFS_FSB_TO_AGBNO(sc->mp, fsbno);
|
||||
|
||||
__entry->dev = sc->mp->m_super->s_dev;
|
||||
__entry->type = sc->sm->sm_type;
|
||||
__entry->agno = agno;
|
||||
__entry->bno = bno;
|
||||
__entry->agno = xfs_daddr_to_agno(sc->mp, daddr);
|
||||
__entry->agbno = xfs_daddr_to_agbno(sc->mp, daddr);
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d type %s agno %u agbno %u ret_ip %pS",
|
||||
TP_printk("dev %d:%d type %s agno 0x%x agbno 0x%x ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__entry->agno,
|
||||
__entry->bno,
|
||||
__entry->agbno,
|
||||
__entry->ret_ip)
|
||||
)
|
||||
|
||||
@ -281,10 +287,10 @@ DECLARE_EVENT_CLASS(xchk_fblock_error_class,
|
||||
__entry->offset = offset;
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %d type %s offset %llu ret_ip %pS",
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %s type %s fileoff 0x%llx ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->ino,
|
||||
__entry->whichfork,
|
||||
__print_symbolic(__entry->whichfork, XFS_WHICHFORK_STRINGS),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__entry->offset,
|
||||
__entry->ret_ip)
|
||||
@ -346,7 +352,7 @@ TRACE_EVENT(xchk_btree_op_error,
|
||||
__entry->error = error;
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno %u agbno %u error %d ret_ip %pS",
|
||||
TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno 0x%x agbno 0x%x error %d ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS),
|
||||
@ -389,10 +395,10 @@ TRACE_EVENT(xchk_ifork_btree_op_error,
|
||||
__entry->error = error;
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %d type %s btree %s level %d ptr %d agno %u agbno %u error %d ret_ip %pS",
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %s type %s btree %s level %d ptr %d agno 0x%x agbno 0x%x error %d ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->ino,
|
||||
__entry->whichfork,
|
||||
__print_symbolic(__entry->whichfork, XFS_WHICHFORK_STRINGS),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS),
|
||||
__entry->level,
|
||||
@ -428,7 +434,7 @@ TRACE_EVENT(xchk_btree_error,
|
||||
__entry->ptr = cur->bc_ptrs[level];
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno %u agbno %u ret_ip %pS",
|
||||
TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno 0x%x agbno 0x%x ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS),
|
||||
@ -468,10 +474,10 @@ TRACE_EVENT(xchk_ifork_btree_error,
|
||||
__entry->ptr = cur->bc_ptrs[level];
|
||||
__entry->ret_ip = ret_ip;
|
||||
),
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %d type %s btree %s level %d ptr %d agno %u agbno %u ret_ip %pS",
|
||||
TP_printk("dev %d:%d ino 0x%llx fork %s type %s btree %s level %d ptr %d agno 0x%x agbno 0x%x ret_ip %pS",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->ino,
|
||||
__entry->whichfork,
|
||||
__print_symbolic(__entry->whichfork, XFS_WHICHFORK_STRINGS),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS),
|
||||
__entry->level,
|
||||
@ -507,7 +513,7 @@ DECLARE_EVENT_CLASS(xchk_sbtree_class,
|
||||
__entry->nlevels = cur->bc_nlevels;
|
||||
__entry->ptr = cur->bc_ptrs[level];
|
||||
),
|
||||
TP_printk("dev %d:%d type %s btree %s agno %u agbno %u level %d nlevels %d ptr %d",
|
||||
TP_printk("dev %d:%d type %s btree %s agno 0x%x agbno 0x%x level %d nlevels %d ptr %d",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS),
|
||||
__print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS),
|
||||
@ -580,7 +586,7 @@ TRACE_EVENT(xchk_iallocbt_check_cluster,
|
||||
__entry->holemask = holemask;
|
||||
__entry->cluster_ino = cluster_ino;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %d startino %u daddr 0x%llx len %d chunkino %u nr_inodes %u cluster_mask 0x%x holemask 0x%x cluster_ino %u",
|
||||
TP_printk("dev %d:%d agno 0x%x startino 0x%x daddr 0x%llx bbcount 0x%x chunkino 0x%x nr_inodes %u cluster_mask 0x%x holemask 0x%x cluster_ino 0x%x",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->startino,
|
||||
@ -670,7 +676,7 @@ DECLARE_EVENT_CLASS(xrep_extent_class,
|
||||
__entry->agbno = agbno;
|
||||
__entry->len = len;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %u agbno %u len %u",
|
||||
TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->agbno,
|
||||
@ -707,7 +713,7 @@ DECLARE_EVENT_CLASS(xrep_rmap_class,
|
||||
__entry->offset = offset;
|
||||
__entry->flags = flags;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %u agbno %u len %u owner %lld offset %llu flags 0x%x",
|
||||
TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x owner 0x%llx fileoff 0x%llx flags 0x%x",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->agbno,
|
||||
@ -745,7 +751,7 @@ TRACE_EVENT(xrep_refcount_extent_fn,
|
||||
__entry->blockcount = irec->rc_blockcount;
|
||||
__entry->refcount = irec->rc_refcount;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %u agbno %u len %u refcount %u",
|
||||
TP_printk("dev %d:%d agno 0x%x agbno 0x%x fsbcount 0x%x refcount %u",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->startblock,
|
||||
@ -769,7 +775,7 @@ TRACE_EVENT(xrep_init_btblock,
|
||||
__entry->agbno = agbno;
|
||||
__entry->btnum = btnum;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %u agbno %u btree %s",
|
||||
TP_printk("dev %d:%d agno 0x%x agbno 0x%x btree %s",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->agbno,
|
||||
@ -793,7 +799,7 @@ TRACE_EVENT(xrep_findroot_block,
|
||||
__entry->magic = magic;
|
||||
__entry->level = level;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %u agbno %u magic 0x%x level %u",
|
||||
TP_printk("dev %d:%d agno 0x%x agbno 0x%x magic 0x%x level %u",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->agbno,
|
||||
@ -821,7 +827,7 @@ TRACE_EVENT(xrep_calc_ag_resblks,
|
||||
__entry->freelen = freelen;
|
||||
__entry->usedlen = usedlen;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %d icount %u aglen %u freelen %u usedlen %u",
|
||||
TP_printk("dev %d:%d agno 0x%x icount %u aglen %u freelen %u usedlen %u",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->icount,
|
||||
@ -850,7 +856,7 @@ TRACE_EVENT(xrep_calc_ag_resblks_btsize,
|
||||
__entry->rmapbt_sz = rmapbt_sz;
|
||||
__entry->refcbt_sz = refcbt_sz;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %d bno %u ino %u rmap %u refcount %u",
|
||||
TP_printk("dev %d:%d agno 0x%x bnobt %u inobt %u rmapbt %u refcountbt %u",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->bnobt_sz,
|
||||
@ -894,7 +900,7 @@ TRACE_EVENT(xrep_ialloc_insert,
|
||||
__entry->freecount = freecount;
|
||||
__entry->freemask = freemask;
|
||||
),
|
||||
TP_printk("dev %d:%d agno %d startino %u holemask 0x%x count %u freecount %u freemask 0x%llx",
|
||||
TP_printk("dev %d:%d agno 0x%x startino 0x%x holemask 0x%x count %u freecount %u freemask 0x%llx",
|
||||
MAJOR(__entry->dev), MINOR(__entry->dev),
|
||||
__entry->agno,
|
||||
__entry->startino,
|
||||
|
@ -232,7 +232,7 @@ xfs_acl_set_mode(
|
||||
inode->i_ctime = current_time(inode);
|
||||
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
|
||||
|
||||
if (mp->m_flags & XFS_MOUNT_WSYNC)
|
||||
if (xfs_has_wsync(mp))
|
||||
xfs_trans_set_sync(tp);
|
||||
return xfs_trans_commit(tp);
|
||||
}
|
||||
|
@ -97,7 +97,7 @@ xfs_end_ioend(
|
||||
/*
|
||||
* Just clean up the in-memory structures if the fs has been shut down.
|
||||
*/
|
||||
if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
|
||||
if (xfs_is_shutdown(ip->i_mount)) {
|
||||
error = -EIO;
|
||||
goto done;
|
||||
}
|
||||
@ -260,7 +260,7 @@ xfs_map_blocks(
|
||||
int retries = 0;
|
||||
int error = 0;
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
/*
|
||||
@ -440,7 +440,7 @@ xfs_discard_page(
|
||||
xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, pageoff);
|
||||
int error;
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
goto out_invalidate;
|
||||
|
||||
xfs_alert_ratelimited(mp,
|
||||
@ -449,7 +449,7 @@ xfs_discard_page(
|
||||
|
||||
error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
|
||||
i_blocks_per_page(inode, page) - pageoff_fsb);
|
||||
if (error && !XFS_FORCED_SHUTDOWN(mp))
|
||||
if (error && !xfs_is_shutdown(mp))
|
||||
xfs_alert(mp, "page discard unable to remove delalloc mapping.");
|
||||
out_invalidate:
|
||||
iomap_invalidatepage(page, pageoff, PAGE_SIZE - pageoff);
|
||||
@ -461,22 +461,6 @@ static const struct iomap_writeback_ops xfs_writeback_ops = {
|
||||
.discard_page = xfs_discard_page,
|
||||
};
|
||||
|
||||
STATIC int
|
||||
xfs_vm_writepage(
|
||||
struct page *page,
|
||||
struct writeback_control *wbc)
|
||||
{
|
||||
struct xfs_writepage_ctx wpc = { };
|
||||
|
||||
if (WARN_ON_ONCE(current->journal_info)) {
|
||||
redirty_page_for_writepage(wbc, page);
|
||||
unlock_page(page);
|
||||
return 0;
|
||||
}
|
||||
|
||||
return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops);
|
||||
}
|
||||
|
||||
STATIC int
|
||||
xfs_vm_writepages(
|
||||
struct address_space *mapping,
|
||||
@ -559,7 +543,6 @@ xfs_iomap_swapfile_activate(
|
||||
const struct address_space_operations xfs_address_space_operations = {
|
||||
.readpage = xfs_vm_readpage,
|
||||
.readahead = xfs_vm_readahead,
|
||||
.writepage = xfs_vm_writepage,
|
||||
.writepages = xfs_vm_writepages,
|
||||
.set_page_dirty = __set_page_dirty_nobuffers,
|
||||
.releasepage = iomap_releasepage,
|
||||
|
@ -151,7 +151,7 @@ xfs_attr3_node_inactive(
|
||||
}
|
||||
|
||||
xfs_da3_node_hdr_from_disk(dp->i_mount, &ichdr, bp->b_addr);
|
||||
parent_blkno = bp->b_bn;
|
||||
parent_blkno = xfs_buf_daddr(bp);
|
||||
if (!ichdr.count) {
|
||||
xfs_trans_brelse(*trans, bp);
|
||||
return 0;
|
||||
@ -177,7 +177,7 @@ xfs_attr3_node_inactive(
|
||||
return error;
|
||||
|
||||
/* save for re-read later */
|
||||
child_blkno = XFS_BUF_ADDR(child_bp);
|
||||
child_blkno = xfs_buf_daddr(child_bp);
|
||||
|
||||
/*
|
||||
* Invalidate the subtree, however we have to.
|
||||
@ -271,7 +271,7 @@ xfs_attr3_root_inactive(
|
||||
error = xfs_da3_node_read(*trans, dp, 0, &bp, XFS_ATTR_FORK);
|
||||
if (error)
|
||||
return error;
|
||||
blkno = bp->b_bn;
|
||||
blkno = xfs_buf_daddr(bp);
|
||||
|
||||
/*
|
||||
* Invalidate the tree, even if the "tree" is only a single leaf block.
|
||||
|
@ -529,7 +529,7 @@ xfs_attr_list(
|
||||
|
||||
XFS_STATS_INC(dp->i_mount, xs_attr_list);
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(dp->i_mount))
|
||||
if (xfs_is_shutdown(dp->i_mount))
|
||||
return -EIO;
|
||||
|
||||
lock_mode = xfs_ilock_attr_map_shared(dp);
|
||||
|
@ -24,7 +24,6 @@
|
||||
#include "xfs_error.h"
|
||||
#include "xfs_log_priv.h"
|
||||
#include "xfs_log_recover.h"
|
||||
#include "xfs_quota.h"
|
||||
|
||||
kmem_zone_t *xfs_bui_zone;
|
||||
kmem_zone_t *xfs_bud_zone;
|
||||
@ -487,18 +486,10 @@ xfs_bui_item_recover(
|
||||
XFS_ATTR_FORK : XFS_DATA_FORK;
|
||||
bui_type = bmap->me_flags & XFS_BMAP_EXTENT_TYPE_MASK;
|
||||
|
||||
/* Grab the inode. */
|
||||
error = xfs_iget(mp, NULL, bmap->me_owner, 0, 0, &ip);
|
||||
error = xlog_recover_iget(mp, bmap->me_owner, &ip);
|
||||
if (error)
|
||||
return error;
|
||||
|
||||
error = xfs_qm_dqattach(ip);
|
||||
if (error)
|
||||
goto err_rele;
|
||||
|
||||
if (VFS_I(ip)->i_nlink == 0)
|
||||
xfs_iflags_set(ip, XFS_IRECOVERY);
|
||||
|
||||
/* Allocate transaction and do the work. */
|
||||
error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate,
|
||||
XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK), 0, 0, &tp);
|
||||
@ -522,6 +513,9 @@ xfs_bui_item_recover(
|
||||
error = xfs_trans_log_finish_bmap_update(tp, budp, bui_type, ip,
|
||||
whichfork, bmap->me_startoff, bmap->me_startblock,
|
||||
&count, state);
|
||||
if (error == -EFSCORRUPTED)
|
||||
XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, bmap,
|
||||
sizeof(*bmap));
|
||||
if (error)
|
||||
goto err_cancel;
|
||||
|
||||
|
@ -731,7 +731,7 @@ xfs_free_eofblocks(
|
||||
|
||||
error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate, 0, 0, 0, &tp);
|
||||
if (error) {
|
||||
ASSERT(XFS_FORCED_SHUTDOWN(mp));
|
||||
ASSERT(xfs_is_shutdown(mp));
|
||||
return error;
|
||||
}
|
||||
|
||||
@ -789,7 +789,7 @@ xfs_alloc_file_space(
|
||||
|
||||
trace_xfs_alloc_file_space(ip);
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
return -EIO;
|
||||
|
||||
error = xfs_qm_dqattach(ip);
|
||||
@ -1282,7 +1282,7 @@ xfs_swap_extents_check_format(
|
||||
* If we have to use the (expensive) rmap swap method, we can
|
||||
* handle any number of extents and any format.
|
||||
*/
|
||||
if (xfs_sb_version_hasrmapbt(&ip->i_mount->m_sb))
|
||||
if (xfs_has_rmapbt(ip->i_mount))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
@ -1516,7 +1516,7 @@ xfs_swap_extent_forks(
|
||||
* event of a crash. Set the owner change log flags now and leave the
|
||||
* bmbt scan as the last step.
|
||||
*/
|
||||
if (xfs_sb_version_has_v3inode(&ip->i_mount->m_sb)) {
|
||||
if (xfs_has_v3inodes(ip->i_mount)) {
|
||||
if (ip->i_df.if_format == XFS_DINODE_FMT_BTREE)
|
||||
(*target_log_flags) |= XFS_ILOG_DOWNER;
|
||||
if (tip->i_df.if_format == XFS_DINODE_FMT_BTREE)
|
||||
@ -1553,7 +1553,7 @@ xfs_swap_extent_forks(
|
||||
(*src_log_flags) |= XFS_ILOG_DEXT;
|
||||
break;
|
||||
case XFS_DINODE_FMT_BTREE:
|
||||
ASSERT(!xfs_sb_version_has_v3inode(&ip->i_mount->m_sb) ||
|
||||
ASSERT(!xfs_has_v3inodes(ip->i_mount) ||
|
||||
(*src_log_flags & XFS_ILOG_DOWNER));
|
||||
(*src_log_flags) |= XFS_ILOG_DBROOT;
|
||||
break;
|
||||
@ -1565,7 +1565,7 @@ xfs_swap_extent_forks(
|
||||
break;
|
||||
case XFS_DINODE_FMT_BTREE:
|
||||
(*target_log_flags) |= XFS_ILOG_DBROOT;
|
||||
ASSERT(!xfs_sb_version_has_v3inode(&ip->i_mount->m_sb) ||
|
||||
ASSERT(!xfs_has_v3inodes(ip->i_mount) ||
|
||||
(*target_log_flags & XFS_ILOG_DOWNER));
|
||||
break;
|
||||
}
|
||||
@ -1678,7 +1678,7 @@ xfs_swap_extents(
|
||||
* a block reservation because it's really just a remap operation
|
||||
* performed with log redo items!
|
||||
*/
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
|
||||
if (xfs_has_rmapbt(mp)) {
|
||||
int w = XFS_DATA_FORK;
|
||||
uint32_t ipnext = ip->i_df.if_nextents;
|
||||
uint32_t tipnext = tip->i_df.if_nextents;
|
||||
@ -1759,7 +1759,7 @@ xfs_swap_extents(
|
||||
src_log_flags = XFS_ILOG_CORE;
|
||||
target_log_flags = XFS_ILOG_CORE;
|
||||
|
||||
if (xfs_sb_version_hasrmapbt(&mp->m_sb))
|
||||
if (xfs_has_rmapbt(mp))
|
||||
error = xfs_swap_extent_rmap(&tp, ip, tip);
|
||||
else
|
||||
error = xfs_swap_extent_forks(tp, ip, tip, &src_log_flags,
|
||||
@ -1778,7 +1778,7 @@ xfs_swap_extents(
|
||||
}
|
||||
|
||||
/* Swap the cow forks. */
|
||||
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
|
||||
if (xfs_has_reflink(mp)) {
|
||||
ASSERT(!ip->i_cowfp ||
|
||||
ip->i_cowfp->if_format == XFS_DINODE_FMT_EXTENTS);
|
||||
ASSERT(!tip->i_cowfp ||
|
||||
@ -1820,7 +1820,7 @@ xfs_swap_extents(
|
||||
* If this is a synchronous mount, make sure that the
|
||||
* transaction goes to disk before returning to the user.
|
||||
*/
|
||||
if (mp->m_flags & XFS_MOUNT_WSYNC)
|
||||
if (xfs_has_wsync(mp))
|
||||
xfs_trans_set_sync(tp);
|
||||
|
||||
error = xfs_trans_commit(tp);
|
||||
|
@ -251,7 +251,7 @@ _xfs_buf_alloc(
|
||||
return error;
|
||||
}
|
||||
|
||||
bp->b_bn = map[0].bm_bn;
|
||||
bp->b_rhash_key = map[0].bm_bn;
|
||||
bp->b_length = 0;
|
||||
for (i = 0; i < nmaps; i++) {
|
||||
bp->b_maps[i].bm_bn = map[i].bm_bn;
|
||||
@ -315,7 +315,6 @@ xfs_buf_alloc_kmem(
|
||||
struct xfs_buf *bp,
|
||||
xfs_buf_flags_t flags)
|
||||
{
|
||||
int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
|
||||
xfs_km_flags_t kmflag_mask = KM_NOFS;
|
||||
size_t size = BBTOB(bp->b_length);
|
||||
|
||||
@ -323,7 +322,7 @@ xfs_buf_alloc_kmem(
|
||||
if (!(flags & XBF_READ))
|
||||
kmflag_mask |= KM_ZERO;
|
||||
|
||||
bp->b_addr = kmem_alloc_io(size, align_mask, kmflag_mask);
|
||||
bp->b_addr = kmem_alloc(size, kmflag_mask);
|
||||
if (!bp->b_addr)
|
||||
return -ENOMEM;
|
||||
|
||||
@ -460,7 +459,7 @@ _xfs_buf_obj_cmp(
|
||||
*/
|
||||
BUILD_BUG_ON(offsetof(struct xfs_buf_map, bm_bn) != 0);
|
||||
|
||||
if (bp->b_bn != map->bm_bn)
|
||||
if (bp->b_rhash_key != map->bm_bn)
|
||||
return 1;
|
||||
|
||||
if (unlikely(bp->b_length != map->bm_len)) {
|
||||
@ -482,7 +481,7 @@ static const struct rhashtable_params xfs_buf_hash_params = {
|
||||
.min_size = 32, /* empty AGs have minimal footprint */
|
||||
.nelem_hint = 16,
|
||||
.key_len = sizeof(xfs_daddr_t),
|
||||
.key_offset = offsetof(struct xfs_buf, b_bn),
|
||||
.key_offset = offsetof(struct xfs_buf, b_rhash_key),
|
||||
.head_offset = offsetof(struct xfs_buf, b_rhash_head),
|
||||
.automatic_shrinking = true,
|
||||
.obj_cmpfn = _xfs_buf_obj_cmp,
|
||||
@ -814,7 +813,7 @@ xfs_buf_read_map(
|
||||
* buffer.
|
||||
*/
|
||||
if (error) {
|
||||
if (!XFS_FORCED_SHUTDOWN(target->bt_mount))
|
||||
if (!xfs_is_shutdown(target->bt_mount))
|
||||
xfs_buf_ioerror_alert(bp, fa);
|
||||
|
||||
bp->b_flags &= ~XBF_DONE;
|
||||
@ -854,7 +853,9 @@ xfs_buf_readahead_map(
|
||||
|
||||
/*
|
||||
* Read an uncached buffer from disk. Allocates and returns a locked
|
||||
* buffer containing the disk contents or nothing.
|
||||
* buffer containing the disk contents or nothing. Uncached buffers always have
|
||||
* a cache index of XFS_BUF_DADDR_NULL so we can easily determine if the buffer
|
||||
* is cached or uncached during fault diagnosis.
|
||||
*/
|
||||
int
|
||||
xfs_buf_read_uncached(
|
||||
@ -876,7 +877,7 @@ xfs_buf_read_uncached(
|
||||
|
||||
/* set up the buffer for a read IO */
|
||||
ASSERT(bp->b_map_count == 1);
|
||||
bp->b_bn = XFS_BUF_DADDR_NULL; /* always null for uncached buffers */
|
||||
bp->b_rhash_key = XFS_BUF_DADDR_NULL;
|
||||
bp->b_maps[0].bm_bn = daddr;
|
||||
bp->b_flags |= XBF_READ;
|
||||
bp->b_ops = ops;
|
||||
@ -1145,7 +1146,7 @@ xfs_buf_ioerror_permanent(
|
||||
return true;
|
||||
|
||||
/* At unmount we may treat errors differently */
|
||||
if ((mp->m_flags & XFS_MOUNT_UNMOUNTING) && mp->m_fail_unmount)
|
||||
if (xfs_is_unmounting(mp) && mp->m_fail_unmount)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
@ -1179,7 +1180,7 @@ xfs_buf_ioend_handle_error(
|
||||
* If we've already decided to shutdown the filesystem because of I/O
|
||||
* errors, there's no point in giving this a retry.
|
||||
*/
|
||||
if (XFS_FORCED_SHUTDOWN(mp))
|
||||
if (xfs_is_shutdown(mp))
|
||||
goto out_stale;
|
||||
|
||||
xfs_buf_ioerror_alert_ratelimited(bp);
|
||||
@ -1336,7 +1337,7 @@ xfs_buf_ioerror_alert(
|
||||
{
|
||||
xfs_buf_alert_ratelimited(bp, "XFS: metadata IO error",
|
||||
"metadata I/O error in \"%pS\" at daddr 0x%llx len %d error %d",
|
||||
func, (uint64_t)XFS_BUF_ADDR(bp),
|
||||
func, (uint64_t)xfs_buf_daddr(bp),
|
||||
bp->b_length, -bp->b_error);
|
||||
}
|
||||
|
||||
@ -1514,17 +1515,18 @@ _xfs_buf_ioapply(
|
||||
SHUTDOWN_CORRUPT_INCORE);
|
||||
return;
|
||||
}
|
||||
} else if (bp->b_bn != XFS_BUF_DADDR_NULL) {
|
||||
} else if (bp->b_rhash_key != XFS_BUF_DADDR_NULL) {
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
|
||||
/*
|
||||
* non-crc filesystems don't attach verifiers during
|
||||
* log recovery, so don't warn for such filesystems.
|
||||
*/
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
xfs_warn(mp,
|
||||
"%s: no buf ops on daddr 0x%llx len %d",
|
||||
__func__, bp->b_bn, bp->b_length);
|
||||
__func__, xfs_buf_daddr(bp),
|
||||
bp->b_length);
|
||||
xfs_hex_dump(bp->b_addr,
|
||||
XFS_CORRUPTION_DUMP_LEN);
|
||||
dump_stack();
|
||||
@ -1592,7 +1594,7 @@ __xfs_buf_submit(
|
||||
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
|
||||
|
||||
/* on shutdown we stale and complete the buffer immediately */
|
||||
if (XFS_FORCED_SHUTDOWN(bp->b_mount)) {
|
||||
if (xfs_is_shutdown(bp->b_mount)) {
|
||||
xfs_buf_ioend_fail(bp);
|
||||
return -EIO;
|
||||
}
|
||||
@ -1794,7 +1796,7 @@ xfs_buftarg_drain(
|
||||
xfs_buf_alert_ratelimited(bp,
|
||||
"XFS: Corruption Alert",
|
||||
"Corruption Alert: Buffer at daddr 0x%llx had permanent write failures!",
|
||||
(long long)bp->b_bn);
|
||||
(long long)xfs_buf_daddr(bp));
|
||||
}
|
||||
xfs_buf_rele(bp);
|
||||
}
|
||||
@ -1809,7 +1811,7 @@ xfs_buftarg_drain(
|
||||
* down the fs.
|
||||
*/
|
||||
if (write_fail) {
|
||||
ASSERT(XFS_FORCED_SHUTDOWN(btp->bt_mount));
|
||||
ASSERT(xfs_is_shutdown(btp->bt_mount));
|
||||
xfs_alert(btp->bt_mount,
|
||||
"Please run xfs_repair to determine the extent of the problem.");
|
||||
}
|
||||
@ -2302,7 +2304,7 @@ xfs_verify_magic(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
int idx;
|
||||
|
||||
idx = xfs_sb_version_hascrc(&mp->m_sb);
|
||||
idx = xfs_has_crc(mp);
|
||||
if (WARN_ON(!bp->b_ops || !bp->b_ops->magic[idx]))
|
||||
return false;
|
||||
return dmagic == bp->b_ops->magic[idx];
|
||||
@ -2320,7 +2322,7 @@ xfs_verify_magic16(
|
||||
struct xfs_mount *mp = bp->b_mount;
|
||||
int idx;
|
||||
|
||||
idx = xfs_sb_version_hascrc(&mp->m_sb);
|
||||
idx = xfs_has_crc(mp);
|
||||
if (WARN_ON(!bp->b_ops || !bp->b_ops->magic16[idx]))
|
||||
return false;
|
||||
return dmagic == bp->b_ops->magic16[idx];
|
||||
|
@ -133,7 +133,8 @@ struct xfs_buf {
|
||||
* fast-path on locking.
|
||||
*/
|
||||
struct rhash_head b_rhash_head; /* pag buffer hash node */
|
||||
xfs_daddr_t b_bn; /* block number of buffer */
|
||||
|
||||
xfs_daddr_t b_rhash_key; /* buffer cache index */
|
||||
int b_length; /* size of buffer in BBs */
|
||||
atomic_t b_hold; /* reference count */
|
||||
atomic_t b_lru_ref; /* lru reclaim ref count */
|
||||
@ -296,18 +297,10 @@ extern int xfs_buf_delwri_pushbuf(struct xfs_buf *, struct list_head *);
|
||||
extern int xfs_buf_init(void);
|
||||
extern void xfs_buf_terminate(void);
|
||||
|
||||
/*
|
||||
* These macros use the IO block map rather than b_bn. b_bn is now really
|
||||
* just for the buffer cache index for cached buffers. As IO does not use b_bn
|
||||
* anymore, uncached buffers do not use b_bn at all and hence must modify the IO
|
||||
* map directly. Uncached buffers are not allowed to be discontiguous, so this
|
||||
* is safe to do.
|
||||
*
|
||||
* In future, uncached buffers will pass the block number directly to the io
|
||||
* request function and hence these macros will go away at that point.
|
||||
*/
|
||||
#define XFS_BUF_ADDR(bp) ((bp)->b_maps[0].bm_bn)
|
||||
#define XFS_BUF_SET_ADDR(bp, bno) ((bp)->b_maps[0].bm_bn = (xfs_daddr_t)(bno))
|
||||
static inline xfs_daddr_t xfs_buf_daddr(struct xfs_buf *bp)
|
||||
{
|
||||
return bp->b_maps[0].bm_bn;
|
||||
}
|
||||
|
||||
void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref);
|
||||
|
||||
@ -355,12 +348,6 @@ extern int xfs_setsize_buftarg(struct xfs_buftarg *, unsigned int);
|
||||
#define xfs_getsize_buftarg(buftarg) block_size((buftarg)->bt_bdev)
|
||||
#define xfs_readonly_buftarg(buftarg) bdev_read_only((buftarg)->bt_bdev)
|
||||
|
||||
static inline int
|
||||
xfs_buftarg_dma_alignment(struct xfs_buftarg *bt)
|
||||
{
|
||||
return queue_dma_alignment(bt->bt_bdev->bd_disk->queue);
|
||||
}
|
||||
|
||||
int xfs_buf_reverify(struct xfs_buf *bp, const struct xfs_buf_ops *ops);
|
||||
bool xfs_verify_magic(struct xfs_buf *bp, __be32 dmagic);
|
||||
bool xfs_verify_magic16(struct xfs_buf *bp, __be16 dmagic);
|
||||
|
@ -428,7 +428,7 @@ xfs_buf_item_format(
|
||||
* occurs during recovery.
|
||||
*/
|
||||
if (bip->bli_flags & XFS_BLI_INODE_BUF) {
|
||||
if (xfs_sb_version_has_v3inode(&lip->li_mountp->m_sb) ||
|
||||
if (xfs_has_v3inodes(lip->li_mountp) ||
|
||||
!((bip->bli_flags & XFS_BLI_INODE_ALLOC_BUF) &&
|
||||
xfs_log_item_in_current_chkpt(lip)))
|
||||
bip->__bli_format.blf_flags |= XFS_BLF_INODE_BUF;
|
||||
@ -581,7 +581,7 @@ xfs_buf_item_push(
|
||||
if (bp->b_flags & XBF_WRITE_FAIL) {
|
||||
xfs_buf_alert_ratelimited(bp, "XFS: Failing async write",
|
||||
"Failing async write on buffer block 0x%llx. Retrying async write.",
|
||||
(long long)bp->b_bn);
|
||||
(long long)xfs_buf_daddr(bp));
|
||||
}
|
||||
|
||||
if (!xfs_buf_delwri_queue(bp, buffer_list))
|
||||
@ -616,7 +616,7 @@ xfs_buf_item_put(
|
||||
* that case, the bli is freed on buffer writeback completion.
|
||||
*/
|
||||
aborted = test_bit(XFS_LI_ABORTED, &lip->li_flags) ||
|
||||
XFS_FORCED_SHUTDOWN(lip->li_mountp);
|
||||
xfs_is_shutdown(lip->li_mountp);
|
||||
dirty = bip->bli_flags & XFS_BLI_DIRTY;
|
||||
if (dirty && !aborted)
|
||||
return false;
|
||||
|
@ -219,7 +219,7 @@ xlog_recover_validate_buf_type(
|
||||
* inconsistent state resulting in verification failures. Hence for now
|
||||
* just avoid the verification stage for non-crc filesystems
|
||||
*/
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
return;
|
||||
|
||||
magic32 = be32_to_cpu(*(__be32 *)bp->b_addr);
|
||||
@ -497,7 +497,7 @@ xlog_recover_do_reg_buffer(
|
||||
if (fa) {
|
||||
xfs_alert(mp,
|
||||
"dquot corrupt at %pS trying to replay into block 0x%llx",
|
||||
fa, bp->b_bn);
|
||||
fa, xfs_buf_daddr(bp));
|
||||
goto next;
|
||||
}
|
||||
}
|
||||
@ -597,7 +597,7 @@ xlog_recover_do_inode_buffer(
|
||||
* Post recovery validation only works properly on CRC enabled
|
||||
* filesystems.
|
||||
*/
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (xfs_has_crc(mp))
|
||||
bp->b_ops = &xfs_inode_buf_ops;
|
||||
|
||||
inodes_per_buf = BBTOB(bp->b_length) >> mp->m_sb.sb_inodelog;
|
||||
@ -710,7 +710,7 @@ xlog_recover_get_buf_lsn(
|
||||
uint16_t blft;
|
||||
|
||||
/* v4 filesystems always recover immediately */
|
||||
if (!xfs_sb_version_hascrc(&mp->m_sb))
|
||||
if (!xfs_has_crc(mp))
|
||||
goto recover_immediately;
|
||||
|
||||
/*
|
||||
@ -787,7 +787,7 @@ xlog_recover_get_buf_lsn(
|
||||
* the relevant UUID in the superblock.
|
||||
*/
|
||||
lsn = be64_to_cpu(((struct xfs_dsb *)blk)->sb_lsn);
|
||||
if (xfs_sb_version_hasmetauuid(&mp->m_sb))
|
||||
if (xfs_has_metauuid(mp))
|
||||
uuid = &((struct xfs_dsb *)blk)->sb_meta_uuid;
|
||||
else
|
||||
uuid = &((struct xfs_dsb *)blk)->sb_uuid;
|
||||
|
@ -32,7 +32,7 @@ xfs_dir3_get_dtype(
|
||||
struct xfs_mount *mp,
|
||||
uint8_t filetype)
|
||||
{
|
||||
if (!xfs_sb_version_hasftype(&mp->m_sb))
|
||||
if (!xfs_has_ftype(mp))
|
||||
return DT_UNKNOWN;
|
||||
|
||||
if (filetype >= XFS_DIR3_FT_MAX)
|
||||
@ -512,7 +512,7 @@ xfs_readdir(
|
||||
|
||||
trace_xfs_readdir(dp);
|
||||
|
||||
if (XFS_FORCED_SHUTDOWN(dp->i_mount))
|
||||
if (xfs_is_shutdown(dp->i_mount))
|
||||
return -EIO;
|
||||
|
||||
ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
|
||||
|
@ -169,7 +169,7 @@ xfs_ioc_trim(
|
||||
* We haven't recovered the log, so we cannot use our bnobt-guided
|
||||
* storage zapping commands.
|
||||
*/
|
||||
if (mp->m_flags & XFS_MOUNT_NORECOVERY)
|
||||
if (xfs_has_norecovery(mp))
|
||||
return -EROFS;
|
||||
|
||||
if (copy_from_user(&range, urange, sizeof(range)))
|
||||
|
@ -223,9 +223,9 @@ xfs_qm_init_dquot_blk(
|
||||
d->dd_diskdq.d_version = XFS_DQUOT_VERSION;
|
||||
d->dd_diskdq.d_id = cpu_to_be32(curid);
|
||||
d->dd_diskdq.d_type = type;
|
||||
if (curid > 0 && xfs_sb_version_hasbigtime(&mp->m_sb))
|
||||
if (curid > 0 && xfs_has_bigtime(mp))
|
||||
d->dd_diskdq.d_type |= XFS_DQTYPE_BIGTIME;
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
uuid_copy(&d->dd_uuid, &mp->m_sb.sb_meta_uuid);
|
||||
xfs_update_cksum((char *)d, sizeof(struct xfs_dqblk),
|
||||
XFS_DQUOT_CRC_OFF);
|
||||
@ -526,7 +526,7 @@ xfs_dquot_check_type(
|
||||
* expect an exact match for user dquots and for non-root group and
|
||||
* project dquots.
|
||||
*/
|
||||
if (xfs_sb_version_hascrc(&dqp->q_mount->m_sb) ||
|
||||
if (xfs_has_crc(dqp->q_mount) ||
|
||||
dqp_type == XFS_DQTYPE_USER || dqp->q_id != 0)
|
||||
return ddqp_type == dqp_type;
|
||||
|
||||
@ -847,9 +847,6 @@ xfs_qm_dqget_checks(
|
||||
struct xfs_mount *mp,
|
||||
xfs_dqtype_t type)
|
||||
{
|
||||
if (WARN_ON_ONCE(!XFS_IS_QUOTA_RUNNING(mp)))
|
||||
return -ESRCH;
|
||||
|
||||
switch (type) {
|
||||
case XFS_DQTYPE_USER:
|
||||
if (!XFS_IS_UQUOTA_ON(mp))
|
||||
@ -1222,7 +1219,7 @@ xfs_qm_dqflush_check(
|
||||
|
||||
/* bigtime flag should never be set on root dquots */
|
||||
if (dqp->q_type & XFS_DQTYPE_BIGTIME) {
|
||||
if (!xfs_sb_version_hasbigtime(&dqp->q_mount->m_sb))
|
||||
if (!xfs_has_bigtime(dqp->q_mount))
|
||||
return __this_address;
|
||||
if (dqp->q_id == 0)
|
||||
return __this_address;
|
||||
@ -1301,7 +1298,7 @@ xfs_qm_dqflush(
|
||||
* buffer always has a valid CRC. This ensures there is no possibility
|
||||
* of a dquot without an up-to-date CRC getting to disk.
|
||||
*/
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
dqblk->dd_lsn = cpu_to_be64(dqp->q_logitem.qli_item.li_lsn);
|
||||
xfs_update_cksum((char *)dqblk, sizeof(struct xfs_dqblk),
|
||||
XFS_DQUOT_CRC_OFF);
|
||||
|
@ -54,6 +54,16 @@ struct xfs_dquot_res {
|
||||
xfs_qwarncnt_t warnings;
|
||||
};
|
||||
|
||||
static inline bool
|
||||
xfs_dquot_res_over_limits(
|
||||
const struct xfs_dquot_res *qres)
|
||||
{
|
||||
if ((qres->softlimit && qres->softlimit < qres->reserved) ||
|
||||
(qres->hardlimit && qres->hardlimit < qres->reserved))
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* The incore dquot structure
|
||||
*/
|
||||
|
@ -218,137 +218,3 @@ xfs_qm_dquot_logitem_init(
|
||||
&xfs_dquot_item_ops);
|
||||
lp->qli_dquot = dqp;
|
||||
}
|
||||
|
||||
/*------------------ QUOTAOFF LOG ITEMS -------------------*/
|
||||
|
||||
static inline struct xfs_qoff_logitem *QOFF_ITEM(struct xfs_log_item *lip)
|
||||
{
|
||||
return container_of(lip, struct xfs_qoff_logitem, qql_item);
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* This returns the number of iovecs needed to log the given quotaoff item.
|
||||
* We only need 1 iovec for an quotaoff item. It just logs the
|
||||
* quotaoff_log_format structure.
|
||||
*/
|
||||
STATIC void
|
||||
xfs_qm_qoff_logitem_size(
|
||||
struct xfs_log_item *lip,
|
||||
int *nvecs,
|
||||
int *nbytes)
|
||||
{
|
||||
*nvecs += 1;
|
||||
*nbytes += sizeof(struct xfs_qoff_logitem);
|
||||
}
|
||||
|
||||
STATIC void
|
||||
xfs_qm_qoff_logitem_format(
|
||||
struct xfs_log_item *lip,
|
||||
struct xfs_log_vec *lv)
|
||||
{
|
||||
struct xfs_qoff_logitem *qflip = QOFF_ITEM(lip);
|
||||
struct xfs_log_iovec *vecp = NULL;
|
||||
struct xfs_qoff_logformat *qlf;
|
||||
|
||||
qlf = xlog_prepare_iovec(lv, &vecp, XLOG_REG_TYPE_QUOTAOFF);
|
||||
qlf->qf_type = XFS_LI_QUOTAOFF;
|
||||
qlf->qf_size = 1;
|
||||
qlf->qf_flags = qflip->qql_flags;
|
||||
xlog_finish_iovec(lv, vecp, sizeof(struct xfs_qoff_logitem));
|
||||
}
|
||||
|
||||
/*
|
||||
* There isn't much you can do to push a quotaoff item. It is simply
|
||||
* stuck waiting for the log to be flushed to disk.
|
||||
*/
|
||||
STATIC uint
|
||||
xfs_qm_qoff_logitem_push(
|
||||
struct xfs_log_item *lip,
|
||||
struct list_head *buffer_list)
|
||||
{
|
||||
return XFS_ITEM_LOCKED;
|
||||
}
|
||||
|
||||
STATIC xfs_lsn_t
|
||||
xfs_qm_qoffend_logitem_committed(
|
||||
struct xfs_log_item *lip,
|
||||
xfs_lsn_t lsn)
|
||||
{
|
||||
struct xfs_qoff_logitem *qfe = QOFF_ITEM(lip);
|
||||
struct xfs_qoff_logitem *qfs = qfe->qql_start_lip;
|
||||
|
||||
xfs_qm_qoff_logitem_relse(qfs);
|
||||
|
||||
kmem_free(lip->li_lv_shadow);
|
||||
kmem_free(qfe);
|
||||
return (xfs_lsn_t)-1;
|
||||
}
|
||||
|
||||
STATIC void
|
||||
xfs_qm_qoff_logitem_release(
|
||||
struct xfs_log_item *lip)
|
||||
{
|
||||
struct xfs_qoff_logitem *qoff = QOFF_ITEM(lip);
|
||||
|
||||
if (test_bit(XFS_LI_ABORTED, &lip->li_flags)) {
|
||||
if (qoff->qql_start_lip)
|
||||
xfs_qm_qoff_logitem_relse(qoff->qql_start_lip);
|
||||
xfs_qm_qoff_logitem_relse(qoff);
|
||||
}
|
||||
}
|
||||
|
||||
static const struct xfs_item_ops xfs_qm_qoffend_logitem_ops = {
|
||||
.iop_size = xfs_qm_qoff_logitem_size,
|
||||
.iop_format = xfs_qm_qoff_logitem_format,
|
||||
.iop_committed = xfs_qm_qoffend_logitem_committed,
|
||||
.iop_push = xfs_qm_qoff_logitem_push,
|
||||
.iop_release = xfs_qm_qoff_logitem_release,
|
||||
};
|
||||
|
||||
static const struct xfs_item_ops xfs_qm_qoff_logitem_ops = {
|
||||
.iop_size = xfs_qm_qoff_logitem_size,
|
||||
.iop_format = xfs_qm_qoff_logitem_format,
|
||||
.iop_push = xfs_qm_qoff_logitem_push,
|
||||
.iop_release = xfs_qm_qoff_logitem_release,
|
||||
};
|
||||
|
||||
/*
|
||||
* Delete the quotaoff intent from the AIL and free it. On success,
|
||||
* this should only be called for the start item. It can be used for
|
||||
* either on shutdown or abort.
|
||||
*/
|
||||
void
|
||||
xfs_qm_qoff_logitem_relse(
|
||||
struct xfs_qoff_logitem *qoff)
|
||||
{
|
||||
struct xfs_log_item *lip = &qoff->qql_item;
|
||||
|
||||
ASSERT(test_bit(XFS_LI_IN_AIL, &lip->li_flags) ||
|
||||
test_bit(XFS_LI_ABORTED, &lip->li_flags) ||
|
||||
XFS_FORCED_SHUTDOWN(lip->li_mountp));
|
||||
xfs_trans_ail_delete(lip, 0);
|
||||
kmem_free(lip->li_lv_shadow);
|
||||
kmem_free(qoff);
|
||||
}
|
||||
|
||||
/*
|
||||
* Allocate and initialize an quotaoff item of the correct quota type(s).
|
||||
*/
|
||||
struct xfs_qoff_logitem *
|
||||
xfs_qm_qoff_logitem_init(
|
||||
struct xfs_mount *mp,
|
||||
struct xfs_qoff_logitem *start,
|
||||
uint flags)
|
||||
{
|
||||
struct xfs_qoff_logitem *qf;
|
||||
|
||||
qf = kmem_zalloc(sizeof(struct xfs_qoff_logitem), 0);
|
||||
|
||||
xfs_log_item_init(mp, &qf->qql_item, XFS_LI_QUOTAOFF, start ?
|
||||
&xfs_qm_qoffend_logitem_ops : &xfs_qm_qoff_logitem_ops);
|
||||
qf->qql_item.li_mountp = mp;
|
||||
qf->qql_start_lip = start;
|
||||
qf->qql_flags = flags;
|
||||
return qf;
|
||||
}
|
||||
|
@ -9,7 +9,6 @@
|
||||
struct xfs_dquot;
|
||||
struct xfs_trans;
|
||||
struct xfs_mount;
|
||||
struct xfs_qoff_logitem;
|
||||
|
||||
struct xfs_dq_logitem {
|
||||
struct xfs_log_item qli_item; /* common portion */
|
||||
@ -17,22 +16,6 @@ struct xfs_dq_logitem {
|
||||
xfs_lsn_t qli_flush_lsn; /* lsn at last flush */
|
||||
};
|
||||
|
||||
struct xfs_qoff_logitem {
|
||||
struct xfs_log_item qql_item; /* common portion */
|
||||
struct xfs_qoff_logitem *qql_start_lip; /* qoff-start logitem, if any */
|
||||
unsigned int qql_flags;
|
||||
};
|
||||
|
||||
|
||||
void xfs_qm_dquot_logitem_init(struct xfs_dquot *dqp);
|
||||
struct xfs_qoff_logitem *xfs_qm_qoff_logitem_init(struct xfs_mount *mp,
|
||||
struct xfs_qoff_logitem *start,
|
||||
uint flags);
|
||||
void xfs_qm_qoff_logitem_relse(struct xfs_qoff_logitem *);
|
||||
struct xfs_qoff_logitem *xfs_trans_get_qoff_item(struct xfs_trans *tp,
|
||||
struct xfs_qoff_logitem *startqoff,
|
||||
uint flags);
|
||||
void xfs_trans_log_quotaoff_item(struct xfs_trans *tp,
|
||||
struct xfs_qoff_logitem *qlp);
|
||||
|
||||
#endif /* __XFS_DQUOT_ITEM_H__ */
|
||||
|
@ -136,7 +136,7 @@ xlog_recover_dquot_commit_pass2(
|
||||
* If the dquot has an LSN in it, recover the dquot only if it's less
|
||||
* than the lsn of the transaction we are replaying.
|
||||
*/
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
struct xfs_dqblk *dqb = (struct xfs_dqblk *)ddq;
|
||||
xfs_lsn_t lsn = be64_to_cpu(dqb->dd_lsn);
|
||||
|
||||
@ -146,7 +146,7 @@ xlog_recover_dquot_commit_pass2(
|
||||
}
|
||||
|
||||
memcpy(ddq, recddq, item->ri_buf[1].i_len);
|
||||
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
||||
if (xfs_has_crc(mp)) {
|
||||
xfs_update_cksum((char *)ddq, sizeof(struct xfs_dqblk),
|
||||
XFS_DQUOT_CRC_OFF);
|
||||
}
|
||||
|
@ -371,7 +371,7 @@ xfs_buf_corruption_error(
|
||||
|
||||
xfs_alert_tag(mp, XFS_PTAG_VERIFIER_ERROR,
|
||||
"Metadata corruption detected at %pS, %s block 0x%llx",
|
||||
fa, bp->b_ops->name, bp->b_bn);
|
||||
fa, bp->b_ops->name, xfs_buf_daddr(bp));
|
||||
|
||||
xfs_alert(mp, "Unmount and run xfs_repair");
|
||||
|
||||
@ -402,7 +402,7 @@ xfs_buf_verifier_error(
|
||||
xfs_alert_tag(mp, XFS_PTAG_VERIFIER_ERROR,
|
||||
"Metadata %s detected at %pS, %s block 0x%llx %s",
|
||||
bp->b_error == -EFSBADCRC ? "CRC error" : "corruption",
|
||||
fa, bp->b_ops->name, bp->b_bn, name);
|
||||
fa, bp->b_ops->name, xfs_buf_daddr(bp), name);
|
||||
|
||||
xfs_alert(mp, "Unmount and run xfs_repair");
|
||||
|
||||
|
@ -75,4 +75,16 @@ extern int xfs_errortag_clearall(struct xfs_mount *mp);
|
||||
#define XFS_PTAG_FSBLOCK_ZERO 0x00000080
|
||||
#define XFS_PTAG_VERIFIER_ERROR 0x00000100
|
||||
|
||||
#define XFS_PTAG_STRINGS \
|
||||
{ XFS_NO_PTAG, "none" }, \
|
||||
{ XFS_PTAG_IFLUSH, "iflush" }, \
|
||||
{ XFS_PTAG_LOGRES, "logres" }, \
|
||||
{ XFS_PTAG_AILDELETE, "aildelete" }, \
|
||||
{ XFS_PTAG_ERROR_REPORT , "error_report" }, \
|
||||
{ XFS_PTAG_SHUTDOWN_CORRUPT, "corrupt" }, \
|
||||
{ XFS_PTAG_SHUTDOWN_IOERROR, "ioerror" }, \
|
||||
{ XFS_PTAG_SHUTDOWN_LOGERROR, "logerror" }, \
|
||||
{ XFS_PTAG_FSBLOCK_ZERO, "fsb_zero" }, \
|
||||
{ XFS_PTAG_VERIFIER_ERROR, "verifier" }
|
||||
|
||||
#endif /* __XFS_ERROR_H__ */
|
||||
|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue
Block a user