Use the natural objectid, type, offset order as it's more readable and
we're used to reading keys like that.
Signed-off-by: David Sterba <dsterba@suse.com>
Use a local copy of the search header for properly aligned access instead
of the unaligned helpers, and move the definitions to the closest scope.
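As a minimal sketch of the pattern (the function name is made up for illustration), the header is copied out of the ioctl result buffer so its fields can be read with natural alignment:

    #include <string.h>
    #include <linux/btrfs.h>

    static void walk_search_results(struct btrfs_ioctl_search_args *args)
    {
        struct btrfs_ioctl_search_header sh;
        unsigned long off = 0;
        unsigned int i;

        for (i = 0; i < args->key.nr_items; i++) {
            /* Local, aligned copy of the packed header in the buffer. */
            memcpy(&sh, args->buf + off, sizeof(sh));
            off += sizeof(sh) + sh.len;
            /* sh.objectid, sh.type and sh.offset are now safe to read directly. */
        }
    }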
Signed-off-by: David Sterba <dsterba@suse.com>
Use the tree search ioctl wrappers for code that is considered internal,
i.e. leaving out libbtrfs (legacy) and libbtrfsutil (needs its own API for
that). The conversion is mostly a direct use of what the API provides.
Signed-off-by: David Sterba <dsterba@suse.com>
For historical reasons the helpers [btrfs_]open_dir... also return
the 'DIR *dirstream' value when a directory is opened.
Replace btrfs_open_dir() with btrfs_open_dir_fd(), removing
any reference to the unused/useless dirstream variables.
Calling btrfs_open_dir_fd() with only the path is equivalent to
btrfs_open_dir(_, _, 1).
Signed-off-by: Goffredo Baroncelli <kreijack@libero.it>
Signed-off-by: David Sterba <dsterba@suse.com>
In case of a raid5/6 filesystem, 'btrfs fi us' returns wrong values
without root capabilities:
$ sudo btrfs fi us /tmp/raid5fs # as root
Overall:
Device size: 3.00GiB
Device allocated: 1.51GiB <--- OK
Device unallocated: 1.49GiB <--- OK
Device missing: 0.00B
Device slack: 0.00B
Used: 769.03MiB <--- OK
Free (estimated): 1.32GiB (min: 1.32GiB) <-OK
Free (statfs, df): 1.32GiB
Data ratio: 1.50 <--- OK
Metadata ratio: 1.50 <--- OK
Global reserve: 5.50MiB (used: 0.00B)
Multiple profiles: no
[...]
$ btrfs fi us /tmp/raid5fs # as user
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
Overall:
Device size: 3.00GiB
Device allocated: 0.00B <--- WRONG
Device unallocated: 3.00GiB <--- WRONG
Device missing: 0.00B
Device slack: 0.00B
Used: 0.00B <--- WRONG
Free (estimated): 0.00B (min: 8.00EiB) <- WRONG
Free (statfs, df): 1.32GiB
Data ratio: 0.00 <--- WRONG
Metadata ratio: 0.00 <--- WRONG
Global reserve: 5.50MiB (used: 0.00B)
Multiple profiles: no
[...]
The reason is that the BTRFS_IOC_SPACE_INFO ioctl doesn't return enough
information. To work around that, a scan of the chunks is required when a
raid5/6 profile is present.
To avoid providing wrong information, in case of a raid5/6 filesystem
without root capabilities "btrfs fi us" is not executed at all
and a warning suggesting to run it as root is printed.
$ ./btrfs fi us /tmp/t/
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
WARNING: due to the presence of a raid5/raid6 profile, we cannot compute some values;
WARNING: run as root instead.
Signed-off-by: Goffredo Baroncelli <kreijack@libero.it>
Signed-off-by: David Sterba <dsterba@suse.com>
If "btrfs dev us" is invoked by a not root user, it is impossible to
collect the chunk info data (not enough privileges). This causes
"btrfs dev us" to print as "Unallocated" value the size of the disk.
This patch handles the case where print_device_chunks() is invoked
without the chunk info data, printing "Unallocated N/A":
Before the patch:
$ btrfs dev us t/
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
/dev/loop0, ID: 1
Device size: 5.00GiB
Device slack: 0.00B
Unallocated: 5.00GiB <-- Wrong
$ sudo btrfs dev us t/
[sudo] password for ghigo:
/dev/loop0, ID: 1
Device size: 5.00GiB
Device slack: 0.00B
Data,single: 8.00MiB
Metadata,DUP: 512.00MiB
System,DUP: 16.00MiB
Unallocated: 4.48GiB <-- Correct
After the patch:
$ ./btrfs dev us /tmp/t/
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
/dev/loop0, ID: 1
Device size: 5.00GiB
Device slack: 0.00B
Unallocated: N/A
$ sudo ./btrfs dev us /tmp/t/
[sudo] password for ghigo:
/dev/loop0, ID: 1
Device size: 5.00GiB
Device slack: 0.00B
Data,single: 8.00MiB
Metadata,DUP: 512.00MiB
System,DUP: 16.00MiB
Unallocated: 4.48GiB
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Goffredo Baroncelli <kreijack@libero.it>
Signed-off-by: David Sterba <dsterba@suse.com>
The statfs(2) syscall is deprecated by LSB in favor of statvfs(2),
however we can't replace all uses because we still need
statfs::f_type to determine the filesystem by magic number.
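Just for illustration (not the exact btrfs-progs code), the magic number check that keeps statfs() around looks like:

    #include <sys/statfs.h>
    #include <linux/magic.h>

    static int path_is_btrfs(const char *path)
    {
        struct statfs sfs;

        /* statvfs() has no f_type member, so statfs() is still needed here. */
        if (statfs(path, &sfs) < 0)
            return -1;
        return sfs.f_type == BTRFS_SUPER_MAGIC;
    }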
Signed-off-by: David Sterba <dsterba@suse.com>
Due to refactoring in 88c25674c7 ("btrfs-progs: convert device info
to struct array") the variable tracking the number of devices was not
updated, which led to an error.
$ btrfs device usage /path
ERROR: unexpected number of devices: 0 != 1
...
Issue: #697
Signed-off-by: David Sterba <dsterba@suse.com>
There is quite a lot of variable shadowing in btrfs-progs, most of it
just reusing common names like tmp.
Those cases are quite safe and the shadowed variables are even of a different type.
But there are some exceptions:
- @end in traverse_tree_blocks()
There is already an @end with the same type, but a different meaning
(the end of the current extent buffer passed in).
Just rename it to @child_end.
- @start in generate_new_data_csums_range()
Just rename it to @csum_start.
- @size in fixup_chunk_tree_block()
This one is particularly bad: we declare a local @size and initialize
it to -1, then before we really utilize it we immediately reset it to 0
and pass it to logical_to_physical().
Then there is a check whether @size is -1, which will always be true.
According to the code in logical_to_physical(), @size would be clamped
down by its original value, thus our local @size will always be 0.
Rename the local @size to @found_size, and only set it to -1.
The call site only passes it because logical_to_physical()
requires a non-NULL pointer; we don't really need to care about the
returned value.
- duplicated @ref declaration in run_delayed_tree_ref()
- duplicated @super_flags in change_meta_csums()
Just delete the duplicated one.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
I was seeing test-cli/016 failures because it claimed we were getting
EPERM from the TREE_SEARCH ioctl to get the chunk info out of the file
system. This turned out to be because errno was already set going into
this function; the ioctl itself wasn't actually failing. Fix this by
checking the return value of the ioctl first, and only then returning
-EPERM if appropriate. This fixed the failures in my setup.
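A sketch of the corrected pattern (the function name is illustrative): failure is decided from the ioctl return value and errno is consulted only after that, so a stale errno can no longer be misreported:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    static int tree_search_chunks(int fd, struct btrfs_ioctl_search_args *args)
    {
        int ret;

        ret = ioctl(fd, BTRFS_IOC_TREE_SEARCH, args);
        if (ret < 0)
            /* Only now is errno meaningful for this call. */
            return errno == EPERM ? -EPERM : -errno;
        return 0;
    }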
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The sysfs code could use more convenience helpers, so move the current code to
its own file before adding more.
Signed-off-by: David Sterba <dsterba@suse.com>
While syncing messages.[ch] I had to back out the ASSERT() code in
kerncompat.h, which means we now rely on the kernel code for ASSERT().
In order to maintain some semblance of separation introduce UASSERT()
and use that in all the purely userspace code.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
On older kernels, the sysfs interface providing the fsid may not be present yet.
Since 32c2e57c65 ("btrfs-progs: read fsid from the sysfs in
device_is_seed"), the fallback to the previous approach to determine
the fsid has not been used anymore.
Ensure negative return values of sysfs_open_fsid_file() are handled by
falling back to dev_to_fsid() in this case.
Pull-request: #599
Signed-off-by: David Sterba <dsterba@suse.com>
The kernel commit a26d60dedf9a ("btrfs: sysfs: add devinfo/fsid to
retrieve actual fsid from the device") introduced a sysfs interface
to access the device's fsid from userspace. This is a more
reliable method to obtain the fsid compared to reading the
superblock, and it even works if the device is not present.
Additionally, this sysfs interface can be read by non-root users.
Therefore, it is recommended to utilize this new sysfs interface to
retrieve the fsid.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
load_device_info() checks if the device is a seed device by reading
superblock::fsid and comparing it with the mount fsid, and it fails
to work if the device is missing (a RAID1 seed fs). Move this part
of the code into a new helper function device_is_seed() in
preparation to make device_is_seed() work with the new sysfs
devinfo/<devid>/fsid interface.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The (unsigned long long) type casts can be dropped; printf understands
%llu for u64 and does not warn. In cases where the type is not u64, keep
the cast.
Signed-off-by: David Sterba <dsterba@suse.com>
Replace printf with the level-aware helper. There's no change for commands
that don't have the global -q/-v options; otherwise the output can be
quieted.
Signed-off-by: David Sterba <dsterba@suse.com>
Switch the remaining uses of assert() as it lacks the verbose output that
we have for ASSERT() (but is otherwise equivalent).
Signed-off-by: David Sterba <dsterba@suse.com>
The preferred order, illustrated by the example after the list:
- system headers
- standard headers
- libraries
- kernel library
- kernel shared
- common headers
- other tools
- own headers
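For illustration, a source file following this order could start like below (the exact headers are just examples from the tree):

    #include <sys/ioctl.h>            /* system headers */
    #include <stdio.h>                /* standard headers */
    #include <uuid/uuid.h>            /* libraries */
    #include "kernel-lib/list.h"      /* kernel library */
    #include "kernel-shared/ctree.h"  /* kernel shared */
    #include "common/messages.h"      /* common headers */
    #include "cmds/commands.h"        /* other tools, own headers */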
Signed-off-by: David Sterba <dsterba@suse.com>
The size reported as Unallocated in the table was different from the one
in the listing, as it was calculated differently. The values should reflect
the unallocated area available to the filesystem - not necessarily the
total size of the device. If there's such slack space, it's reported
separately.
The values in the table mean (see the sketch after the list):
- Unallocated: block device size - slack - allocated
- Total: block device size - slack
- Slack: block device size - filesystem
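Expressed as a small sketch with ad-hoc variable names (not the btrfs-progs code):

    __u64 slack = device_size - filesystem_size;
    __u64 total = device_size - slack;          /* size usable by the filesystem */
    __u64 unallocated = total - allocated;      /* what the filesystem can still allocate */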
The new columns make the table wider, but the values are deemed important
by users, and for filesystems with normal profiles the table fits under a
reasonable line width. During balance or with multiple profiles it can get
wider, but this should not be a serious problem.
Example output:
Overall:
Device size: 13.00GiB
Device allocated: 536.00MiB
Device unallocated: 12.48GiB
Device missing: 0.00B
Device slack: 1.00GiB
Used: 2.31MiB
Free (estimated): 12.48GiB (min: 6.24GiB)
Free (statfs, df): 12.48GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 3.50MiB (used: 0.00B)
Multiple profiles: no
              Data    Metadata  System
Id Path       single  DUP       DUP      Unallocated Total    Slack
-- ---------- ------- --------- -------- ----------- -------- -------
 1 /dev/loop0 8.00MiB 512.00MiB 16.00MiB     2.48GiB  3.00GiB 1.00GiB
 2 /dev/loop1       -         -        -    10.00GiB 10.00GiB       -
-- ---------- ------- --------- -------- ----------- -------- -------
   Total      8.00MiB 256.00MiB  8.00MiB    12.48GiB 13.00GiB 1.00GiB
   Used       2.00MiB 144.00KiB 16.00KiB
Issue: #508
Pull-request: #509 (partial fix)
Signed-off-by: David Sterba <dsterba@suse.com>
The current formula calculates the stripe size, however that's not what we
want in the case of RAID1/DUP profiles. In those cases, since chunks are
mirrored across devices, we want the full size of the chunk. Without this
patch the 'btrfs fi usage' output from an fs using RAID1 is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
/dev/vdc 1.00GiB
/dev/vdf 1.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
/dev/vdc 128.00MiB
/dev/vdf 128.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
/dev/vdc 4.00MiB
/dev/vdf 4.00MiB
Unallocated:
/dev/vdc 8.87GiB
/dev/vdf 8.87GiB
So a 2 gigabyte RAID1 chunk will actually take up 4 gigabytes on the
physical disks, 2 on each. In this case it is being miscalculated as taking
up 1GiB on each device.
This also leads to erroneously calculated unallocated space. The correct
output in this case is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
/dev/vdc 2.00GiB
/dev/vdf 2.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
/dev/vdc 256.00MiB
/dev/vdf 256.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
/dev/vdc 8.00MiB
/dev/vdf 8.00MiB
Unallocated:
/dev/vdc 7.74GiB
/dev/vdf 7.74GiB
Fix it by only utilising the chunk formula for profiles which are not
RAID1/DUP.
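A simplified sketch of the per-device accounting described above (illustrative, not the exact btrfs-progs function):

    #include <linux/btrfs_tree.h>

    static __u64 chunk_size_on_device(__u64 chunk_size, __u64 flags, int num_stripes)
    {
        /* Mirrored profiles store a full copy of the chunk on each device. */
        if (flags & (BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_DUP))
            return chunk_size;
        /* Striped case, simplified: parity and sub_stripes are ignored here. */
        return chunk_size / num_stripes;
    }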
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Commit 80714610f3 ("btrfs-progs: use raid table for ncopies")
slightly broke how raid ratios are calculated since the resulting
code would always reset the ratio to 1 when there was no RAID56
profile. The correct behavior is to simply set it to 0 if we have RAID56,
as the calculation is different in this case, and leave it intact
otherwise.
This bug manifests by doing all size-related calculations for the 'btrfs
filesystem usage' command as if all block groups were of type SINGLE. Fix
this by only resetting the ratio to 0 in case of RAID56.
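A sketch of the fixed logic (illustrative only; the ncopies value comes from the raid table):

    #include <linux/btrfs_tree.h>

    static int usage_ratio(__u64 flags, int table_ncopies)
    {
        /* RAID5/6 ratio depends on the device count and is computed separately. */
        if (flags & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6))
            return 0;
        /* Otherwise keep the ncopies-based ratio instead of resetting it to 1. */
        return table_ncopies;
    }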
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There are a lot of call sites where we use the following code snippet:

    u8 super_block_data[BTRFS_SUPER_INFO_SIZE];
    struct btrfs_super_block *sb;
    u64 ret;

    sb = (struct btrfs_super_block *)super_block_data;

The reason for this is that struct btrfs_super_block used to be smaller
than BTRFS_SUPER_INFO_SIZE, thus for anything with csums involved we had to
use a proper 4K buffer.
Since the recent unification of sizeof(struct btrfs_super_block), we no
longer need such a workaround and can use struct btrfs_super_block directly
for any operation.
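With the unified size, reading a superblock can be sketched as follows (BTRFS_SUPER_INFO_OFFSET, BTRFS_SUPER_INFO_SIZE and struct btrfs_super_block come from the btrfs-progs headers):

    struct btrfs_super_block sb;

    /* sizeof(sb) == BTRFS_SUPER_INFO_SIZE now, no separate 4KiB buffer needed. */
    if (pread(fd, &sb, sizeof(sb), BTRFS_SUPER_INFO_OFFSET) != sizeof(sb))
        return -EIO;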
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The profile descriptions allow us to use a single formula to calculate the
chunk size. Right now there are no profiles with both parity (raid5-like)
and sub_stripes (raid10-like), which makes it easier:
- parity stripes are subtracted from the total count
- the result is then divided by the number of sub stripes
Practically speaking, 1:1 copy profiles do not have any adjustments.
Signed-off-by: David Sterba <dsterba@suse.com>
The striped profiles covering an arbitrary number of devices are often
hardcoded, so use the new helper btrfs_bg_type_is_stripey() for that.
Signed-off-by: David Sterba <dsterba@suse.com>
There's an open-coded value of the raid table ncopies in
print_filesystem_usage_overall(), add a helper and use it.
Signed-off-by: David Sterba <dsterba@suse.com>
There is some code that uses NAME_MAX but doesn't include the header
where it is defined. This patch adds a line that includes linux/limits.h,
which defines NAME_MAX.
Issue: #386
Issue: #385
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The Used and Free values should be together, while all the device
information is in the first section.
Example:
Overall:
Device size: 128.00GiB
Device allocated: 24.00GiB
Device unallocated: 104.00GiB
Device missing: 0.00B
Device zone unusable: 5.13MiB
Device zone size: 256.00MiB
Used: 213.33MiB
Free (estimated): 111.79GiB (min: 111.79GiB)
Free (statfs, df): 111.79GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 25.58MiB (used: 16.00KiB)
Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Read the device zone size and print it in the overall overview in zoned
mode. The total unusable size is already there, so the zone size
complements it. It's read from the first device, assuming that the kernel
mandates that all devices have the same zone size.
Example:
Overall:
Device size: 128.00GiB
Device allocated: 24.00GiB
Device unallocated: 104.00GiB
Device missing: 0.00B
Used: 213.33MiB
Device zone unusable: 5.13MiB
Device zone size: 256.00MiB
Free (estimated): 111.79GiB (min: 111.79GiB)
Free (statfs, df): 111.79GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 25.58MiB (used: 16.00KiB)
Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Print the number of stripes for striped profiles in the device usage
commands. It helps to see the profiles easily. The output looks like below:
/dev/vdc, ID: 1
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID0/2: 912.62MiB
Data,RAID0/3: 912.62MiB
Metadata,RAID1: 102.38MiB
System,RAID1: 8.00MiB
Unallocated: 1.00MiB
Multiple lines can appear in case a balance conversion process was
interrupted or if there's been a new device added and new data written
to the full stripe.
Issue: #372
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Print the total zone_unusable size in the summary for 'fi usage' for a
filesystem in zoned mode. It's a sum of all the zone_unusable values
from 'fi df'. Per-device stats are not implemented and would need more
complicated calculations from raw data; the kernel does not export that
(but it could).
As of 5.12, the zone_unusable value is stored only in memory, so we'd have
to map raw block device zones to the block groups and the live extents in
the associated block groups to get the exact numbers.
Example:
# btrfs fi usage /mnt
Overall:
Device size: 2.00GiB
Device allocated: 768.00MiB
Device unallocated: 1.25GiB
Device missing: 0.00B
Device zone unusable: 320.00KiB
Used: 128.00KiB
Free (estimated): 1.50GiB (min: 1.50GiB)
Free (statfs, df): 1.50GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 3.25MiB (used: 32.00KiB)
Multiple profiles: no
Data,single: Size:256.00MiB, Used:0.00B (0.00%)
/dev/nullb0 256.00MiB
Metadata,single: Size:256.00MiB, Used:112.00KiB (0.04%)
/dev/nullb0 256.00MiB
System,single: Size:256.00MiB, Used:16.00KiB (0.01%)
/dev/nullb0 256.00MiB
Unallocated:
/dev/nullb0 1.25GiB
# btrfs fi df
Data, single: total=256.00MiB, used=0.00B, zone_unusable=0.00B
System, single: total=256.00MiB, used=16.00KiB, zone_unusable=160.00KiB
Metadata, single: total=256.00MiB, used=112.00KiB, zone_unusable=160.00KiB
GlobalReserve, single: total=3.25MiB, used=32.00KiB
Signed-off-by: David Sterba <dsterba@suse.com>
There's a group of functions related to opening a filesystem in various
modes; this can be moved to a separate file.
Signed-off-by: David Sterba <dsterba@suse.com>
Add available space information from statfs(). This can be different from
'Free (estimated)' in some cases. This patch provides more information
about filesystem usage, as shown below:
Overall:
Device size: 5.00GiB
Device allocated: 1.02GiB
Device unallocated: 3.98GiB
Device missing: 0.00B
Used: 88.00KiB
Free (estimated): 4.48GiB (min: 2.49GiB)
Free (statfs, df): 4.48GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 832.00KiB (used: 0.00B)
Multiple profiles: no
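The value itself can be obtained along these lines (a sketch, not the exact btrfs-progs code):

    #include <sys/statfs.h>

    static unsigned long long free_space_statfs(const char *path)
    {
        struct statfs sfs;

        if (statfs(path, &sfs) < 0)
            return 0;
        /* Available blocks times block size, roughly what df(1) reports. */
        return (unsigned long long)sfs.f_bavail * sfs.f_bsize;
    }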
Issue: #306
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>