In commit 3ab99d9b60, the build system was changed not to set
_FILE_OFFSET_BITS explicitly due to some weird error on mips64el.
Unfortunately, this breaks the aarch64 Debian build because libfuse
2.9.9 requires this value to be set explicitly. Restore this dumb
preprocessor symbol dependency with even more hackery as documented in
the commit.
Fixes: 3ab99d9b60 ("Remove explicit #define of _FILE_OFFSET_BITS")
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20240529181214.GA52969@frogsfrogsfrogs
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
This fixes a Lintian warning which is triggered by an arbitrary
MANROFFSEQ='' environment variable:
an.tmac:<standard input>:376: warning: tbl preprocessor failed, or it or soelim was not run; table(s) likely not rendered (TE macro called with TW register undefined)
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
So far, we have used `installable: false` to avoid collisions with the
other modules that are installed to the same path. A typical example was
<foo> and <foo>.microdroid. The latter is a modified version of the
former for the inclusion of the microdroid image. However, they both
have the same installation path (ex: system/bin) and stem (ex: foo) so
that we can reference them using the same path regardless of whether we
are in Android or microdroid.
However, the use of `installable: false` for this purpose is actually
incorrect, because `installable: false` also means, obviously, "this
module shouldn't be installed". The only reason this incorrect approach
has worked is simply that packaging modules (ex: android_filesystem)
didn't respect the property when gathering the modules.
As packaging modules are now fixed to respect `installable: false`, we
need a correct way of avoiding the collision: `no_full_install: true`.
If a module has this property set to true, it is never installed to the
full install path like out/target/product/<partition>/... It can be
installed only via packaging modules.
Bug: 338160898
Test: m
Change-Id: Idb173a7e3528c96b23f857bb3bdf5f37e698c445
From AOSP commit: 21a895548df7de83ce1e2e146e1718e5f723af7f
vendor_init needs to execute these binaries when converting partitions
to EXT4.
Test: th
Bug: 293313353
Change-Id: I1fa49c1a0f802b3c36e96112ef262bae4d5d394a
From AOSP commit: 0b54b8227815d447b52de76bb419735b21608941
It was being installed on some products when it shouldn't be.
Bug: 205632228
Test: m installclean && m with aosp/2773149
Change-Id: I7f4642ba6fa8d97f7711b6df57c4e3fd781b40fd
From AOSP commit: ecb8d2faa7411d9de228a3bd8b883ed2d5220188
The size of msg_buffer is carefully calculated so it can never
overflow, but it triggers a Coverity warning. Use snprintf instead of
sprintf to silence the Coverity warning.
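As an illustration (not the actual e2fsprogs hunk; the format string and
the prefix/reason variables are placeholders), the change is of this shape:

    #include <stdio.h>

    const char *prefix = "ext2fs", *reason = "example";  /* placeholders */
    char msg_buffer[256];  /* sized so the formatted message always fits */

    /* Before: sprintf(msg_buffer, "%s: %s", prefix, reason); */
    /* After: bound the write explicitly so Coverity can see it is safe. */
    snprintf(msg_buffer, sizeof(msg_buffer), "%s: %s", prefix, reason);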
Addresses-Coverity-Bug: 1520603
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
The problem with explicitly setting _FILE_OFFSET_BITS is that
it's not necessarily a no-op on a 64-bit platform with a 64-bit off_t.
Apparently glibc on mips64el ends up using a different structure
definition for struct stat, and this causes a compatibility problem
with libarchive. It's not needed on mips64el, since off_t is already
64 bits, but it actually causes problems.
So remove it, since we now use autoconf's AC_SYS_LARGEFILE, which
will set _FILE_OFFSET_BITS when it is necessary (such as on a 32-bit
i386 Linux platform), and will skip it when it is unnecessary.
Addresses-Debian-Bug: #1070042
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Teach configure the --without-libarchive option, which forcibly
disables use of the libarchive library.
The option --with-libarchive=direct will disable the use of dlopen,
and will link mke2fs with -larchive directly. This doesn't work when
building mke2fs.static, since -larchive has a large number of
dependencies, and even "pkgconf --libs --static libarchive" doesn't
provide all of the appropriate library dependencies. :-(
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
We explicitly decided not to reserve space for a 64-bit dtime, since
it's never displayed or exposed to userspace. The dtime field is used
as a linked list for the orphan list, and for forensic purposes when
trying to determine when an inode was deleted. So right after the
2038 epoch, a deleted inode might end up with a dtime which is zero or
smaller than the number of inodes, which will result in e2fsck
reporting a potential problem. So when we set the dtime, make sure
that the dtime won't be mistaken for an inode number.
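A minimal sketch of the idea (the helper name and the exact clamp bound
are illustrative, not the code in this patch):

    /* Sketch: make sure a 32-bit dtime that has wrapped past the 2038
     * epoch can never be small enough to look like an inode number. */
    static unsigned int safe_dtime(long long now, unsigned int inodes_count)
    {
        unsigned int dtime = (unsigned int) now;  /* low 32 bits */

        if (dtime < inodes_count)
            dtime = inodes_count;  /* assumed clamp; see the patch for details */
        return dtime;
    }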
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
When copying files to the newly created file system using "mke2fs -d",
if there are timestamps greater than the one specified by
SOURCE_DATE_EPOCH, clamp those timestamps to the SOURCE_DATE_EPOCH
timestamp.
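Roughly, the clamping amounts to the following sketch (error handling
for a malformed SOURCE_DATE_EPOCH is omitted; this is not the exact
mke2fs code):

    #include <stdlib.h>
    #include <time.h>

    /* Sketch: clamp a timestamp to SOURCE_DATE_EPOCH when it is set. */
    static time_t clamp_to_source_date_epoch(time_t t)
    {
        const char *env = getenv("SOURCE_DATE_EPOCH");
        time_t epoch;

        if (!env)
            return t;
        epoch = (time_t) strtoll(env, NULL, 10);
        return (t > epoch) ? epoch : t;
    }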
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
These changes were missed in commit ca8bc9240a ("Add post-2038
timestamp support to e2fsprogs").
Addresses-Coverity-Bug: 1531832
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Clang 17's Undefined Behaviour Sanitizer will throw run-time warnings
if a function pointer is dereferenced with a different function
signature than the one in the pointer's type --- even if the difference
is a missing const qualifier. To fix regression test failures, change
declarations of argv to use ss_argv_t instead of an inconsistently
open-coded type.
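As an illustration of the kind of mismatch UBSan flags (the typedef and
handlers below are hypothetical; ss_argv_t itself is provided by libss):

    /* A handler invoked through a function pointer must have exactly the
     * pointer's type, or UBSan complains at the call site. */
    typedef const char * const *argv_like_t;       /* stand-in for ss_argv_t */
    typedef void (*request_fn)(int argc, argv_like_t argv);

    void handler_old(int argc, char **argv);       /* mismatched type: warns */
    void handler_new(int argc, argv_like_t argv);  /* matches request_fn */
    /* request_fn fp = (request_fn) handler_old;  fp(argc, argv);  <-- UBSan */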
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Since a23b50cd ("mke2fs: warn about missing y2038 support when
formatting fresh ext4 fs"), the default inode size is 256 bytes
for all filesystems, including small and floppy, except for the
Hurd since it currently only supports 128-byte inodes.
Signed-off-by: Pascal Hambourg <pascal@plouf.fr.eu.org>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/E1rx4t4-00073d-1e@zenith
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
If archive.h is available during compilation, enable mke2fs to read a
tarball as input. Since libarchive.so.13 is opened with dlopen,
libarchive is not a hard library dependency of the resulting binary.
In comparison with feeding a directory tree to mke2fs via -d this has
the following advantages:
- no superuser privileges, nor fakeroot, nor unshared user namespaces
are needed to create filesystems with arbitrary ownership information
and special files like device nodes which otherwise require being root
- by reading a tarball from standard input, no temporary files need to
be written out first as mke2fs can be used as part of a shell pipeline
which reduces disk usage and makes the conversion independent of the
underlying file system
A round-trip from tarball to ext4 to tarball yields bit-by-bit identical
results.
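The dlopen pattern is roughly the following sketch (the wrapper name is
illustrative, and the real code resolves the full set of libarchive
symbols it needs via dlsym()):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Sketch: open libarchive at run time so that it never becomes a
     * hard library dependency of the mke2fs binary. */
    static void *libarchive_handle;

    static int load_libarchive(void)
    {
        libarchive_handle = dlopen("libarchive.so.13", RTLD_NOW);
        if (!libarchive_handle) {
            fprintf(stderr, "tarball input needs libarchive: %s\n", dlerror());
            return -1;
        }
        /* ...resolve the needed symbols with dlsym(), e.g. archive_read_new */
        return 0;
    }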
Signed-off-by: Johannes Schauer Marin Rodrigues <josch@mister-muffin.de>
According to the mke2fs man page, the supported cluster-size values
for an ext4 filesystem are 2048 to 256M bytes. However, this is not
the case.
When mkfs is run to create a filesystem with the following specifications:
* 1k blocksize and cluster-size greater than 32M
* 2k blocksize and cluster-size greater than 64M
* 4k blocksize and cluster-size greater than 128M
mkfs fails with "Invalid argument passed to ext2 library while trying
to create journal" error. In general, when the cluster-size to blocksize
ratio is greater than 32k, mkfs fails with this error.
Going through the code, I found that the function
`ext2fs_new_range()` is the source of this error: when
the cluster-size to blocksize ratio exceeds 32k, the length argument
to `ext2fs_new_range()` ends up being 0. Hence, the error.
This patch corrects the valid cluster-size values.
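For illustration (this is not mke2fs code), the corrected limits fall
out of the maximum cluster-size to blocksize ratio of 32k:

    /* Largest valid cluster size for a given block size, given that the
     * cluster-to-block ratio may not exceed 32768 (32k):
     *   1024 << 15 = 32M, 2048 << 15 = 64M, 4096 << 15 = 128M */
    static unsigned long max_cluster_size(unsigned long block_size)
    {
        return block_size << 15;
    }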
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20240403043037.3992724-1-srivathsa.d.dara@oracle.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
This patch is part of the preparation required to allow
GDT blocks to expand beyond a single group.
It introduces two new interfaces:
- ext2fs_count_used_blocks(), to return the blocks used
in the bitmap range.
- ext2fs_reserve_super_and_bgd2() to return blocks used by
superblock/GDT blocks for every group, by looking up blocks used.
Signed-off-by: Li Dongyang <dongyangli@ddn.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/20230925060801.1397581-1-dongyangli@ddn.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
For a flex_bg enabled fs, we can merge the
inode table blocks into a contiguous range;
this improves mke2fs time on large devices
when lazy_itable_init is disabled.
On a 977TB device, unpatched mke2fs was running
for 449m10s before being terminated manually.
strace shows a huge number of fallocate calls; given
the offset from fallocate, it had done 41% of the inode
tables, so the estimated time needed would be 1082m.
        unpatched      patched
real    449m10.954s    4m20.531s
user    0m18.217s      0m16.147s
sys     0m20.311s      0m8.944s
Signed-off-by: Li Dongyang <dongyangli@ddn.com>
Link: https://lore.kernel.org/r/20230904045806.827621-1-dongyangli@ddn.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
The ext4 kernel code implemented support for s_mtime_hi,
s_wtime_hi, and related timestamp fields to avoid timestamp
overflow in 2038, but similar handling is not in e2fsprogs.
Add helper macros for the superblock _hi timestamp fields
ext2fs_super_tstamp_get() and ext2fs_super_tstamp_set().
Add helper macros for the inode _extra timestamp fields
ext2fs_inode_xtime_get() and ext2fs_inode_xtime_set().
Add helper macro ext2fs_actual_inode_size() to avoid open
coding the i_extra_isize check in multiple places.
Remove inode_time_to_string() since this is unused once callers
change to time_to_string(ext2fs_inode_xtime_get()) directly.
Fix inode_includes() macro to properly wrap "inode" parameter,
and rename to ext2fs_inode_includes() to avoid potential name
clashes. Use this to check inode field inclusion in debugfs
instead of bare constants for inode field offsets.
Use these interfaces to access timestamps in debugfs, e2fsck,
libext2fs, fuse2fs, tune2fs, and e2undo.
Signed-off-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/20230927054016.16645-1-adilger@dilger.ca
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
For changing the inode size (-I) and setting the quota feature (-Q),
tune2fs only checks whether the filesystem is unmounted. Considering
mount namespaces, the filesystem may look unmounted here while it is
still mounted in another mount namespace.
So we add a check of whether the filesystem is in use, using the
EXT2_MF_BUSY flag, which can indicate the device is already opened
with O_EXCL, as suggested by Ted.
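A sketch of the kind of check being added (not the exact tune2fs hunk;
the helper name is illustrative):

    #include <ext2fs/ext2fs.h>

    /* Sketch: treat the device as busy if it is mounted here, or if
     * EXT2_MF_BUSY is set (the device could not be opened with O_EXCL,
     * e.g. it is mounted in another mount namespace). */
    static int fs_is_busy(const char *device)
    {
        int mount_flags = 0;

        if (ext2fs_check_if_mounted(device, &mount_flags))
            return 1;  /* be conservative on errors, for the sketch */
        return (mount_flags & (EXT2_MF_MOUNTED | EXT2_MF_BUSY)) != 0;
    }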
Reported-by: Baokun Li <libaokun1@huawei.com>
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: zhanchengbin <zhanchengbin1@huawei.com>
Link: https://lore.kernel.org/r/28455341-ca26-d203-8b54-792bae002251@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
If we fail to get/open the mount point for the get/set
fs label ioctl, just fall back to the old method and
silence the error messages.
Fixes: f85b4526f ("tune2fs: implement support for set/get label iocts")
Signed-off-by: Li Dongyang <dongyangli@ddn.com>
Link: https://lore.kernel.org/r/20230520104329.2402182-1-dongyangli@ddn.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
This option allows the user to specify custom root directory
permissions at FS creation time. If no permissions are specified,
then the root directory permissions are set to the default.
Signed-off-by: Dmitriy Chestnykh <dm.chestnykh@gmail.com>
At the moment, op_mkdir() ORs the requested mode with fs->umask, which
results in the group/other write permission bits always being cleared
regardless of what the creating process requested. Instead, leave the
requested mode alone so that the resulting directory has the permissions
the creator expects.
Signed-off-by: Steven Luo <steven@steven676.net>
dumpe2fs has never been modified to correctly report block ranges
corresponding to free clusters in block allocation bitmaps from bigalloc
file systems. Rather than reporting block ranges covering all the
blocks in free clusters found in a block bitmap, it either reports just
the first block number in a cluster for a single free cluster, or a
range beginning with the first block number in the first cluster in a
series of free clusters, and ending with the first block number in the
last cluster in that series.
This behavior causes xfstest shared/298 to fail when run on a bigalloc
file system with a 1k block size. The test uses dumpe2fs to collect
a list of the blocks freed when files are deleted from a file system.
When the test deletes a file containing blocks located after the first
block in the last cluster in a series of clusters, dumpe2fs does not
report those blocks as free per the test's expectations.
Modify dumpe2fs to report full block ranges for free clusters. At the
same time, fix a small bug causing unnecessary !in_use() retests while
iterating over a block bitmap.
Signed-off-by: Eric Whitney <enwlinux@gmail.com>
Link: https://lore.kernel.org/r/20230721185506.1020225-1-enwlinux@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Work around an issue with the Android NDK where its copy of
linux/fsmap.h is missing the inline functions fsmap_sizeof() and
fsmap_advance(). This was causing an error when building e2fsprogs
using the Android NDK, using the autotools-based build system.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
test_rw() and test_nd() need to allocate two or three times the product
of the block size and the block counts. This can overflow the signed int
type of block_size and result in allocate_buffer() being called with a
value smaller than intended. Once that buffer is written to, badblocks
segfaults.
Since allocate_buffer() accepts a size_t, change the input validation to
use SIZE_MAX and cast accordingly when calculating the argument.
Fixing the segfault allows larger values to be passed to read() and
write(); these need to be cast to size_t as well in order to avoid a
signed integer overflow causing failure, in which case badblocks would
fall back to testing a single block at once.
Before:
$ misc/badblocks -w -b 4096 -c 524288 -e 1 -s -v /tmp/testfile.bin
Checking for bad blocks in read-write mode
From block 0 to 524287
Segmentation fault
$ misc/badblocks -n -b 4096 -c 524288 -e 1 -s -v /tmp/testfile.bin
Checking for bad blocks in non-destructive read-write mode
From block 0 to 524287
Checking for bad blocks (non-destructive read-write test)
Segmentation fault
After:
$ misc/badblocks -w -b 4096 -c 524288 -e 1 -s -v /tmp/testfile.bin
Checking for bad blocks in read-write mode
From block 0 to 524287
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found. (0/0/0 errors)
$ misc/badblocks -n -b 4096 -c 524288 -e 1 -s -v /tmp/testfile.bin
Checking for bad blocks in non-destructive read-write mode
From block 0 to 524287
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
Pass completed, 0 bad blocks found. (0/0/0 errors)
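As a sketch of both the overflow and the fix (the helper below is
illustrative; badblocks' own code differs in detail):

    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch: 2 * block_size * blocks_at_once used to be evaluated in
     * (signed) int arithmetic and could overflow; doing the math in
     * size_t and validating against SIZE_MAX avoids that. */
    static void *alloc_test_buffers(int block_size, unsigned int blocks_at_once)
    {
        size_t len;

        if (block_size <= 0 ||
            (size_t) blocks_at_once > SIZE_MAX / 2 / (size_t) block_size)
            return NULL;        /* would overflow size_t */
        len = 2 * (size_t) block_size * blocks_at_once;
        return malloc(len);     /* allocate_buffer() takes a size_t */
    }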
Signed-off-by: Corey Hickey <bugfood-c@fatooh.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Since the conditional checks the product of block_size and
blocks_at_once, reporting that the problem is solely with
blocks_at_once is misleading.
Also change the error to use the name of the parameter listed in the
manual rather than the variable name.
Since blocks_at_once is unsigned, change the test to == rather than <=.
Before:
$ misc/badblocks -w -b 16777216 -c 524288 -e 1 -s -v /tmp/testfile.bin
misc/badblocks: Invalid blocks_at_once: 524288
After:
$ misc/badblocks -w -b 16777216 -c 524288 -e 1 -s -v /tmp/testfile.bin
misc/badblocks: For block size 16777216, blocks_at_once too large: 524288
$ misc/badblocks -w -b 16777216 -c 0 -e 1 -s -v /tmp/testfile.bin
misc/badblocks: Invalid number of blocks: 0
Signed-off-by: Corey Hickey <bugfood-c@fatooh.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
block_size is parsed as an unsigned int from parse_uint(), so retain it
as such until _after_ it has been constrained to a size within INT_MAX.
Lower level code still requires this to be an int, so cast to int for
anything below main().
Before:
$ misc/badblocks -w -b 4294967295 -c 1 /tmp/testfile.bin
misc/badblocks: Invalid block size: -1
After:
$ misc/badblocks -w -b 4294967295 -c 1 /tmp/testfile.bin
misc/badblocks: Invalid block size: 4294967295
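A sketch of the type handling (names simplified; the real option
parsing in badblocks has more checks):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: keep the value from parse_uint() unsigned until it has
     * been checked against INT_MAX, then cast for the lower layers
     * that still take an int. */
    static int checked_block_size(unsigned int parsed)
    {
        if (parsed > INT_MAX) {
            fprintf(stderr, "Invalid block size: %u\n", parsed);
            exit(1);
        }
        return (int) parsed;
    }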
Signed-off-by: Corey Hickey <bugfood-c@fatooh.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
This works the same way that mount -o offset=<bytes> works, and can be
used to mount particular partitions from a whole disk image.
Signed-off-by: Matt Stark <msta@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
fallocate can be used to have a 64-bit off_t provided it is compiled
with _FILE_OFFSET_BITS=64, which will be added automatically when
--enable-largefile is used.
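For illustration, the effect of --enable-largefile defining
_FILE_OFFSET_BITS=64 is that off_t (and hence fallocate's offset and
length arguments) becomes 64 bits wide, which a build can check with a
sketch like:

    #include <sys/types.h>

    /* Sketch: with _FILE_OFFSET_BITS=64, off_t is 64 bits, so fallocate
     * offsets and lengths beyond 2GiB are representable. */
    _Static_assert(sizeof(off_t) == 8, "off_t should be 64-bit here");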
[ Run autoreconf to update configure and config.h.in -- TYT ]
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>