mke2fs: fix bugs in hugefile creation

For certain file system sizes, mke2fs's hugefile creation would fail
with the error:

    mke2fs: Could not allocate block in ext2 filesystem while creating huge file 0

This would happen because we had failed to reserve enough space for
the metadata blocks for the hugefile.  There were two problems:

1) The overhead calculation function failed to take into account the
cluster size for bigalloc file systems.

2) In the case where num_blocks is 0 and num_files is 1, the overhead
calculation function was passed a size of 0, which caused the
calculated overhead to be zero.

Google-Bug-Id: 123239032
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Author: Theodore Ts'o
Date:   2019-01-24 22:35:15 -05:00
Parent: c3749cad15
Commit: f6c50d68cc

@@ -427,7 +427,8 @@ static blk64_t calc_overhead(ext2_filsys fs, blk64_t num)
 	e_blocks2 = (e_blocks + extents_per_block - 1) / extents_per_block;
 	e_blocks3 = (e_blocks2 + extents_per_block - 1) / extents_per_block;
 	e_blocks4 = (e_blocks3 + extents_per_block - 1) / extents_per_block;
-	return e_blocks + e_blocks2 + e_blocks3 + e_blocks4;
+	return (e_blocks + e_blocks2 + e_blocks3 + e_blocks4) *
+		EXT2FS_CLUSTER_RATIO(fs);
 }
 /*
@@ -567,7 +568,8 @@ errcode_t mk_hugefiles(ext2_filsys fs, const char *device_name)
 		num_blocks = fs_blocks / num_files;
 	}
-	num_slack += calc_overhead(fs, num_blocks) * num_files;
+	num_slack += (calc_overhead(fs, num_blocks ? num_blocks : fs_blocks) *
+		      num_files);
 	num_slack += (num_files / 16) + 1;	/* space for dir entries */
 	goal = get_start_block(fs, num_slack);
 	goal = round_up_align(goal, align, part_offset);