User request:

BTW: Could you please add some sort of deleted and possibly corrupted file
and inode list to the e2fsck report.  There should be filenames deleted
from directory inodes, files with duplicate blocks, etc.
It's pretty annoying to filter this information out of the e2fsck output
by hand :-

------------------------------------------

Add an "answer Yes always to this class of question" response.

----------------------------------

ext2fs_flush() should return a different error code for flushing the
primary versus the backup superblocks, so that mke2fs can print an
appropriate error message.

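A minimal sketch of the mke2fs side, assuming the library grew two distinct
codes for this.  The names EXT2_ET_PRIMARY_SB_WRITE and
EXT2_ET_BACKUP_SB_WRITE (and their values) are invented here for
illustration; they are not existing libext2fs error codes.

	#include <stdlib.h>
	#include <ext2fs/ext2fs.h>
	#include <et/com_err.h>

	/* Hypothetical error codes, defined only so the sketch is
	 * self-contained; today ext2fs_flush() does not distinguish
	 * the two failure cases. */
	#define EXT2_ET_PRIMARY_SB_WRITE	((errcode_t) -1001)
	#define EXT2_ET_BACKUP_SB_WRITE		((errcode_t) -1002)

	static void flush_fs(ext2_filsys fs, const char *program_name)
	{
		errcode_t retval = ext2fs_flush(fs);

		if (retval == 0)
			return;
		if (retval == EXT2_ET_PRIMARY_SB_WRITE) {
			/* fatal: the primary superblock was not written */
			com_err(program_name, retval,
				"while writing the primary superblock");
			exit(1);
		} else if (retval == EXT2_ET_BACKUP_SB_WRITE) {
			/* non-fatal: only the backup superblocks failed */
			com_err(program_name, retval,
				"warning: couldn't write backup superblocks");
		} else {
			com_err(program_name, retval,
				"while flushing the filesystem");
			exit(1);
		}
	}
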
---------------------------------

Date: Mon, 08 Mar 1999 21:46:14 +0100
From: Sergio Polini <s.polini@mclink.it>

I'm reading the source code of e2fsck 1.14.
In pass2.c, lines 352-357, I read:

	if ((dirent->name_len & 0xFF) > EXT2_NAME_LEN) {
		if (fix_problem(ctx, PR_2_FILENAME_LONG, &cd->pctx)) {
			dirent->name_len = EXT2_NAME_LEN;
			dir_modified++;
		}
	}

I think that I'll never see any messages about too-long filenames,
because "whatever & 0xFF" can never be "> 0xFF".
Am I wrong?

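The observation holds: EXT2_NAME_LEN is 255, and a value masked with 0xFF is
always in the range 0..255, so the test above can never be true.  A small
standalone demonstration of the dead check (just an illustration, not the
eventual e2fsck fix):

	#include <stdio.h>

	#define EXT2_NAME_LEN 255	/* same value as the real header */

	int main(void)
	{
		unsigned int name_len;

		for (name_len = 0; name_len <= 0xFFFF; name_len++) {
			/* (name_len & 0xFF) is always <= 255, so this
			 * branch is unreachable for any 16-bit value */
			if ((name_len & 0xFF) > EXT2_NAME_LEN)
				printf("fired for %u\n", name_len);
		}
		printf("the masked test never fired\n");
		return 0;
	}
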
--------------------------------------

Add chmod command to debugfs.

------------------------------------------

Maybe a bug in debugfs v1.14:
if a file has more than one hard link, only the first filename is shown when
using the command

	ncheck <inode>

------------------------------------

Add a filesystem creation date to the superblock.

-----------------------------------

Date: Tue, 18 Jan 2000 17:54:53 -0800 (PST)
From: Alan Blanchard <alan@abraxas.to>
To: tytso@MIT.EDU
Subject: DEBUGFS - thanks and a feature idea
Content-Type: TEXT/PLAIN; charset=US-ASCII

Theodore:

First, let me thank you for writing debugfs.  Recently, my Linux box
(RH 6.0, 400 MHz PIII, on a DSL line) was hacked into.  The intruder did
an "rm -Rf" on a 34 GB drive with about 5 GB of data on it.  I was able to
restore essentially the entire thing with debugfs and a bit of C code and
Perl.  Actually, I could have done the entire thing with debugfs and Perl,
but I thought it would be too slow.

During this exercise, I noticed that one small feature was lacking that
would have made my job a bit easier.  The length of a deleted directory is
reported as 0, hence debugfs won't dump the contents of the directory to a
file using the "dump" command.  The only thing that saved me was that the
list of disk blocks is not zeroed out.  I was able to dump the contents of
the directories by using debugfs to get the relevant block numbers, then
using dd to get the actual data.

If debugfs had a feature where it ignored the size of a directory reported
by the inode and instead just dumped all the blocks, it would have
facilitated things a bit.  This seems like a very easy feature to add.

Again, thanks for writing debugfs (and all the other Linux stuff you've
written!).

Cheers,
Alan Blanchard
alan@abraxas.to

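A rough sketch of the requested behaviour, written against libext2fs rather
than as a debugfs command: it walks every block the inode still references
and writes the raw contents out, ignoring the zero i_size that a deleted
directory reports.  The program itself is hypothetical and error handling
is kept to a minimum.

	#include <stdio.h>
	#include <stdlib.h>
	#include <ext2fs/ext2fs.h>

	struct dump_ctx {
		FILE	*out;
		char	*buf;
	};

	static int dump_block(ext2_filsys fs, blk_t *blocknr,
			      e2_blkcnt_t blockcnt, blk_t ref_blk,
			      int ref_offset, void *priv_data)
	{
		struct dump_ctx *ctx = priv_data;

		/* blockcnt < 0 marks the indirect blocks themselves;
		 * skip them so only data blocks reach the output file */
		if (blockcnt < 0)
			return 0;
		if (io_channel_read_blk(fs->io, *blocknr, 1, ctx->buf))
			return BLOCK_ABORT;
		fwrite(ctx->buf, fs->blocksize, 1, ctx->out);
		return 0;
	}

	int main(int argc, char **argv)
	{
		ext2_filsys	fs;
		struct dump_ctx	ctx;
		ext2_ino_t	ino;

		if (argc != 4) {
			fprintf(stderr, "usage: %s device inode outfile\n",
				argv[0]);
			exit(1);
		}
		if (ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs)) {
			fprintf(stderr, "can't open %s\n", argv[1]);
			exit(1);
		}
		ino = strtoul(argv[2], NULL, 0);
		ctx.out = fopen(argv[3], "w");
		ctx.buf = malloc(fs->blocksize);
		if (!ctx.out || !ctx.buf)
			exit(1);

		/* Walk the inode's blocks without looking at i_size */
		ext2fs_block_iterate2(fs, ino, 0, NULL, dump_block, &ctx);

		fclose(ctx.out);
		ext2fs_close(fs);
		return 0;
	}
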
-------------------------------------------------------------------

Date: Fri, 21 Jan 2000 14:07:12 -0800
From: "H. Peter Anvin" <hpa@www.transmeta.com>
Subject: mkfs -cc and fsck -c

b) An option to mkfs to zero the partition.  Yes, it can be done with
dd, but it would be a nicer way of doing it.

------------------------------------------------------------------

Add support in ext2fs_block_iterate() for returning compressed-flag
blocks to the caller.  Change the default to not return
EXT2_COMPRESSED_BLKADDR, and change e2fsck to pass this flag in.

(The old compression patches did this by default all the time, which
is bad, since it meant e2fsck never saw the EXT2_COMPRESSED_BLKADDR
flag word.)

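A sketch of what the e2fsck side of such a call might look like.  The flag
name BLOCK_FLAG_RETURN_COMPRESSED is invented for illustration (no such flag
exists in libext2fs), and the value used for EXT2_COMPRESSED_BLKADDR is only
a placeholder for the sentinel used by the out-of-tree compression patches.

	#include <ext2fs/ext2fs.h>

	#define BLOCK_FLAG_RETURN_COMPRESSED	0x80	/* hypothetical */
	#ifndef EXT2_COMPRESSED_BLKADDR
	#define EXT2_COMPRESSED_BLKADDR	((blk_t) 0xffffffff)	/* placeholder */
	#endif

	static int check_block(ext2_filsys fs, blk_t *block_nr,
			       e2_blkcnt_t blockcnt, blk_t ref_blk,
			       int ref_offset, void *priv_data)
	{
		/* With the proposed flag, the checker would actually see
		 * the compressed-block sentinel and could validate it or
		 * offer to clear it, instead of never being shown it. */
		if (*block_nr == EXT2_COMPRESSED_BLKADDR)
			return 0;	/* handle/report it here */
		return 0;
	}

	static errcode_t scan_inode(ext2_filsys fs, ext2_ino_t ino,
				    char *block_buf)
	{
		return ext2fs_block_iterate2(fs, ino,
					     BLOCK_FLAG_RETURN_COMPRESSED,
					     block_buf, check_block, NULL);
	}
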
------------------------------------------------------------

E2fsck should offer to clear all the blocks in an indirect block, not
the entire inode, so there's better recovery for when an indirect
block gets trashed.

-------------------------------------------------------------

From: Yann Dirson - LOGATIQUE <Yann.Dirson@France.Sun.COM>
Date: Thu, 2 Mar 2000 13:52:13 +0100 (MET)

During my experiments on the broken system, I noticed the following in
the badblocks program (which I'm aware is not designed for IDE drives);
I'd probably have already fixed them if my home system were up :(

* the syntax summary documents the 2nd argument as blocks_count, which
should probably read something like end_count.

* testing past the end of the device is not detected, and those blocks
are listed as bad, whereas they simply do not exist.

I think I'll probably add a "max count" option to findsuper(8), so
that I do not have to wait for the whole disk to be scanned when the
system had to be launched with "init=/bin/sh", in which case Ctrl-[CZ]
and friends appear to be absolutely ignored.

Somewhat unrelated, I just noticed that
http://web.mit.edu/tytso/www/linux/ext2.html could be updated:

- it could mention SGI XFS (http://oss.sgi.com/projects/xfs/ - they just
released a 0.03 snapshot)

----------------------------------------------------------------

Return-Path: <tytso@MIT.EDU>
Date: Thu, 10 Feb 2000 13:20:14 -0500
From: "Theodore Y. Ts'o" <tytso@MIT.EDU>
To: R.E.Wolff@BitWizard.nl
In-Reply-To: Rogier Wolff's message of Thu, 10 Feb 2000 08:46:30 +0100 (MET),
	<200002100746.IAA24573@cave.bitwizard.nl>
Subject: Re: e2fsck request for enhancement.
Phone: (781) 391-3464

   Date: Thu, 10 Feb 2000 08:46:30 +0100 (MET)
   From: R.E.Wolff@BitWizard.nl (Rogier Wolff)

   Lately, while trying to recover a broken disk, my system froze (twice,
   until I tried something else) while copying the disk.

   So I had a file of about 50 MB that was growing frantically at the
   moment of the crash.

   e2fsck then finds an indirect block that is completely bogus.  It
   starts by asking me if it's OK to clear a few of the referenced
   blocks.  I say yes.  Then it comes to the conclusion:

	too many invalid blocks.  Clear inode?

   and then I get the option to delete the whole file, not to truncate
   the file to a "working" size.

   I'd MUCH rather have e2fsck say something like:

	inode 1234 references an invalid block 134345454.  Hmm.
	inode 1234 references 567 out of 50176 invalid blocks,
	all near the end.  Truncate file to 49152 blocks?

   Here you can see that of the 1024 blocks near the end of the file,
   only 567 were detected as invalid.  However, now 48 MB of the file
   will be recovered instead of being thrown away.

That's a good point.  Actually, the right thing is for e2fsck to offer
to clear all of the bad blocks in a particular indirect block.  I don't
know how hard it would be to do that, but I'll put it on my e2fsprogs
TODO list.

						- Ted

-----------------------------------------------------------------

Debugfs's link command should set the file type information.

---------------------------------------------------------------