The previous fix for the warning addressed 'prevbuf' being used
uninitialized, which is also what the compiler reports. However,
initializing 'prevbuf' does not make the warning go away, and further
testing revealed that it is really 'savebuf' possibly being used before
initialization that is the source of the warning (the incorrect
warning message is probably an optimization-related gcc bug). So replace
the previous, ineffective fix with an explicit initialization of 'savebuf'.
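Roughly the shape of the change (the surrounding logic here is a guess
based on the description, not the actual code):

    /* Explicit initialization: gcc cannot prove the later use is
     * guarded, so start from a known value. */
    char *savebuf = NULL;

    if (need_backup)
        savebuf = malloc(size);
    /* ... */
    if (savebuf)
        free(savebuf);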
On 64-bit (e.g. x86_64) Linux the 64-bit wide types resolve to long,
not long long as is the case on 32-bit (e.g. i386) Linux. So we need an
explicit cast to long long for 64-bit types, since the format string must
specify the 'll' length modifier in order to print 64-bit values.
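For example, a sketch (not the actual code) of printing an s64 value:

    s64 nr_clusters = vol->nr_clusters;     /* hypothetical s64 value */

    /* The cast makes the argument match the 'll' length modifier
     * whether s64 is 'long' (64-bit Linux) or 'long long' (32-bit). */
    printf("Clusters: %lld\n", (long long)nr_clusters);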
Some compilers issue a spurious "may be used uninitialized" warning
when a pointer is only assigned in both branches of a conditional.
Force an extra initialization to avoid such warnings.
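The warned-about pattern looks roughly like this (illustrative only,
not the literal code):

    /* Before: both branches assign 'name', yet some compilers still
     * warn that it may be used uninitialized. */
    char *name;
    if (use_default)
        name = default_name;
    else
        name = user_name;

    /* After: a redundant initialization keeps the warning away. */
    char *name = default_name;
    if (!use_default)
        name = user_name;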
Closing the volume is the way to sync the MFT to disk. Without doing
so, the MFT runlists in $DATA and $Bitmap are not synced if they have
been updated in the second resizing stage, as happens for runlists
which have grown outside their original MFT record.
Unlike in most cases, the bad sector inode has to be closed if it
was updated and required MFT extents (when there are a lot of bad
sectors and some of them were outside the truncated partition).
Not doing so causes the inode to not be fully synced to the device.
This fixes compiler warnings emitted when you compare an le32 value with
e.g. 'const_cpu_to_le32(-1)' on a little-endian system, because
previously the expansion of the macro expression 'const_cpu_to_le32(-1)'
would be '(-1)' on a little-endian system but '(u32)((((u32)(-1) &
0xff000000u) >> 24) | (((u32)(-1) & 0x00ff0000u) >> 8) | (((u32)(-1) &
0x0000ff00u) << 8) | (((u32)(-1) & 0x000000ffu) << 24))' on a
big-endian system, i.e. the type of the expanded expression would be
'int' (signed) in the little-endian case but 'u32' (unsigned) in the
big-endian case.
With this commit the type of the expanded expression will be 'le32' in
both the little-endian and the big-endian case.
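A sketch of the idea (the byte-swap helper and the configuration macro
are stand-ins for whatever the endianness header really uses): casting
to le32 in both branches gives the expression one consistent type:

    #ifdef WORDS_BIGENDIAN
    #define const_cpu_to_le32(x)    ((le32)bswap_constant_32(x))
    #else
    #define const_cpu_to_le32(x)    ((le32)(x))
    #endif

    /* Both operands now have type le32 on any host. */
    if (a->type == const_cpu_to_le32(-1))
        break;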
This field is always assigned a signed value, and compared to other
signed values (ntfs_time values are signed little-endian 64-bit
integers).
This fixes two compiler warnings about signed/unsigned comparison.
These variables are only ever assigned to/from s64 values, so their type
should be s64, not u64. This fixes a compiler warning about
signed/unsigned comparison.
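For illustration (names are made up):

    s64 limit;      /* some signed 64-bit limit */
    u64 used;       /* before: unsigned */

    if (used > limit)       /* warning: signed/unsigned comparison */
        return -1;

    /* After the change 'used' is declared s64, matching its usage. */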
The UTF-16LE label buffer containing the result of mbs2ucs is the one
that should be NULL-terminated when the label is longer than permitted.
Not the input buffer, which is a function parameter assumed to be
NULL-terminated anyway.
This is done to match the type of the LSN struct members in layout.h.
The effect of this change is that while these members were previously
declared with the le64 type, leLSN resolves to sle64, i.e. fields that
were previously unsigned are now signed.
Following this change we also need to switch a few macros from
unsigned to signed versions in the code that uses these struct
definitions.
There were multiple cases of little-endian fields being used as
CPU-endian without byte swapping. This would result in incorrect
behaviour on big-endian systems.
On big-endian systems the result of the '!=' operation would be
endian-swapped rather than the first operand (which must have been the
intended behaviour).
This is harmless except when strict endianness checking is enabled, in
which case it results in a compile error. Fixed by converting both
values to CPU endianness before comparing them.
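The difference, schematically (the struct and field names are
placeholders):

    /* Before: byte-swaps the 0/1 result of the comparison. */
    if (le32_to_cpu(a->type != b->type))
        return -1;

    /* After: convert each little-endian value, then compare. */
    if (le32_to_cpu(a->type) != le32_to_cpu(b->type))
        return -1;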
In 'dump_resident_attr_val', 'i' was sometimes used as a native-endian
'int'-precision string length value and sometimes used as a little-
endian 16-bit flags value. This type of mixed usage is bad practice and
results in a hard error when strict endianness checking is used.
Fixed by introducing new variable 'flags' to hold the little-endian 16-
bit flags value.
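In outline (simplified, not the literal diff):

    int i;          /* remains a native-endian string length */
    le16 flags;     /* new: holds the on-disk little-endian flags */

    flags = attr->flags;    /* hypothetical little-endian field */
    printf("Flags: 0x%04x\n", (unsigned)le16_to_cpu(flags));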
If the attribute type was specified by the user, 'attr_type' was assigned
a CPU-endian value; however, if the attribute type was not specified, it
would be assigned the attribute type AT_DATA, which is a little-endian
value. The rest of the code seems to assume that 'attr_type' is
CPU-endian, so this is clearly a bug.
Resolved by fixing the endianness of the variable as little-endian and
converting the input value to little-endian when it is specified.
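Schematically (the option structure and field names are assumptions):

    ATTR_TYPES attr_type;   /* kept little-endian throughout */

    if (opts.attr_type_given)
        attr_type = cpu_to_le32(opts.attr_type);  /* convert user input */
    else
        attr_type = AT_DATA;    /* already a little-endian constant */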
In 'dump_attr_record' the variable 'u' was first used to store a
CPU-endian 32-bit value, and then to store a 16-bit little-endian value.
This is bad practice and results in a hard error when strict endian type
checking is used.
Fixed by storing the 16-bit little-endian flags value in a new variable
'flags'.
When looking up the lowercase equivalent of a Unicode character in
ntfs_fix_file_name, no byte swapping was performed on the ntfschar used
as index into the 'locase' array. This would lead to very strange
results on big-endian systems.
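The fix amounts to byte-swapping the character before using it as an
index (sketch; actual variable names may differ):

    ntfschar uc = name[i];          /* little-endian UTF-16 code unit */

    /* Before (wrong on big-endian hosts): raw value used as index. */
    name[i] = vol->locase[uc];

    /* After: convert to CPU order to get the correct index. */
    name[i] = vol->locase[le16_to_cpu(uc)];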
This commit addresses issues where little-endian variables are emitted
raw to a log or output stream which is to be interpreted by the user.
Outputting data in non-native endianness can cause confusion for anybody
attempting to debug issues with a file system.
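The pattern applied throughout is simply (field name is illustrative):

    /* Before: prints the raw little-endian representation. */
    ntfs_log_info("value length %u\n", (unsigned)a->value_length);

    /* After: byte-swap first so the logged number is correct on
     * big-endian hosts as well. */
    ntfs_log_info("value length %u\n", (unsigned)le32_to_cpu(a->value_length));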
If the start buffer is more recent than the restart page, we update the
committed LSN with the last record LSN of the block (last_end_lsn) while
applying actions, but forget about it while printing records with -f for
investigation purposes.
Note that while applying actions we use start_buffer to determine the
latest page out of block 2 and block 3, and then take the committed LSN
from that latest page. For -f we don't need buffers, so we just compare
directly with the committed LSN from the restart page.
(contributed by Rakesh Pandit)
The new compression formats used by Windows 10 use reparse data, and
a new reparse tag which is useful to define even though these formats
are not yet supported by ntfs-3g.
When the unreadable directory has an ATTRIBUTE_LIST attribute and an
INDEX_ALLOCATION attribute split over several extents, the first
of which defines a single cluster, the first INDEX_ALLOCATION extent has
lowest_vcn=0 and highest_vcn=0, and the second one has lowest_vcn=1.
This unusual case, which can be created by the combination of a small
volume and near-full MFT records, triggers some special-case behavior in
ntfs_mapping_pairs_decompress_i(). That behavior is incorrect if the
attribute's first extent only contains a single cluster, since in that case
highest_vcn=0 as well.
This configuration has been tested on Windows and it *is* able to
successfully read the directory. This supports the hypothesis that the
volume is valid and NTFS-3g has a bug on the read side.
This bug could, in theory, occur with any non-resident attribute, not just
INDEX_ALLOCATION attributes.
(Contributed by Eric Biggers)
This fixes the case where the original bad cluster list requires extents.
The list is processed globally, no relocation is done, and the list is
truncated, possibly fitting into fewer extents.