Fixed huge data writes

When computing the runlist for the first non-resident write to an
attribute, an inconsistency was created between the attribute image
and the in-memory ntfs_attr structure, which could overflow the MFT
record when that first write was huge and fragmented (reported by
Vito Caputo).
Jean-Pierre André 2011-10-20 19:05:27 +02:00
parent 59ecea5c80
commit 864cf7232e

@@ -5468,6 +5468,7 @@ static int ntfs_attr_update_meta(ATTR_RECORD *a, ntfs_attr *na, MFT_RECORD *m,
 		NAttrClearSparse(na);
 		a->flags &= ~ATTR_IS_SPARSE;
 		na->data_flags = a->flags;
+		a->compression_unit = 0;
 		memmove((u8*)a + le16_to_cpu(a->name_offset) - 8,
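For context, the single added line resets compression_unit in the
attribute record at the same point where the sparse flag is cleared, so
the on-disk attribute image cannot drift from the in-memory ntfs_attr
state. Below is a minimal sketch of that invariant, not the ntfs-3g
source: struct attr_record, struct ntfs_attr_state, and clear_sparse
are simplified stand-ins for ATTR_RECORD, ntfs_attr, and the relevant
branch of ntfs_attr_update_meta.

#include <stdint.h>

#define ATTR_IS_SPARSE 0x8000u

/* Simplified stand-ins for the on-disk ATTR_RECORD image and the
 * in-memory ntfs_attr handle; only the fields relevant here. */
struct attr_record {
	uint16_t flags;            /* little-endian on disk; host order here */
	uint8_t  compression_unit; /* log2 of clusters per compression block */
};

struct ntfs_attr_state {
	uint16_t data_flags;       /* cached copy of the record flags */
};

/* Dropping the sparse flag must update the record and the cached state
 * together: compression_unit is only meaningful while the attribute is
 * sparse or compressed, so it is reset in the same step as the flag
 * (the line this commit adds). */
static void clear_sparse(struct attr_record *a, struct ntfs_attr_state *na)
{
	a->flags &= (uint16_t)~ATTR_IS_SPARSE;
	na->data_flags = a->flags;
	a->compression_unit = 0;   /* keep image and ntfs_attr consistent */
}

Resetting the field in the same step as the flag means no later layout
or size computation can see a sparse-only field on an attribute whose
flags say it is no longer sparse, which is the inconsistency the commit
message describes.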