bzip2.1: remove blank spaces in man page and drop the .PU macro.

Author: Bjarni Ingi Gislason
Bug-Debian: https://bugs.debian.org/675380
Committed-by: Mark Wielaard, 2019-07-21 17:09:25 +02:00
parent 6a8690fc8d
commit 8d9410ce88

--- a/bzip2.1
+++ b/bzip2.1

@@ -1,4 +1,3 @@
-.PU
 .TH bzip2 1
 .SH NAME
 bzip2, bunzip2 \- a block-sorting file compressor, v1.0.8
@@ -152,8 +151,7 @@ of most file compressors) is coded at about 8.05 bits per byte, giving
 an expansion of around 0.5%.
 As a self-check for your protection,
-.I
-bzip2
+.I bzip2
 uses 32-bit CRCs to
 make sure that the decompressed version of a file is identical to the
 original. This guards against corruption of the compressed data, and
@@ -224,9 +222,9 @@ or decompression.
 Reduce memory usage, for compression, decompression and testing. Files
 are decompressed and tested using a modified algorithm which only
 requires 2.5 bytes per block byte. This means any file can be
-decompressed in 2300k of memory, albeit at about half the normal speed.
-During compression, \-s selects a block size of 200k, which limits
+decompressed in 2300\ k of memory, albeit at about half the normal speed.
+During compression, \-s selects a block size of 200\ k, which limits
 memory use to around the same figure, at the expense of your compression
 ratio. In short, if your machine is low on memory (8 megabytes or
 less), use \-s for everything. See MEMORY MANAGEMENT below.
@@ -244,7 +242,7 @@ information which is primarily of interest for diagnostic purposes.
 Display the software version, license terms and conditions.
 .TP
 .B \-1 (or \-\-fast) to \-9 (or \-\-best)
-Set the block size to 100 k, 200 k .. 900 k when compressing. Has no
+Set the block size to 100 k, 200 k ... 900 k when compressing. Has no
 effect when decompressing. See MEMORY MANAGEMENT below.
 The \-\-fast and \-\-best aliases are primarily for GNU gzip
 compatibility. In particular, \-\-fast doesn't make things
@@ -279,10 +277,10 @@ during decompression.
 Compression and decompression requirements,
 in bytes, can be estimated as:
-Compression: 400k + ( 8 x block size )
-Decompression: 100k + ( 4 x block size ), or
-               100k + ( 2.5 x block size )
+Compression: 400\ k + ( 8 x block size )
+Decompression: 100\ k + ( 4 x block size ), or
+               100\ k + ( 2.5 x block size )
 Larger block sizes give rapidly diminishing marginal returns. Most of
 the compression comes from the first two or three hundred k of block
@@ -292,7 +290,7 @@ on small machines.
 It is also important to appreciate that the decompression memory
 requirement is set at compression time by the choice of block size.
-For files compressed with the default 900k block size,
+For files compressed with the default 900\ k block size,
 .I bunzip2
 will require about 3700 kbytes to decompress. To support decompression
 of any file on a 4 megabyte machine,
@@ -311,9 +309,9 @@ Another significant point applies to files which fit in a single block
 amount of real memory touched is proportional to the size of the file,
 since the file is smaller than a block. For example, compressing a file
 20,000 bytes long with the flag -9 will cause the compressor to
-allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560
-kbytes of it. Similarly, the decompressor will allocate 3700k but only
-touch 100k + 20000 * 4 = 180 kbytes.
+allocate around 7600\ k of memory, but only touch 400\ k + 20000 * 8 = 560
+kbytes of it. Similarly, the decompressor will allocate 3700\ k but only
+touch 100\ k + 20000 * 4 = 180 kbytes.
 Here is a table which summarises the maximum memory usage for different
 block sizes. Also recorded is the total compressed size for 14 files of
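The worked 20,000-byte example in this hunk can be replayed directly. A sketch assuming k = 1000 bytes; the variable names are mine:

```python
# Memory actually touched when compressing/decompressing a 20,000-byte
# file with -9, per the arithmetic quoted in the man page (k = 1000 bytes).
file_size = 20_000

touched_compress_k = 400 + file_size * 8 // 1000    # 400k + 20000 * 8 bytes
touched_decompress_k = 100 + file_size * 4 // 1000  # 100k + 20000 * 4 bytes

print(touched_compress_k)    # 560
print(touched_decompress_k)  # 180
```

Both results agree with the 560-kbyte and 180-kbyte figures in the text, against allocations of 7600k and 3700k respectively.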
@@ -337,7 +335,7 @@ larger files, since the Corpus is dominated by smaller files.
 .SH RECOVERING DATA FROM DAMAGED FILES
 .I bzip2
-compresses files in blocks, usually 900kbytes long. Each
+compresses files in blocks, usually 900\ kbytes long. Each
 block is handled independently. If a media or transmission error causes
 a multi-block .bz2
 file to become damaged, it may be possible to
@@ -361,7 +359,7 @@ undamaged.
 .I bzip2recover
 takes a single argument, the name of the damaged file,
 and writes a number of files "rec00001file.bz2",
-"rec00002file.bz2", etc, containing the extracted blocks.
+"rec00002file.bz2", etc., containing the extracted blocks.
 The output filenames are designed so that the use of
 wildcards in subsequent processing -- for example,
 "bzip2 -dc rec*file.bz2 > recovered_data" -- processes the files in
@@ -379,7 +377,7 @@ block size.
 .SH PERFORMANCE NOTES
 The sorting phase of compression gathers together similar strings in the
 file. Because of this, files containing very long runs of repeated
-symbols, like "aabaabaabaab ..." (repeated several hundred times) may
+symbols, like "aabaabaabaab ...\&" (repeated several hundred times) may
 compress more slowly than normal. Versions 0.9.5 and above fare much
 better than previous versions in this respect. The ratio between
 worst-case and average-case compression time is in the region of 10:1.