Commit Graph

148 Commits

Herbert Xu
e4d5b79c66 [CRYPTO] users: Use crypto_comp and crypto_has_*
This patch converts all users to use the new crypto_comp type and the
crypto_has_* functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:22 +10:00
Herbert Xu
fce32d70ba [CRYPTO] api: Add crypto_comp and crypto_has_*
This patch adds the crypto_comp type to complete the compile-time checking
conversion.  The functions crypto_has_alg and crypto_has_cipher, etc. are
also added to replace crypto_alg_available.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:21 +10:00
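
A minimal sketch of how the new crypto_comp type and crypto_has_* helpers described above might be used by a caller; the wrapper function name and buffer handling are illustrative only, and the 2.6.19-era signatures are assumed:

    #include <linux/errno.h>
    #include <linux/err.h>
    #include <linux/crypto.h>

    /* Hypothetical helper: compress src into dst using "deflate". */
    static int example_compress(const u8 *src, unsigned int slen,
                                u8 *dst, unsigned int *dlen)
    {
            struct crypto_comp *tfm;
            int err;

            /* crypto_has_comp() replaces the old crypto_alg_available(). */
            if (!crypto_has_comp("deflate", 0, 0))
                    return -ENOENT;

            tfm = crypto_alloc_comp("deflate", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_comp_compress(tfm, src, slen, dst, dlen);

            crypto_free_comp(tfm);
            return err;
    }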
Herbert Xu
8425165dfe [CRYPTO] digest: Remove old HMAC implementation
This patch removes the old HMAC implementation now that nobody uses it
anymore.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:20 +10:00
Herbert Xu
e9d41164e2 [CRYPTO] tcrypt: Use HMAC template and hash interface
This patch converts tcrypt to use the new HMAC template rather than the
hard-coded version of HMAC.  It also converts all digest users to use
the new cipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:18 +10:00
Herbert Xu
0796ae061e [CRYPTO] hmac: Add crypto template implementation
This patch rewrites HMAC as a crypto template.  This means that HMAC is no
longer a hard-coded part of the API.  It's now a template that generates
standard digest algorithms like any other.

The old HMAC is preserved until all current users are converted.

The same structure can be used by other MACs such as AES-XCBC-MAC.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:17 +10:00
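
With HMAC as a template, a keyed MAC can be requested by name through the hash interface; a rough sketch (key and buffer handling are illustrative, and the 2.6.19-era hash API is assumed):

    #include <linux/err.h>
    #include <linux/crypto.h>
    #include <linux/scatterlist.h>

    /* Hypothetical example: HMAC-SHA1 over a linear buffer. */
    static int example_hmac_sha1(const u8 *key, unsigned int keylen,
                                 u8 *data, unsigned int len, u8 *out)
    {
            struct hash_desc desc;
            struct scatterlist sg;
            int err;

            /* "hmac(sha1)" is instantiated by the hmac template. */
            desc.tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(desc.tfm))
                    return PTR_ERR(desc.tfm);
            desc.flags = 0;

            err = crypto_hash_setkey(desc.tfm, key, keylen);
            if (!err) {
                    sg_init_one(&sg, data, len);
                    err = crypto_hash_digest(&desc, &sg, len, out);
            }

            crypto_free_hash(desc.tfm);
            return err;
    }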
Herbert Xu
055bcee310 [CRYPTO] digest: Added user API for new hash type
The existing digest user interface is inadequate for supporting asynchronous
operations.  For one thing, it doesn't return a value to indicate success or
failure, nor does it take a per-operation descriptor, which is essential
for issuing requests while other requests are still outstanding.

This patch is the first in a series of steps to remodel the interface
for asynchronous operations.

For the ease of transition the new interface will be known as "hash"
while the old one will remain as "digest".

This patch also changes sg_next to allow chaining.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:17 +10:00
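
A sketch of the new "hash" interface from the user side, as opposed to the old crypto_digest_* calls; the function and buffer names are illustrative and the 2.6.19-era API is assumed:

    #include <linux/err.h>
    #include <linux/crypto.h>
    #include <linux/scatterlist.h>

    /* Hypothetical example: SHA-1 of a linear buffer via crypto_hash. */
    static int example_sha1(u8 *data, unsigned int len,
                            u8 *out /* >= 20 bytes */)
    {
            struct crypto_hash *tfm;
            struct hash_desc desc;          /* per-operation descriptor */
            struct scatterlist sg;
            int err;

            /* Mask out CRYPTO_ALG_ASYNC: synchronous implementations only. */
            tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            desc.tfm = tfm;
            desc.flags = 0;

            sg_init_one(&sg, data, len);
            err = crypto_hash_digest(&desc, &sg, len, out);

            crypto_free_hash(tfm);
            return err;                     /* unlike digest, hash reports status */
    }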
Herbert Xu
7226bc877a [CRYPTO] api: Mark parts of cipher interface as deprecated
Mark the parts of the cipher interface that have been replaced by
block ciphers as deprecated.  Thanks to Andrew Morton for suggesting
doing this before removing them completely.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:16 +10:00
Herbert Xu
cba83564d1 [CRYPTO] tcrypt: Use block ciphers where applicable
This patch converts tcrypt to use the new block cipher type where
applicable.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:50 +10:00
Herbert Xu
a9e62fadf0 [CRYPTO] s390: Added block cipher versions of CBC/ECB
This patch adds block cipher algorithms for S390.  Once all users of the
old cipher type have been converted the existing CBC/ECB non-block cipher
operations will be removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:50 +10:00
Herbert Xu
db131ef908 [CRYPTO] cipher: Added block ciphers for CBC/ECB
This patch adds two block cipher algorithms, CBC and ECB.  These
are implemented as templates on top of existing single-block cipher
algorithms.  They invoke the single-block cipher through the new
encrypt_one/decrypt_one interface.

This also optimises the in-place encryption and decryption to remove
the cost of an IV copy each round.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:08 +10:00
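
A rough sketch of using the new cbc template on top of a single-block cipher through the block cipher interface; key and IV handling are illustrative, and the 2.6.19-era signatures are assumed:

    #include <linux/err.h>
    #include <linux/crypto.h>
    #include <linux/scatterlist.h>

    /* Hypothetical example: in-place CBC encryption of one buffer. */
    static int example_cbc_aes(const u8 *key, unsigned int keylen,
                               const u8 *iv, u8 *buf, unsigned int nbytes)
    {
            struct crypto_blkcipher *tfm;
            struct blkcipher_desc desc;
            struct scatterlist sg;
            int err;

            /* "cbc(aes)" = the cbc template wrapped around the aes cipher. */
            tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_blkcipher_setkey(tfm, key, keylen);
            if (!err) {
                    crypto_blkcipher_set_iv(tfm, iv, crypto_blkcipher_ivsize(tfm));
                    desc.tfm = tfm;
                    desc.flags = 0;
                    sg_init_one(&sg, buf, nbytes);
                    err = crypto_blkcipher_encrypt(&desc, &sg, &sg, nbytes);
            }

            crypto_free_blkcipher(tfm);
            return err;
    }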
Herbert Xu
5cde0af2a9 [CRYPTO] cipher: Added block cipher type
This patch adds the new type of block ciphers.  Unlike current cipher
algorithms which operate on a single block at a time, block ciphers
operate on an arbitrarily long linear area of data.  As it is block-based,
it will skip any data remaining at the end which cannot form a block.

The new block cipher type has one major difference when compared to the
existing cipher implementation.  The sg walking is now performed by the
algorithm rather than the cipher mid-layer.  This is needed for drivers
that directly support sg lists.  It also improves performance for all
algorithms as it reduces the total number of indirect calls by one.

In future the existing cipher algorithm will be converted to only have
a single-block interface.  This will be done after all existing users
have switched over to the new block cipher type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:52 +10:00
Herbert Xu
5c64097aa0 [CRYPTO] scatterwalk: Prepare for block ciphers
This patch prepares the scatterwalk code for use by the new block cipher
type.

Firstly it halves the size of scatter_walk on 32-bit platforms.  This
is important as we allocate at least two of these objects on the stack
for each block cipher operation.

It also exports the symbols since the block cipher code can be built as
a module.

Finally there is a hack in scatterwalk_unmap that relies on progress
being made.  Unfortunately, for hardware crypto we can't guarantee
progress to be made since the hardware can fail.

So this also gets rid of the hack by not advancing the address returned
by scatterwalk_map.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:52 +10:00
Herbert Xu
f28776a369 [CRYPTO] cipher: Added encrypt_one/decrypt_one
This patch adds two new operations for the simple cipher that encrypts or
decrypts a single block at a time.  This will be the main interface after
the existing block operations have moved over to the new block ciphers.

It also adds the crypto_cipher type which is currently only used on the
new operations but will be extended to setkey as well once existing users
have been converted to use block ciphers where applicable.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:51 +10:00
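
A sketch of the single-block interface on the crypto_cipher type introduced above; key handling is illustrative, and setkey on crypto_cipher is shown as it looks once the conversion described in the message is complete (2.6.19-era helpers assumed):

    #include <linux/err.h>
    #include <linux/crypto.h>

    /* Hypothetical example: encrypt exactly one AES block. */
    static int example_encrypt_block(const u8 *key, unsigned int keylen,
                                     const u8 *in, u8 *out)
    {
            struct crypto_cipher *tfm;
            int err;

            tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_cipher_setkey(tfm, key, keylen);
            if (!err)
                    /* One block (crypto_cipher_blocksize(tfm) bytes), no IV, no sg. */
                    crypto_cipher_encrypt_one(tfm, out, in);

            crypto_free_cipher(tfm);
            return err;
    }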
Herbert Xu
e853c3cfa8 [CRYPTO] api: Added crypto_type support
This patch adds the crypto_type structure which will be used for all new
crypto algorithm types, beginning with block ciphers.

The primary purpose of this abstraction is to allow different crypto_type
objects for crypto algorithms of the same type; in particular, there will
be different crypto_type objects for asynchronous algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:51 +10:00
Herbert Xu
8f21cf0d2b [CRYPTO] api: Feed flag directly to crypto_yield
The sleeping flag used to determine whether crypto_yield can actually
yield is really a per-operation flag rather than a per-tfm flag.  This
patch changes crypto_yield to take a flag directly so that we can start
using a per-operation flag instead of the tfm flag.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:50 +10:00
Herbert Xu
6d7d684d63 [CRYPTO] api: Added crypto_alloc_base
Up until now all crypto transforms have been of the same type, struct
crypto_tfm, regardless of whether they are ciphers, digests, or other
types.  As a result of that, we check the types at run-time before
each crypto operation.

This is rather cumbersome.  We could instead use different C types for
each crypto type to ensure that the correct types are used at compile
time.  That is, we would have crypto_cipher/crypto_digest instead of
just crypto_tfm.  The appropriate type would then be required for the
actual operations such as crypto_digest_digest.

Now that we have the type/mask fields when looking up algorithms, it
is easy to request an algorithm of the precise type that the user
wants.  However, crypto_alloc_tfm currently does not expose these new
attributes.

This patch introduces the function crypto_alloc_base which will carry
these new parameters.  It will be renamed to crypto_alloc_tfm once
all existing users have been converted.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:50 +10:00
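
A sketch of what the new allocation call looks like compared with crypto_alloc_tfm; the type/mask values shown are only illustrative of the lookup semantics, and the 2.6.19-era constants are assumed:

    #include <linux/err.h>
    #include <linux/crypto.h>

    static struct crypto_tfm *example_get_sync_aes(void)
    {
            struct crypto_tfm *tfm;

            /* Old style: crypto_alloc_tfm("aes", 0) returned NULL on failure
             * and gave no way to constrain the algorithm type.
             *
             * New style: ask for a plain cipher and exclude asynchronous
             * implementations; errors come back as ERR_PTR(). */
            tfm = crypto_alloc_base("aes", CRYPTO_ALG_TYPE_CIPHER,
                                    CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return NULL;

            return tfm;     /* caller frees with crypto_free_tfm() */
    }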
Herbert Xu
f3f632d61a [CRYPTO] api: Added asynchronous flag
This patch adds the asynchronous flag and changes all existing users to
only look up algorithms that are synchronous.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:49 +10:00
Herbert Xu
7fed0bf271 [CRYPTO] api: Add common instance initialisation code
This patch adds the helpers crypto_get_attr_alg and crypto_alloc_instance
which can be used by simple one-argument templates like hmac to process
input parameters and allocate instances.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:04 +10:00
Herbert Xu
df89820ebd [CRYPTO] cipher: Removed special IV checks for ECB
This patch makes IV operations on ECB fail through nocrypt_iv rather than
calling BUG().  This is needed to generalise CBC/ECB using the template
mechanism.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:03 +10:00
Herbert Xu
c907ee76d8 [CRYPTO] tcrypt: Use test_hash for crc32c
Now that crc32c has been fixed to conform with standard digest semantics,
we can use test_hash for it.  I've turned the last test into a chunky
test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:03 +10:00
Herbert Xu
ee7564166d [CRYPTO] digest: Store temporary digest in tfm
When the final result location is unaligned, we store the digest in a
temporary buffer before copying it to the final location.  Currently
that buffer sits on the stack.  This patch moves it to an area in the
tfm, just like the CBC IV buffer.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:02 +10:00
Herbert Xu
560c06ae1a [CRYPTO] api: Get rid of flags argument to setkey
Now that the tfm is passed directly to setkey instead of the ctx, we no
longer need to pass the &tfm->crt_flags pointer.

This patch also gets rid of a few unnecessary checks on the key length
for ciphers as the cipher layer guarantees that the key length is within
the bounds specified by the algorithm.

Rather than testing dia_setkey every time, this patch does it only once
during crypto_alloc_tfm.  The redundant check from crypto_digest_setkey
is also removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:02 +10:00
Herbert Xu
25cdbcd9e5 [CRYPTO] crc32c: Fix unconventional setkey usage
The convention for setkey is that once it is set it should not change,
in particular, init must not wipe out the key set by it.  In fact, init
should always be used after setkey before any digestion is performed.

The only user of crc32c that sets the key is tcrypt.  This patch adds
the necessary init calls there.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:01 +10:00
Michal Ludvig
b3be9a6d9a [CRYPTO] sha: Add module aliases for sha1 / sha256
Crypto modules should be loadable by their .cra_driver_name, so
we should make MODULE_ALIAS()es with these names. This patch adds
aliases for SHA1 and SHA256 only as that's what we need for
PadLock-SHA driver.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:40:20 +10:00
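
The idea, sketched as a fragment of a hypothetical driver: the module can then be auto-loaded by either the algorithm name or the driver name.  All names and values below are illustrative.

    #include <linux/module.h>
    #include <linux/crypto.h>

    static struct crypto_alg example_sha1_alg = {
            .cra_name        = "sha1",            /* generic algorithm name */
            .cra_driver_name = "sha1-example",    /* implementation name */
            .cra_priority    = 300,
            /* ... remaining fields and operations elided ... */
    };

    /* Allow loading by the generic name as well as by the driver name. */
    MODULE_ALIAS("sha1");
    MODULE_ALIAS("sha1-example");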
Herbert Xu
6bfd48096f [CRYPTO] api: Added spawns
Spawns lock a specific crypto algorithm in place.  They can then be used
with crypto_spawn_tfm to allocate a tfm for that algorithm.  When the base
algorithm of a spawn is deregistered, all its spawns will be automatically
removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:39:29 +10:00
Herbert Xu
492e2b63eb [CRYPTO] api: Allow algorithm lookup by type
This patch adds the infrastructure to pick an algorithm based on
its type.  For example, this allows you to select the encryption
algorithm "aes", instead of any algorithm registered under the name
"aes".  For now this is only accessible internally.  Eventually it
will be made available through crypto_alloc_tfm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:35:17 +10:00
Herbert Xu
2b8c19dbdc [CRYPTO] api: Add cryptomgr
The cryptomgr module is a simple manager of crypto algorithm instances.
It ensures that parameterised algorithms of the type tmpl(alg) (e.g.,
cbc(aes)) are always created.

This is meant to satisfy the needs for most users.  For more complex
cases such as deeper combinations or multiple parameters, a netlink
module will be created which allows arbitrary expressions to be parsed
in user-space.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:31:44 +10:00
Herbert Xu
2825982d9d [CRYPTO] api: Added event notification
This patch adds a notifier chain for algorithm/template registration events.
This will be used to register compound algorithms such as cbc(aes).  In
future this will also be passed onto user-space through netlink.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:17:13 +10:00
Herbert Xu
4cc7720cd1 [CRYPTO] api: Add template registration
A crypto_template generates a crypto_alg object when given a set of
parameters.  This patch adds the basic data structures for templates
and the code to handle their registration/deregistration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:17:12 +10:00
Herbert Xu
cce9e06d10 [CRYPTO] api: Split out low-level API
The crypto API is made up of the part facing users such as IPsec and the
low-level part which is used by cryptographic entities such as algorithms.
This patch splits out the latter so that the two APIs are more clearly
delineated.  As a bonus the low-level API can now be modularised if all
algorithms are built as modules.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:30 +10:00
Herbert Xu
6521f30273 [CRYPTO] api: Add crypto_alg reference counting
Up until now we've relied on module reference counting to ensure that the
crypto_alg structures don't disappear from under us.  This was good enough
as long as each crypto_alg came from exactly one module.

However, with parameterised crypto algorithms a crypto_alg object may need
two or more modules to operate.  This means that we need to count the
references to the crypto_alg object directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:16:29 +10:00
Herbert Xu
72fa491912 [CRYPTO] api: Rename crypto_alg_get to crypto_mod_get
The functions crypto_alg_get and crypto_alg_put operate on the crypto
modules rather than the algorithms.  Therefore it makes sense to call
them crypto_mod_get and crypto_mod_put respectively.

This is needed because we need to have real algorithm reference counters
for parameterised algorithms, as they can be unregistered from below
when their parameter algorithms are themselves unregistered.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:16:29 +10:00
Joachim Fritschi
eaf44088ff [CRYPTO] twofish: x86-64 assembly version
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.

Short summary of the tcrypt benchmarks:

Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -27% Cycles
decrypt: -23% Cycles

Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +18%  Cycles
decrypt: +15% Cycles

Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -9% Cycles
decrypt: -8% Cycles

Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-x86_64.txt


Here is another bonnie++ benchmark with encrypted filesystems. Most runs maxed
out the hd. It should give some idea what the module can do for encrypted filesystem
performance even though you can't see the full numbers.

http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060610_130806_x86_64.html

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:29 +10:00
Joachim Fritschi
b9f535ffe3 [CRYPTO] twofish: i586 assembly version
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.

Short summary of the tcrypt benchmarks:

Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -33% Cycles
decrypt: -45% Cycles

Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +3%  Cycles
decrypt: -22% Cycles

Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -20% Cycles
decrypt: -36% Cycles

Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-i586.txt


Here is another bonnie++ benchmark with encrypted filesystems. All runs with
the twofish assembler modules max out the drivespeed. It should give some
idea what the module can do for encrypted filesystem performance even though
you can't see the full numbers.

http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060611_205432_x86.html

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:28 +10:00
Joachim Fritschi
758f570ea7 [CRYPTO] twofish: Fix the priority
This patch adds a proper driver name and priority to the generic C
implementation to allow coexistence of the C and assembler modules.

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:28 +10:00
Joachim Fritschi
2729bb427f [CRYPTO] twofish: Split out common c code
This patch splits the twofish crypto routine into a common part (key
setup), which will be used by all twofish crypto modules (generic C, i586
assembler and x86_64 assembler), and a generic C part.  It also creates a new
header file which will be used by all three modules.

This eliminates all code duplication.

Correctness was verified with the tcrypt module and automated test scripts.

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:27 +10:00
Herbert Xu
b9d0a25a48 [CRYPTO] tcrypt: Forbid tcrypt from being built-in
It makes no sense to build tcrypt into the kernel.  In fact, now that
the driver init function's return status is being checked, it is
harmful to do so.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:42 +10:00
Michal Ludvig
e805792851 [CRYPTO] tcrypt: Speed benchmark support for digest algorithms
This patch adds speed tests (benchmarks) for digest algorithms.
Tests are run with different buffer sizes (16 bytes, ... 8 kBytes)
and with each buffer multiple tests are run with different update()
sizes (e.g. hash 64 bytes buffer in four 16 byte updates).
There is no correctness checking of the result and all tests and
algorithms use the same input buffer.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Michal Ludvig
14fdf477a7 [CRYPTO] tcrypt: Return -EAGAIN from module_init()
Intentionally return -EAGAIN from module_init() to ensure
it doesn't stay loaded in the kernel.  The module does all
its work from init() and doesn't offer any runtime
functionality, so we don't need it in memory, do we?

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Herbert Xu
996e2523cc [CRYPTO] api: Allow replacement when registering new algorithms
We already allow asynchronous removal of existing algorithm modules.  By
allowing the replacement of existing algorithms, we can replace algorithms
without having to wait for all existing users to complete.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Herbert Xu
d913ea0d6b [CRYPTO] api: Removed const from cra_name/cra_driver_name
We do need to change these names now and even more so in future with
instantiated algorithms.  So let's stop lying to the compiler and get
rid of the const modifiers.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Herbert Xu
c7fc05992a [CRYPTO] api: Added cra_init/cra_exit
This patch adds the hooks cra_init/cra_exit which are called during a tfm's
construction and destruction respectively.  This will be used by the instances
to allocate child tfms.

For now this lets us get rid of the coa_init/coa_exit functions which are
used for exactly that purpose (unlike the dia_init function which is called
for each transaction).

In fact the coa_exit path is currently buggy as it may get called twice
when an error is encountered during initialisation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Michal Ludvig
110bf1c0e9 [CRYPTO] api: Fixed incorrect passing of context instead of tfm
Fix a few omissions in passing TFM instead of CTX to algorithms.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Herbert Xu
6c2bb98bc3 [CRYPTO] all: Pass tfm instead of ctx to algorithms
Up until now algorithms have been happy to get a context pointer since
they know everything that's in the tfm already (e.g., alignment, block
size).

However, once we have parameterised algorithms, such information will
be specific to each tfm.  So the algorithm API needs to be changed to
pass the tfm structure instead of the context pointer.

This patch is basically a text substitution.  The only tricky bit is
the assembly routines that need to get the context pointer offset
through asm-offsets.h.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:39 +10:00
Herbert Xu
43600106e3 [CRYPTO] digest: Remove unnecessary zeroing during init
Various digest algorithms operate one block at a time and therefore
keep a temporary buffer of partial blocks.  This buffer does not need
to be initialised since there is a counter which indicates what is and
isn't valid in it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:38 +10:00
Atsushi Nemoto
e1147d8f47 [CRYPTO] digest: Add alignment handling
Some hash modules load/store data words directly.  The digest layer
should pass a properly aligned buffer to the update()/final() methods.  This
patch also adds cra_alignmask to some hash modules.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:38 +10:00
Atsushi Nemoto
d00e708cef [CRYPTO] khazad: Use 32-bit reads on key
On 64-bit platforms, reading the key 64 bits at a time (when it is only
guaranteed to be 32-bit aligned) will result in unaligned access.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:37 +10:00
David McCullough
55e9dce37d [CRYPTO] aes: Fixed array boundary violation
The AES setkey routine writes 64 bytes to the E_KEY area even though
there are only 60 bytes there.  It is in fact safe since E_KEY is
immediately followed by D_KEY, which is initialised afterwards.  However,
doing this may trigger undefined behaviour and makes Coverity unhappy.

So by combining E_KEY and D_KEY into one array we sidestep this issue
altogether.

This problem was reported by Adrian Bunk.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:10 +11:00
Atsushi Nemoto
06b42aa94b [CRYPTO] tcrypt: Fix key alignment
Force 32-bit alignment on keys in tcrypt test vectors.  Also rearrange the
structure to prevent unnecessary padding.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:09 +11:00
Atsushi Nemoto
20ea340489 [CRYPTO] all: Add missing cra_alignmask
The "des3_ede" and "serpent" lack cra_alignmask.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:09 +11:00
Eric Sesterhenn
bbeb563f7b [CRYPTO] all: Use kzalloc where possible
This patch converts crypto/ to kzalloc usage.
Compile tested with allyesconfig.

Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
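
The pattern being converted, shown on a hypothetical context structure (the type and field are illustrative):

    #include <linux/slab.h>

    struct example_ctx { int state; };      /* illustrative */

    static struct example_ctx *example_alloc_ctx(void)
    {
            /* Before: kmalloc() followed by an explicit memset():
             *     ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
             *     if (ctx)
             *             memset(ctx, 0, sizeof(*ctx));
             * After: a single call that returns zeroed memory. */
            return kzalloc(sizeof(struct example_ctx), GFP_KERNEL);
    }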
Herbert Xu
f10b7897ee [CRYPTO] api: Align tfm context as wide as possible
Since tfm contexts can contain arbitrary types we should provide at least
natural alignment (__attribute__ ((__aligned__))) for them.  In particular,
this is needed on the Xscale which is a 32-bit architecture with a u64 type
that requires 64-bit alignment.  This problem was reported by Ronen Shitrit.

The crypto_tfm structure's size was 44 bytes on 32-bit architectures and
80 bytes on 64-bit architectures.  So adding this requirement only means
that we have to add an extra 4 bytes on 32-bit architectures.

On i386 the natural alignment is 16 bytes which also benefits the VIA
Padlock as it no longer has to manually align its context structure to
128 bits.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
Denis Vlasenko
a5f8c47305 [CRYPTO] twofish: Use rol32/ror32 where appropriate
Convert open coded rotations to rol32/ror32.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
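
The conversion pattern, sketched on a hypothetical helper:

    #include <linux/types.h>
    #include <linux/bitops.h>

    static inline u32 example_mix(u32 x)
    {
            /* Open-coded rotation: (x << 8) | (x >> 24)   */
            /* becomes the helper from <linux/bitops.h>:   */
            return rol32(x, 8);
    }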
Al Viro
1b8623545b [PATCH] remove bogus asm/bug.h includes.
A bunch of asm/bug.h includes are both not needed (since the header will get
pulled in anyway) and bogus (since they are done too early).  Removed.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2006-02-07 20:56:35 -05:00
Herbert Xu
a429d2609c [CRYPTO] cipher: Set alignmask for multi-byte loads
Many cipher implementations use 4-byte/8-byte loads/stores which require
alignment on some architectures.  This patch explicitly sets the alignment
requirements for them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:16:00 -08:00
Herbert Xu
7302533aac [CRYPTO] api: Require block size to be less than PAGE_SIZE/8
The cipher code path may allocate up to two blocks of data on the stack.
Therefore we need to place limits on the maximum block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:58 -08:00
Herbert Xu
bcb0ad2b34 [CRYPTO] sha1: Fixed off-by-64 bug in sha1_update
After a partial update, the done pointer is off to the right by 64 bytes.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:56 -08:00
Herbert Xu
827c3911d8 [CRYPTO] cipher: Align temporary buffer in cbc_process_decrypt
Since the temporary buffer is used as an argument to cia_decrypt, it must be
aligned by cra_alignmask.  This bug was found by linux@horizon.com.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:49 -08:00
Nicolas Pitre
fa9b98fdab [CRYPTO] sha1: Avoid shifting count left and right
This patch avoids shifting the count left and right needlessly for each
call to sha1_update().  It instead can be done only once at the end in
sha1_final().

Keeping the previous test example (sha1_update() successively called with
len=64), a 1.3% performance increase can be observed on i386, or 0.2% on
ARM.  The generated code is also smaller on ARM.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:46 -08:00
Nicolas Pitre
9d70a6c86c [CRYPTO] sha1: Rename i/j to done/partial
This patch gives more descriptive names to the variables i and j.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:44 -08:00
Nicolas Pitre
cfa8d17cc8 [CRYPTO] sha1: Avoid useless memcpy()
The current code unconditionally copies the first block for every call to
sha1_update().  This can be avoided if there is no pending partial block.
This is always the case on the first call to sha1_update() (if the length
is >= 64, of course).

Furthermore, temp does not need to be used if sha_transform is never invoked.
Also consolidate the sha_transform calls into one to reduce code size.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:41 -08:00
Herbert Xu
c8a19c91b5 [CRYPTO] Allow AES C/ASM implementations to coexist
As the Crypto API now allows multiple implementations to be registered
for the same algorithm, we no longer have to play tricks with Kconfig
to select the right AES implementation.

This patch sets the driver name and priority for all the AES
implementations and removes the Kconfig conditions on the C implementation
for AES.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:39 -08:00
Herbert Xu
5cb1454b86 [CRYPTO] Allow multiple implementations of the same algorithm
This is the first step on the road towards asynchronous support in
the Crypto API.  It adds support for having multiple crypto_alg objects
for the same algorithm registered in the system.

For example, each device driver would register a crypto_alg object
for each algorithm that it supports.  While at the same time the
user may load software implementations of those same algorithms.

Users of the Crypto API may then select a specific implementation
by name, or choose any implementation for a given algorithm with
the highest priority.

The priority field is a 32-bit signed integer.  In future it will be
possible to modify it from user-space.

This also provides a solution to the problem of selecting amongst
various AES implementations, that is, aes vs. aes-i586 vs. aes-padlock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:37 -08:00
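
A sketch of what registering two implementations of the same algorithm looks like under this scheme; all field values and driver names are illustrative:

    #include <linux/module.h>
    #include <linux/crypto.h>

    /* Generic C implementation: low priority. */
    static struct crypto_alg aes_generic_alg = {
            .cra_name        = "aes",
            .cra_driver_name = "aes-generic",     /* illustrative */
            .cra_priority    = 100,
            /* ... blocksize, ctxsize, cipher ops elided ... */
    };

    /* Hardware-assisted implementation: higher priority wins by default. */
    static struct crypto_alg aes_hw_alg = {
            .cra_name        = "aes",
            .cra_driver_name = "aes-example-hw",  /* illustrative */
            .cra_priority    = 400,
            /* ... */
    };

    /* Both may be registered with crypto_register_alg(); a plain request
     * for "aes" picks the highest-priority implementation, while a user
     * may still select a specific one by name. */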
Herbert Xu
06ace7a9ba [CRYPTO] Use standard byte order macros wherever possible
A lot of crypto code needs to read/write 32-bit/64-bit words in a
specific byte order.  Many implementations open-code this by reading/writing
one byte at a time.  This patch converts all the applicable usages over
to use the standard byte order macros.

This is based on a previous patch by Denis Vlasenko.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:34 -08:00
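
The kind of conversion involved, in a hypothetical before/after sketch (assuming the pointer is suitably aligned, as crypto code typically guarantees):

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Before: assemble a big-endian word one byte at a time. */
    static u32 load_be32_open_coded(const u8 *p)
    {
            return ((u32)p[0] << 24) | ((u32)p[1] << 16) |
                   ((u32)p[2] << 8)  |  (u32)p[3];
    }

    /* After: let the standard byte order macros do the work. */
    static u32 load_be32_macro(const u8 *p)
    {
            return be32_to_cpu(*(const __be32 *)p);
    }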
Martin Schwidefsky
347a8dc3b8 [PATCH] s390: cleanup Kconfig
Sanitize some s390 Kconfig options.  We have ARCH_S390, ARCH_S390X,
ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT.  Replace these 6 options by
S390, 64BIT and COMPAT.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:53 -08:00
Jan Glauber
05f29fcdb0 [PATCH] s390: in-kernel crypto test vectors
Add new test vectors to the AES test suite for AES CBC and AES with plaintext
larger than AES blocksize.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:51 -08:00
Jan Glauber
bf754ae8ef [PATCH] s390: aes support
Add support for the hardware accelerated AES crypto algorithm.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Jan Glauber
0a497c17fe [PATCH] s390: sha256 support
Add support for the hardware accelerated sha256 crypto algorithm.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Jan Glauber
c1e26e1ef7 [PATCH] s390: in-kernel crypto rename
Replace all references to z990 by s390 in the in-kernel crypto files in
arch/s390/crypto.  The code is not specific to a particular machine (z990) but
to the s390 platform.  Big diff, does nothing..

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Herbert Xu
1b40efd772 [CRYPTO] Check cra_alignmask against cra_blocksize
The cipher code relies on the fact that the block size is a multiple
of the required alignment.  So we should check this at the time of
algorithm registration.  We also ensure that the block size is bounded
by the page size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
Herbert Xu
6df5b9f48d [CRYPTO] Simplify one-member scatterlist expressions
This patch rewrites various occurrences of &sg[0] where sg is an array
of length one to simply sg.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
David Hardeman
378f058cc4 [PATCH] Use sg_set_buf/sg_init_one where applicable
This patch uses sg_set_buf/sg_init_one in some places where it was
duplicated.

Signed-off-by: David Hardeman <david@2gen.com>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Greg KH <greg@kroah.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
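
The duplication being removed, sketched with a hypothetical wrapper (the open-coded field assignments in the comment reflect the scatterlist layout of that era):

    #include <linux/scatterlist.h>

    static void example_fill_sg(struct scatterlist *sg, void *buf,
                                unsigned int len)
    {
            /* Before: every caller open-coded the same three assignments:
             *     sg->page   = virt_to_page(buf);
             *     sg->offset = offset_in_page(buf);
             *     sg->length = len;
             * After: the shared helper. */
            sg_init_one(sg, buf, len);
    }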
Herbert Xu
fe2d5295a1 [CRYPTO] Fix boundary check in standard multi-block cipher processors
The boundary check in the standard multi-block cipher processors is
broken when nbytes is not a multiple of bsize.  In those cases it will
always process an extra block.

This patch corrects the check so that it processes at most nbytes of
data.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-06 14:49:44 -07:00
Herbert Xu
64baf3cfea [CRYPTO]: Added CRYPTO_TFM_REQ_MAY_SLEEP flag
The crypto layer currently uses in_atomic() to determine whether it is
allowed to sleep.  This is incorrect since spin locks don't always cause
in_atomic() to return true.

Instead of that, this patch returns to an earlier idea of a per-tfm flag
which determines whether sleeping is allowed.  Unlike the earlier version,
the default is to not allow sleeping.  This ensures that no existing code
can break.

As usual, this flag may either be set through crypto_alloc_tfm(), or
just before a specific crypto operation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-01 17:43:05 -07:00
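
How the flag is meant to be used per the message above; the algorithm name and call sites are illustrative, and the pre-2.6.19 cipher API (crypto_alloc_tfm, crt_flags) is assumed:

    #include <linux/crypto.h>

    static struct crypto_tfm *example_alloc_sleepy_aes(void)
    {
            /* Request the flag at allocation time ... */
            struct crypto_tfm *tfm = crypto_alloc_tfm("aes",
                                                      CRYPTO_TFM_REQ_MAY_SLEEP);

            /* ... or toggle it around individual operations: set it where
             * sleeping is safe, clear it where it is not (e.g. under a
             * spinlock). */
            if (tfm)
                    tfm->crt_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;

            return tfm;
    }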
Aaron Grothe
fb4f10ed50 [CRYPTO]: Fix XTEA implementation
The XTEA implementation was incorrect due to a misinterpretation of
operator precedence.  Because of the wide-spread nature of this
error, the erroneous implementation will be kept, albeit under the
new name of XETA.

Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-01 17:42:46 -07:00
Jesper Juhl
77933d7276 [PATCH] clean up inline static vs static inline
`gcc -W' likes to complain if the static keyword is not at the beginning of
the declaration.  This patch fixes all remaining occurrences of "inline
static" up with "static inline" in the entire kernel tree (140 occurrences in
47 files).

While making this change I came across a few lines with trailing whitespace
that I also fixed up, I have also added or removed a blank line or two here
and there, but there are no functional changes in the patch.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 16:26:20 -07:00
Herbert Xu
9d853c3757 [CRYPTO]: Fix zero-extension bug on 64-bit architectures.
Noticed by Ken-ichirou MATSUZAWA.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-15 07:41:31 -07:00
Dag Arne Osvik
e1d5dea1df [CRYPTO] Add faster DES code from Dag Arne Osvik
I've made a new implementation of DES to replace the old one in the kernel.
It provides faster encryption on all tested processors apart from the original
Pentium, and key setup is many times faster.

Speed relative to old kernel implementation:

Processor               des_setkey  des_encrypt  des3_ede_setkey  des3_ede_encrypt
Pentium 120 MHz         6.8         0.82         7.2              0.86
Pentium III 1.266 GHz   5.6         1.19         5.8              1.34
Pentium M 1.3 GHz       5.7         1.15         6.0              1.31
Pentium 4 2.266 GHz     5.8         1.24         6.0              1.40
Pentium 4E 3 GHz        5.4         1.27         5.5              1.48
StrongARM 1110 206 MHz  4.3         1.03         4.4              1.14
Athlon XP 2 GHz         7.8         1.44         8.1              1.61
Athlon 64 2 GHz         7.8         1.34         8.3              1.49

Signed-off-by: Dag Arne Osvik <da@osvik.no>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:44 -07:00
Herbert Xu
a9df3597fe [CRYPTO] Remove unused iv field from context structure
The iv field in des_ctx/des3_ede_ctx/serpent_ctx has never been used.
This was noticed by Dag Arne Osvik.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:21 -07:00
Andreas Steinmetz
a2a892a236 [CRYPTO] Add x86_64 asm AES
Implementation:
===============
The encrypt/decrypt code is based on an x86 implementation I did a while
ago which I never published. This unpublished implementation does
include an assembler based key schedule and precomputed tables. For
simplicity and best acceptance, however, I took Gladman's in-kernel code
for table generation and key schedule for the kernel port of my
assembler code and modified this code to produce the key schedule as
required by my assembler implementation. File locations and Kconfig are
kept similar to the i586 AES assembler implementation.
It may seem a little bit strange to use 32 bit I/O and registers in the
assembler implementation but this gives the best code size. My
implementation takes one instruction more per round compared to
Gladman's x86 assembler but it doesn't require any stack for local
variables or saved registers and it is less serialized than Gladman's
code.
Note that all comparisons to Gladman's code were done after my code was
implemented. I did only use FIPS PUB 197 for the implementation so my
implementation is independent work.
If anybody has a better assembler solution for x86_64 I'll be pleased to
have my code replaced with the better solution.

Testing:
========
The implementation passes the in-kernel crypto testing module and I'm
running it without any problems on my laptop where it is mainly used for
dm-crypt.

Microbenchmark:
===============
The microbenchmark was done in userspace with similar compile flags as
used during kernel compile.
Encrypt/decrypt is about 35% faster than the generic C implementation.
As the generic C as well as my assembler implementation are both table
driven, I don't really expect that there is much room for further
improvement, though I'll be glad to be corrected here.
The key schedule is about 5% slower than the generic C implementation.
This is due to the fact that some more work has to be done in the key
schedule routine to fit the schedule to the assembler implementation.

Code Size:
==========
Encrypt and decrypt are together about 2.1 Kbytes smaller than the
generic C implementation which is important with regard to L1 cache
usage. The key schedule routine is about 100 bytes larger than the
generic C implementation.

Data Size:
==========
There's no difference in data size requirements between the assembler
implementation and the generic C implementation.

License:
========
Gladman's code is dual BSD/GPL whereas my assembler code is GPLv2 only
(I'm not going to change the license for my code).  So I had to change
the module license for the x86_64 aes module from 'Dual BSD/GPL' to
'GPL' to reflect the most restrictive license within the module.

Signed-off-by: Andreas Steinmetz <ast@domdv.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:00 -07:00
Jesper Juhl
a61cc44812 [CRYPTO] Add null short circuit to crypto_free_tfm
As far as I'm aware there's a general consensus that functions that are
responsible for freeing resources should be able to cope with being passed
a NULL pointer.  This makes sense as it removes the need for all callers to
check for NULL, thus eliminating the bugs that happen when some forget
(it is safer to just check centrally in the freeing function), and it also
makes for smaller code overall due to the lack of all those NULL checks.
This patch makes it safe to pass the crypto_free_tfm() function a NULL
pointer. Once this patch is applied we can start removing the NULL checks
from the callers.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:54:31 -07:00
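
The simplification this enables at call sites, sketched on a hypothetical cleanup path:

    #include <linux/crypto.h>

    static void example_cleanup(struct crypto_tfm *tfm)
    {
            /* Before: every caller guarded the call:
             *     if (tfm)
             *             crypto_free_tfm(tfm);
             * After: passing NULL is simply a no-op. */
            crypto_free_tfm(tfm);
    }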
Herbert Xu
915e8561d5 [CRYPTO] Handle unaligned iv from encrypt_iv/decrypt_iv
Even though cit_iv is now always aligned, the user can still supply an
unaligned iv through crypto_cipher_encrypt_iv/crypto_cipher_decrypt_iv.
This patch will check the alignment of the user-supplied iv and copy
it if necessary.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:47 -07:00
Herbert Xu
fbdae9f3e7 [CRYPTO] Ensure cit_iv is aligned correctly
This patch ensures that cit_iv is aligned according to cra_alignmask
by allocating it as part of the tfm structure.  As a side effect the
crypto layer will also guarantee that the tfm ctx area has enough space
to be aligned by cra_alignmask.  This allows us to remove the extra
space reservation from the Padlock driver.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:29 -07:00
Adrian Bunk
176c3652c5 [CRYPTO] Make crypto_alg_lookup static
This patch makes a needlessly global function static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:09 -07:00
Herbert Xu
9547737799 [CRYPTO] Add alignmask for low-level cipher implementations
The VIA Padlock device requires the input and output buffers to
be aligned on 16-byte boundaries.  This patch adds the alignmask
attribute for low-level cipher implementations to indicate their
alignment requirements.

The mid-level crypt() function will copy the input/output buffers
if they are not aligned correctly before they are passed to the
low-level implementation.

Strictly speaking, some of the software implementations require
the buffers to be aligned on 4-byte boundaries as they do 32-bit
loads.  However, it is not clear whether it is better to copy
the buffers or pay the penalty for unaligned loads/stores.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:52:09 -07:00
Herbert Xu
40725181b7 [CRYPTO] Add support for low-level multi-block operations
This patch adds hooks for cipher algorithms to implement multi-block
ECB/CBC operations directly.  This is expected to provide significant
performance boosts to the VIA Padlock.

It could also be used for improving software implementations such as
AES where operating on multiple blocks at a time may enable certain
optimisations.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:52 -07:00
Herbert Xu
c774e93e21 [CRYPTO] Add plumbing for multi-block operations
The VIA Padlock device is able to perform much better when multiple
blocks are fed to it at once.  As this device offers an exceptional
throughput rate it is worthwhile to optimise the infrastructure
specifically for it.

We shift the existing page-sized fast path down to the CBC/ECB functions.
We can then replace the CBC/ECB functions with functions provided by the
underlying algorithm that performs the multi-block operations.

As a side-effect this improves the performance of large cipher operations
for all existing algorithm implementations.  I've measured the gain to be
around 5% for 3DES and 15% for AES.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:31 -07:00
Jesper Juhl
8279dd748f [CRYPTO] Don't check for NULL before kfree()
Checking a pointer for NULL before calling kfree() on it is redundant.
This patch removes such checks from crypto/

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:00 -07:00
Herbert Xu
6a17944ca1 [CRYPTO]: Use CPU cycle counters in tcrypt
After using this facility for a while to test my changes to the
cipher crypt() layer, I realised that I should've listend to Dave
and made this thing use CPU cycle counters :) As it is it's too
jittery for me to feel safe about relying on the results.

So here is a patch to make it use CPU cycles by default but fall
back to jiffies if the user specifies a non-zero sec value.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:29:03 -07:00
Herbert Xu
dce907c00f [CRYPTO]: Use template keys for speed tests if possible
The existing keys used in the speed tests do not pass the 3DES quality check.
This patch makes it use the template keys instead.

Other algorithms can supply template keys through the same interface if needed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:27:51 -07:00
Harald Welte
ebfd9bcf16 [CRYPTO]: Add cipher speed tests
From: Reyk Floeter <reyk@vantronix.net>

I recently had the requirement to do some benchmarking on cryptoapi, and
I found reyk's very useful performance test patch [1].

However, I could not find any discussion on why that extension (or
something providing a similar feature but different implementation) was
not merged into mainline.  If there was such a discussion, can someone
please point me to the archive[s]?

I've now merged the old patch into 2.6.12-rc1, the result can be found
attached to this email.

[1] http://lists.logix.cz/pipermail/padlock/2004/000010.html

Signed-off-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:27:23 -07:00
Herbert Xu
3cc3816f93 [CRYPTO]: Kill unnecessary strncpy from tcrypt
It seems that bad code tends to get copied (see test_cipher_speed).  So let's
kill this idiom before it spreads any further.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:26:36 -07:00
Herbert Xu
ef2736fc74 [CRYPTO]: White space and coding style clean up in tcrypt
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:26:03 -07:00
Herbert Xu
15333038d5 [CRYPTO]: Only reschedule if !in_atomic()
The netlink gfp_any() problem made me double-check the uses of in_softirq()
in crypto/*.  It seems to me that we should be checking in_atomic() instead
of in_softirq() in crypto_yield.  Otherwise people calling the crypto ops
with spin locks held or preemption disabled will get burnt, right?

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-23 12:36:25 -07:00
Patrick McHardy
d0856009db [PATCH] crypto: fix null encryption/compression
null_encrypt() needs to copy the data in case src and dst are disjunct,
null_compress() needs to copy the data in any case as far as I can tell.  I
joined compress/decompress and encrypt/decrypt to avoid duplicating code.

Without this patch ESP null_enc packets look like this:

IP (tos 0x0, ttl  64, id 23130, offset 0, flags [DF], length: 128)
10.0.0.1 > 10.0.0.2: ESP(spi=0x0f9ca149,seq=0x4)
	0x0000:  4500 0080 5a5a 4000 4032 cbef 0a00 0001  E...ZZ@.@2......
	0x0010:  0a00 0002 0f9c a149 0000 0004 0000 0000  .......I........
	0x0020:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0050:  0000                                     ..

IP (tos 0x0, ttl  64, id 256, offset 0, flags [DF], length: 128)
10.0.0.2 > 10.0.0.1: ESP(spi=0x0e4f7b51,seq=0x2)
	0x0000:  4500 0080 0100 4000 4032 254a 0a00 0002  E.....@.@2%J....
	0x0010:  0a00 0001 0e4f 7b51 0000 0002 a8a8 a8a8  .....O{Q........
	0x0020:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0030:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0040:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0050:  a8a8                                     ..

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-17 07:59:18 -07:00
Paolo 'Blaisorblade' Giarrusso
c45166be3c [PATCH] uml: support AES i586 crypto driver
We want to make it possible for the user to enable the i586 AES implementation.
This requires a restructure.

- Add a CONFIG_UML_X86 to notify that we are building a UML for i386.

- Rename CONFIG_64_BIT to CONFIG_64BIT as is used for all other archs

- Tell crypto/Kconfig that UML_X86 is as good as X86

- Tell it that it must exclude not X86_64 but 64BIT, which will give the
  same results.

- Tell kbuild to descend down into arch/i386/crypto/ to build what's needed.

Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:58:54 -07:00
Artem B. Bityuckiy
9ffb7146f0 [PATCH] crypto: call zlib end functions on deflate exit path
In the deflate_[compress|uncompress|pcompress] functions we call the
zlib_[in|de]flateReset function at the beginning.  This is OK.  But when we
unload the deflate module we don't call zlib_[in|de]flateEnd to free all
the zlib internal data.  It looks like a bug to me.  Please consider the
attached patch.

Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 15:23:58 -07:00
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00