This patch converts xts over to the skcipher interface. It also
optimises the implementation to be based on ECB instead of the
underlying cipher. For compatibility the existing naming scheme
of xts(aes) is maintained as opposed to the more obvious one of
xts(ecb(aes)).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts lrw over to the skcipher interface. It also
optimises the implementation to be based on ECB instead of the
underlying cipher. For compatibility the existing naming scheme
of lrw(aes) is maintained as opposed to the more obvious one of
lrw(ecb(aes)).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new skcipher walk interface instead of
the obsolete blkcipher walk interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the skcipher walk interface which replaces both
blkcipher walk and ablkcipher walk. Just like blkcipher walk it
can also be used for AEAD algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
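As an illustration of the walk interface described above, a typical
skcipher implementation drives it roughly like this (a minimal sketch;
my_cipher_blocks() is a hypothetical stand-in for the real
block-processing routine, not code from the patch):

  #include <crypto/aes.h>
  #include <crypto/internal/skcipher.h>

  /* Hypothetical stand-in for the actual block-processing routine. */
  void my_cipher_blocks(u8 *dst, const u8 *src, unsigned int blocks);

  static int my_ecb_encrypt(struct skcipher_request *req)
  {
          struct skcipher_walk walk;
          unsigned int blocks;
          int err;

          /* Map the request into walkable, virtually contiguous chunks. */
          err = skcipher_walk_virt(&walk, req, false);

          while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
                  my_cipher_blocks(walk.dst.virt.addr, walk.src.virt.addr,
                                   blocks);
                  /* Report the unprocessed tail; the walk advances itself. */
                  err = skcipher_walk_done(&walk,
                                           walk.nbytes % AES_BLOCK_SIZE);
          }

          return err;
  }

The same walk object serves both synchronous and asynchronous users,
which is what lets it replace blkcipher walk and ablkcipher walk.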
For consistency with the other 246 kernel configuration options,
rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Anton Blanchard <anton@samba.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This integrates both the accelerated scalar and the NEON implementations
of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.
Relative performance compared to the respective generic C versions:
            |  SHA256-scalar  | SHA256-NEON* |  SHA512  |
------------+-----------------+--------------+----------+
Cortex-A53  |      1.63x      |    1.63x     |  2.34x   |
Cortex-A57  |      1.43x      |    1.59x     |  1.95x   |
Cortex-A73  |      1.26x      |    1.56x     |    ?     |
The core crypto code was authored by Andy Polyakov of the OpenSSL
project, in collaboration with whom the upstream code was adapted so
that this module can be built from the same version of sha512-armv8.pl.
The version in this patch was taken from OpenSSL commit 32bbb62ea634
("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")
* The core SHA algorithm is fundamentally sequential, but there is a
secondary transformation involved, called the schedule update, which
can be performed independently. The NEON version of SHA-224/SHA-256
implements only this part of the algorithm using NEON instructions;
the sequential part is always done using scalar instructions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the hw_random core calls ->read with a max of 32 or more, make this
explicit. Also remove the checks for 'max' being less than 8.
Signed-off-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG segments the number of random bytes to be generated into
128 byte blocks. The current code misses the advancement of the output
buffer pointer when the requestor asks for more than 128 bytes of data.
In this case, the next 128 byte block of random numbers is copied to
the beginning of the output buffer again. This implies that only the
first 128 bytes of the output buffer would ever be filled.
The patch adds the advancement of the buffer pointer to fill the entire
buffer.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
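Schematically, the fixed generation loop advances the destination by
the amount produced in each 128-byte iteration (a simplified sketch
with hypothetical names such as drbg_generate_block(), not the actual
drbg.c code):

  #include <linux/kernel.h>

  /* Hypothetical stand-in producing at most 128 bytes per call. */
  int drbg_generate_block(void *drbg, unsigned char *out, unsigned int len);

  static int drbg_generate_all(void *drbg, unsigned char *outbuf,
                               unsigned int outlen)
  {
          unsigned char *out = outbuf;
          unsigned int left = outlen;

          while (left) {
                  unsigned int chunk = min_t(unsigned int, left, 128);
                  int ret = drbg_generate_block(drbg, out, chunk);

                  if (ret < 0)
                          return ret;

                  out += chunk;   /* the previously missing advancement */
                  left -= chunk;
          }

          return 0;
  }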
First up, clean up the generated .S files properly on a 'make clean'.
Secondly, force re-generation of these files when building for different
endian-ness than what was built previously. Finally, generate the new
files in the build tree, rather than the source tree.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Current multi-buffer hash implementations restrict the total length of
a hash job to 512 MB. Hashing larger buffers results in an incorrect
hash. This extends the limit to 2^62 - 1.
Signed-off-by: Greg Tucker <greg.b.tucker@intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
GF(2^128) multiplication tables are typically used for secret
information, so it's a good idea to zero them on free.
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
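The idea is simply to wipe key-derived table memory before handing it
back to the allocator, along these lines (a sketch only; my_table is a
placeholder, not the gf128mul structures):

  #include <linux/slab.h>
  #include <linux/string.h>
  #include <linux/types.h>

  /* Placeholder for a key-derived multiplication table. */
  struct my_table {
          u64 t[256][2];
  };

  static void my_table_free(struct my_table *t)
  {
          if (!t)
                  return;
          memzero_explicit(t, sizeof(*t));  /* wipe secret-derived data */
          kfree(t);                         /* or simply use kzfree(t) */
  }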
Since clk_prepare_enable() is used to enable trng->clk, we should
use clk_disable_unprepare() to release it on the error path.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
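In other words, once clk_prepare_enable() has succeeded, any later
failure must undo both the enable and the prepare step; a minimal
sketch of the pattern (my_trng and my_trng_init() are hypothetical):

  #include <linux/clk.h>

  struct my_trng {
          struct clk *clk;
  };

  int my_trng_init(struct my_trng *trng);   /* hypothetical follow-up step */

  static int my_trng_enable(struct my_trng *trng)
  {
          int err;

          err = clk_prepare_enable(trng->clk);
          if (err)
                  return err;

          err = my_trng_init(trng);
          if (err)
                  goto err_disable_clk;

          return 0;

  err_disable_clk:
          /* Undo clk_prepare_enable(), not just clk_disable(). */
          clk_disable_unprepare(trng->clk);
          return err;
  }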
Drop duplicate header module.h from jitterentropy-kcapi.c.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shared descriptors used by ahash_final() and ahash_finup()
are identical, thus get rid of one of them (sh_desc_finup).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The pointer to the descriptor buffer is not touched; it always points
to the start of the descriptor buffer.
Thus, make it const.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sec4_sg_entry structure is used only by helper functions in sg_sw_sec4.h.
Since SEC HW S/G entries are to be manipulated only indirectly, via these
functions, move sec4_sg_entry to the corresponding header.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 66d2e20280.
Quoting from Russell's findings:
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg21136.html
[quote]
Okay, I've re-tested, using a different way of measuring, because using
openssl speed is impractical for off-loaded engines. I've decided to
use this way to measure the performance:
dd if=/dev/zero bs=1048576 count=128 | /usr/bin/time openssl dgst -md5
For the threaded IRQs case gives:
0.05user 2.74system 0:05.30elapsed 52%CPU (0avgtext+0avgdata 2400maxresident)k
0.06user 2.52system 0:05.18elapsed 49%CPU (0avgtext+0avgdata 2404maxresident)k
0.12user 2.60system 0:05.61elapsed 48%CPU (0avgtext+0avgdata 2460maxresident)k
=> 5.36s => 25.0MB/s
and the tasklet case:
0.08user 2.53system 0:04.83elapsed 54%CPU (0avgtext+0avgdata 2468maxresident)k
0.09user 2.47system 0:05.16elapsed 49%CPU (0avgtext+0avgdata 2368maxresident)k
0.10user 2.51system 0:04.87elapsed 53%CPU (0avgtext+0avgdata 2460maxresident)k
=> 4.95 => 27.1MB/s
which corresponds to an 8% slowdown for the threaded IRQ case. So,
tasklets are indeed faster than threaded IRQs.
[...]
I think I've proven from the above that this patch needs to be reverted
due to the performance regression, and that there _is_ most definitely
a detrimental effect of switching from tasklets to threaded IRQs.
[/quote]
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ablkcipher_edesc_alloc() and ablkcipher_giv_edesc_alloc() don't
free / unmap resources on the error path:
- dma_map_sg() could fail, thus make sure the return value is checked
- unmap DMA mappings in case of error
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ERRID is a 4-bit field.
Since err_id values are in [0..15] and err_id_list array size is 16,
the condition "err_id < ARRAY_SIZE(err_id_list)" is always true.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
REG3 no longer needs to be updated, since it's not used after that.
This shared descriptor command is a leftover of the conversion to
AEAD interface.
Fixes: 479bcc7c5b "crypto: caam - Convert authenc to new AEAD interface"
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the following smatch warnings:
drivers/crypto/caam/caamalg.c:2350 aead_edesc_alloc() warn: we tested 'src_nents' before and it was 'true'
drivers/crypto/caam/caamrng.c:351 caam_rng_init() error: no modifiers for allocation.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
1. fix HDR_START_IDX_MASK, HDR_SD_SHARE_MASK, HDR_JD_SHARE_MASK
Define HDR_START_IDX_MASK consistently with the other masks:
mask = bitmask << offset
2. OP_ALG_TYPE_CLASS1 and OP_ALG_TYPE_CLASS2 must be shifted.
3. fix FIFO_STORE output data type value for AFHA S-Box
4. fix OPERATION pkha modular arithmetic source mask
5. rename LDST_SRCDST_WORD_CLASS1_ICV_SZ to
LDST_SRCDST_WORD_CLASS1_IV_SZ (it refers to IV, not ICV).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 4464a7d4f5
("crypto: caam - remove error propagation handling")
removed error propagation handling only from caamalg.
Do this in all other places: caamhash, caamrng.
Update descriptors' lengths appropriately.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The AEAD givenc descriptor relies on moving the IV through the
output FIFO and then back to the CTX2 for authentication. The
SEQ FIFO STORE could be scheduled before the data can be
read from OFIFO, especially since the SEQ FIFO LOAD needs
to wait for the SEQ FIFO LOAD SKIP to finish first. The
SKIP takes more time when the input is SG than when it's
a contiguous buffer. If the SEQ FIFO LOAD is not scheduled
before the STORE, the DECO will hang waiting for data
to be available in the OFIFO so it can be transferred to C2.
To overcome this, force the transfer of the IV to C2 by starting
the "cryptlen" transfer first, and only then start storing data from
the OFIFO to the output buffer.
Fixes: 1acebad3d8 ("crypto: caam - faster aead implementation")
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This code is unlikely to be useful in the future because transforms
don't know how often keys will be changed, new algorithms are unlikely
to use lle representation, and tables should be replaced with
carryless multiplication instructions when available.
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the single instance where a positive EINVAL was returned.
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By using the unaligned access helpers, we drastically improve
performance on small MIPS routers that have to go through the exception
fix-up handler for these unaligned accesses.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
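Concretely, word loads from caller-supplied key/IV buffers go through
the helpers instead of casting to u32 pointers; a sketch of the
pattern (my_load_key_words() is illustrative, not the patch hunk):

  #include <asm/unaligned.h>
  #include <linux/types.h>

  static void my_load_key_words(u32 state[4], const u8 *key)
  {
          /* A direct ((const u32 *)key)[i] load may fault on MIPS when
           * 'key' is not 4-byte aligned; the helpers emit safe loads. */
          state[0] = get_unaligned_le32(key + 0);
          state[1] = get_unaligned_le32(key + 4);
          state[2] = get_unaligned_le32(key + 8);
          state[3] = get_unaligned_le32(key + 12);
  }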
Switch to the resource-managed devm_kzalloc instead of kzalloc and
remove the now-unneeded kfree. Also remove the kfree in the probe
function, and remove the mv_remove function, as it now has nothing
to do.
The Coccinelle semantic patch used to make this change is as follows:
//<smpl>
@platform@
identifier p, probefn, removefn;
@@
struct platform_driver p = {
.probe = probefn,
.remove = removefn,
};
@prb@
identifier platform.probefn, pdev;
expression e, e1, e2;
@@
probefn(struct platform_device *pdev, ...) {
<+...
- e = kzalloc(e1, e2)
+ e = devm_kzalloc(&pdev->dev, e1, e2)
...
?-kfree(e);
...+>
}
@rem depends on prb@
identifier platform.removefn;
expression prb.e;
@@
removefn(...) {
<...
- kfree(e);
...>
}
//</smpl>
Signed-off-by: Nadim Almas <nadim.902@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Trivial fix to spelling mistake "pointeur" to "pointer"
in dev_err message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The exponent size in the ccp_op structure is in bits. A v5
CCP requires the exponent size to be in bytes, so convert
the size from bits to bytes when populating the descriptor.
The current code references the exponent in memory, but
these fields have not been set since the exponent is
actually stored in the LSB. Populate the descriptor with
the LSB location (address).
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
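The bits-to-bytes conversion itself is just a rounded-up divide, e.g.
(illustrative helper, not the actual ccp_op field names):

  #include <linux/kernel.h>

  /* The v5 descriptor expects the exponent size in bytes, while the
   * ccp_op structure tracks it in bits. */
  static u32 my_exp_size_in_bytes(u32 exp_size_in_bits)
  {
          return DIV_ROUND_UP(exp_size_in_bits, 8);
  }

For example, a 2048-bit exponent is written into the descriptor as
256 bytes.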
Remove the unused but set variable tfm in cryptd_enqueue_request to fix
the following warning when building with 'W=1':
crypto/cryptd.c:125:21: warning: variable 'tfm' set but not used [-Wunused-but-set-variable]
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent. So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent. So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To fix the overconsumption on VDDCore caused by leaving the TRNG
enabled, disable the TRNG itself during suspend instead of only
disabling the user interface clock (which is controlled by the PMC),
because the user interface clock is independent of any clock that may
be used in the entropy source logic circuitry.
Signed-off-by: Wenyou Yang <wenyou.yang@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
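The resulting suspend path therefore quiesces the entropy source
itself and then gates its clock; a minimal sketch with hypothetical
names (my_trng_disable() stands in for the actual control-register
write):

  #include <linux/clk.h>
  #include <linux/device.h>

  struct my_atmel_trng {
          struct clk *clk;
  };

  void my_trng_disable(struct my_atmel_trng *trng);  /* hypothetical */

  static int my_trng_suspend(struct device *dev)
  {
          struct my_atmel_trng *trng = dev_get_drvdata(dev);

          /* Stop the entropy source logic, not just its bus clock. */
          my_trng_disable(trng);
          clk_disable_unprepare(trng->clk);

          return 0;
  }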
Fix a dependency between acomp and scomp that appears when acomp is
built as a module.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building the caam driver on arm64 produces a harmless warning:
drivers/crypto/caam/caamalg.c:140:139: warning: comparison of distinct pointer types lacks a cast
We can use min_t to tell the compiler which type we want it to use
here.
Fixes: 5ecf8ef910 ("crypto: caam - fix sg dump")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
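For reference, min_t() avoids the warning by casting both operands to
the named type before comparing, for example (illustrative only):

  #include <linux/kernel.h>

  static size_t my_clamp_len(size_t len, unsigned int avail)
  {
          /* min(len, avail) warns because the operand types differ;
           * min_t() casts both to size_t first. */
          return min_t(size_t, len, avail);
  }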
Trivial fix to typo in dev_dbg message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no need to have the 'struct atmel_aes_dev *aes_dd' variable
static, since a new value is always assigned to it before use.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The HWRNG core allocates two buffers during initialization which are
used to obtain random data. After that data is processed, it is now
zeroized as it is possible that the HWRNG core will not be asked to
produce more random data for a long time. This prevents leaving such
sensitive data in memory.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add tests to the test manager for algorithms exposed through acomp.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for deflate compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for 842 compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lz4hc compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lz4 compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scomp backend for lzo compression algorithm.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a synchronous back-end (scomp) to acomp. This allows the
compression algorithms already present in the LKCF to be easily
exposed via acomp.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
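A compression algorithm then plugs into this back-end by filling in a
struct scomp_alg and registering it, much as the deflate/842/lz4/lzo
patches above do; a rough sketch (the my_* callbacks and names are
placeholders assumed to be defined elsewhere):

  #include <crypto/internal/scompress.h>
  #include <linux/module.h>

  static struct scomp_alg my_scomp = {
          .alloc_ctx   = my_alloc_ctx,    /* per-request scratch memory */
          .free_ctx    = my_free_ctx,
          .compress    = my_compress,     /* (tfm, src, slen, dst, &dlen, ctx) */
          .decompress  = my_decompress,
          .base        = {
                  .cra_name        = "my-alg",
                  .cra_driver_name = "my-alg-scomp",
                  .cra_module      = THIS_MODULE,
          },
  };

  static int __init my_scomp_init(void)
  {
          /* acomp users can then reach "my-alg" through the scomp bridge. */
          return crypto_register_scomp(&my_scomp);
  }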