// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Algorithm testing framework and tests.
 *
 * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
 * Copyright (c) 2002 Jean-Francois Dive <jef@linuxbe.org>
 * Copyright (c) 2007 Nokia Siemens Networks
 * Copyright (c) 2008 Herbert Xu <herbert@gondor.apana.org.au>
 * Copyright (c) 2019 Google LLC
 *
 * Updated RFC4106 AES-GCM testing.
 *    Authors: Aidan O'Mahony (aidan.o.mahony@intel.com)
 *             Adrian Hoban <adrian.hoban@intel.com>
 *             Gabriele Paoloni <gabriele.paoloni@intel.com>
 *             Tadeusz Struk (tadeusz.struk@intel.com)
 *             Copyright (c) 2010, Intel Corporation.
 */

#include <crypto/aead.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/fips.h>
#include <linux/module.h>
#include <linux/once.h>
#include <linux/prandom.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uio.h>
#include <crypto/rng.h>
#include <crypto/drbg.h>
#include <crypto/akcipher.h>
#include <crypto/kpp.h>
#include <crypto/acompress.h>
#include <crypto/sig.h>
#include <crypto/internal/cipher.h>
#include <crypto/internal/simd.h>

#include "internal.h"

MODULE_IMPORT_NS(CRYPTO_INTERNAL);

static bool notests;
module_param(notests, bool, 0644);
MODULE_PARM_DESC(notests, "disable crypto self-tests");

static bool panic_on_fail;
module_param(panic_on_fail, bool, 0444);

#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
static bool noextratests;
module_param(noextratests, bool, 0644);
MODULE_PARM_DESC(noextratests, "disable expensive crypto self-tests");

static unsigned int fuzz_iterations = 100;
module_param(fuzz_iterations, uint, 0644);
MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");
#endif
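
/*
 * Illustrative usage sketch (not part of the original file): these are module
 * parameters, so assuming testmgr is built into the cryptomgr module as in the
 * mainline build, they would typically be set on the kernel command line, e.g.:
 *
 *	cryptomgr.notests=1		(skip the crypto self-tests entirely)
 *	cryptomgr.fuzz_iterations=1000	(more fuzz iterations; extra tests only)
 *
 * The exact parameter prefix depends on how this file is built into the kernel.
 */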
#ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS

/* a perfect nop */
int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
{
	return 0;
}

#else

#include "testmgr.h"

/*
 * Need slab memory for testing (size in number of pages).
 */
#define XBUFSIZE	8

/*
 * Used by test_cipher()
 */
#define ENCRYPT 1
#define DECRYPT 0

struct aead_test_suite {
	const struct aead_testvec *vecs;
	unsigned int count;

	/*
	 * Set if trying to decrypt an inauthentic ciphertext with this
	 * algorithm might result in EINVAL rather than EBADMSG, due to other
	 * validation the algorithm does on the inputs such as length checks.
	 */
	unsigned int einval_allowed : 1;

	/*
	 * Set if this algorithm requires that the IV be located at the end of
	 * the AAD buffer, in addition to being given in the normal way.  The
	 * behavior when the two IV copies differ is implementation-defined.
	 */
	unsigned int aad_iv : 1;
};

struct cipher_test_suite {
	const struct cipher_testvec *vecs;
	unsigned int count;
};

struct comp_test_suite {
	struct {
		const struct comp_testvec *vecs;
		unsigned int count;
	} comp, decomp;
};

struct hash_test_suite {
	const struct hash_testvec *vecs;
	unsigned int count;
};

struct cprng_test_suite {
	const struct cprng_testvec *vecs;
	unsigned int count;
};

struct drbg_test_suite {
	const struct drbg_testvec *vecs;
	unsigned int count;
};

struct akcipher_test_suite {
	const struct akcipher_testvec *vecs;
	unsigned int count;
};

struct sig_test_suite {
	const struct sig_testvec *vecs;
	unsigned int count;
};

struct kpp_test_suite {
	const struct kpp_testvec *vecs;
	unsigned int count;
};

struct alg_test_desc {
	const char *alg;
	const char *generic_driver;
	int (*test)(const struct alg_test_desc *desc, const char *driver,
		    u32 type, u32 mask);
	int fips_allowed;	/* set if alg is allowed in fips mode */

	union {
		struct aead_test_suite aead;
		struct cipher_test_suite cipher;
		struct comp_test_suite comp;
		struct hash_test_suite hash;
		struct cprng_test_suite cprng;
		struct drbg_test_suite drbg;
		struct akcipher_test_suite akcipher;
		struct sig_test_suite sig;
		struct kpp_test_suite kpp;
	} suite;
};
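
/*
 * Illustrative example (not part of the original file): entries in the
 * alg_test_descs[] table later in this file bind an algorithm name to a test
 * function and a suite of test vectors from testmgr.h, roughly like:
 *
 *	{
 *		.alg = "cbc(aes)",
 *		.test = alg_test_skcipher,
 *		.fips_allowed = 1,
 *		.suite = {
 *			.cipher = __VECS(aes_cbc_tv_template)
 *		}
 *	}
 *
 * where __VECS() is a convenience macro used later in this file that expands
 * to { .vecs = tv, .count = ARRAY_SIZE(tv) }.
 */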

static void hexdump(unsigned char *buf, unsigned int len)
{
	print_hex_dump(KERN_CONT, "", DUMP_PREFIX_OFFSET,
			16, 1,
			buf, len, false);
}

static int __testmgr_alloc_buf(char *buf[XBUFSIZE], int order)
{
	int i;

	for (i = 0; i < XBUFSIZE; i++) {
		buf[i] = (char *)__get_free_pages(GFP_KERNEL, order);
		if (!buf[i])
			goto err_free_buf;
	}

	return 0;

err_free_buf:
	while (i-- > 0)
		free_pages((unsigned long)buf[i], order);

	return -ENOMEM;
}

static int testmgr_alloc_buf(char *buf[XBUFSIZE])
{
	return __testmgr_alloc_buf(buf, 0);
}

static void __testmgr_free_buf(char *buf[XBUFSIZE], int order)
{
	int i;

	for (i = 0; i < XBUFSIZE; i++)
		free_pages((unsigned long)buf[i], order);
}

static void testmgr_free_buf(char *buf[XBUFSIZE])
{
	__testmgr_free_buf(buf, 0);
}
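
/*
 * Illustrative usage sketch (not part of the original file): the test routines
 * below typically allocate a set of page-sized working buffers up front and
 * release them when done, e.g.:
 *
 *	char *xbuf[XBUFSIZE];
 *
 *	if (testmgr_alloc_buf(xbuf))
 *		return -ENOMEM;
 *	// copy test vector data into xbuf[] and build scatterlists over it
 *	testmgr_free_buf(xbuf);
 */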

#define TESTMGR_POISON_BYTE	0xfe
#define TESTMGR_POISON_LEN	16

static inline void testmgr_poison(void *addr, size_t len)
{
	memset(addr, TESTMGR_POISON_BYTE, len);
}

/* Is the memory region still fully poisoned? */
static inline bool testmgr_is_poison(const void *addr, size_t len)
{
	return memchr_inv(addr, TESTMGR_POISON_BYTE, len) == NULL;
}
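
/*
 * Illustrative sketch (not part of the original file): the poison helpers are
 * used to catch out-of-bounds writes by the algorithm under test.  A typical
 * pattern is to poison the bytes just past the expected output, run the
 * operation, and then check that the poison is still intact, e.g.:
 *
 *	testmgr_poison(outbuf + outlen, TESTMGR_POISON_LEN);
 *	// ... perform the crypto operation into outbuf ...
 *	if (!testmgr_is_poison(outbuf + outlen, TESTMGR_POISON_LEN))
 *		pr_err("buffer overrun detected\n");
 */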

/* flush type for hash algorithms */
enum flush_type {
	/* merge with update of previous buffer(s) */
	FLUSH_TYPE_NONE = 0,

	/* update with previous buffer(s) before doing this one */
	FLUSH_TYPE_FLUSH,

	/* likewise, but also export and re-import the intermediate state */
	FLUSH_TYPE_REIMPORT,
};

/* finalization function for hash algorithms */
enum finalization_type {
	FINALIZATION_TYPE_FINAL,	/* use final() */
	FINALIZATION_TYPE_FINUP,	/* use finup() */
	FINALIZATION_TYPE_DIGEST,	/* use digest() */
};

/*
 * Whether the crypto operation will occur in-place, and if so whether the
 * source and destination scatterlist pointers will coincide (req->src ==
 * req->dst), or whether they'll merely point to two separate scatterlists
 * (req->src != req->dst) that reference the same underlying memory.
 *
 * This is only relevant for algorithm types that support in-place operation.
 */
enum inplace_mode {
	OUT_OF_PLACE,
	INPLACE_ONE_SGLIST,
	INPLACE_TWO_SGLISTS,
};
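
/*
 * Illustrative sketch (not part of the original file): for INPLACE_ONE_SGLIST
 * the test passes the same scatterlist as both source and destination, while
 * for INPLACE_TWO_SGLISTS it builds two independent scatterlists over the same
 * underlying buffer, e.g.:
 *
 *	struct scatterlist src, dst;
 *
 *	sg_init_one(&src, buf, len);
 *	sg_init_one(&dst, buf, len);	// same memory, distinct sglist
 *	skcipher_request_set_crypt(req, &src, &dst, len, iv);
 */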

#define TEST_SG_TOTAL	10000

/**
 * struct test_sg_division - description of a scatterlist entry
 *
 * This struct describes one entry of a scatterlist being constructed to check a
 * crypto test vector.
 *
 * @proportion_of_total: length of this chunk relative to the total length,
 *			 given as a proportion out of TEST_SG_TOTAL so that it
 *			 scales to fit any test vector
 * @offset: byte offset into a 2-page buffer at which this chunk will start
 * @offset_relative_to_alignmask: if true, add the algorithm's alignmask to the
 *				  @offset
 * @flush_type: for hashes, whether an update() should be done now vs.
 *		continuing to accumulate data
 * @nosimd: if doing the pending update(), do it with SIMD disabled?
 */
struct test_sg_division {
	unsigned int proportion_of_total;
	unsigned int offset;
	bool offset_relative_to_alignmask;
	enum flush_type flush_type;
	bool nosimd;
};
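
/*
 * Worked example (illustrative, not part of the original file): for a 48-byte
 * test vector, the divisions { .proportion_of_total = 5000 } and
 * { .proportion_of_total = 5000 } each cover 5000/TEST_SG_TOTAL = 1/2 of the
 * data, i.e. two 24-byte scatterlist entries; applied to a 16-byte vector, the
 * same divisions yield two 8-byte entries.
 */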

/**
 * struct testvec_config - configuration for testing a crypto test vector
 *
 * This struct describes the data layout and other parameters with which each
 * crypto test vector can be tested.
 *
 * @name: name of this config, logged for debugging purposes if a test fails
 * @inplace_mode: whether and how to operate on the data in-place, if applicable
 * @req_flags: extra request_flags, e.g. CRYPTO_TFM_REQ_MAY_SLEEP
 * @src_divs: description of how to arrange the source scatterlist
 * @dst_divs: description of how to arrange the dst scatterlist, if applicable
 *	      for the algorithm type.  Defaults to @src_divs if unset.
 * @iv_offset: misalignment of the IV in the range [0..MAX_ALGAPI_ALIGNMASK+1],
 *	       where 0 is aligned to a 2*(MAX_ALGAPI_ALIGNMASK+1) byte boundary
 * @iv_offset_relative_to_alignmask: if true, add the algorithm's alignmask to
 *				     the @iv_offset
 * @key_offset: misalignment of the key, where 0 is default alignment
 * @key_offset_relative_to_alignmask: if true, add the algorithm's alignmask to
 *				      the @key_offset
 * @finalization_type: what finalization function to use for hashes
 * @nosimd: execute with SIMD disabled?  Requires !CRYPTO_TFM_REQ_MAY_SLEEP.
 *	    This applies to the parts of the operation that aren't controlled
 *	    individually by @nosimd_setkey or @src_divs[].nosimd.
 * @nosimd_setkey: set the key (if applicable) with SIMD disabled?  Requires
 *		   !CRYPTO_TFM_REQ_MAY_SLEEP.
 */
struct testvec_config {
	const char *name;
	enum inplace_mode inplace_mode;
	u32 req_flags;
	struct test_sg_division src_divs[XBUFSIZE];
	struct test_sg_division dst_divs[XBUFSIZE];
	unsigned int iv_offset;
	unsigned int key_offset;
	bool iv_offset_relative_to_alignmask;
	bool key_offset_relative_to_alignmask;
	enum finalization_type finalization_type;
	bool nosimd;
	bool nosimd_setkey;
};

#define TESTVEC_CONFIG_NAMELEN	192

/*
 * The following are the lists of testvec_configs to test for each algorithm
 * type when the basic crypto self-tests are enabled, i.e. when
 * CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is unset. They aim to provide good test
 * coverage, while keeping the test time much shorter than the full fuzz tests
 * so that the basic tests can be enabled in a wider range of circumstances.
 */

/* Configs for skciphers and aeads */
static const struct testvec_config default_cipher_testvec_configs[] = {
	{
		.name = "in-place (one sglist)",
		.inplace_mode = INPLACE_ONE_SGLIST,
		.src_divs = { { .proportion_of_total = 10000 } },
	}, {
		.name = "in-place (two sglists)",
		.inplace_mode = INPLACE_TWO_SGLISTS,
		.src_divs = { { .proportion_of_total = 10000 } },
	}, {
		.name = "out-of-place",
		.inplace_mode = OUT_OF_PLACE,
		.src_divs = { { .proportion_of_total = 10000 } },
	}, {
		.name = "unaligned buffer, offset=1",
		.src_divs = { { .proportion_of_total = 10000, .offset = 1 } },
		.iv_offset = 1,
		.key_offset = 1,
	}, {
		.name = "buffer aligned only to alignmask",
		.src_divs = {
			{
				.proportion_of_total = 10000,
				.offset = 1,
				.offset_relative_to_alignmask = true,
			},
		},
		.iv_offset = 1,
		.iv_offset_relative_to_alignmask = true,
		.key_offset = 1,
		.key_offset_relative_to_alignmask = true,
	}, {
		.name = "two even aligned splits",
		.src_divs = {
			{ .proportion_of_total = 5000 },
			{ .proportion_of_total = 5000 },
		},
	}, {
		.name = "one src, two even splits dst",
		.inplace_mode = OUT_OF_PLACE,
		.src_divs = { { .proportion_of_total = 10000 } },
		.dst_divs = {
			{ .proportion_of_total = 5000 },
			{ .proportion_of_total = 5000 },
		},
	}, {
		.name = "uneven misaligned splits, may sleep",
		.req_flags = CRYPTO_TFM_REQ_MAY_SLEEP,
		.src_divs = {
			{ .proportion_of_total = 1900, .offset = 33 },
			{ .proportion_of_total = 3300, .offset = 7 },
			{ .proportion_of_total = 4800, .offset = 18 },
		},
		.iv_offset = 3,
		.key_offset = 3,
	}, {
		.name = "misaligned splits crossing pages, inplace",
		.inplace_mode = INPLACE_ONE_SGLIST,
		.src_divs = {
			{
				.proportion_of_total = 7500,
				.offset = PAGE_SIZE - 32
			}, {
				.proportion_of_total = 2500,
				.offset = PAGE_SIZE - 7
			},
		},
	}
};
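
/*
 * The .proportion_of_total values above are scaled fractions of TEST_SG_TOTAL
 * (10000), so a hypothetical extra entry splitting the source 25%/75% would
 * look like the following sketch (illustrative only, not part of the default
 * list):
 *
 *	{
 *		.name = "example: 25/75 split",
 *		.src_divs = {
 *			{ .proportion_of_total = 2500 },
 *			{ .proportion_of_total = 7500, .offset = 9 },
 *		},
 *	},
 */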

static const struct testvec_config default_hash_testvec_configs[] = {
	{
		.name = "init+update+final aligned buffer",
		.src_divs = { { .proportion_of_total = 10000 } },
		.finalization_type = FINALIZATION_TYPE_FINAL,
	}, {
		.name = "init+finup aligned buffer",
		.src_divs = { { .proportion_of_total = 10000 } },
		.finalization_type = FINALIZATION_TYPE_FINUP,
	}, {
		.name = "digest aligned buffer",
		.src_divs = { { .proportion_of_total = 10000 } },
		.finalization_type = FINALIZATION_TYPE_DIGEST,
	}, {
		.name = "init+update+final misaligned buffer",
		.src_divs = { { .proportion_of_total = 10000, .offset = 1 } },
		.finalization_type = FINALIZATION_TYPE_FINAL,
		.key_offset = 1,
	}, {
		.name = "digest misaligned buffer",
		.src_divs = {
			{
				.proportion_of_total = 10000,
				.offset = 1,
			},
		},
		.finalization_type = FINALIZATION_TYPE_DIGEST,
		.key_offset = 1,
	}, {
		.name = "init+update+update+final two even splits",
		.src_divs = {
			{ .proportion_of_total = 5000 },
			{
				.proportion_of_total = 5000,
				.flush_type = FLUSH_TYPE_FLUSH,
			},
		},
		.finalization_type = FINALIZATION_TYPE_FINAL,
	}, {
		.name = "digest uneven misaligned splits, may sleep",
		.req_flags = CRYPTO_TFM_REQ_MAY_SLEEP,
		.src_divs = {
			{ .proportion_of_total = 1900, .offset = 33 },
			{ .proportion_of_total = 3300, .offset = 7 },
			{ .proportion_of_total = 4800, .offset = 18 },
		},
		.finalization_type = FINALIZATION_TYPE_DIGEST,
	}, {
		.name = "digest misaligned splits crossing pages",
		.src_divs = {
			{
				.proportion_of_total = 7500,
				.offset = PAGE_SIZE - 32,
			}, {
				.proportion_of_total = 2500,
				.offset = PAGE_SIZE - 7,
			},
		},
		.finalization_type = FINALIZATION_TYPE_DIGEST,
	}, {
		.name = "import/export",
		.src_divs = {
			{
				.proportion_of_total = 6500,
				.flush_type = FLUSH_TYPE_REIMPORT,
			}, {
				.proportion_of_total = 3500,
				.flush_type = FLUSH_TYPE_REIMPORT,
			},
		},
		.finalization_type = FINALIZATION_TYPE_FINAL,
	}
};
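
/*
 * As the config names above suggest, each finalization type exercises one of
 * the standard hash call sequences (roughly):
 *
 *	FINALIZATION_TYPE_FINAL:  init(), update() per division, then final()
 *	FINALIZATION_TYPE_FINUP:  init(), update()s, with finup() as last step
 *	FINALIZATION_TYPE_DIGEST: a single digest() call
 *
 * FLUSH_TYPE_REIMPORT additionally export()s and re-import()s the hash state
 * between updates.
 */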

static unsigned int count_test_sg_divisions(const struct test_sg_division *divs)
{
	unsigned int remaining = TEST_SG_TOTAL;
	unsigned int ndivs = 0;

	do {
		remaining -= divs[ndivs++].proportion_of_total;
	} while (remaining);

	return ndivs;
}
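
/*
 * For example, a division list whose proportions are { 6500, 3500 } consumes
 * all of TEST_SG_TOTAL (10000) after two entries, so
 * count_test_sg_divisions() returns 2.
 */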

#define SGDIVS_HAVE_FLUSHES	BIT(0)
#define SGDIVS_HAVE_NOSIMD	BIT(1)

static bool valid_sg_divisions(const struct test_sg_division *divs,
			       unsigned int count, int *flags_ret)
{
	unsigned int total = 0;
	unsigned int i;

	for (i = 0; i < count && total != TEST_SG_TOTAL; i++) {
		if (divs[i].proportion_of_total <= 0 ||
		    divs[i].proportion_of_total > TEST_SG_TOTAL - total)
			return false;
		total += divs[i].proportion_of_total;
		if (divs[i].flush_type != FLUSH_TYPE_NONE)
			*flags_ret |= SGDIVS_HAVE_FLUSHES;
		if (divs[i].nosimd)
			*flags_ret |= SGDIVS_HAVE_NOSIMD;
	}
	return total == TEST_SG_TOTAL &&
	       memchr_inv(&divs[i], 0, (count - i) * sizeof(divs[0])) == NULL;
}

/*
 * Check whether the given testvec_config is valid. This isn't strictly needed
 * since every testvec_config should be valid, but check anyway so that people
 * don't unknowingly add broken configs that don't do what they wanted.
 */
static bool valid_testvec_config(const struct testvec_config *cfg)
{
	int flags = 0;

	if (cfg->name == NULL)
		return false;

	if (!valid_sg_divisions(cfg->src_divs, ARRAY_SIZE(cfg->src_divs),
				&flags))
		return false;

	if (cfg->dst_divs[0].proportion_of_total) {
		if (!valid_sg_divisions(cfg->dst_divs,
					ARRAY_SIZE(cfg->dst_divs), &flags))
			return false;
	} else {
		if (memchr_inv(cfg->dst_divs, 0, sizeof(cfg->dst_divs)))
			return false;
		/* defaults to dst_divs=src_divs */
	}

	if (cfg->iv_offset +
	    (cfg->iv_offset_relative_to_alignmask ? MAX_ALGAPI_ALIGNMASK : 0) >
	    MAX_ALGAPI_ALIGNMASK + 1)
		return false;

	if ((flags & (SGDIVS_HAVE_FLUSHES | SGDIVS_HAVE_NOSIMD)) &&
	    cfg->finalization_type == FINALIZATION_TYPE_DIGEST)
		return false;

	if ((cfg->nosimd || cfg->nosimd_setkey ||
	     (flags & SGDIVS_HAVE_NOSIMD)) &&
	    (cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP))
		return false;

	return true;
}
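
/*
 * Example of a config this check rejects (sketch): one whose src_divs
 * proportions don't add up to TEST_SG_TOTAL, e.g. a lone
 * { .proportion_of_total = 5000 } division, since valid_sg_divisions()
 * requires the proportions to sum to exactly TEST_SG_TOTAL.
 */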

struct test_sglist {
	char *bufs[XBUFSIZE];
	struct scatterlist sgl[XBUFSIZE];
	struct scatterlist sgl_saved[XBUFSIZE];
	struct scatterlist *sgl_ptr;
	unsigned int nents;
};
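
/*
 * sgl_saved keeps a copy of the initial sgl entries so that
 * is_test_sglist_corrupted() below can detect a driver that modified the
 * scatterlist it was given; sgl_ptr is what the test code actually passes to
 * the crypto API and may alias another test_sglist's sgl for in-place tests.
 */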

static int init_test_sglist(struct test_sglist *tsgl)
{
	return __testmgr_alloc_buf(tsgl->bufs, 1 /* two pages per buffer */);
}

static void destroy_test_sglist(struct test_sglist *tsgl)
{
	return __testmgr_free_buf(tsgl->bufs, 1 /* two pages per buffer */);
}

/**
 * build_test_sglist() - build a scatterlist for a crypto test
 *
 * @tsgl: the scatterlist to build. @tsgl->bufs[] contains an array of 2-page
 *	  buffers which the scatterlist @tsgl->sgl[] will be made to point into.
 * @divs: the layout specification on which the scatterlist will be based
 * @alignmask: the algorithm's alignmask
 * @total_len: the total length of the scatterlist to build in bytes
 * @data: if non-NULL, the buffers will be filled with this data until it ends.
 *	  Otherwise the buffers will be poisoned. In both cases, some bytes
 *	  past the end of each buffer will be poisoned to help detect overruns.
 * @out_divs: if non-NULL, the test_sg_division to which each scatterlist entry
 *	      corresponds will be returned here. This will match @divs except
 *	      that divisions resolving to a length of 0 are omitted as they are
 *	      not included in the scatterlist.
 *
 * Return: 0 or a -errno value
 */
static int build_test_sglist(struct test_sglist *tsgl,
			     const struct test_sg_division *divs,
			     const unsigned int alignmask,
			     const unsigned int total_len,
			     struct iov_iter *data,
			     const struct test_sg_division *out_divs[XBUFSIZE])
{
	struct {
		const struct test_sg_division *div;
		size_t length;
	} partitions[XBUFSIZE];
	const unsigned int ndivs = count_test_sg_divisions(divs);
	unsigned int len_remaining = total_len;
	unsigned int i;

	BUILD_BUG_ON(ARRAY_SIZE(partitions) != ARRAY_SIZE(tsgl->sgl));
	if (WARN_ON(ndivs > ARRAY_SIZE(partitions)))
		return -EINVAL;

	/* Calculate the (div, length) pairs */
	tsgl->nents = 0;
	for (i = 0; i < ndivs; i++) {
		unsigned int len_this_sg =
			min(len_remaining,
			    (total_len * divs[i].proportion_of_total +
			     TEST_SG_TOTAL / 2) / TEST_SG_TOTAL);

		if (len_this_sg != 0) {
			partitions[tsgl->nents].div = &divs[i];
			partitions[tsgl->nents].length = len_this_sg;
			tsgl->nents++;
			len_remaining -= len_this_sg;
		}
	}
	if (tsgl->nents == 0) {
		partitions[tsgl->nents].div = &divs[0];
		partitions[tsgl->nents].length = 0;
		tsgl->nents++;
	}
	partitions[tsgl->nents - 1].length += len_remaining;

	/* Set up the sgl entries and fill the data or poison */
	sg_init_table(tsgl->sgl, tsgl->nents);
	for (i = 0; i < tsgl->nents; i++) {
		unsigned int offset = partitions[i].div->offset;
		void *addr;

		if (partitions[i].div->offset_relative_to_alignmask)
			offset += alignmask;

		while (offset + partitions[i].length + TESTMGR_POISON_LEN >
		       2 * PAGE_SIZE) {
			if (WARN_ON(offset <= 0))
				return -EINVAL;
			offset /= 2;
		}

		addr = &tsgl->bufs[i][offset];
		sg_set_buf(&tsgl->sgl[i], addr, partitions[i].length);

		if (out_divs)
			out_divs[i] = partitions[i].div;

		if (data) {
			size_t copy_len, copied;

			copy_len = min(partitions[i].length, data->count);
			copied = copy_from_iter(addr, copy_len, data);
			if (WARN_ON(copied != copy_len))
				return -EINVAL;
			testmgr_poison(addr + copy_len, partitions[i].length +
				       TESTMGR_POISON_LEN - copy_len);
		} else {
			testmgr_poison(addr, partitions[i].length +
				       TESTMGR_POISON_LEN);
		}
	}

	sg_mark_end(&tsgl->sgl[tsgl->nents - 1]);
	tsgl->sgl_ptr = tsgl->sgl;
	memcpy(tsgl->sgl_saved, tsgl->sgl, tsgl->nents * sizeof(tsgl->sgl[0]));
	return 0;
}
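
/*
 * The per-division length computation above rounds to the nearest byte. For
 * instance, splitting a 17-byte message with two 5000/10000 divisions gives
 * (17 * 5000 + 5000) / 10000 = 9 bytes for the first entry and the remaining
 * 8 bytes for the second; any rounding slack is folded into the last entry by
 * the "length += len_remaining" adjustment.
 */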

/*
 * Verify that a scatterlist crypto operation produced the correct output.
 *
 * @tsgl: scatterlist containing the actual output
 * @expected_output: buffer containing the expected output
 * @len_to_check: length of @expected_output in bytes
 * @unchecked_prefix_len: number of ignored bytes in @tsgl prior to real result
 * @check_poison: verify that the poison bytes after each chunk are intact?
 *
 * Return: 0 if correct, -EINVAL if incorrect, -EOVERFLOW if buffer overrun.
 */
static int verify_correct_output(const struct test_sglist *tsgl,
				 const char *expected_output,
				 unsigned int len_to_check,
				 unsigned int unchecked_prefix_len,
				 bool check_poison)
{
	unsigned int i;

	for (i = 0; i < tsgl->nents; i++) {
		struct scatterlist *sg = &tsgl->sgl_ptr[i];
		unsigned int len = sg->length;
		unsigned int offset = sg->offset;
		const char *actual_output;

		if (unchecked_prefix_len) {
			if (unchecked_prefix_len >= len) {
				unchecked_prefix_len -= len;
				continue;
			}
			offset += unchecked_prefix_len;
			len -= unchecked_prefix_len;
			unchecked_prefix_len = 0;
		}
		len = min(len, len_to_check);
		actual_output = page_address(sg_page(sg)) + offset;
		if (memcmp(expected_output, actual_output, len) != 0)
			return -EINVAL;
		if (check_poison &&
		    !testmgr_is_poison(actual_output + len, TESTMGR_POISON_LEN))
			return -EOVERFLOW;
		len_to_check -= len;
		expected_output += len;
	}
	if (WARN_ON(len_to_check != 0))
		return -EINVAL;
	return 0;
}

static bool is_test_sglist_corrupted(const struct test_sglist *tsgl)
{
	unsigned int i;

	for (i = 0; i < tsgl->nents; i++) {
		if (tsgl->sgl[i].page_link != tsgl->sgl_saved[i].page_link)
			return true;
		if (tsgl->sgl[i].offset != tsgl->sgl_saved[i].offset)
			return true;
		if (tsgl->sgl[i].length != tsgl->sgl_saved[i].length)
			return true;
	}
	return false;
}

struct cipher_test_sglists {
	struct test_sglist src;
	struct test_sglist dst;
};

static struct cipher_test_sglists *alloc_cipher_test_sglists(void)
{
	struct cipher_test_sglists *tsgls;

	tsgls = kmalloc(sizeof(*tsgls), GFP_KERNEL);
	if (!tsgls)
		return NULL;

	if (init_test_sglist(&tsgls->src) != 0)
		goto fail_kfree;
	if (init_test_sglist(&tsgls->dst) != 0)
		goto fail_destroy_src;

	return tsgls;

fail_destroy_src:
	destroy_test_sglist(&tsgls->src);
fail_kfree:
	kfree(tsgls);
	return NULL;
}

static void free_cipher_test_sglists(struct cipher_test_sglists *tsgls)
{
	if (tsgls) {
		destroy_test_sglist(&tsgls->src);
		destroy_test_sglist(&tsgls->dst);
		kfree(tsgls);
	}
}

/* Build the src and dst scatterlists for an skcipher or AEAD test */
static int build_cipher_test_sglists(struct cipher_test_sglists *tsgls,
				     const struct testvec_config *cfg,
				     unsigned int alignmask,
				     unsigned int src_total_len,
				     unsigned int dst_total_len,
				     const struct kvec *inputs,
				     unsigned int nr_inputs)
{
	struct iov_iter input;
	int err;

	iov_iter_kvec(&input, ITER_SOURCE, inputs, nr_inputs, src_total_len);
	err = build_test_sglist(&tsgls->src, cfg->src_divs, alignmask,
				cfg->inplace_mode != OUT_OF_PLACE ?
					max(dst_total_len, src_total_len) :
					src_total_len,
				&input, NULL);
	if (err)
		return err;

	/*
	 * In-place crypto operations can use the same scatterlist for both the
	 * source and destination (req->src == req->dst), or can use separate
	 * scatterlists (req->src != req->dst) which point to the same
	 * underlying memory. Make sure to test both cases.
	 */
	if (cfg->inplace_mode == INPLACE_ONE_SGLIST) {
		tsgls->dst.sgl_ptr = tsgls->src.sgl;
		tsgls->dst.nents = tsgls->src.nents;
		return 0;
	}
	if (cfg->inplace_mode == INPLACE_TWO_SGLISTS) {
		/*
		 * For now we keep it simple and only test the case where the
		 * two scatterlists have identical entries, rather than
		 * different entries that split up the same memory differently.
		 */
		memcpy(tsgls->dst.sgl, tsgls->src.sgl,
		       tsgls->src.nents * sizeof(tsgls->src.sgl[0]));
		memcpy(tsgls->dst.sgl_saved, tsgls->src.sgl,
		       tsgls->src.nents * sizeof(tsgls->src.sgl[0]));
		tsgls->dst.sgl_ptr = tsgls->dst.sgl;
		tsgls->dst.nents = tsgls->src.nents;
		return 0;
	}

	/* Out of place */
	return build_test_sglist(&tsgls->dst,
				 cfg->dst_divs[0].proportion_of_total ?
					cfg->dst_divs : cfg->src_divs,
				 alignmask, dst_total_len, NULL, NULL);
}
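
/*
 * To summarize the three cases above (a sketch of how the test code is then
 * expected to wire them into a request): INPLACE_ONE_SGLIST passes
 * tsgls->src.sgl as both source and destination; INPLACE_TWO_SGLISTS passes
 * two distinct sglists that describe the same memory; OUT_OF_PLACE uses the
 * separately-built destination buffers.
 */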

/*
 * Support for testing passing a misaligned key to setkey():
 *
 * If cfg->key_offset is set, copy the key into a new buffer at that offset,
 * optionally adding alignmask. Else, just use the key directly.
 */
static int prepare_keybuf(const u8 *key, unsigned int ksize,
			  const struct testvec_config *cfg,
			  unsigned int alignmask,
			  const u8 **keybuf_ret, const u8 **keyptr_ret)
{
	unsigned int key_offset = cfg->key_offset;
	u8 *keybuf = NULL, *keyptr = (u8 *)key;

	if (key_offset != 0) {
		if (cfg->key_offset_relative_to_alignmask)
			key_offset += alignmask;
		keybuf = kmalloc(key_offset + ksize, GFP_KERNEL);
		if (!keybuf)
			return -ENOMEM;
		keyptr = keybuf + key_offset;
		memcpy(keyptr, key, ksize);
	}
	*keybuf_ret = keybuf;
	*keyptr_ret = keyptr;
	return 0;
}
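
/*
 * On success the caller owns *keybuf_ret (which may be NULL when no offset
 * was requested) and must kfree() it once the key is no longer needed, as
 * do_setkey() below does.
 */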

/*
 * Like setkey_f(tfm, key, ksize), but sometimes misalign the key.
 * In addition, run the setkey function in no-SIMD context if requested.
 */
#define do_setkey(setkey_f, tfm, key, ksize, cfg, alignmask) \
({ \
	const u8 *keybuf, *keyptr; \
	int err; \
 \
	err = prepare_keybuf((key), (ksize), (cfg), (alignmask), \
			     &keybuf, &keyptr); \
	if (err == 0) { \
		if ((cfg)->nosimd_setkey) \
			crypto_disable_simd_for_test(); \
		err = setkey_f((tfm), keyptr, (ksize)); \
		if ((cfg)->nosimd_setkey) \
			crypto_reenable_simd_for_test(); \
		kfree(keybuf); \
	} \
	err; \
})
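
/*
 * Typical use (a sketch, assuming an skcipher transform and a test vector
 * with hypothetical fields vec->key and vec->klen):
 *
 *	err = do_setkey(crypto_skcipher_setkey, tfm, vec->key, vec->klen,
 *			cfg, alignmask);
 *
 * which behaves like crypto_skcipher_setkey(tfm, vec->key, vec->klen) except
 * that it also honours cfg->key_offset and cfg->nosimd_setkey.
 */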

#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS

/*
 * The fuzz tests use prandom instead of the normal Linux RNG since they don't
 * need cryptographically secure random numbers. This greatly improves the
 * performance of these tests, especially if they are run before the Linux RNG
 * has been initialized or if they are run on a lockdep-enabled kernel.
 */

static inline void init_rnd_state(struct rnd_state *rng)
{
	prandom_seed_state(rng, get_random_u64());
}

static inline u8 prandom_u8(struct rnd_state *rng)
{
	return prandom_u32_state(rng);
}

static inline u32 prandom_u32_below(struct rnd_state *rng, u32 ceil)
{
	/*
	 * This is slightly biased for non-power-of-2 values of 'ceil', but this
	 * isn't important here.
	 */
	return prandom_u32_state(rng) % ceil;
}

static inline bool prandom_bool(struct rnd_state *rng)
{
	return prandom_u32_below(rng, 2);
}

static inline u32 prandom_u32_inclusive(struct rnd_state *rng,
					u32 floor, u32 ceil)
{
	return floor + prandom_u32_below(rng, ceil - floor + 1);
}
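
/*
 * For example, prandom_u32_inclusive(rng, 1, 64) yields a value in [1, 64];
 * like prandom_u32_below(), it is slightly biased for non-power-of-2 ranges,
 * which doesn't matter for fuzzing.
 */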

/* Generate a random length in range [0, max_len], but prefer smaller values */
static unsigned int generate_random_length(struct rnd_state *rng,
					   unsigned int max_len)
{
	unsigned int len = prandom_u32_below(rng, max_len + 1);
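
	/*
	 * Three times out of four, reduce the length modulo 64, 256, or 1024
	 * so that shorter lengths are generated more often.
	 */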
	switch (prandom_u32_below(rng, 4)) {
	case 0:
		len %= 64;
		break;
	case 1:
		len %= 256;
		break;
	case 2:
		len %= 1024;
		break;
	default:
		break;
	}
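	/* One time in four, round a nonzero length down to a power of two */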
	if (len && prandom_u32_below(rng, 4) == 0)
		len = rounddown_pow_of_two(len);
	return len;
}

/* Flip a random bit in the given nonempty data buffer */
static void flip_random_bit(struct rnd_state *rng, u8 *buf, size_t size)
{
	size_t bitpos;

	bitpos = prandom_u32_below(rng, size * 8);
	buf[bitpos / 8] ^= 1 << (bitpos % 8);
}

/* Flip a random byte in the given nonempty data buffer */
static void flip_random_byte(struct rnd_state *rng, u8 *buf, size_t size)
{
	buf[prandom_u32_below(rng, size)] ^= 0xff;
}

/* Sometimes make some random changes to the given nonempty data buffer */
static void mutate_buffer(struct rnd_state *rng, u8 *buf, size_t size)
{
	size_t num_flips;
	size_t i;

	/* Sometimes flip some bits */
	if (prandom_u32_below(rng, 4) == 0) {
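		/* Flip 1 to 128 bits (a power of two), capped at size * 8 */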
		num_flips = min_t(size_t, 1 << prandom_u32_below(rng, 8),
				  size * 8);
		for (i = 0; i < num_flips; i++)
			flip_random_bit(rng, buf, size);
	}

	/* Sometimes flip some bytes */
	if (prandom_u32_below(rng, 4) == 0) {
		num_flips = min_t(size_t, 1 << prandom_u32_below(rng, 8), size);
		for (i = 0; i < num_flips; i++)
			flip_random_byte(rng, buf, size);
	}
}

/* Randomly generate 'count' bytes, but sometimes make them "interesting" */
static void generate_random_bytes(struct rnd_state *rng, u8 *buf, size_t count)
{
	u8 b;
	u8 increment;
	size_t i;

	if (count == 0)
		return;
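
	/*
	 * 2 times in 8: fill with a single repeated byte; 1 time in 8: an
	 * ascending or descending byte sequence; otherwise fully random.
	 */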
	switch (prandom_u32_below(rng, 8)) { /* Choose a generation strategy */
	case 0:
	case 1:
		/* All the same byte, plus optional mutations */
		switch (prandom_u32_below(rng, 4)) {
		case 0:
			b = 0x00;
			break;
		case 1:
			b = 0xff;
			break;
		default:
			b = prandom_u8(rng);
			break;
		}
		memset(buf, b, count);
		mutate_buffer(rng, buf, count);
		break;
	case 2:
		/* Ascending or descending bytes, plus optional mutations */
		increment = prandom_u8(rng);
		b = prandom_u8(rng);
		for (i = 0; i < count; i++, b += increment)
			buf[i] = b;
		mutate_buffer(rng, buf, count);
		break;
	default:
		/* Fully random bytes */
		prandom_bytes_state(rng, buf, count);
	}
}
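
/*
 * Illustrative sketch only (not part of the original code): the helpers above
 * operate on a caller-provided, pre-seeded struct rnd_state, e.g.:
 *
 *	struct rnd_state rng;
 *
 *	prandom_seed_state(&rng, get_random_u64());
 *	generate_random_bytes(&rng, buf, generate_random_length(&rng, buflen));
 *
 * where "buf" and "buflen" stand for a hypothetical caller-owned buffer and
 * its size.
 */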

static char *generate_random_sgl_divisions(struct rnd_state *rng,
					   struct test_sg_division *divs,
					   size_t max_divs, char *p, char *end,
					   bool gen_flushes, u32 req_flags)
{
	struct test_sg_division *div = divs;
	unsigned int remaining = TEST_SG_TOTAL;

	do {
		unsigned int this_len;
		const char *flushtype_str;
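
		/*
		 * The last division must consume whatever remains.  Earlier
		 * divisions take everything, about half, or a random share
		 * of the remaining proportion.
		 */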
		if (div == &divs[max_divs - 1] || prandom_bool(rng))
			this_len = remaining;
		else if (prandom_u32_below(rng, 4) == 0)
			this_len = (remaining + 1) / 2;
		else
			this_len = prandom_u32_inclusive(rng, 1, remaining);
		div->proportion_of_total = this_len;
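
		/*
		 * 1/4 of the time put the data near the end of the page, 3/8
		 * of the time within the first 32 bytes, and otherwise
		 * anywhere within the page.
		 */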
		if (prandom_u32_below(rng, 4) == 0)
			div->offset = prandom_u32_inclusive(rng,
							    PAGE_SIZE - 128,
							    PAGE_SIZE - 1);
		else if (prandom_bool(rng))
			div->offset = prandom_u32_below(rng, 32);
		else
			div->offset = prandom_u32_below(rng, PAGE_SIZE);
		if (prandom_u32_below(rng, 8) == 0)
			div->offset_relative_to_alignmask = true;

		div->flush_type = FLUSH_TYPE_NONE;
		if (gen_flushes) {
			switch (prandom_u32_below(rng, 4)) {
			case 0:
				div->flush_type = FLUSH_TYPE_REIMPORT;
				break;
			case 1:
				div->flush_type = FLUSH_TYPE_FLUSH;
				break;
			}
		}
|
|
|
|
|
2019-03-13 13:12:52 +08:00
|
|
|
if (div->flush_type != FLUSH_TYPE_NONE &&
|
|
|
|
!(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
|
2023-02-28 02:29:47 +08:00
|
|
|
prandom_bool(rng))
|
2019-03-13 13:12:52 +08:00
|
|
|
div->nosimd = true;
|
|
|
|
|
|
|
|
switch (div->flush_type) {
|
|
|
|
case FLUSH_TYPE_FLUSH:
|
|
|
|
if (div->nosimd)
|
|
|
|
flushtype_str = "<flush,nosimd>";
|
|
|
|
else
|
|
|
|
flushtype_str = "<flush>";
|
|
|
|
break;
|
|
|
|
case FLUSH_TYPE_REIMPORT:
|
|
|
|
if (div->nosimd)
|
|
|
|
flushtype_str = "<reimport,nosimd>";
|
|
|
|
else
|
|
|
|
flushtype_str = "<reimport>";
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
flushtype_str = "";
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:45 +08:00
|
|
|
BUILD_BUG_ON(TEST_SG_TOTAL != 10000); /* for "%u.%u%%" */
|
2019-03-13 13:12:52 +08:00
|
|
|
p += scnprintf(p, end - p, "%s%u.%u%%@%s+%u%s", flushtype_str,
|
2019-02-01 15:51:45 +08:00
|
|
|
this_len / 100, this_len % 100,
|
|
|
|
div->offset_relative_to_alignmask ?
|
|
|
|
"alignmask" : "",
|
|
|
|
div->offset, this_len == remaining ? "" : ", ");
|
|
|
|
remaining -= this_len;
|
|
|
|
div++;
|
|
|
|
} while (remaining);
|
|
|
|
|
|
|
|
return p;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Generate a random testvec_config for fuzz testing */
|
2023-02-28 02:29:47 +08:00
|
|
|
static void generate_random_testvec_config(struct rnd_state *rng,
|
|
|
|
struct testvec_config *cfg,
|
2019-02-01 15:51:45 +08:00
|
|
|
char *name, size_t max_namelen)
|
|
|
|
{
|
|
|
|
char *p = name;
|
|
|
|
char * const end = name + max_namelen;
|
|
|
|
|
|
|
|
memset(cfg, 0, sizeof(*cfg));
|
|
|
|
|
|
|
|
cfg->name = name;
|
|
|
|
|
|
|
|
p += scnprintf(p, end - p, "random:");
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
switch (prandom_u32_below(rng, 4)) {
|
2022-03-26 15:11:59 +08:00
|
|
|
case 0:
|
|
|
|
case 1:
|
|
|
|
cfg->inplace_mode = OUT_OF_PLACE;
|
|
|
|
break;
|
|
|
|
case 2:
|
|
|
|
cfg->inplace_mode = INPLACE_ONE_SGLIST;
|
|
|
|
p += scnprintf(p, end - p, " inplace_one_sglist");
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
cfg->inplace_mode = INPLACE_TWO_SGLISTS;
|
|
|
|
p += scnprintf(p, end - p, " inplace_two_sglists");
|
|
|
|
break;
|
2019-02-01 15:51:45 +08:00
|
|
|
}
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_bool(rng)) {
|
2019-02-01 15:51:45 +08:00
|
|
|
cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
|
|
|
|
p += scnprintf(p, end - p, " may_sleep");
|
|
|
|
}
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
switch (prandom_u32_below(rng, 4)) {
|
2019-02-01 15:51:45 +08:00
|
|
|
case 0:
|
|
|
|
cfg->finalization_type = FINALIZATION_TYPE_FINAL;
|
|
|
|
p += scnprintf(p, end - p, " use_final");
|
|
|
|
break;
|
|
|
|
case 1:
|
|
|
|
cfg->finalization_type = FINALIZATION_TYPE_FINUP;
|
|
|
|
p += scnprintf(p, end - p, " use_finup");
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
cfg->finalization_type = FINALIZATION_TYPE_DIGEST;
|
|
|
|
p += scnprintf(p, end - p, " use_digest");
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2024-05-27 16:05:39 +08:00
|
|
|
if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP)) {
|
|
|
|
if (prandom_bool(rng)) {
|
|
|
|
cfg->nosimd = true;
|
|
|
|
p += scnprintf(p, end - p, " nosimd");
|
|
|
|
}
|
|
|
|
if (prandom_bool(rng)) {
|
|
|
|
cfg->nosimd_setkey = true;
|
|
|
|
p += scnprintf(p, end - p, " nosimd_setkey");
|
|
|
|
}
|
2019-03-13 13:12:52 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:45 +08:00
|
|
|
p += scnprintf(p, end - p, " src_divs=[");
|
2023-02-28 02:29:47 +08:00
|
|
|
p = generate_random_sgl_divisions(rng, cfg->src_divs,
|
2019-02-01 15:51:45 +08:00
|
|
|
ARRAY_SIZE(cfg->src_divs), p, end,
|
|
|
|
(cfg->finalization_type !=
|
2019-03-13 13:12:52 +08:00
|
|
|
FINALIZATION_TYPE_DIGEST),
|
|
|
|
cfg->req_flags);
|
2019-02-01 15:51:45 +08:00
|
|
|
p += scnprintf(p, end - p, "]");
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
if (cfg->inplace_mode == OUT_OF_PLACE && prandom_bool(rng)) {
|
2019-02-01 15:51:45 +08:00
|
|
|
p += scnprintf(p, end - p, " dst_divs=[");
|
2023-02-28 02:29:47 +08:00
|
|
|
p = generate_random_sgl_divisions(rng, cfg->dst_divs,
|
2019-02-01 15:51:45 +08:00
|
|
|
ARRAY_SIZE(cfg->dst_divs),
|
2019-03-13 13:12:52 +08:00
|
|
|
p, end, false,
|
|
|
|
cfg->req_flags);
|
2019-02-01 15:51:45 +08:00
|
|
|
p += scnprintf(p, end - p, "]");
|
|
|
|
}
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_bool(rng)) {
|
|
|
|
cfg->iv_offset = prandom_u32_inclusive(rng, 1,
|
|
|
|
MAX_ALGAPI_ALIGNMASK);
|
2019-02-01 15:51:45 +08:00
|
|
|
p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
|
|
|
|
}
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_bool(rng)) {
|
|
|
|
cfg->key_offset = prandom_u32_inclusive(rng, 1,
|
|
|
|
MAX_ALGAPI_ALIGNMASK);
|
2019-12-02 05:53:28 +08:00
|
|
|
p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset);
|
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:45 +08:00
|
|
|
WARN_ON_ONCE(!valid_testvec_config(cfg));
|
|
|
|
}
|
2019-03-13 13:12:47 +08:00
|
|
|
|
|
|
|
static void crypto_disable_simd_for_test(void)
|
|
|
|
{
|
2021-09-28 19:54:01 +08:00
|
|
|
migrate_disable();
|
2019-03-13 13:12:47 +08:00
|
|
|
__this_cpu_write(crypto_simd_disabled_for_test, true);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void crypto_reenable_simd_for_test(void)
|
|
|
|
{
|
|
|
|
__this_cpu_write(crypto_simd_disabled_for_test, false);
|
2021-09-28 19:54:01 +08:00
|
|
|
migrate_enable();
|
2019-03-13 13:12:47 +08:00
|
|
|
}
|
2019-04-12 12:57:38 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Given an algorithm name, build the name of the generic implementation of that
|
|
|
|
* algorithm, assuming the usual naming convention. Specifically, this appends
|
|
|
|
* "-generic" to every part of the name that is not a template name. Examples:
|
|
|
|
*
|
|
|
|
* aes => aes-generic
|
|
|
|
* cbc(aes) => cbc(aes-generic)
|
|
|
|
* cts(cbc(aes)) => cts(cbc(aes-generic))
|
|
|
|
* rfc7539(chacha20,poly1305) => rfc7539(chacha20-generic,poly1305-generic)
|
|
|
|
*
|
|
|
|
* Return: 0 on success, or -ENAMETOOLONG if the generic name would be too long
|
|
|
|
*/
|
|
|
|
static int build_generic_driver_name(const char *algname,
|
|
|
|
char driver_name[CRYPTO_MAX_ALG_NAME])
|
|
|
|
{
|
|
|
|
const char *in = algname;
|
|
|
|
char *out = driver_name;
|
|
|
|
size_t len = strlen(algname);
|
|
|
|
|
|
|
|
if (len >= CRYPTO_MAX_ALG_NAME)
|
|
|
|
goto too_long;
|
|
|
|
do {
|
|
|
|
const char *in_saved = in;
|
|
|
|
|
|
|
|
while (*in && *in != '(' && *in != ')' && *in != ',')
|
|
|
|
*out++ = *in++;
|
|
|
|
if (*in != '(' && in > in_saved) {
|
|
|
|
len += 8;
|
|
|
|
if (len >= CRYPTO_MAX_ALG_NAME)
|
|
|
|
goto too_long;
|
|
|
|
memcpy(out, "-generic", 8);
|
|
|
|
out += 8;
|
|
|
|
}
|
|
|
|
} while ((*out++ = *in++) != '\0');
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
too_long:
|
|
|
|
pr_err("alg: generic driver name for \"%s\" would be too long\n",
|
|
|
|
algname);
|
|
|
|
return -ENAMETOOLONG;
|
|
|
|
}
|
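A minimal usage sketch of the helper above (illustrative only, not taken from the kernel sources; the caller and the pr_debug() output are hypothetical) would look like this, matching the examples given in the comment:

#include <linux/crypto.h>	/* CRYPTO_MAX_ALG_NAME */
#include <linux/printk.h>

/* Hypothetical caller, shown only to illustrate the name transformation. */
static void example_generic_driver_name(void)
{
	char generic[CRYPTO_MAX_ALG_NAME];

	if (build_generic_driver_name("cts(cbc(aes))", generic) == 0)
		pr_debug("generic driver: %s\n", generic);	/* prints "cts(cbc(aes-generic))" */
}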
2019-03-13 13:12:47 +08:00
|
|
|
#else /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
|
|
|
static void crypto_disable_simd_for_test(void)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static void crypto_reenable_simd_for_test(void)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
#endif /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
2019-02-01 15:51:45 +08:00
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
static int build_hash_sglist(struct test_sglist *tsgl,
|
|
|
|
const struct hash_testvec *vec,
|
|
|
|
const struct testvec_config *cfg,
|
|
|
|
unsigned int alignmask,
|
|
|
|
const struct test_sg_division *divs[XBUFSIZE])
|
|
|
|
{
|
|
|
|
struct kvec kv;
|
|
|
|
struct iov_iter input;
|
|
|
|
|
|
|
|
kv.iov_base = (void *)vec->plaintext;
|
|
|
|
kv.iov_len = vec->psize;
|
2022-09-16 08:25:47 +08:00
|
|
|
iov_iter_kvec(&input, ITER_SOURCE, &kv, 1, vec->psize);
|
2019-05-29 00:40:55 +08:00
|
|
|
return build_test_sglist(tsgl, cfg->src_divs, alignmask, vec->psize,
|
|
|
|
&input, divs);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int check_hash_result(const char *type,
|
|
|
|
const u8 *result, unsigned int digestsize,
|
|
|
|
const struct hash_testvec *vec,
|
|
|
|
const char *vec_name,
|
|
|
|
const char *driver,
|
|
|
|
const struct testvec_config *cfg)
|
|
|
|
{
|
|
|
|
if (memcmp(result, vec->digest, digestsize) != 0) {
|
|
|
|
pr_err("alg: %s: %s test failed (wrong result) on test vector %s, cfg=\"%s\"\n",
|
|
|
|
type, driver, vec_name, cfg->name);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (!testmgr_is_poison(&result[digestsize], TESTMGR_POISON_LEN)) {
|
|
|
|
pr_err("alg: %s: %s overran result buffer on test vector %s, cfg=\"%s\"\n",
|
|
|
|
type, driver, vec_name, cfg->name);
|
|
|
|
return -EOVERFLOW;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int check_shash_op(const char *op, int err,
|
|
|
|
const char *driver, const char *vec_name,
|
|
|
|
const struct testvec_config *cfg)
|
|
|
|
{
|
|
|
|
if (err)
|
|
|
|
pr_err("alg: shash: %s %s() failed with err %d on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, err, vec_name, cfg->name);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Test one hash test vector in one configuration, using the shash API */
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_shash_vec_cfg(const struct hash_testvec *vec,
|
2019-05-29 00:40:55 +08:00
|
|
|
const char *vec_name,
|
|
|
|
const struct testvec_config *cfg,
|
|
|
|
struct shash_desc *desc,
|
|
|
|
struct test_sglist *tsgl,
|
|
|
|
u8 *hashstate)
|
|
|
|
{
|
|
|
|
struct crypto_shash *tfm = desc->tfm;
|
|
|
|
const unsigned int digestsize = crypto_shash_digestsize(tfm);
|
|
|
|
const unsigned int statesize = crypto_shash_statesize(tfm);
|
2020-10-27 00:17:00 +08:00
|
|
|
const char *driver = crypto_shash_driver_name(tfm);
|
2019-05-29 00:40:55 +08:00
|
|
|
const struct test_sg_division *divs[XBUFSIZE];
|
|
|
|
unsigned int i;
|
|
|
|
u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
|
|
|
|
int err;
|
|
|
|
|
|
|
|
/* Set the key, if specified */
|
|
|
|
if (vec->ksize) {
|
2019-12-02 05:53:28 +08:00
|
|
|
err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize,
|
2023-10-19 13:53:40 +08:00
|
|
|
cfg, 0);
|
2019-05-29 00:40:55 +08:00
|
|
|
if (err) {
|
|
|
|
if (err == vec->setkey_error)
|
|
|
|
return 0;
|
|
|
|
pr_err("alg: shash: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
|
|
|
|
driver, vec_name, vec->setkey_error, err,
|
|
|
|
crypto_shash_get_flags(tfm));
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
if (vec->setkey_error) {
|
|
|
|
pr_err("alg: shash: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
|
|
|
|
driver, vec_name, vec->setkey_error);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Build the scatterlist for the source data */
|
2023-10-19 13:53:40 +08:00
|
|
|
err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
|
2019-05-29 00:40:55 +08:00
|
|
|
if (err) {
|
|
|
|
pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, vec_name, cfg->name);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Do the actual hashing */
|
|
|
|
|
|
|
|
testmgr_poison(desc->__ctx, crypto_shash_descsize(tfm));
|
|
|
|
testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);
|
|
|
|
|
|
|
|
if (cfg->finalization_type == FINALIZATION_TYPE_DIGEST ||
|
|
|
|
vec->digest_error) {
|
|
|
|
/* Just using digest() */
|
|
|
|
if (tsgl->nents != 1)
|
|
|
|
return 0;
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
2021-02-23 11:42:04 +08:00
|
|
|
err = crypto_shash_digest(desc, sg_virt(&tsgl->sgl[0]),
|
2019-05-29 00:40:55 +08:00
|
|
|
tsgl->sgl[0].length, result);
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
if (err) {
|
|
|
|
if (err == vec->digest_error)
|
|
|
|
return 0;
|
|
|
|
pr_err("alg: shash: %s digest() failed on test vector %s; expected_error=%d, actual_error=%d, cfg=\"%s\"\n",
|
|
|
|
driver, vec_name, vec->digest_error, err,
|
|
|
|
cfg->name);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
if (vec->digest_error) {
|
|
|
|
pr_err("alg: shash: %s digest() unexpectedly succeeded on test vector %s; expected_error=%d, cfg=\"%s\"\n",
|
|
|
|
driver, vec_name, vec->digest_error, cfg->name);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
goto result_ready;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Using init(), zero or more update(), then final() or finup() */
|
|
|
|
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
|
|
|
err = crypto_shash_init(desc);
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = check_shash_op("init", err, driver, vec_name, cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
for (i = 0; i < tsgl->nents; i++) {
|
|
|
|
if (i + 1 == tsgl->nents &&
|
|
|
|
cfg->finalization_type == FINALIZATION_TYPE_FINUP) {
|
|
|
|
if (divs[i]->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
2021-02-23 11:42:04 +08:00
|
|
|
err = crypto_shash_finup(desc, sg_virt(&tsgl->sgl[i]),
|
2019-05-29 00:40:55 +08:00
|
|
|
tsgl->sgl[i].length, result);
|
|
|
|
if (divs[i]->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = check_shash_op("finup", err, driver, vec_name,
|
|
|
|
cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
goto result_ready;
|
|
|
|
}
|
|
|
|
if (divs[i]->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
2021-02-23 11:42:04 +08:00
|
|
|
err = crypto_shash_update(desc, sg_virt(&tsgl->sgl[i]),
|
2019-05-29 00:40:55 +08:00
|
|
|
tsgl->sgl[i].length);
|
|
|
|
if (divs[i]->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = check_shash_op("update", err, driver, vec_name, cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
if (divs[i]->flush_type == FLUSH_TYPE_REIMPORT) {
|
|
|
|
/* Test ->export() and ->import() */
|
|
|
|
testmgr_poison(hashstate + statesize,
|
|
|
|
TESTMGR_POISON_LEN);
|
|
|
|
err = crypto_shash_export(desc, hashstate);
|
|
|
|
err = check_shash_op("export", err, driver, vec_name,
|
|
|
|
cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
if (!testmgr_is_poison(hashstate + statesize,
|
|
|
|
TESTMGR_POISON_LEN)) {
|
|
|
|
pr_err("alg: shash: %s export() overran state buffer on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, vec_name, cfg->name);
|
|
|
|
return -EOVERFLOW;
|
|
|
|
}
|
|
|
|
testmgr_poison(desc->__ctx, crypto_shash_descsize(tfm));
|
|
|
|
err = crypto_shash_import(desc, hashstate);
|
|
|
|
err = check_shash_op("import", err, driver, vec_name,
|
|
|
|
cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
|
|
|
err = crypto_shash_final(desc, result);
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = check_shash_op("final", err, driver, vec_name, cfg);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
result_ready:
|
|
|
|
return check_hash_result("shash", result, digestsize, vec, vec_name,
|
|
|
|
driver, cfg);
|
|
|
|
}
|
|
|
|
|
2019-03-13 13:12:52 +08:00
|
|
|
static int do_ahash_op(int (*op)(struct ahash_request *req),
|
|
|
|
struct ahash_request *req,
|
|
|
|
struct crypto_wait *wait, bool nosimd)
|
|
|
|
{
|
|
|
|
int err;
|
|
|
|
|
|
|
|
if (nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
|
|
|
|
|
|
|
err = op(req);
|
|
|
|
|
|
|
|
if (nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
|
|
|
|
return crypto_wait_req(err, wait);
|
|
|
|
}
|
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
static int check_nonfinal_ahash_op(const char *op, int err,
|
|
|
|
u8 *result, unsigned int digestsize,
|
|
|
|
const char *driver, const char *vec_name,
|
|
|
|
const struct testvec_config *cfg)
|
2018-01-16 22:26:13 +08:00
|
|
|
{
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s %s() failed with err %d on test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, op, err, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
2016-02-03 18:26:57 +08:00
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
if (!testmgr_is_poison(result, digestsize)) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s %s() used result buffer on test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return -EINVAL;
|
2018-01-16 22:26:13 +08:00
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
return 0;
|
2016-02-03 18:26:57 +08:00
|
|
|
}
|
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
/* Test one hash test vector in one configuration, using the ahash API */
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_ahash_vec_cfg(const struct hash_testvec *vec,
|
2019-05-29 00:40:55 +08:00
|
|
|
const char *vec_name,
|
|
|
|
const struct testvec_config *cfg,
|
|
|
|
struct ahash_request *req,
|
|
|
|
struct test_sglist *tsgl,
|
|
|
|
u8 *hashstate)
|
2008-07-31 17:08:25 +08:00
|
|
|
{
|
2019-02-01 15:51:48 +08:00
|
|
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
|
|
|
|
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
|
|
|
|
const unsigned int statesize = crypto_ahash_statesize(tfm);
|
2020-10-27 00:17:00 +08:00
|
|
|
const char *driver = crypto_ahash_driver_name(tfm);
|
2019-02-01 15:51:48 +08:00
|
|
|
const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
|
|
|
|
const struct test_sg_division *divs[XBUFSIZE];
|
|
|
|
DECLARE_CRYPTO_WAIT(wait);
|
|
|
|
unsigned int i;
|
|
|
|
struct scatterlist *pending_sgl;
|
|
|
|
unsigned int pending_len;
|
|
|
|
u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
|
|
|
|
int err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
/* Set the key, if specified */
|
|
|
|
if (vec->ksize) {
|
2019-12-02 05:53:28 +08:00
|
|
|
err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
|
2023-10-22 16:10:47 +08:00
|
|
|
cfg, 0);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-04-12 12:57:36 +08:00
|
|
|
if (err == vec->setkey_error)
|
|
|
|
return 0;
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, vec->setkey_error, err,
|
2019-02-01 15:51:48 +08:00
|
|
|
crypto_ahash_get_flags(tfm));
|
|
|
|
return err;
|
|
|
|
}
|
2019-04-12 12:57:36 +08:00
|
|
|
if (vec->setkey_error) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, vec->setkey_error);
|
2019-04-12 12:57:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
/* Build the scatterlist for the source data */
|
2023-10-22 16:10:47 +08:00
|
|
|
err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
/* Do the actual hashing */
|
2009-05-29 14:23:12 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
|
|
|
|
testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);
|
2013-06-13 22:37:55 +08:00
|
|
|
|
2019-04-12 12:57:36 +08:00
|
|
|
if (cfg->finalization_type == FINALIZATION_TYPE_DIGEST ||
|
|
|
|
vec->digest_error) {
|
2019-02-01 15:51:48 +08:00
|
|
|
/* Just using digest() */
|
|
|
|
ahash_request_set_callback(req, req_flags, crypto_req_done,
|
|
|
|
&wait);
|
|
|
|
ahash_request_set_crypt(req, tsgl->sgl, result, vec->psize);
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_digest, req, &wait, cfg->nosimd);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-04-12 12:57:36 +08:00
|
|
|
if (err == vec->digest_error)
|
|
|
|
return 0;
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s digest() failed on test vector %s; expected_error=%d, actual_error=%d, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, vec->digest_error, err,
|
2019-04-12 12:57:36 +08:00
|
|
|
cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
|
|
|
}
|
2019-04-12 12:57:36 +08:00
|
|
|
if (vec->digest_error) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s digest() unexpectedly succeeded on test vector %s; expected_error=%d, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, vec->digest_error, cfg->name);
|
2019-04-12 12:57:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
goto result_ready;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
/* Using init(), zero or more update(), then final() or finup() */
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
|
|
|
|
ahash_request_set_crypt(req, NULL, result, 0);
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_init, req, &wait, cfg->nosimd);
|
2019-05-29 00:40:55 +08:00
|
|
|
err = check_nonfinal_ahash_op("init", err, result, digestsize,
|
|
|
|
driver, vec_name, cfg);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
pending_sgl = NULL;
|
|
|
|
pending_len = 0;
|
|
|
|
for (i = 0; i < tsgl->nents; i++) {
|
|
|
|
if (divs[i]->flush_type != FLUSH_TYPE_NONE &&
|
|
|
|
pending_sgl != NULL) {
|
|
|
|
/* update() with the pending data */
|
|
|
|
ahash_request_set_callback(req, req_flags,
|
|
|
|
crypto_req_done, &wait);
|
|
|
|
ahash_request_set_crypt(req, pending_sgl, result,
|
|
|
|
pending_len);
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_update, req, &wait,
|
|
|
|
divs[i]->nosimd);
|
2019-05-29 00:40:55 +08:00
|
|
|
err = check_nonfinal_ahash_op("update", err,
|
|
|
|
result, digestsize,
|
|
|
|
driver, vec_name, cfg);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
pending_sgl = NULL;
|
|
|
|
pending_len = 0;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
if (divs[i]->flush_type == FLUSH_TYPE_REIMPORT) {
|
|
|
|
/* Test ->export() and ->import() */
|
|
|
|
testmgr_poison(hashstate + statesize,
|
|
|
|
TESTMGR_POISON_LEN);
|
|
|
|
err = crypto_ahash_export(req, hashstate);
|
2019-05-29 00:40:55 +08:00
|
|
|
err = check_nonfinal_ahash_op("export", err,
|
|
|
|
result, digestsize,
|
|
|
|
driver, vec_name, cfg);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
if (!testmgr_is_poison(hashstate + statesize,
|
|
|
|
TESTMGR_POISON_LEN)) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s export() overran state buffer on test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return -EOVERFLOW;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2018-07-01 15:02:35 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
|
|
|
|
err = crypto_ahash_import(req, hashstate);
|
2019-05-29 00:40:55 +08:00
|
|
|
err = check_nonfinal_ahash_op("import", err,
|
|
|
|
result, digestsize,
|
|
|
|
driver, vec_name, cfg);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
if (pending_sgl == NULL)
|
|
|
|
pending_sgl = &tsgl->sgl[i];
|
|
|
|
pending_len += tsgl->sgl[i].length;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
|
|
|
|
ahash_request_set_crypt(req, pending_sgl, result, pending_len);
|
|
|
|
if (cfg->finalization_type == FINALIZATION_TYPE_FINAL) {
|
|
|
|
/* finish with update() and final() */
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_update, req, &wait, cfg->nosimd);
|
2019-05-29 00:40:55 +08:00
|
|
|
err = check_nonfinal_ahash_op("update", err, result, digestsize,
|
|
|
|
driver, vec_name, cfg);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_final, req, &wait, cfg->nosimd);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s final() failed with err %d on test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, err, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* finish with finup() */
|
2019-03-13 13:12:52 +08:00
|
|
|
err = do_ahash_op(crypto_ahash_finup, req, &wait, cfg->nosimd);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err) {
|
2019-05-29 00:40:55 +08:00
|
|
|
pr_err("alg: ahash: %s finup() failed with err %d on test vector %s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, err, vec_name, cfg->name);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
result_ready:
|
2019-05-29 00:40:55 +08:00
|
|
|
return check_hash_result("ahash", result, digestsize, vec, vec_name,
|
|
|
|
driver, cfg);
|
|
|
|
}
|
|
|
|
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_hash_vec_cfg(const struct hash_testvec *vec,
|
2019-05-29 00:40:55 +08:00
|
|
|
const char *vec_name,
|
|
|
|
const struct testvec_config *cfg,
|
|
|
|
struct ahash_request *req,
|
|
|
|
struct shash_desc *desc,
|
|
|
|
struct test_sglist *tsgl,
|
|
|
|
u8 *hashstate)
|
|
|
|
{
|
|
|
|
int err;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For algorithms implemented as "shash", most bugs will be detected by
|
|
|
|
* both the shash and ahash tests. Test the shash API first so that the
|
|
|
|
* failures involve less indirection and so are easier to debug.
|
|
|
|
*/
|
|
|
|
|
|
|
|
if (desc) {
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_shash_vec_cfg(vec, vec_name, cfg, desc, tsgl,
|
2019-05-29 00:40:55 +08:00
|
|
|
hashstate);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2019-02-01 15:51:48 +08:00
|
|
|
}
|
2013-06-13 22:37:55 +08:00
|
|
|
|
2020-10-27 00:17:00 +08:00
|
|
|
return test_ahash_vec_cfg(vec, vec_name, cfg, req, tsgl, hashstate);
|
2019-02-01 15:51:48 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
|
|
|
|
struct ahash_request *req, struct shash_desc *desc,
|
|
|
|
struct test_sglist *tsgl, u8 *hashstate)
|
2019-02-01 15:51:48 +08:00
|
|
|
{
|
2019-04-12 12:57:37 +08:00
|
|
|
char vec_name[16];
|
2019-02-01 15:51:48 +08:00
|
|
|
unsigned int i;
|
|
|
|
int err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-04-12 12:57:37 +08:00
|
|
|
sprintf(vec_name, "%u", vec_num);
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
for (i = 0; i < ARRAY_SIZE(default_hash_testvec_configs); i++) {
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_hash_vec_cfg(vec, vec_name,
|
2019-02-01 15:51:48 +08:00
|
|
|
&default_hash_testvec_configs[i],
|
2019-05-29 00:40:55 +08:00
|
|
|
req, desc, tsgl, hashstate);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
2014-08-08 19:27:50 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
|
|
|
if (!noextratests) {
|
2023-02-28 02:29:47 +08:00
|
|
|
struct rnd_state rng;
|
2019-02-01 15:51:48 +08:00
|
|
|
struct testvec_config cfg;
|
|
|
|
char cfgname[TESTVEC_CONFIG_NAMELEN];
|
2014-08-08 19:27:50 +08:00
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
init_rnd_state(&rng);
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
for (i = 0; i < fuzz_iterations; i++) {
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_testvec_config(&rng, &cfg, cfgname,
|
2019-02-01 15:51:48 +08:00
|
|
|
sizeof(cfgname));
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_hash_vec_cfg(vec, vec_name, &cfg,
|
2019-05-29 00:40:55 +08:00
|
|
|
req, desc, tsgl, hashstate);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2019-06-03 13:42:33 +08:00
|
|
|
cond_resched();
|
2016-02-03 18:26:57 +08:00
|
|
|
}
|
|
|
|
}
|
2019-02-01 15:51:48 +08:00
|
|
|
#endif
|
|
|
|
return 0;
|
|
|
|
}
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-04-12 12:57:39 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
|
|
|
/*
|
|
|
|
* Generate a hash test vector from the given implementation.
|
|
|
|
* Assumes the buffers in 'vec' were already allocated.
|
|
|
|
*/
|
2023-02-28 02:29:47 +08:00
|
|
|
static void generate_random_hash_testvec(struct rnd_state *rng,
|
|
|
|
struct shash_desc *desc,
|
2019-04-12 12:57:39 +08:00
|
|
|
struct hash_testvec *vec,
|
|
|
|
unsigned int maxkeysize,
|
|
|
|
unsigned int maxdatasize,
|
|
|
|
char *name, size_t max_namelen)
|
|
|
|
{
|
|
|
|
/* Data */
|
2023-02-28 02:29:47 +08:00
|
|
|
vec->psize = generate_random_length(rng, maxdatasize);
|
|
|
|
generate_random_bytes(rng, (u8 *)vec->plaintext, vec->psize);
|
2019-04-12 12:57:39 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Key: length in range [1, maxkeysize], but usually choose maxkeysize.
|
|
|
|
* If algorithm is unkeyed, then maxkeysize == 0 and set ksize = 0.
|
|
|
|
*/
|
|
|
|
vec->setkey_error = 0;
|
|
|
|
vec->ksize = 0;
|
|
|
|
if (maxkeysize) {
|
|
|
|
vec->ksize = maxkeysize;
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_u32_below(rng, 4) == 0)
|
|
|
|
vec->ksize = prandom_u32_inclusive(rng, 1, maxkeysize);
|
|
|
|
generate_random_bytes(rng, (u8 *)vec->key, vec->ksize);
|
2019-04-12 12:57:39 +08:00
|
|
|
|
2019-06-18 17:21:53 +08:00
|
|
|
vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
|
2019-04-12 12:57:39 +08:00
|
|
|
vec->ksize);
|
|
|
|
/* If the key couldn't be set, no need to continue to digest. */
|
|
|
|
if (vec->setkey_error)
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Digest */
|
|
|
|
vec->digest_error = crypto_shash_digest(desc, vec->plaintext,
|
|
|
|
vec->psize, (u8 *)vec->digest);
|
|
|
|
done:
|
|
|
|
snprintf(name, max_namelen, "\"random: psize=%u ksize=%u\"",
|
|
|
|
vec->psize, vec->ksize);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Test the hash algorithm represented by @req against the corresponding generic
|
|
|
|
* implementation, if one is available.
|
|
|
|
*/
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_hash_vs_generic_impl(const char *generic_driver,
|
2019-04-12 12:57:39 +08:00
|
|
|
unsigned int maxkeysize,
|
|
|
|
struct ahash_request *req,
|
2019-05-29 00:40:55 +08:00
|
|
|
struct shash_desc *desc,
|
2019-04-12 12:57:39 +08:00
|
|
|
struct test_sglist *tsgl,
|
|
|
|
u8 *hashstate)
|
|
|
|
{
|
|
|
|
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
|
|
|
|
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
|
|
|
|
const unsigned int blocksize = crypto_ahash_blocksize(tfm);
|
|
|
|
const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
|
|
|
|
const char *algname = crypto_hash_alg_common(tfm)->base.cra_name;
|
2020-10-27 00:17:00 +08:00
|
|
|
const char *driver = crypto_ahash_driver_name(tfm);
|
2023-02-28 02:29:47 +08:00
|
|
|
struct rnd_state rng;
|
2019-04-12 12:57:39 +08:00
|
|
|
char _generic_driver[CRYPTO_MAX_ALG_NAME];
|
|
|
|
struct crypto_shash *generic_tfm = NULL;
|
2019-06-18 17:21:53 +08:00
|
|
|
struct shash_desc *generic_desc = NULL;
|
2019-04-12 12:57:39 +08:00
|
|
|
unsigned int i;
|
|
|
|
struct hash_testvec vec = { 0 };
|
|
|
|
char vec_name[64];
|
2019-06-18 17:21:52 +08:00
|
|
|
struct testvec_config *cfg;
|
2019-04-12 12:57:39 +08:00
|
|
|
char cfgname[TESTVEC_CONFIG_NAMELEN];
|
|
|
|
int err;
|
|
|
|
|
|
|
|
if (noextratests)
|
|
|
|
return 0;
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
init_rnd_state(&rng);
|
|
|
|
|
2019-04-12 12:57:39 +08:00
|
|
|
if (!generic_driver) { /* Use default naming convention? */
|
|
|
|
err = build_generic_driver_name(algname, _generic_driver);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
generic_driver = _generic_driver;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (strcmp(generic_driver, driver) == 0) /* Already the generic impl? */
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
generic_tfm = crypto_alloc_shash(generic_driver, 0, 0);
|
|
|
|
if (IS_ERR(generic_tfm)) {
|
|
|
|
err = PTR_ERR(generic_tfm);
|
|
|
|
if (err == -ENOENT) {
|
|
|
|
pr_warn("alg: hash: skipping comparison tests for %s because %s is unavailable\n",
|
|
|
|
driver, generic_driver);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
pr_err("alg: hash: error allocating %s (generic impl of %s): %d\n",
|
|
|
|
generic_driver, algname, err);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2019-06-18 17:21:52 +08:00
|
|
|
cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
|
|
|
|
if (!cfg) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2019-06-18 17:21:53 +08:00
|
|
|
generic_desc = kzalloc(sizeof(*desc) +
|
|
|
|
crypto_shash_descsize(generic_tfm), GFP_KERNEL);
|
|
|
|
if (!generic_desc) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
generic_desc->tfm = generic_tfm;
|
|
|
|
|
2019-04-12 12:57:39 +08:00
|
|
|
/* Check the algorithm properties for consistency. */
|
|
|
|
|
|
|
|
if (digestsize != crypto_shash_digestsize(generic_tfm)) {
|
|
|
|
pr_err("alg: hash: digestsize for %s (%u) doesn't match generic impl (%u)\n",
|
|
|
|
driver, digestsize,
|
|
|
|
crypto_shash_digestsize(generic_tfm));
|
|
|
|
err = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (blocksize != crypto_shash_blocksize(generic_tfm)) {
|
|
|
|
pr_err("alg: hash: blocksize for %s (%u) doesn't match generic impl (%u)\n",
|
|
|
|
driver, blocksize, crypto_shash_blocksize(generic_tfm));
|
|
|
|
err = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Now generate test vectors using the generic implementation, and test
|
|
|
|
* the other implementation against them.
|
|
|
|
*/
|
|
|
|
|
|
|
|
vec.key = kmalloc(maxkeysize, GFP_KERNEL);
|
|
|
|
vec.plaintext = kmalloc(maxdatasize, GFP_KERNEL);
|
|
|
|
vec.digest = kmalloc(digestsize, GFP_KERNEL);
|
|
|
|
if (!vec.key || !vec.plaintext || !vec.digest) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < fuzz_iterations * 8; i++) {
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_hash_testvec(&rng, generic_desc, &vec,
|
2019-04-12 12:57:39 +08:00
|
|
|
maxkeysize, maxdatasize,
|
|
|
|
vec_name, sizeof(vec_name));
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_testvec_config(&rng, cfg, cfgname,
|
|
|
|
sizeof(cfgname));
|
2019-04-12 12:57:39 +08:00
|
|
|
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_hash_vec_cfg(&vec, vec_name, cfg,
|
2019-05-29 00:40:55 +08:00
|
|
|
req, desc, tsgl, hashstate);
|
2019-04-12 12:57:39 +08:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
cond_resched();
|
|
|
|
}
|
|
|
|
err = 0;
|
|
|
|
out:
|
2019-06-18 17:21:52 +08:00
|
|
|
kfree(cfg);
|
2019-04-12 12:57:39 +08:00
|
|
|
kfree(vec.key);
|
|
|
|
kfree(vec.plaintext);
|
|
|
|
kfree(vec.digest);
|
|
|
|
crypto_free_shash(generic_tfm);
|
2020-08-07 14:18:13 +08:00
|
|
|
kfree_sensitive(generic_desc);
|
2019-04-12 12:57:39 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
#else /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
2020-10-27 00:17:00 +08:00
|
|
|
static int test_hash_vs_generic_impl(const char *generic_driver,
|
2019-04-12 12:57:39 +08:00
|
|
|
unsigned int maxkeysize,
|
|
|
|
struct ahash_request *req,
|
2019-05-29 00:40:55 +08:00
|
|
|
struct shash_desc *desc,
|
2019-04-12 12:57:39 +08:00
|
|
|
struct test_sglist *tsgl,
|
|
|
|
u8 *hashstate)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
static int alloc_shash(const char *driver, u32 type, u32 mask,
|
|
|
|
struct crypto_shash **tfm_ret,
|
|
|
|
struct shash_desc **desc_ret)
|
|
|
|
{
|
|
|
|
struct crypto_shash *tfm;
|
|
|
|
struct shash_desc *desc;
|
|
|
|
|
|
|
|
tfm = crypto_alloc_shash(driver, type, mask);
|
|
|
|
if (IS_ERR(tfm)) {
|
|
|
|
if (PTR_ERR(tfm) == -ENOENT) {
|
|
|
|
/*
|
|
|
|
* This algorithm is only available through the ahash
|
|
|
|
* API, not the shash API, so skip the shash tests.
|
|
|
|
*/
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
pr_err("alg: hash: failed to allocate shash transform for %s: %ld\n",
|
|
|
|
driver, PTR_ERR(tfm));
|
|
|
|
return PTR_ERR(tfm);
|
|
|
|
}
|
|
|
|
|
|
|
|
desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
|
|
|
|
if (!desc) {
|
|
|
|
crypto_free_shash(tfm);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
desc->tfm = tfm;
|
|
|
|
|
|
|
|
*tfm_ret = tfm;
|
|
|
|
*desc_ret = desc;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
static int __alg_test_hash(const struct hash_testvec *vecs,
|
|
|
|
unsigned int num_vecs, const char *driver,
|
2019-04-12 12:57:39 +08:00
|
|
|
u32 type, u32 mask,
|
|
|
|
const char *generic_driver, unsigned int maxkeysize)
|
2019-02-01 15:51:48 +08:00
|
|
|
{
|
2019-05-29 00:40:55 +08:00
|
|
|
struct crypto_ahash *atfm = NULL;
|
2019-02-01 15:51:48 +08:00
|
|
|
struct ahash_request *req = NULL;
|
2019-05-29 00:40:55 +08:00
|
|
|
struct crypto_shash *stfm = NULL;
|
|
|
|
struct shash_desc *desc = NULL;
|
2019-02-01 15:51:48 +08:00
|
|
|
struct test_sglist *tsgl = NULL;
|
|
|
|
u8 *hashstate = NULL;
|
2019-05-29 00:40:55 +08:00
|
|
|
unsigned int statesize;
|
2019-02-01 15:51:48 +08:00
|
|
|
unsigned int i;
|
|
|
|
int err;
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
/*
|
|
|
|
* Always test the ahash API. This works regardless of whether the
|
|
|
|
* algorithm is implemented as ahash or shash.
|
|
|
|
*/
|
|
|
|
|
|
|
|
atfm = crypto_alloc_ahash(driver, type, mask);
|
|
|
|
if (IS_ERR(atfm)) {
|
2024-09-03 07:33:40 +08:00
|
|
|
if (PTR_ERR(atfm) == -ENOENT)
|
2024-10-06 09:24:56 +08:00
|
|
|
return 0;
|
2019-02-01 15:51:48 +08:00
|
|
|
pr_err("alg: hash: failed to allocate transform for %s: %ld\n",
|
2019-05-29 00:40:55 +08:00
|
|
|
driver, PTR_ERR(atfm));
|
|
|
|
return PTR_ERR(atfm);
|
2019-02-01 15:51:48 +08:00
|
|
|
}
|
2020-10-27 00:17:00 +08:00
|
|
|
driver = crypto_ahash_driver_name(atfm);
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
req = ahash_request_alloc(atfm, GFP_KERNEL);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (!req) {
|
|
|
|
pr_err("alg: hash: failed to allocate request for %s\n",
|
|
|
|
driver);
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
/*
|
|
|
|
* If available also test the shash API, to cover corner cases that may
|
|
|
|
* be missed by testing the ahash API only.
|
|
|
|
*/
|
|
|
|
err = alloc_shash(driver, type, mask, &stfm, &desc);
|
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
tsgl = kmalloc(sizeof(*tsgl), GFP_KERNEL);
|
|
|
|
if (!tsgl || init_test_sglist(tsgl) != 0) {
|
|
|
|
pr_err("alg: hash: failed to allocate test buffers for %s\n",
|
|
|
|
driver);
|
|
|
|
kfree(tsgl);
|
|
|
|
tsgl = NULL;
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-05-29 00:40:55 +08:00
|
|
|
statesize = crypto_ahash_statesize(atfm);
|
|
|
|
if (stfm)
|
|
|
|
statesize = max(statesize, crypto_shash_statesize(stfm));
|
|
|
|
hashstate = kmalloc(statesize + TESTMGR_POISON_LEN, GFP_KERNEL);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (!hashstate) {
|
|
|
|
pr_err("alg: hash: failed to allocate hash state buffer for %s\n",
|
|
|
|
driver);
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-02-03 18:26:57 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
for (i = 0; i < num_vecs; i++) {
|
2022-02-01 16:40:58 +08:00
|
|
|
if (fips_enabled && vecs[i].fips_skip)
|
|
|
|
continue;
|
|
|
|
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_hash_vec(&vecs[i], i, req, desc, tsgl, hashstate);
|
2019-02-01 15:51:48 +08:00
|
|
|
if (err)
|
2014-08-08 19:27:50 +08:00
|
|
|
goto out;
|
2019-06-03 13:42:33 +08:00
|
|
|
cond_resched();
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2020-10-27 00:17:00 +08:00
|
|
|
err = test_hash_vs_generic_impl(generic_driver, maxkeysize, req,
|
2019-05-29 00:40:55 +08:00
|
|
|
desc, tsgl, hashstate);
|
2008-07-31 17:08:25 +08:00
|
|
|
out:
|
2019-02-01 15:51:48 +08:00
|
|
|
kfree(hashstate);
|
|
|
|
if (tsgl) {
|
|
|
|
destroy_test_sglist(tsgl);
|
|
|
|
kfree(tsgl);
|
|
|
|
}
|
2019-05-29 00:40:55 +08:00
|
|
|
kfree(desc);
|
|
|
|
crypto_free_shash(stfm);
|
2008-07-31 17:08:25 +08:00
|
|
|
ahash_request_free(req);
|
2019-05-29 00:40:55 +08:00
|
|
|
crypto_free_ahash(atfm);
|
2019-02-01 15:51:48 +08:00
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
|
|
|
|
u32 type, u32 mask)
|
2013-06-13 22:37:55 +08:00
|
|
|
{
|
2019-02-01 15:51:48 +08:00
|
|
|
const struct hash_testvec *template = desc->suite.hash.vecs;
|
|
|
|
unsigned int tcount = desc->suite.hash.count;
|
|
|
|
unsigned int nr_unkeyed, nr_keyed;
|
2019-04-12 12:57:39 +08:00
|
|
|
unsigned int maxkeysize = 0;
|
2019-02-01 15:51:48 +08:00
|
|
|
int err;
|
2013-06-13 22:37:55 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
/*
|
|
|
|
* For OPTIONAL_KEY algorithms, we have to do all the unkeyed tests
|
|
|
|
* first, before setting a key on the tfm. To make this easier, we
|
|
|
|
* require that the unkeyed test vectors (if any) are listed first.
|
|
|
|
*/
|
2013-06-13 22:37:55 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
for (nr_unkeyed = 0; nr_unkeyed < tcount; nr_unkeyed++) {
|
|
|
|
if (template[nr_unkeyed].ksize)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
for (nr_keyed = 0; nr_unkeyed + nr_keyed < tcount; nr_keyed++) {
|
|
|
|
if (!template[nr_unkeyed + nr_keyed].ksize) {
|
|
|
|
pr_err("alg: hash: test vectors for %s out of order, "
|
|
|
|
"unkeyed ones must come first\n", desc->alg);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2019-04-12 12:57:39 +08:00
|
|
|
maxkeysize = max_t(unsigned int, maxkeysize,
|
|
|
|
template[nr_unkeyed + nr_keyed].ksize);
|
2019-02-01 15:51:48 +08:00
|
|
|
}
|
2013-06-13 22:37:55 +08:00
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
err = 0;
|
|
|
|
if (nr_unkeyed) {
|
2019-04-12 12:57:39 +08:00
|
|
|
err = __alg_test_hash(template, nr_unkeyed, driver, type, mask,
|
|
|
|
desc->generic_driver, maxkeysize);
|
2019-02-01 15:51:48 +08:00
|
|
|
template += nr_unkeyed;
|
2013-06-13 22:37:55 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:48 +08:00
|
|
|
if (!err && nr_keyed)
|
2019-04-12 12:57:39 +08:00
|
|
|
err = __alg_test_hash(template, nr_keyed, driver, type, mask,
|
|
|
|
desc->generic_driver, maxkeysize);
|
2019-02-01 15:51:48 +08:00
|
|
|
|
|
|
|
return err;
|
2013-06-13 22:37:55 +08:00
|
|
|
}
|
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
static int test_aead_vec_cfg(int enc, const struct aead_testvec *vec,
|
2019-04-12 12:57:37 +08:00
|
|
|
const char *vec_name,
|
2019-02-01 15:51:47 +08:00
|
|
|
const struct testvec_config *cfg,
|
|
|
|
struct aead_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
2008-07-31 17:08:25 +08:00
|
|
|
{
|
2019-02-01 15:51:47 +08:00
|
|
|
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
|
|
|
|
const unsigned int alignmask = crypto_aead_alignmask(tfm);
|
|
|
|
const unsigned int ivsize = crypto_aead_ivsize(tfm);
|
|
|
|
const unsigned int authsize = vec->clen - vec->plen;
|
2020-10-27 00:17:01 +08:00
|
|
|
const char *driver = crypto_aead_driver_name(tfm);
|
2019-02-01 15:51:47 +08:00
|
|
|
const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
|
|
|
|
const char *op = enc ? "encryption" : "decryption";
|
|
|
|
DECLARE_CRYPTO_WAIT(wait);
|
|
|
|
u8 _iv[3 * (MAX_ALGAPI_ALIGNMASK + 1) + MAX_IVLEN];
|
|
|
|
u8 *iv = PTR_ALIGN(&_iv[0], 2 * (MAX_ALGAPI_ALIGNMASK + 1)) +
|
|
|
|
cfg->iv_offset +
|
|
|
|
(cfg->iv_offset_relative_to_alignmask ? alignmask : 0);
|
|
|
|
struct kvec input[2];
|
|
|
|
int err;
|
2012-09-21 15:26:52 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* Set the key */
|
|
|
|
if (vec->wk)
|
|
|
|
crypto_aead_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
|
2008-07-31 17:08:25 +08:00
|
|
|
else
|
2019-02-01 15:51:47 +08:00
|
|
|
crypto_aead_clear_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
|
2019-12-02 05:53:28 +08:00
|
|
|
|
|
|
|
err = do_setkey(crypto_aead_setkey, tfm, vec->key, vec->klen,
|
|
|
|
cfg, alignmask);
|
2019-04-12 12:57:36 +08:00
|
|
|
if (err && err != vec->setkey_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
|
|
|
|
driver, vec_name, vec->setkey_error, err,
|
2019-04-12 12:57:36 +08:00
|
|
|
crypto_aead_get_flags(tfm));
|
2019-02-01 15:51:47 +08:00
|
|
|
return err;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2019-04-12 12:57:36 +08:00
|
|
|
if (!err && vec->setkey_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
|
|
|
|
driver, vec_name, vec->setkey_error);
|
2019-02-01 15:51:47 +08:00
|
|
|
return -EINVAL;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* Set the authentication tag size */
|
|
|
|
err = crypto_aead_setauthsize(tfm, authsize);
|
2019-04-12 12:57:36 +08:00
|
|
|
if (err && err != vec->setauthsize_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s setauthsize failed on test vector %s; expected_error=%d, actual_error=%d\n",
|
|
|
|
driver, vec_name, vec->setauthsize_error, err);
|
2019-02-01 15:51:47 +08:00
|
|
|
return err;
|
|
|
|
}
|
2019-04-12 12:57:36 +08:00
|
|
|
if (!err && vec->setauthsize_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s setauthsize unexpectedly succeeded on test vector %s; expected_error=%d\n",
|
|
|
|
driver, vec_name, vec->setauthsize_error);
|
2019-04-12 12:57:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (vec->setkey_error || vec->setauthsize_error)
|
|
|
|
return 0;
|
2013-11-28 21:11:18 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* The IV must be copied to a buffer, as the algorithm may modify it */
|
|
|
|
if (WARN_ON(ivsize > MAX_IVLEN))
|
|
|
|
return -EINVAL;
|
|
|
|
if (vec->iv)
|
|
|
|
memcpy(iv, vec->iv, ivsize);
|
|
|
|
else
|
|
|
|
memset(iv, 0, ivsize);
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* Build the src/dst scatterlists */
|
|
|
|
input[0].iov_base = (void *)vec->assoc;
|
|
|
|
input[0].iov_len = vec->alen;
|
|
|
|
input[1].iov_base = enc ? (void *)vec->ptext : (void *)vec->ctext;
|
|
|
|
input[1].iov_len = enc ? vec->plen : vec->clen;
|
|
|
|
err = build_cipher_test_sglists(tsgls, cfg, alignmask,
|
|
|
|
vec->alen + (enc ? vec->plen :
|
|
|
|
vec->clen),
|
|
|
|
vec->alen + (enc ? vec->clen :
|
|
|
|
vec->plen),
|
|
|
|
input, 2);
|
|
|
|
if (err) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s: error preparing scatterlists for test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:47 +08:00
|
|
|
return err;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* Do the actual encryption or decryption */
|
|
|
|
testmgr_poison(req->__ctx, crypto_aead_reqsize(tfm));
|
|
|
|
aead_request_set_callback(req, req_flags, crypto_req_done, &wait);
|
|
|
|
aead_request_set_crypt(req, tsgls->src.sgl_ptr, tsgls->dst.sgl_ptr,
|
|
|
|
enc ? vec->plen : vec->clen, iv);
|
|
|
|
aead_request_set_ad(req, vec->alen);
|
2019-03-13 13:12:52 +08:00
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
|
|
|
err = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = crypto_wait_req(err, &wait);
|
2019-02-01 15:51:50 +08:00
|
|
|
|
|
|
|
/* Check that the algorithm didn't overwrite things it shouldn't have */
|
|
|
|
if (req->cryptlen != (enc ? vec->plen : vec->clen) ||
|
|
|
|
req->assoclen != vec->alen ||
|
|
|
|
req->iv != iv ||
|
|
|
|
req->src != tsgls->src.sgl_ptr ||
|
|
|
|
req->dst != tsgls->dst.sgl_ptr ||
|
|
|
|
crypto_aead_reqtfm(req) != tfm ||
|
|
|
|
req->base.complete != crypto_req_done ||
|
|
|
|
req->base.flags != req_flags ||
|
|
|
|
req->base.data != &wait) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s corrupted request struct on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:50 +08:00
|
|
|
if (req->cryptlen != (enc ? vec->plen : vec->clen))
|
|
|
|
pr_err("alg: aead: changed 'req->cryptlen'\n");
|
|
|
|
if (req->assoclen != vec->alen)
|
|
|
|
pr_err("alg: aead: changed 'req->assoclen'\n");
|
|
|
|
if (req->iv != iv)
|
|
|
|
pr_err("alg: aead: changed 'req->iv'\n");
|
|
|
|
if (req->src != tsgls->src.sgl_ptr)
|
|
|
|
pr_err("alg: aead: changed 'req->src'\n");
|
|
|
|
if (req->dst != tsgls->dst.sgl_ptr)
|
|
|
|
pr_err("alg: aead: changed 'req->dst'\n");
|
|
|
|
if (crypto_aead_reqtfm(req) != tfm)
|
|
|
|
pr_err("alg: aead: changed 'req->base.tfm'\n");
|
|
|
|
if (req->base.complete != crypto_req_done)
|
|
|
|
pr_err("alg: aead: changed 'req->base.complete'\n");
|
|
|
|
if (req->base.flags != req_flags)
|
|
|
|
pr_err("alg: aead: changed 'req->base.flags'\n");
|
|
|
|
if (req->base.data != &wait)
|
|
|
|
pr_err("alg: aead: changed 'req->base.data'\n");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (is_test_sglist_corrupted(&tsgls->src)) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s corrupted src sgl on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:50 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (tsgls->dst.sgl_ptr != tsgls->src.sgl &&
|
|
|
|
is_test_sglist_corrupted(&tsgls->dst)) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s corrupted dst sgl on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:50 +08:00
|
|
|
return -EINVAL;
|
2019-02-01 15:51:47 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-12-02 05:53:30 +08:00
|
|
|
/* Check for unexpected success or failure, or wrong error code */
|
|
|
|
if ((err == 0 && vec->novrfy) ||
|
|
|
|
(err != vec->crypt_error && !(err == -EBADMSG && vec->novrfy))) {
|
|
|
|
char expected_error[32];
|
|
|
|
|
|
|
|
if (vec->novrfy &&
|
|
|
|
vec->crypt_error != 0 && vec->crypt_error != -EBADMSG)
|
|
|
|
sprintf(expected_error, "-EBADMSG or %d",
|
|
|
|
vec->crypt_error);
|
|
|
|
else if (vec->novrfy)
|
|
|
|
sprintf(expected_error, "-EBADMSG");
|
|
|
|
else
|
|
|
|
sprintf(expected_error, "%d", vec->crypt_error);
|
|
|
|
if (err) {
|
|
|
|
pr_err("alg: aead: %s %s failed on test vector %s; expected_error=%s, actual_error=%d, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, expected_error, err,
|
|
|
|
cfg->name);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
pr_err("alg: aead: %s %s unexpectedly succeeded on test vector %s; expected_error=%s, cfg=\"%s\"\n",
|
2019-04-12 12:57:37 +08:00
|
|
|
driver, op, vec_name, expected_error, cfg->name);
|
2019-04-12 12:57:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
2019-12-02 05:53:30 +08:00
|
|
|
if (err) /* Expectedly failed. */
|
|
|
|
return 0;
|
2019-04-12 12:57:36 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
/* Check for the correct output (ciphertext or plaintext) */
|
|
|
|
err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
|
|
|
|
enc ? vec->clen : vec->plen,
|
2022-03-26 15:11:59 +08:00
|
|
|
vec->alen,
|
|
|
|
enc || cfg->inplace_mode == OUT_OF_PLACE);
|
2019-02-01 15:51:47 +08:00
|
|
|
if (err == -EOVERFLOW) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s overran dst buffer on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:47 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
if (err) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: aead: %s %s test failed (wrong result) on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:47 +08:00
|
|
|
return err;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
return 0;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
static int test_aead_vec(int enc, const struct aead_testvec *vec,
|
|
|
|
unsigned int vec_num, struct aead_request *req,
|
2019-02-01 15:51:47 +08:00
|
|
|
struct cipher_test_sglists *tsgls)
|
|
|
|
{
|
2019-04-12 12:57:37 +08:00
|
|
|
char vec_name[16];
|
2019-02-01 15:51:47 +08:00
|
|
|
unsigned int i;
|
|
|
|
int err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
if (enc && vec->novrfy)
|
|
|
|
return 0;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-04-12 12:57:37 +08:00
|
|
|
sprintf(vec_name, "%u", vec_num);
|
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++) {
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead_vec_cfg(enc, vec, vec_name,
|
2019-02-01 15:51:47 +08:00
|
|
|
&default_cipher_testvec_configs[i],
|
|
|
|
req, tsgls);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
|
|
|
if (!noextratests) {
|
2023-02-28 02:29:47 +08:00
|
|
|
struct rnd_state rng;
|
2019-02-01 15:51:47 +08:00
|
|
|
struct testvec_config cfg;
|
|
|
|
char cfgname[TESTVEC_CONFIG_NAMELEN];
|
2014-07-28 18:11:23 +08:00
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
init_rnd_state(&rng);
|
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
for (i = 0; i < fuzz_iterations; i++) {
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_testvec_config(&rng, &cfg, cfgname,
|
2019-02-01 15:51:47 +08:00
|
|
|
sizeof(cfgname));
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead_vec_cfg(enc, vec, vec_name,
|
2019-02-01 15:51:47 +08:00
|
|
|
&cfg, req, tsgls);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2019-06-03 13:42:33 +08:00
|
|
|
cond_resched();
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}
|
2019-02-01 15:51:47 +08:00
|
|
|
#endif
|
|
|
|
return 0;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-04-12 12:57:41 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
2019-12-02 05:53:29 +08:00
|
|
|
|
|
|
|
struct aead_extra_tests_ctx {
|
2023-02-28 02:29:47 +08:00
|
|
|
struct rnd_state rng;
|
2019-12-02 05:53:29 +08:00
|
|
|
struct aead_request *req;
|
|
|
|
struct crypto_aead *tfm;
|
|
|
|
const struct alg_test_desc *test_desc;
|
|
|
|
struct cipher_test_sglists *tsgls;
|
|
|
|
unsigned int maxdatasize;
|
|
|
|
unsigned int maxkeysize;
|
|
|
|
|
|
|
|
struct aead_testvec vec;
|
|
|
|
char vec_name[64];
|
|
|
|
char cfgname[TESTVEC_CONFIG_NAMELEN];
|
|
|
|
struct testvec_config cfg;
|
|
|
|
};
|
|
|
|
|
2019-04-12 12:57:41 +08:00
|
|
|
/*
|
2019-12-02 05:53:30 +08:00
|
|
|
* Make at least one random change to a (ciphertext, AAD) pair. "Ciphertext"
|
|
|
|
* here means the full ciphertext including the authentication tag. The
|
|
|
|
* authentication tag (and hence also the ciphertext) is assumed to be nonempty.
|
|
|
|
*/
|
2023-02-28 02:29:47 +08:00
|
|
|
static void mutate_aead_message(struct rnd_state *rng,
|
|
|
|
struct aead_testvec *vec, bool aad_iv,
|
2020-03-05 06:44:03 +08:00
|
|
|
unsigned int ivsize)
|
2019-12-02 05:53:30 +08:00
|
|
|
{
|
2020-03-05 06:44:03 +08:00
|
|
|
const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
|
2019-12-02 05:53:30 +08:00
|
|
|
const unsigned int authsize = vec->clen - vec->plen;
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_bool(rng) && vec->alen > aad_tail_size) {
|
2019-12-02 05:53:30 +08:00
|
|
|
/* Mutate the AAD */
|
2023-02-28 02:29:47 +08:00
|
|
|
flip_random_bit(rng, (u8 *)vec->assoc,
|
|
|
|
vec->alen - aad_tail_size);
|
|
|
|
if (prandom_bool(rng))
|
2019-12-02 05:53:30 +08:00
|
|
|
return;
|
|
|
|
}
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_bool(rng)) {
|
2019-12-02 05:53:30 +08:00
|
|
|
/* Mutate auth tag (assuming it's at the end of ciphertext) */
|
crypto: testmgr - fix RNG performance in fuzz tests
The performance of the crypto fuzz tests has greatly regressed since
v5.18. When booting a kernel on an arm64 dev board with all software
crypto algorithms and CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled, the
fuzz tests now take about 200 seconds to run, or about 325 seconds with
lockdep enabled, compared to about 5 seconds before.
The root cause is that the random number generation has become much
slower due to commit d4150779e60f ("random32: use real rng for
non-deterministic randomness"). On my same arm64 dev board, at the time
the fuzz tests are run, get_random_u8() is about 345x slower than
prandom_u32_state(), or about 469x if lockdep is enabled.
Lockdep makes a big difference, but much of the rest comes from the
get_random_*() functions taking a *very* slow path when the CRNG is not
yet initialized. Since the crypto self-tests run early during boot,
even having a hardware RNG driver enabled (CONFIG_CRYPTO_DEV_QCOM_RNG in
my case) doesn't prevent this. x86 systems don't have this issue, but
they still see a significant regression if lockdep is enabled.
Converting the "Fully random bytes" case in generate_random_bytes() to
use get_random_bytes() helps significantly, improving the test time to
about 27 seconds. But that's still over 5x slower than before.
This is all a bit silly, though, since the fuzz tests don't actually
need cryptographically secure random numbers. So let's just make them
use a non-cryptographically-secure RNG as they did before. The original
prandom_u32() is gone now, so let's use prandom_u32_state() instead,
with an explicitly managed state, like various other self-tests in the
kernel source tree (rbtree_test.c, test_scanf.c, etc.) already do. This
also has the benefit that no locking is required anymore, so performance
should be even better than the original version that used prandom_u32().
Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-28 02:29:47 +08:00
|
|
|
flip_random_bit(rng, (u8 *)vec->ctext + vec->plen, authsize);
|
2019-12-02 05:53:30 +08:00
|
|
|
} else {
|
|
|
|
/* Mutate any part of the ciphertext */
|
crypto: testmgr - fix RNG performance in fuzz tests
The performance of the crypto fuzz tests has greatly regressed since
v5.18. When booting a kernel on an arm64 dev board with all software
crypto algorithms and CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled, the
fuzz tests now take about 200 seconds to run, or about 325 seconds with
lockdep enabled, compared to about 5 seconds before.
The root cause is that the random number generation has become much
slower due to commit d4150779e60f ("random32: use real rng for
non-deterministic randomness"). On my same arm64 dev board, at the time
the fuzz tests are run, get_random_u8() is about 345x slower than
prandom_u32_state(), or about 469x if lockdep is enabled.
Lockdep makes a big difference, but much of the rest comes from the
get_random_*() functions taking a *very* slow path when the CRNG is not
yet initialized. Since the crypto self-tests run early during boot,
even having a hardware RNG driver enabled (CONFIG_CRYPTO_DEV_QCOM_RNG in
my case) doesn't prevent this. x86 systems don't have this issue, but
they still see a significant regression if lockdep is enabled.
Converting the "Fully random bytes" case in generate_random_bytes() to
use get_random_bytes() helps significantly, improving the test time to
about 27 seconds. But that's still over 5x slower than before.
This is all a bit silly, though, since the fuzz tests don't actually
need cryptographically secure random numbers. So let's just make them
use a non-cryptographically-secure RNG as they did before. The original
prandom_u32() is gone now, so let's use prandom_u32_state() instead,
with an explicitly managed state, like various other self-tests in the
kernel source tree (rbtree_test.c, test_scanf.c, etc.) already do. This
also has the benefit that no locking is required anymore, so performance
should be even better than the original version that used prandom_u32().
Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-28 02:29:47 +08:00
|
|
|
flip_random_bit(rng, (u8 *)vec->ctext, vec->clen);
|
2019-12-02 05:53:30 +08:00
|
|
|
}
|
|
|
|
}
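
/*
 * A minimal sketch of the explicitly managed RNG state described in the
 * commit message above, assuming the <linux/prandom.h> and <linux/random.h>
 * APIs are available in this file.  The actual helpers (init_rnd_state(),
 * prandom_u32_below(), prandom_bool()) are presumably defined earlier in
 * this file and may differ in detail.
 */
static inline void init_rnd_state(struct rnd_state *rng)
{
	/* Seed the non-cryptographic PRNG once; no locking is needed afterwards. */
	prandom_seed_state(rng, get_random_u64());
}

static inline u32 prandom_u32_below(struct rnd_state *rng, u32 ceil)
{
	/* Slightly biased for non-power-of-2 'ceil'; acceptable for fuzzing. */
	return prandom_u32_state(rng) % ceil;
}

static inline bool prandom_bool(struct rnd_state *rng)
{
	return prandom_u32_below(rng, 2);
}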

/*
 * Minimum authentication tag size in bytes at which we assume that we can
 * reliably generate inauthentic messages, i.e. not generate an authentic
 * message by chance.
 */
#define MIN_COLLISION_FREE_AUTHSIZE 8
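
/*
 * With the minimum 8-byte (64-bit) tag assumed above, a mutated or random
 * message passes verification by chance with probability roughly 2^-64 per
 * decryption attempt, so accidentally producing an authentic message is
 * negligible over the number of fuzz iterations.
 */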

static void generate_aead_message(struct rnd_state *rng,
				  struct aead_request *req,
				  const struct aead_test_suite *suite,
				  struct aead_testvec *vec,
				  bool prefer_inauthentic)
{
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
	const unsigned int ivsize = crypto_aead_ivsize(tfm);
	const unsigned int authsize = vec->clen - vec->plen;
	const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) &&
				 (prefer_inauthentic ||
				  prandom_u32_below(rng, 4) == 0);

	/* Generate the AAD. */
	generate_random_bytes(rng, (u8 *)vec->assoc, vec->alen);
	if (suite->aad_iv && vec->alen >= ivsize)
		/* Avoid implementation-defined behavior. */
		memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);

	if (inauthentic && prandom_bool(rng)) {
		/* Generate a random ciphertext. */
		generate_random_bytes(rng, (u8 *)vec->ctext, vec->clen);
	} else {
		int i = 0;
		struct scatterlist src[2], dst;
		u8 iv[MAX_IVLEN];
		DECLARE_CRYPTO_WAIT(wait);

		/* Generate a random plaintext and encrypt it. */
		sg_init_table(src, 2);
		if (vec->alen)
			sg_set_buf(&src[i++], vec->assoc, vec->alen);
		if (vec->plen) {
			generate_random_bytes(rng, (u8 *)vec->ptext, vec->plen);
			sg_set_buf(&src[i++], vec->ptext, vec->plen);
		}
		sg_init_one(&dst, vec->ctext, vec->alen + vec->clen);
		memcpy(iv, vec->iv, ivsize);
		aead_request_set_callback(req, 0, crypto_req_done, &wait);
		aead_request_set_crypt(req, src, &dst, vec->plen, iv);
		aead_request_set_ad(req, vec->alen);
		vec->crypt_error = crypto_wait_req(crypto_aead_encrypt(req),
						   &wait);
		/* If encryption failed, we're done. */
		if (vec->crypt_error != 0)
			return;
		memmove((u8 *)vec->ctext, vec->ctext + vec->alen, vec->clen);
		if (!inauthentic)
			return;
		/*
		 * Mutate the authentic (ciphertext, AAD) pair to get an
		 * inauthentic one.
		 */
		mutate_aead_message(rng, vec, suite->aad_iv, ivsize);
	}
	vec->novrfy = 1;
	if (suite->einval_allowed)
		vec->crypt_error = -EINVAL;
}
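
/*
 * Note on the buffer layout above: per the kernel AEAD convention, the
 * destination receives a copy of the associated data followed by the
 * ciphertext and tag, which is why 'dst' is sized vec->alen + vec->clen.
 * The memmove() afterwards drops the AAD copy, leaving only the vec->clen
 * ciphertext bytes (including the tag) at the start of vec->ctext.
 */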

/*
 * Generate an AEAD test vector 'vec' using the implementation specified by
 * 'req'.  The buffers in 'vec' must already be allocated.
 *
 * If 'prefer_inauthentic' is true, then this function will generate inauthentic
 * test vectors (i.e. vectors with 'vec->novrfy=1') more often.
 */
static void generate_random_aead_testvec(struct rnd_state *rng,
					 struct aead_request *req,
					 struct aead_testvec *vec,
					 const struct aead_test_suite *suite,
					 unsigned int maxkeysize,
					 unsigned int maxdatasize,
					 char *name, size_t max_namelen,
					 bool prefer_inauthentic)
{
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
	const unsigned int ivsize = crypto_aead_ivsize(tfm);
	const unsigned int maxauthsize = crypto_aead_maxauthsize(tfm);
	unsigned int authsize;
	unsigned int total_len;

	/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
	vec->klen = maxkeysize;
	if (prandom_u32_below(rng, 4) == 0)
		vec->klen = prandom_u32_below(rng, maxkeysize + 1);
	generate_random_bytes(rng, (u8 *)vec->key, vec->klen);
	vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen);

	/* IV */
	generate_random_bytes(rng, (u8 *)vec->iv, ivsize);

	/* Tag length: in [0, maxauthsize], but usually choose maxauthsize */
	authsize = maxauthsize;
	if (prandom_u32_below(rng, 4) == 0)
		authsize = prandom_u32_below(rng, maxauthsize + 1);
	if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE)
		authsize = MIN_COLLISION_FREE_AUTHSIZE;
	if (WARN_ON(authsize > maxdatasize))
		authsize = maxdatasize;
	maxdatasize -= authsize;
	vec->setauthsize_error = crypto_aead_setauthsize(tfm, authsize);

	/* AAD, plaintext, and ciphertext lengths */
	total_len = generate_random_length(rng, maxdatasize);
	if (prandom_u32_below(rng, 4) == 0)
		vec->alen = 0;
	else
		vec->alen = generate_random_length(rng, total_len);
	vec->plen = total_len - vec->alen;
	vec->clen = vec->plen + authsize;

	/*
	 * Generate the AAD, plaintext, and ciphertext.  Not applicable if the
	 * key or the authentication tag size couldn't be set.
	 */
	vec->novrfy = 0;
	vec->crypt_error = 0;
	if (vec->setkey_error == 0 && vec->setauthsize_error == 0)
		generate_aead_message(rng, req, suite, vec, prefer_inauthentic);
	snprintf(name, max_namelen,
		 "\"random: alen=%u plen=%u authsize=%u klen=%u novrfy=%d\"",
		 vec->alen, vec->plen, authsize, vec->klen, vec->novrfy);
}

static void try_to_generate_inauthentic_testvec(
					struct aead_extra_tests_ctx *ctx)
{
	int i;

	for (i = 0; i < 10; i++) {
		generate_random_aead_testvec(&ctx->rng, ctx->req, &ctx->vec,
					     &ctx->test_desc->suite.aead,
					     ctx->maxkeysize, ctx->maxdatasize,
					     ctx->vec_name,
					     sizeof(ctx->vec_name), true);
		if (ctx->vec.novrfy)
			return;
	}
}

/*
 * Generate inauthentic test vectors (i.e. ciphertext, AAD pairs that aren't the
 * result of an encryption with the key) and verify that decryption fails.
 */
static int test_aead_inauthentic_inputs(struct aead_extra_tests_ctx *ctx)
{
	unsigned int i;
	int err;

	for (i = 0; i < fuzz_iterations * 8; i++) {
		/*
		 * Since this part of the tests isn't comparing the
		 * implementation to another, there's no point in testing any
		 * test vectors other than inauthentic ones (vec.novrfy=1) here.
		 *
		 * If we're having trouble generating such a test vector, e.g.
		 * if the algorithm keeps rejecting the generated keys, don't
		 * retry forever; just continue on.
		 */
		try_to_generate_inauthentic_testvec(ctx);
		if (ctx->vec.novrfy) {
			generate_random_testvec_config(&ctx->rng, &ctx->cfg,
						       ctx->cfgname,
						       sizeof(ctx->cfgname));
			err = test_aead_vec_cfg(DECRYPT, &ctx->vec,
						ctx->vec_name, &ctx->cfg,
						ctx->req, ctx->tsgls);
			if (err)
				return err;
		}
		cond_resched();
	}
	return 0;
}

/*
 * Test the AEAD algorithm against the corresponding generic implementation, if
 * one is available.
 */
static int test_aead_vs_generic_impl(struct aead_extra_tests_ctx *ctx)
{
	struct crypto_aead *tfm = ctx->tfm;
	const char *algname = crypto_aead_alg(tfm)->base.cra_name;
	const char *driver = crypto_aead_driver_name(tfm);
	const char *generic_driver = ctx->test_desc->generic_driver;
	char _generic_driver[CRYPTO_MAX_ALG_NAME];
	struct crypto_aead *generic_tfm = NULL;
	struct aead_request *generic_req = NULL;
	unsigned int i;
	int err;

	if (!generic_driver) { /* Use default naming convention? */
		err = build_generic_driver_name(algname, _generic_driver);
		if (err)
			return err;
		generic_driver = _generic_driver;
	}

	if (strcmp(generic_driver, driver) == 0) /* Already the generic impl? */
		return 0;

	generic_tfm = crypto_alloc_aead(generic_driver, 0, 0);
	if (IS_ERR(generic_tfm)) {
		err = PTR_ERR(generic_tfm);
		if (err == -ENOENT) {
			pr_warn("alg: aead: skipping comparison tests for %s because %s is unavailable\n",
				driver, generic_driver);
			return 0;
		}
		pr_err("alg: aead: error allocating %s (generic impl of %s): %d\n",
		       generic_driver, algname, err);
		return err;
	}

	generic_req = aead_request_alloc(generic_tfm, GFP_KERNEL);
	if (!generic_req) {
		err = -ENOMEM;
		goto out;
	}

	/* Check the algorithm properties for consistency. */

	if (crypto_aead_maxauthsize(tfm) !=
	    crypto_aead_maxauthsize(generic_tfm)) {
		pr_err("alg: aead: maxauthsize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, crypto_aead_maxauthsize(tfm),
		       crypto_aead_maxauthsize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	if (crypto_aead_ivsize(tfm) != crypto_aead_ivsize(generic_tfm)) {
		pr_err("alg: aead: ivsize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, crypto_aead_ivsize(tfm),
		       crypto_aead_ivsize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	if (crypto_aead_blocksize(tfm) != crypto_aead_blocksize(generic_tfm)) {
		pr_err("alg: aead: blocksize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, crypto_aead_blocksize(tfm),
		       crypto_aead_blocksize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	/*
	 * Now generate test vectors using the generic implementation, and test
	 * the other implementation against them.
	 */
	for (i = 0; i < fuzz_iterations * 8; i++) {
		generate_random_aead_testvec(&ctx->rng, generic_req, &ctx->vec,
					     &ctx->test_desc->suite.aead,
					     ctx->maxkeysize, ctx->maxdatasize,
					     ctx->vec_name,
					     sizeof(ctx->vec_name), false);
		generate_random_testvec_config(&ctx->rng, &ctx->cfg,
					       ctx->cfgname,
					       sizeof(ctx->cfgname));
		if (!ctx->vec.novrfy) {
			err = test_aead_vec_cfg(ENCRYPT, &ctx->vec,
						ctx->vec_name, &ctx->cfg,
						ctx->req, ctx->tsgls);
			if (err)
				goto out;
		}
		if (ctx->vec.crypt_error == 0 || ctx->vec.novrfy) {
			err = test_aead_vec_cfg(DECRYPT, &ctx->vec,
						ctx->vec_name, &ctx->cfg,
						ctx->req, ctx->tsgls);
			if (err)
				goto out;
		}
		cond_resched();
	}
	err = 0;
out:
	crypto_free_aead(generic_tfm);
	aead_request_free(generic_req);
	return err;
}
|
2019-12-02 05:53:29 +08:00
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
static int test_aead_extra(const struct alg_test_desc *test_desc,
|
2019-12-02 05:53:29 +08:00
|
|
|
struct aead_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
|
|
|
{
|
|
|
|
struct aead_extra_tests_ctx *ctx;
|
|
|
|
unsigned int i;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
if (noextratests)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
|
|
|
|
if (!ctx)
|
|
|
|
return -ENOMEM;
|
2023-02-28 02:29:47 +08:00
|
|
|
init_rnd_state(&ctx->rng);
|
2019-12-02 05:53:29 +08:00
|
|
|
ctx->req = req;
|
|
|
|
ctx->tfm = crypto_aead_reqtfm(req);
|
|
|
|
ctx->test_desc = test_desc;
|
|
|
|
ctx->tsgls = tsgls;
|
|
|
|
ctx->maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
|
|
|
|
ctx->maxkeysize = 0;
|
|
|
|
for (i = 0; i < test_desc->suite.aead.count; i++)
|
|
|
|
ctx->maxkeysize = max_t(unsigned int, ctx->maxkeysize,
|
|
|
|
test_desc->suite.aead.vecs[i].klen);
|
|
|
|
|
|
|
|
ctx->vec.key = kmalloc(ctx->maxkeysize, GFP_KERNEL);
|
|
|
|
ctx->vec.iv = kmalloc(crypto_aead_ivsize(ctx->tfm), GFP_KERNEL);
|
|
|
|
ctx->vec.assoc = kmalloc(ctx->maxdatasize, GFP_KERNEL);
|
|
|
|
ctx->vec.ptext = kmalloc(ctx->maxdatasize, GFP_KERNEL);
|
|
|
|
ctx->vec.ctext = kmalloc(ctx->maxdatasize, GFP_KERNEL);
|
|
|
|
if (!ctx->vec.key || !ctx->vec.iv || !ctx->vec.assoc ||
|
|
|
|
!ctx->vec.ptext || !ctx->vec.ctext) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-03-05 06:44:04 +08:00
|
|
|
err = test_aead_vs_generic_impl(ctx);
|
2019-12-02 05:53:30 +08:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2020-03-05 06:44:04 +08:00
|
|
|
err = test_aead_inauthentic_inputs(ctx);
|
2019-12-02 05:53:29 +08:00
|
|
|
out:
|
|
|
|
kfree(ctx->vec.key);
|
|
|
|
kfree(ctx->vec.iv);
|
|
|
|
kfree(ctx->vec.assoc);
|
|
|
|
kfree(ctx->vec.ptext);
|
|
|
|
kfree(ctx->vec.ctext);
|
|
|
|
kfree(ctx);
|
|
|
|
return err;
|
|
|
|
}
|
2019-04-12 12:57:41 +08:00
|
|
|
#else /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
2020-10-27 00:17:01 +08:00
|
|
|
static int test_aead_extra(const struct alg_test_desc *test_desc,
|
2019-12-02 05:53:29 +08:00
|
|
|
struct aead_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
2019-04-12 12:57:41 +08:00
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
|
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
static int test_aead(int enc, const struct aead_test_suite *suite,
|
2019-02-01 15:51:47 +08:00
|
|
|
struct aead_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
|
|
|
{
|
|
|
|
unsigned int i;
|
|
|
|
int err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
for (i = 0; i < suite->count; i++) {
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead_vec(enc, &suite->vecs[i], i, req, tsgls);
|
2019-02-01 15:51:47 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2019-06-03 13:42:33 +08:00
|
|
|
cond_resched();
|
2019-02-01 15:51:47 +08:00
|
|
|
}
|
|
|
|
return 0;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
|
|
|
|
u32 type, u32 mask)
|
2012-09-21 15:26:52 +08:00
|
|
|
{
|
2019-02-01 15:51:47 +08:00
|
|
|
const struct aead_test_suite *suite = &desc->suite.aead;
|
|
|
|
struct crypto_aead *tfm;
|
|
|
|
struct aead_request *req = NULL;
|
|
|
|
struct cipher_test_sglists *tsgls = NULL;
|
|
|
|
int err;
|
2012-09-21 15:26:52 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
if (suite->count <= 0) {
|
|
|
|
pr_err("alg: aead: empty test suite for %s\n", driver);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2012-09-21 15:26:52 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
tfm = crypto_alloc_aead(driver, type, mask);
|
|
|
|
if (IS_ERR(tfm)) {
|
2024-09-03 07:33:40 +08:00
|
|
|
if (PTR_ERR(tfm) == -ENOENT)
|
2024-10-06 09:24:56 +08:00
|
|
|
return 0;
|
2019-02-01 15:51:47 +08:00
|
|
|
pr_err("alg: aead: failed to allocate transform for %s: %ld\n",
|
|
|
|
driver, PTR_ERR(tfm));
|
|
|
|
return PTR_ERR(tfm);
|
|
|
|
}
|
2020-10-27 00:17:01 +08:00
|
|
|
driver = crypto_aead_driver_name(tfm);
|
2013-06-13 22:37:50 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
req = aead_request_alloc(tfm, GFP_KERNEL);
|
|
|
|
if (!req) {
|
|
|
|
pr_err("alg: aead: failed to allocate request for %s\n",
|
|
|
|
driver);
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
2013-06-13 22:37:50 +08:00
|
|
|
|
2019-02-01 15:51:47 +08:00
|
|
|
tsgls = alloc_cipher_test_sglists();
|
|
|
|
if (!tsgls) {
|
|
|
|
pr_err("alg: aead: failed to allocate test buffers for %s\n",
|
|
|
|
driver);
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto out;
|
2013-06-13 22:37:50 +08:00
|
|
|
}
|
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead(ENCRYPT, suite, req, tsgls);
|
2019-02-01 15:51:47 +08:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead(DECRYPT, suite, req, tsgls);
|
2019-04-12 12:57:41 +08:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2020-10-27 00:17:01 +08:00
|
|
|
err = test_aead_extra(desc, req, tsgls);
|
2019-02-01 15:51:47 +08:00
|
|
|
out:
|
|
|
|
free_cipher_test_sglists(tsgls);
|
|
|
|
aead_request_free(req);
|
|
|
|
crypto_free_aead(tfm);
|
|
|
|
return err;
|
2012-09-21 15:26:52 +08:00
|
|
|
}
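A detail worth noting in alg_test_aead() above: a driver that cannot be instantiated at all (crypto_alloc_aead() returning -ENOENT) is treated as "nothing to test" and reported as success, while any other allocation error fails the test. A condensed, hypothetical wrapper showing just that convention (the function name and return-value encoding are illustrative, not from testmgr.c):

#include <crypto/aead.h>
#include <linux/err.h>

/* Returns 0 with *out set, 1 for "algorithm absent, skip", or a -errno. */
static int example_alloc_aead_or_skip(const char *driver, u32 type, u32 mask,
				      struct crypto_aead **out)
{
	struct crypto_aead *tfm = crypto_alloc_aead(driver, type, mask);

	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			return 1;		/* driver not built/registered */
		return PTR_ERR(tfm);		/* genuine allocation failure */
	}
	*out = tfm;
	return 0;
}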
|
|
|
|
|
2008-08-17 15:01:56 +08:00
|
|
|
static int test_cipher(struct crypto_cipher *tfm, int enc,
|
2017-02-25 07:46:59 +08:00
|
|
|
const struct cipher_testvec *template,
|
|
|
|
unsigned int tcount)
|
2008-08-17 15:01:56 +08:00
|
|
|
{
|
|
|
|
const char *algo = crypto_tfm_alg_driver_name(crypto_cipher_tfm(tfm));
|
|
|
|
unsigned int i, j, k;
|
|
|
|
char *q;
|
|
|
|
const char *e;
|
crypto: testmgr - eliminate redundant decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for symmetric ciphers. That's massively redundant, since with few
exceptions (mostly mistakes, apparently), all decryption tests are
identical to the encryption tests, just with the input/result flipped.
Therefore, eliminate the redundancy by removing the decryption test
vectors and updating testmgr to test both encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext'
(ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and
'rlen'. Note that it was always the case that 'ilen == rlen'.
AES keywrap ("kw(aes)") is special because its IV is generated by the
encryption. Previously this was handled by specifying 'iv_out' for
encryption and 'iv' for decryption. To make it work cleanly with only
one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a
boolean that indicates that the IV is generated by the encryption.
In total, this removes over 10000 lines from testmgr.h, with no
reduction in test coverage since prior patches already copied the few
unique decryption test vectors into the encryption test vectors.
This covers all algorithms that used 'struct cipher_testvec', e.g. any
block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or
keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD
tests, though we probably can eliminate a similar redundancy there too.
The testmgr.h portion of this patch was automatically generated using
the following awk script, with some slight manual fixups on top (updated
'struct cipher_testvec' definition, updated a few comments, and fixed up
the AES keywrap test vectors):
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
sub(/\.input[[:space:]]*=$/, ".ptext =")
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=$/, ".ctext =")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.rlen[[:space:]]*=/, ".len\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-21 13:50:29 +08:00
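As the commit message above describes, a single cipher_testvec now drives both directions: encryption consumes ptext and must produce ctext, while decryption consumes ctext and must produce ptext. The selection logic appears in test_cipher() just below; here is a simplified, self-contained illustration of the idea (the struct and helper are illustrative, not the kernel definitions):

struct example_cipher_testvec {
	const char *key;
	const char *ptext;	/* plaintext, formerly 'input' or 'result' */
	const char *ctext;	/* ciphertext, formerly 'result' or 'input' */
	unsigned int klen;
	unsigned int len;	/* ptext and ctext always have the same length */
};

static void example_pick_io(const struct example_cipher_testvec *v, int enc,
			    const char **in, const char **expected)
{
	*in       = enc ? v->ptext : v->ctext;
	*expected = enc ? v->ctext : v->ptext;
}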
|
|
|
const char *input, *result;
|
2008-08-17 15:01:56 +08:00
|
|
|
void *data;
|
2009-05-06 14:15:47 +08:00
|
|
|
char *xbuf[XBUFSIZE];
|
|
|
|
int ret = -ENOMEM;
|
|
|
|
|
|
|
|
if (testmgr_alloc_buf(xbuf))
|
|
|
|
goto out_nobuf;
|
2008-08-17 15:01:56 +08:00
|
|
|
|
|
|
|
if (enc == ENCRYPT)
|
|
|
|
e = "encryption";
|
|
|
|
else
|
|
|
|
e = "decryption";
|
|
|
|
|
|
|
|
j = 0;
|
|
|
|
for (i = 0; i < tcount; i++) {
|
|
|
|
|
2016-08-25 21:15:01 +08:00
|
|
|
if (fips_enabled && template[i].fips_skip)
|
|
|
|
continue;
|
|
|
|
|
2018-05-21 13:50:29 +08:00
|
|
|
input = enc ? template[i].ptext : template[i].ctext;
|
|
|
|
result = enc ? template[i].ctext : template[i].ptext;
|
2008-08-17 15:01:56 +08:00
|
|
|
j++;
|
|
|
|
|
2009-05-29 14:05:42 +08:00
|
|
|
ret = -EINVAL;
|
2018-05-21 13:50:29 +08:00
|
|
|
if (WARN_ON(template[i].len > PAGE_SIZE))
|
2009-05-29 14:05:42 +08:00
|
|
|
goto out;
|
|
|
|
|
2008-08-17 15:01:56 +08:00
|
|
|
data = xbuf[0];
|
2018-05-21 13:50:29 +08:00
|
|
|
memcpy(data, input, template[i].len);
|
2008-08-17 15:01:56 +08:00
|
|
|
|
|
|
|
crypto_cipher_clear_flags(tfm, ~0);
|
|
|
|
if (template[i].wk)
|
2019-01-19 14:48:00 +08:00
|
|
|
crypto_cipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
|
2008-08-17 15:01:56 +08:00
|
|
|
|
|
|
|
ret = crypto_cipher_setkey(tfm, template[i].key,
|
|
|
|
template[i].klen);
|
2019-04-12 12:57:36 +08:00
|
|
|
if (ret) {
|
|
|
|
if (ret == template[i].setkey_error)
|
|
|
|
continue;
|
|
|
|
pr_err("alg: cipher: %s setkey failed on test vector %u; expected_error=%d, actual_error=%d, flags=%#x\n",
|
|
|
|
algo, j, template[i].setkey_error, ret,
|
|
|
|
crypto_cipher_get_flags(tfm));
|
2008-08-17 15:01:56 +08:00
|
|
|
goto out;
|
2019-04-12 12:57:36 +08:00
|
|
|
}
|
|
|
|
if (template[i].setkey_error) {
|
|
|
|
pr_err("alg: cipher: %s setkey unexpectedly succeeded on test vector %u; expected_error=%d\n",
|
|
|
|
algo, j, template[i].setkey_error);
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
2008-08-17 15:01:56 +08:00
|
|
|
|
2018-05-21 13:50:29 +08:00
|
|
|
for (k = 0; k < template[i].len;
|
2008-08-17 15:01:56 +08:00
|
|
|
k += crypto_cipher_blocksize(tfm)) {
|
|
|
|
if (enc)
|
|
|
|
crypto_cipher_encrypt_one(tfm, data + k,
|
|
|
|
data + k);
|
|
|
|
else
|
|
|
|
crypto_cipher_decrypt_one(tfm, data + k,
|
|
|
|
data + k);
|
|
|
|
}
|
|
|
|
|
|
|
|
q = data;
|
2018-05-21 13:50:29 +08:00
|
|
|
if (memcmp(q, result, template[i].len)) {
|
2008-08-17 15:01:56 +08:00
|
|
|
printk(KERN_ERR "alg: cipher: Test %d failed "
|
|
|
|
"on %s for %s\n", j, e, algo);
|
2018-05-21 13:50:29 +08:00
|
|
|
hexdump(q, template[i].len);
|
2008-08-17 15:01:56 +08:00
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
out:
|
2009-05-06 14:15:47 +08:00
|
|
|
testmgr_free_buf(xbuf);
|
|
|
|
out_nobuf:
|
2008-08-17 15:01:56 +08:00
|
|
|
return ret;
|
|
|
|
}
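test_cipher() above exercises the single-block "cipher" API, which has no IV and no chaining, so the test simply walks the buffer one block at a time and transforms it in place. A condensed view of that inner loop (a sketch only; in recent kernels the crypto_cipher interface is internal and declared in crypto/internal/cipher.h):

static void example_blockwise_inplace(struct crypto_cipher *tfm, u8 *data,
				      unsigned int len, bool enc)
{
	unsigned int bs = crypto_cipher_blocksize(tfm);
	unsigned int k;

	/* 'len' is assumed to be a whole number of blocks, as in the vectors */
	for (k = 0; k < len; k += bs) {
		if (enc)
			crypto_cipher_encrypt_one(tfm, data + k, data + k);
		else
			crypto_cipher_decrypt_one(tfm, data + k, data + k);
	}
}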
|
|
|
|
|
2020-10-27 00:17:02 +08:00
|
|
|
static int test_skcipher_vec_cfg(int enc, const struct cipher_testvec *vec,
|
2019-04-12 12:57:37 +08:00
|
|
|
const char *vec_name,
|
2019-02-01 15:51:46 +08:00
|
|
|
const struct testvec_config *cfg,
|
|
|
|
struct skcipher_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
2008-07-31 17:08:25 +08:00
|
|
|
{
|
2019-02-01 15:51:46 +08:00
|
|
|
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
|
|
|
|
const unsigned int alignmask = crypto_skcipher_alignmask(tfm);
|
|
|
|
const unsigned int ivsize = crypto_skcipher_ivsize(tfm);
|
2020-10-27 00:17:02 +08:00
|
|
|
const char *driver = crypto_skcipher_driver_name(tfm);
|
2019-02-01 15:51:46 +08:00
|
|
|
const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
|
|
|
|
const char *op = enc ? "encryption" : "decryption";
|
|
|
|
DECLARE_CRYPTO_WAIT(wait);
|
|
|
|
u8 _iv[3 * (MAX_ALGAPI_ALIGNMASK + 1) + MAX_IVLEN];
|
|
|
|
u8 *iv = PTR_ALIGN(&_iv[0], 2 * (MAX_ALGAPI_ALIGNMASK + 1)) +
|
|
|
|
cfg->iv_offset +
|
|
|
|
(cfg->iv_offset_relative_to_alignmask ? alignmask : 0);
|
|
|
|
struct kvec input;
|
|
|
|
int err;
|
2012-09-21 15:26:47 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* Set the key */
|
|
|
|
if (vec->wk)
|
|
|
|
crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
|
2008-07-31 17:08:25 +08:00
|
|
|
else
|
2019-02-01 15:51:46 +08:00
|
|
|
crypto_skcipher_clear_flags(tfm,
|
|
|
|
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
|
2019-12-02 05:53:28 +08:00
|
|
|
err = do_setkey(crypto_skcipher_setkey, tfm, vec->key, vec->klen,
|
|
|
|
cfg, alignmask);
|
2019-02-01 15:51:46 +08:00
|
|
|
if (err) {
|
2019-04-12 12:57:36 +08:00
|
|
|
if (err == vec->setkey_error)
|
2019-02-01 15:51:46 +08:00
|
|
|
return 0;
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
|
|
|
|
driver, vec_name, vec->setkey_error, err,
|
2019-04-12 12:57:36 +08:00
|
|
|
crypto_skcipher_get_flags(tfm));
|
2019-02-01 15:51:46 +08:00
|
|
|
return err;
|
|
|
|
}
|
2019-04-12 12:57:36 +08:00
|
|
|
if (vec->setkey_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
|
|
|
|
driver, vec_name, vec->setkey_error);
|
2019-02-01 15:51:46 +08:00
|
|
|
return -EINVAL;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* The IV must be copied to a buffer, as the algorithm may modify it */
|
|
|
|
if (ivsize) {
|
|
|
|
if (WARN_ON(ivsize > MAX_IVLEN))
|
|
|
|
return -EINVAL;
|
2019-02-14 16:03:51 +08:00
|
|
|
if (vec->generates_iv && !enc)
|
|
|
|
memcpy(iv, vec->iv_out, ivsize);
|
|
|
|
else if (vec->iv)
|
2019-02-01 15:51:46 +08:00
|
|
|
memcpy(iv, vec->iv, ivsize);
|
2008-07-31 17:08:25 +08:00
|
|
|
else
|
2019-02-01 15:51:46 +08:00
|
|
|
memset(iv, 0, ivsize);
|
|
|
|
} else {
|
|
|
|
if (vec->generates_iv) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s has ivsize=0 but test vector %s generates IV!\n",
|
|
|
|
driver, vec_name);
|
2019-02-01 15:51:46 +08:00
|
|
|
return -EINVAL;
|
2015-06-16 17:46:46 +08:00
|
|
|
}
|
2019-02-01 15:51:46 +08:00
|
|
|
iv = NULL;
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* Build the src/dst scatterlists */
|
|
|
|
input.iov_base = enc ? (void *)vec->ptext : (void *)vec->ctext;
|
|
|
|
input.iov_len = vec->len;
|
|
|
|
err = build_cipher_test_sglists(tsgls, cfg, alignmask,
|
|
|
|
vec->len, vec->len, &input, 1);
|
|
|
|
if (err) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s: error preparing scatterlists for test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:46 +08:00
|
|
|
return err;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* Do the actual encryption or decryption */
|
|
|
|
testmgr_poison(req->__ctx, crypto_skcipher_reqsize(tfm));
|
|
|
|
skcipher_request_set_callback(req, req_flags, crypto_req_done, &wait);
|
|
|
|
skcipher_request_set_crypt(req, tsgls->src.sgl_ptr, tsgls->dst.sgl_ptr,
|
|
|
|
vec->len, iv);
|
2019-03-13 13:12:52 +08:00
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_disable_simd_for_test();
|
|
|
|
err = enc ? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
|
|
|
|
if (cfg->nosimd)
|
|
|
|
crypto_reenable_simd_for_test();
|
|
|
|
err = crypto_wait_req(err, &wait);
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:49 +08:00
|
|
|
/* Check that the algorithm didn't overwrite things it shouldn't have */
|
|
|
|
if (req->cryptlen != vec->len ||
|
|
|
|
req->iv != iv ||
|
|
|
|
req->src != tsgls->src.sgl_ptr ||
|
|
|
|
req->dst != tsgls->dst.sgl_ptr ||
|
|
|
|
crypto_skcipher_reqtfm(req) != tfm ||
|
|
|
|
req->base.complete != crypto_req_done ||
|
|
|
|
req->base.flags != req_flags ||
|
|
|
|
req->base.data != &wait) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s corrupted request struct on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:49 +08:00
|
|
|
if (req->cryptlen != vec->len)
|
|
|
|
pr_err("alg: skcipher: changed 'req->cryptlen'\n");
|
|
|
|
if (req->iv != iv)
|
|
|
|
pr_err("alg: skcipher: changed 'req->iv'\n");
|
|
|
|
if (req->src != tsgls->src.sgl_ptr)
|
|
|
|
pr_err("alg: skcipher: changed 'req->src'\n");
|
|
|
|
if (req->dst != tsgls->dst.sgl_ptr)
|
|
|
|
pr_err("alg: skcipher: changed 'req->dst'\n");
|
|
|
|
if (crypto_skcipher_reqtfm(req) != tfm)
|
|
|
|
pr_err("alg: skcipher: changed 'req->base.tfm'\n");
|
|
|
|
if (req->base.complete != crypto_req_done)
|
|
|
|
pr_err("alg: skcipher: changed 'req->base.complete'\n");
|
|
|
|
if (req->base.flags != req_flags)
|
|
|
|
pr_err("alg: skcipher: changed 'req->base.flags'\n");
|
|
|
|
if (req->base.data != &wait)
|
|
|
|
pr_err("alg: skcipher: changed 'req->base.data'\n");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (is_test_sglist_corrupted(&tsgls->src)) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s corrupted src sgl on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:49 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (tsgls->dst.sgl_ptr != tsgls->src.sgl &&
|
|
|
|
is_test_sglist_corrupted(&tsgls->dst)) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s corrupted dst sgl on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:49 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2019-04-12 12:57:36 +08:00
|
|
|
/* Check for success or failure */
|
|
|
|
if (err) {
|
|
|
|
if (err == vec->crypt_error)
|
|
|
|
return 0;
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s failed on test vector %s; expected_error=%d, actual_error=%d, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, vec->crypt_error, err, cfg->name);
|
2019-04-12 12:57:36 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
if (vec->crypt_error) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s unexpectedly succeeded on test vector %s; expected_error=%d, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, vec->crypt_error, cfg->name);
|
2019-04-12 12:57:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* Check for the correct output (ciphertext or plaintext) */
|
|
|
|
err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
|
|
|
|
vec->len, 0, true);
|
|
|
|
if (err == -EOVERFLOW) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s overran dst buffer on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:46 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
if (err) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s test failed (wrong result) on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:46 +08:00
|
|
|
return err;
|
|
|
|
}
|
2012-09-21 15:26:47 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
/* If applicable, check that the algorithm generated the correct IV */
|
2019-02-14 16:03:51 +08:00
|
|
|
if (vec->iv_out && memcmp(iv, vec->iv_out, ivsize) != 0) {
|
2019-04-12 12:57:37 +08:00
|
|
|
pr_err("alg: skcipher: %s %s test failed (wrong output IV) on test vector %s, cfg=\"%s\"\n",
|
|
|
|
driver, op, vec_name, cfg->name);
|
2019-02-01 15:51:46 +08:00
|
|
|
hexdump(iv, ivsize);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2012-09-21 15:26:47 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
return 0;
|
|
|
|
}
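test_skcipher_vec_cfg() above drives the asynchronous skcipher API synchronously: CRYPTO_TFM_REQ_MAY_BACKLOG is set on the request and crypto_wait_req() blocks until the completion fires. A self-contained sketch of that pattern with the test-specific poisoning and request-corruption checks stripped out (the function name is illustrative; the key is assumed to be set already):

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

static int example_skcipher_encrypt_sync(struct crypto_skcipher *tfm,
					 struct scatterlist *src,
					 struct scatterlist *dst,
					 unsigned int len, u8 *iv)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req;
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	/* crypto_wait_req() handles -EINPROGRESS/-EBUSY and sleeps until done */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
	return err;
}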
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2020-10-27 00:17:02 +08:00
|
|
|
static int test_skcipher_vec(int enc, const struct cipher_testvec *vec,
|
2019-02-01 15:51:46 +08:00
|
|
|
unsigned int vec_num,
|
|
|
|
struct skcipher_request *req,
|
|
|
|
struct cipher_test_sglists *tsgls)
|
|
|
|
{
|
2019-04-12 12:57:37 +08:00
|
|
|
char vec_name[16];
|
2019-02-01 15:51:46 +08:00
|
|
|
unsigned int i;
|
|
|
|
int err;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
if (fips_enabled && vec->fips_skip)
|
|
|
|
return 0;
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-04-12 12:57:37 +08:00
|
|
|
sprintf(vec_name, "%u", vec_num);
|
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++) {
|
2020-10-27 00:17:02 +08:00
|
|
|
err = test_skcipher_vec_cfg(enc, vec, vec_name,
|
2019-02-01 15:51:46 +08:00
|
|
|
&default_cipher_testvec_configs[i],
|
|
|
|
req, tsgls);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
|
|
|
if (!noextratests) {
|
2023-02-28 02:29:47 +08:00
|
|
|
struct rnd_state rng;
|
2019-02-01 15:51:46 +08:00
|
|
|
struct testvec_config cfg;
|
|
|
|
char cfgname[TESTVEC_CONFIG_NAMELEN];
|
|
|
|
|
2023-02-28 02:29:47 +08:00
|
|
|
init_rnd_state(&rng);
|
|
|
|
|
2019-02-01 15:51:46 +08:00
|
|
|
for (i = 0; i < fuzz_iterations; i++) {
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_testvec_config(&rng, &cfg, cfgname,
|
2019-02-01 15:51:46 +08:00
|
|
|
sizeof(cfgname));
|
2020-10-27 00:17:02 +08:00
|
|
|
err = test_skcipher_vec_cfg(enc, vec, vec_name,
|
2019-02-01 15:51:46 +08:00
|
|
|
&cfg, req, tsgls);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2019-06-03 13:42:33 +08:00
|
|
|
cond_resched();
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}
|
2019-02-01 15:51:46 +08:00
|
|
|
#endif
|
|
|
|
return 0;
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
|
2019-04-12 12:57:40 +08:00
|
|
|
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
|
|
|
|
/*
|
|
|
|
* Generate a symmetric cipher test vector from the given implementation.
|
|
|
|
* Assumes the buffers in 'vec' were already allocated.
|
|
|
|
*/
|
2023-02-28 02:29:47 +08:00
|
|
|
static void generate_random_cipher_testvec(struct rnd_state *rng,
|
|
|
|
struct skcipher_request *req,
|
2019-04-12 12:57:40 +08:00
|
|
|
struct cipher_testvec *vec,
|
|
|
|
unsigned int maxdatasize,
|
|
|
|
char *name, size_t max_namelen)
|
|
|
|
{
|
|
|
|
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
|
2019-11-30 02:23:04 +08:00
|
|
|
const unsigned int maxkeysize = crypto_skcipher_max_keysize(tfm);
|
2019-04-12 12:57:40 +08:00
|
|
|
const unsigned int ivsize = crypto_skcipher_ivsize(tfm);
|
|
|
|
struct scatterlist src, dst;
|
|
|
|
u8 iv[MAX_IVLEN];
|
|
|
|
DECLARE_CRYPTO_WAIT(wait);
|
|
|
|
|
|
|
|
/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
|
|
|
|
vec->klen = maxkeysize;
|
2023-02-28 02:29:47 +08:00
|
|
|
if (prandom_u32_below(rng, 4) == 0)
|
|
|
|
vec->klen = prandom_u32_below(rng, maxkeysize + 1);
|
|
|
|
generate_random_bytes(rng, (u8 *)vec->key, vec->klen);
|
2019-04-12 12:57:40 +08:00
|
|
|
vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen);
|
|
|
|
|
|
|
|
/* IV */
|
2023-02-28 02:29:47 +08:00
|
|
|
generate_random_bytes(rng, (u8 *)vec->iv, ivsize);
|
2019-04-12 12:57:40 +08:00
|
|
|
|
|
|
|
/* Plaintext */
|
2023-02-28 02:29:47 +08:00
|
|
|
vec->len = generate_random_length(rng, maxdatasize);
|
|
|
|
generate_random_bytes(rng, (u8 *)vec->ptext, vec->len);
|
2019-04-12 12:57:40 +08:00
|
|
|
|
|
|
|
/* If the key couldn't be set, no need to continue to encrypt. */
|
|
|
|
if (vec->setkey_error)
|
|
|
|
goto done;
|
|
|
|
|
|
|
|
/* Ciphertext */
|
|
|
|
sg_init_one(&src, vec->ptext, vec->len);
|
|
|
|
sg_init_one(&dst, vec->ctext, vec->len);
|
|
|
|
memcpy(iv, vec->iv, ivsize);
|
|
|
|
skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
|
|
|
|
skcipher_request_set_crypt(req, &src, &dst, vec->len, iv);
|
|
|
|
vec->crypt_error = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
|
2019-12-02 05:53:26 +08:00
|
|
|
if (vec->crypt_error != 0) {
|
|
|
|
/*
|
|
|
|
* The only acceptable error here is for an invalid length, so
|
|
|
|
* skcipher decryption should fail with the same error too.
|
|
|
|
* We'll test for this. But to keep the API usage well-defined,
|
|
|
|
* explicitly initialize the ciphertext buffer too.
|
|
|
|
*/
|
|
|
|
memset((u8 *)vec->ctext, 0, vec->len);
|
|
|
|
}
|
2019-04-12 12:57:40 +08:00
|
|
|
done:
|
|
|
|
snprintf(name, max_namelen, "\"random: len=%u klen=%u\"",
|
|
|
|
vec->len, vec->klen);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
 * Test the skcipher algorithm represented by @req against the corresponding
 * generic implementation, if one is available.
 */
static int test_skcipher_vs_generic_impl(const char *generic_driver,
					 struct skcipher_request *req,
					 struct cipher_test_sglists *tsgls)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	const unsigned int maxkeysize = crypto_skcipher_max_keysize(tfm);
	const unsigned int ivsize = crypto_skcipher_ivsize(tfm);
	const unsigned int blocksize = crypto_skcipher_blocksize(tfm);
	const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
	const char *algname = crypto_skcipher_alg(tfm)->base.cra_name;
	const char *driver = crypto_skcipher_driver_name(tfm);
	struct rnd_state rng;
	char _generic_driver[CRYPTO_MAX_ALG_NAME];
	struct crypto_skcipher *generic_tfm = NULL;
	struct skcipher_request *generic_req = NULL;
	unsigned int i;
	struct cipher_testvec vec = { 0 };
	char vec_name[64];
	struct testvec_config *cfg;
	char cfgname[TESTVEC_CONFIG_NAMELEN];
	int err;

	if (noextratests)
		return 0;

	/* Keywrap isn't supported here yet as it handles its IV differently. */
	if (strncmp(algname, "kw(", 3) == 0)
		return 0;

	init_rnd_state(&rng);

	if (!generic_driver) { /* Use default naming convention? */
		err = build_generic_driver_name(algname, _generic_driver);
		if (err)
			return err;
		generic_driver = _generic_driver;
	}

	if (strcmp(generic_driver, driver) == 0) /* Already the generic impl? */
		return 0;

	generic_tfm = crypto_alloc_skcipher(generic_driver, 0, 0);
	if (IS_ERR(generic_tfm)) {
		err = PTR_ERR(generic_tfm);
		if (err == -ENOENT) {
			pr_warn("alg: skcipher: skipping comparison tests for %s because %s is unavailable\n",
				driver, generic_driver);
			return 0;
		}
		pr_err("alg: skcipher: error allocating %s (generic impl of %s): %d\n",
		       generic_driver, algname, err);
		return err;
	}

	cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
	if (!cfg) {
		err = -ENOMEM;
		goto out;
	}

	generic_req = skcipher_request_alloc(generic_tfm, GFP_KERNEL);
	if (!generic_req) {
		err = -ENOMEM;
		goto out;
	}

	/* Check the algorithm properties for consistency. */

	if (crypto_skcipher_min_keysize(tfm) !=
	    crypto_skcipher_min_keysize(generic_tfm)) {
		pr_err("alg: skcipher: min keysize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, crypto_skcipher_min_keysize(tfm),
		       crypto_skcipher_min_keysize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	if (maxkeysize != crypto_skcipher_max_keysize(generic_tfm)) {
		pr_err("alg: skcipher: max keysize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, maxkeysize,
		       crypto_skcipher_max_keysize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	if (ivsize != crypto_skcipher_ivsize(generic_tfm)) {
		pr_err("alg: skcipher: ivsize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, ivsize, crypto_skcipher_ivsize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	if (blocksize != crypto_skcipher_blocksize(generic_tfm)) {
		pr_err("alg: skcipher: blocksize for %s (%u) doesn't match generic impl (%u)\n",
		       driver, blocksize,
		       crypto_skcipher_blocksize(generic_tfm));
		err = -EINVAL;
		goto out;
	}

	/*
	 * Now generate test vectors using the generic implementation, and test
	 * the other implementation against them.
	 */

	vec.key = kmalloc(maxkeysize, GFP_KERNEL);
	vec.iv = kmalloc(ivsize, GFP_KERNEL);
	vec.ptext = kmalloc(maxdatasize, GFP_KERNEL);
	vec.ctext = kmalloc(maxdatasize, GFP_KERNEL);
	if (!vec.key || !vec.iv || !vec.ptext || !vec.ctext) {
		err = -ENOMEM;
		goto out;
	}

	for (i = 0; i < fuzz_iterations * 8; i++) {
		generate_random_cipher_testvec(&rng, generic_req, &vec,
					       maxdatasize,
					       vec_name, sizeof(vec_name));
		generate_random_testvec_config(&rng, cfg, cfgname,
					       sizeof(cfgname));

		err = test_skcipher_vec_cfg(ENCRYPT, &vec, vec_name,
					    cfg, req, tsgls);
		if (err)
			goto out;
		err = test_skcipher_vec_cfg(DECRYPT, &vec, vec_name,
					    cfg, req, tsgls);
		if (err)
			goto out;
		cond_resched();
	}
	err = 0;
out:
	kfree(cfg);
	kfree(vec.key);
	kfree(vec.iv);
	kfree(vec.ptext);
	kfree(vec.ctext);
	crypto_free_skcipher(generic_tfm);
	skcipher_request_free(generic_req);
	return err;
}
#else /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
static int test_skcipher_vs_generic_impl(const char *generic_driver,
					 struct skcipher_request *req,
					 struct cipher_test_sglists *tsgls)
{
	return 0;
}
#endif /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */

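/*
 * Run the standard test vectors in @suite against @req in one direction
 * (@enc selects encryption or decryption).
 */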
static int test_skcipher(int enc, const struct cipher_test_suite *suite,
			 struct skcipher_request *req,
			 struct cipher_test_sglists *tsgls)
{
	unsigned int i;
	int err;

	for (i = 0; i < suite->count; i++) {
		err = test_skcipher_vec(enc, &suite->vecs[i], i, req, tsgls);
		if (err)
			return err;
		cond_resched();
	}
	return 0;
}

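/*
 * Full skcipher self-test: run the suite's vectors in both directions, then
 * (when the extra tests are enabled) compare the driver against its generic
 * implementation using randomly generated vectors.
 */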
static int alg_test_skcipher(const struct alg_test_desc *desc,
			     const char *driver, u32 type, u32 mask)
{
	const struct cipher_test_suite *suite = &desc->suite.cipher;
	struct crypto_skcipher *tfm;
	struct skcipher_request *req = NULL;
	struct cipher_test_sglists *tsgls = NULL;
	int err;

	if (suite->count <= 0) {
		pr_err("alg: skcipher: empty test suite for %s\n", driver);
		return -EINVAL;
	}

	tfm = crypto_alloc_skcipher(driver, type, mask);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			return 0;
		pr_err("alg: skcipher: failed to allocate transform for %s: %ld\n",
		       driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}
	driver = crypto_skcipher_driver_name(tfm);

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		pr_err("alg: skcipher: failed to allocate request for %s\n",
		       driver);
		err = -ENOMEM;
		goto out;
	}

	tsgls = alloc_cipher_test_sglists();
	if (!tsgls) {
		pr_err("alg: skcipher: failed to allocate test buffers for %s\n",
		       driver);
		err = -ENOMEM;
		goto out;
	}

	err = test_skcipher(ENCRYPT, suite, req, tsgls);
	if (err)
		goto out;

	err = test_skcipher(DECRYPT, suite, req, tsgls);
	if (err)
		goto out;

	err = test_skcipher_vs_generic_impl(desc->generic_driver, req, tsgls);
out:
	free_cipher_test_sglists(tsgls);
	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return err;
}

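/*
 * Test a synchronous compression transform (crypto_comp): compress each
 * vector in @ctemplate and check that decompressing the result recovers the
 * original input, then decompress each vector in @dtemplate and compare
 * against the expected output.
 */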
static int test_comp(struct crypto_comp *tfm,
		     const struct comp_testvec *ctemplate,
		     const struct comp_testvec *dtemplate,
		     int ctcount, int dtcount)
{
	const char *algo = crypto_tfm_alg_driver_name(crypto_comp_tfm(tfm));
	char *output, *decomp_output;
	unsigned int i;
	int ret;

	output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
	if (!output)
		return -ENOMEM;

	decomp_output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
	if (!decomp_output) {
		kfree(output);
		return -ENOMEM;
	}

	for (i = 0; i < ctcount; i++) {
		int ilen;
		unsigned int dlen = COMP_BUF_SIZE;

		memset(output, 0, COMP_BUF_SIZE);
		memset(decomp_output, 0, COMP_BUF_SIZE);

		ilen = ctemplate[i].inlen;
		ret = crypto_comp_compress(tfm, ctemplate[i].input,
					   ilen, output, &dlen);
		if (ret) {
			printk(KERN_ERR "alg: comp: compression failed "
			       "on test %d for %s: ret=%d\n", i + 1, algo,
			       -ret);
			goto out;
		}

		ilen = dlen;
		dlen = COMP_BUF_SIZE;
		ret = crypto_comp_decompress(tfm, output,
					     ilen, decomp_output, &dlen);
		if (ret) {
			pr_err("alg: comp: compression failed: decompress: on test %d for %s failed: ret=%d\n",
			       i + 1, algo, -ret);
			goto out;
		}

		if (dlen != ctemplate[i].inlen) {
			printk(KERN_ERR "alg: comp: Compression test %d "
			       "failed for %s: output len = %d\n", i + 1, algo,
			       dlen);
			ret = -EINVAL;
			goto out;
		}

		if (memcmp(decomp_output, ctemplate[i].input,
			   ctemplate[i].inlen)) {
			pr_err("alg: comp: compression failed: output differs: on test %d for %s\n",
			       i + 1, algo);
			hexdump(decomp_output, dlen);
			ret = -EINVAL;
			goto out;
		}
	}

	for (i = 0; i < dtcount; i++) {
		int ilen;
		unsigned int dlen = COMP_BUF_SIZE;

		memset(decomp_output, 0, COMP_BUF_SIZE);

		ilen = dtemplate[i].inlen;
		ret = crypto_comp_decompress(tfm, dtemplate[i].input,
					     ilen, decomp_output, &dlen);
		if (ret) {
			printk(KERN_ERR "alg: comp: decompression failed "
			       "on test %d for %s: ret=%d\n", i + 1, algo,
			       -ret);
			goto out;
		}

		if (dlen != dtemplate[i].outlen) {
			printk(KERN_ERR "alg: comp: Decompression test %d "
			       "failed for %s: output len = %d\n", i + 1, algo,
			       dlen);
			ret = -EINVAL;
			goto out;
		}

		if (memcmp(decomp_output, dtemplate[i].output, dlen)) {
			printk(KERN_ERR "alg: comp: Decompression test %d "
			       "failed for %s\n", i + 1, algo);
			hexdump(decomp_output, dlen);
			ret = -EINVAL;
			goto out;
		}
	}

	ret = 0;

out:
	kfree(decomp_output);
	kfree(output);
	return ret;
}

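/*
 * Same round-trip and known-answer checks as test_comp(), but through the
 * asynchronous acomp API: requests are submitted with
 * crypto_acomp_compress()/_decompress() and completion is awaited with
 * crypto_wait_req().  With CONFIG_CRYPTO_MANAGER_EXTRA_TESTS, each vector is
 * also resubmitted with a NULL destination scatterlist to exercise that path.
 */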
static int test_acomp(struct crypto_acomp *tfm,
		      const struct comp_testvec *ctemplate,
		      const struct comp_testvec *dtemplate,
		      int ctcount, int dtcount)
{
	const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm));
	unsigned int i;
	char *output, *decomp_out;
	int ret;
	struct scatterlist src, dst;
	struct acomp_req *req;
	struct crypto_wait wait;

	output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
	if (!output)
		return -ENOMEM;

	decomp_out = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
	if (!decomp_out) {
		kfree(output);
		return -ENOMEM;
	}

	for (i = 0; i < ctcount; i++) {
		unsigned int dlen = COMP_BUF_SIZE;
		int ilen = ctemplate[i].inlen;
		void *input_vec;

		input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL);
		if (!input_vec) {
			ret = -ENOMEM;
			goto out;
		}

		memset(output, 0, dlen);
		crypto_init_wait(&wait);
		sg_init_one(&src, input_vec, ilen);
		sg_init_one(&dst, output, dlen);

		req = acomp_request_alloc(tfm);
		if (!req) {
			pr_err("alg: acomp: request alloc failed for %s\n",
			       algo);
			kfree(input_vec);
			ret = -ENOMEM;
			goto out;
		}

		acomp_request_set_params(req, &src, &dst, ilen, dlen);
		acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					   crypto_req_done, &wait);

		ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
		if (ret) {
			pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
			       i + 1, algo, -ret);
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

		ilen = req->dlen;
		dlen = COMP_BUF_SIZE;
		sg_init_one(&src, output, ilen);
		sg_init_one(&dst, decomp_out, dlen);
		crypto_init_wait(&wait);
		acomp_request_set_params(req, &src, &dst, ilen, dlen);

		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
		if (ret) {
			pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
			       i + 1, algo, -ret);
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

		if (req->dlen != ctemplate[i].inlen) {
			pr_err("alg: acomp: Compression test %d failed for %s: output len = %d\n",
			       i + 1, algo, req->dlen);
			ret = -EINVAL;
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

		if (memcmp(input_vec, decomp_out, req->dlen)) {
			pr_err("alg: acomp: Compression test %d failed for %s\n",
			       i + 1, algo);
			hexdump(output, req->dlen);
			ret = -EINVAL;
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
		crypto_init_wait(&wait);
		sg_init_one(&src, input_vec, ilen);
		acomp_request_set_params(req, &src, NULL, ilen, 0);

		ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
		if (ret) {
			pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
			       i + 1, algo, -ret);
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}
#endif

		kfree(input_vec);
		acomp_request_free(req);
	}

	for (i = 0; i < dtcount; i++) {
		unsigned int dlen = COMP_BUF_SIZE;
		int ilen = dtemplate[i].inlen;
		void *input_vec;

		input_vec = kmemdup(dtemplate[i].input, ilen, GFP_KERNEL);
		if (!input_vec) {
			ret = -ENOMEM;
			goto out;
		}

		memset(output, 0, dlen);
		crypto_init_wait(&wait);
		sg_init_one(&src, input_vec, ilen);
		sg_init_one(&dst, output, dlen);

		req = acomp_request_alloc(tfm);
		if (!req) {
			pr_err("alg: acomp: request alloc failed for %s\n",
			       algo);
			kfree(input_vec);
			ret = -ENOMEM;
			goto out;
		}

		acomp_request_set_params(req, &src, &dst, ilen, dlen);
		acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					   crypto_req_done, &wait);

		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
		if (ret) {
			pr_err("alg: acomp: decompression failed on test %d for %s: ret=%d\n",
			       i + 1, algo, -ret);
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

		if (req->dlen != dtemplate[i].outlen) {
			pr_err("alg: acomp: Decompression test %d failed for %s: output len = %d\n",
			       i + 1, algo, req->dlen);
			ret = -EINVAL;
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

		if (memcmp(output, dtemplate[i].output, req->dlen)) {
			pr_err("alg: acomp: Decompression test %d failed for %s\n",
			       i + 1, algo);
			hexdump(output, req->dlen);
			ret = -EINVAL;
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}

#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
		crypto_init_wait(&wait);
		acomp_request_set_params(req, &src, NULL, ilen, 0);

		ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
		if (ret) {
			pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
			       i + 1, algo, -ret);
			kfree(input_vec);
			acomp_request_free(req);
			goto out;
		}
#endif

		kfree(input_vec);
		acomp_request_free(req);
	}

	ret = 0;

out:
	kfree(decomp_out);
	kfree(output);
	return ret;
}

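/*
 * Known-answer test for a seedable CPRNG: the seed is the concatenation of
 * the vector's V, key and DT fields, and the generated bytes are compared
 * against the expected result.
 */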
static int test_cprng(struct crypto_rng *tfm,
		      const struct cprng_testvec *template,
		      unsigned int tcount)
{
	const char *algo = crypto_tfm_alg_driver_name(crypto_rng_tfm(tfm));
	int err = 0, i, j, seedsize;
	u8 *seed;
	char result[32];

	seedsize = crypto_rng_seedsize(tfm);

	seed = kmalloc(seedsize, GFP_KERNEL);
	if (!seed) {
		printk(KERN_ERR "alg: cprng: Failed to allocate seed space "
		       "for %s\n", algo);
		return -ENOMEM;
	}

	for (i = 0; i < tcount; i++) {
		memset(result, 0, 32);

		memcpy(seed, template[i].v, template[i].vlen);
		memcpy(seed + template[i].vlen, template[i].key,
		       template[i].klen);
		memcpy(seed + template[i].vlen + template[i].klen,
		       template[i].dt, template[i].dtlen);

		err = crypto_rng_reset(tfm, seed, seedsize);
		if (err) {
			printk(KERN_ERR "alg: cprng: Failed to reset rng "
			       "for %s\n", algo);
			goto out;
		}

		for (j = 0; j < template[i].loops; j++) {
			err = crypto_rng_get_bytes(tfm, result,
						   template[i].rlen);
			if (err < 0) {
				printk(KERN_ERR "alg: cprng: Failed to obtain "
				       "the correct amount of random data for "
				       "%s (requested %d)\n", algo,
				       template[i].rlen);
				goto out;
			}
		}

		err = memcmp(result, template[i].result,
			     template[i].rlen);
		if (err) {
			printk(KERN_ERR "alg: cprng: Test %d failed for %s\n",
			       i, algo);
			hexdump(result, template[i].rlen);
			err = -EINVAL;
			goto out;
		}
	}

out:
	kfree(seed);
	return err;
}

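/*
 * Test a single-block cipher (crypto_cipher) against its test vectors in
 * both the encryption and decryption directions.
 */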
static int alg_test_cipher(const struct alg_test_desc *desc,
			   const char *driver, u32 type, u32 mask)
{
	const struct cipher_test_suite *suite = &desc->suite.cipher;
	struct crypto_cipher *tfm;
	int err;

	tfm = crypto_alloc_cipher(driver, type, mask);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			return 0;
		printk(KERN_ERR "alg: cipher: Failed to load transform for "
		       "%s: %ld\n", driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}

	err = test_cipher(tfm, ENCRYPT, suite->vecs, suite->count);
	if (!err)
		err = test_cipher(tfm, DECRYPT, suite->vecs, suite->count);

	crypto_free_cipher(tfm);
	return err;
}

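/*
 * Dispatch the compression self-tests: use the acomp API for
 * CRYPTO_ALG_TYPE_ACOMPRESS algorithms and the legacy comp API otherwise.
 */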
static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
			 u32 type, u32 mask)
{
	struct crypto_comp *comp;
	struct crypto_acomp *acomp;
	int err;
	u32 algo_type = type & CRYPTO_ALG_TYPE_ACOMPRESS_MASK;

	if (algo_type == CRYPTO_ALG_TYPE_ACOMPRESS) {
		acomp = crypto_alloc_acomp(driver, type, mask);
		if (IS_ERR(acomp)) {
			if (PTR_ERR(acomp) == -ENOENT)
				return 0;
			pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
			       driver, PTR_ERR(acomp));
			return PTR_ERR(acomp);
		}
		err = test_acomp(acomp, desc->suite.comp.comp.vecs,
				 desc->suite.comp.decomp.vecs,
				 desc->suite.comp.comp.count,
				 desc->suite.comp.decomp.count);
		crypto_free_acomp(acomp);
	} else {
		comp = crypto_alloc_comp(driver, type, mask);
		if (IS_ERR(comp)) {
			if (PTR_ERR(comp) == -ENOENT)
				return 0;
			pr_err("alg: comp: Failed to load transform for %s: %ld\n",
			       driver, PTR_ERR(comp));
			return PTR_ERR(comp);
		}

		err = test_comp(comp, desc->suite.comp.comp.vecs,
				desc->suite.comp.decomp.vecs,
				desc->suite.comp.comp.count,
				desc->suite.comp.decomp.count);

		crypto_free_comp(comp);
	}
	return err;
}

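/*
 * In addition to the generic hash tests, write a known value directly into
 * the shash descriptor context and check that crypto_shash_final() returns
 * its bitwise inverse, i.e. that the driver keeps the raw CRC state in the
 * expected format.
 */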
static int alg_test_crc32c(const struct alg_test_desc *desc,
			   const char *driver, u32 type, u32 mask)
{
	struct crypto_shash *tfm;
	__le32 val;
	int err;

	err = alg_test_hash(desc, driver, type, mask);
	if (err)
		return err;

	tfm = crypto_alloc_shash(driver, type, mask);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT) {
			/*
			 * This crc32c implementation is only available through
			 * the ahash API, not the shash API, so the remaining
			 * part of the test is not applicable to it.
			 */
			return 0;
		}
		printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
		       "%ld\n", driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}
	driver = crypto_shash_driver_name(tfm);

	do {
		SHASH_DESC_ON_STACK(shash, tfm);
		u32 *ctx = (u32 *)shash_desc_ctx(shash);

		shash->tfm = tfm;

		*ctx = 420553207;
		err = crypto_shash_final(shash, (u8 *)&val);
		if (err) {
			printk(KERN_ERR "alg: crc32c: Operation failed for "
			       "%s: %d\n", driver, err);
			break;
		}

		if (val != cpu_to_le32(~420553207)) {
			pr_err("alg: crc32c: Test failed for %s: %u\n",
			       driver, le32_to_cpu(val));
			err = -EINVAL;
		}
	} while (0);

	crypto_free_shash(tfm);

	return err;
}

static int alg_test_cprng(const struct alg_test_desc *desc, const char *driver,
			  u32 type, u32 mask)
{
	struct crypto_rng *rng;
	int err;

	rng = crypto_alloc_rng(driver, type, mask);
	if (IS_ERR(rng)) {
		if (PTR_ERR(rng) == -ENOENT)
			return 0;
		printk(KERN_ERR "alg: cprng: Failed to load transform for %s: "
		       "%ld\n", driver, PTR_ERR(rng));
		return PTR_ERR(rng);
	}

	err = test_cprng(rng, desc->suite.cprng.vecs, desc->suite.cprng.count);

	crypto_free_rng(rng);

	return err;
}

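/*
 * Run one known-answer test against a DRBG instance: reset it with test
 * entropy and the personalization string, generate two blocks of output with
 * additional input (and, for prediction-resistant DRBGs, fresh test entropy
 * each time), and compare the final block against the expected data.
 */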
static int drbg_cavs_test(const struct drbg_testvec *test, int pr,
			  const char *driver, u32 type, u32 mask)
{
	int ret = -EAGAIN;
	struct crypto_rng *drng;
	struct drbg_test_data test_data;
	struct drbg_string addtl, pers, testentropy;
	unsigned char *buf = kzalloc(test->expectedlen, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	drng = crypto_alloc_rng(driver, type, mask);
	if (IS_ERR(drng)) {
		kfree_sensitive(buf);
		if (PTR_ERR(drng) == -ENOENT)
			return 0;
		printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for "
		       "%s\n", driver);
		return PTR_ERR(drng);
	}

	test_data.testentropy = &testentropy;
	drbg_string_fill(&testentropy, test->entropy, test->entropylen);
	drbg_string_fill(&pers, test->pers, test->perslen);
	ret = crypto_drbg_reset_test(drng, &pers, &test_data);
	if (ret) {
		printk(KERN_ERR "alg: drbg: Failed to reset rng\n");
		goto outbuf;
	}

	drbg_string_fill(&addtl, test->addtla, test->addtllen);
	if (pr) {
		drbg_string_fill(&testentropy, test->entpra, test->entprlen);
		ret = crypto_drbg_get_bytes_addtl_test(drng,
			buf, test->expectedlen, &addtl, &test_data);
	} else {
		ret = crypto_drbg_get_bytes_addtl(drng,
			buf, test->expectedlen, &addtl);
	}
	if (ret < 0) {
		printk(KERN_ERR "alg: drbg: could not obtain random data for "
		       "driver %s\n", driver);
		goto outbuf;
	}

	drbg_string_fill(&addtl, test->addtlb, test->addtllen);
	if (pr) {
		drbg_string_fill(&testentropy, test->entprb, test->entprlen);
		ret = crypto_drbg_get_bytes_addtl_test(drng,
			buf, test->expectedlen, &addtl, &test_data);
	} else {
		ret = crypto_drbg_get_bytes_addtl(drng,
			buf, test->expectedlen, &addtl);
	}
	if (ret < 0) {
		printk(KERN_ERR "alg: drbg: could not obtain random data for "
		       "driver %s\n", driver);
		goto outbuf;
	}

	ret = memcmp(test->expected, buf, test->expectedlen);

outbuf:
	crypto_free_rng(drng);
	kfree_sensitive(buf);
	return ret;
}

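/* Run all DRBG vectors; "drbg_pr_" drivers are tested with prediction resistance. */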
static int alg_test_drbg(const struct alg_test_desc *desc, const char *driver,
			 u32 type, u32 mask)
{
	int err = 0;
	int pr = 0;
	int i = 0;
	const struct drbg_testvec *template = desc->suite.drbg.vecs;
	unsigned int tcount = desc->suite.drbg.count;

	if (0 == memcmp(driver, "drbg_pr_", 8))
		pr = 1;

	for (i = 0; i < tcount; i++) {
		err = drbg_cavs_test(&template[i], pr, driver, type, mask);
		if (err) {
			printk(KERN_ERR "alg: drbg: Test %d failed for %s\n",
			       i, driver);
			err = -EINVAL;
			break;
		}
	}
	return err;
}

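/*
 * Test one key-agreement (kpp) vector: generate party A's public key and
 * either check it against the expected value or save it, compute the shared
 * secret from party B's public key, and, when the vector requests key
 * generation, recompute the secret from party B's side before comparing the
 * two results.
 */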
static int do_test_kpp(struct crypto_kpp *tfm, const struct kpp_testvec *vec,
		       const char *alg)
{
	struct kpp_request *req;
	void *input_buf = NULL;
	void *output_buf = NULL;
	void *a_public = NULL;
	void *a_ss = NULL;
	void *shared_secret = NULL;
	struct crypto_wait wait;
	unsigned int out_len_max;
	int err = -ENOMEM;
	struct scatterlist src, dst;

	req = kpp_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return err;

	crypto_init_wait(&wait);

	err = crypto_kpp_set_secret(tfm, vec->secret, vec->secret_size);
	if (err < 0)
		goto free_req;

	out_len_max = crypto_kpp_maxsize(tfm);
	output_buf = kzalloc(out_len_max, GFP_KERNEL);
	if (!output_buf) {
		err = -ENOMEM;
		goto free_req;
	}

	/* Use appropriate parameter as base */
	kpp_request_set_input(req, NULL, 0);
	sg_init_one(&dst, output_buf, out_len_max);
	kpp_request_set_output(req, &dst, out_len_max);
	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				 crypto_req_done, &wait);

	/* Compute party A's public key */
	err = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait);
	if (err) {
		pr_err("alg: %s: Party A: generate public key test failed. err %d\n",
		       alg, err);
		goto free_output;
	}

	if (vec->genkey) {
		/* Save party A's public key */
		a_public = kmemdup(sg_virt(req->dst), out_len_max, GFP_KERNEL);
		if (!a_public) {
			err = -ENOMEM;
			goto free_output;
		}
	} else {
		/* Verify calculated public key */
		if (memcmp(vec->expected_a_public, sg_virt(req->dst),
			   vec->expected_a_public_size)) {
			pr_err("alg: %s: Party A: generate public key test failed. Invalid output\n",
			       alg);
			err = -EINVAL;
			goto free_output;
		}
	}

	/* Calculate shared secret key by using counter part (b) public key. */
	input_buf = kmemdup(vec->b_public, vec->b_public_size, GFP_KERNEL);
	if (!input_buf) {
		err = -ENOMEM;
		goto free_output;
	}

	sg_init_one(&src, input_buf, vec->b_public_size);
	sg_init_one(&dst, output_buf, out_len_max);
	kpp_request_set_input(req, &src, vec->b_public_size);
	kpp_request_set_output(req, &dst, out_len_max);
	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				 crypto_req_done, &wait);
	err = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait);
	if (err) {
		pr_err("alg: %s: Party A: compute shared secret test failed. err %d\n",
		       alg, err);
		goto free_all;
	}

	if (vec->genkey) {
		/* Save the shared secret obtained by party A */
		a_ss = kmemdup(sg_virt(req->dst), vec->expected_ss_size, GFP_KERNEL);
		if (!a_ss) {
			err = -ENOMEM;
			goto free_all;
		}

		/*
		 * Calculate party B's shared secret by using party A's
		 * public key.
		 */
		err = crypto_kpp_set_secret(tfm, vec->b_secret,
					    vec->b_secret_size);
		if (err < 0)
			goto free_all;

		sg_init_one(&src, a_public, vec->expected_a_public_size);
		sg_init_one(&dst, output_buf, out_len_max);
		kpp_request_set_input(req, &src, vec->expected_a_public_size);
		kpp_request_set_output(req, &dst, out_len_max);
		kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					 crypto_req_done, &wait);
		err = crypto_wait_req(crypto_kpp_compute_shared_secret(req),
				      &wait);
		if (err) {
			pr_err("alg: %s: Party B: compute shared secret failed. err %d\n",
			       alg, err);
			goto free_all;
		}

		shared_secret = a_ss;
	} else {
		shared_secret = (void *)vec->expected_ss;
	}

	/*
	 * verify shared secret from which the user will derive
	 * secret key by executing whatever hash it has chosen
	 */
	if (memcmp(shared_secret, sg_virt(req->dst),
		   vec->expected_ss_size)) {
		pr_err("alg: %s: compute shared secret test failed. Invalid output\n",
		       alg);
		err = -EINVAL;
	}

free_all:
	kfree(a_ss);
	kfree(input_buf);
free_output:
	kfree(a_public);
	kfree(output_buf);
free_req:
	kpp_request_free(req);
	return err;
}

|
|
|
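/* Run do_test_kpp() over every vector in @vecs, stopping at the first failure. */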
static int test_kpp(struct crypto_kpp *tfm, const char *alg,
		    const struct kpp_testvec *vecs, unsigned int tcount)
{
	int ret, i;

	for (i = 0; i < tcount; i++) {
		ret = do_test_kpp(tfm, vecs++, alg);
		if (ret) {
			pr_err("alg: %s: test failed on vector %d, err=%d\n",
			       alg, i + 1, ret);
			return ret;
		}
	}
	return 0;
}

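/*
 * Entry point for KPP (key agreement) testing: allocate a transform for
 * @driver and run the algorithm's test vectors, if any.  A driver that is
 * not available (-ENOENT) is not treated as a failure.
 */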
static int alg_test_kpp(const struct alg_test_desc *desc, const char *driver,
			u32 type, u32 mask)
{
	struct crypto_kpp *tfm;
	int err = 0;

	tfm = crypto_alloc_kpp(driver, type, mask);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			return 0;
		pr_err("alg: kpp: Failed to load tfm for %s: %ld\n",
		       driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}
	if (desc->suite.kpp.vecs)
		err = test_kpp(tfm, desc->alg, desc->suite.kpp.vecs,
			       desc->suite.kpp.count);

	crypto_free_kpp(tfm);
	return err;
}

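/* Append a u32 (native byte order) at @dst and return the advanced pointer. */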
static u8 *test_pack_u32(u8 *dst, u32 val)
{
	memcpy(dst, &val, sizeof(val));
	return dst + sizeof(val);
}

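/*
 * Test a single akcipher vector: load the public or private key, encrypt
 * the message and compare against the expected ciphertext when one is
 * provided, then (unless the vector carries only a public key) decrypt and
 * check that the original message is recovered.
 */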
static int test_akcipher_one(struct crypto_akcipher *tfm,
			     const struct akcipher_testvec *vecs)
{
	char *xbuf[XBUFSIZE];
	struct akcipher_request *req;
	void *outbuf_enc = NULL;
	void *outbuf_dec = NULL;
	struct crypto_wait wait;
	unsigned int out_len_max, out_len = 0;
	int err = -ENOMEM;
	struct scatterlist src, dst, src_tab[2];
	const char *c;
	unsigned int c_size;

	if (testmgr_alloc_buf(xbuf))
		return err;

	req = akcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		goto free_xbuf;

	crypto_init_wait(&wait);

	if (vecs->public_key_vec)
		err = crypto_akcipher_set_pub_key(tfm, vecs->key,
						  vecs->key_len);
	else
		err = crypto_akcipher_set_priv_key(tfm, vecs->key,
						   vecs->key_len);
	if (err)
		goto free_req;

	/* First run encrypt test which does not require a private key */
	err = -ENOMEM;
	out_len_max = crypto_akcipher_maxsize(tfm);
	outbuf_enc = kzalloc(out_len_max, GFP_KERNEL);
	if (!outbuf_enc)
		goto free_req;

	c = vecs->c;
	c_size = vecs->c_size;

	err = -E2BIG;
	if (WARN_ON(vecs->m_size > PAGE_SIZE))
		goto free_all;
	memcpy(xbuf[0], vecs->m, vecs->m_size);

	sg_init_table(src_tab, 2);
	sg_set_buf(&src_tab[0], xbuf[0], 8);
	sg_set_buf(&src_tab[1], xbuf[0] + 8, vecs->m_size - 8);
	sg_init_one(&dst, outbuf_enc, out_len_max);
	akcipher_request_set_crypt(req, src_tab, &dst, vecs->m_size,
				   out_len_max);
	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);

	err = crypto_wait_req(crypto_akcipher_encrypt(req), &wait);
	if (err) {
		pr_err("alg: akcipher: encrypt test failed. err %d\n", err);
		goto free_all;
	}
	if (c) {
		if (req->dst_len != c_size) {
			pr_err("alg: akcipher: encrypt test failed. Invalid output len\n");
			err = -EINVAL;
			goto free_all;
		}
		/* verify that encrypted message is equal to expected */
		if (memcmp(c, outbuf_enc, c_size) != 0) {
			pr_err("alg: akcipher: encrypt test failed. Invalid output\n");
			hexdump(outbuf_enc, c_size);
			err = -EINVAL;
			goto free_all;
		}
	}

	/*
	 * Don't invoke decrypt test which requires a private key
	 * for vectors with only a public key.
	 */
	if (vecs->public_key_vec) {
		err = 0;
		goto free_all;
	}
	outbuf_dec = kzalloc(out_len_max, GFP_KERNEL);
	if (!outbuf_dec) {
		err = -ENOMEM;
		goto free_all;
	}

	if (!c) {
		c = outbuf_enc;
		c_size = req->dst_len;
	}

	err = -E2BIG;
	if (WARN_ON(c_size > PAGE_SIZE))
		goto free_all;
	memcpy(xbuf[0], c, c_size);

	sg_init_one(&src, xbuf[0], c_size);
	sg_init_one(&dst, outbuf_dec, out_len_max);
	crypto_init_wait(&wait);
	akcipher_request_set_crypt(req, &src, &dst, c_size, out_len_max);

	err = crypto_wait_req(crypto_akcipher_decrypt(req), &wait);
	if (err) {
		pr_err("alg: akcipher: decrypt test failed. err %d\n", err);
		goto free_all;
	}
	out_len = req->dst_len;
	if (out_len < vecs->m_size) {
		pr_err("alg: akcipher: decrypt test failed. Invalid output len %u\n",
		       out_len);
		err = -EINVAL;
		goto free_all;
	}
	/* verify that decrypted message is equal to the original message */
	if (memchr_inv(outbuf_dec, 0, out_len - vecs->m_size) ||
	    memcmp(vecs->m, outbuf_dec + out_len - vecs->m_size,
		   vecs->m_size)) {
		pr_err("alg: akcipher: decrypt test failed. Invalid output\n");
		hexdump(outbuf_dec, out_len);
		err = -EINVAL;
	}
free_all:
	kfree(outbuf_dec);
	kfree(outbuf_enc);
free_req:
	akcipher_request_free(req);
free_xbuf:
	testmgr_free_buf(xbuf);
	return err;
}

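/* Run test_akcipher_one() over every vector in @vecs, stopping at the first failure. */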
static int test_akcipher(struct crypto_akcipher *tfm, const char *alg,
			 const struct akcipher_testvec *vecs,
			 unsigned int tcount)
{
	const char *algo =
		crypto_tfm_alg_driver_name(crypto_akcipher_tfm(tfm));
	int ret, i;

	for (i = 0; i < tcount; i++) {
		ret = test_akcipher_one(tfm, vecs++);
		if (!ret)
			continue;

		pr_err("alg: akcipher: test %d failed for %s, err=%d\n",
		       i + 1, algo, ret);
		return ret;
	}
	return 0;
}

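/*
 * Entry point for akcipher testing: allocate a transform for @driver and
 * run the algorithm's test vectors, if any.  A driver that is not
 * available (-ENOENT) is not treated as a failure.
 */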
static int alg_test_akcipher(const struct alg_test_desc *desc,
			     const char *driver, u32 type, u32 mask)
{
	struct crypto_akcipher *tfm;
	int err = 0;

	tfm = crypto_alloc_akcipher(driver, type, mask);
	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			return 0;
		pr_err("alg: akcipher: Failed to load tfm for %s: %ld\n",
		       driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}
	if (desc->suite.akcipher.vecs)
		err = test_akcipher(tfm, desc->alg, desc->suite.akcipher.vecs,
				    desc->suite.akcipher.count);

	crypto_free_akcipher(tfm);
	return err;
}

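/*
 * Test a single signature vector: load the (possibly parameterized) key,
 * verify the cooked signature over the message, then (unless the vector
 * carries only a public key) sign the message and compare the result with
 * the cooked signature.
 */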
static int test_sig_one(struct crypto_sig *tfm, const struct sig_testvec *vecs)
{
	u8 *ptr, *key __free(kfree);
	int err, sig_size;

	key = kmalloc(vecs->key_len + 2 * sizeof(u32) + vecs->param_len,
		      GFP_KERNEL);
	if (!key)
		return -ENOMEM;

	/* ecrdsa expects additional parameters appended to the key */
	memcpy(key, vecs->key, vecs->key_len);
	ptr = key + vecs->key_len;
	ptr = test_pack_u32(ptr, vecs->algo);
	ptr = test_pack_u32(ptr, vecs->param_len);
	memcpy(ptr, vecs->params, vecs->param_len);

	if (vecs->public_key_vec)
		err = crypto_sig_set_pubkey(tfm, key, vecs->key_len);
	else
		err = crypto_sig_set_privkey(tfm, key, vecs->key_len);
	if (err)
		return err;

	/*
	 * Run asymmetric signature verification first
	 * (which does not require a private key)
	 */
	err = crypto_sig_verify(tfm, vecs->c, vecs->c_size,
				vecs->m, vecs->m_size);
	if (err) {
		pr_err("alg: sig: verify test failed: err %d\n", err);
		return err;
	}

	/*
	 * Don't invoke sign test (which requires a private key)
	 * for vectors with only a public key.
	 */
	if (vecs->public_key_vec)
		return 0;

	sig_size = crypto_sig_keysize(tfm);
	if (sig_size < vecs->c_size) {
		pr_err("alg: sig: invalid maxsize %u\n", sig_size);
		return -EINVAL;
	}

	u8 *sig __free(kfree) = kzalloc(sig_size, GFP_KERNEL);
	if (!sig)
		return -ENOMEM;

	/* Run asymmetric signature generation */
	err = crypto_sig_sign(tfm, vecs->m, vecs->m_size, sig, sig_size);
	if (err) {
		pr_err("alg: sig: sign test failed: err %d\n", err);
		return err;
	}

	/* Verify that generated signature equals cooked signature */
	if (memcmp(sig, vecs->c, vecs->c_size) ||
	    memchr_inv(sig + vecs->c_size, 0, sig_size - vecs->c_size)) {
		pr_err("alg: sig: sign test failed: invalid output\n");
		hexdump(sig, sig_size);
		return -EINVAL;
	}

	return 0;
}

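/* Run test_sig_one() over every vector in @vecs, stopping at the first failure. */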
static int test_sig(struct crypto_sig *tfm, const char *alg,
		    const struct sig_testvec *vecs, unsigned int tcount)
{
	const char *algo = crypto_tfm_alg_driver_name(crypto_sig_tfm(tfm));
	int ret, i;

	for (i = 0; i < tcount; i++) {
		ret = test_sig_one(tfm, vecs++);
		if (ret) {
			pr_err("alg: sig: test %d failed for %s: err %d\n",
			       i + 1, algo, ret);
			return ret;
		}
	}
	return 0;
}

static int alg_test_sig(const struct alg_test_desc *desc, const char *driver,
			u32 type, u32 mask)
{
	struct crypto_sig *tfm;
	int err = 0;

	tfm = crypto_alloc_sig(driver, type, mask);
	if (IS_ERR(tfm)) {
		pr_err("alg: sig: Failed to load tfm for %s: %ld\n",
		       driver, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}
	if (desc->suite.sig.vecs)
		err = test_sig(tfm, desc->alg, desc->suite.sig.vecs,
			       desc->suite.sig.count);

	crypto_free_sig(tfm);
	return err;
}

static int alg_test_null(const struct alg_test_desc *desc,
			 const char *driver, u32 type, u32 mask)
{
	return 0;
}

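/*
 * Helpers for initializing a test vector suite from a vector array; for
 * example, __VECS(aegis128_tv_template) expands to
 * { .vecs = aegis128_tv_template, .count = ARRAY_SIZE(aegis128_tv_template) }.
 */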
#define ____VECS(tv)	.vecs = tv, .count = ARRAY_SIZE(tv)

#define __VECS(tv)	{ ____VECS(tv) }

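/*
 * Each entry maps an algorithm name (.alg) to the routine that tests it
 * (.test) and, where applicable, its test vectors (.suite).  .fips_allowed
 * marks algorithms permitted in FIPS mode; .generic_driver, where present,
 * names the generic implementation used as a comparison reference.
 */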
/* Please keep this list sorted by algorithm name. */
static const struct alg_test_desc alg_test_descs[] = {
	{
.alg = "adiantum(xchacha12,aes)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "adiantum(xchacha12-generic,aes-generic,nhpoly1305-generic)",
|
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(adiantum_xchacha12_aes_tv_template)
		},
	}, {
		.alg = "adiantum(xchacha20,aes)",
		.generic_driver = "adiantum(xchacha20-generic,aes-generic,nhpoly1305-generic)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(adiantum_xchacha20_aes_tv_template)
		},
	}, {
		.alg = "aegis128",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(aegis128_tv_template)
		}
	}, {
		.alg = "ansi_cprng",
		.test = alg_test_cprng,
		.suite = {
			.cprng = __VECS(ansi_cprng_aes_tv_template)
		}
	}, {
		.alg = "authenc(hmac(md5),ecb(cipher_null))",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(hmac_md5_ecb_cipher_null_tv_template)
		}
	}, {
		.alg = "authenc(hmac(sha1),cbc(aes))",
		.test = alg_test_aead,
		.fips_allowed = 1,
		.suite = {
			.aead = __VECS(hmac_sha1_aes_cbc_tv_temp)
		}
	}, {
		.alg = "authenc(hmac(sha1),cbc(des))",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(hmac_sha1_des_cbc_tv_temp)
		}
	}, {
		.alg = "authenc(hmac(sha1),cbc(des3_ede))",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(hmac_sha1_des3_ede_cbc_tv_temp)
		}
	}, {
		.alg = "authenc(hmac(sha1),ctr(aes))",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "authenc(hmac(sha1),ecb(cipher_null))",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(hmac_sha1_ecb_cipher_null_tv_temp)
		}
	}, {
		.alg = "authenc(hmac(sha1),rfc3686(ctr(aes)))",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "authenc(hmac(sha224),cbc(des))",
		.test = alg_test_aead,
		.suite = {
crypto: testmgr - unify the AEAD encryption and decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for AEADs. That's massively redundant, since usually the decryption
tests are identical to the encryption tests, just with the input/result
swapped. And for some algorithms, decryption test vectors were never
added, so only encryption is currently being tested for them.
Therefore, eliminate the redundancy by removing the AEAD decryption test
vectors and updating testmgr to test both AEAD encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each aead_testvec now has a 'ptext' (plaintext), 'plen'
(plaintext length), 'ctext' (ciphertext), and 'clen' (ciphertext length)
instead of an 'input', 'ilen', 'result', and 'rlen'. "Ciphertext" here
refers to the full ciphertext, including the authentication tag.
For now the scatterlist divisions are just given for the plaintext
length, not also the ciphertext length. For decryption, the last
scatterlist element is just extended by the authentication tag length.
In total, this removes over 5000 lines from testmgr.h, with no reduction
in test coverage since prior patches already copied the few unique
decryption test vectors into the encryption test vectors.
The testmgr.h portion of this patch was automatically generated using
the following awk script, except that I also manually updated the
definition of 'struct aead_testvec' and fixed the location of the
comment describing the AEGIS-128 test vectors.
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct aead_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct aead_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC {
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.ilen[[:space:]]*=/, ".plen\t=")
sub(/\.rlen[[:space:]]*=/, ".clen\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 1235 insertions(+), 6491 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-14 07:32:28 +08:00
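The last point above (reusing the plaintext-based scatterlist divisions for decryption) comes down to simple length arithmetic: the decryption input is plen + authsize bytes, so the final chunk grows by the tag length. A minimal sketch of that adjustment, not taken from testmgr itself:

/*
 * Given chunk sizes that sum to plen (the plaintext-based divisions),
 * extend the last chunk by the authentication tag length so the same
 * divisions cover the ciphertext (plen + authsize bytes) on decryption.
 */
static void extend_last_chunk(unsigned int *chunk_len, int nchunks,
			      unsigned int authsize)
{
	if (nchunks > 0)
		chunk_len[nchunks - 1] += authsize;
}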
|
|
|
.aead = __VECS(hmac_sha224_des_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha224),cbc(des3_ede))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha224_des3_ede_cbc_tv_temp)
|
2014-03-14 23:46:51 +08:00
|
|
|
}
|
2012-07-04 00:16:54 +08:00
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha256),cbc(aes))",
|
2012-07-04 00:16:54 +08:00
|
|
|
.test = alg_test_aead,
|
2016-02-05 21:23:33 +08:00
|
|
|
.fips_allowed = 1,
|
2012-07-04 00:16:54 +08:00
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha256_aes_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha256),cbc(des))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha256_des_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha256),cbc(des3_ede))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha256_des3_ede_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
2016-02-06 18:53:07 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha256),ctr(aes))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2016-02-19 20:34:28 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha256),rfc3686(ctr(aes)))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2014-05-21 19:39:08 +08:00
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha384),cbc(des))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha384_des_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha384),cbc(des3_ede))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha384_des3_ede_cbc_tv_temp)
|
2012-07-04 00:16:54 +08:00
|
|
|
}
|
2016-02-06 18:53:07 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha384),ctr(aes))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2016-02-19 20:34:28 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha384),rfc3686(ctr(aes)))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2012-07-04 00:16:54 +08:00
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha512),cbc(aes))",
|
2016-02-05 21:23:33 +08:00
|
|
|
.fips_allowed = 1,
|
2012-07-04 00:16:54 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha512_aes_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha512),cbc(des))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha512_des_cbc_tv_temp)
|
2014-05-21 19:39:08 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-30 17:53:23 +08:00
|
|
|
.alg = "authenc(hmac(sha512),cbc(des3_ede))",
|
2014-05-21 19:39:08 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-01-14 07:32:28 +08:00
|
|
|
.aead = __VECS(hmac_sha512_des3_ede_cbc_tv_temp)
|
2012-07-04 00:16:54 +08:00
|
|
|
}
|
2016-02-06 18:53:07 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha512),ctr(aes))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2016-02-19 20:34:28 +08:00
|
|
|
}, {
|
|
|
|
.alg = "authenc(hmac(sha512),rfc3686(ctr(aes)))",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
crypto: testmgr - add test vectors for blake2b
Test vectors for blake2b with various digest sizes. As the algorithm is
the same up to the digest calculation, the key and input data lengths are
distributed in a way that tests all combinations of the two over the
digest sizes.
Based on a suggestion from Eric, the input sizes tested are
[0, 1, 7, 15, 64, 247, 256]; since the blake2b block size is 128, both
padded and non-padded input buffers are exercised.
blake2b-160 blake2b-256 blake2b-384 blake2b-512
---------------------------------------------------
len=0 | klen=0 klen=1 klen=32 klen=64
len=1 | klen=32 klen=64 klen=0 klen=1
len=7 | klen=64 klen=0 klen=1 klen=32
len=15 | klen=1 klen=32 klen=64 klen=0
len=64 | klen=0 klen=1 klen=32 klen=64
len=247 | klen=32 klen=64 klen=0 klen=1
len=256 | klen=64 klen=0 klen=1 klen=32
Where key:
- klen=0: empty key
- klen=1: 1 byte value 0x42, 'B'
- klen=32: first 32 bytes of the default key, sequence 00..1f
- klen=64: default key, sequence 00..3f
The unkeyed vectors are ordered before keyed, as this is required by
testmgr.
CC: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-10-25 00:28:32 +08:00
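For reference, the pairing in the table above can be written out directly. The arrays below are just a transcription of the table for readability, not code from the patch, and the names are illustrative.

/* Input lengths tested: 0, 1, 7, 15, 64, 247, 256 (blake2b block size is 128). */
static const unsigned int blake2b_test_len[7] = { 0, 1, 7, 15, 64, 247, 256 };

/* klen[input length index][digest], digests ordered 160, 256, 384, 512. */
static const unsigned int blake2b_test_klen[7][4] = {
	/* len=0   */ {  0,  1, 32, 64 },
	/* len=1   */ { 32, 64,  0,  1 },
	/* len=7   */ { 64,  0,  1, 32 },
	/* len=15  */ {  1, 32, 64,  0 },
	/* len=64  */ {  0,  1, 32, 64 },
	/* len=247 */ { 32, 64,  0,  1 },
	/* len=256 */ { 64,  0,  1, 32 },
};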
|
|
|
}, {
|
|
|
|
.alg = "blake2b-160",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 0,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(blake2b_160_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "blake2b-256",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 0,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(blake2b_256_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "blake2b-384",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 0,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(blake2b_384_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "blake2b-512",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 0,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(blake2b_512_tv_template)
|
|
|
|
}
|
2009-05-04 19:46:29 +08:00
|
|
|
}, {
|
2008-07-31 17:08:25 +08:00
|
|
|
.alg = "cbc(aes)",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
crypto: testmgr - eliminate redundant decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for symmetric ciphers. That's massively redundant, since with few
exceptions (mostly mistakes, apparently), all decryption tests are
identical to the encryption tests, just with the input/result flipped.
Therefore, eliminate the redundancy by removing the decryption test
vectors and updating testmgr to test both encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext'
(ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and
'rlen'. Note that it was always the case that 'ilen == rlen'.
AES keywrap ("kw(aes)") is special because its IV is generated by the
encryption. Previously this was handled by specifying 'iv_out' for
encryption and 'iv' for decryption. To make it work cleanly with only
one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a
boolean that indicates that the IV is generated by the encryption.
In total, this removes over 10000 lines from testmgr.h, with no
reduction in test coverage since prior patches already copied the few
unique decryption test vectors into the encryption test vectors.
This covers all algorithms that used 'struct cipher_testvec', e.g. any
block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or
keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD
tests, though we probably can eliminate a similar redundancy there too.
The testmgr.h portion of this patch was automatically generated using
the following awk script, with some slight manual fixups on top (updated
'struct cipher_testvec' definition, updated a few comments, and fixed up
the AES keywrap test vectors):
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
sub(/\.input[[:space:]]*=$/, ".ptext =")
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=$/, ".ctext =")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.rlen[[:space:]]*=/, ".len\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-21 13:50:29 +08:00
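As with the AEAD conversion earlier, a minimal sketch of a converted symmetric-cipher entry may help. The data bytes are placeholders, and the name of the "IV is generated by encryption" flag is an assumption for illustration (the text above only says a boolean was added).

static const struct cipher_testvec example_cipher_tv_template[] = {
	{
		.key	= "\x00\x01\x02\x03",	/* placeholder key */
		.klen	= 4,
		.iv	= "\x00\x01",		/* for kw(aes): produced by the encryption */
		.generates_iv = true,		/* assumed flag name */
		.ptext	= "\x10\x11",		/* plaintext (was .input) */
		.ctext	= "\x20\x21",		/* ciphertext (was .result) */
		.len	= 2,			/* single length; .ilen == .rlen always held */
	},
};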
|
|
|
.cipher = __VECS(aes_cbc_tv_template)
|
|
|
|
},
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "cbc(anubis)",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(anubis_cbc_tv_template)
|
|
|
|
},
|
2022-07-04 17:42:49 +08:00
|
|
|
}, {
|
|
|
|
.alg = "cbc(aria)",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(aria_cbc_tv_template)
|
|
|
|
},
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "cbc(blowfish)",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(bf_cbc_tv_template)
|
|
|
|
},
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "cbc(camellia)",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
			.cipher = __VECS(camellia_cbc_tv_template)
		},
	}, {
		.alg = "cbc(cast5)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast5_cbc_tv_template)
		},
	}, {
		.alg = "cbc(cast6)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast6_cbc_tv_template)
		},
	}, {
		.alg = "cbc(des)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des_cbc_tv_template)
		},
	}, {
		.alg = "cbc(des3_ede)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des3_ede_cbc_tv_template)
		},
	}, {
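		/*
		 * alg_test_null runs no vectors and simply reports success; it
		 * is used where standard test vectors cannot apply, such as the
		 * protected-key (paes/psm4) variants whose "keys" are only
		 * indexes into hardware secure memory.
		 */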
		/* Same as cbc(aes) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "cbc(paes)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		/* Same as cbc(sm4) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "cbc(psm4)",
		.test = alg_test_null,
	}, {
		.alg = "cbc(serpent)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(serpent_cbc_tv_template)
		},
	}, {
		.alg = "cbc(sm4)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(sm4_cbc_tv_template)
		}
	}, {
		.alg = "cbc(twofish)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(tf_cbc_tv_template)
		},
	}, {
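		/*
		 * The s390 protected-key AES driver can be exercised with the
		 * ordinary AES vectors, so it gets a real test entry, but only
		 * when CONFIG_CRYPTO_PAES_S390 is built (y or m).
		 */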
#if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
		.alg = "cbc-paes-s390",
		.fips_allowed = 1,
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aes_cbc_tv_template)
		}
	}, {
#endif
		.alg = "cbcmac(aes)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(aes_cbcmac_tv_template)
		}
	}, {
		.alg = "cbcmac(sm4)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(sm4_cbcmac_tv_template)
		}
	}, {
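		/*
		 * .generic_driver names the purely generic instantiation that
		 * the extra self-tests compare a driver's output against; it
		 * is spelled out here because the generic CCM implementation
		 * registers under the expanded ccm_base(...) form rather than
		 * a name testmgr could derive automatically from "ccm(aes)".
		 */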
		.alg = "ccm(aes)",
		.generic_driver = "ccm_base(ctr(aes-generic),cbcmac(aes-generic))",
		.test = alg_test_aead,
		.fips_allowed = 1,
		.suite = {
			.aead = {
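				/*
				 * ____VECS() is the brace-less form of __VECS()
				 * so extra flags can follow it; einval_allowed
				 * means decrypting a corrupted ciphertext may
				 * legitimately return -EINVAL (e.g. from CCM's
				 * length checks) instead of -EBADMSG.
				 */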
				____VECS(aes_ccm_tv_template),
				.einval_allowed = 1,
			}
		}
	}, {
		.alg = "ccm(sm4)",
		.generic_driver = "ccm_base(ctr(sm4-generic),cbcmac(sm4-generic))",
		.test = alg_test_aead,
		.suite = {
			.aead = {
				____VECS(sm4_ccm_tv_template),
				.einval_allowed = 1,
			}
		}
	}, {
		.alg = "chacha20",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(chacha20_tv_template)
		},
	}, {
		.alg = "cmac(aes)",
		.fips_allowed = 1,
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(aes_cmac128_tv_template)
		}
	}, {
		.alg = "cmac(camellia)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(camellia_cmac128_tv_template)
		}
	}, {
		.alg = "cmac(des3_ede)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(des3_ede_cmac64_tv_template)
		}
	}, {
		.alg = "cmac(sm4)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(sm4_cmac128_tv_template)
		}
	}, {
		.alg = "compress_null",
		.test = alg_test_null,
	}, {
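		/*
		 * .fips_allowed = 1 marks entries that may still be used when
		 * the kernel is booted with fips=1; algorithms without it are
		 * rejected in FIPS mode regardless of their test results.
		 */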
		.alg = "crc32",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(crc32_tv_template)
		}
	}, {
		.alg = "crc32c",
		.test = alg_test_crc32c,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(crc32c_tv_template)
		}
	}, {
		.alg = "crc64-rocksoft",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(crc64_rocksoft_tv_template)
		}
	}, {
		.alg = "crct10dif",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(crct10dif_tv_template)
		}
	}, {
		.alg = "ctr(aes)",
		.test = alg_test_skcipher,
		.fips_allowed = 1,
		.suite = {
			.cipher = __VECS(aes_ctr_tv_template)
		}
	}, {
		.alg = "ctr(aria)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aria_ctr_tv_template)
		}
	}, {
		.alg = "ctr(blowfish)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(bf_ctr_tv_template)
		}
	}, {
		.alg = "ctr(camellia)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(camellia_ctr_tv_template)
		}
	}, {
		.alg = "ctr(cast5)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast5_ctr_tv_template)
		}
	}, {
		.alg = "ctr(cast6)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast6_ctr_tv_template)
		}
	}, {
		.alg = "ctr(des)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des_ctr_tv_template)
		}
	}, {
		.alg = "ctr(des3_ede)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des3_ede_ctr_tv_template)
		}
	}, {
		/* Same as ctr(aes) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "ctr(paes)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		/* Same as ctr(sm4) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "ctr(psm4)",
		.test = alg_test_null,
	}, {
		.alg = "ctr(serpent)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(serpent_ctr_tv_template)
		}
	}, {
		.alg = "ctr(sm4)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(sm4_ctr_tv_template)
		}
	}, {
		.alg = "ctr(twofish)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(tf_ctr_tv_template)
		}
	}, {
#if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
		.alg = "ctr-paes-s390",
		.fips_allowed = 1,
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aes_ctr_tv_template)
		}
	}, {
#endif
		.alg = "cts(cbc(aes))",
		.test = alg_test_skcipher,
		.fips_allowed = 1,
		.suite = {
			.cipher = __VECS(cts_mode_tv_template)
		}
	}, {
|
|
|
|
		/* Same as cts(cbc(aes)) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "cts(cbc(paes))",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "cts(cbc(sm4))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(sm4_cts_tv_template)
		}
	}, {
		.alg = "curve25519",
		.test = alg_test_kpp,
		.suite = {
			.kpp = __VECS(curve25519_tv_template)
		}
	}, {
		.alg = "deflate",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(deflate_comp_tv_template),
				.decomp = __VECS(deflate_decomp_tv_template)
			}
		}
crypto: iaa - Add support for deflate-iaa compression algorithm
This patch registers the deflate-iaa deflate compression algorithm and
hooks it up to the IAA hardware using the 'fixed' compression mode
introduced in the previous patch.
Because the IAA hardware has a 4k history-window limitation, only
buffers <= 4k, or that have been compressed using a <= 4k history
window, are technically compliant with the deflate spec, which allows
for a window of up to 32k. Because of this limitation, the IAA fixed
mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.
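Giving the fixed-mode implementation its own name also lets callers
request it explicitly instead of getting whichever implementation of
plain "deflate" wins by priority; a minimal kernel-side sketch
(error handling elided):

#include <crypto/acompress.h>

/* Sketch: request the IAA fixed-mode implementation by its own name. */
static struct crypto_acomp *iaa_deflate_tfm_sketch(void)
{
	return crypto_alloc_acomp("deflate-iaa", 0, 0);
}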
With this change, the deflate-iaa crypto algorithm is registered and
operational, and compression and decompression operations are fully
enabled following the successful binding of the first IAA workqueue
to the iaa_crypto sub-driver.
When there are no IAA workqueues bound to the driver, the IAA crypto
algorithm can be unregistered by removing the module.
A new iaa_crypto 'verify_compress' driver attribute is also added,
allowing the user to toggle compression verification. If set, each
compress will be internally decompressed and the contents verified,
returning error codes if unsuccessful. This can be toggled with 0/1:
echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress
The default setting is '1' - verify all compresses.
The verify_compress value setting at the time the algorithm is
registered is captured in the algorithm's crypto_ctx and used for all
compresses when using the algorithm.
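In pseudocode terms the verification path amounts to the sketch below
(illustrative helper names, not the actual iaa_crypto entry points):

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative prototypes only -- not the real iaa_crypto functions. */
int sketch_compress(const void *src, unsigned int slen,
		    void *dst, unsigned int *dlen);
int sketch_decompress(const void *src, unsigned int slen,
		      void *dst, unsigned int dlen);

/*
 * verify_compress sketch: after a successful compress, internally
 * decompress the output and compare it with the original input,
 * failing the operation if the round trip does not match.
 */
static int sketch_compress_verified(bool verify, const void *src,
				    unsigned int slen, void *dst,
				    unsigned int *dlen, void *scratch)
{
	int ret = sketch_compress(src, slen, dst, dlen);

	if (ret || !verify)
		return ret;

	ret = sketch_decompress(dst, *dlen, scratch, slen);
	if (ret)
		return ret;

	return memcmp(scratch, src, slen) ? -EINVAL : 0;
}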
[ Based on work originally by George Powley, Jing Lin and Kyung Min
Park ]
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
	}, {
		.alg = "deflate-iaa",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(deflate_comp_tv_template),
				.decomp = __VECS(deflate_decomp_tv_template)
			}
		}
	}, {
		.alg = "dh",
		.test = alg_test_kpp,
		.suite = {
			.kpp = __VECS(dh_tv_template)
		}
	}, {
		.alg = "digest_null",
		.test = alg_test_null,
	}, {
		.alg = "drbg_nopr_ctr_aes128",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_ctr_aes128_tv_template)
		}
	}, {
		.alg = "drbg_nopr_ctr_aes192",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_ctr_aes192_tv_template)
		}
	}, {
		.alg = "drbg_nopr_ctr_aes256",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_ctr_aes256_tv_template)
		}
	}, {
		.alg = "drbg_nopr_hmac_sha256",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_hmac_sha256_tv_template)
		}
	}, {
		/*
		 * There is no need to specifically test the DRBG with every
		 * backend cipher -- covered by drbg_nopr_hmac_sha512 test
		 */
		.alg = "drbg_nopr_hmac_sha384",
		.test = alg_test_null,
	}, {
		.alg = "drbg_nopr_hmac_sha512",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_hmac_sha512_tv_template)
		}
	}, {
		.alg = "drbg_nopr_sha256",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_nopr_sha256_tv_template)
		}
	}, {
		/* covered by drbg_nopr_sha256 test */
		.alg = "drbg_nopr_sha384",
		.test = alg_test_null,
	}, {
		.alg = "drbg_nopr_sha512",
		.fips_allowed = 1,
		.test = alg_test_null,
	}, {
		.alg = "drbg_pr_ctr_aes128",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_pr_ctr_aes128_tv_template)
		}
	}, {
		/* covered by drbg_pr_ctr_aes128 test */
		.alg = "drbg_pr_ctr_aes192",
		.fips_allowed = 1,
		.test = alg_test_null,
	}, {
		.alg = "drbg_pr_ctr_aes256",
		.fips_allowed = 1,
		.test = alg_test_null,
	}, {
		.alg = "drbg_pr_hmac_sha256",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_pr_hmac_sha256_tv_template)
		}
	}, {
		/* covered by drbg_pr_hmac_sha256 test */
		.alg = "drbg_pr_hmac_sha384",
		.test = alg_test_null,
	}, {
		.alg = "drbg_pr_hmac_sha512",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "drbg_pr_sha256",
		.test = alg_test_drbg,
		.fips_allowed = 1,
		.suite = {
			.drbg = __VECS(drbg_pr_sha256_tv_template)
		}
	}, {
		/* covered by drbg_pr_sha256 test */
		.alg = "drbg_pr_sha384",
		.test = alg_test_null,
	}, {
		.alg = "drbg_pr_sha512",
		.fips_allowed = 1,
		.test = alg_test_null,
	}, {
		.alg = "ecb(aes)",
		.test = alg_test_skcipher,
		.fips_allowed = 1,
		.suite = {
			.cipher = __VECS(aes_tv_template)
		}
	}, {
		.alg = "ecb(anubis)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(anubis_tv_template)
		}
	}, {
		.alg = "ecb(arc4)",
		.generic_driver = "arc4-generic",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(arc4_tv_template)
		}
	}, {
		.alg = "ecb(aria)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aria_tv_template)
		}
	}, {
		.alg = "ecb(blowfish)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(bf_tv_template)
		}
	}, {
		.alg = "ecb(camellia)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(camellia_tv_template)
		}
	}, {
		.alg = "ecb(cast5)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast5_tv_template)
		}
	}, {
		.alg = "ecb(cast6)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast6_tv_template)
		}
	}, {
		.alg = "ecb(cipher_null)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "ecb(des)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des_tv_template)
		}
	}, {
		.alg = "ecb(des3_ede)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(des3_ede_tv_template)
		}
	}, {
		.alg = "ecb(fcrypt)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = {
				.vecs = fcrypt_pcbc_tv_template,
				.count = 1
			}
		}
	}, {
		.alg = "ecb(khazad)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(khazad_tv_template)
		}
	}, {
		/* Same as ecb(aes) except the key is stored in
		 * hardware secure memory which we reference by index
		 */
		.alg = "ecb(paes)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "ecb(seed)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(seed_tv_template)
		}
	}, {
		.alg = "ecb(serpent)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(serpent_tv_template)
		}
	}, {
		.alg = "ecb(sm4)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(sm4_tv_template)
		}
	}, {
		.alg = "ecb(tea)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(tea_tv_template)
		}
}, {
|
|
|
|
.alg = "ecb(twofish)",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
			.cipher = __VECS(tf_tv_template)
		}
	}, {
		.alg = "ecb(xeta)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(xeta_tv_template)
		}
	}, {
		.alg = "ecb(xtea)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(xtea_tv_template)
		}
	}, {
#if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
		.alg = "ecb-paes-s390",
		.fips_allowed = 1,
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aes_tv_template)
		}
	}, {
#endif
		.alg = "ecdh-nist-p192",
		.test = alg_test_kpp,
		.suite = {
			.kpp = __VECS(ecdh_p192_tv_template)
		}
	}, {
		.alg = "ecdh-nist-p256",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ecdh_p256_tv_template)
		}
	}, {
		.alg = "ecdh-nist-p384",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ecdh_p384_tv_template)
		}
	}, {
		.alg = "ecdsa-nist-p192",
		.test = alg_test_sig,
		.suite = {
			.sig = __VECS(ecdsa_nist_p192_tv_template)
		}
	}, {
		.alg = "ecdsa-nist-p256",
		.test = alg_test_sig,
		.fips_allowed = 1,
		.suite = {
			.sig = __VECS(ecdsa_nist_p256_tv_template)
		}
	}, {
		.alg = "ecdsa-nist-p384",
		.test = alg_test_sig,
		.fips_allowed = 1,
		.suite = {
			.sig = __VECS(ecdsa_nist_p384_tv_template)
		}
	}, {
		.alg = "ecdsa-nist-p521",
		.test = alg_test_sig,
		.fips_allowed = 1,
		.suite = {
			.sig = __VECS(ecdsa_nist_p521_tv_template)
		}
	}, {
		.alg = "ecrdsa",
		.test = alg_test_sig,
		.suite = {
			.sig = __VECS(ecrdsa_tv_template)
		}
	}, {
		.alg = "essiv(authenc(hmac(sha256),cbc(aes)),sha256)",
		.test = alg_test_aead,
		.fips_allowed = 1,
		.suite = {
			.aead = __VECS(essiv_hmac_sha256_aes_cbc_tv_temp)
		}
	}, {
		.alg = "essiv(cbc(aes),sha256)",
		.test = alg_test_skcipher,
		.fips_allowed = 1,
		.suite = {
			.cipher = __VECS(essiv_aes_cbc_tv_template)
		}
	}, {
#if IS_ENABLED(CONFIG_CRYPTO_DH_RFC7919_GROUPS)
		.alg = "ffdhe2048(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe2048_dh_tv_template)
		}
	}, {
		.alg = "ffdhe3072(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe3072_dh_tv_template)
		}
	}, {
		.alg = "ffdhe4096(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe4096_dh_tv_template)
		}
	}, {
		.alg = "ffdhe6144(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe6144_dh_tv_template)
		}
	}, {
		.alg = "ffdhe8192(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe8192_dh_tv_template)
		}
	}, {
#endif /* CONFIG_CRYPTO_DH_RFC7919_GROUPS */
		.alg = "gcm(aes)",
		.generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)",
		.test = alg_test_aead,
		.fips_allowed = 1,
		.suite = {
crypto: testmgr - unify the AEAD encryption and decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for AEADs. That's massively redundant, since usually the decryption
tests are identical to the encryption tests, just with the input/result
swapped. And for some algorithms it was forgotten to add decryption
test vectors, so for them currently only encryption is being tested.
Therefore, eliminate the redundancy by removing the AEAD decryption test
vectors and updating testmgr to test both AEAD encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each aead_testvec now has a 'ptext' (plaintext), 'plen'
(plaintext length), 'ctext' (ciphertext), and 'clen' (ciphertext length)
instead of an 'input', 'ilen', 'result', and 'rlen'. "Ciphertext" here
refers to the full ciphertext, including the authentication tag.
For now the scatterlist divisions are just given for the plaintext
length, not also the ciphertext length. For decryption, the last
scatterlist element is just extended by the authentication tag length.
In total, this removes over 5000 lines from testmgr.h, with no reduction
in test coverage since prior patches already copied the few unique
decryption test vectors into the encryption test vectors.
The testmgr.h portion of this patch was automatically generated using
the following awk script, except that I also manually updated the
definition of 'struct aead_testvec' and fixed the location of the
comment describing the AEGIS-128 test vectors.
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct aead_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct aead_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC {
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.ilen[[:space:]]*=/, ".plen\t=")
sub(/\.rlen[[:space:]]*=/, ".clen\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 1235 insertions(+), 6491 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
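For comparison, a minimal sketch of a unified AEAD entry in the format described above. The vector is AES-128-GCM test case 1 from the original GCM specification as commonly quoted (zero key, zero 96-bit IV, empty plaintext and AAD); the "ciphertext" is just the 16-byte tag. Field names beyond ptext/plen/ctext/clen are assumed from the existing struct, so treat this as illustrative rather than an entry copied from testmgr.h.

/* Illustrative sketch only, not a vector taken from testmgr.h. */
static const struct aead_testvec example_aes_gcm_tv_template[] = {
	{
		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.klen	= 16,
		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00",
		.ptext	= "",
		.plen	= 0,
		.ctext	= "\x58\xe2\xfc\xce\xfa\x7e\x30\x61"
			  "\x36\x7f\x1d\x57\xa4\xe7\x45\x5a",
		.clen	= 16,
	},
};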
			.aead = __VECS(aes_gcm_tv_template)
		}
	}, {
		.alg = "gcm(aria)",
		.generic_driver = "gcm_base(ctr(aria-generic),ghash-generic)",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(aria_gcm_tv_template)
		}
	}, {
		.alg = "gcm(sm4)",
		.generic_driver = "gcm_base(ctr(sm4-generic),ghash-generic)",
		.test = alg_test_aead,
		.suite = {
			.aead = __VECS(sm4_gcm_tv_template)
		}
	}, {
		.alg = "ghash",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(ghash_tv_template)
		}
crypto: hctr2 - Add HCTR2 support
Add support for HCTR2 as a template. HCTR2 is a length-preserving
encryption mode that is efficient on processors with instructions to
accelerate AES and carryless multiplication, e.g. x86 processors with
AES-NI and CLMUL, and ARM processors with the ARMv8 Crypto Extensions.
As a length-preserving encryption mode, HCTR2 is suitable for
applications such as storage encryption where ciphertext expansion is
not possible, and thus authenticated encryption cannot be used.
Currently, such applications usually use XTS, or in some cases Adiantum.
XTS has the disadvantage that it is a narrow-block mode: a bitflip will
only change 16 bytes in the resulting ciphertext or plaintext. This
reveals more information to an attacker than necessary.
HCTR2 is a wide-block mode, so it provides a stronger security property:
a bitflip will change the entire message. HCTR2 is somewhat similar to
Adiantum, which is also a wide-block mode. However, HCTR2 is designed
to take advantage of existing crypto instructions, while Adiantum
targets devices without such hardware support. Adiantum is also
designed with longer messages in mind, while HCTR2 is designed to be
efficient even on short messages.
HCTR2 requires POLYVAL and XCTR as components. More information on
HCTR2 can be found here: "Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
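A hedged sketch of how a caller might drive "hctr2(aes)" through the generic skcipher API. This is not part of the patch; the 512-byte sector size, the 32-byte tweak length, and the helper name are assumptions chosen purely for illustration.

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Sketch: encrypt one 512-byte sector in place with hctr2(aes). */
static int hctr2_encrypt_sector(const u8 *key, unsigned int keylen,
				u8 *sector, u8 *tweak /* 32 bytes assumed */)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("hctr2(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, sector, 512);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	/* The IV slot carries the HCTR2 tweak. */
	skcipher_request_set_crypt(req, &sg, &sg, 512, tweak);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}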
	}, {
		.alg = "hctr2(aes)",
		.generic_driver =
			"hctr2_base(xctr(aes-generic),polyval-generic)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aes_hctr2_tv_template)
		}
	}, {
		.alg = "hmac(md5)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(hmac_md5_tv_template)
		}
	}, {
		.alg = "hmac(rmd160)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(hmac_rmd160_tv_template)
		}
	}, {
		.alg = "hmac(sha1)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha1_tv_template)
		}
	}, {
		.alg = "hmac(sha224)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha224_tv_template)
		}
	}, {
		.alg = "hmac(sha256)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha256_tv_template)
		}
	}, {
		.alg = "hmac(sha3-224)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha3_224_tv_template)
		}
	}, {
		.alg = "hmac(sha3-256)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha3_256_tv_template)
		}
	}, {
		.alg = "hmac(sha3-384)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha3_384_tv_template)
		}
	}, {
		.alg = "hmac(sha3-512)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha3_512_tv_template)
		}
	}, {
		.alg = "hmac(sha384)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha384_tv_template)
		}
	}, {
		.alg = "hmac(sha512)",
		.test = alg_test_hash,
		.fips_allowed = 1,
		.suite = {
			.hash = __VECS(hmac_sha512_tv_template)
		}
	}, {
		.alg = "hmac(sm3)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(hmac_sm3_tv_template)
		}
	}, {
		.alg = "hmac(streebog256)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(hmac_streebog256_tv_template)
		}
	}, {
		.alg = "hmac(streebog512)",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(hmac_streebog512_tv_template)
		}
	}, {
		.alg = "jitterentropy_rng",
		.fips_allowed = 1,
		.test = alg_test_null,
	}, {
		.alg = "kw(aes)",
		.test = alg_test_skcipher,
		.fips_allowed = 1,
		.suite = {
			.cipher = __VECS(aes_kw_tv_template)
		}
	}, {
		.alg = "lrw(aes)",
		.generic_driver = "lrw(ecb(aes-generic))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(aes_lrw_tv_template)
		}
	}, {
		.alg = "lrw(camellia)",
		.generic_driver = "lrw(ecb(camellia-generic))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(camellia_lrw_tv_template)
		}
	}, {
		.alg = "lrw(cast6)",
		.generic_driver = "lrw(ecb(cast6-generic))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(cast6_lrw_tv_template)
		}
	}, {
		.alg = "lrw(serpent)",
		.generic_driver = "lrw(ecb(serpent-generic))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(serpent_lrw_tv_template)
		}
	}, {
		.alg = "lrw(twofish)",
		.generic_driver = "lrw(ecb(twofish-generic))",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(tf_lrw_tv_template)
		}
	}, {
		.alg = "lz4",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(lz4_comp_tv_template),
				.decomp = __VECS(lz4_decomp_tv_template)
			}
		}
	}, {
		.alg = "lz4hc",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(lz4hc_comp_tv_template),
				.decomp = __VECS(lz4hc_decomp_tv_template)
			}
		}
	}, {
		.alg = "lzo",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(lzo_comp_tv_template),
				.decomp = __VECS(lzo_decomp_tv_template)
			}
		}
	}, {
		.alg = "lzo-rle",
		.test = alg_test_comp,
		.fips_allowed = 1,
		.suite = {
			.comp = {
				.comp = __VECS(lzorle_comp_tv_template),
				.decomp = __VECS(lzorle_decomp_tv_template)
			}
		}
	}, {
		.alg = "md4",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(md4_tv_template)
		}
	}, {
		.alg = "md5",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(md5_tv_template)
		}
	}, {
		.alg = "michael_mic",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(michael_mic_tv_template)
		}
	}, {
		.alg = "nhpoly1305",
		.test = alg_test_hash,
		.suite = {
			.hash = __VECS(nhpoly1305_tv_template)
		}
	}, {
		.alg = "p1363(ecdsa-nist-p192)",
		.test = alg_test_null,
	}, {
		.alg = "p1363(ecdsa-nist-p256)",
		.test = alg_test_sig,
		.fips_allowed = 1,
		.suite = {
			.sig = __VECS(p1363_ecdsa_nist_p256_tv_template)
		}
	}, {
		.alg = "p1363(ecdsa-nist-p384)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "p1363(ecdsa-nist-p521)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "pcbc(fcrypt)",
		.test = alg_test_skcipher,
		.suite = {
			.cipher = __VECS(fcrypt_pcbc_tv_template)
		}
	}, {
crypto: rsassa-pkcs1 - Reinstate support for legacy protocols
Commit 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
enforced that rsassa-pkcs1 sign/verify operations specify a hash
algorithm. That is necessary because per RFC 8017 sec 8.2, a hash
algorithm identifier must be prepended to the hash before generating or
verifying the signature ("Full Hash Prefix").
However the commit went too far in that it changed user space behavior:
KEYCTL_PKEY_QUERY system calls now return -EINVAL unless they specify a
hash algorithm. Intel Wireless Daemon (iwd) is one application issuing
such system calls (for EAP-TLS).
Closer analysis of the Embedded Linux Library (ell) used by iwd reveals
that the problem runs even deeper: When iwd uses TLS 1.1 or earlier, it
not only queries for keys, but performs sign/verify operations without
specifying a hash algorithm. These legacy TLS versions concatenate an
MD5 to a SHA-1 hash and omit the Full Hash Prefix:
https://git.kernel.org/pub/scm/libs/ell/ell.git/tree/ell/tls-suites.c#n97
TLS 1.1 was deprecated in 2021 by RFC 8996, but removal of support was
inadvertent in this case. It probably should be coordinated with iwd
maintainers first.
So reinstate support for such legacy protocols by defaulting to hash
algorithm "none" which uses an empty Full Hash Prefix.
If it is later on decided to remove TLS 1.1 support but still allow
KEYCTL_PKEY_QUERY without a hash algorithm, that can be achieved by
reverting the present commit and replacing it with the following patch:
https://lore.kernel.org/r/ZxalYZwH5UiGX5uj@wunner.de/
It's worth noting that Python's cryptography library gained support for
such legacy use cases very recently, so they do seem to still be a thing.
The Python developers identified IKE version 1 as another protocol
omitting the Full Hash Prefix:
https://github.com/pyca/cryptography/issues/10226
https://github.com/pyca/cryptography/issues/5495
The author of those issues, Zoltan Kelemen, spent considerable effort
searching for test vectors but only found one in a 2019 blog post by
Kevin Jones. Add it to testmgr.h to verify correctness of this feature.
Examination of wpa_supplicant as well as various IKE daemons (libreswan,
strongswan, isakmpd, raccoon) has determined that none of them seems to
use the kernel's Key Retention Service, so iwd is the only affected user
space application known so far.
Fixes: 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
Reported-by: Klara Modin <klarasmodin@gmail.com>
Tested-by: Klara Modin <klarasmodin@gmail.com>
Closes: https://lore.kernel.org/r/2ed09a22-86c0-4cf0-8bda-ef804ccb3413@gmail.com/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
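To make the padding layout concrete, here is an illustrative, user-space-style sketch of EMSA-PKCS1-v1_5 encoding (RFC 8017 sec 9.2), in which the Full Hash Prefix may be empty as it is for "pkcs1(rsa,none)". The function name and interface are invented for illustration; this is not the kernel's implementation.

#include <string.h>

/*
 * Sketch of EMSA-PKCS1-v1_5 encoding:
 *   EM = 0x00 || 0x01 || PS (0xff bytes) || 0x00 || hash_prefix || digest
 * For "pkcs1(rsa,none)" hash_prefix is empty, so the caller-supplied blob
 * (e.g. the MD5||SHA-1 concatenation used by TLS 1.0/1.1) follows the 0x00
 * separator directly.
 */
static int emsa_pkcs1_v1_5_encode(unsigned char *em, size_t em_len,
				  const unsigned char *hash_prefix,
				  size_t prefix_len,
				  const unsigned char *digest,
				  size_t digest_len)
{
	size_t t_len = prefix_len + digest_len;
	size_t ps_len;

	if (em_len < t_len + 11)	/* require at least 8 bytes of PS */
		return -1;
	ps_len = em_len - t_len - 3;

	em[0] = 0x00;
	em[1] = 0x01;
	memset(&em[2], 0xff, ps_len);
	em[2 + ps_len] = 0x00;
	if (prefix_len)
		memcpy(&em[3 + ps_len], hash_prefix, prefix_len);
	memcpy(&em[3 + ps_len + prefix_len], digest, digest_len);
	return 0;
}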
		.alg = "pkcs1(rsa,none)",
		.test = alg_test_sig,
		.suite = {
			.sig = __VECS(pkcs1_rsa_none_tv_template)
		}
	}, {
crypto: rsassa-pkcs1 - Migrate to sig_alg backend
A sig_alg backend has just been introduced with the intent of moving all
asymmetric sign/verify algorithms to it one by one.
Migrate the sign/verify operations from rsa-pkcs1pad.c to a separate
rsassa-pkcs1.c which uses the new backend.
Consequently there are now two templates which build on the "rsa"
akcipher_alg:
* The existing "pkcs1pad" template, which is instantiated as an
akcipher_instance and retains the encrypt/decrypt operations of
RSAES-PKCS1-v1_5 (RFC 8017 sec 7.2).
* The new "pkcs1" template, which is instantiated as a sig_instance
and contains the sign/verify operations of RSASSA-PKCS1-v1_5
(RFC 8017 sec 8.2).
In a separate step, rsa-pkcs1pad.c could optionally be renamed to
rsaes-pkcs1.c for clarity. Additional "oaep" and "pss" templates
could be added for RSAES-OAEP and RSASSA-PSS.
Note that it's currently allowed to allocate a "pkcs1pad(rsa)" transform
without specifying a hash algorithm. That makes sense if the transform
is only used for encrypt/decrypt and continues to be supported. But for
sign/verify, such transforms previously did not insert the Full Hash
Prefix into the padding. The resulting message encoding was incompliant
with EMSA-PKCS1-v1_5 (RFC 8017 sec 9.2) and therefore nonsensical.
From here on in, it is no longer allowed to allocate a transform without
specifying a hash algorithm if the transform is used for sign/verify
operations. This simplifies the code because the insertion of the Full
Hash Prefix is no longer optional, so various "if (digest_info)" clauses
can be removed.
There has been a previous attempt to forbid transform allocation without
specifying a hash algorithm, namely by commit c0d20d22e0ad ("crypto:
rsa-pkcs1pad - Require hash to be present"). It had to be rolled back
with commit b3a8c8a5ebb5 ("crypto: rsa-pkcs1pad: Allow hash to be
optional [ver #2]"), presumably because it broke allocation of a
transform which was solely used for encrypt/decrypt, not sign/verify.
Avoid such breakage by allowing transform allocation for encrypt/decrypt
with and without specifying a hash algorithm (and simply ignoring the
hash algorithm in the former case).
So again, specifying a hash algorithm is now mandatory for sign/verify,
but optional and ignored for encrypt/decrypt.
The new sig_alg API uses kernel buffers instead of sglists, which
avoids the overhead of copying signature and digest from sglists back
into kernel buffers. rsassa-pkcs1.c is thus simplified quite a bit.
sig_alg is always synchronous, whereas the underlying "rsa" akcipher_alg
may be asynchronous. So await the result of the akcipher_alg, similar
to crypto_akcipher_sync_{en,de}crypt().
As part of the migration, rename "rsa_digest_info" to "hash_prefix" to
adhere to the spec language in RFC 9580. Otherwise keep the code
unmodified wherever possible to ease reviewing and bisecting. Leave
several simplification and hardening opportunities to separate commits.
rsassa-pkcs1.c uses modern __free() syntax for allocation of buffers
which need to be freed by kfree_sensitive(), hence a DEFINE_FREE()
clause for kfree_sensitive() is introduced herein as a byproduct.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
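As a rough illustration of the new front end, a hedged sketch of verifying a PKCS#1 v1.5 signature over a SHA-256 digest through a "pkcs1(rsa,sha256)" sig transform. The crypto_sig_* calls, their exact signatures, and the expected public-key encoding are assumptions based on <crypto/sig.h>, not code taken from this patch.

#include <crypto/sig.h>
#include <linux/err.h>

/* Sketch only: error handling is minimal and the interface is assumed. */
static int verify_pkcs1_sha256(const void *pub_key, unsigned int key_len,
			       const void *sig, unsigned int sig_len,
			       const void *digest, unsigned int digest_len)
{
	struct crypto_sig *tfm;
	int err;

	tfm = crypto_alloc_sig("pkcs1(rsa,sha256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sig_set_pubkey(tfm, pub_key, key_len);
	if (!err)
		err = crypto_sig_verify(tfm, sig, sig_len, digest, digest_len);

	crypto_free_sig(tfm);
	return err;
}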
		.alg = "pkcs1(rsa,sha224)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "pkcs1(rsa,sha256)",
		.test = alg_test_sig,
		.fips_allowed = 1,
		.suite = {
|
|
|
.sig = __VECS(pkcs1_rsa_tv_template)
|
2017-06-13 05:27:51 +08:00
|
|
|
}
|
|
|
|
}, {
|
2024-09-10 22:30:16 +08:00
|
|
|
.alg = "pkcs1(rsa,sha3-256)",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
}, {
|
|
|
|
.alg = "pkcs1(rsa,sha3-384)",
|
2017-06-13 05:27:51 +08:00
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
}, {
|
2024-09-10 22:30:16 +08:00
|
|
|
.alg = "pkcs1(rsa,sha3-512)",
|
2017-06-13 05:27:51 +08:00
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2023-10-23 02:22:05 +08:00
|
|
|
}, {
|
2024-09-10 22:30:16 +08:00
|
|
|
.alg = "pkcs1(rsa,sha384)",
|
2023-10-23 02:22:05 +08:00
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
}, {
|
2024-09-10 22:30:16 +08:00
|
|
|
.alg = "pkcs1(rsa,sha512)",
|
2023-10-23 02:22:05 +08:00
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
}, {
|
2024-09-10 22:30:16 +08:00
|
|
|
.alg = "pkcs1pad(rsa)",
|
2023-10-23 02:22:05 +08:00
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2015-06-01 19:43:59 +08:00
|
|
|
}, {
|
|
|
|
.alg = "poly1305",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(poly1305_tv_template)
|
2015-06-01 19:43:59 +08:00
|
|
|
}
|
2022-05-21 02:14:54 +08:00
|
|
|
}, {
|
|
|
|
.alg = "polyval",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(polyval_tv_template)
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rfc3686(ctr(aes))",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
crypto: testmgr - eliminate redundant decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for symmetric ciphers. That's massively redundant, since with few
exceptions (mostly mistakes, apparently), all decryption tests are
identical to the encryption tests, just with the input/result flipped.
Therefore, eliminate the redundancy by removing the decryption test
vectors and updating testmgr to test both encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext'
(ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and
'rlen'. Note that it was always the case that 'ilen == rlen'.
AES keywrap ("kw(aes)") is special because its IV is generated by the
encryption. Previously this was handled by specifying 'iv_out' for
encryption and 'iv' for decryption. To make it work cleanly with only
one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a
boolean that indicates that the IV is generated by the encryption.
In total, this removes over 10000 lines from testmgr.h, with no
reduction in test coverage since prior patches already copied the few
unique decryption test vectors into the encryption test vectors.
This covers all algorithms that used 'struct cipher_testvec', e.g. any
block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or
keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD
tests, though we probably can eliminate a similar redundancy there too.
The testmgr.h portion of this patch was automatically generated using
the following awk script, with some slight manual fixups on top (updated
'struct cipher_testvec' definition, updated a few comments, and fixed up
the AES keywrap test vectors):
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
sub(/\.input[[:space:]]*=$/, ".ptext =")
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=$/, ".ctext =")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.rlen[[:space:]]*=/, ".len\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-21 13:50:29 +08:00
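To picture the unified form described above, here is a hedged sketch of a post-conversion entry; the bytes are placeholders (all zeros and 0xaa), not a real vector from testmgr.h:

/* Illustrative only: placeholder bytes, not an actual test vector. */
static const struct cipher_testvec example_tv_template[] = {
	{
		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.klen	= 16,
		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"	/* was .input */
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.ctext	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"	/* was .result */
			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
		.len	= 16,					/* was .ilen/.rlen */
	},
};

The same entry now drives both directions: encryption feeds ptext and expects ctext, decryption feeds ctext and expects ptext.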
|
|
|
.cipher = __VECS(aes_ctr_rfc3686_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2019-09-13 17:10:42 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rfc3686(ctr(sm4))",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(sm4_ctr_rfc3686_tv_template)
|
|
|
|
}
|
2009-05-04 19:23:40 +08:00
|
|
|
}, {
|
2015-07-09 07:17:34 +08:00
|
|
|
.alg = "rfc4106(gcm(aes))",
|
2019-04-12 12:57:41 +08:00
|
|
|
.generic_driver = "rfc4106(gcm_base(ctr(aes-generic),ghash-generic))",
|
2010-11-05 03:02:04 +08:00
|
|
|
.test = alg_test_aead,
|
2015-01-24 01:42:15 +08:00
|
|
|
.fips_allowed = 1,
|
2010-11-05 03:02:04 +08:00
|
|
|
.suite = {
|
2019-12-02 05:53:30 +08:00
|
|
|
.aead = {
|
|
|
|
____VECS(aes_gcm_rfc4106_tv_template),
|
|
|
|
.einval_allowed = 1,
|
2020-03-05 06:44:03 +08:00
|
|
|
.aad_iv = 1,
|
2019-12-02 05:53:30 +08:00
|
|
|
}
|
2010-11-05 03:02:04 +08:00
|
|
|
}
|
|
|
|
}, {
|
2015-07-14 16:53:22 +08:00
|
|
|
.alg = "rfc4309(ccm(aes))",
|
2019-04-12 12:57:41 +08:00
|
|
|
.generic_driver = "rfc4309(ccm_base(ctr(aes-generic),cbcmac(aes-generic)))",
|
2009-05-04 19:23:40 +08:00
|
|
|
.test = alg_test_aead,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2009-05-04 19:23:40 +08:00
|
|
|
.suite = {
|
2019-12-02 05:53:30 +08:00
|
|
|
.aead = {
|
|
|
|
____VECS(aes_ccm_rfc4309_tv_template),
|
|
|
|
.einval_allowed = 1,
|
2020-03-05 06:44:03 +08:00
|
|
|
.aad_iv = 1,
|
2019-12-02 05:53:30 +08:00
|
|
|
}
|
2009-05-04 19:23:40 +08:00
|
|
|
}
|
2013-04-07 21:43:51 +08:00
|
|
|
}, {
|
2015-06-16 13:54:24 +08:00
|
|
|
.alg = "rfc4543(gcm(aes))",
|
2019-04-12 12:57:41 +08:00
|
|
|
.generic_driver = "rfc4543(gcm_base(ctr(aes-generic),ghash-generic))",
|
2013-04-07 21:43:51 +08:00
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-12-02 05:53:30 +08:00
|
|
|
.aead = {
|
|
|
|
____VECS(aes_gcm_rfc4543_tv_template),
|
|
|
|
.einval_allowed = 1,
|
2020-03-05 06:44:03 +08:00
|
|
|
.aad_iv = 1,
|
2019-12-02 05:53:30 +08:00
|
|
|
}
|
2013-04-07 21:43:51 +08:00
|
|
|
}
|
2015-06-01 19:44:01 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rfc7539(chacha20,poly1305)",
|
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
crypto: testmgr - unify the AEAD encryption and decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for AEADs. That's massively redundant, since usually the decryption
tests are identical to the encryption tests, just with the input/result
swapped. And for some algorithms it was forgotten to add decryption
test vectors, so for them currently only encryption is being tested.
Therefore, eliminate the redundancy by removing the AEAD decryption test
vectors and updating testmgr to test both AEAD encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each aead_testvec now has a 'ptext' (plaintext), 'plen'
(plaintext length), 'ctext' (ciphertext), and 'clen' (ciphertext length)
instead of an 'input', 'ilen', 'result', and 'rlen'. "Ciphertext" here
refers to the full ciphertext, including the authentication tag.
For now the scatterlist divisions are just given for the plaintext
length, not also the ciphertext length. For decryption, the last
scatterlist element is just extended by the authentication tag length.
In total, this removes over 5000 lines from testmgr.h, with no reduction
in test coverage since prior patches already copied the few unique
decryption test vectors into the encryption test vectors.
The testmgr.h portion of this patch was automatically generated using
the following awk script, except that I also manually updated the
definition of 'struct aead_testvec' and fixed the location of the
comment describing the AEGIS-128 test vectors.
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct aead_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct aead_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC {
sub(/\.input[[:space:]]*=/, ".ptext\t=")
sub(/\.result[[:space:]]*=/, ".ctext\t=")
sub(/\.ilen[[:space:]]*=/, ".plen\t=")
sub(/\.rlen[[:space:]]*=/, ".clen\t=")
print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 1235 insertions(+), 6491 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-14 07:32:28 +08:00
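A similarly hedged sketch of the post-conversion AEAD entry shape (placeholder bytes only, not a vector from testmgr.h); 'ctext' is the full ciphertext including a 16-byte tag in this example, so clen = plen + 16:

/* Illustrative only: placeholder bytes, not an actual test vector. */
static const struct aead_testvec example_aead_tv_template[] = {
	{
		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.klen	= 16,
		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00",
		.assoc	= "\x00\x00\x00\x00\x00\x00\x00\x00",
		.alen	= 8,
		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00",	/* was .input */
		.plen	= 8,					/* was .ilen */
		.ctext	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"	/* was .result: */
			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"	/* ciphertext */
			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",	/* plus the tag */
		.clen	= 24,					/* was .rlen */
	},
};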
|
|
|
.aead = __VECS(rfc7539_tv_template)
|
2015-06-01 19:44:01 +08:00
|
|
|
}
|
2015-06-01 19:44:03 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rfc7539esp(chacha20,poly1305)",
|
|
|
|
.test = alg_test_aead,
|
|
|
|
.suite = {
|
2019-12-02 05:53:30 +08:00
|
|
|
.aead = {
|
|
|
|
____VECS(rfc7539esp_tv_template),
|
|
|
|
.einval_allowed = 1,
|
2020-03-05 06:44:03 +08:00
|
|
|
.aad_iv = 1,
|
2019-12-02 05:53:30 +08:00
|
|
|
}
|
2015-06-01 19:44:03 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rmd160",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(rmd160_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2015-06-17 01:31:06 +08:00
|
|
|
}, {
|
|
|
|
.alg = "rsa",
|
|
|
|
.test = alg_test_akcipher,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.akcipher = __VECS(rsa_tv_template)
|
2015-06-17 01:31:06 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "sha1",
|
|
|
|
.test = alg_test_hash,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha1_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha224",
|
|
|
|
.test = alg_test_hash,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha224_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha256",
|
|
|
|
.test = alg_test_hash,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha256_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2016-06-17 13:00:36 +08:00
|
|
|
}, {
|
|
|
|
.alg = "sha3-224",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha3_224_tv_template)
|
2016-06-17 13:00:36 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha3-256",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha3_256_tv_template)
|
2016-06-17 13:00:36 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha3-384",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha3_384_tv_template)
|
2016-06-17 13:00:36 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha3-512",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha3_512_tv_template)
|
2016-06-17 13:00:36 +08:00
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "sha384",
|
|
|
|
.test = alg_test_hash,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha384_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "sha512",
|
|
|
|
.test = alg_test_hash,
|
2009-05-15 13:16:03 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(sha512_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2017-08-21 18:51:29 +08:00
|
|
|
}, {
|
|
|
|
.alg = "sm3",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(sm3_tv_template)
|
|
|
|
}
|
2018-11-07 05:00:03 +08:00
|
|
|
}, {
|
|
|
|
.alg = "streebog256",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(streebog256_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "streebog512",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(streebog512_tv_template)
|
|
|
|
}
|
2018-06-19 01:22:39 +08:00
|
|
|
}, {
|
|
|
|
.alg = "vmac64(aes)",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(vmac64_aes_tv_template)
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "wp256",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(wp256_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "wp384",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(wp384_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "wp512",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(wp512_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2024-09-10 22:30:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "x962(ecdsa-nist-p192)",
|
|
|
|
.test = alg_test_sig,
|
|
|
|
.suite = {
|
|
|
|
.sig = __VECS(x962_ecdsa_nist_p192_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "x962(ecdsa-nist-p256)",
|
|
|
|
.test = alg_test_sig,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
|
|
|
.sig = __VECS(x962_ecdsa_nist_p256_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "x962(ecdsa-nist-p384)",
|
|
|
|
.test = alg_test_sig,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
|
|
|
.sig = __VECS(x962_ecdsa_nist_p384_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
.alg = "x962(ecdsa-nist-p521)",
|
|
|
|
.test = alg_test_sig,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
|
|
|
.sig = __VECS(x962_ecdsa_nist_p521_tv_template)
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xcbc(aes)",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
2017-01-12 21:40:39 +08:00
|
|
|
.hash = __VECS(aes_xcbc128_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2022-10-27 14:54:56 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xcbc(sm4)",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(sm4_xcbc128_tv_template)
|
|
|
|
}
|
crypto: chacha - add XChaCha12 support
Now that the generic implementation of ChaCha20 has been refactored to
allow varying the number of rounds, add support for XChaCha12, which is
the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
three ciphers specified by the original ChaCha paper
(https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
ChaCha20 but has a lower, but still large, security margin.
We need XChaCha12 support so that it can be used in the Adiantum
encryption mode, which enables disk/file encryption on low-end mobile
devices where AES-XTS is too slow as the CPUs lack AES instructions.
We'd prefer XChaCha20 (the more popular variant), but it's too slow on
some of our target devices, so at least in some cases we do need the
XChaCha12-based version. In more detail, the problem is that Adiantum
is still much slower than we're happy with, and encryption still has a
quite noticeable effect on the feel of low-end devices. Users and
vendors push back hard against encryption that degrades the user
experience, which always risks encryption being disabled entirely. So
we need to choose the fastest option that gives us a solid margin of
security, and here that's XChaCha12. The best known attack on ChaCha
breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
security margin is still better than AES-256's. Much has been learned
about cryptanalysis of ARX ciphers since Salsa20 was originally designed
in 2005, and it now seems we can be comfortable with a smaller number of
rounds. The eSTREAM project also suggests the 12-round version of
Salsa20 as providing the best balance among the different variants:
combining very good performance with a "comfortable margin of security".
Note that it would be trivial to add vanilla ChaCha12 in addition to
XChaCha12. However, it's unneeded for now and therefore is omitted.
As discussed in the patch that introduced XChaCha20 support, I
considered splitting the code into separate chacha-common, chacha20,
xchacha20, and xchacha12 modules, so that these algorithms could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-17 09:26:22 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xchacha12",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(xchacha12_tv_template)
|
|
|
|
},
|
crypto: chacha20-generic - add XChaCha20 support
Add support for the XChaCha20 stream cipher. XChaCha20 is the
application of the XSalsa20 construction
(https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than
to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or
96 bits, depending on convention) to 192 bits, while provably retaining
ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the
key and first 128 nonce bits to a 256-bit subkey. Then, it does the
ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.
We need XChaCha support in order to add support for the Adiantum
encryption mode. Note that to meet our performance requirements, we
actually plan to primarily use the variant XChaCha12. But we believe
it's wise to first add XChaCha20 as a baseline with a higher security
margin, in case there are any situations where it can be used.
Supporting both variants is straightforward.
Since XChaCha20's subkey differs for each request, XChaCha20 can't be a
template that wraps ChaCha20; that would require re-keying the
underlying ChaCha20 for every request, which wouldn't be thread-safe.
Instead, we make XChaCha20 its own top-level algorithm which calls the
ChaCha20 streaming implementation internally.
Similar to the existing ChaCha20 implementation, we define the IV to be
the nonce and stream position concatenated together. This allows users
to seek to any position in the stream.
I considered splitting the code into separate chacha20-common, chacha20,
and xchacha20 modules, so that chacha20 and xchacha20 could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity of separate modules.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-17 09:26:20 +08:00
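The construction described above can be sketched as follows; this is an illustration under stated assumptions, with two hypothetical helpers standing in for the ChaCha permutation and the plain ChaCha stream cipher, not the kernel's actual chacha API:

#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical stand-in: ChaCha permutation applied to the key and the
 * first 128 bits of the 192-bit nonce, yielding a 256-bit subkey. */
extern void hchacha_subkey_sketch(const u8 key[32], const u8 nonce16[16],
				  u8 subkey[32]);
/* Hypothetical stand-in: ordinary ChaCha stream cipher with a 64-bit nonce. */
extern void chacha_stream_xor_sketch(const u8 key[32], const u8 nonce8[8],
				     u8 *dst, const u8 *src, unsigned int len);

static void xchacha_crypt_sketch(const u8 key[32], const u8 nonce[24],
				 u8 *dst, const u8 *src, unsigned int len)
{
	u8 subkey[32];

	/* Step 1: derive the per-request subkey from key + nonce[0..15]. */
	hchacha_subkey_sketch(key, nonce, subkey);
	/* Step 2: run ChaCha keyed with the subkey over nonce[16..23]. */
	chacha_stream_xor_sketch(subkey, nonce + 16, dst, src, len);

	memzero_explicit(subkey, sizeof(subkey));
}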
|
|
|
}, {
|
|
|
|
.alg = "xchacha20",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(xchacha20_tv_template)
|
|
|
|
},
|
2022-05-21 02:14:53 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xctr(aes)",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(aes_xctr_tv_template)
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(aes)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "xts(ecb(aes-generic))",
|
2008-08-17 15:01:56 +08:00
|
|
|
.test = alg_test_skcipher,
|
2011-01-29 12:14:01 +08:00
|
|
|
.fips_allowed = 1,
|
2008-07-31 17:08:25 +08:00
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(aes_xts_tv_template)
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
2012-03-06 02:26:21 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(camellia)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "xts(ecb(camellia-generic))",
|
2012-03-06 02:26:21 +08:00
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(camellia_xts_tv_template)
|
2012-03-06 02:26:21 +08:00
|
|
|
}
|
2012-07-12 01:38:29 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(cast6)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "xts(ecb(cast6-generic))",
|
2012-07-12 01:38:29 +08:00
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(cast6_xts_tv_template)
|
2012-07-12 01:38:29 +08:00
|
|
|
}
|
2018-05-11 16:04:06 +08:00
|
|
|
}, {
|
|
|
|
/* Same as xts(aes) except the key is stored in
|
|
|
|
* hardware secure memory which we reference by index
|
|
|
|
*/
|
|
|
|
.alg = "xts(paes)",
|
|
|
|
.test = alg_test_null,
|
|
|
|
.fips_allowed = 1,
|
2011-10-18 18:33:17 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(serpent)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "xts(ecb(serpent-generic))",
|
2011-10-18 18:33:17 +08:00
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(serpent_xts_tv_template)
|
2011-10-18 18:33:17 +08:00
|
|
|
}
|
2022-10-27 14:54:56 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(sm4)",
|
|
|
|
.generic_driver = "xts(ecb(sm4-generic))",
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(sm4_xts_tv_template)
|
|
|
|
}
|
2011-10-18 18:33:33 +08:00
|
|
|
}, {
|
|
|
|
.alg = "xts(twofish)",
|
2019-04-12 12:57:40 +08:00
|
|
|
.generic_driver = "xts(ecb(twofish-generic))",
|
2011-10-18 18:33:33 +08:00
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
2018-05-21 13:50:29 +08:00
|
|
|
.cipher = __VECS(tf_xts_tv_template)
|
2011-10-18 18:33:33 +08:00
|
|
|
}
|
2018-05-11 16:04:06 +08:00
|
|
|
}, {
|
2020-01-22 21:43:23 +08:00
|
|
|
#if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
|
|
|
|
.alg = "xts-paes-s390",
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.test = alg_test_skcipher,
|
|
|
|
.suite = {
|
|
|
|
.cipher = __VECS(aes_xts_tv_template)
|
|
|
|
}
|
|
|
|
}, {
|
|
|
|
#endif
|
2019-05-30 14:52:57 +08:00
|
|
|
.alg = "xxhash64",
|
|
|
|
.test = alg_test_hash,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
|
|
|
.hash = __VECS(xxhash64_tv_template)
|
|
|
|
}
|
2018-03-31 03:14:53 +08:00
|
|
|
}, {
|
|
|
|
.alg = "zstd",
|
|
|
|
.test = alg_test_comp,
|
|
|
|
.fips_allowed = 1,
|
|
|
|
.suite = {
|
|
|
|
.comp = {
|
|
|
|
.comp = __VECS(zstd_comp_tv_template),
|
|
|
|
.decomp = __VECS(zstd_decomp_tv_template)
|
|
|
|
}
|
|
|
|
}
|
2008-07-31 17:08:25 +08:00
|
|
|
}
|
|
|
|
};
|
|
|
|
|
2019-02-01 15:51:43 +08:00
|
|
|

/*
 * alg_test_descs[] must stay sorted by .alg and free of duplicates so
 * that alg_find_test() can binary-search it.
 */
static void alg_check_test_descs_order(void)
{
	int i;

	for (i = 1; i < ARRAY_SIZE(alg_test_descs); i++) {
		int diff = strcmp(alg_test_descs[i - 1].alg,
				  alg_test_descs[i].alg);

		if (WARN_ON(diff > 0)) {
			pr_warn("testmgr: alg_test_descs entries in wrong order: '%s' before '%s'\n",
				alg_test_descs[i - 1].alg,
				alg_test_descs[i].alg);
		}

		if (WARN_ON(diff == 0)) {
			pr_warn("testmgr: duplicate alg_test_descs entry: '%s'\n",
				alg_test_descs[i].alg);
		}
	}
}

static void alg_check_testvec_configs(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++)
		WARN_ON(!valid_testvec_config(
				&default_cipher_testvec_configs[i]));

	for (i = 0; i < ARRAY_SIZE(default_hash_testvec_configs); i++)
		WARN_ON(!valid_testvec_config(
				&default_hash_testvec_configs[i]));
}

static void testmgr_onetime_init(void)
{
	alg_check_test_descs_order();
	alg_check_testvec_configs();

#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
	pr_warn("alg: extra crypto tests enabled. This is intended for developer use only.\n");
#endif
}

/* Binary search; relies on alg_test_descs[] being sorted by .alg name. */
static int alg_find_test(const char *alg)
{
	int start = 0;
	int end = ARRAY_SIZE(alg_test_descs);

	while (start < end) {
		int i = (start + end) / 2;
		int diff = strcmp(alg_test_descs[i].alg, alg);

		if (diff > 0) {
			end = i;
			continue;
		}

		if (diff < 0) {
			start = i + 1;
			continue;
		}

		return i;
	}

	return -1;
}

crypto: api - allow algs only in specific constructions in FIPS mode
Currently we do not distinguish between algorithms that fail on
the self-test vs. those which are disabled in FIPS mode (not allowed).
Both are marked as having failed the self-test.
Recently the need arose to allow the usage of certain algorithms only
as arguments to specific template instantiations in FIPS mode. For
example, standalone "dh" must be blocked, but e.g. "ffdhe2048(dh)" is
allowed. Other potential use cases include "cbcmac(aes)", which must
only be used with ccm(), or "ghash", which must be used only for
gcm().
This patch allows this scenario by adding a new flag FIPS_INTERNAL to
indicate those algorithms that are not FIPS-allowed. They can then be
used only as template arguments, i.e. when looked up via
crypto_grab_spawn(). The FIPS_INTERNAL bit gets
propagated upwards recursively into the surrounding template
instances, until the construction eventually matches an explicit
testmgr entry with ->fips_allowed being set, if any.
The behaviour to skip !->fips_allowed self-test executions in FIPS
mode will be retained. Note that this effectively means that
FIPS_INTERNAL algorithms are handled very similarly to the INTERNAL
ones in this regard. It is expected that the FIPS_INTERNAL algorithms
will receive sufficient testing when the larger constructions they're
a part of, if any, get exercised by testmgr.
Note that as a side-effect of this patch, algorithms which are not
FIPS-allowed will now return -ENOENT instead of -ELIBBAD. Hopefully
this is not an issue, as some callers were relying on -ENOENT already.
Link: https://lore.kernel.org/r/YeEVSaMEVJb3cQkq@gondor.apana.org.au
Originally-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
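As a rough illustration of the intended pattern, the wrapping construction
gets its own testmgr entry with ->fips_allowed set while bare "dh" gets none;
a sketch only (alg_test_kpp and the .kpp suite member exist in testmgr, but
the vector template name here is an assumption):

	{
		.alg = "ffdhe2048(dh)",
		.test = alg_test_kpp,
		.fips_allowed = 1,
		.suite = {
			.kpp = __VECS(ffdhe2048_dh_tv_template)
		}
	},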

static int alg_fips_disabled(const char *driver, const char *alg)
{
	pr_info("alg: %s (%s) is disabled due to FIPS\n", alg, driver);

	return -ECANCELED;
}

int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
{
	int i;
	int j;
	int rc;

	if (!fips_enabled && notests) {
		printk_once(KERN_INFO "alg: self-tests disabled\n");
		return 0;
	}

	DO_ONCE(testmgr_onetime_init);

	/* Bare ciphers are tested via their "ecb(<alg>)" test entry. */
	if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
		char nalg[CRYPTO_MAX_ALG_NAME];

		if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
		    sizeof(nalg))
			return -ENAMETOOLONG;

		i = alg_find_test(nalg);
		if (i < 0)
			goto notest;

		if (fips_enabled && !alg_test_descs[i].fips_allowed)
			goto non_fips_alg;

		rc = alg_test_cipher(alg_test_descs + i, driver, type, mask);
		goto test_done;
	}

	i = alg_find_test(alg);
	j = alg_find_test(driver);
	if (i < 0 && j < 0)
		goto notest;

	if (fips_enabled) {
		if (j >= 0 && !alg_test_descs[j].fips_allowed)
			return -EINVAL;

		if (i >= 0 && !alg_test_descs[i].fips_allowed)
			goto non_fips_alg;
	}

	rc = 0;
	if (i >= 0)
		rc |= alg_test_descs[i].test(alg_test_descs + i, driver,
					     type, mask);
	if (j >= 0 && j != i)
		rc |= alg_test_descs[j].test(alg_test_descs + j, driver,
					     type, mask);

test_done:
	if (rc) {
		if (fips_enabled || panic_on_fail) {
			fips_fail_notify();
			panic("alg: self-tests for %s (%s) failed in %s mode!\n",
			      driver, alg,
			      fips_enabled ? "fips" : "panic_on_fail");
		}

crypto: testmgr - don't generate WARN for missing modules
This userspace command:
modprobe tcrypt
or
modprobe tcrypt mode=0
runs all the tcrypt test cases numbered <200 (i.e., all the
test cases that call tcrypt_test() and return its result).
Tests are sparsely numbered from 0 to 1000. For example:
modprobe tcrypt mode=12
tests sha512, and
modprobe tcrypt mode=152
tests rfc4543(gcm(aes))) - AES-GCM as GMAC
The test manager generates WARNING crashdumps every time it attempts
a test using an algorithm that is not available (not built-in to the
kernel or available as a module):
alg: skcipher: failed to allocate transform for ecb(arc4): -2
------------[ cut here ]-----------
alg: self-tests for ecb(arc4) (ecb(arc4)) failed (rc=-2)
WARNING: CPU: 9 PID: 4618 at crypto/testmgr.c:5777
alg_test+0x30b/0x510
[50 more lines....]
---[ end trace 0000000000000000 ]---
If the kernel is compiled with CRYPTO_USER_API_ENABLE_OBSOLETE
disabled (the default), then these algorithms are not compiled into
the kernel or made into modules and trigger WARNINGs:
arc4 tea xtea khazad anubis xeta seed
Additionally, any other algorithms that are not enabled in .config
will generate WARNINGs. In RHEL 9.0, for example, the default
selection of algorithms leads to 16 WARNING dumps.
One attempt to fix this was by modifying tcrypt_test() to check
crypto_has_alg() and immediately return 0 if crypto_has_alg() fails,
rather than proceed and return a non-zero error value that causes
the caller (alg_test() in crypto/testmgr.c) to invoke WARN().
That knocks out too many algorithms, though; some combinations
like ctr(des3_ede) would work.
Instead, change the condition on the WARN to ignore a return
value of -ENOENT, which is the value returned when the algorithm
or combination of algorithms doesn't exist. Add a pr_warn to
communicate that information in case the WARN is skipped.
This approach allows algorithm tests to work that are combinations,
not provided by one driver, like ctr(blowfish).
Result - no more WARNINGs:
modprobe tcrypt
[ 115.541765] tcrypt: testing md5
[ 115.556415] tcrypt: testing sha1
[ 115.570463] tcrypt: testing ecb(des)
[ 115.585303] cryptomgr: alg: skcipher: failed to allocate transform for ecb(des): -2
[ 115.593037] cryptomgr: alg: self-tests for ecb(des) using ecb(des) failed (rc=-2)
[ 115.593038] tcrypt: testing cbc(des)
[ 115.610641] cryptomgr: alg: skcipher: failed to allocate transform for cbc(des): -2
[ 115.618359] cryptomgr: alg: self-tests for cbc(des) using cbc(des) failed (rc=-2)
...
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
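For context, a minimal sketch (not part of the patch) of the distinction being
relied on: an unavailable algorithm surfaces as -ENOENT from the transform
allocation, which is exactly the return value the WARN below now ignores. The
sketch uses crypto_alloc_skcipher() from <crypto/skcipher.h>, already included
at the top of this file; the algorithm name is just an example:

	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(arc4)", 0, 0);

	if (IS_ERR(tfm)) {
		if (PTR_ERR(tfm) == -ENOENT)
			pr_info("ecb(arc4) not available; skip rather than WARN\n");
		else
			pr_err("ecb(arc4): unexpected error %ld\n", PTR_ERR(tfm));
	} else {
		crypto_free_skcipher(tfm);
	}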
pr_warn("alg: self-tests for %s using %s failed (rc=%d)",
|
|
|
|
alg, driver, rc);
|
|
|
|
WARN(rc != -ENOENT,
|
|
|
|
"alg: self-tests for %s using %s failed (rc=%d)",
|
|
|
|
alg, driver, rc);
|
2020-10-27 00:31:12 +08:00
|
|
|
} else {
|
|
|
|
if (fips_enabled)
|
|
|
|
pr_info("alg: self-tests for %s (%s) passed\n",
|
|
|
|
driver, alg);
|
2019-07-02 19:39:20 +08:00
|
|
|
}
|
2008-10-12 20:36:51 +08:00
|
|
|
|
|
|
|
return rc;
|
2008-08-17 15:01:56 +08:00
|
|
|
|
|
|
|
notest:
	/* As with bare ciphers, lskciphers fall back to their "ecb(<alg>)" test entry. */
	if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_LSKCIPHER) {
		char nalg[CRYPTO_MAX_ALG_NAME];

		if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
		    sizeof(nalg))
			goto notest2;

		i = alg_find_test(nalg);
		if (i < 0)
			goto notest2;

		if (fips_enabled && !alg_test_descs[i].fips_allowed)
			goto non_fips_alg;

		rc = alg_test_skcipher(alg_test_descs + i, driver, type, mask);
		goto test_done;
	}

notest2:
	printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);

	if (type & CRYPTO_ALG_FIPS_INTERNAL)
		return alg_fips_disabled(driver, alg);

	return 0;
non_fips_alg:
	return alg_fips_disabled(driver, alg);
}

#endif /* CONFIG_CRYPTO_MANAGER_DISABLE_TESTS */
EXPORT_SYMBOL_GPL(alg_test);