linux/drivers/crypto/qce/cipher.h
Ard Biesheuvel 90e2f78271 crypto: qce - permit asynchronous skcipher as fallback
Even though the qce driver implements asynchronous versions of ecb(aes),
cbc(aes) and xts(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that
it interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue, but
potentially a security issue as well (due to the fact that table based AES
is not time invariant), let's fix this, by allocating an ordinary skcipher
as the fallback, and invoke it with the completion routine that was given
to the outer request.
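
The resulting pattern (a sketch inferred from the commit description, not the verbatim diff; error handling omitted) allocates an ordinary skcipher and forwards the outer request's completion callback to it:

```c
/* Allocation: any skcipher, async or not, flagged as a fallback */
ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)),
				      0, CRYPTO_ALG_NEED_FALLBACK);

/* Invocation: hand the outer request's completion routine to the fallback */
skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
			      req->base.complete, req->base.data);
skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
			   req->cryptlen, req->iv);
ret = encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) :
		crypto_skcipher_decrypt(&rctx->fallback_req);
```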

While at it, remove the pointless memset() from qce_skcipher_init(), and
remove the call to it from qce_skcipher_init_fallback().

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16 21:49:03 +10:00


/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (c) 2010-2014, The Linux Foundation. All rights reserved.
 */

#ifndef _CIPHER_H_
#define _CIPHER_H_

#include "common.h"
#include "core.h"

#define QCE_MAX_KEY_SIZE	64

struct qce_cipher_ctx {
	u8 enc_key[QCE_MAX_KEY_SIZE];
	unsigned int enc_keylen;
	struct crypto_skcipher *fallback;
};
/**
 * struct qce_cipher_reqctx - holds private cipher objects per request
 * @flags: operation flags
 * @iv: pointer to the IV
 * @ivsize: IV size
 * @src_nents: source entries
 * @dst_nents: destination entries
 * @result_sg: scatterlist used for result buffer
 * @dst_tbl: destination sg table
 * @dst_sg: destination sg pointer table beginning
 * @src_tbl: source sg table
 * @src_sg: source sg pointer table beginning
 * @cryptlen: crypto length
 */
struct qce_cipher_reqctx {
	unsigned long flags;
	u8 *iv;
	unsigned int ivsize;
	int src_nents;
	int dst_nents;
	struct scatterlist result_sg;
	struct sg_table dst_tbl;
	struct scatterlist *dst_sg;
	struct sg_table src_tbl;
	struct scatterlist *src_sg;
	unsigned int cryptlen;
	struct skcipher_request fallback_req;	// keep at the end
};
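
Because fallback_req must stay at the end of the struct (the fallback's own request context is stored in memory directly after it), the tfm init path is expected to size the per-request context to cover both. A sketch of that standard kernel pattern, assuming the qce driver follows it:

```c
/* Reserve room for our reqctx plus the fallback's trailing request context */
crypto_skcipher_set_reqsize(tfm, sizeof(struct qce_cipher_reqctx) +
				 crypto_skcipher_reqsize(ctx->fallback));
```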
static inline struct qce_alg_template *to_cipher_tmpl(struct crypto_skcipher *tfm)
{
	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);

	return container_of(alg, struct qce_alg_template, alg.skcipher);
}

extern const struct qce_algo_ops skcipher_ops;

#endif /* _CIPHER_H_ */