The xcbc template is setting its alignmask to that of its underlying
'cipher'. Yet, it doesn't care itself about how its inputs and outputs
are aligned, which is ostensibly the point of the alignmask. Instead,
xcbc actually just uses its alignmask itself to runtime-align certain
fields in its tfm and desc contexts appropriately for its underlying
cipher. That is almost entirely pointless too, though, since xcbc is
already using the cipher API functions that handle alignment themselves,
and few ciphers set a nonzero alignmask anyway. Also, even without
runtime alignment, an alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, this patch removes the manual alignment code from xcbc and
makes it stop setting an alignmask.
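To illustrate, here is a minimal sketch of the alignmask-driven runtime
alignment pattern being removed; the context layout and names are
assumptions, not the actual xcbc code:
```c
#include <linux/crypto.h>	/* crypto_cipher_alignmask() */
#include <linux/kernel.h>	/* PTR_ALIGN() */

/* Hypothetical tfm context; the real xcbc context differs. */
struct xcbc_ctx_sketch {
	struct crypto_cipher *child;
	u8 storage[64];		/* holds the runtime-aligned consts */
};

/* The removed pattern: align a context field at runtime to the
 * underlying cipher's alignmask instead of relying on natural
 * alignment. */
static u8 *xcbc_consts_sketch(struct xcbc_ctx_sketch *ctx)
{
	unsigned long alignmask = crypto_cipher_alignmask(ctx->child);

	return PTR_ALIGN(&ctx->storage[0], alignmask + 1);
}
```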
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The vmac template is setting its alignmask to that of its underlying
'cipher'. This doesn't actually accomplish anything useful, though, so
stop doing it. (vmac_update() does have an alignment bug, where it
assumes u64 alignment when it shouldn't, but that bug exists both before
and after this patch.) This is a prerequisite for removing support for
nonzero alignmasks from shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The hmac template is setting its alignmask to that of its underlying
unkeyed hash algorithm, and it is aligning the ipad and opad fields in
its tfm context to that alignment. However, hmac does not actually need
any sort of alignment itself, which makes this pointless except to keep
the pads aligned to what the underlying algorithm prefers. But very few
shash algorithms actually set an alignmask, and it is being removed from
those remaining ones; also, after setkey, the pads are only passed to
crypto_shash_import and crypto_shash_export which ignore the alignmask.
Therefore, make the hmac template stop setting an alignmask and simply
use natural alignment for ipad and opad. Note, this change also moves
the pads from the beginning of the tfm context to the end, which makes
much more sense; the variable-length fields should be at the end.
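Schematically, the new layout looks like this sketch (field names are
illustrative, not the exact hmac code):
```c
/* Fixed-size fields first, variable-length pads last at natural
 * alignment -- no alignmask-based runtime alignment needed. */
struct hmac_ctx_sketch {
	struct crypto_shash *hash;
	/* ipad followed by opad, each of the underlying block size: */
	u8 pads[];
};
```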
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cmac template is setting its alignmask to that of its underlying
'cipher'. Yet, it doesn't care itself about how its inputs and outputs
are aligned, which is ostensibly the point of the alignmask. Instead,
cmac actually just uses its alignmask itself to runtime-align certain
fields in its tfm and desc contexts appropriately for its underlying
cipher. That is almost entirely pointless too, though, since cmac is
already using the cipher API functions that handle alignment themselves,
and few ciphers set a nonzero alignmask anyway. Also, even without
runtime alignment, an alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, this patch removes the manual alignment code from cmac and
makes it stop setting an alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cbcmac template is aligning a field in its desc context to the
alignmask of its underlying 'cipher', at runtime. This is almost
entirely pointless, since cbcmac is already using the cipher API
functions that handle alignment themselves, and few ciphers set a
nonzero alignmask anyway. Also, even without runtime alignment, an
alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, remove the manual alignment code from cbcmac.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This unnecessary explicit setting of cra_alignmask to 0 shows up when
grepping for shash algorithms that set an alignmask. Remove it. No
change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This unnecessary explicit setting of cra_alignmask to 0 shows up when
grepping for shash algorithms that set an alignmask. Remove it. No
change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The zynqmp-sha3-384 algorithm sets a nonzero alignmask, but it doesn't
appear to actually need it. Therefore, stop setting it. This will
allow this algorithm to keep being registered after alignmask support is
removed from shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The stm32 crc32 algorithms set a nonzero alignmask, but they don't seem
to actually need it. Their ->update function already has code that
handles aligning the data to the same alignment that the alignmask
specifies, their ->setkey function already uses get_unaligned_le32(),
and their ->final function already uses put_unaligned_le32().
Therefore, stop setting the alignmask. This will allow these algorithms
to keep being registered after alignmask support is removed from shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As far as I can tell, "crc32c-sparc64" is the only "shash" algorithm in
the kernel that sets a nonzero alignmask and actually relies on it to
get the crypto API to align the inputs and outputs. This capability is
not really useful, though. To unblock removing the support for
alignmask from shash_alg, this patch updates crc32c-sparc64 to no longer
use the alignmask. This means doing 8-byte alignment of the data when
doing an update, using get_unaligned_le32() when setting a non-default
initial CRC, and using put_unaligned_le32() to output the final CRC.
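For instance, the setkey path can now read the caller-provided CRC
without any alignment requirement; a minimal sketch, assuming a u32 CRC
in the tfm context (not the exact sparc64 glue code):
```c
#include <crypto/internal/hash.h>
#include <asm/unaligned.h>

static int crc32c_setkey_sketch(struct crypto_shash *tfm, const u8 *key,
				unsigned int keylen)
{
	u32 *mctx = crypto_shash_ctx(tfm);

	if (keylen != sizeof(u32))
		return -EINVAL;
	/* Works for any key alignment; no alignmask required. */
	*mctx = get_unaligned_le32(key);
	return 0;
}
```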
Partially tested with:
export ARCH=sparc64 CROSS_COMPILE=sparc64-linux-gnu-
make sparc64_defconfig
echo CONFIG_CRYPTO_CRC32C_SPARC64=y >> .config
echo '# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set' >> .config
echo CONFIG_DEBUG_KERNEL=y >> .config
echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
make olddefconfig
make -j$(getconf _NPROCESSORS_ONLN)
qemu-system-sparc64 -kernel arch/sparc/boot/image -nographic
However, qemu doesn't actually support the sparc CRC32C instructions, so
for the test I temporarily replaced crc32c_sparc64() with __crc32c_le()
and made sparc64_has_crc32c_opcode() always return true. So essentially
I tested the glue code, not the actual SPARC part which is unchanged.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Most shash algorithms don't have custom ->import and ->export functions,
resulting in the memcpy() based default being used. Yet,
crypto_shash_import() and crypto_shash_export() still make an indirect
call, which is expensive. Therefore, change how the default import and
export are called to make it so that crypto_shash_import() and
crypto_shash_export() don't do an indirect call in this case.
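Conceptually, the export path then looks like this simplified sketch:
```c
int crypto_shash_export_sketch(struct shash_desc *desc, void *out)
{
	struct crypto_shash *tfm = desc->tfm;
	struct shash_alg *shash = crypto_shash_alg(tfm);

	if (shash->export)
		return shash->export(desc, out);	/* indirect call */

	/* Default case: a direct memcpy(), no indirect call. */
	memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
	return 0;
}
```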
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document SA8775P and SC7280 compatible for the True Random Number
Generator.
Signed-off-by: Om Prakash Singh <quic_omprsing@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Bjorn Andersson <andersson@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The modular build fails because the self-test code depends on pkcs7
which in turn depends on x509 which contains the self-test.
Split the self-test out into its own module to break the cycle.
Fixes: 3cde3174eb ("certs: Add FIPS selftests")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In a high-load arm64 environment, the pcrypt_aead01 test in LTP can lead
to system UAF (Use-After-Free) issues. Since the full analysis of the
pcrypt_aead01 call chain is lengthy, I'll describe the problem scenario
using a simplified model:
Suppose there's a user of padata named `user_function` that adheres to
the padata requirement of calling `padata_free_shell` after `serial()`
has been invoked, as demonstrated in the following code:
```c
struct request {
	struct padata_priv padata;
	struct completion *done;
};

static void parallel(struct padata_priv *padata)
{
	do_something();
}

static void serial(struct padata_priv *padata)
{
	struct request *request = container_of(padata,
					       struct request,
					       padata);

	complete(request->done);
}

static void user_function(void)
{
	DECLARE_COMPLETION(done);
	struct request request = { .done = &done };

	request.padata.parallel = parallel;
	request.padata.serial = serial;
	padata_do_parallel(/* shell, &request.padata, &cpu elided */);
	wait_for_completion(&done);
	padata_free_shell(/* shell elided */);
}
```
In the corresponding padata.c file, there's the following code:
```c
static void padata_serial_worker(struct work_struct *serial_work)
{
	...
	cnt = 0;

	while (!list_empty(&local_list)) {
		...
		padata->serial(padata);
		cnt++;
	}

	local_bh_enable();

	if (refcount_sub_and_test(cnt, &pd->refcnt))
		padata_free_pd(pd);
}
```
Because of the high system load and the accumulation of unexecuted
softirq at this moment, `local_bh_enable()` in padata takes longer
to execute than usual. Subsequently, when accessing `pd->refcnt`,
`pd` has already been released by `padata_free_shell()`, resulting
in a UAF issue with `pd->refcnt`.
The fix is straightforward: add `refcount_dec_and_test` before calling
`padata_free_pd` in `padata_free_shell`.
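In other words, a simplified sketch of the fixed path in
`padata_free_shell`:
```c
pd = rcu_dereference_protected(ps->pd, 1);
if (refcount_dec_and_test(&pd->refcnt))
	padata_free_pd(pd);
```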
Fixes: 07928d9bfc ("padata: Remove broken queue flushing")
Signed-off-by: WangJinchao <wangjinchao@xfusion.com>
Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an algorithm of the new "lskcipher" type is exposed through the
"skcipher" API, calls to crypto_skcipher_setkey() don't pass on the
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS flag to the lskcipher. This causes
self-test failures for ecb(des), as weak keys are not rejected anymore.
Fix this.
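A sketch of the missing flag propagation, assuming the lskcipher API
mirrors the usual crypto flag helpers:
```c
/* Forward the request flags (including FORBID_WEAK_KEYS) from the
 * skcipher tfm to the underlying lskcipher before setting the key. */
crypto_lskcipher_clear_flags(*ctx, CRYPTO_TFM_REQ_MASK);
crypto_lskcipher_set_flags(*ctx, crypto_skcipher_get_flags(tfm) &
				 CRYPTO_TFM_REQ_MASK);
return crypto_lskcipher_setkey(*ctx, key, keylen);
```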
Fixes: 31865c4c4d ("crypto: skcipher - Add lskcipher")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
During HiSilicon accelerator live migration, the migration driver
triggers an EQ/AEQ doorbell at the end of the migration to prevent
EQ/AEQ interrupts from being lost.
This operation may cause EQ/AEQ events to be signalled twice.
To keep EQ/AEQ interrupt processing correct, update the EQ/AEQ
interrupt handlers to cope with such repeated interrupt events.
Fixes: b0eed08590 ("hisi_acc_vfio_pci: Add support for VFIO live migration")
Signed-off-by: Longfang Liu <liulongfang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The heuristics used by gcc trigger false positive truncation
warnings in hifn_alg_alloc. The warnings triggered by the strings
here are clearly false positives (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95755).
Add checks on the snprintf calls to silence these warnings, including
the one for cra_driver_name even though it does not currently trigger
a gcc warning.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Set the error value to -EINVAL instead of zero when the underlying
name (within "ecb()") fails basic sanity checks.
Fixes: 8aee5d4ebd ("crypto: lskcipher - Add compatibility wrapper around ECB")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/202310111323.ZjK7bzjw-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
NIST FIPS 186-5 states that it is recommended that the security
strength associated with the bit length of n and the security strength
of the hash function be the same, or higher upon agreement. Given that
the NIST P384 curve is used, force the use of either SHA384 or SHA512.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sha224 does not provide enough security against collision attacks
relative to the default keys used for signing (RSA 4k & P-384). Also,
sha224 never became popular, as sha256 was already widely adopted
before sha224 was introduced.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is possible to stand up one's own certificates and sign PE-COFF
binaries using SHA-224. However, it never became popular or needed,
since it has similar costs to SHA-256. The Windows Authenticode
infrastructure never had support for SHA-224, and all secureboot keys
used for Linux vmlinuz have always used at least SHA-256.
Given that the point of mscode_parser is to support interoperability
with typical de-facto hashes, remove support for SHA-224 to avoid the
possibility of creating interoperability issues with rhboot/shim, grub,
and non-Linux systems trying to sign or verify vmlinux.
SHA-224 itself is not removed from the kernel, as it is truncated
SHA-256. If requested I can write patches to remove SHA-224 support
across all of the drivers.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for sha1-signed kernel modules and for importing
sha1-signed x.509 certificates.
rsa-pkcs1pad keeps sha1 padding support, which seems to be used by the
virtio driver.
sha1 itself remains available, as there are many drivers and subsystems
using it. Note that only hmac(sha1) with secret keys remains
cryptographically secure.
In the kernel there are filesystems, IMA, and tpm/pcr that appear to be
using sha1. Maybe they can all slowly be upgraded to something else,
e.g. blake3, ParallelHash, or SHAKE256, as needed.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
PSP firmware may report additional error information in the SEV command
buffer registers in situations where an error occurs as the result of an
SEV command. In this case, check if the command buffer registers have been
modified and if so, dump the contents.
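Roughly, the idea is (a hedged sketch; buffer names are illustrative):
```c
/* If the command failed and the firmware wrote back into the command
 * buffer, dump the buffer contents to aid debugging. */
if (ret && memcmp(cmd_buf, saved_cmd_buf, buf_len) != 0)
	print_hex_dump_debug("sev cmdbuf: ", DUMP_PREFIX_OFFSET, 16, 2,
			     cmd_buf, buf_len, false);
```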
Signed-off-by: John Allen <john.allen@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the Linux kernel, a function whose name has two leading underscores
is conventionally called by the same-named function without leading
underscores -- not the other way around. __sha512_block_data_order()
got this backwards. Fix this, albeit without changing the name in the
perlasm since that is OpenSSL code. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the Linux kernel, a function whose name has two leading underscores
is conventionally called by the same-named function without leading
underscores -- not the other way around. __sha256_block_data_order()
and __sha256_block_neon() got this backwards. Fix this, albeit without
changing the names in the perlasm since that is OpenSSL code. No change
in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the Linux kernel, a function whose name has two leading underscores
is conventionally called by the same-named function without leading
underscores -- not the other way around. __sha512_ce_transform() and
__sha512_block_data_order() got this backwards. Fix this, albeit
without changing "sha512_block_data_order" in the perlasm since that is
OpenSSL code. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the Linux kernel, a function whose name has two leading underscores
is conventionally called by the same-named function without leading
underscores -- not the other way around. __sha2_ce_transform() and
__sha256_block_data_order() got this backwards. Fix this, albeit
without changing "sha256_block_data_order" in the perlasm since that is
OpenSSL code. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the Linux kernel, a function whose name has two leading underscores
is conventionally called by the same-named function without leading
underscores -- not the other way around. __sha1_ce_transform() got this
backwards. Fix this. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.
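The pattern is roughly the following sketch (generic; the hypothetical
sketch_init/sketch_finup stand in for the algorithm's own helpers):
```c
static int sketch_digest(struct shash_desc *desc, const u8 *data,
			 unsigned int len, u8 *out)
{
	/* One direct path through the algorithm instead of separate
	 * indirect init/update/final calls via the shash API. */
	sketch_init(desc);
	return sketch_finup(desc, data, len, out);
}
```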
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the source scatterlist is a single page, optimize the first hash
step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and loading the narrow part of the source data.
Likewise, when the destination scatterlist is a single page, optimize
the second hash step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and storing the narrow part of the destination data.
In some cases these optimizations improve performance significantly.
Note: ideally, for optimal performance each architecture should
implement the full "adiantum(xchacha12,aes)" algorithm and fully
optimize the contiguous buffer case to use no indirect calls. That's
not something I've gotten around to doing, though. This commit just
makes a relatively small change that provides some benefit with the
existing template-based approach.
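A hedged sketch of the single-page fast path (the names and the exact
single-page check are illustrative, not the actual adiantum code):
```c
if (sg_nents(sg) == 1) {
	/* Map the lone page once and reuse the mapping for both the
	 * bulk hash and the narrow part. */
	u8 *virt = kmap_local_page(sg_page(sg)) + sg->offset;
	int err = crypto_shash_digest(hash_desc, virt, bulk_len, digest);

	/* ... load/store the narrow part from the same mapping ... */
	kunmap_local(virt);
}
```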
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no need to free the reset_data structure if the recovery is
unsuccessful and the reset is synchronous. The function
adf_dev_aer_schedule_reset() handles the cleanup properly. Only
asynchronous resets require such a structure to be freed inside the
reset worker.
Fixes: d8cba25d2c ("crypto: qat - Intel(R) QAT driver framework")
Signed-off-by: Svyatoslav Pankratov <svyatoslav.pankratov@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement a ->digest function for sha256-ssse3, sha256-avx, sha256-avx2,
and sha256-ni. This improves the performance of crypto_shash_digest()
with these algorithms by reducing the number of indirect calls that are
made.
For now, don't bother with this for sha224, since sha224 is rarely used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement a ->digest function for sha256-ce. This improves the
performance of crypto_shash_digest() with this algorithm by reducing the
number of indirect calls that are made. This only adds ~112 bytes of
code, mostly for the inlined init, as the finup function is tail-called.
For now, don't bother with this for sha224, since sha224 is rarely used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fold shash_digest_unaligned() into its only remaining caller. Also,
avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
crypto_shash_init() with shash->init(desc). Finally, replace
shash_update_unaligned() + shash_final_unaligned() with
shash_finup_unaligned() which does exactly that.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For an shash algorithm that doesn't implement ->digest, currently
crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
bytes, then the rest), then 1 to ->final. This is true even if the
algorithm implements ->finup. This is caused by an unnecessary fallback
to code meant to handle unaligned inputs. In fact,
crypto_shash_digest() already does the needed alignment check earlier.
Therefore, optimize the number of indirect calls for aligned inputs to 3
when the algorithm implements ->finup. It remains at 5 when the
algorithm implements neither ->finup nor ->digest.
Similarly, for an shash algorithm that doesn't implement ->finup,
currently crypto_shash_finup() with aligned input makes 4 indirect
calls: 1 to shash_finup_unaligned(), 2 to ->update, and
1 to ->final. Optimize this to 3 calls.
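Schematically, the aligned-input digest path becomes (a sketch, not the
literal code):
```c
static int shash_digest_sketch(struct shash_desc *desc, const u8 *data,
			       unsigned int len, u8 *out)
{
	struct shash_alg *shash = crypto_shash_alg(desc->tfm);

	/* Two indirect calls here plus the one into this handler:
	 * three in total when the algorithm implements ->finup. */
	return shash->init(desc) ?:
	       shash->finup(desc, data, len, out);
}
```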
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit adad556efc ("crypto: api - Fix built-in testing
dependency failures"), the following warning appears when booting an
x86_64 kernel that is configured with
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y and CONFIG_CRYPTO_AES_NI_INTEL=y,
even when CONFIG_CRYPTO_XTS=y and CONFIG_CRYPTO_AES=y:
alg: skcipher: skipping comparison tests for xts-aes-aesni because xts(ecb(aes-generic)) is unavailable
This is caused by an issue in the xts template where it allocates an
"aes" single-block cipher without declaring a dependency on it via the
crypto_spawn mechanism. This issue was exposed by the above commit
because it reversed the order that the algorithms are tested in.
Specifically, when "xts(ecb(aes-generic))" is instantiated and tested
during the comparison tests for "xts-aes-aesni", the "xts" template
allocates an "aes" crypto_cipher for encrypting tweaks. This resolves
to "aes-aesni". (Getting "aes-aesni" instead of "aes-generic" here is a
bit weird, but it's apparently intended.) Due to the above-mentioned
commit, the testing of "aes-aesni", and the finalization of its
registration, now happens at this point instead of before. At the end
of that, crypto_remove_spawns() unregisters all algorithm instances that
depend on a lower-priority "aes" implementation such as "aes-generic"
but that do not depend on "aes-aesni". However, because "xts" does not
use the crypto_spawn mechanism for its "aes", its dependency on
"aes-aesni" is not recognized by crypto_remove_spawns(). Thus,
crypto_remove_spawns() unexpectedly unregisters "xts(ecb(aes-generic))".
Fix this issue by making the "xts" template use the crypto_spawn
mechanism for its "aes" dependency, like what other templates do.
Note, this fix could be applied as far back as commit f1c131b454
("crypto: xts - Convert to skcipher"). However, the issue only got
exposed by the much more recent changes to how the crypto API runs the
self-tests, so there should be no need to backport this to very old
kernels. Also, an alternative fix would be to flip the list iteration
order in crypto_start_tests() to restore the original testing order.
I'm thinking we should do that too, since the original order seems more
natural, but it shouldn't be relied on for correctness.
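A hedged sketch of the fix: grab the "aes" cipher through a spawn so
that crypto_remove_spawns() sees the dependency (simplified; variable
names are illustrative):
```c
err = crypto_grab_cipher(&ictx->tweak_spawn,
			 skcipher_crypto_instance(inst),
			 cipher_name, 0, mask);
if (err)
	goto err_free_inst;
```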
Fixes: adad556efc ("crypto: api - Fix built-in testing dependency failures")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
MST pointed out that the config change callback is also handled
incorrectly in this driver: it takes a mutex from interrupt context.
Handle config changes from a work queue instead.
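A hedged sketch of the resulting callback (structure and field names
are assumptions):
```c
static void virtcrypto_config_changed_sketch(struct virtio_device *vdev)
{
	struct virtio_crypto *vcrypto = vdev->priv;

	/* Taking a mutex here, in interrupt context, was the bug;
	 * instead queue work that takes the mutex in process context. */
	schedule_work(&vcrypto->config_work);
}
```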
Cc: Gonglei (Arei) <arei.gonglei@huawei.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If temporarily allocated memory is used to set or get the xqc
information, the driver frees that memory as soon as the hardware
mailbox operation exceeds the driver's wait time. However, the hardware
does not cancel the operation, so it may still write data to the freed
memory.
Therefore, when the driver is bound to a device, reserve memory for the
xqc configuration. All subsequent xqc configuration then uses this
reserved memory, preventing the hardware from accessing freed memory.
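A hedged sketch of the reservation (the names, including
QM_XQC_BUF_SIZE, are illustrative):
```c
/* Allocate the xqc mailbox buffer once when binding to the device,
 * instead of per-operation. */
qm->xqc_buf = dma_alloc_coherent(&qm->pdev->dev, QM_XQC_BUF_SIZE,
				 &qm->xqc_dma, GFP_KERNEL);
if (!qm->xqc_buf)
	return -ENOMEM;
```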
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case a health test error occurs during runtime, the power-up health
tests are rerun to verify that the noise source is still good and
that the reported health test error was an outlier. For performing this
power-up health test, the already existing entropy collector instance
is used instead of allocating a new one. This change has the following
implications:
* The noise that is collected as part of the newly run health tests is
inserted into the entropy collector and thus stirs the existing
data present in there further. Thus, the entropy collected during
the health test is not wasted. This is also allowed by SP800-90B.
* The power-on health test is not affected by the state of the entropy
collector, because it resets the APT / RCT state. The remainder of
the state is unrelated to the health test as it is only applied to
newly obtained time stamps.
This change also fixes a reported bug where an allocation was performed
while a lock was held (the lock is taken in jent_kcapi_random, which
calls jent_read_entropy, which in turn can call jent_entropy_init).
Fixes: 04597c8dd6 ("jitter - add RCT/APT support for different OSRs")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use preferred device_get_match_data() instead of of_match_device() to
get the driver match data. With this, adjust the includes to explicitly
include the correct headers.
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use preferred device_get_match_data() instead of of_match_device() to
get the driver match data. With this, adjust the includes to explicitly
include the correct headers.
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Andrew Jeffery <andrew@codeconstruct.com.au>
Reviewed-by: Neal Liu <neal_liu@aspeedtech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Compress and Verify (CnV) feature checks and ensures data integrity
in the compression operation. The implementation of CnV keeps a record
of the CnV errors that have occurred since the driver was loaded.
Expose CnV error stats by providing the "cnv_errors" file under
debugfs. This includes the number of errors detected up to now and
the type of the last error. The error count is provided on a per
Acceleration Engine basis and it is reset every time the driver is loaded.
Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
QAT devices implement a mechanism that allows them to go autonomously
to a low power state depending on the load.
Expose power management info by providing the "pm_status" file under
debugfs. This includes PM state, PM event log, PM event counters, PM HW
CSRs, per-resource type constraint counters and per-domain power gating
status specific to the QAT device.
This information is retrieved from (1) the FW by means of
ICP_QAT_FW_PM_INFO command, (2) CSRs and (3) counters collected by the
device driver.
In addition, add logic to keep track and report power management event
interrupts and acks/nacks sent to FW to allow/prevent state transitions.
Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Include kernel.h for GENMASK(), kstrtobool() and types.
Add forward declaration for struct adf_accel_dev. Remove unneeded
include.
This change doesn't introduce any function change.
Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add hw_random interface support to the qcom-rng driver, as the new IP
block in recent Qualcomm SoCs has an inbuilt NIST SP800-90B compliant
entropy source for generating true random numbers.
Keep the current rng_alg interface as well, for random number
generation through the Kernel Crypto API.
Signed-off-by: Om Prakash Singh <quic_omprsing@quicinc.com>
Reviewed-by: Bjorn Andersson <quic_bjorande@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
Acked-by: Om Prakash Singh <quic_omprsing@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document SM8550 compatible for the True Random Number Generator.
Reviewed-by: Om Prakash Singh <quic_omprsing@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>