Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_ccm_create_common() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in ah6.c.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in ah4.c.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in testmgr.
As a result of this change,
test_sg_division::offset_relative_to_alignmask and
testvec_config::key_offset_relative_to_alignmask no longer have any
effect on ahash (or shash) algorithms. Therefore, also stop setting
these flags in default_hash_testvec_configs[].
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than, e.g., that of the
skcipher alignmask. Second, the key and result buffers are given as
virtual addresses and cannot (in general) be DMA'ed into, so drivers
end up having to copy to/from them in software anyway, which makes it
easy to use memcpy() or the unaligned access helpers instead.
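As an illustration (a hypothetical driver snippet, not code from any
particular driver), a digest can be stored to an arbitrarily aligned
result buffer like this:

	#include <asm/unaligned.h>
	#include <crypto/hash.h>

	/* Copy the digest computed by the hardware into req->result.
	 * req->result may have any alignment, so use the unaligned
	 * access helpers instead of direct u32 stores (memcpy() would
	 * work equally well for a byte-for-byte copy). */
	static void copy_digest_to_result(struct ahash_request *req,
					  const u32 *hw_digest,
					  unsigned int digestsize)
	{
		unsigned int i;

		for (i = 0; i < digestsize / sizeof(u32); i++)
			put_unaligned_le32(hw_digest[i],
					   req->result + i * sizeof(u32));
	}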
The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.
In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.
This follows the same change that was made to shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the stm32 driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in stm32_hash_finish(),
simply using memcpy(). And stm32_hash_setkey() does not assume any
alignment for the key buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the starfive driver no longer use it. This driver did actually
rely on it, but only for storing to the result buffer using int stores
in starfive_hash_copy_hash(). This patch makes
starfive_hash_copy_hash() use put_unaligned() instead. (It really
should use a specific endianness, but that's an existing bug.)
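The conversion has this shape (an illustrative sketch; hash_reg is a
placeholder for the device's digest register, not the exact driver
code):

	/* Before: a direct int store, which relied on the API aligning
	 * req->result via the alignmask. */
	((u32 *)req->result)[i] = readl(hash_reg);

	/* After: put_unaligned() works for any alignment of req->result. */
	put_unaligned(readl(hash_reg), &((u32 *)req->result)[i]);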
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the rockchip driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in rk_hash_run(),
already using put_unaligned_le32(). And this driver only supports
unkeyed hash algorithms, so the key buffer need not be considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the omap-sham driver no longer use it. This driver did actually
rely on it, but only for storing to the result buffer using __u32 stores
in omap_sham_copy_ready_hash(). This patch makes
omap_sham_copy_ready_hash() use put_unaligned() instead. (It really
should use a specific endianness, but that's an existing bug.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the talitos driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
common_nonsnoop_hash_unmap(), simply using memcpy(). And this driver's
"ahash_setkey()" function does not assume any alignment for the key
buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the s5p-sss driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
s5p_hash_copy_result(), simply using memcpy(). And this driver only
supports unkeyed hash algorithms, so the key buffer need not be
considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the mxs-dcp driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in dcp_sha_req_to_buf(),
using a bytewise copy. And this driver only supports unkeyed hash
algorithms, so the key buffer need not be considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the artpec6 driver no longer use it. This driver is unusual in
that it DMAs the digest directly to the result buffer. This is broken
because the crypto API provides the result buffer as an arbitrary
virtual address, which might not be valid for DMA, even after the crypto
API applies the alignmask. Maybe the alignmask (which this driver set
only to 3) made this code work in a few more cases than it otherwise
would have. But even if so, it doesn't make sense for this single,
already-broken driver to block removal of the alignmask support.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the atmel driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
atmel_sha_copy_ready_hash(), simply using memcpy(). And this driver
didn't set an alignmask for any keyed hash algorithms, so the key buffer
need not be considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the sun8i-ss driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in sun8i_ss_hash_run(),
simply using memcpy(). And sun8i_ss_hmac_setkey() does not assume any
alignment for the key buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the sun8i-ce driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in sun8i_ce_hash_run(),
simply using memcpy(). And this driver only supports unkeyed hash
algorithms, so the key buffer need not be considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch
makes the sun4i-ss driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in sun4i_hash(), already
using the unaligned access helpers. And this driver only supports
unkeyed hash algorithms, so the key buffer need not be considered.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_shash_ctx_aligned() is no longer used, and it is useless now that
shash algorithms don't support nonzero alignmasks, so remove it.
Also remove crypto_tfm_ctx_aligned() which was only called by
crypto_shash_ctx_aligned(). It's unlikely to be useful again, since it
seems inappropriate to use cra_alignmask to represent alignment for the
tfm context when it already means alignment for inputs/outputs.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per section 4.c. of the IETF Trust Legal Provisions, "Code Components"
in IETF Documents are licensed on the terms of the BSD-3-Clause license:
https://trustee.ietf.org/documents/trust-legal-provisions/tlp-5/
The term "Code Components" specifically includes ASN.1 modules:
https://trustee.ietf.org/documents/trust-legal-provisions/code-components-list-3/
Add an SPDX identifier as well as a copyright notice pursuant to section
6.d. of the Trust Legal Provisions to all ASN.1 modules in the tree
which are derived from IETF Documents.
Section 4.d. of the Trust Legal Provisions requests that each Code
Component identify the RFC from which it is taken, so link that RFC
in every ASN.1 module.
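The added header has roughly this shape (illustrative; the year, notice
wording and RFC link vary per module):

	-- SPDX-License-Identifier: BSD-3-Clause
	--
	-- Copyright (c) 2023 IETF Trust and the persons identified as
	-- authors of the code. All rights reserved.
	--
	-- Code Component taken from RFC 5280:
	-- https://www.rfc-editor.org/rfc/rfc5280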
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh is no longer with AMD.
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Reviewed-by: Michael Roth <michael.roth@amd.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If a request has the flag CRYPTO_TFM_REQ_MAY_BACKLOG set, the function
qat_alg_send_message_maybacklog() enqueues it in a backlog list if
either (1) there is already at least one request in the backlog list,
(2) the HW ring is nearly full, or (3) the enqueue to the HW ring fails.
If an interrupt occurs right before the lock in qat_alg_backlog_req() is
taken and the backlog queue is being emptied, then there is no request
in the HW queues that can trigger a subsequent interrupt that can clear
the backlog queue. In addition subsequent requests are enqueued to the
backlog list and not sent to the hardware.
Fix it by holding the lock while deciding whether the request needs to
be included in the backlog queue. This synchronizes the flow with the
interrupt handler that drains the backlog queue.
For performance reasons, the logic has been changed to try to enqueue
first without holding the lock.
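A simplified sketch of the resulting logic (helper names and locking
details are illustrative, not the exact driver code):

	static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
	{
		struct qat_instance_backlog *backlog = req->backlog;
		int ret = -EINPROGRESS;

		/* Fast path, lockless: works when nothing is backlogged
		 * and the HW ring accepts the request. */
		if (qat_alg_try_enqueue(req))
			return ret;

		/* Slow path: decide under the lock so the decision cannot
		 * race with the interrupt handler draining the backlog. */
		spin_lock_bh(&backlog->lock);
		if (!qat_alg_try_enqueue(req)) {
			list_add_tail(&req->list, &backlog->list);
			ret = -EBUSY;
		}
		spin_unlock_bh(&backlog->lock);

		return ret;
	}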
Fixes: 3868238397 ("crypto: qat - add backlog mechanism")
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Closes: https://lore.kernel.org/all/af9581e2-58f9-cc19-428f-6f18f1f83d54@redhat.com/T/
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The file adf_cfg_services.h cannot be included in header files since it
instantiates the structure adf_cfg_services. Move that structure to its
own file and export the symbol.
This does not introduce any functional change.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the attribute `num_rps` to the `qat` attribute group. This returns
the number of ring pairs that a single device has, which makes it
possible to know the maximum value that can be set for the attribute
`rp2svc`.
Signed-off-by: Ciunas Bennett <ciunas.bennett@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the attribute `rp2svc` to the `qat` attribute group. This provides
a way for a user to query a specific ring pair for the type of service
that it is currently configured for.
When read, the service configured for the selected ring pair is
returned. When written to, the value is stored as the ring pair whose
service should be returned.
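For example (device path and values illustrative; "sym" is one of the
possible service names):

	# echo 1 > /sys/bus/pci/devices/<BUS:DEV:FUNCTION>/qat/rp2svc
	# cat /sys/bus/pci/devices/<BUS:DEV:FUNCTION>/qat/rp2svc
	sym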
Signed-off-by: Ciunas Bennett <ciunas.bennett@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add an interface for the rate limiting feature which allows adding,
removing and modifying a QAT SLA (Service Level Agreement).
This adds a new sysfs attribute group, `qat_rl`, which can be accessed
from /sys/bus/pci/devices/<BUS:DEV:FUNCTION> with the following
hierarchy:
|-+ qat_rl
|---- id (RW) # SLA identifier
|---- cir (RW) # Committed Information Rate
|---- pir (RW) # Peak Information Rate
|---- srv (RW) # Service to be rate limited
|---- rp (RW) (HEX) # Ring pairs to be rate limited
|---- cap_rem (RW) # Remaining capability for a service
|---- sla_op (WO) # Performs an operation on an SLA
The API works by setting the appropriate RW attributes and then
issuing a command through `sla_op`. For example, to create an SLA, a
user needs to write the necessary data into the attributes cir, pir,
srv and rp, and then write the command `add` into `sla_op` to execute
the operation.
The API also provides the `cap_rem` attribute to get information about
the remaining device capability within a certain service, which is
required when setting an SLA.
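For example, an SLA might be created as follows (values illustrative):

	# cd /sys/bus/pci/devices/<BUS:DEV:FUNCTION>/qat_rl
	# echo 500 > cir	# committed rate
	# echo 750 > pir	# peak rate
	# echo sym > srv	# service to be limited
	# echo 0x1 > rp		# ring pair mask
	# echo add > sla_op	# execute the add operation
	# cat id		# identifier assigned to the new SLA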
Signed-off-by: Ciunas Bennett <ciunas.bennett@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Rate Limiting (RL) feature allows controlling the rate of requests
that can be submitted on a ring pair (RP). This allows sharing a QAT
device among multiple users while ensuring a guaranteed throughput.
The driver provides a mechanism that allows users to set policies,
which are programmed into the device. The device then enforces those
policies.
Configuration of RL is accomplished through entities called SLAs
(Service Level Agreements). Each SLA object gets a unique identifier
and defines the limitations for a single service across up to four
ring pairs (the number of RPs allocated to a single VF).
The rate is determined using two fields:
* CIR (Committed Information Rate), i.e., the guaranteed rate.
* PIR (Peak Information Rate), i.e., the maximum rate achievable
when the device has available resources.
The rate values are expressed on a permille scale, i.e. 0-1000.
Ring pair selection is achieved by providing a 64-bit mask, where
each bit corresponds to one of the ring pairs.
This adds an interface and logic that allow adding, updating,
retrieving and removing an SLA.
Signed-off-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The QAT firmware provides a mechanism to retrieve its capabilities
through the init admin interface.
Add logic to retrieve the firmware capability mask through the
init/admin channel. This mask reports whether the power management,
telemetry and rate limiting features are supported.
The fw capabilities are stored in the accel_dev structure and are used
to detect whether a certain feature is supported by the firmware loaded
in the device.
This is supported only by devices which have an admin AE.
Signed-off-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some enums use the BIT() macro. Include bits.h, which is missing.
Signed-off-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is going to be a new user of the BYTES_PER_[K/M/G]BIT
definitions besides possibly existing ones. Add them to the header.
Signed-off-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The admin API is growing and deserves its own include.
Move it from adf_common_drv.h to adf_admin.h.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 4xxx drivers hardcode the ring to service mapping. However, when
additional configurations were added to the driver, the mappings were
not updated. This implies that an incorrect mapping might be reported
through pfvf for certain configurations.
Add an algorithm that computes the correct ring to service mapping based
on the firmware loaded on the device.
Fixes: 0cec19c761 ("crypto: qat - add support for compression for 4xxx")
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The adf_fw_config structures hardcode a bit mask that represents the
acceleration engines (AEs) to which a certain firmware image has to be
loaded. Remove the hardcoded masks and replace them with defines.
This does not introduce any functional change.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The logic that selects the correct adf_fw_config structure based on the
configured service is replicated twice in the uof_get_name() and
uof_get_ae_mask() functions. Refactor the code so that there is no
replication.
This does not introduce any functional change.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Damian Muszynski <damian.muszynski@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to count correctable, non fatal and fatal errors for QAT
GEN4 devices.
These counters are reported through sysfs attributes in the group
qat_ras.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Introduce a RAS counters interface for counting QAT specific device
errors and expose them through the newly created qat_ras sysfs
attribute group.
This adds the following attributes:
- errors_correctable: number of correctable errors
- errors_nonfatal: number of uncorrectable non fatal errors
- errors_fatal: number of uncorrectable fatal errors
- reset_error_counters: resets all counters
These counters are initialized during device bring up and cleared
during device shutdown and are applicable only to QAT GEN4 devices.
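For example (device path illustrative; the token accepted by
reset_error_counters is an assumption):

	# cat /sys/bus/pci/devices/<BUS:DEV:FUNCTION>/qat_ras/errors_fatal
	0
	# echo 1 > /sys/bus/pci/devices/<BUS:DEV:FUNCTION>/qat_ras/reset_error_counters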
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to detect, report and handle uncorrectable errors reported
through the ERRSOU3 register in QAT GEN4 devices.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the function adf_get_aram_base() which returns the base address of
the ARAM bar.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to detect, report and handle correctable and uncorrectable
errors related to the compression hardware.
These are detected through the EXPRPSSMXLT, EXPRPSSMCPR and EXPRPSSMDCPR
registers.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to detect, report and handle uncorrectable errors reported
through the ERRSOU2 register in QAT GEN4 devices.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to detect and report uncorrectable errors reported through
the ERRSOU1 register in QAT GEN4 devices.
This also introduces the adf_dev_err_mask structure as part of
adf_hw_device_data, which makes it possible to provide different error
masks per device generation.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add logic to detect and report correctable errors in QAT GEN4
devices.
This includes (1) enabling, disabling and handling errors reported
through the ERRSOU0 register and (2) logic to log the errors in the
system log.
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add infrastructure for enabling, disabling and reporting errors in the QAT
driver. This adds a new structure, adf_ras_ops, to adf_hw_device_data that
contains the following methods:
- enable_ras_errors(): enables RAS error reporting at device
initialization.
- disable_ras_errors(): disables RAS error reporting at device
shutdown.
- handle_interrupt(): detects whether an error occurred and reports
whether a reset is required. This is executed immediately after the
error is reported, in the context of an ISR.
An initial, empty, implementation of the methods above is provided
for QAT GEN4.
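Based on this description, the structure plausibly looks as follows (a
sketch, not necessarily the exact definition):

	struct adf_ras_ops {
		/* Enable RAS error reporting at device initialization. */
		void (*enable_ras_errors)(struct adf_accel_dev *accel_dev);
		/* Disable RAS error reporting at device shutdown. */
		void (*disable_ras_errors)(struct adf_accel_dev *accel_dev);
		/* Detect an error from ISR context; report via
		 * *reset_required whether a device reset is needed. */
		bool (*handle_interrupt)(struct adf_accel_dev *accel_dev,
					 bool *reset_required);
	};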
Signed-off-by: Shashank Gupta <shashank.gupta@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the accelerator is fully loaded and the workqueue is receiving
messages and running their callbacks, there is a large number of
messages to be received, and further completed messages keep arriving.
This can keep the receive loop busy for a long time, which causes
soft-lockup watchdog timeouts on kernels with preemption disabled.
The error logs:
watchdog: BUG: soft lockup - CPU#23 stuck for 23s! [kworker/u262:1:1407]
[ 1461.978428][ C23] Call trace:
[ 1461.981890][ C23] complete+0x8c/0xf0
[ 1461.986031][ C23] kcryptd_async_done+0x154/0x1f4 [dm_crypt]
[ 1461.992154][ C23] sec_skcipher_callback+0x7c/0xf4 [hisi_sec2]
[ 1461.998446][ C23] sec_req_cb+0x104/0x1f4 [hisi_sec2]
[ 1462.003950][ C23] qm_poll_req_cb+0xcc/0x150 [hisi_qm]
[ 1462.009531][ C23] qm_work_process+0x60/0xc0 [hisi_qm]
[ 1462.015101][ C23] process_one_work+0x1c4/0x470
[ 1462.020052][ C23] worker_thread+0x150/0x3c4
[ 1462.024735][ C23] kthread+0x108/0x13c
[ 1462.028889][ C23] ret_from_fork+0x10/0x18
Therefore, add an explicit rescheduling point in the while loop to
prevent this problem. With it in place, watchdog timeouts no longer
occur, regardless of whether kernel preemption is enabled.
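A minimal sketch of the shape of the fix (the loop condition and helper
name are illustrative, not the exact hisi_qm code):

	/* Drain completed requests, but yield periodically so a long
	 * drain cannot trip the soft-lockup watchdog on kernels with
	 * preemption disabled. */
	while (qm_has_pending_completions(qm)) {
		qm_poll_req_cb(qp);	/* run the request callbacks */
		cond_resched();		/* explicit rescheduling point */
	}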
Signed-off-by: Longfang Liu <liulongfang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
Trivially convert this driver from always returning zero in the remove
callback to the void returning variant.
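The conversion has this general shape (a sketch with a placeholder
driver name, not the actual diff):

	-static int foo_remove(struct platform_device *pdev)
	+static void foo_remove(struct platform_device *pdev)
	 {
	 	/* resource teardown stays the same */
	-	return 0;
	 }

	 static struct platform_driver foo_driver = {
	-	.remove = foo_remove,
	+	.remove_new = foo_remove,
	 	.driver = { /* ... */ },
	 };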
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Reviewed-by: Michal Simek <michal.simek@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
Trivially convert this driver from always returning zero in the remove
callback to the void returning variant.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is (mostly) ignored
and this typically results in resource leaks. To improve here there is a
quest to make the remove callback return void. In the first step of this
quest all drivers are converted to .remove_new() which already returns
void.
The driver adapted here suffered from this wrong assumption and had
several error paths resulting in resource leaks.
The check for cryp being non-NULL is harmless but unnecessary: cryp
can never be NULL, as .remove() is only called after .probe() completed
successfully, and in that case drvdata was set to a non-NULL value. So
this check can just be dropped.
If pm_runtime_get() fails, the other resources held by the device must
still be freed. Only clk_disable_unprepare() should be skipped, as the
failed pm_runtime_get() means clk_prepare_enable() was not called.
After these changes the remove function returns zero unconditionally and
can trivially be converted to the prototype required for .remove_new().
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is (mostly) ignored
and this typically results in resource leaks. To improve here there is a
quest to make the remove callback return void. In the first step of this
quest all drivers are converted to .remove_new() which already returns
void.
The driver adapted here suffered from this wrong assumption and had an
error path resulting in resource leaks.
If pm_runtime_get() fails, the other resources held by the device must
still be freed. Only clk_disable() should be skipped, as the failed
pm_runtime_get() means clk_enable() was not called.
After this change the remove function returns zero unconditionally and
can trivially be converted to the prototype required for .remove_new().
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>