The kzfree() function tests whether its argument is NULL and, if so,
returns immediately. Thus the test around the call is not needed.
This issue was detected by using the Coccinelle software.
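The transformation is of the following shape (the context structure and
field names here are purely illustrative):

    /* before (illustrative) */
    if (ctx->key)
            kzfree(ctx->key);

    /* after */
    kzfree(ctx->key);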
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The core of the Jitter RNG is intended to be compiled with -O0. To
ensure that the Jitter RNG can be compiled on all architectures,
separate out the RNG core into a stand-alone C file that can be compiled
with -O0 and that does not depend on any kernel include file.
As no kernel includes can be used in the C file implementing the core
RNG, any dependencies on kernel code must be extracted.
A second file provides the link to the kernel and the kernel crypto API
that can be compiled with the regular compile options of the kernel.
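In build-system terms the split can be expressed with a per-object
compiler flag, roughly as follows (a sketch of the Makefile arrangement,
not necessarily the exact rules used):

    # crypto/Makefile (sketch)
    obj-$(CONFIG_CRYPTO_JITTERENTROPY) += jitterentropy_rng.o
    jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
    CFLAGS_jitterentropy.o = -O0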
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull networking updates from David Miller:
1) Add TX fast path in mac80211, from Johannes Berg.
2) Add TSO/GRO support to ibmveth, from Thomas Falcon
3) Move away from cached routes in ipv6, just like ipv4, from Martin
KaFai Lau.
4) Lots of new rhashtable tests, from Thomas Graf.
5) Run ingress qdisc lockless, from Alexei Starovoitov.
6) Allow servers to fetch TCP packet headers for SYN packets of new
connections, for fingerprinting. From Eric Dumazet.
7) Add mode parameter to pktgen, for testing receive. From Alexei
Starovoitov.
8) Cache access optimizations via simplifications of build_skb(), from
Alexander Duyck.
9) Move page frag allocator under mm/, also from Alexander.
10) Add xmit_more support to hv_netvsc, from KY Srinivasan.
11) Add a counter guard in case we try to perform endless reclassify
loops in the packet scheduler.
12) Extend flow dissector to be programmable and use it in new "Flower"
classifier. From Jiri Pirko.
13) AF_PACKET fanout rollover fixes, performance improvements, and new
statistics. From Willem de Bruijn.
14) Add netdev driver for GENEVE tunnels, from John W Linville.
15) Add ingress netfilter hooks and filtering, from Pablo Neira Ayuso.
16) Fix handling of epoll edge triggers in TCP, from Eric Dumazet.
17) Add an ECN retry fallback for the initial TCP handshake, from Daniel
Borkmann.
18) Add tail call support to BPF, from Alexei Starovoitov.
19) Add several pktgen helper scripts, from Jesper Dangaard Brouer.
20) Add zerocopy support to AF_UNIX, from Hannes Frederic Sowa.
21) Favor even port numbers for allocation to connect() requests, and
odd port numbers for bind(0), in an effort to help avoid
ip_local_port_range exhaustion. From Eric Dumazet.
22) Add Cavium ThunderX driver, from Sunil Goutham.
23) Allow bpf programs to access skb_iif and dev->ifindex SKB metadata,
from Alexei Starovoitov.
24) Add support for T6 chips in cxgb4vf driver, from Hariprasad Shenai.
25) Double TCP Small Queues default to 256K to accommodate situations
like the XEN driver and wireless aggregation. From Wei Liu.
26) Add more entropy inputs to flow dissector, from Tom Herbert.
27) Add CDG congestion control algorithm to TCP, from Kenneth Klette
Jonassen.
28) Convert ipset over to RCU locking, from Jozsef Kadlecsik.
29) Track and act upon link status of ipv4 route nexthops, from Andy
Gospodarek.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1670 commits)
bridge: vlan: flush the dynamically learned entries on port vlan delete
bridge: multicast: add a comment to br_port_state_selection about blocking state
net: inet_diag: export IPV6_V6ONLY sockopt
stmmac: troubleshoot unexpected bits in des0 & des1
net: ipv4 sysctl option to ignore routes when nexthop link is down
net: track link-status of ipv4 nexthops
net: switchdev: ignore unsupported bridge flags
net: Cavium: Fix MAC address setting in shutdown state
drivers: net: xgene: fix for ACPI support without ACPI
ip: report the original address of ICMP messages
net/mlx5e: Prefetch skb data on RX
net/mlx5e: Pop cq outside mlx5e_get_cqe
net/mlx5e: Remove mlx5e_cq.sqrq back-pointer
net/mlx5e: Remove extra spaces
net/mlx5e: Avoid TX CQE generation if more xmit packets expected
net/mlx5e: Avoid redundant dev_kfree_skb() upon NOP completion
net/mlx5e: Remove re-assignment of wq type in mlx5e_enable_rq()
net/mlx5e: Use skb_shinfo(skb)->gso_segs rather than counting them
net/mlx5e: Static mapping of netdev priv resources to/from netdev TX queues
net/mlx4_en: Use HW counters for rx/tx bytes/packets in PF device
...
As the AEAD conversion is still ongoing, we do not yet wish to
export legacy AEAD implementations to user-space, as their calling
convention will change.
This patch actually disables all AEAD algorithms because some of
them (e.g., cryptd) will need to be modified to propagate this flag.
Subsequent patches will reenable them on an individual basis.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The bit CRYPTO_ALG_INTERNAL was added to stop af_alg from accessing
internal algorithms. However, af_alg itself was never modified to
actually stop that bit from being used by the user. Therefore the
user could always override it by specifying the relevant bit in the
type and/or mask.
This patch silently discards the bit in both type and mask.
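Conceptually the fix boils down to clearing the bit from the
user-supplied type and mask before the algorithm lookup, e.g. (sketch):

    type &= ~CRYPTO_ALG_INTERNAL;
    mask &= ~CRYPTO_ALG_INTERNAL;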
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes the RNG allocation so that we only hold a
reference to the RNG during initialisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When seqiv is used in compatibility mode, this patch allows it
to function even when an RNG is not available. It also changes
the RNG allocation for the new explicit seqiv interface so that
we only hold a reference to the RNG during initialisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The RNG may not be available during early boot, e.g., the relevant
modules may not be included in the initramfs. As the RNG is only
needed for IPsec, we should not let this prevent use of ciphers
without IV generators, e.g., for disk encryption.
This patch postpones the RNG allocation to the init function so
that one failure during early boot does not make the RNG unavailable
for all subsequent users of the same cipher.
More importantly, it lets the cipher live even if RNG allocation
fails. Of course we no longer offer IV generation, which will
fail with an error if invoked. But all other cipher capabilities
will function as usual.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The RNG may not be available during early boot, e.g., the relevant
modules may not be included in the initramfs. As the RNG is only
needed for IPsec, we should not let this prevent use of ciphers
without IV generators, e.g., for disk encryption.
This patch postpones the RNG allocation to the init function so
that one failure during early boot does not make the RNG unavailable
for all subsequent users of the same cipher.
More importantly, it lets the cipher live even if RNG allocation
fails. Of course we no longer offer IV generation, which will
fail with an error if invoked. But all other cipher capabilities
will function as usual.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new crypto_user command that allows the admin to
delete the crypto system RNG. Note that this can only be done if
the RNG is currently not in use. The next time it is used a new
system RNG will be allocated.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently we free the default RNG when its use count hits zero.
This was OK when the IV generators would latch onto the RNG at
instance creation time and keep it until the instance is torn
down.
Now that IV generators only keep the RNG reference during init
time this scheme causes the default RNG to come and go at a high
frequency. This is highly undesirable as we want to keep a single
RNG in use unless the admin wants it to be removed.
This patch changes the scheme so that the system RNG once allocated
is never removed unless specifically requested.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently skcipher IV generators must provide givencrypt as that
is the whole point. We are currently replacing skcipher
IV generators with explicit IV generators. In order to maintain
backwards compatibility, we need to allow the IV generators to
still function as a normal skcipher when the RNG is not present
(e.g., in the initramfs during boot). IOW everything but givencrypt
and givdecrypt will still work but those two will fail.
Therefore this patch assigns a default givencrypt that simply
returns an error should it be NULL.
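A minimal sketch of such a default (the function name and the exact
error code are illustrative):

    /* illustrative default: IV generation is simply not supported */
    static int skcipher_null_givencrypt(struct skcipher_givcrypt_request *req)
    {
            return -ENOSYS;
    }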
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Should be CRYPTO_AKCIPHER instead of AKCIPHER
Reported-by: Andreas Ruprecht <andreas.ruprecht@fau.de>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge the mvebu/drivers branch of the arm-soc tree which contains
just a single patch bfa1ce5f38 ("bus:
mvebu-mbus: add mv_mbus_dram_info_nooverlap()") that happens to be
a prerequisite of the new marvell/cesa crypto driver.
Add a new rsa generic SW implementation.
This implements only cryptographic primitives.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Added select on ASN1.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add Public Key Encryption API.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Made CRYPTO_AKCIPHER invisible like other type config options.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Poly1305 authenticator requires a unique key for each generated tag. This
implies that we can't set the key per tfm, as multiple users set individual
keys. Instead we pass a desc specific key as the first two blocks of the
message to authenticate in update().
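From the caller's point of view the key halves are therefore fed as
ordinary message data, roughly like this (a sketch using the shash
interface; buffer names are illustrative):

    crypto_shash_init(desc);
    /* r and s key halves as the first two 16-byte blocks */
    crypto_shash_update(desc, key, 32);
    crypto_shash_update(desc, msg, msglen);
    crypto_shash_final(desc, tag);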
Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 9b9f9296a7b73fbafe0a0a6f2494eaadd97f9f73 as
all in-kernel implementations of GCM have been converted to the
new AEAD interface, meaning that they should now pass the updated
rfc4543 test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch updates the rfc4543 test vectors to the new format
where the IV is part of the AD. For now these vectors are still
unused. They will be reactivated once all rfc4543 implementations
have migrated.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts generic gcm and its associated transforms to
the new AEAD interface. The biggest reward is in code reduction
for rfc4543 where it used to do IV stitching which is no longer
needed as the IV is already part of the AD on input.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Because the old rfc4543 implementation always injected an IV into
the AD, while the new one does not, we have to disable the test
while it is converted over to the new AEAD interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This driver builds off of the tristate CONFIG_PKCS7_TEST_KEY and calls
module_init and module_exit. So it should explicitly include module.h
to avoid compile breakage during header shuffles done in the future.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Be more verbose and also report ->backend_cra_name when
crypto_alloc_shash() or crypto_alloc_cipher() fail in
drbg_init_hash_kernel() or drbg_init_sym_kernel()
respectively.
Example:
DRBG: could not allocate digest TFM handle: hmac(sha256)
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As required by SP800-90A, the DRBG implements a reseeding threshold.
This threshold is 2**48 (64 bit) and 2**32 (32 bit) as
implemented in drbg_max_requests.
With the recently introduced changes, the DRBG is now always used as a
stdrng which is initialized very early in the boot cycle. To ensure that
sufficient entropy is present, the Jitter RNG is added to even provide
entropy at early boot time.
However, the 2nd seed source, the nonblocking pool, is usually
degraded at that time. Therefore, the DRBG is seeded with the Jitter RNG
(which I believe contains good entropy, which however is questioned by
others) and is seeded with a degraded nonblocking pool. This seed is
now used for practically the lifetime of the system (2**48 requests is a lot).
The patch now changes the reseed threshold as follows: up until the time
the DRBG obtains a seed from a fully initialized nonblocking pool, the
reseeding threshold is lowered such that the DRBG is forced to reseed
itself reasonably often. Once it obtains the seed from a fully
initialized nonblocking pool, the reseed threshold is set to the value
required by SP800-90A.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The get_blocking_random_bytes API is broken because the wait can
be arbitrarily long (potentially forever) so there is no safe way
of calling it from within the kernel.
This patch replaces it with the new callback API which does not
have this problem.
The patch also removes the entropy buffer registered with the DRBG
handle in favor of stack variables to hold the seed data.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the global -O0 compiler flag from the Makefile with GCC
pragmas to mark only the functions required to be compiled without
optimizations.
This patch also adds a comment describing the rationale for the
functions chosen to be compiled without optimizations.
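One common way to express this with GCC is an optimize pragma (or the
equivalent function attribute) around just the timing-sensitive
functions, e.g. (sketch):

    #pragma GCC push_options
    #pragma GCC optimize ("O0")

    /* noise source functions that must not be optimized live here */

    #pragma GCC pop_options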
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates a new invisible Kconfig option CRYPTO_RNG_DEFAULT
that simply selects the DRBG. This new option is then selected
by the IV generators.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the stdrng module alias and increases the priority
to ensure that it is loaded in preference to other RNGs.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
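In rough terms, a call of this shape moves from the first givencrypt
into the instance init path (a sketch; the context field name is
illustrative):

    err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
                               crypto_ablkcipher_ivsize(geniv));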
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently do the IV seeding on the first givencrypt call in
order to conserve entropy. However, this does not work with
DRBG which cannot be called from interrupt context. In fact,
with DRBG we don't need to conserve entropy anyway. So this
patch moves the seeding into the init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
draft-ietf-ipsecme-chacha20-poly1305 defines the use of ChaCha20/Poly1305 in
ESP. It uses additional four byte key material as a salt, which is then used
with an 8 byte IV to form the ChaCha20 nonce as defined in the RFC7539.
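The nonce construction can be pictured as follows (a sketch; variable
names are illustrative):

    u8 nonce[12];

    memcpy(nonce, salt, 4);     /* 4-byte salt taken from the key material */
    memcpy(nonce + 4, iv, 8);   /* 8-byte per-packet IV carried in ESP */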
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This AEAD uses a chacha20 ablkcipher and a poly1305 ahash to construct the
ChaCha20-Poly1305 AEAD as defined in RFC7539. It supports both synchronous and
asynchronous operations, even if we currently have no async chacha20 or poly1305
drivers.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Poly1305 is a fast message authenticator designed by Daniel J. Bernstein.
It is further defined in RFC7539 as a building block for the ChaCha20-Poly1305
AEAD for use in IETF protocols.
This is a portable C implementation of the algorithm without architecture
specific optimizations, based on public domain code by Daniel J. Bernstein and
Andrew Moon.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We explicitly set the initial block counter by prepending it to the nonce in
little-endian order. The same test vector is used for both encryption and
decryption, as ChaCha20 is a cipher that XORs a keystream.
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ChaCha20 is a high speed 256-bit key size stream cipher algorithm designed by
Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF
protocols as a building block for the ChaCha20-Poly1305 AEAD.
This is a portable C implementation without any architecture specific
optimizations. It uses a 16-byte IV consisting of the initial block counter
followed by the 12-byte ChaCha20 nonce. Some algorithms require an explicit
counter value, for example the mentioned AEAD construction.
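The 16-byte IV is thus laid out as (sketch):

    u8 iv[16];

    memcpy(iv, counter, 4);      /* initial block counter, little endian */
    memcpy(iv + 4, nonce, 12);   /* 96-bit ChaCha20 nonce */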
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On architectures where flush_dcache_page is not needed, we will
end up generating all the code up to the PageSlab call. This is
because PageSlab operates on a volatile pointer and thus cannot
be optimised away.
This patch works around this by checking whether flush_dcache_page
is needed before we call PageSlab which then allows PageSlab to be
compiled away.
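The guard looks roughly like this (a sketch; ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
is the per-architecture indicator, and the page computation is simplified):

    struct page *page = sg_page(walk->sg);

    if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE && !PageSlab(page))
            flush_dcache_page(page);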
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
No new code should be using the return value of crypto_unregister_alg
as it will become void soon.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch ensures that the tfm context always has enough extra
memory to ensure that it is aligned according to cra_alignment.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it stands the only non-type safe functions left in the new
AEAD interface are the cra_init/cra_exit functions. It means
exposing the ugly __crypto_aead_cast to every AEAD implementor.
This patch adds type-safe init/exit functions to AEAD. Existing
algorithms are unaffected while new implementations can simply
fill in these two instead of cra_init/cra_exit.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit f858c7bcca as
the algif_aead interface has been switched over to the new AEAD
interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Conflicts:
drivers/net/phy/amd-xgbe-phy.c
drivers/net/wireless/iwlwifi/Kconfig
include/net/mac80211.h
iwlwifi/Kconfig and mac80211.h were both trivial overlapping
changes.
The drivers/net/phy/amd-xgbe-phy.c file got removed in 'net-next' and
the bug fix that happened on the 'net' side is already integrated
into the rest of the amd-xgbe driver.
Signed-off-by: David S. Miller <davem@davemloft.net>
The patch removes the use of timekeeping_valid_for_hres which is now
marked as internal for the time keeping subsystem. The jitterentropy
does not really require this verification as a coarse timer (when
random_get_entropy is absent) is discovered by the initialization test
of jent_entropy_init, which would cause the jitter rng to not load in
that case.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.
Note that the user-space interface now requires both input and
output to be of the same length, and both must include space for
the AD as well as the authentication tag.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On module unload we weren't unregistering the seqniv template,
thus leading to a crash the next time someone walks the template
list.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a bug in the context size calculation where we
were still referring to the old cra_aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the AD does not necessarily exist in the destination buffer
it must be copied along with the plain/cipher text.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a bug in the context size calculation where we
were still referring to the old cra_aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the AD does not necessarily exist in the destination buffer
it must be copied along with the plain text.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds some common IV generation code currently duplicated
by seqiv and echainiv. For example, the setkey and setauthsize
functions are completely identical.
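For instance, the shared setkey simply forwards to the inner AEAD
transform, along these lines (sketch):

    static int aead_geniv_setkey(struct crypto_aead *tfm,
                                 const u8 *key, unsigned int keylen)
    {
            struct aead_geniv_ctx *ctx = crypto_aead_ctx(tfm);

            return crypto_aead_setkey(ctx->child, key, keylen);
    }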
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch tries to preserve in-place processing in old_crypt as
various algorithms are optimised for in-place processing where
src == dst.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fix from Herbert Xu:
"This disables the newly (4.1) added user-space AEAD interface so that
we can fix issues in the underlying kernel AEAD interface. Once the
new kernel AEAD interface is ready we can then reenable the user-space
AEAD interface"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algif_aead - Disable AEAD user-space for now
The CPU Jitter RNG provides a source of good entropy by
collecting CPU execution time jitter. The entropy in the CPU
execution time jitter is magnified by the CPU Jitter Random
Number Generator. The CPU Jitter Random Number Generator uses
the CPU execution timing jitter to generate a bit stream
which complies with different statistical measurements that
determine the bit stream is random.
The CPU Jitter Random Number Generator delivers entropy which
follows information theoretical requirements. Based on these
studies and the implementation, the caller can assume that
one bit of data extracted from the CPU Jitter Random Number
Generator holds one bit of entropy.
The CPU Jitter Random Number Generator provides a decentralized
source of entropy, i.e. every caller can operate on a private
state of the entropy pool.
The RNG does not have any dependencies on any other service
in the kernel. The RNG only needs a high-resolution time
stamp.
Further design details, the cryptographic assessment and a
large array of test results are documented at
http://www.chronox.de/jent.html.
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
During initialization, the DRBG now tries to allocate a handle of the
Jitter RNG. If such a Jitter RNG is available during seeding, the DRBG
pulls the required entropy/nonce string from get_random_bytes and
concatenates it with a string of equal size from the Jitter RNG. That
combined string is now the seed for the DRBG.
Written differently, the initial seed of the DRBG is now:
get_random_bytes(entropy/nonce) || jitterentropy (entropy/nonce)
If the Jitter RNG is not available, the DRBG only seeds from
get_random_bytes.
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The async seeding operation is triggered during initialization right
after the first non-blocking seeding is completed. As required by the
asynchronous operation of random.c, a callback function is provided that
is triggered by random.c once entropy is available. That callback
function performs the actual seeding of the DRBG.
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to prepare for the addition of the asynchronous seeding call,
the invocation of seeding the DRBG is moved out into a helper function.
In addition, a block of memory is allocated during initialization time
that will be used as a scratchpad for obtaining entropy. That scratchpad
is used for the initial seeding operation as well as by the
asynchronous seeding call. The memory must be zeroized every time the
DRBG seeding call succeeds to avoid entropy data lingering in memory.
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The newly added AEAD user-space interface isn't quite ready for prime time
just yet. In particular it is conflicting with the AEAD single
SG list interface change so this patch disables it now.
Once the SG list stuff is completely done we can then renable
this interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cryptoff parameter was added to facilitate the skipping of
IVs that sit between the AD and the plain/cipher text. However,
it was never implemented correctly and we do not handle users
such as IPsec setting cryptoff. It is simply ignored.
Implementing it correctly is in fact more trouble than it's
worth.
This patch removes the uses of cryptoff by moving the AD forward
to fill the gap left by the IV. The AD is moved back after the
underlying AEAD processing is finished.
This is in fact better than the cryptoff solution because it allows
algorithms that use seqniv (i.e., GCM and CCM) to hash the whole
packet as a single piece, while cryptoff meant that there was
guaranteed to be a gap.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cryptoff parameter was added to facilitate the skipping of
IVs that sit between the AD and the plain/cipher text. However,
it was never implemented correctly and we do not handle users
such as IPsec setting cryptoff. It is simply ignored.
Implementing it correctly is in fact more trouble than it's
worth.
This patch removes the uses of cryptoff and simply falls back
to using the old AEAD interface as it's only needed for old AEAD
implementations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function aead_geniv_alloc currently sets cra_type even for
new style instances. This is unnecessary and may hide bugs such
as when our caller uses crypto_register_instance instead of the
correct aead_register_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
New style AEAD instances must use aead_register_instance. This
worked by chance because aead_geniv_alloc is still setting things
the old way.
This patch converts the template over to the create model where
we are responsible for instance registration so that we can call
the correct function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
New style AEAD instances must use aead_register_instance. This
worked by chance because aead_geniv_alloc is still setting things
the old way.
This patch converts the template over to the create model where
we are responsible for instance registration so that we can call
the correct function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Newer templates use tmpl->create and have a NULL tmpl->alloc. So
we must use tmpl->create if it is set.
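The dispatch therefore becomes roughly the following (a sketch; tb
stands for the parsed template attributes):

    if (tmpl->create) {
            err = tmpl->create(tmpl, tb);
            goto out;
    }

    inst = tmpl->alloc(tb);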
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Newer templates use tmpl->create and have a NULL tmpl->alloc. So
we must use tmpl->create if it is set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The variable tfm_count is accessed by multiple threads without
locking. This patch converts it to an atomic_t.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
AEAD algorithm implementors need to figure out a given algorithm's
IV size and maximum authentication size. During the transition
this is difficult to do as an algorithm could be new style or old
style.
This patch creates two helpers to make this easier.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Conflicts:
drivers/net/ethernet/cadence/macb.c
drivers/net/phy/phy.c
include/linux/skbuff.h
net/ipv4/tcp.c
net/switchdev/switchdev.c
Switchdev was a case of RTNH_H_{EXTERNAL --> OFFLOAD}
renaming overlapping with net-next changes of various
sorts.
phy.c was a case of two changes, one adding a local
variable to a function whilst the second was removing
one.
tcp.c overlapped a deadlock fix with the addition of new tcp_info
statistic values.
macb.c involved the addition of two zyncq device entries.
skbuff.h involved adding back ipv4_daddr to nf_bridge_info
whilst net-next changes put two other existing members of
that struct into a union.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a new AEAD IV generator echainiv. It is intended
to replace the existing skcipher IV generator eseqiv.
If the underlying AEAD algorithm is using the old AEAD interface,
then echainiv will simply use its IV generator.
Otherwise, echainiv will encrypt a counter just like eseqiv but
it'll first xor it against a previously stored IV similar to
chainiv.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new IV generator seqniv which is identical to
seqiv except that it skips the IV when authenticating. This is
intended to be used by algorithms such as rfc4106 that do the
IV authentication implicitly.
Note that the code used for seqniv is in fact identical to the
compatibility case for seqiv.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the seqiv IV generator to work with the new
AEAD interface where IV generators are just normal AEAD algorithms.
Full backwards compatibility is paramount at this point since
no users have yet switched over to the new interface. Nor can
they switch to the new interface until IV generation is fully
supported by it.
So this means we are adding two versions of seqiv alongside the
existing one. The first one is the one that will be used when
the underlying AEAD algorithm has switched over to the new AEAD
interface. The second one handles the current case where the
underlying AEAD algorithm still uses the old interface.
Both versions export themselves through the new AEAD interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a check in scatterwalk_map_and_copy to avoid
copying from the same address to the same address. This is going
to be used for IV copying in AEAD IV generators.
There is no provision for partial overlaps.
This patch also uses the new scatterwalk_ffwd instead of doing
it by hand in scatterwalk_map_and_copy.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes gcm use the default null skcipher instead of
allocating a new one for each tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the basic structure of the new AEAD type. Unlike
the current version, there is no longer any concept of geniv. IV
generation will still be carried out by wrappers but they will be
normal AEAD algorithms that simply take the IPsec sequence number
as the IV.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is the first step in the introduction of a new AEAD
alg type. Unlike normal conversions this patch only renames the
existing aead_alg structure because there are external references
to it.
Those references will be removed after this patch.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The primary user of AEAD, IPsec, includes the IV in the AD in
most cases, except where it is implicitly authenticated by the
underlying algorithm.
The way it is currently implemented is a hack because we pass
the data in piecemeal and the underlying algorithms try to stitch
them back up into one piece.
This is why this patch is adding a new interface that allows a
single SG list to be passed in that contains everything so the
algorithm implementors do not have to stitch.
The new interface accepts a single source SG list and a single
destination SG list. Both must be laid out as follows:
AD, skipped data, plain/cipher text, ICV
The ICV is not present from the source during encryption and from
the destination during decryption.
For the top-level IPsec AEAD algorithm the plain/cipher text will
contain the generated (or received) IV.
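For example, an in-place encryption request on the new interface could
set up its list roughly as follows (a sketch; buffer names are
illustrative):

    struct scatterlist sg[3];

    sg_init_table(sg, 3);
    sg_set_buf(&sg[0], assoc, assoclen);  /* AD (for IPsec this contains the IV) */
    sg_set_buf(&sg[1], text, textlen);    /* plain text in, cipher text out */
    sg_set_buf(&sg[2], icv, icvlen);      /* ICV is written here on encryption */

    aead_request_set_crypt(req, sg, sg, textlen, iv);
    aead_request_set_ad(req, assoclen);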
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the scatterwalk_ffwd helper which can create an
SG list that starts in the middle of an existing SG list. The
new list may either be part of the existing list or be a chain
that latches onto part of the existing list.
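Typical use is along these lines (sketch):

    struct scatterlist tmp[2];
    struct scatterlist *sg;

    /* obtain an SG list that starts right after the AD */
    sg = scatterwalk_ffwd(tmp, req->src, req->assoclen);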
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As AEAD has switched over to using frontend types, the function
crypto_init_spawn must not be used since it does not specify a
frontend type. Otherwise it leads to a crash when the spawn is
used.
This patch fixes it by switching over to crypto_grab_aead instead.
Fixes: 5d1d65f8be ("crypto: aead - Convert top level interface to new style")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As AEAD has switched over to using frontend types, the function
crypto_init_spawn must not be used since it does not specify a
frontend type. Otherwise it leads to a crash when the spawn is
used.
This patch fixes it by switching over to crypto_grab_aead instead.
Fixes: 5d1d65f8be ("crypto: aead - Convert top level interface to new style")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fix from Herbert Xu:
"This fixes a the crash in the newly added algif_aead interface when it
tries to link SG lists"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algif_aead - fix invalid sgl linking
The call to asymmetric_key_hex_to_key_id() from ca_keys_setup()
silently fails with -ENOMEM. Instead of dynamically allocating
memory from a __setup function, this patch defines a variable
and calls __asymmetric_key_hex_to_key_id(), a new helper function,
directly.
This bug was introduced by 'commit 46963b774d ("KEYS: Overhaul
key identification when searching for asymmetric keys")'.
Changelog:
- for clarification, rename hexlen to asciihexlen in
asymmetric_key_hex_to_key_id()
- add size argument to __asymmetric_key_hex_to_key_id() - David Howells
- inline __asymmetric_key_hex_to_key_id() - David Howells
- remove duplicate strlen() calls
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # 3.18
Since the MD5 IVs are now available in crypto/md5.h, use them.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes it.
Also minor updates to comments.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the top-level aead interface to the new style.
All user-level AEAD interface code have been moved into crypto/aead.h.
The allocation/free functions have switched over to the new way of
allocating tfms.
This patch also removes the double indirection on setkey so the
indirection now exists only at the alg level.
Apart from these there are no user-visible changes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new primitive crypto_grab_spawn which is meant
to replace crypto_init_spawn and crypto_init_spawn2. Under the
new scheme the user no longer has to worry about reference counting
the alg object before it is subsumed by the spawn.
It is pretty much an exact copy of crypto_grab_aead.
Prior to calling this function spawn->frontend and spawn->inst
must have been set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for changing how struct net is refcounted
on kernel sockets pass the knowledge that we are creating
a kernel socket from sock_create_kern through to sk_alloc.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change the crypto 842 compression alg to use the software 842 compression
and decompression library. Add the crypto driver_name as "842-generic".
Remove the fallback to LZO compression.
Previously, this crypto compression alg attempted 842 compression using
PowerPC hardware, and fell back to LZO compression and decompression if
the 842 PowerPC hardware was unavailable or failed. This should not
fall back to any other compression method, however; users of this crypto
compression alg can fall back if desired, and transparent fallback tricks
callers into thinking they are getting 842 compression when they actually
get LZO compression - the failure of the 842 hardware should not be
transparent to the caller.
The crypto compression alg for a hardware device also should not be located
in crypto/ so this is now a software-only implementation that uses the 842
software compression/decompression library.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds a couple of test cases for CRC32 (not CRC32c) to
ensure that the generic and arch specific implementations
are in sync.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the test manager, there are a number of if-statements with expressions of
the form !x == y that incur warnings with gcc-5 of the following form:
../crypto/testmgr.c: In function '__test_aead':
../crypto/testmgr.c:523:12: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses]
if (!ret == template[i].fail) {
^
By converting the 'fail' member of struct aead_testvec and struct
cipher_testvec to a bool, we can get rid of the warnings.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In testmgr, struct pcomp_testvec takes a non-const 'params' field, which is
pointed to a const deflate_comp_params or deflate_decomp_params object. With
gcc-5 this incurs the following warnings:
In file included from ../crypto/testmgr.c:44:0:
../crypto/testmgr.h:28736:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_comp_params,
^
../crypto/testmgr.h:28748:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_comp_params,
^
../crypto/testmgr.h:28776:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_decomp_params,
^
../crypto/testmgr.h:28800:13: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-array-qualifiers]
.params = &deflate_decomp_params,
^
Fix this by making the parameters pointer const and constifying the things
that use it.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the user explicitly states that they don't care whether the
algorithm has been tested (type = CRYPTO_ALG_TESTED and mask = 0),
there is a corner case where we may erroneously return ENOENT.
This patch fixes it by correcting the logic in the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the user explicitly states that they don't care whether the
algorithm has been tested (type = CRYPTO_ALG_TESTED and mask = 0),
there is a corner case where we may erroneously return ENOENT.
This patch fixes it by correcting the logic in the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The commit 59afdc7b32 ("crypto:
api - Move module sig ifdef into accessor function") broke the
build when modules are completely disabled because we directly
dereference module->name.
This patch fixes this by using the accessor function module_name.
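An illustrative use of the accessor (not necessarily the exact call
site touched here):

    pr_info("alg: %s from module %s\n", alg->cra_name,
            module_name(alg->cra_module));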
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Highlights:
- "experimental" code for managing md/raid1 across a cluster using
DLM. Code is not ready for general use and triggers a WARNING if used.
However it is looking good and mostly done and having in mainline
will help co-ordinate development.
- RAID5/6 can now batch multiple (4K wide) stripe_heads so as to
handle a full (chunk wide) stripe as a single unit.
- RAID6 can now perform read-modify-write cycles which should
help performance on larger arrays: 6 or more devices.
- RAID5/6 stripe cache now grows and shrinks dynamically. The value
set is used as a minimum.
- Resync is now allowed to go a little faster than the 'minimum' when
there is competing IO. How much faster depends on the speed of the
devices, so the effective minimum should scale with device speed to
some extent.
Merge tag 'md/4.1' of git://neil.brown.name/md
Pull md updates from Neil Brown:
"More updates that usual this time. A few have performance impacts
which hould mostly be positive, but RAID5 (in particular) can be very
work-load ensitive... We'll have to wait and see.
Highlights:
- "experimental" code for managing md/raid1 across a cluster using
DLM. Code is not ready for general use and triggers a WARNING if
used. However it is looking good and mostly done and having in
mainline will help co-ordinate development.
- RAID5/6 can now batch multiple (4K wide) stripe_heads so as to
handle a full (chunk wide) stripe as a single unit.
- RAID6 can now perform read-modify-write cycles which should help
performance on larger arrays: 6 or more devices.
- RAID5/6 stripe cache now grows and shrinks dynamically. The value
set is used as a minimum.
- Resync is now allowed to go a little faster than the 'minimum' when
there is competing IO. How much faster depends on the speed of the
devices, so the effective minimum should scale with device speed to
some extent"
* tag 'md/4.1' of git://neil.brown.name/md: (58 commits)
md/raid5: don't do chunk aligned read on degraded array.
md/raid5: allow the stripe_cache to grow and shrink.
md/raid5: change ->inactive_blocked to a bit-flag.
md/raid5: move max_nr_stripes management into grow_one_stripe and drop_one_stripe
md/raid5: pass gfp_t arg to grow_one_stripe()
md/raid5: introduce configuration option rmw_level
md/raid5: activate raid6 rmw feature
md/raid6 algorithms: xor_syndrome() for SSE2
md/raid6 algorithms: xor_syndrome() for generic int
md/raid6 algorithms: improve test program
md/raid6 algorithms: delta syndrome functions
raid5: handle expansion/resync case with stripe batching
raid5: handle io error of batch list
RAID5: batch adjacent full stripe write
raid5: track overwrite disk count
raid5: add a new flag to track if a stripe can be batched
raid5: use flex_array for scribble data
md raid0: access mddev->queue (request queue member) conditionally because it is not set when accessed from dm-raid
md: allow resync to go faster when there is competing IO.
md: remove 'go_faster' option from ->sync_request()
...
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
Now that all fips_enabled users are including linux/fips.h directly
instead of getting it through internal.h, we can remove the fips.h
inclusions from internal.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h which is reserved for
internal crypto API implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is currently a large ifdef FIPS code section in proc.c.
Ostensibly it's there because the fips_enabled sysctl sits under
/proc/sys/crypto. However, no other crypto sysctls exist.
In fact, the whole ethos of the crypto API is against such user
interfaces so this patch moves all the FIPS sysctl code over to
fips.c.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The header file internal.h is only meant for internal crypto API
implementors such as rng.c. So fips has no business in including
it.
This patch removes that inclusions and instead adds inclusions of
the actual features used by fips.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of fips_enabled should include linux/fips.h directly
instead of getting it through internal.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes the unnecessary CRYPTO_FIPS ifdef from
drbg_healthcheck_sanity so that the code always gets checked
by the compiler.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
Currently we're hiding mod->sig_ok under an ifdef in open code.
This patch adds a module_sig_ok accessor function and removes that
ifdef.
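A sketch of such an accessor:

    #ifdef CONFIG_MODULE_SIG
    static inline bool module_sig_ok(struct module *module)
    {
            return module->sig_ok;
    }
    #else
    static inline bool module_sig_ok(struct module *module)
    {
            return true;
    }
    #endif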
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
The function crypto_ahash_init can also be asynchronous just
like update and final. So all callers must be able to handle
an async return.
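The calling pattern then mirrors update/final, e.g. (a sketch; tresult
is a hypothetical per-request state filled in by the completion
callback):

    err = crypto_ahash_init(req);
    if (err == -EINPROGRESS || err == -EBUSY) {
            wait_for_completion(&tresult.completion);
            err = tresult.err;
    }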
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If we allocate a seed on behalf of the user in crypto_rng_reset,
we must ensure that it is zeroed afterwards or the RNG may be
compromised.
Reported-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that crypto_rng_reset takes a const argument, we no longer
need to cast away the const qualifier.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all rng implementations have switched over to the new
interface, we can remove the old low-level interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the ANSI CPRNG implementation to the new
low-level rng interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
The file internal.h is only meant to be used by internal API
implementation and not algorithm implementations. In fact it
isn't even needed here so this patch removes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
This patch converts the DRBG implementation to the new low-level
rng interface.
This allows us to get rid of struct drbg_gen by using the new RNG
API instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
This patch adds the helpers that allow the registration and removal
of multiple RNG algorithms.
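A driver providing several RNG variants can then do, roughly (a sketch;
the array is illustrative):

    static struct rng_alg my_rng_algs[] = { /* ... */ };

    err = crypto_register_rngs(my_rng_algs, ARRAY_SIZE(my_rng_algs));
    /* and on removal */
    crypto_unregister_rngs(my_rng_algs, ARRAY_SIZE(my_rng_algs));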
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the low-level crypto_rng interface to the
"new" style.
This allows existing implementations to be converted over one-
by-one. Once that is complete we can then remove the old rng
interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no reason why crypto_rng_reset should modify the seed
so this patch marks it as const. Since our algorithms don't
export a const seed function yet we have to go through some
contortions for now.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Glue it all together. The raid6 rmw path should work the same as the
already existing raid5 logic. So emulate the prexor handling/flags
and split functions as needed.
1) Enable xor_syndrome() in the async layer.
2) Split ops_run_prexor() into RAID4/5 and RAID6 logic. Xor the syndrome
at the start of a rmw run as we did it before for the single parity.
3) Take care of rmw run in ops_run_reconstruct6(). Again process only
the changed pages to get syndrome back into sync.
4) Enhance set_syndrome_sources() to fill NULL pages if we are in a rmw
run. The lower layers will calculate start & end pages from that and
call the xor_syndrome() correspondingly.
5) Adapt the several places where we ignored Q handling up to now.
Performance numbers for a single E5630 system with a mix of 10 7200k
desktop/server disks. 300 seconds random write with 8 threads onto a
3.2TB (10*400GB) RAID6 64K chunk without spare (group_thread_cnt=4)
bsize   rmw_level=1   rmw_level=0   rmw_level=1   rmw_level=0
        skip_copy=1   skip_copy=1   skip_copy=0   skip_copy=0
  4K       115 KB/s      141 KB/s      165 KB/s      140 KB/s
  8K       225 KB/s      275 KB/s      324 KB/s      274 KB/s
 16K       434 KB/s      536 KB/s      640 KB/s      534 KB/s
 32K       751 KB/s    1,051 KB/s    1,234 KB/s    1,045 KB/s
 64K     1,339 KB/s    1,958 KB/s    2,282 KB/s    1,962 KB/s
128K     2,673 KB/s    3,862 KB/s    4,113 KB/s    3,898 KB/s
256K     7,685 KB/s    7,539 KB/s    7,557 KB/s    7,638 KB/s
512K    19,556 KB/s   19,558 KB/s   19,652 KB/s   19,688 KB/s
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This patch adds the new top-level function crypto_rng_generate
which generates random numbers with additional input. It also
extends the mid-level rng_gen_random function to take additional
data as input.
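The new call takes the additional input alongside the output buffer,
roughly (sketch):

    err = crypto_rng_generate(rng, addtl, addtl_len, out, outlen);

When no additional data is supplied, addtl may be NULL with a length
of 0, which is what crypto_rng_get_bytes amounts to.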
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the top-level crypto_rng to the "new" style.
It was the last algorithm type added before we switched over
to the new way of doing things exemplified by shash.
All users will automatically switch over to the new interface.
Note that this patch does not touch the low-level interface to
rng implementations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a crypto_alg_extsize helper that can be used
by algorithm types such as pcompress and shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initialising the RNG in drbg_kcapi_init is a waste of precious
entropy because all users will immediately seed the RNG after
the allocation.
In fact, all users should seed the RNG before using it. So there
is no point in doing the seeding in drbg_kcapi_init.
This patch removes the initial seeding and the user must seed
the RNG explicitly (as they all currently do).
This patch also changes drbg_kcapi_reset to allow reseeding.
That is, if you call it after a successful initial seeding, then
it will not reset the internal state of the DRBG before mixing
the new input and entropy.
If you still wish to reset the internal state, you can always
free the DRBG and allocate a new one.
Finally this patch removes locking from drbg_uninstantiate because
it's now only called from the destruction path which must not be
executed in parallel with normal operations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
As we moved the mutex init out of drbg_instantiate and into cra_init
we need to explicitly initialise the mutex in drbg_healthcheck_sanity.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
As the DRBG does not operate on shadow copies of the DRBG instance
any more, the cipher handles only need to be allocated once during
initialization time and deallocated during uninstantiate time.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The creation of a shadow copy is intended to only hold a short term
lock. But the drawback is that parallel users have a very similar DRBG
state which only differs by a high-resolution time stamp.
The DRBG will now hold a long term lock. Therefore, the lock is changed
to a mutex which implies that the DRBG can only be used in process
context.
The lock now guards the instantiation as well as the entire DRBG
generation operation. Therefore, multiple callers are fully serialized
when generating a random number.
As the locking is changed to use a long-term lock to avoid such similar
DRBG states, the entire creation and maintenance of a shadow copy can be
removed.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
drbg_generate returns 0 in the success case. That means that
drbg_generate_long will always only generate drbg_max_request_bytes at
most. Longer requests will be truncated to drbg_max_request_bytes.
Reported-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The buffer used for temporary data must be cleared entirely. In AES192
the buffer used is drbg_statelen(drbg) + drbg_blocklen(drbg) in size, as
documented in the comment above drbg_ctr_df.
This patch ensures that the temp buffer is completely wiped.
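That is, the wipe has to cover the entire working area, along these
lines (a sketch; tmp stands for the temporary buffer used by
drbg_ctr_df):

    memset(tmp, 0, drbg_statelen(drbg) + drbg_blocklen(drbg));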
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 9c521a200b ("crypto: api - remove instance when test failed")
tried to grab a module reference count before the module was even set.
Worse, it then goes on to free the module reference count after it is
set so you quickly end up with a negative module reference count which
prevents people from using any instances belonging to that module.
This patch moves the module initialisation before the reference
count.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The networking updates from David Miller removed the iocb argument from
sendmsg and recvmsg (in commit 1b78414047: "net: Remove iocb argument
from sendmsg and recvmsg"), but the crypto code had added new instances
of them.
When I pulled the crypto update, it was a silent semantic mis-merge, and
I overlooked the new warning messages in my test build. I normally try
to fix those up in the merge itself, but that relies on me noticing. Oh well.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto update from Herbert Xu:
"Here is the crypto update for 4.1:
New interfaces:
- user-space interface for AEAD
- user-space interface for RNG (i.e., pseudo RNG)
New hashes:
- ARMv8 SHA1/256
- ARMv8 AES
- ARMv8 GHASH
- ARM assembler and NEON SHA256
- MIPS OCTEON SHA1/256/512
- MIPS img-hash SHA1/256 and MD5
- Power 8 VMX AES/CBC/CTR/GHASH
- PPC assembler AES, SHA1/256 and MD5
- Broadcom IPROC RNG driver
Cleanups/fixes:
- prevent internal helper algos from being exposed to user-space
- merge common code from assembly/C SHA implementations
- misc fixes"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (169 commits)
crypto: arm - workaround for building with old binutils
crypto: arm/sha256 - avoid sha256 code on ARMv7-M
crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer
crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer
crypto: x86/sha1_ssse3 - move SHA-1 SSSE3 implementation to base layer
crypto: arm64/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer
crypto: arm64/sha1-ce - move SHA-1 ARMv8 implementation to base layer
crypto: arm/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer
crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer
crypto: arm/sha1-ce - move SHA-1 ARMv8 implementation to base layer
crypto: arm/sha1_neon - move SHA-1 NEON implementation to base layer
crypto: arm/sha1 - move SHA-1 ARM asm implementation to base layer
crypto: sha512-generic - move to generic glue implementation
crypto: sha256-generic - move to generic glue implementation
crypto: sha1-generic - move to generic glue implementation
crypto: sha512 - implement base layer for SHA-512
crypto: sha256 - implement base layer for SHA-256
crypto: sha1 - implement base layer for SHA-1
crypto: api - remove instance when test failed
crypto: api - Move alg ref count init to crypto_check_alg
...
This updates the generic SHA-512 implementation to use the
generic shared SHA-512 glue code.
It also implements a .finup hook, crypto_sha512_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
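As a rough illustration, a .finup hook built on the shared base layer looks like the sketch below; the helper names are assumed to follow the new include/crypto/sha512_base.h, and sha512_generic_block_fn stands in for this module's block function.

	/* Sketch of a .finup implementation on top of the shared base layer. */
	int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
				unsigned int len, u8 *hash)
	{
		if (len)
			sha512_base_do_update(desc, data, len,
					      sha512_generic_block_fn);
		sha512_base_do_finalize(desc, sha512_generic_block_fn);
		return sha512_base_finish(desc, hash);
	}
	EXPORT_SYMBOL(crypto_sha512_finup);	/* exported for other modules */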
This updates the generic SHA-256 implementation to use the
new shared SHA-256 glue code.
It also implements a .finup hook, crypto_sha256_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This updates the generic SHA-1 implementation to use the generic
shared SHA-1 glue code.
It also implements a .finup hook, crypto_sha1_finup(), and exports
it to other modules. The import() and export() functions and the
.statesize member are dropped, since the default implementation
is perfectly suitable for this module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A cipher instance is added to the list of instances unconditionally
regardless of whether the associated test failed. However, a failed
test implies that during another lookup, the cipher instance will
be added to the list again as it will not be found by the lookup
code.
That means that the list can be filled up with instances whose tests
failed.
Note: in practice, tests only fail in FIPS mode when a cipher is not
marked as fips_allowed=1. This can be seen with cmac(des3_ede), which is
not marked fips_allowed=1. When allocating the cipher, the allocation
fails with -ENOENT due to the missing fips_allowed=1 flag (which
causes the testmgr to return EINVAL). Yet the instance of
cmac(des3_ede) is shown in /proc/crypto. Allocating the cipher again
fails as well, but a second instance is listed in /proc/crypto.
The patch simply de-registers the instance when the test fails.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We currently initialise the crypto_alg ref count in the function
__crypto_register_alg. As one of the callers of that function
crypto_register_instance needs to obtain a ref count before it
calls __crypto_register_alg, we need to move the initialisation
out of there.
Since both callers of __crypto_register_alg call crypto_check_alg,
this is the logical place to perform the initialisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
Trivial conflict in net/socket.c and a non-trivial one in crypto -
the latter had evaded the aio_complete() removal.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The function crypto_alg_match returns an algorithm without taking
any references on it. This means that the algorithm can be freed
at any time, therefore all users of crypto_alg_match are buggy.
This patch fixes this by taking a reference count on the algorithm
to prevent such races.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
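The pattern of the fix, sketched with the internal crypto_mod_put() helper; the surrounding code and the arguments to crypto_alg_match are illustrative.

	/* Sketch: crypto_alg_match() now returns with a reference held, so
	 * every caller must drop it once the algorithm is no longer needed. */
	struct crypto_alg *alg;

	alg = crypto_alg_match(p, exact);
	if (!alg)
		return -ENOENT;

	/* ... use alg: report it, delete it, etc. ... */

	crypto_mod_put(alg);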
This patch fixes a spelling typo in crypto/Kconfig.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_unregister_instance take a crypto_instance
instead of a crypto_alg. This allows us to remove a duplicate
CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There are multiple problems in crypto_unregister_instance:
1) The cra_refcnt BUG_ON check is racy and can cause crashes.
2) The cra_refcnt check shouldn't exist at all.
3) There is no reference on tmpl to protect the tmpl->free call.
This patch rewrites the function using crypto_remove_spawn which
now morphs into crypto_remove_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
After the TX sgl is expanded we need to explicitly mark the end of data
at the last buffer that contains data.
Changes in v2
- use type 'bool' and true/false for 'mark'.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need to use kzalloc to allocate sgls as the structure is initialized anyway.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use EXPORT_SYMBOL_GPL instead of EXPORT_SYMBOL.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The mcryptd is used as a wrapper around internal ciphers. Therefore,
mcryptd must be able to process internal ciphers, and the mcryptd
instance itself must be marked as internal if the underlying cipher is
an internal cipher.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that some ciphers cannot be accessed via the kernel crypto API,
callers should be able to identify which ciphers are not callable. A
boolean field is added to /proc/crypto to identify such internal
ciphers.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cryptd is used as a wrapper around internal ciphers. Therefore,
cryptd must be able to process internal ciphers, and the cryptd instance
itself must be marked as internal if the underlying cipher is an
internal cipher.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allocate the ciphers irrespective of whether they are marked as internal
or not. As all ciphers, including internal ciphers, are processed by the
testmgr, it must be able to allocate those ciphers.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Several hardware related cipher implementations are implemented as
follows: a "helper" cipher implementation is registered with the
kernel crypto API.
Such helper ciphers are never intended to be called by normal users. In
some cases, calling them via the normal crypto API may even cause
failures including kernel crashes. In a normal case, the "wrapping"
ciphers that use the helpers ensure that these helpers are invoked
such that they cannot cause any calamity.
Considering the AF_ALG user-space interface, unprivileged users can
call all ciphers registered with the crypto API, including these
helper ciphers that are not intended to be called directly. That
means that, via AF_ALG, user space may invoke these helper ciphers
and may cause undefined states or side effects.
To avoid any potential side effects with such helpers, the patch
prevents the helpers from being called directly. A new cipher type
flag is added: CRYPTO_ALG_INTERNAL. This flag shall be used
to mark helper ciphers. These ciphers can only be used if the
caller invokes them with CRYPTO_ALG_INTERNAL set in both the type and
mask fields.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
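For illustration, a wrapping driver would request such a helper roughly as sketched below; the algorithm name "__sha1-example" is hypothetical.

	/* Sketch: only callers that set CRYPTO_ALG_INTERNAL in both the type
	 * and the mask can see internal helpers; callers that do not set it
	 * (such as the AF_ALG path) cannot allocate them. */
	struct crypto_ahash *helper;

	helper = crypto_alloc_ahash("__sha1-example",
				    CRYPTO_ALG_INTERNAL, CRYPTO_ALG_INTERNAL);
	if (IS_ERR(helper))
		return PTR_ERR(helper);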
Change type from unsigned long to int to fix an issue reported by kbuild robot:
crypto/algif_skcipher.c:596 skcipher_recvmsg_async() warn: unsigned 'used' is
never less than zero.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The way algif_skcipher currently works is that on sendmsg/sendpage it
builds an sgl for the input data, and then on read/recvmsg it submits
the job for encryption, putting the user to sleep until the data is
processed. This way it can only handle one job at a given time.
This patch changes it to be asynchronous by adding AIO support.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Due to the change to RNGs to always return zero in the success case, the
RNG interface must zeroize the buffer using the length provided by the
caller.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the change to RNGs to always return zero in the success case, the
invocation of the RNGs in the test manager must be updated, as otherwise
the RNG self-tests are no longer executed properly.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Alexander Bergmann <abergmann@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This moves all Kconfig symbols defined in crypto/Kconfig that depend
on CONFIG_ARM to a dedicated Kconfig file in arch/arm/crypto, which is
where the code that implements those features resides as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable user to select OCTEON SHA1/256/512 modules.
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change the RNGs to always return 0 in the success case.
This patch ensures that seqiv.c works with RNGs other than krng. seqiv
expects that any return code other than 0 is an error. Without the
patch, rfc4106(gcm(aes)) will not work when using a DRBG or an ANSI
X9.31 RNG.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
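With this convention, callers such as seqiv can treat any non-zero return as an error, roughly as in the short sketch below (variable names are illustrative).

	/* Sketch: under the new convention, crypto_rng_get_bytes() returns 0
	 * on success and a negative error code otherwise, for every RNG. */
	err = crypto_rng_get_bytes(rng, buf, len);
	if (err)
		return err;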
The DRBG code contains memset(0) calls to initialize a variable
that are not necessary, as the variable is always overwritten by
the processing.
This patch increases the speed of the CTR and Hash DRBGs by about 5%.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG only encrypts one single block at a time. Thus, use the
single block crypto API to avoid additional overhead from the block
chaining modes.
With the patch, the speed of the DRBG increases between 30% and 40%.
The DRBG still passes the CTR DRBG CAVS test.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
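The single-block cipher interface used here looks roughly like the sketch below; in the DRBG the handle is allocated once at instantiation, and the key/output variables are placeholders.

	/* Sketch: single-block AES via the crypto_cipher API, avoiding the
	 * overhead of a block-chaining mode for one-block operations. */
	struct crypto_cipher *tfm;
	int ret;

	tfm = crypto_alloc_cipher("aes", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_cipher_setkey(tfm, key, keylen);
	if (!ret)
		crypto_cipher_encrypt_one(tfm, outval, in); /* one block */

	crypto_free_cipher(tfm);
	return ret;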
Integrate the module into the kernel config tree.
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable compilation of the AEAD AF_ALG support and provide a Kconfig
option for it.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the AEAD support for AF_ALG.
The implementation is based on algif_skcipher, but contains heavy
modifications to streamline the interface for AEAD uses.
To use AEAD, the user space consumer has to use the salg_type named
"aead".
The AEAD implementation includes some overhead to calculate the size of
the ciphertext, because the AEAD implementation of the kernel crypto API
makes an implicit assumption about the location of the authentication
tag. When performing an encryption, the tag is added to the generated
ciphertext (note, the tag is placed adjacent to the ciphertext). For
decryption, the caller must hand in the ciphertext with the tag
appended. Therefore, the selection of the memory to use needs to add or
subtract the tag size from the source/destination buffers depending on
the direction of the operation. The code is provided with comments
explaining when and how that operation is performed.
A fully working example using all aspects of AEAD is provided at
http://www.chronox.de/libkcapi.html
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
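A condensed user-space sketch of the "aead" salg_type interface described above; the algorithm name, key, and tag size are placeholders and error handling is omitted for brevity.

	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/if_alg.h>

	#ifndef SOL_ALG
	#define SOL_ALG 279
	#endif

	int main(void)
	{
		struct sockaddr_alg sa = {
			.salg_family = AF_ALG,
			.salg_type   = "aead",		/* salg_type named above */
			.salg_name   = "gcm(aes)",	/* placeholder algorithm */
		};
		unsigned char key[16] = { 0 };		/* placeholder key */
		int tfmfd, opfd;

		tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
		setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
		/* Tag length: on encryption the tag is appended to the
		 * ciphertext; on decryption it must already be appended. */
		setsockopt(tfmfd, SOL_ALG, ALG_SET_AEAD_AUTHSIZE, NULL, 16);
		opfd = accept(tfmfd, NULL, 0);

		/* A real caller now issues sendmsg() with ALG_SET_OP,
		 * ALG_SET_IV and ALG_SET_AEAD_ASSOCLEN control messages plus
		 * the input data, and read()s back buffers sized for the
		 * ciphertext plus the appended tag. */

		close(opfd);
		close(tfmfd);
		return 0;
	}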