HIFN uses the transform context to store per-request data, which breaks
when more than one request is outstanding. Move the per-request members
from struct hifn_context to a new struct hifn_request_context and
convert the code to use it.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ANSI X9.31 PRNG docs aren't particularly clear on how to increment DT,
but empirical testing shows we're incrementing from the wrong end. A
10,000-iteration Monte Carlo RNG test currently winds up not getting the
expected result.
From http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf :
# CAVS 4.3
# ANSI931 MCT
[X9.31]
[AES 128-Key]
COUNT = 0
Key = 9f5b51200bf334b5d82be8c37255c848
DT = 6376bbe52902ba3b67c925fa701f11ac
V = 572c8e76872647977e74fbddc49501d1
R = 48e9bd0d06ee18fbe45790d5c3fc9b73
Currently, we get 0dd08496c4f7178bfa70a2161a79459a after 10000 loops.
Inverting the DT increment routine results in us obtaining the expected result
of 48e9bd0d06ee18fbe45790d5c3fc9b73. Verified on both x86_64 and ppc64.
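For illustration, the corrected behaviour amounts to treating DT as a
big-endian counter and propagating the carry from the last byte toward the
first; a minimal sketch (not the literal patch), with DEFAULT_BLK_SZ being
the 16-byte block size:

static void increment_dt(unsigned char *DT, unsigned int len)
{
	int i;

	/* start at the last byte and ripple any carry toward byte 0 */
	for (i = len - 1; i >= 0; i--) {
		DT[i] += 1;
		if (DT[i] != 0)
			break;
	}
}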
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Selecting CRYPTO_CRC32C is not enough, since CRYPTO, which CRYPTO_CRC32C
depends on, may itself be disabled. This patch adds a select on CRYPTO
as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
While working with some FIPS RNGVS test vectors yesterday, I discovered a
little bug in the way the ansi_cprng code works right now.
For example, the following test vector (complete with expected result)
from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ...
Key = f3b1666d13607242ed061cabb8d46202
DT = e6b3be782a23fa62d71d4afbb0e922fc
V = f0000000000000000000000000000000
R = 88dda456302423e5f69da57e7b95c73a
...when run through ansi_cprng, yields an incorrect R value
of e2afe0d794120103d6e86a2b503bdfaa.
Loading up ansi_cprng with dbg=1, though, made it fairly obvious what was
going wrong:
----8<----
getting 16 random bytes for context ffff810033fb2b10
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Input V: 00000000: f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
tmp stage 0: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: f4 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
tmp stage 2: 00000000: 8c 53 6f 73 a4 1a af d4 20 89 68 f4 58 64 f8 be
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Output V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
New Random Data: 00000000: 88 dd a4 56 30 24 23 e5 f6 9d a5 7e 7b 95 c7 3a
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Input V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
tmp stage 0: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: 80 6b 3a 8c 23 ae 8f 53 be 71 4c 16 fc 13 b2 ea
tmp stage 2: 00000000: 2a 4d e1 2a 0b 58 8e e6 36 b8 9c 0a 26 22 b8 30
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e8 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: c8 e2 01 fd 9f 4a 8f e5 e0 50 f6 21 76 19 67 9a
Output V: 00000000: ba 98 e3 75 c0 1b 81 8d 03 d6 f8 e2 0c c6 54 4b
New Random Data: 00000000: e2 af e0 d7 94 12 01 03 d6 e8 6a 2b 50 3b df aa
returning 16 from get_prng_bytes in context ffff810033fb2b10
----8<----
The expected result is there, in the first "New Random Data", but we're
incorrectly making a second call to _get_more_prng_bytes, due to some checks
that are slightly off, which results in our original bytes never being
returned anywhere.
One approach to fixing this would be to alter some byte_count checks in
get_prng_bytes, but that would mean the last DEFAULT_BLK_SZ bytes would be
copied a byte at a time rather than in a single memcpy. A slightly more
involved, equally functional, and ultimately more efficient fix was
suggested to me by Neil, which I'm submitting here. All of the RNGVS ANSI
X9.31 AES128 VST test vectors I've passed through ansi_cprng now return
the expected results with this change.
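The gist of the change, as a rough sketch with illustrative names rather
than the driver's exact identifiers: copy whole blocks with a single memcpy
while at least DEFAULT_BLK_SZ bytes remain, and only fall back to
byte-at-a-time copying for a short tail.

/* sketch only: get_block() stands in for _get_more_prng_bytes() plus
 * the context bookkeeping; ctx->rand_data holds one fresh block */
while (byte_count >= DEFAULT_BLK_SZ) {
	if (get_block(ctx) < 0)
		goto done;
	memcpy(ptr, ctx->rand_data, DEFAULT_BLK_SZ);
	ptr += DEFAULT_BLK_SZ;
	byte_count -= DEFAULT_BLK_SZ;
}
/* anything left over (< DEFAULT_BLK_SZ) is copied a byte at a time */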
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ARRAY_SIZE is more concise than dividing the size of an array by the size
of its type or the size of its first element.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@i@
@@
#include <linux/kernel.h>
@depends on i using "paren.iso"@
type T;
T[] E;
@@
- (sizeof(E)/sizeof(T))
+ ARRAY_SIZE(E)
// </smpl>
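For example (hash_names and do_something are made up for illustration;
ARRAY_SIZE itself comes from <linux/kernel.h>):

static const char *const hash_names[] = { "md5", "sha1", "sha256" };

static void list_hashes(void)
{
	int i;

	/* before */
	for (i = 0; i < sizeof(hash_names) / sizeof(const char *); i++)
		do_something(hash_names[i]);

	/* after */
	for (i = 0; i < ARRAY_SIZE(hash_names); i++)
		do_something(hash_names[i]);
}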
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The bnx2x driver actually uses the crc32c_le name so this patch
restores the crc32c_le symbol through a macro.
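Presumably something along these lines in the crc32c header (a sketch, not
necessarily the exact hunk):

/* keep the old name available as an alias for the new one */
#define crc32c_le(seed, data, length)	crc32c(seed, data, length)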
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch swaps the role of libcrc32c and crc32c. Previously
the implementation was in libcrc32c and crc32c was a wrapper.
Now the code is in crc32c and libcrc32c just calls the crypto
layer.
The reason for the change is to tap into the algorithm selection
capability of the crypto API so that optimised implementations
such as the one utilising Intel's CRC32C instruction can be
used where available.
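Roughly speaking, libcrc32c now just drives a "crc32c" transform allocated
from the crypto API; a simplified sketch of the wrapper (allocation, error
handling and module init/exit glue omitted):

#include <crypto/hash.h>

static struct crypto_shash *tfm;	/* crypto_alloc_shash("crc32c", 0, 0) */

u32 crc32c(u32 crc, const void *address, unsigned int length)
{
	struct {
		struct shash_desc shash;
		char ctx[4];	/* crc32c keeps its state in 4 bytes */
	} desc;

	desc.shash.tfm = tfm;
	*(u32 *)desc.ctx = crc;
	crypto_shash_update(&desc.shash, address, length);
	return *(u32 *)desc.ctx;
}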
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a test for the requirement that all crc32c algorithms
shall store the partial result in the first four bytes of the descriptor
context.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes /proc/crypto call the type-specific show function
if one is present before calling the legacy show functions for
cipher/digest/compress. This allows us to reuse the type values
for those legacy types. In particular, hash and digest will share
one type value while shash is phased in as the default hash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
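Sketched against the shash flavour of the interface, usage looks roughly
like this (desc is an already set-up struct shash_desc; STATE_SIZE, key_pad
and friends are illustrative; state sizing and error handling are the
caller's problem):

char state[STATE_SIZE];

crypto_shash_init(desc);
crypto_shash_update(desc, key_pad, block_size);	/* the common base */
crypto_shash_export(desc, state);		/* computed only once */

/* later, once per message: */
crypto_shash_import(desc, state);
crypto_shash_finup(desc, msg, msg_len, digest);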
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the ahash
interface. This is required before we can convert digest algorithms
over to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.
The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.
All existing hash users will be converted to shash once the
algorithms have been completely converted.
There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.
This is also the first time that an algorithm type has its own
registration function. Existing algorithm types will be converted
to this scheme in due course.
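Usage is along these lines (a sketch; the caller allocates the descriptor
based on crypto_shash_descsize() and may share the tfm between threads):

struct crypto_shash *tfm = crypto_alloc_shash("sha1", 0, 0);
struct shash_desc *desc;

desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
desc->tfm = tfm;

crypto_shash_init(desc);		/* all hashing state lives in desc */
crypto_shash_update(desc, data, len);
crypto_shash_final(desc, digest);

/* or fold the last update into final: */
crypto_shash_finup(desc, data, len, digest);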
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reintroduces a completely revamped crypto_alloc_tfm.
The biggest change is that we now take two crypto_type objects
when allocating a tfm, a frontend and a backend. In fact this
simply formalises what we've been doing behind the API's back.
For example, as it stands crypto_alloc_ahash may use an
actual ahash algorithm or a crypto_hash algorithm. Putting
this in the API allows us to do this much more cleanly.
The existing types will be converted across gradually.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type exit function needs to undo any allocations done by the type
init function. However, the type init function may differ depending
on the upper-level type of the transform (e.g., a crypto_blkcipher
instantiated as a crypto_ablkcipher).
So we need to move the exit function out of the lower-level
structure and into crypto_tfm itself.
As it stands this is a no-op since nobody uses exit functions at
all. However, all cases where a lower-level type is instantiated
as a different upper-level type (such as blkcipher as ablkcipher)
will be converted such that they allocate the underlying transform
and use that instead of casting (e.g., a crypto_ablkcipher cast
into crypto_blkcipher). That will need to use a different exit
function depending on the upper-level type.
This patch also allows the type init/exit functions to call (or not)
cra_init/cra_exit instead of always calling them from the top level.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a patch that was sent to me by Jarod Wilson, marking off my
outstanding todo to allow the ansi cprng to set/reset the DT counter value in a
cprng instance. Currently crypto_rng_reset accepts a seed byte array which is
interpreted by the ansi_cprng as a {V key} tuple. This patch extends that tuple
to now be {V key DT}, with DT an optional value during reset. This patch also
fixes a bug we noticed in which the key area of the seed was taken at
offset DEFAULT_PRNG_KSZ rather than DEFAULT_BLK_SZ, as it should be.
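So the seed is now laid out as V, then the key, then an optional DT block;
a sketch of how reset slices it up (offsets only, slen being the seed
length; not the literal patch):

/* seed: | V (DEFAULT_BLK_SZ) | key (DEFAULT_PRNG_KSZ) | DT (optional) | */
const u8 *V   = seed;
const u8 *key = seed + DEFAULT_BLK_SZ;	/* was wrongly +DEFAULT_PRNG_KSZ */
const u8 *dt  = NULL;

if (slen >= DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
	dt = seed + DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ;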
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Resetting the control word is quite expensive. Fortunately this
isn't an issue for the common operations such as CBC and ECB as
the whole operation is done through a single call. However, modes
such as LRW and XTS have to call padlock over and over again for
one operation which really hurts if each call resets the control
word.
This patch uses an idea by Sebastian Siewior to store the last
control word used on a CPU and only reset the control word if
that changes.
Note that any task switch automatically resets the control word
so we only need to be accurate with regard to the stored control
word when no task switches occur.
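The mechanism, roughly (a sketch of the idea rather than the exact patch):
keep a per-CPU pointer to the last control word loaded, and only force a
reload when a different one is about to be used.

static DEFINE_PER_CPU(struct cword *, last_cword);

static void padlock_reset_key(struct cword *cword)
{
	/* reload only if this CPU last used a different control word */
	if (cword != per_cpu(last_cword, raw_smp_processor_id()))
		asm volatile ("pushfl; popfl");	/* 32-bit form; forces reload */
}

static void padlock_store_cword(struct cword *cword)
{
	per_cpu(last_cword, raw_smp_processor_id()) = cword;
}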
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The original copyright header for crc32c-intel.c is incorrect. Please merge
the patch to update it.
Signed-Off-By: Kent Liu <kent.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In commit ec6644d632 "crypto: talitos - Preempt
overflow interrupts", the test in atomic_inc_not_zero was interpreted by the
author as being applied after the increment operation rather than before.
This off-by-one fix prevents overflow error interrupts from occurring when
requests are frequent and large enough to trigger them.
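For reference, atomic_inc_not_zero() applies its test to the value before
the increment; it behaves as if it were (atomically):

/* non-atomic illustration of the semantics only */
static inline int inc_not_zero(atomic_t *v)
{
	if (atomic_read(v) == 0)
		return 0;	/* tested before any increment happens */
	atomic_inc(v);
	return 1;
}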
Signed-off-by: Vishnu Suresh <Vishnu@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the private implementation of 32-bit rotation and unaligned
access with byteswapping.
As a bonus, fixes sparse warnings:
crypto/camellia.c:602:2: warning: cast to restricted __be32
crypto/camellia.c:603:2: warning: cast to restricted __be32
crypto/camellia.c:604:2: warning: cast to restricted __be32
crypto/camellia.c:605:2: warning: cast to restricted __be32
crypto/camellia.c:710:2: warning: cast to restricted __be32
crypto/camellia.c:711:2: warning: cast to restricted __be32
crypto/camellia.c:712:2: warning: cast to restricted __be32
crypto/camellia.c:713:2: warning: cast to restricted __be32
crypto/camellia.c:714:2: warning: cast to restricted __be32
crypto/camellia.c:715:2: warning: cast to restricted __be32
crypto/camellia.c:716:2: warning: cast to restricted __be32
crypto/camellia.c:717:2: warning: cast to restricted __be32
[Thanks to Tomoyuki Okazaki for spotting the typo]
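The private macros are replaced by the generic helpers, i.e. rol32() from
<linux/bitops.h> and get_unaligned_be32()/put_unaligned_be32() from
<asm/unaligned.h>; for example (function and variable names are
illustrative):

#include <linux/bitops.h>
#include <asm/unaligned.h>

static u32 swab_rotate_example(const u8 *in, u8 *out)
{
	u32 word = get_unaligned_be32(in);	/* unaligned load + byteswap */
	u32 rot  = rol32(word, 8);		/* 32-bit rotate left */

	put_unaligned_be32(rot, out);		/* byteswap + unaligned store */
	return rot;
}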
Tested-by: Carlo E. Prelz <fluido@fluido.as>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The FIPS specification requires that, should the self-test for any supported
crypto algorithm fail during operation in FIPS mode, we prevent the use of
any crypto functionality until the system can be re-initialized. The best
way to handle that seems to be to panic the system if we are in FIPS mode
and fail a self-test. This patch implements that functionality. I've built
and run it successfully.
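In outline (a sketch of the approach, not the exact hunk), the test manager
does something like this after running an algorithm's self-test:

/* rc is the self-test result; driver/alg name the algorithm under test */
if (rc && fips_enabled)
	panic("%s: %s alg self test failed in fips mode!\n", driver, alg);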
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
SEC versions 2.1 and above add the capability to do the IPSec ICV
memcmp in h/w. Results of the compare are written back in the descriptor
header, along with the done status. A new callback is added that
checks these ICCR bits instead of performing the memcmp on the core,
and is enabled by h/w capability.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
After testing on different parts, another condition was added before using
the h/w auth check, because different SEC revisions require different
handling. The SEC 3.0 allows a more flexible link table where the auth data
can span separate link table entries; the SEC 2.4/2.1 does not support this
case. So a test was added in the decrypt routine for the fragmented case:
the h/w auth check is disallowed for revisions that do not have the extent
in the link table, and in that case the auth check is done in software.
A portion of a previous change for SEC 3.0 link table handling was removed,
since it became dead code once the h/w auth check was supported.
This seems to be the best compromise for using the h/w auth check on the
SEC revisions that support it; it keeps the link table logic simpler for
the fragmented cases.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In talitos_interrupt, upon one done interrupt, mask further done interrupts,
and ack only any error interrupt.
In talitos_done, unmask done interrupts after completing processing.
In flush_channel, ack each done channel processed.
Keep done overflow interrupts masked because even though each pkt
is ack'ed, a few done overflows still occur.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since we ack early, the re-read interrupt status in talitos_error may
already have been updated with a new value. Pass the error ISR value
directly in order to report and handle the error based on the correct
error status.
Also remove unused error tasklet.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Tue, Sep 23, 2008 at 08:06:32PM +0200, Dimitri Puzin (max@psycast.de) wrote:
> With this patch applied it still doesn't work as expected. The overflow
> messages are gone however syslog shows
> [ 120.924266] hifn0: abort: c: 0, s: 1, d: 0, r: 0.
> when doing cryptsetup luksFormat as in original e-mail. At this point
> cryptsetup hangs and can't be killed with -SIGKILL. I've attached
> SysRq-t dump of this condition.
Yes, I was wrong with the patch: HIFN does not support 64-bit addresses
afaics.
The attached patch prevents HIFN from being registered on a 64-bit arch, so
the crypto layer will fall back to the software algorithms.
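One way to do that (a sketch, assuming the descriptors can only hold 32-bit
DMA addresses, and with hifn_pci_driver standing for the driver's struct
pci_driver) is to refuse to load whenever dma_addr_t is wider than 32 bits:

static int __init hifn_init(void)
{
	if (sizeof(dma_addr_t) > 4) {
		printk(KERN_INFO "HIFN supports only 32-bit addresses.\n");
		return -EINVAL;
	}

	return pci_register_driver(&hifn_pci_driver);
}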
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There's a new ptrace arch-level feature in .28:
config X86_PTRACE_BTS
bool "Branch Trace Store"
it has broken fork() handling: the old DS area gets copied over into
a new task without clearing it.
Fixes exist but they came too late:
c5dee61: x86, bts: memory accounting
bf53de9: x86, bts: add fork and exit handling
and are queued up for v2.6.29. This shows that the facility is still not
tested well enough to release into a stable kernel - disable it for now and
reactivate in .29. In .29 the hardware-branch-tracer will use the DS/BTS
facilities too - hopefully resulting in better code.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
flush_tlb_mm's "optimized" uniprocessor case of allocating a new
context for userspace is exposing a race where we can suddenly return
to a syscall with the protection id and space id out of sync, trapping
on the next userspace access.
Debugged-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Tested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When deleting an edac device, we have to wait for its edac_dev.work to be
completed before deleting the whole edac_dev structure. Since we have no
idea which work in the current edac_poller's workqueue is the work we are
concerned about, we wait for all work in the edac_poller's workqueue to be
processed. This is done via flush_cpu_workqueue(), which inserts a
wq_barrier at the tail of the workqueue and then sleeps on the completion
of this wq_barrier. The edac_poller will wake up the sleepers when the
barrier is found.
The EDAC core creates only one kernel worker thread, edac_poller, to run
the works of all current edac devices. They share the same callback
function, edac_device_workq_function(), which grabs the device_ctls_mutex
mutex before it checks the device. This is exactly where edac_poller and
rmmod have a great chance to deadlock.
In the call trace below, rmmod > ... > edac_device_del_device >
edac_device_workq_teardown > flush_workqueue > flush_cpu_workqueue,
device_ctls_mutex has already been grabbed by edac_device_del_device().
So, on one hand, rmmod sleeps on the completion of a wq_barrier while
holding device_ctls_mutex; on the other hand, edac_poller is blocked on
the same mutex when it runs any one of the works of the existing edac
devices (note that this edac_dev.work is likely to be totally unrelated to
the one being removed right now), and it never gets a chance to run the
work of the above wq_barrier to wake rmmod up.
edac_device_workq_teardown() should not be called within the critical
region of device_ctls_mutex, just as is done in edac_pci_del_device()
and edac_mc_del_mc(), where edac_pci_workq_teardown() and
edac_mc_workq_teardown() are called after the related mutexes are released.
Moreover, an edac_dev.work should first check whether its device is being
removed and, if so, bail out immediately. Since not all existing edac
devices are being removed, this "shutting down" flag should be confined to
the edac device being removed. The current edac_dev.op_state can be used
to serve this purpose.
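Sketched out (using the function and field names referenced above, not the
literal diff), the fix looks like this:

/* edac_device_del_device(): release the mutex before the teardown */
mutex_lock(&device_ctls_mutex);
edac_dev = find_edac_device_by_dev(dev);
if (edac_dev)
	del_edac_device_from_global_list(edac_dev);
mutex_unlock(&device_ctls_mutex);

if (edac_dev)
	edac_device_workq_teardown(edac_dev);	/* may now sleep safely */

/* edac_device_workq_function(): bail out if the device is going away */
if (edac_dev->op_state != OP_RUNNING_POLL)
	return;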
The original deadlock problem and the solution have been witnessed and
tested on actual hardware. Without the solution, rmmod'ing an edac driver
results in the deadlock below:
root@localhost:/root> rmmod mv64x60_edac
EDAC DEBUG: mv64x60_dma_err_remove()
EDAC DEBUG: edac_device_del_device()
EDAC DEBUG: find_edac_device_by_dev()
(hang for a moment)
INFO: task edac-poller:2030 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
edac-poller D 00000000 0 2030 2
Call Trace:
[df159dc0] [c0071e3c] free_hot_cold_page+0x17c/0x304 (unreliable)
[df159e80] [c000a024] __switch_to+0x6c/0xa0
[df159ea0] [c03587d8] schedule+0x2f4/0x4d8
[df159f00] [c03598a8] __mutex_lock_slowpath+0xa0/0x174
[df159f40] [e1030434] edac_device_workq_function+0x28/0xd8 [edac_core]
[df159f60] [c003beb4] run_workqueue+0x114/0x218
[df159f90] [c003c674] worker_thread+0x5c/0xc8
[df159fd0] [c004106c] kthread+0x5c/0xa0
[df159ff0] [c0013538] original_kernel_thread+0x44/0x60
INFO: task rmmod:2062 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rmmod D 0ff2c9fc 0 2062 1839
Call Trace:
[df119c00] [c0437a74] 0xc0437a74 (unreliable)
[df119cc0] [c000a024] __switch_to+0x6c/0xa0
[df119ce0] [c03587d8] schedule+0x2f4/0x4d8
[df119d40] [c03591dc] schedule_timeout+0xb0/0xf4
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If cgroup_get_rootdir() fails, free_cg_links() will be called in the
failure path, but tmp_cg_links hasn't been initialized at that point.
I introduced this bug in the 2.6.27 merge window.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
During testing of the w1-gpio driver I found that in "w1.c:679
w1_slave_found()" the device id is converted to little-endian with
cpu_to_le64(), but it's not converted back to cpu format in "w1_io.c:293
w1_reset_select_slave()".
Based on a patch created by Andreas Hummel.
[akpm@linux-foundation.org: remove unneeded cast]
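The fix is essentially to undo that conversion before the id is used,
roughly (names as referenced above; the exact hunk may differ):

/* w1_reset_select_slave(): convert the stored LE id back to cpu
 * order before it is written out on the bus */
u64 rn = le64_to_cpu(*(u64 *)&sl->reg_num);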
Reported-by: Andreas Hummel <andi_hummel@gmx.de>
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch for the rtc-isl1208 driver makes it reject invalid dates.
Signed-off-by: Chris Elston <celston@katalix.com>
[a.zummo@towertech.it: added comment explaining the check]
Signed-off-by: Alessandro Zummo <a.zummo@towertech.it>
Cc: Hebert Valerio Riedel <hvr@gnu.org>
Cc: David Brownell <david-b@pacbell.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix a NULL pointer dereference that would occur if the video decoder tied
to the em28xx supports the VIDIOC_INT_RESET call (for example, the cx25840
driver).
Signed-off-by: Devin Heitmueller <dheitmueller@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
This check was introduced with the logic the wrong way around.
Fixes regression: http://bugzilla.kernel.org/show_bug.cgi?id=12216
Tested-by: François Valenduc <francois.valenduc@tvcablenet.be>
Signed-off-by: Dave Airlie <airlied@redhat.com>
In each case, if the NULL test is necessary, then the dereference should be
moved below the NULL test.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
type T;
expression E;
identifier i,fld;
statement S;
@@
- T i = E->fld;
+ T i;
... when != E
when != i
if (E == NULL) S
+ i = E->fld;
// </smpl>
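A concrete (made-up) example of the transformation:

/* before: hwif is initialized from a dereference above the NULL check */
ide_hwif_t *hwif = drive->hwif;
if (drive == NULL)
	return;
do_stuff(hwif);

/* after: the dereference happens only once drive is known to be valid */
ide_hwif_t *hwif;
if (drive == NULL)
	return;
hwif = drive->hwif;
do_stuff(hwif);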
Signed-off-by: Julia Lawall <julia@diku.dk>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>