Use skcipher_givcrypt_cast(crypto_dequeue_request(queue)) instead, which
does the same thing in a much cleaner way. skcipher_givcrypt_cast()
also uses container_of() instead of messing around with offsetof().
Signed-off-by: Marek Vasut <marex@denx.de>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: Pantelis Antoniou <panto@antoniou-consulting.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
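For illustration, a minimal sketch of the resulting pattern, assuming the
crypto queue API of that era; dcp_next_request() is a hypothetical driver
helper, not code from the patch:

#include <crypto/internal/skcipher.h>
#include <linux/crypto.h>

/* Sketch: dequeue the next queued request and convert it to a
 * givcrypt request. skcipher_givcrypt_cast() does the conversion
 * with container_of() rather than open-coded offsetof() arithmetic. */
static struct skcipher_givcrypt_request *
dcp_next_request(struct crypto_queue *queue)
{
	struct crypto_async_request *areq;

	areq = crypto_dequeue_request(queue);
	if (!areq)
		return NULL;

	return skcipher_givcrypt_cast(areq);
}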
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
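A rough sketch of how an asynchronous driver might consume the new walk,
assuming the interface described above; drv_hash_block() is hypothetical:

#include <crypto/internal/hash.h>

/* Hypothetical driver hook that hashes one mapped span. */
static void drv_hash_block(const void *data, unsigned int len);

/* Sketch: walk the request's scatterlist. crypto_ahash_walk_first()
 * maps pages with kmap() rather than kmap_atomic(), so the driver
 * may sleep while a span is mapped. */
static int drv_update(struct ahash_request *req)
{
	struct crypto_hash_walk walk;
	int nbytes;

	for (nbytes = crypto_ahash_walk_first(req, &walk); nbytes > 0;
	     nbytes = crypto_hash_walk_done(&walk, 0))
		drv_hash_block(walk.data, nbytes);

	return nbytes;
}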
Add crypto_[un]register_shashes() to allow simplifying the init/exit code of
shash crypto modules that register multiple algorithm entries at once.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
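A sketch of the simplification this enables; my_algs[] stands in for a
module's real shash_alg array:

#include <crypto/internal/hash.h>
#include <linux/kernel.h>
#include <linux/module.h>

static struct shash_alg my_algs[2];	/* filled in elsewhere */

/* Sketch: one call registers the whole array (and unwinds on failure),
 * so the module init/exit paths need no per-algorithm loop. */
static int __init my_mod_init(void)
{
	return crypto_register_shashes(my_algs, ARRAY_SIZE(my_algs));
}

static void __exit my_mod_exit(void)
{
	crypto_unregister_shashes(my_algs, ARRAY_SIZE(my_algs));
}

module_init(my_mod_init);
module_exit(my_mod_exit);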
We lookup algorithms with crypto_alg_mod_lookup() when instantiating via
crypto_add_alg(). However, algorithms that are wrapped by an IV generator
(e.g. aead or genicv type algorithms) need special care. The userspace
process hangs until it gets a timeout when we use crypto_alg_mod_lookup()
to lookup these algorithms. So export the lookup functions for these
algorithms and use them in crypto_add_alg().
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
crypto: sha-s390 - Fix warnings in import function
crypto: vmac - New hash algorithm for intel_txt support
crypto: api - Do not displace newly registered algorithms
crypto: ansi_cprng - Fix module initialization
crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
crypto: fips - Depend on ansi_cprng
crypto: blkcipher - Do not use eseqiv on stream ciphers
crypto: ctr - Use chainiv on raw counter mode
Revert crypto: fips - Select CPRNG
crypto: rng - Fix typo
crypto: talitos - add support for 36 bit addressing
crypto: talitos - align locks on cache lines
crypto: talitos - simplify hmac data size calculation
crypto: mv_cesa - Add support for Orion5X crypto engine
crypto: cryptd - Add support to access underlaying shash
crypto: gcm - Use GHASH digest algorithm
crypto: ghash - Add GHASH digest algorithm for GCM
crypto: authenc - Convert to ahash
crypto: api - Fix aligned ctx helper
crypto: hmac - Prehash ipad/opad
...
This patch exports the finup operation where available and adds a
default finup operation for ahash. The final, finup and digest
operations will now also deal with unaligned result pointers by
copying the result. Finally, the export/import operations will now
be exported too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
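For illustration, a hedged sketch of a caller using finup; synchronous
completion is assumed, and a real caller would handle -EINPROGRESS/-EBUSY
via the request's completion callback:

#include <crypto/hash.h>

/* Sketch: hash one scatterlist with init + finup, where finup combines
 * the last update with final. If a driver lacks a native finup, the
 * new default implementation falls back to update + final. */
static int one_shot_finup(struct crypto_ahash *tfm, struct scatterlist *sg,
			  unsigned int len, u8 *out)
{
	struct ahash_request *req;
	int err;

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	ahash_request_set_crypt(req, sg, out, len);

	err = crypto_ahash_init(req);
	if (!err)
		err = crypto_ahash_finup(req);

	ahash_request_free(req);
	return err;
}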
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes crypto4xx to use the new style ahash type.
In particular, we now use ahash_alg to define ahash algorithms
instead of crypto_alg.
This is achieved by introducing a union that encapsulates the
new type and the existing crypto_alg structure. They're told
apart through a u32 field containing the type value.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
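The union described above might look roughly like this; the struct and
function names are illustrative, not necessarily those used by crypto4xx:

#include <crypto/internal/hash.h>
#include <linux/crypto.h>

/* Sketch: one registration-time container for both algorithm kinds,
 * told apart by the u32 type value. */
struct drv_alg_common {
	u32 type;
	union {
		struct crypto_alg cipher;
		struct ahash_alg hash;
	} u;
};

static int drv_register_alg(struct drv_alg_common *alg)
{
	if ((alg->type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
		return crypto_register_ahash(&alg->u.hash);

	return crypto_register_alg(&alg->u.cipher);
}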
This patch adds the helpers crypto_drop_ahash and crypto_drop_shash
so that these spawns can be dropped without ugly casts.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for creating ahash instances and using
ahash as spawns.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_ahash_set_reqsize so that
implementations do not directly access the crypto_ahash structure.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
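A sketch of typical driver usage, with a hypothetical per-request context:

#include <crypto/internal/hash.h>

struct drv_req_ctx {			/* hypothetical driver state */
	u8 partial[64];
	unsigned int len;
};

/* Sketch: the tfm init hook declares its request context size via the
 * helper instead of writing to crypto_ahash fields directly. */
static int drv_cra_init(struct crypto_tfm *tfm)
{
	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct drv_req_ctx));
	return 0;
}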
This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
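One consequence, sketched below: callers must size descriptors from the tfm
they hold at run time rather than from a compile-time constant:

#include <crypto/hash.h>
#include <linux/slab.h>

/* Sketch: allocate a descriptor sized for this particular tfm, since
 * descsize may now be set by the algorithm's init function. */
static struct shash_desc *alloc_desc(struct crypto_shash *tfm)
{
	struct shash_desc *desc;

	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
		       GFP_KERNEL);
	if (desc)
		desc->tfm = tfm;

	return desc;
}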
This patch adds the helper shash_instance_ctx which is the shash
analogue of crypto_instance_ctx.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds __crypto_shash_cast, which turns a crypto_tfm
into a crypto_shash. It's analogous to the other __crypto_*_cast
functions.
It hasn't been needed until now since no existing shash algorithms
have had an init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds crypto_shash_ctx_aligned which will be needed
by hmac after its conversion to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_instance and the associated alloc/free
functions. This is meant to be an instance with a shash algorithm
under it. Note that the instance itself doesn't have to be shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As struct skcipher_givcrypt_request includes struct crypto_async_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work. This can result
in IPsec crashes when the queue is depleted.
This patch fixes it by doing the pointer conversion only when the
return value is non-NULL. In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.
Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
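The shape of the fix, in a simplified sketch (my_dequeue() is illustrative,
not the patch itself):

#include <linux/crypto.h>

/* Sketch: the offset that converts the embedded crypto_async_request
 * back into its containing structure is applied only to a non-NULL
 * pointer, so an empty queue yields NULL instead of a bogus address. */
static void *my_dequeue(struct crypto_queue *queue, unsigned int offset)
{
	struct crypto_async_request *req = crypto_dequeue_request(queue);

	if (!req)
		return NULL;

	return (char *)req - offset;
}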
The current "comp" crypto interface supports one-shot (de)compression only,
i.e. the whole data buffer to be (de)compressed must be passed at once, and
the whole (de)compressed data buffer will be received at once.
In several use-cases (e.g. compressed file systems that store files in big
compressed blocks), this workflow is not suitable.
Furthermore, the "comp" type doesn't provide for the configuration of
(de)compression parameters, and always allocates workspace memory for both
compression and decompression, which may waste memory.
To solve this, add a "pcomp" partial (de)compression interface that provides
the following operations:
- crypto_compress_{init,update,final}() for compression,
- crypto_decompress_{init,update,final}() for decompression,
- crypto_{,de}compress_setup(), to configure (de)compression parameters
(incl. allocating workspace memory).
The (de)compression methods take a struct comp_request, which is modeled
after the z_stream object in zlib and contains buffer pointer and length
pairs for input and output.
The setup methods take an opaque parameter pointer and length pair. Parameters
are supposed to be encoded using netlink attributes, whose meanings depend on
the actual (name of the) (de)compression algorithm.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
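A hedged sketch of the streaming flow, assuming the comp_request layout
described above; setup parameters, buffers and error handling are elided or
hypothetical:

#include <crypto/compress.h>

/* Sketch: one init/update/final pass over a single buffer pair.
 * A real user would call crypto_compress_setup() first and loop on
 * update as input is consumed and output space fills up. */
static int stream_compress(struct crypto_pcomp *tfm,
			   const void *in, unsigned int inlen,
			   void *out, unsigned int outlen)
{
	struct comp_request req = {
		.next_in   = in,
		.avail_in  = inlen,
		.next_out  = out,
		.avail_out = outlen,
	};
	int err;

	err = crypto_compress_init(tfm);
	if (err)
		return err;

	err = crypto_compress_update(tfm, &req);
	if (err < 0)
		return err;

	return crypto_compress_final(tfm, &req);
}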
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
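A sketch of the intended usage pattern with shash import (descriptor setup
and the exported state buffer are assumed to exist):

#include <crypto/hash.h>

/* Sketch: 'state' holds a previously exported partial state, e.g. an
 * HMAC base computed once from the key. Each message restarts from
 * that base instead of rehashing the key material. */
static int hash_from_saved_base(struct shash_desc *desc, const void *state,
				const u8 *msg, unsigned int len, u8 *out)
{
	int err;

	err = crypto_shash_import(desc, state);
	if (err)
		return err;

	err = crypto_shash_update(desc, msg, len);
	if (err)
		return err;

	return crypto_shash_final(desc, out);
}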
The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.
The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.
All existing hash users will be converted to shash once the
algorithms have been completely converted.
There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.
This is also the first time that an algorithm type has its own
registration function. Existing algorithm types will be converted
to this way in due course.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
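For illustration, a minimal sketch of the descriptor-based flow; the
fixed-size descriptor buffer is an assumption made for brevity (real
callers size it from crypto_shash_descsize()):

#include <crypto/hash.h>

/* Sketch: all hashing state lives in the descriptor, so one tfm may
 * serve several threads, each with its own desc. No scatterlists:
 * shash takes plain pointers. */
static int shash_digest_buf(struct crypto_shash *tfm,
			    const u8 *buf, unsigned int len, u8 *out)
{
	char raw[sizeof(struct shash_desc) + 128];	/* assumes descsize <= 128 */
	struct shash_desc *desc = (struct shash_desc *)raw;
	int err;

	desc->tfm = tfm;

	err = crypto_shash_init(desc);
	if (!err)
		err = crypto_shash_update(desc, buf, len);
	if (!err)
		err = crypto_shash_final(desc, out);	/* or finup in one step */

	return err;
}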
This patch adds a random number generator interface as well as a
cryptographic pseudo-random number generator based on AES. It is
meant to be used in cases where a deterministic CPRNG is required.
One of the first applications will be as an input in the IPsec IV
generation process.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
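A sketch of the consumer side of the new interface ("stdrng" resolving to
the AES-based CPRNG is an assumption that depends on configuration):

#include <crypto/rng.h>
#include <linux/err.h>

/* Sketch: allocate the default RNG, seed it, pull out some bytes. */
static int fill_random(u8 *buf, unsigned int len,
		       u8 *seed, unsigned int slen)
{
	struct crypto_rng *rng;
	int err;

	rng = crypto_alloc_rng("stdrng", 0, 0);
	if (IS_ERR(rng))
		return PTR_ERR(rng);

	err = crypto_rng_reset(rng, seed, slen);
	if (!err)
		err = crypto_rng_get_bytes(rng, buf, len);

	crypto_free_rng(rng);
	return err;
}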
This patch moves the default IV generators into their own modules
in order to break a dependency loop between cryptomgr, rng, and
blkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All new crypto interfaces should go into individual files as much
as possible in order to ensure that crypto.h does not collapse under
its own weight.
This patch moves the user-facing and internal ahash declarations into
crypto/hash.h and crypto/internal/hash.h respectively.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the walking helpers for hash algorithms akin to
those of block ciphers. This is a necessary step before we can
reimplement existing hash algorithms using the new ahash interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous patch to move chainiv and eseqiv into blkcipher created
a section mismatch for the chainiv exit function which was also called
from __init. This patch removes the __exit marking on it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/eseqiv into the crypto_blkcipher module. Since they're required
by most algorithms anyway this is an acceptable trade-off.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes chainiv avoid spinning by postponing requests on lock
contention if the user allows the use of asynchronous algorithms. If
a synchronous algorithm is requested then we behave as before.
This should improve IPsec performance on SMP when two CPUs attempt to
transmit over the same SA. Currently one of them will spin doing nothing
waiting for the other CPU to finish its encryption. This patch makes it
postpone the request and get on with other work.
If only one CPU is transmitting for a given SA, then we will process
the request synchronously as before.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a null blkcipher algorithm called ecb(cipher_null) for
backwards compatibility. Previously the null algorithm when used by
IPsec copied the data byte by byte. This new algorithm optimises that
to a straight memcpy which lets us better measure inherent overheads in
our IPsec code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_aead always return algorithms that are
capable of generating their own IVs through givencrypt and givdecrypt.
All existing AEAD algorithms already do. New ones must either supply
their own or specify a generic IV generator with the geniv field.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates the infrastructure to help the construction of IV
generator templates that wrap around AEAD algorithms by adding an IV
generator to them. This is useful for AEAD algorithms with no built-in
IV generator or to replace their built-in generator.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt. Each algorithm may specify its default IV
generator through the geniv field.
For algorithms that do not set the geniv field, the blkcipher layer will
pick a default. Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms. Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper skcipher_givcrypt_complete which should be
called when an ablkcipher algorithm has completed a givcrypt request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
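Its likely shape, sketched from the description above (not a verbatim copy
of the patch):

#include <crypto/internal/skcipher.h>

/* Sketch: completing a givcrypt request means completing the
 * ablkcipher request embedded in it, which notifies the caller. */
static inline void my_givcrypt_complete(struct skcipher_givcrypt_request *req,
					int err)
{
	ablkcipher_request_complete(&req->creq, err);
}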
This patch creates the infrastructure to help the construction of givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by adding
an IV generator to them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Different block cipher modes have different requirements for initialisation
vectors. For example, CBC can use a simple randomly generated IV, while
modes such as CTR must use an IV generation mechanism that gives a stronger
guarantee on the lack of collisions. Furthermore, disk encryption modes
have their own IV generation algorithms.
Up until now IV generation has been left to the users of the symmetric
key cipher API. This is inconvenient as the number of block cipher modes
increases, because the user needs to be aware of which mode is supposed to
be paired with which IV generation algorithm.
Therefore it makes sense to integrate IV generation into the crypto
API. This patch takes the first step in that direction by creating two
new ablkcipher operations, givencrypt and givdecrypt, that generate an
IV before performing the actual encryption or decryption.
The operations are currently not exposed to the user. That will be done
once the underlying functionality has actually been implemented.
It also creates the underlying givcipher type. Algorithms that directly
generate IVs would use it instead of ablkcipher. All other algorithms
(including all existing ones) would generate a givcipher algorithm upon
registration. This givcipher algorithm will be constructed from the geniv
string that's stored in every algorithm. That string will locate a template
which is instantiated by the blkcipher/ablkcipher algorithm in question to
give a givcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
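A sketch of what a direct givcipher implementation might register; the
function body and names are illustrative only:

#include <crypto/internal/skcipher.h>

/* Sketch: givencrypt receives a givcrypt request carrying the IV
 * buffer (req->giv) alongside the embedded ablkcipher request. */
static int my_givencrypt(struct skcipher_givcrypt_request *req)
{
	/* generate req->giv here, then encrypt &req->creq */
	return 0;
}

static struct crypto_alg my_alg = {
	/* ... usual crypto_alg fields ... */
	.cra_u.ablkcipher = {
		.givencrypt = my_givencrypt,
		/* .givdecrypt, .geniv, setkey/encrypt/decrypt, ... */
	},
};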
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher. The name blkcipher has
always been something of a misnomer since it supports stream ciphers too.
This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn. The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.
This meant that each spawn user had to be aware of what type and mask to
use for these two steps. This is difficult and also presents a problem
when the type/mask changes as they're about to be for IV generators.
The new interface does both steps together just like crypto_alloc_ablkcipher.
As a side-effect this also allows us to be stronger on type enforcement
for spawns. For now this is only done for ablkcipher but it's trivial
to extend for other types.
This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.
Finally this patch introduces the function crypto_require_sync to determine
whether the user is specifically requesting a sync algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
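A sketch of the one-step grab inside a template's creation path;
grab_underlying() is a hypothetical helper and 'name' would come from the
template parameters:

#include <crypto/internal/skcipher.h>

/* Sketch: associate the spawn with the instance, then look up and
 * initialise it in one call, with type/mask from the new helpers. */
static int grab_underlying(struct crypto_skcipher_spawn *spawn,
			   struct crypto_instance *inst, const char *name)
{
	crypto_set_skcipher_spawn(spawn, inst);
	return crypto_grab_skcipher(spawn, name, crypto_skcipher_type(0),
				    crypto_skcipher_mask(0));
}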