Commit log for arch/arm/crypto
Each entry: commit subject (author, date; files changed, lines -deleted/+added)
* crypto: arm/crc32 - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -0/+6)
  Make the module autoloadable by tying it to the CPU feature bits that
  describe whether the optional instructions it relies on are implemented
  by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
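  A minimal sketch of what such a hook can look like, assuming the generic
  CPU-feature device-table macros from <linux/cpufeature.h>; illustrative,
  not necessarily the commit's exact code:

      /* Expose a CPU feature table so userspace module loading can
       * autoload this driver when the CRC32 and PMULL hwcaps are set. */
      #include <linux/cpufeature.h>
      #include <linux/module.h>

      static const struct cpu_feature crc32_cpu_feature[] = {
              { cpu_feature(CRC32) }, { cpu_feature(PMULL) }, { }
      };
      MODULE_DEVICE_TABLE(cpu, crc32_cpu_feature);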
* crypto: arm/sha2-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -3/+2)
  Make the module autoloadable by tying it to the CPU feature bit that
  describes whether the optional instructions it relies on are implemented
  by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
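  For single-feature drivers like this one, the usual idiom is the
  module_cpu_feature_match() helper, which registers the init function and
  emits the matching device table in one go (sketch; the hwcap name is an
  assumption):

      #include <linux/cpufeature.h>

      /* Autoload and initialize only on CPUs that advertise SHA2. */
      module_cpu_feature_match(SHA2, sha2_ce_mod_init);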
* crypto: arm/sha1-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -3/+2)
  Make the module autoloadable by tying it to the CPU feature bit that
  describes whether the optional instructions it relies on are implemented
  by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/ghash-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -4/+2)
  Make the module autoloadable by tying it to the CPU feature bit that
  describes whether the optional instructions it relies on are implemented
  by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -4/+2)
  Make the module autoloadable by tying it to the CPU feature bit that
  describes whether the optional instructions it relies on are implemented
  by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-neonbs - resolve fallback cipher at runtime (Ard Biesheuvel, 2017-03-09; 2 files, -16/+46)
  Currently, the bit sliced NEON AES code for ARM has a link time
  dependency on the scalar ARM asm implementation, which it uses as a
  fallback to perform CBC encryption and the encryption of the initial
  XTS tweak.

  The bit sliced NEON code is both fast and time invariant, which makes it
  a reasonable default on hardware that supports it. However, the ARM asm
  code it pulls in is not time invariant, and due to the way it is linked
  in, cannot be overridden by the new generic time invariant driver. In
  fact, it will not be used at all, given that the ARM asm code registers
  itself as a cipher with a priority that exceeds the priority of the
  fixed time cipher.

  So remove the link time dependency, and allocate the fallback cipher via
  the crypto API.

  Note that this requires this driver's module_init call to be replaced
  with late_initcall, so that the (possibly generic) fallback cipher is
  guaranteed to be available when the builtin test is performed at
  registration time.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
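  A rough sketch of the runtime resolution described above (the real
  commit allocates per transform rather than at module init; names and
  error handling here are illustrative):

      #include <linux/crypto.h>
      #include <linux/err.h>
      #include <linux/module.h>

      static struct crypto_cipher *aes_fallback;

      static int __init aes_neonbs_mod_init(void)
      {
              /* Resolve "aes" through the crypto API: this can now bind
               * to the generic fixed-time cipher instead of a symbol
               * linked in at build time. */
              aes_fallback = crypto_alloc_cipher("aes", 0, 0);
              if (IS_ERR(aes_fallback))
                      return PTR_ERR(aes_fallback);
              /* ... register the NEON skciphers ... */
              return 0;
      }
      /* late_initcall: runs after any generic fallback has registered,
       * so the self-test at registration time can find it. */
      late_initcall(aes_neonbs_mod_init);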
* crypto: arm/crc32 - add build time test for CRC instruction support (Ard Biesheuvel, 2017-03-01; 1 file, -1/+11)
  The accelerated CRC32 module for ARM may use either the scalar CRC32
  instructions, the NEON 64x64 to 128 bit polynomial multiplication
  (vmull.p64) instruction, or both, depending on what the current CPU
  supports. However, this also requires support in binutils, and as it
  turns out, versions of binutils exist that support the vmull.p64
  instruction but not the crc32 instructions.

  So refactor the Makefile logic so that this module only gets built if
  binutils has support for both.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Jon Hunter <jonathanh@nvidia.com>
  Tested-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crc32 - fix build error with outdated binutils (Ard Biesheuvel, 2017-03-01; 1 file, -1/+1)
  Annotate a vmov instruction with an explicit element size of 32 bits.
  This is inferred by recent toolchains, but apparently, older versions
  need some help figuring this out.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - don't use IV buffer to return final keystream block (Ard Biesheuvel, 2017-02-03; 2 files, -11/+14)
  The ARM bit sliced AES core code uses the IV buffer to pass the final
  keystream block back to the glue code if the input is not a multiple of
  the block size, so that the asm code does not have to deal with anything
  except 16 byte blocks. This is done under the assumption that the
  outgoing IV is meaningless anyway in this case, given that chaining is
  no longer possible under these circumstances.

  However, as it turns out, the CCM driver does expect the IV to retain a
  value that is equal to the original IV except for the counter value, and
  even interprets byte zero as a length indicator, which may result in
  memory corruption if the IV is overwritten with something else.

  So use a separate buffer to return the final keystream block.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
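  In glue code terms, the change amounts to something like the following
  sketch; the helper name and the core's interface are assumptions, not
  the committed code:

      #include <crypto/aes.h>
      #include <crypto/algapi.h>      /* crypto_xor() */
      #include <linux/string.h>

      static void ctr_final(u8 *dst, const u8 *src, unsigned int tail,
                            const u8 final[AES_BLOCK_SIZE])
      {
              /* XOR the residual bytes against the keystream block the
               * asm core returned in a separate buffer, leaving walk.iv
               * untouched for callers such as CCM. */
              memcpy(dst, src, tail);
              crypto_xor(dst, final, tail);
      }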
* crypto: arm/chacha20 - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 1 file, -1/+0)
  Remove the unnecessary alignmask: it is much more efficient to deal with
  the misalignment in the core algorithm than relying on the crypto API to
  copy the data to a suitably aligned buffer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
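  "Dealing with the misalignment in the core" typically means using the
  unaligned accessors rather than letting the API bounce data through an
  aligned buffer; a sketch (illustrative, not the commit's code):

      #include <asm/unaligned.h>
      #include <linux/types.h>

      /* With .cra_alignmask = 0 the driver may see arbitrarily aligned
       * buffers, so load and store words with the unaligned accessors. */
      static inline void xor_word(u8 *dst, const u8 *src, u32 ks)
      {
              put_unaligned_le32(get_unaligned_le32(src) ^ ks, dst);
      }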
* crypto: arm/aes-ce - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 2 files, -52/+47)
  Remove the unnecessary alignmask: it is much more efficient to deal with
  the misalignment in the core algorithm than relying on the crypto API to
  copy the data to a suitably aligned buffer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-neonbs - fix issue with v2.22 and older assembler (Ard Biesheuvel, 2017-01-23; 1 file, -4/+4)
  The GNU assembler for ARM version 2.22 or older fails to infer the
  element size from the vmov instructions, and aborts the build in the
  following way:

    .../aes-neonbs-core.S: Assembler messages:
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1h[1],r10'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1h[0],r9'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1l[1],r8'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1l[0],r7'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2h[1],r10'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2h[0],r9'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2l[1],r8'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2l[0],r7'

  Fix this by setting the element size explicitly, replacing vmov with
  vmov.32.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - avoid reserved 'tt' mnemonic in asm code (Ard Biesheuvel, 2017-01-13; 1 file, -5/+5)
  The ARMv8-M architecture introduces 'tt' and 'ttt' instructions, which
  means we can no longer use 'tt' as a register alias on recent versions
  of binutils for ARM. So replace the alias with 'ttab'.

  Fixes: 81edb4262975 ("crypto: arm/aes - replace scalar AES cipher")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - replace bit-sliced OpenSSL NEON code (Ard Biesheuvel, 2017-01-13; 9 files, -6499/+1429)
  This replaces the unwieldy generated implementation of bit-sliced AES in
  CBC/CTR/XTS modes that originated in the OpenSSL project with a new
  version that is heavily based on the OpenSSL implementation, but has a
  number of advantages over the old version:

  - it does not rely on the scalar AES cipher that also originated in the
    OpenSSL project and contains redundant lookup tables and key schedule
    generation routines (which we already have in crypto/aes_generic)
  - it uses the same expanded key schedule for encryption and decryption,
    reducing the size of the per-key data structure by 1696 bytes
  - it adds an implementation of AES in ECB mode, which can be wrapped by
    other generic chaining mode implementations
  - it moves the handling of corner cases that are non critical to
    performance to the glue layer written in C
  - it was written directly in assembler rather than generated from a
    Perl script

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - replace scalar AES cipher (Ard Biesheuvel, 2017-01-13; 5 files, -119/+256)
  This replaces the scalar AES cipher that originates in the OpenSSL
  project with a new implementation that is ~15% (*) faster (on modern
  cores), and reuses the lookup tables and the key schedule generation
  routines from the generic C implementation (which is usually compiled in
  anyway due to networking and other subsystems depending on it).

  Note that the bit sliced NEON code for AES still depends on the scalar
  cipher that this patch replaces, so it is not removed entirely yet.

  * On Cortex-A57, the cost drops from 17.0 to 14.9 cycles per byte for
    128-bit keys.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
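  The reuse described above presumably comes down to a setkey along these
  lines (a sketch against the generic helper exported from
  crypto/aes_generic.c; not the exact commit):

      #include <crypto/aes.h>
      #include <linux/crypto.h>

      static int aes_arm_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                                unsigned int key_len)
      {
              struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);

              /* Generic key expansion and tables; only the per-block
               * transform itself is hand-written assembler. */
              return crypto_aes_expand_key(ctx, in_key, key_len);
      }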
* crypto: arm/chacha20 - implement NEON version based on SSE3 code (Ard Biesheuvel, 2017-01-13; 4 files, -0/+659)
  This is a straight port to ARM/NEON of the x86 SSE3 implementation of
  the ChaCha20 stream cipher. It uses the new skcipher walksize attribute
  to process the input in strides of 4x the block size.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
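  The walksize attribute mentioned above would be advertised roughly like
  this (fields other than the stride elided; illustrative):

      #include <crypto/chacha20.h>
      #include <crypto/internal/skcipher.h>

      static struct skcipher_alg chacha20_neon_alg = {
              .base.cra_name  = "chacha20",
              .chunksize      = CHACHA20_BLOCK_SIZE,
              /* let the skcipher walk hand the core 4 blocks per step */
              .walksize       = 4 * CHACHA20_BLOCK_SIZE,
              /* ... keysize, ivsize, setkey/encrypt hooks ... */
      };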
* Revert "crypto: arm64/ARM: NEON accelerated ChaCha20"Herbert Xu2016-12-284-667/+0
  This patch reverts the following commits:

    8621caa0d45e731f2e9f5889ff5bb384fcd6e059
    8096667273477e735b0072b11a6d617ccee45e5f

  I should not have applied them because they had already been obsoleted
  by a subsequent patch series. They also cause a build failure because of
  the subsequent commit 9ae433bc79f9.

  Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and...")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha20 - implement NEON version based on SSE3 code (Ard Biesheuvel, 2016-12-27; 4 files, -0/+667)
  This is a straight port to ARM/NEON of the x86 SSE3 implementation of
  the ChaCha20 stream cipher.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crc32 - accelerated support based on x86 SSE implementation (Ard Biesheuvel, 2016-12-07; 4 files, -0/+555)
  This is a combination of the Intel algorithm implemented using SSE and
  PCLMULQDQ instructions from arch/x86/crypto/crc32-pclmul_asm.S, and the
  new CRC32 extensions introduced for both 32-bit and 64-bit ARM in
  version 8 of the architecture. Two versions of the above combo are
  provided, one for CRC32 and one for CRC32C.

  The PMULL/NEON algorithm is faster, but operates on blocks of at least
  64 bytes, and on multiples of 16 bytes only. For the remaining input, or
  for all input on systems that lack the PMULL 64x64->128 instructions,
  the CRC32 instructions will be used.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crct10dif - port x86 SSE implementation to ARM (Ard Biesheuvel, 2016-12-07; 4 files, -0/+535)
  This is a transliteration of the Intel algorithm implemented using SSE
  and PCLMULQDQ instructions that resides in the file
  arch/x86/crypto/crct10dif-pcl-asm_64.S, but simplified to only operate
  on buffers that are 16 byte aligned (but of any size).

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes-ce - Make aes_simd_algs static (Herbert Xu, 2016-12-01; 1 file, -1/+1)
  The variable aes_simd_algs should be static. In fact if it isn't it
  causes build errors when multiple copies of aes-ce-glue.c are built into
  the kernel.

  Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
  Reported-by: kbuild test robot <fengguang.wu@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aesbs - fix brokenness after skcipher conversion (Ard Biesheuvel, 2016-11-30; 1 file, -1/+1)
  The CBC encryption routine should use the encryption round keys, not the
  decryption round keys.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - Add missing SIMD select for aesbs (Herbert Xu, 2016-11-30; 1 file, -3/+3)
  This patch adds one more missing SIMD select for AES_ARM_BS. It also
  changes selects on ALGAPI to BLKCIPHER.

  Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
  Reported-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - Select SIMD in Kconfig (Herbert Xu, 2016-11-29; 1 file, -1/+1)
  The skcipher conversion for ARM missed the select on CRYPTO_SIMD,
  causing build failures if SIMD was not otherwise enabled.

  Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
  Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aesbs - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -228/+152)
  This patch converts aesbs over to the skcipher interface.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes-ce - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -233/+157)
  This patch converts aes-ce over to the skcipher interface.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -0/+5)
  The AES key schedule generation is mostly endian agnostic, with the
  exception of the rotation and the incorporation of the round constant at
  the start of each round. So implement a big endian specific version of
  that part to make the whole routine big endian compatible.

  Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using
  ARMv8 Crypto Extensions")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Herbert Xu, 2016-10-10; 1 file, -1/+1)
  Merge the crypto tree to pull in vmx ghash fix.
* crypto: arm/aes-ctr - fix NULL dereference in tail processing (Ard Biesheuvel, 2016-09-13; 1 file, -1/+1)
  The AES-CTR glue code avoids calling into the blkcipher API for the tail
  portion of the walk, by comparing the remainder of walk.nbytes modulo
  AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the
  tail processing block if they are equal. This tail processing block
  checks whether nbytes != 0, and does nothing otherwise.

  However, in case of an allocation failure in the blkcipher layer, we may
  enter this code with walk.nbytes == 0, while nbytes > 0. In this case,
  we should not dereference the source and destination pointers, since
  they may be NULL. So instead of checking for nbytes != 0, check for
  (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
  non-error conditions.

  Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using
  ARMv8 Crypto Extensions")
  Cc: stable@vger.kernel.org
  Reported-by: xiakaixu <xiakaixu@huawei.com>
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
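  The corrected check, as the message describes it, in sketch form (the
  surrounding walk logic is elided):

      /* Only touch the walk pointers when a genuine partial block
       * remains; on allocation failure walk.nbytes may be 0 even
       * though nbytes > 0. */
      if (walk.nbytes % AES_BLOCK_SIZE) {
              u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
              u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;

              /* ... encrypt the final partial block tsrc -> tdst ... */
      }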
* crypto: arm/ghash - change internal cra_name to "__ghash" (Ard Biesheuvel, 2016-09-07; 1 file, -1/+1)
  The fact that the internal synchronous hash implementation is called
  "ghash", like the publicly visible one, is causing the testmgr code to
  misidentify it as an algorithm that requires testing at boot time. So
  rename it to "__ghash" to prevent this.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/ghash-ce - add missing async import/export (Ard Biesheuvel, 2016-09-07; 1 file, -0/+24)
  Since commit 8996eafdcbad ("crypto: ahash - ensure statesize is
  non-zero"), all ahash drivers are required to implement
  import()/export(), and must have a non-zero statesize. Fix this for the
  ARM Crypto Extensions GHASH implementation.

  Fixes: 8996eafdcbad ("crypto: ahash - ensure statesize is non-zero")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
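  A sketch of the missing pair, modeled on how such async wrappers usually
  delegate to their inner synchronous hash (helpers from <crypto/cryptd.h>;
  details such as re-binding the inner tfm are omitted):

      #include <crypto/cryptd.h>
      #include <crypto/internal/hash.h>

      static int ghash_async_export(struct ahash_request *req, void *out)
      {
              struct ahash_request *cryptd_req = ahash_request_ctx(req);
              struct shash_desc *desc = cryptd_shash_desc(cryptd_req);

              return crypto_shash_export(desc, out);  /* inner state out */
      }

      static int ghash_async_import(struct ahash_request *req, const void *in)
      {
              struct ahash_request *cryptd_req = ahash_request_ctx(req);
              struct shash_desc *desc = cryptd_shash_desc(cryptd_req);

              return crypto_shash_import(desc, in);   /* and back in */
      }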
* crypto: arm/sha1-neon - add support for building in Thumb2 mode (Ard Biesheuvel, 2016-09-07; 1 file, -1/+0)
  The ARMv7 NEON module is explicitly built in ARM mode, which is not
  supported by the Thumb2 kernel. So remove the explicit override, and
  leave it up to the build environment to decide whether the core SHA1
  routines are assembled as ARM or as Thumb2 code.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ghash-ce - Fix cryptd reordering (Herbert Xu, 2016-06-23; 1 file, -23/+17)
  This patch fixes an old bug where requests can be reordered because some
  are processed by cryptd while others are processed directly in softirq
  context. The fix is to always postpone to cryptd if there are currently
  requests outstanding from the same tfm.

  This patch also removes the redundant use of cryptd in the async init
  function as init never touches the FPU.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
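  The ordering rule, sketched as a fragment of an async update path
  (cryptd_ahash_queued() is the helper this series relies on; the
  surrounding variables are assumed context):

      #include <asm/simd.h>            /* may_use_simd() */
      #include <crypto/cryptd.h>       /* cryptd_ahash_queued() */

      /* Postpone to cryptd whenever it still has requests queued for
       * this tfm; otherwise a direct softirq-context request could
       * overtake them and complete out of order. */
      if (!may_use_simd() ||
          (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
              memcpy(cryptd_req, req, sizeof(*req));
              ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
              return crypto_ahash_update(cryptd_req);
      }
      /* ... otherwise process directly under kernel-mode NEON ... */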
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2016-03-17; 2 files, -0/+11)
  Pull crypto update from Herbert Xu:
  "Here is the crypto update for 4.6:

   API:
   - Convert remaining crypto_hash users to shash or ahash, also convert
     blkcipher/ablkcipher users to skcipher.
   - Remove crypto_hash interface.
   - Remove crypto_pcomp interface.
   - Add crypto engine for async cipher drivers.
   - Add akcipher documentation.
   - Add skcipher documentation.

   Algorithms:
   - Rename crypto/crc32 to avoid name clash with lib/crc32.
   - Fix bug in keywrap where we zero the wrong pointer.

   Drivers:
   - Support T5/M5, T7/M7 SPARC CPUs in n2 hwrng driver.
   - Add PIC32 hwrng driver.
   - Support BCM6368 in bcm63xx hwrng driver.
   - Pack structs for 32-bit compat users in qat.
   - Use crypto engine in omap-aes.
   - Add support for sama5d2x SoCs in atmel-sha.
   - Make atmel-sha available again.
   - Make sahara hashing available again.
   - Make ccp hashing available again.
   - Make sha1-mb available again.
   - Add support for multiple devices in ccp.
   - Improve DMA performance in caam.
   - Add hashing support to rockchip"

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
    crypto: qat - remove redundant arbiter configuration
    crypto: ux500 - fix checks of error code returned by devm_ioremap_resource()
    crypto: atmel - fix checks of error code returned by devm_ioremap_resource()
    crypto: qat - Change the definition of icp_qat_uof_regtype
    hwrng: exynos - use __maybe_unused to hide pm functions
    crypto: ccp - Add abstraction for device-specific calls
    crypto: ccp - CCP versioning support
    crypto: ccp - Support for multiple CCPs
    crypto: ccp - Remove check for x86 family and model
    crypto: ccp - memset request context to zero during import
    lib/mpi: use "static inline" instead of "extern inline"
    lib/mpi: avoid assembler warning
    hwrng: bcm63xx - fix non device tree compatibility
    crypto: testmgr - allow rfc3686 aes-ctr variants in fips mode.
    crypto: qat - The AE id should be less than the maximal AE number
    lib/mpi: Endianness fix
    crypto: rockchip - add hash support for crypto engine in rk3288
    crypto: xts - fix compile errors
    crypto: doc - add skcipher API documentation
    crypto: doc - update AEAD AD handling
    ...
* crypto: xts - fix compile errors (Stephan Mueller, 2016-02-17; 2 files, -0/+2)
  Commit 28856a9e52c7 missed the addition of the crypto/xts.h include file
  for different architecture-specific AES implementations.

  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: xts - consolidate sanity check for keys (Stephan Mueller, 2016-02-17; 2 files, -0/+9)
  The patch centralizes the XTS key check logic into the service function
  xts_check_key, which is invoked from the different XTS implementations.
  With this, the XTS implementations in ARM, ARM64, PPC and S390 now have
  a sanity check for the XTS keys similar to the other arches.

  In addition, this service function received a check to ensure that the
  key != the tweak key, which is mandated by FIPS 140-2 IG A.9. As the
  check is not present in the standards defining XTS, it is only enforced
  in FIPS mode of the kernel.

  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
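  Usage is presumably a one-line call at the top of each XTS setkey; a
  sketch (the setkey body beyond the check is elided):

      #include <crypto/xts.h>

      static int xts_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                                unsigned int key_len)
      {
              /* Rejects odd-length keys and, in FIPS mode, a cipher key
               * that equals the tweak key. */
              int err = xts_check_key(tfm, in_key, key_len);

              if (err)
                      return err;
              /* ... expand the cipher and tweak halves of the key ... */
              return 0;
      }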
* arm/arm64: crypto: assure that ECB modes don't require an IV (Jeremy Linton, 2016-02-15; 1 file, -2/+2)
  ECB modes don't use an initialization vector. The kernel /proc/crypto
  interface doesn't reflect this properly.

  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* crypto: arm - ignore generated SHA2 assembly files (Baruch Siach, 2015-07-06; 1 file, -0/+2)
  These files are generated since commits f2f770d74a8d (crypto: arm/sha256
  - Add optimized SHA-256/224, 2015-04-03) and c80ae7ca3726 (crypto:
  arm/sha512 - accelerated SHA-512 using ARM generic ASM and NEON,
  2015-05-08).

  Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Baruch Siach <baruch@tkos.co.il>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - streamline AES-192 code path (Ard Biesheuvel, 2015-05-11; 1 file, -4/+3)
  This trims a couple of instructions off the total size of the core AES
  transform by reordering the final branch in the AES-192 code path with
  the rounds that are performed regardless of whether the branch is taken
  or not. Other than the slight size reduction, this has no performance
  benefit.

  Fix up a comment regarding the prototype of this function while we're at
  it.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha512 - accelerated SHA-512 using ARM generic ASM and NEON (Ard Biesheuvel, 2015-05-11; 9 files, -774/+2748)
  This replaces the SHA-512 NEON module with the faster and more versatile
  implementation from the OpenSSL project. It consists of both a NEON and
  a generic ASM version of the core SHA-512 transform, where the NEON
  version reverts to the ASM version when invoked in non-process context.

  This patch is based on the OpenSSL upstream version b1a5d1c65208 of
  sha512-armv4.pl, which can be found here:
  https://git.openssl.org/gitweb/?p=openssl.git;h=b1a5d1c65208

  Performance relative to the generic implementation (measured using
  tcrypt.ko mode=306 sec=1 running on a Cortex-A57 under KVM):

    input size  block size   asm    neon   old neon
            16          16   1.39   2.54   2.21
            64          16   1.32   2.33   2.09
            64          64   1.38   2.53   2.19
           256          16   1.31   2.28   2.06
           256          64   1.38   2.54   2.25
           256         256   1.40   2.77   2.39
          1024          16   1.29   2.22   2.01
          1024         256   1.40   2.82   2.45
          1024        1024   1.41   2.93   2.53
          2048          16   1.33   2.21   2.00
          2048         256   1.40   2.84   2.46
          2048        1024   1.41   2.96   2.55
          2048        2048   1.41   2.98   2.56
          4096          16   1.34   2.20   1.99
          4096         256   1.40   2.84   2.46
          4096        1024   1.41   2.97   2.56
          4096        4096   1.41   3.01   2.58
          8192          16   1.34   2.19   1.99
          8192         256   1.40   2.85   2.47
          8192        1024   1.41   2.98   2.56
          8192        4096   1.41   2.71   2.59
          8192        8192   1.51   3.51   2.69

  Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
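  The non-process-context fallback mentioned above presumably takes the
  usual kernel-mode NEON gate shape; a sketch (the core symbol names
  follow the OpenSSL convention and are assumptions):

      #include <asm/neon.h>
      #include <asm/simd.h>
      #include <linux/linkage.h>
      #include <linux/types.h>

      /* Core transforms provided by the generated sha512-armv4 code. */
      asmlinkage void sha512_block_data_order(u64 *state, const u8 *src,
                                              int blocks);
      asmlinkage void sha512_block_data_order_neon(u64 *state, const u8 *src,
                                                   int blocks);

      static void sha512_do_blocks(u64 *state, const u8 *src, int blocks)
      {
              if (may_use_simd()) {
                      kernel_neon_begin();
                      sha512_block_data_order_neon(state, src, blocks);
                      kernel_neon_end();
              } else {
                      /* e.g. softirq context: fall back to scalar ASM */
                      sha512_block_data_order(state, src, blocks);
              }
      }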
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2015-04-15; 19 files, -212/+5899)
  Pull crypto update from Herbert Xu:
  "Here is the crypto update for 4.1:

   New interfaces:
   - user-space interface for AEAD
   - user-space interface for RNG (i.e., pseudo RNG)

   New hashes:
   - ARMv8 SHA1/256
   - ARMv8 AES
   - ARMv8 GHASH
   - ARM assembler and NEON SHA256
   - MIPS OCTEON SHA1/256/512
   - MIPS img-hash SHA1/256 and MD5
   - Power 8 VMX AES/CBC/CTR/GHASH
   - PPC assembler AES, SHA1/256 and MD5
   - Broadcom IPROC RNG driver

   Cleanups/fixes:
   - prevent internal helper algos from being exposed to user-space
   - merge common code from assembly/C SHA implementations
   - misc fixes"

  * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (169 commits)
    crypto: arm - workaround for building with old binutils
    crypto: arm/sha256 - avoid sha256 code on ARMv7-M
    crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer
    crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer
    crypto: x86/sha1_ssse3 - move SHA-1 SSSE3 implementation to base layer
    crypto: arm64/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer
    crypto: arm64/sha1-ce - move SHA-1 ARMv8 implementation to base layer
    crypto: arm/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer
    crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer
    crypto: arm/sha1-ce - move SHA-1 ARMv8 implementation to base layer
    crypto: arm/sha1_neon - move SHA-1 NEON implementation to base layer
    crypto: arm/sha1 - move SHA-1 ARM asm implementation to base layer
    crypto: sha512-generic - move to generic glue implementation
    crypto: sha256-generic - move to generic glue implementation
    crypto: sha1-generic - move to generic glue implementation
    crypto: sha512 - implement base layer for SHA-512
    crypto: sha256 - implement base layer for SHA-256
    crypto: sha1 - implement base layer for SHA-1
    crypto: api - remove instance when test failed
    crypto: api - Move alg ref count init to crypto_check_alg
    ...
* crypto: arm - workaround for building with old binutils (Ard Biesheuvel, 2015-04-13; 1 file, -4/+15)
  Old versions of binutils (before 2.23) do not yet understand the
  crypto-neon-fp-armv8 fpu instructions, and an attempt to build these
  files results in a build failure:

    arch/arm/crypto/aes-ce-core.S:133: Error: selected processor does not support ARM mode `vld1.8 {q10-q11},[ip]!'
    arch/arm/crypto/aes-ce-core.S:133: Error: bad instruction `aese.8 q0,q8'
    arch/arm/crypto/aes-ce-core.S:133: Error: bad instruction `aesmc.8 q0,q0'
    arch/arm/crypto/aes-ce-core.S:133: Error: bad instruction `aese.8 q0,q9'
    arch/arm/crypto/aes-ce-core.S:133: Error: bad instruction `aesmc.8 q0,q0'

  Since the affected versions are still in widespread use, and this breaks
  'allmodconfig' builds, we should try to at least get a successful kernel
  build. Unfortunately, I could not come up with a way to make the Kconfig
  symbol depend on the binutils version, which would be the nicest
  solution.

  Instead, this patch uses the 'as-instr' Kbuild macro to find out whether
  the support is present in the assembler, and otherwise emits a non-fatal
  warning indicating which selected modules could not be built.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Link: http://storage.kernelci.org/next/next-20150410/arm-allmodconfig/build.log
  Fixes: 864cbeed4ab22d ("crypto: arm - add support for SHA1 using ARMv8
  Crypto Instructions")
  [ard.biesheuvel:
   - omit modules entirely instead of building empty ones if binutils is
     too old
   - update commit log accordingly]
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha256 - avoid sha256 code on ARMv7-M (Arnd Bergmann, 2015-04-13; 1 file, -0/+1)
  The sha256 assembly implementation can deal with all architecture levels
  from ARMv4 to ARMv7-A, but not with ARMv7-M. Enabling it in an ARMv7-M
  kernel results in this build failure:

    arm-linux-gnueabi-ld: error: arch/arm/crypto/sha256_glue.o: Conflicting architecture profiles M/A
    arm-linux-gnueabi-ld: failed to merge target specific data of file arch/arm/crypto/sha256_glue.o

  This adds a Kconfig dependency to prevent the code from being enabled
  for ARMv7-M.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer (Ard Biesheuvel, 2015-04-10; 3 files, -137/+39)
  This removes all the boilerplate from the existing implementation, and
  replaces it with calls into the base layer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
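  What "calls into the base layer" amounts to, as a sketch using the
  helpers from <crypto/sha256_base.h> (the core transform symbol and the
  surrounding checks are assumptions):

      #include <asm/neon.h>
      #include <crypto/internal/hash.h>
      #include <crypto/sha256_base.h>
      #include <linux/linkage.h>

      asmlinkage void sha2_ce_transform(struct sha256_state *sst,
                                        u8 const *src, int blocks);

      static int sha2_ce_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
      {
              kernel_neon_begin();
              /* the base layer does all buffering and padding; the
               * driver only supplies the block transform */
              sha256_base_do_update(desc, data, len, sha2_ce_transform);
              kernel_neon_end();
              return 0;
      }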
* crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer (Ard Biesheuvel, 2015-04-10; 3 files, -264/+66)
  This removes all the boilerplate from the existing implementation, and
  replaces it with calls into the base layer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha1-ce - move SHA-1 ARMv8 implementation to base layer (Ard Biesheuvel, 2015-04-10; 3 files, -98/+33)
  This removes all the boilerplate from the existing implementation, and
  replaces it with calls into the base layer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha1_neon - move SHA-1 NEON implementation to base layer (Ard Biesheuvel, 2015-04-10; 1 file, -111/+24)
  This removes all the boilerplate from the existing implementation, and
  replaces it with calls into the base layer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha1 - move SHA-1 ARM asm implementation to base layer (Ard Biesheuvel, 2015-04-10; 4 files, -98/+32)
  This removes all the boilerplate from the existing implementation, and
  replaces it with calls into the base layer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha256 - Add optimized SHA-256/224 (Sami Tolvanen, 2015-04-03; 8 files, -3/+3981)
  Add Andy Polyakov's optimized assembly and NEON implementations for
  SHA-256/224.

  The sha256-armv4.pl script for generating the assembly code is from
  OpenSSL commit 51f8d095562f36cdaa6893597b5c609e943b0565.

  Compared to sha256-generic these implementations have the following
  tcrypt speed improvements on Motorola Nexus 6 (Snapdragon 805):

      bs    b/u   sha256-neon  sha256-asm
      16     16   x1.32        x1.19
      64     16   x1.27        x1.15
      64     64   x1.36        x1.20
     256     16   x1.22        x1.11
     256     64   x1.36        x1.19
     256    256   x1.59        x1.23
    1024     16   x1.21        x1.10
    1024    256   x1.65        x1.23
    1024   1024   x1.76        x1.25
    2048     16   x1.21        x1.10
    2048    256   x1.66        x1.23
    2048   1024   x1.78        x1.25
    2048   2048   x1.79        x1.25
    4096     16   x1.20        x1.09
    4096    256   x1.66        x1.23
    4096   1024   x1.79        x1.26
    4096   4096   x1.82        x1.26
    8192     16   x1.20        x1.09
    8192    256   x1.67        x1.23
    8192   1024   x1.80        x1.26
    8192   4096   x1.85        x1.28
    8192   8192   x1.85        x1.27

  Where bs refers to block size and b/u to bytes per update.

  Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
  Cc: Andy Polyakov <appro@openssl.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes-ce - mark ARMv8 AES helper ciphers (Stephan Mueller, 2015-03-31; 1 file, -4/+8)
  Flag all ARMv8 AES helper ciphers as internal ciphers to prevent them
  from being called by normal users.

  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
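  The flagging presumably looks like this inside each helper's alg
  definition (fragment; illustrative):

      /* CRYPTO_ALG_INTERNAL keeps the helper off-limits to direct
       * allocation by name; only wrappers that also pass the INTERNAL
       * mask can instantiate it. */
      .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_INTERNAL,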