path: root/crypto
Commits are listed as: subject (author, date; files changed, -lines removed/+lines added)
* async_tx: fix kmap_atomic usage in async_memcpy (Dan Williams, 2007-07-20; 1 file, -15/+4)

    Andrew Morton: [async_memcpy] is very wrong if both ASYNC_TX_KMAP_DST and ASYNC_TX_KMAP_SRC can ever be set. We'll end up using the same kmap slot for both src and dest and we get either corrupted data or a BUG.

    Evgeniy Polyakov: Btw, shouldn't it always be kmap_atomic() even if the flag is not set? Those pages are the usual ones returned by alloc_page().

    So fix the usage of kmap_atomic and kill the ASYNC_TX_KMAP_DST and ASYNC_TX_KMAP_SRC flags.

    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
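    For illustration, a minimal sketch of the kind of fallback copy path described above, assuming the 2.6.22-era kmap_atomic(page, KM_type) API. The function name and parameters mirror async_memcpy's arguments but are illustrative, not the literal patch:

        #include <linux/highmem.h>
        #include <linux/mm.h>
        #include <linux/string.h>

        /* Always map both pages atomically, using distinct kmap slots. */
        static void sync_memcpy_pages(struct page *dest, struct page *src,
                                      unsigned int dest_offset,
                                      unsigned int src_offset, size_t len)
        {
                char *dest_buf = kmap_atomic(dest, KM_USER0);
                char *src_buf = kmap_atomic(src, KM_USER1);

                memcpy(dest_buf + dest_offset, src_buf + src_offset, len);

                kunmap_atomic(src_buf, KM_USER1);
                kunmap_atomic(dest_buf, KM_USER0);
        }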
* Make crypto API use seq_list_xxx helpers (Pavel Emelianov, 2007-07-16; 1 file, -14/+3)

    Simple and stupid - just use the same code from another place in the kernel.

    Signed-off-by: Pavel Emelianov <xemul@openvz.org>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
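    The conversion presumably replaces the open-coded list walk in the crypto /proc seq_file handlers with the shared helpers; a sketch under that assumption (crypto_alg_list and crypto_alg_sem are the existing internal symbols):

        #include <linux/seq_file.h>
        #include "internal.h"   /* crypto_alg_list, crypto_alg_sem */

        static void *c_start(struct seq_file *m, loff_t *pos)
        {
                down_read(&crypto_alg_sem);
                return seq_list_start(&crypto_alg_list, *pos);
        }

        static void *c_next(struct seq_file *m, void *p, loff_t *pos)
        {
                return seq_list_next(p, &crypto_alg_list, pos);
        }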
* Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6 (David S. Miller, 2007-07-14; 7 files, -14/+125)

    Conflicts:
        crypto/Kconfig
| * [CRYPTO] api: Allow ablkcipher with no queues (Sebastian Siewior, 2007-07-11; 1 file, -2/+4)

    Evgeniy's hifn driver and probably mine don't use ablkcipher->queue at all. The show method of ablkcipher will access this field without checking if it is valid.

    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Handle unaligned keys in setkey (Sebastian Siewior, 2007-07-11; 4 files, -4/+117)

    setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the alignment requested by the algorithm. This patch fixes it. The extra memory is allocated by kmalloc() with the GFP_ATOMIC flag.

    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
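    The fix likely follows the usual bounce-buffer pattern: allocate keylen + alignmask bytes, align the pointer, copy the key, and call the algorithm's setkey on the aligned copy. A sketch for the plain cipher case (helper name and hook path are assumptions):

        #include <linux/crypto.h>
        #include <linux/kernel.h>
        #include <linux/slab.h>
        #include <linux/string.h>

        static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
                                    unsigned int keylen)
        {
                unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
                u8 *buffer, *alignbuffer;
                int ret;

                buffer = kmalloc(keylen + alignmask, GFP_ATOMIC);
                if (!buffer)
                        return -ENOMEM;

                alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
                memcpy(alignbuffer, key, keylen);
                ret = tfm->__crt_alg->cra_cipher.cia_setkey(tfm, alignbuffer, keylen);
                memset(alignbuffer, 0, keylen); /* don't leave key material behind */
                kfree(buffer);
                return ret;
        }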
| * [CRYPTO] api: Wake up all waiters when larval completes (Herbert Xu, 2007-07-11; 2 files, -3/+3)

    Right now when a larval matures or when it dies of an error we only wake up one waiter. This would cause other waiters to time out unnecessarily. This patch changes it to use complete_all to wake up all waiters.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] Kconfig: Use menuconfig objects (Jan Engelhardt, 2007-07-11; 1 file, -5/+1)

    Use menuconfigs instead of menus, so the whole menu can be disabled at once instead of going through all options.

    Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | async_tx: add the async_tx api (Dan Williams, 2007-07-13; 9 files, -17/+1104)

    The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.

    "I imagine that any piece of ADMA hardware would register with the 'async_*' subsystem, and a call to async_X would be routed as appropriate, or be run in-line." - Neil Brown

    async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

        struct dma_async_tx_descriptor *
        async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
                          dma_async_tx_callback cb_fn, void *cb_param)
        {
                struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
                struct dma_device *device = chan ? chan->device : NULL;
                int int_en = cb_fn ? 1 : 0;
                struct dma_async_tx_descriptor *tx = device ?
                        device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

                if (tx) { /* run <operation> asynchronously */
                        ...
                        tx->tx_set_dest(addr, tx, index);
                        ...
                        tx->tx_set_src(addr, tx, index);
                        ...
                        async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
                } else { /* run <operation> synchronously */
                        ...
                        <operation>
                        ...
                        async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
                }

                return tx;
        }

    async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example, if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel1 and xor channel1. When a dependency is specified, async_tx_find_channel defaults to keeping the operation on the same channel. An xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and an xor resource.

    Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines. On iop342, tiobench showed higher throughput for sequential writes (20-30% improvement) and sequential reads to a degraded array (40-55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On an x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also within +/- a few percentage points of the original implementation. According to 'top' on iop342, CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.

    The tiobench command line used for testing was:

        tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5

    * iop342 had 1GB of memory available

    Details:
    * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
    * when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet. If the channel does not support interrupts then a live polling wait will be performed
    * the api is written as a dmaengine client that requests all available channels
    * in support of dependencies the api implicitly schedules channel-switch interrupts. The interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
    * xor engines treat an xor destination address differently than a software xor routine. To the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.

    Changelog:
    * fixed a leftover debug print
    * don't allow callbacks in async_interrupt_cond
    * fixed xor_block changes
    * fixed usage of ASYNC_TX_XOR_DROP_DEST
    * drop dma mapping methods, suggested by Chris Leech
    * printk warning fixups from Andrew Morton
    * don't use inline in C files, Adrian Bunk
    * select the API when MD is enabled
    * BUG_ON xor source counts <= 1
    * implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
    * remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
    * simplify async_tx_find_channel to optimize the fast path
    * introduce the channel_table_initialized flag to prevent early calls to the api
    * reorganize the code to mimic crypto
    * include mm.h as not all archs include it in dma-mapping.h
    * make the Kconfig options non-user visible, Adrian Bunk
    * move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
    * move large inline functions into c files
    * checkpatch.pl fixes
    * gpl v2 only correction

    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Acked-By: NeilBrown <neilb@suse.de>
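    For illustration, a hedged usage sketch of the api described above. The async_memcpy prototype, the ASYNC_TX_ACK flag and the header path are inferred from the text of this commit rather than verified against the patch:

        #include <linux/async_tx.h>     /* assumed header location */
        #include <linux/completion.h>
        #include <linux/mm.h>

        static void copy_done(void *param)
        {
                complete(param);        /* param is the caller's struct completion */
        }

        /* Copy one page, offloading to a DMA channel when one is available and
         * falling back to a synchronous memcpy (with the callback still run
         * through the sync epilog) otherwise. */
        static void copy_page_example(struct page *dest, struct page *src)
        {
                DECLARE_COMPLETION_ONSTACK(done);
                struct dma_async_tx_descriptor *tx;

                tx = async_memcpy(dest, src, 0, 0, PAGE_SIZE, ASYNC_TX_ACK,
                                  NULL /* no dependency */, copy_done, &done);
                (void)tx;
                async_tx_issue_pending_all();
                wait_for_completion(&done);
        }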
* | xor: make 'xor_blocks' a library routine for use with async_tx (Dan Williams, 2007-07-13; 3 files, -0/+168)

    The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area.

    The following fixes are also made:
    * rename xor_block => xor_blocks, suggested by Adrian Bunk
    * ensure that xor.o initializes before md.o in the built-in case
    * checkpatch.pl fixes
    * mark calibrate_xor_blocks __init, Adrian Bunk

    Cc: Adrian Bunk <bunk@stusta.de>
    Cc: NeilBrown <neilb@suse.de>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
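    Assuming the renamed routine keeps the raid5-style prototype (source count, byte length, destination, array of source buffers) and header location, usage would look roughly like:

        #include <linux/mm.h>
        #include <linux/raid/xor.h>     /* assumed header for the relocated routine */

        /* dest ^= a ^ b over one page */
        static void xor_two_sources(void *dest, void *a, void *b)
        {
                void *srcs[2] = { a, b };

                xor_blocks(2, PAGE_SIZE, dest, srcs);
        }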
* [CRYPTO] cryptd: Fix problem with cryptd and the freezer (Rafael J. Wysocki, 2007-05-31; 1 file, -1/+3)

    Make sure that cryptd is marked as nonfreezable and does not hold up the freezer.

    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Read module pointer before freeing algorithm (Herbert Xu, 2007-05-19; 1 file, -1/+3)

    The function crypto_mod_put first frees the algorithm and then drops the reference to its module. Unfortunately we read the module pointer after freeing the algorithm, and that pointer sits inside the object that we just freed. So this patch reads the module pointer out before we free the object.

    Thanks to Luca Tettamanti for reporting this.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
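    The fix is presumably just a reordering of the reads, along these lines (a sketch, not the literal patch):

        void crypto_mod_put(struct crypto_alg *alg)
        {
                struct module *module = alg->cra_module; /* read before the object can go away */

                crypto_alg_put(alg);    /* may free *alg */
                module_put(module);
        }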
* [CRYPTO] tcrypt: Add missing error check (Herbert Xu, 2007-05-18; 1 file, -1/+1)

    The return value of crypto_hash_final isn't checked in test_hash_cycles. This patch corrects this. Thanks to Eric Sesterhenn for reporting this.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Fix trivial typos in Kconfig* files (David Sterba, 2007-05-09; 1 file, -1/+1)

    Fix several typos in help text in Kconfig* files.

    Signed-off-by: David Sterba <dave@jikos.cz>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [CRYPTO] cryptomgr: Fix use after free (Herbert Xu, 2007-05-09; 1 file, -4/+3)

    By the time kthread_run returns, the param may already have been freed, so writing the returned task_struct pointer to param is wrong. In fact, we don't need it in param anyway, so this patch simply puts it on the stack.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptd: Add software async crypto daemon (Herbert Xu, 2007-05-02; 3 files, -0/+385)

    This patch adds the cryptd module which is a template that takes a synchronous software crypto algorithm and converts it to an asynchronous one by executing it in a kernel thread.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
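    Since cryptd is a template, callers would presumably request a wrapped instance by name; something like the following, where both the instance name "cryptd(cbc(aes))" and the allocation helper are assumptions for illustration:

        static struct crypto_ablkcipher *get_async_cbc_aes(void)
        {
                /* Ask for an asynchronous, kernel-thread-backed wrapper
                 * around the synchronous cbc(aes) implementation. */
                return crypto_alloc_ablkcipher("cryptd(cbc(aes))", 0, 0);
        }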
* [CRYPTO] api: Do not remove users unless new algorithm matches (Herbert Xu, 2007-05-02; 1 file, -26/+39)

    As it is, whenever a new algorithm with the same name is registered, users of the old algorithm will be removed so that they can take advantage of the new algorithm. This presents a problem when the new algorithm is not equivalent to the old algorithm. In particular, the new algorithm might only function on top of the existing one. Hence we should not remove users unless they can make use of the new algorithm.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptomgr: Fix parsing of nested templates (Herbert Xu, 2007-05-02; 1 file, -13/+25)

    This patch allows the use of nested templates by allowing the use of brackets inside a template parameter.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async blkcipher type (Herbert Xu, 2007-05-02; 4 files, -0/+150)

    This patch adds the mid-level interface for asynchronous block ciphers. It also includes a generic queueing mechanism that can be used by other asynchronous crypto operations in future.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
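    The generic queue presumably exposes init/enqueue/dequeue helpers that a driver drains from its own context; a sketch using names that appear in later crypto code (treat the header path and exact signatures as assumptions here):

        #include <crypto/algapi.h>      /* assumed header for the queue helpers */

        static struct crypto_queue example_queue;  /* crypto_init_queue(&example_queue, 100) at init time */

        /* Producer side: park an async request on the queue. */
        static int example_enqueue(struct crypto_async_request *req)
        {
                return crypto_enqueue_request(&example_queue, req);
        }

        /* Consumer side: drain from a thread or tasklet and complete requests. */
        static void example_drain(void)
        {
                struct crypto_async_request *req;

                while ((req = crypto_dequeue_request(&example_queue)))
                        req->complete(req, 0);  /* real code would do the work first */
        }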
* [CRYPTO] templates: Pass type/mask when creating instances (Herbert Xu, 2007-05-02; 8 files, -34/+103)

    This patch passes the type/mask along when constructing instances of templates. This is in preparation for templates that may support multiple types of instances depending on what is requested. For example, the planned software async crypto driver will use this construct.

    For the moment this allows us to check whether the instance constructed is of the correct type and avoid returning success if the type does not match.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] tcrypt: Use async blkcipher interface (Herbert Xu, 2007-05-02; 1 file, -39/+82)

    This patch converts the tcrypt module to use the asynchronous block cipher interface. As all synchronous block ciphers can be used through the async interface, tcrypt is still able to test them.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async block cipher interface (Herbert Xu, 2007-05-02; 1 file, -5/+65)

    This patch adds the frontend interface for asynchronous block ciphers. In addition to the usual block cipher parameters, there is a callback function pointer and a data pointer. The callback will be invoked only if the encrypt/decrypt handlers return -EINPROGRESS. In other words, if the return value is zero, the completion handler (or the equivalent code) needs to be invoked by the caller.

    The request structure is allocated and freed by the caller. Its size is determined by calling crypto_ablkcipher_reqsize(). The helpers ablkcipher_request_alloc/ablkcipher_request_free can be used to manage the memory for a request.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
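    A hedged end-to-end sketch of the frontend described above. Only the helpers named in the text are taken from the commit; the allocation, setkey, set_callback and set_crypt helpers are assumed to exist alongside them, and error handling is trimmed:

        static void op_done(struct crypto_async_request *req, int err)
        {
                if (err == -EINPROGRESS)
                        return;         /* request was only backlogged; wait for the final call */
                complete(req->data);    /* req->data is the caller's struct completion */
        }

        /* Encrypt nbytes from sg_src to sg_dst with cbc(aes). */
        static int encrypt_example(struct scatterlist *sg_src, struct scatterlist *sg_dst,
                                   unsigned int nbytes, u8 *iv,
                                   const u8 *key, unsigned int keylen)
        {
                struct crypto_ablkcipher *tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
                struct ablkcipher_request *req;
                DECLARE_COMPLETION_ONSTACK(done);
                int err;

                crypto_ablkcipher_setkey(tfm, key, keylen);

                req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
                ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                                op_done, &done);
                ablkcipher_request_set_crypt(req, sg_src, sg_dst, nbytes, iv);

                err = crypto_ablkcipher_encrypt(req);
                if (err == -EINPROGRESS || err == -EBUSY) {
                        wait_for_completion(&done);
                        err = 0;
                }

                ablkcipher_request_free(req);
                crypto_free_ablkcipher(tfm);
                return err;
        }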
* [CRYPTO] api: Proc functions should be marked as unused (Herbert Xu, 2007-05-02; 2 files, -2/+2)

    The proc functions were incorrectly marked as used rather than unused. They may be unused if proc is disabled.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [PATCH] Update my email address from jkmaline@cc.hut.fi to j@w1.fi (Jouni Malinen, 2007-04-28; 1 file, -2/+2)

    After 13 years of use, it looks like my email address is finally going to disappear. While this is likely to drop the amount of incoming spam greatly ;-), it may also affect more appropriate messages, so let's update my email address in various places. In addition, Host AP mailing list is subscribers-only and linux-wireless can also be used for discussing issues related to this driver which is now shown in MAINTAINERS.

    Signed-off-by: Jouni Malinen <j@w1.fi>
    Signed-off-by: John W. Linville <linville@tuxdriver.com>
* [CRYPTO] api: Flush the current page rather than the next (Herbert Xu, 2007-03-31; 1 file, -2/+6)

    On platforms where flush_dcache_page is needed we're currently flushing the next page rather than the one we've just processed. This patch fixes the off-by-one error.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Use the right value when advancing scatterwalk_copychunks (Herbert Xu, 2007-03-31; 1 file, -1/+1)

    In the scatterwalk_copychunks loop, we should be advancing by len_this_page and not nbytes. The latter is the total length.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] tcrypt: Fix error checking for comp allocation (Sebastian Siewior, 2007-03-21; 1 file, -1/+1)

    This patch fixes loading the tcrypt module while deflate isn't available at all (isn't built).

    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: scatterwalk_copychunks() fails to advance through scatterlist (J. Bruce Fields, 2007-03-21; 1 file, -2/+2)

    In the loop in scatterwalk_copychunks(), if walk->offset is zero, then scatterwalk_pagedone rounds that up to the nearest page boundary:

        walk->offset += PAGE_SIZE - 1;
        walk->offset &= PAGE_MASK;

    which is a no-op in this case, so we don't advance to the next element of the scatterlist array:

        if (walk->offset >= walk->sg->offset + walk->sg->length)
                scatterwalk_start(walk, sg_next(walk->sg));

    and we end up copying the same data twice.

    It appears that other callers of scatterwalk_{page}done first advance walk->offset, so I believe that's the correct thing to do here.

    This caused a bug in NFS when run with krb5p security, which would cause some writes to fail with permissions errors--for example, writes of less than 8 bytes (the des blocksize) at the start of a file. A git-bisect shows the bug was originally introduced by 5c64097aa0f6dc4f27718ef47ca9a12538d62860, first in 2.6.19-rc1.

    Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [PATCH] mark struct file_operations const 3 (Arjan van de Ven, 2007-02-12; 1 file, -1/+1)

    Many struct file_operations in the kernel can be "const". Marking them const moves these to the .rodata section, which avoids false sharing with potential dirty data. In addition it'll catch accidental writes at compile time to these shared resources.

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'HEAD' of master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6 (David S. Miller, 2007-02-08; 20 files, -591/+3337)

    Conflicts:
        crypto/Kconfig
| * [CRYPTO] camellia: added the testing code of Camellia cipher (Noriaki TAKAMIYA, 2007-02-07; 2 files, -1/+206)

    This patch adds the test code for the Camellia cipher to the testing module.

    Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] camellia: added the code of Camellia cipher algorithm. (Noriaki TAKAMIYA, 2007-02-07; 2 files, -0/+1802)

    This patch adds the main code of the Camellia cipher algorithm.

    Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] camellia: Add Kconfig entry. (Noriaki TAKAMIYA, 2007-02-07; 1 file, -0/+15)

    This patch adds the Kconfig entry for Camellia.

    Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] xcbc: Use new cipher interface (Herbert Xu, 2007-02-07; 1 file, -14/+16)

    This patch changes xcbc to use the new cipher encrypt_one interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Allow multiple frontends per backend (Herbert Xu, 2007-02-07; 5 files, -18/+22)

    This patch adds support for multiple frontend types for each backend algorithm by passing the type and mask through to the backend type init function.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Add type-safe spawns (Herbert Xu, 2007-02-07; 7 files, -27/+42)

    This patch allows spawns of specific types (e.g., cipher) to be allocated.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Remove deprecated interface (Herbert Xu, 2007-02-07; 6 files, -532/+16)

    This patch removes the old cipher interface and related code.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] tcrypt: Removed vestigial crypto_alloc_tfm call (Herbert Xu, 2007-02-07; 1 file, -1/+1)

    The crypto_comp conversion missed the last remaining crypto_alloc_tfm call. This patch replaces it with crypto_alloc_comp.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] fcrypt: Add FCrypt from RxRPC (David Howells, 2007-02-07; 5 files, -1/+574)

    Add a crypto module to provide FCrypt encryption as used by RxRPC.

    Signed-Off-By: David Howells <dhowells@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] pcbc: Add Propagated CBC template (David Howells, 2007-02-07; 3 files, -0/+358)

    Add PCBC crypto template support as used by RxRPC.

    Signed-Off-By: David Howells <dhowells@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
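    Taken together with the fcrypt commit above, RxRPC presumably instantiates the pair by template name; a sketch (the flags and the 8-byte key length are assumptions):

        static struct crypto_blkcipher *rxkad_cipher_example(const u8 session_key[8])
        {
                struct crypto_blkcipher *ci;

                ci = crypto_alloc_blkcipher("pcbc(fcrypt)", 0, CRYPTO_ALG_ASYNC);
                if (IS_ERR(ci))
                        return ci;

                /* FCrypt takes a 64-bit key */
                crypto_blkcipher_setkey(ci, session_key, 8);
                return ci;
        }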
| * [CRYPTO] tcrypt: Added test vectors for sha384/sha512 (Andrew Donofrio, 2007-02-07; 2 files, -1/+259)

    This patch adds tests for SHA384 HMAC and SHA512 HMAC to the tcrypt module. Test data was taken from RFC4231. This patch is a follow-up to the discovery (bug 7646) that the kernel SHA384 HMAC implementation was not generating proper SHA384 HMACs.

    Signed-off-by: Andrew Donofrio <linuxbugzilla@kriptik.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] all: Check for usage in hard IRQ context (Herbert Xu, 2007-02-07; 3 files, -7/+37)

    Using blkcipher/hash crypto operations in hard IRQ context can lead to random memory corruption due to the reuse of kmap_atomic slots. Since crypto operations were never meant to be used in hard IRQ contexts, this patch checks for such usage and returns an error before kmap_atomic is performed.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | [S390] move crypto options and some cleanup. (Jan Glauber, 2007-02-05; 1 file, -49/+0)

    This patch moves the config options for the s390 crypto instructions to the standard "Hardware crypto devices" menu. In addition some cleanup has been done: use a flag for supported key lengths, add a warning about a machine limitation, return ENOTSUPP in case the hardware has no support, remove superfluous printks and update email addresses.

    Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [PATCH] uml problems with linux/io.h (Al Viro, 2006-12-13; 1 file, -1/+0)

    Remove useless includes of linux/io.h, don't even try to build iomap_copy on uml (it doesn't have readb() et.al., so...)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Acked-by: Jeff Dike <jdike@addtoit.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [CRYPTO] sha512: Fix sha384 block size (Herbert Xu, 2006-12-11; 1 file, -1/+1)

    The SHA384 block size should be 128 bytes, not 96 bytes. This was spotted by Andrew Donofrio. Fortunately the block size isn't actually used anywhere so this typo has had no real impact.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] lrw: round --> lrw_round (David S. Miller, 2006-12-06; 1 file, -2/+2)

    Fixes:

        crypto/lrw.c:99: warning: conflicting types for built-in function ‘round’

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [CRYPTO] tcrypt: LRW test vectors (Rik Snel, 2006-12-06; 2 files, -4/+542)

    Do modprobe tcrypt mode=10 to check the included test vectors. They are from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf and from http://www.mail-archive.com/stds-p1619@listserv.ieee.org/msg00173.html.

    To make the last test vector fit, I had to increase the buffer size of input and result to 512 bytes.

    Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode (Rik Snel, 2006-12-06; 3 files, -0/+315)

    Main module, this implements the Liskov Rivest Wagner block cipher mode in the new blockcipher API. The implementation is based on ecb.c.

    The LRW-32-AES specification I used can be found at: http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

    It implements the optimization specified as optional in the specification, and in addition it uses optimized multiplication routines from gf128mul.c.

    Since gf128mul.[ch] is not tested on bigendian, this cipher mode may currently fail badly on bigendian machines.

    Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
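    Once registered, the mode should be reachable through the usual template syntax; a sketch, assuming the LRW convention that the 16-byte tweak key is appended to the underlying cipher key:

        static struct crypto_blkcipher *lrw_aes_example(const u8 *key, unsigned int keylen)
        {
                struct crypto_blkcipher *tfm;

                tfm = crypto_alloc_blkcipher("lrw(aes)", 0, 0);
                if (IS_ERR(tfm))
                        return tfm;

                /* keylen = AES key length plus 16 bytes of tweak material */
                if (crypto_blkcipher_setkey(tfm, key, keylen)) {
                        crypto_free_blkcipher(tfm);
                        return ERR_PTR(-EINVAL);
                }
                return tfm;
        }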
* [CRYPTO] lib: table driven multiplications in GF(2^128) (Rik Snel, 2006-12-06; 3 files, -0/+477)

    A lot of cipher modes need multiplications in GF(2^128): LRW, ABL, GCM... I use functions from this library in my LRW implementation and I will also use them in my ABL (Arbitrary Block Length, an unencumbered (correct me if I am wrong) wide block cipher mode).

    Elements of GF(2^128) must be presented as u128 *, which encourages automatic and proper alignment.

    The library contains support for two different representations of GF(2^128); see the comment in gf128mul.h. There are different levels of optimization (memory/speed tradeoff).

    The code is based on work by Dr Brian Gladman. Notable changes:
    - deletion of two optimization modes
    - change from u32 to u64 for faster handling on 64bit machines
    - support for 'bbe' representation in addition to the already implemented 'lle' representation
    - move 'inline void' functions from header to 'static void' in the source file
    - update to use the linux coding style conventions

    The original can be found at: http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

    The copyright (and GPL statement) of the original author is preserved.

    Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
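    The multiplication helpers apparently work in place on 128-bit elements; a sketch under the assumption that the 'lle' entry point takes (accumulator, operand), and with the element type name and header path taken as assumptions rather than from the patch:

        #include <crypto/gf128mul.h>    /* assumed header location */

        /* r = x * y in GF(2^128), 'lle' representation; the helper multiplies
         * in place, so copy the left operand into the result first. */
        static void gf128_mul_example(be128 *r, const be128 *x, const be128 *y)
        {
                *r = *x;
                gf128mul_lle(r, y);
        }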
* [CRYPTO] api: Remove unused functions (Adrian Bunk, 2006-12-06; 2 files, -63/+0)

    This patch removes the following no longer used functions:
    - api.c: crypto_alg_available()
    - digest.c: crypto_digest_init()
    - digest.c: crypto_digest_update()
    - digest.c: crypto_digest_final()
    - digest.c: crypto_digest_digest()

    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] xcbc: Make needlessly global code static (Adrian Bunk, 2006-12-06; 1 file, -6/+8)

    On Tue, Nov 14, 2006 at 01:41:25AM -0800, Andrew Morton wrote:
    > ...
    > Changes since 2.6.19-rc5-mm2:
    > ...
    >  git-cryptodev.patch
    > ...
    >  git trees
    > ...

    This patch makes some needlessly global code static.

    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>