path: root/crypto
Commit message (author, date, files changed, lines -/+)
* crypto: ccm - Fix handling of null assoc data (Jarod Wilson, 2009-01-27, 1 file, -0/+2)
  It's a valid use case to have null associated data in a ccm vector, but this case isn't being handled properly right now. The following ccm decryption/verification test vector, using the rfc4309 implementation, regularly triggers a panic, as will any other vector with null assoc data:
  * key: ab2f8a74b71cd2b1ff802e487d82f8b9
  * iv: c6fb7d800d13abd8a6b2d8
  * Associated Data: [NULL]
  * Tag Length: 8
  * input: d5e8939fc7892e2b
  The resulting panic looks like so:
  Unable to handle kernel paging request at ffff810064ddaec0
  RIP: [<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
  PGD 8063 PUD 0
  Oops: 0002 [1] SMP
  last sysfs file: /module/libata/version
  CPU 0
  Modules linked in: crypto_tester_kmod(U) seqiv krng ansi_cprng chainiv rng ctr aes_generic aes_x86_64 ccm cryptomgr testmgr_cipher testmgr aead crypto_blkcipher crypto_algapi des ipv6 xfrm_nalgo crypto_api autofs4 hidp l2cap bluetooth nfs lockd fscache nfs_acl sunrpc ip_conntrack_netbios_ns ipt_REJECT xt_state ip_conntrack nfnetlink xt_tcpudp iptable_filter ip_tables x_tables dm_mirror dm_log dm_multipath scsi_dh dm_mod video hwmon backlight sbs i2c_ec button battery asus_acpi acpi_memhotplug ac lp sg snd_intel8x0 snd_ac97_codec ac97_bus snd_seq_dummy snd_seq_oss joydev snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_mixer_oss ide_cd snd_pcm floppy parport_pc shpchp e752x_edac snd_timer e1000 i2c_i801 edac_mc snd soundcore snd_page_alloc i2c_core cdrom parport serio_raw pcspkr ata_piix libata sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
  Pid: 12844, comm: crypto-tester Tainted: G 2.6.18-128.el5.fips1 #1
  RIP: 0010:[<ffffffff8864c4d7>] [<ffffffff8864c4d7>] :ccm:get_data_to_compute+0x1a6/0x1d6
  RSP: 0018:ffff8100134434e8 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: ffff8100104898b0 RCX: ffffffffab6aea10
  RDX: 0000000000000010 RSI: ffff8100104898c0 RDI: ffff810064ddaec0
  RBP: 0000000000000000 R08: ffff8100104898b0 R09: 0000000000000000
  R10: ffff8100103bac84 R11: ffff8100104898b0 R12: ffff810010489858
  R13: ffff8100104898b0 R14: ffff8100103bac00 R15: 0000000000000000
  FS: 00002ab881adfd30(0000) GS:ffffffff803ac000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
  CR2: ffff810064ddaec0 CR3: 0000000012a88000 CR4: 00000000000006e0
  Process crypto-tester (pid: 12844, threadinfo ffff810013442000, task ffff81003d165860)
  Stack: ffff8100103bac00 ffff8100104898e8 ffff8100134436f8 ffffffff00000000
  0000000000000000 ffff8100104898b0 0000000000000000 ffff810010489858
  0000000000000000 ffff8100103bac00 ffff8100134436f8 ffffffff8864c634
  Call Trace:
  [<ffffffff8864c634>] :ccm:crypto_ccm_auth+0x12d/0x140
  [<ffffffff8864cf73>] :ccm:crypto_ccm_decrypt+0x161/0x23a
  [<ffffffff88633643>] :crypto_tester_kmod:cavs_test_rfc4309_ccm+0x4a5/0x559
  [...]
  The above is from a RHEL5-based kernel, but upstream is susceptible too. The fix is trivial: in crypto/ccm.c:crypto_ccm_auth(), pctx->ilen contains whatever was in memory when pctx was allocated if assoclen is 0. The tested fix is to simply add an else clause setting pctx->ilen to 0 for the assoclen == 0 case, so that get_data_to_compute() doesn't try doing things it's not supposed to.
  Signed-off-by: Jarod Wilson <jarod@redhat.com>
  Acked-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
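  A minimal sketch of the fix described above (field and helper names are taken from the message; the surrounding code is paraphrased, not the exact upstream function):

    /* crypto_ccm_auth(), sketch: only the else branch is new */
    if (assoclen) {
            pctx->ilen = format_adata(idata, assoclen);
            get_data_to_compute(cipher, pctx, req->assoc, req->assoclen);
    } else {
            pctx->ilen = 0;   /* previously left uninitialized, causing the oops */
    }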
* crypto: blkcipher - Fix WARN_ON handling in walk_done (Herbert Xu, 2009-01-27, 1 file, -1/+1)
  When we get left-over bits from a slow walk, it means that the underlying cipher has gone troppo. However, as we're handling that case we should ensure that the caller terminates the walk. This patch does this by setting walk->nbytes to zero.
  Reported-by: Roel Kluin <roel.kluin@gmail.com>
  Reported-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
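  A rough sketch of the behaviour being described (simplified, not the exact blkcipher_walk_done() code):

    /* error path sketch: left-over bits mean the cipher misbehaved,
     * so report the failure AND make the caller's loop stop */
    if (WARN_ON(nbytes)) {
            err = -EINVAL;
            walk->nbytes = 0;   /* callers loop on walk->nbytes, so this ends the walk */
    }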
* crypto: authenc - Fix zero-length IV crash (Herbert Xu, 2009-01-15, 1 file, -9/+15)
  As it is, if an algorithm with a zero-length IV is used (e.g., NULL encryption) with authenc, authenc may generate an SG entry of length zero, which will trigger a BUG check in the hash layer. This patch fixes it by skipping the IV SG generation if the IV size is zero.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
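  A hedged sketch of the idea (authenc's real scatterlist setup differs in detail; ivsg here is an illustrative local name):

    unsigned int ivsize = crypto_aead_ivsize(authenc);

    if (ivsize) {
            sg_init_table(ivsg, 1);
            sg_set_buf(ivsg, req->iv, ivsize);
            /* hash the IV between the associated data and the ciphertext */
    }
    /* with ivsize == 0 no zero-length SG entry is created, so the hash
     * layer's BUG check on empty entries can no longer fire */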
* dmaengine: replace dma_async_client_register with dmaengine_get (Dan Williams, 2009-01-06, 1 file, -113/+2)
  Now that clients no longer need to be notified of channel arrival, dma_async_client_register can simply increment the dmaengine_ref_count.
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: provide a common 'issue_pending_all' implementation (Dan Williams, 2009-01-06, 1 file, -12/+0)
  async_tx and net_dma each have open-coded versions of issue_pending_all, so provide a common routine in dmaengine. The implementation needs to walk the global device list, so implement rcu to allow dma_issue_pending_all to run lockless. Clients protect themselves from channel removal events by holding a dmaengine reference.
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
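  A plausible shape for such a lockless walk (a sketch under the assumption that devices sit on an RCU-protected global list; list and field names are illustrative):

    void dma_issue_pending_all(void)
    {
            struct dma_device *device;
            struct dma_chan *chan;

            rcu_read_lock();
            list_for_each_entry_rcu(device, &dma_device_list, global_node)
                    list_for_each_entry(chan, &device->channels, device_node)
                            if (chan->client_count)
                                    device->device_issue_pending(chan);
            rcu_read_unlock();
    }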
* dmaengine: centralize channel allocation, introduce dma_find_channel (Dan Williams, 2009-01-06, 1 file, -142/+4)
  Allowing multiple clients to each define their own channel allocation scheme quickly leads to a pathological situation. For memory-to-memory offload all clients can share a central allocator. This simply moves the existing async_tx allocator to dmaengine with minimal fixups:
  * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
  * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
  * split out common code from async_tx.c:__async_tx_find_channel --> dma_find_channel
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* dmaengine: up-level reference counting to the module level (Dan Williams, 2009-01-06, 1 file, -4/+0)
  Simply, if a client wants any dmaengine channel then prevent all dmaengine modules from being removed. Once the clients are done, re-enable module removal.
  Why, beyond reducing complication?
  1/ Tracking reference counts per-transaction in an efficient manner, as is currently done, requires a complicated scheme to avoid cache-line bouncing effects.
  2/ Per-transaction ref-counting gives the false impression that a dma-driver can be gracefully removed ahead of its user (net, md, or dma-slave).
  3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but if such an engine were built one day we still would not need to notify clients of remove events. The driver can simply return NULL to a ->prep() request, something that is much easier for a client to handle.
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* dmaengine: remove dependency on async_tx (Dan Williams, 2009-01-05, 1 file, -75/+0)
  async_tx.ko is a consumer of dma channels. A circular dependency arises if modules in drivers/dma rely on common code in async_tx.ko. It prevents either module from being unloaded. Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o, where they should have been from the beginning.
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* crypto: aes - Precompute tables (Herbert Xu, 2008-12-25, 1 file, -90/+1055)
  The tables used by the various AES algorithms are currently computed at run-time. This has created an init ordering problem because some AES algorithms may be registered before the tables have been initialised. This patch gets around this whole thing by precomputing the tables.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - Correct comment about deflate parameters (Geert Uytterhoeven, 2008-12-25, 1 file, -1/+1)
  The comment for the deflate test vectors says the winbits parameter is 11, while the deflate module actually uses -11 (a negative window bits parameter enables the raw deflate format instead of the zlib format). Correct this, to avoid confusion about the format used.
  Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
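  For reference, this is roughly how the deflate module selects the raw format (a sketch; the constant names are assumed to match crypto/deflate.c's definitions):

    #include <linux/zlib.h>

    /* a negative windowBits value selects raw deflate (no zlib header/trailer) */
    ret = zlib_deflateInit2(&stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
                            -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
                            Z_DEFAULT_STRATEGY);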
* crypto: salsa20 - Remove private wrappers around various operations (Harvey Harrison, 2008-12-25, 1 file, -39/+36)
  ROTATE -> rol32
  XOR was always used with the same destination, use ^=
  PLUS/PLUSONE use ++ or +=
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
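  Illustratively, one Salsa20 column-round step before and after (a sketch of the kind of rewrite, not a hunk from salsa20_generic.c):

    /* before: x[4] = XOR(x[4], ROTATE(PLUS(x[0], x[12]), 7)); */
    /* after, with the kernel's rol32() and plain C operators:  */
    x[4] ^= rol32(x[0] + x[12], 7);
    x[8] ^= rol32(x[4] + x[0], 9);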
* crypto: des3_ede - permit weak keys unless REQ_WEAK_KEY set (Jarod Wilson, 2008-12-25, 1 file, -2/+3)
  While it's slightly insane to bypass the key1 == key2 || key2 == key3 check in triple-des, since it reduces it to the same strength as des, some folks do need to do this from time to time for backwards compatibility with des. My own case is FIPS CAVS test vectors. Many triple-des test vectors use a single key, replicated 3x. In order to get the expected results, des3_ede_setkey() needs to only reject weak keys if the CRYPTO_TFM_REQ_WEAK_KEY flag is set. Also sets a more appropriate RES flag when a weak key is found.
  Signed-off-by: Jarod Wilson <jarod@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
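  A sketch of what the adjusted check in des3_ede_setkey() amounts to (paraphrased around the existing key-equality test; variable names assumed):

    /* key1 == key2 or key2 == key3 degrades 3DES to plain DES; only treat
     * that as an error when the caller explicitly asked to reject weak keys */
    if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
                 !((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
        (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
            *flags |= CRYPTO_TFM_RES_WEAK_KEY;
            return -EINVAL;
    }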
* crypto: sha512 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -54/+60)
  This patch changes sha512 and sha384 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha512 - Move message schedule W[80] to static percpu area (Adrian-Ken Rueegsegger, 2008-12-25, 1 file, -8/+9)
  The message schedule W (u64[80]) is too big for the stack. In order for this algorithm to be used with shash it is moved to a static percpu area.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
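  A minimal sketch of that approach (variable names assumed; preemption must stay disabled while the per-CPU buffer is in use):

    /* 80 * sizeof(u64) = 640 bytes: too large for a kernel stack frame */
    static DEFINE_PER_CPU(u64[80], sha512_message_schedule);

    static void sha512_transform_sketch(u64 *state, const u8 *input)
    {
            u64 *W = get_cpu_var(sha512_message_schedule);

            /* ... expand input into W[0..79] and run the 80 rounds ... */

            put_cpu_var(sha512_message_schedule);
    }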
* crypto: michael_mic - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -32/+42)
  This patch changes michael_mic to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: wp512 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -57/+66)
  This patch changes wp512, wp384 and wp256 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tgr192 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -65/+72)
  This patch changes tgr192, tgr160 and tgr128 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha256 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -49/+57)
  This patch changes sha256 and sha224 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: md5 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -23/+29)
  This patch changes md5 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: md4 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -24/+30)
  This patch changes md4 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha1 - Switch to shash (Adrian-Ken Rueegsegger, 2008-12-25, 2 files, -26/+32)
  This patch changes sha1 to the new shash interface.
  Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rmd320 - Switch to shash (Herbert Xu, 2008-12-25, 2 files, -30/+33)
  This patch changes rmd320 to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rmd256 - Switch to shash (Herbert Xu, 2008-12-25, 2 files, -30/+33)
  This patch changes rmd256 to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rmd160 - Switch to shash (Herbert Xu, 2008-12-25, 2 files, -30/+33)
  This patch changes rmd160 to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rmd128 - Switch to shash (Herbert Xu, 2008-12-25, 2 files, -30/+33)
  This patch changes rmd128 to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: null - Switch to shash (Herbert Xu, 2008-12-25, 2 files, -23/+42)
  This patch changes digest_null to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Make setkey optional (Herbert Xu, 2008-12-25, 2 files, -1/+10)
  Since most cryptographic hash algorithms have no keys, this patch makes the setkey function optional for ahash and shash.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - Validate output length in (de)compression tests (Geert Uytterhoeven, 2008-12-25, 1 file, -0/+16)
  When self-testing (de)compression algorithms, make sure the actual size of the (de)compressed output data matches the expected output size. Otherwise, in case the actual output size would be smaller than the expected output size, the subsequent buffer compare test would still succeed, and no error would be reported.
  Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: remove uses of __constant_{endian} helpers (Harvey Harrison, 2008-12-25, 1 file, -4/+4)
  Base versions handle constant folding just fine.
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - Fix error flow of test_comp (Ingo Molnar, 2008-12-25, 1 file, -1/+1)
  This warning:
  crypto/testmgr.c: In function ‘test_comp’:
  crypto/testmgr.c:829: warning: ‘ret’ may be used uninitialized in this function
  triggers because GCC correctly notices that in the ctcount == 0 && dtcount != 0 input condition case this function can return an undefined value, if the second loop fails. Remove the shadowed 'ret' variable from the second loop that was probably unintended.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ansi_cprng - fix inverted DT increment routine (Jarod Wilson, 2008-12-25, 1 file, -1/+1)
  The ANSI X9.31 PRNG docs aren't particularly clear on how to increment DT, but empirical testing shows we're incrementing from the wrong end. A 10,000 iteration Monte Carlo RNG test currently winds up not getting the expected result.
  From http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf :
  # CAVS 4.3
  # ANSI931 MCT
  [X9.31]
  [AES 128-Key]
  COUNT = 0
  Key = 9f5b51200bf334b5d82be8c37255c848
  DT = 6376bbe52902ba3b67c925fa701f11ac
  V = 572c8e76872647977e74fbddc49501d1
  R = 48e9bd0d06ee18fbe45790d5c3fc9b73
  Currently, we get 0dd08496c4f7178bfa70a2161a79459a after 10000 loops. Inverting the DT increment routine results in us obtaining the expected result of 48e9bd0d06ee18fbe45790d5c3fc9b73. Verified on both x86_64 and ppc64.
  Signed-off-by: Jarod Wilson <jarod@redhat.com>
  Acked-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
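  A sketch of the corrected increment: DT is treated as a big-endian counter, so the carry starts at the last byte (the DEFAULT_BLK_SZ bound is assumed from the cprng code referenced in the next entry):

    /* walk from the rightmost byte toward byte 0; the buggy version
     * incremented from the opposite end */
    for (i = DEFAULT_BLK_SZ - 1; i >= 0; i--) {
            ctx->DT[i] += 1;
            if (ctx->DT[i] != 0)
                    break;   /* no carry into the next byte */
    }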
* crypto: ansi_cprng - Avoid incorrect extra call to _get_more_prng_bytes (Jarod Wilson, 2008-12-25, 1 file, -6/+11)
  While working with some FIPS RNGVS test vectors yesterday, I discovered a little bug in the way the ansi_cprng code works right now. For example, the following test vector (complete with expected result) from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ...
  Key = f3b1666d13607242ed061cabb8d46202
  DT = e6b3be782a23fa62d71d4afbb0e922fc
  V = f0000000000000000000000000000000
  R = 88dda456302423e5f69da57e7b95c73a
  ...when run through ansi_cprng, yields an incorrect R value of e2afe0d794120103d6e86a2b503bdfaa. If I load up ansi_cprng w/dbg=1 though, it was fairly obvious what was going wrong:
  ----8<----
  getting 16 random bytes for context ffff810033fb2b10
  Calling _get_more_prng_bytes for context ffff810033fb2b10
  Input DT: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  Input I: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  Input V: 00000000: f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  tmp stage 0: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  tmp stage 1: 00000000: f4 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
  tmp stage 2: 00000000: 8c 53 6f 73 a4 1a af d4 20 89 68 f4 58 64 f8 be
  Returning new block for context ffff810033fb2b10
  Output DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  Output I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
  Output V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
  New Random Data: 00000000: 88 dd a4 56 30 24 23 e5 f6 9d a5 7e 7b 95 c7 3a
  Calling _get_more_prng_bytes for context ffff810033fb2b10
  Input DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  Input I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
  Input V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
  tmp stage 0: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  tmp stage 1: 00000000: 80 6b 3a 8c 23 ae 8f 53 be 71 4c 16 fc 13 b2 ea
  tmp stage 2: 00000000: 2a 4d e1 2a 0b 58 8e e6 36 b8 9c 0a 26 22 b8 30
  Returning new block for context ffff810033fb2b10
  Output DT: 00000000: e8 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
  Output I: 00000000: c8 e2 01 fd 9f 4a 8f e5 e0 50 f6 21 76 19 67 9a
  Output V: 00000000: ba 98 e3 75 c0 1b 81 8d 03 d6 f8 e2 0c c6 54 4b
  New Random Data: 00000000: e2 af e0 d7 94 12 01 03 d6 e8 6a 2b 50 3b df aa
  returning 16 from get_prng_bytes in context ffff810033fb2b10
  ----8<----
  The expected result is there, in the first "New Random Data", but we're incorrectly making a second call to _get_more_prng_bytes, due to some checks that are slightly off, which resulted in our original bytes never being returned anywhere.
  One approach to fixing this would be to alter some byte_count checks in get_prng_bytes, but it would mean the last DEFAULT_BLK_SZ bytes would be copied a byte at a time, rather than in a single memcpy, so a slightly more involved, equally functional, and ultimately more efficient way of fixing this was suggested to me by Neil, which I'm submitting here. All of the RNGVS ANSI X9.31 AES128 VST test vectors I've passed through ansi_cprng are now returning the expected results with this change.
  Signed-off-by: Jarod Wilson <jarod@redhat.com>
  Acked-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: md4 - Use ARRAY_SIZE (Julia Lawall, 2008-12-25, 1 file, -2/+2)
  ARRAY_SIZE is more concise to use when the size of an array is divided by the size of its type or the size of its first element.
  The semantic patch that makes this change is as follows: (http://www.emn.fr/x-info/coccinelle/)
  // <smpl>
  @i@
  @@
  #include <linux/kernel.h>
  @depends on i using "paren.iso"@
  type T;
  T[] E;
  @@
  - (sizeof(E)/sizeof(T))
  + ARRAY_SIZE(E)
  // </smpl>
  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
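  In plain C terms the transformation looks like this (a generic illustration, not the md4.c hunk itself; MD4_HASH_WORDS is a hypothetical size constant):

    #include <linux/kernel.h>   /* for ARRAY_SIZE() */

    u32 hash[MD4_HASH_WORDS];

    /* before */
    for (i = 0; i < sizeof(hash) / sizeof(u32); i++)
            le32_to_cpus(&hash[i]);

    /* after */
    for (i = 0; i < ARRAY_SIZE(hash); i++)
            le32_to_cpus(&hash[i]);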
* libcrc32c: Move implementation to crypto crc32c (Herbert Xu, 2008-12-25, 2 files, -5/+112)
  This patch swaps the role of libcrc32c and crc32c. Previously the implementation was in libcrc32c and crc32c was a wrapper. Now the code is in crc32c and libcrc32c just calls the crypto layer.
  The reason for the change is to tap into the algorithm selection capability of the crypto API so that optimised implementations such as the one utilising Intel's CRC32C instruction can be used where available.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: crc32c - Test descriptor context format (Herbert Xu, 2008-12-25, 1 file, -1/+50)
  This patch adds a test for the requirement that all crc32c algorithms shall store the partial result in the first four bytes of the descriptor context.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: crc32c - Switch to shash (Herbert Xu, 2008-12-25, 1 file, -132/+53)
  This patch changes crc32c to the new shash interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: hash - Export shash through hash (Herbert Xu, 2008-12-25, 4 files, -4/+134)
  This patch allows shash algorithms to be used through the old hash interface. This is a transitional measure so we can convert the underlying algorithms to shash before converting the users across.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - Call type show function before legacy for proc (Herbert Xu, 2008-12-25, 1 file, -7/+13)
  This patch makes /proc/crypto call the type-specific show function if one is present before calling the legacy show functions for cipher/digest/compress. This allows us to reuse the type values for those legacy types. In particular, hash and digest will share one type value while shash is phased in as the default hash type.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Add import/export interface (Herbert Xu, 2008-12-25, 2 files, -0/+28)
  It is often useful to save the partial state of a hash function so that it can be used as a base for two or more computations. The most prominent example is HMAC where all hashes start from a base determined by the key. Having an import/export interface means that we only have to compute that base once rather than for each message.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
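  A hedged sketch of the HMAC-style use of such an interface, written against the synchronous (shash) flavour of the calls; the state buffer size and error handling are assumptions, not taken from this patch:

    u8 base_state[512];   /* assumed large enough for the exported state */

    /* compute the keyed starting point once */
    crypto_shash_init(desc);
    crypto_shash_update(desc, ipad_block, block_size);
    crypto_shash_export(desc, base_state);

    /* then, per message, restore it instead of re-hashing the key block */
    crypto_shash_import(desc, base_state);
    crypto_shash_update(desc, msg, msg_len);
    crypto_shash_final(desc, digest);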
* crypto: hash - Export shash through ahash (Herbert Xu, 2008-12-25, 1 file, -0/+143)
  This patch allows shash algorithms to be used through the ahash interface. This is required before we can convert digest algorithms over to shash.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: hash - Add shash interface (Herbert Xu, 2008-12-25, 2 files, -0/+240)
  The shash interface replaces the current synchronous hash interface. It improves over hash in two ways. Firstly shash is reentrant, meaning that the same tfm may be used by two threads simultaneously as all hashing state is stored in a local descriptor.
  The other enhancement is that shash no longer takes scatter list entries. This is because shash is specifically designed for synchronous algorithms and as such scatter lists are unnecessary.
  All existing hash users will be converted to shash once the algorithms have been completely converted. There is also a new finup function that combines update with final. This will be extended to ahash once the algorithm conversion is done.
  This is also the first time that an algorithm type has its own registration function. Existing algorithm types will be converted to this way in due course.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
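  A sketch of how a caller uses the descriptor-based interface (allocation and error handling trimmed; the "sha1" name and digest size are just an example):

    struct crypto_shash *tfm = crypto_alloc_shash("sha1", 0, 0);
    struct shash_desc *desc;
    u8 digest[20];

    /* all hashing state lives in the descriptor, so one tfm can safely be
     * shared by several threads, each with its own desc */
    desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
    desc->tfm = tfm;

    crypto_shash_init(desc);
    crypto_shash_update(desc, buf, len);   /* plain buffers, no scatterlists */
    crypto_shash_final(desc, digest);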
* crypto: api - Rebirth of crypto_alloc_tfm (Herbert Xu, 2008-12-25, 2 files, -0/+110)
  This patch reintroduces a completely revamped crypto_alloc_tfm. The biggest change is that we now take two crypto_type objects when allocating a tfm, a frontend and a backend. In fact this simply formalises what we've been doing behind the API's back. For example, as it stands crypto_alloc_ahash may use an actual ahash algorithm or a crypto_hash algorithm. Putting this in the API allows us to do this much more cleanly.
  The existing types will be converted across gradually.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - Move type exit function into crypto_tfm (Herbert Xu, 2008-12-25, 1 file, -7/+6)
  The type exit function needs to undo any allocations done by the type init function. However, the type init function may differ depending on the upper-level type of the transform (e.g., a crypto_blkcipher instantiated as a crypto_ablkcipher). So we need to move the exit function out of the lower-level structure and into crypto_tfm itself.
  As it stands this is a no-op since nobody uses exit functions at all. However, all cases where a lower-level type is instantiated as a different upper-level type (such as blkcipher as ablkcipher) will be converted such that they allocate the underlying transform and use that instead of casting (e.g., crypto_ablkcipher cast into crypto_blkcipher). That will need to use a different exit function depending on the upper-level type.
  This patch also allows the type init/exit functions to call (or not) cra_init/cra_exit instead of always calling them from the top level.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ansi_cprng - Allow resetting of DT value (Neil Horman, 2008-12-25, 1 file, -3/+13)
  This is a patch that was sent to me by Jarod Wilson, marking off my outstanding todo to allow the ansi cprng to set/reset the DT counter value in a cprng instance.
  Currently crypto_rng_reset accepts a seed byte array which is interpreted by the ansi_cprng as a {V key} tuple. This patch extends that tuple to now be {V key DT}, with DT an optional value during reset.
  This patch also fixes a bug we noticed in which the offset of the key area of the seed is started at DEFAULT_PRNG_KSZ rather than DEFAULT_BLK_SZ as it should be.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Jarod Wilson <jarod@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
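  A sketch of the seed layout the reset path now expects (offsets follow the constants named in the message; treat this as an illustration rather than the exact cprng reset code):

    /* seed = V (DEFAULT_BLK_SZ) | key (DEFAULT_PRNG_KSZ) | optional DT (DEFAULT_BLK_SZ) */
    memcpy(ctx->V,   seed, DEFAULT_BLK_SZ);
    memcpy(ctx->key, seed + DEFAULT_BLK_SZ, DEFAULT_PRNG_KSZ);   /* key offset fix */

    if (slen >= DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
            memcpy(ctx->DT, seed + DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ,
                   DEFAULT_BLK_SZ);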
* crypto: camellia - use kernel-provided bitops, unaligned access (Harvey Harrison, 2008-12-25, 1 file, -48/+36)
  Remove the private implementation of 32-bit rotation and unaligned access with byteswapping.
  As a bonus, fixes sparse warnings:
  crypto/camellia.c:602:2: warning: cast to restricted __be32
  crypto/camellia.c:603:2: warning: cast to restricted __be32
  crypto/camellia.c:604:2: warning: cast to restricted __be32
  crypto/camellia.c:605:2: warning: cast to restricted __be32
  crypto/camellia.c:710:2: warning: cast to restricted __be32
  crypto/camellia.c:711:2: warning: cast to restricted __be32
  crypto/camellia.c:712:2: warning: cast to restricted __be32
  crypto/camellia.c:713:2: warning: cast to restricted __be32
  crypto/camellia.c:714:2: warning: cast to restricted __be32
  crypto/camellia.c:715:2: warning: cast to restricted __be32
  crypto/camellia.c:716:2: warning: cast to restricted __be32
  crypto/camellia.c:717:2: warning: cast to restricted __be32
  [Thanks to Tomoyuki Okazaki for spotting the typo]
  Tested-by: Carlo E. Prelz <fluido@fluido.as>
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
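  The kernel helpers involved, for illustration (a sketch of the kind of replacement, not a hunk from camellia.c):

    #include <linux/bitops.h>
    #include <asm/unaligned.h>

    /* instead of a private rotate macro and byte-by-byte big-endian loads: */
    u32 w = get_unaligned_be32(in);   /* unaligned load plus byte swap in one call */
    w = rol32(w, 1);                  /* kernel-provided 32-bit rotate */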
* crypto: testmgr - Trigger a panic when self test fails in FIPS mode (Neil Horman, 2008-12-25, 1 file, -1/+6)
  The FIPS specification requires that, should the self-test for any supported crypto algorithm fail during operation in fips mode, we prevent the use of any crypto functionality until such time as the system can be re-initialized. Seems like the best way to handle that would be to panic the system if we were in fips mode and failed a self test. This patch implements that functionality. I've built and run it successfully.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
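  The enforcement amounts to something like the following in the test runner (a sketch; the fips_enabled flag and the exact message text are assumptions about the surrounding code):

    if (fips_enabled && rc) {
            /* a failed self-test in FIPS mode must halt all further crypto
             * use; panicking is the bluntest but simplest way to guarantee it */
            panic("%s: %s alg self test failed in fips mode!\n", driver, alg);
    }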
* Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx (Linus Torvalds, 2008-12-18, 1 file, -2/+9)
  * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
    async_xor: dma_map destination DMA_BIDIRECTIONAL
    dmaengine: protect 'id' from concurrent registrations
    ioat: wait for self-test completion

  * async_xor: dma_map destination DMA_BIDIRECTIONAL (Dan Williams, 2008-12-08, 1 file, -2/+9)
    Mapping the destination multiple times is a misuse of the dma-api. Since the destination may be reused as a source, ensure that it is only mapped once and that it is mapped bidirectionally. This appears to add ugliness on the unmap side in that it always reads back the destination address from the descriptor, but gcc can determine that dma_unmap is a nop and not emit the code that calculates its arguments.
    Cc: <stable@kernel.org>
    Cc: Saeed Bishara <saeed@marvell.com>
    Acked-by: Yuri Tikhonov <yur@emcraft.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
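    A sketch of the mapping discipline being described (generic dma-api calls; the local variable names are illustrative):

      /* map the destination once, bidirectionally: it may be consumed as an
       * xor source as well as written as the result */
      dma_dest = dma_map_page(dev, dest, offset, len, DMA_BIDIRECTIONAL);

      /* the sources keep their one-way mapping */
      for (i = 0; i < src_cnt; i++)
              dma_src[i] = dma_map_page(dev, src_list[i], offset, len,
                                        DMA_TO_DEVICE);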
* crypto: api - Disallow cryptomgr as a module if algorithms are built-in (Herbert Xu, 2008-12-10, 2 files, -13/+41)
  If we have at least one algorithm built-in then it no longer makes sense to have the testing framework, and hence cryptomgr, be a module. It should be either on or off, i.e., built-in or disabled.
  This just happens to stop a potential runaway modprobe loop that seems to trigger on at least one distro.
  With fixes from Evgeniy Polyakov.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx (Linus Torvalds, 2008-10-20, 1 file, -18/+16)
  * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
    fsldma: allow Freescale Elo DMA driver to be compiled as a module
    fsldma: remove internal self-test from Freescale Elo DMA driver
    drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
    dmatest: properly handle duplicate DMA channels
    drivers/dma/ioat_dma.c: drop code after return
    async_tx: make async_tx_run_dependencies() easier to read