author      Szilveszter Ördög <slipszi@gmail.com>        2010-08-06 09:26:38 +0800
committer   Herbert Xu <herbert@gondor.apana.org.au>     2010-08-06 09:26:38 +0800
commit      23a75eee070f1370bee803a34f285cf81eb5f331 (patch)
tree        6427c53a261840661f135b99d81062fc015dd571 /crypto
parent      fc1caf6eafb30ea185720e29f7f5eccca61ecd60 (diff)
crypto: hash - Fix handling of small unaligned buffers
If a scatterwalk chain contains an entry with an unaligned offset then
hash_walk_next() will cut off the next step at the next alignment point.

However, if the entry ends before the next alignment point then we end up
in a loop, which leads to a kernel oops.

Fix this by checking whether the next alignment point is before the end of
the current entry.

Signed-off-by: Szilveszter Ördög <slipszi@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
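For illustration only, here is a small standalone sketch of the failure mode.
The walk_step() helper, the PAGE_SIZE value and the concrete numbers are made
up for this example; they mimic the old and new nbytes calculations but are
not the kernel code being patched.

#include <stdio.h>

/*
 * Standalone sketch of the nbytes calculation in hash_walk_next().
 * walk_step(), the PAGE_SIZE value and the numbers in main() are
 * illustrative stand-ins, not the real kernel interfaces.
 */
#define PAGE_SIZE 4096u

static unsigned int walk_step(unsigned int entrylen, unsigned int offset,
			      unsigned int alignmask, int fixed)
{
	unsigned int nbytes = entrylen < PAGE_SIZE - offset ?
			      entrylen : PAGE_SIZE - offset;

	if (offset & alignmask) {
		unsigned int unaligned = alignmask + 1 - (offset & alignmask);

		/* The old code always took the distance to the next
		 * alignment point; the fix only does so when at least
		 * that much data remains in the entry. */
		if (!fixed || nbytes > unaligned)
			nbytes = unaligned;
	}
	return nbytes;
}

int main(void)
{
	/* A 1-byte entry starting 2 bytes into a 4-byte alignment block. */
	unsigned int entrylen = 1, offset = 2, alignmask = 3;
	unsigned int old_nbytes = walk_step(entrylen, offset, alignmask, 0);
	unsigned int new_nbytes = walk_step(entrylen, offset, alignmask, 1);

	/* The old code reports 2 bytes although only 1 exists, so the
	 * subsequent "entrylen -= nbytes" wraps around and the walk
	 * never terminates. */
	printf("old nbytes = %u, entrylen - nbytes = %u\n",
	       old_nbytes, entrylen - old_nbytes);
	printf("new nbytes = %u, entrylen - nbytes = %u\n",
	       new_nbytes, entrylen - new_nbytes);
	return 0;
}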
Diffstat (limited to 'crypto')
-rw-r--r--    crypto/ahash.c    7
1 file changed, 5 insertions, 2 deletions
diff --git a/crypto/ahash.c b/crypto/ahash.c
index b8c59b8..f669822 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -47,8 +47,11 @@ static int hash_walk_next(struct crypto_hash_walk *walk)
 	walk->data = crypto_kmap(walk->pg, 0);
 	walk->data += offset;
 
-	if (offset & alignmask)
-		nbytes = alignmask + 1 - (offset & alignmask);
+	if (offset & alignmask) {
+		unsigned int unaligned = alignmask + 1 - (offset & alignmask);
+		if (nbytes > unaligned)
+			nbytes = unaligned;
+	}
 
 	walk->entrylen -= nbytes;
 	return nbytes;
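An observation on the shape of the fix (not part of the patch itself): just
above the hunk shown, hash_walk_next() has already limited nbytes to the
smaller of the remaining entry length and the rest of the page, so the
alignment check may now only lower nbytes, never raise it. That guarantees
walk->entrylen -= nbytes cannot wrap around, while unaligned heads that do
have enough data behind them are still cut at the next alignment boundary.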