path: root/drivers/char/random.c
* random: Fix crashes with sparse node ids (Michael Ellerman, 2016-07-30, 1 file, -3/+2)

    On a system with sparse node ids, eg. a powerpc system with 4 nodes
    numbered like so:

      node 0:  [mem 0x0000000000000000-0x00000007ffffffff]
      node 1:  [mem 0x0000000800000000-0x0000000fffffffff]
      node 16: [mem 0x0000001000000000-0x00000017ffffffff]
      node 17: [mem 0x0000001800000000-0x0000001fffffffff]

    The code in rand_initialize() will allocate 4 pointers for the pool
    array and initialise them correctly. However, when we go to use the
    pool, in eg. extract_crng(), we use numa_node_id() to index into the
    array. For the higher-numbered node ids this leads to random memory
    corruption, depending on what was kmalloc'ed adjacent to the pool
    array.

    Fix it by using nr_node_ids to size the pool array.

    Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs")
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
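A self-contained userspace model of the indexing mismatch described above (variable names and node ids are illustrative, not taken from the kernel source): an array sized by the number of online nodes but indexed by node id runs off the end of the allocation as soon as the ids are sparse, which is exactly what nr_node_ids-style sizing avoids.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int online_ids[] = { 0, 1, 16, 17 };  /* sparse node ids, as in the commit above */
        int num_online = 4;                   /* old sizing: number of pointers allocated */
        int nr_ids = 18;                      /* nr_node_ids-style sizing: highest id + 1 */

        /* The fix: allocate one slot per possible id, not per online node. */
        void **pool = calloc(nr_ids, sizeof(void *));

        for (int i = 0; i < num_online; i++) {
            int id = online_ids[i];

            printf("node %2d: inside a %d-slot array? %s\n",
                   id, num_online, id < num_online ? "yes" : "no (out-of-bounds access)");
            pool[id] = &pool[id];             /* always in range with nr_ids slots */
        }
        free(pool);
        return 0;
    }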
* random: use for_each_online_node() to iterate over NUMA nodes (Theodore Ts'o, 2016-07-27, 1 file, -2/+1)

    This fixes a crash on s390 with fake NUMA enabled.

    Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs")
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: strengthen input validation for RNDADDTOENTCNT (Theodore Ts'o, 2016-07-03, 1 file, -6/+7)

    Don't allow RNDADDTOENTCNT or RNDADDENTROPY to accept a negative
    entropy value. It doesn't make any sense to subtract from the entropy
    counter, and it can trigger a warning:

      random: negative entropy/overflow: pool input count -40000
      ------------[ cut here ]------------
      WARNING: CPU: 3 PID: 6828 at drivers/char/random.c:670
        [< none >] credit_entropy_bits+0x21e/0xad0 drivers/char/random.c:670
      Modules linked in:
      CPU: 3 PID: 6828 Comm: a.out Not tainted 4.7.0-rc4+ #4
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
       ffffffff880b58e0 ffff88005dd9fcb0 ffffffff82cc838f ffffffff87158b40
       fffffbfff1016b1c 0000000000000000 0000000000000000 ffffffff87158b40
       ffffffff83283dae 0000000000000009 ffff88005dd9fcf8 ffffffff8136d27f
      Call Trace:
       [< inline >] __dump_stack lib/dump_stack.c:15
       [<ffffffff82cc838f>] dump_stack+0x12e/0x18f lib/dump_stack.c:51
       [<ffffffff8136d27f>] __warn+0x19f/0x1e0 kernel/panic.c:516
       [<ffffffff8136d48c>] warn_slowpath_null+0x2c/0x40 kernel/panic.c:551
       [<ffffffff83283dae>] credit_entropy_bits+0x21e/0xad0 drivers/char/random.c:670
       [< inline >] credit_entropy_bits_safe drivers/char/random.c:734
       [<ffffffff8328785d>] random_ioctl+0x21d/0x250 drivers/char/random.c:1546
       [< inline >] vfs_ioctl fs/ioctl.c:43
       [<ffffffff8185316c>] do_vfs_ioctl+0x18c/0xff0 fs/ioctl.c:674
       [< inline >] SYSC_ioctl fs/ioctl.c:689
       [<ffffffff8185405f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:680
       [<ffffffff86a995c0>] entry_SYSCALL_64_fastpath+0x23/0xc1 arch/x86/entry/entry_64.S:207
      ---[ end trace 5d4902b2ba842f1f ]---

    This was triggered using the test program:

      // autogenerated by syzkaller (http://github.com/google/syzkaller)
      int main()
      {
              int fd = open("/dev/random", O_RDWR);
              int val = -5000;

              ioctl(fd, RNDADDTOENTCNT, &val);
              return 0;
      }

    It's harmless in that (a) only root can trigger it, and (b) after
    complaining the code never does let the entropy count go negative, but
    it's better simply not to allow userspace to pass in a negative entropy
    value at all.

    Google-Bug-Id: #29575089
    Reported-By: Dmitry Vyukov <dvyukov@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: add backtracking protection to the CRNG (Theodore Ts'o, 2016-07-03, 1 file, -5/+49)

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: make /dev/urandom scalable for silly userspace programs (Theodore Ts'o, 2016-07-03, 1 file, -4/+58)

    On a 4-socket NUMA system where a large number of application threads
    were all trying to read from /dev/urandom, this can result in the
    system spending 80% of its time contending on the global urandom
    spinlock. The application should have used its own PRNG, but let's try
    to keep it from running, lemming-like, straight over the locking cliff.

    Reported-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: replace non-blocking pool with a Chacha20-based CRNG (Theodore Ts'o, 2016-07-03, 1 file, -102/+276)

    The CRNG is faster, and we don't pretend to track entropy usage in the
    CRNG any more.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: properly align get_random_int_hash (Eric Biggers, 2016-06-13, 1 file, -1/+3)

    get_random_long() reads from the get_random_int_hash array using an
    unsigned long pointer. For this code to be guaranteed correct on all
    architectures, the array must be aligned to an unsigned long boundary.

    Cc: stable@kernel.org
    Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: add interrupt callback to VMBus IRQ handler (Stephan Mueller, 2016-06-13, 1 file, -0/+1)

    The Hyper-V Linux Integration Services use the VMBus implementation for
    communication with the Hypervisor. VMBus registers its own interrupt
    handler that completely bypasses the common Linux interrupt handling.
    This implies that the interrupt entropy collector is not triggered.

    This patch adds the interrupt entropy collection callback into the
    VMBus interrupt handler function.

    Cc: stable@kernel.org
    Signed-off-by: Stephan Mueller <stephan.mueller@atsec.com>
    Signed-off-by: Stephan Mueller <smueller@chronox.de>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: print a warning for the first ten uninitialized random users (Theodore Ts'o, 2016-06-13, 1 file, -4/+8)

    Since systemd is consistently using /dev/urandom before it is
    initialized, we can't see the other potentially dangerous users of
    /dev/urandom immediately after boot. So print the first ten such
    complaints instead.

    Cc: stable@kernel.org
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: initialize the non-blocking pool via add_hwgenerator_randomness() (Theodore Ts'o, 2016-06-13, 1 file, -5/+11)

    If we have a hardware RNG and are using the in-kernel rngd, we should
    use this to initialize the non-blocking pool so that getrandom(2)
    doesn't block unnecessarily.

    Cc: stable@kernel.org
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* lib/uuid.c: move generate_random_uuid() to uuid.c (Andy Shevchenko, 2016-05-20, 1 file, -20/+1)

    Let's gather the UUID related functions under one hood.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
    Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
    Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
    Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: "Theodore Ts'o" <tytso@mit.edu>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* drivers: char: random: add get_random_long() (Daniel Cashman, 2016-02-27, 1 file, -0/+22)

    Commit d07e22597d1d ("mm: mmap: add new /proc tunable for mmap_base
    ASLR") added the ability to choose from a range of values to use for
    entropy count in generating the random offset to the mmap_base address.

    The maximum value on this range was set to 32 bits for 64-bit x86
    systems, but this value could be increased further, requiring more than
    the 32 bits of randomness provided by get_random_int(), as is already
    possible for arm64. Add a new function: get_random_long() which more
    naturally fits with the mmap usage of get_random_int() but operates
    exactly the same as get_random_int().

    Also, fix the shifting constant in mmap_rnd() to be an unsigned long so
    that values greater than 31 bits generate an appropriate mask without
    overflow. This is especially important on x86, as its shift instruction
    uses a 5-bit mask for the shift operand, which meant that any value for
    mmap_rnd_bits over 31 acts as a no-op and effectively disables
    mmap_base randomization.

    Finally, replace calls to get_random_int() with get_random_long() where
    appropriate.

    This patch (of 2): Add get_random_long().

    Signed-off-by: Daniel Cashman <dcashman@android.com>
    Acked-by: Kees Cook <keescook@chromium.org>
    Cc: "Theodore Ts'o" <tytso@mit.edu>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Nick Kralevich <nnk@google.com>
    Cc: Jeff Vander Stoep <jeffv@google.com>
    Cc: Mark Salyzyn <salyzyn@android.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
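A minimal userspace sketch of the two points above (illustrative only; the kernel's get_random_long() shares get_random_int()'s hash state rather than calling it twice): a 64-bit value assembled from 32-bit draws, and a mask constant that must be unsigned long once the bit count can exceed 31. Assumes a 64-bit build.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the kernel's 32-bit primitive; not cryptographically meaningful. */
    static uint32_t get_random_int_stub(void)
    {
        return (uint32_t)random();
    }

    /* Widen two 32-bit draws into one 64-bit value. */
    static uint64_t get_random_long_sketch(void)
    {
        return ((uint64_t)get_random_int_stub() << 32) | get_random_int_stub();
    }

    int main(void)
    {
        unsigned long bits = 33;                /* an mmap_rnd_bits-style setting above 31 */
        unsigned long mask = (1UL << bits) - 1; /* 1UL, not 1: the mask must be a long */

        srandom(1);
        printf("random offset: %#lx\n",
               (unsigned long)get_random_long_sketch() & mask);
        return 0;
    }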
* random: Remove kernel blocking API (Herbert Xu, 2015-06-10, 1 file, -12/+0)

    This patch removes the kernel blocking API as it has been completely
    replaced by the callback API.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* random: Add callback API for random pool readiness (Herbert Xu, 2015-06-10, 1 file, -0/+78)

    The get_blocking_random_bytes API is broken because the wait can be
    arbitrarily long (potentially forever), so there is no safe way of
    calling it from within the kernel. This patch replaces it with a
    callback API instead. The callback is invoked potentially from
    interrupt context, so the user needs to schedule their own work thread
    if necessary.

    In addition to adding callbacks, they can also be removed, as otherwise
    this opens up a way for user-space to allocate kernel memory with no
    bound (by opening algif_rng descriptors and then closing them).

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* random: Blocking API for accessing nonblocking_pool (Stephan Mueller, 2015-05-27, 1 file, -0/+12)

    The added API calls provide a synchronous function call
    get_blocking_random_bytes where the caller is blocked until the
    nonblocking_pool is initialized.

    CC: Andreas Steffen <andreas.steffen@strongswan.org>
    CC: Theodore Ts'o <tytso@mit.edu>
    CC: Sandy Harris <sandyinchina@gmail.com>
    Signed-off-by: Stephan Mueller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* random: Wake up all getrandom(2) callers when pool is ready (Herbert Xu, 2015-05-27, 1 file, -1/+1)

    If more than one application invokes getrandom(2) before the pool is
    ready, then all bar one will be stuck forever because we use
    wake_up_interruptible, which wakes up a single task. This patch
    replaces it with wake_up_all.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* random: Fix fast_mix() function (George Spelvin, 2015-02-09, 1 file, -4/+4)

    There was a bad typo in commit 43759d4f429c ("random: use an improved
    fast_mix() function") and I didn't notice because it "looked right", so
    I saw what I expected to see when I reviewed it. Only months later did
    I look and notice it's not the Threefish-inspired mix function that I
    had designed and optimized.

    Mea Culpa. Each input bit still has a chance to affect each output bit,
    and the fast pool is spilled *long* before it fills, so it's not a
    total disaster, but it's definitely not the intended great improvement.

    I'm still working on finding better rotation constants. These are good
    enough, but since it's unrolled twice, it's possible to get better
    mixing for free by using eight different constants rather than
    repeating the same four.

    Signed-off-by: George Spelvin <linux@horizon.com>
    Cc: Theodore Ts'o <tytso@mit.edu>
    Cc: stable@vger.kernel.org # v3.16+
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
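For readers unfamiliar with the function being discussed, the general shape of a fast_mix()-style round is sketched below (the rotation constants here are placeholders chosen for illustration, not necessarily the ones used in the driver): each 32-bit word of the fast pool is added into a neighbour, rotated, and xor'ed back, so that every input bit can reach every output bit after a couple of rounds.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t rol32(uint32_t x, unsigned int r)
    {
        return (x << r) | (x >> (32 - r));
    }

    /* One unrolled double round over a four-word pool, fast_mix()-style. */
    static void fast_mix_sketch(uint32_t pool[4])
    {
        uint32_t a = pool[0], b = pool[1], c = pool[2], d = pool[3];

        a += b; c += d;                       /* mix neighbouring words */
        b = rol32(b, 6);  d = rol32(d, 27);   /* placeholder rotation constants */
        d ^= a; b ^= c;

        a += b; c += d;                       /* second half of the unrolled round */
        b = rol32(b, 16); d = rol32(d, 14);
        d ^= a; b ^= c;

        pool[0] = a; pool[1] = b; pool[2] = c; pool[3] = d;
    }

    int main(void)
    {
        uint32_t pool[4] = { 1, 2, 3, 4 };

        fast_mix_sketch(pool);
        printf("%08x %08x %08x %08x\n", pool[0], pool[1], pool[2], pool[3]);
        return 0;
    }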
* Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random (Linus Torvalds, 2014-10-24, 1 file, -4/+4)

    Pull /dev/random updates from Ted Ts'o:
     "This adds a memzero_explicit() call which is guaranteed not to be
      optimized away by GCC. This is important when we are wiping
      cryptographically sensitive material"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
      crypto: memzero_explicit - make sure to clear out sensitive data
      random: add and use memzero_explicit() for clearing data
| * random: add and use memzero_explicit() for clearing data (Daniel Borkmann, 2014-10-17, 1 file, -4/+4)

    zatimend has reported that in his environment (3.16/gcc4.8.3/corei7)
    memset() calls which clear out sensitive data in
    extract_{buf,entropy,entropy_user}() in the random driver are being
    optimized away by gcc.

    Add a helper memzero_explicit() (similar to the explicit_bzero()
    variants) that can be used in such cases where a variable with
    sensitive data is being cleared out in the end. Other use cases might
    also be in crypto code.

    [ I have put this into lib/string.c though, as it's always built-in
      and doesn't need any dependencies then. ]

    Fixes kernel bugzilla: 82041

    Reported-by: zatimend@hotmail.co.uk
    Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
    Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
    Cc: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: stable@vger.kernel.org
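The idea behind memzero_explicit() can be sketched in userspace as follows (an illustration of the technique, not the kernel's implementation, which lives in lib/string.c and uses a compiler barrier): a plain memset() on a buffer that is dead afterwards may be deleted as a dead store, while a call made through a volatile function pointer cannot be proven away.

    #include <string.h>

    /* The compiler must reload this pointer, so it cannot elide the call. */
    static void *(*const volatile memset_keep)(void *, int, size_t) = memset;

    static void memzero_explicit_sketch(void *p, size_t n)
    {
        memset_keep(p, 0, n);
    }

    int main(void)
    {
        char key[32] = "sensitive material";

        /* A bare memset(key, 0, sizeof(key)) here could legally be dropped. */
        memzero_explicit_sketch(key, sizeof(key));
        return 0;
    }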
* | drivers/char/random: Replace __get_cpu_var uses (Christoph Lameter, 2014-08-26, 1 file, -1/+1)

    A single case of using __get_cpu_var for address calculation.

    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
* Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random (Linus Torvalds, 2014-08-06, 1 file, -128/+187)

    Pull randomness updates from Ted Ts'o:
     "Cleanups and bug fixes to /dev/random, add a new getrandom(2) system
      call, which is a superset of OpenBSD's getentropy(2) call, for use
      with userspace crypto libraries such as LibreSSL.

      Also add the ability to have a kernel thread to pull entropy from
      hardware rng devices into /dev/random"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
      hwrng: Pass entropy to add_hwgenerator_randomness() in bits, not bytes
      random: limit the contribution of the hw rng to at most half
      random: introduce getrandom(2) system call
      hw_random: fix sparse warning (NULL vs 0 for pointer)
      random: use registers from interrupted code for CPU's w/o a cycle counter
      hwrng: add per-device entropy derating
      hwrng: create filler thread
      random: add_hwgenerator_randomness() for feeding entropy from devices
      random: use an improved fast_mix() function
      random: clean up interrupt entropy accounting for archs w/o cycle counters
      random: only update the last_pulled time if we actually transferred entropy
      random: remove unneeded hash of a portion of the entropy pool
      random: always update the entropy pool under the spinlock
| * random: limit the contribution of the hw rng to at most half (Theodore Ts'o, 2014-08-05, 1 file, -39/+4)

    For people who don't trust a hardware RNG which can not be audited,
    the changes to add support for RDSEED can be troubling since 97% or
    more of the entropy will be contributed from the in-CPU hardware RNG.

    We now have an in-kernel khwrngd, so for those people who do want to
    implicitly trust the CPU-based system, we could create an arch-rng
    hw_random driver, and allow khwrngd to refill the entropy pool. This
    lets the system administrator decide whether or not they trust the CPU
    (I assume the NSA will trust RDRAND/RDSEED implicitly :-), and if so,
    what level of entropy derating they want to use.

    The reason why this is a really good idea is that if different people
    use different levels of entropy derating, it will make it much more
    difficult to design a backdoor'ed hwrng that can be generally
    exploited in terms of the output of /dev/random when different attack
    targets are using differing levels of entropy derating.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
| * random: introduce getrandom(2) system call (Theodore Ts'o, 2014-08-05, 1 file, -3/+37)

    The getrandom(2) system call was requested by the LibreSSL Portable
    developers. It is analogous to the getentropy(2) system call in
    OpenBSD.

    The rationale of this system call is to provide resilience against
    file descriptor exhaustion attacks, where the attacker consumes all
    available file descriptors, forcing the use of the fallback code where
    /dev/[u]random is not available. Since the fallback code is often not
    well-tested, it is better to eliminate this potential failure mode
    entirely.

    The other feature provided by this new system call is the ability to
    request randomness from the /dev/urandom entropy pool, but to block
    until at least 128 bits of entropy has been accumulated in the
    /dev/urandom entropy pool. Historically, the emphasis in the
    /dev/urandom development has been to ensure that the urandom pool is
    initialized as quickly as possible after system boot, and preferably
    before the init scripts start execution. This is because changing
    /dev/urandom reads to block represents an interface change that could
    potentially break userspace, which is not acceptable. In practice, on
    most x86 desktop and server systems, the entropy pool can in general
    be initialized before it is needed (and in modern kernels, we will
    printk a warning message if not). However, on an embedded system, this
    may not be the case. And so with this new interface, we can provide
    the functionality of blocking until the urandom pool has been
    initialized. Any userspace program which uses this new functionality
    must take care to assure that if it is used during the boot process,
    it will not cause the init scripts or other portions of the system
    startup to hang indefinitely.

    SYNOPSIS
        #include <linux/random.h>

        int getrandom(void *buf, size_t buflen, unsigned int flags);

    DESCRIPTION
        The system call getrandom() fills the buffer pointed to by buf
        with up to buflen random bytes which can be used to seed user
        space random number generators (i.e., DRBG's) or for other
        cryptographic uses. It should not be used for Monte Carlo
        simulations or other programs/algorithms which are doing
        probabilistic sampling.

        If the GRND_RANDOM flags bit is set, then draw from the
        /dev/random pool instead of the /dev/urandom pool. The
        /dev/random pool is limited based on the entropy that can be
        obtained from environmental noise, so if there is insufficient
        entropy, the requested number of bytes may not be returned. If
        there is no entropy available at all, getrandom(2) will either
        block, or return an error with errno set to EAGAIN if the
        GRND_NONBLOCK bit is set in flags.

        If the GRND_RANDOM bit is not set, then the /dev/urandom pool
        will be used. Unlike using read(2) to fetch data from
        /dev/urandom, if the urandom pool has not been sufficiently
        initialized, getrandom(2) will block (or return -1 with errno
        set to EAGAIN if the GRND_NONBLOCK bit is set in flags).

        The getentropy(2) system call in OpenBSD can be emulated using
        the following function:

            int getentropy(void *buf, size_t buflen)
            {
                    int     ret;

                    if (buflen > 256)
                            goto failure;
                    ret = getrandom(buf, buflen, 0);
                    if (ret < 0)
                            return ret;
                    if (ret == buflen)
                            return 0;
            failure:
                    errno = EIO;
                    return -1;
            }

    RETURN VALUE
        On success, the number of bytes that was filled in the buf is
        returned. This may not be all the bytes requested by the caller
        via buflen if insufficient entropy was present in the
        /dev/random pool, or if the system call was interrupted by a
        signal.

        On error, -1 is returned, and errno is set appropriately.

    ERRORS
        EINVAL An invalid flag was passed to getrandom(2)

        EFAULT buf is outside the accessible address space.

        EAGAIN The requested entropy was not available, and
               getentropy(2) would have blocked if the GRND_NONBLOCK
               flag was not set.

        EINTR  While blocked waiting for entropy, the call was
               interrupted by a signal handler; see the description of
               how interrupted read(2) calls on "slow" devices are
               handled with and without the SA_RESTART flag in the
               signal(7) man page.

    NOTES
        For small requests (buflen <= 256) getrandom(2) will not return
        EINTR when reading from the urandom pool once the entropy pool
        has been initialized, and it will return all of the bytes that
        have been requested. This is the recommended way to use
        getrandom(2), and is designed for compatibility with OpenBSD's
        getentropy() system call.

        However, if you are using GRND_RANDOM, then getrandom(2) may
        block until the entropy accounting determines that sufficient
        environmental noise has been gathered such that getrandom(2)
        will be operating as a NRBG instead of a DRBG for those people
        who are working in the NIST SP 800-90 regime. Since it may
        block for a long time, these guarantees do *not* apply. The
        user may want to interrupt a hanging process using a signal,
        so blocking until all of the requested bytes are returned
        would be unfriendly.

        For this reason, the user of getrandom(2) MUST always check
        the return value, in case it returns some error, or if fewer
        bytes than requested were returned. In the case of
        !GRND_RANDOM and a small request, the latter should never
        happen, but careful userspace code (and all crypto code should
        be careful) should check for this anyway!

        Finally, unless you are doing long-term key generation (and
        perhaps not even then), you probably shouldn't be using
        GRND_RANDOM. The cryptographic algorithms used for
        /dev/urandom are quite conservative, and so should be
        sufficient for all purposes. The disadvantage of GRND_RANDOM
        is that it can block, and the increased complexity required to
        deal with partially fulfilled getrandom(2) requests.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Reviewed-by: Zach Brown <zab@zabbo.net>
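A minimal userspace usage example (assumes a glibc new enough, 2.25 or later, to expose the getrandom() wrapper in <sys/random.h>; older systems can invoke the raw syscall instead):

    #include <stdio.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char seed[32];
        /* Flags 0: read the urandom pool, blocking only until it is initialized. */
        ssize_t n = getrandom(seed, sizeof(seed), 0);

        if (n < 0) {
            perror("getrandom");
            return 1;
        }
        printf("got %zd random bytes\n", n);
        return 0;
    }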
| * random: use registers from interrupted code for CPU's w/o a cycle counter (Theodore Ts'o, 2014-07-15, 1 file, -25/+22)

    For CPU's that don't have a cycle counter, or something equivalent
    which can be used for random_get_entropy(), random_get_entropy() will
    always return 0. In that case, substitute with the saved interrupt
    registers to add a bit more unpredictability.

    Some folks have suggested hashing all of the registers
    unconditionally, but this would increase the overhead of
    add_interrupt_randomness() by at least an order of magnitude, and this
    would very likely be unacceptable. The changes in this commit have
    been benchmarked as mostly unaffecting the overhead of
    add_interrupt_randomness() if the entropy counter is present, and
    doubling the overhead if it is not present.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: Jörn Engel <joern@logfs.org>
| * random: add_hwgenerator_randomness() for feeding entropy from devices (Torsten Duwe, 2014-07-15, 1 file, -0/+21)

    This patch adds an interface to the random pool for feeding entropy
    in-kernel.

    Signed-off-by: Torsten Duwe <duwe@suse.de>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
| * random: use an improved fast_mix() function (Theodore Ts'o, 2014-07-15, 1 file, -24/+68)

    Use more efficient fast_mix() function. Thanks to George Spelvin for
    doing the leg work to find a more efficient mixing function.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: George Spelvin <linux@horizon.com>
| * random: clean up interrupt entropy accounting for archs w/o cycle counters (Theodore Ts'o, 2014-07-15, 1 file, -19/+23)

    For architectures that don't have cycle counters, the algorithm for
    deciding when to avoid giving entropy credit due to back-to-back timer
    interrupts didn't make any sense, since we were checking every 64
    interrupts. Change it so that we only give an entropy credit if the
    majority of the interrupts are not based on the timer.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: George Spelvin <linux@horizon.com>
| * random: only update the last_pulled time if we actually transferred entropy (Theodore Ts'o, 2014-07-15, 1 file, -4/+7)

    In xfer_secondary_pull(), check to make sure we need to pull from the
    secondary pool before checking and potentially updating the
    last_pulled time.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: George Spelvin <linux@horizon.com>
| * random: remove unneeded hash of a portion of the entropy pool (Theodore Ts'o, 2014-07-15, 1 file, -31/+20)

    We previously extracted a portion of the entropy pool in
    mix_pool_bytes() and hashed it in to prevent racing CPU's from
    returning duplicate random values. Now that we are using a spinlock to
    prevent this from happening, this is no longer necessary. So remove
    it, to simplify the code a bit.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: George Spelvin <linux@horizon.com>
| * random: always update the entropy pool under the spinlock (Theodore Ts'o, 2014-07-15, 1 file, -21/+23)

    Instead of using lockless techniques introduced in commit
    902c098a3663, use spin_trylock to try to grab the entropy pool's lock.
    If we can't get the lock, then just try again on the next interrupt.

    Based on discussions with George Spelvin.

    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: George Spelvin <linux@horizon.com>
* | random: check for increase of entropy_count because of signed conversion (Hannes Frederic Sowa, 2014-07-19, 1 file, -3/+14)

    The expression

        entropy_count -= ibytes << (ENTROPY_SHIFT + 3)

    could actually increase entropy_count if, during assignment of the
    unsigned expression on the RHS (mind the -=), we reduce the value
    modulo 2^width(int) and assign it to entropy_count. Trinity found
    this.

    [ Commit modified by tytso to add an additional safety check for a
      negative entropy_count -- which should never happen, and to also add
      an additional paranoia check to prevent overly large count values
      to be passed into urandom_read(). ]

    Reported-by: Dave Jones <davej@redhat.com>
    Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: stable@vger.kernel.org
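The wraparound is easy to reproduce in a standalone program (the constants below are chosen for demonstration; ENTROPY_SHIFT is 3 in the driver, so the shift amount is 6): on common toolchains the "subtraction" leaves the signed counter far larger than before.

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        int entropy_count = 100;       /* signed counter, as in the driver */
        size_t ibytes = 0x3000001;     /* absurdly large request, as a fuzzer might pass */

        /* entropy_count is promoted to size_t, the result wraps below zero,
         * and the huge unsigned value is then truncated back into the int. */
        entropy_count -= ibytes << 6;

        /* Prints 1073741860 with typical compilers (the out-of-range
         * conversion back to int is implementation-defined). */
        printf("entropy_count is now %d\n", entropy_count);
        return 0;
    }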
* Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random (Linus Torvalds, 2014-06-17, 1 file, -8/+9)

    Pull randomness bugfix from Ted Ts'o:
     "random: fix entropy accounting bug introduced in v3.15"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
      random: fix nasty entropy accounting bug
| * random: fix nasty entropy accounting bug (Theodore Ts'o, 2014-06-15, 1 file, -8/+9)

    Commit 0fb7a01af5b0 "random: simplify accounting code", introduced in
    v3.15, has a very nasty accounting problem when the entropy pool has
    fewer bytes of entropy than the number of requested reserved bytes. In
    that case, "have_bytes - reserved" goes negative, and since size_t is
    unsigned, the expression:

        ibytes = min_t(size_t, ibytes, have_bytes - reserved);

    ... does not do the right thing. This is rather bad, because it
    defeats the catastrophic reseeding feature in the
    xfer_secondary_pool() path.

    It also can cause the "BUG: spinlock trylock failure on UP" for some
    kernel configurations when prandom_reseed() calls get_random_bytes()
    in the early init, since when the entropy count gets corrupted,
    credit_entropy_bits() erroneously believes that the nonblocking pool
    has been fully initialized (when in fact it is not), and so it calls
    prandom_reseed(true) recursively, leading to the spinlock BUG.

    The logic is *not* the same as it was originally, but in the cases
    where it matters, the behavior is the same, and the resulting code is
    hopefully easier to read and understand.

    Fixes: 0fb7a01af5b0 "random: simplify accounting code"
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Cc: Greg Price <price@mit.edu>
    Cc: stable@vger.kernel.org #v3.15
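The problematic expression can be demonstrated in isolation (a standalone illustration using a local min_t macro; the values are made up): once have_bytes is smaller than reserved, the unsigned subtraction wraps and the min_t() no longer limits ibytes at all.

    #include <stddef.h>
    #include <stdio.h>

    #define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

    int main(void)
    {
        size_t have_bytes = 4;     /* the pool holds less entropy ... */
        size_t reserved   = 16;    /* ... than the amount held in reserve */
        size_t ibytes     = 32;    /* what the caller asked for */

        /* have_bytes - reserved wraps to a huge size_t, so min_t() picks ibytes. */
        size_t limited = min_t(size_t, ibytes, have_bytes - reserved);

        printf("limited = %zu (should have been 0)\n", limited);   /* prints 32 */
        return 0;
    }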
* | random: convert use of typedef ctl_table to struct ctl_table (Joe Perches, 2014-06-06, 1 file, -2/+2)

    This typedef is unnecessary and should just be removed.

    Signed-off-by: Joe Perches <joe@perches.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'for-3.16/core' of git://git.kernel.dk/linux-block into next (Linus Torvalds, 2014-06-02, 1 file, -0/+1)

    Pull block core updates from Jens Axboe:
     "It's a big(ish) round this time, lots of development effort has gone
      into blk-mq in the last 3 months. Generally we're heading to where
      3.16 will be a feature complete and performant blk-mq. scsi-mq is
      progressing nicely and will hopefully be in 3.17. A nvme port is in
      progress, and the Micron pci-e flash driver, mtip32xx, is converted
      and will be sent in with the driver pull request for 3.16.

      This pull request contains:

       - Lots of prep and support patches for scsi-mq have been
         integrated. All from Christoph.

       - API and code cleanups for blk-mq from Christoph.

       - Lots of good corner case and error handling cleanup fixes for
         blk-mq from Ming Lei.

       - A slew of blk-mq updates from me:

         * Provide strict mappings so that the driver can rely on the CPU
           to queue mapping. This enables optimizations in the driver.

         * Provided a bitmap tagging instead of percpu_ida, which never
           really worked well for blk-mq. percpu_ida relies on the fact
           that we have a lot more tags available than we really need, it
           fails miserably for cases where we exhaust (or are close to
           exhausting) the tag space.

         * Provide sane support for shared tag maps, as utilized by
           scsi-mq.

         * Various fixes for IO timeouts.

         * API cleanups, and lots of perf tweaks and optimizations.

       - Remove 'buffer' from struct request. This is ancient code, from
         when requests were always virtually mapped. Kill it, to reclaim
         some space in struct request. From me.

       - Remove 'magic' from blk_plug. Since we store these on the stack
         and since we've never caught any actual bugs with this, let's
         just get rid of it. From me.

       - Only call part_in_flight() once for IO completion, as it includes
         two atomic reads. Hopefully we'll get a better implementation
         soon, as the part IO stats are now one of the more expensive
         parts of doing IO on blk-mq. From me.

       - File migration of block code from {mm,fs}/ to block/. This
         includes bio.c, bio-integrity.c, bounce.c, and ioprio.c. From me,
         from a discussion on lkml.

      That should describe the meat of the pull request. Also has various
      little fixes and cleanups from Dave Jones, Shaohua Li, Duan Jiong,
      Fengguang Wu, Fabian Frederick, Randy Dunlap, Robert Elliott, and
      Sam Bradshaw"

    * 'for-3.16/core' of git://git.kernel.dk/linux-block: (100 commits)
      blk-mq: push IPI or local end_io decision to __blk_mq_complete_request()
      blk-mq: remember to start timeout handler for direct queue
      block: ensure that the timer is always added
      blk-mq: blk_mq_unregister_hctx() can be static
      blk-mq: make the sysfs mq/ layout reflect current mappings
      blk-mq: blk_mq_tag_to_rq should handle flush request
      block: remove dead code in scsi_ioctl:blk_verify_command
      blk-mq: request initialization optimizations
      block: add queue flag for disabling SG merging
      block: remove 'magic' from struct blk_plug
      blk-mq: remove alloc_hctx and free_hctx methods
      blk-mq: add file comments and update copyright notices
      blk-mq: remove blk_mq_alloc_request_pinned
      blk-mq: do not use blk_mq_alloc_request_pinned in blk_mq_map_request
      blk-mq: remove blk_mq_wait_for_tags
      blk-mq: initialize request in __blk_mq_alloc_request
      blk-mq: merge blk_mq_alloc_reserved_request into blk_mq_alloc_request
      blk-mq: add helper to insert requests from irq context
      blk-mq: remove stale comment for blk_mq_complete_request()
      blk-mq: allow non-softirq completions
      ...
| * random: export add_disk_randomness (Christoph Hellwig, 2014-04-28, 1 file, -0/+1)

    This will be needed for pending changes to the scsi midlayer that now
    calls lower level block APIs, as well as any blk-mq driver that wants
    to contribute to the random pool.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: "Theodore Ts'o" <tytso@mit.edu>
    Signed-off-by: Jens Axboe <axboe@fb.com>
* | random: fix BUG_ON caused by accounting simplification (Theodore Ts'o, 2014-05-16, 1 file, -2/+5)

    Commit ee1de406ba6eb1 ("random: simplify accounting logic") simplified
    things too much, in that it allows the following to trigger an
    overflow that results in a BUG_ON crash:

        dd if=/dev/urandom of=/dev/zero bs=67108707 count=1

    Thanks to Peter Zijlstra for discovering the crash, and to Hannes
    Frederic Sowa for analyzing the root cause.

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    Reported-by: Peter Zijlstra <peterz@infradead.org>
    Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
    Cc: Greg Price <price@mit.edu>
* random: Add arch_has_random[_seed]() (H. Peter Anvin, 2014-03-19, 1 file, -0/+3)

    Add predicate functions for having arch_get_random[_seed]*(). The only
    current use is to avoid the loop in arch_random_refill() when
    arch_get_random_seed_long() is unavailable.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: If we have arch_get_random_seed*(), try it before blocking (H. Peter Anvin, 2014-03-19, 1 file, -0/+33)

    If we have arch_get_random_seed*(), try to use it for emergency refill
    of the entropy pool before giving up and blocking on /dev/random. It
    may or may not work in the moment, but if it does work, it will give
    the user better service than blocking will.

    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
* random: Use arch_get_random_seed*() at init time and once a second (H. Peter Anvin, 2014-03-19, 1 file, -4/+20)

    Use arch_get_random_seed*() in two places in the Linux random driver
    (drivers/char/random.c):

    1. During entropy pool initialization, use RDSEED in favor of RDRAND,
       with a fallback to the latter. Entropy exhaustion is unlikely to
       happen there on physical hardware as the machine is single-threaded
       at that point, but could happen in a virtual machine. In that case,
       the fallback to RDRAND will still provide more than adequate
       entropy pool initialization.

    2. Once a second, issue RDSEED and, if successful, feed it to the
       entropy pool. To ensure an extra layer of security, only credit
       half the entropy just in case.

    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
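The "RDSEED first, RDRAND as fallback" pattern from point 1 can be sketched with the compiler intrinsics (x86-64 only, build with gcc -mrdseed -mrdrnd; an illustration of the idea, not the kernel's arch_get_random_seed_long() plumbing):

    #include <immintrin.h>
    #include <stdio.h>

    /* Returns 1 and fills *v on success, 0 if no hardware randomness was available. */
    static int get_seed_long(unsigned long long *v)
    {
        if (_rdseed64_step(v))       /* conditioned true entropy; may be temporarily exhausted */
            return 1;
        return _rdrand64_step(v);    /* DRBG output as the fallback */
    }

    int main(void)
    {
        unsigned long long v;

        if (get_seed_long(&v))
            printf("seed word: %llx\n", v);
        else
            printf("no hardware randomness available right now\n");
        return 0;
    }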
* random: use the architectural HWRNG for the SHA's IV in extract_buf() (Theodore Ts'o, 2014-03-19, 1 file, -8/+8)

    To help assuage the fears of those who think the NSA can introduce a
    massive hack into the instruction decode and out of order execution
    engine in the CPU without hundreds of Intel engineers knowing about it
    (only one of which would need to have the conscience and courage of
    Edward Snowden to spill the beans to the public), use the HWRNG to
    initialize the SHA starting value, instead of xor'ing it in
    afterwards.

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: clarify bits/bytes in wakeup thresholds (Greg Price, 2014-03-19, 1 file, -17/+17)

    These are a recurring cause of confusion, so rename them to hopefully
    be clearer.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: entropy_bytes is actually bits (Greg Price, 2014-03-19, 1 file, -3/+3)

    The variable 'entropy_bytes' is set from an expression that actually
    counts bits. Fortunately it's also only compared to values that also
    count bits. Rename it accordingly.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: simplify accounting code (Greg Price, 2014-03-19, 1 file, -17/+10)

    With this we handle "reserved" in just one place. As a bonus the code
    becomes less nested, and the "wakeup_write" flag variable becomes
    unnecessary. The variable "flags" was already unused.

    This code behaves identically to the previous version except in two
    pathological cases that don't occur. If the argument "nbytes" is
    already less than "min", then we didn't previously enforce "min". If
    r->limit is false while "reserved" is nonzero, then we previously
    applied "reserved" in checking whether we had enough bits, even though
    we don't apply it to actually limit how many we take. The callers of
    account() never exercise either of these cases. Before the previous
    commit, it was possible for "nbytes" to be less than "min" if
    userspace chose a pathological configuration, but no longer.

    Cc: Jiri Kosina <jkosina@suse.cz>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: tighten bound on random_read_wakeup_thresh (Greg Price, 2014-03-19, 1 file, -1/+1)

    We use this value in a few places other than its literal meaning, in
    particular in _xfer_secondary_pool() as a minimum number of bits to
    pull from the input pool at a time into either output pool. It doesn't
    make sense to pull more bits than the whole size of an output pool.

    We could and possibly should separate the quantities "how much should
    the input pool have to have to wake up /dev/random readers" and "how
    much should we transfer from the input to an output pool at a time",
    but nobody is likely to be sad they can't set the first quantity to
    more than 1024 bits, so for now just limit them both.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: forget lock in lockless accounting (Greg Price, 2014-03-19, 1 file, -4/+0)

    The only mutable data accessed here is ->entropy_count, but since
    10b3a32d2 ("random: fix accounting race condition") we use cmpxchg to
    protect our accesses to ->entropy_count here. Drop the use of the
    lock.

    Cc: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: simplify accounting logic (Greg Price, 2014-03-19, 1 file, -8/+4)

    This logic is exactly equivalent to the old logic, but it should be
    easier to see what it's doing.

    The equivalence depends on one fact from outside this function: when
    'r->limit' is false, 'reserved' is zero. (Well, two facts; the other
    is that 'reserved' is never negative.)

    Cc: Jiri Kosina <jkosina@suse.cz>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: fix comment on "account" (Greg Price, 2014-03-19, 1 file, -10/+21)

    This comment didn't quite keep up as extract_entropy() was split into
    four functions. Put each bit by the function it describes.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: simplify loop in random_read (Greg Price, 2014-03-19, 1 file, -39/+18)

    The loop condition never changes until just before a break, so we
    might as well write it as a constant. Also, since a996996dd75a
    ("random: drop weird m_time/a_time manipulation") we don't do anything
    after the loop finishes, so the 'break's might as well return
    directly. Some other simplifications.

    There should be no change in behavior introduced by this commit.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* random: fix description of get_random_bytes (Greg Price, 2014-03-19, 1 file, -2/+3)

    After this remark was written, commit d2e7c96af added a use of
    arch_get_random_long() inside the get_random_bytes codepath. The main
    point stands, but it needs to be reworded.

    Signed-off-by: Greg Price <price@mit.edu>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>