author     Peter Zijlstra <peterz@infradead.org>    2011-07-22 15:26:05 +0200
committer  Ingo Molnar <mingo@elte.hu>              2011-08-04 10:17:54 +0200
commit     83835b3d9aec8e9f666d8223d8a386814f756266
tree       6112e44af7202c41b48d468eff8b5a28138efd33
parent     70a0686a72c7a7e554b404ca11406ceec709d425
slab, lockdep: Annotate slab -> rcu -> debug_object -> slab
Lockdep thinks there's lock recursion through:
  kmem_cache_free()
    cache_flusharray()
      spin_lock(&l3->list_lock)  <--------.
      free_block()                        |
        slab_destroy()                    |
          call_rcu()                      |
            debug_object_activate()       |
              debug_object_init()         |
  __debug_object_init()                   |
    kmem_cache_alloc()                    |
      cache_alloc_refill()                |
        spin_lock(&l3->list_lock)       --'
Now debug objects doesn't use SLAB_DESTROY_BY_RCU, and hence there is no
actual possibility of recursion. Luckily, debug objects marks its slab
cache with SLAB_DEBUG_OBJECTS, so we can identify the thing.
Mark all SLAB_DEBUG_OBJECTS (all one of them!) slab caches with a special
lockdep key so that lockdep sees it as a different cachep.
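As a rough illustration of the idea (not necessarily the exact hunk in this
patch): a single static struct lock_class_key can be shared by all
SLAB_DEBUG_OBJECTS caches and applied to their per-node list_lock via
lockdep_set_class(). The helper name slab_set_debugobj_lock_class() is made
up here, and the kmem_list3/nodelists access assumes the mm/slab.c internals
of this era:

  /* Sketch only, in mm/slab.c context; helper name is an assumption. */
  static struct lock_class_key debugobj_l3_key;

  static void slab_set_debugobj_lock_class(struct kmem_cache *cachep)
  {
  	int node;

  	for_each_online_node(node) {
  		struct kmem_list3 *l3 = cachep->nodelists[node];

  		if (!l3)
  			continue;
  		/*
  		 * Re-key this cache's list_lock so lockdep places it in
  		 * a class of its own, separate from ordinary slab caches.
  		 */
  		lockdep_set_class(&l3->list_lock, &debugobj_l3_key);
  	}
  }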
Also add a WARN on trying to create a SLAB_DESTROY_BY_RCU |
SLAB_DEBUG_OBJECTS cache, to avoid possible future trouble.
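The creation-time check could look roughly like the following sketch; the
exact placement in the cache-creation path and the use of WARN_ON_ONCE()
are assumptions, while the flags themselves are the real ones:

  /* Sketch of the cache-creation check; placement is an assumption. */
  if (flags & SLAB_DEBUG_OBJECTS) {
  	/*
  	 * SLAB_DESTROY_BY_RCU would make the recursion real:
  	 * slab_destroy() -> call_rcu() -> debug_object_activate()
  	 * -> kmem_cache_alloc() back into this very cache.
  	 */
  	WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);

  	slab_set_debugobj_lock_class(cachep);
  }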
Reported-and-tested-by: Sebastian Siewior <sebastian@breakpoint.cc>
[ fixes to the initial patch ]
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311341165.27400.58.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>