Commit log for sys/kern/kern_rwlock.c
* MFC r285706,r303562,r303563,r303584,r303643,r303652,r303655,r303707  (mjg, 2016-12-31, 1 file, -33/+70)

    (by markj) Don't increment the spin count until after the first attempt
    to acquire a rwlock read lock. Otherwise the lockstat:::rw-spin probe
    will fire spuriously.
    ==
    rwlock: s/READER/WRITER/ in wlock lockstat annotation
    ==
    sx: increment spin_cnt before cpu_spinwait in xlock. The change is a
    no-op, only done for consistency with the rest of the file.
    ==
    locks: change sleep_cnt and spin_cnt types to u_int. Both variables are
    uint64_t, but they only count spins or sleeps. All reasonable values
    which we can get here comfortably fit in 32-bit range.
    ==
    Implement trivial backoff for locking primitives. All current spinning
    loops retry an atomic op the first chance they get, which leads to
    performance degradation under load. One classic solution to the problem
    consists of delaying the test to an extent. This implementation has a
    trivial linear increment and a random factor for each attempt. For
    simplicity, this first-touch implementation only modifies spinning loops
    where the lock owner is running. Spin mutexes and thread lock were not
    modified. Current parameters are autotuned on boot based on mp_cpus.
    Autotune factors are very conservative and are subject to change later.
    ==
    locks: fix up ifdef guards introduced in r303643. Both sx and rwlocks
    had copy-pasted ADAPTIVE_MUTEXES instead of the correct define.
    ==
    locks: fix compilation for the KDTRACE_HOOKS && !ADAPTIVE_* case.
    ==
    locks: fix sx compilation on mips after r303643. The kernel.h header is
    required for the SYSINIT macro, which apparently was present on amd64
    by accident.
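[Editor's sketch] The backoff described above reads naturally as code. A
minimal userspace sketch in C, assuming `backoff_base` as a stand-in for the
boot-time autotuned parameter (the kernel derives it from mp_cpus):

    #include <stdatomic.h>
    #include <stdlib.h>

    static atomic_uint lock_word;
    static unsigned backoff_base = 64;  /* stand-in for the autotuned value */

    static void
    lock_with_backoff(void)
    {
            unsigned delay = backoff_base;

            /* Test-and-set loop with a linear increment plus a random
             * factor per attempt, instead of retrying the atomic op at
             * the first chance. */
            while (atomic_exchange(&lock_word, 1) != 0) {
                    unsigned spins = delay + ((unsigned)rand() % delay);

                    for (volatile unsigned i = 0; i < spins; i++)
                            ;               /* cpu_spinwait() stand-in */
                    delay += backoff_base;  /* trivial linear increment */
            }
    }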
* MFC r301157  (mjg, 2016-12-31, 1 file, -1/+3)

    Microoptimize locking primitives by avoiding unnecessary atomic ops.
    Inline versions of the primitives do an atomic op, and if it fails they
    fall back to the actual primitives, which immediately retry the atomic
    op. The obvious optimisation is to check if the lock is free and only
    then proceed to do the atomic op.
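[Editor's sketch] This optimisation is the classic test-and-test-and-set
pattern. A minimal portable sketch in C:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    static atomic_uintptr_t lock_word;

    static bool
    try_acquire(void)
    {
            uintptr_t expected = 0;

            /* Cheap plain load first: no exclusive cache-line ownership
             * is needed just to observe that the lock is busy. */
            if (atomic_load_explicit(&lock_word, memory_order_relaxed) != 0)
                    return (false);
            /* The lock looks free; only now pay for the atomic op. */
            return (atomic_compare_exchange_strong(&lock_word, &expected, 1));
    }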
* MFC r285663, r285664, r285667  (markj, 2015-07-21, 1 file, -8/+8)

    Ensure that lockstat_nsecs() has no effect when lockstat probes are not
    enabled or when the profiled lock carries the LO_NOPROFILE flag.

    PR:             201642, 201517
    Approved by:    re (gjb)
    Tested by:      Jason Unovitch
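[Editor's sketch] A sketch of the gating this describes, with approximate
names: `lockstat_enabled` and `nanoseconds()` stand in for the real enable
flag and time source.

    /* Approximate sketch: return 0, and thus record nothing, unless the
     * probes are enabled and the lock is actually profiled. */
    static inline uint64_t
    lockstat_nsecs(const struct lock_object *lo)
    {
            if (!lockstat_enabled || (lo->lo_flags & LO_NOPROFILE))
                    return (0);
            return (nanoseconds());
    }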
* MFC r284297: several lockstat improvements  (avg, 2015-07-01, 1 file, -17/+39)
* MFC 272315 272757 274091 274902  (sbruno, 2015-02-13, 1 file, -0/+21)

    For real this time.

    r272315: Explicitly return None for negative event indices. Prior to
    this, eventat(-1) would return the next-to-last event, causing the back
    button to cycle back to the end of an event source instead of stopping
    at the start.

    r272757: Add schedgraph traces for callout handlers. Specifically, a
    callwheel logs a running event each time it executes a callout
    function. The event includes the function pointer, argument, and
    whether or not it was run from hardware interrupt context. The
    callwheel is marked idle when each handler completes. This effectively
    logs the duration of each callout routine in the graph.

    r274091: Bind Ctrl-Q as a global hotkey to exit. Bind Ctrl-W as a
    hotkey to close dialogs.

    r274902: Add a new thread state "spinning" to schedgraph and add
    tracepoints at the start and stop of spinning waits in lock primitives.

    Reviewed by:    jhb
* Revert r278650. Definite layer 8 bug.  (sbruno, 2015-02-13, 1 file, -21/+0)

    Submitted by:   dhw and Thomas Mueller <tmueller@sysgo.com>
* MFC 272315 272757 274091 274902  (sbruno, 2015-02-13, 1 file, -0/+21)

    r272315: Explicitly return None for negative event indices. Prior to
    this, eventat(-1) would return the next-to-last event, causing the back
    button to cycle back to the end of an event source instead of stopping
    at the start.

    r272757: Add schedgraph traces for callout handlers. Specifically, a
    callwheel logs a running event each time it executes a callout
    function. The event includes the function pointer, argument, and
    whether or not it was run from hardware interrupt context. The
    callwheel is marked idle when each handler completes. This effectively
    logs the duration of each callout routine in the graph.

    r274091: Bind Ctrl-Q as a global hotkey to exit. Bind Ctrl-W as a
    hotkey to close dialogs.

    r274902: Add a new thread state "spinning" to schedgraph and add
    tracepoints at the start and stop of spinning waits in lock primitives.

    Reviewed by:    jhb
* MFC 261517,261520  (jhb, 2014-02-18, 1 file, -3/+0)

    Convert the license on files where I am the sole copyright holder to
    2-clause BSD licenses.
* Consistently use the same value to indicate exclusively-held and shared-held locks  (davide, 2013-09-22, 1 file, -4/+4)

    Do this for all the primitives in the lc_lock/lc_unlock routines. This
    fixes the problems introduced in r255747, which indeed introduced an
    inversion in the logic.

    Reported by:    many
    Tested by:      bdrewery, pho, lme, Adam McDougall, O. Hartmann
    Approved by:    re (glebius)
* Fix lc_lock/lc_unlock() support for rmlocks held in shared mode  (davide, 2013-09-20, 1 file, -4/+4)

    With the current lock-class KPI this was really difficult, because
    there was no way to pass an rm_priotracker object to the lock/unlock
    routines. In order to accomplish the task, modify the aforementioned
    functions so that they can return (or pass as an argument) a uintptr_t,
    which in the rm case is used to hold a pointer to the struct
    rm_priotracker for the current thread. As an added bonus, this fixes
    rm_sleep() in the rm shared case, which can now communicate the
    priotracker structure between lc_unlock() and lc_lock().

    Suggested by:   jhb
    Reviewed by:    jhb
    Approved by:    re (delphij)
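[Editor's sketch] What the reworked methods might look like for rmlocks;
`rm_find_tracker()` is a hypothetical helper standing in for however the
per-thread tracker is located.

    /* Sketch, simplified from the description above. */
    static uintptr_t
    unlock_rm(struct lock_object *lo)
    {
            struct rm_priotracker *tracker;

            tracker = rm_find_tracker(lo);          /* hypothetical helper */
            rm_runlock((struct rmlock *)lo, tracker);
            return ((uintptr_t)tracker);            /* cookie for lc_lock */
    }

    static void
    lock_rm(struct lock_object *lo, uintptr_t how)
    {
            /* "how" is the cookie unlock_rm() returned earlier. */
            rm_rlock((struct rmlock *)lo, (struct rm_priotracker *)how);
    }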
* A few mostly cosmetic nits to aid in debugging  (jhb, 2013-06-25, 1 file, -4/+4)

    - Call lock_init() first, before setting any lock_object fields in lock
      init routines. This way, if the machine panics due to a duplicate
      init, the lock's original state is preserved.
    - Somewhat similarly, don't decrement td_locks and td_slocks until
      after an unlock operation has completed successfully.
* Handle the recursed/not recursed flags with RA_RLOCKED in rw_assert()  (jhb, 2013-06-03, 1 file, -4/+6)

    Also tweak a panic message.
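[Editor's sketch] Usage sketch (`sc_lock` is illustrative): with this fix the
recursion modifiers combine with RA_RLOCKED just as they do with RA_WLOCKED.

    rw_assert(&sc_lock, RA_RLOCKED | RA_NOTRECURSED);
    rw_assert(&sc_lock, RA_RLOCKED | RA_RECURSED);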
* Fixup r240424: skip the idlethread check in lock operations when KDB is active  (attilio, 2012-12-22, 1 file, -4/+5)

    On entering KDB backends, the thread hijacked to run the interrupt
    context can still be the idle thread. At that point, even without a
    panic condition, the idle thread may try to acquire some locks to carry
    on some operations. Skip the idlethread check on block/sleep lock
    operations when KDB is active.

    Reported by:    jh
    Tested by:      jh
    MFC after:      1 week
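[Editor's sketch] The adjusted guard could look like this (simplified; the
message text is illustrative):

    /* Idle threads may never block on a sleep lock, except while KDB has
     * hijacked a thread to run its backends. */
    KASSERT(kdb_active || !TD_IS_IDLETHREAD(curthread),
        ("acquiring blockable sleep lock from idle thread %p", curthread));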
* Merge r242395,242483 from the mutex implementation  (attilio, 2012-11-03, 1 file, -23/+74)

    Give rwlock(9) the ability to handle different types of structures,
    with the only constraint that they have a lock cookie named rw_lock.
    This name then becomes reserved: a struct that wants to use the
    rwlock(9) KPI must provide it, and other locking primitives cannot
    reuse it for their own members. Namely, such structs are the current
    struct rwlock and the new struct rwlock_padalign. The new structure
    defines an object which has the same layout as a struct rwlock but is
    allocated in areas aligned to the cache line size and is as big as a
    cache line.

    For further details check the comments on the above mentioned
    revisions.

    Reviewed by:    jimharris, jeff
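[Editor's sketch] The two layouts this describes (field list approximate,
showing the reserved rw_lock cookie name):

    struct rwlock {
            struct lock_object      lock_object;
            volatile uintptr_t      rw_lock;        /* reserved cookie name */
    };

    struct rwlock_padalign {
            struct lock_object      lock_object;
            volatile uintptr_t      rw_lock;
    } __aligned(CACHE_LINE_SIZE);   /* occupies one full cache line */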
* Remove all the checks on curthread != NULL, except some MD trap checks  (attilio, 2012-09-13, 1 file, -2/+0)

    (e.g. printtrap()). Generally this check is not needed anymore, as
    there is no legitimate case where curthread == NULL once the pcpu 0
    area has been properly initialized.

    Reviewed by:    bde, jhb
    MFC after:      1 week
* Improve check coverage for idle threads  (attilio, 2012-09-12, 1 file, -0/+13)

    Idle threads are not allowed to acquire any lock but spinlocks. Deny
    any attempt to do so by panicking at the locking operation when
    INVARIANTS is on. Then, remove the check on blocking on a turnstile.
    The check in the sleepqueues is left in place because idle threads are
    not allowed to use tsleep() either, which could still happen.

    Reviewed by:    bde, jhb, kib
    MFC after:      1 week
* Add software PMC support  (fabient, 2012-03-28, 1 file, -0/+12)

    New kernel events can be added at various locations for sampling or
    counting. This will, for example, allow easy system profiling with
    known tools like pmcstat(8), whatever the processor is. Simultaneous
    usage of software PMCs and hardware PMCs is possible, for example
    looking at lock acquire failures or page faults while sampling on
    instructions.

    Sponsored by:   NETASQ
    MFC after:      1 month
* panic: add a switch and infrastructure for stopping other CPUs in the SMP case  (avg, 2011-12-11, 1 file, -0/+28)

    The historical behavior of letting other CPUs merrily go on remains the
    default for the time being. The new behavior can be switched on via the
    kern.stop_scheduler_on_panic tunable and sysctl.

    Stopping the CPUs has (at least) the following benefits:
    - more of the system state at panic time is preserved intact
    - threads and interrupts do not interfere with dumping of the system
      state

    Only one thread runs uninterrupted after panic if
    stop_scheduler_on_panic is set. That thread might call code that is
    also used in normal context, and that code might use locks to prevent
    concurrent execution of certain parts. Those locks might be held by the
    stopped threads and would never be released. To work around this issue,
    it was decided that instead of explicit checks for panic context, we
    would rather put those checks inside the locking primitives.

    This change has substantial portions written and re-written by attilio
    and kib at various times. Other changes are heavily based on the ideas
    and patches submitted by jhb and mdf. bde has provided many insights
    into the details and history of the current code.

    The new behavior may cause problems for systems that use a USB keyboard
    for interfacing with the system console. This is because of some
    unusual locking patterns in the ukbd code, which have to be used
    because ukbd is below syscons on one hand, but on the other hand has to
    interface with other usb code that uses regular mutexes/Giant for its
    concurrency protection. Dumping to USB-connected disks may also be
    affected.

    PR:                     amd64/139614 (at least)
    In cooperation with:    attilio, jhb, kib, mdf
    Discussed with:         arch@, bde
    Tested by:              Eugene Grosbein <eugen@grosbein.net>, gnn,
                            Steven Hartland <killing@multiplay.co.uk>,
                            glebius, Andrew Boyer
                            <aboyer@averesystems.com> (various versions of
                            the patch)
    MFC after:              3 months (or never)
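[Editor's sketch] The in-primitive check boils down to a short-circuit at the
top of each hard path; a simplified sketch:

    void
    _rw_wlock(struct rwlock *rw, const char *file, int line)
    {
            if (SCHEDULER_STOPPED())  /* panic context: become a no-op */
                    return;
            /* ... normal acquisition path ... */
    }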
* Constify arguments for locking KPIs where possible  (pjd, 2011-11-16, 1 file, -12/+12)

    This enables locking consumers to pass their own structures around as
    const and to be able to assert locks embedded into those structures.

    Reviewed by:    ed, kib, jhb
* Mark all SYSCTL_NODEs static that have no corresponding SYSCTL_DECLs  (ed, 2011-11-07, 1 file, -1/+2)

    The SYSCTL_NODE macro defines a list that stores all child elements of
    that node. If there's no SYSCTL_DECL macro anywhere else, there's no
    reason why it shouldn't be static.
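[Editor's sketch] A minimal sketch of the convention (the node name is
illustrative):

    /* No SYSCTL_DECL(_debug_example) exists anywhere else, so the node's
     * child list can be static. */
    static SYSCTL_NODE(_debug, OID_AUTO, example, CTLFLAG_RD, NULL,
        "example debugging knobs");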
* Print the pointer to the lock with the panic message  (bz, 2010-03-24, 1 file, -2/+2)

    The previous "panic: rw lock not unlocked" was not really helpful for
    debugging. Now one can at least call "show lock <ptr>" from ddb to
    learn more about the lock.

    MFC after:      3 days
* When releasing a read/shared lock we need to use a write memory barrier  (attilio, 2009-09-30, 1 file, -3/+4)

    This is needed in order to avoid reordering of CPU instructions on
    architectures which don't have strongly ordered writes.

    Diagnosed by:   fabio
    Reviewed by:    jhb
    Tested by:      Giovanni Trematerra
                    <giovanni dot trematerra at gmail dot com>
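[Editor's sketch] A simplified sketch of the release path with the "rel"
barrier; the real loop also handles waiter bits.

    /* The "rel" variant orders all prior writes to the protected data
     * before the lock word changes, which matters on weakly ordered CPUs. */
    for (;;) {
            x = rw->rw_lock;
            if (atomic_cmpset_rel_ptr(&rw->rw_lock, x, x - RW_ONE_READER))
                    break;
            cpu_spinwait();
    }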
* Change the scope of ASSERT_ATOMIC_LOAD() from a generic check to a pointer-fetching specific operation check  (attilio, 2009-08-17, 1 file, -2/+3)

    * Consequently, rename the operation ASSERT_ATOMIC_LOAD_PTR().
    * Fix the implementation of ASSERT_ATOMIC_LOAD_PTR() by checking
      alignment directly on the word boundary, for all the given specific
      architectures. That's a bit too strict for some common cases, but it
      assures safety.
    * Add a comment explaining the scope of the macro.
    * Add a new stub in the lockmgr-specific implementation.

    Tested by:      marcel (initial version), marius
    Reviewed by:    rwatson, jhb (comment specific review)
    Approved by:    re (kib)
* Add a new macro to test that a variable could be loaded atomically  (bz, 2009-08-14, 1 file, -0/+2)

    Check that the given variable is at most uintptr_t in size and that it
    is aligned.

    Note: ASSERT_ATOMIC_LOAD() uses ALIGN() to check for adequate alignment
    -- however, the function of ALIGN() is to guarantee alignment, and
    therefore may lead to stronger alignment enforcement than necessary for
    types that are smaller than sizeof(uintptr_t).

    Add checks to the mtx, rw and sx lock init functions to detect possible
    breakage. This was used during debugging of the problem fixed with
    r196118, where a pointer was on an unaligned address in the dpcpu area.

    In collaboration with:  rwatson
    Reviewed by:            rwatson
    Approved by:            re (kib)
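[Editor's sketch] An approximate reconstruction of the macro and its use at
init time (exact spelling may differ from the committed version):

    #define ASSERT_ATOMIC_LOAD(var, msg)                            \
            KASSERT(sizeof(var) <= sizeof(uintptr_t) &&             \
                ALIGN(&(var)) == (uintptr_t)&(var), msg)

    /* e.g. in rw_init_flags(): */
    ASSERT_ATOMIC_LOAD(rw->rw_lock,
        ("%s: rw_lock not aligned for %s", __func__, name));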
* Handle lock recursion differently by always checking against LO_RECURSABLE  (davide, 2009-06-02, 1 file, -6/+6)

    Check against LO_RECURSABLE instead of the lock's own flag itself.

    Tested by:      pho
* Remove extra cpu_spinwait() invocations  (jhb, 2009-05-29, 1 file, -8/+0)

    This should really only be used in tight spin loops, not in these edge
    cases where we restart a much larger loop only a few times.

    Reviewed by:    attilio
* Tweak a few comments on adaptive spinning.  (jhb, 2009-05-29, 1 file, -4/+10)
* Add the OpenSolaris dtrace lockstat provider  (sson, 2009-05-26, 1 file, -7/+90)

    The lockstat provider adds probes for mutexes, reader/writer and
    shared/exclusive locks to gather contention statistics and other
    locking information for dtrace scripts, the lockstat(1M) command and
    other potential consumers.

    Reviewed by:    attilio jhb jb
    Approved by:    gnn (mentor)
* Wrap lock profiling state variables in #ifdef LOCK_PROFILING blocks.  (jeff, 2009-03-15, 1 file, -1/+5)
* Remove even more unneeded variable assignments  (ed, 2009-02-26, 1 file, -1/+0)

    kern_time.c:
    - Unused variable `p'.

    kern_thr.c:
    - Variable `error' is always caught immediately, so no reason to
      initialize it. There is no way that error != 0 at the end of
      create_thread().

    kern_sig.c:
    - Unused variable `code'.

    kern_synch.c:
    - `rval' is always assigned in all different cases.

    kern_rwlock.c:
    - `v' is always overwritten with RW_UNLOCKED further on.

    kern_malloc.c:
    - `size' is always initialized with the proper value before being used.

    kern_exit.c:
    - `error' is always caught and returned immediately. abort2() never
      returns a non-zero value.

    kern_exec.c:
    - `len' is always assigned inside the if-statement right below it.

    tty_info.c:
    - `td' is always overwritten by FOREACH_THREAD_IN_PROC().

    Found by:       LLVM's scan-build
* Add the RW_SYSINIT_FLAGS macro and rw_sysinit_flags initialization function.  (kmacy, 2008-12-08, 1 file, -0/+8)
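[Editor's sketch] A usage sketch (the lock and its names are illustrative):

    static struct rwlock example_lock;

    /* Initialize a flagged rwlock at boot via SYSINIT instead of from
     * explicit init code. */
    RW_SYSINIT_FLAGS(example_lock_init, &example_lock, "example lock",
        RW_RECURSE);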
* Teach WITNESS about the interlocks used with lockmgr  (jhb, 2008-09-10, 1 file, -2/+2)

    This removes a bunch of spurious witness warnings since lockmgr grew
    witness support. Before this, every time you passed an interlock to a
    lockmgr lock, WITNESS treated it as a LOR.

    Reviewed by:    attilio
* Various whitespace fixes.  (jhb, 2008-09-10, 1 file, -1/+0)
* Improve a comment  (attilio, 2008-05-27, 1 file, -2/+5)

    In the current CVS version, the comment doesn't completely explain the
    logic of the code chunk.
* Add sysctls at debug.rwlock to control the behavior of speculative spinning when readers hold a lock  (jeff, 2008-04-04, 1 file, -3/+26)

    - This spinning is speculative because, unlike the write case, we can
      not test whether the owners are running.
    - Add speculative read spinning for readers who are blocked by pending
      writers while a read lock is still held. This allows the thread to
      spin until the write lock succeeds, after which it may spin until the
      writer has released the lock. This prevents excessive context
      switches when readers and writers both hold the lock for brief
      periods. (A simplified sketch of the bounded spin follows.)

    Sponsored by:   Nokia
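[Editor's sketch] A simplified sketch of the bounded spin, assuming
`rowner_retries`/`rowner_loops` as the tunables behind the new sysctls. With
readers there is no single owner to watch, so spinning is capped by counts.

    if (spintries < rowner_retries) {
            spintries++;
            for (i = 0; i < rowner_loops; i++) {
                    if ((rw->rw_lock & RW_LOCK_READ) == 0)
                            break;          /* readers drained */
                    cpu_spinwait();
            }
            if (i != rowner_loops)
                    continue;               /* retry the acquire */
    }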
* Add rw_try_rlock() and rw_try_wlock() to rwlocks  (attilio, 2008-04-01, 1 file, -0/+49)

    These functions try the specified operation (rlocking and wlocking) and
    true is returned if the operation completes, false otherwise. The KPI
    is enriched by this commit, so __FreeBSD_version bumping and manpage
    updating will happen soon.

    Requested by:   jeff, kris
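[Editor's sketch] A usage sketch; the softc lock, state update, and
deferred-work fallback are illustrative.

    if (rw_try_wlock(&sc->sc_lock)) {
            /* Exclusive lock obtained without sleeping. */
            sc->sc_state = NEW_STATE;
            rw_wunlock(&sc->sc_lock);
    } else {
            /* Lock busy: defer the work instead of blocking here. */
            taskqueue_enqueue(taskqueue_thread, &sc->sc_deferred_task);
    }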
* In rw_wunlock_hard prefer to wake up writers if both readers and writers are available  (jeff, 2008-02-07, 1 file, -4/+4)

    Doing otherwise can cause deadlocks, as no read locks can proceed while
    there are write waiters.

    Sponsored by:   Nokia
* Adaptive spinning in the write path with readers and writer starvation avoidance  (jeff, 2008-02-06, 1 file, -154/+170)

    - Move recursion checking into rwlock inlines to free a bit for use
      with adaptive spinners.
    - Clear the RW_LOCK_WRITE_SPINNERS flag whenever the lock state
      changes, causing write spinners to restart their loop.
    - Write spinners are limited by a count while readers hold the lock, as
      there is no way to know for certain whether readers are still
      running.
    - In the read path, block if there are write waiters or spinners, to
      avoid starving writers. Use a new per-thread count, td_rw_rlocks, to
      skip starvation avoidance if it might cause a deadlock.
    - Remove or change invalid assertions in turnstiles.

    Reviewed by:    attilio (developed parts of the patch as well)
    Sponsored by:   Nokia
* Remove a conditional that is always true.  (jhb, 2008-01-17, 1 file, -1/+1)

    MFC after:      2 weeks
* Re-implement lock profiling so that it no longer breaks the ABI when enabled  (jeff, 2007-12-15, 1 file, -24/+7)

    - There is no longer an embedded lock_profile_object in each lock.
      Instead, a list of lock_profile_objects is kept per-thread for each
      lock it may own. The cnt_hold statistic is now always 0 to facilitate
      this.
    - Support shared locking by tracking individual lock instances and
      statistics in the per-thread per-instance lock_profile_object.
    - Make the lock profiling hash table a per-cpu singly linked list with
      a per-cpu static lock_prof allocator. This removes the need for an
      array of spinlocks and reduces cache contention between cores.
    - Use a separate hash for spinlocks and other locks so that only a
      critical_enter() is required, and not a spinlock_enter(), to modify
      the per-cpu tables.
    - Count time spent spinning in the lock statistics.
    - Remove the LOCK_PROFILE_SHARED option as it is always supported now.
    - Specifically drop and release the scheduler locks in both schedulers
      since we track owners now.

    In collaboration with:  Kip Macy
    Sponsored by:           Nokia
* Simplify the adaptive spinning algorithm in rwlock and mutex  (attilio, 2007-11-26, 1 file, -112/+72)

    Currently, before spinning, the turnstile spinlock is acquired and the
    waiters flag is set. This is not strictly necessary, so just spin
    before acquiring the spinlock and setting the flag. This will simplify
    a lot of other functions too, as now we have the waiters flag set only
    if there are actually waiters. This should make the wakeup/sleep
    couplet faster under intensive mutex workloads.

    This also fixes a bug in rw_try_upgrade() in the adaptive case, where
    turnstile_lookup() will recurse on the ts_lock lock that will never be
    really released [1].

    [1] Reported by:    jeff with Nokia help
    Tested by:          pho, kris (earlier, bugged version of rwlock part)
    Discussed with:     jhb [2], jeff
    MFC after:          1 week

    [2] John had a similar patch for 6.x and/or 7.x mutexes, probably.
* Expand the lock class with the "virtual" function lc_assert  (attilio, 2007-11-18, 1 file, -0/+9)

    This offers a unified way for all the lock primitives to express lock
    assertions. Currently, lockmgrs and rmlocks don't have assertions, so
    just panic in that case. This will be a base for more callout
    improvements.

    Ok'ed by:       jhb, jeff
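[Editor's sketch] What rwlock's lock class might look like after the change
(member list abridged):

    struct lock_class lock_class_rw = {
            .lc_name = "rw",
            .lc_flags = LC_SLEEPLOCK | LC_RECURSABLE | LC_UPGRADABLE,
            .lc_assert = assert_rw,         /* the new "virtual" method */
            .lc_lock = lock_rw,
            .lc_unlock = unlock_rw,
    };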
* Remove a bogus KASSERT that would prevent rwlocks from being acquired recursively in exclusive mode with debugging kernels  (attilio, 2007-11-14, 1 file, -3/+0)

    Submitted by:   kmacy
    Approved by:    jeff
* Generally we are interested in what thread did something, as opposed to what process  (julian, 2007-11-14, 1 file, -1/+1)

    Since threads by default have the name of the process unless
    over-written with more useful information, just print the thread name
    instead.
* Fix some problems with lock profiling in rw locks  (attilio, 2007-07-20, 1 file, -8/+28)

    - Adjust the lock_profiling stubs' semantics in the hard functions in
      order to be more accurate and trustworthy.
    - As for sx locks, disable shared paths for lock_profiling. Actually,
      lock_profiling has a subtle race which makes results coming from
      shared paths not completely trustworthy. A macro stub
      (LOCK_PROFILING_SHARED) can be used for re-enabling these paths, but
      it is currently intended for developing use only.
    - style(9) fixes.

    Approved by:    jeff, kmacy, jhb [1]
    Approved by:    re

    [1] Had initial reservations not shared by others; conceded in the end.
* Introduce a new rwlock initialization function: rw_init_flags  (attilio, 2007-06-26, 1 file, -9/+85)

    This is very similar to sx_init_flags: it initializes the rwlock using
    special flags passed as the third argument (RW_DUPOK, RW_NOPROFILE,
    RW_NOWITNESS, RW_QUIET, RW_RECURSE). Among these, the most important
    new feature is probably that rwlocks can now be acquired recursively
    (for both the shared and exclusive paths). Because of the recursion
    counter, the ABI is changed.

    Tested by:      Timothy Redaelli <drizzt@gufi.org>
    Reviewed by:    jhb
    Approved by:    jeff (mentor)
    Approved by:    re
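[Editor's sketch] A usage sketch (lock name and flag choice are
illustrative):

    struct rwlock map_lock;

    /* A recursable rwlock that WITNESS will not track. */
    rw_init_flags(&map_lock, "map lock", RW_RECURSE | RW_NOWITNESS);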
* Commit 3/14 of sched_lock decomposition  (jeff, 2007-06-04, 1 file, -26/+28)

    - Add a per-turnstile spinlock to solve potential priority propagation
      deadlocks that are possible with thread_lock().
    - The turnstile lock order is defined as the exact opposite of the lock
      order used with the sleep locks they represent. This allows us to
      walk in reverse order in priority_propagate, and this is the only
      place we wish to multiply acquire turnstile locks.
    - Use the turnstile_chain lock to protect assigning mutexes to
      turnstiles.
    - Change the turnstile interface to pass back turnstile pointers to the
      consumers. This allows us to reduce some locking and makes it easier
      to cancel turnstile assignment while the turnstile chain lock is
      held.

    Tested by:      kris, current@
    Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
    Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
                    each)
* Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().  (jhb, 2007-05-18, 1 file, -2/+0)
* Add destroyed cookie values for sx locks and rwlocks  (jhb, 2007-05-08, 1 file, -1/+17)

    Also add extra KASSERTs so that any lock operations on a destroyed lock
    will panic or hang.
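[Editor's sketch] A sketch of the idea; the poison value's name is
approximate.

    void
    rw_destroy(struct rwlock *rw)
    {
            KASSERT(rw->rw_lock == RW_UNLOCKED,
                ("rw lock %p not unlocked", rw));
            rw->rw_lock = RW_DESTROYED;     /* poison cookie for KASSERTs */
            lock_destroy(&rw->lock_object);
    }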
* Drop memory barriers in rw_try_upgrade()  (jhb, 2007-03-30, 1 file, -4/+4)

    - We don't need an 'acq' memory barrier here as the earlier rw_rlock()
      already contained one.
    - Comment fix.