path: root/sys/kern/kern_lock.c
Commit message | Author | Age | Files | Lines
* - Use lock_init/lock_destroy() to set up the lock_object inside of lockmgr.  (jhb, 2007-03-30, 1 file, -7/+11)
      We can now use LOCK_CLASS() as a stronger check in lockmgr_chain() as a
      result.  This required putting back lk_flags, as lockmgr's use of flags
      otherwise conflicted with other flags in lo_flags.
  - Tweak 'show lock' output for lockmgr to match sx, rw, and mtx.
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes,
  rwlocks, and sx locks to 'lock_object'.  (jhb, 2007-03-21, 1 file, -1/+1)
* Handle the case when a thread is blocked on a lockmgr lock with LK_DRAIN
  in DDB's 'show sleepchain'.  (jhb, 2007-03-21, 1 file, -3/+16)
      MFC after: 3 days
* Add two new function pointers, 'lc_lock' and 'lc_unlock', to lock classes.  (jhb, 2007-03-09, 1 file, -3/+19)
      These functions are intended to be used to drop a lock and then
      reacquire it when doing a sleep such as msleep(9).  Both functions
      accept a 'struct lock_object *' as their first parameter.  The
      'lc_unlock' function returns an integer that is then passed as the
      second parameter to the subsequent 'lc_lock' function.  This can be
      used to communicate state.  For example, sx locks and rwlocks use this
      to indicate if the lock was share/read locked vs exclusive/write
      locked.  Currently, spin mutexes and lockmgr locks do not provide
      working lc_lock and lc_unlock functions.
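A minimal userspace sketch of the lc_lock/lc_unlock pattern this commit describes: lock-class function pointers that drop a lock before a sleep and reacquire it afterwards, with the `int` returned by `lc_unlock` carrying the held-mode state back into `lc_lock`. All `toy_*` names are hypothetical stand-ins, not the FreeBSD API.

```c
#include <assert.h>

enum { TOY_UNLOCKED, TOY_SHARED_HELD, TOY_EXCL_HELD };

struct toy_lock_object { int state; };

struct toy_lock_class {
    void (*lc_lock)(struct toy_lock_object *lo, int how);
    int  (*lc_unlock)(struct toy_lock_object *lo);
};

/* lc_unlock: release and report how the lock was held, so the
 * matching lc_lock call can restore the same mode afterwards. */
static int toy_unlock(struct toy_lock_object *lo)
{
    int how = (lo->state == TOY_EXCL_HELD);   /* 1 = was exclusive */
    lo->state = TOY_UNLOCKED;
    return how;
}

static void toy_lock(struct toy_lock_object *lo, int how)
{
    lo->state = how ? TOY_EXCL_HELD : TOY_SHARED_HELD;
}

static const struct toy_lock_class toy_class = {
    .lc_lock   = toy_lock,
    .lc_unlock = toy_unlock,
};

/* Drop an exclusively held lock around a (simulated) sleep and
 * reacquire it in the same mode, as an msleep(9)-style caller would. */
int demo_drop_and_reacquire(void)
{
    struct toy_lock_object lo = { TOY_UNLOCKED };

    toy_class.lc_lock(&lo, 1);            /* take exclusively */
    int how = toy_class.lc_unlock(&lo);   /* drop; remembers mode */
    /* ... the sleep would happen here ... */
    toy_class.lc_lock(&lo, how);          /* reacquire in same mode */
    return lo.state;                      /* TOY_EXCL_HELD */
}
```

The point of the design is that the lock class, not the sleeping code, decides what state needs to survive the drop/reacquire cycle.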
* Use C99-style struct member initialization for lock classes.  (jhb, 2007-03-09, 1 file, -3/+3)
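A sketch of the C99 designated-initializer style this commit adopts for lock-class tables; the struct and field names here are illustrative, not the exact kernel definitions.

```c
#include <assert.h>
#include <string.h>

struct lock_class_sketch {
    const char *lc_name;
    int       (*lc_unlock)(void *lo);
};

static int sketch_unlock(void *lo) { (void)lo; return 0; }

/* Designated initializers name each member explicitly, so reordering
 * or adding members to the struct cannot silently misassign fields,
 * which positional initializers are prone to. */
static const struct lock_class_sketch lock_class_lockmgr_sketch = {
    .lc_name   = "lockmgr",
    .lc_unlock = sketch_unlock,
};

const char *sketch_class_name(void)
{
    return lock_class_lockmgr_sketch.lc_name;
}
```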
* general LOCK_PROFILING cleanup  (kmacy, 2007-02-26, 1 file, -13/+16)
    - only collect timestamps when a lock is contested - this reduces the
      overhead of collecting profiles from 20x to 5x
    - remove unused function from subr_lock.c
    - generalize cnt_hold and cnt_lock statistics to be kept for all locks
    - NOTE: rwlock profiling generates invalid statistics (and most likely
      always has); someone familiar with that should review
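The first bullet's idea can be sketched in a single-threaded toy: the (expensive) timestamp read happens only after the fast-path acquire fails, so uncontested acquires pay almost nothing. Names and the fake timestamp source are illustrative assumptions, not the kernel's profiling code.

```c
#include <assert.h>

static int the_lock;       /* 0 = free, 1 = held (single-threaded toy) */
static int wait_samples;   /* profiling: contested acquires sampled */
static int fake_clock;     /* stand-in for an expensive TSC read */

static int read_timestamp(void) { return ++fake_clock; }

void profiled_acquire(void)
{
    if (the_lock == 0) {   /* fast path: no timestamp taken */
        the_lock = 1;
        return;
    }
    /* Contested: only now pay for a timestamp and record a sample. */
    (void)read_timestamp();
    wait_samples++;
    the_lock = 1;          /* toy: pretend we eventually won it */
}

void profiled_release(void) { the_lock = 0; }

int demo_samples(void)
{
    wait_samples = 0;
    profiled_acquire();    /* uncontested: fast path, no sample */
    profiled_acquire();    /* lock already held: one sample */
    profiled_release();
    return wait_samples;
}
```

Moving the sampling entirely off the uncontested path is what turns the quoted 20x profiling overhead into 5x.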
* track lock class name in a way that doesn't break WITNESS  (kmacy, 2006-11-13, 1 file, -7/+15)
* show lock class in profiling output for default case where type is not
  specified when initializing the lock  (kmacy, 2006-11-12, 1 file, -0/+2)
      Approved by: scottl (standing in for mentor rwatson)
* MUTEX_PROFILING has been generalized to LOCK_PROFILING.  We now profile
  wait (time waited to acquire) and hold times for *all* kernel locks.  (kmacy, 2006-11-11, 1 file, -5/+19)
      If the architecture has a system-synchronized TSC, the profiling code
      will use that, thereby minimizing profiling overhead.  Large chunks of
      profiling code have been moved out of line; the overhead measured on
      the T1 for when it is compiled in but not enabled is < 1%.
      Approved by: scottl (standing in for mentor rwatson)
      Reviewed by: des and jhb
* If the buffer lock has waiters after the buffer has changed identity, then
  getnewbuf() needs to drop the buffer in order to wake waiters that might
  sleep on the buffer in the context of the old identity.  (tegge, 2006-10-02, 1 file, -0/+15)
* Add a new 'show sleepchain' ddb command similar to 'show lockchain' except
  that it operates on lockmgr and sx locks.  (jhb, 2006-08-15, 1 file, -0/+28)
      This can be useful for tracking down vnode deadlocks in VFS, for
      example.  Note that this command is a bit more fragile than 'show
      lockchain', as we have to poke around at the wait channel of a thread
      to see if it points to either a struct lock or a condition variable
      inside of a struct sx.  If td_wchan points to something unmapped, then
      this command will terminate early due to a fault, but no harm will be
      done.
* Add a 'show lockmgr' command that dumps the relevant details of a lockmgr
  lock.  (jhb, 2006-08-15, 1 file, -0/+32)
* Remove duplicated #include.  (pjd, 2006-07-14, 1 file, -1/+0)
* - Remove an unused include.  (jeff, 2005-12-23, 1 file, -1/+0)
      Submitted by: Antoine Brodin <antoine.brodin@laposte.net>
* Include kdb.h so that kdb_active is declared regardless of KDB being
  included in the kernel.  (rwatson, 2005-10-02, 1 file, -0/+1)
      MFC after: 0 days
* In lockstatus(), don't lock and unlock the interlock when testing the
  sleep lock status while kdb_active, or we risk contending with the mutex
  on another CPU, resulting in a panic when using "show lockedvnods" while
  in DDB.  (rwatson, 2005-09-27, 1 file, -2/+8)
      MFC after: 3 days
      Reviewed by: jhb
      Reported by: kris
* Print out a warning and a backtrace if we try to unlock a lockmgr that
  we do not hold.  (ssouhlal, 2005-09-02, 1 file, -0/+7)
      Glanced at by: phk
      MFC after: 3 days
* Add a 'depth' argument to the CTRSTACK() macro, which makes it possible
  to reduce the number of ktr slots used.  (pjd, 2005-08-29, 1 file, -1/+1)
      If 'depth' is equal to 0, the whole stack will be logged, just like
      before.
* - Fix a problem that slipped through review; the stack member of the
    lockmgr structure should have the lk_ prefix.  (jeff, 2005-08-03, 1 file, -5/+8)
  - Add stack_print(lkp->lk_stack) to the information printed with
    lockmgr_printinfo().
* - Replace the series of DEBUG_LOCKS hacks which tried to save the vn_lock
    caller by saving the stack of the last locker/unlocker in lockmgr.  (jeff, 2005-08-03, 1 file, -34/+15)
      We also put the stack in KTR at the moment.
      Contributed by: Antoine Brodin <antoine.brodin@laposte.net>
* - Differentiate two UPGRADE panics so I have a better idea of what's going
    on here.  (jeff, 2005-04-12, 1 file, -1/+3)
* - Remove dead code.  (jeff, 2005-04-06, 1 file, -26/+2)
* - Slightly restructure acquire() so I can add more ktr information and
    an assert to help find two strange bugs.  (jeff, 2005-04-03, 1 file, -18/+14)
  - Remove some nearby spls.
* - Add a LK_NOSHARE flag which forces all shared lock requests to be
    treated as exclusive lock requests.  (jeff, 2005-03-31, 1 file, -1/+5)
      Sponsored by: Isilon Systems, Inc.
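The LK_NOSHARE behavior reduces to one rewrite of the request type before the normal acquire logic runs. The sketch below uses illustrative flag values and names (`TOY_*`), not the real lockmgr constants.

```c
#include <assert.h>

#define TOY_SHARED    0x01
#define TOY_EXCLUSIVE 0x02
#define TOY_NOSHARE   0x10   /* per-lock option, set at lockinit() time */

struct toy_lkp { int lk_flags; };

/* If the no-share option is set on this lock, a shared request is
 * promoted to an exclusive request; otherwise it passes through. */
int resolve_request(const struct toy_lkp *lkp, int request)
{
    if ((lkp->lk_flags & TOY_NOSHARE) && request == TOY_SHARED)
        return TOY_EXCLUSIVE;
    return request;
}
```

A lock created with the flag thus behaves like a plain exclusive lock even for callers that ask for shared access, without those callers needing to change.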
* - Remove apause().  (jeff, 2005-03-31, 1 file, -40/+0)
      It makes no sense with our present mutex implementation, since simply
      unlocking a mutex does not ensure that one of the waiters will run and
      acquire it.  We're more likely to reacquire the mutex before anyone
      else has a chance.  It has also bit me three times now, as it's not
      safe to drop the interlock before sleeping in many cases.
      Sponsored by: Isilon Systems, Inc.
* - Don't bump the count twice in the LK_DRAIN case.  (jeff, 2005-03-28, 1 file, -2/+0)
      Sponsored by: Isilon Systems, Inc.
* - Restore COUNT() in all of its original glory.  Don't make it dependent
    on DEBUG, as ufs will soon grow a dependency on this count.  (jeff, 2005-03-25, 1 file, -17/+19)
      Discussed with: bde
      Sponsored by: Isilon Systems, Inc.
* - Complete the implementation of td_locks.  Track the number of
    outstanding lockmgr locks that this thread owns.  (jeff, 2005-03-24, 1 file, -0/+11)
      This is complicated due to LK_KERNPROC and because lockmgr tolerates
      unlocking an unlocked lock.
      Sponsored by: Isilon Systems, Inc.
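The complication the commit mentions can be sketched as simple per-thread accounting that must not go negative when a lock the thread does not own is released (as happens with LK_KERNPROC hand-offs and lockmgr's tolerance of unlocking an unlocked lock). Names are illustrative, not the kernel's.

```c
#include <assert.h>

struct toy_thread { int td_locks; };

void lock_acquired(struct toy_thread *td)
{
    td->td_locks++;
}

/* Only decrement if this thread actually owned the lock: releases of
 * disowned (LK_KERNPROC-style) or already-unlocked locks are no-ops
 * for the counter, so it never goes negative. */
void lock_released(struct toy_thread *td, int was_held_by_td)
{
    if (was_held_by_td)
        td->td_locks--;
}

int demo_td_locks(void)
{
    struct toy_thread td = { 0 };
    lock_acquired(&td);
    lock_released(&td, 1);   /* normal pairing */
    lock_released(&td, 0);   /* unlock of a lock we don't hold: no-op */
    return td.td_locks;      /* 0, not -1 */
}
```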
* - transferlockers() requires the interlock to be SMP safe.  (jeff, 2005-03-15, 1 file, -2/+8)
      Sponsored by: Isilon Systems, Inc.
* - Include LK_INTERLOCK in LK_EXTFLG_MASK so that it makes its way into
    acquire.  (jeff, 2005-01-25, 1 file, -1/+1)
  - Correct the condition that causes us to skip apause() to only require
    the presence of LK_INTERLOCK.
      Sponsored by: Isilon Systems, Inc.
* - Do not use APAUSE if LK_INTERLOCK is set.  We lose synchronization if
    the lockmgr interlock is dropped after the caller's interlock is
    dropped.  (jeff, 2005-01-24, 1 file, -10/+19)
  - Change some lockmgr KTRs to be slightly more helpful.
      Sponsored by: Isilon Systems, Inc.
* /* -> /*- for copyright notices, minor format tweaks as necessary  (imp, 2005-01-06, 1 file, -1/+1)
* When upgrading the shared lock to an exclusive lock, if we discover that
  the exclusive lock is already held, then we call panic.  Don't clobber
  internal lock state before panic'ing.  (ps, 2004-11-29, 1 file, -3/+2)
      This change improves debugging if this case were to happen.
      Submitted by: Mohan Srinivasan mohans at yahoo-inc dot com
      Reviewed by: rwatson
* Reintroduce slightly modified patch from kern/69964.  Check for
  LK_HAVE_EXL in both acquire invocations.  (kan, 2004-08-27, 1 file, -4/+11)
      MFC after: 5 days
* Temporarily back out r1.74 as it seems to cause a number of regressions
  according to numerous reports.  (kan, 2004-08-23, 1 file, -12/+5)
      It might get reintroduced some time later when an exact failure mode
      is understood better.
* Upgrading a lock does not play well with acquiring an exclusive lock and
  can lead to two threads being granted exclusive access.  (kan, 2004-08-16, 1 file, -5/+12)
      Check that no one has the same lock in exclusive mode before
      proceeding to acquire it.  The LK_WANT_EXCL and LK_WANT_UPGRADE bits
      act as mini-locks and can block other threads.  Normally this is not
      a problem, since the mini-locks are upgraded to full locks and the
      release of the locks will unblock the other threads.  However, if a
      thread resets the bits without obtaining a full lock, other threads
      are not awoken.  Add missing wakeups for these cases.
      PR: kern/69964
      Submitted by: Stephan Uphoff <ups at tree dot com>
      Very good catch by: Stephan Uphoff <ups at tree dot com>
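The mini-lock idea behind this fix can be sketched: an upgrader must observe that no other thread has already claimed a want-bit before claiming one itself, otherwise two threads can both proceed to exclusive access. Flag values and names below are illustrative, not the real lockmgr bits.

```c
#include <assert.h>

#define TOY_WANT_EXCL    0x1
#define TOY_WANT_UPGRADE 0x2

struct toy_lkp { int lk_flags; };

/* Returns 1 if this thread may claim the upgrade mini-lock, 0 if it
 * must sleep and retry because another thread already holds either
 * want-bit.  Skipping this check is exactly the race the commit
 * describes: two upgraders both believing they won. */
int try_claim_upgrade(struct toy_lkp *lkp)
{
    if (lkp->lk_flags & (TOY_WANT_EXCL | TOY_WANT_UPGRADE))
        return 0;
    lkp->lk_flags |= TOY_WANT_UPGRADE;
    return 1;
}

/* Giving up the mini-lock without taking the full lock must wake any
 * threads the bit was blocking (the missing-wakeup half of the fix);
 * here the wakeup is represented by the return value. */
int abandon_upgrade(struct toy_lkp *lkp)
{
    lkp->lk_flags &= ~TOY_WANT_UPGRADE;
    return 1;   /* "wakeup(lkp)" would go here */
}
```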
* Don't include a "\n" in KTR output, it confuses automatic parsing.  (rwatson, 2004-07-23, 1 file, -1/+1)
* Move TDF_DEADLKTREAT into td_pflags (and rename it accordingly) to avoid
  having to acquire sched_lock when manipulating it in lockmgr(),
  uiomove(), and uiomove_fromphys().  (tjr, 2004-06-03, 1 file, -4/+2)
      Reviewed by: jhb
* Add pid to the info printed in lockmgr_printinfo.  This makes VFS
  diagnostic messages slightly more useful.  (kan, 2004-01-06, 1 file, -2/+3)
* Rearrange the SYSINIT order to call lockmgr_init() earlier so that the
  runtime lockmgr initialization code in lockinit() can be eliminated.  (truckman, 2003-07-16, 1 file, -27/+3)
      Reviewed by: jhb
* Extend the mutex pool implementation to permit the creation and use of
  multiple mutex pools with different options and sizes.  (truckman, 2003-07-13, 1 file, -2/+2)
      Mutex pools can be created with either the default sleep mutexes or
      with spin mutexes.  A dynamically created mutex pool can now be
      destroyed if it is no longer needed.  Create two pools by default:
      one that matches the existing pool, using the MTX_NOWITNESS option,
      that should be used for building higher-level locks, and a new pool
      with witness checking enabled.  Modify the users of the existing
      mutex pool to use the appropriate pool in the new implementation.
      Reviewed by: jhb
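A toy sketch of the mutex-pool shape this commit generalizes: each pool carries its own options (e.g. witness checking on or off), and an object is mapped to a lock slot by hashing its address, so the same object always gets the same slot within a pool. The names and the `int` slots standing in for real mutexes are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

#define POOL_SIZE 8

struct toy_mtx_pool {
    int slots[POOL_SIZE];   /* stand-ins for real mutexes */
    int witness;            /* pool-wide option, fixed at creation */
};

void toy_pool_init(struct toy_mtx_pool *p, int witness)
{
    p->witness = witness;
    for (int i = 0; i < POOL_SIZE; i++)
        p->slots[i] = 0;
}

/* Pick a lock slot for an object by hashing its address; dropping the
 * low bits avoids every pointer landing in slot 0 due to alignment. */
int *toy_pool_find(struct toy_mtx_pool *p, const void *obj)
{
    uintptr_t h = (uintptr_t)obj >> 4;
    return &p->slots[h % POOL_SIZE];
}
```

With pools as first-class objects, a witness-checked pool and a MTX_NOWITNESS pool can coexist, and callers simply choose the pool appropriate to the lock order guarantees they need.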
* Use __FBSDID().  (obrien, 2003-06-11, 1 file, -1/+3)
* Use the KTR_LOCK mask for logging events via KTR in lockmgr() rather
  than KTR_LOCKMGR.  lockmgr locks are locks just like other locks.  (jhb, 2003-03-11, 1 file, -4/+4)
* Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
  to WITNESS_WARN().  (jhb, 2003-03-04, 1 file, -1/+3)
* - Add an interlock argument to BUF_LOCK and BUF_TIMELOCK.  (jeff, 2003-02-25, 1 file, -8/+5)
  - Remove the buftimelock mutex and acquire the buf's interlock to protect
    these fields instead.
  - Hold the vnode interlock while locking bufs on the clean/dirty queues.
    This reduces some cases from one BUF_LOCK with a LK_NOWAIT and another
    BUF_LOCK with a LK_TIMEFAIL to a single lock.
      Reviewed by: arch, mckusick
* - Add a WITNESS_SLEEP() for the appropriate cases in lockmgr().  (jeff, 2003-02-16, 1 file, -0/+7)
* The lock manager has to keep track of locks per thread, not per process.  (julian, 2003-02-05, 1 file, -19/+19)
      Submitted by: David Xu (davidxu@)
      Reviewed by: jhb@
* Reversion of commit by Davidxu plus fixes since applied.  (julian, 2003-02-01, 1 file, -20/+20)
      I'm not convinced there is anything major wrong with the patch, but
      them's the rules...  I am using my "David's mentor" hat to revert
      this as he's offline for a while.
* Move UPCALL-related data structures out of kse; introduce a new data
  structure called kse_upcall to manage UPCALLs.  (davidxu, 2003-01-26, 1 file, -20/+20)
      All KSE binding and loaning code is gone.

      A thread owning an upcall can collect all completed syscall contexts
      in its ksegrp, turn itself into UPCALL mode, and take those contexts
      back to userland.  Any thread without an upcall structure has to
      export its context and exit at the user boundary.  Any thread
      running in user mode owns an upcall structure; when it enters the
      kernel, if the kse mailbox's current thread pointer is not NULL,
      then when the thread is blocked in the kernel, a new UPCALL thread
      is created and the upcall structure is transferred to the new UPCALL
      thread.  If the kse mailbox's current thread pointer is NULL, then
      when a thread is blocked in the kernel, no UPCALL thread will be
      created.

      Each upcall always has an owner thread.  Userland can remove an
      upcall by calling kse_exit; when all upcalls in a ksegrp are
      removed, the group is automatically shut down.  An upcall owner
      thread also exits when its process is in the exiting state; when an
      owner thread exits, the upcall it owns is also removed.

      KSE is a pure scheduler entity.  It represents a virtual CPU.  When
      a thread is running, it always has a KSE associated with it.  The
      scheduler is free to assign a KSE to a thread according to thread
      priority; if thread priority is changed, a KSE can be moved from one
      thread to another.  When a ksegrp is created, there are always N
      KSEs created in the group, where N is the number of physical CPUs in
      the current system.  This makes it possible that even if a userland
      UTS is only single-CPU safe, threads in the kernel can still execute
      on different CPUs in parallel.  Userland calls kse_create to add
      more upcall structures to a ksegrp to increase concurrency in
      userland itself; the kernel is not restricted by the number of
      upcalls userland provides.

      The code hasn't been tested under SMP by the author due to lack of
      hardware.
      Reviewed by: julian
* Remove a race condition / deadlock from snapshots.  (mckusick, 2002-11-30, 1 file, -6/+37)
      When converting from individual vnode locks to the snapshot lock, be
      sure to pass any waiting processes along to the new lock as well.
      This transfer is done by a new function in the lock manager,
      transferlockers(from_lock, to_lock).  Thanks to Lamont Granquist
      <lamont@scriptkiddie.org> for his help in pounding on snapshots
      beyond all reason and finding this deadlock.
      Sponsored by: DARPA & NAI Labs.
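The transferlockers(from, to) idea can be sketched with a toy wait queue: every process queued on the old lock is moved onto the new lock's queue, so waiters block on the new identity instead of a stale one. This is an illustrative singly-linked toy, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

struct toy_waiter { struct toy_waiter *next; };
struct toy_lock   { struct toy_waiter *waiters; int nwaiters; };

/* Append from's entire wait queue onto to's queue, then empty from,
 * so no waiter is left sleeping on the old lock. */
void toy_transferlockers(struct toy_lock *from, struct toy_lock *to)
{
    struct toy_waiter **tail = &to->waiters;
    while (*tail != NULL)
        tail = &(*tail)->next;
    *tail = from->waiters;
    to->nwaiters += from->nwaiters;
    from->waiters = NULL;
    from->nwaiters = 0;
}

int demo_transfer(void)
{
    struct toy_waiter a = { NULL }, b = { NULL };
    struct toy_lock from = { &a, 1 };   /* one waiter on the old lock */
    struct toy_lock to   = { &b, 1 };   /* one waiter already on the new */

    toy_transferlockers(&from, &to);
    /* All waiters now queue on 'to'; 'from' has none left. */
    return to.nwaiters * 10 + from.nwaiters;
}
```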