path: root/sys/kern/kern_lock.c
Commit log for sys/kern/kern_lock.c, most recent first. Each entry ends with the committer, the commit date, and the diffstat (files changed, -deleted/+added lines).
* Use the KTR_LOCK mask for logging events via KTR in lockmgr() rather
  than KTR_LOCKMGR. lockmgr locks are locks just like other locks.
  (jhb, 2003-03-11; 1 file, -4/+4)
* Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
  to WITNESS_WARN().
  (jhb, 2003-03-04; 1 file, -1/+3)
* - Add an interlock argument to BUF_LOCK and BUF_TIMELOCK.
  - Remove the buftimelock mutex and acquire the buf's interlock to protect
    these fields instead.
  - Hold the vnode interlock while locking bufs on the clean/dirty queues.
    This reduces some cases from one BUF_LOCK with a LK_NOWAIT and another
    BUF_LOCK with a LK_TIMEFAIL to a single lock.
  Reviewed by: arch, mckusick
  (jeff, 2003-02-25; 1 file, -8/+5)
* - Add a WITNESS_SLEEP() for the appropriate cases in lockmgr().
  (jeff, 2003-02-16; 1 file, -0/+7)
* The lock manager has to keep track of locks per thread, not per process.
  Submitted by: David Xu (davidxu@)
  Reviewed by: jhb@
  (julian, 2003-02-05; 1 file, -19/+19)
* Reversion of commit by Davidxu plus fixes since applied.
  I'm not convinced there is anything major wrong with the patch but
  them's the rules.. I am using my "David's mentor" hat to revert this
  as he's offline for a while.
  (julian, 2003-02-01; 1 file, -20/+20)
* Move UPCALL-related data structures out of the kse and introduce a new
  data structure called kse_upcall to manage UPCALLs. All KSE binding and
  loaning code is gone.

  A thread that owns an upcall can collect all completed syscall contexts in
  its ksegrp, turn itself into UPCALL mode, and take those contexts back to
  userland. Any thread without an upcall structure has to export its context
  and exit at the user boundary.

  Any thread running in user mode owns an upcall structure. When it enters
  the kernel and the kse mailbox's current thread pointer is not NULL, then
  when the thread blocks in the kernel a new UPCALL thread is created and
  the upcall structure is transferred to the new UPCALL thread. If the kse
  mailbox's current thread pointer is NULL, then no UPCALL thread is created
  when a thread blocks in the kernel.

  Each upcall always has an owner thread. Userland can remove an upcall by
  calling kse_exit; when all upcalls in a ksegrp are removed, the group is
  automatically shut down. An upcall owner thread also exits when the
  process is in the exiting state; when an owner thread exits, the upcall it
  owns is also removed.

  KSE is a pure scheduler entity: it represents a virtual CPU. When a thread
  is running, it always has a KSE associated with it. The scheduler is free
  to assign a KSE to a thread according to thread priority; if a thread's
  priority changes, its KSE can be moved from one thread to another. When a
  ksegrp is created, N KSEs are created in the group, where N is the number
  of physical CPUs in the current system. This makes it possible that even
  if a userland UTS is only single-CPU safe, threads in the kernel can still
  execute on different CPUs in parallel. Userland calls kse_create to add
  more upcall structures to a ksegrp to increase concurrency in userland
  itself; the kernel is not restricted by the number of upcalls userland
  provides.

  The code hasn't been tested under SMP by the author due to lack of hardware.
  Reviewed by: julian
  (davidxu, 2003-01-26; 1 file, -20/+20)
* Remove a race condition / deadlock from snapshots. When
  converting from individual vnode locks to the snapshot lock, be sure to
  pass any waiting processes along to the new lock as well. This transfer
  is done by a new function in the lock manager, transferlockers(from_lock,
  to_lock).
  Thanks to Lamont Granquist <lamont@scriptkiddie.org> for his help in
  pounding on snapshots beyond all reason and finding this deadlock.
  Sponsored by: DARPA & NAI Labs.
  (mckusick, 2002-11-30; 1 file, -6/+37)
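  A minimal sketch of the waiter hand-off this function enables; the helper
  below and the way the snapshot lock reaches it are illustrative, not the
  actual ffs snapshot code.

      #include <sys/param.h>
      #include <sys/lockmgr.h>
      #include <sys/vnode.h>

      /*
       * Illustrative only: when a vnode stops using its private lock and
       * starts sharing the snapshot lock, processes already asleep on the
       * old lock must be moved over too, or they would wait forever on a
       * lock that will never be granted again.
       */
      static void
      switch_to_snaplock(struct vnode *vp, struct lock *snaplock)
      {
              struct lock *oldlock = vp->v_vnlock;

              transferlockers(oldlock, snaplock);  /* move sleeping waiters */
              vp->v_vnlock = snaplock;             /* later lockers use the snapshot lock */
      }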
* Have lockinit() initialize the debugging fields of a lock
  when DEBUG_LOCKS is defined.
  Sponsored by: DARPA & NAI Labs.
  (mckusick, 2002-10-18; 1 file, -0/+9)
* Include <sys/lockmgr.h> for the definitions of the locking interfaces that
  are implemented here instead of depending on namespace pollution in
  <sys/lock.h>. Fixed nearby include messes (1 disordered include and 1
  unused include).
  (bde, 2002-08-27; 1 file, -2/+2)
* Replace various spellings with FALLTHROUGH, which is lint()able.
  (charnier, 2002-08-25; 1 file, -3/+3)
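  An illustration of the convention (every name below is invented): lint(1)
  recognizes the canonical FALLTHROUGH comment and suppresses its
  fall-through warning, while ad-hoc spellings such as "fall into next case"
  do not.

      enum { CMD_CLOSE, CMD_SYNC_AND_CLOSE };

      static int
      handle_cmd(int cmd)
      {
              int work = 0;

              switch (cmd) {
              case CMD_SYNC_AND_CLOSE:
                      work++;                 /* the sync step */
                      /* FALLTHROUGH */       /* the spelling lint(1) recognizes */
              case CMD_CLOSE:
                      work++;                 /* the close step */
                      break;
              default:
                      break;
              }
              return (work);
      }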
* Record the file, line, and pid of the last successful shared lock holder.
  This is useful as a last effort in debugging file system deadlocks. This
  is enabled via 'options DEBUG_LOCKS'.
  (jeff, 2002-05-30; 1 file, -0/+6)
* Change callers of mtx_init() to pass in an appropriate lock type name. In
  most cases NULL is passed, but in some cases such as network driver locks
  (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
  Tested on: i386, alpha, sparc64
  (jhb, 2002-04-04; 1 file, -2/+2)
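  A rough illustration of the calling convention described above; the mutex
  variables and name strings are invented for the example. The common case
  passes NULL for the type, while network drivers pass the shared
  MTX_NETWORK_LOCK type string so witness groups them as one lock type.

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx foo_mtx;              /* illustrative */
      static struct mtx foo_net_mtx;          /* illustrative */

      static void
      foo_mtx_setup(void)
      {
              /* Common case: a unique name, no separate type (NULL). */
              mtx_init(&foo_mtx, "foo state", NULL, MTX_DEF);

              /* Network-driver style: per-instance name plus the shared type. */
              mtx_init(&foo_net_mtx, "foo0", MTX_NETWORK_LOCK, MTX_DEF);
      }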
* Change wmesg to const char * instead of char *.
  (eivind, 2002-03-05; 1 file, -1/+1)
* Fix a BUF_TIMELOCK race against BUF_LOCK and fix a deadlock in vget()
  against VM_WAIT in the pageout code. Both fixes involve adjusting the
  lockmgr's timeout capability so locks obtained with timeouts do not
  interfere with locks obtained without a timeout.
  Hopefully MFC: before the 4.5 release
  (dillon, 2001-12-20; 1 file, -2/+4)
* Create a mutex pool API for short term leaf mutexes.
  Replace the manual mutex pool in kern_lock.c (lockmgr locks) with the new
  API. Replace the mutexes embedded in sxlocks with the new API.
  (dillon, 2001-11-13; 1 file, -37/+13)
* Add missing includes of sys/ktr.h.
  (jhb, 2001-10-11; 1 file, -0/+1)
* Malloc'd mutexes are not pre-zero'd, so random garbage (including
  0xdeadcode) may trigger the check that makes sure we don't initialize a
  mutex twice.
  (jhb, 2001-10-10; 1 file, -1/+1)
* Fix locking on td_flags for TDF_DEADLKTREAT. If the comments in the code
  are true that curthread can change during this function, then this flag
  needs to become a KSE flag, not a thread flag.
  (jhb, 2001-09-13; 1 file, -8/+4)
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there
  are smaller units of scheduling than the process (but only allow one
  thread per process at this time). This is functionally equivalent to the
  previous -current except that there is a thread associated with each
  process.
  Sorry John! (your next MFC will be a doosie!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  (julian, 2001-09-12; 1 file, -14/+14)
* If we've panic'd already, then just bail in lockmgr rather than blocking
  or possibly panic'ing again.
  (jhb, 2001-08-10; 1 file, -0/+5)
* Instead of asserting that a mutex is not still locked after unlocking it,
  assert that the mutex is owned and not recursed prior to unlocking it.
  This should give a clearer diagnostic when a programming error is caught.
  (alfred, 2001-04-28; 1 file, -1/+1)
* Assert that when using an interlock mutex it is not recursed when
  lockmgr() is called.
  Ok'd by: jhb
  (alfred, 2001-04-20; 1 file, -1/+3)
* Convert if/panic -> KASSERT, explain what triggered the assertion.
  (alfred, 2001-04-13; 1 file, -2/+4)
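  A hedged illustration of the conversion pattern; the condition and message
  are invented, not the actual kern_lock.c hunk. KASSERT is the
  check-plus-panic macro introduced in the 1999-01-08 entry further down and
  is compiled in only when the kernel is built with INVARIANTS.

      #include <sys/param.h>
      #include <sys/systm.h>

      static void
      check_share_count(int count)
      {
              /*
               * Before: an unconditional if/panic with no detail.
               *
               *      if (count < 0)
               *              panic("lockmgr: negative share count");
               */

              /*
               * After: the same check as an assertion that also reports the
               * offending value when it fires.
               */
              KASSERT(count >= 0, ("lockmgr: negative share count %d", count));
      }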
* Fix a precedence bug. ! has higher precedence than &.
  (jake, 2001-04-08; 1 file, -1/+1)
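  The class of bug being fixed, shown with an invented helper and flag (this
  is not the actual offending line): because ! binds more tightly than &,
  the unparenthesized form never really tests the flag bit.

      #define EX_FLAG 0x0004          /* stand-in for a flag like LK_INTERLOCK */

      static int
      flag_is_absent(int flags)
      {
              /*
               * Buggy form: "!flags & EX_FLAG" parses as "(!flags) & EX_FLAG";
               * !flags is only ever 0 or 1, so the flag bit is not tested.
               *
               *      return (!flags & EX_FLAG);
               */

              /* Intended form: mask the bit first, then negate. */
              return (!(flags & EX_FLAG));
      }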
* Proc locking.
  (jhb, 2001-02-09; 1 file, -14/+10)
* Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
    mtx_lock(lock)       for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock)  for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have mtx_unlock(lock) for MTX_DEF
  and mtx_unlock_spin(lock) for MTX_SPIN. We change the caller interface for
  the two different types of locks because the semantics are entirely
  different for each case, and this makes it explicitly clear and, at the
  same time, rids us of the extra `type' argument.

  The enter->lock and exit->unlock change has been made with the idea that
  we're "locking data" and not "entering locked code" in mind.

  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they can
  be passed to the lock/unlock routines by calling the corresponding
  wrappers: mtx_{lock, unlock}_flags(lock, flag(s)) and
  mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
  locks, respectively.

  Re-inline some lock acq/rel code; in the sleep lock case, we only inline
  the _obtain_lock()s in order to ensure that the inlined code fits into a
  cache line. In the spin lock case, we inline recursion and actually only
  perform a function call if we need to spin. This change has been made with
  the idea that we generally tend to avoid spin locks and that also the spin
  locks that we do have and are heavily used (i.e. sched_lock) do recurse,
  and therefore, in an effort to reduce function call overhead for some
  architectures (such as alpha), we inline recursion for this case.

  Create a new malloc type for the witness code and retire from using the
  M_DEV type. The new type is called M_WITNESS and is only declared if
  WITNESS is enabled.

  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.

  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  (bmilekic, 2001-02-09; 1 file, -12/+12)
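  A small before/after sketch of the renaming; the mutexes and the uses
  below are invented for the example.

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx foo_mtx;              /* an MTX_DEF (sleep) mutex */
      static struct mtx foo_spin_mtx;         /* an MTX_SPIN mutex */

      static void
      foo_locking_example(void)
      {
              /*
               * Old interface, for comparison:
               *   mtx_enter(&foo_mtx, MTX_DEF);        mtx_exit(&foo_mtx, MTX_DEF);
               *   mtx_enter(&foo_spin_mtx, MTX_SPIN);  mtx_exit(&foo_spin_mtx, MTX_SPIN);
               */

              /* New interface: the lock type is implied by the function name. */
              mtx_lock(&foo_mtx);
              mtx_unlock(&foo_mtx);

              mtx_lock_spin(&foo_spin_mtx);
              mtx_unlock_spin(&foo_spin_mtx);

              /* The two surviving flags go through the _flags wrappers. */
              mtx_lock_flags(&foo_mtx, MTX_QUIET);
              mtx_unlock_flags(&foo_mtx, MTX_QUIET);
      }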
* Convert all simplelocks to mutexes and remove the simplelock
  implementations.
  (jasone, 2001-01-24; 1 file, -120/+2)
* Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
  initialization until after malloc() is safe to call, then iterate through
  all mutexes and complete their initialization. This change is necessary in
  order to avoid some circular bootstrapping dependencies.
  (jasone, 2001-01-21; 1 file, -3/+3)
* Use msleep instead of mtx_exit; tsleep; mtx_enter, which is not safe.
  (jake, 2000-12-01; 1 file, -6/+3)
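  The hazard being removed, sketched with stand-in names (the mutex, wait
  channel, priority, and wait message are illustrative; mtx_enter/mtx_exit
  are the interface names of this era): dropping the mutex before tsleep()
  opens a window in which a wakeup can be lost, while msleep() releases the
  mutex and begins the sleep atomically.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/lockmgr.h>
      #include <sys/mutex.h>

      static void
      wait_for_lock(struct lock *lkp, struct mtx *lock_mtx)
      {
      #ifdef OLD_UNSAFE_WAY
              /*
               * A wakeup(lkp) issued after mtx_exit() but before tsleep()
               * has put this thread to sleep is simply lost.
               */
              mtx_exit(lock_mtx, MTX_DEF);
              tsleep(lkp, PVM, "lockmgr", 0);
              mtx_enter(lock_mtx, MTX_DEF);
      #else
              /*
               * msleep() drops the mutex and starts the sleep atomically,
               * then reacquires the mutex before returning.
               */
              msleep(lkp, lock_mtx, PVM, "lockmgr", 0);
      #endif
      }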
* - machine/mutex.h -> sys/mutex.h
  - The initial lock_mtx mutex used in the lockmgr code is initialized very
    early, so use MUTEX_DECLARE() and MTX_COLD.
  (jhb, 2000-10-20; 1 file, -5/+4)
* For lockmgr mutex protection, use an array of mutexes that are allocated
  and initialized during boot. This avoids bloating sizeof(struct lock). As
  a side effect, it is no longer necessary to enforce the assumption that
  lockinit()/lockdestroy() calls are paired, so the LK_VALID flag has been
  removed.
  Idea taken from: BSD/OS.
  (jasone, 2000-10-12; 1 file, -22/+79)
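  A rough sketch of the idea; the array size, hash, and names are invented,
  not the actual kern_lock.c code. Rather than embedding a mutex in every
  struct lock, a boot-time array of mutexes is shared, and each lock is
  mapped to a slot by hashing its address.

      #include <sys/param.h>
      #include <sys/lockmgr.h>
      #include <sys/mutex.h>

      #define LOCK_POOL_SIZE  32      /* illustrative; a small power of two */

      static struct mtx lock_pool[LOCK_POOL_SIZE];

      /* Map a lockmgr lock to its interlock by hashing the lock's address. */
      static struct mtx *
      lock_pool_mtx(struct lock *lkp)
      {
              uintptr_t v = (uintptr_t)lkp;

              /* Discard low bits that are identical because of alignment. */
              return (&lock_pool[(v >> 8) & (LOCK_POOL_SIZE - 1)]);
      }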
* Convert lockmgr locks from using simple locks to using mutexes.
  Add lockdestroy() and appropriate invocations, which corresponds to
  lockinit() and must be called to clean up after a lockmgr lock is no
  longer needed.
  (jasone, 2000-10-04; 1 file, -17/+47)
* Move MAXCPU from machine/smp.h to machine/param.h to fix breakage
  with !SMP kernels. Also, replace NCPUS with MAXCPU since they are
  redundant.
  (ps, 2000-09-23; 1 file, -2/+2)
* Make LINT compile.
  (phk, 2000-09-16; 1 file, -2/+0)
* Eliminate the undocumented, experimental, non-delivering and highly
  dangerous MAX_PERF option.
  (phk, 2000-03-16; 1 file, -16/+0)
* Lock reporting and assertion changes.
  - lockstatus() and VOP_ISLOCKED() get a new process argument and a new
    return value: LK_EXCLOTHER, when the lock is held exclusively by another
    process.
  - The ASSERT_VOP_(UN)LOCKED family is extended to use what this gives
    them.
  - Extend the vnode_if.src format to allow more exact specification than
    locked/unlocked.
  This commit should not do any semantic changes unless you are using
  DEBUG_VFS_LOCKS.
  Discussed with: grog, mch, peter, phk
  Reviewed by: peter
  (eivind, 1999-12-11; 1 file, -4/+8)
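  A hedged sketch of consuming the new return value; the checks and panic
  messages are invented, and the signatures shown are the era's
  process-taking forms.

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/lockmgr.h>
      #include <sys/vnode.h>

      /* Illustrative only: distinguish "I hold it", "someone else holds
         it", and "nobody holds it" when checking a vnode lock. */
      static void
      assert_vnode_locked_by(struct vnode *vp, struct proc *p)
      {
              switch (VOP_ISLOCKED(vp, p)) {
              case LK_EXCLUSIVE:      /* held exclusively by p */
              case LK_SHARED:         /* held shared */
                      break;
              case LK_EXCLOTHER:      /* held exclusively by another process */
                      panic("vnode locked by another process");
              default:                /* 0: not locked at all */
                      panic("vnode not locked");
              }
      }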
* Correct a locking error in apause: It should always hold
  the simple lock when it returns. Also, eliminate spinning on a
  uniprocessor. It's pointless.
  Submitted by: bde, Assar Westerlund <assar@sics.se>
  (alc, 1999-11-11; 1 file, -14/+16)
* Fix process p_locks accounting. Conversions of the owner to LK_KERNPROC
  caused p_locks to be improperly accounted.
  Submitted by: Tor.Egge@fast.no
  (dillon, 1999-09-27; 1 file, -2/+5)
* $Id$ -> $FreeBSD$
  (peter, 1999-08-28; 1 file, -1/+1)
* When requesting an exclusive lock with LK_NOWAIT, do not panic
  if LK_RECURSIVE is not set, as we will simply return that the lock is
  busy and not actually deadlock. This allows processes to use polling
  locks against buffers that they may already hold exclusively locked.
  (mckusick, 1999-06-28; 1 file, -5/+7)
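  A sketch of the polling pattern this permits; the helper is invented and
  the buffer handling is simplified. An LK_EXCLUSIVE | LK_NOWAIT request
  against a buffer the caller may already hold now comes back busy instead
  of panicking about deadlock.

      #include <sys/param.h>
      #include <sys/buf.h>
      #include <sys/lockmgr.h>
      #include <sys/proc.h>

      /*
       * Illustrative only: poll for a buffer's lock without sleeping.
       * Returns 0 on success, or EBUSY (rather than triggering a deadlock
       * panic) when the buffer is already locked, possibly by the caller.
       */
      static int
      try_lock_buffer(struct buf *bp, struct proc *p)
      {
              return (lockmgr(&bp->b_lock, LK_EXCLUSIVE | LK_NOWAIT, NULL, p));
      }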
* Convert buffer locking from using the B_BUSY and B_WANTED flags to using
  lockmgr locks. This commit should be functionally equivalent to the old
  semantics. That is, all buffer locking is done with LK_EXCLUSIVE requests.
  Changes to take advantage of LK_SHARED and LK_RECURSIVE will be done in
  future commits.
  (mckusick, 1999-06-26; 1 file, -2/+18)
* Fix breakage for alphas.
  Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
  (julian, 1999-03-15; 1 file, -2/+2)
* This solves a deadlock that can occur when read()ing into a file-mmap()
  space. When doing this, it is possible for another process to attempt to
  get an exclusive lock on the vnode and deadlock the mmap/read combination
  when the uiomove() call tries to obtain a second shared lock on the
  vnode. There is still a potential deadlock situation with write()/mmap().
  Submitted by: Matt Dillon <dillon@freebsd.org>
  Reviewed by: Luoqi Chen <luoqi@freebsd.org>
  Delimited by tags PRE_MATT_MMAP_LOCK and POST_MATT_MMAP_LOCK in
  kern/kern_lock.c and kern/kern_subr.c
  (julian, 1999-03-12; 1 file, -3/+24)
* Add 'options DEBUG_LOCKS', which stores extra information in struct
  lock, and add some macros and function parameters to make sure that
  the information gets to the point where it can be put in the lock
  structure. While I'm here, add DEBUG_VFS_LOCKS to LINT.
  (eivind, 1999-01-20; 1 file, -1/+25)
* KNFize, by bde.
  (eivind, 1999-01-10; 1 file, -1/+2)
* Split DIAGNOSTIC -> DIAGNOSTIC, INVARIANTS, and INVARIANT_SUPPORT as
  discussed on -hackers. Introduce 'KASSERT(assertion, ("panic message",
  args))' for simple check + panic.
  Reviewed by: msmith
  (eivind, 1999-01-08; 1 file, -9/+2)
* Staticize.
  (eivind, 1998-11-26; 1 file, -2/+2)
* Really finish supporting compiling with `gcc -ansi'.
  (bde, 1998-04-17; 1 file, -2/+2)
* Some kern_lock code improvements. Add missing wakeup, and enable
  disabling some diagnostics when memory or speed is at a premium.
  (dyson, 1998-03-07; 1 file, -10/+43)