path: root/sys/kern/kern_lock.c
Commit message (author, date, files changed, lines changed)
...
* - Remove a stale comment. (attilio, 2008-04-12, 1 file, -4/+2)
| - Add an extra assertion in order to catch malformed requested operations.
* - Use a different encoding for lockmgr options: make them encoded by (attilio, 2008-04-07, 1 file, -1/+1)
|   bit, in order to allow per-bit checks on the options flag, in particular in the consumers' code. [1]
| - Re-enable the check against TDP_DEADLKTREAT, as the anti-waiter-starvation patch allows exclusive waiters to
|   override new shared requests.
|
| [1] Requested by: pjd, jeff
* Optimize lockmgr in order to get rid of the pool mutex interlock, of the (attilio, 2008-04-06, 1 file, -623/+805)
|   state-transitioning flags and of the msleep(9) calls.  Use, instead, an algorithm very similar to what sx(9)
|   and rwlock(9) already do, with direct access to the sleepqueue(9) primitive.  In order to avoid writer
|   starvation, a mechanism very similar to the one rwlock(9) now uses is implemented, with a corresponding
|   per-thread counter of shared lockmgr locks held.
|
|   This patch also adds 2 new functions to the lockmgr KPI: lockmgr_rw() and lockmgr_args_rw().  These are like
|   the 2 "normal" versions, but they both accept an rwlock as the interlock.  In order to realize this, the
|   general lockmgr manager function "__lockmgr_args()" has been implemented on top of the generic lock layer.
|   It supports all the blocking primitives, but currently only these 2 mappers exist.
|
|   The patch drops WITNESS support for the moment, but it will probably be added back soon.  Also, there is a
|   small race in the draining code which is also present in the current CVS implementation: if some sharers are
|   on the runqueue when they wake up, they can contend for the lock with the exclusive drainer.  This is hard to
|   fix, but the committed code mitigates the issue a lot better than the (past) CVS version.  In addition, the
|   KA_HELD and KA_UNHELD assertions have been turned into no-ops because they are dangerous and will soon no
|   longer be supported.
|
|   In order to avoid namespace pollution, stack.h is split into two parts: one which includes only the
|   "struct stack" definition (_stack.h) and one defining the KPI.  In this way, the newly added _lockmgr.h can
|   just include _stack.h.
|
|   The kernel ABI is heavily changed by this commit (the now-committed version of "struct lock" is a lot smaller
|   than the previous one) and the KPI is broken by the introduction of lockmgr_rw() / lockmgr_args_rw(), so
|   manpages and __FreeBSD_version will be updated accordingly.
|
|   Tested by: kris, pho, jeff, danger
|   Reviewed by: jeff
|   Sponsored by: Google, Summer of Code program 2007
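The lockmgr_rw() variant described in the commit above can be pictured with a minimal consumer sketch.  This is not code from the commit: the struct foo container, its field names and the helper are invented, and the exact lockmgr_rw() prototype should be checked against the committed sys/lockmgr.h; LK_EXCLUSIVE and LK_INTERLOCK are pre-existing lockmgr flags.

    /* Minimal sketch, assuming lockmgr_rw(lk, flags, rwlock-interlock) as described above. */
    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>
    #include <sys/rwlock.h>

    struct foo {                            /* hypothetical consumer */
            struct rwlock   f_interlock;    /* short-term interlock */
            struct lock     f_lock;         /* long-term sleepable lock */
    };

    static void
    foo_lock_exclusive(struct foo *fp)
    {
            rw_wlock(&fp->f_interlock);
            /* ... examine state protected by the interlock ... */
            /* LK_INTERLOCK asks lockmgr to drop the interlock once the request is queued. */
            lockmgr_rw(&fp->f_lock, LK_EXCLUSIVE | LK_INTERLOCK, &fp->f_interlock);
    }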
* - Handle the buffer lock waiters count directly in the buffer cache instead (attilio, 2008-03-01, 1 file, -18/+8)
|   of relying on the lockmgr support [1]:
|   * bump the waiters count only if the interlock is held
|   * let brelvp() return the waiters count
|   * rely on brelvp() instead of BUF_LOCKWAITERS() in order to check the number of waiters
| - Remove a namespace pollution recently introduced by lockmgr.h including lock.h: include lock.h directly in the
|   consumers and make it mandatory for using lockmgr.
| - Modify the flags accepted by lockinit():
|   * introduce LK_NOPROFILE, which disables lock profiling for the specified lockmgr
|   * introduce LK_QUIET, which disables ktr tracing for the specified lockmgr [2]
|   * disallow LK_SLEEPFAIL and LK_NOWAIT from being passed there, so that they can only be used on a per-instance
|     basis
| - Remove BUF_LOCKWAITERS() and lockwaiters() as they are no longer used.
|
| This patch breaks the KPI, so __FreeBSD_version will be bumped and manpages updated by further commits.
| Additionally, the 'struct buf' changes result in a disturbed ABI as well.
|
| [2] Really, currently there is no ktr tracing in the lockmgr, but it will be added soon.
|
| [1] Submitted by: kib
| Tested by: pho, Andrea Barberio <insomniac at slackware dot it>
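As a rough illustration of the new lockinit() flags (not code from the commit; the lock variable and wait message are made up), a lockmgr lock can opt out of profiling and ktr tracing at initialization time:

    #include <sys/param.h>
    #include <sys/priority.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>

    static struct lock example_lk;          /* hypothetical lock */

    static void
    example_init(void)
    {
            /* lockinit(lk, prio, wmesg, timo, flags) */
            lockinit(&example_lk, PRIBIO, "exlk", 0, LK_NOPROFILE | LK_QUIET);
    }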
* Axe the 'thread' argument from VOP_ISLOCKED() and lockstatus() as it is (attilio, 2008-02-25, 1 file, -5/+2)
| always curthread.  As the KPI gets broken by this patch, manpages and __FreeBSD_version will be updated by
| further commits.
|
| Tested by: Andrea Barberio <insomniac at slackware dot it>
* - Introduce lockmgr_args() in the lockmgr space.  This function performs (attilio, 2008-02-15, 1 file, -24/+44)
|   the same operation as lockmgr(), but accepts a custom wmesg, prio and timo for the particular lock instance,
|   overriding the default values lkp->lk_wmesg, lkp->lk_prio and lkp->lk_timo.
| - Use lockmgr_args() in order to implement BUF_TIMELOCK().
| - Clean up BUF_LOCK().
| - Remove LK_INTERNAL as it is no longer used in the lockmgr namespace.
|
| Tested by: Andrea Barberio <insomniac at slackware dot it>
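A hedged sketch of how lockmgr_args() might be used, not the committed BUF_TIMELOCK() implementation: the helper name and wait message are invented, and the flag combination and argument order (lk, flags, interlock, wmesg, prio, timo) are assumptions drawn from the description above.

    #include <sys/param.h>
    #include <sys/priority.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>
    #include <sys/mutex.h>

    /* Acquire lk exclusively, dropping ilk, sleeping at most 'timo' ticks. */
    static int
    example_timed_lock(struct lock *lk, struct mtx *ilk, int timo)
    {
            return (lockmgr_args(lk, LK_EXCLUSIVE | LK_TIMELOCK | LK_INTERLOCK,
                ilk, "extimo", PRIBIO, timo));
    }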
* - Add real assertions to the lockmgr locking primitives. (attilio, 2008-02-13, 1 file, -28/+117)
|   A couple of notes about this:
|   * WITNESS support, when enabled, is only used for shared locks in order to avoid problems with "disowned"
|     locks.
|   * KA_HELD and KA_UNHELD only exist in the lockmgr namespace in order to assert that a generic thread (not
|     curthread) owns or does not own the lock.  Really, this kind of check is bogus, but it seems very widespread
|     in the consumers' code.  So, for the moment, we cater to this untrusted behaviour until the consumers are
|     fixed and the options can be removed (hopefully during the 8.0-CURRENT lifecycle).
|   * Implementing KA_HELD and KA_UNHELD (not supported natively by WITNESS) made it necessary to introduce
|     LA_MASKASSERT, which specifies the range of the default lock assertion flags.
|   * In other respects, lockmgr_assert() follows exactly what the other locking primitives offer for this
|     operation.
| - Build real assertions for buffer cache locks on top of lockmgr_assert().  They can be used with the
|   BUF_ASSERT_*(bp) paradigm.
| - Add checks at lock destruction time and use a cookie for verifying lock integrity at any operation.
| - Redefine BUF_LOCKFREE() so that it does not use a direct assert but relies on the aforementioned
|   destruction-time check.
|
| The KPI is evidently broken, so a __FreeBSD_version bump and manpage update are necessary and will be committed
| soon.
|
| Side note: lockmgr_assert() will soon be used in order to implement real assertions in the vnode namespace,
| replacing the legacy and still bogus "VOP_ISLOCKED()" way.
|
| Tested by: kris (earlier version)
| Reviewed by: jhb
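For context, a minimal sketch of the assertion style this enables (illustrative only: the consumer function is invented, the KA_* constant is spelled as in the commit message, and such checks are typically compiled out without INVARIANTS):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>

    static void
    example_modify(struct lock *lk)
    {
            /* The caller must hold lk exclusively before mutating protected state. */
            lockmgr_assert(lk, KA_XLOCKED);
            /* ... modify state protected by lk ... */
    }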
* Convert all explicit instances of VOP_ISLOCKED(arg, NULL) into (attilio, 2008-02-08, 1 file, -2/+2)
| VOP_ISLOCKED(arg, curthread).  Now, VOP_ISLOCKED() and lockstatus() should only accept curthread as their
| argument; this will lead to axing the additional argument from both functions, making the code cleaner.
|
| Reviewed by: jeff, kib
* td cannot be NULL at that point, so just axe the check. (attilio, 2008-02-06, 1 file, -1/+1)
|
* Add WITNESS support to the lockmgr locking primitive. (attilio, 2008-02-06, 1 file, -11/+46)
| This support tries to parallel the other locking primitives as much as possible, but there are differences;
| more specifically:
| - The base WITNESS support is already equipped to allow duplicated lock acquisition, as lockmgr relies on this.
| - In the case of lockmgr_disown(), the lock ends up unlocked from WITNESS' point of view even if it is still
|   held by the "kernel context".
| - In the case of upgrading we can have 3 different situations:
|   * total unlocking of the shared lock and nothing else
|   * a real WITNESS upgrade if the owner is the first upgrader
|   * shared unlocking and exclusive locking if the owner is not the first upgrader but is still allowed to
|     upgrade
| - LK_DRAIN is basically handled like an exclusive acquisition.
|
| Additionally, the new options LK_NODUP and LK_NOWITNESS can now be used with lockinit(): LK_NOWITNESS disables
| WITNESS for the specified lock, while LK_NODUP enables duplicated-lock tracking.  This will require a manpage
| update and a __FreeBSD_version bump (addressed by further commits).
|
| This patch also fixes a problem occurring if a lockmgr is held in exclusive mode and the same owner tries to
| acquire it in shared mode: currently there is a spurious shared lock acquisition, while what we really want is a
| lock downgrade.  Probably, this situation could be better served with an EDEADLK errno return.
|
| Side note: first testing of this patch already revealed several LORs being reported, so please expect LOR
| cascades until they are resolved.  NTFS is also reported broken by the WITNESS introduction.  BTW, NTFS is
| exposing a lock leak which needs to be fixed, and this patch can help with that if tweaked correctly.
|
| Tested by: kris, yar, Scot Hetzel <swhetzel at gmail dot com>
* Clean up the lockmgr interface and exported KPI: (attilio, 2008-01-24, 1 file, -24/+7)
| - Remove the "thread" argument from the lockmgr() function, as it is always curthread now.
| - Axe the lockcount() function as it is no longer used.
| - Axe LOCKMGR_ASSERT() as it is really bogus and not currently used.  Hopefully it will soon be replaced by
|   something suitable.
| - Remove the prototype for dumplockinfo() as the function is no longer present.
|
| Additionally:
| - Introduce a KASSERT() in lockstatus() in order to let it accept only curthread or NULL, as only these should
|   be passed.
| - Do a little bit of style(9) cleanup on lockmgr.h.
|
| The KPI is heavily broken by this change, so manpages and __FreeBSD_version will be modified accordingly by
| further commits.
|
| Tested by: matteo
* The lockmgr() function will return successfully when called while panicking, (attilio, 2008-01-11, 1 file, -3/+6)
| but it won't actually lock anything.  This can lead some paths to reach lockmgr_disown() with an inconsistent
| lock, which triggers the related assertions.  Fix those so that they recognize the panic situation and do not
| trigger.
|
| Reported by: pho
| Submitted by: kib
* Fix a last-second typo in the recent lockmgr_disown() introduction. (attilio, 2008-01-09, 1 file, -2/+2)
|
* Remove explicit calls to lockmgr() with a NULL thread argument. (attilio, 2008-01-08, 1 file, -23/+42)
| Now the lockmgr() function can only be called passing curthread, and the KASSERT() is upgraded accordingly.
|
| In order to support on-the-fly owner switching, the new function lockmgr_disown() has been introduced and is
| used in BUF_KERNPROC().  The KPI, therefore, has changed and __FreeBSD_version will be bumped soon.  Differently
| from the previous code, we assume an idle thread cannot try to acquire a lockmgr lock as it cannot sleep, so the
| related check [1] in BUF_KERNPROC() is dropped.
|
| Tested by: kris
|
| [1] kib asked for a KASSERT about this condition in lockmgr_disown(), but after thinking about it, as this is a
| well-known general rule, I found it not really necessary.
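A sketch of the disown pattern described above (not the committed BUF_KERNPROC() body; the function name here is hypothetical):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>

    static void
    example_hand_off(struct lock *lk)
    {
            /* We hold lk exclusively; pass ownership to LK_KERNPROC. */
            lockmgr_disown(lk);
            /*
             * Whoever completes the deferred work releases the lock later,
             * typically with lockmgr(lk, LK_RELEASE, NULL) from that context.
             */
    }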
* Trim the now-unused option LK_EXCLUPGRADE out of the lockmgr namespace. (attilio, 2007-12-28, 1 file, -13/+0)
| This option just adds complexity and the new implementation will no longer support it, so axing it now that it
| is unused is probably the better idea.
|
| __FreeBSD_version is bumped in order to reflect the KPI breakage introduced by this patch.  In the ports tree,
| kris found that only old OSKit code uses it, but as that code is thought to work only on the 2.x kernel series,
| the version bump will solve any problem.
* In order to avoid a huge class of deadlocks (in particular in interactions (attilio, 2007-12-27, 1 file, -1/+9)
| with the interlock), the owner of the lock should only be curthread or, at least for its limited usage, NULL,
| which identifies LK_KERNPROC.
|
| The thread "extra argument" of the lockmgr interface is going to be removed in the near future, but for the
| moment just let the kernel run for some days with this check on, in order to find potential deadlocking places
| around the kernel and fix them.
* Modify the stack(9) stack_print() and stack_sbuf_print() routines to use new (rwatson, 2007-12-01, 1 file, -1/+1)
| linker interfaces for looking up function names and offsets from instruction pointers.  Create two variants of
| each call: one that is "DDB-safe" and avoids locking in the linker, and one that is safe for use in live
| kernels, by virtue of observing locking, and in particular safe when kernel modules are being loaded and
| unloaded simultaneously with their use.  This will allow them to be used outside of debugging contexts.
|
| Modify two of the three current stack(9) consumers to use the DDB-safe interfaces, as they run in low-level
| debugging contexts, such as inside lockmgr(9) and the kernel memory allocator.
|
| Update the man page.
* transferlockers() is a very dangerous and hackish function, as waiters (attilio, 2007-11-24, 1 file, -28/+0)
| should never be moved from one lock to another.  As, luckily, nothing in our tree is using it, axe the function.
|
| This breaks the lockmgr KPI, so interested third-party modules should update their source code with an
| appropriate replacement.
|
| Ok'ed by: ups, rwatson
| MFC after: 3 days
* Expand the lock class with the "virtual" function lc_assert, which offers (attilio, 2007-11-18, 1 file, -0/+9)
| a unified way for all the lock primitives to express lock assertions.  Currently, lockmgr and rmlock don't have
| assertions, so just panic in that case.
|
| This will be a base for more callout improvements.
|
| Ok'ed by: jhb, jeff
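Roughly, a generic caller could now assert through the lock class, as sketched below.  This is illustrative, not code from the commit: the helper is invented, and the LA_XLOCKED constant stands in for whatever assertion flag the underlying class understands.

    #include <sys/param.h>
    #include <sys/lock.h>

    static void
    example_assert_xlocked(struct lock_object *lo)
    {
            /* Dispatch to the primitive-specific assertion routine. */
            LOCK_CLASS(lo)->lc_assert(lo, LA_XLOCKED);
    }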
* Generally we are interested in what thread did something, as (julian, 2007-11-14, 1 file, -1/+1)
| opposed to what process.  Since threads by default have the name of the process unless overwritten with more
| useful information, just print the thread name instead.
* Move lock_profile_object_{init,destroy}() into lock_{init,destroy}(). (jhb, 2007-05-18, 1 file, -2/+1)
|
* - Use lock_init()/lock_destroy() to set up the lock_object inside of lockmgr. (jhb, 2007-03-30, 1 file, -7/+11)
|   We can now use LOCK_CLASS() as a stronger check in lockmgr_chain() as a result.  This required putting back
|   lk_flags, as lockmgr's use of flags otherwise conflicted with other flags in lo_flags.
| - Tweak the 'show lock' output for lockmgr to match sx, rw, and mtx.
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes, (jhb, 2007-03-21, 1 file, -1/+1)
| rwlocks, and sx locks to 'lock_object'.
* Handle the case when a thread is blocked on a lockmgr lock with LK_DRAIN (jhb, 2007-03-21, 1 file, -3/+16)
| in DDB's 'show sleepchain'.
|
| MFC after: 3 days
* Add two new function pointers, 'lc_lock' and 'lc_unlock', to lock classes. (jhb, 2007-03-09, 1 file, -3/+19)
| These functions are intended to be used to drop a lock and then reacquire it when doing a sleep such as
| msleep(9).  Both functions accept a 'struct lock_object *' as their first parameter.  The 'lc_unlock' function
| returns an integer that is then passed as the second parameter to the subsequent 'lc_lock' function.  This can
| be used to communicate state; for example, sx locks and rwlocks use this to indicate whether the lock was
| share/read locked vs. exclusive/write locked.
|
| Currently, spin mutexes and lockmgr locks do not provide working lc_lock and lc_unlock functions.
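A minimal sketch of the intended drop/reacquire pattern (illustrative; the function is invented, and the actual sleep setup via msleep(9)/sleepqueue(9) is elided):

    #include <sys/param.h>
    #include <sys/lock.h>

    static void
    example_sleep_holding(struct lock_object *lo)
    {
            struct lock_class *lc = LOCK_CLASS(lo);
            int how;

            how = lc->lc_unlock(lo);    /* drop; remember shared vs. exclusive */
            /* ... go to sleep while the lock is released ... */
            lc->lc_lock(lo, how);       /* reacquire in the recorded mode */
    }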
* Use C99-style struct member initialization for lock classes. (jhb, 2007-03-09, 1 file, -3/+3)
|
* General LOCK_PROFILING cleanup. (kmacy, 2007-02-26, 1 file, -13/+16)
| - Only collect timestamps when a lock is contested; this reduces the overhead of collecting profiles from 20x
|   to 5x.
| - Remove an unused function from subr_lock.c.
| - Generalize the cnt_hold and cnt_lock statistics to be kept for all locks.
| - NOTE: rwlock profiling generates invalid statistics (and most likely always has); someone familiar with it
|   should review.
* Track the lock class name in a way that doesn't break WITNESS. (kmacy, 2006-11-13, 1 file, -7/+15)
|
* Show the lock class in the profiling output for the default case, where the type is not (kmacy, 2006-11-12, 1 file, -0/+2)
| specified when initializing the lock.
|
| Approved by: scottl (standing in for mentor rwatson)
* MUTEX_PROFILING has been generalized to LOCK_PROFILING.  We now profile (kmacy, 2006-11-11, 1 file, -5/+19)
| wait times (time waited to acquire) and hold times for *all* kernel locks.  If the architecture has a
| system-synchronized TSC, the profiling code will use that, thereby minimizing profiling overhead.  Large chunks
| of profiling code have been moved out of line; the overhead measured on the T1 when it is compiled in but not
| enabled is < 1%.
|
| Approved by: scottl (standing in for mentor rwatson)
| Reviewed by: des and jhb
* If the buffer lock has waiters after the buffer has changed identity, then (tegge, 2006-10-02, 1 file, -0/+15)
| getnewbuf() needs to drop the buffer in order to wake waiters that might sleep on the buffer in the context of
| the old identity.
* Add a new 'show sleepchain' DDB command, similar to 'show lockchain' except (jhb, 2006-08-15, 1 file, -0/+28)
| that it operates on lockmgr and sx locks.  This can be useful for tracking down vnode deadlocks in VFS, for
| example.  Note that this command is a bit more fragile than 'show lockchain', as we have to poke around at the
| wait channel of a thread to see if it points to either a struct lock or a condition variable inside of a
| struct sx.  If td_wchan points to something unmapped, then this command will terminate early due to a fault,
| but no harm will be done.
* Add a 'show lockmgr' command that dumps the relevant details of a lockmgr (jhb, 2006-08-15, 1 file, -0/+32)
| lock.
* Remove a duplicated #include. (pjd, 2006-07-14, 1 file, -1/+0)
|
* - Remove an unused include. (jeff, 2005-12-23, 1 file, -1/+0)
|
| Submitted by: Antoine Brodin <antoine.brodin@laposte.net>
* Include kdb.h so that kdb_active is declared regardless of KDB being (rwatson, 2005-10-02, 1 file, -0/+1)
| included in the kernel.
|
| MFC after: 0 days
* In lockstatus(), don't lock and unlock the interlock when testing the (rwatson, 2005-09-27, 1 file, -2/+8)
| sleep lock status while kdb_active, or we risk contending with the mutex on another CPU, resulting in a panic
| when using "show lockedvnods" while in DDB.
|
| MFC after: 3 days
| Reviewed by: jhb
| Reported by: kris
* Print out a warning and a backtrace if we try to unlock a lockmgr that (ssouhlal, 2005-09-02, 1 file, -0/+7)
| we do not hold.
|
| Glanced at by: phk
| MFC after: 3 days
* Add a 'depth' argument to the CTRSTACK() macro, which allows reducing the number (pjd, 2005-08-29, 1 file, -1/+1)
| of ktr slots used.  If 'depth' is equal to 0, the whole stack will be logged, just like before.
* - Fix a problem that slipped through review: the stack member of the lockmgr (jeff, 2005-08-03, 1 file, -5/+8)
|   structure should have the lk_ prefix.
| - Add stack_print(lkp->lk_stack) to the information printed with lockmgr_printinfo().
* - Replace the series of DEBUG_LOCKS hacks which tried to save the vn_lock (jeff, 2005-08-03, 1 file, -34/+15)
|   caller by saving the stack of the last locker/unlocker in lockmgr.  We also put the stack in KTR at the
|   moment.
|
| Contributed by: Antoine Brodin <antoine.brodin@laposte.net>
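The stack(9) pattern used here can be sketched as follows.  This is illustrative only: the structure and helper names are invented, while stack_save() and stack_print() are the stack(9) routines the commit builds on.

    #include <sys/param.h>
    #include <sys/stack.h>

    struct example_debug {
            struct stack    d_stack;        /* stack of the last locker/unlocker */
    };

    static void
    example_record(struct example_debug *d)
    {
            stack_save(&d->d_stack);        /* capture the current call stack */
    }

    static void
    example_report(struct example_debug *d)
    {
            stack_print(&d->d_stack);       /* print frames with symbol+offset */
    }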
* - Differentiate two UPGRADE panics so I have a better idea of what's going (jeff, 2005-04-12, 1 file, -1/+3)
|   on here.
* - Remove dead code. (jeff, 2005-04-06, 1 file, -26/+2)
|
* - Slightly restructure acquire() so I can add more ktr information and (jeff, 2005-04-03, 1 file, -18/+14)
|   an assert to help find two strange bugs.
| - Remove some nearby spls.
* - Add an LK_NOSHARE flag which forces all shared lock requests to be (jeff, 2005-03-31, 1 file, -1/+5)
|   treated as exclusive lock requests.
|
| Sponsored by: Isilon Systems, Inc.
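A brief sketch, assuming LK_NOSHARE is passed at lockinit() time (the lock variable and wait message are made up):

    #include <sys/param.h>
    #include <sys/priority.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>

    static struct lock ns_lock;             /* hypothetical lock */

    static void
    ns_init(void)
    {
            lockinit(&ns_lock, PRIBIO, "nsharelk", 0, LK_NOSHARE);
            /* Subsequent LK_SHARED requests on ns_lock now behave like LK_EXCLUSIVE. */
    }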
* - Remove apause().  It makes no sense with our present mutex implementation, (jeff, 2005-03-31, 1 file, -40/+0)
|   since simply unlocking a mutex does not ensure that one of the waiters will run and acquire it.  We're more
|   likely to reacquire the mutex before anyone else has a chance.  It has also bitten me three times now, as it's
|   not safe to drop the interlock before sleeping in many cases.
|
| Sponsored by: Isilon Systems, Inc.
* - Don't bump the count twice in the LK_DRAIN case. (jeff, 2005-03-28, 1 file, -2/+0)
|
| Sponsored by: Isilon Systems, Inc.
* - Restore COUNT() in all of its original glory.  Don't make it dependent (jeff, 2005-03-25, 1 file, -17/+19)
|   on DEBUG, as ufs will soon grow a dependency on this count.
|
| Discussed with: bde
| Sponsored by: Isilon Systems, Inc.
* - Complete the implementation of td_locks.  Track the number of outstanding (jeff, 2005-03-24, 1 file, -0/+11)
|   lockmgr locks that this thread owns.  This is complicated due to LK_KERNPROC and because lockmgr tolerates
|   unlocking an unlocked lock.
|
| Sponsored by: Isilon Systems, Inc.
* - transferlockers() requires the interlock to be SMP-safe. (jeff, 2005-03-15, 1 file, -2/+8)
|
| Sponsored by: Isilon Systems, Inc.