path: root/sys/kern/kern_mutex.c
Commit message  [Author, Date, Files changed, Lines -/+]
...
* Fix a problem with non-preemptive kernels coming from mis-merging of  [attilio, 2007-06-05, 1 file, -47/+0]
    existing code with the new thread_lock patch.  This also cleans up the unlock
    operation for mutexes a bit.
    Approved by: jhb, jeff (mentor)
* Restore non-SMP build.  [kib, 2007-06-05, 1 file, -1/+2]
    Reviewed by: attilio
* Commit 3/14 of sched_lock decomposition.  [jeff, 2007-06-04, 1 file, -27/+122]
    - Add a per-turnstile spinlock to solve potential priority propagation
      deadlocks that are possible with thread_lock().
    - The turnstile lock order is defined as the exact opposite of the lock
      order used with the sleep locks they represent.  This allows us to walk
      in reverse order in priority_propagate, and this is the only place we
      wish to multiply acquire turnstile locks.
    - Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
    - Change the turnstile interface to pass back turnstile pointers to the
      consumers.  This allows us to reduce some locking and makes it easier
      to cancel turnstile assignment while the turnstile chain lock is held.
    Tested by: kris, current@
    Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
    Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().  [jhb, 2007-05-18, 1 file, -2/+0]
* Teach 'show lock' to properly handle a destroyed mutex.  [jhb, 2007-05-08, 1 file, -1/+5]
* Move lock_profile calls out of the macros and into kern_mutex.c.  [kmacy, 2007-04-03, 1 file, -9/+17]
    Add a check for mtx_recurse == 0 when releasing a sleep lock.
* Simplify the #ifdefs for adaptive mutexes and rwlocks by conditionally  [jhb, 2007-03-22, 1 file, -4/+8]
    defining a macro earlier in the file (see the sketch after this entry).
    - Add a NO_ADAPTIVE_RWLOCKS option to disable adaptive spinning for rwlocks.
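A minimal sketch of that conditional macro, assuming the option names used elsewhere in this log (the exact spelling in the tree may differ):

    #if defined(SMP) && !defined(NO_ADAPTIVE_MUTEXES)
    #define ADAPTIVE_MUTEXES
    #endif

Later adaptive-spin code can then test a single #ifdef ADAPTIVE_MUTEXES instead of repeating the compound condition.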
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes,  [jhb, 2007-03-21, 1 file, -68/+68]
    rwlocks, and sx locks to 'lock_object'.
* Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes.  [jhb, 2007-03-09, 1 file, -0/+40]
    These functions are intended to be used to drop a lock and then reacquire
    it when doing a sleep such as msleep(9).  Both functions accept a
    'struct lock_object *' as their first parameter.  The 'lc_unlock' function
    returns an integer that is then passed as the second parameter to the
    subsequent 'lc_lock' function.  This can be used to communicate state.
    For example, sx locks and rwlocks use this to indicate if the lock was
    share/read locked vs exclusive/write locked.  Currently, spin mutexes and
    lockmgr locks do not provide working lc_lock and lc_unlock functions.
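A hedged sketch of the drop/reacquire pattern this enables (not the actual msleep(9) implementation; the sleep step is a stand-in):

    static void
    sleep_with_lock_dropped(struct lock_object *lock)
    {
            struct lock_class *class = LOCK_CLASS(lock);
            int how;

            how = class->lc_unlock(lock);   /* drop; remember shared vs. exclusive */
            /* ... block on a sleep queue here ... */
            class->lc_lock(lock, how);      /* reacquire in the recorded mode */
    }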
* Use C99-style struct member initialization for lock classes.  [jhb, 2007-03-09, 1 file, -6/+6]
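An illustrative example of the C99 designated-initializer style, assuming the lock_class fields mentioned elsewhere in this log (the real field list may differ slightly):

    struct lock_class lock_class_mtx_sleep = {
            .lc_name = "sleep mutex",
            .lc_flags = LC_SLEEPLOCK | LC_RECURSABLE,
            .lc_ddb_show = db_show_mtx,
    };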
* Lock stats updates need to be protected by the lock.  [kmacy, 2007-03-02, 1 file, -20/+5]
* Evidently I've overestimated gcc's ability to peek inside inline functions  [kmacy, 2007-03-01, 1 file, -4/+8]
    and optimize away unused stack values.  The 48 bytes that the
    lock_profile_object adds to the stack evidently have a measurable
    performance impact on certain workloads.
* Further improvements to LOCK_PROFILING:  [kmacy, 2007-02-27, 1 file, -3/+14]
    - Fix missing initialization in kern_rwlock.c causing bogus times to be
      collected.
    - Move updates to the lock hash to after the lock is released for spin
      mutexes, sleep mutexes, and sx locks.
    - Add a new kernel build option LOCK_PROFILE_FAST: only update lock
      profiling statistics when an acquisition is contended.  This reduces the
      overhead of LOCK_PROFILING to increasing system time by 20%-25%, which
      on "make -j8 kernel-toolchain" on a dual woodcrest is unmeasurable in
      terms of wall-clock time.  Contrast this with enabling lock profiling
      without LOCK_PROFILE_FAST, where I see a 5x-6x slowdown in wall-clock
      time.
* General LOCK_PROFILING cleanup.  [kmacy, 2007-02-26, 1 file, -21/+8]
    - Only collect timestamps when a lock is contested; this reduces the
      overhead of collecting profiles from 20x to 5x.
    - Remove an unused function from subr_lock.c.
    - Generalize the cnt_hold and cnt_lock statistics to be kept for all locks.
    - NOTE: rwlock profiling generates invalid statistics (and most likely
      always has); someone familiar with that should review it.
* Fix some gcc warnings in lock_profile.h.  [kmacy, 2006-12-16, 1 file, -6/+20]
    - Add cnt_hold and cnt_lock support for spin mutexes.
    - Make sure contested is initialized to zero so that it is only bumped
      when appropriate.
    - Move the initialization function to kern_mutex.c to avoid a cyclic
      dependency between mutex.h and lock_profile.h.
* Track lock class name in a way that doesn't break WITNESS.  [kmacy, 2006-11-13, 1 file, -1/+1]
* MUTEX_PROFILING has been generalized to LOCK_PROFILING.  We now profile  [kmacy, 2006-11-11, 1 file, -248/+30]
    wait (time waited to acquire) and hold times for *all* kernel locks.  If
    the architecture has a system-synchronized TSC, the profiling code will
    use that, thereby minimizing profiling overhead.  Large chunks of
    profiling code have been moved out of line; the overhead measured on the
    T1 when it is compiled in but not enabled is < 1%.
    Approved by: scottl (standing in for mentor rwatson)
    Reviewed by: des and jhb
* When spinning on a spin lock, if the debugger is active or we are in a  [jhb, 2006-08-15, 1 file, -6/+12]
    panic, go ahead and do the longer DELAY(1) spin wait.
    - If we panic due to spinning too long, print out a few more details,
      including the pointer to the mutex in question and the tid of the
      owning thread.
* Adjust td_locks for non-spin mutexes, rwlocks, and sx locks so that it is  [jhb, 2006-07-27, 1 file, -1/+7]
    a count of all non-spin locks, not just lockmgr locks.  This can give us
    a much cheaper way to see if we have any locks held (such as when
    returning to userland via userret()) without requiring WITNESS.
    MFC after: 1 week
* Write a magic value into mtx_lock when destroying a mutex that will force  [jhb, 2006-07-27, 1 file, -0/+11]
    all other mtx_lock() operations to block.  Previously, when the mutex was
    destroyed, it would still have a valid value in mtx_lock: either the
    unowned cookie, which would allow a subsequent mtx_lock() to succeed, or a
    pointer to the thread who destroyed the mutex if the mutex was locked when
    it was destroyed.
    MFC after: 3 days
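The core of the idea, sketched (the real mtx_destroy() also tears down witness and turnstile state first):

    void
    mtx_destroy_sketch(struct mtx *m)
    {
            /* MTX_DESTROYED is a reserved cookie that never looks unowned,
             * so any later mtx_lock() blocks instead of succeeding. */
            m->mtx_lock = MTX_DESTROYED;
    }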
* Bah, fix fat finger in last.  Invert the ~ on MTX_FLAGMASK as it's  [jhb, 2006-06-03, 1 file, -2/+2]
    non-intuitive for the ~ to be built into the mask.  All the users now
    explicitly ~ the mask.  In addition, add MTX_UNOWNED to the mask even
    though it technically isn't a flag.  This should unbreak mtx_owner().
    Quickly spotted by: kris
* Simplify mtx_owner() so it only reads m->mtx_lock once.  [jhb, 2006-06-03, 1 file, -2/+1]
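Taken together with the MTX_FLAGMASK fix in the previous entry, the simplified owner lookup is roughly:

    /* One read of mtx_lock; flag bits (including MTX_UNOWNED) are masked
     * off, so an unowned mutex yields NULL.  Sketch only. */
    #define mtx_owner(m)    ((struct thread *)((m)->mtx_lock & ~MTX_FLAGMASK))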
* Style fix to be more like _mtx_lock_sleep(): use 'while (!foo) { ... }'  [jhb, 2006-06-03, 1 file, -3/+1]
    instead of 'for (;;) { if (foo) break; ... }'.
* Since DELAY() was moved, most <machine/clock.h> #includes have been  [phk, 2006-05-16, 1 file, -1/+0]
    unnecessary.
* Remove various bits of conditional Alpha code and fix up a few comments.  [jhb, 2006-05-12, 1 file, -6/+0]
* Mark the thread pointer used during an adaptive spin volatile so that the  [jhb, 2006-04-14, 1 file, -1/+1]
    compiler doesn't decide to cache td_state.  Caching the state would cause
    the spinning thread to not notice when the owning thread stopped executing
    (if it was preempted, for example), which could result in livelock.
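A sketch of the adaptive-spin loop this fixes; marking 'owner' volatile forces the owner's state to be re-read on every iteration instead of being cached:

    static void
    adaptive_spin_sketch(struct mtx *m)
    {
            volatile struct thread *owner = mtx_owner(m);

            while (mtx_owner(m) == owner && TD_IS_RUNNING(owner))
                    cpu_spinwait();     /* stop once the owner is no longer running */
    }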
* Add support for having both a shared and exclusive queue of threads in  [jhb, 2006-01-27, 1 file, -5/+6]
    each turnstile.  Also, allow for the owner thread pointer of a turnstile
    to be NULL.  This is needed for the upcoming reader/writer lock
    implementation.
    - Add a new ddb command 'show turnstile' that will look up the turnstile
      associated with the given lock argument and display useful information
      like the list of threads blocked on each queue, etc.  If there isn't an
      active turnstile for a lock at the specified address, then the function
      will see if there is an active turnstile at the specified address and
      display info about it if so.
    - Adjust the mutex code to handle the turnstile API changes.
    Tested on: i386 (all), alpha, amd64, sparc64 (1 and 3)
* Whitespace fix.  [jhb, 2006-01-24, 1 file, -1/+1]
* Add a new file (kern/subr_lock.c) for holding code related to struct  [jhb, 2006-01-17, 1 file, -56/+28]
    lock_object objects:
    - Add new lock_init() and lock_destroy() functions to set up and tear down
      lock_object objects, including KTR logging and registering with WITNESS.
    - Move all the handling of LO_INITIALIZED out of witness and the various
      lock init functions into lock_init() and lock_destroy().
    - Remove the constants for static indices into the lock_classes[] array
      and change the code outside of subr_lock.c to use LOCK_CLASS to compare
      against a known lock class.
    - Move the 'show lock' ddb function and lock_classes[] array out of
      kern_mutex.c over to subr_lock.c.
* Initialize thread0.td_contested in init_turnstiles() rather than  [jhb, 2006-01-17, 1 file, -3/+0]
    mutex_init() as it is used by the turnstile code and is not mutex-specific.
* If destroying a spinlock, make sure that it is exited properly.  [scottl, 2006-01-08, 1 file, -0/+4]
    Submitted by: jhb
    MFC after: 3 days
* Revert an untested local change that crept in with the lo_class changes  [jhb, 2006-01-07, 1 file, -4/+0]
    and subsequently broke the build.  This change is supposed to fix the
    case where doing a mtx_destroy() off a spin mutex while you hold it fails.
    If it had been tested I would just leave it in, but it hasn't been tested
    yet, so it will have to wait until later.
* Trying to fix compilation bustage introduced in rev 1.160 by converting  [avatar, 2006-01-07, 1 file, -1/+1]
    a missing lo_class to LO_CLASSINDEX().
* Trim another pointer from struct lock_object (and thus from struct mtx and  [jhb, 2006-01-06, 1 file, -15/+28]
    struct sx).  Instead of storing a direct pointer to our lock_class struct
    in lock_object, reserve 4 bits in the lo_flags field to serve as an index
    into a global lock_classes array that contains pointers to the lock
    classes.  Only debugging code such as WITNESS or INVARIANTS checks and KTR
    logging need to access the lock_class member, so this shouldn't add any
    overhead to production kernels.  It might add some slight overhead to
    kernels using those debug options, however.
    As with the previous set of changes to lock_object, this is going to
    completely obliterate the kernel ABI, so be sure to recompile all your
    modules.
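A sketch of the class-index scheme; the shift and mask values below are placeholders, only the idea of a 4-bit index into a global table comes from the commit:

    extern struct lock_class *lock_classes[];

    #define LO_CLASSSHIFT   24                      /* placeholder */
    #define LO_CLASSMASK    (0xf << LO_CLASSSHIFT)  /* placeholder */
    #define LOCK_CLASS(lock) \
            (lock_classes[((lock)->lo_flags & LO_CLASSMASK) >> LO_CLASSSHIFT])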
* Add a new 'show lock' command to ddb.  If the argument has a valid lock  [jhb, 2005-12-13, 1 file, -2/+73]
    class, then it displays various information about the lock and calls a
    new function pointer in lock_class (lc_ddb_show) to dump class-specific
    information about the lock as well (such as the owner of a mutex or
    xlock'ed sx lock).  This is easier than staring at hex dumps of locks to
    figure out who owns the lock, etc.  Note that extending lock_class
    doesn't affect the ABI for any kernel modules as the only code that deals
    with lock_class structures directly is kern_mutex.c, kern_sx.c, and
    witness.
    MFC after: 1 week
* Move the initialization of the devmtx into the mutex_init() function  [jhb, 2005-10-18, 1 file, -0/+3]
    called during early init before cninit().
    Tested on: i386, alpha, sparc64
    Reviewed by: phk, imp
    Reported by: Divacky Roman xdivac02 at stud dot fit dot vutbr dot cz
    MFC after: 1 week
* Add an assertion to panic if one tries to call mtx_trylock() on a spin  [jhb, 2005-09-02, 1 file, -1/+4]
    mutex.
    - Don't panic if a spin lock is held too long inside _mtx_lock_spin() if
      panicstr is set (meaning that we are already in a panic).  Just keep
      spinning forever instead (see the sketch after this entry).
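A simplified sketch of the spin-wait behaviour described in the second item (iteration counts and diagnostics are placeholders, not the tree's actual values):

    static void
    spin_wait_sketch(struct mtx *m)
    {
            int i;

            for (i = 0; m->mtx_lock != MTX_UNOWNED; i++) {
                    if (i < 10000000) {
                            cpu_spinwait();
                            continue;
                    }
                    if (panicstr != NULL)
                            continue;       /* already panicking: spin forever */
                    panic("spin lock %p held too long", m);
            }
    }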
* Ignore mutex asserts when we're dumping as well.  This allows me  [ps, 2005-07-30, 1 file, -1/+2]
    to panic a system from DDB when INVARIANTS is compiled into the kernel
    on a scsi system.
* Convert the atomic_ptr() operations over to operating on uintptr_t  [jhb, 2005-07-15, 1 file, -10/+9]
    variables rather than void * variables.  This makes it easier and simpler
    to get asm constraints and volatile keywords correct.
    MFC after: 3 days
    Tested on: i386, alpha, sparc64
    Compiled on: ia64, powerpc, amd64
    Kernel toolchain busted on: arm
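The visible effect on the lock-acquire helper, sketched (macro name as used in mutex.h of that era; details may differ):

    /* The lock word is a uintptr_t, so the compare-and-set operates on one too. */
    #define _obtain_lock(mp, tid) \
            atomic_cmpset_acq_ptr(&(mp)->mtx_lock, MTX_UNOWNED, (uintptr_t)(tid))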
* Add additional newline to debug.mutex.prof.stats header, so that  [glebius, 2005-04-08, 1 file, -1/+1]
    column names are printed exactly above the columns.
* Divorce critical sections from spinlocks.  Critical sections as denoted by  [jhb, 2005-04-04, 1 file, -2/+2]
    critical_enter() and critical_exit() are now solely a mechanism for
    deferring kernel preemptions.  They no longer have any effect on
    interrupts.  This means that standalone critical sections are now very
    cheap as they are simply unlocked integer increments and decrements for
    the common case.
    Spin mutexes now use a separate KPI implemented in MD code:
    spinlock_enter() and spinlock_exit().  This KPI is responsible for
    providing whatever MD guarantees are needed to ensure that a thread
    holding a spin lock won't be preempted by any other code that will try
    to lock the same lock.  For now all archs continue to block interrupts
    in a "spinlock section" as they did formerly in all critical sections.
    Note that I've also taken this opportunity to push a few things into MD
    code rather than MI.  For example, critical_fork_exit() no longer exists.
    Instead, MD code ensures that new threads have the correct state when
    they are created.  Also, we no longer try to fixup the idlethreads for
    APs in MI code.  Instead, each arch sets the initial curthread and
    adjusts the state of the idle thread it borrows in order to perform the
    initial context switch.
    This change is largely a big NOP, but the cleaner separation it provides
    will allow for more efficient alternative locking schemes in other parts
    of the kernel (bare critical sections rather than per-CPU spin mutexes
    for per-CPU data, for example).
    Reviewed by: grehan, cognet, arch@, others
    Tested on: i386, alpha, sparc64, powerpc, arm, possibly more
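A rough, i386-flavoured sketch of the new MD KPI (field names follow the i386 machine-dependent code; other architectures differ):

    void
    spinlock_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_md.md_spinlock_count == 0)
                    td->td_md.md_saved_flags = intr_disable();
            td->td_md.md_spinlock_count++;
            critical_enter();       /* also defer preemption */
    }

    void
    spinlock_exit(void)
    {
            struct thread *td = curthread;

            critical_exit();
            td->td_md.md_spinlock_count--;
            if (td->td_md.md_spinlock_count == 0)
                    intr_restore(td->td_md.md_saved_flags);
    }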
* Rework the optimization for spinlocks on UP to be slightly less drastic and  [jhb, 2005-01-05, 1 file, -8/+2]
    turn it back on.  Specifically, the actual changes are now less intrusive
    in that the _get_spin_lock() and _rel_spin_lock() macros now have their
    contents changed for UP vs SMP kernels, which centralizes the changes.
    Also, UP kernels do not use _mtx_lock_spin() and no longer include it.
    The UP versions of the spin lock functions do not use any atomic
    operations, but simple compares and stores, which allow mtx_owned() to
    still work for spin locks while removing the overhead of atomic
    operations.
    Tested on: i386, alpha
* Refine the turnstile and sleep queue interfaces just a bit:  [jhb, 2004-10-12, 1 file, -5/+4]
    - Add a new _lock() call to each API that locks the associated chain lock
      for a lock_object pointer or wait channel.  The _lookup() functions now
      require that the chain lock be locked via _lock() when they are called.
    - Change sleepq_add(), turnstile_wait() and turnstile_claim() to look up
      the associated queue structure internally via _lookup() rather than
      accepting a pointer from the caller.  For turnstiles, this means that
      the actual lookup of the turnstile in the hash table is only done when
      the thread actually blocks rather than being done on each loop iteration
      in _mtx_lock_sleep().  For sleep queues, this means that sleepq_lookup()
      is no longer used outside of the sleep queue code except to implement an
      assertion in cv_destroy().
    - Change sleepq_broadcast() and sleepq_signal() to require that the chain
      lock is already held.  For condition variables, this lets the
      cv_broadcast() and cv_signal() functions lock the sleep queue chain lock
      while testing the waiters count.  This means that the waiters count
      internal to condition variables is no longer protected by the interlock
      mutex, and cv_broadcast() and cv_signal() now no longer require that the
      interlock be held when they are called.  This lets consumers of
      condition variables drop the lock before waking other threads, which can
      result in fewer context switches.
    MFC after: 1 month
* Force MUTEX_WAKE_ALL.  [ups, 2004-10-12, 1 file, -0/+9]
    A race condition in single-thread wakeup may break priority inheritance.
    Tested by: pho
    Reviewed by: jhb, julian
    Approved by: sam (mentor)
    MFC: ASAP
* Turn PREEMPTION into a kernel option.  Make sure that it's defined if  [scottl, 2004-09-02, 1 file, -0/+1]
    FULL_PREEMPTION is defined.  Add a runtime warning to ULE if PREEMPTION is
    enabled (code inspired by the PREEMPTION warning in kern_switch.c).  This
    is a possible MT5 candidate.
* Add options MPROF_BUFFERS and MPROF_HASH_SIZE that adjust the sizes of  [jmg, 2004-08-19, 1 file, -0/+10]
    the mutex profiling buffers.  Document them in the man page and in NOTES.
    Ensure _HASH_SIZE is larger than _BUFFERS with a cpp error (see the sketch
    after this entry).
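The build-time check might look like this (the default values shown are placeholders, not the tree's actual defaults):

    #ifndef MPROF_BUFFERS
    #define MPROF_BUFFERS   1000    /* placeholder default */
    #endif
    #ifndef MPROF_HASH_SIZE
    #define MPROF_HASH_SIZE 1009    /* placeholder default */
    #endif
    #if MPROF_HASH_SIZE < MPROF_BUFFERS
    #error MPROF_HASH_SIZE must be larger than MPROF_BUFFERS
    #endif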
* Cache the value of curthread in the _get_sleep_lock() and _get_spin_lock()  [jhb, 2004-08-04, 1 file, -4/+5]
    macros and pass the value to the associated _mtx_*() functions to avoid
    more curthread dereferences in the function implementations.  This
    provided a very modest perf improvement in some benchmarks.
    Suggested by: rwatson
    Tested by: scottl
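A sketch of the macro shape after the change: curthread is evaluated once by the caller and handed to the slow path (argument list approximated):

    #define _get_sleep_lock(mp, tid, opts, file, line) do {                 \
            struct thread *_tid = (tid);                                    \
                                                                            \
            if (!_obtain_lock((mp), _tid))                                  \
                    _mtx_lock_sleep((mp), _tid, (opts), (file), (line));    \
    } while (0)

mtx_lock(m) would then expand, roughly, to _get_sleep_lock((m), curthread, 0, __FILE__, __LINE__), so curthread is dereferenced only once per acquire.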
* Instead of calling ia32_pause() conditionally on __i386__ or __amd64__  [mux, 2004-08-03, 1 file, -15/+5]
    being defined, define and use a new MD macro, cpu_spinwait().  It only
    expands to something on i386 and amd64, so the compiled code should be
    identical.
    Name of the macro found by: jhb
    Reviewed by: jhb
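Sketched per-architecture definition (each arch really provides its own in <machine/cpu.h>; this collapses it into one #if for illustration):

    #if defined(__i386__) || defined(__amd64__)
    #define cpu_spinwait()  ia32_pause()    /* emit a PAUSE hint */
    #else
    #define cpu_spinwait()                  /* expands to nothing */
    #endif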
* Add "options ADAPTIVE_GIANT" which causes Giant to also be treated in  [rwatson, 2004-07-27, 1 file, -0/+4]
    an adaptive fashion when adaptive mutexes are enabled.  The theory behind
    non-adaptive Giant is that Giant will be held for long periods of time,
    and therefore spinning waiting on it is wasteful.  However, in MySQL
    benchmarks which are relatively Giant-free, running Giant adaptive makes
    an observable difference on SMP (5% transaction rate improvement).  As
    such, make adaptive behavior on Giant an option so it can be more widely
    benchmarked.
* #ifdef __i386__ -> __i386__ || __amd64__  [peter, 2004-07-20, 1 file, -5/+5]