path: root/sys/kern/kern_mutex.c
Commit history, newest first (commit message, author, date, files changed, lines -deleted/+added):

* Add KASSERT()'s to catch attempts to recurse on spin mutexes that aren't (jhb, 2008-02-13, 1 file, -1/+9)
* Add a couple of assertions and KTR logging to thread_lock_flags() to (jhb, 2008-02-13, 1 file, -1/+7)
* - Re-implement lock profiling in such a way that it no longer breaks (jeff, 2007-12-15, 1 file, -20/+6)
* Make ADAPTIVE_GIANT as the default in the kernel and remove the option. (attilio, 2007-11-28, 1 file, -8/+0)
* Simplify the adaptive spinning algorithm in rwlock and mutex: (attilio, 2007-11-26, 1 file, -29/+41)
* Expand lock class with the "virtual" function lc_assert which will offer (attilio, 2007-11-18, 1 file, -0/+10)
* generally we are interested in what thread did something as (julian, 2007-11-14, 1 file, -1/+1)
* - Remove the global definition of sched_lock in mutex.h to break (jeff, 2007-07-18, 1 file, -2/+0)
* - Add the proper lock profiling calls to _thread_lock(). (jeff, 2007-07-18, 1 file, -2/+8)
* Propagate volatile qualifier to make gcc4.2 happy. (mjacob, 2007-06-09, 1 file, -1/+1)
* Remove the MUTEX_WAKE_ALL option and make it the default behaviour for our (attilio, 2007-06-08, 1 file, -37/+0)
* - Placing the 'volatile' on the right side of the * in the td_lock (jeff, 2007-06-06, 1 file, -3/+3)
* Fix a problem with not-preemptive kernels caming from mis-merging of (attilio, 2007-06-05, 1 file, -47/+0)
* Restore non-SMP build. (kib, 2007-06-05, 1 file, -1/+2)
* Commit 3/14 of sched_lock decomposition. (jeff, 2007-06-04, 1 file, -27/+122)
* Move lock_profile_object_{init,destroy}() into lock_{init,destroy}(). (jhb, 2007-05-18, 1 file, -2/+0)
* Teach 'show lock' to properly handle a destroyed mutex. (jhb, 2007-05-08, 1 file, -1/+5)
* move lock_profile calls out of the macros and into kern_mutex.c (kmacy, 2007-04-03, 1 file, -9/+17)
* - Simplify the #ifdef's for adaptive mutexes and rwlocks by conditionally (jhb, 2007-03-22, 1 file, -4/+8)
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes, (jhb, 2007-03-21, 1 file, -68/+68)
* Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes. (jhb, 2007-03-09, 1 file, -0/+40)
* Use C99-style struct member initialization for lock classes. (jhb, 2007-03-09, 1 file, -6/+6)
* lock stats updates need to be protected by the lock (kmacy, 2007-03-02, 1 file, -20/+5)
* Evidently I've overestimated gcc's ability to peak inside inline functions (kmacy, 2007-03-01, 1 file, -4/+8)
* Further improvements to LOCK_PROFILING: (kmacy, 2007-02-27, 1 file, -3/+14)
* general LOCK_PROFILING cleanup (kmacy, 2007-02-26, 1 file, -21/+8)
* - Fix some gcc warnings in lock_profile.h (kmacy, 2006-12-16, 1 file, -6/+20)
* track lock class name in a way that doesn't break WITNESS (kmacy, 2006-11-13, 1 file, -1/+1)
* MUTEX_PROFILING has been generalized to LOCK_PROFILING. We now profile (kmacy, 2006-11-11, 1 file, -248/+30)
* - When spinning on a spin lock, if the debugger is active or we are in a (jhb, 2006-08-15, 1 file, -6/+12)
* Adjust td_locks for non-spin mutexes, rwlocks, and sx locks so that it is (jhb, 2006-07-27, 1 file, -1/+7)
* Write a magic value into mtx_lock when destroying a mutex that will force (jhb, 2006-07-27, 1 file, -0/+11)
* Bah, fix fat finger in last. Invert the ~ on MTX_FLAGMASK as it's (jhb, 2006-06-03, 1 file, -2/+2)
* Simplify mtx_owner() so it only reads m->mtx_lock once. (jhb, 2006-06-03, 1 file, -2/+1)
* Style fix to be more like _mtx_lock_sleep(): use 'while (!foo) { ... }' (jhb, 2006-06-03, 1 file, -3/+1)
* Since DELAY() was moved, most <machine/clock.h> #includes have been (phk, 2006-05-16, 1 file, -1/+0)
* Remove various bits of conditional Alpha code and fixup a few comments. (jhb, 2006-05-12, 1 file, -6/+0)
* Mark the thread pointer used during an adaptive spin volatile so that the (jhb, 2006-04-14, 1 file, -1/+1)
* - Add support for having both a shared and exclusive queue of threads in (jhb, 2006-01-27, 1 file, -5/+6)
* Whitespace fix. (jhb, 2006-01-24, 1 file, -1/+1)
* Add a new file (kern/subr_lock.c) for holding code related to struct (jhb, 2006-01-17, 1 file, -56/+28)
* Initialize thread0.td_contested in init_turnstiles() rather than (jhb, 2006-01-17, 1 file, -3/+0)
* If destroying a spinlock, make sure that it is exited properly. (scottl, 2006-01-08, 1 file, -0/+4)
* Revert an untested local change that crept in with the lo_class changes (jhb, 2006-01-07, 1 file, -4/+0)
* Trying to fix compilation bustage introduced in rev1.160 by converting (avatar, 2006-01-07, 1 file, -1/+1)
* Trim another pointer from struct lock_object (and thus from struct mtx and (jhb, 2006-01-06, 1 file, -15/+28)
* Add a new 'show lock' command to ddb. If the argument has a valid lock (jhb, 2005-12-13, 1 file, -2/+73)
* Move the initialization of the devmtx into the mutex_init() function (jhb, 2005-10-18, 1 file, -0/+3)
* - Add an assertion to panic if one tries to call mtx_trylock() on a spin (jhb, 2005-09-02, 1 file, -1/+4)
* Ignore mutex asserts when we're dumping as well. This allows me (ps, 2005-07-30, 1 file, -1/+2)