path: root/sys/kern/subr_turnstile.c
Commit log, most recent first (each entry: author, date; files changed, lines -/+)
* (jhb, 2004-03-12; 1 file, -1/+1) Fixup a comment.
* (jhb, 2004-02-27; 1 file, -5/+0) Add an implementation of a generic sleep
  queue abstraction that is used to queue threads sleeping on a wait channel,
  similar to how turnstiles are used to queue threads waiting for a lock.
  This subsystem will be used as the backend for sleep/wakeup and condition
  variables initially. Eventually it will also be used to replace the
  ithread-specific iwait thread inhibitor. Sleep queues are also not locked
  by sched_lock, so this splits sched_lock up a bit further, increasing
  concurrency within the scheduler. Sleep queues also natively support
  timeouts on sleeps and interruptible sleeps, allowing for the reduction of
  a lot of duplicated code between the sleep/wakeup and condition variable
  implementations. For more details on the sleep queue implementation, check
  the comments in sys/sleepqueue.h and kern/subr_sleepqueue.c.
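  A rough sketch of the usage pattern described above; the sleepq_*() names
  follow sys/sleepqueue.h, but the exact signatures and flags here are
  assumptions, not the committed interface:

      #include <sys/param.h>
      #include <sys/sleepqueue.h>

      /* Block the current thread on a wait channel (sketch). */
      static void
      example_sleep(void *wchan, const char *wmesg)
      {
              sleepq_lock(wchan);             /* lock wchan's hash chain */
              sleepq_add(wchan, NULL, wmesg, SLEEPQ_SLEEP, 0);
              sleepq_wait(wchan, 0);          /* block until a wakeup */
      }

      /* Wake all threads sleeping on the wait channel (sketch). */
      static void
      example_wakeup(void *wchan)
      {
              sleepq_lock(wchan);
              sleepq_broadcast(wchan, SLEEPQ_SLEEP, 0, 0);
              sleepq_release(wchan);          /* drop the chain lock */
      }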
* (jhb, 2004-02-27; 1 file, -3/+3) Clarify and tweak some comments.
* (jeff, 2004-01-25; 1 file, -2/+1)
  - Add a flags parameter to mi_switch(). The value of flags may be SW_VOL
    or SW_INVOL. Assert that one of these is set in mi_switch() and properly
    adjust the rusage statistics. This simplifies the large number of users
    of this interface, which were previously all required to adjust the
    proper counter prior to calling mi_switch(). It also facilitates more
    switch and locking optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter and
    remove direct references to the process statistics (see the sketch after
    this entry).
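  A sketch of the caller-side change; the rusage field name is an assumption:

      /* Before: each caller bumped the stats itself before switching. */
      p->p_stats->p_ru.ru_nvcsw++;    /* voluntary context switch count */
      mi_switch();

      /* After: the flag tells mi_switch() which counter to adjust. */
      mi_switch(SW_VOL);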
* (jhb, 2003-12-09; 1 file, -2/+3) Adjust an assertion for the TDF_TSNOBLOCK
  race handling in turnstile_unpend(). A racing thread that does not have
  TDI_LOCK set may either be running on another CPU or it may be sitting on
  a run queue if it was preempted during the very small window in
  turnstile_wait() between unlocking the turnstile chain lock and locking
  sched_lock.
* (jhb, 2003-12-09; 1 file, -0/+2) Assert that we never give a thread a NULL
  turnstile when waking it up.
* (jhb, 2003-12-09; 1 file, -8/+9) Revert the previous race fix and replace
  it with a more general fix. The case of a turnstile having no threads is
  just one instance of the more general case where the thread we are
  examining has been partially awakened already, in that it has been removed
  from the turnstile's blocked list but still has TDI_LOCK set. We detect
  that case by checking to see if the thread has already had a turnstile
  reassigned to it.
* (jhb, 2003-11-12; 1 file, -4/+12)
  - Close a race where a thread on another CPU could release a contested
    lock and empty its turnstile while the blocking threads still pointed to
    the turnstile. If the thread on the first CPU blocked on a lock owned by
    one of the threads blocked on the turnstile just woken up, then the
    first CPU could try to manipulate a bogus thread queue in the turnstile
    during priority propagation.
  - Update locking notes for ts_owner and always clear ts_owner, not just
    under INVARIANTS.
  Tested by: sam (1)
* (jhb, 2003-11-12; 1 file, -1/+1) Fix a typo in a comment.
  Submitted by: das
* (jhb, 2003-11-11; 1 file, -749/+462) Add an implementation of turnstiles
  and change the sleep mutex code to use turnstiles to implement blocking
  instead of implementing a thread queue directly. These turnstiles are
  somewhat similar to those used in Solaris 7 as described in Solaris
  Internals, but are also different.

  Turnstiles do not come out of a fixed-size pool. Rather, each thread is
  assigned a turnstile when it is created, which it frees when it is
  destroyed. When a thread blocks on a lock, it donates its turnstile to
  that lock to serve as a queue of blocked threads. The queue associated
  with a given lock is found by a lookup in a simple hash table. The
  turnstile itself is protected by a lock associated with its entry in the
  hash table. This means that sched_lock is no longer needed to contest on
  a mutex. Instead, sched_lock is only used when manipulating run queues or
  thread priorities. Turnstiles also implement priority propagation
  inherently.

  Currently turnstiles only support mutexes. Eventually, however, turnstiles
  may grow two queues to support a non-sleepable reader/writer lock
  implementation. For more details, see the comments in sys/turnstile.h and
  kern/subr_turnstile.c.

  The two primary advantages of the turnstile code are: 1) the size of
  struct mutex shrinks by four pointers as it no longer stores the thread
  queue linkages directly, and 2) there is less contention on sched_lock in
  SMP systems, including the ability for multiple CPUs to contend on
  different locks simultaneously (not that this last detail is necessarily
  that much of a big win). Note that 1) means that this commit is a kernel
  ABI breaker, so don't mix old modules with a new kernel and vice versa.

  Tested on: i386 SMP, sparc64 SMP, alpha SMP
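  A rough sketch of the blocking path this describes; the turnstile_*()
  names follow sys/turnstile.h, but the exact signatures and the mtx_object
  field name are assumptions:

      #include <sys/param.h>
      #include <sys/turnstile.h>

      /* Block the current thread on a contested mutex (sketch). */
      static void
      example_block(struct mtx *m, struct thread *owner)
      {
              struct turnstile *ts;

              /*
               * Look up the turnstile for this lock in the hash table;
               * this also locks the turnstile chain.
               */
              ts = turnstile_lookup(&m->mtx_object);

              /*
               * Donate our turnstile if the lock has no queue yet, put
               * ourselves on the queue, and block; priority propagation
               * to 'owner' happens inside.
               */
              turnstile_wait(ts, &m->mtx_object, owner);
      }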
* (jhb, 2003-07-31; 1 file, -3/+9) If a spin lock is held for too long and
  WITNESS is enabled, then call witness_display_spinlock() to see if we can
  find out where the current owner of the spin lock last acquired the lock.
* (jhb, 2003-07-30; 1 file, -1/+3) When complaining about a sleeping thread
  owning a mutex, display the thread's pid to make debugging easier for
  people who don't want to have to use the intended tool for these panics
  (witness).
  Indirectly prodded by: kris
* (jhb, 2003-07-02; 1 file, -4/+9)
  - Add comments about the maintenance of the per-thread list of contested
    locks held by each thread.
  - Fix a bug in the original BSD/OS code where a contested lock was not
    properly handed off from the old thread to the new thread when a
    contested lock with more than one blocked thread was transferred from
    one thread to another.
  - Don't use an atomic operation to write the MTX_CONTESTED value to
    mtx_lock in the aforementioned special case. The memory barriers and
    exclusion provided by sched_lock are sufficient.
  Spotted by: alc (2)
* (obrien, 2003-06-11; 1 file, -1/+3) Use __FBSDID().
* Add "" around mutex name to make message less confusing.phk2003-05-311-1/+1
|
* (jhb, 2003-04-17; 1 file, -7/+2) Use TD_IS_RUNNING() instead of
  thread_running() in the adaptive mutex code.
* (julian, 2003-04-10; 1 file, -1/+2) Move the _oncpu entry from the KSE to
  the thread. The entry in the KSE still exists, but its purpose will change
  a bit when we add the ability to lock a KSE to a cpu.
* (tjr, 2003-03-23; 1 file, -43/+0) Remove unused mtx_lock_giant(),
  mtx_unlock_giant(), related globals and sysctls.
* (phk, 2003-03-18; 1 file, -1/+0) Including <sys/stdint.h> is (almost?)
  universally only to be able to use %j in printfs, so put a nested include
  in <sys/systm.h>, where the printf prototype lives, and save everybody
  else the trouble.
* (jhb, 2003-03-11; 1 file, -3/+1) Axe the useless MTX_SLEEPABLE flag;
  mutexes are not sleepable locks. Nothing used this flag, and WITNESS would
  have panic'd during mtx_init() if anything had.
* (jhb, 2003-03-04; 1 file, -5/+4) Remove safety belt: it is now ok to do a
  mtx_trylock() on a mutex you already own. The mtx_trylock() will fail,
  however. Enhance the comment at the top of the try lock function to
  explain this.
  Requested by: jlemon and his evil netisr locking
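  A sketch of the resulting behavior; foo_mtx here is a hypothetical,
  already-initialized sleep mutex:

      mtx_lock(&foo_mtx);
      if (mtx_trylock(&foo_mtx) == 0)
              printf("trylock of an owned mutex now fails "
                  "instead of tripping a panic\n");
      mtx_unlock(&foo_mtx);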
* (jhb, 2003-03-04; 1 file, -4/+6) Miscellaneous cleanups to
  _mtx_lock_sleep():
  - Declare some local variables at the top of the function instead of in a
    nested block.
  - Use mtx_owned() instead of masking off bits from mtx_lock manually.
  - Read the value of mtx_lock into 'v' as a separate line rather than
    inside an if statement, for clarity. This code is hairy enough as it is.
* (jhb, 2003-03-04; 1 file, -8/+4) Properly assert that mtx_trylock() is not
  called on a mutex we already own. Previously the KASSERT would only
  trigger if we successfully acquired a lock that we already held. However,
  _obtain_lock() fails to acquire locks that we already hold, so the KASSERT
  was never checked in the case it was supposed to fail.
* (mtm, 2003-02-25; 1 file, -3/+15) Unbreak mutex profiling (at least for
  me).
  o Always check for null when dereferencing the filename component.
  o Implement a try-and-backoff method for allocating memory to dump stats,
    to avoid a spin-lock -> sleep-lock mutex lock order panic with WITNESS.
  Approved by: des, markm (mentor)
  Not objected: jhb
* (des, 2003-01-21; 1 file, -14/+12) There's absolutely no need for a
  struct-within-a-struct, so move the counters out of the inner struct and
  remove it.
* (phk, 2002-10-25; 1 file, -0/+5) Disable the kernacc() check in
  mtx_validate() until such time as kernacc() does not require Giant. This
  means that we may miss panics on a class of mutex programming bugs, but
  only if running with a Chernobyl setting of debug flags.
  Spotted by: Pete Carah <pete@ns.altadena.net>
* (des, 2002-10-23; 1 file, -10/+9) Whitespace cleanup.
* (robert, 2002-10-22; 1 file, -18/+14) Change the `mutex_prof' structure to
  use three variables contained in an anonymous structure as counters,
  instead of an array with preprocessor-defined names for indices. Remove
  the associated XXX comment.
* (des, 2002-10-21; 1 file, -19/+28) Reduce the overhead of the mutex
  statistics gathering code, try to produce shorter lines in the report, and
  clean up some minor style issues.
* (jeff, 2002-10-12; 1 file, -4/+2)
  - Create a new scheduler API that is defined in sys/sched.h.
  - Begin moving scheduler-specific functionality into sched_4bsd.c.
  - Replace direct manipulation of scheduler data with hooks provided by the
    new API.
  - Remove KSE-specific state modifications and single-runq assumptions from
    kern_switch.c.
  Reviewed by: -arch
* (jhb, 2002-10-02; 1 file, -13/+13) Rename the mutex thread and process
  states to use a more generic 'LOCK' name instead (e.g., SLOCK instead of
  SMTX, TD_ON_LOCK() instead of TD_ON_MUTEX()). Eventually a turnstile
  abstraction will be added that will be shared with mutexes and other types
  of locks. SLOCK/TDI_LOCK will be used internally by the turnstile code and
  will not be specific to mutexes. Making the change now ensures that
  turnstiles can be dropped in at a later date without affecting the ABI of
  userland applications.
* (julian, 2002-09-29; 1 file, -0/+1) uh, commit all of the patch
* (julian, 2002-09-29; 1 file, -2/+4) commit the version I actually tested..
  Submitted by: davidxu
* (julian, 2002-09-29; 1 file, -1/+2) Implement basic KSE loaning. This
  stops a thread that is blocked in BOUND mode from stopping another thread
  from completing a syscall, and this allows it to release its resources
  etc. Probably more related commits to follow (at least one I know of).
  Initial concept by: julian, dillon
  Submitted by: davidxu
* (julian, 2002-09-11; 1 file, -7/+6) Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
* (jhb, 2002-09-03; 1 file, -4/+16) Add some KASSERT()'s to ensure that we
  don't perform spin mutex ops on sleep mutexes and vice versa. WITNESS
  normally should catch this, but not everyone uses WITNESS, so this is a
  fallback to catch nasty but easy-to-make bugs.
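  The added assertions are presumably of this flavor (a sketch; the
  lock-class test and field names are assumptions about mutex internals,
  not the committed code; file and line are the usual lock-debugging
  parameters of the enclosing function):

      KASSERT(LOCK_CLASS(&m->mtx_object) == &lock_class_mtx_sleep,
          ("mtx_lock() of spin mutex %s @ %s:%d",
          m->mtx_object.lo_name, file, line));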
* (iedowse, 2002-08-26; 1 file, -0/+20) Add a new KTR type KTR_CONTENTION,
  and use it in the mutex code to log the start and end of periods during
  which mtx_lock() is waiting to acquire a sleep mutex. The log message
  includes the file and line of both the waiter and the holder.
  Reviewed by: jhb, jake
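  A sketch of what logging the start of a contention period might look like;
  the helper and field names here are assumptions:

      /* Trace that td is starting to wait for m (sketch). */
      static void
      log_contention_start(struct thread *td, struct mtx *m,
          const char *file, int line)
      {
              CTR4(KTR_CONTENTION, "contention: %p at %s:%d wants %s",
                  td, file, line, m->mtx_object.lo_name);
      }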
* (jhb, 2002-07-27; 1 file, -2/+2) Disable optimization of spinlocks on UP
  kernels w/o debugging for now, since it breaks mtx_owned() on spin mutexes
  when used outside of mtx_assert(). Unfortunately we currently use it in
  the i386 MD code and in the sio(4) driver.
  Reported by: bde
* (des, 2002-07-03; 1 file, -11/+12) Add mtx_ prefixes to the fields used
  for mutex profiling, and fix a bug where the profiling code would report
  the release point instead of the acquisition point.
  Requested by: bde
* (julian, 2002-06-29; 1 file, -15/+16) Part 1 of KSE-III: the ability to
  schedule multiple threads per process (on one cpu) by making ALL system
  calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code and contains lots of debugging stuff;
  expect slight instability in signals.
* (jhb, 2002-06-04; 1 file, -6/+5) Replace thread_runnable() with
  thread_running(), as the latter is more accurate.
  Suggested by: julian
* (jhb, 2002-06-04; 1 file, -1/+4) Optimize the adaptive mutex spin a bit.
  Use a simple while loop with simple reads (and, on IA32, a "pause"
  instruction for each iteration of the loop) to spin until either the mutex
  owner field changes or the lock owner stops executing.
  Suggested by: tanimura
  Tested on: i386
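  The loop is roughly as follows (a sketch; mtx_owner() and the pause
  wrapper are assumptions about the helpers involved):

      /* Spin with cheap reads while the same owner stays running. */
      while (mtx_owner(m) == owner && thread_running(owner))
              ia32_pause();   /* CPU hint: this is a spin-wait loop */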
* (jhb, 2002-06-04; 1 file, -3/+5) Add a private thread_runnable() macro to
  make the code more readable and make the KSE diff easier to maintain.
* (des, 2002-05-23; 1 file, -2/+3) Make the counters uintmax_ts, and use %ju
  rather than %llu.
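  For reference, a minimal example of the %ju usage (the counter value is
  made up):

      uintmax_t total = 12345;        /* e.g., a profiling counter */
      printf("%ju acquisitions\n", total);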
* (jhb, 2002-05-22; 1 file, -5/+5) Rename pause() to ia32_pause() so it
  doesn't conflict with the pause() function defined in <unistd.h>. I didn't
  #ifdef _KERNEL it because the mutex implementation in libpthread will
  probably need this.
* (jhb, 2002-05-22; 1 file, -5/+5) Rename cpu_pause() to pause(). Originally
  I was going to make this an MI API with empty cpu_pause() functions on
  other arch's, but this functionality is definitely unique to IA-32, so I
  decided to leave it as i386-only and wrap it in #ifdef's. I should have
  dropped the cpu_ prefix when I made that decision.
  Requested by: bde
* (jhb, 2002-05-21; 1 file, -1/+17) Add appropriate IA32 "pause"
  instructions to improve performance on Pentium 4's and newer IA32
  processors. The "pause" instruction has been verified by Intel to be a NOP
  on all currently existing IA32 processors prior to the Pentium 4.
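  The wrapper is presumably the usual single-instruction inline, shown here
  under its final ia32_pause() name from the renames above:

      static __inline void
      ia32_pause(void)
      {
              __asm __volatile("pause");
      }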
* (jhb, 2002-05-21; 1 file, -1/+1) Fix an old cut 'n' paste bug inherited
  from BSD/OS: don't increment 'i' twice once we are in the long wait stage
  of spinning on a spin mutex.
* (jhb, 2002-05-21; 1 file, -2/+2) Whitespace fixup, properly indent the
  body of an else clause.
* (jhb, 2002-05-21; 1 file, -0/+26) Add code to make default mutexes
  adaptive if the ADAPTIVE_MUTEXES kernel option is used (not on by
  default).
  - In the case of trying to lock a mutex, if the MTX_CONTESTED flag is set,
    then we can safely read the thread pointer from the mtx_lock member
    while holding sched_lock. We then examine the thread to see if it is
    currently executing on another CPU. If it is, then we keep looping
    instead of blocking (see the sketch after this entry).
  - In the case of trying to unlock a mutex, it is now possible for a mutex
    to have MTX_CONTESTED set in mtx_lock but not have any threads actually
    blocked on it, so we need to handle that case. In that case, we just
    release the lock as if MTX_CONTESTED was not set and return.
  - We do not adaptively spin on Giant, as Giant is held for long times and
    it slows SMP systems down to a crawl (it was taking several minutes,
    like 5-10 or so, for my test alpha and sparc64 SMP boxes to boot up when
    they adaptively spun on Giant).
  - We only compile in the code to do this for SMP kernels; it doesn't make
    sense for UP kernels.
  Tested on: i386, alpha, sparc64
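  A sketch of the lock-side decision from the first item, as it might appear
  inside the contested-lock retry loop; field and helper names are
  assumptions, with v a snapshot of m->mtx_lock:

      #ifdef ADAPTIVE_MUTEXES
              if ((v & MTX_CONTESTED) != 0) {
                      mtx_lock_spin(&sched_lock);
                      owner = (struct thread *)(v & ~MTX_FLAGMASK);
                      if (owner != NULL && thread_running(owner)) {
                              mtx_unlock_spin(&sched_lock);
                              continue;       /* owner is on another CPU */
                      }
                      mtx_unlock_spin(&sched_lock);
              }
      #endif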