path: root/sys/kern/subr_sleepqueue.c
Commit message | Author | Date | Files | Lines
...
* - Store threads on sleep queues in FIFO order rather than sorted by [jhb | 2004-11-05 | 1 | -16/+19]
    priority. The sleep queues don't get updated when the priority of
    threads changes, so sleepq_signal() might not always wake up the
    highest-priority thread. Updating the queues when thread priorities
    change cannot be easily done due to lock orders, so instead we do an
    O(n) walk of the queue for a sleepq_signal() operation instead of O(1).
    On the other hand, adding a thread to a sleep queue now goes from O(n)
    to O(1), so it ends up as an even tradeoff (see the sketch below). The
    correctness here with regard to priorities is actually fairly
    important: msleep() gives interactive threads their priority "boost"
    after they are placed on the queue, but before this fix that "boost"
    wasn't used to determine the highest-priority thread that
    sleepq_signal() awoke.
  - Fix up some comments.
  Inspired by: ups, bde
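  In miniature, the tradeoff looks roughly like the sketch below. This is
  illustrative only (the fake_* names are invented, not the committed
  code): adding appends in constant time, while signalling scans the whole
  queue, so a priority boost applied after enqueueing still wins.

    #include <sys/queue.h>

    struct fake_thread {
        int pri;                        /* lower value = higher priority */
        TAILQ_ENTRY(fake_thread) sq_link;
    };
    TAILQ_HEAD(fake_sleepq, fake_thread);

    static void
    sleepq_add_sketch(struct fake_sleepq *sq, struct fake_thread *td)
    {
        TAILQ_INSERT_TAIL(sq, td, sq_link);     /* O(1): no sorted insert */
    }

    static struct fake_thread *
    sleepq_signal_sketch(struct fake_sleepq *sq)
    {
        struct fake_thread *td, *best = NULL;

        /* O(n) scan, so the thread's *current* priority decides. */
        TAILQ_FOREACH(td, sq, sq_link) {
            if (best == NULL || td->pri < best->pri)
                best = td;
        }
        if (best != NULL)
            TAILQ_REMOVE(sq, best, sq_link);
        return (best);
    }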
* Refine the turnstile and sleep queue interfaces just a bit: [jhb | 2004-10-12 | 1 | -14/+31]
  - Add a new _lock() call to each API that locks the associated chain
    lock for a lock_object pointer or wait channel. The _lookup()
    functions now require that the chain lock be locked via _lock() when
    they are called.
  - Change sleepq_add(), turnstile_wait() and turnstile_claim() to look up
    the associated queue structure internally via _lookup() rather than
    accepting a pointer from the caller. For turnstiles, this means that
    the actual lookup of the turnstile in the hash table is only done when
    the thread actually blocks rather than being done on each loop
    iteration in _mtx_lock_sleep(). For sleep queues, this means that
    sleepq_lookup() is no longer used outside of the sleep queue code
    except to implement an assertion in cv_destroy().
  - Change sleepq_broadcast() and sleepq_signal() to require that the
    chain lock is already held. For condition variables, this lets the
    cv_broadcast() and cv_signal() functions lock the sleep queue chain
    lock while testing the waiters count. This means that the waiters
    count internal to condition variables is no longer protected by the
    interlock mutex, and cv_broadcast() and cv_signal() no longer require
    that the interlock be held when they are called. This lets consumers
    of condition variables drop the lock before waking other threads,
    which can result in fewer context switches (see the sketch after this
    entry).
  MFC after: 1 month
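  As a rough illustration of the pattern the last item enables, a
  condition variable's signal path can now take the chain lock itself. The
  prototypes, the SLEEPQ_CONDVAR value, and the cv layout below are
  simplified stand-ins, not the real sys/sleepqueue.h declarations.

    /* Simplified stand-in prototypes for this sketch only. */
    void sleepq_lock(void *wchan);
    void sleepq_release(void *wchan);
    void sleepq_signal(void *wchan, int flags);

    #define SLEEPQ_CONDVAR 0x01         /* invented value */

    struct cv_sketch {
        const char *cv_description;
        int cv_waiters;                 /* protected by the chain lock now */
    };

    void
    cv_signal_sketch(struct cv_sketch *cvp)
    {
        sleepq_lock(cvp);               /* take the chain lock ourselves */
        if (cvp->cv_waiters > 0) {
            cvp->cv_waiters--;
            sleepq_signal(cvp, SLEEPQ_CONDVAR);
        }
        sleepq_release(cvp);            /* the interlock was never needed */
    }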
* Directly modifying the priority of a thread that may be on the run queue [ups | 2004-10-12 | 1 | -1/+1]
  can break the sorting order of the ksegrp run queue.
  Tested by: pho
  Reviewed by: jhb, julian
  Approved by: sam (mentor)
  MFC: ASAP
* Now that the return value semantics of cv's for multithreaded processes [jhb | 2004-08-19 | 1 | -9/+23]
  have been unified with that of msleep(9), further refine the sleepq
  interface and consolidate some duplicated code:
  - Move the pre-sleep checks for threaded processes into a
    thread_sleep_check() function in kern_thread.c.
  - Move all handling of TDF_SINTR to be internal to subr_sleepqueue.c.
    Specifically, if a thread is awakened by something other than a signal
    while checking for signals before going to sleep, clear TDF_SINTR in
    sleepq_catch_signals(). This removes a sched_lock lock/unlock combo in
    that edge case during an interruptible sleep. Also, fix
    sleepq_check_signals() to properly handle the condition if TDF_SINTR
    is clear rather than requiring the callers of the sleepq API to notice
    this edge case and call a non-_sig variant of sleepq_wait().
  - Clarify the flags arguments to sleepq_add(), sleepq_signal() and
    sleepq_broadcast() by creating an explicit submask for sleepq types
    (one possible encoding is sketched below). Also, add an explicit
    SLEEPQ_MSLEEP type rather than a magic number of 0. Also, add a
    SLEEPQ_INTERRUPTIBLE flag for use with sleepq_add() and move the
    setting of TDF_SINTR to sleepq_add() if this flag is set rather than
    sleepq_catch_signals(). Note that it is the caller's responsibility to
    ensure that sleepq_catch_signals() is called if and only if this flag
    is passed to the preceding sleepq_add(). Note that this also removes a
    sched_lock lock/unlock pair from sleepq_catch_signals(). It also
    ensures that for an interruptible sleep, TDF_SINTR is always set when
    TD_ON_SLEEPQ() is true.
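  One plausible encoding of such a flags word; the values here are
  invented for illustration, and the real definitions live in
  sys/sleepqueue.h:

    #define SLEEPQ_TYPE             0x0ff   /* submask: which sleepq type */
    #define SLEEPQ_MSLEEP           0x000   /* explicit type, not magic 0 */
    #define SLEEPQ_CONDVAR          0x001
    #define SLEEPQ_INTERRUPTIBLE    0x100   /* may be aborted by a signal */

    /* What an msleep()-style caller might pass, depending on whether it
     * wants the sleep to catch signals. */
    static int
    msleep_flags_sketch(int catch)
    {
        return (SLEEPQ_MSLEEP | (catch ? SLEEPQ_INTERRUPTIBLE : 0));
    }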
* - Change mi_switch() and sched_switch() to accept an optional thread to [jhb | 2004-07-02 | 1 | -2/+2]
    switch to. If a non-NULL thread pointer is passed in, then the CPU
    will switch to that thread directly rather than calling choosethread()
    to pick one (see the sketch below).
  - Make sched_switch() aware of idle threads and know to do
    TD_SET_CAN_RUN() instead of sticking them on the run queue, rather
    than requiring all callers of mi_switch() to know to do this if they
    can be called from an idle thread.
  - Move constants for arguments to mi_switch() and thread_single() out of
    the middle of the function prototypes and up above into their own
    section.
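  Schematically, with everything but the control flow stripped away (stub
  types and helper names, not the real mi_switch()):

    struct thread;                          /* opaque for this sketch */
    struct thread *choosethread(void);
    void cpu_switch_sketch(struct thread *oldtd, struct thread *newtd);
    extern struct thread *curthread_sketch;

    void
    mi_switch_sketch(int flags, struct thread *newtd)
    {
        struct thread *td = curthread_sketch;

        (void)flags;
        if (newtd == NULL)
            newtd = choosethread();     /* old behavior: scheduler picks */
        cpu_switch_sketch(td, newtd);   /* run the handed-in thread directly */
    }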
* Add two new kernel options to allow rudimentary profiling of the internal [jhb | 2004-06-29 | 1 | -0/+41]
  hash tables used in the sleep queue and turnstile code. Each option adds
  a sysctl tree under debug containing the maximum depth of any bucket in
  the hash table as well as a separate node for each bucket (or chain)
  containing the current depth and maximum depth for that bucket.
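  The per-bucket bookkeeping this implies is simple; a minimal sketch with
  invented names (the real counters hang off the debug.* sysctl tree):

    struct chain_depth_sketch {
        int sc_depth;                   /* current entries in this bucket */
        int sc_max_depth;               /* high-water mark, for the sysctl */
    };

    static void
    chain_enqueue_profiled_sketch(struct chain_depth_sketch *sc)
    {
        sc->sc_depth++;
        if (sc->sc_depth > sc->sc_max_depth)
            sc->sc_max_depth = sc->sc_depth;
    }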
* Remove the signal_caught argument from sleepq_timedwait() as it was [jhb | 2004-06-28 | 1 | -5/+2]
  effectively always zero.
* Fixed some common printf format errors. Don't assume that "struct foo *" [bde | 2004-05-14 | 1 | -14/+13]
  is "void *" (it isn't) or that the default promotion of pid_t is int.
  Instead, assume that casting "struct foo *" to "void *" and printing the
  result with %p is useful, and that all pid_t's are representable as
  longs. Fixed some minor style bugs (mainly spelling errors in comments).
* Split sleepq_wakeup_thread() into two functions. sleepq_remove_thread() [jhb | 2004-05-13 | 1 | -13/+50]
  removes a specific thread from a sleep queue. sleepq_resume_thread()
  resumes scheduling of a thread that has been previously removed from a
  sleep queue.
  - sleepq_catch_signals() just removes a thread from the queue it was
    just added to when a pending signal is found.
  - sleepq_signal() and sleepq_broadcast() remove threads from a queue,
    drop the queue lock, and then resume all the previously removed
    threads (sketched below). This doesn't completely fix the sched_lock
    <-> sleepq chain LOR, but it makes it a little better, as we no longer
    call setrunnable() with a sleep queue lock held, meaning that if
    setrunnable() tries to wake up the swapper we don't try to lock two
    sleep queue chains at the same time.
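  The two-phase shape of that wakeup path, reusing the invented fake_*
  types from the FIFO sketch above (the lock and setrunnable helpers are
  stand-ins, not the committed code):

    void sleepq_chain_lock_sketch(void *wchan);
    void sleepq_chain_unlock_sketch(void *wchan);
    void setrunnable_sketch(struct fake_thread *td);

    void
    sleepq_broadcast_sketch(void *wchan, struct fake_sleepq *sq)
    {
        TAILQ_HEAD(, fake_thread) woken = TAILQ_HEAD_INITIALIZER(woken);
        struct fake_thread *td;

        /* Phase 1: unlink every sleeper while the chain lock is held. */
        sleepq_chain_lock_sketch(wchan);
        while ((td = TAILQ_FIRST(sq)) != NULL) {
            TAILQ_REMOVE(sq, td, sq_link);
            TAILQ_INSERT_TAIL(&woken, td, sq_link);
        }
        sleepq_chain_unlock_sketch(wchan);

        /* Phase 2: make them runnable with no sleep queue lock held, so
         * waking the swapper can never pull in a second chain lock. */
        while ((td = TAILQ_FIRST(&woken)) != NULL) {
            TAILQ_REMOVE(&woken, td, sq_link);
            setrunnable_sketch(td);
        }
    }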
* Keep track of threads waiting in kse_release() to avoid a race [deischen | 2004-04-28 | 1 | -1/+5]
  condition where kse_wakeup() doesn't yet see them in (interruptible)
  sleep queues. Also add an upcall check to sleepqueue_catch_signals()
  suggested by jhb.
  This commit should fix recent mysql hangs.
  Reviewed by: jhb, davidxu
  Mysql'd by: Robin P. Blanchard <robin.blanchard at gactr uga edu>
* Remove a bogus assertion and re-add it in a more correct location. A [jhb | 2004-03-16 | 1 | -1/+1]
  thread might be enqueued on a sleep queue but not be asleep when the
  timeout fires if it is blocked on a lock trying to check for pending
  signals before going to sleep. In the case of fixing up the TDF_TIMEOUT
  race, however, the thread must be marked asleep.
  Reported by: kan (the bogus one)
* - Remove old sleep queues. [jhb | 2004-03-12 | 1 | -1/+1]
  - Remove sleepqueue argument from sleepq_set_timeout() since it is not
    used.
* Always assert that the passed-in lock is the same as the saved lock in [jhb | 2004-03-02 | 1 | -19/+1]
  the sleep queue now that the one abnormal case has been fixed.
* Add an implementation of a generic sleep queue abstraction that is used [jhb | 2004-02-27 | 1 | -0/+776]
  to queue threads sleeping on a wait channel similar to how turnstiles
  are used to queue threads waiting for a lock. This subsystem will be
  used as the backend for sleep/wakeup and condition variables initially.
  Eventually it will also be used to replace the ithread-specific iwait
  thread inhibitor.

  Sleep queues are also not locked by sched_lock, so this splits
  sched_lock up a bit further, increasing concurrency within the
  scheduler. Sleep queues also natively support timeouts on sleeps and
  interruptible sleeps, allowing for the reduction of a lot of duplicated
  code between the sleep/wakeup and condition variable implementations.
  For more details on the sleep queue implementation, check the comments
  in sys/sleepqueue.h and kern/subr_sleepqueue.c.
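  The core lookup idea, sketched with invented constants (the real table
  size, shift, and chain structure are in kern/subr_sleepqueue.c):

    #include <stdint.h>

    #define SC_TABLESIZE_SKETCH 128     /* power of two */
    #define SC_MASK_SKETCH      (SC_TABLESIZE_SKETCH - 1)
    #define SC_SHIFT_SKETCH     8       /* skip low bits: wait channels
                                         * are aligned kernel addresses */

    /* A wait channel is just an address; hash it to pick the chain
     * (bucket) whose lock protects every sleep queue that maps there. */
    static inline unsigned int
    sc_hash_sketch(const void *wchan)
    {
        return (((uintptr_t)wchan >> SC_SHIFT_SKETCH) & SC_MASK_SKETCH);
    }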