path: root/sys/kern/kern_condvar.c
Commit message  (author, date, files changed, lines -/+)
* Rework handling of thread sleeps before timers are working. (jhb, 2016-03-31, 1 file, -41/+7)
    Previously, calls to *sleep() and cv_*wait*() immediately returned during
    early boot. Instead, permit threads that request a sleep without a timeout
    to sleep, as wakeup() works during early boot. Sleeps with timeouts are
    harder to emulate without working timers, so just punt and panic
    explicitly if any thread tries to use those before timers are working.
    Any threads that depend on timeouts should either wait until
    SI_SUB_KICK_SCHEDULER to start, or they should use DELAY() until timers
    are available. Until APs are started earlier this should be a no-op, as
    other kthreads shouldn't get a chance to start running until after timers
    are working regardless of when they were created.
    Reviewed by:    kib
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D5724
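
    As a rough illustration of the policy (a hypothetical helper, not the
    committed change, which adjusted the real sleep paths directly):

        #include <sys/param.h>
        #include <sys/systm.h>	/* cold, panic() */

        /*
         * Sketch: during early boot (cold != 0) an untimed sleep may
         * proceed, since wakeup() already works, but a timed sleep cannot
         * be honored until timers run, so panic loudly instead.
         */
        static void
        boot_sleep_policy(int has_timeout)
        {
        	if (cold && has_timeout)
        		panic("timed sleep before timers are working");
        	/* Untimed sleeps fall through to the normal sleepq path. */
        }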
* Use SCHEDULER_STOPPED() in cv_*wait*() instead of checking panicstr. (jhb, 2016-03-01, 1 file, -5/+5)
    Reviewed by:    kib
    MFC after:      1 month
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D5516
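
    For context, the check sits at the top of the wait routines; a sketch
    shaped like _cv_wait() (SCHEDULER_STOPPED() is the real macro, the rest
    of the body is elided):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/condvar.h>

        static void
        cv_wait_sketch(struct cv *cvp, struct lock_object *lock)
        {
        	/*
        	 * Once the scheduler is stopped (e.g. after a panic) there is
        	 * nothing to sleep on; return instead of blocking forever.
        	 */
        	if (SCHEDULER_STOPPED())
        		return;
        	/* ... normal sleepqueue path elided ... */
        }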
* Prevent cv_waiters wraparound. (markj, 2016-01-09, 1 file, -7/+26)
    r282971 attempted to fix this problem by decrementing cv_waiters after
    waking up from sleeping on a condition variable, but this can result in a
    use-after-free if the CV is freed before all woken threads have had a
    chance to run. Instead, avoid incrementing cv_waiters past INT_MAX, and
    have cv_signal() explicitly check for sleeping threads once cv_waiters
    has reached this bound.
    Reviewed by:    jhb
    MFC after:      2 weeks
    Sponsored by:   EMC / Isilon Storage Division
    Differential Revision:  https://reviews.freebsd.org/D4822
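
    The saturation scheme reads roughly like the sketch below (struct and
    function names are illustrative; the committed code differs in detail):

        #include <sys/limits.h>

        struct cv_sketch {
        	const char *cv_description;
        	int	    cv_waiters;
        };

        /* On sleep: saturate at INT_MAX instead of wrapping around. */
        static void
        cv_waiters_inc(struct cv_sketch *cvp)
        {
        	if (cvp->cv_waiters < INT_MAX)
        		cvp->cv_waiters++;
        }

        /*
         * On wakeup: once saturated, the count can no longer be trusted, so
         * cv_signal() instead asks the sleep queue whether any sleepers
         * remain before decrementing or resetting the count.
         */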
* Revert r282971. (jhb, 2015-05-21, 1 file, -12/+11)
    It depends on condvar consumers not destroying condvars until all threads
    sleeping on a condvar have resumed execution after being awakened.
    However, there are cases where that guarantee is very hard to provide.
* Previously, cv_waiters was only updated by cv_signal or cv_wait. (jhb, 2015-05-15, 1 file, -11/+12)
    If a thread awakened due to a timeout, then cv_waiters was not
    decremented. If INT_MAX threads timed out on a cv without an intervening
    cv_broadcast, then cv_waiters could overflow. To fix this, have each
    sleeping thread decrement cv_waiters when it resumes.
    Note that previously cv_waiters was protected by the sleepq chain lock.
    However, that lock is not held when threads resume from sleep. In
    addition, the interlock is also not always reacquired after resuming
    (cv_wait_unlock), nor is it always held by callers of cv_signal() or
    cv_broadcast(). Instead, use atomic ops to update cv_waiters. Since the
    sleepq chain lock is still held on every increment, it should still be
    safe to compare cv_waiters against zero while holding the lock in the
    wakeup routines, as losing the race can only result in extra calls to
    sleepq_signal() or sleepq_broadcast().
    Differential Revision:  https://reviews.freebsd.org/D2427
    Reviewed by:    benno
    Reported by:    benno (wrap of cv_waiters in the field)
    MFC after:      2 weeks
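
    The update pattern described here, sketched with the real atomic(9)
    primitives (the struct and helper names are illustrative):

        #include <sys/types.h>
        #include <machine/atomic.h>

        struct cv_sketch {
        	const char    *cv_description;
        	volatile u_int cv_waiters;
        };

        /* The increment still happens under the sleepq chain lock... */
        static void
        cv_waiters_inc(struct cv_sketch *cvp)
        {
        	atomic_add_int(&cvp->cv_waiters, 1);
        }

        /* ...but the decrement runs lockless when a thread resumes. */
        static void
        cv_waiters_dec(struct cv_sketch *cvp)
        {
        	atomic_subtract_int(&cvp->cv_waiters, 1);
        }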
* Revert for r277213: (hselasky, 2015-01-22, 1 file, -4/+8)
    FreeBSD developers need more time to review patches in the surrounding
    areas, like the TCP stack which is using MPSAFE callouts, to restore
    distribution of callouts on multiple CPUs. Bump the __FreeBSD_version
    instead of reverting it.
    Suggested by:   kmacy, adrian, glebius and kib
    Differential Revision:  https://reviews.freebsd.org/D1438
* Major callout subsystem cleanup and rewrite: (hselasky, 2015-01-15, 1 file, -8/+4)
    - Close a migration race where callout_reset() failed to set the
      CALLOUT_ACTIVE flag.
    - Callout callback functions are now allowed to be protected by
      spinlocks.
    - Switching the callout CPU number cannot always be done on a per-callout
      basis. See the updated timeout(9) manual page for more information.
    - The timeout(9) manual page has been updated to reflect how all the
      functions inside the callout API work. The manual page has been made
      function oriented to make it easier to deduce how each of the functions
      making up the callout API works, without having to first read the whole
      manual page. All functions are grouped into a handful of sections which
      should give a quick top-level overview of when the different functions
      should be used.
    - The CALLOUT_SHAREDLOCK flag and its functionality have been removed to
      reduce the complexity of the callout code and to avoid problems with
      atomically stopping callouts via callout_stop(). If someone needs it,
      it can be re-added. From my quick grep there are no CALLOUT_SHAREDLOCK
      clients in the kernel.
    - A new callout API function named "callout_drain_async()" has been
      added. See the updated timeout(9) manual page for a complete
      description.
    - Update the callout clients in the "kern/" folder to use the callout API
      properly, like cv_timedwait(). Previously there was some custom
      sleepqueue code in the callout subsystem, which has been removed
      because we now allow callouts to be protected by spinlocks. This allows
      us to tear down the callout like done with regular mutexes, and a
      "td_slpmutex" has been added to "struct thread" to atomically tear down
      the "td_slpcallout". Further, the "TDF_TIMOFAIL" and "SWT_SLEEPQTIMO"
      states can now be completely removed. Currently they are marked as
      available and will be cleaned up in a follow-up commit.
    - Bump the __FreeBSD_version to indicate kernel modules need
      recompilation.
    - There have been several reports that this patch "seems to squash a
      serious bug leading to a callout timeout and panic".
    Kernel build testing:   all architectures were built
    MFC after:      2 weeks
    Differential Revision:  https://reviews.freebsd.org/D1438
    Sponsored by:   Mellanox Technologies
    Reviewed by:    jhb, adrian, sbruno and emaste
* Fix lc_lock/lc_unlock() support for rmlocks held in shared mode. (davide, 2013-09-20, 1 file, -2/+3)
    With the current lock-class KPI this was really difficult, because there
    was no way to pass an rmtracker object to the lock/unlock routines. In
    order to accomplish the task, modify the aforementioned functions so that
    they can return (or pass as argument) a uintptr_t, which in the rm case
    is used to hold a pointer to the struct rm_priotracker for the current
    thread. As an added bonus, this fixes rm_sleep() in the rm shared case,
    which can now communicate the priotracker structure between
    lc_unlock()/lc_lock().
    Suggested by:   jhb
    Reviewed by:    jhb
    Approved by:    re (delphij)
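
    After this change the lock-class methods have roughly the shape below (a
    sketch of the KPI as described; see sys/lock.h for the authoritative
    definition):

        #include <sys/types.h>

        struct lock_object;

        /*
         * lc_unlock() returns an opaque cookie (for rmlocks, the thread's
         * struct rm_priotracker pointer) which the caller hands back to
         * lc_lock() on re-acquire, so state survives the round trip.
         */
        struct lock_class_sketch {
        	void	  (*lc_lock)(struct lock_object *lock, uintptr_t how);
        	uintptr_t (*lc_unlock)(struct lock_object *lock);
        };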
* MFcalloutng: (davide, 2013-03-04, 1 file, -11/+14)
    Extend the condvar(9) KPI, introducing an sbt variant of cv_timedwait.
    This relies on the previously committed sleepq_set_timeout_sbt().
    Sponsored by:   Google Summer of Code 2012, iXsystems inc.
    Tested by:      flo, marius, ian, markj, Fabian Keil
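
    The sbt variant takes a timeout and precision in sbintime_t units; a
    usage sketch (the condvar, mutex, and helper names are hypothetical):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/time.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>
        #include <sys/condvar.h>

        /* Wait for a signal for ~100ms, allowing 10ms of timer slop. */
        static int
        wait_ready(struct cv *cvp, struct mtx *mp)
        {
        	return (cv_timedwait_sbt(cvp, mp, SBT_1MS * 100,
        	    SBT_1MS * 10, 0));
        }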
* Remove all the checks on curthread != NULL, with the exception of some MD
  trap checks (e.g. printtrap()). (attilio, 2012-09-13, 1 file, -1/+1)
    Generally this check is not needed anymore, as there is no legitimate
    case where curthread == NULL once the pcpu 0 area has been properly
    initialized.
    Reviewed by:    bde, jhb
    MFC after:      1 week
* Include the associated wait channel message for context switch ktrace
  records. (jhb, 2012-04-20, 1 file, -10/+10)
    kdump supports both the old and new messages.
    Submitted by:   Andrey Zonov  andrey zonov org
    MFC after:      1 week
* Remove unused variables `p' and unneeded assignments of `rval'. (ed, 2009-02-26, 1 file, -6/+0)
    Found by:       LLVM's scan-build
* (jhb, 2008-09-25, 1 file, -4/+12)
    - Don't do a WITNESS_SAVE() on the interlock if it is Giant in the
      condition variable wait routines. DROP_GIANT() already manages that
      state in the Giant interlock case.
    - Assert that Giant is held when it is passed as a sleep interlock.
* Permit Giant to be passed as the explicit interlock either to
  msleep/mtx_sleep or the various cv_*wait*() routines. (jhb, 2008-08-07, 1 file, -28/+50)
    Currently, the "unlock" behavior of PDROP and cv_wait_unlock() with Giant
    is not permitted, as it would be confusing since Giant is fully
    unrecursed and unlocked during a thread sleep.
    This is handy for subsystems which wish to allow unlocked drivers to
    continue to use Giant, such as CAM, the new TTY layer, and the new USB
    stack. CAM currently uses a hack that I told Scott to use because I
    really didn't want to permit this behavior, and the TTY and USB patches
    both have various patches to permit this.
    MFC after:      2 weeks
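
    Usage then looks like an ordinary interlocked wait with Giant itself as
    the interlock (the softc type and its fields below are hypothetical):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>
        #include <sys/condvar.h>

        struct mysc {
        	struct cv sc_cv;
        	int	  sc_busy;
        };

        static void
        wait_not_busy(struct mysc *sc)
        {
        	mtx_assert(&Giant, MA_OWNED);
        	/* Giant is dropped while asleep and re-acquired on wakeup. */
        	while (sc->sc_busy)
        		cv_wait(&sc->sc_cv, &Giant);
        }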
* If a thread that is swapped out is made runnable, then the setrunnable()
  routine wakes up proc0 so that proc0 can swap the thread back in.
  (jhb, 2008-08-05, 1 file, -2/+11)
    Historically, this has been done by waking up proc0 directly from
    setrunnable() itself via a wakeup(). When waking up a sleeping thread
    that was swapped out (the usual case when waking proc0, since only
    sleeping threads are eligible to be swapped out), this resulted in a bit
    of recursion (e.g. wakeup() -> setrunnable() -> wakeup()).
    With sleep queues having separate locks in 6.x and later, this caused a
    spin lock LOR (sleepq lock -> sched_lock/thread lock -> sleepq lock). An
    attempt was made to fix this in 7.0 by making the proc0 wakeup use the
    ithread mechanism for doing the wakeup. However, this required grabbing
    proc0's thread lock to perform the wakeup. If proc0 was asleep elsewhere
    in the kernel (e.g. waiting for disk I/O), then this degenerated into the
    same LOR since the thread lock would be some other sleepq lock.
    Fix this by deferring the wakeup of the swapper until after the sleepq
    lock held by the upper layer has been released. The setrunnable() routine
    now returns a boolean value to indicate whether or not proc0 needs to be
    woken up. The end result is that consumers of the sleepq API such as
    *sleep/wakeup, condition variables, sx locks, and lockmgr have to wake up
    proc0 if they get a non-zero return value from sleepq_abort(),
    sleepq_broadcast(), or sleepq_signal().
    Discussed with: jeff
    Glanced at by:  sam
    Tested by:      Jurgen Weber  jurgen - ish com au
    MFC after:      2 weeks
* (jeff, 2008-03-12, 1 file, -9/+14)
    - Pass the priority argument from *sleep() into sleepq and down into
      sched_sleep(). This removes an extra thread_lock() acquisition and
      allows the scheduler to decide what to do with the static boost.
    - Change the priority arguments to cv_* to match sleepq/msleep/etc.,
      where 0 means no priority change. Catch -1 in cv_broadcastpri() and
      convert it to 0 for now.
    - Set a flag when sleeping in a way that is compatible with swapping,
      since direct priority comparisons are meaningless now.
    - Add a sysctl to ULE, kern.sched.static_boost, that defaults to on and
      controls the boost behavior. Turning it off gives better performance
      in some workloads but needs more investigation.
    - While we're modifying sleepq, change signal and broadcast to both
      return with the lock held, as the lock was held on enter.
    Reviewed by:    jhb, peter
* Commit 2/14 of sched_lock decomposition. (jeff, 2007-06-04, 1 file, -2/+2)
    - Adapt sleepqueues to the new thread_lock() mechanism.
    - Delay assigning the sleep queue spinlock as the thread lock until after
      we've checked for signals. It is illegal for a thread to return in
      mi_switch() with any lock assigned to td_lock other than the scheduler
      locks.
    - Change sleepq_catch_signals() to do the switch if necessary, to
      simplify the callers.
    - Simplify timeout handling now that locking a sleeping thread has the
      side-effect of locking the sleepqueue. Some previous races are no
      longer possible.
    Tested by:      kris, current@
    Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
    Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
* Fix a potential LOR with sx_sleep() and cv_wait() with sx locks by
  1) adding the thread to the sleepq via sleepq_add() before dropping the
  lock, and 2) dropping the sleepq lock around calls to lc_unlock() for
  sleepable locks (i.e. locks that use sleepq's in their implementation).
  (jhb, 2007-05-08, 1 file, -5/+25)
* Rename the cv_*wait*() functions to _cv_*wait*() and change their second
  argument from a mutex to a lock_object. (jhb, 2007-03-21, 1 file, -51/+58)
    Add cv_*wait*() wrapper macros that accept either a mutex, rwlock, or sx
    lock as the second argument, convert it to a lock_object, and then call
    _cv_*wait*(). Basically, the visible difference is that you can now use
    rwlocks and sx locks with condition variables using the same API as with
    mutexes.
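
    The wrapper layer is a set of thin macros; sys/condvar.h ended up with
    definitions essentially like these (quoted from memory, simplified):

        /*
         * Each supported lock type embeds a struct lock_object as its
         * 'lock_object' member, so the macros just take its address.
         */
        #define cv_wait(cvp, lock)					\
        	_cv_wait((cvp), &(lock)->lock_object)
        #define cv_wait_sig(cvp, lock)					\
        	_cv_wait_sig((cvp), &(lock)->lock_object)
        #define cv_timedwait(cvp, lock, timo)				\
        	_cv_timedwait((cvp), &(lock)->lock_object, (timo))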
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes,
  rwlocks, and sx locks to 'lock_object'. (jhb, 2007-03-21, 1 file, -18/+18)
* Don't use cv_wait_unlock() to implement cv_wait(). (jhb, 2007-03-21, 1 file, -1/+28)
    Instead, implement cv_wait() fully and add missing KTRACE context switch
    traces.
* Add a second sleep queue so that sx and lockmgr can have separate sleep
  queues for shared and exclusive acquisitions. (kmacy, 2006-12-16, 1 file, -6/+8)
    Submitted by:   Attilio Rao
    Approved by:    jhb
* Change the sleepq_add(9) argument from 'struct mtx *' to
  'struct lock_object *', which allows it to be used with different kinds of
  locks. (pjd, 2006-11-16, 1 file, -4/+4)
    For example, it allows implementing Solaris condition variables, which
    will be used in the ZFS port, on top of sx(9) locks.
    Reviewed by:    jhb
* Fix a sleep queue race for KSE thread. (davidxu, 2006-02-23, 1 file, -24/+0)
    Reviewed by:    jhb
* Fix a long-standing race between the sleep queue and thread suspension
  code. (davidxu, 2006-02-15, 1 file, -9/+2)
    When a thread A is going to sleep, it calls sleepq_catch_signals() to
    detect any pending signals or thread suspension requests. If nothing
    happens, it returns without holding the process lock or scheduler lock.
    This opens a race window which allows a thread B to come in and do the
    process suspension work; however, since A is still in the running state,
    thread B can do nothing to A. Thread A continues and puts itself into the
    actually-sleeping state, but B has never seen it, and it sits there
    forever until B is woken up by other threads sometime later (this can be
    a very long delay, or never happen). Fix this bug by forcing
    sleepq_catch_signals() to return with the scheduler lock held.
    Fix sleepq_abort() by passing it an interrupted code; previously it
    worked like wakeup_one(), and the interruption could not be identified
    correctly by the sleep queue code when the sleeping thread was resumed.
    Let thread_suspend_check() return EINTR or ERESTART, so the sleep queue
    no longer has to use SIGSTOP as a hack to build a return value.
    Reviewed by:    jhb
    MFC after:      1 week
* Contributions from the XFS for FreeBSD project: (rodrigc, 2005-12-12, 1 file, -8/+27)
    - Implement the cv_wait_unlock() method, which has semantics compatible
      with the sv_wait() method in IRIX. For cv_wait_unlock(), the lock must
      be held before entering the function, but is not held when the
      function is exited.
    - Implement the existing cv_wait() function in terms of cv_wait_unlock().
    Submitted by:   kan
    Feedback from:  jhb, trhodes, Christoph Hellwig <hch at infradead dot org>
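
    A usage sketch of the hand-off semantics (the softc type, field, and
    helper below are hypothetical):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>
        #include <sys/condvar.h>

        struct mysc {
        	struct mtx sc_mtx;
        	struct cv  sc_cv;
        	int	   sc_done;
        };

        static void
        wait_done_then_unlocked(struct mysc *sc)
        {
        	mtx_lock(&sc->sc_mtx);
        	if (!sc->sc_done)
        		cv_wait_unlock(&sc->sc_cv, &sc->sc_mtx);
        	else
        		mtx_unlock(&sc->sc_mtx);
        	/* Either way, sc_mtx is no longer held here. */
        }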
* Refine the turnstile and sleep queue interfaces just a bit: (jhb, 2004-10-12, 1 file, -15/+16)
    - Add a new _lock() call to each API that locks the associated chain lock
      for a lock_object pointer or wait channel. The _lookup() functions now
      require that the chain lock be locked via _lock() when they are called.
    - Change sleepq_add(), turnstile_wait() and turnstile_claim() to look up
      the associated queue structure internally via _lookup() rather than
      accepting a pointer from the caller. For turnstiles, this means that
      the actual lookup of the turnstile in the hash table is only done when
      the thread actually blocks rather than being done on each loop
      iteration in _mtx_lock_sleep(). For sleep queues, this means that
      sleepq_lookup() is no longer used outside of the sleep queue code
      except to implement an assertion in cv_destroy().
    - Change sleepq_broadcast() and sleepq_signal() to require that the chain
      lock is already held. For condition variables, this lets the
      cv_broadcast() and cv_signal() functions lock the sleep queue chain
      lock while testing the waiters count. This means that the waiters count
      internal to condition variables is no longer protected by the interlock
      mutex, and cv_broadcast() and cv_signal() no longer require that the
      interlock be held when they are called. This lets consumers of
      condition variables drop the lock before waking other threads, which
      can result in fewer context switches.
    MFC after:      1 month
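
    The resulting wakeup-side shape, roughly (the sleepq argument lists have
    changed over the years, so this sketch is indicative rather than exact):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/condvar.h>
        #include <sys/sleepqueue.h>

        /* Test the waiters count under the sleepq chain lock instead of
         * under the condvar's interlock. */
        static void
        cv_signal_sketch(struct cv *cvp)
        {
        	sleepq_lock(cvp);		/* chain lock for this wchan */
        	if (cvp->cv_waiters > 0) {
        		cvp->cv_waiters--;
        		sleepq_signal(cvp, SLEEPQ_CONDVAR, 0, 0);
        	}
        	sleepq_release(cvp);
        }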
* Now that the return value semantics of cv's for multithreaded processes
  have been unified with those of msleep(9), further refine the sleepq
  interface and consolidate some duplicated code: (jhb, 2004-08-19, 1 file, -39/+16)
    - Move the pre-sleep checks for threaded processes into a
      thread_sleep_check() function in kern_thread.c.
    - Move all handling of TDF_SINTR to be internal to subr_sleepqueue.c.
      Specifically, if a thread is awakened by something other than a signal
      while checking for signals before going to sleep, clear TDF_SINTR in
      sleepq_catch_signals(). This removes a sched_lock lock/unlock combo in
      that edge case during an interruptible sleep. Also, fix
      sleepq_check_signals() to properly handle the condition if TDF_SINTR
      is clear rather than requiring the callers of the sleepq API to notice
      this edge case and call a non-_sig variant of sleepq_wait().
    - Clarify the flags arguments to sleepq_add(), sleepq_signal() and
      sleepq_broadcast() by creating an explicit submask for sleepq types.
      Also, add an explicit SLEEPQ_MSLEEP type rather than a magic number of
      0. Also, add a SLEEPQ_INTERRUPTIBLE flag for use with sleepq_add() and
      move the setting of TDF_SINTR to sleepq_add() if this flag is set
      rather than sleepq_catch_signals(). Note that it is the caller's
      responsibility to ensure that sleepq_catch_signals() is called if and
      only if this flag is passed to the preceding sleepq_add(). Note that
      this also removes a sched_lock lock/unlock pair from
      sleepq_catch_signals(). It also ensures that for an interruptible
      sleep, TDF_SINTR is always set when TD_ON_SLEEPQ() is true.
* Synchronize the extra SA threading checks and return value handling of
  condition variables with that of msleep(). (jhb, 2004-08-10, 1 file, -24/+50)
    Reviewed by:    davidxu
* Remove the signal_caught argument from sleepq_timedwait() as it was
  effectively always zero. (jhb, 2004-06-28, 1 file, -1/+1)
* Associate a simple count of waiters with each condition variable. (jhb, 2004-04-06, 1 file, -2/+13)
    The count is protected by the mutex that protects the condition, so the
    count does not require any extra locking or atomic operations. It serves
    as an optimization to avoid calling into the sleepqueue code at all if
    there are no waiters. Note that the count can get temporarily out of
    sync when threads sleeping on a condition variable time out or are
    aborted. However, it doesn't hurt to call the sleepqueue code for either
    a signal or a broadcast when there are no waiters, and the count is
    never out of sync in the opposite direction unless we have more than
    INT_MAX sleeping threads.
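
    A sketch of the fast path this buys (the function name is hypothetical;
    the count lives in the real struct cv):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/condvar.h>

        /*
         * The count is protected by the interlock that protects the
         * condition, so no atomics are needed, and a zero count lets the
         * wakeup skip the sleepqueue code entirely.
         */
        static void
        cv_broadcast_sketch(struct cv *cvp)
        {
        	if (cvp->cv_waiters == 0)
        		return;			/* nobody sleeping: cheap exit */
        	cvp->cv_waiters = 0;
        	/* ... sleepq broadcast elided ... */
        }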
* (jhb, 2004-03-12, 1 file, -2/+2)
    - Remove old sleep queues.
    - Remove the sleepqueue argument from sleepq_set_timeout() since it is
      not used.
* Switch the sleep/wakeup and condition variable implementations to use the
  sleep queue interface: (jhb, 2004-02-27, 1 file, -295/+46)
    - Sleep queues attempt to merge some of the benefits of both sleep
      queues and condition variables. Having sleep queues in a hash table
      avoids having to allocate a queue head for each wait channel. Thus,
      struct cv has shrunk down to just a single char * pointer now.
      However, the hash table does not hold threads directly, but queue
      heads. This means that once you have located a queue in the hash
      bucket, you no longer have to walk the rest of the hash chain looking
      for threads. Instead, you have a list of all the threads sleeping on
      that wait channel.
    - Outside of the sleepq code and the sleep/cv code the kernel no longer
      differentiates between cv's and sleep/wakeup. For example, calls to
      abortsleep() and cv_abort() are replaced with a call to
      sleepq_abort(). Thus, the TDF_CVWAITQ flag is removed. Also, calls to
      unsleep() and cv_waitq_remove() have been replaced with calls to
      sleepq_remove().
    - The sched_sleep() function no longer accepts a priority argument, as
      sleeps no longer inherently bump the priority. Instead, this is solely
      a property of msleep(), which explicitly calls sched_prio() before
      blocking.
    - The TDF_ONSLEEPQ flag has been dropped as it was never used. The
      associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also
      been dropped and replaced with a single explicit clearing of td_wchan.
      TD_SET_ONSLEEPQ() would really have only made sense if it had taken
      the wait channel and message as arguments anyway. Now that that only
      happens in one place, a macro would be overkill.
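
    The slimmed-down condvar this describes is essentially just a name for
    the wait channel (the cv_waiters count arrived later, in the 2004-04-06
    commit above); per the description:

        struct cv {
        	const char *cv_description;	/* wchan message for debugging */
        };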
* (jeff, 2004-01-25, 1 file, -6/+3)
    - Add a flags parameter to mi_switch. The value of flags may be SW_VOL
      or SW_INVOL. Assert that one of these is set in mi_switch() and
      properly adjust the rusage statistics. This is to simplify the large
      number of users of this interface, which were previously all required
      to adjust the proper counter prior to calling mi_switch(). This also
      facilitates more switch and locking optimizations.
    - Change all callers of mi_switch() to pass the appropriate parameter
      and remove direct references to the process statistics.
* (tanimura, 2003-11-09, 1 file, -2/+9)
    - Implement selwakeuppri(), which allows raising the priority of a
      thread being woken up. The woken thread can run at a priority as high
      as after tsleep().
    - Replace selwakeup()s with selwakeuppri()s and pass appropriate
      priorities.
    - Add cv_broadcastpri(), which raises the priority of the broadcast
      threads. Used by selwakeuppri() if a collision occurs.
    Not objected in:        -arch, -current
* Allow an SA process to unblock a thread blocked on a condition variable.
  (davidxu, 2003-07-02, 1 file, -2/+8)
    Reviewed by:    deischen
* Use __FBSDID(). (obrien, 2003-06-11, 1 file, -2/+3)
* (jhb, 2003-05-13, 1 file, -0/+6)
    - Merge struct procsig with struct sigacts.
    - Move struct sigacts out of the u-area and malloc() it using the
      M_SUBPROC malloc bucket.
    - Add a small sigacts_*() API for managing sigacts structures:
      sigacts_alloc(), sigacts_free(), sigacts_copy(), sigacts_share(), and
      sigacts_shared().
    - Remove the p_sigignore, p_sigacts, and p_sigcatch macros.
    - Add a mutex to struct sigacts that protects all the members of the
      struct.
    - Add sigacts locking.
    - Remove Giant from nosys(), kill(), killpg(), and kern_sigaction() now
      that sigacts is locked.
    - Several in-kernel functions such as psignal(), tdsignal(),
      trapsignal(), and thread_stopped() are now MP safe.
    Reviewed by:    arch@
    Approved by:    re (rwatson)
* Test the P_WEXIT flag while already holding the proc lock instead of right
  after dropping it. (jhb, 2003-04-17, 1 file, -3/+2)
* Do NOT return from a non-interruptible cv_wait, falsely claiming to have
  timed out. (julian, 2003-03-31, 1 file, -2/+0)
    I don't know what I was thinking...
* Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
  to WITNESS_WARN(). (jhb, 2003-03-04, 1 file, -4/+8)
* When a process has been waiting on a condition variable or mutex, the
  td_wmesg field in the thread structure points to the description string of
  the condition variable or mutex. (harti, 2003-02-27, 1 file, -0/+1)
    If the condvar or the mutex had been initialized from a loadable module
    that was unloaded in the meantime, td_wmesg may now point to invalid
    memory. Retrieving the process table now may panic the kernel (or access
    junk). Setting the td_wmesg field to NULL after unblocking on the
    condvar/mutex prevents this panic.
    PR:             kern/47408
    Approved by:    jake (mentor)
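
    The fix amounts to clearing the pointer on the wakeup path;
    schematically (the helper is hypothetical, td_wmesg is the real struct
    thread field):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/proc.h>

        /* Ensure td_wmesg cannot dangle into memory owned by a module that
         * has since been unloaded. */
        static void
        clear_wmesg(struct thread *td)
        {
        	td->td_wmesg = NULL;
        }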
* Call sched_sleep() instead of rolling our own in cv_waitq_add(). (jeff, 2003-01-26, 1 file, -2/+2)
* Add code to ddb to allow backtracing an arbitrary thread. (julian, 2002-12-28, 1 file, -20/+0)
    (show thread {address})
    Remove the IDLE kse state and replace it with a change in the way
    threads share KSEs. Every KSE now has a thread, which is considered its
    "owner"; however, a KSE may also be lent to other threads in the same
    group to allow completion of in-kernel work. In this case the owner
    remains the same and the KSE will revert to the owner when the other
    work has been completed.
    All creation of upcalls etc. is now done from kse_reassign(), which in
    turn is called from mi_switch or thread_exit(). This means that special
    code can be removed from msleep() and cv_wait().
    kse_release() no longer leaves a KSE with no thread, but converts the
    existing thread into the KSE's owner and sets it up for doing an upcall.
    It is just inhibited from being scheduled until there is some reason to
    do an upcall.
    Remove all trace of the kse_idle queue since it is no longer needed.
    "Idle" KSEs are now on the loanable queue.
* More work on the interaction between suspending and sleeping threads.
  (julian, 2002-10-25, 1 file, -17/+4)
    Also clean up some code used with 'single-threading'.
    Reviewed by:    davidxu
* Round out the facility for a 'bound' thread to loan out its KSE in
  specific situations. (julian, 2002-10-09, 1 file, -8/+6)
    The owner thread must be blocked, and the borrower can not proceed back
    to user space with the borrowed KSE. The borrower will return the KSE on
    the next context switch where the owner wants it back. This removes a
    lot of possible race conditions and deadlocks. It is conceivable that
    the borrower should inherit the priority of the owner too; that's
    another discussion and would be simple to do.
    Also, as part of this, the "preallocated spare thread" is attached to
    the thread doing a syscall rather than the KSE. This removes the need to
    lock the scheduler when we want to access it, as it's now "at hand".
    DDB now shows a lot more info for threaded processes, though it may need
    some optimisation to squeeze it all back into 80 chars again. (possible
    JKH project)
    Upcalls are now "bound" threads, but "KSE lending" now means that other
    completing syscalls can be completed using that KSE before the upcall
    finally makes it back to the UTS. (Getting threads OUT OF THE KERNEL is
    one of the highest priorities in the KSE system.) The upcall, when it
    happens, will present all the completed syscalls to the KSE for
    selection.
* Completely redo thread states. (julian, 2002-09-11, 1 file, -67/+35)
    Reviewed by:    davidxu@freebsd.org
* Fix a bogus CTR3 message. (davidxu, 2002-09-02, 1 file, -1/+1)
    Reviewed by:    julian@freebsd.org (mentor)
* updatepri() works on a ksegrp (where the scheduling parameters are), so
  directly give it the ksegrp instead of the thread. (peter, 2002-08-28, 1 file, -3/+5)
    The only thing it used to use in the thread was the ksegrp.
    Reviewed by:    julian
* Remove code that removes a thread from its sleep queue before adding it to
  a condvar wait. (julian, 2002-07-30, 1 file, -7/+0)
    We do not have asleep() any more, so this can not happen.