path: root/sys/kern/kern_umtx.c
Commit message (author, date, files changed, lines -/+)
* Return EINVAL if the contested bit is not set on the umtx passed to
  _umtx_unlock() instead of firing a KASSERT.
  (tjr, 2003-09-07, 1 file, -1/+2)
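  A minimal sketch of the check this adds, assuming the era's lock-word
  layout (u_owner with UMTX_CONTESTED as its flag bit); the fetch and
  error path below are illustrative, not the committed diff:

      /* Illustrative fragment: validate the userland lock word before
       * trusting it; an uncontested umtx has no business being here. */
      intptr_t owner = (intptr_t)fuword(&umtx->u_owner);
      if ((owner & UMTX_CONTESTED) == 0)
              return (EINVAL);        /* was a KASSERT/panic before */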
* Initialize 'blocked' to NULL. I think this was a real problem, but I
  am not sure about that. The lack of -Werror and the inline noise hid
  this for a while.
  (peter, 2003-07-23, 1 file, -0/+1)
* Turn a KASSERT back into an EINVAL return value. So, next time someone
  comes across it, it will turn into a core dump in userland instead of
  a kernel panic. I had also inverted the sense of the test.
  Double pointy hat to: mtm
  (mtm, 2003-07-19, 1 file, -2/+4)
* Remove a lock held across casuptr() that snuck in last commit.
  (mtm, 2003-07-18, 1 file, -2/+5)
* Move the decision on whether to unset the contested bit or not from
  lock to unlock time.
  Suggested by: jhb
  (mtm, 2003-07-18, 1 file, -48/+40)
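  A hedged sketch of the unlock-time decision this describes;
  UMTX_UNOWNED and UMTX_CONTESTED are the umtx lock-word values, but
  umtx_queue_empty() is a made-up helper standing in for the real
  queue check:

      /* Illustrative fragment: pick the new lock-word value based on
       * whether anyone is still queued on this umtx, then publish it
       * with a compare-and-swap on the userland word. */
      if (umtx_queue_empty(td->td_proc->p_pid, uaddr))
              old = casuptr((intptr_t *)uaddr, owner, UMTX_UNOWNED);
      else
              old = casuptr((intptr_t *)uaddr, owner, UMTX_CONTESTED);
      if (old != owner)
              return (EINVAL);  /* lock word changed underneath us */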
* Fix umtx locking, for libthr, in the kernel.

  1. There was a race condition between a thread unlocking a umtx and
     the thread contesting it. If the unlocking thread won the race it
     may try to wake up a thread that was not yet in msleep(). The
     contesting thread would then go to sleep to await a wakeup that
     would never come. It's not possible to close the race by using a
     lock because calls to casuptr() may have to fault a page in from
     swap. Instead, the race was closed by introducing a flag that the
     unlocking thread will set when waking up a thread. The contesting
     thread will check for this flag before going to sleep (sketched
     after this entry). For now the flag is kept in td_flags, but it
     may be better to use some other member or create a new one
     because of the possible performance/contention issues of having
     to own sched_lock. Thanks to jhb for pointing me in the right
     direction on this one.

  2. Once a umtx was contested, all future locks and unlocks were
     happening in the kernel, regardless of whether it was contested
     or not. To prevent this from happening, when a thread locks a
     umtx it checks the queue for that umtx and unsets the contested
     bit if there are no other threads waiting on it. Again, this is
     slightly more complicated than it needs to be because we can't
     hold a lock across casuptr(). So, the thread has to check the
     queue again after unsetting the bit, and reset the contested bit
     if it finds that another thread has put itself on the queue in
     the meantime.

  3. Remove the if... block for unlocking an uncontested umtx, and
     replace it with a KASSERT. The _only_ time a thread should be
     unlocking a umtx in the kernel is if it is contested.

  (mtm, 2003-07-17, 1 file, -24/+47)
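  A hedged sketch of the flag handshake in item 1; TDF_UMTXWAKEUP
  follows the commit's description of a td_flags bit, but the locking
  shown is compressed and is not the literal committed code:

      /* Unlocking side: record the wakeup before issuing it, so the
       * contesting thread can tell one is already in flight. */
      mtx_lock_spin(&sched_lock);
      blocked->td_flags |= TDF_UMTXWAKEUP;
      mtx_unlock_spin(&sched_lock);
      wakeup(blocked);

      /* Contesting side: re-check the flag under sched_lock before
       * committing to sleep; if it is set, the wakeup already fired. */
      mtx_lock_spin(&sched_lock);
      if ((td->td_flags & TDF_UMTXWAKEUP) == 0)
              error = msleep(td, &sched_lock,
                  td->td_priority | PCATCH, "umtx", 0);
      td->td_flags &= ~TDF_UMTXWAKEUP;
      mtx_unlock_spin(&sched_lock);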
* I was so happy I found the semi-colon from hell that I didn't notice
  another typo in the same line. This typo makes libthr unusable, but
  its effects were counter-balanced by the extra semicolon, which made
  libthr remarkably usable for the past several months.
  (mtm, 2003-07-04, 1 file, -1/+1)
* It's unfair how one extraneous semi-colon can cause so much grief.
  (mtm, 2003-07-04, 1 file, -1/+1)
* Use __FBSDID().
  (obrien, 2003-06-11, 1 file, -3/+3)
* - Remove the blocked pointer from the umtx structure.
  - Use a hash of umtx queues to queue blocked threads. We hash on pid
    and the virtual address of the umtx structure. This eliminates
    cases where we previously held a lock across a casuptr() call.
  Reviewed by: jhb (quickly)
  (jeff, 2003-06-03, 1 file, -171/+163)
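  A self-contained sketch of the pid-plus-address hashing idea; the
  chain count, the shift, and all names here are invented for
  illustration rather than taken from the commit:

      #include <stdint.h>
      #include <stdio.h>
      #include <sys/types.h>

      #define UMTX_CHAINS 128           /* illustrative table size */

      /* One queue head per chain; the real code keeps a list of
       * blocked threads hanging off each chain (omitted here). */
      struct umtx_chain {
              int nwaiters;
      } umtx_chains[UMTX_CHAINS];

      /* Hash a (pid, user address) pair to a chain index.  Shifting
       * away the low bits drops pointer alignment before folding in
       * the pid. */
      static int
      umtx_hash(pid_t pid, uintptr_t uaddr)
      {
              return (int)(((uaddr >> 3) + (uintptr_t)pid) % UMTX_CHAINS);
      }

      int
      main(void)
      {
              uintptr_t fake_umtx = 0xdeadbee8UL;

              printf("pid 100 -> chain %d\n", umtx_hash(100, fake_umtx));
              printf("pid 101 -> chain %d\n", umtx_hash(101, fake_umtx));
              return (0);
      }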
* - Create a new lock, umtx_lock, for use instead of the proc lock for
    protecting the umtx queues. We can't use the proc lock because we
    need to hold the lock across calls to casuptr(), which can fault.
  Approved by: re
  (jeff, 2003-05-25, 1 file, -6/+13)
* - Make casuptr() return the old value of the location we're trying
    to update, and change the umtx code to expect this.
  Reviewed by: jeff
  (jake, 2003-04-02, 1 file, -10/+13)
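  The calling convention this establishes is easiest to see in a
  userland analog: cas_oldval() below is a stand-in built on a
  GCC/Clang builtin, not the kernel's casuptr() itself:

      #include <stdint.h>
      #include <stdio.h>

      /* Stand-in for casuptr(): atomically compare *p with 'expect'
       * and, if equal, store 'newval'; always return the value *p
       * held before the operation. */
      static intptr_t
      cas_oldval(intptr_t *p, intptr_t expect, intptr_t newval)
      {
              /* On failure the builtin writes the actual old value
               * into 'expect'; on success it already equals it. */
              __atomic_compare_exchange_n(p, &expect, newval, 0,
                  __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
              return (expect);
      }

      int
      main(void)
      {
              intptr_t word = 0;        /* 0 ~ UMTX_UNOWNED */
              intptr_t me = 0x1000;     /* pretend thread id */

              /* The acquire succeeded iff the returned old value is
               * exactly what we expected to find there. */
              if (cas_oldval(&word, 0, me) == 0)
                      printf("acquired, owner %#lx\n", (long)word);
              return (0);
      }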
* - Add an API for doing SMP-safe locks in userland.
  - umtx_lock() is defined as an inline in umtx.h. It tries to do an
    uncontested acquire of a lock, which falls back to the _umtx_lock()
    system call if that fails.
  - umtx_unlock() is also an inline which falls back to _umtx_unlock()
    if the uncontested unlock fails.
  - Locks are keyed off of the thr_id_t of the currently running
    thread, which is currently just the pointer to the 'struct thread'
    in kernel.
  - _umtx_lock() uses the proc pointer to synchronize access to
    blocked thread queues, which are stored in the first blocked
    thread.
  (jeff, 2003-04-01, 1 file, -0/+303)
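  A compressed userland sketch of this fast-path/slow-path split; the
  struct layout, constants, and stub "syscalls" are illustrative (the
  real inlines lived in umtx.h and trapped into the kernel):

      #include <stdint.h>
      #include <stdio.h>

      typedef intptr_t thr_id_t;        /* ~ pointer to struct thread */
      #define UMTX_UNOWNED ((intptr_t)0)

      struct umtx {
              volatile intptr_t u_owner; /* owner id, or UNOWNED */
      };

      /* Stubs standing in for the real _umtx_lock()/_umtx_unlock()
       * system calls, which queue or wake threads in the kernel. */
      static int _umtx_lock(struct umtx *mtx)   { (void)mtx; return (0); }
      static int _umtx_unlock(struct umtx *mtx) { (void)mtx; return (0); }

      /* Fast path: swing UNOWNED -> our id entirely in userland and
       * enter the kernel only when the compare-and-swap loses. */
      static inline void
      umtx_lock(struct umtx *mtx, thr_id_t id)
      {
              intptr_t expect = UMTX_UNOWNED;

              if (!__atomic_compare_exchange_n(&mtx->u_owner, &expect,
                  id, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                      _umtx_lock(mtx);    /* contested: slow path */
      }

      /* Fast path: release only while we still own it uncontested. */
      static inline void
      umtx_unlock(struct umtx *mtx, thr_id_t id)
      {
              intptr_t expect = id;

              if (!__atomic_compare_exchange_n(&mtx->u_owner, &expect,
                  UMTX_UNOWNED, 0, __ATOMIC_RELEASE, __ATOMIC_RELAXED))
                      _umtx_unlock(mtx);  /* contested: slow path */
      }

      int
      main(void)
      {
              struct umtx m = { UMTX_UNOWNED };
              thr_id_t me = (thr_id_t)0x1000;

              umtx_lock(&m, me);
              printf("owner after lock: %#lx\n", (long)m.u_owner);
              umtx_unlock(&m, me);
              printf("owner after unlock: %#lx\n", (long)m.u_owner);
              return (0);
      }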