From 7f947a0032071b2639d959b21b13b71a532376d9 Mon Sep 17 00:00:00 2001
From: mtm
Date: Sun, 25 May 2003 08:48:11 +0000
Subject: _pthread_cancel() breaks the normal lock order of first locking the
 joined and then the joiner thread. There isn't an easy (sane?) way to make
 it use the correct order without introducing races involving the target
 thread and finding which (active or dead) list it is on. So, after locking
 the canceled thread it will try to lock the joined thread and, if that
 fails, release the first lock and try again from the top. Introduce a new
 function, _spintrylock, which is simply a wrapper around umtx_trylock(), to
 help accomplish this.

Approved by: re/blanket libthr
---
 lib/libthr/thread/thr_spinlock.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

(limited to 'lib/libthr/thread/thr_spinlock.c')

diff --git a/lib/libthr/thread/thr_spinlock.c b/lib/libthr/thread/thr_spinlock.c
index ff9b9e0..0f9cb6b 100644
--- a/lib/libthr/thread/thr_spinlock.c
+++ b/lib/libthr/thread/thr_spinlock.c
@@ -69,6 +69,16 @@ _spinlock(spinlock_t *lck)
 	_spinlock_pthread(curthread, lck);
 }
 
+int
+_spintrylock(spinlock_t *lck)
+{
+	int error;
+	error = umtx_trylock((struct umtx *)lck, curthread->thr_id);
+	if (error != 0 && error != EBUSY)
+		abort();
+	return (error);
+}
+
 inline void
 _spinlock_pthread(pthread_t pthread, spinlock_t *lck)
 {
--
cgit v1.1
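
The commit message describes the deadlock-avoidance pattern this helper enables:
take the first lock normally, try-lock the second, and on failure drop the first
lock and start over rather than blocking in the wrong order. Below is a minimal,
self-contained sketch of that pattern. It is an illustration only, not the
_pthread_cancel() code: it uses POSIX mutexes in place of libthr's per-thread
spinlocks (_spinlock/_spintrylock/_spinunlock), and the names a_lock, b_lock and
lock_both_out_of_order() are hypothetical stand-ins for the canceled and joined
thread locks.

/*
 * Sketch of the back-off-and-retry locking pattern from the commit
 * message, using POSIX mutexes instead of libthr spinlocks.
 * Build with: cc -o trylock trylock.c -lpthread   (assumed invocation)
 */
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical stand-ins for the canceled thread's and joined thread's locks. */
static pthread_mutex_t a_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Acquire both locks even though we must take a_lock first, i.e. the
 * reverse of the normal order.  If the trylock on b_lock fails (EBUSY),
 * release a_lock and retry from the top instead of blocking, so we can
 * never deadlock against a thread that holds b_lock and wants a_lock.
 */
static void
lock_both_out_of_order(void)
{
	for (;;) {
		pthread_mutex_lock(&a_lock);
		if (pthread_mutex_trylock(&b_lock) == 0)
			return;			/* got both; caller unlocks */
		pthread_mutex_unlock(&a_lock);	/* back off and retry */
		sched_yield();			/* give the other holder a chance */
	}
}

int
main(void)
{
	lock_both_out_of_order();
	printf("acquired both locks without risking deadlock\n");
	pthread_mutex_unlock(&b_lock);
	pthread_mutex_unlock(&a_lock);
	return (0);
}

The retry loop is why a trylock primitive is needed at all: a blocking lock in
the reversed order would reintroduce the classic ABBA deadlock the commit is
working around.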