path: root/lib/libthr
Commit message (author, date, files changed, lines -/+)
* Relink libc_r.a, libc_r.so and libc_r_p.so from libthr to libkse. (marcel, 2003-09-27, 1 file, -12/+0)
  On ia64, where there's no libc_r at all, libkse is now the default thread library by virtue of these links. The reasons for this change are: 1. libkse is slated to become the default thread library anyway, 2. active development and maintenance is only present for libkse, 3. GNOME and KDE, both in the process of being supported on ia64, work better with KSE; even on ia64.
* Implement _get_curthread and _set_curthread. (marcel, 2003-07-24, 1 file, -1/+6)
  We use GCC's builtin function for this, which expands to PAL calls (rduniq and wruniq). This needs adjustment when TLS is implemented.
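  A minimal sketch of what such accessors could look like, assuming GCC's __builtin_thread_pointer()/__builtin_set_thread_pointer() builtins (which on Alpha expand to the rduniq/wruniq PAL calls); the actual libthr code may differ in detail:

    struct pthread;

    static inline struct pthread *
    _get_curthread(void)
    {
        /* Read the per-thread "unique" value (rduniq PAL call on Alpha). */
        return ((struct pthread *)__builtin_thread_pointer());
    }

    static inline void
    _set_curthread(struct pthread *thr)
    {
        /* Write the per-thread "unique" value (wruniq PAL call on Alpha). */
        __builtin_set_thread_pointer(thr);
    }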
* This commit was generated by cvs2svn to compensate for changes in r117783, which included commits to RCS files with non-trunk default branches. (mtm, 2003-07-19, 2 files, -0/+55)
* The MD framework for libthr on alpha. (mtm, 2003-07-19, 2 files, -0/+55)
* When _PTHREADSINVARIANTS is defined, SIGABRT is not included in the set of signals to block. (mtm, 2003-07-08, 3 files, -2/+19)
  Also, make the PANIC macro call abort() instead of simply exiting.
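  A hypothetical shape for such a PANIC macro; the message helper and exact wording here are assumptions, not the actual libthr source:

    #include <stdio.h>
    #include <stdlib.h>

    /* Print the panic message and call abort() so a core dump is left
     * behind, instead of exiting quietly. */
    #define PANIC(msg)                                  \
        do {                                            \
            fprintf(stderr, "panic: %s\n", (msg));      \
            abort();                                    \
        } while (0)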
* Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*. (mtm, 2003-07-06, 8 files, -23/+23)
  It is a more accurate description of the locks they operate on.
* There's no need for _umtxtrylock to be a separate function. (mtm, 2003-07-06, 3 files, -13/+8)
  Roll it into the pre-existing macro that's used to call it.
* _pthread_mutex_trylock() is another internal libc function that must block signals. (mtm, 2003-07-03, 1 file, -0/+8)
* Begin making libthr async signal safe. (mtm, 2003-07-02, 1 file, -2/+22)
  Create a private, single underscore, version of pthread_mutex_unlock for libc. pthread_mutex_lock already has one. These versions are different from the ones that applications will link against because they block all signals from the time a call to lock the mutex is made until it is successfully unlocked.
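  An illustrative, self-contained rendering of that idea; the names libc_mutex_lock/libc_mutex_unlock and the single saved mask are stand-ins (the real libthr entry points are the underscore-prefixed pthread_mutex_* functions and keep the saved mask per thread):

    #include <pthread.h>
    #include <signal.h>

    static sigset_t saved_mask;   /* illustrative; libthr keeps this per thread */

    /* Block every signal before taking the lock, so async-signal-safe libc
     * code cannot be interrupted while the mutex is held. */
    static int
    libc_mutex_lock(pthread_mutex_t *m)
    {
        sigset_t all;

        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, &saved_mask);
        return (pthread_mutex_lock(m));
    }

    /* Drop the lock, then restore the signal mask that was in effect
     * before the matching lock call. */
    static int
    libc_mutex_unlock(pthread_mutex_t *m)
    {
        int error;

        error = pthread_mutex_unlock(m);
        pthread_sigmask(SIG_SETMASK, &saved_mask, NULL);
        return (error);
    }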
* Do not attempt to requeue a thread on a mutex queue. (mtm, 2003-07-01, 1 file, -1/+1)
  It may be that a thread receives a spurious wakeup from sigtimedwait(), so make sure that the queueing code is called only once, before entering the loop (not in the loop). This should fix some fatal errors people are seeing with messages stating the thread is already on the mutex queue. These errors may still be triggered from signal handlers, however, since that part of the code is not locked down yet.
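  A generic sketch of the pattern (illustrative names, not the libthr code): the thread is queued once, before the loop, and the loop only re-checks its wakeup condition after every sigtimedwait() return, since that return may be spurious:

    #include <signal.h>
    #include <time.h>

    static volatile sig_atomic_t wakeup_pending;

    static void
    wait_for_wakeup(const sigset_t *waitset)
    {
        siginfo_t info;
        struct timespec ts = { .tv_sec = 1 };

        /* The thread was queued once, before this loop (not shown). */
        while (!wakeup_pending) {
            /* A spurious or timed-out return just re-checks the condition;
             * nothing is re-queued inside the loop. */
            (void)sigtimedwait(waitset, &info, &ts);
        }
    }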
* Axe AINC. (ru, 2003-07-01, 1 file, -1/+0)
  Submitted by: bde
* Catch up with _thread_suspend() changes. (mtm, 2003-06-30, 3 files, -3/+9)
* Sweep through pthread locking and use the new locking primitives for libthr. (mtm, 2003-06-29, 7 files, -20/+21)
* Locking primitives and operations in libthr should use struct umtx, not spinlock_t. (mtm, 2003-06-29, 2 files, -4/+22)
  Spinlock_t and the associated functions and macros may require blocking signals in order for async-safe libc functions to behave appropriately in libthr. This is undesirable for libthr internal locking. So, this is the first step in completely separating libthr from libc's locking primitives. Three new macros should be used for internal libthr locking from now on: THR_LOCK, THR_TRYLOCK, THR_UNLOCK.
* In a critical section, separate the acquisition of the thread lock and the disabling of signals. (mtm, 2003-06-29, 2 files, -17/+27)
  What we are really interested in is keeping track of recursive disabling of signals. We should not be recursively acquiring thread locks. Any such situations should be reorganized to not require a recursive lock. Separating the two out also allows us to block signals independently of acquiring thread locks. This will be needed in libthr in the near future when we put the pieces together to protect libc functions that use pthread mutexes and low-level locks.
* Make _thread_suspend work with both the old broken sigtimedwait implementation and the new improved one. (jdp, 2003-06-29, 3 files, -11/+31)
  We now precompute the signal set passed to sigtimedwait, using an inverted set when necessary for compatibility with older kernels.
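  A sketch of precomputing both forms of the set once at startup; SIGUSR1 stands in for whichever signal the library actually waits on:

    #include <signal.h>

    static sigset_t waitset;   /* signals sigtimedwait() should return */
    static sigset_t invset;    /* the complement, for the older kernel behaviour */

    static void
    init_wait_sets(void)
    {
        int sig;

        sigemptyset(&waitset);
        sigaddset(&waitset, SIGUSR1);          /* stand-in wakeup signal */

        /* Invert: every signal except the ones in waitset. */
        sigfillset(&invset);
        for (sig = 1; sig < NSIG; sig++)
            if (sigismember(&waitset, sig))
                sigdelset(&invset, sig);
    }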
* The move to _retire() a thread in the GC instead of in the thread's exit function has obviated the need for _spin[un]lock_pthread(). (mtm, 2003-06-29, 3 files, -21/+5)
  The _spin[un]lock() functions can now dereference curthread without the danger that the ldt entry containing the pointer to the thread has been cleared out from under them.
* Create compatibility links for libc_r on ia64 to prevent build-time breakages. (marcel, 2003-06-27, 1 file, -0/+12)
  Note that runtime compatibility is not guaranteed. Future changes to setjmp/longjmp in libc will break threaded applications linked against libc_r.so.5 on ia64. We pull our "tier 2" card once more... Reviewed by: ru
* _thread_printf() is only used for debugging or in cases where something's screwed beyond all help, so it can just skip the pthreads wrapper for write(2) and call directly into it. (mtm, 2003-06-09, 1 file, -2/+2)
* Make C applications statically compiled with libthr work. (mtm, 2003-06-04, 1 file, -0/+6)
  Previously, an application compiled -static with libthr would dump core in malloc(3) because the stub thread initialization routine in libc would be used instead of the libthr-supplied one.
* Teach recent changes in the umtx structure in the kernel to the libthr initializer. (mtm, 2003-06-03, 1 file, -1/+1)
  Found by: tinderbox
* Unwind the _giant_mutex from pthread_detach(). (mtm, 2003-06-02, 1 file, -8/+8)
  When detaching a joiner thread it's important that the correct lock order is observed: lock the joined thread first and then the joiner.
* Consolidate static_init() and static_init_private into one function. (mtm, 2003-06-02, 1 file, -17/+11)
  The behaviour of this function is controlled by the argument: private.
* .S comments must be C comments, not ASM ones. (obrien, 2003-06-02, 1 file, -1/+1)
* I botched one of my commits in the last round. Fix it. (mtm, 2003-05-31, 2 files, -12/+11)
* Make the mutex static initializers look more like the one for condition variables. Cosmetic. (mtm, 2003-05-29, 1 file, -25/+19)
  Explicitly compare against PTHREAD_MUTEX_INITIALIZER. We shouldn't encourage calls to the mutex functions with null pointers to mutexes. Approved by: re/jhb
* Use a static lock to make sure pthread_cond_* functions called from multiple threads don't initialize the same condition variable more than once. (mtm, 2003-05-29, 1 file, -2/+20)
  Explicitly compare cond pointers with PTHREAD_COND_INITIALIZER instead of NULL. Just because it happens to be defined as NULL is no reason to encourage the idea that people can call those functions with NULL pointers to a condition variable. Approved by: re/jhb
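  A sketch of the first-use initialization pattern, assuming an implementation (like FreeBSD's) where pthread_cond_t is a pointer and PTHREAD_COND_INITIALIZER is its static value; the ordinary mutex used as the static lock here is purely illustrative:

    #include <pthread.h>

    static pthread_mutex_t static_cond_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Initialize a statically allocated condition variable exactly once,
     * even if several threads race into pthread_cond_* at the same time.
     * Note the explicit comparison against PTHREAD_COND_INITIALIZER. */
    static int
    cond_check_init(pthread_cond_t *cond)
    {
        int error = 0;

        if (*cond == PTHREAD_COND_INITIALIZER) {
            pthread_mutex_lock(&static_cond_lock);
            if (*cond == PTHREAD_COND_INITIALIZER)   /* re-check under the lock */
                error = pthread_cond_init(cond, NULL);
            pthread_mutex_unlock(&static_cond_lock);
        }
        return (error);
    }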
* Missing unlock. (mtm, 2003-05-29, 1 file, -0/+2)
  Approved by: re/jhb
* Don't hold the active thread list lock when signaling the gc thread. (mtm, 2003-05-29, 3 files, -12/+21)
  The dead thread list is sufficient for synchronization. Retire the arch_id (ldt array slot) in the gc thread instead of doing it in the thread itself. Approved by: re/jhb
* It's unnecessary to lock the thread during creation. (mtm, 2003-05-29, 1 file, -5/+2)
  Simply extend the scope of the active thread list lock. Approved by: re/jhb
* Minimize the potential for deadlocks between an exiting thread and its joiner by making sure all locks and unlocks occur in the same order. (mtm, 2003-05-27, 1 file, -2/+18)
  For the record, the lock order is: DEAD_LIST, THREAD_LIST, exiting thread, joiner thread. Approved by: re/rwatson
* Revert part of the last commit. I don't know what I was smoking. (mtm, 2003-05-27, 1 file, -2/+13)
  Approved by: re/rwatson
* Decouple the thread stack [de]allocating functions from the 'dead threads list' lock. (mtm, 2003-05-26, 4 files, -7/+16)
  It's not really necessary and we don't need the added complexity or potential for deadlocks. Approved by: re/blanket libthr
* Revise the unlock order in _pthread_join(). (mtm, 2003-05-26, 1 file, -12/+6)
  Also, if the joined thread is not dead, the join loop is guaranteed to execute at least once, so there is no need to pick up the thread list lock after we return from suspension only to release it after the loop. Approved by: re/blanket libthr
* Return gracefully, rather than aborting, when the maximum number of concurrent threads per process has been reached. (mtm, 2003-05-25, 6 files, -10/+27)
  Return EAGAIN, as per spec. Approved by: re/blanket libthr
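  From the caller's side this means pthread_create() now reports the condition instead of the process aborting; a minimal usage sketch:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *
    worker(void *arg)
    {
        return (arg);
    }

    int
    main(void)
    {
        pthread_t tid;
        int error;

        error = pthread_create(&tid, NULL, worker, NULL);
        if (error == EAGAIN)            /* per-process thread limit reached */
            fprintf(stderr, "too many threads, try again later\n");
        else if (error == 0)
            pthread_join(tid, NULL);
        return (0);
    }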
* _pthread_cancel() breaks the normal lock order of first locking the joined and then the joiner thread. (mtm, 2003-05-25, 3 files, -4/+28)
  There isn't an easy (sane?) way to make it use the correct order without introducing races involving the target thread and finding which (active or dead) list it is on. So, after locking the canceled thread it will try to lock the joined thread and, if it fails, release the first lock and try again from the top. Introduce a new function, _spintrylock, which is simply a wrapper around umtx_trylock(), to help accomplish this. Approved by: re/blanket libthr
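  A generic, self-contained sketch of that retry pattern; ordinary pthread mutexes stand in for the per-thread locks that _spintrylock (a wrapper around umtx_trylock()) operates on:

    #include <pthread.h>
    #include <sched.h>

    static pthread_mutex_t canceled_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t joined_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Take both locks even though the "wrong" one must come first: lock the
     * first, try the second, and on failure drop everything and start over,
     * so we can never deadlock against a thread using the normal order. */
    static void
    lock_canceled_then_joined(void)
    {
        for (;;) {
            pthread_mutex_lock(&canceled_lock);
            if (pthread_mutex_trylock(&joined_lock) == 0)
                return;                          /* both locks held */
            pthread_mutex_unlock(&canceled_lock);
            sched_yield();                       /* back off, retry from the top */
        }
    }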
* Part of the last patch. (mtm, 2003-05-25, 2 files, -9/+9)
  Modify the thread creation and thread searching routine to lock the thread lists with the new locks instead of GIANT_LOCK. Approved by: re/blanket libthr
* Start locking up the active and dead threads lists. (mtm, 2003-05-25, 7 files, -127/+135)
  The active threads list is protected by a spinlock_t, but the dead list uses a pthread_mutex because it is necessary to synchronize other threads with the garbage collector thread. Lock/Unlock macros are used so it's easier to make changes to the locks in the future.
  The 'dead thread list' lock is intended to replace the gc mutex. This doesn't have any practical ramifications. It simply makes it clearer what the purpose of the lock is. The gc will use this lock, instead of the gc mutex, to synchronize access to the dead list with other threads.
  Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK, and also to properly lock and protect thread state changes, especially with respect to a joining thread. The gc thread was also re-arranged to be more organized and less nested.
  _pthread_join() was also modified to use the thread list locks. However, locking and unlocking here needs special care because a thread could find itself in a position where it's joining an exiting thread that is waiting on the dead list lock, which this thread (joiner) holds. If the joiner doesn't take care to lock *and* unlock in the same order, they (the joiner and the joinee) could deadlock against each other.
  Approved by: re/blanket libthr
* The libthr code makes use of higher-level primitives (pthread_mutex_t and pthread_cond_t) internally, in addition to the low-level spinlock_t. (mtm, 2003-05-25, 2 files, -0/+14)
  The garbage collector mutex and condition variable are two such examples. This might lead to critical sections nested within critical sections. Implement a reference counting mechanism so that signals are masked only on the first entry and unmasked on the last exit. I'm not sure I like the idea of nested critical sections, but if the library is going to use the pthread primitives it might be necessary. Approved by: re/blanket libthr
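  An illustrative sketch of that reference counting; the names and the use of C11 _Thread_local are assumptions (the real code keeps the count and the saved mask in struct pthread):

    #include <pthread.h>
    #include <signal.h>

    static _Thread_local int      crit_count;      /* nesting depth */
    static _Thread_local sigset_t crit_savedmask;

    /* Mask all signals only on the outermost entry. */
    static void
    thread_critical_enter(void)
    {
        sigset_t all;

        if (crit_count++ == 0) {
            sigfillset(&all);
            pthread_sigmask(SIG_BLOCK, &all, &crit_savedmask);
        }
    }

    /* Restore the original mask only when the outermost section is left. */
    static void
    thread_critical_exit(void)
    {
        if (--crit_count == 0)
            pthread_sigmask(SIG_SETMASK, &crit_savedmask, NULL);
    }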
* The struct mcontext has changed. It's using the register sets. Bring this in line. (marcel, 2003-05-25, 1 file, -1/+1)
* Lock the cond queue (condition variables): (mtm, 2003-05-24, 1 file, -70/+43)
  Access to the thread's flags and state is protected by _thread_critical_enter/exit(). When a thread is signaled with a condition, its state must be protected by locking it and disabling signals before it is taken off the waiters' queue.
  Move the implementation of pthread_cond_signal() and pthread_cond_broadcast() into one function, cond_signal(). Its behaviour is determined by the last argument, int broadcast. If this is set to 1 it will remove all waiters, otherwise it will wake up only the first waiter thread.
  Remove an extraneous call to pthread_testcancel(). Approved by: re/blanket libthr
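  A structural sketch of folding the two entry points into one helper; the queue type and helper functions below are stand-ins, not the libthr API:

    #include <stddef.h>

    struct waiter;        /* stand-in for struct pthread */
    struct cond_queue;    /* stand-in for the cond's waiter queue */

    struct waiter *cond_queue_deq(struct cond_queue *);   /* stand-in prototypes */
    void wake_waiter(struct waiter *);

    /* broadcast != 0 drains the whole queue; broadcast == 0 wakes only the
     * first waiter.  pthread_cond_signal() and pthread_cond_broadcast()
     * then reduce to calls with the appropriate flag. */
    static void
    cond_signal_common(struct cond_queue *cq, int broadcast)
    {
        struct waiter *w;

        while ((w = cond_queue_deq(cq)) != NULL) {
            wake_waiter(w);
            if (!broadcast)
                break;
        }
    }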
* Add two functions: _spinlock_pthread() and _spinunlock_pthread() that take the address of a struct pthread as their first argument. (mtm, 2003-05-23, 3 files, -4/+20)
  _spin[un]lock() just become wrappers around these two functions. These new functions are for use in situations where curthread can't be used. One example is _thread_retire(), where we invalidate the array index curthread uses to get its pointer. Approved by: re/blanket libthr
* EDOOFUS (mtm, 2003-05-23, 2 files, -10/+3)
  Prevent one thread from messing up another thread's saved signal mask by saving it in struct pthread instead of leaving it as a global variable. D'oh! Approved by: re/blanket libthr
* Make WARNS2 clean. (mtm, 2003-05-23, 16 files, -10/+41)
  The fixes mostly included:
  o removed unused variables
  o explicit inclusion of header files
  o prototypes for externally defined functions
  Approved by: re/blanket libthr
* note to self: do not confuse void* with int. (mtm, 2003-05-23, 1 file, -1/+1)
  Approved by: re/blanket libthr
* o Make the definition of _set_curthread() match its declaration in thr_private.h. (mtm, 2003-05-21, 1 file, -5/+27)
  o Lock down the ldt_entries array and ldt_free, which points to the next free slot. As noted in the comments, it's necessary to special-case the initial_thread because %gs is not set up for it yet. This is ok because that early in the program there won't be any reentrancy issues anyway.
  Approved by: re/blanket libthr
* Insert a debugging aid: (mtm, 2003-05-21, 2 files, -2/+18)
  When we notice, in either the mutex or cond queue code, that the thread is already on one of the queues, don't just simply abort(). Print out the thread's identifiers and what queue it was on. Approved by: markm/mentor, re/blanket libthr
* Re-enable the garbage collector thread in anticipation of further locking work. (mtm, 2003-05-21, 1 file, -2/+0)
  I can't see anything obviously wrong with it (other than the need to update the locking). Approved by: markm/mentor, re/blanket libthr
* When a thread exits, it does not return from the kernel unless it is the *only* remaining thread in the application, in which case we should not core dump, and instead exit gracefully. (mtm, 2003-05-21, 1 file, -0/+4)
  Approved by: markm/mentor, re/blanket libthr
* The thread id was being set *before* zeroing out the thread. Reverse the order. (mtm, 2003-05-21, 1 file, -2/+3)
  Approved by: markm/mentor, re/blanket libthr