path: root/lib/libthr/thread/thr_exit.c
Commit message    Author    Age    Files    Lines
* MFC r303795:    kib    2016-08-20    1    -1/+8
  Add __cxa_thread_atexit(3) API implementation.
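  For context, a minimal sketch of how this interface is used. The call is normally
  emitted by the compiler for a thread_local object with a destructor; greeting_dtor
  and worker below are illustrative names, not part of the commit:

    #include <pthread.h>
    #include <stdio.h>

    /* C++ ABI entry point added by this commit; usually called by
       compiler-generated code rather than directly by applications. */
    int __cxa_thread_atexit(void (*dtor)(void *), void *obj, void *dso_symbol);

    extern void *__dso_handle;      /* provided by the C runtime for this module */

    static void
    greeting_dtor(void *obj)
    {
        /* Runs when the registering thread exits, like a thread_local destructor. */
        printf("destroying %s\n", (const char *)obj);
    }

    static void *
    worker(void *arg)
    {
        static char greeting[] = "worker greeting";

        (void)arg;
        /* Roughly what the compiler emits on first use of the object. */
        __cxa_thread_atexit(greeting_dtor, greeting, &__dso_handle);
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t td;

        pthread_create(&td, NULL, worker, NULL);
        pthread_join(td, NULL);     /* destructor runs as the worker exits */
        return (0);
    }

  Built with cc -pthread, the destructor fires when the worker thread exits, before
  main() returns.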
* Change code to use unwind.h.    davidxu    2010-09-30    1    -4/+4
* Report death event to debugger before moving to gc list, otherwise    davidxu    2010-09-26    1    -3/+2
  the debugger may not find it on the thread list.
* Because old _pthread_cleanup_push/pop do not have a frame address,    davidxu    2010-09-25    1    -5/+16
  they are incompatible with the stack unwinding code; if they are invoked, disable
  stack unwinding for the current thread and print a warning message when the thread
  is exiting.
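  For contrast, the standard macro forms do capture the caller's frame and therefore
  cooperate with unwinding. A small illustrative program (the resource name and sleep
  interval are arbitrary):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    release_resource(void *arg)
    {
        /* Cleanup handler: runs on cancellation, pthread_exit(), or pop(1). */
        printf("cleanup: releasing %s\n", (const char *)arg);
    }

    static void *
    worker(void *arg)
    {
        (void)arg;
        /* pthread_cleanup_push()/pop() are macros that expand to a matched block
           in the caller, so the library can associate the handler with this stack
           frame; the old _pthread_cleanup_push() function-call interface had no
           frame address, which is what this commit works around. */
        pthread_cleanup_push(release_resource, "shared buffer");
        sleep(10);                  /* cancellation point */
        pthread_cleanup_pop(1);     /* run the handler on normal exit too */
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t td;

        pthread_create(&td, NULL, worker, NULL);
        pthread_cancel(td);         /* handler runs as the stack unwinds */
        pthread_join(td, NULL);
        return (0);
    }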
* Simplify code, and in the while loop, fix the operator to match the unwinding    davidxu    2010-09-25    1    -7/+4
  direction.
* Because the atfork lock is held while forking, a thread cancellation triggered    davidxu    2010-09-19    1    -1/+1
  by an atfork handler is unsafe; use the internal flag no_cancel to disable it.
* - The _Unwind_Resume function is not used; remove it.    davidxu    2010-09-19    1    -14/+8
  - Use a store barrier to make sure uwl_forcedunwind is the latest thing other
    threads can see.
  - Add some comments.
* Fix a race condition when finding stack unwinding functions.    davidxu    2010-09-19    1    -7/+20
* Add code to support stack unwinding when a thread exits. Note that only    davidxu    2010-09-15    1    -1/+150
  deferred-mode cancellation works; asynchronous mode does not work because it lacks
  libunwind support. Stack unwinding is not enabled unless LIBTHR_UNWIND_STACK is
  defined in the Makefile.
* Convert thread list lock from mutex to rwlock.    davidxu    2010-09-13    1    -10/+7
* Add a signal handler wrapper; the reason to add it is that there are    davidxu    2010-09-01    1    -0/+19
  some cases we want to improve:
  1) If a thread gets a signal while in a cancellation point, it is possible the
     TDP_WAKEUP may be eaten by the signal handler if the handler called some
     interruptible system calls.
  2) In a signal handler, we want to disable cancellation.
  3) When a thread is holding some low-level locks, it is better to disable signals;
     that code need not worry about reentrancy, and the sigprocmask system call is
     avoided because it is a bit expensive.
  The signal handler wrapper works in this way:
  1) libthr installs its own signal handler when user code invokes sigaction to
     install a handler; the user handler is recorded in an internal array.
  2) When a signal is delivered, libthr's signal handler is invoked. libthr checks
     whether the thread holds some low-level lock or is in a critical region; if so,
     the signal is buffered and all signals are masked. Once the thread leaves the
     critical region, the correct signal mask is restored and the buffered signal is
     processed.
  3) Before the user signal handler is invoked, cancellation is temporarily
     disabled; after the user signal handler returns, the cancellation state is
     restored and any pending cancellation is rescheduled.
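  The deferral described in step 2 can be illustrated in miniature. The following
  standalone sketch uses a single global flag and plain function names in place of
  libthr's per-thread state; it is not the library's code:

    #include <signal.h>
    #include <unistd.h>

    /* Per-thread in libthr; a single global is enough for this sketch. */
    static volatile sig_atomic_t in_critical;
    static volatile sig_atomic_t deferred_sig;

    static void
    user_handler(int sig)
    {
        static const char msg[] = "user handler ran\n";

        (void)sig;
        /* Stands in for the handler the application installed via sigaction(). */
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }

    static void
    wrapper_handler(int sig)
    {
        if (in_critical) {
            /* Holding a low-level lock: buffer the signal for later.
               (libthr additionally masks signals until the region is left.) */
            deferred_sig = sig;
            return;
        }
        user_handler(sig);
    }

    static void
    leave_critical(void)
    {
        in_critical = 0;
        if (deferred_sig != 0) {
            int sig = deferred_sig;

            deferred_sig = 0;
            user_handler(sig);      /* process the buffered signal now */
        }
    }

    int
    main(void)
    {
        struct sigaction sa;

        sa.sa_handler = wrapper_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        in_critical = 1;            /* pretend we hold a low-level lock */
        raise(SIGUSR1);             /* wrapper defers the signal */
        leave_critical();           /* buffered signal is processed here */
        return (0);
    }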
* Eliminate unused code.    davidxu    2010-08-26    1    -12/+0
* Tweak code a bit to be POSIX compatible: when a cancellation request    davidxu    2010-08-17    1    -0/+2
  is acted upon, or when a thread calls pthread_exit(), the thread first disables
  cancellation by setting its cancelability state to PTHREAD_CANCEL_DISABLE and its
  cancelability type to PTHREAD_CANCEL_DEFERRED. The cancelability state remains set
  to PTHREAD_CANCEL_DISABLE until the thread has terminated. It has no effect if a
  cancellation cleanup handler or thread-specific data destructor routine changes
  the cancelability state to PTHREAD_CANCEL_ENABLE.
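  In effect, the exiting thread behaves as if it had first made the following two
  calls. This is a sketch of the POSIX-required state, not the literal libthr code,
  and enter_exiting_state is an illustrative name:

    #include <pthread.h>

    static void
    enter_exiting_state(void)
    {
        int oldstate, oldtype;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
        /* From here until the thread has terminated, a cleanup handler or TSD
           destructor switching back to PTHREAD_CANCEL_ENABLE has no effect on
           the termination already in progress. */
    }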
* Move call to _malloc_thread_cleanup() so that if this is the last thread,    jasone    2008-09-09    1    -3/+6
  the call never happens. This is necessary because malloc may be used during exit
  handler processing.
  Submitted by: davidxu
* Add thread-specific caching for small size classes, based on magazines.    jasone    2008-08-27    1    -0/+4
  This caching allows for completely lock-free allocation/deallocation in the steady
  state, at the expense of likely increased memory use and fragmentation.
  Reduce the default number of arenas to 2*ncpus, since thread-specific caching
  typically reduces arena contention.
  Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
  cacheline-spaced, and subpage-spaced size classes. The advantages are: fewer size
  classes, reduced false cacheline sharing, and reduced internal fragmentation for
  allocations that are slightly over 512, 1024, etc.
  Increase RUN_MAX_SMALL, in order to limit fragmentation for the subpage-spaced
  size classes.
  Add a size-->bin lookup table for small sizes to simplify translating sizes to
  size classes. Include a hard-coded constant table that is used unless custom size
  class spacing is specified at run time.
  Add the ability to disable tiny size classes at compile time via MALLOC_TINY.
* Remove libc_r's remnant code.    davidxu    2008-05-06    1    -16/+0
* Use UMTX_OP_WAIT_UINT_PRIVATE and UMTX_OP_WAKE_PRIVATE to save    davidxu    2008-04-29    1    -1/+1
  time in the kernel (avoid VM lookup).
* Compile libthr with warnings.    ru    2008-03-25    1    -0/+2
* - Copy signal mask out before THR_UNLOCK(), because THR_UNLOCK() may call    davidxu    2008-03-18    1    -0/+4
    _thr_suspend_check(), which messes with the sigmask saved in the thread
    structure.
  - Don't suspend a thread that has force_exit set.
  - In pthread_exit(), if there is a suspension flag set, wake up the waiting
    thread after setting PS_DEAD; this causes the waiting thread to break out of
    the loop in suspend_common().
* Don't report death event to debugger if it is a forced exit.    davidxu    2008-03-06    1    -1/+1
* Call underscore version of pthread_cleanup_pop instead.    davidxu    2007-12-20    1    -1/+1
* Remove 3rd clause, renumber, ok per email    imp    2007-01-12    1    -4/+1
* Eliminate atomic operations in thread cancellation functions; it should    davidxu    2006-11-24    1    -2/+2
  reduce the overhead of cancellation points.
* WARNS level 4 cleanup.    davidxu    2006-04-04    1    -4/+2
* Refine thread suspension code; now thread suspension is a blockable    davidxu    2006-01-05    1    -1/+8
  operation: the caller is blocked until the target threads are really suspended.
  Also avoid suspending a thread when it is holding a critical lock. Fix a bug in
  _thr_ref_delete which tests a never-set flag.
* Follow the change in the kernel: the joiner thread just waits at the thread id    davidxu    2005-10-26    1    -2/+5
  address and lets the kernel wake it up.
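  The waiting half of that handshake is the umtx "sleep while the word still holds
  this value" primitive. A rough sketch of that pattern using _umtx_op(2) on an
  ordinary long; the helper name is illustrative and this is not libthr's join code:

    #include <sys/types.h>
    #include <sys/umtx.h>

    /* Block until *addr no longer equals val, i.e. until some other party
       (here, the kernel on thread exit) changes the word and issues a wake. */
    void
    wait_while_equal(u_long *addr, u_long val)
    {
        while (*addr == val)
            _umtx_op(addr, UMTX_OP_WAIT, val, NULL, NULL);
    }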
* Add debugger event reporting support; currently only TD_CREATE and TD_DEATH    davidxu    2005-04-12    1    -1/+3
  events are reported.
* Import my recent 1:1 threading work. Some of the improved features include:    davidxu    2005-04-02    1    -118/+36
  1. fast simple type mutex.
  2. __thread tls works.
  3. asynchronous cancellation works (using a signal).
  4. thread synchronization is fully based on umtx; mainly, condition variables and
     other synchronization objects were rewritten to use umtx directly. Those
     objects can be shared between processes via shared memory, but that requires
     an ABI change which has not happened yet.
  5. default stack size is increased to 1M on 32-bit platforms, 2M on 64-bit
     platforms.
  As a result, some mysql super-smack benchmarks show performance is improved
  massively.
  Okayed by: jeff, mtm, rwatson, scottl
* 1. Now that a thread's state is changed from within the kernel, where    mtm    2004-10-13    1    -1/+2
     no userland locks are held, the dead thread lock can no longer protect access
     to it. Therefore, instead of using an if (!dead)...else clause after walking
     the active threads list, test the thread pointer before deciding not to walk
     the dead threads list. If the thread pointer is null it means it was not found
     in the active threads list and the dead threads list should be checked.
  2. Do not free the stack of a thread that is not marked dead. This is the 2nd and
     final part of eliminating the race to free a thread's stack.
  MFC after: 3 days
* Remove a reference to a non-existent syscall: _thr_exit(). The    mtm    2004-10-08    1    -4/+1
  actual name is thr_exit(). How this ever worked is beyond me.
* Close a race between a thread exiting and the freeing of its stack.    mtm    2004-10-06    1    -3/+2
  After some discussion the best option seems to be to signal the thread's death
  from within the kernel. This requires that thr_exit() take an argument.
  Discussed with: davidxu, deischen, marcel
  MFC after: 3 days
* Make libthr async-signal-safe without costly signal masking. The guidelines I    mtm    2004-05-20    1    -29/+12
  followed are: Only 3 functions (pthread_cancel, pthread_setcancelstate,
  pthread_setcanceltype) are required to be async-signal-safe by POSIX. None of the
  rest of the pthread API is required to be async-signal-safe. This means that only
  the three mentioned functions are safe to use from inside signal handlers.
  However, there are certain system/libc calls that are cancellation points that a
  caller may call from within a signal handler, and since they are cancellation
  points, calls have to be made into libthr to test for cancellation and exit the
  thread if necessary. So, the cancellation test and thread exit code paths must be
  async-signal-safe as well.
  A summary of the changes follows:
  o Almost all of the code paths that masked signals, as well as locking the
    pthread structure, now lock only the pthread structure.
  o Signals are masked (and left that way) as soon as a thread enters
    pthread_exit().
  o The active and dead threads locks now explicitly require that signals are
    masked.
  o Access to the isdead field of the pthread structure is protected by both the
    active and dead list locks for writing. Either one is sufficient for reading.
  o The thread state and type fields have been combined into one three-state switch
    to make it easier to read without requiring a lock. It doesn't need a lock for
    writing (and therefore for reading either) because only the current thread can
    write to it and it is an integer value.
  o The thread state field of the pthread structure has been eliminated. It was an
    unnecessary field that mostly duplicated the flags field, but required
    additional locking that would make a lot more code paths require signal
    masking. Any truly unique values (such as PS_DEAD) have been reborn as separate
    members of the pthread structure.
  o Since the mutex and condvar pthread functions are not async-signal-safe there
    is no need to muck about with the wait queues when handling a signal ...
  o ... which also removes the need for wrapping signal handlers and sigaction(2).
  o The condvar and mutex async-cancellation code had to be revised as a result of
    some of these changes, which resulted in semi-unrelated changes that would have
    been difficult to work on as a separate commit, so they are included as well.
  The only part of the changes I am worried about is related to locking for the
  pthread joining fields. But, I will take a closer look at them once this
  mega-patch is committed.
* Remove the garbage collector thread. All resources are freed    mtm    2004-03-28    1    -10/+38
  in-line. If the exiting thread cannot release a resource, then the next thread to
  exit will release it.
* Implement reference counting of read-write locks. This uses    mtm    2004-01-19    1    -0/+6
  a list in the thread structure to keep track of the locks and how many times they
  have been locked. This list is checked on every lock and unlock. The traversal
  through the list is O(n). Most applications don't hold so many locks at once that
  this will become a problem. However, if it does become a problem it might be a
  good idea to review this once libthr is off probation and in the optimization
  cycle.
  This fixes:
  o deadlock when a thread tries to recursively acquire a read lock when a writer
    is waiting on the lock.
  o a thread could previously successfully unlock a lock it did not own
  o deadlock when a thread tries to acquire a write lock on a lock it already owns
    for reading or writing [this is admittedly not required by POSIX, but is nice
    to have]
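  The first fixed deadlock can be pictured with a small scenario: a thread
  re-acquires a read lock it already holds while a writer is queued. The program
  below is an illustrative sketch of that pattern; on an implementation without
  per-thread reference counting the second rdlock may block behind the writer,
  which is exactly the deadlock described:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    static void *
    writer(void *arg)
    {
        (void)arg;
        /* Queues behind the reader in main(). */
        pthread_rwlock_wrlock(&lock);
        printf("writer got the lock\n");
        pthread_rwlock_unlock(&lock);
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t td;

        pthread_rwlock_rdlock(&lock);       /* first read lock */
        pthread_create(&td, NULL, writer, NULL);
        sleep(1);                           /* let the writer block on wrlock */

        /* Recursive read lock while a writer is waiting: the case this commit
           fixes. With the per-thread lock list, the thread is recognized as an
           existing reader and its count is bumped instead of queueing it behind
           the writer, which would deadlock. */
        pthread_rwlock_rdlock(&lock);
        printf("recursive read lock acquired\n");

        pthread_rwlock_unlock(&lock);       /* drop the recursive reference */
        pthread_rwlock_unlock(&lock);       /* drop the original reference */
        pthread_join(td, NULL);
        return (0);
    }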
* Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*.    mtm    2003-07-06    1    -2/+2
  It is a more accurate description of the locks they operate on.
* Sweep through pthread locking and use the new locking primitives for    mtm    2003-06-29    1    -2/+2
  libthr.
* Don't hold the active thread list lock when signaling the gc thread.    mtm    2003-05-29    1    -11/+13
  The dead thread list lock is sufficient for synchronization.
  Retire the arch_id (ldt array slot) in the gc thread instead of doing it in the
  thread itself.
  Approved by: re/jhb
* Minimize the potential for deadlocks between an exiting thread and its    mtm    2003-05-27    1    -2/+18
  joiner by making sure all locks and unlocks occur in the same order. For the
  record the lock order is:
  DEAD_LIST, THREAD_LIST, exiting thread, joiner thread.
  Approved by: re/rwatson
* Start locking up the active and dead threads lists. The active threads    mtm    2003-05-25    1    -30/+26
  list is protected by a spinlock_t, but the dead list uses a pthread_mutex because
  it is necessary to synchronize other threads with the garbage collector thread.
  Lock/Unlock macros are used so it's easier to make changes to the locks in the
  future.
  The 'dead thread list' lock is intended to replace the gc mutex. This doesn't
  have any practical ramifications. It simply makes it clearer what the purpose of
  the lock is. The gc will use this lock, instead of the gc mutex, to synchronize
  access to the dead list with other threads.
  Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK, and also
  to properly lock and protect thread state changes, especially with respect to a
  joining thread. The gc thread was also re-arranged to be more organized and less
  nested.
  _pthread_join() was also modified to use the thread list locks. However, locking
  and unlocking here needs special care because a thread could find itself in a
  position where it's joining an exiting thread that is waiting on the dead list
  lock, which this thread (joiner) holds. If the joiner doesn't take care to lock
  *and* unlock in the same order they (the joiner and the joinee) could deadlock
  against each other.
  Approved by: re/blanket libthr
* Make WARNS2 clean. The fixes mostly included:    mtm    2003-05-23    1    -0/+3
  o removed unused variables
  o explicit inclusion of header files
  o prototypes for externally defined functions
  Approved by: re/blanket libthr
* Note to self: do not confuse void* with int.    mtm    2003-05-23    1    -1/+1
  Approved by: re/blanket libthr
* When a thread exits it does not return from the kernel unless it    mtm    2003-05-21    1    -0/+4
  is the *only* remaining thread in the application, in which case we should not
  core dump, and instead exit gracefully.
  Approved by: markm/mentor, re/blanket libthr
* - Define curthread as _get_curthread() and remove all direct calls to    jeff    2003-04-02    1    -3/+0
    _get_curthread(). This is similar to the kernel's curthread. Doing this saves
    stack overhead and is more convenient to the programmer.
  - Pass the pointer to the newly created thread to _thread_init().
  - Remove _get_curthread_slow().
* - Add libthr but don't hook it up to the regular build yet. This is an    jeff    2003-04-01    1    -0/+186
    adaptation of libc_r for the thr system call interface. This is beta quality
    code.