path: root/lib/libpthread/thread
Commit message (author, date, files changed, lines removed/added)
* style cleanup: Remove duplicate $FreeBSD$ tags. (cperciva, 2004-02-10, 1 file, -2/+0)
  These files had tags after the copyright notice, inside the comment block (incorrect, removed), and outside the comment block (correct).
  Approved by: rwatson (mentor)
* Add cancellation point to sem_wait() and sem_timedwait() for pshared semaphores. (deischen, 2004-02-06, 1 file, -10/+18)
  Also add cancellation cleanup handlers to keep semaphores in a consistent state.
  Submitted in part by: davidxu
  Reviewed by: davidxu
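A minimal user-level sketch of the pattern this commit applies inside the library: pairing a cancellation point with a cleanup handler so a cancelled waiter leaves the semaphore in a consistent state. This is illustrative code against the public pthread/semaphore API, not the libpthread source.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t sem;

    /* Cleanup handler: runs if the thread is cancelled while blocked below,
     * giving us a place to restore any shared bookkeeping. */
    static void
    waiter_cleanup(void *arg)
    {
        (void)arg;
        puts("cancelled inside sem_wait(); shared state restored here");
    }

    static void *
    waiter(void *arg)
    {
        (void)arg;
        pthread_cleanup_push(waiter_cleanup, NULL);
        sem_wait(&sem);            /* cancellation point */
        pthread_cleanup_pop(0);    /* pop without running on normal return */
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t t;

        sem_init(&sem, 0, 0);
        pthread_create(&t, NULL, waiter, NULL);
        sleep(1);                  /* let the waiter block */
        pthread_cancel(t);
        pthread_join(t, NULL);
        sem_destroy(&sem);
        return (0);
    }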
* Provide a userland version of non-pshared semaphores and add cancellation points to sem_wait() and sem_timedwait(). (deischen, 2004-02-03, 2 files, -169/+153)
  Also make sem_post signal-safe.
* Return EPERM if the mutex owner is not the current thread but it tries to unlock the mutex. (davidxu, 2004-01-17, 1 file, -18/+3)
  The old code confused some programs by returning EINVAL.
  Noticed by: bland
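For illustration, the small program below demonstrates the behavior this commit describes. It uses an error-checking mutex so the EPERM result is guaranteed by POSIX; with default mutexes the result is implementation-specific, which is what the commit tightens up inside libpthread.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t m;

    static void *
    locker(void *arg)
    {
        (void)arg;
        /* Become the owner, then exit still holding the lock
         * (purely for demonstration). */
        pthread_mutex_lock(&m);
        return (NULL);
    }

    int
    main(void)
    {
        pthread_mutexattr_t attr;
        pthread_t t;
        int error;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m, &attr);

        pthread_create(&t, NULL, locker, NULL);
        pthread_join(t, NULL);

        /* We are not the owner, so the unlock is rejected with EPERM. */
        error = pthread_mutex_unlock(&m);
        printf("unlock by non-owner: %s\n", strerror(error));
        return (0);
    }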
* Add a simple work-around for deadlocking on recursive read locks on a rwlock while there are writers waiting. (deischen, 2004-01-08, 3 files, -48/+86)
  We normally favor writers but when a reader already has at least one other read lock, we favor the reader. We don't track all the rwlocks owned by a thread, nor all the threads that own a rwlock -- we just keep a count of all the read locks owned by a thread.
  PR: 24641
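A minimal sketch of the described logic, using hypothetical structures and field names (rdlock_count, blocked_writers, state) rather than the library's real ones.

    #include <errno.h>

    /* Hypothetical stand-ins for the real libpthread structures. */
    struct sketch_thread {
        int rdlock_count;       /* read locks currently held by this thread */
    };

    struct sketch_rwlock {
        int state;              /* >0: number of readers, -1: writer, 0: free */
        int blocked_writers;    /* writers currently waiting */
    };

    static int
    sketch_rdlock(struct sketch_rwlock *rwl, struct sketch_thread *curthread)
    {
        /*
         * Normally a waiting writer blocks new readers.  But if this thread
         * already holds at least one read lock, a writer may be queued
         * behind us; blocking here would deadlock, so favor the reader and
         * grant the recursive read lock anyway.
         */
        if (rwl->state >= 0 &&
            (rwl->blocked_writers == 0 || curthread->rdlock_count > 0)) {
            rwl->state++;
            curthread->rdlock_count++;
            return (0);
        }
        return (EBUSY);         /* the real code would block and retry */
    }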
* Kernel now supports per-thread sigaltstack; follow the change to enable sigaltstack for system scope threads. (davidxu, 2004-01-03, 1 file, -6/+1)
* Return error code in errno, not in return value. (davidxu, 2004-01-02, 1 file, -3/+6)
* Fix a typo. (davidxu, 2004-01-02, 1 file, -1/+1)
* Forgot to commit this file for last commit. :( (davidxu, 2003-12-29, 1 file, -0/+4)
* Implement sigaltstack() as per-thread. (davidxu, 2003-12-29, 4 files, -24/+231)
  Currently only process scope threads are supported; for system scope threads, the kernel signal bits need to be changed.
  Reviewed by: deischen
  Tested on: i386 amd64 ia64
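As a user-level illustration of what per-thread sigaltstack enables (plain POSIX calls, not the library internals), each thread can install its own alternate stack and have SA_ONSTACK handlers delivered on it:

    #include <pthread.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void
    handler(int sig)
    {
        static const char msg[] = "handler on this thread's altstack\n";

        (void)sig;
        /* With SA_ONSTACK this runs on the calling thread's own altstack. */
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }

    static void *
    worker(void *arg)
    {
        stack_t ss;
        struct sigaction sa;

        (void)arg;
        /* Each thread installs its own alternate signal stack. */
        ss.ss_sp = malloc(SIGSTKSZ);
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        sigaltstack(&ss, NULL);

        sa.sa_handler = handler;
        sa.sa_flags = SA_ONSTACK;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        pthread_kill(pthread_self(), SIGUSR1);

        ss.ss_flags = SS_DISABLE;       /* detach the altstack before freeing it */
        sigaltstack(&ss, NULL);
        free(ss.ss_sp);
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return (0);
    }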
* Correctly retrieve sigaction flags. (davidxu, 2003-12-28, 1 file, -2/+2)
* Replace a comment with a more accurate one; the memory heap is now protected by the new fork() wrapper. (davidxu, 2003-12-19, 1 file, -3/+2)
* Code cleanup: remove unused macros and function prototypes. (davidxu, 2003-12-19, 1 file, -18/+0)
* accept() returns a file descriptor when it succeeds, which is very likely to be non-zero. (deischen, 2003-12-09, 2 files, -2/+2)
  When leaving the cancellation point, check the return value against -1 to see if cancellation should be checked. While I'm here, make the same change to connect() just to be consistent.
  Pointed out by: davidxu
* Remove an unused struct definition. (deischen, 2003-12-09, 1 file, -12/+0)
* Add cancellation points for accept() and connect(). (deischen, 2003-12-09, 4 files, -0/+102)
* Use a mutex instead of a low-level thread lock to implement spinlock; this avoids a signal being blocked when it could otherwise be handled. (davidxu, 2003-12-09, 1 file, -30/+18)
* Rename _thr_enter_cancellation_point to _thr_cancel_enter and _thr_leave_cancellation_point to _thr_cancel_leave. (davidxu, 2003-12-09, 30 files, -136/+148)
  Add a parameter to _thr_cancel_leave to indicate whether the cancellation point should be checked; this gives us an option to not check the cancellation point if a syscall returns successfully, to avoid any leaks. Currently creat(), open() and fcntl(F_DUPFD) do not check the cancellation point after they return successfully. Replace some members in structure kse with bit flags to save some memory. Conditionally compile THR_ASSERT to nothing if _PTHREAD_INVARIANTS is not defined. Inline some small functions in thr_cancel.c. Use __predict_false in thr_kern.c for some code executed only once.
  Reviewed by: deischen
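A sketch of the resulting wrapper pattern, modeled on the description above. The internal helper names (_get_curthread, _thr_cancel_enter, _thr_cancel_leave, __sys_open) follow the commit message, but the exact signatures here are assumed, not taken from the source.

    #include <fcntl.h>
    #include <stdarg.h>

    int
    __open(const char *path, int flags, ...)
    {
        struct pthread *curthread = _get_curthread();
        va_list ap;
        int mode = 0;
        int ret;

        /* Grab the creation mode if the file is being created. */
        if (flags & O_CREAT) {
            va_start(ap, flags);
            mode = va_arg(ap, int);
            va_end(ap);
        }

        _thr_cancel_enter(curthread);
        ret = __sys_open(path, flags, mode);
        /*
         * Only act on a pending cancel if the call failed, so a
         * successfully opened descriptor cannot be leaked by
         * honoring cancellation at this point.
         */
        _thr_cancel_leave(curthread, ret == -1);

        return (ret);
    }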
* More reliably check timeout for pthread_mutex_timedlock. (davidxu, 2003-12-09, 1 file, -1/+1)
* Go back to using rev 1.18, where thread locks are used instead of KSE locks for the [libc] spinlock implementation. (deischen, 2003-12-08, 1 file, -17/+16)
  This was previously backed out because it exposed a bug in the ia64 implementation.
  OK'd by: marcel
* 1. Optimize the KSE_LOCK_ACQUIRE and THR_LOCK_ACQUIRE macros to use static fall-through branch prediction as suggested in the Intel IA-32 optimization guide. (davidxu, 2003-11-29, 2 files, -99/+31)
  2. Allocate the siginfo array separately to avoid the pthread structure being allocated at a 2K boundary, which hits an L1 address alias problem and slows context switches down.
  3. Simplify the context switch code by removing redundant code; code size is reduced, so it is expected to run faster.
  Reviewed by: deischen
  Approved by: re (scottl)
* Remove a surplus mmap() call for the stack guard page in init_private; it is done in init_main_thread. (davidxu, 2003-11-29, 1 file, -25/+0)
  Also don't initialize the lock and lockuser again for the initial thread; that is already done by _thr_alloc().
  Reviewed by: deischen
  Approved by: re (scottl)
* Back out last change and go back to using KSE locks instead of thread locks until we know why this breaks ia64. (deischen, 2003-11-16, 1 file, -16/+17)
  Reported by: marcel
* If a thread in a critical region gets a synchronous signal, then according to the current signal handling mode there is no chance to handle the signal. (davidxu, 2003-11-09, 1 file, -0/+2)
  Something must be wrong in the library, so just call kse_thr_interrupt to dump its core. I have had this code for a long time, but forgot to commit it.
* Use a THR lock instead of a KSE lock to avoid the scheduler being blocked in a spinlock. (davidxu, 2003-11-08, 1 file, -17/+16)
  Reviewed by: deischen
* style(9) (deischen, 2003-11-05, 1 file, -40/+53)
  Reviewed by: bde
* Don't declare the malloc lock; use the declaration provided in libc. (deischen, 2003-11-05, 1 file, -1/+6)
  Noticed by: bde
* Add pthread_atfork() source code. Dan forgot to commit this file. (davidxu, 2003-11-05, 1 file, -0/+56)
* Add an implementation for pthread_atfork(). (deischen, 2003-11-04, 5 files, -6/+71)
  Aside from the POSIX requirements for pthread_atfork(), when fork()ing, take the malloc lock to keep malloc state consistent in the child.
  Reviewed by: davidxu
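A small user-level illustration of the pattern described here (not the library code): pthread_atfork() handlers acquire a lock before fork() and release it in both parent and child, so the state the lock protects is consistent in the child even if another thread held it at fork time.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

    /* prepare runs before fork(); parent/child run after, in each process. */
    static void prepare(void) { pthread_mutex_lock(&state_lock); }
    static void parent(void)  { pthread_mutex_unlock(&state_lock); }
    static void child(void)   { pthread_mutex_unlock(&state_lock); }

    int
    main(void)
    {
        pid_t pid;

        pthread_atfork(prepare, parent, child);
        pid = fork();
        if (pid == 0) {
            /* The protected state is consistent and usable here. */
            printf("child: lock and protected state are consistent\n");
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        return (0);
    }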
* Add the ability to reinitialize a spinlock (libc/libpthread internal lock, not a pthread spinlock). (deischen, 2003-11-04, 1 file, -12/+17)
  Reviewed by: davidxu
* s/foo()/foo(void)/ (deischen, 2003-11-04, 1 file, -2/+3)
  Add a blank line after a variable declaration.
* Libpthread uses the convention that all of its (non-weak) symbols begin with underscores and provide weak definitions without underscores. (deischen, 2003-11-04, 1 file, -5/+11)
  Make the pthread spinlock conform to this convention.
* Add the ability to reinitialize a mutex (internally, not a userland API). (deischen, 2003-11-04, 1 file, -7/+20)
  Reviewed by: davidxu
* Fix some comments for last commit. (davidxu, 2003-10-08, 1 file, -5/+4)
* Complete cancellation support for M:N threads. (davidxu, 2003-10-08, 2 files, -69/+157)
  Check the cancelling flag when the thread state is changed from RUNNING to WAIT and do some cancellation operations for every cancellable state.
  Reviewed by: deischen
* Use thread lock instead of scheduler lock to eliminate lock contention for all wrapped syscalls under SMP. (davidxu, 2003-10-08, 1 file, -18/+21)
  Reviewed by: deischen
* When the concurrency level is reduced and a kse is exiting, make sure no other threads are still referencing the kse by migrating them to the initial kse. (davidxu, 2003-09-29, 1 file, -0/+13)
  Reviewed by: deischen
* Remove unused variable. (davidxu, 2003-09-28, 1 file, -2/+0)
* pthread API should return error code in return value, not in errno. (davidxu, 2003-09-25, 1 file, -2/+2)
  Reviewed by: deischen
* If syscall failed, restore old sigaction and return error to thread. (davidxu, 2003-09-25, 1 file, -11/+19)
* As the comments in _mutex_lock_backout state, only the current thread can clear the pointer to the mutex, not the thread doing the mutex handoff. (davidxu, 2003-09-24, 1 file, -6/+4)
  Because _mutex_lock_backout does not hold the scheduler lock while testing THR_FLAGS_IN_SYNCQ and then reading the mutex pointer, it is possible that the mutex owner begins to unlock and hand off the mutex to the current thread, and the mutex pointer is cleared to NULL before the current thread reads it, so the current thread ends up dereferencing a NULL pointer. Fix the race by making mutex waiters clear their own mutex pointers. While I am here, also save the inherited priority in the mutex for PTHREAD_PRIO_INHERIT mutexes in mutex_trylock_common, just like what we did in mutex_lock_common.
* Free thread name memory if there is any. (davidxu, 2003-09-23, 1 file, -0/+4)
* Save and restore the timeout field for the signal frame just like what we did for the interrupted field. (davidxu, 2003-09-22, 2 files, -1/+4)
  Also, in _thr_sig_handler, retrieve the current signal mask from the kernel, not from ucp; the latter is the pre-unioned mask, not the current signal mask.
* Print waitset correctly. (davidxu, 2003-09-22, 1 file, -1/+1)
* Make KSE_STACKSIZE machine dependent by moving it from thr_kern.c to pthread_md.h. (marcel, 2003-09-19, 1 file, -2/+0)
  This commit only moves the definition; it does not change it for any of the platforms. This more easily allows 64-bit architectures (in particular) to pick a slightly larger stack size.
* pthread api should return error code in return value, not in errno. (davidxu, 2003-09-18, 1 file, -2/+1)
* Fix a typo. Also turn on PTHREAD_SCOPE_SYSTEM after fork(). (davidxu, 2003-09-16, 1 file, -1/+2)
* Fix a bogus comment and assign sigmask in a critical region; use SIG_CANTMASK to remove unmaskable signals from the mask. (davidxu, 2003-09-15, 1 file, -2/+4)
* Fix a bogus comment: sigmask must be maintained correctly; it will be inherited in pthread_create. (davidxu, 2003-09-15, 1 file, -1/+1)
* 1. Allocate and free lock-related resources in _thr_alloc and _thr_free to avoid a potential memory leak. (davidxu, 2003-09-14, 4 files, -94/+87)
  Also fix a bug in pthread_create: the contention scope should be inherited when PTHREAD_INHERIT_SCHED is set, and check the right field for PTHREAD_INHERIT_SCHED; the scheduling inherit flag is in sched_inherit.
  2. Execute hooks registered by atexit() on the thread stack, not on the scheduler stack.
  3. Simplify some code in _kse_single_thread by calling xxx_destroy functions.
  Reviewed by: deischen