path: root/lib/libthr/thread
Commit log (each entry ends with author, date, files changed, and -deleted/+added line counts)
* Add POSIX pthread API pthread_getcpuclockid() to get a thread's CPU-time
  clock id.  (davidxu, 2008-03-22, 2 files, -0/+48)
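  The function added here is the standard POSIX one, so a minimal sketch also
  builds against glibc: obtain the calling thread's CPU-time clock id, then
  read it with clock_gettime(). The loop bound is arbitrary, just enough to
  accumulate measurable CPU time.

  ```c
  #include <assert.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <time.h>

  int main(void) {
      clockid_t cid;
      struct timespec ts;
      int rc;

      /* Get the CPU-time clock id of the calling thread. */
      rc = pthread_getcpuclockid(pthread_self(), &cid);
      assert(rc == 0);

      /* Burn a little CPU so the clock has something to report. */
      volatile unsigned long sink = 0;
      for (unsigned long i = 0; i < 1000000UL; i++)
          sink += i;

      /* Read the per-thread CPU-time clock. */
      rc = clock_gettime(cid, &ts);
      assert(rc == 0);
      printf("thread cpu time: %ld.%09ld s\n", (long)ts.tv_sec, (long)ts.tv_nsec);
      printf("cpu clock: OK\n");
      return 0;
  }
  ```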
* Resolve __error()'s PLT entry early so that it need not be resolved again;
  otherwise the rwlock is recursively acquired when a signal arrives and
  __error was never resolved before.  (davidxu, 2008-03-21, 1 file, -0/+3)
* pthread_mutexattr_destroy() was accidentally broken in the last revision;
  unbreak it. We should really start compiling this with warnings.
  (ru, 2008-03-20, 1 file, -0/+1)
* Preserve application code's errno in the rtld locking code; it attempts to
  keep every case safe.  (davidxu, 2008-03-20, 1 file, -2/+31)
* Make pthread_mutexattr_settype() return the error number directly,
  conforming to the POSIX specification.
  Bug reported by: modelnine at modelnine dt org
  (davidxu, 2008-03-20, 1 file, -3/+1)
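  Returning the error number directly matches POSIX, which requires
  pthread_mutexattr_settype() to fail with a returned EINVAL for an invalid
  type rather than setting errno. A small check (the bogus type value is an
  arbitrary out-of-range constant):

  ```c
  #define _GNU_SOURCE
  #include <assert.h>
  #include <errno.h>
  #include <pthread.h>
  #include <stdio.h>

  int main(void) {
      pthread_mutexattr_t attr;
      int rc;

      rc = pthread_mutexattr_init(&attr);
      assert(rc == 0);

      /* An invalid type is reported as a returned EINVAL, per POSIX. */
      rc = pthread_mutexattr_settype(&attr, 0x7fffffff);
      assert(rc == EINVAL);

      /* A valid type returns 0. */
      rc = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
      assert(rc == 0);

      pthread_mutexattr_destroy(&attr);
      printf("settype: OK\n");
      return 0;
  }
  ```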
* Don't reduce the new thread's refcount if the current thread cannot set a
  cpuset for it, since the new thread will reduce it by itself.
  (davidxu, 2008-03-19, 1 file, -1/+1)
* - Trim trailing spaces.
  - Use a different sigmask variable name to avoid confusion.
  (davidxu, 2008-03-19, 1 file, -8/+8)
* If the passed thread pointer equals the current thread, pass -1 to the
  kernel to speed up the lookup.  (davidxu, 2008-03-19, 1 file, -11/+19)
* - Copy the signal mask out before THR_UNLOCK(), because THR_UNLOCK() may
    call _thr_suspend_check(), which clobbers the sigmask saved in the thread
    structure.
  - Don't suspend a thread that has force_exit set.
  - In pthread_exit(), if a suspension flag is set, wake up the waiting
    thread after setting PS_DEAD; this causes the waiting thread to break out
    of the loop in suspend_common().
  (davidxu, 2008-03-18, 3 files, -4/+17)
* Actually delete the SIGCANCEL mask for a suspended thread, so the signal
  will not be masked when it is resumed.  (davidxu, 2008-03-16, 1 file, -3/+2)
* If a thread is cancelled, it may have already consumed a umtx_wake; check
  the waiter and semaphore counters to see if we should wake up the next
  thread.  (davidxu, 2008-03-11, 1 file, -0/+2)
* Fix a bug when calculating the remnant size.
  (davidxu, 2008-03-06, 1 file, -1/+1)
* Don't report a death event to the debugger if it is a forced exit.
  (davidxu, 2008-03-06, 1 file, -1/+1)
* Restore the code that sets a new thread's scheduler parameters. I was
  worried about possible starvation, but because we have already locked the
  thread, the cpuset settings are always applied before the new thread does
  real work.  (davidxu, 2008-03-06, 1 file, -15/+11)
* Increment and decrement in_sigcancel_handler accordingly to avoid possible
  errors caused by nested SIGCANCEL frames; it is a bit complex.
  (davidxu, 2008-03-05, 1 file, -2/+2)
* Use the cpuset defined in pthread_attr for newly created threads. For now
  we set scheduling parameters and CPU binding entirely in userland, and
  because the default scheduling policy is SCHED_RR (time-sharing), we set
  the default sched_inherit to PTHREAD_SCHED_INHERIT; this saves a system
  call.  (davidxu, 2008-03-05, 3 files, -21/+57)
* Check the actual cpuset size the kernel is using and define the underscore
  version of the API.  (davidxu, 2008-03-05, 1 file, -7/+42)
* A newly created thread inherits the current thread's signal mask; however,
  if the current thread is executing a cancellation handler, SIGCANCEL may
  already be blocked, which is unexpected. Unblock the signal in the new
  thread if this happens.
  MFC after: 1 week
  (davidxu, 2008-03-04, 3 files, -1/+24)
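  The inheritance rule this commit works around is standard POSIX behavior: a
  new thread starts with its creator's signal mask. A sketch demonstrating it
  with SIGUSR1 in place of the internal SIGCANCEL:

  ```c
  #include <assert.h>
  #include <pthread.h>
  #include <signal.h>
  #include <stdio.h>

  /* The child thread checks that it inherited the creator's blocked mask. */
  static void *check_mask(void *arg) {
      sigset_t cur;
      (void)arg;
      /* With a NULL set, pthread_sigmask() just queries the current mask. */
      int rc = pthread_sigmask(SIG_BLOCK, NULL, &cur);
      assert(rc == 0);
      assert(sigismember(&cur, SIGUSR1) == 1);
      return NULL;
  }

  int main(void) {
      sigset_t set;
      pthread_t t;
      int rc;

      sigemptyset(&set);
      sigaddset(&set, SIGUSR1);
      rc = pthread_sigmask(SIG_BLOCK, &set, NULL);  /* block SIGUSR1 here */
      assert(rc == 0);

      rc = pthread_create(&t, NULL, check_mask, NULL);
      assert(rc == 0);
      rc = pthread_join(t, NULL);
      assert(rc == 0);

      printf("sigmask inherit: OK\n");
      return 0;
  }
  ```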
* Include cpuset.h; unbreak the build.  (davidxu, 2008-03-04, 1 file, -0/+2)
* Implement pthread_attr_getaffinity_np() and pthread_attr_setaffinity_np().
  (davidxu, 2008-03-04, 4 files, -3/+68)
* Implement pthread_getaffinity_np() and pthread_setaffinity_np() to get and
  set a thread's CPU affinity mask.  (davidxu, 2008-03-03, 2 files, -0/+75)
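  Note that the FreeBSD versions added here take a cpuset_t (via
  pthread_np.h), while the glibc equivalents of the same _np functions take a
  cpu_set_t. A minimal sketch in the glibc spelling, pinning the calling
  thread to CPU 0 and reading the mask back:

  ```c
  #define _GNU_SOURCE
  #include <assert.h>
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  int main(void) {
      cpu_set_t set;
      int rc;

      /* Pin the calling thread to CPU 0. */
      CPU_ZERO(&set);
      CPU_SET(0, &set);
      rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      assert(rc == 0);

      /* Read the mask back and confirm CPU 0 is in it. */
      CPU_ZERO(&set);
      rc = pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
      assert(rc == 0);
      assert(CPU_ISSET(0, &set));

      printf("affinity: OK\n");
      return 0;
  }
  ```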
* _pthread_mutex_isowned_np(): use a more reliable method; the current code
  will work in simple cases, but may fail in more complicated ones.
  Reviewed by: davidxu
  (des, 2008-02-14, 1 file, -1/+1)
* Remove an unnecessary prototype.  (des, 2008-02-06, 1 file, -1/+0)
* Per discussion on -threads, rename _islocked_np() to _isowned_np().
  (des, 2008-02-06, 1 file, -3/+3)
* After careful consideration (and a brief discussion with attilio@), change
  the semantics of pthread_mutex_islocked_np() to return true if and only if
  the mutex is held by the current thread. Obviously, change the regression
  test to match.
  MFC after: 2 weeks
  (des, 2008-02-04, 1 file, -1/+1)
* Add pthread_mutex_islocked_np(), a cheap way to verify that a mutex is
  locked. This is intended primarily to support the userland equivalent of
  the various *_ASSERT_LOCKED() macros we have in the kernel.
  MFC after: 2 weeks
  (des, 2008-02-03, 1 file, -0/+16)
* sem_post() is required to return -1 on error.
  (davidxu, 2008-01-07, 1 file, -2/+2)
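  The error convention being fixed is the POSIX semaphore one: sem_post()
  returns 0 on success and -1 with errno set on failure, unlike the pthread
  functions, which return the error number directly. A basic sanity sketch of
  the success path:

  ```c
  #include <assert.h>
  #include <semaphore.h>
  #include <stdio.h>

  int main(void) {
      sem_t s;
      int rc, val;

      rc = sem_init(&s, 0, 0);     /* unnamed, process-private, count 0 */
      assert(rc == 0);
      rc = sem_post(&s);           /* count 0 -> 1 */
      assert(rc == 0);
      rc = sem_wait(&s);           /* count 1 -> 0, does not block */
      assert(rc == 0);
      rc = sem_getvalue(&s, &val);
      assert(rc == 0);
      assert(val == 0);
      sem_destroy(&s);

      printf("sem: OK\n");
      return 0;
  }
  ```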
* Call the underscore version of pthread_cleanup_pop() instead.
  (davidxu, 2007-12-20, 1 file, -1/+1)
* Remove the vfork() overloading; it is no longer needed.
  (davidxu, 2007-12-20, 1 file, -9/+0)
* Add function prototypes.  (davidxu, 2007-12-17, 1 file, -1/+7)
* 1. Add pthread_mutex_setspinloops_np() to tune a mutex's spin loop count.
  2. Add pthread_mutex_setyieldloops_np() to tune a mutex's yield loop count.
  3. Make the environment variables PTHREAD_SPINLOOPS and PTHREAD_YIELDLOOPS
     apply only to PTHREAD_MUTEX_ADAPTIVE_NP mutexes.
  (davidxu, 2007-12-14, 2 files, -29/+106)
* Enclose the body of the ENQUEUE_MUTEX macro in a do/while statement, and
  add missing braces.
  MFC after: 1 day
  (davidxu, 2007-12-11, 1 file, -5/+7)
* Fix pointer dereferencing problems in _pthread_mutex_init_calloc_cb() that
  were obscured by pseudo-opaque pthreads API pointer casting.
  (jasone, 2007-11-28, 1 file, -7/+3)
* Add _pthread_mutex_init_calloc_cb() to libthr and libkse, so that malloc(3)
  (part of libc) can use pthreads mutexes without causing infinite recursion
  during initialization.  (jasone, 2007-11-27, 1 file, -6/+27)
* Simplify the code; fix a thread cancellation bug in sem_wait() and
  sem_timedwait().  (davidxu, 2007-11-23, 1 file, -21/+15)
* Reuse the nwaiter member field to record the number of waiters; in
  sem_post(), this should reduce the chance of having to make a syscall when
  the semaphore has no waiters.  (davidxu, 2007-11-21, 1 file, -7/+31)
* Convert the ceiling type to an unsigned integer before comparing; fix
  compiler warnings.  (davidxu, 2007-11-21, 1 file, -3/+3)
* Add some function prototypes.  (davidxu, 2007-11-21, 1 file, -0/+5)
* Remove the umtx_t definition and use type long directly; add a wrapper
  function _thr_umtx_wait_uint() for the umtx operation UMTX_OP_WAIT_UINT and
  use it in the semaphore operations. This fixes compiler warnings.
  (davidxu, 2007-11-21, 7 files, -18/+31)
* In _pthread_key_create(), ensure that libthr is initialized. This fixes a
  NULL dereference of curthread when libstdc++ initializes the
  exception-handling globals on archs where we can't yet use GNU TLS due to
  lack of support in binutils 2.15 (i.e. arm and sparc64), thus making
  threaded C++ programs compiled with GCC 4.2.1 work again on these archs.
  Reviewed by: davidxu
  MFC after: 3 days
  (marius, 2007-11-06, 1 file, -1/+5)
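  Implicit initialization matters because pthread_key_create() can be the
  very first pthread call a process makes (e.g. from C++ runtime startup
  code). A minimal sketch of the thread-specific-data API at issue:

  ```c
  #include <assert.h>
  #include <pthread.h>
  #include <stdio.h>

  int main(void) {
      pthread_key_t key;
      int value = 42;
      int rc;

      /* Potentially the first libthr entry point hit in the process. */
      rc = pthread_key_create(&key, NULL);  /* no destructor */
      assert(rc == 0);

      rc = pthread_setspecific(key, &value);
      assert(rc == 0);
      assert(pthread_getspecific(key) == &value);

      pthread_key_delete(key);
      printf("tsd: OK\n");
      return 0;
  }
  ```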
* Avoid adaptive spinning for priority-protected mutexes; the current
  implementation always locks in the kernel.
  (davidxu, 2007-10-31, 1 file, -2/+5)
* Don't do adaptive spinning when running on a UP kernel.
  (davidxu, 2007-10-31, 1 file, -3/+5)
* Restore revision 1.55, kris's adaptive mutex type.
  (davidxu, 2007-10-31, 1 file, -14/+36)
* Adaptive mutexes should have the same deadlock detection properties that
  default (errorcheck) mutexes do.
  Noticed by: davidxu
  (kris, 2007-10-30, 1 file, -0/+1)
* Add my recent work on adaptive spin mutex code. Two environment variables
  tune pthread mutex performance:

  1. LIBPTHREAD_SPINLOOPS
     If a pthread mutex is being held by another thread, this sets the total
     number of spin loops before the current thread sleeps in the kernel;
     this saves syscall overhead if the mutex will be unlocked very soon
     (well-written application code).

  2. LIBPTHREAD_YIELDLOOPS
     If a pthread mutex is being held by other threads, this sets the total
     number of sched_yield() loops before the current thread sleeps in the
     kernel. If a pthread mutex is locked, the current thread gives up the
     CPU but does not sleep in the kernel. This means the current thread does
     not set the contention bit in the mutex, but lets the lock owner run
     again if the owner is on the kernel's run queue; when the lock owner
     unlocks the mutex, it does not need to enter the kernel and do lots of
     work to resume the mutex waiters. In some cases, this saves a lot of
     syscall overhead for the mutex owner. In my experience,
     LIBPTHREAD_YIELDLOOPS can sometimes improve performance far more than
     LIBPTHREAD_SPINLOOPS, depending on the application.

  These two variables are global to all pthread mutexes; there is no
  interface to set them per mutex. The default values are zero, meaning
  spinning is turned off by default.
  (davidxu, 2007-10-30, 3 files, -47/+50)
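  Because the knobs are read from the environment at startup, tuning requires
  no code changes. A hypothetical invocation (the program name and loop
  counts are illustrative, not recommendations):

  ```shell
  # Spin up to 200 times, then sched_yield() up to 10 times, before sleeping
  # in the kernel on a contested mutex. Applies to every mutex in the process.
  LIBPTHREAD_SPINLOOPS=200 LIBPTHREAD_YIELDLOOPS=10 ./my_threaded_app
  ```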
* Add a new "non-portable" mutex type, PTHREAD_MUTEX_ADAPTIVE_NP. This is
  also implemented in glibc and is used by a number of existing applications
  (mysql, firefox, etc.).

  This mutex type is a default mutex with the additional property that it
  spins briefly when attempting to acquire a contested lock, doing trylock
  operations in userland before entering the kernel to block if eventually
  unsuccessful. The expectation is that applications requesting this mutex
  type know that the mutex is likely to be held only for very brief periods,
  so it is faster to spin in userland and probably succeed in acquiring the
  mutex than to enter the kernel and sleep, only to be woken up almost
  immediately. This can help significantly in certain cases when pthread
  mutexes are heavily contended and held for brief durations (such as mysql).

  Spin up to 200 times before entering the kernel, which represents only a
  few us on modern CPUs. No performance degradation was observed with this
  value, and it is sufficient to avoid a large performance drop in mysql
  performance in the heavily contended pthread mutex case.

  The libkse implementation is a NOP.
  Reviewed by: jeff
  MFC after: 3 days
  (kris, 2007-10-29, 1 file, -0/+29)
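  Since the commit message notes the same type exists in glibc, this sketch
  builds on either system (with _GNU_SOURCE on glibc). Selection goes through
  the standard mutexattr interface; uncontended lock/unlock behaves like a
  default mutex:

  ```c
  #define _GNU_SOURCE
  #include <assert.h>
  #include <pthread.h>
  #include <stdio.h>

  int main(void) {
      pthread_mutexattr_t attr;
      pthread_mutex_t m;
      int rc;

      rc = pthread_mutexattr_init(&attr);
      assert(rc == 0);
      /* Request the spin-then-sleep behavior on contested acquisition. */
      rc = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
      assert(rc == 0);

      rc = pthread_mutex_init(&m, &attr);
      assert(rc == 0);
      rc = pthread_mutex_lock(&m);
      assert(rc == 0);
      rc = pthread_mutex_unlock(&m);
      assert(rc == 0);

      pthread_mutex_destroy(&m);
      pthread_mutexattr_destroy(&attr);
      printf("adaptive: OK\n");
      return 0;
  }
  ```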
* Use the THR_CLEANUP_PUSH/POP macros; they are cheaper than
  pthread_cleanup_push/pop.  (davidxu, 2007-10-16, 1 file, -2/+4)
* Reverse the logic of UP and SMP.
  Submitted by: jasone
  (davidxu, 2007-10-16, 1 file, -1/+1)
* Output the error message to STDERR_FILENO.
  Approved by: re (bmah)
  (davidxu, 2007-08-07, 1 file, -1/+1)
* Back out the experimental adaptive spinning mutex for production use.
  (davidxu, 2007-05-09, 3 files, -9/+0)