path: root/lib/libthr/thread/thr_init.c
Commit message  [Author, Date, Files changed, Lines -/+]
* implement pthread_attr_getaffinity_np and pthread_attr_setaffinity_np.
  [davidxu, 2008-03-04, 1 file, -1/+3]
* Add my recent work of adaptive spin mutex code. Use two environment
  variables to tune pthread mutex performance:
  1. LIBPTHREAD_SPINLOOPS
     If a pthread mutex is being locked by another thread, this environment
     variable sets the total number of spin loops before the current thread
     sleeps in the kernel. This saves a syscall overhead if the mutex will be
     unlocked very soon (well-written application code).
  2. LIBPTHREAD_YIELDLOOPS
     If a pthread mutex is being locked by other threads, this environment
     variable sets the total number of sched_yield() loops before the current
     thread sleeps in the kernel. If a pthread mutex is locked, the current
     thread gives up the CPU but does not sleep in the kernel. This means the
     current thread does not set the contention bit in the mutex, but lets
     the lock owner run again if the owner is on the kernel's run queue; when
     the lock owner unlocks the mutex, it does not need to enter the kernel
     and do lots of work to resume the mutex waiters. In some cases this
     saves a lot of syscall overhead for the mutex owner.
  In my practice, LIBPTHREAD_YIELDLOOPS can sometimes improve performance far
  more than LIBPTHREAD_SPINLOOPS; this depends on the application. These two
  environment variables are global to all pthread mutexes; there is no
  interface to set them per mutex. The default values are zero, which means
  spinning is turned off by default.
  [davidxu, 2007-10-30, 1 file, -1/+9]
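  The spin -> yield -> kernel-sleep progression these two knobs control can
  be illustrated with a small standalone sketch. This is not the libthr
  implementation: the lock word, the helper names, and the final blocking
  step (a umtx sleep in the real library, a crude usleep() here) are
  simplified assumptions.

    #include <sched.h>
    #include <stdatomic.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int spinloops;   /* LIBPTHREAD_SPINLOOPS */
    static int yieldloops;  /* LIBPTHREAD_YIELDLOOPS */

    /* Read the two tuning variables once, at startup. */
    static void
    tunables_init(void)
    {
        char *env;

        if ((env = getenv("LIBPTHREAD_SPINLOOPS")) != NULL)
            spinloops = atoi(env);
        if ((env = getenv("LIBPTHREAD_YIELDLOOPS")) != NULL)
            yieldloops = atoi(env);
    }

    static int
    try_lock(atomic_int *lockword)
    {
        int expected = 0;

        return (atomic_compare_exchange_strong(lockword, &expected, 1));
    }

    static void
    lock_sketch(atomic_int *lockword)
    {
        int i;

        /* 1. Busy-spin: pays off if the owner unlocks very soon. */
        for (i = 0; i < spinloops; i++)
            if (try_lock(lockword))
                return;
        /* 2. Yield the CPU without marking the mutex contended. */
        for (i = 0; i < yieldloops; i++) {
            if (try_lock(lockword))
                return;
            sched_yield();
        }
        /* 3. Give up; the real library sleeps on a umtx in the kernel. */
        while (!try_lock(lockword))
            usleep(1000);
    }

    int
    main(void)
    {
        static atomic_int lockword;

        tunables_init();
        lock_sketch(&lockword);
        atomic_store(&lockword, 0);     /* unlock */
        return (0);
    }

  Setting LIBPTHREAD_SPINLOOPS or LIBPTHREAD_YIELDLOOPS in the environment of
  a real libthr program is the actual tuning interface; the code above only
  mimics the effect of those values.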
* backout experimental adaptive spinning mutex for production use.
  [davidxu, 2007-05-09, 1 file, -7/+0]
* get LIBPTHREAD_ADAPTIVE_SPIN early, so it can be used for some global
  mutexes.
  [davidxu, 2006-12-20, 1 file, -2/+5]
* Check environment variable PTHREAD_ADAPTIVE_SPIN; if it is set, use it as
  the default spin cycle count.
  [davidxu, 2006-12-20, 1 file, -0/+4]
* - Remove variable _thr_scope_system; all threads are system scope.
  - Rename _thr_smp_cpus to the boolean variable _thr_is_smp.
  - Define a CPU_SPINWAIT macro for each arch; only x86 supports it.
  [davidxu, 2006-12-15, 1 file, -12/+4]
* Eliminate atomic operations in thread cancellation functions; this should
  reduce the overhead of cancellation points.
  [davidxu, 2006-11-24, 1 file, -1/+2]
* use rtprio_thread system call to get or set thread priority.
  [davidxu, 2006-09-21, 1 file, -2/+2]
* Replace internal usage of struct umtx with umutex, which can support
  real-time if we want; no functionality is changed.
  [davidxu, 2006-09-06, 1 file, -13/+13]
* Use umutex APIs to implement pthread_mutex. The member pp_mutexq is added
  to the pthread structure to keep track of locked PTHREAD_PRIO_PROTECT
  mutexes. No real mutex code is changed; the mutex locking and unlocking
  code should have the same performance as before.
  [davidxu, 2006-08-28, 1 file, -0/+1]
* Get the number of CPUs and ignore the spin count on single-processor
  machines.
  [davidxu, 2006-08-08, 1 file, -0/+3]
* 1. Don't override the underscore versions of aio_suspend(), system(),
     wait(), waitpid() and usleep(); they are internal versions and should
     not be cancellation points.
  2. Make wait3() a cancellation point.
  3. Move raise() and pause() into the file thr_sig.c.
  4. Add the functions _sigsuspend, _sigwait, _sigtimedwait and
     _sigwaitinfo, and remove the SIGCANCEL bit from the wait set for those
     functions; the signal is used internally to implement thread
     cancellation.
  [davidxu, 2006-07-25, 1 file, -17/+17]
* Cache the scheduling policy and priority in userland; a critical but badly
  written application may change thread priority frequently under the
  SCHED_OTHER policy.
  [davidxu, 2006-07-13, 1 file, -0/+6]
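  The usage pattern this caching targets looks roughly like the sketch below:
  a thread repeatedly reading and re-setting its own SCHED_OTHER priority.
  The loop and values are made up; the point is only that, with the policy
  and priority cached in userland, such calls need not hit the kernel every
  time.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct sched_param param;
        int i, policy;

        /* A badly written app may do this in a hot path. */
        for (i = 0; i < 1000; i++) {
            pthread_getschedparam(pthread_self(), &policy, &param);
            pthread_setschedparam(pthread_self(), SCHED_OTHER, &param);
        }
        printf("policy %d, priority %d\n", policy, param.sched_priority);
        return (0);
    }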
* Use kernel facilities to support real-time scheduling.
  [davidxu, 2006-07-12, 1 file, -14/+4]
* - Use the same priority range returned by the kernel's
    sched_get_priority_min() and sched_get_priority_max() syscalls.
  - Remove unused fields from structure pthread_attr.
  [davidxu, 2006-04-27, 1 file, -5/+13]
* WARNS level 4 cleanup.
  [davidxu, 2006-04-04, 1 file, -25/+7]
* Remove the priority mutex code because it does not work correctly. To make
  it work, a turnstile-like mechanism to support priority propagation and
  other realtime scheduling options in the kernel would have to be available
  to userland mutexes; for the moment, I just want to keep libthr a simple
  and efficient thread library.
  Discussed with: deischen, julian
  [davidxu, 2006-03-27, 1 file, -2/+1]
* Set default contention scope to system.
  [davidxu, 2006-03-20, 1 file, -1/+1]
* Add some more pthread stubs so that librt can use them.
  The thread jump table has been resorted, so you need to keep libc,
  libpthread, and libthr in sync.
  Submitted by: xu
  [deischen, 2006-03-05, 1 file, -4/+35]
* Rework the last change to pthread_once: create a function _thr_once_init
  to reinitialize its internal locks.
  [davidxu, 2006-02-15, 1 file, -2/+1]
* After fork(), reinitialize internal locks for pthread_once().
  [davidxu, 2006-02-15, 1 file, -0/+2]
* The thread name is now stored in the kernel; userland no longer has to
  keep it.
  [davidxu, 2006-02-05, 1 file, -2/+1]
* Use the macro STATIC_LIB_REQUIRE to declare that a symbol should be linked
  into a static binary.
  [davidxu, 2006-01-10, 1 file, -95/+68]
* 1. Retire the macro SCLASS; instead simply use the language keywords and
     put the variables in thr_init.c.
  2. Hide all global symbols which won't be exported.
  [davidxu, 2005-12-21, 1 file, -0/+52]
* Add code to handle timer_delete(). The timer wrapper code is completely
  rewritten; timers created with the same sigev_notify_attributes now run in
  the same thread, which lets the user choose which timers share a thread
  and so save some thread resources.
  [davidxu, 2005-11-01, 1 file, -0/+1]
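  What this means for applications, sketched below: two timers created with
  the same sigev_notify_attributes pointer should (per the commit message) be
  serviced by the same notification thread. The callback and timing values
  are made up, and this is not libthr internals.

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static void
    expired(union sigval sv)
    {
        /* Both timers should report the same servicing thread. */
        printf("timer %d fired in thread %p\n", sv.sival_int,
            (void *)pthread_self());
    }

    int
    main(void)
    {
        pthread_attr_t attr;            /* shared notification attributes */
        struct sigevent sev;
        struct itimerspec its = { .it_value = { .tv_sec = 1 } };
        timer_t t1, t2;

        pthread_attr_init(&attr);
        memset(&sev, 0, sizeof(sev));
        sev.sigev_notify = SIGEV_THREAD;
        sev.sigev_notify_function = expired;
        sev.sigev_notify_attributes = &attr;   /* same attr -> same thread */

        sev.sigev_value.sival_int = 1;
        timer_create(CLOCK_REALTIME, &sev, &t1);
        sev.sigev_value.sival_int = 2;
        timer_create(CLOCK_REALTIME, &sev, &t2);

        timer_settime(t1, 0, &its, NULL);
        timer_settime(t2, 0, &its, NULL);
        sleep(2);       /* wait for both expirations */
        return (0);
    }

  (Link with -lrt on systems where the POSIX timer functions live in librt.)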
* Conditionally report initial thread event.
  [davidxu, 2005-04-12, 1 file, -1/+2]
* Add debugger event reporting support; currently only TD_CREATE and
  TD_DEATH events are reported.
  [davidxu, 2005-04-12, 1 file, -0/+2]
* Remove unique id field which is no longer used by debugger.
  [davidxu, 2005-04-06, 1 file, -1/+0]
* Import my recent 1:1 threading work. Improved features include:
  1. Fast simple-type mutexes.
  2. __thread TLS works.
  3. Asynchronous cancellation works (using a signal).
  4. Thread synchronization is fully based on umtx; mainly, condition
     variables and other synchronization objects were rewritten to use umtx
     directly. Those objects can be shared between processes via shared
     memory, but that requires an ABI change which has not happened yet.
  5. The default stack size is increased to 1M on 32-bit platforms and 2M on
     64-bit platforms.
  As a result, some mysql super-smack benchmarks show massively improved
  performance.
  Okayed by: jeff, mtm, rwatson, scottl
  [davidxu, 2005-04-02, 1 file, -161/+214]
* Increase the default stack sizes:
                     32-bit   64-bit
      main thread     2 MB     4 MB
      other threads   1 MB     2 MB
  Approved by: mtm
  Adapted from: libpthread
  [marcus, 2005-03-06, 1 file, -6/+16]
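  Applications that don't want the library defaults can still choose a
  per-thread stack size themselves; a minimal sketch (the worker body and the
  512 kB figure are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static void *
    worker(void *arg)
    {
        (void)arg;
        return (NULL);
    }

    int
    main(void)
    {
        pthread_attr_t attr;
        pthread_t td;
        size_t stacksize;

        pthread_attr_init(&attr);
        pthread_attr_getstacksize(&attr, &stacksize);
        printf("default stack size: %zu bytes\n", stacksize);

        /* Override the library default for this one thread. */
        pthread_attr_setstacksize(&attr, 512 * 1024);
        pthread_create(&td, &attr, worker, NULL);
        pthread_join(td, NULL);
        pthread_attr_destroy(&attr);
        return (0);
    }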
* Don't include sys/user.h merely for its side effect of recursively
  including other headers.
  [das, 2004-11-27, 1 file, -1/+0]
* Implement pthread_atfork in libthr. This is mostly from deischen's work in
  libpthread.
  Submitted by: Dan Nelson <dnelson@allantgroup.com>
  [mtm, 2004-06-27, 1 file, -0/+4]
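  A hedged sketch of the usual pthread_atfork() pattern, in the same spirit
  as the reinitialize-after-fork commits elsewhere in this log: take the lock
  before fork(), release it in the parent, and release it again in the child,
  which inherits it held. The lock and handler names are invented.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hold the lock across fork() so the child never sees it half-held. */
    static void prepare(void) { pthread_mutex_lock(&lock); }
    static void parent(void)  { pthread_mutex_unlock(&lock); }
    static void child(void)   { pthread_mutex_unlock(&lock); }

    int
    main(void)
    {
        pid_t pid;

        pthread_atfork(prepare, parent, child);
        if ((pid = fork()) == 0) {
            pthread_mutex_lock(&lock);      /* safe: released in child() */
            pthread_mutex_unlock(&lock);
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("fork handlers ran\n");
        return (0);
    }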
* When the global thread list is being re-initialized after a fork, make
  sure that the current thread isn't detached and freed. As a consequence,
  the thread should be inserted at the head of the active list only once
  (in the beginning).
  [mtm, 2004-06-27, 1 file, -4/+4]
* Make libthr async-signal-safe without costly signal masking. The
  guidelines I followed are: only 3 functions (pthread_cancel,
  pthread_setcancelstate, pthread_setcanceltype) are required to be
  async-signal-safe by POSIX. None of the rest of the pthread API is
  required to be async-signal-safe. This means that only the three mentioned
  functions are safe to use from inside signal handlers. However, there are
  certain system/libc calls that are cancellation points that a caller may
  call from within a signal handler, and since they are cancellation points,
  calls have to be made into libthr to test for cancellation and exit the
  thread if necessary. So the cancellation test and thread exit code paths
  must be async-signal-safe as well. A summary of the changes follows:
  o Almost all of the code paths that masked signals as well as locking the
    pthread structure now lock only the pthread structure.
  o Signals are masked (and left that way) as soon as a thread enters
    pthread_exit().
  o The active and dead threads locks now explicitly require that signals
    are masked.
  o Access to the isdead field of the pthread structure is protected by both
    the active and dead list locks for writing. Either one is sufficient for
    reading.
  o The thread state and type fields have been combined into one three-state
    switch to make it easier to read without requiring a lock. It doesn't
    need a lock for writing (and therefore for reading either) because only
    the current thread can write to it and it is an integer value.
  o The thread state field of the pthread structure has been eliminated. It
    was an unnecessary field that mostly duplicated the flags field, but
    required additional locking that would make a lot more code paths
    require signal masking. Any truly unique values (such as PS_DEAD) have
    been reborn as separate members of the pthread structure.
  o Since the mutex and condvar pthread functions are not async-signal-safe,
    there is no need to muck about with the wait queues when handling a
    signal ...
  o ... which also removes the need for wrapping signal handlers and
    sigaction(2).
  o The condvar and mutex async-cancellation code had to be revised as a
    result of some of these changes, which resulted in semi-unrelated
    changes that would have been difficult to work on as a separate commit,
    so they are included as well.
  The only part of the changes I am worried about is related to locking for
  the pthread joining fields. But I will take a closer look at them once
  this mega-patch is committed.
  [mtm, 2004-05-20, 1 file, -18/+3]
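  Of the three functions named in the commit, pthread_cancel is the one most
  likely to be called from a signal handler in practice. A hedged sketch of
  such a use; the handler wiring, the target thread, and the timings are
  invented, and the async-signal-safety claim in the comment is the commit's:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_t worker_td;

    static void
    on_sigterm(int sig)
    {
        (void)sig;
        /* Per the commit above, pthread_cancel is one of the three pthread
         * functions required to be async-signal-safe, so this is legal. */
        pthread_cancel(worker_td);
    }

    static void *
    worker(void *arg)
    {
        (void)arg;
        for (;;)
            pause();        /* a cancellation point */
        return (NULL);
    }

    int
    main(void)
    {
        signal(SIGTERM, on_sigterm);
        pthread_create(&worker_td, NULL, worker, NULL);
        sleep(1);
        raise(SIGTERM);     /* the handler cancels the worker */
        pthread_join(worker_td, NULL);
        printf("worker cancelled\n");
        return (0);
    }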
* o Remove more references to SIGTHR.
  o Remove clock resolution information left over from libc_r.
  [mtm, 2004-03-29, 1 file, -51/+0]
* Remove the garbage collector thread. All resources are freed in-line. If
  the exiting thread cannot release a resource, then the next thread to exit
  will release it.
  [mtm, 2004-03-28, 1 file, -4/+2]
* Move the initialization of thread priority to a common function.
  [mtm, 2004-02-18, 1 file, -5/+3]
* Preparations to make libthr work in multi-threaded fork()ing applications.
  o Remove some code duplication between _thread_init(), which is run once
    to initialize libthr and the initial thread, and pthread_create(), which
    initializes newly created threads, by moving it into a new function
    called from both places: init_td_common().
  o Move initialization of certain parts of libthr into a separate function.
    These include:
    - the active threads list and its lock
    - the dead threads list and its lock & condition variable
    - naming and insertion of the initial thread into the active threads
      list.
  [mtm, 2003-12-26, 1 file, -39/+78]
* When _PTHREADSINVARIANTS is defined, SIGABRT is not included in the set of
  signals to block. Also, make the PANIC macro call abort() instead of
  simply exiting.
  [mtm, 2003-07-08, 1 file, -0/+3]
* Make _thread_suspend work with both the old broken sigtimedwait
  implementation and the new improved one. We now precompute the signal set
  passed to sigtimedwait, using an inverted set when necessary for
  compatibility with older kernels.
  [jdp, 2003-06-29, 1 file, -0/+26]
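  The general technique (build the signal set once up front, then wait with
  a timeout) looks like the sketch below. The particular signals and the
  one-second timeout are arbitrary, and this is not the libthr code:

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>

    static sigset_t waitset;    /* precomputed once */

    static void
    waitset_init(void)
    {
        /* Wait on everything except a few signals we leave alone. */
        sigfillset(&waitset);
        sigdelset(&waitset, SIGINT);
        sigdelset(&waitset, SIGSEGV);
    }

    int
    main(void)
    {
        struct timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
        siginfo_t info;
        int sig;

        waitset_init();
        /* Block the signals we intend to collect synchronously. */
        sigprocmask(SIG_BLOCK, &waitset, NULL);
        if ((sig = sigtimedwait(&waitset, &info, &ts)) == -1)
            perror("sigtimedwait");     /* EAGAIN means it timed out */
        else
            printf("got signal %d\n", sig);
        return (0);
    }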
* Make C applications statically compiled with libthr work. Previously, an
  application compiled -static with libthr would dump core in malloc(3)
  because the stub thread initialization routine in libc would be used
  instead of the libthr-supplied one.
  [mtm, 2003-06-04, 1 file, -0/+6]
* Return gracefully, rather than aborting, when the maximum number of
  concurrent threads per process has been reached. Return EAGAIN, as per
  spec.
  Approved by: re/blanket libthr
  [mtm, 2003-05-25, 1 file, -1/+2]
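  From the application's point of view, the graceful failure shows up as
  pthread_create() returning EAGAIN instead of the process aborting. A small
  sketch of checking for it (the recovery policy is up to the caller):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *
    worker(void *arg)
    {
        (void)arg;
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t td;
        int error;

        error = pthread_create(&td, NULL, worker, NULL);
        if (error == EAGAIN) {
            /* Thread limit reached: back off instead of crashing. */
            fprintf(stderr, "pthread_create: %s\n", strerror(error));
            return (1);
        }
        if (error == 0)
            pthread_join(td, NULL);
        return (0);
    }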
* Start locking up the active and dead threads lists. The active threads
  list is protected by a spinlock_t, but the dead list uses a pthread_mutex
  because it is necessary to synchronize other threads with the garbage
  collector thread. Lock/unlock macros are used so it's easier to change the
  locks in the future.
  The 'dead thread list' lock is intended to replace the gc mutex. This
  doesn't have any practical ramifications; it simply makes it clearer what
  the purpose of the lock is. The gc will use this lock, instead of the gc
  mutex, to synchronize access to the dead list with other threads.
  Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK,
  and also to properly lock and protect thread state changes, especially
  with respect to a joining thread. The gc thread was also rearranged to be
  more organized and less nested.
  _pthread_join() was also modified to use the thread list locks. However,
  locking and unlocking here need special care because a thread could find
  itself joining an exiting thread that is waiting on the dead list lock,
  which this thread (the joiner) holds. If the joiner doesn't take care to
  lock *and* unlock in the same order, they (the joiner and the joinee)
  could deadlock against each other.
  Approved by: re/blanket libthr
  [mtm, 2003-05-25, 1 file, -1/+1]
* Make WARNS2 clean. The fixes mostly included:
  o removed unused variables
  o explicit inclusion of header files
  o prototypes for externally defined functions
  Approved by: re/blanket libthr
  [mtm, 2003-05-23, 1 file, -2/+1]
* The thread id was being set *before* zeroing out the thread. Reverse the
  order.
  Approved by: markm/mentor, re/blanket libthr
  [mtm, 2003-05-21, 1 file, -2/+3]
* - Pass a ucontext_t to _set_curthread. If non-NULL, the new thread is set
    as curthread in the new context, so that it will be set automatically
    when the thread is switched to. This fixes a race where we'd run for a
    little while with curthread unset in _thread_start.
  Reviewed by: jeff
  [jake, 2003-04-03, 1 file, -1/+1]
* - Define curthread as _get_curthread() and remove all direct calls to
    _get_curthread(). This is similar to the kernel's curthread. Doing this
    saves stack overhead and is more convenient for the programmer.
  - Pass the pointer to the newly created thread to _thread_init().
  - Remove _get_curthread_slow().
  [jeff, 2003-04-02, 1 file, -16/+0]
* - Add libthr but don't hook it up to the regular build yet. This is an
    adaptation of libc_r for the thr system call interface. This is
    beta-quality code.
  [jeff, 2003-04-01, 1 file, -0/+363]