path: root/lib/libpthread/thread/thr_spinlock.c
* Add weak definitions for wrapped system calls. (deischen, 2001-01-24, 1 file, -6/+9)

  In general:

    _foo - wrapped system call
    foo  - weak definition to _foo

  and for cancellation points:

    _foo  - wrapped system call
    __foo - enter cancellation point, call _foo(), leave cancellation point
    foo   - weak definition to __foo

  Change use of global _thread_run to call a function to get the currently
  running thread. Make all pthread_foo functions weak definitions to
  _pthread_foo, where _pthread_foo is the implementation. This allows an
  application to provide its own pthread functions. Provide slightly
  different versions of pthread_mutex_lock and pthread_mutex_init so that
  we can tell the difference between a libc mutex and an application
  mutex. Threads holding mutexes internal to libc should never be allowed
  to exit, call signal handlers, or cancel.

  Approved by: -arch
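  The layering above can be illustrated with a minimal sketch using the
  __weak_reference() macro from FreeBSD's <sys/cdefs.h>. The cancellation
  helpers and the raw syscall stub name below are illustrative
  placeholders, not the actual libc internals:

    #include <sys/cdefs.h>
    #include <unistd.h>

    /* Illustrative declarations; real names live in libc's private headers. */
    extern ssize_t __sys_read(int, void *, size_t);  /* raw syscall stub */
    void _enter_cancellation_point(void);            /* hypothetical helper */
    void _leave_cancellation_point(void);            /* hypothetical helper */

    /* _read: the wrapped system call. */
    ssize_t
    _read(int fd, void *buf, size_t nbytes)
    {
            return (__sys_read(fd, buf, nbytes));
    }

    /* __read: enter the cancellation point, call _read(), leave again. */
    ssize_t
    __read(int fd, void *buf, size_t nbytes)
    {
            ssize_t ret;

            _enter_cancellation_point();
            ret = _read(fd, buf, nbytes);
            _leave_cancellation_point();
            return (ret);
    }

    /* read: weak definition to __read, so applications may interpose. */
    __weak_reference(__read, read);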
* Simplify system call renaming. (jasone, 2000-01-27, 1 file, -1/+1)

  Instead of _foo() <-- _libc_foo() <-- foo(), just use _foo() <-- foo().
  In the case of a libpthread that doesn't do call conversion (such as
  linuxthreads and our upcoming libpthread), this is adequate. In the case
  of libc_r, we still need three names, which are now
  _thread_sys_foo() <-- _foo() <-- foo().

  Convert all internal libc usage of aio_suspend(), close(), fsync(),
  msync(), nanosleep(), open(), fcntl(), read(), and write() to _foo()
  instead of foo().

  Remove all internal libc usage of creat(), pause(), sleep(), system(),
  tcdrain(), wait(), and waitpid().

  Make thread cancellation fully POSIX-compliant.

  Suggested by: deischen
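  The point of the underscore names is interposition safety: libc's own
  I/O must never pass through a foo() that the threading library or the
  application has overridden. A minimal sketch under that assumption (the
  internal routine here is hypothetical):

    #include <string.h>
    #include <unistd.h>

    /* The wrapped system call, normally declared in libc's private headers. */
    extern ssize_t _write(int, const void *, size_t);

    /* Hypothetical internal libc routine. */
    void
    _libc_internal_warn(const char *msg)
    {
            /* Call _write(), not write(): skip any interposed symbol. */
            (void)_write(STDERR_FILENO, msg, strlen(msg));
    }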
* $Id$ -> $FreeBSD$ (peter, 1999-08-28, 1 file, -1/+1)
* Add RCS IDs to those files without them. (deischen, 1999-08-05, 1 file, -2/+2)

  Fix copyrights (s/REGENTS/AUTHOR).

  Suggested by: tg
  Approved by: jb
* [ The author's description... ] (jb, 1999-03-23, 1 file, -17/+15)

  o Runnable threads are now maintained in priority queues. The
    implementation requires two things:

    1.) The priority queues must be protected during insertion and removal
        of threads. Since the kernel scheduler must modify the priority
        queues, a spinlock for protection cannot be used. The functions
        _thread_kern_sched_defer() and _thread_kern_sched_undefer() were
        added to {un}defer kernel scheduler activation.

    2.) A thread's (active) priority change can be performed only when the
        thread is removed from the priority queue. The implementation uses
        a thread's active priority when inserting it into the queue.

    A by-product is that thread switches are much faster. A separate queue
    is used for waiting and/or blocked threads, and it is searched at most
    two times in the kernel scheduler when there are active threads. It
    should be possible to reduce this to once by combining polling of
    threads waiting on I/O with the loop that looks for timed-out threads
    and the minimum timeout value.

  o Functions to defer kernel scheduler activation were added:
    _thread_kern_sched_defer() and _thread_kern_sched_undefer(), which may
    be called recursively. These routines do not block the scheduling
    signal, but latch its occurrence. The signal handler will not call the
    kernel scheduler when the running thread has deferred scheduling, but
    it will be called when the running thread undefers scheduling. (A
    sketch of this latching pattern follows this entry.)

  o Added support for _POSIX_THREAD_PRIORITY_SCHEDULING. All the POSIX
    routines required by this should now be implemented. One note:
    SCHED_OTHER, SCHED_FIFO, and SCHED_RR are required to be defined by
    including pthread.h. These defines are currently in sched.h. I
    modified pthread.h to include sched.h but don't know if this is the
    proper thing to do.

  o Added support for priority protection and inheritance mutexes. This
    allows definition of _POSIX_THREAD_PRIO_PROTECT and
    _POSIX_THREAD_PRIO_INHERIT.

  o Added additional error checks required by POSIX for mutexes and
    condition variables.

  o Provided a wrapper for sigpending which is marked as a hidden syscall.

  o Added a non-portable function as a debugging aid to allow an
    application to monitor thread context switches. An application can
    install a routine that gets called every time a thread (explicitly
    created by the application) gets context switched. The routine gets
    passed the pthread IDs of the threads that are being switched in and
    out.

  Submitted by: Dan Eischen <eischen@vigrid.com>

  Changes by me:

  o Added a PS_SPINBLOCK state to deal with the priority inversion problem
    most often (I think) seen by threads calling malloc/free/realloc.

  o Dispatch signals to the running thread directly rather than at a
    context switch, to avoid the situation where the switch never occurs.
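  A minimal sketch of the defer/undefer latching described above; the
  counters and the scheduler hook are illustrative, not the actual libc_r
  internals:

    /* Recursion depth and the latched scheduling-signal flag. */
    static volatile int sched_defer_count = 0;
    static volatile int sched_signal_latched = 0;

    static void _thread_kern_scheduler(void);  /* hypothetical hook */

    void
    _thread_kern_sched_defer(void)
    {
            /* May be called recursively; just bump the count. */
            sched_defer_count++;
    }

    void
    _thread_kern_sched_undefer(void)
    {
            if (--sched_defer_count == 0 && sched_signal_latched) {
                    /* A scheduling signal arrived while deferred. */
                    sched_signal_latched = 0;
                    _thread_kern_scheduler();
            }
    }

    static void
    sched_signal_handler(int sig)
    {
            (void)sig;
            if (sched_defer_count > 0)
                    sched_signal_latched = 1;  /* latch, don't schedule */
            else
                    _thread_kern_scheduler();
    }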
* Add support for compile-time debug. (jb, 1998-06-09, 1 file, -20/+52)

  This is enabled if libc_r is built with -D_LOCK_DEBUG. It adds the file
  name and line number to each lock call, and these are stored in the
  spinlock structure. When using debug mode, the lock function will check
  if the thread is trying to lock something it has already locked. This is
  not supposed to happen, because the lock would be freed too early.
  Without lock debug, libc_r should be smaller and slightly faster.
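  A minimal sketch of how such a debug hook can record the call site and
  catch self-locking, assuming names loosely modeled on libc_r's spinlock
  interface (the structure layout and helpers here are illustrative):

    #include <stdio.h>

    struct spinlock_dbg {
            volatile long   access_lock;  /* the lock word itself */
            void            *lock_owner;  /* thread holding the lock */
            const char      *fname;       /* file of the last lock call */
            int             lineno;       /* line of the last lock call */
    };

    void _spinlock(struct spinlock_dbg *);   /* the plain lock routine */
    void *current_thread(void);              /* hypothetical helper */
    void _spinlock_debug(struct spinlock_dbg *, const char *, int);

    #ifdef _LOCK_DEBUG
    #define _SPINLOCK(lck)  _spinlock_debug(lck, __FILE__, __LINE__)
    #else
    #define _SPINLOCK(lck)  _spinlock(lck)
    #endif

    void
    _spinlock_debug(struct spinlock_dbg *lck, const char *fname, int lineno)
    {
            /* Warn when a thread tries to relock a lock it holds. */
            if (lck->lock_owner == current_thread())
                    fprintf(stderr,
                        "warning: self-lock at %s:%d (held since %s:%d)\n",
                        fname, lineno, lck->fname, lck->lineno);
            _spinlock(lck);
            lck->lock_owner = current_thread();
            lck->fname = fname;
            lck->lineno = lineno;
    }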
* Add a warning message for a thread locking against itself. (jb, 1998-06-06, 1 file, -3/+14)

  This is not supposed to happen, but I have seen bogus g++ code that
  causes it.
* Treat the lock value as volatile. (jb, 1998-05-05, 1 file, -2/+2)
* Change signal model to match POSIX, i.e. one set of signal handlers for
  the process, not a separate set for each thread. (jb, 1998-04-29, 1 file, -0/+65)

  By default, the process now only has signal handlers installed for
  SIGVTALRM, SIGINFO and SIGCHLD. The thread kernel signal handler is
  installed for other signals on demand. This means that SIG_IGN and
  SIG_DFL processing is now left to the kernel, not the thread kernel.

  Change the signal dispatch to no longer use a signal thread, and call
  the signal handler using the stack of the thread that has the signal
  pending.

  Change the atomic lock method to use test-and-set asm code with a yield
  if blocked. This introduces separate locks for each type of object
  instead of blocking signals to prevent a context switch. It was this
  blocking of signals that caused the performance degradation that people
  have noted.

  This is a *big* change!
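  A minimal sketch of a test-and-set lock that yields while blocked, using
  C11 atomics in place of the original hand-written asm (the type and
  function names are illustrative):

    #include <sched.h>
    #include <stdatomic.h>

    typedef struct {
            atomic_flag locked;
    } tas_lock_t;

    #define TAS_LOCK_INIT { ATOMIC_FLAG_INIT }

    void
    tas_lock(tas_lock_t *lk)
    {
            /* Spin on test-and-set; yield the CPU instead of busy-waiting. */
            while (atomic_flag_test_and_set_explicit(&lk->locked,
                memory_order_acquire))
                    sched_yield();
    }

    void
    tas_unlock(tas_lock_t *lk)
    {
            atomic_flag_clear_explicit(&lk->locked, memory_order_release);
    }

  Giving each object its own lock word means acquiring one no longer
  requires blocking signals around every critical section, which is where
  the performance degradation came from.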