path: root/lib/libpthread/thread/thr_setprio.c
Commit message | Author | Age | Files | Lines
* To be consistent, use the __weak_reference macro from <sys/cdefs.h> | deischen | 2001-04-10 | 1 | -1/+1
  instead of #pragma weak to create weak definitions.

  Suggested by: bde
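
  Under this convention _pthread_setprio() carries the implementation and
  pthread_setprio becomes the weak alias.  A rough sketch of the resulting
  shape of thr_setprio.c, assuming a simplified body (not the exact
  committed code):

      #include <sys/cdefs.h>
      #include <pthread.h>
      #include <sched.h>

      int _pthread_setprio(pthread_t pthread, int prio);

      /* Weak alias via the <sys/cdefs.h> macro rather than
       * "#pragma weak pthread_setprio = _pthread_setprio". */
      __weak_reference(_pthread_setprio, pthread_setprio);

      int
      _pthread_setprio(pthread_t pthread, int prio)
      {
          struct sched_param param;
          int policy, ret;

          /* Keep the current scheduling policy; change only the priority. */
          if ((ret = pthread_getschedparam(pthread, &policy, &param)) == 0) {
              param.sched_priority = prio;
              ret = pthread_setschedparam(pthread, policy, &param);
          }
          return (ret);
      }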
* Add weak definitions for wrapped system calls. In general: | deischen | 2001-01-24 | 1 | -3/+3
      _foo - wrapped system call
      foo  - weak definition to _foo

  and for cancellation points:

      _foo  - wrapped system call
      __foo - enter cancellation point, call _foo(), leave cancellation point
      foo   - weak definition to __foo

  Change use of global _thread_run to call a function to get the currently
  running thread.

  Make all pthread_foo functions weak definitions to _pthread_foo, where
  _pthread_foo is the implementation.  This allows an application to
  provide its own pthread functions.

  Provide slightly different versions of pthread_mutex_lock and
  pthread_mutex_init so that we can tell the difference between a libc
  mutex and an application mutex.  Threads holding mutexes internal to
  libc should never be allowed to exit, call signal handlers, or cancel.

  Approved by: -arch
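
  A self-contained sketch of the naming scheme (using FreeBSD's
  __weak_reference from <sys/cdefs.h>), with a toy foo() standing in for a
  real wrapped system call; the cancellation-point steps appear only as
  placeholder comments:

      #include <sys/cdefs.h>
      #include <stdio.h>

      int _foo(int);
      int __foo(int);
      int foo(int);

      /* _foo(): stands in for the wrapped system call. */
      int
      _foo(int x)
      {
          return (x + 1);
      }

      /* __foo(): cancellation-point wrapper around _foo(). */
      int
      __foo(int x)
      {
          int ret;

          /* ... enter cancellation point ... */
          ret = _foo(x);
          /* ... leave cancellation point ... */
          return (ret);
      }

      /* foo is only a weak reference to __foo, so an application can
       * provide its own strong foo() that takes precedence. */
      __weak_reference(__foo, foo);

      int
      main(void)
      {
          printf("%d\n", foo(41));        /* resolves to __foo() */
          return (0);
      }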
* Implement continuations to correctly handle [sig|_]longjmp() inside of a | jasone | 2000-01-19 | 1 | -1/+0
  signal handler.  Explicitly check for jumps to anywhere other than the
  current stack, since such jumps are undefined according to POSIX.

  While we're at it, convert thread cancellation to use continuations,
  since it's cleaner than the original cancellation code.

  Avoid delivering a signal to a thread twice.  This was a pre-existing
  bug, but was likely unexposed until these other changes were made.

  Defer signals generated by pthread_kill() so that they can be delivered
  on the appropriate stack.  deischen claims that this is unnecessary,
  which is likely true, but without this change, pthread_kill() can cause
  undefined priority queue states and/or PANICs in [sig|_]longjmp(), so
  I'm leaving this in for now.  To compile this code out and exercise the
  bug, define the _NO_UNDISPATCH cpp macro.  Defining _PTHREADS_INVARIANTS
  as well will cause earlier crashes.

  PR: kern/14685
  Collaboration with: deischen
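
  As a small, self-contained illustration of the POSIX constraint referred
  to above (a toy program, not the library code): siglongjmp() out of a
  signal handler is only well defined when the jump target lies on the
  stack that is currently running, which is the case shown here.

      #include <setjmp.h>
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      static sigjmp_buf jmpbuf;

      static void
      handler(int sig)
      {
          (void)sig;
          /* Legal: jmpbuf was set up on the stack we are executing on. */
          siglongjmp(jmpbuf, 1);
      }

      int
      main(void)
      {
          signal(SIGALRM, handler);

          if (sigsetjmp(jmpbuf, 1) == 0) {
              alarm(1);
              for (;;)
                  pause();                /* wait for SIGALRM */
          }
          printf("returned from the signal handler via siglongjmp()\n");
          return (0);
      }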
* $Id$ -> $FreeBSD$ | peter | 1999-08-28 | 1 | -1/+1
* Add RCS IDs to those files without them. | deischen | 1999-08-05 | 1 | -0/+1
  Fix copyrights (s/REGENTS/AUTHOR).

  Suggested by: tg
  Approved by: jb
* [ The author's description... ] | jb | 1999-03-23 | 1 | -11/+7
  o Runnable threads are now maintained in priority queues.  The
    implementation requires two things:

    1.) The priority queues must be protected during insertion and removal
        of threads.  Since the kernel scheduler must modify the priority
        queues, a spinlock for protection cannot be used.  The functions
        _thread_kern_sched_defer() and _thread_kern_sched_undefer() were
        added to {un}defer kernel scheduler activation.

    2.) A thread (active) priority change can be performed only when the
        thread is removed from the priority queue.  The implementation uses
        a thread's active priority when inserting it into the queue.

    A by-product is that thread switches are much faster.  A separate queue
    is used for waiting and/or blocked threads, and it is searched at most
    2 times in the kernel scheduler when there are active threads.  It
    should be possible to reduce this to once by combining polling of
    threads waiting on I/O with the loop that looks for timed out threads
    and the minimum timeout value.

  o Functions to defer kernel scheduler activation were added.  These are
    _thread_kern_sched_defer() and _thread_kern_sched_undefer() and may be
    called recursively.  These routines do not block the scheduling signal,
    but latch its occurrence.  The signal handler will not call the kernel
    scheduler when the running thread has deferred scheduling, but it will
    be called when the running thread undefers scheduling.

  o Added support for _POSIX_THREAD_PRIORITY_SCHEDULING.  All the POSIX
    routines required by this should now be implemented.  One note:
    SCHED_OTHER, SCHED_FIFO, and SCHED_RR are required to be defined by
    including pthread.h.  These defines are currently in sched.h.  I
    modified pthread.h to include sched.h but don't know if this is the
    proper thing to do.

  o Added support for priority protection and inheritance mutexes.  This
    allows definition of _POSIX_THREAD_PRIO_PROTECT and
    _POSIX_THREAD_PRIO_INHERIT.  (A usage sketch of these scheduling and
    mutex-protocol interfaces follows this entry.)

  o Added additional error checks required by POSIX for mutexes and
    condition variables.

  o Provided a wrapper for sigpending which is marked as a hidden syscall.

  o Added a non-portable function as a debugging aid to allow an
    application to monitor thread context switches.  An application can
    install a routine that gets called every time a thread (explicitly
    created by the application) gets context switched.  The routine gets
    passed the pthread IDs of the threads that are being switched in and
    out.

  Submitted by: Dan Eischen <eischen@vigrid.com>

  Changes by me:

  o Added a PS_SPINBLOCK state to deal with the priority inversion problem
    most often (I think) seen by threads calling malloc/free/realloc.

  o Dispatch signals to the running thread directly rather than at a
    context switch to avoid the situation where the switch never occurs.
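
  A rough usage sketch of the application-visible interfaces listed above
  (standard POSIX calls only; error handling is abbreviated and the chosen
  priority value is arbitrary): a SCHED_FIFO thread with an explicit
  priority plus a mutex using the priority-inheritance protocol.

      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>

      static pthread_mutex_t lock;

      static void *
      worker(void *arg)
      {
          (void)arg;
          pthread_mutex_lock(&lock);
          /* ... critical section ... */
          pthread_mutex_unlock(&lock);
          return (NULL);
      }

      int
      main(void)
      {
          pthread_mutexattr_t mattr;
          pthread_attr_t attr;
          struct sched_param param;
          pthread_t td;

          /* Mutex using the priority-inheritance protocol
           * (_POSIX_THREAD_PRIO_INHERIT). */
          pthread_mutexattr_init(&mattr);
          pthread_mutexattr_setprotocol(&mattr, PTHREAD_PRIO_INHERIT);
          pthread_mutex_init(&lock, &mattr);

          /* SCHED_FIFO thread with an explicitly chosen priority. */
          pthread_attr_init(&attr);
          pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
          pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
          param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;
          pthread_attr_setschedparam(&attr, &param);

          if (pthread_create(&td, &attr, worker, NULL) != 0)
              fprintf(stderr, "pthread_create failed "
                  "(real-time priority may require privileges)\n");
          else
              pthread_join(td, NULL);
          return (0);
      }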
* Change signal model to match POSIX (i.e. one set of signal handlers | jb | 1998-04-29 | 1 | -32/+8
  for the process, not a separate set for each thread).

  By default, the process now only has signal handlers installed for
  SIGVTALRM, SIGINFO and SIGCHLD.  The thread kernel signal handler is
  installed for other signals on demand.  This means that SIG_IGN and
  SIG_DFL processing is now left to the kernel, not the thread kernel.

  Change the signal dispatch to no longer use a signal thread, and call
  the signal handler using the stack of the thread that has the signal
  pending.

  Change the atomic lock method to use test-and-set asm code with a yield
  if blocked.  This introduces separate locks for each type of object
  instead of blocking signals to prevent a context switch.  It was this
  blocking of signals that caused the performance degradation that people
  have noted.

  This is a *big* change!
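
  A rough, modern-C sketch of that locking idea, assuming C11 atomics in
  place of the hand-written test-and-set asm and sched_yield() standing in
  for the thread kernel's yield:

      #include <sched.h>
      #include <stdatomic.h>

      typedef struct {
          atomic_flag locked;
      } spinlock_t;

      #define SPINLOCK_INITIALIZER    { ATOMIC_FLAG_INIT }

      static void
      spinlock_lock(spinlock_t *lock)
      {
          /* Test-and-set; if the lock is already held, yield and retry
           * instead of blocking signals around the critical section. */
          while (atomic_flag_test_and_set_explicit(&lock->locked,
                  memory_order_acquire))
              sched_yield();
      }

      static void
      spinlock_unlock(spinlock_t *lock)
      {
          atomic_flag_clear_explicit(&lock->locked, memory_order_release);
      }

      static spinlock_t testlock = SPINLOCK_INITIALIZER;

      int
      main(void)
      {
          spinlock_lock(&testlock);
          /* ... one lock per object type, as described above ... */
          spinlock_unlock(&testlock);
          return (0);
      }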
* Submitted by: John Birrell | julian | 1997-02-05 | 1 | -2/+2
  uthreads update from the author.
* Reviewed by: julian | julian | 1996-01-22 | 1 | -0/+80
  Submitted by: john birrel

  One version of the pthreads library; another will follow with different
  actions under some cases.  Not QUITE complete.