path: root/sys/kern/kern_thread.c
Commit log (each entry shows the commit message, then: author, date; files changed, -lines deleted/+lines added)
...
* Define __lwpid_t as an int32_t in <sys/_types.h> and define lwpid_t as an
  __lwpid_t in <sys/types.h>. (marcel, 2004-06-19; 1 file, -4/+6)
  Retype td_tid from an int to a lwpid_t and change related definitions
  accordingly.
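
A minimal sketch of the type plumbing this commit describes; the exact
placement and the surrounding fields in the headers are illustrative only:

    /* sys/_types.h */
    typedef int32_t     __lwpid_t;

    /* sys/types.h */
    typedef __lwpid_t   lwpid_t;        /* thread ID */

    /* sys/proc.h (illustrative): td_tid retyped from int to lwpid_t */
    struct thread {
        /* ... */
        lwpid_t td_tid;                 /* thread ID */
        /* ... */
    };
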
* If the thread singler wants to terminate other threads, make sure it
  includes all threads except itself. (davidxu, 2004-06-18; 1 file, -2/+16)
  Obtained from: julian
* Shuffle some code around. (julian, 2004-06-11; 1 file, -1/+42)
* Add a comment explaining td_critnest's initial state and its life from
  that point on, as it happens relatively indirectly, and in a codepath the
  casual reader may not be acquainted with or find obvious. (jmallett,
  2004-06-09; 1 file, -0/+13)
  Glanced at by: jhb
* Split kern_thread.c into 2 parts: kern_kse.c and kern_thread.c. (julian,
  2004-06-07; 1 file, -1209/+13)
  Kern_kse has already been committed. This separates out the KSE threading
  ABI from generic thread support.
* Move TDF_SA from td_flags to td_pflags (and rename it accordingly) so
  that it is no longer necessary to hold sched_lock while manipulating it.
  (tjr, 2004-06-02; 1 file, -10/+10)
  Reviewed by: davidxu
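
The point of the move is locking: td_flags is shared state guarded by
sched_lock, while td_pflags is private to the owning thread. A hedged
before/after sketch (TDP_SA being the renamed flag):

    /* Before: TDF_SA lived in td_flags, which other threads may also
     * write, so every update needed sched_lock. */
    mtx_lock_spin(&sched_lock);
    td->td_flags |= TDF_SA;
    mtx_unlock_spin(&sched_lock);

    /* After: TDP_SA lives in td_pflags, touched only by curthread,
     * so no lock is required. */
    td->td_pflags |= TDP_SA;
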
* Clear KSE thread flags after KSE thread mode is ended. (davidxu,
  2004-05-21; 1 file, -1/+1)
  The side effect of not clearing the flags across the execv() syscall is
  that a new program runs in KSE thread mode without enabling it.
  Submitted by: tjr
  Modified by: davidxu
* Keep track of threads waiting in kse_release() to avoid a race condition
  where kse_wakeup() doesn't yet see them in (interruptible) sleep queues.
  (deischen, 2004-04-28; 1 file, -16/+37)
  Also add an upcall check to sleepqueue_catch_signals() suggested by jhb.
  This commit should fix recent mysql hangs.
  Reviewed by: jhb, davidxu
  Mysql'd by: Robin P. Blanchard <robin.blanchard at gactr uga edu>
* Assign thread IDs to kernel threads. (marcel, 2004-04-03; 1 file, -2/+98)
  The purpose of the thread ID (tid) is twofold:
  1. When a 1:1 or M:N threaded process dumps core, we need to put the
     register state of each of its kernel threads in the core file. This
     can only be done by differentiating the pid field in the respective
     note. For this we need the tid.
  2. When thread support is present for remote debugging the kernel with
     gdb(1), threads need to be identified by an integer due to limitations
     in the remote protocol. This requires having a tid.
  To minimize the impact of having thread IDs, threads that are created as
  part of a fork (i.e. the initial thread in a process) will inherit the
  process ID (i.e. tid=pid). Subsequent threads will have IDs larger than
  PID_MAX to avoid interference with the pid allocation algorithm. The
  assignment of tids is handled by thread_new_tid().
  The thread ID allocation algorithm has been written with 3 assumptions in
  mind:
  1. IDs need to be created as fast as possible,
  2. Reuse of IDs may happen instantaneously,
  3. Someone else will write a better algorithm.
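
A minimal sketch of an allocator satisfying the three stated assumptions;
the real thread_new_tid() differs in detail, and tid_lock, the INT32_MAX
bound, and the wrap policy are assumptions here:

    static struct mtx tid_lock;                 /* assumed allocator lock */
    static lwpid_t tid_next = PID_MAX + 1;      /* keep clear of pid space */

    static lwpid_t
    thread_new_tid(void)
    {
        lwpid_t tid;

        /* Initial (forked) threads never get here: they use tid = pid. */
        mtx_lock(&tid_lock);
        tid = tid_next;
        if (tid_next == INT32_MAX)
            tid_next = PID_MAX + 1;     /* wrap; instant reuse is allowed */
        else
            tid_next++;
        mtx_unlock(&tid_lock);
        return (tid);
    }
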
* Massively up the (artificial) limit on system scope threads in a process
  from 50 to 500. (julian, 2004-03-21; 1 file, -2/+2)
  Also up the number of process scope threads allowed to be in the kernel
  at one time from 150 to 1500 (per process).
* Push Giant down a little further: (peter, 2004-03-13; 1 file, -8/+5)
  - no longer serialize on Giant for thread_single*() and family in fork,
    exit and exec
  - thread_wait() is mpsafe, assert no Giant
  - reduce scope of Giant in exit to not cover thread_wait and just do
    vm_waitproc()
  - assert that thread_single() family are not called with Giant
  - remove the DROP/PICKUP_GIANT macros from thread_single() family
  - assert that thread_suspend_check() is not called with Giant
  - remove manual drop_giant hack in thread_suspend_check since we know it
    isn't held
  - remove the DROP/PICKUP_GIANT macros from thread_suspend_check() family
  - mark kse_create() mpsafe
* Check for TDF_SINTR before calling sleepq_abort() as there is a narrow
  race in between sleepq_add() and sleepq_catch_signals() in that setting
  td_wchan and TDF_SINTR is not atomic to sched_lock but only to the sleepq
  lock. (jhb, 2004-03-01; 1 file, -1/+1)
  This band-aid will stop assertion failures, but there is perhaps a larger
  problem with the sleepq_add/sleepq_catch_signals race that I am not sure
  how to solve. For the signals case the race is harmless because we always
  call cursig() after setting TDF_SINTR. However, KSE doesn't do anything
  in sleepq_catch_signals() to check that this race was lost, so I am
  unsure if this race is harmful for this specific abort.
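
The band-aid amounts to a one-line guard at the abort site; a sketch (the
surrounding call site is hypothetical):

    /* td_wchan and TDF_SINTR are set under the sleepq lock rather than
     * sched_lock, so between sleepq_add() and sleepq_catch_signals() the
     * thread may not yet be marked interruptible; only abort sleeps that
     * actually are. */
    if (td->td_flags & TDF_SINTR)
        sleepq_abort(td);
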
* Switch the sleep/wakeup and condition variable implementations to use the
  sleep queue interface: (jhb, 2004-02-27; 1 file, -11/+7)
  - Sleep queues attempt to merge some of the benefits of both sleep queues
    and condition variables. Having sleep queues in a hash table avoids
    having to allocate a queue head for each wait channel. Thus, struct cv
    has shrunk down to just a single char * pointer now. However, the hash
    table does not hold threads directly, but queue heads. This means that
    once you have located a queue in the hash bucket, you no longer have to
    walk the rest of the hash chain looking for threads. Instead, you have
    a list of all the threads sleeping on that wait channel.
  - Outside of the sleepq code and the sleep/cv code the kernel no longer
    differentiates between cv's and sleep/wakeup. For example, calls to
    abortsleep() and cv_abort() are replaced with a call to sleepq_abort().
    Thus, the TDF_CVWAITQ flag is removed. Also, calls to unsleep() and
    cv_waitq_remove() have been replaced with calls to sleepq_remove().
  - The sched_sleep() function no longer accepts a priority argument as
    sleeps no longer inherently bump the priority. Instead, this is solely
    a property of msleep() which explicitly calls sched_prio() before
    blocking.
  - The TDF_ONSLEEPQ flag has been dropped as it was never used. The
    associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also been
    dropped and replaced with a single explicit clearing of td_wchan.
    TD_SET_ONSLEEPQ() would really have only made sense if it had taken the
    wait channel and message as arguments anyway. Now that that only
    happens in one place, a macro would be overkill.
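
Two concrete effects of the switch, sketched; the hash constants and member
names below are illustrative, not the real subr_sleepqueue.c definitions:

    /* struct cv no longer embeds its own thread queue; the queue head
     * lives in the sleep queue hash table, keyed by wait channel. */
    struct cv {
        const char  *cv_description;    /* all that is left */
    };

    /* Hypothetical shape of the wait-channel hash: buckets hold queue
     * heads (not threads), so one bucket lookup yields the complete
     * list of threads sleeping on a given channel. */
    #define SC_TABLESIZE    128
    #define SC_HASH(wc)     (((uintptr_t)(wc) >> 8) & (SC_TABLESIZE - 1))
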
* Use mtx_assert() rather than using a home-rolled version. (jhb,
  2004-01-28; 1 file, -1/+1)
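
For example, a hand-written ownership check becomes the standard assertion
(the call site is hypothetical):

    /* Home-rolled: */
    KASSERT(mtx_owned(&sched_lock), ("sched_lock not held"));

    /* Standard, and compiled away in kernels without INVARIANTS: */
    mtx_assert(&sched_lock, MA_OWNED);
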
* Add a flags parameter to mi_switch. (jeff, 2004-01-25; 1 file, -4/+2)
  - The value of flags may be SW_VOL or SW_INVOL. Assert that one of these
    is set in mi_switch() and properly adjust the rusage statistics. This
    is to simplify the large number of users of this interface which were
    previously all required to adjust the proper counter prior to calling
    mi_switch(). This also facilitates more switch and locking
    optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter and
    remove direct references to the process statistics.
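
After this change a caller only states the kind of switch and mi_switch()
bumps the matching rusage counter itself; a sketch of a voluntary switch
under the calling convention of that era (sched_lock held across the call):

    mtx_lock_spin(&sched_lock);
    /* Callers previously had to do p->p_stats->p_ru.ru_nvcsw++ here. */
    mi_switch(SW_VOL);
    mtx_unlock_spin(&sched_lock);
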
* Reduce gratuitous includes: don't include jail.h if it's not needed.
  (rwatson, 2004-01-21; 1 file, -1/+0)
  Presumably, at some point, you had to include jail.h if you included
  proc.h, but that is no longer required.
  Result of: self injury involving adding something to struct prison
* s/Muliple/Multiple (schweikh, 2004-01-10; 1 file, -48/+46)
  Removed whitespace at EOL and EOF.
* Don't use NULL (pointer) when we mean 0 (integer) for the number of ticks
  in msleep. (peter, 2003-12-23; 1 file, -1/+1)
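
The last argument of msleep() is an int tick count, so "no timeout" is the
integer 0, not NULL; an illustrative call (wait channel, mutex, and wmesg
are hypothetical):

    /* Wrong: NULL is a pointer constant passed where an int is expected. */
    error = msleep(&p->p_flag, &p->p_mtx, PPAUSE, "thwait", NULL);

    /* Right: 0 ticks means sleep with no timeout. */
    error = msleep(&p->p_flag, &p->p_mtx, PPAUSE, "thwait", 0);
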
* Write the thread pointer (val) in the kse mailbox (loc) before we set the
  new context in kse_switchin(2). (marcel, 2003-12-10; 1 file, -2/+2)
  This allows us to return an error to the calling context when the
  suword() fails.
* Add kse_switchin(2). (marcel, 2003-12-07; 1 file, -0/+24)
  This syscall can be used by KSE implementations to have the kernel switch
  to a new thread, instead of doing it in userland. It is in fact needed on
  ia64 where syscall restarts do not return to userland first. It's
  completely handled inside the kernel. As such, any context created by the
  kernel as part of an upcall and caused by some syscall needs to be
  restored by the kernel.
* Giant is no longer required by vm_thread_new(). (alc, 2003-12-07; 1 file,
  -2/+0)
* Add an implementation of turnstiles and change the sleep mutex code to
  use turnstiles to implement blocking instead of implementing a thread
  queue directly. (jhb, 2003-11-11; 1 file, -0/+3)
  These turnstiles are somewhat similar to those used in Solaris 7 as
  described in Solaris Internals but are also different. Turnstiles do not
  come out of a fixed-sized pool. Rather, each thread is assigned a
  turnstile when it is created that it frees when it is destroyed. When a
  thread blocks on a lock, it donates its turnstile to that lock to serve
  as a queue of blocked threads. The queue associated with a given lock is
  found by a lookup in a simple hash table. The turnstile itself is
  protected by a lock associated with its entry in the hash table. This
  means that sched_lock is no longer needed to contest on a mutex. Instead,
  sched_lock is only used when manipulating run queues or thread
  priorities. Turnstiles also implement priority propagation inherently.
  Currently turnstiles only support mutexes. Eventually, however,
  turnstiles may grow two queues to support a non-sleepable reader/writer
  lock implementation. For more details, see the comments in
  sys/turnstile.h and kern/subr_turnstile.c.
  The two primary advantages from the turnstile code include: 1) the size
  of struct mutex shrinks by four pointers as it no longer stores the
  thread queue linkages directly, and 2) less contention on sched_lock in
  SMP systems including the ability for multiple CPUs to contend on
  different locks simultaneously (not that this last detail is necessarily
  that much of a big win). Note that 1) means that this commit is a kernel
  ABI breaker, so don't mix old modules with a new kernel and vice versa.
  Tested on: i386 SMP, sparc64 SMP, alpha SMP
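
The three lines this commit adds to kern_thread.c are presumably the
per-thread allocate/free described above; a hedged sketch against the
sys/turnstile.h interface (function placement and signatures simplified):

    static void
    thread_init(void *mem, int size)
    {
        struct thread *td = mem;

        /* Owned for the thread's lifetime; donated to a lock's queue
         * of blocked threads whenever td blocks on that lock. */
        td->td_turnstile = turnstile_alloc();
    }

    static void
    thread_fini(void *mem, int size)
    {
        struct thread *td = mem;

        turnstile_free(td->td_turnstile);
    }
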
* Let SA processes work under the ULE scheduler; originally this would
  panic the kernel. (davidxu, 2003-08-26; 1 file, -3/+16)
  Reviewed by: jeff
* Change instances of callout_init that specify MPSAFE behaviour to use
  CALLOUT_MPSAFE instead of "1" for the second parameter. (sam, 2003-08-19;
  1 file, -1/+1)
  This does not change the behaviour; it just makes the intent more clear.
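
Behaviour is unchanged; only the literal becomes self-describing (the
callout variable is hypothetical):

    struct callout timeout_handle;

    /* Before: what does 1 mean at a glance? */
    callout_init(&timeout_handle, 1);

    /* After: the handler runs without Giant, stated explicitly. */
    callout_init(&timeout_handle, CALLOUT_MPSAFE);
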
* Update powerpc to use the (old thread, new thread) calling convention for
  cpu_throw() and cpu_switch(). (grehan, 2003-08-14; 1 file, -4/+0)
* Convert Alpha over to the new calling conventions for cpu_throw() and
  cpu_switch() where both the old and new threads are passed in as
  arguments. (jhb, 2003-08-12; 1 file, -1/+1)
  Only powerpc uses the old conventions now. Also update comments in the
  Alpha swtch.s to reflect KSE changes.
  Tested by: obrien, marcel
* Copyin the thread mailbox flags from the correct location in the mailbox.
  (deischen, 2003-08-08; 1 file, -1/+1)
* Consistently use the BSD u_int and u_short instead of the SYSV uint and
  ushort. (jhb, 2003-08-07; 1 file, -1/+1)
  In most of these files, there was a mixture of both styles and this
  change just makes them self-consistent.
  Requested by: bde (kern_ktrace.c)
* Introduce a thread mailbox flag TMF_NOUPCALL. (davidxu, 2003-08-05;
  1 file, -7/+18)
  On some architectures other than i386 or AMD64, the TP register points to
  the thread mailbox, and they cannot atomically clear km_curthread in the
  kse mailbox. In this case, the thread retrieves its thread pointer from
  the TP register and sets the flag TMF_NOUPCALL in its thread mailbox to
  indicate a critical region.
* Set td_critnest to 1 when setting up a thread since it is a MI field with
  MI values. (jhb, 2003-08-04; 1 file, -0/+1)
  This ensures that td_critnest for a newly fork'd thread is always valid.
  Requested by: bde (a long time ago)
* Refine kse_thr_interrupt to allow it to handle different commands.
  (davidxu, 2003-07-17; 1 file, -62/+61)
  o Remove TDF_NOSIGPOST.
  o Add a member td_waitset to the proc structure; it will be used for
    sigwait.
  Tested by: deischen
* If initial thread is still a bound thread, don't change its signal mask.
  (davidxu, 2003-07-15; 1 file, -1/+1)
* Rename thread_siginfo to cpu_thread_siginfo. (davidxu, 2003-07-15;
  1 file, -1/+1)
* kse_thr_interrupt should target the thread, specifically. (mtm,
  2003-07-04; 1 file, -1/+1)
  Requested by: davidxu
* Signals sent specifically to a particular thread must be delivered to
  that thread, regardless of whether it has it masked or not. (mtm,
  2003-07-03; 1 file, -1/+1)
  Previously, if the targeted thread had the signal masked, it would be put
  on the process's siglist. If another thread has the signal unmasked or
  unmasks it before the target, then the thread it was intended for would
  never receive it.
  This patch attempts to solve the problem by requiring callers of
  tdsignal() to say whether the signal is for the thread or for the
  process. If it is for the process, then normal processing occurs and any
  thread that has it unmasked can receive it. But if it is destined for a
  specific thread, it is put on that thread's pending list regardless of
  whether it is currently masked or not.
  The new behaviour still needs more work, though. If the signal is
  reposted for some reason it is always posted back to the thread that
  handled it because the information regarding the target of the signal has
  been lost by then.
  Reviewed by: jdp, jeff, bde (style)
* Fix typo. (davidxu, 2003-06-30; 1 file, -1/+1)
* Don't use fuword() and suword() on struct members of type int. (marcel,
  2003-06-28; 1 file, -4/+4)
  This happens to work on 32-bit platforms as sizeof(long)=sizeof(int), but
  wreaks all kinds of havoc (garbage reads, corrupting writes and
  misaligned loads/stores) on 64-bit architectures. The fix for now is to
  use fuword32() and suword32() and change the type of the applicable int
  fields to int32. This is to make it explicit that we depend on these
  fields being 32-bit. We may want to revisit this later.
  Reviewed by: deischen
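
On LP64 platforms fuword()/suword() transfer sizeof(long) = 8 bytes, so
using them on a 32-bit field overreads or clobbers its neighbour; a sketch
of the fix (the mailbox field name is hypothetical):

    int32_t flags;      /* member retyped from int to int32_t */

    /* Broken on 64-bit: moves 8 bytes against a 4-byte field. */
    flags = fuword(&tmbx->tm_flags);
    suword(&tmbx->tm_flags, flags);

    /* Fixed: transfers exactly 32 bits. */
    flags = fuword32(&tmbx->tm_flags);
    suword32(&tmbx->tm_flags, flags);
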
* o Change kse_thr_interrupt to allow sending a signal to a specified
    thread, or unblocking a thread in the kernel, and allow the UTS to
    specify whether the syscall should be restarted.
  o Add the ability for the UTS to monitor signals arriving at and being
    removed from the process; the flag PS_SIGEVENT is used to indicate the
    events.
  o Add a KMF_WAITSIGEVENT KSE mailbox flag; the UTS calls kse_release with
    this flag set to wait for the above signal events.
  o For SA based threads, the kernel masks all signals in the thread's
    signal mask, letting the UTS use kse_thr_interrupt to interrupt a
    thread and install a signal frame in userland for it.
  o Add a tm_syncsig member to the thread mailbox; when a hardware trap
    occurs, it is used to deliver a synchronous signal to userland, and an
    upcall is scheduled so the UTS can process the synchronous signal for
    the thread.
  (davidxu, 2003-06-28; 1 file, -56/+121)
  Reviewed by: julian (mentor)
* cpu_set_upcall_kse needs to access userspace, so release the scheduler
  lock before calling it for a bound thread. (davidxu, 2003-06-20; 1 file,
  -4/+10)
  To avoid this problem, change thread_schedule_upcall to not put the new
  thread on the run queue; let the caller do it, so we can tweak the new
  thread before setting it to run.
  Reported by: pho
* Forgot to commit code to disable creating a bound thread in the same
  group again, except for the first kse_create syscall. (davidxu,
  2003-06-16; 1 file, -0/+2)
  Noticed by: julian
* Reset ncpus to 1 for a bound thread group since there is only one thread
  in such a group. (davidxu, 2003-06-16; 1 file, -1/+3)
  Change message text from kse_rel to kserel; it is better displayed in
  top.
* 1. Add code to support bound threads. When blocked, a bound thread never
     schedules an upcall. Signal delivery to a bound thread is the same as
     for a non-threaded process. This is intended to be used by libpthread
     to implement PTHREAD_SCOPE_SYSTEM threads.
  2. Simplify kse_release() a bit; remove the sleep loop.
  (davidxu, 2003-06-15; 1 file, -55/+63)
* 1. Migrate TDF_UPCALLING from td_flags to td_pflags.
  2. Add a flag TDF_SA; it will be used to distinguish SA based threads
     from bound threads.
  (davidxu, 2003-06-15; 1 file, -16/+6)
* Rename P_THREADED to P_SA. (davidxu, 2003-06-15; 1 file, -6/+6)
  P_SA means a process is using scheduler activations.
* Migrate the thread stack management functions from the machine-dependent
  to the machine-independent parts of the VM. (alc, 2003-06-14; 1 file,
  -2/+3)
  At the same time, this introduces vm object locking for the non-i386
  platforms. Two details:
  1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES. The
     different machine-dependent implementations used various combinations
     of KSTACK_GUARD and KSTACK_GUARD_PAGES. To disable the guard page, set
     KSTACK_GUARD_PAGES to 0.
  2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new. In 5.x
     (but not 4.x) PG_ZERO can only be set if VM_ALLOC_ZERO is passed to
     vm_page_alloc() or vm_page_grab().
* Fix error in my last commit: correctly maintain p_maxthrwaits and unlock
  sched_lock. (davidxu, 2003-06-11; 1 file, -5/+8)
* Use __FBSDID(). (obrien, 2003-06-11; 1 file, -2/+3)
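
__FBSDID() is the standard way to embed the FreeBSD version string in an
object file; the conventional use at the top of a source file looks like:

    #include <sys/cdefs.h>
    __FBSDID("$FreeBSD$");
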
* If there are signals delivered to the current thread, break out of the
  loop; userret() will be called again by ast() and thread_userret() will
  be called again by userret(). (davidxu, 2003-06-10; 1 file, -4/+3)
  Reported by: tegge
* thread_signal_add is now called with ps_mtx held; unlock it before
  calling copyin. (davidxu, 2003-06-06; 1 file, -3/+5)
* Change the second (and last) argument of cpu_set_upcall(). (marcel,
  2003-06-04; 1 file, -2/+1)
  Previously we were passing in a void* representing the PCB of the parent
  thread. Now we pass a pointer to the parent thread itself. The prime
  reason for this change is to allow cpu_set_upcall() to copy (parts of)
  the trapframe instead of having it done in MI code in each caller of
  cpu_set_upcall(). Copying the trapframe cannot always be done with a
  simple bcopy() or may not always be optimal that way. On ia64
  specifically the trapframe contains information that is specific to an
  entry into the kernel and can only be used by the corresponding exit from
  the kernel. A trapframe copied verbatim from another frame is in most
  cases useless without some additional normalization.
  Note that this change removes the assignment to td->td_frame in some
  implementations of cpu_set_upcall(). The assignment is redundant. A
  previous call to cpu_thread_setup() already did the exact same
  assignment. An added benefit of removing the redundant assignment is that
  we can now change td_pcb without nasty side-effects.
  This change officially marks the ability on ia64 for 1:1 threading.
  Not tested on: amd64, powerpc
  Compile & boot tested on: alpha, sparc64
  Functionally tested on: i386, ia64