| author | jhb <jhb@FreeBSD.org> | 2004-07-02 20:21:44 +0000 |
| --- | --- | --- |
| committer | jhb <jhb@FreeBSD.org> | 2004-07-02 20:21:44 +0000 |
| commit | 696704716d52a895094da20b7e1a0f763b069e12 | |
| tree | 2a5d6a91ba98f5b9e075eecc1a9ca724b8a9110a /sys/kern/sched_4bsd.c | |
| parent | 1f506bc6fab7cc97cb923d4af1174f9c732221dd | |
Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
determine if a thread about to be added to a run queue should be
preempted to directly. If it is not safe to preempt or if the new
thread does not have a high enough priority, then the function returns
false and sched_add() adds the thread to the run queue. If the thread
should be preempted to but the current thread is in a nested critical
section, then the flag TDF_OWEPREEMPT is set and the thread is added
to the run queue. Otherwise, mi_switch() is called immediately and the
thread is never added to the run queue since it is switched to directly.
When exiting an outermost critical section, if TDF_OWEPREEMPT is set,
then clear it and call mi_switch() to perform the deferred preemption
(simplified sketches of both paths appear after this list and after the
diff below).
- Remove explicit preemption from ithread_schedule() as calling
setrunqueue() now does all the correct work. This also removes the
do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
chance to run if the architecture supports native preemption since
the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
preemption, namely alpha, i386, and amd64.
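
For orientation, here is a minimal, self-contained C sketch of the maybe_preempt() decision flow described in the first item above. It is a toy model under stated assumptions, not the FreeBSD implementation: the struct thread fields, the mi_switch_to() helper, the priority values used in main(), and the bare td_critnest > 0 test are stand-ins (the real function also accounts for the spin lock its caller already holds and performs additional safety checks).

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel structures involved (not the real layout). */
struct thread {
	int	td_priority;	/* lower value = higher priority */
	int	td_critnest;	/* critical-section nesting depth */
	int	td_flags;
};

#define	TDF_OWEPREEMPT	0x0001	/* a preemption is owed at critical exit */

static struct thread *curthread;	/* thread running on this CPU */

/* Placeholder for mi_switch(); the model only records who runs now. */
static void
mi_switch_to(struct thread *newtd)
{
	curthread = newtd;
}

/*
 * Decision flow of maybe_preempt() as described above: return true only
 * when the new thread was switched to directly, so that sched_add() can
 * skip the run queue entirely in that case.
 */
static bool
maybe_preempt(struct thread *td)
{
	struct thread *ctd = curthread;

	/* Not enough priority to displace the running thread: enqueue it. */
	if (td->td_priority >= ctd->td_priority)
		return (false);

	/* Preemption is deserved but deferred inside a critical section. */
	if (ctd->td_critnest > 0) {
		ctd->td_flags |= TDF_OWEPREEMPT;
		return (false);
	}

	/* Safe to preempt: switch to the new thread immediately. */
	mi_switch_to(td);
	return (true);
}

int
main(void)
{
	struct thread idle = { .td_priority = 255 };
	struct thread ithd = { .td_priority = 16 };

	/* Outside any critical section: the new thread runs at once. */
	curthread = &idle;
	assert(maybe_preempt(&ithd) && curthread == &ithd);

	/* Inside a critical section: the preemption is recorded as owed. */
	curthread = &idle;
	idle.td_critnest = 1;
	assert(!maybe_preempt(&ithd));
	assert(idle.td_flags & TDF_OWEPREEMPT);
	return (0);
}
```

The return value is the key design point: sched_add() enqueues the thread only when maybe_preempt() returns false, so a thread that is preempted to directly never touches the run queue at all.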
This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.
Approved by: scottl (with his re@ hat)
Diffstat (limited to 'sys/kern/sched_4bsd.c')
-rw-r--r-- | sys/kern/sched_4bsd.c | 12 |
1 file changed, 11 insertions, 1 deletion

    diff --git a/sys/kern/sched_4bsd.c b/sys/kern/sched_4bsd.c
    index 5d8961e..b2ae3dd 100644
    --- a/sys/kern/sched_4bsd.c
    +++ b/sys/kern/sched_4bsd.c
    @@ -654,7 +654,7 @@ sched_switch(struct thread *td, struct thread *newtd)
     		sched_tdcnt++;
     	td->td_lastcpu = td->td_oncpu;
     	td->td_last_kse = ke;
    -	td->td_flags &= ~TDF_NEEDRESCHED;
    +	td->td_flags &= ~(TDF_NEEDRESCHED | TDF_OWEPREEMPT);
     	td->td_oncpu = NOCPU;
     	/*
     	 * At the last moment, if this thread is still marked RUNNING,
    @@ -712,6 +712,16 @@ sched_add(struct thread *td)
     	    ke->ke_proc->p_comm));
     	KASSERT(ke->ke_proc->p_sflag & PS_INMEM,
     	    ("sched_add: process swapped out"));
    +
    +#ifdef SMP
    +	/*
    +	 * Only try to preempt if the thread is unpinned or pinned to the
    +	 * current CPU.
    +	 */
    +	if (KSE_CAN_MIGRATE(ke) || ke->ke_runq == &runq_pcpu[PCPU_GET(cpuid)])
    +#endif
    +	if (maybe_preempt(td))
    +		return;
     	ke->ke_ksegrp->kg_runq_kses++;
     	ke->ke_state = KES_ONRUNQ;
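
The sched_switch() hunk above clears TDF_OWEPREEMPT on every context switch; the other half of the mechanism described in the commit message fires when the outermost critical section is exited. Below is a hedged sketch of that path, reusing the toy declarations (struct thread, TDF_OWEPREEMPT, curthread, mi_switch_to()) from the earlier example. The real logic lives in the kernel's critical_exit() and picks the next thread from the run queues, which the model sidesteps by taking it as a parameter.

```c
/* Toy critical-section primitives layered on the earlier sketch. */
static void
critical_enter_model(void)
{
	curthread->td_critnest++;
}

static void
critical_exit_model(struct thread *next)
{
	struct thread *ctd = curthread;

	ctd->td_critnest--;
	if (ctd->td_critnest == 0 && (ctd->td_flags & TDF_OWEPREEMPT) != 0) {
		/* Leaving the outermost section: perform the owed preemption. */
		ctd->td_flags &= ~TDF_OWEPREEMPT;
		mi_switch_to(next);
	}
}
```

Deferring the switch this way is what makes it safe to wake a higher-priority thread while preemption is temporarily disabled: the preemption is not lost, only postponed until the critical section ends.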