author | jeff <jeff@FreeBSD.org> | 2007-01-23 08:46:51 +0000
committer | jeff <jeff@FreeBSD.org> | 2007-01-23 08:46:51 +0000
commit | 474b917526db60cd113b34f9bbb30e8d252bae24
tree | b133e2bceeb7a9d12a55f7f5eda206c4edcf51e2 /sys/sys/sched.h
parent | f53a7830f79b8d9247e5d2ae879f0a43c42b49fa
- Remove setrunqueue and replace it with direct calls to sched_add().
setrunqueue() was mostly empty. The few asserts and thread state
setting were moved to the individual schedulers. sched_add() was
chosen to displace it for naming consistency reasons.
- Remove adjustrunqueue; it was four lines of code that were ifdef'd to be
  different on all three schedulers, and each scheduler called it in only
  one place.
- Remove the long ifdef'd out remrunqueue code.
- Remove the now redundant ts_state. Inspect the thread state directly.
- Don't set TSF_* flags from kern_switch.c, we were only doing this to
support a feature in one scheduler.
- Change sched_choose() to return a thread rather than a td_sched. Also,
rely on the schedulers to return the idlethread. This simplifies the
logic in choosethread(). Aside from the run queue links kern_switch.c
mostly does not care about the contents of td_sched.
Discussed with: julian
- Move the idle thread loop into the per scheduler area. ULE wants to
do something different from the other schedulers.
Suggested by: jhb
Tested on: x86/amd64 sched_{4BSD, ULE, CORE}.
Diffstat (limited to 'sys/sys/sched.h')
-rw-r--r-- | sys/sys/sched.h | 11 |
1 file changed, 11 insertions, 0 deletions
diff --git a/sys/sys/sched.h b/sys/sys/sched.h
index a9f1748..1342906 100644
--- a/sys/sys/sched.h
+++ b/sys/sys/sched.h
@@ -115,6 +115,8 @@ void sched_clock(struct thread *td);
 void sched_rem(struct thread *td);
 void sched_tick(void);
 void sched_relinquish(struct thread *td);
+struct thread *sched_choose(void);
+void sched_idletd(void *);
 
 /*
  * Binding makes cpu affinity permanent while pinning is used to temporarily
@@ -145,6 +147,15 @@ sched_unpin(void)
 	curthread->td_pinned--;
 }
 
+/* sched_add arguments (formerly setrunqueue) */
+#define SRQ_BORING	0x0000	/* No special circumstances. */
+#define SRQ_YIELDING	0x0001	/* We are yielding (from mi_switch). */
+#define SRQ_OURSELF	0x0002	/* It is ourself (from mi_switch). */
+#define SRQ_INTR	0x0004	/* It is probably urgent. */
+#define SRQ_PREEMPTED	0x0008	/* has been preempted.. be kind */
+#define SRQ_BORROWING	0x0010	/* Priority updated due to prio_lend */
+
+
 /* temporarily here */
 void schedinit(void);
 void sched_init_concurrency(struct proc *p);