author    jhb <jhb@FreeBSD.org>  2001-08-10 22:53:32 +0000
committer jhb <jhb@FreeBSD.org>  2001-08-10 22:53:32 +0000
commit    4a89454dcd75ebc44e557012c2d007934836f9de (patch)
tree      1798843f61bbf42ad4e659497c23572b272969ca /sys/kern/kern_intr.c
parent    63014c2530236dbd3818166d675b28e0e61b427e (diff)
- Close races with signals and other ASTs being triggered while we are in
  the process of exiting the kernel.  The ast() function now loops as long
  as the PS_ASTPENDING or PS_NEEDRESCHED flags are set.  It returns with
  preemption disabled so that any further ASTs that arrive via an
  interrupt will be delayed until the low-level MD code returns to user
  mode.
- Use u_int's to store the tick counts for profiling purposes so that we
do not need sched_lock just to read p_sticks. This also closes a
problem where the call to addupc_task() could screw up the arithmetic
due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
clear_resched(), and resched_wanted() in favor of direct bit operations
on p_sflag.
- Tighten up the locking with sched_lock.  In addupc_intr(), use sched_lock
to ensure pr_addr and pr_ticks are updated atomically with setting
PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
PS_OWEUPC. We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.
Reviewed by: bde (mostly)
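The looping ast() described in the first item can be sketched in user space. This is a minimal illustration, not the kernel function: the flag values, struct proc layout, and the pass counter are stand-ins for the real definitions in <sys/proc.h>.

```c
#include <assert.h>

/* Hypothetical flag values mirroring the p_sflag bits named in the log. */
#define PS_ASTPENDING  0x0001
#define PS_NEEDRESCHED 0x0002

struct proc {
	int p_sflag;
};

/*
 * Loop as long as either flag is set, so an AST posted while we were
 * servicing the previous one is not lost on the way back to user mode.
 * Returns the number of service passes so the loop can be observed.
 */
static int
ast(struct proc *p)
{
	int passes = 0;

	while (p->p_sflag & (PS_ASTPENDING | PS_NEEDRESCHED)) {
		p->p_sflag &= ~(PS_ASTPENDING | PS_NEEDRESCHED);
		passes++;
		/* ... deliver signals, reschedule, charge profiling ... */
	}
	return (passes);
}
```

The real function additionally returns with preemption disabled so that interrupt-delivered ASTs stay pending until the MD return-to-user path.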
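The axed helpers from the third item each reduce to a one-line bit operation on p_sflag. A sketch of the equivalences, using illustrative flag values (in the kernel these operations happen under sched_lock, elided here):

```c
#include <assert.h>

/* Illustrative flag value; the real one lives in <sys/proc.h>. */
#define PS_NEEDRESCHED 0x0002

struct proc {
	int p_sflag;
};

/* need_resched(p) becomes a direct flag set. */
static void
need_resched(struct proc *p)
{
	p->p_sflag |= PS_NEEDRESCHED;
}

/* resched_wanted(p) becomes a direct flag test. */
static int
resched_wanted(struct proc *p)
{
	return ((p->p_sflag & PS_NEEDRESCHED) != 0);
}

/* clear_resched(p) becomes a direct flag clear. */
static void
clear_resched(struct proc *p)
{
	p->p_sflag &= ~PS_NEEDRESCHED;
}
```

Open-coding the bit operations at the call sites removes a layer of MD macros without changing behavior.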
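The sched_lock discipline from the fourth item can be modeled with a pthread mutex standing in for sched_lock. The struct layouts, field names, and flag value below are simplified stand-ins for the kernel's; the point is only that producer and consumer each touch the profile record and PS_OWEUPC under one lock, so neither side sees a half-written update.

```c
#include <assert.h>
#include <pthread.h>

#define PS_OWEUPC 0x0004	/* illustrative value */

/* Cut-down stand-ins for struct proc and its profiling state. */
struct uprof {
	unsigned long pr_addr;
	unsigned int  pr_ticks;
};

struct proc {
	int          p_sflag;
	struct uprof p_prof;
};

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * addupc_intr()-style producer: pr_addr, pr_ticks, and PS_OWEUPC are
 * all updated under sched_lock, so they change atomically as a unit.
 */
static void
addupc_intr(struct proc *p, unsigned long pc, unsigned int ticks)
{
	pthread_mutex_lock(&sched_lock);
	p->p_prof.pr_addr = pc;
	p->p_prof.pr_ticks += ticks;
	p->p_sflag |= PS_OWEUPC;
	pthread_mutex_unlock(&sched_lock);
}

/* ast()-side consumer: clear pr_ticks atomically with PS_OWEUPC. */
static unsigned int
ast_collect_prof(struct proc *p)
{
	unsigned int ticks;

	pthread_mutex_lock(&sched_lock);
	ticks = p->p_prof.pr_ticks;
	p->p_prof.pr_ticks = 0;
	p->p_sflag &= ~PS_OWEUPC;
	pthread_mutex_unlock(&sched_lock);
	return (ticks);
}
```

The "do not grab the lock just to test a flag" point is the converse: a lone read of p_sflag to check a bit needs no lock, since a stale answer only delays the work to the next AST.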
Diffstat (limited to 'sys/kern/kern_intr.c')
-rw-r--r--  sys/kern/kern_intr.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/sys/kern/kern_intr.c b/sys/kern/kern_intr.c
index 07ee598..84dbc6b 100644
--- a/sys/kern/kern_intr.c
+++ b/sys/kern/kern_intr.c
@@ -371,8 +371,8 @@ ithread_schedule(struct ithd *ithread, int do_switch)
 	 * Set it_need to tell the thread to keep running if it is already
 	 * running.  Then, grab sched_lock and see if we actually need to
 	 * put this thread on the runqueue.  If so and the do_switch flag is
-	 * true, then switch to the ithread immediately.  Otherwise, use
-	 * need_resched() to guarantee that this ithread will run before any
+	 * true, then switch to the ithread immediately.  Otherwise, set the
+	 * needresched flag to guarantee that this ithread will run before any
 	 * userland processes.
 	 */
 	ithread->it_need = 1;
@@ -387,7 +387,7 @@ ithread_schedule(struct ithd *ithread, int do_switch)
 			curproc->p_stats->p_ru.ru_nivcsw++;
 			mi_switch();
 		} else
-			need_resched(curproc);
+			curproc->p_sflag |= PS_NEEDRESCHED;
 	} else {
 		CTR3(KTR_INTR, __func__ ": pid %d: it_need %d, state %d",
 		    p->p_pid, ithread->it_need, p->p_stat);