author:    alfred <alfred@FreeBSD.org>  2008-03-22 07:29:45 +0000
committer: alfred <alfred@FreeBSD.org>  2008-03-22 07:29:45 +0000
commit:    b283b3e59a3e18ec4e7cf225a3a9922139733a73 (patch)
tree:      b3443ae24c0f902b2465c50508b739835c6d3931 /sys/kern
parent:    3367c267b34ad2829e2902ae8f45beef27b57591 (diff)
Fix a race where timeout/untimeout could cause crashes for Giant-locked
code.
The bug:
There exists a race condition for timeout/untimeout(9) due to the
way that the softclock thread dequeues timeouts.
The softclock thread sets the c_func and c_arg of the callout to
NULL while holding the callout lock but not Giant. It then drops
the callout lock and tries to acquire Giant.
It is at this point that untimeout(9) could be called from another
cpu/thread.
Since c_arg and c_func have been cleared, untimeout(9) does not touch
the callout and returns as if the callout had already been canceled.
The softclock thread then blocks trying to acquire Giant, which the
other cpu/thread holds.
The other cpu/thread then typically deallocates the backing store that
c_arg points to, finishes its work, and drops Giant.
Softclock resumes, acquires Giant, and calls the function with the
now-freed c_arg, and we have corruption/a crash.
The fix:
We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
callouts. We need to free the callout after the softclock processes
it to deal with the race here.
Obtained from: Juniper Networks, iedowse
Reviewed by: jhb, iedowse
MFC after:	2 weeks
Diffstat (limited to 'sys/kern')
 sys/kern/kern_timeout.c | 23 +++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/sys/kern/kern_timeout.c b/sys/kern/kern_timeout.c
index 70d558c..679430f 100644
--- a/sys/kern/kern_timeout.c
+++ b/sys/kern/kern_timeout.c
@@ -230,11 +230,8 @@ softclock(void *dummy)
 			c_arg = c->c_arg;
 			c_flags = c->c_flags;
 			if (c->c_flags & CALLOUT_LOCAL_ALLOC) {
-				c->c_func = NULL;
 				c->c_flags = CALLOUT_LOCAL_ALLOC;
-				SLIST_INSERT_HEAD(&callfree, c,
-				    c_links.sle);
-				curr_callout = NULL;
+				curr_callout = c;
 			} else {
 				c->c_flags =
 				    (c->c_flags & ~CALLOUT_PENDING);
@@ -299,6 +296,24 @@ softclock(void *dummy)
 				class->lc_unlock(c_lock);
 		skip:
 			mtx_lock_spin(&callout_lock);
+			/*
+			 * If the current callout is locally
+			 * allocated (from timeout(9))
+			 * then put it on the freelist.
+			 *
+			 * Note: we need to check the cached
+			 * copy of c_flags because if it was not
+			 * local, then it's not safe to deref the
+			 * callout pointer.
+			 */
+			if (c_flags & CALLOUT_LOCAL_ALLOC) {
+				KASSERT(c->c_flags ==
+				    CALLOUT_LOCAL_ALLOC,
+				    ("corrupted callout"));
+				c->c_func = NULL;
+				SLIST_INSERT_HEAD(&callfree, c,
+				    c_links.sle);
+			}
 			curr_callout = NULL;
 			if (callout_wait) {
 				/*