author     avg <avg@FreeBSD.org>    2015-09-01 09:27:14 +0000
committer  avg <avg@FreeBSD.org>    2015-09-01 09:27:14 +0000
commit     a047794b25e4ffea843508c1ebb71502679ef5e6 (patch)
tree       497d431e5826e1f23dd07d62dedab1468ac5c07b
parent     cc00887f9b65d88ba65f56e85d24e4f95b061543 (diff)
callout_reset: fix a reversed check for cc_exec_cancel
The typo was introduced in r278469 / 344ecf88af2dfb. As a result of the bug there was a timing window in which callout_reset() would fail to cancel a concurrent execution of a callout that was about to start, and would schedule the callout again. The callout would then fire more times than it was scheduled. That could happen even if the callout was initialized with a lock. For example, the bug triggered the "Stray timeout" assertion in taskqueue_timeout_func().

MFC after:	5 days
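To make the reversed check concrete, here is a minimal userspace sketch of the condition being fixed. It is an illustration only, under assumptions: exec_cancel and reschedule() are invented stand-ins for the kernel's cc_exec_cancel(cc, direct) flag and the reschedule path in callout_reset_sbt_on(); this is not the kernel code.

/*
 * Minimal sketch of the reversed-check bug.  exec_cancel and
 * reschedule() are hypothetical names for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

static bool exec_cancel;	/* stands in for cc_exec_cancel(cc, direct) */

/*
 * Reschedule path as fixed by this commit: claim the cancel slot only
 * when the about-to-run execution has not already been cancelled.
 */
static bool
reschedule(bool have_lock)
{
	bool cancelled = false;

	if (have_lock && !exec_cancel)		/* the fix adds this '!' */
		cancelled = exec_cancel = true;
	return (cancelled);
}

/*
 * The buggy variant tested the flag without the '!'.  On the common
 * path the flag is still false, so the pending execution was never
 * cancelled, yet the callout was queued again: one schedule, two fires.
 */
int
main(void)
{
	bool cancelled = reschedule(true);

	printf("cancelled=%d exec_cancel=%d\n", cancelled, exec_cancel);
	return (0);
}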
-rw-r--r--	sys/kern/kern_timeout.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sys/kern/kern_timeout.c b/sys/kern/kern_timeout.c
index 01da596..71c88e0 100644
--- a/sys/kern/kern_timeout.c
+++ b/sys/kern/kern_timeout.c
@@ -1032,7 +1032,7 @@ callout_reset_sbt_on(struct callout *c, sbintime_t sbt, sbintime_t precision,
 		 * currently in progress. If there is a lock then we
 		 * can cancel the callout if it has not really started.
 		 */
-		if (c->c_lock != NULL && cc_exec_cancel(cc, direct))
+		if (c->c_lock != NULL && !cc_exec_cancel(cc, direct))
			cancelled = cc_exec_cancel(cc, direct) = true;
 		if (cc_exec_waiting(cc, direct)) {
 			/*
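The commit message notes that the extra firings happened even for callouts initialized with a lock, which is how the "Stray timeout" assertion in taskqueue_timeout_func() was reached. As a rough sketch of that kind of consumer (my_mtx, my_co, my_timeout and my_setup are invented names; callout_init_mtx(9) and callout_reset(9) are the real interfaces), a lock-backed callout is typically set up like this:

/*
 * Hypothetical lock-backed callout consumer.  With the reversed check,
 * a callout_reset() racing the pending execution could leave the old
 * run uncancelled, so my_timeout() fired more often than scheduled.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/callout.h>

static struct mtx	my_mtx;
static struct callout	my_co;

static void
my_timeout(void *arg)
{
	/* Runs with my_mtx held because of callout_init_mtx(). */
}

static void
my_setup(void)
{
	mtx_init(&my_mtx, "my_mtx", NULL, MTX_DEF);
	callout_init_mtx(&my_co, &my_mtx, 0);

	mtx_lock(&my_mtx);
	/*
	 * Rescheduling an armed callout; the fixed check ensures the
	 * pending execution is cancelled instead of firing as well.
	 */
	callout_reset(&my_co, hz, my_timeout, NULL);
	mtx_unlock(&my_mtx);
}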