commit 3c345825c899df0751b01143b159ddaefb91a220
Author:    Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2017-03-04 12:33:53 -0800
Committer: Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2017-04-18 11:38:19 -0700
Parent:    2e8c28c2dd96c6f1f2d454a4e4b928385841e247
rcu: Expedited wakeups need to be fully ordered
Expedited grace periods use workqueue handlers that wake up the requesters,
but there is no lock mediating this wakeup. Therefore, memory barriers
are required to ensure that the handler's memory references are seen by
all to occur before synchronize_*_expedited() returns to its caller.
Possibly detected by syzkaller.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'kernel')
 kernel/rcu/tree_exp.h | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 51ca287..027e123 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -533,6 +533,7 @@ static void rcu_exp_wait_wake(struct rcu_state *rsp, unsigned long s)
 			rnp->exp_seq_rq = s;
 		spin_unlock(&rnp->exp_lock);
 	}
+	smp_mb(); /* All above changes before wakeup. */
 	wake_up_all(&rnp->exp_wq[(rsp->expedited_sequence >> 1) & 0x3]);
 }
 trace_rcu_exp_grace_period(rsp->name, s, TPS("endwake"));
@@ -614,6 +615,7 @@ static void _synchronize_rcu_expedited(struct rcu_state *rsp,
 	wait_event(rnp->exp_wq[(s >> 1) & 0x3],
 		   sync_exp_work_done(rsp, &rdp->exp_workdone0, s));
+	smp_mb(); /* Workqueue actions happen before return. */
 
 	/* Let the next expedited grace period start. */
 	mutex_unlock(&rsp->exp_mutex);