author     Bharata B Rao <bharata@linux.vnet.ibm.com>   2009-03-23 10:02:53 +0530
committer  Ingo Molnar <mingo@elte.hu>                  2009-03-31 18:27:59 +0200
commit     a18b83b7ef3c98cd8b4bb885e4a649a8f30fb7b0 (patch)
tree       4e93447c0099672e2bedbc4374007a42f105e132
parent     2f8501815256af8498904e68bd0984b1afffd6f8 (diff)
cpuacct: make cpuacct hierarchy walk in cpuacct_charge() safe when rcupreempt is used -v2
Impact: fix cgroups race under rcu-preempt

cpuacct_charge() obtains the task's ca and does a hierarchy walk upwards.
This can race with the task's movement between cgroups. This race can
cause an access to a freed ca pointer in cpuacct_charge() or an access to
an invalid cgroups pointer of the task. This will not happen with rcu or
tree rcu, as cpuacct_charge() is called with preemption disabled. However,
if rcupreempt is used, the race is seen. Thanks to Li Zefan for
explaining this.

Fix this race by explicitly protecting ca and the hierarchy walk with
rcu_read_lock().

Changes for v2:
- Update patch description (as per Li Zefan's review comments).
- Remove comments in cpuacct_charge() which explained why rcu_read_lock()
  was needed (as per Peter Zijlstra's review comments).

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Paul Menage <menage@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-rw-r--r--  kernel/sched.c  5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 186c6fd..cc397aa 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -9654,12 +9654,17 @@ static void cpuacct_charge(struct task_struct *tsk, u64 cputime)
 		return;
 
 	cpu = task_cpu(tsk);
+
+	rcu_read_lock();
+
 	ca = task_ca(tsk);
 
 	for (; ca; ca = ca->parent) {
 		u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);
 		*cpuusage += cputime;
 	}
+
+	rcu_read_unlock();
 }
 
 struct cgroup_subsys cpuacct_subsys = {
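
For readability, here is roughly what cpuacct_charge() in kernel/sched.c looks
like with this patch applied. The local declarations and the
cpuacct_subsys.active check are not part of the hunk above; they are assumed
from the surrounding code and shown only as context, so treat this as a sketch
rather than the literal file contents. The explanatory comment is likewise
added here for illustration; the committed v2 deliberately carries no such
comment, per Peter Zijlstra's review.

static void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	struct cpuacct *ca;
	int cpu;

	if (unlikely(!cpuacct_subsys.active))
		return;

	cpu = task_cpu(tsk);

	/*
	 * task_ca(tsk) and the ca->parent chain are only guaranteed to
	 * stay valid inside an RCU read-side critical section.  With
	 * classic or tree RCU the caller's disabled preemption was
	 * enough to hold off a grace period; with rcupreempt it is not,
	 * so the hierarchy walk is wrapped in rcu_read_lock() explicitly.
	 */
	rcu_read_lock();

	ca = task_ca(tsk);

	for (; ca; ca = ca->parent) {
		u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);
		*cpuusage += cputime;
	}

	rcu_read_unlock();
}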