author		Gerald Schaefer <gerald.schaefer@de.ibm.com>	2010-11-22 15:47:36 +0100
committer	Ingo Molnar <mingo@elte.hu>	2010-11-26 15:05:34 +0100
commit		335d7afbfb71faac833734a94240c1e07cf0ead8 (patch)
tree		483a24c6005af6eb25a256d92a697411d94a7458	/kernel/sched.c
parent		22a867d81707b0a2720bb5f65255265b95d30526 (diff)
mutexes, sched: Introduce arch_mutex_cpu_relax()
The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do more
than needed in these specific mutex spin loops. On System z we also give
up the time slice of the virtual cpu in cpu_relax(), which prevents
effective spinning on the mutex.

This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture that
selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
this patch should not affect other architectures than System z for now.

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
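For context, a minimal sketch of the fallback mechanism the message describes: unless an architecture selects HAVE_ARCH_MUTEX_CPU_RELAX and supplies its own definition in its asm/mutex.h, arch_mutex_cpu_relax() is expected to fall back to cpu_relax(). The exact header layout in the tree may differ; this is illustrative only.

/*
 * Sketch of the default definition: architectures that do not select
 * HAVE_ARCH_MUTEX_CPU_RELAX get plain cpu_relax(), so behaviour is
 * unchanged for them.
 */
#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
#define arch_mutex_cpu_relax()	cpu_relax()
#endif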
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--	kernel/sched.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 3e8a7db..abe7aec 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -75,6 +75,7 @@
 
 #include <asm/tlb.h>
 #include <asm/irq_regs.h>
+#include <asm/mutex.h>
 
 #include "sched_cpupri.h"
 #include "workqueue_sched.h"
@@ -3888,7 +3889,7 @@ int mutex_spin_on_owner(struct mutex *lock, struct thread_info *owner)
 		if (task_thread_info(rq->curr) != owner || need_resched())
 			return 0;
 
-		cpu_relax();
+		arch_mutex_cpu_relax();
 	}
 
 	return 1;
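On an architecture like System z, the override described in the commit message would reduce the spin-loop hint to a pure compiler barrier, so the virtual cpu no longer gives up its time slice on every iteration of the owner-spin loop. A minimal sketch of what such an asm/mutex.h override could look like (illustrative, not copied verbatim from the tree):

/*
 * Illustrative arch override (e.g. in the s390 asm/mutex.h), used when
 * the architecture selects HAVE_ARCH_MUTEX_CPU_RELAX: keep only the
 * compiler barrier the mutex spin loop needs, instead of the heavier
 * cpu_relax(), which on System z also yields the virtual cpu.
 */
#define arch_mutex_cpu_relax()	barrier()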