author    Linus Torvalds <torvalds@linux-foundation.org>    2008-05-10 20:58:02 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2008-05-10 20:58:02 -0700
commit    8e3e076c5a78519a9f64cd384e8f18bc21882ce0 (patch)
tree      f032258fde3aa4771e86bf4552fe4530c221dec3 /kernel
parent    00b41ec2611dc98f87f30753ee00a53db648d662 (diff)
BKL: revert back to the old spinlock implementation
The generic semaphore rewrite had a huge performance regression on AIM7 (and potentially other BKL-heavy benchmarks) because the generic semaphores had been rewritten to be simple to understand and fair.  The latter, in particular, turns a semaphore-based BKL implementation into a mess of scheduling.

The attempt to fix the performance regression failed miserably (see the previous commit 00b41ec2611dc98f87f30753ee00a53db648d662 'Revert "semaphore: fix"'), and so for now the simple and sane approach is to instead just go back to the old spinlock-based BKL implementation that never had any issues like this.

This patch also has the advantage of being reported to fix the regression completely according to Yanmin Zhang, unlike the semaphore hack which still left a couple percentage point regression.

As a spinlock, the BKL obviously has the potential to be a latency issue, but it's not really any different from any other spinlock in that respect.  We do want to get rid of the BKL asap, but that has been the plan for several years.

These days, the biggest users are in the tty layer (open/release in particular) and Alan holds out some hope:

  "tty release is probably a few months away from getting cured - I'm afraid it will almost certainly be the very last user of the BKL in tty to get fixed as it depends on everything else being sanely locked."

so while we're not there yet, we do have a plan of action.

Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Alexander Viro <viro@ftp.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
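For context before the diff below, here is a minimal sketch of the technique this commit reinstates: one global spinlock made recursive by a per-task depth counter, so that nested lock_kernel()/unlock_kernel() calls only touch the underlying lock at the outermost level.  This is an illustrative user-space model in plain C, not the kernel's code; the names (model_lock_kernel, big_lock, a thread-local lock_depth) are invented for illustration, while the real implementation lives in lib/kernel_lock.c and kernel/sched.c.

/*
 * Illustrative model only: one process-wide spinlock made recursive by a
 * per-thread depth counter.  Only the outermost lock/unlock touches the
 * spinlock; nested calls just adjust the counter, mirroring how the BKL
 * uses current->lock_depth.
 */
#include <pthread.h>

static pthread_spinlock_t big_lock;       /* stands in for the BKL's lock        */
static __thread int lock_depth = -1;      /* -1 means "not held by this thread"  */

__attribute__((constructor))
static void model_init(void)
{
	pthread_spin_init(&big_lock, PTHREAD_PROCESS_PRIVATE);
}

void model_lock_kernel(void)
{
	int depth = lock_depth + 1;

	if (depth == 0)                   /* outermost acquisition only          */
		pthread_spin_lock(&big_lock);
	lock_depth = depth;
}

void model_unlock_kernel(void)
{
	if (--lock_depth < 0)             /* last unlock releases the lock       */
		pthread_spin_unlock(&big_lock);
}

The design point is that a spinlock keeps waiters spinning on the lock word rather than bouncing through the scheduler, which is exactly the "mess of scheduling" the fair generic semaphore imposed on BKL-heavy workloads like AIM7.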
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/sched.c   27
1 file changed, 4 insertions(+), 23 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 58fb8af..c51b656 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4567,8 +4567,6 @@ EXPORT_SYMBOL(schedule);
 asmlinkage void __sched preempt_schedule(void)
 {
 	struct thread_info *ti = current_thread_info();
-	struct task_struct *task = current;
-	int saved_lock_depth;
 
 	/*
 	 * If there is a non-zero preempt_count or interrupts are disabled,
@@ -4579,16 +4577,7 @@ asmlinkage void __sched preempt_schedule(void)
 
 	do {
 		add_preempt_count(PREEMPT_ACTIVE);
-
-		/*
-		 * We keep the big kernel semaphore locked, but we
-		 * clear ->lock_depth so that schedule() doesnt
-		 * auto-release the semaphore:
-		 */
-		saved_lock_depth = task->lock_depth;
-		task->lock_depth = -1;
 		schedule();
-		task->lock_depth = saved_lock_depth;
 		sub_preempt_count(PREEMPT_ACTIVE);
 
 		/*
@@ -4609,26 +4598,15 @@ EXPORT_SYMBOL(preempt_schedule);
 asmlinkage void __sched preempt_schedule_irq(void)
 {
 	struct thread_info *ti = current_thread_info();
-	struct task_struct *task = current;
-	int saved_lock_depth;
 
 	/* Catch callers which need to be fixed */
 	BUG_ON(ti->preempt_count || !irqs_disabled());
 
 	do {
 		add_preempt_count(PREEMPT_ACTIVE);
-
-		/*
-		 * We keep the big kernel semaphore locked, but we
-		 * clear ->lock_depth so that schedule() doesnt
-		 * auto-release the semaphore:
-		 */
-		saved_lock_depth = task->lock_depth;
-		task->lock_depth = -1;
 		local_irq_enable();
 		schedule();
 		local_irq_disable();
-		task->lock_depth = saved_lock_depth;
 		sub_preempt_count(PREEMPT_ACTIVE);
 
 		/*
@@ -5853,8 +5831,11 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
 	spin_unlock_irqrestore(&rq->lock, flags);
 
 	/* Set the preempt count _outside_ the spinlocks! */
+#if defined(CONFIG_PREEMPT)
+	task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
+#else
 	task_thread_info(idle)->preempt_count = 0;
-
+#endif
 	/*
 	 * The idle tasks have their own, simple scheduling class:
 	 */
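The two preempt_schedule hunks above drop the lock_depth save/restore whose deleted comment explains its purpose: with the semaphore-based BKL, schedule() would otherwise have auto-released the semaphore across an involuntary preemption.  With the spinlock BKL back, that juggling is no longer needed; schedule() already releases and reacquires the kernel lock around a sleep based on lock_depth.  Continuing the user-space sketch above (same hypothetical big_lock and lock_depth; the kernel's actual helpers for this live in kernel/sched.c), that mechanism looks roughly like:

/*
 * Model of schedule()-time handling of the big lock: a task that blocks
 * while holding it drops the underlying spinlock but keeps its depth, and
 * takes the spinlock back before it runs again.
 */
void model_release_on_schedule(void)
{
	if (lock_depth >= 0)              /* held: drop the lock, keep depth */
		pthread_spin_unlock(&big_lock);
}

void model_reacquire_after_schedule(void)
{
	if (lock_depth >= 0)              /* re-take it before resuming      */
		pthread_spin_lock(&big_lock);
}

The init_idle hunk fits the same picture: on CONFIG_PREEMPT kernels a held spinlock is reflected in preempt_count, so an idle task that already holds the kernel lock (lock_depth >= 0) presumably needs its preempt_count initialised to 1 rather than 0 so the count stays balanced when the lock is eventually released.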