From 64c7c8f88559624abdbe12b5da6502e8879f8d28 Mon Sep 17 00:00:00 2001
From: Nick Piggin
Date: Tue, 8 Nov 2005 21:39:04 -0800
Subject: [PATCH] sched: resched and cpu_idle rework

Make some changes to the NEED_RESCHED and POLLING_NRFLAG flags to reduce
confusion and make their semantics rigid.  Improves the efficiency of
resched_task and some cpu_idle routines.

* In resched_task:

- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
  and since we hold it during resched_task, there is no need for an
  atomic test-and-set there.  The only other time this flag is set is
  when the task's quantum expires, in the timer interrupt - this is
  protected against because the rq lock is irq-safe.

- If TIF_NEED_RESCHED is already set, we don't need to do anything.  It
  won't get cleared until the task gets schedule()d off.

- If we are running on the same CPU as the task we resched, then set
  TIF_NEED_RESCHED; no further action is required.

- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
  after TIF_NEED_RESCHED has been set, then we need to send an IPI.

Using these rules, we are able to remove the test-and-set operation in
resched_task, and make clear the previously vague semantics of
POLLING_NRFLAG.  (A rough sketch of the resulting resched_task() logic
follows after the patch below.)

* In idle routines:

- Enter cpu_idle with preempt disabled.  When the need_resched() condition
  becomes true, explicitly call schedule().  This makes things a bit
  clearer (IMO), but I haven't updated all architectures yet.

- Many do a test-and-clear of TIF_NEED_RESCHED for some reason.  According
  to the resched_task rules, this isn't needed (and actually breaks the
  assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
  held).  So remove that.  Generally one less locked memory op when
  switching to the idle thread.

- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
  innermost polling idle loops.  The above resched_task semantics allow it
  to be left set until the last time need_resched() is checked before
  going into a halt requiring an interrupt wakeup.  Many idle routines
  simply never enter such a halt, so POLLING_NRFLAG can always be left
  set, completely eliminating resched IPIs when rescheduling the idle
  task.  Widening the window in which POLLING_NRFLAG stays set further
  reduces the chance of resched IPIs.  (An illustrative idle-loop sketch
  also follows after the patch.)

Signed-off-by: Nick Piggin
Cc: Ingo Molnar
Cc: Con Kolivas
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/s390/kernel/process.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

(limited to 'arch/s390')

diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
index 66ca575..78b64fe5 100644
--- a/arch/s390/kernel/process.c
+++ b/arch/s390/kernel/process.c
@@ -99,14 +99,15 @@ void default_idle(void)
 {
 	int cpu, rc;
 
+	/* CPU is going idle. */
+	cpu = smp_processor_id();
+
 	local_irq_disable();
-	if (need_resched()) {
+        if (need_resched()) {
 		local_irq_enable();
-		return;
-	}
+                return;
+        }
 
-	/* CPU is going idle. */
-	cpu = smp_processor_id();
 	rc = notifier_call_chain(&idle_chain, CPU_IDLE, (void *)(long) cpu);
 	if (rc != NOTIFY_OK && rc != NOTIFY_DONE)
 		BUG();
@@ -119,7 +120,7 @@ void default_idle(void)
 	__ctl_set_bit(8, 15);
 
 #ifdef CONFIG_HOTPLUG_CPU
-	if (cpu_is_offline(smp_processor_id()))
+	if (cpu_is_offline(cpu))
 		cpu_die();
 #endif
 
--
cgit v1.1
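
For illustration, the resched_task rules above boil down to roughly the
following.  This is a simplified sketch, not the kernel/sched.c hunk itself
(that file is outside this arch/s390-limited view); the helpers used
(test_tsk_thread_flag, set_tsk_thread_flag, task_cpu, smp_send_reschedule)
are standard kernel APIs, but the body is condensed for clarity.

#include <linux/sched.h>
#include <linux/smp.h>

/*
 * Sketch of resched_task() under the rules in the changelog.  The caller
 * holds p's runqueue lock, which is what makes the plain (non-atomic)
 * test-then-set below safe.
 */
static void resched_task(struct task_struct *p)
{
	int cpu;

	/* Already marked: the flag stays set until p is schedule()d off. */
	if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
		return;

	set_tsk_thread_flag(p, TIF_NEED_RESCHED);

	cpu = task_cpu(p);
	if (cpu == smp_processor_id())
		return;		/* same CPU: setting the flag is enough */

	/* Make NEED_RESCHED visible before testing POLLING_NRFLAG. */
	smp_mb();
	if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
		smp_send_reschedule(cpu);	/* remote CPU not polling: IPI */
}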
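
The idle-routine side can be sketched like this, assuming an architecture
that can simply poll need_resched() instead of halting, so that
TIF_POLLING_NRFLAG stays set the whole time and rescheduling the idle task
never needs an IPI.  Again illustrative only, using generic kernel APIs;
this is not the s390 code touched above.

#include <linux/sched.h>
#include <linux/preempt.h>
#include <asm/thread_info.h>

void cpu_idle(void)
{
	/* Entered with preemption disabled, per the new convention. */
	set_thread_flag(TIF_POLLING_NRFLAG);

	while (1) {
		/*
		 * Poll: a remote resched_task() sees POLLING_NRFLAG set
		 * and skips the resched IPI.
		 */
		while (!need_resched())
			cpu_relax();

		/* NEED_RESCHED is set; call schedule() explicitly. */
		preempt_enable_no_resched();
		schedule();
		preempt_disable();
	}
}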