author      Viresh Kumar <viresh.kumar@linaro.org>    2013-06-04 13:10:24 +0530
committer   Ingo Molnar <mingo@kernel.org>            2013-06-19 12:58:42 +0200
commit      0a0fca9d832b704f116a25badd1ca8c16771dcac
tree        499c5502a79447c84ad1d70d1e976083f2f071dc /Documentation
parent      8404c90d050733b3404dc36c500f63ccb0c972ce
download    op-kernel-dev-0a0fca9d832b704f116a25badd1ca8c16771dcac.zip
            op-kernel-dev-0a0fca9d832b704f116a25badd1ca8c16771dcac.tar.gz
sched: Rename sched.c as sched/core.c in comments and Documentation
Most of the code from kernel/sched.c was moved to kernel/sched/core.c a long
time ago, but the comments and Documentation were never updated to match.

I noticed this while going through sched-domains.txt, so I thought of fixing
it globally.

I haven't cross-checked whether the material these files reference in
sched/core.c is still present and unchanged, as that wasn't the motive behind
this patch.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/cdff76a265326ab8d71922a1db5be599f20aad45.1370329560.git.viresh.kumar@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
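The hunks below only touch Documentation/, but the same kind of stale path
reference can hide anywhere in the tree. As a minimal sketch (not part of the
original patch; the directory name and the *.txt glob are assumptions about
the tree being scanned), a small script like the following could list every
Documentation line that still mentions the old kernel/sched.c path so it can
be updated to kernel/sched/core.c:

    #!/usr/bin/env python3
    # Minimal sketch, not from the original patch: report Documentation/ lines
    # that still reference the pre-split kernel/sched.c path. The directory
    # name and the *.txt glob are assumptions about the tree being scanned.
    import pathlib
    import re

    # The path that moved to kernel/sched/core.c.
    OLD_REF = re.compile(r"kernel/sched\.c")

    for path in sorted(pathlib.Path("Documentation").rglob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if OLD_REF.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

Run from the top of a 2013-era tree, a scan along these lines would flag the
kind of references the hunks below update.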
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/cgroups/cpusets.txt                  | 2
-rw-r--r--  Documentation/rt-mutex-design.txt                  | 2
-rw-r--r--  Documentation/scheduler/sched-domains.txt          | 4
-rw-r--r--  Documentation/spinlocks.txt                        | 2
-rw-r--r--  Documentation/virtual/uml/UserModeLinux-HOWTO.txt  | 4
5 files changed, 7 insertions, 7 deletions
diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index 12e01d4..7740038 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -373,7 +373,7 @@ can become very uneven.
 1.7 What is sched_load_balance ?
 --------------------------------
 
-The kernel scheduler (kernel/sched.c) automatically load balances
+The kernel scheduler (kernel/sched/core.c) automatically load balances
 tasks. If one CPU is underutilized, kernel code running on that
 CPU will look for tasks on other more overloaded CPUs and move those
 tasks to itself, within the constraints of such placement mechanisms
diff --git a/Documentation/rt-mutex-design.txt b/Documentation/rt-mutex-design.txt
index 33ed800..a5bcd7f 100644
--- a/Documentation/rt-mutex-design.txt
+++ b/Documentation/rt-mutex-design.txt
@@ -384,7 +384,7 @@ priority back.
 __rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and
 if the result does not equal the task's current priority, then rt_mutex_setprio
 is called to adjust the priority of the task to the new priority.
-Note that rt_mutex_setprio is defined in kernel/sched.c to implement the
+Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
 actual change in priority.
 
 It is interesting to note that __rt_mutex_adjust_prio can either increase
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
index 443f0c7..4af80b1 100644
--- a/Documentation/scheduler/sched-domains.txt
+++ b/Documentation/scheduler/sched-domains.txt
@@ -25,7 +25,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
@@ -62,7 +62,7 @@ struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
 Architectures may retain the regular override the default SD_*_INIT flags
-while using the generic domain builder in kernel/sched.c if they wish to
+while using the generic domain builder in kernel/sched/core.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
 can be done by #define'ing ARCH_HASH_SCHED_TUNE.
 
diff --git a/Documentation/spinlocks.txt b/Documentation/spinlocks.txt
index 9dbe885..97eaf57 100644
--- a/Documentation/spinlocks.txt
+++ b/Documentation/spinlocks.txt
@@ -137,7 +137,7 @@ don't block on each other (and thus there is no dead-lock wrt interrupts.
 But when you do the write-lock, you have to use the irq-safe version.
 
 For an example of being clever with rw-locks, see the "waitqueue_lock"
-handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
+handling in kernel/sched/core.c - nothing ever _changes_ a wait-queue from
 within an interrupt, they only read the queue in order to know whom to
 wake up. So read-locks are safe (which is good: they are very common
 indeed), while write-locks need to protect themselves against interrupts.
diff --git a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
index a5f8436..f4099ca 100644
--- a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
+++ b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
@@ -3127,7 +3127,7 @@ at process_kern.c:156
   #3  0x1006a052 in switch_to (prev=0x50072000, next=0x507e8000, last=0x50072000)
       at process_kern.c:161
-  #4  0x10001d12 in schedule () at sched.c:777
+  #4  0x10001d12 in schedule () at core.c:777
   #5  0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
   #6  0x1006aa10 in __down_failed () at semaphore.c:157
   #7  0x1006c5d8 in segv_handler (sc=0x5006e940) at trap_user.c:174
@@ -3191,7 +3191,7 @@ at process_kern.c:161
   161         _switch_to(prev, next);
   (gdb)
-  #4  0x10001d12 in schedule () at sched.c:777
+  #4  0x10001d12 in schedule () at core.c:777
   777         switch_to(prev, next, prev);
   (gdb)
   #5  0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71