path: root/kernel/sched_stats.h
Commit log, newest first. Each entry lists the commit subject, author, commit date, and the files/lines changed (-removed/+added).
* Merge branch 'proc' of git://git.kernel.org/pub/scm/linux/kernel/git/adobriyan/proc
      Linus Torvalds, 2008-10-23 (1 file changed, -1/+8)

      * 'proc' of git://git.kernel.org/pub/scm/linux/kernel/git/adobriyan/proc: (35 commits)
        proc: remove fs/proc/proc_misc.c
        proc: move /proc/vmcore creation to fs/proc/vmcore.c
        proc: move pagecount stuff to fs/proc/page.c
        proc: move all /proc/kcore stuff to fs/proc/kcore.c
        proc: move /proc/schedstat boilerplate to kernel/sched_stats.h
        proc: move /proc/modules boilerplate to kernel/module.c
        proc: move /proc/diskstats boilerplate to block/genhd.c
        proc: move /proc/zoneinfo boilerplate to mm/vmstat.c
        proc: move /proc/vmstat boilerplate to mm/vmstat.c
        proc: move /proc/pagetypeinfo boilerplate to mm/vmstat.c
        proc: move /proc/buddyinfo boilerplate to mm/vmstat.c
        proc: move /proc/vmallocinfo to mm/vmalloc.c
        proc: move /proc/slabinfo boilerplate to mm/slub.c, mm/slab.c
        proc: move /proc/slab_allocators boilerplate to mm/slab.c
        proc: move /proc/interrupts boilerplate code to fs/proc/interrupts.c
        proc: move /proc/stat to fs/proc/stat.c
        proc: move rest of /proc/partitions code to block/genhd.c
        proc: move /proc/cpuinfo code to fs/proc/cpuinfo.c
        proc: move /proc/devices code to fs/proc/devices.c
        proc: move rest of /proc/locks to fs/locks.c
        ...
| * proc: move /proc/schedstat boilerplate to kernel/sched_stats.h
      Alexey Dobriyan, 2008-10-23 (1 file changed, -1/+8)

      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
* | Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
      Linus Torvalds, 2008-10-23 (1 file changed, -1/+1)

      * 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
        sched: disable the hrtick for now
        sched: revert back to per-rq vruntime
        sched: fair scheduler should not resched rt tasks
        sched: optimize group load balancer
        sched: minor fast-path overhead reduction
        sched: fix the wrong mask_len, cleanup
        sched: kill unused scheduler decl.
        sched: fix the wrong mask_len
        sched: only update rq->clock while holding rq->lock
| * sched: fix the wrong mask_len, cleanup
      Peter Zijlstra, 2008-10-17 (1 file changed, -1/+1)

      Clean up the division in show_schedstat().

      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * sched: fix the wrong mask_len
      Miao Xie, 2008-10-17 (1 file changed, -1/+1)

      If NR_CPUS isn't a multiple of 32, we get a truncated string of sched
      domains by catting /proc/schedstat. This is caused by the wrong
      mask_len. This patch fixes it.

      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
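      The arithmetic behind the fix, as a standalone illustration (the
      constants and names below are assumptions for the example, not the
      upstream hunk): the /proc buffer is sized from the number of 32-bit
      words in the cpumask, and rounding that word count down drops the
      trailing CPUs whenever NR_CPUS is not a multiple of 32; rounding it up
      is what the fix amounts to.

        #include <stdio.h>

        #define NR_CPUS 40              /* hypothetical CPU count, not a multiple of 32 */
        #define CHARS_PER_WORD 9        /* assumed room per printed 32-bit word of the mask */

        int main(void)
        {
                /* buggy sizing: 40/32 rounds down to 1 word, CPUs 32-39 get truncated */
                int buggy_len = NR_CPUS / 32 * CHARS_PER_WORD;

                /* fixed sizing: round the word count up, so 40 CPUs need 2 words */
                int fixed_len = (NR_CPUS + 31) / 32 * CHARS_PER_WORD;

                printf("buggy mask_len = %d, fixed mask_len = %d\n", buggy_len, fixed_len);
                return 0;
        }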
* | timers: fix itimer/many thread hang, v3
      Frank Mayhar, 2008-09-27 (1 file changed, -88/+38)

      - fix UP lockup
      - another set of UP/SMP cleanups and simplifications

      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | timers: fix itimer/many thread hang, v2
      Frank Mayhar, 2008-09-23 (1 file changed, -0/+136)

      This is the second resubmission of the posix timer rework patch, posted
      a few days ago. This includes the changes from the previous
      resubmission, which addressed Oleg Nesterov's comments, removing the
      RCU stuff from the patch and un-inlining the thread_group_cputime()
      function for SMP.

      In addition, per Ingo Molnar it simplifies the UP code, consolidating
      much of it with the SMP version and depending on lower-level SMP/UP
      handling to take care of the differences. It also cleans up some UP
      compile errors, moves the scheduler stats-related macros into
      kernel/sched_stats.h, cleans up a merge error in kernel/fork.c and has
      a few other minor fixes and cleanups as suggested by Oleg and Ingo.
      Thanks for the review, guys.

      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix accounting in task delay accounting & migration
      Ankita Garg, 2008-07-04 (1 file changed, -9/+33)

      On Thu, Jun 19, 2008 at 12:27:14PM +0200, Peter Zijlstra wrote:
      > On Thu, 2008-06-05 at 10:50 +0530, Ankita Garg wrote:
      >
      > > Thanks Peter for the explanation...
      > >
      > > I agree with the above and that is the reason why I did not see weird
      > > values with cpu_time. But, run_delay still would suffer skews as the end
      > > points for delta could be taken on different cpus due to migration (more
      > > so on RT kernel due to the push-pull operations). With the below patch,
      > > I could not reproduce the issue I had seen earlier. After every dequeue,
      > > we take the delta and start wait measurements from zero when moved to a
      > > different rq.
      >
      > OK, so task delay accounting is broken because it doesn't take
      > migration into account.
      >
      > What you've done is make it symmetric wrt enqueue, and account it like
      >
      >   cpu0            cpu1
      >
      >   enqueue
      >   <wait-d1>
      >   dequeue
      >                   enqueue
      >                   <wait-d2>
      >                   run
      >
      > Where you add both d1 and d2 to the run_delay,.. right?

      Thanks for reviewing the patch. The above is exactly what I have done.

      > This seems like a good fix, however it looks like the patch will break
      > compilation in !CONFIG_SCHEDSTATS && !CONFIG_TASK_DELAY_ACCT, of it
      > failing to provide a stub for sched_info_dequeue() in that case.

      Fixed. Pl. find the new patch below.

      Signed-off-by: Ankita Garg <ankita@in.ibm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Gregory Haskins <ghaskins@novell.com>
      Cc: rostedt@goodmis.org
      Cc: suresh.b.siddha@intel.com
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dhaval@linux.vnet.ibm.com
      Cc: vatsa@linux.vnet.ibm.com
      Cc: David Bahi <DBahi@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
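      A toy userspace model of the accounting made symmetric here (the clock,
      helper names, and fields are invented for illustration and are not the
      kernel code): folding the accumulated wait into run_delay at dequeue
      time means a task migrated to another runqueue restarts its wait
      measurement from zero, so both d1 and d2 from the diagram above end up
      in run_delay.

        #include <stdio.h>

        /* invented per-task delay-accounting state for the example */
        struct sched_info { unsigned long long last_queued, run_delay; };

        static unsigned long long now;  /* stand-in for the runqueue clock */

        static void info_queued(struct sched_info *si)
        {
                if (!si->last_queued)
                        si->last_queued = now;
        }

        /* the symmetric half: charge the wait so far when the task is dequeued */
        static void info_dequeued(struct sched_info *si)
        {
                if (si->last_queued) {
                        si->run_delay += now - si->last_queued;
                        si->last_queued = 0;
                }
        }

        int main(void)
        {
                struct sched_info si = { 0, 0 };

                now = 100; info_queued(&si);    /* enqueued on cpu0 */
                now = 130; info_dequeued(&si);  /* migrated off cpu0: d1 = 30 */
                now = 130; info_queued(&si);    /* enqueued on cpu1 */
                now = 150; info_dequeued(&si);  /* finally gets the CPU: d2 = 20 */

                printf("run_delay = %llu\n", si.run_delay);     /* 50 = d1 + d2 */
                return 0;
        }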
* sched, delay accounting: fix incorrect delay time when constantly waiting on runqueue
      Bharath Ravi, 2008-06-19 (1 file changed, -0/+6)

      This patch corrects the incorrect value of per process run-queue wait
      time reported by delay statistics. The anomaly was due to the following
      reason. When a process leaves the CPU and immediately starts waiting
      for CPU on the runqueue (which means it remains in the TASK_RUNNABLE
      state), the time of re-entry into the run-queue is never recorded. Due
      to this, the waiting time on the runqueue from this point of re-entry
      up to the next time it hits the CPU is not accounted for.

      This is solved by recording the time of re-entry of a process leaving
      the CPU in the sched_info_depart() function IF the process will go back
      to waiting on the run-queue. This IF condition is verified by checking
      whether the process is still in the TASK_RUNNABLE state.

      The patch was tested on 2.6.26-rc6 using two simple CPU hog programs.
      The values noted prior to the fix did not account for the time spent on
      the runqueue waiting. After the fix, the correct values were reported
      back to user space.

      Signed-off-by: Bharath Ravi <bharathravi1@gmail.com>
      Signed-off-by: Madhava K R <madhavakr@gmail.com>
      Cc: dhaval@linux.vnet.ibm.com
      Cc: vatsa@in.ibm.com
      Cc: balbir@in.ibm.com
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
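      A similarly hedged toy model of this fix (names invented, not the
      kernel code): when a task leaves the CPU while still runnable, stamping
      its re-entry onto the runqueue lets the next wait be measured; without
      the stamp that wait is simply lost.

        #include <stdio.h>

        enum state { RUNNABLE, SLEEPING };      /* simplified task states */

        struct task {
                enum state state;
                unsigned long long last_queued; /* when the task last re-entered the runqueue */
                unsigned long long run_delay;   /* accumulated wait for a CPU */
        };

        static unsigned long long now;          /* stand-in for the runqueue clock */

        /* task is selected to run: charge the wait since it last entered the queue */
        static void info_arrive(struct task *t)
        {
                if (t->last_queued)
                        t->run_delay += now - t->last_queued;
                t->last_queued = 0;
        }

        /* task leaves the CPU; the fix is the "still runnable" check and the stamp */
        static void info_depart(struct task *t)
        {
                if (t->state == RUNNABLE)
                        t->last_queued = now;
        }

        int main(void)
        {
                struct task t = { RUNNABLE, 0, 0 };

                now = 10; info_depart(&t);      /* preempted while still runnable */
                now = 25; info_arrive(&t);      /* runs again: waited 15 */

                printf("run_delay = %llu\n", t.run_delay);      /* 15; 0 without the stamp */
                return 0;
        }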
* show_schedstat(): fix memleak
      Adrian Bunk, 2008-05-29 (1 file changed, -0/+1)

      The Coverity checker spotted a memleak introduced by commit
      39106dcf85285e78f3b290022122c76f851379b8 (cpumask: use new
      cpus_scnprintf function).

      It seems the kfree() got lost between v2 and v3 of this patch...

      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Cc: Mike Travis <travis@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* cpumask: use new cpus_scnprintf function
      Mike Travis, 2008-04-19 (1 file changed, -2/+6)

      * Cleaned up references to cpumask_scnprintf() and added new
        cpulist_scnprintf() interfaces where appropriate.

      * Fix some small bugs (or code efficiency improvements) for various
        uses of cpumask_scnprintf.

      * Clean up some checkpatch errors.

      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: clean up kernel/sched_stat.h
      Ingo Molnar, 2007-11-28 (1 file changed, -1/+2)

      clean up kernel/sched_stat.h.

      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix delay accounting regression
      Balbir Singh, 2007-11-09 (1 file changed, -5/+6)

      Fix the delay accounting regression introduced by commit
      75d4ef16a6aa84f708188bada182315f80aab6fa. rq no longer has sched_info
      data associated with it. task_struct sched_info structure is used by
      delay accounting to provide back statistics to user space.

      also remove direct use of sched_clock() (which is not a valid thing to
      do anymore) and use rq->clock instead.

      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: reduce schedstat variable overhead a bit
      Ken Chen, 2007-10-18 (1 file changed, -5/+3)

      schedstat is useful in investigating CPU scheduler behavior. Ideally, I
      think it is beneficial to have it on all the time. However, the cost of
      turning it on in a production system is quite high, largely due to the
      number of events it collects and also due to its large memory
      footprint.

      Most of the fields probably don't need to be full 64-bit on 64-bit
      arch. Rolling over 4 billion events will most likely take a long time
      and the user space tool can be made to accommodate that. I'm proposing
      that the kernel cut back most of the variable width on 64-bit systems.
      (note, the following patch doesn't affect 32-bit systems).

      Signed-off-by: Ken Chen <kenchen@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
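      A quick illustration of the width reduction being proposed (the struct
      and field names are invented for the example): on a 64-bit arch,
      dropping event counters from unsigned long long to unsigned int halves
      their storage, at the cost of wrapping after about 4 billion events,
      which the commit argues userspace tools can accommodate.

        #include <stdio.h>

        struct schedstats_wide {        /* before: full 64-bit counters */
                unsigned long long yld_count;
                unsigned long long sched_count;
                unsigned long long ttwu_count;
        };

        struct schedstats_narrow {      /* after: 32-bit counters are enough for event counts */
                unsigned int yld_count;
                unsigned int sched_count;
                unsigned int ttwu_count;
        };

        int main(void)
        {
                printf("wide: %zu bytes, narrow: %zu bytes\n",
                       sizeof(struct schedstats_wide),
                       sizeof(struct schedstats_narrow));
                return 0;
        }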
* sched: clean up schedstats, cnt -> count
      Ingo Molnar, 2007-10-15 (1 file changed, -12/+12)

      rename all 'cnt' fields and variables to the less yucky 'count' name.

      yuckage noticed by Andrew Morton.

      no change in code, other than the /proc/sched_debug bkl_count string
      got a bit larger:

         text    data     bss     dec     hex filename
        38236    3506      24   41766    a326 sched.o.before
        38240    3506      24   41770    a32a sched.o.after

      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
* sched: fix delay accounting performance regression
      Ingo Molnar, 2007-10-15 (1 file changed, -2/+2)

      fix delay accounting performance regression - those sched_clock()
      calls are not needed.

      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
* [PATCH] sched: add schedstat_set() API
      Ingo Molnar, 2007-08-02 (1 file changed, -0/+2)

      add the schedstat_set() API, to allow the reduction of CONFIG_SCHEDSTAT
      related #ifdefs. No code changed.

      Signed-off-by: Ingo Molnar <mingo@elte.hu>
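      A sketch of what such a macro pair looks like (consistent with the
      usual schedstat_inc()/schedstat_add() pattern, though not necessarily
      the exact hunk): with CONFIG_SCHEDSTATS disabled the call compiles
      away, so callers no longer need an #ifdef around every assignment to a
      stats field.

        #ifdef CONFIG_SCHEDSTATS
        # define schedstat_set(var, val)        do { (var) = (val); } while (0)
        #else
        # define schedstat_set(var, val)        do { } while (0)
        #endif

        /* example use (field name illustrative): schedstat_set(se->wait_max, delta); */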
* sched: update delay-accounting to use CFS's precise statsBalbir Singh
      Balbir Singh, 2007-07-09 (1 file changed, -1/+1)

      update delay-accounting to use CFS's precise stats.

      Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: move code into kernel/sched_stats.h
      Ingo Molnar, 2007-07-09 (1 file changed, -0/+235)

      create sched_stats.h and move sched.c schedstats code into it. This
      cleans up sched.c a bit.

      no code changes are caused by this patch.

      Signed-off-by: Ingo Molnar <mingo@elte.hu>