path: root/kernel
Each entry: Commit message (Author, Age; Files changed, Lines -deleted/+added)
* timer: fix section mismatch (Randy Dunlap, 2008-01-21; 1 file, -1/+1)

    The caller is __cpuinit.  Also, this code block and its caller are
    inside #ifdef CONFIG_HOTPLUG_CPU blocks, so this code should reflect
    that config symbol's usage.

    WARNING: vmlinux.o(.text+0x4252f): Section mismatch: reference to .init.text:
    (between 'timer_cpu_notify' and 'msleep')

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: Sam Ravnborg <sam@ravnborg.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
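    For illustration only (not the commit's actual diff), a minimal sketch of the
    annotation rule behind this class of fix, using a hypothetical notifier: a
    __cpuinit caller must not reference an __init callee.

        #include <linux/init.h>
        #include <linux/notifier.h>
        #include <linux/cpu.h>

        /* Kept after boot when CONFIG_HOTPLUG_CPU=y, so it may only call
         * code with the same (or longer) lifetime: __cpuinit, not __init. */
        static int __cpuinit example_init_timers_cpu(int cpu)   /* was __init */
        {
                return 0;
        }

        static int __cpuinit example_cpu_notify(struct notifier_block *self,
                                                unsigned long action, void *hcpu)
        {
                if (action == CPU_UP_PREPARE)
                        example_init_timers_cpu((long)hcpu);
                return NOTIFY_OK;
        }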
* hrtimer: fix section mismatch (Randy Dunlap, 2008-01-21; 1 file, -1/+1)

    Fix section mismatch in hrtimer.c:

    WARNING: vmlinux.o(.text+0x50c61): Section mismatch: reference to .init.text:
    (between 'hrtimer_cpu_notify' and 'down_read_trylock')

    Noticed by Johannes Berg and confirmed by Sam Ravnborg.

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: Sam Ravnborg <sam@ravnborg.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Fix unbalanced helper_lock in kernel/kmod.c (Nigel Cunningham, 2008-01-17; 1 file, -7/+6)

    call_usermodehelper_exec() has an exit path that can leave the
    helper_lock() call at the top of the routine unbalanced.  The attached
    patch fixes this issue.

    Signed-off-by: Nigel Cunningham <nigel@tuxonice.net>
    Cc: <stable@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
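    For illustration only (not the actual patch), a sketch of the balanced-lock
    pattern such a fix restores; helper_lock()/helper_unlock() are kmod.c's
    internal helpers, and some_early_error() is a hypothetical stand-in for the
    offending exit condition.

        #include <linux/kmod.h>

        static int example_call_usermodehelper_exec(struct subprocess_info *sub_info)
        {
                int retval = 0;

                helper_lock();                          /* taken unconditionally */
                if (some_early_error(sub_info)) {       /* hypothetical condition */
                        retval = -EINVAL;
                        goto unlock;                    /* must not skip helper_unlock() */
                }
                /* ... queue sub_info and optionally wait for completion ... */
        unlock:
                helper_unlock();
                return retval;
        }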
* lockdep: fix workqueue creation API lockdep interaction (Johannes Berg, 2008-01-16; 1 file, -2/+3)

    Dave Young reported warnings from lockdep that the workqueue API can
    sometimes try to register lockdep classes with the same key but
    different names.  This is not permitted in lockdep.

    Unfortunately, I was unaware of that restriction when I wrote the code
    to debug workqueue problems with lockdep and used the workqueue name
    as the lockdep class name.  This can obviously lead to the problem if
    the workqueue name is dynamic.

    This patch solves the problem by always using a constant name for the
    workqueue's lockdep class, namely either the constant name that was
    passed in or a string consisting of the variable name.

    Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
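    An illustrative sketch (not the patch itself) of the lockdep constraint
    involved: one lock_class_key must always be registered under one constant
    class name.  The key and function names here are invented.

        #include <linux/lockdep.h>

        static struct lock_class_key example_wq_key;

        static void example_register(struct lockdep_map *map, const char *dyn_name)
        {
                /* Problematic: the same key registered under names that vary
                 * at runtime (e.g. a per-device workqueue name). */
                /* lockdep_init_map(map, dyn_name, &example_wq_key, 0); */

                /* OK: a constant name tied to the shared key. */
                lockdep_init_map(map, "example_wq", &example_wq_key, 0);
        }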
* lockdep: fix internal double unlock during self-test (Nick Piggin, 2008-01-16; 1 file, -4/+8)

    Lockdep, during self-test (when it was simulating double unlocks) was
    sometimes unconditionally unlocking a spinlock when it had not been
    locked.  This won't work for ticket locks.

    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
* modules: de-mutex more symbol lookup paths in the module code (Rusty Russell, 2008-01-14; 1 file, -11/+18)

    Kyle McMartin reports sysrq_timer_list_show() can hit the module mutex
    from hard interrupt context.  These paths don't need to though, since
    we long ago changed all the module list manipulation to occur via
    stop_machine().

    Disabling preemption is enough.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Kyle McMartin <kyle@mcmartin.ca>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6 (Linus Torvalds, 2008-01-13; 2 files, -4/+3)

    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6:
      pnpacpi: print resource shortage message only once
      PM: ACPI and APM must not be enabled at the same time
      ACPI: apply quirk_ich6_lpc_acpi to more ICH8 and ICH9
      ACPICA: fix acpi_serialize hang regression
      ACPI : Not register gsi for PCI IDE controller in legacy mode
      ACPI: Reintroduce run time configurable max_cstate for !CPU_IDLE case
      ACPI: Make sysfs interface in ACPI power optional.
      ACPI: EC: Enable boot EC before bus_scan
      increase PNP_MAX_PORT to 40 from 24
| * Pull bugzilla-9194 into release branch (Len Brown, 2008-01-11; 2 files, -4/+3)
| | * PM: ACPI and APM must not be enabled at the same time (Len Brown, 2008-01-11; 2 files, -4/+3)

    ACPI and APM used "pm_active" to guarantee that they would not be
    simultaneously active.

    But pm_active was recently moved under CONFIG_PM_LEGACY, so that
    without CONFIG_PM_LEGACY, pm_active became a NOP -- allowing ACPI and
    APM to both be simultaneously enabled.  This caused unpredictable
    results, including boot hangs.

    Further, the code under CONFIG_PM_LEGACY is scheduled for removal.

    So replace pm_active with pm_flags.  pm_flags depends only on
    CONFIG_PM, which is present for both CONFIG_APM and CONFIG_ACPI.

    http://bugzilla.kernel.org/show_bug.cgi?id=9194

    Signed-off-by: Len Brown <len.brown@intel.com>
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
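    As a rough illustration (not the patch itself) of the interlock described
    above; the PM_APM/PM_ACPI flag names and the return value are assumptions
    about how such a claim would look, not quoted from the commit.

        #include <linux/pm.h>

        /* Whichever of ACPI/APM initializes first claims power management;
         * the other backs off.  pm_flags depends only on CONFIG_PM. */
        static int __init example_acpi_claim_pm(void)
        {
                if (pm_flags & PM_APM)          /* APM already owns PM */
                        return -EBUSY;          /* return value illustrative */
                pm_flags |= PM_ACPI;
                return 0;
        }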
* | | remove task_ppid_nr_ns (Roland McGrath, 2008-01-13; 1 file, -1/+1)

    task_ppid_nr_ns is called in three places.  One of these should never
    have called it.  In the other two, using it broke the existing
    semantics.  This was presumably accidental.  If the function had not
    been there, it would have been much more obvious to the eye that those
    patches were changing the behavior.  We don't need this function.

    In task_state, the pid of the ptracer is not the ppid of the ptracer.

    In do_task_stat, ppid is the tgid of the real_parent, not its pid.  I
    also moved the call outside of lock_task_sighand, since it doesn't
    need it.

    In sys_getppid, ppid is the tgid of the real_parent, not its pid.

    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | show_task: real_parent (Roland McGrath, 2008-01-09; 1 file, -1/+1)

    The show_task function invoked by sysrq-t et al displays the pid and
    parent's pid of each task.  It seems more useful to show the actual
    process hierarchy here than who is using ptrace on each process.

    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | futex: Prevent stale futex owner when interrupted/timeout (Thomas Gleixner, 2008-01-08; 1 file, -10/+41)

    Roland Westrelin did a great analysis of a long standing thinko in the
    return path of futex_lock_pi.

    While we fixed the lock steal case long ago, which was easy to
    trigger, we never had a test case which exposed this problem and
    stupidly never thought about the reverse lock stealing scenario and
    the return to user space with a stale state.

    When a blocked task returns from rt_mutex_timed_locked without holding
    the rt_mutex (due to a signal or timeout) and at the same time the
    task holding the futex is releasing the futex and assigning the
    ownership of the futex to the returning task, then it might happen
    that a third task acquires the rt_mutex before the final
    rt_mutex_trylock() of the returning task happens under the futex hash
    bucket lock.  The returning task returns to user space with ETIMEOUT
    or EINTR, but the user space futex value is assigned to this task.
    The task which acquired the rt_mutex fixes the user space futex value
    right after the hash bucket lock has been released by the returning
    task, but for a short period of time the user space value is wrong.

    Detailed description is available at:

        https://bugzilla.redhat.com/show_bug.cgi?id=400541

    The fix for this is the same as we do when the rt_mutex was acquired
    by a higher priority task via lock stealing from the designated new
    owner.  In that case we already fix the user space value and the
    internal pi_state up before we return.  This mechanism can be used to
    fixup the above corner case as well.  When the returning task, which
    failed to acquire the rt_mutex, notices that it is the designated
    owner of the futex, then it fixes up the stale user space value and
    the pi_state, before returning to user space.  This happens with the
    futex hash bucket lock held, so the task which acquired the rt_mutex
    is guaranteed to be blocked on the hash bucket lock.  We can access
    the rt_mutex owner, which gives us the pid of the new owner, safely
    here as the owner is not able to modify (release) it while waiting on
    the hash bucket lock.

    Rename the "curr" argument of fixup_pi_state_owner() to "newowner" to
    avoid confusion with current and add the check for the stale state
    into the failure path of rt_mutex_trylock() in the return path of
    unlock_futex_pi().  If the situation is detected use
    fixup_pi_state_owner() to assign everything to the owner of the
    rt_mutex.

    Pointed-out-and-tested-by: Roland Westrelin <roland.westrelin@sun.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | vmcoreinfo: add the array length of "free_list" for filtering free pages (Ken'ichi Ohmichi, 2008-01-08; 1 file, -0/+1)

    This patch adds the array length of "free_area.free_list" to the
    vmcoreinfo data so that makedumpfile (dump filtering command) can
    exclude all free pages in linux-2.6.24.

    makedumpfile creates a small dumpfile by excluding unnecessary pages
    for the analysis.  To distinguish unnecessary pages, makedumpfile gets
    the vmcoreinfo data which has the minimum debugging information only
    for dump filtering.

    In 2.6.24-rc1 or later, the free_area.free_list is an array which has
    one list for each migrate type instead of a single list.  makedumpfile
    needs the array length of "free_area.free_list" and the vmcoreinfo
    data should contain it.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Tested-by: Ken'ichi Ohmichi <oomichi@mxs.nes.nec.co.jp>
    Acked-by: Simon Horman <horms@verge.net.au>
    Cc: David Rientjes <rientjes@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | acct: real_parent ppid (Roland McGrath, 2008-01-07; 1 file, -1/+1)

    The ac_ppid field reported in process accounting records should match
    what getppid() would have returned to that process, regardless of
    whether a debugger is attached.

    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched (Linus Torvalds, 2008-01-03; 1 file, -4/+4)

    * git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
      sched: fix gcc warnings
| * | sched: fix gcc warnings (Ingo Molnar, 2007-12-30; 1 file, -4/+4)

    Meelis Roos reported these warnings on sparc64:

      CC      kernel/sched.o
      In file included from kernel/sched.c:879:
      kernel/sched_debug.c: In function 'nsec_high':
      kernel/sched_debug.c:38: warning: comparison of distinct pointer types lacks a cast

    the debug check in do_div() is over-eager here, because the long long
    is always positive in these places.  Mark this by casting them to
    unsigned long long.

    no change in code output:

       text    data     bss     dec     hex filename
      51471    6582     376   58429    e43d sched.o.before
      51471    6582     376   58429    e43d sched.o.after

      md5:
       7f7729c111f185bf3ccea4d542abc049  sched.o.before.asm
       7f7729c111f185bf3ccea4d542abc049  sched.o.after.asm

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
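    For illustration (not the actual diff), a sketch of the cast that satisfies
    do_div()'s type check when the value is known non-negative; the helper name
    only mirrors the one in the warning and is otherwise hypothetical.

        #include <asm/div64.h>

        static unsigned long example_nsec_high(long long nsec)
        {
                unsigned long long n = nsec;    /* known non-negative here */

                do_div(n, 1000000);             /* do_div() wants an unsigned 64-bit lvalue */
                return (unsigned long)n;
        }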
* | | Fix kernel/ptrace.c compile problem (missing "may_attach()") (Linus Torvalds, 2008-01-02; 1 file, -1/+1)

    The previous commit missed one use of "may_attach()" that had been
    renamed to __ptrace_may_attach().  Tssk, tssk, Al.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | restrict reading from /proc/<pid>/maps to those who share ->mm or can ptrace pid (Al Viro, 2008-01-02; 1 file, -2/+2)

    Contents of /proc/*/maps is sensitive and may become sensitive after
    open() (e.g. if target originally shares our ->mm and later does exec
    on suid-root binary).

    Check at read() (actually, ->start() of iterator) time that mm_struct
    we'd grabbed and locked is
      - still the ->mm of target
      - equal to reader's ->mm or the target is ptracable by reader.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Acked-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
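    A rough sketch of that check (not the actual patch; the helper is invented
    and the ptrace predicate name is an approximation of the era's API):

        #include <linux/sched.h>
        #include <linux/ptrace.h>

        /* Decide whether the reader may see the mm it pinned at open() time. */
        static int example_may_show_maps(struct task_struct *task,
                                         struct mm_struct *mm)
        {
                if (task->mm != mm)             /* target exec'd or exited */
                        return 0;
                if (mm == current->mm)          /* reader shares the mm */
                        return 1;
                return ptrace_may_attach(task); /* otherwise require ptrace rights */
        }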
* | [SERIAL]: Fix section mismatches in Sun serial console drivers. (David S. Miller, 2007-12-29; 1 file, -1/+1)

    We're exporting an __init function, oops :-)

    The core issue here is that add_preferred_console() is marked as
    __init, this makes it impossible to invoke this thing from a driver
    probe routine which is what the Sparc serial drivers need to do.

    There is no harm in dropping the __init marker.  This code will
    actually work properly when invoked from a modular driver, except that
    init will probably not pick up the console change without some other
    support code.

    Then we can drop the __init from sunserial_console_match() and we're
    no longer exporting an __init function to modules.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Modules: fix memory leak of module names (Greg Kroah-Hartman, 2007-12-22; 1 file, -0/+10)

    Due to the change in kobject name handling, the module kobject needs
    to have a null release function to ensure that the name it previously
    set will be properly cleaned up.

    All of this weirdness goes away in 2.6.25 with the rework of the
    kobject name and cleanup logic, but this is required for 2.6.24.

    Thanks to Alexey Dobriyan for finding the problem, and to Kay Sievers
    for pointing out the simple way to fix it after I tried many complex
    ways.

    Cc: Alexey Dobriyan <adobriyan@gmail.com>
    Cc: Kay Sievers <kay.sievers@vrfy.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* | debug: add end-of-oops marker (Arjan van de Ven, 2007-12-20; 1 file, -0/+18)

    Right now it's nearly impossible for parsers that collect kernel
    crashes from logs or emails (such as www.kerneloops.org) to detect the
    end-of-oops condition.  In addition, it's not currently possible to
    detect whether or not 2 oopses that look alike are actually the same
    oops reported twice, or are truly two unique oopses.

    This patch adds an end-of-oops marker, and makes the end marker
    include a very simple 64-bit random ID to be able to detect duplicate
    reports.

    Normally, this ID is calculated as a late_initcall() (in the hope that
    at that time there is enough entropy to get a unique enough ID);
    however for early oopses the oops_exit() function needs to generate
    the ID on the fly.

    We do this all at the _end_ of an oops printout, so this does not
    impact our ability to get the most important portions of a crash out
    to the console first.

    [ Sidenote: the already existing oopses-since-bootup counter we print
      during crashes serves as the differentiator between multiple oopses
      that trigger during the same bootup. ]

    Tested on 32-bit and 64-bit x86.  Artificially injected very early
    crashes as well, as expected they result in this constant ID after
    multiple bootups:

      ---[ end trace ca143223eefdc828 ]---
      ---[ end trace ca143223eefdc828 ]---

    because the random pools are still all zero.  But it all still works
    fine and causes no additional problems (which is the main goal of
    instrumentation code).

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
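    A hedged sketch of the mechanism (a simplification, not the code that was
    merged; names are illustrative):

        #include <linux/kernel.h>
        #include <linux/random.h>
        #include <linux/types.h>

        static u64 example_oops_id;

        static void example_print_oops_end_marker(void)
        {
                if (!example_oops_id)           /* early oops: no initcall ran yet */
                        get_random_bytes(&example_oops_id, sizeof(example_oops_id));

                printk(KERN_WARNING "---[ end trace %016llx ]---\n",
                       (unsigned long long)example_oops_id);
        }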
* | sched: rt: account the cpu time during the tick (Peter Zijlstra, 2007-12-20; 1 file, -0/+2)

    Realtime tasks would not account their runtime during ticks.  Which
    would lead to:

        struct sched_param param = { .sched_priority = 10 };
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);

        while (1) ;

    Not showing up in top.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86 (Linus Torvalds, 2007-12-18; 3 files, -44/+25)

    * git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86:
      x86: fix "Kernel panic - not syncing: IO-APIC + timer doesn't work!"
      genirq: revert lazy irq disable for simple irqs
      x86: also define AT_VECTOR_SIZE_ARCH
      x86: kprobes bugfix
      x86: jprobe bugfix
      timer: kernel/timer.c section fixes
      genirq: add unlocked version of set_irq_handler()
      clockevents: fix reprogramming decision in oneshot broadcast
      oprofile: op_model_athlon.c support for AMD family 10h barcelona performance counters
| * | genirq: revert lazy irq disable for simple irqs (Steven Rostedt, 2007-12-18; 1 file, -7/+2)

    In commit 76d2160147f43f982dfe881404cfde9fd0a9da21 lazy irq disabling
    was implemented, and the simple irq handler had a masking set to it.

    Remy Bohmer discovered that some devices in the ARM architecture would
    trigger the mask, but never unmask it.  His patch to do the unmasking
    was questioned by Russell King about masking simple irqs to begin
    with.  Looking further, it was discovered that the problems Remy was
    seeing were due to improper use of the simple handler by devices, and
    he later submitted patches to fix those.  But the issue that was
    uncovered was that the simple handler should never mask.

    This patch reverts the masking in the simple handler.

    Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | timer: kernel/timer.c section fixes (Adrian Bunk, 2007-12-18; 1 file, -2/+2)

    This patch fixes the following section mismatches with CONFIG_HOTPLUG=n,
    CONFIG_HOTPLUG_CPU=y:

    ...
    WARNING: vmlinux.o(.text+0x41cd3): Section mismatch: reference to
    .init.data:tvec_base_done.22610 (between 'timer_cpu_notify' and 'run_timer_softirq')
    WARNING: vmlinux.o(.text+0x41d67): Section mismatch: reference to
    .init.data:tvec_base_done.22610 (between 'timer_cpu_notify' and 'run_timer_softirq')
    ...

    Signed-off-by: Adrian Bunk <bunk@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | clockevents: fix reprogramming decision in oneshot broadcast (Thomas Gleixner, 2007-12-18; 1 file, -35/+21)

    Resolve the following regression of a choppy, almost unusable laptop:

        http://lkml.org/lkml/2007/12/7/299
        http://bugzilla.kernel.org/show_bug.cgi?id=9525

    A previous version of the code did the reprogramming of the broadcast
    device in the return from idle code.  This was removed, but the logic
    in tick_handle_oneshot_broadcast() was kept the same.

    When a broadcast interrupt happens we signal the expiry to all CPUs
    which have an expired event.  If none of the CPUs has an expired
    event, which can happen in dyntick mode, then we reprogram the
    broadcast device.  We do not reprogram otherwise, but this is only
    correct if all CPUs, which are in the idle broadcast state, have been
    woken up.

    The code ignores that there might be pending, not yet expired events
    on other CPUs, which are in the idle broadcast state.  So the delivery
    of those events can be delayed for quite a time.

    Change the tick_handle_oneshot_broadcast() function to check for CPUs,
    which are in broadcast state and are not woken up by the current
    event, and enforce the rearming of the broadcast device for those
    CPUs.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | sched: do not hurt SCHED_BATCH on wakeup (Ingo Molnar, 2007-12-18; 1 file, -2/+1)

    measurements by Yanmin Zhang have shown that SCHED_BATCH tasks benefit
    if they run the same place_entity() logic as SCHED_OTHER tasks - so
    uniformize behavior in this area.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | sched: touch softlockup watchdog after idling (Ingo Molnar, 2007-12-18; 1 file, -0/+1)

    touch softlockup watchdog after idling.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | sched: sysctl, proc_dointvec_minmax() expects int values for (Eric Dumazet, 2007-12-18; 1 file, -4/+4)

    min_sched_granularity_ns, max_sched_granularity_ns,
    min_wakeup_granularity_ns and max_wakeup_granularity_ns are declared
    "unsigned long".

    This is incorrect since proc_dointvec_minmax() expects plain "int"
    guard values.

    This bug only triggers on big endian 64 bit arches.

    Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
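    For illustration (not the exact diff), the type rule at play: the
    .extra1/.extra2 guards for proc_dointvec_minmax() are read through int
    pointers, so the guard variables must be int.  All names and values below
    are illustrative stand-ins for the real sched sysctls.

        #include <linux/sysctl.h>

        static unsigned int example_sched_min_granularity = 4000000;
        static int example_min_granularity_ns = 100000;         /* was unsigned long */
        static int example_max_granularity_ns = 1000000000;     /* was unsigned long */

        static struct ctl_table example_table[] = {
                {
                        .procname       = "sched_min_granularity_ns",
                        .data           = &example_sched_min_granularity,
                        .maxlen         = sizeof(unsigned int),
                        .mode           = 0644,
                        .proc_handler   = &proc_dointvec_minmax,
                        .extra1         = &example_min_granularity_ns,  /* must be int */
                        .extra2         = &example_max_granularity_ns,  /* must be int */
                },
                { }
        };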
* | | sched: mark rwsem functions as __sched for wchan/profiling (Livio Soares, 2007-12-18; 1 file, -2/+3)

    The following commit

    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=fdf8cb0909b531f9ae8f9b9d7e4eb35ba3505f07

    un-inlined a low-level rwsem function, but did not mark it as __sched.
    The result is that it now shows up as thread wchan (which also affects
    /proc/profile stats).  The following simple patch fixes this by
    properly marking rwsem_down_failed_common() as a __sched function.

    Also in this patch, which is up for discussion, marks down_read() and
    down_write() proper as __sched.  For profiling, it is pretty much
    useless to know that a semaphore is being held - it is necessary to
    know _which_ one.  By going up another frame on the stack, the
    information becomes much more useful.

    In summary, the below change to lib/rwsem.c should be applied; the
    changes to kernel/rwsem.c could be applied if other kernel hackers
    agree with my proposal that down_read()/down_write() in the profile is
    not enough.

    [ akpm@linux-foundation.org: build fix ]

    Signed-off-by: Livio Soares <livio@eecg.toronto.edu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | sched: fix crash on ia64, introduce task_current() (Dmitry Adamushko, 2007-12-18; 1 file, -6/+11)

    Some services (e.g. sched_setscheduler(), rt_mutex_setprio() and
    sched_move_task()) must handle a given task differently in case it's
    the 'rq->curr' task on its run-queue.  The task_running() interface is
    not suitable for determining such tasks for platforms with one of the
    following options:

        #define __ARCH_WANT_UNLOCKED_CTXSW
        #define __ARCH_WANT_INTERRUPTS_ON_CTXSW

    Due to the fact that it makes use of 'p->oncpu == 1' as a criterion,
    but such a task is not necessarily 'rq->curr'.

    The detailed explanation is available here:

    https://lists.linux-foundation.org/pipermail/containers/2007-December/009262.html

    Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Tested-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
    Tested-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
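    Based only on the description above, a minimal sketch of what such a helper
    looks like (struct rq is sched.c-internal; its layout is assumed here):

        /* "Is p the task this runqueue is currently running?" - independent of
         * the p->oncpu handover window that task_running() relies on. */
        static inline int task_current(struct rq *rq, struct task_struct *p)
        {
                return rq->curr == p;
        }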
* | sysctl: fix ax25 checks (Eric W. Biederman, 2007-12-17; 1 file, -1/+6)

    Fix:

      sysctl table check failed: /net/ax25/ax0/ax25_default_mode .3.9.1.2 Unknown sysctl binary path
      Pid: 2936, comm: kissattach Not tainted 2.6.24-rc5 #1
       [<c012ca6a>] set_fail+0x3b/0x43
       [<c012ce7a>] sysctl_check_table+0x408/0x456
       [<c012ce8e>] sysctl_check_table+0x41c/0x456
       [<c012ce8e>] sysctl_check_table+0x41c/0x456
       ...

    Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
    Cc: Bernard Pidoux <pidoux@ccr.jussieu.fr>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Revert "hugetlb: Add hugetlb_dynamic_pool sysctl"Nishanth Aravamudan2007-12-171-8/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 54f9f80d6543fb7b157d3b11e2e7911dc1379790 ("hugetlb: Add hugetlb_dynamic_pool sysctl") Given the new sysctl nr_overcommit_hugepages, the boolean dynamic pool sysctl is not needed, as its semantics can be expressed by 0 in the overcommit sysctl (no dynamic pool) and non-0 in the overcommit sysctl (pool enabled). (Needed in 2.6.24 since it reverts a post-2.6.23 userspace-visible change) Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Acked-by: Adam Litke <agl@us.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: Dave Hansen <haveblue@us.ibm.com> Cc: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | hugetlb: introduce nr_overcommit_hugepages sysctl (Nishanth Aravamudan, 2007-12-17; 1 file, -0/+8)

    hugetlb: introduce nr_overcommit_hugepages sysctl

    While examining the code to support /proc/sys/vm/hugetlb_dynamic_pool,
    I became convinced that having a boolean sysctl was insufficient:

    1) To support per-node control of hugepages, I have previously
       submitted patches to add a sysfs attribute related to nr_hugepages.
       However, with a boolean global value and per-mount quota
       enforcement constraining the dynamic pool, adding corresponding
       control of the dynamic pool on a per-node basis seems inconsistent
       to me.

    2) Administration of the hugetlb dynamic pool with multiple hugetlbfs
       mount points is, arguably, more arduous than it needs to be.  Each
       quota would need to be set separately, and the sum would need to be
       monitored.

    To ease the administration, and to help make the way for per-node
    control of the static & dynamic hugepage pool, I added a separate
    sysctl, nr_overcommit_hugepages.  This value serves as a high
    watermark for the overall hugepage pool, while nr_hugepages serves as
    a low watermark.  The boolean sysctl can then be removed, as the
    condition

        nr_overcommit_hugepages > 0

    indicates the same administrative setting as

        hugetlb_dynamic_pool == 1

    Quotas still serve as local enforcement of the size of the pool on a
    per-mount basis.

    A few caveats:

    1) There is a race whereby the global surplus huge page counter is
       incremented before a hugepage has been allocated.  Another process
       could then try to grow the pool, and fail to convert a surplus huge
       page to a normal huge page and instead allocate a fresh huge page.
       I believe this is benign, as no memory is leaked (the actual pages
       are still tracked correctly) and the counters won't go out of sync.

    2) Shrinking the static pool while a surplus is in effect will allow
       the number of surplus huge pages to exceed the overcommit value.
       As long as this condition holds, however, no more surplus huge
       pages will be allowed on the system until one of the two sysctls
       are increased sufficiently, or the surplus huge pages go out of use
       and are freed.

    Successfully tested on x86_64 with the current libhugetlbfs snapshot,
    modified to use the new sysctl.

    Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
    Acked-by: Adam Litke <agl@us.ibm.com>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Cc: Dave Hansen <haveblue@us.ibm.com>
    Cc: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
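    A hedged sketch of what the corresponding sysctl table entry might look
    like with 2.6.24-era fields; the handler and data names are assumptions,
    not quoted from the patch.

        #include <linux/sysctl.h>

        static unsigned long example_nr_overcommit_huge_pages; /* the new high watermark */

        static struct ctl_table example_vm_table[] = {
                {
                        .ctl_name       = CTL_UNNUMBERED,
                        .procname       = "nr_overcommit_hugepages",
                        .data           = &example_nr_overcommit_huge_pages,
                        .maxlen         = sizeof(example_nr_overcommit_huge_pages),
                        .mode           = 0644,
                        .proc_handler   = &proc_doulongvec_minmax,
                },
                { }
        };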
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86 (Linus Torvalds, 2007-12-07; 2 files, -0/+13)

    * git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86:
      ACPI: move timer broadcast before busmaster disable
      clockevents: warn once when program_event() is called with negative expiry
      hrtimers: avoid overflow for large relative timeouts
| * clockevents: warn once when program_event() is called with negative expiry (Thomas Gleixner, 2007-12-07; 1 file, -0/+5)

    The hrtimer problem with large relative timeouts resulting in a
    negative expiry time went unnoticed as there is no check in the
    clockevents_program_event() code.  Put a check there with a WARN_ONCE
    to avoid such problems in the future.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
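    A sketch of such a guard, assuming the 2.6.24-era ktime_t layout; the
    placement, function name and return value are approximations, not the
    merged diff.

        #include <linux/clockchips.h>
        #include <linux/ktime.h>

        static int example_program_event(struct clock_event_device *dev,
                                         ktime_t expires, ktime_t now)
        {
                if (unlikely(expires.tv64 < 0)) {
                        WARN_ON_ONCE(1);        /* negative expiry: caller bug */
                        return -ETIME;
                }
                /* ... convert the delta and program the device ... */
                return 0;
        }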
| * hrtimers: avoid overflow for large relative timeouts (Thomas Gleixner, 2007-12-07; 1 file, -0/+8)

    Relative hrtimers with a large timeout value might end up as negative
    timer values, when the current time is added in hrtimer_start().

    This in turn is causing the clockevents_set_next() function to set a
    huge timeout and sleep for quite a long time when we have a clock
    source which is capable of long sleeps like HPET.  With PIT this
    almost goes unnoticed as the maximum delta is ~27ms.

    The non-hrt/nohz code sorts this out in the next timer interrupt, so
    we never noticed that problem which has been there since the first day
    of hrtimers.

    This bug became more apparent in 2.6.24 which activates HPET on more
    hardware.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
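    The idea, sketched (not the exact patch): after converting a relative
    timeout to an absolute one, clamp a wrapped-negative value to the maximum
    representable ktime instead of letting it go negative.

        #include <linux/ktime.h>

        static ktime_t example_rel_to_abs(ktime_t tim, ktime_t now)
        {
                tim = ktime_add(tim, now);
                if (tim.tv64 < 0)               /* huge relative sleep wrapped */
                        tim.tv64 = KTIME_MAX;   /* clamp to the maximum ktime */
                return tim;
        }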
* | sched: enable early use of sched_clock() (Ingo Molnar, 2007-12-07; 1 file, -1/+6)

    some platforms have sched_clock() implementations that cannot be
    called very early during wakeup.  If it's called it might hang or
    crash in hard to debug ways.  So only call update_rq_clock() [which
    calls sched_clock()] if sched_init() has already been called.
    (rq->idle is NULL before the scheduler is initialized.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | lockdep: make cli/sti annotation warnings clearer (Ingo Molnar, 2007-12-07; 1 file, -4/+9)

    make cli/sti annotation warnings easier to interpret.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
* Tiny clean-up of OPROFILE/KPROBES configuration (Linus Torvalds, 2007-12-06; 1 file, -4/+4)

    Make the Kconfig.instrumentation file a bit easier on the eyes, and
    use the new ARCH_SUPPORTS_OPROFILE for x86[-64].

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Fix oprofile configuration breakage (Ralf Baechle, 2007-12-06; 1 file, -1/+1)

    The cleanup 09cadedbdc01f1a4bea1f427d4fb4642eaa19da9 broke the
    oprofile configuration for MIPS by allowing oprofile support to be
    built for kernel models where oprofile doesn't have a chance in hell
    to work.

    Just a dependency list on a number of architectures is - surprise -
    broken and, as per past discussions, should probably be considered
    broken in most cases.  So I introduce a dependency for the oprofile
    configuration on ARCH_SUPPORTS_OPROFILE.

    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched (Linus Torvalds, 2007-12-05; 3 files, -89/+99)

    * git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
      futex: correctly return -EFAULT not -EINVAL
      lockdep: in_range() fix
      lockdep: fix debug_show_all_locks()
      sched: style cleanups
      futex: fix for futex_wait signal stack corruption
| * futex: correctly return -EFAULT not -EINVAL (Thomas Gleixner, 2007-12-05; 1 file, -1/+1)

    return -EFAULT not -EINVAL.  Found by review.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * lockdep: in_range() fix (Oleg Nesterov, 2007-12-05; 1 file, -12/+10)

    Torsten Kaiser wrote:

    | static inline int in_range(const void *start, const void *addr, const void *end)
    | {
    |         return addr >= start && addr <= end;
    | }
    | This will return true, if addr is in the range of start (including)
    | to end (including).
    |
    | But debug_check_no_locks_freed() seems does:
    | const void *mem_to = mem_from + mem_len
    |         -> mem_to is the last byte of the freed range, that fits in_range
    | lock_from = (void *)hlock->instance;
    |         -> first byte of the lock
    | lock_to = (void *)(hlock->instance + 1);
    |         -> first byte of the next lock, not last byte of the lock that is being checked!
    |
    | The test is:
    |         if (!in_range(mem_from, lock_from, mem_to) &&
    |             !in_range(mem_from, lock_to, mem_to))
    |                 continue;
    | So it tests, if the first byte of the lock is in the range that is freed ->OK
    | And if the first byte of the *next* lock is in the range that is freed
    | -> Not OK.

    We can also simplify in_range checks, we need only 2 comparisons, not
    4.  If the lock is not in memory range, it should be either at the
    left of range or at the right.

    Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
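    For illustration, a sketch of the simplified two-comparison containment
    test described above, assuming half-open [from, from+len) ranges; the
    helper name is invented and this is not necessarily the merged code.

        static inline int example_not_in_range(const void *mem_from, unsigned long mem_len,
                                               const void *lock_from, unsigned long lock_len)
        {
                const char *mem_to  = (const char *)mem_from + mem_len;
                const char *lock_to = (const char *)lock_from + lock_len;

                /* No overlap iff the lock lies entirely to the left or entirely
                 * to the right of the freed range: two comparisons, not four. */
                return lock_to <= (const char *)mem_from ||
                       (const char *)lock_from >= mem_to;
        }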
| * lockdep: fix debug_show_all_locks() (Ingo Molnar, 2007-12-05; 1 file, -0/+7)

    fix the oops that can be seen in:

        http://bugzilla.kernel.org/attachment.cgi?id=13828&action=view

    it is not safe to print the locks of running tasks.

    (even with this fix we have a small race - but this is a debug
    function after all.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
| * sched: style cleanups (Ingo Molnar, 2007-12-05; 1 file, -64/+68)

    style cleanup of various changes that were done recently.

    no code changed:

       text    data     bss     dec     hex filename
      23680    2542      28   26250    668a sched.o.before
      23680    2542      28   26250    668a sched.o.after

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * futex: fix for futex_wait signal stack corruption (Steven Rostedt, 2007-12-05; 1 file, -12/+13)

    David Holmes found a bug in the -rt tree with respect to
    pthread_cond_timedwait.  After trying his test program on the latest
    git from mainline, I found the bug was there too.  The bug he was
    seeing that his test program showed, was that if one were to do a
    "Ctrl-Z" on a process that was in the pthread_cond_timedwait, and then
    did a "bg" on that process, it would return with a "-ETIMEDOUT" but
    early.  That is, the timer would go off early.

    Looking into this, I found the source of the problem.  And it is a
    rather nasty bug at that.

    Here's the relevant code from kernel/futex.c: (not in order in the file)

    [...]
    asmlinkage long sys_futex(u32 __user *uaddr, int op, u32 val,
                              struct timespec __user *utime, u32 __user *uaddr2,
                              u32 val3)
    {
            struct timespec ts;
            ktime_t t, *tp = NULL;
            u32 val2 = 0;
            int cmd = op & FUTEX_CMD_MASK;

            if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI)) {
                    if (copy_from_user(&ts, utime, sizeof(ts)) != 0)
                            return -EFAULT;
                    if (!timespec_valid(&ts))
                            return -EINVAL;
                    t = timespec_to_ktime(ts);
                    if (cmd == FUTEX_WAIT)
                            t = ktime_add(ktime_get(), t);
                    tp = &t;
            }
    [...]
            return do_futex(uaddr, op, val, tp, uaddr2, val2, val3);
    }

    [...]
    long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
                  u32 __user *uaddr2, u32 val2, u32 val3)
    {
            int ret;
            int cmd = op & FUTEX_CMD_MASK;
            struct rw_semaphore *fshared = NULL;

            if (!(op & FUTEX_PRIVATE_FLAG))
                    fshared = &current->mm->mmap_sem;

            switch (cmd) {
            case FUTEX_WAIT:
                    ret = futex_wait(uaddr, fshared, val, timeout);
    [...]

    static int futex_wait(u32 __user *uaddr, struct rw_semaphore *fshared,
                          u32 val, ktime_t *abs_time)
    {
    [...]
            struct restart_block *restart;
            restart = &current_thread_info()->restart_block;
            restart->fn = futex_wait_restart;
            restart->arg0 = (unsigned long)uaddr;
            restart->arg1 = (unsigned long)val;
            restart->arg2 = (unsigned long)abs_time;
            restart->arg3 = 0;
            if (fshared)
                    restart->arg3 |= ARG3_SHARED;
            return -ERESTART_RESTARTBLOCK;
    [...]

    static long futex_wait_restart(struct restart_block *restart)
    {
            u32 __user *uaddr = (u32 __user *)restart->arg0;
            u32 val = (u32)restart->arg1;
            ktime_t *abs_time = (ktime_t *)restart->arg2;
            struct rw_semaphore *fshared = NULL;

            restart->fn = do_no_restart_syscall;
            if (restart->arg3 & ARG3_SHARED)
                    fshared = &current->mm->mmap_sem;
            return (long)futex_wait(uaddr, fshared, val, abs_time);
    }

    So when the futex_wait is interrupted by a signal we break out of the
    hrtimer code and set up or return from signal.  This code does not
    return back to userspace, so we set up a RESTARTBLOCK.  The bug here
    is that we save the "abs_time" which is a pointer to the stack
    variable "ktime_t t" from sys_futex.

    This returns and unwinds the stack before we get to call our signal.
    On return from the signal we go to futex_wait_restart, where we update
    all the parameters for futex_wait and call it.  But here we have a
    problem where abs_time is no longer valid.

    I verified this with print statements, and sure enough, what abs_time
    was set to ends up being garbage when we get to futex_wait_restart.

    The solution I did to solve this (with input from Linus Torvalds) was
    to add unions to the restart_block to allow system calls to use the
    restart with specific parameters.  This way the futex code now saves
    the time in a 64bit value in the restart block instead of storing it
    on the stack.

    Note: I'm a bit nervous to add "linux/types.h" and use u32 and u64 in
    thread_info.h, when there's a #ifdef __KERNEL__ just below that.  Not
    sure what that is there for.  If this turns out to be a problem, I've
    tested this with using "unsigned int" for u32 and "unsigned long long"
    for u64 and it worked just the same.  I'm using u32 and u64 just to be
    consistent with what the futex code uses.

    Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
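    As a rough sketch of the direction described in the last paragraphs (the
    struct and field names here are approximations, not the exact layout that
    was merged): the restart block gains a per-syscall union so the absolute
    expiry is stored by value rather than as a pointer to a dead stack slot.

        #include <linux/types.h>
        #include <linux/compiler.h>

        struct example_restart_block {
                long (*fn)(struct example_restart_block *);
                union {
                        /* for futex_wait() restarts */
                        struct {
                                u32 __user *uaddr;
                                u32 val;
                                u32 flags;
                                u64 time;       /* absolute expiry, copied by value */
                        } futex;
                };
        };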
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds, 2007-12-05; 1 file, -1/+1)

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
      [SPARC64]: Update defconfig.
      [SPARC]: Add missing of_node_put
      [SPARC64]: check for possible NULL pointer dereference
      [SPARC]: Add missing "space"
      [SPARC64]: Add missing "space"
      [SPARC64]: Add missing pci_dev_put
      [SYSCTL_CHECK]: Fix typo in KERN_SPARC_SCONS_PWROFF entry string.
      [SPARC64]: Missing mdesc_release() in ldc_init().
| * | [SYSCTL_CHECK]: Fix typo in KERN_SPARC_SCONS_PWROFF entry string. (David S. Miller, 2007-12-05; 1 file, -1/+1)

    Based upon a report by Mikael Pettersson.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Avoid potential NULL dereference in unregister_sysctl_table (Pavel Emelyanov, 2007-12-05; 1 file, -0/+4)

    register_sysctl_table() can return NULL sometimes, e.g. when kmalloc()
    returns NULL or when sysctl check fails.

    I've also noticed that much (most?) code in the kernel doesn't check
    for the return value from register_sysctl_table() and later simply
    calls unregister_sysctl_table() with a potentially NULL argument.

    This is unlikely on a common kernel configuration, but in case we're
    dealing with modules and/or fault-injection support, there's a slight
    possibility of an OOPS.

    Changing all the users to check for the return code from the
    registering does not look like a good solution - there is too much
    code doing this, and failure in sysctl table registration is not a
    good reason to abort module loading (in most of the cases).

    So I think that we can just have this check in
    unregister_sysctl_table, just to avoid accidental OOPSes (actually,
    unregister_sysctl_table() did exactly this before the
    start_unregistering() appeared).

    Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
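    A minimal sketch of the defensive check being described (the exact
    placement inside kernel/sysctl.c is assumed; the function name is prefixed
    to mark it as an illustration):

        #include <linux/sysctl.h>
        #include <linux/kernel.h>

        void example_unregister_sysctl_table(struct ctl_table_header *header)
        {
                might_sleep();

                if (header == NULL)     /* tolerate a failed/ignored registration */
                        return;

                /* ... take sysctl_lock and unlink the header as before ... */
        }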