Commit message  (Author, Date, Files changed, Lines -/+)
* [PATCH] non lazy "sleazy" fpu implementation  (Arjan van de Ven, 2006-09-26, 4 files, -1/+24)
    Right now the kernel on x86-64 has 100% lazy FPU behavior: after *every*
    context switch a trap is taken for the first FPU use to restore the FPU
    context lazily. This is of course great for applications that have very
    sporadic or no FPU use (since then you avoid doing the expensive
    save/restore all the time). However, very frequent FPU users take an
    extra trap every context switch.

    The patch below adds a simple heuristic to this code: after 5 consecutive
    context switches of FPU use, the lazy behavior is disabled and the context
    gets restored on every context switch. If the app indeed uses the FPU, the
    trap is avoided (the chance of the 6th time slice using the FPU after the
    previous 5 have done so is obviously quite high). After 256 switches this
    is reset and lazy behavior returns (until there are 5 consecutive ones
    again), so apps that do longer bursts of FPU use still get the lazy
    behavior back after some time.

    [akpm@osdl.org: place new task_struct field next to jit_keyring to save space]
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Andi Kleen <ak@muc.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
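    [Editor's note: a minimal sketch of the heuristic described above, not the
    patch itself. restore_fpu() and set_lazy_trap() are placeholder names, and
    fpu_counter stands in for the new task_struct field.]

        struct task_fpu {
                unsigned char fpu_counter;      /* consecutive time slices that used the FPU */
        };

        static void restore_fpu(struct task_fpu *t)   { /* fxrstor of the saved FPU context */ }
        static void set_lazy_trap(struct task_fpu *t) { /* set CR0.TS: first FPU use will trap */ }

        /* Eager restore path; also what the device-not-available trap handler does. */
        static void math_restore(struct task_fpu *t)
        {
                restore_fpu(t);
                t->fpu_counter++;       /* 8-bit field: wraps after 256, falling back to lazy */
        }

        /* Called when switching to task 'next'. */
        static void switch_fpu(struct task_fpu *next)
        {
                if (next->fpu_counter > 5)
                        math_restore(next);     /* frequent FPU user: skip the lazy trap */
                else
                        set_lazy_trap(next);    /* sporadic user: trap and restore on demand */
        }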
* [PATCH] i386: Support physical cpu hotplug for x86_64  (Ashok Raj, 2006-09-26, 3 files, -5/+67)
    This patch enables ACPI based physical CPU hotplug support for x86_64.
    Implements acpi_map_lsapic() and acpi_unmap_lsapic() to support physical
    cpu hotplug.
    Signed-off-by: Ashok Raj <ashok.raj@intel.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Andi Kleen <ak@muc.de>
    Cc: "Brown, Len" <len.brown@intel.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
* [PATCH] fix bus numbering format in mmconfig warning  (Brice Goglin, 2006-09-26, 1 file, -3/+2)
    Make an mmconfig warning print the bus id with a regular format.
    Signed-off-by: Brice Goglin <brice@myri.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
* [PATCH] i386: mark two more functions as __init  (Magnus Damm, 2006-09-26, 1 file, -1/+1)
    cyrix_identify() should be __init because transmeta_identify() is.
    tsc_init() is only called from setup_arch(), which is marked __init.
    These two section mismatches were detected by running modpost on a
    vmlinux image compiled with CONFIG_RELOCATABLE=y.
    Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: clean up topology.c  (Magnus Damm, 2006-09-26, 1 file, -18/+3)
    There is no need to duplicate the topology_init() function.
    Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Auto size the per cpu area.  (Eric W. Biederman, 2006-09-26, 2 files, -5/+12)
    Now for a completely different but trivial approach. I just boot tested
    it with 255 CPUs and everything worked.

    Currently everything (except module data) that we place in the per cpu
    area is known at compile time. So instead of allocating a fixed size for
    the per_cpu area, allocate the number of bytes we need plus a fixed
    constant to be used for modules. It isn't perfect, but it is much less
    of a pain to work with than what we are doing now.

    AK: fixed warning
    Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Descriptor and trap table cleanups.  (Rusty Russell, 2006-09-26, 2 files, -64/+81)
    The implementation comes from Zach's [RFC, PATCH 10/24] i386 Vmi
    descriptor changes: Descriptor and trap table cleanups.

    Add cleanly written accessors for IDT and GDT gates so the subarch may
    override them. Note that this allows the hypervisor to transparently
    tweak the DPL of the descriptors as well as the RPL of segments in those
    descriptors, with no unnecessary kernel code modification. It also allows
    the hypervisor implementation of the VMI to tweak the gates, allowing for
    custom exception frames or extra layers of indirection above the guest
    fault / IRQ handlers.

    Signed-off-by: Zachary Amsden <zach@vmware.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Andi Kleen <ak@suse.de>
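    [Editor's note: a sketch of the kind of accessor this change introduces;
    names and signatures are approximate, not quoted from the patch. Routing
    every descriptor write through one helper is what lets a subarch or
    hypervisor backend override it and adjust DPL/RPL transparently.]

        /* Each GDT/IDT descriptor is 8 bytes: two 32-bit halves. */
        static inline void write_dt_entry(void *dt, int entry, u32 entry_low, u32 entry_high)
        {
                u32 *lp = (u32 *)((char *)dt + entry * 8);

                lp[0] = entry_low;
                lp[1] = entry_high;
        }

        /* The subarch (e.g. a VMI backend) may redefine these to hook descriptor writes. */
        #define write_gdt_entry(gdt, entry, low, high)  write_dt_entry(gdt, entry, low, high)
        #define write_idt_entry(idt, entry, low, high)  write_dt_entry(idt, entry, low, high)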
* [PATCH] i386: move kernel_thread_helper into entry.S  (Andi Kleen, 2006-09-26, 2 files, -9/+13)
    And add proper CFI annotation to it, which was previously impossible.
    This prevents "stuck" messages by the dwarf2 unwinder when reaching the
    top of a kernel stack.

    Includes feedback from Jan Beulich.
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Make enable_local_apic static  (Adrian Bunk, 2006-09-26, 2 files, -13/+12)
    enable_local_apic can now become static.
    Cc: len.brown@intel.com
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Make acpi_force static  (Adrian Bunk, 2006-09-26, 1 file, -1/+1)
    acpi_force can become static.
    Cc: len.brown@intel.com
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: make fault notifier unconditional and export it  (Andi Kleen, 2006-09-26, 1 file, -10/+4)
    It's needed for external debuggers and the overhead is very small.
    Also make the actual notifier chain they use static.
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] make fault notifier unconditional and export it  (Andi Kleen, 2006-09-26, 1 file, -9/+3)
    It's needed for external debuggers and the overhead is very small.
    Also make the actual notifier chain they use static.
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Fix warning in mpparse.c  (Andi Kleen, 2006-09-26, 1 file, -0/+2)
    Fix
        linux/arch/i386/kernel/mpparse.c: In function 'MP_bus_info':
        linux/arch/i386/kernel/mpparse.c:232: warning: comparison is always false due to limited range of data type
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Make boot_param_data pure BSS  (Andi Kleen, 2006-09-26, 1 file, -1/+1)
    Since it's all zero. Actually I think gcc 4+ will do that automatically,
    but earlier compilers won't.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386/x86-64: Improve Kconfig description of CRASH_DUMP  (Andi Kleen, 2006-09-26, 2 files, -1/+15)
    Improve the Kconfig description of CONFIG_CRASH_DUMP. Previously it was
    too brief to be useful.
    Cc: vgoyal@in.ibm.com
    Cc: ebiederm@xmission.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] X86_64 monotonic_clock goes backwards  (Dimitri Sivanich, 2006-09-26, 1 file, -7/+4)
    I've noticed some erratic behavior while testing the X86_64 version of
    monotonic_clock(). While spinning in a loop reading monotonic clock
    values (pinned to a single cpu) I noticed that the difference between
    subsequent values occasionally went negative (time going backwards).

    I found that in the following code:

            this_offset = get_cycles_sync();
            /* FIXME: 1000 or 1000000? */
        --> offset = (this_offset - last_offset)*1000 / cpu_khz;
            }
            return base + offset;

    the offset sometimes turns out to be 0, even though
    this_offset > last_offset.

    Added fix from Toyo Abe <toyoa@mvista.com>:
    The x86_64-mm-monotonic-clock.patch in 2.6.18-rc4-mm2 made a change to
    the updating of monotonic_base. It now uses cycles_2_ns(). I suggest
    that a set_cyc2ns_scale() should be done prior to the setup_irq(),
    because cycles_2_ns() can be called from the timer ISR right after
    irq0 is enabled.

    Signed-off-by: Toyo Abe <toyoa@mvista.com>
    Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
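    [Editor's note: illustrative only. The truncating divide above loses
    small deltas entirely: with cpu_khz = 2,000,000 (2 GHz), a delta of 1,500
    cycles gives (1500*1000)/2000000 == 0. A fixed-point scale of the kind
    cycles_2_ns() uses avoids that; the names and constants below are
    assumptions, not a quote of the patch.]

        #define NS_SCALE        10              /* 2^10 fixed-point fraction bits */
        #define NSEC_PER_MSEC   1000000UL

        static unsigned long cyc2ns_scale;

        /* Must run before the timer irq is enabled, or the ISR converts with scale == 0. */
        static void set_cyc2ns_scale(unsigned long khz)
        {
                cyc2ns_scale = (NSEC_PER_MSEC << NS_SCALE) / khz;
        }

        static unsigned long long cycles_2_ns(unsigned long long cyc)
        {
                return (cyc * cyc2ns_scale) >> NS_SCALE;  /* 1,500 cycles @ 2 GHz -> 750 ns */
        }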
* [PATCH] x86: error_code is not safe for kprobes  (Prasanna S.P, 2006-09-26, 3 files, -25/+25)
    This patch moves entry.S:error_entry to the .kprobes.text section: since
    code marked unsafe for kprobes jumps directly to entry.S::error_entry,
    that must be marked unsafe as well. This patch also moves all the
    ".previous.text" asm directives to ".previous" for the kprobes section.

    AK: Following a similar i386 patch from Chuck Ebbert
    AK: Also merged Jeremy's fix in.

    From: Jeremy Fitzhardinge <jeremy@goop.org>
    KPROBE_ENTRY does a .section .kprobes.text, and expects its users to do
    a .previous at the end of the function. Unfortunately, if any code
    within the function switches sections, for example .fixup, then the
    .previous ends up putting all subsequent code into .fixup. Worse, any
    subsequent .fixup code gets intermingled with the code it's supposed to
    be fixing (which is also in .fixup). It's surprising this didn't cause
    more havoc.

    The fix is to use .pushsection/.popsection, so this stuff nests properly.
    A further cleanup would be to get rid of all .section/.previous pairs,
    since they're inherently fragile.

    From: Chuck Ebbert <76306.1226@compuserve.com>
    Because code marked unsafe for kprobes jumps directly to
    entry.S::error_code, that must be marked unsafe as well. The easiest way
    to do that is to move the page fault entry point to just before
    error_code and let it inherit the same section. Also moved all the
    ".previous" asm directives for kprobes sections to column 1 and removed
    ".text" from them.

    Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: don't taint UP K7's running SMP kernels.  (Dave Jones, 2006-09-26, 1 file, -0/+3)
    We have a test that looks for invalid pairings of certain athlon/durons
    that weren't designed for SMP, and taint accordingly (with 'S') if we
    find such a configuration. However, this test shouldn't fire if there's
    only a single CPU present. It's perfectly valid for an SMP kernel to
    boot on UP hardware, for example.

    AK: changed to num_possible_cpus()
    Signed-off-by: Dave Jones <davej@redhat.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: fix dubious segment register clear in cpu_init()  (Jeremy Fitzhardinge, 2006-09-26, 1 file, -1/+1)
    Fix a very dubious piece of code in
    arch/i386/kernel/cpu/common.c:cpu_init(). This clears out %fs and %gs,
    but clobbers %eax in the process without telling gcc. It turns out that
    gcc happens not to be using %eax at that point anyway, so it doesn't
    matter much, but it looks like a bomb waiting to go off. This does end
    up saving an instruction, because gcc wants %eax==0 for the
    set_debugreg()s below.

    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Signed-off-by: Andi Kleen <ak@suse.de>
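    [Editor's note: a hedged before/after illustration of the problem; the
    exact asm in common.c may differ. The first form zeroes %eax behind gcc's
    back; the second lets gcc pick, and account for, the zero register.]

        static void clear_fs_gs_example(void)
        {
                /* Before: clobbers %eax without telling gcc */
                asm volatile ("xorl %eax, %eax; movl %eax, %fs; movl %eax, %gs");

                /* After: gcc supplies a zeroed register it knows about; since it
                 * already wants a zero for the following set_debugreg() calls, the
                 * explicit xorl disappears and an instruction is saved. */
                asm volatile ("movl %0, %%fs; movl %0, %%gs" : : "r" (0));
        }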
* [PATCH] Don't force frame pointers for lockdep  (Andi Kleen, 2006-09-26, 1 file, -1/+1)
    Now that stacktrace supports dwarf2, don't force frame pointers for
    lockdep anymore.
    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Get ebp from unwinder state when continuing fallback backtrace  (Andi Kleen, 2006-09-26, 2 files, -8/+17)
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Terminate backtrace fallback early if unwinder stack pointer is zero  (Andi Kleen, 2006-09-26, 1 file, -0/+2)
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Do stacktracer conversion too  (Andi Kleen, 2006-09-26, 4 files, -121/+82)
    Following the x86-64 patches; in fact it reuses code from them. Convert
    the standard backtracer to do all output using callbacks. Use the x86-64
    stack tracer implementation that uses these callbacks to implement the
    stacktrace interface.

    This allows stacktrace to use the new dwarf2 unwinder and get better
    backtraces.

    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Check for end of stack trace before falling back  (Andi Kleen, 2006-09-26, 1 file, -0/+2)
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Merge stacktrace and show_trace  (Andi Kleen, 2006-09-26, 3 files, -211/+120)
    This unifies the standard backtracer and the new stacktrace-in-memory
    backtracer. The standard one is converted to use callbacks, and
    stacktrace is then reimplemented on top of those callbacks.

    The main advantage is that stacktrace can now use the new dwarf2 unwinder
    and avoid false positives in many cases. I kept it simple to make sure
    the standard backtracer stays reliable.

    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
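    [Editor's note: the callback interface is roughly of this shape; names
    are approximate, not quoted from the patch. The stack walker calls
    'address' for every return address it finds, so the show_trace-style
    printer and the stacktrace-in-memory saver only differ in the callbacks
    they supply.]

        struct stacktrace_ops {
                void (*warning)(void *data, char *msg);              /* e.g. unwinder fell back */
                void (*stack)(void *data, char *name);               /* entered a new stack (irq, exception) */
                void (*address)(void *data, unsigned long address);  /* one return address found */
        };

        /* Printing backtracer: emit each address to the log. */
        static void print_address(void *data, unsigned long addr)
        {
                printk(" [<%016lx>]\n", addr);
        }

        /* stacktrace backtracer: store each address into a struct stack_trace buffer. */
        static void save_address(void *data, unsigned long addr)
        {
                struct stack_trace *trace = data;

                if (trace->nr_entries < trace->max_entries)
                        trace->entries[trace->nr_entries++] = addr;
        }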
* [PATCH] Move unwind_init earlier  (Andi Kleen, 2006-09-26, 1 file, -1/+1)
    Needed for use of the unwinder in lockdep, because lockdep runs really
    early too.
    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Don't access the APIC in safe_smp_processor_id when it is not mapped yet  (Andi Kleen, 2006-09-26, 3 files, -1/+4)
    Lockdep can call the dwarf2 unwinder early, and the dwarf2 code uses
    safe_smp_processor_id, which tries to access the local APIC page. But
    that doesn't work before the APIC code has set up its fixmap. Check for
    this case and always return the boot cpu then.
    Cc: jbeulich@novell.com
    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Avoid recursion in lockdep when stack tracer takes locks  (Andi Kleen, 2006-09-26, 1 file, -0/+4)
    The new dwarf2 unwinder needs to take locks to do backtraces inside
    modules. This patch makes sure lockdep, which calls stacktrace, is not
    reentered. Thanks to Ingo for suggesting this simpler approach.
    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] x86: Some preparatory cleanup for stack trace  (Andi Kleen, 2006-09-26, 5 files, -30/+24)
    - Remove the unused all_contexts parameter; no caller used it.
    - Move the skip argument into the structure (needed for follow-on patches).
    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Calgary IOMMU: eradicate sole remaining 80 chars per line offender  (Muli Ben-Yehuda, 2006-09-26, 1 file, -1/+2)
    Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
    Signed-off-by: Jon Mason <jdmason@us.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] remove tce_cache_blast_stress()  (Muli Ben-Yehuda, 2006-09-26, 1 file, -11/+0)
    tce_cache_blast_stress was useful during bringup to stress the IOMMU's
    cache flushing. Now that we quiesce DMAs on every cache flush, using
    _stress() brings the machine to its knees once you put it under load.
    Completely remove this debug/bringup code, which isn't useful anymore.
    Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
    Signed-off-by: Jon Mason <jdmason@us.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] only verify the allocation bitmap if CONFIG_IOMMU_DEBUG is on  (Muli Ben-Yehuda, 2006-09-26, 1 file, -9/+35)
    Introduce a new function verify_bit_range(). Define two versions, one
    for CONFIG_IOMMU_DEBUG enabled and one for disabled. Previously we were
    checking that the bitmap was consistent every time we allocated or freed
    an entry in the TCE table, which is good for debugging but incurs an
    unnecessary penalty on non-debug builds.
    Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
    Signed-off-by: Jon Mason <jdmason@us.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
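    [Editor's note: an illustrative sketch of the two-variant pattern
    described above; the real Calgary code differs in detail.]

        #ifdef CONFIG_IOMMU_DEBUG
        /* Debug build: scan the range and complain about any bit not in the expected state. */
        static void verify_bit_range(unsigned long *bitmap, int expected,
                                     unsigned long start, unsigned long end)
        {
                unsigned long idx;

                for (idx = start; idx < end; idx++)
                        if (!!test_bit(idx, bitmap) != expected)
                                printk(KERN_ERR "IOMMU: bitmap inconsistency at %lu "
                                       "(expected %d)\n", idx, expected);
        }
        #else
        /* Non-debug build: the per-allocation scan is too costly, so it compiles away. */
        static inline void verify_bit_range(unsigned long *bitmap, int expected,
                                            unsigned long start, unsigned long end)
        {
        }
        #endif /* CONFIG_IOMMU_DEBUG */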
* [PATCH] print whether CONFIG_IOMMU_DEBUG is enabled  (Muli Ben-Yehuda, 2006-09-26, 1 file, -4/+10)
    Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
    Signed-off-by: Jon Mason <jdmason@us.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386/x86-64: rename is_at_popf(), add iret to tests and fix  (Chuck Ebbert, 2006-09-26, 2 files, -12/+10)
    is_at_popf() needs to test for the iret instruction as well as popf, so
    add that test and rename it to is_setting_trap_flag(). Also change the
    max insn length from 16 to 15 to match reality. LAHF / SAHF can't affect
    TF, so the comment in x86_64 is removed.
    Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
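    [Editor's note: a simplified sketch of the opcode test, not the actual
    patch; the real code handles more prefixes and 64-bit details. popf is
    opcode 0x9d and iret is 0xcf -- either can load EFLAGS.TF from the stack,
    which is why both must be recognized; 15 bytes is the architectural
    maximum instruction length.]

        #define MAX_INSN_SIZE 15        /* x86 instructions are at most 15 bytes */

        static int is_setting_trap_flag(const unsigned char *insn)
        {
                int i;

                for (i = 0; i < MAX_INSN_SIZE; i++) {
                        switch (insn[i]) {
                        case 0x9d:                       /* popf */
                        case 0xcf:                       /* iret */
                                return 1;
                        case 0x66: case 0x67:            /* operand/address size prefixes */
                        case 0x26: case 0x2e: case 0x36:
                        case 0x3e: case 0x64: case 0x65: /* segment override prefixes */
                                continue;                /* keep scanning past prefixes */
                        default:
                                return 0;                /* any other opcode leaves TF alone */
                        }
                }
                return 0;
        }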
* [PATCH] x86: Remove unneeded externs in acpi/boot.c  (Andi Kleen, 2006-09-26, 2 files, -3/+2)
    And move one into proto.h.
    Cc: len.brown@intel.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Replace local_save_flags+local_irq_disable with local_irq_save  (Fernando Luis Vázquez Cao, 2006-09-26, 2 files, -4/+2)
    The combination of "local_save_flags" and "local_irq_disable" seems to be
    equivalent to "local_irq_save" (see code snips below). Consequently,
    replace occurrences of local_save_flags+local_irq_disable with
    local_irq_save.

    * local_irq_save

        #define raw_local_irq_save(flags) \
                do { (flags) = __raw_local_irq_save(); } while (0)

        static inline unsigned long __raw_local_irq_save(void)
        {
                unsigned long flags = __raw_local_save_flags();
                raw_local_irq_disable();
                return flags;
        }

    * local_save_flags

        #define raw_local_save_flags(flags) \
                do { (flags) = __raw_local_save_flags(); } while (0)

    Signed-off-by: Fernando Vazquez <fernando@intellilink.co.jp>
    Signed-off-by: Andi Kleen <ak@suse.de>
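    [Editor's note: what the replacement looks like at a call site
    (illustrative only).]

        static void example_critical_section(void)
        {
                unsigned long flags;

                /* Before this patch: two steps */
                local_save_flags(flags);        /* snapshot the interrupt state */
                local_irq_disable();            /* then disable interrupts */
                /* ... critical section ... */
                local_irq_restore(flags);

                /* After this patch: one equivalent call */
                local_irq_save(flags);          /* snapshot and disable in one go */
                /* ... critical section ... */
                local_irq_restore(flags);
        }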
* [PATCH] Fix sparse warnings in compat aout code  (Andi Kleen, 2006-09-26, 1 file, -3/+5)
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Fix most sparse warnings in sys_ia32.c  (Andi Kleen, 2006-09-26, 1 file, -11/+13)
    Mostly by adding casts. I didn't touch the "invalid access past ..."
    warnings, which are caused by the sigset conversion.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Add sparse annotations to quiet sparse in arch/x86_64/mm/fault.c  (Andi Kleen, 2006-09-26, 1 file, -5/+5)
    Fixes
        linux/arch/x86_64/mm/fault.c:125:7: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/mm/fault.c:125:7:    expected void [noderef] *<noident><asn:1>
        linux/arch/x86_64/mm/fault.c:125:7:    got unsigned char *[assigned] instr
        linux/arch/x86_64/mm/fault.c:163:8: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/mm/fault.c:163:8:    expected void [noderef] *<noident><asn:1>
        linux/arch/x86_64/mm/fault.c:163:8:    got unsigned char *[assigned] instr
        linux/arch/x86_64/mm/fault.c:179:9: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/mm/fault.c:179:9:    expected void [noderef] *<noident><asn:1>
        linux/arch/x86_64/mm/fault.c:179:9:    got unsigned long *<noident>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Add sparse annotation to vsyscall.c  (Andi Kleen, 2006-09-26, 1 file, -6/+8)
    Fixes
        linux/arch/x86_64/kernel/vsyscall.c:276:7: warning: constant 0x0f40000000000 is so big it is long
        linux/arch/x86_64/kernel/vsyscall.c:80:14: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/kernel/vsyscall.c:80:14:    expected void const volatile [noderef] *addr<asn:2>
        linux/arch/x86_64/kernel/vsyscall.c:80:14:    got void *<noident>
        linux/arch/x86_64/kernel/vsyscall.c:200:7: warning: incorrect type in assignment (different address spaces)
        linux/arch/x86_64/kernel/vsyscall.c:200:7:    expected unsigned short [usertype] *map1
        linux/arch/x86_64/kernel/vsyscall.c:200:7:    got void [noderef] *<asn:2>
        linux/arch/x86_64/kernel/vsyscall.c:203:7: warning: incorrect type in assignment (different address spaces)
        linux/arch/x86_64/kernel/vsyscall.c:203:7:    expected unsigned short [usertype] *map2
        linux/arch/x86_64/kernel/vsyscall.c:203:7:    got void [noderef] *<asn:2>
        linux/arch/x86_64/kernel/vsyscall.c:215:10: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/kernel/vsyscall.c:215:10:    expected void volatile [noderef] *addr<asn:2>
        linux/arch/x86_64/kernel/vsyscall.c:215:10:    got unsigned short [usertype] *map2
        linux/arch/x86_64/kernel/vsyscall.c:217:10: warning: incorrect type in argument 1 (different address spaces)
        linux/arch/x86_64/kernel/vsyscall.c:217:10:    expected void volatile [noderef] *addr<asn:2>
        linux/arch/x86_64/kernel/vsyscall.c:217:10:    got unsigned short [usertype] *map1
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Move e820 map into e820.c  (Andi Kleen, 2006-09-26, 2 files, -1/+2)
    Minor cleanup. Keep setup.c free from unrelated clutter.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Clean up acpi_numa variable  (Andi Kleen, 2006-09-26, 2 files, -4/+3)
    Move it into srat.c; no need to clutter up setup.c with it. Also remove
    its use in setup.c completely - it only guarded a printk, which can be
    done unconditionally.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386/x86-64: Move acpi_disabled variables into acpi/boot.c  (Andi Kleen, 2006-09-26, 3 files, -10/+7)
    Removes code duplication between i386/x86-64. Not needed anymore in
    setup.c since the early_param cleanup.
    Cc: len.brown@intel.com
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Remove need for early lockdep init  (Andi Kleen, 2006-09-26, 2 files, -7/+3)
    I think it was only needed for the printks, and we can do them later.
    I put in a single early_printk so that we know the kernel is alive
    (early_printk doesn't need any locks). This makes some things easier for
    the initialization of unwind for lockdep, which is needed by later
    patches.
    Cc: mingo@elte.hu
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Convert x86-64 to early param  (Andi Kleen, 2006-09-26, 16 files, -275/+159)
    Instead of hackish manual parsing. Requires the earlier i386 patchkit,
    but also fixes i386 early_printk again.

    I removed some obsolete really early parameters which didn't do anything
    useful, and made a few parameters that needed it early (mostly oops
    printing setup). Also removed one panic check that wasn't visible
    without an early console anyway (the early console is now initialized
    after that panic).

    This cleans up a lot of code.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: Replace i386 open-coded cmdline parsing with early_param  (Rusty Russell, 2006-09-26, 12 files, -289/+319)
    This patch replaces the open-coded early commandline parsing throughout
    the i386 boot code with the generic mechanism (already used by ppc,
    powerpc, ia64 and s390). The code was inconsistent about whether it
    deletes the option from the cmdline or not, meaning some of these will
    get passed through the environment into init.

    This transformation is mainly mechanical, but there are some notable
    parts:

    1) Grammar: s/linux never set's it up/linux never sets it up/

    2) Remove hacked-in earlyprintk= option scanning. When someone actually
       implements CONFIG_EARLY_PRINTK, then they can use early_param().
       [AK: actually it is implemented, but I'm adding the early_param for
       it in the next x86-64 patch]

    3) Move declaration of generic_apic_probe() from setup.c into asm/apic.h

    4) Various parameters now moved into their appropriate files (thanks
       Andi).

    5) All parse functions which examine arg need to check for NULL, except
       one where it has subtle humor value.

    AK: readded acpi_sci handling which was completely dropped
    AK: moved some more variables into acpi/boot.c
    Cc: len.brown@intel.com
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Andi Kleen <ak@suse.de>
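    [Editor's note: an illustrative before/after of the conversion; the
    option name and variable are made up, not taken from the patch.]

        /* Before: open-coded scanning of the command line inside setup_arch(), e.g.
         *
         *      if (!memcmp(from, "maxcpus=", 8))
         *              max_cpus = simple_strtoul(from + 8, NULL, 0);
         *
         * After: a self-registering handler.  Per point 5 above, arg may be NULL. */
        static unsigned int max_cpus;

        static int __init parse_maxcpus(char *arg)
        {
                if (!arg)
                        return -EINVAL;
                max_cpus = simple_strtoul(arg, NULL, 0);
                return 0;
        }
        early_param("maxcpus", parse_maxcpus);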
* [PATCH] Allow early_param and identical __setup to exist  (Rusty Russell, 2006-09-26, 1 file, -4/+8)
    We currently assume that boot parameters which are handled by
    early_param() will not overlap boot parameters handled by __setup(): if
    they do, behaviour is dependent on link order, usually meaning __setup
    will not get called.

    ACPI wants to use early_param("pci"), and pci uses __setup("pci="), so
    we modify the core to let them coexist: "pci=noacpi" will now get passed
    to both.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Andi Kleen <ak@suse.de>
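    [Editor's note: an illustration of the coexistence described above;
    handler bodies and the pci_use_acpi flag are placeholders. With this
    change a "pci=noacpi" argument reaches both registered handlers.]

        static int pci_use_acpi = 1;    /* placeholder flag */

        /* Early handler, run during early-param parsing so ACPI can see "noacpi". */
        static int __init early_pci_parse(char *arg)
        {
                if (arg && !strcmp(arg, "noacpi"))
                        pci_use_acpi = 0;
                return 0;
        }
        early_param("pci", early_pci_parse);

        /* Normal __setup handler, run later from the regular option parsing. */
        static int __init pci_setup(char *str)
        {
                /* parse the remaining pci= suboptions here */
                return 1;
        }
        __setup("pci=", pci_setup);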
* [PATCH] Use early CPU identify before early command line parsing  (Andi Kleen, 2006-09-26, 2 files, -21/+2)
    This makes it possible to modify CPU flags in command line options
    without hacks, and removes another copy in head64.c.
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] remove lock prefix from is_at_popf() tests  (Chuck Ebbert, 2006-09-26, 1 file, -2/+2)
    The lock prefix will cause an exception when used with the popf
    instruction, so there is no need to continue searching after it's found.
    Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] remove superfluous BUG_ONs in nommu and gart  (Muli Ben-Yehuda, 2006-09-26, 2 files, -4/+0)
    There's no need to check for an invalid DMA data direction in nommu and
    gart, since we do it in dma-mapping.h anyway before calling the
    individual dma-ops.
    Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>