path: root/arch/ia64/kernel/setup.c
* ia64: remove paravirt code (Luis R. Rodriguez, 2015-06-10; 1 file, -12/+0)
    All the ia64 pvops code is now dead code since both xen and kvm support have been ripped out [0] [1]. Just that no one had troubled to rip this stuff out. The only useful remaining pieces were the old pvops docs but that was recently also generalized and moved out from ia64 [2]. This has been run time tested on an ia64 Madison system.
    [0] 003f7de625890 "KVM: ia64: remove" since v3.19-rc1
    [1] d52eefb47d4eb "ia64/xen: Remove Xen support for ia64" since v3.14-rc1
    [2] "virtual: Documentation: simplify and generalize paravirt_ops.txt"
    Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* ia64: fix up obsolete cpu function usage. (Rusty Russell, 2015-03-05; 1 file, -5/+6)
    Thanks to spatch, then a sweep for for_each_cpu_mask => for_each_cpu.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: linux-ia64@vger.kernel.org
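    For readers coming to this later, the conversion amounts to roughly the following. The helper below is a hypothetical, kernel-tree-only sketch, not the actual setup.c hunk:

        #include <linux/cpumask.h>
        #include <linux/printk.h>

        /* Hypothetical helper, for illustration only. */
        static void count_online_cpus_example(void)
        {
            int cpu, n = 0;

            /*
             * Obsolete spelling removed by this sweep (iterated a
             * cpumask_t directly):
             *
             *     for_each_cpu_mask(cpu, cpu_online_map)
             */
            for_each_cpu(cpu, cpu_online_mask)  /* current API: pointer to struct cpumask */
                n++;

            pr_info("%d CPUs online\n", n);
        }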
* DMI: Parse memory device (type 17) in SMBIOS (Chen, Gong, 2013-10-23; 1 file, -0/+1)
    This patch adds a new interface to decode memory device (type 17) to help error reporting on DIMMs.
    Original-author: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Chen, Gong <gong.chen@linux.intel.com>
    Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Acked-by: Borislav Petkov <bp@suse.de>
    Reviewed-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Delete __cpuinit usage from all ia64 users (Paul Gortmaker, 2013-06-24; 1 file, -5/+5)
    The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.
    After a discussion on LKML [1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. This removes all the ia64 uses of the __cpuinit macros.
    [1] https://lkml.org/lkml/2013/5/20/589
    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* dump_stack: implement arch-specific hardware description in task dumps (Tejun Heo, 2013-04-30; 1 file, -0/+1)
    x86 and ia64 can acquire extra hardware identification information from DMI and print it along with task dumps; however, the usage isn't consistent.
    * x86 show_regs() collects vendor, product and board strings and prints them out with PID, comm and utsname. Some of the information is printed again later in the same dump.
    * warn_slowpath_common() explicitly accesses the DMI board and prints it out with a "Hardware name:" label. This applies to both x86 and ia64 but is irrelevant on all other archs.
    * ia64 doesn't show DMI information on other non-WARN dumps.
    This patch introduces an arch-specific hardware description used by dump_stack(). It can be set by calling dump_stack_set_arch_desc() during boot and, if set, is printed out in a separate line with a "Hardware name:" label.
    dmi_set_dump_stack_arch_desc() is added which sets the arch-specific description from DMI data. It uses dmi_ids_string[], which is set from dmi_present() and used for the DMI debug message. It is a superset of the information x86 show_regs() is using. The function is called from x86 and ia64 boot code right after dmi_scan_machine().
    This makes the explicit DMI handling in warn_slowpath_common() unnecessary. Removed. show_regs() isn't yet converted to use generic debug information printing and this patch doesn't remove the duplicate DMI handling in x86 show_regs(). The next patch will unify show_regs() handling and remove the duplication.
    An example WARN dump follows.
      WARNING: at kernel/workqueue.c:4841 init_workqueues+0x35/0x505()
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0-rc1-work+ #3
      Hardware name: empty empty/S3992, BIOS 080011 10/26/2007
       0000000000000009 ffff88007c861e08 ffffffff81c614dc ffff88007c861e48
       ffffffff8108f500 ffffffff82228240 0000000000000040 ffffffff8234a08e
       0000000000000000 0000000000000000 0000000000000000 ffff88007c861e58
      Call Trace:
       [<ffffffff81c614dc>] dump_stack+0x19/0x1b
       [<ffffffff8108f500>] warn_slowpath_common+0x70/0xa0
       [<ffffffff8108f54a>] warn_slowpath_null+0x1a/0x20
       [<ffffffff8234a0c3>] init_workqueues+0x35/0x505
       ...
    v2: Use the same string as the debug message from dmi_present(), which also contains BIOS information. Move hardware name into its own line as warn_slowpath_common() did. This change was suggested by Bjorn Helgaas.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Bjorn Helgaas <bhelgaas@google.com>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: Fengguang Wu <fengguang.wu@intel.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Jesper Nilsson <jesper.nilsson@axis.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Mike Frysinger <vapier@gentoo.org>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ia64 idle: delete pm_idle (Len Brown, 2013-02-17; 1 file, -1/+0)
    pm_idle() on ia64 was a synonym for default_idle(). So simply invoke default_idle() directly.
    Signed-off-by: Len Brown <len.brown@intel.com>
    Cc: linux-ia64@vger.kernel.org
* Merge branch 'akpm' (Andrew's patch-bomb) (Linus Torvalds, 2012-03-28; 1 file, -1/+1)
|\
    Merge third batch of patches from Andrew Morton:
     - Some MM stragglers
     - core SMP library cleanups (on_each_cpu_mask)
     - Some IPI optimisations
     - kexec
     - kdump
     - IPMI
     - the radix-tree iterator work
     - various other misc bits.
    "That'll do for -rc1. I still have ~10 patches for 3.4, will send those along when they've baked a little more."
    * emailed from Andrew Morton <akpm@linux-foundation.org>: (35 commits)
      backlight: fix typo in tosa_lcd.c
      crc32: add help text for the algorithm select option
      mm: move hugepage test examples to tools/testing/selftests/vm
      mm: move slabinfo.c to tools/vm
      mm: move page-types.c from Documentation to tools/vm
      selftests/Makefile: make `run_tests' depend on `all'
      selftests: launch individual selftests from the main Makefile
      radix-tree: use iterators in find_get_pages* functions
      radix-tree: rewrite gang lookup using iterator
      radix-tree: introduce bit-optimized iterator
      fs/proc/namespaces.c: prevent crash when ns_entries[] is empty
      nbd: rename the nbd_device variable from lo to nbd
      pidns: add reboot_pid_ns() to handle the reboot syscall
      sysctl: use bitmap library functions
      ipmi: use locks on watchdog timeout set on reboot
      ipmi: simplify locking
      ipmi: fix message handling during panics
      ipmi: use a tasklet for handling received messages
      ipmi: increase KCS timeouts
      ipmi: decrease the IPMI message transaction time in interrupt mode
      ...
| * arch/ia64: remove references to cpu_*_map (Srivatsa S. Bhat, 2012-03-28; 1 file, -1/+1)
    This was marked as obsolete for quite a while now. Now it is time to remove it altogether. And while doing this, get rid of first_cpu() as well. Also, remove the redundant setting of cpu_online_mask in smp_prepare_cpus() because the generic code would have already set cpu 0 in cpu_online_mask.
    Reported-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Disintegrate asm/system.h for IA64 (David Howells, 2012-03-28; 1 file, -1/+0)
|/
    Disintegrate asm/system.h for IA64.
    Signed-off-by: David Howells <dhowells@redhat.com>
    Acked-by: Tony Luck <tony.luck@intel.com>
    cc: linux-ia64@vger.kernel.org
* [IA64] Merge overlapping reserved regions at boot (Petr Tesarik, 2011-12-09; 1 file, -0/+19)
    While working on the upcoming SLES11 SP2, I ran into an issue with booting the panic kernel on a kernel crash. In the first iteration I found out that the initial register backing store gets overwritten with zeroes, causing a kernel crash shortly afterwards.
    Further investigation revealed that rsvd_region[] contains overlapping entries: find_memmap_space() returns a pointer which lies between KERNEL_START and _end. This is correct with the EFI memmap as patched by the kexec purgatory code. That code removes vmlinux LOAD segments from the usable map, but there is a pretty large hole between the gate section and the per-cpu section.
    This happens because reserve_memory() blindly marks [KERNEL_START, __end] as reserved, even though there is a free block in the middle in the kexec case, because kexec noticed a large gap between sections and modified the efi_memory_map to account for this.
    Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* crash_dump: export is_kdump_kernel to modules, consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn (Olaf Hering, 2011-03-23; 1 file, -18/+0)
    The Xen PV drivers in a crashed HVM guest can not connect to the dom0 backend drivers because both frontend and backend drivers are still in connected state. To run the connection reset function only in case of a crashdump, the is_kdump_kernel() function needs to be available for the PV driver modules.
    Consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn into kernel/crash_dump.c. Also export elfcorehdr_addr to make is_kdump_kernel() usable for modules.
    Leave 'elfcorehdr' as early_param(). This changes powerpc from __setup() to early_param(). It adds an address range check from x86 also on ia64 and powerpc.
    [akpm@linux-foundation.org: additional #includes]
    [akpm@linux-foundation.org: remove elfcorehdr_addr export]
    [akpm@linux-foundation.org: fix for Tejun's mm/nobootmem.c changes]
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [IA64] Initialize interrupts later (from init_IRQ()) (Tony Luck, 2010-10-05; 1 file, -4/+0)
    Thomas Gleixner is cleaning up the generic irq code, and ia64 ran into problems because it calls register_intr() before early_irq_init() is called. Move the call to acpi_boot_init() from setup_arch() to init_IRQ(). As a bonus - moving the call later means we no longer need the hacks in iosapic.c to switch between the bootmem and regular allocator - we can just use kzalloc() for allocation.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* dma-mapping: unify dma_get_cache_alignment implementations (FUJITA Tomonori, 2010-08-11; 1 file, -6/+0)
    dma_get_cache_alignment returns the minimum DMA alignment. Architectures define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we can unify the dma_get_cache_alignment implementations.
    Note that some architectures implement dma_get_cache_alignment wrongly: dma_get_cache_alignment() should return the minimum DMA alignment, so fully-coherent architectures should return 1. This patch also fixes this issue.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: <linux-arch@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
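    The unified helper described above reduces, in essence, to the sketch below (hedged; the real version lives in include/linux/dma-mapping.h and may differ in detail):

        /*
         * Minimum alignment a driver can rely on for DMA-able buffers:
         * the arch-declared minimum if there is one, otherwise 1 on
         * fully coherent architectures.
         */
        static inline int dma_get_cache_alignment(void)
        {
        #ifdef ARCH_DMA_MINALIGN
            return ARCH_DMA_MINALIGN;
        #else
            return 1;
        #endif
        }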
* [IA64] Remove COMPAT_IA32 support (Tony Luck, 2010-02-08; 1 file, -5/+0)
    This has been broken since May 2008 when Al Viro killed altroot support. Since nobody has complained, it would appear that there are no users of this code (a plausible theory, since the main OSVs that support ia64 prefer to use the IA32-EL software emulation).
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* percpu: make percpu symbols in ia64 unique (Tejun Heo, 2009-10-29; 1 file, -2/+2)
    This patch updates percpu related symbols in ia64 such that percpu symbols are unique and don't clash with local symbols. This serves two purposes: decreasing the possibility of global percpu symbol collision, and allowing the per_cpu__ prefix to be dropped from percpu symbols.
    * arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/
    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which cause name clashes" patch.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: linux-ia64@vger.kernel.org
* ia64: convert to dynamic percpu allocator (Tejun Heo, 2009-10-02; 1 file, -12/+0)
    Unlike other archs, ia64 reserves space for percpu areas during early memory initialization. These areas occupy a contiguous region indexed by cpu number on the contiguous memory model, or are grouped by node on the discontiguous memory model. As allocation and initialization are done by the arch code, all that setup_per_cpu_areas() needs to do is communicate the determined layout to the percpu allocator.
    This patch implements setup_per_cpu_areas() for both contig and discontig memory models and drops HAVE_LEGACY_PER_CPU_AREA.
    Please note that for the contig model, the allocation itself is modified only to allocate for possible cpus instead of NR_CPUS. As the dynamic percpu allocator can handle non-direct mapping, there's no reason to allocate memory for cpus which aren't possible.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: linux-ia64 <linux-ia64@vger.kernel.org>
* ia64: initialize cpu maps early (Tejun Heo, 2009-10-02; 1 file, -6/+5)
    All information necessary to initialize the cpu possible and present maps is available once early_acpi_boot_init() is complete. Reorganize setup_arch() and the acpi init functions such that:
    * CPU information is printed after LAPIC entries are parsed in early_acpi_boot_init().
    * smp_build_cpu_map() is called by setup_arch() instead of acpi functions.
    * smp_build_cpu_map() is called once all CPU related information is available, before memory is initialized.
    This is primarily to allow find_memory() to use cpu maps, but is also a general cleanup. Please note that with this change, the somewhat ad-hoc early_cpu_possible_map defined and used for NUMA configurations is probably unnecessary. Something to clean up another day.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: linux-ia64 <linux-ia64@vger.kernel.org>
* ia64: Fix setup_per_cpu_areas() compilation error (Fenghua Yu, 2009-07-15; 1 file, -0/+6)
    Fix the ia64 setup_per_cpu_areas() redefinition issue in the UP configuration. When compiling an ia64 kernel in the UP configuration, the following compilation errors are reported:
      arch/ia64/kernel/setup.c:860: error: redefinition of 'setup_per_cpu_areas'
      include/linux/percpu.h:185: error: previous definition of 'setup_per_cpu_areas' was here
    The patch fixes the issue in arch/ia64/kernel/setup.c.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
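    The clash arises because <linux/percpu.h> supplies a fallback definition when the architecture does not provide its own per-cpu layout. One way such a clash can be resolved, sketched here under the assumption that the arch override only matters on SMP (this is not necessarily the exact hunk applied), is to guard the arch definition:

        #include <linux/percpu.h>

        #ifdef CONFIG_SMP
        /* Arch-specific layout handed to the percpu allocator. */
        void __init setup_per_cpu_areas(void)
        {
            /* ... ia64-specific reservation and mapping ... */
        }
        #endif  /* UP builds fall back to the generic definition */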
* [IA64] Convert ia64 to use int-ll64.h (Matthew Wilcox, 2009-06-17; 1 file, -16/+16)
    It is generally agreed that it would be beneficial for u64 to be an unsigned long long on all architectures. ia64 (in common with several other 64-bit architectures) currently uses unsigned long. Migrating piecemeal is too painful; this giant patch fixes all compilation warnings and errors that come as a result of switching to use int-ll64.h.
    Note that userspace will still see __u64 defined as unsigned long. This is important as it affects C++ name mangling.
    [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use u64 for start/end rather than unsigned long]
    Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] BUG to BUG_ON changes (Stoyan Gaydarov, 2009-04-01; 1 file, -2/+1)
    Replace:
        if (test)
                BUG();
    with:
        BUG_ON(test);
    Signed-off-by: Stoyan Gaydarov <stoyboyker@gmail.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull pvops into release branch (Tony Luck, 2009-03-31; 1 file, -0/+2)
|\
| * ia64/pv_ops: implement binary patching optimization for native. (Isaku Yamahata, 2009-03-26; 1 file, -0/+2)
    Implement binary patching optimization for pv_cpu_ops. With this optimization, indirect calls to pv_cpu_ops methods can be converted into inline execution or direct calls.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | cpumask: prepare for iterators to only go to nr_cpu_ids/nr_cpumask_bits.: ia64 (Rusty Russell, 2009-03-16; 1 file, -2/+2)
|/
    Impact: cleanup, futureproof
    In fact, all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids. So use that instead of NR_CPUS in various places. This is always safe: no cpu number can be >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
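    As a concrete illustration of the NR_CPUS to nr_cpu_ids change, a hypothetical loop (not the actual setup.c hunk) would now be bounded like this:

        #include <linux/cpumask.h>

        /* Hypothetical example: visit every possible CPU number. */
        static void walk_possible_cpus_example(void)
        {
            int cpu;

            for (cpu = 0; cpu < nr_cpu_ids; cpu++) {   /* was: cpu < NR_CPUS */
                if (!cpu_possible(cpu))
                    continue;
                /* ... per-cpu setup work ... */
            }
        }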
* [IA64] Reserve elfcorehdr memory in CONFIG_CRASH_DUMP (Jay Lan, 2008-11-07; 1 file, -1/+1)
    The IA64 kdump kernel failed to initialize /proc/vmcore in 2.6.28-rc2. A bug was introduced in commit d9a9855d0b06ca6d6cc92596fedcc03f8512e062 ("always reserve elfcore header memory in crash kernel"). The problem was that the call to reserve_elfcorehdr() should be placed under CONFIG_CRASH_DUMP rather than under CONFIG_CRASH_KERNEL, which does not exist.
    Signed-off-by: Jay Lan <jlan@sgi.com>
    Acked-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] fix boot panic caused by offline CPUs (Doug Chapman, 2008-11-06; 1 file, -3/+4)
    This fixes a regression introduced by 2c6e6db41f01b6b4eb98809350827c9678996698 "Minimize per_cpu reservations." That patch incorrectly used information about what CPUs are possible that was not yet initialized by ACPI. The end result was that per_cpu structures for offline CPUs were not initialized, causing a NULL pointer reference.
    Since we cannot do the full acpi_boot_init() call any earlier, the simplest fix is to just parse the MADT for SAPIC entries early to find the CPU info. This should also allow for some cleanup of the code added by the "Minimize per_cpu reservations" change. This patch just fixes the regression; the cleanup will come in a later patch.
    Signed-off-by: Doug Chapman <doug.chapman@hp.com>
    Signed-off-by: Alex Chiang <achiang@hp.com>
    CC: Robin Holt <holt@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2008-10-23; 1 file, -13/+29)
|\
    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: (41 commits)
      [IA64] Fix annoying IA64_TR_ALLOC_MAX message.
      [IA64] kill sys32_pipe
      [IA64] remove sys32_pause
      [IA64] Add Variable Page Size and IA64 Support in Intel IOMMU
      ia64/pv_ops: paravirtualized instruction checker.
      ia64/xen: a recipe for using xen/ia64 with pv_ops.
      ia64/pv_ops: update Kconfig for paravirtualized guest and xen.
      ia64/xen: preliminary support for save/restore.
      ia64/xen: define xen machine vector for domU.
      ia64/pv_ops/xen: implement xen pv_time_ops.
      ia64/pv_ops/xen: implement xen pv_irq_ops.
      ia64/pv_ops/xen: define the nubmer of irqs which xen needs.
      ia64/pv_ops/xen: implement xen pv_iosapic_ops.
      ia64/pv_ops/xen: paravirtualize entry.S for ia64/xen.
      ia64/pv_ops/xen: paravirtualize ivt.S for xen.
      ia64/pv_ops/xen: paravirtualize DO_SAVE_MIN for xen.
      ia64/pv_ops/xen: define xen paravirtualized instructions for hand written assembly code
      ia64/pv_ops/xen: define xen pv_cpu_ops.
      ia64/pv_ops/xen: define xen pv_init_ops for various xen initialization.
      ia64/pv_ops/xen: elf note based xen startup.
      ...
| * [IA64] Add Variable Page Size and IA64 Support in Intel IOMMU (Fenghua Yu, 2008-10-17; 1 file, -13/+29)
    The patch contains Intel IOMMU IA64 specific code. It defines the new machvec dig_vtd, hooks for IOMMU, DMAR table detection, a cache line flush function, etc.
    For a generic kernel with CONFIG_DMAR=y, if an Intel IOMMU is detected, dig_vtd is used as the machine vector. Otherwise, the kernel falls back to the dig machine vector. The kernel parameter "machvec=dig" or "intel_iommu=off" can be used to force the kernel to boot with the dig machine vector.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | always reserve elfcore header memory in crash kernel (Simon Horman, 2008-10-20; 1 file, -3/+1)
    elfcore header memory needs to be reserved in a crash kernel. This means that the relevant code should be protected by CONFIG_CRASH_DUMP rather than CONFIG_PROC_VMCORE.
    Signed-off-by: Simon Horman <horms@verge.net.au>
    Cc: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | kdump: add is_vmcore_usable() and vmcore_unusable() (Simon Horman, 2008-10-20; 1 file, -2/+2)
    The usage of elfcorehdr_addr has changed recently such that being set to ELFCORE_ADDR_MAX is used by is_kdump_kernel() to indicate if the code is executing in a kernel executed as a crash kernel. However, arch/ia64/kernel/setup.c:reserve_elfcorehdr will reset elfcorehdr_addr to ELFCORE_ADDR_MAX on error, which means any subsequent calls to is_kdump_kernel() will return 0, even though they should return 1.
    Ok, at this point in time there are no subsequent calls, but I think it's fair to say that there is ample scope for error, or at the very least confusion.
    This patch adds an extra state, ELFCORE_ADDR_ERR, which indicates that elfcorehdr_addr was passed on the command line, and thus execution is taking place in a crashdump kernel, but vmcore can't be used for some reason. This is tested for using is_vmcore_usable() and set using vmcore_unusable(). A subsequent patch makes use of this new code.
    To summarise, the states that elfcorehdr_addr can now be in are as follows:
      ELFCORE_ADDR_MAX: not a crashdump kernel
      ELFCORE_ADDR_ERR: crashdump kernel but vmcore is unusable
      any other value:  crash dump kernel and vmcore is usable
    Signed-off-by: Simon Horman <horms@verge.net.au>
    Cc: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
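    The three states map naturally onto a pair of small helpers. The sketch below is modeled on include/linux/crash_dump.h as described above (hedged; the real header may differ in detail):

        #define ELFCORE_ADDR_MAX    (-1ULL)  /* not a crash-dump kernel */
        #define ELFCORE_ADDR_ERR    (-2ULL)  /* crash-dump kernel, vmcore unusable */

        extern unsigned long long elfcorehdr_addr;  /* any other value: vmcore usable */

        static inline int is_kdump_kernel(void)
        {
            return elfcorehdr_addr != ELFCORE_ADDR_MAX;
        }

        static inline int is_vmcore_usable(void)
        {
            return is_kdump_kernel() && elfcorehdr_addr != ELFCORE_ADDR_ERR;
        }

        static inline void vmcore_unusable(void)
        {
            if (is_kdump_kernel())
                elfcorehdr_addr = ELFCORE_ADDR_ERR;
        }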
* | kdump: make elfcorehdr_addr independent of CONFIG_PROC_VMCORE (Vivek Goyal, 2008-10-20; 1 file, -1/+8)
|/
    o elfcorehdr_addr is used not only by the code under CONFIG_PROC_VMCORE but also by code which is not inside CONFIG_PROC_VMCORE. For example, is_kdump_kernel() is used by powerpc code to determine if the kernel is booting after a panic and should then use the previous kernel's TCE table. So even if CONFIG_PROC_VMCORE is not set in the second kernel, one should be able to correctly determine that we are booting after a panic and set up the calgary iommu accordingly.
    o So remove the assumption that elfcorehdr_addr is under CONFIG_PROC_VMCORE.
    o Move the definition of elfcorehdr_addr to the arch dependent crash files. (Unfortunately crash dump does not have an arch independent file, otherwise that would have been the best place.)
    o kexec.c is not the right place, as one can have CRASH_DUMP enabled in the second kernel without KEXEC being enabled.
    o I don't see the sh setup code parsing the command line for elfcorehdr_addr. I am wondering how the vmcore interface works on sh. Anyway, I am at least defining elfcorehdr_addr so that compilation is not broken on sh.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
    Acked-by: Simon Horman <horms@verge.net.au>
    Acked-by: Paul Mundt <lethal@linux-sh.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [IA64] Ski simulator doesn't need check_sal_cache_flush (Alex Chiang, 2008-09-22; 1 file, -0/+2)
    Peter Chubb reported that commit 3463a93def55c309f3c0d0a8aaf216be3be42d64 ("Update check_sal_cache_flush to use platform_send_ipi()") broke Ski because it does not implement IPIs. Tony Luck suggested we just #ifndef out the call (since the simulator does not have the SAL bug that this code is attempting to detect and workaround).
    Signed-off-by: Alex Chiang <achiang@hp.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Ensure cpu0 can access per-cpu variables in early boot code (Tony Luck, 2008-08-12; 1 file, -8/+10)
    ia64 handles per-cpu variables a little differently from other architectures in that it maps the physical memory allocated for each cpu at a constant virtual address (0xffffffffffff0000). This mapping is not enabled until the architecture specific cpu_init() function is run, which causes problems since some generic code is run before this point. In particular, when CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to per-cpu memory at the first printk() call, so the boot will fail without the kernel printing anything to the console.
    Fix this by allocating percpu memory for cpu0 in the kernel data section and doing all initialization to enable percpu access in head.S before calling any generic code. Other cpus must take care not to access per-cpu variables too early, but their code path from start_secondary() to cpu_init() is all in arch/ia64.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Move include/asm-ia64 to arch/ia64/include/asm (Tony Luck, 2008-08-01; 1 file, -1/+1)
    After moving the include files there were a few clean-ups:
    1) Some files used #include <asm-ia64/xyz.h>; changed to <asm/xyz.h>.
    2) Some comments alerted maintainers to look at various header files to make matching updates if certain code were to be changed. Updated these comments to use the new include paths.
    3) Some header files mentioned their own names in initial comments. Just deleted these self references.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull pvops into release branch (Tony Luck, 2008-07-17; 1 file, -0/+10)
|\
| * [IA64] pvops: define initialization hooks, pv_init_ops, for paravirtualized environment. (Isaku Yamahata, 2008-05-27; 1 file, -0/+10)
    Define pv_init_ops hooks, which represent various initialization hooks for a paravirtualized environment, and add the hooks.
    Signed-off-by: Alex Williamson <alex.williamson@hp.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [IA64] Bugfix for system with 32 cpus (Tony Luck, 2008-06-30; 1 file, -1/+2)
    On a system where there are no hot pluggable cpus "additional_cpus" is still set to -1 at the point where we call per_cpu_scan_finalize(). If we didn't find an SRAT table and so pick the default "32" for the number of cpus, when we get to:
        high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
    we will end up initializing for just 31 cpus ... and so we will die horribly when bringing up cpu#32.
    Problem introduced by: 2c6e6db41f01b6b4eb98809350827c9678996698 "Minimize per_cpu reservations."
    Acked-by: Robin Holt <holt@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [IA64] Fix boot failure on ia64/sn2 (Jes Sorensen, 2008-06-24; 1 file, -2/+1)
|/
    Call check_sal_cache_flush() after platform_setup() as check_sal_cache_flush() now relies on being able to call platform vector code.
    Problem was introduced by: 3463a93def55c309f3c0d0a8aaf216be3be42d64 "Update check_sal_cache_flush to use platform_send_ipi()"
    Signed-off-by: Jes Sorensen <jes@sgi.com>
    Tested-by: Alex Chiang <achiang@hp.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Workaround for RSE issue (Tony Luck, 2008-05-27; 1 file, -0/+11)
    Problem: An application violating the architectural rules regarding operation dependencies and having specific Register Stack Engine (RSE) state at the time of the violation, may result in an illegal operation fault and invalid RSE state. Such faults may initiate a cascade of repeated illegal operation faults within OS interruption handlers. The specific behavior is OS dependent.
    Implication: An application causing an illegal operation fault with specific RSE state may result in a series of illegal operation faults and an eventual OS stack overflow condition.
    Workaround: OS interruption handlers that switch to kernel backing store implement a check for invalid RSE state to avoid the series of illegal operation faults.
    The core of the workaround is the RSE_WORKAROUND code sequence inserted into each invocation of the SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded constants that depend on the number of stacked physical registers being 96. The rest of this patch consists of code to disable this workaround should this not be the case (with the presumption that if a future Itanium processor increases the number of registers, it would also remove the need for this patch).
    Move the start of the RBS up to a mod32 boundary to avoid some corner cases. The dispatch_illegal_op_fault code outgrew the spot it was squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y. Move it out to the end of the ivt.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Don't reserve crashkernel memory > 4 GB (Bernhard Walle, 2008-05-14; 1 file, -0/+29)
    Some IA64 machines map all cell-local memory above 4 GB (32 bit limit). However, in most cases, the kernel needs some memory below that limit that is DMA-capable. So in this machine configuration, the crashkernel will be reserved above 4 GB.
    For machines that use the SWIOTLB implementation because they lack an I/O MMU, the low memory is required by the SWIOTLB implementation. In that case, it doesn't make sense to reserve the crashkernel at all because it's unusable for kdump.
    A special case is the "hpzx1" machine vector. In theory, it has an I/O MMU, so it can be booted above 4 GB. However, in the kdump case that is not possible because of changeset 51b58e3e26ebfb8cd56825c4b396ed251f51dec9:
        On HP zx1 machines, the 'machvec=dig' parameter is needed for the kdump kernel to avoid problems with the HP sba iommu. The problem is that during the boot of the kdump kernel, the iommu is re-initialized, so in-flight DMA from improperly shutdown drivers causes an IOTLB miss which leads to an MCA. With kdump, the idea is to get into the kdump kernel with as little code as we can, so shutting down drivers properly is not an option. The workaround is to add 'machvec=dig' to the kdump kernel boot parameters. This makes the kdump kernel avoid using the sba iommu altogether, leaving the IOTLB intact. Any ongoing DMA falls harmlessly outside the kdump kernel. After the kdump kernel reboots, all devices will have been shutdown properly and DMA stopped. This patch pushes that functionality into the sba iommu initialization code, so that users won't have to find the obscure documentation telling them about 'machvec=dig'.
    This means that also for hpzx1 it's not possible to boot when all memory is above the 4 GB limit. So the only machine vectors that can handle this case are "sn2" and "uv".
    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull miscellaneous into release branch (Tony Luck, 2008-04-17; 1 file, -0/+23)
|\
    Conflicts:
        arch/ia64/kernel/mca.c
| * [IA64] Fix NUMA configuration issue (Zoltan Menyhart, 2008-04-11; 1 file, -0/+23)
    There is a NUMA memory configuration issue in 2.6.24. A 2-node machine of ours has got the following memory layout:
        Node 0:  0 -  2 Gbytes
        Node 0:  4 -  8 Gbytes
        Node 1:  8 - 16 Gbytes
        Node 0: 16 - 18 Gbytes
    "efi_memmap_init()" merges the three last ranges into one. "register_active_ranges()" is called as follows:
        efi_memmap_walk(register_active_ranges, NULL);
    i.e. once for the 4 - 18 Gbytes range. It picks up the node number from the start address, and registers all the memory for node #0.
    "register_active_ranges()" should be called as follows to make sure there is no merged address range at its entry:
        efi_memmap_walk(filter_memory, register_active_ranges);
    "filter_memory()" is similar to "filter_rsvd_memory()", but the reserved memory ranges are not filtered out.
    Signed-off-by: Zoltan Menyhart <Zoltan.Menyhart@bull.net>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | Pull nptcg into release branch (Tony Luck, 2008-04-17; 1 file, -2/+4)
|\ \
    Conflicts:
        arch/ia64/mm/tlb.c
| * | [IA64] Kernel parameter for max number of concurrent global TLB purges (Fenghua Yu, 2008-04-04; 1 file, -1/+1)
    The patch defines the kernel parameter "nptcg=". The parameter overrides the max number of concurrent global TLB purges which is reported from either PAL_VM_SUMMARY or SAL PALO.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] Multiple outstanding ptc.g instruction support (Fenghua Yu, 2008-04-04; 1 file, -2/+4)
| |/
    According to SDM2.2, Itanium supports multiple outstanding ptc.g instructions. But the current kernel function ia64_global_tlb_purge() uses a spinlock to serialize ptc.g instructions issued by multiple processors. This serialization might have scalability issues on a big SMP machine where many processors could purge TLB in parallel.
    The patch fixes this problem by issuing multiple ptc.g instructions in ia64_global_tlb_purge(). It also adds support for the "PALO" table to get a platform view of the max number of outstanding ptc.g instructions (which may be different from the processor view found from PAL_VM_SUMMARY). The PALO specification can be found at: http://www.dig64.org/home/DIG64_PALO_R1_0.pdf
    spinaphore implementation by Matthew Wilcox.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [IA64] Minimize per_cpu reservations. (Robin Holt <holt@sgi.com>, 2008-04-08; 1 file, -0/+2)
|/
    This attached patch significantly shrinks boot memory allocation on ia64. It does this by not allocating per_cpu areas for cpus that can never exist.
    In the case where acpi does not have any numa node description of the cpus, I defaulted to assigning the first 32 round-robin on the known nodes. For the !CONFIG_ACPI case I used for_each_possible_cpu().
    Signed-off-by: Robin Holt <holt@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] remove remaining __FUNCTION__ occurrences (Harvey Harrison, 2008-03-06; 1 file, -4/+4)
    __FUNCTION__ is gcc-specific, use __func__.
    Long lines have been kept where they exist, some small spacing changes have been done.
    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
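    The substitution is mechanical; a tiny standalone C program shows the standard identifier in use (illustrative only, not taken from the patch):

        #include <stdio.h>

        /* __func__ is standard C99; __FUNCTION__ is a gcc-only spelling. */
        static void announce(void)
        {
            printf("%s: entered\n", __func__);   /* was: __FUNCTION__ */
        }

        int main(void)
        {
            announce();
            return 0;
        }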
* [IA64] constify function pointer tables (Jan Engelhardt, 2008-02-04; 1 file, -1/+1)
    Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* sched: remove printk_clock references from ia64 (Ingo Molnar, 2008-01-25; 1 file, -4/+0)
    Remove remaining printk_clock references from ia64.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [IA64] rename _bss to __bss_start (Bernhard Walle, 2007-12-07; 1 file, -2/+1)
    Rename _bss to __bss_start as on other architectures. That makes it possible to use <linux/sections.h> instead of own declarations. Also add __bss_stop because that symbol exists on other architectures.
    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] /proc/cpuinfo "physical id" field cleanupsAlex Chiang2007-10-291-41/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Clean up the process for presenting the "physical id" field in /proc/cpuinfo. - remove global smp_num_cpucores, as it is mostly useless - remove check_for_logical_procs(), since we do the same functionality in identify_siblings() - reflow logic in identify_siblings(). If an older CPU does not implement PAL_LOGICAL_TO_PHYSICAL, we may still be able to get useful information from SAL_PHYSICAL_ID_INFO - in identify_siblings(), threads/cores are a property of the CPU, not the platform - remove useless printk's about multi-core / thread capability in identify_siblings(), as that information is readily available in /proc/cpuinfo, and printing for the BSP only adds little value - smp_num_siblings is now meaningful if any CPU in the system supports threads, not just the BSP - expose "physical id" field, even on CPUs that are not multi-core / multi-threaded (as long as we have a valid value). Now we know what sockets Madisons live in too. Signed-off-by: Alex Chiang <achiang@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
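    As a quick way to see the field this change exposes, a small hypothetical userspace check (not part of the kernel patch) can echo the "physical id" reported for each processor:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *f = fopen("/proc/cpuinfo", "r");
            char line[256];

            if (!f) {
                perror("/proc/cpuinfo");
                return 1;
            }
            /* Echo only the per-processor lines we care about. */
            while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "processor", 9) ||
                    !strncmp(line, "physical id", 11))
                    fputs(line, stdout);
            }
            fclose(f);
            return 0;
        }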