path: root/arch/powerpc/kernel/setup_64.c
* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc (Linus Torvalds, 2014-01-27: 1 file, +41/-6)

    Pull powerpc updates from Ben Herrenschmidt:
    "So here's my next branch for powerpc. A bit late as I was on vacation
    last week. It's mostly the same stuff that was in next already; I just
    added two patches today, the wiring up of lockref for powerpc, which
    for some reason fell through the cracks last time and is trivial.

    The highlights, in addition to a bunch of bug fixes:

     - Reworked machine check handling on kernels running without a
       hypervisor (or acting as a hypervisor). Provides hooks to handle
       some errors in real mode, such as TLB errors, SLB errors, etc.
     - Support for retrieving memory error information from the service
       processor on IBM servers running without a hypervisor and routing
       it to the memory poison infrastructure.
     - _PAGE_NUMA support on server processors.
     - 32-bit BookE relocatable kernel support.
     - FSL e6500 hardware tablewalk support.
     - A bunch of new/revived board support.
     - FSL e6500 deeper idle states and AltiVec powerdown support.

    You'll notice a generic mm change here; it has been acked by the
    relevant authorities and is a prerequisite for our _PAGE_NUMA
    support."

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (121 commits)
      powerpc: Implement arch_spin_is_locked() using arch_spin_value_unlocked()
      powerpc: Add support for the optimised lockref implementation
      powerpc/powernv: Call OPAL sync before kexec'ing
      powerpc/eeh: Escalate error on non-existing PE
      powerpc/eeh: Handle multiple EEH errors
      powerpc: Fix transactional FP/VMX/VSX unavailable handlers
      powerpc: Don't corrupt transactional state when using FP/VMX in kernel
      powerpc: Reclaim two unused thread_info flag bits
      powerpc: Fix races with irq_work
      powerpc: Move processing of MCE queued event out from syscall exit path
      pseries/cpuidle: Remove redundant call to ppc64_runlatch_off() in cpu idle routines
      powerpc: Make add_system_ram_resources() __init
      powerpc: add SATA_MV to ppc64_defconfig
      powerpc/powernv: Increase candidate fw image size
      powerpc: Add debug checks to catch invalid cpu-to-node mappings
      powerpc: Fix the setup of CPU-to-Node mappings during CPU online
      powerpc/iommu: Don't detach device without IOMMU group
      powerpc/eeh: Hotplug improvement
      powerpc/eeh: Call opal_pci_reinit() on powernv for restoring config space
      powerpc/eeh: Add restore_config operation
      ...
| * powerpc/e6500: TLB miss handler with hardware tablewalk support (Scott Wood, 2014-01-09: 1 file, +31/-0)

    There are a few things that make the existing hw tablewalk handlers
    unsuitable for e6500:

    - Indirect entries go in TLB1 (though the resulting direct entries go
      in TLB0).
    - It has threads, but no "tlbsrx." -- so we need a spinlock and a
      normal "tlbsx". Because we need this lock, hardware tablewalk is
      mandatory on e6500 unless we want to add spinlock+tlbsx to the
      normal bolted TLB miss handler.
    - TLB1 has no HES (nor next-victim hint), so we need software round
      robin (TODO: integrate this round-robin data with hugetlb/KVM).
    - The existing tablewalk handlers map half of a page table at a time,
      because IBM hardware has a fixed 1MiB indirect page size. e6500 has
      variable-size indirect entries, with a minimum of 2MiB. So we can't
      do the half-page indirect mapping, and even if we could it would be
      less efficient than mapping the full page.
    - Like on e5500, the linear mapping is bolted, so we don't need the
      overhead of supporting nested TLB misses.

    Note that hardware tablewalk does not work in rev1 of e6500. We do
    not expect to support e6500 rev1 in mainline Linux.

    Signed-off-by: Scott Wood <scottwood@freescale.com>
    Cc: Mihai Caraman <mihai.caraman@freescale.com>
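    [A rough C rendering of the locking this commit describes. e6500 lacks
    "tlbsrx." (search with reservation), so the handler must close the
    search-then-write race between threads with an ordinary lock. The real
    handler is assembly; every name below is illustrative only.]

        #include <linux/spinlock.h>

        /* Hypothetical stand-ins for the tlbsx/tlbwe instructions. */
        static bool tlb_entry_present(unsigned long ea);
        static void tlb_write_entry(unsigned long ea, unsigned long mas_bits);

        /* One lock per core: only that core's threads contend on it. */
        static DEFINE_RAW_SPINLOCK(tlb_per_core_lock);

        static void e6500_style_refill(unsigned long ea, unsigned long mas_bits)
        {
                /*
                 * With tlbsrx. the search would take a reservation and the
                 * write would be conditional on it. Without it, the lock
                 * keeps two threads from missing on the same EA and
                 * installing duplicate entries.
                 */
                raw_spin_lock(&tlb_per_core_lock);
                if (!tlb_entry_present(ea))             /* "tlbsx" */
                        tlb_write_entry(ea, mas_bits);  /* "tlbwe" */
                raw_spin_unlock(&tlb_per_core_lock);
        }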
| * powerpc/book3s: Introduce exclusive emergency stack for machine check exception (Mahesh Salgaonkar, 2013-12-05: 1 file, +9/-1)

    This patch introduces an exclusive emergency stack for the machine
    check exception. We use an emergency stack to handle machine check
    exceptions so that we can save MCE information (srr1, srr0, dar and
    dsisr) before turning on the ME bit and being ready for re-entrancy.
    This helps us prevent clobbering of MCE information in the case of
    nested machine checks. The reason for using the emergency stack over
    the normal kernel stack is that the machine check might occur in the
    middle of setting up a stack frame, which could result in improper
    use of the kernel stack.

    Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
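    [In setup_64.c this amounts to carving out one extra THREAD_SIZE stack
    per CPU next to the existing emergency stack. A hedged sketch of the
    shape emergency_stack_init() takes; the limit is assumed to come from
    safe_stack_limit() so the stacks stay accessible from the low-level
    exception paths:]

        static void __init emergency_stack_init(void)
        {
                u64 limit = safe_stack_limit();
                unsigned int i;

                for_each_possible_cpu(i) {
                        /* regular emergency stack */
                        paca[i].emergency_sp = __va(memblock_alloc_base(
                                THREAD_SIZE, THREAD_SIZE, limit)) + THREAD_SIZE;
        #ifdef CONFIG_PPC_BOOK3S_64
                        /* exclusive stack for machine check handling */
                        paca[i].mc_emergency_sp = __va(memblock_alloc_base(
                                THREAD_SIZE, THREAD_SIZE, limit)) + THREAD_SIZE;
        #endif
                }
        }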
| * powerpc: Use patch_exception to update the debug exception handler (Kevin Hao, 2013-12-02: 1 file, +1/-5)

    Signed-off-by: Kevin Hao <haokexin@gmail.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | powerpc: Clean up panic_timeout usage (Jason Baron, 2013-11-26: 1 file, +0/-3)

    Default CONFIG_PANIC_TIMEOUT to 180 seconds on powerpc. pSeries
    continues to set the timeout to 10 seconds at run-time. Thus there's
    a small window where we don't have the correct value on pSeries, but
    if this is only run-time discoverable we don't have a better option.
    In any case, if the user changes the default setting of 180 seconds,
    we honor that user setting.

    Signed-off-by: Jason Baron <jbaron@akamai.com>
    Cc: benh@kernel.crashing.org, paulus@samba.org, ralf@linux-mips.org,
        mpe@ellerman.id.au, felipe.contreras@gmail.com,
        linuxppc-dev@lists.ozlabs.org
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/705bbe0f70fb20759151642ba0176a6414ec9f7a.1385418410.git.jbaron@akamai.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* powerpc: Move local setup.h declarations to arch includes (Robert Jennings, 2013-10-30: 1 file, +0/-2)

    Move the few declarations from arch/powerpc/kernel/setup.h into
    arch/powerpc/include/asm/setup.h. This resolves a sparse warning for
    arch/powerpc/mm/numa.c, which defines do_init_bootmem() but can't
    include the setup.h header at the prior path.

    Resolves:
    arch/powerpc/mm/numa.c:998:13: warning: symbol 'do_init_bootmem' was
    not declared. Should it be static?

    Signed-off-by: Robert C Jennings <rcj@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc (Linus Torvalds, 2013-09-06: 1 file, +25/-11)

    Pull powerpc updates from Ben Herrenschmidt:
    "Here's the powerpc batch for this merge window. Some of the
    highlights are:

     - A bunch of endian fixes! We don't have full LE support yet in this
       release, but this contains a lot of fixes all over arch/powerpc to
       use the proper accessors, call the firmware with the right endian
       mode, etc...
     - A few updates to our "powernv" platform (non-virtualized, the one
       to run KVM on), among others support for bridging the P8 LPC bus
       for UARTs, and some EEH fixes.
     - Some mpc51xx clock API cleanups in preparation for a clock API
       overhaul.
     - A pile of cleanups of our old math emulation code, including
       better support for using it to emulate optional FP instructions on
       embedded chips that otherwise have a HW FPU.
     - Some infrastructure in selftests, for powerpc now, but it could be
       generalized; initially used by some tests for our perf instruction
       counting code.
     - A pile of fixes for hotplug on pseries (which was seriously
       bitrotting).
     - The usual slew of freescale embedded updates, new boards, 64-bit
       hibernation support, e6500 core PMU support, etc..."

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (146 commits)
      powerpc: Correct FSCR bit definitions
      powerpc/xmon: Fix printing of set of CPUs in xmon
      powerpc/pseries: Move lparcfg.c to platforms/pseries
      powerpc/powernv: Return secondary CPUs to firmware on kexec
      powerpc/btext: Fix CONFIG_PPC_EARLY_DEBUG_BOOTX on ppc32
      powerpc: Cleanup handling of the DSCR bit in the FSCR register
      powerpc/pseries: Child nodes are not detached by dlpar_detach_node
      powerpc/pseries: Add missing of_node_put in delete_dt_node
      powerpc/pseries: Make dlpar_configure_connector parent node aware
      powerpc/pseries: Do all node initialization in dlpar_parse_cc_node
      powerpc/pseries: Fix parsing of initial node path in update_dt_node
      powerpc/pseries: Pack update_props_workarea to map correctly to rtas buffer header
      powerpc/pseries: Fix overwriting of rtas return code in update_dt_node
      powerpc/pseries: Fix creation of loop in device node property list
      powerpc: Skip emulating & leave interrupts off for kernel program checks
      powerpc: Add more exception trampolines for hypervisor exceptions
      powerpc: Fix location and rename exception trampolines
      powerpc: Add more trap names to xmon
      powerpc/pseries: Add a warning in the case of cross-cpu VPA registration
      powerpc: Update the 00-Index in Documentation/powerpc
      ...
| * powerpc: Make cache info device tree accesses endian safe (Anton Blanchard, 2013-08-14: 1 file, +5/-5)

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
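    [The endian-safe pattern in question, shown on a hypothetical
    cache-size lookup. Device tree cells are stored big-endian, so a raw
    dereference is wrong on a little-endian kernel:]

        #include <linux/of.h>

        static u32 read_dcache_size(struct device_node *np)
        {
                const __be32 *size;

                size = of_get_property(np, "d-cache-size", NULL);
                /* be32_to_cpup() converts to host endianness; a plain
                 * *size would return byte-swapped garbage on LE. */
                return size ? be32_to_cpup(size) : 0;
        }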
| * powerpc: Better split CONFIG_PPC_INDIRECT_PIO and CONFIG_PPC_INDIRECT_MMIO (Benjamin Herrenschmidt, 2013-08-14: 1 file, +2/-3)

    Remove the generic PPC_INDIRECT_IO and ensure we only add overhead to
    the right accessors. I.e., if only CONFIG_PPC_INDIRECT_PIO is set, we
    don't add overhead to all MMIO accessors.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * powerpc/pmac: Early debug output on screen on 64-bit macs (Benjamin Herrenschmidt, 2013-08-14: 1 file, +13/-1)

    We have a bunch of CONFIG_PPC_EARLY_DEBUG_* options that are intended
    for bringup/debug only. They hard-wire a machine-specific udbg
    backend very early on (before we even probe the platform), and use
    whatever tricks are available on each machine/cpu to get some kind of
    output out there early on.

    So far, on powermacs with no serial ports, we have
    CONFIG_PPC_EARLY_DEBUG_BOOTX to use the low-level btext engine on the
    screen, but it doesn't do much, at least on 64-bit. It only really
    gets enabled after the platform has been probed and the MMU enabled.

    This adds a way to enable it much earlier. From prom_init.c (while
    still running with Open Firmware), we grab the screen details and set
    things up using the physical address of the frame buffer. Then btext
    itself uses the "rm_ci" feature of the 970 processor (Real Mode Cache
    Inhibited) to access it while in real mode. We need to do a little
    bit of reorg of the btext code to inline things better, in order to
    limit how much we touch memory while in this mode, as the
    consequences might be ... interesting.

    This successfully allowed me to debug problems early on with the G5
    (related to gold being broken vs. ppc64 kernels).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * powerpc: Fix a number of sparse warnings (Anton Blanchard, 2013-08-14: 1 file, +2/-2)

    Address some of the trivial sparse warnings in arch/powerpc.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * powerpc/85xx: Move ePAPR paravirt initialization earlier (Laurentiu Tudor, 2013-08-07: 1 file, +3/-0)

    At console init, when the kernel tries to flush the log buffer, the
    ePAPR byte-channel based console write fails silently, losing the
    buffered messages. This happens because the ePAPR para-virtualization
    init isn't done early enough for the hcall instruction to be set,
    causing the byte-channel write hcall to be a nop. To fix this, change
    the ePAPR para-virt init to use early device tree functions and move
    it into early init.

    Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
    Signed-off-by: Scott Wood <scottwood@freescale.com>
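    [The early init has to walk the flattened device tree, since the
    unflattened of_* API is not up yet at that point. A hedged sketch
    modeled on epapr_paravirt_early_init(); the actual instruction
    patching is elided:]

        #include <linux/of_fdt.h>

        static int __init early_init_dt_scan_epapr(unsigned long node,
                                                   const char *uname,
                                                   int depth, void *data)
        {
                const u32 *insts;
                int len;

                /* only the /hypervisor node carries the hcall sequence */
                insts = of_get_flat_dt_prop(node, "hcall-instructions", &len);
                if (!insts)
                        return 0;

                /* ... copy insts over the hcall trampoline and flush the
                 * icache; omitted in this sketch ... */
                return 1;       /* stop scanning */
        }

        void __init epapr_paravirt_early_init(void)
        {
                of_scan_flat_dt(early_init_dt_scan_epapr, NULL);
        }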
* | Merge remote-tracking branch 'origin/next' into kvm-ppc-next (Alexander Graf, 2013-08-29: 1 file, +1/-1)

    Conflicts:
        mm/Kconfig

    The CMA DMA split and the ZSWAP introduction were conflicting; fixed
    up manually.
| * powerpc/smp: Section mismatch from smp_release_cpus to __initdata spinning_secondaries (Chen Gang, 2013-07-01: 1 file, +1/-1)

    smp_release_cpus() is a normal function, called in normal
    environments, but it references the __initdata variable
    spinning_secondaries. We need to modify spinning_secondaries to match
    smp_release_cpus(). The related warning (the linker reports
    boot_paca.33377, but it should be spinning_secondaries):

    -----------------------------------------------------------------------------
    WARNING: arch/powerpc/kernel/built-in.o(.text+0x23176): Section
    mismatch in reference from the function .smp_release_cpus() to the
    variable .init.data:boot_paca.33377
    The function .smp_release_cpus() references the variable __initdata
    boot_paca.33377. This is often because .smp_release_cpus lacks a
    __initdata annotation or the annotation of boot_paca.33377 is wrong.

    WARNING: arch/powerpc/kernel/built-in.o(.text+0x231fe): Section
    mismatch in reference from the function .smp_release_cpus() to the
    variable .init.data:boot_paca.33377
    The function .smp_release_cpus() references the variable __initdata
    boot_paca.33377. This is often because .smp_release_cpus lacks a
    __initdata annotation or the annotation of boot_paca.33377 is wrong.
    -----------------------------------------------------------------------------

    Signed-off-by: Chen Gang <gang.chen@asianux.com>
    CC: <stable@vger.kernel.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
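    [The fix boils down to dropping the annotation, since anything a
    non-__init function touches must outlive .init.data. A minimal sketch
    (the variable's actual type may differ):]

        /* before: placed in .init.data and freed after boot, yet still
         * referenced by the non-__init smp_release_cpus():
         *
         *      u32 spinning_secondaries __initdata;
         *
         * after: ordinary data, safe to reference at any time */
        u32 spinning_secondaries;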
* | powerpc/kvm: Contiguous memory allocator based RMA allocation (Aneesh Kumar K.V, 2013-07-08: 1 file, +0/-2)

    Older versions of the Power architecture use the Real Mode Offset
    register and the Real Mode Limit Selector for mapping the guest Real
    Mode Area. The guest RMA should be physically contiguous, since we
    use the range when address translation is not enabled.

    This patch switches the RMA allocation code to use the contiguous
    memory allocator. It also removes the linear allocator, which is not
    used any more.

    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Alexander Graf <agraf@suse.de>
* | powerpc/kvm: Contiguous memory allocator based hash page table allocation (Aneesh Kumar K.V, 2013-07-08: 1 file, +2/-0)

    The Powerpc architecture uses a hash-based page table mechanism for
    mapping virtual addresses to physical addresses, and requires this
    hash page table to be physically contiguous. With KVM on Powerpc we
    currently use an early reservation mechanism for allocating the guest
    hash page table. This implies that we need to reserve a big memory
    region to ensure we can create a large number of guests
    simultaneously with KVM on Power. Another disadvantage is that the
    reserved memory is not available to the rest of the subsystems, which
    means we limit the total available memory in the host.

    This patch series switches the guest hash page table allocation to
    use the contiguous memory allocator.

    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
* powerpc: Reduce PTE table memory wastage (Aneesh Kumar K.V, 2013-04-30: 1 file, +3/-1)

    We allocate one page for the last level of the Linux page table. With
    THP and a large page size of 16MB, that means we waste a large part
    of that page: to map a 16MB area, we only need a PTE space of 2K with
    a 64K page size. This patch reduces the wastage by sharing the page
    allocated for the last level of the Linux page table among multiple
    pmd entries. We call these smaller chunks PTE page fragments, and the
    allocated page a PTE page.

    In order to support systems which don't have 64K HPTE support, we
    also add another 2K to each PTE page fragment. The second half of the
    fragment is used for storing the slot and secondary bit information
    of an HPTE. With this we now have a 4K PTE fragment.

    We use a simple approach to share the PTE page: on allocation, we
    bump the PTE page refcount to 16 and share the PTE page with the next
    16 pte alloc requests. This should help the node locality of the PTE
    page fragments, assuming that the immediate pte alloc requests will
    mostly come from the same NUMA node. We don't try to reuse freed PTE
    page fragments, so we could be wasting some space.

    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
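    [The scheme is essentially a bump allocator over one 64K page, with
    the page refcount standing in for the number of live fragments. A
    simplified, hypothetical rendering -- the real code hangs the cursor
    off mm->context and serializes with the page table lock:]

        #include <linux/mm.h>

        #define PTE_FRAG_NR     16                        /* fragments per page */
        #define PTE_FRAG_SIZE   (PAGE_SIZE / PTE_FRAG_NR) /* 4K with 64K pages */

        static void *pte_frag_alloc(struct mm_struct *mm)
        {
                void *frag = mm->context.pte_frag;      /* next free fragment */

                if (!frag) {
                        struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                        if (!page)
                                return NULL;
                        /* one reference per fragment we will hand out */
                        atomic_set(&page->_count, PTE_FRAG_NR);
                        frag = page_address(page);
                }
                /* advance the cursor; NULL once the page is carved up */
                if (((unsigned long)frag & ~PAGE_MASK) + PTE_FRAG_SIZE < PAGE_SIZE)
                        mm->context.pte_frag = frag + PTE_FRAG_SIZE;
                else
                        mm->context.pte_frag = NULL;
                return frag;
        }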
* powerpc: Apply early paca fixups to boot_paca and the boot cpu's paca (Michael Ellerman, 2013-02-15: 1 file, +11/-5)

    In commit 466921c we added a hack to set the paca data_offset to zero
    so that per-cpu accesses would work on the boot cpu prior to the
    per-cpu areas being set up. This fixed a problem with lockdep
    touching per-cpu areas very early in boot.

    However, if we combine CONFIG_LOCK_STAT=y with any of the
    PPC_EARLY_DEBUG options, we can hit the same problem in
    udbg_early_init(). To avoid that we need to set the data_offset of
    the boot_paca also. So factor out the fixup logic and call it for
    both the boot_paca and the paca of the boot cpu.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Tested-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
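    [Factored out, the fixup is tiny. A sketch consistent with the commit
    description (the real function may differ in detail):]

        static void __init fixup_boot_paca(struct paca_struct *paca)
        {
                /* mark this cpu as started */
                paca->cpu_start = 1;
                /* point per-cpu accesses at the static initial copy until
                 * the real per-cpu areas exist (lockdep and udbg can poke
                 * them very early) */
                paca->data_offset = 0;
        }

        /* early_setup() applies it to &boot_paca first, and again to the
         * boot cpu's real paca once setup_paca() has switched over. */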
* powerpc: Move boot_paca into early_setup (Geoff Levand, 2013-02-15: 1 file, +2/-0)

    The powerpc boot_paca symbol is now only used within the
    early_setup() routine, so move it from its global definition into
    early_setup().

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Make load_handler handle up to 64k offsets (Michael Neuling, 2012-11-15: 1 file, +5/-0)

    If we change load_handler() to use an ori instead of an addi, we can
    load handlers up to 64k away, provided we are still 64k aligned.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
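    [The constraint is the immediate encoding: addi sign-extends its
    16-bit immediate (positive reach 32k-1), while ori zero-extends it
    (reach 64k-1), and on a 64k-aligned base OR equals add. A small
    illustration of the two windows:]

        #include <assert.h>
        #include <stdint.h>

        /* addi rt,ra,si: si is a sign-extended 16-bit immediate */
        static int32_t addi_offset(int32_t off)
        {
                assert(off >= -32768 && off <= 32767);
                return off;
        }

        /* ori rt,ra,ui: ui is a zero-extended 16-bit immediate; with a
         * 64k-aligned base, OR behaves like add for offsets 0..65535 */
        static uint64_t ori_address(uint64_t base, uint32_t off)
        {
                assert((base & 0xffff) == 0 && off <= 65535);
                return base | off;      /* == base + off, given alignment */
        }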
* powerpc: Set paca->data_offset = 0 for boot cpu (Michael Ellerman, 2012-09-27: 1 file, +2/-0)

    In commit 407821a we assigned a poison value to paca->data_offset.
    Unfortunately, with CONFIG_LOCK_STAT=y, lockdep reads and writes
    per-cpu data very early in boot, prior to us initialising the per-cpu
    areas, leading to a crash.

    We have been getting away with this because the data_offset was
    previously set to zero. This causes lockdep to read and write the
    initial copy of the per-cpu variables, which is discarded later in
    boot. Although that is "fishy", it does work, and for lock statistics
    it is no big deal to discard the counts from early boot.

    So set paca->data_offset = 0 for the boot cpu paca only.

    Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge tag 'split-asm_system_h-for-linus-20120328' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-asm_system (Linus Torvalds, 2012-03-28: 1 file, +0/-1)

    Pull "Disintegrate and delete asm/system.h" from David Howells:
    "Here are a bunch of patches to disintegrate asm/system.h into a set
    of separate bits to relieve the problem of circular inclusion
    dependencies. I've built all the working defconfigs from all the
    arches that I can and made sure that they don't break.

    The reason for these patches is that I recently encountered a
    circular dependency problem that came about when I produced some
    patches to optimise get_order() by rewriting it to use ilog2(). This
    uses bitops, and on the SH arch asm/bitops.h drags in
    asm-generic/get_order.h by a circuitous route involving
    asm/system.h.

    The main difficulty seems to be asm/system.h. It holds a number of
    low-level bits with no/few dependencies that are commonly used (e.g.
    memory barriers) and a number of bits with more dependencies that
    aren't used in many places (e.g. switch_to()).

    These patches break asm/system.h up into the following core pieces:

     (1) asm/barrier.h: move memory barriers here. This is already done
         for MIPS and Alpha.
     (2) asm/switch_to.h: move switch_to() and related stuff here.
     (3) asm/exec.h: move arch_align_stack() here. Other process
         execution related bits could perhaps go here from
         asm/processor.h.
     (4) asm/cmpxchg.h: move xchg() and cmpxchg() here, as they're
         full-word atomic ops and frequently used by atomic_xchg() and
         atomic_cmpxchg().
     (5) asm/bug.h: move die() and related bits.
     (6) asm/auxvec.h: move AT_VECTOR_SIZE_ARCH here.

    Other arch headers are created as needed on a per-arch basis."

    Fixed up some conflicts from other header file cleanups and moving
    code around that has happened in the meantime, so David's testing is
    somewhat weakened by that. We'll find out anything that got broken
    and fix it...

    * tag 'split-asm_system_h-for-linus-20120328' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-asm_system: (38 commits)
      Delete all instances of asm/system.h
      Remove all #inclusions of asm/system.h
      Add #includes needed to permit the removal of asm/system.h
      Move all declarations of free_initmem() to linux/mm.h
      Disintegrate asm/system.h for OpenRISC
      Split arch_align_stack() out from asm-generic/system.h
      Split the switch_to() wrapper out of asm-generic/system.h
      Move the asm-generic/system.h xchg() implementation to asm-generic/cmpxchg.h
      Create asm-generic/barrier.h
      Make asm-generic/cmpxchg.h #include asm-generic/cmpxchg-local.h
      Disintegrate asm/system.h for Xtensa
      Disintegrate asm/system.h for Unicore32 [based on ver #3, changed by gxt]
      Disintegrate asm/system.h for Tile
      Disintegrate asm/system.h for Sparc
      Disintegrate asm/system.h for SH
      Disintegrate asm/system.h for Score
      Disintegrate asm/system.h for S390
      Disintegrate asm/system.h for PowerPC
      Disintegrate asm/system.h for PA-RISC
      Disintegrate asm/system.h for MN10300
      ...
| * Disintegrate asm/system.h for PowerPC (David Howells, 2012-03-28: 1 file, +0/-1)

    Disintegrate asm/system.h for PowerPC.

    Signed-off-by: David Howells <dhowells@redhat.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    cc: linuxppc-dev@lists.ozlabs.org
* | KVM: PPC: Convert RMA allocation into generic code (Alexander Graf, 2012-03-05: 1 file, +1/-1)

    We have code to allocate big chunks of linear memory on bootup for
    later use. This code is currently used for RMA allocation, but can be
    useful beyond that extent. Make it generic so we can reuse it for
    other stuff later.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* powerpc: Add gpages reservation code for 64-bit FSL BOOKE (Becky Bruce, 2011-12-07: 1 file, +10/-0)

    For 64-bit FSL_BOOKE implementations, gigantic pages need to be
    reserved at boot time by the memblock code, based on the command
    line. This adds the call that handles the reservation and fixes some
    code comments.

    It also removes the previous pr_err when reserve_hugetlb_gpages is
    called on a system without hugetlb enabled; the way the code is
    structured, the call is unconditional, making the resulting error
    message spurious and confusing.

    Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Copy down exception vectors after feature fixups (Anton Blanchard, 2011-11-16: 1 file, +1/-0)

    kdump fails because we try to execute an HV-only instruction. Feature
    fixups are being applied after we copy the exception vectors down to
    0, so they miss out on any updates.

    We have always had this issue but it only became critical in v3.0
    when we added CFAR support (breaks POWER5) and v3.1 when we added
    POWERNV (breaks everyone).

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Cc: <stable@kernel.org> [v3.0+]
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge branch 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux (Linus Torvalds, 2011-11-06: 1 file, +1/-1)

    * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
      Revert "tracing: Include module.h in define_trace.h"
      irq: don't put module.h into irq.h for tracking irqgen modules.
      bluetooth: macroize two small inlines to avoid module.h
      ip_vs.h: fix implicit use of module_get/module_put from module.h
      nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
      include: replace linux/module.h with "struct module" wherever possible
      include: convert various register fcns to macros to avoid include chaining
      crypto.h: remove unused crypto_tfm_alg_modname() inline
      uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
      pm_runtime.h: explicitly requires notifier.h
      linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
      miscdevice.h: fix up implicit use of lists and types
      stop_machine.h: fix implicit use of smp.h for smp_processor_id
      of: fix implicit use of errno.h in include/linux/of.h
      of_platform.h: delete needless include <linux/module.h>
      acpi: remove module.h include from platform/aclinux.h
      miscdevice.h: delete unnecessary inclusion of module.h
      device_cgroup.h: delete needless include <linux/module.h>
      net: sch_generic remove redundant use of <linux/module.h>
      net: inet_timewait_sock doesn't need <linux/module.h>
      ...

    Fix up trivial conflicts (other header files, and removal of the
    ab3550 mfd driver) in
     - drivers/media/dvb/frontends/dibx000_common.c
     - drivers/media/video/{mt9m111.c,ov6650.c}
     - drivers/mfd/ab3550-core.c
     - include/linux/dmaengine.h
| * powerpc: various straight conversions from module.h --> export.h (Paul Gortmaker, 2011-10-31: 1 file, +1/-1)

    All these files were including module.h just for the basic
    EXPORT_SYMBOL infrastructure. We can shift them off to the export.h
    header, which has a far smaller footprint, and thus realize some
    compile-time gains.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* | powerpc: Coding style cleanups (Anton Blanchard, 2011-09-20: 1 file, +13/-7)

    While converting code to use for_each_node_by_type I noticed a number
    of coding style issues.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | powerpc: Use for_each_node_by_type instead of open coding it (Anton Blanchard, 2011-09-20: 1 file, +1/-1)

    Use for_each_node_by_type instead of open coding it.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
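    [Before/after shape of such a conversion, on the kind of "cpu" node
    walk setup_64.c does for cache info (loop bodies elided):]

        #include <linux/of.h>

        static void walk_cpu_nodes(void)
        {
                struct device_node *np;

                /* open-coded: */
                for (np = NULL;
                     (np = of_find_node_by_type(np, "cpu")) != NULL; ) {
                        /* ... */
                }

                /* with the helper macro: */
                for_each_node_by_type(np, "cpu") {
                        /* ... */
                }
        }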
* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc (Linus Torvalds, 2011-07-25: 1 file, +3/-3)

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (99 commits)
      drivers/virt: add missing linux/interrupt.h to fsl_hypervisor.c
      powerpc/85xx: fix mpic configuration in CAMP mode
      powerpc: Copy back TIF flags on return from softirq stack
      powerpc/64: Make server perfmon only built on ppc64 server devices
      powerpc/pseries: Fix hvc_vio.c build due to recent changes
      powerpc: Exporting boot_cpuid_phys
      powerpc: Add CFAR to oops output
      hvc_console: Add kdb support
      powerpc/pseries: Fix hvterm_raw_get_chars to accept < 16 chars, fixing xmon
      powerpc/irq: Quieten irq mapping printks
      powerpc: Enable lockup and hung task detectors in pseries and ppc64 defconfigs
      powerpc: Add mpt2sas driver to pseries and ppc64 defconfig
      powerpc: Disable IRQs off tracer in ppc64 defconfig
      powerpc: Sync pseries and ppc64 defconfigs
      powerpc/pseries/hvconsole: Fix dropped console output
      hvc_console: Improve tty/console put_chars handling
      powerpc/kdump: Fix timeout in crash_kexec_wait_realmode
      powerpc/mm: Fix output of total_ram
      powerpc/cpufreq: Add cpufreq driver for Momentum Maple boards
      powerpc: Correct annotations of pmu registration functions
      ...

    Fix up trivial Kconfig/Makefile conflicts in arch/powerpc, drivers,
    and drivers/cpufreq
| * powerpc: Fix early boot accounting of CPUs (Matt Evans, 2011-06-17: 1 file, +3/-3)

    smp_release_cpus() waits for all cpus (including the bootcpu) due to
    an off-by-one count on boot_cpu_count (which is all CPUs). This patch
    replaces that with spinning_secondaries (which is all secondary
    CPUs).

    Signed-off-by: Matt Evans <matt@ozlabs.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | KVM: PPC: Allocate RMAs (Real Mode Areas) at boot for use by guests (Paul Mackerras, 2011-07-12: 1 file, +3/-0)

    This adds infrastructure which will be needed to allow book3s_hv KVM
    to run on older POWER processors, including PPC970, which don't
    support the Virtual Real Mode Area (VRMA) facility but only the Real
    Mode Offset (RMO) facility. These processors require a physically
    contiguous, aligned area of memory for each guest. When the guest
    does an access in real mode (MMU off), the address is compared
    against a limit value, and if it is lower, the address is ORed with
    an offset value (from the Real Mode Offset Register (RMOR)) and the
    result becomes the real address for the access. The size of the RMA
    has to be one of a set of supported values, which usually includes
    64MB, 128MB, 256MB and some larger powers of 2.

    Since we are unlikely to be able to allocate 64MB or more of
    physically contiguous memory after the kernel has been running for a
    while, we allocate a pool of RMAs at boot time using the bootmem
    allocator. The size and number of the RMAs can be set using the
    kvm_rma_size=xx and kvm_rma_count=xx kernel command line options.

    KVM exports a new capability, KVM_CAP_PPC_RMA, to signal the
    availability of the pool of preallocated RMAs. The capability value
    is 1 if the processor can use an RMA but doesn't require one (because
    it supports the VRMA facility), or 2 if the processor requires an RMA
    for each guest.

    This adds a new ioctl, KVM_ALLOCATE_RMA, which allocates an RMA from
    the pool and returns a file descriptor which can be used to map the
    RMA. It also returns the size of the RMA in the argument structure.

    Having an RMA means we will get multiple KVM_SET_USER_MEMORY_REGION
    ioctl calls from userspace. To cope with this, we now preallocate the
    kvm->arch.ram_pginfo array when the VM is created, with a size
    sufficient for up to 64GB of guest memory. Subsequently we will get
    rid of this array and use memory associated with each memslot
    instead.

    This moves most of the code that translates user addresses into host
    pfns (page frame numbers) out of kvmppc_prepare_vrma, up one level to
    kvmppc_core_prepare_memory_region. Also, instead of having to look up
    the VMA for each page in order to check the page size, we now check
    that the pages we get are compound pages of 16MB. However, if we are
    adding memory that is mapped to an RMA, we don't bother with calling
    get_user_pages_fast and instead just offset from the base pfn for the
    RMA.

    Typically the RMA gets added after vcpus are created, which makes it
    inconvenient to have the LPCR (logical partition control register)
    value in the vcpu->arch struct, since the LPCR controls whether the
    processor uses RMA or VRMA for the guest. This moves the LPCR value
    into the kvm->arch struct and arranges for the MER (mediated external
    request) bit, which is the only bit that varies between vcpus, to be
    set in assembly code when going into the guest if there is a pending
    external interrupt request.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
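    [The boot-time knobs follow the usual early_param/memparse pattern. A
    hedged sketch of the parsing (the real code lives outside setup_64.c;
    the defaults here are assumptions):]

        #include <linux/init.h>
        #include <linux/kernel.h>

        static unsigned long kvm_rma_size = 64 << 20;   /* assumed default */
        static unsigned long kvm_rma_count = 1;         /* assumed default */

        static int __init early_parse_rma_size(char *p)
        {
                if (!p)
                        return 1;
                kvm_rma_size = memparse(p, &p);         /* "64M", "256M", ... */
                return 0;
        }
        early_param("kvm_rma_size", early_parse_rma_size);

        static int __init early_parse_rma_count(char *p)
        {
                if (!p)
                        return 1;
                kvm_rma_count = simple_strtoul(p, NULL, 0);
                return 0;
        }
        early_param("kvm_rma_count", early_parse_rma_count);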
* powerpc/fsl-booke64: Add support for Debug Level exception handler (Kumar Gala, 2011-05-19: 1 file, +8/-0)

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc: Rename slb0_limit() to safe_stack_limit() and add Book3E support (Benjamin Herrenschmidt, 2011-05-06: 1 file, +18/-5)

    slb0_limit() wasn't a very descriptive name. This changes it, along
    with a comment explaining what it's used for, and provides a 64-bit
    BookE implementation.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
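    [The function bounds where the irq and emergency stacks may live:
    under the first segment on Book3S so the bolted SLB entry covers
    them, and under the bolted linear mapping on Book3E. Roughly (symbol
    names as remembered from that era's code):]

        static u64 safe_stack_limit(void)
        {
        #ifdef CONFIG_PPC_BOOK3E
                /* stay under the TLB-bolted linear mapping */
                return linear_map_top;
        #else
                /* stay within the first segment: 1TB or 256MB */
                if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
                        return 1UL << SID_SHIFT_1T;
                return 1UL << SID_SHIFT;
        #endif
        }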
* powerpc: Free up some CPU feature bits by moving out MMU-related features (Matt Evans, 2011-04-27: 1 file, +1/-1)

    Some of the 64-bit PPC CPU features are MMU-related, so this patch
    moves them to MMU_FTR_ bits. All cpu_has_feature()-style tests are
    moved to mmu_has_feature(), and seven feature bits are freed as a
    result.

    Signed-off-by: Matt Evans <matt@ozlabs.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Properly handshake CPUs going out of boot spin loop (Benjamin Herrenschmidt, 2011-04-20: 1 file, +12/-1)

    We need to wait a bit for them to have done their CPU setup, or we
    might end up with translation and EE on with different LPCR values
    between threads.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Fix incorrect comment about interrupt stack allocation (Anton Blanchard, 2010-12-09: 1 file, +2/-2)

    We now allow interrupt stacks anywhere in the first segment, which
    can be 256M or 1TB. Fix the comment.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Update a BKL related comment (Alessio Igor Bogani, 2010-11-18: 1 file, +2/-3)

    Commit 5e3d20a removed the BKL from startup code, so setup_arch() is
    no longer called with the BKL held. Update the comment on top of that
    function, and also fix a typo.

    This work was supported by a hardware donation from the CE Linux
    Forum.

    Signed-off-by: Alessio Igor Bogani <abogani@texware.it>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge commit 'v2.6.36-rc3' into x86/memblock (Ingo Molnar, 2010-08-31: 1 file, +43/-39)

    Conflicts:
        arch/x86/kernel/trampoline.c
        mm/memblock.c

    Merge reason: Resolve the conflicts, update to latest upstream.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * powerpc: Correct smt_enabled=X boot option for > 2 threads per core (Nathan Fontenot, 2010-08-24: 1 file, +36/-27)

    The 'smt_enabled=X' boot option does not handle values of X > 2. For
    POWER7 processors with SMT modes of 0, 1, 2, 3 and 4, this does not
    work. This patch allows the smt_enabled option to be set to any
    value, limited to a maximum equal to the number of threads per core.

    Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
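    [Conceptually the fix replaces a special-cased on/off check with
    parse-and-clamp. A simplified sketch of check_smt_enabled() handling
    the command line value (the real function also consults the device
    tree when firmware supplies the setting):]

        static int smt_enabled_at_boot = 1;

        static void __init check_smt_enabled(const char *arg)
        {
                int smt;

                smt_enabled_at_boot = threads_per_core; /* default: all on */
                if (!arg)
                        return;
                if (!strcmp(arg, "on") || !strcmp(arg, "1"))
                        smt_enabled_at_boot = threads_per_core;
                else if (!strcmp(arg, "off") || !strcmp(arg, "0"))
                        smt_enabled_at_boot = 1;
                else if (!kstrtoint(arg, 10, &smt))
                        smt_enabled_at_boot = min(threads_per_core, smt);
        }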
| * Merge commit 'gcl/next' into next (Benjamin Herrenschmidt, 2010-08-04: 1 file, +10/-10)
| * | powerpc/kexec: Switch to a static PACA on the way out (Matt Evans, 2010-07-31: 1 file, +0/-10)

    With dynamic PACAs, the kexecing CPU's PACA won't lie within the
    kernel static data, and there is a chance that something may stomp it
    when preparing to kexec. This patch switches this final CPU to a
    static PACA just before we pull the switch.

    Signed-off-by: Matt Evans <matt@ozlabs.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc: Optimise per cpu accesses on 64bit (Anton Blanchard, 2010-07-09: 1 file, +7/-2)

    Now that we dynamically allocate the paca array, it takes an extra
    load whenever we want to access another cpu's paca. One place we do
    that a lot is per cpu variables. A simple example:

        DEFINE_PER_CPU(unsigned long, vara);

        unsigned long test4(int cpu)
        {
                return per_cpu(vara, cpu);
        }

    This takes 4 loads, 5 if you include the actual load of the per cpu
    variable:

        ld r11,-32760(r30)  # load address of paca pointer
        ld r9,-32768(r30)   # load link address of percpu variable
        sldi r3,r29,9       # get offset into paca (each entry is 512 bytes)
        ld r0,0(r11)        # load paca pointer
        add r3,r0,r3        # paca + offset
        ld r11,64(r3)       # load paca[cpu].data_offset
        ldx r3,r9,r11       # load per cpu variable

    If we remove the ppc64-specific per_cpu_offset(), we get the generic
    one, which indexes into a statically allocated array. This removes
    one load and one add:

        ld r11,-32760(r30)  # load address of __per_cpu_offset
        ld r9,-32768(r30)   # load link address of percpu variable
        sldi r3,r29,3       # get offset into __per_cpu_offset (each entry 8 bytes)
        ldx r11,r11,r3      # load __per_cpu_offset[cpu]
        ldx r3,r9,r11       # load per cpu variable

    Having all the offsets in one array also helps when iterating over a
    per cpu variable across a number of cpus, such as in the scheduler.
    Before, we would need to load one paca cacheline when calculating
    each per cpu offset. Now we have 16 (128 / sizeof(long)) per cpu
    offsets in each cacheline.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | | memblock: Remove rmo_size, bury it in arch/powerpc where it belongs (Benjamin Herrenschmidt, 2010-08-05: 1 file, +1/-1)

    The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
    server ppc64, though I hijack it on embedded ppc64 for similar
    purposes) and represents the area of memory that can be accessed in
    real mode (aka with the MMU off), or on embedded, from the exception
    vectors (which are bolted in the TLB), which pretty much boils down
    to the same thing.

    We take that out of the generic MEMBLOCK data structure and move it
    into arch/powerpc where it belongs, renaming it to "RMA" while at it.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | lmb: rename to memblock (Yinghai Lu, 2010-07-14: 1 file, +10/-10)

    Renamed via the following scripts:

        FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
        sed -i \
            -e 's/lmb/memblock/g' \
            -e 's/LMB/MEMBLOCK/g' \
            $FILES

        for N in $(find . -name lmb.[ch]); do
            M=$(echo $N | sed 's/lmb/memblock/g')
            mv $N $M
        done

    Also removed some wrong changes like lmbench and dlmb etc., and moved
    memblock.c from lib/ to mm/.

    Suggested-by: Ingo Molnar <mingo@elte.hu>
    Acked-by: "H. Peter Anvin" <hpa@zytor.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Unconditionally enable irq stacks (Christoph Hellwig, 2010-06-15: 1 file, +0/-4)

    Irq stacks provide essential protection from stack overflows through
    external interrupts, at the cost of two additional stacks per CPU.
    Enable them unconditionally to simplify the kernel build and prevent
    people from accidentally disabling them.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Use more accurate limit for first segment memory allocations (Anton Blanchard, 2010-05-21: 1 file, +13/-4)

    Author: Milton Miller <miltonm@bga.com>

    On large machines we are running out of room below 256MB. In some
    cases we only need to ensure the allocation is in the first segment,
    which may be 256MB or 1TB.

    Add slb0_limit() and use it to specify the upper limit for the
    irqstack and emergency stacks.

    On a large ppc64 box, this fixes a panic at boot when the
    crashkernel= option is specified (previously we would run out of
    memory below 256MB).

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Use common cpu_die (fixes SMP+SUSPEND build) (Milton Miller, 2010-05-21: 1 file, +0/-6)

    Configuring a powerpc 32-bit kernel for both SMP and SUSPEND turns on
    CPU_HOTPLUG so that disable_nonboot_cpus can be called by the common
    suspend code. Previously the definition of cpu_die for ppc32 was in
    the powermac platform code, causing it to be undefined if that
    platform was not selected:

        arch/powerpc/kernel/built-in.o: In function 'cpu_idle':
        arch/powerpc/kernel/idle.c:98: undefined reference to 'cpu_die'

    Move the code from setup_64 to smp.c and rename the powermac versions
    to their specific names. Note that this does not set up the cpu_die
    pointers in either smp_ops (request a given cpu die) or ppc_md (make
    this cpu die) for other platforms, but there are generic versions in
    smp.c.

    Reported-by: Matt Sealey <matt@genesi-usa.com>
    Reported-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Anton Vorontsov <avorontsov@mvista.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Fix swiotlb to respect the boot option (FUJITA Tomonori, 2010-03-19: 1 file, +0/-6)

    powerpc initializes swiotlb before parsing the kernel boot options,
    so swiotlb options (e.g. specifying the swiotlb buffer size) are
    ignored. Any time before freeing bootmem works for swiotlb, so this
    patch moves powerpc's swiotlb initialization to after the kernel boot
    options are parsed, into mem_init (as x86 does).

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Becky Bruce <beckyb@kernel.crashing.org>
    Tested-by: Albert Herranz <albert_herranz@yahoo.es>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>