path: root/arch/powerpc/include
Commit log (most recent first); each entry shows the subject, author, date, and diffstat (files changed, lines removed/added).
...
* powerpc/powernv: Allow to freeze PE (Gavin Shan, 2014-08-05; 1 file, -1/+8)

  The patch synchronizes the header file with firmware to add the new OPAL API
  opal_pci_eeh_freeze_set(), which is used to freeze the specified PE in order
  to support "compound" PEs.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/powernv: Enable M64 aperatus for PHB3 (Guo Chao, 2014-08-05; 1 file, -1/+7)

  This patch enables the M64 aperture for PHB3. We already had a platform hook
  (ppc_md.pcibios_window_alignment) to affect the PCI resource assignment done
  in the PCI core, so that each PE's M32 resource was built on the basis of the
  M32 segment size. Similarly, we're using that for M64 assignment on the basis
  of the M64 segment size.

  * We're using the last M64 BAR to cover the M64 aperture, and it's shared by
    all 256 PEs.
  * We don't support P7IOC yet. However, some function callbacks are added to
    (struct pnv_phb) so that we can reuse them on P7IOC in future.
  * A PE corresponding to a PCI bus with a large M64 BAR device attached might
    span multiple M64 segments. We introduce a "compound" PE to cover this
    case. The compound PE is a list of PEs and the master PE is used as before.
    The slave PEs are just for MMIO isolation.

  Signed-off-by: Guo Chao <yan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/eeh: Aux PE data for error log (Gavin Shan, 2014-08-05; 1 file, -0/+2)

  The patch allows a PE (struct eeh_pe) instance to have auxiliary data, whose
  size is configurable per platform. For PowerNV, the auxiliary data will be
  used to cache PHB diag-data for that PE (frozen PE or fenced PHB). In turn,
  we can retrieve the diag-data at any later point. It's useful for the case of
  VFIO PCI devices, where the error log should be cached and then retrieved by
  the guest at a later point. It also avoids overwriting the PHB diag-data if
  another frozen PE is reported before the previous diag-data has been fetched
  by the guest.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/eeh: Make diag-data not endian dependent (Gavin Shan, 2014-08-05; 1 file, -64/+64)

  This is a follow-up to commit ddf0322a ("powerpc/powernv: Fix endianness
  problems in EEH"). The patch helps to retrieve the diag-data in an
  endian-independent way.

  Cc: Guo Chao <yan@linux.vnet.ibm.com>
  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/eeh: Selectively enable IO for error log (Gavin Shan, 2014-08-05; 1 file, -4/+5)

  According to the experiment I did, PCI config access to a frozen PE is
  blocked by hardware on P7IOC, but PHB3 doesn't do that. That means we always
  get 0xFF's while dumping the PCI config space of a frozen PE on P7IOC; we
  don't have the problem on PHB3. So on P7IOC we have to enable I/O prior to
  collecting the error log, otherwise meaningless 0xFF's are always returned.
  The patch fixes this with an EEH flag (EEH_ENABLE_IO_FOR_LOG), which is
  selectively set to indicate the affected cases: P7IOC on the PowerNV
  platform, and the pSeries platform.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/eeh: Refactor EEH flag accessors (Gavin Shan, 2014-08-05; 1 file, -21/+11)

  There are multiple global EEH flags, and almost every flag has its own
  accessor, which doesn't make sense. The patch refactors the EEH flag
  accessors so that they look unified:

      eeh_add_flag():   Add EEH flag
      eeh_clear_flag(): Clear EEH flag
      eeh_has_flag():   Check if one specific flag has been set
      eeh_enabled():    Check if EEH functionality has been enabled

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
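  A rough sketch of the unified accessor pattern described above (the backing
  variable eeh_subsystem_flags and the EEH_ENABLED bit are assumptions made
  for illustration, not details taken from this log): all four helpers can
  operate on a single global bitmask.

      #include <linux/types.h>

      /* illustrative sketch only */
      extern int eeh_subsystem_flags;

      static inline void eeh_add_flag(int flag)
      {
              eeh_subsystem_flags |= flag;
      }

      static inline void eeh_clear_flag(int flag)
      {
              eeh_subsystem_flags &= ~flag;
      }

      static inline bool eeh_has_flag(int flag)
      {
              return !!(eeh_subsystem_flags & flag);
      }

      static inline bool eeh_enabled(void)
      {
              /* simplified: the real check may consult more than one bit */
              return eeh_has_flag(EEH_ENABLED);
      }

  A caller such as the error-log path in the entry above would then simply do
  "if (eeh_has_flag(EEH_ENABLE_IO_FOR_LOG)) ..." rather than using a dedicated
  accessor per flag.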
* powerpc/eeh: EEH support for VFIO PCI device (Gavin Shan, 2014-08-05; 1 file, -0/+12)

  The patch exports functions to be used by the new VFIO ioctl command, which
  will be introduced in a subsequent patch, to support EEH functionality for
  VFIO PCI devices.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Acked-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/eeh: Avoid event on passed PE (Gavin Shan, 2014-08-05; 1 file, -0/+7)

  We must not handle EEH errors on devices which are passed to somebody else.
  Instead, we expect the owner of the frozen device to detect an EEH error and
  recover from it. This avoids EEH error handling on passed-through devices,
  so the device owner gets a chance to handle them.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Acked-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* Merge remote-tracking branch 'scott/next' into next (Benjamin Herrenschmidt, 2014-08-05; 3 files, -10/+56)

  Scott writes:

  Highlights include e6500 hardware threading support, an e6500 TLB erratum
  workaround, corenet error reporting, support for a new board, and some
  minor fixes.

* powerpc/e6500: Add support for hardware threads (Andy Fleming, 2014-07-29; 3 files, -1/+37)

  The general idea is that each core will release all of its threads into the
  secondary thread startup code, which will eventually wait in the secondary
  core holding area for the appropriate bit in the PACA to be set. The
  kick_cpu function pointer will set that bit in the PACA, and thus "release"
  the core/thread to boot. We also need to do a few things that U-Boot
  normally does for CPUs (like enable branch prediction).

  Signed-off-by: Andy Fleming <afleming@freescale.com>
  [scottwood@freescale.com: various changes, including only enabling threads
   if Linux wants to kick them]
  Signed-off-by: Scott Wood <scottwood@freescale.com>

* powerpc/booke: Define MSR bits the same way as reg.h (Scott Wood, 2014-07-29; 1 file, -9/+19)

  This ensures that all MSR definitions are consistently unsigned long, and
  that MSR_CM does not become 0xffffffff80000000 (this is usually harmless
  because MSR is 32-bit on booke and is mainly noticeable when debugging, but
  still I'd rather avoid it).

  Signed-off-by: Scott Wood <scottwood@freescale.com>

* powerpc/perf: Pass the struct perf_events down to compute_mmcr() (Michael Ellerman, 2014-07-28; 1 file, -1/+4)

  To support per-event exclude settings on Power8 we need access to the
  struct perf_events in compute_mmcr().

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add machine_early_initcall() (Michael Ellerman, 2014-07-28; 1 file, -0/+1)

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove misleading DISABLE_INTS (Michael Ellerman, 2014-07-28; 1 file, -4/+7)

  DISABLE_INTS has a long and storied history, but for some time now it has
  not actually disabled interrupts.

  For the open-coded exception handlers, just stop using it; instead call
  RECONCILE_IRQ_STATE directly. This has the benefit of removing a level of
  indirection, and making it clear that r10 & r11 are used at that point.

  For the addition case we still need a macro, so rename it to clarify what
  it actually does.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Document register clobbering in EXCEPTION_COMMON() (Michael Ellerman, 2014-07-28; 1 file, -0/+1)

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Update comments in irqflags.h (Michael Ellerman, 2014-07-28; 1 file, -3/+5)

  The comment on TRACE_ENABLE_INTS is incorrect, and appears to have always
  been incorrect since the code was merged. It probably came from an original
  out-of-tree patch. Replace it with something that's correct. Also propagate
  the message to RECONCILE_IRQ_STATE(), because it's potentially subtle.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove CLASSIC_PPC (Michael Ellerman, 2014-07-28; 1 file, -5/+2)

  We have a strange #define in cputable.h called CLASSIC_PPC. Although it is
  defined for 32- & 64-bit, it's only used for 32-bit, and it's basically a
  duplicate of CONFIG_PPC_BOOK3S_32, so let's use the latter.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove CONFIG_POWER4 (Michael Ellerman, 2014-07-28; 1 file, -1/+1)

  Although the name CONFIG_POWER4 suggests that it controls support for
  power4 cpus, this symbol is actually misnamed. It is a historical wart from
  the powermac code, which used to support building a 32-bit kernel for
  power4. CONFIG_POWER4 was used in that context to guard code that was
  64-bit only. In the powermac code we can just use CONFIG_PPC64 instead, and
  in other places it is a synonym for CONFIG_PPC_BOOK3S_64.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Remove oprofile RS64 support (Michael Ellerman, 2014-07-28; 1 file, -1/+0)

  We no longer support these cpus, so we don't need oprofile support for them
  either.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove CONFIG_POWER3 (Michael Ellerman, 2014-07-28; 1 file, -2/+1)

  Now that we have dropped power3 support we can remove CONFIG_POWER3. The
  usage in pgtable_32.c was already dead code, as CONFIG_POWER3 was not
  selectable on PPC32.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove MMU_FTR_SLB (Michael Ellerman, 2014-07-28; 3 files, -10/+4)

  We now only support cpus that use an SLB, so we don't need an MMU feature
  to indicate that.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Remove STAB code (Michael Ellerman, 2014-07-28; 4 files, -30/+1)

  Old cpus didn't have a Segment Lookaside Buffer (SLB); instead they had a
  Segment Table (STAB). Now that we've dropped support for those cpus, we can
  remove the STAB support entirely.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Drop support for pre-POWER4 cpus (Michael Ellerman, 2014-07-28; 1 file, -12/+6)

  We inadvertently broke power3 support back in 3.4 with commit f5339277eb8d
  "powerpc: Remove FW_FEATURE ISERIES from arch code". No one noticed until
  at least 3.9. By then we'd also broken it with the optimised memcpy,
  copy_to/from_user and clear_user routines. We don't want to add any more
  complexity to those just to support ancient cpus, so it seems like it's a
  good time to drop support for power3 and earlier.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Use standard macros for sys_sigpending() & sys_old_getrlimit() (Michael Ellerman, 2014-07-28; 1 file, -2/+2)

  Currently we have sys_sigpending and sys_old_getrlimit defined to use
  COMPAT_SYS() in systbl.h, but then both are #defined to sys_ni_syscall in
  systbl.S. This seems to have been done when ppc and ppc64 were merged, in
  commit 9994a33 "Introduce entry_{32,64}.S, misc_{32,64}.S, systbl.S".

  AFAICS there's no longer (or never was) any need for this; we can just use
  SYSX() for both and remove the #defines to sys_ni_syscall.

  The expansion before was:

      #define COMPAT_SYS(func)  .llong .sys_##func,.compat_sys_##func
      #define sys_old_getrlimit sys_ni_syscall
      COMPAT_SYS(old_getrlimit)
      =>
      .llong .sys_old_getrlimit,.compat_sys_old_getrlimit
      =>
      .llong .sys_ni_syscall,.compat_sys_old_getrlimit

  After is:

      #define SYSX(f, f3264, f32)  .llong .f,.f3264
      SYSX(sys_ni_syscall, compat_sys_old_getrlimit, sys_old_getrlimit)
      =>
      .llong .sys_ni_syscall,.compat_sys_old_getrlimit

  ie. they are equivalent. Finally both COMPAT_SYS() and SYSX() evaluate to
  sys_ni_syscall in the Cell SPU code.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge branch 'merge' into next (Benjamin Herrenschmidt, 2014-07-28; 2 files, -1/+3)

  Bring in some important fixes from the 3.16 branch.

* powernv: Add OPAL tracepoints (Anton Blanchard, 2014-07-11; 1 file, -0/+45)

  Knowing how long we spend in firmware calls is an important part of
  minimising OS jitter. This patch adds tracepoints to each OPAL call. If
  tracepoints are enabled we branch out to a common routine that calls an
  entry and an exit tracepoint. This allows us to write tools that monitor
  the frequency and duration of OPAL calls, e.g.:

      name                   count  total(ms)  min(ms)  max(ms)  avg(ms)  period(ms)
      OPAL_HANDLE_INTERRUPT      5      0.199    0.037    0.042    0.040   12547.545
      OPAL_POLL_EVENTS         204      2.590    0.012    0.036    0.013    2264.899
      OPAL_PCI_MSI_EOI        2830      3.066    0.001    0.005    0.001      81.166

  We use jump labels if configured, which means we only add a single nop
  instruction to every OPAL call when the tracepoints are disabled.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries: Use jump labels for hcall tracepoints (Anton Blanchard, 2014-07-11; 1 file, -0/+9)

  hcall tracepoints add quite a few instructions to our hcall path:

      plpar_hcall:
              mr      r2,r2
              mfcr    r0
              stw     r0,8(r1)
              b       164             <---- start
              ld      r12,0(r2)
              std     r12,32(r1)
              cmpdi   r12,0
              beq     164             <---- end
              ...

  We have an unconditional branch that gets noped out during boot and a
  load/compare/branch. We also store the tracepoint value to the stack for
  the hcall_exit path to use.

  By using jump labels we can simplify this to just a single nop that gets
  replaced with a branch when the tracepoint is enabled:

      plpar_hcall:
              mr      r2,r2
              mfcr    r0
              stw     r0,8(r1)
              nop                     <----
              ...

  If jump labels are not enabled, we fall back to the old method.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
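  The same pattern can be sketched in C to show what the jump label buys: the
  guard compiles to a single nop until the key is enabled, at which point it
  is patched into a branch. The names below (hcall_tracepoint_key and the
  trace stub) follow the pattern described above but are assumptions for
  illustration, not copied from the kernel source:

      #include <linux/jump_label.h>

      /* patched at runtime: nop while tracing is off, branch once enabled */
      static struct static_key hcall_tracepoint_key = STATIC_KEY_INIT_FALSE;

      static void trace_hcall_entry_stub(unsigned long opcode)
      {
              /* stand-in for the real entry tracepoint */
      }

      long traced_hcall(unsigned long opcode)
      {
              if (static_key_false(&hcall_tracepoint_key))
                      trace_hcall_entry_stub(opcode);

              /* ... the real hypervisor call would happen here ... */
              return 0;
      }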
* Merge branch 'merge' into next (Benjamin Herrenschmidt, 2014-07-11; 5 files, -83/+13)

* powerpc/fsl-booke: Add support for T2080/T2081 SoC (Shengzhou Liu, 2014-07-02; 1 file, -0/+2)

  The T2080 QorIQ multicore processor combines four dual-threaded e6500 Power
  Architecture processor cores with high-performance datapath acceleration
  logic and the network and peripheral bus interfaces required for
  networking, telecom/datacom, wireless infrastructure, and mil/aerospace
  applications.

  The T2080 SoC includes the following functions and features:
  - Four dual-threaded 64-bit Power Architecture e6500 cores, up to 1.8GHz
  - 2MB L2 cache and 512KB CoreNet platform cache (CPC)
  - Hierarchical interconnect fabric
  - One 32-/64-bit DDR3/3L SDRAM memory controller with ECC and interleaving
  - Data Path Acceleration Architecture (DPAA) incorporating acceleration
  - 16 SerDes lanes up to 10.3125 GHz
  - 8 Ethernet interfaces (multiple 1G/2.5G/10G MACs)
  - High-speed peripheral interfaces
    - Four PCI Express controllers (two PCIe 2.0 and two PCIe 3.0)
    - Two Serial RapidIO 2.0 controllers/ports running at up to 5 GHz
  - Additional peripheral interfaces
    - Two serial ATA (SATA 2.0) controllers
    - Two high-speed USB 2.0 controllers with integrated PHY
    - Enhanced secure digital host controller (SD/SDXC/eMMC)
    - Enhanced serial peripheral interface (eSPI)
    - Four I2C controllers
    - Four 2-pin UARTs or two 4-pin UARTs
    - Integrated Flash Controller supporting NAND and NOR flash
  - Three eight-channel DMA engines
  - Support for hardware virtualization and partitioning enforcement
  - QorIQ Platform's Trust Architecture 2.0

  T2081 is a reduced personality of T2080 with the following differences:

      Feature                 T2080      T2081
      1G Ethernet numbers:    8          6
      10G Ethernet numbers:   4          2
      SerDes lanes:           16         8
      Serial RapidIO, RMan:   2          no
      SATA controller:        2          no
      Aurora:                 yes        no
      SoC package:            896-pins   780-pins

  Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
  [scottwood@freescale.com: added fsl,qoriq-pci-v3.0 for U-Boot compat]
  Signed-off-by: Scott Wood <scottwood@freescale.com>
* powerpc/8xx: Remove empty asm/mpc8xx.h (Scott Wood, 2014-06-25; 2 files, -11/+0)

  m8xx_pcmcia_ops was the only thing in this file (other than a comment that
  describes a usage that doesn't match the file's contents); now that
  m8xx_pcmcia_ops is gone, remove the empty file.

  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Cc: Pantelis Antoniou <pantelis.antoniou@gmail.com>
  Cc: Vitaly Bordug <vitb@kernel.crashing.org>
  Cc: netdev@vger.kernel.org

* pcmcia: Remove m8xx_pcmcia driver (Scott Wood, 2014-06-25; 1 file, -2/+0)

  This driver doesn't build, and apparently has not built since arch/ppc was
  removed in 2008 (when mk_int_int_mask was removed from asm/irq.h, among
  other build errors).

  A few weeks ago I asked whether anyone was actively maintaining this code,
  and got no positive response: http://patchwork.ozlabs.org/patch/352082/

  So, let's remove it.

  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Cc: Vitaly Bordug <vitb@kernel.crashing.org>
  Cc: linux-pcmcia@lists.infradead.org
  Cc: Paul Bolle <pebolle@tiscali.nl>

* booke/powerpc: define wimge shift mask to fix compilation error (Bharat Bhushan, 2014-06-25; 1 file, -0/+2)

  This fixes the below compilation error on SoCs where CONFIG_PHYS_64BIT is
  not defined:

      arch/powerpc/kvm/e500_mmu_host.c: In function 'kvmppc_e500_shadow_map':
      arch/powerpc/kvm/e500_mmu_host.c:631:20: error: 'PTE_WIMGE_SHIFT' undeclared (first use in this function)
         wimg = (*ptep >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
                          ^
      arch/powerpc/kvm/e500_mmu_host.c:631:20: note: each undeclared identifier is reported only once for each function it appears in
      make[1]: *** [arch/powerpc/kvm/e500_mmu_host.o] Error 1

  Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
  Signed-off-by: Scott Wood <scottwood@freescale.com>

* powerpc/e500mc: Fix wrong value of MCSR_L2MMU_MHIT (Scott Wood, 2014-06-20; 1 file, -1/+1)

  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Reported-by: Ed Swarthout <ed.swarthout@freescale.com>
* Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2014-08-04; 1 file, -0/+2)

  Pull locking updates from Ingo Molnar:
  "The main changes in this cycle are:

   - big rtmutex and futex cleanup and robustification from Thomas Gleixner
   - mutex optimizations and refinements from Jason Low
   - arch_mutex_cpu_relax() removal and related cleanups
   - smaller lockdep tweaks"

  * 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
    arch, locking: Ciao arch_mutex_cpu_relax()
    locking/lockdep: Only ask for /proc/lock_stat output when available
    locking/mutexes: Optimize mutex trylock slowpath
    locking/mutexes: Try to acquire mutex only if it is unlocked
    locking/mutexes: Delete the MUTEX_SHOW_NO_WAITER macro
    locking/mutexes: Correct documentation on mutex optimistic spinning
    rtmutex: Make the rtmutex tester depend on BROKEN
    futex: Simplify futex_lock_pi_atomic() and make it more robust
    futex: Split out the first waiter attachment from lookup_pi_state()
    futex: Split out the waiter check from lookup_pi_state()
    futex: Use futex_top_waiter() in lookup_pi_state()
    futex: Make unlock_pi more robust
    rtmutex: Avoid pointless requeueing in the deadlock detection chain walk
    rtmutex: Cleanup deadlock detector debug logic
    rtmutex: Confine deadlock logic to futex
    rtmutex: Simplify remove_waiter()
    rtmutex: Document pi chain walk
    rtmutex: Clarify the boost/deboost part
    rtmutex: No need to keep task ref for lock owner check
    rtmutex: Simplify and document try_to_take_rtmutex()
    ...
* arch, locking: Ciao arch_mutex_cpu_relax() (Davidlohr Bueso, 2014-07-17; 1 file, -0/+2)

  The arch_mutex_cpu_relax() function, introduced by 34b133f, is hacky and
  ugly. It was added a few years ago to address the fact that common
  cpu_relax() calls include yielding on s390, and thus impact the optimistic
  spinning functionality of mutexes. Nowadays we use this function well
  beyond mutexes: rwsem, qrwlock, mcs and lockref. Since the macro that
  defines the call is in the mutex header, any users must include mutex.h,
  and the naming is misleading as well.

  This patch (i) renames the call to cpu_relax_lowlatency ("relax, but only
  if you can do it with very low latency") and (ii) defines it in each arch's
  asm/processor.h local header, just like for regular cpu_relax functions. On
  all archs, except s390, cpu_relax_lowlatency is simply cpu_relax, and thus
  we can take it out of mutex.h. While this can seem redundant, I believe it
  is a good choice as it allows us to move out arch-specific logic from
  generic locking primitives and enables future(?) archs to transparently
  define it, similarly to System Z.

  Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Anton Blanchard <anton@samba.org>
  Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Bharat Bhushan <r65777@freescale.com> Cc: Catalin Marinas <catalin.marinas@arm.com>
  Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Chris Metcalf <cmetcalf@tilera.com>
  Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Chris Zankel <chris@zankel.net>
  Cc: David Howells <dhowells@redhat.com> Cc: David S. Miller <davem@davemloft.net>
  Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
  Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
  Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Helge Deller <deller@gmx.de> Cc: Hirokazu Takata <takata@linux-m32r.org>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James E.J. Bottomley <jejb@parisc-linux.org>
  Cc: James Hogan <james.hogan@imgtec.com> Cc: Jason Wang <jasowang@redhat.com>
  Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Joe Perches <joe@perches.com>
  Cc: Jonas Bonn <jonas@southpole.se> Cc: Joseph Myers <joseph@codesourcery.com>
  Cc: Kees Cook <keescook@chromium.org> Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
  Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com>
  Cc: Michael Neuling <mikey@neuling.org> Cc: Michal Simek <monstr@monstr.eu>
  Cc: Mikael Starvik <starvik@axis.com> Cc: Nicolas Pitre <nico@linaro.org>
  Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Paul Burton <paul.burton@imgtec.com>
  Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
  Cc: Paul Mackerras <paulus@samba.org> Cc: Qais Yousef <qais.yousef@imgtec.com>
  Cc: Qiaowei Ren <qiaowei.ren@intel.com> Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
  Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Henderson <rth@twiddle.net>
  Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Russell King <linux@arm.linux.org.uk>
  Cc: Steven Miao <realmz6@gmail.com> Cc: Steven Rostedt <srostedt@redhat.com>
  Cc: Stratos Karafotis <stratosk@semaphore.gr> Cc: Tim Chen <tim.c.chen@linux.intel.com>
  Cc: Tony Luck <tony.luck@intel.com> Cc: Vasily Kulikov <segoon@openwall.com>
  Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
  Cc: Waiman Long <Waiman.Long@hp.com> Cc: Will Deacon <will.deacon@arm.com>
  Cc: Wolfram Sang <wsa@the-dreams.de>
  Cc: adi-buildroot-devel@lists.sourceforge.net Cc: linux390@de.ibm.com
  Cc: linux-alpha@vger.kernel.org Cc: linux-am33-list@redhat.com
  Cc: linux-arm-kernel@lists.infradead.org Cc: linux-c6x-dev@linux-c6x.org
  Cc: linux-cris-kernel@axis.com Cc: linux-hexagon@vger.kernel.org
  Cc: linux-ia64@vger.kernel.org Cc: linux@lists.openrisc.net
  Cc: linux-m32r-ja@ml.linux-m32r.org Cc: linux-m32r@ml.linux-m32r.org
  Cc: linux-m68k@lists.linux-m68k.org Cc: linux-metag@vger.kernel.org
  Cc: linux-mips@linux-mips.org Cc: linux-parisc@vger.kernel.org
  Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org
  Cc: linux-sh@vger.kernel.org Cc: linux-xtensa@linux-xtensa.org
  Cc: sparclinux@vger.kernel.org
  Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
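  A minimal sketch of the resulting per-arch pattern (the s390 body shown
  here is an assumption for illustration; only the rename and its placement
  in asm/processor.h are taken from the message above):

      /* in each architecture's asm/processor.h */
      #ifdef CONFIG_S390
      /* s390's cpu_relax() may yield to the hypervisor, which is exactly the
       * latency that spinning lock/lockref code wants to avoid */
      #define cpu_relax_lowlatency()  barrier()
      #else
      /* everywhere else the low-latency variant is simply cpu_relax() */
      #define cpu_relax_lowlatency()  cpu_relax()
      #endif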
* Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc (Linus Torvalds, 2014-07-23; 2 files, -1/+3)

  Pull powerpc fixes from Ben Herrenschmidt:
  "Here is a handful of powerpc fixes for 3.16. They are all pretty simple
   and self contained and should still make this release"

  * 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
    powerpc: use _GLOBAL_TOC for memmove
    powerpc/pseries: dynamically added OF nodes need to call of_node_init
    powerpc: subpage_protect: Increase the array size to take care of 64TB
    powerpc: Fix bugs in emulate_step()
    powerpc: Disable doorbells on Power8 DD1.x

* powerpc: subpage_protect: Increase the array size to take care of 64TB (Aneesh Kumar K.V, 2014-07-22; 1 file, -1/+2)

  We now support TASK_SIZE of 16TB, hence the array should be 8. Fixes the
  below crash:

      Unable to handle kernel paging request for data at address 0x000100bd
      Faulting instruction address: 0xc00000000004f914
      cpu 0x13: Vector: 300 (Data Access) at [c000000fea75fa90]
          pc: c00000000004f914: .sys_subpage_prot+0x2d4/0x5c0
          lr: c00000000004fb5c: .sys_subpage_prot+0x51c/0x5c0
          sp: c000000fea75fd10
         msr: 9000000000009032
         dar: 100bd
       dsisr: 40000000
        current = 0xc000000fea6ae490
        paca    = 0xc00000000fb8ab00
        softe: 0   irq_happened: 0x00
        pid   = 8237, comm = a.out
      enter ? for help
      [c000000fea75fe30] c00000000000a164 syscall_exit+0x0/0x98

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Disable doorbells on Power8 DD1.x (Joel Stanley, 2014-07-22; 1 file, -0/+1)

  These processors do not currently support doorbell IPIs, so remove them
  from the feature list if we are at DD 1.xx for the 0x004d part.

  This fixes a regression caused by d4e58e5928f8 (powerpc/powernv: Enable
  POWER8 doorbell IPIs). With that patch the kernel would hang at boot when
  calling smp_call_function_many, as the doorbell would not be received by
  the target CPUs:

      .smp_call_function_many+0x2bc/0x3c0 (unreliable)
      .on_each_cpu_mask+0x30/0x100
      .cpuidle_register_driver+0x158/0x1a0
      .cpuidle_register+0x2c/0x110
      .powernv_processor_idle_init+0x23c/0x2c0
      .do_one_initcall+0xd4/0x260
      .kernel_init_freeable+0x25c/0x33c
      .kernel_init+0x1c/0x120
      .ret_from_kernel_thread+0x58/0x7c

  Fixes: d4e58e5928f8 (powerpc/powernv: Enable POWER8 doorbell IPIs)
  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2014-07-21; 2 files, -2/+19)

  Pull kvm fixes from Paolo Bonzini:
  "These are mostly PPC changes for 3.16-new things. However, there is an x86
   change too and it is a regression from 3.14. As it only affects nested
   virtualization and there were other changes in this area in 3.16, I am not
   nominating it for 3.15-stable"

  * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
    KVM: x86: Check for nested events if there is an injectable interrupt
    KVM: PPC: RTAS: Do byte swaps explicitly
    KVM: PPC: Book3S PR: Fix ABIv2 on LE
    KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
    PPC: Add _GLOBAL_TOC for 32bit
    KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value
    KVM: PPC: Book3E: Unlock mmu_lock when setting caching atttribute

* Merge tag 'signed-for-3.16' of git://github.com/agraf/linux-2.6 into kvm-master (Paolo Bonzini, 2014-07-08; 2 files, -2/+19)

  Patch queue for 3.16 - 2014-07-08

  A few bug fixes to make 3.16 work well with KVM on PowerPC:
  - Fix ppc32 module builds
  - Fix Little Endian hosts
  - Fix Book3S HV HPTE lookup with huge pages in guest
  - Fix BookE lock leak
* PPC: Add _GLOBAL_TOC for 32bit (Alexander Graf, 2014-06-26; 1 file, -0/+2)

  Commit ac5a8ee8 started using _GLOBAL_TOC on ppc32 code. Unfortunately it's
  only defined for 64-bit targets, though. Define it for ppc32 as well,
  fixing the build breakage that commit introduced.

  Signed-off-by: Alexander Graf <agraf@suse.de>
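  Since 32-bit (and ELFv1) code has no TOC-pointer setup to emit at a global
  entry point, the fix can be as small as aliasing the macro. A sketch of
  such a fallback, assuming it sits next to the existing _GLOBAL() definition
  (the exact guard used upstream may differ):

      /* illustrative fallback; the 64-bit ELFv2 version emits real TOC setup */
      #ifndef __powerpc64__
      #define _GLOBAL_TOC(name)  _GLOBAL(name)
      #endif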
* KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value (Aneesh Kumar K.V, 2014-06-25; 1 file, -2/+17)

  With guests supporting multiple page sizes per segment (MPSS),
  hpte_page_size() returns the actual page size used. Add a new function to
  return the base page size, and use that to compare against the page size
  calculated from the SLB. Without this patch an hpte lookup can fail, since
  we are comparing the wrong page size in kvmppc_hv_find_lock_hpte.

  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>

* powerpc/perf: Add PPMU_ARCH_207S define (Joel Stanley, 2014-07-11; 1 file, -2/+1)

  Instead of separate bits for every POWER8 PMU feature, have a single one
  for v2.07 of the architecture. This saves us adding an MMCR2 define for a
  future patch.

  Cc: stable@vger.kernel.org
  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Acked-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Clean up MMU_FTRS_A2 and MMU_FTR_TYPE_3E (Michael Ellerman, 2014-07-11; 1 file, -9/+1)

  In fb5a515704d7 "powerpc: Remove platforms/wsp and associated pieces", we
  removed the last user of MMU_FTRS_A2, so remove it. MMU_FTRS_A2 was the
  last user of MMU_FTR_TYPE_3E, so remove it also. This leaves some
  unreachable code in mmu_context_nohash.c, so remove that as well.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/powernv: Remove OPAL v1 takeover (Michael Ellerman, 2014-06-25; 1 file, -29/+0)

  In commit 27f4488872d9 "Add OPAL takeover from PowerVM" we added support
  for "takeover" on OPAL v1 machines. This was a mode of operation where we
  would boot under pHyp, query for the presence of OPAL, and, if it was
  detected, do a special sequence to take over the machine, leaving the
  kernel running in hypervisor mode.

  OPAL v1 was never a supported product, and was never shipped outside IBM.
  As far as we know no one is still using it. Newer versions of OPAL do not
  use the takeover mechanism. Although the query for OPAL should be harmless
  on machines with newer OPAL, we have seen a machine where it causes a crash
  in Open Firmware.

  The code in early_init_devtree() to copy boot_command_line into cmd_line
  was added in commit 817c21ad9a1f "Get kernel command line accross OPAL
  takeover", and AFAIK is only used by takeover, so it should also be
  removed.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add ppc_global_function_entry() (Michael Ellerman, 2014-06-24; 1 file, -0/+11)

  ABIv2 has the concept of a global and a local entry point to a function. In
  most cases we are interested in the local entry point, and so that is what
  ppc_function_entry() returns. However we have a case in the ftrace code
  where we want the global entry point, and there may be other places we need
  it too. Rather than special-casing each, add an accessor.

  For ABIv1 and 32-bit there is only a single entry point, so we return that.
  That means it's safe for the caller to use this without also checking the
  ABI version.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
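  A sketch of what such an accessor can look like (the _CALL_ELF test is the
  usual compile-time check for ELFv2; the exact form in the kernel header may
  differ):

      static inline unsigned long ppc_global_function_entry(void *func)
      {
      #if defined(_CALL_ELF) && _CALL_ELF == 2
              /* ABIv2: the global entry point is the function address itself */
              return (unsigned long)func;
      #else
              /* ABIv1 and 32-bit have a single entry point */
              return ppc_function_entry(func);
      #endif
      }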
* powerpc: Remove __arch_swab* (Benjamin Herrenschmidt, 2014-06-24; 1 file, -43/+0)

  The generic code uses gcc built-ins, which work fine, so there's no benefit
  in implementing our own anymore. We can't completely remove the ld/st_le*
  functions, as some historical cruft still uses them, but that's next on the
  radar.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
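  For reference, the compiler built-ins that the generic byte-swap helpers
  reduce to look like this (a stand-alone illustration, not kernel code):

      #include <stdint.h>

      static inline uint32_t bswap32_example(uint32_t x)
      {
              /* gcc/clang pick the best instruction sequence for the target */
              return __builtin_bswap32(x);
      }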
* powerpc/eeh: Dump PE location code (Gavin Shan, 2014-06-11; 1 file, -0/+1)

  As Ben suggested, it's meaningful to dump the PE's location code for site
  engineers when hitting EEH errors. The patch introduces the function
  eeh_pe_loc_get() to retrieve the location code from the device tree so that
  we can output it when hitting EEH errors. If the primary PE bus is the root
  bus, the PHB's device node is tried before the root port's device node.
  Otherwise, the device node of the upstream bridge of the primary PE bus is
  checked for the location code directly.

  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
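  The lookup itself boils down to reading a location-code property from the
  chosen device node. A hedged sketch (the property name "ibm,loc-code" and
  the fallback string are assumptions; the real helper also handles the
  PHB-versus-upstream-bridge selection described above):

      #include <linux/of.h>

      static const char *pe_loc_code_example(struct device_node *dn)
      {
              const char *loc = of_get_property(dn, "ibm,loc-code", NULL);

              return loc ? loc : "N/A";
      }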
* powerpc/powernv: Fix killed EEH event (Gavin Shan, 2014-06-11; 1 file, -1/+1)

  On the PowerNV platform, EEH errors are reported by IO accessors or by a
  poller driven by an interrupt. After a PE is isolated, we won't produce an
  EEH event for it. The current implementation can lose EEH events in this
  way: the interrupt handler queues one "special" event, which drives the
  poller. The EEH thread doesn't pick up the special event yet. IO accessors
  kick in, the frozen PE is marked as "isolated" and an EEH event is queued
  to the list. The EEH thread then runs because of the special event and
  purges all existing EEH events. However, we never produce another EEH event
  for the frozen PE. Eventually, the PE is marked as "isolated" and we don't
  have an EEH event to recover it.

  The patch fixes the issue by keeping EEH events for PEs that have been
  marked as "isolated", with the help of an additional "force" option to
  eeh_remove_event().

  Reported-by: Rolf Brudeseth <rolfb@us.ibm.com>
  Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/perf: Ensure all EBB register state is cleared on fork() (Michael Ellerman, 2014-06-11; 1 file, -0/+2)

  In commit 330a1eb "Core EBB support for 64-bit book3s" I messed up
  clear_task_ebb(). It clears some but not all of the task's Event Based
  Branch (EBB) registers when we duplicate a task struct. That allows a child
  task to observe the EBBHR & EBBRR of its parent, which it should not be
  able to do. Fix it by clearing EBBHR & EBBRR.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Cc: stable@vger.kernel.org [v3.11+]
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
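  The shape of the fix is simply two extra stores in the helper that runs
  when a task struct is duplicated. A sketch, assuming the thread_struct
  field names ebbhr/ebbrr used elsewhere by the EBB code:

      static inline void clear_task_ebb(struct task_struct *t)
      {
      #ifdef CONFIG_PPC_BOOK3S_64
              /* EBBRR and EBBHR now cleared as well, so the child cannot
               * observe the parent's handler or return address */
              t->thread.ebbrr = 0;
              t->thread.ebbhr = 0;
              /* ...the existing clearing of the other EBB/PMU state stays... */
      #endif
      }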