path: root/arch/powerpc/include/asm

* powerpc/book3e: Add generic 64-bit idle powersave support (Benjamin Herrenschmidt, 2010-07-14; 1 file, -0/+1)

  We use a similar technique to ppc32: we set a thread-local flag to
  indicate that we are about to enter or have entered the stop state,
  and have fixup code in the async interrupt entry code that reacts to
  this flag to make us return to a different location (it sets NIP to
  LINK in our case).

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

  --
  v2: Fix lockdep bug; re-mask interrupts when coming back from idle.

* powerpc/book3e: Resend doorbell exceptions to ourself (Michael Ellerman, 2010-07-09; 1 file, -0/+1)

  If we are soft-disabled and receive a doorbell exception we don't
  process it immediately. This means we need to check on the way out of
  irq restore whether there are any doorbell exceptions to process.

  The problem is that at that point we don't know what our regs are, and
  that in turn makes xmon unhappy. To work around the problem, instead
  of checking for and processing doorbells, we check for any doorbells
  and, if there were any, send ourselves another.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

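  A minimal sketch of the idea, assuming hypothetical flag and tag names
  (only ppc_msgsnd() and PPC_DBELL are real kernel symbols here):

      /* On irq restore: if a doorbell arrived while we were soft-disabled,
       * raise a fresh one aimed at ourselves rather than handling it now
       * with unknown regs. */
      static void doorbell_check_self(void)
      {
              if (!this_cpu_dbell_pending)        /* hypothetical flag */
                      return;
              ppc_msgsnd(PPC_DBELL, 0, this_cpu_dbell_tag);   /* tag: our own PIR */
      }
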
* powerpc/book3e: More doorbell cleanups. Sample the PIR register (Benjamin Herrenschmidt, 2010-07-09; 1 file, -4/+3)

  The doorbells use the content of the PIR register to match messages
  from other CPUs. This may or may not be the same as our Linux CPU
  number, so using that as the "target" is not right. Instead, we sample
  the PIR register at boot on every processor and use that value
  subsequently when sending IPIs. We also use a per-cpu message mask
  rather than a global array, which should limit cache line contention.

  Note: we could use the CPU number in the device-tree instead of the
  PIR register, as they are supposed to be equivalent. This might prove
  useful if doorbells are to be used to kick CPUs out of FW at boot
  time, thus before we can sample the PIR. This is however not the case
  now, and using the PIR just works.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

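  A rough sketch of the boot-time sampling described above; the per-cpu
  variable name is an assumption, while SPRN_PIR and mfspr() are real:

      /* Sample PIR once per CPU at boot; the IPI send path later reads
       * this instead of the Linux CPU number. */
      DEFINE_PER_CPU(u32, dbell_tag);             /* hypothetical name */

      void doorbell_setup_this_cpu(void)
      {
              __get_cpu_var(dbell_tag) = mfspr(SPRN_PIR);
      }
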
* powerpc/book3e: Hack to get gdb moving along on Book3E 64-bit (Benjamin Herrenschmidt, 2010-07-09; 1 file, -2/+2)

  Our handling of debug interrupts on Book3E 64-bit is not quite the way
  it should be just yet. This is a workaround to let gdb work at least
  for now. We ensure that when context switching, we set the appropriate
  DBCR0 value for the new task. We also make sure that we turn off
  MSR[DE] within the kernel, and set it as part of the bits that get set
  when going back to userspace.

  In the long run, we will probably set the userspace DBCR0 on the
  exception exit code path and ensure we have some proper kernel value
  to set on the way into the kernel, a bit like ppc32 does, but that
  will take more work.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/book3e: mtmsr should not be mtmsrd on book3e 64-bit (Benjamin Herrenschmidt, 2010-07-09; 1 file, -1/+1)

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/numa: Use form 1 affinity to setup node distance (Anton Blanchard, 2010-07-09; 1 file, -0/+3)

  Form 1 affinity allows multiple entries in
  ibm,associativity-reference-points, which represent affinity domains
  in decreasing order of importance. The Linux concept of a node is
  always the first entry, but using the other values as an input to
  node_distance() allows the memory allocator to make better decisions
  on which node to go first when local memory has been exhausted.

  We keep things simple and create an array indexed by NUMA node, capped
  at 4 entries. Each time we look up an associativity property we
  initialise the array, which is overkill, but since we should only hit
  this path during boot it didn't seem worth adding a per-node valid
  bit.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

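  A sketch of the distance computation this enables, assuming the
  4-entry per-node array from the description and a double-per-level
  policy (names and the exact scaling are illustrative):

      #define MAX_DISTANCE_REF_POINTS 4
      static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];

      int __node_distance(int a, int b)
      {
              int i, distance = LOCAL_DISTANCE;

              for (i = 0; i < MAX_DISTANCE_REF_POINTS; i++) {
                      /* domains match at this level: stop widening */
                      if (distance_lookup_table[a][i] ==
                          distance_lookup_table[b][i])
                              break;
                      /* each mismatched affinity level doubles the distance */
                      distance *= 2;
              }
              return distance;
      }
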
* powerpc/pseries: Rename RAS_VECTOR_OFFSET to RTAS_VECTOR_EXTERNAL_INTERRUPT and move to rtas.h (Mark Nelson, 2010-07-09; 1 file, -0/+3)

  The RAS code has a #define, RAS_VECTOR_OFFSET, that's used in the
  check-exception RTAS call for the vector offset of the exception.

  We'll be using this same vector offset for the upcoming IO Event
  interrupts code (0x500), so let's move it to include/asm/rtas.h and
  call it RTAS_VECTOR_EXTERNAL_INTERRUPT.

  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Optimise per cpu accesses on 64bit (Anton Blanchard, 2010-07-09; 1 file, -3/+0)

  Now that we dynamically allocate the paca array, it takes an extra
  load whenever we want to access another cpu's paca. One place we do
  that a lot is per cpu variables. A simple example:

      DEFINE_PER_CPU(unsigned long, vara);

      unsigned long test4(int cpu)
      {
              return per_cpu(vara, cpu);
      }

  This takes 4 loads, 5 if you include the actual load of the per cpu
  variable:

      ld r11,-32760(r30)  # load address of paca pointer
      ld r9,-32768(r30)   # load link address of percpu variable
      sldi r3,r29,9       # get offset into paca (each entry is 512 bytes)
      ld r0,0(r11)        # load paca pointer
      add r3,r0,r3        # paca + offset
      ld r11,64(r3)       # load paca[cpu].data_offset
      ldx r3,r9,r11       # load per cpu variable

  If we remove the ppc64-specific per_cpu_offset(), we get the generic
  version, which indexes into a statically allocated array. This removes
  one load and one add:

      ld r11,-32760(r30)  # load address of __per_cpu_offset
      ld r9,-32768(r30)   # load link address of percpu variable
      sldi r3,r29,3       # get offset into __per_cpu_offset (each entry 8 bytes)
      ldx r11,r11,r3      # load __per_cpu_offset[cpu]
      ldx r3,r9,r11       # load per cpu variable

  Having all the offsets in one array also helps when iterating over a
  per cpu variable across a number of cpus, such as in the scheduler.
  Before, we would need to load one paca cacheline when calculating each
  per cpu offset. Now we have 16 (128 / sizeof(long)) per cpu offsets in
  each cacheline.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/iseries: Fix constant warning (Denis Kirjanov, 2010-07-09; 1 file, -1/+1)

  Fix smatch warning: constant 0x8000000000000000 is so big it is
  unsigned long.

  Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

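  The log doesn't show the hunk, but the usual fix for this class of
  warning is an explicit type suffix; a sketch with a hypothetical
  constant name:

      /* Before: smatch flags the bare literal, whose type is silently
       * unsigned long on 64-bit. */
      #define HV_FLAG_EXAMPLE 0x8000000000000000

      /* After: spell the type out so tools and readers agree. */
      #define HV_FLAG_EXAMPLE 0x8000000000000000UL
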
* powerpc/pseries: Partition hibernation support (Brian King, 2010-07-09; 1 file, -0/+1)

  Enables support for HMC-initiated partition hibernation. This is a
  firmware-assisted hibernation, since the firmware handles writing the
  memory out to disk, along with other partition information, so we just
  mimic suspend to ram.

  Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/pseries: Migration code reorganization / hibernation prep (Brian King, 2010-07-09; 2 files, -0/+11)

  Partition hibernation will use some of the same code as is currently
  used for Live Partition Migration. This patch further abstracts that
  code so that code outside of rtas.c can utilize it. It also changes
  the error field in the suspend-me data structure to be an atomic type,
  since it is set and checked on different cpus without any barriers or
  locking.

  Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Clean up obsolete code relating to decrementer and timebase (Paul Mackerras, 2010-07-09; 2 files, -10/+0)

  Since the decrementer and timekeeping code was moved over to using the
  generic clockevents and timekeeping infrastructure, several variables
  and functions have been obsolete and effectively unused. This deletes
  them.

  In particular, wakeup_decrementer() is no longer needed since the
  generic code reprograms the decrementer as part of the process of
  resuming the timekeeping code, which happens during sysdev resume.
  Thus the wakeup_decrementer calls in the suspend_enter methods for
  52xx platforms have been removed. The call in the powermac cpu
  frequency change code has been replaced by set_dec(1), which will
  cause a timer interrupt as soon as interrupts are enabled, and the
  generic code will then reprogram the decrementer with the correct
  value.

  This also simplifies the generic_suspend_en/disable_irqs functions and
  makes them static since they are not referenced outside time.c. The
  preempt_enable/disable calls are removed because the generic code has
  disabled all but the boot cpu at the point where these functions are
  called, so we can't be moved to another cpu.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Rework VDSO gettimeofday to prevent time going backwards (Paul Mackerras, 2010-07-09; 1 file, -0/+2)

  Currently it is possible for userspace to see the result of
  gettimeofday() going backwards by 1 microsecond, assuming that
  userspace is using the gettimeofday() in the VDSO. The VDSO
  gettimeofday() algorithm computes the time in "xsecs", which are units
  of 2^-20 seconds, or approximately 0.954 microseconds, using the
  algorithm

      now = (timebase - tb_orig_stamp) * tb_to_xs + stamp_xsec

  and then converts the time in xsecs to seconds and microseconds.

  The kernel updates the tb_orig_stamp and stamp_xsec values every tick
  in update_vsyscall(). If the length of the tick is not an integer
  number of xsecs, then some precision is lost in converting the current
  time to xsecs. For example, with CONFIG_HZ=1000, the tick is 1ms long,
  which is 1048.576 xsecs. That means that stamp_xsec will advance by
  either 1048 or 1049 on each tick. With the right conditions, it is
  possible for userspace to get (timebase - tb_orig_stamp) * tb_to_xs
  being 1049 if the kernel is slightly late in updating the
  vdso_datapage, and then for stamp_xsec to advance by 1048 when the
  kernel does update it, and for userspace to then see
  (timebase - tb_orig_stamp) * tb_to_xs being zero due to integer
  truncation. The result is that time appears to go backwards by 1
  microsecond.

  To fix this we change the VDSO gettimeofday to use a new field in the
  VDSO datapage which stores the nanoseconds part of the time as a
  fractional number of seconds in a 0.32 binary fraction format. (Or put
  another way, as a 32-bit number in units of 0.23283 ns.) This is
  convenient because we can use the mulhwu instruction to convert it to
  either microseconds or nanoseconds.

  Since it turns out that computing the time of day using this new field
  is simpler than either using stamp_xsec (as gettimeofday does) or
  stamp_xtime.tv_nsec (as clock_gettime does), this converts both
  gettimeofday and clock_gettime to use the new field. The existing
  __do_get_tspec function is converted to use the new field and take a
  parameter in r7 that indicates the desired resolution, 1,000,000 for
  microseconds or 1,000,000,000 for nanoseconds. The __do_get_xsec
  function is then unused and is deleted.

  The new algorithm is

      now = ((timebase - tb_orig_stamp) << 12) * tb_to_xs
            + (stamp_xtime_seconds << 32) + stamp_sec_fraction

  with 'now' in units of 2^-32 seconds. That is then converted to
  seconds and either microseconds or nanoseconds with

      seconds = now >> 32
      partseconds = ((now & 0xffffffff) * resolution) >> 32

  The 32-bit VDSO code also makes a further simplification: it ignores
  the bottom 32 bits of the tb_to_xs value, which is a 0.64 format
  binary fraction. Doing so gets rid of 4 multiply instructions.
  Assuming a timebase frequency of 1GHz or less and an update interval
  of no more than 10ms, the upper 32 bits of tb_to_xs will be at least
  4503599, so the error from ignoring the low 32 bits will be at most
  2.2ns, which is more than an order of magnitude less than the time
  taken to do gettimeofday or clock_gettime on our fastest processors,
  so there is no possibility of seeing inconsistent values due to this.

  This also moves update_gtod() down next to its only caller, and makes
  update_vsyscall use the time passed in via the wall_time argument
  rather than accessing xtime directly. At present, wall_time always
  points to xtime, but that could change in future.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

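  A plain-C sketch of the 0.32 fixed-point split described above (the
  VDSO does this with mulhwu in assembly; the function name is
  illustrative):

      #include <stdint.h>

      /* Split a 64-bit count of 2^-32 seconds into whole seconds and a
       * sub-second part at the requested resolution: 1000000 for
       * microseconds, 1000000000 for nanoseconds. */
      static void split_time(uint64_t now, uint32_t resolution,
                             uint32_t *sec, uint32_t *part)
      {
              *sec  = now >> 32;
              *part = ((now & 0xffffffffULL) * resolution) >> 32;
      }

  For example, a fraction of 0x80000000 (half a second) with resolution
  1000000 yields 500000 microseconds; the truncation is always
  downwards, so successive readings cannot step backwards.
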
* Merge commit 'paulus-perf/master' into next (Benjamin Herrenschmidt, 2010-07-09; 5 files, -0/+95)

* powerpc, hw_breakpoint: Tell generic code we have no instruction breakpoints (Paul Mackerras, 2010-06-30; 1 file, -4/+1)

  At present, hw_breakpoint_slots() returns 1 regardless of what type of
  breakpoint is specified in the type argument. Since we don't define
  CONFIG_HAVE_MIXED_BREAKPOINTS_REGS, there are separate values for
  TYPE_INST and TYPE_DATA, and hw_breakpoint_slots() returns 1 for both,
  effectively advertising instruction breakpoint support which doesn't
  exist. This fixes it by making hw_breakpoint_slots() return 1 for
  TYPE_DATA and 0 for TYPE_INST.

  This moves hw_breakpoint_slots() from the powerpc hw_breakpoint.h to
  hw_breakpoint.c because the definitions of TYPE_INST and TYPE_DATA
  aren't available in <asm/hw_breakpoint.h>. They are defined in
  <linux/hw_breakpoint.h> but we can't include that header in
  <asm/hw_breakpoint.h>, and nor can we rely on <linux/hw_breakpoint.h>
  being included before <asm/hw_breakpoint.h>. Since
  hw_breakpoint_slots() is only called at boot time, there is no
  performance impact from making it a real function rather than a
  static inline.

  Signed-off-by: Paul Mackerras <paulus@samba.org>

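  The described behaviour is simple enough to sketch (the single slot
  reflects the one DABR; exact constants are assumptions):

      /* Advertise one data-breakpoint slot and no instruction slots. */
      int hw_breakpoint_slots(int type)
      {
              if (type == TYPE_DATA)
                      return 1;       /* the single DABR-backed slot */
              return 0;               /* no instruction breakpoints */
      }
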
* powerpc, hw_breakpoint: Discard extraneous interrupt due to accesses outside symbol length (K.Prasad, 2010-06-22; 1 file, -0/+1)

  Often the requested breakpoint length can be less than the fixed
  breakpoint length, i.e. the 8 bytes supported by PowerPC 64-bit server
  (Book III S) processors. This could lead to extraneous interrupts,
  resulting in false breakpoint notifications. This detects and discards
  such interrupts for non-ptrace requests. We don't change ptrace
  behaviour, to avoid breaking compatibility.

  [Suggestion from Paul Mackerras <paulus@samba.org> to add a new flag
  in 'struct arch_hw_breakpoint' to identify extraneous interrupts]

  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>

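  A sketch of the filtering this implies, with assumed field names on
  struct arch_hw_breakpoint (the hardware matches a whole 8-byte
  doubleword, so the faulting address is compared against the narrower
  request):

      /* Return true if the DABR hit falls inside the range the user
       * actually asked for; otherwise the interrupt is extraneous. */
      static int dar_within_range(unsigned long dar,
                                  struct arch_hw_breakpoint *info)
      {
              return info->address <= dar &&
                     dar < info->address + info->len;
      }
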
* powerpc, hw_breakpoint: Enable hw-breakpoints while handling intervening signals (K.Prasad, 2010-06-22; 1 file, -0/+3)

  A signal delivered between a hw_breakpoint_handler() and the
  single_step_dabr_instruction() will not have the breakpoint active
  while the signal handler is running -- the signal delivery will set up
  a new MSR value which will not have MSR_SE set, so we won't get the
  single-step interrupt until and unless the signal handler returns
  (which it may never do).

  To fix this, we restore the breakpoint when delivering a signal -- we
  clear the MSR_SE bit and set the DABR again. If the signal handler
  returns, the DABR interrupt will occur again when the instruction that
  we were originally trying to single-step gets re-executed.

  [Paul Mackerras <paulus@samba.org> pointed out the need to do this.]

  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>

* powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors (K.Prasad, 2010-06-22; 3 files, -0/+85)

  Implement perf-events based hw-breakpoint interfaces for PowerPC
  64-bit server (Book III S) processors. This allows access to a given
  location to be used as an event that can be counted or profiled by the
  perf_events subsystem.

  This is done using the DABR (data breakpoint register), which can also
  be used for process debugging via ptrace. When perf_event
  hw_breakpoint support is configured in, the perf_event subsystem
  manages the DABR and arbitrates access to it, and ptrace then creates
  a perf_event when it is requested to set a data breakpoint.

  [Adopted suggestions from Paul Mackerras <paulus@samba.org> to
  - emulate_step() all system-wide breakpoints and single-step only the
    per-task breakpoints
  - perform arch-specific cleanup before unregistration through
    arch_unregister_hw_breakpoint()]

  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>

* powerpc: Emulate most Book I instructions in emulate_step() (Paul Mackerras, 2010-06-22; 2 files, -0/+9)

  This extends the emulate_step() function to handle a large proportion
  of the Book I instructions implemented on current 64-bit server
  processors. The aim is to handle all the load and store instructions
  used in the kernel, plus all of the instructions that appear between
  l[wd]arx and st[wd]cx., so this handles the Altivec/VMX lvx and stvx
  and the VSX lxvd2x and stxvd2x instructions (implemented in POWER7).

  The new code can emulate user mode instructions, and checks the
  effective address for a load or store if the saved state is for user
  mode. It doesn't handle little-endian mode at present.

  For floating-point, Altivec/VMX and VSX instructions, it checks that
  the saved MSR has the enable bit for the relevant facility set, and if
  so, assumes that the FP/VMX/VSX registers contain valid state, and
  does loads or stores directly to/from the FP/VMX/VSX registers, using
  assembly helpers in ldstfp.S.

  Instructions supported now include:
  * Loads and stores, including some but not all VMX and VSX
    instructions, and lmw/stmw
  * Atomic loads and stores (l[dw]arx, st[dw]cx.)
  * Arithmetic instructions (add, subtract, multiply, divide, etc.)
  * Compare instructions
  * Rotate and mask instructions
  * Shift instructions
  * Logical instructions (and, or, xor, etc.)
  * Condition register logical instructions
  * mtcrf, cntlz[wd], exts[bhw]
  * isync, sync, lwsync, ptesync, eieio
  * Cache operations (dcbf, dcbst, dcbt, dcbtst)

  The overflow-checking arithmetic instructions are not included, but
  they appear never to be used in C code.

  This uses decimal values for the minor opcodes in the switch
  statements because that is what appears in the Power ISA
  specification, thus it is easier to check that they are correct if
  they are in decimal.

  If this is used to single-step an instruction where a data breakpoint
  interrupt occurred, then there is the possibility that the instruction
  is a lwarx or ldarx. In that case we have to be careful not to lose
  the reservation until we get to the matching st[wd]cx., or we'll never
  make forward progress. One alternative is to try to arrange that we
  can return from interrupts and handle data breakpoint interrupts
  without losing the reservation, which means not using any spinlocks,
  mutexes, or atomic ops (including bitops). That seems rather fragile.
  The other alternative is to emulate the larx/stcx and all the
  instructions in between. This is why this commit adds support for a
  wide range of integer instructions.

  Signed-off-by: Paul Mackerras <paulus@samba.org>

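  As a flavour of the decimal minor-opcode convention mentioned above, a
  trimmed sketch of the dispatch (266 and 40 are the ISA's decimal
  extended opcodes for add and subf; the surrounding structure is
  illustrative):

      switch (instr >> 26) {                      /* primary opcode */
      case 31:
              switch ((instr >> 1) & 0x3ff) {     /* extended opcode */
              case 266:   /* add */
                      regs->gpr[rd] = regs->gpr[ra] + regs->gpr[rb];
                      break;
              case 40:    /* subf */
                      regs->gpr[rd] = regs->gpr[rb] - regs->gpr[ra];
                      break;
              }
              break;
      }
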
* powerpc: Fix userspace build of ptrace.h (Sam Ravnborg, 2010-07-08; 1 file, -18/+14)

  Build of ptrace.h failed for assembly because it pulls in stdint.h.
  Use exportable types (__u32, __u64) to avoid the dependency on
  stdint.h.

  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Cc: Andrey Volkov <avolkov@varma-el.com>
  Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Unconditionally enable irq stacks (Christoph Hellwig, 2010-06-15; 1 file, -6/+0)

  Irq stacks provide an essential protection from stack overflows
  through external interrupts, at the cost of two additional stacks per
  CPU. Enable them unconditionally to simplify the kernel build and
  prevent people from accidentally disabling them.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Move kdump default base address to 64MB on 64bit (Anton Blanchard, 2010-06-15; 1 file, -1/+10)

  We are seeing boot failures on some System p machines when using the
  kdump crashkernel= boot option. The default kdump base address is
  32MB, so if we reserve 256MB for kdump then we reserve all of the RMO
  except the first 32MB.

  We really want kdump to reserve some memory in the RMO and most of it
  elsewhere, but that will require more significant changes. For now we
  can shift the default base address to 64MB when CONFIG_PPC64 and
  CONFIG_RELOCATABLE are set. This isn't quite correct since what we
  really care about is whether the kdump kernel is relocatable, but we
  already make the assumption that the base kernel and kdump kernel have
  the same CONFIG_RELOCATABLE setting, eg:

      #ifndef CONFIG_RELOCATABLE
              if (crashk_res.start != KDUMP_KERNELBASE)
                      printk("Crash kernel location must be 0x%x\n",
                              KDUMP_KERNELBASE);
      ...

  RTAS is instantiated towards the top of our RMO, so if we were to go
  any higher we risk not having enough RMO memory for the kdump kernel
  on boxes with a 128MB RMO.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

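  The change described amounts to something like the following in
  kdump.h (a sketch; the guard and hex values are inferred from the
  32MB/64MB figures above):

      /* Default base for the kdump kernel: 64MB when a relocatable
       * 64-bit kernel lets us keep clear of the low RMO, 32MB
       * otherwise. */
      #if defined(CONFIG_PPC64) && defined(CONFIG_RELOCATABLE)
      #define KDUMP_KERNELBASE        0x4000000       /* 64MB */
      #else
      #define KDUMP_KERNELBASE        0x2000000       /* 32MB */
      #endif
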
* powerpc/macio: Fix probing of macio devices by using the right of match table (Benjamin Herrenschmidt, 2010-06-02; 1 file, -4/+0)

  Grant's patches added an of match table to struct device_driver.
  However, while he changed the macio device code to use that, he left
  the match table pointer in struct macio_driver and didn't update
  drivers to use the "new" one, thus breaking the probing.

  This completes the change by moving all drivers to set up the "new"
  one, removing all traces of the old one. While at it (since it changes
  the exact same locations), I also remove two other duplicates from
  struct driver, which are the name and owner fields.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc (Linus Torvalds, 2010-06-01; 3 files, -10/+37)

  * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
    powerpc: Don't export cvt_fd & _df when CONFIG_PPC_FPU is not set
    powerpc/44x: icon: select SM502 and frame buffer console support
    powerpc/85xx: Add P1021MDS board support
    powerpc/85xx: Change MPC8572DS camp dtses for MSI sharing
    powerpc/fsl_msi: add removal path and probe failing path
    powerpc/fsl_msi: enable msi sharing through AMP OSes
    powerpc/fsl_msi: enable msi allocation in all banks
    powerpc/fsl_msi: fix the conflict of virt_msir's chip_data
    powerpc/fsl_msi: Add multiple MSI bank support
    powerpc/kexec: Add support for FSL-BookE
    powerpc/fsl-booke: Move the entry setup code into a seperate file
    powerpc/fsl-booke: fix the case where we are not in the first page
    powerpc/85xx: Enable support for ports 3 and 4 on 8548 CDS
    powerpc/fsl-booke: Add hibernation support for FSL BookE processors
    powerpc/e500mc: Implement machine check handler.
    powerpc/44x: Add basic ICON PPC440SPe board support
    powerpc/44x: Fix UART clocks on 440SPe
    powerpc/44x: Add reset-type to katmai.dts
    powerpc/44x: Adding PCI-E support for PowerPC 460SX based SOC.

* Merge commit 'kumar/next' into next (Benjamin Herrenschmidt, 2010-05-31; 3 files, -10/+37)

  Conflicts:
      arch/powerpc/sysdev/fsl_msi.c

* powerpc/kexec: Add support for FSL-BookE (Sebastian Andrzej Siewior, 2010-05-24; 1 file, -0/+13)

  This adds support for kexec on FSL-BookE, where the MMU can not simply
  be switched off. The code borrows the initial MMU-setup code to create
  the identical mapping. The only difference to the original boot code
  is the size of the mapping(s) and the executable address. The kexec
  code maps the first 2 GiB of memory in 256 MiB steps. This should work
  also on e500v1 boxes.

  SMP support is still not available.

  (Kumar: Added minor change to build to ifdef CONFIG_PPC_STD_MMU_64
  some code that was PPC64 specific)

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

* powerpc/e500mc: Implement machine check handler. (Scott Wood, 2010-05-21; 2 files, -10/+24)

  Most of the MCSR bit assignments are different in e500mc versus e500,
  and they are now write-one-to-clear.

  Some e500mc machine check conditions are made recoverable (as long as
  they aren't stuck on), most notably L1 instruction cache parity
  errors.

  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

* Merge branch 'for-35' of git://repo.or.cz/linux-kbuild (Linus Torvalds, 2010-06-01; 2 files, -9/+1)

  * 'for-35' of git://repo.or.cz/linux-kbuild: (81 commits)
    kbuild: Revert part of e8d400a to resolve a conflict
    kbuild: Fix checking of scm-identifier variable
    gconfig: add support to show hidden options that have prompts
    menuconfig: add support to show hidden options which have prompts
    gconfig: remove show_debug option
    gconfig: remove dbg_print_ptype() and dbg_print_stype()
    kconfig: fix zconfdump()
    kconfig: some small fixes
    add random binaries to .gitignore
    kbuild: Include gen_initramfs_list.sh and the file list in the .d file
    kconfig: recalc symbol value before showing search results
    .gitignore: ignore *.lzo files
    headerdep: perlcritic warning
    scripts/Makefile.lib: Align the output of LZO
    kbuild: Generate modules.builtin in make modules_install
    Revert "kbuild: specify absolute paths for cscope"
    kbuild: Do not unnecessarily regenerate modules.builtin
    headers_install: use local file handles
    headers_check: fix perl warnings
    export_report: fix perl warnings
    ...

* Rename .data.read_mostly to .data..read_mostly. (Denys Vlasenko, 2010-03-03; 1 file, -1/+1)

  Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
  Signed-off-by: Michal Marek <mmarek@suse.cz>

* powerpc: remove unused __page_aligned definition. (Tim Abbott, 2010-03-03; 1 file, -8/+0)

  There is already an architecture-independent __page_aligned_data macro
  for this purpose, so removing the powerpc-specific macro should be
  harmless.

  Signed-off-by: Tim Abbott <tabbott@ksplice.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Sam Ravnborg <sam@ravnborg.org>
  Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
  Signed-off-by: Michal Marek <mmarek@suse.cz>

* asm-generic: remove ARCH_HAS_SG_CHAIN in scatterlist.h (FUJITA Tomonori, 2010-05-27; 1 file, -0/+1)

  There are more architectures that don't support ARCH_HAS_SG_CHAIN than
  those that support it. This removes ARCH_HAS_SG_CHAIN from
  asm-generic/scatterlist.h and lets architectures define it. That is
  clearer than defining ARCH_HAS_SG_CHAIN in asm-generic/scatterlist.h
  and undefining it in architectures that don't support it.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* powerpc: use asm-generic/scatterlist.h (FUJITA Tomonori, 2010-05-27; 1 file, -28/+1)

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Revert "endian: #define __BYTE_ORDER" (Linus Torvalds, 2010-05-26; 1 file, -0/+6)

  This reverts commit b3b77c8caef1750ebeea1054e39e358550ea9f55, which
  was also totally broken (see commit 0d2daf5cc858 that reverted the
  crc32 version of it). As reported by Stephen Rothwell, it causes
  problems on big-endian machines:

      > In file included from fs/jfs/jfs_types.h:33,
      >                  from fs/jfs/jfs_incore.h:26,
      >                  from fs/jfs/file.c:22:
      > fs/jfs/endian24.h:36:101: warning: "__LITTLE_ENDIAN" is not defined

  The kernel has never had that crazy "__BYTE_ORDER == __LITTLE_ENDIAN"
  model. It's not how we do things, and it isn't how we _should_ do
  things. So don't go there.

  Requested-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Merge branch 'next-spi' of git://git.secretlab.ca/git/linux-2.6 (Linus Torvalds, 2010-05-25; 1 file, -0/+1)

  * 'next-spi' of git://git.secretlab.ca/git/linux-2.6:
    spi/xilinx: Fix compile error
    spi/davinci: Fix clock prescale factor computation
    spi: move bitbang txrx utility functions to private header
    spi/mpc5121: Add SPI master driver for MPC5121 PSC
    powerpc/mpc5121: move PSC FIFO memory init to platform code
    spi/ep93xx: implemented driver for Cirrus EP93xx SPI controller
    Documentation/spi/* compile warning fix
    spi/omap2_mcspi: Check params before dereference or use
    spi/omap2_mcspi: add turbo mode support
    spi/omap2_mcspi: change default DMA_MIN_BYTES value to 160
    spi/pl022: fix stop queue procedure
    spi/pl022: add support for the PL023 derivate
    spi/pl022: fix up differences between ARM and ST versions
    spi/spi_mpc8xxx: Do not use map_tx_dma to unmap rx_dma
    spi/spi_mpc8xxx: Fix QE mode Litte Endian
    spi/spi_mpc8xxx: fix potential memory corruption.

* Merge remote branch 'origin' into secretlab/next-spi (Grant Likely, 2010-05-25; 36 files, -156/+711)

* spi/mpc5121: Add SPI master driver for MPC5121 PSC (Anatolij Gustschin, 2010-05-25; 1 file, -0/+1)

  Signed-off-by: John Rigby <jcrigby@gmail.com>
  Signed-off-by: Anatolij Gustschin <agust@denx.de>
  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

* endian: #define __BYTE_ORDER (Joakim Tjernlund, 2010-05-25; 1 file, -6/+0)

  Linux does not define __BYTE_ORDER in its endian header files, which
  makes some header files bend over backwards to get at the current
  endianness. Let's #define __BYTE_ORDER in big_endian.h/little_endian.h
  to make it easier for header files that are used in user space too.

  In userspace the convention is that
  1. _both_ __LITTLE_ENDIAN and __BIG_ENDIAN are defined,
  2. you have to test for e.g. __BYTE_ORDER == __BIG_ENDIAN.

  Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

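  A minimal sketch of the userspace convention the patch follows (this
  is the glibc <endian.h> model, shown for illustration; the macro name
  SWAB_NEEDED is invented):

      #include <endian.h>

      /* Both macros always exist; code tests __BYTE_ORDER against them
       * rather than checking whether one of them is defined. */
      #if __BYTE_ORDER == __BIG_ENDIAN
      #define SWAB_NEEDED 0
      #else
      #define SWAB_NEEDED 1
      #endif
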
* Merge remote branch 'origin' into secretlab/next-devicetree (Grant Likely, 2010-05-22; 35 files, -177/+722)

  Merging in current state of Linus' tree to deal with merge conflicts
  and build failures in vio.c after merge.

  Conflicts:
      drivers/i2c/busses/i2c-cpm.c
      drivers/i2c/busses/i2c-mpc.c
      drivers/net/gianfar.c

  Also fixed up one line in arch/powerpc/kernel/vio.c to use the correct
  node pointer.

  Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

* Merge git://git.infradead.org/iommu-2.6 (Linus Torvalds, 2010-05-21; 1 file, -3/+3)

  * git://git.infradead.org/iommu-2.6:
    intel-iommu: Set a more specific taint flag for invalid BIOS DMAR tables
    intel-iommu: Combine the BIOS DMAR table warning messages
    panic: Add taint flag TAINT_FIRMWARE_WORKAROUND ('I')
    panic: Allow warnings to set different taint flags
    intel-iommu: intel_iommu_map_range failed at very end of address space
    intel-iommu: errors with smaller iommu widths
    intel-iommu: Fix boot inside 64bit virtualbox with io-apic disabled
    intel-iommu: use physfn to search drhd for VF
    intel-iommu: Print out iommu seq_id
    intel-iommu: Don't complain that ACPI_DMAR_SCOPE_TYPE_IOAPIC is not supported
    intel-iommu: Avoid global flushes with caching mode.
    intel-iommu: Use correct domain ID when caching mode is enabled
    intel-iommu mistakenly uses offset_pfn when caching mode is enabled
    intel-iommu: use for_each_set_bit()
    intel-iommu: Fix section mismatch dmar_ir_support() uses dmar_tbl.

* panic: Allow warnings to set different taint flags (Ben Hutchings, 2010-05-19; 1 file, -3/+3)

  WARN() is used in some places to report firmware or hardware bugs that
  are then worked around. These bugs do not affect the stability of the
  kernel and should not set the flag for TAINT_WARN. To allow for this,
  add WARN_TAINT() and WARN_TAINT_ONCE() macros that take a taint number
  as argument.

  Architectures that implement warnings using trap instructions instead
  of calls to warn_slowpath_*() now implement __WARN_TAINT(taint)
  instead of __WARN().

  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Acked-by: Helge Deller <deller@gmx.de>
  Tested-by: Paul Mundt <lethal@linux-sh.org>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

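  Usage follows the existing WARN() pattern; a sketch of a caller
  papering over a firmware bug (the condition and message are invented
  for illustration):

      /* Taint with TAINT_FIRMWARE_WORKAROUND instead of TAINT_WARN when
       * the kernel works around a broken BIOS table. */
      WARN_TAINT(dmar_table_broken, TAINT_FIRMWARE_WORKAROUND,
                 "BIOS DMAR table invalid, applying workaround\n");
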
* Merge branch 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2010-05-21; 15 files, -117/+490)

  * 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (269 commits)
    KVM: x86: Add missing locking to arch specific vcpu ioctls
    KVM: PPC: Add missing vcpu_load()/vcpu_put() in vcpu ioctls
    KVM: MMU: Segregate shadow pages with different cr0.wp
    KVM: x86: Check LMA bit before set_efer
    KVM: Don't allow lmsw to clear cr0.pe
    KVM: Add cpuid.txt file
    KVM: x86: Tell the guest we'll warn it about tsc stability
    x86, paravirt: don't compute pvclock adjustments if we trust the tsc
    x86: KVM guest: Try using new kvm clock msrs
    KVM: x86: export paravirtual cpuid flags in KVM_GET_SUPPORTED_CPUID
    KVM: x86: add new KVMCLOCK cpuid feature
    KVM: x86: change msr numbers for kvmclock
    x86, paravirt: Add a global synchronization point for pvclock
    x86, paravirt: Enable pvclock flags in vcpu_time_info structure
    KVM: x86: Inject #GP with the right rip on efer writes
    KVM: SVM: Don't allow nested guest to VMMCALL into host
    KVM: x86: Fix exception reinjection forced to true
    KVM: Fix wallclock version writing race
    KVM: MMU: Don't read pdptrs with mmu spinlock held in mmu_alloc_roots
    KVM: VMX: enable VMXON check with SMX enabled (Intel TXT)
    ...

* KVM: PPC: Enable native paired singles (Alexander Graf, 2010-05-17; 1 file, -0/+1)

  When we're on a paired-single capable host, we can just always enable
  paired singles and expose them to the guest directly.

  This approach breaks when multiple VMs run and access PS concurrently,
  but this should suffice until we get a proper framework for it in
  Linux.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Improve split mode (Alexander Graf, 2010-05-17; 1 file, -5/+4)

  When in split mode, instruction relocation and data relocation are not
  equal. So far we implemented this mode by reserving a special
  pseudo-VSID for the two cases and flushing all PTEs when going into
  split mode, which is slow. Unfortunately, 32-bit Linux and Mac OS X
  use split mode extensively.

  So to not slow down things too much, I came up with a different idea:
  mark the split mode with a bit in the VSID and then treat it like any
  other segment. This means we can just flush the shadow segment cache,
  but keep the PTEs intact.

  I verified that this works with ppc32 Linux and Mac OS X 10.4 guests
  and does speed them up.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Convert u64 -> ulong (Alexander Graf, 2010-05-17; 2 files, -5/+5)

  There are some pieces in the code that I overlooked that still use
  u64s instead of longs. This slows down 32-bit hosts unnecessarily, so
  let's just move them to ulong.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Add SVCPU to Book3S_32 (Alexander Graf, 2010-05-17; 1 file, -0/+3)

  We need to keep the pointer to the shadow vcpu somewhere accessible
  from within really early interrupt code. The best fit I found was the
  thread struct, as that resides in an SPRG.

  So let's put a pointer to the shadow vcpu in the thread struct and add
  an asm-offset so we can find it.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Extract MMU init (Alexander Graf, 2010-05-17; 1 file, -0/+1)

  The host shadow MMU code needs to get initialized. It needs to fetch a
  segment it can use to put shadow PTEs into. That initialization code
  was in generic code, which is icky. Let's move it over to the
  respective MMU file.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Use now shadowed vcpu fields (Alexander Graf, 2010-05-17; 2 files, -11/+3)

  The shadow vcpu now contains some fields we don't use from the vcpu
  anymore. Access to them happens using inline functions that happily
  use the shadow vcpu fields. So let's now ifdef them out to booke only
  and add asm-offsets.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* PPC: Add STLU (Alexander Graf, 2010-05-17; 1 file, -0/+2)

  For assembly code there are several "long" load and store defines
  already. The one that's missing is the typical stack store, stdu/stwu.

  So let's add that define as well, making my KVM code happy.

  CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Avi Kivity <avi@redhat.com>

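  The define plausibly mirrors the existing word-size helpers in
  asm-compat.h; a sketch under that assumption:

      /* Store with update, sized to the native register width, next to
       * the existing "long" load/store defines. */
      #ifdef __powerpc64__
      #define PPC_STLU        stringify_in_c(stdu)
      #else
      #define PPC_STLU        stringify_in_c(stwu)
      #endif
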
* KVM: PPC: Use CONFIG_PPC_BOOK3S define (Alexander Graf, 2010-05-17; 1 file, -4/+4)

  Upstream recently added a new name for PPC64: Book3S_64. So instead of
  using CONFIG_PPC64 we should use CONFIG_PPC_BOOK3S consistently. That
  makes understanding the code easier (I hope).

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>

* KVM: PPC: Use KVM_BOOK3S_HANDLER (Alexander Graf, 2010-05-17; 2 files, -3/+3)

  So far we had a lot of conditional code on
  CONFIG_KVM_BOOK3S_64_HANDLER. As we're moving towards common code
  between 32 and 64 bits, most of these ifdefs can be moved to a more
  generic define, called CONFIG_KVM_BOOK3S_HANDLER.

  This patch adds the new generic config option and moves ifdefs over.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>