path: root/arch/s390/kernel/entry64.S
* s390: fix system call restart after inferior call  (Martin Schwidefsky, 2013-09-30; 1 file, -0/+1)
    Git commit 616498813b11ffef "s390: system call path micro optimization" introduced a regression in regard to system call restarting and inferior function calls via the ptrace interface. The pointer to the system call table needs to be loaded in sysc_sigpending if do_signal returns with TIF_SYSCALL set after it restored a system call context.
    Cc: stable@vger.kernel.org # 3.10+
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/time: return with irqs disabled from psw_idle  (Martin Schwidefsky, 2013-08-28; 1 file, -1/+1)
    Modify the psw_idle waiting logic in entry[64].S to return with interrupts disabled. This avoids potential issues with udelay and interrupt loops as interrupts are not reenabled after clock comparator interrupts.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: convert interrupt handling to use generic hardirq  (Martin Schwidefsky, 2013-08-22; 1 file, -1/+8)
    With the introduction of PCI it became apparent that s390 should convert to generic hardirqs as too many drivers do not have the correct dependency for GENERIC_HARDIRQS. On the architecture level s390 does not have irq lines. It has external interrupts, I/O interrupts and adapter interrupts. This patch hard-codes all external interrupts as irq #1, all I/O interrupts as irq #2 and all adapter interrupts as irq #3. The additional information from the lowcore associated with the interrupt is stored in the pt_regs of the interrupt frame, where the interrupt handler can pick it up.
    For PCI/MSI interrupts the adapter interrupt handler scans the relevant bit fields and calls generic_handle_irq with the virtual irq number for the MSI interrupt.
    Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
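The adapter-interrupt fan-out described above can be pictured with a minimal C sketch. The indicator array, its size and the aisb_to_virq() mapping helper below are invented for illustration and are not the actual s390 code; only generic_handle_irq() and the bit helpers are regular kernel APIs.

```c
#include <linux/bitops.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>

#define AISB_BITS 256					/* hypothetical indicator size */
static unsigned long aisb[AISB_BITS / BITS_PER_LONG];	/* adapter-interruption indicator bits */

/* hypothetical mapping from an indicator bit to the virtual MSI irq number */
static unsigned int aisb_to_virq(unsigned int bit)
{
	return bit + 1;
}

/* one "adapter interrupt" (irq #3 in the scheme above) fans out to many MSI irqs */
static void adapter_irq_fanout(void)
{
	unsigned int bit;

	for_each_set_bit(bit, aisb, AISB_BITS) {
		clear_bit(bit, aisb);
		generic_handle_irq(aisb_to_virq(bit));
	}
}
```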
* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 2013-07-03; 1 file, -42/+39)
    Pull KVM fixes from Paolo Bonzini:
     "On the x86 side, there are some optimizations and documentation updates. The big ARM/KVM change for 3.11, support for AArch64, will come through Catalin Marinas's tree. s390 and PPC have misc cleanups and bugfixes"
    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (87 commits)
      KVM: PPC: Ignore PIR writes
      KVM: PPC: Book3S PR: Invalidate SLB entries properly
      KVM: PPC: Book3S PR: Allow guest to use 1TB segments
      KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
      KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
      KVM: PPC: Book3S PR: Fix proto-VSID calculations
      KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL
      KVM: Fix RTC interrupt coalescing tracking
      kvm: Add a tracepoint write_tsc_offset
      KVM: MMU: Inform users of mmio generation wraparound
      KVM: MMU: document fast invalidate all mmio sptes
      KVM: MMU: document fast invalidate all pages
      KVM: MMU: document fast page fault
      KVM: MMU: document mmio page fault
      KVM: MMU: document write_flooding_count
      KVM: MMU: document clear_spte_count
      KVM: MMU: drop kvm_mmu_zap_mmio_sptes
      KVM: MMU: init kvm generation close to mmio wrap-around value
      KVM: MMU: add tracepoint for check_mmio_spte
      KVM: MMU: fast invalidate all mmio sptes
      ...
| * KVM: s390,perf: Detect if perf samples belong to KVM host or guest  (Heinz Graalfs, 2013-06-17; 1 file, -0/+1)
    This patch is based on an original patch of David Hildenbrand. The perf core implementation calls architecture specific code in order to ask for specific information for a particular sample:
    perf_instruction_pointer(): When perf core code asks for the instruction pointer, architecture specific code must detect if a KVM guest was running when the sample was taken. A sample can be associated with a KVM guest when the PSW supervisor state bit is set and the PSW instruction pointer part contains the address of 'sie_exit'. A KVM guest's instruction pointer information is then retrieved via the gpsw entry pointed to by the sie control block.
    perf_misc_flags(): perf core code calls this function in order to associate the kernel vs. user state information with a particular sample. Architecture specific code must also first detect if a KVM guest was running at the time the sample was taken.
    Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
    Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
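A rough C sketch of the two hooks described above: sample_took_place_in_sie() and sie_guest_psw_addr() are made-up placeholders for the architecture-specific test (PSW supervisor state plus a PSW address of 'sie_exit'), while the PERF_RECORD_MISC_* values are the ordinary perf flags.

```c
#include <linux/perf_event.h>
#include <linux/ptrace.h>

/* placeholder: the real check inspects the PSW state and the 'sie_exit' address */
static bool sample_took_place_in_sie(struct pt_regs *regs)
{
	return false;
}

/* placeholder: the real code reads the guest PSW (gpsw) from the sie control block */
static unsigned long sie_guest_psw_addr(struct pt_regs *regs)
{
	return 0;
}

unsigned long perf_instruction_pointer(struct pt_regs *regs)
{
	if (sample_took_place_in_sie(regs))
		return sie_guest_psw_addr(regs);
	return instruction_pointer(regs);
}

unsigned long perf_misc_flags(struct pt_regs *regs)
{
	if (sample_took_place_in_sie(regs))
		return PERF_RECORD_MISC_GUEST_KERNEL;	/* guest user/kernel split would come from the guest PSW */
	return user_mode(regs) ? PERF_RECORD_MISC_USER : PERF_RECORD_MISC_KERNEL;
}
```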
| * s390/kvm: avoid automatic sie reentry  (Martin Schwidefsky, 2013-05-21; 1 file, -43/+33)
    Do not automatically restart the sie instruction in entry64.S after an interrupt, return to the caller with a reason code instead. That allows RCU and other conditions to be dealt with in C code.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * s390/kvm: Provide a way to prevent reentering SIE  (Christian Borntraeger, 2013-05-21; 1 file, -1/+3)
    Let's provide functions to prevent KVM from reentering SIE and to kick cpus out of SIE. We cannot use the common kvm_vcpu_kick code, since we need to kick out guests in places that hold architecture specific locks (e.g. pgste lock) which might be necessary on the other cpus - so no waiting is possible. So let's provide a bit in a private field of the sie control block that acts as a gate keeper, after we claimed we are in SIE. Please note that we do not reuse prog0c, since we want to access that bit without atomic ops.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * s390/kvm: Mark if a cpu is in SIE  (Christian Borntraeger, 2013-05-21; 1 file, -3/+7)
    Let's track in a private bit if the sie control block is active. We want to track this as closely as possible, so we also have to instrument the interrupt and program check handler. Let's use the existing HANDLE_SIE_INTERCEPT macro.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
* | s390/irq: store interrupt information in pt_regs  (Martin Schwidefsky, 2013-06-26; 1 file, -4/+12)
    Copy the interrupt parameters from the lowcore to the pt_regs structure in entry[64].S and reduce the arguments of the low level interrupt handler to the pt_regs pointer only. In addition move the test-pending-interrupt loop from do_IRQ to entry[64].S to make sure that interrupt information is always delivered via pt_regs.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: system call path micro optimization  (Martin Schwidefsky, 2013-04-26; 1 file, -7/+2)
    Add a pointer to the system call table to the thread_info structure. The TIF_31BIT bit is set or cleared by SET_PERSONALITY exactly once for the lifetime of a process. With the pointer to the correct system call table in thread_info, the system call path in entry64.S can drop the check for TIF_31BIT, which saves a couple of instructions.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
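The idea can be pictured in plain C: the table pointer is chosen once per process, so the per-syscall path becomes a single indexed load. Everything below (structure layout, table contents, helper names) is simplified for illustration; the real tables live in entry64.S.

```c
/* stand-ins for the real tables generated in entry64.S */
static const unsigned long sys_call_table_64[3] = { 0xa0, 0xa1, 0xa2 };
static const unsigned long sys_call_table_31[3] = { 0xb0, 0xb1, 0xb2 };

struct thread_info_sketch {
	unsigned long flags;			/* TIF_31BIT lives here */
	const unsigned long *sys_call_table;	/* new: chosen once, used on every call */
};

/* done by SET_PERSONALITY, i.e. once for the lifetime of the process */
static void pick_syscall_table(struct thread_info_sketch *ti, int is_31bit)
{
	ti->sys_call_table = is_31bit ? sys_call_table_31 : sys_call_table_64;
}

/* what the hot path effectively does now: no TIF_31BIT test per system call */
static unsigned long syscall_slot(const struct thread_info_sketch *ti, unsigned int nr)
{
	return ti->sys_call_table[nr];
}
```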
* s390: lowcore stack pointer offsets  (Martin Schwidefsky, 2013-04-26; 1 file, -20/+14)
    Store the stack pointers in the lowcore for the kernel stack, the async stack and the panic stack with the offset required for the first user. This avoids an unnecessary add instruction on the system call path.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: critical section cleanup vs. machine checks  (Martin Schwidefsky, 2013-03-05; 1 file, -2/+3)
    The current machine check code uses the registers stored by the machine in the lowcore at __LC_GPREGS_SAVE_AREA as the registers of the interrupted context. The registers 0-7 of a user process can get clobbered if a machine check interrupts the execution of a critical section in entry[64].S. The reason is that the critical section cleanup code may need to modify the PSW and the registers for the previous context to get to the end of a critical section. If registers 0-7 have to be replaced the relevant copy will be in the registers, which invalidates the copy in the lowcore. The machine check handler needs to explicitly store registers 0-7 to the stack.
    Cc: stable@vger.kernel.org
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/cleanup: rename SPP to LPP  (Hendrik Brueckner, 2013-02-14; 1 file, -5/+5)
    The set-program-parameter (SPP) instruction has been renamed to load-program-parameter (LPP) (see SA23-2260). Reflect this change and rename all macro/instruction references. Also remove the duplicate SPP/LPP entry in the kernel disassembler instruction list.
    Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds, 2012-12-13; 1 file, -11/+25)
    Pull s390 update from Martin Schwidefsky:
     "Add support to generate code for the latest machine zEC12, MOD and XOR instruction support for the BPF jit compiler, the dasd safe offline feature and the big one: the s390 architecture gets PCI support!! Right before the world ends on the 21st ;-)"
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (41 commits)
      s390/qdio: rename the misleading PCI flag of qdio devices
      s390/pci: remove obsolete email addresses
      s390/pci: speed up __iowrite64_copy by using pci store block insn
      s390/pci: enable NEED_DMA_MAP_STATE
      s390/pci: no msleep in potential IRQ context
      s390/pci: fix potential NULL pointer dereference in dma_free_seg_table()
      s390/pci: use kmem_cache_zalloc instead of kmem_cache_alloc/memset
      s390/bpf,jit: add support for XOR instruction
      s390/bpf,jit: add support MOD instruction
      s390/cio: fix pgid reserved check
      vga: compile fix, disable vga for s390
      s390/pci: add PCI Kconfig options
      s390/pci: s390 specific PCI sysfs attributes
      s390/pci: PCI hotplug support via SCLP
      s390/pci: CHSC PCI support for error and availability events
      s390/pci: DMA support
      s390/pci: PCI adapter interrupts for MSI/MSI-X
      s390/bitops: find leftmost bit instruction support
      s390/pci: CLP interface
      s390/pci: base support
      ...
| * s390/kvm: Fix address space mixup  (Christian Borntraeger, 2012-11-23; 1 file, -5/+20)
    I was chasing down a bug of random validity intercepts on s390 (guest prefix page not mapped in the host virtual address space). Turns out that the problem was a wrong address space control element. The cause was quite complex:
    During paging activity a DAT protection during SIE caused a program interrupt. Normally, the sie retry loop tries to catch all interrupts during and shortly before sie to rerun the setup. The problem is now that protection causes a suppressing program interrupt, causing the PSW to point to the instruction AFTER SIE in case of DAT protection. This confused the logic of the retry loop so that it did not trigger; instead we jumped directly back to SIE after return from the program interrupt (the protection fault handler itself did a rewind of the psw). This usually works quite well, but:
    If now the protection fault handler has to wait, another program might be scheduled in. Later on the sie process will be scheduled in again. In that case the content of CR1 (primary address space) will be wrong because switch_to will put the user space ASCE into CR1 and not the guest ASCE. In addition the program parameter is also wrong for every protection fault of a guest, since we don't issue the SPP instruction.
    So let's also check for PSW == instruction after SIE in the program check handler. Instead of expensively checking all program interruption codes that might be suppressing we assume that a program interrupt pointing after SIE was always a program interrupt in SIE. (Otherwise we have a kernel bug anyway.) We also have to compensate the rewinding, since the C-level handlers will do that. Therefore we need to add a nop with the same length as SIE before the sie_loop.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    CC: stable@vger.kernel.org
    CC: Heiko Carstens <heiko.carstens@de.ibm.com>
| * s390/ptrace: race of single stepping vs signal delivery  (Martin Schwidefsky, 2012-11-23; 1 file, -4/+3)
    The current single step code is racy in regard to concurrent delivery of signals. If a signal is delivered after a PER program check occurred but before the TIF_PER_TRAP bit has been checked in entry[64].S, the code clears TIF_PER_TRAP and then calls do_signal. This is wrong: if the instruction completed (or has been suppressed) a SIGTRAP should be delivered to the debugger in any case. Only if the instruction has been nullified may the SIGTRAP not be sent. The new logic always sets TIF_PER_TRAP if the program check indicates PER tracing but removes it again for all program checks that are nullifying. The effect is that for each change in the PSW address we now get a single SIGTRAP.
    Reported-by: Andreas Arnez <arnez@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * s390/traps: preinitialize program check table  (Heiko Carstens, 2012-11-23; 1 file, -2/+2)
    Preinitialize the program check table, so we can put it into the read-only data section. Also use only four byte entries for the table, since each program check handler resides within the first 2GB. Therefore this reduces the size of the table by 50% on 64 bit builds.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
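A sketch of the space trick, not the real table: since every program check handler sits in the first 2GB, a four byte entry can be zero-extended back into a full 64-bit handler address. The table contents and the 0x7f mask below are illustrative.

```c
struct pt_regs;
typedef void (*pgm_handler_t)(struct pt_regs *);

/* 128 read-only four byte entries instead of 128 eight byte pointers */
static const unsigned int pgm_check_table_sketch[128] = {
	/* e.g. [0x01] = (unsigned int)(unsigned long)illegal_op_handler, ... */
};

static pgm_handler_t pgm_check_handler_for(unsigned int int_code)
{
	/* zero-extend the 32-bit table entry back to a 64-bit address */
	return (pgm_handler_t)(unsigned long)pgm_check_table_sketch[int_code & 0x7f];
}
```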
* | s390: switch to saner kernel_execve() semantics  (Al Viro, 2012-10-29; 1 file, -21/+5)
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds, 2012-10-10; 1 file, -1/+1)
    Pull second s390 update from Martin Schwidefsky:
     "The big thing in this pull request is the UAPI patch from David, and worth mentioning is the page table dumper. The rest are small improvements and bug fixes."
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
      s390/entry: fix svc number for TIF_SYSCALL system call restart
      s390/mm,vmem: fix vmem_add_mem()/vmem_remove_range()
      s390/vmalloc: have separate modules area
      s390/zcrypt: remove duplicated include from zcrypt_pcixcc.c
      s390/css_chars: remove superfluous ifdef
      s390/chsc: make headers usable
      s390/mm: let kernel text section always begin at 1MB
      s390/mm: fix mapping of read-only kernel text section
      s390/mm: add page table dumper
      s390: add support to start the kernel in 64 bit mode.
      s390/mm,pageattr: remove superfluous EXPORT_SYMBOLs
      s390/mm,pageattr: add more page table walk sanity checks
      s390/mm: fix pmd_huge() usage for kernel mapping
      s390/dcssblk: cleanup device attribute usage
      s390/mm: use pfmf instruction to initialize storage keys
      s390/facilities: cleanup PFMF and HPAGE machine facility detection
      UAPI: (Scripted) Disintegrate arch/s390/include/asm
| * s390/entry: fix svc number for TIF_SYSCALL system call restart  (Martin Schwidefsky, 2012-10-09; 1 file, -1/+1)
    The load of the svc number in the TIF_SYSCALL restart path needs to be done with an instruction that loads all 64 bits of %r1; 'lh' only loads 32 bits. If the upper half of %r1 is not zero and has the msb set, entry64.S will try to execute an svc with a really large number. What will be in the upper half of %r1 depends on the code generated by gcc for the functions on the do_signal() callchain.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
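The effect of the bug can be reproduced in ordinary user-space C: a partial-width load leaves whatever happened to be in the untouched upper half, which is how %r1 ended up as a huge svc number. The values below are made up for the demonstration.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t r1 = 0xdeadbeef00000000ULL;	/* stale upper half left by earlier code */
	uint16_t svc = 7;			/* the halfword that 'lh' fetches */

	/* 'lh'-like partial load: only the low 32 bits of the register change */
	r1 = (r1 & 0xffffffff00000000ULL) | (uint32_t)(int32_t)(int16_t)svc;
	printf("svc number as seen by the restart path: %#llx\n",
	       (unsigned long long)r1);		/* 0xdeadbeef00000007, not 7 */

	/* full-width load, as in the fix: the upper half is cleared */
	r1 = (uint64_t)svc;
	printf("after a full 64-bit load:               %#llx\n",
	       (unsigned long long)r1);
	return 0;
}
```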
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal  (Linus Torvalds, 2012-10-10; 1 file, -30/+20)
    Pull generic execve() changes from Al Viro:
     "This introduces the generic kernel_thread() and kernel_execve() functions, and switches x86, arm, alpha, um and s390 over to them."
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (26 commits)
      s390: convert to generic kernel_execve()
      s390: switch to generic kernel_thread()
      s390: fold kernel_thread_helper() into ret_from_fork()
      s390: fold execve_tail() into start_thread(), convert to generic sys_execve()
      um: switch to generic kernel_thread()
      x86, um/x86: switch to generic sys_execve and kernel_execve
      x86: split ret_from_fork
      alpha: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
      alpha: switch to generic kernel_thread()
      alpha: switch to generic sys_execve()
      arm: get rid of execve wrapper, switch to generic execve() implementation
      arm: optimized current_pt_regs()
      arm: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
      arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]
      generic sys_execve()
      generic kernel_execve()
      new helper: current_pt_regs()
      preparation for generic kernel_thread()
      um: kill thread->forking
      um: let signal_delivered() do SIGTRAP on singlestepping into handler
      ...
| * s390: convert to generic kernel_execve()  (Al Viro, 2012-09-30; 1 file, -25/+6)
    same situation as with alpha and arm - only massage needed
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * s390: fold kernel_thread_helper() into ret_from_fork()  (Al Viro, 2012-09-30; 1 file, -3/+13)
    ... and don't bother with syscall return path in case of kernel threads.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * s390: fold execve_tail() into start_thread(), convert to generic sys_execve()  (Al Viro, 2012-09-30; 1 file, -1/+0)
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | s390/exceptions: switch to relative exception table entries  (Heiko Carstens, 2012-09-26; 1 file, -3/+2)
    This is the s390 port of 70627654 "x86, extable: Switch to relative exception table entries". Reduces the size of our exception tables by 50% on 64 bit builds.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
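The encoding can be demonstrated with a small standalone program: each entry stores a signed 32-bit offset relative to the entry itself instead of a 64-bit absolute address, which is what halves the table size. The stub objects and the hand-filled entry below are stand-ins for real code locations and the build-time tooling.

```c
#include <stdio.h>
#include <stdint.h>

struct extable_entry {
	int32_t insn;	/* offset of the faulting instruction, relative to this field */
	int32_t fixup;	/* offset of the fixup code, relative to this field */
};

static uintptr_t extable_insn(const struct extable_entry *e)
{
	return (uintptr_t)&e->insn + e->insn;
}

static uintptr_t extable_fixup(const struct extable_entry *e)
{
	return (uintptr_t)&e->fixup + e->fixup;
}

static char insn_stub, fixup_stub;	/* stand-ins for code locations */
static struct extable_entry entry;

int main(void)
{
	/* what the build step (or the sorting tool) would compute */
	entry.insn  = (int32_t)((uintptr_t)&insn_stub  - (uintptr_t)&entry.insn);
	entry.fixup = (int32_t)((uintptr_t)&fixup_stub - (uintptr_t)&entry.fixup);

	printf("insn  %p -> %p\n", (void *)&insn_stub,  (void *)extable_insn(&entry));
	printf("fixup %p -> %p\n", (void *)&fixup_stub, (void *)extable_fixup(&entry));
	return 0;
}
```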
* | s390: add support for transactional memory  (Martin Schwidefsky, 2012-09-26; 1 file, -4/+8)
    Allow user-space processes to use transactional execution (TX). If the TX facility is available user space programs can use transactions for fine-grained serialization based on the data objects that are referenced during a transaction. This is useful for lockless data structures and speculative compiler optimizations.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/vtimer: rework virtual timer interface  (Martin Schwidefsky, 2012-07-20; 1 file, -25/+14)
    The current virtual timer interface is inherently per-cpu and hard to use. The sole user of the interface is appldata which uses it to execute a function after a specific amount of cputime has been used over all cpus. Rework the virtual timer interface to hook into the cputime accounting. This makes the interface independent from the CPU timer interrupts, and makes the virtual timers global as opposed to per-cpu. Overall the code is greatly simplified. The downside is that the accuracy is not as good as the original implementation, but it is still good enough for appldata.
    Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com>
    Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/comments: unify copyright messages and remove file names  (Heiko Carstens, 2012-07-20; 1 file, -2/+1)
    Remove the file name from the comment at top of many files. In most cases the file name was wrong anyway, so it's rather pointless. Also unify the IBM copyright statement. We did have a lot of slightly different statements and wanted to change them one after another whenever a file gets touched. However that never happened. Instead people started to take the old/"wrong" statements to use as a template for new files. So unify all of them in one go.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* s390/smp: make absolute lowcore / cpu restart parameter accesses more robust  (Heiko Carstens, 2012-06-14; 1 file, -1/+3)
    Setting the cpu restart parameters is done in three different fashions:
     - directly setting the four parameters individually
     - copying the four parameters with memcpy (using 4 * sizeof(long))
     - copying the four parameters using a private structure
    In addition code in entry*.S relies on a certain order of the restart members of struct _lowcore. Make all of this more robust to future changes by adding a mem_absolute_assign(dest, val) define, which assigns val to dest using absolute addressing mode. Also the load multiple instructions in entry*.S have been split into separate load instructions so the order of the struct _lowcore members doesn't matter anymore. In addition move the prototypes of memcpy_real/absolute from uaccess.h to processor.h. These memcpy* variants are not related to uaccess at all. string.h doesn't seem to match as well, so let's use processor.h. Also replace the eight byte array in struct _lowcore which represents a misaligned u64 with a u64. The compiler will always create code that handles the misaligned u64 correctly.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/sigp: use sigp order code defines in assembly code  (Heiko Carstens, 2012-06-05; 1 file, -2/+3)
    Use sigp order code defines in assembly code as well. With this change all places that use sigp constants should have been converted to use self describing defines instead of directly using constants.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/kvm: get rid of duplicate instruction  (Christian Borntraeger, 2012-06-05; 1 file, -1/+0)
    After commit 5e8010cb50d3de7202641c0088c211f7c9593ebc "s390: replace TIF_SIE with PF_VCPU" there is no need to load the thread info before sie_loop where it is also loaded. Get rid of this duplicate instruction.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: fix race on TIF_MCCK_PENDING  (Martin Schwidefsky, 2012-05-16; 1 file, -10/+11)
    There is a small race window in the __switch_to code in regard to the transfer of the TIF_MCCK_PENDING bit from the previous to the next task. The bit is transferred before the task struct pointer and the thread-info pointer for the next task have been stored to lowcore. If a machine check sets the TIF_MCCK_PENDING bit between the transfer code and the store of current/thread_info the bit is still set for the previous task. And if the previous task has terminated it can get lost. The effect is that a pending CRW is not retrieved until the next machine check sets TIF_MCCK_PENDING. To fix this reorder __switch_to to first store the task struct and thread-info pointer and then do the transfer of the bit.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/time: simplify Kconfig dependency  (Heiko Carstens, 2012-05-16; 1 file, -1/+1)
    Use HAVE_MARCH_Z9_109_FEATURES to figure out if stckf is available at compile time.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/entry64: avoid SPP code duplication  (Heiko Carstens, 2012-05-16; 1 file, -3/+1)
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/time: always use stckf instead of stck if available  (Heiko Carstens, 2012-05-16; 1 file, -4/+12)
    The store clock fast instruction saves a couple of instructions compared to the store clock instruction. Always use stckf instead of stck if it is available.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
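A hedged illustration of the instruction choice (s390 only, and only with an assembler that knows the z9 mnemonic); this is a sketch of the idea, not the kernel's actual helper.

```c
/* store the TOD clock without the serialization overhead of 'stck' */
static inline unsigned long long tod_clock_fast(void)
{
	unsigned long long clk;

	asm volatile("stckf %0" : "=Q" (clk) : : "cc");
	return clk;
}
```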
* s390: replace TIF_SIE with PF_VCPU  (Martin Schwidefsky, 2012-05-16; 1 file, -3/+0)
    Replace the check for TIF_SIE in the fault handler by a check for PF_VCPU. With the last user of TIF_SIE gone we can now remove the bit.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: make sie intercept independent of thread_info  (Martin Schwidefsky, 2012-05-16; 1 file, -6/+6)
    HANDLE_SIE_INTERCEPT is called early, use supervisor state and instruction address to decide if the reset of the PSW to sie_loop is required.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: initialize backchain for ext_int_handler()  (Michael Holzheu, 2012-05-16; 1 file, -0/+1)
    To allow correct stack backtraces the backchain for the external interrupt handler is now initialized with zero like it is already done for example by io_int_handler().
    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] rework idle code  (Martin Schwidefsky, 2012-03-11; 1 file, -3/+62)
    Whenever the cpu loads an enabled wait PSW it will appear as idle to the underlying host system. The code in default_idle calls vtime_stop_cpu which does the necessary voodoo to get the cpu time accounting right. The udelay code just loads an enabled wait PSW. To correct this rework the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts to entry[64].S; vtime_stop_cpu can now be called from anywhere and vtime_start_cpu is gone. The correction of the cpu time during wakeup from an enabled wait PSW is done with a critical section in entry[64].S. As vtime_start_cpu is gone, s390_idle_check can be removed as well.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] rework smp code  (Martin Schwidefsky, 2012-03-11; 1 file, -55/+17)
    Define struct pcpu and merge some of the NR_CPUS arrays into it, including __cpu_logical_map, current_set and smp_cpu_state. Split smp related functions into those operating on physical cpus and those operating on a logical cpu number. Make the functions for physical cpus use a pointer to a struct pcpu. This hides the knowledge about cpu addresses in smp.c, entry[64].S and swsusp_asm64.S, thus remove the sigp.h header. The PSW restart mechanism is used to start secondary cpus, calling a function on an online cpu, calling a function on the ipl cpu, and for the nmi signal. Replace the different assembler functions with a single function restart_int_handler. The new entry point calls a function whose pointer is stored in the lowcore of the target cpu and it can wait for the source cpu to stop. This covers all existing use cases. Overall the code is now simpler and there are ~380 fewer lines of code.
    Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] rename lowcore field  (Martin Schwidefsky, 2012-03-11; 1 file, -1/+1)
    The 16 bit value at the lowcore location with offset 0x84 is the cpu address that is associated with an external interrupt. Rename the field from cpu_addr to ext_cpu_addr to make that clear.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cleanup trap handling  (Martin Schwidefsky, 2011-12-27; 1 file, -14/+13)
    Move the program interruption code and the translation exception identifier to the pt_regs structure as 'int_code' and 'int_parm_long' and make the first level interrupt handler in entry[64].S store the two values. That makes it possible to drop 'prot_addr' and 'trap_no' from the thread_struct and to reduce the number of arguments to a lot of functions. Finally un-inline do_trap. Overall this saves 5812 bytes in the .text section of the 64 bit kernel.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
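A simplified C view of the interface change; the structure below is a sketch, not the real struct pt_regs, and the handler body only hints at what do_trap does with the two new fields.

```c
struct pt_regs_sketch {
	unsigned long gprs[16];		/* saved registers (heavily simplified) */
	unsigned int int_code;		/* program interruption code, stored by entry64.S */
	unsigned long int_parm_long;	/* translation exception identifier, stored likewise */
};

/* before: do_trap(regs, pgm_int_code, trans_exc_code, ...); after: just the regs pointer */
static void do_trap_sketch(struct pt_regs_sketch *regs)
{
	unsigned int code = regs->int_code & 0xffff;	/* dispatch key */
	unsigned long teid = regs->int_parm_long;	/* fault-address style information */

	(void)code;
	(void)teid;
}
```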
* [S390] entry[64].S improvements  (Martin Schwidefsky, 2011-12-27; 1 file, -539/+429)
    Another round of cleanup for entry[64].S, in particular the program check handler looks more reasonable now. The code size for the 31 bit kernel has been reduced by 616 bytes and by 528 bytes for the 64 bit version. Even better, the code is a bit faster as well.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] kvm: move cmf host id constant out of lowcore  (Martin Schwidefsky, 2011-12-27; 1 file, -2/+5)
    There is no reason for the cpu-measurement-facility host id constant to reside in the lowcore where space is precious. Use an entry in the literal pool in HANDLE_SIE_INTERCEPT and a stack slot in sie64a. While we are at it replace the id -1 with 0 to indicate host execution.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] load user asce on sie_fault  (Carsten Otte, 2011-10-30; 1 file, -0/+1)
    On sie_fault we need to switch back to the user ASCE. Otherwise we get interesting effects when exiting to "userspace" while the guest space is still active.
    Signed-off-by: Carsten Otte <cotte@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] add TIF_SYSCALL thread flag  (Martin Schwidefsky, 2011-10-30; 1 file, -41/+26)
    Add an explicit TIF_SYSCALL bit that indicates if a task is inside a system call. The svc_code in the pt_regs structure is now only valid if TIF_SYSCALL is set. With this definition TIF_RESTART_SVC can be replaced with TIF_SYSCALL. Overall do_signal is a bit more readable and it saves a few lines of code.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] signal race with restarting system calls  (Martin Schwidefsky, 2011-10-30; 1 file, -15/+15)
    For an ERESTARTNOHAND/ERESTARTSYS/ERESTARTNOINTR restarting system call do_signal will prepare the restart of the system call with a rewind of the PSW before calling get_signal_to_deliver (where the debugger might take control). For an ERESTART_RESTARTBLOCK restarting system call do_signal will set -EINTR as return code.
    There are two issues with this approach:
    1) strace never sees ERESTARTNOHAND, ERESTARTSYS, ERESTARTNOINTR or ERESTART_RESTARTBLOCK as the rewinding already took place or the return code has been changed to -EINTR
    2) if get_signal_to_deliver does not return with a signal to deliver the restart via the repeat of the svc instruction is left in place. This opens a race if another signal is made pending before the system call instruction can be reexecuted. The original system call will be restarted even if the second signal would have ended the system call with -EINTR.
    These two issues can be solved by dropping the early rewind of the system call before get_signal_to_deliver has been called and by using the TIF_RESTART_SVC magic to do the restart if no signal has to be delivered. The only situation where the system call restart via the repeat of the svc instruction is appropriate is when a SA_RESTART signal is delivered to user space.
    Unfortunately this breaks inferior calls by the debugger again. The system call number and the length of the system call instruction is lost over the inferior call and user space will see ERESTARTNOHAND/ERESTARTSYS/ERESTARTNOINTR/ERESTART_RESTARTBLOCK. To correct this a new ptrace interface is added to save/restore the system call number and system call instruction length.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
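A condensed sketch of the restart policy after this change, not the actual arch/s390/kernel/signal.c; rewind_psw_over_svc() and restart_via_tif_syscall() are hypothetical helpers standing in for the PSW rewind and the TIF_RESTART_SVC path.

```c
#include <linux/errno.h>
#include <linux/signal.h>
#include <asm/ptrace.h>

static void rewind_psw_over_svc(struct pt_regs *regs)     { /* hypothetical, left empty */ }
static void restart_via_tif_syscall(struct pt_regs *regs) { /* hypothetical, left empty */ }

static void handle_syscall_restart(struct pt_regs *regs, struct k_sigaction *ka,
				   int signal_delivered)
{
	if (signal_delivered) {
		switch (regs->gprs[2]) {
		case -ERESTART_RESTARTBLOCK:
		case -ERESTARTNOHAND:
			regs->gprs[2] = -EINTR;
			break;
		case -ERESTARTSYS:
			if (!(ka->sa.sa_flags & SA_RESTART)) {
				regs->gprs[2] = -EINTR;
				break;
			}
			/* fall through */
		case -ERESTARTNOINTR:
			/* only now, with a handler about to run, repeat the svc */
			regs->gprs[2] = regs->orig_gpr2;
			rewind_psw_over_svc(regs);
			break;
		}
	} else {
		/* no signal to deliver: restart later via the TIF_RESTART_SVC magic */
		switch (regs->gprs[2]) {
		case -ERESTART_RESTARTBLOCK:	/* the real code arranges restart_syscall here */
		case -ERESTARTNOHAND:
		case -ERESTARTSYS:
		case -ERESTARTNOINTR:
			regs->gprs[2] = regs->orig_gpr2;
			restart_via_tif_syscall(regs);
			break;
		}
	}
}
```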
* [S390] lowcore cleanup  (Martin Schwidefsky, 2011-10-30; 1 file, -2/+2)
    Remove the save_area_64 field from the 0xe00 - 0xf00 area in the lowcore. Use a free slot in the save_area array instead.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] kvm: fix address mode switching  (Christian Borntraeger, 2011-09-20; 1 file, -0/+6)
    598841ca9919d008b520114d8a4378c4ce4e40a1 ([S390] use gmap address spaces for kvm guest images) changed kvm to use a separate address space for kvm guests. This address space was switched in __vcpu_run. In some cases (preemption, page fault) there is the possibility that this address space switch is lost. The typical symptom was a huge amount of validity intercepts or random guest addressing exceptions. Fix this by doing the switch in sie_loop and sie_exit and saving the address space in the gmap structure itself. Also use the preempt notifier.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Add PSW restart shutdown trigger  (Michael Holzheu, 2011-08-03; 1 file, -0/+20)
    With this patch a new S390 shutdown trigger "restart" is added. If under z/VM "system restart" is entered or under the HMC the "PSW restart" button is pressed, the PSW located at 0 (31 bit) or 0x1a0 (64 bit) is loaded. Now we execute do_restart() that processes the restart action that is defined under /sys/firmware/shutdown_actions/on_restart. Currently the following actions are possible: reipl (default), stop, vmcmd, dump, and dump_reipl.
    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>