path: root/arch/s390/kernel/asm-offsets.c
* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2013-07-03, 1 file, -0/+3)

    Pull KVM fixes from Paolo Bonzini:
     "On the x86 side, there are some optimizations and documentation updates.
      The big ARM/KVM change for 3.11, support for AArch64, will come through
      Catalin Marinas's tree.  s390 and PPC have misc cleanups and bugfixes"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (87 commits)
      KVM: PPC: Ignore PIR writes
      KVM: PPC: Book3S PR: Invalidate SLB entries properly
      KVM: PPC: Book3S PR: Allow guest to use 1TB segments
      KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
      KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
      KVM: PPC: Book3S PR: Fix proto-VSID calculations
      KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL
      KVM: Fix RTC interrupt coalescing tracking
      kvm: Add a tracepoint write_tsc_offset
      KVM: MMU: Inform users of mmio generation wraparound
      KVM: MMU: document fast invalidate all mmio sptes
      KVM: MMU: document fast invalidate all pages
      KVM: MMU: document fast page fault
      KVM: MMU: document mmio page fault
      KVM: MMU: document write_flooding_count
      KVM: MMU: document clear_spte_count
      KVM: MMU: drop kvm_mmu_zap_mmio_sptes
      KVM: MMU: init kvm generation close to mmio wrap-around value
      KVM: MMU: add tracepoint for check_mmio_spte
      KVM: MMU: fast invalidate all mmio sptes
      ...
| * s390/kvm: Provide a way to prevent reentering SIE (Christian Borntraeger, 2013-05-21, 1 file, -0/+1)

    Let's provide functions to prevent KVM from reentering SIE and to kick
    cpus out of SIE.  We cannot use the common kvm_vcpu_kick code, since we
    need to kick out guests in places that hold architecture specific locks
    (e.g. pgste lock) which might be necessary on the other cpus - so no
    waiting is possible.

    So let's provide a bit in a private field of the sie control block that
    acts as a gate keeper, after we have claimed we are in SIE.  Please note
    that we do not reuse prog0c, since we want to access that bit without
    atomic ops.

    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
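To illustrate the gate-keeper idea described in that commit message, here is a minimal standalone C sketch with made-up names and flag values (it is not the actual KVM/s390 code): one bit records that a CPU is inside SIE, another closes the gate so SIE cannot be (re)entered, and the remote side never has to wait.

        #include <stdatomic.h>
        #include <stdbool.h>

        struct sie_block_sketch {
                atomic_uint flags;              /* private control bits */
        };

        #define SIE_IN_SIE   0x1u               /* CPU currently executing SIE */
        #define SIE_BLOCKED  0x2u               /* gate closed: do not (re)enter */

        /* entry path: claim the gate, back out if someone closed it */
        static bool sie_try_enter(struct sie_block_sketch *sb)
        {
                unsigned int old = atomic_fetch_or(&sb->flags, SIE_IN_SIE);

                if (old & SIE_BLOCKED) {
                        atomic_fetch_and(&sb->flags, ~SIE_IN_SIE);
                        return false;           /* caller retries later */
                }
                return true;
        }

        /* remote side: close the gate without waiting for the target CPU */
        static void sie_block_entry(struct sie_block_sketch *sb)
        {
                atomic_fetch_or(&sb->flags, SIE_BLOCKED);
                /* the real code additionally kicks the CPU out of SIE */
        }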
| * s390/kvm: Mark if a cpu is in SIE (Christian Borntraeger, 2013-05-21, 1 file, -0/+2)

    Let's track in a private bit if the sie control block is active.
    We want to track this as closely as possible, so we also have to
    instrument the interrupt and program check handler.  Let's use the
    existing HANDLE_SIE_INTERCEPT macro.

    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
* | s390/irq: store interrupt information in pt_regs (Martin Schwidefsky, 2013-06-26, 1 file, -0/+1)

    Copy the interrupt parameters from the lowcore to the pt_regs structure
    in entry[64].S and reduce the arguments of the low level interrupt
    handler to the pt_regs pointer only.  In addition move the
    test-pending-interrupt loop from do_IRQ to entry[64].S to make sure that
    interrupt information is always delivered via pt_regs.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: system call path micro optimization (Martin Schwidefsky, 2013-04-26, 1 file, -0/+1)

    Add a pointer to the system call table to the thread_info structure.
    The TIF_31BIT bit is set or cleared by SET_PERSONALITY exactly once
    for the lifetime of a process.  With the pointer to the correct system
    call table in thread_info, the system call path in entry64.S can drop
    the check for TIF_31BIT, which saves a couple of instructions.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390: add support for transactional memory (Martin Schwidefsky, 2012-09-26, 1 file, -0/+2)

    Allow user-space processes to use transactional execution (TX).  If the
    TX facility is available user space programs can use transactions for
    fine-grained serialization based on the data objects that are referenced
    during a transaction.  This is useful for lockless data structures and
    speculative compiler optimizations.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/vtimer: rework virtual timer interface (Martin Schwidefsky, 2012-07-20, 1 file, -6/+4)

    The current virtual timer interface is inherently per-cpu and hard to
    use.  The sole user of the interface is appldata which uses it to
    execute a function after a specific amount of cputime has been used
    over all cpus.

    Rework the virtual timer interface to hook into the cputime accounting.
    This makes the interface independent from the CPU timer interrupts, and
    makes the virtual timers global as opposed to per-cpu.  Overall the code
    is greatly simplified.  The downside is that the accuracy is not as good
    as the original implementation, but it is still good enough for
    appldata.

    Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com>
    Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* s390/smp: make absolute lowcore / cpu restart parameter accesses more robust (Heiko Carstens, 2012-06-14, 1 file, -0/+2)

    Setting the cpu restart parameters is done in three different fashions:
    - directly setting the four parameters individually
    - copying the four parameters with memcpy (using 4 * sizeof(long))
    - copying the four parameters using a private structure

    In addition code in entry*.S relies on a certain order of the restart
    members of struct _lowcore.

    Make all of this more robust to future changes by adding a
    mem_absolute_assign(dest, val) define, which assigns val to dest using
    absolute addressing mode.  Also the load multiple instructions in
    entry*.S have been split into separate load instructions so the order
    of the struct _lowcore members doesn't matter anymore.

    In addition move the prototypes of memcpy_real/absolute from uaccess.h
    to processor.h.  These memcpy* variants are not related to uaccess at
    all.  string.h doesn't seem to match as well, so let's use processor.h.

    Also replace the eight byte array in struct _lowcore which represents a
    misaligned u64 with a u64.  The compiler will always create code that
    handles the misaligned u64 correctly.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
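A rough sketch of what an absolute-store define along these lines could look like on 64-bit s390 (an illustration of the idea, not necessarily the kernel's actual implementation): STURG stores a register at a real address, so the store is not redirected through the prefix register and hits the absolute lowcore location.

        /* Illustrative only: assign val to dest via a real-address store so
         * the access is not remapped by prefixing. */
        #define mem_absolute_assign(dest, val) do {                        \
                unsigned long __v = (unsigned long)(val);                  \
                asm volatile("sturg %0,%1"                                 \
                             : : "d" (__v), "a" (&(dest)) : "memory");     \
        } while (0)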
* Disintegrate asm/system.h for S390 (David Howells, 2012-03-28, 1 file, -1/+0)

    Disintegrate asm/system.h for S390.

    Signed-off-by: David Howells <dhowells@redhat.com>
    cc: linux-s390@vger.kernel.org
* [S390] rework idle code (Martin Schwidefsky, 2012-03-11, 1 file, -0/+8)

    Whenever the cpu loads an enabled wait PSW it will appear as idle to the
    underlying host system.  The code in default_idle calls vtime_stop_cpu
    which does the necessary voodoo to get the cpu time accounting right.
    The udelay code just loads an enabled wait PSW.  To correct this rework
    the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts to
    entry[64].S, vtime_stop_cpu can now be called from anywhere and
    vtime_start_cpu is gone.  The correction of the cpu time during wakeup
    from an enabled wait PSW is done with a critical section in entry[64].S.
    As vtime_start_cpu is gone, s390_idle_check can be removed as well.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] rework smp code (Martin Schwidefsky, 2012-03-11, 1 file, -10/+7)

    Define struct pcpu and merge some of the NR_CPUS arrays into it,
    including __cpu_logical_map, current_set and smp_cpu_state.  Split smp
    related functions to those operating on physical cpus and the functions
    operating on a logical cpu number.  Make the functions for physical cpus
    use a pointer to a struct pcpu.  This hides the knowledge about cpu
    addresses in smp.c, entry[64].S and swsusp_asm64.S, thus remove the
    sigp.h header.

    The PSW restart mechanism is used to start secondary cpus, calling a
    function on an online cpu, calling a function on the ipl cpu, and for
    the nmi signal.  Replace the different assembler functions with a
    single function restart_int_handler.  The new entry point calls a
    function whose pointer is stored in the lowcore of the target cpu and
    it can wait for the source cpu to stop.  This covers all existing use
    cases.

    Overall the code is now simpler and there are ~380 lines less code.

    Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] rename lowcore field (Martin Schwidefsky, 2012-03-11, 1 file, -1/+1)

    The 16 bit value at the lowcore location with offset 0x84 is the cpu
    address that is associated with an external interrupt.  Rename the
    field from cpu_addr to ext_cpu_addr to make that clear.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cleanup trap handling (Martin Schwidefsky, 2011-12-27, 1 file, -1/+2)

    Move the program interruption code and the translation exception
    identifier to the pt_regs structure as 'int_code' and 'int_parm_long'
    and make the first level interrupt handler in entry[64].S store the two
    values.  That makes it possible to drop 'prot_addr' and 'trap_no' from
    the thread_struct and to reduce the number of arguments to a lot of
    functions.  Finally un-inline do_trap.  Overall this saves 5812 bytes
    in the .text section of the 64 bit kernel.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] entry[64].S improvements (Martin Schwidefsky, 2011-12-27, 1 file, -1/+3)

    Another round of cleanup for entry[64].S, in particular the program
    check handler looks more reasonable now.  The code size for the 31 bit
    kernel has been reduced by 616 bytes and by 528 bytes for the 64 bit
    version.  Even better, the code is a bit faster as well.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] kvm: move cmf host id constant out of lowcore (Martin Schwidefsky, 2011-12-27, 1 file, -1/+0)

    There is no reason for the cpu-measurement-facility host id constant to
    reside in the lowcore where space is precious.  Use an entry in the
    literal pool in HANDLE_SIE_INTERCEPT and a stack slot in sie64a.
    While we are at it replace the id -1 with 0 to indicate host execution.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] signal race with restarting system calls (Martin Schwidefsky, 2011-10-30, 1 file, -2/+1)

    For an ERESTARTNOHAND/ERESTARTSYS/ERESTARTNOINTR restarting system call
    do_signal will prepare the restart of the system call with a rewind of
    the PSW before calling get_signal_to_deliver (where the debugger might
    take control).  For an ERESTART_RESTARTBLOCK restarting system call
    do_signal will set -EINTR as return code.

    There are two issues with this approach:
    1) strace never sees ERESTARTNOHAND, ERESTARTSYS, ERESTARTNOINTR or
       ERESTART_RESTARTBLOCK as the rewinding already took place or the
       return code has been changed to -EINTR
    2) if get_signal_to_deliver does not return with a signal to deliver
       the restart via the repeat of the svc instruction is left in place.
       This opens a race if another signal is made pending before the
       system call instruction can be reexecuted.  The original system call
       will be restarted even if the second signal would have ended the
       system call with -EINTR.

    These two issues can be solved by dropping the early rewind of the
    system call before get_signal_to_deliver has been called and by using
    the TIF_RESTART_SVC magic to do the restart if no signal has to be
    delivered.  The only situation where the system call restart via the
    repeat of the svc instruction is appropriate is when a SA_RESTART
    signal is delivered to user space.

    Unfortunately this breaks inferior calls by the debugger again.  The
    system call number and the length of the system call instruction is
    lost over the inferior call and user space will see ERESTARTNOHAND/
    ERESTARTSYS/ERESTARTNOINTR/ERESTART_RESTARTBLOCK.  To correct this a
    new ptrace interface is added to save/restore the system call number
    and system call instruction length.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] lowcore cleanup (Martin Schwidefsky, 2011-10-30, 1 file, -1/+0)

    Remove the save_area_64 field from the 0xe00 - 0xf00 area in the
    lowcore.  Use a free slot in the save_area array instead.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] kvm: fix address mode switching (Christian Borntraeger, 2011-09-20, 1 file, -0/+3)

    598841ca9919d008b520114d8a4378c4ce4e40a1 ([S390] use gmap address
    spaces for kvm guest images) changed kvm to use a separate address
    space for kvm guests.  This address space was switched in __vcpu_run.
    In some cases (preemption, page fault) there is the possibility that
    this address space switch is lost.  The typical symptom was a huge
    amount of validity intercepts or random guest addressing exceptions.

    Fix this by doing the switch in sie_loop and sie_exit and saving the
    address space in the gmap structure itself.  Also use the preempt
    notifier.

    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] asm offsets: fix coding style (Heiko Carstens, 2011-08-03, 1 file, -6/+3)

    For readability reasons we ignore the 80 character line limit in asm
    offsets.  Just one line per define, nothing else.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] Add PSW restart shutdown trigger (Michael Holzheu, 2011-08-03, 1 file, -0/+1)

    With this patch a new S390 shutdown trigger "restart" is added.  If
    under z/VM "system restart" is entered or under the HMC the "PSW
    restart" button is pressed, the PSW located at 0 (31 bit) or 0x1a0
    (64 bit) is loaded.  Now we execute do_restart() that processes the
    restart action that is defined under
    /sys/firmware/shutdown_actions/on_restart.  Currently the following
    actions are possible: reipl (default), stop, vmcmd, dump, and
    dump_reipl.

    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [S390] kvm guest address space mapping (Martin Schwidefsky, 2011-07-24, 1 file, -1/+1)

    Add code that allows KVM to control the virtual memory layout that is
    seen by a guest.  The guest address space uses a second page table that
    shares the last level pte-tables with the process page table.  If a
    page is unmapped from the process page table it is automatically
    unmapped from the guest page table as well.

    The guest address space mapping starts out empty, KVM can map any
    individual 1MB segments from the process virtual memory to any 1MB
    aligned location in the guest virtual memory.  If a target segment in
    the process virtual memory does not exist or is unmapped while a guest
    mapping exists the desired target address is stored as an invalid
    segment table entry in the guest page table.  The population of the
    guest page table is fault driven.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] pfault: cpu hotplug vs missing completion interrupts (Heiko Carstens, 2011-05-23, 1 file, -0/+1)

    On cpu hot remove a PFAULT CANCEL command is sent to the hypervisor
    which in turn will cancel all outstanding pfault requests that have
    been issued on that cpu (the same happens with a SIGP cpu reset).

    The result is that we end up with uninterruptible processes where the
    interrupt that would wake up these processes never arrives.

    In order to solve this all processes which wait for a pfault completion
    interrupt get woken up after a cpu hot remove.  The worst case that
    could happen is that they fault again and in turn need to wait again.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Remove data execution protection (Martin Schwidefsky, 2011-05-23, 1 file, -3/+0)

    The noexec support on s390 does not rely on a bit in the page table
    entry but utilizes the secondary space mode to distinguish between
    memory accesses for instructions vs. data.  The noexec code relies on
    the assumption that the cpu will always use the secondary space page
    table for data accesses while it is running in the secondary space
    mode.  Up to the z9-109 class machines this has been the case.
    Unfortunately this is not true anymore with z10 and later machines.
    The load-relative-long instructions lrl, lgrl and lgfrl access the
    memory operand using the same addressing-space mode that has been used
    to fetch the instruction.  This breaks the noexec mode for all user
    space binaries compiled with march=z10 or later.  The only option is
    to remove the current noexec support.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] ptrace cleanup (Martin Schwidefsky, 2011-01-05, 1 file, -6/+8)

    Overhaul program event recording and the code dealing with the ptrace
    user space interface.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] vdso: get rid of redefinition warnings (Heiko Carstens, 2010-10-29, 1 file, -3/+3)

    The CLOCK_* defines in asm-offsets.c are only used for the vdso code
    however in the meantime they cause other trouble.  Just rename them to
    get permanently rid of this:

      In file included from /home2/heicarst/linux-2.6/arch/s390/include/asm/asm-offsets.h:1:0,
                       from arch/s390/mm/fault.c:33:
      include/generated/asm-offsets.h:53:0: warning: "CLOCK_REALTIME" redefined
      include/linux/time.h:286:0: note: this is the location of the previous definition
      include/generated/asm-offsets.h:54:0: warning: "CLOCK_MONOTONIC" redefined
      include/linux/time.h:287:0: note: this is the location of the previous definition

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cleanup lowcore access from external interrupts (Martin Schwidefsky, 2010-10-25, 1 file, -2/+0)

    Read external interrupt parameters from the lowcore in the first level
    interrupt handler in entry[64].S.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cleanup lowcore access from program checks (Martin Schwidefsky, 2010-10-25, 1 file, -0/+1)

    Read all required fields for program checks from the lowcore in the
    first level interrupt handler in entry[64].S.  If the context that
    caused the fault was enabled for interrupts we can now re-enable the
    irqs in entry[64].S.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] spp: fix compilation for CONFIG_32BIT (Heiko Carstens, 2010-05-26, 1 file, -2/+2)

    Fix build breakage for CONFIG_32BIT caused by cd3b70f5
    "[S390] virtualization aware cpu measurement".

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] add breaking event address for user space (Martin Schwidefsky, 2010-05-17, 1 file, -0/+1)

    Copy the last breaking event address from the lowcore to a new field
    in the thread_struct on each system entry.  Add a new ptrace request
    PTRACE_GET_LAST_BREAK and a new utrace regset REGSET_LAST_BREAK to
    query the last breaking event.  This is useful for debugging wild
    branches in user space code.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
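From a tracer's point of view, the new request can presumably be used along these lines (a hedged usage sketch in C; error handling is trimmed, and the request number is taken from the s390 uapi headers of that era, so check <asm/ptrace.h> on the target tree):

        #include <stdio.h>
        #include <sys/ptrace.h>
        #include <sys/types.h>

        #ifndef PTRACE_GET_LAST_BREAK
        #define PTRACE_GET_LAST_BREAK 0x5006   /* s390-specific request */
        #endif

        /* pid must identify a tracee that is already attached and stopped */
        static void show_last_break(pid_t pid)
        {
                unsigned long last_break = 0;

                if (ptrace(PTRACE_GET_LAST_BREAK, pid, NULL, &last_break) == 0)
                        printf("last breaking-event address: %#lx\n", last_break);
                else
                        perror("PTRACE_GET_LAST_BREAK");
        }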
* [S390] virtualization aware cpu measurement (Carsten Otte, 2010-05-17, 1 file, -0/+2)

    Use the SPP instruction to set a tag on entry to / exit of the virtual
    machine context.  This allows the cpu measurement facility to
    distinguish the samples from the host and the different guests.

    Signed-off-by: Carsten Otte <cotte@de.ibm.com>
* [S390] idle time accounting vs. machine checks (Martin Schwidefsky, 2010-05-17, 1 file, -0/+2)

    A machine check can interrupt the i/o and external interrupt handler
    anytime.  If the machine check occurs while the interrupt handler is
    waking up from idle vtime_start_cpu can get executed a second time and
    the int_clock / async_enter_timer values in the lowcore get clobbered.
    This can confuse the cpu time accounting.

    To fix this problem two changes are needed.  First the machine check
    handler has to use its own copies of int_clock and async_enter_timer,
    named mcck_clock and mcck_enter_timer.  Second the nested execution of
    vtime_start_cpu has to be prevented.  This is done in s390_idle_check
    by checking the wait bit in the program status word.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] More cleanup for struct _lowcore (Martin Schwidefsky, 2010-05-17, 1 file, -1/+0)

    Remove cpu_id from lowcore and replace addr_t with __u64.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] vdso: use ntp adjusted clock multiplier (Hendrik Brueckner, 2010-04-22, 1 file, -0/+1)

    Commit "timekeeping: Fix clock_gettime vsyscall time warp" (0696b711e)
    introduced the new parameter "mult" to update_vsyscall().  This
    parameter contains the internal NTP adjusted clock multiplier.

    The s390x vdso did not use this adjusted multiplier.  Instead, it used
    the constant clock multiplier for gettimeofday() and clock_gettime()
    variants.  This may result in observable time warps as explained in
    commit 0696b711e.

    Make the NTP adjusted clock multiplier available to the s390x vdso
    implementation and use it for time calculations.

    Cc: <stable@kernel.org>
    Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Cleanup struct _lowcore usage and defines. (Heiko Carstens, 2010-02-26, 1 file, -7/+84)

    Use asm offsets to make sure the offset defines to struct _lowcore and
    its layout don't get out of sync.  Also add a BUILD_BUG_ON() which
    checks that the size of the structure is sane.  And while being at it
    change those sites which use odd casts to access the current lowcore.
    These should use S390_lowcore instead.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
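The pattern described in that commit, sketched in the style of asm-offsets.c (the symbol names shown and the size value are illustrative placeholders, not a copy of the real s390 file): offsets used from assembly are generated from the C layout, and a BUILD_BUG_ON() catches an unintended size change of struct _lowcore.

        #include <linux/kbuild.h>
        #include <linux/bug.h>
        #include <asm/lowcore.h>

        /* placeholder value -- a real check must use the architecture's
         * actual lowcore size */
        #define LOWCORE_SIZE_SKETCH (2 * 4096)

        int main(void)
        {
                /* one generated define per lowcore field used from entry[64].S,
                 * so the assembler offsets can never drift from the C layout */
                OFFSET(__LC_EXT_INT_CODE, _lowcore, ext_int_code);
                OFFSET(__LC_RESTART_PSW, _lowcore, restart_psw);
                /* catch silent size/layout changes of struct _lowcore */
                BUILD_BUG_ON(sizeof(struct _lowcore) != LOWCORE_SIZE_SKETCH);
                return 0;
        }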
* [S390] use inline assembly constraints available with gcc 3.3.3 (Martin Schwidefsky, 2010-02-26, 1 file, -0/+8)

    Drop support to compile the kernel with gcc versions older than 3.3.3.
    This allows us to use the "Q" inline assembly constraint on some more
    inline assemblies without duplicating a lot of complex code
    (e.g. __xchg and __cmpxchg).  The distinction for older gcc versions
    can be removed which saves a few lines and simplifies the code.

    Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] hibernate: Do real CPU swap at resume time (Michael Holzheu, 2009-09-22, 1 file, -1/+6)

    Currently, when the physical resume CPU is not equal to the physical
    suspend CPU, we swap the CPUs logically, by modifying the
    logical/physical CPU mapping.  This has two major drawbacks: First the
    change is visible from user space (e.g. CPU sysfs files) and second it
    is hard to ensure that nowhere in the kernel the physical CPU ID is
    stored before suspend.

    To fix this, we now really swap the physical CPUs, if the resume CPU
    is not the physical suspend CPU.  We restart the suspend CPU and stop
    the resume CPU using SIGP restart and SIGP stop.  If the suspend CPU
    is no longer available, we write a message and load a disabled wait
    PSW.

    Signed-off-by: Michael Holzheu <michael.holzheu@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] cpu hotplug and accounting values (Martin Schwidefsky, 2009-04-14, 1 file, -0/+2)

    Reset the cpu timer to the maximum value and correctly initialize the
    cpu accounting values in the lowcore when the cpu is started.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID (Martin Schwidefsky, 2008-12-31, 1 file, -0/+5)

    The extract cpu time (ectg) instruction allows the user process to get
    the current thread cputime without calling into the kernel.  The code
    that uses the instruction needs to switch to the access registers mode
    to get access to the per-cpu info page that contains the two base
    values that are needed to calculate the current cputime from the CPU
    timer with the ectg instruction.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] introduce vdso on s390 (Martin Schwidefsky, 2008-12-25, 1 file, -0/+15)

    Add a vdso to speed up gettimeofday and clock_getres/clock_gettime for
    CLOCK_REALTIME/CLOCK_MONOTONIC.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] fix system call parameter functions. (Martin Schwidefsky, 2008-11-27, 1 file, -1/+1)

    syscall_get_nr() currently returns a valid result only if the call
    chain of the traced process includes do_syscall_trace_enter().  But
    collect_syscall() can be called for any sleeping task, so in general
    the result of syscall_get_nr() is completely bogus.

    To make syscall_get_nr() work for any sleeping task the traps field in
    pt_regs is replaced with svcnr - the system call number the process is
    executing.  If svcnr == 0 the process is not on a system call path.

    The syscall_get_arguments and syscall_set_arguments use regs->gprs[2]
    for the first system call parameter.  This is incorrect since gprs[2]
    may have been overwritten with the system call number if the call
    chain includes do_syscall_trace_enter.  Use regs->orig_gprs2 instead.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
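A simplified sketch of what the svcnr convention described above gives the generic syscall accessor (the real code lives in arch/s390/include/asm/syscall.h; this is a paraphrase of the semantics, not a copy):

        #include <linux/sched.h>
        #include <linux/ptrace.h>

        /* svcnr holds the system call number; 0 means the task is not on a
         * system call path, so report "no syscall" (-1) in that case. */
        static inline long syscall_get_nr(struct task_struct *task,
                                          struct pt_regs *regs)
        {
                return regs->svcnr ? regs->svcnr : -1;
        }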
* s390: use kbuild.h instead of defining macros in asm-offsets.c (Christoph Lameter, 2008-04-29, 1 file, -28/+23)

    New version that does not preserve the marker.  Arch maintainers
    indicate that the marker functionality is not needed anymore.

    Note you may simplify the s390 asm-offsets.c code further if you use
    the OFFSET() macro instead of the DEFINE.  See kbuild.h.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
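For context, the kbuild.h helpers referred to above work roughly like this (condensed from include/linux/kbuild.h, so check your tree for the authoritative version): compiling asm-offsets.c with -S turns each DEFINE()/OFFSET() into a recognizable ".ascii" marker in the assembler output, which the build then converts into #define lines in include/generated/asm-offsets.h, so assembly such as entry[64].S can use structure offsets computed from the C definitions.

        /* Each invocation emits an ".ascii" marker that a sed script in
         * the build turns into a #define in asm-offsets.h. */
        #define DEFINE(sym, val) \
                asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))

        #define BLANK() asm volatile("\n.ascii \"->\"" : : )

        #define OFFSET(sym, str, mem) \
                DEFINE(sym, offsetof(struct str, mem))

        /* Inside asm-offsets.c's main(): instead of spelling out
         * offsetof() by hand ... */
        DEFINE(__THREAD_ksp, offsetof(struct thread_struct, ksp));
        /* ... the file can simply say: */
        OFFSET(__THREAD_ksp, thread_struct, ksp);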
* s390: use kbuild.h instead of defining macros in asm-offsets.c (Christoph Lameter, 2008-04-29, 1 file, -3/+2)

    s390 has a strange marker in DEFINE.  Undefine the DEFINE from kbuild.h
    and define it the way s390 wants it to preserve things as they were.

    It may be good if the arch maintainer could go over this and check if
    this workaround is really necessary.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* rename thread_info to stack (Roman Zippel, 2007-05-09, 1 file, -1/+1)

    This finally renames the thread_info field in task structure to stack,
    so that the assumptions about this field are gone and archs have more
    freedom about placing the thread_info structure.

    Nonbroken archs which have a proper thread pointer can do the access
    to both current thread and task structure via a single pointer.

    It'll allow for a few more cleanups of the fork code, from which e.g.
    ia64 could benefit.

    Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
    [akpm@linux-foundation.org: build fix]
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Ian Molton <spyro@f2s.com>
    Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
    Cc: Mikael Starvik <starvik@axis.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Hirokazu Takata <takata@linux-m32r.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Roman Zippel <zippel@linux-m68k.org>
    Cc: Greg Ungerer <gerg@uclinux.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
    Cc: Richard Curnow <rc@rc0.org.uk>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
    Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
    Cc: Andi Kleen <ak@muc.de>
    Cc: Chris Zankel <chris@zankel.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30, 1 file, -1/+0)

    Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16, 1 file, -0/+49)

    Initial git repository build.  I'm not bothering with the full history,
    even though we have it.  We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!