* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6:
[S390] kexec: Disable ftrace during kexec
[S390] support XZ compressed kernel
[S390] css_bus_type: make it static
[S390] css_driver: remove duplicate members
[S390] css: remove subchannel private
[S390] css: move chsc_private to drv_data
[S390] css: move io_private to drv_data
[S390] cio: move cdev pointer to io_subchannel_private
[S390] cio: move options to io_sch_private
[S390] cio: move asms to generic header
[S390] cio: move orb definitions to separate header
[S390] Write protect module text and RO data
[S390] dasd: get rid of compile warning
[S390] remove superfluous check from do_IRQ
[S390] remove redundant stack check option
Disable ftrace during kexec, the same as on x86 and powerpc; see
commit ac4414e ("powerpc/kdump: Disable ftrace during kexec").
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
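
A minimal sketch of where such a hook lands; the function body is
assumed, while tracer_disable() is the helper include/linux/ftrace.h
provided at the time:

    #include <linux/ftrace.h>
    #include <linux/kexec.h>

    /* Sketch only: the real s390 machine_kexec() does much more. */
    static void machine_kexec_sketch(struct kimage *image)
    {
            /* Keep function tracing away from code that is about to
             * be overwritten by the new kernel image. */
            tracer_disable();
            /* ... hand control to the relocation/data-mover code ... */
    }
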
Add support for XZ compressed kernel images, as already done on x86 and sh.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Implement write protection for kernel module text and read-only data
sections by implementing set_memory_[ro|rw] on s390.
Since s390 has no execute bit in its ptes, NX is not supported.
set_memory_[ro|rw] only works on normal pages, not on large pages, so
if a large page is to be modified the code bails out with a warning.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
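
A hedged sketch of the shape this takes; lookup_pte() is a
hypothetical helper standing in for the real page table walk:

    #include <linux/errno.h>
    #include <linux/types.h>
    #include <asm/pgtable.h>

    /* Sketch: toggle the s390 software protection bit per 4K page.
     * A large page has no pte to edit, so bail out (the real code
     * additionally warns). */
    static int change_page_attr(unsigned long addr, int numpages, bool ro)
    {
            int i;

            for (i = 0; i < numpages; i++, addr += PAGE_SIZE) {
                    pte_t *ptep = lookup_pte(addr); /* hypothetical */

                    if (!ptep)
                            return -EINVAL;         /* large page */
                    if (ro)
                            pte_val(*ptep) |= _PAGE_RO;
                    else
                            pte_val(*ptep) &= ~_PAGE_RO;
            }
            return 0;
    }

    int set_memory_ro(unsigned long addr, int numpages)
    {
            return change_page_attr(addr, numpages, true);
    }

    int set_memory_rw(unsigned long addr, int numpages)
    {
            return change_page_attr(addr, numpages, false);
    }
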
Newer gcc versions offer an architecture-independent option to check
the stack size and warn once it reaches a certain limit. A comparable
option already existed for s390 via -mwarn-dynamicstack. Since one
stack check option is enough, remove the s390-specific stack check
but keep the option that warns about dynamic stack usage, because
that is not covered by the generic option.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge branch 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
* 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support
percpu: Generic support for this_cpu_cmpxchg_double()
alpha: use L1_CACHE_BYTES for cacheline size in the linker script
percpu: align percpu readmostly subsection to cacheline
Fix up trivial conflict in arch/x86/kernel/vmlinux.lds.S due to the
percpu alignment having changed ("x86: Reduce back the alignment of the
per-CPU data section")
Currently the percpu readmostly subsection may share cachelines with
other percpu subsections, which may result in unnecessary cacheline
bouncing and performance degradation.
This patch adds a @cacheline parameter to the PERCPU() and
PERCPU_VADDR() linker macros, makes each arch linker script specify
its cacheline size, and uses it to align the percpu subsections.
This is based on Shaohua's x86-only patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Shaohua Li <shaohua.li@intel.com>
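
Roughly what the macro change amounts to in
include/asm-generic/vmlinux.lds.h (a sketch; the real macro also
emits the __per_cpu_* symbols and further subsections):

    #define PERCPU(cacheline, align)                            \
            . = ALIGN(align);                                   \
            .data..percpu : {                                   \
                    *(.data..percpu..first)                     \
                    . = ALIGN(PAGE_SIZE);                       \
                    *(.data..percpu..page_aligned)              \
                    . = ALIGN(cacheline);                       \
                    *(.data..percpu..readmostly)                \
                    . = ALIGN(cacheline);                       \
                    *(.data..percpu)                            \
                    *(.data..percpu..shared_aligned)            \
            }

An arch linker script then invokes it with its own line size, for
example PERCPU(L1_CACHE_BYTES, PAGE_SIZE).
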
Merge branch 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
rtmutex: tester: Remove the remaining BKL leftovers
lockdep/timers: Explain in detail the locking problems del_timer_sync() may cause
rtmutex: Simplify PI algorithm and make highest prio task get lock
rwsem: Remove redundant asmregparm annotation
rwsem: Move duplicate function prototypes to linux/rwsem.h
rwsem: Unify the duplicate rwsem_is_locked() inlines
rwsem: Move duplicate init macros and functions to linux/rwsem.h
rwsem: Move duplicate struct rwsem declaration to linux/rwsem.h
x86: Cleanup rwsem_count_t typedef
rwsem: Cleanup includes
locking: Remove deprecated lock initializers
cred: Replace deprecated spinlock initialization
kthread: Replace deprecated spinlock initialization
xtensa: Replace deprecated spinlock initialization
um: Replace deprecated spinlock initialization
sparc: Replace deprecated spinlock initialization
mips: Replace deprecated spinlock initialization
cris: Replace deprecated spinlock initialization
alpha: Replace deprecated spinlock initialization
rtmutex-tester: Remove BKL tests
Merge reason: pick up upstream fixes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
All architecture-specific rwsem headers carry the same function
prototypes; only x86 adds asmregparm, which is an empty define on all
other architectures, and s390 has a stale rwsem_downgrade_write()
prototype.
Remove the duplicates and add the prototypes to linux/rwsem.h.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Richard Henderson <rth@twiddle.net>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <20110126195833.970840140@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
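
The consolidated block in include/linux/rwsem.h then reads roughly
like this (the four slow-path entry points every rwsem
implementation exports):

    extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
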
Instead of having the same implementation in each architecture, move
it to linux/rwsem.h and remove the duplicates. It's unlikely that an
arch will ever implement something different, but we can deal with
that when it happens.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Matt Turner <mattst88@gmail.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <20110126195833.876773757@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
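
The single surviving inline, roughly as it lands in linux/rwsem.h:

    static inline int rwsem_is_locked(struct rw_semaphore *sem)
    {
            return sem->count != 0;
    }
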
The rwsem initializers and related macros and functions are mostly the
same. Some of them lack the lockdep initializer, but having it in
place does not matter for architectures which do not support lockdep.
powerpc, sparc, x86: no functional change.
sh, s390: removes the duplicate init_rwsem (inline and #define).
alpha, ia64, xtensa: use the lockdep-capable init function in
lib/rwsem.c, which just uninlines the init function for the
LOCKDEP=n case.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Matt Turner <mattst88@gmail.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <20110126195833.771812729@linutronix.de>
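
A sketch of the unified initializers; __RWSEM_DEP_MAP_INIT() expands
to nothing when lockdep is off, which is why every architecture can
carry it:

    #define __RWSEM_INITIALIZER(name)                       \
            { RWSEM_UNLOCKED_VALUE,                         \
              __SPIN_LOCK_UNLOCKED(name.wait_lock),         \
              LIST_HEAD_INIT((name).wait_list)              \
              __RWSEM_DEP_MAP_INIT(name) }

    #define DECLARE_RWSEM(name) \
            struct rw_semaphore name = __RWSEM_INITIALIZER(name)
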
The difference between these declarations is the data type of the
count member and the lack of lockdep annotations in some
architectures. long is equivalent to signed long, and the
#ifdef-guarded dep_map member does not hurt anyone.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Matt Turner <mattst88@gmail.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <20110126195833.679641914@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
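
The consolidated declaration, roughly as described above (a plain
long count plus the #ifdef-guarded lockdep map):

    struct rw_semaphore {
            long                    count;
            spinlock_t              wait_lock;
            struct list_head        wait_list;
    #ifdef CONFIG_DEBUG_LOCK_ALLOC
            struct lockdep_map      dep_map;
    #endif
    };
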
All rwsem implementations include the same headers. Include them once
from include/linux/rwsem.h instead.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Matt Turner <mattst88@gmail.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <20110126195833.483520950@linutronix.de>
Change futex_atomic_op_inuser and futex_atomic_cmpxchg_inatomic
prototypes to use u32 types for the futex as this is the data type the
futex core code uses all over the place.
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Darren Hart <darren@dvhart.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20110311025058.GD26122@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
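
A sketch of the adjusted arch-side prototypes (int __user * becomes
u32 __user *, matching what kernel/futex.c actually passes):

    int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr);
    int futex_atomic_cmpxchg_inatomic(u32 __user *uaddr,
                                      u32 oldval, u32 newval);
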
The cmpxchg_futex_value_locked API was funny in that it returned either
the original, user-exposed futex value OR an error code such as -EFAULT.
This was confusing at best, and could be a source of livelocks in places
that retry the cmpxchg_futex_value_locked after trying to fix the issue
by running fault_in_user_writeable().
This change makes the cmpxchg_futex_value_locked API more similar to the
get_futex_value_locked one, returning an error code and updating the
original value through a reference argument.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [tile]
Acked-by: Tony Luck <tony.luck@intel.com> [ia64]
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michal Simek <monstr@monstr.eu> [microblaze]
Acked-by: David Howells <dhowells@redhat.com> [frv]
Cc: Darren Hart <darren@dvhart.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20110311024851.GC26122@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
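
The API after the change, sketched from the description above:

    /* Returns 0 or -EFAULT; the value read back from user space
     * lands in *curval instead of being the return value (where
     * -EFAULT was also a perfectly legal futex value). */
    int cmpxchg_futex_value_locked(u32 *curval, u32 __user *uaddr,
                                   u32 uval, u32 newval);
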
NET_SKB_PAD has been increased from 32 to 64 and later to
max(32, L1_CACHE_BYTES). On s390 this led to a 25% throughput decrease
for streaming workloads, accompanied by a 37% increase in CPU cost.
Define an architecture-specific NET_SKB_PAD with the old value of 32.
Signed-off-by: Horst Hartmann <horsth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
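
The override works because linux/skbuff.h only defines NET_SKB_PAD
when the architecture has not, so the arch header can pin the old
value (sketch):

    /* arch/s390 side: */
    #define NET_SKB_PAD     32
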
Use inline assemblies for atomic_read/set(). This way there shouldn't
be any questions or subtle volatile semantics left.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
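
A sketch of the inline-assembly variants: a single load or store
instruction that the compiler can neither tear nor cache:

    static inline int atomic_read(const atomic_t *v)
    {
            int c;

            asm volatile("l %0,%1" : "=d" (c) : "Q" (v->counter));
            return c;
    }

    static inline void atomic_set(atomic_t *v, int i)
    {
            asm volatile("st %1,%0" : "=Q" (v->counter) : "d" (i));
    }
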
The 'output' variable is passed from decompress_kernel to
check_ipl_parmblock before it is initialized, which disables the
safeguard against overwriting the ipl parameter block.
Fix this by passing the correct value to check_ipl_parmblock.
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Let's make atomic_read() and atomic_set() behave like on all/most other
architectures. Generated code is identical with gcc 4.5.2.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
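
The conventional form most architectures used at the time, which this
converges on (sketch):

    static inline int atomic_read(const atomic_t *v)
    {
            return (*(volatile int *)&(v)->counter);
    }

    static inline void atomic_set(atomic_t *v, int i)
    {
            v->counter = i;
    }
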
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: sha-s390 - Reset index after processing partial block
The partial block handling in sha-s390 is broken when a partial block
is followed by an update that fills it up: the newly left-over bytes
are stored immediately after the previous partial block instead of at
the start of the buffer.
This patch fixes this by resetting the index pointer.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
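
The shape of the update path with the fix applied (a sketch of the
s390_sha_update flow; field names follow the s390 sha_common code):

    if (index) {                            /* stored partial block */
            memcpy(ctx->buf + index, data, bsize - index);
            ctx->func(ctx->state, ctx->buf, bsize);
            data += bsize - index;
            len -= bsize - index;
            index = 0;      /* the fix: the next left-over bytes
                               start at buf[0] again */
    }
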
task_show_regs used to be a debugging aid in the early bring-up days
of Linux on s390. /proc/<pid>/status is a world-readable file, so it
is not a good idea to expose the registers of a process there. The
only correct fix is to remove task_show_regs.
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 6f9a3c33 ("[S390] cleanup s390 Kconfig") accidentally changed
the default for CONFIG_CHSC_SCH. Reset it to m.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The implementation of the cache flushing interfaces on s390 is
identical to the default implementation in asm-generic.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
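
Such a conversion typically reduces the arch header to pulling in the
generic one (a sketch, assuming the usual asm-generic pattern):

    /* arch/s390/include/asm/cacheflush.h, afterwards: */
    #include <asm-generic/cacheflush.h>
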
Fix this build error with !CONFIG_SWAP, caused by transparent huge
page support:
In file included from mm/pgtable-generic.c:9:0:
/linux-2.6/arch/s390/include/asm/tlb.h: In function 'tlb_remove_page':
/linux-2.6/arch/s390/include/asm/tlb.h:92:2: error: implicit declaration of function 'page_cache_release'
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
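
The likely shape of such a fix, stated as an assumption rather than
the verified patch: page_cache_release() is declared via
linux/pagemap.h, so the header that uses it must include it directly:

    #include <linux/pagemap.h>      /* for page_cache_release() */
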
The uaccess functions copy_in_user_std and clear_user_std fail to
switch back from secondary space mode to primary space mode with sacf
in case of an unresolvable page fault. We need to make sure that the
switch back to primary mode is done in all cases, otherwise the code
following the uaccess inline assembly will crash.
Reported-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
After page_table_free_rcu has removed a page from the pgtable_list,
page_table_free had better not add it again; otherwise
page_table_alloc can reuse a page table fragment that is still going
through the RCU grace period.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6:
[S390] MAINTAINERS: Update zcrypt driver entry
[S390] Randomize PIEs
[S390] Randomise the brk region
[S390] Add is_32bit_task() helper function
[S390] Randomize lower bits of stack address
[S390] Randomize mmap start address
[S390] Rearrange mmap.c
[S390] Enable flexible mmap layout for 64 bit processes
[S390] vdso: don't map at mmap_base
[S390] reduce minimum gap between stack and mmap_base
[S390] mmap: consider stack address randomization
[S390] Update default configuration
[S390] cio: path_event overindication after resume
Randomize ELF_ET_DYN_BASE, which is used when loading position
independent executables.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
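
A sketch with an assumed helper name (randomize_et_dyn() is not named
in the text above):

    /* ELF loader base for ET_DYN binaries, now randomized: */
    unsigned long randomize_et_dyn(unsigned long base);
    #define ELF_ET_DYN_BASE (randomize_et_dyn(STACK_TOP / 3 * 2))
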
Randomize heap address like other architectures do already.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
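
A sketch mirroring the x86 hook; brk_rnd() stands in for an assumed
helper returning a small page-aligned random offset:

    unsigned long arch_randomize_brk(struct mm_struct *mm)
    {
            unsigned long ret = PAGE_ALIGN(mm->brk + brk_rnd());

            return ret > mm->brk ? ret : mm->brk;
    }
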
Add a helper function that tells us whether a task is running in ESA
(31-bit) mode.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
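
A plausible shape for the helper (sketch; on a 31-bit kernel every
task qualifies):

    #ifdef CONFIG_64BIT
    #define is_32bit_task()         (test_thread_flag(TIF_31BIT))
    #else
    #define is_32bit_task()         (1)
    #endif
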
Randomize the lower bits of the stack address like x86 and powerpc.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Randomize the mmap start address within an 8MB range.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
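
A sketch of an 8MB randomization helper (0x7ff pages of 4KB each):

    static unsigned long mmap_rnd(void)
    {
            if (!(current->flags & PF_RANDOMIZE))
                    return 0;
            /* 8MB of randomization for mmap_base */
            return (get_random_int() & 0x7ffUL) << PAGE_SHIFT;
    }
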
Shuffle code around so it looks more like x86 and powerpc.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Historically, 64-bit processes used the legacy address layout.
However, there is no reason why 64-bit processes shouldn't benefit
from the flexible mmap layout as well.
Therefore just enable it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
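
With the 64-bit special case gone, layout selection reduces to the
usual personality and rlimit checks (sketch):

    static inline int mmap_is_legacy(void)
    {
            if (current->personality & ADDR_COMPAT_LAYOUT)
                    return 1;
            if (rlimit(RLIMIT_STACK) == RLIM_INFINITY)
                    return 1;
            return sysctl_legacy_va_layout;
    }
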
The vdso object is currently always mapped with mm->mmap_base as the
requested address. With the flexible mmap layout this means it gets
mapped above mmap_base, potentially stealing a bit of address space
that is reserved for the stack.
With the flexible mmap layout the object should be mapped below
mmap_base; with the legacy layout, above it.
To fix this, don't request any specific address and let the mmap code
figure out an address that fits.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
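
A sketch of the mapping call after the fix; the surrounding names
(vdso_pages, rc, out_up) are assumed from typical vdso setup code:

    /* request address 0: the mmap code picks a slot that respects
     * whichever layout is active */
    vdso_base = get_unmapped_area(NULL, 0, vdso_pages << PAGE_SHIFT, 0, 0);
    if (IS_ERR_VALUE(vdso_base)) {
            rc = vdso_base;
            goto out_up;
    }
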
Reduce the minimum gap between stack and mmap_base to 32MB. That way
there is a bit more space for heap and mmap in tight 31-bit address
spaces.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
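
The constants this implies, in the style of arch mmap.c files
(sketch):

    #define MIN_GAP (32 * 1024 * 1024)      /* 32MB */
    #define MAX_GAP (STACK_TOP / 6 * 5)
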
Consider stack address randomization when calculating mmap_base for
the flexible mmap layout. Because of address randomization the stack
address can be up to 8MB lower than STACK_TOP.
When calculating mmap_base this isn't taken into account, which can
lead to the gap between the real stack top and mmap_base being
smaller than what ulimit specifies for the maximum stack size.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
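
A sketch of the corrected calculation; stack_maxrandom_size() is an
assumed helper for the worst-case 8MB stack randomization:

    static inline unsigned long mmap_base(void)
    {
            unsigned long gap = rlimit(RLIMIT_STACK);

            if (gap < MIN_GAP)
                    gap = MIN_GAP;
            else if (gap > MAX_GAP)
                    gap = MAX_GAP;
            /* the real stack top may be up to 8MB below STACK_TOP */
            return STACK_TOP - stack_maxrandom_size() - (gap & PAGE_MASK);
    }
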
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* 'kvm-updates/2.6.38' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (142 commits)
KVM: Initialize fpu state in preemptible context
KVM: VMX: when entering real mode align segment base to 16 bytes
KVM: MMU: handle 'map_writable' in set_spte() function
KVM: MMU: audit: allow audit more guests at the same time
KVM: Fetch guest cr3 from hardware on demand
KVM: Replace reads of vcpu->arch.cr3 by an accessor
KVM: MMU: only write protect mappings at pagetable level
KVM: VMX: Correct asm constraint in vmcs_load()/vmcs_clear()
KVM: MMU: Initialize base_role for tdp mmus
KVM: VMX: Optimize atomic EFER load
KVM: VMX: Add definitions for more vm entry/exit control bits
KVM: SVM: copy instruction bytes from VMCB
KVM: SVM: implement enhanced INVLPG intercept
KVM: SVM: enhance mov DR intercept handler
KVM: SVM: enhance MOV CR intercept handler
KVM: SVM: add new SVM feature bit names
KVM: cleanup emulate_instruction
KVM: move complete_insn_gp() into x86.c
KVM: x86: fix CR8 handling
KVM guest: Fix kvm clock initialization when it's configured out
...
IA64 support forces us to abstract the allocation of the kvm
structure. But instead of mixing this up with arch-specific
initialization and doing the same on destruction, split both steps.
This allows moving generic destruction calls into generic code.
It also fixes error clean-up on failures of kvm_create_vm for IA64.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
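
The split, as described, amounts to a pair of arch hooks (sketch):

    struct kvm *kvm_arch_alloc_vm(void);    /* arch allocates struct kvm */
    void kvm_arch_free_vm(struct kvm *kvm); /* generic code frees via this */
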
* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6: (65 commits)
[S390] prevent unnecessary loops_per_jiffy recalculation
[S390] cpuinfo: use get_online_cpus() instead of preempt_disable()
[S390] smp: remove cpu hotplug messages
[S390] mutex: enable spinning mutex on s390
[S390] mutex: Introduce arch_mutex_cpu_relax()
[S390] cio: fix ccwgroup unregistration race condition
[S390] perf: add DWARF register lookup for s390
[S390] cleanup ftrace backend functions
[S390] ptrace cleanup
[S390] smp/idle: call init_idle() before starting a new cpu
[S390] smp: delay idle task creation
[S390] dasd: Correct retry counter for terminated I/O.
[S390] dasd: Add support for raw ECKD access.
[S390] dasd: Prevent deadlock during suspend/resume.
[S390] dasd: Improve handling of stolen DASD reservation
[S390] dasd: do path verification for paths added at runtime
[S390] dasd: add High Performance FICON multitrack support
[S390] cio: reduce memory consumption of itcw structures
[S390] nmi: enable machine checks early
[S390] qeth: buffer count imbalance
...
When the seqfile /proc/cpuinfo is accessed, loops_per_jiffy gets
recalculated for each possible cpu, although its value is only needed
on first access.
In addition, loops_per_jiffy should be recalculated when the machine
reports a capability change.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use get_online_cpus() instead of preempt_disable() to make sure cpus
don't go offline while their per-cpu data is accessed.
The preempt_disable() approach is old code which predates the
availability of get_online_cpus().
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
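
The pattern in question, as a self-contained sketch (the per-cpu
counter is purely illustrative):

    #include <linux/cpu.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, counter);

    static unsigned long sum_counters(void)
    {
            unsigned long total = 0;
            int cpu;

            get_online_cpus();      /* pins cpu hotplug; may sleep */
            for_each_online_cpu(cpu)
                    total += per_cpu(counter, cpu);
            put_online_cpus();
            return total;
    }
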
Get rid of the messages that indicate whether a cpu went online or
offline. There is nothing special about this anymore, and these
messages might flood the kernel log buffer, which makes debugging
harder since more important messages might be overwritten.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
This enables the spinning mutex feature on s390 by removing
HAVE_DEFAULT_NO_SPIN_MUTEXES from arch/s390/Kconfig.
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do
more than needed in these specific mutex spin loops. On System z we
also give up the time slice of the virtual cpu in cpu_relax(), which
prevents effective spinning on the mutex.
This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture
that selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still
cpu_relax(), so this patch should not affect architectures other
than System z for now.
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
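
A sketch of the two halves of the mechanism:

    /* generic side (linux/mutex.h): the default stays cpu_relax() */
    #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
    #define arch_mutex_cpu_relax()  cpu_relax()
    #endif

    /* s390 side: spin with a plain compiler barrier, so the virtual
     * cpu keeps its time slice while waiting for the mutex */
    #define arch_mutex_cpu_relax()  barrier()
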
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Overhaul program event recording and the code dealing with the ptrace
user space interface.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>