path: root/arch/x86/include/asm/mutex_64.h
* compiler/gcc4: Add quirk for 'asm goto' miscompilation bug
  (Ingo Molnar, 2013-10-11, 1 file changed, -2/+2)

    Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down a kernel
    crash to a GCC bug: GCC miscompiles certain 'asm goto' constructs, as
    outlined here:

      http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670

    Implement a workaround suggested by Jakub Jelinek.

    Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
    Reported-by: Oleg Nesterov <oleg@redhat.com>
    Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Suggested-by: Jakub Jelinek <jakub@redhat.com>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: <stable@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
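    For reference, the workaround amounts to appending an empty asm("")
    after each 'asm goto'; a minimal sketch in the spirit of
    include/linux/compiler-gcc4.h (the macro name and placement are
    recalled from that era's kernels, not taken from this log):

      /*
       * Affected GCC versions miscompile 'asm goto'; a trailing empty
       * asm("") keeps the optimizer from mangling the construct.
       * Routing all 'asm goto' users through one macro applies the
       * quirk everywhere at once.
       */
      #define asm_volatile_goto(x...)  do { asm goto(x); asm (""); } while (0)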
* Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
  (Linus Torvalds, 2013-09-04, 1 file changed, -0/+30)

    Pull x86/asm changes from Ingo Molnar:
     "Main changes:

      - Apply low level mutex optimization on x86-64, by Wedson Almeida
        Filho.

      - Change bitops to be naturally 'long', by H Peter Anvin.

      - Add TSX-NI opcodes support to the x86 (instrumentation) decoder, by
        Masami Hiramatsu.

      - Add clang compatibility adjustments/workarounds, by Jan-Simon
        Möller"

    * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, doc: Update uaccess.h comment to reflect clang changes
      x86, asm: Fix a compilation issue with clang
      x86, asm: Extend definitions of _ASM_* with a raw format
      x86, insn: Add new opcodes as of June, 2013
      x86/ia32/asm: Remove unused argument in macro
      x86, bitops: Change bitops to be native operand size
      x86: Use asm-goto to implement mutex fast path on x86-64
| * x86: Use asm-goto to implement mutex fast path on x86-64
    (Wedson Almeida Filho, 2013-06-28, 1 file changed, -0/+30)

    The new implementation allows the compiler to better optimize the code;
    the original implementation is still used when the kernel is compiled
    with older versions of gcc that don't support asm-goto.

    Compiling with gcc 4.7.3, the original mutex_lock() is 60 bytes with
    the fast path taking 16 instructions; the new mutex_lock() is 42 bytes,
    with the fast path taking 12 instructions. The original mutex_unlock()
    is 24 bytes with the fast path taking 7 instructions; the new
    mutex_unlock() is 25 bytes (because the compiler used a 2-byte ret)
    with the fast path taking 4 instructions.

    The two versions of the functions are included below for reference.

    Old:

    ffffffff817742a0 <mutex_lock>:
    ffffffff817742a0: 55                      push   %rbp
    ffffffff817742a1: 48 89 e5                mov    %rsp,%rbp
    ffffffff817742a4: 48 83 ec 10             sub    $0x10,%rsp
    ffffffff817742a8: 48 89 5d f0             mov    %rbx,-0x10(%rbp)
    ffffffff817742ac: 48 89 fb                mov    %rdi,%rbx
    ffffffff817742af: 4c 89 65 f8             mov    %r12,-0x8(%rbp)
    ffffffff817742b3: e8 28 15 00 00          callq  ffffffff817757e0 <_cond_resched>
    ffffffff817742b8: 48 89 df                mov    %rbx,%rdi
    ffffffff817742bb: f0 ff 0f                lock decl (%rdi)
    ffffffff817742be: 79 05                   jns    ffffffff817742c5 <mutex_lock+0x25>
    ffffffff817742c0: e8 cb 04 00 00          callq  ffffffff81774790 <__mutex_lock_slowpath>
    ffffffff817742c5: 65 48 8b 04 25 c0 b7    mov    %gs:0xb7c0,%rax
    ffffffff817742cc: 00 00
    ffffffff817742ce: 4c 8b 65 f8             mov    -0x8(%rbp),%r12
    ffffffff817742d2: 48 89 43 18             mov    %rax,0x18(%rbx)
    ffffffff817742d6: 48 8b 5d f0             mov    -0x10(%rbp),%rbx
    ffffffff817742da: c9                      leaveq
    ffffffff817742db: c3                      retq

    ffffffff81774250 <mutex_unlock>:
    ffffffff81774250: 55                      push   %rbp
    ffffffff81774251: 48 c7 47 18 00 00 00    movq   $0x0,0x18(%rdi)
    ffffffff81774258: 00
    ffffffff81774259: 48 89 e5                mov    %rsp,%rbp
    ffffffff8177425c: f0 ff 07                lock incl (%rdi)
    ffffffff8177425f: 7f 05                   jg     ffffffff81774266 <mutex_unlock+0x16>
    ffffffff81774261: e8 ea 04 00 00          callq  ffffffff81774750 <__mutex_unlock_slowpath>
    ffffffff81774266: 5d                      pop    %rbp
    ffffffff81774267: c3                      retq

    New:

    ffffffff81774920 <mutex_lock>:
    ffffffff81774920: 55                      push   %rbp
    ffffffff81774921: 48 89 e5                mov    %rsp,%rbp
    ffffffff81774924: 53                      push   %rbx
    ffffffff81774925: 48 89 fb                mov    %rdi,%rbx
    ffffffff81774928: e8 a3 0e 00 00          callq  ffffffff817757d0 <_cond_resched>
    ffffffff8177492d: f0 ff 0b                lock decl (%rbx)
    ffffffff81774930: 79 08                   jns    ffffffff8177493a <mutex_lock+0x1a>
    ffffffff81774932: 48 89 df                mov    %rbx,%rdi
    ffffffff81774935: e8 16 fe ff ff          callq  ffffffff81774750 <__mutex_lock_slowpath>
    ffffffff8177493a: 65 48 8b 04 25 c0 b7    mov    %gs:0xb7c0,%rax
    ffffffff81774941: 00 00
    ffffffff81774943: 48 89 43 18             mov    %rax,0x18(%rbx)
    ffffffff81774947: 5b                      pop    %rbx
    ffffffff81774948: 5d                      pop    %rbp
    ffffffff81774949: c3                      retq

    ffffffff81774730 <mutex_unlock>:
    ffffffff81774730: 48 c7 47 18 00 00 00    movq   $0x0,0x18(%rdi)
    ffffffff81774737: 00
    ffffffff81774738: f0 ff 07                lock incl (%rdi)
    ffffffff8177473b: 7f 0a                   jg     ffffffff81774747 <mutex_unlock+0x17>
    ffffffff8177473d: 55                      push   %rbp
    ffffffff8177473e: 48 89 e5                mov    %rsp,%rbp
    ffffffff81774741: e8 aa ff ff ff          callq  ffffffff817746f0 <__mutex_unlock_slowpath>
    ffffffff81774746: 5d                      pop    %rbp
    ffffffff81774747: f3 c3                   repz retq

    Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com>
    Link: http://lkml.kernel.org/r/1372420245-60021-1-git-send-email-wedsonaf@gmail.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
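    For context, a sketch of the asm-goto fast path this change introduces,
    assuming the rough shape __mutex_fastpath_lock() took in
    arch/x86/include/asm/mutex_64.h (the constraint details here are
    illustrative, not copied from the patch):

      /*
       * 'asm goto' branches straight to the 'exit' label when the locked
       * decrement leaves the count non-negative, so the uncontended path
       * needs no call frame and no return-value plumbing for the slow path.
       */
      static inline void __mutex_fastpath_lock(atomic_t *v,
                                               void (*fail_fn)(atomic_t *))
      {
              asm volatile goto(LOCK_PREFIX "   decl %0\n"
                                "   jns %l[exit]\n"
                                : : "m" (v->counter)
                                : "memory", "cc"
                                : exit);
              fail_fn(v);     /* count went negative: contended, slow path */
      exit:
              return;
      }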
* | arch: Make __mutex_fastpath_lock_retval return whether fastpath succeeded or not
    (Maarten Lankhorst, 2013-06-26, 1 file changed, -7/+4)

    This will allow me to call functions that have multiple arguments if
    fastpath fails. This is required to support ticket mutexes, because
    they need to be able to pass an extra argument to the fail function.

    Originally I duplicated the functions, by adding
    __mutex_fastpath_lock_retval_arg. This ended up being just a
    duplication of the existing function, so a way to test if fastpath was
    called ended up being better.

    This also cleaned up the reservation mutex patch some by being able to
    call an atomic_set instead of atomic_xchg, and making it easier to
    detect if the wrong unlock function was previously used.

    Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: dri-devel@lists.freedesktop.org
    Cc: linaro-mm-sig@lists.linaro.org
    Cc: robclark@gmail.com
    Cc: rostedt@goodmis.org
    Cc: daniel@ffwll.ch
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20130620113105.4001.83929.stgit@patser
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
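    A sketch of the interface change on x86-64, assuming the
    straightforward before/after forms (reconstructed from the description
    above, not quoted from the patch):

      /* Before: the helper called the fail function itself, so the fail
       * function could only ever take the counter as its argument. */
      static inline int
      __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
      {
              if (unlikely(atomic_dec_return(count) < 0))
                      return fail_fn(count);
              return 0;
      }

      /* After: the helper only reports success (0) or failure (-1); the
       * caller picks the fail function and can pass it as many arguments
       * as it needs. */
      static inline int __mutex_fastpath_lock_retval(atomic_t *count)
      {
              if (unlikely(atomic_dec_return(count) < 0))
                      return -1;
              return 0;
      }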
* x86: Fix ASM_X86__ header guards
  (H. Peter Anvin, 2008-10-22, 1 file changed, -3/+3)

    Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

     a. the double underscore is ugly and pointless.
     b. no leading underscore violates namespace constraints.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
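    Applied to this file, the rename would look like the following,
    assuming its guard followed the pattern named in the subject:

      /* Old guard: no leading underscore, stray double underscore. */
      #ifndef ASM_X86__MUTEX_64_H
      #define ASM_X86__MUTEX_64_H

      /* New guard: reserved-namespace form used across asm-x86. */
      #ifndef _ASM_X86_MUTEX_64_H
      #define _ASM_X86_MUTEX_64_H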
* x86, um: ... and asm-x86 move
  (Al Viro, 2008-10-22, 1 file changed, -0/+100)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>