author	Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2017-09-09 00:56:03 +0300
committer	Ingo Molnar <mingo@kernel.org>	2017-09-13 11:26:52 +0200
commit	5b65c4677a57a1d4414212f9995aa0e46a21ff80
tree	0b47579035adebb1af14f009abf355e447ced9b0 /mm/hugetlb_cgroup.c
parent	9e52fc2b50de3a1c08b44f94c610fbe998c0031a
mm, x86/mm: Fix performance regression in get_user_pages_fast()
The 0-day test bot found a performance regression that was tracked down to
switching x86 to the generic get_user_pages_fast() implementation:
http://lkml.kernel.org/r/20170710024020.GA26389@yexl-desktop
The regression was caused by the generic get_user_pages_fast() using
local_irq_save() + local_irq_restore() to disable interrupts, whereas
the x86-specific implementation used the cheaper local_irq_disable() +
local_irq_enable() pair.
The fix is to make get_user_pages_fast() use local_irq_disable() and
local_irq_enable(), leaving local_irq_save() + local_irq_restore() for
__get_user_pages_fast(), which can be called with interrupts already
disabled.
Numbers for pinning a gigabyte of memory, one page at a time, 20 repeats:
Before: Average: 14.91 ms, stddev: 0.45 ms
After: Average: 10.76 ms, stddev: 0.18 ms
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thorsten Leemhuis <regressions@leemhuis.info>
Cc: linux-mm@kvack.org
Fixes: e585513b76f7 ("x86/mm/gup: Switch GUP to the generic get_user_page_fast() implementation")
Link: http://lkml.kernel.org/r/20170908215603.9189-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>