path: root/arch/x86/mm/gup.c
Commit message  [Author, Age, Files, Lines]
* x86, doc: Fix minor spelling error in arch/x86/mm/gup.c  [Andy Shevchenko, 2010-02-02, 1 file, -1/+1]

    Fix minor spelling error in comment. No code change.

    Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
    LKML-Reference: <201002022238.o12McDiF018720@imap1.linux-foundation.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
*   Merge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  [Linus Torvalds, 2009-06-20, 1 file, -1/+57]
|\
    * 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (49 commits)
      perfcounter: Handle some IO return values
      perf_counter: Push perf_sample_data through the swcounter code
      perf_counter tools: Define and use our own u64, s64 etc. definitions
      perf_counter: Close race in perf_lock_task_context()
      perf_counter, x86: Improve interactions with fast-gup
      perf_counter: Simplify and fix task migration counting
      perf_counter tools: Add a data file header
      perf_counter: Update userspace callchain sampling uses
      perf_counter: Make callchain samples extensible
      perf report: Filter to parent set by default
      perf_counter tools: Handle lost events
      perf_counter: Add event overflow handling
      fs: Provide empty .set_page_dirty() aop for anon inodes
      perf_counter: tools: Makefile tweaks for 64-bit powerpc
      perf_counter: powerpc: Add processor back-end for MPC7450 family
      perf_counter: powerpc: Make powerpc perf_counter code safe for 32-bit kernels
      perf_counter: powerpc: Change how processor-specific back-ends get selected
      perf_counter: powerpc: Use unsigned long for register and constraint values
      perf_counter: powerpc: Enable use of software counters on 32-bit powerpc
      perf_counter tools: Add and use isprint()
      ...
| * perf_counter, x86: Improve interactions with fast-gup  [Ingo Molnar, 2009-06-19, 1 file, -1/+1]

    Improve a few details in perfcounter call-chain recording that makes
    use of fast-GUP:

    - Use ACCESS_ONCE() to observe the pte value. ptes are fundamentally
      racy and can be changed on another CPU, so we have to be careful
      about how we access them. The PAE branch is already careful with
      read-barriers - but the non-PAE and 64-bit side needs an
      ACCESS_ONCE() to make sure the pte value is observed only once.

    - Make the checks a bit stricter so that we can feed it any kind of
      cra^H^H^H user-space input ;-)

    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
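A minimal user-space sketch of the ACCESS_ONCE() idiom this commit applies. The macro definition mirrors the kernel's; the pteval_t stand-in, the helper name, and the demo values are hypothetical:

    #include <stdio.h>

    /* Kernel-style ACCESS_ONCE(): forces the compiler to read the value
     * exactly once, so all later checks test the same snapshot even if
     * another CPU rewrites the slot concurrently. */
    #define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

    typedef unsigned long pteval_t;  /* stand-in for the kernel's pte type */

    static int pte_snapshot_present(pteval_t *slot)
    {
            pteval_t pte = ACCESS_ONCE(*slot); /* one observation, not many */
            return (pte & 1) != 0;             /* bit 0 is _PAGE_PRESENT on x86 */
    }

    int main(void)
    {
            pteval_t fake_pte = 0x1003;        /* frame number plus flag bits */
            printf("present: %d\n", pte_snapshot_present(&fake_pte));
            return 0;
    }

Without the volatile cast, the compiler is free to re-read *slot at each use, and two reads of a racy pte can disagree.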
| * x86, mm: Add __get_user_pages_fast()  [Peter Zijlstra, 2009-06-15, 1 file, -0/+56]

    Introduce a gup_fast() variant which is usable from IRQ/NMI context.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    CC: Nick Piggin <npiggin@suse.de>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
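A sketch of how an IRQ/NMI-context caller (such as the perf callchain code) would use the new variant, assuming the signature this commit introduces, int __get_user_pages_fast(unsigned long start, int nr_pages, int write, struct page **pages). This is kernel code, not standalone, and pin_one_user_page_nmi() is a hypothetical helper:

    #include <linux/mm.h>

    /* Cannot sleep and cannot take mmap_sem in NMI context, so only the
     * atomic variant is usable; it returns how many pages it pinned. */
    static struct page *pin_one_user_page_nmi(unsigned long uaddr)
    {
            struct page *page;

            if (__get_user_pages_fast(uaddr & PAGE_MASK, 1, 0, &page) != 1)
                    return NULL; /* not present/permitted; no slow fallback here */
            return page;         /* caller must put_page() when done */
    }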
* | x86: don't use 'access_ok()' as a range check in get_user_pages_fast()  [Linus Torvalds, 2009-06-20, 1 file, -2/+7]
|/
    It's really not right to use 'access_ok()', since that is meant for the
    normal "get_user()" and "copy_from/to_user()" accesses, which are done
    through the TLB, rather than through the page tables.

    Why? access_ok() does both too few, and too many checks. Too many,
    because it is meant for regular kernel accesses that will not honor the
    'user' bit in the page tables, and because it honors the USER_DS vs
    KERNEL_DS distinction that we shouldn't care about in GUP. And too few,
    because it doesn't do the 'canonical' check on the address on x86-64,
    since the TLB will do that for us.

    So instead of using a function that isn't meant for this, and does
    something else and much more complicated, just do the real rules: we
    don't want the range to overflow, and on x86-64, we want it to be a
    canonical low address (on 32-bit, all addresses are canonical).

    Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
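The "real rules" as a runnable user-space sketch. VIRTUAL_MASK_SHIFT mimics the x86-64 __VIRTUAL_MASK_SHIFT of that era (47), and gup_range_ok() is a hypothetical name, not the kernel's:

    #include <stdio.h>

    #define VIRTUAL_MASK_SHIFT 47  /* 48-bit canonical low addresses */

    /* 1. start + len must not wrap; 2. on 64-bit, the end must still be
     * a canonical low (user-half) address. */
    static int gup_range_ok(unsigned long start, unsigned long len)
    {
            unsigned long end = start + len;

            if (end < start)                /* overflow */
                    return 0;
            if (end >> VIRTUAL_MASK_SHIFT)  /* non-canonical / kernel half */
                    return 0;
            return 1;
    }

    int main(void)
    {
            printf("%d\n", gup_range_ok(0x400000UL, 4096)); /* 1: ok */
            printf("%d\n", gup_range_ok(~0UL - 100, 4096)); /* 0: wraps */
            return 0;
    }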
* x86: Document get_user_pages_fast()  [Andy Grover, 2009-04-10, 1 file, -0/+16]

    While better than get_user_pages(), the usage of gupf(), especially
    the return values and the fact that it can potentially only partially
    pin the range, warranted some documentation.

    Signed-off-by: Andy Grover <andy.grover@oracle.com>
    Cc: npiggin@suse.de
    Cc: akpm@linux-foundation.org
    LKML-Reference: <1239320729-3262-1-git-send-email-andy.grover@oracle.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
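A kernel-style usage sketch of the calling convention that documentation spells out: the return value is the number of pages pinned, which can be fewer than nr_pages (or negative on error), so a caller must handle a partial pin and drop the references it did take. pin_user_buffer() and its error policy are hypothetical:

    #include <linux/errno.h>
    #include <linux/mm.h>

    static long pin_user_buffer(unsigned long start, int npages,
                                struct page **pages)
    {
            int got = get_user_pages_fast(start, npages, 1 /* write */, pages);

            if (got < npages) {
                    /* partial (or failed) pin: release what we did get */
                    int i;

                    for (i = 0; i < got; i++)
                            put_page(pages[i]);
                    return got < 0 ? got : -EFAULT;
            }
            return got;
    }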
* x86: two trivial sparse annotations  [Harvey Harrison, 2008-10-29, 1 file, -1/+1]

    Impact: fewer sparse warnings, no functional changes

    arch/x86/kernel/vsmp_64.c:87:14: warning: incorrect type in argument 1 (different address spaces)
    arch/x86/kernel/vsmp_64.c:87:14:    expected void const volatile [noderef] <asn:2>*addr
    arch/x86/kernel/vsmp_64.c:87:14:    got void *[assigned] address
    arch/x86/kernel/vsmp_64.c:88:22: warning: incorrect type in argument 1 (different address spaces)
    arch/x86/kernel/vsmp_64.c:88:22:    expected void const volatile [noderef] <asn:2>*addr
    arch/x86/kernel/vsmp_64.c:88:22:    got void *
    arch/x86/kernel/vsmp_64.c:100:23: warning: incorrect type in argument 2 (different address spaces)
    arch/x86/kernel/vsmp_64.c:100:23:    expected void volatile [noderef] <asn:2>*addr
    arch/x86/kernel/vsmp_64.c:100:23:    got void *
    arch/x86/kernel/vsmp_64.c:101:23: warning: incorrect type in argument 1 (different address spaces)
    arch/x86/kernel/vsmp_64.c:101:23:    expected void const volatile [noderef] <asn:2>*addr
    arch/x86/kernel/vsmp_64.c:101:23:    got void *
    arch/x86/mm/gup.c:235:6: warning: incorrect type in argument 1 (different base types)
    arch/x86/mm/gup.c:235:6:    expected void const volatile [noderef] <asn:1>*<noident>
    arch/x86/mm/gup.c:235:6:    got unsigned long [unsigned] [assigned] start

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
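For context, the gup.c warning above is the classic address-space mismatch: a plain unsigned long passed where sparse expects a __user-annotated pointer. A sketch of the shape of such a fix, using the access_ok() calling convention of that era; range_ok() is a hypothetical wrapper, not the actual hunk:

    #include <linux/uaccess.h>

    /* sparse tracks pointer address spaces; the explicit __user cast
     * tells it the integer really is a user-space address. */
    static inline int range_ok(unsigned long start, unsigned long len)
    {
            return access_ok(VERIFY_READ, (void __user *)start, len);
    }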
* x86: make mm/gup.c more virtualization friendly  [Jan Beulich, 2008-10-13, 1 file, -5/+5]

    Since pte_flags() is much cheaper than pte_val() in some virtualized
    environments (namely, Xen), use the former wherever possible.

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Cc: "Nick Piggin" <npiggin@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
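A sketch of the substitution pattern: when only flag bits are tested, pte_flags() avoids the machine-frame-number translation that makes pte_val() expensive under Xen. pte_allows_gup_read() is a hypothetical illustration; the real gup.c checks are slightly richer:

    #include <asm/pgtable.h>

    /* Only the flag bits matter here, so read them with pte_flags()
     * rather than decoding the whole pte value. */
    static inline int pte_allows_gup_read(pte_t pte)
    {
            unsigned long mask = _PAGE_PRESENT | _PAGE_USER;

            return (pte_flags(pte) & mask) == mask; /* was: pte_val(pte) */
    }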
* Fix 'get_user_pages_fast()' with non-page-aligned start address  [Linus Torvalds, 2008-07-28, 1 file, -3/+6]

    Alexey Dobriyan reported trouble with LTP with the new fast-gup code,
    and Johannes Weiner debugged it to non-page-aligned addresses, where
    the new get_user_pages_fast() code would do all the wrong things,
    including just traversing past the end of the requested area due to
    'addr' never matching 'end' exactly.

    This is not a pretty fix, and we may actually want to move the
    alignment into generic code, leaving just the core code per-arch, but
    Alexey verified that the vmsplice01 LTP test doesn't crash with this.

    Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
    Debugged-by: Johannes Weiner <hannes@saeurebad.de>
    Cc: Nick Piggin <npiggin@suse.de>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
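A user-space demonstration of the bug class and the fix: with an unaligned start, a loop keyed on addr != end steps right past end, while masking with PAGE_MASK first makes the termination exact. All the values are made up:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    int main(void)
    {
            unsigned long start = 0x401234;         /* not page aligned */
            int nr_pages = 2;

            unsigned long addr = start & PAGE_MASK; /* the fix: align down */
            unsigned long end = addr +
                    ((unsigned long)nr_pages << PAGE_SHIFT);

            while (addr != end) {                   /* now terminates exactly */
                    printf("visit page at %#lx\n", addr);
                    addr += PAGE_SIZE;
            }
            return 0;
    }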
* x86: support 1GB hugepages with get_user_pages_lockless()  [Nick Piggin, 2008-07-26, 1 file, -3/+40]

    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andi Kleen <andi@firstfloor.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: lockless get_user_pages_fast()  [Nick Piggin, 2008-07-26, 1 file, -0/+258]

    Implement get_user_pages_fast without locking in the fastpath on x86.

    Do an optimistic lockless pagetable walk, without taking any page table
    locks or even mmap_sem. Page table existence is guaranteed by turning
    interrupts off (combined with the fact that we're always looking up the
    current mm, this means we can do the lockless page table walk within
    the constraints of the TLB shootdown design). Basically we can do this
    lockless pagetable walk in a similar manner to the way the CPU's
    pagetable walker does not have to take any locks to find present ptes.

    This patch (combined with the subsequent ones to convert direct IO to
    use it) was found to give about a 10% performance improvement on a
    2-socket, 8-core Intel Xeon system running an OLTP workload on DB2
    v9.5:

    "To test the effects of the patch, an OLTP workload was run on an IBM
    x3850 M2 server with 2 processors (quad-core Intel Xeon processors at
    2.93 GHz) using IBM DB2 v9.5 running Linux 2.6.24rc7 kernel. Comparing
    runs with and without the patch resulted in an overall performance
    benefit of ~9.8%. Correspondingly, oprofiles showed that samples from
    __up_read and __down_read routines that is seen during thread
    contention for system resources was reduced from 2.8% down to .05%.
    Monitoring the /proc/vmstat output from the patched run showed that
    the counter for fast_gup contained a very high number while the
    fast_gup_slow value was zero."

    (fast_gup is the old name for get_user_pages_fast, fast_gup_slow is a
    counter we had for the number of times the slowpath was invoked.)

    The main reason for the improvement is that DB2 has multiple threads
    each issuing direct-IO. Direct-IO uses get_user_pages, and thus the
    threads contend the mmap_sem cacheline, and can also contend on page
    table locks.

    I would anticipate larger performance gains on larger systems, however
    I think DB2 uses an adaptive mix of threads and processes, so it could
    be that thread contention remains pretty constant as machine size
    increases. In which case, we're stuck with "only" a 10% gain.

    The downside of using get_user_pages_fast is that if there is not a
    pte with the correct permissions for the access, we end up falling
    back to get_user_pages, and so the get_user_pages_fast is a bit of
    extra work. However this should not be the common case in most
    performance-critical code.

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: Kconfig fix]
    [akpm@linux-foundation.org: Makefile fix/cleanup]
    [akpm@linux-foundation.org: warning fix]
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Cc: Dave Kleikamp <shaggy@austin.ibm.com>
    Cc: Andy Whitcroft <apw@shadowen.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Badari Pulavarty <pbadari@us.ibm.com>
    Cc: Zach Brown <zach.brown@oracle.com>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
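A heavily condensed sketch of the fast-path shape described above. Interrupts off is what pins the page tables: TLB-shootdown IPIs cannot be serviced, so no other CPU can finish freeing them mid-walk. gup_pgd_range() and slow_path_gup() are stand-ins for the per-level walkers and the get_user_pages() fallback in the real gup.c, which open-codes this loop:

    #include <linux/irqflags.h>
    #include <linux/mm.h>

    /* stand-ins for the per-level walkers and the mmap_sem slow path */
    static void gup_pgd_range(unsigned long addr, unsigned long end,
                              int write, struct page **pages, int *nr);
    static int slow_path_gup(unsigned long start, int done, int nr_pages,
                             int write, struct page **pages);

    int sketch_get_user_pages_fast(unsigned long start, int nr_pages,
                                   int write, struct page **pages)
    {
            unsigned long addr = start & PAGE_MASK;
            unsigned long end = addr +
                    ((unsigned long)nr_pages << PAGE_SHIFT);
            int nr = 0;

            /* irqs off: TLB-shootdown IPIs stall, page tables stay alive */
            local_irq_disable();
            gup_pgd_range(addr, end, write, pages, &nr);
            local_irq_enable();

            if (nr == nr_pages)
                    return nr;
            /* anything not pinned falls back to the locked slow path */
            return slow_path_gup(start, nr, nr_pages, write, pages);
    }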