path: root/arch/arm/lib/uaccess_with_memcpy.c
* Revert "arm: move exports to definitions"Russell King2016-11-231-3/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 4dd1837d7589f468ed109556513f476e7a7f9121. Moving the exports for assembly code into the assembly files breaks KSYM trimming, but also breaks modversions. While fixing the KSYM trimming is trivial, fixing modversions brings us to a technically worse position that we had prior to the above change: - We end up with the prototype definitions divorsed from everything else, which means that adding or removing assembly level ksyms become more fragile: * if adding a new assembly ksyms export, a missed prototype in asm-prototypes.h results in a successful build if no module in the selected configuration makes use of the symbol. * when removing a ksyms export, asm-prototypes.h will get forgotten, with armksyms.c, you'll get a build error if you forget to touch the file. - We end up with the same amount of include files and prototypes, they're just in a header file instead of a .c file with their exports. As for lines of code, we don't get much of a size reduction: (original commit) 47 files changed, 131 insertions(+), 208 deletions(-) (fix for ksyms trimming) 7 files changed, 18 insertions(+), 5 deletions(-) (two fixes for modversions) 1 file changed, 34 insertions(+) 3 files changed, 7 insertions(+), 2 deletions(-) which results in a net total of only 25 lines deleted. As there does not seem to be much benefit from this change of approach, revert the change. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
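For context, a minimal sketch of the two arrangements being compared; the exported symbol is a hypothetical example, only the file layout matters:

    /* armksyms.c arrangement (restored by this revert): the C prototype
     * and the export for an assembly routine sit side by side, so
     * removing one without the other is an immediate build error. */
    extern void __some_asm_helper(void);        /* hypothetical symbol */
    EXPORT_SYMBOL(__some_asm_helper);

    /* Reverted arrangement: EXPORT_SYMBOL() sits next to the routine's
     * definition in its .S file, while the prototype that modversions
     * needs lives separately in asm/asm-prototypes.h, where it is easy
     * to miss when an export is added or removed. */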
* arm: move exports to definitions (Al Viro, 2016-08-07; 1 file, -0/+3)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* arm, thp: remove infrastructure for handling splitting PMDs (Kirill A. Shutemov, 2016-01-15; 1 file, -3/+2)

With the new refcounting we don't need to mark PMDs as splitting, so drop the code that handles this. pmdp_splitting_flush() is not needed either: on splitting a PMD we will do pmdp_clear_flush() + set_pte_at(), and pmdp_clear_flush() will do the IPI as needed for fast_gup.

[arnd@arndb.de: fix unterminated ifdef in header file]

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
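A rough sketch of the replacement sequence the message describes; this is simplified generic-MM code under stated assumptions, not the literal kernel implementation:

    /* Sketch: splitting a huge PMD without a "splitting" marker.
     * pmdp_clear_flush() clears the entry and flushes the TLB, with
     * IPIs as needed, which is already enough to serialize against
     * fast_gup walkers. */
    static void split_pmd_sketch(struct vm_area_struct *vma,
                                 pmd_t *pmd, unsigned long haddr)
    {
            pmd_t old = pmdp_clear_flush(vma, haddr, pmd);

            /* ... build the pte page for the subpages from 'old',
             * then install each entry with set_pte_at() ... */
    }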
* ARM: fix uaccess_with_memcpy() with SW_DOMAIN_PAN (Russell King, 2015-12-15; 1 file, -6/+23)

The uaccess_with_memcpy() code is currently incompatible with the SW PAN code: it takes locks within the region where we have changed the DACR, and can potentially sleep as a result. As we do not save and restore the DACR across co-operative sleep events, this can lead to an incorrect DACR value later in this code path.

Reported-by: Peter Rosin <peda@axentia.se>
Tested-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
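A hedged sketch of the shape of such a fix (simplified inner-loop fragment, not the full patch): the userspace-access window is opened only around the actual copy, so no lock is taken and nothing can sleep while the DACR is modified. uaccess_save_and_enable()/uaccess_restore() are the helpers from the entry below:

    unsigned int ua_flags;

    /* Faulting and page-table locking happen with the normal DACR;
     * put_user() manages its own short uaccess window internally. */
    while (!pin_page_for_write(to, &pte, &ptl)) {
            if (put_user(0, (char __user *)to))
                    goto out;
    }

    /* Userspace is made visible only for the duration of the copy. */
    ua_flags = uaccess_save_and_enable();
    memcpy((void *)to, from, tocopy);
    uaccess_restore(ua_flags);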
* Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus (Russell King, 2015-09-03; 1 file, -3/+3)
* ARM: uaccess: provide uaccess_save_and_enable() and uaccess_restore() (Russell King, 2015-08-25; 1 file, -2/+2)

Provide uaccess_save_and_enable() and uaccess_restore() to permit control of userspace visibility to the kernel, and hook these into the appropriate places in the kernel where we need to access userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8414/1: __copy_to_user_memcpy: fix mmap semaphore usage (Nicolas Pitre, 2015-08-18; 1 file, -1/+1)

The mmap semaphore should not be taken when page faults are disabled. Since pagefault_disable() no longer disables preemption, we now need to use faulthandler_disabled() in place of in_atomic().

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
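A condensed sketch of the check in question (assumption: simplified from __copy_to_user_memcpy(); only the predicate changed in this commit):

    int atomic;

    /* The mmap semaphore must not be taken when fault handling is
     * disabled; in_atomic() is no longer a reliable proxy for that
     * now that pagefault_disable() does not disable preemption. */
    atomic = faulthandler_disabled();       /* was: in_atomic() */
    if (!atomic)
            down_read(&current->mm->mmap_sem);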
* ARM: 7858/1: mm: make UACCESS_WITH_MEMCPY huge page aware (Steven Capper, 2013-10-29; 1 file, -3/+38)

The memory pinning code in uaccess_with_memcpy.c does not check for HugeTLB or THP pmds, and will enter an infinite loop should a __copy_to_user or __clear_user occur against a huge page.

This patch adds detection code for huge pages to pin_page_for_write. As this code can be executed in a fast path, it refers to the actual pmds rather than the vma. If a HugeTLB or THP is found (they have the same pmd representation on ARM), the page table spinlock is taken to prevent modification whilst the page is pinned.

On ARM, huge pages are only represented as pmds, thus no huge pud checks are performed. (For huge puds one would lock the page table in a similar manner as in the pmd case.)

Two helper functions are introduced: pmd_thp_or_huge checks whether or not a page is huge or transparent huge (which have the same pmd layout on ARM), and pmd_hugewillfault detects whether or not a page fault will occur on write to the page.

Running the following test (with the chunking from read_zero removed):

    $ dd if=/dev/zero of=/dev/null bs=10M count=1024

gave:

    2.3 GB/s backed by normal pages,
    2.9 GB/s backed by huge pages,
    5.1 GB/s backed by huge pages, with page mask=HPAGE_MASK.

After some discussion, it was decided not to adopt the HPAGE_MASK, as this would have a significant detrimental effect on the overall system latency due to page_table_lock being held for too long. This could be revisited if split huge page locks are adopted.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
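A condensed sketch of the huge-page branch this describes (assumption: simplified from pin_page_for_write(); the normal-page path is elided):

    static int pin_page_for_write(const void __user *_addr, pte_t **ptep,
                                  spinlock_t **ptlp)
    {
            unsigned long addr = (unsigned long)_addr;
            pgd_t *pgd = pgd_offset(current->mm, addr);
            pud_t *pud;
            pmd_t *pmd;

            if (unlikely(pgd_none(*pgd) || pgd_bad(*pgd)))
                    return 0;
            pud = pud_offset(pgd, addr);
            if (unlikely(pud_none(*pud) || pud_bad(*pud)))
                    return 0;
            pmd = pmd_offset(pud, addr);
            if (unlikely(pmd_none(*pmd)))
                    return 0;

            /* HugeTLB and THP share one pmd representation on ARM, so a
             * single test covers both. Lock the page table so the entry
             * cannot change while the page is pinned. */
            if (unlikely(pmd_thp_or_huge(*pmd))) {
                    spinlock_t *ptl = &current->mm->page_table_lock;
                    spin_lock(ptl);
                    if (unlikely(!pmd_thp_or_huge(*pmd) ||
                                 pmd_hugewillfault(*pmd))) {
                            spin_unlock(ptl);
                            return 0;   /* raced, or a write would fault */
                    }
                    *ptep = NULL;       /* no pte level for a huge pmd */
                    *ptlp = ptl;
                    return 1;
            }

            /* ... normal-page path: pte_offset_map_lock() and pte
             * checks, as before this patch ... */
            return 0;
    }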
* ARM: include linux/highmem.h in uaccess functions (Arnd Bergmann, 2011-10-02; 1 file, -0/+1)

When highpte support is enabled, this is required to build the kernel.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
* ARM: pgtable: add pud-level code (Russell King, 2011-02-21; 1 file, -1/+6)

Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h. This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
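A minimal sketch of where the new level slots into the page-table walk in this file (names as in the kernel's walk helpers):

    pgd = pgd_offset(current->mm, addr);
    pud = pud_offset(pgd, addr);    /* level added by this change; folds
                                       away under pgtable-nopud.h */
    pmd = pmd_offset(pud, addr);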
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h (Tejun Heo, 2010-03-30; 1 file, -0/+1)

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming their availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It is put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to an implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on the arch to make things build (like ipr on powerpc/64, which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
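A sketch of the kind of edit the script makes; the file and include block here are hypothetical:

    /* before: kmalloc()/kfree() compiled only because slab.h arrived
     * implicitly via percpu.h (pulled in through module.h) */
    #include <linux/module.h>

    /* after: the facility actually used is included directly */
    #include <linux/module.h>
    #include <linux/slab.h>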
* [ARM] alternative copy_to_user: more precise fallback threshold (Nicolas Pitre, 2009-05-30; 1 file, -2/+73)

Previous size thresholds were guessed from various user space benchmarks using a kernel with and without the alternative uaccess option. This is however not as precise as a kernel-based test to measure the real speed of each method.

This adds a simple test bench to show the time needed for each method. With this, the optimal size threshold for the alternative implementation can be determined with more confidence. It appears that the optimal threshold for both copy_to_user and clear_user is around 64 bytes. This is not a surprise knowing that the memcpy and memset implementations need at least 64 bytes to achieve maximum throughput.

One might suggest that such a test be used to determine the optimal threshold at run time instead, but results are near enough to 64 on the tested targets concerned by this alternative copy_to_user implementation, so adding the overhead associated with a variable threshold is probably not worth it for now.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
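A hedged sketch of what the resulting threshold test looks like; this is simplified and the entry-point name is illustrative, with __copy_to_user_std and __copy_to_user_memcpy as the two methods being compared:

    #define COPY_TO_USER_THRESHOLD 64   /* value measured by the test bench */

    static unsigned long copy_to_user_sketch(void __user *to,
                                             const void *from,
                                             unsigned long n)
    {
            /* Small copies: the word-at-a-time uaccess routine wins,
             * because the memcpy-based path has a higher setup cost. */
            if (n < COPY_TO_USER_THRESHOLD)
                    return __copy_to_user_std(to, from, n);
            return __copy_to_user_memcpy(to, from, n);
    }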
* [ARM] lower overhead with alternative copy_to_user for small copies (Nicolas Pitre, 2009-05-29; 1 file, -9/+27)

Because the alternate copy_to_user implementation has a higher setup cost than the standard implementation, the size of the memory area to copy is tested and the standard implementation invoked instead when that size is too small. Still, that test is made after the processor has preserved a bunch of registers on the stack, which have to be reloaded right away, needlessly in that case, causing a measurable performance regression compared to plain usage of the standard implementation only.

To make the size test overhead negligible, let's factorize it out of the alternate copy_to_user function, where it is clear to the compiler that no stack frame is needed. Thanks to CONFIG_ARM_UNWIND allowing for frame pointers to be disabled and tail call optimization to kick in, the overhead in the small copy case becomes only 3 assembly instructions.

A similar trick is applied to clear_user as well.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
* [ARM] alternative copy_to_user/clear_user implementation (Lennert Buytenhek, 2009-05-29; 1 file, -0/+139)

This implements {copy_to,clear}_user() by faulting in the userland pages and then using the regular kernel mem{cpy,set}() to copy the data (while holding the page table lock). This is a win if the regular mem{cpy,set}() implementations are faster than the user copy functions, which is the case e.g. on Feroceon, where 8-word STMs (which memcpy() uses under the right conditions) give significantly higher memory write throughput than a sequence of individual 32-bit stores.

Here are numbers for page-sized buffers on some Feroceon cores:

    - copy_to_user on Orion5x goes from 51 MB/s to 83 MB/s
    - clear_user on Orion5x goes from 89 MB/s to 314 MB/s
    - copy_to_user on Kirkwood goes from 240 MB/s to 356 MB/s
    - clear_user on Kirkwood goes from 367 MB/s to 1108 MB/s
    - copy_to_user on Disco-Duo goes from 248 MB/s to 398 MB/s
    - clear_user on Disco-Duo goes from 328 MB/s to 1741 MB/s

Because the setup cost is non-negligible, this is worthwhile only if the amount of data to copy is large enough. The operation falls back to the standard implementation when the amount of data is below a certain threshold. This threshold was determined empirically; however, some targets could eventually benefit from a lower, runtime-determined value for optimal results.

In the copy_from_user() case, this technique does not provide any worthwhile performance gain due to the fact that any kind of read access allocates the cache and subsequent 32-bit loads are just as fast as the equivalent 8-word LDM.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Tested-by: Martin Michlmayr <tbm@cyrius.com>
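A condensed sketch of the technique (assumption: simplified from __copy_to_user_memcpy(); the fallback wiring and the SW-PAN hooks added later are omitted):

    static unsigned long
    copy_to_user_memcpy_sketch(void __user *to, const void *from,
                               unsigned long n)
    {
            while (n) {
                    pte_t *pte;
                    spinlock_t *ptl;
                    unsigned long tocopy;

                    /* Fault the destination page in (writing one byte via
                     * put_user triggers the fault if needed), then pin it:
                     * pin_page_for_write() takes the page table lock. */
                    while (!pin_page_for_write(to, &pte, &ptl)) {
                            if (put_user(0, (char __user *)to))
                                    return n;   /* unresolvable fault */
                    }

                    /* Copy at most up to the end of the current page. */
                    tocopy = (~(unsigned long)to & ~PAGE_MASK) + 1;
                    if (tocopy > n)
                            tocopy = n;

                    /* The page is pinned, so the regular memcpy() (with
                     * its 8-word STM inner loop) can write it directly. */
                    memcpy((void *)to, from, tocopy);
                    to += tocopy;
                    from += tocopy;
                    n -= tocopy;

                    pte_unmap_unlock(pte, ptl);
            }
            return 0;
    }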