path: root/arch/arm/lib
Commit message    Author    Age    Files    Lines
* Merge branch 'copy_user' of git://git.marvell.com/orion into develRussell King2009-06-144-2/+235
|\
| * [ARM] alternative copy_to_user: more precise fallback thresholdNicolas Pitre2009-05-301-2/+73
| |   Previous size thresholds were guessed from various user space benchmarks using a kernel with and without the alternative uaccess option. This is however not as precise as a kernel-based test to measure the real speed of each method.
| |   This adds a simple test bench to show the time needed for each method. With this, the optimal size threshold for the alternative implementation can be determined with more confidence. It appears that the optimal threshold for both copy_to_user and clear_user is around 64 bytes. This is not a surprise knowing that the memcpy and memset implementations need at least 64 bytes to achieve maximum throughput.
| |   One might suggest that such a test be used to determine the optimal threshold at run time instead, but results are near enough to 64 on tested targets concerned by this alternative copy_to_user implementation, so adding some overhead associated with a variable threshold is probably not worth it for now.
| |   Signed-off-by: Nicolas Pitre <nico@marvell.com>
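The measure-and-compare approach above can be pictured with a rough user-space analogue. This is only a sketch of the idea: the real test bench ran in the kernel and timed the __copy_to_user/clear_user variants, not plain memcpy, and the sizes and iteration counts here are arbitrary.

    /* Illustrative sketch only: time a copy routine over increasing sizes to
     * look for the crossover point between two implementations. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    static double time_copy(void *dst, const void *src, size_t size, long iters)
    {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < iters; i++)
                    memcpy(dst, src, size);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
            static char src[4096], dst[4096];

            for (size_t size = 16; size <= 256; size += 16)
                    printf("%4zu bytes: %.3f s\n", size,
                           time_copy(dst, src, size, 1000000));
            return 0;
    }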
| * [ARM] lower overhead with alternative copy_to_user for small copiesNicolas Pitre2009-05-291-9/+27
| |   Because the alternate copy_to_user implementation has a higher setup cost than the standard implementation, the size of the memory area to copy is tested and the standard implementation invoked instead when that size is too small. Still, that test is made after the processor has preserved a bunch of registers on the stack which have to be reloaded right away needlessly in that case, causing a measurable performance regression compared to plain usage of the standard implementation only.
| |   To make the size test overhead negligible, let's factorize it out of the alternate copy_to_user function where it is clear to the compiler that no stack frame is needed. Thanks to CONFIG_ARM_UNWIND allowing for frame pointers to be disabled and tail call optimization to kick in, the overhead in the small copy case becomes only 3 assembly instructions.
| |   A similar trick is applied to clear_user as well.
| |   Signed-off-by: Nicolas Pitre <nico@marvell.com>
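In C terms the trick reads roughly like the sketch below. The function names and the 64-byte threshold are illustrative, not the literal kernel code: the point is that with frame pointers disabled the compiler turns both branches into tail calls, so the small-copy path costs only a compare and a branch.

    /* Hedged sketch of the wrapper idea, not the kernel's actual source. */
    #define COPY_TO_USER_THRESHOLD 64   /* assumed value, per the commit above */

    unsigned long __copy_to_user_std(void *to, const void *from, unsigned long n);
    unsigned long __copy_to_user_memcpy(void *to, const void *from, unsigned long n);

    unsigned long __copy_to_user(void *to, const void *from, unsigned long n)
    {
            /* No locals and no saved registers: with tail-call optimization
             * this compiles down to a compare and two branches. */
            if (n < COPY_TO_USER_THRESHOLD)
                    return __copy_to_user_std(to, from, n);
            return __copy_to_user_memcpy(to, from, n);
    }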
| * [ARM] alternative copy_to_user/clear_user implementationLennert Buytenhek2009-05-292-0/+142
| |   This implements {copy_to,clear}_user() by faulting in the userland pages and then using the regular kernel mem{cpy,set}() to copy the data (while holding the page table lock). This is a win if the regular mem{cpy,set}() implementations are faster than the user copy functions, which is the case e.g. on Feroceon, where 8-word STMs (which memcpy() uses under the right conditions) give significantly higher memory write throughput than a sequence of individual 32-bit stores.
| |   Here are numbers for page-sized buffers on some Feroceon cores:
| |    - copy_to_user on Orion5x goes from 51 MB/s to 83 MB/s
| |    - clear_user on Orion5x goes from 89 MB/s to 314 MB/s
| |    - copy_to_user on Kirkwood goes from 240 MB/s to 356 MB/s
| |    - clear_user on Kirkwood goes from 367 MB/s to 1108 MB/s
| |    - copy_to_user on Disco-Duo goes from 248 MB/s to 398 MB/s
| |    - clear_user on Disco-Duo goes from 328 MB/s to 1741 MB/s
| |   Because the setup cost is non-negligible, this is worthwhile only if the amount of data to copy is large enough. The operation falls back to the standard implementation when the amount of data is below a certain threshold. This threshold was determined empirically; however, some targets could eventually benefit from a lower, runtime-determined value for optimal results.
| |   In the copy_from_user() case, this technique does not provide any worthwhile performance gain due to the fact that any kind of read access allocates the cache and subsequent 32-bit loads are just as fast as the equivalent 8-word LDM.
| |   Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
| |   Signed-off-by: Nicolas Pitre <nico@marvell.com>
| |   Tested-by: Martin Michlmayr <tbm@cyrius.com>
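Conceptually the technique looks like the heavily simplified, kernel-style sketch below. The real implementation additionally pins the user page and holds the page table lock across the memcpy so the mapping cannot disappear; that bookkeeping is elided here, and the function name is made up for illustration.

    /* Greatly simplified, illustration only: fault each user page in with a
     * dummy write, then let the ordinary memcpy() do the heavy lifting one
     * page at a time.  Real code must pin the page and hold the page table
     * lock between the fault and the memcpy. */
    unsigned long copy_to_user_via_memcpy(void __user *to, const void *from,
                                          unsigned long n)
    {
            while (n) {
                    unsigned long chunk;

                    if (__put_user(0, (char __user *)to))    /* fault page in */
                            break;                           /* real fault: stop */

                    chunk = PAGE_SIZE - ((unsigned long)to & ~PAGE_MASK);
                    if (chunk > n)
                            chunk = n;

                    memcpy((void __force *)to, from, chunk); /* 8-word STMs here */
                    to += chunk;
                    from += chunk;
                    n -= chunk;
            }
            return n;    /* bytes left uncopied, as the uaccess API expects */
    }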
| * [ARM] allow for alternative __copy_to_user/__clear_user implementationsNicolas Pitre2009-05-292-2/+4
| |   This allows for optional alternative implementations of __copy_to_user and __clear_user, with a possible runtime fallback to the standard version when the alternative provides no gain over that standard version. This is done by making the standard __copy_to_user into a weak alias for the symbol __copy_to_user_std. Same thing for __clear_user.
| |   Those two functions are particularly good candidates to have alternative implementations for, since they rely on the STRT instruction which has lower performance than STM instructions on some CPU cores such as the ARM1176 and Marvell Feroceon.
| |   Signed-off-by: Nicolas Pitre <nico@marvell.com>
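The weak-alias arrangement can be pictured with a small C sketch using GCC attribute syntax. In the kernel the standard routine is written in assembly with a .weak directive, so this is only an illustration of the linkage trick, not the actual source.

    /* Sketch of the weak-alias technique. */
    unsigned long __copy_to_user_std(void *to, const void *from, unsigned long n)
    {
            /* the ordinary STRT-based implementation would live here */
            return 0;
    }

    /* By default __copy_to_user is just a weak alias for the standard routine.
     * A platform that supplies a strong __copy_to_user overrides it at link
     * time, and can still call __copy_to_user_std as its runtime fallback. */
    unsigned long __copy_to_user(void *to, const void *from, unsigned long n)
            __attribute__((weak, alias("__copy_to_user_std")));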
* | [ARM] barriers: improve xchg, bitops and atomic SMP barriersRussell King2009-05-281-0/+2
|/
|   Mathieu Desnoyers pointed out that the ARM barriers were lacking:
|    - cmpxchg, xchg and atomic add return need memory barriers on architectures which can reorder the relative order in which memory read/writes can be seen between CPUs, which seems to include recent ARM architectures. Those barriers are currently missing on ARM.
|    - test_and_xxx_bit were missing SMP barriers.
|   So put these barriers in. Provide separate atomic_add/atomic_sub operations which do not require barriers.
|   Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge branch 'clps7500' into develRussell King2008-11-271-1/+0
|\
| |   Conflicts:
| |       arch/arm/Kconfig
| * [ARM] clps7500: remove supportRussell King2008-11-271-1/+0
| |   The CLPS7500 platform has not built since 2.6.22-git7 and there seems to be no interest in fixing it. So, remove the platform support.
| |   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | [ARM] remove memzero()Russell King2008-11-271-1/+1
|/
|   As suggested by Andrew Morton, remove memzero() - it's not supported on other architectures so use of it is a potential build breaking bug. Since the compiler optimizes memset(x,0,n) to __memzero() perfectly well, we don't miss out on the underlying benefits of memzero().
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge branch 'ptebits' into develRussell King2008-10-091-1/+1
|\
| |   Conflicts:
| |       arch/arm/Kconfig
| * [ARM] 5226/1: remove unmatched comment end.Jean-Christophe DUBOIS2008-08-281-1/+1
| |   Remove unmatched comment end.
| |   Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
| |   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | [ARM] 5231/1: Do not save the frame pointer in the csum_partial_copy_* functionsCatalin Marinas2008-09-012-8/+4
| |   Since the other assembly functions do not seem to save the frame pointer onto the stack, this patch changes the csum_partial_copy_* functions to behave in the same way.
| |   Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| |   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | [ARM] 5232/1: Do not post-index STRT instruction in clear_user.SCatalin Marinas2008-09-011-1/+1
| |   The last strnebt instruction has a post-index of 1, but the address register is set to 0 in the next instruction, so there is no need for post-indexing.
| |   Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| |   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | [ARM] 5227/1: Add the ENDPROC declarations to the .S filesCatalin Marinas2008-09-0144-14/+100
|/
|   This declaration specifies the "function" type and size for various assembly functions, mainly needed for generating the correct branch instructions in Thumb-2.
|   Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Move include/asm-arm/arch-* to arch/arm/*/include/machRussell King2008-08-073-3/+3
|   This just leaves include/asm-arm/plat-* to deal with.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Remove asm/hardware.h, use asm/arch/hardware.h insteadRussell King2008-08-073-3/+3
|   Remove includes of asm/hardware.h in addition to asm/arch/hardware.h. Then, since asm/hardware.h only exists to include asm/arch/hardware.h, update everything to directly include asm/arch/hardware.h and remove asm/hardware.h.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] move include/asm-arm to arch/arm/include/asmRussell King2008-08-022-2/+2
|   Move platform independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] cache align memset and memzeroNicolas Pitre2008-06-222-0/+90
|   This is a natural extension following the previous patch. Non-Feroceon-based targets are unchanged.
|   Signed-off-by: Nicolas Pitre <nico@marvell.com>
|   Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
* [ARM] cache align destination pointer when copying memory for some processorsNicolas Pitre2008-06-222-20/+4
|   The implementation for memory copy functions on ARM had a (disabled) provision for aligning the source pointer before loading registers with data. Turns out that aligning the _destination_ pointer is much more useful, as the read side is already sufficiently helped with the use of preload.
|   So this changes the definition of the CALGN() macro to target the destination pointer instead, and turns it on for Feroceon processors where the gain is very noticeable.
|   Signed-off-by: Nicolas Pitre <nico@marvell.com>
|   Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
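A conceptual C analogue of what destination alignment buys: the actual CALGN() logic lives in the ARM assembly copy templates, and the 32-byte cache line size below is just an assumption for the sake of the example.

    #include <stdint.h>
    #include <string.h>

    /* Illustration only: copy a short head byte-by-byte until the *destination*
     * is cache-line aligned, then do the bulk copy with an aligned destination
     * so the store stream can use whole cache lines / multi-word stores. */
    void *dst_aligned_copy(void *dst, const void *src, size_t n)
    {
            unsigned char *d = dst;
            const unsigned char *s = src;
            size_t head = (size_t)(-(uintptr_t)d & 31);   /* assume 32-byte lines */

            if (head > n)
                    head = n;
            while (head--) {
                    *d++ = *s++;
                    n--;
            }
            memcpy(d, s, n);   /* bulk copy now starts on an aligned destination */
            return dst;
    }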
* [ARM] fix cache alignment code in memset.SNicolas Pitre2008-06-221-1/+1
|   This code is currently disabled, which explains why no one was affected.
|   Signed-off-by: Nicolas Pitre <nico@marvell.com>
|   Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
* [ARM] spelling fixesSimon Arlott2007-05-201-1/+1
|   Spelling fixes in arch/arm/.
|   Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] getuser.S and putuser.S don't need thread_info.h nor asm-offsets.hRussell King2007-04-212-4/+0
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Add ability to dump exception stacks to kernel backtracesRussell King2007-04-211-84/+81
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Remove obsolete #include <linux/config.h>Jörn Engel2006-06-303-3/+0
|   Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
|   Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [ARM] nommu: backtrace code must not reference a discarded sectionRussell King2006-06-281-4/+1
|   The code in "1007:" is in the .fixup section, which in the mmuless case is discarded. Since this code is referenced from the .text section, it causes a link error. Move this code into the .text section instead.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] nommu: uaccess tweaksRussell King2006-06-281-5/+8
|   MMUless systems have only one address space for all threads, so both the usual access_ok() checks, and the exception handling do not make much sense.
|   Hence, discard the fixup and exception tables at link time, use memcpy/memset for the user copy/clearing functions, and define the permission check macros to be constants.
|   Some of this patch was derived from the equivalent patch by Hyok S. Choi.
|   Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
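The shape of that simplification, very roughly: on an MMU-less build there is one flat address space, so the user accessors can degenerate to plain memory operations and the checks collapse to constants. Macro names and argument lists below are illustrative, not the exact kernel definitions of that era.

    /* Sketch only, assuming a single flat address space. */
    #define access_ok(addr, size)            1
    #define __copy_to_user(to, from, n)      (memcpy((to), (from), (n)), 0)
    #define __copy_from_user(to, from, n)    (memcpy((to), (from), (n)), 0)
    #define __clear_user(addr, n)            (memset((addr), 0, (n)), 0)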
* [ARM] Remove the __arch_* layer from uaccess.hRussell King2006-06-286-13/+13
|   Back in the days when we had armo (26-bit) and armv (32-bit) combined, we had an additional layer to the uaccess macros to ensure correct typing. Since we no longer have 26-bit in this tree, we no longer need this layer, so eliminate it.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Remove save_lr/restore_pc macrosRussell King2006-06-252-6/+4
|   As for the RETINSTR/LOADREGS macros, these were for compatibility with 26-bit ARMs. No longer required, so remove them.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Remove LOADREGS macroRussell King2006-06-2512-26/+26
|   As for RETINSTR, LOADREGS is a left-over from the 26-bit days. Remove it.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Remove RETINSTR macroRussell King2006-06-259-21/+21
|   RETINSTR is a left-over from the days when we had 26-bit and 32-bit CPU support integrated into the same tree. Since this is no longer the case, we can now remove RETINSTR.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3524/1: ARM EABI: more 64-bit aligned stack fixesNicolas Pitre2006-05-162-4/+4
|   Patch from Nicolas Pitre
|   Assembly code that calls C code must ensure the C code sees a 64-bit aligned stack pointer.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge master.kernel.org:/home/rmk/linux-2.6-armLinus Torvalds2006-03-281-1/+1
|\
| |   * master.kernel.org:/home/rmk/linux-2.6-arm:
| |       [ARM] 3388/1: ixp23xx: add core ixp23xx support
| |       [ARM] 3417/1: add support for logicpd pxa270 card engine
| |       [ARM] 3387/1: ixp23xx: add defconfig
| |       [ARM] 3377/2: add support for intel xsc3 core
| |       [ARM] Move ice-dcc code into misc.c
| |       [ARM] Fix decompressor serial IO to give CRLF not LFCR
| |       [ARM] proc-v6: mark page table walks outer-cacheable, shared. Enable NX.
| |       [ARM] nommu: trivial patch for arch/arm/lib/Makefile
| |       [ARM] 3416/1: Update LART site URL
| |       [ARM] 3415/1: Akita: Add missing EXPORT_SYMBOL
| |       [ARM] 3414/1: ep93xx: reset ethernet controller before uncompressing
| * [ARM] nommu: trivial patch for arch/arm/lib/MakefileHyok S. Choi2006-03-271-1/+1
| |   ifeq ($CONFIG_PREEMPT,y) -> ifeq ($(CONFIG_PREEMPT),y)
| |   Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
| |   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | [PATCH] Typo fixesAlexey Dobriyan2006-03-281-1/+1
|/
|   Fix a lot of typos. Eyeballed by jmc@ in OpenBSD.
|   Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
|   Signed-off-by: Andrew Morton <akpm@osdl.org>
|   Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [ARM] 3399/1: Fix link problem when CONFIG_PRINTK is disabledMalcolm Parsons2006-03-251-1/+1
|   Patch from Malcolm Parsons
|   Printing a backtrace requires printk, so disable the backtrace code when printk is disabled. Without this patch, a kernel with CONFIG_PRINTK disabled does not link:
|     arch/arm/lib/lib.a(backtrace.o): In function `c_backtrace':
|     arch/arm/lib/backtrace.S:(.text+0x108): undefined reference to `printk'
|     arch/arm/lib/backtrace.S:(.text+0x11c): undefined reference to `printk'
|     arch/arm/lib/lib.a(backtrace.o):(.fixup+0x8): undefined reference to `printk'
|   Signed-off-by: Malcolm Parsons <malcolm.parsons@gmail.com>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3346/1: Fix udelay() for HZ values different from 100Peter Teichmann2006-03-211-8/+12
|   Patch from Peter Teichmann
|   Currently, if the kernel's HZ value is greater than 100, delays with the udelay function are too short. This can cause trouble for instance with the zd1201 usb wlan driver.
|   This patch suggests a solution that keeps the overhead small and maintains (hopefully) sufficient resolution.
|   Signed-off-by: Peter Teichmann
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
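The underlying arithmetic, in illustrative C rather than the kernel's fixed-point implementation: the busy-wait loop count has to scale with HZ because loops_per_jiffy is calibrated per timer tick, so code that hard-wires the HZ=100 assumption produces delays that are too short whenever HZ is larger.

    #include <stdint.h>

    /* loops = usecs * loops_per_jiffy * HZ / 1,000,000
     * With HZ=100 a jiffy is 10,000 us; with HZ=250 it is only 4,000 us, so
     * assuming 10,000 us per jiffy makes every delay 2.5x too short. */
    static inline unsigned long delay_loops(unsigned long usecs,
                                            unsigned long loops_per_jiffy,
                                            unsigned long hz)
    {
            return (unsigned long)(((uint64_t)usecs * loops_per_jiffy * hz)
                                   / 1000000u);
    }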
* [ARM] Remove unnecessary asm/hardware.h includesRussell King2006-03-211-1/+0
|   asm/hardware.h is not required for the majority of processor support files, ioremap support, mm initialisation, acorn IO support, nor the debug code (which picks up its machine specific includes via debug-macros.S).
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Fix muldi3.SRussell King2006-03-081-2/+2
|   When shifting the low-parts of signed numbers, a logical shift should be used to avoid sign-extending a bit which isn't a sign bit.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
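A small C illustration of the pitfall. (Right-shifting a negative signed value is implementation-defined in C; on ARM with GCC it is an arithmetic shift, which is exactly what must be avoided when the "value" is really the low half of a wider number.)

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            int64_t  v           = 0x1ffffffffLL;  /* low 32-bit half has bit 31 set */
            int32_t  lo_signed   = (int32_t)v;     /* 0xffffffff viewed as -1 */
            uint32_t lo_unsigned = (uint32_t)v;

            /* arithmetic shift drags the "sign" bit down - wrong for a low half */
            assert((lo_signed >> 16) == -1);
            /* logical shift keeps it a plain bit pattern - what a 64-bit multiply needs */
            assert((lo_unsigned >> 16) == 0xffffU);
            return 0;
    }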
* [ARM] 3104/1: ARM EABI: new helper function namesNicolas Pitre2006-01-146-0/+41
|   Patch from Nicolas Pitre
|   The ARM EABI defines new names for GCC helper functions.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3103/1: ARM EABI: stack pointer must be 64-bit aligned (part 2)Nicolas Pitre2006-01-141-2/+2
|   Patch from Nicolas Pitre
|   We must make sure that assembly code that modifies the stack pointer before calling a C function does so in a way that keeps it 64-bit aligned.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3256/1: Make the function-returning ldm's use sp as the base registerCatalin Marinas2006-01-123-9/+11
|   Patch from Catalin Marinas
|   If the low interrupt latency mode is enabled for the CPU (from ARMv6 onwards), the ldm/stm instructions are no longer atomic. An ldm instruction restoring the sp and pc registers can be interrupted immediately after sp was updated but before the pc. If this happens, the CPU restores the base register to the value before the ldm instruction, but if the base register is not sp, the interrupt routine will corrupt the stack and the restarted ldm instruction will load garbage.
|   Note that future ARM cores might always run in the low interrupt latency mode.
|   Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Fix get_user when passed a const pointerRussell King2005-11-181-11/+0
|   Unfortunately, later gcc versions error out when our get_user is passed a const pointer, since we write to a temporary variable declared as typeof(*(p)) which propagates the const-ness.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3152/1: make various assembly local labels actually local (the rest)Nicolas Pitre2005-11-114-59/+61
|   Patch from Nicolas Pitre
|   For assembly labels to actually be local they must start with ".L" and not only ".", otherwise they still remain visible in the final link and clutter kallsyms needlessly, and possibly make for unclear symbolic backtraces. This patch simply inserts an "L" where appropriate. The code itself is unchanged.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3151/1: make various assembly local labels actually local (io-*.S)Nicolas Pitre2005-11-117-80/+82
|   Patch from Nicolas Pitre
|   For assembly labels to actually be local they must start with ".L" and not only ".", otherwise they still remain visible in the final link and clutter kallsyms needlessly, and possibly make for unclear symbolic backtraces. This patch simply inserts an "L" where appropriate. The code itself is unchanged.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3150/1: make various assembly local labels actually local (uaccess.S)Nicolas Pitre2005-11-111-114/+116
|   Patch from Nicolas Pitre
|   For assembly labels to actually be local they must start with ".L" and not only ".", otherwise they still remain visible in the final link and clutter kallsyms needlessly, and possibly make for unclear symbolic backtraces. This patch simply inserts an "L" where appropriate. The code itself is unchanged.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Fix csumpartial corner caseRussell King2005-11-101-0/+4
|   Ji-In Park discovered a bug in csumpartial which caused wrong checksums with misaligned buffers.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Clean up save_and_disable_irqs macro and allow use of ARMv6 CPSIDRussell King2005-11-091-2/+2
|   save_and_disable_irqs does not need to use mov + msr (which was introduced to work around a documentation bug which was propagated into binutils). Use msr with an immediate constant, and if we're building for ARMv6 or later, use the new CPSID instruction.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3094/1: remove PLD stuff from old uaccess codeNicolas Pitre2005-11-041-116/+16
|   Patch from Nicolas Pitre
|   ARM processors that have pld instructions are not using those copy_user implementations anymore. Let's remove the useless PLD lines, which were half wrong anyway.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM SMP] Add configuration option for ARMv6K processorsRussell King2005-11-031-1/+1
|   The 'K' extension adds several new instructions to the ARMv6 ISA which are primarily useful for SMP.
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 2948/1: new preemption safe copy_{to|from}_user implementationNicolas Pitre2005-11-013-2/+216
|   Patch from Nicolas Pitre
|   This patch provides a preemption-safe implementation of copy_to_user and copy_from_user based on the copy template also used for memcpy. It is enabled unconditionally when CONFIG_PREEMPT=y. Otherwise, if the configured architecture is not ARMv3, it is enabled as well, since it gives better performance at least on StrongARM and XScale cores.
|   If ARMv3 is not too affected, or if it doesn't matter too much, then uaccess.S could be removed altogether.
|   Signed-off-by: Nicolas Pitre <nico@cam.org>
|   Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>