author     Giuseppe CAVALLARO <peppe.cavallaro@st.com>  2010-11-17 06:50:17 +0000
committer  Paul Mundt <lethal@linux-sh.org>  2010-11-18 14:53:18 +0900
commit     d53e4307c2f3856167407a1d9b8f8fa001286066 (patch)
tree       d34c13a6b041371b63d265619628a57c5e7d5595
parent     94ab115fd3e7f7e7f92f1645bbb6ba5414701b25 (diff)
sh: Use GCC __builtin_prefetch() to implement prefetch().
GCC's __builtin_prefetch() was introduced a long time ago, and all supported GCC versions have it, so this patch uses it to implement prefetch() on SH2A and SH4. The current prefetch implementation is almost equivalent to __builtin_prefetch().

The third parameter of __builtin_prefetch() is the locality, which is not supported on SH architectures. It has been set to three; it should be verified whether this is also suitable for SH2A, since I did not test on that architecture.

Using the builtin should be more efficient than an __asm__ statement because it requires fewer barriers and because the compiler no longer treats the instruction as a "black box", allowing better code generation. The same has already been done on other architectures (see commit 0453fb3c528c5eb3483441a466b24a4cb409eec5).

Many thanks to Christian Bruel <christain.bruel@st.com> for his support in evaluating the impact of the GCC builtin on the SH4 architecture. No regressions were found while testing with LMbench on STLinux targets.

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-rw-r--r--  arch/sh/include/asm/processor_32.h  7
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/sh/include/asm/processor_32.h b/arch/sh/include/asm/processor_32.h
index 46d5179..e3c73cd 100644
--- a/arch/sh/include/asm/processor_32.h
+++ b/arch/sh/include/asm/processor_32.h
@@ -199,10 +199,13 @@ extern unsigned long get_wchan(struct task_struct *p);
#define ARCH_HAS_PREFETCHW
static inline void prefetch(void *x)
{
- __asm__ __volatile__ ("pref @%0\n\t" : : "r" (x) : "memory");
+ __builtin_prefetch(x, 0, 3);
}
-#define prefetchw(x) prefetch(x)
+static inline void prefetchw(void *x)
+{
+ __builtin_prefetch(x, 1, 3);
+}
#endif
#endif /* __KERNEL__ */
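
For illustration only (not part of the patch): a minimal sketch of how a caller might use the builtin that now backs prefetch()/prefetchw(). The helper name sum_with_prefetch() and the prefetch distance of 16 elements are hypothetical; the builtin's (address, rw, locality) signature is standard GCC, and on SH4 a read prefetch lowers to a single "pref @Rn" instruction.

/*
 * Illustrative sketch, not part of the patch: hint upcoming reads while
 * summing an array.  rw = 0 requests a read prefetch, locality = 3 asks
 * that the data be kept in the cache; GCC simply drops the hint on
 * targets with no prefetch instruction.
 */
static long sum_with_prefetch(const long *p, unsigned long n)
{
	long sum = 0;
	unsigned long i;

	for (i = 0; i < n; i++) {
		/* Prefetch a cache line a few iterations ahead of the loads. */
		__builtin_prefetch(&p[i + 16], 0, 3);
		sum += p[i];
	}
	return sum;
}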