path: root/arch/sh
author	Michal Simek <monstr@monstr.eu>	2010-04-26 08:54:13 +0200
committer	Michal Simek <monstr@monstr.eu>	2010-05-06 11:22:00 +0200
commit	3274c5707c22221574b396d140d0db3480a2027a (patch)
tree	914be5462b00a74db910795062f5616baa0a3ce6 /arch/sh
parent	385e1efafc73a5deeb20645ae8b227b4896852e2 (diff)
microblaze: Optimize CACHE_LOOP_LIMITS and CACHE_RANGE_LOOP macros
1. Remove the CACHE_ALL_LOOP2 macro because it is identical to CACHE_ALL_LOOP.
2. Change BUG_ON to WARN_ON.
3. Remove end-address alignment from CACHE_LOOP_LIMITS. The C implementation does not need an aligned end address, and the ASM code does the alignment in its own macros.
4. The ASM-optimized CACHE_RANGE_LOOP_1/2 macros need an aligned end address. Because the end address is computed as start + size, it is the first address that is excluded. Here is the corresponding code which handles it:

+ int align = ~(line_length - 1);
+ end = ((end & align) == end) ? end - line_length : end & align;

   a) end is aligned: it is necessary to subtract one line length because we don't want to touch the next cacheline.
   b) end is not aligned: just align it down so it is ready for the ASM code.

Signed-off-by: Michal Simek <monstr@monstr.eu>
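A minimal stand-alone sketch of the end-alignment logic from point 4 above, assuming a 32-byte cacheline and illustrative start/size values (the demo program and its values are not part of the kernel change itself):

	/* demo of the end-address alignment used before the ASM cache loops */
	#include <stdio.h>

	int main(void)
	{
		unsigned int line_length = 32;          /* assumed cacheline size */
		unsigned int start = 0x1000;
		unsigned int sizes[] = { 0x40, 0x45 };  /* aligned and unaligned end */
		int align = ~(line_length - 1);

		for (int i = 0; i < 2; i++) {
			unsigned int end = start + sizes[i];
			unsigned int aligned_end;

			/* a) end already aligned: step back one line so the next
			 *    cacheline (which starts at 'end') is not touched
			 * b) end not aligned: round down to the cacheline boundary
			 */
			aligned_end = ((end & align) == end) ? end - line_length
							     : end & align;

			printf("end=0x%x -> last line processed at 0x%x\n",
			       end, aligned_end);
		}
		return 0;
	}

Running it prints 0x1020 for the aligned end (0x1040) and 0x1040 for the unaligned end (0x1045), matching cases a) and b) above.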
Diffstat (limited to 'arch/sh')
0 files changed, 0 insertions, 0 deletions