| author | Vineet Gupta <vgupta@synopsys.com> | 2013-05-22 18:38:10 +0530 |
|---|---|---|
| committer | Vineet Gupta <vgupta@synopsys.com> | 2013-05-23 14:24:52 +0530 |
| commit | 3e87974dec5ec25a8a4852d9292db6be659164e6 (patch) | |
| tree | ce7d2e84901ade9ff8f56a9319eb1a690200c2f5 | |
| parent | a950549c675f2c8c504469dec7d780da8a6433dc (diff) | |
| download | op-kernel-dev-3e87974dec5ec25a8a4852d9292db6be659164e6.zip op-kernel-dev-3e87974dec5ec25a8a4852d9292db6be659164e6.tar.gz | |
ARC: Brown paper bag bug in macro for checking cache color
The VM_EXEC check in update_mmu_cache() was getting optimized away
because of a stupid error in the definition of the macro addr_not_cache_congruent().
The intention was to have the equivalent of the following:
if (a || (1 ? b : 0))
but we ended up with following:
if (a || 1 ? b : 0)
And because the precedence of '||' is higher than that of '?:', gcc was optimizing
away the evaluation of <a>.
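As a quick illustration, here is a hypothetical standalone snippet (not from the kernel sources) showing how the two groupings behave; the variables a and b merely stand in for the VM_EXEC test and the cache-color comparison:

```c
#include <stdio.h>

int main(void)
{
	int a = 1, b = 0;

	/* Intended grouping: a || (1 ? b : 0) -- true whenever 'a' is true. */
	if (a || (1 ? b : 0))
		printf("intended form: taken\n");

	/* Actual grouping: (a || 1) ? b : 0 -- this reduces to just 'b',
	 * so the value of 'a' is irrelevant and gcc may drop its evaluation. */
	if (a || 1 ? b : 0)
		printf("buggy form: taken\n");
	else
		printf("buggy form: NOT taken, despite a == 1\n");

	return 0;
}
```

With a = 1 and b = 0, the intended form is taken but the buggy form is not, which is exactly the shape of the bug described above.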
Nasty Repercussions:
1. For non-aliasing configs it would mean some extraneous dcache flushes
for non-code pages if U/K mappings were not congruent.
2. For aliasing configs, some needed dcache flushes for code pages might
be missed if U/K mappings were congruent.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-rw-r--r-- | arch/arc/include/asm/cacheflush.h | 4 |
-rw-r--r-- | arch/arc/mm/tlb.c | 3 |
2 files changed, 5 insertions, 2 deletions
diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 9f841af..7d81974 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -99,8 +99,10 @@ static inline int cache_is_vipt_aliasing(void)
  * checks if two addresses (after page aligning) index into same cache set
  */
 #define addr_not_cache_congruent(addr1, addr2)			\
+({								\
 	cache_is_vipt_aliasing() ?				\
-		(CACHE_COLOR(addr1) != CACHE_COLOR(addr2)) : 0	\
+		(CACHE_COLOR(addr1) != CACHE_COLOR(addr2)) : 0;	\
+})

 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 do { \
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 066145b..fe1c5a0 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -444,7 +444,8 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 	 * so userspace sees the right data.
 	 * (Avoids the flush for Non-exec + congruent mapping case)
 	 */
-	if (vma->vm_flags & VM_EXEC || addr_not_cache_congruent(paddr, vaddr)) {
+	if ((vma->vm_flags & VM_EXEC) ||
+	    addr_not_cache_congruent(paddr, vaddr)) {
 		struct page *page = pfn_to_page(pte_pfn(*ptep));

 		int dirty = test_and_clear_bit(PG_arch_1, &page->flags);
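For completeness, a compilable sketch of the same fix outside the kernel tree. cache_is_vipt_aliasing() and CACHE_COLOR() are simplified stand-ins with made-up values, and the ({ ... }) statement expression is the GCC/Clang extension the patch relies on to keep the ternary self-contained at the call site:

```c
#include <stdio.h>

/* Stand-ins for the real kernel helpers; values are fabricated for the demo. */
static int cache_is_vipt_aliasing(void) { return 1; }
#define CACHE_COLOR(addr)  ((addr) & 1UL)

/* Buggy shape: the expansion is a bare ternary, so at the call site
 * "exec || addr_not_congruent_buggy(...)" parses as
 * "(exec || aliasing) ? colors_differ : 0" -- the exec test no longer
 * short-circuits the flush decision. */
#define addr_not_congruent_buggy(a1, a2) \
	cache_is_vipt_aliasing() ? (CACHE_COLOR(a1) != CACHE_COLOR(a2)) : 0

/* Fixed shape: wrapping in a statement expression preserves the grouping. */
#define addr_not_congruent_fixed(a1, a2) \
({ \
	cache_is_vipt_aliasing() ? (CACHE_COLOR(a1) != CACHE_COLOR(a2)) : 0; \
})

int main(void)
{
	int exec = 1;                 /* analogous to vma->vm_flags & VM_EXEC */
	unsigned long p = 2, v = 4;   /* same cache color -> congruent mapping */

	/* exec is set, so both forms *should* decide to flush. */
	if (exec || addr_not_congruent_buggy(p, v))
		printf("buggy: would flush\n");
	else
		printf("buggy: flush skipped, VM_EXEC check lost\n");

	if (exec || addr_not_congruent_fixed(p, v))
		printf("fixed: would flush\n");

	return 0;
}
```

With these stand-in values, the buggy form skips the flush even though exec is set, while the fixed form honours it, mirroring the behavior the patch corrects in update_mmu_cache().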