author     Borislav Petkov <borislav.petkov@amd.com>    2011-08-05 15:15:08 +0200
committer  H. Peter Anvin <hpa@linux.intel.com>         2011-08-05 12:26:44 -0700
commit     dfb09f9b7ab03fd367740e541a5caf830ed56726
tree       8bd8fdbbf3fb67f7d0aed73a1e8e1c7034ed2d54 /arch/x86/vdso
parent     13f9a3737c903ace57d8aaebe81a3bbaeb0aa0a2
x86, amd: Avoid cache aliasing penalties on AMD family 15h
This patch provides performance tuning for the "Bulldozer" CPU. With its
shared instruction cache there is a chance of generating an excessive
number of cache cross-invalidates when running specific workloads on the
cores of a compute module.
This excessive amount of cross-invalidations can be observed if cache
lines backed by shared physical memory alias in bits [14:12] of their
virtual addresses, as those bits are used for the index generation.
This patch addresses the issue by clearing all the bits in the [14:12]
slice of the file mapping's virtual address at generation time, thus
forcing those bits to be the same for all mappings of a single shared
library across processes and, in doing so, avoiding instruction cache aliases.
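To make the fix concrete, below is a minimal user-space sketch of that
bit-slice alignment. The mask value and the helper name are illustrative
only and are not taken from the kernel's align_addr() internals; the sketch
only assumes that bits [14:12] of a page-aligned mapping address must end up
identical across processes.

#include <stdio.h>

#define PAGE_SHIFT	12
/* illustrative mask covering bits [14:12], i.e. the I-cache index slice */
#define VA_SLICE_MASK	(7UL << PAGE_SHIFT)		/* 0x7000 */

/*
 * Illustrative helper: round a page-aligned candidate address up to the
 * next boundary at which bits [14:12] are all zero, so every mapping of
 * the same shared object gets identical index bits.
 */
static unsigned long align_va_slice(unsigned long addr)
{
	return (addr + VA_SLICE_MASK) & ~VA_SLICE_MASK;
}

int main(void)
{
	/* two page-aligned candidates for the same library in two processes */
	unsigned long a = 0x7f0123453000UL;
	unsigned long b = 0x7f9876546000UL;

	printf("before: bits[14:12] = %lu vs %lu\n",
	       (a >> PAGE_SHIFT) & 7, (b >> PAGE_SHIFT) & 7);

	a = align_va_slice(a);
	b = align_va_slice(b);

	printf("after:  bits[14:12] = %lu vs %lu (%#lx, %#lx)\n",
	       (a >> PAGE_SHIFT) & 7, (b >> PAGE_SHIFT) & 7, a, b);
	return 0;
}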
It also adds the kernel command line option "align_va_addr=(32|64|on|off)",
with which virtual address alignment can be enabled for 32-bit or 64-bit x86
individually, for both, or disabled completely.
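A user-space sketch of how those four option values could be decoded into
per-bitness enable flags follows; the flag names, the default, and the
parsing function are illustrative and are not the kernel's actual command
line handling.

#include <stdio.h>
#include <string.h>

/* illustrative flag bits: which bitness gets the alignment treatment */
#define ALIGN_VA_32	(1 << 0)
#define ALIGN_VA_64	(1 << 1)

static int va_align_flags = ALIGN_VA_32 | ALIGN_VA_64;	/* assumed default: both on */

/* sketch of decoding "align_va_addr=(32|64|on|off)" */
static int parse_align_va_addr(const char *val)
{
	if (!val)
		return -1;

	if (!strcmp(val, "32"))
		va_align_flags = ALIGN_VA_32;
	else if (!strcmp(val, "64"))
		va_align_flags = ALIGN_VA_64;
	else if (!strcmp(val, "on"))
		va_align_flags = ALIGN_VA_32 | ALIGN_VA_64;
	else if (!strcmp(val, "off"))
		va_align_flags = 0;
	else
		return -1;

	return 0;
}

int main(void)
{
	const char *samples[] = { "32", "64", "on", "off", "bogus" };

	for (unsigned int i = 0; i < 5; i++) {
		int rc = parse_align_va_addr(samples[i]);

		printf("align_va_addr=%s -> flags %#x (rc %d)\n",
		       samples[i], va_align_flags, rc);
	}
	return 0;
}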
This change leaves virtual region address allocation on other families
and/or vendors unaffected.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'arch/x86/vdso')
-rw-r--r--  arch/x86/vdso/vma.c | 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
index 7abd2be..caa42ce 100644
--- a/arch/x86/vdso/vma.c
+++ b/arch/x86/vdso/vma.c
@@ -69,6 +69,15 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
 	addr = start + (offset << PAGE_SHIFT);
 	if (addr >= end)
 		addr = end;
+
+	/*
+	 * page-align it here so that get_unmapped_area doesn't
+	 * align it wrongfully again to the next page. addr can come in 4K
+	 * unaligned here as a result of stack start randomization.
+	 */
+	addr = PAGE_ALIGN(addr);
+	addr = align_addr(addr, NULL, ALIGN_VDSO);
+
 	return addr;
 }
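The ordering in the hunk above matters: the candidate address may be
4K-unaligned because of stack start randomization, and rounding it to a page
boundary first keeps the later slice alignment from being disturbed when
get_unmapped_area() page-aligns the result. The following user-space sketch
illustrates that interplay; PAGE_ALIGN() is redefined locally and the 0x7000
slice mask is an illustrative stand-in for the kernel's per-CPU value.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* illustrative 32K slice mask covering bits [14:12] */
#define VA_SLICE_MASK	(7UL << PAGE_SHIFT)

static unsigned long align_va_slice(unsigned long addr)
{
	/* only meaningful when addr is already page-aligned, as in the hunk */
	return (addr + VA_SLICE_MASK) & ~VA_SLICE_MASK;
}

int main(void)
{
	/* 4K-unaligned candidate, as produced by stack start randomization */
	unsigned long addr = 0x7fff12345678UL;

	unsigned long skipped  = align_va_slice(addr);		    /* no PAGE_ALIGN first */
	unsigned long mirrored = align_va_slice(PAGE_ALIGN(addr)); /* mirrors the hunk */

	printf("raw candidate     %#lx\n", addr);
	printf("slice-align only  %#lx (still not page-aligned)\n", skipped);
	printf("page-align first  %#lx (page-aligned, bits [14:12] clear)\n", mirrored);
	return 0;
}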