path: root/arch/x86/vdso/vgetcpu.c
Commit message | Author | Date | Files | Lines
* x86,vdso: Use LSL unconditionally for vgetcpu | Andy Lutomirski | 2014-11-03 | 1 | -1/+0
  LSL is faster than RDTSCP and works everywhere; there's no need to switch between them depending on CPU.

  Signed-off-by: Andy Lutomirski <luto@amacapital.net>
  Cc: Andi Kleen <andi@firstfloor.org>
  Link: http://lkml.kernel.org/r/72f73d5ec4514e02bba345b9759177ef03742efb.1414706021.git.luto@amacapital.net
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
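The LSL trick relies on the kernel encoding the CPU and NUMA node numbers in the limit field of a per-CPU GDT segment. A minimal sketch of that read path follows; it is not the kernel's verbatim code, and the segment selector value and bit layout are assumptions based on the usual x86-64 convention of (node << 12) | cpu.

    #define PER_CPU_SEG      0x7b    /* assumed selector of the per-CPU GDT entry */
    #define VGETCPU_CPU_MASK 0xfff   /* low 12 bits hold the CPU number */

    long vgetcpu_sketch(unsigned int *cpu, unsigned int *node)
    {
        unsigned int p = 0;

        /*
         * LSL loads the segment limit; it is unprivileged, cheap, and,
         * unlike RDTSCP, available on every x86-64 CPU. If the selector
         * were invalid, LSL would leave p untouched (still 0).
         */
        asm volatile("lsl %1, %0" : "+r" (p) : "r" ((unsigned int)PER_CPU_SEG));

        if (cpu)
            *cpu = p & VGETCPU_CPU_MASK;
        if (node)
            *node = p >> 12;         /* remaining bits: NUMA node */
        return 0;
    }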
* x86_64/vdso: Remove jiffies from the vvar page | Andy Lutomirski | 2014-10-28 | 1 | -1/+0
  I think that the jiffies vvar was once used for the vgetcpu cache. That code is long gone, so let's just make jiffies be a normal variable.

  Signed-off-by: Andy Lutomirski <luto@amacapital.net>
  Link: http://lkml.kernel.org/r/fcfee6f8749af14d96373a9e2656354ad0b95499.1411494540.git.luto@amacapital.net
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86: vdso: pvclock gettime support | Marcelo Tosatti | 2012-11-27 | 1 | -8/+3
  Improve performance of time system calls when using Linux pvclock, by reading the time info from a fixmap-visible copy of the pvclock data. Originally from Jeremy Fitzhardinge.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
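The pvclock fast path boils down to a lockless read of a host-updated time record. The sketch below follows the Linux/KVM pvclock ABI as I understand it (field names and the version-based retry loop); the fixmap address and surrounding glue are omitted, so treat it as an illustration rather than the vDSO's actual code.

    #include <stdint.h>

    struct pvclock_vcpu_time_info {
        uint32_t version;        /* odd while the host is updating the record */
        uint32_t pad0;
        uint64_t tsc_timestamp;  /* guest TSC value at system_time */
        uint64_t system_time;    /* nanoseconds */
        uint32_t tsc_to_system_mul;
        int8_t   tsc_shift;
        uint8_t  flags;
        uint8_t  pad[2];
    };

    static inline uint64_t rdtsc_now(void)
    {
        uint32_t lo, hi;

        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Scale a TSC delta to nanoseconds with the host-provided factors. */
    static uint64_t pvclock_scale(uint64_t delta, uint32_t mul, int8_t shift)
    {
        if (shift < 0)
            delta >>= -shift;
        else
            delta <<= shift;
        return (uint64_t)(((__uint128_t)delta * mul) >> 32);
    }

    uint64_t pvclock_read_ns(const volatile struct pvclock_vcpu_time_info *ti)
    {
        uint32_t version;
        uint64_t ns;

        do {
            version = ti->version;
            asm volatile("" ::: "memory");   /* keep the reads ordered */
            ns = ti->system_time +
                 pvclock_scale(rdtsc_now() - ti->tsc_timestamp,
                               ti->tsc_to_system_mul, ti->tsc_shift);
            asm volatile("" ::: "memory");
        } while ((version & 1) || version != ti->version);

        return ns;
    }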
* x86-64: Clean up vdso/kernel shared variables | Andy Lutomirski | 2011-05-24 | 1 | -2/+1
  Variables that are shared between the vdso and the kernel are currently a bit of a mess. They are each defined with their own magic, they are accessed differently in the kernel, the vsyscall page, and the vdso, and one of them (vsyscall_clock) doesn't even really exist.

  This changes them all to use a common mechanism. All of them are declared in vvar.h with a fixed address (validated by the linker script). In the kernel (as before), they look like ordinary read-write variables. In the vsyscall page and the vdso, they are accessed through a new macro VVAR, which gives read-only access.

  The vdso is now loaded verbatim into memory without any fixups. As a side bonus, access from the vdso is faster because a level of indirection is removed.

  While we're at it, pack jiffies and vgetcpu_mode into the same cacheline.

  Signed-off-by: Andy Lutomirski <luto@mit.edu>
  Cc: Andi Kleen <andi@firstfloor.org>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Eric Dumazet <eric.dumazet@gmail.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Borislav Petkov <bp@amd64.org>
  Link: http://lkml.kernel.org/r/%3C7357882fbb51fa30491636a7b6528747301b7ee9.1306156808.git.luto%40mit.edu%3E
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
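A condensed sketch of the mechanism the commit describes: one declaration site, seen as a writable variable by the kernel and as a read-only pointer to a fixed address by the vDSO/vsyscall code. The macro names follow the commit's VVAR naming; the address and offsets here are illustrative, not the kernel's actual layout.

    #define VVAR_ADDRESS (-10L * 1024 * 1024 - 4096)   /* illustrative fixed address */

    /* Kernel side: an ordinary read-write variable, placed in its own
     * section so the linker script can pin it at a known offset. */
    #define DEFINE_VVAR(type, name) \
        type name __attribute__((section(".vvar_" #name), aligned(16)))

    /* vDSO/vsyscall side: a read-only alias at the fixed address, so the
     * vdso can be mapped verbatim with no fixups or relocations. */
    #define DECLARE_VVAR(offset, type, name) \
        static type const * const vvaraddr_ ## name = \
            (void *)(VVAR_ADDRESS + (offset));
    #define VVAR(name) (*vvaraddr_ ## name)

    /* Example: jiffies and vgetcpu_mode packed into one cache line
     * (offsets chosen for illustration only). */
    DECLARE_VVAR(0,  volatile unsigned long, jiffies)
    DECLARE_VVAR(16, int, vgetcpu_mode)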
* x86: add notrace annotations to vsyscall. | Steven Rostedt | 2008-05-23 | 1 | -1/+2
  Add the notrace annotations to the vsyscall functions - there we are not in kernel context yet, so the tracer function cannot (and must not) be called.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
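For reference, a sketch of what the annotation amounts to: in the kernel, notrace is shorthand for the compiler's no_instrument_function attribute, so the marked function gets no compiler-inserted tracer hooks. The function signature below is only illustrative.

    #define notrace __attribute__((no_instrument_function))

    /* Marked notrace: this code runs from the vsyscall/vDSO mapping in
     * user context, where calling back into a kernel tracer is not allowed. */
    notrace long __vdso_getcpu(unsigned int *cpu, unsigned int *node, void *tcache);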
* x86: introduce native_read_tscp | Glauber de Oliveira Costa | 2008-01-30 | 1 | -2/+2
  Targeting paravirt, this patch introduces native_read_tscp in place of the rdtscp() macro. When in a paravirt guest, this will involve a function call, and thus cannot be done in the vdso area. These users then have to call the native version directly.

  Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
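A sketch of what a native_read_tscp-style helper looks like (assumed signature, and using the rdtscp mnemonic rather than raw opcode bytes): RDTSCP returns the TSC in EDX:EAX and the per-CPU TSC_AUX value, which encodes the CPU and node, in ECX.

    #include <stdint.h>

    static inline uint64_t native_read_tscp(unsigned int *aux)
    {
        uint32_t low, high;

        /* RDTSCP: TSC into EDX:EAX, IA32_TSC_AUX (cpu/node word) into ECX. */
        asm volatile("rdtscp" : "=a" (low), "=d" (high), "=c" (*aux));
        return ((uint64_t)high << 32) | low;
    }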
* x86: ignore the sys_getcpu() tcache parameter | Ingo Molnar | 2007-11-17 | 1 | -17/+2
  Don't use the vgetcpu tcache - it's causing problems for migrating tasks: they see the old cached values for up to a jiffy after the migration, further increasing the cost of the migration. In the worst case they see completely bogus information from the tcache, when a sys_getcpu() call "invalidated" the cache info by incrementing both the jiffies and the cpuid info in the cache, and the following vdso_getcpu() call happens after vdso_jiffies has been incremented.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Ulrich Drepper <drepper@redhat.com>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
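After this change the vDSO simply ignores the tcache argument and reads fresh data on every call. A hedged sketch of that shape, using RDTSCP's TSC_AUX word and the usual (node << 12) | cpu encoding as assumptions:

    struct getcpu_cache;   /* opaque here; deliberately unused */

    long __vdso_getcpu(unsigned int *cpu, unsigned int *node,
                       struct getcpu_cache *tcache)
    {
        unsigned int p;

        (void)tcache;   /* no caching: a migrated task never sees stale data */
        asm volatile("rdtscp" : "=c" (p) : : "eax", "edx");

        if (cpu)
            *cpu = p & 0xfff;   /* low 12 bits: CPU number */
        if (node)
            *node = p >> 12;    /* upper bits: NUMA node */
        return 0;
    }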
* x86_64: move vdso | Thomas Gleixner | 2007-10-11 | 1 | -0/+50
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>