path: root/arch/x86/kernel/cpu/perf_event.h
author	Andi Kleen <ak@linux.intel.com>	2015-03-20 10:11:23 -0700
committer	Ingo Molnar <mingo@kernel.org>	2015-04-02 17:33:19 +0200
commit	1a78d93750bb5f61abdc59a91fc3bd06a214542a (patch)
tree	e8307794f69c68367d42c7a0cef8c9a1d5b89461 /arch/x86/kernel/cpu/perf_event.h
parent	15fde1101a1aed11958e0d86bc360f01866a74b1 (diff)
perf/x86/intel: Streamline LBR MSR handling in PMI
The perf PMI currently does unnecessary MSR accesses when LBRs are enabled. We use LBR freezing, or when in callstack mode force the LBRs to only filter on ring 3.

So there is no need to disable the LBRs explicitly in the PMI handler.

Also we always unnecessarily rewrite LBR_SELECT in the LBR handler, even though it can never change:

 5)               |  /* write_msr: MSR_LBR_SELECT(1c8), value 0 */
 5)               |  /* read_msr: MSR_IA32_DEBUGCTLMSR(1d9), value 1801 */
 5)               |  /* write_msr: MSR_IA32_DEBUGCTLMSR(1d9), value 1801 */
 5)               |  /* write_msr: MSR_CORE_PERF_GLOBAL_CTRL(38f), value 70000000f */
 5)               |  /* write_msr: MSR_CORE_PERF_GLOBAL_CTRL(38f), value 0 */
 5)               |  /* write_msr: MSR_LBR_SELECT(1c8), value 0 */
 5)               |  /* read_msr: MSR_IA32_DEBUGCTLMSR(1d9), value 1801 */
 5)               |  /* write_msr: MSR_IA32_DEBUGCTLMSR(1d9), value 1801 */

This patch:

- Avoids disabling already frozen LBRs unnecessarily in the PMI
- Avoids changing LBR_SELECT in the PMI

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1426871484-21285-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
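For illustration, a minimal sketch of what the PMI-aware enable path can look like, assuming a cpuc->lbr_sel field that caches the LBR_SELECT configuration (the actual code in perf_event_intel_lbr.c may differ in detail). When entered from the PMI, the LBR_SELECT rewrite is skipped because the value cannot have changed, and only DEBUGCTL is touched to re-arm the frozen LBRs:

	static void __intel_pmu_lbr_enable(bool pmi)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
		u64 debugctl;

		/*
		 * LBR_SELECT cannot change between the last enable and the
		 * PMI, so only rewrite it on a regular (non-PMI) enable.
		 */
		if (cpuc->lbr_sel && !pmi)
			wrmsrl(MSR_LBR_SELECT, cpuc->lbr_sel->config);

		/* Re-enable the LBRs; keep PMI freezing armed. */
		rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
		debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
		wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
	}

This removes one MSR_LBR_SELECT write and, with freezing in use, the explicit LBR disable on every PMI shown in the trace above.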
Diffstat (limited to 'arch/x86/kernel/cpu/perf_event.h')
-rw-r--r--	arch/x86/kernel/cpu/perf_event.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 7250c02..329f035 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -870,7 +870,7 @@ void intel_pmu_lbr_enable(struct perf_event *event);
void intel_pmu_lbr_disable(struct perf_event *event);
-void intel_pmu_lbr_enable_all(void);
+void intel_pmu_lbr_enable_all(bool pmi);
void intel_pmu_lbr_disable_all(void);
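With the new bool pmi argument, callers state whether they are re-enabling LBRs from the PMI handler. A hypothetical pair of call sites, for illustration only (surrounding function names are not taken from the patch):

	/* From the PMI handler: LBRs are already frozen, skip redundant MSR writes. */
	intel_pmu_lbr_enable_all(true);

	/* From the normal event-enable path: full reprogramming, including LBR_SELECT. */
	intel_pmu_lbr_enable_all(false);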