| author | jhb <jhb@FreeBSD.org> | 2015-06-01 18:08:56 +0000 |
|---|---|---|
| committer | jhb <jhb@FreeBSD.org> | 2015-06-01 18:08:56 +0000 |
| commit | 34dd7e04e0d09812f05591fc803ce58d0708ead1 (patch) | |
| tree | fd15c2b27333272616e613a027aa2a6a95e1b350 /sys | |
| parent | ed437ec23ce9e663b9b89d839cade53697620378 (diff) | |
MFC 283123:
Fix two bugs that could result in PMC sampling effectively stopping.
In both cases, the effect of the bug was that a very small positive
number was written to the counter. This means that a large number of
events needed to occur before the next sampling interrupt would trigger.
Even with very frequently occurring events like clock cycles, wrapping all
the way around could take a long time. Both bugs occurred when updating
the saved reload count for an outgoing thread on a context switch.
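For illustration only (this example is not part of the commit), the arithmetic
behind that failure mode can be sketched in userland C. It assumes a
hypothetical 48-bit counter and an illustrative helper name: a sampling PMC is
programmed with (2^width - reloadcount) so that it overflows, and interrupts,
after reloadcount events, so a tiny positive value written instead means
nearly 2^48 events must elapse first.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed counter width for this sketch; real widths are probed at runtime. */
#define	PMC_WIDTH	48

/* Events that must occur before the counter wraps and interrupts. */
static uint64_t
events_until_interrupt(uint64_t counter)
{

	return ((1ULL << PMC_WIDTH) - counter);
}

int
main(void)
{
	uint64_t reloadcount = 100000;

	/* Correctly programmed counter: interrupt after 100000 events. */
	printf("ok:  %ju\n", (uintmax_t)events_until_interrupt(
	    (1ULL << PMC_WIDTH) - reloadcount));

	/* Buggy case: a tiny positive value was written instead. */
	printf("bug: %ju\n", (uintmax_t)events_until_interrupt(2));
	/* Prints 281474976710654 (~2.8e14): sampling effectively stops. */
	return (0);
}
```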
First, the counter-independent code compares the current reload count
against the count set when the thread switched in and generates a delta
to apply to the saved count. If this delta causes the reload counter
to go negative, it would add a full reload interval to wrap it around to
a positive value. The fix is to add the full reload interval if the
resulting counter is zero.
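A minimal standalone sketch of that fixed update follows; the function and
variable names are illustrative stand-ins, not the kernel code, and the real
change is the one-character diff to pmc_process_csw_out() shown below.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the saved-count update at context switch out: 'saved' is
 * the thread's saved reload count, 'delta' the events counted while it
 * ran, 'reloadcount' the full sampling interval.
 */
static uint64_t
update_saved_count(uint64_t saved, uint64_t delta, uint64_t reloadcount)
{

	saved -= delta;
	/*
	 * The old test was '< 0', so a result of exactly zero was kept,
	 * and a (near-)zero saved count later left the hardware counter
	 * nearly a full wrap away from its next interrupt.  '<= 0' also
	 * wraps zero back up to a full reload interval.
	 */
	if ((int64_t)saved <= 0)
		saved += reloadcount;
	return (saved);
}

int
main(void)
{

	/* Interval exactly consumed: must yield 100000, not 0. */
	printf("%ju\n", (uintmax_t)update_saved_count(100000, 100000, 100000));
	return (0);
}
```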
Second, occasionally the raw counter value read during a context switch
has actually wrapped, but an interrupt has not yet triggered. In this
case the existing logic would return a very large reload count (e.g.
2^48 - 2 if the counter had overflowed by a count of 2). This was seen
both for fixed-function and programmable counters on an E5-2643.
Work around this case by returning a reload count of zero.
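The added guard can be illustrated standalone as follows, using an assumed
48-bit width in place of core_iaf_width/core_iap_width. While a sampling
counter counts up toward overflow from its "negative" starting value, its
high bit is set; a clear high bit therefore indicates the counter has already
wrapped without the interrupt having run yet.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed width; stands in for core_iaf_width/core_iap_width. */
#define	CTR_WIDTH	48

static uint64_t
perfctr_value_to_reload_count(uint64_t v)
{

	/* High bit clear: the counter wrapped before the interrupt ran. */
	if ((v & (1ULL << (CTR_WIDTH - 1))) == 0)
		return (0);
	v &= (1ULL << CTR_WIDTH) - 1;
	return ((1ULL << CTR_WIDTH) - v);
}

int
main(void)
{

	/* Mid-interval read: 100000 events left until overflow. */
	printf("%ju\n", (uintmax_t)perfctr_value_to_reload_count(
	    (1ULL << CTR_WIDTH) - 100000));
	/* Wrapped by 2: the unpatched code would report 2^48 - 2. */
	printf("%ju\n", (uintmax_t)perfctr_value_to_reload_count(2));
	return (0);
}
```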
PR: 198149
Sponsored by: Norse Corp, Inc.
Diffstat (limited to 'sys')
 -rw-r--r-- | sys/dev/hwpmc/hwpmc_core.c | 8
 -rw-r--r-- | sys/dev/hwpmc/hwpmc_mod.c  | 2
 2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/sys/dev/hwpmc/hwpmc_core.c b/sys/dev/hwpmc/hwpmc_core.c
index 2525657..80c40e7 100644
--- a/sys/dev/hwpmc/hwpmc_core.c
+++ b/sys/dev/hwpmc/hwpmc_core.c
@@ -203,6 +203,10 @@ core_pcpu_fini(struct pmc_mdep *md, int cpu)
 static pmc_value_t
 iaf_perfctr_value_to_reload_count(pmc_value_t v)
 {
+
+	/* If the PMC has overflowed, return a reload count of zero. */
+	if ((v & (1ULL << (core_iaf_width - 1))) == 0)
+		return (0);
 	v &= (1ULL << core_iaf_width) - 1;
 	return (1ULL << core_iaf_width) - v;
 }
@@ -1786,6 +1790,10 @@ static const int niap_events = sizeof(iap_events) / sizeof(iap_events[0]);
 static pmc_value_t
 iap_perfctr_value_to_reload_count(pmc_value_t v)
 {
+
+	/* If the PMC has overflowed, return a reload count of zero. */
+	if ((v & (1ULL << (core_iap_width - 1))) == 0)
+		return (0);
 	v &= (1ULL << core_iap_width) - 1;
 	return (1ULL << core_iap_width) - v;
 }
diff --git a/sys/dev/hwpmc/hwpmc_mod.c b/sys/dev/hwpmc/hwpmc_mod.c
index 230f25b..9e369fe 100644
--- a/sys/dev/hwpmc/hwpmc_mod.c
+++ b/sys/dev/hwpmc/hwpmc_mod.c
@@ -1440,7 +1440,7 @@ pmc_process_csw_out(struct thread *td)
 			tmp += pm->pm_sc.pm_reloadcount;
 			mtx_pool_lock_spin(pmc_mtxpool, pm);
 			pp->pp_pmcs[ri].pp_pmcval -= tmp;
-			if ((int64_t) pp->pp_pmcs[ri].pp_pmcval < 0)
+			if ((int64_t) pp->pp_pmcs[ri].pp_pmcval <= 0)
 				pp->pp_pmcs[ri].pp_pmcval +=
 				    pm->pm_sc.pm_reloadcount;
 			mtx_pool_unlock_spin(pmc_mtxpool, pm);