| author | Paul E. McKenney <paulmck@kernel.org> | 2024-08-05 21:44:43 +0300 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@kernel.org> | 2024-10-11 19:31:21 +0300 |
| commit | 9861f7f66f98a6358c944c17a5d4acd07abcb1a7 (patch) | |
| tree | 89823d6cbadcb42ffe6e08408f25007ae2cf9799 /tools/perf/scripts/python/syscall-counts-by-pid.py | |
| parent | 9852d85ec9d492ebef56dc5f229416c925758edc (diff) | |
| download | linux-9861f7f66f98a6358c944c17a5d4acd07abcb1a7.tar.xz | |
locking/csd-lock: Switch from sched_clock() to ktime_get_mono_fast_ns()
Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock() to check
for excessive CSD-lock wait times. This works, but does not guarantee
monotonic timestamps on x86 due to the sched_clock() function's use of
the rdtsc instruction, which does not guarantee ordering. This means
that, given two successive calls to sched_clock(), the second might return
an earlier time than the first, that is, time might seem to go backwards.
This can (and does!) result in false-positive CSD-lock wait complaints
claiming almost 2^64 nanoseconds of delay.
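To see why the false positives land near 2^64 rather than showing up as small
negative values, note that the wait-time check subtracts two u64 timestamps;
if the second read appears earlier than the first, the unsigned subtraction
wraps around. A minimal userspace sketch of that arithmetic (illustrative
values only, not the kernel's code):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Two successive "clock" reads where the second appears 5 ns
	 * earlier than the first, as an unordered rdtsc permits. */
	uint64_t ts_prev = 1000000000ULL;
	uint64_t ts_now  = 1000000000ULL - 5;

	/* Unsigned subtraction wraps: prints 18446744073709551611,
	 * i.e. just shy of 2^64 nanoseconds of apparent delay. */
	uint64_t delta = ts_now - ts_prev;

	printf("apparent delay: %llu ns\n", (unsigned long long)delta);
	return 0;
}
```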
Therefore, switch from sched_clock() to ktime_get_mono_fast_ns(), which
does guarantee monotonic timestamps via the rdtsc_ordered() function,
which, as the name implies, guarantees ordered timestamps, at least
in the absence of calls from NMI handlers, which are not involved in
this code path.
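The shape of the resulting check is roughly as follows. This is a hedged
sketch rather than the exact kernel/smp.c change: the function name and
CSD_LOCK_TIMEOUT_NS are illustrative stand-ins (the in-kernel debug code
derives its timeout from the csd_lock_timeout module parameter).

```c
#include <linux/types.h>	/* u64, bool */
#include <linux/timekeeping.h>	/* ktime_get_mono_fast_ns() */
#include <linux/printk.h>	/* pr_alert() */

/* Illustrative threshold, not the kernel's actual tunable. */
#define CSD_LOCK_TIMEOUT_NS	5000000000ULL	/* 5 seconds */

/*
 * Sketch of a CONFIG_CSD_LOCK_WAIT_DEBUG-style check after the switch:
 * every timestamp now comes from ktime_get_mono_fast_ns(), which is
 * backed by rdtsc_ordered() on x86, so ts_now cannot appear to precede
 * *ts_last and the computed delta cannot wrap to a near-2^64 value.
 */
static bool csd_lock_wait_toolong_sketch(u64 ts_first, u64 *ts_last)
{
	u64 ts_now = ktime_get_mono_fast_ns();	/* was: sched_clock() */
	u64 ts_delta = ts_now - *ts_last;

	if (ts_delta <= CSD_LOCK_TIMEOUT_NS)
		return false;

	pr_alert("csd: CSD lock wait of %llu ns\n",
		 (unsigned long long)(ts_now - ts_first));
	*ts_last = ts_now;
	return true;
}
```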
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Cc: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
Cc: Leonardo Bras <leobras@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>