author     Alexander Shishkin <alexander.shishkin@linux.intel.com>   2015-01-14 15:18:13 +0300
committer  Ingo Molnar <mingo@kernel.org>                            2015-04-02 18:14:10 +0300
commit     6a279230391b63130070e0219b0ad09d34d28c89 (patch)
tree       0acce118e036a8474a77fde1c899819580478ab6
parent     0a4e38e64f5e91ce131cc42ee5bb3925377ec840 (diff)
perf: Add a capability for AUX_NO_SG pmus to do software double buffering
For pmus that don't support scatter-gather for AUX data in hardware, it
might still make sense to implement software double buffering to avoid
losing data while the user is reading data out. For this purpose, add a
pmu capability that guarantees multiple high-order chunks for the AUX
buffer, so that the pmu driver can switch hardware output to a fresh
chunk while the previous one is being read out.

To make use of this feature, add PERF_PMU_CAP_AUX_SW_DOUBLEBUF to your
pmu's capability mask. This makes the ring-buffer AUX allocation code
ensure that the biggest high-order allocation for the AUX buffer pages
is no bigger than half of the total requested buffer size, guaranteeing
that the buffer is backed by at least two high-order allocations.
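
As an illustration only (not part of this patch), a driver for such a pmu
could advertise both capabilities when registering itself; the pmu name and
init function below are hypothetical:

/* Hypothetical driver sketch; the pmu name and init function are made up. */
#include <linux/init.h>
#include <linux/perf_event.h>

static struct pmu my_aux_pmu;	/* .setup_aux(), .start(), .stop() etc. not shown */

static int __init my_aux_pmu_init(void)
{
	/*
	 * The hardware needs physically contiguous AUX memory (no
	 * scatter-gather); also ask the core to keep the largest chunk
	 * at half the buffer so at least two chunks are available for
	 * software double buffering.
	 */
	my_aux_pmu.capabilities = PERF_PMU_CAP_AUX_NO_SG |
				  PERF_PMU_CAP_AUX_SW_DOUBLEBUF;

	return perf_pmu_register(&my_aux_pmu, "my_aux_pmu", -1);
}

With both capabilities set, and the AUX area not mapped in overwrite mode,
rb_alloc_aux() caps the largest allocation at half of the requested buffer,
as the hunk below shows.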
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kaixu Xia <kaixu.xia@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: kan.liang@intel.com
Cc: markus.t.metzger@intel.com
Cc: mathieu.poirier@linaro.org
Link: http://lkml.kernel.org/r/1421237903-181015-5-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r--  include/linux/perf_event.h  |  1 +
-rw-r--r--  kernel/events/ring_buffer.c | 15 ++++++++++++++-
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d5a4a8e95808..13a1eb3a2a2d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -175,6 +175,7 @@ struct perf_event;
 #define PERF_PMU_CAP_NO_INTERRUPT	0x01
 #define PERF_PMU_CAP_NO_NMI		0x02
 #define PERF_PMU_CAP_AUX_NO_SG		0x04
+#define PERF_PMU_CAP_AUX_SW_DOUBLEBUF	0x08
 
 /**
  * struct pmu - generic performance monitoring unit
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index ed0859e33b2f..6e3be7a10c50 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -287,13 +287,26 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
 	if (!has_aux(event))
 		return -ENOTSUPP;
 
-	if (event->pmu->capabilities & PERF_PMU_CAP_AUX_NO_SG)
+	if (event->pmu->capabilities & PERF_PMU_CAP_AUX_NO_SG) {
 		/*
 		 * We need to start with the max_order that fits in nr_pages,
 		 * not the other way around, hence ilog2() and not get_order.
 		 */
 		max_order = ilog2(nr_pages);
 
+		/*
+		 * PMU requests more than one contiguous chunks of memory
+		 * for SW double buffering
+		 */
+		if ((event->pmu->capabilities & PERF_PMU_CAP_AUX_SW_DOUBLEBUF) &&
+		    !overwrite) {
+			if (!max_order)
+				return -EINVAL;
+
+			max_order--;
+		}
+	}
+
 	rb->aux_pages = kzalloc_node(nr_pages * sizeof(void *), GFP_KERNEL, node);
 	if (!rb->aux_pages)
 		return -ENOMEM;
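
To make the numbers concrete (an illustration, not taken from the patch): a
non-overwrite mapping of 128 AUX pages gives max_order = ilog2(128) = 7, so a
single high-order chunk could cover the whole buffer; with
PERF_PMU_CAP_AUX_SW_DOUBLEBUF the order is decremented to 6, the largest chunk
is 64 pages, and at least two chunks back the buffer. A stand-alone user-space
sketch of the same clamping:

/* Stand-alone illustration of the max_order clamping; not kernel code. */
#include <stdio.h>

/* floor(log2(v)), matching what the kernel's ilog2() returns here */
static unsigned int ilog2_u32(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int nr_pages = 128;			/* requested AUX size in pages */
	unsigned int max_order = ilog2_u32(nr_pages);	/* 7 */
	int overwrite = 0;				/* normal, non-overwrite mode */

	if (!overwrite) {
		if (!max_order)
			return 1;			/* the kernel returns -EINVAL */
		max_order--;				/* 6: largest chunk is 64 pages */
	}

	printf("largest AUX chunk: %u pages\n", 1u << max_order);
	return 0;
}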