path: root/drivers/usb/serial/mct_u232.c
author    Frederic Weisbecker <fweisbec@gmail.com>    2010-04-06 02:10:17 +0400
committer Ingo Molnar <mingo@elte.hu>                 2010-04-06 02:15:37 +0400
commit    bd6d29c25bb1a24a4c160ec5de43e0004e01f72b (patch)
tree      0aa96c7e9fdfbe7dc9c7e40151aed928903240f0 /drivers/usb/serial/mct_u232.c
parent    ced918eb748ce30b3aace549fd17540e40ffdca0 (diff)
download  linux-bd6d29c25bb1a24a4c160ec5de43e0004e01f72b.tar.xz
lockstat: Make lockstat counting per cpu
Locking statistics are implemented using global atomic variables. This is usually fine unless some path writes them very often.

This is the case for the function and function graph tracers, which disable irqs for each entry saved (except when the function tracer runs in preempt-disabled-only mode). Each call to local_irq_save/restore() then increments the hardirqs_on_events and hardirqs_off_events stats (or the corresponding stats for redundant calls).

Incrementing these global variables for every traced function causes excessive cache bouncing when lockstats are enabled.

To solve this, implement the debug_atomic_*() operations using per cpu vars.

-v2: Use per_cpu() instead of get_cpu_var() to fetch the desired cpu vars on debug_atomic_read()

-v3: Store the stats in a structure. No need for local_t as we are NMI/irq safe.

-v4: Fix tons of build errors. I thought I had tested it but I probably forgot to select the relevant config.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1270505417-8144-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
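For illustration, a minimal sketch of the per-cpu counter pattern the message describes: a global atomic per event is replaced by a per-cpu stats structure, with a fast increment on the local CPU and a slow-path read that sums all CPUs. The field names mirror the events mentioned above, but the structure layout and macro bodies are assumptions for this sketch, not the actual patch.

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    /* Sketch only: the real lockdep_stats structure has more fields. */
    struct lockdep_stats {
            int hardirqs_on_events;
            int hardirqs_off_events;
            int redundant_hardirqs_on;
            int redundant_hardirqs_off;
    };

    DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);

    /* Writers only touch their own CPU's copy, so no cache line bounces
     * between CPUs on the hot irq on/off paths. */
    #define debug_atomic_inc(field)                                 \
            this_cpu_inc(lockdep_stats.field)

    /* Readers are the slow path (e.g. dumping /proc/lock_stat):
     * sum the per-cpu copies on demand. */
    #define debug_atomic_read(field)                                \
    ({                                                              \
            unsigned long long __total = 0;                         \
            int __cpu;                                              \
            for_each_possible_cpu(__cpu)                            \
                    __total += per_cpu(lockdep_stats, __cpu).field; \
            __total;                                                \
    })

With this arrangement, a hot path would call debug_atomic_inc(hardirqs_on_events) on every irq-enable event without contending on a shared cache line, while debug_atomic_read() pays the cross-CPU summation cost only when the statistics are actually read.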
Diffstat (limited to 'drivers/usb/serial/mct_u232.c')
0 files changed, 0 insertions, 0 deletions