author      Frederic Weisbecker <frederic@kernel.org>    2019-04-02 19:02:44 +0300
committer   Ingo Molnar <mingo@kernel.org>               2019-04-29 09:29:20 +0300
commit      948f83768a180ec8e85c4a8ff269d5e433d10815
tree        422edd0153306a8f5c5b1f6aa5442726438695f3    /kernel/locking/lockdep_internals.h
parent      3771b0fe9dfc3801eac0142d1af6ba94dee83c6c
download    linux-948f83768a180ec8e85c4a8ff269d5e433d10815.tar.xz
locking/lockdep: Test all incompatible scenarios at once in check_irq_usage()
check_prev_add_irq() tests all incompatible scenarios one after the
other while adding a lock (@next) to a dependency tree (@prev):
LOCK_USED_IN_HARDIRQ vs LOCK_ENABLED_HARDIRQ
LOCK_USED_IN_HARDIRQ_READ vs LOCK_ENABLED_HARDIRQ
LOCK_USED_IN_SOFTIRQ vs LOCK_ENABLED_SOFTIRQ
LOCK_USED_IN_SOFTIRQ_READ vs LOCK_ENABLED_SOFTIRQ
For each of these four scenarios, we must at least iterate through the
@prev backward dependencies. Then, if one of them carries the relevant
LOCK_USED_* bit, we must also iterate through the @next forward
dependencies.
Therefore in the best case we iterate 4 times, in the worst case 8 times.
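A toy model of that pre-patch control flow (purely illustrative, with
invented stub helpers standing in for the real graph walks; this is not
the lockdep code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Stubs standing in for the backward/forward dependency walks. */
  static bool prev_backward_has_usage(int scenario)  { return scenario == 2; }
  static bool next_forward_enables_irq(int scenario) { (void)scenario; return false; }

  int main(void)
  {
          int walks = 0;

          /* One pass per scenario: HARDIRQ, HARDIRQ_READ, SOFTIRQ, SOFTIRQ_READ. */
          for (int s = 0; s < 4; s++) {
                  walks++;                                /* backward walk of @prev */
                  if (prev_backward_has_usage(s)) {
                          walks++;                        /* forward walk of @next */
                          if (next_forward_enables_irq(s))
                                  printf("scenario %d: incompatible usage\n", s);
                  }
          }
          printf("graph walks: %d (4 in the best case, 8 in the worst)\n", walks);
          return 0;
  }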
A different approach can let us divide the number of branch iterations
by 4:
1) Iterate through @prev backward dependencies and accumulate all the IRQ
uses in a single mask. In the best case where the current lock hasn't
been used in IRQ, we stop here.
2) Iterate through @next forward dependencies and try to find a lock
whose usage is exclusive to the accumulated usages gathered in the
previous step. If we find one (call it @lockA), we have found an
incompatible use; otherwise we stop here. Only bad locking scenarios
go further, so a sane verification stops here.
3) Iterate again through the @prev backward dependencies and find the lock
whose usage matches @lockA in terms of incompatibility. Call that
lock @lockB.
4) Report the incompatible usages of @lockA and @lockB
If no incompatible use is found, the verification never goes beyond
step 2, which means at most two iterations.
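A standalone toy sketch of this accumulate-then-match scheme (all types,
bit values and helpers below are invented for illustration and ignore
the _READ variants; this is not the lockdep implementation):

  #include <stdio.h>

  /* Toy usage bits: one "used in IRQ" and one "IRQ enabled" bit per vector. */
  #define USED_IN_HARDIRQ  0x01
  #define USED_IN_SOFTIRQ  0x02
  #define ENABLED_HARDIRQ  0x04
  #define ENABLED_SOFTIRQ  0x08

  struct toy_lock {
          const char *name;
          unsigned int usage_mask;
  };

  /* Step 1: OR together the usage of every lock reachable backwards from @prev. */
  static unsigned int accumulate_backward_usage(const struct toy_lock *deps, int n)
  {
          unsigned int mask = 0;

          for (int i = 0; i < n; i++)
                  mask |= deps[i].usage_mask;
          return mask;
  }

  /* "used in hardirq" conflicts with "hardirq enabled", likewise for softirq. */
  static unsigned int exclusive_usage(unsigned int mask)
  {
          unsigned int excl = 0;

          if (mask & USED_IN_HARDIRQ)
                  excl |= ENABLED_HARDIRQ;
          if (mask & USED_IN_SOFTIRQ)
                  excl |= ENABLED_SOFTIRQ;
          return excl;
  }

  int main(void)
  {
          /* Backward dependencies of @prev and forward dependencies of @next. */
          const struct toy_lock prev_deps[] = { { "A", USED_IN_HARDIRQ }, { "B", 0 } };
          const struct toy_lock next_deps[] = { { "C", ENABLED_SOFTIRQ },
                                                { "D", ENABLED_HARDIRQ } };

          /* Step 1: one backward walk, one accumulated mask. */
          unsigned int backward = accumulate_backward_usage(prev_deps, 2);
          unsigned int forbidden = exclusive_usage(backward);

          if (!forbidden)
                  return 0;       /* no IRQ usage behind @prev: stop after one walk */

          /* Step 2: one forward walk looking for a forbidden usage (@lockA). */
          for (int i = 0; i < 2; i++) {
                  if (next_deps[i].usage_mask & forbidden)
                          printf("incompatible usage found on %s\n", next_deps[i].name);
          }
          return 0;
  }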
The following compares the execution measurements of the function
check_prev_add_irq():
             | Number of calls | Avg (ns) | Stdev (ns) | Total time (ns)
  -----------+-----------------+----------+------------+-----------------
  Mainline   |      8452       |   2652   |   11962    |    22415143
  This patch |      8452       |   1518   |    7090    |    12835602
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: https://lkml.kernel.org/r/20190402160244.32434-5-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/locking/lockdep_internals.h')
-rw-r--r--  kernel/locking/lockdep_internals.h  6
1 file changed, 6 insertions, 0 deletions
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 2b3ffd4117ad..150ec3f0c5b5 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -66,6 +66,12 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
 	0;
 #undef LOCKDEP_STATE
 
+#define LOCKF_ENABLED_IRQ_ALL (LOCKF_ENABLED_IRQ | LOCKF_ENABLED_IRQ_READ)
+#define LOCKF_USED_IN_IRQ_ALL (LOCKF_USED_IN_IRQ | LOCKF_USED_IN_IRQ_READ)
+
+#define LOCKF_IRQ (LOCKF_ENABLED_IRQ | LOCKF_USED_IN_IRQ)
+#define LOCKF_IRQ_READ (LOCKF_ENABLED_IRQ_READ | LOCKF_USED_IN_IRQ_READ)
+
 /*
  * CONFIG_LOCKDEP_SMALL is defined for sparc. Sparc requires .text,
  * .data and .bss to fit in required 32MB limit for the kernel. With
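As an aside, the point of these composite masks can be shown with toy
bit values (the real per-vector LOCKF_* values come from expanding
lockdep_states.h; everything below is only an assumption-laden sketch,
not the kernel header):

  #define LOCKF_USED_IN_HARDIRQ       0x01    /* toy values, not the real ones */
  #define LOCKF_USED_IN_SOFTIRQ       0x02
  #define LOCKF_USED_IN_HARDIRQ_READ  0x04
  #define LOCKF_USED_IN_SOFTIRQ_READ  0x08

  #define LOCKF_USED_IN_IRQ      (LOCKF_USED_IN_HARDIRQ | LOCKF_USED_IN_SOFTIRQ)
  #define LOCKF_USED_IN_IRQ_READ (LOCKF_USED_IN_HARDIRQ_READ | LOCKF_USED_IN_SOFTIRQ_READ)
  #define LOCKF_USED_IN_IRQ_ALL  (LOCKF_USED_IN_IRQ | LOCKF_USED_IN_IRQ_READ)

  /* Step 1 can bail out with a single test on the accumulated backward mask: */
  static inline int any_irq_usage(unsigned long accumulated_mask)
  {
          return !!(accumulated_mask & LOCKF_USED_IN_IRQ_ALL);
  }

One mask test like this covers hardirq and softirq, read and write usage
at once, instead of the four per-bit checks the old code performed one
scenario at a time.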