author    | Sasha Levin <sasha.levin@oracle.com> | 2013-06-14 02:41:17 +0400
committer | Ingo Molnar <mingo@kernel.org> | 2013-11-27 14:55:21 +0400
commit    | 5634bd7d2ab14fbf736b62b0788fb68e2cb0fde2 (patch)
tree      | 1199737c610e0d0402a32c013927df27fd3cbe94 /tools/lib/lockdep/uinclude/linux/spinlock.h
parent    | 8dce7a9a6f4ca7163161a80a4603b66c88c5de8e (diff)
download  | linux-5634bd7d2ab14fbf736b62b0788fb68e2cb0fde2.tar.xz
liblockdep: Wrap kernel/locking/lockdep.c to allow usage from userspace
kernel/locking/lockdep.c deals with validating locking scenarios for
various architectures supported by the kernel. There isn't
anything kernel-specific going on in lockdep, and when we
compare userspace to other architectures that don't have to deal
with irqs, such as s390, the two turn out to be very similar.
We wrap kernel/locking/lockdep.c and include/linux/lockdep.h with
several headers that allow us to build and use lockdep from
userspace. We don't touch the kernel code itself, which means
that any work done on lockdep in the kernel automatically
benefits userspace lockdep as well!
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/1371163284-6346-3-git-send-email-sasha.levin@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'tools/lib/lockdep/uinclude/linux/spinlock.h')
-rw-r--r-- | tools/lib/lockdep/uinclude/linux/spinlock.h | 25 |
1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/tools/lib/lockdep/uinclude/linux/spinlock.h b/tools/lib/lockdep/uinclude/linux/spinlock.h
new file mode 100644
index 000000000000..68c1aa2bcba5
--- /dev/null
+++ b/tools/lib/lockdep/uinclude/linux/spinlock.h
@@ -0,0 +1,25 @@
+#ifndef _LIBLOCKDEP_SPINLOCK_H_
+#define _LIBLOCKDEP_SPINLOCK_H_
+
+#include <pthread.h>
+#include <stdbool.h>
+
+#define arch_spinlock_t pthread_mutex_t
+#define __ARCH_SPIN_LOCK_UNLOCKED PTHREAD_MUTEX_INITIALIZER
+
+static inline void arch_spin_lock(arch_spinlock_t *mutex)
+{
+	pthread_mutex_lock(mutex);
+}
+
+static inline void arch_spin_unlock(arch_spinlock_t *mutex)
+{
+	pthread_mutex_unlock(mutex);
+}
+
+static inline bool arch_spin_is_locked(arch_spinlock_t *mutex)
+{
+	return true;
+}
+
+#endif
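As an illustration (not part of this commit), here is a minimal, hypothetical userspace program that exercises the pthread-backed spinlock wrapper shown in the diff above. It assumes tools/lib/lockdep/uinclude is on the compiler's include path (e.g. -Itools/lib/lockdep/uinclude) and that the binary is linked with -lpthread:

/* Hypothetical example, not part of this commit: a userspace program
 * using the pthread-backed arch_spinlock_t wrapper from
 * tools/lib/lockdep/uinclude/linux/spinlock.h. */
#include <linux/spinlock.h>
#include <stdio.h>

static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;

int main(void)
{
	arch_spin_lock(&lock);     /* ends up in pthread_mutex_lock()   */
	printf("inside the critical section\n");
	arch_spin_unlock(&lock);   /* ends up in pthread_mutex_unlock() */
	return 0;
}

Because the wrapper simply maps the kernel's arch_spinlock_t API onto pthread mutexes, the unmodified kernel lockdep code can run on top of it in userspace.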