index : BMC/Intel-BMC/linux.git
Intel OpenBMC Linux kernel source tree (mirror)
Owner: Andrey V. Kosteltsev
path: root/kernel/locking/qspinlock.c
Age         Commit message                                                                Author                  Files  Lines
2018-04-27  locking/qspinlock: Add stat tracking for pending vs. slowpath                 Waiman Long             1      -3/+11
2018-04-27  locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking        Will Deacon             1      -10/+9
2018-04-27  locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()       Will Deacon             1      -16/+17
2018-04-27  locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node          Will Deacon             1      -4/+2
2018-04-27  locking/qspinlock: Use atomic_cond_read_acquire()                             Will Deacon             1      -6/+6
2018-04-27  locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue  Will Deacon             1      -11/+8
2018-04-27  locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath      Will Deacon             1      -44/+58
2018-04-27  locking/qspinlock: Bound spinning on pending->locked transition in slowpath   Will Deacon             1      -3/+17
2018-04-27  locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock'         Will Deacon             1      -43/+3
2018-02-13  locking/qspinlock: Ensure node->count is updated before initialising node     Will Deacon             1      -0/+8
2018-02-13  locking/qspinlock: Ensure node is initialised before updating prev->next      Will Deacon             1      -6/+7
2017-12-04  locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()   Paul E. McKenney        1      -7/+5
2017-08-17  locking: Remove spin_unlock_wait() generic definitions                        Paul E. McKenney        1      -117/+0
2017-07-08  locking/qspinlock: Explicitly include asm/prefetch.h                          Stafford Horne          1      -0/+1
2016-06-27  locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()  Pan Xinhui              1      -1/+1
2016-06-14  locking/barriers: Introduce smp_acquire__after_ctrl_dep()                     Peter Zijlstra          1      -1/+1
2016-06-14  locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()     Peter Zijlstra          1      -6/+6
2016-06-08  locking/qspinlock: Add comments                                               Peter Zijlstra          1      -0/+57
2016-06-08  locking/qspinlock: Clarify xchg_tail() ordering                               Peter Zijlstra          1      -2/+13
2016-06-08  locking/qspinlock: Fix spin_unlock_wait() some more                           Peter Zijlstra          1      -0/+60
2016-02-29  locking/qspinlock: Use smp_cond_acquire() in pending code                     Waiman Long             1      -4/+3
2015-12-04  locking/pvqspinlock: Queue node adaptive spinning                             Waiman Long             1      -2/+3
2015-12-04  locking/pvqspinlock: Allow limited lock stealing                              Waiman Long             1      -6/+20
2015-12-04  locking, sched: Introduce smp_cond_acquire() and use it                       Peter Zijlstra          1      -2/+1
2015-11-23  locking/qspinlock: Avoid redundant read of next pointer                       Waiman Long             1      -3/+6
2015-11-23  locking/qspinlock: Prefetch the next node cacheline                           Waiman Long             1      -0/+10
2015-11-23  locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()     Waiman Long             1      -5/+24
2015-09-11  locking/qspinlock/x86: Fix performance regression under unaccelerated VMs     Peter Zijlstra          1      -1/+1
2015-08-03  locking/pvqspinlock: Only kick CPU at unlock time                             Waiman Long             1      -3/+3
2015-05-08  locking/pvqspinlock: Implement simple paravirt support for the qspinlock      Waiman Long             1      -1/+67
2015-05-08  locking/qspinlock: Revert to test-and-set on hypervisors                      Peter Zijlstra (Intel)  1      -0/+3
2015-05-08  locking/qspinlock: Use a simple write to grab the lock                        Waiman Long             1      -16/+50
2015-05-08  locking/qspinlock: Optimize for smaller NR_CPUS                               Peter Zijlstra (Intel)  1      -1/+68
2015-05-08  locking/qspinlock: Extract out code snippets for the next patch               Waiman Long             1      -31/+48
2015-05-08  locking/qspinlock: Add pending bit                                            Peter Zijlstra (Intel)  1      -21/+98
2015-05-08  locking/qspinlock: Introduce a simple generic 4-byte queued spinlock          Waiman Long             1      -0/+209
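
For orientation: the lock these commits evolve is a 4-byte word that packs a locked byte, a pending bit, and a tail encoding of the last queued waiter's per-CPU MCS node (see "Add pending bit", "Optimize for smaller NR_CPUS", and "Merge 'struct __qspinlock' into 'struct qspinlock'" above). Uncontended acquisition is a single compare-and-swap on an all-zero word. Below is a minimal sketch of that fast path, assuming C11 atomics; qspinlock_t, qspin_trylock_fast(), and qspin_unlock() are hypothetical names for illustration, not the kernel's API.

/*
 * Minimal sketch of a queued-spinlock fast path using C11 atomics.
 * Hypothetical names for illustration only; the kernel's actual
 * implementation lives in kernel/locking/qspinlock.c.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    /* 32-bit lock word: locked byte, pending bit, tail bits. */
    _Atomic uint32_t val;
} qspinlock_t;

#define Q_LOCKED_VAL 1u  /* locked byte set; no pending bit, empty tail */

/*
 * Fast path: if the whole word is zero (unlocked, no pending waiter,
 * empty queue), claim the lock with one acquire compare-and-swap.
 */
static bool qspin_trylock_fast(qspinlock_t *lock)
{
    uint32_t old = 0;
    return atomic_compare_exchange_strong_explicit(
            &lock->val, &old, Q_LOCKED_VAL,
            memory_order_acquire, memory_order_relaxed);
}

/* Release: clear the locked byte with release ordering. */
static void qspin_unlock(qspinlock_t *lock)
{
    atomic_fetch_sub_explicit(&lock->val, Q_LOCKED_VAL,
                              memory_order_release);
}

When the word is nonzero, the kernel's real code falls through to queued_spin_lock_slowpath(), where waiters first spin briefly on the pending->locked handoff and then queue on MCS nodes; that slowpath is what most of the entries above (bounded spinning, try_cmpxchg(), smp_cond_load_acquire(), paravirt support) refine.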