From 4009f4b3a9d8b74547269f293e6a920adf278996 Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (VMware)"
Date: Thu, 19 Jan 2017 11:32:34 -0500
Subject: locking/rtmutex: Flip unlikely() branch to likely() in
 __rt_mutex_slowlock()

Running my likely/unlikely profiler for 3 weeks on two production
machines, I discovered that the unlikely() test in __rt_mutex_slowlock()
checking if state is TASK_INTERRUPTIBLE is hit 100% of the time, making
it a very likely case.

The reason is, on a vanilla kernel, the majority case of calling
rt_mutex() is from the futex code. This code is always called as
TASK_INTERRUPTIBLE. In the -rt patch, this code is commonly called when
PREEMPT_RT is enabled with TASK_UNINTERRUPTIBLE. But that's not the
likely scenario.

The rt_mutex() code should be optimized for the common vanilla case,
and that is from a futex, with TASK_INTERRUPTIBLE as the state.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20170119113234.1efeedd1@gandalf.local.home
Signed-off-by: Ingo Molnar
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'kernel/locking/rtmutex.c')

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 2f443ed2320a..d340be3a488f 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1179,7 +1179,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		 * TASK_INTERRUPTIBLE checks for signals and
 		 * timeout. Ignored otherwise.
 		 */
-		if (unlikely(state == TASK_INTERRUPTIBLE)) {
+		if (likely(state == TASK_INTERRUPTIBLE)) {
			/* Signal pending? */
			if (signal_pending(current))
				ret = -EINTR;
--
cgit v1.2.3
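
For reference, a minimal user-space sketch of what the likely()/unlikely()
hints do, assuming the standard __builtin_expect-based definitions from
include/linux/compiler.h. The check_state() function, its signal_pending
parameter, and the literal -4 return value are illustrative stand-ins for
the hot path in __rt_mutex_slowlock(); they are not part of the patch.

	#include <stdio.h>

	/* Mirrors the usual kernel definitions (include/linux/compiler.h). */
	#define likely(x)   __builtin_expect(!!(x), 1)
	#define unlikely(x) __builtin_expect(!!(x), 0)

	#define TASK_INTERRUPTIBLE 0x0001	/* same value as the kernel define */

	/* Illustrative stand-in for the check patched above. */
	static int check_state(int state, int signal_pending)
	{
		int ret = 0;

		/*
		 * With likely(), the compiler lays out this branch as the
		 * fall-through path, matching the common futex caller that
		 * always sleeps in TASK_INTERRUPTIBLE.
		 */
		if (likely(state == TASK_INTERRUPTIBLE)) {
			if (signal_pending)
				ret = -4;	/* -EINTR */
		}
		return ret;
	}

	int main(void)
	{
		printf("%d\n", check_state(TASK_INTERRUPTIBLE, 1));
		return 0;
	}

The hint only influences code layout and static branch prediction; the
patch changes no behaviour, it just tells the compiler which side of the
branch is the hot one on a vanilla kernel.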