| author | Peter Zijlstra <peterz@infradead.org> | 2015-09-28 18:45:40 +0300 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2015-10-06 18:08:12 +0300 |
| commit | 87dcbc0610cb580c8eaf289f52aca3620af825f0 (patch) | |
| tree | 8fe4c05507c8c9ff4e2369cc16f86cad2a82dd14 /include/linux/sched.h | |
| parent | fe19159225d8516f3f57a5fe8f735c01684f0ddd (diff) | |
| download | linux-87dcbc0610cb580c8eaf289f52aca3620af825f0.tar.xz | |
sched/core: Simplify INIT_PREEMPT_COUNT
As per the following commit:
d86ee4809d03 ("sched: optimize cond_resched()")
we need PREEMPT_ACTIVE to keep cond_resched() from working before
the scheduler is set up.
However, keeping preemption disabled should do the same thing
already, making the PREEMPT_ACTIVE part entirely redundant.
The only complication is !PREEMPT_COUNT kernels, where
PREEMPT_DISABLED ends up being 0. Instead we use an unconditional
PREEMPT_OFFSET to set preempt_count() even on !PREEMPT_COUNT
kernels.
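To make the reasoning concrete, here is a minimal, self-contained C sketch of the mechanism (illustrative only: `should_resched_sketch()`, the variable names, and the constant values are stand-ins, not the real definitions in <linux/preempt.h>). Any non-zero boot-time preempt count is enough to keep the cond_resched() path inert until init_idle() resets the count:

```c
#include <stdio.h>

/* Illustrative value; the real PREEMPT_OFFSET lives in <linux/preempt.h>. */
#define PREEMPT_OFFSET     1

/* After this commit: unconditionally non-zero, even on kernels built
 * without CONFIG_PREEMPT_COUNT (where PREEMPT_DISABLED is 0). */
#define INIT_PREEMPT_COUNT PREEMPT_OFFSET

static int preempt_count_v = INIT_PREEMPT_COUNT; /* boot-time value */
static int need_resched_v  = 1;                  /* pretend a resched is pending */

/* Sketch of the test cond_resched() relies on: it may only schedule
 * once the preempt count has dropped to zero, so a non-zero
 * INIT_PREEMPT_COUNT suppresses it without any PREEMPT_ACTIVE bit. */
static int should_resched_sketch(void)
{
	return preempt_count_v == 0 && need_resched_v;
}

int main(void)
{
	printf("before init_idle(): %d\n", should_resched_sketch()); /* 0 */
	preempt_count_v = 0; /* init_idle_preempt_count() resets the count */
	printf("after  init_idle(): %d\n", should_resched_sketch()); /* 1 */
	return 0;
}
```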
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/sched.h')
-rw-r--r-- | include/linux/sched.h | 11
1 file changed, 5 insertions, 6 deletions
```diff
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d086cf0ca2c7..e5b8cbc4b8d6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -606,19 +606,18 @@ struct task_cputime_atomic {
 #endif
 
 /*
- * Disable preemption until the scheduler is running.
- * Reset by start_kernel()->sched_init()->init_idle().
+ * Disable preemption until the scheduler is running -- use an unconditional
+ * value so that it also works on !PREEMPT_COUNT kernels.
  *
- * We include PREEMPT_ACTIVE to avoid cond_resched() from working
- * before the scheduler is active -- see should_resched().
+ * Reset by start_kernel()->sched_init()->init_idle()->init_idle_preempt_count().
  */
-#define INIT_PREEMPT_COUNT	(PREEMPT_DISABLED + PREEMPT_ACTIVE)
+#define INIT_PREEMPT_COUNT	PREEMPT_OFFSET
 
 /**
  * struct thread_group_cputimer - thread group interval timer counts
  * @cputime_atomic: atomic thread group interval timers.
  * @running: non-zero when there are timers running and
- * @cputime receives updates.
+ * @cputime receives updates.
  *
  * This structure contains the version of task_cputime, above, that is
  * used for thread group CPU timer calculations.
```
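For the !PREEMPT_COUNT complication the message mentions, the following hedged sketch (illustrative values and simplified macros, not the verbatim kernel headers) shows why the old definition depended on PREEMPT_ACTIVE while the new one does not:

```c
#include <stdio.h>

/* Illustrative reconstruction; the real macros live in <linux/preempt.h>. */
#define PREEMPT_OFFSET    1

#ifdef CONFIG_PREEMPT_COUNT
# define PREEMPT_DISABLED PREEMPT_OFFSET /* preempt_disable() is counted */
#else
# define PREEMPT_DISABLED 0              /* counting is compiled out */
#endif

/* Old definition (sketch): on !PREEMPT_COUNT kernels, only the
 * PREEMPT_ACTIVE term kept the initial count non-zero:
 *   #define INIT_PREEMPT_COUNT (PREEMPT_DISABLED + PREEMPT_ACTIVE)
 * New definition: non-zero regardless of CONFIG_PREEMPT_COUNT. */
#define INIT_PREEMPT_COUNT PREEMPT_OFFSET

int main(void)
{
	/* Compile with and without -DCONFIG_PREEMPT_COUNT: the boot-time
	 * count is 1 either way, which is the point of the change. */
	printf("PREEMPT_DISABLED   = %d\n", PREEMPT_DISABLED);
	printf("INIT_PREEMPT_COUNT = %d\n", INIT_PREEMPT_COUNT);
	return 0;
}
```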