| author | Nick Piggin <nickpiggin@yahoo.com.au> | 2005-06-26 01:57:12 +0400 |
|---|---|---|
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-06-26 03:24:41 +0400 |
| commit | 99b61ccf0bf0e9a85823d39a5db6a1519caeb13d (patch) | |
| tree | 4d793013d9317928e04e7edfe1b5766dc5e84cca /kernel/sched.c | |
| parent | db935dbd43c4290d710304662cc908f733afea06 (diff) | |
| download | linux-99b61ccf0bf0e9a85823d39a5db6a1519caeb13d.tar.xz | |
[PATCH] sched: less aggressive idle balancing
Remove the special casing for idle CPU balancing. Things like this hurt on
SMT, for example, where a single idle sibling doesn't really warrant an
aggressive pull across the NUMA domain.
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel/sched.c')
| -rw-r--r-- | kernel/sched.c | 6 |
1 file changed, 0 insertions, 6 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 8b035a8b3c30..f665de34ed82 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1877,15 +1877,9 @@ nextgroup:
 
 	/* Get rid of the scaling factor, rounding down as we divide */
 	*imbalance = *imbalance / SCHED_LOAD_SCALE;
-
 	return busiest;
 
 out_balanced:
-	if (busiest && (idle == NEWLY_IDLE ||
-			(idle == SCHED_IDLE && max_load > SCHED_LOAD_SCALE)) ) {
-		*imbalance = 1;
-		return busiest;
-	}
 
 	*imbalance = 0;
 	return NULL;
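To make the behavioural change concrete, here is a minimal standalone C sketch contrasting the pre- and post-patch handling of the out_balanced case. The struct, the helper names (pick_busiest_old/pick_busiest_new), and the example loads are invented for illustration; only the if-condition mirrors the deleted kernel lines above, and SCHED_LOAD_SCALE stands in for the kernel constant.

/*
 * Standalone sketch, not the kernel's actual code: shows how the removed
 * special case forced a one-task pull for an idle CPU even when the
 * scheduler groups were otherwise judged balanced.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 128UL

enum idle_type { NOT_IDLE, NEWLY_IDLE, SCHED_IDLE };

struct group { const char *name; unsigned long load; };

/* Pre-patch behaviour: an idle CPU still pulls one task from the busiest
 * group, even across a NUMA domain that is effectively balanced. */
static struct group *pick_busiest_old(struct group *busiest, unsigned long max_load,
                                      enum idle_type idle, unsigned long *imbalance)
{
	/* ... imbalance computation elided; assume it found nothing to move ... */
	if (busiest && (idle == NEWLY_IDLE ||
			(idle == SCHED_IDLE && max_load > SCHED_LOAD_SCALE))) {
		*imbalance = 1;		/* force a minimal pull anyway */
		return busiest;
	}
	*imbalance = 0;
	return NULL;
}

/* Post-patch behaviour: a balanced domain is reported as balanced,
 * regardless of whether the local CPU is idle. */
static struct group *pick_busiest_new(struct group *busiest, unsigned long max_load,
                                      enum idle_type idle, unsigned long *imbalance)
{
	(void)busiest; (void)max_load; (void)idle;
	*imbalance = 0;
	return NULL;
}

int main(void)
{
	struct group numa_remote = { "remote NUMA node", 2 * SCHED_LOAD_SCALE };
	unsigned long imbalance;

	if (pick_busiest_old(&numa_remote, numa_remote.load, NEWLY_IDLE, &imbalance))
		printf("old: pull %lu task(s) from %s\n", imbalance, numa_remote.name);

	if (!pick_busiest_new(&numa_remote, numa_remote.load, NEWLY_IDLE, &imbalance))
		printf("new: report balanced, no cross-node pull\n");

	return 0;
}

In this sketch the old path migrates a task over the NUMA domain just because the local CPU (for instance a single idle SMT sibling) went idle, while the new path leaves a balanced domain alone, which is the less aggressive behaviour the commit message describes.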