author	Venki Pallipadi <venkatesh.pallipadi@intel.com>	2008-01-30 15:32:01 +0300
committer	Ingo Molnar <mingo@elte.hu>	2008-01-30 15:32:01 +0300
commit	bde6f5f59c2b2b48a7a849c129d5b48838fe77ee (patch)
tree	4fa3befdfa227db56770a0dc85b8fc18be232f70 /include/asm-x86/mmu_context_32.h
parent	7d409d6057c7244f8757ce15245f6df27271be0c (diff)
download	linux-bde6f5f59c2b2b48a7a849c129d5b48838fe77ee.tar.xz
x86: voluntary leave_mm before entering ACPI C3
Avoid TLB flush IPIs during C3 states by voluntarily calling leave_mm() before entering C3.

The performance impact of the TLB flush on C3 entry should be insignificant relative to C3 wakeup latency. Also, CPUs tend to flush the TLB in hardware while in C3 anyway.

On an 8-logical-CPU system running make -j2, the number of TLB flush IPIs goes down from 40 per second to ~0. The total number of interrupts during the run of this workload was ~1200 per second, which makes this a ~3% saving in wakeups. There was no measurable performance or power impact, however.

[ akpm@linux-foundation.org: symbol export fixes. ]

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
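For illustration, a minimal sketch of the idea described above, in the style of the x86 idle path. leave_mm() and smp_processor_id() are real kernel symbols of this era; the acpi_enter_c3() wrapper and its body are hypothetical and not the actual patch:

	/* Hypothetical wrapper around C3 entry, sketching the commit's idea. */
	static void acpi_enter_c3(struct acpi_processor_cx *cx)
	{
		/*
		 * Voluntarily drop this CPU's lazy mm reference so that
		 * remote CPUs doing a TLB shootdown need not send us a
		 * flush IPI (and wake us) while we sleep in C3.
		 */
		leave_mm(smp_processor_id());

		/* ... enter C3 via the platform-specific mechanism ... */
	}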
Diffstat (limited to 'include/asm-x86/mmu_context_32.h')
-rw-r--r--	include/asm-x86/mmu_context_32.h	2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/asm-x86/mmu_context_32.h b/include/asm-x86/mmu_context_32.h
index 7eb0b0b1fb3c..8198d1cca1f3 100644
--- a/include/asm-x86/mmu_context_32.h
+++ b/include/asm-x86/mmu_context_32.h
@@ -32,8 +32,6 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
#endif
}
-void leave_mm(unsigned long cpu);
-
static inline void switch_mm(struct mm_struct *prev,
struct mm_struct *next,
struct task_struct *tsk)
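The hunk above removes the leave_mm() prototype from this 32-bit-only header; with this commit the symbol must be reachable from the ACPI processor driver, which can be built as a module (hence the "symbol export fixes" note in the changelog). As a rough sketch, the era's x86-32 implementation and export looked approximately like the following; treat the exact body and file placement as an approximation rather than the patch itself:

	/*
	 * Detach this CPU from its lazy mm: mark it as no longer using the
	 * old mm's page tables and switch to the kernel's, so subsequent
	 * remote TLB-flush IPIs can skip this CPU. Exported so modular
	 * code (e.g. the ACPI processor driver) can call it.
	 */
	void leave_mm(unsigned long cpu)
	{
		BUG_ON(per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK);
		cpu_clear(cpu, per_cpu(cpu_tlbstate, cpu).active_mm->cpu_vm_mask);
		load_cr3(swapper_pg_dir);
	}
	EXPORT_SYMBOL_GPL(leave_mm);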