From c4755613a1339ea77dbb15de75c9f74217209265 Mon Sep 17 00:00:00 2001
From: "Chang S. Bae"
Date: Tue, 18 Sep 2018 16:08:57 -0700
Subject: x86/segments/64: Rename the GDT PER_CPU entry to CPU_NUMBER

The old 'per CPU' naming was misleading: 64-bit kernels don't use this
GDT entry for per CPU data, but to store the CPU (and node) ID.

[ mingo: Wrote new changelog. ]

Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Reviewed-by: Thomas Gleixner
Acked-by: Andy Lutomirski
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: Linus Torvalds
Cc: Markus T Metzger
Cc: Peter Zijlstra
Cc: Ravi Shankar
Cc: Rik van Riel
Link: http://lkml.kernel.org/r/1537312139-5580-7-git-send-email-chang.seok.bae@intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/vgtod.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

(limited to 'arch/x86/include/asm/vgtod.h')

diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
index 53748541c487..4e81ea920722 100644
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -86,9 +86,9 @@ static inline unsigned int __getcpu(void)
 	unsigned int p;
 
 	/*
-	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.  This is volatile so that it orders
-	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * Load CPU (and node) number from GDT.  LSL is faster than RDTSCP
+	 * and works on all CPUs.  This is volatile so that it orders
+	 * correctly with respect to barrier() and to keep GCC from cleverly
 	 * hoisting it out of the calling function.
 	 *
 	 * If RDPID is available, use it.
@@ -96,7 +96,7 @@ static inline unsigned int __getcpu(void)
 	alternative_io ("lsl %[seg],%[p]",
 			".byte 0xf3,0x0f,0xc7,0xf8", /* RDPID %eax/rax */
 			X86_FEATURE_RDPID,
-			[p] "=a" (p), [seg] "r" (__PER_CPU_SEG));
+			[p] "=a" (p), [seg] "r" (__CPU_NUMBER_SEG));
 
 	return p;
 }
--
cgit v1.2.3
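
For reference, the value __getcpu() reads can be fetched the same way from
user space. Below is a minimal sketch (not part of the patch), assuming the
CPU_NUMBER GDT entry sits at index 15, so the user selector is
(15 << 3) | 3 = 0x7b, and that the kernel encodes the segment limit as
(node << 12) | cpu, which is the layout the vDSO's getcpu() decodes:

/*
 * User-space sketch of the CPU_NUMBER lookup (illustrative only).
 * Assumptions: selector 0x7b points at the CPU_NUMBER GDT entry, and
 * its segment limit holds (node << 12) | cpu as set up by the kernel.
 */
#include <stdio.h>

#define CPU_NUMBER_SEG	0x7b	/* GDT index 15, RPL 3 (assumed) */
#define CPU_MASK	0xfff	/* low 12 bits hold the CPU number */

static unsigned int getcpu_lsl(void)
{
	unsigned int p;

	/*
	 * LSL loads the segment limit of the given selector into p; the
	 * kernel stores the CPU and node IDs there.  RDPID would return
	 * the same value from MSR_TSC_AUX on CPUs that support it.
	 */
	asm volatile("lsl %1, %0" : "=r" (p) : "r" (CPU_NUMBER_SEG));
	return p;
}

int main(void)
{
	unsigned int p = getcpu_lsl();

	printf("cpu %u, node %u\n", p & CPU_MASK, p >> 12);
	return 0;
}

Unlike RDTSCP, LSL does not serialize execution, which is why the comment
above calls it faster; the asm is marked volatile only so the compiler does
not hoist or reorder the read across barrier().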