author      Chang S. Bae <chang.seok.bae@intel.com>    2018-09-19 02:08:57 +0300
committer   Ingo Molnar <mingo@kernel.org>             2018-10-08 11:41:10 +0300
commit      c4755613a1339ea77dbb15de75c9f74217209265 (patch)
tree        fc7b090af6ae081c5c6faec8d175a246581fa130 /arch/x86/include/asm/vgtod.h
parent      f4550b52e495e1b634d1f2c1004bcea5dc3321ea (diff)
download    linux-c4755613a1339ea77dbb15de75c9f74217209265.tar.xz
x86/segments/64: Rename the GDT PER_CPU entry to CPU_NUMBER
The old 'per CPU' naming was misleading: 64-bit kernels don't use this
GDT entry for per-CPU data; they use it to store the CPU (and node) ID.
[ mingo: Wrote new changelog. ]
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Markus T Metzger <markus.t.metzger@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Shankar <ravi.v.shankar@intel.com>
Cc: Rik van Riel <riel@surriel.com>
Link: http://lkml.kernel.org/r/1537312139-5580-7-git-send-email-chang.seok.bae@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/include/asm/vgtod.h')
-rw-r--r--   arch/x86/include/asm/vgtod.h   8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
index 53748541c487..4e81ea920722 100644
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -86,9 +86,9 @@ static inline unsigned int __getcpu(void)
 	unsigned int p;
 
 	/*
-	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.  This is volatile so that it orders
-	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * Load CPU (and node) number from GDT.  LSL is faster than RDTSCP
+	 * and works on all CPUs.  This is volatile so that it orders
+	 * correctly with respect to barrier() and to keep GCC from cleverly
 	 * hoisting it out of the calling function.
 	 *
 	 * If RDPID is available, use it.
@@ -96,7 +96,7 @@ static inline unsigned int __getcpu(void)
 	alternative_io ("lsl %[seg],%[p]",
 			".byte 0xf3,0x0f,0xc7,0xf8", /* RDPID %eax/rax */
 			X86_FEATURE_RDPID,
-			[p] "=a" (p), [seg] "r" (__PER_CPU_SEG));
+			[p] "=a" (p), [seg] "r" (__CPU_NUMBER_SEG));
 
 	return p;
 }
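For context on what __getcpu() returns: the kernel programs this GDT entry's
segment limit with the CPU and node number, which is what the single LSL
instruction retrieves. Below is a minimal userspace sketch of the same read.
The selector value (GDT index 15, RPL 3) and the 12-bit CPU field are
assumptions taken from the upstream segment.h/vgtod.h definitions of this era
(GDT_ENTRY_CPU_NUMBER, VGETCPU_CPU_MASK), not something this patch introduces.

/*
 * Hypothetical userspace sketch of the LSL path in __getcpu(), assuming
 * a 64-bit Linux kernel that populates the CPU_NUMBER GDT entry.  The
 * selector (GDT index 15, RPL 3) and the (node << 12) | cpu layout come
 * from the upstream headers of this era, not from this patch.
 */
#include <stdio.h>

#define CPU_NUMBER_SEG	((15 << 3) | 3)	/* __CPU_NUMBER_SEG upstream */
#define CPU_MASK	0xfff		/* VGETCPU_CPU_MASK upstream */

static unsigned int getcpu_lsl(void)
{
	unsigned int p;

	/* LSL loads the segment limit of the given selector into p. */
	asm volatile("lsl %1, %0"
		     : "=r" (p)
		     : "r" ((unsigned int)CPU_NUMBER_SEG));
	return p;
}

int main(void)
{
	unsigned int p = getcpu_lsl();

	printf("cpu = %u, node = %u\n", p & CPU_MASK, p >> 12);
	return 0;
}

Note that the RDPID alternative patched in by alternative_io() reads the
IA32_TSC_AUX MSR, which the kernel fills with the same (node << 12) | cpu
encoding used for RDTSCP, so both paths return an identical value.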