path: root/arch/x86/vdso
2015-06-03  x86/asm/entry, x86/vdso: Move the vDSO code to arch/x86/entry/vdso/  (Ingo Molnar; 22 files, -2142/+0)
Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Brian Gerst <brgerst@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-11  x86/vdso: Fix 'make bzImage' on older distros  (Oleg Nesterov; 1 file, -1/+1)
Change HOST_EXTRACFLAGS to include arch/x86/include/uapi along with include/uapi. This looks more consistent, and this fixes "make bzImage" on my old distro which doesn't have asm/bitsperlong.h in /usr/include/. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: <stable@vger.kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 6f121e548f83 ("x86, vdso: Reimplement vdso.so preparation in build-time C") Link: http://lkml.kernel.org/r/1431332153-18566-6-git-send-email-bp@alien8.de Link: http://lkml.kernel.org/r/20150507165835.GB18652@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-04-27  x86: pvclock: Really remove the sched notifier for cross-cpu migrations  (Paolo Bonzini; 1 file, -19/+15)
This reverts commits 0a4e6be9ca17c54817cf814b4b5aa60478c6df27 and 80f7fdb1c7f0f9266421f823964fd1962681f6ce. The task migration notifier was originally introduced in order to support the pvclock vsyscall with non-synchronized TSC, but KVM only supports it with synchronized TSC. Hence, on KVM the race condition is only needed due to a bad implementation on the host side, and even then it's so rare that it's mostly theoretical. As far as KVM is concerned it's possible to fix the host, avoiding the additional complexity in the vDSO and the (re)introduction of the task migration notifier. Xen, on the other hand, hasn't yet implemented vsyscall support at all, so we do not care about its plans for non-synchronized TSC. Reported-by: Peter Zijlstra <peterz@infradead.org> Suggested-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-13  Merge branch 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -4/+2)
Pull x86 vdso changes from Ingo Molnar: "Misc vDSO updates"

* 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/vdso: Remove x32 intermediates during 'make clean'
  x86/vdso: Teach 'make clean' to remove generated vdso-image-*.c files
  x86/vdso32/syscall.S: Do not load __USER32_DS to %ss
  x86/vdso: Fix the x86 vdso2c tool includes
2015-04-13  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 1 file, -15/+19)
Pull KVM updates from Paolo Bonzini:
 "First batch of KVM changes for 4.1. The most interesting bit here is irqfd/ioeventfd support for ARM and ARM64.

  Summary:

  ARM/ARM64: fixes for live migration, irqfd and ioeventfd support (enabling vhost, too), page aging

  s390: interrupt handling rework, allowing to inject all local interrupts via new ioctl and to get/set the full local irq state for migration and introspection. New ioctls to access memory by virtual address, and to get/set the guest storage keys. SIMD support.

  MIPS: FPU and MIPS SIMD Architecture (MSA) support. Includes some patches from Ralf Baechle's MIPS tree.

  x86: bugfixes (notably for pvclock, the others are small) and cleanups. Another small latency improvement for the TSC deadline timer"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (146 commits)
  KVM: use slowpath for cross page cached accesses
  kvm: mmu: lazy collapse small sptes into large sptes
  KVM: x86: Clear CR2 on VCPU reset
  KVM: x86: DR0-DR3 are not clear on reset
  KVM: x86: BSP in MSR_IA32_APICBASE is writable
  KVM: x86: simplify kvm_apic_map
  KVM: x86: avoid logical_map when it is invalid
  KVM: x86: fix mixed APIC mode broadcast
  KVM: x86: use MDA for interrupt matching
  kvm/ppc/mpic: drop unused IRQ_testbit
  KVM: nVMX: remove unnecessary double caching of MAXPHYADDR
  KVM: nVMX: checks for address bits beyond MAXPHYADDR on VM-entry
  KVM: x86: cache maxphyaddr CPUID leaf in struct kvm_vcpu
  KVM: vmx: pass error code with internal error #2
  x86: vdso: fix pvclock races with task migration
  KVM: remove kvm_read_hva and kvm_read_hva_atomic
  KVM: x86: optimize delivery of TSC deadline timer interrupt
  KVM: x86: extract blocking logic from __vcpu_run
  kvm: x86: fix x86 eflags fixed bit
  KVM: s390: migrate vcpu interrupt state
  ...
2015-04-08  x86: vdso: fix pvclock races with task migration  (Radim Krčmář; 1 file, -8/+12)
If we were migrated right after __getcpu, but before reading the migration_count, we wouldn't notice that we read TSC of a different VCPU, nor that KVM's bug made pvti invalid, as only migration_count on source VCPU is increased. Change vdso instead of updating migration_count on destination. Cc: stable@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Fixes: 0a4e6be9ca17 ("x86: kvm: Revert "remove sched notifier for cross-cpu migrations"") Message-Id: <1428000263-11892-1-git-send-email-rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
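For illustration, here is a minimal sketch of the retry loop this change puts into the vDSO (a reconstruction of the fix's shape, not the verbatim patch; pvti_of(), pvclock_scale() and rmb() stand in for the real kernel helpers): the CPU number and the pvclock version are sampled before and after the TSC read, and the read is retried if either changed or an update was in flight.

    do {
        cpu = __getcpu() & VGETCPU_CPU_MASK;     /* CPU before the read */
        pvti = pvti_of(cpu);                     /* this vCPU's time info */
        version = pvti->version;                 /* odd => update in flight */
        rmb();
        cycles = pvclock_scale(rdtsc_ordered(), pvti);
        rmb();
        cpu1 = __getcpu() & VGETCPU_CPU_MASK;    /* CPU after the read */
    } while (unlikely(cpu != cpu1 ||
                      (pvti->version & 1) ||
                      pvti->version != version));

Because the CPU is re-checked after the read, a migration that happens anywhere inside the loop forces another iteration, which removes the need for the host-side migration_count handshake.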
2015-03-31  x86/vdso: Remove x32 intermediates during 'make clean'  (Andy Lutomirski; 1 file, -1/+1)
The existing clean-files rule was missing vdsox32.so and vdsox32.so.dbg. We should really rename the intermediates to allow a single rule to get them all. Also-reported-by: Magnus Damm <damm+renesas@opensource.se> Signed-off-by: Andy Lutomirski <luto@kernel.org> Link: http://lkml.kernel.org/r/7fa2ad4a63bc6f52e214125900d54165ef06cc10.1427482099.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-31  x86/vdso: Teach 'make clean' to remove generated vdso-image-*.c files  (Andrey Skvortsov; 1 file, -1/+1)
After 'make clean' the following files were left in arch/x86/vdso/:

    vdso-image-32-int80.c
    vdso-image-32-syscall.c
    vdso-image-32-sysenter.c

These files are generated during the build process and are present in .gitignore, so remove them. Signed-off-by: Andrey Skvortsov <andrej.skvortzov@gmail.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Link: http://lkml.kernel.org/r/f85bb7ef6f8c6f6aa4bf422348018c84321454f8.1427482099.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-31  x86/vdso32/syscall.S: Do not load __USER32_DS to %ss  (Denys Vlasenko; 1 file, -2/+0)
This vDSO code only gets used by 64-bit kernels, not 32-bit ones. On 64-bit kernels, the data segment is the same for 32-bit and 64-bit userspace, and the SYSRET instruction loads %ss with its selector. So there's no need to repeat it by hand. Segment loads are somewhat expensive: tens of cycles. Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> [ Removed unnecessary comment. ] Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Will Drewry <wad@chromium.org> Link: http://lkml.kernel.org/r/63da6d778f69fd0f1345d9287f6764d58be519fa.1427482099.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-31  x86/vdso: Fix the x86 vdso2c tool includes  (Tommi Kyntola; 1 file, -1/+1)
The build-time tool arch/x86/vdso/vdso2c.c includes <linux/elf.h>, but cannot find it unless the build host happens to provide it. It should be reading the uapi linux/elf.h instead. This build regression came along with the vdso2c changes between v3.15 and v3.16. Signed-off-by: Tommi Kyntola <tommi.kyntola@gmail.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/1525002.3cJ7BySVpA@musta Link: http://lkml.kernel.org/r/efe1ec29eda830b1d0030882706f3dac99ce1f73.1427482099.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-24  x86: kvm: Revert "remove sched notifier for cross-cpu migrations"  (Marcelo Tosatti; 1 file, -8/+8)
The following point:

    2. per-CPU pvclock time info is updated if the underlying CPU changes.

is not true anymore since "KVM: x86: update pvclock area conditionally, on cpu migration". Add the task migration notification back. Problem noticed by Andy Lutomirski. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> CC: stable@kernel.org # 3.11+
2015-03-06  x86/vdso: Fix the build on GCC5  (Jiri Slaby; 1 file, -0/+1)
On gcc5 the kernel does not link:

    ld: .eh_frame_hdr table[4] FDE at 0000000000000648 overlaps table[5] FDE at 0000000000000670.

This is because prior GCC versions always emitted NOPs for ALIGN directives, but gcc5 started omitting them. .LSTARTFDEDLSI1 says:

    /* HACK: The dwarf2 unwind routines will subtract 1 from the
       return address to get an address in the middle of the presumed
       call instruction. Since we didn't get here via a call, we need
       to include the nop before the real start to make up for it. */
    .long .LSTART_sigreturn-1-.    /* PC-relative start address */

But commit 69d0627a7f6e ("x86 vDSO: reorder vdso32 code") from 2.6.25 replaced

    .org __kernel_vsyscall+32,0x90

by ALIGN right before __kernel_sigreturn. Of course, ALIGN need not generate any NOP there: gcc5 in particular collapses vclock_gettime.o and int80.o together with no NOPs generated for the ALIGN. So fix this by emitting at least a single explicit NOP at that point, and let ALIGN pad the function with more NOPs as needed. Kudos for reporting and diagnosing should go to Richard. Reported-by: Richard Biener <rguenther@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz> Acked-by: Andy Lutomirski <luto@amacapital.net> Cc: <stable@vger.kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1425543211-12542-1-git-send-email-jslaby@suse.cz Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-02-14  x86_64: add KASan support  (Andrey Ryabinin; 1 file, -0/+1)
This patch adds arch-specific code for the kernel address sanitizer. 16TB of virtual address space is used for shadow memory, located in the range [ffffec0000000000 - fffffc0000000000] between vmemmap and the %esp fixup stacks. At an early stage we map the whole shadow region with the zero page. Later, after pages are mapped into the direct-mapping address range, we unmap zero pages from the corresponding shadow (see kasan_map_shadow()) and allocate and map real shadow memory, reusing the vmemmap_populate() function. Also replace __pa with __pa_nodebug before the shadow is initialized: with CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call (__phys_addr), and __phys_addr is instrumented, so __asan_load could be called before the shadow area is initialized. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by: Andrey Konovalov <adech.fo@gmail.com> Cc: Yuri Gribov <tetra2005@gmail.com> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@linux.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Jim Davis <jim.epost@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
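The heart of the shadow scheme is a single translation helper; a sketch assuming the 1:8 shadow scale (KASAN_SHADOW_OFFSET is the constant that places the shadow of the kernel address space into the range quoted above):

    #define KASAN_SHADOW_SCALE_SHIFT 3    /* one shadow byte per 8 bytes */

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET);
    }

This is why 16TB of shadow suffices: it is one eighth of the address range being tracked, and unpoisoned regions can all alias the same zero page until real shadow is populated.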
2015-01-29  x86, vdso: teach 'make clean' remove vdso64 binaries  (Andrey Skvortsov; 1 file, -1/+1)
After 'make clean' vdso64.so and vdso64.dbg.so were left in arch/x86/vdso/. Link: http://lkml.kernel.org/r/1422453867-17326-1-git-send-email-andrej.skvortzov@gmail.com Signed-off-by: Andrey Skvortsov <andrej.skvortzov@gmail.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net>
2014-12-21  x86_64, vdso: Fix the vdso address randomization algorithm  (Andy Lutomirski; 1 file, -16/+29)
The theory behind vdso randomization is that it's mapped at a random offset above the top of the stack. To avoid wasting a page of memory for an extra page table, the vdso isn't supposed to extend past the lowest PMD into which it can fit. Other than that, the address should be a uniformly distributed address that meets all of the alignment requirements. The current algorithm is buggy: the vdso has about a 50% probability of being at the very end of a PMD. The current algorithm also has a decent chance of failing outright due to incorrect handling of the case where the top of the stack is near the top of its PMD. This fixes the implementation. The paxtest estimate of vdso "randomisation" improves from 11 bits to 18 bits. (Disclaimer: I don't know what the paxtest code is actually calculating.) It's worth noting that this algorithm is inherently biased: the vdso is more likely to end up near the end of its PMD than near the beginning. Ideally we would either nix the PMD sharing requirement or jointly randomize the vdso and the stack to reduce the bias. In the meantime, this is a considerable improvement with basically no risk of compatibility issues, since the allowed outputs of the algorithm are unchanged. As an easy test, doing this:

    for i in `seq 10000`
    do
        grep -P vdso /proc/self/maps | cut -d- -f1
    done | sort | uniq -d

used to produce lots of output (1445 lines on my most recent run). A tiny subset looks like this:

    7fffdfffe000
    7fffe01fe000
    7fffe05fe000
    7fffe07fe000
    7fffe09fe000
    7fffe0bfe000
    7fffe0dfe000

Note the suspicious fe000 endings. With the fix, I get a much more palatable 76 repeated addresses. Reviewed-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Signed-off-by: Andy Lutomirski <luto@amacapital.net>
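A sketch of the fixed placement logic, reconstructed from the description above (get_random_int(), PMD_SIZE/PMD_MASK and TASK_SIZE_MAX are the era's kernel names; the final alignment step is omitted): round the stack top up to the end of its PMD, pull that limit back by the image length so the whole vdso stays inside the PMD, then pick a page-aligned offset uniformly from whatever range survives.

    static unsigned long vdso_addr(unsigned long start, unsigned long len)
    {
        unsigned long addr, end;
        unsigned long offset;

        end = (start + PMD_SIZE - 1) & PMD_MASK;  /* end of the stack's PMD */
        if (end >= TASK_SIZE_MAX)
            end = TASK_SIZE_MAX;
        end -= len;                               /* image must fit below end */

        if (end > start) {
            offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
            addr = start + (offset << PAGE_SHIFT);
        } else {
            addr = start;                         /* no room: degenerate case */
        }
        return addr;
    }

Subtracting len before choosing the offset is what removes the old 50% pile-up at the end of the PMD: every candidate address, including the last one, leaves room for the whole image.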
2014-11-03  x86,vdso: Use LSL unconditionally for vgetcpu  (Andy Lutomirski; 1 file, -1/+0)
LSL is faster than RDTSCP and works everywhere; there's no need to switch between them depending on CPU. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: Andi Kleen <andi@firstfloor.org> Link: http://lkml.kernel.org/r/72f73d5ec4514e02bba345b9759177ef03742efb.1414706021.git.luto@amacapital.net Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
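The resulting vgetcpu core is tiny; a sketch of the pattern (the kernel sets up a per-CPU GDT segment whose limit encodes (node << 12) | cpu, and LSL simply reads back a segment limit without faulting):

    static inline unsigned int __getcpu(void)
    {
        unsigned int p;

        /* Load the limit of the magic per-CPU segment descriptor. */
        asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
        return p;    /* cpu = p & 0xfff, node = p >> 12 */
    }

Unlike RDTSCP, which is absent on some CPUs, LSL is part of baseline x86-64, so no runtime switching is needed.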
2014-11-01  x86: vdso: Fix build with older gcc  (Andrew Morton; 1 file, -10/+8)
gcc-4.4.4:

    arch/x86/vdso/vma.c: In function 'vgetcpu_cpu_init':
    arch/x86/vdso/vma.c:247: error: unknown field 'limit0' specified in initializer
    arch/x86/vdso/vma.c:247: warning: missing braces around initializer
    arch/x86/vdso/vma.c:247: warning: (near initialization for '(anonymous).<anonymous>')
    arch/x86/vdso/vma.c:248: error: unknown field 'limit' specified in initializer
    arch/x86/vdso/vma.c:248: warning: excess elements in struct initializer
    arch/x86/vdso/vma.c:248: warning: (near initialization for '(anonymous)')
    ...

I couldn't find any way of tricking it into accepting an initializer format :( Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Fixes: 258801563b ("x86/vdso: Change the PER_CPU segment to use struct desc_struct") Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
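The underlying limitation is easy to reproduce outside the kernel; a minimal, hypothetical example (struct desc here is a stand-in, not the real desc_struct): old gcc rejects designated initializers that name members of an anonymous union, so the workaround is zero-initialization plus plain assignments.

    #include <string.h>

    struct desc {
        union {
            struct {
                unsigned short limit0;
                unsigned short base0;
            };
            unsigned int raw;
        };
    };

    void init_desc(struct desc *d)
    {
        /* struct desc tmp = { .limit0 = 0xfff };
         * gcc 4.4: "unknown field 'limit0' specified in initializer" */
        memset(d, 0, sizeof(*d));
        d->limit0 = 0xfff;    /* plain assignments work on every gcc */
    }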
2014-10-28  x86_64/vdso: Clean up vgetcpu init and merge the vdso initcalls  (Andy Lutomirski; 1 file, -36/+18)
Now vdso/vma.c has a single initcall and no references to "vsyscall". Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/945c463e2804fedd8b08d63a040cbe85d55195aa.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-10-28  x86_64/vdso: Remove jiffies from the vvar page  (Andy Lutomirski; 1 file, -1/+0)
I think that the jiffies vvar was once used for the vgetcpu cache. That code is long gone, so let's just make jiffies be a normal variable. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/fcfee6f8749af14d96373a9e2656354ad0b95499.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-10-28  x86/vdso: Make the PER_CPU segment 32 bits  (Andy Lutomirski; 1 file, -0/+1)
IMO users ought not to be able to use 16-bit segments without using modify_ldt. Fortunately, it's impossible to break espfix64 by loading the PER_CPU segment into SS, because PER_CPU is marked read-only and SS cannot contain an RO segment, but marking PER_CPU as 32-bit is less fragile. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/179f490d659307873eefd09206bebd417e2ab5ad.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-10-28  x86/vdso: Make the PER_CPU segment start out accessed  (Andy Lutomirski; 1 file, -1/+1)
The first userspace attempt to read or write the PER_CPU segment will write the accessed bit to the GDT. This is visible to userspace using the LAR instruction, and it also pointlessly dirties a cache line. Set the segment's accessed bit at boot to prevent userspace access to segments from having side effects. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/ac63814ca4c637a08ec2fd0360d67ca67560a9ee.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-10-28  x86/vdso: Change the PER_CPU segment to use struct desc_struct  (Andy Lutomirski; 1 file, -7/+12)
This makes it easier to see what's going on. It produces exactly the same segment descriptor as the old code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d492f7b55136cbc60f016adae79160707b2e03b7.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
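Combined with the two follow-ups just above (pre-setting the accessed bit and marking the segment 32-bit), the descriptor ends up being built roughly like this; a sketch using desc_struct's bitfield names, written in assignment form to match the older-gcc fix also listed above:

    struct desc_struct d = { };

    d.limit0 = cpu | ((node & 0xf) << 12);  /* limit encodes cpu and node */
    d.limit  = node >> 4;
    d.type   = 5;    /* read-only data, expand down, accessed */
    d.dpl    = 3;    /* readable from user mode */
    d.s      = 1;    /* ordinary (non-system) segment */
    d.p      = 1;    /* present */
    d.d      = 1;    /* 32-bit, so it cannot be abused as a 16-bit segment */

    write_gdt_entry(get_cpu_gdt_table(cpu), GDT_ENTRY_PER_CPU, &d, DESCTYPE_S);

Named bitfields make the intent of each descriptor bit visible, where the old code packed the same values into opaque shifted constants.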
2014-10-28  x86_64/vdso: Move getcpu code from vsyscall_64.c to vdso/vma.c  (Andy Lutomirski; 1 file, -0/+61)
This is pure cut-and-paste. At this point, vsyscall_64.c contains only code needed for vsyscall emulation, but some of the comments and function names are still confused. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/a244daf7d3cbe71afc08ad09fdfe1866ca1f1978.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-09-24  x86/vdso: Fix vdso2c's special_pages[] error checking  (Andy Lutomirski; 1 file, -5/+7)
Stephen Rothwell's compiler did something amazing: it unrolled a loop, discovered that one iteration of that loop contained an always-true test, and emitted a warning that will IMO only serve to convince people to disable the warning. That bogus warning caused me to wonder what prompted such an absurdity from his compiler, and I discovered that the code in question was, in fact, completely wrong -- I was looking things up in the wrong array. This affects 3.16 as well, but the only effect is to screw up the error checking a bit. vdso2c's output is unaffected. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/53d96ad5.80ywqrbs33ZBCQej%25akpm@linux-foundation.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-09  arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area  (Andy Lutomirski; 1 file, -18/+1)
The core mm code will provide a default gate area based on FIXADDR_USER_START and FIXADDR_USER_END if !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR). This default is only useful for ia64. arm64, ppc, s390, sh, tile, 64-bit UML, and x86_32 have their own code just to disable it. arm, 32-bit UML, and x86_64 have gate areas, but they have their own implementations. This gets rid of the default and moves the code into ia64. This should save some code on architectures without a gate area: it's now possible to inline the gate_area functions in the default case. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Nathan Lynch <nathan_lynch@mentor.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle] Acked-by: Richard Weinberger <richard@nod.at> [for um] Acked-by: Will Deacon <will.deacon@arm.com> [for arm64] Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-07-26  x86/vdso: Set VM_MAYREAD for the vvar vma  (Andy Lutomirski; 1 file, -1/+1)
The VVAR area can, obviously, be read; that is kind of the point. AFAIK this has no effect whatsoever unless x86 suddenly turns into a nommu architecture. Nonetheless, not setting it is suspicious. Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/e4c8bf4bc2725bda22c4a4b7d0c82adcd8f8d9b8.1406330779.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-12  x86, vdso: Get rid of the fake section mechanism  (Andy Lutomirski; 4 files, -229/+126)
Now that we can tolerate extra things dangling off the end of the vdso image, we can strip the vdso the old-fashioned way rather than using an overcomplicated custom stripping algorithm. This is a partial reversion of commit 6f121e5 ("x86, vdso: Reimplement vdso.so preparation in build-time C"). Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/50e01ed6dcc0575d20afd782f9fe98d5ee3e2d8a.1405040914.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-12  x86, vdso: Move the vvar area before the vdso text  (Andy Lutomirski; 4 files, -48/+53)
Putting the vvar area after the vdso text is rather complicated: it only works if the total length of the vdso text mapping is known at vdso link time, and the linker doesn't allow symbol addresses to depend on the sizes of non-allocatable data after the PT_LOAD segment. Moving the vvar area before the vdso text will allow us to safely map non-allocatable data after the vdso text, which is a nice simplification. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/156c78c0d93144ff1055a66493783b9e56813983.1405040914.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-11  x86-32, vdso: Fix vDSO build error due to missing align_vdso_addr()  (Jan Beulich; 1 file, -0/+4)
Relying on static functions used just once to get inlined (and subsequently have dead code paths eliminated) is wrong: Compilers are free to decide whether they do this, regardless of optimization level. With this not happening for vdso_addr() (observed with gcc 4.1.x), an unresolved reference to align_vdso_addr() causes the build to fail. [ hpa: vdso_addr() is never actually used on x86-32, as calculate_addr in map_vdso() is always false. It ought to be possible to clean this up further, but this fixes the immediate problem. ] Signed-off-by: Jan Beulich <jbeulich@suse.com> Link: http://lkml.kernel.org/r/53B5863B02000078000204D5@mail.emea.novell.com Acked-by: Andy Lutomirski <luto@amacapital.net> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-11  x86-64, vdso: Fix vDSO build breakage due to empty .rela.dyn  (Jan Beulich; 1 file, -0/+3)
Certain ld versions (observed with 2.20.0) put an empty .rela.dyn section into shared object files, breaking the assumption on the number of sections to be copied to the final output. Simply discard any empty SHT_REL and SHT_RELA sections to address this. Signed-off-by: Jan Beulich <jbeulich@suse.com> Link: http://lkml.kernel.org/r/53B5861E02000078000204D1@mail.emea.novell.com Acked-by: Andy Lutomirski <luto@amacapital.net> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
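A sketch of how such sections can be skipped in vdso2c's section-copying loop (GET_LE() and the ELF() bitness macro appear further down this log; copy_section() is a hypothetical stand-in for the tool's copying logic, and the real fix may differ in detail):

    for (i = 0; i < GET_LE(&hdr->e_shnum); i++) {
        ELF(Shdr) *sh = raw_addr + GET_LE(&hdr->e_shoff) +
                        GET_LE(&hdr->e_shentsize) * i;

        /* An empty relocation section from a quirky linker: drop it. */
        if ((GET_LE(&sh->sh_type) == SHT_REL ||
             GET_LE(&sh->sh_type) == SHT_RELA) &&
            GET_LE(&sh->sh_size) == 0)
            continue;

        copy_section(sh, i);
    }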
2014-06-25  x86/vdso: Error out in vdso2c if DT_RELA is present  (Andy Lutomirski; 1 file, -1/+1)
vdso2c was checking for various types of relocations to detect when the vdso had undefined symbols or was otherwise dependent on relocation at load time. Undefined symbols in the vdso would fail if accessed at runtime, and certain implementation errors (e.g. branch profiling or incorrect symbol visibilities) could result in data access through the GOT that requires relocations. This could be as simple as:

    extern char foo;
    return foo;

Without some kind of visibility control, the compiler would assume that foo could be interposed at load time and would generate a relocation. x86-64 and x32 (as opposed to i386) use explicit-addend (RELA) instead of implicit-addend (REL) relocations for data access, and vdso2c forgot to detect those. Whether these bad relocations would actually fail at runtime depends on what the linker sticks in the unrelocated references. Nonetheless, these relocations have no business existing in the vDSO and should be fixed rather than silently ignored. This error could trigger on some configurations due to branch profiling. The previous patch fixed that. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/74ef0c00b4d2a3b573e00a4113874e62f772e348.1403642755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
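A sketch of the check as it plausibly looks in vdso2c after this change (the REL family was already rejected; the RELA tags are the addition, with the exact tag list reconstructed from the commit's intent):

    /* Walk the .dynamic table and refuse any relocation-related tags. */
    for (i = 0; dyn + i < dyn_end && GET_LE(&dyn[i].d_tag) != DT_NULL; i++) {
        typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag);

        if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELENT ||
            tag == DT_RELA || tag == DT_RELASZ || tag == DT_TEXTREL)
            fail("vdso image contains dynamic relocations\n");
    }

Failing the build is the right call here: a vDSO is mapped at an arbitrary address without a dynamic loader, so any load-time relocation requirement is a latent runtime bug.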
2014-06-25  x86/vdso: Move DISABLE_BRANCH_PROFILING into the vdso makefile  (Andy Lutomirski; 2 files, -4/+3)
DISABLE_BRANCH_PROFILING turns off branch profiling (i.e. a redefinition of 'if'). Branch profiling depends on a bunch of kernel-internal symbols and generates extra output sections, none of which are useful or functional in the vDSO. It's currently turned off for vclock_gettime.c, but vgetcpu.c also triggers branch profiling, so just turn it off in the makefile. This fixes the build on some configurations: the vdso could contain undefined symbols, and the fake section table overflowed due to ftrace's added sections. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/bf1ec29e03b2bbc081f6dcaefa64db1c3a83fb21.1403642755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-21  x86/vdso: Create .build-id links for unstripped vdso files  (Andy Lutomirski; 1 file, -3/+13)
With this change, doing 'make vdso_install' and telling gdb:

    set debug-file-directory /lib/modules/KVER/vdso

will enable vdso debugging with symbols. This is useful for testing, but kernel RPM builds will probably want to manually delete these symlinks or otherwise do something sensible when they strip the vdso/*.so files. If ld does not support --build-id, then the symlinks will not be created. Note that kernel packagers that use vdso_install may need to adjust their packaging scripts to accommodate this change. For example, Fedora's scripts create build-id symlinks themselves in a different location, so the spec should probably be updated to remove the symlinks created by 'make vdso_install'. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/a424b189ce3ced85fe1e82d032a20e765e0fe0d3.1403291930.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-20  x86/vdso: Remove some redundant in-memory section headers  (Andy Lutomirski; 3 files, -24/+26)
.data doesn't need to be separate from .rodata: they're both readonly. .altinstructions and .altinstr_replacement aren't needed by anything except vdso2c; strip them from the final image. While we're at it, rather than aligning the actual executable text, just shove some unused-at-runtime data in between real data and text. My vdso image is still above 4k, but I'm disinclined to try to trim it harder for 3.16. For future trimming, I suspect that these sections could be moved to later in the file and dropped from the in-memory image:

    .gnu.version and .gnu.version_d  (this may lose versions in gdb)
    .eh_frame                        (should be harmless)
    .eh_frame_hdr                    (I'm not really sure)
    .hash                            (AFAIK nothing needs this section header)

Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2e96d0c49016ea6d026a614ae645e93edd325961.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-20  x86/vdso: Improve the fake section headers  (Andy Lutomirski; 8 files, -67/+237)
Fully stripping the vDSO has other unfortunate side effects:

 - binutils is unable to find ELF notes without a SHT_NOTE section.
 - Even elfutils has trouble: it can find ELF notes without a section table at all, but if a section table is present, it won't look for PT_NOTE.
 - gdb wants section names to match between stripped DSOs and their symbols; otherwise it will corrupt symbol addresses.

We're also breaking the rules: section 0 is supposed to be SHT_NULL. Fix these problems by building a better fake section table. While we're at it, we might as well let buggy Go versions keep working well by giving the SHT_DYNSYM entry the correct size. This is a bit unfortunate: it adds quite a bit of size to the vdso image. If/when binutils improves and the improved versions become widespread, it would be worth considering dropping most of this. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/0e546a5eeaafdf1840e6ee654a55c1e727c26663.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-20  x86/vdso2c: Use better macros for ELF bitness  (Andy Lutomirski; 2 files, -40/+25)
Rather than using a separate macro for each replacement, use generic macros. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d953cd2e70ceee1400985d091188cdd65fba2f05.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
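The idea is a single token-pasting helper instead of one #define per identifier; a sketch of the pattern:

    #define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x
    #define ELF_BITS_XFORM(bits, x)  ELF_BITS_XFORM2(bits, x)
    #define ELF(x)                   ELF_BITS_XFORM(ELF_BITS, x)

    /* With ELF_BITS defined to 64, ELF(Ehdr) expands to Elf64_Ehdr, so
     * the same source can be compiled once per bitness. */
    ELF(Ehdr) *hdr = (ELF(Ehdr) *)raw_addr;

The two-level expansion is the standard trick: the inner XFORM2 pastes tokens, while the outer XFORM forces ELF_BITS to be macro-expanded to 32 or 64 first.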
2014-06-20  x86/vdso: Discard the __bug_table section  (Andy Lutomirski; 1 file, -0/+1)
It serves no purpose in user code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2a5bebff42defd8a5e81d96f7dc00f21143c80e8.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-06-13  x86/vdso: Fix vdso_install  (Andy Lutomirski; 1 file, -11/+11)
"make vdso_install" installs unstripped versions of the vdso objects for the benefit of the debugger. This was broken by commit 6f121e548f83 ("x86, vdso: Reimplement vdso.so preparation in build-time C"). The filenames are different now, so update the Makefile to cope. This still installs the 64-bit vdso as vdso64.so. We believe this will be okay, as the only known user is a patched gdb which is known to use build-ids, but if it turns out to be a problem we may have to add a link. Inspired by a patch from Sam Ravnborg. Acked-by: Sam Ravnborg <sam@ravnborg.org> Reported-by: Josh Boyer <jwboyer@fedoraproject.org> Tested-by: Josh Boyer <jwboyer@fedoraproject.org> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/b10299edd8ba98d17e07dafcd895b8ecf4d99eff.1402586707.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-06-13  x86/vdso: Hack to keep 64-bit Go programs working  (Andy Lutomirski; 3 files, -13/+60)
The Go runtime has a buggy vDSO parser that currently segfaults. This writes an empty SHT_DYNSYM entry that causes Go's runtime to malfunction by thinking that the vDSO is empty rather than malfunctioning by running off the end and segfaulting. This affects x86-64 only as far as we know, so we do not need this for the i386 and x32 vdsos. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d10618176c4bd39b457a5e85c497295c90cab1bc.1402620737.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-06-13  x86/vdso: Add PUT_LE to store little-endian values  (Andy Lutomirski; 1 file, -3/+16)
Add PUT_LE() by analogy with GET_LE() to write littleendian values in addition to reading them. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/3d9b27e92745b27b6fda1b9a98f70dc9c1246c7a.1402620737.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
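By analogy, here is the pair spelled out for one width as a self-contained sketch (the real macros dispatch on sizeof(*(x)) and cover 16-, 32- and 64-bit accesses; the byte-shift idea is the same):

    #include <stdint.h>

    static inline uint32_t get_le32(const uint8_t *p)
    {
        return  (uint32_t)p[0]        | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    static inline void put_le32(uint8_t *p, uint32_t v)
    {
        p[0] = v & 0xff;
        p[1] = (v >> 8) & 0xff;
        p[2] = (v >> 16) & 0xff;
        p[3] = (v >> 24) & 0xff;
    }

Byte shifts make the accessors independent of the host's own endianness and alignment rules, which matters for a build tool that may run on a big-endian host.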
2014-06-11  x86, vdso: Remove one final use of htole16()  (H. Peter Anvin; 1 file, -1/+1)
One final use of the macros from <endian.h>, which are not available on older systems. In this case we had one sole case of *writing* a littleendian number, but the number is SHN_UNDEF, which is the constant zero, so rather than dealing with the general case of littleendian puts here, just document that the constant is zero and be done with it. Reported-and-Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62bd9@linux-foundation.org
2014-06-07  x86, vdso: Use <tools/le_byteshift.h> for littleendian access  (H. Peter Anvin; 3 files, -35/+38)
There are no standard functions for littleendian data (unlike bigendian data.) Thus, use <tools/le_byteshift.h> to access littleendian data members. Those are fairly inefficient, but it doesn't matter for this purpose (and can be optimized later.) This avoids portability problems. Reported-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Tested-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/20140606140017.afb7f91142f66cb3dd13c186@linux-foundation.org
2014-05-31  x86/vdso, build: Make LE access macros clearer, host-safe  (H. Peter Anvin; 2 files, -39/+42)
Make it a little clearer what the littleendian access macros in vdso2c.[ch] actually do. This way they can probably also be moved to a central location (e.g. tools/include) for the benefit of other host tools. We should avoid implementation namespace symbols when writing code that is compiling for the compiler host, so avoid names starting with double underscore or underscore-capital. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.luto@amacapital.net
2014-05-31  x86/vdso, build: Fix cross-compilation from big-endian architectures  (Andy Lutomirski; 2 files, -28/+50)
This adds a macro GET(x) to convert x from big-endian to little-endian. Hopefully I put it everywhere it needs to go and got all the cases needed for everyone's linux/elf.h. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-05-31  x86/vdso, build: When vdso2c fails, unlink the output  (Andy Lutomirski; 2 files, -16/+14)
This avoids bizarre failures if make is run again. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/1764385fe9931e8940b9d001132515448ea89523.1401464755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
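A sketch of the mechanism (variable and function names are illustrative of the vdso2c style, not quoted from it): record the output filename globally, and have the common error path delete any partially written file before exiting, so that a rerun of make cannot pick up a stale image.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static const char *outfilename;

    static void fail(const char *format, ...)
    {
        va_list ap;

        va_start(ap, format);
        fprintf(stderr, "Error: ");
        vfprintf(stderr, format, ap);
        va_end(ap);

        if (outfilename)
            unlink(outfilename);    /* don't leave a truncated output */

        exit(1);
    }

Without the unlink, a half-written vdso-image-*.c would be newer than its prerequisites, so a subsequent make would consider it up to date and compile garbage.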
2014-05-20  x86, mm: Improve _install_special_mapping and fix x86 vdso naming  (Andy Lutomirski; 3 files, -17/+20)
Using arch_vma_name to give special mappings a name is awkward. x86 currently implements it by comparing the start address of the vma to the expected address of the vdso. This requires tracking the start address of special mappings and is probably buggy if a special vma is split or moved. Improve _install_special_mapping to just name the vma directly. Use it to give the x86 vvar area a name, which should make CRIU's life easier. As a side effect, the vvar area will show up in core dumps. This could be considered weird and is fixable. [hpa: I say we accept this as-is but be prepared to deal with knocking out the vvars from core dumps if this becomes a problem.] Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/276b39b6b645fb11e345457b503f17b83c2c6fd0.1400538962.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
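A sketch of the improved interface as described here (vvar_pages is an assumed page array; VM_MAYREAD reflects the later fix listed above): the vma's name comes from the vm_special_mapping itself, so splitting or moving the vma can no longer confuse the naming.

    static int map_vvar(struct mm_struct *mm, unsigned long addr,
                        unsigned long size)
    {
        static const struct vm_special_mapping vvar_mapping = {
            .name  = "[vvar]",      /* what shows up in /proc/pid/maps */
            .pages = vvar_pages,    /* assumed: pages backing the area */
        };
        struct vm_area_struct *vma;

        vma = _install_special_mapping(mm, addr, size,
                                       VM_READ | VM_MAYREAD,
                                       &vvar_mapping);
        return IS_ERR(vma) ? PTR_ERR(vma) : 0;
    }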
2014-05-20  x86, vdso: Fix an OOPS accessing the HPET mapping w/o an HPET  (Andy Lutomirski; 1 file, -1/+2)
The oops can be triggered in qemu using -no-hpet (but not nohpet) by reading a couple of pages past the end of the vdso text. This should send SIGBUS instead of OOPSing. The bug was introduced by:

    commit 7a59ed415f5b57469e22e41fc4188d5399e0b194
    Author: Stefani Seibold <stefani@seibold.net>
    Date:   Mon Mar 17 23:22:09 2014 +0100

        x86, vdso: Add 32 bit VDSO time support for 32 bit kernel

which is new in 3.15. This will be fixed separately in 3.15, but that patch will not apply to tip/x86/vdso. This is the equivalent fix for tip/x86/vdso and, presumably, 3.16. Cc: Stefani Seibold <stefani@seibold.net> Reported-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/c8b0a9a0b8d011a8b273cbb2de88d37190ed2751.1400538962.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
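Conceptually, the fix is a bounds check in the fault path; an illustrative sketch using the 3.x vm_ops->fault signature (the real patch differs in detail, and vdso_pages/vdso_npages are assumed names):

    static int vdso_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
        /* An access inside the vma but beyond the pages that actually
         * back it must raise SIGBUS instead of oopsing the kernel. */
        if (vmf->pgoff >= vdso_npages || !vdso_pages[vmf->pgoff])
            return VM_FAULT_SIGBUS;

        vmf->page = vdso_pages[vmf->pgoff];
        get_page(vmf->page);
        return 0;
    }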
2014-05-06  x86, vdso: Remove vestiges of VDSO_PRELINK and some outdated comments  (Andy Lutomirski; 3 files, -16/+3)
These definitions had no effect. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/946c104e40c47319f8ab406e54118799cb55bd99.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-05-06  x86, vdso: Move the vvar and hpet mappings next to the 64-bit vDSO  (Andy Lutomirski; 2 files, -19/+5)
This makes the 64-bit and x32 vdsos use the same mechanism as the 32-bit vdso. Most of the churn is deleting all the old fixmap code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/8af87023f57f6bb96ec8d17fce3f88018195b49b.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-05-06  x86, vdso: Move the 32-bit vdso special pages after the text  (Andy Lutomirski; 5 files, -150/+166)
This unifies the vdso mapping code and teaches it how to map special pages at addresses corresponding to symbols in the vdso image. The new code is used for all vdso variants, but so far only the 32-bit variants use the new vvar page position. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/b6d7858ad7b5ac3fd3c29cab6d6d769bc45d195e.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>