Commit log for arch/arm64/kernel/vdso
2018-07-17  arm64: Add build salt to the vDSO  (Laura Abbott; 1 file, -0/+3)

The vDSO needs to have a unique build id in a similar manner to the kernel and modules. Use the build salt macro.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
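To illustrate the mechanism: the change boils down to dropping the salt macro into one of the vDSO source files. A minimal kernel-style sketch, assuming BUILD_SALT from <linux/build-salt.h> works as described (it emits an ELF note carrying the CONFIG_BUILD_SALT string, which then perturbs the computed build id):

#include <linux/build-salt.h>

/* Emits a "Linux" ELF note whose payload is CONFIG_BUILD_SALT; two builds
 * configured with different salts therefore end up with different build ids. */
BUILD_SALT;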
2017-11-15  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -1/+1)

Pull arm64 updates from Will Deacon:
 "The big highlight is support for the Scalable Vector Extension (SVE) which required extensive ABI work to ensure we don't break existing applications by blowing away their signal stack with the rather large new vector context (<= 2 kbit per vector register). There's further work to be done optimising things like exception return, but the ABI is solid now.

  Much of the line count comes from some new PMU drivers we have, but they're pretty self-contained and I suspect we'll have more of them in future.

  Plenty of acronym soup here:
   - initial support for the Scalable Vector Extension (SVE)
   - improved handling for SError interrupts (required to handle RAS events)
   - enable GCC support for 128-bit integer types
   - remove kernel text addresses from backtraces and register dumps
   - use of WFE to implement long delay()s
   - ACPI IORT updates from Lorenzo Pieralisi
   - perf PMU driver for the Statistical Profiling Extension (SPE)
   - perf PMU driver for Hisilicon's system PMUs
   - misc cleanups and non-critical fixes"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (97 commits)
  arm64: Make ARMV8_DEPRECATED depend on SYSCTL
  arm64: Implement __lshrti3 library function
  arm64: support __int128 on gcc 5+
  arm64/sve: Add documentation
  arm64/sve: Detect SVE and activate runtime support
  arm64/sve: KVM: Hide SVE from CPU features exposed to guests
  arm64/sve: KVM: Treat guest SVE use as undefined instruction execution
  arm64/sve: KVM: Prevent guests from using SVE
  arm64/sve: Add sysctl to set the default vector length for new processes
  arm64/sve: Add prctl controls for userspace vector length management
  arm64/sve: ptrace and ELF coredump support
  arm64/sve: Preserve SVE registers around EFI runtime service calls
  arm64/sve: Preserve SVE registers around kernel-mode NEON use
  arm64/sve: Probe SVE capabilities and usable vector lengths
  arm64: cpufeature: Move sys_caps_initialised declarations
  arm64/sve: Backend logic for setting the vector length
  arm64/sve: Signal handling support
  arm64/sve: Support vector length resetting for new processes
  arm64/sve: Core task context handling
  arm64/sve: Low-level CPU setup
  ...
2017-11-02  License cleanup: add SPDX GPL-2.0 license identifier to files with no license  (Greg Kroah-Hartman; 2 files, -0/+2)

Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done: patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to apply to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few thousand files.

The 4.13 kernel was the starting point of the analysis, with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to apply to each file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging were:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - When both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top-level COPYING file license applied. For non-*/uapi/* files that summary was:

     SPDX license identifier                               # files
     ----------------------------------------------------|-------
     GPL-2.0                                                11139

   and resulted in the first patch in this series. If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:

     SPDX license identifier                               # files
     ----------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                           930

   and resulted in the second patch in this series.

 - If a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or it had no licensing in it (per the prior point). Results summary:

     SPDX license identifier                               # files
     ----------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                           270
     GPL-2.0+ WITH Linux-syscall-note                          169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)        21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)        17
     LGPL-2.1+ WITH Linux-syscall-note                          15
     GPL-1.0+ WITH Linux-syscall-note                           14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)        5
     LGPL-2.0+ WITH Linux-syscall-note                           4
     LGPL-2.1 WITH Linux-syscall-note                            3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                  3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                 1

   and that resulted in the third patch in this series.

 - When the two scanners agreed on the detected license(s), that became the concluded license(s).

 - When there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses), a manual inspection of the file occurred.

 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).

 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.

 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 files patched in the initial patch version from early this week, with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores;
 - a review of anything where a license was detected (about 500+ files) to ensure that the applied SPDX license was correct;
 - a review of anything with no detection but whose patch license was not GPL-2.0 WITH Linux-syscall-note, to ensure that the applied SPDX license was correct.

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to each file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
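For reference, the identifier this series adds is a single comment at the top of each file; as the commit notes, the comment style depends on the file type:

/* SPDX-License-Identifier: GPL-2.0 */    (form used in headers)
// SPDX-License-Identifier: GPL-2.0       (form used in .c source files)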
2017-10-31  arm64: vdso: fix clock_getres for 4GiB-aligned res  (Mark Rutland; 1 file, -1/+1)

The vdso tries to check for a NULL res pointer in __kernel_clock_getres, but only checks the lower 32 bits, as it uses CBZ on the W register the res pointer is held in. Thus, if the res pointer happened to be aligned to a 4GiB boundary, we'd spuriously skip storing the timespec to it, while returning a zero error code to the caller.

Prevent this by checking the whole pointer, using CBZ on the X register the res pointer is held in.

Fixes: 9031fefde6f2ac1d ("arm64: VDSO support")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Andrew Pinski <apinski@cavium.com>
Reported-by: Mark Salyzyn <salyzyn@android.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
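In C terms, the bug is equivalent to truncating the pointer before the NULL check. A sketch of the before/after semantics (the CBZ operands come from the commit text; the C is purely illustrative):

#include <stdint.h>
#include <stddef.h>

/* Buggy: "cbz w1, <skip>" tests only the low 32 bits, so a pointer such as
 * 0x100000000 (4GiB-aligned) is mistaken for NULL. */
static inline int looks_null_buggy(const void *res)
{
        return (uint32_t)(uintptr_t)res == 0;
}

/* Fixed: "cbz x1, <skip>" tests the whole 64-bit register. */
static inline int looks_null_fixed(const void *res)
{
        return res == NULL;
}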
2017-06-20  arm64/vdso: Fix nsec handling for CLOCK_MONOTONIC_RAW  (Will Deacon; 1 file, -1/+0)

Recently vDSO support for CLOCK_MONOTONIC_RAW was added in 49eea433b326 ("arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime() vDSO"). Noticing that the core timekeeping code never set tkr_raw.xtime_nsec, the vDSO implementation didn't bother exposing it via the data page and instead took the unshifted tk->raw_time.tv_nsec value, which was then immediately shifted left in the vDSO code.

Unfortunately, by accelerating the MONOTONIC_RAW clockid, it uncovered potential 1ns time inconsistencies caused by the timekeeping core not handling sub-ns resolution.

Now that the core code has been fixed and is actually setting tkr_raw.xtime_nsec, we need to take that into account in the vDSO by adding it to the shifted raw_time value, in order to fix the user-visible inconsistency. Rather than do that at each use (and expand the data page in the process), instead perform the shift/addition operation when populating the data page and remove the shift from the vDSO code entirely.

[jstultz: minor whitespace tweak, tried to improve commit message to make it more clear this fixes a regression]

Reported-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Tested-by: Daniel Mentz <danielmentz@google.com>
Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Stephen Boyd <stephen.boyd@linaro.org>
Cc: "stable #4 . 8+" <stable@vger.kernel.org>
Cc: Miroslav Lichvar <mlichvar@redhat.com>
Link: http://lkml.kernel.org/r/1496965462-20003-4-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
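A sketch of the resulting split of work, with illustrative field and type names (the real ones live in the kernel's timekeeping and vDSO code):

#include <stdint.h>

struct tkr_sketch { uint32_t shift; uint64_t xtime_nsec; };
struct vdso_data_sketch { uint64_t raw_time_sec, raw_time_nsec; };

/* update_vsyscall() side: pre-shift tv_nsec and fold in xtime_nsec once,
 * when the data page is populated, instead of on every vDSO call. */
static void update_raw_fields(struct vdso_data_sketch *vdata,
                              uint64_t raw_sec, uint64_t raw_nsec,
                              const struct tkr_sketch *tkr_raw)
{
        /* Stored left-shifted; the vDSO adds cycle_delta * mult to it and
         * only shifts right once, at the very end. */
        vdata->raw_time_sec = raw_sec;
        vdata->raw_time_nsec = (raw_nsec << tkr_raw->shift) + tkr_raw->xtime_nsec;
}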
2017-03-30  arm64: drop non-existing vdso-offsets.h from .gitignore  (Masahiro Yamada; 1 file, -1/+0)

Since commit a66649dab350 ("arm64: fix vdso-offsets.h dependency"), include/generated/vdso-offsets.h is directly generated without arch/arm64/kernel/vdso/vdso-offsets.h.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-07-12  arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime() vDSO  (Kevin Brodsky; 1 file, -11/+46)

So far the arm64 clock_gettime() vDSO implementation only supported the following clocks, falling back to the syscall for the others:
 - CLOCK_REALTIME{,_COARSE}
 - CLOCK_MONOTONIC{,_COARSE}

This patch adds support for the CLOCK_MONOTONIC_RAW clock, taking advantage of the recent refactoring of the vDSO time functions. Like the non-_COARSE clocks, this only works when the "arch_sys_counter" clocksource is in use (allowing us to read the current time from the virtual counter register); otherwise we also have to fall back to the syscall.

Most of the data is shared with CLOCK_MONOTONIC, and the algorithm is similar. The reference implementation in kernel/time/timekeeping.c shows that:
 - CLOCK_MONOTONIC = tk->wall_to_monotonic + tk->xtime_sec + timekeeping_get_ns(&tk->tkr_mono)
 - CLOCK_MONOTONIC_RAW = tk->raw_time + timekeeping_get_ns(&tk->tkr_raw)
 - tkr_mono and tkr_raw are identical (in particular, same clocksource), except these members:
   * mult (only mono's multiplier is NTP-adjusted)
   * xtime_nsec (always 0 for raw)

Therefore, tk->raw_time and tkr_raw->mult are now also stored in the vDSO data page.

Cc: Ali Saidi <ali.saidi@arm.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
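The fast path the two clocks share can be sketched as follows (illustrative C; the real code is assembly, and the names are assumptions):

#include <stdint.h>

/* Convert the delta between the current counter value and the kernel's
 * last-seen cycle count into left-shifted nanoseconds. Only the multiplier
 * differs between the clocks: tkr_mono's mult is NTP-adjusted, tkr_raw's
 * is not; the clocksource, mask and shift are shared. */
static uint64_t get_clock_shifted_nsec(uint64_t cycle_now, uint64_t cycle_last,
                                       uint64_t mask, uint32_t mult)
{
        return ((cycle_now - cycle_last) & mask) * mult;
}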
2016-07-12  arm64: Refactor vDSO time functions  (Kevin Brodsky; 1 file, -126/+172)

Time functions are directly implemented in assembly in arm64, and it is desirable to keep it this way for performance reasons (everything fits in registers, so that the stack is not used at all). However, the current implementation is quite difficult to read and understand (even considering it's assembly). Additionally, due to the structure of __kernel_clock_gettime, which heavily uses conditional branches to share code between the different clocks, it is difficult to support a new clock without making the branches even harder to follow.

This commit completely refactors the structure of clock_gettime (and gettimeofday along the way) while keeping exactly the same algorithms. We no longer try to share code; instead, macros provide common operations. This new approach comes with a number of advantages:
 - In clock_gettime, clock implementations are no longer interspersed, making them much more readable. Additionally, macros only use registers passed as arguments or reserved with .req; this way it is easy to make sure that registers are properly allocated. To avoid a large number of branches in a given execution path, a jump table is used; a normal execution uses 3 unconditional branches.
 - __do_get_tspec has been replaced with 2 macros (get_ts_clock_mono, get_clock_shifted_nsec) and explicit loading of data from the vDSO page. Consequently, clock_gettime and gettimeofday are now leaf functions, and saving x30 (lr) is no longer necessary.
 - Variables protected by tb_seq_count are now loaded all at once, allowing the seqcnt_read macro to be merged into seqcnt_check.
 - For CLOCK_REALTIME_COARSE, removed an unused load of the wall-to-monotonic timespec.
 - For CLOCK_MONOTONIC_COARSE, removed a few shift instructions.

Obviously, the downside of sharing less code is an increase in code size. However, since the vDSO has its own code page, this does not really matter, as long as the size of the DSO remains below 4 kB. For now this should be all right:

                     Before    After
  vdso.so size (B)     2776     3000

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-11  arm64: fix vdso-offsets.h dependency  (Kevin Brodsky; 1 file, -4/+3)

arm64/kernel/{vdso,signal}.c include vdso-offsets.h, as well as any file that includes asm/vdso.h. Therefore, vdso-offsets.h must be generated before these files are compiled.

The current rules in arm64/kernel/Makefile do not actually enforce this, because even though $(obj)/vdso is listed as a prerequisite for vdso-offsets.h, this does not result in the intended effect of building the vdso subdirectory (before all the other objects). As a consequence, depending on the order in which the rules are followed, vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o are built. The current rules also impose an unnecessary dependency on vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary rebuilds. This is made obvious when using make -j:

  touch arch/arm64/kernel/vdso/gettimeofday.S && make -j$NCPUS arch/arm64/kernel

will sometimes result in none of arm64/kernel/*.o being rebuilt, sometimes all of them, or even just some of them.

It is quite difficult to ensure that a header is generated before it is used with recursive Makefiles by using normal rules. Instead, arch-specific generated headers are normally built in the archprepare recipe in the arch Makefile (see for instance arch/ia64/Makefile). Unfortunately, asm-offsets.h is included in gettimeofday.S, and must therefore be generated before vdso-offsets.h, which is not the case if archprepare is used. For this reason, a rule run after archprepare has to be used.

This commit adds rules in arm64/Makefile to build vdso-offsets.h during the prepare step, ensuring that vdso-offsets.h is generated before building anything. It also removes the now-unnecessary dependencies on vdso-offsets.h in arm64/kernel/Makefile. Finally, it removes the duplication of asm-offsets.h between arm64/kernel/vdso/ and include/generated/ and makes include/generated/vdso-offsets.h a target in arm64/kernel/vdso/Makefile.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
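To make the moving parts concrete: the generated header is a list of symbol offsets into the vDSO image, consumed roughly like this (a sketch; the offset value is made up, and the macro shape is an assumption based on how arm64 resolves vDSO entry points such as the sigreturn trampoline):

/* include/generated/vdso-offsets.h (sketch; the value is hypothetical): */
#define vdso_offset_sigtramp 0x0540

/* Typical consumer: turn an offset into an address inside the mapped vDSO. */
#define VDSO_SYMBOL(base, name) \
        ((void *)((unsigned long)(base) + vdso_offset_##name))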
2016-07-11  Revert "arm64: Fix vdso-offsets.h dependency"  (Catalin Marinas; 1 file, -3/+4)

This reverts commit 90f777beb788d08300f4a1482cb4fd37a401b472.

While this commit was aimed at fixing the dependencies, with a large make -j the vdso-offsets.h file is not generated, leading to build failures.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-08  arm64: Fix vdso-offsets.h dependency  (Catalin Marinas; 1 file, -4/+3)

arch/arm64/kernel/{vdso,signal}.c include generated/vdso-offsets.h, and therefore the symbol offsets must be generated before these files are compiled.

The current rules in arm64/kernel/Makefile do not actually enforce this, because even though $(obj)/vdso is listed as a prerequisite for vdso-offsets.h, this does not result in the intended effect of building the vdso subdirectory (before all the other objects). As a consequence, depending on the order in which the rules are followed, vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o are built. The current rules also impose an unnecessary dependency on vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary rebuilds.

This patch removes the arch/arm64/kernel/vdso/vdso-offsets.h file generation, leaving only the include/generated/vdso-offsets.h one. It adds a forced dependency check of the vdso-offsets.h file in arch/arm64/kernel/Makefile which, if not up to date according to the arch/arm64/kernel/vdso/Makefile rules (depending on vdso.so.dbg), will trigger the vdso/ subdirectory build and vdso-offsets.h re-generation. Automatic kbuild dependency rules between kernel/{vdso,signal}.c and vdso-offsets.h will guarantee that the vDSO object is built first, followed by the generated symbol offsets header file.

Reported-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-16  arm64: vdso: Mark vDSO code as read-only  (David Brown; 1 file, -2/+1)

Although the arm64 vDSO is cleanly separated by code/data with the code being read-only in userspace mappings, the code page is still writable from the kernel. There have been exploits (such as http://itszn.com/blog/?p=21) that take advantage of this on x86 to go from a bad kernel write to full root. Prevent this specific exploit on arm64 by putting the vDSO code page in read-only memory as well.

Before the change:

[    3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000   520K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000  1528K ro x  SHD AF     UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000     6M ro x  SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000  1752K ro x  SHD AF     UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000  2344K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000   116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000  1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000     1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000   942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000   448K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000    40K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000   468K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000    78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000  1344K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000    64K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000    64K RW NX SHD AF     UXN MEM/NORMAL

After:

[    3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000   520K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000  1528K ro x  SHD AF     UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000     6M ro x  SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000  1760K ro x  SHD AF     UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000  2336K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000   116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000  1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000     1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000   942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000   448K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000    40K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000   468K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000    78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000  1344K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000    64K RW NX SHD AF     UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000    64K RW NX SHD AF     UXN MEM/NORMAL

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-11-12  arm64: build vdso without libgcov  (Arnd Bergmann; 1 file, -0/+3)

On a cross-toolchain without glibc support, libgcov may not be available, and attempting to build an arm64 kernel with GCOV enabled then results in a build error:

/home/arnd/cross-gcc/lib/gcc/aarch64-linux/5.2.1/../../../../aarch64-linux/bin/ld: cannot find -lgcov

We don't really want to link libgcov into the vdso anyway, so this patch just disables GCOV in the vdso directory, just as we do for most other architectures.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-19  arm64: vdso: work-around broken ELF toolchains in Makefile  (Will Deacon; 1 file, -0/+4)

When building the kernel with a bare-metal (ELF) toolchain, the -shared option may not be passed down to collect2, resulting in silent corruption of the vDSO image (in particular, the DYNAMIC section is omitted).

The effect of this corruption is that the dynamic linker fails to find the vDSO symbols and libc is instead used for the syscalls that we intended to optimise (e.g. gettimeofday). Functionally, there is no issue as the sigreturn trampoline is still intact and located by the kernel.

This patch fixes the problem by explicitly passing -shared to the linker when building the vDSO.

Cc: <stable@vger.kernel.org>
Reported-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
Reported-by: James Greenhalgh <james.greenhalgh@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-02-26  arm64: vdso: minor ABI fix for clock_getres  (Nathan Lynch; 1 file, -2/+1)

The vdso implementation of clock_getres currently returns 0 (success) whenever a null timespec is provided by the caller, regardless of the clock id supplied. This behavior is incorrect. It should fall back to syscall when an unrecognized clock id is passed, even when the timespec argument is null. This ensures that clock_getres always returns an error for invalid clock ids.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
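The corrected control flow, sketched in C (the clock list and resolution constant are illustrative, and the real implementation is assembly; the key ordering is that the id is validated before the NULL check):

#include <time.h>
#include <stddef.h>

#define RES_NSEC_SKETCH 1        /* illustrative per-clock resolution */

static int clock_getres_sketch(clockid_t id, struct timespec *res)
{
        if (id != CLOCK_REALTIME && id != CLOCK_MONOTONIC)
                return -1;       /* unknown id: fall back to the syscall,
                                    even if res is NULL */
        if (res) {               /* NULL res is valid for a supported clock */
                res->tv_sec = 0;
                res->tv_nsec = RES_NSEC_SKETCH;
        }
        return 0;
}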
2014-07-30  arm64: vdso: fix build error when switching from LE to BE  (Arun Chandran; 1 file, -1/+1)

Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an if_changed rule in the VDSO Makefile, we attempt to link a stale LE object into the new BE kernel.

According to Documentation/kbuild/makefiles.txt, FORCE is required for if_changed rules and forgetting it is a common mistake, so fix it by 'Forcing' the build of vdso. This patch fixes build errors like these:

arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/note.o
arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-17  arm64: Align the kbuild output for VDSOL and VDSOA  (Ian Campbell; 1 file, -2/+2)

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17  arm64: vdso: move data page before code pages  (Will Deacon; 1 file, -3/+1)

Andy pointed out that binutils generates additional sections in the vdso image (e.g. section string table) which, if our .text section gets big enough, could cross a page boundary and end up screwing up the location where the kernel expects to put the data page. This patch solves the issue in the same manner as x86_32, by moving the data page before the code pages.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-05  arm64: vdso: fix coarse clock handling  (Nathan Lynch; 1 file, -1/+6)

When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the caller has placed in x2 ("ret x2" to return from the fast path). Fix this by saving x30/LR to x2 only in code that will call __do_get_tspec, restoring x30 afterward, and using a plain "ret" to return from the routine.

Also: while the resulting tv_nsec value for CLOCK_REALTIME and CLOCK_MONOTONIC must be computed using intermediate values that are left-shifted by cs_shift (x12, set by __do_get_tspec), the results for coarse clocks should be calculated using unshifted values (xtime_coarse_nsec is in units of actual nanoseconds). The current code shifts intermediate values by x12 unconditionally, but x12 is uninitialized when servicing a coarse clock. Fix this by setting x12 to 0 once we know we are dealing with a coarse clock id.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
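The second half of the fix can be sketched in C (illustrative; in the real assembly, x12 holds the shift):

#include <stdint.h>

/* For _COARSE clocks the stored nanoseconds are not left-shifted, so the
 * shift must be forced to 0 rather than left as whatever value happened to
 * be sitting in x12. */
static uint64_t final_nsec(uint64_t nsec, uint32_t shift, int coarse)
{
        if (coarse)
                shift = 0;      /* xtime_coarse_nsec is plain nanoseconds */
        return nsec >> shift;
}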
2014-02-04  arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k  (Will Deacon; 1 file, -1/+1)

Whilst the text segment for our VDSO is marked as PT_LOAD in the ELF headers, it is mapped by the kernel and not actually subject to demand-paging. ld doesn't realise this, and emits a p_align field of 64k (the maximum supported page size), which conflicts with the load address picked by the kernel on 4k systems, which will be 4k aligned. This causes GDB to fail with "Failed to read a valid object file image from memory" when attempting to load the VDSO.

This patch passes the -n option to ld, which prevents it from aligning PT_LOAD segments to the maximum page size.

Cc: <stable@vger.kernel.org>
Reported-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Kyle McMartin <kyle@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-01-10  arm64: vdso: remove broken, redundant sequence counting for timezones  (Will Deacon; 1 file, -2/+0)

This patch is an arm64 version of ce73ec6db47a ("powerpc/vdso: Remove redundant locking in update_vsyscall_tz()"). Timezone data is not protected, so the sequence counter is not required to ensure consistency. Furthermore, having multiple paths updating the counter leads to a race between update_vsyscall and update_vsyscall_tz, so remove the timezone sequence counting from both the kernel and the vdso.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
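The simplified updater then amounts to plain stores; a sketch with illustrative structure names (the point is the absence of any sequence-counter write):

struct vdso_tz_sketch { int tz_minuteswest; int tz_dsttime; };

static void update_vsyscall_tz_sketch(struct vdso_tz_sketch *vdata,
                                      int minuteswest, int dsttime)
{
        /* No seqcount bump: readers never required timezone data to be
         * consistent with the time fields. */
        vdata->tz_minuteswest = minuteswest;
        vdata->tz_dsttime = dsttime;
}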
2012-12-05  arm64: generic timer: use virtual counter instead of physical at EL0  (Will Deacon; 1 file, -2/+2)

We want to use the virtual counter at EL0, as the physical counter may not track the current clocksource for guests running under a hypervisor. This patch updates the vdso and generic timer driver to use the virtual counter. The kernel EL2 entry code is also updated to ensure that the virtual offset is initialised to zero.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
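Reading the virtual counter from C looks like this (a sketch using GCC extended asm, aarch64 only; the ISB keeps the read from being hoisted past earlier instructions, mirroring what the vDSO assembly does):

#include <stdint.h>

static inline uint64_t read_virtual_counter(void)
{
        uint64_t cval;

        /* CNTVCT_EL0 is the physical counter minus a per-VM virtual offset,
         * so it stays meaningful for guests; the EL2 entry code zeroes the
         * offset when running on bare metal. */
        asm volatile("isb\n\tmrs %0, cntvct_el0" : "=r" (cval) : : "memory");
        return cval;
}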
2012-12-05  arm64: vdso: defer shifting of nanosecond component of timespec  (Will Deacon; 1 file, -2/+6)

Shifting the nanosecond component of the computed timespec early can lead to sub-ns inaccuracies when using the truncated value as input to further arithmetic for things like conversions to monotonic time. This patch defers the timespec shifting until after the final value has been computed.

Reported-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
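The arithmetic difference is easiest to see in C (a sketch; variable names are illustrative):

#include <stdint.h>

/* Early shift: each term is truncated to whole nanoseconds separately,
 * losing up to one sub-ns unit per term; the error then feeds into later
 * arithmetic such as the monotonic conversion. */
static uint64_t nsec_early_shift(uint64_t xtime_snsec, uint64_t delta,
                                 uint32_t mult, uint32_t shift)
{
        return (xtime_snsec >> shift) + ((delta * mult) >> shift);
}

/* Deferred shift: stay in shifted units and truncate exactly once. */
static uint64_t nsec_deferred_shift(uint64_t xtime_snsec, uint64_t delta,
                                    uint32_t mult, uint32_t shift)
{
        return (xtime_snsec + delta * mult) >> shift;
}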
2012-12-05  arm64: vdso: rework __do_get_tspec register allocation and return shift  (Will Deacon; 1 file, -44/+44)

In preparation for sub-ns precision in the vdso timespec maths, change the __do_get_tspec register allocation so that we return the clocksource shift value instead of the unused xtime tspec.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-12-05  arm64: vdso: check sequence counter even for coarse realtime operations  (Will Deacon; 1 file, -3/+3)

When returning coarse realtime values from clock_gettime, we must still check the sequence counter to ensure that the kernel does not update the vdso datapage whilst we are loading the coarse timespec, as this could potentially result in time appearing to go backwards.

This patch delays the coarse realtime check until after we have loaded successfully from the vdso datapage. This does mean that we always load the wtm timespec, but conditionalising the load and adding an extra sequence test is unlikely to buy us anything other than messy code, particularly as the sequence test implies a read barrier.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
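The read loop the coarse path now shares is the classic seqlock pattern, sketched here in C (names are illustrative; the real code also issues the read barrier the commit mentions):

#include <stdint.h>

struct vdso_page_sketch {
        volatile uint32_t seq;          /* odd while the kernel is mid-update */
        uint64_t coarse_sec, coarse_nsec;
};

static void read_coarse_time(const struct vdso_page_sketch *v,
                             uint64_t *sec, uint64_t *nsec)
{
        uint32_t seq;

        do {
                seq = v->seq;           /* a read barrier follows in real code */
                *sec = v->coarse_sec;
                *nsec = v->coarse_nsec;
        } while ((seq & 1) || v->seq != seq);   /* retry if an update raced us */
}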
2012-12-05  arm64: vdso: fix clocksource mask when extracting bottom 56 bits  (Will Deacon; 1 file, -1/+1)

The generic timer clocksource has 56 bits of precision and as such must be masked appropriately after we have read it. The current mask generated by a movn instruction is off by 4 bits, so we accidentally include the top 4 bits in the final value. This patch fixes the broken mask.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
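The correct mask, for reference, in a C sketch (per the commit, the faulty movn-built constant effectively kept four extra high bits):

#include <stdint.h>

#define ARCH_COUNTER_MASK ((UINT64_C(1) << 56) - 1)     /* 0x00ffffffffffffff */

static inline uint64_t mask_cycles(uint64_t raw)
{
        return raw & ARCH_COUNTER_MASK;  /* keep bits [55:0] only */
}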
2012-09-17  arm64: VDSO support  (Will Deacon; 8 files, -0/+520)

This patch adds VDSO support for 64-bit applications. The VDSO code is currently used for sys_rt_sigreturn() and optimised gettimeofday() (using the user-accessible generic counter).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
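From userspace the result looks like this (a sketch of the consumer side: the kernel advertises the vDSO base via the aux vector, and the C library binds gettimeofday() to the vDSO symbol when it is present):

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>
#include <sys/time.h>

int main(void)
{
        /* Base address of the vDSO ELF image the kernel mapped into us. */
        void *vdso_base = (void *)getauxval(AT_SYSINFO_EHDR);
        struct timeval tv;

        gettimeofday(&tv, NULL);        /* served from the vDSO on the fast path */
        printf("vDSO mapped at %p, time %ld.%06ld\n",
               vdso_base, (long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
}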