path: root/arch/arm64/kernel
Age  Commit message  [Author, files, lines changed]
2020-03-18  arm64: cpufeature: Fix meta-capability cpufeature check  [Amit Daniel Kachhap, 1 file, -1/+20]

Some existing/future meta cpucaps need other individual cpucaps to be present in order to match. Currently each individual cpucap is checked via an array-based flag, which introduces a dependency on the array entry order. This limitation exists only for system-scope cpufeatures.

This patch introduces an internal helper function (__system_matches_cap) to invoke the matching handler for system scope. This helper has to be used during a narrow window when,
- The system-wide safe registers are set with all the SMP CPUs and,
- The SYSTEM_FEATURE cpu_hwcaps may not have been set.

Normal users should use the existing cpus_have_{const_}cap() global function.

Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
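A sketch of the helper this entry describes: it invokes a capability's match handler with system-wide scope. The shape follows the description above; cpu_hwcaps_ptrs and SCOPE_SYSTEM are the cpufeature-internal names assumed here.

  static bool __system_matches_cap(unsigned int n)
  {
      if (n < ARM64_NCAPS) {
          const struct arm64_cpu_capability *cap = cpu_hwcaps_ptrs[n];

          /* ask the capability itself, at system-wide scope */
          if (cap)
              return cap->matches(cap, SCOPE_SYSTEM);
      }
      return false;
  }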
2020-03-18  arm64: smp: fix crash_smp_send_stop() behaviour  [Cristian Marussi, 1 file, -2/+6]

On a system configured to trigger a crash_kexec() reboot, when only one CPU is online and another CPU panics while starting up, crash_smp_send_stop() will fail to send any STOP message to the other already-online core, resulting in a failure to freeze it and in its registers not being properly saved.

Moreover, even if the proper messages are sent (the CPUs > 2 case), it will similarly fail to account for the booting CPU when executing the final stop wait-loop, potentially resulting in some CPU not being waited for at shutdown before rebooting.

A tangible effect of this behaviour can be observed when, after a panic with kexec enabled and loaded, on the following reboot triggered by kexec, the CPU that could not be successfully stopped fails to come back online:

  [ 362.291022] ------------[ cut here ]------------
  [ 362.291525] kernel BUG at arch/arm64/kernel/cpufeature.c:886!
  [ 362.292023] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  [ 362.292400] Modules linked in:
  [ 362.292970] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Not tainted 5.6.0-rc4-00003-gc780b890948a #105
  [ 362.293136] Hardware name: Foundation-v8A (DT)
  [ 362.293382] pstate: 200001c5 (nzCv dAIF -PAN -UAO)
  [ 362.294063] pc : has_cpuid_feature+0xf0/0x348
  [ 362.294177] lr : verify_local_elf_hwcaps+0x84/0xe8
  [ 362.294280] sp : ffff800011b1bf60
  [ 362.294362] x29: ffff800011b1bf60 x28: 0000000000000000
  [ 362.294534] x27: 0000000000000000 x26: 0000000000000000
  [ 362.294631] x25: 0000000000000000 x24: ffff80001189a25c
  [ 362.294718] x23: 0000000000000000 x22: 0000000000000000
  [ 362.294803] x21: ffff8000114aa018 x20: ffff800011156a00
  [ 362.294897] x19: ffff800010c944a0 x18: 0000000000000004
  [ 362.294987] x17: 0000000000000000 x16: 0000000000000000
  [ 362.295073] x15: 00004e53b831ae3c x14: 00004e53b831ae3c
  [ 362.295165] x13: 0000000000000384 x12: 0000000000000000
  [ 362.295251] x11: 0000000000000000 x10: 00400032b5503510
  [ 362.295334] x9 : 0000000000000000 x8 : ffff800010c7e204
  [ 362.295426] x7 : 00000000410fd0f0 x6 : 0000000000000001
  [ 362.295508] x5 : 00000000410fd0f0 x4 : 0000000000000000
  [ 362.295592] x3 : 0000000000000000 x2 : ffff8000100939d8
  [ 362.295683] x1 : 0000000000180420 x0 : 0000000000180480
  [ 362.296011] Call trace:
  [ 362.296257]  has_cpuid_feature+0xf0/0x348
  [ 362.296350]  verify_local_elf_hwcaps+0x84/0xe8
  [ 362.296424]  check_local_cpu_capabilities+0x44/0x128
  [ 362.296497]  secondary_start_kernel+0xf4/0x188
  [ 362.296998] Code: 52805001 72a00301 6b01001f 54000ec0 (d4210000)
  [ 362.298652] SMP: stopping secondary CPUs
  [ 362.300615] Starting crashdump kernel...
  [ 362.301168] Bye!
  [ 0.000000] Booting Linux on physical CPU 0x0000000003 [0x410fd0f0]
  [ 0.000000] Linux version 5.6.0-rc4-00003-gc780b890948a (crimar01@e120937-lin) (gcc version 8.3.0 (GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36))) #105 SMP PREEMPT Fri Mar 6 17:00:42 GMT 2020
  [ 0.000000] Machine model: Foundation-v8A
  [ 0.000000] earlycon: pl11 at MMIO 0x000000001c090000 (options '')
  [ 0.000000] printk: bootconsole [pl11] enabled
  .....
  [ 0.138024] rcu: Hierarchical SRCU implementation.
  [ 0.153472] its@2f020000: unable to locate ITS domain
  [ 0.154078] its@2f020000: Unable to locate ITS domain
  [ 0.157541] EFI services will not be available.
  [ 0.175395] smp: Bringing up secondary CPUs ...
  [ 0.209182] psci: failed to boot CPU1 (-22)
  [ 0.209377] CPU1: failed to boot: -22
  [ 0.274598] Detected PIPT I-cache on CPU2
  [ 0.278707] GICv3: CPU2: found redistributor 1 region 0:0x000000002f120000
  [ 0.285212] CPU2: Booted secondary processor 0x0000000001 [0x410fd0f0]
  [ 0.369053] Detected PIPT I-cache on CPU3
  [ 0.372947] GICv3: CPU3: found redistributor 2 region 0:0x000000002f140000
  [ 0.378664] CPU3: Booted secondary processor 0x0000000002 [0x410fd0f0]
  [ 0.401707] smp: Brought up 1 node, 3 CPUs
  [ 0.404057] SMP: Total of 3 processors activated.

Make crash_smp_send_stop() account also for the online status of the calling CPU while evaluating how many CPUs are effectively online: this way the right number of STOPs is sent, and the registers of all the other stopped cores are properly saved.

Fixes: 78fd584cdec05 ("arm64: kdump: implement machine_crash_shutdown()")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-03-18  arm64: smp: fix smp_send_stop() behaviour  [Cristian Marussi, 1 file, -3/+14]

On a system with only one CPU online, when another CPU panics while starting up, smp_send_stop() will fail to send any STOP message to the other already-online core, resulting in a system that is still responsive and alive at the end of the panic procedure.

  [ 186.700083] CPU3: shutdown
  [ 187.075462] CPU2: shutdown
  [ 187.162869] CPU1: shutdown
  [ 188.689998] ------------[ cut here ]------------
  [ 188.691645] kernel BUG at arch/arm64/kernel/cpufeature.c:886!
  [ 188.692079] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  [ 188.692444] Modules linked in:
  [ 188.693031] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 5.6.0-rc4-00001-g338d25c35a98 #104
  [ 188.693175] Hardware name: Foundation-v8A (DT)
  [ 188.693492] pstate: 200001c5 (nzCv dAIF -PAN -UAO)
  [ 188.694183] pc : has_cpuid_feature+0xf0/0x348
  [ 188.694311] lr : verify_local_elf_hwcaps+0x84/0xe8
  [ 188.694410] sp : ffff800011b1bf60
  [ 188.694536] x29: ffff800011b1bf60 x28: 0000000000000000
  [ 188.694707] x27: 0000000000000000 x26: 0000000000000000
  [ 188.694801] x25: 0000000000000000 x24: ffff80001189a25c
  [ 188.694905] x23: 0000000000000000 x22: 0000000000000000
  [ 188.694996] x21: ffff8000114aa018 x20: ffff800011156a38
  [ 188.695089] x19: ffff800010c944a0 x18: 0000000000000004
  [ 188.695187] x17: 0000000000000000 x16: 0000000000000000
  [ 188.695280] x15: 0000249dbde5431e x14: 0262cbe497efa1fa
  [ 188.695371] x13: 0000000000000002 x12: 0000000000002592
  [ 188.695472] x11: 0000000000000080 x10: 00400032b5503510
  [ 188.695572] x9 : 0000000000000000 x8 : ffff800010c80204
  [ 188.695659] x7 : 00000000410fd0f0 x6 : 0000000000000001
  [ 188.695750] x5 : 00000000410fd0f0 x4 : 0000000000000000
  [ 188.695836] x3 : 0000000000000000 x2 : ffff8000100939d8
  [ 188.695919] x1 : 0000000000180420 x0 : 0000000000180480
  [ 188.696253] Call trace:
  [ 188.696410]  has_cpuid_feature+0xf0/0x348
  [ 188.696504]  verify_local_elf_hwcaps+0x84/0xe8
  [ 188.696591]  check_local_cpu_capabilities+0x44/0x128
  [ 188.696666]  secondary_start_kernel+0xf4/0x188
  [ 188.697150] Code: 52805001 72a00301 6b01001f 54000ec0 (d4210000)
  [ 188.698639] ---[ end trace 3f12ca47652f7b72 ]---
  [ 188.699160] Kernel panic - not syncing: Attempted to kill the idle task!
  [ 188.699546] Kernel Offset: disabled
  [ 188.699828] CPU features: 0x00004,20c02008
  [ 188.700012] Memory Limit: none
  [ 188.700538] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---
  [root@arch ~]# echo Helo
  Helo
  [root@arch ~]# cat /proc/cpuinfo | grep proce
  processor : 0

Make smp_send_stop() account also for the online status of the calling CPU while evaluating how many CPUs are effectively online: this way, the right number of STOPs is sent, thus enforcing a proper freeze of the system at the end of panic even under the above conditions.

Fixes: 08e875c16a16c ("arm64: SMP support")
Reported-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
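Both stop-path fixes above hinge on counting the other online CPUs rather than assuming the caller is one of them (the panicking CPU may never have reached the online state). A sketch of a helper in that spirit; the name and exact placement in smp.c are illustrative:

  /*
   * Count CPUs other than the calling one that are online. The caller
   * may not be marked online itself, so it cannot blindly be
   * subtracted from num_online_cpus().
   */
  static inline unsigned int num_other_online_cpus(void)
  {
      unsigned int this_cpu_online = cpu_online(smp_processor_id());

      return num_online_cpus() - this_cpu_online;
  }

  /* e.g. in smp_send_stop() / crash_smp_send_stop():
   *   if (num_other_online_cpus() == 0)
   *       return;    // nobody else to stop or wait for
   */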
2020-03-18  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters  [Andrew Murray, 1 file, -16/+71]

At present ARMv8 event counters are limited to 32-bits, though by using the CHAIN event it's possible to combine adjacent counters to achieve 64-bits. The perf config1:0 bit can be set to use such a configuration.

With the introduction of ARMv8.5-PMU support, all event counters can now be used as 64-bit counters. Let's enable 64-bit event counters where support exists. Unless the user sets config1:0 we will adjust the counter value such that it overflows upon 32-bit overflow. This follows the same behaviour as the cycle counter which has always been (and remains) 64-bits.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[Mark: fix ID field names, compare with 8.5 value]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
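A sketch of the config1:0 plumbing described above; the helper name follows the driver's naming style and the biasing is paraphrased from the message:

  /* config1:0 set by the user requests a true 64-bit counter view */
  static bool armv8pmu_event_is_64bit(struct perf_event *event)
  {
      return event->attr.config1 & 0x1;
  }

  /*
   * When the PMU provides 64-bit counters but the user did not set
   * config1:0, bias the programmed value so that a 32-bit overflow
   * still raises the overflow interrupt, preserving old behaviour:
   *
   *   if (!armv8pmu_event_is_64bit(event))
   *       value |= GENMASK(63, 32);   // upper half pre-set
   */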
2020-03-18  arm64: perf: Clean up enable/disable calls  [Robin Murphy, 1 file, -52/+35]

Reading this code bordered on painful, what with all the repetition and pointless return values. More fundamentally, dribbling the hardware enables and disables in one bit at a time incurs needless system register overhead for chained events and on reset. We already use bitmask values for the KVM hooks, so consolidate all the register accesses to match, and make a reasonable saving in both source and object code.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-03-17  arm64/kernel: Simplify __cpu_up() by bailing out early  [Gavin Shan, 1 file, -42/+37]

The function __cpu_up() is invoked to bring up the target CPU through the backend, PSCI for example. The nested if statements aren't needed if we bail out early on the following two conditions, where the status won't be checked; this simplifies the code.

* Error returned from the backend (e.g. PSCI)
* The target CPU has been marked as onlined

Signed-off-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
2020-03-17  arm64: remove redundant blank for '=' operator  [韩科才, 1 file, -1/+1]

Remove a redundant blank before the '=' operator; the result reads more cleanly.

Signed-off-by: hankecai <hankecai@vivo.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-17  arm64: kexec_file: Fixed code style.  [Li Tao, 1 file, -1/+1]

Remove unnecessary blank.

Signed-off-by: Li Tao <tao.li@vivo.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-17  arm64: add blank after 'if'  [Zheng Wei, 1 file, -1/+1]

Add a blank after 'if' in armv8_deprecated_init() to make it comply with the kernel coding style.

Signed-off-by: Zheng Wei <wei.zheng@vivo.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-16  arm64: BTI: Reset BTYPE when skipping emulated instructions  [Dave Martin, 1 file, -0/+2]

Normal execution of any non-branch instruction resets the PSTATE BTYPE field to 0, so do the same thing when emulating a trapped instruction. Branches don't trap directly, so we should never need to assign a non-zero value to BTYPE here.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
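In code terms the change amounts to clearing the BTYPE bits whenever a trapped instruction is skipped. A minimal sketch (PSR_BTYPE_MASK covers PSTATE[11:10]; the full helper also handles single-step and compat state, see the "unify native/compat instruction skipping" entry below):

  void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
  {
      regs->pc += size;

      /*
       * Mirror hardware execution of a non-branch instruction:
       * leave PSTATE.BTYPE as 0 after the emulated skip.
       */
      regs->pstate &= ~PSR_BTYPE_MASK;
  }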
2020-03-16  arm64: traps: Shuffle code to eliminate forward declarations  [Dave Martin, 1 file, -52/+55]

Hoist the IT state handling code earlier in traps.c, to avoid accumulating forward declarations. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-16  arm64: unify native/compat instruction skipping  [Dave Martin, 1 file, -10/+8]

Skipping of an instruction on AArch32 works a bit differently from AArch64, mainly due to the different CPSR/PSTATE semantics. Currently arm64_skip_faulting_instruction() is only suitable for AArch64, and arm64_compat_skip_faulting_instruction() handles the IT state machine but is local to traps.c.

Since manual instruction skipping implies a trap, it's a relatively slow path. So, make arm64_skip_faulting_instruction() handle both compat and native, and get rid of the arm64_compat_skip_faulting_instruction() special case.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
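A rough shape of the unified helper after this change, paraphrased from the description; the single-step and IT-state helper names are assumptions, not quoted from the patch:

  void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
  {
      regs->pc += size;

      /* if single-stepping, take the step exception after the trap */
      if (user_mode(regs))
          user_fastforward_single_step(current);

      /* compat tasks additionally advance the Thumb IT state machine */
      if (compat_user_mode(regs))
          advance_itstate(regs);
  }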
2020-03-16  arm64: BTI: Decode BTYPE bits when printing PSTATE  [Dave Martin, 1 file, -2/+15]

The current code to print PSTATE symbolically when generating backtraces etc. does not include the BTYPE field used by Branch Target Identification. So, decode BTYPE and print it too.

In the interests of human-readability, print the classes of BTI matched. The symbolic notation, BTYPE (PSTATE[11:10]) and permitted classes of subsequent instruction are:

  -- (BTYPE=0b00): any insn
  jc (BTYPE=0b01): BTI jc, BTI j, BTI c, PACIxSP
  -c (BTYPE=0b10): BTI jc, BTI c, PACIxSP
  j- (BTYPE=0b11): BTI jc, BTI j

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
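A sketch of the decode, following the table above; shift/mask values correspond to PSTATE[11:10] and the names are illustrative:

  #define PSR_BTYPE_SHIFT  10
  #define PSR_BTYPE_MASK   (0x3UL << PSR_BTYPE_SHIFT)

  /* indexed by BTYPE: 0b00, 0b01, 0b10, 0b11 */
  static const char *const btypes[] = { "--", "jc", "-c", "j-" };

  /* e.g. when printing PSTATE:
   *   btype_str = btypes[(pstate & PSR_BTYPE_MASK) >> PSR_BTYPE_SHIFT];
   */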
2020-03-16  arm64: elf: Enable BTI at exec based on ELF program properties  [Dave Martin, 1 file, -0/+19]

For BTI protection to be as comprehensive as possible, it is desirable to have BTI enabled from process startup. If this is not done, the process must use mprotect() to enable BTI for each of its executable mappings, but this is painful to do in the libc startup code. It's simpler and more sound to have the kernel do it instead.

To this end, detect BTI support in the executable (or ELF interpreter, as appropriate), via the NT_GNU_PROGRAM_PROPERTY_TYPE_0 note, and tweak the initial prot flags for the process' executable pages to include PROT_BTI as appropriate.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
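A sketch of the prot tweak described above, in the shape of an arch ELF-property hook. The has_bti_property flag stands in for state recorded while parsing the NT_GNU_PROGRAM_PROPERTY_TYPE_0 note; the upstream hook signature differs:

  /* Called for the executable and, if present, the ELF interpreter */
  static int arch_elf_adjust_prot_sketch(int prot, bool has_bti_property)
  {
      if (system_supports_bti() && has_bti_property && (prot & PROT_EXEC))
          prot |= PROT_BTI;

      return prot;
  }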
2020-03-16  arm64: Basic Branch Target Identification support  [Dave Martin, 7 files, -1/+88]

This patch adds the bare minimum required to expose the ARMv8.5 Branch Target Identification feature to userspace.

By itself, this does _not_ automatically enable BTI for any initial executable pages mapped by execve(). This will come later, but for now it should be possible to enable BTI manually on those pages by using mprotect() from within the target process.

Other arches using the generic mman.h already use 0x10 for arch-specific prot flags, so we use that for PROT_BTI here.

For consistency, signal handler entry points in BTI guarded pages are required to be annotated as such, just like any other function. This blocks a relatively minor attack vector, but conforming userspace will have the annotations anyway, so we may as well enforce them.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
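As the message notes, until exec-time enabling (the entry above) is in place, a task can opt in per mapping with mprotect(). A hypothetical userspace sketch using the 0x10 value mentioned:

  #include <stddef.h>
  #include <sys/mman.h>

  #ifndef PROT_BTI
  #define PROT_BTI 0x10   /* arch-specific prot flag, per the text above */
  #endif

  /* Opt an existing executable mapping into BTI guarding. */
  static int enable_bti(void *text, size_t len)
  {
      return mprotect(text, len, PROT_READ | PROT_EXEC | PROT_BTI);
  }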
2020-03-11  arm64: entry: unmask IRQ in el0_sp()  [Mark Rutland, 1 file, -1/+1]

Currently, the EL0 SP alignment handler masks IRQs unnecessarily. It does so due to historic code sharing of the EL0 SP and PC alignment handlers, and branch predictor hardening applicable to the EL0 PC handler.

We began masking IRQs in the EL0 SP alignment handler in commit:

  5dfc6ed27710c42c ("arm64: entry: Apply BP hardening for high-priority synchronous exception")

... as this shared code with the EL0 PC alignment handler, and branch predictor hardening made it necessary to disable IRQs for early parts of the EL0 PC alignment handler. It was not necessary to mask IRQs during EL0 SP alignment exceptions, but it was not considered harmful to do so.

This masking was carried forward into C code in commit:

  582f95835a8fc812 ("arm64: entry: convert el0_sync to C")

... where the SP/PC cases were split into separate handlers, and the masking duplicated.

Subsequently the EL0 PC alignment handler was refactored to perform branch predictor hardening before unmasking IRQs, in commit:

  bfe298745afc9548 ("arm64: entry-common: don't touch daif before bp-hardening")

... but the redundant masking of IRQs was not removed from the EL0 SP alignment handler.

Let's do so now, and make it interruptible as with most other synchronous exception handlers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
2020-03-11  arm64: Mark call_smc_arch_workaround_1 as __maybe_unused  [Nathan Chancellor, 1 file, -1/+1]

When building allnoconfig:

  arch/arm64/kernel/cpu_errata.c:174:13: warning: unused function 'call_smc_arch_workaround_1' [-Wunused-function]
  static void call_smc_arch_workaround_1(void)
              ^
  1 warning generated.

Follow arch/arm and mark this function as __maybe_unused.

Fixes: 4db61fef16a1 ("arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations")
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-11  arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y  [Kunihiko Hayashi, 1 file, -1/+1]

A missing argument to another SYM_INNER_LABEL() breaks the build for CONFIG_FUNCTION_GRAPH_TRACER=y.

Fixes: e2d591d29d44 ("arm64: entry-ftrace.S: Convert to modern annotations for assembly functions")
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Brown <broonie@kernel.org>
2020-03-09  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)  [Masahiro Yamada, 1 file, -1/+1]

efi-entry.o is built on demand for efi-entry.stub.o, so you do not have to repeat $(CONFIG_EFI) here. Adding it to 'targets' is enough.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
2020-03-09  arm64: vdso32: Convert to modern assembler annotations  [Mark Brown, 1 file, -15/+8]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Use these for the compat VDSO, allowing us to drop the custom ARM_ENTRY() and ARM_ENDPROC() macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: vdso: Convert to modern assembler annotations  [Mark Brown, 1 file, -2/+2]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Convert the assembly function in the arm64 VDSO to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: sdei: Annotate SDEI entry points using new style annotations  [Mark Brown, 1 file, -6/+6]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. The SDEI entry points are currently annotated as normal functions but are called from non-kernel contexts with a non-standard calling convention, and should therefore be annotated as such; do so.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations  [Mark Brown, 1 file, -8/+6]

In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These replace ENTRY and ENDPROC with separate annotations for standard C callable functions, data, and code with different calling conventions.

Using these for __smccc_workaround_1_smc is more involved than for most symbols, as this symbol is annotated quite unusually: rather than just having the explicit symbol, we define _start and _end symbols which we then use to compute the length. This does not play at all nicely with the new style macros. Instead, define a constant for the size of the function and use that in both the C code and for .org based size checks in the assembly code.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
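A C-side sketch of the constant-based sizing described above; the constant name mirrors the symbol, and the value is illustrative:

  /* size of the workaround sequence, shared with the asm .org check */
  #define __SMCCC_WORKAROUND_1_SMC_SZ   36   /* illustrative value */

  extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];

  /* e.g. when copying the sequence into a vector slot:
   *   memcpy(dst, __smccc_workaround_1_smc, __SMCCC_WORKAROUND_1_SMC_SZ);
   */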
2020-03-09  arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs  [Mark Brown, 1 file, -1/+1]

We have recently introduced new macros for annotating assembly symbols for things that aren't C functions, SYM_CODE_START() and SYM_CODE_END(), in an effort to clarify and simplify our annotations of assembly files.

Using these for __bp_harden_hyp_vecs is more involved than for most symbols, as this symbol is annotated quite unusually: rather than just having the explicit symbol, we define _start and _end symbols which we then use to compute the length. This does not play at all nicely with the new style macros. Since the size of the vectors is a known constant which won't vary, the simplest thing to do is simply to drop the separate _start and _end symbols and just use a #define for the size.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
2020-03-09  arm64: kernel: Convert to modern annotations for assembly data  [Mark Brown, 2 files, -6/+10]

In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These include specific annotations for the start and end of data; update the data symbols to use these.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: head: Annotate stext and preserve_boot_args as code  [Mark Brown, 1 file, -4/+4]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Neither stext nor preserve_boot_args is called with the usual AAPCS calling conventions and they should therefore be annotated as code.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: head.S: Convert to modern annotations for assembly functions  [Mark Brown, 1 file, -28/+28]

In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: ftrace: Modernise annotation of return_to_handler  [Mark Brown, 1 file, -2/+2]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. return_to_handler does entertaining things with LR so doesn't follow the usual C conventions and should therefore be annotated as code rather than a function.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: ftrace: Correct annotation of ftrace_caller assembly  [Mark Brown, 1 file, -8/+8]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. The patchable function entry versions of ftrace_*_caller don't follow the usual AAPCS rules, pushing things onto the stack which they don't clean up, and therefore should be annotated as code rather than functions.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: entry-ftrace.S: Convert to modern annotations for assembly functions  [Mark Brown, 1 file, -14/+14]

In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: entry: Additional annotation conversions for entry.S  [Mark Brown, 1 file, -8/+8]

In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC with separate annotations for standard C callable functions, data and code with different calling conventions. Update the remaining annotations in the entry.S code to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: entry: Annotate ret_from_fork as code  [Mark Brown, 1 file, -2/+2]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. ret_from_fork is not a normal C function and should therefore be annotated as code.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: entry: Annotate vector table and handlers as code  [Mark Brown, 1 file, -38/+38]

In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. The vector table and handlers aren't normal C style code so should be annotated as CODE.

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-06  arm64: use activity monitors for frequency invariance  [Ionela Voinescu, 2 files, -0/+184]

The Frequency Invariance Engine (FIE) is providing a frequency scaling correction factor that helps achieve more accurate load-tracking. So far, for arm and arm64 platforms, this scale factor has been obtained based on the ratio between the current frequency and the maximum supported frequency recorded by the cpufreq policy. The setting of this scale factor is triggered from cpufreq drivers by calling arch_set_freq_scale. The current frequency used in computation is the frequency requested by a governor, but it may not be the frequency that was implemented by the platform.

This correction factor can also be obtained using a core counter and a constant counter to get information on the performance (frequency based only) obtained in a period of time. This will more accurately reflect the actual current frequency of the CPU, compared with the alternative implementation that reflects the request of a performance level from the OS.

Therefore, implement arch_scale_freq_tick to use activity monitors, if present, for the computation of the frequency scale factor, as in the sketch after this list. The use of AMU counters depends on:
- CONFIG_ARM64_AMU_EXTN - depends on the AMU extension being present
- CONFIG_CPU_FREQ - the current frequency obtained using counter information is divided by the maximum frequency obtained from the cpufreq policy.

While it is possible to have a combination of CPUs in the system with and without support for activity monitors, the use of counters for frequency invariance is only enabled for a CPU if all related CPUs (CPUs in the same frequency domain) support and have enabled the core and constant activity monitor counters. In this way, there is a clear separation between the policies for which arch_set_freq_scale (cpufreq based FIE) is used, and the policies for which arch_scale_freq_tick (counter based FIE) is used to set the frequency scale factor. For this purpose, a late_initcall_sync is registered to trigger validation work for policies that will enable or disable the use of AMU counters for frequency invariance. If CONFIG_CPU_FREQ is not defined, the use of counters is enabled on all CPUs only if all possible CPUs correctly support the necessary counters.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
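A simplified sketch of the tick-time computation this describes: the ratio of the core-cycle delta to the constant-cycle delta gives the average delivered frequency relative to the fixed reference clock, which is then normalised by the maximum frequency from the cpufreq policy. Names and factoring are illustrative, not the upstream code.

  #include <linux/math64.h>

  #define CAP_SCALE 1024   /* SCHED_CAPACITY_SCALE */

  static unsigned long amu_freq_scale(u64 core_delta, u64 const_delta,
                                      u64 ref_freq, u64 max_freq)
  {
      u64 cur_freq;

      if (!const_delta || !max_freq)
          return CAP_SCALE;

      /* average delivered frequency over the last tick period */
      cur_freq = div64_u64(core_delta * ref_freq, const_delta);

      /* frequency scale factor, capped at full capacity */
      return (unsigned long)min_t(u64, CAP_SCALE,
                                  div64_u64(cur_freq * CAP_SCALE, max_freq));
  }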
2020-03-06  arm64: add support for the AMU extension v1  [Ionela Voinescu, 1 file, -0/+66]

The activity monitors extension is an optional extension introduced by the ARMv8.4 CPU architecture. This implements basic support for version 1 of the activity monitors architecture, AMUv1.

This support includes:
- Extension detection on each CPU (boot, secondary, hotplugged)
- Register interface for AMU aarch64 registers

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
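A sketch of the register interface mentioned above: AMUv1 exposes core and constant cycle counters in the AMEVCNTR0 group. The SYS_* encodings below follow the arm64 port's naming convention and should be treated as assumptions:

  #include <asm/sysreg.h>

  /* CPU cycles delivered to this core */
  static inline u64 amu_core_cycles(void)
  {
      return read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
  }

  /* cycles of the fixed-frequency reference clock */
  static inline u64 amu_const_cycles(void)
  {
      return read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
  }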
2020-03-04  arm64: remove gratuitous/stray .ltorg stanzas  [Remi Denis-Courmont, 2 files, -3/+0]

There are no applicable literals above them.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-02  arm64: perf: Support new DT compatibles  [Robin Murphy, 1 file, -0/+56]

Add support for matching the new PMUs. For now, this just wires them up as generic PMUv3 such that people writing DTs for new SoCs can do the right thing, and at least have architectural and raw events be usable. We can come back and fill in event maps for sysfs and/or perf tools at a later date.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
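The wiring-up amounts to new of_device_id entries pointing at PMUv3 init functions. A sketch with illustrative entries (the specific compatibles and init-function names are assumptions):

  static const struct of_device_id armv8_pmu_of_device_ids[] = {
      { .compatible = "arm,armv8-pmuv3",     .data = armv8_pmuv3_pmu_init },
      { .compatible = "arm,cortex-a76-pmu",  .data = armv8_a76_pmu_init },
      { .compatible = "arm,neoverse-n1-pmu", .data = armv8_n1_pmu_init },
      { /* sentinel */ },
  };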
2020-03-02  arm64: perf: Refactor PMU init callbacks  [Robin Murphy, 1 file, -97/+27]

The PMU init callbacks are already drowning in boilerplate, so before doubling the number of supported PMU models, give it a sensible refactor to significantly reduce the bloat, both in source and object code. Although nobody uses non-default sysfs attributes today, there's minimal impact to preserving the notion that maybe, some day, somebody might, so we may as well keep up appearances.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-02-29  efi/arm64: Clean EFI stub exit code from cache instead of avoiding it  [Ard Biesheuvel, 2 files, -15/+15]

Commit 9f9223778 ("efi/libstub/arm: Make efi_entry() an ordinary PE/COFF entrypoint") modified the handover code written in assembler, and for maintainability, aligned the logic with the logic used in the 32-bit ARM version, which is to avoid cache maintenance on the remaining instructions in the subroutine that will be executed with the MMU and caches off, and instead, branch into the relocated copy of the kernel image.

However, this assumes that this copy is executable, and this means we expect EFI_LOADER_DATA regions to be executable as well, which is not a reasonable assumption to make, even if this is true for most UEFI implementations today.

So change this back, and add a __clean_dcache_area_poc() call to cover the remaining code in the subroutine. While at it, switch the other call site over to __clean_dcache_area_poc() as well, and clean up the terminology in comments to avoid using 'flush' in the context of cache maintenance. Also, let's switch to the new style asm annotations.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-efi@vger.kernel.org
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20200228121408.9075-6-ardb@kernel.org
2020-02-26  Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi into efi/core  [Ingo Molnar, 3 files, -73/+24]

Pull EFI updates for v5.7 from Ard Biesheuvel:

This time, the set of changes for the EFI subsystem is much larger than usual. The main reasons are:

- Get things cleaned up before EFI support for RISC-V arrives, which will increase the size of the validation matrix, and therefore the threshold to making drastic changes,

- After years of defunct maintainership, the GRUB project has finally started to consider changes from the distros regarding UEFI boot, some of which are highly specific to the way x86 does UEFI secure boot and measured boot, based on knowledge of both shim internals and the layout of bootparams and the x86 setup header. Having this maintenance burden on other architectures (which don't need shim in the first place) is hard to justify, so instead, we are introducing a generic Linux/UEFI boot protocol.

Summary of changes:

- Boot time GDT handling changes (Arvind)
- Simplify handling of EFI properties table on arm64
- Generic EFI stub cleanups, to improve command line handling, file I/O, memory allocation, etc.
- Introduce a generic initrd loading method based on calling back into the firmware, instead of relying on the x86 EFI handover protocol or device tree.
- Introduce a mixed mode boot method that does not rely on the x86 EFI handover protocol either, and could potentially be adopted by other architectures (if another one ever surfaces where one execution mode is a superset of another)
- Clean up the contents of struct efi, and move out everything that doesn't need to be stored there.
- Incorporate support for UEFI spec v2.8A changes that permit firmware implementations to return EFI_UNSUPPORTED from UEFI runtime services at OS runtime, and expose a mask of which ones are supported or unsupported via a configuration table.
- Various documentation updates and minor code cleanups (Heinrich)
- Partial fix for the lack of by-VA cache maintenance in the decompressor on 32-bit ARM. Note that these patches were deliberately put at the beginning so they can be used as a stable branch that will be shared with a PR containing the complete fix, which I will send to the ARM tree.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-02-23  efi/libstub: Introduce symbolic constants for the stub major/minor version  [Ard Biesheuvel, 1 file, -2/+2]

Now that we have added new ways to load the initrd or the mixed mode kernel, we will also need a way to tell the loader about this. Add symbolic constants for the PE/COFF major/minor version numbers (which fortunately have always been 0x0 for all architectures), so that we can bump them later to document the capabilities of the stub.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-02-23  efi/libstub: Clean up command line parsing routine  [Ard Biesheuvel, 1 file, -0/+1]

We currently parse the command line non-destructively, to avoid having to allocate memory for a copy before passing it to the standard parsing routines that are used by the core kernel, and which modify the input to delineate the parsed tokens with NUL characters.

Instead, we call strstr() and strncmp() to go over the input multiple times, and match prefixes rather than tokens, which implies that we would match, e.g., 'nokaslrfoo' in the stub and disable KASLR, while the kernel would disregard the option and run with KASLR enabled.

In order to avoid having to reason about whether and how this behavior may be abused, let's clean up the parsing routines, and rebuild them on top of the existing helpers.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
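A standalone illustration of the prefix-matching hazard described above (hypothetical demo, not the stub's code):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      const char *cmdline = "console=ttyAMA0 nokaslrfoo";

      /* strstr() matches a prefix, not a whole token: this fires
       * on "nokaslrfoo" even though it is not the "nokaslr" option */
      if (strstr(cmdline, "nokaslr"))
          printf("stub: would disable KASLR (wrongly)\n");

      return 0;
  }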
2020-02-23  efi/libstub/arm: Make efi_entry() an ordinary PE/COFF entrypoint  [Ard Biesheuvel, 3 files, -71/+21]

Expose efi_entry() as the PE/COFF entrypoint directly, instead of jumping into a wrapper that fiddles with stack buffers and other stuff that the compiler is much better at. The only reason this code exists is to obtain a pointer to the base of the image, but we can get the same value from the loaded_image protocol, which we already need for other reasons anyway.

Update the return type as well, to make it consistent with what is required for a PE/COFF executable entrypoint.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-02-12  arm64: time: Replace <linux/clk-provider.h> by <linux/of_clk.h>  [Geert Uytterhoeven, 1 file, -1/+1]

The arm64 time code is not a clock provider, and just needs to call of_clk_init(). Hence it can include <linux/of_clk.h> instead of <linux/clk-provider.h>.

Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will@kernel.org>
2020-02-11  arm64: Fix CONFIG_ARCH_RANDOM=n build  [Robin Murphy, 1 file, -0/+1]

The entire asm/archrandom.h header is generically included via linux/archrandom.h only when CONFIG_ARCH_RANDOM is already set, so the stub definitions of __arm64_rndr() and __early_cpu_has_rndr() are only visible to KASLR if it explicitly includes the arch-internal header.

Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-02-10  arm64: ssbs: Fix context-switch when SSBS is present on all CPUs  [Will Deacon, 1 file, -0/+7]

When all CPUs in the system implement the SSBS extension, the SSBS field in PSTATE is the definitive indication of the mitigation state. Further, when the CPUs implement the SSBS manipulation instructions (advertised to userspace via an HWCAP), EL0 can toggle the SSBS field directly and so we cannot rely on any shadow state such as TIF_SSBD at all.

Avoid forcing the SSBS field in context-switch on such a system, and simply rely on the PSTATE register instead.

Cc: <stable@vger.kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Srinivas Ramana <sramana@codeaurora.org>
Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
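The fix reduces to an early-out in the context-switch path when the capability is system-wide. A sketch of the guard (the function name is an assumption based on the subject matter):

  static void ssbs_thread_switch(struct task_struct *next)
  {
      /*
       * Nothing to do for kernel threads; their 'regs' may be junk
       * (e.g. the idle task).
       */
      if (unlikely(next->flags & PF_KTHREAD))
          return;

      /*
       * If all CPUs implement SSBS, PSTATE.SSBS is authoritative:
       * don't force it from shadow state such as TIF_SSBD.
       */
      if (cpus_have_const_cap(ARM64_SSBS))
          return;

      /* ... otherwise fall back to forcing the SSBS field ... */
  }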
2020-02-10  arm64: use shared sysctl constants  [Matteo Croce, 1 file, -4/+2]

Use shared sysctl variables for zero and one constants, as in commit eec4844fae7c ("proc/sysctl: add shared variables for range check").

Fixes: 63f0c6037965 ("arm64: Introduce prctl() options to control the tagged user addresses ABI")
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Will Deacon <will@kernel.org>
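The change amounts to pointing .extra1/.extra2 at the shared SYSCTL_ZERO/SYSCTL_ONE constants instead of per-file zero/one variables. A sketch with an illustrative table (the entry contents are assumptions):

  #include <linux/sysctl.h>

  static int tagged_addr_disabled;

  static struct ctl_table tagged_addr_sysctl_table[] = {
      {
          .procname     = "tagged_addr_disabled",
          .mode         = 0644,
          .data         = &tagged_addr_disabled,
          .maxlen       = sizeof(int),
          .proc_handler = proc_dointvec_minmax,
          .extra1       = SYSCTL_ZERO,   /* shared constant */
          .extra2       = SYSCTL_ONE,    /* shared constant */
      },
      { }
  };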
2020-02-03  kbuild: rename hostprogs-y/always to hostprogs/always-y  [Masahiro Yamada, 1 file, -2/+2]

In the old days, the "host-progs" syntax was used for specifying host programs. It was renamed to the current "hostprogs-y" in 2004. It is typically useful in scripts/Makefile because it allows Kbuild to selectively compile host programs based on the kernel configuration.

This commit renames as follows:

  always       ->  always-y
  hostprogs-y  ->  hostprogs

So, scripts/Makefile will look like this:

  always-$(CONFIG_BUILD_BIN2C) += ...
  always-$(CONFIG_KALLSYMS) += ...
  ...
  hostprogs := $(always-y) $(always-m)

I think this makes more sense because a host program is always a host program, irrespective of the kernel configuration. We want to specify which ones to compile by CONFIG options, so always-y will be handier.

The "always", "hostprogs-y", "hostprogs-m" will be kept for backward compatibility for a while.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2020-01-29  Merge tag 'tty-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty  [Linus Torvalds, 1 file, -3/+0]

Pull tty/serial driver updates from Greg KH:
"Here are the big set of tty and serial driver updates for 5.6-rc1

Included in here are:
- dummy_con cleanups (touches lots of arch code)
- sysrq logic cleanups (touches lots of serial drivers)
- samsung driver fixes (wasn't really being built)
- conmakeshash move to tty subdir out of scripts
- lots of small tty/serial driver updates

All of these have been in linux-next for a while with no reported issues"

* tag 'tty-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (140 commits)
  tty: n_hdlc: Use flexible-array member and struct_size() helper
  tty: baudrate: SPARC supports few more baud rates
  tty: baudrate: Synchronise baud_table[] and baud_bits[]
  tty: serial: meson_uart: Add support for kernel debugger
  serial: imx: fix a race condition in receive path
  serial: 8250_bcm2835aux: Document struct bcm2835aux_data
  serial: 8250_bcm2835aux: Use generic remapping code
  serial: 8250_bcm2835aux: Allocate uart_8250_port on stack
  serial: 8250_bcm2835aux: Suppress register_port error on -EPROBE_DEFER
  serial: 8250_bcm2835aux: Suppress clk_get error on -EPROBE_DEFER
  serial: 8250_bcm2835aux: Fix line mismatch on driver unbind
  serial_core: Remove unused member in uart_port
  vt: Correct comment documenting do_take_over_console()
  vt: Delete comment referencing non-existent unbind_con_driver()
  arch/xtensa/setup: Drop dummy_con initialization
  arch/x86/setup: Drop dummy_con initialization
  arch/unicore32/setup: Drop dummy_con initialization
  arch/sparc/setup: Drop dummy_con initialization
  arch/sh/setup: Drop dummy_con initialization
  arch/s390/setup: Drop dummy_con initialization
  ...
2020-01-28  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 2 files, -1/+4]

Pull scheduler updates from Ingo Molnar:
"These were the main changes in this cycle:

- More -rt motivated separation of CONFIG_PREEMPT and CONFIG_PREEMPTION.
- Add more low level scheduling topology sanity checks and warnings to filter out nonsensical topologies that break scheduling.
- Extend uclamp constraints to influence wakeup CPU placement
- Make the RT scheduler more aware of asymmetric topologies and CPU capacities, via uclamp metrics, if CONFIG_UCLAMP_TASK=y
- Make idle CPU selection more consistent
- Various fixes, smaller cleanups, updates and enhancements - please see the git log for details"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (58 commits)
  sched/fair: Define sched_idle_cpu() only for SMP configurations
  sched/topology: Assert non-NUMA topology masks don't (partially) overlap
  idle: fix spelling mistake "iterrupts" -> "interrupts"
  sched/fair: Remove redundant call to cpufreq_update_util()
  sched/psi: create /proc/pressure and /proc/pressure/{io|memory|cpu} only when psi enabled
  sched/fair: Fix sgc->{min,max}_capacity calculation for SD_OVERLAP
  sched/fair: calculate delta runnable load only when it's needed
  sched/cputime: move rq parameter in irqtime_account_process_tick
  stop_machine: Make stop_cpus() static
  sched/debug: Reset watchdog on all CPUs while processing sysrq-t
  sched/core: Fix size of rq::uclamp initialization
  sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
  sched/fair: Load balance aggressively for SCHED_IDLE CPUs
  sched/fair: Improve update_sd_pick_busiest for spare capacity case
  watchdog: Remove soft_lockup_hrtimer_cnt and related code
  sched/rt: Make RT capacity-aware
  sched/fair: Make EAS wakeup placement consider uclamp restrictions
  sched/fair: Make task_fits_capacity() consider uclamp restrictions
  sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with()
  sched/uclamp: Make uclamp util helpers use and return UL values
  ...