path: root/arch/x86
Age  Commit message  Author  Files  Lines
2020-08-28bpf: Introduce sleepable BPF programsAlexei Starovoitov1-11/+21
Introduce sleepable BPF programs that can request such a property for themselves via the BPF_F_SLEEPABLE flag at program load time. In that case they will be able to use helpers like bpf_copy_from_user() that might sleep. At present only fentry/fexit/fmod_ret and lsm programs can request to be sleepable and only when they are attached to kernel functions that are known to allow sleeping. The non-sleepable programs are relying on implicit rcu_read_lock() and migrate_disable() to protect the lifetime of programs, maps that they use and per-cpu kernel structures used to pass info between bpf programs and the kernel. The sleepable programs cannot be enclosed into rcu_read_lock(). migrate_disable() maps to preempt_disable() in non-RT kernels, so the progs should not be enclosed in migrate_disable() as well. Therefore rcu_read_lock_trace is used to protect the lifetime of sleepable progs. There are many networking and tracing program types. In many cases the 'struct bpf_prog *' pointer itself is rcu protected within some other kernel data structure and the kernel code is using rcu_dereference() to load that program pointer and call BPF_PROG_RUN() on it. None of these cases is touched. Instead, sleepable bpf programs are allowed with bpf trampoline only. The program pointers are hard-coded into the generated assembly of the bpf trampoline and synchronize_rcu_tasks_trace() is used to protect the lifetime of the program. The same trampoline can hold both sleepable and non-sleepable progs. When rcu_read_lock_trace is held it means that some sleepable bpf program is running from a bpf trampoline. Those programs can use bpf arrays and preallocated hash/lru maps. These map types wait for programs to complete via synchronize_rcu_tasks_trace(). Updates to the trampoline now have to do synchronize_rcu_tasks_trace() and synchronize_rcu_tasks() to wait for sleepable progs to finish and for trampoline assembly to finish. This is the first step of introducing sleepable progs. Eventually dynamically allocated hash maps can be allowed and networking program types can become sleepable too. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: KP Singh <kpsingh@google.com> Link: https://lore.kernel.org/bpf/20200827220114.69225-3-alexei.starovoitov@gmail.com
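For illustration only (not part of the commit), a minimal sleepable fentry program in libbpf style might look like the sketch below; the traced kernel function, the buffer, and the "fentry.s/" section naming are assumptions based on libbpf conventions of that era, with bpf_copy_from_user() being the sleepable helper mentioned above.

/* Hedged sketch of a sleepable fentry program; libbpf sets BPF_F_SLEEPABLE
 * when the section name carries the ".s" suffix. */
#include "vmlinux.h"                    /* assumed to be generated via bpftool */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char buf[64];

SEC("fentry.s/do_sys_openat2")          /* ".s" marks the program sleepable */
int BPF_PROG(trace_open, int dfd, const char *filename)
{
        /* Only legal in sleepable programs: may fault in user pages. */
        bpf_copy_from_user(buf, sizeof(buf), filename);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";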
2020-08-27x86/mpparse: Remove duplicate io_apic.h includeWang Hai1-1/+0
Remove asm/io_apic.h which is included more than once. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wang Hai <wanghai38@huawei.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Pekka Enberg <penberg@kernel.org> Link: https://lkml.kernel.org/r/20200819112910.7629-1-wanghai38@huawei.com
2020-08-27x86/irq: Unbreak interrupt affinity settingThomas Gleixner1-7/+9
Several people reported that 5.8 broke the interrupt affinity setting mechanism. The consolidation of the entry code reused the regular exception entry code for device interrupts and changed the way the vector number is conveyed from ptregs->orig_ax to a function argument. The low-level entry uses the hardware error code slot to push the vector number onto the stack, which is retrieved from there into a function argument and the slot on the stack is set to -1. The reason for setting it to -1 is that the error code slot is at the position where pt_regs::orig_ax is. A positive value in pt_regs::orig_ax indicates that the entry came via a syscall. If it's not set to a negative value then a signal delivery on return to userspace would try to restart a syscall. There are also other places which rely on pt_regs::orig_ax being a valid indicator for syscall entry. But setting pt_regs::orig_ax to -1 has a nasty side effect vs. the interrupt affinity setting mechanism, which was overlooked when this change was made. Moving interrupts on x86 happens in several steps. A new vector on a different CPU is allocated and the relevant interrupt source is reprogrammed to that. But that's racy and there might be an interrupt already in flight to the old vector. So the old vector is preserved until the first interrupt arrives on the new vector and the new target CPU. Once that happens the old vector is cleaned up, but this cleanup still depends on the vector number being stored in pt_regs::orig_ax, which is now -1. That -1 makes the check for cleanup: pt_regs::orig_ax == new_vector always false. As a consequence the interrupt is moved once, but then it cannot be moved anymore because the cleanup of the old vector never happens. There would be several ways to convey the vector information to that place in the guts of the interrupt handling, but on deeper inspection it turned out that this check is pointless and a leftover from the old affinity model of x86 which supported multi-CPU affinities. Under this model it was possible that an interrupt had an old and a new vector on the same CPU, so the vector match was required. Under the new model the effective affinity of an interrupt is always a single CPU from the requested affinity mask. If the affinity mask changes then either the interrupt stays on the CPU and on the same vector when that CPU is still in the new affinity mask or it is moved to a different CPU, but it is never moved to a different vector on the same CPU. Ergo the cleanup check for the matching vector number is not required and can be removed, which makes the dependency on pt_regs::orig_ax go away. The remaining check for new_cpu == smp_processor_id() is completely sufficient. If it matches then the interrupt was successfully migrated and the cleanup can proceed. For paranoia's sake add a warning into the vector assignment code to validate that the assumption of never moving to a different vector on the same CPU holds. Fixes: 633260fa143 ("x86/irq: Convey vector as argument and not in ptregs") Reported-by: Alex bykov <alex.bykov@scylladb.com> Reported-by: Avi Kivity <avi@scylladb.com> Reported-by: Alexander Graf <graf@amazon.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Alexander Graf <graf@amazon.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/87wo1ltaxz.fsf@nanos.tec.linutronix.de
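For context, the affinity being broken here is the one user space sets through procfs; a small sketch of that trigger follows (the IRQ number and CPU mask are placeholders, not taken from the reports).

/* Move a hypothetical IRQ 24 to CPU 2 by writing a hex mask to procfs;
 * the kernel then performs the vector move described in the changelog. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/proc/irq/24/smp_affinity", O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, "4", 1) != 1)     /* mask 0x4 => CPU 2 */
                perror("write");
        close(fd);
        return 0;
}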
2020-08-27x86/hotplug: Silence APIC only after all interrupts are migratedAshok Raj1-6/+20
There is a race when taking a CPU offline. Current code looks like this: native_cpu_disable() { ... apic_soft_disable(); /* * Any existing set bits for pending interrupt to * this CPU are preserved and will be sent via IPI * to another CPU by fixup_irqs(). */ cpu_disable_common(); { .... /* * Race window happens here. Once local APIC has been * disabled any new interrupts from the device to * the old CPU are lost */ fixup_irqs(); // Too late to capture anything in IRR. ... } } The fix is to disable the APIC *after* cpu_disable_common(). Testing was done with a USB NIC that provided a source of frequent interrupts. A script migrated interrupts to a specific CPU and then took that CPU offline. Fixes: 60dcaad5736f ("x86/hotplug: Silence APIC and NMI when CPU is dead") Reported-by: Evan Green <evgreen@chromium.org> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Mathias Nyman <mathias.nyman@linux.intel.com> Tested-by: Evan Green <evgreen@chromium.org> Reviewed-by: Evan Green <evgreen@chromium.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/lkml/875zdarr4h.fsf@nanos.tec.linutronix.de/ Link: https://lore.kernel.org/r/1598501530-45821-1-git-send-email-ashok.raj@intel.com
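The offlining half of the described test amounts to a sysfs write; a minimal sketch, with the CPU number purely illustrative.

/* Take CPU 3 offline via sysfs, as the test script in the changelog does. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/sys/devices/system/cpu/cpu3/online", O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, "0", 1) != 1)     /* "0" = offline, "1" = back online */
                perror("write");
        close(fd);
        return 0;
}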
2020-08-26x86/mce: Delay clearing IA32_MCG_STATUS to the end of do_machine_check()Tony Luck1-5/+4
A long time ago, Linux cleared IA32_MCG_STATUS at the very end of machine check processing. Then, some fancy recovery and IST manipulation was added in: d4812e169de4 ("x86, mce: Get rid of TIF_MCE_NOTIFY and associated mce tricks") and clearing IA32_MCG_STATUS was pulled earlier in the function. Next change moved the actual recovery out of do_machine_check() and just used task_work_add() to schedule it later (before returning to the user): 5567d11c21a1 ("x86/mce: Send #MC singal from task work") Most recently the fancy IST footwork was removed as no longer needed: b052df3da821 ("x86/entry: Get rid of ist_begin/end_non_atomic()") At this point there is no reason remaining to clear IA32_MCG_STATUS early. It can move back to the very end of the function. Also move sync_core(). The comments for this function say that it should only be called when instructions have been changed/re-mapped. Recovery for an instruction fetch may change the physical address. But that doesn't happen until the scheduled work runs (which could be on another CPU). [ bp: Massage commit message. ] Reported-by: Gabriele Paoloni <gabriele.paoloni@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200824221237.5397-1-tony.luck@intel.com
2020-08-26x86/resctrl: Enable user to view thread or core throttling modeFenghua Yu3-7/+87
Early Intel hardware implementations of Memory Bandwidth Allocation (MBA) could only control bandwidth at the processor core level. This meant that when two processes with different bandwidth allocations ran simultaneously on the same core the hardware had to resolve this difference. It did so by applying the higher throttling value (lower bandwidth) to both processes. Newer implementations can apply different throttling values to each thread on a core. Introduce a new resctrl file, "thread_throttle_mode", on Intel systems that shows to the user how throttling values are allocated, per-core or per-thread. On systems that support per-core throttling, the file will display "max". On newer systems that support per-thread throttling, the file will display "per-thread". AMD confirmed in [1] that AMD bandwidth allocation is already at thread level but that the AMD implementation does not use a memory delay throttle mode. So to avoid confusion the thread throttling mode would be UNDEFINED on AMD systems and the "thread_throttle_mode" file will not be visible. Originally-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/1598296281-127595-3-git-send-email-fenghua.yu@intel.com Link: [1] https://lore.kernel.org/lkml/18d277fd-6523-319c-d560-66b63ff606b8@amd.com
2020-08-26x86/resctrl: Enumerate per-thread MBA controlsFenghua Yu3-0/+3
Some systems support per-thread Memory Bandwidth Allocation (MBA) which applies a throttling delay value to each hardware thread instead of to a core. Per-thread MBA is enumerated by CPUID. No feature flag is shown in /proc/cpuinfo. User applications need to check a resctrl throttling mode info file to know if the feature is supported. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Babu Moger <babu.moger@amd.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/1598296281-127595-2-git-send-email-fenghua.yu@intel.com
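A sketch of the user-space check referred to above, assuming resctrl is mounted at /sys/fs/resctrl and the file introduced in the previous entry sits under info/MB/.

/* Report whether MBA throttling is applied per-core ("max") or per-thread. */
#include <stdio.h>

int main(void)
{
        char mode[32] = "";
        FILE *f = fopen("/sys/fs/resctrl/info/MB/thread_throttle_mode", "r");

        if (!f) {
                puts("file not visible: per-thread mode not reported here");
                return 0;
        }
        if (fscanf(f, "%31s", mode) == 1)
                printf("thread_throttle_mode: %s\n", mode);
        fclose(f);
        return 0;
}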
2020-08-26x86/entry: Remove unused THUNKsPeter Zijlstra1-5/+0
Unused remnants Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200821085348.487040689@infradead.org
2020-08-26cpuidle: Move trace_cpu_idle() into generic codePeter Zijlstra1-4/+0
Remove trace_cpu_idle() from the arch_cpu_idle() implementations and put it in the generic code, right before disabling RCU. Gets rid of more trace_*_rcuidle() users. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200821085348.428433395@infradead.org
2020-08-26cpuidle: Make CPUIDLE_FLAG_TLB_FLUSHED genericPeter Zijlstra2-11/+3
This allows moving the leave_mm() call into generic code before rcu_idle_enter(). Gets rid of more trace_*_rcuidle() users. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200821085348.369441600@infradead.org
2020-08-25kvm: mmu: page_track: Fix RCU list API usageMadhuparna Bhowmik1-2/+4
Use hlist_for_each_entry_srcu() instead of hlist_for_each_entry_rcu() as it also checks if the right lock is held. Using hlist_for_each_entry_rcu() with a condition argument will not report the cases where an SRCU-protected list is traversed using rcu_read_lock(). Hence, use hlist_for_each_entry_srcu(). Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Cc: <kvm@vger.kernel.org>
2020-08-24x86/fsgsbase: Replace static_cpu_has() with boot_cpu_has()Borislav Petkov2-6/+6
ptrace and prctl() are not really fast paths to warrant the use of static_cpu_has() and cause alternatives patching for no good reason. Replace with boot_cpu_has() which is simple and fast enough. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200818103715.32736-1-bp@alien8.de
2020-08-24x86/entry/64: Correct the comment over SAVE_AND_SET_GSBASEBorislav Petkov1-2/+3
Add the proper explanation why an LFENCE is not needed in the FSGSBASE case. Fixes: c82965f9e530 ("x86/entry/64: Handle FSGSBASE enabled paranoid entry/exit") Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200821090710.GE12181@zn.tnic
2020-08-24treewide: Use fallthrough pseudo-keywordGustavo A. R. Silva31-57/+52
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
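The shape of the conversion, reduced to a self-contained sketch; the macro definition is inlined here for illustration, whereas in the tree it comes from the compiler attribute headers.

/* Before: a comment the compiler cannot verify.  After: an attribute it can. */
#define fallthrough __attribute__((__fallthrough__))

static int classify(int c)
{
        switch (c) {
        case 0:
                c += 1;
                fallthrough;            /* replaces the old comment marking */
        case 1:
                return c + 10;
        default:
                return c;
        }
}

int main(void) { return classify(0) == 11 ? 0 : 1; }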
2020-08-23Merge tag 'x86-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds1-4/+6
Pull x86 fix from Thomas Gleixner: "A single fix for x86 which removes the RDPID usage from the paranoid entry path and unconditionally uses LSL to retrieve the CPU number. RDPID depends on MSR_TSC_AUX. KVM has an optimization to avoid expensive MSR reads/writes on VMENTER/EXIT. It caches the MSR values and restores them either when leaving the run loop, on preemption or when going out to user space. MSR_TSC_AUX is part of that lazy MSR set, so after writing the guest value and before the lazy restore any exception using the paranoid entry will read the guest value and use it as the CPU number to retrieve the GSBASE value for the current CPU when FSGSBASE is enabled. As RDPID is only used in that particular entry path, there is no reason to burden VMENTER/EXIT with two extra MSR writes. Remove the RDPID optimization, which is not even backed by numbers, from the paranoid entry path instead" * tag 'x86-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/entry/64: Do not use RDPID in paranoid entry to accomodate KVM
2020-08-23Merge tag 'perf-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds1-3/+49
Pull x86 perf fix from Thomas Gleixner: "A single update for perf on x86 which has support for the broken down bandwidth counters" * tag 'perf-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/intel/uncore: Add BW counters for GT, IA and IO breakdown
2020-08-23Merge tag 'efi-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds4-86/+39
Pull EFI fixes from Thomas Gleixner: - Enforce NX on RO data in mixed EFI mode - Destroy workqueue in an error handling path to prevent UAF - Stop argument parser at '--' which is the delimiter for init - Treat a NULL command line pointer as empty instead of dereferencing it unconditionally - Handle an unterminated command line correctly - Clean up the 32-bit code leftovers and remove obsolete documentation * tag 'efi-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: Documentation: efi: remove description of efi=old_map efi/x86: Move 32-bit code into efi_32.c efi/libstub: Handle unterminated cmdline efi/libstub: Handle NULL cmdline efi/libstub: Stop parsing arguments at "--" efi: add missed destroy_workqueue when efisubsys_init fails efi/x86: Mark kernel rodata non-executable for mixed mode
2020-08-22Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds3-4/+8
Pull kvm fixes from Paolo Bonzini: - PAE and PKU bugfixes for x86 - selftests fix for new binutils - MMU notifier fix for arm64 * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE mode kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE mode KVM: x86: fix access code passed to gva_to_gpa selftests: kvm: Use a shorter encoding to clear RAX
2020-08-22x86/msr: Make source of unrecognised MSR writes unambiguousChris Down1-2/+2
In many cases, task_struct.comm isn't enough to distinguish the offender, since for interpreted languages it's likely just going to be "python3" or whatever. Add the pid to make it unambiguous. [ bp: Make the printk string a single line for easier grepping. ] Signed-off-by: Chris Down <chris@chrisdown.name> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/6f6fbd0ee6c99bc5e47910db700a6642159db01b.1598011595.git.chris@chrisdown.name
2020-08-22x86/msr: Prevent userspace MSR access from dominating the consoleChris Down1-3/+15
Applications which manipulate MSRs from userspace often do so infrequently, and all at once. As such, the default printk ratelimit architecture supplied by pr_err_ratelimited() doesn't do enough to prevent kmsg becoming completely overwhelmed with their messages and pushing other salient information out of the circular buffer. In one case, I saw over 80% of kmsg being filled with these messages, and the default kmsg buffer being completely filled less than 5 minutes after boot(!). Make things much less aggressive, while still achieving the original goal of filter_write(). Operators will still get warnings that MSRs are being manipulated from userspace, but they won't have other potentially useful messages pushed out of the kmsg buffer. Of course, one can boot with `allow_writes=1` to avoid these messages at all, but that then has the downfall that one doesn't get _any_ notification at all about these problems in the first place, and so is much less likely to remember to fix it. One might rather it was less binary: it was still logged, just less often, so that application developers _do_ have the incentive to improve their current methods, without the kernel having to push other useful stuff out of the kmsg buffer. This one example isn't the point, of course: I'm sure there are plenty of other non-ideal-but-pragmatic cases where people are writing to MSRs from userspace right now, and it will take time for those people to find other solutions. Overall, keep the intent of the original patch, while mitigating its sometimes heavy effects on kmsg composition. [ bp: Massage a bit. ] Signed-off-by: Chris Down <chris@chrisdown.name> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/563994ef132ce6cffd28fc659254ca37d032b5ef.1598011595.git.chris@chrisdown.name
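The writes being rate-limited come in through the msr character device; a hedged sketch of the kind of user-space access that produces these log lines (the MSR number and value are placeholders, and this needs the msr module plus root).

/* Write a placeholder MSR on CPU 0 through /dev/cpu/0/msr; the file offset
 * selects the MSR number.  This is the path filter_write() logs. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint64_t val = 0x1;
        int fd = open("/dev/cpu/0/msr", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/cpu/0/msr");
                return 1;
        }
        if (pwrite(fd, &val, sizeof(val), 0x620) != sizeof(val))
                perror("pwrite");       /* 0x620 is only an example offset */
        close(fd);
        return 0;
}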
2020-08-22KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()Will Deacon2-2/+4
The 'flags' field of 'struct mmu_notifier_range' is used to indicate whether invalidate_range_{start,end}() are permitted to block. In the case of kvm_mmu_notifier_invalidate_range_start(), this field is not forwarded on to the architecture-specific implementation of kvm_unmap_hva_range() and therefore the backend cannot sensibly decide whether or not to block. Add an extra 'flags' parameter to kvm_unmap_hva_range() so that architectures are aware as to whether or not they are permitted to block. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Message-Id: <20200811102725.7121-2-will@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21Merge tag 'for-linus-5.9-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tipLinus Torvalds1-0/+1
Pull xen fixes from Juergen Gross: "One build fix and a minor fix for suppressing a useless warning when booting a Xen dom0 via UEFI" * tag 'for-linus-5.9-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: Fix build error when CONFIG_ACPI is not set/enabled: efi: avoid error message when booting under Xen
2020-08-21x86/entry/64: Do not use RDPID in paranoid entry to accomodate KVMSean Christopherson1-4/+6
KVM has an optimization to avoid expensive MSR reads/writes on VMENTER/EXIT. It caches the MSR values and restores them either when leaving the run loop, on preemption or when going out to user space. The affected MSRs are not required for kernel context operations. This changed with the recently introduced mechanism to handle FSGSBASE in the paranoid entry code which has to retrieve the kernel GSBASE value by accessing per-CPU memory. The mechanism needs to retrieve the CPU number and uses either LSL or RDPID if the processor supports it. Unfortunately RDPID uses MSR_TSC_AUX which is in the list of cached and lazily restored MSRs, which means between the point where the guest value is written and the point of restore, MSR_TSC_AUX contains a random number. If an NMI or any other exception which uses the paranoid entry path happens in such a context, then RDPID returns the random guest MSR_TSC_AUX value. As a consequence this reads from the wrong memory location to retrieve the kernel GSBASE value. Kernel GS is used for all regular this_cpu_*() operations. If the GSBASE in the exception handler points to the per-CPU memory of a different CPU then this has the obvious consequences of data corruption and crashes. As the paranoid entry path is the only place which accesses MSR_TSC_AUX (via RDPID) and the fallback via LSL is not significantly slower, remove the RDPID alternative from the entry path and always use LSL. The alternative would be to write MSR_TSC_AUX on every VMENTER and VMEXIT which would be inflicting massive overhead on that code path. [ tglx: Rewrote changelog ] Fixes: eaad981291ee3 ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro") Reported-by: Tom Lendacky <thomas.lendacky@amd.com> Debugged-by: Tom Lendacky <thomas.lendacky@amd.com> Suggested-by: Andy Lutomirski <luto@kernel.org> Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20200821105229.18938-1-pbonzini@redhat.com
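For reference, RDPID simply returns the current IA32_TSC_AUX contents, which is why a stale guest value corrupts the per-CPU lookup; a hedged user-space illustration follows (it assumes a CPU and toolchain that know RDPID, and relies on Linux conventionally encoding the CPU number in the low 12 bits of TSC_AUX).

/* Read IA32_TSC_AUX via RDPID and derive the CPU number from it. */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdpid(void)
{
        uint64_t aux;

        asm volatile("rdpid %0" : "=r"(aux));
        return aux;
}

int main(void)
{
        uint64_t aux = rdpid();

        printf("TSC_AUX=%llu cpu=%llu\n",
               (unsigned long long)aux, (unsigned long long)(aux & 0xfff));
        return 0;
}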
2020-08-21arm64/x86: KVM: Introduce steal-time capAndrew Jones1-0/+3
arm64 requires a vcpu fd (KVM_HAS_DEVICE_ATTR vcpu ioctl) to probe support for steal-time. However this is unnecessary, as only a KVM fd is required, and it complicates userspace (userspace may prefer delaying vcpu creation until after feature probing). Introduce a cap that can be checked instead. While x86 can already probe steal-time support with a kvm fd (KVM_GET_SUPPORTED_CPUID), we add the cap there too for consistency. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20200804170604.42662-7-drjones@redhat.com
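A sketch of the simplified probe this enables, using only a KVM fd; the cap name and the numeric fallback are assumptions based on this series, so treat them as illustrative.

/* Probe steal-time support without creating a VM or a vCPU. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_STEAL_TIME
#define KVM_CAP_STEAL_TIME 187          /* assumed value from this series */
#endif

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);

        if (kvm < 0) {
                perror("open /dev/kvm");
                return 1;
        }
        printf("steal-time: %s\n",
               ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_STEAL_TIME) > 0 ?
               "supported" : "not reported");
        close(kvm);
        return 0;
}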
2020-08-21crypto: x86/crc32c-intel - Use CRC32 mnemonicUros Bizjak1-12/+6
Current minimum required version of binutils is 2.23, which supports CRC32 instruction mnemonic. Replace the byte-wise specification of CRC32 with this proper mnemonic. The compiler is now able to pass memory operand to the instruction, so there is no need for a temporary register anymore. Some examples of the improvement: 12a: 48 8b 08 mov (%rax),%rcx 12d: f2 48 0f 38 f1 f1 crc32q %rcx,%rsi 133: 48 83 c0 08 add $0x8,%rax 137: 48 39 d0 cmp %rdx,%rax 13a: 75 ee jne 12a <crc32c_intel_update+0x1a> to: 125: f2 48 0f 38 f1 06 crc32q (%rsi),%rax 12b: 48 83 c6 08 add $0x8,%rsi 12f: 48 39 d6 cmp %rdx,%rsi 132: 75 f1 jne 125 <crc32c_intel_update+0x15> and: 146: 0f b6 08 movzbl (%rax),%ecx 149: f2 0f 38 f0 f1 crc32b %cl,%esi 14e: 48 83 c0 01 add $0x1,%rax 152: 48 39 d0 cmp %rdx,%rax 155: 75 ef jne 146 <crc32c_intel_update+0x36> to: 13b: f2 0f 38 f0 02 crc32b (%rdx),%eax 140: 48 83 c2 01 add $0x1,%rdx 144: 48 39 ca cmp %rcx,%rdx 147: 75 f2 jne 13b <crc32c_intel_update+0x2b> As the compiler has some more freedom w.r.t. register allocation, there is also a couple of reg-reg moves removed. There are no hidden states for CRC32 insn, so there is no need to mark assembly as volatile. v2: Introduce CRC32_INST define. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> CC: Herbert Xu <herbert@gondor.apana.org.au> CC: "David S. Miller" <davem@davemloft.net> CC: Thomas Gleixner <tglx@linutronix.de> CC: Ingo Molnar <mingo@redhat.com> CC: Borislav Petkov <bp@alien8.de> CC: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
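The same mnemonic-with-memory-operand idea, shown as a small user-space sketch rather than the kernel glue; it assumes an SSE4.2-capable CPU and a binutils new enough for the crc32q mnemonic.

/* Accumulate CRC32C over 8-byte chunks; the "rm" constraint lets the
 * compiler feed a memory operand directly, as in the listing above. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t crc32c_u64(uint64_t crc, uint64_t data)
{
        asm("crc32q %1, %0" : "+r"(crc) : "rm"(data));
        return crc;
}

int main(void)
{
        const char msg[16] = "hello, crc32c!!";
        uint64_t crc = ~0u, chunk;

        for (size_t i = 0; i < sizeof(msg); i += 8) {
                memcpy(&chunk, msg + i, 8);
                crc = crc32c_u64(crc, chunk);
        }
        printf("crc32c: 0x%08x\n", (unsigned int)~crc);
        return 0;
}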
2020-08-20amd64: switch csum_partial_copy_generic() to new calling conventionsAl Viro3-123/+94
... and fold handling of misaligned case into it. Implementation note: we stash the "will we need to rol8 the sum in the end" flag into the MSB of %rcx (the lower 32 bits are used for length); the rest is pretty straightforward. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-08-20i386: propagate the calling conventions change down to csum_partial_copy_generic()Al Viro2-88/+47
... and don't bother zeroing destination on error Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-08-20saner calling conventions for csum_and_copy_..._user()Al Viro4-70/+32
All callers of these primitives will * discard anything we might've copied in case of error * ignore the csum value in case of error * always pass 0xffffffff as the initial sum, so the resulting csum value (in case of success, that is) will never be 0. That suggests the following calling conventions: * don't pass err_ptr - just return 0 on error. * don't bother with zeroing destination, etc. in case of error * don't pass the initial sum - just use 0xffffffff. This commit does the minimal conversion in the instances of csum_and_copy_...(); the changes to the actual asm code behind them are done later in the series. Note that this asm code is often shared with csum_partial_copy_nocheck(); the difference is that csum_partial_copy_nocheck() passes 0 for the initial sum while csum_and_copy_..._user() passes 0xffffffff. Fortunately, we are free to pass 0xffffffff in all cases and subsequent patches will use that freedom without any special comments. A part that could be split off: parisc and uml/i386 claimed to have csum_and_copy_to_user() instances of their own, but those were identical to the generic one, so we simply drop them. Not sure if it's worth a separate commit... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-08-20csum_partial_copy_nocheck(): drop the last argumentAl Viro3-7/+5
It's always 0. Note that we theoretically could use ~0U as well - result will be the same modulo 0xffff, _if_ the damn thing did the right thing for any value of initial sum; later we'll make use of that when convenient. However, unlike csum_and_copy_..._user(), there are instances that did not work for arbitrary initial sums; c6x is one such. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-08-20unify generic instances of csum_partial_copy_nocheck()Al Viro2-16/+1
quite a few architectures have the same csum_partial_copy_nocheck() - simply memcpy() the data and then return the csum of the copy. hexagon, parisc, ia64, s390, um: explicitly spelled out that way. arc, arm64, csky, h8300, m68k/nommu, microblaze, mips/GENERIC_CSUM, nds32, nios2, openrisc, riscv, unicore32: end up picking the same thing spelled out in lib/checksum.h (with varying amounts of perversions along the way). everybody else (alpha, arm, c6x, m68k/mmu, mips/!GENERIC_CSUM, powerpc, sh, sparc, x86, xtensa) have non-generic variants. For all except c6x the declaration is in their asm/checksum.h. c6x uses the wrapper from asm-generic/checksum.h that would normally lead to the lib/checksum.h instance, but in case of c6x we end up using an asm function from arch/c6x instead. Screw that mess - have architectures with private instances define _HAVE_ARCH_CSUM_AND_COPY in their asm/checksum.h and have the default one right in net/checksum.h conditional on _HAVE_ARCH_CSUM_AND_COPY *not* defined. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
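The generic instance being consolidated is essentially "copy, then checksum the copy"; a hedged user-space rendering of that shape follows, with a plain 16-bit one's-complement sum standing in for the arch-optimized csum_partial().

/* Shape of the generic helper: memcpy() the data, then checksum the copy. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t csum_partial(const void *buff, int len, uint32_t sum)
{
        const uint8_t *p = buff;

        while (len > 1) {               /* 16-bit one's-complement sum */
                sum += ((uint32_t)p[0] << 8) | p[1];
                p += 2;
                len -= 2;
        }
        if (len)
                sum += (uint32_t)p[0] << 8;
        while (sum >> 16)
                sum = (sum & 0xffff) + (sum >> 16);
        return sum;
}

static uint32_t csum_partial_copy_nocheck(const void *src, void *dst, int len)
{
        memcpy(dst, src, len);
        return csum_partial(dst, len, 0);   /* initial sum is now always 0 */
}

int main(void)
{
        char in[] = "checksum me", out[sizeof(in)];

        printf("csum=0x%x\n", csum_partial_copy_nocheck(in, out, sizeof(in)));
        return 0;
}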
2020-08-20x86/umip: Add emulation/spoofing for SLDT and STR instructionsBrendan Shanks1-13/+27
Add emulation/spoofing of SLDT and STR for both 32- and 64-bit processes. Wine users have found a small number of Windows apps using SLDT that were crashing when run on UMIP-enabled systems. Originally-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Reported-by: Andreas Rammhold <andi@notmuch.email> Signed-off-by: Brendan Shanks <bshanks@codeweavers.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Tested-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Link: https://lkml.kernel.org/r/20200710224525.21966-1-bshanks@codeweavers.com
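The instructions now being spoofed can be issued from unprivileged user space; a minimal sketch of the access pattern that used to take a #GP on UMIP-enabled hardware.

/* Execute SLDT and STR; with UMIP the kernel now emulates the results
 * instead of letting the process crash. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint16_t ldt_sel, tr_sel;

        asm volatile("sldt %0" : "=m"(ldt_sel));
        asm volatile("str %0"  : "=m"(tr_sel));
        printf("SLDT=0x%04x STR=0x%04x\n", ldt_sel, tr_sel);
        return 0;
}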
2020-08-20x86: switch to kernel_clone()Christian Brauner1-1/+1
The old _do_fork() helper is removed in favor of the new kernel_clone() helper. The latter adheres to naming conventions for kernel internal syscall helpers. Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: x86@kernel.org Link: https://lore.kernel.org/r/20200819104655.436656-8-christian.brauner@ubuntu.com
2020-08-20efi/x86: Move 32-bit code into efi_32.cArd Biesheuvel3-86/+37
Now that the old memmap code has been removed, some code that was left behind in arch/x86/platform/efi/efi.c is only used for 32-bit builds, which means it can live in efi_32.c as well. So move it over. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-08-20efi/x86: Mark kernel rodata non-executable for mixed modeArvind Sankar1-0/+2
When remapping the kernel rodata section RO in the EFI pagetables, the protection flags that were used for the text section are being reused, but the rodata section should not be marked executable. Cc: <stable@vger.kernel.org> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200717194526.3452089-1-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-08-20x86/MCE/AMD, EDAC/mce_amd: Remove struct smca_hwid.xec_bitmapYazen Ghannam2-23/+22
The Extended Error Code Bitmap (xec_bitmap) for a Scalable MCA bank type was intended to be used by the kernel to filter out invalid error codes on a system. However, this is unnecessary after a few product releases because the hardware will only report valid error codes. Thus, there's no need for it with future systems. Remove the xec_bitmap field and all references to it. Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200720145353.43924-1-Yazen.Ghannam@amd.com
2020-08-20x86/build: Declutter the build outputIngo Molnar1-4/+0
We have some really ancient debug printouts in the x86 boot image build code: Setup is 14108 bytes (padded to 14336 bytes). System is 8802 kB CRC 27e909d4 None of these ever helped debug any sort of breakage that I know of, and they clutter the build output. Remove them - if anyone needs the see the various interim stages of this to debug an obscure bug, they can add these printfs and more. We still keep this one: Kernel: arch/x86/boot/bzImage is ready (#19) As a sentimental leftover, plus the '#19' build count tag is mildly useful. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: linux-kernel@vger.kernel.org Cc: x86@kernel.org
2020-08-20Fix build error when CONFIG_ACPI is not set/enabled:Randy Dunlap1-0/+1
../arch/x86/pci/xen.c: In function ‘pci_xen_init’: ../arch/x86/pci/xen.c:410:2: error: implicit declaration of function ‘acpi_noirq_set’; did you mean ‘acpi_irq_get’? [-Werror=implicit-function-declaration] acpi_noirq_set(); Fixes: 88e9ca161c13 ("xen/pci: Use acpi_noirq_set() helper to avoid #ifdef") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Juergen Gross <jgross@suse.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: xen-devel@lists.xenproject.org Cc: linux-pci@vger.kernel.org Signed-off-by: Juergen Gross <jgross@suse.com>
2020-08-20crypto: algapi - Remove skbuff.h inclusionHerbert Xu6-0/+6
The header file algapi.h includes skbuff.h unnecessarily since all we need is a forward declaration for struct sk_buff. This patch removes that inclusion. Unfortunately skbuff.h pulls in a lot of things and drivers over the years have come to rely on it so this patch adds a lot of missing inclusions that result from this. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-19x86/boot/compressed: Use builtin mem functions for decompressorArvind Sankar2-9/+3
Since commits c041b5ad8640 ("x86, boot: Create a separate string.h file to provide standard string functions") and fb4cac573ef6 ("x86, boot: Move memcmp() into string.h and string.c") the decompressor stub has been using the compiler's builtin memcpy, memset and memcmp functions, _except_ where it would likely have the largest impact, in the decompression code itself. Remove the #undef's of memcpy and memset in misc.c so that the decompressor code also uses the compiler builtins. The rationale given in the comment doesn't really apply: just because some functions use the out-of-line version is no reason to not use the builtin version in the rest. Replace the comment with an explanation of why memzero and memmove are being #define'd. Drop the suggestion to #undef in boot/string.h as well: the out-of-line versions are not really optimized versions, they're generic code that's good enough for the preboot environment. The compiler will likely generate better code for constant-size memcpy/memset/memcmp if it is allowed to. Most decompressors' performance is unchanged, with the exception of LZ4 and 64-bit ZSTD: LZ4 (32-bit) 73ms before vs 10ms after, LZ4 (64-bit) 120ms vs 10ms, ZSTD (64-bit) 90ms vs 74ms. Measurements on QEMU on 2.2GHz Broadwell Xeon, using defconfig kernels. Decompressor code size has small differences, with the largest being that 64-bit ZSTD decreases just over 2k. The largest code size increase was on 64-bit XZ, of about 400 bytes. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Suggested-by: Nick Terrell <nickrterrell@gmail.com> Tested-by: Nick Terrell <nickrterrell@gmail.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-08-19cacheinfo: Move resctrl's get_cache_id() to the cacheinfo header fileJames Morse1-15/+2
resctrl/core.c defines get_cache_id() for use in its cpu-hotplug callbacks. This gets the id attribute of the cache at the corresponding level of a CPU. Later rework means this private function needs to be shared. Move it to the header file. The name conflicts with a different definition in intel_cacheinfo.c, name it get_cpu_cacheinfo_id() to show its relation with get_cpu_cacheinfo(). Now this is visible on other architectures, check the id attribute has actually been set. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Babu Moger <babu.moger@amd.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-11-james.morse@arm.com
2020-08-19x86/resctrl: Add struct rdt_cache::arch_has_{sparse, empty}_bitmapsJames Morse3-39/+22
Intel CPUs expect the cache bitmap provided by user-space to have only a single span of 1s, whereas AMD can support bitmaps like 0xf00f. Arm's MPAM support also allows sparse bitmaps. Similarly, Intel CPUs check that at least one bit is set, whereas AMD CPUs are quite happy with an empty bitmap. Arm's MPAM allows an empty bitmap. To move resctrl out to /fs/, platform differences like this need to be explained. Add two resource properties, arch_has_{empty,sparse}_bitmaps. Test these around the relevant parts of cbm_validate(). Merging the validate calls causes AMD to gain the min_cbm_bits test needed for Haswell, but as it always sets this value to 1, it will never match. [ bp: Massage commit message. ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Babu Moger <babu.moger@amd.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-10-james.morse@arm.com
2020-08-19x86/cpu: Fix typos and improve the comments in sync_core()Ingo Molnar1-8/+8
- Fix typos. - Move the compiler barrier comment to the top, because it's valid for the whole function, not just the legacy branch. Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200818053130.GA3161093@gmail.com Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
2020-08-19x86/resctrl: Merge AMD/Intel parse_bw() callsJames Morse3-61/+5
Now that arch_needs_linear has been added, the parse_bw() calls are almost the same between AMD and Intel. The difference is '!is_mba_sc()', which is not checked on AMD. This will always be true on AMD CPUs, as mba_sc cannot be enabled because is_mba_linear() is false. Removing this duplication means user-space visible behaviour and error messages are not validated or generated in different places. Reviewed-by: Babu Moger <babu.moger@amd.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-9-james.morse@arm.com
2020-08-19x86/resctrl: Add struct rdt_membw::arch_needs_linear to explain AMD/Intel MBA differenceJames Morse3-1/+12
The configuration values user-space provides to the resctrl filesystem are ABI. To make this work on another architecture, all the ABI bits should be moved out of /arch/x86 and under /fs. To do this, the differences between AMD and Intel CPUs need to be explained to resctrl via resource properties, instead of function pointers that let the arch code accept subtly different values on different platforms/architectures. For MBA, Intel CPUs reject configuration attempts for non-linear resources, whereas AMD ignores this field as its MBA resource is never linear. To merge the parse/validate functions, this difference needs to be explained. Add struct rdt_membw::arch_needs_linear to indicate the arch code needs the linear property to be true to configure this resource. AMD can set this and delay_linear to false. Intel can set arch_needs_linear to true to keep the existing "No support for non-linear MB domains" error message for affected platforms. [ bp: convert "we" etc to passive voice. ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Babu Moger <babu.moger@amd.com> Link: https://lkml.kernel.org/r/20200708163929.2783-8-james.morse@arm.com
2020-08-19x86/resctrl: Use is_closid_match() in more placesJames Morse1-16/+14
rdtgroup_tasks_assigned() and show_rdt_tasks() loop over threads testing for a CTRL/MON group match by closid/rmid with the provided rdtgrp. Further down the file are helpers to do this, move these further up and make use of them here. These helpers additionally check for alloc/mon capable. This is harmless as rdtgroup_mkdir() tests these capable flags before allowing the config directories to be created. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-7-james.morse@arm.com
2020-08-18x86/resctrl: Use container_of() in delayed_work handlersJames Morse1-11/+2
mbm_handle_overflow() and cqm_handle_limbo() are both provided with the domain's work_struct when called, but use get_domain_from_cpu() to find the domain, along with the appropriate error handling. container_of() saves some list walking and bitmap testing, use that instead. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-5-james.morse@arm.com
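The pattern adopted here, reduced to a self-contained illustration of container_of() recovering the enclosing object from an embedded work member; the struct names are stand-ins, not the resctrl ones.

/* How container_of() maps an embedded member pointer back to its parent. */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct work { int pending; };           /* stand-in for delayed_work */
struct domain {
        int id;
        struct work overflow_work;      /* embedded, like mbm_over */
};

static void handle_overflow(struct work *w)
{
        /* No list walking or bitmap testing: pointer arithmetic only. */
        struct domain *d = container_of(w, struct domain, overflow_work);

        printf("overflow on domain %d\n", d->id);
}

int main(void)
{
        struct domain d = { .id = 7 };

        handle_overflow(&d.overflow_work);
        return 0;
}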
2020-08-18x86/resctrl: Fix stale commentJames Morse1-1/+1
The comment in rdtgroup_init() refers to the non-existent function rdt_mount(), which has now been renamed rdt_get_tree(). Fix the comment. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-4-james.morse@arm.com
2020-08-18x86/resctrl: Remove struct rdt_membw::max_delayJames Morse2-7/+4
The struct rdt_membw::max_delay field is only used by x86's __get_mem_config_intel(), where it effectively serves as a local variable. Remove the field and replace it with a local variable. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-3-james.morse@arm.com
2020-08-18x86/resctrl: Remove unused struct mbm_state::chunks_bwJames Morse2-4/+1
Nothing reads struct mbm_state's chunks_bw value; it's a copy of chunks. Remove it. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-2-james.morse@arm.com
2020-08-18perf/x86/intel: Support per-thread RDPMC TopDown metricsKan Liang2-12/+83
Starting from Ice Lake, the TopDown metrics are directly available as fixed counters and do not require generic counters. Also, the TopDown metrics can be collected per thread. Extend the RDPMC usage to support per-thread TopDown metrics. The RDPMC index of PERF_METRICS will be output if RDPMC users ask for the RDPMC index of the metrics events. To support per-thread RDPMC TopDown, the metrics and slots counters have to be saved/restored during the context switching. The last_period and period_left are not used in the counting mode. Use the fields for saved_metric and saved_slots. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200723171117.9918-12-kan.liang@linux.intel.com
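For reference, the user-space side of this is the RDPMC instruction itself; a compilable helper is sketched below, with the caveat that the counter index must be one the kernel has granted (e.g. taken from the event's mmap'd perf_event_mmap_page), so no index is hard-coded here.

/* Raw RDPMC read; calling it with an index perf has not granted will fault. */
#include <stdint.h>

static inline uint64_t rdpmc(uint32_t counter)
{
        uint32_t lo, hi;

        asm volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        /* Intentionally no rdpmc() call here: a valid index comes from perf. */
        (void)rdpmc;
        return 0;
}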