path: root/arch
Age | Commit message | Author | Files | Lines
2021-03-17 | powerpc/svm: stop using io_tlb_start | Christoph Hellwig | 1 | -3/+3
Use the local variable that is passed to swiotlb_init_with_tbl for freeing the memory in the failure case to isolate the code a little better from swiotlb internals. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-17 | MIPS: vmlinux.lds.S: Fix appended dtb not properly aligned | Paul Cercueil | 1 | -1/+1
Commit 6654111c893f ("MIPS: vmlinux.lds.S: align raw appended dtb to 8 bytes") changed the alignment from STRUCT_ALIGNMENT bytes to 8 bytes. The commit message makes it sound like this was done on purpose, but that is not the case. The commit was written when raw appended dtb blobs were not aligned at all. The STRUCT_ALIGN() had been added a few days before, in commit 7a05293af39f ("MIPS: boot/compressed: Copy DTB to aligned address"). The true purpose of the commit was not to align specifically to 8 bytes, but to make sure that the generated vmlinux's size was properly padded to the alignment required for DTBs. While the switch to 8-byte alignment worked for vmlinux-appended dtb blobs, it broke vmlinuz-appended dtb blobs, as the decompress routine moves the blob to a STRUCT_ALIGNMENT-aligned address. Fix this by changing the raw appended dtb blob alignment from 8 bytes back to STRUCT_ALIGNMENT bytes in vmlinux.lds.S. Fixes: 6654111c893f ("MIPS: vmlinux.lds.S: align raw appended dtb to 8 bytes") Cc: Bjørn Mork <bjorn@mork.no> Signed-off-by: Paul Cercueil <paul@crapouillou.net> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-03-17 | x86: Introduce restart_block->arch_data to remove TS_COMPAT_RESTART | Oleg Nesterov | 2 | -11/+3
Save the current_thread_info()->status of X86 in the new restart_block->arch_data field so TS_COMPAT_RESTART can be removed again. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210201174716.GA17898@redhat.com
2021-03-17 | x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall() | Oleg Nesterov | 2 | -24/+14
The comment in get_nr_restart_syscall() says:

 * The problem is that we can get here when ptrace pokes
 * syscall-like values into regs even if we're not in a syscall
 * at all.

Yes, but if we are not in a syscall then the "status & (TS_COMPAT|TS_I386_REGS_POKED)" check below can't really help:

- TS_COMPAT can't be set
- TS_I386_REGS_POKED is only set if regs->orig_ax was changed by a 32bit debugger; and even in this case get_nr_restart_syscall() is only correct if the tracee is 32bit too.

Suppose that a 64bit debugger plays with a 32bit tracee and

* tracee calls sleep(2) // TS_COMPAT is set
* user interrupts the tracee by CTRL-C after 1 sec and does "(gdb) call func()"
* gdb saves the regs by PTRACE_GETREGS
* does PTRACE_SETREGS to set %rip='func' and %orig_rax=-1
* PTRACE_CONT // TS_COMPAT is cleared
* func() hits int3
* debugger catches SIGTRAP
* restores the original regs by PTRACE_SETREGS
* PTRACE_CONT

get_nr_restart_syscall() wrongly returns __NR_restart_syscall==219, and the tracee calls ia32_sys_call_table[219] == sys_madvise.

Add the sticky TS_COMPAT_RESTART flag which survives after return to user mode. It's going to be removed again in the next step by storing the information in the restart block. As a further cleanup it might be possible to also remove TS_I386_REGS_POKED with that.

Test-case:

$ cvs -d :pserver:anoncvs:anoncvs@sourceware.org:/cvs/systemtap co ptrace-tests
$ gcc -o erestartsys-trap-debuggee ptrace-tests/tests/erestartsys-trap-debuggee.c -m32
$ gcc -o erestartsys-trap-debugger ptrace-tests/tests/erestartsys-trap-debugger.c -lutil
$ ./erestartsys-trap-debugger
Unexpected: retval 1, errno 22
erestartsys-trap-debugger: ptrace-tests/tests/erestartsys-trap-debugger.c:421

Fixes: 609c19a385c8 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code") Reported-by: Jan Kratochvil <jan.kratochvil@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210201174709.GA17895@redhat.com
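A minimal sketch of the resulting logic, assuming the sticky flag is set on compat syscall entry and survives the return to user mode (illustrative, not the exact kernel code):

  static int get_nr_restart_syscall(const struct pt_regs *regs)
  {
          /* Sticky flag: still valid here even after ptrace poked the
           * registers and TS_COMPAT itself has been cleared. */
          if (current_thread_info()->status & TS_COMPAT_RESTART)
                  return __NR_ia32_restart_syscall;

          return __NR_restart_syscall;
  }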
2021-03-17 | x86: Move TS_COMPAT back to asm/thread_info.h | Oleg Nesterov | 2 | -9/+9
Move TS_COMPAT back to asm/thread_info.h, close to TS_I386_REGS_POKED. It was moved to asm/processor.h by b9d989c7218a ("x86/asm: Move the thread_info::status field to thread_struct"), then later 37a8f7c38339 ("x86/asm: Move 'status' from thread_struct to thread_info") moved the 'status' field back but TS_COMPAT was forgotten. Preparatory patch to fix the COMPAT case for get_nr_restart_syscall() Fixes: 609c19a385c8 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code") Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210201174649.GA17880@redhat.com
2021-03-16 | perf/x86/intel: Fix unchecked MSR access error caused by VLBR_EVENT | Kan Liang | 1 | -0/+3
On a Haswell machine, the perf_fuzzer managed to trigger this message:

[117248.075892] unchecked MSR access error: WRMSR to 0x3f1 (tried to write 0x0400000000000000) at rIP: 0xffffffff8106e4f4 (native_write_msr+0x4/0x20)
[117248.089957] Call Trace:
[117248.092685] intel_pmu_pebs_enable_all+0x31/0x40
[117248.097737] intel_pmu_enable_all+0xa/0x10
[117248.102210] __perf_event_task_sched_in+0x2df/0x2f0
[117248.107511] finish_task_switch.isra.0+0x15f/0x280
[117248.112765] schedule_tail+0xc/0x40
[117248.116562] ret_from_fork+0x8/0x30

A fake event called VLBR_EVENT may use bit 58 of PEBS_ENABLE if precise_ip is set. Bit 58 is reserved by the HW, and accessing it causes the unchecked MSR access error. The fake event doesn't support PEBS, so the case should be rejected. Fixes: 097e4311cda9 ("perf/x86: Add constraint to create guest LBR event without hw counter") Reported-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/1615555298-140216-2-git-send-email-kan.liang@linux.intel.com
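The shape of the rejection, sketched (macro names as found in the perf code; treat the exact check as illustrative):

  /* In the precise_ip validation path: the fake VLBR event maps to the
   * reserved bit 58 of MSR_IA32_PEBS_ENABLE, so PEBS must be refused. */
  if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
          return -EINVAL;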
2021-03-16 | perf/x86/intel: Fix a crash caused by zero PEBS status | Kan Liang | 1 | -1/+1
A repeatable crash can be triggered by the perf_fuzzer on some Haswell systems: https://lore.kernel.org/lkml/7170d3b-c17f-1ded-52aa-cc6d9ae999f4@maine.edu/ For some old CPUs (HSW and earlier), the PEBS status in a PEBS record may be mistakenly set to 0. To minimize the impact of the defect, commit 01330d7288e0 was introduced to try to avoid dropping the PEBS record in some cases. It adds a check in intel_pmu_drain_pebs_nhm() and updates the local pebs_status accordingly. However, it doesn't correct the PEBS status in the PEBS record itself, which may trigger the crash, especially for large PEBS. It's possible that all the PEBS records in a large PEBS have PEBS status 0. If so, the first get_next_pebs_record_by_bit() in __intel_pmu_pebs_event() returns NULL, so 'at' is NULL. Since it's a large PEBS, the 'count' parameter must be > 1, and the second get_next_pebs_record_by_bit() will crash. Besides the local pebs_status, correct the PEBS status in the PEBS record as well. Fixes: 01330d7288e0 ("perf/x86: Allow zero PEBS status with only single active event") Reported-by: Vince Weaver <vincent.weaver@maine.edu> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/1615555298-140216-1-git-send-email-kan.liang@linux.intel.com
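A sketch of the fix following the description above (variable and field names are assumptions about the drain path, not a verbatim quote of it):

  /* HSW and earlier: a zero status with exactly one active PEBS event
   * means the record belongs to that event. Fix the record in place so
   * that later get_next_pebs_record_by_bit() walks see a usable status. */
  if (!pebs_status && cpuc->pebs_enabled &&
      !(cpuc->pebs_enabled & (cpuc->pebs_enabled - 1)))
          pebs_status = p->status = cpuc->pebs_enabled;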
2021-03-16 | s390/bpf: Implement new atomic ops | Ilya Leoshkevich | 1 | -9/+55
Implement BPF_AND, BPF_OR and BPF_XOR the same way as the existing BPF_ADD. Since the corresponding machine instructions return the old value, BPF_FETCH happens by itself; the only additional thing required is zero-extension. There is no single instruction that implements BPF_XCHG on s390, so use a COMPARE AND SWAP loop. BPF_CMPXCHG, on the other hand, can be implemented by a single COMPARE AND SWAP. Zero-extension is automatically inserted by the verifier. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210304233002.149096-1-iii@linux.ibm.com
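Conceptually, the BPF_XCHG lowering is the classic compare-and-swap retry loop. A minimal C sketch of the idea (the JIT emits the equivalent s390 COMPARE AND SWAP instructions; this is illustrative, not the JIT code):

  static u64 xchg_via_cas(u64 *ptr, u64 new)
  {
          u64 old;

          do {
                  old = READ_ONCE(*ptr);            /* load the current value */
          } while (cmpxchg(ptr, old, new) != old);  /* retry until CS succeeds */

          return old;  /* the old value is exactly what BPF_FETCH wants */
  }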
2021-03-16 | KVM: x86/mmu: Store the address space ID in the TDP iterator | Sean Christopherson | 4 | -24/+13
Store the address space ID in the TDP iterator so that it can be retrieved without having to bounce through the root shadow page. This streamlines the code and fixes a Sparse warning about not properly using rcu_dereference() when grabbing the ID from the root on the fly. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Ben Gardon <bgardon@google.com> Message-Id: <20210315233803.2706477-5-bgardon@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-16 | KVM: x86/mmu: Factor out tdp_iter_return_to_root | Ben Gardon | 3 | -10/+19
In tdp_mmu_iter_cond_resched there is a call to tdp_iter_start which causes the iterator to continue its walk over the paging structure from the root. This is needed after a yield, as the paging structure could have been freed in the interim. The tdp_iter_start call is not very clear and something of a hack: it requires exposing tdp_iter fields not used elsewhere in tdp_mmu.c, and its effect is not obvious from the function name. Factor a more aptly named function out of tdp_iter_start and call it from tdp_mmu_iter_cond_resched and tdp_iter_start. No functional change intended. Signed-off-by: Ben Gardon <bgardon@google.com> Message-Id: <20210315233803.2706477-4-bgardon@google.com> Reviewed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-16 | KVM: x86/mmu: Fix RCU usage when atomically zapping SPTEs | Ben Gardon | 1 | -1/+1
Fix a missing rcu_dereference in tdp_mmu_zap_spte_atomic. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ben Gardon <bgardon@google.com> Message-Id: <20210315233803.2706477-3-bgardon@google.com> Reviewed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-16 | KVM: x86/mmu: Fix RCU usage in handle_removed_tdp_mmu_page | Ben Gardon | 1 | -3/+8
The pt passed into handle_removed_tdp_mmu_page does not need RCU protection, as it is not at any risk of being freed by another thread at that point. However, the implicit cast from tdp_ptep_t to u64 * dropped the __rcu annotation without a proper rcu_dereference. Fix this by passing the pt as a tdp_ptep_t and then rcu_dereferencing it in the function. Suggested-by: Sean Christopherson <seanjc@google.com> Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ben Gardon <bgardon@google.com> Message-Id: <20210315233803.2706477-2-bgardon@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
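The shape of the fix, as a sketch (function and type names per the commit message; the body is illustrative):

  static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
                                          bool shared)
  {
          /* Keep the __rcu annotation in the parameter type and drop it
           * explicitly, so Sparse can verify the usage. */
          u64 *spt = rcu_dereference(pt);

          /* ... walk the spt[] entries from here on ... */
  }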
2021-03-16 | arm64: dts: qcom: sdm845-db845c: Enable ov8856 sensor and connect to ISP | Robert Foss | 1 | -2/+17
Enable camss & ov8856 DT nodes. Signed-off-by: Robert Foss <robert.foss@linaro.org> Reviewed-by: Andrey Konovalov <andrey.konovalov@linaro.org> Link: https://lore.kernel.org/r/20210316171931.812748-23-robert.foss@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-16 | arm64: dts: qcom: sdm845-db845c: Configure regulators for camss node | Robert Foss | 1 | -0/+4
Add regulator to camss device tree node. Signed-off-by: Robert Foss <robert.foss@linaro.org> Reviewed-by: Andrey Konovalov <andrey.konovalov@linaro.org> Link: https://lore.kernel.org/r/20210316171931.812748-22-robert.foss@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-16 | arm64: dts: qcom: sdm845: Add CAMSS ISP node | Robert Foss | 1 | -0/+135
Add the camss dt node for sdm845. Signed-off-by: Robert Foss <robert.foss@linaro.org> Reviewed-by: Andrey Konovalov <andrey.konovalov@linaro.org> Link: https://lore.kernel.org/r/20210316171931.812748-21-robert.foss@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-16 | ARM: pxa: zeus: Constify the software node | Heikki Krogerus | 1 | -1/+5
When device properties are supplied to a device, what actually happens is that a software fwnode holding those properties is created and then assigned to the device. If the device properties are constant, the software node can also be constant. Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Cc: Daniel Mack <daniel@zonque.org> Cc: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: Robert Jarzmik <robert.jarzmik@free.fr> Link: https://lore.kernel.org/r/20210303152814.35070-4-heikki.krogerus@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
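The pattern used by these PXA conversions, sketched with hypothetical property names (the real boards attach such nodes to their SPI devices):

  static const struct property_entry can_properties[] = {
          PROPERTY_ENTRY_U32("clock-frequency", 16000000),  /* hypothetical */
          { }  /* sentinel */
  };

  /* A constant software node instead of a dynamically created fwnode. */
  static const struct software_node can_node = {
          .properties = can_properties,
  };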
2021-03-16 | ARM: pxa: icontrol: Constify the software node | Heikki Krogerus | 1 | -4/+8
When device properties are supplied to a device, what actually happens is that a software fwnode holding those properties is created and then assigned to the device. If the device properties are constant, the software node can also be constant. Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Cc: Daniel Mack <daniel@zonque.org> Cc: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: Robert Jarzmik <robert.jarzmik@free.fr> Link: https://lore.kernel.org/r/20210303152814.35070-3-heikki.krogerus@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-03-16 | arm64: dts: renesas: r8a77980: Fix vin4-7 endpoint binding | Vladimir Barinov | 1 | -8/+8
This fixes the bindings in the media framework: CSI40 is endpoint number 2, and CSI41 is endpoint number 3. Signed-off-by: Vladimir Barinov <vladimir.barinov@cogentembedded.com> Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Link: https://lore.kernel.org/r/20210312174735.2118212-1-niklas.soderlund+renesas@ragnatech.se Fixes: 3182aa4e0bf4d0ee ("arm64: dts: renesas: r8a77980: add CSI2/VIN support") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2021-03-15 | ARM64: enable GENERIC_FIND_FIRST_BIT | Yury Norov | 1 | -0/+1
ARM64 doesn't implement find_first_{zero}_bit in arch code and doesn't enable it in its config. This leads to using find_next_bit(), which is less efficient:

0000000000000000 <find_first_bit>:
   0: aa0003e4 mov x4, x0
   4: aa0103e0 mov x0, x1
   8: b4000181 cbz x1, 38 <find_first_bit+0x38>
   c: f9400083 ldr x3, [x4]
  10: d2800802 mov x2, #0x40 // #64
  14: 91002084 add x4, x4, #0x8
  18: b40000c3 cbz x3, 30 <find_first_bit+0x30>
  1c: 14000008 b 3c <find_first_bit+0x3c>
  20: f8408483 ldr x3, [x4], #8
  24: 91010045 add x5, x2, #0x40
  28: b50000c3 cbnz x3, 40 <find_first_bit+0x40>
  2c: aa0503e2 mov x2, x5
  30: eb02001f cmp x0, x2
  34: 54ffff68 b.hi 20 <find_first_bit+0x20> // b.pmore
  38: d65f03c0 ret
  3c: d2800002 mov x2, #0x0 // #0
  40: dac00063 rbit x3, x3
  44: dac01063 clz x3, x3
  48: 8b020062 add x2, x3, x2
  4c: eb02001f cmp x0, x2
  50: 9a829000 csel x0, x0, x2, ls // ls = plast
  54: d65f03c0 ret
...
0000000000000118 <_find_next_bit.constprop.1>:
 118: eb02007f cmp x3, x2
 11c: 540002e2 b.cs 178 <_find_next_bit.constprop.1+0x60> // b.hs, b.nlast
 120: d346fc66 lsr x6, x3, #6
 124: f8667805 ldr x5, [x0, x6, lsl #3]
 128: b4000061 cbz x1, 134 <_find_next_bit.constprop.1+0x1c>
 12c: f8667826 ldr x6, [x1, x6, lsl #3]
 130: 8a0600a5 and x5, x5, x6
 134: ca0400a6 eor x6, x5, x4
 138: 92800005 mov x5, #0xffffffffffffffff // #-1
 13c: 9ac320a5 lsl x5, x5, x3
 140: 927ae463 and x3, x3, #0xffffffffffffffc0
 144: ea0600a5 ands x5, x5, x6
 148: 54000120 b.eq 16c <_find_next_bit.constprop.1+0x54> // b.none
 14c: 1400000e b 184 <_find_next_bit.constprop.1+0x6c>
 150: d346fc66 lsr x6, x3, #6
 154: f8667805 ldr x5, [x0, x6, lsl #3]
 158: b4000061 cbz x1, 164 <_find_next_bit.constprop.1+0x4c>
 15c: f8667826 ldr x6, [x1, x6, lsl #3]
 160: 8a0600a5 and x5, x5, x6
 164: eb05009f cmp x4, x5
 168: 540000c1 b.ne 180 <_find_next_bit.constprop.1+0x68> // b.any
 16c: 91010063 add x3, x3, #0x40
 170: eb03005f cmp x2, x3
 174: 54fffee8 b.hi 150 <_find_next_bit.constprop.1+0x38> // b.pmore
 178: aa0203e0 mov x0, x2
 17c: d65f03c0 ret
 180: ca050085 eor x5, x4, x5
 184: dac000a5 rbit x5, x5
 188: dac010a5 clz x5, x5
 18c: 8b0300a3 add x3, x5, x3
 190: eb03005f cmp x2, x3
 194: 9a839042 csel x2, x2, x3, ls // ls = plast
 198: aa0203e0 mov x0, x2
 19c: d65f03c0 ret
...
0000000000000238 <find_next_bit>:
 238: a9bf7bfd stp x29, x30, [sp, #-16]!
 23c: aa0203e3 mov x3, x2
 240: d2800004 mov x4, #0x0 // #0
 244: aa0103e2 mov x2, x1
 248: 910003fd mov x29, sp
 24c: d2800001 mov x1, #0x0 // #0
 250: 97ffffb2 bl 118 <_find_next_bit.constprop.1>
 254: a8c17bfd ldp x29, x30, [sp], #16
 258: d65f03c0 ret

Enabling find_{first,next}_bit() would also benefit for_each_{set,clear}_bit(). On A-53, find_first_bit() is almost twice as fast as find_next_bit(), according to lib/find_bit_benchmark (thanks to Alexey for testing):

GENERIC_FIND_FIRST_BIT=n:
[7126084.948181] find_first_bit: 47389224 ns, 16357 iterations
[7126085.032315] find_first_bit: 19048193 ns, 655 iterations

GENERIC_FIND_FIRST_BIT=y:
[ 84.158068] find_first_bit: 27193319 ns, 16406 iterations
[ 84.233005] find_first_bit: 11082437 ns, 656 iterations

GENERIC_FIND_FIRST_BIT=n bloats the kernel even though it disables generation of find_{first,next}_bit():

yury:linux$ scripts/bloat-o-meter vmlinux vmlinux.ffb
add/remove: 4/1 grow/shrink: 19/251 up/down: 564/-1692 (-1128)
...

Overall, GENERIC_FIND_FIRST_BIT=n is harmful both in terms of performance and code size, and it's better to have GENERIC_FIND_FIRST_BIT enabled.
Tested-by: Alexey Klimov <aklimov@redhat.com> Signed-off-by: Yury Norov <yury.norov@gmail.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210225135700.1381396-2-yury.norov@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
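For reference, the generic find_first_bit() that this option enables is essentially the following (a sketch close to the lib/find_bit.c implementation). Note the absence of the offset/mask bookkeeping that find_next_bit(addr, size, 0) has to carry:

  unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
  {
          unsigned long idx;

          for (idx = 0; idx * BITS_PER_LONG < size; idx++) {
                  if (addr[idx])
                          return min(idx * BITS_PER_LONG + __ffs(addr[idx]), size);
          }

          return size;  /* no set bit found */
  }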
2021-03-15 | arm64: defconfig: Use DEBUG_INFO_REDUCED | Mark Brown | 1 | -0/+1
We've had DEBUG_INFO enabled for arm64 defconfigs since the initial commit. This is probably not that frequently used but substantially inflates the size of the build tree and the amount of I/O needed during the build. It was causing issues with storage usage in some automated CI environments which don't expect defconfigs to be quite this big, and it generally increases the resource consumption for both them and people doing local builds. The main use case for the debug info is decoding things with scripts/faddr2line, but that doesn't need the full DEBUG_INFO; DEBUG_INFO_REDUCED is enough for it, so enable that by default. Without this patch my build tree is 6.8G; with it the size drops to 2G (smaller than the 6.4G for allmodconfig!). Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Guillaume Tucker <guillaume.tucker@collabora.com> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Kevin Hilman <khilman@baylibre.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210304174407.17537-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
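Per the diffstat, the change itself amounts to a single defconfig line alongside the existing CONFIG_DEBUG_INFO=y:

  CONFIG_DEBUG_INFO_REDUCED=y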
2021-03-15 | s390/pci: fix leak of PCI device structure | Niklas Schnelle | 3 | -17/+31
In commit 05bc1be6db4b2 ("s390/pci: create zPCI bus") we removed the pci_dev_put() call matching the earlier pci_get_slot() done as part of __zpci_event_availability(). This was based on the wrong understanding that the device_put() done as part of pci_destroy_device() would counter the pci_get_slot(), when it only counters the initial reference. The same misunderstanding, and an existing bad example, also led to not doing a pci_dev_put() in zpci_remove_device(). Since releasing a PCI device, unlike releasing a PCI slot, does not print any debug message, I added one in pci_release_dev() for testing. This revealed that we are indeed leaking the PCI device on PCI hotunplug. Further testing also revealed another missing pci_dev_put() in disable_slot(). Fix this by adding the missing pci_dev_put() in disable_slot(), and fix zpci_remove_device() with the correct pci_dev_put() calls. Also, instead of calling pci_get_slot() in __zpci_event_availability() to determine whether a PCI device is registered and then doing the same again in zpci_remove_device(), do this once in zpci_remove_device(). This makes sure that the pdev in __zpci_event_availability() is only used for the result of pci_scan_single_device(), which does not need a reference count decrement as its ownership goes to the PCI bus. Also move the check whether zdev->zbus->bus is set into zpci_remove_device(), since we may be removing a device with devfn != 0 which never had a PCI bus. So that we can still set pdev->error_state to indicate that the device is not usable anymore, add a flag to set the error state. Fixes: 05bc1be6db4b2 ("s390/pci: create zPCI bus") Cc: <stable@vger.kernel.org> # 5.8+: e1bff843cde6 s390/pci: remove superfluous zdev->zbus check Cc: <stable@vger.kernel.org> # 5.8+: ba764dd703fe s390/pci: refactor zpci_create_device() Cc: <stable@vger.kernel.org> # 5.8+ Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
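The reference-counting rule being enforced, as a minimal sketch:

  struct pci_dev *pdev = pci_get_slot(bus, devfn);  /* takes a reference */

  if (pdev) {
          /* ... use pdev ... */
          pci_dev_put(pdev);  /* every pci_get_slot() needs a matching put */
  }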
2021-03-15 | s390/vtime: fix increased steal time accounting | Gerald Schaefer | 1 | -1/+1
Commit 152e9b8676c6e ("s390/vtime: steal time exponential moving average") inadvertently changed the input value for account_steal_time() from "cputime_to_nsecs(steal)" to just "steal", resulting in broken increased steal time accounting. Fix this by changing it back to "cputime_to_nsecs(steal)". Fixes: 152e9b8676c6e ("s390/vtime: steal time exponential moving average") Cc: <stable@vger.kernel.org> # 5.1 Reported-by: Sabine Forkel <sabine.forkel@de.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
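In essence, the regression and its fix are a one-argument unit bug (cputime units vs. nanoseconds):

  account_steal_time(steal);                    /* broken: raw cputime */
  account_steal_time(cputime_to_nsecs(steal));  /* fixed: nanoseconds  */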
2021-03-15 | s390/cpumf: disable preemption when accessing per-cpu variable | Thomas Richter | 1 | -1/+2
The following BUG message was triggered repeatedly when complete counter sets are extracted from the CPUMF:

BUG: using smp_processor_id() in preemptible [00000000] code: psvc-readsets/7759
caller is cf_diag_needspace+0x2c/0x100
CPU: 7 PID: 7759 Comm: psvc-readsets Not tainted 5.12.0
Hardware name: IBM 3906 M03 703 (LPAR)
Call Trace:
[<00000000c7043f78>] show_stack+0x90/0xf8
[<00000000c705776a>] dump_stack+0xba/0x108
[<00000000c705d91c>] check_preemption_disabled+0xec/0xf0
[<00000000c63eb1c4>] cf_diag_needspace+0x2c/0x100
[<00000000c63ecbcc>] cf_diag_ioctl_start+0x10c/0x240
[<00000000c63ece9a>] cf_diag_ioctl+0x19a/0x238
[<00000000c675f3f4>] __s390x_sys_ioctl+0xc4/0x100
[<00000000c63ca762>] do_syscall+0x82/0xd0
[<00000000c705bdd8>] __do_syscall+0xc0/0xd8
[<00000000c706d532>] system_call+0x72/0x98
2 locks held by psvc-readsets/7759:
#0: 00000000c75a57c0 (cpu_hotplug_lock){++++}-{0:0}, at: cf_diag_ioctl+0x44/0x238
#1: 00000000c75a3078 (cf_diag_ctrset_mutex){+.+.}-{3:3}, at: cf_diag_ioctl+0x54/0x238

This issue is a missing get_cpu_ptr/put_cpu_ptr pair in function cf_diag_needspace. Add it. Fixes: cf6acb8bdb1d ("s390/cpumf: Add support for complete counter set extraction") Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
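The pattern the fix applies, sketched (the per-cpu variable name follows the driver; treat the details as illustrative):

  struct cf_diag_csd *csd = get_cpu_ptr(&cf_diag_csd);  /* disables preemption */

  /* ... per-cpu accesses and smp_processor_id() are now safe ... */

  put_cpu_ptr(&cf_diag_csd);  /* re-enables preemption */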
2021-03-15 | arm64: remove select PINCTRL_ROCKCHIP from ARCH_ROCKCHIP | Jianqun Xu | 1 | -2/+0
To prepare for making the rockchip pinctrl driver buildable as a module, remove the select of PINCTRL_ROCKCHIP from ARCH_ROCKCHIP. Signed-off-by: Jianqun Xu <jay.xu@rock-chips.com> Link: https://lore.kernel.org/r/20210305003907.1692515-2-jay.xu@rock-chips.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2021-03-15 | x86: Remove dynamic NOP selection | Peter Zijlstra | 12 | -351/+97
This ensures that a NOP is a NOP and not a random other instruction that is also a NOP. It allows simplification of dynamic code patching that wants to verify existing code before writing new instructions (ftrace, jump_label, static_call, etc..). Differentiating on NOPs is not a feature. This pessimises 32bit (DONTCARE) and 32bit on 64bit CPUs (CARELESS). 32bit is not a performance target. Everything x86_64 since AMD K10 (2007) and Intel IvyBridge (2012) is fine with using NOPL (as opposed to prefix NOP). And per FEATURE_NOPL being required for x86_64, all x86_64 CPUs can use NOPL. So stop caring about NOPs, simplify things and get on with life. [ The problem seems to be that some uarchs can only decode NOPL on a single front-end port while others have severe decode penalties for excessive prefixes. All modern uarchs can handle both, except Atom, which has prefix penalties. ] [ Also, much doubt you can actually measure any of this on normal workloads. ] After this, FEATURE_NOPL is unused except for required-features for x86_64. FEATURE_K8 is only used for PTI. [ bp: Kernel build measurements showed ~0.3s slowdown on Sandybridge which is hardly a slowdown. Get rid of X86_FEATURE_K7, while at it. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> # bpf Acked-by: Linus Torvalds <torvalds@linuxfoundation.org> Link: https://lkml.kernel.org/r/20210312115749.065275711@infradead.org
2021-03-15 | arm64: dts: juno: Enable more SMMUs | Robin Murphy | 3 | -1/+8
Now that PCI inbound window restrictions are handled generically between the of_pci resource parsing and the IOMMU layer, and described in the Juno DT, we can finally enable the PCIe SMMU without the risk of DMA mappings inadvertently allocating unusable addresses. Similarly, the relevant support for IOMMU mappings for peripheral transfers has been hooked up in the pl330 driver for ages, so we can happily enable the DMA SMMU without that breaking anything either. Link: https://lore.kernel.org/r/a730070d718cb119f77c8ca1782a0d4189bfb3e7.1614965598.git.robin.murphy@arm.com Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2021-03-15 | arm64: dts: juno: Describe PCI dma-ranges | Robin Murphy | 1 | -0/+4
The PLDA root complex on Juno relies on an address-based lookup table to generate AXI attributes for inbound PCI transactions, and as such will not pass any transaction not matching any programmed address range. The standard firmware configuration programs 3 entries covering the GICv2m MSI doorbell and the 2 DRAM regions, so add a "dma-ranges" property to describe those usable inbound windows. Link: https://lore.kernel.org/r/720d0a9a42e33148fcac45cd39a727093a32bf32.1614965598.git.robin.murphy@arm.com Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2021-03-15 | x86/insn: Make insn_complete() static | Borislav Petkov | 2 | -7/+7
... and move it above the only place it is used. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-22-bp@alien8.de
2021-03-15 | x86/insn: Remove kernel_insn_init() | Borislav Petkov | 1 | -11/+0
Now that it is not needed anymore, drop it. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-21-bp@alien8.de
2021-03-15 | x86/tools/insn_sanity: Convert to insn_decode() | Borislav Petkov | 1 | -4/+4
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-19-bp@alien8.de
2021-03-15 | x86/tools/insn_decoder_test: Convert to insn_decode() | Borislav Petkov | 1 | -4/+6
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-17-bp@alien8.de
2021-03-15 | x86/uprobes: Convert to insn_decode() | Borislav Petkov | 1 | -4/+4
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-16-bp@alien8.de
2021-03-15 | x86/traps: Convert to insn_decode() | Borislav Petkov | 1 | -3/+4
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-15-bp@alien8.de
2021-03-15 | arm64: csum: cast to the proper type | Alex Elder | 1 | -1/+1
The last line of ip_fast_csum() calls csum_fold(), forcing the type of the argument passed to be u32. But csum_fold() takes a __wsum argument (which is __u32 __bitwise for arm64). As long as we're forcing the cast, cast it to the right type. Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20210315012650.1221328-1-elder@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
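The change in essence, per the description above (one forced cast retyped; shown out of context as a sketch):

  /* before: forcing u32 although csum_fold() takes a __wsum */
  return csum_fold((__force u32)(sum >> 32));

  /* after: force the cast to the type the callee actually takes */
  return csum_fold((__force __wsum)(sum >> 32));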
2021-03-15 | x86/sev-es: Convert to insn_decode() | Borislav Petkov | 1 | -12/+10
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-14-bp@alien8.de
2021-03-15 | x86/sev-es: Split vc_decode_insn() | Borislav Petkov | 1 | -21/+38
Split it into two helpers - a user-mode one and a kernel-mode one - for readability. Yes, the original function body is not that convoluted, but splitting it makes following that code easier than having to pay attention to each little difference between user and kernel mode. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-13-bp@alien8.de
2021-03-15 | x86/kprobes: Convert to insn_decode() | Borislav Petkov | 2 | -8/+18
Simplify code, improve decoding error checking. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210304174237.31945-12-bp@alien8.de
2021-03-15 | x86/mce: Convert to insn_decode() | Borislav Petkov | 1 | -8/+4
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-11-bp@alien8.de
2021-03-15 | x86/alternative: Use insn_decode() | Borislav Petkov | 1 | -3/+3
No functional changes, just simplification. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-10-bp@alien8.de
2021-03-15 | perf/x86/intel/ds: Check return values of insn decoder functions | Borislav Petkov | 1 | -5/+5
branch_type() doesn't need to call the full insn_decode() because it doesn't need a full decode in all cases, so leave the calls separate. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-9-bp@alien8.de
2021-03-15 | perf/x86/intel/ds: Check insn_get_length() retval | Borislav Petkov | 1 | -6/+5
intel_pmu_pebs_fixup_ip() needs only the insn length so use the appropriate helper instead of a full decode. A full decode differs only in running insn_complete() on the decoded insn but that is not needed here. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-8-bp@alien8.de
2021-03-15 | x86/boot/compressed/sev-es: Convert to insn_decode() | Borislav Petkov | 1 | -6/+5
Other than simplifying the code there should be no functional changes resulting from this. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-7-bp@alien8.de
2021-03-15 | x86/insn-eval: Handle return values from the decoder | Borislav Petkov | 1 | -13/+21
Now that the different instruction-inspecting functions return a value, test it and return early from callers if an error has been encountered. While at it, do not call insn_get_modrm() before calling insn_get_displacement(), because the latter will make sure to call insn_get_modrm() if ModRM hasn't been parsed yet. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-6-bp@alien8.de
2021-03-15 | x86/insn: Add an insn_decode() API | Borislav Petkov | 2 | -52/+188
Users of the instruction decoder should use this to decode instruction bytes. For that, have the insn*() helpers return an int value to denote success/failure. When there's an error fetching the next insn byte and the insn falls short, return -ENODATA to denote that. While at it, make insn_get_opcode() stricter as to whether what it has seen so far is a valid insn. Copy linux/kconfig.h for the tools version of the decoder so that it can use IS_ENABLED(). Also, cast the INSN_MODE_KERN dummy define value to (enum insn_mode) for tools use of the decoder, because the perf tool builds with -Werror and would error out with -Werror=sign-compare otherwise. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210304174237.31945-5-bp@alien8.de
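Typical use of the new API, per the description above (a sketch; buffer name and mode are illustrative):

  struct insn insn;
  int ret;

  ret = insn_decode(&insn, buf, buf_len, INSN_MODE_64);
  if (ret < 0)
          return ret;  /* e.g. -ENODATA: the buffer fell short */

  /* insn.length, insn.opcode, etc. are now valid */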
2021-03-15 | x86/insn: Add a __ignore_sync_check__ marker | Borislav Petkov | 4 | -6/+6
Add an explicit __ignore_sync_check__ marker which will be used to mark lines which are supposed to be ignored by file synchronization check scripts, its advantage being that it explicitly denotes such lines in the code. Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210304174237.31945-4-bp@alien8.de
2021-03-15 | x86/insn: Add @buf_len param to insn_init() kernel-doc comment | Borislav Petkov | 1 | -0/+1
It wasn't documented so add it. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210304174237.31945-3-bp@alien8.de
2021-03-15 | x86/insn: Rename insn_decode() to insn_decode_from_regs() | Borislav Petkov | 4 | -7/+7
Rename insn_decode() to insn_decode_from_regs() to denote that it receives regs as param and uses registers from there during decoding. Free the former name for a more generic version of the function. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-2-bp@alien8.de
2021-03-15 | Merge 'x86/seves' into x86/core | Borislav Petkov | 3 | -1/+8
Pick up dependent changes. Signed-off-by: Borislav Petkov <bp@suse.de>
2021-03-15 | Merge tag 'v5.12-rc3' into x86/core | Borislav Petkov | 116 | -997/+782
Pick up dependent SEV-ES urgent changes to base new work ontop. Signed-off-by: Borislav Petkov <bp@suse.de>
2021-03-15 | KVM: x86/mmu: Mark the PAE roots as decrypted for shadow paging | Sean Christopherson | 2 | -4/+23
Set the PAE roots used as decrypted to play nice with SME when KVM is using shadow paging. Explicitly skip setting the C-bit when loading CR3 for PAE shadow paging, even though it's completely ignored by the CPU. The extra documentation is nice to have. Note, there are several subtleties at play with NPT. In addition to legacy shadow paging, the PAE roots are used for SVM's NPT when either KVM is 32-bit (uses PAE paging) or KVM is 64-bit and shadowing 32-bit NPT. However, 32-bit Linux, and thus KVM, doesn't support SME. And 64-bit KVM can happily set the C-bit in CR3. This also means that keeping __sme_set(root) for 32-bit KVM when NPT is enabled is conceptually wrong, but functionally ok since SME is 64-bit only. Leave it as is to avoid unnecessary pollution. Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM") Cc: stable@vger.kernel.org Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210309224207.1218275-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>