path: root/arch
Age | Commit message | Author | Files | Lines
2020-09-23 | x86/defconfig: Enable CONFIG_USB_XHCI_HCD=y | Adam Borowski | 2 | -0/+2
commit 72a9c673636b779e370983fea08e40f97039b981 upstream. A spanking new machine I just got has all but one USB ports wired as 3.0. Booting defconfig resulted in no keyboard or mouse, which was pretty uncool. Let's enable that -- USB3 is ubiquitous rather than an oddity. As 'y' not 'm' -- recovering from initrd problems needs a keyboard. Also add it to the 32-bit defconfig. Signed-off-by: Adam Borowski <kilobyte@angband.pl> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-usb@vger.kernel.org Link: http://lkml.kernel.org/r/20181009062803.4332-1-kilobyte@angband.pl Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-23 | powerpc/dma: Fix dma_map_ops::get_required_mask | Alexey Kardashevskiy | 1 | -1/+2
commit 437ef802e0adc9f162a95213a3488e8646e5fc03 upstream. There are 2 problems with it: 1. "<" vs expected "<<" 2. the shift number is an IOMMU page number mask, not an address mask as the IOMMU page shift is missing. This did not hit us before f1565c24b596 ("powerpc: use the generic dma_ops_bypass mode") because we had additional code to handle the bypass mask, so this chunk (almost?) never executed. However, there were reports that aacraid does not work with "iommu=nobypass". After f1565c24b596, aacraid (and probably others which call dma_get_required_mask() before setting the mask) was unable to enable 64bit DMA and fell back to using the IOMMU, which was known not to work; one of the problems is a double free of an IOMMU page. This fixes DMA for aacraid, both with and without "iommu=nobypass" in the kernel command line. Verified with "stress-ng -d 4". Fixes: 6a5c7be5e484 ("powerpc: Override dma_get_required_mask by platform hook and ops") Cc: stable@vger.kernel.org # v3.2+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200908015106.79661-1-aik@ozlabs.ru Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
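For illustration, a rough standalone sketch of the corrected computation the changelog describes — shift with "<<" and fold in the IOMMU page shift so the result is an address mask. Field and helper names only mimic the kernel's iommu_table and fls_long(); this is an approximation, not the upstream diff:

  #include <stdint.h>
  #include <stdio.h>

  /* Minimal stand-in for the iommu_table fields used by the calculation. */
  struct iommu_table_sketch {
          unsigned long it_offset;     /* window start, in IOMMU pages */
          unsigned long it_size;       /* window size, in IOMMU pages  */
          unsigned int  it_page_shift; /* log2 of the IOMMU page size  */
  };

  /* fls_long(): position of the most significant set bit, 1-based. */
  static unsigned int fls_long_sketch(unsigned long x)
  {
          unsigned int r = 0;
          while (x) { r++; x >>= 1; }
          return r;
  }

  static uint64_t get_required_mask_sketch(const struct iommu_table_sketch *tbl)
  {
          /* "<<" (not "<"), plus it_page_shift to turn a page count into an address */
          uint64_t mask = 1ULL << (fls_long_sketch(tbl->it_offset + tbl->it_size) +
                                   tbl->it_page_shift - 1);
          return mask + (mask - 1);
  }

  int main(void)
  {
          struct iommu_table_sketch tbl = {
                  .it_offset = 0, .it_size = 1UL << 19, .it_page_shift = 16,
          };
          printf("required mask: 0x%llx\n",
                 (unsigned long long)get_required_mask_sketch(&tbl));
          return 0;
  }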
2020-09-23 | MIPS: SNI: Fix spurious interrupts | Thomas Bogendoerfer | 1 | -2/+7
[ Upstream commit b959b97860d0fee8c8f6a3e641d3c2ad76eab6be ] On A20R machines the interrupt pending bits in cause register need to be updated by requesting the chipset to do it. This needs to be done to find the interrupt cause and after interrupt service. In commit 0b888c7f3a03 ("MIPS: SNI: Convert to new irq_chip functions") the function to do after service update got lost, which caused spurious interrupts. Fixes: 0b888c7f3a03 ("MIPS: SNI: Convert to new irq_chip functions") Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-23 | MIPS: SNI: Fix MIPS_L1_CACHE_SHIFT | Thomas Bogendoerfer | 1 | -0/+1
[ Upstream commit 564c836fd945a94b5dd46597d6b7adb464092650 ] Commit 930beb5ac09a ("MIPS: introduce MIPS_L1_CACHE_SHIFT_<N>") forgot to select the correct MIPS_L1_CACHE_SHIFT for SNI RM. This breaks non coherent DMA because of a wrong allocation alignment. Fixes: 930beb5ac09a ("MIPS: introduce MIPS_L1_CACHE_SHIFT_<N>") Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-23 | KVM: VMX: Don't freeze guest when event delivery causes an APIC-access exit | Wanpeng Li | 1 | -0/+1
commit 99b82a1437cb31340dbb2c437a2923b9814a7b15 upstream. According to SDM 27.2.4, Event delivery causes an APIC-access VM exit. Don't report internal error and freeze guest when event delivery causes an APIC-access exit, it is handleable and the event will be re-injected during the next vmentry. Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1597827327-25055-2-git-send-email-wanpengli@tencent.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
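A sketch of the kind of change this implies in vmx_handle_exit(): an APIC-access exit with IDT vectoring info still pending is treated as handleable instead of being reported as an internal error. The surrounding list of exit reasons is approximate, not the verbatim upstream code:

  if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
      (exit_reason != EXIT_REASON_EXCEPTION_NMI &&
       exit_reason != EXIT_REASON_EPT_VIOLATION &&
       exit_reason != EXIT_REASON_APIC_ACCESS &&     /* newly tolerated */
       exit_reason != EXIT_REASON_TASK_SWITCH)) {
          /* ...report KVM_EXIT_INTERNAL_ERROR / DELIVERY_EV as before... */
  }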
2020-09-23 | vgacon: remove software scrollback support | Linus Torvalds | 4 | -4/+0
commit 973c096f6a85e5b5f2a295126ba6928d9a6afd45 upstream. Yunhai Zhang recently fixed a VGA software scrollback bug in commit ebfdfeeae8c0 ("vgacon: Fix for missing check in scrollback handling"), but that then made people look more closely at some of this code, and there were more problems on the vgacon side, but also the fbcon software scrollback. We don't really have anybody who maintains this code - probably because nobody actually _uses_ it any more. Sure, people still use both VGA and the framebuffer consoles, but they are no longer the main user interfaces to the kernel, and haven't been for decades, so these kinds of extra features end up bitrotting and not really being used. So rather than try to maintain a likely unused set of code, I'll just aggressively remove it, and see if anybody even notices. Maybe there are people who haven't jumped on the whole GUI bandwagon yet, and think it's just a fad. And maybe those people use the scrollback code. If that turns out to be the case, we can resurrect this again, once we've found the sucker^Wmaintainer for it who actually uses it. Reported-by: NopNop Nop <nopitydays@gmail.com> Tested-by: Willy Tarreau <w@1wt.eu> Cc: 张云海 <zhangyunhai@nsfocus.com> Acked-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Willy Tarreau <w@1wt.eu> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-23 | irqchip/eznps: Fix build error for !ARC700 builds | Vineet Gupta | 1 | -1/+0
[ Upstream commit 89d29997f103d08264b0685796b420d911658b96 ] The eznps driver is supposed to be platform independent, however it ends up including stuff from inside arch/arc headers, leading to randconfig build errors. The quick hack to fix this (the proper fix is too much churn for a non-active user-base) is to add the following to the nps platform agnostic header. - copy AUX_IENABLE from arch/arc header - move CTOP_AUX_IACK from arch/arc/plat-eznps/*/** Reported-by: kernel test robot <lkp@intel.com> Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lkml.kernel.org/r/20200824095831.5lpkmkafelnvlpi2@linutronix.de Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-23 | ARM: dts: socfpga: fix register entry for timer3 on Arria10 | Dinh Nguyen | 1 | -1/+1
[ Upstream commit 0ff5a4812be4ebd4782bbb555d369636eea164f7 ] Fixes the register address for the timer3 entry on Arria10. Fixes: 475dc86d08de4 ("arm: dts: socfpga: Add a base DTSI for Altera's Arria10 SOC") Signed-off-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-12 | KVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception | James Morse | 1 | -1/+2
commit 71a7f8cb1ca4ca7214a700b1243626759b6c11d4 upstream. AT instructions do a translation table walk and return the result, or the fault in PAR_EL1. KVM uses these to find the IPA when the value is not provided by the CPU in HPFAR_EL1. If a translation table walk causes an external abort it is taken as an exception, even if it was due to an AT instruction. (DDI0487F.a's D5.2.11 "Synchronous faults generated by address translation instructions") While we previously made KVM resilient to exceptions taken due to AT instructions, the device access causes mismatched attributes, and may occur speculatively. Prevent this, by forbidding a walk through memory described as device at stage2. Now such AT instructions will report a stage2 fault. Such a fault will cause KVM to restart the guest. If the AT instructions always walk the page tables, but guest execution uses the translation cached in the TLB, the guest can't make forward progress until the TLB entry is evicted. This isn't a problem, as since commit 5dcd0fdbb492 ("KVM: arm64: Defer guest entry when an asynchronous exception is pending"), KVM will return to the host to process IRQs allowing the rest of the system to keep running. Cc: stable@vger.kernel.org # v4.9 Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12 | KVM: arm64: Survive synchronous exceptions caused by AT instructions | James Morse | 3 | -6/+42
commit 88a84ccccb3966bcc3f309cdb76092a9892c0260 upstream. KVM doesn't expect any synchronous exceptions when executing, any such exception leads to a panic(). AT instructions access the guest page tables, and can cause a synchronous external abort to be taken. The arm-arm is unclear on what should happen if the guest has configured the hardware update of the access-flag, and a memory type in TCR_EL1 that does not support atomic operations. B2.2.6 "Possible implementation restrictions on using atomic instructions" from DDI0487F.a lists synchronous external abort as a possible behaviour of atomic instructions that target memory that isn't writeback cacheable, but the page table walker may behave differently. Make KVM robust to synchronous exceptions caused by AT instructions. Add a get_user() style helper for AT instructions that returns -EFAULT if an exception was generated. While KVM's version of the exception table mixes synchronous and asynchronous exceptions, only one of these can occur at each location. Re-enter the guest when the AT instructions take an exception on the assumption the guest will take the same exception. This isn't guaranteed to make forward progress, as the AT instructions may always walk the page tables, but guest execution may use the translation cached in the TLB. This isn't a problem, as since commit 5dcd0fdbb492 ("KVM: arm64: Defer guest entry when an asynchronous exception is pending"), KVM will return to the host to process IRQs allowing the rest of the system to keep running. Cc: stable@vger.kernel.org # v4.9 Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12 | KVM: arm64: Defer guest entry when an asynchronous exception is pending | James Morse | 1 | -0/+10
commit 5dcd0fdbb492d49dac6bf21c436dfcb5ded0a895 upstream. SError that occur during world-switch's entry to the guest will be accounted to the guest, as the exception is masked until we enter the guest... but we want to attribute the SError as precisely as possible. Reading DISR_EL1 before guest entry requires free registers, and using ESB+DISR_EL1 to consume and read back the ESR would leave KVM holding a host SError... We would rather leave the SError pending and let the host take it once we exit world-switch. To do this, we need to defer guest-entry if an SError is pending. Read the ISR to see if SError (or an IRQ) is pending. If so fake an exit. Place this check between __guest_enter()'s save of the host registers, and restore of the guest's. SError that occur between here and the eret into the guest must have affected the guest's registers, which we can naturally attribute to the guest. The dsb is needed to ensure any previous writes have been done before we read ISR_EL1. On systems without the v8.2 RAS extensions this doesn't give us anything as we can't contain errors, and the ESR bits to describe the severity are all implementation-defined. Replace this with a nop for these systems. v4.9-backport: as this kernel version doesn't have the RAS support at all, remove the RAS alternative. Cc: stable@vger.kernel.org # v4.9 Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> [ James: Removed v8.2 RAS related barriers ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12 | KVM: arm64: Add kvm_extable for vaxorcism code | James Morse | 5 | -25/+96
commit e9ee186bb735bfc17fa81dbc9aebf268aee5b41e upstream. KVM has a one instruction window where it will allow an SError exception to be consumed by the hypervisor without treating it as a hypervisor bug. This is used to consume asynchronous external abort that were caused by the guest. As we are about to add another location that survives unexpected exceptions, generalise this code to make it behave like the host's extable. KVM's version has to be mapped to EL2 to be accessible on nVHE systems. The SError vaxorcism code is a one instruction window, so has two entries in the extable. Because the KVM code is copied for VHE and nVHE, we end up with four entries, half of which correspond with code that isn't mapped. Cc: stable@vger.kernel.org # v4.9 Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12 | block: Move SECTOR_SIZE and SECTOR_SHIFT definitions into <linux/blkdev.h> | Bart Van Assche | 1 | -1/+0
commit 233bde21aa43516baa013ef7ac33f3427056db3e upstream. It happens often while I'm preparing a patch for a block driver that I'm wondering: is a definition of SECTOR_SIZE and/or SECTOR_SHIFT available for this driver? Do I have to introduce definitions of these constants before I can use these constants? To avoid this confusion, move the existing definitions of SECTOR_SIZE and SECTOR_SHIFT into the <linux/blkdev.h> header file such that these become available for all block drivers. Make the SECTOR_SIZE definition in the uapi msdos_fs.h header file conditional to avoid that including that header file after <linux/blkdev.h> causes the compiler to complain about a SECTOR_SIZE redefinition. Note: the SECTOR_SIZE / SECTOR_SHIFT / SECTOR_BITS definitions have not been removed from uapi header files nor from NAND drivers in which these constants are used for another purpose than converting block layer offsets and sizes into a number of sectors. Cc: David S. Miller <davem@davemloft.net> Cc: Mike Snitzer <snitzer@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12 | MIPS: BMIPS: Also call bmips_cpu_setup() for secondary cores | Florian Fainelli | 1 | -0/+2
[ Upstream commit e14f633b66902615cf7faa5d032b45ab8b6fb158 ] The initialization done by bmips_cpu_setup() typically affects both threads of a given core; on the 7435, which supports 2 cores and 2 threads, logical CPUs 2 and 3 would not run this initialization. Fixes: 738a3f79027b ("MIPS: BMIPS: Add early CPU initialization code") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-12 | MIPS: mm: BMIPS5000 has inclusive physical caches | Florian Fainelli | 1 | -0/+4
[ Upstream commit dbfc95f98f0158958d1f1e6bf06d74be38dbd821 ] When the BMIPS generic cpu-feature-overrides.h file was introduced, cpu_has_inclusive_caches/MIPS_CPU_INCLUSIVE_CACHES was not set for BMIPS5000 CPUs. Correct this when we have initialized the MIPS secondary cache successfully. Fixes: f337967d6d87 ("MIPS: BMIPS: Add cpu-feature-overrides.h") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-12 | s390: don't trace preemption in percpu macros | Sven Schnelle | 1 | -14/+14
[ Upstream commit 1196f12a2c960951d02262af25af0bb1775ebcc2 ] Since commit a21ee6055c30 ("lockdep: Change hardirq{s_enabled,_context} to per-cpu variables") the lockdep code itself uses percpu variables. This leads to recursions because the percpu macros are calling preempt_enable() which might call trace_preempt_on(). Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
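The natural way to break that recursion — and presumably what the 14-line change boils down to — is to use the non-tracing preempt helpers inside the percpu access macros. A simplified sketch of the pattern, not the actual s390 macro bodies:

  #define this_cpu_to_op_sketch(pcp, val, op)                           \
  do {                                                                  \
          preempt_disable_notrace();      /* was: preempt_disable() */  \
          *raw_cpu_ptr(&(pcp)) op val;                                  \
          preempt_enable_notrace();       /* was: preempt_enable()  */  \
  } while (0)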
2020-09-03 | powerpc/perf: Fix soft lockups due to missed interrupt accounting | Athira Rajeev | 1 | -0/+4
[ Upstream commit 17899eaf88d689529b866371344c8f269ba79b5f ] Performance monitor interrupt handler checks if any counter has overflown and calls record_and_restart() in core-book3s which invokes perf_event_overflow() to record the sample information. Apart from creating sample, perf_event_overflow() also does the interrupt and period checks via perf_event_account_interrupt(). Currently we record information only if the SIAR (Sampled Instruction Address Register) valid bit is set (using siar_valid() check) and hence the interrupt check. But it is possible that we do sampling for some events that are not generating valid SIAR, and hence there is no chance to disable the event if interrupts are more than max_samples_per_tick. This leads to soft lockup. Fix this by adding perf_event_account_interrupt() in the invalid SIAR code path for a sampling event. ie if SIAR is invalid, just do interrupt check and don't record the sample information. Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1596717992-7321-1-git-send-email-atrajeev@linux.vnet.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
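A paraphrased sketch of the resulting logic in record_and_restart() (not the upstream diff): when the SIAR is invalid for a sampling event, skip building the sample but still run the interrupt accounting so perf can throttle a runaway event:

  if (event->attr.sample_type && siar_valid(regs)) {
          /* valid SIAR: record the sample; perf does the interrupt check */
          perf_event_overflow(event, &data, regs);
  } else if (period) {
          /* invalid SIAR: only do the interrupt/period accounting */
          if (perf_event_account_interrupt(event))
                  power_pmu_stop(event, 0);
  }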
2020-09-03 | powerpc/spufs: add CONFIG_COREDUMP dependency | Arnd Bergmann | 1 | -0/+1
[ Upstream commit b648a5132ca3237a0f1ce5d871fff342b0efcf8a ] The kernel test robot pointed out a slightly different error message after recent commit 5456ffdee666 ("powerpc/spufs: simplify spufs core dumping") to spufs for a configuration that never worked: powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_proxydma_info_dump': >> file.c:(.text+0x4c68): undefined reference to `.dump_emit' powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_dma_info_dump': file.c:(.text+0x4d70): undefined reference to `.dump_emit' powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_wbox_info_dump': file.c:(.text+0x4df4): undefined reference to `.dump_emit' Add a Kconfig dependency to prevent this from happening again. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200706132302.3885935-1-arnd@arndb.de Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-03 | KVM: arm64: Fix symbol dependency in __hyp_call_panic_nvhe | David Brazdil | 1 | -1/+1
[ Upstream commit b38b298aa4397e2dc74a89b4dd3eac9e59b64c96 ] __hyp_call_panic_nvhe contains inline assembly which did not declare its dependency on the __hyp_panic_string symbol. The static-declared string has previously been kept alive because of a use in __hyp_call_panic_vhe. Fix this in preparation for separating the source files between VHE and nVHE when the two users land in two different compilation units. The static variable otherwise gets dropped when compiling the nVHE source file, causing an undefined symbol linker error later. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200625131420.71444-2-dbrazdil@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-03 | mips/vdso: Fix resource leaks in genvdso.c | Peng Fan | 1 | -0/+10
[ Upstream commit a859647b4e6bfeb192284d27d24b6a0c914cae1d ] Close "fd" before the return of map_vdso() and close "out_file" in main(). Signed-off-by: Peng Fan <fanpeng@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
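A stand-alone sketch of the pattern being fixed in the vDSO generator (names are illustrative, not the exact genvdso.c code): the file descriptor can be closed as soon as the mapping exists. The same idea applies to the output file in main(), which needs an fclose() before returning.

  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static void *map_vdso_sketch(const char *path, size_t *size)
  {
          struct stat st;
          void *addr;
          int fd = open(path, O_RDONLY);

          if (fd < 0)
                  return NULL;
          if (fstat(fd, &st) != 0) {
                  close(fd);              /* don't leak the descriptor */
                  return NULL;
          }
          addr = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          close(fd);                      /* the mapping stays valid after close() */
          if (addr == MAP_FAILED)
                  return NULL;
          *size = st.st_size;
          return addr;
  }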
2020-09-03 | arm64: dts: qcom: msm8916: Pull down PDM GPIOs during sleep | Stephan Gerhold | 1 | -1/+1
[ Upstream commit e2ee9edc282961783d519c760bbaa20fed4dec38 ] The original qcom kernel changed the PDM GPIOs to be pull-down during sleep at some point. Reportedly this was done because there was some "leakage at PDM outputs during sleep": https://source.codeaurora.org/quic/la/kernel/msm-3.10/commit/?id=0f87e08c1cd3e6484a6f7fb3e74e37340bdcdee0 I cannot say how effective this is, but everything seems to work fine with this change so let's apply the same to mainline just to be sure. Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: Stephan Gerhold <stephan@gerhold.net> Link: https://lore.kernel.org/r/20200605185916.318494-3-stephan@gerhold.net Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-26 | KVM: arm/arm64: Don't reschedule in unmap_stage2_range() | Will Deacon | 1 | -6/+0
Upstream commits fdfe7cbd5880 ("KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()") and b5331379bc62 ("KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set") fix a "sleeping from invalid context" BUG caused by unmap_stage2_range() attempting to reschedule when called on the OOM path. Unfortunately, these patches rely on the MMU notifier callback being passed knowledge about whether or not blocking is permitted, which was introduced in 4.19. Rather than backport this considerable amount of infrastructure just for KVM on arm, instead just remove the conditional reschedule. Cc: <stable@vger.kernel.org> # v4.9 only Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-26 | powerpc: Allow 4224 bytes of stack expansion for the signal frame | Michael Ellerman | 1 | -2/+5
commit 63dee5df43a31f3844efabc58972f0a206ca4534 upstream. We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA. The code was originally added in 2004 "ported from 2.4". The rough logic is that the stack is allowed to grow to 1MB with no extra checking. Over 1MB the access must be within 2048 bytes of the stack pointer, or be from a user instruction that updates the stack pointer. The 2048 byte allowance below the stack pointer is there to cover the 288 byte "red zone" as well as the "about 1.5kB" needed by the signal delivery code. Unfortunately since then the signal frame has expanded, and is now 4224 bytes on 64-bit kernels with transactional memory enabled. This means if a process has consumed more than 1MB of stack, and its stack pointer lies less than 4224 bytes from the next page boundary, signal delivery will fault when trying to expand the stack and the process will see a SEGV. The total size of the signal frame is the size of struct rt_sigframe (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on 64-bit). The 2048 byte allowance was correct until 2008 as the signal frame was: struct rt_sigframe { struct ucontext uc; /* 0 1440 */ /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */ long unsigned int _unused[2]; /* 1440 16 */ unsigned int tramp[6]; /* 1456 24 */ struct siginfo * pinfo; /* 1480 8 */ void * puc; /* 1488 8 */ struct siginfo info; /* 1496 128 */ /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */ char abigap[288]; /* 1624 288 */ /* size: 1920, cachelines: 15, members: 7 */ /* padding: 8 */ }; 1920 + 128 = 2048 Then in commit ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support") (Jul 2008) the signal frame expanded to 2304 bytes: struct rt_sigframe { struct ucontext uc; /* 0 1696 */ <-- /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ long unsigned int _unused[2]; /* 1696 16 */ unsigned int tramp[6]; /* 1712 24 */ struct siginfo * pinfo; /* 1736 8 */ void * puc; /* 1744 8 */ struct siginfo info; /* 1752 128 */ /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */ char abigap[288]; /* 1880 288 */ /* size: 2176, cachelines: 17, members: 7 */ /* padding: 8 */ }; 2176 + 128 = 2304 At this point we should have been exposed to the bug, though as far as I know it was never reported. I no longer have a system old enough to easily test on. Then in 2010 commit 320b2b8de126 ("mm: keep a guard page below a grow-down stack segment") caused our stack expansion code to never trigger, as there was always a VMA found for a write up to PAGE_SIZE below r1. 
That meant the bug was hidden as we continued to expand the signal frame in commit 2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal context") (Feb 2013): struct rt_sigframe { struct ucontext uc; /* 0 1696 */ /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ struct ucontext uc_transact; /* 1696 1696 */ <-- /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */ long unsigned int _unused[2]; /* 3392 16 */ unsigned int tramp[6]; /* 3408 24 */ struct siginfo * pinfo; /* 3432 8 */ void * puc; /* 3440 8 */ struct siginfo info; /* 3448 128 */ /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */ char abigap[288]; /* 3576 288 */ /* size: 3872, cachelines: 31, members: 8 */ /* padding: 8 */ /* last cacheline: 32 bytes */ }; 3872 + 128 = 4000 And commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes") (Feb 2014): struct rt_sigframe { struct ucontext uc; /* 0 1696 */ /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ struct ucontext uc_transact; /* 1696 1696 */ /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */ long unsigned int _unused[2]; /* 3392 16 */ unsigned int tramp[6]; /* 3408 24 */ struct siginfo * pinfo; /* 3432 8 */ void * puc; /* 3440 8 */ struct siginfo info; /* 3448 128 */ /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */ char abigap[512]; /* 3576 512 */ <-- /* size: 4096, cachelines: 32, members: 8 */ /* padding: 8 */ }; 4096 + 128 = 4224 Then finally in 2017, commit 1be7107fbe18 ("mm: larger stack guard gap, between vmas") exposed us to the existing bug, because it changed the stack VMA to be the correct/real size, meaning our stack expansion code is now triggered. Fix it by increasing the allowance to 4224 bytes. Hard-coding 4224 is obviously unsafe against future expansions of the signal frame in the same way as the existing code. We can't easily use sizeof() because the signal frame structure is not in a header. We will either fix that, or rip out all the custom stack expansion checking logic entirely. Fixes: ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support") Cc: stable@vger.kernel.org # v2.6.27+ Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Tested-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
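The check being adjusted lives in the powerpc page-fault path; a paraphrased sketch of the heuristic with the new allowance (variable and helper names approximate, not the verbatim code):

  /* An access more than 4224 bytes (was 2048) below the stack pointer is
   * rejected unless it comes from a user instruction that updates the
   * stack pointer. */
  if (address + 4224 < regs->gpr[1] &&
      (!user_mode(regs) || !store_updates_sp(regs)))
          goto bad_area;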
2020-08-26 | powerpc/pseries: Do not initiate shutdown when system is running on UPS | Vasant Hegde | 1 | -1/+0
commit 90a9b102eddf6a3f987d15f4454e26a2532c1c98 upstream. As per PAPR we have to look for both EPOW sensor value and event modifier to identify the type of event and take appropriate action. In LoPAPR v1.1 section 10.2.2 includes table 136 "EPOW Action Codes": SYSTEM_SHUTDOWN 3 The system must be shut down. An EPOW-aware OS logs the EPOW error log information, then schedules the system to be shut down to begin after an OS defined delay internal (default is 10 minutes.) Then in section 10.3.2.2.8 there is table 146 "Platform Event Log Format, Version 6, EPOW Section", which includes the "EPOW Event Modifier": For EPOW sensor value = 3 0x01 = Normal system shutdown with no additional delay 0x02 = Loss of utility power, system is running on UPS/Battery 0x03 = Loss of system critical functions, system should be shutdown 0x04 = Ambient temperature too high All other values = reserved We have a user space tool (rtas_errd) on LPAR to monitor for EPOW_SHUTDOWN_ON_UPS. Once it gets an event it initiates shutdown after predefined time. It also starts monitoring for any new EPOW events. If it receives "Power restored" event before predefined time it will cancel the shutdown. Otherwise after predefined time it will shutdown the system. Commit 79872e35469b ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown") changed our handling of the "on UPS/Battery" case, to immediately shutdown the system. This breaks existing setups that rely on the userspace tool to delay shutdown and let the system run on the UPS. Fixes: 79872e35469b ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown") Cc: stable@vger.kernel.org # v4.0+ Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> [mpe: Massage change log and add PAPR references] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200820061844.306460-1-hegdevasant@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-26 | alpha: fix annotation of io{read,write}{16,32}be() | Luc Van Oostenryck | 1 | -4/+4
[ Upstream commit bd72866b8da499e60633ff28f8a4f6e09ca78efe ] These accessors must be used to read/write a big-endian bus. The value returned or written is native-endian. However, these accessors are defined using be{16,32}_to_cpu() or cpu_to_be{16,32}() to make the endian conversion but these expect a __be{16,32} when none is present. Keeping them would need a force cast that would solve nothing at all. So, do the conversion using swab{16,32}, like done in asm-generic for similar situations. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Stephen Boyd <sboyd@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Link: http://lkml.kernel.org/r/20200622114232.80039-1-luc.vanoostenryck@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
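The resulting accessors are presumably of this shape — a sketch of the swab-based approach the changelog describes, matching what asm-generic does, not necessarily the exact alpha definitions:

  #define ioread16be(p)           swab16(ioread16(p))
  #define ioread32be(p)           swab32(ioread32(p))
  #define iowrite16be(v, p)       iowrite16(swab16(v), (p))
  #define iowrite32be(v, p)       iowrite32(swab32(v), (p))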
2020-08-26 | m68knommu: fix overwriting of bits in ColdFire V3 cache control | Greg Ungerer | 1 | -3/+3
[ Upstream commit bdee0e793cea10c516ff48bf3ebb4ef1820a116b ] The Cache Control Register (CACR) of the ColdFire V3 has bits that control high level caching functions, and also enable/disable the use of the alternate stack pointer register (the EUSP bit) to provide separate supervisor and user stack pointer registers. The code as it is today will blindly clear the EUSP bit on cache actions like invalidation. So it is broken for this case - and that will result in failed booting (interrupt entry and exit processing will be completely hosed). This only affects ColdFire V3 parts that support the alternate stack register (like the 5329 for example) - generally speaking new parts do, older parts don't. It has no impact on ColdFire V3 parts with the single stack pointer, like the 5307 for example. Fix the cache bit defines used, so they maintain the EUSP bit when carrying out cache actions through the CACR register. Signed-off-by: Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-26 | x86/asm: Add instruction suffixes to bitops | Jan Beulich | 2 | -14/+17
commit 22636f8c9511245cb3c8412039f1dd95afb3aa59 upstream. Omitting suffixes from instructions in AT&T mode is bad practice when operand size cannot be determined by the assembler from register operands, and is likely going to be warned about by upstream gas in the future (mine does already). Add the missing suffixes here. Note that for 64-bit this means some operations change from being 32-bit to 64-bit. Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/5A93F98702000078001ABACC@prv-mh.provo.novell.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
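As an illustration of what such a suffix change looks like, a sketch based on the set_bit() helper (LOCK_PREFIX, BITOP_ADDR() and __ASM_SIZE() are existing helpers in that header; the full set of converted instructions is in the patch itself):

  /* before: operand size left for the assembler to guess */
  asm volatile(LOCK_PREFIX "bts %1,%0"
               : BITOP_ADDR(addr) : "Ir" (nr) : "memory");

  /* after: size spelled out ("btsq" on 64-bit, "btsl" on 32-bit) */
  asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
               : BITOP_ADDR(addr) : "Ir" (nr) : "memory");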
2020-08-26 | x86/asm: Remove unnecessary \n\t in front of CC_SET() from asm templates | Uros Bizjak | 3 | -9/+9
commit 3c52b5c64326d9dcfee4e10611c53ec1b1b20675 upstream. There is no need for \n\t in front of CC_SET(), as the macro already includes these two. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20170906151808.5634-1-ubizjak@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | sh: landisk: Add missing initialization of sh_io_port_base | Geert Uytterhoeven | 1 | -0/+3
[ Upstream commit 0c64a0dce51faa9c706fdf1f957d6f19878f4b81 ] The Landisk setup code maps the CF IDE area using ioremap_prot(), and passes the resulting virtual addresses to the pata_platform driver, disguising them as I/O port addresses. Hence the pata_platform driver translates them again using ioport_map(). As CONFIG_GENERIC_IOMAP=n, and CONFIG_HAS_IOPORT_MAP=y, the SuperH-specific mapping code in arch/sh/kernel/ioport.c translates I/O port addresses to virtual addresses by adding sh_io_port_base, which defaults to -1, thus breaking the assumption of an identity mapping. Fix this by setting sh_io_port_base to zero. Fixes: 37b7a97884ba64bf ("sh: machvec IO death.") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Rich Felker <dalias@libc.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | pseries: Fix 64 bit logical memory block panic | Anton Blanchard | 1 | -1/+1
commit 89c140bbaeee7a55ed0360a88f294ead2b95201b upstream. Booting with a 4GB LMB size causes us to panic: qemu-system-ppc64: OS terminated: OS panic: Memory block size not suitable: 0x0 Fix pseries_memory_block_size() to handle 64 bit LMBs. Cc: stable@vger.kernel.org Signed-off-by: Anton Blanchard <anton@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200715000820.1255764-1-anton@ozlabs.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | MIPS: CPU#0 is not hotpluggable | Huacai Chen | 1 | -1/+1
commit 9cce844abf07b683cff5f0273977d5f8d0af94c7 upstream. Now CPU#0 is not hotpluggable on MIPS, so prevent creating /sys/devices/system/cpu/cpu0/online, which confuses some user-space tools. Cc: stable@vger.kernel.org Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
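Sketch of what that amounts to in the MIPS topology setup (the surrounding loop is paraphrased; the hotpluggable assignment is the point):

  static int __init topology_init_sketch(void)
  {
          int i, ret = 0;

          for_each_present_cpu(i) {
                  struct cpu *c = &per_cpu(cpu_devices, i);

                  c->hotpluggable = !!i;  /* was 1; CPU#0 gets no "online" file */
                  ret = register_cpu(c, i);
                  if (ret)
                          pr_warn("topology_init: register_cpu %d failed (%d)\n",
                                  i, ret);
          }
          return ret;
  }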
2020-08-21 | powerpc: Fix circular dependency between percpu.h and mmu.h | Michael Ellerman | 1 | -2/+2
commit 0c83b277ada72b585e6a3e52b067669df15bcedb upstream. Recently random.h started including percpu.h (see commit f227e3ec3b5c ("random32: update the net random state on interrupt and activity")), which broke corenet64_smp_defconfig: In file included from /linux/arch/powerpc/include/asm/paca.h:18, from /linux/arch/powerpc/include/asm/percpu.h:13, from /linux/include/linux/random.h:14, from /linux/lib/uuid.c:14: /linux/arch/powerpc/include/asm/mmu.h:139:22: error: unknown type name 'next_tlbcam_idx' 139 | DECLARE_PER_CPU(int, next_tlbcam_idx); This is due to a circular header dependency: asm/mmu.h includes asm/percpu.h, which includes asm/paca.h, which includes asm/mmu.h Which means DECLARE_PER_CPU() isn't defined when mmu.h needs it. We can fix it by moving the include of paca.h below the include of asm-generic/percpu.h. This moves the include of paca.h out of the #ifdef __powerpc64__, but that is OK because paca.h is almost entirely inside #ifdef CONFIG_PPC64 anyway. It also moves the include of paca.h out of the #ifdef CONFIG_SMP, which could possibly break something, but seems to have no ill effects. Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity") Cc: stable@vger.kernel.org # v5.8 Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200804130558.292328-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
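A sketch of the resulting header layout described above (paraphrased; the point is only the ordering of the two includes):

  /* arch/powerpc/include/asm/percpu.h -- sketch of the fixed ordering */
  #ifndef _ASM_POWERPC_PERCPU_H_
  #define _ASM_POWERPC_PERCPU_H_

  #ifdef __powerpc64__
  #ifdef CONFIG_SMP
  #define __my_cpu_offset local_paca->data_offset
  #endif
  #endif

  #include <asm-generic/percpu.h>

  #include <asm/paca.h>   /* moved below asm-generic/percpu.h to break the cycle */

  #endif /* _ASM_POWERPC_PERCPU_H_ */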
2020-08-21 | xtensa: fix xtensa_pmu_setup prototype | Max Filippov | 1 | -1/+1
commit 6d65d3769d1910379e1cfa61ebf387efc6bfb22c upstream. Fix the following build error in configurations with CONFIG_XTENSA_VARIANT_HAVE_PERF_EVENTS=y: arch/xtensa/kernel/perf_event.c:420:29: error: passing argument 3 of ‘cpuhp_setup_state’ from incompatible pointer type Cc: stable@vger.kernel.org Fixes: 25a77b55e74c ("xtensa/perf: Convert the hotplug notifier to state machine callbacks") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | ARM: 8992/1: Fix unwind_frame for clang-built kernels | Nathan Huckleberry | 1 | -0/+24
commit b4d5ec9b39f8b31d98f65bc5577b5d15d93795d7 upstream. Since clang does not push pc and sp in function prologues, the current implementation of unwind_frame does not work. By using the previous frame's lr/fp instead of saved pc/sp we get valid unwinds on clang-built kernels. The bounds check on next frame pointer must be changed as well since there are 8 less bytes between frames. This fixes /proc/<pid>/stack. Link: https://github.com/ClangBuiltLinux/linux/issues/912 Reported-by: Miles Chen <miles.chen@mediatek.com> Tested-by: Miles Chen <miles.chen@mediatek.com> Cc: stable@vger.kernel.org Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Huckleberry <nhuck@google.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | powerpc/vdso: Fix vdso cpu truncation | Milton Miller | 1 | -1/+1
[ Upstream commit a9f675f950a07d5c1dbcbb97aabac56f5ed085e3 ] The code in vdso_cpu_init that exposes the cpu and numa node to userspace via SPRG_VDSO incorrectly masks the cpu to 12 bits. This means that any kernel running on a box with more than 4096 threads (NR_CPUS advertises a limit of 8192 cpus) would expose userspace to two cpu contexts running at the same time with the same cpu number. Note: I'm not aware of any distro shipping a kernel with support for more than 4096 threads today, nor of any system image that currently exceeds 4096 threads. Found via code browsing. Fixes: 18ad51dd342a7eb09dbcd059d0b451b616d4dafc ("powerpc: Add VDSO version of getcpu") Signed-off-by: Milton Miller <miltonm@us.ibm.com> Signed-off-by: Anton Blanchard <anton@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200715233704.1352257-1-anton@ozlabs.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | ARM: socfpga: PM: add missing put_device() call in socfpga_setup_ocram_self_refresh() | Yu Kuai | 1 | -3/+5
[ Upstream commit 3ad7b4e8f89d6bcc9887ca701cf2745a6aedb1a0 ] If of_find_device_by_node() succeeds, socfpga_setup_ocram_self_refresh() doesn't have a corresponding put_device(). Thus add a jump target to fix the exception handling for this function implementation. Fixes: 44fd8c7d4005 ("ARM: socfpga: support suspend to ram") Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | ARM: at91: pm: add missing put_device() call in at91_pm_sram_init() | yu kuai | 1 | -3/+8
[ Upstream commit f87a4f022c44e5b87e842a9f3e644fba87e8385f ] If of_find_device_by_node() succeeds, at91_pm_sram_init() doesn't have a corresponding put_device(). Thus add a jump target to fix the exception handling for this function implementation. Fixes: d2e467905596 ("ARM: at91: pm: use the mmio-sram pool to access SRAM") Signed-off-by: yu kuai <yukuai3@huawei.com> Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Link: https://lore.kernel.org/r/20200604123301.3905837-1-yukuai3@huawei.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | m68k: mac: Fix IOP status/control register writes | Finn Thain | 1 | -9/+3
[ Upstream commit 931fc82a6aaf4e2e4a5490addaa6a090d78c24a7 ] When writing values to the IOP status/control register make sure those values do not have any extraneous bits that will clear interrupt flags. To place the SCC IOP into bypass mode would be desirable but this is not achieved by writing IOP_DMAINACTIVE | IOP_RUN | IOP_AUTOINC | IOP_BYPASS to the control register. Drop this ineffective register write. Remove the flawed and unused iop_bypass() function. Make use of the unused iop_stop() function. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Tested-by: Stan Johnson <userm57@yahoo.com> Cc: Joshua Thompson <funaho@jurai.org> Link: https://lore.kernel.org/r/09bcb7359a1719a18b551ee515da3c4c3cf709e6.1590880333.git.fthain@telegraphics.com.au Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | m68k: mac: Don't send IOP message until channel is idle | Finn Thain | 1 | -6/+3
[ Upstream commit aeb445bf2194d83e12e85bf5c65baaf1f093bd8f ] In the following sequence of calls, iop_do_send() gets called when the "send" channel is not in the IOP_MSG_IDLE state: iop_ism_irq() iop_handle_send() (msg->handler)() iop_send_message() iop_do_send() Avoid this by testing the channel state before calling iop_do_send(). When sending, and iop_send_queue is empty, call iop_do_send() because the channel is idle. If iop_send_queue is not empty, iop_do_send() will get called later by iop_handle_send(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Tested-by: Stan Johnson <userm57@yahoo.com> Cc: Joshua Thompson <funaho@jurai.org> Link: https://lore.kernel.org/r/6d667c39e53865661fa5a48f16829d18ed8abe54.1590880333.git.fthain@telegraphics.com.au Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | arm64: dts: exynos: Fix silent hang after boot on Espresso | Alim Akhtar | 1 | -0/+1
[ Upstream commit b072714bfc0e42c984b8fd6e069f3ca17de8137a ] Once regulators are disabled after kernel boot, on Espresso board silent hang observed because of LDO7 being disabled. LDO7 actually provide power to CPU cores and non-cpu blocks circuitries. Keep this regulator always-on to fix this hang. Fixes: 9589f7721e16 ("arm64: dts: Add S2MPS15 PMIC node on exynos7-espresso") Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | arm64: dts: qcom: msm8916: Replace invalid bias-pull-none property | Stephan Gerhold | 1 | -5/+5
[ Upstream commit 1b6a1a162defe649c5599d661b58ac64bb6f31b6 ] msm8916-pins.dtsi specifies "bias-pull-none" for most of the audio pin configurations. This was likely copied from the qcom kernel fork where the same property was used for these audio pins. However, "bias-pull-none" actually does not exist at all - not in mainline and not in downstream. I can only guess that the original intention was to configure "no pull", i.e. bias-disable. Change it to that instead. Fixes: 143bb9ad85b7 ("arm64: dts: qcom: add audio pinctrls") Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: Stephan Gerhold <stephan@gerhold.net> Link: https://lore.kernel.org/r/20200605185916.318494-2-stephan@gerhold.net Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | ARM: percpu.h: fix build error | Grygorii Strashko | 1 | -0/+2
commit aa54ea903abb02303bf55855fb51e3fcee135d70 upstream. Fix build error for the case: defined(CONFIG_SMP) && !defined(CONFIG_CPU_V6) config: keystone_defconfig CC arch/arm/kernel/signal.o In file included from ../include/linux/random.h:14, from ../arch/arm/kernel/signal.c:8: ../arch/arm/include/asm/percpu.h: In function ‘__my_cpu_offset’: ../arch/arm/include/asm/percpu.h:29:34: error: ‘current_stack_pointer’ undeclared (first use in this function); did you mean ‘user_stack_pointer’? : "Q" (*(const unsigned long *)current_stack_pointer)); ^~~~~~~~~~~~~~~~~~~~~ user_stack_pointer Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
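A minimal sketch of one way to resolve that error, assuming the missing piece is simply the declaration of current_stack_pointer (on 32-bit ARM a named register variable provided by <asm/thread_info.h>); the actual two-line fix may differ in detail:

  /* arch/arm/include/asm/percpu.h -- sketch */
  #include <asm/thread_info.h>    /* declares current_stack_pointer */

  static inline unsigned long __my_cpu_offset(void)
  {
          unsigned long off;

          /* The fake dependency on the stack keeps the compiler from
           * caching the TPIDRPRW read across preemption points. */
          asm("mrc p15, 0, %0, c13, c0, 4"
              : "=r" (off)
              : "Q" (*(const unsigned long *)current_stack_pointer));
          return off;
  }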
2020-08-21 | x86/i8259: Use printk_deferred() to prevent deadlock | Thomas Gleixner | 1 | -1/+1
commit bdd65589593edd79b6a12ce86b3b7a7c6dae5208 upstream. 0day reported a possible circular locking dependency: Chain exists of: &irq_desc_lock_class --> console_owner --> &port_lock_key Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&port_lock_key); lock(console_owner); lock(&port_lock_key); lock(&irq_desc_lock_class); The reason for this is a printk() in the i8259 interrupt chip driver which is invoked with the irq descriptor lock held, which reverses the lock operations vs. printk() from arbitrary contexts. Switch the printk() to printk_deferred() to avoid that. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/87365abt2v.fsf@nanos.tec.linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | KVM: LAPIC: Prevent setting the tscdeadline timer if the lapic is hw disabled | Wanpeng Li | 1 | -1/+1
commit d2286ba7d574ba3103a421a2f9ec17cb5b0d87a1 upstream. Prevent setting the tscdeadline timer if the lapic is hw disabled. Fixes: bce87cce88 (KVM: x86: consolidate different ways to test for in-kernel LAPIC) Cc: <stable@vger.kernel.org> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1596165141-28874-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | parisc: add support for cmpxchg on u8 pointers | Liam Beguin | 2 | -0/+14
[ Upstream commit b344d6a83d01c52fddbefa6b3b4764da5b1022a0 ] The kernel test bot reported[1] that using set_mask_bits on a u8 causes the following issue on parisc: hppa-linux-ld: drivers/phy/ti/phy-tusb1210.o: in function `tusb1210_probe': >> (.text+0x2f4): undefined reference to `__cmpxchg_called_with_bad_pointer' >> hppa-linux-ld: (.text+0x324): undefined reference to `__cmpxchg_called_with_bad_pointer' hppa-linux-ld: (.text+0x354): undefined reference to `__cmpxchg_called_with_bad_pointer' Add support for cmpxchg on u8 pointers. [1] https://lore.kernel.org/patchwork/patch/1272617/#1468946 Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Liam Beguin <liambeguin@gmail.com> Tested-by: Dave Anglin <dave.anglin@bell.net> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
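Sketch of what such support looks like (paraphrased): extend the size switch in __cmpxchg() so a 1-byte pointer is dispatched to a byte-sized helper (called __cmpxchg_u8 here) instead of falling through to __cmpxchg_called_with_bad_pointer():

  static inline unsigned long
  __cmpxchg_sketch(volatile void *ptr, unsigned long old, unsigned long new_, int size)
  {
          switch (size) {
          case 8: return __cmpxchg_u64((u64 *)ptr, old, new_);
          case 4: return __cmpxchg_u32((unsigned int *)ptr, old, new_);
          case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff); /* new */
          }
          __cmpxchg_called_with_bad_pointer();
          return old;
  }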
2020-08-21 | arm64: csum: Fix handling of bad packets | Robin Murphy | 1 | -2/+3
[ Upstream commit 05fb3dbda187bbd9cc1cd0e97e5d6595af570ac6 ] Although iph is expected to point to at least 20 bytes of valid memory, ihl may be bogus, for example on reception of a corrupt packet. If it happens to be less than 5, we really don't want to run away and dereference 16GB worth of memory until it wraps back to exactly zero... Fixes: 0e455d8e80aa ("arm64: Implement optimised IP checksum helpers") Reported-by: guodeqing <geffrey.guo@huawei.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | sh: Fix validation of system call number | Michael Karcher | 1 | -3/+3
[ Upstream commit 04a8a3d0a73f51c7c2da84f494db7ec1df230e69 ] The slow path for traced system call entries accessed a wrong memory location to get the number of the maximum allowed system call number. Renumber the numbered "local" label for the correct location to avoid collisions with actual local labels. Signed-off-by: Michael Karcher <kernel@mkarcher.dialup.fu-berlin.de> Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Fixes: f3a8308864f920d2 ("sh: Add a few missing irqflags tracing markers.") Signed-off-by: Rich Felker <dalias@libc.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | install several missing uapi headers | Rolf Eike Beer | 2 | -0/+4
Commit fcc8487d477a3452a1d0ccbdd4c5e0e1e3cb8bed ("uapi: export all headers under uapi directories") changed the default to install all headers not marked to be conditional. This takes the list of headers listed in the commit message and manually adds an export for those that are already present in this kernel version. Found during an attempt to build mtd-utils 2.1.2 as it wants hash_info.h, which exists since 3.13 but has not been installed until the above mentioned commit, which ended up in 4.12. Signed-off-by: Rolf Eike Beer <eb@emlix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21 | x86, vmlinux.lds: Page-align end of ..page_aligned sections | Joerg Roedel | 1 | -0/+1
[ Upstream commit de2b41be8fcccb2f5b6c480d35df590476344201 ] On x86-32 the idt_table with 256 entries needs only 2048 bytes. It is page-aligned, but the end of the .bss..page_aligned section is not guaranteed to be page-aligned. As a result, objects from other .bss sections may end up on the same 4k page as the idt_table, and will accidentally get mapped read-only during boot, causing unexpected page-faults when the kernel writes to them. This could be worked around by making the objects in the page aligned sections page sized, but that's wrong. Explicit sections which store only page aligned objects have an implicit guarantee that the object is alone in the page in which it is placed. That works for all objects except the last one. That's inconsistent. Enforcing page sized objects for these sections would wreck memory sanitizers, because the object becomes artificially larger than it should be and out of bound access becomes legit. Align the end of the .bss..page_aligned and .data..page_aligned section on page-size so all objects placed in these sections are guaranteed to have their own page. [ tglx: Amended changelog ] Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200721093448.10417-1-joro@8bytes.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21 | x86/build/lto: Fix truncated .bss with -fdata-sections | Sami Tolvanen | 1 | -1/+1
[ Upstream commit 6a03469a1edc94da52b65478f1e00837add869a3 ] With CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=y, we compile the kernel with -fdata-sections, which also splits the .bss section. The split produces new .bss.* section names, a pattern that is missed by the main x86 linker script, which only expects the '.bss' name. This results in the second part being discarded, leaving a too small, truncated .bss section and an unhappy, non-working kernel. Use the common BSS_MAIN macro in the linker script to properly capture and merge all the generated BSS sections. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20190415164956.124067-1-samitolvanen@google.com [ Extended the changelog. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>