path: root/arch/arm/mm
2022-07-21  ARM: 9209/1: Spectre-BHB: avoid pr_info() every time a CPU comes out of idle  (Ard Biesheuvel; 1 file, -3/+3)

[ Upstream commit 0609e200246bfd3b7516091c491bec4308349055 ]

Jon reports that the Spectre-BHB init code is filling up the kernel log with spurious notifications about which mitigation has been enabled, every time any CPU comes out of a low power state.

Given that Spectre-BHB mitigations are system wide, only a single mitigation can be enabled, and we already print an error if two types of CPUs coexist in a single system that require different Spectre-BHB mitigations. This means that the pr_info() that describes the selected mitigation does not need to be emitted for each CPU anyway, and so we can simply emit it only once.

In order to clarify the above in the log message, update it to describe that the selected mitigation will be enabled on all CPUs, including ones that are unaffected. If another CPU comes up later that is affected and requires a different mitigation, we report an error as before.

Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround")
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
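A minimal sketch of the print-once pattern the fix relies on (the guard variable and helper name are stand-ins, not the literal patch):

    #include <linux/printk.h>
    #include <linux/types.h>

    static bool bhb_mitigation_reported;    /* hypothetical guard */

    static void report_bhb_mitigation(const char *method)
    {
        if (bhb_mitigation_reported)
            return;
        bhb_mitigation_reported = true;
        pr_info("Spectre BHB: enabling %s workaround for all CPUs\n", method);
    }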
2022-07-21  ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction  (Ard Biesheuvel; 1 file, -0/+3)

commit e5c46fde75e43c15a29b40e5fc5641727f97ae47 upstream.

After emulating a misaligned load or store issued in Thumb mode, we have to advance the IT state by hand, or it will get out of sync with the actual instruction stream, which means we'll end up applying the wrong condition code to subsequent instructions. This might corrupt the program state rather catastrophically.

So borrow the it_advance() helper from the probing code, and use it on CPSR if the emulated instruction is Thumb.

Cc: <stable@vger.kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
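A hedged sketch of the shape of the fix, assuming it_advance() takes and returns the CPSR value as in the probing code it is borrowed from:

    #include <asm/ptrace.h>

    /* After emulating a misaligned Thumb load/store, step the IT state
     * forward so later conditional instructions see the right condition. */
    static void advance_it_state_if_thumb(struct pt_regs *regs)
    {
        if (thumb_mode(regs))
            regs->ARM_cpsr = it_advance(regs->ARM_cpsr);
    }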
2022-07-21  ARM: 9213/1: Print message about disabled Spectre workarounds only once  (Dmitry Osipenko; 1 file, -2/+1)

commit e4ced82deb5fb17222fb82e092c3f8311955b585 upstream.

Print the message about disabled Spectre workarounds only once. The message is printed each time a CPU comes out of idle on NVIDIA Tegra boards, causing a storm in the kernel log that makes the system unusable.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-05-25  ARM: 9196/1: spectre-bhb: enable for Cortex-A15  (Ard Biesheuvel; 1 file, -0/+1)

[ Upstream commit 0dc14aa94ccd8ba35eb17a0f9b123d1566efd39e ]

The Spectre-BHB mitigations were inadvertently left disabled for Cortex-A15, due to the fact that cpu_v7_bugs_init() is not called in that case. So fix that.

Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-03-11  ARM: fix build warning in proc-v7-bugs.c  (Russell King (Oracle); 1 file, -1/+2)

commit b1a384d2cbccb1eb3f84765020d25e2c1929706e upstream.

The kernel test robot discovered that building without HARDEN_BRANCH_PREDICTOR issues a warning due to a missing argument to pr_info(). Add the missing argument.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 9dd78194a372 ("ARM: report Spectre v2 status through sysfs")
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-11  ARM: Spectre-BHB workaround  (Russell King (Oracle); 2 files, -0/+86)

commit b9baf5c8c5c356757f4f9d8180b5e9d234065bc3 upstream.

Workaround the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 to be safe, as it is affected by Spectre V2 in the same ways as Cortex-A15.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
[changes due to lack of SYSTEM_FREEING_INITMEM - gregkh]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-11  ARM: report Spectre v2 status through sysfs  (Russell King (Oracle); 2 files, -31/+100)

commit 9dd78194a3722fa6712192cdd4f7032d45112a9a upstream.

As per other architectures, add support for reporting the Spectre vulnerability status via sysfs CPU.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
[ preserve res variable and add SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED - gregkh ]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-08  ARM: 9182/1: mmu: fix returns from early_param() and __setup() functions  (Randy Dunlap; 1 file, -0/+2)

commit 7b83299e5b9385943a857d59e15cba270df20d7e upstream.

early_param() handlers should return 0 on success. __setup() handlers should return 1 on success, i.e., the parameter has been handled. A return of 0 would cause the "option=value" string to be added to init's environment strings, polluting it.

../arch/arm/mm/mmu.c: In function 'test_early_cachepolicy':
../arch/arm/mm/mmu.c:215:1: error: no return statement in function returning non-void [-Werror=return-type]
../arch/arm/mm/mmu.c: In function 'test_noalign_setup':
../arch/arm/mm/mmu.c:221:1: error: no return statement in function returning non-void [-Werror=return-type]

Fixes: b849a60e0903 ("ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Igor Zhbanov <i.zhbanov@omprussia.ru>
Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: patches@armlinux.org.uk
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
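Two hypothetical handlers illustrating the return conventions the commit describes (names are placeholders):

    #include <linux/init.h>

    static int __init demo_early(char *arg)
    {
        /* ... parse arg ... */
        return 0;   /* 0 == handled, for early_param() */
    }
    early_param("demo_early", demo_early);

    static int __init demo_setup(char *arg)
    {
        /* ... parse arg ... */
        return 1;   /* 1 == handled, for __setup(); returning 0 would
                     * leak "demo_setup=..." into init's environment */
    }
    __setup("demo_setup=", demo_setup);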
2021-12-22  ARM: 8805/2: remove unneeded naked function usage  (Nicolas Pitre; 7 files, -194/+178)

commit b99afae1390140f5b0039e6b37a7380de31ae874 upstream.

The naked attribute is known to confuse some old gcc versions when function arguments aren't explicitly listed as inline assembly operands, despite the gcc documentation. That resulted in commit 9a40ac86152c ("ARM: 6164/1: Add kto and kfrom to input operands list."). Yet that commit has problems of its own: its assembly operand constraints are completely wrong. If the generated code has been OK since then, it is due to luck rather than correctness.

So this patch also provides proper assembly operand constraints, and removes two instances of redundant register usage in the implementation while at it. Inspection of the generated code with this patch doesn't show any obvious quality degradation either, so not relying on __naked at all will make the code less fragile, and avoid some issues with clang.

The only remaining __naked instances (excluding the kprobes test cases) are exynos_pm_power_up_setup(), tc2_pm_power_up_setup() and cci_enable_port_for_self(). But in the first two cases, only the function address is used by the compiler with no chance of inlining it by mistake, and the third case is called from assembly code only. And the fact that no stack is available when the corresponding code is executed does warrant the __naked usage in those cases.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-26  ARM: 9155/1: fix early early_iounmap()  (Michał Mirosław; 1 file, -2/+2)

commit 0d08e7bf0d0d1a29aff7b16ef516f7415eb1aa05 upstream.

Currently __set_fixmap() bails out with a warning when called in early boot from early_iounmap(). Fix it, and while at it, make the comment a bit easier to understand.

Cc: <stable@vger.kernel.org>
Fixes: b089c31c519c ("ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap")
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-26  ARM: 9136/1: ARMv7-M uses BE-8, not BE-32  (Arnd Bergmann; 1 file, -1/+1)

[ Upstream commit 345dac33f58894a56d17b92a41be10e16585ceff ]

When configuring the kernel for big-endian, we set either BE-8 or BE-32 based on the CPU architecture level. Until linux-4.4, we did not have any ARMv7-M platform allowing big-endian builds, but now i.MX/Vybrid is in that category, and we get a build error because of this:

arch/arm/kernel/module-plts.c: In function 'get_module_plt':
arch/arm/kernel/module-plts.c:60:46: error: implicit declaration of function '__opcode_to_mem_thumb32' [-Werror=implicit-function-declaration]

This comes down to picking the wrong default: ARMv7-M uses BE8 like ARMv7-A does. Changing the default gets the kernel to compile and presumably works.

https://lore.kernel.org/all/1455804123-2526139-2-git-send-email-arnd@arndb.de/

Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-11-02  ARM: 9133/1: mm: proc-macros: ensure *_tlb_fns are 4B aligned  (Nick Desaulniers; 1 file, -0/+1)

commit e6a0c958bdf9b2e1b57501fc9433a461f0a6aadd upstream.

A kernel built with CONFIG_THUMB2_KERNEL=y and using clang as the assembler could generate non-naturally-aligned v7wbi_tlb_fns, which results in a boot failure. The original commit adding the macro missed the .align directive on this data.

Link: https://github.com/ClangBuiltLinux/linux/issues/1447
Link: https://lore.kernel.org/all/0699da7b-354f-aecc-a62f-e25693209af4@linaro.org/
Debugged-by: Ard Biesheuvel <ardb@kernel.org>
Debugged-by: Nathan Chancellor <nathan@kernel.org>
Debugged-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 66a625a88174 ("ARM: mm: proc-macros: Add generic proc/cache/tlb struct definition macros")
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-10-29  ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT values  (Guillaume Tucker; 1 file, -4/+12)

[ Upstream commit 8e007b367a59bcdf484c81f6df9bd5a4cc179ca6 ]

The L310_PREFETCH_CTRL register bits 28 and 29, which enable data and instruction prefetch respectively, can also be accessed via the L2X0_AUX_CTRL register. The two registers appear to be wired together in hardware, so changing the bits in the prefetch register alone will get undone when the aux control register is restored later on. For this reason, set these bits in both registers during initialisation according to the devicetree property values.

Link: https://lore.kernel.org/lkml/76f2f3ad5e77e356e0a5b99ceee1e774a2842c25.1597061474.git.guillaume.tucker@collabora.com/
Fixes: ec3bd0e68a67 ("ARM: 8391/1: l2c: add options to overwrite prefetching behavior")
Signed-off-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-20  ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE  (Linus Walleij; 1 file, -1/+2)

[ Upstream commit e1de94380af588bdf6ad6f0cc1f75004c35bc096 ]

Recent work with KASan exposed the following hard-coded bitmask in arch/arm/mm/proc-macros.S:

  bic rd, sp, #8128
  bic rd, rd, #63

This forms the bitmask 0x1FFF, coinciding with (PAGE_SIZE << THREAD_SIZE_ORDER) - 1; this code was assuming that THREAD_SIZE is always 8K (8192). As KASan increases THREAD_SIZE_ORDER to 2, I ran into this bug.

Fix it with this little one-liner suggested by Ard:

  bic rd, sp, #(THREAD_SIZE - 1) & ~63

where THREAD_SIZE is defined using THREAD_SIZE_ORDER. We also have to include <linux/const.h> since THREAD_SIZE expands to use the _AC() macro.

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
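A small standalone check of the mask arithmetic (the values are hypothetical; a THREAD_SIZE_ORDER of 1 reproduces the old 8K assumption):

    #include <stdio.h>

    #define THREAD_SIZE_ORDER 1
    #define PAGE_SIZE 4096UL
    #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)

    int main(void)
    {
        /* Old pair of bics: #8128 (0x1FC0) then #63 clears 0x1FFF total. */
        printf("old combined mask: %#lx\n", 8128UL | 63UL);
        /* New first operand equals 8128 only while THREAD_SIZE is 8K;
         * unlike the literal, it tracks THREAD_SIZE_ORDER automatically. */
        printf("new first operand: %#lx\n", (THREAD_SIZE - 1) & ~63UL);
        return 0;
    }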
2020-02-15  ARM: 8949/1: mm: mark free_memmap as __init  (Olof Johansson; 1 file, -1/+1)

commit 31f3010e60522ede237fb145a63b4af5a41718c2 upstream.

As of commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly"), free_memmap() might not always be inlined, and thus is triggering a section warning:

WARNING: vmlinux.o(.text.unlikely+0x904): Section mismatch in reference from the function free_memmap() to the function .meminit.text:memblock_free()

Mark it as __init, since the caller (free_unused_memmap) already is.

Fixes: ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly")
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-12-01  ARM: 8904/1: skip nomap memblocks while finding the lowmem/highmem boundary  (Chester Lin; 1 file, -0/+3)

commit 1d31999cf04c21709f72ceb17e65b54a401330da upstream.

adjust_lowmem_bounds() checks every memblock in order to find the boundary between lowmem and highmem. However, some memblocks could be marked as NOMAP, so they are not used by the kernel and should be skipped while calculating the boundary.

Signed-off-by: Chester Lin <clin@suse.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-11-10  ARM: 8926/1: v7m: remove register save to stack before svc  (afzal mohammed; 1 file, -1/+0)

[ Upstream commit 2ecb287998a47cc0a766f6071f63bc185f338540 ]

The r0-r3 & r12 registers are saved & restored, before & after svc respectively. The intention was to preserve those registers across the thread to handler mode switch. On v7-M, hardware saves the register context upon exception in an AAPCS-compliant way. Restoring r0-r3 & r12 is done from the stack location where hardware saves them, not from the location on the stack where these registers were saved.

To clarify, on an stm32f429 discovery board:

1. before svc, sp = 0x90009ff8
2. r0-r3,r12 saved to 0x90009ff8 - 0x9000a00b
3. upon svc, h/w decrements sp by 32 & pushes registers onto the stack
4. after svc, sp = 0x90009fd8
5. r0-r3,r12 restored from 0x90009fd8 - 0x90009feb

The above means r0-r3,r12 are not restored from the location where they were saved, but since hardware pushes the registers onto the stack, the registers are restored correctly.

Note that during register saving to the stack (step 2), it goes past 0x9000a000. And it seems, based on objdump, there are global symbols residing there, which can perhaps cause issues on a non-XIP kernel (on XIP, the data section is set up later).

Based on the analysis above, manually saving registers onto the stack is at best a no-op and at worst can cause data section corruption. Hence remove the storing of registers onto the stack before svc.

Fixes: b70cd406d7fe ("ARM: 8671/1: V7M: Preserve registers across switch from Thread to Handler mode")
Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-10  ARM: mm: fix alignment handler faults under memory pressure  (Russell King; 1 file, -8/+36)

[ Upstream commit 67e15fa5b487adb9b78a92789eeff2d6ec8f5cee ]

When the system has high memory pressure, the page containing the instruction may be paged out. Using probe_kernel_address() means that if the page is swapped out, the resulting page fault will not be handled because page faults are disabled by this function.

Use get_user() to read the instruction instead.

Reported-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Fixes: b255188f90e2 ("ARM: fix scheduling while atomic warning in alignment handling code")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
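A hedged sketch of the idea (helper name and types are stand-ins): read the possibly paged-out instruction with get_user(), which can service the page fault, instead of probe_kernel_address(), which runs with page faults disabled and simply fails:

    #include <linux/uaccess.h>

    static int read_faulting_instr(unsigned long addr, u32 *instr)
    {
        /* get_user() may sleep and page the instruction back in;
         * returns 0 on success or -EFAULT on failure. */
        return get_user(*instr, (u32 __user *)addr);
    }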
2019-10-07  ARM: 8903/1: ensure that usable memory in bank 0 starts from a PMD-aligned address  (Mike Rapoport; 1 file, -0/+16)

[ Upstream commit 00d2ec1e6bd82c0538e6dd3e4a4040de93ba4fef ]

The calculation of memblock_limit in adjust_lowmem_bounds() assumes that bank 0 starts from a PMD-aligned address. However, the beginning of the first bank may be NOMAP memory, and the start of usable memory will then not be aligned to a PMD boundary. In such a case memblock_limit will be set to the end of the NOMAP region, which will prevent any memblock allocations.

Mark the region between the end of the NOMAP area and the next PMD-aligned address as NOMAP as well, so that the usable memory will start at a PMD-aligned address.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-10-07  ARM: 8898/1: mm: Don't treat faults reported from cache maintenance as writes  (Will Deacon; 2 files, -2/+3)

[ Upstream commit 834020366da9ab3fb87d1eb9a3160eb22dbed63a ]

Translation faults arising from cache maintenance instructions are rather unhelpfully reported with an FSR value where the WnR field is set to 1, indicating that the faulting access was a write. Since cache maintenance instructions on 32-bit ARM do not require any particular permissions, this can cause our private 'cacheflush' system call to fail spuriously if a translation fault is generated due to page aging when targeting a read-only VMA.

In this situation, we will return -EFAULT to userspace, although this is unfortunately suppressed by the popular '__builtin___clear_cache()' intrinsic provided by GCC, which returns void.

Although it's tempting to write this off as a userspace issue, we can actually do a little bit better on CPUs that support LPAE, even if the short-descriptor format is in use. On these CPUs, cache maintenance faults additionally set the CM field in the FSR, which we can use to suppress the write permission checks in the page fault handler and succeed in performing cache maintenance to read-only areas even in the presence of a translation fault.

Reported-by: Orion Hodson <oth@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
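A sketch of the resulting predicate; the bit positions follow the architecture's short-descriptor FSR layout but should be treated as assumptions here:

    #include <stdbool.h>

    #define FSR_WRITE   (1u << 11)  /* WnR: faulting access was a write */
    #define FSR_CM      (1u << 13)  /* cache maintenance fault (LPAE CPUs) */

    /* Treat a fault as a write only if WnR is set AND it was not raised
     * by a cache maintenance instruction. */
    static bool is_write_fault(unsigned int fsr)
    {
        return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
    }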
2019-09-21  ARM: 8901/1: add a criteria for pfn_valid of arm  (zhaoyang; 1 file, -0/+5)

[ Upstream commit 5b3efa4f1479c91cb8361acef55f9c6662feba57 ]

pfn_valid() can be wrong when parsing an invalid pfn whose physical address exceeds BITS_PER_LONG, as the MSB will be trimmed when shifted.

The issue originally arose from the call stack below, which corresponds to an access of /proc/kpageflags from userspace with an invalid pfn parameter, and leads to a kernel panic.

[46886.723249] c7 [<c031ff98>] (stable_page_flags) from [<c03203f8>]
[46886.723264] c7 [<c0320368>] (kpageflags_read) from [<c0312030>]
[46886.723280] c7 [<c0311fb0>] (proc_reg_read) from [<c02a6e6c>]
[46886.723290] c7 [<c02a6e24>] (__vfs_read) from [<c02a7018>]
[46886.723301] c7 [<c02a6f74>] (vfs_read) from [<c02a778c>]
[46886.723315] c7 [<c02a770c>] (SyS_pread64) from [<c0108620>] (ret_fast_syscall+0x0/0x28)

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
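A hedged sketch of the guard this implies: convert pfn -> phys -> pfn, and if the round trip loses the high bits (the shift overflowed), the pfn cannot be valid. The memblock call mirrors the existing ARM pfn_valid() path:

    #include <linux/memblock.h>
    #include <asm/memory.h>     /* __pfn_to_phys() / __phys_to_pfn() */

    int pfn_valid(unsigned long pfn)
    {
        phys_addr_t addr = __pfn_to_phys(pfn);

        if (__phys_to_pfn(addr) != pfn)
            return 0;           /* round trip lost the MSBs */

        return memblock_is_map_memory(addr);
    }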
2019-09-21  ARM: 8874/1: mm: only adjust sections of valid mm structures  (Doug Berger; 1 file, -1/+2)

[ Upstream commit c51bc12d06b3a5494fbfcbd788a8e307932a06e9 ]

A timing hazard exists when an early fork/exec thread begins exiting and sets its mm pointer to NULL while a separate core tries to update the section information. This commit ensures that the mm pointer is not NULL before setting its section parameters. The arguments provided by commit 11ce4b33aedc ("ARM: 8672/1: mm: remove tasklist locking from update_sections_early()") are equally valid for not requiring grabbing the task_lock around this check.

Fixes: 08925c2f124f ("ARM: 8464/1: Update all mm structures with section adjustments")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Rob Herring <robh@kernel.org>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-04-05  ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of  (Vladimir Murzin; 1 file, -0/+3)

[ Upstream commit 72cd4064fccaae15ab84d40d4be23667402df4ed ]

ARMv8-M introduces support for the Security extension to the M class; among other things it affects exception handling, especially the encoding of EXC_RETURN. The new bits that have been added:

Bit [6] Secure or Non-secure stack
Bit [5] Default callee register stacking
Bit [0] Exception Secure

conflict with the hard-coded value of EXC_RETURN. In fact, we only care about a few bits:

Bit [3] Mode (0 - Handler, 1 - Thread)
Bit [2] Stack pointer selection (0 - Main, 1 - Process)

We can toggle only those bits and leave the other bits as they were on exception entry. That is basically what this patch does: it saves EXC_RETURN when we transition from Thread to Handler mode (i.e. on the first svc), so the saved value is used later instead of the EXC_RET_THREADMODE_PROCESSSTACK constant.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
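An illustrative userspace model of the bit manipulation; the bit positions come from the commit text, while the names and helper are hypothetical:

    #include <stdio.h>

    #define EXC_RET_MODE   (1u << 3)   /* 0 - Handler, 1 - Thread */
    #define EXC_RET_SPSEL  (1u << 2)   /* 0 - Main stack, 1 - Process stack */

    /* Start from the EXC_RETURN saved on the first svc and set only the
     * Mode and SPSEL bits, leaving the ARMv8-M Security bits (6, 5, 0)
     * exactly as they were on exception entry. */
    static unsigned int to_thread_process_stack(unsigned int saved_exc_ret)
    {
        return saved_exc_ret | EXC_RET_MODE | EXC_RET_SPSEL;
    }

    int main(void)
    {
        /* 0xffffffe1 | bits 3,2 -> 0xffffffed, the familiar constant. */
        printf("%#x\n", to_thread_process_stack(0xffffffe1u));
        return 0;
    }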
2019-02-20  ARM: fix the cockup in the previous patch  (Russell King; 1 file, -2/+2)

Commit d6951f582cc50ba0ad22ef46b599740966599b14 upstream.

The intention in the previous patch was to only place the processor tables in the .rodata section if big.Little was being built and we wanted the branch target hardening, but instead (due to the way it was tested) it ended up always placing the tables into the .rodata section. Although harmless, let's correct this anyway.

Fixes: 3a4d0c2172bc ("ARM: ensure that processor vtables is not lost after boot")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20  ARM: ensure that processor vtables is not lost after boot  (Russell King; 1 file, -0/+10)

Commit 3a4d0c2172bcf15b7a3d9d498b2b355f9864286b upstream.

Marek Szyprowski reported problems with CPU hotplug in current kernels. This was tracked down to the processor vtables being located in an init section, and therefore discarded after kernel boot, despite being required after boot to properly initialise the non-boot CPUs. Arrange for these tables to end up in .rodata when required.

Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20  ARM: spectre-v2: per-CPU vtables to work around big.Little systems  (Russell King; 1 file, -15/+2)

Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream.

In big.Little systems, some CPUs require the Spectre workarounds in paths such as the context switch, but other CPUs do not. In order to handle these differences, we need per-CPU vtables. We are unable to use the kernel's per-CPU variables to support this, as per-CPU is not initialised at times when we need access to the vtables, so we have to use an array indexed by logical CPU number. We use an array-of-pointers to avoid having function pointers in the kernel's read/write .data section.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2018-12-21  ARM: 8815/1: V7M: align v7m_dma_inv_range() with v7 counterpart  (Vladimir Murzin; 1 file, -5/+9)

[ Upstream commit 3d0358d0ba048c5afb1385787aaec8fa5ad78fcc ]

Chris has discovered and reported that v7_dma_inv_range() may corrupt memory if the address range is not aligned to the cache line size. Since the whole of cache-v7m.S was lifted from cache-v7.S, the same observation applies to v7m_dma_inv_range(). So the fix just mirrors what has been done for v7, with the M-class specifics.

Cc: Chris Cole <chris@sageembedded.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2018-12-21  ARM: 8814/1: mm: improve/fix ARM v7_dma_inv_range() unaligned address handling  (Chris Cole; 1 file, -3/+5)

[ Upstream commit a1208f6a822ac29933e772ef1f637c5d67838da9 ]

This patch addresses possible memory corruption when the v7_dma_inv_range(start_address, end_address) address parameters are not aligned to whole cache lines. This function issues "invalidate" cache management operations to all cache lines from start_address (inclusive) to end_address (exclusive). When start_address and/or end_address are not aligned, the start and/or end cache lines are first issued a "clean & invalidate" operation. The assumption is this is done to ensure that any dirty data at addresses outside the address range (but part of the first or last cache lines) is cleaned/flushed so that data is not lost, which could happen if just an invalidate were issued.

The problem is that these first/last partial cache lines are issued "clean & invalidate" and then "invalidate". This second "invalidate" is not required and, worse, can cause "lost" writes to addresses outside the address range but part of the cache line. If another component writes to its part of the cache line between the "clean & invalidate" and "invalidate" operations, the write can get lost. This fix is to remove the extra "invalidate" operation when unaligned addresses are used.

A kernel module is available that has a stress test to reproduce the issue and a unit test of the updated v7_dma_inv_range(). It can be downloaded from http://ftp.sageembedded.com/outgoing/linux/cache-test-20181107.tgz.

v7_dma_inv_range() is called by dmac_[un]map_area(addr, len, direction) when the direction is DMA_FROM_DEVICE. One can (I believe) successfully argue that DMA from a device to main memory should use buffers aligned to cache line size, because the "clean & invalidate" might overwrite data that the device just wrote using DMA. But if a driver does use unaligned buffers, at least this fix will prevent memory corruption outside the buffer.

Signed-off-by: Chris Cole <chris@sageembedded.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
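A C rendering of the intended flow (the real code is assembly; the line operations here are stand-in stubs for the DCCIMVAC / DCIMVAC maintenance instructions):

    /* Stand-ins for the cache maintenance operations. */
    static void clean_and_inv_line(unsigned long addr) { (void)addr; }
    static void inv_line(unsigned long addr) { (void)addr; }

    static void dma_inv_range_sketch(unsigned long start, unsigned long end,
                                     unsigned long line)
    {
        if (start & (line - 1)) {
            start &= ~(line - 1);
            clean_and_inv_line(start);  /* keep dirty data outside the buffer */
            start += line;              /* do NOT invalidate this line again */
        }
        if (end & (line - 1)) {
            end &= ~(line - 1);
            clean_and_inv_line(end);    /* partial last line: same treatment */
        }
        while (start < end) {           /* whole lines inside the buffer */
            inv_line(start);
            start += line;
        }
    }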
2018-11-21  ARM: 8809/1: proc-v7: fix Thumb annotation of cpu_v7_hvc_switch_mm  (Ard Biesheuvel; 1 file, -1/+1)

commit 6282e916f774e37845c65d1eae9f8c649004f033 upstream.

Due to what appears to be a copy/paste error, the opening ENTRY() of cpu_v7_hvc_switch_mm() lacks a matching ENDPROC(), and instead, the one for cpu_v7_smc_switch_mm() is duplicated. Given that it is ENDPROC() that emits the Thumb annotation, the cpu_v7_hvc_switch_mm() routine will be called in ARM mode on a Thumb2 kernel, resulting in the following splat:

Internal error: Oops - undefined instruction: 0 [#1] SMP THUMB2
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.0-rc1-00030-g4d28ad89189d-dirty #488
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
PC is at cpu_v7_hvc_switch_mm+0x12/0x18
LR is at flush_old_exec+0x31b/0x570
pc : [<c0316efe>]    lr : [<c04117c7>]    psr: 00000013
sp : ee899e50  ip : 00000000  fp : 00000001
r10: eda28f34  r9 : eda31800  r8 : c12470e0
r7 : eda1fc00  r6 : eda53000  r5 : 00000000  r4 : ee88c000
r3 : c0316eec  r2 : 00000001  r1 : eda53000  r0 : 6da6c000
Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none

Note the 'ISA ARM' in the last line. Fix this by using the correct name in ENDPROC().

Cc: <stable@vger.kernel.org>
Fixes: 10115105cb3a ("ARM: spectre-v2: add firmware based hardening")
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-04  ARM: 8799/1: mm: fix pci_ioremap_io() offset check  (Thomas Petazzoni; 1 file, -1/+1)

[ Upstream commit 3a58ac65e2d7969bcdf1b6acb70fa4d12a88e53e ]

IO_SPACE_LIMIT is the ending address of the PCI IO space, i.e. something like 0xfffff (and not 0x100000). Therefore, when offset = 0xf0000 is passed as an argument, this function fails even though offset + SZ_64K fits below IO_SPACE_LIMIT. This makes the last 64 KB chunk of the I/O space unusable, as it cannot be mapped.

This patch fixes that by subtracting 1 from offset + SZ_64K, so that we compare the address of the last byte of the I/O space against IO_SPACE_LIMIT, instead of the address of the first byte of what is after the I/O space.

Fixes: c2794437091a4 ("ARM: Add fixed PCI i/o mapping")
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
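A standalone sketch of the corrected comparison (the function name is hypothetical; the constants follow the values given in the message):

    #include <errno.h>

    #define SZ_64K          0x10000UL
    #define IO_SPACE_LIMIT  0xfffffUL   /* LAST valid I/O address, not a size */

    /* Compare the LAST byte of the 64K window against the last valid
     * address; offset == 0xf0000 is now accepted (0xf0000 + 0xffff == 0xfffff). */
    static int check_io_window(unsigned long offset)
    {
        if (offset + SZ_64K - 1 > IO_SPACE_LIMIT)
            return -EINVAL;
        return 0;
    }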
2018-10-18  ARM: spectre-v2: warn about incorrect context switching functions  (Russell King; 1 file, -0/+15)

Commit c44f366ea7c85e1be27d08f2f0880f4120698125 upstream.

Warn at error level if the context switching function is not what we are expecting. This can happen with big.Little systems, which we currently do not support.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: spectre-v2: add firmware based hardening  (Russell King; 2 files, -0/+81)

Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.

Add firmware based hardening for cores that require more complex handling in firmware.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: spectre-v2: harden user aborts in kernel space  (Russell King; 3 files, -8/+76)

Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream.

In order to prevent aliasing attacks on the branch predictor, invalidate the BTB or instruction cache on CPUs that are known to be affected when taking an abort on an address that is outside of the user task limit:

Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.

If the IBE bit is not set, then there is little point to enabling the workaround.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit  (Russell King; 3 files, -3/+39)

Commit e388b80288aade31135aca23d32eee93dd106795 upstream.

When the branch predictor hardening is enabled, firmware must have set the IBE bit in the auxiliary control register. If this bit has not been set, the Spectre workarounds will not be functional. Add validation that this bit is set, and print a warning at alert level if this is not the case.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: spectre-v2: harden branch predictor on context switches  (Russell King; 3 files, -35/+115)

Commit 06c23f5ffe7ad45b908d0fff604dae08a7e334b9 upstream. Required manual merge of arch/arm/mm/proc-v7.S.

Harden the branch predictor against Spectre v2 attacks on context switches for ARMv7 and later CPUs. We do this by:

Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.

Cortex A57 and Cortex A72 are not addressed in this patch.

Cortex R7 and Cortex R8 are also not addressed as we do not enforce memory protection on these cores.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre  (Russell King; 1 file, -0/+4)

Commit c58d237d0852a57fde9bc2c310972e8f4e3d155d upstream.

Add a Kconfig symbol for CPUs which are vulnerable to the Spectre attacks.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-10-18  ARM: bugs: add support for per-processor bug checking  (Russell King; 1 file, -1/+2)

Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream.

Add support for per-processor bug checking - each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, this will be called whenever a CPU enters the kernel via whichever path (boot CPU, secondary CPU startup, CPU resuming, etc.)

This allows processor specific bug checks to validate that workaround bits are properly enabled by firmware via all entry paths to the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-24  ARM: 8780/1: ftrace: Only set kernel memory back to read-only after boot  (Steven Rostedt (VMware); 1 file, -0/+9)

[ Upstream commit b4c7e2bd2eb4764afe3af9409ff3b1b87116fa30 ]

Dynamic ftrace requires modifying the code segments that are usually set to read-only. To do this, a per arch function is called both before and after the ftrace modifications are performed. The "before" function will set kernel code text to read-write to allow for ftrace to make the modifications, and the "after" function will set the kernel code text back to "read-only" to keep the kernel code text protected.

The issue happens when dynamic ftrace is tested at boot up. The test is done before the kernel code text has been set to read-only. But the "before" and "after" calls are still performed. The "after" call will change the kernel code text to read-only prematurely, and other boot code that expects this code to be read-write will fail.

The solution is to add a variable that is set when the kernel code text is expected to be converted to read-only, and make the ftrace "before" and "after" calls do nothing if that variable is not yet set. This is similar to the x86 solution from commit 162396309745 ("ftrace, x86: make kernel text writable only for conversions").

Link: http://lkml.kernel.org/r/20180620212906.24b7b66e@vmware.local.home

Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
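A sketch of the gating variable, modelled on the x86 precedent the message names; the function names and the late-boot hook are assumptions:

    static int kernel_set_to_readonly;

    void set_kernel_text_rw(void)
    {
        if (!kernel_set_to_readonly)
            return;     /* boot-time ftrace test: nothing to undo yet */
        /* ... make kernel text writable for patching ... */
    }

    void set_kernel_text_ro(void)
    {
        if (!kernel_set_to_readonly)
            return;     /* don't protect text before boot finishes */
        /* ... restore read-only permissions ... */
    }

    void mark_kernel_text_ro_at_boot(void)  /* hypothetical late-boot hook */
    {
        kernel_set_to_readonly = 1;
        /* ... apply the real read-only protection ... */
    }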
2017-11-30  ARM: 8721/1: mm: dump: check hardware RO bit for LPAE  (Philip Derrin; 1 file, -2/+2)

commit 3b0c0c922ff4be275a8beb87ce5657d16f355b54 upstream.

When CONFIG_ARM_LPAE is set, the PMD dump relies on the software read-only bit to determine whether a page is writable. This concealed a bug which left the kernel text section writable (AP2=0) while marked read-only in the software bit.

In a kernel with the AP2 bug, the dump looks like this:

---[ Kernel Mapping ]---
0xc0000000-0xc0200000           2M     RW NX SHD
0xc0200000-0xc0600000           4M     ro x  SHD
0xc0600000-0xc0800000           2M     ro NX SHD
0xc0800000-0xc4800000          64M     RW NX SHD

The fix is to check that the software and hardware bits are both set before displaying "ro". The dump then shows the true perms:

---[ Kernel Mapping ]---
0xc0000000-0xc0200000           2M     RW NX SHD
0xc0200000-0xc0600000           4M     RW x  SHD
0xc0600000-0xc0800000           2M     RW NX SHD
0xc0800000-0xc4800000          64M     RW NX SHD

Fixes: ded947798469 ("ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE")
Signed-off-by: Philip Derrin <philip@cog.systems>
Tested-by: Neil Dick <neil@cog.systems>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE  (Philip Derrin; 1 file, -2/+2)

commit 400eeffaffc7232c0ae1134fe04e14ae4fb48d8c upstream.

Currently, for ARM kernels with CONFIG_ARM_LPAE and CONFIG_STRICT_KERNEL_RWX enabled, the 2MiB pages mapping the kernel code and rodata are writable. They are marked read-only in a software bit (L_PMD_SECT_RDONLY) but the hardware read-only bit is not set (PMD_SECT_AP2).

For user mappings, the logic that propagates the software bit to the hardware bit is in set_pmd_at(); but for the kernel, section_update() writes the PMDs directly, skipping this logic.

The fix is to set PMD_SECT_AP2 for read-only sections in section_update(), at the same time as L_PMD_SECT_RDONLY.

Fixes: 1e3479225acb ("ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error")
Signed-off-by: Philip Derrin <philip@cog.systems>
Reported-by: Neil Dick <neil@cog.systems>
Tested-by: Neil Dick <neil@cog.systems>
Tested-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  License cleanup: add SPDX GPL-2.0 license identifier to files with no license  (Greg Kroah-Hartman; 26 files, -0/+26)

Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - when both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

     SPDX license identifier                              # files
     -----------------------------------------------------|------
     GPL-2.0                                                11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:

     SPDX license identifier                              # files
     -----------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                          930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

     SPDX license identifier                              # files
     -----------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                          270
     GPL-2.0+ WITH Linux-syscall-note                         169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
     LGPL-2.1+ WITH Linux-syscall-note                         15
     GPL-1.0+ WITH Linux-syscall-note                          14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
     LGPL-2.0+ WITH Linux-syscall-note                          4
     LGPL-2.1 WITH Linux-syscall-note                           3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became the concluded license(s).

 - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.

 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).

 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.

 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified.

These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  ARM: 8700/1: nommu: always reserve address 0 away  (Nicolas Pitre; 1 file, -0/+5)

Some nommu systems have RAM at address 0. When vectors are not located there, the very beginning of memory remains available for dynamic allocations. The memblock allocator explicitly skips the first page but the standard page allocator does not, and while it correctly returns a non-null struct page pointer for that page, page_address() gives 0, which gets confused with NULL (out of memory) by callers despite having plenty of free memory left.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
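One plausible shape of the fix, hedged (the function name and placement are assumptions; only the memblock call reflects the described approach):

    #include <linux/init.h>
    #include <linux/memblock.h>

    /* Reserve page 0 with memblock so the page allocator can never hand
     * it out and page_address() == 0 is never a valid allocation. */
    void __init reserve_address_zero(void)
    {
        memblock_reserve(0, 1);     /* size rounds up to one full page */
    }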
2017-09-12  Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds; 4 files, -3/+7)

Pull ARM updates from Russell King:
 "Low priority fixes and updates for ARM:

  - add some missing includes
  - efficiency improvements in system call entry code when tracing is enabled
  - ensure ARMv6+ is always built as EABI
  - export save_stack_trace_tsk()
  - fix fatal signal handling during mm fault
  - build translation table base address register from scratch
  - appropriately align the .data section to a word boundary where we rely on that data being word aligned"

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8691/1: Export save_stack_trace_tsk()
  ARM: 8692/1: mm: abort uaccess retries upon fatal signal
  ARM: 8690/1: lpae: build TTB control register value from scratch in v7_ttb_setup
  ARM: align .data section
  ARM: always enable AEABI for ARMv6+
  ARM: avoid saving and restoring registers unnecessarily
  ARM: move PC value into r9
  ARM: obtain thread info structure later
  ARM: use aliases for registers in entry-common
  ARM: 8689/1: scu: add missing errno include
  ARM: 8688/1: pm: add missing types include
2017-09-09  Merge branches 'fixes' and 'misc' into for-linus  (Russell King; 5 files, -56/+278)
2017-08-29  ARM: 8692/1: mm: abort uaccess retries upon fatal signal  (Mark Rutland; 1 file, -1/+4)

When there's a fatal signal pending, arm's do_page_fault() implementation returns 0. The intent is that we'll return to the faulting userspace instruction, delivering the signal on the way.

However, if we take a fatal signal while fixing up a uaccess, this results in a return to the faulting kernel instruction, which will be instantly retried, resulting in the same fault being taken forever. As the task never reaches userspace, the signal is not delivered, and the task is left unkillable. While the task is stuck in this state, it can inhibit the forward progress of the system.

To avoid this, we must ensure that when a fatal signal is pending, we apply any necessary fixup for a faulting kernel instruction. Thus we will return to an error path, and it is up to that code to make forward progress towards delivering the fatal signal.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
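A self-contained sketch of the decision this describes; the flags mirror (but are not) the kernel's VM_FAULT_RETRY, fatal_signal_pending() and user_mode():

    #include <stdbool.h>

    enum fault_action { CONTINUE, RETURN_TO_USER, APPLY_KERNEL_FIXUP };

    static enum fault_action on_fault(bool retry, bool fatal_signal,
                                      bool from_user)
    {
        if (!(retry && fatal_signal))
            return CONTINUE;        /* normal fault handling proceeds */
        if (from_user)
            return RETURN_TO_USER;  /* signal delivered on the way out */
        /* Kernel-mode fault: run the exception-table fixup so the task
         * can make progress towards delivering the fatal signal. */
        return APPLY_KERNEL_FIXUP;
    }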
2017-08-29  ARM: 8690/1: lpae: build TTB control register value from scratch in v7_ttb_setup  (Hoeun Ryu; 1 file, -2/+1)

Reading TTBCR in the early boot stage might return the value of the previous kernel's configuration, especially in the case of kexec. For example, if the normal kernel (first kernel) ran with a configuration of PHYS_OFFSET <= PAGE_OFFSET and the crash kernel (second kernel) is running with PHYS_OFFSET > PAGE_OFFSET - which can happen because it depends on the area reserved for the crash kernel - then reading TTBCR and ORing the value with other bit fields is risky, because TTBCR does not have a defined reset value.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-08-14  ARM: align .data section  (Russell King; 2 files, -0/+2)

Robert Jarzmik reports that his PXA25x system fails to boot with 4.12, failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:

   0xc0019e20 <+0>:  ldr r1, [pc, #788]
   0xc0019e24 <+4>:  ldr r0, [r1]    <== here

with r1 containing 0xc06f82cd, which is the address of "clean_addr". Examination of the System.map shows:

   c06f22c8 D user_pmd_table
   c06f22cc d __warned.19178
   c06f22cd d clean_addr

indicating that a .data.unlikely section has appeared just before the .data section from proc-xscale.S. According to objdump -h, it appears that our assembly files default their .data alignment to 2**0, which is bad news if the preceding .data section size is not power-of-2 aligned at link time.

Add the appropriate .align directives to all assembly files in arch/arm that are missing them where we require an appropriate alignment.

Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-07-20  ARM: NOMMU: Wire-up default DMA interface  (Vladimir Murzin; 1 file, -9/+36)

The way the default DMA pool is exposed has changed, and now we need to use a dedicated interface to work with it. This patch makes the alloc/release operations use that interface. Since the default DMA pool is no longer handled by generic code, we have to implement our own mmap operation.

Tested-by: Andras Szemzo <sza@esh.hu>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2017-07-20  dma-coherent: introduce interface for default DMA pool  (Vladimir Murzin; 1 file, -1/+1)

Christoph noticed [1] that the default DMA pool in its current form overloads the DMA coherent infrastructure. In reply, Robin suggested [2] splitting the per-device vs. global pool interfaces, so that allocation/release from the default DMA pool is driven by the dma ops implementation.

This patch implements Robin's idea and provides an interface to allocate/release/mmap the default (aka global) DMA pool. To make it clear that the existing *_from_coherent routines work on a per-device pool, rename them to *_from_dev_coherent.

[1] https://lkml.org/lkml/2017/7/7/370
[2] https://lkml.org/lkml/2017/7/7/431

Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andras Szemzo <sza@esh.hu>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2017-07-08  Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds; 1 file, -1/+5)

Pull ARM updates from Russell King:

 - add support for ftrace-with-registers, which is needed for kgraft and other ftrace tools
 - support for mremap() for the sigpage/vDSO so that checkpoint/restore can work
 - add timestamps to each line of the register dump output
 - remove the unused KTHREAD_SIZE from nommu
 - align the ARM bitops APIs with the generic API (using unsigned long pointers rather than void pointers)
 - make the configuration of userspace Thumb support an expert option so that we can default it on, and avoid some hard to debug userspace crashes

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8684/1: NOMMU: Remove unused KTHREAD_SIZE definition
  ARM: 8683/1: ARM32: Support mremap() for sigpage/vDSO
  ARM: 8679/1: bitops: Align prototypes to generic API
  ARM: 8678/1: ftrace: Adds support for CONFIG_DYNAMIC_FTRACE_WITH_REGS
  ARM: make configuration of userspace Thumb support an expert option
  ARM: 8673/1: Fix __show_regs output timestamps