path: root/arch
Age | Commit message | Author | Files | Lines
2024-12-05 | um: Always dump trace for specified task in show_stack | Tiwei Bie | 1 | -1/+1
[ Upstream commit 0f659ff362eac69777c4c191b7e5ccb19d76c67d ] Currently, show_stack() always dumps the trace of the current task. However, it should dump the trace of the specified task if one is provided. Otherwise, things like running "echo t > sysrq-trigger" won't work as expected. Fixes: 970e51feaddb ("um: Add support for CONFIG_STACKTRACE") Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Link: https://patch.msgid.link/20241106103933.1132365-1-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | um: ubd: Initialize ubd's disk pointer in ubd_add | Tiwei Bie | 1 | -0/+2
[ Upstream commit df700802abcac3c7c4a4ced099aa42b9a144eea8 ] Currently, the initialization of the disk pointer in the ubd structure is missing. It should be initialized with the allocated gendisk pointer in ubd_add(). Fixes: 32621ad7a7ea ("ubd: remove the ubd_gendisk array") Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com> Link: https://patch.msgid.link/20241104163203.435515-2-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | um: Fix the return value of elf_core_copy_task_fpregs | Tiwei Bie | 1 | -1/+1
[ Upstream commit 865e3845eeaa21e9a62abc1361644e67124f1ec0 ] This function is expected to return a boolean value, which should be true on success and false on failure. Fixes: d1254b12c93e ("uml: fix x86_64 core dump crash") Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Link: https://patch.msgid.link/20240913023302.130300-1-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | um: Fix potential integer overflow during physmem setup | Tiwei Bie | 1 | -3/+3
[ Upstream commit a98b7761f697e590ed5d610d87fa12be66f23419 ] This issue happens when the real map size is greater than LONG_MAX, which can be easily triggered on UML/i386. Fixes: fe205bdd1321 ("um: Print minimum physical memory requirement") Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Link: https://patch.msgid.link/20240916045950.508910-3-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | arm64: dts: mediatek: mt8186-corsola-voltorb: Merge speaker codec nodes | Chen-Yu Tsai | 2 | -20/+9
commit 26ea2459d1724406bf0b342df6b4b1f002d7f8e3 upstream. The Voltorb device uses a speaker codec different from the original Corsola device. When the Voltorb device tree was first added, the new codec was added as a separate node when it should have just replaced the existing one. Merge the two nodes. The only differences are the compatible string and the GPIO line property name. This keeps the device node path for the speaker codec the same across the MT8186 Chromebook line. Also rename the related labels and node names from having rt1019p to speaker codec. Cc: stable@vger.kernel.org # v6.11+ Signed-off-by: Chen-Yu Tsai <wenst@chromium.org> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20241018082113.1297268-1-wenst@chromium.org Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | arm64: tls: Fix context-switching of tpidrro_el0 when kpti is enabled | Will Deacon | 1 | -1/+1
commit 67ab51cbdfee02ef07fb9d7d14cc0bf6cb5a5e5c upstream. Commit 18011eac28c7 ("arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks") tried to optimise the context switching of tpidrro_el0 by eliding the clearing of the register when switching to a native task with kpti enabled, on the erroneous assumption that the kpti trampoline entry code would already have taken care of the write. Although the kpti trampoline does zero the register on entry from a native task, the check in tls_thread_switch() is on the *next* task and so we can end up leaving a stale, non-zero value in the register if the previous task was 32-bit. Drop the broken optimisation and zero tpidrro_el0 unconditionally when switching to a native 64-bit task. Cc: Mark Rutland <mark.rutland@arm.com> Cc: stable@vger.kernel.org Fixes: 18011eac28c7 ("arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks") Signed-off-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20241114095332.23391-1-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | sh: cpuinfo: Fix a warning for CONFIG_CPUMASK_OFFSTACK | Huacai Chen | 1 | -1/+1
commit 3c891f7c6a4e90bb1199497552f24b26e46383bc upstream. When CONFIG_CPUMASK_OFFSTACK and CONFIG_DEBUG_PER_CPU_MAPS are selected, cpu_max_bits_warn() generates a runtime warning similar as below when showing /proc/cpuinfo. Fix this by using nr_cpu_ids (the runtime limit) instead of NR_CPUS to iterate CPUs. [ 3.052463] ------------[ cut here ]------------ [ 3.059679] WARNING: CPU: 3 PID: 1 at include/linux/cpumask.h:108 show_cpuinfo+0x5e8/0x5f0 [ 3.070072] Modules linked in: efivarfs autofs4 [ 3.076257] CPU: 0 PID: 1 Comm: systemd Not tainted 5.19-rc5+ #1052 [ 3.099465] Stack : 9000000100157b08 9000000000f18530 9000000000cf846c 9000000100154000 [ 3.109127] 9000000100157a50 0000000000000000 9000000100157a58 9000000000ef7430 [ 3.118774] 90000001001578e8 0000000000000040 0000000000000020 ffffffffffffffff [ 3.128412] 0000000000aaaaaa 1ab25f00eec96a37 900000010021de80 900000000101c890 [ 3.138056] 0000000000000000 0000000000000000 0000000000000000 0000000000aaaaaa [ 3.147711] ffff8000339dc220 0000000000000001 0000000006ab4000 0000000000000000 [ 3.157364] 900000000101c998 0000000000000004 9000000000ef7430 0000000000000000 [ 3.167012] 0000000000000009 000000000000006c 0000000000000000 0000000000000000 [ 3.176641] 9000000000d3de08 9000000001639390 90000000002086d8 00007ffff0080286 [ 3.186260] 00000000000000b0 0000000000000004 0000000000000000 0000000000071c1c [ 3.195868] ... [ 3.199917] Call Trace: [ 3.203941] [<90000000002086d8>] show_stack+0x38/0x14c [ 3.210666] [<9000000000cf846c>] dump_stack_lvl+0x60/0x88 [ 3.217625] [<900000000023d268>] __warn+0xd0/0x100 [ 3.223958] [<9000000000cf3c90>] warn_slowpath_fmt+0x7c/0xcc [ 3.231150] [<9000000000210220>] show_cpuinfo+0x5e8/0x5f0 [ 3.238080] [<90000000004f578c>] seq_read_iter+0x354/0x4b4 [ 3.245098] [<90000000004c2e90>] new_sync_read+0x17c/0x1c4 [ 3.252114] [<90000000004c5174>] vfs_read+0x138/0x1d0 [ 3.258694] [<90000000004c55f8>] ksys_read+0x70/0x100 [ 3.265265] [<9000000000cfde9c>] do_syscall+0x7c/0x94 [ 3.271820] [<9000000000202fe4>] handle_syscall+0xc4/0x160 [ 3.281824] ---[ end trace 8b484262b4b8c24c ]--- Cc: stable@vger.kernel.org Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | um: vector: Do not use drvdata in release | Tiwei Bie | 1 | -1/+2
commit 51b39d741970742a5c41136241a9c48ac607cf82 upstream. The drvdata is not available in release. Let's just use container_of() to get the vector_device instance. Otherwise, removing a vector device will result in a crash: RIP: 0033:vector_device_release+0xf/0x50 RSP: 00000000e187bc40 EFLAGS: 00010202 RAX: 0000000060028f61 RBX: 00000000600f1baf RCX: 00000000620074e0 RDX: 000000006220b9c0 RSI: 0000000060551c80 RDI: 0000000000000000 RBP: 00000000e187bc50 R08: 00000000603ad594 R09: 00000000e187bb70 R10: 000000000000135a R11: 00000000603ad422 R12: 00000000623ae028 R13: 000000006287a200 R14: 0000000062006d30 R15: 00000000623700b6 Kernel panic - not syncing: Segfault with no mm CPU: 0 UID: 0 PID: 16 Comm: kworker/0:1 Not tainted 6.12.0-rc6-g59b723cd2adb #1 Workqueue: events mc_work_proc Stack: 60028f61 623ae028 e187bc80 60276fcd 6220b9c0 603f5820 623ae028 00000000 e187bcb0 603a2bcd 623ae000 62370010 Call Trace: [<60028f61>] ? vector_device_release+0x0/0x50 [<60276fcd>] device_release+0x70/0xba [<603a2bcd>] kobject_put+0xba/0xe7 [<60277265>] put_device+0x19/0x1c [<60281266>] platform_device_put+0x26/0x29 [<60281e5f>] platform_device_unregister+0x2c/0x2e [<60029422>] vector_remove+0x52/0x58 [<60031316>] ? mconsole_reply+0x0/0x50 [<600310c8>] mconsole_remove+0x160/0x1cc [<603b19f4>] ? strlen+0x0/0x15 [<60066611>] ? __dequeue_entity+0x1a9/0x206 [<600666a7>] ? set_next_entity+0x39/0x63 [<6006666e>] ? set_next_entity+0x0/0x63 [<60038fa6>] ? um_set_signals+0x0/0x43 [<6003070c>] mc_work_proc+0x77/0x91 [<60057664>] process_scheduled_works+0x1b3/0x2dd [<60055f32>] ? assign_work+0x0/0x58 [<60057f0a>] worker_thread+0x1e9/0x293 [<6005406f>] ? set_pf_worker+0x0/0x64 [<6005d65d>] ? arch_local_irq_save+0x0/0x2d [<6005d748>] ? kthread_exit+0x0/0x3a [<60057d21>] ? worker_thread+0x0/0x293 [<6005dbf1>] kthread+0x126/0x12b [<600219c5>] new_thread_handler+0x85/0xb6 Cc: stable@vger.kernel.org Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com> Link: https://patch.msgid.link/20241104163203.435515-5-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | um: net: Do not use drvdata in release | Tiwei Bie | 1 | -1/+1
commit d1db692a9be3b4bd3473b64fcae996afaffe8438 upstream. The drvdata is not available in release. Let's just use container_of() to get the uml_net instance. Otherwise, removing a network device will result in a crash: RIP: 0033:net_device_release+0x10/0x6f RSP: 00000000e20c7c40 EFLAGS: 00010206 RAX: 000000006002e4e7 RBX: 00000000600f1baf RCX: 00000000624074e0 RDX: 0000000062778000 RSI: 0000000060551c80 RDI: 00000000627af028 RBP: 00000000e20c7c50 R08: 00000000603ad594 R09: 00000000e20c7b70 R10: 000000000000135a R11: 00000000603ad422 R12: 0000000000000000 R13: 0000000062c7af00 R14: 0000000062406d60 R15: 00000000627700b6 Kernel panic - not syncing: Segfault with no mm CPU: 0 UID: 0 PID: 29 Comm: kworker/0:2 Not tainted 6.12.0-rc6-g59b723cd2adb #1 Workqueue: events mc_work_proc Stack: 627af028 62c7af00 e20c7c80 60276fcd 62778000 603f5820 627af028 00000000 e20c7cb0 603a2bcd 627af000 62770010 Call Trace: [<60276fcd>] device_release+0x70/0xba [<603a2bcd>] kobject_put+0xba/0xe7 [<60277265>] put_device+0x19/0x1c [<60281266>] platform_device_put+0x26/0x29 [<60281e5f>] platform_device_unregister+0x2c/0x2e [<6002ec9c>] net_remove+0x63/0x69 [<60031316>] ? mconsole_reply+0x0/0x50 [<600310c8>] mconsole_remove+0x160/0x1cc [<60087d40>] ? __remove_hrtimer+0x38/0x74 [<60087ff8>] ? hrtimer_try_to_cancel+0x8c/0x98 [<6006b3cf>] ? dl_server_stop+0x3f/0x48 [<6006b390>] ? dl_server_stop+0x0/0x48 [<600672e8>] ? dequeue_entities+0x327/0x390 [<60038fa6>] ? um_set_signals+0x0/0x43 [<6003070c>] mc_work_proc+0x77/0x91 [<60057664>] process_scheduled_works+0x1b3/0x2dd [<60055f32>] ? assign_work+0x0/0x58 [<60057f0a>] worker_thread+0x1e9/0x293 [<6005406f>] ? set_pf_worker+0x0/0x64 [<6005d65d>] ? arch_local_irq_save+0x0/0x2d [<6005d748>] ? kthread_exit+0x0/0x3a [<60057d21>] ? worker_thread+0x0/0x293 [<6005dbf1>] kthread+0x126/0x12b [<600219c5>] new_thread_handler+0x85/0xb6 Cc: stable@vger.kernel.org Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com> Link: https://patch.msgid.link/20241104163203.435515-4-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | um: ubd: Do not use drvdata in release | Tiwei Bie | 1 | -1/+1
commit 5bee35e5389f450a7eea7318deb9073e9414d3b1 upstream. The drvdata is not available in release. Let's just use container_of() to get the ubd instance. Otherwise, removing a ubd device will result in a crash: RIP: 0033:blk_mq_free_tag_set+0x1f/0xba RSP: 00000000e2083bf0 EFLAGS: 00010246 RAX: 000000006021463a RBX: 0000000000000348 RCX: 0000000062604d00 RDX: 0000000004208060 RSI: 00000000605241a0 RDI: 0000000000000348 RBP: 00000000e2083c10 R08: 0000000062414010 R09: 00000000601603f7 R10: 000000000000133a R11: 000000006038c4bd R12: 0000000000000000 R13: 0000000060213a5c R14: 0000000062405d20 R15: 00000000604f7aa0 Kernel panic - not syncing: Segfault with no mm CPU: 0 PID: 17 Comm: kworker/0:1 Not tainted 6.8.0-rc3-00107-gba3f67c11638 #1 Workqueue: events mc_work_proc Stack: 00000000 604f7ef0 62c5d000 62405d20 e2083c30 6002c776 6002c755 600e47ff e2083c60 6025ffe3 04208060 603d36e0 Call Trace: [<6002c776>] ubd_device_release+0x21/0x55 [<6002c755>] ? ubd_device_release+0x0/0x55 [<600e47ff>] ? kfree+0x0/0x100 [<6025ffe3>] device_release+0x70/0xba [<60381d6a>] kobject_put+0xb5/0xe2 [<6026027b>] put_device+0x19/0x1c [<6026a036>] platform_device_put+0x26/0x29 [<6026ac5a>] platform_device_unregister+0x2c/0x2e [<6002c52e>] ubd_remove+0xb8/0xd6 [<6002bb74>] ? mconsole_reply+0x0/0x50 [<6002b926>] mconsole_remove+0x160/0x1cc [<6002bbbc>] ? mconsole_reply+0x48/0x50 [<6003379c>] ? um_set_signals+0x3b/0x43 [<60061c55>] ? update_min_vruntime+0x14/0x70 [<6006251f>] ? dequeue_task_fair+0x164/0x235 [<600620aa>] ? update_cfs_group+0x0/0x40 [<603a0e77>] ? __schedule+0x0/0x3ed [<60033761>] ? um_set_signals+0x0/0x43 [<6002af6a>] mc_work_proc+0x77/0x91 [<600520b4>] process_scheduled_works+0x1af/0x2c3 [<6004ede3>] ? assign_work+0x0/0x58 [<600527a1>] worker_thread+0x2f7/0x37a [<6004ee3b>] ? set_pf_worker+0x0/0x64 [<6005765d>] ? arch_local_irq_save+0x0/0x2d [<60058e07>] ? kthread_exit+0x0/0x3a [<600524aa>] ? worker_thread+0x0/0x37a [<60058f9f>] kthread+0x130/0x135 [<6002068e>] new_thread_handler+0x85/0xb6 Cc: stable@vger.kernel.org Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com> Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com> Link: https://patch.msgid.link/20241104163203.435515-3-tiwei.btw@antgroup.com Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | x86/CPU/AMD: Terminate the erratum_1386_microcode array | Sebastian Andrzej Siewior | 1 | -0/+1
commit ff6cdc407f4179748f4673c39b0921503199a0ad upstream. The erratum_1386_microcode array requires an empty entry at the end; otherwise x86_match_cpu_with_stepping() will continue to iterate past the end of the array. Add an empty terminating entry to erratum_1386_microcode. Fixes: 29ba89f189528 ("x86/CPU/AMD: Improve the erratum 1386 workaround") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20241126134722.480975-1-bigeasy@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
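As a rough illustration of the fix pattern (a standalone sketch, not the actual arch/x86 code; the table values and function name below are made up), a sentinel-terminated match table and the walk that relies on it look like this:

```c
#include <stdio.h>

/* Simplified stand-in for an x86 family/model/stepping match entry. */
struct cpu_match {
	unsigned int family;
	unsigned int model;
	unsigned int max_stepping;
};

static const struct cpu_match erratum_table[] = {
	{ 0x17, 0x01, 2 },
	{ 0x17, 0x31, 4 },
	{ 0 }			/* terminating empty entry the fix adds */
};

/* The loop stops at the all-zero sentinel; without it, the walk would
 * run past the end of the array. */
static const struct cpu_match *match_cpu(unsigned int family, unsigned int model)
{
	const struct cpu_match *m;

	for (m = erratum_table; m->family; m++)
		if (m->family == family && m->model == model)
			return m;
	return NULL;
}

int main(void)
{
	printf("matched: %s\n", match_cpu(0x17, 0x31) ? "yes" : "no");
	return 0;
}
```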
2024-12-05 | RISC-V: Check scalar unaligned access on all CPUs | Jesse Taube | 2 | -7/+9
commit 8d20a739f17a2de9e269db72330f5655d6545dd4 upstream. Originally, the check_unaligned_access_emulated_all_cpus function only checked the boot hart. This fixes the function to check all harts. Fixes: 71c54b3d169d ("riscv: report misaligned accesses emulation to hwprobe") Signed-off-by: Jesse Taube <jesse@rivosinc.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Reviewed-by: Evan Green <evan@rivosinc.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241017-jesse_unaligned_vector-v10-1-5b33500160f8@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | RISC-V: Scalar unaligned access emulated on hotplug CPUs | Jesse Taube | 1 | -0/+1
commit 9c528b5f7927b857b40f3c46afbc869827af3c94 upstream. The check_unaligned_access_emulated() function should have been called during CPU hotplug to ensure that if all CPUs had emulated unaligned accesses, the new CPU also does. This patch adds the call to check_unaligned_access_emulated() in the hotplug path. Fixes: 55e0bf49a0d0 ("RISC-V: Probe misaligned access speed in parallel") Signed-off-by: Jesse Taube <jesse@rivosinc.com> Reviewed-by: Evan Green <evan@rivosinc.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241017-jesse_unaligned_vector-v10-2-5b33500160f8@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | parisc/ftrace: Fix function graph tracing disablement | Josh Poimboeuf | 1 | -1/+1
commit a5f05a138a8cac035bf9da9b6ed0e532bc7942c8 upstream. Due to an apparent copy-paste bug, the parisc implementation of ftrace_disable_ftrace_graph_caller() doesn't actually do anything. It enables the (already-enabled) static key rather than disabling it. The result is that after function graph tracing has been "disabled", any subsequent (non-graph) function tracing will inadvertently also enable the slow fgraph return address hijacking. Fixes: 98f2926171ae ("parisc/ftrace: use static key to enable/disable function graph tracer") Cc: stable@vger.kernel.org # 5.16+ Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
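A minimal sketch of the enable/disable pairing involved (the flag and function names are illustrative, not the parisc implementation):

```c
#include <stdbool.h>

/* Hypothetical stand-in for the static key that turns on fgraph
 * return-address hijacking. */
static bool fgraph_caller_enabled;

void enable_graph_caller(void)
{
	fgraph_caller_enabled = true;
}

/* The copy-paste bug left the disable path setting the key as well;
 * the fix is for it to actually clear the key. */
void disable_graph_caller(void)
{
	fgraph_caller_enabled = false;
}
```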
2024-12-05 | ARM: dts: omap36xx: declare 1GHz OPP as turbo again | Andreas Kemnade | 1 | -0/+1
commit 96a64e9730c2c76cfa5c510583a0fbf40d62886b upstream. Operating stably at 1 GHz without reduced chip life needs several technologies working together, namely SmartReflex and DVFS. As these dependencies cannot yet be specified directly in the OPP table in the devicetree, use the turbo flag again to mark this OPP as something special that requires an explicit opt-in. So revert commit 5f1bf7ae8481 ("ARM: dts: omap36xx: Remove turbo mode for 1GHz variants"). Practical reasoning: at least the GTA04A5 (DM3730) has become unstable with that OPP enabled. Furthermore, nothing enforces the availability of said technologies, even in the kernel configuration, so let users opt in instead. Cc: Stable@vger.kernel.org Fixes: 5f1bf7ae8481 ("ARM: dts: omap36xx: Remove turbo mode for 1GHz variants") Signed-off-by: Andreas Kemnade <andreas@kemnade.info> Link: https://lore.kernel.org/r/20241018214727.275162-1-andreas@kemnade.info Signed-off-by: Kevin Hilman <khilman@baylibre.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | LoongArch: Explicitly specify code model in Makefile | Huacai Chen | 1 | -2/+2
commit e67e0eb6a98b261caf45048f9eb95fd7609289c0 upstream. LoongArch's toolchain may change the default code model from normal to medium. This is unnecessary for kernel, and generates some relocations which cannot be handled by the module loader. So explicitly specify the code model to normal in Makefile (for Rust 'normal' is 'small'). Cc: stable@vger.kernel.org Tested-by: Haiyong Sun <sunhaiyong@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: vgic-its: Clear DTE when MAPD unmaps a device | Kunkun Jiang | 1 | -2/+4
commit e9649129d33dca561305fc590a7c4ba8c3e5675a upstream. vgic_its_save_device_tables will traverse its->device_list to save DTE for each device. vgic_its_restore_device_tables will traverse each entry of device table and check if it is valid. Restore if valid. But when MAPD unmaps a device, it does not invalidate the corresponding DTE. In the scenario of continuous saves and restores, there may be a situation where a device's DTE is not saved but is restored. This is unreasonable and may cause restore to fail. This patch clears the corresponding DTE when MAPD unmaps a device. Cc: stable@vger.kernel.org Fixes: 57a9a117154c ("KVM: arm64: vgic-its: Device table save/restore") Co-developed-by: Shusen Li <lishusen2@huawei.com> Signed-off-by: Shusen Li <lishusen2@huawei.com> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> [Jing: Update with entry write helper] Signed-off-by: Jing Zhang <jingzhangos@google.com> Link: https://lore.kernel.org/r/20241107214137.428439-5-jingzhangos@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: vgic-its: Add a data length check in vgic_its_save_* | Jing Zhang | 2 | -12/+31
commit 7fe28d7e68f92cc3d0668b8f2fbdf5c303ac3022 upstream. None of the vgic_its_save_*() functions check whether the data length is 8 bytes before calling vgic_write_guest_lock(). This patch adds the check. To prevent the kernel from being blown up when the fault occurs, KVM_BUG_ON() is used, and the other BUG_ON()s are replaced along with it. Cc: stable@vger.kernel.org Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> [Jing: Update with the new entry read/write helpers] Signed-off-by: Jing Zhang <jingzhangos@google.com> Link: https://lore.kernel.org/r/20241107214137.428439-4-jingzhangos@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: Get rid of userspace_irqchip_in_use | Raghavendra Rao Ananta | 3 | -19/+4
commit 38d7aacca09230fdb98a34194fec2af597e8e20d upstream. Improper use of userspace_irqchip_in_use led to syzbot hitting the following WARN_ON() in kvm_timer_update_irq(): WARNING: CPU: 0 PID: 3281 at arch/arm64/kvm/arch_timer.c:459 kvm_timer_update_irq+0x21c/0x394 Call trace: kvm_timer_update_irq+0x21c/0x394 arch/arm64/kvm/arch_timer.c:459 kvm_timer_vcpu_reset+0x158/0x684 arch/arm64/kvm/arch_timer.c:968 kvm_reset_vcpu+0x3b4/0x560 arch/arm64/kvm/reset.c:264 kvm_vcpu_set_target arch/arm64/kvm/arm.c:1553 [inline] kvm_arch_vcpu_ioctl_vcpu_init arch/arm64/kvm/arm.c:1573 [inline] kvm_arch_vcpu_ioctl+0x112c/0x1b3c arch/arm64/kvm/arm.c:1695 kvm_vcpu_ioctl+0x4ec/0xf74 virt/kvm/kvm_main.c:4658 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:907 [inline] __se_sys_ioctl fs/ioctl.c:893 [inline] __arm64_sys_ioctl+0x108/0x184 fs/ioctl.c:893 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline] invoke_syscall+0x78/0x1b8 arch/arm64/kernel/syscall.c:49 el0_svc_common+0xe8/0x1b0 arch/arm64/kernel/syscall.c:132 do_el0_svc+0x40/0x50 arch/arm64/kernel/syscall.c:151 el0_svc+0x54/0x14c arch/arm64/kernel/entry-common.c:712 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:730 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598 The following sequence led to the scenario: - Userspace creates a VM and a vCPU. - The vCPU is initialized with KVM_ARM_VCPU_PMU_V3 during KVM_ARM_VCPU_INIT. - Without any other setup, such as vGIC or vPMU, userspace issues KVM_RUN on the vCPU. Since the vPMU is requested, but not setup, kvm_arm_pmu_v3_enable() fails in kvm_arch_vcpu_run_pid_change(). As a result, KVM_RUN returns after enabling the timer, but before incrementing 'userspace_irqchip_in_use': kvm_arch_vcpu_run_pid_change() ret = kvm_arm_pmu_v3_enable() if (!vcpu->arch.pmu.created) return -EINVAL; if (ret) return ret; [...] if (!irqchip_in_kernel(kvm)) static_branch_inc(&userspace_irqchip_in_use); - Userspace ignores the error and issues KVM_ARM_VCPU_INIT again. Since the timer is already enabled, control moves through the following flow, ultimately hitting the WARN_ON(): kvm_timer_vcpu_reset() if (timer->enabled) kvm_timer_update_irq() if (!userspace_irqchip()) ret = kvm_vgic_inject_irq() ret = vgic_lazy_init() if (unlikely(!vgic_initialized(kvm))) if (kvm->arch.vgic.vgic_model != KVM_DEV_TYPE_ARM_VGIC_V2) return -EBUSY; WARN_ON(ret); Theoretically, since userspace_irqchip_in_use's functionality can be simply replaced by '!irqchip_in_kernel()', get rid of the static key to avoid the mismanagement, which also helps with the syzbot issue. Cc: <stable@vger.kernel.org> Reported-by: syzbot <syzkaller@googlegroups.com> Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: vgic-its: Clear ITE when DISCARD frees an ITE | Kunkun Jiang | 1 | -1/+5
commit 7602ffd1d5e8927fadd5187cb4aed2fdc9c47143 upstream. When DISCARD frees an ITE, it does not invalidate the corresponding ITE. In the scenario of continuous saves and restores, there may be a situation where an ITE is not saved but is restored. This is unreasonable and may cause restore to fail. This patch clears the corresponding ITE when DISCARD frees an ITE. Cc: stable@vger.kernel.org Fixes: eff484e0298d ("KVM: arm64: vgic-its: ITT save and restore") Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> [Jing: Update with entry write helper] Signed-off-by: Jing Zhang <jingzhangos@google.com> Link: https://lore.kernel.org/r/20241107214137.428439-6-jingzhangos@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: Don't retire aborted MMIO instruction | Oliver Upton | 1 | -2/+30
commit e735a5da64420a86be370b216c269b5dd8e830e2 upstream. Returning an abort to the guest for an unsupported MMIO access is a documented feature of the KVM UAPI. Nevertheless, it's clear that this plumbing has seen limited testing, since userspace can trivially cause a WARN in the MMIO return: WARNING: CPU: 0 PID: 30558 at arch/arm64/include/asm/kvm_emulate.h:536 kvm_handle_mmio_return+0x46c/0x5c4 arch/arm64/include/asm/kvm_emulate.h:536 Call trace: kvm_handle_mmio_return+0x46c/0x5c4 arch/arm64/include/asm/kvm_emulate.h:536 kvm_arch_vcpu_ioctl_run+0x98/0x15b4 arch/arm64/kvm/arm.c:1133 kvm_vcpu_ioctl+0x75c/0xa78 virt/kvm/kvm_main.c:4487 __do_sys_ioctl fs/ioctl.c:51 [inline] __se_sys_ioctl fs/ioctl.c:893 [inline] __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:893 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline] invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49 el0_svc_common+0x1e0/0x23c arch/arm64/kernel/syscall.c:132 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151 el0_svc+0x38/0x68 arch/arm64/kernel/entry-common.c:712 el0t_64_sync_handler+0x90/0xfc arch/arm64/kernel/entry-common.c:730 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598 The splat is complaining that KVM is advancing PC while an exception is pending, i.e. that KVM is retiring the MMIO instruction despite a pending synchronous external abort. Womp womp. Fix the glaring UAPI bug by skipping over all the MMIO emulation in case there is a pending synchronous exception. Note that while userspace is capable of pending an asynchronous exception (SError, IRQ, or FIQ), it is still safe to retire the MMIO instruction in this case as (1) they are by definition asynchronous, and (2) KVM relies on hardware support for pending/delivering these exceptions instead of the software state machine for advancing PC. Cc: stable@vger.kernel.org Fixes: da345174ceca ("KVM: arm/arm64: Allow user injection of external data aborts") Reported-by: Alexander Potapenko <glider@google.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241025203106.3529261-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: arm64: Ignore PMCNTENSET_EL0 while checking for overflow status | Raghavendra Rao Ananta | 1 | -1/+0
commit 54bbee190d42166209185d89070c58a343bf514b upstream. DDI0487K.a D13.3.1 describes the PMU overflow condition, which evaluates to true if any counter's global enable (PMCR_EL0.E), overflow flag (PMOVSSET_EL0[n]), and interrupt enable (PMINTENSET_EL1[n]) are all 1. Of note, this does not require a counter to be enabled (i.e. PMCNTENSET_EL0[n] = 1) to generate an overflow. Align kvm_pmu_overflow_status() with the reality of the architecture and stop using PMCNTENSET_EL0 as part of the overflow condition. The bug was discovered while running an SBSA PMU test [*], which only sets PMCR.E, PMOVSSET<0>, PMINTENSET<0>, and expects an overflow interrupt. Cc: stable@vger.kernel.org Fixes: 76d883c4e640 ("arm64: KVM: Add access handler for PMOVSSET and PMOVSCLR register") Link: https://github.com/ARM-software/sbsa-acs/blob/master/test_pool/pmu/operating_system/test_pmu001.c Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> [ oliver: massaged changelog ] Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20241120005230.2335682-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
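The corrected condition can be sketched in a few lines (register layout simplified, names assumed; this is not the KVM code itself): the overflow interrupt depends only on the global enable, the overflow flags, and the interrupt-enable mask, with PMCNTENSET deliberately left out.

```c
#include <stdbool.h>
#include <stdint.h>

#define PMCR_E	(1u << 0)	/* global counter enable bit */

/* Overflow per the architecture: global enable set, and at least one
 * counter with both its overflow and interrupt-enable bits set. */
static bool pmu_overflow_status(uint32_t pmcr, uint64_t pmovsset, uint64_t pmintenset)
{
	return (pmcr & PMCR_E) && (pmovsset & pmintenset);
}
```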
2024-12-05 | KVM: arm64: vgic-v3: Sanitise guest writes to GICR_INVLPIR | Marc Zyngier | 1 | -1/+6
commit d561491ba927cb5634094ff311795e9d618e9b86 upstream. Make sure we filter out non-LPI invalidation when handling writes to GICR_INVLPIR. Fixes: 4645d11f4a553 ("KVM: arm64: vgic-v3: Implement MMIO-based LPI invalidation") Reported-by: Alexander Potapenko <glider@google.com> Tested-by: Alexander Potapenko <glider@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241117165757.247686-2-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | powerpc/pseries: Fix KVM guest detection for disabling hardlockup detector | Gautam Menghani | 1 | -0/+1
commit 44e5d21e6d3fd2a1fed7f0327cf72e99397e2eaf upstream. As per the kernel documentation[1], hardlockup detector should be disabled in KVM guests as it may give false positives. On PPC, hardlockup detector is enabled inside KVM guests because disable_hardlockup_detector() is marked as early_initcall and it relies on kvm_guest static key (is_kvm_guest()) which is initialized later during boot by check_kvm_guest(), which is a core_initcall. check_kvm_guest() is also called in pSeries_smp_probe(), which is called before initcalls, but it is skipped if KVM guest does not have doorbell support or if the guest is launched with SMT=1. Call check_kvm_guest() in disable_hardlockup_detector() so that is_kvm_guest() check goes through fine and hardlockup detector can be disabled inside the KVM guest. [1]: Documentation/admin-guide/sysctl/kernel.rst Fixes: 633c8e9800f3 ("powerpc/pseries: Enable hardlockup watchdog for PowerVM partitions") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241108094839.33084-1-gautam@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE | Sean Christopherson | 1 | -5/+13
commit 2867eb782cf7f64c2ac427596133b6f9c3f64b7a upstream. Apply make_spte()'s optimization to skip trying to unsync shadow pages if and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct MMUs are always writable, i.e. could trigger a false positive and incorrectly lead to KVM creating a SPTE without write-protecting or marking shadow pages unsync. This bug only affects the TDP MMU, as the shadow MMU only overwrites a shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be unsync). Specifically, mmu_set_spte() drops any non-leaf SPTEs *before* calling make_spte(), whereas the TDP MMU can do a direct replacement of a page table with the leaf SPTE. Opportunistically update the comment to explain why skipping the unsync stuff is safe, as opposed to simply saying "it's someone else's problem". Cc: stable@vger.kernel.org Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-5-seanjc@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | KVM: x86: switch hugepage recovery thread to vhost_task | Paolo Bonzini | 3 | -38/+35
commit d96c77bd4eeba469bddbbb14323d2191684da82a upstream. kvm_vm_create_worker_thread() is meant to be used for kthreads that can consume significant amounts of CPU time on behalf of a VM or in response to how the VM behaves (for example how it accesses its memory). Therefore it wants to charge the CPU time consumed by that work to the VM's container. However, because of these threads, cgroups which have kvm instances inside never complete freezing. This can be trivially reproduced: root@test ~# mkdir /sys/fs/cgroup/test root@test ~# echo $$ > /sys/fs/cgroup/test/cgroup.procs root@test ~# qemu-system-x86_64 -nographic -enable-kvm and in another terminal: root@test ~# echo 1 > /sys/fs/cgroup/test/cgroup.freeze root@test ~# cat /sys/fs/cgroup/test/cgroup.events populated 1 frozen 0 The cgroup freezing happens in the signal delivery path but kvm_nx_huge_page_recovery_worker, while joining non-root cgroups, never calls into the signal delivery path and thus never gets frozen. Because the cgroup freezer determines whether a given cgroup is frozen by comparing the number of frozen threads to the total number of threads in the cgroup, the cgroup never becomes frozen and users waiting for the state transition may hang indefinitely. Since the worker kthread is tied to a user process, it's better if it behaves similarly to user tasks as much as possible, including being able to send SIGSTOP and SIGCONT. In fact, vhost_task is all that kvm_vm_create_worker_thread() wanted to be and more: not only it inherits the userspace process's cgroups, it has other niceties like being parented properly in the process tree. Use it instead of the homegrown alternative. Incidentally, the new code is also better behaved when you flip recovery back and forth to disabled and back to enabled. If your recovery period is 1 minute, it will run the next recovery after 1 minute independent of how many times you flipped the parameter. (Commit message based on emails from Tejun). Reported-by: Tejun Heo <tj@kernel.org> Reported-by: Luca Boccassi <bluca@debian.org> Acked-by: Tejun Heo <tj@kernel.org> Tested-by: Luca Boccassi <bluca@debian.org> Cc: stable@vger.kernel.org Reviewed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | crypto: x86/aegis128 - access 32-bit arguments as 32-bit | Eric Biggers | 1 | -14/+15
commit 3b2f2d22fb424e9bebda4dbf6676cbfc7f9f62cd upstream. Fix the AEGIS assembly code to access 'unsigned int' arguments as 32-bit values instead of 64-bit, since the upper bits of the corresponding 64-bit registers are not guaranteed to be zero. Note: there haven't been any reports of this bug actually causing incorrect behavior. Neither gcc nor clang guarantee zero-extension to 64 bits, but zero-extension is likely to happen in practice because most instructions that operate on 32-bit registers zero-extend to 64 bits. Fixes: 1d373d4e8e15 ("crypto: x86 - Add optimized AEGIS implementations") Cc: stable@vger.kernel.org Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | perf/x86/intel/pt: Fix buffer full but size is 0 case | Adrian Hunter | 2 | -3/+10
commit 5b590160d2cf776b304eb054afafea2bd55e3620 upstream. If the trace data buffer becomes full, a truncated flag [T] is reported in PERF_RECORD_AUX. In some cases, the size reported is 0, even though data must have been added to make the buffer full. That happens when the buffer fills up from empty to full before the Intel PT driver has updated the buffer position. Then the driver calculates the new buffer position before calculating the data size. If the old and new positions are the same, the data size is reported as 0, even though it is really the whole buffer size. Fix by detecting when the buffer position is wrapped, and adjust the data size calculation accordingly. Example Use a very small buffer size (8K) and observe the size of truncated [T] data. Before the fix, it is possible to see records of 0 size. Before: $ perf record -m,8K -e intel_pt// uname Linux [ perf record: Woken up 2 times to write data ] [ perf record: Captured and wrote 0.105 MB perf.data ] $ perf script -D --no-itrace | grep AUX | grep -F '[T]' Warning: AUX data lost 2 times out of 3! 5 19462712368111 0x19710 [0x40]: PERF_RECORD_AUX offset: 0 size: 0 flags: 0x1 [T] 5 19462712700046 0x19ba8 [0x40]: PERF_RECORD_AUX offset: 0x170 size: 0xe90 flags: 0x1 [T] After: $ perf record -m,8K -e intel_pt// uname Linux [ perf record: Woken up 3 times to write data ] [ perf record: Captured and wrote 0.040 MB perf.data ] $ perf script -D --no-itrace | grep AUX | grep -F '[T]' Warning: AUX data lost 2 times out of 3! 1 113720802995 0x4948 [0x40]: PERF_RECORD_AUX offset: 0 size: 0x2000 flags: 0x1 [T] 1 113720979812 0x6b10 [0x40]: PERF_RECORD_AUX offset: 0x2000 size: 0x2000 flags: 0x1 [T] Fixes: 52ca9ced3f70 ("perf/x86/intel/pt: Add Intel PT PMU driver") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20241022155920.17511-2-adrian.hunter@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-12-05 | s390/pci: Fix potential double remove of hotplug slot | Niklas Schnelle | 1 | -25/+9
[ Upstream commit c4a585e952ca403a370586d3f16e8331a7564901 ] In commit 6ee600bfbe0f ("s390/pci: remove hotplug slot when releasing the device") the zpci_exit_slot() was moved from zpci_device_reserved() to zpci_release_device() with the intention of keeping the hotplug slot around until the device is actually removed. Now zpci_release_device() is only called once all references are dropped. Since the zPCI subsystem only drops its reference once the device is in the reserved state it follows that zpci_release_device() must only deal with devices in the reserved state. Despite that it contains code to tear down from both configured and standby state. For the standby case this already includes the removal of the hotplug slot so would cause a double removal if a device was ever removed in either configured or standby state. Instead of causing a potential double removal in a case that should never happen explicitly WARN_ON() if a device in non-reserved state is released and get rid of the dead code cases. Fixes: 6ee600bfbe0f ("s390/pci: remove hotplug slot when releasing the device") Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com> Tested-by: Gerd Bayer <gbayer@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | LoongArch: BPF: Sign-extend return values | Tiezhu Yang | 1 | -1/+1
[ Upstream commit 73c359d1d356cf10236ccd358bd55edab33e9424 ] (1) Description of Problem: When testing BPF JIT with the latest compiler toolchains on LoongArch, there exist some strange failed test cases, dmesg shows something like this: # dmesg -t | grep FAIL | head -1 ... ret -3 != -3 (0xfffffffd != 0xfffffffd)FAIL ... (2) Steps to Reproduce: # echo 1 > /proc/sys/net/core/bpf_jit_enable # modprobe test_bpf (3) Additional Info: There are no failed test cases compiled with the lower version of GCC such as 13.3.0, while the problems only appear with higher version of GCC such as 14.2.0. This is because the problems were hidden by the lower version of GCC due to redundant sign extension instructions generated by compiler, but with optimization of higher version of GCC, the sign extension instructions have been removed. (4) Root Cause Analysis: The LoongArch architecture does not expose sub-registers, and hold all 32-bit values in a sign-extended format. While BPF, on the other hand, exposes sub-registers, and use zero-extension (similar to arm64/x86). This has led to some subtle bugs, where a BPF JITted program has not sign-extended the a0 register (return value in LoongArch land), passed the return value up the kernel, for example: | int from_bpf(void); | | long foo(void) | { | return from_bpf(); | } Here, a0 would be 0xffffffff instead of the expected 0xffffffffffffffff. Internally, the LoongArch JIT uses a5 as a dedicated register for BPF return values. That is to say, the LoongArch BPF uses a5 for BPF return values, which are zero-extended, whereas the LoongArch ABI uses a0 which is sign-extended. (5) Final Solution: Keep a5 zero-extended, but explicitly sign-extend a0 (which is used outside BPF land). Because libbpf currently defines the return value of an ebpf program as a 32-bit unsigned integer, just use addi.w to extend bit 31 into bits 63 through 32 of a5 to a0. This is similar to commit 2f1b0d3d7331 ("riscv, bpf: Sign-extend return values"). Fixes: 5dc615520c4d ("LoongArch: Add BPF JIT support") Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Sasha Levin <sashal@kernel.org>
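The zero- versus sign-extension difference can be reproduced in plain C (a standalone sketch; the value mirrors the "ret -3 != -3 (0xfffffffd != 0xfffffffd)" failure quoted above, and the sign-extending case corresponds to what the added addi.w instruction does):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t ret32 = 0xfffffffdu;			/* BPF return value -3 as a u32 */

	uint64_t zero_ext = (uint64_t)ret32;		/* 0x00000000fffffffd */
	int64_t  sign_ext = (int64_t)(int32_t)ret32;	/* 0xfffffffffffffffd == -3 */

	printf("zero-extended: %#llx\n", (unsigned long long)zero_ext);
	printf("sign-extended: %lld\n", (long long)sign_ext);
	return 0;
}
```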
2024-12-05 | LoongArch: Fix build failure with GCC 15 (-std=gnu23) | Tiezhu Yang | 1 | -1/+1
[ Upstream commit 947d5d036c788156f09e83e7f16322ffe8124384 ] Whenever I try to build the kernel with upcoming GCC 15 which defaults to -std=gnu23 I get a build failure: CC arch/loongarch/vdso/vgetcpu.o In file included from ./include/uapi/linux/posix_types.h:5, from ./include/uapi/linux/types.h:14, from ./include/linux/types.h:6, from ./include/linux/kasan-checks.h:5, from ./include/asm-generic/rwonce.h:26, from ./arch/loongarch/include/generated/asm/rwonce.h:1, from ./include/linux/compiler.h:317, from ./include/asm-generic/bug.h:5, from ./arch/loongarch/include/asm/bug.h:60, from ./include/linux/bug.h:5, from ./include/linux/mmdebug.h:5, from ./include/linux/mm.h:6, from ./arch/loongarch/include/asm/vdso.h:10, from arch/loongarch/vdso/vgetcpu.c:6: ./include/linux/stddef.h:11:9: error: expected identifier before 'false' 11 | false = 0, | ^~~~~ ./include/linux/types.h:35:33: error: two or more data types in declaration specifiers 35 | typedef _Bool bool; | ^~~~ ./include/linux/types.h:35:1: warning: useless type name in empty declaration 35 | typedef _Bool bool; | ^~~~~~~ The kernel builds explicitly with -std=gnu11 in top Makefile, but arch/loongarch/vdso does not use KBUILD_CFLAGS from the rest of the kernel, just add -std=gnu11 flag to arch/loongarch/vdso/Makefile. By the way, commit e8c07082a810 ("Kbuild: move to -std=gnu11") did a similar change for arch/arm64/kernel/vdso32/Makefile. Fixes: c6b99bed6b8f ("LoongArch: Add VDSO and VSYSCALL support") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | m68k: coldfire/device.c: only build FEC when HW macros are defined | Antonio Quartulli | 1 | -4/+4
[ Upstream commit 63a24cf8cc330e5a68ebd2e20ae200096974c475 ] When CONFIG_FEC is set (due to COMPILE_TEST) along with CONFIG_M54xx, coldfire/device.c has compile errors due to missing MCFEC_* and MCF_IRQ_FEC_* symbols. Make the whole FEC blocks dependent on having the HW macros defined, rather than on CONFIG_FEC itself. This fix is very similar to commit e6e1e7b19fa1 ("m68k: coldfire/device.c: only build for MCF_EDMA when h/w macros are defined") Fixes: b7ce7f0d0efc ("m68knommu: merge common ColdFire FEC platform setup code") To: Greg Ungerer <gerg@linux-m68k.org> To: Geert Uytterhoeven <geert@linux-m68k.org> Cc: linux-m68k@lists.linux-m68k.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Antonio Quartulli <antonio@mandelbit.com> Signed-off-by: Greg Ungerer <gerg@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | m68k: mcfgpio: Fix incorrect register offset for CONFIG_M5441x | Jean-Michel Hautbois | 1 | -1/+1
[ Upstream commit f212140962c93cd5da43283a18e31681540fc23d ] Fix a typo in the CONFIG_M5441x preprocessor condition, where the GPIO register offset was incorrectly set to 8 instead of 0. This prevented proper GPIO configuration for m5441x targets. Fixes: bea8bcb12da0 ("m68knommu: Add support for the Coldfire m5441x.") Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@yoseli.org> Signed-off-by: Greg Ungerer <gerg@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | x86: fix off-by-one in access_ok() | David Laight | 1 | -2/+2
[ Upstream commit 573f45a9f9a47fed4c7957609689b772121b33d7 ] When the size isn't a small constant, __access_ok() will call valid_user_address() with the address after the last byte of the user buffer. It is valid for a buffer to end with the last valid user address so valid_user_address() must allow accesses to the base of the guard page. [ This introduces an off-by-one in the other direction for the plain non-sized accesses, but since we have that guard region that is a whole page, those checks "allowing" accesses to that guard region don't really matter. The access will fault anyway, whether to the guard page or if the address has been masked to all ones - Linus ] Fixes: 86e6b1547b3d0 ("x86: fix user address masking non-canonical speculation issue") Signed-off-by: David Laight <david.laight@aculab.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
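A minimal sketch of the boundary case (the user-space limit below is hypothetical, not the kernel's actual constant): the check is fed the address one past the last byte, and a buffer may legitimately end at the very top of user space, so that end address, which is the base of the guard page, must be accepted.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical top of the user address range; the real limit is
 * architecture-specific. */
#define USER_END	0x0000800000000000ull

/* Range check given addr + size, i.e. one past the end of the buffer.
 * "end == USER_END" must pass; using "<" here is the off-by-one. */
static bool range_ok(uint64_t addr, uint64_t size)
{
	uint64_t end = addr + size;

	return end >= addr && end <= USER_END;
}
```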
2024-12-05 | arm64: dts: qcom: sc8180x: Add a SoC-specific compatible to cpufreq-hw | Konrad Dybcio | 1 | -1/+1
[ Upstream commit 5df30684415d5a902f23862ab5bbed2a2df7fbf1 ] Comply with bindings guidelines and get rid of errors such as: cpufreq@18323000: compatible: 'oneOf' conditional failed, one must be fixed: ['qcom,cpufreq-hw'] is too short Fixes: 8575f197b077 ("arm64: dts: qcom: Introduce the SC8180x platform") Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | powerpc/kexec: Fix return of uninitialized variable | Zhang Zekun | 1 | -2/+7
[ Upstream commit 83b5a407fbb73e6965adfb4bd0a803724bf87f96 ] of_property_read_u64() can fail and leave the variable uninitialized, which will then be used. Return error if reading the property failed. Fixes: 2e6bd221d96f ("powerpc/kexec_file: Enable early kernel OPAL calls") Signed-off-by: Zhang Zekun <zhangzekun11@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20240930075628.125138-1-zhangzekun11@huawei.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | KVM: PPC: Book3S HV: Fix kmv -> kvm typo | Kajol Jain | 3 | -5/+5
[ Upstream commit 590d2f9347f7974d7954400e5d937672fd844a8b ] Fix typo in the following kvm function names from: kmvhv_counters_tracepoint_regfunc -> kvmhv_counters_tracepoint_regfunc kmvhv_counters_tracepoint_unregfunc -> kvmhv_counters_tracepoint_unregfunc Fixes: e1f288d2f9c6 ("KVM: PPC: Book3S HV nestedv2: Add support for reading VPA counters for pseries guests") Reported-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Reviewed-by: Amit Machhiwal <amachhiw@linux.ibm.com> Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241114085020.1147912-1-kjain@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static | Michal Suchanek | 2 | -13/+4
[ Upstream commit a26c4dbb3d9c1821cb0fc11cb2dbc32d5bf3463b ] These functions are not used outside of sstep.c Fixes: 350779a29f11 ("powerpc: Handle most loads and stores in instruction emulation code") Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241001130356.14664-1-msuchanek@suse.de Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | KVM: PPC: Book3S HV: Avoid returning to nested hypervisor on pending doorbells | Gautam Menghani | 1 | -1/+0
[ Upstream commit 26686db69917399fa30e3b3135360771e90f83ec ] Commit 6398326b9ba1 ("KVM: PPC: Book3S HV P9: Stop using vc->dpdes") dropped the use of vcore->dpdes for msgsndp / SMT emulation. Prior to that commit, the below code at L1 level (see [1] for terminology) was responsible for setting vc->dpdes for the respective L2 vCPU: if (!nested) { kvmppc_core_prepare_to_enter(vcpu); if (vcpu->arch.doorbell_request) { vc->dpdes = 1; smp_wmb(); vcpu->arch.doorbell_request = 0; } L1 then sent vc->dpdes to L0 via kvmhv_save_hv_regs(), and while servicing H_ENTER_NESTED at L0, the below condition at L0 level made sure to abort and go back to L1 if vcpu->arch.doorbell_request = 1 so that L1 sets vc->dpdes as per above if condition: } else if (vcpu->arch.pending_exceptions || vcpu->arch.doorbell_request || xive_interrupt_pending(vcpu)) { vcpu->arch.ret = RESUME_HOST; goto out; } This worked fine since vcpu->arch.doorbell_request was used more like a flag and vc->dpdes was used to pass around the doorbell state. But after Commit 6398326b9ba1 ("KVM: PPC: Book3S HV P9: Stop using vc->dpdes"), vcpu->arch.doorbell_request is the only variable used to pass around doorbell state. With the plumbing for handling doorbells for nested guests updated to use vcpu->arch.doorbell_request over vc->dpdes, the above "else if" stops doorbells from working correctly as L0 aborts execution of L2 and instead goes back to L1. Remove vcpu->arch.doorbell_request from the above "else if" condition as it is no longer needed for L0 to correctly handle the doorbell status while running L2. [1] Terminology 1. L0 : PowerNV linux running with HV privileges 2. L1 : Pseries KVM guest running on top of L0 2. L2 : Nested KVM guest running on top of L1 Fixes: 6398326b9ba1 ("KVM: PPC: Book3S HV P9: Stop using vc->dpdes") Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241109063301.105289-4-gautam@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | KVM: PPC: Book3S HV: Stop using vc->dpdes for nested KVM guests | Gautam Menghani | 2 | -4/+19
[ Upstream commit 0d3c6b28896f9889c8864dab469e0343a0ad1c0c ] commit 6398326b9ba1 ("KVM: PPC: Book3S HV P9: Stop using vc->dpdes") introduced an optimization to use only vcpu->doorbell_request for SMT emulation for Power9 and above guests, but the code for nested guests still relies on the old way of handling doorbells, due to which an L2 guest (see [1]) cannot be booted with XICS with SMT>1. The command to repro this issue is: // To be run in L1 qemu-system-ppc64 \ -drive file=rhel.qcow2,format=qcow2 \ -m 20G \ -smp 8,cores=1,threads=8 \ -cpu host \ -nographic \ -machine pseries,ic-mode=xics -accel kvm Fix the plumbing to utilize vcpu->doorbell_request instead of vcore->dpdes for nested KVM guests on P9 and above. [1] Terminology 1. L0 : PowerNV linux running with HV privileges 2. L1 : Pseries KVM guest running on top of L0 2. L2 : Nested KVM guest running on top of L1 Fixes: 6398326b9ba1 ("KVM: PPC: Book3S HV P9: Stop using vc->dpdes") Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241109063301.105289-3-gautam@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | fadump: reserve param area if below boot_mem_top | Sourabh Jain | 1 | -1/+1
[ Upstream commit fb90dca828b6070709093934c6dec56489a2d91d ] The param area is a memory region where the kernel places additional command-line arguments for fadump kernel. Currently, the param memory area is reserved in fadump kernel if it is above boot_mem_top. However, it should be reserved if it is below boot_mem_top because the fadump kernel already reserves memory from boot_mem_top to the end of DRAM. Currently, there is no impact from not reserving param memory if it is below boot_mem_top, as it is not used after the early boot phase of the fadump kernel. However, if this changes in the future, it could lead to issues in the fadump kernel. Fixes: 3416c9daa6b1 ("powerpc/fadump: pass additional parameters when fadump is active") Acked-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241107055817.489795-2-sourabhjain@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | powerpc/fadump: allocate memory for additional parameters early | Hari Bathini | 3 | -5/+15
[ Upstream commit f4892c68ecc1cf45e41a78820dd2eebccc945b66 ] Memory for passing additional parameters to fadump capture kernel is allocated during subsys_initcall level, using memblock. But as slab is already available by this time, allocation happens via the buddy allocator. This may work for radix MMU but is likely to fail in most cases for hash MMU as hash MMU needs this memory in the first memory block for it to be accessible in real mode in the capture kernel (second boot). So, allocate memory for additional parameters area as soon as MMU mode is obvious. Fixes: 683eab94da75 ("powerpc/fadump: setup additional parameters for dump capture kernel") Reported-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com> Closes: https://lore.kernel.org/lkml/a70e4064-a040-447b-8556-1fd02f19383d@linux.vnet.ibm.com/T/#u Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20241107055817.489795-1-sourabhjain@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | x86/tdx: Dynamically disable SEPT violations from causing #VEs | Kirill A. Shutemov | 2 | -17/+67
[ Upstream commit f65aa0ad79fca4ace921da0701644f020129043d ] Memory access #VEs are hard for Linux to handle in contexts like the entry code or NMIs. But other OSes need them for functionality. There's a static (pre-guest-boot) way for a VMM to choose one or the other. But VMMs don't always know which OS they are booting, so they choose to deliver those #VEs so the "other" OSes will work. That, unfortunately has left us in the lurch and exposed to these hard-to-handle #VEs. The TDX module has introduced a new feature. Even if the static configuration is set to "send nasty #VEs", the kernel can dynamically request that they be disabled. Once they are disabled, access to private memory that is not in the Mapped state in the Secure-EPT (SEPT) will result in an exit to the VMM rather than injecting a #VE. Check if the feature is available and disable SEPT #VE if possible. If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE attribute is no longer reliable. It reflects the initial state of the control for the TD, but it will not be updated if someone (e.g. bootloader) changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to determine if SEPT #VEs are enabled or disabled. [ dhansen: remove 'return' at end of function ] Fixes: 373e715e31bf ("x86/tdx: Panic on bad configs that #VE on "private" memory access") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Kai Huang <kai.huang@intel.com> Link: https://lore.kernel.org/all/20241104103803.195705-4-kirill.shutemov%40linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | x86/tdx: Rename tdx_parse_tdinfo() to tdx_setup() | Kirill A. Shutemov | 1 | -5/+8
[ Upstream commit b064043d9565786b385f85e6436ca5716bbd5552 ] Rename tdx_parse_tdinfo() to tdx_setup() and move setting NOTIFY_ENABLES there. The function will be extended to adjust TD configuration. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Link: https://lore.kernel.org/all/20241104103803.195705-3-kirill.shutemov%40linux.intel.com Stable-dep-of: f65aa0ad79fc ("x86/tdx: Dynamically disable SEPT violations from causing #VEs") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05 | x86/tdx: Introduce wrappers to read and write TD metadata | Kirill A. Shutemov | 2 | -5/+28
[ Upstream commit 5081e8fadb809253c911b349b01d87c5b4e3fec5 ] The TDG_VM_WR TDCALL is used to ask the TDX module to change some TD-specific VM configuration. There is currently only one user in the kernel of this TDCALL leaf. More will be added shortly. Refactor to make way for more users of TDG_VM_WR who will need to modify other TD configuration values. Add a wrapper for the TDG_VM_RD TDCALL that requests TD-specific metadata from the TDX module. There are currently no users for TDG_VM_RD. Mark it as __maybe_unused until the first user appears. This is preparation for enumeration and enabling optional TD features. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Link: https://lore.kernel.org/all/20241104103803.195705-2-kirill.shutemov%40linux.intel.com Stable-dep-of: f65aa0ad79fc ("x86/tdx: Dynamically disable SEPT violations from causing #VEs") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-12-05riscv: kvm: Fix out-of-bounds array accessBjörn Töpel1-4/+7
[ Upstream commit 332fa4a802b16ccb727199da685294f85f9880cb ] In kvm_riscv_vcpu_sbi_init(), entry->ext_idx can contain an out-of-bounds index. This is used as a special marker for the base extensions, which cannot be disabled. However, when traversing the extensions, that special marker is not checked prior to indexing the array. Add an out-of-bounds check to the function. Fixes: 56d8a385b605 ("RISC-V: KVM: Allow some SBI extensions to be disabled by default") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Link: https://lore.kernel.org/r/20241104191503.74725-1-bjorn@kernel.org Signed-off-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
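A hedged sketch of the kind of guard the fix adds while kvm_riscv_vcpu_sbi_init() walks the extension table; the bound and status names used here are illustrative assumptions rather than the exact hunk:

        for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
                const struct kvm_riscv_sbi_extension_entry *entry = &sbi_ext[i];

                /* Base extensions carry a sentinel ext_idx that must not index the array. */
                if (entry->ext_idx >= KVM_RISCV_SBI_EXT_MAX)
                        continue;

                scontext->ext_status[entry->ext_idx] =
                        entry->default_disabled ? KVM_RISCV_SBI_EXT_STATUS_DISABLED :
                                                  KVM_RISCV_SBI_EXT_STATUS_ENABLED;
        }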
2024-12-05RISC-V: KVM: Fix APLIC in_clrip and clripnum write emulationYong-Xuan Wang1-1/+2
[ Upstream commit 60821fb4dd7345e5662094accf0a52845306de8c ] Section "4.7 Precise effects on interrupt-pending bits" of the RISC-V AIA specification defines that: "If the source mode is Level1 or Level0 and the interrupt domain is configured in MSI delivery mode (domaincfg.DM = 1): The pending bit is cleared whenever the rectified input value is low, when the interrupt is forwarded by MSI, or by a relevant write to an in_clrip register or to clripnum." Update aplic_write_pending() to match the spec. Fixes: d8dd9f113e16 ("RISC-V: KVM: Fix APLIC setipnum_le/be write emulation") Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com> Reviewed-by: Vincent Chen <vincent.chen@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> Link: https://lore.kernel.org/r/20241029085542.30541-1-yongxuan.wang@sifive.com Signed-off-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
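A simplified predicate capturing that rule (the source-mode macro names follow the kernel's APLIC definitions, but the helper itself is illustrative and not part of the patch):

        /*
         * May a write to in_clrip/clripnum clear the pending bit of a
         * level-triggered source while the domain is in MSI delivery mode?
         * Per AIA section 4.7, only while the rectified input value is low.
         */
        static bool aplic_level_clrip_allowed(u32 sm, bool raw_input)
        {
                bool rectified = (sm == APLIC_SOURCECFG_SM_LEVEL_LOW) ? !raw_input : raw_input;

                return !rectified;
        }

aplic_write_pending() would consult a check of this shape before deciding whether a clear request may take effect.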
2024-12-05powerpc/pseries: Fix dtl_access_lock to be a rw_semaphoreMichael Ellerman3-10/+10
[ Upstream commit cadae3a45d23aa4f6485938a67cbc47aaaa25e38 ] The dtl_access_lock needs to be a rw_semaphore, a sleeping lock, because the code calls kmalloc() while holding it, which can sleep: # echo 1 > /proc/powerpc/vcpudispatch_stats BUG: sleeping function called from invalid context at include/linux/sched/mm.h:337 in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 199, name: sh preempt_count: 1, expected: 0 3 locks held by sh/199: #0: c00000000a0743f8 (sb_writers#3){.+.+}-{0:0}, at: vfs_write+0x324/0x438 #1: c0000000028c7058 (dtl_enable_mutex){+.+.}-{3:3}, at: vcpudispatch_stats_write+0xd4/0x5f4 #2: c0000000028c70b8 (dtl_access_lock){+.+.}-{2:2}, at: vcpudispatch_stats_write+0x220/0x5f4 CPU: 0 PID: 199 Comm: sh Not tainted 6.10.0-rc4 #152 Hardware name: IBM pSeries (emulated by qemu) POWER9 (raw) 0x4e1202 0xf000005 of:SLOF,HEAD hv:linux,kvm pSeries Call Trace: dump_stack_lvl+0x130/0x148 (unreliable) __might_resched+0x174/0x410 kmem_cache_alloc_noprof+0x340/0x3d0 alloc_dtl_buffers+0x124/0x1ac vcpudispatch_stats_write+0x2a8/0x5f4 proc_reg_write+0xf4/0x150 vfs_write+0xfc/0x438 ksys_write+0x88/0x148 system_call_exception+0x1c4/0x5a0 system_call_common+0xf4/0x258 Fixes: 06220d78f24a ("powerpc/pseries: Introduce rwlock to gatekeep DTLB usage") Tested-by: Kajol Jain <kjain@linux.ibm.com> Reviewed-by: Nysal Jan K.A <nysal@linux.ibm.com> Reviewed-by: Kajol Jain <kjain@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/20240819122401.513203-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
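The conversion itself follows the usual spinning-lock-to-rwsem pattern; a minimal sketch, with example_alloc() standing in for the kmalloc()-backed alloc_dtl_buffers() path (not the actual dtl code):

        static DECLARE_RWSEM(dtl_access_lock);          /* previously a spinning rwlock */

        static int example_alloc(void)                  /* stands in for alloc_dtl_buffers() */
        {
                void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);     /* may sleep */

                if (!buf)
                        return -ENOMEM;
                kfree(buf);
                return 0;
        }

        static int dtl_register_example(void)
        {
                int rc;

                down_write(&dtl_access_lock);           /* sleeping lock: kmalloc() is now legal */
                rc = example_alloc();
                up_write(&dtl_access_lock);

                return rc;
        }

Readers take the shared side with down_read()/up_read().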
2024-12-05powerpc/mm/fault: Fix kfence page fault reportingRitesh Harjani (IBM)1-2/+8
[ Upstream commit 06dbbb4d5f7126b6307ab807cbf04ecfc459b933 ] copy_from_kernel_nofault() can be called when doing a read of /proc/kcore. /proc/kcore can have some unmapped kfence objects which, when read via copy_from_kernel_nofault(), can cause page faults. Since *_nofault() functions define their own fixup table for handling faults, use that instead of asking kfence to handle such faults. Hence we search the exception tables for the nip which generated the fault. If there is an entry then we let the fixup table handler handle the page fault by returning an error from within ___do_page_fault(). This can be easily triggered if someone tries to do dd from /proc/kcore. e.g. dd if=/proc/kcore of=/dev/null bs=1M Some example false positives: =============================== BUG: KFENCE: invalid read in copy_from_kernel_nofault+0x9c/0x1a0 Invalid read at 0xc0000000fdff0000: copy_from_kernel_nofault+0x9c/0x1a0 0xc00000000665f950 read_kcore_iter+0x57c/0xa04 proc_reg_read_iter+0xe4/0x16c vfs_read+0x320/0x3ec ksys_read+0x90/0x154 system_call_exception+0x120/0x310 system_call_vectored_common+0x15c/0x2ec BUG: KFENCE: use-after-free read in copy_from_kernel_nofault+0x9c/0x1a0 Use-after-free read at 0xc0000000fe050000 (in kfence-#2): copy_from_kernel_nofault+0x9c/0x1a0 0xc00000000665f950 read_kcore_iter+0x57c/0xa04 proc_reg_read_iter+0xe4/0x16c vfs_read+0x320/0x3ec ksys_read+0x90/0x154 system_call_exception+0x120/0x310 system_call_vectored_common+0x15c/0x2ec Fixes: 90cbac0e995d ("powerpc: Enable KFENCE for PPC32") Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reported-by: Disha Goel <disgoel@linux.ibm.com> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/a411788081d50e3b136c6270471e35aba3dfafa3.1729271995.git.ritesh.list@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
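A hedged sketch of the shape of the check inside ___do_page_fault(); the exact placement and return value in the real patch may differ:

        /*
         * Hand the fault to KFENCE only when the faulting instruction has no
         * exception-table fixup; *_nofault() accesses have one and should be
         * resolved by that fixup instead of producing a spurious report.
         */
        if (is_kfence_address((void *)address) &&
            !search_exception_tables(instruction_pointer(regs)) &&
            kfence_handle_page_fault(address, is_write, regs))
                return 0;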
2024-12-05powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()Ritesh Harjani (IBM)3-7/+12
[ Upstream commit 05b94cae1c47f94588c3e7096963c1007c4d9c1d ] During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and only gets initialized later during initmem_init(), e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes an issue is: early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init() This causes the CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned. Fix it by moving fadump_cma_init() after initmem_init(), where other such CMA reservations also get called. <stack trace> ============== page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010 flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000 raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000 page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) ------------[ cut here ]------------ kernel BUG at mm/page_alloc.c:778! Call Trace: __free_one_page+0x57c/0x7b0 (unreliable) free_pcppages_bulk+0x1a8/0x2c8 free_unref_page_commit+0x3d4/0x4e4 free_unref_page+0x458/0x6d0 init_cma_reserved_pageblock+0x114/0x198 cma_init_reserved_areas+0x270/0x3e0 do_one_initcall+0x80/0x2f8 kernel_init_freeable+0x33c/0x530 kernel_init+0x34/0x26c ret_from_kernel_user_thread+0x14/0x1c Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment") Suggested-by: David Hildenbrand <david@redhat.com> Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com> Acked-by: Hari Bathini <hbathini@linux.ibm.com> Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://patch.msgid.link/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
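In outline, the ordering change looks like this (heavily abbreviated; not the actual diff):

        void __init setup_arch(char **cmdline_p)
        {
                /* ... early setup elided ... */

                initmem_init();         /* sparse_init() -> set_pageblock_order() */

                /*
                 * Only now is pageblock_order known, so CMA_MIN_ALIGNMENT_BYTES is
                 * meaningful and cma_init_reserved_mem() can enforce alignment for
                 * the fadump reservation.
                 */
                fadump_cma_init();

                /* ... remaining setup elided ... */
        }

The early fadump_reserve_mem() path keeps reserving the memory itself but defers the CMA initialization to this later point.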