path: root/arch/arm64/include/asm/smp.h
author    Marc Zyngier <maz@kernel.org>  2023-07-03 19:35:48 +0300
committer Oliver Upton <oliver.upton@linux.dev>  2023-07-11 22:30:14 +0300
commit    970dee09b230895fe2230d2b32ad05a2826818c6 (patch)
tree      82457e664e3463e2dfc9321b054ab8e8f332cf7b /arch/arm64/include/asm/smp.h
parent    fa729bc7c9c8c17a2481358c841ef8ca920485d3 (diff)
download  linux-970dee09b230895fe2230d2b32ad05a2826818c6.tar.xz
KVM: arm64: Disable preemption in kvm_arch_hardware_enable()
Since 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock"), hotplugging back a CPU whilst a guest is running results in a number of ugly splats, as most of this code expects to run with preemption disabled, which isn't the case anymore.

While the context is preemptible, it isn't migratable, which should be enough. But we have plenty of preemptible() checks all over the place, and our per-CPU accessors also disable preemption.

Since this affects released versions, let's do the easy fix first, disabling preemption in kvm_arch_hardware_enable(). We can always revisit this with a more invasive fix in the future.

Fixes: 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock")
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Tested-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/aeab7562-2d39-e78e-93b1-4711f8cc3fa5@arm.com
Cc: stable@vger.kernel.org # v6.3, v6.4
Link: https://lore.kernel.org/r/20230703163548.1498943-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
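A minimal sketch of the shape of the fix, in kernel C. The per-CPU flag hw_enabled and the helper __enable_virt_on_this_cpu() are hypothetical stand-ins for illustration, not the actual arm64 code:

    /*
     * Illustrative sketch: wrap the hardware-enable path in a
     * preempt_disable()/preempt_enable() pair so that preemptible()
     * checks and per-CPU accessors in the callees see preemption
     * disabled, even though this can now be reached from a
     * preemptible (but migration-disabled) context.
     */
    #include <linux/preempt.h>
    #include <linux/percpu.h>

    /* Hypothetical per-CPU state and helper standing in for the real code. */
    static DEFINE_PER_CPU(bool, hw_enabled);

    static void __enable_virt_on_this_cpu(void)
    {
            /* Architecture-specific enable sequence would go here. */
    }

    int kvm_arch_hardware_enable(void)
    {
            bool was_enabled;

            preempt_disable();

            /* __this_cpu_read()/__this_cpu_write() expect preemption off. */
            was_enabled = __this_cpu_read(hw_enabled);
            if (!was_enabled) {
                    __enable_virt_on_this_cpu();
                    __this_cpu_write(hw_enabled, true);
            }

            preempt_enable();

            return 0;
    }

Disabling preemption here is stronger than what correctness strictly requires (migration is already disabled), but it satisfies the existing preemptible() assertions and __this_cpu_*() accessors without a more invasive rework.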
Diffstat (limited to 'arch/arm64/include/asm/smp.h')
0 files changed, 0 insertions, 0 deletions