author		Marc Zyngier <maz@kernel.org>	2023-07-03 19:35:48 +0300
committer	Oliver Upton <oliver.upton@linux.dev>	2023-07-11 22:30:14 +0300
commit		970dee09b230895fe2230d2b32ad05a2826818c6 (patch)
tree		82457e664e3463e2dfc9321b054ab8e8f332cf7b /arch/arm64/kvm
parent		fa729bc7c9c8c17a2481358c841ef8ca920485d3 (diff)
download	linux-970dee09b230895fe2230d2b32ad05a2826818c6.tar.xz
KVM: arm64: Disable preemption in kvm_arch_hardware_enable()
Since 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect
kvm_usage_count with kvm_lock"), hotplugging back a CPU whilst
a guest is running results in a number of ugly splats as most
of this code expects to run with preemption disabled, which isn't
the case anymore.
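
As a hedged illustration only (the helper name is hypothetical and the snippet is not taken from this patch), such splats typically come from helpers that manipulate CPU-local state and assert that they cannot be preempted:

#include <linux/preempt.h>
#include <linux/bug.h>

/* Hypothetical helper, sketching the kind of check that now fires. */
static void cpu_local_helper_example(void)
{
	/* Prints a backtrace if the caller left preemption enabled. */
	WARN_ON_ONCE(preemptible());

	/* ... manipulate per-CPU state here ... */
}
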
While the context is preemptible, it isn't migratable, which should
be enough. But we have plenty of preemptible() checks all over
the place, and our per-CPU accessors also disable preemption.
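
A minimal sketch of the pattern the fix relies on, assuming a hypothetical per-CPU variable (it mirrors the patch below rather than quoting it): the raw __this_cpu_*() accessors are only safe while the task cannot move to another CPU, so the accesses are bracketed with preempt_disable()/preempt_enable().

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(int, example_flag);	/* hypothetical per-CPU variable */

static void update_example_flag(void)
{
	preempt_disable();	/* stay on this CPU for both accesses */
	__this_cpu_write(example_flag, __this_cpu_read(example_flag) + 1);
	preempt_enable();
}

The preemption-safe this_cpu_*() variants cover single accesses, but when several per-CPU operations must observe the same CPU, one explicit preempt_disable()/preempt_enable() section is the simpler option, which is what the patch does.
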
Since this affects released versions, let's do the easy fix first,
disabling preemption in kvm_arch_hardware_enable(). We can always
revisit this with a more invasive fix in the future.
Fixes: 0bf50497f03b ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock")
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Tested-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/aeab7562-2d39-e78e-93b1-4711f8cc3fa5@arm.com
Cc: stable@vger.kernel.org # v6.3, v6.4
Link: https://lore.kernel.org/r/20230703163548.1498943-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r--	arch/arm64/kvm/arm.c	13
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0a564d2f30ad..a402ea5511f3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1872,8 +1872,17 @@ static void _kvm_arch_hardware_enable(void *discard)
 
 int kvm_arch_hardware_enable(void)
 {
-	int was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
+	int was_enabled;
 
+	/*
+	 * Most calls to this function are made with migration
+	 * disabled, but not with preemption disabled. The former is
+	 * enough to ensure correctness, but most of the helpers
+	 * expect the latter and will throw a tantrum otherwise.
+	 */
+	preempt_disable();
+
+	was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
 	_kvm_arch_hardware_enable(NULL);
 
 	if (!was_enabled) {
@@ -1881,6 +1890,8 @@ int kvm_arch_hardware_enable(void)
 		kvm_timer_cpu_up();
 	}
 
+	preempt_enable();
+
 	return 0;
 }