author | Sean Christopherson <seanjc@google.com> | 2024-01-10 03:42:39 +0300
---|---|---
committer | Sean Christopherson <seanjc@google.com> | 2024-01-29 19:16:58 +0300
commit | d489ec95658392a000dd26fba511eec1900245b0 (patch) |
tree | f6795331c52740b72c6a60156a43fdfe289624b3 /lib/memory-notifier-error-inject.c |
parent | 41bccc98fb7931d63d03f326a746ac4d429c1dd3 (diff) |
KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls
When handling the end of an mmu_notifier invalidation, WARN if
mn_active_invalidate_count is already 0 and do not decrement it further,
i.e. avoid causing mn_active_invalidate_count to underflow/wrap. In the
worst case scenario, corrupting mn_active_invalidate_count could cause
kvm_swap_active_memslots() to hang indefinitely.
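In code terms, the fix boils down to guarding the decrement. A minimal
sketch of the hardened end() accounting (not the literal diff; the
spinlock that protects the count in virt/kvm/kvm_main.c is elided):

	/*
	 * Refuse to decrement past zero and warn once: an unpaired end()
	 * means a bug elsewhere, and letting the count underflow/wrap
	 * could leave kvm_swap_active_memslots() waiting forever for it
	 * to reach zero.  WARN_ON_ONCE() evaluates to its condition, so
	 * the decrement only happens when the count is non-zero.
	 */
	if (!WARN_ON_ONCE(!kvm->mn_active_invalidate_count))
		kvm->mn_active_invalidate_count--;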
end() calls are *supposed* to be paired with start(), i.e. underflow can
only happen if there is a bug elsewhere in the kernel, but due to the lack
of lockdep assertions in the mmu_notifier helpers, it's all too easy for a
bug to go unnoticed for some time, e.g. see the recently introduced
PAGEMAP_SCAN ioctl().
Ideally, mmu_notifiers would incorporate lockdep assertions, but users of
mmu_notifiers aren't required to hold any one specific lock, i.e. adding
the necessary annotations to make lockdep aware of all locks that are
mutually exclusive with mm_take_all_locks() isn't trivial.
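For reference, such an annotation would likely resemble the fs_reclaim
pattern in mm/page_alloc.c: a dedicated lockdep map acquired in
invalidate_range_start() and released in invalidate_range_end(). The
map name below is hypothetical and not part of this commit:

	/* Hypothetical lockdep map, per the fs_reclaim pattern. */
	static struct lockdep_map mmu_notifier_invalidate_map =
		STATIC_LOCKDEP_MAP_INIT("mmu_notifier_invalidate",
					&mmu_notifier_invalidate_map);

	/*
	 * start() would do lock_map_acquire(&mmu_notifier_invalidate_map)
	 * and end() lock_map_release(), letting lockdep flag unpaired
	 * calls; teaching lockdep which locks are mutually exclusive with
	 * mm_take_all_locks() is the non-trivial part.
	 */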
Link: https://lore.kernel.org/all/000000000000f6d051060c6785bc@google.com
Link: https://lore.kernel.org/r/20240110004239.491290-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>