| author | Sean Christopherson <seanjc@google.com> | 2025-02-04 03:40:36 +0300 |
|---|---|---|
| committer | Sean Christopherson <seanjc@google.com> | 2025-02-14 18:17:40 +0300 |
| commit | 4834eaded91e5c90141540ccfb1af2bd40a4ac80 (patch) | |
| tree | 849145e9f2bf96f1d3b98e754181e39d9d5c3188 | |
| parent | 9fb13ba6b5ff9649f4283724ed75828d8a53cf3b (diff) | |
| download | linux-4834eaded91e5c90141540ccfb1af2bd40a4ac80.tar.xz | |
KVM: x86/mmu: Add infrastructure to allow walking rmaps outside of mmu_lock
Steal another bit from rmap entries (which are word-aligned pointers, i.e.
have 2 free bits on 32-bit KVM, and 3 free bits on 64-bit KVM), and use
the bit to implement a *very* rudimentary per-rmap spinlock. The only
anticipated usage of the lock outside of mmu_lock is for aging gfns, and
collisions between aging and other MMU rmap operations are quite rare:
unless userspace is being silly and aging a tiny range over and over in a
tight loop, the time between contention events when aging an actively
running VM is O(seconds). In short, a more sophisticated locking scheme
shouldn't be necessary.
Note, the lock only protects the rmap structure itself; SPTEs that are
pointed at by a locked rmap can still be modified and zapped by another
task (KVM drops/zaps SPTEs before deleting the rmap entries).
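
To make the scheme concrete, below is a minimal, self-contained sketch of
a one-bit spinlock stashed in the low bit of a word-aligned pointer, using
C11 atomics instead of the kernel's locking primitives. The names
(rmap_head, RMAP_LOCKED, rmap_lock, rmap_unlock) are illustrative, not
taken from the patch itself.

```c
/*
 * Sketch of a single-bit spinlock stored in the low bit of a
 * word-aligned pointer, in the spirit of the per-rmap lock described
 * above. Illustrative only; not the actual KVM implementation.
 */
#include <stdatomic.h>

#define RMAP_LOCKED 0x1UL	/* low bit is free: entries are word-aligned */

struct rmap_head {
	atomic_ulong val;	/* pointer bits | lock bit */
};

/* Spin until the lock bit is acquired; returns the unlocked value. */
static unsigned long rmap_lock(struct rmap_head *head)
{
	unsigned long old;

	for (;;) {
		old = atomic_load_explicit(&head->val, memory_order_relaxed);
		if (old & RMAP_LOCKED)
			continue;	/* held by another task, keep spinning */
		if (atomic_compare_exchange_weak_explicit(&head->val, &old,
							  old | RMAP_LOCKED,
							  memory_order_acquire,
							  memory_order_relaxed))
			return old;
	}
}

/* Publish the (possibly updated) pointer value with the lock bit cleared. */
static void rmap_unlock(struct rmap_head *head, unsigned long new_val)
{
	atomic_store_explicit(&head->val, new_val & ~RMAP_LOCKED,
			      memory_order_release);
}
```

Note the shape of the API in this sketch: rmap_lock() hands back the
unlocked value, and rmap_unlock() takes the value to publish, so a writer
can update the rmap pointer and release the lock with a single release
store, which fits the "rudimentary" scheme described above.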
Co-developed-by: James Houghton <jthoughton@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/r/20250204004038.1680123-10-jthoughton@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>