path: root/arch/arm64/include/asm/kvm_host.h

2025-01-17  Merge branch kvm-arm64/coresight-6.14 into kvmarm-master/next  [Marc Zyngier, 1 file, -0/+11]

    * kvm-arm64/coresight-6.14:

      Trace filtering update from James Clark. From the cover letter:

      "The guest filtering rules from the Perf session are now honored for both
      nVHE and VHE modes. This is done by either writing to TRFCR_EL12 at the
      start of the Perf session and doing nothing else further, or caching the
      guest value and writing it at guest switch for nVHE. In pKVM, trace is
      now disabled for both protected and unprotected guests."

      KVM: arm64: Fix selftests after sysreg field name update
      coresight: Pass guest TRFCR value to KVM
      KVM: arm64: Support trace filtering for guests
      KVM: arm64: coresight: Give TRBE enabled state to KVM
      coresight: trbe: Remove redundant disable call
      arm64/sysreg/tools: Move TRFCR definitions to sysreg
      tools: arm64: Update sysreg.h header files

    Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-01-17  Merge branch kvm-arm64/nv-timers into kvmarm-master/next  [Marc Zyngier, 1 file, -1/+1]

    * kvm-arm64/nv-timers:

      Nested Virt support for the EL2 timers. From the initial cover letter:

      "Here's another batch of NV-related patches, this time bringing in most
      of the timer support for EL2 as well as nested guests.

      The code is pretty convoluted for a bunch of reasons:

      - FEAT_NV2 breaks the timer semantics by redirecting HW controls to
        memory, meaning that a guest could set up a timer and never see it
        firing until the next exit

      - We try hard to reflect the timer state in memory, but that's not
        great.

      - With FEAT_ECV, we can finally correctly emulate the virtual timer,
        but this emulation is pretty costly

      - As a way to make things suck less, we handle timer reads as early as
        possible, and only defer writes to the normal trap handling

      - Finally, some implementations are badly broken, and require some
        hand-holding, irrespective of NV support. So we try and reuse the NV
        infrastructure to make them usable. This could be further optimised,
        but I'm running out of patience for this sort of HW.

      [...]"

      KVM: arm64: nv: Fix doc header layout for timers
      KVM: arm64: nv: Document EL2 timer API
      KVM: arm64: Work around x1e's CNTVOFF_EL2 bogosity
      KVM: arm64: nv: Sanitise CNTHCTL_EL2
      KVM: arm64: nv: Propagate CNTHCTL_EL2.EL1NV{P,V}CT bits
      KVM: arm64: nv: Add trap routing for CNTHCTL_EL2.EL1{NVPCT,NVVCT,TVT,TVCT}
      KVM: arm64: Handle counter access early in non-HYP context
      KVM: arm64: nv: Accelerate EL0 counter accesses from hypervisor context
      KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV in use
      KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers
      KVM: arm64: nv: Publish emulated timer interrupt state in the in-memory state
      KVM: arm64: nv: Sync nested timer state with FEAT_NV2
      KVM: arm64: nv: Add handling of EL2-specific timer registers

    Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-01-12  KVM: arm64: Support trace filtering for guests  [James Clark, 1 file, -0/+2]

    For nVHE, switch the filter value in and out if the Coresight driver asks
    for it. This will support filters for guests when sinks other than TRBE
    are used. For VHE, just write the filter directly to TRFCR_EL1 where trace
    can be used even with TRBE sinks.

    Signed-off-by: James Clark <james.clark@linaro.org>
    Link: https://lore.kernel.org/r/20250106142446.628923-7-james.clark@linaro.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-01-12  KVM: arm64: coresight: Give TRBE enabled state to KVM  [James Clark, 1 file, -0/+9]

    Currently in nVHE, KVM has to check if TRBE is enabled on every guest
    switch even if it was never used. Because it's a debug feature and is more
    likely to not be used than used, give KVM the TRBE buffer status to allow
    a much simpler and faster do-nothing path in the hyp.

    Protected mode now disables trace regardless of TRBE (because
    trfcr_while_in_guest is always 0), which was not previously done. However,
    it continues to flush whenever the buffer is enabled regardless of the
    filter status. This avoids the hypothetical case of a host that had
    disabled the filter but not flushed, which would arise if only doing the
    flush when the filter was enabled.

    Signed-off-by: James Clark <james.clark@linaro.org>
    Link: https://lore.kernel.org/r/20250106142446.628923-6-james.clark@linaro.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2025-01-12  Merge branch kvm-arm64/pkvm-fixed-features-6.14 into kvmarm-master/next  [Marc Zyngier, 1 file, -10/+15]

    * kvm-arm64/pkvm-fixed-features-6.14: (24 commits)

      Complete rework of the pKVM handling of features, catching up with how
      the rest of the code deals with it these days. Patches courtesy of
      Fuad Tabba. From the cover letter:

      "This patch series uses the vm's feature id registers to track the
      supported features, a framework similar to nested virt to set the
      trap values, and removes the need to store cptr_el2 per vcpu in
      favor of setting its value when traps are activated, as VHE mode
      does."

      This branch drags the arm64/for-next/cpufeature branch to solve
      ugly conflicts in -next.

      KVM: arm64: Fix FEAT_MTE in pKVM
      KVM: arm64: Use kvm_vcpu_has_feature() directly for struct kvm
      KVM: arm64: Convert the SVE guest vcpu flag to a vm flag
      KVM: arm64: Remove PtrAuth guest vcpu flag
      KVM: arm64: Fix the value of the CPTR_EL2 RES1 bitmask for nVHE
      KVM: arm64: Refactor kvm_reset_cptr_el2()
      KVM: arm64: Calculate cptr_el2 traps on activating traps
      KVM: arm64: Remove redundant setting of HCR_EL2 trap bit
      KVM: arm64: Remove fixed_config.h header
      KVM: arm64: Rework specifying restricted features for protected VMs
      KVM: arm64: Set protected VM traps based on its view of feature registers
      KVM: arm64: Fix RAS trapping in pKVM for protected VMs
      KVM: arm64: Initialize feature id registers for protected VMs
      KVM: arm64: Use KVM extension checks for allowed protected VM capabilities
      KVM: arm64: Remove KVM_ARM_VCPU_POWER_OFF from protected VMs allowed features in pKVM
      KVM: arm64: Move checking protected vcpu features to a separate function
      KVM: arm64: Group setting traps for protected VMs by control register
      KVM: arm64: Consolidate allowed and restricted VM feature checks
      arm64/sysreg: Get rid of CPACR_ELx SysregFields
      arm64/sysreg: Convert *_EL12 accessors to Mapping
      ...

    Signed-off-by: Marc Zyngier <maz@kernel.org>

    # Conflicts:
    #   arch/arm64/kvm/fpsimd.c
    #   arch/arm64/kvm/hyp/nvhe/pkvm.c

2025-01-12  Merge branch kvm-arm64/pkvm-np-guest into kvmarm-master/next  [Marc Zyngier, 1 file, -0/+4]

    * kvm-arm64/pkvm-np-guest:

      pKVM support for non-protected guests using the standard MM
      infrastructure, courtesy of Quentin Perret. From the cover letter:

      "This series moves the stage-2 page-table management of non-protected
      guests to EL2 when pKVM is enabled. This is only intended as an
      incremental step towards a 'feature-complete' pKVM, there is however a
      lot more that needs to come on top.

      With that series applied, pKVM provides near-parity with standard KVM
      from a functional perspective all while Linux no longer touches the
      stage-2 page-tables itself at EL1. The majority of mm-related KVM
      features work out of the box, including MMU notifiers, dirty logging,
      RO memslots and things of that nature. There are however two gotchas:

      - We don't support mapping devices into guests: this requires
        additional hypervisor support for tracking the 'state' of devices,
        which will come in a later series. No device assignment until then.

      - Stage-2 mappings are forced to page-granularity even when backed by a
        huge page for the sake of simplicity of this series. I'm only aiming
        at functional parity-ish (from userspace's PoV) for now, support for
        HP can be added on top later as a perf improvement."

      KVM: arm64: Plumb the pKVM MMU in KVM
      KVM: arm64: Introduce the EL1 pKVM MMU
      KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
      KVM: arm64: Introduce __pkvm_host_mkyoung_guest()
      KVM: arm64: Introduce __pkvm_host_test_clear_young_guest()
      KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
      KVM: arm64: Introduce __pkvm_host_relax_guest_perms()
      KVM: arm64: Introduce __pkvm_host_unshare_guest()
      KVM: arm64: Introduce __pkvm_host_share_guest()
      KVM: arm64: Introduce __pkvm_vcpu_{load,put}()
      KVM: arm64: Add {get,put}_pkvm_hyp_vm() helpers
      KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
      KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
      KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
      KVM: arm64: Move host page ownership tracking to the hyp vmemmap
      KVM: arm64: Make hyp_page::order a u8
      KVM: arm64: Move enum pkvm_page_state to memory.h
      KVM: arm64: Change the layout of enum pkvm_page_state

    Signed-off-by: Marc Zyngier <maz@kernel.org>

    # Conflicts:
    #   arch/arm64/kvm/arm.c

2025-01-02  KVM: arm64: nv: Sanitise CNTHCTL_EL2  [Marc Zyngier, 1 file, -1/+1]

    Inject some sanity in CNTHCTL_EL2, ensuring that we don't handle more
    than we advertise to the guest.

    Acked-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241217142321.763801-11-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Convert the SVE guest vcpu flag to a vm flag  [Fuad Tabba, 1 file, -6/+12]

    The vcpu flag GUEST_HAS_SVE is per-vcpu, but it is based on what is now a
    per-vm feature. Make the flag per-vm.

    Signed-off-by: Fuad Tabba <tabba@google.com>
    Link: https://lore.kernel.org/r/20241216105057.579031-17-tabba@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Remove PtrAuth guest vcpu flag  [Fuad Tabba, 1 file, -4/+3]

    The vcpu flag GUEST_HAS_PTRAUTH is always associated with the vcpu PtrAuth
    features, which are defined per vm rather than per vcpu. Remove the flag,
    and replace it with checks for the features instead.

    Signed-off-by: Fuad Tabba <tabba@google.com>
    Link: https://lore.kernel.org/r/20241216105057.579031-16-tabba@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Calculate cptr_el2 traps on activating traps  [Fuad Tabba, 1 file, -1/+0]

    Similar to VHE, calculate the value of cptr_el2 from scratch when
    activating traps. This removes the need to store cptr_el2 in every vcpu
    structure. Moreover, some traps, such as whether the guest owns the fp
    registers, need to be set on every vcpu run.

    Reported-by: James Clark <james.clark@linaro.org>
    Fixes: 5294afdbf45a ("KVM: arm64: Exclude FP ownership from kvm_vcpu_arch")
    Signed-off-by: Fuad Tabba <tabba@google.com>
    Link: https://lore.kernel.org/r/20241216105057.579031-13-tabba@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Rework specifying restricted features for protected VMs  [Fuad Tabba, 1 file, -0/+1]

    The existing code didn't properly distinguish between signed and unsigned
    features, and was difficult to read and to maintain. Rework it using the
    same method used in other parts of KVM when handling vcpu features.

    Signed-off-by: Fuad Tabba <tabba@google.com>
    Link: https://lore.kernel.org/r/20241216105057.579031-10-tabba@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Introduce the EL1 pKVM MMU  [Quentin Perret, 1 file, -0/+1]

    Introduce a set of helper functions for manipulating the pKVM guest
    stage-2 page-tables from EL1 using pKVM's HVC interface.

    Each helper has an exact one-to-one correspondence with the traditional
    kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly matching
    prototype. This will ease plumbing later on in mmu.c.

    These callbacks track the gfn->pfn mappings in a simple rb_tree indexed by
    IPA in lieu of a page-table. This rb-tree is kept in sync with pKVM's
    state and is protected by the mmu_lock like a traditional stage-2
    page-table.

    Signed-off-by: Quentin Perret <qperret@google.com>
    Tested-by: Fuad Tabba <tabba@google.com>
    Reviewed-by: Fuad Tabba <tabba@google.com>
    Link: https://lore.kernel.org/r/20241218194059.3670226-18-qperret@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Introduce __pkvm_host_share_guest()  [Quentin Perret, 1 file, -0/+3]

    In preparation for handling guest stage-2 mappings at EL2, introduce a new
    pKVM hypercall that allows the host to share pages with non-protected
    guests.

    Tested-by: Fuad Tabba <tabba@google.com>
    Reviewed-by: Fuad Tabba <tabba@google.com>
    Signed-off-by: Quentin Perret <qperret@google.com>
    Link: https://lore.kernel.org/r/20241218194059.3670226-11-qperret@google.com
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Avoid reading ID_AA64DFR0_EL1 for debug save/restore  [Oliver Upton, 1 file, -0/+4]

    Similar to other per-CPU profiling/debug features we handle, store the
    number of breakpoints/watchpoints in kvm_host_data to avoid reading the ID
    register 4 times on every guest entry/exit. And if you're in the nested
    virt business that's quite a few avoidable exits to the L0 hypervisor.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-18-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Manage software step state at load/put  [Marc Zyngier, 1 file, -17/+7]

    KVM takes over the guest's software step state machine if the VMM is
    debugging the guest, but it does the save/restore fiddling for every guest
    entry. Note that the only constraint on host usage of software step is
    that the guest's configuration remains visible to userspace via the
    ONE_REG ioctls. So, we can cut down on the amount of fiddling by doing
    this at load/put instead.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-16-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Don't hijack guest context MDSCR_EL1  [Oliver Upton, 1 file, -1/+1]

    Stealing MDSCR_EL1 in the guest's kvm_cpu_context for external debugging
    is rather gross. Just add a field for this instead and let the context
    switch code pick the correct one based on the debug owner.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-15-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Compute MDCR_EL2 at vcpu_load()  [Oliver Upton, 1 file, -1/+0]

    KVM has picked up several hacks to cope with vcpu->arch.mdcr_el2 needing
    to be prepared before vcpu_load(), which is when it gets programmed into
    hardware on VHE. Now that the flows for reprogramming MDCR_EL2 have been
    simplified, move that computation to vcpu_load().

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-14-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Reload vCPU for accesses to OSLAR_EL1  [Oliver Upton, 1 file, -0/+1]

    KVM takes ownership of the debug regs if the guest enables the OS lock, as
    it needs to use MDSCR_EL1 to mask debug exceptions. Just reload the vCPU
    if the guest toggles the OS lock, relying on kvm_vcpu_load_debug() to
    update the debug owner and get the right trap configuration in place.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-13-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Use debug_owner to track if debug regs need save/restore  [Oliver Upton, 1 file, -2/+2]

    Use the debug owner to determine if the debug regs are in use instead of
    keeping around the DEBUG_DIRTY flag. Debug registers are now
    saved/restored after the first trap, regardless of whether it was a read
    or a write. This also shifts the point at which KVM becomes lazy to
    vcpu_put() rather than the next exception taken from the guest.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-12-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Remove vestiges of debug_ptr  [Oliver Upton, 1 file, -5/+0]

    Delete the remnants of debug_ptr now that debug registers are selected
    based on the debug owner instead.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-11-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Select debug state to save/restore based on debug owner  [Oliver Upton, 1 file, -0/+2]

    Select the set of debug registers to use based on the owner rather than
    relying on debug_ptr. Besides the code cleanup, this allows us to
    eliminate a couple of instances of kern_hyp_va() as well.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-9-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Evaluate debug owner at vcpu_load()  [Oliver Upton, 1 file, -0/+11]

    In preparation for tossing the debug_ptr mess, introduce an enumeration to
    track the ownership of the debug registers while in the guest. Update the
    owner at vcpu_load() based on whether the host needs to steal the guest's
    debug context or if breakpoints/watchpoints are actively in use.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-7-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Move host SME/SVE tracking flags to host data  [Oliver Upton, 1 file, -12/+10]

    The SME/SVE state tracking flags have no business in the vCPU. Move them
    to kvm_host_data.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-5-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Track presence of SPE/TRBE in kvm_host_data instead of vCPU  [Oliver Upton, 1 file, -8/+11]

    Add flags to kvm_host_data to track if SPE/TRBE is present + programmable
    on a per-CPU basis. Set the flags up at init rather than vcpu_load() as
    the programmability of these buffers is unlikely to change.

    Reviewed-by: James Clark <james.clark@linaro.org>
    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-4-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-12-20  KVM: arm64: Get rid of __kvm_get_mdcr_el2() and related warts  [Oliver Upton, 1 file, -2/+5]

    KVM caches MDCR_EL2 on a per-CPU basis in order to preserve the
    configuration of MDCR_EL2.HPMN while running a guest. This is a bit gross,
    since we're relying on some baked configuration rather than the hardware
    definition of implemented counters.

    Discover the number of implemented counters by reading PMCR_EL0.N instead.
    This works because:

     - In VHE the kernel runs at EL2, and N always returns the number of
       counters implemented in hardware

     - In {n,h}VHE, the EL2 setup code programs MDCR_EL2.HPMN with the EL2
       view of PMCR_EL0.N for the host

    Lastly, avoid traps under nested virtualization by saving PMCR_EL0.N in
    host data.

    Tested-by: James Clark <james.clark@linaro.org>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241219224116.3941496-3-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

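    As a side note on PMCR_EL0.N mentioned above: in the Arm architecture this
    field occupies bits [15:11] of PMCR_EL0. The snippet below is a minimal,
    standalone sketch of extracting it from a raw register value; the macro
    names and the sample value are made up for illustration and are not the
    kernel's code.

        #include <stdint.h>
        #include <stdio.h>

        /* Extract PMCR_EL0.N (number of implemented event counters),
         * assuming the architectural field placement at bits [15:11]. */
        #define PMCR_EL0_N_SHIFT 11
        #define PMCR_EL0_N_MASK  0x1fULL

        static inline unsigned int pmcr_el0_n(uint64_t pmcr)
        {
                return (pmcr >> PMCR_EL0_N_SHIFT) & PMCR_EL0_N_MASK;
        }

        int main(void)
        {
                uint64_t pmcr = 0x3040;   /* example raw value with N = 6 */

                printf("implemented counters: %u\n", pmcr_el0_n(pmcr));
                return 0;
        }
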
2024-11-14  Merge tag 'kvmarm-6.13' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  [Paolo Bonzini, 1 file, -9/+35]

    KVM/arm64 changes for 6.13, part #1

     - Support for stage-1 permission indirection (FEAT_S1PIE) and permission
       overlays (FEAT_S1POE), including nested virt + the emulated page table
       walker

     - Introduce PSCI SYSTEM_OFF2 support to KVM + client driver. This call
       was introduced in PSCIv1.3 as a mechanism to request hibernation,
       similar to the S4 state in ACPI

     - Explicitly trap + hide FEAT_MPAM (QoS controls) from KVM guests. As
       part of it, introduce trivial initialization of the host's MPAM
       context so KVM can use the corresponding traps

     - PMU support under nested virtualization, honoring the guest
       hypervisor's trap configuration and event filtering when running a
       nested guest

     - Fixes to vgic ITS serialization where stale device/interrupt table
       entries are not zeroed when the mapping is invalidated by the VM

     - Avoid emulated MMIO completion if userspace has requested synchronous
       external abort injection

     - Various fixes and cleanups affecting pKVM, vCPU initialization, and
       selftests

2024-11-11  Merge branch kvm-arm64/nv-pmu into kvmarm/next  [Oliver Upton, 1 file, -1/+1]

    * kvm-arm64/nv-pmu:

      Support for vEL2 PMU controls

      Align the vEL2 PMU support with the current state of non-nested KVM,
      including:

      - Trap routing, with the annoying complication of EL2 traps that apply
        in Host EL0

      - PMU emulation, using the correct configuration bits depending on
        whether a counter falls in the hypervisor or guest range of PMCs

      - Perf event swizzling across nested boundaries, as the event filtering
        needs to be remapped to cope with vEL2

      KVM: arm64: nv: Reprogram PMU events affected by nested transition
      KVM: arm64: nv: Apply EL2 event filtering when in hyp context
      KVM: arm64: nv: Honor MDCR_EL2.HLP
      KVM: arm64: nv: Honor MDCR_EL2.HPME
      KVM: arm64: Add helpers to determine if PMC counts at a given EL
      KVM: arm64: nv: Adjust range of accessible PMCs according to HPMN
      KVM: arm64: Rename kvm_pmu_valid_counter_mask()
      KVM: arm64: nv: Advertise support for FEAT_HPMN0
      KVM: arm64: nv: Describe trap behaviour of MDCR_EL2.HPMN
      KVM: arm64: nv: Honor MDCR_EL2.{TPM, TPMCR} in Host EL0
      KVM: arm64: nv: Reinject traps that take effect in Host EL0
      KVM: arm64: nv: Rename BEHAVE_FORWARD_ANY
      KVM: arm64: nv: Allow coarse-grained trap combos to use complex traps
      KVM: arm64: Describe RES0/RES1 bits of MDCR_EL2
      arm64: sysreg: Add new definitions for ID_AA64DFR0_EL1
      arm64: sysreg: Migrate MDCR_EL2 definition to table
      arm64: sysreg: Describe ID_AA64DFR2_EL1 fields

    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-11-11  Merge branch kvm-arm64/misc into kvmarm/next  [Oliver Upton, 1 file, -2/+0]

    * kvm-arm64/misc:

      Miscellaneous updates

      - Drop useless check against vgic state in ICC_CTLR_EL1.SEIS read
        emulation

      - Fix trap configuration for pKVM

      - Close the door on initialization bugs surrounding userspace irqchip
        static key by removing it.

      KVM: selftests: Don't bother deleting memslots in KVM when freeing VMs
      KVM: arm64: Get rid of userspace_irqchip_in_use
      KVM: arm64: Initialize trap register values in hyp in pKVM
      KVM: arm64: Initialize the hypervisor's VM state at EL2
      KVM: arm64: Refactor kvm_vcpu_enable_ptrauth() for hyp use
      KVM: arm64: Move pkvm_vcpu_init_traps() to init_pkvm_hyp_vcpu()
      KVM: arm64: Don't map 'kvm_vgic_global_state' at EL2 with pKVM
      KVM: arm64: Just advertise SEIS as 0 when emulating ICC_CTLR_EL1

    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-11-01  KVM: arm64: Get rid of userspace_irqchip_in_use  [Raghavendra Rao Ananta, 1 file, -2/+0]

    Improper use of userspace_irqchip_in_use led to syzbot hitting the
    following WARN_ON() in kvm_timer_update_irq():

        WARNING: CPU: 0 PID: 3281 at arch/arm64/kvm/arch_timer.c:459 kvm_timer_update_irq+0x21c/0x394
        Call trace:
          kvm_timer_update_irq+0x21c/0x394 arch/arm64/kvm/arch_timer.c:459
          kvm_timer_vcpu_reset+0x158/0x684 arch/arm64/kvm/arch_timer.c:968
          kvm_reset_vcpu+0x3b4/0x560 arch/arm64/kvm/reset.c:264
          kvm_vcpu_set_target arch/arm64/kvm/arm.c:1553 [inline]
          kvm_arch_vcpu_ioctl_vcpu_init arch/arm64/kvm/arm.c:1573 [inline]
          kvm_arch_vcpu_ioctl+0x112c/0x1b3c arch/arm64/kvm/arm.c:1695
          kvm_vcpu_ioctl+0x4ec/0xf74 virt/kvm/kvm_main.c:4658
          vfs_ioctl fs/ioctl.c:51 [inline]
          __do_sys_ioctl fs/ioctl.c:907 [inline]
          __se_sys_ioctl fs/ioctl.c:893 [inline]
          __arm64_sys_ioctl+0x108/0x184 fs/ioctl.c:893
          __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
          invoke_syscall+0x78/0x1b8 arch/arm64/kernel/syscall.c:49
          el0_svc_common+0xe8/0x1b0 arch/arm64/kernel/syscall.c:132
          do_el0_svc+0x40/0x50 arch/arm64/kernel/syscall.c:151
          el0_svc+0x54/0x14c arch/arm64/kernel/entry-common.c:712
          el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:730
          el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598

    The following sequence led to the scenario:

     - Userspace creates a VM and a vCPU.

     - The vCPU is initialized with KVM_ARM_VCPU_PMU_V3 during
       KVM_ARM_VCPU_INIT.

     - Without any other setup, such as vGIC or vPMU, userspace issues
       KVM_RUN on the vCPU. Since the vPMU is requested, but not setup,
       kvm_arm_pmu_v3_enable() fails in kvm_arch_vcpu_run_pid_change(). As a
       result, KVM_RUN returns after enabling the timer, but before
       incrementing 'userspace_irqchip_in_use':

           kvm_arch_vcpu_run_pid_change()
               ret = kvm_arm_pmu_v3_enable()
                   if (!vcpu->arch.pmu.created)
                       return -EINVAL;
               if (ret)
                   return ret;
               [...]
               if (!irqchip_in_kernel(kvm))
                   static_branch_inc(&userspace_irqchip_in_use);

     - Userspace ignores the error and issues KVM_ARM_VCPU_INIT again. Since
       the timer is already enabled, control moves through the following
       flow, ultimately hitting the WARN_ON():

           kvm_timer_vcpu_reset()
               if (timer->enabled)
                   kvm_timer_update_irq()
                       if (!userspace_irqchip())
                           ret = kvm_vgic_inject_irq()
                               ret = vgic_lazy_init()
                                   if (unlikely(!vgic_initialized(kvm)))
                                       if (kvm->arch.vgic.vgic_model != KVM_DEV_TYPE_ARM_VGIC_V2)
                                           return -EBUSY;
                           WARN_ON(ret);

    Theoretically, since userspace_irqchip_in_use's functionality can be
    simply replaced by '!irqchip_in_kernel()', get rid of the static key to
    avoid the mismanagement, which also helps with the syzbot issue.

    Cc: <stable@vger.kernel.org>
    Reported-by: syzbot <syzkaller@googlegroups.com>
    Suggested-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Describe RES0/RES1 bits of MDCR_EL2  [Oliver Upton, 1 file, -1/+1]

    Add support for sanitising MDCR_EL2 and describe the RES0/RES1 bits
    according to the feature set exposed to the VM.

    Reviewed-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241025182354.3364124-6-oliver.upton@linux.dev
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Add basic support for POR_EL2  [Marc Zyngier, 1 file, -0/+3]

    S1POE support implies support for POR_EL2, which we provide by:

     - adding it to the vcpu_sysreg enum

     - advertising it as mapped to its EL1 counterpart in
       get_el2_to_el1_mapping

     - wiring it in the sys_reg_desc table with the correct visibility

     - handling POR_EL1 in __vcpu_{read,write}_sys_reg_from_cpu()

    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-32-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Add kvm_has_s1poe() helper  [Marc Zyngier, 1 file, -0/+3]

    Just like we have kvm_has_s1pie(), add its S1POE counterpart, making the
    code slightly more readable.

    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-31-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Hide S1PIE registers from userspace when disabled for guests  [Mark Brown, 1 file, -0/+3]

    When the guest does not support S1PIE we should not allow any access to
    the system registers it adds in order to ensure that we do not create
    spurious issues with guest migration. Add a visibility operation for these
    registers.

    Fixes: 86f9de9db178 ("KVM: arm64: Save/restore PIE registers")
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240822-kvm-arm64-hide-pie-regs-v2-3-376624fa829c@kernel.org
    [maz: simplify by using __el2_visibility(), kvm_has_s1pie() throughout]
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-26-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Hide TCR2_EL1 from userspace when disabled for guests  [Mark Brown, 1 file, -0/+3]

    When the guest does not support FEAT_TCR2 we should not allow any access
    to it in order to ensure that we do not create spurious issues with guest
    migration. Add a visibility operation for it.

    Fixes: fbff56068232 ("KVM: arm64: Save/restore TCR2_EL1")
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240822-kvm-arm64-hide-pie-regs-v2-2-376624fa829c@kernel.org
    [maz: simplify by using __el2_visibility(), kvm_has_tcr2() throughout]
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-25-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Add PIR{,E0}_EL2 to the sysreg arrays  [Marc Zyngier, 1 file, -0/+2]

    Add the FEAT_S1PIE EL2 registers to the per-vcpu sysreg register array.

    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-15-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: arm64: Extend masking facility to arbitrary registers  [Marc Zyngier, 1 file, -6/+13]

    We currently only use the masking (RES0/RES1) facility for VNCR registers,
    as they are memory-based and thus easy to sanitise. But we could apply the
    same thing to other registers if we:

     - split the sanitisation from __VNCR_START__

     - apply the sanitisation when reading from a HW register

    This involves a new "marker" in the vcpu_sysreg enum, which defines the
    point at which the sanitisation applies (the VNCR registers being of
    course after this marker).

    While we are at it, rename kvm_vcpu_sanitise_vncr_reg() to
    kvm_vcpu_apply_reg_masks(), which is vaguely more explicit, and harden
    set_sysreg_masks() against setting masks for random registers...

    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Reviewed-by: Joey Gouly <joey.gouly@arm.com>
    Link: https://lore.kernel.org/r/20241023145345.1613824-10-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

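    The masking scheme described above boils down to clearing the RES0 bits
    and setting the RES1 bits of a register value. Below is a minimal,
    hypothetical sketch of that operation; the function name and the mask
    values are illustrative, not KVM's actual per-register masks.

        #include <stdint.h>
        #include <stdio.h>

        /* Sanitise a register value: RES0 bits must read as zero,
         * RES1 bits must read as one. */
        static inline uint64_t apply_reg_masks(uint64_t val, uint64_t res0,
                                               uint64_t res1)
        {
                return (val & ~res0) | res1;
        }

        int main(void)
        {
                /* Hypothetical register: bits [63:32] RES0, bit 0 RES1. */
                uint64_t res0 = 0xffffffff00000000ULL;
                uint64_t res1 = 0x1ULL;

                printf("0x%llx\n", (unsigned long long)
                       apply_reg_masks(0xdeadbeefcafef00cULL, res0, res1));
                return 0;
        }
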
2024-10-31  KVM: arm64: Correctly access TCR2_EL1, PIR_EL1, PIRE0_EL1 with VHE  [Marc Zyngier, 1 file, -0/+6]

    For code that accesses any of the guest registers for emulation purposes,
    it is crucial to know where the most up-to-date data is. While this is
    pretty clear for nVHE (memory is the sole repository), things are a lot
    muddier for VHE, as depending on the SYSREGS_ON_CPU flag, registers can
    either be loaded on the HW or be in memory. Even worse with NV, where the
    loaded state is by definition partial.

    For these reasons, KVM offers the vcpu_read_sys_reg() and
    vcpu_write_sys_reg() primitives that always do the right thing. However,
    these primitives must know what register to access, and this is the role
    of the __vcpu_read_sys_reg_from_cpu() and __vcpu_write_sys_reg_to_cpu()
    helpers.

    As it turns out, TCR2_EL1, PIR_EL1 and PIRE0_EL1 are not described in the
    latter helpers, meaning that the AT code cannot use them to emulate S1PIE.

    Add the three registers to the (long) list.

    Fixes: 86f9de9db178 ("KVM: arm64: Save/restore PIE registers")
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Cc: Joey Gouly <joey.gouly@arm.com>
    Reviewed-by: Joey Gouly <joey.gouly@arm.com>
    Link: https://lore.kernel.org/r/20241023145345.1613824-9-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

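    A toy model of the dispatch described above (all names are hypothetical;
    this is not KVM's code): a read has to come from the CPU when the vcpu's
    sysregs are loaded there, and from the in-memory copy otherwise, which is
    why a register missing from the mapping helpers can return stale data.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        enum toy_reg { TOY_TCR2_EL1, TOY_PIR_EL1, NR_TOY_REGS };

        struct toy_vcpu {
                bool sysregs_loaded_on_cpu;
                uint64_t mem_copy[NR_TOY_REGS];
                uint64_t hw_copy[NR_TOY_REGS]; /* stands in for the EL1 registers */
        };

        /* Return the most up-to-date value, mirroring the idea behind
         * vcpu_read_sys_reg(): CPU copy if loaded, memory copy otherwise. */
        static uint64_t toy_read_sys_reg(const struct toy_vcpu *v, enum toy_reg r)
        {
                if (v->sysregs_loaded_on_cpu)
                        return v->hw_copy[r];
                return v->mem_copy[r];
        }

        int main(void)
        {
                struct toy_vcpu v = {
                        .sysregs_loaded_on_cpu = true,
                        .mem_copy = { 0x1, 0x2 },
                        .hw_copy  = { 0x10, 0x20 },
                };

                printf("TCR2_EL1 as seen by emulation: 0x%llx\n",
                       (unsigned long long)toy_read_sys_reg(&v, TOY_TCR2_EL1));
                return 0;
        }
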
2024-10-31  KVM: arm64: Add TCR2_EL2 to the sysreg arrays  [Marc Zyngier, 1 file, -0/+1]

    Add the TCR2_EL2 register to the per-vcpu sysreg register array, the
    sysreg descriptor array, and advertise it as mapped to TCR2_EL1 for NV
    purposes. Access to this register is conditional based on
    ID_AA64MMFR3_EL1.TCRX being advertised.

    Reviewed-by: Joey Gouly <joey.gouly@arm.com>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20241023145345.1613824-12-maz@kernel.org
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2024-10-31  KVM: Protect vCPU's "last run PID" with rwlock, not RCU  [Sean Christopherson, 1 file, -1/+1]

    To avoid jitter on KVM_RUN due to synchronize_rcu(), use a rwlock instead
    of RCU to protect vcpu->pid, a.k.a. the pid of the task last used to run a
    vCPU. When userspace is doing M:N scheduling of tasks to vCPUs, e.g. to
    run SEV migration helper vCPUs during post-copy, the synchronize_rcu()
    needed to change the PID associated with the vCPU can stall for hundreds
    of milliseconds, which is problematic for latency sensitive post-copy
    operations.

    In the directed yield path, do not acquire the lock if it's contended,
    i.e. if the associated PID is changing, as that means the vCPU's task is
    already running.

    Reported-by: Steve Rutherford <srutherford@google.com>
    Reviewed-by: Steve Rutherford <srutherford@google.com>
    Acked-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20240802200136.329973-3-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>

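    A userspace sketch (pthreads, not KVM code) of the locking pattern the
    commit describes: the writer updates the "last run pid" in a short
    write-lock critical section, and the directed-yield-style reader uses a
    trylock so it backs off instead of blocking when the pid is being changed.
    All names here are illustrative.

        #include <pthread.h>
        #include <stdio.h>
        #include <sys/types.h>

        struct vcpu_sketch {
                pthread_rwlock_t pid_lock;
                pid_t last_run_pid;
        };

        static void record_running_task(struct vcpu_sketch *v, pid_t pid)
        {
                /* Short critical section; no synchronize_rcu()-style stall. */
                pthread_rwlock_wrlock(&v->pid_lock);
                v->last_run_pid = pid;
                pthread_rwlock_unlock(&v->pid_lock);
        }

        static int try_yield_to(struct vcpu_sketch *v)
        {
                pid_t pid;

                /* Contended means the pid is changing, i.e. the vCPU task is
                 * already running somewhere: nothing to boost, just bail. */
                if (pthread_rwlock_tryrdlock(&v->pid_lock))
                        return 0;

                pid = v->last_run_pid;
                pthread_rwlock_unlock(&v->pid_lock);

                printf("would yield to task %d\n", (int)pid);
                return 1;
        }

        int main(void)
        {
                struct vcpu_sketch v = { .pid_lock = PTHREAD_RWLOCK_INITIALIZER };

                record_running_task(&v, 1234);
                try_yield_to(&v);
                return 0;
        }
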
2024-10-08  KVM: arm64: nv: Punt stage-2 recycling to a vCPU request  [Oliver Upton, 1 file, -0/+7]

    Currently, when a nested MMU is repurposed for some other MMU context, KVM
    unmaps everything during vcpu_load() while holding the MMU lock for write.
    This is quite a performance bottleneck for large nested VMs, as all vCPU
    scheduling will spin until the unmap completes.

    Start punting the MMU cleanup to a vCPU request, where it is then possible
    to periodically release the MMU lock and CPU in the presence of
    contention. Ensure that no vCPU winds up using a stale MMU by tracking the
    pending unmap on the S2 MMU itself and requesting an unmap on every vCPU
    that finds it.

    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20241007233028.2236133-4-oliver.upton@linux.dev
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-10-03  KVM: arm64: Fix kvm_has_feat*() handling of negative features  [Marc Zyngier, 1 file, -12/+13]

    Oliver reports that the kvm_has_feat() helper is not behaving as expected
    for negative features. On investigation, the main issue seems to be caused
    by the following construct:

        #define get_idreg_field(kvm, id, fld)                   \
                (id##_##fld##_SIGNED ?                          \
                 get_idreg_field_signed(kvm, id, fld) :         \
                 get_idreg_field_unsigned(kvm, id, fld))

    where one side of the expression evaluates as something signed, and the
    other as something unsigned. In retrospect, this is totally braindead, as
    the compiler converts this into an unsigned expression. When compared to
    something that is 0, the test is simply elided. Epic fail. A similar issue
    exists in the expand_field_sign() macro.

    The correct way to handle this is to choose between signed and unsigned
    comparisons, so that both sides of the ternary expression are of the same
    type (bool).

    In order to keep the code readable (sort of), we introduce new comparison
    primitives taking an operator as a parameter, and rewrite the
    kvm_has_feat*() helpers in terms of these primitives.

    Fixes: c62d7a23b947 ("KVM: arm64: Add feature checking helpers")
    Reported-by: Oliver Upton <oliver.upton@linux.dev>
    Tested-by: Oliver Upton <oliver.upton@linux.dev>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/20241002204239.2051637-1-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

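    The pitfall is plain C semantics and can be reproduced outside the kernel.
    Below is a minimal, standalone illustration (not the kernel macro): when
    one arm of a ternary is signed and the other unsigned, the usual
    arithmetic conversions make the whole expression unsigned, so a negative
    field value can never compare as below zero. The "fixed" macro mirrors the
    approach the commit describes, doing the comparison inside each arm so
    both arms are already bool.

        #include <stdio.h>

        static long long field_signed(void)            { return -1; } /* e.g. "not implemented" */
        static unsigned long long field_unsigned(void) { return 1; }

        /* Broken shape: result type is unsigned long long, -1 wraps around. */
        #define GET_FIELD_BROKEN(is_signed) \
                ((is_signed) ? field_signed() : field_unsigned())

        /* Fixed shape: compare within each arm, both arms end up as bool. */
        #define FIELD_LOWER_THAN(is_signed, lim) \
                ((is_signed) ? (field_signed() < (long long)(lim)) \
                             : (field_unsigned() < (unsigned long long)(lim)))

        int main(void)
        {
                /* Always false; the compiler may elide the comparison. */
                printf("broken: negative? %d (value %llu)\n",
                       GET_FIELD_BROKEN(1) < 0, GET_FIELD_BROKEN(1));
                printf("fixed:  negative? %d\n", FIELD_LOWER_THAN(1, 0));
                return 0;
        }
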
2024-09-16  Merge tag 'for-linus-non-x86' of git://git.kernel.org/pub/scm/virt/kvm/kvm  [Linus Torvalds, 1 file, -2/+20]

    Pull kvm updates from Paolo Bonzini:
    "These are the non-x86 changes (mostly ARM, as is usually the case). The
    generic and x86 changes will come later"

    ARM:

     - New Stage-2 page table dumper, reusing the main ptdump infrastructure

     - FP8 support

     - Nested virtualization now supports the address translation
       (FEAT_ATS1A) family of instructions

     - Add selftest checks for a bunch of timer emulation corner cases

     - Fix multiple cases where KVM/arm64 doesn't correctly handle the guest
       trying to use a GICv3 that wasn't advertised

     - Remove REG_HIDDEN_USER from the sysreg infrastructure, making things a
       little simpler

     - Prevent MTE tags being restored by userspace if we are actively
       logging writes, as that's a recipe for disaster

     - Correct the refcount on a page that is not considered for MTE tag
       copying (such as a device)

     - When walking a page table to split block mappings, synchronize only at
       the end of the walk rather than on every store

     - Fix boundary check when transferring memory using FFA

     - Fix pKVM TLB invalidation, only affecting currently out of tree code
       but worth addressing for peace of mind

    LoongArch:
     - Revert qspinlock to test-and-set simple lock on VM.
     - Add Loongson Binary Translation extension support.
     - Add PMU support for guest.
     - Enable paravirt feature control from VMM.
     - Implement function kvm_para_has_feature().

    RISC-V:
     - Fix sbiret init before forwarding to userspace
     - Don't zero-out PMU snapshot area before freeing data
     - Allow legacy PMU access from guest
     - Fix to allow hpmcounter31 from the guest"

    * tag 'for-linus-non-x86' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (64 commits)
      LoongArch: KVM: Implement function kvm_para_has_feature()
      LoongArch: KVM: Enable paravirt feature control from VMM
      LoongArch: KVM: Add PMU support for guest
      KVM: arm64: Get rid of REG_HIDDEN_USER visibility qualifier
      KVM: arm64: Simplify visibility handling of AArch32 SPSR_*
      KVM: arm64: Simplify handling of CNTKCTL_EL12
      LoongArch: KVM: Add vm migration support for LBT registers
      LoongArch: KVM: Add Binary Translation extension support
      LoongArch: KVM: Add VM feature detection function
      LoongArch: Revert qspinlock to test-and-set simple lock on VM
      KVM: arm64: Register ptdump with debugfs on guest creation
      arm64: ptdump: Don't override the level when operating on the stage-2 tables
      arm64: ptdump: Use the ptdump description from a local context
      arm64: ptdump: Expose the attribute parsing functionality
      KVM: arm64: Add memory length checks and remove inline in do_ffa_mem_xfer
      KVM: arm64: Move pagetable definitions to common header
      KVM: arm64: nv: Add support for FEAT_ATS1A
      KVM: arm64: nv: Plumb handling of AT S1* traps from EL2
      KVM: arm64: nv: Make AT+PAN instructions aware of FEAT_PAN3
      KVM: arm64: nv: Sanitise SCTLR_EL1.EPAN according to VM configuration
      ...

2024-09-12  Merge branch 'for-next/poe' into for-next/core  [Will Deacon, 1 file, -0/+4]

    * for-next/poe: (31 commits)
      arm64: pkeys: remove redundant WARN
      kselftest/arm64: Add test case for POR_EL0 signal frame records
      kselftest/arm64: parse POE_MAGIC in a signal frame
      kselftest/arm64: add HWCAP test for FEAT_S1POE
      selftests: mm: make protection_keys test work on arm64
      selftests: mm: move fpregs printing
      kselftest/arm64: move get_header()
      arm64: add Permission Overlay Extension Kconfig
      arm64: enable PKEY support for CPUs with S1POE
      arm64: enable POE and PIE to coexist
      arm64/ptrace: add support for FEAT_POE
      arm64: add POE signal support
      arm64: implement PKEYS support
      arm64: add pte_access_permitted_no_overlay()
      arm64: handle PKEY/POE faults
      arm64: mask out POIndex when modifying a PTE
      arm64: convert protection key into vm_flags and pgprot values
      arm64: add POIndex defines
      arm64: re-order MTE VM_ flags
      arm64: enable the Permission Overlay Extension for EL0
      ...

2024-09-12  Merge branch kvm-arm64/vgic-sre-traps into kvmarm-master/next  [Marc Zyngier, 1 file, -0/+2]

    * kvm-arm64/vgic-sre-traps:

      Fix the multiple cases where KVM/arm64 doesn't correctly handle the
      guest trying to use a GICv3 that isn't advertised.

      From the cover letter:

      "It recently appeared that, when running on a GICv3-equipped platform
      (which is what non-ancient arm64 HW has), *not* configuring a GICv3 for
      the guest could result in less than desirable outcomes.

      We have multiple issues to fix:

      - for registers that *always* trap (the SGI registers) or that *may*
        trap (the SRE register), we need to check whether a GICv3 has been
        instantiated before acting upon the trap.

      - for registers that only conditionally trap, we must actively trap
        them even in the absence of a GICv3 being instantiated, and handle
        those traps accordingly.

      - finally, ID registers must reflect the absence of a GICv3, so that
        we are consistent.

      This series goes through all these requirements. The main complexity
      here is to apply a GICv3 configuration on the host in the absence of a
      GICv3 in the guest. This is pretty hackish, but I don't have a much
      better solution so far.

      As part of making wider use of the trap bits, we fully define the trap
      routing as per the architecture, something that we eventually need for
      NV anyway."

      KVM: arm64: selftests: Cope with lack of GICv3 in set_id_regs
      KVM: arm64: Add selftest checking how the absence of GICv3 is handled
      KVM: arm64: Unify UNDEF injection helpers
      KVM: arm64: Make most GICv3 accesses UNDEF if they trap
      KVM: arm64: Honor guest requested traps in GICv3 emulation
      KVM: arm64: Add trap routing information for ICH_HCR_EL2
      KVM: arm64: Add ICH_HCR_EL2 to the vcpu state
      KVM: arm64: Zero ID_AA64PFR0_EL1.GIC when no GICv3 is presented to the guest
      KVM: arm64: Add helper for last ditch idreg adjustments
      KVM: arm64: Force GICv3 trap activation when no irqchip is configured on VHE
      KVM: arm64: Force SRE traps when SRE access is not enabled
      KVM: arm64: Move GICv3 trap configuration to kvm_calculate_traps()

    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-09-04  KVM: arm64: Save/restore POE registers  [Joey Gouly, 1 file, -0/+4]

    Define the new system registers that POE introduces and context switch
    them.

    Signed-off-by: Joey Gouly <joey.gouly@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Oliver Upton <oliver.upton@linux.dev>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Reviewed-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20240822151113.1479789-8-joey.gouly@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>

2024-08-27  KVM: arm64: Add ICH_HCR_EL2 to the vcpu state  [Marc Zyngier, 1 file, -0/+2]

    As we are about to describe the trap routing for ICH_HCR_EL2, add the
    register to the vcpu state in its VNCR form, as well as reset

    Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
    Link: https://lore.kernel.org/r/20240827152517.3909653-7-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-08-27  KVM: arm64: Add save/restore support for FPMR  [Marc Zyngier, 1 file, -0/+10]

    Just like the rest of the FP/SIMD state, FPMR needs to be context
    switched. The only interesting thing here is that we need to treat the
    pKVM part a bit differently, as the host FP state is never written back to
    the vcpu thread, but instead stored locally and eagerly restored.

    Reviewed-by: Mark Brown <broonie@kernel.org>
    Tested-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240820131802.3547589-5-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-08-27  KVM: arm64: Move FPMR into the sysreg array  [Marc Zyngier, 1 file, -1/+1]

    Just like SVCR, FPMR is currently stored at the wrong location. Let's move
    it where it belongs.

    Reviewed-by: Mark Brown <broonie@kernel.org>
    Tested-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240820131802.3547589-4-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-08-27  KVM: arm64: Add predicate for FPMR support in a VM  [Marc Zyngier, 1 file, -0/+4]

    As we are about to check for the advertisement of FPMR support to a guest
    in a number of places, add a predicate that will gate most of the support
    code for FPMR.

    Reviewed-by: Mark Brown <broonie@kernel.org>
    Tested-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240820131802.3547589-3-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>

2024-08-27  KVM: arm64: Move SVCR into the sysreg array  [Marc Zyngier, 1 file, -1/+3]

    SVCR is just a system register, and has no purpose being outside of the
    sysreg array. If anything, it only makes it more difficult to eventually
    support SME one day. If ever.

    Move it into the array with its little friends, and associate it with a
    visibility predicate.

    Although this is dead code, it at least paves the way for the next set of
    FP-related extensions.

    Reviewed-by: Mark Brown <broonie@kernel.org>
    Tested-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20240820131802.3547589-2-maz@kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>