path: root/arch/arm64/kvm
Age  Commit message  (Author)  Files changed, lines removed/added
2025-03-17  KVM: arm64: PMU: Assume PMU presence in pmu-emul.c  (Akihiko Odaki)  4 files, -35/+20
Many functions in pmu-emul.c check kvm_vcpu_has_pmu(vcpu). A favorable interpretation is defensive programming, but it also has downsides: - It is confusing, as it implies these functions may be called without a PMU even though most of them are only called when a PMU is present. - It makes the semantics of the functions fuzzy. For example, calling kvm_pmu_disable_counter_mask() without a PMU may result in a no-op as there are no enabled counters, but it is unclear what kvm_pmu_get_counter_value() returns when there is no PMU. - It lets callers omit the kvm_vcpu_has_pmu(vcpu) check themselves, even though it is often wrong to call these functions without a PMU. - Duplicating kvm_vcpu_has_pmu(vcpu) checks across multiple functions is error-prone. Many of these functions are called for system registers, and the system register infrastructure already employs less error-prone, comprehensive checks. Check kvm_vcpu_has_pmu(vcpu) in the callers of these functions instead, and remove the obsolete checks from pmu-emul.c. The only exceptions are the functions that implement ioctls, as they have definitive semantics even when the PMU is not present. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250315-pmc-v5-2-ecee87dab216@daynix.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
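As a rough illustration of the pattern described above (the function body is a sketch, not the actual pmu-emul.c code), the kvm_vcpu_has_pmu() check moves out of the helper and into its caller:

    /* Before: the helper defends itself against a missing PMU. */
    u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
    {
            if (!kvm_vcpu_has_pmu(vcpu))
                    return 0;
            /* ... read and return the counter value ... */
    }

    /* After: the caller (typically a sysreg handler) checks once, and the
     * helper simply assumes a PMU is present. */
    if (kvm_vcpu_has_pmu(vcpu))
            val = kvm_pmu_get_counter_value(vcpu, select_idx);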
2025-03-17  KVM: arm64: PMU: Set raw values from user to PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}  (Akihiko Odaki)  1 file, -19/+2
Commit a45f41d754e0 ("KVM: arm64: Add {get,set}_user for PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}") changed KVM_SET_ONE_REG to update the mentioned registers in a way that matches the behavior of guest register writes. This is a breaking UAPI change, even though the new semantics look cleaner, and VMMs are not prepared for it. Firecracker, QEMU, and crosvm perform migration by listing registers with KVM_GET_REG_LIST, getting their values with KVM_GET_ONE_REG and setting them with KVM_SET_ONE_REG. This algorithm assumes KVM_SET_ONE_REG restores the values retrieved with KVM_GET_ONE_REG without any alteration. However, the bit operations added by the earlier commit do not preserve the values retrieved with KVM_GET_ONE_REG and can potentially break migration. Remove the bit operations that alter the values retrieved with KVM_GET_ONE_REG. Cc: stable@vger.kernel.org Fixes: a45f41d754e0 ("KVM: arm64: Add {get,set}_user for PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}") Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250315-pmc-v5-1-ecee87dab216@daynix.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
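As a rough illustration of the save/restore algorithm these VMMs rely on (simplified, error handling omitted, not taken from any particular VMM), the key assumption is that a value read via KVM_GET_ONE_REG can be written back verbatim via KVM_SET_ONE_REG:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdlib.h>

    void save_restore_regs(int vcpu_fd)
    {
            struct kvm_reg_list hdr = { .n = 0 };

            /* First call fails with E2BIG and reports how many registers exist. */
            ioctl(vcpu_fd, KVM_GET_REG_LIST, &hdr);

            struct kvm_reg_list *list = calloc(1, sizeof(*list) + hdr.n * sizeof(__u64));
            list->n = hdr.n;
            ioctl(vcpu_fd, KVM_GET_REG_LIST, list);

            for (__u64 i = 0; i < list->n; i++) {
                    __u64 val;      /* real VMMs size the buffer per register */
                    struct kvm_one_reg reg = { .id = list->reg[i], .addr = (__u64)&val };

                    ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);  /* save */
                    ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);  /* restore, expected unaltered */
            }
            free(list);
    }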
2025-03-17  mm: rename GENERIC_PTDUMP and PTDUMP_CORE  (Anshuman Khandual)  1 file, -2/+2
Platforms subscribe to the generic ptdump implementation via GENERIC_PTDUMP, but generic ptdump itself gets enabled via PTDUMP_CORE. This combination of configs is confusing: they sound very similar and do not differentiate between a platform's feature subscription and feature enablement for ptdump. Rename the configs to ARCH_HAS_PTDUMP and PTDUMP to make the distinction clearer and improve readability. Link: https://lkml.kernel.org/r/20250226122404.1927473-6-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> (powerpc) Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Will Deacon <will@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Madhavan Srinivasan <maddy@linux.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Steven Price <steven.price@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-03-15  KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu  (Fuad Tabba)  4 files, -42/+51
Instead of creating and initializing _all_ hyp vcpus in pKVM when the first host vcpu runs for the first time, initialize _each_ hyp vcpu in conjunction with its corresponding host vcpu. Some of the host vcpu state (e.g., system registers and trap values) is not initialized until the first time the host vcpu is run. Therefore, a hyp vcpu initialized before its corresponding host vcpu has run for the first time might not see the complete host state of that vcpu. Additionally, this behavior is in line with the non-protected mode. Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://lore.kernel.org/r/20250314111832.4137161-5-tabba@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-15  KVM: arm64: Factor out pKVM hyp vcpu creation to separate function  (Fuad Tabba)  1 file, -28/+24
Move the code that creates and initializes the hyp view of a vcpu in pKVM to its own function. This is meant to make the transition to initializing every vcpu individually clearer. Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://lore.kernel.org/r/20250314111832.4137161-4-tabba@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-15  KVM: arm64: Initialize HCRX_EL2 traps in pKVM  (Fuad Tabba)  1 file, -1/+7
Initialize and set the traps controlled by the HCRX_EL2 in pKVM. Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://lore.kernel.org/r/20250314111832.4137161-3-tabba@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-15  KVM: arm64: Factor out setting HCRX_EL2 traps into separate function  (Fuad Tabba)  1 file, -19/+1
Factor out the code for setting a vcpu's HCRX_EL2 traps into a separate inline function. This allows us to share the logic with pKVM when setting the traps in protected mode. No functional change intended. Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Fuad Tabba <tabba@google.com> Link: https://lore.kernel.org/r/20250314111832.4137161-2-tabba@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-14  KVM: arm64: Count pKVM stage-2 usage in secondary pagetable stats  (Vincent Donnefort)  2 files, -1/+17
Count the pages used by pKVM for the guest stage-2 in memory stats under secondary pagetable, similarly to what the VHE mode does. Signed-off-by: Vincent Donnefort <vdonnefort@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250313114038.1502357-4-vdonnefort@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-14  KVM: arm64: Distinct pKVM teardown memcache for stage-2  (Vincent Donnefort)  4 files, -6/+7
In order to account for memory dedicated to the stage-2 page-tables, use a separate memcache when tearing down the VM. Meanwhile, rename reclaim_guest_pages to reflect the fact that it only reclaims page-table pages. Signed-off-by: Vincent Donnefort <vdonnefort@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250313114038.1502357-3-vdonnefort@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-14  KVM: arm64: Add flags to kvm_hyp_memcache  (Vincent Donnefort)  1 file, -4/+4
Add flags to kvm_hyp_memcache and propagate the latter to the allocation and free callbacks. This will later allow memory to be accounted based on the memcache configuration. Signed-off-by: Vincent Donnefort <vdonnefort@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250313114038.1502357-2-vdonnefort@google.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-12  KVM: arm64: Allow userspace to write ID_AA64MMFR0_EL1.TGRAN*_2  (Sebastian Ott)  1 file, -4/+17
Allow userspace to write the safe (NI) value for ID_AA64MMFR0_EL1.TGRAN*_2. Disallow changing these fields for NV, since KVM provides a sanitized view of them based on PAGE_SIZE. Signed-off-by: Sebastian Ott <sebott@redhat.com> Link: https://lore.kernel.org/kvmarm/20250306184013.30008-1-sebott@redhat.com/ Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-12  KVM: arm64: ptdump: Test PMD_TYPE_MASK for block mapping  (Anshuman Khandual)  1 file, -2/+2
Test the given page table entries against PMD_TYPE_SECT using the PMD_TYPE_MASK mask bits to identify block mappings in stage-2 page tables. Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: kvmarm@lists.linux.dev Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Link: https://lore.kernel.org/r/20250221044227.1145393-2-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
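The check itself is just a mask-and-compare on the descriptor type bits; roughly (illustrative sketch, not the exact ptdump hunk; the handler name is hypothetical):

    /* A stage-2 PMD/PUD entry is a block mapping when its type bits
     * select "block/section" rather than "table". */
    if ((pte & PMD_TYPE_MASK) == PMD_TYPE_SECT)
            note_block_mapping(pte);    /* hypothetical handler */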
2025-03-11  KVM: arm64: Provide 1 event counter on IMPDEF hardware  (Oliver Upton)  1 file, -0/+7
PMUv3 requires that all programmable event counters are capable of counting any event. The Apple M* PMU is quite a bit different, and events have affinities for particular PMCs. Expose 1 event counter on IMPDEF hardware, allowing the guest to do something useful with its PMU while also upholding the requirements of the architecture. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305203021.428366-1-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Remap PMUv3 events onto hardware  (Oliver Upton)  1 file, -1/+24
Map PMUv3 event IDs onto hardware, if the driver exposes such a helper. This is expected to be quite rare, and only useful for non-PMUv3 hardware. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-12-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Advertise PMUv3 if IMPDEF traps are present  (Oliver Upton)  1 file, -1/+11
Advertise a baseline PMUv3 implementation when running on hardware with IMPDEF traps of the PMUv3 sysregs. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-11-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Compute synthetic sysreg ESR for Apple PMUv3 traps  (Oliver Upton)  1 file, -0/+22
Apple M* CPUs provide an IMPDEF trap for PMUv3 sysregs, where ESR_EL2.EC is a reserved value (0x3F) and a sysreg-like ISS is reported in AFSR1_EL2. Compute a synthetic ESR for these PMUv3 traps, giving the illusion of something architectural to the rest of KVM. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-10-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
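Conceptually, the handler rebuilds a sysreg-access ESR from the ISS reported in AFSR1_EL2; a very rough sketch of the idea under the assumption that AFSR1_EL2 carries a sysreg-like ISS in its low bits (not the actual KVM code):

    /* Pretend the IMPDEF trap (EC 0x3F) was an ordinary MSR/MRS trap. */
    u64 iss = read_sysreg(afsr1_el2) & ESR_ELx_ISS_MASK;    /* sysreg-like ISS */
    u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SYS64) | ESR_ELx_IL | iss;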
2025-03-11  KVM: arm64: Move PMUVer filtering into KVM code  (Oliver Upton)  1 file, -6/+9
The supported guest PMU version on a particular platform is ultimately a KVM decision. Move PMUVer filtering into KVM code. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-9-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Use guard() to cleanup usage of arm_pmus_lock  (Oliver Upton)  1 file, -15/+8
Get rid of some goto label patterns by using guard() to drop the arm_pmus_lock when returning from a function. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-8-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
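The transformation replaces unlock-on-exit labels with a scope-based cleanup; a minimal before/after sketch (assuming arm_pmus_lock is a mutex, with hypothetical surrounding code):

    /* Before: every early exit must jump to the unlock label. */
    mutex_lock(&arm_pmus_lock);
    if (!entry)
            goto out;
    /* ... */
    out:
    mutex_unlock(&arm_pmus_lock);

    /* After: guard() from <linux/cleanup.h> releases the lock automatically
     * when the scope is left, so plain returns work. */
    guard(mutex)(&arm_pmus_lock);
    if (!entry)
            return NULL;
    /* ... */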
2025-03-11  KVM: arm64: Drop kvm_arm_pmu_available static key  (Oliver Upton)  2 files, -7/+8
With the PMUv3 cpucap, kvm_arm_pmu_available is no longer used in the hot path of guest entry/exit. On top of that, guest support for PMUv3 may not correlate with host support for the feature, e.g. on IMPDEF hardware. Throw out the static key and just inspect the list of PMUs to determine if PMUv3 is supported for KVM guests. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-7-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Use a cpucap to determine if system supports FEAT_PMUv3  (Oliver Upton)  2 files, -7/+7
KVM is about to learn some new tricks to virtualize PMUv3 on IMPDEF hardware. As part of that, we now need to differentiate host support from guest support for PMUv3. Add a cpucap to determine if an architectural PMUv3 is present to guard host usage of PMUv3 controls. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-6-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Always support SW_INCR PMU event  (Oliver Upton)  1 file, -0/+2
Support for SW_INCR is unconditional, as KVM traps accesses to PMSWINC_EL0 and emulates the intended event increment. While it is expected that ~all PMUv3 implementations already advertise this event, non-PMUv3 hardware may not. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-5-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-11  KVM: arm64: Compute PMCEID from arm_pmu's event bitmaps  (Oliver Upton)  1 file, -11/+36
The PMUv3 driver populates a couple of bitmaps with the values of PMCEID{0,1}, from which the guest's PMCEID{0,1} can be derived. This is particularly convenient when virtualizing PMUv3 on IMP DEF hardware, as reading the nonexistent PMCEID registers leads to a rather unpleasant UNDEF. Tested-by: Janne Grunau <j@jannau.net> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305202641.428114-4-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-10  arm64/sysreg: Rename POE_RXW to POE_RWX  (Kevin Brodsky)  1 file, -4/+4
It is customary to list R, W, X permissions in that order. In fact this is already the case for PIE constants (PIE_RWX). Rename POE_RXW accordingly, as well as POE_XW (currently unused). While at it also swap the W/X lines in compute_s1_overlay_permissions() to follow the R, W, X order. Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Link: https://lore.kernel.org/r/20250219164029.2309119-3-kevin.brodsky@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2025-03-06  KVM: arm64: Copy MIDR_EL1 into hyp VM when it is writable  (Oliver Upton)  1 file, -0/+4
KVM recently added a capability that allows userspace to override the 'implementation ID' registers presented to the VM. MIDR_EL1 is a special example, where the hypervisor can directly set the value when read from EL1 using VPIDR_EL2. Copy the VM-wide value for MIDR_EL1 into the hyp VM for non-protected guests when the capability is enabled so VPIDR_EL2 gets set up correctly. Reported-by: Mark Brown <broonie@kernel.org> Closes: https://lore.kernel.org/kvmarm/ac594b9c-4bbb-46c8-9391-e7a68ce4de5b@sirena.org.uk/ Fixes: 3adaee783061 ("KVM: arm64: Allow userspace to change the implementation ID registers") Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305230825.484091-3-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-06  KVM: arm64: Copy guest CTR_EL0 into hyp VM  (Oliver Upton)  1 file, -1/+5
Since commit 2843cae26644 ("KVM: arm64: Treat CTR_EL0 as a VM feature ID register") KVM has allowed userspace to configure the VM-wide view of CTR_EL0, falling back to trap-n-emulate if the value doesn't match hardware. It appears that this has worked by chance in protected-mode for some time, and on systems with FEAT_EVT protected-mode unconditionally sets TID4 (i.e. TID2 traps sans CTR_EL0). Forward the guest CTR_EL0 value through to the hyp VM and align the TID2/TID4 configuration with the non-protected setup. Fixes: 2843cae26644 ("KVM: arm64: Treat CTR_EL0 as a VM feature ID register") Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250305230825.484091-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Fail KVM init if asking for NV without GICv3  (Marc Zyngier)  1 file, -0/+7
Although there is nothing in NV that is fundamentally incompatible with the lack of GICv3, there is no HW implementation without one, at least on the virtual side (yes, even fruits have some form of vGICv3). We therefore make the decision to require GICv3, which will only affect models such as QEMU. Booting with a GICv2 or something even more exotic while asking for NV will result in KVM being disabled. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-17-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Allow userland to set VGIC maintenance IRQ  (Andre Przywara)  1 file, -2/+27
The VGIC maintenance IRQ signals various conditions about the LRs when the GIC's virtualization extension is used. So far we didn't need it, but nested virtualization needs to know about this interrupt, so add a userland interface to set up the IRQ number. The architecture mandates that it must be a PPI; on top of that, this code only exports a per-device option, so the PPI is the same on all VCPUs. Signed-off-by: Andre Przywara <andre.przywara@arm.com> [added some bits of documentation] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-16-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup  (Marc Zyngier)  1 file, -3/+17
Popular HW that is able to use NV also has a broken vgic implementation that requires trapping. On such HW, propagate the host trap bits into the guest's shadow ICH_HCR_EL2 register, making sure we don't allow an L2 guest to bring the system down. This involves a bit of tweaking so that the emulation code correctly picks up the shadow state as needed, and to only partially sync ICH_HCR_EL2 back with the guest state to capture EOIcount. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-15-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Propagate used_lrs between L1 and L0 contexts  (Marc Zyngier)  1 file, -0/+7
We have so far made sure that L1 and L0 vgic contexts were totally independent. There is however one spot of bother with this approach, and that's in the GICv3 emulation code required by our fruity friends. The issue is that the emulation code needs to know how many LRs are in flight. And while it is easy to reach the L0 version through the vcpu pointer, doing so for the L1 is much more complicated, as these structures are private to the nested code. We could simply expose that structure and pick one or the other depending on the context, but this seems extra complexity for not much benefit. Instead, just propagate the number of used LRs from the nested code into the L0 context, and be done with it. Should this become a burden, it can be easily rectified. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-14-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Request vPE doorbell upon nested ERET to L2  (Oliver Upton)  2 files, -1/+19
Running an L2 guest with GICv4 enabled goes absolutely nowhere, and gets into a vicious cycle of nested ERET followed by nested exception entry into the L1. When KVM does a put on a runnable vCPU, it marks the vPE as nonresident but does not request a doorbell IRQ. Behind the scenes in the ITS driver's view of the vCPU, its_vpe::pending_last gets set to true to indicate that context is still runnable. This comes to a head when doing the nested ERET into L2. The vPE doesn't get scheduled on the redistributor as it is exclusively part of the L1's VGIC context. kvm_vgic_vcpu_pending_irq() returns true because the vPE appears runnable, and KVM does a nested exception entry into the L1 before L2 ever gets off the ground. This issue can be papered over by requesting a doorbell IRQ when descheduling a vPE as part of a nested ERET. KVM needs this anyway to kick the vCPU out of the L2 when an IRQ becomes pending for the L1. Link: https://lore.kernel.org/r/20240823212703.3576061-4-oliver.upton@linux.dev Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-13-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Respect virtual HCR_EL2.TWx setting  (Jintack Lim)  1 file, -1/+5
Forward exceptions due to WFI or WFE instructions to the virtual EL2 if they are not coming from the virtual EL2 and virtual HCR_EL2.TWx is set. Signed-off-by: Jintack Lim <jintack.lim@linaro.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-12-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Add Maintenance Interrupt emulation  (Marc Zyngier)  5 files, -0/+91
Emulating the vGIC means emulating the dreaded Maintenance Interrupt. This is a two-pronged problem: - while running L2, getting an MI translates into an MI injected in the L1 based on the state of the HW. - while running L1, we must accurately reflect the state of the MI line, based on the in-memory state. The MI INTID is added to the distributor, as expected on any virtualisation-capable implementation, and further patches will allow its configuration. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-11-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Handle L2->L1 transition on interrupt injection  (Marc Zyngier)  3 files, -0/+32
An interrupt being delivered to L1 while running L2 must result in the correct exception being delivered to L1. This means that if, on entry to L2, we found ourselves with pending interrupts in the L1 distributor, we need to take immediate action. This is done by posting a request which will prevent the entry in L2, and deliver an IRQ exception to L1, forcing the switch. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-10-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Nested GICv3 emulation  (Marc Zyngier)  5 files, -1/+240
When entering a nested VM, we set up the hypervisor control interface based on what the guest hypervisor has set. In particular, we examine each list register written by the guest hypervisor to see whether its HW bit is set. If so, we translate the hw irq number from the guest's point of view to the real hardware irq number, if there is a mapping. Co-developed-by: Jintack Lim <jintack@cs.columbia.edu> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu> [Christoffer: Redesigned execution flow around vcpu load/put] Co-developed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com> [maz: Rewritten to support GICv3 instead of GICv2, NV2 support] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-9-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Sanitise ICH_HCR_EL2 accesses  (Marc Zyngier)  1 file, -0/+9
As ICH_HCR_EL2 is a VNCR accessor when running NV, add some sanitising to what gets written. Crucially, mark TDIR as RES0 if the HW doesn't support it (unlikely, but hey...), as well as anything GICv4 related, since we only expose a GICv3 to the guest. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-8-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Plumb handling of GICv3 EL2 accesses  (Marc Zyngier)  3 files, -2/+220
Wire the handling of all GICv3 EL2 registers, and provide emulation for all the non memory-backed registers (ICC_SRE_EL2, ICH_VTR_EL2, ICH_MISR_EL2, ICH_ELRSR_EL2, and ICH_EISR_EL2). Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-7-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  KVM: arm64: nv: Load timer before the GIC  (Marc Zyngier)  1 file, -1/+5
In order for vgic_v3_load_nested to be able to observe which timer interrupts have the HW bit set for the current context, the timers must have been loaded in the new mode and the right timer mapped to their corresponding HW IRQs. At the moment, we load the GIC first, meaning that timer interrupts injected to an L2 guest will never have the HW bit set (we see the old configuration). Swapping the two loads solves this particular problem. Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-5-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  arm64: sysreg: Add layout for ICH_VTR_EL2  (Marc Zyngier)  2 files, -13/+11
The ICH_VTR_EL2-related macros are missing a number of config bits that we are about to handle. Take this opportunity to fully describe the layout of that register as part of the automatic generation infrastructure. This results in a bit of churn to repaint constants that are now generated with a different format. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-3-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-04  arm64: sysreg: Add layout for ICH_HCR_EL2  (Marc Zyngier)  3 files, -23/+24
The ICH_HCR_EL2-related macros are missing a number of control bits that we are about to handle. Take this opportunity to fully describe the layout of that register as part of the automatic generation infrastructure. This results in a bit of churn, unfortunately. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225172930.1850838-2-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-03-02  KVM: arm64: Initialize SCTLR_EL1 in __kvm_hyp_init_cpu()  (Ahmed Genidi)  2 files, -2/+3
When KVM is in protected mode, host calls to PSCI are proxied via EL2, and cold entries from CPU_ON, CPU_SUSPEND, and SYSTEM_SUSPEND bounce through __kvm_hyp_init_cpu() at EL2 before entering the host kernel's entry point at EL1. While __kvm_hyp_init_cpu() initializes SPSR_EL2 for the exception return to EL1, it does not initialize SCTLR_EL1. Due to this, it's possible to enter EL1 with SCTLR_EL1 in an UNKNOWN state. In practice this has been seen to result in kernel crashes after CPU_ON as a result of SCTLR_EL1.M being 1 in violation of the initial core configuration specified by PSCI. Fix this by initializing SCTLR_EL1 for cold entry to the host kernel. As it's necessary to write to SCTLR_EL12 in VHE mode, this initialization is moved into __kvm_host_psci_cpu_entry() where we can use write_sysreg_el1(). The remnants of the '__init_el2_nvhe_prepare_eret' macro are folded into its only caller, as this is clearer than having the macro. Fixes: cdf367192766ad11 ("KVM: arm64: Intercept host's CPU_ON SMCs") Reported-by: Leo Yan <leo.yan@arm.com> Signed-off-by: Ahmed Genidi <ahmed.genidi@arm.com> [ Mark: clarify commit message, handle E2H, move to C, remove macro ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ahmed Genidi <ahmed.genidi@arm.com> Cc: Ben Horgan <ben.horgan@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Leo Yan <leo.yan@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Cc: Will Deacon <will@kernel.org> Reviewed-by: Leo Yan <leo.yan@arm.com> Link: https://lore.kernel.org/r/20250227180526.1204723-3-mark.rutland@arm.com Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-03-02  KVM: arm64: Initialize HCR_EL2.E2H early  (Mark Rutland)  1 file, -1/+7
On CPUs without FEAT_E2H0, HCR_EL2.E2H is RES1, but may reset to an UNKNOWN value out of reset and consequently may not read as 1 unless it has been explicitly initialized. We handled this for the head.S boot code in commits: 3944382fa6f22b54 ("arm64: Treat HCR_EL2.E2H as RES1 when ID_AA64MMFR4_EL1.E2H0 is negative") b3320142f3db9b3f ("arm64: Fix early handling of FEAT_E2H0 not being implemented") Unfortunately, we forgot to apply a similar fix to the KVM PSCI entry points used when relaying CPU_ON, CPU_SUSPEND, and SYSTEM SUSPEND. When KVM is entered via these entry points, the value of HCR_EL2.E2H may be consumed before it has been initialized (e.g. by the 'init_el2_state' macro). Initialize HCR_EL2.E2H early in these paths such that it can be consumed reliably. The existing code in head.S is factored out into a new 'init_el2_hcr' macro, and this is used in the __kvm_hyp_init_cpu() function common to all the relevant PSCI entry points. For clarity, I've tweaked the assembly used to check whether ID_AA64MMFR4_EL1.E2H0 is negative. The bitfield is extracted as a signed value, and this is checked with a signed-greater-or-equal (GE) comparison. As the hyp code will reconfigure HCR_EL2 later in ___kvm_hyp_init(), all bits other than E2H are initialized to zero in __kvm_hyp_init_cpu(). Fixes: 3944382fa6f22b54 ("arm64: Treat HCR_EL2.E2H as RES1 when ID_AA64MMFR4_EL1.E2H0 is negative") Fixes: b3320142f3db9b3f ("arm64: Fix early handling of FEAT_E2H0 not being implemented") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ahmed Genidi <ahmed.genidi@arm.com> Cc: Ben Horgan <ben.horgan@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Leo Yan <leo.yan@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20250227180526.1204723-2-mark.rutland@arm.com [maz: fixed LT->GE thinko] Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-02-27  KVM: arm64: Introduce KVM_REG_ARM_VENDOR_HYP_BMAP_2  (Shameer Kolothum)  1 file, -0/+13
The vendor_hyp_bmap bitmap holds the information about the Vendor Hyp services available to user space and can be read/written using the {G,S}ET_ONE_REG interfaces. This is done using the pseudo-firmware bitmap register KVM_REG_ARM_VENDOR_HYP_BMAP. At present, this bitmap is a 64 bit one and since the function numbers for the newly added DISCOVER_IMPL_* hypercalls are 64-65, introduce another pseudo-firmware bitmap register KVM_REG_ARM_VENDOR_HYP_BMAP_2. Reviewed-by: Sebastian Ott <sebott@redhat.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20250221140229.12588-4-shameerali.kolothum.thodi@huawei.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-27  arm64: Modify _midr_range() functions to read MIDR/REVIDR internally  (Shameer Kolothum)  1 file, -1/+1
These changes lay the groundwork for adding support for guest kernels, allowing them to leverage target CPU implementations provided by the VMM. No functional changes intended. Suggested-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Sebastian Ott <sebott@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250221140229.12588-2-shameerali.kolothum.thodi@huawei.com Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-27  KVM: arm64: vgic-v4: Fall back to software irqbypass if LPI not found  (Oliver Upton)  1 file, -5/+10
Continuing with the theme of broken VMMs and guests, irqbypass registration can fail if the virtual ITS lacks a translation for the MSI. Either the guest hasn't mapped it or userspace may have forgotten to restore the ITS. Exit silently and allow irqbypass configuration to succeed. As a reward for ingenuity, LPIs are demoted to software injection. Tested-by: Sudheer Dantuluri <dantuluris@google.com> Fixes: 196b136498b3 ("KVM: arm/arm64: GICv4: Wire mapping/unmapping of VLPIs in VFIO irq bypass") Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250226183124.82094-4-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-27  KVM: arm64: vgic-v4: Only WARN for HW IRQ mismatch when unmapping vLPI  (Oliver Upton)  1 file, -1/+1
The VMM or guest can easily screw up GICv4 vLPI injection by misconfiguring the MSI or the virtual ITS. Don't fuss over it; limit the WARN in vgic_v4_unset_forwarding() to fire in the worrying case where an unrelated HW IRQ was mapped to a vLPI. Reported-by: Sudheer Dantuluri <dantuluris@google.com> Tested-by: Sudheer Dantuluri <dantuluris@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250226183124.82094-3-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-27  KVM: arm64: vgic-v4: Only attempt vLPI mapping for actual MSIs  (Oliver Upton)  1 file, -0/+12
Some 'creative' VMMs out there may assign a VFIO MSI eventfd to an SPI routing entry. And yes, I can already hear you shouting about possibly driving a level interrupt with an edge-sensitive one. You know who you are. This works for the most part, and interrupt injection winds up taking the software path. However, when running on GICv4-enabled hardware, KVM erroneously attempts to setup LPI forwarding, even though the KVM routing isn't an MSI. Thanks to misuse of a union, the MSI destination is unlikely to match any ITS in the VM and kvm_vgic_v4_set_forwarding() bails early. Later on when the VM is being torn down, this half-configured state triggers the WARN_ON() in kvm_vgic_v4_unset_forwarding() due to the fact that no HW IRQ was ever assigned. Avoid the whole mess by preventing SPI routing entries from getting into the LPI forwarding helpers. Reported-by: Sudheer Dantuluri <dantuluris@google.com> Tested-by: Sudheer Dantuluri <dantuluris@google.com> Fixes: 196b136498b3 ("KVM: arm/arm64: GICv4: Wire mapping/unmapping of VLPIs in VFIO irq bypass") Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250226183124.82094-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-26  KVM: arm64: Allow userspace to change the implementation ID registers  (Sebastian Ott)  3 files, -6/+56
KVM's treatment of the ID registers that describe the implementation (MIDR, REVIDR, and AIDR) is interesting, to say the least. On the userspace-facing end of it, KVM presents the values of the boot CPU on all vCPUs and treats them as invariant. On the guest side of things KVM presents the hardware values of the local CPU, which can change during CPU migration in a big-little system. While one may call this fragile, there is at least some degree of predictability around it. For example, if a VMM wanted to present big-little to a guest, it could affine vCPUs accordingly to the correct clusters. All of this makes a giant mess out of adding support for making these implementation ID registers writable. Avoid breaking the rather subtle ABI around the old way of doing things by requiring opt-in from userspace to make the registers writable. When the cap is enabled, allow userspace to set MIDR, REVIDR, and AIDR to any non-reserved value and present those values consistently across all vCPUs. Signed-off-by: Sebastian Ott <sebott@redhat.com> [oliver: changelog, capability] Link: https://lore.kernel.org/r/20250225005401.679536-5-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
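From the VMM side, the flow is roughly: enable the capability on the VM, then write the desired values before running any vCPU. A hedged sketch (the capability name below is an assumption based on this series; MIDR_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=0, op2=0; the MIDR value is an arbitrary example):

    /* Opt in to writable MIDR_EL1/REVIDR_EL1/AIDR_EL1 (cap name assumed). */
    struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_WRITABLE_IMP_ID_REGS };
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

    /* Then MIDR_EL1 can be set like any other sysreg, per vCPU. */
    __u64 midr = 0x410fd0c0;    /* example value only */
    struct kvm_one_reg reg = {
            .id   = ARM64_SYS_REG(3, 0, 0, 0, 0),   /* MIDR_EL1 */
            .addr = (__u64)&midr,
    };
    ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);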
2025-02-26  KVM: arm64: Load VPIDR_EL2 with the VM's MIDR_EL1 value  (Oliver Upton)  3 files, -20/+20
Userspace will soon be able to change the value of MIDR_EL1. Prepare by loading VPIDR_EL2 with the guest value for non-nested VMs. Since VPIDR_EL2 is set for any VM, get rid of the NV-specific cleanup of reloading the hardware value on vcpu_put(). And for nVHE, load the hardware value before switching to the host. Link: https://lore.kernel.org/r/20250225005401.679536-4-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-26  KVM: arm64: Maintain per-VM copy of implementation ID regs  (Sebastian Ott)  1 file, -21/+22
Get ready to allow changes to the implementation ID registers by tracking the VM-wide values. Signed-off-by: Sebastian Ott <sebott@redhat.com> Link: https://lore.kernel.org/r/20250225005401.679536-3-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
2025-02-26  KVM: arm64: Set HCR_EL2.TID1 unconditionally  (Oliver Upton)  1 file, -85/+98
commit 90807748ca3a ("KVM: arm64: Hide SME system registers from guests") added trap handling for SMIDR_EL1, treating it as UNDEFINED as KVM does not support SME. This is right for the most part, however KVM needs to set HCR_EL2.TID1 to _actually_ trap the register. Unfortunately, this comes with some collateral damage as TID1 forces REVIDR_EL1 and AIDR_EL1 to trap as well. KVM has long treated these registers as "invariant" which is an awful term for the following: - Userspace sees the boot CPU values on all vCPUs - The guest sees the hardware values of the CPU on which a vCPU is scheduled Keep the plates spinning by adding trap handling for the affected registers and repaint all of the "invariant" crud into terms of identifying an implementation. Yes, at this point we only need to set TID1 on SME hardware, but REVIDR_EL1 and AIDR_EL1 are about to become mutable anyway. Cc: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Fixes: 90807748ca3a ("KVM: arm64: Hide SME system registers from guests") [maz: handle traps from 32bit] Co-developed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20250225005401.679536-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev>