path: root/arch/arm64/kvm
Age | Commit message | Author | Files | Lines
2020-09-30Merge branch 'kvm-arm64/hyp-pcpu' into kvmarm-master/nextMarc Zyngier18-211/+229
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-30Merge remote-tracking branch 'arm64/for-next/ghostbusters' into kvm-arm64/hyp-pcpuMarc Zyngier11-135/+96
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-30kvm: arm64: Remove unnecessary hyp mappingsDavid Brazdil1-16/+0
With all nVHE per-CPU variables being part of the hyp per-CPU region, mapping them individually is no longer necessary. They are mapped to hyp as part of the overall per-CPU region. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-11-dbrazdil@google.com
2020-09-30kvm: arm64: Set up hyp percpu data for nVHEDavid Brazdil3-4/+62
Add hyp percpu section to linker script and rename the corresponding ELF sections of hyp/nvhe object files. This moves all nVHE-specific percpu variables to the new hyp percpu section. Allocate a sufficient amount of memory for all percpu hyp regions at global KVM init time and create corresponding hyp mappings. The base addresses of hyp percpu regions are kept in a dynamically allocated array in the kernel. Add NULL checks in PMU event-reset code as it may run before KVM memory is initialized. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-10-dbrazdil@google.com
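As an illustrative sketch only (helper names such as nvhe_percpu_size(), CHOOSE_NVHE_SYM() and kvm_arm_hyp_percpu_base follow this series, but the details are assumptions, not the verbatim patch), the init-time allocation looks roughly like:

    /* One hyp per-CPU region per possible core; bases are kept kernel-side. */
    unsigned long *kvm_arm_hyp_percpu_base;

    static int init_hyp_percpu_regions(void)
    {
            int cpu;

            kvm_arm_hyp_percpu_base = kcalloc(num_possible_cpus(),
                                              sizeof(unsigned long), GFP_KERNEL);
            if (!kvm_arm_hyp_percpu_base)
                    return -ENOMEM;

            for_each_possible_cpu(cpu) {
                    struct page *page;
                    void *base;

                    page = alloc_pages(GFP_KERNEL, get_order(nvhe_percpu_size()));
                    if (!page)
                            return -ENOMEM;

                    base = page_address(page);
                    /* Seed the region with the hyp percpu section's initial values. */
                    memcpy(base, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
                    kvm_arm_hyp_percpu_base[cpu] = (unsigned long)base;
            }

            /* The hyp mappings for each region are created separately. */
            return 0;
    }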
2020-09-30kvm: arm64: Create separate instances of kvm_host_data for VHE/nVHEDavid Brazdil4-7/+12
Host CPU context is stored in a global per-cpu variable `kvm_host_data`. In preparation for introducing an independent per-CPU region for nVHE hyp, create two separate instances of `kvm_host_data`, one for VHE and one for nVHE. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-9-dbrazdil@google.com
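Conceptually (a sketch; it is the nVHE build's symbol prefixing that keeps the two definitions distinct):

    /* Shared declaration, visible to both kernel proper and hyp. */
    DECLARE_PER_CPU(struct kvm_host_data, kvm_host_data);

    /* Instance 1: defined in kernel proper, used on the VHE path. */
    DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);

    /* Instance 2: defined in a hyp/nvhe/ object, where the build renames
     * the symbol (to __kvm_nvhe_kvm_host_data), so both can coexist. */
    DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);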
2020-09-30kvm: arm64: Duplicate arm64_ssbd_callback_required for nVHE hypDavid Brazdil2-0/+6
Hyp keeps track of which cores require the SSBD callback by accessing a kernel-proper global variable. Create an nVHE symbol of the same name and copy the value from kernel proper to nVHE as KVM is being enabled on a core. This is done in preparation for separating the percpu memory owned by kernel proper from that owned by nVHE. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-8-dbrazdil@google.com
2020-09-30kvm: arm64: Remove hyp_adr/ldr_this_cpuDavid Brazdil1-1/+1
The hyp_adr/ldr_this_cpu helpers were introduced for use in hyp code because they always needed to use TPIDR_EL2 for base, while adr/ldr_this_cpu from kernel proper would select between TPIDR_EL2 and _EL1 based on VHE/nVHE. Simplify this now that the hyp mode case can be handled using the __KVM_VHE/NVHE_HYPERVISOR__ macros. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-6-dbrazdil@google.com
2020-09-30kvm: arm64: Remove __hyp_this_cpu_readDavid Brazdil5-10/+10
this_cpu_ptr is meant for use in kernel proper because it selects between TPIDR_EL1/2 based on nVHE/VHE. __hyp_this_cpu_ptr was used in hyp to always select TPIDR_EL2. Unify all users behind this_cpu_ptr and friends by selecting _EL2 register under __KVM_NVHE_HYPERVISOR__. VHE continues selecting the register using alternatives. Under CONFIG_DEBUG_PREEMPT, the kernel helpers perform a preemption check which is omitted by the hyp helpers. Preserve the behavior for nVHE by overriding the corresponding macros under __KVM_NVHE_HYPERVISOR__. Extend the checks into VHE hyp code. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-5-dbrazdil@google.com
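The nVHE side of the unified helper reduces to a direct TPIDR_EL2 read; roughly (a sketch of the asm/percpu.h change, assuming the __kern_my_cpu_offset() naming):

    static inline unsigned long __hyp_my_cpu_offset(void)
    {
            /* nVHE hyp runs with preemption disabled, so a plain
             * system register read is sufficient here. */
            return read_sysreg(tpidr_el2);
    }

    #if defined(__KVM_NVHE_HYPERVISOR__)
    #define __my_cpu_offset __hyp_my_cpu_offset()
    #else
    #define __my_cpu_offset __kern_my_cpu_offset()
    #endif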
2020-09-30kvm: arm64: Partially link nVHE hyp code, simplify HYPCOPYDavid Brazdil3-27/+48
Relying on objcopy to prefix the ELF section names of the nVHE hyp code is brittle and prevents us from using wildcards to match specific section names. Improve the build rules by partially linking all '.nvhe.o' files and prefixing their ELF section names using a linker script. Continue using objcopy for prefixing ELF symbol names. One immediate advantage of this approach is that all subsections matching a pattern can be merged into a single prefixed section, e.g. .text and .text.* can be linked into a single '.hyp.text'. This removes the need for -fno-reorder-functions on GCC and will be useful in the future too: LTO builds use .text subsections, compilers routinely generate .rodata subsections, etc. Partially linking all hyp code into a single object file also makes it easier to analyze. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200922204910.7265-2-dbrazdil@google.com
2020-09-29KVM: arm64: Allow patching EL2 vectors even when KASLR is not enabledWill Deacon3-4/+35
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled. Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE") Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%lkp@intel.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()Marc Zyngier3-14/+30
Convert the KVM WA2 code to using the Spectre infrastructure, making the code much more readable. It also allows us to take SSBS into account for the mitigation. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Simplify handling of ARCH_WORKAROUND_2Marc Zyngier8-113/+32
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or not mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4Will Deacon1-1/+1
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2Marc Zyngier1-0/+3
If the system is not affected by Spectre-v2, then advertise to the KVM guest that it is not affected, without the need for a safelist in the guest. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rewrite Spectre-v2 mitigation codeWill Deacon2-8/+8
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASEWill Deacon3-5/+2
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Remove Spectre-related CONFIG_* optionsWill Deacon3-7/+1
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29Merge branch 'kvm-arm64/pmu-5.9' into kvmarm-master/nextMarc Zyngier3-26/+176
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-29KVM: arm64: Mask out filtered events in PMCEID{0,1}_EL1Marc Zyngier2-4/+30
As we can now hide events from the guest, let's also adjust its view of PMCEID{0,1}_EL1 so that it can figure out why some common events are not counting as they should. The astute user can still look into the TRM for their CPU and find out they've been cheated, though. Nobody's perfect. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-29KVM: arm64: Add PMU event filtering infrastructureMarc Zyngier2-6/+65
It can be desirable to expose a PMU to a guest, and yet not want the guest to be able to count some of the implemented events (because this would give information on shared resources, for example). For this, let's extend the PMUv3 device API, and offer a way to set up a bitmap of the allowed events (the default being no bitmap, and thus no filtering). Userspace can thus allow/deny ranges of events. The default policy depends on the "polarity" of the first filter that is set up (default deny if the filter allows events, and default allow if the filter denies events). This makes it possible to configure exactly what is allowed for a given guest. Note that although the ioctl is per-vcpu, the map of allowed events is global to the VM (it can be set up from any vcpu until the vcpu PMU is initialized). Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
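From userspace, a filter is installed through the vcpu device attribute interface. A hedged sketch (struct and attribute names as added by this series; check the final uapi headers before relying on them):

    #include <err.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hide one range of PMU events from the guest. Because the first
     * filter installed is a DENY, the default policy for everything
     * else becomes allow. */
    static void pmu_deny_events(int vcpu_fd, uint16_t base, uint16_t nevents)
    {
            struct kvm_pmu_event_filter filter = {
                    .base_event = base,
                    .nevents    = nevents,
                    .action     = KVM_PMU_EVENT_DENY,
            };
            struct kvm_device_attr attr = {
                    .group = KVM_ARM_VCPU_PMU_V3_CTRL,
                    .attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
                    .addr  = (uint64_t)&filter,
            };

            if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
                    err(1, "KVM_SET_DEVICE_ATTR(PMU_V3_FILTER)");
    }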
2020-09-29KVM: arm64: Use event mask matching architecture revisionMarc Zyngier1-5/+75
The PMU code suffers from a small defect where we assume that the event number provided by the guest is always 16 bits wide, even if the CPU only implements the ARMv8.0 architecture. This isn't really problematic in the sense that the event number ends up in a system register, which crops it to the right width, but it still needs fixing. In order to make it work, let's probe the version of the PMU that the guest is going to use. This is done by temporarily creating a kernel event and looking at the PMUVer field that has been saved at probe time in the associated arm_pmu structure. This in turn gets saved in the kvm structure, and subsequently used to compute the event mask that gets used throughout the PMU code. Signed-off-by: Marc Zyngier <maz@kernel.org>
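The resulting mask computation reduces to something like the following sketch (PMUVer encodings per the architecture; the exact constants and the kvm->arch field name are assumptions):

    static u32 kvm_pmu_event_mask(struct kvm *kvm)
    {
            switch (kvm->arch.pmuver) {
            case 1:                 /* ARMv8.0: 10-bit event numbers */
                    return GENMASK(9, 0);
            case 4:                 /* ARMv8.1 */
            case 5:                 /* ARMv8.4 */
            case 6:                 /* ARMv8.5: 16-bit event numbers */
                    return GENMASK(15, 0);
            default:
                    WARN_ONCE(1, "unknown PMU version %d\n", kvm->arch.pmuver);
                    return GENMASK(9, 0);
            }
    }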
2020-09-29KVM: arm64: Refactor PMU attribute error handlingMarc Zyngier1-12/+7
The PMU emulation error handling is pretty messy when dealing with attributes. Let's refactor it so that we have less duplication, and that it is easy to extend later on. A functional change is that kvm_arm_pmu_v3_init() used to return -ENXIO when the PMU feature wasn't set. The error is now reported as -ENODEV, matching the documentation. -ENXIO is still returned when the interrupt isn't properly configured. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-28KVM: arm64: pmu: Make overflow handler NMI safeJulien Thierry1-1/+25
kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from NMI context, defer waking the vcpu to an irq_work queue. kvm_destroy_vm() can free a vcpu while it is not running; prevent running the irq_work for a non-existent vcpu by calling irq_work_sync() on the PMU destroy path. [Alexandru E.: Added irq_work_sync()] Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox) Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: kvm@vger.kernel.org Cc: kvmarm@lists.cs.columbia.edu Link: https://lore.kernel.org/r/20200924110706.254996-6-alexandru.elisei@arm.com Signed-off-by: Will Deacon <will@kernel.org>
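In outline (a sketch of the approach rather than the verbatim diff; the overflow_work field name follows this patch):

    /* Runs in IRQ context, where kvm_vcpu_kick() is safe. */
    static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
    {
            struct kvm_pmu *pmu = container_of(work, struct kvm_pmu, overflow_work);
            struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmu->pmc);

            kvm_vcpu_kick(vcpu);
    }

    /* In the overflow handler, which may be called in NMI context: */
    if (in_nmi())
            irq_work_queue(&vcpu->arch.pmu.overflow_work);
    else
            kvm_vcpu_kick(vcpu);

    /* And on the PMU destroy path, so the irq_work cannot outlive the vcpu: */
    irq_work_sync(&vcpu->arch.pmu.overflow_work);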
2020-09-22Merge branch 'x86-seves-for-paolo' of https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into HEADPaolo Bonzini9-73/+138
2020-09-21Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds2-3/+3
Pull kvm fixes from Paolo Bonzini:
 "ARM:
   - fix fault on page table writes during instruction fetch

  s390:
   - doc improvement

  x86:
   - The obvious patches are always the ones that turn out to be
     completely broken. /me hangs his head in shame"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  Revert "KVM: Check the allocation of pv cpu mask"
  KVM: arm64: Remove S1PTW check from kvm_vcpu_dabt_iswrite()
  KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch
  docs: kvm: add documentation for KVM_CAP_S390_DIAG318
2020-09-21Merge tag 'kvmarm-fixes-5.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-masterPaolo Bonzini2-3/+3
KVM/arm64 fixes for 5.9, take #2:
 - Fix handling of S1 Page Table Walk permission fault at S2 on instruction fetch
 - Cleanup kvm_vcpu_dabt_iswrite()
2020-09-18KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetchMarc Zyngier2-3/+3
KVM currently assumes that an instruction abort can never be a write. This is in general true, except when the abort is triggered by a S1PTW on instruction fetch that tries to update the S1 page tables (to set AF, for example). This can happen if the page tables have been paged out and brought back in without seeing a direct write to them (they are thus marked read only), and the fault handling code will make the PT executable(!) instead of writable. The guest gets stuck forever. In these conditions, the permission fault must be considered as a write so that the Stage-1 update can take place. This is essentially the I-side equivalent of the problem fixed by 60e21a0ef54c ("arm64: KVM: Take S1 walks into account when determining S2 write faults"). Update kvm_is_write_fault() to return true on IABT+S1PTW, and introduce kvm_vcpu_trap_is_exec_fault(), which only returns true for an execution fault that is not caused by a S1PTW. Additionally, kvm_vcpu_dabt_iss1tw() is renamed to kvm_vcpu_abt_iss1tw(), as the above makes it plain that it isn't specific to data aborts. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Will Deacon <will@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200915104218.1284701-2-maz@kernel.org
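The resulting logic is essentially (sketch):

    static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
    {
            /* A stage-1 page table update must be treated as a write,
             * even when the original access was an instruction fetch. */
            if (kvm_vcpu_abt_iss1tw(vcpu))
                    return true;

            if (kvm_vcpu_trap_is_iabt(vcpu))
                    return false;

            return kvm_vcpu_dabt_iswrite(vcpu);
    }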
2020-09-18Merge branch 'kvm-arm64/misc-5.10' into kvmarm-master/nextMarc Zyngier2-22/+3
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-18Merge branch 'kvm-arm64/pt-new' into kvmarm-master/nextMarc Zyngier1-8/+18
Signed-off-by: Marc Zyngier <maz@kernel.org>
# Conflicts:
#	arch/arm64/kvm/mmu.c
2020-09-18KVM: arm64: Fix doc warnings in mmu codeXiaofei Tan1-2/+4
Fix the following warnings, caused by a mismatch between function parameters and comments. arch/arm64/kvm/mmu.c:128: warning: Function parameter or member 'mmu' not described in '__unmap_stage2_range' arch/arm64/kvm/mmu.c:128: warning: Function parameter or member 'may_block' not described in '__unmap_stage2_range' arch/arm64/kvm/mmu.c:128: warning: Excess function parameter 'kvm' description in '__unmap_stage2_range' arch/arm64/kvm/mmu.c:499: warning: Function parameter or member 'writable' not described in 'kvm_phys_addr_ioremap' arch/arm64/kvm/mmu.c:538: warning: Function parameter or member 'mmu' not described in 'stage2_wp_range' arch/arm64/kvm/mmu.c:538: warning: Excess function parameter 'kvm' description in 'stage2_wp_range' Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1600307269-50957-1-git-send-email-tanxiaofei@huawei.com
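The fix amounts to making each kernel-doc block match its function signature, e.g. (parameter descriptions illustrative):

    /**
     * __unmap_stage2_range() - Clear a range of stage-2 mappings
     * @mmu:       the stage-2 MMU structure the range belongs to
     * @start:     first intermediate physical address of the range
     * @size:      size of the range
     * @may_block: whether the walk is allowed to reschedule between entries
     */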
2020-09-18KVM: arm64: Do not flush memslot if FWB is supportedAlexandru Elisei1-1/+1
As a result of a KVM_SET_USER_MEMORY_REGION ioctl, KVM flushes the dcache for the memslot being changed to ensure a consistent view of memory between the host and the guest: the host runs with caches enabled, and it is possible for the data written by the hypervisor to still be in the caches, but the guest is running with stage 1 disabled, meaning data accesses are to Device-nGnRnE memory, bypassing the caches entirely. Flushing the dcache is not necessary when KVM enables FWB, because it forces the guest to use cacheable memory accesses. The current behaviour does not change, as the dcache flush helpers execute the cache operation only if FWB is not enabled, but walking the stage 2 table is avoided. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915170442.131635-1-alexandru.elisei@arm.com
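The guard is roughly a one-liner (sketch; stage2_flush_memslot() is the pre-existing walker):

    /* With FWB the guest's accesses are forced cacheable, so the
     * stage-2 walk and dcache clean can be skipped entirely. */
    if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
            stage2_flush_memslot(kvm, memslot);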
2020-09-18KVM: arm64: vgic-debug: Convert to use DEFINE_SEQ_ATTRIBUTE macroLiu Shixin1-22/+2
Use the DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200916025023.3992679-1-liushixin2@huawei.com
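The conversion follows the macro's convention that the seq_operations are named <name>_sops; roughly:

    static const struct seq_operations vgic_debug_sops = {
            .start = vgic_debug_start,
            .next  = vgic_debug_next,
            .stop  = vgic_debug_stop,
            .show  = vgic_debug_show,
    };

    /* Expands to vgic_debug_open() and vgic_debug_fops. */
    DEFINE_SEQ_ATTRIBUTE(vgic_debug);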
2020-09-18KVM: arm64: Fix inject_fault.c kernel-doc warningsTian Tao1-0/+1
Fix kernel-doc warnings. arch/arm64/kvm/inject_fault.c:210: warning: Function parameter or member 'vcpu' not described in 'kvm_inject_undefined' Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1600154512-44624-1-git-send-email-tiantao6@hisilicon.com
2020-09-18KVM: arm64: Try PMD block mappings if PUD mappings are not supportedAlexandru Elisei1-5/+14
When userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to use the same block size to map the faulting IPA in stage 2. If stage 2 cannot use the same block mapping because the block size doesn't fit in the memslot or the memslot is not properly aligned, user_mem_abort() will fall back to a page mapping, regardless of the block size. We can do better for PUD-backed hugetlbfs by checking if a PMD block mapping is supported before deciding to use a page. vma_pagesize is an unsigned long, use 1UL instead of 1ULL when assigning its value. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200910133351.118191-1-alexandru.elisei@arm.com
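The downgrade is a small check before falling back to pages; roughly (sketch):

    /* Prefer a PMD block over individual pages when the PUD-sized
     * mapping cannot be used for this memslot. */
    if (vma_pagesize == PUD_SIZE &&
        !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
            vma_pagesize = PMD_SIZE;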
2020-09-16Merge branch 'kvm-arm64/nvhe-hyp-context' into kvmarm-master/nextMarc Zyngier14-244/+456
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-09-15KVM: arm64: nVHE: Fix pointers during SMCCC conversionAndrew Scull4-12/+8
The host need not concern itself with the pointer differences for the hyp interfaces that are shared between VHE and nVHE so leave it to the hyp to handle. As the SMCCC function IDs are converted into function calls, it is a suitable place to also convert any pointer arguments into hyp pointers. This, additionally, eases the reuse of the handlers in different contexts. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-20-ascull@google.com
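Inside a hyp-side handler this looks roughly like (sketch):

    /* The host passes a kernel VA; hyp converts it itself before use. */
    struct kvm_vcpu *vcpu = (struct kvm_vcpu *)host_ctxt->regs.regs[1];

    ret = __kvm_vcpu_run(kern_hyp_va(vcpu));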
2020-09-15KVM: arm64: nVHE: Migrate hyp-init to SMCCCAndrew Scull4-52/+43
To complete the transition to SMCCC, the hyp initialization is given a function ID. This looks neater than comparing the hyp stub function IDs to the page table physical address. Some care is taken to only clobber x0-3 before the host context is saved as only those registers can be clobbered according to SMCCC. Fortunately, only a few acrobatics are needed. The possible new tpidr_el2 is moved to the argument in x2 so that it can be stashed in tpidr_el2 early to free up a scratch register. The page table configuration then makes use of x0-2. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-19-ascull@google.com
2020-09-15KVM: arm64: nVHE: Migrate hyp interface to SMCCCAndrew Scull3-32/+101
Rather than passing arbitrary function pointers to run at hyp, define an equivalent set of SMCCC functions. Since the SMCCC functions are strongly tied to the original function prototypes, it is not expected for the host to ever call an invalid ID, but a warning is raised if this does ever occur. As __kvm_vcpu_run is used for every switch between the host and a guest, it is explicitly singled out to be identified before the other function IDs to improve the performance of the hot path. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-18-ascull@google.com
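The hyp-side dispatcher then looks roughly like the following sketch (only one switch case shown; KVM_HOST_SMCCC_FUNC() is the ID macro introduced by this series):

    static void handle_host_hcall(unsigned long func_id,
                                  struct kvm_cpu_context *host_ctxt)
    {
            unsigned long ret = 0;

            /* __kvm_vcpu_run is the hot path, so identify it first. */
            if (func_id == KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run)) {
                    struct kvm_vcpu *vcpu =
                            (struct kvm_vcpu *)host_ctxt->regs.regs[1];

                    ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
                    goto out;
            }

            switch (func_id) {
            case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
                    __kvm_flush_vm_context();
                    break;
            default:
                    /* The host should never pass an unknown ID. */
                    WARN_ON(1);
                    host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
                    return;
            }

    out:
            host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
            host_ctxt->regs.regs[1] = ret;
    }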
2020-09-15KVM: arm64: nVHE: Pass pointers consistently to hyp-initAndrew Scull2-1/+1
Rather than some being kernel pointer and others being hyp pointers, standardize on all pointers being hyp pointers. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-15-ascull@google.com
2020-09-15KVM: arm64: nVHE: Handle hyp panicsAndrew Scull2-33/+63
Restore the host context when panicking from hyp to give the best chance of the panic being clean. The host requires that registers be preserved such as x18 for the shadow callstack. If the panic is caused by an exception from EL1, the host context is still valid so the panic can return straight back to the host. If the panic comes from EL2 then it's most likely that the hyp context is active and the host context needs to be restored. There are windows, before the host context is saved and after it is restored, in which restoration would be attempted incorrectly and the panic won't be clean. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-14-ascull@google.com
2020-09-15KVM: arm64: nVHE: Switch to hyp context for EL2Andrew Scull4-19/+89
Save and restore the host context when switching to and from hyp. This gives hyp its own context that the host will not see as a step towards a full trust boundary between the two. SP_EL0 and pointer authentication keys are currently shared between the host and hyp so don't need to be switched yet. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-13-ascull@google.com
2020-09-15KVM: arm64: Share context save and restore macrosAndrew Scull1-39/+0
To avoid duplicating the context save and restore macros, move them into a shareable header. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-12-ascull@google.com
2020-09-15KVM: arm64: Restore hyp when panicking in guest contextAndrew Scull4-4/+34
If the guest context is loaded when a panic is triggered, restore the hyp context so e.g. the shadow call stack works when hyp_panic() is called and SP_EL0 is valid when the host's panic() is called. Use the hyp context's __hyp_running_vcpu field to track when hyp transitions to and from the guest vcpu so the exception handlers know whether the context needs to be restored. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-11-ascull@google.com
2020-09-15KVM: arm64: Update context references from host to hypAndrew Scull1-11/+11
Hyp now has its own nominal context for saving and restoring its state when switching to and from a guest. Update the related comments and utilities to match the new name. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-10-ascull@google.com
2020-09-15KVM: arm64: Introduce hyp contextAndrew Scull5-8/+18
During __guest_enter, save and restore from a new hyp context rather than the host context. This is preparation for separation of the hyp and host context in nVHE. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-9-ascull@google.com
2020-09-15KVM: arm64: nVHE: Don't consume host SErrors with ESBAndrew Scull1-1/+5
The ESB at the start of the host vector may cause SErrors to be consumed to DISR_EL1. However, this is not checked for the host so the SError could go unhandled. Remove the ESB so that SErrors are not consumed but are instead left pending for the host to consume. __guest_enter already defers entry into a guest if there are any SErrors pending. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20200915104643.2543892-8-ascull@google.com
2020-09-15KVM: arm64: nVHE: Use separate vector for the hostAndrew Scull5-68/+122
The host is treated differently from the guests when an exception is taken so introduce a separate vector that is specialized for the host. This also allows the nVHE specific code to move out of hyp-entry.S and into nvhe/host.S. The host is only expected to make HVC calls and anything else is considered invalid and results in a panic. Hyp initialization is now passed the vector that is used for the host and it is swapped for the guest vector during the context switch. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-7-ascull@google.com
2020-09-15KVM: arm64: Save chosen hyp vector to a percpu variableAndrew Scull2-2/+5
Introduce a percpu variable to hold the address of the selected hyp vector that will be used with guests. This avoids the selection process each time a guest is being entered and can be used by nVHE when a separate vector is introduced for the host. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-6-ascull@google.com
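A sketch of the variable and its one-time per-CPU initialisation (assuming kvm_get_hyp_vector() as the existing selection helper):

    DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);

    static void cpu_set_hyp_vector(void)
    {
            /* Run the (potentially slow) vector selection once, at init. */
            __this_cpu_write(kvm_hyp_vector,
                             (unsigned long)kvm_get_hyp_vector());
    }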
2020-09-15KVM: arm64: Remove kvm_host_data_t typedefAndrew Scull1-2/+2
The kvm_host_data_t typedef is used inconsistently and goes against the kernel's coding style. Remove it in favour of the full struct specifier. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-4-ascull@google.com
2020-09-15KVM: arm64: Remove hyp_panic argumentsAndrew Scull4-15/+14
hyp_panic is able to find all the context it needs from within itself so remove the argument. The __hyp_panic wrapper becomes redundant so is also removed. Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200915104643.2543892-3-ascull@google.com