author    Will Deacon <will@kernel.org>  2020-09-30 13:24:42 +0300
committer Marc Zyngier <maz@kernel.org>  2020-10-02 11:25:25 +0300
commit    ffd1b63a5860968068e943eab33383a766d30f64 (patch)
tree      3b2450a4440c5d83fd1335df552a0ca69b4e5ee5 /arch/arm64/kvm
parent    b259d137e91d80bf92eac453ffab179eb7941ede (diff)
download  linux-ffd1b63a5860968068e943eab33383a766d30f64.tar.xz
KVM: arm64: Ensure user_mem_abort() return value is initialised
If a change in the MMU notifier sequence number forces user_mem_abort()
to return early when attempting to handle a stage-2 fault, we return
uninitialised stack to kvm_handle_guest_abort(), which could potentially
result in the injection of an external abort into the guest or a spurious
return to userspace. Neither of these is what we want to do.

Initialise 'ret' to 0 in user_mem_abort() so that bailing due to a change
in the MMU notifier sequence number is treated as though the fault was
handled.

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20200930102442.16142-1-will@kernel.org
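For illustration only, here is a minimal, self-contained C sketch of the failure mode and the fix. The handle_fault() function, its seq_changed parameter, and the return values below are invented for the example and are not the actual kvm/mmu.c code; the point is the pattern of an early bail-out taken before 'ret' is ever assigned.

	#include <stdio.h>

	/*
	 * Toy model of the bug: with a bare "int ret;", the early bail-out
	 * path would return whatever happened to be on the stack.
	 * Initialising 'ret' to 0 makes the bail-out look like a
	 * successfully handled fault, so the caller simply retries.
	 */
	static int handle_fault(int seq_changed)
	{
		int ret = 0;	/* the fix: without "= 0" the early path returns garbage */

		if (seq_changed)
			goto out;	/* notifier raced with us; bail and let the fault replay */

		/* ... map the page, then report success or a real error ... */
		ret = 1;

	out:
		return ret;
	}

	int main(void)
	{
		/* 0 means "treated as handled"; other values would be acted on by the caller */
		printf("raced: %d, mapped: %d\n", handle_fault(1), handle_fault(0));
		return 0;
	}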
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r-- arch/arm64/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c5c26a9cb85b..a816cb8e619b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -742,7 +742,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
 {
-	int ret;
+	int ret = 0;
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;