author		Simon Guo <wei.guo.simon@gmail.com>	2018-05-23 10:02:01 +0300
committer	Paul Mackerras <paulus@ozlabs.org>	2018-06-01 03:30:10 +0300
commit		5706340a339283fe60d55ddc72ee7728a571a834 (patch)
tree		2f5c12185399ad0671cdb3baa0ca4360712aa0ab /arch/powerpc/kvm/book3s_pr.c
parent		533082ae86e2f1ff6cb9eca7a25202a81fc0567e (diff)
download	linux-5706340a339283fe60d55ddc72ee7728a571a834.tar.xz
KVM: PPC: Book3S PR: Always fail transactions in guest privileged state
Currently the kernel doesn't use transactional memory.
But there is an issue for privileged state in the guest: the
tbegin/tsuspend/tresume/tabort TM instructions can change the MSR TM
bits without trapping into the PR host. So the following sequence
leads to a stale mfmsr result:
tbegin <- MSR bits updated to transaction active
beq    <- failure handler branch
mfmsr  <- still reads MSR bits from the magic page, which
          show the transaction as inactive
This is not an issue for non-privileged guest state, since its mfmsr
is not patched with the magic page and always traps into the PR host.
This patch always fails a tbegin attempt made in guest privileged
state, so that the above issue cannot arise. This is benign, since
the (guest) kernel currently doesn't initiate any transactions.
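To make the sequence above concrete, here is a hedged sketch of the guest-kernel
code path being described; it is not part of the patch or of the referenced test,
and the function name is invented for illustration. It has to run in guest
privileged state (mfmsr is a privileged instruction) and be built with HTM
support (e.g. gcc -mhtm). Before this patch, a PR guest patches mfmsr to read
the magic page, so the value returned below can still show the transaction as
inactive even though tbegin. just succeeded; after this patch, MSR[TM] is
cleared in the shadow MSR, so tbegin. traps into the PR host and only the
failure path is ever taken.

static unsigned long guest_priv_tbegin_then_mfmsr(void)
{
	unsigned long msr = 0;

	asm volatile(
		"tbegin.\n\t"	/* flips the MSR transaction-state bits	*/
		"beq	1f\n\t"	/* taken when the transaction fails	*/
		"mfmsr	%0\n\t"	/* patched to read the magic page	*/
		"tend.\n\t"	/* commit the transaction		*/
		"1:\n\t"
		: "+r" (msr)
		:
		: "cr0", "memory");

	return msr;	/* 0 on the failure path, an MSR image otherwise */
}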
Test case:
https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Diffstat (limited to 'arch/powerpc/kvm/book3s_pr.c')
-rw-r--r--	arch/powerpc/kvm/book3s_pr.c	11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index c0f45c83f683..cc26be87e3b8 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -207,6 +207,15 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 #ifdef CONFIG_PPC_BOOK3S_64
 	smsr |= MSR_ISF | MSR_HV;
 #endif
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * in guest privileged state, we want to fail all TM transactions.
+	 * So disable MSR TM bit so that all tbegin. will be able to be
+	 * trapped into host.
+	 */
+	if (!(guest_msr & MSR_PR))
+		smsr &= ~MSR_TM;
+#endif
 	vcpu->arch.shadow_msr = smsr;
 }
 
@@ -299,7 +308,7 @@ static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
 	tm_disable();
 }
 
-static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
+void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
 {
 	tm_enable();
 	mtspr(SPRN_TFHAR, vcpu->arch.tfhar);
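For reference, a standalone, hedged sketch of what the new hunk does to the
shadow MSR. The helper name, the simplified starting value, and the demo main()
are invented for this example; the MSR bit positions follow
arch/powerpc/include/asm/reg.h, and the real kvmppc_recalc_shadow_msr() applies
more masks than shown here.

#include <stdio.h>
#include <stdint.h>

#define MSR_PR	(UINT64_C(1) << 14)	/* problem (user) state */
#define MSR_TM	(UINT64_C(1) << 32)	/* transactional memory available */

/* Simplified model of the hunk above: hide TM from a privileged guest. */
static uint64_t recalc_shadow_tm(uint64_t guest_msr)
{
	uint64_t smsr = guest_msr;	/* real code derives smsr from more state */

	if (!(guest_msr & MSR_PR))	/* guest is in privileged state */
		smsr &= ~MSR_TM;	/* tbegin. will now trap to the host */
	return smsr;
}

int main(void)
{
	printf("guest kernel: TM %s\n",
	       (recalc_shadow_tm(MSR_TM) & MSR_TM) ? "kept" : "masked");
	printf("guest user  : TM %s\n",
	       (recalc_shadow_tm(MSR_TM | MSR_PR) & MSR_TM) ? "kept" : "masked");
	return 0;
}

With MSR[TM] masked out, the guest kernel's tbegin. no longer changes the MSR
silently; it traps into the host, which can then fail the transaction as the
patch intends.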