path: root/arch/mips/include
2017-02-03  KVM: MIPS/T&E: Expose CP0_EntryLo0/1 registers  (James Hogan, 1 file, -0/+2)

Expose the CP0_EntryLo0 and CP0_EntryLo1 registers through the KVM register access API. This is fairly straightforward for trap & emulate since we don't support the RI and XI bits.

For the sake of future proofing (particularly for VZ) it is explicitly specified that the API always exposes the 64-bit version of these registers (i.e. with the RI and XI bits in bit positions 63 and 62 respectively), and they are implemented in trap_emul.c rather than mips.c to allow them to be implemented differently for VZ.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
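A minimal userspace sketch of reading one of these registers through the one_reg interface (standard KVM ioctls; the helper name is illustrative and error handling is elided):

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Illustrative helper: fetch the 64-bit view of CP0_EntryLo0. */
    static uint64_t read_entrylo0(int vcpu_fd)
    {
            uint64_t val = 0;
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_MIPS_CP0_ENTRYLO0,
                    .addr = (uintptr_t)&val,
            };

            ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
            return val;     /* RI/XI, where supported, sit in bits 63/62 */
    }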
2017-02-03  KVM: MIPS/T&E: Implement CP0_EBase register  (James Hogan, 1 file, -0/+3)

The CP0_EBase register is a standard feature of MIPS32r2, so we should always have been implementing it properly. However the register value was ignored and wasn't exposed to userland.

Fix the emulation of exceptions and interrupts to use the value stored in guest CP0_EBase, and fix the masks so that the top 3 bits (rather than the standard 2) are fixed, so that it is always in the guest KSeg0 segment.

Also add CP0_EBASE to the KVM one_reg interface so it can be accessed by userland, also allowing the CPU number field to be written (which isn't permitted by the guest).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
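An illustrative sketch of the masking rule described above (the helper name and masks are assumptions, not the patch's exact code); with the top 3 bits fixed at 0b100 the exception base always lands in guest KSeg0:

    /* Hypothetical: sanitise a guest write to CP0_EBase. Bits 31..29
     * are forced to 0b100 (KSeg0); only the vector base bits 28..12
     * remain guest-writable, CPUNum (bits 9..0) only via one_reg. */
    static u32 kvm_ebase_sanitise(u32 val)
    {
            return (val & 0x1ffff000) | 0x80000000;
    }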
2017-02-03  KVM: MIPS/T&E: Move CP0 register access into T&E  (James Hogan, 1 file, -1/+0)

Access to various CP0 registers via the KVM register access API needs to be implementation specific to allow restrictions to be made on changes, for example when VZ guest registers aren't present, so move them all into trap_emul.c in preparation for VZ.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Claim KVM_CAP_READONLY_MEM support  (James Hogan, 1 file, -0/+2)

Now that load/store faults due to read only memory regions are treated as MMIO accesses it is safe to claim support for read only memory regions (KVM_CAP_READONLY_MEM).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
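With the capability claimed, userspace can flag a slot read-only; a hedged example using the standard memory-region ioctl (slot number, addresses and size are illustrative, loosely modelled on the Malta boot flash):

    struct kvm_userspace_memory_region flash = {
            .slot            = 1,
            .flags           = KVM_MEM_READONLY,  /* guest writes exit as MMIO */
            .guest_phys_addr = 0x1fc00000,        /* illustrative flash GPA */
            .memory_size     = 4 << 20,
            .userspace_addr  = (uintptr_t)flash_buf,
    };

    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &flash);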
2017-02-03  KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU  (James Hogan, 1 file, -0/+13)

Implement the SYNC_MMU capability for KVM MIPS, allowing changes in the underlying user host virtual address (HVA) mappings to be promptly reflected in the corresponding guest physical address (GPA) mappings. This allows for several features to work with guest RAM which require mappings to be altered or protected, such as copy-on-write, KSM (Kernel Samepage Merging), idle page tracking, memory swapping, and guest memory ballooning.

There are two main aspects of this change, described below.

The KVM MMU notifier architecture callbacks are implemented so we can be notified of changes in the HVA mappings. These arrange for the guest physical address (GPA) page tables to be modified and possibly for derived mappings (GVA page tables and TLBs) to be flushed.

- kvm_unmap_hva[_range]() - These deal with HVA mappings being removed, for example before a copy-on-write takes place, which requires the corresponding GPA page table mappings to be removed too.

- kvm_set_spte_hva() - These update a GPA page table entry to match the new HVA entry, but must be careful to respect KVM specific configuration such as not dirtying a clean guest page which is dirty to the host, and write protecting writable pages in read only memslots (which will soon be supported).

- kvm[_test]_age_hva() - These update GPA page table entries to be old (invalid) so that access can be tracked, making them young again.

The GPA page fault handling (kvm_mips_map_page) is updated to use gfn_to_pfn_prot() (which may provide read-only pages), to handle asynchronous page table invalidation from MMU notifier callbacks, and to handle more cases in the fast path.

- mmu_notifier_seq is used to detect asynchronous page table invalidations while we're holding a pfn from gfn_to_pfn_prot() outside of kvm->mmu_lock, retrying if invalidations have taken place, e.g. a COW or a KSM page merge.

- The fast path (_kvm_mips_map_page_fast) now handles marking old pages as young / accessed, and disallowing dirtying of clean pages that aren't actually writable (e.g. shared pages that should COW, and read-only memory regions when they are enabled in a future patch).

- Due to the use of MMU notifications we no longer need to keep the page references after we've updated the GPA page tables.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
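The race between taking a GPA fault and a concurrent notifier invalidation is closed with KVM's usual sequence-count idiom; roughly (a simplified paraphrase of the generic KVM pattern of this era, not the exact MIPS code):

    retry:
            mmu_seq = kvm->mmu_notifier_seq;
            smp_rmb();      /* read the sequence before the pfn lookup */

            pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);

            spin_lock(&kvm->mmu_lock);
            if (mmu_notifier_retry(kvm, mmu_seq)) {
                    /* an invalidation (e.g. COW, KSM merge) raced with us */
                    spin_unlock(&kvm->mmu_lock);
                    kvm_release_pfn_clean(pfn);
                    goto retry;
            }
            /* safe to install/update the GPA page table entry here */
            spin_unlock(&kvm->mmu_lock);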
2017-02-03  KVM: MIPS/MMU: Add GPA PT mkclean helper  (James Hogan, 1 file, -0/+1)

Add a helper function to make a range of guest physical address (GPA) mappings in the GPA page table clean so that writes can be caught. This will be used in a few places to manage dirty page logging.

Note that until the dirty bit is transferred from GPA page table entries to GVA page table entries in an upcoming patch this won't trigger a TLB modified exception on write.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/T&E: Handle read only GPA in TLB mod  (James Hogan, 1 file, -5/+0)

Rewrite TLB modified exception handling to handle read only GPA memory regions, instead of unconditionally passing the exception to the guest.

If the guest TLB is not the cause of the exception we call into the normal TLB fault handling depending on the memory segment, which will soon attempt to remap the physical page to be writable (handling dirty page tracking or copy on write in the process). Failing that we fall back to treating it as MMIO, due to a read only memory region.

Once the capability is enabled, this will allow read only memory regions (such as the Malta boot flash as emulated by QEMU) to have writes treated as MMIO, while still allowing reads to run untrapped.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Pass type of fault down to kvm_mips_map_page()  (James Hogan, 1 file, -3/+6)

kvm_mips_map_page() will need to know whether the fault was due to a read or a write in order to support dirty page tracking, KVM_CAP_SYNC_MMU, and read only memory regions, so get that information passed down to it via new bool write_fault arguments to various functions.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Implement kvm_arch_flush_shadow_all/memslot  (James Hogan, 1 file, -3/+8)

Implement the kvm_arch_flush_shadow_all() and kvm_arch_flush_shadow_memslot() KVM functions for MIPS to allow guest physical mappings to be safely changed.

The general MIPS KVM code takes care of flushing of GPA page table entries. kvm_arch_flush_shadow_all() flushes the whole GPA page table, and is always called on the cleanup path so there is no need to acquire the kvm->mmu_lock. kvm_arch_flush_shadow_memslot() flushes only the range of mappings in the GPA page table corresponding to the slot being flushed, and happens when memory regions are moved or deleted.

MIPS KVM implementation callbacks are added for handling the implementation specific flushing of mappings derived from the GPA page tables. These are implemented for trap_emul.c using kvm_flush_remote_tlbs() which should now be functional, and will flush the per-VCPU GVA page tables and ASIDs synchronously (before next entering guest mode or directly accessing GVA space).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/Emulate: Use lockless GVA helpers for cache emulation  (James Hogan, 1 file, -1/+1)

Use the lockless GVA helpers to implement the reading of guest instructions for emulation. This will allow it to handle asynchronous TLB flushes when they are implemented.

This is a little more complicated than the other two cases (get_inst() and dynamic translation) due to the need to emulate the appropriate guest TLB exception when the address isn't present or isn't valid in the guest TLB.

Since there are several protected cache ops that may need to be performed safely, this is abstracted by kvm_mips_guest_cache_op() which is passed a protected cache op function pointer and takes care of the lockless operation and fault handling / retry if the op should fail, taking advantage of the new errors which the protected cache ops can now return. This allows the existing advance fault handling which relied on host TLB lookups to be removed, along with the now unused kvm_mips_host_tlb_lookup().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/T&E: Add lockless GVA access helpers  (James Hogan, 1 file, -0/+15)

Add helpers to allow for lockless direct access to the GVA space, by changing the VCPU mode to READING_SHADOW_PAGE_TABLES for the duration of the access. This allows asynchronous TLB flush requests in future patches to safely trigger either a TLB flush before the direct GVA space access, or a delay until the in-progress lockless direct access is complete.

The kvm_trap_emul_gva_lockless_begin() and kvm_trap_emul_gva_lockless_end() helpers take care of guarding the direct GVA accesses, and kvm_trap_emul_gva_fault() tries to handle a uaccess fault resulting from a flush having taken place.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
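The guarded access then looks roughly like this (helper names are the ones introduced above; the surrounding logic is a paraphrase, not the patch's exact code):

    kvm_trap_emul_gva_lockless_begin(vcpu);  /* mode = READING_SHADOW_PAGE_TABLES */
    err = get_user(inst, (u32 __user *)opc); /* direct GVA access */
    kvm_trap_emul_gva_lockless_end(vcpu);    /* mode restored, flushes may proceed */

    if (err)
            /* maybe a flush raced with us: try to fault the mapping back in */
            ret = kvm_trap_emul_gva_fault(vcpu, (unsigned long)opc, false);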
2017-02-03  KVM: MIPS/MMU: Convert guest physical map to page table  (James Hogan, 1 file, -3/+4)

Currently guest physical memory is mapped to host physical addresses using a single linear array (guest_pmap of length guest_pmap_npages). This was only really meant to be temporary, and isn't sparse, so it's wasteful of memory. A small amount of RAM at GPA 0 and a small boot exception vector at GPA 0x1fc00000 cannot be represented without a full 128KiB guest_pmap allocation (MIPS32 with 16KiB pages), which is one reason why QEMU currently runs its boot code at the top of RAM instead of the usual boot exception vector address.

Instead use the existing infrastructure for host virtual page table management to allocate a page table for guest physical memory too. This should be sufficient for now, assuming the size of physical memory doesn't exceed the size of virtual memory. It may need extending in future to handle XPA (eXtended Physical Addressing) in 32-bit guests, as supported by VZ guests on P5600.

Some of this code is based loosely on Cavium's VZ KVM implementation.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Use CP0_BadInstr[P] for emulation  (James Hogan, 1 file, -0/+4)

When exiting from the guest, store the values of the CP0_BadInstr and CP0_BadInstrP registers if they exist, which contain the encodings of the instructions which caused the last synchronous exception.

When the instruction is needed for emulation, kvm_get_badinstr() and kvm_get_badinstrp() are used instead of calling kvm_get_inst() directly, to decide whether to read the saved CP0_BadInstr/CP0_BadInstrP registers (if they exist), or read the instruction from memory (if not).

The use of these registers should be more robust than using kvm_get_inst(), as it actually gives the instruction encoding seen by the hardware rather than relying on user accessors after the fact, which can be fooled by incoherent icache or a racing code modification. It will also work with VZ, where the guest virtual memory isn't directly accessible by the host with user accessors.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
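A close paraphrase of the described fallback (hedged; the saved-register field name is an assumption):

    int kvm_get_badinstr(u32 *opc, struct kvm_vcpu *vcpu, u32 *out)
    {
            if (cpu_has_badinstr) {
                    *out = vcpu->arch.host_cp0_badinstr; /* saved on guest exit */
                    return 0;
            }
            /* no BadInstr register: fall back to reading guest memory */
            return kvm_get_inst(opc, vcpu, out);
    }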
2017-02-03  KVM: MIPS: Improve kvm_get_inst() error return  (James Hogan, 1 file, -2/+1)

Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a fault reading the guest instruction. This has the rather arbitrary magic value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)".

Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to return the instruction via a u32 *out argument. We can then drop the KVM_INVALID_INST definition entirely.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
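Callers then follow the usual kernel error convention; a sketch of the new calling pattern:

    u32 inst;
    int err;

    err = kvm_get_inst(opc, vcpu, &inst);
    if (err)
            return EMULATE_FAIL;    /* fault while fetching the instruction */
    /* ... decode and emulate 'inst' ... */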
2017-02-03  KVM: MIPS/T&E: Don't treat code fetch faults as MMIO  (James Hogan, 1 file, -0/+27)

In order to make use of the CP0_BadInstr & CP0_BadInstrP registers we need to be a bit more careful not to treat code fetch faults as MMIO, lest we hit an UNPREDICTABLE register value when we try to emulate the MMIO load instruction but there was no valid instruction word available to the hardware.

Add a kvm_is_ifetch_fault() helper to try to figure out whether a load fault was due to a code fetch, and prevent MMIO instruction emulation in that case.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
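A hedged paraphrase of the helper: a load fault is treated as a code fetch when BadVAddr coincides with the faulting PC (or PC + 4 when the fault hit a branch delay slot):

    static bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
    {
            unsigned long badvaddr = vcpu->host_cp0_badvaddr;
            unsigned long epc = msk_isa16_mode(vcpu->pc);
            u32 cause = vcpu->host_cp0_cause;

            if (epc == badvaddr)
                    return true;
            /* branches fault on the instruction in the delay slot */
            if ((cause & CAUSEF_BD) && badvaddr == epc + 4)
                    return true;
            return false;
    }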
2017-02-03  KVM: MIPS/MMU: Drop kvm_get_new_mmu_context()  (James Hogan, 1 file, -5/+0)

MIPS KVM uses its own variation of get_new_mmu_context() which takes an extra vcpu pointer (unused) and does exactly the same thing. Switch to just using get_new_mmu_context() directly and drop KVM's version of it as it doesn't really serve any purpose.

The nearby declarations of kvm_mips_alloc_new_mmu_context(), kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from kvm_host.h, as no definitions or users exist.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/Emulate: Drop redundant TLB flushes on exceptions  (James Hogan, 1 file, -1/+0)

When exceptions are injected into the MIPS KVM guest, the whole host TLB is flushed (except any entries in the guest KSeg0 range). This is certainly not mandated by the architecture when exceptions are taken (userland can't directly change TLB mappings anyway), and is a pretty heavyweight operation:

- There may be hundreds of TLB entries especially when a 512 entry FTLB is present. These are walked and read and conditionally invalidated, so the TLBINV feature can't be used either.

- It'll indiscriminately wipe out entries belonging to other memory spaces. A simple ASID regeneration would be much faster to perform, although it'd wipe out the guest KSeg0 mappings too.

My suspicion is that this was simply to plaster over the fact that kvm_mips_host_tlb_inv() incorrectly only invalidated TLB entries in the ASID for guest usermode, and not the ASID for guest kernelmode.

Now that the recent commit "KVM: MIPS/TLB: Flush host TLB entry in kernel ASID" fixes kvm_mips_host_tlb_inv() to flush TLB entries in the kernelmode ASID when the guest TLB changes, let's drop these calls and the otherwise unused kvm_mips_flush_host_tlb().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/TLB: Drop kvm_local_flush_tlb_all()  (James Hogan, 2 files, -6/+0)

Now that KVM no longer uses wired entries we can safely use local_flush_tlb_all() when we need to flush the entire TLB (on the start of a new ASID cycle). This doesn't flush wired entries, which allows other code to use them without KVM clobbering them all the time. It is also more up to date, knowing about the tlbinv architectural feature, flushing of micro TLB on cores where that is necessary (Loongson I believe), and knows to stop the HTW while doing so.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Use uaccess to read/modify guest instructions  (James Hogan, 1 file, -2/+0)

Now that we have GVA page tables, use standard user accesses with page faults disabled to read & modify guest instructions. This should be more robust (than the rather dodgy method of accessing guest mapped segments by just directly addressing them) and will also work with Enhanced Virtual Addressing (EVA) host kernel configurations where dedicated instructions are needed for accessing user mode memory.

For simplicity and speed we do this regardless of the guest segment the address resides in, rather than handling guest KSeg0 specially with kmap_atomic() as before.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
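In practice the read side reduces to standard uaccess with faults disabled; a sketch (assuming 'opc' is a guest virtual address mapped through the GVA page tables):

    u32 inst;
    int err;

    pagefault_disable();
    err = __get_user(inst, (u32 __user *)opc);   /* -EFAULT on failure */
    pagefault_enable();

    if (err) {
            /* GVA tables flushed or not yet filled: fix up and retry */
    }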
2017-02-03  KVM: MIPS: Drop vm_init() callback  (James Hogan, 1 file, -3/+0)

Now that the commpage doesn't use wired TLB entries, the per-CPU vm_init() callback is the only work done by kvm_mips_init_vm_percpu().

The trap & emulate implementation doesn't actually need to do anything from vm_init(), and the future VZ implementation would be better served by a kvm_arch_hardware_enable callback anyway.

Therefore drop the vm_init() callback entirely, allowing the kvm_mips_init_vm_percpu() function to also be dropped, along with the kvm_mips_instance atomic counter.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/MMU: Convert commpage fault handling to page tables  (James Hogan, 1 file, -3/+0)

Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of commpage faults from the guest kernel to fill the GVA page table and invalidate the TLB entry, rather than filling the wired TLB entry directly.

For simplicity we no longer use a wired entry for the commpage (refill should be much cheaper with the fast-path handler anyway). Since we don't need to manipulate the TLB directly any longer, move the function from tlb.c to mmu.c. This puts it closer to the similar functions handling KSeg0 and TLB mapped page faults from the guest.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/MMU: Convert TLB mapped faults to page tables  (James Hogan, 1 file, -5/+2)

Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of page faults in TLB mapped segment from the guest to fill a single GVA page table entry and invalidate the TLB entry, rather than filling a TLB entry pair directly.

Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions in mmu.c and kvm_mips_host_tlb_write() in tlb.c.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW  (James Hogan, 1 file, -0/+17)

Implement invalidation of specific pairs of GVA page table entries in one or both of the GVA page tables. This is used when existing mappings are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due to the sharing of page tables in the host kernel range, we should be careful not to allow host pages to be invalidated.

Add a helper kvm_mips_walk_pgd() which can be used when walking of either GPA (future patches) or GVA page tables is needed, optionally with allocation of page tables along the way when they don't exist.

GPA page table walking will need to be protected by the kvm->mmu_lock, so we also add a small MMU page cache in each KVM VCPU, like that found for other architectures but smaller. This allows enough pages to be pre-allocated to handle a single fault without holding the lock, allowing the helper to run with the lock held without having to handle allocation failures.

Using the same mechanism for GVA allows the same code to be used, and allows it to use the same cache of allocated pages if the GPA walk didn't need to allocate any new tables.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
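A hedged sketch of the shared walker (names follow the text; the cache-alloc helper is assumed to be a local pre-filled-cache accessor). Passing a NULL cache makes the walk fail rather than allocate:

    static pte_t *kvm_mips_walk_pgd(pgd_t *pgd,
                                    struct kvm_mmu_memory_cache *cache,
                                    unsigned long addr)
    {
            pud_t *pud;
            pmd_t *pmd;

            pgd += pgd_index(addr);
            pud = pud_offset(pgd, addr);
            if (pud_none(*pud)) {
                    pmd_t *new_pmd;

                    if (!cache)
                            return NULL;
                    new_pmd = mmu_memory_cache_alloc(cache);
                    pmd_init((unsigned long)new_pmd,
                             (unsigned long)invalid_pte_table);
                    pud_populate(NULL, pud, new_pmd);
            }
            pmd = pmd_offset(pud, addr);
            if (pmd_none(*pmd)) {
                    pte_t *new_pte;

                    if (!cache)
                            return NULL;
                    new_pte = mmu_memory_cache_alloc(cache);
                    clear_page(new_pte);
                    pmd_populate_kernel(NULL, pmd, new_pte);
            }
            return pte_offset_kernel(pmd, addr);
    }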
2017-02-03  KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes  (James Hogan, 1 file, -0/+17)

Implement invalidation of large ranges of virtual addresses from GVA page tables in response to a guest ASID change (immediately for guest kernel page table, lazily for guest user page table).

We iterate through a range of page tables invalidating entries and freeing fully invalidated tables. To minimise overhead the exact ranges invalidated depends on the flags argument to kvm_mips_flush_gva_pt(), which also allows it to be used in future KVM_CAP_SYNC_MMU patches in response to GPA changes, which unlike guest TLB mapping changes affects guest KSeg0 mappings.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID  (James Hogan, 1 file, -1/+2)

Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any matching TLB entry in the kernel ASID rather than assuming only the TLB entries in the user ASID can change. Two new bool user/kernel arguments allow the caller to indicate whether the mapping should affect each of the ASIDs for guest user/kernel mode.

- kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can now invalidate any corresponding TLB entry in both the kernel ASID (guest kernel may have accessed any guest mapping), and the user ASID if the entry being replaced is in guest USeg (where guest user may also have accessed it).

- The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault handlers in later patches) can now invalidate the corresponding TLB entry in whichever ASID is currently active, since only a single page table will have been updated anyway.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Add fast path TLB refill handler  (James Hogan, 1 file, -0/+1)

Use functions from the general MIPS TLB exception vector generation code (tlbex.c) to construct a fast path TLB refill handler similar to the general one, but cut down and capable of preserving K0 and K1.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/T&E: Activate GVA page tables in guest context  (James Hogan, 1 file, -1/+3)

Activate the GVA page tables when in guest context. This will allow the normal Linux TLB refill handler to fill from it when guest memory is read, as well as preventing accidental reading from user memory.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Wire up vcpu uninit  (James Hogan, 1 file, -1/+1)

Wire up a vcpu uninit implementation callback. This will be used for the clean up of GVA->HPA page tables.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/T&E: active_mm = init_mm in guest context  (James Hogan, 1 file, -0/+4)

Set init_mm as the active_mm and update mm_cpumask(current->mm) to reflect that it isn't active when in guest context. This prevents cache management code from attempting cache flushes on host virtual addresses while in guest context, for example due to cache management IPIs or later when writing of dynamically translated code hits copy on write.

We do this using helpers in static kernel code to avoid having to export init_mm to modules.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
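A hedged sketch of what such helpers look like (the names and exact placement are assumptions based on the description; built into the kernel so init_mm needn't be exported):

    /* Leave the host mm while running the guest: cache management IPIs
     * consulting mm_cpumask() will then skip this CPU. */
    void kvm_mips_suspend_mm(int cpu)
    {
            cpumask_clear_cpu(cpu, mm_cpumask(current->active_mm));
            current->active_mm = &init_mm;
    }

    /* Back from guest context: rejoin the task's own mm. */
    void kvm_mips_resume_mm(int cpu)
    {
            cpumask_set_cpu(cpu, mm_cpumask(current->mm));
            current->active_mm = current->mm;
    }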
2017-02-03  KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks  (James Hogan, 1 file, -0/+2)

Add implementation callbacks for entering the guest (vcpu_run()) and reentering the guest (vcpu_reenter()), allowing implementation specific operations to be performed before entering the guest or after returning to the host without cluttering kvm_arch_vcpu_ioctl_run().

This allows the T&E specific lazy user GVA flush to be moved into trap_emul.c, along with disabling of the HTW. We also move kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer state prior to delivering interrupts.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS: Remove duplicated ASIDs from vcpu  (James Hogan, 1 file, -3/+1)

The kvm_vcpu_arch structure contains both mm_structs for allocating MMU contexts (primarily the ASID) but it also copies the resulting ASIDs into guest_{user,kernel}_asid[] arrays which are referenced from uasm generated code.

This duplication doesn't seem to serve any purpose, and it gets in the way of generalising the ASID handling across guest kernel/user modes, so let's just extract the ASID straight out of the mm_struct on demand, and in fact there are convenient cpu_context() and cpu_asid() macros for doing so.

To reduce the verbosity of this code we also add kern_mm and user_mm local variables where the kernel and user mm_structs are used.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
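For example (a sketch using the existing accessors named above rather than the removed arrays; the mode test macro is from kvm_host.h):

    struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
    struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;

    /* cpu_asid(cpu, mm) extracts the ASID from cpu_context(cpu, mm) */
    write_c0_entryhi(cpu_asid(cpu, KVM_GUEST_KERNEL_MODE(vcpu) ?
                                   kern_mm : user_mm));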
2017-02-03  KVM: MIPS: Convert get/set_regs -> vcpu_load/put  (James Hogan, 1 file, -2/+2)

Convert the get_regs() and set_regs() callbacks to vcpu_load() and vcpu_put(), which provide a cpu argument and more closely match the kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by.

This is in preparation for moving ASID management into the implementations.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  KVM: MIPS/MMU: Simplify ASID restoration  (James Hogan, 1 file, -3/+0)

KVM T&E uses an ASID for guest kernel mode and an ASID for guest user mode. The current ASID is saved when the guest is scheduled out, and restored when scheduling back in, with checks for whether the ASID needs to be regenerated.

This isn't really necessary as the ASID can be easily determined by the current guest mode, so let's simplify it to just read the required ASID from guest_kernel_asid or guest_user_asid even if the ASID hasn't been regenerated.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03  MIPS: Add return errors to protected cache ops  (James Hogan, 1 file, -20/+35)

The protected cache ops contain no out of line fixup code to return an error code in the event of a fault, with the cache op being skipped in that case. For KVM, however, we'd like to detect this case: page faulting will be disabled, so it could happen during normal operation if the GVA page tables were flushed, and it needs to be handled by the caller.

Add the out-of-line fixup code to load the error value -EFAULT into the return variable, and adapt the protected cache line functions to pass the error back to the caller.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
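A hedged sketch of the fixup pattern (the real macros in asm/r4kcache.h differ in detail; STR() and PTR come from asm/asm.h):

    static inline int protected_cache_op(unsigned int op, unsigned long addr)
    {
            int err = 0;

            __asm__ __volatile__(
            "1:     cache   %1, (%2)                \n"
            "2:                                     \n"
            "       .section .fixup,\"ax\"          \n"
            "3:     li      %0, %3                  \n"
            "       j       2b                      \n"
            "       .previous                       \n"
            "       .section __ex_table,\"a\"       \n"
            "       "STR(PTR)" 1b, 3b               \n"
            "       .previous                       \n"
            : "+r" (err)
            : "i" (op), "r" (addr), "i" (-EFAULT));

            return err;     /* 0 on success, -EFAULT if the op faulted */
    }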
2017-02-03  MIPS: Export some tlbex internals for KVM to use  (James Hogan, 1 file, -0/+26)

Export the TLB exception code generating functions so that KVM can construct a fast TLB refill handler for guest context without reinventing the wheel quite so much.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
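Roughly how a consumer can drive the exported builders (a simplified sketch; signatures are as recalled from asm/tlbex.h, and a real KVM refill handler additionally has to save and restore K0/K1 around this):

    u32 *p = handler;                    /* emit instructions here */
    struct uasm_label labels[4], *l = labels;
    struct uasm_reloc relocs[4], *r = relocs;

    #ifdef CONFIG_64BIT
            build_get_pmde64(&p, &l, &r, K0, K1);  /* K1 = PTE pointer */
    #else
            build_get_pgde32(&p, K0, K1);
    #endif
            build_get_ptep(&p, K0, K1);
            build_update_entries(&p, K0, K1);
            build_tlb_write_entry(&p, &l, &r, tlb_random);
            uasm_i_eret(&p);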
2017-02-03  MIPS: uasm: Add include guards in asm/uasm.h  (James Hogan, 1 file, -0/+5)

Add include guards in asm/uasm.h to allow it to be safely used by a new header asm/tlbex.h in the next patch to expose TLB exception building functions for KVM to use.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-02  MIPS: Move pgd_alloc() out of header  (James Hogan, 1 file, -15/+1)

pgd_alloc() references init_mm which is not exported to modules. In order for KVM to be able to use pgd_alloc() to allocate GVA page tables, move pgd_alloc() into a new pgtable.c file and export it to modules.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
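The moved function is small; a sketch of its shape in the new pgtable.c (close to the long-standing inline version, now with an export):

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
            pgd_t *ret, *init;

            ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
            if (ret) {
                    init = pgd_offset(&init_mm, 0UL);
                    pgd_init((unsigned long)ret);
                    memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
                           (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
            }

            return ret;
    }
    EXPORT_SYMBOL_GPL(pgd_alloc);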
2017-02-01  sched/cputime: Remove generic asm headers  (Frederic Weisbecker, 1 file, -1/+0)

cputime_t is now only used by two architectures:

* powerpc (when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y)
* s390

And since the core doesn't use it anymore, we don't need any arch support from the others. So we can remove their stub implementations.

A final cleanup would be to provide an efficient pure arch implementation of cputime_to_nsec() for s390 and powerpc and finally remove include/linux/cputime.h.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-36-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-01-25  MIPS: BCM47XX: Add Luxul devices to the database  (Dan Haab, 1 file, -0/+9)

So far only the Luxul XWR-1750 router was supported. This adds a set of other Luxul devices based on BCM47XX. It's standard support for LEDs and buttons.

Signed-off-by: Dan Haab <dhaab@luxul.com>
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Cc: Rafał Miłecki <zajec5@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15106/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-25  MIPS: VDSO: avoid duplicate CAC_BASE definition  (Arnd Bergmann, 1 file, -2/+4)

vdso.h includes <spaces.h> implicitly after defining CONFIG_32BITS. This defeats the override in mach-ip27/spaces.h, leading to a build error that shows up in kernelci.org:

    In file included from arch/mips/include/asm/mach-ip27/spaces.h:29:0,
                     from arch/mips/include/asm/page.h:12,
                     from arch/mips/vdso/vdso.h:26,
                     from arch/mips/vdso/gettimeofday.c:11:
    arch/mips/include/asm/mach-generic/spaces.h:28:0: error: "CAC_BASE" redefined [-Werror]
     #define CAC_BASE _AC(0x80000000, UL)

An earlier patch tried to make the second definition conditional, but that patch had the #ifdef in the wrong place, and would lead to another warning:

    arch/mips/include/asm/io.h: In function 'phys_to_virt':
    arch/mips/include/asm/io.h:138:9: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]

As far as I can tell, there is no other reason than vdso32 to ever include this file with CONFIG_32BITS set, and the vdso itself should never refer to the base addresses as it is running in user space, so adding an #ifdef here is safe.

Link: https://patchwork.kernel.org/patch/9418187/
Fixes: 3ffc17d8768b ("MIPS: Adjust MIPS64 CAC_BASE to reflect Config.K0")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15039/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-24  MIPS: Fix modversions  (Arnd Bergmann, 2 files, -0/+23)

kernelci.org reports tons of build warnings for linux-next:

    35 WARNING: "memcpy" [fs/fat/msdos.ko] has no CRC!
    35 WARNING: "__copy_user" [fs/fat/fat.ko] has no CRC!
    32 WARNING: EXPORT symbol "memset" [vmlinux] version generation failed, symbol will not be versioned.
    32 WARNING: EXPORT symbol "copy_page" [vmlinux] version generation failed, symbol will not be versioned.
    32 WARNING: EXPORT symbol "clear_page" [vmlinux] version generation failed, symbol will not be versioned.
    32 WARNING: EXPORT symbol "__strncpy_from_user_nocheck_asm" [vmlinux] version generation failed, symbol will not be versioned.

The problem here is mainly the missing asm/asm-prototypes.h header file that is supposed to include the prototypes for each symbol that is exported from an assembler file.

A second problem is that the asm/uaccess.h header contains some but not all the necessary declarations for the user access helpers.

Finally, the vdso build is broken once we add asm/asm-prototypes.h, so we have to fix this at the same time by changing the vdso header. My approach here is to just not look for exported symbols in the VDSO assembler files, as the symbols cannot be exported anyway.

Fixes: 576a2f0c5c6d ("MIPS: Export memcpy & memset functions alongside their definitions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15038/
Patchwork: https://patchwork.linux-mips.org/patch/15069/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-24  treewide: Consolidate get_dma_ops() implementations  (Bart Van Assche, 1 file, -5/+2)

Introduce a new architecture-specific get_arch_dma_ops() function that takes a struct bus_type * argument. Add get_dma_ops() in <linux/dma-mapping.h>.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
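The consolidated accessor ends up with per-device ops taking precedence over the bus-level arch hook; paraphrased from the new <linux/dma-mapping.h> helper:

    static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
    {
            if (dev && dev->dma_ops)
                    return dev->dma_ops;        /* per-device override */
            return get_arch_dma_ops(dev ? dev->bus : NULL);
    }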
2017-01-24  treewide: Move dma_ops from struct dev_archdata into struct device  (Bart Van Assche, 2 files, -7/+2)

Some but not all architectures provide set_dma_ops(). Move dma_ops from struct dev_archdata into struct device such that it becomes possible on all architectures to configure dma_ops per device.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24  treewide: Constify most dma_map_ops structures  (Bart Van Assche, 4 files, -5/+5)

Most dma_map_ops structures are never modified. Constify these structures such that these can be write-protected. This patch has been generated as follows:

    git grep -l 'struct dma_map_ops' | xargs -d\\n sed -i \
      -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
      -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
      -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
      -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
    sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops intel_dma_ops');
    sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
    sed -i -e '/^struct vmd_dev {$/,/^};$/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
      -e '/^static void vmd_setup_dma_ops/,/^}$/ s/const \(struct dma_map_ops \*dest\)/\1/' \
      -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' \
      drivers/pci/host/*.c
    sed -i -e '/^void __init pci_iommu_alloc(void)$/,/^}$/ s/dma_ops->/intel_dma_ops./' arch/ia64/kernel/pci-dma.c
    sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' arch/ia64/sn/pci/pci_dma.c
    sed -i -e 's/(const struct dma_map_ops \*)//' drivers/misc/mic/bus/vop_bus.c

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-03  MIPS: End asm function prologue macros with .insn  (Paul Burton, 1 file, -4/+6)

When building a kernel targeting a microMIPS ISA, recent GNU linkers will fail the link if they cannot determine that the target of a branch or jump is microMIPS code, with errors such as the following:

    mips-img-linux-gnu-ld: arch/mips/built-in.o: .text+0x542c: Unsupported jump between ISA modes; consider recompiling with interlinking enabled.
    mips-img-linux-gnu-ld: final link failed: Bad value

or:

    ./arch/mips/include/asm/uaccess.h:1017: warning: JALX to a non-word-aligned address

Placing anything other than an instruction at the start of a function written in assembly appears to trigger such errors. In order to prepare for allowing us to follow function prologue macros with an EXPORT_SYMBOL invocation, end the prologue macros (LEAF, NESTED & FEXPORT) with a .insn directive. This ensures that the start of the function is marked as code, which always makes sense for functions & safely prevents us from hitting the link errors described above.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14508/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
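A hedged sketch of the resulting macro in asm/asm.h (simplified; the real macro carries a few more annotations):

    #define LEAF(symbol)                            \
                    .globl  symbol;                 \
                    .align  2;                      \
                    .type   symbol, @function;      \
                    .ent    symbol, 0;              \
    symbol:         .frame  sp, 0, ra;              \
                    .insn   /* mark the entry point as (micro)MIPS code */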
2017-01-03  MIPS: Use generic asm/export.h  (Paul Burton, 1 file, -0/+1)

Include export.h in the list of generic headers used by the MIPS architecture for use by later patches.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14506/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03  MIPS: IRQ: Remove useless i8259_of_init() prototype.  (Ralf Baechle, 1 file, -1/+0)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03  MIPS: ralink: Add missing pinmux.  (John Crispin, 1 file, -1/+6)

The mt7620 has a pin that can be used to generate an external reference clock. The pinmux setup was missing the definition of said pin. This patch adds it.

Signed-off-by: John Crispin <john@phrozen.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14898/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03  MIPS: platform: Allow for DTB to be moved during kernel relocation  (Marcin Nowakowski, 1 file, -0/+13)

Add plat_fdt_relocated(void*) API to allow the kernel relocation code to update platform's information about the DTB location if the DTB had to be moved due to being placed in a location used by the relocated kernel.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14611/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03  MIPS: elfcore: add correct copy_regs implementations  (Marcin Nowakowski, 1 file, -0/+6)

MIPS does not currently define ELF_CORE_COPY_REGS macros and as a result the generic implementation is used. The generic version attempts to directly map (struct pt_regs) into (elf_gregset_t), which isn't correct for MIPS platforms and also triggers a BUG() at runtime in include/linux/elfcore.h:16 (BUG_ON(sizeof(*elfregs) != sizeof(*regs))).

[ralf@linux-mips.org: Add semicolons to the macro definitions as I do not apply https://patchwork.linux-mips.org/patch/14588/ for now.]

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14586/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
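Per the commit, the fix maps the register sets explicitly instead of relying on the generic direct copy; sketched (mips_dump_regs() being the arch helper assumed to marshal pt_regs into elf_gregset_t):

    #define ELF_CORE_COPY_REGS(dest, regs) \
            mips_dump_regs((elf_greg_t *)&(dest), regs);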