Age | Commit message | Author | Files | Lines
|
"mm: introduce vm_ops->map_pages()" wants to export a do_set_pte() from core
kernel. Rename lguest's do_set_pte() to something more lguest-specific.
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
If we discover the entry is invalid, we kill the guest, but we must
avoid calling gpte_addr() on the invalid pmd, otherwise:
kernel BUG at drivers/lguest/page_tables.c:157!
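A minimal sketch of the guard described above, using helper names from lguest's page_tables.c (PAE path); this is illustrative, not the actual diff:

pmd_t gpmd = lgread(cpu, gpmd_addr(gpgd, vaddr), pmd_t);

if (!(pmd_flags(gpmd) & _PAGE_PRESENT)) {
        kill_guest(cpu, "Bad address %#lx", vaddr);
        return false;   /* bail out before gpte_addr() sees the bad pmd */
}
gpte = lgread(cpu, gpte_addr(cpu, gpmd, vaddr), pte_t);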
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
commit v3.9-rc1-53-g6d0cda9 "lguest: cache last cpu we ran on." missed
one case, which causes a triple fault. The guest calls guest_set_pgd()
on the top page, and we carefully remap the Switcher text page. But
we didn't reset last_host_cpu, so map_switcher_in_guest() thinks
the guest's regs and IDT/GDT etc. are already mapped.
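A hedged sketch of the shape of the fix (field and helper names are assumptions drawn from the series, not the literal patch): when guest_set_pgd() throws away and remaps the Switcher mapping, also forget which Host CPU last ran on this shadow pagetable, so map_switcher_in_guest() redoes the per-cpu mappings next run:

release_pgd(lg->pgdirs[pgdir].pgdir + idx);
/* This might have removed the Switcher mapping; put it back... */
if (!allocate_switcher_mapping(&lg->cpus[0]))
        kill_guest(&lg->cpus[0], "Cannot populate switcher mapping");
/* ...and make sure the next run doesn't trust the stale cache. */
lg->pgdirs[pgdir].last_host_cpu = -1;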
Reported-by: Paul Bolle <pebolle@tiscali.nl>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull virtio & lguest updates from Rusty Russell:
"Lots of virtio work which wasn't quite ready for last merge window.
Plus I dived into lguest again, reworking the pagetable code so we can
move the switcher page: our fixmaps sometimes take more than 2MB now..."
Ugh. Annoying conflicts with the tcm_vhost -> vhost_scsi rename.
Hopefully correctly resolved.
* tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (57 commits)
caif_virtio: Remove bouncing email addresses
lguest: improve code readability in lg_cpu_start.
virtio-net: fill only rx queues which are being used
lguest: map Switcher below fixmap.
lguest: cache last cpu we ran on.
lguest: map Switcher text whenever we allocate a new pagetable.
lguest: don't share Switcher PTE pages between guests.
lguest: expost switcher_pages array (as lg_switcher_pages).
lguest: extract shadow PTE walking / allocating.
lguest: make check_gpte et. al return bool.
lguest: assume Switcher text is a single page.
lguest: rename switcher_page to switcher_pages.
lguest: remove RESERVE_MEM constant.
lguest: check vaddr not pgd for Switcher protection.
lguest: prepare to make SWITCHER_ADDR a variable.
virtio: console: replace EMFILE with EBUSY for already-open port
virtio-scsi: reset virtqueue affinity when doing cpu hotplug
virtio-scsi: introduce multiqueue support
virtio-scsi: push vq lock/unlock into virtscsi_vq_done
virtio-scsi: pass struct virtio_scsi to virtqueue completion function
...
|
|
Use the preferable function name, which implies the use of a
pseudo-random number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This optimizes the frobbing of our Switcher map.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
It's always the same, so there's no need to put in the PTE every time we're
about to run. Keep a flag to track whether the pagetable has the
Switcher entries allocated, and when allocating always initialize the
Switcher text PTE.
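A sketch of the bookkeeping this describes; the field and helper names (switcher_mapped, allocate_switcher_mapping) are assumptions about the series, and the allocation body is elided:

struct pgdir {
        unsigned long gpgdir;
        bool switcher_mapped;   /* Switcher text PTE already installed? */
        pgd_t *pgdir;
};

static bool allocate_switcher_mapping(struct lg_cpu *cpu)
{
        int i = cpu->cpu_pgd;

        if (cpu->lg->pgdirs[i].switcher_mapped)
                return true;    /* only done once per shadow pagetable */

        /* (allocate the PTE page and install the constant text PTE here) */

        cpu->lg->pgdirs[i].switcher_mapped = true;
        return true;
}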
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We currently use the whole top PGD entry for the switcher, so we
simply share a fixed page of PTEs between all guests (actually, it's
one per Host CPU, to ensure isolation between guests).
This changes to a scheme where every guest has its own mappings.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We want a separate find_pte() function so we can call it for populating the
switcher PTE entries.
We can also use it in page_writable().
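Roughly what such a walker looks like, simplified to the non-PAE case; helper names follow page_tables.c, but this is a sketch rather than the extracted function itself:

static pte_t *find_spte(struct lg_cpu *cpu, unsigned long vaddr, bool allocate)
{
        pgd_t *spgd = spgd_addr(cpu, cpu->cpu_pgd, vaddr);

        if (!(pgd_val(*spgd) & _PAGE_PRESENT)) {
                unsigned long ptepage;

                if (!allocate)
                        return NULL;
                /* Allocate a shadow PTE page for this top-level slot. */
                ptepage = get_zeroed_page(GFP_KERNEL);
                if (!ptepage) {
                        kill_guest(cpu, "out of memory allocating pte page");
                        return NULL;
                }
                set_pgd(spgd, __pgd(__pa(ptepage)
                                    | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER));
        }
        /* Now index into that PTE page for this virtual address. */
        return spte_addr(cpu, *spgd, vaddr);
}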
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is a bit neater: we can immediately return if a PTE/PGD/PMD entry
is invalid (which also kills the guest). It means we don't risk using
invalid entries as we reshuffle the code.
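The resulting calling convention, sketched (flag details from memory):

static bool check_gpte(struct lg_cpu *cpu, pte_t gpte)
{
        if ((pte_flags(gpte) & _PAGE_PSE)
            || pte_pfn(gpte) >= cpu->lg->pfn_limit) {
                kill_guest(cpu, "bad page table entry");
                return false;
        }
        return true;
}

Callers then just do "if (!check_gpte(cpu, gpte)) return false;" and never look at the bad entry again.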
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
i.e. SHARED_SWITCHER_PAGES == 1. It is well under a page, and it's a
minor simplification: it's nice to have *one* simplification in a
patch series!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
There is a single page with the Switcher in it, but it's followed by 2
pages per Host CPU.
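The resulting layout, as a sketch (the macro name is an assumption):

/* Page 0:           the Switcher text itself.
 * Pages 1..2*n:     two pages per Host CPU, holding that CPU's
 *                   struct lguest_pages (r/o state + r/w regs/stack). */
#define TOTAL_SWITCHER_PAGES (1 + 2 * nr_cpu_ids)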
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We can use switcher_addr directly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We currently assume that the Switcher is in the top pgd; we want to remove
this assumption, so check that the vaddr is OK rather than checking the pgd
index.
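Sketched as before/after (the exact guard sites and the SWITCHER_PGD_INDEX macro are assumptions):

/* Before: tied to the Switcher living in the top pgd slot. */
if (pgd_index(vaddr) == SWITCHER_PGD_INDEX)
        kill_guest(cpu, "attempt to set switcher writable");

/* After: compare the address against the Switcher region itself. */
if (vaddr >= switcher_addr)
        kill_guest(cpu, "attempt to set switcher writable");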
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Also removes a long-unused #define and an extraneous semicolon.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The Host used to create some page tables for the Guest to use at the
top of Guest memory; it would then tell the Guest where this was. In
particular, it created linear mappings for 0 and 0xC0000000 addresses
because lguest used to switch to its real page tables quite late in
boot.
However, since d50d8fe19, Linux has initialized boot page tables in
head_32.S even before the "are we lguest?" boot jump. So, now we can
simplify things: the Host pagetable code assumes 1:1 linear mapping
until it first calls the LHCALL_NEW_PGTABLE hypercall, which we now do
before we reach C code.
This also means that the Host doesn't need to know anything about the
Guest's PAGE_OFFSET. (Non-Linux guests might not even have such a
thing).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Use this_cpu_ops in a couple of places in lguest.
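An illustrative example of the conversion (the per-cpu variable here is made up, not lguest's actual one):

static DEFINE_PER_CPU(struct lg_cpu *, lg_last_running);

static void remember_cpu(struct lg_cpu *cpu)
{
        /* was: __get_cpu_var(lg_last_running) = cpu; */
        __this_cpu_write(lg_last_running, cpu);
}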
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking
implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while for others adding it to an
implementation .h or embedding .c file was more appropriate. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
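As an illustration of the sort of edit the sweep makes (this file is hypothetical), a module that calls kmalloc()/kfree() and previously compiled only via the indirect percpu.h -> slab.h path now includes slab.h itself:

#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/slab.h>         /* added by the sweep for kmalloc()/kfree() */

static void *demo_buf;

static int __init demo_init(void)
{
        demo_buf = kmalloc(64, GFP_KERNEL);
        return demo_buf ? 0 : -ENOMEM;
}

static void __exit demo_exit(void)
{
        kfree(demo_buf);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");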
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
lguest: don't force VIRTIO_F_NOTIFY_ON_EMPTY
lguest: cleanup for map_switcher()
lguest: use PGDIR_SHIFT for PAE code to allow different PAGE_OFFSET
lguest: use set_pte/set_pmd uniformly for real page table entries
lguest: move panic notifier registration to its expected place.
virtio_blk: add support for cache flush
virtio: add virtio IDs file
virtio: get rid of redundant VIRTIO_ID_9P definition
virtio: make add_buf return capacity remaining
virtio_pci: minor MSI-X cleanups
|
|
We still assume the Guest and Host have the same PAGE_OFFSET settings,
but now we don't assume 0xC0000000.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Matias Zabaljauregui <zabaljauregui@gmail.com>
|
|
If we're building a pte, we can use simple assignment; only use set_pte
etc. when we're actually going to use that destination as a PTE. I
don't know that we'll ever run under Xen, but it's neater.
And use set_pte/set_pmd rather than assuming native_ versions, even
though that's probably true for most people.
(Includes compile fix by Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>)
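A minimal sketch of the distinction (not taken from the patch):

static void demo_install(pte_t *sptep, unsigned long pfn)
{
        /* Building the value: a plain local assignment is fine. */
        pte_t pte = pfn_pte(pfn, __pgprot(_PAGE_PRESENT | _PAGE_RW));

        /* Writing into an actual page-table slot: go through set_pte(),
         * which a hypervisor such as Xen could hook if it needed to. */
        set_pte(sptep, pte);
}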
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Matias Zabaljauregui <zabaljauregui@gmail.com>
Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
|
|
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
Every so often, after code shuffles, I need to go through and unbitrot
the Lguest Journey (see drivers/lguest/README). Since we now use RCU in
a simple form in one place I took the opportunity to expand that explanation.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
|
|
I don't really notice it (except to begrudge the extra vertical
space), but Ingo does. And he pointed out that, since one excuse for
lguest is that it's a teaching tool, it should set a good example.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@redhat.com>
|
|
1) j wasn't initialized in setup_pagetables, so the pagetables weren't set
up for me, causing immediate guest crashes.
2) gpte_addr should not re-read the pmd from the Guest. Especially
not BUG_ON() based on the value. If we ever supported SMP guests,
they could trigger that. And the Launcher could also trigger it
(though currently root-only).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This version requires that the host and guest have the same PAE status.
The NX capability is not offered to the guest yet.
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Replace the LHCALL_SET_PMD hypercall name with LHCALL_SET_PGD.
(That's really what it is, and the confusion gets worse with PAE support.)
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
|
|
Some cleanups, and replace direct assignment with the native_set_* macros,
which properly handle 64-bit entries when PAE is activated.
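The point, sketched (variable names assumed): with PAE a pte is 64 bits wide, so on the 32-bit Host a direct assignment is two separate stores; native_set_pte() writes the high word first and the low word (which carries the present bit) last, so nothing ever observes a half-updated entry:

native_set_pte(sptep, gpte_to_spte(cpu, gpte, 0));
/* rather than:  *sptep = gpte_to_spte(cpu, gpte, 0); */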
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Map switcher with executable page table entries.
(This bug didn't matter before PAE and hence NX support -- RR)
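A hedged sketch of the change in map_switcher(), using the 2009-era map_vm_area() calling convention (details assumed, not quoted):

pagep = switcher_page;
err = map_vm_area(switcher_vma, PAGE_KERNEL_EXEC, &pagep);
/* previously PAGE_KERNEL, which faults under NX once PAE is on */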
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Impact: clean up
Rusty told me, some time ago, that he had become a fan of "bool".
So, here are some replacements.
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Impact: fix crash on misbehaving guest
gpte_addr() contains a BUG_ON(), insisting that the present flag is
set. We need to return before we call it if that isn't the case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@kernel.org
|
|
This patch moves the initial guest page table creation code to the host,
so the launcher keeps working with PAE-enabled configs.
Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Using a simple page table thrashing program I measure a slight
improvement. The program creates five processes. Each touches 1000
pages then schedules the next process. We repeat this 1000 times. As
lguest only caches 4 cr3 values, this rebuilds a lot of shadow page
tables requiring virt->phys mappings.
Before: 5.93 seconds
After: 5.40 seconds
(Counts of slow vs fastpath in this usage are 6092 and 2852462 respectively.)
And more importantly for lguest, the code is simpler.
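A rough reconstruction of that kind of benchmark (not the author's actual program): five processes pass a token around a ring of pipes, each touching 1000 pages per turn, for 1000 rounds:

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NPROC  5
#define NPAGES 1000
#define ROUNDS 1000
#define PAGE   4096

int main(void)
{
        int pipes[NPROC][2];
        int i, r, p;

        for (i = 0; i < NPROC; i++)
                if (pipe(pipes[i]) < 0)
                        return 1;

        for (p = 0; p < NPROC; p++) {
                if (fork() == 0) {
                        char token;
                        char *mem = malloc((size_t)NPAGES * PAGE);

                        if (!mem)
                                exit(1);
                        for (r = 0; r < ROUNDS; r++) {
                                /* Wait for our turn. */
                                if (read(pipes[p][0], &token, 1) != 1)
                                        exit(1);
                                /* Touch every page, faulting it in. */
                                for (i = 0; i < NPAGES; i++)
                                        mem[(size_t)i * PAGE] = (char)r;
                                /* Pass the token to the next process. */
                                write(pipes[(p + 1) % NPROC][1], &token, 1);
                        }
                        exit(0);
                }
        }

        write(pipes[0][1], "x", 1);     /* start the ring */
        for (p = 0; p < NPROC; p++)
                wait(NULL);
        return 0;
}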
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Took some cycles to re-read the Lguest Journey end-to-end, fix some
rot and tighten some phrases.
Only comments change. No new jokes, but a couple of recycled old jokes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Ahmed managed to crash the Host in release_pgd(), which cannot be a Guest
bug, and indeed it wasn't.
The bug was that handing a 0 as the address of the toplevel page table
being manipulated can cause the lookup code in find_pgdir() to return
an uninitialized cache entry (we shadow up to 4 top level page tables
for each Guest).
Commit 37cc8d7f963ba2deec29c9b68716944516a3244f introduced this
behaviour in the Guest, uncovering the bug.
The patch which he submitted (which removed the /4 from the index
calculation) simply ensured that these high-indexed entries hit the
early exit path of guest_set_pmd(). But you get lots of segfaults in
guest userspace as the PMDs aren't being updated.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Beginning from commit 4138cc3418f5, ioremap_nocache() sets the _PAGE_PWT
flag.
Lguest doesn't accept a guest pte with a _PWT flag and reports a "bad
page table entry" in that case.
Accept guest _PAGE_PWT page table entries.
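The gist of the fix, sketched from memory (the exact flag mask in check_gpte() may differ):

/* was: if (pte_flags(gpte) & (_PAGE_PWT|_PAGE_PSE)) ... */
if ((pte_flags(gpte) & _PAGE_PSE)
    || pte_pfn(gpte) >= cpu->lg->pfn_limit)
        kill_guest(cpu, "bad page table entry");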
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
x86_64 doesn't expose the intermediate representation with a single
underscore, _PAGE_KERNEL, just the double-underscored one. Use it to get
common ground between 32-bit and 64-bit.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We can save some lines of code by getting rid of the
"*lg = cpu..." assignments spread everywhere by now.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
gpte_addr() does not depend on any guest information. So we wipe out
the lg parameter from it completely.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
spte_addr does not depend on any guest information, so we
wipe out the lg parameter completely.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This patch makes the pgdir management per-vcpu. The pgdirs pool
is still guest-wide (although it'll probably need to grow when we
are really executing more vcpus), but the pgdidx index is gone,
since it makes no sense anymore. Instead, we use a per-vcpu
index.
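Sketch of the shape of the change (the field name cpu_pgd is an assumption):

/* was: spgd_addr(lg, lg->pgdidx, vaddr), a guest-wide index */
pgd_t *spgd = spgd_addr(cpu, cpu->cpu_pgd, vaddr);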
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The lguest struct has some fields, namely cr2, ts, esp1 and ss1, that are
not really guest-wide but rather vcpu-wide. This patch moves them into the
vcpu struct.
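Sketch of the destination structure (field types are from memory, so treat them as illustrative):

struct lg_cpu {
        unsigned int id;
        struct lguest *lg;      /* back-pointer to guest-wide state */

        u32 cr2;                /* last page-faulting address we saw */
        int ts;                 /* TS flag: does the Guest own the FPU? */
        u32 esp1;               /* Guest's kernel-mode stack pointer... */
        u16 ss1;                /* ...and stack segment, for traps */

        /* (remaining per-vcpu state omitted) */
};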
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is the most obvious per-vcpu field: registers.
So this patch moves them from struct lguest to struct vcpu, and patches
the places in which they are used accordingly.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The switcher needs to be mapped per-vcpu, because different vcpus
will potentially have different page tables (they don't have to,
because threads will share the same ones).
So our first step is to make the function receive a vcpu struct.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Went through the documentation doing typo and content fixes. This
patch contains only comment and whitespace changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Jes complains that the page table code still uses lgread_u32 even though
it now uses general kernel pte types. The best thing to do is to
generalize lgread_u32 and lgwrite_u32.
This means we lose the efficiency of getuser(). We could potentially
regain it if we used __copy_from_user instead of copy_from_user, but
I'm not certain that our range check is equivalent to access_ok() on
all platforms.
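A sketch of the generalized accessors, close to (but not quoted from) what lguest's core ends up with:

void __lgread(struct lguest *lg, void *b, unsigned long addr, unsigned bytes)
{
        if (!lguest_address_ok(lg, addr, bytes)
            || copy_from_user(b, lg->mem_base + addr, bytes) != 0) {
                /* Don't leak uninitialised kernel memory to the caller. */
                memset(b, 0, bytes);
                kill_guest(lg, "bad read address %#lx len %u", addr, bytes);
        }
}

/* Read a value of the given type out of Guest memory. */
#define lgread(lg, addr, type) \
        ({ type _v; __lgread((lg), &_v, (addr), sizeof(_v)); _v; })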
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Jes Sorensen <jes@sgi.com>
|
|
1) This allows us to get a lot closer to booting bzImages.
2) It means we don't have to know page_offset.
3) The Guest needs to modify the boot pagetables to create the
PAGE_OFFSET mapping before jumping to C code.
4) guest_pa() walks the page tables rather than using page_offset.
5) We don't use page_offset to figure out whether to emulate: it was
always kinda questionable, and won't work for instructions done
before remapping (bzImage unpacking in particular).
6) We still want the kernel address for tlb flushing: have the initial
hypercall give us that, too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is my first step in the migration of page_tables.c to the kernel
types and functions/macros (2.6.23-rc3). Seems to be working OK.
Signed-off-by: Matias Zabaljauregui <matias.zabaljauregui@cern.ch>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|