Diffstat (limited to 'Documentation/vm')
 Documentation/vm/active_mm.rst      |   2
 Documentation/vm/hmm.rst            | 139
 Documentation/vm/index.rst          |   1
 Documentation/vm/ksm.rst            |   2
 Documentation/vm/memory-model.rst   |   6
 Documentation/vm/page_migration.rst | 164
 6 files changed, 224 insertions, 90 deletions
diff --git a/Documentation/vm/active_mm.rst b/Documentation/vm/active_mm.rst
index c84471b180f8..6f8269c284ed 100644
--- a/Documentation/vm/active_mm.rst
+++ b/Documentation/vm/active_mm.rst
@@ -64,7 +64,7 @@ Active MM
actually get cases where you have a address space that is _only_ used by
lazy users. That is often a short-lived state, because once that thread
gets scheduled away in favour of a real thread, the "zombie" mm gets
- released because "mm_users" becomes zero.
+ released because "mm_count" becomes zero.
Also, a new rule is that _nobody_ ever has "init_mm" as a real MM any
more. "init_mm" should be considered just a "lazy context when no other
diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 6f9e000757fa..09e28507f5b2 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -1,4 +1,4 @@
-.. hmm:
+.. _hmm:
=====================================
Heterogeneous Memory Management (HMM)
@@ -271,10 +271,139 @@ map those pages from the CPU side.
Migration to and from device memory
===================================
-Because the CPU cannot access device memory, migration must use the device DMA
-engine to perform copy from and to device memory. For this we need to use
-migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize() helpers.
-
+Because the CPU cannot access device memory directly, the device driver must
+use hardware DMA or device specific load/store instructions to migrate data.
+The migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize()
+functions are designed to make drivers easier to write and to centralize common
+code across drivers.
+
+Before migrating pages to device private memory, special device private
+``struct page`` entries need to be created. These will be used as special
+"swap" page table entries so that a CPU process will fault if it tries to
+access a page that has been migrated to device private memory.
+
+These can be allocated and freed with::
+
+ struct resource *res;
+ struct dev_pagemap pagemap;
+
+ res = request_free_mem_region(&iomem_resource, /* number of bytes */,
+ "name of driver resource");
+ pagemap.type = MEMORY_DEVICE_PRIVATE;
+ pagemap.range.start = res->start;
+ pagemap.range.end = res->end;
+ pagemap.nr_range = 1;
+ pagemap.ops = &device_devmem_ops;
+ memremap_pages(&pagemap, numa_node_id());
+
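+ /* ... the device can now migrate pages into this region ... */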
+ memunmap_pages(&pagemap);
+ release_mem_region(pagemap.range.start, range_len(&pagemap.range));
+
+There are also devm_request_free_mem_region(), devm_memremap_pages(),
+devm_memunmap_pages(), and devm_release_mem_region() when the resources can
+be tied to a ``struct device``.
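+
+For instance, with a driver's ``struct device`` the same setup could use the
+managed variants, which undo themselves on driver detach. A minimal sketch,
+assuming ``device`` points to the driver's ``struct device``::
+
+ res = devm_request_free_mem_region(device, &iomem_resource,
+                                    /* number of bytes */);
+ pagemap.type = MEMORY_DEVICE_PRIVATE;
+ pagemap.range.start = res->start;
+ pagemap.range.end = res->end;
+ pagemap.nr_range = 1;
+ pagemap.ops = &device_devmem_ops;
+ devm_memremap_pages(device, &pagemap);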
+
+The overall migration steps are similar to migrating NUMA pages within system
+memory (see :ref:`Page migration <page_migration>`) but the steps are split
+between device driver specific code and shared common code (a condensed
+sketch combining the steps follows the list):
+
+1. ``mmap_read_lock()``
+
+ The device driver has to pass a ``struct vm_area_struct`` to
+ migrate_vma_setup() so the mmap_read_lock() or mmap_write_lock() needs to
+ be held for the duration of the migration.
+
+2. ``migrate_vma_setup(struct migrate_vma *args)``
+
+ The device driver initializes the ``struct migrate_vma`` fields and passes
+ the pointer to migrate_vma_setup(). The ``args->flags`` field is used to
+ filter which source pages should be migrated. For example, setting
+ ``MIGRATE_VMA_SELECT_SYSTEM`` will only migrate system memory and
+ ``MIGRATE_VMA_SELECT_DEVICE_PRIVATE`` will only migrate pages residing in
+ device private memory. If the latter flag is set, the ``args->pgmap_owner``
+ field is used to identify device private pages owned by the driver. This
+ avoids trying to migrate device private pages residing in other devices.
+ Currently only anonymous private VMA ranges can be migrated to or from
+ system memory and device private memory.
+
+ One of the first steps migrate_vma_setup() does is to invalidate other
+ devices' MMUs with the ``mmu_notifier_invalidate_range_start()`` and
+ ``mmu_notifier_invalidate_range_end()`` calls around the page table
+ walks to fill in the ``args->src`` array with PFNs to be migrated.
+ The ``invalidate_range_start()`` callback is passed a
+ ``struct mmu_notifier_range`` with the ``event`` field set to
+ ``MMU_NOTIFY_MIGRATE`` and the ``migrate_pgmap_owner`` field set to
+ the ``args->pgmap_owner`` field passed to migrate_vma_setup(). This
+ allows the device driver to skip the invalidation callback and only
+ invalidate device private MMU mappings that are actually migrating.
+ This is explained more in the next section.
+
+ While walking the page tables, a ``pte_none()`` or ``is_zero_pfn()``
+ entry results in a valid "zero" PFN stored in the ``args->src`` array.
+ This lets the driver allocate device private memory and clear it instead
+ of copying a page of zeros. Valid PTE entries to system memory or
+ device private struct pages will be locked with ``lock_page()``, isolated
+ from the LRU (if system memory, since device private pages are not on
+ the LRU), unmapped from the process, and a special migration PTE is
+ inserted in place of the original PTE.
+ migrate_vma_setup() also clears the ``args->dst`` array.
+
+3. The device driver allocates destination pages and copies source pages to
+ destination pages.
+
+ The driver checks each ``src`` entry to see if the ``MIGRATE_PFN_MIGRATE``
+ bit is set and skips entries that are not migrating. The device driver
+ can also choose to skip migrating a page by not filling in the ``dst``
+ array for that page.
+
+ The driver then allocates either a device private struct page or a
+ system memory page, locks the page with ``lock_page()``, and fills in the
+ ``dst`` array entry with::
+
+ dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+
+ Now that the driver knows that this page is being migrated, it can
+ invalidate device private MMU mappings and copy device private memory
+ to system memory or another device private page. The core Linux kernel
+ handles CPU page table invalidations so the device driver only has to
+ invalidate its own MMU mappings.
+
+ The driver can use ``migrate_pfn_to_page(src[i])`` to get the
+ ``struct page`` of the source and either copy the source page to the
+ destination or clear the destination device private memory if the pointer
+ is ``NULL``, meaning the source page was not populated in system memory.
+
+4. ``migrate_vma_pages()``
+
+ This step is where the migration is actually "committed".
+
+ If the source page was a ``pte_none()`` or ``is_zero_pfn()`` page, this
+ is where the newly allocated page is inserted into the CPU's page table.
+ This can fail if a CPU thread faults on the same page. However, the page
+ table is locked and only one of the new pages will be inserted.
+ The device driver will see that the ``MIGRATE_PFN_MIGRATE`` bit is cleared
+ if it loses the race.
+
+ If the source page was locked, isolated, etc., the source ``struct page``
+ information is now copied to the destination ``struct page``, finalizing
+ the migration on the CPU side.
+
+5. Device driver updates device MMU page tables for pages still migrating,
+ rolling back pages not migrating.
+
+ If the ``src`` entry still has ``MIGRATE_PFN_MIGRATE`` bit set, the device
+ driver can update the device MMU and set the write enable bit if the
+ ``MIGRATE_PFN_WRITE`` bit is set.
+
+6. ``migrate_vma_finalize()``
+
+ This step replaces the special migration page table entry with the new
+ page's page table entry and releases the reference to the source and
+ destination ``struct page``.
+
+7. ``mmap_read_unlock()``
+
+ The lock can now be released.
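+
+As a condensed illustration of steps 2-6, a driver might wrap the helpers as
+in the sketch below. This is only a sketch: ``NR_MIGRATE`` and the
+``my_devmem_*()`` helpers are hypothetical, error handling is minimal, and
+the caller is assumed to take mmap_read_lock() beforehand (step 1) and drop
+it afterwards (step 7)::
+
+ #define NR_MIGRATE 64   /* caller limits (end - start) to 64 pages */
+
+ static int my_migrate_to_device(struct vm_area_struct *vma,
+                                 unsigned long start, unsigned long end)
+ {
+         unsigned long src_pfns[NR_MIGRATE] = { 0 };
+         unsigned long dst_pfns[NR_MIGRATE] = { 0 };
+         struct migrate_vma args = {
+                 .vma   = vma,
+                 .src   = src_pfns,
+                 .dst   = dst_pfns,
+                 .start = start,
+                 .end   = end,
+                 .flags = MIGRATE_VMA_SELECT_SYSTEM,
+         };
+         unsigned long i;
+         int ret;
+
+         /* Step 2: walk the page tables and fill args.src. */
+         ret = migrate_vma_setup(&args);
+         if (ret)
+                 return ret;
+
+         /* Step 3: allocate device memory and copy the source pages. */
+         for (i = 0; i < args.npages; i++) {
+                 struct page *spage = migrate_pfn_to_page(args.src[i]);
+                 struct page *dpage;
+
+                 if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
+                         continue;
+                 dpage = my_devmem_alloc_page();    /* hypothetical */
+                 if (!dpage)
+                         continue;   /* dst[i] left empty: skip this page */
+                 lock_page(dpage);
+                 if (spage)
+                         my_devmem_copy(dpage, spage);   /* hypothetical DMA */
+                 else
+                         my_devmem_clear(dpage);   /* pte_none()/zero page */
+                 args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
+                               MIGRATE_PFN_LOCKED;
+         }
+
+         /* Step 4: commit the migration in the CPU page tables. */
+         migrate_vma_pages(&args);
+
+         /* Step 5: map the pages that migrated into the device MMU. */
+         for (i = 0; i < args.npages; i++)
+                 if (args.src[i] & MIGRATE_PFN_MIGRATE)
+                         my_devmem_map(args.dst[i]);   /* hypothetical */
+
+         /* Step 6: replace migration PTEs and drop page references. */
+         migrate_vma_finalize(&args);
+         return 0;
+ }
+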
Memory cgroup (memcg) and rss accounting
========================================
diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
index 611140ffef7e..eff5fbd492d0 100644
--- a/Documentation/vm/index.rst
+++ b/Documentation/vm/index.rst
@@ -29,6 +29,7 @@ descriptions of data structures and algorithms.
:maxdepth: 1
active_mm
+ arch_pgtable_helpers
balance
cleancache
free_page_reporting
diff --git a/Documentation/vm/ksm.rst b/Documentation/vm/ksm.rst
index d1b7270ad55c..9e37add068e6 100644
--- a/Documentation/vm/ksm.rst
+++ b/Documentation/vm/ksm.rst
@@ -26,7 +26,7 @@ tree.
If a KSM page is shared between less than ``max_page_sharing`` VMAs,
the node of the stable tree that represents such KSM page points to a
-list of :c:type:`struct rmap_item` and the ``page->mapping`` of the
+list of struct rmap_item and the ``page->mapping`` of the
KSM page points to the stable tree node.
When the sharing passes this threshold, KSM adds a second dimension to
diff --git a/Documentation/vm/memory-model.rst b/Documentation/vm/memory-model.rst
index 769449734573..9daadf9faba1 100644
--- a/Documentation/vm/memory-model.rst
+++ b/Documentation/vm/memory-model.rst
@@ -24,7 +24,7 @@ whether it is possible to manually override that default.
although it is still in use by several architectures.
All the memory models track the status of physical page frames using
-:c:type:`struct page` arranged in one or more arrays.
+struct page arranged in one or more arrays.
Regardless of the selected memory model, there exists one-to-one
mapping between the physical page frame number (PFN) and the
@@ -111,7 +111,7 @@ maps for non-volatile memory devices and deferred initialization of
the memory map for larger systems.
The SPARSEMEM model presents the physical memory as a collection of
-sections. A section is represented with :c:type:`struct mem_section`
+sections. A section is represented with struct mem_section
that contains `section_mem_map` that is, logically, a pointer to an
array of struct pages. However, it is stored with some other magic
that aids the sections management. The section size and maximal number
@@ -172,7 +172,7 @@ management.
The virtually mapped memory map allows storing `struct page` objects
for persistent memory devices in pre-allocated storage on those
-devices. This storage is represented with :c:type:`struct vmem_altmap`
+devices. This storage is represented with struct vmem_altmap
that is eventually passed to vmemmap_populate() through a long chain
of function calls. The vmemmap_populate() implementation may use the
`vmem_altmap` along with :c:func:`vmemmap_alloc_block_buf` helper to
diff --git a/Documentation/vm/page_migration.rst b/Documentation/vm/page_migration.rst
index 68883ac485fa..91a98a6b43bb 100644
--- a/Documentation/vm/page_migration.rst
+++ b/Documentation/vm/page_migration.rst
@@ -4,25 +4,28 @@
Page migration
==============
-Page migration allows the moving of the physical location of pages between
-nodes in a numa system while the process is running. This means that the
+Page migration allows moving the physical location of pages between
+nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.
-The main intend of page migration is to reduce the latency of memory access
+Also see :ref:`Heterogeneous Memory Management (HMM) <hmm>`
+for migrating pages to or from device private memory.
+
+The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.
Page migration allows a process to manually relocate the node on which its
pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
-a new memory policy via mbind(). The pages of process can also be relocated
+a new memory policy via mbind(). The pages of a process can also be relocated
from another process using the sys_migrate_pages() function call. The
-migrate_pages function call takes two sets of nodes and moves pages of a
+migrate_pages() function call takes two sets of nodes and moves pages of a
process that are located on the from nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required. Get it from
-ftp://oss.sgi.com/www/projects/libnuma/download/). numactl provides libnuma
-which provides an interface similar to other numa functionality for page
+https://github.com/numactl/numactl.git). numactl provides libnuma
+which provides an interface similar to other NUMA functionality for page
migration. cat ``/proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.
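+
+For example, all pages of a process could be moved from node 0 to node 1
+with libnuma. A minimal sketch (the node numbers are illustrative, and real
+code should check numa_available() and the bitmask allocations first)::
+
+    #include <numa.h>
+
+    int move_all_pages(int pid)
+    {
+            struct bitmask *from = numa_parse_nodestring("0");
+            struct bitmask *to = numa_parse_nodestring("1");
+            /* returns the number of pages that could not be moved,
+             * or -1 on error */
+            int left = numa_migrate_pages(pid, from, to);
+
+            numa_bitmask_free(from);
+            numa_bitmask_free(to);
+            return left ? -1 : 0;
+    }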
@@ -30,19 +33,19 @@ proc(5) man page.
Manual migration is useful if for example the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
-nearer to the new processor. The kernel itself does only provide
+nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special function call
"move_pages" allows the moving of individual pages within a process.
-A NUMA profiler may f.e. obtain a log showing frequent off node
+For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (See
-Documentation/admin-guide/cgroup-v1/cpusets.rst).
-Cpusets allows the automation of process locality. If a task is moved to
+:ref:`CPUSETS <cpusets>`).
+Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then also all its pages are moved with it so that the
performance of the process does not sink dramatically. Also the pages
of processes in a cpuset are moved if the allowed memory nodes of a
@@ -67,9 +70,9 @@ In kernel use of migrate_pages()
Lists of pages to be migrated are generated by scanning over
pages and moving them into lists. This is done by
calling isolate_lru_page().
- Calling isolate_lru_page increases the references to the page
+ Calling isolate_lru_page() increases the references to the page
so that it cannot vanish while the page migration occurs.
- It also prevents the swapper or other scans to encounter
+ It also prevents the swapper or other scans from encountering
the page.
2. We need to have a function of type new_page_t that can be
@@ -91,23 +94,24 @@ is increased so that the page cannot be freed while page migration occurs.
Steps:
-1. Lock the page to be migrated
+1. Lock the page to be migrated.
2. Ensure that writeback is complete.
3. Lock the new page that we want to move to. It is locked so that accesses to
- this (not yet uptodate) page immediately lock while the move is in progress.
+ this (not yet uptodate) page immediately block while the move is in progress.
4. All the page table references to the page are converted to migration
entries. This decreases the mapcount of a page. If the resulting
mapcount is not zero then we do not migrate the page. All user space
- processes that attempt to access the page will now wait on the page lock.
+ processes that attempt to access the page will now wait on the page lock
+ or wait for the migration page table entry to be removed.
5. The i_pages lock is taken. This will cause all processes trying
to access the page via the mapping to block on the spinlock.
-6. The refcount of the page is examined and we back out if references remain
- otherwise we know that we are the only one referencing this page.
+6. The refcount of the page is examined and we back out if references remain.
+ Otherwise, we know that we are the only one referencing this page.
7. The radix tree is checked and if it does not contain the pointer to this
page then we back out because someone else modified the radix tree.
@@ -134,124 +138,124 @@ Steps:
15. Queued up writeback on the new page is triggered.
-16. If migration entries were page then replace them with real ptes. Doing
- so will enable access for user space processes not already waiting for
- the page lock.
+16. If migration entries were inserted into the page table, then replace them
+ with real ptes. Doing so will enable access for user space processes not
+ already waiting for the page lock.
-19. The page locks are dropped from the old and new page.
+17. The page locks are dropped from the old and new page.
Processes waiting on the page lock will redo their page faults
and will reach the new page.
-20. The new page is moved to the LRU and can be scanned by the swapper
- etc again.
+18. The new page is moved to the LRU and can be scanned by the swapper,
+ etc. again.
Non-LRU page migration
======================
-Although original migration aimed for reducing the latency of memory access
-for NUMA, compaction who want to create high-order page is also main customer.
+Although migration originally aimed for reducing the latency of memory accesses
+for NUMA, compaction also uses migration to create high-order pages.
Current problem of the implementation is that it is designed to migrate only
-*LRU* pages. However, there are potential non-lru pages which can be migrated
+*LRU* pages. However, there are potential non-LRU pages which can be migrated
in drivers, for example, zsmalloc, virtio-balloon pages.
For virtio-balloon pages, some parts of migration code path have been hooked
up and added virtio-balloon specific functions to intercept migration logics.
It's too specific to a driver so other drivers who want to make their pages
-movable would have to add own specific hooks in migration path.
+movable would have to add their own specific hooks in the migration path.
-To overclome the problem, VM supports non-LRU page migration which provides
+To overcome the problem, VM supports non-LRU page migration which provides
generic functions for non-LRU movable pages without driver specific hooks
-migration path.
+in the migration path.
-If a driver want to make own pages movable, it should define three functions
+If a driver wants to make its pages movable, it should define three functions
+(combined in a sketch after this list)
which are function pointers of struct address_space_operations.
1. ``bool (*isolate_page) (struct page *page, isolate_mode_t mode);``
- What VM expects on isolate_page function of driver is to return *true*
- if driver isolates page successfully. On returing true, VM marks the page
+ What VM expects from the driver's isolate_page() function is to return *true*
+ if the driver isolates the page successfully. On returning true, VM marks the page
as PG_isolated so concurrent isolation in several CPUs skip the page
for isolation. If a driver cannot isolate the page, it should return *false*.
Once page is successfully isolated, VM uses page.lru fields so driver
- shouldn't expect to preserve values in that fields.
+ shouldn't expect to preserve values in those fields.
2. ``int (*migratepage) (struct address_space *mapping,``
| ``struct page *newpage, struct page *oldpage, enum migrate_mode);``
- After isolation, VM calls migratepage of driver with isolated page.
- The function of migratepage is to move content of the old page to new page
+ After isolation, VM calls the driver's migratepage() with the isolated page.
+ The function of migratepage() is to move the contents of the old page to the
+ new page
and set up fields of struct page newpage. Keep in mind that you should
indicate to the VM the oldpage is no longer movable via __ClearPageMovable()
- under page_lock if you migrated the oldpage successfully and returns
+ under page_lock if you migrated the oldpage successfully and returned
MIGRATEPAGE_SUCCESS. If driver cannot migrate the page at the moment, driver
can return -EAGAIN. On -EAGAIN, VM will retry page migration in a short time
- because VM interprets -EAGAIN as "temporal migration failure". On returning
- any error except -EAGAIN, VM will give up the page migration without retrying
- in this time.
+ because VM interprets -EAGAIN as "temporary migration failure". On returning
+ any error except -EAGAIN, VM will give up the page migration without
+ retrying.
- Driver shouldn't touch page.lru field VM using in the functions.
+ Driver shouldn't touch the page.lru field while in the migratepage() function.
3. ``void (*putback_page)(struct page *);``
- If migration fails on isolated page, VM should return the isolated page
- to the driver so VM calls driver's putback_page with migration failed page.
- In this function, driver should put the isolated page back to the own data
+ If migration fails on the isolated page, VM should return the isolated page
+ to the driver so VM calls the driver's putback_page() with the isolated page.
+ In this function, the driver should put the isolated page back into its own data
structure.
-4. non-lru movable page flags
+4. non-LRU movable page flags
- There are two page flags for supporting non-lru movable page.
+ There are two page flags for supporting non-LRU movable pages.
* PG_movable
- Driver should use the below function to make page movable under page_lock::
+ Driver should use the function below to make page movable under page_lock::
void __SetPageMovable(struct page *page, struct address_space *mapping)
It needs argument of address_space for registering migration
family functions which will be called by VM. Exactly speaking,
- PG_movable is not a real flag of struct page. Rather than, VM
- reuses page->mapping's lower bits to represent it.
+ PG_movable is not a real flag of struct page. Rather, VM
+ reuses the lower bits of page->mapping to represent it::
-::
#define PAGE_MAPPING_MOVABLE 0x2
page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;
so driver shouldn't access page->mapping directly. Instead, driver should
- use page_mapping which mask off the low two bits of page->mapping under
- page lock so it can get right struct address_space.
-
- For testing of non-lru movable page, VM supports __PageMovable function.
- However, it doesn't guarantee to identify non-lru movable page because
- page->mapping field is unified with other variables in struct page.
- As well, if driver releases the page after isolation by VM, page->mapping
- doesn't have stable value although it has PAGE_MAPPING_MOVABLE
- (Look at __ClearPageMovable). But __PageMovable is cheap to catch whether
- page is LRU or non-lru movable once the page has been isolated. Because
- LRU pages never can have PAGE_MAPPING_MOVABLE in page->mapping. It is also
- good for just peeking to test non-lru movable pages before more expensive
- checking with lock_page in pfn scanning to select victim.
-
- For guaranteeing non-lru movable page, VM provides PageMovable function.
- Unlike __PageMovable, PageMovable functions validates page->mapping and
- mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden
- destroying of page->mapping.
-
- Driver using __SetPageMovable should clear the flag via __ClearMovablePage
- under page_lock before the releasing the page.
+ use page_mapping() which masks off the low two bits of page->mapping under
+ page lock so it can get the right struct address_space.
+
+ For testing of non-LRU movable pages, VM supports the __PageMovable() function.
+ However, it doesn't guarantee to identify non-LRU movable pages because
+ the page->mapping field is unified with other variables in struct page.
+ If the driver releases the page after isolation by VM, page->mapping
+ doesn't have a stable value although it has PAGE_MAPPING_MOVABLE set
+ (look at __ClearPageMovable). But __PageMovable() is cheap to call to tell
+ whether a page is LRU or non-LRU movable once the page has been isolated,
+ because LRU pages can never have PAGE_MAPPING_MOVABLE set in page->mapping.
+ It is also good for just peeking to test non-LRU movable pages before more
+ expensive checking with lock_page() in pfn scanning to select a victim.
+
+ To guarantee a page is non-LRU movable, VM provides the PageMovable()
+ function. Unlike __PageMovable(), PageMovable() validates page->mapping
+ and mapping->a_ops->isolate_page under lock_page(). The lock_page()
+ prevents sudden destruction of page->mapping.
+
+ Drivers using __SetPageMovable() should clear the flag via
+ __ClearPageMovable() under lock_page() before releasing the page.
* PG_isolated
To prevent concurrent isolation among several CPUs, VM marks isolated page
- as PG_isolated under lock_page. So if a CPU encounters PG_isolated non-lru
- movable page, it can skip it. Driver doesn't need to manipulate the flag
- because VM will set/clear it automatically. Keep in mind that if driver
- sees PG_isolated page, it means the page have been isolated by VM so it
- shouldn't touch page.lru field.
- PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag
- for own purpose.
+ as PG_isolated under lock_page(). So if a CPU encounters a PG_isolated
+ non-LRU movable page, it can skip it. The driver doesn't need to manipulate
+ the flag because VM will set/clear it automatically. Keep in mind that if
+ the driver sees a PG_isolated page, it means the page has been isolated by
+ the VM so it shouldn't touch the page.lru field.
+ The PG_isolated flag is aliased with the PG_reclaim flag so drivers
+ shouldn't use PG_isolated for their own purposes.
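+
+Putting it together, a hypothetical driver could wire the callbacks up as in
+the sketch below; ``my_isolate_page()``, ``my_migratepage()`` and
+``my_putback_page()`` stand in for the driver's own implementations::
+
+ static const struct address_space_operations my_aops = {
+         .isolate_page = my_isolate_page,
+         .migratepage = my_migratepage,
+         .putback_page = my_putback_page,
+ };
+
+ /* after allocating a page and setting mapping->a_ops = &my_aops: */
+ __SetPageMovable(page, mapping);
+
+ /* before releasing the page, under lock_page(): */
+ __ClearPageMovable(page);
+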
Monitoring Migration
=====================
@@ -266,8 +270,8 @@ The following events (counters) can be used to monitor page migration.
512.
2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
- _SUCCESS, above: this will be increased by the number of subpages, if it was
- a THP.
+ PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
+ if it was a THP.
3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.