Diffstat (limited to 'Documentation/mm')
-rw-r--r--  Documentation/mm/arch_pgtable_helpers.rst                       |  14
-rw-r--r--  Documentation/mm/balance.rst                                    |   2
-rw-r--r--  Documentation/mm/damon/design.rst                               | 142
-rw-r--r--  Documentation/mm/damon/index.rst                                |   6
-rw-r--r--  Documentation/mm/damon/maintainer-profile.rst                   |  35
-rw-r--r--  Documentation/mm/damon/monitoring_intervals_tuning_example.rst  |   8
-rw-r--r--  Documentation/mm/hmm.rst                                        |   2
-rw-r--r--  Documentation/mm/index.rst                                      |   2
-rw-r--r--  Documentation/mm/page_migration.rst                             |  39
-rw-r--r--  Documentation/mm/physical_memory.rst                            | 266
-rw-r--r--  Documentation/mm/process_addrs.rst                              |  98
-rw-r--r--  Documentation/mm/slab.rst                                       |   7
-rw-r--r--  Documentation/mm/slub.rst                                       | 470
-rw-r--r--  Documentation/mm/split_page_table_lock.rst                      |   2
-rw-r--r--  Documentation/mm/transhuge.rst                                  |  39
-rw-r--r--  Documentation/mm/z3fold.rst                                     |  28
-rw-r--r--  Documentation/mm/zsmalloc.rst                                   |   5
17 files changed, 537 insertions(+), 628 deletions(-)
diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
index af245161d8e7..ba2f658bc241 100644
--- a/Documentation/mm/arch_pgtable_helpers.rst
+++ b/Documentation/mm/arch_pgtable_helpers.rst
@@ -30,8 +30,6 @@ PTE Page Table Helpers
+---------------------------+--------------------------------------------------+
| pte_protnone | Tests a PROT_NONE PTE |
+---------------------------+--------------------------------------------------+
-| pte_devmap | Tests a ZONE_DEVICE mapped PTE |
-+---------------------------+--------------------------------------------------+
| pte_soft_dirty | Tests a soft dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_swp_soft_dirty | Tests a soft dirty swapped PTE |
@@ -104,8 +102,6 @@ PMD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pmd_protnone | Tests a PROT_NONE PMD |
+---------------------------+--------------------------------------------------+
-| pmd_devmap | Tests a ZONE_DEVICE mapped PMD |
-+---------------------------+--------------------------------------------------+
| pmd_soft_dirty | Tests a soft dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_swp_soft_dirty | Tests a soft dirty swapped PMD |
@@ -177,8 +173,6 @@ PUD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pud_write | Tests a writable PUD |
+---------------------------+--------------------------------------------------+
-| pud_devmap | Tests a ZONE_DEVICE mapped PUD |
-+---------------------------+--------------------------------------------------+
| pud_mkyoung | Creates a young PUD |
+---------------------------+--------------------------------------------------+
| pud_mkold | Creates an old PUD |
@@ -242,13 +236,13 @@ SWAP Page Table Helpers
========================
+---------------------------+--------------------------------------------------+
-| __pte_to_swp_entry | Creates a swapped entry (arch) from a mapped PTE |
+| __pte_to_swp_entry | Creates a swp_entry_t (arch) from a swap PTE |
+---------------------------+--------------------------------------------------+
-| __swp_to_pte_entry | Creates a mapped PTE from a swapped entry (arch) |
+| __swp_entry_to_pte | Creates a swap PTE from a swp_entry_t (arch) |
+---------------------------+--------------------------------------------------+
-| __pmd_to_swp_entry | Creates a swapped entry (arch) from a mapped PMD |
+| __pmd_to_swp_entry | Creates a swp_entry_t (arch) from a swap PMD |
+---------------------------+--------------------------------------------------+
-| __swp_to_pmd_entry | Creates a mapped PMD from a swapped entry (arch) |
+| __swp_entry_to_pmd | Creates a swap PMD from a swp_entry_t (arch) |
+---------------------------+--------------------------------------------------+
| is_migration_entry | Tests a migration (read or write) swapped entry |
+-------------------------------+----------------------------------------------+
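
Note that generic code does not usually call these double-underscore arch primitives
directly, but goes through generic wrappers built on top of them. A minimal sketch,
assuming the generic ``pte_to_swp_entry()`` and ``swp_entry_to_pte()`` wrappers from
``include/linux/swapops.h``::

    /* Illustrative sketch only: round-trip a swap PTE through the generic
     * wrappers that build on the arch helpers documented above. */
    #include <linux/pgtable.h>
    #include <linux/swapops.h>

    static void swap_pte_roundtrip_example(pte_t pte)
    {
            swp_entry_t entry;
            pte_t rebuilt;

            if (!is_swap_pte(pte))          /* not a swap PTE, nothing to do */
                    return;

            entry = pte_to_swp_entry(pte);          /* uses __pte_to_swp_entry() */
            rebuilt = swp_entry_to_pte(entry);      /* uses __swp_entry_to_pte() */
            (void)rebuilt;
    }
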
diff --git a/Documentation/mm/balance.rst b/Documentation/mm/balance.rst
index abaa78561c31..c4962c89a7f5 100644
--- a/Documentation/mm/balance.rst
+++ b/Documentation/mm/balance.rst
@@ -81,7 +81,7 @@ Page stealing from process memory and shm is done if stealing the page would
alleviate memory pressure on any zone in the page's node that has fallen below
its watermark.
-watemark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
+watermark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
are per-zone fields, used to determine when a zone needs to be balanced. When
the number of pages falls below watermark[WMARK_MIN], the hysteric field
low_on_memory gets set. This stays set till the number of free pages becomes
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index e28c6a1b40ae..03f8137256f5 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -54,7 +54,7 @@ monitoring are address-space dependent.
DAMON consolidates these implementations in a layer called DAMON Operations
Set, and defines the interface between it and the upper layer. The upper layer
is dedicated for DAMON's core logics including the mechanism for control of the
-monitoring accruracy and the overhead.
+monitoring accuracy and the overhead.
Hence, DAMON can easily be extended for any address space and/or available
hardware features by configuring the core logic to use the appropriate
@@ -313,6 +313,10 @@ sufficient for the given purpose, it shouldn't be unnecessarily further
lowered. It is recommended to be set proportional to ``aggregation interval``.
By default, the ratio is set as ``1/20``, and it is still recommended.
+Based on the manual tuning guide, DAMON provides a more intuitive, knob-based
+intervals auto-tuning mechanism.  Please refer to :ref:`the design document of
+the feature <damon_design_monitoring_intervals_autotuning>` for details.
+
Refer to below documents for an example tuning based on the above guide.
.. toctree::
@@ -321,6 +325,52 @@ Refer to below documents for an example tuning based on the above guide.
monitoring_intervals_tuning_example
+.. _damon_design_monitoring_intervals_autotuning:
+
+Monitoring Intervals Auto-tuning
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DAMON provides automatic tuning of the ``sampling interval`` and ``aggregation
+interval`` based on the :ref:`tuning guide idea
+<damon_design_monitoring_params_tuning_guide>`.  The tuning mechanism allows
+users to set the aimed amount of access events to observe via DAMON within a
+given time interval.  The target can be specified by the user as a ratio of
+DAMON-observed access events to the theoretical maximum amount of the events
+(``access_bp``), measured within a given number of aggregations
+(``aggrs``).
+
+The DAMON-observed access events are calculated in byte granularity based on
+DAMON's :ref:`region assumption <damon_design_region_based_sampling>`.  For
+example, if a region of size ``X`` bytes with ``Y`` ``nr_accesses`` is found, it
+means ``X * Y`` access events are observed by DAMON.  The theoretical maximum
+access events for the region are calculated in the same way, but replacing ``Y``
+with the theoretical maximum ``nr_accesses``, which can be calculated as
+``aggregation interval / sampling interval``.
+
+The mechanism calculates the ratio of access events for ``aggrs`` aggregations,
+and increases or decreases the ``sampling interval`` and ``aggregation
+interval`` in the same ratio, if the observed access ratio is lower or higher
+than the target, respectively.  The ratio of the intervals change is decided in
+proportion to the distance between the current sample ratio and the target
+ratio.
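
As a rough illustration of the arithmetic described above, the following sketch
accumulates the observed and theoretical-maximum access events over the regions of
an aggregation and rescales an interval towards the target ratio.  All names here
are hypothetical and simplified; this is not DAMON's actual implementation::

    /* Illustrative sketch of the access ratio arithmetic and the interval
     * adjustment described above.  All names are hypothetical. */
    struct region_sample {
            unsigned long sz_bytes;         /* region size, X */
            unsigned int nr_accesses;       /* observed accesses, Y */
    };

    /* Accumulate observed and theoretical-maximum access events for a region. */
    static void account_region(const struct region_sample *r,
                               unsigned long sampling_us, unsigned long aggregation_us,
                               unsigned long long *observed, unsigned long long *max)
    {
            unsigned long max_nr_accesses = aggregation_us / sampling_us;

            *observed += (unsigned long long)r->sz_bytes * r->nr_accesses;
            *max += (unsigned long long)r->sz_bytes * max_nr_accesses;
    }

    /* Scale an interval towards the target ratio (both ratios in bp).  An
     * observed ratio below the target grows the interval, one above shrinks
     * it; sampling and aggregation intervals are scaled by the same factor. */
    static unsigned long retune_interval(unsigned long interval_us,
                                         unsigned long observed_bp,
                                         unsigned long target_bp)
    {
            if (!observed_bp)
                    observed_bp = 1;
            return interval_us * target_bp / observed_bp;
    }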
+
+The user can further set the minimum and maximum ``sampling interval`` that can
+be set by the tuning mechanism using two parameters (``min_sample_us`` and
+``max_sample_us``).  Because the tuning mechanism always changes ``sampling
+interval`` and ``aggregation interval`` in the same ratio, the minimum and
+maximum ``aggregation interval`` are set automatically along with each tuning
+change.
+
+The tuning is turned off by default, and needs to be enabled explicitly by the
+user.  As a rule of thumb following the Pareto principle, a 4% access sample
+ratio target is recommended.  Note that the Pareto principle (80/20 rule) is
+applied twice.  That is, it assumes a 4% (20% of 20%) DAMON-observed access
+events ratio (source) to capture 64% (80% multiplied by 80%) of the real access
+events (outcomes).
+
+To know how user-space can use this feature via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`intervals_goal <sysfs_scheme>` part of
+the documentation.
+
+
.. _damon_design_damos:
Operation Schemes
@@ -402,9 +452,9 @@ that supports each action are as below.
- ``lru_deprio``: Deprioritize the region on its LRU lists.
Supported by ``paddr`` operations set.
- ``migrate_hot``: Migrate the regions prioritizing warmer regions.
- Supported by ``paddr`` operations set.
+ Supported by ``vaddr``, ``fvaddr`` and ``paddr`` operations set.
- ``migrate_cold``: Migrate the regions prioritizing colder regions.
- Supported by ``paddr`` operations set.
+ Supported by ``vaddr``, ``fvaddr`` and ``paddr`` operations set.
- ``stat``: Do nothing but count the statistics.
Supported by all operations sets.
@@ -500,10 +550,10 @@ aggressiveness (the quota) of the corresponding scheme. For example, if DAMOS
is under achieving the goal, DAMOS automatically increases the quota. If DAMOS
is over achieving the goal, it decreases the quota.
-The goal can be specified with three parameters, namely ``target_metric``,
-``target_value``, and ``current_value``. The auto-tuning mechanism tries to
-make ``current_value`` of ``target_metric`` be same to ``target_value``.
-Currently, two ``target_metric`` are provided.
+The goal can be specified with four parameters, namely ``target_metric``,
+``target_value``, ``current_value`` and ``nid``. The auto-tuning mechanism
+tries to make the ``current_value`` of ``target_metric`` the same as
+``target_value``.
- ``user_input``: User-provided value. Users could use any metric that they
has interest in for the value. Use space main workload's latency or
@@ -515,6 +565,11 @@ Currently, two ``target_metric`` are provided.
in microseconds that measured from last quota reset to next quota reset.
DAMOS does the measurement on its own, so only ``target_value`` need to be
set by users at the initial time. In other words, DAMOS does self-feedback.
+- ``node_mem_used_bp``: Specific NUMA node's used memory ratio in bp (1/10,000).
+- ``node_mem_free_bp``: Specific NUMA node's free memory ratio in bp (1/10,000).
+
+``nid`` is required only for ``node_mem_used_bp`` and ``node_mem_free_bp``, to
+point to the specific NUMA node of interest.
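
As a rough illustration only, a simple proportional adjustment that is consistent
with the description above could look like the following (a hypothetical helper;
DAMOS' actual adjustment logic may differ)::

    /* Illustrative only: under-achieving the goal grows the quota,
     * over-achieving shrinks it.  Not DAMOS' actual implementation. */
    static unsigned long adjust_quota(unsigned long quota,
                                      unsigned long target_value,
                                      unsigned long current_value)
    {
            if (!current_value)
                    return quota * 2;       /* far from the goal: grow */
            return quota * target_value / current_value;
    }
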
To know how user-space can set the tuning goal metric, the target value, and/or
the current value via :ref:`DAMON sysfs interface <sysfs_interface>`, refer to
@@ -569,11 +624,22 @@ number of filters for each scheme. Each filter specifies
- whether it is to allow (include) or reject (exclude) applying
the scheme's action to the memory (``allow``).
-When multiple filters are installed, each filter is evaluated in the installed
-order. If a part of memory is matched to one of the filter, next filters are
-ignored. If the memory passes through the filters evaluation stage because it
-is not matched to any of the filters, applying the scheme's action to it is
-allowed, same to the behavior when no filter exists.
+For efficient handling of filters, some types of filters are handled by the
+core layer, while others are handled by the operations set.  Hence, in the
+latter case, support of the filter types depends on the DAMON operations set.
+In the case of the core layer-handled filters, memory regions that are excluded
+by the filter are not counted as having been tried by the scheme.  In contrast,
+if a memory region is filtered out by an operations set layer-handled filter,
+it is counted as tried.  This difference affects the statistics.
+
+When multiple filters are installed, the group of filters that are handled by
+the core layer is evaluated first.  After that, the group of filters that are
+handled by the operations layer is evaluated.  Filters in each of the groups
+are evaluated in the installed order.  If a part of memory matches one of the
+filters, the remaining filters are ignored.  If the part passes through the
+filters evaluation stage because it matches none of the filters, whether the
+scheme's action is applied to it depends on the last filter's allowance type.
+If the last filter was for allowing, the part of memory will be rejected, and
+vice versa.
For example, let's assume 1) a filter for allowing anonymous pages and 2)
another filter for rejecting young pages are installed in the order. If a page
@@ -585,39 +651,29 @@ second reject-filter blocks it. If the page is neither anonymous nor young,
the page will pass through the filters evaluation stage since there is no
matching filter, and the action will be applied to the page.
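
The following sketch illustrates the evaluation order described above; the names
and types are hypothetical, not DAMON's internal API::

    #include <linux/list.h>

    /* Illustrative sketch of the filter evaluation order described above. */
    struct filter {
            struct list_head list;
            bool allow;             /* allow (include) or reject (exclude) */
            bool (*matches)(struct filter *f, void *mem);
    };

    static bool filters_allow(struct list_head *core_filters,
                              struct list_head *ops_filters, void *mem)
    {
            struct filter *f, *last = NULL;

            /* Core layer handled filters are evaluated first, in install order. */
            list_for_each_entry(f, core_filters, list) {
                    last = f;
                    if (f->matches(f, mem))
                            return f->allow;        /* first match decides */
            }
            /* Then the operations set handled filters, in install order. */
            list_for_each_entry(f, ops_filters, list) {
                    last = f;
                    if (f->matches(f, mem))
                            return f->allow;
            }
            /* Nothing matched: the result is the opposite of the last filter's
             * allowance type; with no filters installed at all, allow. */
            return last ? !last->allow : true;
    }
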
-Note that the action can equally be applied to memory that either explicitly
-filter-allowed or filters evaluation stage passed. It means that installing
-allow-filters at the end of the list makes no practical change but only
-filters-checking overhead.
-
-For efficient handling of filters, some types of filters are handled by the
-core layer, while others are handled by operations set. In the latter case,
-hence, support of the filter types depends on the DAMON operations set. In
-case of the core layer-handled filters, the memory regions that excluded by the
-filter are not counted as the scheme has tried to the region. In contrast, if
-a memory regions is filtered by an operations set layer-handled filter, it is
-counted as the scheme has tried. This difference affects the statistics.
-
Below ``type`` of filters are currently supported.
-- anonymous page
- - Applied to pages that containing data that not stored in files.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- memory cgroup
- - Applied to pages that belonging to a given cgroup.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- young page
- - Applied to pages that are accessed after the last access check from the
- scheme.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- address range
- - Applied to pages that belonging to a given address range.
- - Handled by the core logic.
-- DAMON monitoring target
- - Applied to pages that belonging to a given DAMON monitoring target.
- - Handled by the core logic.
-
-To know how user-space can set the watermarks via :ref:`DAMON sysfs interface
+- Core layer handled
+  - addr
+    - Applied to pages that belong to a given address range.
+  - target
+    - Applied to pages that belong to a given DAMON monitoring target.
+- Operations layer handled, supported only by the ``paddr`` operations set.
+  - anon
+    - Applied to pages that contain data that is not stored in files.
+  - active
+    - Applied to active pages.
+  - memcg
+    - Applied to pages that belong to a given cgroup.
+  - young
+    - Applied to pages that are accessed after the last access check from the
+      scheme.
+  - hugepage_size
+    - Applied to pages that are managed in a given size range.
+  - unmapped
+    - Applied to pages that are unmapped.
+
+To know how user-space can set the filters via :ref:`DAMON sysfs interface
<sysfs_interface>`, refer to :ref:`filters <sysfs_filters>` part of the
documentation.
diff --git a/Documentation/mm/damon/index.rst b/Documentation/mm/damon/index.rst
index 5a3359704cce..31c1fa955b3d 100644
--- a/Documentation/mm/damon/index.rst
+++ b/Documentation/mm/damon/index.rst
@@ -1,8 +1,8 @@
.. SPDX-License-Identifier: GPL-2.0
-==========================
-DAMON: Data Access MONitor
-==========================
+================================================================
+DAMON: Data Access MONitoring and Access-aware System Operations
+================================================================
DAMON is a Linux kernel subsystem that provides a framework for data access
monitoring and the monitoring results based system operations. The core
diff --git a/Documentation/mm/damon/maintainer-profile.rst b/Documentation/mm/damon/maintainer-profile.rst
index ce3e98458339..5cd07905a193 100644
--- a/Documentation/mm/damon/maintainer-profile.rst
+++ b/Documentation/mm/damon/maintainer-profile.rst
@@ -7,9 +7,9 @@ The DAMON subsystem covers the files that are listed in 'DATA ACCESS MONITOR'
section of 'MAINTAINERS' file.
The mailing lists for the subsystem are damon@lists.linux.dev and
-linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
-to the mailing lists.
+linux-mm@kvack.org. Patches should be made against the `mm-new tree
+<https://git.kernel.org/akpm/mm/h/mm-new>`_ whenever possible and posted to the
+mailing lists.
SCM Trees
---------
@@ -17,17 +17,19 @@ SCM Trees
There are multiple Linux trees for DAMON development. Patches under
development or testing are queued in `damon/next
<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
-Sufficiently reviewed patches will be queued in `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
-subsystem maintainer. After more sufficient tests, the patches will be queued
-in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
-pull-requested to the mainline by the memory management subsystem maintainer.
-
-Note again the patches for `mm-unstable tree
-<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
-management subsystem maintainer. If the patches requires some patches in
-`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which not yet merged
-in mm-unstable, please make sure the requirement is clearly specified.
+Sufficiently reviewed patches will be queued in `mm-new
+<https://git.kernel.org/akpm/mm/h/mm-new>`_ by the memory management subsystem
+maintainer. As more sufficient tests are done, the patches will move to
+`mm-unstable <https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and then to
+`mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_. And finally those
+will be pull-requested to the mainline by the memory management subsystem
+maintainer.
+
+Note again that the patches for the `mm-new tree
+<https://git.kernel.org/akpm/mm/h/mm-new>`_ are queued by the memory management
+subsystem maintainer.  If the patches require some patches in the `damon/next
+tree <https://git.kernel.org/sj/h/damon/next>`_ which are not yet merged into
+mm-new, please make sure the requirement is clearly specified.
Submit checklist addendum
-------------------------
@@ -53,8 +55,9 @@ Further doing below and putting the results will be helpful.
Key cycle dates
---------------
-Patches can be sent anytime. Key cycle dates of the `mm-unstable
-<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
+Patches can be sent anytime. Key cycle dates of the `mm-new
+<https://git.kernel.org/akpm/mm/h/mm-new>`_, `mm-unstable
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
management subsystem maintainer.
diff --git a/Documentation/mm/damon/monitoring_intervals_tuning_example.rst b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
index 334a854efb40..7207cbed591f 100644
--- a/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
+++ b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
@@ -36,7 +36,7 @@ Then, list the DAMON-found regions of different access patterns, sorted by the
"access temperature". "Access temperature" is a metric representing the
access-hotness of a region. It is calculated as a weighted sum of the access
frequency and the age of the region. If the access frequency is 0 %, the
-temperature is multipled by minus one. That is, if a region is not accessed,
+temperature is multiplied by minus one. That is, if a region is not accessed,
it gets minus temperature and it gets lower as not accessed for longer time.
The sorting is in temperature-ascendint order, so the region at the top of the
list is the coldest, and the one at the bottom is the hottest one. ::
@@ -58,11 +58,11 @@ list is the coldest, and the one at the bottom is the hottest one. ::
The list shows not seemingly hot regions, and only minimum access pattern
diversity. Every region has zero access frequency. The number of region is
10, which is the default ``min_nr_regions value``. Size of each region is also
-nearly idential. We can suspect this is because “adaptive regions adjustment”
+nearly identical. We can suspect this is because “adaptive regions adjustment”
mechanism was not well working. As the guide suggested, we can get relative
hotness of regions using ``age`` as the recency information. That would be
better than nothing, but given the fact that the longest age is only about 6
-seconds while we waited about ten minuts, it is unclear how useful this will
+seconds while we waited about ten minutes, it is unclear how useful this will
be.
The temperature ranges to total size of regions of each range histogram
@@ -190,7 +190,7 @@ for sampling and aggregation intervals, respectively). ::
The number of regions having different access patterns has significantly
increased. Size of each region is also more varied. Total size of non-zero
access frequency regions is also significantly increased. Maybe this is already
-good enough to make some meaningful memory management efficieny changes.
+good enough to make some meaningful memory management efficiency changes.
800ms/16s intervals: Another bias
=================================
diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index f6d53c37a2ca..7d61b7a8b65b 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -400,7 +400,7 @@ Exclusive access memory
Some devices have features such as atomic PTE bits that can be used to implement
atomic access to system memory. To support atomic operations to a shared virtual
memory page such a device needs access to that page which is exclusive of any
-userspace access from the CPU. The ``make_device_exclusive_range()`` function
+userspace access from the CPU. The ``make_device_exclusive()`` function
can be used to make a memory range inaccessible from userspace.
This replaces all mappings for pages in the given range with special swap
diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
index 0be1c7503a01..fb45acba16ac 100644
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -56,11 +56,9 @@ documentation, or deleted if it has served its purpose.
page_owner
page_table_check
remap_file_pages
- slub
split_page_table_lock
transhuge
unevictable-lru
vmalloced-kernel-stacks
vmemmap_dedup
- z3fold
zsmalloc
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index 519b35a4caf5..34602b254aa6 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -146,18 +146,33 @@ Steps:
18. The new page is moved to the LRU and can be scanned by the swapper,
etc. again.
-Non-LRU page migration
-======================
-
-Although migration originally aimed for reducing the latency of memory
-accesses for NUMA, compaction also uses migration to create high-order
-pages. For compaction purposes, it is also useful to be able to move
-non-LRU pages, such as zsmalloc and virtio-balloon pages.
-
-If a driver wants to make its pages movable, it should define a struct
-movable_operations. It then needs to call __SetPageMovable() on each
-page that it may be able to move. This uses the ``page->mapping`` field,
-so this field is not available for the driver to use for other purposes.
+movable_ops page migration
+==========================
+
+Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
+zsmalloc pages) can be migrated using the movable_ops migration framework.
+
+The "struct movable_operations" provide callbacks specific to a page type
+for isolating, migrating and un-isolating (putback) these pages.
+
+Once a page is indicated as having movable_ops, that condition must not
+change until the page is freed back to the buddy. This includes not
+changing/clearing the page type and not changing/clearing the
+PG_movable_ops page flag.
+
+Arbitrary drivers cannot currently make use of this framework, as it
+requires:
+
+(a) a page type
+(b) indicating them as possibly having movable_ops in page_has_movable_ops()
+ based on the page type
+(c) returning the movable_ops from page_movable_ops() based on the page
+ type
+(d) not reusing the PG_movable_ops and PG_movable_ops_isolated page flags
+ for other purposes
+
+For example, balloon drivers can make use of this framework through the
+balloon-compaction infrastructure residing in the core kernel.
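
For reference, the callbacks a page type supplies look roughly like the following
sketch of ``struct movable_operations`` (see ``include/linux/migrate.h`` for the
authoritative definition)::

    struct movable_operations {
            bool (*isolate_page)(struct page *, isolate_mode_t);
            int (*migrate_page)(struct page *dst, struct page *src,
                                enum migrate_mode);
            void (*putback_page)(struct page *);
    };
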
Monitoring Migration
=====================
diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index 71fd4a6acf42..9af11b5bd145 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -338,10 +338,272 @@ Statistics
Zones
=====
+As we have mentioned, each zone in memory is described by a ``struct zone``
+which is an element of the ``node_zones`` array of the node it belongs to.
+``struct zone`` is the core data structure of the page allocator. A zone
+represents a range of physical memory and may have holes.
+
+The page allocator uses the GFP flags (see :ref:`mm-api-gfp-flags`) specified by
+a memory allocation to determine the highest zone in a node from which the
+allocation can be satisfied.  The page allocator first tries to allocate memory
+from that zone; if it can't allocate the requested amount of memory from the
+zone, it falls back to the next lower zone in the node, and the process
+continues down to and including the lowest zone.  For example, if a node
+contains ``ZONE_DMA32``, ``ZONE_NORMAL`` and ``ZONE_MOVABLE`` and the highest
+zone of a memory allocation is ``ZONE_MOVABLE``, the order of the zones from
+which the page allocator allocates memory is ``ZONE_MOVABLE`` >
+``ZONE_NORMAL`` > ``ZONE_DMA32``.
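
A minimal sketch of that fallback order is shown below (``try_alloc_from_zone()``
is a hypothetical helper; the real logic lives in the page allocator's
``get_page_from_freelist()`` and is considerably more involved)::

    #include <linux/mmzone.h>

    /* Hypothetical helper standing in for the per-zone allocation attempt. */
    static struct page *try_alloc_from_zone(struct zone *zone, unsigned int order);

    /* Illustrative only: walk the node's zones from the highest allowed zone
     * down to the lowest, taking the first zone that can satisfy the request. */
    static struct page *alloc_from_node(pg_data_t *pgdat, int highest_zoneidx,
                                        unsigned int order)
    {
            int i;

            for (i = highest_zoneidx; i >= 0; i--) {
                    struct zone *zone = &pgdat->node_zones[i];
                    struct page *page;

                    if (!populated_zone(zone))
                            continue;
                    page = try_alloc_from_zone(zone, order);
                    if (page)
                            return page;
            }
            return NULL;
    }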
+
+At runtime, free pages in a zone are in the Per-CPU Pagesets (PCP) or free areas
+of the zone. The Per-CPU Pagesets are a vital mechanism in the kernel's memory
+management system. By handling most frequent allocations and frees locally on
+each CPU, the Per-CPU Pagesets improve performance and scalability, especially
+on systems with many cores. The page allocator in the kernel employs a two-step
+strategy for memory allocation, starting with the Per-CPU Pagesets before
+falling back to the buddy allocator. Pages are transferred between the Per-CPU
+Pagesets and the global free areas (managed by the buddy allocator) in batches.
+This minimizes the overhead of frequent interactions with the global buddy
+allocator.
+
+Architecture-specific code calls free_area_init() to initialize zones.
+
+Zone structure
+--------------
+The zone structure ``struct zone`` is defined in ``include/linux/mmzone.h``.
+Here we briefly describe the fields of this structure:
-.. admonition:: Stub
+General
+~~~~~~~
- This section is incomplete. Please list and describe the appropriate fields.
+``_watermark``
+ The watermarks for this zone. When the amount of free pages in a zone is below
+ the min watermark, boosting is ignored, an allocation may trigger direct
+ reclaim and direct compaction, and the watermark is also used to throttle
+ direct reclaim.
+ When the amount of free pages in a zone is below the low watermark, kswapd is
+ woken up. When the amount of free pages in a zone is above the high watermark,
+ kswapd stops reclaiming (a zone is balanced) when the
+ ``NUMA_BALANCING_MEMORY_TIERING`` bit of ``sysctl_numa_balancing_mode`` is not
+ set. The promo watermark is used for memory tiering and NUMA balancing. When
+ the amount of free pages in a zone is above the promo watermark, kswapd stops
+ reclaiming when the ``NUMA_BALANCING_MEMORY_TIERING`` bit of
+ ``sysctl_numa_balancing_mode`` is set. The watermarks are set by
+ ``__setup_per_zone_wmarks()``. The min watermark is calculated according to
+ ``vm.min_free_kbytes`` sysctl. The other three watermarks are set according
+ to the distance between two watermarks. The distance itself is calculated
+ taking ``vm.watermark_scale_factor`` sysctl into account.
+
+``watermark_boost``
+ The number of pages which are used to boost watermarks to increase reclaim
+ pressure to reduce the likelihood of future fallbacks and wake kswapd now
+ as the node may be balanced overall and kswapd will not wake naturally.
+
+``nr_reserved_highatomic``
+ The number of pages which are reserved for high-order atomic allocations.
+
+``nr_free_highatomic``
+ The number of free pages in reserved highatomic pageblocks.
+
+``lowmem_reserve``
+ The array of the amounts of the memory reserved in this zone for memory
+ allocations. For example, if the highest zone a memory allocation can
+ allocate memory from is ``ZONE_MOVABLE``, the amount of memory reserved in
+ this zone for this allocation is ``lowmem_reserve[ZONE_MOVABLE]`` when
+ attempting to allocate memory from this zone. This is a mechanism the page
+ allocator uses to prevent allocations which could use ``highmem`` from using
+ too much ``lowmem``. For some specialised workloads on ``highmem`` machines,
+ it is dangerous for the kernel to allow process memory to be allocated from
+ the ``lowmem`` zone. This is because that memory could then be pinned via the
+ ``mlock()`` system call, or by unavailability of swapspace.
+ ``vm.lowmem_reserve_ratio`` sysctl determines how aggressive the kernel is in
+ defending these lower zones. This array is recalculated by
+ ``setup_per_zone_lowmem_reserve()`` at runtime if ``vm.lowmem_reserve_ratio``
+ sysctl changes.
+
+``node``
+ The index of the node this zone belongs to. Available only when
+ ``CONFIG_NUMA`` is enabled because there is only one node in a UMA system.
+
+``zone_pgdat``
+ Pointer to the ``struct pglist_data`` of the node this zone belongs to.
+
+``per_cpu_pageset``
+ Pointer to the Per-CPU Pagesets (PCP) allocated and initialized by
+ ``setup_zone_pageset()``. By handling most frequent allocations and frees
+ locally on each CPU, PCP improves performance and scalability on systems with
+ many cores.
+
+``pageset_high_min``
+ Copied to the ``high_min`` of the Per-CPU Pagesets for faster access.
+
+``pageset_high_max``
+ Copied to the ``high_max`` of the Per-CPU Pagesets for faster access.
+
+``pageset_batch``
+ Copied to the ``batch`` of the Per-CPU Pagesets for faster access. The
+ ``batch``, ``high_min`` and ``high_max`` of the Per-CPU Pagesets are used to
+ calculate the number of elements the Per-CPU Pagesets obtain from the buddy
+ allocator under a single hold of the lock for efficiency. They are also used
+ to decide if the Per-CPU Pagesets return pages to the buddy allocator in page
+ free process.
+
+``pageblock_flags``
+ The pointer to the flags for the pageblocks in the zone (see
+ ``include/linux/pageblock-flags.h`` for flags list). The memory is allocated
+ in ``setup_usemap()``. Each pageblock occupies ``NR_PAGEBLOCK_BITS`` bits.
+ Defined only when ``CONFIG_FLATMEM`` is enabled. The flags are stored in
+ ``mem_section`` when ``CONFIG_SPARSEMEM`` is enabled.
+
+``zone_start_pfn``
+ The start pfn of the zone. It is initialized by
+ ``calculate_node_totalpages()``.
+
+``managed_pages``
+ The present pages managed by the buddy system, which is calculated as:
+ ``managed_pages`` = ``present_pages`` - ``reserved_pages``, ``reserved_pages``
+ includes pages allocated by the memblock allocator. It should be used by the
+ page allocator and VM scanner to calculate all kinds of watermarks and
+ thresholds. It is accessed using ``atomic_long_xxx()`` functions. It is
+ initialized in ``free_area_init_core()`` and then is reinitialized when the
+ memblock allocator frees pages into the buddy system.
+
+``spanned_pages``
+ The total pages spanned by the zone, including holes, which is calculated as:
+ ``spanned_pages`` = ``zone_end_pfn`` - ``zone_start_pfn``. It is initialized
+ by ``calculate_node_totalpages()``.
+
+``present_pages``
+ The physical pages existing within the zone, which is calculated as:
+ ``present_pages`` = ``spanned_pages`` - ``absent_pages`` (pages in holes). It
+ may be used by memory hotplug or memory power management logic to figure out
+ unmanaged pages by checking (``present_pages`` - ``managed_pages``). Write
+ access to ``present_pages`` at runtime should be protected by
+ ``mem_hotplug_begin/done()``. Any reader who can't tolerate drift of
+ ``present_pages`` should use ``get_online_mems()`` to get a stable value. It
+ is initialized by ``calculate_node_totalpages()``.
+
+``present_early_pages``
+ The present pages existing within the zone located on memory available since
+ early boot, excluding hotplugged memory. Defined only when
+ ``CONFIG_MEMORY_HOTPLUG`` is enabled and initialized by
+ ``calculate_node_totalpages()``.
+
+``cma_pages``
+ The pages reserved for CMA use. These pages behave like ``ZONE_MOVABLE`` when
+ they are not used for CMA. Defined only when ``CONFIG_CMA`` is enabled.
+
+``name``
+ The name of the zone. It is a pointer to the corresponding element of
+ the ``zone_names`` array.
+
+``nr_isolate_pageblock``
+ Number of isolated pageblocks. It is used to solve the incorrect freepage
+ counting problem caused by racy retrieval of the pageblock's migratetype.
+ Protected by
+ ``zone->lock``. Defined only when ``CONFIG_MEMORY_ISOLATION`` is enabled.
+
+``span_seqlock``
+ The seqlock to protect ``zone_start_pfn`` and ``spanned_pages``. It is a
+ seqlock because it has to be read outside of ``zone->lock``, and it is done in
+ the main allocator path. However, the seqlock is written quite infrequently.
+ Defined only when ``CONFIG_MEMORY_HOTPLUG`` is enabled.
+
+``initialized``
+ The flag indicating if the zone is initialized. Set by
+ ``init_currently_empty_zone()`` during boot.
+
+``free_area``
+ The array of free areas, where each element corresponds to a specific order
+ which is a power of two. The buddy allocator uses this structure to manage
+ free memory efficiently. When allocating, it tries to find the smallest
+ sufficient block; if the smallest sufficient block is larger than the
+ requested size, it will be recursively split into the next smaller blocks
+ until the required size is reached. When a page is freed, it may be merged
+ with its buddy to form a larger block. It is initialized by
+ ``zone_init_free_lists()``.
+
+``unaccepted_pages``
+ The list of pages to be accepted. All pages on the list are of order
+ ``MAX_PAGE_ORDER``.
+ Defined only when ``CONFIG_UNACCEPTED_MEMORY`` is enabled.
+
+``flags``
+ The zone flags. The three least significant bits are used and defined by
+ ``enum zone_flags``. ``ZONE_BOOSTED_WATERMARK`` (bit 0): zone recently boosted
+ watermarks. Cleared when kswapd is woken. ``ZONE_RECLAIM_ACTIVE`` (bit 1):
+ kswapd may be scanning the zone. ``ZONE_BELOW_HIGH`` (bit 2): zone is below
+ high watermark.
+
+``lock``
+ The main lock that protects the internal data structures of the page allocator
+ specific to the zone, especially protects ``free_area``.
+
+``percpu_drift_mark``
+ When free pages are below this point, additional steps are taken when reading
+ the number of free pages to avoid per-cpu counter drift allowing watermarks
+ to be breached. It is updated in ``refresh_zone_stat_thresholds()``.
+
+Compaction control
+~~~~~~~~~~~~~~~~~~
+
+``compact_cached_free_pfn``
+ The PFN where compaction free scanner should start in the next scan.
+
+``compact_cached_migrate_pfn``
+ The PFNs where compaction migration scanner should start in the next scan.
+ This array has two elements: the first one is used in ``MIGRATE_ASYNC`` mode,
+ and the other one is used in ``MIGRATE_SYNC`` mode.
+
+``compact_init_migrate_pfn``
+ The initial migration PFN which is initialized to 0 at boot time, and to the
+ first pageblock with migratable pages in the zone after a full compaction
+ finishes. It is used to check if a scan is a whole zone scan or not.
+
+``compact_init_free_pfn``
+ The initial free PFN which is initialized to 0 at boot time and to the last
+ pageblock with free ``MIGRATE_MOVABLE`` pages in the zone. It is used to check
+ if it is the start of a scan.
+
+``compact_considered``
+ The number of compactions attempted since last failure. It is reset in
+ ``defer_compaction()`` when a compaction fails to result in a page allocation
+ success. It is increased by 1 in ``compaction_deferred()`` when a compaction
+ should be skipped. ``compaction_deferred()`` is called before
+ ``compact_zone()`` is called, ``compaction_defer_reset()`` is called when
+ ``compact_zone()`` returns ``COMPACT_SUCCESS``, ``defer_compaction()`` is
+ called when ``compact_zone()`` returns ``COMPACT_PARTIAL_SKIPPED`` or
+ ``COMPACT_COMPLETE``.
+
+``compact_defer_shift``
+ The number of compactions skipped before trying again is
+ ``1<<compact_defer_shift``. It is increased by 1 in ``defer_compaction()``.
+ It is reset in ``compaction_defer_reset()`` when a direct compaction results
+ in a page allocation success. Its maximum value is ``COMPACT_MAX_DEFER_SHIFT``.
+
+``compact_order_failed``
+ The minimum compaction failed order. It is set in ``compaction_defer_reset()``
+ when a compaction succeeds and in ``defer_compaction()`` when a compaction
+ fails to result in a page allocation success.
+
+``compact_blockskip_flush``
+ Set to true when compaction migration scanner and free scanner meet, which
+ means the ``PB_compact_skip`` bits should be cleared.
+
+``contiguous``
+ Set to true when the zone is contiguous (in other words, no hole).
+
+Statistics
+~~~~~~~~~~
+
+``vm_stat``
+ VM statistics for the zone. The items tracked are defined by
+ ``enum zone_stat_item``.
+
+``vm_numa_event``
+ VM NUMA event statistics for the zone. The items tracked are defined by
+ ``enum numa_stat_item``.
+
+``per_cpu_zonestats``
+ Per-CPU VM statistics for the zone. It records VM statistics and VM NUMA event
+ statistics on a per-CPU basis. It reduces updates to the global ``vm_stat``
+ and ``vm_numa_event`` fields of the zone to improve performance.
.. _pages:
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..be49e2a269e4 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -303,7 +303,9 @@ There are four key operations typically performed on page tables:
1. **Traversing** page tables - Simply reading page tables in order to traverse
them. This only requires that the VMA is kept stable, so a lock which
establishes this suffices for traversal (there are also lockless variants
- which eliminate even this requirement, such as :c:func:`!gup_fast`).
+ which eliminate even this requirement, such as :c:func:`!gup_fast`). There is
+ also a special case of page table traversal for non-VMA regions which we
+ consider separately below.
2. **Installing** page table mappings - Whether creating a new mapping or
modifying an existing one in such a way as to change its identity. This
requires that the VMA is kept stable via an mmap or VMA lock (explicitly not
@@ -335,15 +337,13 @@ ahead and perform these operations on page tables (though internally, kernel
operations that perform writes also acquire internal page table locks to
serialise - see the page table implementation detail section for more details).
+.. note:: We free empty PTE tables on zap under the RCU lock - this does not
+ change the aforementioned locking requirements around zapping.
+
When **installing** page table entries, the mmap or VMA lock must be held to
keep the VMA stable. We explore why this is in the page table locking details
section below.
-.. warning:: Page tables are normally only traversed in regions covered by VMAs.
- If you want to traverse page tables in areas that might not be
- covered by VMAs, heavier locking is required.
- See :c:func:`!walk_page_range_novma` for details.
-
**Freeing** page tables is an entirely internal memory management operation and
has special requirements (see the page freeing section below for more details).
@@ -355,6 +355,44 @@ has special requirements (see the page freeing section below for more details).
from the reverse mappings, but no other VMAs can be permitted to be
accessible and span the specified range.
+Traversing non-VMA page tables
+------------------------------
+
+We've focused above on traversal of page tables belonging to VMAs. It is also
+possible to traverse page tables which are not represented by VMAs.
+
+Kernel page table mappings themselves are generally managed by whatever part of
+the kernel established them, and the aforementioned locking rules do not apply -
+for instance vmalloc has its own set of locks which are utilised for
+establishing and tearing down its page tables.
+
+However, for convenience we provide the :c:func:`!walk_kernel_page_table_range`
+function which is synchronised via the mmap lock on the :c:macro:`!init_mm`
+kernel instantiation of the :c:struct:`!struct mm_struct` metadata object.
+
+If an operation requires exclusive access, a write lock is used, but if not, a
+read lock suffices - we assert only that at least a read lock has been acquired.
+
+Since, aside from vmalloc and memory hot plug, kernel page tables are not torn
+down all that often, this usually suffices; however, any caller of this
+functionality must ensure that any additionally required locks are acquired in
+advance.
+
+We also permit a truly unusual case: the traversal of non-VMA ranges in
+**userland**, as provided for by :c:func:`!walk_page_range_debug`.
+
+This has only one user - the general page table dumping logic (implemented in
+:c:macro:`!mm/ptdump.c`) - which seeks to expose all mappings for debug purposes
+even if they are highly unusual (possibly architecture-specific) and are not
+backed by a VMA.
+
+We must take great care in this case, as the :c:func:`!munmap` implementation
+detaches VMAs under an mmap write lock before tearing down page tables under a
+downgraded mmap read lock.
+
+This means such a traversal could race with the page table teardown, and thus
+an mmap **write** lock is required.
+
Lock ordering
-------------
@@ -461,6 +499,10 @@ Locking Implementation Details
Page table locking details
--------------------------
+.. note:: This section explores page table locking requirements for page tables
+ encompassed by a VMA. See the above section on non-VMA page table
+ traversal for details on how we handle that case.
+
In addition to the locks described in the terminology section above, we have
additional locks dedicated to page tables:
@@ -716,9 +758,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases when the user already holds mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention but the caller should still check their return values
+in case they fail for other reasons.
+
+VMA read locks increment :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.
VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +773,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
lock, releasing or downgrading the mmap write lock also releases the VMA write
lock so there is no :c:func:`!vma_end_write` function.
-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA lock, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the vma sequence number used for serialisation is updated.
This ensures the semantics we require - VMA write locks provide exclusive write
access to the VMA.
@@ -738,7 +785,7 @@ Implementation details
The VMA lock mechanism is designed to be a lightweight means of avoiding the use
of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
:c:struct:`!struct mm_struct` and the VMA.
Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +826,31 @@ release of any VMA locks on its release makes sense, as you would never want to
keep VMAs locked across entirely separate write operations. It also maintains
correct lock ordering.
-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.
-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
are also RCU safe, so the whole read lock operation is guaranteed to function
correctly.
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of
+the mm. During this entire operation mmap write lock is held.
This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
until these are finished and mutual exclusion is achieved.
-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, VMA's sequence number will
+indicate VMA's write-locked state until mmap write lock is dropped or downgraded.
-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
fast RCU-based per-VMA lock acquisition (especially on page fault, though
utilised elsewhere) with minimal complexity around lock ordering.
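
A condensed sketch of that read-lock fast path is shown below (the helpers are
hypothetical; see :c:func:`!vma_start_read` for the real logic)::

    /* Hypothetical helpers; the real logic lives in vma_start_read(). */
    static bool raise_vm_refcnt(struct vm_area_struct *vma);
    static void drop_vm_refcnt(struct vm_area_struct *vma);
    static bool vma_seq_matches_mm(struct vm_area_struct *vma, struct mm_struct *mm);

    /* Illustrative only: the per-VMA read-lock fast path, heavily simplified. */
    static bool vma_read_trylock_example(struct mm_struct *mm,
                                         struct vm_area_struct *vma)
    {
            /* Raise vma->vm_refcnt; fails if a writer currently holds the VMA. */
            if (!raise_vm_refcnt(vma))
                    return false;

            /* A sequence count matching the mm means the VMA is write-locked,
             * so drop the reference and back off. */
            if (vma_seq_matches_mm(vma, mm)) {
                    drop_vm_refcnt(vma);
                    return false;
            }
            return true;    /* other readers may hold the lock concurrently */
    }
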
diff --git a/Documentation/mm/slab.rst b/Documentation/mm/slab.rst
index 87d5a5bb172f..2bcc58ada302 100644
--- a/Documentation/mm/slab.rst
+++ b/Documentation/mm/slab.rst
@@ -3,3 +3,10 @@
===============
Slab Allocation
===============
+
+Functions and structures
+========================
+
+.. kernel-doc:: mm/slab.h
+.. kernel-doc:: mm/slub.c
+ :internal:
diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
deleted file mode 100644
index 84ca1dc94e5e..000000000000
--- a/Documentation/mm/slub.rst
+++ /dev/null
@@ -1,470 +0,0 @@
-==========================
-Short users guide for SLUB
-==========================
-
-The basic philosophy of SLUB is very different from SLAB. SLAB
-requires rebuilding the kernel to activate debug options for all
-slab caches. SLUB always includes full debugging but it is off by default.
-SLUB can enable debugging only for selected slabs in order to avoid
-an impact on overall system performance which may make a bug more
-difficult to find.
-
-In order to switch debugging on one can add an option ``slab_debug``
-to the kernel command line. That will enable full debugging for
-all slabs.
-
-Typically one would then use the ``slabinfo`` command to get statistical
-data and perform operation on the slabs. By default ``slabinfo`` only lists
-slabs that have data in them. See "slabinfo -h" for more options when
-running the command. ``slabinfo`` can be compiled with
-::
-
- gcc -o slabinfo tools/mm/slabinfo.c
-
-Some of the modes of operation of ``slabinfo`` require that slub debugging
-be enabled on the command line. F.e. no tracking information will be
-available without debugging on and validation can only partially
-be performed if debugging was not switched on.
-
-Some more sophisticated uses of slab_debug:
--------------------------------------------
-
-Parameters may be given to ``slab_debug``. If none is specified then full
-debugging is enabled. Format:
-
-slab_debug=<Debug-Options>
- Enable options for all slabs
-
-slab_debug=<Debug-Options>,<slab name1>,<slab name2>,...
- Enable options only for select slabs (no spaces
- after a comma)
-
-Multiple blocks of options for all slabs or selected slabs can be given, with
-blocks of options delimited by ';'. The last of "all slabs" blocks is applied
-to all slabs except those that match one of the "select slabs" block. Options
-of the first "select slabs" blocks that matches the slab's name are applied.
-
-Possible debug options are::
-
- F Sanity checks on (enables SLAB_DEBUG_CONSISTENCY_CHECKS
- Sorry SLAB legacy issues)
- Z Red zoning
- P Poisoning (object and padding)
- U User tracking (free and alloc)
- T Trace (please only use on single slabs)
- A Enable failslab filter mark for the cache
- O Switch debugging off for caches that would have
- caused higher minimum slab orders
- - Switch all debugging off (useful if the kernel is
- configured with CONFIG_SLUB_DEBUG_ON)
-
-F.e. in order to boot just with sanity checks and red zoning one would specify::
-
- slab_debug=FZ
-
-Trying to find an issue in the dentry cache? Try::
-
- slab_debug=,dentry
-
-to only enable debugging on the dentry cache. You may use an asterisk at the
-end of the slab name, in order to cover all slabs with the same prefix. For
-example, here's how you can poison the dentry cache as well as all kmalloc
-slabs::
-
- slab_debug=P,kmalloc-*,dentry
-
-Red zoning and tracking may realign the slab. We can just apply sanity checks
-to the dentry cache with::
-
- slab_debug=F,dentry
-
-Debugging options may require the minimum possible slab order to increase as
-a result of storing the metadata (for example, caches with PAGE_SIZE object
-sizes). This has a higher likelihood of resulting in slab allocation errors
-in low memory situations or if there's high fragmentation of memory. To
-switch off debugging for such caches by default, use::
-
- slab_debug=O
-
-You can apply different options to different list of slab names, using blocks
-of options. This will enable red zoning for dentry and user tracking for
-kmalloc. All other slabs will not get any debugging enabled::
-
- slab_debug=Z,dentry;U,kmalloc-*
-
-You can also enable options (e.g. sanity checks and poisoning) for all caches
-except some that are deemed too performance critical and don't need to be
-debugged by specifying global debug options followed by a list of slab names
-with "-" as options::
-
- slab_debug=FZ;-,zs_handle,zspage
-
-The state of each debug option for a slab can be found in the respective files
-under::
-
- /sys/kernel/slab/<slab name>/
-
-If the file contains 1, the option is enabled, 0 means disabled. The debug
-options from the ``slab_debug`` parameter translate to the following files::
-
- F sanity_checks
- Z red_zone
- P poison
- U store_user
- T trace
- A failslab
-
-failslab file is writable, so writing 1 or 0 will enable or disable
-the option at runtime. Write returns -EINVAL if cache is an alias.
-Careful with tracing: It may spew out lots of information and never stop if
-used on the wrong slab.
-
-Slab merging
-============
-
-If no debug options are specified then SLUB may merge similar slabs together
-in order to reduce overhead and increase cache hotness of objects.
-``slabinfo -a`` displays which slabs were merged together.
-
-Slab validation
-===============
-
-SLUB can validate all object if the kernel was booted with slab_debug. In
-order to do so you must have the ``slabinfo`` tool. Then you can do
-::
-
- slabinfo -v
-
-which will test all objects. Output will be generated to the syslog.
-
-This also works in a more limited way if boot was without slab debug.
-In that case ``slabinfo -v`` simply tests all reachable objects. Usually
-these are in the cpu slabs and the partial slabs. Full slabs are not
-tracked by SLUB in a non debug situation.
-
-Getting more performance
-========================
-
-To some degree SLUB's performance is limited by the need to take the
-list_lock once in a while to deal with partial slabs. That overhead is
-governed by the order of the allocation for each slab. The allocations
-can be influenced by kernel parameters:
-
-.. slab_min_objects=x (default: automatically scaled by number of cpus)
-.. slab_min_order=x (default 0)
-.. slab_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
-
-``slab_min_objects``
- allows to specify how many objects must at least fit into one
- slab in order for the allocation order to be acceptable. In
- general slub will be able to perform this number of
- allocations on a slab without consulting centralized resources
- (list_lock) where contention may occur.
-
-``slab_min_order``
- specifies a minimum order of slabs. A similar effect like
- ``slab_min_objects``.
-
-``slab_max_order``
- specified the order at which ``slab_min_objects`` should no
- longer be checked. This is useful to avoid SLUB trying to
- generate super large order pages to fit ``slab_min_objects``
- of a slab cache with large object sizes into one high order
- page. Setting command line parameter
- ``debug_guardpage_minorder=N`` (N > 0), forces setting
- ``slab_max_order`` to 0, what cause minimum possible order of
- slabs allocation.
-
-``slab_strict_numa``
- Enables the application of memory policies on each
- allocation. This results in more accurate placement of
- objects which may result in the reduction of accesses
- to remote nodes. The default is to only apply memory
- policies at the folio level when a new folio is acquired
- or a folio is retrieved from the lists. Enabling this
- option reduces the fastpath performance of the slab allocator.
-
-SLUB Debug output
-=================
-
-Here is a sample of slub debug output::
-
- ====================================================================
- BUG kmalloc-8: Right Redzone overwritten
- --------------------------------------------------------------------
-
- INFO: 0xc90f6d28-0xc90f6d2b. First byte 0x00 instead of 0xcc
- INFO: Slab 0xc528c530 flags=0x400000c3 inuse=61 fp=0xc90f6d58
- INFO: Object 0xc90f6d20 @offset=3360 fp=0xc90f6d58
- INFO: Allocated in get_modalias+0x61/0xf5 age=53 cpu=1 pid=554
-
- Bytes b4 (0xc90f6d10): 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
- Object (0xc90f6d20): 31 30 31 39 2e 30 30 35 1019.005
- Redzone (0xc90f6d28): 00 cc cc cc .
- Padding (0xc90f6d50): 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
-
- [<c010523d>] dump_trace+0x63/0x1eb
- [<c01053df>] show_trace_log_lvl+0x1a/0x2f
- [<c010601d>] show_trace+0x12/0x14
- [<c0106035>] dump_stack+0x16/0x18
- [<c017e0fa>] object_err+0x143/0x14b
- [<c017e2cc>] check_object+0x66/0x234
- [<c017eb43>] __slab_free+0x239/0x384
- [<c017f446>] kfree+0xa6/0xc6
- [<c02e2335>] get_modalias+0xb9/0xf5
- [<c02e23b7>] dmi_dev_uevent+0x27/0x3c
- [<c027866a>] dev_uevent+0x1ad/0x1da
- [<c0205024>] kobject_uevent_env+0x20a/0x45b
- [<c020527f>] kobject_uevent+0xa/0xf
- [<c02779f1>] store_uevent+0x4f/0x58
- [<c027758e>] dev_attr_store+0x29/0x2f
- [<c01bec4f>] sysfs_write_file+0x16e/0x19c
- [<c0183ba7>] vfs_write+0xd1/0x15a
- [<c01841d7>] sys_write+0x3d/0x72
- [<c0104112>] sysenter_past_esp+0x5f/0x99
- [<b7f7b410>] 0xb7f7b410
- =======================
-
- FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
-
-If SLUB encounters a corrupted object (full detection requires the kernel
-to be booted with slab_debug) then the following output will be dumped
-into the syslog:
-
-1. Description of the problem encountered
-
- This will be a message in the system log starting with::
-
- ===============================================
- BUG <slab cache affected>: <What went wrong>
- -----------------------------------------------
-
- INFO: <corruption start>-<corruption_end> <more info>
- INFO: Slab <address> <slab information>
- INFO: Object <address> <object information>
- INFO: Allocated in <kernel function> age=<jiffies since alloc> cpu=<allocated by
- cpu> pid=<pid of the process>
- INFO: Freed in <kernel function> age=<jiffies since free> cpu=<freed by cpu>
- pid=<pid of the process>
-
- (Object allocation / free information is only available if SLAB_STORE_USER is
- set for the slab. slab_debug sets that option)
-
-2. The object contents if an object was involved.
-
- Various types of lines can follow the BUG SLUB line:
-
- Bytes b4 <address> : <bytes>
- Shows a few bytes before the object where the problem was detected.
- Can be useful if the corruption does not stop with the start of the
- object.
-
- Object <address> : <bytes>
- The bytes of the object. If the object is inactive then the bytes
- typically contain poison values. Any non-poison value shows a
- corruption by a write after free.
-
- Redzone <address> : <bytes>
- The Redzone following the object. The Redzone is used to detect
- writes after the object. All bytes should always have the same
- value. If there is any deviation then it is due to a write after
- the object boundary.
-
- (Redzone information is only available if SLAB_RED_ZONE is set.
- slab_debug sets that option)
-
- Padding <address> : <bytes>
- Unused data to fill up the space in order to get the next object
- properly aligned. In the debug case we make sure that there are
- at least 4 bytes of padding. This allows the detection of writes
- before the object.
-
-3. A stackdump
-
- The stackdump describes the location where the error was detected. The cause
- of the corruption may more likely be found by looking at the function that
- allocated or freed the object.
-
-4. Report on how the problem was dealt with in order to ensure the continued
- operation of the system.
-
- These are messages in the system log beginning with::
-
- FIX <slab cache affected>: <corrective action taken>
-
- In the above sample SLUB found that the Redzone of an active object has
- been overwritten. Here a string of 8 characters was written into an object
- that is only 8 bytes long. However, an 8 character string needs a
- terminating 0. That zero has overwritten the first byte of the Redzone field.
- After reporting the details of the issue encountered, the FIX SLUB message
- tells us that SLUB has restored the Redzone to its proper value and system
- operation continues; a sketch of such an off-by-one bug follows below.
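-
-A hypothetical snippet of this kind of bug (not the code that produced the
-report above)::
-
-    char *buf = kmalloc(8, GFP_KERNEL);
-
-    /* "1019.005" is 8 characters, but sprintf() also writes the
-       terminating NUL: 9 bytes are stored into an 8 byte allocation. */
-    sprintf(buf, "%u.%03u", 1019, 5);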
-
-Emergency operations
-====================
-
-Minimal debugging (sanity checks alone) can be enabled by booting with::
-
- slab_debug=F
-
-This will generally be enough to enable the resiliency features of SLUB,
-which will keep the system running even if a bad kernel component keeps
-corrupting objects. This may be important for production systems.
-Performance will be impacted by the sanity checks and there will be a
-continual stream of error messages to the syslog, but no additional memory
-will be used (unlike full debugging).
-
-No guarantees. The kernel component still needs to be fixed. Performance
-may be optimized further by locating the slab cache that experiences the
-corruption and enabling debugging only for that cache.
-
-For example::
-
- slab_debug=F,dentry
-
-If the corruption occurs by writing after the end of the object then it
-may be advisable to enable a Redzone to avoid corrupting the beginning
-of other objects::
-
- slab_debug=FZ,dentry
-
-Extended slabinfo mode and plotting
-===================================
-
-The ``slabinfo`` tool has a special 'extended' ('-X') mode that includes:
- - Slabcache Totals
- - Slabs sorted by size (up to -N <num> slabs, default 1)
- - Slabs sorted by loss (up to -N <num> slabs, default 1)
-
-Additionally, in this mode ``slabinfo`` does not dynamically scale
-sizes (G/M/K) and reports everything in bytes (this functionality is
-also available to other slabinfo modes via the '-B' option), which makes
-reporting more precise. Moreover, the '-X' mode also simplifies the
-analysis of slab behaviour, because its output can be plotted using the
-``slabinfo-gnuplot.sh`` script. That turns the analysis from reading
-through tons of numbers into something easier -- visual analysis.
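-
-For example, to report the five largest caches and the five most wasteful
-ones in a single run (an illustrative invocation)::
-
-    slabinfo -X -N 5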
-
-To generate plots:
-
-a) collect slabinfo extended records, for example::
-
- while [ 1 ]; do slabinfo -X >> FOO_STATS; sleep 1; done
-
-b) pass stats file(-s) to ``slabinfo-gnuplot.sh`` script::
-
- slabinfo-gnuplot.sh FOO_STATS [FOO_STATS2 .. FOO_STATSN]
-
- The ``slabinfo-gnuplot.sh`` script will pre-process the collected records
- and generate 3 png files (and 3 pre-processing cache files) per STATS
- file:
- - Slabcache Totals: FOO_STATS-totals.png
- - Slabs sorted by size: FOO_STATS-slabs-by-size.png
- - Slabs sorted by loss: FOO_STATS-slabs-by-loss.png
-
-Another use case where ``slabinfo-gnuplot.sh`` can be useful is when you
-need to compare slab behaviour "prior to" and "after" some code
-modification. To help with that, the ``slabinfo-gnuplot.sh`` script
-can 'merge' the `Slabcache Totals` sections from different
-measurements. To visually compare N plots:
-
-a) Collect as many STATS1, STATS2, .. STATSN files as you need::
-
- while [ 1 ]; do slabinfo -X >> STATS<X>; sleep 1; done
-
-b) Pre-process those STATS files::
-
- slabinfo-gnuplot.sh STATS1 STATS2 .. STATSN
-
-c) Execute ``slabinfo-gnuplot.sh`` in '-t' mode, passing all of the
- generated pre-processed \*-totals::
-
- slabinfo-gnuplot.sh -t STATS1-totals STATS2-totals .. STATSN-totals
-
- This will produce a single plot (png file).
-
- Plots, expectedly, can be large, so some fluctuations or small spikes
- can go unnoticed. To deal with that, ``slabinfo-gnuplot.sh`` has two
- options to 'zoom in'/'zoom out' (see the example after this list):
-
- a) ``-s %d,%d`` -- overrides the default image width and height
- b) ``-r %d,%d`` -- specifies a range of samples to use (for example,
- in the ``slabinfo -X >> FOO_STATS; sleep 1;`` case, using a ``-r
- 40,60`` range will plot only samples collected between the 40th and
- 60th seconds).
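-
- For instance, assuming both options can be combined in one invocation,
- re-plotting only samples 40-60 on a larger canvas might look like::
-
-     slabinfo-gnuplot.sh -s 2048,1152 -r 40,60 FOO_STATS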
-
-
-DebugFS files for SLUB
-======================
-
-For more information about the current state of SLUB caches with the user
-tracking debug option enabled, debugfs files are available, typically under
-/sys/kernel/debug/slab/<cache>/ (created only for caches with user tracking
-enabled). There are two types of these files with the following debug
-information:
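-
-For example, to get these files for the ``dentry`` cache, one could boot
-with ``slab_debug=U,dentry`` (user tracking) and then read the recorded
-allocation traces (an illustrative invocation)::
-
-    cat /sys/kernel/debug/slab/dentry/alloc_traces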
-
-1. alloc_traces::
-
- Prints information about unique allocation traces of the currently
- allocated objects. The output is sorted by frequency of each trace.
-
- Information in the output:
- Number of objects, allocating function, possible memory wastage of
- kmalloc objects (total/per-object), minimal/average/maximal jiffies
- since alloc, pid range of the allocating processes, cpu mask of
- allocating cpus, numa node mask of origins of memory, and stack trace.
-
- Example::
-
- 338 pci_alloc_dev+0x2c/0xa0 waste=521872/1544 age=290837/291891/293509 pid=1 cpus=106 nodes=0-1
- __kmem_cache_alloc_node+0x11f/0x4e0
- kmalloc_trace+0x26/0xa0
- pci_alloc_dev+0x2c/0xa0
- pci_scan_single_device+0xd2/0x150
- pci_scan_slot+0xf7/0x2d0
- pci_scan_child_bus_extend+0x4e/0x360
- acpi_pci_root_create+0x32e/0x3b0
- pci_acpi_scan_root+0x2b9/0x2d0
- acpi_pci_root_add.cold.11+0x110/0xb0a
- acpi_bus_attach+0x262/0x3f0
- device_for_each_child+0xb7/0x110
- acpi_dev_for_each_child+0x77/0xa0
- acpi_bus_attach+0x108/0x3f0
- device_for_each_child+0xb7/0x110
- acpi_dev_for_each_child+0x77/0xa0
- acpi_bus_attach+0x108/0x3f0
-
-2. free_traces::
-
- Prints information about unique freeing traces of the currently allocated
- objects. The freeing traces thus come from the previous life-cycle of the
- objects and are reported as not available for objects allocated for the first
- time. The output is sorted by frequency of each trace.
-
- Information in the output:
- Number of objects, freeing function, minimal/average/maximal jiffies since free,
- pid range of the freeing processes, cpu mask of freeing cpus, and stack trace.
-
- Example::
-
- 1980 <not-available> age=4294912290 pid=0 cpus=0
- 51 acpi_ut_update_ref_count+0x6a6/0x782 age=236886/237027/237772 pid=1 cpus=1
- kfree+0x2db/0x420
- acpi_ut_update_ref_count+0x6a6/0x782
- acpi_ut_update_object_reference+0x1ad/0x234
- acpi_ut_remove_reference+0x7d/0x84
- acpi_rs_get_prt_method_data+0x97/0xd6
- acpi_get_irq_routing_table+0x82/0xc4
- acpi_pci_irq_find_prt_entry+0x8e/0x2e0
- acpi_pci_irq_lookup+0x3a/0x1e0
- acpi_pci_irq_enable+0x77/0x240
- pcibios_enable_device+0x39/0x40
- do_pci_enable_device.part.0+0x5d/0xe0
- pci_enable_device_flags+0xfc/0x120
- pci_enable_device+0x13/0x20
- virtio_pci_probe+0x9e/0x170
- local_pci_probe+0x48/0x80
- pci_device_probe+0x105/0x1c0
-
-Christoph Lameter, May 30, 2007
-Sergey Senozhatsky, October 23, 2015
diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 8e1ceb0a6619..cc3cd46abd1b 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -4,7 +4,7 @@ Split page table lock
Originally, mm->page_table_lock spinlock protected all page tables of the
mm_struct. But this approach leads to poor page fault scalability of
-multi-threaded applications due high contention on the lock. To improve
+multi-threaded applications due to high contention on the lock. To improve
scalability, split page table lock was introduced.
With split page table lock we have separate per-table lock to serialize
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index a2cd8800d527..0e7f8e4cd2e3 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -116,14 +116,27 @@ pages:
succeeds on tail pages.
- map/unmap of a PMD entry for the whole THP increment/decrement
- folio->_entire_mapcount, increment/decrement folio->_large_mapcount
- and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
- when _entire_mapcount goes from -1 to 0 or 0 to -1.
+ folio->_entire_mapcount and folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes
+ from -1 to 0 or 0 to -1.
- map/unmap of individual pages with PTE entry increment/decrement
- page->_mapcount, increment/decrement folio->_large_mapcount and also
- increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
- from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.
+ folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ page->_mapcount and increment/decrement folio->_nr_pages_mapped when
+ page->_mapcount goes from -1 to 0 or 0 to -1 as this counts the number
+ of pages mapped by PTE.
split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
@@ -151,8 +164,8 @@ clear where references should go after split: it will stay on the head page.
Note that split_huge_pmd() doesn't have any limitations on refcounting:
pmd can be split at any point and never fails.
-Partial unmap and deferred_split_folio()
-========================================
+Partial unmap and deferred_split_folio() (anon THP only)
+========================================================
Unmapping part of THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of THP is not in use
@@ -167,3 +180,13 @@ a THP crosses a VMA boundary.
The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via shrinker
interface.
+
+With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on
+folio->_nr_pages_mapped.
+
+With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the
+average per-page mapcount in a THP: if the average is < 1, an anon THP is
+certainly partially mapped. As long as only a single process maps a THP,
+this detection is reliable. With long-running child processes, there can
+be scenarios where partial mappings can currently not be detected, and
+might need asynchronous detection during memory reclaim in the future.
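+
+For example, if a single process is the only mapper of a 512-page anon THP
+and munmap()s half of it, the folio's large mapcount ends up at 256 while
+the folio still has 512 pages: the average per-page mapcount is 256/512 < 1,
+so the folio is detected as partially mapped.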
diff --git a/Documentation/mm/z3fold.rst b/Documentation/mm/z3fold.rst
deleted file mode 100644
index 25b5935d06c7..000000000000
--- a/Documentation/mm/z3fold.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-======
-z3fold
-======
-
-z3fold is a special purpose allocator for storing compressed pages.
-It is designed to store up to three compressed pages per physical page.
-It is a zbud derivative which allows for a higher compression
-ratio while keeping the simplicity and determinism of its predecessor.
-
-The main differences between z3fold and zbud are:
-
-* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
-* z3fold can hold up to 3 compressed pages in its page
-* z3fold doesn't export any API itself and is thus intended to be used
- via the zpool API.
-
-To keep the determinism and simplicity, z3fold, just like zbud, always
-stores an integral number of compressed pages per page, but it can store
-up to 3 pages unlike zbud which can store at most 2. Therefore the
-compression ratio goes up to around 2.7x while zbud's is around 1.7x.
-
-Unlike zbud (but like zsmalloc for that matter) z3fold_alloc() does not
-return a dereferenceable pointer. Instead, it returns an unsigned long
-handle which encodes actual location of the allocated object.
-
-While keeping its effective compression ratio close to zsmalloc's, z3fold
-does not depend on the MMU being enabled and provides more predictable
-reclaim behavior, which makes it a better fit for small and
-response-critical systems.
diff --git a/Documentation/mm/zsmalloc.rst b/Documentation/mm/zsmalloc.rst
index 76902835e68e..d2bbecd78e14 100644
--- a/Documentation/mm/zsmalloc.rst
+++ b/Documentation/mm/zsmalloc.rst
@@ -27,9 +27,8 @@ Instead, it returns an opaque handle (unsigned long) which encodes actual
location of the allocated object. The reason for this indirection is that
zsmalloc does not keep zspages permanently mapped since that would cause
issues on 32-bit systems where the VA region for kernel space mappings
-is very small. So, before using the allocating memory, the object has to
-be mapped using zs_map_object() to get a usable pointer and subsequently
-unmapped using zs_unmap_object().
+is very small. So, the allocated memory has to be accessed through the
+proper handle-based APIs.
stat
====