Diffstat (limited to 'Documentation/vm')
-rw-r--r--   Documentation/vm/00-INDEX                |  20
-rw-r--r--   Documentation/vm/split_page_table_lock   |  94
-rw-r--r--   Documentation/vm/zswap.txt               |   8
3 files changed, 110 insertions, 12 deletions
diff --git a/Documentation/vm/00-INDEX b/Documentation/vm/00-INDEX
index 5481c8ba3412..a39d06680e1c 100644
--- a/Documentation/vm/00-INDEX
+++ b/Documentation/vm/00-INDEX
@@ -4,10 +4,12 @@ active_mm.txt
 	- An explanation from Linus about tsk->active_mm vs tsk->mm.
 balance
 	- various information on memory balancing.
-hugepage-mmap.c
-	- Example app using huge page memory with the mmap system call.
-hugepage-shm.c
-	- Example app using huge page memory with Sys V shared memory system calls.
+cleancache.txt
+	- Intro to cleancache and page-granularity victim cache.
+frontswap.txt
+	- Outline frontswap, part of the transcendent memory frontend.
+highmem.txt
+	- Outline of highmem and common issues.
 hugetlbpage.txt
 	- a brief summary of hugetlbpage support in the Linux kernel.
 hwpoison.txt
@@ -16,21 +18,23 @@ ksm.txt
 	- how to use the Kernel Samepage Merging feature.
 locking
 	- info on how locking and synchronization is done in the Linux vm code.
-map_hugetlb.c
-	- an example program that uses the MAP_HUGETLB mmap flag.
 numa
 	- information about NUMA specific code in the Linux vm.
 numa_memory_policy.txt
 	- documentation of concepts and APIs of the 2.6 memory policy support.
 overcommit-accounting
 	- description of the Linux kernels overcommit handling modes.
-page-types.c
-	- Tool for querying page flags
 page_migration
 	- description of page migration in NUMA systems.
 pagemap.txt
 	- pagemap, from the userspace perspective
 slub.txt
 	- a short users guide for SLUB.
+soft-dirty.txt
+	- short explanation for soft-dirty PTEs
+transhuge.txt
+	- Transparent Hugepage Support, alternative way of using hugepages.
 unevictable-lru.txt
 	- Unevictable LRU infrastructure
+zswap.txt
+	- Intro to compressed cache for swap pages
diff --git a/Documentation/vm/split_page_table_lock b/Documentation/vm/split_page_table_lock
new file mode 100644
index 000000000000..7521d367f21d
--- /dev/null
+++ b/Documentation/vm/split_page_table_lock
@@ -0,0 +1,94 @@
+Split page table lock
+=====================
+
+Originally, the mm->page_table_lock spinlock protected all page tables of the
+mm_struct. But this approach leads to poor page fault scalability of
+multi-threaded applications due to high contention on the lock. To improve
+scalability, split page table lock was introduced.
+
+With split page table lock we have a separate per-table lock to serialize
+access to the table. At the moment we use split lock for PTE and PMD
+tables. Access to higher-level tables is protected by mm->page_table_lock.
+
+There are helpers to lock/unlock a table and other accessor functions:
+ - pte_offset_map_lock()
+	maps pte and takes PTE table lock, returns pointer to the taken
+	lock;
+ - pte_unmap_unlock()
+	unlocks and unmaps PTE table;
+ - pte_alloc_map_lock()
+	allocates PTE table if needed and takes the lock, returns pointer
+	to taken lock or NULL if allocation failed;
+ - pte_lockptr()
+	returns pointer to PTE table lock;
+ - pmd_lock()
+	takes PMD table lock, returns pointer to taken lock;
+ - pmd_lockptr()
+	returns pointer to PMD table lock;
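Illustrative sketch (not part of the patch above): how the PTE helpers just
listed are typically used together. The function name walk_one_pte() and the
assumption that the caller has already located a stable pmd entry are
illustrative only.

#include <linux/mm.h>
#include <linux/errno.h>

static int walk_one_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Map the PTE table and take its split lock (falls back to
	 * mm->page_table_lock when split locks are disabled). */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte_present(*pte)) {
		pte_unmap_unlock(pte, ptl);
		return -ENOENT;
	}

	/* ... inspect or modify the entry while the lock is held ... */

	pte_unmap_unlock(pte, ptl);	/* drops the lock and the mapping */
	return 0;
}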
+
+Split page table lock for PTE tables is enabled compile-time if
+CONFIG_SPLIT_PTLOCK_CPUS (usually 4) is less than or equal to NR_CPUS.
+If split lock is disabled, all tables are guarded by mm->page_table_lock.
+
+Split page table lock for PMD tables is enabled if it's enabled for PTE
+tables and the architecture supports it (see below).
+
+Hugetlb and split page table lock
+---------------------------------
+
+Hugetlb can support several page sizes. We use split lock only for the PMD
+level, but not for PUD.
+
+Hugetlb-specific helpers:
+ - huge_pte_lock()
+	takes the PMD split lock for a PMD_SIZE page, mm->page_table_lock
+	otherwise;
+ - huge_pte_lockptr()
+	returns pointer to table lock;
+
+Support of split page table lock by an architecture
+---------------------------------------------------
+
+There is no need to specially enable PTE split page table lock:
+everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
+which must be called on PTE table allocation / freeing.
+
+Make sure the architecture doesn't use the slab allocator for page table
+allocation: slab uses page->slab_cache and page->first_page for its pages.
+These fields share storage with page->ptl.
+
+PMD split lock only makes sense if you have more than two page table
+levels.
+
+PMD split lock enabling requires a pgtable_pmd_page_ctor() call on PMD table
+allocation and pgtable_pmd_page_dtor() on freeing.
+
+Allocation usually happens in pmd_alloc_one(), freeing in pmd_free(), but
+make sure you cover all PMD table allocation / freeing paths: e.g. X86_PAE
+preallocates a few PMDs in pgd_alloc().
+
+With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
+
+NOTE: pgtable_page_ctor() and pgtable_pmd_page_ctor() can fail -- the
+failure must be handled properly.
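Illustrative sketch (not part of the patch above): the general shape of an
architecture's PTE table allocation and freeing hooks once the constructor
and destructor calls are wired in, including handling of pgtable_page_ctor()
failure. The GFP flags and the exact signatures vary between architectures;
this loosely follows the x86 layout and is an assumption, not the patch's
code.

#include <linux/mm.h>
#include <linux/gfp.h>

pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
	struct page *page;

	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0);
	if (!page)
		return NULL;
	/* May fail if the split lock has to be allocated dynamically. */
	if (!pgtable_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}
	return page;
}

void pte_free(struct mm_struct *mm, pgtable_t pte)
{
	pgtable_page_dtor(pte);		/* releases the split lock, if any */
	__free_page(pte);
}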
+
+page->ptl
+---------
+
+page->ptl is used to access the split page table lock, where 'page' is the
+struct page of the page containing the table. It shares storage with
+page->private (and a few other fields in the union).
+
+To avoid increasing the size of struct page and to get the best performance,
+we use a trick:
+ - if spinlock_t fits into long, we use page->ptl as the spinlock, so we
+   can avoid indirect access and save a cache line;
+ - if the size of spinlock_t is bigger than the size of long, we use
+   page->ptl as a pointer to a spinlock_t and allocate it dynamically.
+   This allows split lock to be used with DEBUG_SPINLOCK or
+   DEBUG_LOCK_ALLOC enabled, but costs one more cache line for the
+   indirect access.
+
+The spinlock_t is allocated in pgtable_page_ctor() for PTE tables and in
+pgtable_pmd_page_ctor() for PMD tables.
+
+Please, never access page->ptl directly -- use the appropriate helpers.
diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
index 7e492d8aaeaf..00c3d31e7971 100644
--- a/Documentation/vm/zswap.txt
+++ b/Documentation/vm/zswap.txt
@@ -8,7 +8,7 @@ significant performance improvement if reads from the compressed cache are
 faster than reads from a swap device.
 
 NOTE: Zswap is a new feature as of v3.11 and interacts heavily with memory
-reclaim. This interaction has not be fully explored on the large set of
+reclaim. This interaction has not been fully explored on the large set of
 potential configurations and workloads that exist. For this reason, zswap
 is a work in progress and should be considered experimental.
 
@@ -23,7 +23,7 @@ Some potential benefits:
   drastically reducing life-shortening writes.
 
 Zswap evicts pages from compressed cache on an LRU basis to the backing swap
-device when the compressed pool reaches it size limit. This requirement had
+device when the compressed pool reaches its size limit. This requirement had
 been identified in prior community discussions.
 
 To enabled zswap, the "enabled" attribute must be set to 1 at boot time. e.g.
@@ -37,7 +37,7 @@ the backing swap device in the case that the compressed pool is full.
 
 Zswap makes use of zbud for the managing the compressed memory pool. Each
 allocation in zbud is not directly accessible by address. Rather, a handle is
-return by the allocation routine and that handle must be mapped before being
+returned by the allocation routine and that handle must be mapped before being
 accessed. The compressed memory pool grows on demand and shrinks as compressed
 pages are freed. The pool is not preallocated.
@@ -56,7 +56,7 @@ in the swap_map goes to 0) the swap code calls the zswap invalidate function,
 via frontswap, to free the compressed entry.
 
 Zswap seeks to be simple in its policies. Sysfs attributes allow for one user
-controlled policies:
+controlled policy:
 
 * max_pool_percent - The maximum percentage of memory that the compressed pool
   can occupy.