Age | Commit message | Author | Files | Lines |
--- | --- | --- | --- | --- |
2016-07-29 | mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB | Alexander Potapenko | 6 | -53/+83 |
2016-07-29 | mm, kasan: account for object redzone in SLUB's nearest_obj() | Alexander Potapenko | 1 | -1/+1 |
2016-07-29 | mm: fix use-after-free if memory allocation failed in vma_adjust() | Kirill A. Shutemov | 1 | -5/+15 |
2016-07-29 | zsmalloc: Delete an unnecessary check before the function call "iput" | Markus Elfring | 1 | -2/+1 |
2016-07-29 | mm/memblock.c: fix index adjustment error in __next_mem_range_rev() | zijun_hu | 1 | -1/+1 |
2016-07-29 | mem-hotplug: alloc new page from a nearest neighbor node when mem-offline | Xishi Qiu | 1 | -5/+33 |
2016-07-29 | mm: add cond_resched() to generic_swapfile_activate() | Mikulas Patocka | 1 | -0/+2 |
2016-07-29 | Revert "mm, mempool: only set __GFP_NOMEMALLOC if there are free elements" | Michal Hocko | 1 | -15/+3 |
2016-07-29 | mm, compaction: don't isolate PageWriteback pages in MIGRATE_SYNC_LIGHT mode | Hugh Dickins | 1 | -1/+1 |
2016-07-29 | mm: hwpoison: remove incorrect comments | Naoya Horiguchi | 2 | -3/+0 |
2016-07-29 | make __section_nr() more efficient | Zhou Chengming | 1 | -5/+7 |
2016-07-29 | kmemleak: don't hang if user disables scanning early | Vegard Nossum | 1 | -1/+3 |
2016-07-29 | mm/memblock.c: add new infrastructure to address the mem limit issue | Dennis Chen | 1 | -5/+52 |
2016-07-29 | mm: fix memcg stack accounting for sub-page stacks | Andy Lutomirski | 1 | -1/+1 |
2016-07-29 | mm: track NR_KERNEL_STACK in KiB instead of number of stacks | Andy Lutomirski | 1 | -2/+1 |
2016-07-29 | mm: CONFIG_ZONE_DEVICE stop depending on CONFIG_EXPERT | Dan Williams | 1 | -1/+1 |
2016-07-29 | memblock: include <asm/sections.h> instead of <asm-generic/sections.h> | Christoph Hellwig | 1 | -1/+1 |
2016-07-29 | mm, THP: clean up return value of madvise_free_huge_pmd | Huang Ying | 1 | -7/+8 |
2016-07-29 | mm/zsmalloc: use helper to clear page->flags bit | Ganesh Mahendran | 1 | -2/+2 |
2016-07-29 | mm/zsmalloc: add __init,__exit attribute | Ganesh Mahendran | 1 | -1/+1 |
2016-07-29 | mm/zsmalloc: keep comments consistent with code | Ganesh Mahendran | 1 | -4/+3 |
2016-07-29 | mm/zsmalloc: avoid calculate max objects of zspage twice | Ganesh Mahendran | 1 | -16/+10 |
2016-07-29 | mm/zsmalloc: use class->objs_per_zspage to get num of max objects | Ganesh Mahendran | 1 | -11/+7 |
2016-07-29 | mm/zsmalloc: take obj index back from find_alloced_obj | Ganesh Mahendran | 1 | -2/+6 |
2016-07-29 | mm/zsmalloc: use obj_index to keep consistent with others | Ganesh Mahendran | 1 | -7/+7 |
2016-07-29 | mm: bail out in shrink_inactive_list() | Minchan Kim | 1 | -0/+27 |
2016-07-29 | mm, vmscan: account for skipped pages as a partial scan | Mel Gorman | 1 | -2/+18 |
2016-07-29 | mm: consider whether to decivate based on eligible zones inactive ratio | Mel Gorman | 1 | -5/+29 |
2016-07-29 | mm: remove reclaim and compaction retry approximations | Mel Gorman | 6 | -58/+37 |
2016-07-29 | mm, vmscan: remove highmem_file_pages | Mel Gorman | 1 | -8/+4 |
2016-07-29 | mm: add per-zone lru list stat | Minchan Kim | 3 | -9/+15 |
2016-07-29 | mm, vmscan: release/reacquire lru_lock on pgdat change | Mel Gorman | 1 | -11/+10 |
2016-07-29 | mm, vmscan: remove redundant check in shrink_zones() | Mel Gorman | 1 | -3/+0 |
2016-07-29 | mm, vmscan: Update all zone LRU sizes before updating memcg | Mel Gorman | 2 | -11/+34 |
2016-07-29 | mm: show node_pages_scanned per node, not zone | Minchan Kim | 1 | -3/+3 |
2016-07-29 | mm, pagevec: release/reacquire lru_lock on pgdat change | Mel Gorman | 1 | -10/+10 |
2016-07-29 | mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 1 | -6/+10 |
2016-07-29 | mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 6 | -42/+67 |
2016-07-29 | mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 1 | -0/+24 |
2016-07-29 | mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2 | -3/+15 |
2016-07-29 | mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 1 | -1/+1 |
2016-07-29 | mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 1 | -2/+11 |
2016-07-29 | mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 3 | -78/+2 |
2016-07-29 | mm, vmscan: add classzone information to tracepoints | Mel Gorman | 1 | -5/+9 |
2016-07-29 | mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 1 | -8/+14 |
2016-07-29 | mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 1 | -8/+4 |
2016-07-29 | mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 1 | -20/+7 |
2016-07-29 | mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 1 | -11/+9 |
2016-07-29 | mm: convert zone_reclaim to node_reclaim | Mel Gorman | 4 | -53/+60 |
2016-07-29 | mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 1 | -1/+1 |