<feed xmlns='http://www.w3.org/2005/Atom'>
<title>BMC/Intel-BMC/linux.git/mm/page_alloc.c, branch dev-4.7</title>
<subtitle>Intel OpenBMC Linux kernel source tree (mirror)</subtitle>
<id>https://git.radix-linux.su/BMC/Intel-BMC/linux.git/atom?h=dev-4.7</id>
<link rel='self' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/atom?h=dev-4.7'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/'/>
<updated>2016-09-24T08:09:28+00:00</updated>
<entry>
<title>mm, oom: prevent premature OOM killer invocation for high order request</title>
<updated>2016-09-24T08:09:28+00:00</updated>
<author>
<name>Michal Hocko</name>
<email>mhocko@suse.com</email>
</author>
<published>2016-09-01T23:14:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=bec4e55b55867ed948a3afd9f9ccf3506bfdad24'/>
<id>urn:sha1:bec4e55b55867ed948a3afd9f9ccf3506bfdad24</id>
<content type='text'>
commit 6b4e3181d7bd5ca5ab6f45929e4a5ffa7ab4ab7f upstream.

There have been several reports of premature OOM killer invocation in
the 4.7 kernel, where an order-2 allocation request (for the kernel
stack) invoked the OOM killer even during basic workloads (light IO
or even a kernel compile on some filesystems).  In all reported cases
the memory is fragmented and there are no order-2+ pages available.
There is usually a large amount of slab memory (usually
dentries/inodes), and further debugging has shown that there are far
too many unmovable blocks which are skipped during compaction.
Multiple reporters have confirmed that the current linux-next, which
includes [1] and [2], helped and the OOMs are no longer reproducible.

A simpler fix for the late -rc and stable kernels is to ignore the
compaction feedback and retry as long as reclaim is making progress
and we are not getting OOM for order-0 pages.  We already do that for
CONFIG_COMPACTION=n, so let's reuse the same code when compaction is
enabled as well.

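A sketch of the idea, with the argument list abbreviated (this is an
illustration of the approach, not the literal upstream diff):

  /*
   * Ignore the compaction feedback and keep retrying the allocation
   * while any usable zone still meets its order-0 min watermark,
   * i.e. while reclaim can still make progress.
   */
  static inline bool should_compact_retry(struct alloc_context *ac,
                                  unsigned int order, int alloc_flags)
  {
          struct zone *zone;
          struct zoneref *z;

          if (!order)
                  return false;

          for_each_zone_zonelist_nodemask(zone, z, ac-&gt;zonelist,
                                  ac-&gt;high_zoneidx, ac-&gt;nodemask) {
                  if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
                                  ac_classzone_idx(ac), alloc_flags))
                          return true;
          }
          return false;
  }
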
[1] http://lkml.kernel.org/r/20160810091226.6709-1-vbabka@suse.cz
[2] http://lkml.kernel.org/r/f7a9ea9d-bb88-bfd6-e340-3a933559305a@suse.cz

Fixes: 0a0337e0d1d1 ("mm, oom: rework oom detection")
Link: http://lkml.kernel.org/r/20160823074339.GB23577@dhcp22.suse.cz
Signed-off-by: Michal Hocko &lt;mhocko@suse.com&gt;
Tested-by: Olaf Hering &lt;olaf@aepfle.de&gt;
Tested-by: Ralf-Peter Rohbeck &lt;Ralf-Peter.Rohbeck@quantum.com&gt;
Cc: Markus Trippelsdorf &lt;markus@trippelsdorf.de&gt;
Cc: Arkadiusz Miskiewicz &lt;a.miskiewicz@gmail.com&gt;
Cc: Ralf-Peter Rohbeck &lt;Ralf-Peter.Rohbeck@quantum.com&gt;
Cc: Jiri Slaby &lt;jslaby@suse.com&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Cc: Joonsoo Kim &lt;js1304@gmail.com&gt;
Cc: Tetsuo Handa &lt;penguin-kernel@i-love.sakura.ne.jp&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>mm, meminit: ensure node is online before checking whether pages are uninitialised</title>
<updated>2016-07-15T05:54:27+00:00</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@techsingularity.net</email>
</author>
<published>2016-07-14T19:07:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=ef70b6f41cda6270165a6f27b2548ed31cfa3cb2'/>
<id>urn:sha1:ef70b6f41cda6270165a6f27b2548ed31cfa3cb2</id>
<content type='text'>
early_page_uninitialised looks up an arbitrary PFN.  While a machine
without node 0 will boot with "mm, page_alloc: always return a valid
node from early_pfn_to_nid" applied, it only works because it assumes
that nodes are always in PFN order.  This is not guaranteed, so this
patch adds robustness by always checking whether the node being
examined is online.

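A sketch of the resulting check, assuming the first_deferred_pfn
marker used by deferred struct page initialisation (an illustration,
not the literal diff):

  static inline bool early_page_uninitialised(unsigned long pfn)
  {
          int nid = early_pfn_to_nid(pfn);

          /* Only trust the nid if that node actually exists. */
          if (node_online(nid) &amp;&amp;
              pfn &gt;= NODE_DATA(nid)-&gt;first_deferred_pfn)
                  return true;

          return false;
  }
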
Link: http://lkml.kernel.org/r/1468008031-3848-4-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;	[4.2+]
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, meminit: always return a valid node from early_pfn_to_nid</title>
<updated>2016-07-15T05:54:27+00:00</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@techsingularity.net</email>
</author>
<published>2016-07-14T19:07:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=e4568d3803852d00effd41dcdd489e726b998879'/>
<id>urn:sha1:e4568d3803852d00effd41dcdd489e726b998879</id>
<content type='text'>
early_pfn_to_nid can return node 0 if a PFN is invalid on machines
that have no node 0.  A machine with only node 1 was observed to
crash with the following message:

   BUG: unable to handle kernel paging request at 000000000002a3c8
   PGD 0
   Modules linked in:
   Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011  06/30/2006
   task: ffffffff81c0d500 ti: ffffffff81c00000 task.ti: ffffffff81c00000
   RIP: reserve_bootmem_region+0x6a/0xef
   CR2: 000000000002a3c8 CR3: 0000000001c06000 CR4: 00000000000006b0
   Call Trace:
      free_all_bootmem+0x4b/0x12a
      mem_init+0x70/0xa3
      start_kernel+0x25b/0x49b

The problem is that early_page_uninitialised uses the early_pfn_to_nid
helper which returns node 0 for invalid PFNs.  No caller of
early_pfn_to_nid cares except early_page_uninitialised.  This patch has
early_pfn_to_nid always return a valid node.

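The shape of the fix, roughly (helper and cache names as found in
mm/page_alloc.c of this era):

  int __meminit early_pfn_to_nid(unsigned long pfn)
  {
          static DEFINE_SPINLOCK(early_pfn_lock);
          int nid;

          spin_lock(&amp;early_pfn_lock);
          nid = __early_pfn_to_nid(pfn, &amp;early_pfnnid_cache);
          /* Never return a node that might not exist. */
          if (nid &lt; 0)
                  nid = first_online_node;
          spin_unlock(&amp;early_pfn_lock);

          return nid;
  }
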
Link: http://lkml.kernel.org/r/1468008031-3848-3-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Acked-by: David Rientjes &lt;rientjes@google.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;	[4.2+]
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, page_alloc: recalculate the preferred zoneref if the context can ignore memory policies</title>
<updated>2016-06-03T23:02:57+00:00</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@techsingularity.net</email>
</author>
<published>2016-06-03T21:56:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=e46e7b77c9096eb2f4d6bcb9ca0b64c9338465ee'/>
<id>urn:sha1:e46e7b77c9096eb2f4d6bcb9ca0b64c9338465ee</id>
<content type='text'>
The optimistic fast path may use cpuset_current_mems_allowed instead
of a NULL nodemask supplied by the caller for cpuset allocations.  The
preferred zone is calculated on this basis for statistics purposes and
as a starting point in the zonelist iterator.

However, if the context can ignore memory policies, due to being
atomic or being able to ignore watermarks, then the starting point in
the zonelist iterator is no longer correct.  This patch resets the
zonelist iterator in the allocator slowpath if the context can ignore
memory policies.  This will alter the zone used for statistics, but
only after it is known that doing so makes sense for that context.
Resetting it before entering the slowpath would potentially allow an
ALLOC_CPUSET allocation to be accounted against the wrong zone.  Note
that while the nodemask is not explicitly reset to the original
nodemask, it could only have been overwritten if cpuset_enabled(), and
in that case it is reset before the slowpath is entered.

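A sketch of the reset in __alloc_pages_slowpath(), applied once the
final alloc_flags are known (an illustration, not the literal diff):

  /*
   * Reset the zonelist iterators if memory policies can be ignored.
   * These allocations are high priority and system- rather than
   * user-oriented.
   */
  if ((alloc_flags &amp; ALLOC_NO_WATERMARKS) ||
      !(alloc_flags &amp; ALLOC_CPUSET)) {
          ac-&gt;zonelist = node_zonelist(numa_node_id(), gfp_mask);
          ac-&gt;preferred_zoneref = first_zones_zonelist(ac-&gt;zonelist,
                          ac-&gt;high_zoneidx, ac-&gt;nodemask);
  }
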
Link: http://lkml.kernel.org/r/20160602103936.GU2527@techsingularity.net
Fixes: c33d6c06f60f710 ("mm, page_alloc: avoid looking up the first zone in a zonelist twice")
Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Reported-by: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Tested-by: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Acked-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, page_alloc: reset zonelist iterator after resetting fair zone allocation policy</title>
<updated>2016-06-03T23:02:56+00:00</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@techsingularity.net</email>
</author>
<published>2016-06-03T21:55:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=0d0bd89435d19faa3f2fd73f7ee11c64ac9304f4'/>
<id>urn:sha1:0d0bd89435d19faa3f2fd73f7ee11c64ac9304f4</id>
<content type='text'>
Geert Uytterhoeven reported the following problem, which he bisected
to commit c33d6c06f60f ("mm, page_alloc: avoid looking up the first
zone in a zonelist twice"), on m68k/ARAnyM:

    BUG: scheduling while atomic: cron/668/0x10c9a0c0
    Modules linked in:
    CPU: 0 PID: 668 Comm: cron Not tainted 4.6.0-atari-05133-gc33d6c06f60f710f #364
    Call Trace: [&lt;0003d7d0&gt;] __schedule_bug+0x40/0x54
      __schedule+0x312/0x388
      __schedule+0x0/0x388
      prepare_to_wait+0x0/0x52
      schedule+0x64/0x82
      schedule_timeout+0xda/0x104
      set_next_entity+0x18/0x40
      pick_next_task_fair+0x78/0xda
      io_schedule_timeout+0x36/0x4a
      bit_wait_io+0x0/0x40
      bit_wait_io+0x12/0x40
      __wait_on_bit+0x46/0x76
      wait_on_page_bit_killable+0x64/0x6c
      bit_wait_io+0x0/0x40
      wake_bit_function+0x0/0x4e
      __lock_page_or_retry+0xde/0x124
      do_scan_async+0x114/0x17c
      lookup_swap_cache+0x24/0x4e
      handle_mm_fault+0x626/0x7de
      find_vma+0x0/0x66
      down_read+0x0/0xe
      wait_on_page_bit_killable_timeout+0x77/0x7c
      find_vma+0x16/0x66
      do_page_fault+0xe6/0x23a
      res_func+0xa3c/0x141a
      buserr_c+0x190/0x6d4
      res_func+0xa3c/0x141a
      buserr+0x20/0x28
      res_func+0xa3c/0x141a
      buserr+0x20/0x28

The relationship is not obvious, but it is due to a failure to rescan
the full zonelist after the fair zone allocation policy exhausts the
batch count.  While this is a functional problem, it is also a
performance issue.  A page allocator microbenchmark showed the
following:

                                   4.7.0-rc1                  4.7.0-rc1
                                     vanilla                 reset-v1r2
  Min      alloc-odr0-1     327.00 (  0.00%)           326.00 (  0.31%)
  Min      alloc-odr0-2     235.00 (  0.00%)           235.00 (  0.00%)
  Min      alloc-odr0-4     198.00 (  0.00%)           198.00 (  0.00%)
  Min      alloc-odr0-8     170.00 (  0.00%)           170.00 (  0.00%)
  Min      alloc-odr0-16    156.00 (  0.00%)           156.00 (  0.00%)
  Min      alloc-odr0-32    150.00 (  0.00%)           150.00 (  0.00%)
  Min      alloc-odr0-64    146.00 (  0.00%)           146.00 (  0.00%)
  Min      alloc-odr0-128   145.00 (  0.00%)           145.00 (  0.00%)
  Min      alloc-odr0-256   155.00 (  0.00%)           155.00 (  0.00%)
  Min      alloc-odr0-512   168.00 (  0.00%)           165.00 (  1.79%)
  Min      alloc-odr0-1024  175.00 (  0.00%)           174.00 (  0.57%)
  Min      alloc-odr0-2048  180.00 (  0.00%)           180.00 (  0.00%)
  Min      alloc-odr0-4096  187.00 (  0.00%)           186.00 (  0.53%)
  Min      alloc-odr0-8192  190.00 (  0.00%)           190.00 (  0.00%)
  Min      alloc-odr0-16384 191.00 (  0.00%)           191.00 (  0.00%)
  Min      alloc-odr1-1     736.00 (  0.00%)           445.00 ( 39.54%)
  Min      alloc-odr1-2     343.00 (  0.00%)           335.00 (  2.33%)
  Min      alloc-odr1-4     277.00 (  0.00%)           270.00 (  2.53%)
  Min      alloc-odr1-8     238.00 (  0.00%)           233.00 (  2.10%)
  Min      alloc-odr1-16    224.00 (  0.00%)           218.00 (  2.68%)
  Min      alloc-odr1-32    210.00 (  0.00%)           208.00 (  0.95%)
  Min      alloc-odr1-64    207.00 (  0.00%)           203.00 (  1.93%)
  Min      alloc-odr1-128   276.00 (  0.00%)           202.00 ( 26.81%)
  Min      alloc-odr1-256   206.00 (  0.00%)           202.00 (  1.94%)
  Min      alloc-odr1-512   207.00 (  0.00%)           202.00 (  2.42%)
  Min      alloc-odr1-1024  208.00 (  0.00%)           205.00 (  1.44%)
  Min      alloc-odr1-2048  213.00 (  0.00%)           212.00 (  0.47%)
  Min      alloc-odr1-4096  218.00 (  0.00%)           216.00 (  0.92%)
  Min      alloc-odr1-8192  341.00 (  0.00%)           219.00 ( 35.78%)

Note that order-0 allocations are unaffected, but higher orders get a
small boost from this patch, along with a large reduction in overall
system CPU usage, as can be seen here:

             4.7.0-rc1   4.7.0-rc1
               vanilla  reset-v1r2
  User           85.32       86.31
  System       2221.39     2053.36
  Elapsed      2368.89     2202.47

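A sketch of the fix in the fair-zone retry of get_page_from_freelist()
(an illustration assuming the 4.7-era fairness code, not the literal
diff):

  /* Fairness batches exhausted: retry once without the fair policy. */
  if (fair_skipped) {
          apply_fair = false;
          fair_skipped = false;
          reset_alloc_batches(ac-&gt;preferred_zoneref-&gt;zone);
          /*
           * Restart the iterator from the preferred zone so the
           * retry rescans the full zonelist rather than only the
           * zones after the point where the first pass stopped.
           */
          z = ac-&gt;preferred_zoneref;
          goto zonelist_scan;
  }
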
Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice")
Link: http://lkml.kernel.org/r/20160531100848.GR2527@techsingularity.net
Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Reported-by: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Tested-by: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Tested-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, page_alloc: prevent infinite loop in buffered_rmqueue()</title>
<updated>2016-06-03T23:02:56+00:00</updated>
<author>
<name>Vlastimil Babka</name>
<email>vbabka@suse.cz</email>
</author>
<published>2016-06-03T21:55:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=83b9355bf6f449c7d1231206c300ea89d3313a9d'/>
<id>urn:sha1:83b9355bf6f449c7d1231206c300ea89d3313a9d</id>
<content type='text'>
In a DEBUG_VM kernel we can hit an infinite loop for order == 0 in
buffered_rmqueue() when check_new_pcp() returns 1, because the bad
page is never removed from the pcp list.  Fix this by removing the
page before retrying.  Also, we don't need to check whether the page
is non-NULL, because we simply grab it from a list which was just
tested for being non-empty.

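The fixed order-0 loop, roughly:

  do {
          if (cold)
                  page = list_last_entry(list, struct page, lru);
          else
                  page = list_first_entry(list, struct page, lru);

          /* Unlink first, so a bad page cannot be picked up again. */
          list_del(&amp;page-&gt;lru);
          pcp-&gt;count--;
  } while (check_new_pcp(page));
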
Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
Link: http://lkml.kernel.org/r/20160530090154.GM2527@techsingularity.net
Signed-off-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Reported-by: Naoya Horiguchi &lt;n-horiguchi@ah.jp.nec.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: check the return value of lookup_page_ext for all call sites</title>
<updated>2016-06-03T22:06:22+00:00</updated>
<author>
<name>Yang Shi</name>
<email>yang.shi@linaro.org</email>
</author>
<published>2016-06-03T21:55:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=f86e4271978bd93db466d6a95dad4b0fdcdb04f6'/>
<id>urn:sha1:f86e4271978bd93db466d6a95dad4b0fdcdb04f6</id>
<content type='text'>
Per the discussion with Joonsoo Kim [1], we need to check the return
value of lookup_page_ext() at all call sites, since it might return
NULL in some cases, however unlikely, e.g. during memory hotplug.

Tested with LTP with "page_owner=0".

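The pattern applied at each call site, illustrated here with the page
poisoning helper (a sketch of the shape of the fix, not the full
diff):

  struct page_ext *page_ext = lookup_page_ext(page);

  /* page_ext can be missing, e.g. around memory hotplug. */
  if (unlikely(!page_ext))
          return;

  __set_bit(PAGE_EXT_DEBUG_POISON, &amp;page_ext-&gt;flags);
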
[1] http://lkml.kernel.org/r/20160519002809.GA10245@js1304-P5Q-DELUXE

[akpm@linux-foundation.org: fix build-breaking typos]
[arnd@arndb.de: fix build problems from lookup_page_ext]
  Link: http://lkml.kernel.org/r/6285269.2CksypHdYp@wuerfel
[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/1464023768-31025-1-git-send-email-yang.shi@linaro.org
Signed-off-by: Yang Shi &lt;yang.shi@linaro.org&gt;
Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Joonsoo Kim &lt;iamjoonsoo.kim@lge.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: check_new_page_bad() directly returns in __PG_HWPOISON case</title>
<updated>2016-05-21T00:58:30+00:00</updated>
<author>
<name>Naoya Horiguchi</name>
<email>n-horiguchi@ah.jp.nec.com</email>
</author>
<published>2016-05-20T23:58:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=e570f56cccd215db68e50870ee74b7d9c0022109'/>
<id>urn:sha1:e570f56cccd215db68e50870ee74b7d9c0022109</id>
<content type='text'>
Currently we check page-&gt;flags twice for the "HWPoisoned" case in
check_new_page_bad(), which can cause a race with unpoisoning.

This race unnecessarily taints the kernel with "BUG: Bad page state".
check_new_page_bad() is the only caller of bad_page() which is
interested in __PG_HWPOISON, so let's move the hwpoison-related code
from bad_page() into it.

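The shape of the moved check, roughly: hwpoison is now handled up
front in check_new_page_bad() and never reaches bad_page():

  if (unlikely(page-&gt;flags &amp; __PG_HWPOISON)) {
          /* Don't complain about hwpoisoned pages. */
          page_mapcount_reset(page);      /* remove PageBuddy */
          return;
  }
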
Link: http://lkml.kernel.org/r/20160518100949.GA17299@hori1.linux.bs1.fc.nec.co.jp
Signed-off-by: Naoya Horiguchi &lt;n-horiguchi@ah.jp.nec.com&gt;
Acked-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, kasan: fix to call kasan_free_pages() after poisoning page</title>
<updated>2016-05-21T00:58:30+00:00</updated>
<author>
<name>seokhoon.yoon</name>
<email>iamyooon@gmail.com</email>
</author>
<published>2016-05-20T23:58:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=29b52de182acf50f85a8284ad39104d84c9bbf57'/>
<id>urn:sha1:29b52de182acf50f85a8284ad39104d84c9bbf57</id>
<content type='text'>
When CONFIG_PAGE_POISONING and CONFIG_KASAN are enabled,
free_pages_prepare()'s code flow is as follows:

  1) kmemcheck_free_shadow()
  2) kasan_free_pages()
     - marks the page's shadow bytes as freed
  3) kernel_poison_pages()
  3.1) checks, via KASAN, whether access to the page is valid
     ---&gt; an error occurs: KASAN considers this an invalid access,
          since the shadow already says the page is freed
  3.2) poisons the page
  4) kernel_map_pages()

So kasan_free_pages() should be called after poisoning the page.

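The tail of free_pages_prepare() with the corrected ordering, roughly:

  kernel_poison_pages(page, 1 &lt;&lt; order, 0);
  kernel_map_pages(page, 1 &lt;&lt; order, 0);
  /* Mark the shadow freed only after the page itself is poisoned. */
  kasan_free_pages(page, order);
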
Link: http://lkml.kernel.org/r/1463220405-7455-1-git-send-email-iamyooon@gmail.com
Signed-off-by: seokhoon.yoon &lt;iamyooon@gmail.com&gt;
Cc: Andrey Ryabinin &lt;a.ryabinin@samsung.com&gt;
Cc: Laura Abbott &lt;labbott@fedoraproject.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: use phys_addr_t for reserve_bootmem_region() arguments</title>
<updated>2016-05-21T00:58:30+00:00</updated>
<author>
<name>Stefan Bader</name>
<email>stefan.bader@canonical.com</email>
</author>
<published>2016-05-20T23:58:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5'/>
<id>urn:sha1:4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5</id>
<content type='text'>
Since commit 92923ca3aace ("mm: meminit: only set page reserved in the
memblock region") the reserved bit is set on reserved memblock regions.
However, the start and end addresses are passed as unsigned long,
which is only 32 bits wide on i386, so the wrong pages can end up
being marked reserved for ranges at 4GB and above.

This was observed on a 32bit Xen dom0 which was booted with initial
memory set to a value below 4G but allowed to balloon in memory
(dom0_mem=1024M for example).  This would define a reserved bootmem
region for the additional memory (for example, on an 8GB system there
was a reserved region covering the 4GB-8GB range).  But since the
addresses were passed as unsigned long, this actually marked all
pages from 0 to 4GB as reserved.

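The signature change (phys_addr_t is always wide enough for a full
physical address, even where unsigned long is only 32bit):

  -void __init reserve_bootmem_region(unsigned long start, unsigned long end)
  +void __init reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
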
Fixes: 92923ca3aacef63 ("mm: meminit: only set page reserved in the memblock region")
Link: http://lkml.kernel.org/r/1463491221-10573-1-git-send-email-stefan.bader@canonical.com
Signed-off-by: Stefan Bader &lt;stefan.bader@canonical.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;	[4.2+]
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
</feed>
