author    Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>  2018-02-13 14:09:33 +0300
committer Michael Ellerman <mpe@ellerman.id.au>  2018-02-13 14:37:48 +0300
commit    fc5c2f4a55a2c258e12013cdf287cf266dbcd2a7 (patch)
tree      e731a8d68592ba682bad8e43b074410f124354c1 /arch/arc/lib/Makefile
parent    ff31e105464d8c8c973019646827020aed9c2d9f (diff)
download  linux-fc5c2f4a55a2c258e12013cdf287cf266dbcd2a7.tar.xz
powerpc/mm/hash64: Zero PGD pages on allocation
On powerpc we allocate page table pages from slab caches of different sizes. Currently we have a constructor that zeroes out the objects when we allocate them for the first time. We expect the objects to be zeroed out when we free the object back to the slab cache. This happens in the unmap path. For hugetlb pages we call huge_pte_get_and_clear() to do that.

With the current configuration of page table size, both PUD and PGD level tables are allocated from the same slab cache. At the PUD level, we use the second half of the table to store the slot information. But we never clear that when unmapping.

When such a freed object is then allocated for a PGD page, the second half of the page table page will not be zeroed as expected. This results in a kernel crash.

Fix it by always clearing PGD pages when they're allocated.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Change log wording and formatting, add whitespace]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
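The pattern the commit message describes can be illustrated with a minimal userspace C sketch. This is not the actual kernel patch; the pool names (pool_alloc, pool_free, TABLE_SIZE) are hypothetical and stand in for the slab cache shared by the PUD and PGD levels. It shows how an object that is only zeroed by a constructor on first allocation can come back with stale data when the free path does not clear all of it, and how zeroing at allocation time avoids that.

    /*
     * Illustrative sketch only, not kernel code: a one-slot "slab cache"
     * whose constructor zeroes objects on first allocation but whose free
     * path (the "unmap path") clears only the first half of the table.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 64

    static unsigned char *cached_obj;        /* hypothetical one-slot cache */

    /* First allocation zeroes (constructor-style); reuse returns as-is. */
    static unsigned char *pool_alloc(void)
    {
            if (cached_obj) {
                    unsigned char *obj = cached_obj;
                    cached_obj = NULL;
                    return obj;              /* reused object, not re-zeroed */
            }
            return calloc(1, TABLE_SIZE);    /* zeroed on first allocation */
    }

    /* Free path deliberately clears only the first half of the table. */
    static void pool_free(unsigned char *obj)
    {
            memset(obj, 0, TABLE_SIZE / 2);
            cached_obj = obj;
    }

    int main(void)
    {
            unsigned char *pud = pool_alloc();

            /* "PUD" user stores slot info in the second half of the table. */
            memset(pud + TABLE_SIZE / 2, 0xAA, TABLE_SIZE / 2);
            pool_free(pud);

            /* Same cache now serves a "PGD"; its second half is still dirty. */
            unsigned char *pgd = pool_alloc();
            printf("second half after reallocation: 0x%02x\n",
                   pgd[TABLE_SIZE - 1]);     /* prints 0xaa, not 0x00 */

            /* The fix described above: clear the whole PGD on allocation. */
            memset(pgd, 0, TABLE_SIZE);
            printf("after zeroing on allocation:    0x%02x\n",
                   pgd[TABLE_SIZE - 1]);

            free(pgd);
            return 0;
    }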