path: root/arch/powerpc/mm/hugetlbpage-book3e.c
Age | Commit message | Author | Files | Lines
2016-03-04 | powerpc/fsl-book3e: Avoid lbarx on e5500 | Scott Wood | 1 | -0/+13
lbarx/stbcx. are implemented on e6500, but not on e5500. Likewise, SMT is on e6500, but not on e5500. So, avoid executing an unimplemented instruction by only locking when needed (i.e. in the presence of SMT).

Signed-off-by: Scott Wood <oss@buserror.net>
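A minimal sketch of the guard this describes, assuming the per-core lock helpers from the e6500 locking commit below; cpu_has_feature(CPU_FTR_SMT) is the real powerpc feature test, the rest is illustrative:

    static inline void book3e_tlb_lock(void)
    {
            /*
             * Without SMT there is nothing to lock against, and bailing
             * out here also keeps lbarx/stbcx. (used by the lock body,
             * unimplemented on e5500) from ever being executed.
             */
            if (!cpu_has_feature(CPU_FTR_SMT))
                    return;

            /* lbarx/stbcx.-based acquisition of the per-core TLB lock */
    }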
2015-12-23 | powerpc/e6500: add locking to hugetlb | Scott Wood | 1 | -0/+46
e6500 has threads but does not have TLB write conditional. Thus, the hugetlb code needs to take the same lock that the normal TLB miss handlers take, to ensure that the tlbsx and tlbwe are atomic.

Signed-off-by: Scott Wood <scottwood@freescale.com>
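The preload path then takes roughly this shape (a sketch, assuming book3e_tlb_exists() wraps tlbsx; not the verbatim kernel code):

    static void book3e_hugetlb_preload_sketch(struct mm_struct *mm,
                                              unsigned long ea, unsigned long mas1)
    {
            unsigned long flags;

            local_irq_save(flags);
            book3e_tlb_lock();            /* same lock the TLB miss handlers take */

            /* tlbsx under the lock: the other thread may have won the race */
            if (unlikely(book3e_tlb_exists(ea, mm->context.id))) {
                    book3e_tlb_unlock();
                    local_irq_restore(flags);
                    return;
            }

            mtspr(SPRN_MAS1, mas1);                 /* entry set up under the lock... */
            asm volatile ("tlbwe" : : : "memory");  /* ...so tlbsx+tlbwe are atomic   */

            book3e_tlb_unlock();
            local_irq_restore(flags);
    }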
2014-11-03 | powerpc: Replace __get_cpu_var uses | Christoph Lameter | 1 | -3/+3
This still has not been merged and now powerpc is the only arch that does not have this change. Sorry about missing linuxppc-dev before.

V2->V2
  - Fix up to work against 3.18-rc1

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address for the instance of the percpu variable of the current processor based on an offset.

Other use cases are for storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

        #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too.

The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access by e.g. using a global register that may be set to the per cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

        DEFINE_PER_CPU(int, y);
        int *x = &__get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

        DEFINE_PER_CPU(int, y[20]);
        int *x = __get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu variable.

        DEFINE_PER_CPU(int, y);
        int x = __get_cpu_var(y);

    Converts to

        int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

        DEFINE_PER_CPU(struct mystruct, y);
        struct mystruct x = __get_cpu_var(y);

    Converts to

        memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y) = x;

    Converts to

        __this_cpu_write(y, x);

6. Increment/decrement etc. of a per cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y)++;

    Converts to

        __this_cpu_inc(y);

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
[mpe: Fix build errors caused by set/or_softirq_pending(), and rework assignment in __set_breakpoint() to use memcpy().]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-01-10 | powerpc/fsl-book3e-64: Use paca for hugetlb TLB1 entry selection | Scott Wood | 1 | -10/+41
This keeps usage coordinated for hugetlb and indirect entries, which should make entry selection more predictable and probably improve overall performance when mixing the two.

Signed-off-by: Scott Wood <scottwood@freescale.com>
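A sketch of what per-core selection in the paca can look like; the tcd_ptr/esel_* names follow struct tlb_core_data but should be read as illustrative, not as the exact diff:

    static inline int book3e_tlb1_next_entry(void)
    {
            struct tlb_core_data *tcd = get_paca()->tcd_ptr;
            int entry = tcd->esel_next;

            /* One shared round-robin cursor for hugetlb and indirect
             * entries, so the two users stop evicting each other. */
            if (++tcd->esel_next >= tcd->esel_max)
                    tcd->esel_next = tcd->esel_first;

            return entry;
    }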
2014-01-10 | powerpc/fsl_booke: protect the access to MAS7 | Kevin Hao | 1 | -1/+2
The e500v1 doesn't implement MAS7, so we should avoid accessing this register on those implementations. In the current kernel, accesses to MAS7 are protected by either CONFIG_PHYS_64BIT or MMU_FTR_BIG_PHYS. Since some code is executed before the code patching, we have to use CONFIG_PHYS_64BIT in those cases.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
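The two guard forms the message refers to, sketched (surrounding context omitted; pa is a stand-in for the physical address at hand):

    /* Code that runs before feature patching: compile-time guard. */
    #ifdef CONFIG_PHYS_64BIT
            mtspr(SPRN_MAS7, upper_32_bits(pa));
    #endif

    /* Code that runs afterwards can test the MMU feature at runtime. */
            if (mmu_has_feature(MMU_FTR_BIG_PHYS))
                    mtspr(SPRN_MAS7, upper_32_bits(pa));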
2013-11-23 | powerpc/booke: Only check for hugetlb in flush if vma != NULL | Scott Wood | 1 | -2/+1
And in flush_hugetlb_page(), don't check whether vma is NULL after we've already dereferenced it. This was found by Dan using static analysis, as described here: https://lists.ozlabs.org/pipermail/linuxppc-dev/2013-November/113161.html

We currently get away with this because the callers that pass NULL for vma seem to be 32-bit-only (e.g. highmem, and CONFIG_DEBUG_PGALLOC in pgtable_32.c), and hugetlb is currently 64-bit only, so we never saw a NULL vma here.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
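The corrected ordering, sketched with an illustrative caller (the real callers live in the nohash TLB flush code):

    static void flush_one(struct mm_struct *mm, struct vm_area_struct *vma,
                          unsigned long vmaddr)
    {
            /* 32-bit callers (highmem, CONFIG_DEBUG_PGALLOC) may pass
             * vma == NULL, so test it before is_vm_hugetlb_page()
             * dereferences it, not afterwards when it is too late. */
            if (vma && is_vm_hugetlb_page(vma))
                    flush_hugetlb_page(vma, vmaddr);
            /* otherwise fall through to the normal page flush */
    }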
2011-12-07 | powerpc/book3e: Change hugetlb preload to take vma argument | Becky Bruce | 1 | -2/+6
This avoids an extra find_vma() and is less error-prone.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
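Sketched as the before/after signatures (bodies reduced to the relevant line; consistent with the message, not quoted from the diff):

    /* Before: the callee repeated a lookup its caller had already done. */
    void book3e_hugetlb_preload(struct mm_struct *mm, unsigned long ea, pte_t pte)
    {
            struct vm_area_struct *vma = find_vma(mm, ea);  /* extra walk */
    }

    /* After: the fault path hands over the vma it already holds. */
    void book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
    {
            struct mm_struct *mm = vma->vm_mm;
    }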
2011-12-07 | powerpc: Fix booke hugetlb preload code for PPC_MM_SLICES and 64-bit | Becky Bruce | 1 | -9/+6
This patch does 2 things: it corrects the code that determines the size to write into MAS1 for the PPC_MM_SLICES case (this originally came from David Gibson and I had incorrectly altered it), and it changes the methodology used to calculate the size for !PPC_MM_SLICES to work for 64-bit as well as 32-bit.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
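The two sizing paths, sketched from the description (surrounding context omitted; the TSIZE arithmetic assumes the power-of-two encoding, log2 of the size in KB):

    #ifdef CONFIG_PPC_MM_SLICES
            psize = get_slice_psize(mm, ea);       /* slice map records the size */
            tsize = mmu_get_tsize(psize);
            shift = mmu_psize_defs[psize].shift;
    #else
            psize = vma_mmu_pagesize(find_vma(mm, ea));
            shift = __ilog2(psize);                /* works on 32- and 64-bit */
            tsize = shift - 10;                    /* MAS1 TSIZE = log2(size in KB) */
    #endif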
2011-09-23 | powerpc: Fix hugetlb with CONFIG_PPC_MM_SLICES=y | Paul Mackerras | 1 | -1/+1
Commit 41151e77a4 ("powerpc: Hugetlb for BookE") added some #ifdef CONFIG_MM_SLICES conditionals to hugetlb_get_unmapped_area() and vma_mmu_pagesize(). Unfortunately this is not the correct config symbol; it should be CONFIG_PPC_MM_SLICES. The result is that attempting to use hugetlbfs on 64-bit Power server processors results in an infinite stack recursion between get_unmapped_area() and hugetlb_get_unmapped_area().

This fixes it by changing the #ifdef to use CONFIG_PPC_MM_SLICES in those functions and also in book3e_hugetlb_preload().

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
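The fix is just the symbol spelling; with a name Kconfig never defines, the slice-aware path silently compiles out and the generic path recurses (sketch):

    #ifdef CONFIG_MM_SLICES        /* wrong: never defined anywhere */
    #ifdef CONFIG_PPC_MM_SLICES    /* right: the symbol Kconfig actually sets */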
2011-09-20 | powerpc: Hugetlb for BookE | Becky Bruce | 1 | -0/+121
Enable hugepages on Freescale BookE processors. This allows the kernel to use huge TLB entries to map pages, which can greatly reduce the number of TLB misses and the amount of TLB thrashing experienced by applications with large memory footprints. Care should be taken when using this on FSL processors, as the number of large TLB entries supported by the core is low (16-64) on current processors.

The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g. Page sizes larger than the max zone size are called "gigantic" pages and must be allocated on the command line (and cannot be deallocated).

This is currently only fully implemented for Freescale 32-bit BookE processors, but there is some infrastructure in the code for 64-bit BookE.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
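For example (kernel command line, values illustrative), gigantic pages are reserved at boot and cannot be returned later:

    hugepagesz=1g hugepages=2 default_hugepagesz=1g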