path: root/arch/ia64/mm/tlb.c
Age | Commit message | Author | Files | Lines
2013-01-04 | IA64: drivers: remove __dev* attributes. | Greg Kroah-Hartman | 1 | -2/+1
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitdata, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me by hand in order to handle some of the coding style issues better. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
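For illustration (the function name here is hypothetical, not taken from this patch), each such conversion is a one-line hunk that simply drops the annotation:

    -static int __devinit foo_probe(struct pci_dev *dev)
    +static int foo_probe(struct pci_dev *dev)

With CONFIG_HOTPLUG always on, this code can no longer be discarded after init anyway, so the markings no longer buy anything.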
2010-06-30 | [IA64] Fix spinaphore down_spin() | Tony Luck | 1 | -1/+1
A typo in down_spin() meant it only read the low 32 bits of the "serve" value, instead of the full 64 bits. This results in the system hanging when the values in ticket/serve grow larger than 32 bits. A big enough system running the right test can hit this in just a few hours. Broken since 883a3acf5b0d4782ac35981227a0d094e8b44850 ("[IA64] Re-implement spinaphores using ticket lock concepts"). Reported via IRC by Bjorn Helgaas Signed-off-by: Tony Luck <tony.luck@intel.com>
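For reference, a sketch of the one-line fix (reconstructed from memory of the mainline code, so treat the details as approximate): the speculative load of ->serve in the spin loop must be an 8-byte ld8, not a 4-byte ld4:

    -	asm volatile ("ld4.c.nc %0=[%1]" : "=r"(serve)
    -		      : "r"(&ss->serve) : "memory");
    +	asm volatile ("ld8.c.nc %0=[%1]" : "=r"(serve)
    +		      : "r"(&ss->serve) : "memory");

With ld4, the 64-bit ticket is compared against a serve value whose top 32 bits have been truncated away, so once the counters pass 2^32 the comparison never succeeds and the cpu spins forever.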
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 1 | -0/+1
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by those two files universally available and complicating inclusion dependencies. The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:
* Scans files for gfp and slab usages and updates includes so that only the necessary includes are there, i.e. gfp.h if only gfp is used, slab.h if slab is used.
* When the script inserts a new include, it looks at the include blocks and tries to place the new include so that its order conforms to its surroundings. It is put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:
1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, and for others adding it to an implementation .h or embedding .c file was more appropriate. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from step 2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.
5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
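The end state for an affected .c file is just explicit inclusion; a minimal illustrative example (file contents hypothetical):

    #include <linux/slab.h>	/* kmalloc/kfree: no longer comes in via percpu.h */
    #include <linux/gfp.h>	/* GFP_KERNEL */

    static void *example_alloc(void)
    {
    	return kmalloc(64, GFP_KERNEL);	/* compiles regardless of what percpu.h pulls in */
    }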
2010-01-08 | [IA64] __per_cpu_idtrs[] is a memory hog | Tony Luck | 1 | -13/+19
__per_cpu_idtrs is statically allocated ... on CONFIG_NR_CPUS=4096 systems it hogs 16MB of memory. This is way too much for a quite probably unused facility (only KVM uses dynamic TR registers). Change to an array of pointers, and allocate entries as needed on a per cpu basis. Change the name too as the __per_cpu_ prefix is confusing (this isn't a classic <linux/percpu.h> type object). Signed-off-by: Tony Luck <tony.luck@intel.com>
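A sketch of the shape of the change (identifier names recalled from the mainline patch, so treat them as approximate):

    /* before: ~16MB of static data with CONFIG_NR_CPUS=4096, mostly unused */
    struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX];

    /* after: one pointer per cpu, backing store allocated on first use */
    struct ia64_tr_entry *ia64_idtrs[NR_CPUS];

    if (!ia64_idtrs[cpu]) {
    	ia64_idtrs[cpu] = kmalloc(2 * IA64_TR_ALLOC_MAX *
    				  sizeof(struct ia64_tr_entry), GFP_KERNEL);
    	if (!ia64_idtrs[cpu])
    		return -ENOMEM;
    }

Only cpus that actually allocate a dynamic TR (in practice, those running KVM) pay for the table.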
2009-10-09 | [IA64] Re-implement spinaphores using ticket lock concepts | Tony Luck | 1 | -6/+18
Bound the wait time for the ptcg_sem by using an idea similar to the ticket spin locks. In this case we have only one instance of a spinaphore, so make it 8 bytes rather than trying to squeeze it into 4 bytes, to keep the code simpler (and shorter). Signed-off-by: Tony Luck <tony.luck@intel.com>
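The ticket idea, sketched in portable C for clarity (the in-tree version uses ia64 fetchadd intrinsics and a speculative ld8 rather than GCC atomics):

    struct spinaphore {
    	unsigned long ticket;	/* next ticket to hand out */
    	unsigned long serve;	/* tickets allowed through so far */
    };

    static void spinaphore_init(struct spinaphore *ss, unsigned long capacity)
    {
    	ss->ticket = 0;
    	ss->serve = capacity;	/* first `capacity' tickets pass at once */
    }

    static void down_spin(struct spinaphore *ss)
    {
    	unsigned long t = __atomic_fetch_add(&ss->ticket, 1,
    					     __ATOMIC_ACQUIRE);

    	/* wait until our ticket comes up (wraparound-safe compare) */
    	while ((long)(t - __atomic_load_n(&ss->serve, __ATOMIC_ACQUIRE)) >= 0)
    		;	/* cpu_relax() */
    }

    static void up_spin(struct spinaphore *ss)
    {
    	__atomic_fetch_add(&ss->serve, 1, __ATOMIC_RELEASE);
    }

Initializing serve to the allowed concurrency lets that many holders in at once; each up_spin() then admits exactly one more waiter, in ticket order, which is what bounds the wait time.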
2009-06-17 | [IA64] Convert ia64 to use int-ll64.h | Matthew Wilcox | 1 | -2/+2
It is generally agreed that it would be beneficial for u64 to be an unsigned long long on all architectures. ia64 (in common with several other 64-bit architectures) currently uses unsigned long. Migrating piecemeal is too painful; this giant patch fixes all compilation warnings and errors that come as a result of switching to use int-ll64.h. Note that userspace will still see __u64 defined as unsigned long. This is important as it affects C++ name mangling. [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use u64 for start/end rather than unsigned long] Signed-off-by: Matthew Wilcox <willy@linux.intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
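Most of the fallout from such a switch is printk format strings and implicit-conversion warnings; an illustrative fixup:

    u64 addr = 0xc000000000000000ULL;

    printk("addr=%llx\n", addr);	/* was "%lx" while u64 == unsigned long */

As noted above, userspace headers are untouched, so __u64 stays unsigned long and C++ name mangling is preserved.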
2009-03-16 | cpumask: use mm_cpumask() wrapper: ia64 | Rusty Russell | 1 | -1/+1
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
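The conversion pattern looks like this (illustrative hunk):

    -	if (cpu_isset(smp_processor_id(), mm->cpu_vm_mask))
    +	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))

Because mm_cpumask() hides the field behind an accessor returning a pointer, a later change to how mm->cpu_vm_mask is stored only has to touch the wrapper.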
2008-10-18 | [IA64] Fix annoying IA64_TR_ALLOC_MAX message. | Tony Luck | 1 | -2/+6
Madison cpus support 64 TR registers. Increase IA64_TR_ALLOC_MAX to 64. Also fix up the messages that get printed when this limit is exceeded: repeating them for every cpu is too noisy. Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-30 | [IA64] bugfix: nptcg breaks cpu-hotadd | Hidetoshi Seto | 1 | -1/+4
If "max_purges" from PAL is 0, it actually means 1. However it was not handled later when a hot-added cpu pass the max_purges from PAL. This makes systems easy to go BUG_ON(). Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-17 | Pull nptcg into release branch | Tony Luck | 1 | -15/+146
Conflicts: arch/ia64/mm/tlb.c
2008-04-04 | [IA64] Kernel parameter for max number of concurrent global TLB purges | Fenghua Yu | 1 | -5/+41
The patch defines the kernel parameter "nptcg=". The parameter overrides the max number of concurrent global TLB purges, which is otherwise reported by either PAL_VM_SUMMARY or SAL PALO. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
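The hook is a standard early __setup() handler; a sketch close to the in-tree shape (reconstructed from memory, so treat the names as approximate):

    static int __init set_nptcg(char *str)
    {
    	int value = 0;

    	get_option(&str, &value);
    	setup_ptcg_sem(value, NPTCG_FROM_KERNEL_PARAMETER);

    	return 1;
    }
    __setup("nptcg=", set_nptcg);

Booting with e.g. nptcg=4 then caps concurrent global purges at 4, regardless of what PAL or PALO report.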
2008-04-04 | [IA64] Multiple outstanding ptc.g instruction support | Fenghua Yu | 1 | -15/+110
According to SDM 2.2, Itanium supports multiple outstanding ptc.g instructions. But the current kernel function ia64_global_tlb_purge() uses a spinlock to serialize ptc.g instructions issued by multiple processors. This serialization might be a scalability issue on a big SMP machine where many processors could otherwise purge TLBs in parallel. The patch fixes this problem by issuing multiple concurrent ptc.g instructions in ia64_global_tlb_purge(). It also adds support for the "PALO" table, to get a platform view of the max number of outstanding ptc.g instructions (which may be different from the processor view found from PAL_VM_SUMMARY). The PALO specification can be found at: http://www.dig64.org/home/DIG64_PALO_R1_0.pdf Spinaphore implementation by Matthew Wilcox. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
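Inside ia64_global_tlb_purge(), the old ptcg_lock critical section becomes a spinaphore bracket around the purge loop; a simplified sketch of the resulting shape:

    down_spin(&ptcg_sem);	/* admits up to nptcg cpus concurrently */
    do {
    	ia64_ptcga(start, (nbits << 2));	/* broadcast global purge */
    	ia64_srlz_i();
    	start += (1UL << nbits);
    } while (start < end);
    up_spin(&ptcg_sem);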
2008-04-03 | [IA64] Add API for allocating Dynamic TR resource. | Xiantao Zhang | 1 | -0/+196
Dynamic TR resources should be managed in a uniform way. Add two interfaces for the kernel:
ia64_itr_entry: allocate a (pair of) TR for the caller.
ia64_ptr_entry: purge a (pair of) TR for the caller.
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Anthony Xu <anthony.xu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
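A usage sketch of the pair (signatures recalled from the mainline code, so treat as approximate; mask 0x1 selects the instruction TR, 0x2 the data TR, 0x3 both):

    int slot;

    slot = ia64_itr_entry(0x3, vaddr, pte, log_size);	/* 0x3: itr+dtr pair */
    if (slot < 0)
    	return slot;		/* no free TR of the requested kind */

    /* ... the translation is now pinned in the TRs ... */

    ia64_ptr_entry(0x3, slot);	/* purge and free the pair */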
2007-12-19 | [IA64] Avoid unnecessary TLB flushes when allocating memory | de Dinechin, Christophe (Integrity VM) | 1 | -3/+15
Improve performance of memory allocations on ia64 by avoiding a global TLB purge when purging a single page from the file cache. This happens whenever we evict a page from the buffer cache to make room for some other allocation. Test case: run 'find /usr -type f | xargs cat > /dev/null' in the background to fill the buffer cache, then run something that uses memory, e.g. 'gmake -j50 install'. Instrumentation showed that the number of global TLB purges went from a few million down to about 170 over a 12-hour run of the above. The performance impact is particularly noticeable under virtualization, because a virtual TLB is generally both larger and slower to purge than a physical one. Signed-off-by: Christophe de Dinechin <ddd@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-12-08 | [IA64] Add missing "space" to concatenated strings | Joe Perches | 1 | -1/+1
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-07-11 | [IA64] silence GCC ia64 unused variable warnings | Jes Sorensen | 1 | -1/+1
Tell GCC to stop spewing out unnecessary warnings for unused variables passed to functions as pointers for ia64 files. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-05-08 | [IA64] SPIN_LOCK_UNLOCKED macro cleanup in arch/ia64 | Milind Arun Choudhary | 1 | -3/+3
SPIN_LOCK_UNLOCKED macro cleanup, use __SPIN_LOCK_UNLOCKED instead. Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
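The cleanup is a mechanical substitution; an illustrative hunk (variable name chosen for illustration):

    -static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
    +static spinlock_t ptcg_lock = __SPIN_LOCK_UNLOCKED(ptcg_lock);

The named form exists so spinlock debugging/lockdep can tell lock instances apart, which the anonymous SPIN_LOCK_UNLOCKED initializer cannot do.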
2006-06-30 | Remove obsolete #include <linux/config.h> | Jörn Engel | 1 | -1/+0
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-03-27 | [IA64] optimize flush_tlb_range on large numa box | Chen, Kenneth W | 1 | -5/+7
It was reported by a field customer that the global spin lock ptcg_lock causes a lot of grief for munmap performance on a large NUMA machine. The problem appears to come from flush_tlb_range(), which currently unconditionally calls platform_global_tlb_purge(). On some of the NUMA machines in existence today, this function maps to ia64_global_tlb_purge(), which holds the ptcg_lock spin lock while executing the ptc.ga instruction. Here is a patch that attempts to avoid the global tlb purge whenever possible, using the local tlb purge instead, though the conditions for using the local tlb purge are pretty restrictive. One side effect of having a flush-tlb-range instruction on ia64 is that the kernel doesn't get a chance to clear out cpu_vm_mask. On ia64 this mask is sticky, and it accumulates as a process bounces around between cpus, diminishing the possible use of ptc.l. Thoughts? Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Acked-by: Jack Steiner <steiner@sgi.com> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
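A simplified sketch of the resulting decision in flush_tlb_range() (shape reconstructed from memory; details approximate):

    preempt_disable();
    if (mm != current->active_mm || cpus_weight(mm->cpu_vm_mask) != 1) {
    	/* the mm may be live on other cpus: must broadcast */
    	platform_global_tlb_purge(mm, start, end, nbits);
    } else {
    	/* only this cpu ever ran the mm: cheap local purges suffice */
    	do {
    		ia64_ptcl(start, (nbits << 2));
    		start += (1UL << nbits);
    	} while (start < end);
    	ia64_srlz_i();
    }
    preempt_enable();

The sticky cpu_vm_mask described above is exactly what makes the cpus_weight() == 1 test fail more often than it should.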
2006-01-14 | [IA64] Hole in IA64 TLB flushing from system threads | Jack Steiner | 1 | -1/+1
I originally thought this was a bug only in the SN code, but I think I also see a hole in the generic IA64 TLB code. (A separate patch was sent for the SN problem.) It looks like there is a bug in the TLB flushing code. During context switch, kernel threads (kswapd, for example) inherit the mm of the task that was previously running on the cpu. Normally this is ok, because the previous context is still loaded into the RR registers. However, if the owner of the mm migrates to another cpu, changes its context number, and references a page before kswapd issues a tlb_purge for that same page, the purge will be done with a stale context number (and RR registers). Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-04 | [IA64] make mmu_context.h and tlb.c 80-column friendly | Chen, Kenneth W | 1 | -15/+18
wrap_mmu_context(), delayed_tlb_flush(), and get_mmu_context() all have an extra { } block which causes one extra level of indentation. get_mmu_context() is particularly bad, with 5 indentations to the innermost "if". It finally gets on my nerves that I can't keep the code within 80 columns. Remove the extra { } block and, while I'm at it, reformat all the comments to be 80-column friendly. No functional change at all with this patch. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-01 | [IA64] Use bitmaps for efficient context allocation/free | Peter Keilty | 1 | -29/+27
Corrects the very inefficient method of finding free context_ids in get_mmu_context(). Instead of walking the task_list of all processes, two bitmaps are used to efficiently store and look up state: in use, and needs flushing. The entire rid address space is now used before calling wrap_mmu_context() and global tlb flushing. Special thanks to Ken and Rohit for their review and modifications in using a bit flushmap. Signed-off-by: Peter Keilty <peter.keilty@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
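The core of the allocation then reduces to a bitmap search; a sketch with simplified field and helper names (the real structure also handles the flushmap hand-off differently):

    unsigned long ctx;

    ctx = find_next_zero_bit(ia64_ctx.bitmap, ia64_ctx.max_ctx, ia64_ctx.next);
    if (ctx >= ia64_ctx.max_ctx) {
    	/* rid space exhausted: flush TLBs, fold the flushmap (ids freed
    	 * since the last wrap) back into the bitmap, then search again */
    	wrap_mmu_context(mm);
    	ctx = find_next_zero_bit(ia64_ctx.bitmap, ia64_ctx.max_ctx, 1);
    }
    __set_bit(ctx, ia64_ctx.bitmap);
    ia64_ctx.next = ctx + 1;
    mm->context = ctx;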
2005-10-30 | [PATCH] mm: flush_tlb_range outside ptlock | Hugh Dickins | 1 | -0/+2
There was one small but very significant change in the previous patch: mprotect's flush_tlb_range fell outside the page_table_lock: as it is in 2.4, but that doesn't prove it safe in 2.6. On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which has always been called from outside page_table_lock in dup_mmap, and is so proved safe. Others required a deeper audit: I could find no reliance on page_table_lock in any; but in ia64 and parisc found some code which looks a bit as if it might want preemption disabled. That won't do any actual harm, so pending a decision from the maintainers, disable preemption there. Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and flush_tlb_page entries in cachetlb.txt: they were rather misleading (what generic code does is different from what usually happens), the rules are now changing, and it's not yet clear where we'll end up (will the generic tlb_flush_mmu happen always under lock? never under lock? or sometimes under and sometimes not?). Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 | Pull fix-slow-tlb-purge into release branch | Tony Luck | 1 | -7/+9
2005-10-28 | [IA64] - Avoid slow TLB purges on SGI Altix systems | Dean Roe | 1 | -7/+9
flush_tlb_all() can be a scaling issue on large SGI Altix systems since it uses the global call_lock and always executes on all cpus. When a process enters flush_tlb_range() to purge TLBs for another process, it is possible to avoid flush_tlb_all() and instead allow sn2_global_tlb_purge() to purge TLBs only where necessary. This patch modifies flush_tlb_range() so that this case can be handled by platform TLB purge functions and updates ia64_global_tlb_purge() accordingly. sn2_global_tlb_purge() now calculates the region register value from the mm argument introduced with this patch. Signed-off-by: Dean Roe <roe@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-26[IA64] wider use of for_each_cpu_mask() in arch/ia64hawkes@sgi.com1-2/+3
In arch/ia64 change the explicit use of for-loops and NR_CPUS into the general for_each_cpu() or for_each_online_cpu() constructs, as appropriate. This widens the scope of potential future optimizations of the general constructs, as well as takes advantage of the existing optimizations of first_cpu() and next_cpu(). Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
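The typical shape of such a conversion (illustrative):

    -	for (cpu = 0; cpu < NR_CPUS; cpu++) {
    -		if (!cpu_online(cpu))
    -			continue;
    -		do_something(cpu);
    -	}
    +	for_each_online_cpu(cpu)
    +		do_something(cpu);

Besides reading better, the macro steps directly between online cpus via first_cpu()/next_cpu() instead of probing all NR_CPUS slots.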
2005-04-17 | Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 1 | -0/+190
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!