path: root/arch/x86/kernel/irq_32.c
Age | Commit message | Author | Files | Lines
2015-09-16 | genirq: Remove irq argument from irq flow handlers | Thomas Gleixner | 1 | -12/+7
Most interrupt flow handlers do not use the irq argument. Those few which use it can retrieve the irq number from the irq descriptor. Remove the argument. Search and replace was done with coccinelle and some extra helper scripts around it. Thanks to Julia for her help! Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Julia Lawall <Julia.Lawall@lip6.fr> Cc: Jiang Liu <jiang.liu@linux.intel.com>
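    For illustration, a minimal sketch (not taken from the actual diff) of what the new single-argument flow handler signature looks like for a handler that still needs the number; my_flow_handler is hypothetical, irq_desc_get_irq() is the accessor the message alludes to:

    #include <linux/irq.h>
    #include <linux/irqdesc.h>
    #include <linux/printk.h>

    /* Hypothetical driver flow handler, shown only to illustrate the
     * descriptor-only signature. */
    static void my_flow_handler(struct irq_desc *desc)
    {
            /* The few handlers that still need the number pull it from the descriptor. */
            unsigned int irq = irq_desc_get_irq(desc);

            pr_debug("flow handler for irq %u\n", irq);
            handle_level_irq(desc);         /* delegate to a generic flow handler */
    }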
2015-08-28 | x86/irq: Do not dereference irq descriptor before checking it | Thomas Gleixner | 1 | -1/+2
Having the IS_ERR_OR_NULL() check after dereferencing the pointer is not really working well. Move the dereference after the check. Fixes: a782a7e46bb5 'x86/irq: Store irq descriptor in vector array' Reported-and-tested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
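    The bug class is a plain check-after-use. A minimal sketch of the fixed ordering (demo_handle_irq is a hypothetical wrapper, not the real entry-path code):

    #include <linux/err.h>
    #include <linux/irqdesc.h>
    #include <linux/types.h>

    static bool demo_handle_irq(struct irq_desc *desc)
    {
            if (IS_ERR_OR_NULL(desc))       /* validate the pointer first ...   */
                    return false;

            desc->handle_irq(desc);         /* ... and only then dereference it */
            return true;
    }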
2015-08-06 | x86/irq: Store irq descriptor in vector array | Thomas Gleixner | 1 | -5/+4
We can spare the irq_desc lookup in the interrupt entry code if we store the descriptor pointer in the vector array instead of the interrupt number. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.717724106@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
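    Conceptually the change is just the element type of the per-CPU vector table; a hedged sketch with illustrative demo_* names (the real per-CPU array is vector_irq):

    #include <linux/irqdesc.h>
    #include <linux/percpu.h>
    #include <asm/irq_vectors.h>

    /* Before: vector -> irq number; every interrupt then paid for an
     * irq_to_desc() lookup in the entry code. */
    typedef int demo_old_vector_irq_t[NR_VECTORS];

    /* After: vector -> descriptor pointer; the lookup disappears. */
    typedef struct irq_desc *demo_new_vector_irq_t[NR_VECTORS];
    DECLARE_PER_CPU(demo_new_vector_irq_t, demo_vector_irq);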
2015-05-10 | x86/irq: Merge irq_regs & irq_stat | Brian Gerst | 1 | -6/+0
Move irq_regs and irq_stat definitions to irq.c. Signed-off-by: Brian Gerst <brgerst@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1431185813-15413-2-git-send-email-brgerst@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-23 | x86/asm/entry: Change all 'user_mode_vm()' calls to 'user_mode()' | Andy Lutomirski | 1 | -1/+1
user_mode_vm() and user_mode() are now the same. Change all callers of user_mode_vm() to user_mode(). The next patch will remove the definition of user_mode_vm. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brad Spengler <spender@grsecurity.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/43b1f57f3df70df5a08b0925897c660725015554.1426728647.git.luto@kernel.org [ Merged to a more recent kernel. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-01-02 | x86: Clean up current_stack_pointer | Andy Lutomirski | 1 | -10/+3
There's no good reason for it to be a macro, and x86_64 will want to use it, so it should be in a header. Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Andy Lutomirski <luto@amacapital.net>
2014-03-11 | x86, threadinfo: Redo "x86: Use inline assembler to get sp" | Mathias Krause | 1 | -1/+5
This patch restores the changes of commit dff38e3e93 "x86: Use inline assembler instead of global register variable to get sp". They got lost in commit 198d208df4 "x86: Keep thread_info on thread stack in x86_32" while moving the code to arch/x86/kernel/irq_32.c. Quoting Andi from commit dff38e3e93: """ LTO in gcc 4.6/4.7 has trouble with global register variables. They were used to read the stack pointer. Use a simple inline assembler statement with a mov instead. This also helps LLVM/clang, which does not support global register variables. """ Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Signed-off-by: Mathias Krause <minipli@googlemail.com> Link: http://lkml.kernel.org/r/1394178752-18047-1-git-send-email-minipli@googlemail.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
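    The two forms being swapped look roughly like this (a sketch close to, but not guaranteed identical to, the restored hunk):

    /* Removed form: a global register variable, which LTO in gcc 4.6/4.7
     * mishandles and clang does not support at all. */
    /* register unsigned long current_stack_pointer asm("esp") __used; */

    /* Restored form: a plain inline-asm mov. */
    #define current_stack_pointer ({                \
            unsigned long sp;                       \
            asm("mov %%esp,%0" : "=g" (sp));        \
            sp;                                     \
    })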
2014-03-07 | x86: Keep thread_info on thread stack in x86_32 | Steven Rostedt | 1 | -43/+31
x86_64 uses a per_cpu variable kernel_stack to always point to the thread stack of current. This is where the thread_info is stored and is accessed from this location even when the irq or exception stack is in use. This removes the complexity of having to maintain the thread info on the stack when interrupts are running and having to copy the preempt_count and other fields to the interrupt stack. x86_32 uses the old method of copying the thread_info from the thread stack to the exception stack just before executing the exception. Having the two different methods requires #ifdefs, and the x86_32 way is also a bit of a pain to maintain. By converting x86_32 to the same method as x86_64, we can remove #ifdefs, clean up the x86_32 code a little, and remove the overhead of the copy. Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Brian Gerst <brgerst@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20110806012354.263834829@goodmis.org Link: http://lkml.kernel.org/r/20140206144321.852942014@goodmis.org Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
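    A hedged sketch of the x86_64 scheme being adopted; kernel_stack, KERNEL_STACK_OFFSET and the demo_* helper name are assumptions drawn from that era's code, not the literal patch:

    #include <linux/percpu.h>
    #include <linux/thread_info.h>

    DECLARE_PER_CPU(unsigned long, kernel_stack);

    /* thread_info stays at the bottom of the thread stack; because kernel_stack
     * always points near its top, nothing has to be copied to the irq stack. */
    static inline struct thread_info *demo_current_thread_info(void)
    {
            return (struct thread_info *)
                    (this_cpu_read(kernel_stack) + KERNEL_STACK_OFFSET - THREAD_SIZE);
    }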
2014-03-07 | x86: Prepare removal of previous_esp from i386 thread_info structure | Steven Rostedt | 1 | -4/+11
The i386 thread_info contains a previous_esp field that is used to daisy chain the different stacks for dump_stack() (ie. irq, softirq, thread stacks). The goal is to eventually make i386 handling of thread_info the same as x86_64, which means that the thread_info will not be in the stack but as a per_cpu variable. We will no longer depend on thread_info being able to daisy chain different stacks as it will only exist in one location (the thread stack). By moving previous_esp to the end of thread_info and referencing it as an offset instead of using a thread_info field, this becomes a stepping stone to moving the thread_info. The offset to get to the previous stack is rather ugly in this patch, but this is only temporary and the prev_esp will be changed in the next commit. This commit is more for sanity checks of the change. Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Robert Richter <rric@kernel.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20110806012353.891757693@goodmis.org Link: http://lkml.kernel.org/r/20140206144321.608754481@goodmis.org Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2013-11-12 | Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -4/+0
Pull scheduler changes from Ingo Molnar:
 "The main changes in this cycle are:
  - (much) improved CONFIG_NUMA_BALANCING support from Mel Gorman, Rik van Riel, Peter Zijlstra et al. Yay!
  - optimize preemption counter handling: merge the NEED_RESCHED flag into the preempt_count variable, by Peter Zijlstra.
  - wait.h fixes and code reorganization from Peter Zijlstra
  - cfs_bandwidth fixes from Ben Segall
  - SMP load-balancer cleanups from Peter Zijlstra
  - idle balancer improvements from Jason Low
  - other fixes and cleanups"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (129 commits)
  ftrace, sched: Add TRACE_FLAG_PREEMPT_RESCHED
  stop_machine: Fix race between stop_two_cpus() and stop_cpus()
  sched: Remove unnecessary iteration over sched domains to update nr_busy_cpus
  sched: Fix asymmetric scheduling for POWER7
  sched: Move completion code from core.c to completion.c
  sched: Move wait code from core.c to wait.c
  sched: Move wait.c into kernel/sched/
  sched/wait: Fix __wait_event_interruptible_lock_irq_timeout()
  sched: Avoid throttle_cfs_rq() racing with period_timer stopping
  sched: Guarantee new group-entities always have weight
  sched: Fix hrtimer_cancel()/rq->lock deadlock
  sched: Fix cfs_bandwidth misuse of hrtimer_expires_remaining
  sched: Fix race on toggling cfs_bandwidth_used
  sched: Remove extra put_online_cpus() inside sched_setaffinity()
  sched/rt: Fix task_tick_rt() comment
  sched/wait: Fix build breakage
  sched/wait: Introduce prepare_to_wait_event()
  sched/wait: Add ___wait_cond_timeout() to wait_event*_timeout() too
  sched: Remove get_online_cpus() usage
  sched: Fix race in migrate_swap_stop()
  ...
2013-10-01 | irq: Consolidate do_softirq() arch overriden implementations | Frederic Weisbecker | 1 | -22/+8
All arch overridden implementations of do_softirq() share the following common code: disable irqs (to avoid races with the pending check), check if there are softirqs pending, then execute __do_softirq() on a specific stack. Consolidate the common parts such that archs only worry about the stack switch. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@au1.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@au1.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: Helge Deller <deller@gmx.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Andrew Morton <akpm@linux-foundation.org>
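    The consolidated generic path then looks roughly like this (a sketch; do_softirq_own_stack() is the per-arch hook this series introduces as I recall it, and demo_do_softirq stands in for the real kernel/softirq.c function):

    #include <linux/interrupt.h>
    #include <linux/irqflags.h>
    #include <linux/linkage.h>
    #include <linux/preempt.h>

    asmlinkage void demo_do_softirq(void)
    {
            unsigned long flags;

            if (in_interrupt())
                    return;

            local_irq_save(flags);                  /* avoid racing with the pending check */
            if (local_softirq_pending())
                    do_softirq_own_stack();         /* arch hook: switch stack, run __do_softirq() */
            local_irq_restore(flags);
    }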
2013-09-25 | sched, x86: Provide a per-cpu preempt_count implementation | Peter Zijlstra | 1 | -4/+0
Convert x86 to use a per-cpu preemption count. The reason for doing so is that accessing per-cpu variables is a lot cheaper than accessing thread_info variables. We still need to save/restore the actual preemption count due to PREEMPT_ACTIVE so we place the per-cpu __preempt_count variable in the same cache-line as the other hot __switch_to() variables such as current_task. NOTE: this save/restore is required even for !PREEMPT kernels as cond_resched() also relies on preempt_count's PREEMPT_ACTIVE to ignore task_struct::state. Also rename thread_info::preempt_count to ensure nobody is 'accidentally' still poking at it. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-gzn5rfsf8trgjoqx8hyayy3q@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
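    A hedged sketch of what a per-cpu preemption count read amounts to (demo_preempt_count is illustrative; the real x86 asm/preempt.h also masks out the folded NEED_RESCHED bit):

    #include <linux/percpu.h>

    DECLARE_PER_CPU(int, __preempt_count);

    static __always_inline int demo_preempt_count(void)
    {
            /* One %fs/%gs-relative load, cheaper than current_thread_info()->preempt_count. */
            return this_cpu_read(__preempt_count);
    }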
2013-07-15 | x86: delete __cpuinit usage from all x86 files | Paul Gortmaker | 1 | -1/+1
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the arch/x86 uses of the __cpuinit macros from all C files. x86 only had the one __CPUINIT used in assembly files, and it wasn't paired off with a .previous or a __FINIT, so we can delete it directly w/o any corresponding additional change there. [1] https://lkml.org/lkml/2013/5/20/589 Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2012-05-08 | x86: Use common threadinfo allocator | Thomas Gleixner | 1 | -4/+4
The only difference is the free_thread_info function, which frees xstate. Use the new arch_release_task_struct() function instead and switch over to the core allocator. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20120505150141.559556763@linutronix.de Cc: x86@kernel.org
2012-02-20 | x86-32/irq: Don't switch to irq stack for a user-mode irq | Linus Torvalds | 1 | -8/+3
If the irq happens in user mode, our kernel stack is empty (apart from the pt_regs themselves, of course), so there's no need or advantage to switch. And it really doesn't save any stack space, quite the reverse: it means that a nested interrupt cannot switch irq stacks. So instead of saving kernel stack space, it actually causes the potential for *more* stack usage. Also simplify the preemption count copy when we do switch stacks: just copy the whole preemption count, rather than just the softirq parts of it. There is no advantage to the partial copy: it is more effort to get a less correct result. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1202191139260.10000@i5.linux-foundation.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
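    A simplified, hedged sketch of the resulting 32-bit decision (demo_handle_irq32 and the stubbed execute_on_irq_stack() are stand-ins, and the era's code spelled the user-mode check user_mode_vm() with the old two-argument flow handler signature):

    #include <linux/irq.h>
    #include <linux/ptrace.h>
    #include <linux/types.h>

    /* Assumed stand-in for the real helper that switches to the per-CPU irq
     * stack and runs the handler there; returns false if it did not switch. */
    static bool execute_on_irq_stack(struct irq_desc *desc, unsigned int irq)
    {
            return false;
    }

    bool demo_handle_irq32(unsigned int irq, struct pt_regs *regs)
    {
            struct irq_desc *desc = irq_to_desc(irq);

            if (unlikely(!desc))
                    return false;

            /* From user mode the kernel stack is empty, so run the handler right
             * here; switching would only keep a nested irq from switching. */
            if (user_mode(regs) || !execute_on_irq_stack(desc, irq))
                    desc->handle_irq(irq, desc);

            return true;
    }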
2011-12-05 | x86: Fix the 32-bit stackoverflow-debug build | Ingo Molnar | 1 | -0/+3
The panic_on_stackoverflow variable needs to be available on the 32-bit side as well ... Cc: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> Cc: Randy Dunlap <rdunlap@xenotime.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: http://lkml.kernel.org/r/20111129060836.11076.12323.stgit@ltc219.sdl.hitachi.co.jp Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-12-05 | x86: Panic on detection of stack overflow | Mitsuo Hayasaka | 1 | -0/+2
Currently, messages are just output on detection of a stack overflow, which is not sufficient for systems that need high reliability: the overflow may already have corrupted data, and further corruption can occur if the system keeps running and reading that data. This patch adds the sysctl parameter kernel.panic_on_stackoverflow, which, when set, causes a panic on detection of an overflow of the kernel, IRQ or exception stacks (but not the user stack). It is disabled by default. Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> Cc: yrl.pp-manager.tt@hitachi.com Cc: Randy Dunlap <rdunlap@xenotime.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: http://lkml.kernel.org/r/20111129060836.11076.12323.stgit@ltc219.sdl.hitachi.co.jp Signed-off-by: Ingo Molnar <mingo@elte.hu>
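    The check this adds amounts to something like the following sketch (sysctl_panic_on_stackoverflow is the flag name I believe gets wired to the new sysctl; demo_report_stack_overflow is hypothetical):

    #include <linux/kernel.h>
    #include <linux/printk.h>

    extern int sysctl_panic_on_stackoverflow;

    static void demo_report_stack_overflow(void)
    {
            printk(KERN_WARNING "low stack detected by irq handler\n");
            dump_stack();

            if (sysctl_panic_on_stackoverflow)
                    panic("low stack detected by irq handler - check messages\n");
    }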
2011-03-18 | x86: Fix common misspellings | Lucas De Marchi | 1 | -1/+1
They were generated by 'codespell' and then manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi> Cc: trivial@kernel.org LKML-Reference: <1300389856-1099-3-git-send-email-lucas.demarchi@profusion.mobi> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-01-19 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -5/+2
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Clear irqstack thread_info
  x86: Make relocatable kernel work with new binutils
2011-01-18 | x86: Clear irqstack thread_info | Brian Gerst | 1 | -5/+2
Matthias Merz reported that v2.6.37 failed to boot on his system. Make sure that the thread_info part of the irqstack is initialized to zeroes. Reported-and-Tested-by: Matthias Merz <linux@merz-ka.de> Signed-off-by: Brian Gerst <brgerst@gmail.com> Acked-by: Pekka Enberg <penberg@kernel.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <AANLkTimyKXfJ1x8tgwrr1hYnNLrPfgE1NTe4z7L6tUDm@mail.gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-30 | x86: Use this_cpu_ops to optimize code | Tejun Heo | 1 | -2/+2
Go through x86 code and replace __get_cpu_var and get_cpu_var instances that refer to a scalar and are not used for address determinations. Cc: Yinghai Lu <yinghai@kernel.org> Cc: Ingo Molnar <mingo@elte.hu> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
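    The transformation is mechanical; a small hedged example with a hypothetical per-cpu counter:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned int, demo_counter);

    static void demo_count_event(void)
    {
            /* Old style: __get_cpu_var(demo_counter)++;  -- computes the address
             * of this CPU's copy first, then modifies it. */

            /* New style: a single segment-prefixed read-modify-write on x86. */
            this_cpu_inc(demo_counter);
    }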
2010-10-29 | x86-32: Restore irq stacks NUMA-aware allocations | Eric Dumazet | 1 | -2/+7
Commit 22d4cd4c4d ("Allocate irq stacks seperate from percpu area") removed the NUMA affinity of IRQ stacks as a side effect of the fix. Using alloc_pages_node() instead of __get_free_pages() is safe, even if the target node has no available LOWMEM pages: alloc_pages_node() falls back to another node. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Brian Gerst <brgerst@gmail.com> Cc: tj@kernel.org Cc: torvalds@linux-foundation.org Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <1288276854.2649.607.camel@edumazet-laptop> Signed-off-by: Ingo Molnar <mingo@elte.hu>
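    A hedged sketch of the node-aware allocation this restores (demo_alloc_irq_stack is hypothetical and THREAD_SIZE_ORDER is assumed for the stack order; the era's 32-bit code used a similar constant):

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/topology.h>

    static void *demo_alloc_irq_stack(int cpu)
    {
            /* Prefer the CPU's home node; alloc_pages_node() transparently falls
             * back to another node if this one has no free LOWMEM. */
            struct page *page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
                                                 THREAD_SIZE_ORDER);

            return page ? page_address(page) : NULL;
    }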
2010-10-27 | x86-32: Allocate irq stacks seperate from percpu area | Brian Gerst | 1 | -10/+2
The percpu allocator cannot handle alignments larger than one page. Allocate the irq stacks separately, and only keep the pointers as percpu data. Signed-off-by: Brian Gerst <brgerst@gmail.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: tj@kernel.org LKML-Reference: <1288158182-1753-1-git-send-email-brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-09-07 | x86, 32-bit: Align percpu area and irq stacks to THREAD_SIZE | Alexander van Heukelum | 1 | -2/+2
The irq stacks, located in the percpu-area, need to be THREAD_SIZE aligned. Add the infrastructure to align percpu variables to larger-than-pagesize amounts within the percpu area, and use it to specify the alignment for the irq stacks. Also align the percpu area itself to THREAD_SIZE. This should make irq stacks work with 8K THREAD_SIZE. Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm> Cc: Tejun Heo <tj@kernel.org> Cc: hch@lst.de LKML-Reference: <1283799222.15941.1393621887@webmail.messagingengine.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-07-27 | x86-32: Align IRQ stacks properly | Christoph Hellwig | 1 | -1/+1
As suggested by Steven Rostedt we need to align the IRQ stacks to the stack size, not just the page size to make them work for stack traces and other things that depend on finding the stack slot itself with 8k stacks. Signed-off-by: Christoph Hellwig <hch@lst.de> LKML-Reference: <20100727121313.GA19976@lst.de> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-06-29 | x86: Always use irq stacks | Christoph Hellwig | 1 | -6/+0
IRQ stacks provide much better safety against unexpected stack use from interrupts, at the minimal downside of slightly higher memory usage. Enable irq stacks also for the default 8k stack on 32-bit kernels to minimize the problem of stack overflows through interrupt activity. This is what the 64-bit kernel and various other architectures already do. Signed-off-by: Christoph Hellwig <hch@lst.de> LKML-Reference: <20100628121554.GA6605@lst.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-11-02 | x86: Unify fixup_irqs() for 32-bit and 64-bit kernels | Suresh Siddha | 1 | -45/+0
There is no reason to have different fixup_irqs() for 32-bit and 64-bit kernels. Unify by using the superior 64-bit version for both the kernels. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Gary Hade <garyhade@us.ibm.com> Cc: Eric W. Biederman <ebiederm@xmission.com> LKML-Reference: <20091026230001.562512739@sbs-t61.sc.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-10 | x86: Use printk_once() | Marcin Slusarz | 1 | -3/+2
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Cc: "H. Peter Anvin" <hpa@zytor.com> LKML-Reference: <1249847649-11631-6-git-send-email-marcin.slusarz@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
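    For reference, the pattern being replaced and its one-line equivalent (the functions and messages here are made up for illustration):

    #include <linux/printk.h>
    #include <linux/types.h>

    static void demo_warn_once_old(void)
    {
            static bool warned;

            if (!warned) {
                    warned = true;
                    printk(KERN_WARNING "demo: something unusual happened\n");
            }
    }

    static void demo_warn_once_new(void)
    {
            /* printk_once() keeps the static one-shot guard internally. */
            printk_once(KERN_WARNING "demo: something unusual happened\n");
    }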
2009-02-24 | Merge branch 'tj-percpu' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into core/percpu | Ingo Molnar | 1 | -14/+15
Conflicts:
  arch/x86/include/asm/pgtable.h
2009-02-20 | x86: use percpu data for 4k hardirq and softirq stacks | Lai Jiangshan | 1 | -14/+15
Impact: economize memory for large NR_CPUS. Percpu data is set up earlier than the irq code, so we can use percpu data for these stacks and economize memory. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-02-17 | x86, apic: remove duplicate asm/apic.h inclusions | Ingo Molnar | 1 | -1/+0
Impact: cleanup Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-17 | x86, apic: remove genapic.h | Ingo Molnar | 1 | -1/+1
Impact: cleanup Remove genapic.h and remove all references to it. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-09 | x86: unify do_IRQ() | Jeremy Fitzhardinge | 1 | -27/+0
With the differences in interrupt handling hoisted into handle_irq(), do_IRQ is more or less identical between 32 and 64 bit, so unify it. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
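    A hedged sketch of the shape of the unified function, not the exact code (demo_do_IRQ is a stand-in and present-day per-cpu accessors are used for brevity):

    #include <linux/hardirq.h>
    #include <linux/irq.h>
    #include <asm/hw_irq.h>
    #include <asm/irq_regs.h>

    unsigned int demo_do_IRQ(struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);
            unsigned int vector = ~regs->orig_ax;   /* entry code pushes ~vector */
            unsigned int irq;

            irq_enter();
            irq = __this_cpu_read(vector_irq[vector]);

            if (!handle_irq(irq, regs))             /* the 32/64-bit differences live in here */
                    pr_err("unexpected irq vector %u\n", vector);

            irq_exit();
            set_irq_regs(old_regs);
            return 1;
    }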
2009-02-09 | x86: add handle_irq() to allow interrupt injection | Jeremy Fitzhardinge | 1 | -13/+21
Xen uses a different interrupt path, so introduce handle_irq() to allow interrupts to be inserted into the normal interrupt path. This is handled slightly differently on 32 and 64-bit. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-29 | x86: remove mach_apic.h | Ingo Molnar | 1 | -1/+1
Spread mach_apic.h definitions into genapic.h. (with some knock-on effects on smp.h and apic.h.) Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-11 | cpumask: update irq_desc to use cpumask_var_t | Mike Travis | 1 | -1/+1
Impact: reduce memory usage, use new cpumask API. Replace the affinity and pending_masks with cpumask_var_t's. This adds to the significant size reduction done with the SPARSE_IRQS changes. The added functions (init_alloc_desc_masks & init_copy_desc_masks) are in the include file so they can be inlined (and optimized out for the !CONFIG_CPUMASK_OFFSTACK case.) [Naming chosen to be consistent with the other init*irq functions, as well as the backwards arg declaration of "from, to" instead of the more common "to, from" standard.] Includes a slight change to the declaration of struct irq_desc to embed the pending_mask within ifdef(CONFIG_SMP) to be consistent with other references, and some small changes to Xen. Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64 Signed-off-by: Mike Travis <travis@sgi.com> Cc: Chris Wright <chrisw@sous-sol.org> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Cc: virtualization@lists.osdl.org Cc: xen-devel@lists.xensource.com Cc: Yinghai Lu <yhlu.kernel@gmail.com>
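    Usage of the cpumask_var_t type, for reference (a small self-contained sketch; demo_use_cpumask is hypothetical):

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/slab.h>

    static int demo_use_cpumask(void)
    {
            cpumask_var_t mask;

            /* With CONFIG_CPUMASK_OFFSTACK=y this allocates the mask; otherwise
             * cpumask_var_t is a plain array and alloc/free compile away. */
            if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                    return -ENOMEM;

            cpumask_copy(mask, cpu_online_mask);
            /* ... use the mask ... */
            free_cpumask_var(mask);
            return 0;
    }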
2009-01-04 | x86: irq_32.c fix style problems | Jaswinder Singh Rajput | 1 | -5/+5
Impact: cleanup

Fix:
  WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
  ERROR: "(foo*)" should be "(foo *)"
  ERROR: space required after that ',' (ctx:VxV)
  total: 5 errors, 1 warnings

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17 | x86: fixup_irqs() doesnt need an argument. | Mike Travis | 1 | -6/+7
Impact: cleanup, remove on-stack cpumask. The "map" arg is always cpu_online_mask. Importantly, set_affinity always ands the argument with cpu_online_mask anyway, so we don't need to do it in fixup_irqs(), avoiding a temporary. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
2008-12-13 | Merge ../linux-2.6-x86 | Rusty Russell | 1 | -0/+2
Conflicts:
  arch/x86/kernel/io_apic.c
  kernel/sched.c
  kernel/sched_stats.h
2008-12-13 | cpumask: make irq_set_affinity() take a const struct cpumask | Rusty Russell | 1 | -1/+1
Impact: change existing irq_chip API. Not much point with gentle transition here: the struct irq_chip's setaffinity method signature needs to change. Fortunately, not widely used code, but hits a few architectures. Note: In irq_select_affinity() I save a temporary by mangling irq_desc[irq].affinity directly. Ingo, does this break anything? (Folded in fix from KOSAKI Motohiro) Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Grant Grundler <grundler@parisc-linux.org> Acked-by: Ingo Molnar <mingo@redhat.com> Cc: ralf@linux-mips.org Cc: grundler@parisc-linux.org Cc: jeremy@xensource.com Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
2008-12-08 | sparse irq_desc[] array: core kernel and x86 changes | Yinghai Lu | 1 | -0/+2
Impact: new feature Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM with NR_CPUS set to large values. The goal is to be able to scale up to much larger NR_IRQS value without impacting the (important) common case. To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of irq_desc pointers. When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get irq_desc, this also makes the IRQ descriptors NUMA-local (to the site that calls request_irq()). This gets rid of the irq_cfg[] static array on x86 as well: irq_cfg now uses desc->chip_data for x86 to store irq_cfg. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
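    Conceptually, the flat array becomes an on-demand, NUMA-local allocation; a hedged sketch with illustrative names (demo_alloc_desc is not the real allocator):

    #include <linux/irqdesc.h>
    #include <linux/slab.h>

    static struct irq_desc *demo_alloc_desc(unsigned int irq, int node)
    {
            /* kzalloc_node() puts the descriptor on the node that will use it. */
            struct irq_desc *desc = kzalloc_node(sizeof(*desc), GFP_KERNEL, node);

            if (!desc)
                    return NULL;

            /* ... minimal init, then publish it so irq_to_desc(irq) can find it ... */
            return desc;
    }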
2008-10-16 | x86: move ack_bad_irq() to irq.c | Thomas Gleixner | 1 | -23/+0
Share more duplicated code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-10-16 | x86: unify show_interrupts() and proc helpers | Thomas Gleixner | 1 | -146/+0
show_interrupts() and proc helpers are basically the same for 32 and 64 bit. Move them to a shared source file. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-10-16 | x86: cleanup show_interrupts | Thomas Gleixner | 1 | -83/+77
The sparseirq patches introduced some more ugliness in show_interrupts(). Clean it up all together and make the code easier to read by splitting out the "tail" function which prints the special interrupts. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-10-16 | genirq: remove sparse irq code | Thomas Gleixner | 1 | -8/+0
This code is not ready, but we need to rip it out instead of rebasing as we would lose the APIC/IO_APIC unification otherwise. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-10-16 | x86: irq no should not use hex in /proc/interrupts | Yinghai Lu | 1 | -1/+1
Arjan van de Ven noticed that we changed IRQ numbers from decimal to hex in /proc/interrupts - that can break user-space utilities like irqbalanced. Reported-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-16 | x86: make HAVE_SPARSE_IRQ support selectable | Yinghai Lu | 1 | -1/+1
Ingo said sparse_irq is somewhat intrusive, so make it selectable. To keep it simple, remove irq_desc as a parameter in some functions (ack, eoi, set_affinity). We may need to make the irq_chip members take irq_desc, or a struct irq, later. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-16 | x86: add debug info for 32bit sparse_irq | Yinghai Lu | 1 | -2/+2
This helps figure out bugs where we get an interrupt but vector_irq is not initialized yet. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-16 | x86: make 32bit support per_cpu vector | Yinghai Lu | 1 | -7/+11
This lets us merge io_apic_32.c and io_apic_64.c. v2: Use cpu_online_map as the target cpus for bigsmp, just like 64-bit does. Also remove the unused TARGET_CPUS macro. v3: Check whether desc is NULL in smp_irq_move_cleanup; migration needs to reset the vector too, so copy __target_IO_APIC_irq from 64-bit. (The duplication will go away once the two files are unified.) Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-16 | x86: make 32 bit to use sparse_irq | Yinghai Lu | 1 | -9/+28
For now, the irq number still needs to be less than NR_IRQS because of the interrupt[NR_IRQS] array in entry.S; per_cpu vectors need to be enabled to lift that limit. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>