path: root/arch
Age  Commit message  (Author; files changed, lines removed/added)
2015-08-27  ARCv2: entry: Fix reserved handler  (Vineet Gupta; 1 file, -7/+2)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27  ARCv2: perf: Finally introduce HS perf unit  (Vineet Gupta; 2 files, -2/+6)
With all features in place, the ARC HS PCT block can now be probed and used.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27  ARCv2: perf: SMP support  (Alexey Brodkin; 1 file, -15/+54)
* split off PMU info into singleton and per-cpu bits
* set up the PMU on all cores
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27  ARCv2: perf: implement exclusion of event counting in user or kernel mode  (Alexey Brodkin; 2 files, -2/+17)
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27  ARCv2: perf: Support sampling events using overflow interrupts  (Alexey Brodkin; 2 files, -10/+126)
In the ARC 700 days the performance counters had no interrupt support, so for ARC we only supported non-sampling events; put simply, only "perf stat" was functional. Now with ARC HS the performance counters do support interrupts, and this change adds support for them.
With regard to interrupt generation, the ARC performance counters act as follows:
[1] A counter counts up from the value set in the PCT_COUNT register pair
[2] Once the counter reaches the value set in PCT_INT_CNT, an interrupt is raised
The basic setup looks like this:
[1] PCT_COUNT = 0;
[2] PCT_INT_CNT = __limit_value__;
[3] Enable interrupts for that counter and let it run
[4] Let the counter reach its limit
[5] Handle the interrupt when it happens
Note that the PCT hardware block is built into the CPU core, so its interrupt line (basically the OR of all counters' IRQs) is wired directly to the top-level IRQC. That means that to de-assert the PCT interrupt it is required to reset the IRQs of all counters that have reached their limit values.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
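For illustration, a minimal C sketch of the counter programming and IRQ-acknowledge flow described above; the AUX register names and handler signature here are assumptions, not the driver's actual API:
---------->8-----------
#include <linux/interrupt.h>

/* the ARC_REG_PCT_* names below are illustrative stand-ins */
static void pct_start_sampling(u32 limit)
{
	write_aux_reg(ARC_REG_PCT_COUNT, 0);       /* [1] count up from 0      */
	write_aux_reg(ARC_REG_PCT_INT_CNT, limit); /* [2] raise IRQ at 'limit' */
	/* [3] enable this counter's interrupt and let it run */
}

static irqreturn_t pct_irq_handler(int irq, void *dev)
{
	/*
	 * [5] The PCT line is the OR of all counter IRQs: acknowledge every
	 * counter that hit its limit, or the line never de-asserts.
	 */
	u32 active = read_aux_reg(ARC_REG_PCT_INT_ACT);	/* assumed name */

	write_aux_reg(ARC_REG_PCT_INT_ACT, active);	/* reset them all */
	return IRQ_HANDLED;
}
---------->8-----------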
2015-08-27  ARCv2: perf: implement "event_set_period"  (Alexey Brodkin; 1 file, -16/+63)
This generalization prepares for support of overflow interrupts.
Hardware event counters on ARC work this way: each counter counts up from a programmed start value (set in ARC_REG_PCT_COUNT) to a limit value (set in ARC_REG_PCT_INT_CNT), and once the limit value is reached the counter generates an interrupt.
Even though this hardware implementation allows for more flexibility, in the Linux kernel we decided to mimic the behavior of other architectures this way:
[1] Set the limit value to half of the counter's max value (to allow the counter to keep running after reaching its limit, see below for more explanation):
---------->8-----------
arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
---------->8-----------
[2] Set the start value to "arc_pmu->max_period - sample_period" and then count up to the limit.
Our event counters don't stop on reaching the max value (the one we set in ARC_REG_PCT_INT_CNT) but continue to count until the kernel explicitly stops each of them. Setting the limit to half of the counter's capacity allows capturing the additional events that occur between the moment the interrupt is triggered and the moment we actually process the PMU interrupt; that way we're more precise. For example, if we count CPU cycles we keep track of the cycles spent running through the generic IRQ handling code:
[1] We set the counter period to, say, 100_000 events of type "crun"
[2] The counter reaches that limit and raises its interrupt
[3] Once we get into the PMU IRQ handler we read the current counter value from ARC_REG_PCT_SNAP and see something like 105_000
If counters stopped on reaching the limit value, we would have missed the additional 5000 cycles.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
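A hedged C sketch of that period programming; the overall shape follows the common *_event_set_period pattern on other architectures, and only the max_period formula is taken verbatim from this log (the struct and global are minimal stand-ins for the driver state):
---------->8-----------
#include <linux/perf_event.h>

struct arc_pmu_sketch {		/* stand-in for the driver's pmu state */
	u64 max_period;		/* (1ULL << counter_size) / 2 - 1ULL   */
};
static struct arc_pmu_sketch *arc_pmu;

static int arc_pmu_event_set_period(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;
	s64 left = local64_read(&hwc->period_left);
	s64 period = hwc->sample_period;
	int overflow = 0;
	u64 value;

	if (unlikely(left <= 0)) {		/* period elapsed: start anew */
		left += period;
		local64_set(&hwc->period_left, left);
		hwc->last_period = period;
		overflow = 1;
	}
	if (left > arc_pmu->max_period)		/* clamp to half the range */
		left = arc_pmu->max_period;

	/* counting up from (max - left) makes the limit fire after 'left' events */
	value = arc_pmu->max_period - left;
	local64_set(&hwc->prev_count, value);

	/* ... write 'value' to ARC_REG_PCT_COUNT and re-enable the counter ... */
	return overflow;
}
---------->8-----------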
2015-08-27  ARC: perf: cap the number of counters to hardware max of 32  (Vineet Gupta; 2 files, -5/+6)
The number of counters in the PCT can never be more than 32 (while countable conditions could be 100+), for both ARCompact and ARCv2. And while at it, update the copyright dates.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-21  ARC: Eliminate some ARCv2 specific code for ARCompact build  (Vineet Gupta; 2 files, -28/+34)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARC: add/fix some comments in code - no functional change  (Vineet Gupta; 6 files, -22/+23)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARC: change some branches to jumps to resolve linkage errors  (Yuriy Kolerov; 3 files, -9/+9)
When the kernel's binary becomes large enough (32M and more), errors may occur during the final linkage stage. This happens because the build system uses short relocations for ARC by default. The problem can easily be resolved by passing the -mlong-calls option to GCC, to use long absolute jumps (j) instead of short relative branches (b).
However, there are fragments of pure assembler code that use branches in inappropriate places and cause a linkage error because of relocation overflow:
* The first is the .fixup insertion in futex.h and unaligned.c, which places code containing a branch instruction in a separate section (.fixup). This leads to a linkage error when the kernel becomes large.
* The second is the calls to scheduler functions (common kernel code) from ARC's entry.S. When the kernel binary becomes large, this may lead to a linkage error because the scheduler code may end up far enough away from ARC's code in the final binary.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARC: ensure futex ops are atomic in !LLSC config  (Vineet Gupta; 1 file, -0/+12)
Without hardware-assisted atomic r-m-w, the best we can do is disable preemption.
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
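A minimal sketch of that approach, assuming the usual futex_atomic_op_inuser() shape of the era; the preempt_disable()/preempt_enable() bracketing is the point, and the asm body is elided:
---------->8-----------
#include <linux/uaccess.h>
#include <linux/preempt.h>

static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
	int ret = 0;

#ifndef CONFIG_ARC_HAS_LLSC
	preempt_disable();	/* no LL/SC: only same-CPU atomicity is possible */
#endif
	pagefault_disable();

	/* ... __futex_atomic_op() read-modify-write on *uaddr ... */

	pagefault_enable();
#ifndef CONFIG_ARC_HAS_LLSC
	preempt_enable();
#endif
	return ret;
}
---------->8-----------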
2015-08-20  ARC: Enable HAVE_FUTEX_CMPXCHG  (Vineet Gupta; 1 file, -0/+1)
ARC doesn't need the runtime detection of the futex cmpxchg op.
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARC: make futex_atomic_cmpxchg_inatomic() return bimodal  (Vineet Gupta; 1 file, -9/+11)
Callers of cmpxchg_futex_value_locked() in the futex code expect a bimodal return value:
!0 (essentially -EFAULT as failure)
 0 (success)
Before this patch, the success return value was the old value of the futex, which could very well be non-zero, possibly causing the caller to take the failure path erroneously. Fix that by returning 0 for success. (This fix was done back in 2011 for all upstream arches, which ARC obviously missed.)
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
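A hedged sketch of the resulting contract; the signature matches the generic futex backend of that era, and the LLOCK/SCOND body is elided:
---------->8-----------
static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
			      u32 expval, u32 newval)
{
	int ret = 0;
	u32 existval = 0;	/* filled in by the elided LLOCK/SCOND asm */

	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
		return -EFAULT;

	/* ... LLOCK/SCOND: if (*uaddr == expval) store newval;
	 *     a faulting access sets ret = -EFAULT via the fixup ... */

	*uval = existval;	/* old value is reported out-of-band ...   */
	return ret;		/* ... so 0 can unambiguously mean success */
}
---------->8-----------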
2015-08-20  ARC: futex cosmetics  (Vineet Gupta; 1 file, -8/+9)
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARC: add barriers to futex code  (Vineet Gupta; 1 file, -11/+10)
The atomic ops on futexes need to provide the full barrier just like regular atomics in the kernel. Also remove pagefault_enable/disable in futex_atomic_cmpxchg_inatomic(), as core code already does that.
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARCv2: IOC: Allow boot time disable  (Alexey Brodkin; 1 file, -3/+4)
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARCv2: SLC: Allow boot time disable  (Vineet Gupta; 1 file, -2/+19)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20  ARCv2: Support IO Coherency and permutations involving L1 and L2 caches  (Alexey Brodkin; 4 files, -16/+125)
An ARCv2 CPU may come in any of the following configurations, which affect cache handling for data exchanged with peripherals via DMA:
[1] Only an L1 cache exists
[2] Both L1 and L2 exist, but there is no IO coherency unit
[3] L1 and L2 caches and an IO coherency unit all exist
The current implementation takes care of [1] and [2]; moreover, support for [2] is implemented with a run-time check for SLC existence, which is not optimal. This patch introduces support for [3] and reworks the use of DMA ops: instead of doing a run-time check every time a particular DMA op is executed, we have three different implementations of the DMA ops and select the appropriate one during init.
As for IOC, supporting it requires:
[a] Implementing empty DMA ops, because the IOC takes care of cache coherency for DMAed data
[b] Routing dma_alloc_coherent() via dma_alloc_noncoherent(); this is required to make the IOC work in the first place, and also serves as an optimization since LD/ST to coherent buffers can be serviced from the caches without going all the way to memory
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
[vgupta: - Added some comments about IOC gains
         - Marked dma ops as static
         - Massaged changelog a bit]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
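A hedged sketch of the init-time selection: one cache-maintenance callback per configuration, chosen once at boot. The names (ioc_exists, l2_line_sz, the three helpers) are illustrative stand-ins, not necessarily the patch's own:
---------->8-----------
#include <linux/init.h>

static int ioc_exists;	/* probed from build-config registers at boot */
static int l2_line_sz;	/* non-zero when an SLC (L2) is present       */

static void dma_wback_inv_l1(unsigned long start, unsigned long sz)
{ /* [1] flush+invalidate L1 lines */ }

static void dma_wback_inv_slc(unsigned long start, unsigned long sz)
{ /* [2] flush+invalidate L1, then SLC */ }

static void dma_wback_inv_ioc(unsigned long start, unsigned long sz)
{ /* [3] nop: IOC snoops DMA traffic, caches stay coherent */ }

static void (*__dma_cache_wback_inv)(unsigned long start, unsigned long sz);

void __init arc_dma_cache_init(void)
{
	if (ioc_exists)				/* [3] */
		__dma_cache_wback_inv = dma_wback_inv_ioc;
	else if (l2_line_sz)			/* [2] */
		__dma_cache_wback_inv = dma_wback_inv_slc;
	else					/* [1] */
		__dma_cache_wback_inv = dma_wback_inv_l1;
}
---------->8-----------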
2015-08-11  ARC: Enable optimistic spinning for LLSC config  (Vineet Gupta; 1 file, -0/+1)
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-09  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds; 29 files, -60/+128)
Pull MIPS fixes from Ralf Baechle:
"Another round of MIPS fixes for 4.2. No area particularly stands out, but we have two unpleasant ones:
- Kernel ptes are marked with a global bit which allows the kernel to share kernel TLB entries between all processes. For this to work, both entries of an adjacent even/odd pte pair need to have the global bit set. There has been a subtle race in setting the other entry's global bit since ~2000, but it takes particularly pathological workloads that essentially do mostly vmalloc/vfree to trigger this. This pull request fixes the 64-bit case but leaves the case of 32-bit CPUs with 64-bit ptes unsolved for now. The unfixed cases affect hardware that is not available in the field yet.
- Instruction emulation requires loading instructions from user space, but the current fast-but-simplistic approach will fail on pages that are PROT_EXEC but !PROT_READ. For this reason we temporarily do not permit this permission combination and will map such pages with PROT_EXEC | PROT_READ.
The remainder of this pull request is more or less across the field, and the shortlog explains them well."
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: Make set_pte() SMP safe.
  MIPS: Replace add and sub instructions in relocate_kernel.S with addiu
  MIPS: Flush RPS on kernel entry with EVA
  Revert "MIPS: BCM63xx: Provide a plat_post_dma_flush hook"
  MIPS: BMIPS: Delete unused Kconfig symbol
  MIPS: Export get_c0_perfcount_int()
  MIPS: show_stack: Fix stack trace with EVA
  MIPS: do_mcheck: Fix kernel code dump with EVA
  MIPS: SMP: Don't increment irq_count multiple times for call function IPIs
  MIPS: Partially disable RIXI support.
  MIPS: Handle page faults of executable but unreadable pages correctly.
  MIPS: Malta: Don't reinitialise RTC
  MIPS: unaligned: Fix build error on big endian R6 kernels
  MIPS: Fix sched_getaffinity with MT FPAFF enabled
  MIPS: Fix build with CONFIG_OF=y for non OF-enabled targets
  CPUFREQ: Loongson2: Fix broken build due to incorrect include.
2015-08-08  Merge tag 'arc-v4.2-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc  (Linus Torvalds; 13 files, -116/+718)
Pull ARC fixes from Vineet Gupta:
"Here's a late pull request for accumulated ARC fixes which came out of extended testing of the new ARCv2 port with LTP etc. The llock/scond livelock workaround has been reviewed by PeterZ. The changes look like a lot, but I've crafted them into finer-grained patches for better tracking later.
I have some more fixes (ARC futex backend) ready to go, but those will have to wait for tglx to return from vacation.
Summary:
- Enable a reduced config of HS38 (w/o div-rem, ll64...)
- Add software workaround for LLOCK/SCOND livelock
- Fallout of a recent pt_regs update"
* tag 'arc-v4.2-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff
  ARC: Make pt_regs regs unsigned
  ARCv2: spinlock/rwlock: Reset retry delay when starting a new spin-wait cycle
  ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff
  ARC: LLOCK/SCOND based rwlock
  ARC: LLOCK/SCOND based spin_lock
  ARC: refactor atomic inline asm operands with symbolic names
  Revert "ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock"
  ARCv2: [axs103_smp] Reduce clk for Quad FPGA configs
  ARCv2: Fix the peripheral address space detection
  ARCv2: allow selection of page size for MMUv4
  ARCv2: lib: memset: Don't assume 64-bit load/stores
  ARCv2: lib: memcpy: Missing PREFETCHW
  ARCv2: add knob for DIV_REV in Kconfig
  ARC/time: Migrate to new 'set-state' interface
2015-08-08  Merge tag 'usb-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb  (Linus Torvalds; 1 file, -0/+1)
Pull USB fixes from Greg KH:
"Here are some USB and PHY fixes for 4.2-rc6 that resolve some reported issues. All of these have been in the linux-next tree for a while; full details on the patches are in the shortlog below."
* tag 'usb-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  ARM: dts: dra7: Add syscon-pllreset syscon to SATA PHY
  drivers/usb: Delete XHCI command timer if necessary
  xhci: fix off by one error in TRB DMA address boundary check
  usb: udc: core: add device_del() call to error pathway
  phy: ti-pipe3: i783 workaround for SATA lockup after dpll unlock/relock
  phy-sun4i-usb: Add missing EXPORT_SYMBOL_GPL for sun4i_usb_phy_set_squelch_detect
  USB: sierra: add 1199:68AB device ID
  usb: gadget: f_printer: actually limit the number of instances
  usb: gadget: f_hid: actually limit the number of instances
  usb: gadget: f_uac2: fix calculation of uac2->p_interval
  usb: gadget: bdc: fix a driver crash on disconnect
  usb: chipidea: ehci_init_driver is intended to call one time
  USB: qcserial: Add support for Dell Wireless 5809e 4G Modem
  USB: qcserial/option: make AT URCs work for Sierra Wireless MC7305/MC7355
2015-08-07  ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff  (Vineet Gupta; 2 files, -4/+2)
The increment of the delay counter was 2 instructions: Arithmetic Shift Left (ASL) + set to 1 on overflow. This can be done in 1 instruction using ROtate Left (ROL).
Suggested-by: Nigel Topham <ntopham@synopsys.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc  (Linus Torvalds; 4 files, -81/+11)
Pull sparc fix from David Miller:
"FPU register corruption bug fix"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix userspace FPU register corruptions.
2015-08-07  sparc64: Fix userspace FPU register corruptions.  (David S. Miller; 4 files, -81/+11)
If we have a series of events from userspace, with %fprs=FPRS_FEF, like the following:
ETRAP
ETRAP
VIS_ENTRY(fprs=0x4)
VIS_EXIT
RTRAP (kernel FPU restore with fpu_saved=0x4)
RTRAP
we will not restore the user registers that were clobbered by the FPU-using kernel code in the inner-most trap.
Traps allocate FPU save slots in the thread struct, and FPU-using sequences save only the "dirty" FPU registers. This works at the initial trap level because all of the registers get recorded into the top-level FPU save area, and we'll return to userspace with the FPU disabled, so that any FPU use by the user will take an FPU-disabled trap wherein we'll load the registers back up properly. But this is not how trap returns from kernel to kernel operate.
The simplest fix for this bug is to always save all FPU register state for anything other than the top-most FPU save area. Getting rid of the optimized inner-slot FPU saving code ends up making VISEntryHalf degenerate into plain VISEntry.
Longer term we need to do something smarter to reinstate the partial save optimizations. Perhaps the fundamental error is having trap entry and exit allocate FPU save slots and restore register state; instead, the VISEntry et al. calls should be doing that work.
This bug is about two decades old.
Reported-by: James Y Knight <jyknight@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07  signal: fix information leak in copy_siginfo_to_user  (Amanieu d'Antras; 1 file, -1/+2)
This function may copy the si_addr_lsb, si_lower and si_upper fields to user mode when they haven't been initialized, which can leak kernel stack data to user mode.
Just checking the value of si_code is insufficient because the same si_code value is shared between multiple signals. This is solved by checking the value of si_signo in addition to si_code.
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
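A hedged sketch of the kind of check described, modeled on the fault case for SIGBUS inside the switch of copy_siginfo_to_user(); the exact fields gated by the actual patch may differ:
---------->8-----------
/* fragment of the switch in copy_siginfo_to_user(); sketch only */
case __SI_FAULT:
	err |= __put_user(from->si_addr, &to->si_addr);
#ifdef BUS_MCEERR_AO
	/* only SIGBUS machine-check signals actually fill in si_addr_lsb */
	if (from->si_signo == SIGBUS &&
	    (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO))
		err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);
#endif
	break;
---------->8-----------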
2015-08-07  signal: fix information leak in copy_siginfo_from_user32  (Amanieu d'Antras; 4 files, -8/+0)
This function can leak kernel stack data when the user siginfo_t has a positive si_code value. The top 16 bits of si_code describe which fields in the siginfo_t union are active, but they are treated inconsistently between copy_siginfo_from_user32, copy_siginfo_to_user32 and copy_siginfo_to_user. copy_siginfo_from_user32 is called from rt_sigqueueinfo and rt_tgsigqueueinfo, in which the user has full control over the top 16 bits of si_code.
This fixes the following information leaks:
x86: 8 bytes leaked when sending a signal from a 32-bit process to itself. This leak grows to 16 bytes if the process uses x32. (si_code = __SI_CHLD)
x86: 100 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = -1)
sparc: 4 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = any)
parisc and s390 have similar bugs, but they are not vulnerable because rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code to a different process. These bugs are also fixed for consistency.
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-05  Merge tag 'phy-for-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy into usb-linus  (Greg Kroah-Hartman; 1 file, -0/+1)
Kishon writes:
phy: for 4.2-rc6
*) Fix compiler error when the sun4i usb phy driver is built as a module
*) Fix SATA lockup issue in dra7 SoC
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2015-08-05  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 2 files, -6/+6)
Pull KVM fixes from Paolo Bonzini:
"Just two very small & simple patches"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ON
  KVM: s390: Fix hang VCPU hang/loop regression
2015-08-05  KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ON  (Alex Williamson; 1 file, -4/+4)
The patch was munged on commit to re-order these tests, resulting in excessive warnings when trying to do device assignment. Return to the original ordering: https://lkml.org/lkml/2015/7/15/769
Fixes: 3e5d2fdceda1 ("KVM: MTRR: simplify kvm_mtrr_get_guest_memory_type")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05  MIPS: Make set_pte() SMP safe.  (David Daney; 1 file, -0/+31)
On MIPS the GLOBAL bit of the PTE must have the same value in any aligned pair of PTEs. These pairs of PTEs are referred to as "buddies". In an SMP system it is possible for two CPUs to be calling set_pte() on adjacent PTEs at the same time. There is a race between setting the PTE and a different CPU setting the GLOBAL bit in its buddy PTE. This race can be observed when multiple CPUs are executing vmap()/vfree() at the same time.
Make setting the buddy PTE's GLOBAL bit an atomic operation to close the race condition.
The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not* handled.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10835/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
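A hedged sketch of the race and the fix, simplified from the 64-bit case; the actual patch's loop structure may differ in detail:
---------->8-----------
static inline void set_pte(pte_t *ptep, pte_t pteval)
{
	*ptep = pteval;

	if (pte_val(pteval) & _PAGE_GLOBAL) {
		pte_t *buddy = ptep_buddy(ptep);
		unsigned long old;

		/*
		 * Propagate GLOBAL to an empty buddy atomically: a plain
		 * read-modify-write here races with a concurrent set_pte()
		 * of the buddy entry on another CPU.
		 */
		old = pte_val(*buddy);
		while (pte_none(__pte(old))) {
			if (cmpxchg(&buddy->pte, old,
				    old | _PAGE_GLOBAL) == old)
				break;		/* GLOBAL bit is now set */
			old = pte_val(*buddy);	/* lost the race: re-read */
		}
	}
}
---------->8-----------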
2015-08-05  ARC: Make pt_regs regs unsigned  (Vineet Gupta; 2 files, -37/+37)
KGDB fails to build after f51e2f191112 ("ARC: make sure instruction_pointer() returns unsigned value"). The hack to force one specific reg to unsigned backfired; there's no reason to keep the regs signed after all.
|   CC      arch/arc/kernel/kgdb.o
| ../arch/arc/kernel/kgdb.c: In function 'kgdb_trap':
| ../arch/arc/kernel/kgdb.c:180:29: error: lvalue required as left operand of assignment
|   instruction_pointer(regs) -= BREAK_INSTR_SIZE;
Reported-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Fixes: f51e2f191112 ("ARC: make sure instruction_pointer() returns unsigned value")
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04  ARM: dts: dra7: Add syscon-pllreset syscon to SATA PHY  (Roger Quadros; 1 file, -0/+1)
This register is required to be passed to the SATA PHY driver to work around errata i783 (SATA Lockup After SATA DPLL Unlock/Relock).
Signed-off-by: Roger Quadros <rogerq@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2015-08-04  ARCv2: spinlock/rwlock: Reset retry delay when starting a new spin-wait cycle  (Vineet Gupta; 1 file, -3/+3)
The previous commit for delayed retry of SCOND needs some fine tuning for spin locks. The backoff from delayed retry, in conjunction with the spin looping on the lock itself, can potentially cause the delay counter to reach high values. So, to provide fairness to any lock operation, once a lock "seems" available (i.e. just before the first SCOND try), reset the delay counter back to its starting value of 1.
Essentially, reset the delay to 1 for each new spin-wait-loop-acquire cycle.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04  ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff  (Vineet Gupta; 4 files, -4/+347)
This works around the llock/scond livelock.
HS38x4 could get into a LLOCK/SCOND livelock in case of multiple overlapping coherency transactions in the SCU: the exclusive line state keeps rotating among the contending cores, leading to a never-ending cycle. So break the cycle by deferring the retry of a failed exclusive access (SCOND). The actual delay needed is a function of the number of contending cores as well as the unrelated coherency traffic from other cores. To keep the code simple, start off with a small delay of 1, which suffices in most cases, and double the delay on contention. Eventually the delay becomes sufficient for the coherency pipeline to drain, so that a subsequent exclusive access succeeds.
Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
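A C-level sketch of the backoff scheme, assuming a hypothetical llsc_try_add() helper; the real implementation is inline assembly working on LLOCK/SCOND directly:
---------->8-----------
#include <linux/atomic.h>
#include <asm/processor.h>

#define SCOND_FAIL_RETRY_INIT	1

/* hypothetical single LLOCK/SCOND attempt: true if the SCOND stuck */
static bool llsc_try_add(int i, atomic_t *v);

static inline void atomic_add_backoff(int i, atomic_t *v)
{
	unsigned int delay = SCOND_FAIL_RETRY_INIT;

	while (!llsc_try_add(i, v)) {
		unsigned int n = delay;

		while (n--)		/* sit out the backoff window */
			cpu_relax();
		delay <<= 1;		/* double the delay on each failure  */
		if (!delay)		/* on overflow wrap back to 1 (the   */
			delay = 1;	/* ROL commit above does this in hw) */
	}
}
---------->8-----------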
2015-08-04  ARC: LLOCK/SCOND based rwlock  (Vineet Gupta; 2 files, -10/+166)
With LLOCK/SCOND, the rwlock counter can be atomically updated without the need for a guarding spin lock. This in turn elides the EXchange-instruction-based spinning, which forces the cacheline into exclusive state, so that concurrent spinning across cores keeps the line bouncing around. The LLOCK/SCOND based implementation is superior, as spinning on LLOCK keeps the cacheline in shared state.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04  ARC: LLOCK/SCOND based spin_lock  (Vineet Gupta; 1 file, -7/+69)
The current spin_lock uses the EXchange instruction to implement the atomic test-and-set of the lock location (it reads the orig value and STores 1). This however forces the cacheline into exclusive state (because of the ST), and concurrent loops on multiple cores bounce the line around between cores. Instead, use LLOCK/SCOND to implement the atomic test-and-set, which is better as the line stays in shared state while the lock is spinning on LLOCK.
The real motivation of this change, however, is to make way for future changes in atomics to implement delayed retry (with backoff). An initial experiment with delayed retry in atomics combined with the orig EX-based spinlock was a total disaster (it broke even LMBench), as struct sock has a cache line sharing an atomic_t and a spinlock. The tight spinning on the lock caused the atomic retry to keep backing off such that it would never finish.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
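A sketch of the LLOCK/SCOND-based lock acquire, close in shape to what such a patch looks like but not quoted from it; while the lock is held, the `breq` keeps re-issuing LLOCK (line stays shared), and only the final SCOND attempts the store:
---------->8-----------
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned int val;

	smp_mb();

	__asm__ __volatile__(
	"1:	llock	%[val], [%[slock]]	\n"
	"	breq	%[val], %[LOCKED], 1b	\n"	/* spin while locked */
	"	scond	%[LOCKED], [%[slock]]	\n"	/* try to take it    */
	"	bnz	1b			\n"	/* SCOND failed: retry */
	: [val]		"=&r"	(val)
	: [slock]	"r"	(&(lock->slock)),
	  [LOCKED]	"r"	(__ARCH_SPIN_LOCK_LOCKED__)
	: "memory", "cc");

	smp_mb();
}
---------->8-----------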
2015-08-04  ARC: refactor atomic inline asm operands with symbolic names  (Vineet Gupta; 1 file, -15/+17)
This reduces the diff in forthcoming patches and also helps in understanding the incremental changes to the inline asm.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04  Revert "ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock"  (Vineet Gupta; 1 file, -12/+2)
Extended testing of the quad core configuration revealed that this fix was insufficient. Specifically, LTP open posix shm_op/23-1 would cause the hardware livelock in the llock/scond loop in update_cpu_load_active(). So remove this and make way for a proper workaround.
This reverts commit a5c8b52abe677977883655166796f167ef1e0084.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04  ARCv2: [axs103_smp] Reduce clk for Quad FPGA configs  (Vineet Gupta; 1 file, -0/+15)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-03  ARCv2: Fix the peripheral address space detection  (Vineet Gupta; 2 files, -5/+10)
With the HS 2.1 release, the peripheral space register no longer contains the uncached space specifics, causing the kernel to panic early on. So read the newer NON VOLATILE AUX register to get that info.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-03  MIPS: Replace add and sub instructions in relocate_kernel.S with addiu  (James Cowgill; 1 file, -4/+4)
Fixes the assembler errors generated when compiling a MIPS R6 kernel with CONFIG_KEXEC on, by replacing the offending add and sub instructions with addiu instructions.
Build errors:
arch/mips/kernel/relocate_kernel.S: Assembler messages:
arch/mips/kernel/relocate_kernel.S:27: Error: invalid operands `dadd $16,$16,8'
arch/mips/kernel/relocate_kernel.S:64: Error: invalid operands `dadd $20,$20,8'
arch/mips/kernel/relocate_kernel.S:65: Error: invalid operands `dadd $18,$18,8'
arch/mips/kernel/relocate_kernel.S:66: Error: invalid operands `dsub $22,$22,1'
scripts/Makefile.build:294: recipe for target 'arch/mips/kernel/relocate_kernel.o' failed
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10558/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  MIPS: Flush RPS on kernel entry with EVA  (James Hogan; 1 file, -0/+25)
When EVA is enabled, flush the Return Prediction Stack (RPS) present on some MIPS cores on entry to the kernel from user mode. This is important specifically for interAptiv with EVA enabled, otherwise kernel mode RPS mispredicts may trigger speculative fetches of user return addresses, which may be sensitive in the kernel address space due to EVA's overlapping user/kernel address spaces.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15.x-
Patchwork: https://patchwork.linux-mips.org/patch/10812/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  Revert "MIPS: BCM63xx: Provide a plat_post_dma_flush hook"  (Florian Fainelli; 1 file, -10/+0)
This reverts commit 3cf29543413207d3ab1c3f62a88c09bb46f2264e ("MIPS: BCM63xx: Provide a plat_post_dma_flush hook"), since this commit was found to prevent BCM6358 (early BMIPS4350 cores) and some BCM6368 (BMIPS4380 cores) from booting reliably. Alvaro was able to track this down to an issue specifically located in devices that use the second thread (TP1) when booting. Since BCM63xx did not have a need for the plat_post_dma_flush() hook before, let's just keep things the way they were.
Reported-by: Álvaro Fernández Rojas <noltari@gmail.com>
Reported-by: Jonas Gorski <jogo@openwrt.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Nicolas Schichan <nschichan@freebox.fr>
Cc: linux-mips@linux-mips.org
Cc: blogic@openwrt.org
Cc: noltari@gmail.com
Cc: jogo@openwrt.org
Patchwork: https://patchwork.linux-mips.org/patch/10804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  MIPS: BMIPS: Delete unused Kconfig symbol  (Kevin Cernekee; 1 file, -1/+0)
This was left over from an earlier iteration of the BMIPS irqchip changes. It doesn't actually have an effect, so let's nuke it.
Reported-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9910/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  MIPS: Export get_c0_perfcount_int()  (Felix Fietkau; 6 files, -0/+6)
get_c0_perfcount_int is tested from oprofile code. If oprofile is compiled as a module, get_c0_perfcount_int needs to be exported, otherwise it cannot be resolved.
Fixes: a669efc4a3b4 ("MIPS: Add hook to get C0 performance counter interrupt")
Cc: stable@vger.kernel.org # v3.19+
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: linux-mips@linux-mips.org
Cc: abrestic@chromium.org
Patchwork: https://patchwork.linux-mips.org/patch/10763/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  MIPS: show_stack: Fix stack trace with EVA  (James Hogan; 1 file, -0/+7)
The show_stack() function deals exclusively with kernel contexts, but if it gets called in user context with EVA enabled, show_stacktrace() will attempt to access the stack using EVA accesses, which will either read other user-mapped data or, more likely, cause an exception which will be handled by __get_user(). This is easily reproduced using SysRq t to show all task states, which results in the following stack dump output:
Stack : (Bad stack address)
Fix by setting the current user access mode to kernel around the call to show_stacktrace(). This causes __get_user() to use normal loads to read the kernel stack. Now we get the correct output, like this:
Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c 1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002 1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0 ...
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
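A minimal sketch of the set_fs() bracketing described above (the classic pre-5.10 pattern); the regs setup is elided:
---------->8-----------
#include <linux/uaccess.h>

void show_stack(struct task_struct *task, unsigned long *sp)
{
	struct pt_regs regs;
	mm_segment_t old_fs = get_fs();

	/* ... populate regs (sp, epc, ra) from task/sp as before ... */

	/*
	 * show_stacktrace() walks kernel addresses only; with KERNEL_DS,
	 * __get_user() issues normal loads instead of EVA user loads.
	 */
	set_fs(KERNEL_DS);
	show_stacktrace(task, &regs);
	set_fs(old_fs);
}
---------->8-----------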
2015-08-03  MIPS: do_mcheck: Fix kernel code dump with EVA  (James Hogan; 1 file, -0/+6)
If a machine check exception is raised in kernel mode, user context, with EVA enabled, then the do_mcheck handler will attempt to read the code around the EPC using EVA load instructions, i.e. as if the reads were from user mode. This will either read random user data if the process has anything mapped at the same address, or it will cause an exception which is handled by __get_user, resulting in this output:
Code: (Bad address in epc)
Fix by setting the current user access mode to kernel if the saved register context indicates the exception was taken in kernel mode. This causes __get_user to use normal loads to read the kernel code.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10777/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03  MIPS: SMP: Don't increment irq_count multiple times for call function IPIs  (Alex Smith; 13 files, -30/+29)
The majority of SMP platforms handle their IPIs through do_IRQ(), which calls irq_{enter/exit}(). When a call function IPI is received, smp_call_function_interrupt() is called, which also calls irq_{enter,exit}(), meaning irq_count is raised twice.
When tick broadcasting is used (which is implemented via a call function IPI), this incorrectly causes all CPU idle time on the core receiving broadcast ticks to be accounted as time spent servicing IRQs, as account_process_tick() will account as such if irq_count is greater than 1. This results in 100% CPU usage being reported on a core which receives its ticks via broadcast.
This patch removes the SMP smp_call_function_interrupt() wrapper which calls irq_{enter,exit}(). Platforms which handle their IPIs through do_IRQ() now call generic_smp_call_function_interrupt() directly to avoid incrementing irq_count a second time. Platforms which don't (loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt() wrapped in irq_{enter,exit}().
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
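A sketch of the two dispatch paths after the change; the entry-point names are illustrative:
---------->8-----------
#include <linux/smp.h>
#include <linux/hardirq.h>

/* path 1: IPI demuxed from do_IRQ(), irq accounting already active */
static void platform_call_function_ipi(void)
{
	generic_smp_call_function_interrupt();	/* no extra irq_enter/exit */
}

/* path 2: IPI handled outside do_IRQ() (e.g. loongson, sgi-ip27, sibyte) */
static void platform_direct_call_function_ipi(void)
{
	irq_enter();
	generic_smp_call_function_interrupt();
	irq_exit();
}
---------->8-----------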
2015-08-03  MIPS: Partially disable RIXI support.  (Ralf Baechle; 1 file, -4/+4)
Execution of break instructions, trap instructions, emulation of unaligned loads or floating point instructions - anything that tries to read the instruction's opcode from userspace - needs read access to a page. RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of pages that are executable but not readable. On such a mapping, the attempted load of the opcode by the kernel is going to cause an endless loop of page faults. The quick workaround for this is to disable the combination that the kernel currently isn't able to handle: mappings that are executable but not readable.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>