author     Helge Deller <deller@gmx.de>   2016-04-20 22:34:15 +0300
committer  Helge Deller <deller@gmx.de>   2016-05-22 22:39:25 +0300
commit     54b668009076caddbede8fde513ca2c982590bfe (patch)
tree       873f576cebe662cdb3c8a6626ba6be193a0a6ef4 /arch/parisc/lib
parent     64e2a42bca12e408f0258c56adcf3595bcd116e7 (diff)
download   linux-54b668009076caddbede8fde513ca2c982590bfe.tar.xz
parisc: Add native high-resolution sched_clock() implementation
Add a native implementation of the sched_clock() function which uses the
processor-internal cycle counter (Control Register 16) as a high-resolution
time source.
With this patch we get much finer-grained resolution in various in-kernel
time measurements (e.g. in the function tracing logs) and probably more
accurate scheduling on SMP systems.
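As an illustrative sketch only (not necessarily the patch's exact code; the
init helper name and the cpu_hz rate parameter are assumptions), hooking CR16
into the kernel's generic sched_clock framework might look like this:

#include <linux/sched_clock.h>
#include <asm/special_insns.h>	/* mfctl() */

/* Read Control Register 16, the processor-internal cycle counter. */
static u64 notrace read_cr16_sched_clock(void)
{
	return mfctl(16);
}

/* Assumed init path; cpu_hz is the rate at which CR16 ticks. */
static void __init parisc_sched_clock_init(unsigned long cpu_hz)
{
	/*
	 * BITS_PER_LONG bits of the counter are valid: all 64 on a
	 * 64-bit kernel, while a 32-bit kernel widens the value
	 * separately (see item 1 below).
	 */
	sched_clock_register(read_cr16_sched_clock, BITS_PER_LONG, cpu_hz);
}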
There are a few specific implementation details in this patch:
1. On a 32-bit kernel we emulate the upper 32 bits of the 64-bit value that
sched_clock() must return by incrementing a per-cpu counter at every
wrap-around of the 32-bit cycle counter (see the first sketch after this
list).
2. In an SMP system, the cycle counters of the individual CPUs are not
synchronized (similar to the TSC on x86_64 systems). To cope with this we
define HAVE_UNSTABLE_SCHED_CLOCK and let the upper layers do the adjustment
work.
3. Since HAVE_UNSTABLE_SCHED_CLOCK is selected, we have to provide a
cmpxchg64() function even on a 32-bit kernel (see the second sketch below).
4. A 64-bit SMP kernel which is started on a UP system will mark the
sched_clock() implementation as "stable", which means we don't expect any
jumps in the returned counter; this holds because we then run on only one
CPU (see the third sketch below).
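A sketch of the wrap-around handling described in item 1. The per-cpu
variable names (cr16_high, cr16_last) are hypothetical, and the caller is
assumed to run with preemption disabled so both per-cpu accesses hit the
same CPU:

#include <linux/percpu-defs.h>
#include <asm/special_insns.h>	/* mfctl() */

static DEFINE_PER_CPU(u32, cr16_high);	/* emulated upper 32 bits */
static DEFINE_PER_CPU(u32, cr16_last);	/* last value read on this CPU */

static u64 sched_clock_32bit(void)
{
	u32 now = mfctl(16);	/* low 32 bits of the cycle counter */

	/*
	 * The counter wrapped if it went backwards since the last
	 * read; bump the emulated high word. This relies on being
	 * called at least once per wrap period.
	 */
	if (now < __this_cpu_read(cr16_last))
		__this_cpu_inc(cr16_high);
	__this_cpu_write(cr16_last, now);

	return ((u64)__this_cpu_read(cr16_high) << 32) | now;
}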
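For item 3, the generic cmpxchg64() entry point could simply funnel into the
__cmpxchg_u64() helper that the diff below makes available on 32-bit kernels;
this macro glue is an assumption, not a quote from the patch:

/* Assumed glue in <asm/cmpxchg.h>; __cmpxchg_u64() is shown in the diff. */
#define cmpxchg64(ptr, o, n)						\
	((__typeof__(*(ptr)))__cmpxchg_u64((volatile u64 *)(ptr),	\
					   (u64)(o), (u64)(n)))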
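And for item 4, marking the clock stable on a UP system could be done late in
boot, once the number of online CPUs is known; the initcall hook and its name
below are assumptions, and the patch's actual call site may differ:

#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/sched.h>	/* set_sched_clock_stable() */

static int __init mark_sched_clock_stable(void)
{
	/* Only one CPU running: no cross-CPU counter skew is possible. */
	if (num_online_cpus() == 1)
		set_sched_clock_stable();
	return 0;
}
late_initcall(mark_sched_clock_stable);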
Signed-off-by: Helge Deller <deller@gmx.de>
Diffstat (limited to 'arch/parisc/lib')
-rw-r--r--  arch/parisc/lib/bitops.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/parisc/lib/bitops.c b/arch/parisc/lib/bitops.c
index 187118841af1..8e45b0a97abf 100644
--- a/arch/parisc/lib/bitops.c
+++ b/arch/parisc/lib/bitops.c
@@ -55,11 +55,10 @@ unsigned long __xchg8(char x, char *ptr)
 }
 
 
-#ifdef CONFIG_64BIT
-unsigned long __cmpxchg_u64(volatile unsigned long *ptr, unsigned long old, unsigned long new)
+u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new)
 {
 	unsigned long flags;
-	unsigned long prev;
+	u64 prev;
 
 	_atomic_spin_lock_irqsave(ptr, flags);
 	if ((prev = *ptr) == old)
@@ -67,7 +66,6 @@ unsigned long __cmpxchg_u64(volatile unsigned long *ptr, unsigned long old, unsi
 	_atomic_spin_unlock_irqrestore(ptr, flags);
 	return prev;
 }
-#endif
 
 unsigned long __cmpxchg_u32(volatile unsigned int *ptr, unsigned int old, unsigned int new)
 {