Age | Commit message | Author | Files | Lines |
|
The kernel build for s390 fails with gcc compilers of version 3.x,
so set the minimum required gcc version to 4.3.
As the atomic builtins are available with all gcc 4.x compilers,
use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap
functions to replace the complex macro and inline assembler magic
in include/asm/cmpxchg.h. The compiler can just do it and generates
better code with the builtins.
While we are at it use __sync_bool_compare_and_swap for the
_raw_compare_and_swap function in the spinlock code as well.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
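For illustration, a minimal sketch of how the builtins replace the
inline assembler; the helper names cmpxchg_u32 and try_lock are
hypothetical, not the kernel's:

#include <stdint.h>

/* __sync_val_compare_and_swap() emits a compare-and-swap instruction
 * on s390 and returns the previous memory contents, like cmpxchg() */
static inline uint32_t cmpxchg_u32(uint32_t *ptr, uint32_t old, uint32_t new)
{
	return __sync_val_compare_and_swap(ptr, old, new);
}

/* the bool variant matches what _raw_compare_and_swap needs in the
 * spinlock code: nonzero if the swap actually happened */
static inline int try_lock(uint32_t *lock, uint32_t owner)
{
	return __sync_bool_compare_and_swap(lock, 0, owner);
}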
|
|
Fix broken inline assembly constraints for cmpxchg64 on 32bit.
Fixes this crash:
specification exception: 0006 [#1] SMP
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0 #4
task: 005a16c8 ti: 00592000 task.ti: 00592000
Krnl PSW : 070ce000 8029abd6 (lockref_get+0x3e/0x9c)
...
Krnl Code: 8029abcc: a71a0001 ahi %r1,1
8029abd0: 1852 lr %r5,%r2
#8029abd2: bb40f064 cds %r4,%r0,100(%r15)
>8029abd6: 1943 cr %r4,%r3
8029abd8: 1815 lr %r1,%r5
Call Trace:
([<0000000078e01870>] 0x78e01870)
[<000000000021105a>] sysfs_mount+0xd2/0x1c8
[<00000000001b551e>] mount_fs+0x3a/0x134
[<00000000001ce768>] vfs_kern_mount+0x44/0x11c
[<00000000001ce864>] kern_mount_data+0x24/0x3c
[<00000000005cc4b8>] sysfs_init+0x74/0xd4
[<00000000005cb5b4>] mnt_init+0xe0/0x1fc
[<00000000005cb16a>] vfs_caches_init+0xb6/0x14c
[<00000000005be794>] start_kernel+0x318/0x33c
[<000000000010001c>] _stext+0x1c/0x80
Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
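A hedged sketch of what a corrected 31 bit cmpxchg64 looks like: the
8 byte operands must sit in even/odd register pairs, and the
constraints have to say so. The register_pair union mirrors the one in
the s390 headers; the exact constraint strings are illustrative:

typedef union {
	unsigned long long pair;
	struct {
		unsigned int high;
		unsigned int low;
	} subreg;
} register_pair;

static inline unsigned long long cmpxchg64_sketch(unsigned long long *ptr,
						  unsigned long long old,
						  unsigned long long new)
{
	register_pair rp_old = { .pair = old };
	register_pair rp_new = { .pair = new };

	/* CDS requires even/odd register pairs for both 8 byte register
	 * operands; constraining the 64 bit values as single "d"
	 * operands makes gcc allocate such pairs on 31 bit */
	asm volatile(
		"	cds	%0,%2,%1"
		: "+&d" (rp_old.pair), "+Q" (*ptr)
		: "d" (rp_new.pair)
		: "memory", "cc");
	return rp_old.pair;
}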
|
|
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
For 1 and 2 byte operands of xchg and cmpxchg the old and new values
get or'ed into the larger 4 byte old value before the compare and swap
instruction gets executed. This is done without applying the proper byte
mask before or'ing the values.
If the caller passed in negative old or new values, these got sign
extended, which in turn means that either the old value never matches
or, even worse, unrelated bytes get changed in memory.
Luckily there don't seem to be any callers around yet, since that would
have resulted in the specification exception fixed in an earlier patch.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
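The idea of the fix, as a hedged C sketch built on the gcc builtin
rather than the kernel's inline assembly; cmpxchg_u8 is a hypothetical
name:

#include <stdint.h>

static inline uint8_t cmpxchg_u8(uint8_t *ptr, uint8_t old, uint8_t new)
{
	uintptr_t addr = (uintptr_t) ptr;
	uint32_t *word = (uint32_t *)(addr & ~(uintptr_t) 3);
	/* byte position within the big-endian 4 byte word, as on s390 */
	unsigned int shift = (3 - (addr & 3)) << 3;
	uint32_t mask = 0xffU << shift;
	uint32_t cur = *word;

	for (;;) {
		/* masking to one byte before or'ing is the point of the
		 * fix: a sign extended old/new would clobber the three
		 * neighbouring bytes in the word */
		uint32_t prev = (cur & ~mask) | ((uint32_t) old << shift);
		uint32_t next = (cur & ~mask) | ((uint32_t) new << shift);
		uint32_t got = __sync_val_compare_and_swap(word, prev, next);

		if (got == prev)
			return old;		/* swap succeeded */
		if (((got & mask) >> shift) != old)
			return (got & mask) >> shift; /* byte differs: fail */
		cur = got;	/* unrelated bytes changed: retry */
	}
}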
|
|
When accessing a 1 or 2 byte memory operand we cannot use the
passed address, since the compare and swap instruction only works
on 4 byte aligned memory operands.
Hence we calculate an aligned address so that compare and swap works
correctly. However, we don't pass the calculated address to the inline
assembly. This results in incorrect memory accesses and in a
specification exception when used on non 4 byte aligned memory operands.
Since this apparently never triggered until now, there don't seem to be
too many users of cmpxchg on unaligned addresses.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
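A hedged sketch of the shape of the fix; the function name and
constraint strings are illustrative. The point is that the asm memory
operand has to be built from the calculated aligned address, not from
the passed pointer:

static inline unsigned int cs_aligned_sketch(void *ptr, unsigned int prev,
					     unsigned int next)
{
	/* round the operand address down to the 4 byte boundary */
	unsigned long aligned = (unsigned long) ptr & ~3UL;

	/* correct: the memory operand is built from the aligned address;
	 * the bug was referencing the passed pointer here instead */
	asm volatile(
		"	cs	%0,%2,%1"
		: "+d" (prev), "+Q" (*(unsigned int *) aligned)
		: "d" (next)
		: "memory", "cc");
	return prev;
}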
|
|
The cmpxchg macros and functions are a bit different from those on other
architectures. In particular, the macros do not store the return
value of a __cmpxchg function call in a variable before returning it.
This causes compile warnings that occur only on s390, like this one:
net/ipv4/af_inet.c: In function 'build_ehash_secret':
net/ipv4/af_inet.c:241:2: warning: value computed is not used [-Wunused-value]
To get rid of these warnings use the same construct that we already use
for the xchg macro, which was introduced for the same reason.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
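The construct in question, sketched: a gcc statement expression that
assigns the result to a typed temporary, so callers may discard the
value without triggering -Wunused-value. __cmpxchg stands for the
existing size-dispatching helper:

#define cmpxchg(ptr, o, n)						\
({									\
	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__cmpxchg((ptr), (unsigned long)(o),			\
			  (unsigned long)(n), sizeof(*(ptr)));		\
	__ret;								\
})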
|
|
All cmpxchg functions imply a memory barrier.
cmpxchg64 did not have one for 31 bit code, so add it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
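The compiler-visible half of that barrier is the "memory" clobber on
the asm statement; a hedged fragment, reusing the register_pair layout
from the sketch further up:

/* without the "memory" clobber gcc may reorder unrelated loads and
 * stores across the cmpxchg64 and break the implied barrier; the CDS
 * instruction itself already serializes the CPU */
asm volatile(
	"	cds	%0,%2,%1"
	: "+&d" (rp_old.pair), "+Q" (*ptr)
	: "d" (rp_new.pair)
	: "memory", "cc");	/* <- the added compiler barrier */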
|
|
Implement arch specific irqsafe_cpu ops. The arch specific ops do not
disable/enable interrupts since that is an expensive operation. Instead
we disable preemption and perform a compare and swap loop.
Since on server distros (the ones we care about) preemption is disabled,
the preempt_disable()/preempt_enable() pair is a nop.
In the end this code should be faster than the generic one.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
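A hedged sketch of the pattern; irqsafe_cpu_add_sketch is a made-up
name, the real code is a macro family operating on per-cpu pointers:

#include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */
#include <asm/cmpxchg.h>	/* cmpxchg() */

static inline void irqsafe_cpu_add_sketch(int *pcp, int val)
{
	int old, new, prev;

	preempt_disable();	/* nop when preemption is disabled */
	prev = *pcp;
	do {
		old = prev;
		new = old + val;
		/* the compare and swap is atomic even against interrupt
		 * handlers on the same cpu, so irqs can stay enabled */
		prev = cmpxchg(pcp, old, new);
	} while (prev != old);
	preempt_enable();
}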
|
|
We have a cmpxchg64_local() implementation but, strangely enough, the
SMP-capable variant cmpxchg64() is missing. So implement it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
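Sketched, under the assumption that the 64 bit case can forward to the
generic 8 byte cmpxchg() while 31 bit needs the CDS-based variant:

#ifdef CONFIG_64BIT
#define cmpxchg64(ptr, o, n)	cmpxchg((ptr), (o), (n))
#else
/* 31 bit: CDS-based implementation, see the sketch further up */
#endif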
|
|
Move the xchg() and cmpxchg() functions to their own header file, like
some other architectures have done.
With this we make sure that system.h really is just the place
where everything that doesn't fit anywhere else is gathered.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
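A sketch of the resulting layout; the guard name is illustrative, and
whether system.h then pulls this header back in for compatibility is an
assumption here:

/* arch/s390/include/asm/cmpxchg.h */
#ifndef __ASM_CMPXCHG_H
#define __ASM_CMPXCHG_H

/* xchg(), cmpxchg(), cmpxchg64() and their helpers move here,
 * unchanged, from asm/system.h */

#endif /* __ASM_CMPXCHG_H */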
|