<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/arch/mips/include/asm/atomic.h, branch v6.19.11</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.11</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=v6.19.11'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2023-06-05T07:57:14+00:00</updated>
<entry>
<title>locking/atomic: make atomic*_{cmp,}xchg optional</title>
<updated>2023-06-05T07:57:14+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2023-06-05T07:01:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=d12157efc8e083c77d054675fcdd594f54cc7e2b'/>
<id>urn:sha1:d12157efc8e083c77d054675fcdd594f54cc7e2b</id>
<content type='text'>
Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.

Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept as these
are used to build other arch_atomic*() operations.
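
As a rough illustration of the kind of fallback this adds (a sketch
only, not the exact generated header; the arch_atomic*() names below
are simply the ones the fallbacks cover):

  #ifndef arch_atomic_xchg
  #define arch_atomic_xchg(v, new) \
          arch_xchg(&amp;((v)-&gt;counter), (new))
  #endif

  #ifndef arch_atomic_cmpxchg
  #define arch_atomic_cmpxchg(v, old, new) \
          arch_cmpxchg(&amp;((v)-&gt;counter), (old), (new))
  #endif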

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Kees Cook &lt;keescook@chromium.org&gt;
Link: https://lore.kernel.org/r/20230605070124.3741859-5-mark.rutland@arm.com
</content>
</entry>
<entry>
<title>MIPS: retire "asm/llsc.h"</title>
<updated>2022-01-05T10:16:35+00:00</updated>
<author>
<name>Huang Pei</name>
<email>huangpei@loongson.cn</email>
</author>
<published>2021-12-15T08:45:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=f0b7ddbd794bdffade370f22bb7a774002208ef4'/>
<id>urn:sha1:f0b7ddbd794bdffade370f22bb7a774002208ef4</id>
<content type='text'>
all that "asm/llsc.h" does is just to help inline asm, which can be
stringifyed from "asm/asm.h"

+. Since "asm/asm.h" has all we need, retire "asm/llsc.h"

+. Remove the now-unused header file
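
A minimal sketch of the stringification this relies on (the macro below
is a hypothetical stand-in; the real instruction-name macros come from
"asm/asm.h"):

  #include &lt;linux/stringify.h&gt;

  /* Hypothetical stand-in for an instruction-name macro from asm/asm.h. */
  #define MY_LONG_LL   lld

  /* __stringify() turns the bare token into the literal string an
   * inline-asm template is built from, so no pre-stringified copies
   * (as "asm/llsc.h" carried) are needed. */
  static const char ll_insn[] = __stringify(MY_LONG_LL) "\t%0, %1\n";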

Inspired-by:   Maciej W. Rozycki &lt;macro@orcam.me.uk&gt;
Signed-off-by: Huang Pei &lt;huangpei@loongson.cn&gt;
Signed-off-by: Thomas Bogendoerfer &lt;tsbogend@alpha.franken.de&gt;
</content>
</entry>
<entry>
<title>MIPS: locking/atomic: Fix atomic{,64}_sub_if_positive</title>
<updated>2021-08-05T09:01:18+00:00</updated>
<author>
<name>Rui Wang</name>
<email>wangrui@loongson.cn</email>
</author>
<published>2021-07-29T09:31:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=cb95ea79b3fc772c5873a7a4532ab4c14a455da2'/>
<id>urn:sha1:cb95ea79b3fc772c5873a7a4532ab4c14a455da2</id>
<content type='text'>
This looks like a typo, and it caused the atomic64 test to fail.

Signed-off-by: Rui Wang &lt;wangrui@loongson.cn&gt;
Signed-off-by: Thomas Bogendoerfer &lt;tsbogend@alpha.franken.de&gt;
</content>
</entry>
<entry>
<title>locking/atomic: mips: move to ARCH_ATOMIC</title>
<updated>2021-05-26T11:20:51+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2021-05-25T14:02:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=c7b5fd6faa1dc6cdc721a978d9d122cd31bbd7b1'/>
<id>urn:sha1:c7b5fd6faa1dc6cdc721a978d9d122cd31bbd7b1</id>
<content type='text'>
We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates mips to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
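
As a sketch of what the common-code wrapping looks like (illustrative
only; the real wrappers are generated, and atomic_add_return() below is
just one example):

  #include &lt;linux/instrumented.h&gt;

  /* The regular API adds the optional instrumentation and then forwards
   * to the architecture's arch_ implementation. */
  static __always_inline int atomic_add_return(int i, atomic_t *v)
  {
          instrument_atomic_read_write(v, sizeof(*v));
          return arch_atomic_add_return(i, v);
  }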

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Bogendoerfer &lt;tsbogend@alpha.franken.de&gt;
Cc: Will Deacon &lt;will@kernel.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://lore.kernel.org/r/20210525140232.53872-23-mark.rutland@arm.com
</content>
</entry>
<entry>
<title>MIPS: Compare __SYNC_loongson3_war against 0</title>
<updated>2021-01-15T14:32:47+00:00</updated>
<author>
<name>Nathan Chancellor</name>
<email>natechancellor@gmail.com</email>
</author>
<published>2021-01-14T17:34:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=8790ccf8daf1a8c53b6cb8ce0c9a109274bd3fa8'/>
<id>urn:sha1:8790ccf8daf1a8c53b6cb8ce0c9a109274bd3fa8</id>
<content type='text'>
When building with clang when CONFIG_CPU_LOONGSON3_WORKAROUNDS is
enabled:

 In file included from lib/errseq.c:4:
 In file included from ./include/linux/atomic.h:7:
 ./arch/mips/include/asm/atomic.h:52:1: warning: converting the result of
 '&lt;&lt;' to a boolean always evaluates to true
 [-Wtautological-constant-compare]
 ATOMIC_OPS(atomic64, s64)
 ^
 ./arch/mips/include/asm/atomic.h:40:9: note: expanded from macro
 'ATOMIC_OPS'
         return cmpxchg(&amp;v-&gt;counter, o, n);
                ^
 ./arch/mips/include/asm/cmpxchg.h:194:7: note: expanded from macro
 'cmpxchg'
         if (!__SYNC_loongson3_war)
              ^
 ./arch/mips/include/asm/sync.h:147:34: note: expanded from macro
 '__SYNC_loongson3_war'
 # define __SYNC_loongson3_war   (1 &lt;&lt; 31)
                                    ^

While it is true that the result of this shift always evaluates to true
in a boolean context, that is not a problem here. The warning is
nevertheless noisy, so rather than letting the shift be converted to a
boolean implicitly, use it in an equality comparison so that its result
is treated as an integer value.
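
A condensed sketch of the two forms (the macro value is the one from
asm/sync.h quoted in the warning above; the helper function is purely
illustrative):

  #define __SYNC_loongson3_war   (1 &lt;&lt; 31)

  static inline int war_compare_sketch(void)
  {
          int hits = 0;

          if (!__SYNC_loongson3_war)        /* before: implicit bool, clang warns */
                  hits++;
          if (__SYNC_loongson3_war == 0)    /* after: integer comparison, quiet */
                  hits++;

          return hits;
  }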

Fixes: 4d1dbfe6cbec ("MIPS: atomic: Emit Loongson3 sync workarounds within asm")
Fixes: a91f2a1dba44 ("MIPS: cmpxchg: Omit redundant barriers for Loongson3")
Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Signed-off-by: Nathan Chancellor &lt;natechancellor@gmail.com&gt;
Acked-by: Nick Desaulniers &lt;ndesaulniers@google.com&gt;
Signed-off-by: Thomas Bogendoerfer &lt;tsbogend@alpha.franken.de&gt;
</content>
</entry>
<entry>
<title>locking/atomic: Move ATOMIC_INIT into linux/types.h</title>
<updated>2020-07-29T14:14:18+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2020-07-29T12:31:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=7ca8cf5347f720b07a0b32a924b768f5710547e7'/>
<id>urn:sha1:7ca8cf5347f720b07a0b32a924b768f5710547e7</id>
<content type='text'>
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h.
This allows users of atomic_t to use ATOMIC_INIT without having to
include atomic.h, since doing so may lead to header loops.
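
For example, a header can now do the following without pulling in
linux/atomic.h (the variable name is hypothetical):

  #include &lt;linux/types.h&gt;   /* provides atomic_t and, now, ATOMIC_INIT() */

  static atomic_t nr_users = ATOMIC_INIT(0);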

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Waiman Long &lt;longman@redhat.com&gt;
Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
</content>
</entry>
<entry>
<title>MIPS: atomic: Deduplicate 32b &amp; 64b read, set, xchg, cmpxchg</title>
<updated>2019-10-07T16:42:36+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=1da7bce8591d58bf2a442b0324659af7390401c2'/>
<id>urn:sha1:1da7bce8591d58bf2a442b0324659af7390401c2</id>
<content type='text'>
Remove the remaining duplication between 32b &amp; 64b in asm/atomic.h by
making use of an ATOMIC_OPS() macro to generate:

  - atomic_read()/atomic64_read()
  - atomic_set()/atomic64_set()
  - atomic_cmpxchg()/atomic64_cmpxchg()
  - atomic_xchg()/atomic64_xchg()

This is consistent with the way all other functions in asm/atomic.h are
generated, and ensures consistency between the 32b &amp; 64b functions.

Of note is that this results in the above now being static inline
functions rather than macros.
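
A simplified sketch of the generator pattern (illustrative; the real
ATOMIC_OPS() also generates the cmpxchg()/xchg() wrappers and its casts
differ):

  #define ATOMIC_OPS(pfx, type)                                        \
  static __always_inline type pfx##_read(const pfx##_t *v)            \
  {                                                                    \
          return READ_ONCE(v-&gt;counter);                             \
  }                                                                    \
                                                                       \
  static __always_inline void pfx##_set(pfx##_t *v, type i)           \
  {                                                                    \
          WRITE_ONCE(v-&gt;counter, i);                                \
  }

  ATOMIC_OPS(atomic, int)
  #ifdef CONFIG_64BIT
  ATOMIC_OPS(atomic64, s64)
  #endif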

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Unify 32b &amp; 64b sub_if_positive</title>
<updated>2019-10-07T16:42:34+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=40e784b4d4bc31dee5f1db6a20287777d3aaa4dc'/>
<id>urn:sha1:40e784b4d4bc31dee5f1db6a20287777d3aaa4dc</id>
<content type='text'>
Unify the definitions of atomic_sub_if_positive() &amp;
atomic64_sub_if_positive() using a macro like we do for most other
atomic functions. This allows us to share the implementation ensuring
consistency between the two. Notably this provides the appropriate
loongson3_war barriers in the atomic64_sub_if_positive() case, which were
previously missing.

The code is rearranged a little to handle the !kernel_uses_llsc case
first, which de-indents the LL/SC case &amp; keeps lines within 80
characters.

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Use _atomic barriers in atomic_sub_if_positive()</title>
<updated>2019-10-07T16:42:32+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=77d281b7966e476927a45c5fb272d720aa75bb95'/>
<id>urn:sha1:77d281b7966e476927a45c5fb272d720aa75bb95</id>
<content type='text'>
Use smp_mb__before_atomic() &amp; smp_mb__after_atomic() in
atomic_sub_if_positive() rather than the equivalent
smp_mb__before_llsc() &amp; smp_llsc_mb(). The former are more standard &amp;
this preps us for avoiding redundant barriers on Loongson3 in a later
patch.
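
Purely for illustration, the barrier pairing reads like this when
expressed against the generic atomic API (a sketch with a hypothetical
helper; the actual MIPS implementation is LL/SC inline asm):

  #include &lt;linux/atomic.h&gt;

  static inline int sub_if_positive_sketch(int i, atomic_t *v)
  {
          int old, new;

          smp_mb__before_atomic();
          do {
                  old = atomic_read(v);
                  new = old - i;
                  if (new &lt; 0)
                          break;
          } while (atomic_cmpxchg_relaxed(v, old, new) != old);
          smp_mb__after_atomic();

          return new;
  }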

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Emit Loongson3 sync workarounds within asm</title>
<updated>2019-10-07T16:42:31+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=4d1dbfe6cbec34c6398a480c0572bba794e89e11'/>
<id>urn:sha1:4d1dbfe6cbec34c6398a480c0572bba794e89e11</id>
<content type='text'>
Generate the sync instructions required to work around the Loongson3
LL/SC errata within inline asm blocks, which feels a little safer than
doing it from C, where strictly speaking the compiler would be well
within its
rights to insert a memory access between the separate asm statements we
previously had, containing sync &amp; ll instructions respectively.

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
</feed>
