<feed xmlns='http://www.w3.org/2005/Atom'>
<title>BMC/Intel-BMC/linux.git/arch/powerpc/lib, branch dev-4.7</title>
<subtitle>Intel OpenBMC Linux kernel source tree (mirror)</subtitle>
<id>https://git.radix-linux.su/BMC/Intel-BMC/linux.git/atom?h=dev-4.7</id>
<link rel='self' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/atom?h=dev-4.7'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/'/>
<updated>2016-09-24T08:09:37+00:00</updated>
<entry>
<title>powerpc/32: Fix again csum_partial_copy_generic()</title>
<updated>2016-09-24T08:09:37+00:00</updated>
<author>
<name>Christophe Leroy</name>
<email>christophe.leroy@c-s.fr</email>
</author>
<published>2016-08-26T14:45:13+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=63cbe6ea3b9dfaf4761cca70fda448b974ede463'/>
<id>urn:sha1:63cbe6ea3b9dfaf4761cca70fda448b974ede463</id>
<content type='text'>
commit 8540571e01f973d321b0821f4f32ed6e9ae8263c upstream.

Commit 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic()
based on copy_tofrom_user()") introduced a bug when the destination
address is odd and len is smaller than the cacheline size.

In that case the resulting csum value must not be rotated one byte,
because the cache-aligned copy part is skipped and no realignment is
performed.
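
A minimal C sketch of the intended behaviour (illustrative only; the
real code is powerpc assembly and these names are hypothetical):

	static inline u32 csum_rotate_byte(u32 sum)
	{
		return (sum &lt;&lt; 8) | (sum &gt;&gt; 24);
	}

	u32 csum_fixup(u32 sum, bool dst_odd, bool did_aligned_copy)
	{
		/* Rotate only when the cache-aligned copy path actually
		 * realigned the destination; when len is below the
		 * cacheline size that path is skipped, so the sum must
		 * be returned unrotated. */
		if (dst_odd &amp;&amp; did_aligned_copy)
			sum = csum_rotate_byte(sum);
		return sum;
	}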

Fixes: 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic() based on copy_tofrom_user()")
Cc: stable@vger.kernel.org # v4.6+
Reported-by: Alessio Igor Bogani &lt;alessio.bogani@elettra.eu&gt;
Signed-off-by: Christophe Leroy &lt;christophe.leroy@c-s.fr&gt;
Tested-by: Alessio Igor Bogani &lt;alessio.bogani@elettra.eu&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>powerpc/32: Fix csum_partial_copy_generic()</title>
<updated>2016-09-24T08:09:37+00:00</updated>
<author>
<name>Christophe Leroy</name>
<email>christophe.leroy@c-s.fr</email>
</author>
<published>2016-08-02T08:07:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=13c2ad632e69f96e0ab33439745b084d93319522'/>
<id>urn:sha1:13c2ad632e69f96e0ab33439745b084d93319522</id>
<content type='text'>
commit 1bc8b816cb8058c31f61fe78442f10a43209e582 upstream.

Commit 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic()
based on copy_tofrom_user()") introduced a bug when the destination
address is odd and the initial csum is not null.

In that (rare) case the initial csum value has to be rotated one byte,
just as the resulting value is.

This patch also fixes the related comments.
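
Illustrative C model of the rotation (the real code is powerpc
assembly; the helper names here are made up, and carry folding is
omitted for brevity):

	static inline u32 rotl32(u32 x, int n)
	{
		return (x &lt;&lt; n) | (x &gt;&gt; (32 - n));
	}

	u32 csum_copy_odd_dst(u32 initial_csum, u32 shifted_body_sum)
	{
		/* With an odd destination the copied bytes land shifted
		 * by one position, so the body is summed in a "rotated"
		 * domain. The fix: rotate the initial csum into that
		 * domain as well... */
		u32 sum = rotl32(initial_csum, 8) + shifted_body_sum;
		/* ...and rotate the final value back out, as before. */
		return rotl32(sum, 24);
	}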

Fixes: 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic() based on copy_tofrom_user()")
Signed-off-by: Christophe Leroy &lt;christophe.leroy@c-s.fr&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>powerpc/sstep: Fix emulation fall-through</title>
<updated>2016-05-11T11:54:08+00:00</updated>
<author>
<name>Oliver O'Halloran</name>
<email>oohall@gmail.com</email>
</author>
<published>2016-02-16T06:31:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=66707836062db62756b395d1fb16ab02ff162862'/>
<id>urn:sha1:66707836062db62756b395d1fb16ab02ff162862</id>
<content type='text'>
There is a switch fall-through in analyse_instr() which can cause an
invalid instruction to be emulated as a different, valid, instruction.
The rld* (opcode 30) case extracts a sub-opcode from bits 3:1 of the
instruction word. However, the only valid values of this field are 000
and 001. Those cases are handled correctly, but the others are not,
which causes execution to fall through into case 31.

Breaking out of the switch causes the instruction to be marked as
unknown and allows the caller to deal with the invalid instruction in a
manner consistent with other invalid instructions.
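
An illustrative sketch of the fixed decode path (the names are
hypothetical and the field position is taken from the description
above, not from the sstep.c source):

	static int emulate_rld(u32 instr);	/* hypothetical */

	static int analyse_opcode30(u32 instr)
	{
		switch ((instr &gt;&gt; 1) &amp; 0x7) {	/* sub-opcode, bits 3:1 */
		case 0:				/* 000: valid */
		case 1:				/* 001: valid */
			return emulate_rld(instr);
		default:
			break;			/* the fix: break out instead
						 * of falling into case 31 */
		}
		return -1;			/* unknown; caller rejects it */
	}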

Signed-off-by: Oliver O'Halloran &lt;oohall@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
</content>
</entry>
<entry>
<title>powerpc/sstep: Fix sstep.c compile on powerpcspe</title>
<updated>2016-05-11T11:54:07+00:00</updated>
<author>
<name>Lennart Sorensen</name>
<email>lsorense@csclub.uwaterloo.ca</email>
</author>
<published>2016-05-05T20:44:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=dd21731022faf43c1250050e5d28d11add599149'/>
<id>urn:sha1:dd21731022faf43c1250050e5d28d11add599149</id>
<content type='text'>
Commit be96f63375a1 ("powerpc: Split out instruction analysis part of
emulate_step()") introduced ldarx and stdcx into the instructions in
sstep.c. These are not accepted by the assembler on powerpcspe, but do
seem to be accepted by the normal powerpc assembler, even in 32-bit
mode.

Wrap these two instructions in a __powerpc64__ check, as is done
everywhere else in the file.
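
A sketch of the guard pattern (hypothetical helper; the real code
emits these mnemonics from emulation paths in sstep.c):

	#ifdef __powerpc64__
	/* ldarx is a 64-bit-only mnemonic: compile it out on 32-bit
	 * targets so the powerpcspe assembler never sees it */
	static int do_ldarx(unsigned long ea, unsigned long *val)
	{
		asm volatile("ldarx %0,0,%1" : "=r" (*val) : "r" (ea) : "memory");
		return 0;
	}
	#endif /* __powerpc64__ */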

Fixes: be96f63375a1 ("powerpc: Split out instruction analysis part of emulate_step()")
Signed-off-by: Len Sorensen &lt;lsorense@csclub.uwaterloo.ca&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
</content>
</entry>
<entry>
<title>powerpc: rework sparse for lib/xor_vmx.c</title>
<updated>2016-04-26T23:33:37+00:00</updated>
<author>
<name>Daniel Axtens</name>
<email>dja@axtens.net</email>
</author>
<published>2016-04-26T13:49:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=8fe088850f13eabf8197df5b0bf38668fa6c1294'/>
<id>urn:sha1:8fe088850f13eabf8197df5b0bf38668fa6c1294</id>
<content type='text'>
Sparse doesn't seem to be passing -maltivec around properly, leading
to lots of errors:

.../include/altivec.h:34:2: error: Use the "-maltivec" flag to enable PowerPC AltiVec support
arch/powerpc/lib/xor_vmx.c:27:16: error: Expected ; at end of declaration
arch/powerpc/lib/xor_vmx.c:27:16: error: got signed
arch/powerpc/lib/xor_vmx.c:60:9: error: No right hand side of '*'-expression
arch/powerpc/lib/xor_vmx.c:60:9: error: Expected ; at end of statement
arch/powerpc/lib/xor_vmx.c:60:9: error: got v1_in
...
arch/powerpc/lib/xor_vmx.c:87:9: error: too many errors

Only include the altivec.h header for non-__CHECKER__ builds.
For builds with __CHECKER__, make up some stubs instead, as
suggested by Balbir. (The vector size of 16 is arbitrary.)
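
One plausible shape for such stubs (illustrative, not necessarily the
exact committed hunk):

	#ifdef __CHECKER__
	/* sparse chokes on altivec.h: fake a 16-byte vector type
	 * instead (the size is arbitrary, as noted above) */
	typedef int __attribute__((vector_size(16))) unative_t;
	#else
	#include &lt;altivec.h&gt;
	typedef vector signed int unative_t;
	#endif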

Suggested-by: Balbir Singh &lt;bsingharora@gmail.com&gt;
Signed-off-by: Daniel Axtens &lt;dja@axtens.net&gt;
Tested-by: Balbir Singh &lt;bsingharora@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
</content>
</entry>
<entry>
<title>powerpc: Make generic_memcpy() private to copy_32.S</title>
<updated>2016-04-11T10:30:41+00:00</updated>
<author>
<name>Michael Ellerman</name>
<email>mpe@ellerman.id.au</email>
</author>
<published>2016-03-16T10:36:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=b4c6afdc3a1ad73542c8e92220c30b8f953c51de'/>
<id>urn:sha1:b4c6afdc3a1ad73542c8e92220c30b8f953c51de</id>
<content type='text'>
generic_memcpy() is only called from copy_32.S, so there's no reason for
it to be global.

Reported-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
</content>
</entry>
<entry>
<title>Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux into next</title>
<updated>2016-03-14T09:05:14+00:00</updated>
<author>
<name>Michael Ellerman</name>
<email>mpe@ellerman.id.au</email>
</author>
<published>2016-03-14T09:05:14+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=a1b5344620a3e6291afaf7542714ba9c391ef1c7'/>
<id>urn:sha1:a1b5344620a3e6291afaf7542714ba9c391ef1c7</id>
<content type='text'>
Freescale updates from Scott:

"Highlights include 8xx optimizations, 32-bit checksum optimizations,
86xx consolidation, e5500/e6500 cpu hotplug, more fman and other dt
bits, and minor fixes/cleanup."
</content>
</entry>
<entry>
<title>powerpc: optimise csum_partial() call when len is constant</title>
<updated>2016-03-09T16:44:18+00:00</updated>
<author>
<name>Christophe Leroy</name>
<email>christophe.leroy@c-s.fr</email>
</author>
<published>2016-03-07T17:44:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=7e393220b6e1ecfa5520d1b2ca31150b7588f458'/>
<id>urn:sha1:7e393220b6e1ecfa5520d1b2ca31150b7588f458</id>
<content type='text'>
csum_partial() is often called for small fixed-length packets, for
which it is suboptimal to use the generic csum_partial() function.

For instance, in my configuration, I got:
* One place calling it with constant len 4
* Seven places calling it with constant len 8
* Three places calling it with constant len 14
* One place calling it with constant len 20
* One place calling it with constant len 24
* One place calling it with constant len 32

This patch renames csum_partial() to __csum_partial() and implements
csum_partial() as an inline wrapper function which:
* uses csum_add() for small constant lengths that are a multiple of
  16 bits
* uses ip_fast_csum() for other constant lengths that are a multiple
  of 32 bits
* uses __csum_partial() in all other cases (see the sketch below)
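
A simplified model of the dispatch (ip_fast_csum_sum() stands in for
an unfolded variant of ip_fast_csum(); handling of odd 16-bit tail
words is elided):

	static inline __wsum csum_partial(const void *buff, int len,
					  __wsum sum)
	{
		if (__builtin_constant_p(len) &amp;&amp;
		    (len &amp; 1) == 0 &amp;&amp; len &lt;= 16) {
			/* small even constant length: a short chain of
			 * csum_add() calls the compiler fully unrolls */
			const u16 *p = buff;
			int i;

			for (i = 0; i &lt; len / 2; i++)
				sum = csum_add(sum, (__force __wsum)(u32)p[i]);
			return sum;
		}
		if (__builtin_constant_p(len) &amp;&amp; (len &amp; 3) == 0)
			/* other 32-bit multiple constant lengths */
			return ip_fast_csum_sum(buff, len &gt;&gt; 2);
		/* everything else: the renamed generic routine */
		return __csum_partial(buff, len, sum);
	}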

Signed-off-by: Christophe Leroy &lt;christophe.leroy@c-s.fr&gt;
Signed-off-by: Scott Wood &lt;oss@buserror.net&gt;
</content>
</entry>
<entry>
<title>powerpc/ftrace: Use $(CC_FLAGS_FTRACE) when disabling ftrace</title>
<updated>2016-03-07T03:53:55+00:00</updated>
<author>
<name>Torsten Duwe</name>
<email>duwe@lst.de</email>
</author>
<published>2016-03-03T04:26:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=9a7841ae8d6ce9b7a7cf879c9968fcf4c9545563'/>
<id>urn:sha1:9a7841ae8d6ce9b7a7cf879c9968fcf4c9545563</id>
<content type='text'>
Rather than open-coding -pg wherever we want to disable ftrace, use
the existing $(CC_FLAGS_FTRACE) variable.

This has the advantage that it will work in future when we use a
different set of flags to enable ftrace.

Signed-off-by: Torsten Duwe &lt;duwe@suse.de&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
</content>
</entry>
<entry>
<title>powerpc32: optimise csum_partial() loop</title>
<updated>2016-03-05T05:03:45+00:00</updated>
<author>
<name>Christophe Leroy</name>
<email>christophe.leroy@c-s.fr</email>
</author>
<published>2015-09-22T14:34:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/BMC/Intel-BMC/linux.git/commit/?id=f867d556dd8525fe6ff0d22a34249528e590f994'/>
<id>urn:sha1:f867d556dd8525fe6ff0d22a34249528e590f994</id>
<content type='text'>
On the 8xx, load latency is 2 cycles and taking a branch also costs
2 cycles, so let's unroll the loop (sketched in C below).

This patch improves csum_partial() speed by around 10% on both:
* 8xx (single issue processor with parallel execution)
* 83xx (superscalar 6xx processor with dual instruction fetch
and parallel execution)
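
Roughly, in C terms (the real loop is assembly using adde; this only
shows the shape of the unrolling):

	static u64 csum_words(const u32 *p, int nwords)
	{
		u64 sum = 0;
		int i;

		/* four additions per iteration: fewer taken branches,
		 * and loads can be scheduled ahead of their users to
		 * hide the 2-cycle load latency */
		for (i = 0; i + 4 &lt;= nwords; i += 4) {
			sum += p[i];
			sum += p[i + 1];
			sum += p[i + 2];
			sum += p[i + 3];
		}
		for (; i &lt; nwords; i++)
			sum += p[i];
		return sum;		/* caller folds the carries */
	}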

Signed-off-by: Christophe Leroy &lt;christophe.leroy@c-s.fr&gt;
Signed-off-by: Scott Wood &lt;oss@buserror.net&gt;
</content>
</entry>
</feed>
