author | Michael Ellerman <mpe@ellerman.id.au> | 2015-03-28 13:35:16 +0300
committer | Michael Ellerman <mpe@ellerman.id.au> | 2015-03-28 14:03:40 +0300
commit | 529d235a0e190ded1d21ccc80a73e625ebcad09b (patch)
tree | c807f7526f29b79a7ce0b233daf8a1c3030d8dd0 /arch/powerpc/include/asm/systbl.h
parent | c03e73740d24fbe990291cd9ac2d6ae0d95b975f (diff)
download | linux-529d235a0e190ded1d21ccc80a73e625ebcad09b.tar.xz
powerpc: Add a proper syscall for switching endianness
We currently have a "special" syscall for switching endianness. This is
syscall number 0x1ebe, which is handled explicitly in the 64-bit syscall
exception entry.
That has a few problems. Firstly, the syscall number is outside of the
usual range, which confuses various tools; strace, for example, doesn't
recognise the syscall at all.
Secondly, it's handled explicitly as a special case in the syscall
exception entry, which is complicated enough without it.
As a first step toward removing the special syscall, we need to add a
regular syscall that implements the same functionality.
The logic is simple: the syscall just toggles the MSR_LE bit in the
userspace MSR. This is the same as the special syscall, with the caveat
that the special syscall clobbers fewer registers. This version clobbers
r9-r12, XER, CTR, and CR0-1,5-7. (An illustrative sketch of the toggle
follows the sign-off below.)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
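The toggle described above is the heart of the change. The following is a minimal, illustrative sketch of such a handler, not the code added by this patch: the function name, the includes, and the omission of any register-restoration handling (the reason the clobber list above matters) are assumptions made here for brevity.

```c
/*
 * Illustrative sketch only -- not the handler added by this patch.
 * Core idea: flip MSR_LE in the task's saved userspace MSR, so the
 * process resumes in the opposite endianness once the syscall returns.
 */
#include <linux/sched.h>	/* current, struct thread_struct */
#include <asm/reg.h>		/* MSR_LE */

long sys_switch_endian(void)
{
	/* thread.regs points at the register state restored on exit. */
	current->thread.regs->msr ^= MSR_LE;

	return 0;
}
```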
Diffstat (limited to 'arch/powerpc/include/asm/systbl.h')
-rw-r--r-- | arch/powerpc/include/asm/systbl.h | 1
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 91062eef582f..f1863a138b4a 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -367,3 +367,4 @@ SYSCALL_SPU(getrandom)
 SYSCALL_SPU(memfd_create)
 SYSCALL_SPU(bpf)
 COMPAT_SYS(execveat)
+PPC64ONLY(switch_endian)
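For context on the one-line change itself: systbl.h is an X-macro style list, included by the table-building code after the entry macros (SYSCALL_SPU, COMPAT_SYS, PPC64ONLY, and friends) have been defined, so a single added PPC64ONLY(switch_endian) line is enough to emit a new table entry. The self-contained sketch below only illustrates that pattern; the list macro, the stub handlers, and the expansions are stand-ins, not the kernel's actual macros or table generation (on 32-bit kernels, for instance, a PPC64ONLY entry would point at sys_ni_syscall).

```c
/*
 * Hypothetical, simplified illustration of the X-macro pattern behind
 * systbl.h: define the entry macros, then expand the list so that each
 * line becomes one table slot. Not the kernel's real table generation.
 */
#include <stdio.h>

typedef long (*syscall_fn)(void);

/* Stand-in handlers. */
static long sys_memfd_create(void)  { return 1; }
static long sys_bpf(void)           { return 2; }
static long sys_execveat(void)      { return 3; }
static long sys_switch_endian(void) { return 4; }

/* Plays the role of the included systbl.h list. */
#define SYSCALL_LIST               \
	SYSCALL_SPU(memfd_create)  \
	SYSCALL_SPU(bpf)           \
	COMPAT_SYS(execveat)       \
	PPC64ONLY(switch_endian)

/* Entry macros: every variant expands to a function pointer here. */
#define SYSCALL_SPU(name) (syscall_fn)sys_##name,
#define COMPAT_SYS(name)  (syscall_fn)sys_##name,
#define PPC64ONLY(name)   (syscall_fn)sys_##name, /* sys_ni_syscall on 32-bit */

static const syscall_fn table[] = { SYSCALL_LIST };

int main(void)
{
	printf("%zu entries; last returns %ld\n",
	       sizeof(table) / sizeof(table[0]), table[3]());
	return 0;
}
```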