|
Add the ARCH_HAS_DEBUG_VM_PGTABLE selection in Kconfig, in order to make
the corresponding VM debug features usable on LoongArch. Also update the
corresponding arch-support.txt document.
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
RISC-V uses xRET instructions on return from interrupt and to go back
to user-space; the xRET instruction is not core serializing.
Use FENCE.I to provide core serialization as follows:
- by calling sync_core_before_usermode() on return from interrupt (cf.
ipi_sync_core()),
- via switch_mm() and sync_core_before_usermode() (respectively, for
uthread->uthread and kthread->uthread transitions) before returning
to user-space.
On RISC-V, the serialization in switch_mm() is activated by resetting
the icache_stale_mask of the mm at prepare_sync_core_cmd().
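Concretely, the activation amounts to something like the following (a sketch
based on the description above, not necessarily the exact definition):
        /*
         * Sketch: mark the icache as possibly stale on every CPU for this mm,
         * so the deferred FENCE.I path in switch_mm() provides the required
         * core serialization before the next return to user-space.
         */
        static inline void prepare_sync_core_cmd(struct mm_struct *mm)
        {
                cpumask_setall(&mm->context.icache_stale_mask);
        }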
Suggested-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/20240131144936.29190-5-parri.andrea@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
|
Allow deferring the flushing of the TLB when unmapping pages, which reduces
the number of IPIs and the number of sfence.vma instructions.
The microbenchmark used in commit 43b3dfdd0455 ("arm64: support
batched/deferred tlb shootdown during page reclamation/migration"), made
multithreaded to force the usage of IPIs, shows a good performance
improvement on all platforms:
* Unmatched: ~34%
* TH1520 : ~78%
* Qemu : ~81%
In addition, perf on qemu reports a significant decrease in the time spent
dealing with IPIs:
Before: 68.17% main [kernel.kallsyms] [k] __sbi_rfence_v02_call
After : 8.64% main [kernel.kallsyms] [k] __sbi_rfence_v02_call
* Benchmark:
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Note: SIZE is not defined in the original posting; 1 MiB matches the
   single-threaded benchmark this one is derived from. */
#define SIZE (1 * 1024 * 1024)

int stick_this_thread_to_core(int core_id) {
    int num_cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (core_id < 0 || core_id >= num_cores)
        return EINVAL;

    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(core_id, &cpuset);

    pthread_t current_thread = pthread_self();
    return pthread_setaffinity_np(current_thread,
                                  sizeof(cpu_set_t), &cpuset);
}

static void *fn_thread(void *p_data)
{
    /* Pin each helper thread to its own core so that unmapping in the
       main thread forces cross-CPU TLB invalidations. */
    stick_this_thread_to_core((int)(long)p_data);

    while (1)
        sleep(1);

    return NULL;
}

int main()
{
    volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_t threads[4];
    int ret;

    for (int i = 0; i < 4; ++i) {
        ret = pthread_create(&threads[i], NULL, fn_thread, (void *)(long)i);
        if (ret)
            printf("%s", strerror(ret));
    }

    memset((void *)p, 0x88, SIZE);

    for (int k = 0; k < 10000; k++) {
        /* swap in */
        for (int i = 0; i < SIZE; i += 4096)
            (void)p[i];

        /* swap out */
        madvise((void *)p, SIZE, MADV_PAGEOUT);
    }

    for (int i = 0; i < 4; i++)
        pthread_cancel(threads[i]);

    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
Tested-by: Jisheng Zhang <jszhang@kernel.org> # Tested on TH1520
Tested-by: Nam Cao <namcao@linutronix.de>
Link: https://lore.kernel.org/r/20240108193640.344929-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
|
Itanium (IA64) is going away, so drop it from the kernel feature
documentation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson
Pull LoongArch updates from Huacai Chen:
- Allow usage of LSX/LASX in the kernel, and use them for
SIMD-optimized RAID5/RAID6 routines
- Add Loongson Binary Translation (LBT) extension support
- Add basic KGDB & KDB support
- Add building with kcov coverage
- Add KFENCE (Kernel Electric-Fence) support
- Add KASAN (Kernel Address Sanitizer) support
- Some bug fixes and other small changes
- Update the default config file
* tag 'loongarch-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson: (25 commits)
LoongArch: Update Loongson-3 default config file
LoongArch: Add KASAN (Kernel Address Sanitizer) support
LoongArch: Simplify the processing of jumping new kernel for KASLR
kasan: Add (pmd|pud)_init for LoongArch zero_(pud|p4d)_populate process
kasan: Add __HAVE_ARCH_SHADOW_MAP to support arch specific mapping
LoongArch: Add KFENCE (Kernel Electric-Fence) support
LoongArch: Get partial stack information when providing regs parameter
LoongArch: mm: Add page table mapped mode support for virt_to_page()
kfence: Defer the assignment of the local variable addr
LoongArch: Allow building with kcov coverage
LoongArch: Provide kaslr_offset() to get kernel offset
LoongArch: Add basic KGDB & KDB support
LoongArch: Add Loongson Binary Translation (LBT) extension support
raid6: Add LoongArch SIMD recovery implementation
raid6: Add LoongArch SIMD syndrome calculation
LoongArch: Add SIMD-optimized XOR routines
LoongArch: Allow usage of LSX/LASX in the kernel
LoongArch: Define symbol 'fault' as a local label in fpu.S
LoongArch: Adjust {copy, clear}_user exception handler behavior
LoongArch: Use static defined zero page rather than allocated
...
|
|
1/8 of the kernel address space is reserved for shadow memory. But for
LoongArch, there are a lot of holes between different segments, and the
valid address space (256T available) is insufficient to map all these
segments to KASAN shadow memory with the common formula provided by the
KASAN core, namely
(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
So LoongArch uses an arch-specific mapping formula: different segments are
mapped individually, and only limited lengths of these specific segments
are mapped to shadow.
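In C terms, the generic mapping that does not fit LoongArch is essentially
the following helper (a sketch of the common KASAN formula quoted above):
        /* Generic KASAN shadow mapping: every 2^KASAN_SHADOW_SCALE_SHIFT (8)
         * bytes of address space map to one byte of shadow, offset into the
         * reserved shadow region. */
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }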
At the early boot stage, the whole shadow region is populated with just one
physical page (kasan_early_shadow_page). Later, this page is reused as a
read-only zero shadow for memory that KASAN does not currently track.
After mapping the physical memory, pages for shadow memory are allocated
and mapped.
Functions like memset()/memcpy()/memmove() do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important that
it be caught. The compiler's instrumentation cannot do this since these
functions are written in assembly.
KASAN replaces these memory functions with manually instrumented variants.
The original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original functions
also have aliases with a '__' prefix in their names, so the
non-instrumented variants can be called if needed.
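As a minimal illustration of this weak-symbol/alias scheme (a sketch only;
the range-check helper name varies between kernel versions, and the
arch-side definitions live in assembly):
        /* Strong, instrumented definition overrides the weak arch memset(). */
        void *memset(void *addr, int c, size_t len)
        {
                /* report if [addr, addr + len) touches poisoned shadow */
                if (!kasan_check_range((unsigned long)addr, len, true, _RET_IP_))
                        return NULL;

                /* '__memset' is the alias for the non-instrumented variant */
                return __memset(addr, c, len);
        }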
Signed-off-by: Qing Zhang <zhangqing@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
Add ARCH_HAS_KCOV and HAVE_GCC_PLUGINS to the LoongArch Kconfig, and
also disable instrumentation of the vDSO.
Signed-off-by: Feiyang Chen <chenfeiyang@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
KGDB is intended to be used as a source-level debugger for the Linux
kernel. It is used along with gdb to debug a Linux kernel. GDB can be
used to "break in" to the kernel to inspect memory, variables and
registers, similar to the way an application developer would use GDB to
debug an application. KDB is a frontend to KGDB which is similar to GDB.
Currently, in addition to the generic KGDB features, the LoongArch KGDB
implements the following:
- Hardware breakpoints/watchpoints;
- Software single-step support for KDB.
Signed-off-by: Qing Zhang <zhangqing@loongson.cn> # Framework & CoreFeature
Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn> # BreakPoint & SingleStep
Signed-off-by: Hui Li <lihui@loongson.cn> # Some Minor Improvements
Signed-off-by: Randy Dunlap <rdunlap@infradead.org> # Some Build Error Fixes
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
Pull documentation updates from Jonathan Corbet:
"Documentation work keeps chugging along; this includes:
- Work from Carlos Bilbao to integrate rustdoc output into the
generated HTML documentation. This took some work to figure out how
to do it without slowing the docs build and without creating problems
for people who don't have Rust installed, but Carlos got there
- Move the loongarch and mips architecture documentation under
Documentation/arch/
- Some more maintainer documentation from Jakub
... plus the usual assortment of updates, translations, and fixes"
* tag 'docs-6.6' of git://git.lwn.net/linux: (56 commits)
Docu: genericirq.rst: fix irq-example
input: docs: pxrc: remove reference to phoenix-sim
Documentation: serial-console: Fix literal block marker
docs/mm: remove references to hmm_mirror ops and clean typos
docs/zh_CN: correct regi_chg(),regi_add() to region_chg(),region_add()
Documentation: Fix typos
Documentation/ABI: Fix typos
scripts: kernel-doc: fix macro handling in enums
scripts: kernel-doc: parse DEFINE_DMA_UNMAP_[ADDR|LEN]
Documentation: riscv: Update boot image header since EFI stub is supported
Documentation: riscv: Add early boot document
Documentation: arm: Add bootargs to the table of added DT parameters
docs: kernel-parameters: Refer to the correct bitmap function
doc: update params of memhp_default_state=
docs: Add book to process/kernel-docs.rst
docs: sparse: fix invalid link addresses
docs: vfs: clean up after the iterate() removal
docs: Add a section on surveys to the researcher guidelines
docs: move mips under arch
docs: move loongarch under arch
...
|
|
Fix typos in Documentation.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20230814212822.193684-4-helgaas@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
On x86, batched and deferred tlb shootdown has led to a 90% performance
increase in tlb shootdown. On arm64, the HW can do tlb shootdown without
a software IPI, but the synchronizing tlbi is still quite expensive.
Even running the simplest program which requires swapout can
demonstrate this:
#include <sys/types.h>
#include <unistd.h>
#include <sys/mman.h>
#include <string.h>
int main()
{
#define SIZE (1 * 1024 * 1024)
volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_ANONYMOUS, -1, 0);
memset(p, 0x88, SIZE);
for (int k = 0; k < 10000; k++) {
/* swap in */
for (int i = 0; i < SIZE; i += 4096) {
(void)p[i];
}
/* swap out */
madvise(p, SIZE, MADV_PAGEOUT);
}
}
Perf result on a Snapdragon 888 with 8 cores, using zRAM
as the swap block device:
~ # perf record taskset -c 4 ./a.out
[ perf record: Woken up 10 times to write data ]
[ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
~ # perf report
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 60K of event 'cycles'
# Event count (approx.): 35706225414
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
21.07% a.out [kernel.kallsyms] [k] _raw_spin_unlock_irq
8.23% a.out [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
6.67% a.out [kernel.kallsyms] [k] filemap_map_pages
6.16% a.out [kernel.kallsyms] [k] __zram_bvec_write
5.36% a.out [kernel.kallsyms] [k] ptep_clear_flush
3.71% a.out [kernel.kallsyms] [k] _raw_spin_lock
3.49% a.out [kernel.kallsyms] [k] memset64
1.63% a.out [kernel.kallsyms] [k] clear_page
1.42% a.out [kernel.kallsyms] [k] _raw_spin_unlock
1.26% a.out [kernel.kallsyms] [k] mod_zone_state.llvm.8525150236079521930
1.23% a.out [kernel.kallsyms] [k] xas_load
1.15% a.out [kernel.kallsyms] [k] zram_slot_lock
ptep_clear_flush() takes 5.36% of CPU time in this micro-benchmark,
swapping in/out a page mapped by only one process. If the page is mapped
by multiple processes, often more than 100 on a phone, the overhead would
be much higher, as we have to run the tlb flush 100 times for one single
page. Plus, the tlb flush overhead will increase with the number of CPU
cores due to the bad scalability of tlb shootdown in HW, so ARM64 servers
should expect much higher overhead.
Further perf annotate shows that 95% of the CPU time in ptep_clear_flush()
is actually spent in the final dsb() waiting for the completion of the tlb
flush. This provides a very good opportunity to leverage the existing
batched tlb infrastructure in the kernel. The minimal modification is to
only send an asynchronous tlbi in the first stage, and to send the dsb only
when we have to synchronize in the second stage.
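Roughly, the two stages look like this (an illustrative sketch built on the
arm64 primitives described above; the wrapper names are made up and this is
not the exact patch):
        /* stage 1: queue the broadcast invalidation, do not wait for it */
        static inline void tlb_batch_add(struct mm_struct *mm, unsigned long uaddr)
        {
                __flush_tlb_page_nosync(mm, uaddr);     /* async broadcast tlbi */
        }

        /* stage 2: one dsb(ish) waits for every queued invalidation at once */
        static inline void tlb_batch_sync(void)
        {
                dsb(ish);
        }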
With the simple micro-benchmark above, the elapsed time to finish the
program decreases by around 5%.
Typical elapsed time w/o patch:
~ # time taskset -c 4 ./a.out
0.21user 14.34system 0:14.69elapsed
w/ patch:
~ # time taskset -c 4 ./a.out
0.22user 13.45system 0:13.80elapsed
Also tested with the benchmark in the commit on a Kunpeng 920 arm64 server,
where an improvement of around 12.5% was observed with the command
`time ./swap_bench`.
            w/o          w/
real    0m13.460s    0m11.771s
user     0m0.248s     0m0.279s
sys     0m12.039s    0m11.458s
Originally, a 16.99% overhead from ptep_clear_flush() was observed, which
is eliminated by this patch:
[root@localhost yang]# perf record -- ./swap_bench && perf report
[...]
16.99% swap_bench [kernel.kallsyms] [k] ptep_clear_flush
It was tested on 4-, 8- and 128-CPU platforms and shown to be beneficial
on large systems, but it may not bring an improvement on small systems
such as a 4-CPU platform.
This patch also improves the performance of page migration. Using pmbench
and migrating its pages between node 0 and node 1 100 times over 1G of
memory, this patch decreases the time used by around 20% (18.338318910 sec
before, 13.981866350 sec after) by saving the time spent in
ptep_clear_flush().
Link: https://lkml.kernel.org/r/20230717131004.12662-5-yangyicong@huawei.com
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
Tested-by: Xin Hao <xhao@linux.alibaba.com>
Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Nadav Amit <namit@vmware.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Barry Song <baohua@kernel.org>
Cc: Darren Hart <darren@os.amperecomputing.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: lipeifeng <lipeifeng@oppo.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Zeng Tao <prime.zeng@hisilicon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Run the refresh script [1] to document the recent feature additions.
[1] Documentation/features/scripts/features-refresh.sh
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/1689060720-4628-3-git-send-email-yangtiezhu@loongson.cn
|
|
ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT selects ARCH_HAS_ELF_RANDOMIZE,
so add ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT as another Kconfig check
for the ELF-ASLR feature; then the refresh script can be used to handle
this case for all archs.
Co-developed-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/1689060720-4628-2-git-send-email-yangtiezhu@loongson.cn
|
|
Add support for jump labels based on the ARM64 version.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
DEBUG_KMEMLEAK has depended on HAVE_DEBUG_KMEMLEAK since
commit b69ec42b1b19 ("Kconfig: clean up the long arch list for the
DEBUG_KMEMLEAK config option"), so just select HAVE_DEBUG_KMEMLEAK to
support kmemleak on LoongArch.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
|
|
s390 trivially supports the ARCH_HAS_MEMBARRIER_SYNC_CORE requirements,
since the lpswe(y) instruction used to return from any kernel context to
user space performs CPU serialization. This is very similar to arm, arm64
and powerpc.
See commit 70216e18e519 ("membarrier: Provide core serializing command,
*_SYNC_CORE") for further details.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Add a secure_computing() call to syscall_trace_enter() to actually
filter system calls.
Add the necessary arch Kconfig options, define the TIF_SECCOMP trace
flag, and provide basic seccomp filter support in asm/syscall.h.
syscall_get_nr() currently uses the syscall nr stored in orig_d0
because we change d0 to a default return code before starting a
syscall trace. This may be inconsistent with syscall_rollback()
copying orig_d0 to d0 (which we never check upon return from
trace). We use d0 for the return code from syscall_trace_enter
in entry.S currently, and could perhaps expand that to store
a new syscall number returned by the seccomp filter before
executing the syscall. This clearly needs some discussion.
seccomp_bpf self test on ARAnyM passes 81 out of 94 tests.
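For reference, the hook placement amounts to something like the following
(a rough sketch assuming a C syscall_trace_enter(); the exact m68k
signature and surrounding tracing calls may differ):
        asmlinkage int syscall_trace_enter(void)
        {
                /* ... existing syscall tracing hooks ... */

                /* seccomp: a -1 return tells the entry code to skip the syscall */
                if (secure_computing() == -1)
                        return -1;

                return 0;
        }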
Signed-off-by: Michael Schmitz <schmitzmic@gmail.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/r/20230112035529.13521-3-schmitzmic@gmail.com
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V updates from Palmer Dabbelt:
- Support for the T-Head PMU via the perf subsystem
- ftrace support for rv32
- Support for non-volatile memory devices
- Various fixes and cleanups
* tag 'riscv-for-linus-6.2-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (52 commits)
Documentation: RISC-V: patch-acceptance: s/implementor/implementer
Documentation: RISC-V: Mention the UEFI Standards
Documentation: RISC-V: Allow patches for non-standard behavior
Documentation: RISC-V: Fix a typo in patch-acceptance
riscv: Fixup compile error with !MMU
riscv: Fix P4D_SHIFT definition for 3-level page table mode
riscv: Apply a static assert to riscv_isa_ext_id
RISC-V: Add some comments about the shadow and overflow stacks
RISC-V: Align the shadow stack
RISC-V: Ensure Zicbom has a valid block size
RISC-V: Introduce riscv_isa_extension_check
RISC-V: Improve use of isa2hwcap[]
riscv: Don't duplicate _ALTERNATIVE_CFG* macros
riscv: alternatives: Drop the underscores from the assembly macro names
riscv: alternatives: Don't name unused macro parameters
riscv: Don't duplicate __ALTERNATIVE_CFG in __ALTERNATIVE_CFG_2
riscv: mm: call best_map_size many times during linear-mapping
riscv: Move cast inside kernel_mapping_[pv]a_to_[vp]a
riscv: Fix crash during early errata patching
riscv: boot: add zstd support
...
|
|
The official arch name is LoongArch [1], so we should use the lower-case
'loongarch' instead of 'loong' in Documentation/features; just use
features-refresh.sh to refresh all the related files.
[1] https://www.kernel.org/doc/html/latest/loongarch/index.html
Fixes: 5860800e8696 ("Documentation/features: Update the arch support status files")
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Link: https://lore.kernel.org/r/1670156327-9631-3-git-send-email-yangtiezhu@loongson.cn
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
The sed in features-refresh.sh should only strip the leading "arch" of
ARCH_DIR; otherwise loongarch is recognized as loong, which is not what
we want.
Fixes: be99f610a110 ("Documentation/features: Add script that refreshes the arch support status files in place")
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Link: https://lore.kernel.org/r/1670156327-9631-2-git-send-email-yangtiezhu@loongson.cn
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Run the refresh script to document the recent feature additions
on loong, um and csky as of v6.1-rc7.
Signed-off-by: Wei Li <liwei391@huawei.com>
Link: https://lore.kernel.org/r/20221203093750.4145802-1-liwei391@huawei.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
This sets the HAVE_ARCH_HUGE_VMAP option, and defines the required page
table functions. With this feature, ioremap area will be mapped with
huge page granularity according to its actual size. This feature can be
disabled by kernel parameter "nohugeiomap".
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Björn Töpel <bjorn@kernel.org>
Tested-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/r/20221012120038.1034354-2-liushixin2@huawei.com
[Palmer: minor formatting]
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
|
Pull xtensa updates from Max Filippov:
- support KCOV
- enable ARCH_HAS_GCOV_PROFILE_ALL
- minor ISS network driver cleanups
* tag 'xtensa-20220804' of https://github.com/jcmvbkbc/linux-xtensa:
xtensa: enable ARCH_HAS_GCOV_PROFILE_ALL
xtensa: enable KCOV support
xtensa: iss: fix handling error cases in iss_net_configure()
xtensa: iss/network: provide release() callback
xtensa: iss/network: drop 'devices' list
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
Pull RCU updates from Paul McKenney:
- Documentation updates
- Miscellaneous fixes
- Callback-offload updates, perhaps most notably a new
RCU_NOCB_CPU_DEFAULT_ALL Kconfig option that causes all CPUs to be
offloaded at boot time, regardless of kernel boot parameters.
This is useful for battery-powered systems such as ChromeOS and
Android. In addition, a new RCU_NOCB_CPU_CB_BOOST kernel boot
parameter prevents offloaded callbacks from interfering with
real-time workloads and with energy-efficiency mechanisms
- Polled grace-period updates, perhaps most notably making these APIs
account for both normal and expedited grace periods
- Tasks RCU updates, perhaps most notably reducing the CPU overhead of
RCU tasks trace grace periods by more than a factor of two on a
system with 15,000 tasks.
The reduction is expected to increase with the number of tasks, so it
seems reasonable to hypothesize that a system with 150,000 tasks
might see a 20-fold reduction in CPU overhead
- Torture-test updates
- Updates that merge RCU's dyntick-idle tracking into context tracking,
thus reducing the overhead of transitioning to kernel mode from
either idle or nohz_full userspace execution for kernels that track
context independently of RCU.
This is expected to be helpful primarily for kernels built with
CONFIG_NO_HZ_FULL=y
* tag 'rcu.2022.07.26a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (98 commits)
rcu: Add irqs-disabled indicator to expedited RCU CPU stall warnings
rcu: Diagnose extended sync_rcu_do_polled_gp() loops
rcu: Put panic_on_rcu_stall() after expedited RCU CPU stall warnings
rcutorture: Test polled expedited grace-period primitives
rcu: Add polled expedited grace-period primitives
rcutorture: Verify that polled GP API sees synchronous grace periods
rcu: Make Tiny RCU grace periods visible to polled APIs
rcu: Make polled grace-period API account for expedited grace periods
rcu: Switch polled grace-period APIs to ->gp_seq_polled
rcu/nocb: Avoid polling when my_rdp->nocb_head_rdp list is empty
rcu/nocb: Add option to opt rcuo kthreads out of RT priority
rcu: Add nocb_cb_kthread check to rcu_is_callbacks_kthread()
rcu/nocb: Add an option to offload all CPUs on boot
rcu/nocb: Fix NOCB kthreads spawn failure with rcu_nocb_rdp_deoffload() direct call
rcu/nocb: Invert rcu_state.barrier_mutex VS hotplug lock locking order
rcu/nocb: Add/del rdp to iterate from rcuog itself
rcu/tree: Add comment to describe GP-done condition in fqs loop
rcu: Initialize first_gp_fqs at declaration in rcu_gp_fqs()
rcu/kvfree: Remove useless monitor_todo flag
rcu: Cleanup RCU urgency state for offline CPU
...
|
|
Select ARCH_HAS_GCOV_PROFILE_ALL and set GCOV_PROFILE = n inside
arch/xtensa/boot/lib.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
Select ARCH_HAS_KCOV and set KCOV_INSTRUMENT = n inside
arch/xtensa/boot/lib.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
Context tracking is going to be used not only to track user transitions
but also idle/IRQs/NMIs. The user tracking part will then become a
separate feature. Prepare Kconfig for that.
[ frederic: Apply Max Filippov feedback. ]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicolas Saenz Julienne <nsaenz@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Cc: Yu Liao <liaoyu15@huawei.com>
Cc: Phil Auld <pauld@redhat.com>
Cc: Paul Gortmaker<paul.gortmaker@windriver.com>
Cc: Alex Belits <abelits@marvell.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
|
|
With the ioremap_prot() definition from generic ioremap, also move
pte_pgprot() from hugetlbpage.c into pgtable.h; then arm64 can have
HAVE_IOREMAP_PROT, which enables the generic_access_phys() code. This
is useful for debugging, e.g. with gdb.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-7-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The arch support status files don't match reality as of v5.19-rc1,
use the features-refresh.sh to refresh all the arch-support.txt files
in place. The main effect is to add entries for the new loong
architecture.
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Link: https://lore.kernel.org/r/20220609025656.143460-1-zhengzengkai@huawei.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic
Pull asm-generic updates from Arnd Bergmann:
"The asm-generic tree contains three separate changes for linux-5.19:
- The h8300 architecture is retired after it has been effectively
unmaintained for a number of years. This is the last architecture
we supported that has no MMU implementation, but there are still a
few architectures (arm, m68k, riscv, sh and xtensa) that support
CPUs with and without an MMU.
- A series to add a generic ticket spinlock that can be shared by
most architectures with a working cmpxchg or ll/sc type atomic,
including the conversion of riscv, csky and openrisc. This series
is also a prerequisite for the loongarch64 architecture port that
will come as a separate pull request.
- A cleanup of some exported uapi header files to ensure they can be
included from user space without relying on other kernel headers"
* tag 'asm-generic-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
h8300: remove stale bindings and symlink
sparc: add asm/stat.h to UAPI compile-test coverage
powerpc: add asm/stat.h to UAPI compile-test coverage
mips: add asm/stat.h to UAPI compile-test coverage
riscv: add linux/bpf_perf_event.h to UAPI compile-test coverage
kbuild: prevent exported headers from including <stdlib.h>, <stdbool.h>
agpgart.h: do not include <stdlib.h> from exported header
csky: Move to generic ticket-spinlock
RISC-V: Move to queued RW locks
RISC-V: Move to generic spinlocks
openrisc: Move to ticket-spinlock
asm-generic: qrwlock: Document the spinlock fairness requirements
asm-generic: qspinlock: Indicate the use of mixed-size atomics
asm-generic: ticket-lock: New generic ticket-based spinlock
remove the h8300 architecture
|
|
xtensa kernels successfully build and run with
CONFIG_DEBUG_VM_PGTABLE=y, enable arch support for it.
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
There's no direct cputime_t manipulation in the xtensa arch code, so
generic virt CPU accounting may be enabled.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
Put user exit context tracking call on the common kernel entry/exit path
(function calls are impossible at earlier kernel entry stages because
PS.EXCM is not cleared yet). Put user entry context tracking call on the
user exit path. Syscalls go through this common code too, so nothing
specific needs to be done for them.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
asm-generic
* 'remove-h8300' of git://git.infradead.org/users/hch/misc:
remove the h8300 architecture
This is clearly the least actively maintained architecture we have at
the moment, and probably the least useful. It is now the only one that
does not support MMUs at all, and most of the boards only support 4MB
of RAM, out of which the defconfig kernel needs more than half just
for .text/.data.
Guenter Roeck did the original patch to remove the architecture in 2013
after it had already been obsolete for a while, and Yoshinori Sato brought
it back in a much more modern form in 2015. Looking at the git history
since the reinstantiation, it's clear that almost all commits in the tree
are build fixes or cross-architecture cleanups:
$ git log --no-merges --format=%an v4.5.. arch/h8300/ | \
        sort | uniq -c | sort -rn | head -n 12
25 Masahiro Yamada
18 Christoph Hellwig
14 Mike Rapoport
9 Arnd Bergmann
8 Mark Rutland
7 Peter Zijlstra
6 Kees Cook
6 Ingo Molnar
6 Al Viro
5 Randy Dunlap
4 Yury Norov
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
The nds32 architecture, also known as AndeStar V3, is a custom 32-bit
RISC target designed by Andes Technologies. Support was added to the
kernel in 2016 as the replacement RISC-V based V5 processors were
already announced, and maintained by (current or former) Andes
employees.
As explained by Alan Kao, new customers are now all using RISC-V,
and all known nds32 users are already on longterm stable kernels
provided by Andes, with no development work going into mainline
support any more.
While the port is still in a reasonably good shape, it only gets
worse over time without active maintainers, so it seems best
to remove it before it becomes unusable. As always, if it turns
out that there are mainline users after all, and they volunteer
to maintain the port in the future, the removal can be reverted.
Link: https://lore.kernel.org/linux-mm/YhdWNLUhk+x9RAzU@yamatobi.andestech.com/
Link: https://lore.kernel.org/lkml/20220302065213.82702-1-alankao@andestech.com/
Link: https://www.andestech.com/en/products-solutions/andestar-architecture/
Signed-off-by: Alan Kao <alankao@andestech.com>
[arnd: rewrite changelog to provide more background]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Since commit bcf9033e5449 ("sched: move CPU field back into thread_info
if THREAD_INFO_IN_TASK=y"), the CPU field in thread_info went back to
being managed by the core code, so we no longer have to keep it in sync
in arch code.
While at it, mark THREAD_INFO_IN_TASK as done for ARM in the
documentation.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
|
|
This implements the CONFIG_THREAD_INFO_IN_TASK option.
With this change:
- before, thread_info was part of the stack and located at its beginning
- now the thread_info struct is located inside the task_struct structure (see the sketch below)
- the stack is allocated and handled like on most other major platforms
- drop the cpu field of thread_info and use the one in task_struct instead
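As a minimal sketch of where thread_info ends up with this option
(simplified; the real task_struct has many more fields):
        struct task_struct {
        #ifdef CONFIG_THREAD_INFO_IN_TASK
                /*
                 * Must be the first element, so that current_thread_info()
                 * can be derived directly from the task pointer.
                 */
                struct thread_info      thread_info;
        #endif
                /* ... remaining task state ... */
        };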
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull more RISC-V updates from Palmer Dabbelt:
- A pair of defconfig additions, for NVMe and the EFI filesystem
localization options.
- A larger address space for stack randomization.
- A cleanup to our install rules.
- A DTS update for the Microchip Icicle board, to fix the serial
console.
- Support for build-time table sorting, which allows us to have
__ex_table read-only.
* tag 'riscv-for-linus-5.15-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
riscv: Move EXCEPTION_TABLE to RO_DATA segment
riscv: Enable BUILDTIME_TABLE_SORT
riscv: dts: microchip: mpfs-icicle: Fix serial console
riscv: move the (z)install rules to arch/riscv/Makefile
riscv: Improve stack randomisation on RV64
riscv: defconfig: enable NLS_CODEPAGE_437, NLS_ISO8859_1
riscv: defconfig: enable BLK_DEV_NVME
|
|
This enlarges the range available for stack randomisation on RV64 from
the default of 8MiB to 1GiB, to match arm64 and x86.
Also, update the documentation to reflect our support for stack
randomisation.
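As a rough back-of-the-envelope check (assuming 4KiB pages; the exact mask
constant used by the port is not shown here):
        /* 1GiB / 4KiB = 2^18 page-sized slots for the stack base, i.e. an
         * 18-bit random mask, versus 8MiB / 4KiB = 2^11 slots (11 bits)
         * with the old default. */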
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
[Palmer: commit text]
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
|
In commit:
bbc180a5adb05ee8 ("mm: HUGE_VMAP arch support cleanup")
We replaced:
* ioremap_pud_enabled() with arch_vmap_pud_supported()
* ioremap_pmd_enabled() with arch_vmap_pmd_supported()
Update the documentation accordingly.
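For reference, the per-arch hooks that replaced the old ioremap_*_enabled()
checks take roughly this shape (a sketch of the declarations guarded by
CONFIG_HAVE_ARCH_HUGE_VMAP):
        #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
        /* each arch reports whether it can map vmalloc/ioremap space with
         * huge entries at the given protection */
        bool arch_vmap_pud_supported(pgprot_t prot);
        bool arch_vmap_pmd_supported(pgprot_t prot);
        #endif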
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linux-doc@vger.kernel.org
Link: https://lore.kernel.org/r/20210817091621.16799-1-mark.rutland@arm.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
After commit e88b333142e4 ("riscv: mm: add THP support on 64-bit"),
riscv can support THP.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20210805002739.23f44d2d@xhacker
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/YN2nhV5F0hBVNPuX@gmail.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
RISC-V gained support recently.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/YN2nqOVHgGDt4Iid@gmail.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc updates from Michael Ellerman:
- Enable KFENCE for 32-bit.
- Implement EBPF for 32-bit.
- Convert 32-bit to do interrupt entry/exit in C.
- Convert 64-bit BookE to do interrupt entry/exit in C.
- Changes to our signal handling code to use user_access_begin/end()
more extensively.
- Add support for time namespaces (CONFIG_TIME_NS)
- A series of fixes that allow us to reenable STRICT_KERNEL_RWX.
- Other smaller features, fixes & cleanups.
Thanks to Alexey Kardashevskiy, Andreas Schwab, Andrew Donnellan, Aneesh
Kumar K.V, Athira Rajeev, Bhaskar Chowdhury, Bixuan Cui, Cédric Le
Goater, Chen Huang, Chris Packham, Christophe Leroy, Christopher M.
Riedl, Colin Ian King, Dan Carpenter, Daniel Axtens, Daniel Henrique
Barboza, David Gibson, Davidlohr Bueso, Denis Efremov, dingsenjie,
Dmitry Safonov, Dominic DeMarco, Fabiano Rosas, Ganesh Goudar, Geert
Uytterhoeven, Geetika Moolchandani, Greg Kurz, Guenter Roeck, Haren
Myneni, He Ying, Jiapeng Chong, Jordan Niethe, Laurent Dufour, Lee
Jones, Leonardo Bras, Li Huafei, Madhavan Srinivasan, Mahesh Salgaonkar,
Masahiro Yamada, Nathan Chancellor, Nathan Lynch, Nicholas Piggin,
Oliver O'Halloran, Paul Menzel, Pu Lehui, Randy Dunlap, Ravi Bangoria,
Rosen Penev, Russell Currey, Santosh Sivaraj, Sebastian Andrzej Siewior,
Segher Boessenkool, Shivaprasad G Bhat, Srikar Dronamraju, Stephen
Rothwell, Thadeu Lima de Souza Cascardo, Thomas Gleixner, Tony Ambardar,
Tyrel Datwyler, Vaibhav Jain, Vincenzo Frascino, Xiongwei Song, Yang Li,
Yu Kuai, and Zhang Yunkai.
* tag 'powerpc-5.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (302 commits)
powerpc/signal32: Fix erroneous SIGSEGV on RT signal return
powerpc: Avoid clang uninitialized warning in __get_user_size_allowed
powerpc/papr_scm: Mark nvdimm as unarmed if needed during probe
powerpc/kvm: Fix build error when PPC_MEM_KEYS/PPC_PSERIES=n
powerpc/kasan: Fix shadow start address with modules
powerpc/kernel/iommu: Use largepool as a last resort when !largealloc
powerpc/kernel/iommu: Align size for IOMMU_PAGE_SIZE() to save TCEs
powerpc/44x: fix spelling mistake in Kconfig "varients" -> "variants"
powerpc/iommu: Annotate nested lock for lockdep
powerpc/iommu: Do not immediately panic when failed IOMMU table allocation
powerpc/iommu: Allocate it_map by vmalloc
selftests/powerpc: remove unneeded semicolon
powerpc/64s: remove unneeded semicolon
powerpc/eeh: remove unneeded semicolon
powerpc/selftests: Add selftest to test concurrent perf/ptrace events
powerpc/selftests/perf-hwbreak: Add testcases for 2nd DAWR
powerpc/selftests/perf-hwbreak: Coalesce event creation code
powerpc/selftests/ptrace-hwbreak: Add testcases for 2nd DAWR
powerpc/configs: Add IBMVNIC to some 64-bit configs
selftests/powerpc: Add uaccess flush test
...
|
|
This reverts commit 675bceb097e6 ("powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc")
All the related issues are fixed as of commit:
f14312e1ed1e ("mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.")
Hence re-enable it.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318034855.74513-1-aneesh.kumar@linux.ibm.com
|
|
BATCHED_UNMAP_TLB_FLUSH is used on x86 to do batched tlb shootdown by
sending one IPI to flush all TLB entries after unmapping pages, rather
than sending an IPI to flush each individual entry.
On arm64, tlb shootdown is done by hardware. Flush instructions are
inner-shareable. The local flushes are limited to boot time (one per CPU)
and to when a task is getting a new ASID.
So marking this feature as "TODO" is not proper, and ".." isn't good
either. So this patch adds an "N/A" for this kind of feature, which is
not needed on some architectures.
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210223003230.11976-1-song.bao.hua@hisilicon.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Run the update script to document the recent feature additions
on riscv, mips and csky.
Fixes: c109f42450ec ("csky: Add kmemleak support")
Fixes: 8b3165e54566 ("MIPS: Enable GCOV")
Fixes: 1ddc96bd42da ("MIPS: kernel: Support extracting off-line stack traces from user-space with perf")
Fixes: 74784081aac8 ("riscv: Add uprobes supported")
Fixes: 829adda597fe ("riscv: Add KPROBES_ON_FTRACE supported")
Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
Fixes: dcdc7a53a890 ("RISC-V: Implement ptrace regs and stack API")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20210225142841.3385428-2-arnd@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
The references to arch/c6x are obsolete now that the architecture
is gone. Remove them.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20210225142841.3385428-1-arnd@kernel.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Pull ARM updates from Russell King:
- Rework phys/virt translation
- Add KASan support
- Move DT out of linear map region
- Use more PC-relative addressing in assembly
- Remove FP emulation handling while in kernel mode
- Link with '-z norelro'
- remove old check for GCC <= 4.2 in ARM unwinder code
- disable big endian if using clang's linker
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits)
ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section
ARM: 9038/1: Link with '-z norelro'
ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro
ARM: 9036/1: uncompress: Fix dbgadtb size parameter name
ARM: 9035/1: uncompress: Add be32tocpu macro
ARM: 9033/1: arm/smp: Drop the macro S(x,s)
ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions
ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol
ARM: 9044/1: vfp: use undef hook for VFP support detection
ARM: 9034/1: __div64_32(): straighten up inline asm constraints
ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode
ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler
ARM: 9028/1: disable KASAN in call stack capturing routines
ARM: 9026/1: unwind: remove old check for GCC <= 4.2
ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD
ARM: 9024/1: Drop useless cast of "u64" to "long long"
ARM: 9023/1: Spelling s/mmeory/memory/
ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak
ARM: kvm: replace open coded VA->PA calculations with adr_l call
ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET
...
|