author     Zheng Yejian <zhengyejian1@huawei.com>           2023-09-06 11:19:30 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2023-10-10 22:53:32 +0300
commit     9ccce21bd77b117ede7d890bc4cef156768d8049
tree       dde9cef527543ee26eb4401b516b89cd364bb0c5
parent     5dfcb92905b3593ed8aa56bb50fab4d8cb09d181
download   linux-9ccce21bd77b117ede7d890bc4cef156768d8049.tar.xz
ring-buffer: Avoid softlockup in ring_buffer_resize()
[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]
When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
and many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid this, call cond_resched() after each per-CPU buffer
allocation.
Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc: <mhiramat@kernel.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
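
For readers unfamiliar with the pattern, here is a minimal user-space C
sketch of the idea; it is not the kernel code. allocate_buffer_pages(),
NR_CPUS, NR_PAGES, and PAGE_SZ are hypothetical stand-ins, and
sched_yield() substitutes for the in-kernel cond_resched().

#define _POSIX_C_SOURCE 200809L
#include <sched.h>   /* sched_yield() */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS  64     /* hypothetical CPU count */
#define NR_PAGES 4096   /* hypothetical pages per CPU buffer */
#define PAGE_SZ  4096

/*
 * Stand-in for the kernel's per-CPU buffer page allocation:
 * allocate one CPU's worth of pages, failing on the first NULL.
 */
static int allocate_buffer_pages(void **pages, size_t nr)
{
	for (size_t i = 0; i < nr; i++) {
		pages[i] = malloc(PAGE_SZ);
		if (!pages[i])
			return -1;
	}
	return 0;
}

int main(void)
{
	/* One page array per CPU, as the resize loop touches each CPU. */
	static void *pages[NR_CPUS][NR_PAGES];

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (allocate_buffer_pages(pages[cpu], NR_PAGES)) {
			fprintf(stderr, "out of memory at cpu %d\n", cpu);
			return 1;
		}
		/*
		 * The crux of the patch: one voluntary scheduling point
		 * per CPU iteration. In the kernel this is cond_resched();
		 * here sched_yield() plays the same role for illustration.
		 */
		sched_yield();
	}
	return 0;
}

On a fully preemptible (CONFIG_PREEMPT) kernel the scheduler could
interrupt the loop anyway; cond_resched() adds the explicit scheduling
point that PREEMPT_NONE kernels rely on, which is why the softlockup
shows up on non-preemptible configurations.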
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f8126fa0630e..752e9549a59e 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2080,6 +2080,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 				err = -ENOMEM;
 				goto out_err;
 			}
+
+			cond_resched();
 		}
 
 		get_online_cpus();