path: root/include/linux
Age | Commit message | Author | Files, Lines (-/+)
2023-06-06 | bpf: Cleanup unused function declaration | Ruiqi Gong | 1 file, -1/+0
All usage and the definition of `bpf_prog_free_linfo()` have been removed in commit e16301fbe183 ("bpf: Simplify freeing logic in linfo and jited_linfo"). Clean up its declaration in the header file. Signed-off-by: Ruiqi Gong <gongruiqi@huaweicloud.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/all/20230602030842.279262-1-gongruiqi@huaweicloud.com/ Link: https://lore.kernel.org/bpf/20230606021047.170667-1-gongruiqi@huaweicloud.com
2023-06-06 | efi/libstub: Implement support for unaccepted memory | Kirill A. Shutemov | 1 file, -1/+11
UEFI Specification version 2.9 introduces the concept of memory acceptance: Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, require memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific to the Virtual Machine platform. Accepting memory is costly and it makes the VMM allocate memory for the accepted guest physical address range. It's better to postpone memory acceptance until memory is needed. It lowers boot time and reduces memory overhead. The kernel needs to know what memory has been accepted. Firmware communicates this information via the memory map: a new memory type -- EFI_UNACCEPTED_MEMORY -- indicates such memory. Range-based tracking works fine for firmware, but it gets bulky for the kernel: e820 (or whatever the arch uses) has to be modified on every page acceptance. It leads to table fragmentation and there's a limited number of entries in the e820 table. Another option is to mark such memory as usable in e820 and track whether the range has been accepted in a bitmap. One bit in the bitmap represents a naturally aligned power-of-2-sized region of address space -- a unit. For x86, the unit size is 2MiB: 4k of bitmap is enough to track 64GiB of physical address space. In the worst-case scenario -- a huge hole in the middle of the address space -- it needs 256MiB to handle 4PiB of address space. Any unaccepted memory that is not aligned to unit_size gets accepted upfront. The bitmap is allocated and constructed in the EFI stub and passed down to the kernel via the EFI configuration table. allocate_e820() allocates the bitmap if unaccepted memory is present, according to the size of the unaccepted region. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20230606142637.5171-4-kirill.shutemov@linux.intel.com
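A quick arithmetic check of the sizes quoted above, written out as C constants (the macro names are only illustrative; the patch itself does not define them):

    /* Illustrative constants only. */
    #define UNACCEPTED_UNIT_SIZE   (2ULL << 20)    /* one bitmap bit covers 2 MiB (x86 unit) */
    #define BITMAP_4K_BITS         (4096ULL * 8)   /* a 4 KiB bitmap holds 32768 bits */

    /* 32768 bits * 2 MiB per bit = 64 GiB of physical address space per 4 KiB of bitmap. */
    #define COVERED_BY_4K_BITMAP   (BITMAP_4K_BITS * UNACCEPTED_UNIT_SIZE)

    /* Worst case: 4 PiB / 2 MiB = 2^31 bits, i.e. 256 MiB of bitmap, matching the figure above. */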
2023-06-06 | mm: Add support for unaccepted memory | Kirill A. Shutemov | 2 files, -0/+27
UEFI Specification version 2.9 introduces the concept of memory acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, require memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific to the Virtual Machine platform. There are several ways the kernel can deal with unaccepted memory:
 1. Accept all the memory during boot. It is easy to implement and it doesn't have runtime cost once the system is booted. The downside is a very long boot time. Acceptance can be parallelized to multiple CPUs to keep it manageable (i.e. via DEFERRED_STRUCT_PAGE_INIT), but it tends to saturate memory bandwidth and does not scale beyond that point.
 2. Accept a block of memory on first use. It requires more infrastructure and changes in the page allocator to make it work, but it provides good boot time. On-demand memory acceptance means latency spikes every time the kernel steps onto a new memory block. The spikes will go away once the workload's data set size stabilizes or all memory gets accepted.
 3. Accept all memory in the background. Introduce a thread (or multiple) that gets memory accepted proactively. It will minimize the time the system experiences latency spikes on memory allocation while keeping boot time low. This approach cannot function on its own. It is an extension of #2: background memory acceptance requires a functional scheduler, but the page allocator may need to tap into unaccepted memory before that. The downside of the approach is that these threads also steal CPU cycles and memory bandwidth from the user's workload and may hurt user experience.
Implement #1 and #2 for now. #2 is the default. Some workloads may want to use #1 with accept_memory=eager on the kernel command line. #3 can be implemented later based on user demand. Support of unaccepted memory requires a few changes in core-mm code:
 - memblock accepts memory on allocation. It serves early boot memory allocations and doesn't limit them to the pre-accepted pool of memory.
 - the page allocator accepts memory on the first allocation of the page. When the kernel runs out of accepted memory, it accepts memory until the high watermark is reached. It helps to minimize fragmentation.
EFI code will provide two helpers if the platform supports unaccepted memory:
 - accept_memory() makes a range of physical addresses accepted.
 - range_contains_unaccepted_memory() checks whether anything within the range of physical addresses requires acceptance.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mike Rapoport <rppt@linux.ibm.com> # memblock Link: https://lore.kernel.org/r/20230606142637.5171-2-kirill.shutemov@linux.intel.com
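As a rough illustration of how an accept-on-first-use path might call the two helpers named above (the wrapper name and the exact helper prototypes are assumptions of this sketch, not taken from the patch):

    /* Illustrative only: accept the physical range backing a page before first use. */
    static void accept_page_if_needed(struct page *page, unsigned int order)
    {
            phys_addr_t start = page_to_phys(page);
            phys_addr_t end   = start + (PAGE_SIZE << order);

            if (range_contains_unaccepted_memory(start, end))
                    accept_memory(start, end);      /* protocol-specific work done by EFI code */
    }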
2023-06-06 | regulator: Add X-Powers AXP15060/AXP313a PMIC | Mark Brown | 1 file, -0/+32
Merge series from Andre Przywara <andre.przywara@arm.com>: This patch series adds support for the X-Powers AXP15060 and AXP313a PMIC, which are general purpose PMICs as seen on different boards with different SOCs, mostly from Allwinner. This is mostly a repost of the previous patches, combining both the AXP313a and AXP15060 series, rebased on top of v6.4-rc3, and omitting the patches that already got merged. The first two patches are the successors of the AXP313a v10 post, the third patch is based on Shengyu's AXP15060 v3 post. There were no code changes, just some tiny context differences due to the rebase, plus I added the newly gained tags. As the DT bindings and the AXP15060 MFD part are already in the tree, this is just completing support with the MFD part for the AXP313a, and the regulator support for both PMICs.
2023-06-06 | firmware: arm_scmi: Add Powercap protocol enable support | Cristian Marussi | 1 file, -0/+18
SCMI powercap protocol v3.2 supports disabling the powercap on a zone-by-zone basis by providing a zero-valued powercap. Expose new operations to enable/disable powercapping on a per-zone basis. Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20230531152039.2363181-3-cristian.marussi@arm.com Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
2023-06-06 | Merge tag 'platform-drivers-x86-v6.4-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86 | Linus Torvalds | 1 file, -5/+1
Pull x86 platform driver fixes from Hans de Goede:
 - various Microsoft Surface support fixes
 - one fix for the INT3472 driver
* tag 'platform-drivers-x86-v6.4-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
  platform/x86: int3472: Avoid crash in unregistering regulator gpio
  platform/surface: aggregator_tabletsw: Add support for book mode in POS subsystem
  platform/surface: aggregator_tabletsw: Add support for book mode in KIP subsystem
  platform/surface: aggregator: Allow completion work-items to be executed in parallel
  platform/surface: aggregator: Make to_ssam_device_driver() respect constness
2023-06-06 | tracing/probes: Add tracepoint support on fprobe_events | Masami Hiramatsu (Google) | 2 files, -0/+6
Allow fprobe_events to trace raw tracepoints so that users can trace tracepoints which don't have trace-event wrappers. This new event is always available if fprobe_events is enabled (thus no kconfig), because fprobe_events depends on trace events and tracepoints. e.g.
  # echo 't sched_overutilized_tp' >> dynamic_events
  # echo 't 9p_client_req' >> dynamic_events
  # cat dynamic_events
  t:tracepoints/sched_overutilized_tp sched_overutilized_tp
  t:tracepoints/_9p_client_req 9p_client_req
The event name is based on the tracepoint name, but if it starts with a digit, an underscore '_' will be prepended. NOTE: to avoid further confusion, this renames TPARG_FL_TPOINT to TPARG_FL_TEVENT because that flag is used for eprobe (the trace-event probe), and reuses TPARG_FL_TPOINT for this raw tracepoint probe. Link: https://lore.kernel.org/all/168507471874.913472.17214624519622959593.stgit@mhiramat.roam.corp.google.com/ Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/oe-kbuild-all/202305020453.afTJ3VVp-lkp@intel.com/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2023-06-06 | tracing/probes: Add fprobe events for tracing function entry and exit. | Masami Hiramatsu (Google) | 2 files, -0/+8
Add fprobe events for tracing function entry and exit instead of kprobe events. With this change, we can continue to trace function entry/exit even if CONFIG_KPROBES_ON_FTRACE is not available. Since CONFIG_KPROBES_ON_FTRACE requires CONFIG_DYNAMIC_FTRACE_WITH_REGS, it is not available if the architecture only supports CONFIG_DYNAMIC_FTRACE_WITH_ARGS, which means kprobe events cannot probe function entry/exit effectively on such an architecture. This can be solved if the dynamic events support fprobe events. The fprobe event is a new dynamic event which is only for function (symbol) entry and exit. This event accepts non-register fetch arguments so that users can trace function arguments and return values. The fprobe event syntax is:
  f[:[GRP/][EVENT]] FUNCTION [FETCHARGS]
  f[MAXACTIVE][:[GRP/][EVENT]] FUNCTION%return [FETCHARGS]
E.g.
  # echo 'f vfs_read $arg1' >> dynamic_events
  # echo 'f vfs_read%return $retval' >> dynamic_events
  # cat dynamic_events
  f:fprobes/vfs_read__entry vfs_read arg1=$arg1
  f:fprobes/vfs_read__exit vfs_read%return arg1=$retval
  # echo 1 > events/fprobes/enable
  # head -n 20 trace | tail
  #        TASK-PID    CPU#  |||||  TIMESTAMP  FUNCTION
  #           | |        |   |||||     |          |
        sh-142   [005] ...1.   448.386420: vfs_read__entry: (vfs_read+0x4/0x340) arg1=0xffff888007f7c540
        sh-142   [005] .....   448.386436: vfs_read__exit: (ksys_read+0x75/0x100 <- vfs_read) arg1=0x1
        sh-142   [005] ...1.   448.386451: vfs_read__entry: (vfs_read+0x4/0x340) arg1=0xffff888007f7c540
        sh-142   [005] .....   448.386458: vfs_read__exit: (ksys_read+0x75/0x100 <- vfs_read) arg1=0x1
        sh-142   [005] ...1.   448.386469: vfs_read__entry: (vfs_read+0x4/0x340) arg1=0xffff888007f7c540
        sh-142   [005] .....   448.386476: vfs_read__exit: (ksys_read+0x75/0x100 <- vfs_read) arg1=0x1
        sh-142   [005] ...1.   448.602073: vfs_read__entry: (vfs_read+0x4/0x340) arg1=0xffff888007f7c540
        sh-142   [005] .....   448.602089: vfs_read__exit: (ksys_read+0x75/0x100 <- vfs_read) arg1=0x1
Link: https://lore.kernel.org/all/168507469754.913472.6112857614708350210.stgit@mhiramat.roam.corp.google.com/ Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/all/202302011530.7vm4O8Ro-lkp@intel.com/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2023-06-06 | fprobe: Pass return address to the handlers | Masami Hiramatsu (Google) | 2 files, -3/+5
Pass the return address as 'ret_ip' to the fprobe entry and return handlers so that the fprobe user handler can get the return address without analyzing arch-dependent pt_regs. Link: https://lore.kernel.org/all/168507467664.913472.11642316698862778600.stgit@mhiramat.roam.corp.google.com/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2023-06-06 | wifi: mac80211: provide a helper to fetch the medium synchronization delay | Emmanuel Grumbach | 1 file, -0/+35
There are drivers which need this information. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230604120651.b1043f3126e2.Iad3806f8bf8df07f52ef0a02cc3d0373c44a8c93@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-06-06 | wifi: mac80211: fetch and store the EML capability information | Emmanuel Grumbach | 1 file, -0/+35
We need to teach the low level driver about the EML capability which includes information for EMLSR / EMLMR operation. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230504134511.828474-11-gregory.greenman@intel.com Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-06-06 | wifi: mac80211: skip EHT BSS membership selector | Johannes Berg | 1 file, -1/+4
Skip the EHT BSS membership selector for getting rates. While at it, add the definitions for GLK and EPS, and sort the list. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230504134511.828474-9-gregory.greenman@intel.com Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-06-06 | watch_queue: prevent dangling pipe pointer | Siddh Raman Pant | 1 file, -2/+1
NULL the dangling pipe reference while clearing watch_queue. If not done, a reference to a freed pipe remains in the watch_queue, as this function is called before freeing a pipe in free_pipe_info() (see line 834 of fs/pipe.c). The sole use of wqueue->defunct is for checking if the watch queue has been cleared, but wqueue->pipe is also NULLed while clearing. Thus, wqueue->defunct is superfluous, as wqueue->pipe can be checked for NULL. Hence, the former can be removed. Tested with keyutils testsuite. Cc: stable@vger.kernel.org # 6.1 Signed-off-by: Siddh Raman Pant <code@siddh.me> Acked-by: David Howells <dhowells@redhat.com> Message-Id: <20230605143616.640517-1-code@siddh.me> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-06-06 | lib/string_helpers: Change returned value of the strreplace() | Andy Shevchenko | 1 file, -1/+1
It's more useful to return the pointer to the string itself with strreplace(), so it may be used like attr->name = strreplace(name, '/', '_'); While at it, amend the kernel documentation. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230605170553.7835-3-andriy.shevchenko@linux.intel.com
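For orientation, a minimal sketch of the helper with the changed return value (the real implementation lives in lib/string_helpers.c; this is only meant to show what callers now get back):

    /* Sketch: replace every occurrence of 'old' with 'new' and return the string itself. */
    char *strreplace(char *str, char old, char new)
    {
            char *s;

            for (s = str; *s; s++)
                    if (*s == old)
                            *s = new;

            return str;     /* previously a pointer to the string's terminator was returned */
    }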
2023-06-06 | Merge branch 'drm-i915-use-ref_tracker-library-for-tracking-wakerefs' | Jakub Kicinski | 1 file, -2/+23
Andrzej Hajda says: ==================== drm/i915: use ref_tracker library for tracking wakerefs This is a reviewed series of ref_tracker patches, ready to be merged via the network tree, rebased on net-next/main. The i915 patches will be merged later via the intel-gfx tree. ==================== Merge on top of an -rc tag in case it's needed in another tree. Link: https://lore.kernel.org/r/20230224-track_gt-v9-0-5b47a33f55d1@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-06 | lib/ref_tracker: add printing to memory buffer | Andrzej Hajda | 1 file, -0/+8
Similar to stack_(depot|trace)_snprint(), the patch adds a helper for printing stats to a memory buffer. It will be helpful in the case of debugfs. Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-06 | lib/ref_tracker: improve printing stats | Andrzej Hajda | 1 file, -2/+7
In case the library is tracking a busy subsystem, simply printing the stack for every active reference will spam the log with long, hard to read, redundant stack traces. To improve readability the following changes have been made:
 - reports are printed per stack_handle - the log is more compact,
 - added a display name for ref_tracker_dir - it will differentiate multiple subsystems,
 - the stack trace is printed indented, in the same printk call,
 - info about dropped references is printed as well.
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-06 | lib/ref_tracker: add unlocked leak print helper | Andrzej Hajda | 1 file, -0/+8
To have reliable detection of leaks, the caller must be able to check both the tracked counter and the leaks under the same lock. dir.lock is a natural candidate for such a lock, and the unlocked print helper can be called with this lock taken. As a bonus we can reuse this helper in ref_tracker_dir_exit. Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-05 | sched/clock: Provide local_clock_noinstr() | Peter Zijlstra | 1 file, -1/+16
Now that all ARCH_WANTS_NO_INSTR architectures (arm64, loongarch, s390, x86) provide sched_clock_noinstr(), use this to provide local_clock_noinstr(). This local_clock_noinstr() will be safe to use from noinstr code with the assumption that any such noinstr code is non-preemptible (it had better be, entry code will have IRQs disabled while __cpuidle must have preemption disabled). Specifically, preempt_enable_notrace(), a common part of many a sched_clock() implementation, calls out to schedule() -- even though, per the above, it will never trigger -- which frustrates noinstr validation:
  vmlinux.o: warning: objtool: local_clock+0xb5: call to preempt_schedule_notrace_thunk() leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Michael Kelley <mikelley@microsoft.com> # Hyper-V Link: https://lore.kernel.org/r/20230519102715.978624636@infradead.org
2023-06-05 | math64: Always inline u128 version of mul_u64_u64_shr() | Peter Zijlstra | 1 file, -1/+1
In order to prevent the following complaint from happening, always inline the u128 variant of mul_u64_u64_shr() -- which is what x86_64 will use.
  vmlinux.o: warning: objtool: read_hv_sched_clock_tsc+0x5a: call to mul_u64_u64_shr.constprop.0() leaves .noinstr.text section
It should compile into something like:
  asm("mul %[mul];"
      "shrd %rdx, %rax, %cl"
      : "+&a" (a)
      : "c" shift, [mul] "r" (mul)
      : "d");
Which is silly not to inline, but it happens. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Michael Kelley <mikelley@microsoft.com> # Hyper-V Link: https://lore.kernel.org/r/20230519102715.637420396@infradead.org
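For reference, the u128 variant being discussed is essentially a one-liner, so forcing it inline is cheap; modulo the exact config guards it looks roughly like:

    static __always_inline u64 mul_u64_u64_shr(u64 a, u64 mul, unsigned int shift)
    {
            /* 64x64 -> 128 bit multiply, then shift the 128-bit product right */
            return (u64)(((unsigned __int128)a * mul) >> shift);
    }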
2023-06-05 | seqlock/latch: Provide raw_read_seqcount_latch_retry() | Peter Zijlstra | 2 files, -8/+9
The read side of seqcount_latch consists of:
  do {
          seq = raw_read_seqcount_latch(&latch->seq);
          ...
  } while (read_seqcount_latch_retry(&latch->seq, seq));
which is asymmetric in the raw_ department, and sure enough, read_seqcount_latch_retry() includes (explicit) instrumentation where raw_read_seqcount_latch() does not. This inconsistency becomes a problem when trying to use it from noinstr code. As such, fix it by renaming and re-implementing raw_read_seqcount_latch_retry() without the instrumentation. Specifically the instrumentation in question is kcsan_atomic_next(0) in do___read_seqcount_retry(). Losing this annotation is not a problem because raw_read_seqcount_latch() does not pass through kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Tested-by: Michael Kelley <mikelley@microsoft.com> # Hyper-V Link: https://lore.kernel.org/r/20230519102715.233598176@infradead.org
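With the rename applied, the read side quoted above becomes symmetric in the raw_ department, roughly:

    do {
            seq = raw_read_seqcount_latch(&latch->seq);
            /* read the copy selected by seq & 1 */
    } while (raw_read_seqcount_latch_retry(&latch->seq, seq));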
2023-06-05 | sched: Unconditionally use full-fat wait_task_inactive() | Peter Zijlstra | 1 file, -5/+2
While modifying wait_task_inactive() for PREEMPT_RT, the build robot noted that UP got broken. This led to an audit and consideration of the UP implementation of wait_task_inactive(). It looks like the UP implementation is also broken for PREEMPT; consider task_current_syscall() getting preempted between the two calls to wait_task_inactive(). Therefore move the wait_task_inactive() implementation out of CONFIG_SMP and unconditionally use it. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20230602103731.GA630648%40hirez.programming.kicks-ass.net
2023-06-05 | block: mark early_lookup_bdev as __init | Christoph Hellwig | 1 file, -1/+1
early_lookup_bdev is now only used during the early boot code as it should, so mark it __init to not waste run time memory on it. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-25-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | dm: remove dm_get_dev_t | Christoph Hellwig | 1 file, -2/+0
Open code dm_get_dev_t in the only remaining caller, and propagate the exact error code from lookup_bdev and early_lookup_bdev. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-20-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | block: move more code to early-lookup.c | Christoph Hellwig | 1 file, -1/+0
blk_lookup_devt is only used by code in early-lookup.c, so move it there. printk_all_partitions and its helper bdevt_str are only used by the early init code in init/do_mounts.c, so they should go there as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-17-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | init: improve the name_to_dev_t interface | Christoph Hellwig | 2 files, -1/+5
name_to_dev_t has a very misleading name that doesn't make clear it should only be used by the early init code, and it also has a bad calling convention that doesn't allow returning different kinds of errors. Rename it to early_lookup_bdev to make the use case clear, and return an errno, where -EINVAL means the string could not be parsed, and -ENODEV means the string was valid but no device was found for it. Also stub out the whole call for !CONFIG_BLOCK as all the non-block root cases are always covered in the caller. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-14-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
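Under the new convention, and assuming the prototype takes the name plus a dev_t output pointer (an assumption of this sketch, not spelled out above), a caller in early init code would look roughly like:

    dev_t devt;
    int err;

    err = early_lookup_bdev(name, &devt);   /* formerly: devt = name_to_dev_t(name) */
    if (err == -EINVAL)
            pr_err("root device name could not be parsed\n");
    else if (err == -ENODEV)
            pr_err("root device name is valid, but no device was found\n");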
2023-06-05 | init: handle ubi/mtd root mounting like all other root types | Christoph Hellwig | 1 file, -0/+1
Assign a Root_Generic magic value for UBI/MTD root and handle the root mounting in mount_root like all other root types. Besides making the code more clear this also means that UBI/MTD root can be used together with an initrd (not that anyone should care). Also factor parsing of the root name into a helper now that it can be easily done and will get more complicated with subsequent patches. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-11-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | init: remove pointless Root_* values | Christoph Hellwig | 1 file, -8/+0
Remove all unused defines, and just use the expanded versions for the SCSI disk majors. I've decided to keep Root_RAM0 even if it could be expanded as there is a lot of special casing for it in the init code. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230531125535.676098-6-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | driver core: return bool from driver_probe_done | Christoph Hellwig | 1 file, -1/+1
bool is the most sensible return value for a yes/no return. Also add __init as this function is only called from the early boot code. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20230531125535.676098-2-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | fs: add a method to shut down the file system | Christoph Hellwig | 1 file, -0/+1
Add a new ->shutdown super operation that can be used to tell the file system to shut down, and call it from the newly created holder ops when the block device under a file system shuts down. This only covers the main block device for "simple" file systems using get_tree_bdev / mount_bdev. File systems with their own get_tree method or that open additional devices will need to set up their own blk_holder_ops. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Acked-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Link: https://lore.kernel.org/r/20230601094459.1350643-12-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
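A sketch of what wiring this up in a file system might look like; the xyz names are made up, and the exact ->shutdown signature (taking only the super_block) is assumed here:

    static void xyz_shutdown(struct super_block *sb)
    {
            /* stop issuing I/O: the underlying block device has gone away */
    }

    static const struct super_operations xyz_super_ops = {
            /* ... existing operations ... */
            .shutdown       = xyz_shutdown,
    };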
2023-06-05 | block: add a mark_dead holder operation | Christoph Hellwig | 1 file, -0/+1
Add a mark_dead method to blk_holder_ops that is called from blk_mark_disk_dead to notify the holder that the block device it is using has been marked dead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Acked-by: Christian Brauner <brauner@kernel.org> Acked-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Link: https://lore.kernel.org/r/20230601094459.1350643-11-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-05 | block: introduce holder ops | Christoph Hellwig | 2 files, -3/+10
Add a new blk_holder_ops structure, which is passed to blkdev_get_by_* and installed in the block_device for exclusive claims. It will be used to allow the block layer to call back into the user of the block device for things like notification of a removed device or a device resize. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Acked-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Link: https://lore.kernel.org/r/20230601094459.1350643-10-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
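Roughly, and hedged on the exact prototypes at this point in the series, the new structure and its use when claiming a device look like this (mode, holder and my_holder_ops are placeholders):

    /* Installed in the block_device for exclusive claims. */
    struct blk_holder_ops {
            /* callbacks are filled in by later patches, e.g. ->mark_dead */
            void (*mark_dead)(struct block_device *bdev);
    };

    /* The ops pointer is passed as an extra argument to the blkdev_get_by_* helpers: */
    bdev = blkdev_get_by_path(path, mode, holder, &my_holder_ops);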
2023-06-05 | Merge tag 'qcom-driver-fixes-for-6.4' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into arm/fixes | Arnd Bergmann | 1 file, -6/+0
Qualcomm driver fixes for 6.4
Error paths are corrected across icc-bwmon, rpmh-rsc, ramp_controller and rmtfs. The ice module is renamed qcom_ice, to avoid clashing with the existing "ice" driver. SA8155P-specific RPMh power domains are introduced to avoid the code trying to access resources that exist on SM8150, but not on SA8155P. Lastly, changes to the EDAC driver fix an issue where the driver performs MMIO based on the wrong register map.
* tag 'qcom-driver-fixes-for-6.4' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux:
  EDAC/qcom: Get rid of hardcoded register offsets
  EDAC/qcom: Remove superfluous return variable assignment in qcom_llcc_core_setup()
  dt-bindings: cache: qcom,llcc: Fix SM8550 description
  soc: qcom: rpmhpd: Add SA8155P power domains
  dt-bindings: power: qcom,rpmpd: Add SA8155P
  soc: qcom: Rename ice to qcom_ice to avoid module name conflict
  soc: qcom: rmtfs: Fix error code in probe()
  soc: qcom: ramp_controller: Fix an error handling path in qcom_ramp_controller_probe()
  soc: qcom: rpmh-rsc: drop redundant unsigned >=0 comparision
  soc: qcom: icc-bwmon: fix incorrect error code passed to dev_err_probe()
Link: https://lore.kernel.org/r/20230601141058.2246039-1-andersson@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-06-05 | ALSA: usb-audio: Use __le16 for 16bit USB descriptor fields | Takashi Iwai | 1 file, -3/+3
Use the proper notation for 16-bit values to fix the sparse warnings. Fixes: f8ddb0fb3289 ("ALSA: usb-audio: Define USB MIDI 2.0 specs") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202305260528.wcqjXso8-lkp@intel.com/ Closes: https://lore.kernel.org/oe-kbuild-all/202305270534.odwHL9F0-lkp@intel.com/ Link: https://lore.kernel.org/r/20230605144758.6677-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2023-06-05 | coresight: Fix CTI module refcount leak by making it a helper device | James Clark | 1 file, -27/+2
The CTI module has some hard coded refcounting code that has a leak. For example, running perf and then trying to unload it fails:
  perf record -e cs_etm// -a -- ls
  rmmod coresight_cti
  rmmod: ERROR: Module coresight_cti is in use
The coresight core already handles references of devices in use, so by making CTI a normal helper device, we get working refcounting for free. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-14-james.clark@arm.com
2023-06-05 | coresight: Enable and disable helper devices adjacent to the path | James Clark | 1 file, -1/+10
Currently CATU is the only helper device, and its enable and disable calls are hard coded. To allow more helper devices to be added in a generic way, remove these hard coded calls and just enable and disable all helper devices. This has to apply to helpers adjacent to the path, because they will never be in the path. CATU was already discovered in this way, so there is no change there. One change that is needed is for CATU to call back into ETR to allocate the buffer. Because the enable call was previously hard coded, it was done at a point where the buffer was already allocated, but this is no longer the case. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-13-james.clark@arm.com
2023-06-05 | coresight: Make refcount a property of the connection | James Clark | 1 file, -5/+9
This removes the need to do an additional lookup for the total number of ports used and also removes the need to allocate an array of refcounts, which is just another representation of a connection array. This was only used for link type devices; for regular devices a single refcount on the coresight device is used. There are both an input and an output refcount in case two link type devices are connected together, so that they don't overwrite each other's counts. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-11-james.clark@arm.com
2023-06-05 | coresight: Store in-connections as well as out-connections | James Clark | 1 file, -0/+26
This will allow CATU to get its associated ETR in a generic way where currently the enable path has some hard coded searches which avoid the need to store input connections. This also means that the full search for connected devices on removal can be replaced with a loop through only the input and output devices. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-10-james.clark@arm.com
2023-06-05 | coresight: Store pointers to connections rather than an array of them | James Clark | 1 file, -2/+3
This will allow the same connection object to be referenced via the input connection list in a later commit rather than duplicating them. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-8-james.clark@arm.com
2023-06-05 | coresight: Dynamically add connections | James Clark | 1 file, -1/+7
Add a function for adding connections dynamically. This also removes the 1:1 mapping between port number and the index into the connections array. The only place this mapping was used was in the warning for duplicate output ports, which has been replaced by a search. Other uses of the port number already use the port member variable. Being able to dynamically add connections will allow other devices like CTI to re-use the connection mechanism despite not having explicit connections described in the DT. The connections array is now no longer sparse, so child_fwnode doesn't need to be checked as all connections have a target node. Because the array is no longer sparse, the high in and out port numbers are required for the refcount arrays. But these will also be removed in a later commit when the refcount is made a property of the connection. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-7-james.clark@arm.com
2023-06-05 | coresight: Rename connection members to make the direction explicit | James Clark | 1 file, -9/+9
When input connections are added they will use the same connection object as the output so parent and child could be misinterpreted. Making the direction unambiguous in the names should improve readability. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-6-james.clark@arm.com
2023-06-05 | coresight: Rename nr_outports to nr_outconns | James Clark | 1 file, -5/+5
Rename to avoid confusion between port number and the index in the connection array. The port number is already stored in the connection, and in a later commit the connection array will be appended to, so the length of it will no longer reflect the number of ports. No functional changes. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-5-james.clark@arm.com
2023-06-05 | coresight: Change name of pdata->conns | James Clark | 1 file, -2/+2
conns is actually for output connections. Change the name to make it clearer and so that we can add input connections later. No functional changes. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-4-james.clark@arm.com
2023-06-05 | coresight: Use enum type for cs_mode wherever possible | James Clark | 1 file, -3/+10
mode is stored as a local_t, but it is also passed around a lot as a plain u32, so use the correct type wherever local_t isn't currently used. This helps a little bit with readability. Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20230425143542.2305069-3-james.clark@arm.com
2023-06-05 | drivers/perf: apple_m1: Force 63bit counters for M2 CPUs | Marc Zyngier | 1 file, -0/+2
Sidharth reports that on M2, the PMU never generates any interrupt when using 'perf record', which is annoying as you get no samples. I'm tempted to say "no sample, no problem", but others may have a different opinion. Upon investigation, it appears that the counters on M2 are significantly different from the ones on M1, as they count on 64 bits instead of 48. Which of course, in the fine M1 tradition, means that we can only use 63 bits, as the top bit is used to signal the interrupt... This results in having to introduce yet another flag to indicate yet another odd counter width. Who knows what the next crazy implementation will do... With this, perf can work out the correct offset, and 'perf record' works as intended. Tested on M2 and M2-Pro CPUs. Cc: Janne Grunau <j@jannau.net> Cc: Hector Martin <marcan@marcan.st> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Fixes: 7d0bfb7c9977 ("drivers/perf: apple_m1: Add Apple M2 support") Reported-by: Sidharth Kshatriya <sid.kshatriya@gmail.com> Tested-by: Sidharth Kshatriya <sid.kshatriya@gmail.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230528080205.288446-1-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2023-06-05 | iopoll: Do not use timekeeping in read_poll_timeout_atomic() | Geert Uytterhoeven | 1 file, -5/+17
read_poll_timeout_atomic() uses ktime_get() to implement the timeout feature, just like its non-atomic counterpart. However, there are several issues with this, due to its use in atomic contexts:
1. When called in the s2ram path (as typically done by clock or PM domain drivers), timekeeping may be suspended, triggering the WARN_ON(timekeeping_suspended) in ktime_get():
     WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
   Calling ktime_get_mono_fast_ns() instead of ktime_get() would get rid of that warning. However, that would break timeout handling, as (at least on systems with an ARM architectured timer), the time returned by ktime_get_mono_fast_ns() does not advance while timekeeping is suspended. Interestingly, (on the same ARM systems) the time returned by ktime_get() does advance while timekeeping is suspended, despite the warning.
2. Depending on the actual clock source, and especially before a high-resolution clocksource (e.g. the ARM architectured timer) becomes available, time may not advance in atomic contexts, thus breaking timeout handling.
Fix this by abandoning the idea that one can rely on timekeeping to implement timeout handling in all atomic contexts, and switch from a global time-based to a locally-estimated timeout handling. In most (all?) cases the timeout condition is exceptional and an error condition, hence any additional delays due to underestimating wall clock time are irrelevant. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Link: https://lore.kernel.org/r/3d2a2f4e553489392d871108797c3be08f88300b.1685692810.git.geert+renesas@glider.be
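Conceptually the atomic helper moves from "compare against ktime_get()" to "count the delays already inserted"; a simplified sketch of that idea (not the literal macro body, using the macro's op/args/cond/delay_us/timeout_us parameters as pseudo-variables):

    u64 waited_us = 0;

    for (;;) {
            (val) = op(args);
            if (cond)
                    break;
            if (timeout_us && waited_us >= timeout_us) {
                    (val) = op(args);       /* one final read after timing out */
                    break;
            }
            udelay(delay_us);
            waited_us += delay_us;          /* locally-estimated elapsed time */
    }
    ret = (cond) ? 0 : -ETIMEDOUT;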
2023-06-05 | iopoll: Call cpu_relax() in busy loops | Geert Uytterhoeven | 1 file, -0/+2
It is considered good practice to call cpu_relax() in busy loops, see Documentation/process/volatile-considered-harmful.rst. This can not only lower CPU power consumption or yield to a hyperthreaded twin processor, but also allows an architecture to mitigate hardware issues (e.g. ARM Erratum 754327 for Cortex-A9 prior to r2p0) in the architecture-specific cpu_relax() implementation. In addition, cpu_relax() is also a compiler barrier. It is not immediately obvious that the @op argument "function" will result in an actual function call (e.g. in case of inlining). Where a function call is a C sequence point, this is lost on inlining. Therefore, with aggressive enough optimization it might be possible for the compiler to hoist the:
  (val) = op(args);
"load" out of the loop because it doesn't see the value changing. The addition of cpu_relax() would inhibit this. As the iopoll helpers lack calls to cpu_relax(), people are sometimes reluctant to use them, and may fall back to open-coded polling loops (including cpu_relax() calls) instead. Fix this by adding calls to cpu_relax() to the iopoll helpers:
 - For the non-atomic case, it is sufficient to call cpu_relax() in case of a zero sleep-between-reads value, as a call to usleep_range() is a safe barrier otherwise. However, it doesn't hurt to add the call regardless, for simplicity, and for similarity with the atomic case below.
 - For the atomic case, cpu_relax() must be called regardless of the sleep-between-reads value, as there is no guarantee all architecture-specific implementations of udelay() handle this.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Link: https://lore.kernel.org/r/45c87bec3397fdd704376807f0eec5cc71be440f.1685692810.git.geert+renesas@glider.be
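Reduced to its essentials, the non-atomic polling loop now always contains the relax; an illustrative sketch rather than the literal macro text, again treating op/args/cond/sleep_us/timeout_us as pseudo-variables:

    ktime_t timeout = ktime_add_us(ktime_get(), timeout_us);

    for (;;) {
            (val) = op(args);
            if (cond)
                    break;
            if (timeout_us && ktime_compare(ktime_get(), timeout) > 0)
                    break;
            cpu_relax();                    /* compiler barrier + arch-specific relax */
            if (sleep_us)
                    usleep_range(sleep_us >> 2, sleep_us);
    }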
2023-06-05 | NFSD: Ensure that xdr_write_pages updates rq_next_page | Chuck Lever | 1 file, -0/+21
All other NFSv[23] procedures manage to keep page_ptr and rq_next_page in lock step. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2023-06-05 | fs.h: Optimize file struct to prevent false sharing | chenzhiyin | 1 file, -5/+11
In the syscall test of UnixBench, a performance regression occurred due to false sharing. The lock and atomic members, including file::f_lock, file::f_count and file::f_pos_lock, are highly contended and frequently updated in the high-concurrency test scenarios. perf c2c identified one affected read access, file::f_op. To prevent false sharing, the layout of the file struct is changed as follows:
 (A) f_lock, f_count and f_pos_lock are put together to share the same cache line.
 (B) The read-mostly members, including f_path, f_inode, f_op, are put into a separate cache line.
 (C) f_mode is put together with f_count, since they are used frequently at the same time.
Due to the '__randomize_layout' attribute of the file struct, the updated layout only takes effect when CONFIG_RANDSTRUCT_NONE is 'y'. The optimization has been validated in the syscall test of UnixBench; the performance gain is 30-50%. Furthermore, to confirm the optimization's effectiveness on the other code paths, the results of fsdisk, fsbuffer and fstime are also shown. Here are the detailed test results of UnixBench.
Command: numactl -C 3-18 ./Run -c 16 syscall fsbuffer fstime fsdisk
Without patch:
  File Copy 1024 bufsize 2000 maxblocks     875052.1 KBps  (30.0 s, 2 samples)
  File Copy 256 bufsize 500 maxblocks       235484.0 KBps  (30.0 s, 2 samples)
  File Copy 4096 bufsize 8000 maxblocks    2815153.5 KBps  (30.0 s, 2 samples)
  System Call Overhead                     5772268.3 lps   (10.0 s, 7 samples)
  System Benchmarks Partial Index              BASELINE     RESULT    INDEX
  File Copy 1024 bufsize 2000 maxblocks          3960.0   875052.1   2209.7
  File Copy 256 bufsize 500 maxblocks            1655.0   235484.0   1422.9
  File Copy 4096 bufsize 8000 maxblocks          5800.0  2815153.5   4853.7
  System Call Overhead                          15000.0  5772268.3   3848.2
  System Benchmarks Index Score (Partial Only)                       2768.3
With patch:
  File Copy 1024 bufsize 2000 maxblocks    1009977.2 KBps  (30.0 s, 2 samples)
  File Copy 256 bufsize 500 maxblocks       264765.9 KBps  (30.0 s, 2 samples)
  File Copy 4096 bufsize 8000 maxblocks    3052236.0 KBps  (30.0 s, 2 samples)
  System Call Overhead                     8237404.4 lps   (10.0 s, 7 samples)
  System Benchmarks Partial Index              BASELINE     RESULT    INDEX
  File Copy 1024 bufsize 2000 maxblocks          3960.0  1009977.2   2550.4
  File Copy 256 bufsize 500 maxblocks            1655.0   264765.9   1599.8
  File Copy 4096 bufsize 8000 maxblocks          5800.0  3052236.0   5262.5
  System Call Overhead                          15000.0  8237404.4   5491.6
  System Benchmarks Index Score (Partial Only)                       3295.3
Signed-off-by: chenzhiyin <zhiyin.chen@intel.com> Message-Id: <20230601092400.27162-1-zhiyin.chen@intel.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
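The regrouping described in (A)-(C) amounts to something along these lines (member list heavily abbreviated; this shows the grouping idea, not the exact field order of the patch):

    struct file {
            /* (A) + (C): contended, frequently written -- share one cache line */
            spinlock_t              f_lock;
            fmode_t                 f_mode;
            atomic_long_t           f_count;
            struct mutex            f_pos_lock;
            loff_t                  f_pos;
            /* ... */

            /* (B): read-mostly members, pushed onto a separate cache line */
            struct path             f_path;
            struct inode            *f_inode;
            const struct file_operations    *f_op;
            /* ... */
    } __randomize_layout;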
2023-06-05 | highmem: Rename put_and_unmap_page() to unmap_and_put_page() | Fabio M. De Francesco | 1 file, -1/+1
With commit 849ad04cf562a ("new helper: put_and_unmap_page()"), Al Viro introduced the put_and_unmap_page() to use in those many places where we have a common pattern consisting of calls to kunmap_local() + put_page(). Obviously, first we unmap and then we put pages. Instead, the original name of this helper seems to imply that we first put and then unmap. Therefore, rename the helper and change the only known upstreamed user (i.e., fs/sysv) before this helper enters common use and might become difficult to find all call sites and instead easy to break the builds. Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Message-Id: <20230602103307.5637-1-fmdefrancesco@gmail.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
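The helper itself is tiny; modulo the exact upstream definition it boils down to:

    static inline void unmap_and_put_page(struct page *page, void *addr)
    {
            kunmap_local(addr);     /* first unmap ... */
            put_page(page);         /* ... then put, as the new name now reads */
    }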