path: root/include/linux
Age | Commit message | Author | Files | Lines
2019-07-25sched/topology: Add partition_sched_domains_locked()Mathieu Poirier1-0/+10
Introduce the partition_sched_domains_locked() function by taking the mutex locking code out of the original function. That way the work done by partition_sched_domains_locked() can be reused without dropping the mutex lock. No change of functionality is introduced by this patch. Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Tejun Heo <tj@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bristot@redhat.com Cc: claudio@evidence.eu.com Cc: lizefan@huawei.com Cc: longman@redhat.com Cc: luca.abeni@santannapisa.it Cc: rostedt@goodmis.org Cc: tommaso.cucinotta@santannapisa.it Link: https://lkml.kernel.org/r/20190719140000.31694-2-juri.lelli@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
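A minimal sketch of the split described above; the function bodies are illustrative, and the _locked variant assumes the caller already holds sched_domains_mutex:

  void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
                                      struct sched_domain_attr *dattr_new)
  {
          lockdep_assert_held(&sched_domains_mutex);
          /* ... the original domain-rebuild work, unchanged ... */
  }

  void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
                               struct sched_domain_attr *dattr_new)
  {
          mutex_lock(&sched_domains_mutex);
          partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
          mutex_unlock(&sched_domains_mutex);
  }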
2019-07-25sched/core: Convert get_task_struct() to return the taskMatthew Wilcox (Oracle)1-1/+5
Returning the pointer that was passed in allows us to write slightly more idiomatic code. Convert a few users. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190704221323.24290-1-willy@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
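The shape of the change, sketched (refcount field per the kernel of that era):

  static inline struct task_struct *get_task_struct(struct task_struct *t)
  {
          refcount_inc(&t->usage);
          return t;       /* returning t enables one-line idioms */
  }

  /* callers can now write */
  worker->task = get_task_struct(p);
  /* instead of */
  get_task_struct(p);
  worker->task = p;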
2019-07-25cpu/hotplug: Cache number of online CPUsThomas Gleixner1-9/+16
Re-evaluating the bitmap weight of the online cpus bitmap in every invocation of num_online_cpus() over and over is a pretty useless exercise. Especially when num_online_cpus() is used in code paths like the IPI delivery of x86 or the membarrier code. Cache the number of online CPUs in the core and just return the cached variable. The accessor function provides only a snapshot when used without protection against concurrent CPU hotplug. The storage needs to use an atomic_t because the kexec and reboot code (ab)use set_cpu_online() in their 'shutdown' handlers without any form of serialization as pointed out by Mathieu. Regular CPU hotplug usage is properly serialized. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907091622590.1634@nanos.tec.linutronix.de
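Illustrative shape of the cached counter (variable name per the patch, body condensed):

  extern atomic_t __num_online_cpus;

  static inline unsigned int num_online_cpus(void)
  {
          /* snapshot only: hold the hotplug lock (cpus_read_lock())
           * if the value must stay stable across the caller */
          return atomic_read(&__num_online_cpus);
  }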
2019-07-25cpumask: Implement cpumask_or_equal()Thomas Gleixner2-0/+37
The IPI code of x86 needs to evaluate whether the target cpumask is equal to the cpu_online_mask or equal except for the calling CPU. To replace the current implementation, which requires the usage of a temporary cpumask that might involve allocations, add a new function which compares a cpumask to the result of two other cpumasks which are or'ed together before comparison. This allows the required decision to be made in one go, and the calling code can then check for the calling CPU being set in the target mask with cpumask_test_cpu(). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20190722105220.585449120@linutronix.de
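A usage sketch of the decision this enables in the IPI path; the send_ipi_*() helpers are hypothetical stand-ins:

  /* true iff (*mask | *cpumask_of(this_cpu)) == *cpu_online_mask,
   * computed without a temporary cpumask or allocation */
  if (cpumask_or_equal(mask, cpumask_of(this_cpu), cpu_online_mask)) {
          if (cpumask_test_cpu(this_cpu, mask))
                  send_ipi_all();          /* hypothetical */
          else
                  send_ipi_all_but_self(); /* hypothetical */
  }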
2019-07-25smp/hotplug: Track booted once CPUs in a cpumaskThomas Gleixner1-0/+2
The booted-once information, which is required to deal correctly with the MCE broadcast issue on X86, is stored in the per cpu hotplug state, which is perfectly fine for the intended purpose. X86 needs that information for supporting NMI broadcasting via shortcuts, but retrieving it from per cpu data is cumbersome. Move it to a cpumask so the information can be checked against the cpu_present_mask quickly. No functional change intended. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20190722105219.818822855@linutronix.de
2019-07-25locking/lockdep: Reduce space occupied by stack tracesBart Van Assche1-6/+3
Although commit 669de8bda87b ("kernel/workqueue: Use dynamic lockdep keys for workqueues") unregisters dynamic lockdep keys when a workqueue is destroyed, a side effect of that commit is that all stack traces associated with the lockdep key are leaked when a workqueue is destroyed. Fix this by storing each unique stack trace once. Other changes in this patch are:

- Use NULL instead of { .nr_entries = 0 } to represent 'no trace'.
- Store a pointer to a stack trace in struct lock_class and struct lock_list instead of storing 'nr_entries' and 'offset'.

This patch avoids that the following program triggers the "BUG: MAX_STACK_TRACE_ENTRIES too low!" complaint:

  #include <fcntl.h>
  #include <unistd.h>

  int main()
  {
          for (;;) {
                  int fd = open("/dev/infiniband/rdma_cm", O_RDWR);
                  close(fd);
          }
  }

Suggested-by: Peter Zijlstra <peterz@infradead.org> Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yuyang Du <duyuyang@gmail.com> Link: https://lkml.kernel.org/r/20190722182443.216015-4-bvanassche@acm.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-07-25stacktrace: Constify 'entries' argumentsBart Van Assche1-2/+2
Make it clear to humans and to the compiler that the stack trace ('entries') arguments are not modified. Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Link: https://lkml.kernel.org/r/20190722182443.216015-3-bvanassche@acm.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-07-25locking/lockdep: Make it clear that what lock_class::key points at is not modifiedBart Van Assche1-1/+1
This patch does not change the behavior of the lockdep code. Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Link: https://lkml.kernel.org/r/20190722182443.216015-2-bvanassche@acm.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-07-25sched/fair: Use RCU accessors consistently for ->numa_groupJann Horn1-1/+9
The old code used RCU annotations and accessors inconsistently for ->numa_group, which can lead to use-after-frees and NULL dereferences. Let all accesses to ->numa_group use proper RCU helpers to prevent such issues. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Fixes: 8c8a743c5087 ("sched/numa: Use {cpu, pid} to create task groups for shared faults") Link: https://lkml.kernel.org/r/20190716152047.14424-3-jannh@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
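The accessor pattern in question, sketched with illustrative variable names:

  struct numa_group *ng;

  rcu_read_lock();
  ng = rcu_dereference(p->numa_group);
  if (ng)
          total_faults = ng->total_faults;
  rcu_read_unlock();

  /* and the updater publishes with the matching annotation: */
  rcu_assign_pointer(p->numa_group, grp);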
2019-07-25sched/fair: Don't free p->numa_faults with concurrent readersJann Horn1-2/+2
When going through execve(), zero out the NUMA fault statistics instead of freeing them. During execve, the task is reachable through procfs and the scheduler. A concurrent /proc/*/sched reader can read data from a freed ->numa_faults allocation (confirmed by KASAN) and write it back to userspace. I believe that it would also be possible for a use-after-free read to occur through a race between a NUMA fault and execve(): task_numa_fault() can lead to task_numa_compare(), which invokes task_weight() on the currently running task of a different CPU. Another way to fix this would be to make ->numa_faults RCU-managed or add extra locking, but it seems easier to wipe the NUMA fault statistics on execve. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Fixes: 82727018b0d3 ("sched/numa: Call task_numa_free() from do_execve()") Link: https://lkml.kernel.org/r/20190716152047.14424-1-jannh@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
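A sketch of the described behaviour; the 'final' flag and size variable are illustrative, distinguishing task exit from execve:

  void task_numa_free(struct task_struct *p, bool final)
  {
          if (final) {
                  kfree(p->numa_faults);  /* task is really going away */
                  p->numa_faults = NULL;
          } else {
                  /* execve: the task stays visible via procfs and the
                   * scheduler, so wipe the stats but keep the allocation */
                  memset(p->numa_faults, 0, faults_size /* illustrative */);
          }
  }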
2019-07-25driver core: Remove device link creation limitationRafael J. Wysocki1-1/+3
If device_link_add() is called for a consumer/supplier pair with an existing device link between them and the existing link's type is not in agreement with the flags passed to that function by its caller, NULL will be returned. That is seriously inconvenient, because it forces the callers of device_link_add() to worry about what others may or may not do even if that is not relevant to them for any other reasons. It turns out, however, that this limitation can be made to go away relatively easily. The underlying observation is that if DL_FLAG_STATELESS has been passed to device_link_add() in flags for the given consumer/supplier pair at least once, calling either device_link_del() or device_link_remove() to release the link returned by it should work, but there are no other requirements associated with that flag. In turn, if at least one of the callers of device_link_add() for the given consumer/supplier pair has not passed DL_FLAG_STATELESS to it in flags, the driver core should track the status of the link and act on it as appropriate (i.e. the link should be treated as "managed"). This means that DL_FLAG_STATELESS needs to be set for managed device links and it should be valid to call device_link_del() or device_link_remove() to drop references to them in certain situations. To allow that to happen, introduce a new (internal) device link flag called DL_FLAG_MANAGED and make device_link_add() set it automatically whenever DL_FLAG_STATELESS is not passed to it. Also make it take additional references to existing device links that were previously stateless (that is, with DL_FLAG_STATELESS set and DL_FLAG_MANAGED unset) and will need to be managed going forward and initialize their status (which has been DL_STATE_NONE so far). Accordingly, when a managed device link is dropped automatically by the driver core, make it clear DL_FLAG_MANAGED, reset the link's status back to DL_STATE_NONE and drop the reference to it associated with DL_FLAG_MANAGED instead of just deleting it right away (to allow it to stay around in case it still needs to be released explicitly by someone). With that, since setting DL_FLAG_STATELESS doesn't mean that the device link in question is not managed any more, replace all of the status-tracking checks against DL_FLAG_STATELESS with analogous checks against DL_FLAG_MANAGED and update the documentation to reflect these changes. While at it, make device_link_add() reject flags that it does not recognize, including DL_FLAG_MANAGED. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Saravana Kannan <saravanak@google.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/2305283.AStDPdUUnE@kreacher Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
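A usage sketch contrasting the two kinds of links; the flags are from the description above, the surrounding code is illustrative:

  struct device_link *link;

  /* stateless: the caller owns the link's lifetime */
  link = device_link_add(consumer, supplier, DL_FLAG_STATELESS);
  if (link)
          device_link_del(link);

  /* no DL_FLAG_STATELESS: the core sets DL_FLAG_MANAGED internally
   * and tracks the link's status across binds and unbinds */
  device_link_add(consumer, supplier, DL_FLAG_PM_RUNTIME);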
2019-07-25intel_th: msu: Introduce buffer interfaceAlexander Shishkin1-0/+79
Introduces a concept of external buffers, which is a mechanism for creating trace sinks that would receive trace data from MSC buffers and transfer it elsewhere. An external buffer can implement its own window allocation/deallocation if it has to. It must provide a callback that's used to notify it when a window fills up, so that it can then start a DMA transaction from that window 'elsewhere'. This window remains in a 'locked' state and won't be used for storing new trace data until the buffer 'unlocks' it with a provided API call, at which point the window can be used again for storing trace data. This relies on a functional "last block" interrupt, so not all versions of Trace Hub can use this feature, which does not reflect on existing users. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20190705141425.19894-2-alexander.shishkin@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-25usb: host: oxu210hp-hcd: remove include/linux/oxu210hp.hMasahiro Yamada1-8/+0
struct oxu210hp_platform_data is defined, but not used at all.

  $ git grep oxu210hp_platform_data
  include/linux/oxu210hp.h:struct oxu210hp_platform_data {

include/linux/oxu210hp.h exists just for defining an unused structure, so it can go away. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Link: https://lore.kernel.org/r/20190721144909.5295-1-yamada.masahiro@socionext.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-25Input: allow drivers specify timestamp for input eventsAtif Niyaz1-0/+14
Currently, evdev stamps events with timestamps acquired in evdev_events(). However, this timestamping may not be accurate in terms of measuring when the actual event happened. Let's allow individual drivers to specify the timestamp in order to provide a more accurate sense of time for the event. It is expected that drivers will set the timestamp in their hard interrupt routine. Signed-off-by: Atif Niyaz <atifniyaz@google.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
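A sketch of a driver stamping its events from the hard interrupt handler, using the helper this patch introduces; the driver scaffolding is hypothetical:

  static irqreturn_t mydrv_irq(int irq, void *dev_id)
  {
          struct input_dev *input = dev_id;

          input_set_timestamp(input, ktime_get()); /* time of the IRQ */
          input_report_key(input, KEY_POWER, 1);
          input_sync(input);

          return IRQ_HANDLED;
  }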
2019-07-25fpga: altera-pr-ip: Make alt_pr_unregister function voidMoritz Fischer1-1/+1
Make the alt_pr_unregister function void, since it always returns 0 and nothing would act on the value anyway. Signed-off-by: Moritz Fischer <mdf@kernel.org>
2019-07-24access: avoid the RCU grace period for the temporary subjective credentialsLinus Torvalds1-1/+7
It turns out that 'access()' (and 'faccessat()') can cause a lot of RCU work because it installs a temporary credential that gets allocated and freed for each system call. The allocation and freeing overhead is mostly benign, but because credentials can be accessed under the RCU read lock, the freeing involves an RCU grace period. Which is not a huge deal normally, but if you have a lot of access() calls, this causes a fair amount of secondary damage: instead of having a nice alloc/free pattern that hits in hot per-CPU slab caches, you have all those delayed frees, and on big machines with hundreds of cores, the RCU overhead can end up being enormous. But it turns out that all of this is entirely unnecessary. Exactly because access() only installs the credential as the thread-local subjective credential, the temporary cred pointer doesn't actually need to be RCU-freed at all. Once we're done using it, we can just free it synchronously and avoid all the RCU overhead. So add a 'non_rcu' flag to 'struct cred', which can be set by users that know they only use it in non-RCU context (there are other potential users for this). We can make it a union with the rcu freeing list head that we need for the RCU case, so this doesn't need any extra storage. Note that this also makes 'get_current_cred()' clear the new non_rcu flag, in case we have filesystems that take a long-term reference to the cred and then expect the RCU delayed freeing afterwards. It's not entirely clear that this is required, but it makes for clear semantics: the subjective cred remains non-RCU as long as you only access it synchronously using the thread-local accessors, but you _can_ use it as a generic cred if you want to. It is possible that we should just remove the whole RCU markings for ->cred entirely. Only ->real_cred is really supposed to be accessed through RCU, and the long-term cred copies that nfs uses might want to explicitly re-enable RCU freeing if required, rather than have get_current_cred() do it implicitly. But this is a "minimal semantic changes" change for the immediate problem. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Jan Glauber <jglauber@marvell.com> Cc: Jiri Kosina <jikos@kernel.org> Cc: Jayachandran Chandrasekharan Nair <jnair@marvell.com> Cc: Greg KH <greg@kroah.com> Cc: Kees Cook <keescook@chromium.org> Cc: David Howells <dhowells@redhat.com> Cc: Miklos Szeredi <miklos@szeredi.hu> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
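The union and the resulting free path, sketched from the description above:

  struct cred {
          /* ... */
          union {
                  int non_rcu;            /* only used subjectively? */
                  struct rcu_head rcu;    /* RCU deletion hook */
          };
  };

  /* in the final put, roughly: */
  if (cred->non_rcu)
          put_cred_rcu(&cred->rcu);            /* free synchronously */
  else
          call_rcu(&cred->rcu, put_cred_rcu);  /* RCU-delayed free */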
2019-07-24lib/timerqueue: Rely on rbtree semantics for next timerDavidlohr Bueso1-7/+6
Simplify the timerqueue code by using cached rbtrees and rely on the tree leftmost node semantics to get the timer with earliest expiration time. This is a drop-in conversion, and therefore semantics remain untouched. The runtime overhead of cached rbtrees is pretty much the same as the current head->next method, noting that when removing the leftmost node, a common operation for the timerqueue, the rb_next(leftmost) is O(1) as well, so the next timer will either be the right node or its parent. Therefore no extra pointer chasing. Finally, the size of the struct timerqueue_head remains the same. Passes several hours of rcutorture. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190724152323.bojciei3muvfxalm@linux-r8p5
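The conversion, sketched; field and helper names follow the description but should be treated as illustrative:

  struct timerqueue_head {
          struct rb_root_cached rb_root;  /* leftmost node is cached */
  };

  static inline struct timerqueue_node *
  timerqueue_getnext(struct timerqueue_head *head)
  {
          struct rb_node *leftmost = rb_first_cached(&head->rb_root);

          return rb_entry_safe(leftmost, struct timerqueue_node, node);
  }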
2019-07-24iommu: Introduce iommu_iotlb_gather_add_page()Will Deacon1-0/+31
Introduce a helper function for drivers to use when updating an iommu_iotlb_gather structure in response to an ->unmap() call, rather than having to open-code the logic in every page-table implementation. Signed-off-by: Will Deacon <will@kernel.org>
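A condensed sketch of the helper's logic, not the verbatim kernel code:

  static inline void
  iommu_iotlb_gather_add_page(struct iommu_domain *domain,
                              struct iommu_iotlb_gather *gather,
                              unsigned long iova, size_t size)
  {
          unsigned long end = iova + size;

          /* disjoint range or different granule: flush what we have
           * so the gather structure can be rewritten */
          if (gather->pgsize != size ||
              end < gather->start || iova > gather->end) {
                  if (gather->pgsize)
                          iommu_tlb_sync(domain, gather);
                  gather->pgsize = size;
          }

          gather->start = min(gather->start, iova);
          gather->end = max(gather->end, end);
  }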
2019-07-24iommu: Introduce struct iommu_iotlb_gather for batching TLB flushesWill Deacon1-4/+39
To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored. Signed-off-by: Will Deacon <will@kernel.org>
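The tracking structure and its initializer, sketched from the description (field names per the series, reproduced from memory):

  struct iommu_iotlb_gather {
          unsigned long start;
          unsigned long end;
          size_t        pgsize;   /* granule of the queued range */
  };

  static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather)
  {
          *gather = (struct iommu_iotlb_gather) {
                  .start = ULONG_MAX,     /* empty range */
          };
  }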
2019-07-24iommu/io-pgtable: Rename iommu_gather_ops to iommu_flush_opsWill Deacon1-3/+3
In preparation for TLB flush gathering in the IOMMU API, rename the iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which better describes its purpose and avoids the potential for confusion between different levels of the API. $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g' Signed-off-by: Will Deacon <will@kernel.org>
2019-07-24iommu: Remove empty iommu_tlb_range_add() callback from iommu_opsWill Deacon1-15/+0
Commit add02cfdc9bc ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU drivers (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues:

1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner.

2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes.

Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-24can: Add SPDX license identifiers for CAN subsystemOliver Hartkopp2-2/+2
Add missing SPDX identifiers for the CAN network layer and correct the SPDX license for two of its include files to make sure the BSD-3-Clause applies for the entire subsystem. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2019-07-24can: remove obsolete empty ioctl() handlerOliver Hartkopp1-1/+0
With commit c7cbdbf29f488a ("net: rework SIOCGSTAMP ioctl handling") the only ioctl function in can_ioctl() has been removed. As this SIOCGSTAMP ioctl command is now handled in net/socket.c we can entirely remove the CAN specific ioctl functions. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2019-07-24PCI: Remove pci_block_cfg_access() et al (unused)Kelsey Skunberg1-5/+0
Remove the following unused functions from include/linux/pci.h:

  pci_block_cfg_access()
  pci_block_cfg_access_in_atomic()
  pci_unblock_cfg_access()

These were added by fb51ccbf217c ("PCI: Rework config space blocking services"), though no callers were added. Link: https://lore.kernel.org/r/20190715203416.37547-1-skunberg.kelsey@gmail.com Signed-off-by: Kelsey Skunberg <skunberg.kelsey@gmail.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
2019-07-23bpf: fix narrower loads on s390Ilya Leoshkevich1-0/+13
The very first check in test_pkt_md_access is failing on s390, which happens because loading a part of a struct __sk_buff field produces an incorrect result. The preprocessed code of the check is:

  {
          __u8 tmp = *((volatile __u8 *)&skb->len +
                  ((sizeof(skb->len) - sizeof(__u8)) / sizeof(__u8)));
          if (tmp != ((*(volatile __u32 *)&skb->len) & 0xFF))
                  return 2;
  };

clang generates the following code for it:

  0: 71 21 00 03 00 00 00 00  r2 = *(u8 *)(r1 + 3)
  1: 61 31 00 00 00 00 00 00  r3 = *(u32 *)(r1 + 0)
  2: 57 30 00 00 00 00 00 ff  r3 &= 255
  3: 5d 23 00 1d 00 00 00 00  if r2 != r3 goto +29 <LBB0_10>

Finally, the verifier transforms it to:

  0: (61) r2 = *(u32 *)(r1 +104)
  1: (bc) w2 = w2
  2: (74) w2 >>= 24
  3: (bc) w2 = w2
  4: (54) w2 &= 255
  5: (bc) w2 = w2

The problem is that when the verifier emits the code to replace a partial load of a struct __sk_buff field (*(u8 *)(r1 + 3)) with a full load of the struct sk_buff field (*(u32 *)(r1 + 104)), an optional shift and a bitwise AND, it assumes that the machine is little endian and incorrectly decides to use a shift. Adjust the shift count calculation to account for endianness. Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
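The essence of the fix, as a sketch of the helper added to include/linux/filter.h (reproduced from memory; verify against the patch):

  static inline u8 bpf_ctx_narrow_access_offset(u32 off, u32 size,
                                                u32 size_default)
  {
          u8 access_off = off & (size_default - 1);

  #ifdef __LITTLE_ENDIAN
          return access_off;
  #else
          /* on big endian the requested bytes sit at the far end
           * of the full-width word, hence the extra shift */
          return size_default - (access_off + size);
  #endif
  }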
2019-07-23dma-mapping: use dma_get_mask in dma_addressing_limitedEric Auger1-2/+2
We currently have cases where dma_addressing_limited() gets called with dma_mask unset, which causes a NULL pointer dereference. Use the dma_get_mask() accessor to prevent the crash. Fixes: b866455423e0 ("dma-mapping: add a dma_addressing_limited helper") Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
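Why the accessor is NULL-safe, sketched from the dma-mapping header of that era:

  static inline u64 dma_get_mask(struct device *dev)
  {
          if (dev->dma_mask && *dev->dma_mask)
                  return *dev->dma_mask;
          return DMA_BIT_MASK(32);   /* sane default when unset */
  }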
2019-07-23block: blk-mq: Remove blk_mq_sched_started_request and started_requestMarcos Paulo de Souza1-1/+0
blk_mq_sched_started_request is a function that checks if the elevator related to the request has started_request implemented, but currently none of the available IO schedulers implement started_request, so remove both. Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-23fbdev: Ditch fb_edid_add_monspecsDaniel Vetter1-3/+0
It's dead code ever since

  commit 34280340b1dc74c521e636f45cd728f9abf56ee2
  Author: Geert Uytterhoeven <geert+renesas@glider.be>
  Date:   Fri Dec 4 17:01:43 2015 +0100

      fbdev: Remove unused SH-Mobile HDMI driver

Also with this gone we can remove the cea_modes db. This entire thing is massively incomplete anyway, compared to the CEA parsing that drm_edid.c does. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tavis Ormandy <taviso@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190721201956.941-1-daniel.vetter@ffwll.ch
2019-07-23iommu/iova: Fix compilation error with !CONFIG_IOMMU_IOVAJoerg Roedel1-1/+1
The stub function for !CONFIG_IOMMU_IOVA needs to be 'static inline'. Fixes: effa467870c76 ('iommu/vt-d: Don't queue_iova() if there is no flush queue') Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-23PM: sleep: Drop dpm_noirq_begin() and dpm_noirq_end()Rafael J. Wysocki1-4/+0
Note that after previous changes dpm_noirq_begin() and dpm_noirq_end() each have only one caller, so move the code from them to their respective callers and drop them. Also note that dpm_noirq_resume_devices() and dpm_noirq_suspend_devices() need not be exported any more, so make them both static. This change is not expected to alter functionality. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2019-07-23PM: sleep: Simplify suspend-to-idle control flowRafael J. Wysocki1-1/+0
After commit 33e4f80ee69b ("ACPI / PM: Ignore spurious SCI wakeups from suspend-to-idle") the "noirq" phases of device suspend and resume may run for multiple times during suspend-to-idle, if there are spurious system wakeup events while suspended. However, this is complicated and fragile and actually unnecessary. The main reason for doing this is that on some systems the EC may signal system wakeup events (power button events, for example) as well as events that should not cause the system to resume (spurious system wakeup events). Thus, in order to determine whether or not a given event signaled by the EC while suspended is a proper system wakeup one, the EC GPE needs to be dispatched and to start with that was achieved by allowing the ACPI SCI action handler to run, which was only possible after calling resume_device_irqs(). However, dispatching the EC GPE this way turned out to take too much time in some cases and some EC events might be missed due to that, so commit 68e22011856f ("ACPI: EC: Dispatch the EC GPE directly on s2idle wake") started to dispatch the EC GPE right after a wakeup event has been detected, so in fact the full ACPI SCI action handler doesn't need to run any more to deal with the wakeups coming from the EC. Use this observation to simplify the suspend-to-idle control flow so that the "noirq" phases of device suspend and resume are each run only once in every suspend-to-idle cycle, which is reported to significantly reduce power drawn by some systems when suspended to idle (by allowing them to reach a deep platform-wide low-power state through the suspend-to-idle flow). [What appears to happen is that the "noirq" resume of devices after a spurious EC wakeup brings some devices into a state in which they prevent the platform from reaching the deep low-power state going forward, even after a subsequent "noirq" suspend phase, and on some systems the EC triggers such wakeups already when the "noirq" suspend of devices is running for the first time in the given suspend/resume cycle, so the platform cannot reach the deep low-power state at all.] First, make acpi_s2idle_wake() use the acpi_ec_dispatch_gpe() return value to determine whether or not the wakeup may have been triggered by the EC (in which case the system wakeup is canceled and ACPI events are processed in order to determine whether or not the event is a proper system wakeup one) and use rearm_wake_irq() (introduced by a previous change) in it to rearm the ACPI SCI for system wakeup detection in case the system will remain suspended. Second, drop acpi_s2idle_sync(), which is not needed any more, and the corresponding global platform suspend-to-idle callback. Next, drop the pm_wakeup_pending() check (which is an optimization only) from __device_suspend_noirq() to prevent it from returning errors on system wakeups occurring before the "noirq" phase of device suspend is complete (as in the case of suspend-to-idle it is not known whether or not these wakeups are spurious at that point), in order to avoid having to carry out a "noirq" resume of devices on a spurious system wakeup. Finally, change the code flow in s2idle_loop() to (1) run the "noirq" suspend of devices once before starting the loop, (2) check for spurious EC wakeups (via the platform ->wake callback) for the first time before calling s2idle_enter(), and (3) run the "noirq" resume of devices once after leaving the loop. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2019-07-23PCI: irq: Introduce rearm_wake_irq()Rafael J. Wysocki1-0/+1
Introduce a new function, rearm_wake_irq(), allowing a wakeup IRQ to be armed for system wakeup detection again without running any action handlers associated with it after it has been armed for wakeup detection and triggered. That is useful for IRQs, like ACPI SCI, that may deliver wakeup as well as non-wakeup interrupts when armed for system wakeup detection. In those cases, it may be possible to determine whether or not the delivered interrupt is a system wakeup one without running the entire action handler (or handlers, if the IRQ is shared) for the IRQ, and if the interrupt turns out to be a non-wakeup one, the IRQ can be rearmed with the help of the new function. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2019-07-23net: Convert skb_frag_t to bio_vecMatthew Wilcox (Oracle)2-8/+6
There are a lot of users of frag->page_offset, so use a union to avoid converting those users today. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
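The transitional shape, sketched from the description above (layout reproduced from memory; verify against the patch):

  struct bio_vec {
          struct page *bv_page;
          unsigned int bv_len;
          union {
                  __u32 bv_offset;     /* bvec name */
                  __u32 page_offset;   /* legacy skb_frag name */
          };
  };

  typedef struct bio_vec skb_frag_t;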
2019-07-23net: Rename skb_frag_t size to bv_lenMatthew Wilcox (Oracle)1-5/+5
Improved compatibility with bvec Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-23net: Rename skb_frag page to bv_pageMatthew Wilcox (Oracle)1-7/+5
One step closer to turning the skb_frag_t into a bio_vec. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-23net: Reorder the contents of skb_frag_tMatthew Wilcox (Oracle)1-1/+1
Match the layout of bio_vec. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-23net: Increase the size of skb_frag_tMatthew Wilcox (Oracle)1-5/+0
To increase commonality between block and net, we are going to replace the skb_frag_t with the bio_vec. This patch increases the size of skb_frag_t on 32-bit machines from 8 bytes to 12 bytes. The size is unchanged on 64-bit machines. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-23net: Use skb accessors in network coreMatthew Wilcox (Oracle)1-1/+1
In preparation for unifying the skb_frag and bio_vec, use the fine accessors which already exist and use skb_frag_t instead of struct skb_frag_struct. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-23firmware: qcom_scm: Cleanup code in qcom_scm_assign_mem()Stephen Boyd1-4/+5
There are some questionable coding styles in this function. It looks quite odd to deref a pointer with array indexing that only uses the first element. Also, destroying an input/output variable halfway through the function and then overwriting it on success is not clear. It's better to use a local variable and the kernel macros to step through each bit set in a bitmask and clearly show where outputs are set. Cc: Ian Jackson <ian.jackson@citrix.com> Cc: Julien Grall <julien.grall@arm.com> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Avaneesh Kumar Dwivedi <akdwived@codeaurora.org> Tested-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Stephen Boyd <swboyd@chromium.org> [bjorn: Changed for_each_set_bit() size to BITS_PER_LONG] Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
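A sketch of the clearer pattern described above; the variable names are illustrative:

  unsigned long srcvm_bits = *srcvm;  /* local copy, caller's value intact */
  unsigned int vmid, i = 0;

  for_each_set_bit(vmid, &srcvm_bits, BITS_PER_LONG)
          src[i++] = cpu_to_le32(vmid);

  /* only overwrite the in/out parameter once the call succeeded */
  if (!ret)
          *srcvm = next_vm;   /* next_vm: illustrative */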
2019-07-23clk: Add missing documentation of devm_clk_bulk_get_optional() argumentSylwester Nawrocki1-0/+1
Fix an incomplete devm_clk_bulk_get_optional() function documentation by adding description of the num_clks argument as in other *clk_bulk* functions. Fixes: 9bd5ef0bd874 ("clk: Add devm_clk_bulk_get_optional() function") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2019-07-22Merge v5.3-rc1 into drm-misc-nextMaxime Ripard584-6471/+10274
Noralf needs some SPI patches in 5.3 to merge some work on tinydrm. Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
2019-07-22arm: Use common cpu_topology structure and functions.Atish Patra1-4/+2
Currently, ARM32 and ARM64 use different data structures to represent their cpu topologies. Since we are moving the ARM64 topology to common code to be used by other architectures, we can reuse that for ARM32 as well. Take this opportunity to remove the redundant functions from ARM32 and reuse the common code instead. To: Russell King <linux@armlinux.org.uk> Signed-off-by: Atish Patra <atish.patra@wdc.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> (on TC2) Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-07-22cpu-topology: Move cpu topology code to common code.Atish Patra2-0/+29
Both RISC-V & ARM64 use the cpu-map device tree node to describe their cpu topology. It's better to move the relevant code to a common place instead of duplicating it. To: Will Deacon <will.deacon@arm.com> To: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> [Tested on QDF2400] Tested-by: Jeffrey Hugo <jhugo@codeaurora.org> [Tested on Juno and other embedded platforms.] Tested-by: Sudeep Holla <sudeep.holla@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-07-22Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/netLinus Torvalds2-3/+1
Pull networking fixes from David Miller:

1) Several netfilter fixes including a nfnetlink deadlock fix from Florian Westphal and fix for dropping VRF packets from Miaohe Lin.

2) Flow offload fixes from Pablo Neira Ayuso including a fix to restore proper block sharing.

3) Fix r8169 PHY init from Thomas Voegtle.

4) Fix memory leak in mac80211, from Lorenzo Bianconi.

5) Missing NULL check on object allocation in cxgb4, from Navid Emamdoost.

6) Fix scaling of RX power in sfp phy driver, from Andrew Lunn.

7) Check that there is actually an ip header to access in skb->data in VRF, from Peter Kosyh.

8) Remove spurious rcu unlock in hv_netvsc, from Haiyang Zhang.

9) One more tweak to the TCP fragmentation memory limit changes, to be less harmful to applications setting small SO_SNDBUF values. From Eric Dumazet.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (40 commits)
  tcp: be more careful in tcp_fragment()
  hv_netvsc: Fix extra rcu_read_unlock in netvsc_recv_callback()
  vrf: make sure skb->data contains ip header to make routing
  connector: remove redundant input callback from cn_dev
  qed: Prefer pcie_capability_read_word()
  igc: Prefer pcie_capability_read_word()
  cxgb4: Prefer pcie_capability_read_word()
  be2net: Synchronize be_update_queues with dev_watchdog
  bnx2x: Prevent load reordering in tx completion processing
  net: phy: sfp: hwmon: Fix scaling of RX power
  net: sched: verify that q!=NULL before setting q->flags
  chelsio: Fix a typo in a function name
  allocate_flower_entry: should check for null deref
  net: hns3: typo in the name of a constant
  kbuild: add net/netfilter/nf_tables_offload.h to header-test blacklist.
  tipc: Fix a typo
  mac80211: don't warn about CW params when not using them
  mac80211: fix possible memory leak in ieee80211_assign_beacon
  nl80211: fix NL80211_HE_MAX_CAPABILITY_LEN
  nl80211: fix VENDOR_CMD_RAW_DATA
  ...
2019-07-22iommu/vt-d: Don't queue_iova() if there is no flush queueDmitry Safonov1-0/+6
Intel VT-d driver was reworked to use common deferred flushing implementation. Previously there was one global per-cpu flush queue; afterwards, one per domain. Before deferring a flush, the queue should be allocated and initialized. Currently only domains with IOMMU_DOMAIN_DMA type initialize their flush queue. It's probably worth initializing it for static or unmanaged domains too, but it may be arguable - I'm leaving it to iommu folks. Prevent queuing an iova flush if the domain doesn't have a queue. The defensive check seems worth keeping even if the queue were initialized for all kinds of domains. And it is easily backportable. On the 4.19.43 stable kernel it has a user-visible effect: previously there were crashes for devices in the si domain, on sata devices:

  BUG: spinlock bad magic on CPU#6, swapper/0/1
   lock: 0xffff88844f582008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
  CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
  Call Trace:
   <IRQ>
   dump_stack+0x61/0x7e
   spin_bug+0x9d/0xa3
   do_raw_spin_lock+0x22/0x8e
   _raw_spin_lock_irqsave+0x32/0x3a
   queue_iova+0x45/0x115
   intel_unmap+0x107/0x113
   intel_unmap_sg+0x6b/0x76
   __ata_qc_complete+0x7f/0x103
   ata_qc_complete+0x9b/0x26a
   ata_qc_complete_multiple+0xd0/0xe3
   ahci_handle_port_interrupt+0x3ee/0x48a
   ahci_handle_port_intr+0x73/0xa9
   ahci_single_level_irq_intr+0x40/0x60
   __handle_irq_event_percpu+0x7f/0x19a
   handle_irq_event_percpu+0x32/0x72
   handle_irq_event+0x38/0x56
   handle_edge_irq+0x102/0x121
   handle_irq+0x147/0x15c
   do_IRQ+0x66/0xf2
   common_interrupt+0xf/0xf
  RIP: 0010:__do_softirq+0x8c/0x2df

The same for usb devices that use ehci-pci:

  BUG: spinlock bad magic on CPU#0, swapper/0/1
   lock: 0xffff88844f402008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
  Call Trace:
   <IRQ>
   dump_stack+0x61/0x7e
   spin_bug+0x9d/0xa3
   do_raw_spin_lock+0x22/0x8e
   _raw_spin_lock_irqsave+0x32/0x3a
   queue_iova+0x77/0x145
   intel_unmap+0x107/0x113
   intel_unmap_page+0xe/0x10
   usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
   usb_hcd_unmap_urb_for_dma+0x17/0x100
   unmap_urb_for_dma+0x22/0x24
   __usb_hcd_giveback_urb+0x51/0xc3
   usb_giveback_urb_bh+0x97/0xde
   tasklet_action_common.isra.4+0x5f/0xa1
   tasklet_action+0x2d/0x30
   __do_softirq+0x138/0x2df
   irq_exit+0x7d/0x8b
   smp_apic_timer_interrupt+0x10f/0x151
   apic_timer_interrupt+0xf/0x20
   </IRQ>
  RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39

Cc: David Woodhouse <dwmw2@infradead.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: iommu@lists.linux-foundation.org Cc: <stable@vger.kernel.org> # 4.14+ Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing") Signed-off-by: Dmitry Safonov <dima@arista.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
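The defensive check, sketched from the description; the helper shape is assumed and the caller fragment is illustrative:

  static inline bool has_iova_flush_queue(struct iova_domain *iovad)
  {
          return !!iovad->fq;
  }

  /* unmap path: fall back to a direct free when there is no queue */
  if (!has_iova_flush_queue(iovad))
          free_iova_fast(iovad, pfn, pages);
  else
          queue_iova(iovad, pfn, pages, data);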
2019-07-22dmaengine: edma: make edma_filter_fn privateArnd Bergmann1-29/+0
With the audio driver no longer referring to this function, it can be made private to the dmaengine driver itself, and the header file removed. Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20190722081705.2084961-2-arnd@arndb.de Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-07-22dmaengine: omap-dma: make omap_dma_filter_fn privateArnd Bergmann2-20/+0
With the audio driver no longer referring to this function, it can be made private to the dmaengine driver itself, and the header file removed. Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/lkml/20190307151646.1016966-1-arnd@arndb.de/ Signed-off-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20190722081705.2084961-1-arnd@arndb.de Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-07-22bpf: sockmap/tls, close can race with map freeJohn Fastabend1-1/+7
When a map free is called and in parallel a socket is closed we have two paths that can potentially reset the socket prot ops, the bpf close() path and the map free path. This creates a problem with which prot ops should be used from the socket closed side. If the map_free side completes first then we want to call the original lowest level ops. However, if the tls path runs first we want to call the sockmap ops. Additionally there was no locking around prot updates in TLS code paths, so the prot ops could be changed multiple times, once from the TLS path and again from the sockmap side, potentially leaving ops pointed at either TLS or sockmap when psock and/or tls context have already been destroyed. To fix this race, first only update ops inside the callback lock so that TLS, sockmap and the lowest level all agree on prot state. Second, add a ULP callback, update(), so that lower layers can inform the upper layer when they are being removed, allowing the upper layer to reset prot ops. This gets us close to allowing sockmap and tls to be stacked in arbitrary order but will save that patch for *next trees. v4: - make sure we don't free things for device; - remove the checks which swap the callbacks back only if TLS is at the top. Reported-by: syzbot+06537213db7ba2745c4a@syzkaller.appspotmail.com Fixes: 02c558b2d5d6 ("bpf: sockmap, support for msg_peek in sk_msg with redirect ingress") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-22blk-mq: allow REQ_NOWAIT to return an error inlineJens Axboe1-1/+4
By default, if a caller sets REQ_NOWAIT and we need to block, we'll return -EAGAIN through the bio->bi_end_io() callback. For some use cases, this makes it hard to use. Allow a caller to ask for inline return of errors related to blocking by also setting REQ_NOWAIT_INLINE. Signed-off-by: Jens Axboe <axboe@kernel.dk>
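A hedged usage sketch; the return-value convention follows the patch description, so verify the constant name against the header before relying on it:

  blk_qc_t qc;

  bio->bi_opf |= REQ_NOWAIT | REQ_NOWAIT_INLINE;
  qc = submit_bio(bio);
  if (qc == BLK_QC_T_EAGAIN) {
          /* would have blocked: the error came back inline,
           * no bi_end_io() callback will deliver -EAGAIN */
  }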
2019-07-22audit_inode(): switch to passing AUDIT_INODE_...Al Viro1-13/+7
don't bother with remapping LOOKUP_... values - all callers pass constants and we can just as well pass the right ones from the very beginning. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>