path: root/include
Age    Commit message    Author    Files    Lines
2020-10-01Merge tag 'soundwire-5.10-rc1' of ↵Greg Kroah-Hartman2-15/+40
git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire into char-misc-next

Vinod writes:

soundwire updates for 5.10-rc1

This round of update includes:
 - Generic bandwidth allocation algorithm from Intel folks
 - PM support for Intel chipsets
 - Updates to Intel drivers which makes sdw usable on latest laptops
 - Support for MMIO SDW controllers found in QC chipsets
 - Update to subsystem to use helpers in bitfield.h to manage register bits

* tag 'soundwire-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire: (66 commits)
  soundwire: sysfs: add slave status and device number before probe
  soundwire: bus: add enumerated Slave device to device list
  soundwire: remove an unnecessary NULL check
  soundwire: cadence: add data port test fail interrupt
  soundwire: intel: enable test modes
  soundwire: enable Data Port test modes
  soundwire: intel: use {u32|u16}p_replace_bits
  soundwire: cadence: use u32p_replace_bits
  soundwire: qcom: get max rows and cols info from compatible
  soundwire: qcom: add support to block packing mode
  soundwire: qcom: clear BIT FIELDs before value set.
  soundwire: Add generic bandwidth allocation algorithm
  soundwire: cadence: add parity error injection through debugfs
  soundwire: bus: export broadcast read/write capability for tests
  ASoC: codecs: realtek-soundwire: ignore initial PARITY errors
  soundwire: bus: use quirk to filter out invalid parity errors
  soundwire: slave: add first_interrupt_done status
  soundwire: bus: filter-out unwanted interrupt reports
  ASoC/soundwire: bus: use property to set interrupt masks
  soundwire: qcom: fix SLIBMUS/SLIMBUS typo
  ...
2020-10-01regulator: qcom_smd: Add PM660/PM660L regulator supportAngeloGioacchino Del Regno1-0/+4
The PM660 and PM660L are a very common PMIC combo, found on boards using the SDM630, SDM636, SDM660 (and SDA variants) SoCs. PM660 provides 6 SMPS and 19 LDOs (of which one is inaccessible), while PM660L provides 5 SMPS (of which S3 and S4 are combined), 10 LDOs and a Buck-or-Boost (BoB) regulator. The PM660L IC also provides other regulators that are very specialized (for example, for the display) and will be managed in the appropriate drivers (for example, labibb). Signed-off-by: AngeloGioacchino Del Regno <kholk11@gmail.com> Link: https://lore.kernel.org/r/20200926125549.13191-6-kholk11@gmail.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-10-01RDMA/mlx5: Extend advice MR to support non faulting modeYishai Hadas1-0/+1
Extend the advice MR to support a non-faulting mode; this can improve performance by increasing the number of populated page tables in the device. Link: https://lore.kernel.org/r/20200930163828.1336747-4-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-10-01IB/core: Enable ODP sync without faultingYishai Hadas1-1/+1
Enable ODP sync without faulting; this improves performance by reducing the number of page faults in the system. The gain from this option is that the device page table can be aligned with the pages present in the CPU page table without causing page faults. This drops the data-path overhead, from the hardware's point of view, of triggering a fault that ends up calling into the driver to bring in the pages. Link: https://lore.kernel.org/r/20200930163828.1336747-3-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-10-01IB/core: Improve ODP to use hmm_range_fault()Yishai Hadas1-13/+8
Move to use hmm_range_fault() instead of get_user_pages_remote() to improve performance in a few aspects:

 - Drops the need to allocate and free memory to hold its output
 - No need any more to use put_page() to unpin the pages
 - The logic to detect contiguous pages is based on the returned order; no need to iterate over and evaluate each page

In addition, moving to hmm_range_fault() makes it possible to reduce page faults in the system with its snapshot mode; this will be introduced in the next patches of this series. As part of this, clean up some flows and use the required data structures to work with hmm_range_fault().

Link: https://lore.kernel.org/r/20200930163828.1336747-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
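For context, the hmm_range_fault() pattern the series moves to looks roughly like this; a minimal sketch, with the notifier, mm and pfn-array names assumed for illustration rather than taken from the ODP code:

/* Sketch: snapshot/fault a user VA range via HMM instead of GUP. */
struct hmm_range range = {
	.notifier	= &umem_odp->notifier,		/* assumed name */
	.start		= user_virt,
	.end		= user_virt + npages * PAGE_SIZE,
	.hmm_pfns	= pfn_array,			/* caller-provided */
	.default_flags	= fault ? HMM_PFN_REQ_FAULT : 0,
};
int ret;

range.notifier_seq = mmu_interval_read_begin(range.notifier);
mmap_read_lock(mm);
ret = hmm_range_fault(&range);	/* fills pfn_array, nothing to put_page() */
mmap_read_unlock(mm);
/* contiguity comes from hmm_pfn_to_map_order() on the returned pfns */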
2020-10-01dm: export dm_copy_name_and_uuidMike Snitzer1-2/+2
Allow DM targets to access the configured name and uuid. Also, bump DM ioctl version. Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2020-10-01Merge tag 'arm64-fixes' of ↵Linus Torvalds1-1/+1
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 fix from Catalin Marinas: "A previous commit to prevent AML memory opregions from accessing the kernel memory turned out to be too restrictive. Relax the permission check to permit the ACPI core to map kernel memory used for table overrides" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: permit ACPI core to map kernel memory used for table overrides
2020-10-01debugobjects: Free per CPU pool after CPU unplugZqiang1-0/+1
If a CPU is offlined, the debug objects per-CPU pool is not cleaned up. If the CPU is never onlined again, the objects in the pool are wasted. Add a CPU hotplug callback which is invoked after the CPU is dead to free the pool. [ tglx: Massaged changelog and added comment about remote access safety ] Signed-off-by: Zqiang <qiang.zhang@windriver.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Link: https://lore.kernel.org/r/20200908062709.11441-1-qiang.zhang@windriver.com
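The include/ change here is the new hotplug state; a minimal sketch of such a dead-state callback, with the pool and cache names assumed for illustration:

/* Sketch: teardown callback of a *_DEAD hotplug state. It runs on a
 * control CPU after the target CPU is gone, so the dead CPU's pool
 * can be walked without remote-access races. */
static int object_cpu_offline(unsigned int cpu)
{
	struct hlist_head *pool = per_cpu_ptr(&percpu_free_pool, cpu);
	struct hlist_node *tmp;
	struct debug_obj *obj;

	hlist_for_each_entry_safe(obj, tmp, pool, node) {
		hlist_del(&obj->node);
		kmem_cache_free(obj_cache, obj);	/* assumed cache name */
	}
	return 0;
}

/* registered along the lines of:
 *   cpuhp_setup_state_nocalls(CPUHP_DEBUG_OBJ_DEAD, "object:offline",
 *                             NULL, object_cpu_offline);
 */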
2020-10-01pipe: Fix memory leaks in create_pipe_files()Qian Cai1-0/+6
Calling pipe2() with O_NOTIFICATION_PIPE could result in memory leaks unless watch_queue_init() is successful. In case of watch_queue_init() failure in pipe2() we are left with inode and pipe_inode_info instances that need to be freed. That failure exit was introduced in commit c73be61cede5 ("pipe: Add general notification queue support") and its handling should've been identical to the nearby treatment of alloc_file_pseudo() failures - it is dealing with the same situation. As it is, the mainline kernel leaks in that case. Another problem is that the CONFIG_WATCH_QUEUE and !CONFIG_WATCH_QUEUE cases are treated differently (the former leaks just pipe_inode_info, the latter both pipe_inode_info and inode). Fixed by providing a dummy watch_queue_init() in the !CONFIG_WATCH_QUEUE case and by having failures of watch_queue_init() handled the same way we handle alloc_file_pseudo() ones. Fixes: c73be61cede5 ("pipe: Add general notification queue support") Signed-off-by: Qian Cai <cai@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
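The shape of the described !CONFIG_WATCH_QUEUE stub, as a sketch (the exact error code is an assumption):

#ifdef CONFIG_WATCH_QUEUE
extern int watch_queue_init(struct pipe_inode_info *pipe);
#else
static inline int watch_queue_init(struct pipe_inode_info *pipe)
{
	return -ENOPKG;	/* assumption: notification pipes not compiled in */
}
#endif

With the dummy in place, both configs take the same error path in create_pipe_files() and free the inode and pipe_inode_info there.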
2020-10-01iommu/vt-d: Check UAPI data processed by IOMMU coreJacob Pan1-0/+1
IOMMU generic layer already does sanity checks on UAPI data for version match and argsz range based on generic information. This patch adjusts the following data checking responsibilities:

 - removes the redundant version check from the VT-d driver
 - removes the check for vendor specific data size
 - adds a check for the use of reserved/undefined flags

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-7-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-10-01iommu/uapi: Handle data and argsz filled by usersJacob Pan2-9/+20
IOMMU user APIs are responsible for processing user data. This patch changes the interface such that user pointers can be passed into IOMMU code directly. Separate kernel APIs without user pointers are introduced for in-kernel users of the UAPI functionality. IOMMU UAPI data has a user-filled argsz field which indicates the data length of the structure. User data is not trusted; argsz must be validated based on the current kernel data size, mandatory data size, and feature flags. User data may also be extended, resulting in a possible argsz increase. Backward compatibility is ensured by checking size and flags (or the functionally equivalent fields). This patch adds sanity checks in the IOMMU layer. In addition to argsz, reserved/unused fields in padding, flags, and version are also checked. Details are documented in Documentation/userspace-api/iommu.rst Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-6-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
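A minimal sketch of the argsz handling pattern described above, built on copy_struct_from_user(); the function and variable names are illustrative:

#include <linux/uaccess.h>

/* Sketch: copy a user-filled, argsz-prefixed UAPI struct. The helper
 * zero-fills when argsz is smaller than the kernel struct and rejects
 * non-zero trailing bytes when it is larger. */
static int iommu_copy_user_data(void *kdata, size_t ksize, size_t minsz,
				const void __user *udata)
{
	u32 argsz;

	if (get_user(argsz, (u32 __user *)udata))
		return -EFAULT;
	if (argsz < minsz)	/* must cover the mandatory fields */
		return -EINVAL;
	return copy_struct_from_user(kdata, ksize, udata, argsz);
}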
2020-10-01iommu/uapi: Rename uapi functionsJacob Pan1-15/+16
User APIs such as iommu_sva_unbind_gpasid() may also be used by the kernel. Since we introduced user pointers to the UAPI functions, in-kernel callers cannot share the same APIs. In-kernel callers are also trusted; there is no need to validate the data. We plan to have two flavors of the same API functions: one called through ioctls, carrying a user pointer, and one called directly with valid IOMMU UAPI structs. To differentiate the two, rename the existing functions with an iommu_uapi_ prefix. Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-5-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-10-01iommu/uapi: Use named union for user dataJacob Pan1-2/+2
IOMMU UAPI data size is filled by the user space and must be validated by the kernel. To ensure backward compatibility, user data can only be extended by either re-purposing padding bytes or extending the variable-sized union at the end. No size change is allowed before the union; therefore, the minimum size is the offset of the union. To use offsetof() on the union, we must make it named. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/linux-iommu/20200611145518.0c2817d6@x1.home/ Link: https://lore.kernel.org/r/1601051567-54787-4-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
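Why the union must be named, shown on a made-up struct that mirrors the layout rule (not the actual UAPI):

#include <linux/types.h>
#include <stddef.h>

struct example_uapi {
	__u32 argsz;
	__u32 flags;
	union {			/* variable-sized tail, extendable later */
		__u64 wide;
		__u32 narrow;
	} vendor;		/* named, so offsetof() can reference it */
};

/* minimum acceptable argsz: everything up to the union */
#define EXAMPLE_UAPI_MINSZ	offsetof(struct example_uapi, vendor)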
2020-10-01iommu/uapi: Add argsz for user filled dataJacob Pan1-3/+9
As the IOMMU UAPI gets extended, user data size may increase. To support backward compatibility, this patch introduces a size field in each UAPI data structure. It is *always* the responsibility of the user to fill in the correct size. Padding fields are adjusted to ensure 8 byte alignment. Specific scenarios for user data handling are documented in: Documentation/userspace-api/iommu.rst As there are no current users of the API, the struct version is not incremented. Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-3-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-10-01bpf: Introduce BPF_F_PRESERVE_ELEMS for perf event arraySong Liu1-0/+3
Currently, a perf event in a perf event array is removed from the array when the map fd used to add the event is closed. This behavior makes it difficult to share perf events via a perf event array. Introduce a new map flag, BPF_F_PRESERVE_ELEMS, that keeps the perf events open. With this flag set, perf events in the array are not removed when the original map fd is closed. Instead, the perf event will stay in the map until 1) it is explicitly removed from the array; or 2) the array is freed. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200930224927.1936644-2-songliubraving@fb.com
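A userspace sketch of creating such an array with the new flag via the raw bpf(2) syscall:

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create a perf event array whose elements survive the close of the
 * map fd that installed them. */
static int create_preserved_perf_array(unsigned int nr_cpus)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type	 = BPF_MAP_TYPE_PERF_EVENT_ARRAY;
	attr.key_size	 = sizeof(int);
	attr.value_size	 = sizeof(int);
	attr.max_entries = nr_cpus;
	attr.map_flags	 = BPF_F_PRESERVE_ELEMS;

	return syscall(SYS_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}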
2020-10-01net/mlx5: E-switch, Use PF num in metadata reg c0sunils1-7/+8
Currently only 256 vports can be supported, as only 8 bits are reserved for them and 8 bits are reserved for vhca_ids in metadata reg c0. To support more than 256 vports, replace the vhca_id with a unique, shorter 4-bit PF number, which covers up to 16 PFs. Use the remaining 12 bits for vports, ranging 1-4095. This will continue to generate unique metadata even if multiple PCI devices have the same switch_id. Signed-off-by: sunils <sunils@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Vu Pham <vuhuong@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
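The resulting layout of the 16 metadata bits, sketched as macros (names are illustrative, not the driver's):

/* reg_c0 metadata: [15:12] PF number, [11:0] vport number */
#define ESW_PFNUM_BITS		4
#define ESW_VPORT_BITS		12
#define ESW_PFNUM_MASK		((1u << ESW_PFNUM_BITS) - 1)
#define ESW_VPORT_MASK		((1u << ESW_VPORT_BITS) - 1)

#define ESW_METADATA(pfnum, vport) \
	((((pfnum) & ESW_PFNUM_MASK) << ESW_VPORT_BITS) | \
	 ((vport) & ESW_VPORT_MASK))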
2020-10-01io_uring: provide IORING_ENTER_SQ_WAIT for SQPOLL SQ ring waitsJens Axboe1-0/+1
When using SQPOLL, applications can run into the issue of running out of SQ ring entries because the thread hasn't consumed them yet. The only option for dealing with that is checking later, or busy checking for the condition. Provide IORING_ENTER_SQ_WAIT if applications want to wait on this condition. Signed-off-by: Jens Axboe <axboe@kernel.dk>
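A userspace sketch of waiting for SQ space with the new flag (raw syscall, no liburing):

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Block until the SQPOLL thread has consumed SQ entries instead of
 * busy-polling the ring head from userspace. */
static int io_uring_wait_sq_space(int ring_fd)
{
	return syscall(__NR_io_uring_enter, ring_fd,
		       0 /* to_submit */, 0 /* min_complete */,
		       IORING_ENTER_SQ_WAIT, NULL /* sigset */, 0);
}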
2020-10-01fs: align IOCB_* flags with RWF_* flagsJens Axboe1-18/+19
We have a set of flags that are shared between the two and inherited in kiocb_set_rw_flags(), but we check and set these individually. Reorder the IOCB flags so that the bottom part of the space is synced with the RWF flag space, and then we can do them all in one mask-and-set operation. The only exception is RWF_SYNC, which needs to mark IOCB_SYNC and IOCB_DSYNC. Do that one separately. This shaves 15 bytes of text from kiocb_set_rw_flags() for me. Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
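The resulting translation, as a sketch (simplified; the real function also rejects unsupported flags):

/* With the low IOCB_* bits lined up one-to-one with RWF_*, all shared
 * flags transfer in a single mask-and-or. RWF_SYNC stays special since
 * it must set IOCB_DSYNC in addition to IOCB_SYNC. */
static inline void kiocb_set_rw_flags_sketch(struct kiocb *ki, rwf_t flags)
{
	ki->ki_flags |= (__force int)(flags & RWF_SUPPORTED);
	if (flags & RWF_SYNC)
		ki->ki_flags |= IOCB_DSYNC;
}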
2020-10-01io_uring: allow disabling rings during the creationStefano Garzarella1-0/+2
This patch adds a new IORING_SETUP_R_DISABLED flag to start the rings disabled, allowing the user to register restrictions, buffers, and files before starting to process SQEs. When IORING_SETUP_R_DISABLED is set, SQEs are not processed and the SQPOLL kthread is not started. Restriction registration is allowed only while the rings are disabled, to prevent concurrency issues while processing SQEs. The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS opcode with io_uring_register(2). Suggested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
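A userspace sketch of the intended flow:

#include <linux/io_uring.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create the ring disabled, set everything up, then enable it. */
static int setup_restricted_ring(unsigned int entries)
{
	struct io_uring_params p;
	int fd;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_R_DISABLED;

	fd = syscall(__NR_io_uring_setup, entries, &p);
	if (fd < 0)
		return fd;

	/* ... register restrictions, buffers, files here ... */

	/* only now do SQE processing and the SQPOLL thread start */
	syscall(__NR_io_uring_register, fd,
		IORING_REGISTER_ENABLE_RINGS, NULL, 0);
	return fd;
}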
2020-10-01io_uring: add IOURING_REGISTER_RESTRICTIONS opcodeStefano Garzarella1-0/+31
The new io_uring_register(2) IOURING_REGISTER_RESTRICTIONS opcode permanently installs a feature allowlist on an io_ring_ctx. The io_ring_ctx can then be passed to untrusted code with the knowledge that only operations present in the allowlist can be executed. The allowlist approach ensures that new features added to io_uring do not accidentally become available when an existing application is launched on a newer kernel version. Currently it is possible to restrict sqe opcodes, sqe flags, and register opcodes. The IOURING_REGISTER_RESTRICTIONS call can only be made once. Afterwards it is not possible to change restrictions anymore. This prevents untrusted code from removing restrictions. Suggested-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
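A sketch of installing an sqe-opcode allowlist before handing the ring to untrusted code (note the uAPI constant itself is spelled IORING_REGISTER_RESTRICTIONS):

#include <linux/io_uring.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Allow only READV/WRITEV SQEs on this ring. Must be called while the
 * ring is still disabled (IORING_SETUP_R_DISABLED), and only once. */
static int restrict_ring_to_rw(int ring_fd)
{
	struct io_uring_restriction res[2];

	memset(res, 0, sizeof(res));
	res[0].opcode = IORING_RESTRICTION_SQE_OP;
	res[0].sqe_op = IORING_OP_READV;
	res[1].opcode = IORING_RESTRICTION_SQE_OP;
	res[1].sqe_op = IORING_OP_WRITEV;

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_RESTRICTIONS, res, 2);
}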
2020-10-01io_uring: use an enumeration for io_uring_register(2) opcodesStefano Garzarella1-11/+16
The enumeration allows us to keep track of the last io_uring_register(2) opcode available. Behaviour and opcodes names don't change. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-01io_uring: move io_uring_get_socket() into io_uring.hJens Axboe2-9/+5
Now that we have an io_uring kernel header, move this definition out of fs.h and into io_uring.h where it belongs. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-01io_uring: don't rely on weak ->files referencesJens Axboe2-0/+58
Grab actual references to the files_struct. To avoid circular reference issues due to this, we add a per-task note that keeps track of what io_uring contexts a task has used. When the task execs or exits its assigned files, we cancel requests based on this tracking. With that, we can grab proper references to the files table, and no longer need to rely on stashing away ring_fd and ring_file to check if the ring_fd may have been closed. Cc: stable@vger.kernel.org # v5.5+ Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-10-01drop_monitor: Filter control packets in drop monitorIdo Schimmel1-0/+2
Previously, devlink called into drop monitor in order to report hardware originated drops / exceptions. devlink intentionally filtered control packets and did not pass them to drop monitor as they were not dropped by the underlying hardware. Now drop monitor registers its probe on a generic 'devlink_trap_report' tracepoint and should therefore perform this filtering itself instead of having devlink do that. Add the trap type as metadata and have drop monitor ignore control packets. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-01drop_monitor: Convert to using devlink tracepointIdo Schimmel1-36/+0
Convert drop monitor to use the recently introduced 'devlink_trap_report' tracepoint instead of having devlink call into drop monitor. This is both consistent with software originated drops ('kfree_skb' tracepoint) and also allows drop monitor to be built as a module and still report hardware originated drops. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-01devlink: Add a tracepoint for trap reportsIdo Schimmel2-0/+51
Add a tracepoint for trap reports so that drop monitor could register its probe on it. Use trace_devlink_trap_report_enabled() to avoid wasting cycles setting the trap metadata if the tracepoint is not enabled. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
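The guard pattern described above, as a sketch (the metadata-filling helper name is an assumption):

/* trace_<name>_enabled() is a static-key test that compiles down to a
 * patched-out branch when no probe is attached, so the metadata is
 * only assembled when someone is actually listening. */
if (trace_devlink_trap_report_enabled()) {
	struct devlink_trap_metadata metadata = {};

	devlink_trap_report_metadata_fill(&metadata, trap_item, skb);
	trace_devlink_trap_report(devlink, skb, &metadata);
}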
2020-10-01arm64: permit ACPI core to map kernel memory used for table overridesArd Biesheuvel1-1/+1
Jonathan reports that the strict policy for memory mapped by the ACPI core breaks the use case of passing ACPI table overrides via initramfs. This is due to the fact that the memory type used for loading the initramfs in memory is not recognized as a memory type that is typically used by firmware to pass firmware tables. Since the purpose of the strict policy is to ensure that no AML or other ACPI code can manipulate any memory that is used by the kernel to keep its internal state or the state of user tasks, we can relax the permission check, and allow mappings of memory that is reserved and marked as NOMAP via memblock, and therefore not covered by the linear mapping to begin with. Fixes: 1583052d111f ("arm64/acpi: disallow AML memory opregions to access kernel memory") Fixes: 325f5585ec36 ("arm64/acpi: disallow writeable AML opregion mapping for EFI code regions") Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20200929132522.18067-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-10-01tcp: add exponential backoff in __tcp_send_ack()Eric Dumazet1-1/+2
Whenever a host is under very high memory pressure, __tcp_send_ack() skb allocation fails, and we set up a 200 ms (TCP_DELACK_MAX) timer before retrying. On hosts with a high number of TCP sockets, we can spend a considerable amount of CPU cycles in these attempts, adding high pressure on various spinlocks in the mm layer, and ultimately blocking threads attempting to free space from making any progress. This patch adds standard exponential backoff to avoid adding fuel to the fire. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
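The backoff shape, sketched against the icsk fields (icsk_ack.retry being the counter added in the space freed by the icsk_ack.blocked removal below):

/* On skb allocation failure, retry the ACK after an exponentially
 * growing delay instead of a fixed TCP_DELACK_MAX, capped by the
 * TCP_RTO_MAX ceiling of the timer. */
unsigned long delay = TCP_DELACK_MAX << icsk->icsk_ack.retry;

if (delay < TCP_RTO_MAX)
	icsk->icsk_ack.retry++;
inet_csk_schedule_ack(sk);
icsk->icsk_ack.ato = TCP_ATO_MIN;
inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK, delay, TCP_RTO_MAX);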
2020-10-01inet: remove icsk_ack.blockedEric Dumazet1-2/+2
TCP has been using it to work around the possibility of tcp_delack_timer() finding the socket owned by the user. After commit 6f458dfb4092 ("tcp: improve latencies of timer triggered events") we added the TCP_DELACK_TIMER_DEFERRED atomic bit for more immediate recovery, so we can get rid of icsk_ack.blocked. This frees space that the following patch will reuse. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-10-01net: macb: move pdata to private headerAlexandre Belloni1-20/+0
struct macb_platform_data is only used by macb_pci to register the platform device, move its definition to cadence/macb.h and remove platform_data/macb.h Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-09-30HID: add vivaldi HID driverSean O'Brien1-0/+2
Add vivaldi HID driver. This driver allows us to read and report the top row layout of keyboards which provide a vendor-defined (Google) HID usage. Signed-off-by: Sean O'Brien <seobrien@chromium.org> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-09-30bpf: Add redirect_neigh helper as redirect drop-inDaniel Borkmann2-0/+19
Add a redirect_neigh() helper as redirect() drop-in replacement for the xmit side. Main idea for the helper is to be very similar in semantics to the latter just that the skb gets injected into the neighboring subsystem in order to let the stack do the work it knows best anyway to populate the L2 addresses of the packet and then hand over to dev_queue_xmit() as redirect() does.

This solves two bigger items: i) skbs don't need to go up to the stack on the host facing veth ingress side for traffic egressing the container to achieve the same for populating L2 which also has the huge advantage that ii) the skb->sk won't get orphaned in ip_rcv_core() when entering the IP routing layer on the host stack.

Given that skb->sk neither gets orphaned when crossing the netns as per 9c4c325252c5 ("skbuff: preserve sock reference when scrubbing the skb.") the helper can then push the skbs directly to the phys device where FQ scheduler can do its work and TCP stack gets proper backpressure given we hold on to skb->sk as long as skb is still residing in queues.

With the helper used in BPF data path to then push the skb to the phys device, I observed a stable/consistent TCP_STREAM improvement on veth devices for traffic going container -> host -> host -> container from ~10Gbps to ~15Gbps for a single stream in my test environment.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: David Ahern <dsahern@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Cc: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/bpf/f207de81629e1724899b73b8112e0013be782d35.1601477936.git.daniel@iogearbox.net
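A minimal tc-BPF sketch using the helper in its two-argument form as introduced here (the ifindex is an assumption):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define PHYS_IFINDEX 2	/* assumption: ifindex of the physical device */

/* Attached via tc on the host-facing veth: hand the skb to the
 * neighboring subsystem for L2 fill-in, then on to the phys device,
 * keeping skb->sk intact for TCP backpressure. */
SEC("classifier")
int redirect_to_phys(struct __sk_buff *skb)
{
	return bpf_redirect_neigh(PHYS_IFINDEX, 0 /* flags, must be 0 */);
}

char _license[] SEC("license") = "GPL";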
2020-09-30bpf, net: Rework cookie generator as per-cpu oneDaniel Borkmann3-2/+65
With its use in BPF, the cookie generator can be called very frequently in particular when used out of cgroup v2 hooks (e.g. connect / sendmsg) and attached to the root cgroup, for example, when used in v1/v2 mixed environments. In particular, when there's a high churn on sockets in the system there can be many parallel requests to the bpf_get_socket_cookie() and bpf_get_netns_cookie() helpers which then cause contention on the atomic counter. As similarly done in f991bd2e1421 ("fs: introduce a per-cpu last_ino allocator"), add a small helper library that both can use for the 64 bit counters. Given this can be called from different contexts, we also need to deal with potential nested calls even though in practice they are considered extremely rare. One idea as suggested by Eric Dumazet was to use a reverse counter for this situation since we don't expect 64 bit overflows anyways; that way, we can avoid bigger gaps in the 64 bit counter space compared to just batch-wise increase. Even on machines with small number of cores (e.g. 4) the cookie generation shrinks from min/max/med/avg (ns) of 22/50/40/38.9 down to 10/35/14/17.3 when run in parallel from multiple CPUs. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Link: https://lore.kernel.org/bpf/8a80b8d27d3c49f9a14e1d5213c19d8be87d1dc8.1601477936.git.daniel@iogearbox.net
2020-09-30bpf: Add classid helper only based on skb->skDaniel Borkmann1-0/+10
Similarly to 5a52ae4e32a6 ("bpf: Allow to retrieve cgroup v1 classid from v2 hooks"), add a helper to retrieve cgroup v1 classid solely based on the skb->sk, so it can be used as key as part of BPF map lookups out of tc from host ns, in particular given the skb->sk is retained these days when crossing net ns thanks to 9c4c325252c5 ("skbuff: preserve sock reference when scrubbing the skb."). This is similar to bpf_skb_cgroup_id() which implements the same for v2. Kubernetes ecosystem is still operating on v1 however, hence net_cls needs to be used there until this can be dropped in with the v2 helper of bpf_skb_cgroup_id(). Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/ed633cf27a1c620e901c5aa99ebdefb028dce600.1601477936.git.daniel@iogearbox.net
2020-09-30RDMA/core: Remove ucontext->closingJason Gunthorpe1-6/+0
Nothing reads this any more, and the reason for its existence has passed due to the deferred fput() scheme. Fixes: 8ea1f989aa07 ("drivers/IB,usnic: reduce scope of mmap_sem") Link: https://lore.kernel.org/r/0-v1-df64ff042436+42-uctx_closing_jgg@nvidia.com Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-30leds: pca963x: register LEDs immediately after parsing, get rid of platdataMarek Behún1-35/+0
Register LEDs immediately after parsing their properties. This allows us to get rid of platdata, and since no one in tree uses header linux/platform_data/leds-pca963x.h, remove it. Signed-off-by: Marek Behún <marek.behun@nic.cz> Cc: Peter Meerwald <p.meerwald@bct-electronic.com> Cc: Ricardo Ribalda <ribalda@kernel.org> Cc: Zahari Petkov <zahari@balena.io> Signed-off-by: Pavel Machek <pavel@ucw.cz>
2020-09-30leds: tca6507: Absorb platform dataMarek Behún1-21/+0
The only in-tree usage of this driver is via device-tree. No one else includes linux/leds-tca6507.h, so absorb the definition of the platdata structure. Signed-off-by: Marek Behún <marek.behun@nic.cz> Cc: NeilBrown <neilb@suse.de> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: H. Nikolaus Schaller <hns@goldelico.com> Signed-off-by: Pavel Machek <pavel@ucw.cz>
2020-09-30media: v4l2-subdev.h: fix a kernel-doc markupMauro Carvalho Chehab1-1/+1
As reported by Sphinx:

./Documentation/driver-api/media/v4l2-subdev:490: ./include/media/v4l2-subdev.h:384: WARNING: Unparseable C cross-reference: 'struct'
Invalid C declaration: Expected identifier in nested name, got keyword: struct [error at 6]
  struct
  ------^

The markup there is wrong: &struct &v4l2_input -> &struct v4l2_input

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
2020-09-30mfd: intel-m10-bmc: Add Intel MAX 10 BMC chip support for Intel FPGA PACXu Yilun1-0/+65
This patch implements the basic functions of the BMC chip for some Intel FPGA PCIe Acceleration Cards (PAC). The BMC is implemented using the Intel MAX 10 CPLD. This BMC chip is connected to the FPGA by a SPI bus. To provide direct register access from the FPGA, the "SPI slave to Avalon Master Bridge" (spi-avmm) IP is integrated in the chip. It converts encoded streams of bytes from the host into internal register reads/writes on the Avalon bus, so this driver uses regmap-spi-avmm for register access. Signed-off-by: Xu Yilun <yilun.xu@intel.com> Signed-off-by: Wu Hao <hao.wu@intel.com> Signed-off-by: Matthew Gerlach <matthew.gerlach@linux.intel.com> Signed-off-by: Russ Weight <russell.h.weight@intel.com> Reviewed-by: Tom Rix <trix@redhat.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-09-30ACPI / NUMA: Add stub function for pxm_to_node()Nathan Chancellor1-0/+5
After commit 01feba590cd6 ("ACPI: Do not create new NUMA domains from ACPI static tables that are not SRAT"):

$ scripts/config --file arch/x86/configs/x86_64_defconfig -d NUMA -e ACPI_NFIT
$ make -skj"$(nproc)" distclean defconfig drivers/acpi/nfit/
drivers/acpi/nfit/core.c: In function ‘acpi_nfit_register_region’:
drivers/acpi/nfit/core.c:3010:27: error: implicit declaration of function ‘pxm_to_node’; did you mean ‘xa_to_node’? [-Werror=implicit-function-declaration]
 3010 |   ndr_desc->target_node = pxm_to_node(spa->proximity_domain);
      |                           ^~~~~~~~~~~
      |                           xa_to_node
cc1: some warnings being treated as errors
...

Add a stub function like acpi_map_pxm_to_node() had so that the build continues to work.

Fixes: 01feba590cd6 ("ACPI: Do not create new NUMA domains from ACPI static tables that are not SRAT")
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
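The shape of such a stub, as a sketch (the fallback value is an assumption):

#ifdef CONFIG_ACPI_NUMA
int pxm_to_node(int pxm);
#else
static inline int pxm_to_node(int pxm)
{
	return 0;	/* assumption: no NUMA, everything lives on node 0 */
}
#endif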
2020-09-30mfd: lp87565: Add LP87524-Q1 variantLuca Ceresoli1-0/+1
Add support for the LP87524B/J/P-Q1 Four 4-MHz Buck Converter. This is a variant of the LP87565 having 4 single-phase outputs and up to 10 A of total output current. Signed-off-by: Luca Ceresoli <luca@lucaceresoli.net> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2020-09-30mtd: rawnand: Use the NAND framework user_conf object for ECC flagsMiquel Raynal1-1/+0
Instead of storing the ECC flags in chip->ecc.options, use nanddev->ecc.user_conf.flags. There is currently only one to save: NAND_ECC_MAXIMIZE. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20200827085208.16276-21-miquel.raynal@bootlin.com
2020-09-30mtd: rawnand: Use the ECC framework user input parsing bitsMiquel Raynal1-12/+0
Many helpers are generic to all NAND chips and should not be raw-NAND specific, so use the generic ones. To avoid moving all the raw NAND core "history" into the generic NAND layer, we keep part of this parsing in the raw NAND core to ensure backward compatibility. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20200827085208.16276-20-miquel.raynal@bootlin.com
2020-09-30mtd: rawnand: Use the ECC framework OOB layoutsMiquel Raynal1-3/+1
No need to have our own in the raw NAND core. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20200827085208.16276-18-miquel.raynal@bootlin.com
2020-09-30drm: drm_dsc.h: fix a kernel-doc markupMauro Carvalho Chehab1-1/+1
As warned by Sphinx:

./Documentation/gpu/drm-kms-helpers:305: ./include/drm/drm_dsc.h:587: WARNING: Unparseable C cross-reference: 'struct'
Invalid C declaration: Expected identifier in nested name, got keyword: struct [error at 6]
  struct
  ------^

The markup for one struct is wrong, as struct is used twice.

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/3d467022325e15bba8dcb13da8fb730099303266.1601467849.git.mchehab+huawei@kernel.org
2020-09-30Partially revert "video: fbdev: amba-clcd: Retire elder CLCD driver"Peter Collingbourne2-0/+377
Also partially revert the follow-up change "drm: pl111: Absorb the external register header". This reverts the parts of commits 7e4e589db76a3cf4c1f534eb5a09cc6422766b93 and 0fb8125635e8eb5483fb095f98dcf0651206a7b8 that touch paths outside of drivers/gpu/drm/pl111.

The fbdev driver is used by Android's FVP configuration. Using the DRM driver together with DRM's fbdev emulation results in a failure to boot Android. The root cause is that Android's generic fbdev userspace driver relies on the ability to set the pixel format via FBIOPUT_VSCREENINFO, which is not supported by fbdev emulation.

There have been other less critical behavioral differences identified between the fbdev driver and the DRM driver with fbdev emulation. The DRM driver exposes different values for the panel's width, height and refresh rate, and the DRM driver fails a FBIOPUT_VSCREENINFO syscall with yres_virtual greater than the maximum supported value instead of letting the syscall succeed and setting yres_virtual based on yres.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20200929195344.2219796-1-pcc@google.com
2020-09-30dt-bindings: ti-serdes-mux: Add defines for J7200 SoCRoger Quadros1-0/+22
There are 4 lanes in each J7200 SERDES. Each SERDES lane mux can select up to 4 different IPs. Define all the possible functions. Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Nishanth Menon <nm@ti.com> Reviewed-by: Vignesh Raghavendra <vigneshr@ti.com> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Peter Rosin <peda@axentia.se> Cc: Peter Rosin <peda@axentia.se> Link: https://lore.kernel.org/r/20200930122032.23481-2-rogerq@ti.com
2020-09-30netfilter: nf_tables: add userdata attributes to nft_chainJose M. Guisado Gomez2-0/+4
Enables storing userdata for nft_chain. Field udata points to user data and udlen stores its length. Adds new attribute flag NFTA_CHAIN_USERDATA. Signed-off-by: Jose M. Guisado Gomez <guigom@riseup.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-09-30gpio: uapi: document uAPI v1 as deprecatedKent Gibson1-0/+26
Update uAPI documentation to deprecate v1 structs and ioctls. Signed-off-by: Kent Gibson <warthog618@gmail.com> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
2020-09-30gpio: uapi: define uAPI v2Kent Gibson1-7/+284
Add a new version of the uAPI to address existing 32/64-bit alignment issues, add support for debounce and event sequence numbers, allow requested lines with different configurations, and provide some future proofing by adding padding reserved for future use.

The alignment issue relates to the gpioevent_data, which packs to different sizes on 32-bit and 64-bit platforms. That creates problems for 32-bit apps running on 64-bit kernels. uAPI v2 addresses that particular issue, and the problem more generally, by adding pad fields that explicitly pad structs out to 64-bit boundaries, so they will pack to the same size now, and even if some of the reserved padding is used for __u64 fields in the future. The new structs have been analysed with pahole to ensure that they are sized as expected and contain no implicit padding.

The lack of future proofing in v1 makes it impossible to, for example, add the debounce feature that is included in v2. The future proofing is addressed by providing configurable attributes in line config and reserved padding in all structs for future features. Specifically, the line request, config, info, info_changed and event structs receive updated versions and new ioctls.

As the majority of the structs and ioctls were being replaced, it is opportune to rework some of the other aspects of the uAPI:

v1 has three different flags fields, each with their own separate bit definitions. In v2 that is collapsed to one - gpio_v2_line_flag.

The handle and event requests are merged into a single request, the line request, as the two requests were mostly the same other than the edge detection provided by event requests. As a byproduct, the v2 uAPI allows for multiple lines producing edge events on the same line handle. This is a new capability as v1 only supports a single line in an event request. As a consequence, there are now only two types of file handle to be concerned with, the chip and the line, and it is clearer which ioctls apply to which type of handle.

There is also some minor renaming of fields for consistency compared to their v1 counterparts, e.g. offset rather than lineoffset or line_offset, and consumer rather than consumer_label. Additionally, v1 GPIOHANDLES_MAX becomes GPIO_V2_LINES_MAX in v2 for clarity, and the gpiohandle_data __u8 array becomes a bitmap in gpio_v2_line_values.

The v2 uAPI is mostly a reorganisation and extension of v1, so userspace code, particularly libgpiod, should readily port to it.

Signed-off-by: Kent Gibson <warthog618@gmail.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
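A userspace sketch of a v2 line request exercising the new per-line config attributes (chip path, consumer string and debounce period are arbitrary):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/gpio.h>

/* Request one line as a debounced, rising-edge input using uAPI v2. */
static int request_debounced_input(const char *chip_path, __u32 offset)
{
	struct gpio_v2_line_request req;
	int cfd, ret;

	cfd = open(chip_path, O_RDWR);
	if (cfd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.offsets[0] = offset;
	req.num_lines = 1;
	strncpy(req.consumer, "example", sizeof(req.consumer) - 1);
	req.config.flags = GPIO_V2_LINE_FLAG_INPUT |
			   GPIO_V2_LINE_FLAG_EDGE_RISING;

	/* per-line attribute: 5 ms debounce applied to the first line */
	req.config.num_attrs = 1;
	req.config.attrs[0].attr.id = GPIO_V2_LINE_ATTR_ID_DEBOUNCE;
	req.config.attrs[0].attr.debounce_period_us = 5000;
	req.config.attrs[0].mask = 1;	/* bit 0 -> first requested line */

	ret = ioctl(cfd, GPIO_V2_GET_LINE_IOCTL, &req);
	close(cfd);		/* the line fd in req.fd remains valid */
	return ret < 0 ? -1 : req.fd;
}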