2022-09-28  Linux 5.10.146  (v5.10.146; Greg Kroah-Hartman; 1 file changed, -1/+1)
Link: https://lore.kernel.org/r/20220926100754.639112000@linuxfoundation.org Tested-by: Pavel Machek (CIP) <pavel@denx.de> Link: https://lore.kernel.org/r/20220926163550.904900693@linuxfoundation.org Tested-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Linux Kernel Functional Testing <lkft@linaro.org> Tested-by: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk> Tested-by: Pavel Machek (CIP) <pavel@denx.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  ext4: make directory inode spreading reflect flexbg size  (Jan Kara; 1 file changed, -1/+1)
commit 613c5a85898d1cd44e68f28d65eccf64a8ace9cf upstream. Currently the Orlov inode allocator searches for free inodes for a directory only in flex block groups with at most inodes_per_group/16 more directory inodes than average per flex block group. However with growing size of flex block group this becomes unnecessarily strict. Scale allowed difference from average directory count per flex block group with flex block group size as we do with other metrics. Tested-by: Stefan Wahren <stefan.wahren@i2se.com> Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Cc: stable@kernel.org Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/ Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20220908092136.11770-3-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  ext4: limit the number of retries after discarding preallocations blocks  (Theodore Ts'o; 1 file changed, -1/+3)
commit 80fa46d6b9e7b1527bfd2197d75431fd9c382161 upstream. This patch avoids threads live-locking for hours when a large number of threads are competing over the last few free extents as blocks get added to and removed from preallocation pools.

From our bug reporter: A reliable way to trigger this is to have multiple writers continuously write() to files when the filesystem is full, while small amounts of space are freed (e.g. by truncating a large file -1MiB at a time). In the local filesystem, this can be done by simply not checking the return code of write (0) and/or the error (ENOSPACE) that is set. Over NFS with an async mount, even clients with proper error checking will behave this way since the linux NFS client implementation will not propagate the server errors [the write syscalls immediately return success] until the file handle is closed. This leads to a situation where NFS clients send a continuous stream of WRITE rpcs which result in ERRNOSPACE -- but since the client isn't seeing this, the stream of writes continues at maximum network speed. When some space does appear, multiple writers will all attempt to claim it for their current write. For NFS, we may see dozens to hundreds of threads that do this.

The real-world scenario of this is database backup tooling (in particular, github.com/mdkent/percona-xtrabackup) which may write large files (>1TiB) to NFS for safe keeping. Some temporary files are written, rewound, and read back -- all before closing the file handle (the temp file is actually unlinked, to trigger automatic deletion on close/crash.) An application like this operating on an async NFS mount will not see an error code until TiB have been written/read. The lockup was observed when running this database backup on large filesystems (64 TiB in this case) with a high number of block groups and no free space. Fragmentation is generally not a factor in this filesystem (~thousands of large files, mostly contiguous except for the parts written while the filesystem is at capacity.) Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0  (Luís Henriques; 1 file changed, -0/+4)
commit 29a5b8a137ac8eb410cc823653a29ac0e7b7e1b0 upstream. When walking through an inode's extents, the ext4_ext_binsearch_idx() function assumes that the extent header has been previously validated. However, there are no checks that verify that the number of entries (eh->eh_entries) is non-zero when depth is > 0. And this will lead to problems because EXT_FIRST_INDEX() and EXT_LAST_INDEX() will return garbage and result in this:

[ 135.245946] ------------[ cut here ]------------
[ 135.247579] kernel BUG at fs/ext4/extents.c:2258!
[ 135.249045] invalid opcode: 0000 [#1] PREEMPT SMP
[ 135.250320] CPU: 2 PID: 238 Comm: tmp118 Not tainted 5.19.0-rc8+ #4
[ 135.252067] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b-rebuilt.opensuse.org 04/01/2014
[ 135.255065] RIP: 0010:ext4_ext_map_blocks+0xc20/0xcb0
[ 135.256475] Code:
[ 135.261433] RSP: 0018:ffffc900005939f8 EFLAGS: 00010246
[ 135.262847] RAX: 0000000000000024 RBX: ffffc90000593b70 RCX: 0000000000000023
[ 135.264765] RDX: ffff8880038e5f10 RSI: 0000000000000003 RDI: ffff8880046e922c
[ 135.266670] RBP: ffff8880046e9348 R08: 0000000000000001 R09: ffff888002ca580c
[ 135.268576] R10: 0000000000002602 R11: 0000000000000000 R12: 0000000000000024
[ 135.270477] R13: 0000000000000000 R14: 0000000000000024 R15: 0000000000000000
[ 135.272394] FS: 00007fdabdc56740(0000) GS:ffff88807dd00000(0000) knlGS:0000000000000000
[ 135.274510] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 135.276075] CR2: 00007ffc26bd4f00 CR3: 0000000006261004 CR4: 0000000000170ea0
[ 135.277952] Call Trace:
[ 135.278635] <TASK>
[ 135.279247] ? preempt_count_add+0x6d/0xa0
[ 135.280358] ? percpu_counter_add_batch+0x55/0xb0
[ 135.281612] ? _raw_read_unlock+0x18/0x30
[ 135.282704] ext4_map_blocks+0x294/0x5a0
[ 135.283745] ? xa_load+0x6f/0xa0
[ 135.284562] ext4_mpage_readpages+0x3d6/0x770
[ 135.285646] read_pages+0x67/0x1d0
[ 135.286492] ? folio_add_lru+0x51/0x80
[ 135.287441] page_cache_ra_unbounded+0x124/0x170
[ 135.288510] filemap_get_pages+0x23d/0x5a0
[ 135.289457] ? path_openat+0xa72/0xdd0
[ 135.290332] filemap_read+0xbf/0x300
[ 135.291158] ? _raw_spin_lock_irqsave+0x17/0x40
[ 135.292192] new_sync_read+0x103/0x170
[ 135.293014] vfs_read+0x15d/0x180
[ 135.293745] ksys_read+0xa1/0xe0
[ 135.294461] do_syscall_64+0x3c/0x80
[ 135.295284] entry_SYSCALL_64_after_hwframe+0x46/0xb0

This patch simply adds an extra check in __ext4_ext_check(), verifying that eh_entries is not 0 when eh_depth is > 0. Link: https://bugzilla.kernel.org/show_bug.cgi?id=215941 Link: https://bugzilla.kernel.org/show_bug.cgi?id=216283 Cc: Baokun Li <libaokun1@huawei.com> Cc: stable@kernel.org Signed-off-by: Luís Henriques <lhenriques@suse.de> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Baokun Li <libaokun1@huawei.com> Link: https://lore.kernel.org/r/20220822094235.2690-1-lhenriques@suse.de Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
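For illustration, the added validation is roughly of the following shape (a simplified sketch in the style of __ext4_ext_check(); the exact message text and surrounding error handling are the function's usual pattern, not quoted from the patch):
----------
/* An index node (depth > 0) must carry at least one entry, otherwise
 * EXT_FIRST_INDEX()/EXT_LAST_INDEX() point at garbage. */
if (unlikely(le16_to_cpu(eh->eh_entries) == 0 && depth > 0)) {
	error_msg = "eh_entries is 0 but eh_depth is > 0";
	goto corrupted;
}
----------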
2022-09-28  devdax: Fix soft-reservation memory description  (Dan Williams; 1 file changed, -0/+1)
commit 67feaba413ec68daf4124e9870878899b4ed9a0e upstream. The "hmem" platform-devices that are created to represent the platform-advertised "Soft Reserved" memory ranges end up inserting a resource that causes the iomem_resource tree to look like this:

340000000-43fffffff : hmem.0
  340000000-43fffffff : Soft Reserved
    340000000-43fffffff : dax0.0

This is because insert_resource() reparents ranges when they completely intersect an existing range. This matters because code that uses region_intersects() to scan for a given IORES_DESC will only check that top-level 'hmem.0' resource and not the 'Soft Reserved' descendant. So, to support EINJ (via einj_error_inject()) to inject errors into memory hosted by a dax-device, be sure to describe the memory as IORES_DESC_SOFT_RESERVED. This is a follow-on to: commit b13a3e5fd40b ("ACPI: APEI: Fix _EINJ vs EFI_MEMORY_SP") ...that fixed EINJ support for "Soft Reserved" ranges in the first instance. Fixes: 262b45ae3ab4 ("x86/efi: EFI soft reservation to E820 enumeration") Reported-by: Ricardo Sandoval Torres <ricardo.sandoval.torres@intel.com> Tested-by: Ricardo Sandoval Torres <ricardo.sandoval.torres@intel.com> Cc: <stable@vger.kernel.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Omar Avelar <omar.avelar@intel.com> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Mark Gross <markgross@kernel.org> Link: https://lore.kernel.org/r/166397075670.389916.7435722208896316387.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  i2c: mlxbf: Fix frequency calculation  (Asmaa Mnebhi; 1 file changed, -40/+23)
[ Upstream commit 37f071ec327b04c83d47637c5e5c2199b39899ca ] The i2c-mlxbf.c driver is currently broken because there is a bug in the calculation of the frequency. core_f, core_r and core_od are components read from hardware registers and are used to compute the frequency used to compute different timing parameters. The shifting mechanism used to get core_f, core_r and core_od is wrong. Use FIELD_GET to mask and shift the bitfields properly. Fixes: b5b5b32081cd206b (i2c: mlxbf: I2C SMBus driver for Mellanox BlueField SoC) Reviewed-by: Khalil Blaiech <kblaiech@nvidia.com> Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com> Signed-off-by: Wolfram Sang <wsa@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
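For illustration, the FIELD_GET() pattern the fix switches to looks like this. The mask names, bit positions, and the PLL formula below are illustrative only, not the driver's actual register layout:
----------
#include <linux/bitfield.h>

#define EXAMPLE_CORE_F_MASK	GENMASK(25, 16)	/* illustrative field positions */
#define EXAMPLE_CORE_R_MASK	GENMASK(15, 10)
#define EXAMPLE_CORE_OD_MASK	GENMASK(3, 0)

static u32 example_corepll_frequency(u32 corepll_val, u32 ref_clk_hz)
{
	/* FIELD_GET() masks and shifts in one step, so a field can never be
	 * extracted with a hand-written shift that doesn't match its mask. */
	u32 core_f  = FIELD_GET(EXAMPLE_CORE_F_MASK, corepll_val);
	u32 core_r  = FIELD_GET(EXAMPLE_CORE_R_MASK, corepll_val);
	u32 core_od = FIELD_GET(EXAMPLE_CORE_OD_MASK, corepll_val);

	/* illustrative PLL formula only */
	return ref_clk_hz * core_f / ((core_r + 1) * (core_od + 1));
}
----------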
2022-09-28  i2c: mlxbf: prevent stack overflow in mlxbf_i2c_smbus_start_transaction()  (Asmaa Mnebhi; 1 file changed, -0/+3)
[ Upstream commit de24aceb07d426b6f1c59f33889d6a964770547b ] memcpy() is called in a loop while 'operation->length' upper bound is not checked and 'data_idx' also increments. Fixes: b5b5b32081cd206b ("i2c: mlxbf: I2C SMBus driver for Mellanox BlueField SoC") Reviewed-by: Khalil Blaiech <kblaiech@nvidia.com> Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com> Signed-off-by: Wolfram Sang <wsa@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
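A generic sketch of the kind of bound the fix introduces inside that loop (the buffer and field names here are illustrative, not the driver's):
----------
/* Never let the running offset walk past the fixed-size transfer buffer,
 * no matter how large operation->length claims to be. */
if (data_idx + operation->length > EXAMPLE_SMBUS_DATA_SIZE)
	return -EINVAL;

memcpy(&data_desc[data_idx], operation->buffer, operation->length);
data_idx += operation->length;
----------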
2022-09-28  i2c: mlxbf: incorrect base address passed during io write  (Asmaa Mnebhi; 1 file changed, -1/+1)
[ Upstream commit 2a5be6d1340c0fefcee8a6489cff7fd88a0d5b85 ] Correct the base address used during io write. This bug had no impact on the overall functionality of the read and write transactions. MLXBF_I2C_CAUSE_OR_CLEAR=0x18, so writing to (smbus->io + 0x18) instead of (mst_cause->io + 0x18) actually writes to the sc_low_timeout register, which just sets the timeout value before a read/write aborts. Fixes: b5b5b32081cd206b ("i2c: mlxbf: I2C SMBus driver for Mellanox BlueField SoC") Reviewed-by: Khalil Blaiech <kblaiech@nvidia.com> Signed-off-by: Asmaa Mnebhi <asmaa@nvidia.com> Signed-off-by: Wolfram Sang <wsa@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  i2c: imx: If pm_runtime_get_sync() returned 1 device access is possible  (Uwe Kleine-König; 1 file changed, -1/+1)
[ Upstream commit 085aacaa73163f4b8a89dec24ecb32cfacd34017 ] pm_runtime_get_sync() returning 1 also means the device is powered. So resetting the chip registers in .remove() is possible and should be done. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: d98bdd3a5b50 ("i2c: imx: Make sure to unregister adapter on remove()") Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: Wolfram Sang <wsa@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
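The resulting pattern, roughly (the register write is an illustrative stand-in for the chip reset done in .remove(), not the driver's actual code):
----------
int ret = pm_runtime_get_sync(dev);

/* 0 and 1 both mean the device is powered; only negative values are errors */
if (ret >= 0)
	writel(0, base + EXAMPLE_RESET_REG);	/* illustrative register */

pm_runtime_put_noidle(dev);
----------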
2022-09-28  workqueue: don't skip lockdep work dependency in cancel_work_sync()  (Tetsuo Handa; 1 file changed, -4/+2)
[ Upstream commit c0feea594e058223973db94c1c32a830c9807c86 ] As Hillf Danton mentioned in [1], syzbot should have been able to catch cancel_work_sync() in work context by checking lockdep_map in __flush_work() for both flush and cancel; being unable to report an obvious deadlock scenario like the one shown below is broken. From a locking dependency perspective, the sync version of a cancel request should behave like a flush request, for it waits for completion of the work if that work has already started execution.

----------
#include <linux/module.h>
#include <linux/sched.h>

static DEFINE_MUTEX(mutex);

static void work_fn(struct work_struct *work)
{
	schedule_timeout_uninterruptible(HZ / 5);
	mutex_lock(&mutex);
	mutex_unlock(&mutex);
}

static DECLARE_WORK(work, work_fn);

static int __init test_init(void)
{
	schedule_work(&work);
	schedule_timeout_uninterruptible(HZ / 10);
	mutex_lock(&mutex);
	cancel_work_sync(&work);
	mutex_unlock(&mutex);
	return -EINVAL;
}
module_init(test_init);
MODULE_LICENSE("GPL");
----------

The check this patch restores was added by commit 0976dfc1d0cd80a4 ("workqueue: Catch more locking problems with flush_work()"). Then, lockdep's crossrelease feature was added by commit b09be676e0ff25bd ("locking/lockdep: Implement the 'crossrelease' feature"). As a result, this check was once removed by commit fd1a5b04dfb899f8 ("workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes"). But lockdep's crossrelease feature was removed by commit e966eaeeb623f099 ("locking/lockdep: Remove the cross-release locking checks"). At this point, this check should have been restored. Then, commit d6e89786bed977f3 ("workqueue: skip lockdep wq dependency in cancel_work_sync()") introduced a boolean flag in order to distinguish flush_work() and cancel_work_sync(), for checking "struct workqueue_struct" dependency when called from cancel_work_sync() was causing false positives. Then, commit 87915adc3f0acdf0 ("workqueue: re-add lockdep dependencies for flushing") tried to restore the "struct work_struct" dependency check, but by error checked this boolean flag. As the example shown above indicates, the "struct work_struct" dependency needs to be checked for both flush_work() and cancel_work_sync(). Link: https://lkml.kernel.org/r/20220504044800.4966-1-hdanton@sina.com [1] Reported-by: Hillf Danton <hdanton@sina.com> Suggested-by: Lai Jiangshan <jiangshanlai@gmail.com> Fixes: 87915adc3f0acdf0 ("workqueue: re-add lockdep dependencies for flushing") Cc: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  drm/rockchip: Fix return type of cdn_dp_connector_mode_valid  (Nathan Huckleberry; 1 file changed, -2/+3)
[ Upstream commit b0b9408f132623dc88e78adb5282f74e4b64bb57 ] The mode_valid field in drm_connector_helper_funcs is expected to be of type: enum drm_mode_status (* mode_valid) (struct drm_connector *connector, struct drm_display_mode *mode); The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of cdn_dp_connector_mode_valid should be changed from int to enum drm_mode_status. Reported-by: Dan Carpenter <error27@gmail.com> Link: https://github.com/ClangBuiltLinux/linux/issues/1703 Cc: llvm@lists.linux.dev Signed-off-by: Nathan Huckleberry <nhuck@google.com> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Heiko Stuebner <heiko@sntech.de> Link: https://patchwork.freedesktop.org/patch/msgid/20220913205555.155149-1-nhuck@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
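For illustration, a helper hook with the matching prototype looks like this; the connector name and the clock limit are made up, only the return type is the point:
----------
static enum drm_mode_status
example_connector_mode_valid(struct drm_connector *connector,
			     struct drm_display_mode *mode)
{
	if (mode->clock > 600000)	/* illustrative pixel-clock limit, in kHz */
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}

static const struct drm_connector_helper_funcs example_helper_funcs = {
	.mode_valid = example_connector_mode_valid,
};
----------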
2022-09-28  drm/amd/display: Mark dml30's UseMinimumDCFCLK() as noinline for stack usage  (Nathan Chancellor; 1 file changed, -2/+1)
[ Upstream commit 41012d715d5d7b9751ae84b8fb255e404ac9c5d0 ] This function consumes a lot of stack space and it blows up the size of dml30_ModeSupportAndSystemConfigurationFull() with clang:

drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/display_mode_vba_30.c:3542:6: error: stack frame size (2200) exceeds limit (2048) in 'dml30_ModeSupportAndSystemConfigurationFull' [-Werror,-Wframe-larger-than]
void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_lib)
     ^
1 error generated.

Commit a0f7e7f759cf ("drm/amd/display: fix i386 frame size warning") aimed to address this for i386 but it did not help x86_64. To reduce the amount of stack space that dml30_ModeSupportAndSystemConfigurationFull() uses, mark UseMinimumDCFCLK() as noinline, using the _for_stack variant for documentation. While this will increase the total amount of stack usage between the two functions (1632 and 1304 bytes respectively), it will make sure both stay below the limit of 2048 bytes for these files. The aforementioned change does help reduce UseMinimumDCFCLK()'s stack usage so it should not be reverted in favor of this change. Link: https://github.com/ClangBuiltLinux/linux/issues/1681 Reported-by: "Sudip Mukherjee (Codethink)" <sudipm.mukherjee@gmail.com> Tested-by: Maíra Canal <mairacanal@riseup.net> Reviewed-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  drm/amd/display: Limit user regamma to a valid value  (Yao Wang1; 1 file changed, -0/+4)
[ Upstream commit 3601d620f22e37740cf73f8278eabf9f2aa19eb7 ] [Why] For HDR mode, we get total 512 tf_point and after switching to SDR mode we actually get 400 tf_point and the rest of points(401~512) still use dirty value from HDR mode. We should limit the rest of the points to max value. [How] Limit the value when coordinates_x.x > 1, just like what we do in translate_from_linear_space for other re-gamma build paths. Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Reviewed-by: Krunoslav Kovac <Krunoslav.Kovac@amd.com> Reviewed-by: Aric Cyr <Aric.Cyr@amd.com> Acked-by: Pavle Kotarac <Pavle.Kotarac@amd.com> Signed-off-by: Yao Wang1 <Yao.Wang1@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  drm/amdgpu: use dirty framebuffer helper  (Hamza Mahfooz; 1 file changed, -0/+2)
[ Upstream commit 66f99628eb24409cb8feb5061f78283c8b65f820 ] Currently, we aren't handling DRM_IOCTL_MODE_DIRTYFB. So, use drm_atomic_helper_dirtyfb() as the dirty callback in the amdgpu_fb_funcs struct. Signed-off-by: Hamza Mahfooz <hamza.mahfooz@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
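For illustration, the wiring looks roughly like this for a GEM-backed framebuffer (the struct name and the destroy/create_handle helpers are the generic DRM ones used as placeholders, not necessarily amdgpu's exact callbacks):
----------
#include <drm/drm_damage_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>

static const struct drm_framebuffer_funcs example_fb_funcs = {
	.destroy	= drm_gem_fb_destroy,
	.create_handle	= drm_gem_fb_create_handle,
	/* makes DRM_IOCTL_MODE_DIRTYFB kick off an atomic commit for the
	 * damaged region instead of being silently ignored */
	.dirty		= drm_atomic_helper_dirtyfb,
};
----------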
2022-09-28  drm/gma500: Fix BUG: sleeping function called from invalid context errors  (Hans de Goede; 1 file changed, -4/+7)
[ Upstream commit 63e37a79f7bd939314997e29c2f5a9f0ef184281 ] gma_crtc_page_flip() was holding the event_lock spinlock while calling crtc_funcs->mode_set_base() which takes ww_mutex. The only reason to hold event_lock is to clear gma_crtc->page_flip_event on mode_set_base() errors. Instead unlock it after setting gma_crtc->page_flip_event and on errors re-take the lock and clear gma_crtc->page_flip_event if it is still set. This fixes the following WARN/stacktrace:

[ 512.122953] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:870
[ 512.123004] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1253, name: gnome-shell
[ 512.123031] preempt_count: 1, expected: 0
[ 512.123048] RCU nest depth: 0, expected: 0
[ 512.123066] INFO: lockdep is turned off.
[ 512.123080] irq event stamp: 0
[ 512.123094] hardirqs last enabled at (0): [<0000000000000000>] 0x0
[ 512.123134] hardirqs last disabled at (0): [<ffffffff8d0ec28c>] copy_process+0x9fc/0x1de0
[ 512.123176] softirqs last enabled at (0): [<ffffffff8d0ec28c>] copy_process+0x9fc/0x1de0
[ 512.123207] softirqs last disabled at (0): [<0000000000000000>] 0x0
[ 512.123233] Preemption disabled at:
[ 512.123241] [<0000000000000000>] 0x0
[ 512.123275] CPU: 3 PID: 1253 Comm: gnome-shell Tainted: G W 5.19.0+ #1
[ 512.123304] Hardware name: Packard Bell dot s/SJE01_CT, BIOS V1.10 07/23/2013
[ 512.123323] Call Trace:
[ 512.123346] <TASK>
[ 512.123370] dump_stack_lvl+0x5b/0x77
[ 512.123412] __might_resched.cold+0xff/0x13a
[ 512.123458] ww_mutex_lock+0x1e/0xa0
[ 512.123495] psb_gem_pin+0x2c/0x150 [gma500_gfx]
[ 512.123601] gma_pipe_set_base+0x76/0x240 [gma500_gfx]
[ 512.123708] gma_crtc_page_flip+0x95/0x130 [gma500_gfx]
[ 512.123808] drm_mode_page_flip_ioctl+0x57d/0x5d0
[ 512.123897] ? drm_mode_cursor2_ioctl+0x10/0x10
[ 512.123936] drm_ioctl_kernel+0xa1/0x150
[ 512.123984] drm_ioctl+0x21f/0x420
[ 512.124025] ? drm_mode_cursor2_ioctl+0x10/0x10
[ 512.124070] ? rcu_read_lock_bh_held+0xb/0x60
[ 512.124104] ? lock_release+0x1ef/0x2d0
[ 512.124161] __x64_sys_ioctl+0x8d/0xd0
[ 512.124203] do_syscall_64+0x58/0x80
[ 512.124239] ? do_syscall_64+0x67/0x80
[ 512.124267] ? trace_hardirqs_on_prepare+0x55/0xe0
[ 512.124300] ? do_syscall_64+0x67/0x80
[ 512.124340] ? rcu_read_lock_sched_held+0x10/0x80
[ 512.124377] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 512.124411] RIP: 0033:0x7fcc4a70740f
[ 512.124442] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00
[ 512.124470] RSP: 002b:00007ffda73f5390 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 512.124503] RAX: ffffffffffffffda RBX: 000055cc9e474500 RCX: 00007fcc4a70740f
[ 512.124524] RDX: 00007ffda73f5420 RSI: 00000000c01864b0 RDI: 0000000000000009
[ 512.124544] RBP: 00007ffda73f5420 R08: 000055cc9c0b0cb0 R09: 0000000000000034
[ 512.124564] R10: 0000000000000000 R11: 0000000000000246 R12: 00000000c01864b0
[ 512.124584] R13: 0000000000000009 R14: 000055cc9df484d0 R15: 000055cc9af5d0c0
[ 512.124647] </TASK>

Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220906203852.527663-2-hdegoede@redhat.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  Drivers: hv: Never allocate anything besides framebuffer from framebuffer memory region  (Vitaly Kuznetsov; 1 file changed, -1/+9)
[ Upstream commit f0880e2cb7e1f8039a048fdd01ce45ab77247221 ] Passed through PCI devices sometimes misbehave on Gen1 VMs when the Hyper-V DRM driver is also loaded. Looking at IOMEM assignment, we can see e.g.:

$ cat /proc/iomem
...
f8000000-fffbffff : PCI Bus 0000:00
  f8000000-fbffffff : 0000:00:08.0
    f8000000-f8001fff : bb8c4f33-2ba2-4808-9f7f-02f3b4da22fe
...
fe0000000-fffffffff : PCI Bus 0000:00
  fe0000000-fe07fffff : bb8c4f33-2ba2-4808-9f7f-02f3b4da22fe
    fe0000000-fe07fffff : 2ba2:00:02.0
      fe0000000-fe07fffff : mlx4_core

The interesting part is the 'f8000000' region as it is actually the VM's framebuffer:

$ lspci -v
...
0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA (prog-if 00 [VGA controller])
	Flags: bus master, fast devsel, latency 0, IRQ 11
	Memory at f8000000 (32-bit, non-prefetchable) [size=64M]
...

hv_vmbus: registering driver hyperv_drm
hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] Synthvid Version major 3, minor 5
hyperv_drm 0000:00:08.0: vgaarb: deactivate vga console
hyperv_drm 0000:00:08.0: BAR 0: can't reserve [mem 0xf8000000-0xfbffffff]
hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] Cannot request framebuffer, boot fb still active?

Note: "Cannot request framebuffer" is not a fatal error in hyperv_setup_gen1() as the code assumes there's some other framebuffer device there, but we actually have some other PCI device's (mlx4 in this case) config space there! The problem appears to be that vmbus_allocate_mmio() can use the dedicated framebuffer region to serve any MMIO request from any device. The semantics one might assume of a parameter named "fb_overlap_ok" aren't implemented because !fb_overlap_ok essentially has no effect. The existing semantics are really "prefer_fb_overlap". This patch implements the expected and needed semantics, which is to not allocate from the frame buffer space when !fb_overlap_ok. Note, Gen2 VMs are usually unaffected by the issue because the framebuffer region is already taken by EFI fb (in case the kernel supports it), but Gen1 VMs may have this region unclaimed by the time the Hyper-V PCI pass-through driver tries allocating MMIO space if the Hyper-V DRM/FB drivers load after it. Devices can be brought up in any sequence, so let's resolve the issue by always ignoring the 'fb_mmio' region for non-FB requests, even if the region is unclaimed. Reviewed-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Link: https://lore.kernel.org/r/20220827130345.1320254-4-vkuznets@redhat.com Signed-off-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  cifs: always initialize struct msghdr smb_msg completely  (Stefan Metzmacher; 2 files changed, -13/+4)
[ Upstream commit bedc8f76b3539ac4f952114b316bcc2251e808ce ] So far we were just lucky because the uninitialized members of struct msghdr are not used by default on a SOCK_STREAM tcp socket. But as new things like msg_ubuf and sg_from_iter were added recently, we should play it safe and avoid potential problems in the future. Signed-off-by: Stefan Metzmacher <metze@samba.org> Cc: stable@vger.kernel.org Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
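The shape of the fix is essentially a zero-initializer instead of member-by-member setup, along these lines (buffer and length names here are illustrative):
----------
struct msghdr smb_msg = {};	/* every member starts out zeroed */
struct kvec iov = { .iov_base = buf, .iov_len = to_read };

iov_iter_kvec(&smb_msg.msg_iter, READ, &iov, 1, to_read);
length = sock_recvmsg(server->ssocket, &smb_msg, 0);
----------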
2022-09-28  cifs: use discard iterator to discard unneeded network data more efficiently  (David Howells; 3 files changed, -3/+22)
[ Upstream commit cf0604a686b11175d8beae60281c4ccc95aaa5c2 ] The iterator, ITER_DISCARD, that can only be used in READ mode and just discards any data copied to it, was added to allow a network filesystem to discard any unwanted data sent by a server. Convert cifs_discard_from_socket() to use this. Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com> Stable-dep-of: bedc8f76b353 ("cifs: always initialize struct msghdr smb_msg completely") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  drm/amdgpu: Fix check for RAS support  (Luben Tuikov; 1 file changed, -9/+6)
[ Upstream commit 084e2640e51626f413f85663e3ba7e32d4272477 ] Use positive logic to check for RAS support. Rename the function to actually indicate what it is testing for. Essentially, make the function a predicate with the correct name. Cc: Stanley Yang <Stanley.Yang@amd.com> Cc: Alexander Deucher <Alexander.Deucher@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Stable-dep-of: 6c2049066355 ("drm/amdgpu: Don't enable LTR if not supported") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  vfio/type1: fix vaddr_get_pfns() return in vfio_pin_page_external()  (Daniel Jordan; 1 file changed, -1/+7)
commit 4ab4fcfce5b540227d80eb32f1db45ab615f7c92 upstream. vaddr_get_pfns() now returns the positive number of pfns successfully gotten instead of zero. vfio_pin_page_external() might return 1 to vfio_iommu_type1_pin_pages(), which will treat it as an error, if vaddr_get_pfns() is successful but vfio_pin_page_external() doesn't reach vfio_lock_acct(). Fix it up in vfio_pin_page_external(). Found by inspection. Fixes: be16c1fd99f4 ("vfio/type1: Change success value of vaddr_get_pfn()") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Message-Id: <20210308172452.38864-1-daniel.m.jordan@oracle.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  usb: xhci-mtk: fix issue of out-of-bounds array access  (Chunfeng Yun; 1 file changed, -4/+6)
commit de5107f473190538a65aac7edea85209cd5c1a8f upstream. Bus bandwidth array access is based on esit; incrementing it by one can cause an out-of-bounds access. For example, when esit is XHCI_MTK_MAX_ESIT, the access oversteps the array boundary. Fixes: 7c986fbc16ae ("usb: xhci-mtk: get the microframe boundary for ESIT") Cc: <stable@vger.kernel.org> Reported-by: Stan Lu <stan.lu@mediatek.com> Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com> Link: https://lore.kernel.org/r/1629189389-18779-5-git-send-email-chunfeng.yun@mediatek.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
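A generic sketch of the constraint being enforced; the index math and table name are illustrative, not the driver's exact scheduling logic:
----------
/* The bandwidth table has XHCI_MTK_MAX_ESIT slots, so anything derived
 * from esit must be clamped before it is used as an index. */
if (esit > XHCI_MTK_MAX_ESIT)
	esit = XHCI_MTK_MAX_ESIT;

for (i = 0; i < esit; i++)	/* valid indices: 0 .. XHCI_MTK_MAX_ESIT - 1 */
	bw_budget_table[i] = 0;
----------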
2022-09-28  s390/dasd: fix Oops in dasd_alias_get_start_dev due to missing pavgroup  (Stefan Haberland; 1 file changed, -2/+7)
commit db7ba07108a48c0f95b74fabbfd5d63e924f992d upstream. Fix Oops in dasd_alias_get_start_dev() function caused by the pavgroup pointer being NULL. The pavgroup pointer is checked on the entrance of the function but without the lcu->lock being held. Therefore there is a race window between dasd_alias_get_start_dev() and _lcu_update() which sets pavgroup to NULL with the lcu->lock held. Fix by checking the pavgroup pointer with lcu->lock held. Cc: <stable@vger.kernel.org> # 2.6.25+ Fixes: 8e09f21574ea ("[S390] dasd: add hyper PAV support to DASD device driver, part 1") Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Link: https://lore.kernel.org/r/20220919154931.4123002-2-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  serial: tegra-tcu: Use uart_xmit_advance(), fixes icount.tx accounting  (Ilpo Järvinen; 1 file changed, -1/+1)
commit 1d10cd4da593bc0196a239dcc54dac24b6b0a74e upstream. Tx'ing does not correctly account Tx'ed characters into icount.tx. Using uart_xmit_advance() fixes the problem. Fixes: 2d908b38d409 ("serial: Add Tegra Combined UART driver") Cc: <stable@vger.kernel.org> # serial: Create uart_xmit_advance() Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20220901143934.8850-4-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  serial: tegra: Use uart_xmit_advance(), fixes icount.tx accounting  (Ilpo Järvinen; 1 file changed, -3/+2)
commit 754f68044c7dd6c52534ba3e0f664830285c4b15 upstream. DMA complete & stop paths did not correctly account Tx'ed characters into icount.tx. Using uart_xmit_advance() fixes the problem. Fixes: e9ea096dd225 ("serial: tegra: add serial driver") Cc: <stable@vger.kernel.org> # serial: Create uart_xmit_advance() Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20220901143934.8850-3-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  serial: Create uart_xmit_advance()  (Ilpo Järvinen; 1 file changed, -0/+17)
commit e77cab77f2cb3a1ca2ba8df4af45bb35617ac16d upstream. A very common pattern in the drivers is to advance xmit tail index and do bookkeeping of Tx'ed characters. Create uart_xmit_advance() to handle it. Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: stable <stable@kernel.org> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20220901143934.8850-2-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
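The new helper is essentially the following (a rough sketch of what lands in include/linux/serial_core.h):
----------
static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars)
{
	struct circ_buf *xmit = &up->state->xmit;

	/* advance the circular-buffer tail and account the Tx'ed characters
	 * in one place, so drivers can't forget the icount.tx update */
	xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
	up->icount.tx += chars;
}
----------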
2022-09-28  drm/amd/amdgpu: fixing read wrong pf2vf data in SRIOV  (Jingwen Chen; 2 files changed, -14/+8)
commit 9a458402fb69bda886aa6cbe067311b6e3d9c52a upstream. [Why] This fixes 892deb48269c ("drm/amdgpu: Separate vf2pf work item init from virt data exchange"). we should read pf2vf data based at mman.fw_vram_usage_va after gmc sw_init. commit 892deb48269c breaks this logic. [How] calling amdgpu_virt_exchange_data in amdgpu_virt_init_data_exchange to set the right base in the right sequence. v2: call amdgpu_virt_init_data_exchange after gmc sw_init to make data exchange workqueue run v3: clean up the code logic v4: add some comment and make the code more readable Fixes: 892deb48269c ("drm/amdgpu: Separate vf2pf work item init from virt data exchange") Signed-off-by: Jingwen Chen <Jingwen.Chen2@amd.com> Reviewed-by: Horace Chen <horace.chen@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-28  selftests: forwarding: add shebang for sch_red.sh  (Hangbin Liu; 1 file changed, -0/+1)
[ Upstream commit 83e4b196838d90799a8879e5054a3beecf9ed256 ] RHEL/Fedora RPM build checks are stricter, and complain when executable files don't have a shebang line, e.g. *** WARNING: ./kselftests/net/forwarding/sch_red.sh is executable but has no shebang, removing executable bit Fix it by adding shebang line. Fixes: 6cf0291f9517 ("selftests: forwarding: Add a RED test for SW datapath") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/20220922024453.437757-1-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: sched: fix possible refcount leak in tc_new_tfilter()  (Hangyu Hua; 1 file changed, -0/+1)
[ Upstream commit c2e1cfefcac35e0eea229e148c8284088ce437b5 ] tfilter_put need to be called to put the refount got by tp->ops->get to avoid possible refcount leak when chain->tmplt_ops != NULL and chain->tmplt_ops != tp->ops. Fixes: 7d5509fa0d3d ("net: sched: extend proto ops with 'put' callback") Signed-off-by: Hangyu Hua <hbh25y@gmail.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Link: https://lore.kernel.org/r/20220921092734.31700-1-hbh25y@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD  (Sean Anderson; 1 file changed, -2/+2)
[ Upstream commit 878e2405710aacfeeb19364c300f38b7a9abfe8f ] There is a separate receive path for small packets (under 256 bytes). Instead of allocating a new dma-capable skb to be used for the next packet, this path allocates a skb and copies the data into it (reusing the existing skb for the next packet). There are two bytes of junk data at the beginning of every packet. I believe these are inserted in order to allow aligned DMA and IP headers. We skip over them using skb_reserve. Before copying over the data, we must use a barrier to ensure we see the whole packet. The current code only synchronizes len bytes, starting from the beginning of the packet, including the junk bytes. However, this leaves off the final two bytes in the packet. Synchronize the whole packet.

To reproduce this problem, ping a HME with a payload size between 17 and 214:

$ ping -s 17 <hme_address>

which will complain rather loudly about the data mismatch. Small packets (below 60 bytes on the wire) do not have this issue. I suspect this is related to the padding added to increase the minimum packet size. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Sean Anderson <seanga2@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20220920235018.1675956-1-seanga2@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
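After the fix, the small-packet copy path is roughly shaped like this (a sketch with the surrounding driver context trimmed; the exact sync length in the driver may be expressed differently):
----------
/* sync the two leading alignment bytes plus the full payload, not just
 * 'len' bytes measured from the start of the buffer */
dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
skb_copy_from_linear_data(skb, copy_skb->data, len);
dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
----------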
2022-09-28  net/smc: Stop the CLC flow if no link to map buffers on  (Wen Gu; 1 file changed, -1/+4)
[ Upstream commit e738455b2c6dcdab03e45d97de36476f93f557d2 ] There might be a potential race between SMC-R buffer map and link group termination.

smc_smcr_terminate_all()        | smc_connect_rdma()
--------------------------------------------------------------
                                | smc_conn_create()
for links in smcibdev           |
        schedule links down     |
                                | smc_buf_create()
                                |  \- smcr_buf_map_usable_links()
                                |      \- no usable links found,
                                |         (rmb->mr = NULL)
                                |
                                | smc_clc_send_confirm()
                                |  \- access conn->rmb_desc->mr[]->rkey
                                |     (panic)

During reboot and IB device module remove, all links will be set down and no usable links remain in link groups. In such a situation smcr_buf_map_usable_links() should return an error and stop the CLC flow from accessing the uninitialized mr. Fixes: b9247544c1bc ("net/smc: convert static link ID instances to support multiple links") Signed-off-by: Wen Gu <guwen@linux.alibaba.com> Link: https://lore.kernel.org/r/1663656189-32090-1-git-send-email-guwen@linux.alibaba.com Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  drm/mediatek: dsi: Move mtk_dsi_stop() call back to mtk_dsi_poweroff()  (Nícolas F. R. A. Prado; 1 file changed, -11/+10)
[ Upstream commit 90144dd8b0d137d9e78ef34b3c418e51a49299ad ] As the comment right before the mtk_dsi_stop() call advises, mtk_dsi_stop() should only be called after mtk_drm_crtc_atomic_disable(). That's because that function calls drm_crtc_wait_one_vblank(), which requires the vblank irq to be enabled. Previously mtk_dsi_stop(), being in mtk_dsi_poweroff() and guarded by a refcount, would only be called at the end of mtk_drm_crtc_atomic_disable(), through the call to mtk_crtc_ddp_hw_fini(). Commit cde7e2e35c28 ("drm/mediatek: Separate poweron/poweroff from enable/disable and define new funcs") moved the mtk_dsi_stop() call to mtk_output_dsi_disable(), causing it to be called before mtk_drm_crtc_atomic_disable(), and consequently generating vblank timeout warnings during suspend. Move the mtk_dsi_stop() call back to mtk_dsi_poweroff() so that we have a working vblank irq during mtk_drm_crtc_atomic_disable() and stop getting vblank timeout warnings. Fixes: cde7e2e35c28 ("drm/mediatek: Separate poweron/poweroff from enable/disable and define new funcs") Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com> Tested-by: Hsin-Yi Wang <hsinyi@chromium.org> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Tested-by: Allen-KH Cheng <allen-kh.cheng@mediatek.com> Link: http://lists.infradead.org/pipermail/linux-mediatek/2022-August/046713.html Signed-off-by: Chun-Kuang Hu <chunkuang.hu@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  perf kcore_copy: Do not check /proc/modules is unchanged  (Adrian Hunter; 1 file changed, -5/+2)
[ Upstream commit 5b427df27b94aec1312cace48a746782a0925c53 ] /proc/kallsyms and /proc/modules are compared before and after the copy in order to ensure no changes during the copy. However /proc/modules also might change due to reference counts changing even though that does not make any difference. Any modules loaded or unloaded should be visible in changes to kallsyms, so it is not necessary to check /proc/modules also anyway. Remove the comparison checking that /proc/modules is unchanged. Fixes: fc1b691d7651d949 ("perf buildid-cache: Add ability to add kcore to the cache") Reported-by: Daniel Dao <dqminh@cloudflare.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Daniel Dao <dqminh@cloudflare.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220914122429.8770-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  perf jit: Include program header in ELF files  (Lieven Hey; 2 files changed, -0/+18)
[ Upstream commit babd04386b1df8c364cdaa39ac0e54349502e1e5 ] The missing header makes it hard for programs like elfutils to open these files. Fixes: 2d86612aacb7805f ("perf symbol: Correct address for bss symbols") Reviewed-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Lieven Hey <lieven.hey@kdab.com> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20220915092910.711036-1-lieven.hey@kdab.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  can: gs_usb: gs_can_open(): fix race dev->can.state condition  (Marc Kleine-Budde; 1 file changed, -2/+2)
[ Upstream commit 5440428b3da65408dba0241985acb7a05258b85e ] The dev->can.state is set to CAN_STATE_ERROR_ACTIVE after the device has been started. On busy networks the CAN controller might receive CAN frames in that window and go into an error state before dev->can.state is assigned. Assign dev->can.state before starting the controller to close the race window. Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices") Link: https://lore.kernel.org/all/20220920195216.232481-1-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  netfilter: ebtables: fix memory leak when blob is malformed  (Florian Westphal; 1 file changed, -1/+3)
[ Upstream commit 62ce44c4fff947eebdf10bb582267e686e6835c9 ] The bug fix was incomplete: it "replaced" the crash with a memory leak. The old code had an assignment to "ret" embedded in the conditional; restore this. Fixes: 7997eff82828 ("netfilter: ebtables: reject blobs that don't provide all entry points") Reported-and-tested-by: syzbot+a24c5252f3e3ab733464@syzkaller.appspotmail.com Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  netfilter: nf_tables: fix percpu memory leak at nf_tables_addchain()  (Tetsuo Handa; 1 file changed, -0/+1)
[ Upstream commit 9a4d6dd554b86e65581ef6b6638a39ae079b17ac ] It seems to me that percpu memory for chain stats started leaking since commit 3bc158f8d0330f0a ("netfilter: nf_tables: map basechain priority to hardware priority") when nft_chain_offload_priority() returned an error. Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Fixes: 3bc158f8d0330f0a ("netfilter: nf_tables: map basechain priority to hardware priority") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  netfilter: nf_tables: fix nft_counters_enabled underflow at nf_tables_addchain()  (Tetsuo Handa; 1 file changed, -4/+3)
[ Upstream commit 921ebde3c0d22c8cba74ce8eb3cc4626abff1ccd ] syzbot is reporting underflow of nft_counters_enabled counter at nf_tables_addchain() [1], for commit 43eb8949cfdffa76 ("netfilter: nf_tables: do not leave chain stats enabled on error") missed that nf_tables_chain_destroy() after nft_basechain_init() in the error path of nf_tables_addchain() decrements the counter because nft_basechain_init() makes nft_is_base_chain() return true by setting NFT_CHAIN_BASE flag. Increment the counter immediately after returning from nft_basechain_init(). Link: https://syzkaller.appspot.com/bug?extid=b5d82a651b71cd8a75ab [1] Reported-by: syzbot <syzbot+b5d82a651b71cd8a75ab@syzkaller.appspotmail.com> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Tested-by: syzbot <syzbot+b5d82a651b71cd8a75ab@syzkaller.appspotmail.com> Fixes: 43eb8949cfdffa76 ("netfilter: nf_tables: do not leave chain stats enabled on error") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net/sched: taprio: make qdisc_leaf() see the per-netdev-queue pfifo child qdiscs  (Vladimir Oltean; 1 file changed, -3/+5)
[ Upstream commit 1461d212ab277d8bba1a753d33e9afe03d81f9d4 ] taprio can only operate as root qdisc, and to that end, there exists the following check in taprio_init(), just as in mqprio:

	if (sch->parent != TC_H_ROOT)
		return -EOPNOTSUPP;

And indeed, when we try to attach taprio to an mqprio child, it fails as expected:

$ tc qdisc add dev swp0 root handle 1: mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 0
$ tc qdisc replace dev swp0 parent 1:2 taprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
	base-time 0 sched-entry S 0x7f 990000 sched-entry S 0x80 100000 \
	flags 0x0 clockid CLOCK_TAI
Error: sch_taprio: Can only be attached as root qdisc.

(extack message added by me)

But when we try to attach a taprio child to a taprio root qdisc, surprisingly it doesn't fail:

$ tc qdisc replace dev swp0 root handle 1: taprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
	base-time 0 sched-entry S 0x7f 990000 sched-entry S 0x80 100000 \
	flags 0x0 clockid CLOCK_TAI
$ tc qdisc replace dev swp0 parent 1:2 taprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
	base-time 0 sched-entry S 0x7f 990000 sched-entry S 0x80 100000 \
	flags 0x0 clockid CLOCK_TAI

This is because tc_modify_qdisc() behaves differently when mqprio is root, vs when taprio is root. In the mqprio case, it finds the parent qdisc through p = qdisc_lookup(dev, TC_H_MAJ(clid)), and then the child qdisc through q = qdisc_leaf(p, clid). This leaf qdisc q has handle 0, so it is ignored according to the comment right below ("It may be default qdisc, ignore it"). As a result, tc_modify_qdisc() goes through the qdisc_create() code path, and this gives taprio_init() a chance to check for sch_parent != TC_H_ROOT and error out.

Whereas in the taprio case, the returned q = qdisc_leaf(p, clid) is different. It is not the default qdisc created for each netdev queue (both taprio and mqprio call qdisc_create_dflt() and keep them in a private q->qdiscs[], or priv->qdiscs[], respectively). Instead, taprio makes qdisc_leaf() return the _root_ qdisc, aka itself. When taprio does that, tc_modify_qdisc() goes through the qdisc_change() code path, because the qdisc layer never finds out about the child qdisc of the root. And through the ->change() ops, taprio has no reason to check whether its parent is root or not, just through ->init(), which is not called.

The problem is the taprio_leaf() implementation. Even though code wise, it does the exact same thing as mqprio_leaf() which it is copied from, it works with different input data. This is because mqprio does not attach itself (the root) to each device TX queue, but one of the default qdiscs from its private array. In fact, since commit 13511704f8d7 ("net: taprio offload: enforce qdisc to netdev queue mapping"), taprio does this too, but just for the full offload case. So if we tried to attach a taprio child to a fully offloaded taprio root qdisc, it would properly fail too; just not to a software root taprio.

To fix the problem, stop looking at the Qdisc that's attached to the TX queue, and instead, always return the default qdiscs that we've allocated (and to which we privately enqueue and dequeue, in software scheduling mode). Since Qdisc_class_ops :: leaf is only called from tc_modify_qdisc(), the risk of unforeseen side effects introduced by this change is minimal.
Fixes: 5a781ccbd19e ("tc: Add support for configuring the taprio scheduler") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
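The fix essentially makes taprio_leaf() hand back the per-queue child from the private array, along these lines (a sketch of the described change):
----------
static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
{
	struct taprio_sched *q = qdisc_priv(sch);
	struct net_device *dev = qdisc_dev(sch);
	unsigned int ntx = cl - 1;

	if (ntx >= dev->num_tx_queues)
		return NULL;

	/* the privately allocated pfifo children, never the root itself */
	return q->qdiscs[ntx];
}
----------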
2022-09-28  net/sched: taprio: avoid disabling offload when it was never enabled  (Vladimir Oltean; 1 file changed, -4/+6)
[ Upstream commit db46e3a88a09c5cf7e505664d01da7238cd56c92 ] In an incredibly strange API design decision, qdisc->destroy() gets called even if qdisc->init() never succeeded, not exclusively since commit 87b60cfacf9f ("net_sched: fix error recovery at qdisc creation"), but apparently also earlier (in the case of qdisc_create_dflt()).

The taprio qdisc does not fully acknowledge this when it attempts full offload, because it starts off with q->flags = TAPRIO_FLAGS_INVALID in taprio_init(), then it replaces q->flags with TCA_TAPRIO_ATTR_FLAGS parsed from netlink (in taprio_change(), tail called from taprio_init()). But in taprio_destroy(), we call taprio_disable_offload(), and this determines what to do based on FULL_OFFLOAD_IS_ENABLED(q->flags). But looking at the implementation of FULL_OFFLOAD_IS_ENABLED() (a bitwise check of bit 1 in q->flags), it is invalid to call this macro on q->flags when it contains TAPRIO_FLAGS_INVALID, because that is set to U32_MAX, and therefore FULL_OFFLOAD_IS_ENABLED() will return true on an invalid set of flags.

As a result, it is possible to crash the kernel if user space forces an error between setting q->flags = TAPRIO_FLAGS_INVALID, and the calling of taprio_enable_offload(). This is because drivers do not expect the offload to be disabled when it was never enabled. The error that we force here is to attach taprio as a non-root qdisc, but instead as child of an mqprio root qdisc:

$ tc qdisc add dev swp0 root handle 1: \
	mqprio num_tc 8 map 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 0
$ tc qdisc replace dev swp0 parent 1:1 \
	taprio num_tc 8 map 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 0 \
	sched-entry S 0x7f 990000 sched-entry S 0x80 100000 \
	flags 0x0 clockid CLOCK_TAI

Unable to handle kernel paging request at virtual address fffffffffffffff8
[fffffffffffffff8] pgd=0000000000000000, p4d=0000000000000000
Internal error: Oops: 96000004 [#1] PREEMPT SMP
Call trace:
 taprio_dump+0x27c/0x310
 vsc9959_port_setup_tc+0x1f4/0x460
 felix_port_setup_tc+0x24/0x3c
 dsa_slave_setup_tc+0x54/0x27c
 taprio_disable_offload.isra.0+0x58/0xe0
 taprio_destroy+0x80/0x104
 qdisc_create+0x240/0x470
 tc_modify_qdisc+0x1fc/0x6b0
 rtnetlink_rcv_msg+0x12c/0x390
 netlink_rcv_skb+0x5c/0x130
 rtnetlink_rcv+0x1c/0x2c

Fix this by keeping track of the operations we made, and undo the offload only if we actually did it.

I've added "bool offloaded" inside a 4 byte hole between "int clockid" and "atomic64_t picos_per_byte". Now the first cache line looks like below:

$ pahole -C taprio_sched net/sched/sch_taprio.o
struct taprio_sched {
	struct Qdisc * *           qdiscs;               /*     0     8 */
	struct Qdisc *             root;                 /*     8     8 */
	u32                        flags;                /*    16     4 */
	enum tk_offsets            tk_offset;            /*    20     4 */
	int                        clockid;              /*    24     4 */
	bool                       offloaded;            /*    28     1 */

	/* XXX 3 bytes hole, try to pack */

	atomic64_t                 picos_per_byte;       /*    32     0 */

	/* XXX 8 bytes hole, try to pack */

	spinlock_t                 current_entry_lock;   /*    40     0 */

	/* XXX 8 bytes hole, try to pack */

	struct sched_entry *       current_entry;        /*    48     8 */
	struct sched_gate_list *   oper_sched;           /*    56     8 */
	/* --- cacheline 1 boundary (64 bytes) --- */

Fixes: 9c66d1564676 ("taprio: Add support for hardware offloading") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: socket: remove register_gifconf  (Arnd Bergmann; 4 files changed, -45/+20)
[ Upstream commit b0e99d03778b2418aec20db99d97d19d25d198b6 ] Since dynamic registration of the gifconf() helper is only used for IPv4, and this can not be in a loadable module, this can be simplified noticeably by turning it into a direct function call as a preparation for cleaning up the compat handling. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: 5641c751fe2f ("net: enetc: deny offload of tc-based TSN features on VF interfaces") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: enetc: move enetc_set_psfp() out of the common enetc_set_features()  (Vladimir Oltean; 5 files changed, -35/+44)
[ Upstream commit fed38e64d9b99d65a36c0dbadc3d3f8ddd9ea030 ] The VF netdev driver shouldn't respond to changes in the NETIF_F_HW_TC flag; only PFs should. Moreover, TSN-specific code should go to enetc_qos.c, which should not be included in the VF driver. Fixes: 79e499829f3f ("net: enetc: add hw tc hw offload features for PSPF capability") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20220916133209.3351399-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  wireguard: netlink: avoid variable-sized memcpy on sockaddr  (Jason A. Donenfeld; 1 file changed, -7/+6)
[ Upstream commit 26c013108c12b94bc023bf19198a4300596c98b1 ] Doing a variable-sized memcpy is slower, and the compiler isn't smart enough to turn this into a constant-size assignment. Further, Kees' latest fortified memcpy will actually bark, because the destination pointer is type sockaddr, not explicitly sockaddr_in or sockaddr_in6, so it thinks there's an overflow: memcpy: detected field-spanning write (size 28) of single field "&endpoint.addr" at drivers/net/wireguard/netlink.c:446 (size 16) Fix this by just assigning by using explicit casts for each checked case. Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reported-by: syzbot+a448cda4dba2dac50de5@syzkaller.appspotmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
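The shape of the fix is a per-family assignment through a correctly typed pointer, so the copy size becomes a compile-time constant (a sketch; the surrounding validation in the driver is trimmed):
----------
if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET)
	endpoint.addr4 = *(struct sockaddr_in *)addr;
else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6)
	endpoint.addr6 = *(struct sockaddr_in6 *)addr;
else
	return -EINVAL;	/* anything else is not a valid endpoint */
----------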
2022-09-28  wireguard: ratelimiter: disable timings test by default  (Jason A. Donenfeld; 1 file changed, -15/+10)
[ Upstream commit 684dec3cf45da2b0848298efae4adf3b2aeafeda ] A previous commit tried to make the ratelimiter timings test more reliable but in the process made it less reliable on other configurations. This is an impossible problem to solve without increasingly ridiculous heuristics. And it's not even a problem that actually needs to be solved in any comprehensive way, since this is only ever used during development. So just cordon this off with a DEBUG_ ifdef, just like we do for the trie's randomized tests, so it can be enabled while hacking on the code, and otherwise disabled in CI. In the process we also revert 151c8e499f47. Fixes: 151c8e499f47 ("wireguard: ratelimiter: use hrtimer in selftest") Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: ipa: properly limit modem routing table use  (Alex Elder; 5 files changed, -26/+32)
[ Upstream commit cf412ec333250cb82bafe57169204e14a9f1c2ac ] IPA can route packets between IPA-connected entities. The AP and modem are currently the only such entities supported, and no routing is required to transfer packets between them. The number of entries in each routing table is fixed, and defined at initialization time. Some of these entries are designated for use by the modem, and the rest are available for the AP to use. The AP sends a QMI message to the modem which describes (among other things) information about routing table memory available for the modem to use. Currently the QMI initialization packet gives wrong information in its description of routing tables. What *should* be supplied is the maximum index that the modem can use for the routing table memory located at a given location. The current code instead supplies the total *number* of routing table entries. Furthermore, the modem is granted the entire table, not just the subset it's supposed to use. This patch fixes this. First, the ipa_mem_bounds structure is generalized so its "end" field can be interpreted either as a final byte offset, or a final array index. Second, the IPv4 and IPv6 (non-hashed and hashed) table information fields in the QMI ipa_init_modem_driver_req structure are changed to be ipa_mem_bounds rather than ipa_mem_array structures. Third, we set the "end" value for each routing table to be the last index, rather than setting the "count" to be the number of indices. Finally, instead of allowing the modem to use all of a routing table's memory, it is limited to just the portion meant to be used by the modem. In all versions of IPA currently supported, that is IPA_ROUTE_MODEM_COUNT (8) entries. Update a few comments for clarity. Fixes: 530f9216a9537 ("soc: qcom: ipa: AP/modem communications") Signed-off-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220913204602.1803004-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: ipa: kill IPA_TABLE_ENTRY_SIZE  (Alex Elder; 4 files changed, -35/+39)
[ Upstream commit 4ea29143ebe6c453f5fddc80ffe4ed046f44aa3a ] Entries in an IPA route or filter table are 64-bit little-endian addresses, each of which refers to a routing or filtering rule. The format of these table slots are fixed, but IPA_TABLE_ENTRY_SIZE is used to define their size. This symbol doesn't really add value, and I think it unnecessarily obscures what a table entry *is*. So get rid of IPA_TABLE_ENTRY_SIZE, and just use sizeof(__le64) in its place throughout the code. Update the comments in "ipa_table.c" to provide a little better explanation of these table slots. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: cf412ec33325 ("net: ipa: properly limit modem routing table use") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: ipa: DMA addresses are nicely aligned  (Alex Elder; 5 files changed, -35/+17)
[ Upstream commit 19aaf72c0c7a26ab7ffc655a6d84da6a379f899b ] A recent patch avoided doing 64-bit modulo operations by checking the alignment of some DMA allocations using only the lower 32 bits of the address. David Laight pointed out (after the fix was committed) that DMA allocations might already satisfy the alignment requirements. And he was right. Remove the alignment checks that occur after DMA allocation requests, and update comments to explain why the constraint is satisfied. The only place IPA_TABLE_ALIGN was used was to check the alignment; it is therefore no longer needed, so get rid of it. Add comments where GSI_RING_ELEMENT_SIZE and the tre_count and event_count channel data fields are defined to make explicit they are required to be powers of 2. Revise a comment in gsi_trans_pool_init_dma(), taking into account that dma_alloc_coherent() guarantees its result is aligned to a page size (or order thereof). Don't bother printing an error if a DMA allocation fails. Suggested-by: David Laight <David.Laight@ACULAB.COM> Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: cf412ec33325 ("net: ipa: properly limit modem routing table use") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: ipa: avoid 64-bit modulus  (Alex Elder; 2 files changed, -7/+13)
[ Upstream commit 437c78f976f5b39fc4b2a1c65903a229f55912dd ] It is possible for a 32 bit x86 build to use a 64 bit DMA address. There are two remaining spots where the IPA driver does a modulo operation to check alignment of a DMA address, and under certain conditions this can lead to a build error on i386 (at least). The alignment checks we're doing are for power-of-2 values, and this means the lower 32 bits of the DMA address can be used. This ensures both operands to the modulo operator are 32 bits wide. Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: cf412ec33325 ("net: ipa: properly limit modem routing table use") Signed-off-by: Sasha Levin <sashal@kernel.org>
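For illustration, the pattern described above boils down to the following (the error handling is illustrative):
----------
/* 'align' is a power of 2, so checking the low 32 bits of the DMA address
 * is sufficient, and it keeps both operands of the modulo 32 bits wide */
u32 check = lower_32_bits(addr);

if (check % align)
	return -EINVAL;
----------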
2022-09-28  net: ipa: fix table alignment requirement  (Alex Elder; 1 file changed, -8/+12)
[ Upstream commit e5d4e96b44cf20330c970c3e30ea0a8c3a23feca ] We currently have a build-time check to ensure that the minimum DMA allocation alignment satisfies the constraint that IPA filter and route tables must point to rules that are 128-byte aligned. But what's really important is that the actual allocated DMA memory has that alignment, even if the minimum is smaller than that. Remove the BUILD_BUG_ON() call checking against minimum DMA alignment and instead verify at runtime that the allocated memory is properly aligned. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: cf412ec33325 ("net: ipa: properly limit modem routing table use") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  net: ipa: fix assumptions about DMA address size  (Alex Elder; 1 file changed, -6/+8)
[ Upstream commit d2fd2311de909a7f4e99b4bd11a19e6b671d6a6b ] Some build time checks in ipa_table_validate_build() assume that a DMA address is 64 bits wide. That is more restrictive than it has to be. A route or filter table is 64 bits wide no matter what the size of a DMA address is on the AP. The code actually uses a pointer to __le64 to access table entries, and a fixed constant IPA_TABLE_ENTRY_SIZE to describe the size of those entries. Loosen up two checks so they still verify some requirements, but such that they do not assume the size of a DMA address is 64 bits. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> Stable-dep-of: cf412ec33325 ("net: ipa: properly limit modem routing table use") Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-09-28  of: mdio: Add of_node_put() when breaking out of for_each_xx  (Liang He; 1 file changed, -0/+1)
[ Upstream commit 1c48709e6d9d353acaaac1d8e33474756b121d78 ] In of_mdiobus_register(), we should call of_node_put() for 'child' escaped out of for_each_available_child_of_node(). Fixes: 66bdede495c7 ("of_mdio: Fix broken PHY IRQ in case of probe deferral") Co-developed-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Liang He <windhl@126.com> Link: https://lore.kernel.org/r/20220913125659.3331969-1-windhl@126.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
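The pattern being fixed, in generic form (the callee is illustrative): the iterator holds a reference on 'child' that is normally dropped when advancing, so an early exit must drop it by hand.
----------
for_each_available_child_of_node(np, child) {
	rc = example_register_phy(child);	/* illustrative callee */
	if (rc) {
		of_node_put(child);	/* reference the loop would otherwise drop */
		break;
	}
}
----------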