path: root/fs
Age  Commit message  Author  Files  Lines
2021-02-21  Merge tag 'jfs-5.12' of git://github.com/kleikamp/linux-shaggy  (Linus Torvalds; 4 files changed, -20/+28)
Pull jfs updates from David Kleikamp:
 "A few jfs fixes"

* tag 'jfs-5.12' of git://github.com/kleikamp/linux-shaggy:
  fs/jfs: fix potential integer overflow on shift of a int
  jfs: turn diLog(), dataLog() and txLog() into void functions
  JFS: more checks for invalid superblock
2021-02-21  Merge tag 'locks-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux  (Linus Torvalds; 1 file changed, -6/+13)
Pull fcntl fix from Jeff Layton.

* tag 'locks-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux:
  fcntl: make F_GETOWN(EX) return 0 on dead owner task
2021-02-21  Merge branch 'work.namei' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 2 files changed, -45/+50)
Pull namei updates from Al Viro:
 "Most of that pile is LOOKUP_CACHED series; the rest is a couple of misc cleanups in the general area... There's a minor bisect hazard in the end of series, and normally I would've just folded the fix into the previous commit, but this branch is shared with Jens' tree, with stuff on top of it in there, so that would've required rebases outside of vfs.git"

* 'work.namei' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  fix handling of nd->depth on LOOKUP_CACHED failures in try_to_unlazy*
  fs: expose LOOKUP_CACHED through openat2() RESOLVE_CACHED
  fs: add support for LOOKUP_CACHED
  saner calling conventions for unlazy_child()
  fs: make unlazy_walk() error handling consistent
  fs/namei.c: Remove unlikely of status being -ECHILD in lookup_fast()
  do_tmpfile(): don't mess with finish_open()
2021-02-21  Merge branch 'work.elf-compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 4 files changed, -36/+16)
Pull ELF compat updates from Al Viro:
 "Sanitizing ELF compat support, especially for triarch architectures:

   - X32 handling cleaned up
   - MIPS64 uses compat_binfmt_elf.c both for O32 and N32 now
   - Kconfig side of things regularized

  Eventually I hope to have compat_binfmt_elf.c killed, with both native and compat built from fs/binfmt_elf.c, with -DELF_BITS={64,32} passed by kbuild, but that's a separate story - not included here"

* 'work.elf-compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  get rid of COMPAT_ELF_EXEC_PAGESIZE
  compat_binfmt_elf: don't bother with undef of ELF_ARCH
  Kconfig: regularize selection of CONFIG_BINFMT_ELF
  mips compat: switch to compat_binfmt_elf.c
  mips: don't bother with ELF_CORE_EFLAGS
  mips compat: don't bother with ELF_ET_DYN_BASE
  mips: KVM_GUEST makes no sense for 64bit builds...
  mips: kill unused definitions in binfmt_elf[on]32.c
  mips binfmt_elf*32.c: use elfcore-compat.h
  x32: make X32, !IA32_EMULATION setups able to execute x32 binaries
  [amd64] clean PRSTATUS_SIZE/SET_PR_FPVALID up properly
  elf_prstatus: collect the common part (everything before pr_reg) into a struct
  binfmt_elf: partially sanitize PRSTATUS_SIZE and SET_PR_FPVALID
2021-02-21  Merge branch 'work.sendfile' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 3 files changed, -26/+46)
Pull sendfile updates from Al Viro:
 "Make sendfile() to pipe destination do the right thing, should make 'fs/pipe: allow sendfile() to pipe again' redundant"

* 'work.sendfile' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  teach sendfile(2) to handle send-to-pipe directly
  take the guts of file-to-pipe splice into a helper function
  do_splice_to(): move the logics for limiting the read length in
2021-02-21  Merge tag 'arm-platform-removal-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc  (Linus Torvalds; 1 file changed, -1/+1)
Pull ARM SoC platform removals from Arnd Bergmann:
 "There are a lot of platforms that have not seen any interesting code changes in the past five years or more. I made a list and asked around which ones are no longer in use, and received confirmation about six ARM platforms and the TI C6x architecture that have all reached the end of their life upstream, with no known users remaining:

   - efm32 - added in 2011, first Cortex-M, no notable changes after 2013
   - picoxcell - added in 2011, abandoned after 2012 acquisition
   - prima2 - added in 2011, no notable changes since 2015
   - tango - added in 2015, sporadic changes until 2017, but abandoned
   - u300 - added in 2009, no notable changes since 2013
   - zx - added in 2015 for both 32, 2017 for 64 bit, no notable changes
   - arch/c6x - added in 2011, but work stalled soon after that

  A number of other platforms on the original list turned out to still have users. In some cases there are out-of-tree patches and users that plan to contribute them in the future, in other cases the code is complete and works reliably"

Link: https://lore.kernel.org/lkml/CAK8P3a2DZ8xQp7R=H=wewHnT2=a_=M53QsZOueMVEf7tOZLKNg@mail.gmail.com/

* tag 'arm-platform-removal-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  ARM: remove u300 platform
  ARM: remove tango platform
  ARM: remove zte zx platform
  ARM: remove sirf prima2/atlas platforms
  c6x: remove architecture
  MAINTAINERS: Remove deleted platform efm32
  ARM: drop efm32 platform
  ARM: Remove PicoXcell platform support
  ARM: dts: Remove PicoXcell platforms
2021-02-21  io_uring: fix leaving invalid req->flags  (Pavel Begunkov; 1 file changed, -1/+3)
sqe->flags are a subset of the req flags, so an incorrect copy may spill into in-kernel flags and wreak havoc, e.g. by setting REQ_F_INFLIGHT.

Fixes: 5be9ad1e4287e ("io_uring: optimise io_init_req() flags setting")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
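For illustration only, a minimal sketch of the pattern such a fix restores, not the committed diff: validate the user-supplied SQE flags against the allowed mask before they are folded into the request's internal flag word (SQE_VALID_FLAGS and the shared low-bit layout follow fs/io_uring.c of this period; treat the details as assumptions):

    unsigned int sqe_flags = READ_ONCE(sqe->flags);

    /* Reject unknown user bits before they can alias in-kernel REQ_F_* bits. */
    if (sqe_flags & ~SQE_VALID_FLAGS)
            return -EINVAL;
    req->flags |= sqe_flags;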
2021-02-21  io_uring: wait potential ->release() on resurrect  (Pavel Begunkov; 1 file changed, -8/+18)
There is a short window where the percpu refs have already been turned to zero but we try to resurrect(). Play nicer and wait for ->release() to happen in this case, then proceed as if everything is ok. One downside for ctx refs is that we can ignore signal_pending() on a rare occasion, but someone else should check for it later if needed.

Cc: <stable@vger.kernel.org> # 5.5+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
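A hedged sketch of the idea, not the patch itself: if the ref count has already hit zero, wait until the release callback has actually run before reviving the ref. The ctx->ref_comp completion signalled by the ring's release callback is an assumption about the surrounding io_uring code:

    percpu_ref_kill(&ctx->refs);
    /* ... quiesce / drain work ... */

    /* The refs may already be zero here; wait until ->release() has run
     * before bringing the ref back to life. */
    wait_for_completion(&ctx->ref_comp);
    percpu_ref_resurrect(&ctx->refs);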
2021-02-21  io_uring: keep generic rsrc infra generic  (Pavel Begunkov; 1 file changed, -19/+13)
io_rsrc_ref_quiesce() is a generic resource function, though it is currently wired to allocate and initialise ref nodes with file-specific callbacks/etc. Keep it sane by passing in as parameters everything we need for initialisation; otherwise it will hurt us badly one day.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-21  io_uring: zero ref_node after killing it  (Pavel Begunkov; 1 file changed, -0/+1)
After a rsrc/files reference node's refs are killed, it must never be used. And that's how it works: we either assign a new node or kill the whole data table. Let's explicitly NULL it; that shouldn't be necessary, but if something were to go wrong I'd rather catch a NULL dereference than use a dangling pointer.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-21  io_uring: make the !CONFIG_NET helpers a bit more robust  (Jens Axboe; 1 file changed, -50/+26)
With the prep and prep async split, we now have potentially 3 helpers that need to be defined for !CONFIG_NET. Add some helpers to do just that. Fixes the following compile error on !CONFIG_NET:

  fs/io_uring.c:6171:10: error: implicit declaration of function 'io_sendmsg_prep_async'; did you mean 'io_req_prep_async'? [-Werror=implicit-function-declaration]
     return io_sendmsg_prep_async(req);
            ^~~~~~~~~~~~~~~~~~~~~
            io_req_prep_async

Fixes: 93642ef88434 ("io_uring: split sqe-prep and async setup")
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
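A rough sketch of what such !CONFIG_NET stubs look like; the real patch may generate them with helper macros, and the exact signatures shown here are assumptions:

    #if !defined(CONFIG_NET)
    /* Every networking opcode needs prep, async-prep and issue entry points
     * even when networking is compiled out; each simply reports "not supported". */
    static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
    {
            return -EOPNOTSUPP;
    }

    static int io_sendmsg_prep_async(struct io_kiocb *req)
    {
            return -EOPNOTSUPP;
    }

    static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
    {
            return -EOPNOTSUPP;
    }
    #endif /* !CONFIG_NET */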
2021-02-21  io_uring: don't hold uring_lock when calling io_run_task_work*  (Hao Xu; 1 file changed, -17/+44)
Abaci reported the below issue:

[ 141.400455] hrtimer: interrupt took 205853 ns
[ 189.869316] process 'usr/local/ilogtail/ilogtail_0.16.26' started with executable stack
[ 250.188042]
[ 250.188327] ============================================
[ 250.189015] WARNING: possible recursive locking detected
[ 250.189732] 5.11.0-rc4 #1 Not tainted
[ 250.190267] --------------------------------------------
[ 250.190917] a.out/7363 is trying to acquire lock:
[ 250.191506] ffff888114dbcbe8 (&ctx->uring_lock){+.+.}-{3:3}, at: __io_req_task_submit+0x29/0xa0
[ 250.192599]
[ 250.192599] but task is already holding lock:
[ 250.193309] ffff888114dbfbe8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_register+0xad/0x210
[ 250.194426]
[ 250.194426] other info that might help us debug this:
[ 250.195238] Possible unsafe locking scenario:
[ 250.195238]
[ 250.196019]   CPU0
[ 250.196411]   ----
[ 250.196803]   lock(&ctx->uring_lock);
[ 250.197420]   lock(&ctx->uring_lock);
[ 250.197966]
[ 250.197966] *** DEADLOCK ***
[ 250.197966]
[ 250.198837] May be due to missing lock nesting notation
[ 250.198837]
[ 250.199780] 1 lock held by a.out/7363:
[ 250.200373] #0: ffff888114dbfbe8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_register+0xad/0x210
[ 250.201645]
[ 250.201645] stack backtrace:
[ 250.202298] CPU: 0 PID: 7363 Comm: a.out Not tainted 5.11.0-rc4 #1
[ 250.203144] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[ 250.203887] Call Trace:
[ 250.204302]  dump_stack+0xac/0xe3
[ 250.204804]  __lock_acquire+0xab6/0x13a0
[ 250.205392]  lock_acquire+0x2c3/0x390
[ 250.205928]  ? __io_req_task_submit+0x29/0xa0
[ 250.206541]  __mutex_lock+0xae/0x9f0
[ 250.207071]  ? __io_req_task_submit+0x29/0xa0
[ 250.207745]  ? 0xffffffffa0006083
[ 250.208248]  ? __io_req_task_submit+0x29/0xa0
[ 250.208845]  ? __io_req_task_submit+0x29/0xa0
[ 250.209452]  ? __io_req_task_submit+0x5/0xa0
[ 250.210083]  __io_req_task_submit+0x29/0xa0
[ 250.210687]  io_async_task_func+0x23d/0x4c0
[ 250.211278]  task_work_run+0x89/0xd0
[ 250.211884]  io_run_task_work_sig+0x50/0xc0
[ 250.212464]  io_sqe_files_unregister+0xb2/0x1f0
[ 250.213109]  __io_uring_register+0x115a/0x1750
[ 250.213718]  ? __x64_sys_io_uring_register+0xad/0x210
[ 250.214395]  ? __fget_files+0x15a/0x260
[ 250.214956]  __x64_sys_io_uring_register+0xbe/0x210
[ 250.215620]  ? trace_hardirqs_on+0x46/0x110
[ 250.216205]  do_syscall_64+0x2d/0x40
[ 250.216731]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 250.217455] RIP: 0033:0x7f0fa17e5239
[ 250.218034] Code: 01 00 48 81 c4 80 00 00 00 e9 f1 fe ff ff 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 3d 01 f0 ff ff 73 01 c3 48 8b 0d 27 ec 2c 00 f7 d8 64 89 01 48
[ 250.220343] RSP: 002b:00007f0fa1eeac48 EFLAGS: 00000246 ORIG_RAX: 00000000000001ab
[ 250.221360] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0fa17e5239
[ 250.222272] RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000008
[ 250.223185] RBP: 00007f0fa1eeae20 R08: 0000000000000000 R09: 0000000000000000
[ 250.224091] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 250.224999] R13: 0000000000021000 R14: 0000000000000000 R15: 00007f0fa1eeb700

This is caused by calling io_run_task_work_sig() to do work under uring_lock while the caller, io_sqe_files_unregister(), already holds uring_lock. To fix this issue, briefly drop uring_lock when calling io_run_task_work_sig(); there are a few things to take care of:

- hold uring_lock in io_ring_ctx_free() around io_sqe_files_unregister(); this is for consistency of lock/unlock.
- add a new fixed rsrc ref node before dropping uring_lock; it's not safe to do io_uring_enter-->percpu_ref_get() with a dying one.
- check if rsrc_data->refs is dying, to avoid a parallel io_sqe_files_unregister.

Reported-by: Abaci <abaci@linux.alibaba.com>
Fixes: 1ffc54220c44 ("io_uring: fix io_sqe_files_unregister() hangs")
Suggested-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
[axboe: fixes from Pavel folded in]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
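To make the locking change concrete, here is a simplified sketch of only the lock-drop part (the ref-node and dying-refs handling from the list above are omitted); the helper names appear in the trace above, but the surrounding context is assumed:

    int ret;

    /* The io_uring_register() path holds ctx->uring_lock here. Drop it
     * while running task_work, since queued work such as
     * __io_req_task_submit() takes uring_lock itself. */
    mutex_unlock(&ctx->uring_lock);
    ret = io_run_task_work_sig();
    mutex_lock(&ctx->uring_lock);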
2021-02-21  io_uring: fail io-wq submission from a task_work  (Pavel Begunkov; 1 file changed, -30/+18)
In case of failure, io_wq_submit_work() needs to post a CQE and so potentially take uring_lock. The safest way to deal with it is to do that from under task_work, where we can safely take the lock. Also, as io_iopoll_check() holds the lock tight and releases it reluctantly, it will play nicer in the future with notifying an iopolling task about such new pending failed requests.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-20  fix handling of nd->depth on LOOKUP_CACHED failures in try_to_unlazy*  (Al Viro; 1 file changed, -4/+5)
After switching to non-RCU mode, we want nd->depth to match the number of entries in nd->stack[] that need eventual path_put(). legitimize_links() takes care of that on failures; unfortunately, failure exits added for LOOKUP_CACHED do not. We could add the logics for that into those failure exits, both in try_to_unlazy() and in try_to_unlazy_next(), but since both checks are immediately followed by legitimize_links() and there's no calls of legitimize_links() other than those two... It's easier to move the check (and required handling of nd->depth on failure) into legitimize_links() itself. [caught by Jens: ... and since we are zeroing ->depth here, we need to do drop_links() first] Fixes: 6c6ec2b0a3e0 "fs: add support for LOOKUP_CACHED" Tested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2021-02-20  cifs: Fix inconsistent IS_ERR and PTR_ERR  (YueHaibing; 1 file changed, -1/+1)
Fix inconsistent IS_ERR and PTR_ERR in cifs_find_swn_reg(). The proper pointer to be passed as argument to PTR_ERR() is share_name. This bug was detected with the help of Coccinelle. Fixes: bf80e5d4259a ("cifs: Send witness register and unregister commands to userspace daemon") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Samuel Cabrero <scabrero@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
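The general shape of this bug class, as a generic hedged sketch rather than the cifs code itself (get_share_name() is a hypothetical helper):

    char *share_name = get_share_name(tcon);    /* hypothetical helper */

    if (IS_ERR(share_name))
            return PTR_ERR(share_name);  /* same pointer as in IS_ERR(), not another one */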
2021-02-19  io_uring: don't take uring_lock during iowq cancel  (Pavel Begunkov; 1 file changed, -2/+9)
[ 97.866748] a.out/2890 is trying to acquire lock:
[ 97.867829] ffff8881046763e8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_wq_submit_work+0x155/0x240
[ 97.869735]
[ 97.869735] but task is already holding lock:
[ 97.871033] ffff88810dfe0be8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_enter+0x3f0/0x5b0
[ 97.873074]
[ 97.873074] other info that might help us debug this:
[ 97.874520] Possible unsafe locking scenario:
[ 97.874520]
[ 97.875845]   CPU0
[ 97.876440]   ----
[ 97.877048]   lock(&ctx->uring_lock);
[ 97.877961]   lock(&ctx->uring_lock);
[ 97.878881]
[ 97.878881] *** DEADLOCK ***
[ 97.878881]
[ 97.880341] May be due to missing lock nesting notation
[ 97.880341]
[ 97.881952] 1 lock held by a.out/2890:
[ 97.882873] #0: ffff88810dfe0be8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_enter+0x3f0/0x5b0
[ 97.885108]
[ 97.885108] stack backtrace:
[ 97.890457] Call Trace:
[ 97.891121]  dump_stack+0xac/0xe3
[ 97.891972]  __lock_acquire+0xab6/0x13a0
[ 97.892940]  lock_acquire+0x2c3/0x390
[ 97.894894]  __mutex_lock+0xae/0x9f0
[ 97.901101]  io_wq_submit_work+0x155/0x240
[ 97.902112]  io_wq_cancel_cb+0x162/0x490
[ 97.904126]  io_async_find_and_cancel+0x3b/0x140
[ 97.905247]  io_issue_sqe+0x86d/0x13e0
[ 97.909122]  __io_queue_sqe+0x10b/0x550
[ 97.913971]  io_queue_sqe+0x235/0x470
[ 97.914894]  io_submit_sqes+0xcce/0xf10
[ 97.917872]  __x64_sys_io_uring_enter+0x3fb/0x5b0
[ 97.921424]  do_syscall_64+0x2d/0x40
[ 97.922329]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

While holding uring_lock, e.g. from inline execution, an async cancel request may attempt cancellations through io_wq_submit_work, which may try to grab the lock. Delay it to task_work, so we do it from a clean context and don't have to worry about locking.

Cc: <stable@vger.kernel.org> # 5.5+
Fixes: c07e6719511e ("io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()")
Reported-by: Abaci <abaci@linux.alibaba.com>
Reported-by: Hao Xu <haoxu@linux.alibaba.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  pstore: Fix typo in compression option name  (Jiri Bohac; 1 file changed, -2/+2)
Both pstore_compress() and decompress_record() use a mistyped config option name ("PSTORE_COMPRESSION" instead of "PSTORE_COMPRESS"). As a result, compression and decompression of pstore records were always disabled. Use the correct config option name.

Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Fixes: fd49e03280e5 ("pstore: Fix linking when crypto API disabled")
Acked-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210218111547.johvp5klpv3xrpnn@dwarf.suse.cz
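The gist, as a hedged sketch rather than the actual diff: IS_ENABLED() simply evaluates to 0 for a Kconfig symbol that does not exist, so a misspelled name compiles cleanly and silently disables the feature:

    /* Must name the real Kconfig symbol (PSTORE_COMPRESS); with the
     * misspelled name this test is always true and compression is
     * never attempted. */
    if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS))    /* not CONFIG_PSTORE_COMPRESSION */
            return -EINVAL;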
2021-02-18  io_uring: fail links more in io_submit_sqe()  (Pavel Begunkov; 1 file changed, -13/+8)
Instead of marking a link with REQ_F_FAIL_LINK on an error and delaying its failing to the caller, do it eagerly, right after getting an error in io_submit_sqe(). This renders the FAIL_LINK checks in io_queue_link_head() useless, and we can skip them.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: don't do async setup for links' heads  (Pavel Begunkov; 1 file changed, -3/+0)
Now that we can do async setup without holding an SQE, we can skip doing io_req_defer_prep() for link heads: the head will be tried inline and follows all the rules of non-linked requests.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: do io_*_prep() early in io_submit_sqe()  (Pavel Begunkov; 1 file changed, -35/+24)
Now that preparations are split from async setup, we can do the first one pretty early, without spilling it across multiple call sites. And once it's done, the SQE is not needed anymore, so we can save on passing it deeply into the submission stack.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: split sqe-prep and async setup  (Pavel Begunkov; 1 file changed, -50/+70)
There are two kinds of opcode-specific preparation we do. The first is just initialising req with what is always needed for an opcode and reading all non-generic SQE fields. The second is copying some of the stuff, like an iovec, in preparation for punting a request somewhere async, e.g. to io-wq or for draining. For requests that have tried an inline execution but still need to be punted, the second prep type is done by the opcode handler itself.

Currently, we don't explicitly split those preparation steps, but combine both of them into io_*_prep(), altering the behaviour by allocating ->async_data. That's pretty messy and hard to follow, and it also gets in the way of some optimisations.

Split the steps: leave the first type where it is now, and put the second into a new io_req_prep_async() helper. It may make us do the opcode switch twice, but it's worth it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
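A sketch of the shape of the second-stage helper described above; io_req_prep_async() and io_sendmsg_prep_async() are named elsewhere in this series, but the case list shown here is abbreviated and partly assumed:

    static int io_req_prep_async(struct io_kiocb *req)
    {
            switch (req->opcode) {
            case IORING_OP_SENDMSG:
                    return io_sendmsg_prep_async(req);
            /* ... other opcodes that need ->async_data set up before punting ... */
            }
            return 0;
    }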
2021-02-18  io_uring: don't submit link on error  (Pavel Begunkov; 1 file changed, -4/+4)
If we get an error in io_init_req() for a request that would have been linked, we break the submission but still issue a partially composed link; that's nasty, so fail it instead.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: move req link into submit_state  (Pavel Begunkov; 1 file changed, -14/+14)
Move struct io_submit_link into submit_state, which is a part of a submission state and so belongs to it. It saves us from explicitly passing it, and init/deinit is now nicely hidden in io_submit_state_[start,end]. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: move io_init_req() into io_submit_sqe()  (Pavel Begunkov; 1 file changed, -17/+15)
Behaves identically; just move the io_init_req() call into the beginning of io_submit_sqes(). That looks better and unloads io_submit_sqes().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: move io_init_req()'s definition  (Pavel Begunkov; 1 file changed, -107/+107)
A preparation patch: a symbol-to-symbol move of io_init_req() + io_check_restriction() a bit up. The submission path is pretty settled down, so don't worry about backports; move the functions instead of relying on forward declarations in the future.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: don't duplicate ->file check in sfr  (Pavel Begunkov; 1 file changed, -3/+0)
IORING_OP_SYNC_FILE_RANGE is marked as .needs_file, so the common path will take care of assigning and validating req->file, no need to duplicate it in io_sfr_prep(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: keep io_*_prep() naming consistent  (Pavel Begunkov; 1 file changed, -4/+4)
Follow the io_*_prep() naming pattern; fsync and sfr are the only ones that don't do that.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  io_uring: kill fictitious submit iteration index  (Pavel Begunkov; 1 file changed, -2/+2)
@i and @submitted are very much coupled together, and there is no need to keep both. Remove @i; it doesn't change the generated binary, but it helps to keep a single source of truth.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-18  debugfs: do not attempt to create a new file before the filesystem is initalized  (Greg Kroah-Hartman; 1 file changed, -0/+3)
Some subsystems want to add debugfs files at early boot, way before debugfs is initialized. This seems to work somehow as the vfs layer will not allow it to happen, but let's be explicit and test to ensure we are properly up and running before allowing files to be created. Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: stable <stable@vger.kernel.org> Reported-by: Michael Walle <michael@walle.cc> Reported-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210218100818.3622317-2-gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
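A hedged sketch of the kind of guard this adds in the file-creation path; debugfs_initialized() is the long-standing helper for this check, but the exact placement and error value are assumptions:

    /* Refuse to create anything until debugfs is actually registered,
     * rather than relying on the VFS to reject it for us. */
    if (!debugfs_initialized())
            return ERR_PTR(-ENOENT);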
2021-02-18  debugfs: be more robust at handling improper input in debugfs_lookup()  (Greg Kroah-Hartman; 1 file changed, -1/+1)
debugfs_lookup() doesn't like it if it is passed an illegal name pointer, or if the filesystem isn't even initialized yet. If either of these happen, it will crash the system, so fix it up by properly testing for valid input and that we are up and running before trying to find a file in the filesystem. Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: stable <stable@vger.kernel.org> Reported-by: Michael Walle <michael@walle.cc> Tested-by: Michael Walle <michael@walle.cc> Tested-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210218100818.3622317-1-gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-02-18  zonefs: Fix file size of zones in full condition  (Shin'ichiro Kawasaki; 1 file changed, -0/+3)
Per the ZBC/ZAC/ZNS specifications, write pointers may not have valid values when zones are in the full condition. However, when zonefs mounts a zoned block device, it uses the write pointers to set file sizes even when the zones are in the full condition, which results in wrong file sizes. To fix this, use the maximum file size in place of the write pointer for zones in the full condition.

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Fixes: 8dcc1a9d90c1 ("fs: New zonefs file system")
Cc: <stable@vger.kernel.org> # 5.6+
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
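A simplified sketch of the rule (zone field names come from the generic block-layer zone descriptor; the zonefs-internal variables around it are assumed):

    loff_t isize;

    if (zone->cond == BLK_ZONE_COND_FULL)
            isize = max_size;    /* the zone's maximum file size */
    else
            isize = (zone->wp - zone->start) << SECTOR_SHIFT;  /* wp-based size */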
2021-02-18  io_uring: fix read memory leak  (Pavel Begunkov; 1 file changed, -4/+6)
Don't forget to free the iovec on read inline completion, and in a bunch of other cases that do "goto done" before setting up an async context.

Fixes: 5ea5dd45844d ("io_uring: inline io_read()'s iovec freeing")
Reported-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-17  NFS: Support the '-owrite=' option in /proc/self/mounts and mountinfo  (Trond Myklebust; 1 file changed, -0/+7)
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2021-02-17  gfs2: Use resource group glock sharing  (Bob Peterson; 5 files changed, -12/+15)
This patch takes advantage of the new glock holder sharing feature for resource groups. We have already introduced local resource group locking in a previous patch, so competing accesses of local processes are already under control. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Allow node-wide exclusive glock sharing  (Bob Peterson; 2 files changed, -3/+25)
Introduce a new LM_FLAG_NODE_SCOPE glock holder flag: when taking a glock in LM_ST_EXCLUSIVE (EX) mode and with the LM_FLAG_NODE_SCOPE flag set, the exclusive lock is shared among all local processes who are holding the glock in EX mode and have the LM_FLAG_NODE_SCOPE flag set. From the point of view of other nodes, the lock is still held exclusively. A future patch will start using this flag to improve performance with rgrp sharing. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
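Illustrative only (not from the patch): what requesting such a holder might look like for a resource group glock. gfs2_holder_init(), gfs2_glock_nq() and LM_ST_EXCLUSIVE are existing gfs2 interfaces; the surrounding variables are assumed:

    struct gfs2_holder gh;
    int error;

    /* EX towards other nodes, but shareable among local holders that
     * also pass LM_FLAG_NODE_SCOPE. */
    gfs2_holder_init(rgd->rd_gl, LM_ST_EXCLUSIVE, LM_FLAG_NODE_SCOPE, &gh);
    error = gfs2_glock_nq(&gh);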
2021-02-17  gfs2: Add local resource group locking  (Andreas Gruenbacher; 4 files changed, -7/+59)
Prepare for treating resource group glocks as exclusive among nodes but shared among all tasks running on a node: introduce another layer of node-specific locking that the local tasks can use to coordinate their accesses. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Add per-reservation reserved block accounting  (Andreas Gruenbacher; 5 files changed, -28/+82)
Add a rs_reserved field to struct gfs2_blkreserv to keep track of the number of blocks reserved by this particular reservation, and a rd_reserved field to struct gfs2_rgrpd to keep track of the total number of reserved blocks in the resource group. Those blocks are exclusively reserved, as opposed to the rs_requested / rd_requested blocks which are tracked in the reservation tree (rd_rstree) and which can be stolen if necessary. When making a reservation with gfs2_inplace_reserve, rs_reserved is set to somewhere between ap->min_target and ap->target depending on the number of free blocks in the resource group. When allocating blocks with gfs2_alloc_blocks, rs_reserved is decremented accordingly. Eventually, any reserved but not consumed blocks are returned to the resource group by gfs2_inplace_release. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
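A field-level sketch of the two counters described above; the names come from the commit message, but the exact types and placement in the structures are assumptions:

    struct gfs2_blkreserv {
            /* ... existing fields ... */
            u32 rs_reserved;   /* blocks exclusively reserved by this reservation */
    };

    struct gfs2_rgrpd {
            /* ... existing fields ... */
            u32 rd_reserved;   /* total exclusively reserved blocks in this rgrp */
    };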
2021-02-17  gfs2: Rename rs_{free -> requested} and rd_{reserved -> requested}  (Andreas Gruenbacher; 3 files changed, -33/+33)
We keep track of what we've so far been referring to as reservations in rd_rstree: the nodes in that tree indicate where in a resource group we'd like to allocate the next couple of blocks for a particular inode. Local processes take those as hints, but they may still "steal" blocks from those extents, so when actually allocating a block, we must double check in the bitmap whether that block is actually still free. Likewise, other cluster nodes may "steal" such blocks as well. One of the following patches introduces resource group glock sharing, i.e., sharing of an exclusively locked resource group glock among local processes to speed up allocations. To make that work, we'll need to keep track of how many blocks we've actually reserved for each inode, so we end up with two different kinds of reservations. Distinguish these two kinds by referring to blocks which are reserved but may still be "stolen" as "requested". This rename also makes it more obvious that rs_requested and rd_requested are strongly related. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Check for active reservation in gfs2_release  (Andreas Gruenbacher; 1 file changed, -2/+2)
In gfs2_release, check if the inode has an active reservation to avoid unnecessary lock taking. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Don't search for unreserved space twice  (Andreas Gruenbacher; 1 file changed, -4/+5)
If gfs2_inplace_reserve has chosen a resource group but couldn't make a reservation there, there are too many other reservations in that resource group. In that case, don't even try to respect existing reservations in gfs2_alloc_blocks.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Only pass reservation down to gfs2_rbm_find  (Andreas Gruenbacher; 2 files changed, -14/+14)
Only pass the current reservation down to gfs2_rbm_find rather than the entire inode; we don't need any of the other information. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  gfs2: Also reflect single-block allocations in rgd->rd_extfail_pt  (Andreas Gruenbacher; 1 file changed, -8/+10)
Pass a non-NULL minext to gfs2_rbm_find even for single-block allocations. In gfs2_rbm_find, also set rgd->rd_extfail_pt when a single-block allocation fails in a resource group: there is no reason for treating that case differently. In gfs2_reservation_check_and_update, only check how many free blocks we have if more than one block is requested; we already know there's at least one free block. In addition, when allocating N blocks fails in gfs2_rbm_find, we need to set rd_extfail_pt to N - 1 rather than N: rd_extfail_pt defines the biggest allocation that might still succeed. Finally, reset rd_extfail_pt when updating the resource group statistics in update_rgrp_lvb, as we already do in gfs2_rgrp_bh_get. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-02-17  cifs: Reformat DebugData and index connections by conn_id.  (Shyam Prasad N; 1 file changed, -49/+68)
Reformat the output of /proc/fs/cifs/DebugData to print the conn_id for each connection. Also reordered and numbered the data into a more reader-friendly format. This is what the new format looks like:

  $ cat /proc/fs/cifs/DebugData
  Display Internal CIFS Data Structures for Debugging
  ---------------------------------------------------
  CIFS Version 2.30
  Features: DFS,FSCACHE,STATS,DEBUG,ALLOW_INSECURE_LEGACY,WEAK_PW_HASH,CIFS_POSIX,UPCALL(SPNEGO),XATTR,ACL
  CIFSMaxBufSize: 16384
  Active VFS Requests: 0

  Servers:
  1) ConnectionId: 0x1
  Number of credits: 371 Dialect 0x300
  TCP status: 1 Instance: 1
  Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
  In Send: 0 In MaxReq Wait: 0

  Sessions:
  1) Name: 10.10.10.10 Uses: 1 Capability: 0x300077 Session Status: 1
  Security type: RawNTLMSSP SessionId: 0x785560000019
  User: 1000 Cred User: 0

  Shares:
  0) IPC: \\10.10.10.10\IPC$ Mounts: 1 DevInfo: 0x0 Attributes: 0x0
  PathComponentMax: 0 Status: 1 type: 0 Serial Number: 0x0
  Share Capabilities: None Share Flags: 0x30
  tid: 0x1 Maximal Access: 0x11f01ff

  1) \\10.10.10.10\shyam_test2 Mounts: 1 DevInfo: 0x20020 Attributes: 0xc706ff
  PathComponentMax: 255 Status: 1 type: DISK Serial Number: 0xd4723975
  Share Capabilities: None Aligned, Partition Aligned, Share Flags: 0x0
  tid: 0x5 Optimal sector size: 0x1000 Maximal Access: 0x1f01ff

  MIDs:

  Server interfaces: 3
  1) Speed: 10000000000 bps
     Capabilities: rss
     IPv4: 10.10.10.1
  2) Speed: 10000000000 bps
     Capabilities: rss
     IPv6: fe80:0000:0000:0000:18b4:0000:0000:0000
  3) Speed: 1000000000 bps
     Capabilities: rss
     IPv4: 10.10.10.10
     [CONNECTED]

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
2021-02-17  cifs: Identify a connection by a conn_id.  (Shyam Prasad N; 6 files changed, -44/+122)
Introduced a new field conn_id in TCP_Server_Info structure. This is a non-persistent unique identifier maintained by the client for a connection to a file server. For this, a global counter named tcpSesNextId is maintained. On allocating a new TCP_Server_Info, this counter is incremented and assigned. Changed the dynamic tracepoints related to reconnects and crediting to be more informative (with conn_id printed). Debugging a crediting issue helped me understand the important things to print here. Always call dynamic tracepoints outside the scope of spinlocks. To do this, copy out the credits and in_flight fields of the server struct before dropping the lock. Signed-off-by: Shyam Prasad N <sprasad@microsoft.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com>
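A minimal sketch of the id handout at TCP_Server_Info allocation time; tcpSesNextId is named in the commit message, but the type and the synchronization shown here are assumptions:

    static u64 tcpSesNextId;    /* global, monotonically increasing counter */

    /* assign once, when the new TCP_Server_Info is allocated;
     * serialization against concurrent allocations is assumed */
    server->conn_id = ++tcpSesNextId;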
2021-02-17  cifs: Fix in error types returned for out-of-credit situations.  (Shyam Prasad N; 1 file changed, -3/+3)
For failure by timeout while waiting for credits, changed the error returned to the app to EBUSY instead of ENOTSUPP. This is done because this situation is possible even in non-buggy cases: an overloaded server can return 0 credits until it is done with outstanding requests, and EBUSY feels like a better error to return to the app. For cases of zero credits found even when there are no requests in flight, replaced ENOTSUPP with EDEADLK, since we're avoiding deadlock here by returning an error.

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
2021-02-17  cifs: New optype for session operations.  (Shyam Prasad N; 4 files changed, -5/+9)
We used to share the CIFS_NEG_OP flag between negotiate and session authentication. There was an assumption in the code that CIFS_NEG_OP is used by negotiate only, so introduce CIFS_SESS_OP and use it for the session setup optypes.

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
2021-02-17  NFS: Set the stable writes flag when initialising the super block  (Trond Myklebust; 1 file changed, -0/+2)
We need to wait for outstanding writes on the page to complete before we can update it. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
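A one-line sketch of what setting the stable writes flag on a super block typically means at the VFS level; SB_I_STABLE_WRITES is the generic flag, while where exactly NFS sets it is assumed here:

    /* Make the page cache wait for writeback to finish before allowing a
     * page under writeback to be modified again. */
    sb->s_iflags |= SB_I_STABLE_WRITES;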
2021-02-17  NFS: Add mount options supporting eager writes  (Trond Myklebust; 1 file changed, -0/+33)
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2021-02-17  NFS: Add support for eager writes  (Trond Myklebust; 2 files changed, -7/+29)
Support eager writing to the server, meaning that we write the data to cache on the server, and wait for that to complete. This ensures that we see ENOSPC errors immediately. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2021-02-17  cifs: fix trivial typo  (Steve French; 1 file changed, -1/+1)
Typo: exiting --> existing Signed-off-by: Steve French <stfrench@microsoft.com>