author      Eric Biggers <ebiggers@google.com>        2019-12-31 21:14:16 +0300
committer   Jaegeuk Kim <jaegeuk@kernel.org>          2020-01-18 03:48:43 +0300
commit      644c8c92adb669bdb2d4b2a271dfa9569ae07ee8
tree        29911d2a6d74bb39c2a4f6526e06919ae27dcc09 /fs/f2fs
parent      e8ce5749d781ec0ccdf03f4f174cc8f709c0057a
download    linux-644c8c92adb669bdb2d4b2a271dfa9569ae07ee8.tar.xz
f2fs: fix deadlock allocating bio_post_read_ctx from mempool
Without any form of coordination, any case where multiple allocations
from the same mempool are needed at a time to make forward progress can
deadlock under memory pressure.
This is the case for struct bio_post_read_ctx, as one can be allocated
to decrypt a Merkle tree page during fsverity_verify_bio(), which itself
is running from a post-read callback for a data bio which has its own
struct bio_post_read_ctx.
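For illustration only (not code from this patch): a hedged sketch of the
allocation pattern being described. The worker function below is hypothetical;
mempool_alloc(), mempool_free(), GFP_NOFS and bio_post_read_ctx_pool are the
real interfaces involved.

	/*
	 * Hypothetical sketch: the ctx allocated for the data bio is still
	 * outstanding while the Merkle tree read needs a second ctx from the
	 * same mempool.
	 */
	static void post_read_worker_sketch(void)
	{
		struct bio_post_read_ctx *outer, *inner;

		outer = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
		/* ... decryption of the data bio finishes ... */

		/*
		 * fs-verity now reads (encrypted) Merkle tree pages, so a
		 * second ctx is allocated while "outer" is still held.  If
		 * every preallocated element is pinned by workers blocked
		 * right here, and the backing allocator keeps failing under
		 * memory pressure, this call sleeps forever.
		 */
		inner = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);

		mempool_free(inner, bio_post_read_ctx_pool);
		mempool_free(outer, bio_post_read_ctx_pool);
	}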
Fix this by freeing the first bio_post_read_ctx before calling
fsverity_verify_bio(). This works because verity (if enabled) is always
the last post-read step.
This deadlock can be reproduced by trying to read from an encrypted
verity file after reducing NUM_PREALLOC_POST_READ_CTXS to 1 and patching
mempool_alloc() to pretend that pool->alloc() always fails.
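For illustration only (not part of this commit): the kind of throwaway
debugging change meant above, sketched against mm/mempool.c; the exact
context line may differ by kernel version.

	--- a/mm/mempool.c
	+++ b/mm/mempool.c
	@@ (inside mempool_alloc(); context abridged)
	-	element = pool->alloc(gfp_temp, pool->pool_data);
	+	element = NULL;	/* pretend the underlying allocator always fails */

With NUM_PREALLOC_POST_READ_CTXS reduced to 1 in fs/f2fs/data.c, every
bio_post_read_ctx must then come from the single preallocated mempool
element, so the nested allocation sketched earlier blocks immediately.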
Note that since NUM_PREALLOC_POST_READ_CTXS is actually 128, hitting this
bug in practice would require reading from lots of encrypted
verity files at the same time. But it's theoretically possible, as N
available objects doesn't guarantee forward progress when > N/2 threads
each need 2 objects at a time.
Fixes: 95ae251fe828 ("f2fs: add fs-verity support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Diffstat (limited to 'fs/f2fs')
-rw-r--r--    fs/f2fs/data.c    25
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 63e0b814b567..8bd9afa81c54 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -204,19 +204,32 @@ static void f2fs_verity_work(struct work_struct *work)
 {
 	struct bio_post_read_ctx *ctx =
 		container_of(work, struct bio_post_read_ctx, work);
+	struct bio *bio = ctx->bio;
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	unsigned int enabled_steps = ctx->enabled_steps;
+#endif
+
+	/*
+	 * fsverity_verify_bio() may call readpages() again, and while verity
+	 * will be disabled for this, decryption may still be needed, resulting
+	 * in another bio_post_read_ctx being allocated.  So to prevent
+	 * deadlocks we need to release the current ctx to the mempool first.
+	 * This assumes that verity is the last post-read step.
+	 */
+	mempool_free(ctx, bio_post_read_ctx_pool);
+	bio->bi_private = NULL;
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 	/* previous step is decompression */
-	if (ctx->enabled_steps & (1 << STEP_DECOMPRESS)) {
-
-		f2fs_verify_bio(ctx->bio);
-		f2fs_release_read_bio(ctx->bio);
+	if (enabled_steps & (1 << STEP_DECOMPRESS)) {
+		f2fs_verify_bio(bio);
+		f2fs_release_read_bio(bio);
 		return;
 	}
 #endif
 
-	fsverity_verify_bio(ctx->bio);
-	__f2fs_read_end_io(ctx->bio, false, false);
+	fsverity_verify_bio(bio);
+	__f2fs_read_end_io(bio, false, false);
 }
 
 static void f2fs_post_read_work(struct work_struct *work)