author		Linus Torvalds <torvalds@linux-foundation.org>	2021-06-19 02:39:03 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2021-06-19 02:39:03 +0300
commit		6fab154a33ba9b3574ba74a86ed085e0ed8454cb (patch)
tree		bbd22db90d73bdc4c59df495667d30e2d98779b4 /fs
parent		728a748b3ff70326f652ab92081d639dc51269ea (diff)
parent		f9f28e5bd0baee9708c9011897196f06ae3a2733 (diff)
download	linux-6fab154a33ba9b3574ba74a86ed085e0ed8454cb.tar.xz
Merge tag 'for-5.13-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
Pull btrfs fix from David Sterba:
"One more fix, for a space accounting bug in zoned mode. It happens
when a block group is switched back rw->ro and unusable bytes (due to
zoned constraints) are subtracted twice.
It has user-visible effects, so I consider it important enough for
late -rc inclusion and backport to stable"
* tag 'for-5.13-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
btrfs: zoned: fix negative space_info->bytes_readonly
Diffstat (limited to 'fs')
 fs/btrfs/block-group.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index aa57bdc8fc89..6d5c4e45cfef 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2442,16 +2442,16 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group *cache)
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
 	if (!--cache->ro) {
-		num_bytes = cache->length - cache->reserved -
-			    cache->pinned - cache->bytes_super -
-			    cache->zone_unusable - cache->used;
-		sinfo->bytes_readonly -= num_bytes;
 		if (btrfs_is_zoned(cache->fs_info)) {
			/* Migrate zone_unusable bytes back */
			cache->zone_unusable = cache->alloc_offset - cache->used;
			sinfo->bytes_zone_unusable += cache->zone_unusable;
			sinfo->bytes_readonly -= cache->zone_unusable;
		}
+		num_bytes = cache->length - cache->reserved -
+			    cache->pinned - cache->bytes_super -
+			    cache->zone_unusable - cache->used;
+		sinfo->bytes_readonly -= num_bytes;
 		list_del_init(&cache->ro_list);
 	}
 	spin_unlock(&cache->lock);
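
For reference, a standalone sketch of the accounting above. This is not kernel code: the block group sizes (length 256, used 100, alloc_offset 200, bytes_super 16) and the helper dec_ro() are made up for illustration, and only the order of operations mirrors the read-only/read-write transitions in fs/btrfs/block-group.c. With the old ordering the zone_unusable bytes are taken out of bytes_readonly twice and the counter goes negative; with the reordered version it returns to zero.

/*
 * Standalone sketch, not kernel code: made-up sizes, only the order of
 * operations mirrors the ro/rw transitions in fs/btrfs/block-group.c.
 */
#include <stdio.h>

/* Hypothetical zoned block group: 256 bytes long, 100 used, allocation
 * offset at 200 (so 100 bytes are zone-unusable), 16 bytes of super. */
static const long long length = 256, reserved = 0, pinned = 0;
static const long long bytes_super = 16, used = 100, alloc_offset = 200;

static long long dec_ro(int fixed)
{
	long long zone_unusable = alloc_offset - used;			/* 100 */
	long long bytes_readonly, num_bytes;

	/*
	 * Going read-only: the free space and the zone_unusable bytes are
	 * both accounted as readonly, and the group's zone_unusable is
	 * reset to 0.
	 */
	bytes_readonly = (length - reserved - pinned - bytes_super -
			  zone_unusable - used) + zone_unusable;	/* 140 */
	zone_unusable = 0;

	/* Going back to read-write. */
	if (!fixed) {
		/*
		 * Old order: num_bytes is computed while zone_unusable is
		 * still 0, so it already covers the unusable range ...
		 */
		num_bytes = length - reserved - pinned - bytes_super -
			    zone_unusable - used;			/* 140 */
		bytes_readonly -= num_bytes;				/* 0 */
		/* ... and the migration subtracts it a second time. */
		zone_unusable = alloc_offset - used;
		bytes_readonly -= zone_unusable;			/* -100 */
	} else {
		/* New order: migrate zone_unusable back first ... */
		zone_unusable = alloc_offset - used;
		bytes_readonly -= zone_unusable;			/* 40 */
		/* ... so num_bytes no longer includes it. */
		num_bytes = length - reserved - pinned - bytes_super -
			    zone_unusable - used;			/* 40 */
		bytes_readonly -= num_bytes;				/* 0 */
	}
	return bytes_readonly;
}

int main(void)
{
	printf("old order: bytes_readonly = %lld\n", dec_ro(0));
	printf("new order: bytes_readonly = %lld\n", dec_ro(1));
	return 0;
}

Compiled with any C compiler, this prints -100 for the old order and 0 for the new one, matching the negative space_info->bytes_readonly named in the patch subject.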