author      Hugh Dickins <hughd@google.com>                   2011-05-29 00:14:09 +0400
committer   Linus Torvalds <torvalds@linux-foundation.org>    2011-05-29 03:09:26 +0400
commit      826267cf1e6c6899eda1325a19f1b1d15c558b20 (patch)
tree        f022fabd26f035888c4fec972ff54163378b8962 /mm
parent      36947a76826111e661a26cb0f668a5be6cc3ddb4 (diff)
download    linux-826267cf1e6c6899eda1325a19f1b1d15c558b20.tar.xz
tmpfs: fix race between truncate and writepage
While running fsx on tmpfs with a memhog then swapoff, swapoff was hanging
(interruptibly), repeatedly failing to locate the owner of a 0xff entry in
the swap_map.
Although shmem_writepage() does abandon its attempt when it sees that the
incoming page's index is beyond eof, there was still a window in which
shmem_truncate_range() could come in between writepage's dropping of the
lock and its update of the swap_map, find the half-completed swap_map
entry, and, in trying to free it, leave it in a state that
swap_shmem_alloc() could not correct.
Arguably a bug in __swap_duplicate()'s and swap_entry_free()'s handling
of the different cases, but easiest to fix by moving swap_shmem_alloc()
under cover of the lock.
More interesting than the bug: it has been there since 2.6.33, so why could
I not see it with earlier kernels? The mmotm of two weeks ago seems to have
some magic for generating races; this is just one of three I found. With
yesterday's git I first saw this in mainline, and bisected in search of that
magic, but the easy reproducibility evaporated. Oh well, fix the bug.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/shmem.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/shmem.c b/mm/shmem.c
index 1acfb2687bfa..d221a1cfd7b1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1114,8 +1114,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		delete_from_page_cache(page);
 		shmem_swp_set(info, entry, swap.val);
 		shmem_swp_unmap(entry);
-		spin_unlock(&info->lock);
 		swap_shmem_alloc(swap);
+		spin_unlock(&info->lock);
 		BUG_ON(page_mapped(page));
 		swap_writepage(page, wbc);
 		return 0;
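
To make the ordering concrete, here is a minimal userspace analogy of the race
the patch closes: a two-step update (publish an entry, then record its
ownership in a separate map) guarded by a single lock. This is not the kernel
code; the names publish_entry, truncate_entry and entry_map are hypothetical
stand-ins for shmem_writepage()'s swap_map update and shmem_truncate_range()'s
freeing path, and a pthread mutex stands in for info->lock. The buggy ordering
dropped the lock between the two steps; the fix, as in the patch above,
performs both steps before unlocking.

/*
 * Minimal sketch, not the kernel code: a two-step update protected by one
 * lock.  All names here are hypothetical analogues of the tmpfs code paths.
 */
#include <pthread.h>
#include <stdio.h>

#define OWNED 1          /* analogue of the swap_map ownership count */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int published;    /* analogue of the swp entry stored for the inode */
static int entry_map;    /* analogue of the swap_map slot */

static void publish_entry(void)        /* analogue of shmem_writepage() */
{
	pthread_mutex_lock(&lock);
	published = 1;                 /* step 1: make the entry visible */
	/*
	 * Buggy ordering: unlocking here left a window in which
	 * truncate_entry() could see 'published' while entry_map was still
	 * unset, and freeing it would leave an inconsistent state.
	 */
	entry_map = OWNED;             /* step 2: record ownership */
	pthread_mutex_unlock(&lock);   /* fix: unlock only after step 2 */
}

static void truncate_entry(void)       /* analogue of shmem_truncate_range() */
{
	pthread_mutex_lock(&lock);
	if (published) {
		/* With the fix, ownership is always recorded by now. */
		if (entry_map != OWNED)
			fprintf(stderr, "half-completed entry!\n");
		entry_map = 0;
		published = 0;
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	publish_entry();
	truncate_entry();
	return 0;
}

Holding the lock across both steps is the simplest fix because it removes the
window entirely, rather than teaching the freeing path to cope with a
half-completed entry, which, as the commit message notes, would mean changing
how __swap_duplicate() and swap_entry_free() handle the different cases.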