path: root/fs/btrfs
Age | Commit message | Author | Files | Lines
2016-02-26Merge branch 'foreign/josef/space-updates' into for-chris-4.6David Sterba3-17/+36
2016-02-26Merge branch 'foreign/zhaolei/reada' into for-chris-4.6David Sterba3-137/+141
2016-02-26Merge branch 'foreign/qu/norecovery-v7' into for-chris-4.6David Sterba3-15/+51
2016-02-26Merge branch 'dev/rename-keys' into for-chris-4.6David Sterba3-12/+64
2016-02-26Merge branch 'dev/gfp-flags' into for-chris-4.6David Sterba10-56/+60
2016-02-26Merge branch 'chandan/prep-subpage-blocksize' into for-chris-4.6David Sterba7-165/+321
# Conflicts:
#	fs/btrfs/file.c
2016-02-18Btrfs: check reserved when deciding to background flushJosef Bacik1-1/+1
We will sometimes start background flushing of the various enospc related things (delayed nodes, delalloc, etc) if we are getting close to reserving all of our available space. We don't want to do this, however, when we are actually using this space, as it causes unneeded thrashing. We currently try to detect that by checking bytes_used >= thresh, but bytes_used is only part of the equation; we need to include bytes_reserved as well, since it represents space that is very likely to become bytes_used in the future. My tracing tool keeps count of the number of times we kick off the async flusher; the following are the counts for the entire run of generic/027:

          No Patch   Patch
  avg:    5385       5009
  median: 5500       4916

We skew lower than the average with the patch and higher than it without, and overall it cuts the flushing by anywhere from 5-10%, which in the case of actual ENOSPC is quite helpful. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
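For illustration, a minimal standalone sketch of the changed check; the structure, field names and the 98% cutoff below are stand-ins, not the real btrfs code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative stand-in for the btrfs space_info accounting fields. */
    struct space_counters {
        uint64_t total_bytes;
        uint64_t bytes_used;
        uint64_t bytes_reserved;
    };

    /*
     * Should we kick the async background flusher?  If the space is genuinely
     * consumed, flushing only causes thrashing, so skip it.  The old check
     * looked at bytes_used alone; the new one also counts bytes_reserved,
     * which is very likely to become bytes_used soon.
     */
    bool need_background_flush(const struct space_counters *s)
    {
        uint64_t thresh = s->total_bytes * 98 / 100;    /* assumed cutoff */

        if (s->bytes_used + s->bytes_reserved >= thresh)
            return false;   /* space is really in use, don't thrash */

        return true;
    }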
2016-02-18Btrfs: add transaction space reservation tracepointsJosef Bacik1-1/+5
There are a few places where we add to trans->bytes_reserved but don't have the corresponding trace point. With these added my tool no longer sees transaction leaks. Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18Btrfs: fix truncate_space_checkJosef Bacik1-1/+10
truncate_space_check is using btrfs_csum_bytes_to_leaves() but forgets to multiply by the nodesize, which is needed to turn the leaf count into an actual byte count. We also need a tracepoint here so that we have the matching reserve for the release that will come later, and a comment to make clear what the intent of truncate_space_check is. Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
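A toy standalone illustration of the unit mix-up being fixed; the helper and all numbers are invented, only the leaves-times-nodesize conversion mirrors the change:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative only: pretend one csum tree leaf covers checksums for
     * 1 MiB of data, so this returns a *leaf count*, not a byte count.
     */
    static uint64_t csum_bytes_to_leaves(uint64_t data_bytes)
    {
        const uint64_t data_per_leaf = 1024 * 1024;

        return (data_bytes + data_per_leaf - 1) / data_per_leaf;
    }

    int main(void)
    {
        uint64_t nodesize = 16384;  /* typical btrfs nodesize */
        uint64_t leaves = csum_bytes_to_leaves(8ULL * 1024 * 1024);

        /* Buggy reservation: a leaf count treated as if it were bytes. */
        printf("buggy reserve: %llu bytes\n", (unsigned long long)leaves);

        /* Fixed reservation: multiply by nodesize to get a real byte count. */
        printf("fixed reserve: %llu bytes\n",
               (unsigned long long)(leaves * nodesize));
        return 0;
    }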
2016-02-18Btrfs: change how we update the global block rsvJosef Bacik1-14/+20
I'm writing a tool to visualize the enospc system in order to help debug enospc bugs. I found weird data and tracked it down to the place where we update the global block rsv: we add all of the remaining free space to the block rsv, emit a trace event, then remove the extra and emit another trace event. This makes my visualization look silly and is unintuitive code as well. Fix it to only add the amount that is missing, or to free only the excess. This is less clean to read but more explicit about what it is doing, and it only emits events for values that make sense. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
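A minimal sketch of the delta-style update, with a toy reservation structure and stand-in trace hooks (the real code manipulates the btrfs global block_rsv):

    #include <stdint.h>
    #include <stdio.h>

    struct rsv_sketch {
        uint64_t reserved;  /* bytes currently held by the reservation */
    };

    /* Stand-ins for the trace events: only the actual delta is reported. */
    static void trace_rsv_add(uint64_t bytes)  { printf("add %llu\n", (unsigned long long)bytes); }
    static void trace_rsv_free(uint64_t bytes) { printf("free %llu\n", (unsigned long long)bytes); }

    /*
     * Bring the reservation to "target" by moving only the missing or excess
     * amount, instead of adding all free space and then removing the surplus.
     */
    static void update_rsv(struct rsv_sketch *rsv, uint64_t target)
    {
        if (rsv->reserved < target) {
            trace_rsv_add(target - rsv->reserved);
            rsv->reserved = target;
        } else if (rsv->reserved > target) {
            trace_rsv_free(rsv->reserved - target);
            rsv->reserved = target;
        }
    }

    int main(void)
    {
        struct rsv_sketch rsv = { .reserved = 100 };

        update_rsv(&rsv, 140);  /* emits "add 40", not "add all" then "free extra" */
        update_rsv(&rsv, 90);   /* emits "free 50" */
        return 0;
    }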
2016-02-18btrfs: reada: ignore creating reada_extent for a non-existent deviceZhao Lei1-9/+8
For a non-existent device, the old code already bypasses adding it to the device's reada queue. To solve the problem of unfinished waiting in raid5/6, commit 5fbc7c59fd22 ("Btrfs: fix unfinished readahead thread for raid5/6 degraded mounting") added an exception for the first stripe: in short, the first stripe is always processed whether the device exists or not. There is a better way to handle this: simply bypass creation of the reada_extent for a non-existent device, which makes the code simpler and effective. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: avoid undone reada extents in btrfs_reada_waitZhao Lei1-1/+8
The reada background work is not designed to finish all jobs completely; it will stop in the following cases:
1: a device reaches its workload limit (MAX_IN_FLIGHT)
2: the total number of reads reaches the maximum limit (10000)
3: no device has more jobs queued, which often happens in the DUP case
If all background works exit while jobs remain, btrfs_reada_wait() will wait indefinitely. The problem rarely happened with the old code, because:
1: every work queued 2x new works, and the large number of works reduced the chance of undone jobs
2: a work would loop 10000 times when it had no jobs, which shortened the window with no running thread
But after we fixed the above behavior, the "undone reada extents" started to happen frequently. Fix: in btrfs_reada_wait(), check that we have at least one worker thread while there are undone jobs. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: limit max works countZhao Lei3-1/+12
Reada recursively creates 2 works for each level of the tree, so for a tree with many levels the number of created works is 2^level_of_tree. We don't actually need that many works in parallel; this patch limits the maximum number of works to BTRFS_MAX_MIRRORS * 2. The per-fs works counter will also be used by btrfs_reada_wait() to check whether there are background workers. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
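A hedged kernel-style sketch of the cap; the counter is shown as a bare atomic_t because the exact per-fs field added by the patch is an assumption here:

    #include <linux/atomic.h>
    #include <linux/types.h>

    /* Cap taken from the commit text; BTRFS_MAX_MIRRORS comes from the
     * btrfs headers (3 at the time of this series). */
    #define READA_WORKS_MAX (BTRFS_MAX_MIRRORS * 2)

    /*
     * Only queue another readahead worker while the per-fs counter is below
     * the cap.  Workers decrement it when they finish, so btrfs_reada_wait()
     * can use the same counter to tell whether any background worker is
     * still running.
     */
    static bool reada_may_queue_work(atomic_t *reada_works_cnt)
    {
        if (atomic_read(reada_works_cnt) >= READA_WORKS_MAX)
            return false;

        atomic_inc(reada_works_cnt);    /* caller then queues the work item */
        return true;
    }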
2016-02-18btrfs: reada: simplify dev->reada_in_flight processingZhao Lei1-18/+10
There is no need to decrease dev->reada_in_flight inside __readahead_hook() and in reada_extent_put(). reada_extent_put() never gets the chance to decrease dev->reada_in_flight on its free path, because the reada_extent holds an additional refcount while it is scheduled to a device. Put the inc and dec operations for dev->reada_in_flight in one place instead, to make the logic simple and safe, and replace the now-useless reada_extent->scheduled_for with a bool flag. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Fix a debug code typoZhao Lei1-8/+3
Remove the duplicated copy of the loop to fix the typo when iterating the zones. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Jump into cleanup in direct way for __readahead_hook()Zhao Lei1-19/+21
The current code sets nritems to 0 so that the for loop does nothing, in order to bypass it, and sets generation's value, which is not necessary. Jumping directly to the cleanup is the better choice. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Use fs_info instead of root in __readahead_hook's argumentZhao Lei3-25/+24
What __readahead_hook() actually needs is fs_info; there is no need to convert fs_info to a root in the caller and convert it back in __readahead_hook(). Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Pass reada_extent into __readahead_hook directlyZhao Lei1-21/+24
reada_start_machine_dev() already has the reada_extent pointer; passing it into __readahead_hook() directly instead of searching the radix tree makes the code run faster. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: move reada_extent_put to place after __readahead_hook()Zhao Lei1-2/+2
We can't release the reada_extent before __readahead_hook() runs, because __readahead_hook() still needs to use it, so it is necessary to hold a refcount to keep it from being freed. Actually this is not a problem after my patch named "Avoid many times of empty loop", which makes the reada_extent in the line above contain at least one reada_extctl and thus hold one additional refcount. But we still need this patch to keep the code's logic clean. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Remove level argument in several functionsZhao Lei1-9/+6
level is not used in several functions, so remove it from their arguments, and remove the related code that obtains its value. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: bypass adding extent when all zone failedZhao Lei1-0/+5
When adding all dev_zones for a reada_extent fails, the extent has no chance of being selected to run and stays in memory forever. Bypass such an extent to avoid this case. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: add all reachable mirrors into reada device listZhao Lei1-11/+9
If some device is not reachable, we should skip it and continue adding the next one, instead of breaking out on the bad device. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-18btrfs: reada: Move is_need_to_readahead condition earlierZhao Lei1-11/+9
Move the is_need_to_readahead condition earlier to avoid a useless loop gathering the related data for the readahead. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-16btrfs: reada: Avoid many times of empty loopZhao Lei1-1/+1
We can see the following loop (10000 times) in the trace log:
[ 75.416137] ZL_DEBUG: reada_start_machine_dev:730: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 75.417413] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
[ 75.418611] ZL_DEBUG: __readahead_hook:129: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 75.419793] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
[ 75.421016] ZL_DEBUG: reada_start_machine_dev:730: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 75.422324] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
[ 75.423661] ZL_DEBUG: __readahead_hook:129: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 75.424882] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
...(10000 times)
[ 124.101672] ZL_DEBUG: reada_start_machine_dev:730: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 124.102850] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
[ 124.104008] ZL_DEBUG: __readahead_hook:129: pid=771 comm=kworker/u2:3 re->ref_cnt ffff88003741e0c0 1 -> 2
[ 124.105121] ZL_DEBUG: reada_extent_put:524: pid=771 comm=kworker/u2:3 re = ffff88003741e0c0, refcnt = 2 -> 1
Reason: if more than one user triggers reada on the same extent, the first task finishes setting up the reada data structures and calls reada_start_machine() to start, while the second task has only taken a ref_count and has not yet completely added the reada_extctl struct. The reada_extent therefore cannot finish all of its jobs and keeps being selected in __reada_start_machine(), 10000 times (the total loop count in __reada_start_machine()). Fix: for a reada_extent without a job we don't need to run it; just return 0 to let the caller break. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
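A standalone sketch of the fix's idea with invented types; only the "no queued job means zero progress, so the caller stops retrying" logic mirrors the change:

    #include <stddef.h>

    struct extctl_job { struct extctl_job *next; };

    /* Stand-in for reada_extent: only the queued-jobs list matters here. */
    struct reada_extent_sketch {
        struct extctl_job *jobs;
    };

    /* Returns how many readaheads were started for this extent. */
    int reada_run_extent(struct reada_extent_sketch *re)
    {
        /* A half-constructed extent (refcount taken, no reada_extctl queued
         * yet) has nothing to do; report zero progress so the caller's
         * retry loop breaks instead of spinning 10000 times. */
        if (re->jobs == NULL)
            return 0;

        /* ...start the actual readahead for the queued jobs... */
        return 1;
    }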
2016-02-16btrfs: reada: Add missed segment checking in reada_find_zoneZhao Lei1-1/+3
When rechecking that the zone is in the tree, we still need to check that the zone includes our logical address. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-16btrfs: reada: reduce additional fs_info->reada_lock in reada_find_zoneZhao Lei1-8/+4
We can avoid an additional lock acquisition and one pair of kref_get/put by combining the two conditions. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-16btrfs: reada: Fix in-segment calculation for readaZhao Lei1-2/+2
reada_zone->end is the end position of the segment: end = start + cache->key.offset - 1; so we need to use "<=" in the condition to judge whether a position lies within the segment. The problem happened rarely, because a logical position rarely points into the last 4k of a block group, but we need the fix to make the code logically correct. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
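A standalone illustration of the inclusive range check; the struct below is a stand-in for reada_zone with the start/end fields described above:

    #include <stdbool.h>
    #include <stdint.h>

    /* A zone covers [start, end] inclusively, with end = start + len - 1. */
    struct zone_range {
        uint64_t start;
        uint64_t end;
    };

    bool logical_in_zone(const struct zone_range *z, uint64_t logical)
    {
        /* Using "<" here would wrongly reject the very last byte of the
         * zone, e.g. a logical address in the final 4k of a block group. */
        return logical >= z->start && logical <= z->end;
    }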
2016-02-12btrfs: Introduce new mount option alias for nologreplayQu Wenruo1-1/+3
Introduce the new mount option alias "norecovery" for nologreplay, to keep the "norecovery" behavior the same as in other filesystems. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
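For illustration, how such an alias typically looks in the mount option parser; <linux/parser.h> match tables are the standard kernel mechanism, but the token names below are assumptions based on the commit text:

    #include <linux/parser.h>

    enum {
        Opt_nologreplay,    /* disable tree-log replay */
        Opt_norecovery,     /* alias, matches other filesystems */
        Opt_err,
    };

    static const match_table_t tokens = {
        { Opt_nologreplay, "nologreplay" },
        { Opt_norecovery,  "norecovery"  },
        { Opt_err,         NULL          },
    };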
2016-02-12btrfs: Introduce new mount option to disable tree log replayQu Wenruo3-7/+32
Introduce a new mount option "nologreplay" to co-operate with the "ro" mount option to get a really read-only mount, like "norecovery" in ext* and xfs. Since parse_options() now needs to check the new flags at remount time, add a new parameter to parse_options(). Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Tested-by: Austin S. Hemmelgarn <ahferroin7@gmail.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-12btrfs: Introduce new mount option usebackuproot to replace recoveryQu Wenruo3-8/+17
Current "recovery" mount option will only try to use backup root. However the word "recovery" is too generic and may be confusing for some users. Here introduce a new and more specific mount option, "usebackuproot" to replace "recovery" mount option. "Recovery" will be kept for compatibility reason, but will be deprecated. Also, since "usebackuproot" will only affect mount behavior and after open_ctree() it has nothing to do with the filesystem, so clear the flag after mount succeeded. This provides the basis for later unified "norecovery" mount option. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> [ dropped usebackuproot from show_mount, added note about 'recovery' to docs ] Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: teach print_leaf about temporary item subtypesDavid Sterba1-0/+11
Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: teach print_leaf about permanent item subtypesDavid Sterba1-2/+10
Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: switch dev stats item to the permanent item keyDavid Sterba2-5/+8
Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: introduce key type for persistent permanent itemsDavid Sterba1-3/+17
The number of distinct key types is not so big that we can afford to waste one for each new thing we want to store in the tree. Similar to the temporary items, we'll introduce a new name for an existing key value and use the objectid for further extension. The victim is BTRFS_DEV_STATS_KEY (249). The device stats are an example of a permanent item. Signed-off-by: David Sterba <dsterba@suse.com>
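A sketch of the resulting key layout; the key values (248/249) are the ones named in these two commits, while the new macro names and the subtype objectid below are assumptions about how the reuse is expressed:

    /*
     * One key type, many item kinds: the objectid part of the key selects
     * the concrete item stored under the shared type value.
     */
    #define BTRFS_TEMPORARY_ITEM_KEY   248  /* formerly BTRFS_BALANCE_ITEM_KEY */
    #define BTRFS_PERSISTENT_ITEM_KEY  249  /* formerly BTRFS_DEV_STATS_KEY */

    /* Assumed subtype id: device statistics stored under the persistent key. */
    #define BTRFS_DEV_STATS_OBJECTID   0ULL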
2016-02-11btrfs: switch balance item to the temporary item keyDavid Sterba1-3/+3
No visible change. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: introduce key type for persistent temporary itemsDavid Sterba1-0/+16
The number of distinct key types is not so big that we can afford to waste one for each new thing we want to store in the tree. We'll introduce a new name for an existing key value and use the objectid for further extension. The victim is BTRFS_BALANCE_ITEM_KEY (248). The balance status item is a good example of a temporary item: it exists from the beginning of the balance and keeps the status until the balance finishes. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: switch to kcalloc in btrfs_cmp_data_prepareDavid Sterba1-2/+2
kcalloc() is functionally equivalent and additionally checks the multiplication for overflow. Signed-off-by: David Sterba <dsterba@suse.com>
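For illustration, the shape of the change; kcalloc() and kmalloc() are the real slab APIs, the surrounding function is a placeholder:

    #include <linux/slab.h>

    static struct page **alloc_page_array(unsigned long num_pages)
    {
        /* Before: kmalloc(num_pages * sizeof(struct page *), GFP_KERNEL),
         * where the multiplication can overflow unnoticed. */

        /* After: kcalloc() checks num_pages * size for overflow and also
         * zeroes the array. */
        return kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL);
    }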
2016-02-11btrfs: extent same: use GFP_KERNEL for page array allocationsDavid Sterba1-2/+2
We can safely use GFP_KERNEL in the functions called from the ioctl handlers. Here we can allocate up to 32k, so putting less pressure on the allocator can help. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: device add and remove: use GFP_KERNELDavid Sterba1-4/+5
We can safely use GFP_KERNEL in the functions called from the ioctl handlers. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: readdir: use GFP_KERNELDavid Sterba1-1/+1
Readdir is initiated from userspace and is not on the critical writeback path, so we don't need to use GFP_NOFS for allocations. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: fallocate: use GFP_KERNELDavid Sterba1-3/+3
Fallocate is initiated from userspace and is not on the critical writeback path, so we don't need to use GFP_NOFS for allocations. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: let callers of btrfs_alloc_root pass gfp flagsDavid Sterba1-10/+11
We don't need to use GFP_NOFS in all contexts, eg. during mount or for the dummy root tree, but we might for the log tree creation. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: scrub: use GFP_KERNEL on the submission pathDavid Sterba2-12/+14
Scrub is not on the critical writeback path, so we don't need to use GFP_NOFS for all allocations. The failures are handled and the stats are passed back to userspace. Let's use GFP_KERNEL on the paths where everything is ok, ie. setting up the global structures and the IO submission paths. Functions that do the repair and fixups still use GFP_NOFS as we might want to skip any other filesystem activity if we encounter an error. This could turn out to be unnecessary, but requires more review compared to the easy cases in this patch. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: reada: use GFP_KERNEL everywhereDavid Sterba1-5/+5
The readahead framework is not on the critical writeback path, so we don't need to use GFP_NOFS for allocations. All error paths are handled and the readahead failures are not fatal. The actual users (scrub, dev-replace) will trigger reads if the blocks are not found in cache. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-11btrfs: send: use GFP_KERNEL everywhereDavid Sterba2-19/+19
The send operation is not on the critical writeback path, so we don't need to use GFP_NOFS for allocations. All error paths are handled and the whole operation is restartable. Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-01Btrfs: btrfs_ioctl_clone: Truncate complete page after performing clone operationChandan Rajendra1-2/+3
In the subpagesize-blocksize scenario, the "destination offset" argument passed to btrfs_ioctl_clone() can be aligned to the sectorsize but may not necessarily be aligned to the machine's page size. In such cases, truncate_inode_pages_range() ends up zeroing out the partial page, and future read operations will return incorrect data. Hence this commit explicitly rounds down the "destination offset" to the machine's page size. Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: David Sterba <dsterba@suse.com>
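A standalone illustration of the alignment issue; the sizes are made up, only the round-down-to-page-size step mirrors the fix:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SZ    4096u    /* machine page size (illustrative) */
    #define SECTOR_SZ  2048u    /* filesystem sector size < page size */

    static uint64_t round_down_to(uint64_t x, uint64_t align)
    {
        return x - (x % align);
    }

    int main(void)
    {
        /* Sector-aligned but not page-aligned destination offset. */
        uint64_t destoff = 5 * SECTOR_SZ;   /* 10 KiB */

        /* The page cache must be dropped starting at a page boundary,
         * otherwise the partial page gets zeroed and later reads see
         * incorrect data. */
        printf("destoff %llu -> truncate from %llu\n",
               (unsigned long long)destoff,
               (unsigned long long)round_down_to(destoff, PAGE_SZ));
        return 0;
    }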
2016-02-01Btrfs: Clean pte corresponding to page straddling i_sizeChandan Rajendra2-3/+11
When extending a file either by "truncate up" or by writing beyond i_size, the page which had i_size needs to be marked "read only" so that future writes to the page via the mmap interface cause btrfs_page_mkwrite() to be invoked. If not, a write performed after extending the file via the mmap interface will find the page to be writeable and continue writing to it without invoking btrfs_page_mkwrite(), i.e. we end up writing to a file without reserving disk space. Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-01Btrfs: Fix block size returned to user spaceChandan Rajendra1-1/+0
btrfs_getattr() returns PAGE_CACHE_SIZE as the block size. Since generic_fillattr() already does the right thing (by obtaining block size from inode->i_blkbits), just remove the statement from btrfs_getattr. Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-01Btrfs: Limit inline extents to root->sectorsizeChandan Rajendra1-1/+1
cow_file_range_inline() limits the size of an inline extent to PAGE_CACHE_SIZE. This breaks in subpagesize-blocksize scenarios. Fix this by comparing against root->sectorsize. Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-02-01Btrfs: btrfs_submit_direct_hook: Handle map_length < bio vector lengthChandan Rajendra1-8/+17
In the subpagesize-blocksize scenario, map_length can be less than the length of a bio vector. Such a condition may cause btrfs_submit_direct_hook() to submit a zero-length bio. Fix this by comparing map_length against the block size rather than against bv_len. Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: David Sterba <dsterba@suse.com>