path: root/fs/btrfs
2015-10-24  Merge branch 'for-linus-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs  (Linus Torvalds; 2 files changed, +5/-2)

Pull btrfs fixes from Chris Mason:
"I have two more small fixes this week: Qu's fix avoids unneeded COW during fallocate, and Christian found a memory leak in the error handling of an earlier fix"

* 'for-linus-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  btrfs: fix possible leak in btrfs_ioctl_balance()
  btrfs: Avoid truncate tailing page if fallocate range doesn't exceed inode size
2015-10-22  Merge branch 'allocator-fixes' into for-linus-4.4  (Chris Mason; 14 files changed, +459/-123)

Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: don't do extra bitmap search in one bit case  (Josef Bacik; 1 file changed, +15/-13)

When we make ctl->unit allocations from a bitmap there is no point in searching for the next 0 in the bitmap. If we've found a bit we're done and can just exit the loop. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
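A minimal userspace sketch of the shortcut (hypothetical helper, not the kernel code), where a set bit marks a free ctl->unit-sized chunk:

    #include <stddef.h>

    /* Returns the start bit of a free run of @want units, or @nbits if none.
     * The one-bit fast path: the first set bit answers the search, so
     * measuring the length of the run would be wasted work. */
    static size_t find_free_run(const unsigned char *map, size_t nbits,
                                size_t want)
    {
        for (size_t i = 0; i < nbits; i++) {
            if (!(map[i / 8] & (1u << (i % 8))))
                continue;                /* bit clear: unit in use */
            if (want == 1)
                return i;                /* one-bit case: done, exit early */
            size_t run = 1;              /* otherwise size the run */
            while (i + run < nbits &&
                   (map[(i + run) / 8] & (1u << ((i + run) % 8))))
                run++;
            if (run >= want)
                return i;
            i += run;                    /* skip past the too-small run */
        }
        return nbits;
    }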
2015-10-22  Btrfs: keep track of largest extent in bitmaps  (Josef Bacik; 2 files changed, +39/-1)

We can waste a lot of time searching through bitmaps when we are heavily fragmented, trying to find large contiguous areas that don't exist in the bitmap. So keep track of the max extent size when we do a full search of a bitmap, so that next time around we can skip the expensive searching if our max size is less than what we are looking for. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
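Building on the sketch above, a hedged illustration (struct and names invented) of caching the result of a failed full scan:

    #include <stddef.h>

    struct bitmap_info {
        size_t max_extent;   /* largest free run seen by the last full scan;
                                0 means unknown; must be reset when bits are
                                freed back into the bitmap */
    };

    static size_t search_bitmap(struct bitmap_info *info,
                                const unsigned char *map, size_t nbits,
                                size_t want)
    {
        /* Cheap reject: a previous full scan proved nothing this big exists. */
        if (info->max_extent && want > info->max_extent)
            return nbits;

        size_t run = 0, max_run = 0;
        for (size_t i = 0; i < nbits; i++) {
            if (map[i / 8] & (1u << (i % 8))) {
                if (++run >= want)
                    return i + 1 - run;          /* found: start of the run */
            } else {
                if (run > max_run)
                    max_run = run;
                run = 0;
            }
        }
        if (run > max_run)
            max_run = run;
        info->max_extent = max_run;    /* remember the failed scan's result */
        return nbits;
    }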
2015-10-22  Btrfs: don't keep trying to build clusters if we are fragmented  (Josef Bacik; 3 files changed, +82/-24)

If we are extremely fragmented then we won't be able to create a free_cluster. So if this happens, set last_ptr->fragmented so that all future allocations will give up trying to create a cluster. When we unpin extents we will unset ->fragmented if we free up a sufficient amount of space in a block group. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: cut down on loops through the allocator  (Josef Bacik; 1 file changed, +36/-3)

We try really, really hard to make allocations, but sometimes it is just not going to happen, especially when free space is extremely fragmented. So add a few shortcuts through the looping states. For example, if we couldn't allocate a chunk, just go straight to the NO_EMPTY_SIZE loop. If there are no uncached block groups and we've done a full search, go straight to the ALLOC_CHUNK stage. And finally, if we already have empty_size and empty_cluster set to 0, go ahead and return -ENOSPC. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: don't continue setting up space cache when enospc  (Josef Bacik; 2 files changed, +20/-0)

If we hit ENOSPC when setting up a space cache, don't bother setting up any of the other space caches in this transaction; it'll just induce unnecessary latency. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: keep track of max_extent_size per space_info  (Josef Bacik; 2 files changed, +34/-1)

When we are heavily fragmented we can induce a lot of latency trying to make an allocation happen that is simply not going to happen. Thankfully we keep track of our max_extent_size when going through the allocator, so if we get to the point of exiting find_free_extent() with ENOSPC, set space_info->max_extent_size so we can keep future allocations from having to pay this cost. We reset max_extent_size whenever we release pinned bytes back into this space_info, so we can redo all the work. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
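A hedged sketch of the gate described above (all names invented; full_search() stands in for the real allocator):

    #include <errno.h>

    struct space_info_sketch {
        unsigned long long max_extent_size;   /* 0 = unknown */
    };

    /* Stand-in for the real search: returns the largest free extent found. */
    extern unsigned long long full_search(unsigned long long want);

    static int find_free_extent_sketch(struct space_info_sketch *si,
                                       unsigned long long want)
    {
        if (si->max_extent_size && want > si->max_extent_size)
            return -ENOSPC;                   /* fail fast, skip the search */

        unsigned long long best = full_search(want);
        if (best >= want)
            return 0;
        si->max_extent_size = best;           /* remember what the failed
                                                 search actually saw */
        return -ENOSPC;
    }

    static void unpin_bytes_sketch(struct space_info_sketch *si)
    {
        si->max_extent_size = 0;    /* space came back: redo the work */
    }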
2015-10-22  Btrfs: don't loop in allocator for space cache  (Josef Bacik; 1 file changed, +1/-1)

The space cache needs to have contiguous allocations, and the allocator tries to make allocations by reducing the amount of bytes requested and re-searching. But this just makes us waste time when we are very fragmented, so if we can't find our space just exit, don't bother trying to search again. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: add a flags field to btrfs_transaction  (Josef Bacik; 4 files changed, +16/-18)

I want to set some per-transaction flags, so instead of adding yet another int, let's just convert the current two int indicators to flags and add a flags field for future use. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
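A hedged sketch of the shape of the change (flag names invented; the real code keeps its own identifiers):

    /* Before: one int per indicator.  After: one bit per indicator in a
     * single flags word, with room for future per-transaction state. */
    enum {
        TRANS_SKETCH_DIRTY_BG_RUN = 1U << 0,
        TRANS_SKETCH_CACHE_ENOSPC = 1U << 1,   /* room for later patches */
    };

    struct transaction_sketch {
        unsigned int flags;    /* replaces two standalone int fields */
    };

    static inline void trans_set(struct transaction_sketch *t, unsigned int f)
    {
        t->flags |= f;
    }

    static inline int trans_test(const struct transaction_sketch *t,
                                 unsigned int f)
    {
        return (t->flags & f) != 0;
    }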
2015-10-22  Btrfs: fix prealloc under heavy fragmentation conditions  (Josef Bacik; 1 file changed, +9/-0)

If we are heavily fragmented we will continually try to prealloc the largest extent size we can every time we call btrfs_reserve_extent. This is very expensive, burning lots of CPU cycles and loops through the allocator. So instead, notice when we get a smaller chunk from the allocator than what we specified, and use this as the new maximum size we try to allocate. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
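A hedged sketch of the adaptive loop (helper name invented):

    /* Stand-in allocator: returns how many bytes it could actually reserve,
     * which may be less than asked for, or 0 on ENOSPC. */
    extern unsigned long long reserve_extent(unsigned long long want);

    static void prealloc_sketch(unsigned long long len,
                                unsigned long long max_alloc)
    {
        unsigned long long cur = len < max_alloc ? len : max_alloc;

        while (len > 0) {
            unsigned long long got = reserve_extent(cur);

            if (got == 0)
                break;          /* truly out of space */
            len -= got;
            if (got < cur)
                cur = got;      /* new ceiling: don't re-ask for more */
            if (cur > len)
                cur = len;
        }
    }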
2015-10-22  Btrfs: add fragment=* debug mount option  (Josef Bacik; 5 files changed, +150/-7)

In tracking down these weird bitmap problems it was helpful to artificially create an extremely fragmented file system. These mount options let us fragment data, metadata, or both. With these options I could reproduce all sorts of weird latencies and hangs that occur under extreme fragmentation, and get them fixed. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
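Usage examples (assuming a kernel built with the btrfs debugging support these debug-only options are gated behind, e.g. CONFIG_BTRFS_DEBUG):

    mount -o fragment=data /dev/sdb /mnt        # fragment data allocations
    mount -o fragment=metadata /dev/sdb /mnt    # fragment metadata
    mount -o fragment=all /dev/sdb /mnt         # fragment both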
2015-10-22  Btrfs: fix qgroup sanity tests  (Josef Bacik; 1 file changed, +6/-0)

With my changes to allow us to find old roots when resolving indirect refs, I introduced a regression to the sanity tests. Since we don't really care to go down into the fs roots, we just need to have the old behavior of returning ENOENT for dummy roots for the sanity tests. In the future, if we want to get fancy, we can populate the test fs trees with the references as well. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: change how we wait for pending ordered extents  (Josef Bacik; 5 files changed, +60/-64)

We have a mechanism to make sure we don't lose updates for ordered extents that were logged in the transaction that is currently running. We add the ordered extent to a transaction list and then the transaction waits on all the ordered extents in that list. However, on substantially large file systems this list can be extremely large, and can give us soft lockups, since the ordered extents don't remove themselves from the list when they complete. To fix this, we simply add a counter to the transaction that is incremented any time we have a logged extent that needs to be completed in the current transaction. Then when the ordered extent finally completes, it decrements the per-transaction counter and wakes up the transaction if we are the last ones. This will eliminate the soft lockup. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
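A hedged userspace sketch of the counting scheme (names invented; the kernel wakes a wait queue rather than calling a helper):

    #include <stdatomic.h>

    struct trans_sketch {
        atomic_int pending_ordered;    /* logged extents not yet complete */
    };

    extern void wake_committer(struct trans_sketch *t);

    static void log_ordered_extent(struct trans_sketch *t)
    {
        atomic_fetch_add(&t->pending_ordered, 1);
    }

    static void ordered_extent_complete(struct trans_sketch *t)
    {
        /* Last one out wakes the committing transaction: no list to walk. */
        if (atomic_fetch_sub(&t->pending_ordered, 1) == 1)
            wake_committer(t);
    }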
2015-10-22  btrfs: qgroup: Check if qgroup reserved space leaked  (Qu Wenruo; 3 files changed, +34/-0)

Add a check at btrfs_destroy_inode() time to detect qgroup reserved space leaks.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Avoid calling btrfs_free_reserved_data_space in clear_bit_hook  (Qu Wenruo; 3 files changed, +22/-12)

In clear_bit_hook, qgroup reserved data is already handled quite well, being released by either finish_ordered_io or invalidatepage. So calling btrfs_qgroup_free_data() here is completely meaningless, and since btrfs_qgroup_free_data() will lock the io_tree, it can't be called with the io_tree lock held. This patch adds a new function, btrfs_free_reserved_data_space_noquota(), for clear_bit_hook() to silence the lockdep warning.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: fallocate: Add support for accurate qgroup reserve  (Qu Wenruo; 1 file changed, +117/-44)

Now fallocate will do an accurate qgroup reserved space check, unlike the old method, which would always reserve the whole length of the range. With this patch, fallocate will:

1) Iterate the desired range and mark it in the data rsv map.
   Only ranges which are going to be allocated will be recorded in the data rsv map and have their space reserved. Already allocated ranges (normal/prealloc extents) are skipped. Also, record the marked ranges in a new list for later use.

2) If 1) succeeded, do the real file extent allocation.
   At file extent allocation time, the corresponding range is removed from the data rsv map.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Add new trace point for qgroup data reserve  (Qu Wenruo; 2 files changed, +17/-2)

Now each qgroup data reserve will have its own ftrace event for better debugging.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: Add handler for invalidate page  (Qu Wenruo; 1 file changed, +24/-0)

For btrfs_invalidatepage() and its variant evict_inode_truncate_pages(), there can be pages that never reach disk. In that case, their reserved space won't be released or freed by finish_ordered_io() or the delayed_ref handler. So we must free their qgroup reserved space, or we will be leaking reserved space again.

So this patch will call btrfs_qgroup_free_data() for invalidatepage() and its variant evict_inode_truncate_pages(). And due to the nature of the new btrfs_qgroup_reserve/free_data(), reserved space will only be reserved or freed once, so for pages which are already flushed to disk, their reserved space will be released and freed by the delayed_ref handler. Double free won't be a problem.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Add handler for NOCOW and inline  (Qu Wenruo; 2 files changed, +21/-1)

For the NOCOW and inline cases, no delayed_ref is created for them, so we should free their reserved data space at the proper time (finish_ordered_io() for NOCOW and cow_file_range_inline() for inline).

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Cleanup old inaccurate facilities  (Qu Wenruo; 9 files changed, +60/-156)

Clean up the old facilities which use the old btrfs_qgroup_reserve() function call, replace them with the newer version, and remove the "__" prefix from them. Also, make the btrfs_qgroup_reserve/free() functions private, as they are now only used inside the qgroup code. Now the whole btrfs qgroup is switched to use the new reserve facilities.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Switch to new delalloc space reserve and release  (Qu Wenruo; 4 files changed, +38/-25)

Use the new __btrfs_delalloc_reserve_space() and __btrfs_delalloc_release_space() to reserve and release space for delalloc.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Add new version of btrfs_delalloc_reserve/release_space  (Qu Wenruo; 2 files changed, +61/-0)

Add new versions of the btrfs_delalloc_reserve_space() and btrfs_delalloc_release_space() functions, which support accurate qgroup reserve.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Switch to new check_data_free_space and free_reserved_data_space  (Qu Wenruo; 3 files changed, +27/-19)

Use the new reserve/free for buffered write and the inode cache. For the buffered write case, a nodatacow write won't increase the quota account, so unlike the old behavior, which reserved space before checking for nocow, we now check nocow first and then only reserve data space if we can't do a nocow write.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Add new version of btrfs_check_data_free_space and btrfs_free_reserved_data_space  (Qu Wenruo; 2 files changed, +79/-9)

Add new functions __btrfs_check_data_free_space() and __btrfs_free_reserved_data_space() to work with the new accurate qgroup reserved space framework. The new functions will replace the old btrfs_check_data_free_space() and btrfs_free_reserved_data_space() respectively, but until all the changes are done, let's just use the new names.

Also, export the internal-use function btrfs_alloc_data_chunk_ondemand(): now that qgroup reserve requires precise byte counts, some operations (like fallocate) can't get the accurate number in advance. But the data space_info check and data chunk allocation don't need to be that accurate, and can be called at the beginning. So export it for those operations.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Use new metadata reservation  (Qu Wenruo; 3 files changed, +13/-36)

As we have the new metadata reservation functions, use them to replace the old btrfs_qgroup_reserve() call for metadata.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce new functions to reserve/free metadata  (Qu Wenruo; 4 files changed, +48/-0)

Introduce new functions btrfs_qgroup_reserve/free_meta() to reserve/free metadata reserved space.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: delayed_ref: release and free qgroup reserved at proper timing  (Qu Wenruo; 4 files changed, +34/-4)

Qgroup reserved space needs to be released from the inode dirty map and freed at different times:

1) Release when the metadata is written into the tree
   After the corresponding metadata is written into the tree, any newer write will be COWed (the NOCOW case is not covered yet). So we must release its range from the inode dirty range map, or we will forget to reserve the needed range, causing the accounting to exceed the limit.

2) Free reserved bytes when the delayed ref is run
   When delayed refs are run, qgroup accounting will follow soon and turn the reserved bytes into rfer/excl numbers. As run_delayed_refs and qgroup accounting are all done at commit_transaction() time, we are safe to free the reserved space at run_delayed_refs() time.

With this release/free timing, we should be able to resolve the long-standing qgroup reserved space leak problem.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: delayed_ref: Add new function to record reserved space into delayed ref  (Qu Wenruo; 2 files changed, +43/-0)

Add a new function, btrfs_add_delayed_qgroup_reserve(), to record how much space is reserved for a given extent. As btrfs only does qgroup accounting at run_delayed_refs() time, a newly allocated extent should keep its reserved space until then. So add the needed function, with related members, to do it.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce functions to release/free qgroup reserved data space  (Qu Wenruo; 2 files changed, +62/-0)

Introduce functions btrfs_qgroup_release/free_data() to release/free a reserved data range.

Release means just removing the data range from the io_tree, without freeing the reserved space. This is for the normal buffered write case: when data is written to disk and its metadata is added into the tree, its reserved space should still be kept until commit_trans(). So in that case we only release the dirty range, but keep the reserved space recorded elsewhere until commit_trans().

Free means not only removing the data range, but also freeing the reserved space. This is used for cleanup and page invalidation.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce btrfs_qgroup_reserve_data function  (Qu Wenruo; 3 files changed, +52/-0)

Introduce a new function, btrfs_qgroup_reserve_data(), which uses the io_tree to do accurate qgroup reservation and avoid reserved space leaks.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce new function clear_record_extent_bits()  (Qu Wenruo; 2 files changed, +42/-11)

Introduce the new function clear_record_extent_bits(), which will clear bits for a given range and record the details about which ranges were cleared and how many bytes in total it changed. This provides the basis for the later qgroup reserve code.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce new function set_record_extent_bits  (Qu Wenruo; 2 files changed, +56/-18)

Introduce the new function set_record_extent_bits(), which will not only set the given bits, but also record how many bytes were changed, along with detailed range info. This is quite important for the later qgroup reserve framework. The number of bytes will be used to do the qgroup reserve, and the detailed range info will be used for cleanup in the EDQUOT case.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce needed structure for recording set/clear bits  (Qu Wenruo; 1 file changed, +12/-0)

Add a new structure, extent_changeset, to record how many bytes are changed in one set/clear_extent_bits() operation, with detailed changed-range info. This provides the needed facilities for the later qgroup reserve framework.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
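A hedged userspace approximation of the recording structure (field names approximate; the kernel pairs a byte counter with a list of affected ranges):

    struct extent_changeset_sketch {
        unsigned long long bytes_changed;   /* total bytes set or cleared  */
        struct changed_range {
            unsigned long long start, len;  /* one range actually changed  */
            struct changed_range *next;
        } *ranges;                          /* feeds qgroup reserve/cleanup */
    };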
2015-10-22  Merge branch 'integration-4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/fdmanana/linux into for-linus-4.4  (Chris Mason; 4 files changed, +407/-88)
2015-10-22  Merge branch 'cleanups/for-4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux into for-linus-4.4  (Chris Mason; 16 files changed, +220/-211)
2015-10-22  btrfs: fix possible leak in btrfs_ioctl_balance()  (Christian Engelmayer; 1 file changed, +4/-1)

Commit 8eb934591f8b ("btrfs: check unsupported filters in balance arguments") adds a jump to exit label out_bargs in case the argument check fails. At this point, in addition to the bargs memory, the memory for struct btrfs_balance_control has already been allocated. Ownership of bctl is passed to btrfs_balance() in the good case; thus the memory is not freed due to the introduced jump. Make sure that the memory gets freed in any case as necessary. Detected by Coverity CID 1328378.

Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-21  btrfs: reada: Fix returned errno code  (Luis de Bethencourt; 1 file changed, +5/-3)

reada is using -1 instead of the -ENOMEM defined macro to indicate that a buffer allocation failed. Since the error number is propagated, the caller will get a -EPERM, which is the wrong error condition. Also update the caller to return the exact value from reada_add_block.

Smatch tool warning:
reada_add_block() warn: returning -1 instead of -ENOMEM is sloppy

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: check-integrity: Fix returned errno codes  (Luis de Bethencourt; 1 file changed, +2/-2)

check-integrity is using -1 instead of the -ENOMEM defined macro to specify that a buffer allocation failed. Since the error number is propagated, the caller will get a -EPERM which is the wrong error condition. Also, the smatch tool complains with the following warnings:
btrfsic_process_superblock() warn: returning -1 instead of -ENOMEM is sloppy
btrfsic_read_block() warn: returning -1 instead of -ENOMEM is sloppy

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: compress: put variables defined per compress type in struct to make cache friendly  (Byongho Lee; 1 file changed, +48/-46)

The variables below are defined per compress type:
- struct list_head comp_idle_workspace[BTRFS_COMPRESS_TYPES]
- spinlock_t comp_workspace_lock[BTRFS_COMPRESS_TYPES]
- int comp_num_workspace[BTRFS_COMPRESS_TYPES]
- atomic_t comp_alloc_workspace[BTRFS_COMPRESS_TYPES]
- wait_queue_head_t comp_workspace_wait[BTRFS_COMPRESS_TYPES]

While accessing these variables for one compress type, the adjacent addresses hold the other compress types' entries rather than the rest of this type's state. So this patch puts the variables in a per-type struct to make them cache friendly.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Byongho Lee <bhlee.kernel@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
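A hedged sketch of the layout change (struct name invented; kernel types assumed, so this is not standalone userspace code):

    /* One struct per compress type keeps all five fields adjacent in
     * memory, instead of five parallel arrays indexed by type. */
    struct comp_workspace_sketch {
        struct list_head  idle_ws;
        spinlock_t        ws_lock;
        int               num_ws;
        atomic_t          alloc_ws;
        wait_queue_head_t ws_wait;
    };

    static struct comp_workspace_sketch comp_ws[BTRFS_COMPRESS_TYPES];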
2015-10-21  btrfs: cleanup iterating over prop_handlers array  (Byongho Lee; 1 file changed, +6/-7)

This patch eliminates the last item of the prop_handlers array, which was used to detect the end of the array, and instead uses the ARRAY_SIZE macro. Though this is a very tiny optimization, using the ARRAY_SIZE macro is good practice for iterating over an array.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Byongho Lee <bhlee.kernel@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
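A minimal sketch of the pattern (handler list hypothetical):

    #include <stddef.h>
    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const struct prop_handler_sketch {
        const char *name;
    } handlers[] = {
        { "btrfs.compression" },
        { "btrfs.label" },
        /* no NULL sentinel entry needed at the end */
    };

    static void iterate_handlers(void)
    {
        for (size_t i = 0; i < ARRAY_SIZE(handlers); i++)
            printf("%s\n", handlers[i].name);  /* bound known at compile time */
    }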
2015-10-21  btrfs: fix a comment typo  (Geliang Tang; 1 file changed, +1/-1)

Just fix a typo in the code comment.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: declare rsv_count as unsigned int instead of int  (Alexandru Moise; 1 file changed, +1/-1)

rsv_count ultimately gets passed to start_transaction(), which now takes an unsigned int as its num_items parameter. The value of rsv_count should always be positive, so declare it as unsigned.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Alexandru Moise <00moses.alexander00@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: change num_items type from u64 to unsigned int  (Alexandru Moise; 2 files changed, +8/-6)

The value of num_items that start_transaction() ultimately takes is always a small one, so a 64-bit integer is overkill. Also change num_items for btrfs_start_transaction() and btrfs_start_transaction_lflush() as well.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Alexandru Moise <00moses.alexander00@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: cleanup btrfs_balance profile validity checks  (Alexandru Moise; 1 file changed, +12/-9)

Improve readability by generalizing the profile validity checks.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Alexandru Moise <00moses.alexander00@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs/file.c: remove an unused variable first_index  (Shan Hai; 1 file changed, +0/-3)

Commit b37392ea86761 ("Btrfs: cleanup unnecessary parameter and variant of prepare_pages()") made it redundant.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Shan Hai <haishan.bai@hotmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: use btrfs_raid_array in btrfs_reduce_alloc_profile  (Zhao Lei; 1 file changed, +22/-26)

btrfs_raid_array[] holds the attributes of all raid types. Using btrfs_raid_array[].devs_min is the best way to handle the request in btrfs_reduce_alloc_profile(), instead of using a complex condition for each raid type.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: use btrfs_raid_array for btrfs_get_num_tolerated_disk_barrier_failures()  (Zhao Lei; 3 files changed, +30/-13)

btrfs_raid_array[] is used to define all raid attributes; use it to get tolerated_failures in btrfs_get_num_tolerated_disk_barrier_failures(), instead of a complex condition in the function. This makes the code simpler and automatically supports any raid types possibly added in the future.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2015-10-21  btrfs: Move btrfs_raid_array to public  (Zhao Lei; 2 files changed, +73/-59)

This array is used to record the attributes of each raid type; make it public, and many functions will benefit from it. For example, in num_tolerated_disk_barrier_failures() we can avoid complex conditions and get a raid attribute simply by indexing the array. It also makes the code logic simpler, reduces duplicated code, and increases maintainability.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
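A hedged sketch of the table-driven lookup (type names and values illustrative; the kernel's raid attribute struct carries more fields):

    enum raid_type_sketch {
        R_RAID0, R_RAID1, R_RAID10, R_RAID5, R_RAID6, R_SINGLE, R_NR_TYPES
    };

    struct raid_attr_sketch {
        int devs_min;              /* minimum devices required */
        int tolerated_failures;    /* devices that may fail    */
        int ncopies;               /* copies of the data       */
    };

    static const struct raid_attr_sketch raid_array[R_NR_TYPES] = {
        [R_RAID0]  = { 2, 0, 1 },
        [R_RAID1]  = { 2, 1, 2 },
        [R_RAID10] = { 4, 1, 2 },
        [R_RAID5]  = { 2, 1, 1 },
        [R_RAID6]  = { 3, 2, 1 },
        [R_SINGLE] = { 1, 0, 1 },
    };

    static int tolerated_failures_sketch(enum raid_type_sketch t)
    {
        return raid_array[t].tolerated_failures;   /* no if/else ladder */
    }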
2015-10-21  btrfs: use a single if() statement for one outcome in get_block_rsv()  (Alexandru Moise; 1 file changed, +3/-7)

Rather than have three separate if() statements for the same outcome, we should just OR them together in the same if() statement.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Alexandru Moise <00moses.alexander00@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
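The shape of the change, with the real conditions abbreviated to a/b/c (everything here is a placeholder, not the actual get_block_rsv() code):

    struct rsv;
    extern struct rsv *global_rsv;

    static struct rsv *get_rsv_sketch(int a, int b, int c,
                                      struct rsv *fallback)
    {
        /* before: three separate if() statements
         *   if (a) return global_rsv;
         *   if (b) return global_rsv;
         *   if (c) return global_rsv;
         * after: one OR'd condition */
        if (a || b || c)
            return global_rsv;
        return fallback;
    }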