path: root/fs
Age  Commit message  Author  Files/Lines
2015-10-23  nfsd: ensure that seqid morphing operations are atomic wrt copies  Jeff Layton  3 files, -39/+31
Bruce points out that the increment of the seqid in stateids is not serialized in any way, so it's possible for racing calls to bump it twice and end up sending the same stateid. While we don't have any reports of this problem, it _is_ theoretically possible, and could lead to spurious state recovery by the client. In the current code, update_stateid is always followed by a memcpy of that stateid, so we can combine the two operations. For better atomicity, we add a spinlock to the nfs4_stid and hold it while bumping the seqid and copying the stateid. Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
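A minimal user-space sketch of the combined operation (a pthread spinlock stands in for the new per-nfs4_stid lock; the struct layout is illustrative, not the nfsd code):

    #include <pthread.h>
    #include <string.h>

    struct stateid {
        unsigned int  si_generation;            /* the seqid */
        unsigned char si_opaque[12];
    };

    struct stid {
        pthread_spinlock_t sc_lock;             /* stand-in for the new per-stid spinlock */
        struct stateid     sc_stateid;
    };

    /* Bump the seqid and copy the stateid out as one atomic step, so two
     * racing callers can never hand the same seqid to their replies. */
    static void bump_and_copy_stateid(struct stid *stid, struct stateid *dst)
    {
        pthread_spin_lock(&stid->sc_lock);
        stid->sc_stateid.si_generation++;
        memcpy(dst, &stid->sc_stateid, sizeof(*dst));
        pthread_spin_unlock(&stid->sc_lock);
    }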
2015-10-23  nfsd: serialize layout stateid morphing operations  Jeff Layton  3 files, -4/+26
In order to allow the client to make a sane determination of what happened with racing LAYOUTGET/LAYOUTRETURN/CB_LAYOUTRECALL calls, we must ensure that the seqids returned accurately represent the order of operations. The simplest way to do that is to ensure that operations on a single stateid are serialized. This patch adds a mutex to the layout stateid, and locks it when checking the layout stateid's seqid. The mutex is held over the entire operation and released after the seqid is bumped. Note that in the case of CB_LAYOUTRECALL we must move the increment and setting of the seqid into a new cb "prepare" operation. The lease infrastructure will call the lm_break callback with a spinlock held, so we can't take the mutex in that codepath. Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
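Roughly the serialization being added, as a user-space sketch (the mutex, fields and callback shape are illustrative, not the nfsd layout code):

    #include <pthread.h>

    struct layout_stateid {
        pthread_mutex_t ls_mutex;               /* stand-in for the new per-layout-stateid mutex */
        unsigned int    ls_seqid;
    };

    /* Hold the mutex across the whole morphing operation: check the seqid,
     * do the work, bump the seqid, then let the next LAYOUTGET/LAYOUTRETURN/
     * CB_LAYOUTRECALL in.  Returns -1 if the presented seqid is stale. */
    static int morph_layout_stateid(struct layout_stateid *ls,
                                    unsigned int presented_seqid,
                                    int (*do_op)(void *), void *arg)
    {
        int ret;

        pthread_mutex_lock(&ls->ls_mutex);
        if (presented_seqid != ls->ls_seqid) {
            pthread_mutex_unlock(&ls->ls_mutex);
            return -1;
        }
        ret = do_op(arg);
        if (ret == 0)
            ls->ls_seqid++;                     /* bumped only after the operation */
        pthread_mutex_unlock(&ls->ls_mutex);
        return ret;
    }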
2015-10-23  nfsd: improve client_has_state to check for unused openowners  J. Bruce Fields  1 file, -8/+12
At least in the v4.0 case openowners can hang around for a while after the last close, but they shouldn't really block (for example) a new mount with a different principal. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2015-10-23  nfsd: fix clid_inuse on mount with security change  J. Bruce Fields  1 file, -2/+8
In bakeathon testing a Solaris client was getting a CLID_INUSE error when doing a krb5 mount soon after an auth_sys mount, or vice versa. That's not really necessary, since in this case the old client doesn't have any state any more: http://tools.ietf.org/html/rfc7530#page-103 "when the server gets a SETCLIENTID for a client ID that currently has no state, or it has state but the lease has expired, rather than returning NFS4ERR_CLID_INUSE, the server MUST allow the SETCLIENTID and confirm the new client ID if followed by the appropriate SETCLIENTID_CONFIRM." This doesn't fix the problem completely, since our client_has_state() check counts openowners left around to handle close replays, which we should probably just remove in this case. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2015-10-23  nfsd: move include of state.h from trace.c to trace.h  Jeff Layton  2 files, -2/+2
Any file which includes trace.h will need to include state.h, even if it isn't using any state tracepoints. Ensure that we include any headers that might be needed in trace.h instead of relying on the *.c files to have the right ones. Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2015-10-23  lockd: get rid of reference-counted NSM RPC clients  Andrey Ryabinin  4 files, -78/+16
Currently we have a reference-counted per-net NSM RPC client which is created on the first monitor request and destroyed after the last unmonitor request. It's needed because the RPC client needs to know 'utsname()->nodename', but utsname() might be NULL when nsm_unmonitor() is called. So instead of holding the RPC client we can just save the nodename in struct nlm_host and pass it to rpc_create(). Thus there is no need to keep the RPC client until the last unmonitor request; we can create a separate RPC client for each monitor/unmonitor request. Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
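A sketch of the idea in plain user-space C (field and function names are made up; the real struct nlm_host layout differs):

    #include <string.h>
    #include <sys/utsname.h>

    struct host_sketch {
        char nodename[65];                      /* copy saved at host creation time */
    };

    /* Save the node name once, while utsname() is known to be valid ... */
    static void host_save_nodename(struct host_sketch *host)
    {
        struct utsname un;

        uname(&un);
        strncpy(host->nodename, un.nodename, sizeof(host->nodename) - 1);
        host->nodename[sizeof(host->nodename) - 1] = '\0';
    }

    /* ... so each later monitor/unmonitor request can create a short-lived
     * RPC client from host->nodename instead of pinning one client for the
     * lifetime of the namespace. */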
2015-10-23  ocfs2/dlm: unlock lockres spinlock before dlm_lockres_put  Joseph Qi  2 files, -2/+3
dlm_lockres_put will call dlm_lockres_release if it drops the last reference, and that may in turn call dlm_print_one_lock_resource and take the lockres spinlock. So unlock the lockres spinlock before calling dlm_lockres_put to avoid a deadlock. Signed-off-by: Joseph Qi <joseph.qi@huawei.com> Cc: Mark Fasheh <mfasheh@suse.de> Cc: Joel Becker <jlbec@evilplan.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
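The ordering problem in miniature, as a user-space sketch (refcounting simplified; not the ocfs2 code):

    #include <pthread.h>
    #include <stdlib.h>

    struct lockres {
        pthread_spinlock_t spinlock;
        int                refcount;            /* the kernel uses a kref */
    };

    /* The release path may want the spinlock itself (e.g. to dump the
     * resource), so it must never run with the spinlock already held. */
    static void lockres_release(struct lockres *res)
    {
        pthread_spin_lock(&res->spinlock);
        /* ... print/inspect the resource ... */
        pthread_spin_unlock(&res->spinlock);
        free(res);
    }

    static void lockres_put(struct lockres *res)
    {
        if (--res->refcount == 0)
            lockres_release(res);
    }

    static void example_caller(struct lockres *res)
    {
        pthread_spin_lock(&res->spinlock);
        /* ... work that needs the spinlock ... */
        pthread_spin_unlock(&res->spinlock);    /* unlock first ...            */
        lockres_put(res);                       /* ... then drop the reference */
    }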
2015-10-22  locks: cleanup posix_lock_inode_wait and flock_lock_inode_wait  Benjamin Coddington  1 file, -6/+3
All callers use locks_lock_inode_wait() instead. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
2015-10-22  Move locks API users to locks_lock_inode_wait()  Benjamin Coddington  11 files, -53/+20
Instead of having users check for FL_POSIX or FL_FLOCK to call the correct locks API function, use the check within locks_lock_inode_wait(). This allows for some later cleanup. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
2015-10-22  locks: introduce locks_lock_inode_wait()  Benjamin Coddington  1 file, -0/+24
Users of the locks API commonly call either posix_lock_file_wait() or flock_lock_file_wait() depending upon the lock type. Add a new function locks_lock_inode_wait() which will check and call the correct function for the type of lock passed in. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
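The dispatch is straightforward; a self-contained sketch (types, flag values and helper bodies are stand-ins, not the kernel's):

    #define FL_POSIX 0x1
    #define FL_FLOCK 0x2

    struct inode { int i_dummy; };
    struct file_lock { unsigned int fl_flags; };

    static int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl) { return 0; }
    static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl) { return 0; }

    /* One entry point: callers no longer test FL_POSIX/FL_FLOCK themselves
     * before picking a wait helper. */
    static int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl)
    {
        if (fl->fl_flags & FL_POSIX)
            return posix_lock_inode_wait(inode, fl);
        if (fl->fl_flags & FL_FLOCK)
            return flock_lock_inode_wait(inode, fl);
        return -1;                              /* the kernel treats this as a bug */
    }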
2015-10-22  pstore: Fix return type of pstore_is_mounted()  Geliang Tang  2 files, -2/+2
This patch changes the return type of pstore_is_mounted() from int to bool. Signed-off-by: Geliang Tang <geliangtang@163.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
2015-10-22  f2fs: fix to skip shrinking extent nodes  Chao Yu  1 file, -1/+1
In f2fs_shrink_extent_tree we should stop the shrink flow if we have already shrunk enough nodes in the extent cache. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-10-22  f2fs: fix error path of ->symlink  Chao Yu  1 file, -3/+6
In f2fs's ->symlink we keep a fixed calling order between f2fs_add_link and page_symlink, since the node info must be initialized first in f2fs_add_link so that it can be used in page_symlink. But in our error path we don't release the metadata that was set up before page_symlink, which leaves a corrupt symlink entry in its parent's dentry page. Fix this issue by adding f2fs_unlink in the error path to remove that link. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-10-22  f2fs: fix to clear GCed flag for atomic written page  Chao Yu  1 file, -0/+1
An atomic-write page can be GCed; after committing this kind of page, we should clear the GCed flag for it. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-10-22  pstore: add pstore unregister  Geliang Tang  8 files, -21/+86
pstore doesn't support unregistering yet; it was marked as a TODO. This patch adds code to fix that: 1) Add functions to unregister kmsg/console/ftrace/pmsg. 2) Add a function to free the compression buffer. 3) Unmap the memory and free it. 4) Add a function to unregister the pstore filesystem. Signed-off-by: Geliang Tang <geliangtang@163.com> Acked-by: Kees Cook <keescook@chromium.org> [Removed __exit annotation from ramoops_remove(). Reported by Arnd Bergmann] Signed-off-by: Tony Luck <tony.luck@intel.com>
2015-10-22  f2fs: don't need to submit bio on error case  Jaegeuk Kim  1 file, -1/+1
If commit_atomic_write fails, we don't need to submit any bio. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-10-22  f2fs: fix leakage of inmemory atomic pages  Jaegeuk Kim  1 file, -7/+3
If we fail during commit_atomic_write, abort_volatile_write will be called, but it will not drop the in-memory pages because FI_ATOMIC_FILE is not set. Actually, there is no reason to check the flag in abort_volatile_write. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-10-22  Merge branch 'allocator-fixes' into for-linus-4.4  Chris Mason  14 files, -123/+459
Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: don't do extra bitmap search in one bit case  Josef Bacik  1 file, -13/+15
When we make ctl->unit allocations from a bitmap there is no point in searching for the next 0 in the bitmap. If we've found a bit we're done and can just exit the loop. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: keep track of largest extent in bitmaps  Josef Bacik  2 files, -1/+39
We can waste a lot of time searching through bitmaps when we are heavily fragmented trying to find large contiguous areas that don't exist in the bitmap. So keep track of the max extent size when we do a full search of a bitmap so that next time around we can just skip the expensive searching if our max size is less than what we are looking for. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
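A user-space sketch of the caching idea (not the btrfs bitmap code): a full scan remembers the largest free run it saw, so a later search for anything bigger can be skipped outright.

    struct bitmap_info {
        const unsigned char *bits;              /* 1 = allocated, 0 = free */
        unsigned long        nbits;
        unsigned long        max_extent;        /* largest free run from the last full scan */
    };

    static int bit_is_set(const unsigned char *bits, unsigned long i)
    {
        return bits[i / 8] & (1u << (i % 8));
    }

    static unsigned long bitmap_longest_free_run(struct bitmap_info *info)
    {
        unsigned long run = 0, best = 0, i;

        for (i = 0; i < info->nbits; i++) {
            if (!bit_is_set(info->bits, i)) {
                if (++run > best)
                    best = run;
            } else {
                run = 0;
            }
        }
        info->max_extent = best;                /* cache it for the next caller */
        return best;
    }

    /* Fast path: if the cached maximum is already too small, don't rescan. */
    static int bitmap_may_satisfy(const struct bitmap_info *info, unsigned long want)
    {
        return want <= info->max_extent;
    }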
2015-10-22  Btrfs: don't keep trying to build clusters if we are fragmented  Josef Bacik  3 files, -24/+82
If we are extremely fragmented then we won't be able to create a free_cluster. So if this happens set last_ptr->fragmented so that all future allocations will give up trying to create a cluster. When we unpin extents we will unset ->fragmented if we free up a sufficient amount of space in a block group. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: cut down on loops through the allocator  Josef Bacik  1 file, -3/+36
We try really really hard to make allocations, but sometimes it is just not going to happen, especially when free space is extremely fragmented. So add a few short cuts through the looping states. For example if we couldn't allocate a chunk, just go straight to the NO_EMPTY_SIZE loop. If there are no uncached block groups and we've done a full search, go straight to the ALLOC_CHUNK stage. And finally if we already have empty_size and empty_cluster set to 0 go ahead and return -ENOSPC. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: don't continue setting up space cache when enospc  Josef Bacik  2 files, -0/+20
If we hit ENOSPC when setting up a space cache, don't bother setting up any of the other space caches in this transaction; it'll just induce unnecessary latency. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: keep track of max_extent_size per space_info  Josef Bacik  2 files, -1/+34
When we are heavily fragmented we can induce a lot of latency trying to make an allocation happen that is simply not going to happen. Thankfully we keep track of our max_extent_size when going through the allocator, so if we get to the point where we are exiting find_free_extent with ENOSPC then set our space_info->max_extent_size so we can keep future allocations from having to pay this cost. We reset the max_extent_size whenever we release pinned bytes back into this space info so we can redo all the work. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: don't loop in allocator for space cache  Josef Bacik  1 file, -1/+1
The space cache needs to have contiguous allocations, and the allocator tries to make allocations by reducing the amount of bytes requested and re-searching. But this just makes us waste time when we are very fragmented, so if we can't find our space just exit, don't bother trying to search again. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: add a flags field to btrfs_transaction  Josef Bacik  4 files, -18/+16
I want to set some per-transaction flags, so instead of adding yet another int let's just convert the current two int indicators to flags and add a flags field for future use. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
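A sketch of the conversion (the bit names are invented for illustration and are not the actual btrfs_transaction flags):

    enum {
        TRANS_FLAG_INDICATOR_A,                 /* was: a standalone int */
        TRANS_FLAG_INDICATOR_B,                 /* was: a standalone int */
        /* new per-transaction state becomes just another bit here */
    };

    struct transaction_sketch {
        unsigned long flags;
    };

    static void trans_set_flag(struct transaction_sketch *t, int bit)
    {
        t->flags |= 1UL << bit;
    }

    static int trans_test_flag(const struct transaction_sketch *t, int bit)
    {
        return !!(t->flags & (1UL << bit));
    }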
2015-10-22  Btrfs: fix prealloc under heavy fragmentation conditions  Josef Bacik  1 file, -0/+9
If we are heavily fragmented we will continually try to prealloc the largest extent size we can every time we call btrfs_reserve_extent. This can be very expensive when we are heavily fragmented, burning lots of CPU cycles and loops through the allocator. So instead notice when we get a smaller chunk from the allocator than what we specified and use this as the new maximum size we try to allocate. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
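The feedback loop, sketched in user-space C (names are illustrative; the real state lives elsewhere in btrfs):

    struct prealloc_state {
        unsigned long long max_alloc_size;      /* current ceiling for prealloc requests */
    };

    /* Cap the next request at the ceiling instead of always asking for the maximum. */
    static unsigned long long prealloc_request_size(const struct prealloc_state *st,
                                                    unsigned long long want)
    {
        return want < st->max_alloc_size ? want : st->max_alloc_size;
    }

    /* If the allocator handed back less than we asked for, lower the ceiling. */
    static void prealloc_note_result(struct prealloc_state *st,
                                     unsigned long long asked,
                                     unsigned long long got)
    {
        if (got < asked && got < st->max_alloc_size)
            st->max_alloc_size = got;
    }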
2015-10-22  Btrfs: add fragment=* debug mount option  Josef Bacik  5 files, -7/+150
In tracking down these weird bitmap problems it was helpful to artificially create an extremely fragmented file system. These mount options let us either fragment data or metadata or both. With these options I could reproduce all sorts of weird latencies and hangs that occur under extreme fragmentation and get them fixed. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: fix qgroup sanity tests  Josef Bacik  1 file, -0/+6
With my changes to allow us to find old roots when resolving indirect refs I introduced a regression to the sanity tests. Since we don't really care to go down into the fs roots we just need to have the old behavior of returning ENOENT for dummy roots for the sanity tests. In the future if we want to get fancy we can populate the test fs trees with the references as well. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  Btrfs: change how we wait for pending ordered extents  Josef Bacik  5 files, -64/+60
We have a mechanism to make sure we don't lose updates for ordered extents that were logged in the transaction that is currently running. We add the ordered extent to a transaction list and then the transaction waits on all the ordered extents in that list. However, on substantially large file systems this list can be extremely large, and can give us soft lockups, since the ordered extents don't remove themselves from the list when they do complete. To fix this we simply add a counter to the transaction that is incremented any time we have a logged extent that needs to be completed in the current transaction. Then when the ordered extent finally completes it decrements the per-transaction counter and wakes up the transaction if it was the last one. This will eliminate the softlockup. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
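A user-space sketch of the counting scheme (a condition variable stands in for the kernel wait queue; field names are illustrative):

    #include <pthread.h>

    struct txn {
        pthread_mutex_t lock;
        pthread_cond_t  wait;
        unsigned long   pending_ordered;        /* logged extents not yet completed */
    };

    /* Called when a logged ordered extent must complete in this transaction. */
    static void txn_inc_pending(struct txn *t)
    {
        pthread_mutex_lock(&t->lock);
        t->pending_ordered++;
        pthread_mutex_unlock(&t->lock);
    }

    /* Called from ordered-extent completion: the last one wakes the committer. */
    static void txn_dec_pending(struct txn *t)
    {
        pthread_mutex_lock(&t->lock);
        if (--t->pending_ordered == 0)
            pthread_cond_broadcast(&t->wait);
        pthread_mutex_unlock(&t->lock);
    }

    /* Commit path: wait on the counter instead of walking a huge list. */
    static void txn_wait_pending(struct txn *t)
    {
        pthread_mutex_lock(&t->lock);
        while (t->pending_ordered)
            pthread_cond_wait(&t->wait, &t->lock);
        pthread_mutex_unlock(&t->lock);
    }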
2015-10-22  btrfs: qgroup: Check if qgroup reserved space leaked  Qu Wenruo  3 files, -0/+34
Add a check at btrfs_destroy_inode() time to detect qgroup reserved space leaks. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Avoid calling btrfs_free_reserved_data_space in clear_bit_hook  Qu Wenruo  3 files, -12/+22
In clear_bit_hook, qgroup reserved data is already handled quite well, either released by finish_ordered_io or by invalidatepage. So calling btrfs_qgroup_free_data() here is completely meaningless, and since btrfs_qgroup_free_data() will lock the io_tree, it can't be called with the io_tree lock held. This patch adds a new function, btrfs_free_reserved_data_space_noquota(), for clear_bit_hook() to use, which silences the lockdep warning. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: fallocate: Add support to accurate qgroup reserve  Qu Wenruo  1 file, -44/+117
Now fallocate will do an accurate qgroup reserve space check, unlike the old method, which always reserved the whole length of the range. With this patch, fallocate will: 1) Iterate the desired range and mark it in the data rsv map. Only ranges which are going to be allocated will be recorded in the data rsv map and have space reserved; already allocated ranges (normal/prealloc extents) are skipped. Also, record the marked ranges into a new list for later use. 2) If 1) succeeded, do the real file extent allocation. At file extent allocation time, the corresponding range is removed from the data rsv map. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Add new trace point for qgroup data reserve  Qu Wenruo  2 files, -2/+17
Now each qgroup data reserve has its own ftrace event for better debugging. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: Add handler for invalidate page  Qu Wenruo  1 file, -0/+24
For btrfs_invalidatepage() and its variant evict_inode_truncate_page(), there will be pages that never reach disk. In that case, their reserved space won't be released or freed by finish_ordered_io() or the delayed_ref handler. So we must free their qgroup reserved space, or we will be leaking reserved space again. So this patch calls btrfs_qgroup_free_data() for invalidatepage() and its variant evict_inode_truncate_page(). And due to the nature of the new btrfs_qgroup_reserve/free_data(), reserved space will only be reserved or freed once, so for pages which are already flushed to disk their reserved space will be released and freed by the delayed_ref handler; a double free won't be a problem. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Add handler for NOCOW and inline  Qu Wenruo  2 files, -1/+21
For the NOCOW and inline cases, no delayed_ref is created, so we should free their reserved data space at the proper time (finish_ordered_io for NOCOW and cow_file_inline for inline). Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Cleanup old inaccurate facilities  Qu Wenruo  9 files, -156/+60
Clean up the old facilities which use the old btrfs_qgroup_reserve() function call, replace them with the newer version, and remove the "__" prefix from them. Also, make the btrfs_qgroup_reserve/free() functions private, as they are now only used inside the qgroup code. Now the whole btrfs qgroup is switched to use the new reserve facilities. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Switch to new delalloc space reserve and release  Qu Wenruo  4 files, -25/+38
Use new __btrfs_delalloc_reserve_space() and __btrfs_delalloc_release_space() to reserve and release space for delalloc. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Add new version of btrfs_delalloc_reserve/release_space  Qu Wenruo  2 files, -0/+61
Add new versions of the btrfs_delalloc_reserve_space() and btrfs_delalloc_release_space() functions, which support accurate qgroup reserves. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Switch to new check_data_free_space and free_reserved_data_space  Qu Wenruo  3 files, -19/+27
Use the new reserve/free for buffered write and the inode cache. For the buffered write case, a nodatacow write won't increase the quota account, so unlike the old behavior, which reserved before checking nocow, we now check nocow first and then only reserve data space if we can't do a nocow write. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent-tree: Add new version of btrfs_check_data_free_space and btrfs_free_reserved_data_space  Qu Wenruo  2 files, -9/+79
Add new functions __btrfs_check_data_free_space() and __btrfs_free_reserved_data_space() to work with the new accurate qgroup reserved space framework. The new functions will replace the old btrfs_check_data_free_space() and btrfs_free_reserved_data_space() respectively, but until all the changes are done, let's just use the new names. Also, export the internal-use function btrfs_alloc_data_chunk_ondemand(): as qgroup reserve now requires precise byte counts, some operations can't get the accurate number in advance (like fallocate), but the data space info check and data chunk allocation don't need to be that accurate and can be called at the beginning. So export it for those later operations. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Use new metadata reservation  Qu Wenruo  3 files, -36/+13
As we have the new metadata reservation functions, use them to replace the old btrfs_qgroup_reserve() call for metadata. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce new functions to reserve/free metadata  Qu Wenruo  4 files, -0/+48
Introduce new functions btrfs_qgroup_reserve/free_meta() to reserve/free metadata reserved space. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: delayed_ref: release and free qgroup reserved at proper timing  Qu Wenruo  4 files, -4/+34
Qgroup reserved space needs to be released from the inode dirty map and freed at different timings: 1) Release when the metadata is written into the tree. After the corresponding metadata is written into the tree, any newer write will be COWed (the NOCOW case is not included yet). So we must release its range from the inode dirty range map, or we will forget to reserve the needed range, causing the accounting to exceed the limit. 2) Free reserved bytes when the delayed ref is run. When delayed refs are run, qgroup accounting will follow soon and turn the reserved bytes into rfer/excl numbers. As run_delayed_refs and qgroup accounting are all done at commit_transaction() time, we are safe to free reserved space at run_delayed_ref time. With these timings for releasing/freeing reserved space, we should be able to resolve the long-standing qgroup reserved space leak problem. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: delayed_ref: Add new function to record reserved space into delayed ref  Qu Wenruo  2 files, -0/+43
Add a new function, btrfs_add_delayed_qgroup_reserve(), to record how much space is reserved for a given extent. As btrfs only accounts qgroups at run_delayed_refs() time, a newly allocated extent should keep its reserved space until then. So add the needed function, with the related members, to do it. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce functions to release/free qgroup reserve data space  Qu Wenruo  2 files, -0/+62
Introduce functions btrfs_qgroup_release/free_data() to release/free reserved data ranges. Release means just remove the data range from the io_tree, but don't free the reserved space. This is for the normal buffered write case: when data is written into disk and its metadata is added into the tree, its reserved space should still be kept until commit_trans(). So in that case, we only release the dirty range, but keep the reserved space recorded somewhere else until commit_trans(). Free means not only remove the data range, but also free the reserved space. This is used for the cleanup and invalidate page cases. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: qgroup: Introduce btrfs_qgroup_reserve_data function  Qu Wenruo  3 files, -0/+52
Introduce a new function, btrfs_qgroup_reserve_data(), which will use the io_tree to do accurate qgroup reserves, to avoid reserved space leaking. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce new function clear_record_extent_bits()  Qu Wenruo  2 files, -11/+42
Introduce a new function, clear_record_extent_bits(), which will clear bits for a given range and record which ranges were cleared and how many bytes in total changed. This provides the basis for the later qgroup reserve code. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce new function set_record_extent_bits  Qu Wenruo  2 files, -18/+56
Introduce a new function, set_record_extent_bits(), which will not only set the given bits, but also record how many bytes changed, with detailed range info. This is quite important for the later qgroup reserve framework: the number of bytes will be used for the qgroup reserve, and the detailed range info will be used for cleanup in the EDQUOT case. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2015-10-22  btrfs: extent_io: Introduce needed structure for recording set/clear bits  Qu Wenruo  1 file, -0/+12
Add a new structure, extent_change_set, to record how many bytes are changed in one set/clear_extent_bits() operation, with detailed changed range info. This provides the needed facilities for the later qgroup reserve framework. Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
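Roughly the shape of such a changeset, as a user-space sketch (names illustrative rather than the exact btrfs structure):

    struct changed_range {
        unsigned long long    start;
        unsigned long long    end;              /* inclusive, as extent_io ranges are */
        struct changed_range *next;
    };

    struct change_set_sketch {
        unsigned long long    bytes_changed;    /* total bytes whose bits changed */
        struct changed_range *ranges;           /* detailed list of changed ranges */
    };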