author		Ming Lei <ming.lei@redhat.com>	2019-02-15 14:13:23 +0300
committer	Jens Axboe <axboe@kernel.dk>	2019-02-15 18:40:12 +0300
commit		2705c93742e91730d335838025d75d8043861174
tree		16e379f17a745cbb3a1f8d7ba5ffe7e2b8ddee86 /include
parent		ac4fa1d107addb2c6b21067d8945a39316a09fc8
download	linux-2705c93742e91730d335838025d75d8043861174.tar.xz
block: kill QUEUE_FLAG_NO_SG_MERGE
Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after splitting"),
the physical segment count is mainly figured out in blk_queue_split() for the
fast path, and the BIO_SEG_VALID flag is set there too.
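A condensed sketch of that fast path, loosely following the tail of the
v5.0-era blk_queue_split() (the splitting logic itself is elided, and the
variable names come from that era's code rather than from this diff):

	/* Physical segments were counted while splitting, so record the
	 * count and mark it valid -- later recounts become no-ops.
	 */
	res = split ? split : *bio;
	res->bi_phys_segments = nsegs;
	bio_set_flag(res, BIO_SEG_VALID);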
Now only blk_recount_segments() and blk_recalc_rq_segments() use this flag.
blk_recount_segments() is effectively bypassed on the fast path, given that
BIO_SEG_VALID has already been set in blk_queue_split().
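The bypass amounts to a guard of this shape at each caller (an illustrative
sketch, not a verbatim quote of the tree):

	/* Recount only when the segment count is stale; blk_queue_split()
	 * already set BIO_SEG_VALID on the fast path, so this is skipped
	 * there.
	 */
	if (!bio_flagged(bio, BIO_SEG_VALID))
		blk_recount_segments(q, bio);	/* slow path only */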
For the other users of blk_recalc_rq_segments():
- it runs in the partial-completion branch of blk_update_request(), which is
  an unusual case;
- it runs in blk_cloned_rq_check_limits() (sketched below), which is still
  not a big problem if the flag is killed, since dm-rq is the only user.
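That second caller can be condensed roughly as follows (based on the
v5.0-era function; the sector-limit check and printk diagnostics are
omitted here):

	static int blk_cloned_rq_check_limits(struct request_queue *q,
					      struct request *rq)
	{
		/* Segment-counting settings may differ between stacking
		 * queues, so recalculate before checking this queue's limit.
		 */
		blk_recalc_rq_segments(rq);
		if (rq->nr_phys_segments > queue_max_segments(q))
			return -EIO;	/* over max segments limit */
		return 0;
	}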
Multi-page bvec is enabled now, so not doing S/G merging is rather pointless
with the current setup of the I/O path: it isn't going to save a significant
number of cycles.
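For context, the removed flag was set and tested through the generic
queue-flag helpers, roughly as below (an illustrative sketch; the flag name
only exists in pre-removal trees, and the call sites shown are assumed, not
quoted from this patch):

	/* A driver opting out of S/G merging would set the flag ... */
	blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);

	/* ... and the merge code would test it before merging segments. */
	if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags))
		return;	/* skip physical segment merging */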
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/blkdev.h | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b6292d469ea4..faed9d9eb84c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -588,7 +588,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_DEAD		13	/* queue tear-down finished */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_NO_SG_MERGE	15	/* don't attempt to merge SG segments*/
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
 #define QUEUE_FLAG_WC		17	/* Write back caching */
 #define QUEUE_FLAG_FUA		18	/* device supports FUA writes */