author     Christoph Hellwig <hch@lst.de>    2020-04-14 10:42:21 +0300
committer  Jens Axboe <axboe@kernel.dk>      2020-04-22 19:47:06 +0300
commit     e64a0e16928415648d53d721b3d6fc3635eddf92 (patch)
tree       bfb5f00ddd9a5dfb7883fc0dd5280342ed3706f1 /block/blk-merge.c
parent     9bc5c397d8384b50c8202f4400bf2f87fe8291d9 (diff)
download   linux-e64a0e16928415648d53d721b3d6fc3635eddf92.tar.xz
block: remove RQF_COPY_USER
RQF_COPY_USER is set for bios where the passthrough request mapping
helpers decided that bounce buffering is required. It is then used to
pad the scatterlist for drivers that require it. But given that
non-passthrough requests are by definition aligned, and directly mapped
passthrough requests must be aligned as well, the flag is not actually
required at all.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-merge.c')
-rw-r--r-- | block/blk-merge.c | 3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 1534ed736363..99c9759f3a8a 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -532,8 +532,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 	else if (rq->bio)
 		nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
 
-	if (unlikely(rq->rq_flags & RQF_COPY_USER) &&
-	    (blk_rq_bytes(rq) & q->dma_pad_mask)) {
+	if (blk_rq_bytes(rq) && (blk_rq_bytes(rq) & q->dma_pad_mask)) {
 		unsigned int pad_len =
 			(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
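For readers unfamiliar with the dma_pad_mask arithmetic kept by this patch, here is a
minimal standalone C sketch. It is illustrative only, not kernel code: the mask value of
3 is a hypothetical 4-byte padding requirement, and pad_len() simply mirrors the
expression "(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1" from the hunk above, which rounds
the request length up to the next padded boundary.

#include <stdio.h>

/*
 * Illustrative sketch only (not kernel code): mimics the pad_len
 * computation retained in blk_rq_map_sg(). dma_pad_mask is assumed
 * to be 2^n - 1, e.g. 3 for a 4-byte padding requirement.
 */
static unsigned int pad_len(unsigned int dma_pad_mask, unsigned int len)
{
	/* bytes needed to round len up to the next (dma_pad_mask + 1) boundary */
	return (dma_pad_mask & ~len) + 1;
}

int main(void)
{
	unsigned int mask = 3;	/* hypothetical 4-byte padding requirement */

	/* 510 & 3 != 0, so padding applies: 2 bytes brings it to 512 */
	printf("pad for 510 bytes: %u\n", pad_len(mask, 510));

	/*
	 * 512 & 3 == 0, so the kernel's (blk_rq_bytes(rq) & q->dma_pad_mask)
	 * check skips the padding branch entirely for aligned requests.
	 */
	printf("pad for 512 bytes (branch not taken in kernel): %u\n",
	       pad_len(mask, 512));
	return 0;
}

The patch keeps exactly this arithmetic; the only behavioral change is that the padding
branch is now entered for any request whose length is misaligned against q->dma_pad_mask,
rather than only for requests that happened to carry RQF_COPY_USER.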