
block: update bio according to DMA alignment padding

DMA start address and transfer size alignment for PC requests are
achieved using bio_copy_user() instead of bio_map_user().  This works
because bio_copy_user() always uses full pages and block DMA alignment
isn't allowed to go over PAGE_SIZE.
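
For illustration (a sketch, not the kernel source): the choice between
the two paths reduces to an alignment test along the lines below, where
needs_copy() and dma_mask are hypothetical stand-ins for the values
__blk_rq_map_user() works with:

  #include <stdbool.h>
  #include <stdint.h>

  /* Sketch: take the copy path (bio_copy_user()) when either the start
   * address or the length violates the DMA alignment mask; otherwise
   * the pages can be mapped directly (bio_map_user()).
   */
  static bool needs_copy(uintptr_t uaddr, unsigned int len,
                         unsigned int dma_mask)
  {
          return (uaddr & dma_mask) || (len & dma_mask);
  }

  int main(void)
  {
          /* 511 is an assumed 512-byte alignment mask, example only */
          return needs_copy(0x1001, 512, 511) ? 1 : 0;
  }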

However, the implementation didn't update the last bio of the request
to make this padding visible to lower layers.  This patch makes
blk_rq_map_user() extend the last bio such that it includes the
padding area and the size of the area pointed to by the request is
properly aligned.
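
As a worked example of the padding arithmetic the patch relies on
(a minimal user-space sketch, not kernel code; the 512-byte mask is an
assumed value and queue_dma_alignment() is modelled as a plain
variable):

  #include <assert.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned int mask = 511;  /* assumed 512-byte DMA alignment mask */
          unsigned int len = 513;   /* unaligned transfer length */

          if (len & mask) {
                  /* same expression as the patch: distance from len to
                   * the next alignment boundary
                   */
                  unsigned int pad_len = (mask & ~len) + 1;

                  len += pad_len;   /* 513 -> 1024 */
          }
          assert(!(len & mask));    /* padded length is aligned */
          printf("padded len = %u\n", len);
          return 0;
  }

With mask = 511 and len = 513, pad_len comes out to 511 and the padded
length is 1024, a multiple of the alignment.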

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
commit 40b01b9bbd
parent 56c819df77
Author:     Tejun Heo <htejun@gmail.com>
AuthorDate: 2008-02-19 11:35:38 +01:00
Commit:     Jens Axboe <jens.axboe@oracle.com>


@@ -139,6 +139,23 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
 		ubuf += ret;
 	}
 
+	/*
+	 * __blk_rq_map_user() copies the buffers if starting address
+	 * or length isn't aligned.  As the copied buffer is always
+	 * page aligned, we know that there's enough room for padding.
+	 * Extend the last bio and update rq->data_len accordingly.
+	 *
+	 * On unmap, bio_uncopy_user() will use unmodified
+	 * bio_map_data pointed to by bio->bi_private.
+	 */
+	if (len & queue_dma_alignment(q)) {
+		unsigned int pad_len = (queue_dma_alignment(q) & ~len) + 1;
+		struct bio *bio = rq->biotail;
+
+		bio->bi_io_vec[bio->bi_vcnt - 1].bv_len += pad_len;
+		bio->bi_size += pad_len;
+	}
+
 	rq->buffer = rq->data = NULL;
 	return 0;
 unmap_rq: