commit 9e02977bfa
When we looked into FIO performance with swiotlb enabled in a VM, we found
that swiotlb_bounce() is always called one more time than expected for each
DMA read request.
It turns out that the bounce buffer is copied back to the original DMA buffer
twice after the completion of a DMA request: once in
dma_direct_sync_single_for_cpu() and again in swiotlb_tbl_unmap_single().
Since the contents of the bounce buffer do not change between the two copies,
the second copy is redundant.
Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to skip
the memory copy there.
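
As an illustration only, a minimal sketch of what the unmap path could look
like with this change, assuming the copy-back on unmap is driven from
dma_direct_unmap_page() in kernel/dma/direct.h (the exact location and
helper signatures here are assumptions for the sketch, not taken from this
commit message):

	static inline void dma_direct_unmap_page(struct device *dev,
			dma_addr_t addr, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		phys_addr_t phys = dma_to_phys(dev, addr);

		/* First copy-back: sync the bounce buffer to the original buffer. */
		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
			dma_direct_sync_single_for_cpu(dev, addr, size, dir);

		/*
		 * Tell swiotlb_tbl_unmap_single() to skip its own copy-back,
		 * which would only repeat the sync done just above.
		 */
		if (unlikely(is_swiotlb_buffer(dev, phys)))
			swiotlb_tbl_unmap_single(dev, phys, size, dir,
						 attrs | DMA_ATTR_SKIP_CPU_SYNC);
	}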
This fix increases FIO 64KB sequential read throughput in a guest with
swiotlb=force by 5.6%.
Fixes: