RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
It is possible for a single SGL to span an aligned boundary, eg if the SGL is

  61440 -> 90112

Then the length is 28672, which currently limits the block size to
32k. With a 32k page size the two covering blocks will be:
32768->65536 and 65536->98304
However, the correct answer is a 128K block size which will span the whole
28672 bytes in a single block.
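To make the old behaviour concrete, here is a small userspace sketch (not the kernel code; roundup_pow_of_two() is reimplemented locally) that reproduces the 32k limit and the two covering blocks for this example:

  #include <stdio.h>

  /* local stand-in for the kernel's roundup_pow_of_two() */
  static unsigned long roundup_pow_of_two(unsigned long n)
  {
          unsigned long p = 1;

          while (p < n)
                  p <<= 1;
          return p;
  }

  int main(void)
  {
          unsigned long virt = 61440, length = 28672;
          unsigned long pgsz = roundup_pow_of_two(length);        /* 32768 */
          /* 32k-aligned blocks needed to cover [61440, 90112) */
          unsigned long first = virt & ~(pgsz - 1);               /* 32768 */
          unsigned long last = (virt + length - 1) & ~(pgsz - 1); /* 65536 */

          printf("page size %lu needs blocks at %lu and %lu\n",
                 pgsz, first, last);
          return 0;
  }

It prints a page size of 32768 with blocks starting at 32768 and 65536, i.e. the two 32k blocks described above instead of a single block.
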
Instead of limiting based on length, figure out which high IOVA bits don't
change between the start and end addresses. That is the highest useful
page size.
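The new bound can be checked with a similar userspace sketch. GENMASK() and bits_per() below are local stand-ins for the kernel helpers (a 64-bit unsigned long is assumed), applied to the same 61440/28672 example; the real patch additionally ANDs the result with the driver's pgsz_bitmap:

  #include <stdio.h>

  /* local stand-ins for the kernel's GENMASK()/bits_per(), 64-bit long assumed */
  #define GENMASK(h, l)  (((~0UL) << (l)) & (~0UL >> (63 - (h))))

  static int bits_per(unsigned long n)    /* bits needed to represent n, n != 0 */
  {
          return 64 - __builtin_clzl(n);
  }

  int main(void)
  {
          unsigned long virt = 61440, length = 28672;
          /* IOVA bits that change between start (61440) and end (90111) */
          unsigned long changing = (length - 1 + virt) ^ virt;       /* 0x1afff */
          unsigned long bound = GENMASK(63, bits_per(changing));

          /* smallest page size the new bound still allows */
          printf("lowest allowed page size: %lu\n", bound & -bound); /* 131072 */
          return 0;
  }

The lowest page size the mask still allows is 131072, i.e. the 128K block size described above.
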
Fixes: 4a35339958 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
parent 71ff3f6268
commit a40c20dabd
drivers/infiniband/core/umem.c
@@ -156,8 +156,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		return 0;
 
 	va = virt;
-	/* max page size not to exceed MR length */
-	mask = roundup_pow_of_two(umem->length);
+	/* The best result is the smallest page size that results in the minimum
+	 * number of required pages. Compute the largest page size that could
+	 * work based on VA address bits that don't change.
+	 */
+	mask = pgsz_bitmap &
+	       GENMASK(BITS_PER_LONG - 1,
+		       bits_per((umem->length - 1 + virt) ^ virt));
 	/* offset into first SGL */
 	pgoff = umem->address & ~PAGE_MASK;
 