page_pool: unify frag_count handling in page_pool_is_last_frag()

Currently, when page_pool_create() is called with the
PP_FLAG_PAGE_FRAG flag, page_pool_alloc_pages() is only
allowed to be called under the below constraints:
1. page_pool_fragment_page() needs to be called to set up
   page->pp_frag_count immediately.
2. page_pool_defrag_page() often needs to be called to drain
   page->pp_frag_count when no more users will be holding
   on to that page.

Those constraints exist in order to support a page being
split into multiple fragments.

Those constraints come with some overhead because of the
cache line dirtying/bouncing and the atomic update, as
illustrated by the sketch below.
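A minimal sketch of what those two constraints typically look like
on the driver side (the drv_* helpers, the fragment count of 4 and
the error handling are hypothetical; only the page_pool_* calls are
existing API):

#include <net/page_pool/helpers.h>

/* Constraint 1: set up page->pp_frag_count right after the page is
 * allocated from the pool, before handing out any fragment.
 */
static struct page *drv_alloc_split_page(struct page_pool *pool)
{
	struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

	if (page)
		page_pool_fragment_page(page, 4);

	return page;
}

/* Constraint 2: every fragment user drains its share of
 * pp_frag_count, and only the last user may recycle the page.
 */
static void drv_put_one_frag(struct page_pool *pool, struct page *page)
{
	if (page_pool_defrag_page(page, 1) == 0)
		page_pool_put_defragged_page(pool, page, -1, false);
}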

Those constraints are unavoidable when a page needs to be
split into more than one fragment, but there are also cases
where we want to avoid those constraints and their overhead,
namely when a page cannot be split because it only holds a
single fragment as requested by the user. Depending on the
use case, a driver may want to:
use case 1: allocate a page without page splitting.
use case 2: allocate a page with page splitting.
use case 3: allocate a page with or without page splitting
            depending on the fragment size.

Currently the page pool only provides the page_pool_alloc_pages()
and page_pool_alloc_frag() APIs to enable use cases 1 and 2
separately, so we cannot use a combination of the two to enable
use case 3; that is not possible yet because of the per-page_pool
PP_FLAG_PAGE_FRAG flag.
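
For reference, a rough sketch of how the two existing entry points
map onto use cases 1 and 2 (the drv_* wrappers are hypothetical;
only the page_pool_* calls are existing API):

#include <net/page_pool/helpers.h>

/* Use case 1: hand out one whole page per allocation, no splitting. */
static struct page *drv_alloc_whole_page(struct page_pool *pool)
{
	return page_pool_alloc_pages(pool, GFP_ATOMIC);
}

/* Use case 2: carve a fragment out of a (possibly shared) pool page,
 * which currently requires the pool to be created with
 * PP_FLAG_PAGE_FRAG.
 */
static struct page *drv_alloc_fragment(struct page_pool *pool,
				       unsigned int *offset,
				       unsigned int size)
{
	return page_pool_alloc_frag(pool, offset, size, GFP_ATOMIC);
}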

So, in order to allow allocating an unsplit page without the
overhead of a split page, while still allowing split pages to
be allocated, we need to remove the per-page_pool flag check
in page_pool_is_last_frag(). As best as I can think of, there
are two methods to do that:
1. Add a per-page flag/bit to indicate whether a page is split
   or not, which means we might need to update that flag/bit
   every time the page is recycled, dirtying the cache line of
   'struct page' for use case 1.
2. Unify the page->pp_frag_count handling for both split and
   unsplit pages by assuming all pages in the page pool are
   split into one big fragment initially (see the sketch after
   this list).
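
To make method 2 concrete, here is a sketch of the pp_frag_count
life cycle once every pool page starts out as one big fragment (the
example_* functions are illustrative only; the page_pool_* calls are
existing API):

#include <net/page_pool/helpers.h>

/* Use case 1 after this patch: the page leaves page_pool_set_pp_info()
 * with pp_frag_count == 1, so the put path only hits the cheap
 * atomic_long_read() fast path in page_pool_defrag_page() and never
 * dirties the cache line with an atomic update.
 */
static void example_unsplit(struct page_pool *pool)
{
	struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

	if (page)
		page_pool_put_full_page(pool, page, false);
}

/* Use case 2 after this patch: splitting overwrites the initial count,
 * and each fragment user drains one reference as before.
 */
static void example_split(struct page_pool *pool)
{
	struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

	if (!page)
		return;

	page_pool_fragment_page(page, 2);		/* two fragment users */
	page_pool_put_full_page(pool, page, false);	/* first user drops its ref */
	page_pool_put_full_page(pool, page, false);	/* last user: page is recycled */
}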

The page pool already supports use case 1 without dirtying the
cache line of 'struct page' whenever a page is recycled, and we
need to support use case 3 with minimal overhead, especially
without adding any noticeable overhead to use case 1. Since we
already optimize page_pool_defrag_page() by not updating
pp_frag_count for the last fragment user, this patch chooses
method 2 and unifies the pp_frag_count handling to support
use case 3.
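
With the flag check removed from page_pool_is_last_frag(), use case 3
can then be built on top of the two existing helpers; a hypothetical
driver-side wrapper could look like the sketch below (illustrative
only, this patch itself only unifies the frag_count handling and does
not add such a helper):

#include <net/page_pool/helpers.h>

/* Hypothetical use case 3 wrapper: split the page when the requested
 * size leaves room for other fragments, otherwise hand out a whole
 * unsplit page and skip the pp_frag_count update overhead on the put
 * path.
 */
static struct page *drv_alloc_maybe_split(struct page_pool *pool,
					  unsigned int *offset,
					  unsigned int size)
{
	if (size <= PAGE_SIZE / 2)
		return page_pool_alloc_frag(pool, offset, size, GFP_ATOMIC);

	*offset = 0;
	return page_pool_alloc_pages(pool, GFP_ATOMIC);
}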

Micro-benchmark testing in [1] shows no noticeable performance
degradation with this patch applied, and provides some
justification for unifying the frag_count handling.

1. https://lore.kernel.org/all/bf2591f8-7b3c-4480-bb2c-31dc9da1d6ac@huawei.com/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Lorenzo Bianconi <lorenzo@kernel.org>
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Liang Chen <liangchen.linux@gmail.com>
CC: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://lore.kernel.org/r/20231020095952.11055-2-linyunsheng@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---

@@ -115,28 +115,49 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 	long ret;
 
 	/* If nr == pp_frag_count then we have cleared all remaining
-	 * references to the page. No need to actually overwrite it, instead
-	 * we can leave this to be overwritten by the calling function.
+	 * references to the page:
+	 * 1. 'nr == 1': no need to actually overwrite it.
+	 * 2. 'nr != 1': overwrite it with one, which is the rare case
+	 *               for pp_frag_count draining.
 	 *
-	 * The main advantage to doing this is that an atomic_read is
-	 * generally a much cheaper operation than an atomic update,
-	 * especially when dealing with a page that may be partitioned
-	 * into only 2 or 3 pieces.
+	 * The main advantage to doing this is that not only do we avoid an
+	 * atomic update, as an atomic_read is generally a much cheaper
+	 * operation than an atomic update, especially when dealing with a
+	 * page that may be partitioned into only 2 or 3 pieces; we also
+	 * unify the pp_frag_count handling by ensuring all pages have been
+	 * partitioned into one piece initially, and only overwrite it when
+	 * the page is partitioned into more than one piece.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr)
+	if (atomic_long_read(&page->pp_frag_count) == nr) {
+		/* As we have ensured nr is always one for the constant case
+		 * using the BUILD_BUG_ON(), we only need to handle the
+		 * non-constant case here for pp_frag_count draining, which
+		 * is a rare case.
+		 */
+		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
+		if (!__builtin_constant_p(nr))
+			atomic_long_set(&page->pp_frag_count, 1);
+
 		return 0;
+	}
 
 	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
 	WARN_ON(ret < 0);
 
+	/* We are the last user here too, reset pp_frag_count back to 1 to
+	 * ensure all pages have been partitioned into one piece initially,
+	 * this should be the rare case when the last two fragment users call
+	 * page_pool_defrag_page() currently.
+	 */
+	if (unlikely(!ret))
+		atomic_long_set(&page->pp_frag_count, 1);
+
 	return ret;
 }
 
-static inline bool page_pool_is_last_frag(struct page_pool *pool,
-					  struct page *page)
+static inline bool page_pool_is_last_frag(struct page *page)
 {
-	/* If fragments aren't enabled or count is 0 we were the last user */
-	return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
-	       (page_pool_defrag_page(page, 1) == 0);
+	/* If page_pool_defrag_page() returns 0, we were the last user */
+	return page_pool_defrag_page(page, 1) == 0;
 }
 
 /**
@@ -161,7 +182,7 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(pool, page))
+	if (!page_pool_is_last_frag(page))
 		return;
 
 	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);

@@ -376,6 +376,14 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 {
 	page->pp = pool;
 	page->pp_magic |= PP_SIGNATURE;
+
+	/* Ensuring all pages have been split into one fragment initially:
+	 * page_pool_set_pp_info() is only called once for every page when it
+	 * is allocated from the page allocator and page_pool_fragment_page()
+	 * is dirtying the same cache line as the page->pp_magic above, so
+	 * the overhead is negligible.
+	 */
+	page_pool_fragment_page(page, 1);
 	if (pool->p.init_callback)
 		pool->p.init_callback(page, pool->p.init_arg);
 }
@@ -672,7 +680,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		struct page *page = virt_to_head_page(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(pool, page))
+		if (!page_pool_is_last_frag(page))
 			continue;
 
 		page = __page_pool_put_page(pool, page, -1, false);