mm/hugetlb: move page order check inside hugetlb_cma_reserve()

All platforms could benefit from a page order check against
MAX_PAGE_ORDER before allocating a CMA area for gigantic hugetlb pages.
Let's move this check from the individual platforms into the generic
hugetlb code.
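
For illustration, a minimal userspace sketch of the consolidated check
(the PAGE_SHIFT/MAX_PAGE_ORDER values and the stub body below are
assumptions for the example, not the kernel's actual configuration):

  #include <stdio.h>

  #define PAGE_SHIFT     12   /* assumed 4K base pages */
  #define MAX_PAGE_ORDER 10   /* assumed buddy allocator limit */

  /* Generic entry point: the order sanity check lives here once,
   * so every platform caller gets it for free. */
  static void hugetlb_cma_reserve(int order)
  {
          /* CMA reservation only makes sense for gigantic pages,
           * i.e. orders the page allocator cannot satisfy. */
          if (order <= MAX_PAGE_ORDER)
                  fprintf(stderr, "order %d fits the page allocator\n", order);
          /* ... CMA area reservation elided ... */
  }

  int main(void)
  {
          hugetlb_cma_reserve(30 - PAGE_SHIFT); /* 1G page: order 18, no warning */
          hugetlb_cma_reserve(9);               /* triggers the warning */
          return 0;
  }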

Link: https://lkml.kernel.org/r/20240209054221.1403364-1-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
 3 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c

@@ -45,13 +45,6 @@ void __init arm64_hugetlb_cma_reserve(void)
 	else
 		order = CONT_PMD_SHIFT - PAGE_SHIFT;
 
-	/*
-	 * HugeTLB CMA reservation is required for gigantic
-	 * huge pages which could not be allocated via the
-	 * page allocator. Just warn if there is any change
-	 * breaking this assumption.
-	 */
-	WARN_ON(order <= MAX_PAGE_ORDER);
 	hugetlb_cma_reserve(order);
 }
 #endif /* CONFIG_CMA */

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c

@@ -614,8 +614,6 @@ void __init gigantic_hugetlb_cma_reserve(void)
 		 */
 		order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
 
-	if (order) {
-		VM_WARN_ON(order <= MAX_PAGE_ORDER);
+	if (order)
 		hugetlb_cma_reserve(order);
-	}
 }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c

@@ -7720,6 +7720,13 @@ void __init hugetlb_cma_reserve(int order)
 	bool node_specific_cma_alloc = false;
 	int nid;
 
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	VM_WARN_ON(order <= MAX_PAGE_ORDER);
 	cma_reserve_called = true;
 
 	if (!hugetlb_cma_size)