Mirror of https://mirrors.bfsu.edu.cn/git/linux.git (synced 2025-01-21 21:34:58 +08:00)
Commit 8d57470d8f

Get pgt_buf early from BRK, and use it to map PMD_SIZE from the top at first. Then use the mapped pages to map more ranges below, and keep looping until all pages get mapped.

alloc_low_page() uses pages from BRK at first; after that buffer is used up, it uses memblock to find and reserve pages for page-table usage.

Introduce min_pfn_mapped to make sure new pages are found only in already-mapped ranges; it is updated as lower pages get mapped. Also add step_size to make sure we don't try to map too big a range with the limited mapped pages available initially, and increase step_size as more mapped pages are on hand.

We don't need to call pagetable_reserve anymore; the reserve work is done directly in alloc_low_page(). At last we can get rid of the calculate-and-find-early-pgt related code.

-v2: update to after the fix_xen change; also use a macro for the initial pgt_buf size and add comments for it.
-v3: skip the big reserved range in memblock.reserved near the end of RAM.
-v4: the fix_xen change is no longer needed.
-v5: add a changelog note about moving page-table reservation into alloc_low_page().

Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1353123563-3103-22-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
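The interplay of min_pfn_mapped and step_size is easiest to see in the top-down loop itself. The sketch below is a self-contained user-space simulation of the walk the changelog describes, not the kernel's actual init code: the constants (a pretend 1 GiB end of RAM, a step-size growth shift of 5) and the helpers map_range() and round_down_ul() are made-up stand-ins.

```c
/*
 * Simulation of the top-down mapping loop: start with a small step
 * near the top of RAM, then grow the step as more pages get mapped.
 * All names and constants here are illustrative, not the kernel's.
 */
#include <stdio.h>

#define PAGE_SHIFT      12
#define PMD_SIZE        (1UL << 21)  /* 2 MiB: initial step, small enough for the BRK pgt_buf */
#define ISA_END_ADDRESS 0x100000UL   /* 1 MiB: the loop stops here; ISA range is mapped separately */
#define STEP_SIZE_SHIFT 5            /* grow step_size 32x after each successful bigger pass */

static unsigned long min_pfn_mapped; /* lowest pfn mapped so far; page-table pages must come from above it */

/* Stand-in for the real range-mapping helper: just log the range. */
static unsigned long map_range(unsigned long start, unsigned long end)
{
	printf("map [%#012lx .. %#012lx)\n", start, end);
	return end - start;          /* bytes newly mapped */
}

static unsigned long round_down_ul(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);     /* align must be a power of two */
}

int main(void)
{
	unsigned long real_end = 1UL << 30;  /* pretend usable RAM tops out at 1 GiB */
	unsigned long step_size = PMD_SIZE;
	unsigned long last_start = real_end;
	unsigned long mapped = 0;

	min_pfn_mapped = real_end >> PAGE_SHIFT;

	while (last_start > ISA_END_ADDRESS) {
		unsigned long start, new_mapped;

		if (last_start > step_size) {
			start = round_down_ul(last_start - 1, step_size);
			if (start < ISA_END_ADDRESS)
				start = ISA_END_ADDRESS;
		} else {
			start = ISA_END_ADDRESS;
		}

		new_mapped = map_range(start, last_start);
		last_start = start;
		/* Lower the floor only after the covering range is mapped. */
		min_pfn_mapped = last_start >> PAGE_SHIFT;

		/* Only grow the step once a bigger range than before got mapped. */
		if (new_mapped > mapped)
			step_size <<= STEP_SIZE_SHIFT;
		mapped += new_mapped;
	}
	return 0;
}
```

Each pass may only consume page-table pages from ranges mapped by earlier passes, which is why min_pfn_mapped drops only after a range is fully mapped, and why step_size starts small enough for the BRK buffer to cover before it is allowed to grow.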
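The allocation side can be sketched the same way: BRK-backed pgt_buf pages first, then memblock-reserved pages taken from already-mapped RAM. Again this is a user-space illustration under made-up names (alloc_low_page_pfn, reserve_mapped_page) and constants, not the kernel's alloc_low_page():

```c
/*
 * Sketch of the allocation policy: drain the BRK-provided buffer,
 * then fall back to reserving pages from already-mapped ranges.
 */
#include <stdio.h>

/* Pretend pfn range handed to us from BRK (6 pages, made-up size). */
static unsigned long pgt_buf_end = 0x1000;
static unsigned long pgt_buf_top = 0x1006;

/* Window of already-mapped pfns the fallback may allocate from. */
static unsigned long min_pfn_mapped = 0x8000;
static unsigned long max_pfn_mapped = 0x9000;

/* Stand-in for a memblock find-and-reserve inside [min_pfn_mapped,
 * max_pfn_mapped); here we just hand out the topmost mapped page. */
static unsigned long reserve_mapped_page(void)
{
	return --max_pfn_mapped;
}

static unsigned long alloc_low_page_pfn(void)
{
	if (pgt_buf_end < pgt_buf_top)
		return pgt_buf_end++;        /* BRK buffer still has room */
	/* Buffer used up: take a page from mapped RAM.  The reservation
	 * happens right here, so no separate pagetable_reserve() pass. */
	return reserve_mapped_page();
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		printf("pgt page %d -> pfn %#lx\n", i, alloc_low_page_pfn());
	return 0;
}
```

Because the fallback reserves each page at allocation time, the old two-step scheme of sizing a worst-case buffer up front and then calling pagetable_reserve() for the part actually used becomes unnecessary.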
Directory listing (arch/x86/mm) at this commit:

kmemcheck
amdtopology.c
dump_pagetables.c
extable.c
fault.c
gup.c
highmem_32.c
hugetlbpage.c
init_32.c
init_64.c
init.c
iomap_32.c
ioremap.c
kmmio.c
Makefile
memtest.c
mmap.c
mmio-mod.c
numa_32.c
numa_64.c
numa_emulation.c
numa_internal.h
numa.c
pageattr-test.c
pageattr.c
pat_internal.h
pat_rbtree.c
pat.c
pf_in.c
pf_in.h
pgtable_32.c
pgtable.c
physaddr.c
physaddr.h
setup_nx.c
srat.c
testmmiotrace.c
tlb.c