Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"
This reverts commit 864b75f9d6.

Commit 864b75f9d6 ("mm/page_alloc: fix memmap_init_zone pageblock alignment") modified the logic in memmap_init_zone() to initialize struct pages associated with invalid PFNs, to appease a VM_BUG_ON() in move_freepages(), which is redundant by its own admission, and dereferences struct page fields to obtain the zone without checking whether the struct pages in question are valid to begin with.

Commit 864b75f9d6 only makes it worse, since the rounding it does may cause pfn to assume the same value it had in a prior iteration of the loop, resulting in an infinite loop and a hang very early in the boot. Also, since it doesn't perform the same rounding on start_pfn itself but only on intermediate values following an invalid PFN, we may still hit the same VM_BUG_ON() as before.

So instead, let's fix this at the core, and ensure that the BUG check doesn't dereference struct page fields of invalid pages.

Fixes: 864b75f9d6 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
Tested-by: Jan Glauber <jglauber@cavium.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Cc: Daniel Vacek <neelx@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 274a1ff070
commit 3e04040df6
mm/page_alloc.c

@@ -1910,7 +1910,9 @@ static int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * grouping pages by mobility
 	 */
-	VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
+	VM_BUG_ON(pfn_valid(page_to_pfn(start_page)) &&
+		  pfn_valid(page_to_pfn(end_page)) &&
+		  page_zone(start_page) != page_zone(end_page));
 #endif
 
 	if (num_movable)
@@ -5359,14 +5361,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-			 * on our next iteration of the loop. Note that it needs
-			 * to be pageblock aligned even when the region itself
-			 * is not. move_freepages_block() can shift ahead of
-			 * the valid region but still depends on correct page
-			 * metadata.
+			 * on our next iteration of the loop.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
-			       ~(pageblock_nr_pages-1)) - 1;
+			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
 #endif
 			continue;
 		}
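The restored VM_BUG_ON() condition is safe because && evaluates left to right and stops at the first false operand, so page_zone() is never reached for a page whose PFN is invalid. A minimal sketch of that guarantee, with demo_pfn_valid()/demo_page_zone() as hypothetical stand-ins for the kernel helpers and a NULL pointer standing in for an invalid struct page:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct page { int zone; };

/* Hypothetical stand-ins: treat a NULL page as one backed by an
 * invalid PFN, which must never be dereferenced. */
static bool demo_pfn_valid(const struct page *p) { return p != NULL; }
static int demo_page_zone(const struct page *p) { return p->zone; }

int main(void)
{
	struct page first = { .zone = 0 };
	struct page *start_page = &first;
	struct page *end_page = NULL;	/* invalid: dereferencing would crash */

	/* Same shape as the restored VM_BUG_ON() condition: because &&
	 * short-circuits, demo_page_zone() runs only when both pages
	 * are valid, so the NULL end_page is never touched. */
	bool bug = demo_pfn_valid(start_page) &&
		   demo_pfn_valid(end_page) &&
		   demo_page_zone(start_page) != demo_page_zone(end_page);

	assert(!bug);
	puts("invalid page skipped without being dereferenced");
	return 0;
}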