mm/page_isolation.c: make start_isolate_page_range() fail if already isolated

start_isolate_page_range() is used to set the migrate type of a set of
pageblocks to MIGRATE_ISOLATE while attempting to start a migration
operation.  It assumes that only one thread is calling it for the
specified range.  This routine is used by CMA, memory hotplug and
gigantic huge pages.  Each of these users synchronizes access to the
range within its own subsystem.  However, two subsystems (CMA and gigantic
huge pages for example) could attempt operations on the same range.  If
this happens, one thread may 'undo' the work another thread is doing.
This can result in pageblocks being incorrectly left marked as
MIGRATE_ISOLATE and therefore not available for page allocation.

What is ideally needed is a way to synchronize access to a set of
pageblocks that are undergoing isolation and migration.  The only thing
we know about these pageblocks is that they are all in the same zone.  A
per-node mutex is too coarse as we want to allow multiple operations on
different ranges within the same zone concurrently.  Instead, we will
use the migration type of the pageblocks themselves as a form of
synchronization.

start_isolate_page_range sets the migration type on a set of pageblocks,
going in order from the one associated with the smallest pfn to the
largest pfn.  The zone lock is acquired to check and set the migration
type.  When going through the list of pageblocks, check whether
MIGRATE_ISOLATE is already set.  If so, this indicates another thread is
working on this pageblock.  We know exactly which pageblocks we set, so
clean up by undoing those and return -EBUSY.
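
As a sketch, the resulting control flow looks like this (simplified
from the final kernel function: the pageblock alignment BUG_ONs and
memory hotplug details are dropped):

int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
			     unsigned migratetype, bool skip_hwpoisoned_pages)
{
	unsigned long pfn, undo_pfn;
	struct page *page;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		page = __first_valid_page(pfn, pageblock_nr_pages);
		if (page && set_migratetype_isolate(page, migratetype,
						    skip_hwpoisoned_pages)) {
			/* Another thread owns this pageblock: undo our work. */
			undo_pfn = pfn;
			goto undo;
		}
	}
	return 0;

undo:
	/* Restore only the pageblocks this thread isolated. */
	for (pfn = start_pfn; pfn < undo_pfn; pfn += pageblock_nr_pages)
		unset_migratetype_isolate(pfn_to_page(pfn), migratetype);

	return -EBUSY;
}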

This allows start_isolate_page_range to serve as a synchronization
mechanism and will allow for more general use of these interfaces by
other callers.  Update the comments in alloc_contig_range to reflect
this new functionality.

Each CPU holds the associated zone lock while modifying or examining the
migration type of a pageblock, and it will only examine/update a single
pageblock per lock acquire/release cycle.
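
Inside set_migratetype_isolate, that per-pageblock cycle looks roughly
like this (a sketch only: the unmovable-page checks, memory isolation
notifier, and freelist accounting are elided):

static int set_migratetype_isolate(struct page *page, int migratetype,
				   bool skip_hwpoisoned_pages)
{
	struct zone *zone = page_zone(page);
	unsigned long flags;
	int ret = -EBUSY;

	spin_lock_irqsave(&zone->lock, flags);

	/* Already MIGRATE_ISOLATE: another thread claimed this pageblock. */
	if (is_migrate_isolate_page(page))
		goto out;

	/* (movability checks elided) */
	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
	ret = 0;
out:
	spin_unlock_irqrestore(&zone->lock, flags);
	return ret;
}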

Link: http://lkml.kernel.org/r/20180309224731.16978-1-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/page_alloc.c     |  8 ++++----
 mm/page_isolation.c | 18 +++++++++++++++++-
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7765,11 +7765,11 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
  * @gfp_mask:	GFP mask to use during compaction
  *
  * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
- * aligned, however it's the caller's responsibility to guarantee that
- * we are the only thread that changes migrate type of pageblocks the
- * pages fall in.
+ * aligned.  The PFN range must belong to a single zone.
  *
- * The PFN range must belong to a single zone.
+ * The first thing this routine does is attempt to MIGRATE_ISOLATE all
+ * pageblocks in the range.  Once isolated, the pageblocks should not
+ * be modified by others.
  *
  * Returns zero on success or negative error code.  On success all
  * pages which PFN is in [start, end) are allocated for the caller and

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -28,6 +28,14 @@ static int set_migratetype_isolate(struct page *page, int migratetype,
 
 	spin_lock_irqsave(&zone->lock, flags);
 
+	/*
+	 * We assume the caller intended to SET migrate type to isolate.
+	 * If it is already set, then someone else must have raced and
+	 * set it before us.  Return -EBUSY
+	 */
+	if (is_migrate_isolate_page(page))
+		goto out;
+
 	pfn = page_to_pfn(page);
 	arg.start_pfn = pfn;
 	arg.nr_pages = pageblock_nr_pages;
@@ -166,7 +174,15 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * future will not be allocated again.
  *
  * start_pfn/end_pfn must be aligned to pageblock_order.
- * Returns 0 on success and -EBUSY if any part of range cannot be isolated.
+ * Return 0 on success and -EBUSY if any part of range cannot be isolated.
+ *
+ * There is no high level synchronization mechanism that prevents two threads
+ * from trying to isolate overlapping ranges.  If this happens, one thread
+ * will notice pageblocks in the overlapping range already set to isolate.
+ * This happens in set_migratetype_isolate, and set_migratetype_isolate
+ * returns an error.  We then clean up by restoring the migration type on
+ * pageblocks we may have modified and return -EBUSY to caller.  This
+ * prevents two threads from simultaneously working on overlapping ranges.
  */
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     unsigned migratetype, bool skip_hwpoisoned_pages)
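
For illustration only (this helper is hypothetical and not part of the
patch), a caller such as the gigantic huge page allocator can now treat
-EBUSY from alloc_contig_range as "someone else is isolating part of
this range" and simply move on to the next candidate range:

static struct page *alloc_gigantic_sketch(int nid, unsigned long nr_pages)
{
	/* Hypothetical scan bounds; a real caller iterates suitable zones. */
	unsigned long pfn = node_start_pfn(nid);
	unsigned long end_pfn = node_start_pfn(nid) + node_spanned_pages(nid);

	for (; pfn + nr_pages <= end_pfn; pfn += nr_pages) {
		if (alloc_contig_range(pfn, pfn + nr_pages,
				       MIGRATE_MOVABLE, GFP_KERNEL) == 0)
			return pfn_to_page(pfn);
		/* -EBUSY (range contested) or other failure: try next range. */
	}
	return NULL;
}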