#ifndef _LINUX_RMAP_H
#define _LINUX_RMAP_H
/*
 * Declarations for Reverse Mapping functions in mm/rmap.c
 */

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/rwsem.h>
#include <linux/memcontrol.h>

/*
 * The anon_vma heads a list of private "related" vmas, to scan if
 * an anonymous page pointing to this anon_vma needs to be unmapped:
 * the vmas on the list will be related by forking, or by splitting.
 *
 * Since vmas come and go as they are split and merged (particularly
 * in mprotect), the mapping field of an anonymous page cannot point
 * directly to a vma: instead it points to an anon_vma, on whose list
 * the related vmas can be easily linked or unlinked.
 *
 * After unlinking the last vma on the list, we must garbage collect
 * the anon_vma object itself: we're guaranteed no page can be
 * pointing to this anon_vma once its vma list is empty.
 */
struct anon_vma {
	struct anon_vma *root;		/* Root of this anon_vma tree */
	struct rw_semaphore rwsem;	/* W: modification, R: walking the list */
	/*
	 * The refcount is taken on an anon_vma when there is no
	 * guarantee that the vma of page tables will exist for
	 * the duration of the operation. A caller that takes
	 * the reference is responsible for clearing up the
	 * anon_vma if they are the last user on release
	 */
	atomic_t refcount;

	/*
	 * NOTE: the LSB of the rb_root.rb_node is set by
	 * mm_take_all_locks() _after_ taking the above lock. So the
	 * rb_root must only be read/written after taking the above lock
	 * to be sure to see a valid next pointer. The LSB bit itself
	 * is serialized by a system wide lock only visible to
	 * mm_take_all_locks() (mm_all_locks_mutex).
	 */
	struct rb_root rb_root;	/* Interval tree of private "related" vmas */
};

/*
 * The copy-on-write semantics of fork mean that an anon_vma
 * can become associated with multiple processes. Furthermore,
 * each child process will have its own anon_vma, where new
 * pages for that process are instantiated.
 *
 * This structure allows us to find the anon_vmas associated
 * with a VMA, or the VMAs associated with an anon_vma.
 * The "same_vma" list contains the anon_vma_chains linking
 * all the anon_vmas associated with this VMA.
 * The "rb" field indexes on an interval tree the anon_vma_chains
 * which link all the VMAs associated with this anon_vma.
 */
struct anon_vma_chain {
	struct vm_area_struct *vma;
	struct anon_vma *anon_vma;
	struct list_head same_vma;	/* locked by mmap_sem & page_table_lock */
	struct rb_node rb;		/* locked by anon_vma->rwsem */
	unsigned long rb_subtree_last;
#ifdef CONFIG_DEBUG_VM_RB
	unsigned long cached_vma_start, cached_vma_last;
#endif
};
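
/*
 * Illustrative sketch (not part of the upstream header): each
 * anon_vma_chain is one edge in the many-to-many graph between VMAs
 * and anon_vmas, so a vma reaches every anon_vma it is linked to by
 * walking its same_vma list; handle_avc() is a hypothetical callback:
 *
 *	struct anon_vma_chain *avc;
 *
 *	list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
 *		handle_avc(avc->anon_vma);
 */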

enum ttu_flags {
	TTU_UNMAP = 0,			/* unmap mode */
	TTU_MIGRATION = 1,		/* migration mode */
	TTU_MUNLOCK = 2,		/* munlock mode */
	TTU_ACTION_MASK = 0xff,

	TTU_IGNORE_MLOCK = (1 << 8),	/* ignore mlock */
	TTU_IGNORE_ACCESS = (1 << 9),	/* don't age */
	TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
};
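
/*
 * Usage sketch (illustrative only): the low byte selects the action
 * and the high bits are modifiers, so callers combine them with
 * bitwise-or and recover the action with TTU_ACTION():
 *
 *	ret = try_to_unmap(page, TTU_UNMAP | TTU_IGNORE_MLOCK);
 */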

#ifdef CONFIG_MMU
static inline void get_anon_vma(struct anon_vma *anon_vma)
{
	atomic_inc(&anon_vma->refcount);
}

void __put_anon_vma(struct anon_vma *anon_vma);

static inline void put_anon_vma(struct anon_vma *anon_vma)
{
	if (atomic_dec_and_test(&anon_vma->refcount))
		__put_anon_vma(anon_vma);
}
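
/*
 * Typical refcount pattern (illustrative): pin the anon_vma while the
 * vma or its page tables may go away, then drop the pin; the final
 * put_anon_vma() frees the object via __put_anon_vma():
 *
 *	get_anon_vma(anon_vma);
 *	... work during which the vma may disappear ...
 *	put_anon_vma(anon_vma);
 */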

static inline struct anon_vma *page_anon_vma(struct page *page)
{
	if (((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) !=
					    PAGE_MAPPING_ANON)
		return NULL;
	return page_rmapping(page);
}

static inline void vma_lock_anon_vma(struct vm_area_struct *vma)
{
	struct anon_vma *anon_vma = vma->anon_vma;
	if (anon_vma)
		down_write(&anon_vma->root->rwsem);
}

static inline void vma_unlock_anon_vma(struct vm_area_struct *vma)
{
	struct anon_vma *anon_vma = vma->anon_vma;
	if (anon_vma)
		up_write(&anon_vma->root->rwsem);
}

static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
{
	down_write(&anon_vma->root->rwsem);
}

static inline void anon_vma_unlock(struct anon_vma *anon_vma)
{
	up_write(&anon_vma->root->rwsem);
}

static inline void anon_vma_lock_read(struct anon_vma *anon_vma)
{
	down_read(&anon_vma->root->rwsem);
}

static inline void anon_vma_unlock_read(struct anon_vma *anon_vma)
{
	up_read(&anon_vma->root->rwsem);
}
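
/*
 * Locking sketch (illustrative): rmap walkers take the root anon_vma's
 * rwsem for read, while writers that modify the interval tree take it
 * for write:
 *
 *	anon_vma_lock_read(anon_vma);
 *	... walk anon_vma->rb_root ...
 *	anon_vma_unlock_read(anon_vma);
 */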

/*
 * anon_vma helper functions.
 */
void anon_vma_init(void);	/* create anon_vma_cachep */
int  anon_vma_prepare(struct vm_area_struct *);
void unlink_anon_vmas(struct vm_area_struct *);
int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *);
int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
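
/*
 * Fork-time sketch (illustrative): anon_vma_fork() links the child vma
 * to the parent's anon_vmas and also gives the child its own anon_vma
 * for newly COWed pages; it can fail with -ENOMEM, so dup_mmap()-style
 * callers must check the return value:
 *
 *	if (anon_vma_fork(child_vma, parent_vma))
 *		goto fail_nomem;
 */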

static inline void anon_vma_merge(struct vm_area_struct *vma,
				  struct vm_area_struct *next)
{
	VM_BUG_ON(vma->anon_vma != next->anon_vma);
	unlink_anon_vmas(next);
}

struct anon_vma *page_get_anon_vma(struct page *page);

/*
 * rmap interfaces called when adding or removing pte of page
 */
void page_move_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
void page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
			   unsigned long, int);
void page_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
void page_add_file_rmap(struct page *);
void page_remove_rmap(struct page *);

void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
			    unsigned long);
void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *,
				unsigned long);

static inline void page_dup_rmap(struct page *page)
{
	atomic_inc(&page->_mapcount);
}

/*
 * Called from mm/vmscan.c to handle paging out
 */
int page_referenced(struct page *, int is_locked,
			struct mem_cgroup *memcg, unsigned long *vm_flags);
int page_referenced_one(struct page *, struct vm_area_struct *,
	unsigned long address, unsigned int *mapcount, unsigned long *vm_flags);

#define TTU_ACTION(x) ((x) & TTU_ACTION_MASK)

int try_to_unmap(struct page *, enum ttu_flags flags);
int try_to_unmap_one(struct page *, struct vm_area_struct *,
			unsigned long address, enum ttu_flags flags);

/*
 * Called from mm/filemap_xip.c to unmap empty zero page
 */
pte_t *__page_check_address(struct page *, struct mm_struct *,
				unsigned long, spinlock_t **, int);

static inline pte_t *page_check_address(struct page *page, struct mm_struct *mm,
					unsigned long address,
					spinlock_t **ptlp, int sync)
{
	pte_t *ptep;

	__cond_lock(*ptlp, ptep = __page_check_address(page, mm, address,
						       ptlp, sync));
	return ptep;
}
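
/*
 * Usage sketch (illustrative): on success the returned pte is mapped
 * and its page-table lock is held, so the caller must unmap and unlock
 * when done:
 *
 *	spinlock_t *ptl;
 *	pte_t *pte = page_check_address(page, mm, address, &ptl, 0);
 *
 *	if (pte) {
 *		... inspect or modify *pte ...
 *		pte_unmap_unlock(pte, ptl);
 *	}
 */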

/*
 * Used by swapoff to help locate where page is expected in vma.
 */
unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);

/*
 * Cleans the PTEs of shared mappings.
 * (and since clean PTEs should also be readonly, write protects them too)
 *
 * returns the number of cleaned PTEs.
 */
int page_mkclean(struct page *);

/*
 * called in munlock()/munmap() path to check for other vmas holding
 * the page mlocked.
 */
int try_to_munlock(struct page *);

/*
 * Called by memory-failure.c to kill processes.
 */
struct anon_vma *page_lock_anon_vma_read(struct page *page);
void page_unlock_anon_vma_read(struct anon_vma *anon_vma);
int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);

/*
 * Called by migrate.c to remove migration ptes, but might be used more later.
 */
int rmap_walk(struct page *page, int (*rmap_one)(struct page *,
		struct vm_area_struct *, unsigned long, void *), void *arg);
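
/*
 * Callback sketch (illustrative; fixup_one() is hypothetical):
 * rmap_walk() invokes the callback for each mapping of the page and
 * keeps going as long as it returns SWAP_AGAIN:
 *
 *	static int fixup_one(struct page *page, struct vm_area_struct *vma,
 *			     unsigned long addr, void *arg)
 *	{
 *		... fix up the mapping of page at addr ...
 *		return SWAP_AGAIN;
 *	}
 *
 *	rmap_walk(page, fixup_one, NULL);
 */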

#else	/* !CONFIG_MMU */

#define anon_vma_init()		do {} while (0)
#define anon_vma_prepare(vma)	(0)
#define anon_vma_link(vma)	do {} while (0)

static inline int page_referenced(struct page *page, int is_locked,
				  struct mem_cgroup *memcg,
				  unsigned long *vm_flags)
{
	*vm_flags = 0;
	return 0;
}

#define try_to_unmap(page, refs) SWAP_FAIL

static inline int page_mkclean(struct page *page)
{
	return 0;
}

#endif	/* CONFIG_MMU */

/*
 * Return values of try_to_unmap
 */
#define SWAP_SUCCESS	0
#define SWAP_AGAIN	1
#define SWAP_FAIL	2
#define SWAP_MLOCK	3
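
/*
 * Example (illustrative): vmscan-style handling of the try_to_unmap()
 * return value:
 *
 *	ret = try_to_unmap(page, TTU_UNMAP);
 *	if (ret == SWAP_SUCCESS)
 *		reclaim the page: all ptes are gone
 *	else if (ret == SWAP_AGAIN)
 *		retry later: a transient failure
 *	else if (ret == SWAP_MLOCK)
 *		move the page to the unevictable list
 *	else	// SWAP_FAIL
 *		keep the page mapped
 */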

#endif	/* _LINUX_RMAP_H */