mm: add new api to enable ksm per process
Patch series "mm: process/cgroup ksm support", v9.

So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language. An example of this are programs with forked workloads using a garbage-collected language without pointers. In such a language madvise cannot be made available. In addition, the addresses of objects get moved around as they are garbage collected. KSM sharing needs to be enabled "from the outside" for these types of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings no benefit or even has overhead. We'd like to be able to enable KSM on a workload-by-workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the current process: it is a workload-local decision. A considerable number of sharing opportunities may exist across multiple workloads or jobs (if they are part of the same security domain). Only a higher-level entity like a job scheduler or container can know for certain if it is running one or more instances of a job. That job scheduler, however, doesn't have the necessary internal workload knowledge to make targeted madvise calls.

Security concerns:
In previous discussions security concerns have been brought up. The problem is that an individual workload does not have knowledge about what else is running on a machine. Therefore it has to be very conservative about which memory areas can be shared. However, if the system is dedicated to running multiple jobs within the same security domain, it is the job scheduler that has the knowledge that sharing can be safely enabled and is even desirable.

Performance:
Experiments with using UKSM have shown a capacity increase of around 20%. Here are the metrics from an Instagram workload (taken from a machine with 64GB main memory):

  full_scans: 445
  general_profit: 20158298048
  max_page_sharing: 256
  merge_across_nodes: 1
  pages_shared: 129547
  pages_sharing: 5119146
  pages_to_scan: 4000
  pages_unshared: 1760924
  pages_volatile: 10761341
  run: 1
  sleep_millisecs: 20
  stable_node_chains: 167
  stable_node_chains_prune_millisecs: 2000
  stable_node_dups: 2751
  use_zero_pages: 0
  zero_pages_sharing: 0

After the service has been running for 30 minutes to an hour, 4 to 5 million shared pages are common for this workload when using KSM.

Detailed changes:

1. New options for prctl system command
   This patch series adds two new options to the prctl system call. The first one allows KSM to be enabled at the process level and the second one queries the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing
   When KSM is enabled at the process level, the KSM code will iterate over all the VMAs and enable KSM for the eligible VMAs. When forking a process that has KSM enabled, the setting will be inherited by the new child process.

3. Add general_profit metric
   The general_profit metric of KSM is specified in the documentation, but not calculated. This adds the general profit metric to /sys/kernel/debug/mm/ksm.

4. Add more metrics to ksm_stat
   This adds the process profit metric to /proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests and ksm_functional_tests
   This adds an option to specify the merge type to the ksm_tests. This allows testing both madvise- and prctl-based KSM. It also adds two new tests to ksm_functional_tests: one to test the new prctl options, and the other a fork test to verify that the KSM process setting is inherited by child processes.

This patch (of 3):

So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level.

1. New options for prctl system command
   This patch series adds two new options to the prctl system call. The first one allows KSM to be enabled at the process level and the second one queries the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing
   When KSM is enabled at the process level, the KSM code will iterate over all the VMAs and enable KSM for the eligible VMAs. When forking a process that has KSM enabled, the setting will be inherited by the new child process.

   1) Introduce new MMF_VM_MERGE_ANY flag
      This introduces the new MMF_VM_MERGE_ANY flag. When this flag is set, kernel samepage merging (ksm) gets enabled for all VMAs of a process.

   2) Setting VM_MERGEABLE on VMA creation
      When a VMA is created, if the MMF_VM_MERGE_ANY flag is set, the VM_MERGEABLE flag will be set for this VMA.

   3) Support disabling of ksm for a process
      This adds the ability to disable ksm for a process if ksm has been enabled for the process with prctl.

   4) Add new prctl option to get and set ksm for a process
      This adds two new options to the prctl system call:
      - enable ksm for all VMAs of a process (if the VMAs support it);
      - query if ksm has been enabled for a process.

3. Disabling MMF_VM_MERGE_ANY for storage keys in s390
   In the s390 architecture, when storage keys are used, MMF_VM_MERGE_ANY will be disabled.

Link: https://lkml.kernel.org/r/20230418051342.1919757-1-shr@devkernel.io
Link: https://lkml.kernel.org/r/20230418051342.1919757-2-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit d7597f59d1
parent 2124f79de6
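For reference, enabling and querying the new setting from user space is a plain prctl() call. The sketch below is not part of the patch; it assumes a kernel built with CONFIG_KSM and defines the two constants locally (values 67 and 68, as added to include/uapi/linux/prctl.h further down) in case the installed headers predate this change.

/* sketch: enable process-wide KSM via the new prctl interface */
#include <stdio.h>
#include <sys/prctl.h>

/* Fallback definitions in case the installed headers predate this patch. */
#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE 67
#endif
#ifndef PR_GET_MEMORY_MERGE
#define PR_GET_MEMORY_MERGE 68
#endif

int main(void)
{
        /* Unused arguments must be zero, or the kernel returns -EINVAL. */
        if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0)) {
                perror("PR_SET_MEMORY_MERGE");
                return 1;
        }

        /* Returns 1 if MMF_VM_MERGE_ANY is set for this process, 0 otherwise. */
        printf("KSM merge-any enabled: %d\n",
               prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
        return 0;
}

Note that PR_GET_MEMORY_MERGE reports the state as the syscall return value rather than through a pointer argument, matching the handler added to kernel/sys.c below.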
arch/s390/mm/gmap.c
@@ -2591,6 +2591,13 @@ int gmap_mark_unmergeable(void)
        int ret;
        VMA_ITERATOR(vmi, mm, 0);
 
+       /*
+        * Make sure to disable KSM (if enabled for the whole process or
+        * individual VMAs). Note that nothing currently hinders user space
+        * from re-enabling it.
+        */
+       clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
        for_each_vma(vmi, vma) {
                /* Copy vm_flags to avoid partial modifications in ksm_madvise */
                vm_flags = vma->vm_flags;
include/linux/ksm.h
@@ -18,13 +18,26 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
                unsigned long end, int advice, unsigned long *vm_flags);
+
+void ksm_add_vma(struct vm_area_struct *vma);
+int ksm_enable_merge_any(struct mm_struct *mm);
+
 int __ksm_enter(struct mm_struct *mm);
 void __ksm_exit(struct mm_struct *mm);
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-       if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-               return __ksm_enter(mm);
+       int ret;
+
+       if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) {
+               ret = __ksm_enter(mm);
+               if (ret)
+                       return ret;
+       }
+
+       if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+               set_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
        return 0;
 }
 
@@ -57,6 +70,10 @@ void collect_procs_ksm(struct page *page, struct list_head *to_kill,
 #endif
 #else /* !CONFIG_KSM */
 
+static inline void ksm_add_vma(struct vm_area_struct *vma)
+{
+}
+
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
        return 0;
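The ksm_fork() change above is what makes the setting hereditary: if MMF_VM_MERGE_ANY is set in the parent's mm, it is copied into the child's mm at fork time. A minimal user-space check of that behaviour might look like the sketch below (again not part of the patch; constants are defined locally in case the installed headers are older than this change).

/* sketch: verify that PR_SET_MEMORY_MERGE is inherited across fork() */
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE 67
#define PR_GET_MEMORY_MERGE 68
#endif

int main(void)
{
        if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0)) {
                perror("PR_SET_MEMORY_MERGE");
                return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {
                /* ksm_fork() copied MMF_VM_MERGE_ANY into the child's mm. */
                int inherited = prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0);
                printf("child sees merge-any: %d\n", inherited);
                exit(inherited == 1 ? 0 : 1);
        }

        int status;
        waitpid(pid, &status, 0);
        return WEXITSTATUS(status);
}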
include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_INIT_MASK          (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
                                 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
 
+#define MMF_VM_MERGE_ANY       29
 #endif /* _LINUX_SCHED_COREDUMP_H */
include/uapi/linux/prctl.h
@@ -292,4 +292,6 @@ struct prctl_mm_map {
 
 #define PR_GET_AUXV                    0x41555856
 
+#define PR_SET_MEMORY_MERGE            67
+#define PR_GET_MEMORY_MERGE            68
 #endif /* _LINUX_PRCTL_H */
kernel/sys.c (27 changed lines)
@@ -15,6 +15,7 @@
 #include <linux/highuid.h>
 #include <linux/fs.h>
 #include <linux/kmod.h>
+#include <linux/ksm.h>
 #include <linux/perf_event.h>
 #include <linux/resource.h>
 #include <linux/kernel.h>
@@ -2687,6 +2688,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
        case PR_SET_VMA:
                error = prctl_set_vma(arg2, arg3, arg4, arg5);
                break;
+#ifdef CONFIG_KSM
+       case PR_SET_MEMORY_MERGE:
+               if (arg3 || arg4 || arg5)
+                       return -EINVAL;
+               if (mmap_write_lock_killable(me->mm))
+                       return -EINTR;
+
+               if (arg2) {
+                       error = ksm_enable_merge_any(me->mm);
+               } else {
+                       /*
+                        * TODO: we might want disable KSM on all VMAs and
+                        * trigger unsharing to completely disable KSM.
+                        */
+                       clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+                       error = 0;
+               }
+               mmap_write_unlock(me->mm);
+               break;
+       case PR_GET_MEMORY_MERGE:
+               if (arg2 || arg3 || arg4 || arg5)
+                       return -EINVAL;
+
+               error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+               break;
+#endif
        default:
                error = -EINVAL;
                break;
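Because the new prctl cases are compiled only under CONFIG_KSM and the default case returns -EINVAL, user space can probe whether the interface exists at all. A hedged sketch of such a probe, assuming only the constant value 68 shown in the prctl.h hunk above:

/* sketch: distinguish the merge-any state from "prctl option not supported" */
#include <errno.h>
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_GET_MEMORY_MERGE
#define PR_GET_MEMORY_MERGE 68
#endif

int main(void)
{
        int ret = prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0);

        if (ret < 0 && errno == EINVAL)
                printf("PR_GET_MEMORY_MERGE not supported (old kernel or !CONFIG_KSM)\n");
        else if (ret >= 0)
                printf("merge-any is %s\n", ret ? "enabled" : "disabled");
        else
                perror("prctl");
        return 0;
}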
mm/ksm.c (104 changed lines)
@@ -515,6 +515,28 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
        return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+       if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE | VM_PFNMAP |
+                            VM_IO | VM_DONTEXPAND | VM_HUGETLB |
+                            VM_MIXEDMAP))
+               return false;           /* just ignore the advice */
+
+       if (vma_is_dax(vma))
+               return false;
+
+#ifdef VM_SAO
+       if (vma->vm_flags & VM_SAO)
+               return false;
+#endif
+#ifdef VM_SPARC_ADI
+       if (vma->vm_flags & VM_SPARC_ADI)
+               return false;
+#endif
+
+       return true;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
                                                 unsigned long addr)
 {
@@ -1026,6 +1048,7 @@ mm_exiting:
 
                mm_slot_free(mm_slot_cache, mm_slot);
                clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+               clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
                mmdrop(mm);
        } else
                spin_unlock(&ksm_mmlist_lock);
@@ -2408,6 +2431,7 @@ no_vmas:
 
                mm_slot_free(mm_slot_cache, mm_slot);
                clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+               clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
                mmap_read_unlock(mm);
                mmdrop(mm);
        } else {
@@ -2485,6 +2509,66 @@ static int ksm_scan_thread(void *nothing)
        return 0;
 }
 
+static void __ksm_add_vma(struct vm_area_struct *vma)
+{
+       unsigned long vm_flags = vma->vm_flags;
+
+       if (vm_flags & VM_MERGEABLE)
+               return;
+
+       if (vma_ksm_compatible(vma))
+               vm_flags_set(vma, VM_MERGEABLE);
+}
+
+/**
+ * ksm_add_vma - Mark vma as mergeable if compatible
+ *
+ * @vma:  Pointer to vma
+ */
+void ksm_add_vma(struct vm_area_struct *vma)
+{
+       struct mm_struct *mm = vma->vm_mm;
+
+       if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+               __ksm_add_vma(vma);
+}
+
+static void ksm_add_vmas(struct mm_struct *mm)
+{
+       struct vm_area_struct *vma;
+
+       VMA_ITERATOR(vmi, mm, 0);
+       for_each_vma(vmi, vma)
+               __ksm_add_vma(vma);
+}
+
+/**
+ * ksm_enable_merge_any - Add mm to mm ksm list and enable merging on all
+ *                        compatible VMA's
+ *
+ * @mm:  Pointer to mm
+ *
+ * Returns 0 on success, otherwise error code
+ */
+int ksm_enable_merge_any(struct mm_struct *mm)
+{
+       int err;
+
+       if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+               return 0;
+
+       if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
+               err = __ksm_enter(mm);
+               if (err)
+                       return err;
+       }
+
+       set_bit(MMF_VM_MERGE_ANY, &mm->flags);
+       ksm_add_vmas(mm);
+
+       return 0;
+}
+
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
                unsigned long end, int advice, unsigned long *vm_flags)
 {
@@ -2493,25 +2577,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 
        switch (advice) {
        case MADV_MERGEABLE:
-               /*
-                * Be somewhat over-protective for now!
-                */
-               if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
-                                VM_PFNMAP | VM_IO | VM_DONTEXPAND |
-                                VM_HUGETLB | VM_MIXEDMAP))
-                       return 0;               /* just ignore the advice */
-
-               if (vma_is_dax(vma))
+               if (vma->vm_flags & VM_MERGEABLE)
                        return 0;
-
-#ifdef VM_SAO
-               if (*vm_flags & VM_SAO)
+               if (!vma_ksm_compatible(vma))
                        return 0;
-#endif
-#ifdef VM_SPARC_ADI
-               if (*vm_flags & VM_SPARC_ADI)
-                       return 0;
-#endif
 
                if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
                        err = __ksm_enter(mm);
@@ -2615,6 +2684,7 @@ void __ksm_exit(struct mm_struct *mm)
 
        if (easy_to_free) {
                mm_slot_free(mm_slot_cache, mm_slot);
+               clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
                clear_bit(MMF_VM_MERGEABLE, &mm->flags);
                mmdrop(mm);
        } else if (mm_slot) {
mm/mmap.c
@@ -46,6 +46,7 @@
 #include <linux/pkeys.h>
 #include <linux/oom.h>
 #include <linux/sched/mm.h>
+#include <linux/ksm.h>
 
 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
@@ -2729,6 +2730,7 @@ unmap_writable:
        if (file && vm_flags & VM_SHARED)
                mapping_unmap_writable(file->f_mapping);
        file = vma->vm_file;
+       ksm_add_vma(vma);
 expanded:
        perf_event_mmap(vma);
 
@@ -3001,6 +3003,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
                goto mas_store_fail;
 
        mm->map_count++;
+       ksm_add_vma(vma);
 out:
        perf_event_mmap(vma);
        mm->total_vm += len >> PAGE_SHIFT;
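The ksm_add_vma() calls in mmap_region() and do_brk_flags() above mean that mappings created after PR_SET_MEMORY_MERGE are marked VM_MERGEABLE automatically, in addition to the existing VMAs walked by ksm_add_vmas(). The sketch below checks this from user space by counting VmFlags lines in /proc/self/smaps that carry the mergeable flag; the "mg" mnemonic is an assumption based on the existing smaps flag reporting, not something introduced by this patch.

/*
 * sketch: after PR_SET_MEMORY_MERGE, newly created anonymous mappings
 * should show the "mg" (VM_MERGEABLE) flag in /proc/self/smaps.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE 67
#endif

/* Count smaps VmFlags lines that contain the "mg" (mergeable) flag. */
static int count_mergeable_vmas(void)
{
        char line[512];
        int count = 0;
        FILE *f = fopen("/proc/self/smaps", "r");

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "VmFlags:", 8) && strstr(line, " mg"))
                        count++;
        fclose(f);
        return count;
}

int main(void)
{
        if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0)) {
                perror("PR_SET_MEMORY_MERGE");
                return 1;
        }

        /* This mapping goes through mmap_region() -> ksm_add_vma(). */
        if (mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        printf("mergeable VMAs: %d\n", count_mergeable_vmas());
        return 0;
}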