Documentation for /proc/sys/vm/*	kernel version 2.6
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- overcommit_memory
- overcommit_ratio
- page-cluster
- dirty_ratio
- dirty_background_ratio
- dirty_expire_centisecs
- dirty_writeback_centisecs
- vfs_cache_pressure
- max_map_count
- min_free_kbytes
- laptop_mode
- block_dump
- swap_token_timeout
- drop-caches
- percpu_pagelist_fraction
- zone_reclaim_mode
- zone_reclaim_interval

==============================================================

dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, vfs_cache_pressure, laptop_mode,
block_dump, swap_token_timeout, drop-caches:

See Documentation/filesystems/proc.txt

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.
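
A minimal sketch of the difference, assuming a 64-bit machine whose
RAM plus swap is smaller than the 16 GB requested below (the size is
an arbitrary example value):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		size_t huge = (size_t)16 << 30;	/* 16 GB "just-in-case" */
		void *p = malloc(huge);

		/* With overcommit_memory=1 the request is expected to
		 * succeed even though it can never be fully backed; with
		 * overcommit_memory=2 it fails up front as soon as the
		 * commit limit would be exceeded. */
		if (p == NULL)
			printf("allocation refused up front\n");
		else
			printf("allocation granted (pages not yet touched)\n");

		free(p);
		return 0;
	}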

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
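
As an illustrative calculation (the default overcommit_ratio is 50):
a hypothetical machine with 4 GB of RAM and 2 GB of swap would get a
commit limit of 2 GB + 50% of 4 GB = 4 GB of committed address space.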

==============================================================

page-cluster:

The Linux VM subsystem avoids excessive disk seeks by reading
multiple pages on a page fault. The number of pages it reads
is dependent on the amount of memory in your machine.

The number of pages the kernel reads in at once is equal to
2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
for swap because we only cluster swap data in 32-page groups.
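
For example, with page-cluster set to 3 (a common default) and 4 kB
pages, each read-in covers 2 ^ 3 = 8 pages, i.e. 32 kB at a time.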

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
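
A minimal sketch, not from the original text, of how a process can
exhaust this limit: each mmap() below alternates page protections so
that neighbouring mappings cannot be merged, and therefore every call
creates one new map area until mmap() fails once max_map_count is
reached:

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		long count = 0;

		for (;;) {
			/* Alternating PROT_READ and PROT_READ|PROT_WRITE
			 * keeps adjacent mappings from merging into a
			 * single map area. */
			int prot = (count & 1) ? PROT_READ
					       : PROT_READ | PROT_WRITE;
			void *p = mmap(NULL, 4096, prot,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p == MAP_FAILED)
				break;
			count++;
		}
		printf("%ld map areas created before hitting the limit\n",
		       count);
		return 0;
	}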

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages, in proportion to its size.
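
For example, with min_free_kbytes set to 4096 on a hypothetical
machine, a lowmem zone holding one quarter of the lowmem pages would
reserve about 1024 kB, i.e. a pages_min of 256 pages with 4 kB pages.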

==============================================================

percpu_pagelist_fraction:

This is the fraction of pages in each zone, at most, that can be
allocated to any single per-cpu page list (the high watermark,
pcp->high). The minimum value for this is 8, which means that we
don't allow more than 1/8th of the pages in each zone to be allocated
in any single per_cpu_pagelist. This entry only changes the value of
the hot per-cpu pagelists. A user can specify a number like 100 to
allocate 1/100th of each zone to each per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.
It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).
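
As an illustrative calculation: writing 50 to this file on a
hypothetical zone of 1,000,000 pages sets pcp->high to
1,000,000/50 = 20,000 pages, and batch to 20,000/4 = 5,000, which is
then clamped to the upper limit of PAGE_SHIFT * 8 (96 with 4 kB
pages, where PAGE_SHIFT is 12).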

The initial value is zero. The kernel does not use this value at
boot time to set the high water marks for each per-cpu page list.

==============================================================

zone_reclaim_mode:

zone_reclaim_mode allows one to set more or less aggressive
approaches to reclaim memory when a zone runs out of memory. If it
is set to zero then no zone reclaim occurs. Allocations will be
satisfied from other zones / nodes in the system.

The value is a bit mask, formed by ORing together:

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
8 = Also do a global slab reclaim pass
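
For example, a value of 3 (1 ORed with 2) enables zone reclaim and
allows it to write out dirty pages, but permits neither swapping nor
the global slab reclaim pass.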

zone_reclaim_mode is set during bootup to 1 if it is determined that
pages from remote zones will cause a measurable performance
reduction. The page allocator will then reclaim easily reusable
pages (those page cache pages that are currently not used) before
allocating off-node pages.

It may be beneficial to switch off zone reclaim if the system is
used as a file server and all of memory should be used for caching
files from disk. In that case the caching effect is more important
than data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.
Zone reclaim will write out dirty pages if a zone fills up, thereby
effectively throttling the process. This may decrease the
performance of a single process, since it can no longer use all of
system memory to buffer the outgoing writes, but it preserves the
memory on other nodes so that the performance of other processes
running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

It may be advisable to allow slab reclaim if the system makes heavy
use of files and builds up large slab caches. However, the slab
shrink operation is global, may take a long time, and will free
slabs on all nodes of the system.

==============================================================

zone_reclaim_interval:

The time allowed for off-node allocations after zone reclaim has
failed to reclaim enough pages to allow a local allocation.

The time is specified in seconds and defaults to 30 seconds.
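
In other words, with the default setting, after a zone reclaim pass
fails the allocator will satisfy requests from other nodes for up to
30 seconds before attempting zone reclaim on the local zone again.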

Reduce the interval if undesired off-node allocations occur. However,
too frequent scans will have a negative impact on off-node allocation
performance.