# linux/arch/frv/Kconfig


config FRV
	bool
	default y
	select HAVE_IDE
	select HAVE_ARCH_TRACEHOOK
	select HAVE_PERF_EVENTS
	select HAVE_UID16
	select HAVE_GENERIC_HARDIRQS
	select HAVE_VIRT_TO_BUS
	select GENERIC_IRQ_SHOW
	select HAVE_DEBUG_BUGVERBOSE
	select ARCH_HAVE_NMI_SAFE_CMPXCHG
	select GENERIC_CPU_DEVICES
	select ARCH_WANT_IPC_PARSE_VERSION
	select OLD_SIGSUSPEND3
	select OLD_SIGACTION

config ZONE_DMA
	bool
	default y

config RWSEM_GENERIC_SPINLOCK
	bool
	default y

config RWSEM_XCHGADD_ALGORITHM
	bool

config GENERIC_HWEIGHT
	bool
	default y

config GENERIC_CALIBRATE_DELAY
	bool
	default n

config TIME_LOW_RES
	bool
	default y

config QUICKLIST
	bool
	default y

config ARCH_HAS_ILOG2_U32
	bool
	default y

config ARCH_HAS_ILOG2_U64
	bool
	default y

config HZ
	int
	default 1000
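# Note: the jiffies <-> milli/microsecond conversion helpers in kernel/time.c
# derive their constants from this HZ value at build time.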
source "init/Kconfig"
source "kernel/Kconfig.freezer"
menu "Fujitsu FR-V system setup"

config MMU
	bool "MMU support"
	help
	  This option switches on and off support for the FR-V MMU
	  (effectively switching between vmlinux and uClinux). Not all FR-V
	  CPUs support this. Currently only the FR451 has a sufficiently
	  featured MMU.

config FRV_OUTOFLINE_ATOMIC_OPS
	bool "Out-of-line the FRV atomic operations"
	default n
	help
	  Setting this option causes the FR-V atomic operations to be mostly
	  implemented out-of-line.

	  See Documentation/frv/atomic-ops.txt for more information.

config HIGHMEM
	bool "High memory support"
	depends on MMU
	default y
	help
	  If you wish to use more than 256MB of memory with your MMU based
	  system, you will need to select this option. The kernel can only see
	  the memory between 0xC0000000 and 0xD0000000 directly... everything
	  else must be kmapped.

	  The arch is, however, capable of supporting up to 3GB of SDRAM.

config HIGHPTE
	bool "Allocate page tables in highmem"
	depends on HIGHMEM
	default y
	help
	  The VM uses one page of memory for each page table. For systems
	  with a lot of RAM, this can be wasteful of precious low memory.
	  Setting this option will put user-space page tables in high memory.
source "mm/Kconfig"
choice
prompt "uClinux kernel load address"
depends on !MMU
default UCPAGE_OFFSET_C0000000
help
This option sets the base address for the uClinux kernel. The kernel
will rearrange the SDRAM layout to start at this address, and move
itself to start there. It must be greater than 0, and it must be
sufficiently less than 0xE0000000 that the SDRAM does not intersect
the I/O region.
The base address must also be aligned such that the SDRAM controller
can decode it. For instance, a 512MB SDRAM bank must be 512MB aligned.

config UCPAGE_OFFSET_20000000
	bool "0x20000000"

config UCPAGE_OFFSET_40000000
	bool "0x40000000"

config UCPAGE_OFFSET_60000000
	bool "0x60000000"

config UCPAGE_OFFSET_80000000
	bool "0x80000000"

config UCPAGE_OFFSET_A0000000
	bool "0xA0000000"

config UCPAGE_OFFSET_C0000000
	bool "0xC0000000 (Recommended)"

endchoice

config PAGE_OFFSET
	hex
	default 0x20000000 if UCPAGE_OFFSET_20000000
	default 0x40000000 if UCPAGE_OFFSET_40000000
	default 0x60000000 if UCPAGE_OFFSET_60000000
	default 0x80000000 if UCPAGE_OFFSET_80000000
	default 0xA0000000 if UCPAGE_OFFSET_A0000000
	default 0xC0000000
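# PAGE_OFFSET follows the "uClinux kernel load address" choice above; MMU
# kernels, which cannot take that choice, always use the 0xC0000000 default.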

config PROTECT_KERNEL
	bool "Protect core kernel against userspace"
	depends on !MMU
	default y
	help
	  Selecting this option causes the uClinux kernel to change the
	  protection attributes in the DAMPR register covering the core kernel
	  image, preventing userspace from accessing the underlying memory
	  directly.

choice
	prompt "CPU Caching mode"
	default FRV_DEFL_CACHE_WBACK
	help
	  This option determines the default caching mode for the kernel.

	  Write-Back caching mode involves all reads and writes causing the
	  affected cacheline to be read into the cache first before being
	  operated upon. Memory is not then updated by a write until the cache
	  is filled and a cacheline needs to be displaced from the cache to
	  make room. Only at that point is it written back.

	  Write-Behind caching is similar to Write-Back caching, except that a
	  write won't fetch a cacheline into the cache if there isn't already
	  one there; it will write directly to memory instead.

	  Write-Through caching only fetches cachelines from memory on a
	  read. Writes always get written directly to memory. If the affected
	  cacheline is also in cache, it will be updated too.

	  The final option is to turn off caching entirely.

	  Note that not all CPUs support Write-Behind caching. If the CPU on
	  which the kernel is running doesn't, it'll fall back to Write-Back
	  caching.

config FRV_DEFL_CACHE_WBACK
	bool "Write-Back"

config FRV_DEFL_CACHE_WBEHIND
	bool "Write-Behind"

config FRV_DEFL_CACHE_WTHRU
	bool "Write-Through"

config FRV_DEFL_CACHE_DISABLED
	bool "Disabled"

endchoice
menu "CPU core support"
config CPU_FR401
bool "Include FR401 core support"
depends on !MMU
default y
help
This enables support for the FR401, FR401A and FR403 CPUs
config CPU_FR405
bool "Include FR405 core support"
depends on !MMU
default y
help
This enables support for the FR405 CPU

config CPU_FR451
	bool "Include FR451 core support"
	default y
	help
	  This enables support for the FR451 CPU

config CPU_FR451_COMPILE
	bool "Specifically compile for FR451 core"
	depends on CPU_FR451 && !CPU_FR401 && !CPU_FR405 && !CPU_FR551
	default y
	help
	  This causes appropriate flags to be passed to the compiler to
	  optimise for the FR451 CPU

config CPU_FR551
	bool "Include FR551 core support"
	depends on !MMU
	default y
	help
	  This enables support for the FR555 CPU

config CPU_FR551_COMPILE
	bool "Specifically compile for FR551 core"
	depends on CPU_FR551 && !CPU_FR401 && !CPU_FR405 && !CPU_FR451
	default y
	help
	  This causes appropriate flags to be passed to the compiler to
	  optimise for the FR555 CPU

config FRV_L1_CACHE_SHIFT
	int
	default "5" if CPU_FR401 || CPU_FR405 || CPU_FR451
	default "6" if CPU_FR551

endmenu

choice
	prompt "System support"
	default MB93091_VDK

config MB93091_VDK
	bool "MB93091 CPU board with or without motherboard"

config MB93093_PDK
	bool "MB93093 PDK unit"

endchoice

if MB93091_VDK

choice
	prompt "Motherboard support"
	default MB93090_MB00

config MB93090_MB00
	bool "Use the MB93090-MB00 motherboard"
	help
	  Select this option if the MB93091 CPU board is going to be used with
	  a MB93090-MB00 VDK motherboard

config MB93091_NO_MB
	bool "Use standalone"
	help
	  Select this option if the MB93091 CPU board is going to be used
	  without a motherboard

endchoice

endif

config FUJITSU_MB93493
	bool "MB93493 Multimedia chip"
	help
	  Select this option if the MB93493 multimedia chip is going to be
	  used.

choice
	prompt "GP-Relative data support"
	default GPREL_DATA_8
	help
	  This option controls what data, if any, should be placed in the GP
	  relative data sections. Using this means that the compiler can
	  generate accesses to the data using GR16-relative addressing which
	  is faster than absolute instructions and saves space (2 instructions
	  per access).

	  However, the GPREL region is limited in size because the immediate
	  value used in the load and store instructions is limited to a 12-bit
	  signed number.

	  So if the linker starts complaining that accesses to GPREL data are
	  out of range, try changing this option from the default.

	  Note that modules will always be compiled with this feature disabled
	  as the module data will not be in range of the GP base address.

config GPREL_DATA_8
	bool "Put data objects of up to 8 bytes into GP-REL"

config GPREL_DATA_4
	bool "Put data objects of up to 4 bytes into GP-REL"

config GPREL_DATA_NONE
	bool "Don't use GP-REL"

endchoice

config FRV_ONCPU_SERIAL
	bool "Use on-CPU serial ports"
	select SERIAL_8250
	default y

config PCI
	bool "Use PCI"
	depends on MB93090_MB00
	default y
	select GENERIC_PCI_IOMAP
	help
	  Some FR-V systems (such as the MB93090-MB00 VDK) have PCI
	  onboard. If you have one of these boards and you wish to use the PCI
	  facilities, say Y here.

config RESERVE_DMA_COHERENT
	bool "Reserve DMA coherent memory"
	depends on PCI && !MMU
	default y
	help
	  Many PCI drivers require access to uncached memory for DMA device
	  communications (such as is done with some Ethernet buffer rings). If
	  a fully featured MMU is available, this can be done through page
	  table settings, but if not, a region has to be set aside and marked
	  with a special DAMPR register.

	  Setting this option causes uClinux to set aside a portion of the
	  available memory for use in this manner. The memory will then be
	  unavailable for normal kernel use.
source "drivers/pci/Kconfig"
source "drivers/pcmcia/Kconfig"
menu "Power management options"
config ARCH_SUSPEND_POSSIBLE
def_bool y
source kernel/power/Kconfig
endmenu
endmenu
menu "Executable formats"
source "fs/Kconfig.binfmt"
endmenu
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
source "arch/frv/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"