#
# Makefile for the linux kernel.
#

obj-y = fork.o exec_domain.o panic.o \
	cpu.o exit.o softirq.o resource.o \
	sysctl.o sysctl_binary.o capability.o ptrace.o user.o \
	signal.o sys.o kmod.o workqueue.o pid.o task_work.o \
	extable.o params.o \
	kthread.o sys_ni.o nsproxy.o \
	notifier.o ksysfs.o cred.o reboot.o \
	async.o range.o smpboot.o

kernel: conditionally support non-root users, groups and capabilities
There are a lot of embedded systems that run most or all of their
functionality in init, running as root:root. For these systems,
supporting multiple users is not necessary.
This patch adds a new symbol, CONFIG_MULTIUSER, that makes support for
non-root users, non-root groups, and capabilities optional. It is enabled
under the CONFIG_EXPERT menu.
When this symbol is not defined, UID and GID are always zero and
processes always have all capabilities.
The following syscalls are compiled out: setuid, setregid, setgid,
setreuid, setresuid, getresuid, setresgid, getresgid, setgroups,
getgroups, setfsuid, setfsgid, capget, capset.
Also, groups.c is compiled out completely.
In kernel/capability.c, the capable() function was moved in order to
avoid adding two ifdef blocks.
This change saves about 25 KB on a defconfig build. The most minimal
kernels have total text sizes in the high hundreds of kB rather than
low MB. (The 25k goes down a bit with allnoconfig, but not that much.)
The kernel was booted in Qemu. All the common functionality works.
Adding users/groups is not possible, failing with -ENOSYS.
Bloat-o-meter output:
add/remove: 7/87 grow/shrink: 19/397 up/down: 1675/-26325 (-24650)
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Iulia Manda <iulia.manda21@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
obj-$(CONFIG_MULTIUSER) += groups.o
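The behaviour described in the commit message above is easy to probe from
user space. What follows is a minimal, hedged sketch (not kernel code): on a
kernel built without CONFIG_MULTIUSER, the compiled-out syscalls are expected
to fail with ENOSYS, which is what the message means by "failing with -ENOSYS".
The uid value used here is arbitrary.

	/* User-space check: setuid() should report ENOSYS when
	 * CONFIG_MULTIUSER is disabled in the running kernel. */
	#include <errno.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		if (setuid(1000) == -1 && errno == ENOSYS)
			printf("setuid() is compiled out (ENOSYS)\n");
		else
			printf("setuid() is available (CONFIG_MULTIUSER=y)\n");
		return 0;
	}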
ifdef CONFIG_FUNCTION_TRACER
# Do not trace debug files and internal ftrace files
CFLAGS_REMOVE_cgroup-debug.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_irq_work.o = $(CC_FLAGS_FTRACE)
endif
# cond_syscall is currently not LTO compatible
CFLAGS_sys_ni.o = $(DISABLE_LTO)
obj-y += sched/
obj-y += locking/
obj-y += power/
obj-y += printk/
obj-y += irq/
obj-y += rcu/
obj-y += livepatch/

obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
obj-$(CONFIG_FREEZER) += freezer.o
obj-$(CONFIG_PROFILING) += profile.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += time/
obj-$(CONFIG_FUTEX) += futex.o
ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
endif
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_SMP) += smp.o
ifneq ($(CONFIG_SMP),y)
obj-y += up.o
endif
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_MODULE_SIG) += module_signing.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
Task Control Groups: basic task cgroup framework
Generic Process Control Groups
--------------------------
There have recently been various proposals floating around for
resource management/accounting and other task grouping subsystems in
the kernel, including ResGroups, User BeanCounters, NSProxy
cgroups, and others. These all need the basic abstraction of being
able to group together multiple processes in an aggregate, in order to
track/limit the resources permitted to those processes, or control
other behaviour of the processes, and all implement this grouping in
different ways.
This patchset provides a framework for tracking and grouping processes
into arbitrary "cgroups" and assigning arbitrary state to those
groupings, in order to control the behaviour of the cgroup as an
aggregate.
The intention is that the various resource management and
virtualization/cgroup efforts can also become task cgroup
clients, with the result that:
- the userspace APIs are (somewhat) normalised
- it's easier to test e.g. the ResGroups CPU controller in
conjunction with the BeanCounters memory controller, or use either of
them as the resource-control portion of a virtual server system.
- the additional kernel footprint of any of the competing resource
management systems is substantially reduced, since they no longer need
to provide process grouping/containment, which improves their
chances of getting into the kernel
This patch:
Add the main task cgroups framework - the cgroup filesystem, and the
basic structures for tracking membership and associating subsystem state
objects to tasks.
Signed-off-by: Paul Menage <menage@google.com>
Cc: Serge E. Hallyn <serue@us.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: Kirill Korotaev <dev@openvz.org>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
obj-$(CONFIG_CGROUPS) += cgroup.o
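The framework introduced above is driven entirely through the cgroup
filesystem mentioned in the commit message. As a rough, hedged illustration
(the mount point, the chosen "cpuset" hierarchy, and the group name are
assumptions, and this targets the original v1-style interface in which each
group directory exposes a "tasks" file), a process can place itself into a
group like this:

	/* Sketch of the cgroup-v1 style interface: create a group directory
	 * and move the current task into it by writing its pid to "tasks".
	 * Assumes a v1 hierarchy is already mounted at /sys/fs/cgroup/cpuset. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <unistd.h>

	int main(void)
	{
		char pid[32];
		int fd, n;

		mkdir("/sys/fs/cgroup/cpuset/demo", 0755);

		fd = open("/sys/fs/cgroup/cpuset/demo/tasks", O_WRONLY);
		if (fd < 0)
			return 1;
		n = snprintf(pid, sizeof(pid), "%d", getpid());
		write(fd, pid, n);
		close(fd);
		return 0;
	}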
obj-$(CONFIG_CGROUP_FREEZER) += cgroup_freezer.o
obj-$(CONFIG_CGROUP_PIDS) += cgroup_pids.o
obj-$(CONFIG_CPUSETS) += cpuset.o
obj-$(CONFIG_UTS_NS) += utsname.o
obj-$(CONFIG_USER_NS) += user_namespace.o
obj-$(CONFIG_PID_NS) += pid_namespace.o
obj-$(CONFIG_IKCONFIG) += configs.o
obj-$(CONFIG_SMP) += stop_machine.o
obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o
obj-$(CONFIG_AUDIT) += audit.o auditfilter.o
obj-$(CONFIG_AUDITSYSCALL) += auditsc.o
obj-$(CONFIG_AUDIT_WATCH) += audit_watch.o audit_fsnotify.o
[PATCH] audit: watching subtrees
New kind of audit rule predicates: "object is visible in given subtree".
The part that can be sanely implemented, that is. Limitations:
* if you have a hardlink from outside of the tree, you'd better watch
it too (or just watch the object itself, obviously)
* if you mount something under a watched tree, tell audit
that the new chunk should be added to watched subtrees
* if you umount something in a watched tree and it's still mounted
elsewhere, you will get matches on events happening there. A new command
tells audit to recalculate the trees, trimming such sources of false
positives.
Note that it's _not_ about the path - if something is mounted in several
places (multiple mounts, bindings, different namespaces, etc.), the match
does _not_ depend on which one we are using for access.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
obj-$(CONFIG_AUDIT_TREE) += audit_tree.o
obj-$(CONFIG_GCOV_KERNEL) += gcov/
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_KGDB) += debug/
obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o
obj-$(CONFIG_LOCKUP_DETECTOR) += watchdog.o
obj-$(CONFIG_SECCOMP) += seccomp.o
obj-$(CONFIG_RELAY) += relay.o
obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
tracing: Kernel Tracepoints
Implementation of kernel tracepoints. Inspired by the Linux Kernel
Markers. Allows complete typing verification by declaring both tracing
statement inline functions and probe registration/unregistration static
inline functions within the same macro "DEFINE_TRACE". No format string
is required. See the tracepoint Documentation and Samples patches for
usage examples.
Taken from the documentation patch :
"A tracepoint placed in code provides a hook to call a function (probe)
that you can provide at runtime. A tracepoint can be "on" (a probe is
connected to it) or "off" (no probe is attached). When a tracepoint is
"off" it has no effect, except for adding a tiny time penalty (checking
a condition for a branch) and space penalty (adding a few bytes for the
function call at the end of the instrumented function and adding a data
structure in a separate section). When a tracepoint is "on", the
function you provide is called each time the tracepoint is executed, in
the execution context of the caller. When the function provided ends its
execution, it returns to the caller (continuing from the tracepoint
site).
You can put tracepoints at important locations in the code. They are
lightweight hooks that can pass an arbitrary number of parameters, whose
prototypes are described in a tracepoint declaration placed in a header
file."
Addition and removal of tracepoints is synchronized by RCU using the
scheduler (and preempt_disable) as guarantees to find a quiescent state
(this is really RCU "classic"). The update side uses rcu_barrier_sched()
with call_rcu_sched() and the read/execute side uses
"preempt_disable()/preempt_enable()".
We make sure the previous array containing probes, which has been
scheduled for deletion by the rcu callback, is indeed freed before we
proceed to the next update. It therefore limits the rate of modification
of a single tracepoint to one update per RCU period. The objective here
is to permit fast batch add/removal of probes on _different_
tracepoints.
Changelog :
- Use #name ":" #proto as the string to identify the tracepoint in the
tracepoint table. This makes sure no type mismatch happens due to
connection of a probe with the wrong type to a tracepoint declared with
the same name in a different header.
- Add tracepoint_entry_free_old.
- Change __TO_TRACE to get rid of the 'i' iterator.
Masami Hiramatsu <mhiramat@redhat.com> :
Tested on x86-64.
Performance impact of a tracepoint: same as markers, except that it
adds about 70 bytes of instructions in an unlikely branch of each
instrumented function (the for loop, the stack setup and the function
call). It currently adds a memory read, a test and a conditional branch
at the instrumentation site (in the hot path). Immediate values will
eventually change this into a load immediate, test and branch, which
removes the memory read and makes the i-cache impact smaller
(changing the memory read for a load immediate removes 3-4 bytes per
site on x86_32 (depending on mov prefixes), or 7-8 bytes on x86_64; it
also saves the d-cache hit).
About the performance impact of tracepoints (which is comparable to
markers), even without immediate values optimizations, tests done by
Hideo Aoki on ia64 show no regression. His test case was using hackbench
on a kernel where scheduler instrumentation (about 5 events in scheduler
code) was added.
Quoting Hideo Aoki about Markers :
I evaluated overhead of kernel marker using linux-2.6-sched-fixes git
tree, which includes several markers for LTTng, using an ia64 server.
While the immediate trace mark feature isn't implemented on ia64, there
is no major performance regression. So, I think that we don't have any
issues to propose merging marker point patches into Linus's tree from
the viewpoint of performance impact.
I prepared two kernels to evaluate. The first one was compiled without
CONFIG_MARKERS. The second one was compiled with CONFIG_MARKERS enabled.
I downloaded the original hackbench from the following URL:
http://devresources.linux-foundation.org/craiger/hackbench/src/hackbench.c
I ran hackbench 5 times in each condition and calculated the average and
difference between the kernels.
The parameter of hackbench: every 50 from 50 to 800
The number of CPUs of the server: 2, 4, and 8
Below are the results. As you can see, no major performance regression
was found in any case. Even as the number of processes increases, the
differences between the marker-enabled and marker-disabled kernels do
not increase. Moreover, as the number of CPUs increases, the differences
do not increase either.
Curiously, the marker-enabled kernel is better than the marker-disabled
kernel in more than half of the cases, although I guess this comes from
differences in memory access patterns.
* 2 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 4.811 | 4.872 | +0.061 | +1.27 |
100 | 9.854 | 10.309 | +0.454 | +4.61 |
150 | 15.602 | 15.040 | -0.562 | -3.6 |
200 | 20.489 | 20.380 | -0.109 | -0.53 |
250 | 25.798 | 25.652 | -0.146 | -0.56 |
300 | 31.260 | 30.797 | -0.463 | -1.48 |
350 | 36.121 | 35.770 | -0.351 | -0.97 |
400 | 42.288 | 42.102 | -0.186 | -0.44 |
450 | 47.778 | 47.253 | -0.526 | -1.1 |
500 | 51.953 | 52.278 | +0.325 | +0.63 |
550 | 58.401 | 57.700 | -0.701 | -1.2 |
600 | 63.334 | 63.222 | -0.112 | -0.18 |
650 | 68.816 | 68.511 | -0.306 | -0.44 |
700 | 74.667 | 74.088 | -0.579 | -0.78 |
750 | 78.612 | 79.582 | +0.970 | +1.23 |
800 | 85.431 | 85.263 | -0.168 | -0.2 |
--------------------------------------------------------------
* 4 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.586 | 2.584 | -0.003 | -0.1 |
100 | 5.254 | 5.283 | +0.030 | +0.56 |
150 | 8.012 | 8.074 | +0.061 | +0.76 |
200 | 11.172 | 11.000 | -0.172 | -1.54 |
250 | 13.917 | 14.036 | +0.119 | +0.86 |
300 | 16.905 | 16.543 | -0.362 | -2.14 |
350 | 19.901 | 20.036 | +0.135 | +0.68 |
400 | 22.908 | 23.094 | +0.186 | +0.81 |
450 | 26.273 | 26.101 | -0.172 | -0.66 |
500 | 29.554 | 29.092 | -0.461 | -1.56 |
550 | 32.377 | 32.274 | -0.103 | -0.32 |
600 | 35.855 | 35.322 | -0.533 | -1.49 |
650 | 39.192 | 38.388 | -0.804 | -2.05 |
700 | 41.744 | 41.719 | -0.025 | -0.06 |
750 | 45.016 | 44.496 | -0.520 | -1.16 |
800 | 48.212 | 47.603 | -0.609 | -1.26 |
--------------------------------------------------------------
* 8 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.094 | 2.072 | -0.022 | -1.07 |
100 | 4.162 | 4.273 | +0.111 | +2.66 |
150 | 6.485 | 6.540 | +0.055 | +0.84 |
200 | 8.556 | 8.478 | -0.078 | -0.91 |
250 | 10.458 | 10.258 | -0.200 | -1.91 |
300 | 12.425 | 12.750 | +0.325 | +2.62 |
350 | 14.807 | 14.839 | +0.032 | +0.22 |
400 | 16.801 | 16.959 | +0.158 | +0.94 |
450 | 19.478 | 19.009 | -0.470 | -2.41 |
500 | 21.296 | 21.504 | +0.208 | +0.98 |
550 | 23.842 | 23.979 | +0.137 | +0.57 |
600 | 26.309 | 26.111 | -0.198 | -0.75 |
650 | 28.705 | 28.446 | -0.259 | -0.9 |
700 | 31.233 | 31.394 | +0.161 | +0.52 |
750 | 34.064 | 33.720 | -0.344 | -1.01 |
800 | 36.320 | 36.114 | -0.206 | -0.57 |
--------------------------------------------------------------
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: 'Peter Zijlstra' <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
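As a rough, hedged illustration of the usage pattern the commit message
describes: the event name, its arguments, and the probe below are made up,
and the sketch follows the original DEFINE_TRACE()/TPPROTO()/TPARGS()
spelling from this era; later kernels renamed the declaration side to
DECLARE_TRACE()/TRACE_EVENT() with TP_PROTO()/TP_ARGS().

	/* In a header: declares trace_subsys_eventname() together with
	 * register_trace_subsys_eventname()/unregister_trace_subsys_eventname(). */
	#include <linux/sched.h>
	#include <linux/tracepoint.h>

	DEFINE_TRACE(subsys_eventname,
		TPPROTO(struct task_struct *p, int value),
		TPARGS(p, value));

	/* At the instrumentation site: close to a no-op while no probe is attached. */
	trace_subsys_eventname(current, value);

	/* In a probe module: the probe runs in the caller's execution context
	 * every time the tracepoint above is hit. */
	static void probe_subsys_eventname(struct task_struct *p, int value)
	{
		/* record, count, or trace the event here */
	}

	static int __init probe_init(void)
	{
		return register_trace_subsys_eventname(probe_subsys_eventname);
	}

This makes the changelog's cost argument concrete: until a probe is
registered, the trace_subsys_eventname() call is only a condition check and
a skipped function call at the instrumentation site.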
obj-$(CONFIG_LATENCYTOP) += latencytop.o
obj-$(CONFIG_BINFMT_ELF) += elfcore.o
obj-$(CONFIG_COMPAT_BINFMT_ELF) += elfcore.o
obj-$(CONFIG_BINFMT_ELF_FDPIC) += elfcore.o
obj-$(CONFIG_FUNCTION_TRACER) += trace/
obj-$(CONFIG_TRACING) += trace/
trace: Stop compiling in trace_clock unconditionally
Commit 56449f437 "tracing: make the trace clocks available generally",
in April 2009, made trace_clock available unconditionally, since
CONFIG_X86_DS used it too.
Commit faa4602e47 "x86, perf, bts, mm: Delete the never used BTS-ptrace code",
in March 2010, removed CONFIG_X86_DS, and now only CONFIG_RING_BUFFER (split
out from CONFIG_TRACING for general use) has a dependency on trace_clock. So,
only compile in trace_clock with CONFIG_RING_BUFFER or CONFIG_TRACING
enabled.
Link: http://lkml.kernel.org/r/20120903024513.GA19583@leaf
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
obj-$(CONFIG_TRACE_CLOCK) += trace/
obj-$(CONFIG_RING_BUFFER) += trace/
obj-$(CONFIG_TRACEPOINTS) += trace/
obj-$(CONFIG_IRQ_WORK) += irq_work.o
obj-$(CONFIG_CPU_PM) += cpu_pm.o
obj-$(CONFIG_BPF) += bpf/
obj-$(CONFIG_PERF_EVENTS) += events/
obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o
obj-$(CONFIG_PADATA) += padata.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
jump label: Reduce the cycle count by changing the link order
In the course of testing jump labels for use with the CFS
bandwidth controller, Paul Turner discovered that using jump
labels reduced the branch count and the instruction count, but
did not reduce the cycle count or wall time.
I noticed that having jump_label.o included in the kernel
but not used in any way still caused this increase in cycle
count and wall time. Thus, I moved jump_label.o in
kernel/Makefile, changing the link order and presumably
moving it out of hot icache areas. This brought down the cycle
count/time as expected.
In addition to Paul's testing, I've tested the patch using a
single 'static_branch()' in the getppid() path, and basically
running tight loops of calls to getppid(). Here are my results
for the branch disabled case:
With jump labels turned on (CONFIG_JUMP_LABEL), branch disabled:
Performance counter stats for 'bash -c /tmp/getppid;true' (50 runs):
3,969,510,217 instructions # 0.864 IPC ( +-0.000% )
4,592,334,954 cycles ( +- 0.046% )
751,634,470 branches ( +- 0.000% )
1.722635797 seconds time elapsed ( +- 0.046% )
Jump labels turned off (CONFIG_JUMP_LABEL not set), branch
disabled:
Performance counter stats for 'bash -c /tmp/getppid;true' (50 runs):
4,009,611,846 instructions # 0.867 IPC ( +-0.000% )
4,622,210,580 cycles ( +- 0.012% )
771,662,904 branches ( +- 0.000% )
1.734341454 seconds time elapsed ( +- 0.022% )
Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: rth@redhat.com
Cc: a.p.zijlstra@chello.nl
Cc: rostedt@goodmis.org
Link: http://lkml.kernel.org/r/20110805204040.GG2522@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Tested-by: Paul Turner <pjt@google.com>
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
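The static_branch() call mentioned in the commit message is the older
spelling of today's static-key interface. A hedged sketch of the pattern
being benchmarked follows; the key name and the hot-path function are
illustrative, and the names used are the current ones
(DEFINE_STATIC_KEY_FALSE()/static_branch_unlikely()/static_branch_enable()),
which replaced the struct jump_label_key and static_branch() spelling of
this era.

	#include <linux/jump_label.h>

	/* The key defaults to false, so the guarded branch is compiled as a
	 * no-op and is only patched in when the key is enabled at runtime. */
	static DEFINE_STATIC_KEY_FALSE(demo_key);

	void demo_hot_path(void)
	{
		if (static_branch_unlikely(&demo_key)) {
			/* rarely enabled slow path, e.g. extra accounting */
		}
		/* normal fast path continues here */
	}

	/* In setup or control code, flip the branch without a memory load
	 * in the hot path: */
	static_branch_enable(&demo_key);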
obj-$(CONFIG_CONTEXT_TRACKING) += context_tracking.o
obj-$(CONFIG_TORTURE_TEST) += torture.o
obj-$(CONFIG_HAS_IOMEM) += memremap.o

$(obj)/configs.o: $(obj)/config_data.h
# config_data.h contains the same information as ikconfig.h but gzipped.
# Info from config_data can be extracted from /proc/config*
targets += config_data.gz
$(obj)/config_data.gz: $(KCONFIG_CONFIG) FORCE
	$(call if_changed,gzip)
filechk_ikconfiggz = (echo "static const char kernel_config_data[] __used = MAGIC_START"; cat $< | scripts/basic/bin2c; echo "MAGIC_END;")
targets += config_data.h
$(obj)/config_data.h: $(obj)/config_data.gz FORCE
	$(call filechk,ikconfiggz)
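The two comments above describe a round trip: the build gzips the .config
into config_data.gz, wraps it into config_data.h for configs.o, and a kernel
built with CONFIG_IKCONFIG_PROC exposes it again as /proc/config.gz. A hedged
sketch of reading it back from user space with zlib (the buffer size is
arbitrary; build with -lz):

	/* Dump the running kernel's configuration from /proc/config.gz. */
	#include <stdio.h>
	#include <zlib.h>

	int main(void)
	{
		char line[1024];
		gzFile f = gzopen("/proc/config.gz", "r");

		if (!f) {
			perror("gzopen /proc/config.gz");
			return 1;
		}
		while (gzgets(f, line, sizeof(line)))
			fputs(line, stdout);
		gzclose(f);
		return 0;
	}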